approximation - Calculating the square root of 2 - Mathematics Stack Exchange
===============
Calculating the square root of 2
Asked 6 years, 11 months ago
Modified 6 years, 3 months ago
Viewed 52k times
Since $\sqrt 2$ is irrational, is there a way to compute the first 20 digits of it?
What I have done so far
I started finding the decimal digits of $\sqrt 2$ iteratively, keeping the square from overshooting $2$. It looks like this:

$$\sqrt 2\approx 1.4,\quad 1.4^2=1.96;\qquad \sqrt 2\approx 1.41,\quad 1.41^2=1.9881;\qquad \sqrt 2\approx 1.414,\quad 1.414^2=1.999396\ldots$$

First I check that the current guess passes, i.e. that $1.x^2$ is not greater than $2$.
If it passes, I append a new decimal digit, say $y$, and square $1.xy$ in the same way.
If that $y$ fails, I increment $y$ by $1$ and square again.
The process keeps repeating. Unfortunately, it takes a lot of time.
approximation
radicals
asked Sep 14, 2018 at 12:10 by MMJM; edited Sep 15, 2018 at 11:36 by amWhy
You can go on trying to compute the square of $1.414x$, where $x$ is a digit between $0$ and $9$. The greatest number between $1.4140$ and $1.4149$ whose square is less than $2$ is your next candidate to repeat the process. – Gibbs, Sep 14, 2018 at 12:14
See en.wikipedia.org/wiki/Methods_of_computing_square_roots – lhf, Sep 14, 2018 at 12:17
@Gibbs I have tried that, but it takes too much time to compute. – MMJM, Sep 14, 2018 at 12:21
Possible duplicate of "Calculate more digits of square root of 2?" and "Is there any simple method to calculate $\sqrt x$ without using logarithm?" – user202729, Sep 14, 2018 at 15:22
@Gibbs Please don't post answers as comments. – David Richerby, Sep 14, 2018 at 18:44
16 Answers
Calculating the square root of a number is one of the first problems tackled with numerical methods, known I think to the ancient Babylonians. The observation is that if $x,y>0$ and $y\ne\sqrt x$, then $y$ and $x/y$ lie on opposite sides of $\sqrt x$, so we could try averaging them: take $y_0=1$ and $y_{n+1}=\frac12\bigl(y_n+\frac{x}{y_n}\bigr)$. This is actually the Newton-Raphson method 5xum mentioned. The number of correct decimal places approximately doubles at each stage, i.e. you probably only have to go as far as $y_5$ or so.
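A minimal sketch of this iteration in Python, using the standard library's Fraction so the iterates stay exact rationals (the function name and the choice of five steps are mine, not part of the answer):

```python
from fractions import Fraction

def sqrt_newton(x, steps=5):
    """Babylonian / Newton-Raphson iteration y_{n+1} = (y_n + x/y_n)/2 with y_0 = 1."""
    y = Fraction(1)
    for _ in range(steps):
        y = (y + Fraction(x) / y) / 2
    return y

y5 = sqrt_newton(2)
# First 20 decimal places: scale to an integer and truncate.
print(y5.numerator * 10**20 // y5.denominator)   # 141421356237309504880
```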
answered Sep 14, 2018 at 12:17 by J.G.; edited May 16, 2019 at 13:44
Definitely one of the fastest methods: $y_0=1.0$; $y_1=1.5$; $y_2=1.41\,666666666666666666666666666\ldots$; $y_3=1.41421\,568627450980392156862745\ldots$; $y_4=1.41421356237\,468991062629557889\ldots$; $y_5=1.41421356237309504880168\,962350\ldots$; $\cdots$ – Oleg567, Sep 14, 2018 at 12:26
@Oleg567 We could go even faster with post-Newton Householder methods, but the individual steps become more computationally complex. BTW the calculator you used to check that probably also used Newton-Raphson for the division. – J.G., Sep 14, 2018 at 12:30
The beauty of this method is that the initial estimate can be way off and the method will converge quickly anyway. Of course, making an educated guess for the initial estimate helps to reduce the number of iterations. – Vasili, Sep 14, 2018 at 12:32
Love the intuitive explanation for it! – Sort of Damocles, Sep 14, 2018 at 12:52
@Paul Since it's Newton-Raphson, it'll be about $2^n$ of them, but a more detailed answer than that can't be obtained without careful analysis of the specifics of the problem. However, if you look at which digits have "gotten stuck", you can be confident from the shrinking error terms that they won't change. See the black digits in Oleg567's comment for an example. – J.G., Sep 14, 2018 at 22:04
Here's the way I learnt to obtain decimal digit after decimal digit when I began middle school:
     2.00 00 00 00      | 1.414 2…
    −1                  |
     1 00               | 24×4 = 96 < 100,   25×5 = 125 > 100
    −  96               |
       4 00             | 281×1 < 400,       282×2 > 400
    − 2 81              |
     1 19 00            | 2824×4 < 11900,    2825×5 > 11900
    −1 12 96            |
        6 04 00         | 28282×2 < 60400,   28283×3 > 60400
&c.
Let me explain the procedure on the first two steps. It relies on a clever use of the identity $(x+y)^2=x^2+2xy+y^2$. Suppose more generally we want to find the square root of a number $a$.
We first find the greatest natural number $n$ such that $n^2\le a$.
If $a$ is not a perfect square, i.e. if $n^2<a$, let $d$ be the first decimal digit of the square root. This is the greatest digit such that $\bigl(n+\frac{d}{10}\bigr)^2\le a$. We'll transform this inequality into an easier-to-use test:
$$\Bigl(n+\frac{d}{10}\Bigr)^2\le a\iff \frac{2nd}{10}+\frac{d^2}{100}\le a-n^2\iff(10\times 2n+d)\times d\le(a-n^2)\times 100.$$
In practice, this means we calculate the difference $a-n^2$ and append two $0$s. Then we double $n$, append a digit $d$ (this yields $10\times 2n+d$) and multiply what we obtain by that digit. Last, we test whether the result is at most $100(a-n^2)$, and retain the largest possible digit.
answered Sep 14, 2018 at 12:55 by Bernard; edited Sep 15, 2018 at 9:43
Looks interesting, can you talk us through it a bit? I don't really get it, e.g. where does 100 come from? – goblin GONE, Sep 15, 2018 at 1:57
@goblin There are some references for this method at math.stackexchange.com/a/538055/117057 and math.stackexchange.com/q/376365/117057 – shoover, Sep 15, 2018 at 5:21
@goblin: I've added an explanation for the first two steps. The following steps run along the same lines; only the first step is different. Hope this makes it clear. – Bernard, Sep 15, 2018 at 9:24
@Bernard, thanks. – goblin GONE, Sep 15, 2018 at 9:39
@goblin You start off with 1 because 1 is the largest integer whose square is less than 2. Then extend 1 by the next two digits, 00, to get 100. Now double the 1 just obtained and find the largest digit such that 2x times x is less than 100. – Paul Evans, Sep 15, 2018 at 9:41
On a similar note to the answer by R. Romero: in the special case of taking the square root of an integer $N$, it is fairly straightforward to calculate the continued fraction representation of $\sqrt N$.
In the particular case $N=2$, we have:
$$\sqrt 2=1+\cfrac1{2+\cfrac1{2+\cfrac1{2+\ddots}}}.$$
(This follows from the fact that if $x=\sqrt2-1$, then $x=\sqrt2-1=\frac1{\sqrt2+1}=\frac1{2+x}$.)
Now, from this we can calculate subsequent rational approximations to $\sqrt2$:
$$\begin{array}{ccccccccc}
 & & 1 & 2 & 2 & 2 & 2 & 2 & \cdots\\
0 & 1 & 1 & 3 & 7 & 17 & 41 & 99 & \cdots\\
1 & 0 & 1 & 2 & 5 & 12 & 29 & 70 & \cdots
\end{array}$$
So, for example, $\frac{99}{70}\approx1.4142857$ whereas $\sqrt2\approx1.4142136$.
(It also happens that this procedure generates solutions to Pell's equation $a^2-2b^2=\pm1$; for example, $99^2-2\cdot70^2=1$. The connection is: if $a^2-2b^2=\pm1$, then $a-b\sqrt2=\frac{\pm1}{a+b\sqrt2}$; so if $a$ and $b$ are large positive integers satisfying Pell's equation, then $a-b\sqrt2\approx\frac{\pm1}{2a}$, which implies $\frac ab-\sqrt2\approx\frac{\pm1}{2ab}\approx\frac{\pm1}{a^2\sqrt2}$.)
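A short sketch (names mine) that generates these convergents from the recurrence $p_n=a_np_{n-1}+p_{n-2}$, $q_n=a_nq_{n-1}+q_{n-2}$ with partial quotients $1,2,2,2,\ldots$, checking the Pell relation along the way:

```python
def sqrt2_convergents(count):
    """Convergents p_n/q_n of the continued fraction [1; 2, 2, 2, ...] for sqrt(2)."""
    p_prev, p = 0, 1   # seed values p_{-2}, p_{-1}
    q_prev, q = 1, 0   # seed values q_{-2}, q_{-1}
    out = []
    for n in range(count):
        a = 1 if n == 0 else 2            # partial quotients 1, 2, 2, 2, ...
        p_prev, p = p, a * p + p_prev     # p_n = a_n p_{n-1} + p_{n-2}
        q_prev, q = q, a * q + q_prev
        out.append((p, q))
    return out

for p, q in sqrt2_convergents(6):
    print(p, q, p * p - 2 * q * q)   # Pell: p^2 - 2q^2 alternates between -1 and +1
# 1 1 -1
# 3 2 1
# 7 5 -1
# 17 12 1
# 41 29 -1
# 99 70 1
```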
answered Sep 14, 2018 at 22:43 by Daniel Schepler
Is there somewhere I can read more about this, especially the connection between continued fractions and Pell's equation? – goblin GONE, Sep 15, 2018 at 2:00
Once you see the first few rational approximations it's easy to guess and prove the recursion for $p/q$, namely $p_n=p_{n-1}+2q_{n-1}$, $q_n=p_{n-1}+q_{n-1}$. See en.wikipedia.org/wiki/…, en.wikipedia.org/wiki/Pell%27s_equation – Ethan Bolker, Sep 15, 2018 at 13:13
There's also a simple two-step recursion, which is identical for the numerator and denominator sequences. Using Ethan's notation, $p_{n+1}=2p_n+p_{n-1}$ and $q_{n+1}=2q_n+q_{n-1}$. Also $p_n=q_{n+1}-q_n$ and $q_n=(p_n+p_{n-1})/2$. – PM 2Ring, May 26, 2021 at 10:19
Can someone please explain that table where the "subsequent rational approximations" appear? How does one get that from a known continued fraction? – Noone AtAll, Aug 15, 2021 at 12:46
The number $\sqrt2$ is the positive solution of the equation $x^2-2=0$, so any method for numerically approximating the roots of an equation (such as Newton's method) will be able to approximate $\sqrt2$.
answered Sep 14, 2018 at 12:13 by 5xum
I don't see how this qualifies as an answer. It is just a general statement. – M. Wind, Sep 16, 2018 at 5:29
Okay, I searched through the answers, but none seems to mention this one: long square-root calculation.
From the name it is obvious that it resembles long division, like this:
$$\sqrt{2.\;00\;00\;00\;00\ldots}$$
Notice how the digits are grouped into pairs. Now estimate the first digit, namely $1$:

       1.
      √2.00 00 00 00…
     1 |1
       |1 00

We calculate $1\times1=1$, write it down, and compute the "remainder", just as in long division. Notice that we bring down two digits at a time instead of one.
Next, double the number on the top, and write it to the left of the $1\,00$:

       1. ∗
      √2.00 00 00 00…
     1 |1
    2∗ |1 00

Now we estimate the next digit, $\ast$. It is written both on the top and to the left. Of course, we know that it is $4$, so:

        1. 4 ∗
       √2.00 00 00 00…
      1 |1
     24 |1 00
        |  96
    28∗ |4 00

We double the number on the top again to get $28\ast$, and repeat the process:

        1. 4 1
       √2.00 00 00 00…
      1 |1
     24 |1 00
        |  96
    281 |4 00
        |2 81

I found a picture of a worked example, but not of $\sqrt2$.
This is extremely inefficient for computers, but great for manual calculation. After all, we don't do multiplication through fast Fourier transforms!
Also, this method was developed in ancient China.
answered Sep 17, 2018 at 7:44 by Trebor
Suppose you want to find the square root of $p$ and suppose your initial guess is $x/y$:
Let $M=\begin{pmatrix}1 & p\\ 1 & 1\end{pmatrix}$ and $q=\begin{pmatrix}x\\ y\end{pmatrix}$. Then $MMM\cdots q$ gives a numerator and denominator whose ratio converges to the square root of $p$. This gives an approximation to the square root of $2$ as fast as the other methods, but with no floating-point arithmetic until the final division.
It performs well on calculation tools optimized for matrix arithmetic. This also gives you solutions of Pell's equation for $p=2$, as mentioned by Daniel Schepler.
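A minimal integer-only sketch of this idea (names mine; the matrix is applied one step at a time rather than by forming an explicit matrix power):

```python
from fractions import Fraction

def sqrt_by_matrix(p, x=1, y=1, steps=50):
    """Repeatedly apply M = [[1, p], [1, 1]] to (x, y); x/y converges to sqrt(p)."""
    for _ in range(steps):
        x, y = x + p * y, x + y     # integer arithmetic only
    return x, y

num, den = sqrt_by_matrix(2)
approx = Fraction(num, den)          # the single division happens only here
print(float(approx))                 # 1.4142135623730951
print(num * num - 2 * den * den)     # alternates between -1 and +1 (Pell's equation)
```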
answered Sep 15, 2018 at 1:52 by TurlocTheRed; edited Sep 15, 2018 at 3:29 by Tyberius
In this answer, there is a method using continued fraction approximations for $\sqrt2$ and the generating function for the central binomial coefficients to get some very quickly convergent series for $\sqrt2$. For example,
$$\sqrt2=\frac75\sum_{k=0}^\infty\binom{2k}{k}\frac1{200^k}\tag1$$
and
$$\sqrt2=\frac{239}{169}\sum_{k=0}^\infty\binom{2k}{k}\frac1{228488^k}\tag2$$
For example, summing to $k=4$ in $(2)$ gives
$$\sqrt2\approx1.414213562373095048801688$$
which is accurate to $23$ places.
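A small sketch (my naming) that sums series $(2)$ with exact rational arithmetic; the five terms $k=0,\dots,4$ reproduce the digits quoted above:

```python
from fractions import Fraction
from math import comb

def sqrt2_series(terms, front=Fraction(239, 169), base=228488):
    """Partial sum of sqrt(2) = front * sum_k C(2k, k) / base^k  (series (2) above)."""
    return front * sum(Fraction(comb(2 * k, k), base ** k) for k in range(terms))

s = sqrt2_series(5)
# Truncate to 24 decimal places by scaling to an integer.
print(s.numerator * 10**24 // s.denominator)   # 1414213562373095048801688
```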
answered Sep 15, 2018 at 13:50 by robjohn♦; edited Sep 15, 2018 at 23:39
(+1) This method could be used to calculate millions of digits of $\sqrt2$ (especially if you notice that the series has rational terms and apply the en.wikipedia.org/wiki/Binary_splitting technique). – Oleg567, Sep 24, 2018 at 4:44
Good description of the elementary school method. You are not likely to succeed unless you do at least one square root every day. After age 75, you have to do more than one every day. That is why there are calculators. – richard1941, Sep 24, 2018 at 23:53
Binary search for it.
Since $1<2<4$, we must have $\sqrt1<\sqrt2<\sqrt4$, so $\sqrt2\in(1,2)$. Now repeatedly: find the midpoint, $m$, of the current interval $(a,b)$, square $m$ and compare with $2$; if $2=m^2$, declare that $m=\sqrt2$; if $2<m^2$, make the new interval $(a,m)$; otherwise make the new interval $(m,b)$. This process halves the size of the interval at each step. Since $\log_2(10^{-20})=-66.438\ldots$, after $67$ halvings the error in taking any value from the interval is $<10^{-20}$ (but, if the interval straddles a digit change, you may have to perform additional steps to find out on which side of the change $\sqrt2$ lies).
This process is shown in the table below. Each decimal number is computed to $21$ digits and has trailing zeroes stripped. If there are still $21$ digits, a space is inserted between the $20$th and $21$st.
step | interval | m | m² compared with 2
1 | (1, 2) | 1.5 | 2 < 2.25
2 | (1, 1.5) | 1.25 | 1.5625 < 2
3 | (1.25, 1.5) | 1.375 | 1.890625 < 2
4 | (1.375, 1.5) | 1.4375 | 2 < 2.06640625
5 | (1.375, 1.4375) | 1.40625 | 1.9775390625 < 2
6 | (1.40625, 1.4375) | 1.421875 | 2 < 2.021728515625
7 | (1.40625, 1.421875) | 1.4140625 | 1.99957275390625 < 2
8 | (1.4140625, 1.421875) | 1.41796875 | 2 < 2.0106353759765625
9 | (1.4140625, 1.41796875) | 1.416015625 | 2 < 2.005100250244140625
10 | (1.4140625, 1.416015625) | 1.4150390625 | 2 < 2.00233554840087890625
⋮ | ⋮ | ⋮ | ⋮
67 | (1.41421356237309504879, 1.41421356237309504880 4) | 1.41421356237309504879 8 | 1.99999999999999999998 9 < 2
68 | (1.41421356237309504879 8, 1.41421356237309504880 4) | 1.41421356237309504880 1 | 1.99999999999999999999 8 < 2
69 | (1.41421356237309504880 1, 1.41421356237309504880 4) | 1.41421356237309504880 3 | 2 < 2.00000000000000000000 3

(Steps 11–66 of the table are omitted here for length; the process runs through 69 halvings in total.)
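A compact sketch of the same bisection (names mine); after 69 halvings it reaches the final interval shown in the table above:

```python
from fractions import Fraction

def sqrt2_bisect(steps=69):
    """Halve the interval (1, 2) repeatedly, always keeping sqrt(2) inside it."""
    a, b = Fraction(1), Fraction(2)
    for _ in range(steps):
        m = (a + b) / 2
        if m * m > 2:
            b = m          # sqrt(2) lies in (a, m)
        else:
            a = m          # sqrt(2) lies in (m, b)
    return a, b

a, b = sqrt2_bisect()
print(a.numerator * 10**20 // a.denominator)   # 141421356237309504880
print(float(b - a))                            # interval width 2**-69, about 1.7e-21
```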
answered Sep 16, 2018 at 5:59 by Eric Towers
Using the fact that $\sin\frac\pi4=\frac{\sqrt2}2$, we have to find $2\sin\frac\pi4$.
We can approximate $\sin x$ using the Taylor series to three terms:
$$\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}+O(x^6),$$
so we have:
$$\sin\frac\pi4\approx\frac\pi4-\frac{(\pi/4)^3}{3!}+\frac{(\pi/4)^5}{5!}.$$
If we approximate $\pi$ as $\frac{22}7$, then $\frac\pi4=\frac{11}{14}$, and we have:
$$\sin\frac\pi4\approx\frac{11}{14}-\frac{(11/14)^3}{3!}+\frac{(11/14)^5}{5!},$$
which, when multiplied by $2$ to get $\sqrt2$, gives $1.4147$, while the actual value is $1.4142$.
If we expand the Taylor series to more terms, or improve the approximation of $\pi$ (such as $\frac{355}{113}$), then we can get to $20$ correct digits.
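A rough sketch of this computation (names mine); as the comment below notes, the achievable accuracy is ultimately limited by how well $\pi$ itself is approximated:

```python
from fractions import Fraction
from math import factorial, sqrt

def two_sin(x, terms):
    """2*sin(x) from the Taylor series x - x^3/3! + x^5/5! - ... (first `terms` terms)."""
    return 2 * sum((-1)**k * x**(2*k + 1) / Fraction(factorial(2*k + 1))
                   for k in range(terms))

print(float(two_sin(Fraction(22, 7) / 4, terms=3)))     # 1.4147..., as in the answer
print(float(two_sin(Fraction(355, 113) / 4, terms=6)))  # 1.4142136..., error ~1e-7 from pi
print(sqrt(2))                                          # 1.4142135623730951 for comparison
```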
answered Sep 14, 2018 at 13:49 by Toby Mak
Don't you need $\pi$ to nearly 20 digits for this to work? – JTP - Apologise to Monica, Sep 15, 2018 at 2:10
There's a general method that converges about as quickly as Newton-Raphson but is somewhat more general. It's based on continued fractions:
Suppose you want to find the square root of $N$. Let $a+b=N$, where $b$ has an easy-to-calculate square root.
Let $$y_{n+1}=\sqrt b+\frac{a}{\sqrt b+y_n}.$$
Then $y_{n+1}$ converges to $\sqrt N$.
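A quick sketch for $N=2$ with $b=1$ and $a=1$, so $\sqrt b=1$ exactly and the iteration becomes $y_{n+1}=1+\frac1{1+y_n}$ (names mine):

```python
from fractions import Fraction

def sqrt_cf_iteration(a, sqrt_b, y0, steps):
    """Iterate y_{n+1} = sqrt(b) + a / (sqrt(b) + y_n); converges to sqrt(a + b)."""
    y = Fraction(y0)
    for _ in range(steps):
        y = sqrt_b + Fraction(a) / (sqrt_b + y)
    return y

# N = 2, split as a = 1, b = 1 (so sqrt(b) = 1 exactly):
y = sqrt_cf_iteration(a=1, sqrt_b=1, y0=1, steps=20)
print(float(y))              # 1.4142135623730951
print(float(y) - 2**0.5)     # error at the level of double precision after ~20 steps
```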
answered Sep 14, 2018 at 20:35 by TurlocTheRed; edited Sep 14, 2018 at 22:33 by asky
Start with an initial guess $x$ for the square root of $2$. Then add a correction term $y$. Write down $(x+y)^2-2=0$. Solve this equation for $y$ by expanding it up to third order in the difference $(2-x^2)$. This is a straightforward calculation. Combining all contributions, the result is elegant:
$$x+y=\frac{x^4+12x^2+4}{4x^3+8x}$$
For a rational initial guess $x$, the result $x+y$ is also rational, but much closer to the desired value.
For example, if we take $x=3/2$, then $x+y=577/408$, which differs from the square root of $2$ by a factor of $1.0000015$. If we start with $x=7/5$, the result is $19601/13860$, which differs from the square root of $2$ by a factor of $1.0000000013$.
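A tiny check of this formula with exact rational arithmetic (my naming), reproducing both examples:

```python
from fractions import Fraction

def improve(x):
    """One step of the third-order correction x + y = (x^4 + 12x^2 + 4) / (4x^3 + 8x)."""
    x = Fraction(x)
    return (x**4 + 12 * x**2 + 4) / (4 * x**3 + 8 * x)

print(improve(Fraction(3, 2)))      # 577/408
print(improve(Fraction(7, 5)))      # 19601/13860
print(improve(Fraction(577, 408)))  # recycling gives an even better rational approximation
```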
answered Sep 15, 2018 at 5:36 by M. Wind; edited Sep 16, 2018 at 16:42
Please show what happens with $140/99$. I find the error to be $1.2\times10^{-18}$ on my WP-34s iPhone emulator in double precision mode (good to at least 30 digits). If you recycle $577/408$, you get an error of $9.0\times10^{-25}$. That meets the goal of 20 digits. Recycling $19601/13860$ gives an error of absolute zero (on the calculator). – richard1941, Dec 21, 2018 at 17:50
Thanks! The initial values $99/70$ and $140/99$ both result in $768398401/543339720$. – M. Wind, Dec 23, 2018 at 19:39
You can compute it manually using the algorithm:
1. Set $p=0$, $r=0$, $i=0$.
2. Split the number into sections of two digits.
3. Take the $i$-th section $n_i$ and let $k=100r+n_i$.
4. Find the greatest digit $x$ such that $y=x(20p+x)\le k$.
5. Assign $r=k-y$, $p=10p+x$, $i=i+1$; if the accuracy of the result is not yet sufficient, return to step 3.
Example:
02.00 00 00 00 00
$n_0=2$, $k=2$, therefore for $x=1$: $y=1$ and $p=1$
$n_1=0$, $k=100$, so for $x=4$: $y=24\times4=96<100$ and $p=14$
$n_2=0$, $k=400$, so for $x=1$: $y=281\times1=281<400$ and $p=141$
$n_3=0$, $k=11900$, so for $x=4$: $y=2824\times4=11296<11900$ and $p=1414$
$n_4=0$, $k=60400$, so for $x=2$: $y=28282\times2=56564<60400$ and $p=14142$
$n_5=0$, $k=383600$, so for $x=1$: $y=282841\times1=282841<383600$ and $p=141421$
...
After all of this, just remember to place the decimal point where it should be, i.e. after the first digit (it depends on how many sections there were to the left of the decimal point in our number), so you'll have:
$$\sqrt2\approx1.41421$$
To obtain an accuracy of 20 digits after the decimal point, you should append 20 sections of 00 in step 2, i.e.:
02.00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
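The same procedure as a short Python sketch (names mine); it produces the digits of $\sqrt2$ one at a time using only integer arithmetic:

```python
def sqrt_digits(number=2, digits=20):
    """Digit-by-digit square root: p accumulates digits, r is the running remainder.
    Assumes `number` fits in a single two-digit section (true for 2)."""
    p, r = 0, number
    result = []
    for _ in range(digits + 1):           # one digit for the integer part, then decimals
        x = 9
        while x * (20 * p + x) > r:       # greatest digit x with x(20p + x) <= r
            x -= 1
        r = (r - x * (20 * p + x)) * 100  # subtract, then bring down the next "00" section
        p = 10 * p + x
        result.append(x)
    return result

print(sqrt_digits())  # [1, 4, 1, 4, 2, 1, 3, 5, 6, 2, 3, 7, 3, 0, 9, 5, 0, 4, 8, 8, 0]
```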
answered Sep 14, 2018 at 13:05 by Jaroslaw Matlak
Newton-Raphson is a good idea because of the convergence rate. However, I am more of a fan of using Taylor expansions here, since they are super easy to derive on the go and give fairly OK estimates in quite a reasonable time. So, the way to find $\sqrt x$ is to first find the closest integer approximating $\sqrt x$, call it $a$, and then apply Taylor around $a^2$. Then Taylor says
$$\sqrt x\approx a+(x-a^2)\cdot\frac1{2a}-\frac{(x-a^2)^2}2\cdot\frac1{4a^3}+\cdots.$$
The nice thing here is that you also get bounds on the error you make. So, denote $f(x)=\sqrt x$; then the error of an $n$-th order approximation (i.e., going as far as $\frac{(x-a^2)^n}{n!}\,f^{(n)}(a^2)$ in the approximation above) is given by
$$\frac{(x-a^2)^{n+1}}{(n+1)!}\,f^{(n+1)}(\xi)$$
for a certain $\xi$ between $a^2$ and $x$. This can be estimated quite easily, since $f^{(n+1)}$ is monotone around $x$. Thus look at the boundaries of the domain of $\xi$ and find the 'best' maximal value which you can calculate without a calculator.
Example for $x=2$: clearly $1$ is the closest integer to $\sqrt2$, and thus we will take $a=1$. Then, let's take a second-order approximation:
$$\sqrt2\approx1+(2-1)\cdot\frac12-\frac{(2-1)^2}2\cdot\frac14=1+0.5-0.125=1.375$$
and the absolute error is given by
$$E=\left|\frac{(2-1)^3}{3!}\cdot\frac{3}{8\,\xi^2\sqrt\xi}\right|=\frac1{16}\cdot\frac1{\bigl|\xi^2\sqrt\xi\bigr|}$$
for a certain $\xi$ between $1$ and $2$. Since this is a decreasing function on $(1,2)$, the maximum is attained at $1$, and hence the error is bounded by
$$E\le\frac1{16},$$
which seems to be a good estimate, since $E=0.039\ldots$ and $1/16=0.0625$.
Edit: As some of you noted, this method 'looks' more difficult than Newton-Raphson, and the convergence is slower. The latter is obviously true, and I would answer it with: how quickly do you need the result, and do you want to calculate it in your head or do you have a computer? Do you need a quick guess approximately equal to $\sqrt2$, or do you need a precise estimate? If you don't have a computer but do have pen and paper, the best method is Newton-Raphson.
I would argue that my method is better if you don't have pen and paper or a computer and you are asked to estimate $\sqrt{10}$ on the go (especially for $\sqrt x$ with $x$ big, the Taylor approximation does well, since the square-root function becomes more linear as $x$ grows).
I agree that my method looks way more difficult, but it isn't once you get familiar with it. It is also very quick to carry out in your head, and with a little practice it becomes much easier. It works particularly nicely for $\sqrt x$ where $x$ differs by one from a perfect square, because then the $(x-a^2)^n$ term is always one.
Let's look at an example. Suppose you need to calculate $\sqrt{122}$; then the first-order approximation of my method gives
$$\sqrt{122}\approx11+\frac1{2\cdot11}.$$
It took me less than one second to find this approximation, and the second-order approximation is almost as quick here: you just need to add $-\frac1{8\cdot11^3}$. Please note that the error of the first-order approximation here is approximately $10^{-4}$.
If you apply Newton-Raphson here, you get the same approximation after one step if you choose $x_0=11$. The only thing is that I always forget the exact form of Newton-Raphson, so when I want to apply it I have to think about it, where I could have immediately applied Taylor; but I would say that is just my personal preference.
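A small sketch (my naming) of this expansion around the nearest perfect square, checked against math.sqrt for $x=2$ and $x=122$:

```python
from math import isqrt, sqrt

def taylor_sqrt(x, order=2):
    """Taylor expansion of sqrt(t) around t = a^2, where a is the integer nearest sqrt(x)."""
    a = isqrt(x)
    if (a + 1) ** 2 - x < x - a * a:   # pick the closer of the two neighbouring squares
        a += 1
    h = x - a * a
    approx = a + h / (2 * a)           # first-order term, from f'(a^2) = 1/(2a)
    if order >= 2:
        approx -= h * h / (8 * a**3)   # second-order term, from f''(a^2) = -1/(4a^3)
    return approx

print(taylor_sqrt(2), sqrt(2))               # 1.375 vs 1.41421356... (a = 1 is a poor centre)
print(taylor_sqrt(122, order=1), sqrt(122))  # 11.045454... vs 11.045361..., error ~1e-4
```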
answered Sep 14, 2018 at 12:39 by Stan Tendijck; edited Sep 15, 2018 at 11:09
I'd say this is more difficult, less precise, and not as generally applicable as Newton-Raphson. – leftaroundabout, Sep 14, 2018 at 14:48
I would say it is less difficult, since when you apply Newton-Raphson you always have to recall the exact algorithm, and this method can also be applied quite quickly to find $\sqrt{2.243}$. – Stan Tendijck, Sep 14, 2018 at 15:38
I agree with @leftaroundabout, but perhaps if you edit into your post an illustration of how this method could be used by hand to compute $\sqrt2$ to high accuracy, it would appear simpler. Right now, it looks much more difficult. – Wildcard, Sep 14, 2018 at 18:17
Taylor's converges much more slowly than Newton-Raphson. Note the second-order term starting with initial guess 1 is 1.4166..., already correct to two digits behind the decimal. You might get an additional correct digit at each step of the calculation-heavy Taylor series. The accuracy doubles per step for Newton-Raphson, without the difficulty of calculating the Taylor coefficients. There might be ways to patch it up. There's an alternative series to the Taylor series for arctan that converges much faster than Taylor. – TurlocTheRed, Sep 15, 2018 at 2:09
I came up with an interesting, but terribly inefficient method.
Consider the sequence $\{x_n\}$: $\;1,\ \tfrac12,\tfrac12,\ \tfrac13,\tfrac13,\tfrac13,\ \tfrac14,\tfrac14,\tfrac14,\tfrac14,\ \ldots$
Suppose you want $k$ digits of the square root of $2$. Then add up the first $100^k$ terms and divide the sum by $10^k$.
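A quick sketch (names mine) confirming the idea for small $k$: the $j$-th block contributes $j$ copies of $1/j$, so the sum of the first $100^k$ terms is roughly $\sqrt{2\cdot100^k}=\sqrt2\cdot10^k$:

```python
def approx_sqrt2(k):
    """Sum the first 100**k terms of 1, 1/2, 1/2, 1/3, 1/3, 1/3, ... then divide by 10**k."""
    remaining = 100 ** k
    total = 0.0
    j = 1
    while remaining > 0:
        take = min(j, remaining)   # block j contributes j copies of 1/j (last block may be cut short)
        total += take / j
        remaining -= take
        j += 1
    return total / 10 ** k

print(approx_sqrt2(2))   # 1.4092... (roughly two correct digits)
print(approx_sqrt2(3))   # 1.4137... (roughly three correct digits)
```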
answered Sep 28, 2018 at 1:09 by TurlocTheRed
I know an easy way to calculate the binary digits of $\sqrt2$. Take the ordered pair $(1, 2)$: $1^2$ is less than $2$ and $2^2$ is more than $2$. Calculate the square of the average, $1.5^2$, in base $2$. The square of the average is just the average of the squares minus $\frac14$. The result expressed in binary is $10.01$, so the first binary digit after the point is $0$. Take the next ordered pair to be $(1, 1.5)$ and calculate the square of its average, which is the average of its squares minus $\frac1{16}$. The result expressed in binary is $1.1001$, so the next binary digit is $1$.
answered Oct 24, 2018 at 3:56 by Timothy
Eric Towers' bisection method above is similar to your own approach, but more efficient. Another method that is not as good as binary search, but is better than your own method, is to increment the last digit in bigger steps. I would try incrementing by 3: the worst case is that you reach the correct digit in 5 steps instead of 9.
My favorite method for mental approximation is to find the next lowest square, determine the error, and add to its square root the error divided by double that root. For sqrt(200), the lowest square is 196. The error is 4, so my mental estimate is 14 + 4/28 = 14.142857...
I apologize for going off-topic, but note that square roots can be used to calculate logarithms by a process similar to bisection. I suspect that is how it was done in the late 16th century, as they did not yet have calculus. In our times, there are extremely accurate formulas for the logarithm that still require square roots. This exercise should make you appreciate the power of a square root button on a calculator, even if you have no "scientific" functions.
answered Dec 21, 2018 at 17:26 by richard1941
Project MUSE - The Concept of Will in Early Latin Philosophy
===============
The Concept of Will in Early Latin Philosophy
Neal Ward Gilbert
Journal of the History of Philosophy
Johns Hopkins University Press
Volume 1, Number 1, October 1963
pp. 17-35
DOI: 10.1353/hph.2008.1582
In lieu of an abstract, here is a brief excerpt of the content:
An historical discussion of the concept of will is best begun with an analysis of the use of voluntas in Latin philosophy, from its earliest occurrences in Lucretius and Cicero on down to Augustine and medieval times. This development can be traced without much controversy because the line of transmission and development is more or less unbroken. But the correlating of Latin psychological terms with their Greek originals presents problems. Greek philosophy was indeed the source from which Latin philosophical psychology developed: Lucretius reflects Epicurean doctrine, while Cicero and Seneca were strongly influenced by Stoic philosophy when they discussed moral action. However, many modern scholars are convinced that the Greeks had no word that corresponds to our "will" at all, and so we seem to be left with no choice but to regard the will as an original creation of Latin philosophy. But this would be to overlook the fact that ancient Latin writers sometimes specifically equated voluntas with certain Greek terms, and such evidence ought not to go unexamined. Voluntas was well established in Latin usage before Roman writers began to concern themselves with philosophical problems, but it did not have a technical sense. It meant simply "good will," or "favor," or concretely, a "will," or testament.¹ Clustered around it were other derivatives of "volo," such as "benevolentia" and "malevolentia," "well-wishing" and "ill-wishing." When Roman writers began to deal with philosophical problems, there were a number of different contexts in which voluntas could play a leading role. Chief among them, naturally, was the "freedom of the will" which, then as now, intrigued the ordinary man or the moralist. However, discussions revolving around the determinist-libertarian issue often fail to clarify the concept of the will itself. Major philosophers (Chrysippus, Augustine, or Hume, for example) offer fine distinctions within moral action when considering the determinist-libertarian dispute, but lesser figures are apt to neglect the detailed analysis of moral action in favor of sweeping metaphysical pronouncements. When a Latin writer wished to say that a person did something of his own accord, the locution most natural to him was "sua sponte," an ablative absolute derived from the same root as our word "spontaneous." The nominative case of "sponte" ("spons") was almost never used, so that dictionaries are perhaps somewhat misleading in suggesting that the word "spons" means "free will": it would be more correct to say that the phrase "sua sponte" means "of his own free will." Latin writers also indicate that someone did something willingly by saying that he did it "non invitus" ("not unwillingly"). Finally, they could apply the adjective "voluntarius" to such an action, or say that it was done "ex voluntate." When the latter term begins to be used as a designation for a separate faculty of the soul distinct from reason or intelligence, it begins to resemble the modern concept of will. Augustine is usually given credit for introducing the concept of will into philosophy, but even the earliest Latin treatment of the free-will problem framed the issue in terms of voluntas.
Nevertheless, I think that the received view is substantially correct, although it would be more proper to say that what Augustine introduced into philosophy was not the concept of will in general but the concept of the evil will. What Augustine called the "good" will ("bona voluntas") was closely related to the "reasonable desire" of the Stoic sage, rendered into Latin by Cicero as voluntas, without any adjective. The addition of the evil will altered the outlines of moral analysis considerably, involving among other things the major shift from a Socratic or Platonic psychology and ethics (usually labeled "intellectualistic") to a Christian psychology and ethics (usually labeled "voluntaristic"). Such a characterization is likely to be drawn too sharply, and to overlook anticipations of voluntarism in Greek thought or reminiscences of Greek intellectualism in Christian thought. Hence it may be useful...
¹ "C'est seulement lors de la création du vocabulaire philosophique que voluntas a pris le sens abstrait et technique de 'volonté'." ("It is only with the creation of a philosophical vocabulary that voluntas took on the abstract and technical sense of 'will'.") Ernout, A., and A. Meillet, Dictionnaire étymologique de la langue latine, 4th ed. (Paris, 1959), sub "volo."
Additional Information
| ISSN | 1538-4586 |
| Print ISSN | 0022-5053 |
| Pages | pp. 17-35 |
| Launched on MUSE | 2008-01-01 |
| Open Access | No |
reference request - Covering a complete graph with as few complete bipartite subgraphs as possible - Mathematics Stack Exchange
===============
Covering a complete graph with as few complete bipartite subgraphs as possible
Asked 8 years, 4 months ago
Modified 2 months ago
Viewed 1k times
The biclique covering number $bc(G)$ of a graph $G$ is the smallest number of bicliques (complete bipartite subgraphs) of $G$ such that every edge of $G$ belongs to at least one of these bicliques.

Quoted from the paper "On covering graphs by complete bipartite subgraphs" by S. Jukna and A. S. Kulikov, 2009:

We have $bc(K_n) \le \lceil \log_2 n \rceil$: just encode the vertices of $K_n$ (the complete graph on $n$ vertices) by binary vectors of length $m = \lceil \log_2 n \rceil$ and define, for each $i = 1, \dots, m$, a biclique containing all edges the codes of whose endpoints differ in the $i$-th coordinate.

For example, to cover $K_6$, we write
$$1 = 001,\quad 2 = 010,\quad 3 = 011,\quad 4 = 100,\quad 5 = 101,\quad 6 = 110.$$
The three bicliques are thus $K_{\{1,2,3\},\{4,5,6\}}$, $K_{\{1,4,5\},\{2,3,6\}}$, and $K_{\{1,3,5\},\{2,4,6\}}$, corresponding to the leftmost, the middle, and the rightmost bits, respectively.

Note that the result above is only the upper bound $bc(K_n) \le \lceil \log_2 n \rceil$. My question is: what is the exact value of $bc(K_n)$, and how is it proved? In other words, what is the smallest number of bicliques needed to cover $K_n$?
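The encoding construction quoted above is easy to turn into code. The following is a minimal sketch of my own (not from the paper or the question; the function names are hypothetical) that uses the binary codes of the vertices to build the $\lceil \log_2 n \rceil$ bit-bicliques for $K_n$ and checks that every edge is covered:

```python
from itertools import combinations
from math import ceil, log2

def biclique_cover(n):
    """Cover K_n (vertices 1..n) by at most ceil(log2 n) bicliques, one per bit."""
    m = max(1, ceil(log2(n)))                       # code length
    bicliques = []
    for i in range(m):                              # split vertices by their i-th bit
        zeros = [v for v in range(1, n + 1) if not (v >> i) & 1]
        ones = [v for v in range(1, n + 1) if (v >> i) & 1]
        if zeros and ones:                          # skip degenerate splits
            bicliques.append((zeros, ones))
    return bicliques

def covers_all_edges(n, bicliques):
    """Check that every edge of K_n lies in some biclique A x B."""
    covered = {frozenset((u, v)) for a, b in bicliques for u in a for v in b}
    return all(frozenset(e) in covered for e in combinations(range(1, n + 1), 2))

if __name__ == "__main__":
    for n in range(2, 20):
        cover = biclique_cover(n)
        assert covers_all_edges(n, cover)
        print(n, len(cover), ceil(log2(n)))
```

For $n = 6$ this produces exactly the three bicliques listed in the quote (the low three bits of the labels $1, \dots, 6$ are the codes $001, \dots, 110$).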
graph-theory
reference-request
bipartite-graphs
edited Jun 5 at 4:56 by The Amplitwist
asked Mar 31, 2017 at 14:04 by hengxin
1 Answer
This is tight: we have $bc(K_n) = \lceil \log_2 n \rceil$. This proof of the lower bound can be found in Fishburn and Hammer's Bipartite dimensions and bipartite degrees of graphs.

We prove that $bc(K_n) \ge \log_2 n$ by strong induction on $n$. To cover $K_n$, suppose that $K_{p,q}$ is the first biclique we put down. We may assume $p + q = n$, since if we didn't include a vertex at all, we could add it to either side of the biclique and only increase the set of edges covered.

The vertices on either side of the first biclique induce a $K_p$ and a $K_q$ respectively that we haven't touched at all yet. Since $p + q = n$, either $p \ge \frac{n}{2}$ or $q \ge \frac{n}{2}$, so we have a clique of size at least $\frac{n}{2}$ left to cover. By the inductive hypothesis, this needs at least $\log_2 \frac{n}{2} = (\log_2 n) - 1$ more bicliques.

Together with the first biclique we put down, that brings us to a total of $\log_2 n$ bicliques needed to cover $K_n$, which proves the result by induction on $n$.
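To see the induction in action on the example from the question (this worked instance is mine, not part of the answer above): for $n = 6$, whichever biclique $K_{p,q}$ with $p + q = 6$ is placed first, one of its two sides contains at least $3$ vertices, and the clique on that side is still completely uncovered, so
$$bc(K_6) \;\ge\; 1 + bc(K_3) \;\ge\; 1 + 1 + bc(K_2) \;=\; 3 \;=\; \lceil \log_2 6 \rceil,$$
which matches the three-biclique construction in the question, so $bc(K_6) = 3$.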
edited Mar 31, 2017 at 18:04
answered Mar 31, 2017 at 14:41 by Misha Lavrov
Thanks for the correction. Deleting my answer. –bof Commented Mar 31, 2017 at 22:00
|
3
|
arXiv:1511.00266v1 [math.GN] 1 Nov 2015
GENERALIZED INVERSE LIMITS INDEXED BY TOTALLY ORDERED SETS
SCOTT VARAGONA
Abstract. Although inverse limits with factor spaces indexed by the positive integers are most commonly studied, Ingram and Mahavier have defined inverse limits with set-valued functions broadly enough for any directed index set to be used. In this paper, we investigate generalized inverse limits whose factor spaces are indexed by totally ordered sets. Using information about the projections of such inverse limits onto finitely many coordinates, we generalize various well-known theorems on connectedness in inverse limits. Moreover, numerous theorems and examples are given addressing the special case of an inverse limit with a single idempotent surjective u.s.c. bonding function.
1. Introduction
In recent years, the vast majority of work on generalized inverse limits has focused only on the case where the factor spaces are indexed by the positive integers. However, in , W. T. Ingram and William S. Mahavier define generalized inverse limits much more broadly, so that any directed index set could be used—for example, an uncountable totally ordered set such as the limit ordinal ω1. In the case of traditional inverse limits with continuous bonding functions, such "long" inverse limits have proven to be fruitful for continuum theory; for example, Michel Smith and later David Bellamy used long traditional inverse limits to construct non-metric indecomposable continua with remarkable properties. It is therefore natural for us to investigate long generalized inverse limits as well. After the groundbreaking work of Ingram and Mahavier, some other researchers have worked with generalized inverse limits using alternate index sets. Inverse limits indexed by the set of all integers have been investigated by the author and, to a more significant degree, by Patrick Vernon; theorems about Mahavier Products indexed by totally ordered sets have been proven by Wlodzimierz Charatonik and Robert Roe in . However, in general, relatively little is known about generalized inverse limits indexed by any set aside from the positive integers. The goal of this paper is to expand the theory of generalized inverse limits indexed by totally ordered sets. To that end, we present a variety of theorems and examples that should be a good starting point for future research. After giving basic definitions and background information in Section 2, in Section 3 we discuss
2010 Mathematics Subject Classification. Primary 54F15, 54H20.
Key words and phrases. inverse limits, upper semi-continuous inverse limits, connected, set valued function, totally ordered sets, Continuum Theory. The author is indebted to W. T. Ingram and William Mahavier for their groundbreaking work on this topic (in ), to Michel Smith, who first introduced the author to “long” inverse limits, and to Van Nall, Sina Greenwood, and Steven Clontz for their feedback on the presentation this paper is based upon.
the behavior of the projections of inverse limits indexed by totally ordered sets. In Section 4, we study the properties of idempotent u.s.c. functions; such functions, it seems, will be important to this type of inverse limit. Next, in Section 5, we further generalize some of the connectedness results by Ingram and Mahavier in . We also extend some of the connectedness results by Van Nall in to the case of generalized inverse limits indexed by totally ordered sets, at least in the case of inverse limits with a single idempotent u.s.c. bonding function f. Then, in Section 6, we apply our results to study examples of inverse limits indexed by some limit ordinal γ ≥ ω. Just as inverse limits indexed by the positive integers have been useful for representing complicated metric continua in a straightforward way, inverse limits indexed by "long" initial segments of the ordinals can be useful in the same way for certain non-metric continua. Finally, we conclude by stating some questions for future investigation in Section 7.

2. Definitions and Background Theorems
If X is a non-empty compact Hausdorff space, let 2^X denote the set of non-empty compact subsets of X. C(X) denotes the set of connected members of 2^X. Suppose both X and Y are non-empty compact Hausdorff spaces and f : X → 2^Y is a set-valued function; then we say f is upper semi-continuous (u.s.c.) if, for all x ∈ X, whenever V is open in Y containing f(x), there exists an open U in X containing x so that f(u) ⊆ V for each u ∈ U. The graph of f, denoted in this paper by Graph(f), is the set {(x, y) ∈ X × Y | y ∈ f(x)}; it is well-known that f is u.s.c. iff Graph(f) is a closed subset of X × Y. If f(x) = {y} for some x ∈ X, y ∈ Y, we simply write f(x) = y. Given some u.s.c. function f : X → 2^Y, the preimage via f of a given y ∈ Y is f⁻¹(y) = {x ∈ X | y ∈ f(x)}. (It is not hard to see that f⁻¹(y) is a compact subset of X.) If A ⊆ X, then f(A) = ⋃a∈A f(a). Similarly, if B ⊆ Y, f⁻¹(B) = ⋃b∈B f⁻¹(b). We say f is surjective if, for all y ∈ Y, f⁻¹(y) is non-empty. If f : X → 2^Y is surjective, then the inverse of f, denoted f⁻¹, is the u.s.c. function f⁻¹ : Y → 2^X whose graph is Graph(f⁻¹) = {(y, x) | (x, y) ∈ Graph(f)}. Given non-empty compact Hausdorff spaces X, Y and Z and u.s.c. functions f : X → 2^Y and g : Y → 2^Z, the composition g ◦ f : X → 2^Z is the u.s.c. function given by (g ◦ f)(x) = {z ∈ Z | ∃ y ∈ Y such that y ∈ f(x) and z ∈ g(y)}. In the special case where f : X → 2^X is u.s.c., the composition f ◦ f is denoted f². When f² = f, we say f is idempotent.

If Xα is a compact Hausdorff space for each α in an index set A, then ∏α∈A Xα denotes the product space with the usual product topology. If B ⊆ A, then πB denotes the projection map from ∏α∈A Xα into ∏α∈B Xα. If β ∈ A, we will write π{β} as πβ. Let us use a boldface x to denote an element (xα)α∈A of the product ∏α∈A Xα. (So, if β ∈ A, πβ(x) = xβ.)

A directed set (D, ⪯) is a set D together with a relation ⪯ on D that is reflexive, transitive, and has the property that whenever α, β ∈ D, there exists some η ∈ D such that α ⪯ η and β ⪯ η. A directed set (D, ⪯) will often be denoted simply by D when the relation ⪯ is understood. A directed set D is totally ordered if, ∀α, β ∈ D, either 1) α ⪯ β and β ⋠ α, 2) β ⪯ α and α ⋠ β, or 3) α ⪯ β and β ⪯ α, in which case α = β. Whenever we write a finite subset {β1, β2, ..., βn} of some totally ordered set D, we tacitly assume that βi ⪯ βj in the ordering on D iff i ≤ j in the standard ordering on the natural numbers. Also, if α ⪯ β but α ≠ β, then we write α ≺ β.

The following notation, and indeed, most of the notation used in this paper, is intended to coordinate with the notation used in . Suppose that, for each element α of a directed set D, Xα is a non-empty compact Hausdorff space. Moreover, if α, β ∈ D with α ⪯ β, let fαβ : Xβ → 2^Xα be u.s.c. (where fαα always denotes the identity on Xα). If, for all α ⪯ β ⪯ η in D, fαβ ◦ fβη = fαη, then the collection {fαβ | α ⪯ β ∈ D} is called exact. Assuming Xα is compact Hausdorff ∀α ∈ D, fαβ : Xβ → 2^Xα is u.s.c. for all α ⪯ β ∈ D, and f = {fαβ | α ⪯ β ∈ D} is exact, we say {Xα, fαβ, D} is an inverse limit system (or simply, a system). The inverse limit, lim← f, of this system is given by {x ∈ ∏α∈D Xα | xα ∈ fαβ(xβ) ∀α ⪯ β ∈ D}. The spaces Xα are called the factor spaces of the inverse limit, and the u.s.c. functions fαβ are called the bonding functions. In the special case where X is compact Hausdorff and f : X → 2^X is an idempotent u.s.c. function, if Xα = X for all α ∈ D and fαβ = f for all α ≺ β ∈ D, then {Xα, fαβ, D} = {X, f, D} is a system and lim← f is an inverse limit with a single idempotent bonding function f. (Note that, in this special case, the collection f is automatically exact because f is idempotent.) If {Xα, fαβ, D} is a system so that, for each η ∈ D and for each p ∈ Xη, there exists x ∈ ∏α∈D Xα with xα ∈ fαβ(xβ) for all α ⪯ β ⪯ η and xη = p, then we say the system is consistent.

As we will see, certain sets will be helpful for our study of inverse limits. Suppose {Xα, fαβ, D} is a system. If η ∈ D, we define Gη = {x ∈ ∏α∈D Xα | xα ∈ fαβ(xβ) for all α ⪯ β ⪯ η}. If {β1, β2, ..., βn} is a finite subset of D, then let us say G(β1, β2, ..., βn) = {(xβ1, xβ2, ..., xβn) ∈ ∏1≤i≤n Xβi | xβi ∈ fβiβj(xβj) for 1 ≤ i ≤ j ≤ n}. (In the case where two different inverse limits, e.g., lim← f and lim← g, with the same index set D are being discussed at the same time, we may use subscripts Gf(β1, β2, ..., βn) and Gg(β1, β2, ..., βn) to help distinguish between the two corresponding sets.) We also define G′(β1, β2, ..., βn) = {x ∈ ∏α∈D Xα | xβi ∈ fβiβj(xβj) for 1 ≤ i ≤ j ≤ n}. Note that G(β1, β2, ..., βn) is a subset of ∏1≤i≤n Xβi, whereas G′(β1, β2, ..., βn) is the analogous subset of ∏α∈D Xα.

Finally, let us say K is a Hausdorff continuum if K is a non-empty, compact and connected subset of a Hausdorff space. If K happens to be metrizable, we call K a metric continuum or simply a continuum. If H is a set and K is a Hausdorff continuum that is a subset of H, we call K a subcontinuum of H. If X and Y are compact Hausdorff spaces and the u.s.c. function f : X → 2^Y has the property that f(x) is connected for each x ∈ X, then we say f is Hausdorff continuum-valued and we write f : X → C(X).

A great deal of the initial work on generalized inverse limits indexed by totally ordered sets was done by Ingram and Mahavier in . Although, in general, inverse limits indexed by arbitrary directed sets may fail to be non-empty, Ingram and Mahavier showed that the inverse limit of a consistent system with non-empty compact Hausdorff spaces indexed by a directed set D is non-empty and compact (Theorem 111, ). Charatonik and Roe recently showed that any system indexed by a totally ordered set D is automatically consistent. When combining this result with the original theorems of Ingram and Mahavier, we obtain an important background theorem:

Theorem 2.1. Let {Xα, fαβ, D} be a system with non-empty compact Hausdorff factor spaces, u.s.c. bonding functions, and a totally ordered index set D. Then 1) for each η ∈ D, Gη is non-empty and compact; 2) for each finite subset {β1, β2, ..., βn} of D, G(β1, β2, ..., βn) and G′(β1, β2, ..., βn) are non-empty and compact; 3) lim← f is non-empty and compact.
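For readers who want to experiment, the sets G(β1, ..., βn) are easy to enumerate when the factor spaces are finite. The sketch below is my own illustration, not part of the paper; it represents a set-valued function on a finite set as a dict mapping each point to a set of values, and lists G(β1, ..., βn) for a system whose bonding function between any two chosen coordinates is a single map f (the helper name g_set is hypothetical).

```python
from itertools import product

def g_set(f, X, n):
    """All tuples (x_1, ..., x_n) in X^n with x_i in f(x_j) whenever i < j."""
    return [xs for xs in product(X, repeat=n)
            if all(xs[i] in f[xs[j]] for j in range(n) for i in range(j))]

# Toy factor space X = {0, 1} and the set-valued map f(0) = {0, 1}, f(1) = {1},
# which is idempotent, so {X, f, D} is a system for any totally ordered D.
X = [0, 1]
f = {0: {0, 1}, 1: {1}}

print(g_set(f, X, 2))   # pairs (x1, x2) with x1 in f(x2)
print(g_set(f, X, 3))   # triples with x_i in f(x_j) for all i < j
```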
Since lim← f is always non-empty and compact in the context we have described, it is natural to look for conditions under which such an inverse limit would also be connected. The following result (which combines the work of Ingram, Mahavier, Charatonik and Roe) is given as Corollary 2.3 in (although, in that paper the result is stated in the wider context of Mahavier Products).

Theorem 2.2. Let {Xα, fαβ, D} be a system for which each factor space is a Hausdorff continuum, each bonding function is u.s.c., and D is totally ordered. Suppose that, for each α, β ∈ D with α ⪯ β, either fαβ is Hausdorff continuum-valued or fαβ(Xβ) is connected with fαβ⁻¹(y) a Hausdorff continuum for each y ∈ fαβ(Xβ). Then lim← f is a Hausdorff continuum.

The above theorem generalizes two important theorems on connectedness in inverse limits given in . Later in this paper, we will generalize other well-known results as well. Whenever possible, we will prove theorems in the broad setting of inverse limit systems {Xα, fαβ, D} where each Xα is compact Hausdorff, each fαβ is u.s.c., and D is a totally ordered set. However, we will also give a number of theorems that are particular to the case of an inverse limit with a single idempotent, surjective, u.s.c. bonding function f.

3. Projections of Inverse Limits Onto Finitely Many Coordinates
To prepare for the main results in the later sections, we first need to discuss the behavior of the projections of an inverse limit. Unfortunately, for a general inverse limit system {Xα, fαβ, D}, if H = {β1, β2, ..., βn} is a finite subset of the index set D, then πH(lim← f) is not necessarily equal to G(β1, β2, ..., βn). For example, Ingram and Mahavier give an example (, Example 106) of a system with compact factor spaces, surjective bonding functions, and a (non-totally ordered) directed index set D where lim← f is the empty set, but G(α, β) is non-empty for every α ⪯ β ∈ D. On the other hand, if f : [0, 1] → 2^[0,1] is given by f(x) = 0 for 0 ≤ x ≤ 1, then the inverse limit lim← f of the system {[0, 1], f, ω} (with the single bonding function f) is the singleton {(0, 0, 0, ...)}. Therefore, in this case, π{0,1}(lim← f) = {(0, 0)}, whereas G(0, 1), being homeomorphic to the graph of f, is an arc.

Let us say an inverse limit lim← f is cordial if, for each finite subset {β1, β2, ..., βn} of the index set D, π{β1,β2,...,βn}(lim← f) = G(β1, β2, ..., βn). The goal of this section is to show that an inverse limit (with compact Hausdorff factor spaces indexed by a totally ordered set) is cordial if and only if its bonding functions are all surjective.
Lemma 3.1. Suppose {Xα, fαβ, D} is a system with non-empty compact Hausdorff factor spaces, surjective u.s.c. bonding functions and a totally ordered index set D. Then lim← f is cordial.

Proof. Let a finite subset H = {β1, β2, ..., βn} of D be given. We aim to show that πH(lim← f) = G(β1, β2, ..., βn). Clearly πH(lim← f) ⊆ G(β1, β2, ..., βn), so let p = (pβ1, pβ2, ..., pβn) ∈ G(β1, β2, ..., βn). We need to show that there exists y ∈ lim← f such that yβi = pβi for 1 ≤ i ≤ n.

If M = {η1, η2, ..., ηk} is a finite subset of D with H ⊆ M, let us define p*(M) = {x ∈ ∏α∈D Xα | xβi = pβi for 1 ≤ i ≤ n and xσ ∈ fστ(xτ) for all σ, τ ∈ M with σ ⪯ τ}. We intend to show that p*(M) is non-empty and compact.

Let us construct an element x of p*(M) as follows. First, let xβi = pβi for 1 ≤ i ≤ n. If {η1, η2, ..., ηm} is the set of all members of M with η1, η2, ..., ηm ≺ β1, then let xηm ∈ fηmβ1(xβ1), and then continue inductively: for each integer j with m − 1 ≥ j ≥ 1, once xηj+1 has been defined, let xηj ∈ fηjηj+1(xηj+1). Similarly, for a given integer i with 1 ≤ i ≤ n − 1, suppose ηr, ηr+1, ..., ηs are all the members of M lying strictly between βi and βi+1 in the ordering on D. Then since xβi ∈ fβiβi+1(xβi+1) and f is exact, there must exist some xηs ∈ Xηs so that xηs ∈ fηsβi+1(xβi+1) and xβi ∈ fβiηs(xηs). Using the same argument, we may continue in this way inductively: for each integer j with s − 1 ≥ j ≥ r, once xηj+1 has been defined, we select xηj ∈ Xηj so that xηj ∈ fηjηj+1(xηj+1) and xβi ∈ fβiηj(xηj). Finally, if ηt, ηt+1, ..., ηu are all the members of M that are strictly greater than βn in the ordering on D, then because each bonding function is surjective, we can choose some xηt ∈ fβnηt⁻¹(xβn) and then proceed inductively: for each integer j with t + 1 ≤ j ≤ u, once xηj−1 has been defined, choose xηj ∈ fηj−1ηj⁻¹(xηj−1). For each α ∈ D − M, we let xα ∈ Xα be chosen arbitrarily. Then x satisfies xβi = pβi for 1 ≤ i ≤ n, and (by construction) whenever σ, τ ∈ M with σ ⪯ τ, we have xσ ∈ fστ(xτ). So x ∈ p*(M).

To see that p*(M) is closed, for 1 ≤ i ≤ n, let Oβi = πβi⁻¹(Xβi − {pβi}). Each Oβi is open in ∏α∈D Xα, so G′(η1, η2, ..., ηk) − ⋃1≤i≤n Oβi is closed. However, it is not hard to see that p*(M) = G′(η1, η2, ..., ηk) − ⋃1≤i≤n Oβi, so p*(M) is compact.

Note that, whenever M and N are two finite subsets of D with H ⊆ M and H ⊆ N, then p*(M ∪ N) ⊆ p*(M) ∩ p*(N). This means the collection Λ = {p*(M) | M is a finite subset of D with H ⊆ M} has the finite intersection property, and ⋂Λ is non-empty. Let y ∈ ⋂Λ; we claim that y ∈ lim← f. Let α1, α2 ∈ D with α1 ⪯ α2. Then since y ∈ p*({α1, α2} ∪ H), yα1 ∈ fα1α2(yα2). So indeed, y ∈ lim← f, which means that there is an element y ∈ lim← f with yβi = pβi for 1 ≤ i ≤ n. We conclude that G(β1, β2, ..., βn) = πH(lim← f).
Theorem 3.2. Suppose {Xα, fαβ, D} is a system with non-empty compact Hausdorff factor spaces, u.s.c. bonding functions and a totally ordered index set D. Then lim← f is cordial if and only if, for all α, β ∈ D with α ⪯ β, fαβ is surjective.

Proof. By Lemma 3.1, if each fαβ is surjective, lim← f is cordial. On the other hand, suppose lim← f is cordial and let α, β ∈ D with α ⪯ β. Let y ∈ Xα; we intend to show that fαβ⁻¹(y) is non-empty. Because lim← f is cordial, G(α) = πα(lim← f). But G(α) = Xα. So, there exists x ∈ lim← f such that xα = y, and therefore xβ ∈ fαβ⁻¹(y). This means fαβ is surjective.

It may also be interesting to note that, since G(α) = Xα for each α in a directed index set D, when lim← f is cordial the corresponding system {Xα, fαβ, D} must be consistent. However, as the previous example (where f was the constant map 0) shows, consistent systems do not always produce cordial inverse limits.

4. Idempotent u.s.c. Functions
The main obstacle to constructing concrete examples of inverse limits indexed by some "long" totally ordered set (e.g., an uncountable limit ordinal such as ω1) is finding a large collection f of bonding functions that is exact. One way around this difficulty is to consider generalized inverse limits with a single idempotent u.s.c. bonding function f; in this context, because f² = f, the collection of bonding functions is automatically exact. So, let us suppose X is a non-empty compact Hausdorff space, f : X → 2^X is an idempotent u.s.c. bonding function, and D is a totally ordered set. Then, as we have seen, {X, f, D} is a system and the inverse limit lim← f with the single idempotent bonding function f is defined.

Generalized inverse limits indexed by the positive integers with a single bonding function f = fi i+1 for each i ∈ Z+ (and fij = fi i+1 ◦ fi+1 i+2 ◦ ⋯ ◦ fj−1 j whenever i < j) are commonly studied, and have given rise to many interesting problems and theorems. Therefore, we believe generalized inverse limits indexed by a totally ordered set D with a single idempotent bonding function f should be a natural (and, hopefully, fruitful) next step for the theory. In this section, we discuss some basic properties of idempotent u.s.c. functions and show how to construct some simple examples of such functions. The lemmas in this section will also be needed for some of the theorems and examples seen later in this paper.

Lemma 4.1. Suppose X is a compact Hausdorff space and f : X → 2^X is surjective. Then f is idempotent if and only if f⁻¹ is idempotent.

Proof. Let f be idempotent. (y, x) ∈ Graph(f⁻¹) ⇔ (x, y) ∈ Graph(f) = Graph(f ◦ f) ⇔ ∃ z ∈ X such that z ∈ f(x) and y ∈ f(z) ⇔ ∃ z ∈ X such that x ∈ f⁻¹(z) and z ∈ f⁻¹(y) ⇔ (y, x) ∈ Graph(f⁻¹ ◦ f⁻¹). Thus, f⁻¹ is idempotent. The argument can easily be reversed by changing the roles of f and f⁻¹.

Lemma 4.2. Let X be a compact Hausdorff space and let f : X → 2^X be u.s.c. Then:
1. f² = f if and only if, for each A ⊆ X satisfying f(x) = A for some x ∈ X, f(A) = A.
2. Suppose f² = f. If f(x) = y for some x, y ∈ X, then f(y) = y.
3. If there is some B ⊆ X so that, for all x ∈ X, either f(x) = x or f(x) = B, then f² = f.
4. Suppose that, for some A, B ⊆ X where A ∩ B = ∅, we have f(a) = X for each a ∈ A and f(x) = B whenever x ∉ A. Then f² = f.

Proof. To prove statement 1, first note that, if f² = f and f(x) = A, then f(A) = f(f(x)) = f(x) = A. To prove the other direction of the equivalence, we let x ∈ X, so f(x) = A for some A ⊆ X. However, f(A) = A, so that f²(x) = f(f(x)) = f(A) = A = f(x). Applying the forward implication in statement 1 in the case where A = {y} gives us statement 2. As for statement 3, if f satisfies the given conditions, then whenever x ∈ X, either f(x) = x (in which case, clearly f²(x) = x) or f(x) = B (in which case, for each b ∈ B, either f(b) = b or f(b) = B, so that f(B) = ⋃b∈B f(b) = B, and therefore f²(x) = B). Thus, f² = f. Finally, addressing statement 4, we note that when x ∈ A, f(x) = X (so of course f²(x) = X), and if x ∉ A, then f(x) = B, so that f²(x) = f(B). However, if b ∈ B, since b ∉ A, f(b) = B. Thus, f(B) = B, so that f²(x) = B, and we conclude that f² = f.
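On a finite space these conditions are easy to check mechanically. The following sketch is my own illustration (not from the paper; the helper names are hypothetical): it represents a set-valued function as a dict of sets, composes it with itself using the definition of ◦ from Section 2, and tests idempotence on an instance of Lemma 4.2, part 3, with B = {0, 1} on X = {0, 1, 2}.

```python
def compose(g, f):
    """(g o f)(x) = union of g(y) over y in f(x), for dict-of-sets maps."""
    return {x: set().union(*(g[y] for y in ys)) for x, ys in f.items()}

def is_idempotent(f):
    return compose(f, f) == f

# X = {0, 1, 2}; f(0) = {0}, f(1) = {1}, f(2) = B = {0, 1}.
# Every point maps either to itself or to the fixed set B, so by
# Lemma 4.2(3) this map should be idempotent.
f = {0: {0}, 1: {1}, 2: {0, 1}}
print(is_idempotent(f))          # True

# A non-example: g cycles 0 -> 1 -> 2 -> 0, so g o g != g.
g = {0: {1}, 1: {2}, 2: {0}}
print(is_idempotent(g))          # False
```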
In the special case where X = [0, 1], we also have the following. These statements are easy to verify directly and so we omit the proof. (Statement 3 below has been observed before by Ingram in .)

Lemma 4.3. Let f : [0, 1] → 2^[0,1] be u.s.c.
1. If, for all x ∈ [0, 1], either f(x) = x or f(x) = [0, x], then f² = f.
2. If, for all x ∈ [0, 1], either f(x) = x or f(x) = [x, 1], then f² = f.
3. If, for all x ∈ [0, 1], f(x) = {x, 1 − x}, then f² = f.
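As a sanity check (my own verification, supplied here because the paper omits the proof), statement 3 follows from a one-line computation: for any x ∈ [0, 1],
$$f^2(x) \;=\; f(x) \cup f(1-x) \;=\; \{x,\, 1-x\} \cup \{1-x,\, x\} \;=\; \{x,\, 1-x\} \;=\; f(x).$$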
Despite how restrictive the condition f² = f may seem, the following lemma shows that one often has more freedom when constructing u.s.c. idempotent functions than it may first appear.

Lemma 4.4. Let a ∈ (0, 1) and let K be any closed subset of [0, 1]² such that K ⊆ ([0, a) × (a, 1]) ∪ {(a, a)}. Let ∆ = {(x, x) ∈ [0, 1]² | x ∈ [0, 1]}. Then K ∪ ∆ is the graph of an idempotent surjective u.s.c. function f : [0, 1] → 2^[0,1].

Proof. K ∪ ∆ is a closed subset of [0, 1]² whose projections map onto both coordinates, so K ∪ ∆ is the graph of a surjective u.s.c. function f : [0, 1] → 2^[0,1]; it remains to check that f is idempotent. If a ≤ x ≤ 1, then f(x) = x and so f²(x) = x also. If 0 ≤ x < a, then f(x) = {x} ∪ Hx, where Hx = π2(K ∩ ({x} × (a, 1])). In particular, Hx ⊆ (a, 1], and since f|[a, 1] is the identity, f(Hx) = Hx. Thus, f²(x) = f(x) ∪ f(Hx) = ({x} ∪ Hx) ∪ Hx = {x} ∪ Hx = f(x). So, f² = f.

We close this section with a remark: if f : X → X is continuous, idempotent and surjective, then by Lemma 4.2 part 2, f can only be the identity on X. However, a multitude of distinct idempotent, surjective, u.s.c. functions exist, which is an unexpected advantage to working in the general setting of u.s.c. functions.

5. Connectedness Results
A considerable subset of the literature on generalized inverse limits has been devoted to finding necessary and/or sufficient conditions for lim← f to be connected. The difficulty of this problem is very well-known; still, researchers have produced various helpful results. In this section, we intend to reformulate some of these results in the context of generalized inverse limits indexed by a totally ordered set. The key to proving the main theorems of this section will be the various "G-sets" introduced in Section 2 of this paper. So, the following lemma (which was originally proved by Ingram and Mahavier amid the proof of Theorem 124 in ) will be useful:

Lemma 5.1. (Ingram & Mahavier) Let {Xα, fαβ, D} be a system for which each factor space is non-empty compact Hausdorff, each bonding function is u.s.c., and D is totally ordered. Let η ∈ D be fixed, and let Γ = {G′(β1, β2, ..., βn) | {β1, β2, ..., βn} is a finite subset of D with βi ⪯ η for 1 ≤ i ≤ n}. Then ⋂Γ = Gη.

Proof. Let x ∈ Gη and let G′(β1, β2, ..., βn) be an arbitrary member of Γ. Then because x ∈ Gη, we have xβi ∈ fβiβj(xβj) for 1 ≤ i ≤ j ≤ n. Thus, x ∈ G′(β1, β2, ..., βn), which implies x ∈ ⋂Γ. On the other hand, suppose x ∈ ⋂Γ. Then for any α ⪯ β ⪯ η, because x ∈ G′(α, β), xα ∈ fαβ(xβ). Thus, x ∈ Gη.

The first main theorem of this section is an easy generalization of background Theorem 2.2. Note that the original theorem required fαβ to be Hausdorff continuum-valued (or fαβ(Xβ) to be connected and fαβ⁻¹ to be Hausdorff continuum-valued) for each α ⪯ β in the index set D. This condition was imposed in order to force each G(β1, β2, ..., βn) to be connected, which was the key to the proof; however, there are more general circumstances in which G(β1, β2, ..., βn) also turns out to be connected, so the following theorem will be helpful. The proof requires only a minor adjustment to the original proofs of Theorems 124 and 125 given by Ingram and Mahavier in .
Theorem 5.2. Let {Xα, fαβ, D} be a system for which each factor space is a Hausdorff continuum, each bonding function is u.s.c., and D is a totally ordered set. Suppose that, for each finite subset {β1, β2, ..., βn} of D, G(β1, β2, ..., βn) is connected. Then lim← f is a Hausdorff continuum.

Proof. Let η ∈ D be fixed; we intend to show that Gη is a Hausdorff continuum. Let H = {β1, β2, ..., βn} be a finite subset of D. Then G′(β1, β2, ..., βn) is homeomorphic to G(β1, β2, ..., βn) × ∏α∈D−H Xα, a product of Hausdorff continua; therefore, G′(β1, β2, ..., βn) is a Hausdorff continuum. Let Γ = {G′(β1, β2, ..., βn) | {β1, β2, ..., βn} is a finite subset of D with βi ⪯ η for 1 ≤ i ≤ n}. Next, note that whenever {β1, β2, ..., βn} and {σ1, σ2, ..., σs} are finite subsets of D, if we let {τ1, τ2, ..., τr} = {β1, β2, ..., βn} ∪ {σ1, σ2, ..., σs}, then we may conclude that G′(τ1, τ2, ..., τr) ⊆ G′(β1, β2, ..., βn) ∩ G′(σ1, σ2, ..., σs). Thus, Γ is a collection of Hausdorff continua with the property that the intersection of any finite subcollection of members of Γ contains another member of Γ. It follows that ⋂Γ is a Hausdorff continuum. However, by Lemma 5.1, ⋂Γ = Gη. Gη is therefore a Hausdorff continuum for each η ∈ D.

Finally, it is easy to see that lim← f = ⋂η∈D Gη, but {Gη | η ∈ D} is a nested collection of Hausdorff continua, so lim← f is also a Hausdorff continuum.
We may apply the above theorem to obtain a characterization of connectedness in inverse limits with surjective bonding functions:

Theorem 5.3. Let {Xα, fαβ, D} be a system for which each factor space is a Hausdorff continuum, each bonding function is u.s.c. and surjective, and D is a totally ordered set. Then lim← f is a Hausdorff continuum iff for each finite subset {β1, β2, ..., βn} of D, G(β1, β2, ..., βn) is connected.

Proof. Suppose that, for each finite subset {β1, β2, ..., βn} of D, the set G(β1, β2, ..., βn) is connected. By Theorem 5.2, we may conclude that lim← f is a Hausdorff continuum. On the other hand, suppose lim← f is a Hausdorff continuum. Since the bonding functions are surjective, lim← f is cordial, and therefore for any finite subset H = {β1, β2, ..., βn} of D, πH(lim← f) = G(β1, β2, ..., βn), which implies that G(β1, β2, ..., βn) is connected.

A similar characterization, this time in terms of Gη, is also worth noting. (This result was inspired by Theorem 2.1 in .)

Theorem 5.4. Let {Xα, fαβ, D} be a system for which each factor space is a Hausdorff continuum, each bonding function is u.s.c. and surjective, and D is a totally ordered set. Then lim← f is a Hausdorff continuum iff Gη is connected for each η ∈ D.

Proof. Since lim← f = ⋂η∈D Gη, if Gη is connected for each η ∈ D, then lim← f is the intersection of a nested collection of Hausdorff continua and is therefore a Hausdorff continuum. On the other hand, suppose lim← f is a Hausdorff continuum and η ∈ D. By Theorem 5.3, when {β1, β2, ..., βn} is a finite subset of D, G(β1, β2, ..., βn) is a Hausdorff continuum, which would mean that G′(β1, β2, ..., βn) is also a Hausdorff continuum. So, if Γ = {G′(β1, β2, ..., βn) | {β1, β2, ..., βn} is a finite subset of D with βi ⪯ η for 1 ≤ i ≤ n}, then by the same argument given in the proof of Theorem 5.2, ⋂Γ is a Hausdorff continuum. By Lemma 5.1, ⋂Γ = Gη.
In the last two theorems, the hypothesis that each bonding function is surjective is necessary. To see this, consider the following example (which can also be found as Example 1.8 in ). Let f : [0, 1] → 2^[0,1] be given by f(x) = 0 for 0 ≤ x < 1 and f(1) = {0, 1/2}, let the index set be the positive integers, and let fi i+1 = f for each positive integer i. Then the inverse limit lim← f is the singleton {(0, 0, 0, ...)}, whereas G(1, 2), being homeomorphic to the graph of f, is not connected. G2 is not connected for similar reasons. The source of the trouble here is that, because the bonding functions are not surjective, lim← f is not cordial, and so, G(1, 2) need not equal π{1,2}(lim← f). This example also shows that, in Theorem 2.1 of , the bonding functions should have been assumed to be surjective in order for lim← f being a continuum to imply that each Gn is connected. (The converse, which is more commonly used, is true as it stands, without having to assume surjectivity.) Ingram has asked the author to make the readers of this paper aware of the error.

Next, we give a simple sufficient condition for non-connectedness in an inverse limit with surjective bonding functions. (This result generalizes an observation by Nall in .) Once again, the previous example shows that the bonding functions in this theorem must be surjective.

Theorem 5.5. Let {Xα, fαβ, D} be a system for which each factor space is non-empty, compact and Hausdorff, each bonding function is u.s.c. and surjective, and D is totally ordered. If, for some α ⪯ β ∈ D, the graph of fαβ is not connected, then lim← f is not connected.

Proof. Suppose lim← f is connected. Then because it is cordial (by Lemma 3.1), π{α,β}(lim← f) = G(α, β) is connected for every α ⪯ β ∈ D. Since, for each α ⪯ β ∈ D, G(α, β) is homeomorphic to Graph(fαβ), the proof is complete.

The reader should be warned that the converse of the previous theorem is not true, as illustrated by an example from Jonathan Meddaugh that is given in . For the remainder of this section, we will focus on generalized inverse limits with a single idempotent surjective u.s.c. bonding function. In , Van Nall presents a variety of theorems that give information about the connectedness of a generalized inverse limit (indexed by the positive integers) where there is some u.s.c. function f so that fi i+1 = f for each i > 0. Using techniques from the original proofs by Nall, we will reformulate two of those theorems in the setting of an inverse limit (indexed by some totally ordered set D) with a single idempotent u.s.c. bonding function f.

To prepare for the first of these theorems, we give a lemma that restates a key detail from Nall's original Theorem 3.1 in , but in the context of this paper. (For the proof of this lemma, we refer the reader to Nall's paper.) Suppose X is compact Hausdorff, f : X → 2^X is u.s.c., {β1, β2, ..., βn} (with n ≥ 2) is a finite subset of a totally ordered set D, and Xβi = X for 1 ≤ i ≤ n. Then let us define K(β1, β2, ..., βn) = {(xβ1, xβ2, ..., xβn) ∈ ∏1≤i≤n Xβi | xβi ∈ f(xβi+1) for 1 ≤ i ≤ n − 1}.

Lemma 5.6. (Nall) Suppose X is a metric continuum and f : X → 2^X is a surjective u.s.c. function whose graph is connected and is the union of a collection of u.s.c. functions, each of which has domain X and maps into C(X). Suppose also that {β1, β2, ..., βn} (with n ≥ 2) is a finite subset of a totally ordered set D, and Xβi = X for 1 ≤ i ≤ n. Then K(β1, β2, ..., βn) is a continuum for each integer n ≥ 2.
Nall proved the above result in the case of K(1, 2, ..., n), but of course the same result holds if we replace the natural numbers 1, 2, ..., n with other symbols β1, β2, ..., βn from a totally ordered set D. Also, not surprisingly, the set K(β1, β2, ..., βn) can be rewritten using the "G" notation of this paper:

Lemma 5.7. Let X be a non-empty compact Hausdorff space, and let f : X → 2^X be an idempotent surjective u.s.c. function. Suppose {X, f, D} is a system with the single idempotent bonding function f and a totally ordered index set D. If {β1, β2, ..., βn} (with n ≥ 2) is a finite subset of D, then K(β1, β2, ..., βn) = G(β1, β2, ..., βn).

Proof. If (xβ1, xβ2, ..., xβn) ∈ G(β1, β2, ..., βn), then because we have xβi ∈ fβiβi+1(xβi+1) = f(xβi+1) for 1 ≤ i ≤ n − 1, it follows that (xβ1, xβ2, ..., xβn) ∈ K(β1, β2, ..., βn). Now let (xβ1, xβ2, ..., xβn) ∈ K(β1, β2, ..., βn). Clearly xβi ∈ fβiβi+1(xβi+1) for 1 ≤ i < n. So now, for a fixed i with 1 ≤ i < n, we will show inductively that xβi ∈ fβiβj(xβj) when i < j ≤ n. Suppose we have shown xβi ∈ fβiβk(xβk) for some k with i + 1 ≤ k < n. Then since xβk ∈ fβkβk+1(xβk+1), we know xβi ∈ fβiβk ◦ fβkβk+1(xβk+1). However, fβiβk ◦ fβkβk+1 = f ◦ f = f = fβiβk+1, so we have xβi ∈ fβiβk+1(xβk+1). Thus, xβi ∈ fβiβj(xβj) for all 1 ≤ i ≤ j ≤ n, and (xβ1, xβ2, ..., xβn) ∈ G(β1, β2, ..., βn).

Theorem 5.8. Suppose X is a metric continuum and f : X → 2^X is an idempotent surjective u.s.c. function whose graph is connected and is the union of a collection of u.s.c. functions, each of which has domain X and maps into C(X). Let {X, f, D} be a system with the single idempotent bonding function f and a totally ordered index set D. Then the inverse limit lim← f of the system {X, f, D} is a Hausdorff continuum.

Proof. By Theorem 5.2, it suffices to show that for each finite subset {β1, β2, ..., βn} of D, G(β1, β2, ..., βn) is connected. Of course, G(β1) = X is connected, so assume {β1, β2, ..., βn} is a finite subset of D with n ≥ 2. By Lemma 5.7, G(β1, β2, ..., βn) = K(β1, β2, ..., βn). However, by Lemma 5.6, K(β1, β2, ..., βn) is a continuum, so the proof is complete.

Lemma 4.1 stated that if f is u.s.c., idempotent and surjective, then so is f⁻¹. Thus, assuming X is a non-empty compact Hausdorff space and f : X → 2^X is u.s.c., idempotent, and surjective, then if D is totally ordered, not only is the inverse limit lim← f of the system {X, f, D} defined, but also the inverse limit lim← f⁻¹ of the system {X, f⁻¹, D}. This is the basis for the last theorem of this section, which retains the flavor of Theorem 3.3 in .

Theorem 5.9. Suppose X is a Hausdorff continuum, f : X → 2^X is an idempotent surjective u.s.c. function, and D is a totally ordered set. Let lim← f be the inverse limit of the system {X, f, D}, and let lim← f⁻¹ be the inverse limit of the system {X, f⁻¹, D}. Then lim← f is a Hausdorff continuum if and only if lim← f⁻¹ is a Hausdorff continuum.

Proof. We employ a similar proof technique as the one used by Nall. Let {β1, β2, ..., βn} be a finite subset of D. If lim← f is a Hausdorff continuum, then because it is cordial, we may conclude that the set Gf(β1, β2, ..., βn) is a Hausdorff continuum. Also, (xβ1, xβ2, ..., xβn) ∈ Gf(β1, β2, ..., βn) if and only if (xβn, xβn−1, ..., xβ1) ∈ Gf⁻¹(β1, β2, ..., βn). (Justification: xβi ∈ f(xβj) for each 1 ≤ i < j ≤ n if and only if xβj ∈ f⁻¹(xβi) for each 1 ≤ i < j ≤ n.) Thus, the map that reverses the order of the sequence (xβ1, xβ2, ..., xβn) is a homeomorphism from Gf(β1, β2, ..., βn) to Gf⁻¹(β1, β2, ..., βn), and therefore, Gf⁻¹(β1, β2, ..., βn) is a Hausdorff continuum. We apply Theorem 5.2 to conclude that lim← f⁻¹ is a Hausdorff continuum. The argument is easily reversed to obtain the other direction of the equivalence.
6. Examples

As we will show, in each of the following examples, the given surjective u.s.c. function f : [0, 1] → 2^[0,1] is idempotent. Thus, if D is a totally ordered set, the inverse limit lim← f of the system {X, f, D} with the single bonding function f is defined. We will use a limit ordinal γ ≥ ω for our index set D in these examples. (Recall that an ordinal γ is equal to the set of its predecessors. Although the index sets of inverse limits typically start at 1 because the positive integers are so often used, when we use ordinals our index set will start with 0. For more background material on ordinals, see, e.g., .) Of course, if γ = ω, then lim← f is homeomorphic to the usual generalized inverse limit indexed by the positive integers. Versions of the first four examples have been studied previously by others (e.g., Ingram in ) in that setting; however, as we shall see below, some striking changes can occur when larger limit ordinals are chosen for the index set—especially γ ≥ ω1.

Example 6.1. Let f : [0, 1] → C([0, 1]) be given by f(0) = [0, 1] and f(x) = x for each x ≠ 0. (f is idempotent by Lemma 4.2, part 3.) The inverse limit lim← f of the system {[0, 1], f, γ} is a fan that is the union of one arc for each ordinal β with 1 ≤ β ≤ γ, all intersecting at the vertex (0, 0, 0, ...).

Proof. For 1 ≤ β ≤ γ, let Aβ = {x ∈ ∏α<γ [0, 1] | xα = x0 if α < β and xα = 0 if β ≤ α < γ}. Then for 1 ≤ β ≤ γ, Aβ is an arc and lim← f = ⋃1≤β≤γ Aβ. Note that ⋂1≤β≤γ Aβ = {(0, 0, 0, ...)}.
Example 6.2. Let f : [0, 1] → C([0, 1]) be given by f(0) = [0, 1] and f(x) = 1 for each x ≠ 0. (f is idempotent by Lemma 4.2, part 4.) The inverse limit lim← f of the system {[0, 1], f, γ} is homeomorphic to the set (γ × [0, 1)) ∪ {(γ, 0)} with the lexicographic order topology. (If γ = ω1, this space is the traditional compactified long line.)

Proof. Let Y be the space (γ × [0, 1)) ∪ {(γ, 0)} with the lexicographic order topology, and denote the points (0, 0, 0, ...) and (1, 1, 1, ...) of lim← f by 0 and 1, respectively. Let h : lim← f → Y be given as follows: suppose x = (xα)α<γ ∈ lim← f. If x = 1, then h(x) = (γ, 0); if x ≠ 1, then h(x) = (β, xβ), where β is the least ordinal < γ such that xβ ≠ 1. We intend to show that h is a homeomorphism.

Let x, y ∈ lim← f with x ≠ y. If exactly one of x or y is 1, then clearly h(x) ≠ h(y). So, suppose both x and y are not 1. Then h(x) = (β1, xβ1) and h(y) = (β2, xβ2) for some β1, β2 < γ. If β1 ≠ β2, then of course h(x) ≠ h(y). If β1 = β2, then by the way f and h were defined, xα = yα = 1 for all α < β1, and xα = yα = 0 for all α > β1. However, x ≠ y, and that forces xβ1 ≠ yβ1, which means h(x) ≠ h(y). So, h is one-to-one. To show h is onto, we recall that h(1) = (γ, 0), so let (β, t) ∈ Y − {(γ, 0)}. Then if x = (xα)α<γ is the element of the inverse limit that satisfies xα = 1 for all α < β, xβ = t, and xα = 0 for all α > β, we have h(x) = (β, t). So, h is indeed onto.

Finally, to show h is continuous, we let x ∈ lim← f and let V be an open set in Y containing h(x). We consider the subcase where h(x) = (β, xβ), where β < γ and 0 < xβ < 1. Then V contains an open interval of form ((β, s), (β, t)) where 0 < s < xβ < t < 1. Let U = (∏α<γ Uα) ∩ lim← f, where Uα = [0, 1] for all α ≠ β and Uβ = (s, t). Then U is open in lim← f and x ∈ U. To see that h(U) ⊆ V, let y ∈ U and note that yβ ∈ (s, t). Since 0 < s < yβ < t < 1, we may conclude yα = 1 for all α < β, and that means h(y) = (β, yβ), which lies in ((β, s), (β, t)) ⊆ V. The remaining subcases (i.e., when h(x) = (β, 0) for some 0 ≤ β ≤ γ), though slightly more complicated, are similar and will be left to the reader. Thus, h is one-to-one, onto, and continuous, so h⁻¹ is also continuous and h is a homeomorphism.

A similar argument can be used for the following example, so we omit the proof.

Example 6.3. Let f : [0, 1] → C([0, 1]) be given by f(0) = [0, 1/2], f(1) = [1/2, 1], and f(x) = 1/2 whenever 0 < x < 1. (f can be shown to be idempotent directly, or by applying Lemma 4.2, part 1.) The inverse limit lim← f of the system {[0, 1], f, γ} is an arc homeomorphic to the union of two copies of the space produced in Example 6.2 intersecting at the common compactification point (1/2, 1/2, 1/2, ...).

It may be interesting to note that in the previous two examples, if γ is chosen to be any limit ordinal < ω1, then the inverse limit is simply a metric arc. Only once γ = ω1 is chosen do we get a non-metric arc.
Example 6.4. Let f : [0, 1] → 2^[0,1] be given by f(x) = {x, 1 − x} for each x ∈ [0, 1]. (f is idempotent by Lemma 4.3, part 3.) The inverse limit lim← f of the system {[0, 1], f, γ} is a cone over the set {0, 1}^γ with vertex (1/2, 1/2, 1/2, ...).

Proof. For a given y ∈ {t, 1 − t}^γ and a ∈ [0, 1], let y(a) denote the sequence y with each t replaced by a. For a given y ∈ {t, 1 − t}^γ, let Ay = {y(a) | a ∈ [0, 1]}. Then lim← f = ⋃y∈{t,1−t}^γ Ay. Note that, for each y, Ay is an arc from y(0) to y(1) and whenever y, z ∈ {t, 1 − t}^γ with y ≠ z, Ay ∩ Az = {(1/2, 1/2, 1/2, ...)}.

Example 6.5. Let f : [0, 1] → 2^[0,1] satisfy the hypothesis of Lemma 4.4 (so f is idempotent), with the additional requirement that K is a continuum containing the point (a, a). Then the inverse limit lim← f of the system {[0, 1], f, γ} is a Hausdorff continuum. (This continuum contains a fan of copies of K, with one copy of K for each ordinal 1 ≤ β < γ.)

Proof. For 1 ≤ β < γ, let Aβ = {x ∈ lim← f | ∃ (s, t) ∈ K with xα = t ∀α < β and xα = s ∀α ≥ β}. Let B = {x ∈ lim← f | ∃ t ∈ [0, 1] such that xα = t ∀α < γ}. Then lim← f = (⋃1≤β<γ Aβ) ∪ B. Each Aβ can be seen to be homeomorphic to K, a continuum, and B is an arc. Since the sequence a = (a)α<γ is an element of each Aβ as well as B, lim← f is connected.
7. Conclusion

Virtually any question that has been stated for inverse limits indexed by the positive integers has an analogue for inverse limits indexed by totally ordered sets, so there are ample opportunities for further research. A number of problems (stated mainly for inverse limits indexed by the positive integers) can be found in and in Chapter 6 of . One question of interest would be the following:

Question 7.1. Choose some totally ordered set D. What continua are homeomorphic to an inverse limit with a single u.s.c. idempotent surjective bonding function f : [0, 1] → 2^[0,1] with index set D?

Choosing index sets other than the positive integers can have surprising effects on this problem. For example, Patrick Vernon used the integers as his index set to produce a 2-cell in , whereas Van Nall proved (in ) that a 2-cell cannot be produced if the index set consists only of the positive integers. (The bonding function used in Vernon's example was not idempotent, however.) Let us also state a much more open-ended question:

Question 7.2. If lim← f is the inverse limit of a system {[0, 1], f, D} with a single u.s.c. idempotent surjective bonding function f : [0, 1] → 2^[0,1] and totally ordered index set D, what can be said of lim← f?

When the index set D is large, finding collections of u.s.c. functions f that are exact (without being trivial, e.g., by making almost all of the bonding functions be the identity) remains a difficult problem. Using a single idempotent bonding function is only one possible solution. The collection of continuous bonding functions used by Michel Smith in is exact, although the factor spaces become increasingly complicated as one moves deeper into the index set D.

Question 7.3. What other techniques are there for generating non-trivial collections of u.s.c. (surjective) functions that are exact?

The examples section showed how strongly the choice of index set D can affect the properties of the inverse limit space. So, we close with another open-ended question:

Question 7.4. If the index set D has a given property P, under what conditions (and to what degree) does that impact the properties of lim← f?
References

David Bellamy, Indecomposable continua with one and two composants, Fund. Math. 101 (1978), no. 2, 129-134.
Wlodzimierz J. Charatonik and Robert P. Roe, On Mahavier Products, Topology and its Applications 166 (2014), 92-97.
W. T. Ingram, William S. Mahavier, Inverse Limits: from Continua to Chaos, Springer, Developments in Mathematics (vol. 25), 2012.
W. T. Ingram, William S. Mahavier, Inverse limits of upper semi-continuous set valued functions, Houston Journal of Mathematics, vol. 32 (2006), no. 1, 119-130.
W. T. Ingram, Concerning Nonconnected Inverse Limits with Upper Semi-Continuous Set-Valued Functions, Topology Proceedings, vol. 40 (2012), 203-214.
W. T. Ingram, An introduction to inverse limits with set-valued functions, Springer Briefs in Mathematics, 2012.
W. T. Ingram, Inverse Limits with Upper Semi-Continuous Bonding Functions: Problems and Some Partial Solutions, Topology Proceedings, vol. 34 (1) (2010), 353-373.
Kenneth Kunen, Set Theory: An Introduction to Independence Proofs, Elsevier B.V., 1980.
Van Nall, Inverse limits with set valued functions, Houston Journal of Mathematics, 37 (2011), no. 4, 1323-1332.
Van Nall, Connected inverse limits with a set-valued function, Topology Proc. 40 (2012), 167-177.
Michel Smith, Generating Large Indecomposable Continua, Pacific Journal, vol. 62, no. 2 (1976), 587-593.
Scott Varagona, Simple techniques for detecting decomposability or indecomposability of generalized inverse limits, Ph.D. dissertation (Auburn University), 2012.
Patrick Vernon, Inverse limits of set-valued functions indexed by the integers, Topology and its Applications 171 (2014), 35-40.
Department of Biology, Chemistry & Mathematics; University of Montevallo; Montevallo, Alabama 35115
E-mail address : [email protected]
|
4
|
Forced Stalemates
I was thinking: is it possible to be forced into a stalemate? If so, provide examples.
Here in a winning position Reshevsky blunders with Qxg3? This allows Evans to sacrifice his queen and force a draw. If the rook is captured it will be stalemate and if the king moves it will be perpetual check.
It's quite hard.
Even in this "lost game" of mine, my opponent made an unforced error and allowed me to escape with a draw; I would not classify this as a forced perpetual check.
Forced stalemates are much harder, and so far all of my games that ended in stalemate (both the situations where I was stalemating the opponent and where I was stalemated) appear due to something unforced.
Depends on how you define forced stalemate, but I doubt this example counts.
Here in a winning position Reshevsky blunders with Qxg3? This allows Evans to sacrifice his queen and force a draw. If the rook is captured it will be stalemate and if the king moves it will be perpetual check.
This one is not strictly a forced stalemate, because Reshevsky does have the choice between capturing the rook (draw by stalemate) or moving the king (perpetual check eventually forcing threefold repetition). I suppose there might be desperado positions where that choice does not exist and the only legal move to get out of check is the capture that creates stalemate, but I cannot provide examples.
Turns out it is impossible to be legally forced into stalemate since 1997:
5.2.2: The game is drawn when a position has arisen in which neither player can checkmate the opponent’s king with any series of legal moves. The game is said to end in a ‘dead position’. This immediately ends the game, provided that the move producing the position was in accordance with Article 3 and Articles 4.2 – 4.7.
Therefore, as soon as all moves that otherwise would be legal result in a stalemate, the game already ends in a dead position - being forced into stalemate is impossible because the position forcing stalemate is a dead position. Example position:
White is in check and would otherwise have two legal moves to get out of check: Kxg5 or hxg5. However, both would create stalemate. Therefore the position already is a dead position and the game has already ended.
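One way to sanity-check this "dead position" reading of 5.2.2 is to ask a chess library whether the side to move has legal moves but every one of them produces stalemate. A minimal sketch follows, assuming the third-party python-chess package (my own choice of tool); the FEN is not the diagram from this post but the position reached after ...Qc7+ in the example given further down the thread, where White's only legal reply is Kxc7 with immediate stalemate.

```python
# Minimal sketch, assuming the third-party python-chess package (pip install python-chess).
import chess

def dead_by_forced_stalemate(board: chess.Board) -> bool:
    """True if the side to move has at least one legal move,
    but every legal move immediately stalemates the opponent."""
    moves = list(board.legal_moves)
    if not moves:
        return False  # game already over (checkmate or stalemate), not the case discussed
    for move in moves:
        board.push(move)
        stalemated = board.is_stalemate()
        board.pop()
        if not stalemated:
            return False
    return True

# Position of the kind described: White is in check and the only legal reply
# (here Kxc7) produces stalemate.  It arises after ...Qc7+ in the example
# quoted later in the thread.
board = chess.Board("2K4Q/k1q5/1p6/1P6/8/8/8/8 w - - 0 1")
print(dead_by_forced_stalemate(board))  # expected: True
```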
It is more common, I think, to force a draw by repetition where stalemate is a threat. That's what happened here. Black to move in all three diagrams.
But, stalemate can be forced from many positions.
My opponent forced stalemate from:
And I did from:
Pfft! Who would want to force stalemate? If you're determined not to win you might as well resign.
Yeah cuz I'm sure if you were in a tournament you would throw away the half a point.
It is more common, I think, to force a draw by repetition where stalemate is a threat. That's what happened here. Black to move in all three diagrams.
But, stalemate can be forced from many positions.
My opponent forced stalemate from:
As I pointed out, no, under 5.2.2 stalemate can never be forced:
White would otherwise have one legal move to get out of check, but this would produce stalemate. Since all legal moves draw, the position on board is a dead position, the game already is ended and it is illegal for White to actually capture the rook and produce stalemate.
A choice between stalemate and threefold repetition is not a dead position. In the case of perpetual check, the defender needs to actually keep checking until the third repetition and then actually claim the draw. The defender might legally blunder into making a different move and getting checkmated after all. Therefore the game does not end in a dead position before the attacker actually plays the stalemating move or either player claims threefold repetition (or a fivefold repetition happens).
Pfft! Who would want to force stalemate? If you're determined not to win you might as well resign.
If I have an inferior position, I would be thrilled to draw!
Half a point beats no points all day and twice on Sunday.
Only moronic clowns don't appreciate the draw in the game of chess and the many ways to achieve it!
Turns out it is impossible to be legally forced into stalemate since 1997:
Therefore, as soon as all moves that otherwise would be legal result in a stalemate, the game already ends in a dead position - being forced into stalemate is impossible because the position forcing stalemate is a dead position. Example position:
White is in check and would otherwise have two legal moves to get out of check: Kxg5 or hxg5. However, both would create stalemate. Therefore the position already is a dead position and the game has already ended.
Actually, forced stalemate is possible based on the rules given.
WKc8, WQh8, WPb5, BKa7, BQg3, BPb6. Black to Move.
The game is not over, as Black has many legal moves (and many of them lose), but he has a forced stalemate with 1...Qc7 if he wants it. Of course, instead of the losing moves or 1...Qc7, I would advise Black to play 1...Qb8 and win!
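The claim can be checked mechanically. Here is a rough verification sketch, again assuming the python-chess package; the FEN simply encodes the piece list above (White Kc8, Qh8, Pb5; Black Ka7, Qg3, Pb6; Black to move).

```python
# Rough verification sketch, assuming the python-chess package.
import chess

# White Kc8, Qh8, Pb5 vs Black Ka7, Qg3, Pb6; Black to move.
board = chess.Board("2K4Q/k7/1p6/1P6/8/6q1/8/8 b - - 0 1")

board.push_san("Qc7")  # the claimed forcing move 1...Qc7+

replies = list(board.legal_moves)
print("White's legal replies:", [board.san(m) for m in replies])  # expected: only Kxc7
for move in replies:
    board.push(move)
    print(move.uci(), "leads to stalemate?", board.is_stalemate())  # expected: True
    board.pop()
```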
As I pointed out, no, under 5.2.2 stalemate can never be forced:
Chess.com does not use FIDE rules. In the US, very few events follow FIDE rules. Under USCF rules, stalemate is a legal move.
I saw your post before I created mine. It is useful, but does not answer the OP's question.
What about the "crazy rook" (desperado rook)? That is, where you have only a rook left beside your king, and the king can't move at all (or there are other pieces too, but none of them can move anymore).
Therefore, you just attack the opponent's king with your rook on every move; either he moves and you keep on checking, or he takes the rook and it's a draw.
Whatever the circumstances, the goal might be the team's score, or salvaging the half point when a win isn't possible anymore (or you are in a hurry to eat!!).
What about the "crazy rook" (desperado rook)? That is, where you have only a rook left beside your king, and the king can't move at all (or there are other pieces too, but none of them can move anymore).
Therefore, you just attack the opponent's king with your rook on every move; either he moves and you keep on checking, or he takes the rook and it's a draw.
Yes, that's the situation in all the diagrams.
But my point is that under the Fide rules since 1997, none of them is a "forced stalemate", and there cannot be a "forced stalemate". Either the king has a choice between capturing the rook (and drawing by stalemate) and moving (and being chased by perpetual check into threefold repetition) and then the stalemate is not specifically forced because the king can choose to draw by repetition instead of stalemate. Or else the king has nowhere to move, but then it is a dead position and the game should end in a draw then, without actually capturing the rook.
What about the "crazy rook" (desperado rook)? That is, where you have only a rook left beside your king, and the king can't move at all (or there are other pieces too, but none of them can move anymore).
Therefore, you just attack the opponent's king with your rook on every move; either he moves and you keep on checking, or he takes the rook and it's a draw.
Yes, that's the situation in all the diagrams.
But my point is that under the Fide rules since 1997, none of them is a "forced stalemate", and there cannot be a "forced stalemate". Either the king has a choice between capturing the rook (and drawing by stalemate) and moving (and being chased by perpetual check into threefold repetition) and then the stalemate is not specifically forced because the king can choose to draw by repetition instead of stalemate. Or else the king has nowhere to move, but then it is a dead position and the game should end in a draw then, without actually capturing the rook.
Please bear in mind that FIDE rules apply to FIDE-sanctioned competition, and only FIDE-sanctioned competition.
They do not apply to competition sanctioned by other bodies, nor to casual chess, nor to the vast majority of websites (and those claiming to employ FIDE rules likely do not encode the rule as you have explained it).
Hence, your post is interesting, but barely relevant.
A king-and-pawn vs. king endgame where the lone king has the opposition is an example.
force a stalemate with white
force a stalemate with white
Forced stalemate from black
|
5
|
ANNALES DE L'I. H. P., SECTION B. Philip S. Griffin, Terry R. McConnell, Gregory Verchota, Conditioned Brownian motion in simply connected planar domains. Annales de l'I. H. P., section B, tome 29, no 2 (1993), p. 229-249. © Gauthier-Villars, 1993.
Conditioned Brownian motion in simply connected planar domains
Philip S. GRIFFIN, Terry R. McCONNELL and Gregory VERCHOTA
Department of Mathematics, Syracuse University, Syracuse, NY 13244-1150, U.S.A.
Ann. Inst. Henri Poincaré, Vol. 29, n° 2, 1993, p. 229-249.
ABSTRACT. - The purpose of this paper is to study Doob's conditioned Brownian motions in simply connected domains in ℝ². We obtain the precise value of the best constant in the lifetime inequality of Cranston and McConnell and prove a related maximum principle. We also exhibit a connection between these processes and the classical isoperimetric inequalities.
Key words : Conditioned Brownian motion, expected lifetime, positive harmonic function, isoperimetric inequality.
1. INTRODUCTION
Let D be a domain in ℝⁿ and p(t, α, β) the transition densities of Brownian motion killed on exiting D. Let H⁺_D = {h : h is a positive harmonic function in D}.
Classification A.M.S. : (1980) Subject classification (1985 Revision) : 60 J 45, 60 J 65.
() Research supported in part by N.S.F. Grant DMS-8700928.
() Research supported in part by N.S.F. Grant DMS-8900503.
() Research supported in part by N.S.F. Grant DMS-8915413.
For h ∈ H⁺_D set p^h(t, α, β) := h(β) p(t, α, β)/h(α). Let P^h_α denote the measure on path space induced by these transition densities and E^h_α expectation with respect to P^h_α. The canonical process, which we denote by Z_t [or sometimes Z(t)], is then a Doob conditioned Brownian motion or h-process. Its lifetime is given by τ_D = sup{t : Z_t ∈ D}. If no confusion can arise we will often drop the subscript D and simply write τ for the lifetime.
In 1983, Cranston and McConnell proved that for any domain D ⊂ ℝ² there exists a constant c_D such that E^h_α τ_D ≤ c_D m(D) for all α ∈ D and h ∈ H⁺_D, (1.1) and furthermore that c_D can be taken bounded independently of D. Note that (1.1) is only of interest when D has finite area. See [ ] for references to extensions, applications and related results.
To translate this result into analytic terms, we only need observe that E^h_α τ_D = (1/h(α)) ∫_D G(α, z) h(z) m(dz), where G is the (probabilist's) Green function for D, i.e., minus two times the analyst's Green function, and m is Lebesgue measure.
One of the main results of this paper is an evaluation of the best possible constant among all simply connected domains, i.e.
c = sup{c_D : D is simply connected}.
We will prove that c = 1/π and furthermore that the supremum cannot be attained. Examples of domains for which c_D approaches 1/π are given in Section 3 (see Theorem 3.4) and include long thin rectangles. The opposite extreme to these domains is the disc, and here we explicitly compute c_D to be (4 log 2 − 2)/π ≈ .7726/π.
If h is a Martin kernel (i.e. minimal) function with pole at β ∈ ∂_M D, where ∂_M D is the minimal Martin boundary of D, then we write P^β_α and E^β_α for P^h_α and E^h_α respectively. In this case we prove that E^β_α τ_D, as a function of α ∈ D, satisfies a maximum principle; see section 2. It is not clear how important it is that h be minimal in this result, but we should point out that E^h_α τ_D does not satisfy the maximum principle for all h ∈ H⁺_D, for example when h is constant.
In the final section we extend the best constant result to the case of superharmonic h and discuss various topics including the connection with the isoperimetric inequality; see Remark 5.4. -The authors would like to thank Tadeusz Iwaniec for informing us of the work of Carleman, Gerry Cargo for a simplification in the proof of Theorem 2.1 and Eugene Poletsky for several enlightening discussions.
Finally we would like to thank the referee for pointing out an oversight on our part in the statement of the strong maximum principle in section 2, and for the observation that c_D ≤ 4/π for any planar domain. This bound can be found in Bañuelos .
2. A MAXIMUM PRINCIPLE
In this section we will prove a strong maximum principle for E^β_α τ_D as a function of α ∈ D whenever D ⊂ ℝ² is simply connected and |D| < ∞.
Here E^β_α τ_D = ∫_D G(α, z) K_β(z) m(dz)/K_β(α), and K_β is any kernel function for D with pole at β ∈ ∂_M D. By replacing K_β with the constant function we see that such a principle cannot hold in general for positive harmonic functions. We will also prove a maximum principle for simply connected domains of infinite area in ℝ².
Before giving the strong maximum principle we make a few simple observations. Let B denote the unit disc and aB its euclidean boundary.
Fix a E B, b E aB and let 03A6: B ~ D be the 1-1, conformal map of B onto D such Here and elsewhere below we assume that 03A6 has been extended to a homeomorphism of the Martin closures, and we identify OM B with aB. Then by conformal invariance of Green functions and kernel functions where, Vol. 29, n° 2-1993.
232 P. S. GRIFFIN, T. R. McCONNELL AND G. VERCHOTA is the Green function for B with pole at a and is a kernel function for B with pole at b E aB. We can always find such a conformal map for which a = 0 and b =1. In this case 0 maps the interval [ -1, 1] onto the hyperbolic geodesic through a and P. Thus the strong maximum principle is a consequence of the following stronger result: THEOREM 2. 1. - Let F not identically zero be holomorphic in the disc B and FeL2(B). let Then L’ (0) O. ’ ’ Remark. - This result is stronger than the strong maximum principle in two ways. First it allows us to replace C’ in (2.2) with an arbitrary holomorphic function Fe L 2 (B), and secondly it shows that if uo is on the hyperbolic geodesic connecting a point ex E D = D U OM D to P, iD increases as ao moves along the geodesic away from P.
Proof. - One easily sees that the derivative of the numerator may be taken inside the integral. Evaluating the derivative one gets Let akzk. In polar coordinates dm (z) = r dr d8 so that by orthogonality while Thus integrating in 8 first and then in r, the theorem follows if the inequality holds.
Annales de l’lnstitut Henri Poincaré - Probabilités et Statistiques 233 CONDITIONED BROWNIAN MOTION By taking absolute values, it suffices to prove (2. 3) for nonnegative ak.
Let aM be the first nonzero coefficient. Then by the geometric-arithmetic mean inequality This shows the inequality in (2. 3) is strict provided all quantities in (2.4) 00 k are finite. To see this observe that if then b~. Thus j=O ;=o by the Cauchy-Schwarz inequality, and the last quantity is finite since F E L2 (B)..
. COROLLARY 2. 2. - Let D be a nonempty simply connected plane domain of finite area. Let 03B1, 03B2 belong to OM D and exo lie on the hyperbolic geodesic r joining ex and 03B2. Then where the kernel functions are normalized to be 1 at,some point, call it ç, on h.
Proof. - We begin by showing that if a" converges to a in the Martin topology then To see this, map the unit disk 1-1 and conformally onto D so that -1, 0, and 1 are mapped to cx, ç, and P respectively. Then (2 . 6) follows from its counterpart in the unit disk, which reads This is a straightforward computation.
Vol. 29, n° 2-1993.
234 P. S. GRIFFIN, T. R. McCONNELL AND G. VERCHOTA Now let I> denote the conformal map described above. Choose a strictly decreasing sequence wn of negative real nurnbers such that lim wn = -1.
n ~ o0 It follows from Theorem 2 .1 that is strictly increasing. Thus By [ 11 ], Corollary 1. 2, the integrands on the right-hand side of the last inequality are uniformly integrable. Thus we obtain (2. 5) from (2.6) by passage to the limit as n - oo. M Remark 2.3. - Essentially the same argument shows that if an is any sequence converging to a in the Martin topology, then where the kernel functions are normalized at some point ~ lying on the hyperbolic geodesic joining cx and ~. The right-hand side is the expected lifetime of Brownian motion in D started at the entrance boundary point cx and conditioned to die at the boundary point ~. (See .) Accordingly, we denote it by Ea ’to COROLLARY 2.4. - Let D be a nonempty simply connected plane domain with a Green function. Let belong to aM D and CXo lie on the hyperbolic geodesic r joining oc and ~. Then Proof. - By conformal invariance it is enough to prove for any conformal map 03A6 defined in B.
By monotone convergence the right hand side of (2. 8) may be written as Annales de l’lnstitut Henri Poincaré - Probabilités et Statistiques 235 CONDITIONED BROWNIAN MOTION The left hand side may be written The first integral converges by monotone convergence and the second by dominated convergence. For the third the functions may be extended to {z: l/2~z 1 and Rez>0} so that con-verge uniformly there to /(z) = ’20142014L log -2014-. This function is uni-formly bounded away from zero. As has already been noted the functions gr (z) = (r2-|z|2)2 |r2-z2|2 for {z:|z| r}, extended to be zero outside { z: |z|r}, increase monotonically on [z: z| 1} to g (z) = -2014’ . . Thus given any E > 0, by Fatou’s lemma, uniform convergence to f and monotone convergence, we have If the same is true of since f is bounded away from zero. Thus the left side of (2. 8) is Now by Corollary 2. 2 for each r > 0 the integrals in (2. 9) are strictly greater than those in (2 .10) and (2. 8) follows..
Remark 2 . 5. - If D 1 = 00 then it is possible that ’t = 00 for CXo E D so that the above proof can not give us a strict inequality in (2.7).
However it would be interesting to know whether or not the strict maximum principle holds in the case where the maximum expected life-time in D is known to be finite but D has infinite area. -Vol. 29, n° 2-1993.
3. THE BEST CONSTANT
In this section we show that 1/π is the best constant in the lifetime inequality when the terminal point lies on the boundary, or, more generally, when the conditioning function is positive harmonic. The inequality continues to hold with the same constant even when the terminal point lies in the interior. The more difficult proof of that fact is deferred to section 5. The precise result to be proved here is the following.
THEOREM 3.1. - Let D be a nonempty simply connected plane domain of finite area, α ∈ D, and h > 0 harmonic on D. Then E^h_α τ_D < (1/π) m(D), (3.1) and the constant 1/π is best possible.
It suffices to prove (3.1) when h is minimal, say h = K_β, β ∈ ∂_M D.
Moreover, by the results of Section 2 we may assume Recall (see Corollary 2. 2 and Remark 2. 3) that if ç lies on the hyperbolic geodesic joining a and P, and if K" and K~ are normalized so that K" (~) == K~) = 1, then If O is the unique 1-1 conformal map of the unit disc B onto D such that c~ ( -1 ) = cx, d~ (0) =~, and 0 ( 1 ) = P, then The unit disc here may be replaced by any other model simply connected domain, . and it is more convenient for our purposes in this section to replace it by the infinite strip An easy computation using e. g., the fact that log ( 20142014) maps B to S, shows that 1 - z Annales de l’lnstitut Henri Poincare - Probabilités et Statistiques 237 CONDITIONED BROWNIAN MOTION where ’P maps S 1-1 and conformally onto D with 03A8 ( - oo) = a, W (0) = ç, and (This formula has been noted independently by R. Banuelos and T. Carrol .) By the half-angle formula, where The proof of (3 . 1) thus reduces to showing that the second term on the right-hand side of (3 . 3) is strictly negative, which, in turn, is an immediate consequence of the following two results.
LEMMA 3. 2. - Let f be a strictly increasing real-valued function and g a strictly decreasing function on a finite interval [a, b]. Then LEMMA 3. 3. - The function H defined in (3 . 4) above is strictly increasing To complete the proof of (3.1), apply Lemma 3.2 with [a, b] = 0,- , f=H, and g(y)=cos(2y).. Proof of Lemma 3 . 2. - Let 1=20142014 f(y)dy and choose such that f~I on [a, y] andf>I on (y, b]. Let/=/-!. Then We should remark that this result is a special case of an inequality of Chebyshev. See, e. g., Theorem 43 (2.17.1) in .
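The displayed inequality of Lemma 3.2 is not reproduced above, but the classical Chebyshev integral inequality that the remark refers to (Theorem 43 in Hardy-Littlewood-Pólya) gives the form it specializes; this is offered only as context, not as the authors' exact statement.

```latex
% Chebyshev's integral inequality for oppositely monotone functions on [a, b]:
\int_a^b f(y)\,g(y)\,dy \;\le\; \frac{1}{b-a}\left(\int_a^b f(y)\,dy\right)\left(\int_a^b g(y)\,dy\right),
% with strict inequality when f is strictly increasing and g strictly decreasing,
% which is the situation in Lemma 3.2.
```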
Proof of Lemma 3. 3. - We begin by showing that H is bounded on [0, A] for each 0A03C0 2. Fix B such that AB03C0 2. Since Vol. 29, n° 2-1993.
238 P. S. GRIFFIN, T. R. McCONNELL AND G. VERCHOTA it follows from Fubini’s Theorem that we may further assume B is chosen so that H (B) 00. By similar reasoning applied to integrals over vertical line segments we may choose Xn -+ oo and a constant C independent of n such that Let z0=x0+iy0 satisfy and let rn denote the positively oriented boundary of the rectangle [- x~, xn] X [ - B, B]. By the Cauchy integral formula we have for xn> 1 Xo I It follows from the Schwarz inequality and (3. 5) that the contribution to this integral from the vertical ends of rn vanishes in the limit as n - oo .
Thus we obtain the representation valid for all z~ = xo + iyo satisfying A.
Since H (B) 00, one may apply results from the theory of HP-spaces with p = 2 (e. g., the corollary on p. 172 of ) to each of the two integrals above to conclude that Next, by a well-known Paley-Wiener Theorem (see, e. g. , p. 174) is the Fourier transform of a nonzero function cp such that ey|03BB|03C6(03BB)~L2(R) for each |y|03C0 2. Thus, by Plancherel’s Theorem the function H (y) is a positive multiple of cosh(2y03BB)|03C6(03BB)|2d03BB, which is clearly strictly increasing in y..
To show that 1 is best-possible, let Dp denote the image of B under the 7T 0pl. Then a direct, but lengthy, com-putation shows that .
Annales de l’Institut Henri Poincaré - Probabilités et Statistiques 239 CONDITIONED BROWNIAN MOTION Alternatively, one may apply Theorem 3.4 below (see esp. example 3.6) but perhaps the easiest way to see this is by means of a probabilistic argument, which we sketch at the end of this section.
Since strict inequality always holds in (3 . 1) there are no extremal domains, but domains such as DP above for p close to 1 may be considered "near extremal". The following result shows that there are many near extremal domains and provides some additional insight into how the geometry of the domain influences the expected lifetime.
THEOREM 3 4. - Let D be a bounded convex domain which is symmetric with respect to one of its diameters. Let R = R (D) denote the supremum of radii of all open discs contained in D, P = P (D) the perimeter, and 0394 = 0394 (D) the diameter of D. Then there exist points cx E aD and 03B2 E aD such that Remark 3 . 5. - J. Xu [1 5] has shown that there is a positive constant y so that for all convex plane domains.
Proof of Theorem 3 4. - We may assume that D is symmetric with respect to the x-axis and that there are points a and 03B2 on the x-axis so that [a, P] is a diameter of D. For technical reasons it is convenient to replace D with a slightly smaller convex domain having smooth boundary.
Thus, let D be a 1-1 conformal map of B onto D such D(1)==P. Let for 0pl, and let Dp be the image of B under Op. Then Dp is also convex by Study’s Theorem (see, e. g. , p. 224) and symmetric with respect to the x-axis.
- . Note that ’Pp maps S conformally onto Dp, is real on the x-axis and maps the upper boundary of S to the upper boundary of Dp. It is easy to check that Dp has a smooth boundary and also that is analytic in an open set containing the euclidean closure of S, (3 . 7) ) lP§ I is bounded on the euclidean closure of S, (3 . 8) Vol. 29, n° 2-1993.
240 P. S. GRIFFIN, T. R. McCONNELL AND G. VERCHOTA and Since the left-hand side of (3 .10) equals Ea i and the right-hand side represents the limit of the analogous quantities for the Dp, it is sufficient to prove (3. 6) for the domains DP, with the quantity A (Dp) replaced by the length of the intersection of Dp with the x-axis. To simplify the notation, we shall assume for the remainder of the proof that D = Dp for some 0 p 1, and drop the "p" throughout.
Our method now is to find a lower bound for the second term on the right-hand side of (3 . 3), i. e., Let u, v real, and note that, by (3 . 7) and the Cauchy-Riemann equations, Here all expressions are evaluated at O~~~Tr/2, and K is the curvature of the curve ~ -+ ’P (t, y). Thus, since D is convex we have ~ ~y(|03A8’|2~0 for y=03C0 2; since ’P maps the x-axis to the x-axis we have ~(|03A8’|2)=0 for y=0. On the other hand, ly =Re [.q"J 1- so that the =2K ~ is 12 2 ’ ’ harmonic on S, and bounded on S by (3.8). By the maximum principle we may conlude for 03C0 2, -~x~.
Recalling that the expression in (3.11) is negative and integrating by parts on y, we have Annales de l’lnstitut Henri Poincaré - Probabilités et Statistiques 241 CONDITIONED BROWNIAN MOTION By Koebe’s distortion theorem (See e. g., , p. 147, where the result is stated for the unit disc. The version we use is easily obtained by conformal mapping.) Thus, which completes the proof..
Example 3.6. - Take for D the rectangle C- ~ ~ 21 x C 2’ 21’ Then by the proof of Theorem 3 . 4 ~ ~ ~ ~ We conclude this section by sketching a simple probabilistic argument to show that long thin rectangles are near-extremal. For given E > 0 let Dn denote the rectangle [ - En, n] [-03C0 2,03C02]. Let h (z) = ex cos y, so that h is positive harmonic on Dn. Finally, let tn = We will show that E© tn ’" n as n - ~, hence E© Dn 1-+ 1 as n - 00.
n(l+E) The h-process Zt = Yt) started from 0 satisfies the stochastic differen-tial equation with Zo=0. Here Wt is a standard 1R2-valued Brownian motion started from 0. Let 03B2t denote the x-component of Wt. Since the drift is given by V/x ’ 2014(z)=(l, - tan y), we have that and moreover, , h ( ) ( ~ .v)~ t at > > (Note that Zt cannot leave Dn through the top or bottom boundaries since h vanishes there.) Since both points - E n and n are accessible for the 1-dimensional diffusion fl~ + t, it follows that oo . Thus, by Wald’s Vol. 29, n° 2-1993.
242 P. S. GRIFFIN, T. R. McCONNELL AND G. VERCHOTA identity, E ~3~n = o. We conclude that The desired result follows if we can show pð = n) - 1 as n ~ ~. One way to see this is to note that -+ 0, a. s., as t - 00 by the strong law, hence ~3t + t has sample paths which are almost surely bounded below.
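The probabilistic argument sketched here (the h-process in the rectangle D_n = [−εn, n] × [−π/2, π/2] with h(z) = e^x cos y has drift (1, −tan y), and its expected lifetime grows like n) can also be explored numerically. Below is a rough Euler-Maruyama simulation; the step size, the clamp keeping y away from ±π/2, and all parameter values are numerical conveniences of mine, not part of the paper's argument.

```python
# Rough Euler-Maruyama sketch of the h-process in the rectangle
# [-eps*n, n] x (-pi/2, pi/2) with drift (1, -tan y); all numerical
# parameters (dt, the clamp, n, sample size) are illustrative choices.
import math
import random

def lifetime(n, eps=0.1, dt=0.01):
    x, y, t = 0.0, 0.0, 0.0
    bound = math.pi / 2 - 1e-3          # keep y strictly inside (-pi/2, pi/2)
    sdt = math.sqrt(dt)
    while -eps * n < x < n:
        x += dt + sdt * random.gauss(0.0, 1.0)
        y += -math.tan(y) * dt + sdt * random.gauss(0.0, 1.0)
        y = max(-bound, min(bound, y))
        t += dt
    return t

random.seed(1)
n = 20
samples = [lifetime(n) for _ in range(200)]
print("n =", n, " mean lifetime ~", round(sum(samples) / len(samples), 2))  # of order n
```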
4. THE DISC As we have already indicated, the extremal domains in the best constant problem are long thin rectangles. The opposite extreme to these domains is the disc. We will explicitly compute the best constant in this case. By the maximum principle of section 2, the worst case occurs on the boundary, i. e. Brownian motion conditioned to go from one boundary point to another. The first step is to show that the boundary points are diametri-cally opposite. We will do this probabilistically, by a coupling argument.
We begin by recalling some basic facts about h-processes. Let D 1 and D2 be two domains and C a 1-1 conformal map of D 1 onto D~. We will write where z = x + iy and w = u + iv. If h2 is a positive harmonic function in D2, then is a positive harmonic function in Di. If Zt is an hi-process in D 1 then 03A6(Zt) is a time change of an h2-process in D2; to be precise, there exists an h2-process Wt such that As an immediate consequence, we have that If we apply this with D1=B, D2 = ~ w : u > 0 ~, h2 (w) = u and ~ (z) _ ( 1 + z) ( 1- z) -1, we see that Furthermore the h2-process W = (U, V) is very simple; U is a Bessel process of index 3 and V is an independent 1-dimensional Brownian motion. If we let be the inverse of ~, and apply the above in reverse, we can conclude that where H is the half space M>0. and ’tH = 00 a. s., to show that iB is maximized when Z starts at z = -1 is equivalent to showing Annales de l’Institut Henri Poincaré - Probabilités et Statistiques 243 CONDITIONED BROWNIAN MOTION is maximized when W starts at w = O. Since we already know the maximum occurs on the boundary, we only need consider starting points on the imaginary axis u = O.
PROPOSITION 4 . 1. - Let w~ = iv~, j =1, 2, be two points on the boundary of H such that I v21. Then on the same probability space, one can define two h2-processes W and W 2 starting at w l and w2 respectively, such that Proof - Let W1 = (Ul, be an h2-process started at w 1 defined on some probability space. Let Define W~=(U~, V2) by U2 = U1 and By the reflection principle V; is a Brownian motion starting at v2 indepen-dent of U2. Thus W2 is an h2-process started at w~ and by construction V; = V; and I V; I ~ for all t. Since it follows that for all s, from which the result is immediate..
As a consequence of the previous result, by the Poisson representation of positive harmonic functions in B, the best possible constant c_B for the unit disk is (1/π) E^1_{−1} τ, which we now compute.
PROPOSITION 4.2. - E^1_{−1} τ = 4 log 2 − 2 ≈ .7726.
Proof. - Recall that by (3 . 2) Among convex domains the disc is at the opposite extreme to the long thin rectangles, i. e., it minimizes the perimeter for a given area. Thus it seems reasonable to ask Open Question. - Amongst all convex domains D of area x, is minimized when D = B?
If we remove the convexity assumption then the result is false. Indeed by an example of Xu [ 15] there are simply connected domains of infinite area having sup E~ r as small as desired.
a, fi e D It is interesting to note that in the case of unconditioned Brownian motion (i. e., A= 1), the disc is in fact the worst case, i. e., and in the simply connected case, equality holds iff D is a disc. This is a consequence of classical isoperimetric theory which says that the distribu-tion function of the Green function is pointwise maximized over domains of equal area only in the ball; see .
5. THE SUPERHARMONIC CASE In this section we wish to establish (3 .1 ) in simply connected domains with finite area in the plane when h is superharmonic with nonnegative Annales de l’Institut Henri Poincaré - Probabilités et Statistiques 245 CONDITIONED BROWNIAN MOTION boundary values. See . To do this it suffices by conformal mapping and the Riesz Representation Theorem to prove the following result.
THEOREM 5.1. - Let F not identically zero be holomorphic in the unit disc B with F e L2 (B). Then for all b e B Our proof is based on a lemma that has a very close relationship with Carleman’s generalization of the isoperimetric inequality . Denote rd 9 by dc and LEMMA 5.2. - Let F1, ..., Fn+l’ n =1, 2, ..., not identically zero be holomorphic in B with F1F2...Fn+1 ~ L2(B). Then Proof. - Let Fj(z)= 03A3 a(j)k zk, 1 - j - n + 1. The square of the modulus k=0 of the jo-th coefficient of F 1 (z)... (z) is by the Schwarz inequality.
Let I denote the left side of (5 . 2). Integrating in 8 and using (5.3) Vol. 29, n° 2-1993.
246 P. S. GRIFFIN, T. R. McCONNELL AND G. VERCHOTA By an induction on n > 1, for all J > O. Thus Proof of Theorem 5 . 1. - It suffices to take 0 b 1. Since it follows that Thus Apply the lemma for each ~1 with F~=F~= ...F~+i= 201420142014.
Using the harmonicity of -2014’2014L m B ~l-&z~ Thus (5.4) is less than Annales de l’Institut Henri Poincaré - Probabilités et Statistiques 247 CONDITIONED BROWNIAN MOTION Now use the identity valid for all 0~1. N Remark 5. 3. - An alternative approach to Theorem 5. 1 would be to obtain a maximum principle similar to the one in section 2 but with both points lying in the interior. Unfortunately, we have been unable to prove this.
Remark 5.4. - The proof of Lemma 5.2 is a generalization of Carle-man’s proof when n =1 and the log and the weights (1- z|2)n are not present. Many different lemmas of this type may be stated. This one is devised for Theorem 5.1. In Beckenbach and Rado generalized Carleman’s result to deal with integrands that are logarithmically sub-harmonic.
Carleman’s result can be used to give an alternative proof of Theo-rem 3 .1, which we very briefly sketch below leaving the details to the interested reader. Using Carleman’s isoperimetric inequality, the Cauchy-Schwarz inequality, and the coarea formula, we have The second inequality is strict except when F is a constant multiple of 201420142014, i, e., the derivative of the conformal map taking B to an infinite strip. This is consistent with the observation that long thin domains are extremal for the expected lifetime. Furthermore in this case is the Jacobian of a linear fractional transformation. The isoperimetric inequality is then also sharp since the t ~ are discs.
Vol. 29, n° 2-1993.
248 P. S. GRIFFIN, T. R. McCONNELL AND G. VERCHOTA Remark 5. 5. - A simple computation shows that K -1 is logarithmi-cally harmonic. Hence, by conformal mapping, given any two kernel functions K°‘, KP for a simply connected domain D c R2, is loga-rithmically harmonic. Thus for all zeD. More generally, given any kernel function K and any positive harmonic function h for D, h/K is logarithmically subharmonic. This is because when transferred to the disc, by the Poisson integral representa-tion, h/K has the representation I(z)= f" |F03B8(z)|2 d (03B8) where Fo (z) is holomorphic in B for each 8 and dJl is a positive Borel measure. Taking the log and then applying the Laplacian yields which is positive by the Schwarz inequality. As a consequence Given an h-process the quantity ~h is called the drft. We have thus h proved the following.
PROPOSITION 5. 6. - In a simply connected domain D c [R2 the magnitude of the drift at a point z e D is maximized over all positive harmonic functions h in D precisely by the kernel functions, and this maximum is attained by the drift associated with every kernel function for D.
Remark 5. 7. - The quantity G03B1(z)G03B2(z) G03B1(03B2) dm (z) can be obtained as the limit of quantities D G« ( ~) where BE is a small disc shrinking to a point and 03B2 e aBE. Thus 1 /03C0 is the best constant in the lifetime inequality with terminal point on the boundary in a doubly connected domain when the hole is small enough. Saying anything more than this about the best constant in multiply connected domains’ is an open problem.
Annales de l’lnstitut Henri Poincaré - Probabilités et Statistiques 249 CONDITIONED BROWNIAN MOTION REFERENCES C. BANDLE, Isoperimetric inequalities and applications, Pitman Publishing, Boston, 1980.
R. BAÑUELOS, On the estimate of Cranston and McConnell for elliptic diffusions in uniform domains, Prob. Th. Rel. Fields, Vol. 76, 1987, pp. 311-323.
R. BAÑUELOS and T. CARROL, Conditional Brownian motion and hyperbolic geodesics in simply connected domains, (Preprint).
E. F. BECKENBACH and T. RADO, Subharmonic functions and surfaces of negative curvature, Trans. Amer. Math. Soc., Vol. 35, 1933, pp. 662-674.
T. CARLEMAN, Zur Theorie der Minimalflächen, Math Z., Vol. 9, 1921, pp. 154-160.
M. CRANSTON and T. R. MCCONNELL, The lifetime of conditioned Brownian motion, Z. Warsch. Verw. Gebiete, Vol. 65, 1983, pp. 1-11.
J. L. DOOB, Classical Potential Theory and its Probabilistic Counterpart, Springer, New York, 1984.
G. HARDY, J. E. LITTLEWOOD and G. PÓLYA, Inequalities, 2nd Ed., Cambridge University Press, Cambridge, 1952.
Y. KATZNELSON, An Introduction to Harmonic Analysis, 2nd Ed., Dover, New York, 1976.
P. KOOSIS, Introduction to Hp spaces, Cambridge University Press, Cambridge, 1980.
T. R. MCCONNELL, A conformal inequality related to the conditional gauge theorem, Trans. Amer. Math. Soc., Vol. 318, 1990, pp. 721-733.
S. SAKS and A. ZYGMUND, Analytic Functions, 3rd Ed., American Elsevier Publishing Co., New York, 1971.
T. SALISBURY, A Martin boundary in the plane, Trans. Amer. Math. Soc., Vol. 293, 1986, pp. 623-642.
W. A. VEECH, A Second Course in Complex Analysis, W. A. BENJAMIN Ed., Inc., New York, 1967.
J. Xu, The lifetime of conditioned Brownian motion in planar domains of finite area, Prob. Theory Related Fields, Vol. 87, 1991, pp. 469-487.
(Manuscript received November 22, 1991; revised September 9, 1992.)
|
6
|
HAL Id: hal-00643780
Preprint submitted on 22 Nov 2011 (v1), last revised 15 Feb 2012 (v2)
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Existence of the harmonic measure for random walks on graphs and in random environments
Daniel Boivin, Clément Rau
To cite this version:
Daniel Boivin, Clément Rau. Existence of the harmonic measure for random walks on graphs and in random environments. 2011. hal-00643780v1
EXISTENCE OF THE HARMONIC MEASURE FOR RANDOM WALKS ON GRAPHS AND IN RANDOM ENVIRONMENTS
DANIEL BOIVIN AND CLÉMENT RAU
Abstract. We give a sufficient condition for the existence of the harmonic measure from infinity of transient random walks on weighted graphs. In particular, this condition is verified by the random conductance model on Zd,d≥3, when the conductances are i.i.d. and the bonds with positive conductance percolate. The harmonic measure from infinity also exists for random walks on supercritical clusters of Z2. This is proved using results of Barlow (2004).
Keywords: Harmonic measure, supercritical percolation clusters, Harnack inequality, random conductance model.
Subject Classification: 60J05, 60K35, 60K37
Introduction and results
The harmonic measure from infinity of a closed subset A of Rd, d ≥ 2, is the hitting distribution of the set A by a d-dimensional Brownian motion started at infinity. A detailed description of this measure is given by M¨ orters and Peres in [26, section 3.4]. Similarly, given a Markov chain on an infinite graph, the harmonic measure of a finite subset of the graph is defined as the hitting distribution of the set by the Markov chain starting at infinity. The existence of the harmonic measure for the simple symmetric random walk on Zd is shown by Lawler in [22, chapter 2] and it is extended to a wider class of random walks on Zd by Lawler and Limic in [21, section 6.5]. From these results, one might expect that the existence of the harmonic measure for a Markov chain on Zd, d ≥ 2, relies on its Green function asymptotics. The goal of this paper is to show that actually, the existence of the harmonic measure is a fairly robust result in the sense that for a random walk on a weighted graph, it holds as soon as there is a weak form of Harnack inequality. In particular, it is verified by a large family of fractal-like graphs and by random conductance models on Zd, d ≥ 3, given by a sequence of i.i.d. conductances as soon as there is percolation of the positive conductances. This is done using recent estimates of . In Z2, although we do not give a general sufficient condition for recurrent graphs, we show the existence of the harmonic measure for the random walk on the supercritical cluster using some estimates of Barlow . The results of for the random conductance model are part of a long series of works which go back to homogenization of divergence form elliptic operators with random coefficients and to the investigation of the properties of the supercritical percolation cluster. Some highlights of the properties of the random walk on the supercritical percolation cluster of
Zd is the proof of the Liouville property for bounded harmonic functions (see Kaimanovich
and ) and the proof of the transience of the walk when d ≥ 3 by Grimmett, Kesten and Zhang . In , Barlow proved upper and lower gaussian estimates for the probability transitions of a random walk on the supercritical percolation cluster. These are then used to prove a Harnack inequality [6, Theorem 3]. The Liouville property for positive harmonic functions on the percolation cluster follows, as well as an estimate of the mean-square displacement of the walk. Barlow's upper gaussian estimates were also used to prove the invariance principle for the random walk on supercritical percolation clusters by , , . The invariance principle for the random walk on Zd with independent conductances that are bounded below is proved in . In this paper, we show how to prove the existence of the harmonic measure from the Green function estimates of Andres, Barlow, Deuschel and Hambly [3, Theorem 1.2]. In the case of the two-dimensional percolation cluster, we use the Harnack inequality of . Whenever the harmonic measure from infinity exists, one can study external diffusion-limited aggregates. The shape of the internal diffusion-limited aggregates of random walks on percolation clusters is described in and . The harmonic measure is of interest to physicists as it can be expressed as the normal derivative of an electrical potential on the surface. Recent simulations of the harmonic measure in Zd can be found in and on percolation and Ising clusters in . Analytic predictions for the harmonic measure of two-dimensional clusters are given by Duplantier in and . The values of the constants c, C, . . . may change at each appearance but they are always strictly positive and they do not depend on the environment. The minimum of a and b and the maximum of a and b are respectively denoted by a ∧ b and by a ∨ b.
1.1. Reversible random walks. A weighted graph (Γ, a) is given by a countably infinite set Γ and a symmetric function
a : Γ × Γ → [0; ∞[ which verifies a(x, y) = a(y, x) for all x, y ∈ Γ and
π(x) := ∑_{y∈Γ} a(x, y) > 0 for all x ∈ Γ.
The weight a(x, y) is also called a conductance since it can be interpreted as the electrical or thermal conductance of the edge connecting x and y. Given a weighted graph (Γ, a), we will write x ∼ y if a(x, y) > 0. We will always assume that (Γ, ∼) is an infinite, locally finite countable graph without multiple edges. We will say that the weighted graph (Γ, a) is connected if (Γ, ∼) is a connected graph, that is, for all x, y ∈ Γ there is a sequence x0, x1, . . . , xn such that x0 = x, xn = y and x_{i−1} ∼ x_i for all 1 ≤ i ≤ n. The graph distance between two vertices x, y ∈ Γ will be denoted by D(x, y). It is the minimal number of edges in a path from x to y in the graph (Γ, ∼). The ball centered at x ∈ Γ of radius R will be denoted by B(x, R) := {y ∈ Γ; D(x, y) < R}. The random walk on the weighted graph (Γ, a) is the Markov chain on Γ with transition probabilities given by
p(x, y) := a(x, y)/π(x), x, y ∈ Γ. (1.1)
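To make (1.1) concrete, here is a small illustrative sketch (in Python, my own choice of language) of one step of the walk on a toy weighted graph; the graph, conductances and vertex labels are invented for the example and are not objects from the paper.

```python
# Illustrative sketch (not from the paper): the random walk defined by (1.1)
# on a toy weighted graph with invented vertex labels and conductances.
import random
from collections import defaultdict

conductance = {("A", "B"): 2.0, ("B", "C"): 1.0, ("A", "C"): 0.5}

a = defaultdict(float)                     # symmetric weights a(x, y) = a(y, x)
for (x, y), w in conductance.items():
    a[(x, y)] = w
    a[(y, x)] = w

neighbours = defaultdict(list)
for (x, y), w in list(a.items()):
    if w > 0:
        neighbours[x].append(y)

def pi(x):
    """pi(x) = sum_y a(x, y), the reversible measure up to a constant."""
    return sum(a[(x, y)] for y in neighbours[x])

def step(x):
    """Move from x to y with probability p(x, y) = a(x, y) / pi(x)."""
    ys = neighbours[x]
    return random.choices(ys, weights=[a[(x, y)] for y in ys], k=1)[0]

state, path = "A", ["A"]
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)
```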
We denote by Px the law of the random walk starting at the vertex x ∈ Γ. The corresponding ex-pectation is denoted by Ex. The random walk admits reversible measures which are proportional to the measure π(·). For A ⊂ Γ, we have the following definitions
∂A := {y ∈ Γ; y ∉ A and there is x ∈ A with x ∼ y} and Ā := ∂A ∪ A,
τ_A := inf{k ≥ 1; X_k ∈ A} and τ̄_A := inf{k ≥ 0; X_k ∈ A}
with the convention that inf ∅ = ∞,
D(x, A) = inf{D(x, y); y ∈ A}, and for u : Ā → ℝ the Laplacian is defined by
Lu(x) := ∑_{y∼x} p(x, y)(u(y) − u(x)), x ∈ A.
A function u : A → R is harmonic in A if for all x ∈ A, ( Lu)( x) = 0. The Green function of the random walk is defined by
G(x, y) := ∑_{k=0}^{∞} p(x, y, k), x, y ∈ Γ, (1.2)
where p(x, y, k) := P_x(X_k = y) are the transition probabilities of the walk. Note that G(·, y) is harmonic in Γ \ {y}. For irreducible Markov chains, if G(x, y) < ∞ for some x, y ∈ Γ then G(x, y) < ∞ for all x, y ∈ Γ. The random walk is recurrent if G(x, y) = ∞ for some x, y ∈ Γ; otherwise we say that the walk is transient.
1.2. Results on the existence of the harmonic measure. Let X = (X_j) be a random walk on a connected weighted graph (Γ, a). The hitting probability of a set A starting from x ∈ Γ, for y ∈ A, is given by:
HA(x, y ) = Px(XτA = y).
If Px(τA < +∞) > 0 , we can also define :
HA(x, y ) = Px(XτA = y|τA < +∞).
The harmonic measure on a finite subset A of Γ is the hitting distribution from infinity, if it exists,
H_A(y) = lim_{D(x,A)→∞} H_A(x, y), y ∈ A. (1.3)
Our goal is to prove the existence of the harmonic measure for all finite subsets of various weighted graphs. The proof of the existence of the harmonic measure given in [21, section 6.5] for random walks on Zd relies on a Harnack inequality and on Green function estimates. Actually, it turns out that only a weak form of Harnack inequality is needed. In Theorem 1.2, we show that a weak Harnack inequality is a sufficient condition for the existence of the harmonic measure on a transient graph. Moreover, weak estimates of the Green function imply the weak Harnack inequality.
As it happens for Brownian motion and for the simple random walks (see for instance , ), the harmonic measure can be expressed in terms of capacities. The capacity of A with respect to B, for A ⊂ B ⊂ Γ, is defined by
Cap_B(A) = ∑_{x∈A} π(x) P_x(τ_{B^c} < τ_A).
The escape probability of a set A is defined by Es_A(x) := P_x(τ_A = ∞) and the capacity of a finite subset A ⊂ Γ is defined by
Cap(A) = ∑_{x∈A} π(x) Es_A(x).
The main result for transient graphs is the existence of the harmonic measure for random walks which verify the following weak form of the Harnack inequality.
Definition 1.1. We say that a weighted graph (Γ, a) satisfies wH(C), the weak Harnack inequality, if there is a constant C ≥ 1 such that for all x ∈ Γ and for all R > 0 there is R′ = R′(x, R) such that for any positive harmonic function u on B(x, R′),
max_{B(x,R)} u ≤ C min_{B(x,R)} u.
Theorem 1.2. Let (Γ , a ) be a weighted graph. If (Γ , a ) is connected, transient and if it verifies the weak Harnack inequality wH( C)
then for any finite subset A ⊂ Γ the harmonic measure on A exists. That is, for all y ∈ A, the limit (1.3) exists. Moreover, we have:
lim_{D(x,A)→∞} H_A(x, y) = lim_{m→+∞} H^m_A(y),
where, for m large enough,
H^m_A(y) = π(y) P_y(τ_A > τ_{∂B(x0,m)}) / Cap_m(A),
where Cap_m(A) is the capacity of A with respect to B(x0, m) for some x0 ∈ Γ. The limit does not depend on the choice of x0.
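A quick empirical illustration of (1.3): for the simple random walk on Z^2 (all conductances equal to 1, the uniformly elliptic recurrent case of Theorem 1.9 rather than the transient setting of Theorem 1.2), one can estimate H_A(x, y) by starting far from A and recording where A is first hit; the truncation box below plays the role of ∂B(x0, m). The target set, radii and sample size are illustrative choices of mine.

```python
# Illustrative Monte Carlo estimate of the hitting distribution H_A(start, .)
# for the simple random walk on Z^2 (all conductances equal to 1).
import random
from collections import Counter

A = {(0, 0), (1, 0)}                      # finite target set (illustrative)
start = (15, 0)                           # "far" starting point
box = 100                                 # truncation: give up if the walk wanders this far
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def run_once():
    x, y = start
    while abs(x) <= box and abs(y) <= box:
        if (x, y) in A:
            return (x, y)
        dx, dy = random.choice(steps)
        x, y = x + dx, y + dy
    return None                           # escaped the box before hitting A

hits = Counter()
for _ in range(500):
    z = run_once()
    if z is not None:
        hits[z] += 1

total = sum(hits.values())
if total:
    for z, n in sorted(hits.items()):
        print(z, round(n / total, 3))     # empirical estimate of H_A(start, z)
```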
The following Green function estimates imply the weak Harnack inequality.
Definition 1.3. We say that a weighted graph (Γ , a ) satisfies the Green function estimate GE γ
for γ > 0 if there are constants 0 < C i ≤ Cs < ∞ and if for all z ∈ Γ, there exists Rz < ∞ such that for all x, y ∈ Γ with D(x, y ) ≥ Rx ∧ Ry we have:
C_i / D(x, y)^γ ≤ G(x, y) ≤ C_s / D(x, y)^γ. (GE_γ)
This condition is a weak version of [30, Definition 1] where γ is called a Greenian index. It is used by Telcs to give an upper bound for the probability transitions of a Markov chain in terms of the growth rate of the volume and of the Greenian index.
Proposition 1.4. Let (Γ, a) be a weighted graph which verifies (GE_γ) for some γ > 0. Then the graph is connected, transient and wH(C) holds with C = 2^γ C_s / C_i.
In the following corollaries, we describe some weighted graphs where the harmonic measure from infinity exists. A weighted graph (Γ , a ) is said to be uniformly elliptic if there is a constant c ≥ 1 such that for all edges e,
c−1 ≤ a(e) ≤ c. (1.4)
Corollary 1.5. Let (Zd, a ), d ≥ 3, be a uniformly elliptic graph. Then for all finite subsets A of Zd and for all y ∈ A, the limit (1.3) exists. Moreover, we have:
lim_{|x|→+∞} H_A(x, y) = lim_{m→+∞} H^m_A(y),
where H^m_A(y) = π(y) P_y(τ_A > τ_{∂B(0,m)}) / Cap_m(A).
Indeed, by [13, Proposition 4.2] the Green function of a uniformly elliptic graph (Zd, a), d ≥ 3, verifies the estimates (GE_γ) with γ = d − 2. The existence of the harmonic measure then follows from Proposition 1.4 and Theorem 1.2. The harmonic measure also exists for a large class of fractal-like graphs with some regularity properties. Some examples are given in . See also [31, section 1.1] and the references therein. The volume of a ball B(x, R) is defined by V(x, R) := ∑_{y∈B(x,R)} π(y) and the mean exit time from the ball is E(x, R) := E_x(σ_R) where σ_R := inf{k ≥ 0; X_k ∉ B(x, R)}. A weighted graph (Γ, a) has polynomial volume growth with exponent α > 0 if there is a constant c ≥ 1 such that for all x ∈ Γ and for all R ≥ 1,
c^{−1} R^α ≤ V(x, R) ≤ c R^α. (V_α)
A weighted graph (Γ, a) has polynomial mean exit time with exponent β > 0 if there is a constant c ≥ 1 such that for all x ∈ Γ and for all R ≥ 1,
c^{−1} R^β ≤ E(x, R) ≤ c R^β. (E_β)
A weighted graph (Γ, a) verifies the condition (p0) if there is a constant p0 > 0 such that for all
x ∼ y,
p(x, y) ≥ p0. (p0)
Note that under (p0), if the graph verifies the elliptic Harnack inequality with a shrinking parameter M > 1 (see Definition 2.1) then it verifies the elliptic Harnack inequality with any shrinking parameter M > 1. See [31, Proposition 3.5] for instance. Barlow [9, Theorem 2] proved that for α ≥ 1 and for 2 ≤ β ≤ 1 + α, there is a weighted graph with polynomial growth with exponent α, with polynomial mean exit time with exponent β and which satisfies the elliptic Harnack inequality with shrinking parameter M = 2. Moreover, if β ≥ 2 and if the graph verifies (p0) then from [17, Theorem 3.1], the graph verifies the so-called β-Gaussian estimates and consequently, for β < α, (GE_γ) holds with γ = α − β. These results are summarized in the corollary below.
Corollary 1.6. Let (Γ , a ) be a weighted graph. If (Γ , a ) verifies ( p0), ( Vα), ( Eβ ) for α > β ≥ 2
and the elliptic Harnack inequality H(C) then for all finite subsets A ⊂ Γ and y ∈ A the limit (1.3) exists.
The harmonic measure from infinity also exists for random walks in random environment and in particular for the random walk on the supercritical percolation cluster. Before stating this result, we give a brief description of the percolation model. See for more details. Consider the lattice Zd, d ≥ 2, where x ∼ y if |x − y|1 = 1 where | · | 1 is the ℓ1-distance. Denote the set of edges by Ed.Assume that ( a(e); e ∈ Ed) are i.i.d. non-negative random variables on a probability space (Ω , P). Call a bond e open if a(e) > 0 and closed if a(e) = 0. Let p = P(a(e) > 0). By percolation theory, there exists a critical value pc = pc(Zd) ∈]0; 1[ such that for p < p c, P almost surely, all open clusters of ω are finite and for p > p c, P almost surely, there is a unique infinite cluster of open edges which is called the supercritical cluster. It will be denoted by C∞ = C∞(ω). The edges of this graph are the open edges of the cluster and the end points of these edges are the vertices of the graph. For x, y ∈ C ∞(ω), we will write x ∼ y if the edge with endpoints x and y is open. The transition probabilities of the random walk on C∞(ω) are given by (1.1). The law of the paths starting at
x ∈ C ∞(ω) will be denoted by P ωx . The random walk on the supercritical percolation cluster corresponds to the case of Bernoulli random variables. In this case, we will write Pp instead of
P. For x, y ∈ Zd, we will write x ↔ y if there is an open path joining x and y.
Dω (x, y ) will denote the graph distance between x and y in the graph C∞(ω) and the ball cen-tered at x ∈ C ∞(ω) of radius R will be denoted by Bω (x, R ) = {y ∈ C ∞(ω); Dω (x, y ) < R }.The existence of the harmonic measure for i.i.d. conductances on Zd, d ≥ 3, is given in corollary 1.7 below. It follows from the Green function estimates of [3, Theorem 1.2a]. A weaker condition which might hold even if the conductances are not i.i.d. is given in [7, Theorem 6.1].
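For readers who want to experiment with the random conductance model just described, here is an illustrative sketch (the box size, conductance law and threshold p are my own choices, and the box is two-dimensional purely for speed): it draws i.i.d. conductances on the edges of a finite box and extracts the open cluster of the origin by breadth-first search.

```python
# Illustrative sketch of the random conductance model on a finite box of Z^2:
# draw i.i.d. conductances, keep the open edges, and extract the open cluster
# of the origin by breadth-first search.
import random
from collections import deque

N, p = 40, 0.6                 # box {0,...,N-1}^2; p > p_c(Z^2) = 1/2 for bond percolation
random.seed(0)

def sample_conductance():
    # a(e) = 0 with probability 1 - p, otherwise uniform on (0, 1)
    return random.random() if random.random() < p else 0.0

a = {}
for x in range(N):
    for y in range(N):
        if x + 1 < N:
            a[((x, y), (x + 1, y))] = sample_conductance()
        if y + 1 < N:
            a[((x, y), (x, y + 1))] = sample_conductance()

def open_neighbours(v):
    x, y = v
    for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        e = (v, w) if (v, w) in a else (w, v)
        if a.get(e, 0.0) > 0.0:
            yield w

origin, cluster, queue = (0, 0), {(0, 0)}, deque([(0, 0)])
while queue:
    v = queue.popleft()
    for w in open_neighbours(v):
        if w not in cluster:
            cluster.add(w)
            queue.append(w)

print("size of the open cluster containing the origin:", len(cluster))
```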
Corollary 1.7. Let (Zd, a ), d ≥ 3, be a weighted graph where the weights (a(e); e ∈ Ed) are i.i.d. non-negative random variables on a probability space (Ω , P) which verify
P(a(e) > 0) > p c(Zd).
Then there exist constants Ci, Cs, which depend on P and d, and Ω1 ⊂ Ω with P(Ω 1) = 1 such that for each ω ∈ Ω1, (GE γ ) holds in C∞(ω) with the constants Ci and Cs and with γ = d − 2.For any finite subset A of C∞ and for all y ∈ A, the limit (1.3) exists. Moreover, we have:
lim_{|x|→+∞, x∈C∞} H_A(x, y) = lim_{m→+∞} H^m_A(y),
where H^m_A(y) = π(y) P^ω_y(τ_A > τ_{∂B_ω(x0,m)}) / Cap_m(A) for some x0 ∈ C∞ and for m large enough.
In , both the constant speed random walk and the variable speed random walk are considered. From the expression of their generators one immediately sees that they have the same harmonic functions as the discrete time random walk considered here. Moreover, since they are a time change of each other, the Green function is the same. Hence, by [3, Theorem 1.2 a] the Green function of a uniformly elliptic graph (Zd, a), d ≥ 3, verifies the estimates (GE_γ) with γ = d − 2. The existence of the harmonic measure then follows from Proposition 1.4 and Theorem 1.2. The harmonic measure from infinity also exists for recurrent graphs. The main result here is the existence of the harmonic measure for all finite subsets of two-dimensional supercritical percolation clusters.
Theorem 1.8. Let (Z2, a ) be a weighted graph where the weights (a(e); e ∈ E2) are i.i.d. random variables on a probability space (Ω , P) which verify
P(a(e) = 1) = 1 − P(a(e) = 0) and P(a(e) = 1) > p c(Z2).
Then P almost surely, for any finite subset A of C∞(ω) and for all y ∈ A, the limit (1.3) exists. Moreover, we have:
lim_{D(x,A)→∞} H_A(x, y) = −L u_A(y)
where uA is defined in equation (4.5).
Theorem 1.9. If (Z2, a ) is a uniformly elliptic weighted graph then for all finite subset A ⊂ Z2
and for all y ∈ A, the limit (1.3) exists.
Remark 1.10. Note that on a regular tree, the harmonic measure from infinity does not exist for any set A which contains at least two vertices. It would be interesting to investigate the links between the Poisson boundary of a graph and the existence of the harmonic measures. In particular, the triviality of the Poisson boundary does not imply the existence of the harmonic measure, as is shown by the lamplighter group Z2 ≀ Z/2Z. See and the references therein.
Various forms of Harnack inequality that will be used both for transient graphs and for recurrent graphs are gathered in section 2. The proof of theorem 1.2 is given in section 3 while theorem 1.8 is proved in section 4. The last section contains the proof of the annulus Harnack inequality that is used in the proof of the existence of the harmonic measure for the random walk on the supercritical cluster of Z2.
2. Harnack inequalities
We start by recalling a classical form of the Harnack inequality on a graph. Then we give related inequalities and weaker versions.
Definition 2.1. We say that a weighted graph (Γ , a ) satisfies H( C), the Harnack inequality with shrinking parameter M > 1, if there is a constant C < ∞ such that for all x ∈ Γ and R > 0,and for any positive harmonic function u on B(x, M R ),
max_{B(x,R)} u ≤ C min_{B(x,R)} u.
In our context, we will use the weak form of Harnack inequality given in definition 1.1. We rewrite this definition under a form similar to definition 2.1. The proofs will be given with these notations.
Definition 2.2. We say that a weighted graph (Γ , a ) satisfies wH( C), the weak Harnack in-equality, if there is a constant C > 0 such that for all x ∈ Γ and for all R > 0 there is Mx,R ≥ 2
such that for all M > M x,R and for any positive harmonic function u on B(x, M R ),
max_{B(x,R)} u ≤ C min_{B(x,R)} u.
Barlow [6, Theorem 3] showed that the supercritical percolation cluster verifies another form of Harnack inequality. However, by corollary 1.7 and proposition 1.4 below, the random walk on the supercritical percolation cluster also verifies wH(C). In Theorem 2.3 below, we give the Harnack inequality in a form that will be most useful to us. It is an immediate consequence of Theorem 5.11, Proposition 6.11 and of (0.5) of Barlow's work .
Theorem 2.3. Let d ≥ 2 and let p > p c(Zd). There exists c1 = c1(p, d ) and Ω1 ⊂ Ω with
P(Ω 1) = 1 , and R0(x, ω ) such that 3 ≤ R0(x, ω ) < ∞ for each ω ∈ Ω1, x ∈ C ∞(ω).
If R ≥ R0(x, ω) and if D(x, z) ≤ (1/3) R ln R and if u : B(z, R) → ℝ is positive and harmonic in B(z, R), then
max_{B(z,R/2)} u ≤ c1 min_{B(z,R/2)} u. (2.1)
Moreover, there are positive constants c2, c 3 and ε which depend on p and d such that the tail of
R0(x, ω ) satisfies
P_p(x ∈ C∞, R0(x, ·) ≥ n) ≤ c2 exp(−c3 n^ε). (2.2)
In the proof of Theorem 1.2, we will need the Hölder continuity of harmonic functions. It is a consequence of the weak Harnack inequality. Property wH(C) leads to the following lemma.
Lemma 2.4 (weak Hölder continuity). Let (Γ, a) be a weighted graph which verifies wH(C) with shrinking parameters (M_{x,R}; x ∈ Γ, R > 0) where M_{x,R} ≥ 2 for all x ∈ Γ and R > 0. Then there exist ν > 0, c > 0 such that for all x0 ∈ Γ, R > 0, M ≥ M_{x0,R} and for any positive harmonic function u on B(x0, MR) and x ∈ B(x0, R),
|u(x) − u(x0)| ≤ c (D(x0, x)/R)^ν max_{B(x0,MR)} u.
Proof. Let x0 ∈ Γ and R > 0. Then for all M ≥ Mx0,R and R′ ≤ R, if u is a positive harmonic function on B(x0, M R ) then max
B(x0,R ′)
u ≤ max
B(x0,R )
u ≤ C min
B(x0,R )
u ≤ C min
B(x0,R ′)
u.
Let
V (i) := max
B(x0,2i)
u − min
B(x0,2i)
u.
Then for 2 i ≤ R, the functions u − min B(x0,2i+1 ) u and max B(x0,2i+1 ) u − u are harmonic in
B(x0, M R ). Then by the weak Harnack inequality on B(x0, 2i),
V (i) + V (i + 1) ≤ C[V (i + 1) − V (i)] .
And so, we deduce that there exists λ < 1 such that
V (i) ≤ λ V (i + 1) .
For any x ∈ B(x0, R ), we can find N1 such that 2 N1−1 ≤ D(x0, x ) ≤ 2N1 . Then
|u(x) − u(x0)| ≤ V (N1).
Let N2 be such that 2 N2 ≤ R < 2N2+1 . Then, since 2 N2+1 ≤ M R ,
V (N1) ≤ λN2−N1+1 V (N2 + 1) and in particular,
|u(x) − u(x0)| ≤ c
( D(x0, x )
R
)ν
max
B(x0,M R )
u
where ν > 0 solves λ−1 = 2 ν and c > 0 is a constant.
Similarly, from Harnack inequality for the supercritical cluster given in Theorem 2.3, we have the following H¨ older continuity property. EXISTENCE OF THE HARMONIC MEASURE 9
Theorem 2.5. Let d ≥ 2 and let p > p c(Zd). Let Ω1 and R0(x, ω ) be as in Theorem 2.3. Then there exist positive constants ν and c such that for each ω ∈ Ω1, x 0 ∈ C ∞(ω) if R ≥ R0(x0, ω )
and u is a positive harmonic function on Bω (x0, R ) then, for all x, y ∈ Bω (x0, R 0/2) ,
|u(x) − u(y)| ≤ c
( D(x, y )
R
)ν
max
B(x0,R )
u.
We will also need a Harnack inequality in the annulus of the two-dimensional supercritical per-colation cluster. It follows from results of Barlow , a percolation result due to Kesten and the following estimates of Antal and Pisztora [5, Theorem 1.1 and Corollary 1.3]. For p > p c(Zd), d ≥ 2, there is a constant μ = μ(p, d ) ≥ 1 such that lim sup
|y|1→∞
1
|y|1
ln Pp[0 ↔ y, D (0 , y ) > μ |y|1] < 0 (2.3) and, Pp almost surely, for x0 ∈ C ∞ and for all x ∈ C ∞ such that D(x0, x ) is sufficiently large
D(x0, x ) ≤ μ|x − x0|1. (2.4)
Proposition 2.6. Let p > p c(Z2). There is a constant C > 0 such that Q-a.s., for all x0 ∈ C ∞
if m is large enough, then for any positive function u harmonic in B(x0, 3μm ) \ { x0},
max
x;D(x0,x )= m
u(x) ≤ C min
x;D(x0,x )= m
u(x)
where μ is the constant that appears in (2.3).
Since we need a construction that is done in section 4.1, the proof of this Harnack inequality is postponed to section 5. 3. Proofs for transient graphs
In this section, we prove Proposition 1.4 and Theorem 1.2.
Proof of proposition 1.4. The key ingredient to prove proposition 1.4 is given by Boukricha’s lemma . See also [31, p. 37]. Roughly speaking, this lemma ensures that a Harnack inequality holds for general positive harmonic functions as soon as a Harnack inequality holds for the Green function in an annulus. For x ∈ Γ and R > 0, let
Mx,R = 3 ∨ 1
R max
w∈B(x,R )
Rw. (3.1) We claim that wH( C) holds with the shrinking parameters Mx,R and the constant C = 2 γ C s
Ci
.Fix x0 ∈ Γ, R > 0, M > M x0,R and let u be a positive harmonic function on B(x0, M R ). First note that under (GE γ ), the graph is transient and we can apply Boukricha’s lemma (, [31, p. 37]) with B0 = B(x0, R ), B 1 = B(x0, (M + 1) R), B 2 = B(x0, (M + 2) R) and B3 = Γ. So we get that if u is harmonic on B2, then max
B0
u ≤ D min
B0
u,
with
D = max
x,y ∈B0
max
z∈B2\B1
G(x, z )
G(y, z ) .10 DANIEL BOIVIN and CLEMENT RAU
So, we have to compare G(x, z ) and G(y, z ) for x, y ∈ B0 and z ∈ B(x0, (M + 2) R) \ B(x0, (M +1) R). For all w ∈ B0, D(w, z ) > M R > R w by (3.1). Hence, by (GE γ ),
Ci
D(w, z )γ ≤ G(w, z ) ≤ Cs
D(w, z )γ .
Then, we successively have :
G(x, z ) ≤ Cs
D(x, z )γ
= Cs
Ci
( D(y, z )
D(x, z )
)γ Ci
D(y, z )γ
≤ Cs
Ci
( R + ( M + 2) R
(M + 1) R − R
)γ
G(y, z )
≤ Cs
Ci
( M + 3
M
)γ
G(y, z )
≤ 2γ Cs
Ci
G(y, z ).
We can now state the main lemma to prove Theorem 1.2.
Lemma 3.1. Let (Γ , a ) be a weighted graph which verifies wH( C). Fix x0 ∈ Γ.Let A be a finite subset of Γ. Let R > 1 be such that A ⊂ B(x0, R ).For all M > 2, there is λM > 1 such that for all λ > λ M and for all y ∈ A and z ∈ ∂B (x0, λM R ),
Py (XτA∧τ∂B = z|τA > τ ∂B ) = H∂B (x0, z )[1 + O
( 1
M ν
)
], (3.2)
where B = B(x0, λM R ) and ν > 0 is the H¨ older exponent given by lemma 2.4. The constant in
O(·) depends only on the constants C and c that appear in wH( C) and in lemma 2.4 respectively. Proof. For M > 2, choose M2 and M3 such that
M2 > M (x0, M R ) and M3 > M (x0, M 2M R )where M (x0, ·) are the shrinking parameters that appear in wH( C).Let B1 = B(x0, M R ), B2 = B(x0, M 2M R ) and B3 = B(x0, M 3M2M R ). For z ∈ ∂B 3, we consider the function
f (x) = Px(Xτ∂B 3 = z), x ∈ Γ.
Since f is harmonic on B2, by lemma 2.4, for all u ∈ B1,
|f (u) − f (x0)| ≤ c
( D(x0, u )
M R
)ν
max
B2
f.
In particular, for u ∈ ∂B (x0, R ),
|f (u) − f (x0)| ≤ c
M ν max
B2
f. (3.3) EXISTENCE OF THE HARMONIC MEASURE 11
Now by considering f harmonic on B3, since the graph verifies wH(C) , we have that max
B2
f ≤ Cf (x0). (3.4) Therefore, by (3.3) and (3.4), for all u ∈ ∂B (x0, R ),
Pu(Xτ∂B 3 = z) = H∂B 3 (x0, z )
[
1 + O
( 1
M ν
)]
. (3.5) Introduce the following notation. For U, V and W subsets of Γ with U ⊂ V ⊂ W. We put
∂V [W, U ] = {x ∈ ∂V ; there exist paths in Γ from x to ∂W and from x to U }. (3.6) On the set {τA < τ ∂B 3 }, we let η = inf {j ≥ τA; Xj ∈ ∂B (x0, R )}.Then using (3.5), we obtain that for all x ∈ ∂B (x0, R )[ B3, A ]
Px(Xτ∂B 3 = z|τA < τ ∂B ) = ∑
u∈∂B (x0,R )
Px(Xη = u|τA < τ ∂B 3 )Pu(Xτ∂B 3 = z)= H∂B 3 (x0, z )[1 + O
( 1
M ν
)
] (3.7) Let x ∈ ∂B (x0, R )[ B3, A ]. By (3.5) and (3.7), we get from the relation
Px(Xτ∂B 3 = z) = Px(Xτ∂B 3 = z|τA > τ ∂B 3 )Px(τA > τ ∂B 3 )+Px(Xτ∂B 3 = z|τA ≤ τ∂B 3 )(1 − Px(τA > τ ∂B 3 )) ,
that
Px(Xτ∂B 3 = z|τA > τ ∂B 3 ) = H∂B 3 (x0, z )[1 + O
( 1
M ν
)
]This can also be written as,
Px(Xτ∂B 3 ∧τA = z) = H∂B 3 (x0, z )Px(τA > τ ∂B 3 )[1 + O
( 1
M ν
)
]. (3.8) Note that every path from y to ∂B 3 must go through some point of ∂B (x0, R )[ B3, A ]. So, for all
y ∈ A and for all z ∈ ∂B 3,
Py (Xτ∂B 3 ∧τA = z) = ∑
x∈∂B (x0,R )[ B3,A ]
Py (Xτ∂B (x0,R )[ B3,A ]∧τA = x)Px(Xτ∂B 3 ∧τA = z)
(3 .8)
= H∂B 3 (x0, z )[1 + O
( 1
M ν
)
]
× ∑
x∈∂B (x0,R )[ B3,A ]
Py (Xτ∂B (x0,R )[ B,A ]∧τA = x)Px(τA > τ ∂B 3 )= H∂B 3 (x0, z )[1 + O
( 1
M ν
)
]Py (τA > τ ∂B 3 ).
This last equation proves that lemma 3.1 holds with λM = M2M3 where M2 = M (x0, M R ) and
M3 = M (x0, M 2M R ).
As in Lawler [22, p. 49], using a last exit decomposition, we obtain the following representation of the hitting distribution in a weighted graph. The Green function of the random walk in B ⊂ Γ is defined by
GB (x, y ) :=
∞
∑
k=0
pB (x, y, k ), x, y ∈ B12 DANIEL BOIVIN and CLEMENT RAU
where pB (x, y, k ) := Px(Xk = y, k < τ Bc ) are the transition probabilities of the walk with Dirichlet boundary conditions.
Lemma 3.2. Let (Γ , a ) be a weighted graph. Let A ⊂ B be finite subsets of Γ. Then for all x ∈ Bc and y ∈ A,
HA(x, y ) = ∑
z∈∂B
GAc (x, z )HA∪∂B (z, y ), (3.9)
HA(x, y ) =
∑
z∈∂B
GAc (x, z )HA∪∂B (z, y )
∑
z∈∂B
GAc (x, z )Pz (τA < τ ∂B ) (3.10)
and
min
z∈∂B
HA∪∂B (z, y )
Pz (τA < τ ∂B ) ≤ HA(x, y ) ≤ max
z∈∂B
HA∪∂B (z, y )
Pz (τA < τ ∂B ) (3.11) Then by reversibility, π(z)HA∪∂B (z, y ) = π(y)HA∪∂B (y, z ) and
Pz (τA < τ ∂B ) = ∑
˜y∈A
H+
A∪∂B
(z, ˜y). Hence, min
z∈∂B
π(y)HA∪∂B (y, z )
∑
˜y∈A
π(˜ y)HA∪∂B (˜ y, z ) ≤ HA(x, y ) ≤ max
z∈∂B
π(y)HA∪∂B (y, z )
∑
˜y∈A
π(˜ y)HA∪∂B (˜ y, z ) (3.12) We complete the proof of Theorem 1.2.
Proof of Theorem 1.2. We are given x0 ∈ Γ and a finite set A ⊂ Γ. Let R > 1 be such that A ⊂ B(x0, R ). Let B = B(x0, λM R ) where λ ≥ λM is given by lemma 3.1. By equation (3.2), for all y ∈ A and z ∈ ∂B ,
π(y)HA∪∂B (y, z ) = H∂B (x0, z )[1 + O
( 1
M ν
)
]π(y)Py (τA > τ ∂B ). (3.13) By summing over y ∈ A the equation (3.13) gives,
∑
y∈A
π(y)Py (Xτ∂B ∧τA = z) = H∂B (x0, z )[1 + O
( 1
M ν
)
] ∑
y∈A
π(y)Py (τA > τ ∂B ). (3.14) Since (Γ , a ) is connected, both sides of (3.14) are positive. So we can divide (3.13) by (3.14). And a short calculation shows that
π(y)HA∪∂B (y, z )
∑
˜y∈A
π(˜ y)P˜y (Xτ∂B ∧τA = z) = π(y)Py (τA > τ ∂B )
∑
˜y∈A
π(˜ y)P˜y (τA > τ ∂B ) [1 + O
( 1
N ν
)
]where the constant in O(·) still depends only on the constants C and c that appear in wH( C)
and in lemma 2.4 respectively. By (3.12), we have that for all v / ∈ B,min
z∈∂B
π(y)HA∪∂B (y, z )
∑
˜y∈A
π(˜ y)P˜y (Xτ∂B ∧τA = z) ≤ HA(v, y ) ≤ max
z∈∂B
π(y)HA∪∂B (y, z )
∑
˜y∈A
π(˜ y)P˜y (Xτ∂B ∧τA = z)EXISTENCE OF THE HARMONIC MEASURE 13
So for all v / ∈ B we get:
HA(v, y ) = π(y)Py (τA > τ ∂B )
Cap B (A) [1 + O
( 1
M ν
)
] (3.15) As v goes to + ∞ in an arbitrary way, we will have that M → ∞ as well. Hence, by (3.15), we obtain that lim v→+∞ HA(v, y ) exists and lim
v→+∞
HA(v, y ) = lim
m→+∞
HmA (v, y ) = π(y)Py (τA > +∞)
∑
˜y∈A
π(˜ y)P˜y (τA > +∞) .
Recurrent graphs
In this section, we prove the existence of the harmonic measure for the random walk on a supercritical percolation cluster of Z2. The proof for the uniformly elliptic random walk on Z2 is similar but with many simplifications since we can use the estimates of instead of Barlow’s estimates. 4.1. Estimates of the capacity of a box. Proposition 4.1. Let p > p c(Z2). There is a constant C ≥ 1 such that Qp-a.s. for x0 ∈ C ∞,for all n sufficiently large,
C−1 ≤ (ln n)Cap Bω (x0,n )({x0}) ≤ C. (4.16) Flows of finite energy on the supercritical percolation cluster with respect to a convex gauge functions are constructed in . To do so, the flow is expressed by a probability on the set of self-avoiding paths. Here, however, the lower estimate of (4.16) is obtained by combining the method used in Z2, see [23, Proposition 2.14], with a percolation lemma of Kesten [20, Theorem 7.11].
Proof. The upper bound follows from the variational principle and a comparison with Z2 (see for instance [31, section 3.1]). To prove the lower bound, we assume 0 ∈ C ∞ and for each n sufficiently large, we construct a particular flow θn from 0 to ∂B ω (0 , n ). However, it is a difficult task to estimate the energy of a flow from 0 to ∂B ω (0 , n ) consisting of small flows along simple paths from 0 to ∂B ω (0 , n )since the percolation cluster is very irregular. So, as in Mathieu and Remy , we construct a ”subgrid” of Bω (0 , n ) by using a Theorem of Kesten (). Let us introduce some definitions.
Definition 4.2. Let Bm,n = [0; m] × [0; n] ∩ Z2.A horizontal [resp. vertical] channel of Bm,n is a path (v0, e 1, e 2, ..., e n, v n), with vi ∈ Z2 and
ei ∈ E2 for all i = 1 ...n such that:
• (v0, e 1, e 2, ..., e n−1, v n−1) is contained in the interior of Bm,n
• v0 ∈ { 0} × [0; n] [resp. v0 ∈ [0; m] × { 0}]
• vn ∈ { m} × [0; n] [resp. vn ∈ [0; m] × { n}]14 DANIEL BOIVIN and CLEMENT RAU
Figure 1. The outline of the Kesten ’s Grid KG(n).
We say that two channels are disjoint if they have no vertex in common. Let N (m, n ) be the maximal number of disjoint open horizontal channels in Bm,n . Then by [20, Theorem 11.1], for
p > p c, there is a constant c(p) and some universal constants 0 < c 1, c 2, ξ < ∞, such that
Pp
(N (m, n ) > c (p)n) ≥ 1 − c1(m + 1) exp( −c2(p − pc)ξ n). (4.17) Let us now construct the Kesten grid over [ −n; n]2. We divide the box [ −n; n]2 in horizontal strips of width CK ln( n) with CK large enough so that c2(p − pc)ξ C > 3. Then
∑
n
n
CK ln n c1(n + 1) exp( −c2(p − pc)ξ CK ln( n)) < ∞.
Hence by Borel-Cantelli lemma, we get that for n large enough there is at least c(p) ln( n) disjoint channels in each horizontal strips of width CK ln n. We do the same construction for vertical strips. Finally, we deduce the existence of a grid KG(n) in [ −n; n]2 where each horizontal and each vertical strip of width CK ln n contains at least c(p) ln( n) disjoint channels.
Construction of the flow
Since Bω (0 , n ) ⊂ C ∞ ∩ [−n; n]2, we have that Cap C∞∩[−n;n]2 (0) ≤ Cap Bω (0 ,n )(0). Hence, to obtain a lower bound, it suffices to construct a flow θn from 0 to ∂([ −n; n]2) ∩ C ∞.Then for each a path Π : ( e1, e 2, ..., e L) from 0 to ∂([ −n; n]2) ∩ C ∞ with the induced orientation and consisting only of edges of Kesten’s grid KG(n), we associate the unit flow Ψ Π = ∑
ℓ
(1{⇀
eℓ}
−
1{↼
eℓ}
). The flow θn will be a sum of flows Ψ Π for a set of well chosen paths. More precisely, consider a ray from 0 to boundary of the box [ −n; n]2, ending on a channel of
KG(n). There are about 2 2n
CK ln n c(p) ln n such rays. By the notation f (n, r ) ≍ g(n, r ) used below, we mean that there is a constant c ≥ 1 such that
Pp-a.s such that for n large enough and for 1 ≤ r ≤ n, then c−1f (n, r ) ≤ g(n, r ) ≤ cf (n, r ).EXISTENCE OF THE HARMONIC MEASURE 15
Figure 2. Construction of the flow θn.
Then for 1 ≤ r ≤ n, the boundary of the box [ −r; r]2 is divided in segments of length
≍ 2r CK ln n
2n
1
c(p) ln n .
Thus, the number of rays that crosses an edge of the boundary of [ −r; r]2 (of length 1),
≍ 1
r
2n
CK ln n c(p) ln n. (4.18) To each ray, we associate a simple path, chosen among paths from from 0 to the boundary of [−n; n]2. It consists of edges on the channels of KG(n) that are close to the ray. To go right from a horizontal channel to up on a vertical channel, the top horizontal channel is attached to the left-most vertical channel. Then similarly from top to bottom for the horizontal channels and from left to right for the vertical ones. We proceed similarly for the other turns. Let Pn be the set of chosen paths. We use these paths to construct the flow θn of intensity cn
by setting
θn = ∑
Π∈P n
ΨΠ.
The paths of Pn might not be disjoint but by (4.18), there is a constant C < ∞ such that an edge on the boundary of [ −r; r]2 belongs to less than C n
r
paths. Hence, there is a constant C′ < ∞
such that for an edge e at ℓ1-distance r from the origin, the flow θn satisfies:
θn(e) ≤ C′ n
r .
Hence, by Thomson’s principle (see for instance [23, section 2.4]), 1
Cap Bω (0 ,n )(0) ≤ 1
n2 E(θn) = 1
n2
∑
e
θn(e)2 ≤ C′′
n2
∑
r=1 ,...,n
n2
r2 r ≤ C′′ ln( n).16 DANIEL BOIVIN and CLEMENT RAU
4.2. The Green kernel and its properties. In this section, we will use the parabolic Harnack inequality for the random walk on the supercritical percolation cluster proved by Barlow and Hambly in . Besides this, we also use the comparison result for D and the | · | 1-distance of Antal and Pisztora , see (2.4).
Lemma 4.3. Pp-almost surely, for all x0, x ∈ C ∞, the series
∞
∑
k=0
[p(x0, x 0, k ) − p(x, x 0, k )]
converges. The limit will be denoted by g(x, x 0).Let G2n(x, y ) and p2n(x, y, k ) be respectively the Green function and the probability transitions of the random walk in the ball B(x0, 2n) with Dirichlet boundary conditions. Then
g(x, x 0) = lim
n
∞
∑
k=0
[p2n(x0, x 0, k ) − p2n(x, x 0, k )] (4.19) = lim
n
[G2n(x0, x 0) − G2n(x, x 0)] (4.20)
Proof. Let R0 be as in Theorem 2.3. Then by [8, Proposition 6.1], for R ≥ R0(x), B(x, R ) is very good with NB ≤ R1/(10( d+2)) and it is exceedingly good. Now let R ≥ R0(x) ∨ 16 and let R1 = R ln R. Then, since R1 ≥ R0, B = B(x, R 1) is very good with N 2d+4
B
≤ R(2 d+4) /(10( d+2)) 1 ≤ R1/(2 ln R1). Then by [8, Theorem 3.1], there exists a constant CH such that the parabolic Harnack inequality [8, (3.2)] holds in Q(x, R, R 2). Therefore [8, Proposition 3.2] holds with s(x0) = R0(x0) ∨ 16 and ρ(x0, x ) = R0(x0) ∨ 16 ∨ D(x0, x )Fix x0 ∈ C ∞ then v(n, x ) := p(x, x 0, n ) + p(x, x 0, n + 1) is a caloric function, that is, it verifies
v(n + 1 , x ) − v(n, x ) = Lv(n, x ), (n, x ) ∈ N × C ∞.
Let k > 4D(x0, x )2. Let t0 = k + 1 and r0 = √t0. Then v(n, x ) is caloric in ]0 , r 20 ] × B(x0, r 0),
x ∈ B(x0, r 0/2) since D(x0, x ) ≤ √k < r 0/2, and t0 − ρ(x0, x )2 ≤ k ≤ t0 − 1. Then by the upper gaussian estimates [6, Theorem 5.7] and [8, (2.18)] and by [8, Proposition 3.2], there is ν > 0 such that
|v(k, x ) − v(k, x 0)| ≤ C
( ρ(x0, x )
√t0
)ν
sup
Q+
v
≤ C
( ρ(x0, x )
√t0
)ν 1
r20
≤ C ρ(x0, x )ν
k1+ ν/ 2
Hence, the series converges. Note that we also have that
|p(x, x 0, k ) − p(x0, x 0, k )| ≤ C ρ(x0, x )ν
k1+ ν/ 2 .
Then (4.19) follows by Lebesgue dominated convergence theorem. EXISTENCE OF THE HARMONIC MEASURE 17
Lemma 4.4. There are constants 0 < c < C < ∞ such that, Pp-a.s., for all x0 ∈ C ∞ there is
ρ = ρ(x0) such that if D(x0, x ) > ρ ,
c ln D(x0, x ) < g (x, x 0) < C ln D(x0, x ) (4.21)
Proof. Let x0 ∈ C ∞. For m ≥ 1, write σm for τ B(x0,m )c .Note that for all n > 3μm where μ is the constant that appears in (2.3), P·(σn < τ x0 ) is harmonic in B(x0, 3μm ){ x0}. Then by the annulus Harnack inequality (Proposition 2.6), if m is sufficient large and if m = D(x, x 0)
Px0 (σn < τ x0 ) = ∑
x′;D(x0,x ′)= m
Px0 (X(σm) = x′, σ m < τ x0 )Px′ (σn < τ x0 )
≍ Px(σn < τ x0 ) ∑
x′;D(x0,x ′)= m
Px0 (X(σm) = x′, σ m < τ x0 )
≍ Px(σn < τ x0 )Cap m(x0).
By f (x) ≍ g(x) here, we mean that there are constants c and C, which do not depend on x, n, m
or ω, such that Pp-a.s for x ∈ C ∞, if D(x0, x ) is large enough, then 0 < cf (x) ≤ g(x) ≤ Cf (x).
Hence by the capacity estimates (4.16)
Px(σn < τ x0 ) ≍ Cap n(x0)
Cap m(x0) ≍ ln m
ln n = ln D(x0, x )
ln n . (4.22) Then by the capacity estimates (4.16) and by (4.22),
Gn(x0, x 0) − Gn(x, x 0) = Gn(x0, x 0) − Px(τx0 < σ n)Gn(x0, x 0)= Gn(x0, x 0)Px(τx0 > σ n)
≍ ln n ln D(x,x 0)
ln n
Then (4.21) follows by (4.19).
The harmonic measure will be expressed in terms of the function uA defined below.
Definition 4.5. Pp-a.s., for a finite subset A of C∞(ω) and for a fixed x0 ∈ C ∞(ω), let
uA(x, x 0) := g(x, x 0) − Ex,ω g(Xτ A , x 0).
Note that
uA(·, x 0) = 0 on A, uA(x, x 0) ≍ ln Dω (x0, x ) as x → ∞ by (4 .21) ,
for all x ∈ C ∞, P u A(x, x 0) = P g (x, x 0) − ∑
y∼x
p(x, y )Ey g(Xτ A , x 0)= g(x, x 0) − 1x0 (x) − Exg(XτA , x 0),
We will need to work in balls defined in terms of g(·, x 0). Let
˜Bn := ˜B(x0, n ) := {x ∈ C ∞; g(x, x 0) ≤ ln n} and ˜σn := inf {k ≥ 0; Xk /∈ ˜B(x0, n )} (4.23) Note that by (4.21), for all n sufficiently large,
B(x0, n 1/C ) ⊂ ˜B(x0, n ) ⊂ B(x0, n 1/c ). (4.24) 18 DANIEL BOIVIN and CLEMENT RAU
The next lemma is the analogue of [21, Proposition 6.4.7].
Proposition 4.6. Pp-a.s., for a finite subset A of C∞(ω) and for a fixed x0 ∈ C ∞(ω), for all
x ∈ Ac,
uA(x, x 0) = lim
n
(ln n)Px(˜σn < τ A).
Proof. Let R0(x, ω ) be as in Barlow’s Theorem 2.3. By (2.2), by (2.3) of Antal and Pisztora, and by (4.24)
∑
n
∑
x∈∂˜B(x0,n )
Pp(x ∈ C ∞, R 0(x, ·) ≥ √n) ≤ C ∑
n
n2/c exp( −c3nε/ 2) < ∞.
Therefore, by Borel-Cantelli, there is Ω 1 ⊂ Ω with Pp(Ω 1) = 1, such that for all ω ∈ Ω1 there is
n0 such that for all n ≥ n0 and for all x ∈ ∂ ˜B(x0, n ), R0(x) < √n.Let x ∈ ∂ ˜B(x0, n ) where n ≥ n0. Then there is x′ ∈ ˜B(x0, n ) such that x′ ∼ x and
g(x′, x 0) ≤ ln n < g (x, x 0).
Moreover, by (4.24), D(x, x 0) > n 1/C . Then by H¨ older’s continuity property given in Theorem 2.5 and by (4.21),
g(x, x 0) − ln n ≤ g(x, x 0) − g(x′, x 0)
≤ c
( 1
n1/C
)ν
max
B(x,n 1/C )
g(·, x 0)
≤ c
C
( 1
n1/C
)ν
ln n
→ 0 as n → ∞ .
By the optional stopping theorem applied to the martingale g(Xk, x 0), k ≥ 0 and for n large enough,
g(x, x 0) = Ex [g(XτA∧˜σn , x 0)] , x ∈ ˜B(x0, n ) \ A
= Px(˜σn < τ A)Ex [g(X˜σn , x 0) | ˜σn < τ A]+Px(τA < ˜σn)Ex [g(XτA , x 0) | τA < ˜σn]But lim
n
Px(τA < ˜σn)Ex [g(XτA , x 0) | τA < ˜σn] = lim
n
Ex [g(XτA , x 0); τA < ˜σn]= Exg(XτA , x 0)Therefore, uA(x, x 0) = lim n(ln n)Px(˜σn < τ A).
We can now prove the analogue of lemma 3.1 for the supercritical cluster. Theorem 1.8 will follow from this lemma and from proposition 4.6 above.
Lemma 4.7. Let p > p c(Z2). Let Ω1 and R0(x, ω ) be as in Theorem 2.3. There is ν′ > 0 such that the following holds. Let ω ∈ Ω1 and let A be a finite subset of C∞(ω). Fix x0 ∈ C ∞(ω).Let R > 1 be such that A ⊂ B(x0, R ).EXISTENCE OF THE HARMONIC MEASURE 19
Then there is N0 = N0(x0, ω ) such that for all n > N 0, for all y ∈ A and z ∈ ∂ ˜B(x0, n ),
Py (X˜σn∧τA = z|τA > ˜σn) = H∂ ˜B(x0,n )(x0, z )
[
1 + O
(( R
n
)ν′ )]
(4.25)
where ˜Bn and ˜σn are as in (4.23). ν′ > 0 depends on the H¨ older exponent given by Theorem 2.5 and the constants given in (4.21) The constants in O(·) depend only on the constants that appear in theorems 2.3 and 2.5. Proof. For R1 > max {R0(x0, ω ), R }, let B1 = B(x0, R 1), B2 = B(x0, 2R1), B3 = B(x0, 4R1). Set n = (4 R1)C and let ˜Bn = ˜B(x0, n ) and ˜σn be as in (4.23). Note that by (4.24), B3 ⊂ ˜Bn.For z ∈ ∂ ˜Bn, consider the function
f (x) = Px(X˜σn = z), x ∈ C ∞(ω).
Since f is harmonic on B2, by Theorem 2.5, for all u ∈ B1,
|f (u) − f (x0)| ≤ c
( D(x0, u )
R1
)ν
max
B2
f.
In particular, for u ∈ ∂B (x0, R ),
|f (u) − f (x0)| ≤ c
( R
R1
)ν
max
B2
f. (4.26) Now by considering f harmonic on B3, by Theorem 2.3, we have that max
B2
f ≤ c1f (x0). (4.27) Therefore, by (4.26) and (4.27), for all u ∈ ∂B (x0, R ),
Pu(X˜σn = z) = H∂ ˜Bn (x0, z )
[
1 + O
(( R
R1
)ν )]
. (4.28) On the set {τA < ˜σn}, we let η = inf {j ≥ τA; Xj ∈ ∂B (x0, R )}.Then using (4.28), we obtain that for all x ∈ ∂B (x0, R )[ ˜Bn, A ] (see (3.6) for the notation),
Px(X˜σn = z|τA < ˜σn) = ∑
u∈∂B (x0,R )
Px(Xη = u|τA < ˜σn)Pu(X˜σn = z)= H∂ ˜Bn (x0, z )
[
1 + O
(( R
R1
)ν )]
. (4.29) Let x ∈ ∂B (x0, R )[ ˜Bn, A ]. By (4.28) and (4.29), we get from the relation
Px(X˜σn = z) = Px(X˜σn = z|τA > ˜σn)Px(τA > ˜σn)+Px(X˜σn = z|τA ≤ ˜σn)(1 − Px(τA > ˜σn)) ,
that
Px(X˜σn = z|τA > ˜σn) = H∂ ˜Bn (x0, z )
[
1 + O
(( R
R1
)ν )]
.
This can also be written as,
Px(X˜σn∧τA = z) = H∂ ˜Bn (x0, z )Px(τA > ˜σn)
[
1 + O
(( R
R1
)ν )]
. (4.30) 20 DANIEL BOIVIN and CLEMENT RAU
Note that every path from y ∈ A to ∂ ˜Bn must go through some point of ∂B (x0, R )[ ˜Bn, A ]. So, for all y ∈ A and for all z ∈ ∂ ˜Bn,
Py (X˜σn∧τA = z) = ∑
x∈∂B (x0,R )[ ˜Bn,A ]
Py (Xτ∂B (x0,R )[ ˜Bn,A ]∧τA = x)Px(X˜σn∧τA = z)
(4 .30)
= H∂ ˜Bn (x0, z )
[
1 + O
(( R
R1
)ν )]
× ∑
x∈∂B (x0,R )[ ˜Bn,A ]
Py (Xτ∂B (x0,R )[ ˜Bn,A ]∧τA = x)Px(τA > ˜σn)= H∂ ˜Bn (x0, z )
[
1 + O
(( R
R1
)ν )]
Py (τA > ˜σn).
Hence the lemma holds with N0 = (4 max {R0(x0, ω ), R })C .
4.3. The existence of the harmonic measure. We now show how to obtain Theorem 1.8 from lemma 4.7. Let R be such that A ⊂ B(x0, R ).
Proof. Let ˜Bn and ˜σn be as in (4.23). Let y ∈ A. For x / ∈ ˜Bn, by (3.9), by reversibility of the Markov chain and by (4.25), for all n > N 0,
π(x)HA(x, y ) = π(x)Px(XτA = y)= π(x) ∑
z∈∂˜Bn
GAc (x, z )HA∪∂ ˜Bn (z, y )= ∑
z∈∂˜Bn
GAc (z, x )π(y)HA∪∂ ˜Bn (y, z )= ∑
z∈∂˜Bn
GAc (z, x )π(y)Py (˜σn < τ A)H∂ ˜Bn (x0, z )
[
1 + O
(( R
n
)ν′ )]
= π(y)Py (˜σn < τ A) ∑
z∈∂˜Bn
GAc (z, x )H∂ ˜Bn (x0, z )
[
1 + O
(( R
n
)ν′ )]
At this point for the supercritical cluster of Zd, d ≥ 3, it suffices to sum over y ∈ A and divide the equations. However, since the walk is recurrent on the supercritical percolation cluster of Z2,
Py (˜σn < τ A) → 0 as n → ∞ , this would lead to an indeterminate limit. But by proposition 4.6,
π(x)HA(x, y ) = π(x)HA(x, y )
π(x) ∑
y′∈A
HA(x, y ′)= lim
n
π(y)Py (˜σn < τ A)
∑
y′∈A
π(y′)Py′ (˜σn < τ A)= lim
n
(ln n)π(y)Py (˜σn < τ A)
(ln n) ∑
y′∈A
π(y′)Py′ (˜σn < τ A)= π(y)P u A(y)
∑
y′∈A
π(y′)P u A(y′) .EXISTENCE OF THE HARMONIC MEASURE 21
Proof of proposition 2.6
In this proof, we keep the notations of except for the graph distance which will still be denoted by D(x, y ). For a cube Q of side n, let Q+ := A1 ∩ Zd and Q⊕ := A2 ∩ Zd where A1 and A2 are the cubes in Rd with the same center as Q and with side length 3
2
n and 6
5
n respectively. Note that
Q ⊂ Q⊕ ⊂ Q+.
C(x) is the connected open cluster that contains x. CQ(x), which will be called the open Q
cluster, is the set of points connected to x by an open path within Q. And C∨(Q) is the largest open Q cluster (with some rule for breaking ties). Set α2 = (11( d + 2)) −1.
Proof. By [6, lemma 2.24] and by Borel-Cantelli lemma, for all x ∈ Zd, there is Nx such that for all n > N x, L(Q) (see [6, p. 3052]) holds for all cubes Q of side n with x ∈ Q.Let z ∈ Zd and let n > N z = Nz (ω). Let Q be a cube of side n which contains z.Let x0 ∈ C ∨(Q+) ∩ Q⊕ with Q(x0, r + k0)+ ⊂ Q+ where CH nα2 ≤ r ≤ n and k0 = k0(p, d ) is the integer chosen in [6, p. 3041]. Let R be such that
Bω (x0, (3 /2) R ln R) ⊂ Q⊕ and (5.31) (CH nα2 )d+2 ≤ (CH nα2 )4( d+2) < R < R ln R < n. (5.32) Then by [6, Theorem 2.18c], Bω (x0, R ln R) is ( CV , C P , C W )- very good with
NBω (x0,R ln R) ≤ CH nα2
with the constants given in [6, section 2]. Then by [6, Theorem 5.11] and (5.32), there is a constant C1, which depends only on d and on the constants CV , C P , C W , such that if D(x0, x 1) ≤ 1
3
R ln R and if h : B(x1, R ) → R is positive and harmonic in B(x1, R ), then max
B(x1,R/ 2)
h ≤ C1 min
B(x1,R/ 2)
h. (5.33) Note that since 4 α2(d + 2) = 4 /11 < 1/2, the conditions (5.32) are verified for R = 2 √n when n
large enough. We now apply a standard chaining argument to a well chosen covering by balls (see for instance [31, chapters 3 and 9]). Let x0 ∈ Z2 and consider environments such that x0 ∈ C ∞(ω). The main difficulty to carry out the chaining argument is to check that the intersection of “consecutive” balls is not empty. The remainder of the proof is to construct an appropriate covering of {x ∈C∞; D(x0, x ) = m}, for m large enough, with a finite number balls, which does not depend on
x0, m or ω, and such that the Harnack inequality (5.33) holds in each ball. Let δ1, δ 2 and δ3 be three positive real numbers such that 2δ2 < δ 1 and δ1 + 2 δ2 < δ 3 < 1
5μ
( 4
5 − δ2
)
. (5.34) 22 DANIEL BOIVIN and CLEMENT RAU
For instance, choose δ3 so that 0 < δ 3 < 4/(50 μ), then choose δ1 so that 0 < 2δ1 < δ 3 and finally choose δ2 so that δ2 < min {δ1/2, 4/(50 μ)}.Let n > N x0 .Furthermore, take n large enough so that there is a Kesten’s grid in Q with constant CK and
R(Q) holds (by [6, lemma 2.8]). That is in each vertical and each horizontal strip of width CK ln n
contains at least c(p) ln n open disjoint channels. Moreover, since R(Q) holds, C∨(Q) ⊂ C ∨(Q+). In particular, x0 ∈ C ∨(Q+) ∩ Q⊕.Furthermore by (2.3) and Borel-Cantelli, if m is large enough then for all x, y ∈ C ∞ such that
|x|1 ≤ 3μm , |y|1 ≤ 3μm and |x − y|1 ≥ m(δ1 − 2δ2)/μ we have
|x − y|1 ≤ D(x, y ) ≤ μ|x − y|1.
Set R
2 = mδ 3 = √n.Furthermore, take m large enough so that
CK ln n ≤ mδ 2/μ, 3mμ < 1
3 R ln R (5.31) and (5.32) are verified .
Instead of constructing a finite covering of {x ∈ C ∞; D(x0, x ) = m}, it is easier to construct a finite covering of the region {x ∈ C ∞; 4m
5μ
≤ | x − x0|1 ≤ 2m} which is a larger subset of Z2.Let I := {(i; j) ∈ N2; 4 /(5 δ1) ≤ i + j ≤ 2μ/δ 1.} Let M be the cardinal of I.Let xi,j = x0 + ( imδ 1/μ ; jmδ 1/μ ) with ( i; j) ∈ I . Then for each xi,j with ( i; j) ∈ I , there is
˜xi,j ∈ C ∞ such that |xi,j − ˜xi,j |1 ≤ mδ 2/μ .We proceed similarly in the other three quadrants to obtain a set of 4 M vertices which we denote by D. Note that M does not depend on m.The finite covering of the region 4m
5μ
≤ | x − x0|1 ≤ 2m is
{B(˜x, mδ 3), ˜x ∈ D} .
Note that each ball contains the center of the four neighbouring balls except those on the bound-ary of the region. But these are connected to at least one neighbouring ball. Indeed, if ˜x, ˜y ∈ D
are neighbouring centers then by (5.34),
D(˜x, ˜y) < μ |˜x − ˜y| < m (δ1 + 2 δ2) < mδ 3.
If ˜x ∈ D then by (5.34),
D(x0, ˜x) > m
μ
( 4
5 − δ2
)
5mδ 3,D(x0, ˜x) < μ |x0 − ˜x|1 < 2mμ and μ (2 m + mδ 2/μ ) < 3mμ .Therefore, x0 does not belong to a ball of the covering and u is harmonic in each ball B(˜x, 2mδ 3)with ˜x ∈ D . Then the Harnack inequality holds for R = 2 mδ 3 since for all ˜x ∈ D ,
D(x0, ˜x) < 2mμ < 1
3 R ln R. EXISTENCE OF THE HARMONIC MEASURE 23
Acknowledgment : The authors would like to thank Pierre Mathieu for numerous discussions and particularly, for pointing out the usefulness of Kesten’s lemma. This research was supported by the French ANR projects MEMEMO and MEMEMO2.
References
D. A. Adams, L. M. Sander, E. Somfai, and R. M. Ziff. The harmonic measure of diffusion-limited aggregates including rare events. EPL (Europhysics Letters) , 87(2):20001, 2009. David A. Adams, Leonard M. Sander, and Robert M. Ziff. Harmonic measure for percolation and ising clusters including rare events. Phys. Rev. Lett. , 101(14):144102, Sep 2008. S. Andres, M.T. Barlow, J-D. Deuschel, and B.M. Hambly. Invariance principle for the Random Conductance Model . Preprint available at. barlow/preprints/, 2010. Omer Angel, Itai Benjamini, Noam Berger, and Yuval Peres. Transience of percolation clusters on wedges.
Electron. J. Probab. , 11:no. 25, 655–669 (electronic), 2006. P. Antal and P. Pisztora. On the chemical distance for supercritical Bernoulli percolation. Ann. Probab. ,24:1036–1048, 1996. M. T. Barlow. Random walks on supercritical percolation clusters. Ann. Probab. , 32:3024–3084, 2004. M. T. Barlow and J.-D. Deuschel. Invariance principle for the random conductance model with unbounded conductances. Ann. Probab. , 38(1):234–276, 2010. M. T. Barlow and B.M. Hambly. Parabolic Harnack inequality and local limit theorem for percolation clusters.
Electron. J. Probab. , 14(1):1–27, 2009. Martin T. Barlow. Which values of the volume growth and escape time exponent are possible for a graph?
Rev. Mat. Iberoamericana , 20(1):1–31, 2004. Itai Benjamini, Russell Lyons, and Oded Schramm. Percolation perturbations in potential theory and random walks. In Random walks and discrete potential theory (Cortona, 1997) , Sympos. Math., XXXIX, pages 56–84. Cambridge Univ. Press, Cambridge, 1999. N. Berger and M. Biskup. Quenched invariance principle for simple random walk on percolation clusters.
Probab. Theory Related Fields , 137:83–120, 2007. A. Boukricha. Das Picard-Prinzip und verwandte Fragen bei St¨ orung von harmonischen R¨ aumen. . Math. Ann. , 239:247–270, 1979. T. Delmotte. Parabolic Harnack inequality and estimates of Markov chains on graphs. Rev. Mat. Iberoam. ,15:181–232, 1999. Hugo Duminil-Copin, Cyrille Lucas, Ariel Yadin, and Amir Yehudayoff. Containing internal diffusion limited aggregation. arXiv , 1111.0486v1, 2011. Bertrand Duplantier. Harmonic measure exponents for two-dimensional percolation. Phys. Rev. Lett. ,82(20):3940–3943, May 1999. Bertrand Duplantier. Conformally invariant fractals and potential theory. Phys. Rev. Lett. , 84(7):1363–1367, 2000. Alexander Grigor’yan and Andr´ as Telcs. Harnack inequalities and sub-Gaussian estimates for random walks.
Math. Ann. , 324(3):521–556, 2002. G. R. Grimmett, H. Kesten, and Y. Zhang. Random walk on the infinite cluster of the percolation model.
Probab. Theory Related Fields , 96(1):33–44, 1993. V.A. Kaimanovitch. Boundary theory and entropy of random walks in random environments. Probability Theory and Mathematical Statistics , pages 573–579, 1990. Harry Kesten. Percolation Theory for Mathematicians . Birkhauser, Boston, 1982. G. Lawler and V. Limic. Random Walk : A Modern Introduction . Cambridge Studies In Advanced Mathe-matics. Cambridge University Press, 2010. Gregory F. Lawler. Intersections of random walks . Probability and its Applications. Birkh¨ auser Boston Inc., Boston, MA, 1996. Russell Lyons, with Yuval Peres. Probability on trees and networks . Current version available at Cambridge University Press, A book in progress. P. Mathieu and A. Piatnitski. Quenched invariance principles for random walks on percolation clusters. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. , 463(2085):2287–2307, 2007. Pierre Mathieu and Elisabeth Remy. Isoperimetry and heat kernel decay on percolation clusters. Ann. Probab. ,32(1A):100–128, 2004. P. M¨ orters and Y. Peres. Brownian motion , volume 30 of Cambridge Series in Statistical and Probabilistic Mathematics . Cambridge University Press, Cambridge, 2010. 24 DANIEL BOIVIN and CLEMENT RAU
Ecaterina Sava. A note on the Poisson boundary of lamplighter random walks. Monatsh. Math. , 159(4):379– 396, 2010. Eric Shellef. Idla on the supercritical percolation cluster. Electronic Journal of Probability , 15(Paper no. 24):723–740, 2010. Vladas Sidoravicius and Alain-Sol Sznitman. Quenched invariance principles for walks on clusters of perco-lation or among random conductances. Probab. Theory Related Fields , 129(2):219–244, 2004. Andras Telcs. Transition probability estimates for reversible markov chains. Elect. Comm. in Probab. , 5:29– 37, 2000. Andr as Telcs. The Art of Random Walks , volume 1885 of Lecture Notes in Mathematics . Springer, 2006.
Daniel Boivin Cl´ ement Rau Universit´ e Europ´ eenne de Bretagne Universit´ e Paul Sabatier Universit´ e de Bretagne Occidentale Institut de Math´ ematiques de Toulouse Laboratoire de Math´ ematiques CNRS UMR 6205 route de Narbonne 6 avenue Le Gorgeu, CS93837 31400 Toulouse F-29238 Brest Cedex 3, France France [email protected] [email protected] boivin/ rau/
|
7
|
A304916 - OEIS
===============
login
The OEIS is supported by the many generous donors to the OEIS Foundation.
Hints
(Greetings from The On-Line Encyclopedia of Integer Sequences!)
A304916
Numbers k such that the number of divisors of the k-th central binomial coefficient is a power of 2.
0
0, 1, 2, 4, 7, 11, 21, 22, 28, 37, 42, 52, 69, 784
(list; graph; refs; listen; history; text; internal format)
OFFSET
1,3
COMMENTS
Equivalently (as shown below), numbers k such that the number of divisors of the k-th central binomial coefficient is 1 or a prime power p^j, j >= 1.
a(15) > 10^8, if it exists.
Conjecture: there are no terms beyond 784.
The central binomial coefficient of k is C(2k, k) = (2k)!/k!^2 = (2k)(2k-1)(2k-2)...(k+1)/k!, which contains each prime in the interval [k+1, 2k] with multiplicity 1. Thus, for k >= 1, the number of divisors of C(2k, k) will be an even number. (For k=0, C(2k, k) = 1, whose number of divisors is 1 = 2^0.)
For each term k, the prime factorization of C(2k, k) must be of the form Product_{j=1..J} p_j ^ (2^m_j - 1), where J is the number of distinct prime factors and each m_j >= 1, so that the number of divisors will be Product_{j=1..J} 2^m_j = 2^Sum_{j=1..J} m_j.
No number k can be a term if there exists any prime p such that k < p^2 <= 2k and floor(2k/p) is odd, because the multiplicity of p in C(2k, k) will be exactly 2, so the number of divisors of C(2k, k) will be divisible by 3 (and by 2). This criterion is sufficient to rule out every k in the interval [20124, 10^8], and it seems nearly certain that it also applies for all k > 10^8. A proof that it does apply for all k > 10^8 would also prove that 784 is the final term of this sequence.
LINKS
Table of n, a(n) for n=1..14.
EXAMPLE
k=2 is a term because C(4,2) = 6 = 23, which has 4 = 2^2 divisors.
k=3 is not a term because C(6,3) = 20 = 2^2 5, which has 6 = 23 divisors.
k=784 is a term because the number of divisors of C(1568,784) is 2^172. (Its prime factorization is 2^3 times the product of 170 other primes, each of which occurs with a multiplicity of 1.)
k=2763 is not a term because the number of divisors of C(5526,2763) is 2^499 3^3. (Each of the prime factors of C(5526,2763) has a multiplicity of 2^1 - 1 = 1, 2^2 - 1 = 3, or 2^3 - 1 = 7, with the exception of 59, 71, and 73, each of which occurs with multiplicity 2.)
k=10^50 cannot be a term, since there exist primes p (such as 14142135623730950488016843) such that the unreduced numerator of C(2k, k), i.e., (2k)(2k-1)(2k-2)...(k+1), contains exactly one more factor that is a multiple of p than does the unreduced denominator k(k-1)...321, and one of those multiples of p that occurs in the unreduced numerator is p^2, with the result that p appears in the prime factorization of C(2k, k) with multiplicity 2, so its number of divisors is divisible by both 2 and 3.
PROG
(PARI) ispow2(n) = (n==1) || (n==2) || (ispower(n, , &p) && (p==2));
isok(n) = ispow2(numdiv(binomial(2n, n))); \ Michel Marcus, May 21 2018
CROSSREFS
Cf. A000984 (Central binomial coefficients).
Sequence in context: A288380A369581A146156 A024927A018077A114347
Adjacent sequences: A304913A304914A304915 A304917A304918A304919
KEYWORD
nonn,hard,more
AUTHOR
Jon E. Schoenfield, May 20 2018
STATUS
approved
LookupWelcomeWikiRegisterMusicPlot 2DemosIndexWebCamContributeFormatStyle SheetTransformsSuperseekerRecents
The OEIS Community
Maintained by The OEIS Foundation Inc.
Last modified August 13 10:23 EDT 2025. Contains 386649 sequences.
License Agreements, Terms of Use, Privacy Policy
|
8
|
MITOCW | watch?v=RITcQMokTJs The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Good afternoon. The title here is the topic for our class today. We want to discuss formally the derivation Optical Bloch Equations, but usually when I teach you something, I have a general concept in mind and the concept right now here is, how can we get from unitary time evolution of a quantum system to rate equations anticipation. And this is sort of the subject of master equation open system dynamics, and this cartoon sort of tells you what you want to do.
We have a total system, which is one part we are interested in, and the other one, often it has many, many degrees of freedom, and we don't want to keep track of them. But they have one Hamiltonian. But we are only interested in how our atomic system evolves, and we want to find an equation, which is no longer a Schrodinger equation. How does an initial density matrix describing our system develop with time? And as we will see, in general, it follows a master equation, and we want to discuss what are general principles of such equations.
But it's very important here is that we are not keeping track of what happens in the environment, and that's associated here with a little bucket or trash can. Every result photons or such are measured, the environment is constantly projected on a measurement basis, and therefore, re-introduce probabilistic element into the part we don't observe mainly the part which characterizes the atoms.
So for that we need the formalism of density matrix, and that's where we want to start. At the end of last lecture, I reminded you that the density matrix can always be written as an ensemble where you say you have a certain probability for certain rate function. This is always possible, but it is not unique. So each of you could actually create the same density matrix by preparing a number of quantum states with a certain probability and saying this is my density matrix.
1 And also each of you has prepared different quantum states. If you sum them up in that same way, you get the same density matrix and therefore, all observables or measurements you will do on your ensemble in the future will be identical because the density matrix is a full description of the system. So at that level, it may look sort of very trivial that we have different unraveling. Unraveling means you look microscopically what is behind the density matrix, but in your homework assignment, you will all show that you can have a system which have very, very different dissipation mechanisms, but they're described by the same equation.
So therefore, it's physically not possible to distinguish by just measuring the density matrix what causes a dissipation. Of course, if you know what causes a dissipation, fluctuating fields or collisions, you know more than the density matrix knows. Any questions about density matrix or the agenda we want to go through today?
Just a quick reminder and since this was covered in 8421, and most of you know about it, I pre-wrote the slides. The density matrix is a time evolution. There is one which is sort trivial and covered in more elementary takes, and this is the Hamiltonian evolution, and I'm sure you've seen it. The time evolution, the unitary time evolution involves the commutative of the Hamiltonian with a density matrix, and later on, we want to specialize including dissipation, including the environment to the evolution of a tool evolutionary system driven by a monochromatic field And this the famous Jaynes-Cummings Hamiltonian, and we can characterize this system by a density matrix, but will be very important is that we distinguish between populations, the diagonal parts, and the coherences. And if you simply put this density matrix into this equation, you find the time evolution of the system, and we will refer to that result later on. Many of you have seen it in 8421. we can parametrize the density matrix for the two-level system.
So we can parametrize the densities of the two-level system by a local vector, r, which is defined by this equation. And then the equation of motion is simply the rotation of a Bloch vector on the Bloch sphere, and it has a rotational axis, which is given by-- the undriven system rotates around the z-axis. This is just e to the i 2 omega naught t, the normal evolution of the free system. But if you divide it with a monochromatic field rotation over the x-axis and the x-axis, of course, can take your two-level system, and flip it from the ground to the excited state. So it's just a reminder of the simple unitary time evolution, but now we want to add dissipation on top of it.
And what I've decided that before I discuss with you Optical Bloch Equation and master equation in general, I want to give you a very simple model. I really like simple models which capture the essence of what you are going to discuss. So what I want to use is I want to use a beam splitter model I formulated for photons, but it would also immediately apply to atoms.
And this model, what I like about it, it has all the ingredients of integration of the master equation we'll do later on without the kind of many indices and summations and integration, but it captures every single bit of what is important. And I usually like to present exactly solvable, simple models where you get it, and then I can go a little bit faster for the general derivation because you know exactly what the more complicated equations, what they are doing.
So in other words, what I want to derive for you is we want to have the following situation. We have a beam splitter, and we know everything about beam splitters because we talked about them in the first part of the course, and we have a wave function, which is the input, and this is a photon. And we want to understand after the beam splitter, how has the system evolved. In general, it will be a density matrix, and what you want to find out is, what is the equation for the density matrix.
Maybe this density matrix goes through the next beam splitter and then we want to know what comes out of it. And all we have to apply is the formalism we developed for the beam splitter earlier in the course. Of course, the beam splitter is not as harmless as it looks like. There is another part and another part one here brings in the environment, and for the environment, we will use the vacuum. That's the simplest environment.
It's actually important environment because it is the environment we will use all the 3 time when we discuss spontaneous emission. We send photons into the nirvana, into the vacuum and they disappear, and this is our modified. But the other one which is often not so explicit. If you send your photons away, you're not keeping track of what happens, but you could as well perform a measurement, and this is what we put in here.
We say those photons hit a bucket or detector and measure them, we observe them. There's nothing else we do with them, so we can as well measure them, and this being immediately lead to the equation for the density matrix. So this is what you want to discuss, and it will have all the ingredients later on in a mathematically simple form for the derivation of master equation.
So let's consider a similar photon for that. So the wave function, this is superposition of no photon and 1 photon, and the coefficients are alpha and beta. And just for simplifying notation, I pick alpha and beta to be real.
So what do we expect to happen at the beam splitter? Well, there's a probability that the photon gets reflected. Probability to reflect the photon and therefore to observe the photon. This probability is, of course, beta square-- the probability that we have a photon to begin with-- and then the beam splitter, remember we categorized the beam splitter with angle sine theta cosine theta. Sine theta was the reflection amplitude, cosine theta, the transmission amplitude.
So this probability, which I call P1 is the probability for reflection and for measurement. And now naively you would think what happens after the system has passed through the beam splitter, with a probability of P1, we've measured the photon. We know for sure there is no photon left. The system is in the vacuum state, but then you would say, well, maybe with probability 1 minus P1, we have not measured anything. Nothing has happened to the wave function, and that means the wave function just continuous.
Well as we will see, this is wrong. We are missing something. What we are actually missing is that if you measure nothing, the wave function is not sine. The possibility that we could have measured something, changes the wave function. I will comment 4 on that in much more detail in a few lectures down the road when I derive for you quantum Monte Carlo wave function. I will have a wonderful discussions with you about how does non-observation change a wave function.
So we will talk about the physics behind it in some more detail. Right now, I don't want to get into this discussion. I simply want to use our beam splitter equation, so we can just take the beam splitter equation and apply it.
So our output state is obtained by taking the operator for our beam splitter, and maybe you remember that the propagation for beam splitter was discovered by an operator, which had a dagger b dagger a in the exponent. a and b are the two input nodes. And the angle of the beam splitter, which interpolates between 0% and 100% reflection transmission is theta.
And we're now looking for the output state of the total system. We're not performing the measurement yet, and this is now acting on the total system, which is the cross product of our photon system, of our system of interest. And the other input, which we call the environment or the vacuum, is 0.
Well, look a few weeks back, we have done that all. The output state is, well, there was a probability, alpha, that we had no photon in the state psi. And if we have no photon in the state psi and no photon in the vacuum, this is the state 0, 0. What I denote here with this second place is the environment. And now we have one photon. We have exactly one photon with the amplitude beta, and this photon is split with cosine theta transmitted and with phi theta reflected.
If you transmit it, we have 1, 0. If we reflect it, we have 0, 1. And again, this is the environment, and here is a photon in the environment. So let me just be clear that this is where the environment comes in. It is a vacuum state, and here, this is the output part for the environment. This is where we do the measurement.
And I don't think it matters. I haven't really told you which is mode A, which is mode B. It doesn't matter, but one, let's say the environment is mode B, and the system evolves in mode A. As you can see, I'm using a new program, which has some nicer 5 features in terms of handwriting, but it is a little bit rough in scrolling, so I sometimes have to scroll back and forth.
So what is our output? Now, we have two possibilities. The environment is 0, or the environment is 1, and we perform a measurement. So we have to now go into a probabilistic description. So with probability P1, we have done a measurement, and our output state is now the vacuum state.
With probability 1 minus P1, we have not detected anything in the vacuum, and therefore, our state of the system is alpha 0 plus beta cosine theta 1. Is alpha 0, so it is not beta 1 as naively would have assumed. It's not the original state. There is a cosine theta factor, which we got exactly from the beam splitter from the unitary evolution provided by the beam splitter. And since these state is no longer normalized, I have to normalize it by alpha squared plus beta square cosine square theta.
So now we have done our measurement probability P1 to detect the photon. This projects the system into the vacuum state with the probability 1 minus P1. We have that state. Just one second. Scroll in the pictures. Write that down. In fact, millions that our system is now described by a density matrix with probability P1 and 1 minus 1 minus P1.
With probability P1, we are in the vacuum state, and with probability 1 minus P1, we are in that state, the denormalized state psi naught, which I just hold down.
Question?
AUDIENCE: Have you considered theta to be some like a dynamical phase evolution system. It's very low order like when you expand it the first time it looks almost identical to [INAUDIBLE] quantum effect maybe. The environment is measuring the state in some way, and I mean, it's the lowest order now.
PROFESSOR: Yeah, I Quite agree that random 0 is just an example of it. Pretty much, it's all the same. Yeah. What we do here is, I like the beam splitter because the beam splitter provides an exact formulation of the measurement process. You really can use a 6 beam splitter to discuss what happens fundamentally when you perform a measurement. And the beams splitter is one typical implementation of that, but it has all the features you'll find in any measurement system.
And especially what you observe here, let me just emphasizes is, the fact that we you do not make a measurement is changing the way function from the initial wave function psi to psi naught, we have a factor of cosine theta here. And that's also very general. A measurement perturbs, modifies your wave function no matter what the outcome of the measurement is.
So let me write it down because we want to take it to the next level. So we have now found in terms of the beam splitter, angle theta, and the parameters of the initial state, alpha beta. We found the density matrix after the beam splitter. Yes.
So what is the next step? Our goal is to derive the master equation for the density matrix, the time evolution of the density matrix. So since we want to discuss the time evolution, we want to find a differential equation. So what we want to figure out is, what is the difference between the output density matrix and the input density matrix. The input was, of course, pure state characterized by the matrix population alpha squared and beta squared of diagonal matrix element of alpha beta.
The difference between the density matrix is can just calculate the difference. You can simplify things by applying some trigonometric identities, so this is an exact result, cosine theta minus 1. Here we have alpha beta cosine theta minus 1, and on the diagonal, we have cosine 2 theta minus 1 divided by 2.
Anyway this is an intermediate result. We're interested in the differential equation.
We want to sort of find out what happens when we observe, when we have the density matrix interacting with the environment all the time. And this can be simulated by beam splitters by using many beam splitters with a small degree of reflection.
So we want to simplify this result now for the case of many beam splitters, and each of them has a small tipping angle, theta, and for later convenience, I defined theta 7 to be gamma times delta t over 2. That's just my definition of theta. So what we have in mind now is that we start with the system psi, and we have many such beam splitters with an infinitesimal tipping angle. Each beam splitter has the vacuum at its input state. And we always perform the measurement.
If I take the equation above, which I know you can't see anymore, we find a differential equation for the density matrix, which is we find an infinitesimal change delta over the density matrix, which looks like this. So all I've done is, I've used the equation above, and I've done a Taylor expansion in the small angle theta. And the reason why I brought in the square root, well, we get cosine theta.
The first order Taylor expansion or the lowest order Taylor expansion from cosine is 1 minus theta squared. So I get the square root squared. So I get gamma, which appears here, and then I divide by delta t.
So this is just an exact mathematical expression, and the next step is to form a differential equation. But before I do that, I want to emphasize the two features we are using here. They sort of enter automatically, but these are the two big assumptions we make when we derive a master equation.
The first one is that we always have a vacuum state as the input. So in other words, the environment is always in the same state, which is a vacuum state, and this is sort of called a Bohr approximation. What it means is that we do a measurement here, but the vacuum is not changing. In other words, we are not overloading the vacuum with so many photons that suddenly the vacuum Is no longer in the vacuum state.
Or in the case of spontaneous emission, the vacuum can just take as many photons as you dumping into it. They disappear so quickly that for all practical purposes, the environment stays in the vacuum state. So this is called the Bohr approximation.
The environment is not changing. It has enough capacity you to be modified by the measurement process.
And the second thing which is related is, the vacuum is always in the same state, 8 and there are no correlations from here to here to here, there is no memory effect.
Everything is completely uncorrelated. So the environment is uncorrelated. It has no memory. It's correlation function is a delta function, and this is called Markov approximation. So these are the two effects which are important. One is no memory for the environment, Delta function correlation, Markov approximation, and the two, of course, are related in the environment is all of this in the same state.
So remember this is a change for the density matrix, and alpha beta where the original parameters of the density matrix for the input state. So I can now rewrite everything as a differential equation. The density matrix has a derivative for the diagonal matrix elements and for the coherences. Here we have plus gamma. Here we have minus gamma 0, 1, 1, makes sense because we conserve the trace. We have unity probability that we have a stellar system. And therefore the two diagonal matrix elements, the population, the sum of them cannot change with time.
And for the coherence, we have gamma over 2. And if you're familiar with Optical Bloch Equations, which we derive next, we can say that these means if 0 is the ground state that the ground state changes because-- call it spontaneous emission from the excited state. This equation would say that the excited state decays with the rate gamma, and sometimes you may have wondered about that there are factors of two appearing, which also appears here that when the excited state decays with a rate gamma, we have a factor here for the coherences, which is gamma over 2.
So what we have accomplished in contrast to let's say Einstein's equation with the Einstein a and b coefficient, which lead to rate equations for the population, we have now a new feature. We have an the equation for the coherences, and we find a decay of the coherences with half the rate as a decay of the population. Questions?
So if you want you could rewrite this model for photon, which goes through beam splitters, undergoes measurement. You can rewrite it from atomic wave function and you measure whether the atomics in the excited and ground state and the equation for the measurement performed on the atom is exactly as the equation by 9 which the beam splitter acts on the photon state. So what I've shown here it's very specific for a single photon because I could use simple equations, but everything is what you find in a much more general situation.
So before I give you the general derivation of the master equation, let me talk about what we have learned from this example and what the general procedure is. The first thing is our goal is to find a differential equation for the density matrix of the system.
I just remember. There was one thing I wanted to mention. In the previous derivation with the beam splitter, I started with a purer state, and the purer state developed into a statistical mixture, and this statistical mixture would then transform the next beam splitter into another statistical mixture. I derived the differential equation for you for the first state from the pure state to the statistical mixture. But if you would spend a few minutes, you could immediately show that you can start with an elementary density matrix, look how it evolves through the beam splitter, and you get exactly the same differential equation.
So the general procedure is, we want a differential equation how the density matrix evolves with time. And this will be obtained by finding an operator which acts on the initial density matrix. This operator is not a unitary operator because we are performing measurements through the environment, even if you don't actually perform them. Once we dump something into the environment, it's out of our control and anybody could go and perform a measurement, and so we should assume that this measurement has been taken. It's one of those quantum mechanical things that you don't even have to care whether somebody does it. The environment does it for you.
So this operator is called a Liouvillian operator. It's sometimes called-- and I haven't really traced down why-- a super operator. I know what superconductivity is, but I don't know what the super powers of this operator are; that's just a name which you will find.
The second thing which we have used is that the evolution of this system can be obtained from the time evolution of the total system by performing a trace over the degrees of freedom of the environment. This was exactly what we actually did when we said the system continues with probability P naught in one state and probability P1 in the other state. The operation which led to this density matrix was exactly the partial trace.
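As a concrete, hypothetical illustration of this partial trace, here is a small sketch with a two-level system entangled with a single "photon / no photon" mode of the environment; tracing out the environment turns the pure entangled state into exactly the kind of statistical mixture described above.

```python
import numpy as np

def partial_trace_env(rho_total, dim_sys, dim_env):
    """Trace out the second (environment) factor of a system (x) environment state."""
    rho = rho_total.reshape(dim_sys, dim_env, dim_sys, dim_env)
    return np.einsum('aibi->ab', rho)

# Entangled state (|0>_sys |no photon> + |1>_sys |photon>) / sqrt(2)
psi = np.zeros(4, dtype=complex)
psi[0] = 1 / np.sqrt(2)   # system 0, environment "no photon"
psi[3] = 1 / np.sqrt(2)   # system 1, environment "photon"
rho_total = np.outer(psi, psi.conj())

print(partial_trace_env(rho_total, 2, 2))
# -> [[0.5, 0], [0, 0.5]]: a statistical mixture with P0 = P1 = 1/2;
#    the coherences are gone once the environment is traced out.
```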
So this is the second general feature which we have to implement. Thirdly, if we could do one and two exactly, we would have an exact formulation for a small part of a quantum system no matter how complicated the environment is. In practice, we can solve the equations only when we make simplifying assumptions about the environment. One is, it is large, and more important, therefore, it's unchanging, and this is the Born approximation.
And the second feature is, it has a short correlation time, tau c. In the beam splitter, I have made the assumption that there is no correlation between different beam splitters in the derivation, which I want to walk you through. Right now, you will see explicitly where the correlation times enter. And this is called the Markov approximation.
And finally, this is number four, the whole possibility to derive a master equation hinges on the fact that we have different time scales which are very different. We are interested in the evolution of our system. We want to know how it relaxes, and this is on a time scale 1 over gamma.
So we call this slow. We are interested in the variation of our system, the atomic system or the photon state which passes through the beam splitter, and this time scale has to be much slower than the fluctuations of the environment. So therefore, if the environment has fluctuations, which in the beam splitter model were assumed to have zero correlation time, a delta function in time, and if that correlation time is much smaller than the time it takes for the system to relax and to evolve, that opens a window delta t, and this is the time scale of the master equation.
Just to give you one example: for spontaneous emission, the correlation time, tau c, would be the time it takes the photon to disappear from the atom. And the photon has disappeared from the atom when it is one wavelength away. So typically, the correlation time of the vacuum for spontaneous emission is one cycle of the optical frequency. It's very, very fast. Whereas typical decay times of excited states are a nanosecond. It's six orders of magnitude slower, and this is what we describe.
But on a time scale of a femtosecond, of one optical cycle, the photon has not detached from the atom, and it could actually go back to the atom. During that time-- we talked a little bit about it when we did the diagrammatic discussion of resonance scattering for very, very early times-- you don't have exponential decay, because you cannot do the approximations where we approximated the kernel by something which was completely energy independent. And what happens at such short times, which we encounter here again, is that we will not have a simple description of the system.
So for the last point, let me summarize. Our goal is to describe the density matrix of the system, and we want to find the Liouville operator or some matrix which acts on it. And because we will integrate over time steps which are larger than the correlation time of the environment, we can also call it a coarse-grained evolution. Any questions about that?
I really like the discussion, the derivation of the master equation, as it is presented in Atom-Photon Interactions. But it is presented on more than 50 pages with many, many equations. So after giving you all of the principles, all of the concepts, I want to go with you now over those equations and point out how the principles which we encountered with the beam splitter are now implemented in a very general context.
I will not be able to give you all of the mathematical aspects of it, but I think by now you know that the book Atom-Photon Interactions is actually wonderful. You can get a lot of conceptual information out of it by looking at the equations without understanding every technical detail. So I would really encourage you, if there is something which piques your interest, and I hope there will be things which you'll find very interesting, that you go to the book and read it. I'm following it exactly; actually, I use copies of the book.
So we have a Hamiltonian, which is describing the atomic system. It describes the reservoir and then there is an interaction. We keep it very general here, but you may always think well, the atom is your favorite two-level system. The environment is maybe the vacuum with all its possible modes, and the interaction is the dipole interaction or the a dot p interaction.
So we start out with an equation which is nothing else than Schrodinger's equation for the density matrix: the time derivative of the density matrix is the commutator with the Hamiltonian. But it is often useful, and you've seen it many times, to go to the interaction representation, where the time dependence due to the unperturbed part of the Hamiltonian is absorbed in a unitary transformation. So therefore, this density matrix in the interaction representation evolves not with h, because h naught is taken care of; it only evolves due to the coupling between the two systems, between the system and the environment.
So now this equation, we are interested in a time step delta t. And this time step, delta t remember, we want to coarse grain, will be larger than the correlation time of the reservoir, and you will see exactly where it comes about. So we want to now do one of those coarse-grain steps. We take this equation and we integrate from time t to time t plus delta t.
So this is exact here. But now we want to iterate, and that means the following. We have expressed the time step in the density matrix by having the density matrix there. But now we can do first-order perturbation theory: we do one step, and we get the second-order result by plugging the first-order result into this equation.
It's the same as we have seen with our diagrams and such. We have an exact equation. It's useless unless we do something, and what we do is, we realize that we can iterate it, because the part we don't know involves one more occurrence of the interaction potential. And when you plug the nth order solution in here, you get the (n plus 1)st order solution. And this is exactly what is done here.
And I skipped a few equations here. This is what is done here, number one. And number two is, we are interested in the system, not in the reservoir, so therefore we perform the trace over the reservoir. And the trace over the reservoir, for the photon at the beam splitter, meant we say we have two possible states: we detect a photon or not.
And for the system, we have now a density matrix which is probability P naught in one state, probability P1 in the other state. And this is exactly what the operator partial trace does.
Remember also-- I want to really make sure that you recognize all the structures-- the time evolution of a density matrix was a commutator with h, but in the interaction picture, it's a commutator with v. But since we are putting the first-order result in here, the second-order result is now the commutator of v with the commutator of v and rho. It's just that we have iterated one more time.
So the sigma tilde, the density matrix for our system, where the tilde means in the interaction picture, is now the partial trace over the reservoir of the total density matrix. And the important part here is that it is exact. We have not done any approximation here. Any questions?
Of course, now we have to make approximations, because we cannot solve an interacting problem exactly. The first one is-- well, what do we want to do in the end?
We want to keep the first non-trivial term, but to the extent possible, we want to factorize everything. We want to get rid of the entanglement of the environment with the system and only keep, sort of, the minimum which is provided by the coupling.
So this evolves as follows. The interaction we assume is a product of two operators.
One operator acts on the system, one operator acts on the environment. So this could be the dipole operator acting on the atom and the vacuum field, E, acting on the environment, or it could be p dot A. Or maybe your system has a magnetic moment, m, and the environment consists of fluctuating magnetic fields.
So we'll pretty much find in every kind of measurement that a measurement involves the product of two operators. One is an operator for your system and one is an operator for the reservoir, the environment. And so this is one thing we want to use, and now there is one thing which, the moment we put it into our equations, will naturally appear. Let me scroll back.
What we have here is the interaction operator, v, at two different times. So this means something happens at two different times, and we integrate over those times. This is a correlation function, a correlation function between v at the time t prime and at the time t double prime. And since the reservoir part of this interaction is the operator r, what we have here now is a correlation between the operator r at two times, which characterizes the environment.
And now comes an important approximation. You remember I said we want to assume that the environment has a very short correlation time. Whenever a photon is emitted, it disappears dramatically fast. It disappears in one optical cycle, and the environment is, so to speak, reset; it's back in the vacuum state. So this is now expressed here: this product over which we take the partial trace has a very short coherence time.
And the fact is now the following. We are integrating over a coarse-grained step delta t, but this correlation function goes to 0 in a very short time, so it will not contribute a lot. Let me write that down. What we are going to approximate is that our total density matrix now approximately factorizes into a density matrix describing the atomic system-- well, we describe the atomic system when we trace out the environment.
We describe the environment when we trace out the atomic part. And if we now form the direct product, we are back to the total system, but we have factorized the total density matrix into two parts. What we neglect here is the part which cannot be factorized, which is the correlated part of it. But what happens is, since we are integrating over time steps delta t and the correlations decay in a very, very short time, the result is that this complicated part, which we could never calculate, is smaller than the first part by the ratio of the time over which the correlations contribute to the time step delta t we are going to take.
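Schematically, and simply restating the statement just made in a formula, the factorization and the smallness of the neglected correlated part can be written as:

```latex
\rho_{\mathrm{total}}(t)\;\approx\;\sigma(t)\otimes\rho_{R}\;+\;\rho_{\mathrm{corr}}(t),
\qquad
\frac{\text{contribution of }\rho_{\mathrm{corr}}}{\text{contribution of }\sigma\otimes\rho_{R}}
\;\sim\;\frac{\tau_c}{\Delta t}\;\ll\;1 .
```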
So this is a very critical assumption. There is a whole page or two in the book Atom-Photon Interactions where they discuss the validity of this assumption, but I've given you the physical motivation: we integrate over a much larger time, and if this time is large and the correlations are lost after a short time, they only contribute to the result with this small parameter.
So in other words, this means the following: we have an interaction between the environment and the system. We write it down in second order, but the second-order result is now evaluated by factorizing the density matrix into our system and the reservoir. So in that sense, if you factorize something, it looks as if it's not interacting, but the trick is the same.
You write down something to first and second order, and once you have factored out the important physics, now you can evaluate the expression by using an approximation, which is now the approximation that the density matrix factorizes. So with that, this is the approximation that we have made that the correlation time is very short.
And now we have a differential equation for the density matrix sigma, which describes our atomic system. We have traced out the degrees of freedom of the reservoir. And now we want to insert B.17. You probably don't remember what B.17 is. It says that the interaction operator is a product of a, the operator for the atom, and r, the operator for the reservoir.
So the reservoir part, at tau prime and tau double prime, gives a correlation function. This is the correlation function between the operator r at two different times, and the part which acts on our system, the a part, is explicitly kept here.
So this is now a general master equation. It tells us the time evolution of the density matrix in this form. It looks very complicated, but this is because it's very general. In order to bring it into an easier form, we now want to introduce a basis of states, energy eigenstates of the unperturbed system, and write down all of these operators in such a basis of states.
But anyway you saw here how we had an exact equation, and the main approximation we made is that the operator acting on the environment has a very short correlation time. Any questions?
Well, you're only a few minutes away from reducing this result to Fermi's golden rule, which you have known for a long, long time. It's just that we have made very general assumptions. You see sort of how the assumptions propagate, but now, if you write it down in an energy eigenbasis, you will immediately see results you have probably known since your childhood.
So we want to have energy eigenstates of the atomic operators, so this is sort of ground and excited state if you think about a two-level system. The previous equation, I have to go back to it. Our previous equation is a differential equation for the density matrix here, and here is the density matrix.
So now we formulate this equation into an energy eigenbasis, and what do we get?
Well, we get an equation for the matrix elements, and what matrix elements are important? Diagonal matrix elements, which are populations, and off-diagonal matrix elements, which are coherences. So we pretty much take this equation, use the energy eigenbasis, and look at what we get for the populations and what we get for the coherences.
So the structure is now the following. That we have our matrix elements ab. There is one part which looks like a unitary time evolution. This is what comes from the Hamilton operator. This is sort of the-- we'll see that in a moment-- but this is the time evolution without relaxation and now we have something here which are generalized relaxation coefficients. And you will find if you go further above that those relaxation coefficients are directly related to the correlation function of the reservoir.
So we can now specify what happens between populations. Population means that we have a differential equation, let's say, between sigma aa, the population, and sigma cc, so we have a rate coefficient which connects the population in state a with the population in state c.
And if you take this expression, you find several things. Well, you find Fermi's golden rule, in a generalized way-- that's always nice-- you find Fermi's golden rule.
When you integrate over time, you often get a delta function, and you expect to get a delta function because of energy conservation. So you get that, of course, naturally.
Secondly, we have second order matrix element, which you know from Fermi's golden rule, but now we have the following situation that the matrix element in Fermi's golden rule may actually depend on the state, mu, of the environment. So you have maybe 10 different possibilities for the environment, and Fermi's golden rule gives you spontaneous emission, which is different for those 10 states. And naturally, since we have performed the partial trace over the environment, we have all those rates weighted with the probability that the environment is in one of those states.
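A minimal sketch of this weighted golden-rule structure (the numbers are assumptions for illustration, not from the lecture): each environment state mu has its own rate, and the rate entering the master equation is the average weighted by the probability that the environment is in that state.

```python
import numpy as np

p_mu     = np.array([0.5, 0.3, 0.2])   # probabilities of environment states (assumed)
gamma_mu = np.array([1.0, 2.0, 0.5])   # golden-rule rate if the environment is in state mu (assumed)

gamma_avg = np.sum(p_mu * gamma_mu)    # the rate coefficient appearing after the partial trace
print(gamma_avg)                       # 1.2 in this made-up example
```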
So what you find here is a simple generalization of Fermi's golden rule. And if you look at the off-diagonal matrix elements, for instance, you want to know what is this rate, what is the rate coefficient, which gives you the time derivative of the coherence, and it's multiplied with the coherence. You'll find now that in general, this rate coefficient has a damping term, but it may also have an imaginary term.
And I hope you remember, when we played with diagrams, that we had something similar. There was something which we called the radiative shift. I called it the AC Stark effect of a single photon, and here it is a level shift which comes about because the environment interacts with your system and shifts the levels a little bit.
So in addition to just relaxation, spontaneous emission, and damping, there is also a dispersive part, a level shift, and it has exactly the same structure. Let me add that delta ab is the difference between the shift of state b and the shift of state a. And those shifts have exactly the same structure. You have to take the principal part of something which has 1 over the difference of energies, and we discussed that this has to be understood by somewhere adding an infinitesimal imaginary part and doing the right thing with complex functions. It's actually related to the difference between the Laplace transformation and the Fourier transformation.
So anyway what I find sort of beautiful is that we started with a most general situation. We perform the partial trace. We made one assumption of short correlation times, and a lot of things we have known about quantum system just pops out in a very general form here. Any questions about so far?
Well, the coherences are, of course, more interesting than the population.
Coherence is always something physicists get excited about, because it captures something which often goes beyond classical systems: that we have quantum mechanical coherences. And what happens is that the coefficient here, which provides the damping of the coherence and just comes out of the formalism, has two parts. One part is an adiabatic part and the other one is a non-adiabatic part.
Well, and that makes sense. If you have two quantum states and there is a coherence, some phase between the two, the phase can get lost if you do a transition between the two states or if one state undergoes a collision and is quenched. So you definitely have one part which is due to the fact that the quantum states or the populations change, and you find that there is this state-changing part, which is pretty much the sum of all the rate coefficients leading out of the states, leading to the decay of state a and to the decay of state b.
In other words, if you have a two-level system, which has a coherence and you have decay of the excited state and decay of the ground state, you would expect that those decay terms appear also in the decay of the coherence between the two levels and they do, and they appear with the correct factor of 1/2.
But there is another possibility, and this is the following. You can have no [INAUDIBLE] of the population of the state, but you can still lose the coherence. The model you should maybe have in mind is that you have spin up, spin down. You are not perturbing the populations of spin up and spin down, but the environment provides a fluctuating magnetic field. Then, due to the fluctuating magnetic field, you can no longer keep track of the phase, and that means in your density matrix the off-diagonal matrix elements decay.
And we find that here this is the second part which in this book is called the adiabatic part, and the physics behind it is now pure de-phasing. So it's an independent way for coherences to decay independent of the decay of the population. Questions? Collin.
AUDIENCE: Where does the Markov approximation come in?
PROFESSOR: The Markov approximation is, so to speak, the delta function approximation, which would say that-- I mean, I introduced the correlation function of the reservoir operator and said the correlation time tau c is very, very short. The Markov approximation would say it in a more radical way: the correlation time is 0.
And the Born approximation, the fact that the reservoir is unchanged, came in when we said the total density matrix for the second-order expression just factorizes. It factorizes into the environment, which is just the density matrix of the environment.
It's not changed by the interaction with the system. And this is the Born approximation. We just use the same expression for the reservoir, independent of the measurements the reservoir has done.
Other questions? Yes, Nancy.
AUDIENCE: [INAUDIBLE].
PROFESSOR: This is something very general. Thank you actually for the question. Whenever we have some damping of the population, the coherence is only damped with a factor of 1/2. One way to explain it in a very simple way is, that if you have an amplitude alpha excited and alpha ground, the population in the excited state is this squared.
So you sometimes make the model that the amplitude decays with gamma over 2, but the population, because you take the square, decays with gamma.
So you would say alpha e and alpha g both decay with half the rate, but the probability decays with the full rate, and the coherence is the product of the amplitudes.
So therefore, when you look at the coherence, this decays with 1/2 gamma e, this decays with 1/2 gamma g, and this is what you get here: 1/2 gamma in state a, 1/2 gamma in state b.
Whereas the probability to be in this state decays with twice that, because the probability is the amplitude squared. So you find that pretty much in any quantum mechanically correct derivation which you do about the decay of populations and coherences. Other questions?
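In equations, a sketch of the bookkeeping just described, assuming simple exponential decay of the amplitudes:

```latex
\alpha_e(t)=\alpha_e(0)\,e^{-\gamma_e t/2},\qquad
\alpha_g(t)=\alpha_g(0)\,e^{-\gamma_g t/2},
\\[4pt]
\rho_{ee}(t)=|\alpha_e(t)|^{2}=\rho_{ee}(0)\,e^{-\gamma_e t},\qquad
\rho_{eg}(t)=\alpha_e(t)\,\alpha_g^{*}(t)=\rho_{eg}(0)\,e^{-(\gamma_e+\gamma_g)\,t/2}.
```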
So we have done two things. We have done the very, very simple derivation using the beam splitter model where you may not even notice where I did the Markov approximation because I jumped from beam splitter to beam splitter and left all the correlations behind. Here in the most general calculation, you have seen exactly where it enters, but maybe now you have the full forest in front of you, and you don't recognize the trees anymore. So let me wrap up this lecture by now focusing on the system we want to discuss further on, namely a two-level system interacting with a vacuum through spontaneous emission.
But I also want to make some generalizations. I want to give you some generalizations about what kind of environments are possible in quantum physics.
Let me just see how I do that. So this part I actually owe to Professor [? Ikschuan ?] who wonderfully compiled that.
So what I want to do now is, I want to call your attention to the operator form, which is rather unique. Remember, when we did second-order perturbation theory, we had sort of the commutator of v with the commutator of v and rho. This came from iterating the exact equation of motion for the density matrix. And we want to specialize that now to the Jaynes-Cummings model. I mean, in the end, at least in this course, we always come back to the Jaynes-Cummings model because it captures a lot of what we want to explore.
So the Jaynes-Cummings model in the rotating wave approximation is very simple.
It raises the atom from ground to excited state and destroys a photon, or it does the opposite. So this is our simple interaction between our system, the two-level atom, and our reservoir, which is just the vacuum of all the modes.
And you, again, recognize what I said in general. You pretty much always find a bilinear form: an operator which acts on the modes, on the reservoir, on the vacuum, and an operator which acts on the atomic system.
Now, we want to make the explicit assumption that the initial state of the reservoir is the vacuum state. It's empty. And I want to show you what the structure of the operators we obtain is. And so if you put v in here, you have first the commutator with rho, which I write down here, and then we have to take another commutator with v. And the result of that is the following: when it comes to relaxation processes, based on the general structure of the time evolution of quantum mechanics, we have this double commutator.
And the operator which couples our system to the environment is a raising and a lowering operator. I mean, the atom, because it interacts with the environment, either absorbs a photon or emits a photon. But those operators, sigma plus and sigma minus, now always appear as products, because we have two occurrences of the interaction, v. And if you look at the double commutator structure, the operator sigma plus sigma minus appears, and the general structure of this double commutator is that it appears to the left side of the atomic density matrix, to the right side of the atomic density matrix, and then there is the term where the atomic density matrix is sandwiched in the middle of the two.
So this is actually something which is very general and very important in the theory of open quantum systems. What I'm discussing with you now is this famous Lindblad form. And the story goes like this. You want to know what the possible environments are, not just the empty vacuum. You can have fluctuating fields. You can have, you name it. But if you are saying that your environment interacts with your system through an operator, and our operator is now the operator sigma minus, which is spontaneous emission, you need unitarity.
The mathematical requirement for a valid quantum time evolution is that if your system interacts with an environment by emitting a photon, sigma minus, this is now the structure of the master equation. This is the structure of the time evolution of the density matrix. So the operator sigma minus and its Hermitian conjugate sigma plus have to appear in this combination. Yes, Collin.
AUDIENCE: This is still in my interaction picture. Right? There's no dynamical phase evolution that we put back in there.
PROFESSOR: Yeah. OK. What we do in general if the system is driven by a laser beam, for instance, Rabi oscillation, we simply add up the dynamics of the Rabi oscillation of the unitary time evolution to the time evolution done by the reservoir.
AUDIENCE: So this form is always in the interaction picture?
PROFESSOR: Well, this is, you would say, this form is the relaxation provided by the environment. And if you drive the system in addition with a coherent field, a unitary time evolution, you would add it to it. So in other words, what I'm telling you here is that this is the general structure, and if you have a system which interacts with an environment in five different ways, with a dipole, with a magnetic moment, and such, you have maybe five interaction terms, and then you have to perform the sum over five operators, and here one of them is the sigma minus operator.
So in other words, if you want to know what the whole world of possibilities is for a quantum system to relax and dissipate with an environment, you can pretty much take any operator which acts on your system, but then put it into this so-called Lindblad form, and you have a possible environment. And I mentioned, I think last week, that people in our field are now actively working on environmental engineering. They want to expose a system to an artificial environment and hope the system relaxes not, let's say, to a boring ground state, but to a fancy correlated state.
So what this Lindblad form, with the operators appearing in this way, ensures is the following. Just imagine we have an equation, a derivative of the density matrix, which depends on the density matrix. You could write down such a differential equation and ask, is it possible? Well, it has to be consistent with quantum physics. You have certain requirements.
One requirement is that rho, the density matrix, always has to remain a density matrix.
The trace equal to 1 has to be conserved. A density matrix must always have non-negative eigenvalues; otherwise, what you write down might be a nice differential equation, but quantum mechanically, it's nonsense.
But now there is one more thing, which is also necessary. This time evolution of the system's density matrix must come from a unitary time evolution of a bigger system.
So you must be able to extend your system into a bigger system, which is now the environment, and this whole system must follow a Schrodinger equation with a Hamilton as a unitary time evolution.
And this is where it's restrictive. You cannot just write down a differential equation and hope that it will fulfill these requirements, and what people have shown is that, under very general assumptions, it is the Lindblad form which allows for it. So such an operator always has to appear in this form.
So often in this Lindblad form, you have an operator which is called a jump operator, which is responsible for the measurement which the environment does on your system. The jump operator here is the operator which takes the atom from the excited to the ground state. With that, a photon is emitted, and the photon can be measured.
So often you can describe a system by a jump operator, and if the jump operator is put into this Lindblad form, then you have a valid master equation for your system.
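Here is a minimal numerical sketch of that Lindblad structure for the jump operator sigma minus (an illustration only; the basis ordering |g>, |e> and the value of gamma are assumptions):

```python
import numpy as np

sm = np.array([[0, 1],
               [0, 0]], dtype=complex)      # jump operator sigma_minus = |g><e|
sp = sm.conj().T                            # sigma_plus

def lindblad_dissipator(rho, gamma=1.0):
    """gamma * ( sm rho sp - 1/2 {sp sm, rho} ): the Lindblad form discussed above."""
    return gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))

rho = np.array([[0.25, 0.40],
                [0.40, 0.75]], dtype=complex)   # some valid two-level density matrix

print(lindblad_dissipator(rho))
# The excited-state population (e,e element) decreases at rate gamma, the
# ground-state population increases by the same amount, and the coherences
# decay at gamma/2, reproducing the structure derived earlier in the lecture.
```

Adding the commutator with a drive Hamiltonian to this dissipator gives the full master equation discussed next.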
So let me wrap up.
If you take now the definition of the raising and lowering operators, and you take the form I showed you, the Lindblad form, you'll find now this differential equation for your two-level system. And this is one part of the Optical Bloch Equations. Now, coming to Collin's question, if you include the time evolution due to the classical field, the coherent evolution of the Bloch vector which I showed at the beginning of the class, and we add this to what I wrote down at the beginning of the class.
Then we find the famous Optical Bloch Equations in the Jaynes-Cummings model.
So these are now the Optical Bloch Equations, and I hope you appreciate now, after this complicated discussion, how simple they are. And it is this simple set of equations which will be used in the rest of the course to describe the time evolution of the system.
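As a sanity check, here is a minimal sketch that integrates optical Bloch equations for a resonantly driven two-level atom, written for the Bloch-vector components u, v, w; the Rabi frequency and decay rate are assumed values, and the sign conventions follow one common textbook choice rather than necessarily the lecture's:

```python
Omega, gamma = 2.0, 1.0          # Rabi frequency and decay rate (assumed, on resonance)
dt, T = 1e-3, 10.0

u, v, w = 0.0, 0.0, -1.0         # Bloch vector, starting in the ground state
for _ in range(int(T / dt)):
    du = -(gamma / 2) * u
    dv = -(gamma / 2) * v + Omega * w
    dw = -gamma * (w + 1) - Omega * v
    u, v, w = u + du * dt, v + dv * dt, w + dw * dt

print(w)   # damped Rabi oscillations settling near w = -gamma**2 / (gamma**2 + 2 * Omega**2)
```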
Just because I made some generalizations about the Lindblad equation, I copied the most general Lindblad equation from Wikipedia into the lecture notes, and probably now you sort of understand it. It has a Hamiltonian part, and then it has jump operators, like your sigma minus operator, but they have to come in the form that the jump operator and its Hermitian conjugate appear on the left side, on the right side, and to the left and right of your density matrix. So this is the generalization I mentioned.
Yeah, with that I think we've derived the master equation, and on Wednesday we will look at rather simple solutions, transient and steady-state solutions of the Optical Bloch Equations. Any questions?
One reminder about the schedule: this week we have a lecture on Friday, because I will not be in town next week on Wednesday. And of course, you know, today in a week, next week on Monday, it's [INAUDIBLE] day. So we have three classes this week, no class the following week, and then the normal schedule for the rest of the semester.
The Bulletin of Symbolic Logic, Volume 27, Number 1, March 2021
STRONG COLORINGS OVER PARTITIONS
WILLIAM CHEN-MERTENS, MENACHEM KOJMAN, AND JURIS STEPRĀNS
Abstract. A strong coloring on a cardinal κ is a function f : [κ]² → κ such that for every A ⊆ κ of full size κ, every color γ < κ is attained by f ↾ [A]². The symbol κ ↛ [κ]²_κ asserts the existence of a strong coloring on κ.
We introduce the symbol κ ↛_p [κ]²_κ which asserts the existence of a coloring f : [κ]² → κ which is strong over a partition p : [κ]² → θ. A coloring f is strong over p if for every A ∈ [κ]^κ there is i < θ so that every color γ < κ is attained by f ↾ ([A]² ∩ p⁻¹(i)).
We prove that whenever κ ↛ [κ]²_κ holds, also κ ↛_p [κ]²_κ holds for an arbitrary finite partition p. Similarly, arbitrary finite p-s can be added to stronger symbols which hold in any model of ZFC. If κ^θ = κ, then κ ↛_p [κ]²_κ and stronger symbols, like Pr₁(κ, κ, κ, θ)_p or Pr₀(κ, κ, κ, ℵ₀)_p, also hold for an arbitrary partition p to θ parts.
The symbols ℵ₁ ↛_p [ℵ₁]²_ℵ₁, ℵ₁ ↛_p [ℵ₁ ⊛ ℵ₁]²_ℵ₁, ℵ₁ ↛_p [ℵ₀⊛ℵ₁ ⧸ 1⊛ℵ₁]²_ℵ₁, Pr₁(ℵ₁, ℵ₁, ℵ₁, ℵ₀)_p, and Pr₀(ℵ₁, ℵ₁, ℵ₁, ℵ₀)_p hold for an arbitrary countable partition p under the Continuum Hypothesis and are independent over ZFC + ¬CH.
§1. Introduction. The theory of strong colorings branched off Ramsey theory in 1933 when Sierpiński constructed a coloring on [R]² that contradicted the uncountable generalization of Ramsey's theorem. For many years, pair-colorings which keep their range even after they are restricted to all unordered pairs from an arbitrary, sufficiently large set were called "bad"; now they are called "strong."
Definition 1. Let λ ≤ κ be cardinals. A strong λ-coloring on κ is a function f : [κ]² → λ such that λ = ran(f ↾ [A]²) for every A ∈ [κ]^κ.
Received February 19, 2020.
2020 Mathematics Subject Classification. Primary 03E02, 03E17, 03E35, Secondary 03E50.
Key words and phrases. strong coloring, Ramsey theory, Generalized Continuum Hypothesis, forcing, Martin axiom.
© The Author(s), 2021. Published by Cambridge University Press on behalf of Association for Symbolic Logic.
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence ( which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
1079-8986/21/2701-0003 DOI: 10.1017/bsl.2021.5
By Ramsey's theorem there are no strong λ-colorings on ω for λ > 1.
Sierpiński constructed a strong two-coloring on the continuum and on ℵ₁.
Assertions of existence of strong colorings with various cardinal parameters are conveniently phrased with partition-calculus symbols. The (negative) square-brackets symbol κ ↛ [κ]²_λ asserts the existence of a strong λ-coloring on κ. Recall that the symbol for Ramsey's theorem for pairs, ω → (ω)²_n, reads "for every f : [ω]² → n there is an infinite subset A ⊆ ω such that f ↾ [A]² is constant (omits all colors but one)." The square brackets in place of the rounded ones stand for "omits at least one color"; with the negation on the arrow, the symbol κ ↛ [κ]²_λ means, then, "not for all colorings f : [κ]² → λ at least one color can be omitted on [A]² for some A ⊆ κ of cardinality |A| = κ." That is, there exists a strong λ-coloring on κ.
When 2 is replaced with some d > 0 the symbol states the existence of an analogous coloring of unordered d-tuples. As Ramsey’s theorem holds for all finite d > 0, strong d-dimensional colorings can also exist only on uncountable cardinals. In what follows we shall address almost exclusively the case d = 2.
Definition 2. Given a coloring f : [κ]² → λ, a set X ⊆ [κ]² is f-strong if ran(f ↾ X) = ran(f).
The collection of f -strong subsets of [κ]2 is clearly upwards closed and not necessarily closed under intersections.
Different square-bracket symbols require that different families of sets are f -strong with respect to the coloring f whose existence each symbol asserts.
The symbol above asserts the existence of f such that every κ-square, that is, every [A]2 for some A ∈[κ]κ, is f -strong. A (, κ)-rectangle in [κ]2 is a set of the form A ⊛B = {{α, } : α < < κ, α ∈A and ∈B}. Every κ-square contains a ( 1, 2)-rectangle if 1 ≤ 2 ≤κ; the symbol κ ↛[ 1 ⊛ 2]2 , which asserts the existence of f : [κ]2 → such that every ( 1, 2)-rectangle A ⊛B ⊆[κ]2 is f -strong, is, then, stronger than κ ↛[κ]2 .
The next two strong-coloring symbols go beyond specifying which sets ought to be f -strong. They require the existence of certain patterns in the preimage of each color.
Definition 3.
(1) A coloring f : [κ]2 → witnesses the symbol Pr1(κ, , , ) if for every < and a pairwise disjoint family A ⊆[κ]< of cardinality |A| = and color γ < there are a, b ∈A with max a < min b such that f(α, ) = γ for all α ∈a and ∈b. The quantified above is needed only in the case that ≥cf(κ), which received attention very recently. When < cf(κ) we omit from the definition and require only that A ⊆[κ]< instead.
(2) A coloring f : [κ]2 → witnesses the symbol Pr0(κ, , , ) if for every < , a pairwise disjoint family A ⊆[κ] of cardinality |A| = and a matrix {γi,j : i, j < } ⊆ there are a, b ∈A with max a < min b such that f(α(i), (j)) = γi,j for all i, j < , where a(i), b(j) are the ith and jth elements of a and of b, respectively, in increasing order.
For > 2 and ≥ℵ0, Pr1(κ, , , ) implies κ ↛[ ]2 (see 8 below). If < cf( ) then Pr0(κ, , , ) implies Pr1(κ, , , ).
Let us conclude the introduction with the remark that some authors use the term “strong coloring” only for colorings which witness Pr1 or a stronger symbol.
§2. A brief history of strong colorings. Strong κ-colorings on various cardinals κ were constructed by Erd˝ os, Hajn´ al, Milner and Rado in the 1950s and 1960s from instances of the GCH. For every cardinal κ they were able to construct from 2κ = κ+ colorings f : [κ+]2 →κ+ which witnessed κ+ ↛[κ ⊛κ+]2 κ+, and even colorings which witnesses the stronger κ+ ↛[κ⊛κ+ ⧸ 1⊛κ+]2 κ+ whose meaning is that inside every (κ, κ+)-rectangle A ⊛B ⊆κ+ there is a (1, κ+)-rectangle {α} ⊛B ⊆A ⊛B such that ran(f ↾({α} ⊛B)) = κ+ (see Section 49 in ). A coloring f[κ+]2 →κ+ witnesses this symbol if and only if for every B ∈[κ+]κ+, for all but fewer than κ ordinals α < κ+ the full range κ+ is attained by f on the set {α} ⊛B = {{α, } : α < ∈B}.
Galvin , who was motivated by the problem of productivity of chain conditions and by earlier work of Laver, used 2κ = κ+ to obtain a new class of two-colorings, which in modern notation witness Pr1(κ+, κ+, 2, ℵ0), and used these colorings for constructing counter examples to the productivity of the κ+-chain condition. A straightforward modification of Galvin’s proof actually gives Pr1(κ+, κ+, κ+, ℵ0) on all successor cardinals from 2κ = κ+.
A remarkable breakthrough in the theory of strong colorings was the invention of the method of ordinal-walks by Todorčević (or, as it was originally called, minimal walks). Todorčević applied his method to construct strong colorings on all successors of regulars in ZFC with no additional axioms. With the same method Todorčević got in ZFC the square bracket symbol for triples 2 ↛[1]3 and proved that 2 ↛[1]2 1 is equivalent to the negation of the (ℵ2, ℵ1) Chang conjecture. The rectangular
symbol κ+ ↛[κ+ ⊛κ+]2 κ+ has been obtained since in ZFC on all successors of uncountable regular cardinals κ by Shelah via further developments of ordinal-walks. Moore developed ordinal-walks further and provided the missing κ+ = ℵ1 case. Rinot and Todorčević present a unified proof of the rectangle version for all successors of regulars with a completely arithmetic oscillation function.
Shelah, following Galvin , phrased the strong coloring relations Pr1(κ, , , ) and Pr0(κ, , , ) (and a few more!) and proved Pr1(κ++, κ++, κ++, κ) for every regular κ in ZFC . Shelah also proved a criterion for stepping up from Pr1 to Pr0: if Pr1(κ, κ, , ) holds, = < and there is some “interpolant” cardinal such that < ≤, 2 ≥κ and cf(κ) > <, then Pr0(κ, κ, , ) holds (Lemma 4.5(3), p. 170 of ). In particular, chosing = as the interpolant, Pr1(+, +, +, ℵ0) ⇒Pr0(+, +, +, ℵ0) for every cardinal ; so for all regular cardinals κ, Pr0(κ++, κ++, κ++, ℵ0) holds in ZFC (Pr1(ℵ1, ℵ1, 3, ℵ0) cannot hold in ZFC because under MA the product of two ccc spaces is ccc). See the survey in for more background on strong colorings and non-productivity of chain conditions.
On successors of singulars, Todorˇ cevi´ c proved that the pcf assumption pp( ) = + for a singular implies + ↛[ +]2 +. Shelah proved Pr1( +, +, cf( ), cf( )) for every singular (4.1 p. 67 of ). Eisworth proved Pr1( + +, +, cf( )) from pp( ) = +. Then Rinot, building on Eisworth’s [9, 10], proved that for every singular , Pr1( +, +, +, cf( )) holds iff + ↛[ +]2 + holds. In particular, via Shelah’s criterion, Pr0( +, +, +, ℵ0) ⇐ ⇒ + ↛[ +]2 + for all singular . Quite recently, Peng and Wu proved in that Pr0(ℵ1, ℵ1, ℵ1, n) holds for all n < outright in ZFC.
The most recent progress on strong colorings is made in a series of papers by Rinot and his collaborators. The result in , shown to be optimal in Theorem 3.4 in , establishes the property Pr1(, , , ) for regular > + from a non-reflecting stationary subset of composed of ordinals of cofinality ≥ (using a new oscillation function called Pℓ6). In , Rinot gets the same result from □(), thus establishing that if = cf() > ℵ1 and the -chain condition is productive, then is weakly compact in L. Then Rinot and Zhang prove in that for every regular cardinal κ, 2κ = κ+ implies Pr1(κ+, κ+, κ+, κ) and for every inaccessible such that □() and ♦∗() both hold, Pr1(, , , ) holds as well (this is the case in which our remark about at the end of (1) of Definition 3 is relevant). In the other direction it is proved in that Pr1(κ+, κ+, 2, κ) fails for every singular cardinal κ and that Pr1(κ+, κ+, 2, cf(κ)+) fails for a singular limit κ of strongly compact cardinals.
Ramsey's theorem prohibits the existence of strong colorings with more than one color on countable sets for which all infinite subsets are strong, but in topological partition theory, strong colorings may exist also on countable spaces. Baumgartner, following some unpublished work by Galvin, constructed a coloring c : [Q]2 → which attains all colors on every homeomorphic copy of Q. Todorčević obtained the rectangular version of Baumgartner's result and very recently, Raghavan and Todorčević proved that if a Woodin cardinal exists then for every natural number k > 2, for every coloring c : [R]2 → k there is a homeomorphic copy of Q in R on which at most two colors occur, confirming thus a conjecture of Galvin from the 1970s. They also proved that any regular topological space of cardinality ℵn admits a coloring of (n + 2)-tuples which attains all colors on every subspace which is homeomorphic to Q.
§3. Strong-coloring symbols over partitions. We introduce now the main new notion of symbols with an additional parameter p, where p is a partition of unordered pairs. Suppose p : [κ]2 → is a partition of unordered pairs from κ. A preliminary definition of the square brackets symbol κ ↛p [κ]2 κ with parameter p has been mentioned in the abstract: there exists a coloring f : [κ]2 →κ such that for every A ∈[κ]κ there is some p-cell i < such that for all γ < κ there is {α, } ∈[A]2 such that p(α, ) = i and f(α, ) = γ.
However, for Pr1 or for Pr0 it is not possible to require a prescribed pattern on a ⊛b in both f and p when a, b belong to an arbitrary A, as all such a ⊛b might meet more than one p-cell. What we do, then, is replace this definition by a different one. The new definition is equivalent to the initial definition in all square-bracket symbols by Fact 5 below, and works for Pr1 and Pr0.
Definition 4. Suppose f : [κ]d → is a coloring and p : [κ]d → is a partition for a cardinal κ and natural d > 0. Then: (1) For a function : → and α ∈[κ]d we say that f hits over p at α, if f(α) = (p(α)).
(2) A set X ⊆[κ]d is (f, p) -strong if for every ∈ there is α ∈X such that f hits over p at α.
Thus, the initial definition of an (f, p)-strong X ⊆[κ]d— that (X ∩ p–1(i)) is f -strong for some fixed p-cell i—is replaced in (2) above with the requirement that every assignment of colors to p-cells : → is hit by some d ∈X. The advantage of the new definition is that an assignment can be hit in any p-cell, so defining Pr1 and Pr0 over a partition will now make sense.
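As a small illustrative instance of this definition (a toy example with θ = 2 cells), an assignment ε prescribes one target color per cell, and f hits ε over p at a pair exactly when f gives that pair the color prescribed for its cell:

```latex
\varepsilon : 2 \to \lambda,\qquad \varepsilon(0)=\gamma_0,\ \ \varepsilon(1)=\gamma_1,
\\[4pt]
f \text{ hits } \varepsilon \text{ over } p \text{ at } \{\alpha,\beta\}
\iff f(\alpha,\beta)=\varepsilon\bigl(p(\alpha,\beta)\bigr)
\iff
\begin{cases}
f(\alpha,\beta)=\gamma_0 & \text{if } p(\alpha,\beta)=0,\\
f(\alpha,\beta)=\gamma_1 & \text{if } p(\alpha,\beta)=1.
\end{cases}
```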
Topologically, a set X ⊆[κ]d is (f, p)-strong iff the collection {u⟨p(α),f(α)⟩: α ∈X} is an open cover of the space of all -sequences over with the product topology, where u⟨i,γ⟩is the basic open set { ∈ : (i) = γ}.
The definitions of the main symbols over partitions which we shall work with are in Definition 7 below; an impatient reader can proceed there directly.
We precede this definition with two useful facts about (f, p)-strong sets.
If X ⊆[κ]d is (f, p)-strong then for every γ < there is α ∈X such that f(α) = γ since if is the constant sequence with value γ and α ∈X is such that f(α) = (p(α)) then f(α) = γ. This also follows from the next fact: Fact 5. A set X ⊆[κ]d is (f, p)-strong if and only if there is some i < such that = ran(f ↾(X ∩p–1(i))).
Proof. Suppose first that that i < is fixed so that = ran(f ↾(X ∩ p–1(i))). Let ∈ be arbitrary and let γ = (i). Fix some α ∈X such that f(α) = γ and p(α) = i. Now f(α) = (p(α)) as required.
For the other direction suppose to the contrary that for every i < there is some (i) ∈( \ ran(f ↾X)). Since X is (f, p)-strong, find α ∈X such that f hits over p at α. Let i = p(α). Now f(α) = (i) / ∈ran(f ↾X)—a contradiction.
⊣ Suppose that h : [κ]d →< is some function into sequences of length < . For every partition p : [κ]d → for some < , let hp : [κ]d → ∪{∗} be defined by hp(α) = h(α)(p(α)) if p(α) ∈dom (h(α)), ∗ otherwise.
Then for every α ∈[κ]d, if hp(α) ̸= ∗then hp hits h(α) over p at α. In particular, every X ⊆[κ]d which is h-strong is also (hp, p)-strong for every partition p of [κ]d to < cells. A simple book-keeping argument can waive the dependence of hp on p for a set of ≤< partitions: Lemma 6. Suppose h : [κ]d →< is given and p = ⟨p : < < ⟩is a sequence such that p : [κ]d → and < for all < < . Then there is a single coloring f : [κ]d → such that for all X ⊆[κ]d, if X is h-strong then X is (f, p)-strong for all < < .
Proof. Suppose h : [κ]d →< and p = ⟨p : < < ⟩are given, where p : [κ]d → and < for every < < .
Let R = {{} × : < < }. As |R| = < , we may fix a bijection t : < →R and let g = t ◦h. So g : [κ]d →R and every X ⊆[κ]d is g-strong iffit is h-strong.
Define f : [κ]d → by f(α) = (i) if g(α) = ⟨, ⟩and p(α) = i.
Let X ⊆[κ]d be given and assume that X is h-strong. Let < < and some desirable ∈ be given. As X is h-strong, it is also g-strong, so fix α ∈X such that g(α) = ⟨, ⟩. Now it holds by the definition of f that f(α) = (p(α)), that is f hits over p at α ∈X.
⊣ We define now the main symbols over a partition. We state only the case for pairs. The definitions of the square-bracket symbols for d ̸= 2 are similar.
Definition 7. Suppose p : [κ]2 → is a partition of all unordered pairs from a cardinal κ.
(1) The symbol κ ↛p [ ]2 asserts the existence of a coloring f : [κ]2 → such that for all A ∈[κ] , for every ∈ there is {α, } ∈[A]2 such that f(α, ) = (p(α, )).
(2) The symbol κ ↛p [ 1 ⊛ 2]2 asserts the existence of a coloring f : [κ]2 → such that for all A ∈ [κ] 1 and B ∈[κ] 2, for every ∈ there is {α, } ∈A ⊛B such that f(α, ) = (p(α, )).
(3) The symbol Pr1(κ, , , )p asserts the existence of a coloring f : [κ]2 → such that for every < and a family A ⊆[κ]< of pairwise disjoint nonempty subsets of κ such that |A| = , for every ∈ there are a, b ∈A such that max a < min b and f(α, ) = (p(α, )) for all {α, } ∈a ⊛b.
(4) The symbol Pr0(κ, , , )p asserts the existence of a coloring f : [κ]2 → such that for every < , a pairwise disjoint family A ⊆[κ] of cardinality |A| = and a matrix {
i,j : i, j < } ⊆ there are a, b ∈A with max a < min b such that f(a(i), b(j)) = i,j(p(a(i), b(j)) for all i, j < , where a(i), b(j) are the ith and jth elements of a and of b, respectively, in increasing order. If < cf( ) then Pr0(κ, , , )p implies Pr1(κ, , , )p.
(5) Suppose p = ⟨p : < (∗)⟩is a sequence of partitions p : [κ]2 →.
In each of the four symbols above, writing p instead of p means there exists a single coloring which witnesses simultaneously the relation with p in place of p for each < (∗).
By Fact 5, the first two symbols are equivalently defined by requiring that for every X ⊆[κ]2 which is a -square or is a ( 1, 2)-rectangle there is a single cell i < (which depends on X) such that X ∩p–1(i) is f -strong.
Fact 8. Suppose κ ≥ ≥ are cardinals. Then every coloring f which witnesses Pr1(κ, , , 3)p witnesses also κ ↛p [ ⊛ ]2 . In particular, Pr1(κ, , , 3)p ⇒κ ↛p [ ⊛ ]2 for every partition p : [κ]2 →.
Proof. Fix f : [κ]2 → which witnesses Pr1(κ, , , 3)p. Let A ⊛B ⊆ [κ]2 be an arbitrary ( , )-rectangle. Find inductively a pair-wise disjoint A = {ai : i < } ⊆A ⊛B. Given some ∈, fix a = {α, } and b = {γ, } from A such that α < < γ < and such that f hits over p at all (four) elements {x, y} ∈a ⊛b. In particular, f hits over p at {α, } which belongs to A ⊛B.
⊣ The next lemma is the main tool for adding a partition parameter to a strong-coloring symbol.
Lemma 9. Suppose κ ≥ ≥ ≥ are cardinals. Then for every sequence of partitions p = ⟨p : < <⟩in which p : [κ]2 → and < for < <: (1) κ ↛[ ]2 < ⇒ κ ↛p [ ]2 .
(2) For all ′ ≤ , κ ↛[ ′ ⊛ ]2 < ⇒ κ ↛p [ ′ ⊛ ]2 .
(3) κ+ ↛[κ⊛κ+ ⧸ 1⊛κ+]2 < ⇒ κ+ ↛p [κ⊛κ+ ⧸ 1⊛κ+]2 .
(4) For all > 0, Pr0(κ, , <, ) ⇒ Pr0(κ, , , )p.
(5) For all > 0, Pr1(κ, , <, ) ⇒ Pr1(κ, , , )p.
Proof. Given any of the first three symbols in the hypotheses above, fix a coloring h : [κ]2 →< which witnesses it. Suppose p = ⟨p : < <⟩is given, where p : [κ]2 → and < for every < <.
By Lemma 6 fix f : [κ]2 → such that every X ⊆[κ]2 which is h-strong is also (f, p)-strong for all < <. Let < < be arbitrary. Suppose that X ⊆[κ]2 is some -square [A]2 or X is some ( ′, )-rectangle A ⊛B. Then X is (f, p)-strong. This proves the first two implications. For the third, let A ⊛B be some (κ, κ+)-rectangle. By the hypothesis, there is some α ∈A such that {α} ⊛B is h-strong, hence it is also (f, p)-strong.
To prove the fourth implication, let, as in the proof of Lemma 6, R = {} × : < < , let g : [κ]2 →R witness Pr0(κ, , <, ) and let f(α, ) = (p(α, )) when g(α, ) = ⟨, ⟩. Suppose < and A ⊆[κ] is pair-wise disjoint and |A| = . Given any < < and {
i,j : i, j < } ⊆, use the fact g witnesses Pr0(κ, , <, ) to fix a, b ∈A such that max a < min b and f(α(i), (j)) = ⟨, i,j⟩for all i, j < , where a(i) and b(j) are the ith and jth members of a and of b respectively. Now f(a(i), b(j)) = i,j(p(a(i), b(j)) as required.
The proof of the last implication is gotten from the fourth by using constant i,j = .
⊣ §4. Valid symbols over partitions in ZFC and in ZFC with additional axioms.
Question 10. Suppose κ ≥ are cardinals. Which strong-coloring symbols in κ hold over all < partitions?
Clearly, every coloring which witnesses a strong-coloring symbol Φ over some partition p, witnesses the symbol gotten by deleting p from Φ. The question of existence of strong colorings over partition therefore refines the question of existence of strong colorings in the classical sense.
Let us mention two obvious constraints on obtaining strong-coloring symbols over partitions. Given any coloring f : [κ]2 → with ≥2, let us define, for α < < κ, p(α, ) = 0 ⇐⇒ f(α, ) = 0 and p(α, ) = 1 otherwise. Then f does not witness κ ↛p [κ]2 . Hence: Fact 11. No single coloring witnesses κ ↛p [κ]2 for all two-partitions p if > 1.
If ≥cf(κ) then there is a partition p : [κ]2 → with |p–1(i)| < κ for every i < , so κ ↛p [κ]2 κ cannot hold. This narrows down the discussion of κ ↛p [κ]2 κ to partitions p : [κ]2 → with < cf(κ).
4.1. Symbols which are valid in ZFC. Every infinite cardinal satisfies <ℵ0 = . Therefore, by Lemma 9, every symbol with ≥ℵ0 colors which holds in ZFC continues to hold in ZFC over any sequence of length of finite partitions.
Let us state ZFC symbols over partitions whose classical counterparts were mentioned in §2 above: Theorem 12. For every regular cardinal κ and a sequence of length κ+ of finite partitions of [κ+]2, κ+ ↛p [κ+ ⊛κ+]2 κ+.
Proof. The symbol without p holds by the results of Todorčević, Moore and Shelah. Now apply Lemma 9(1).
⊣ In particular, Corollary 13. For every finite partition p : [1]2 →n, 1 ↛p [1 ⊛1]2 1 and 1 ↛p [1]2 1.
Theorem 14. For every sequence of length 2 of finite partitions of [2]3, 2 ↛p [1]3 , and 2 ↛p [1]3 1 is equivalent to the negation of the (ℵ2, ℵ1) Chang conjecture.
Proof. The symbol 2 ↛[1]3 holds by Todorčević's , and now apply Lemma 6 as in the proof of Lemma 9.
⊣ Theorem 15. For every cardinal κ and a list p of length κ++ of finite partitions of [κ++]2, Pr1(κ++, κ++, κ++, κ)p.
Proof. By Shelah’s and Lemma 9(4).
⊣ Theorem 16. For every cardinal κ and a list p of length κ++ of finite partitions of [κ++]2, Pr0(κ++, κ++, κ++, ℵ0)p.
Proof. By Shelah’s , 4.5(3) p. 170 in and Lemma 9(5).
⊣ Theorem 17. For every singular cardinal and a sequence of length cf( ) of finite partitions of [ +]2, Pr1( +, +, cf( ), cf( ))p.
Proof. By Shelah’s 4.1 p. 67 of and Lemma 9(4).
⊣ Theorem 18. For every singular and a sequence p of length + of finite partitions of [ +]2, + ↛[ +]2 + ⇒Pr1( +, +, +, cf( ))p ∧Pr0( +, +, +, ℵ0)p.
Proof. Suppose +↛[ +]2 +. By Rinot’s , also Pr1( +, +, +, cf( )) holds. The first conjuct now follows by Lemma 9(4). To get the sec-ond conjunct observes that by the first conjunct we have in particular Pr1( +, +, +, ℵ0). The second conjunct follows now by 4.5(3) in and Lemma 9(5).
⊣ 4.2. Symbols from instances of the GCH or of the SCH. If the GCH holds then every regular cardinal satisfies < = . Thus, Theorem 19 (GCH). In Theorems (12)–(18) above, “finite partitions” may be replaced by < -partitions.
The GCH also makes Shelah’s implication 4.5(3) from valid in additional cases. For example, Theorem 20 (GCH). For every regular cardinal κ and a sequence p of length κ++ of κ+-partitions, Pr0(κ++, κ++, κ++, κ)p.
Proof. By Shelah’s we have Pr1(κ++, κ++, κ++, κ) in ZFC. Let = κ+. By the GCH, <κ = and (κ++)<κ = κ++, so qualifies as an interpolant in 4.5(3) p. 170 in and Pr0(κ++, κ++, κ++, κ) follows. Now use GCH again with Lemma 9(5).
⊣ Theorem 21. For every cardinal κ, if 2κ = κ+ then for every sequence p of length κ+ of κ-partitions of [κ]2, κ+ ↛p [κ⊛κ+ ⧸ 1⊛κ+]2 κ+.
Proof. The symbol κ+ ↛[κ⊛κ+ ⧸ 1⊛κ+]2 κ+ follows from 2κ = κ+ by the Erdős–Hajnal–Milner theorem (see Section 49 in ). Use now Lemma 9(2).
⊣ Theorem 22. For every singular cardinal , if pp( ) = + then for every sequence p of length + of finite partitions of [ +]2, Pr1( +, +, +, cf( ))p.
Proof. By pp( )= + and Eisworth’s theorem , Pr1( +, +, +, cf( )) holds. Now use Lemma 9(4).
⊣ Theorem 23 (GCH). For every singular cardinal and a sequence p of length + of -partitions of [ +]2, Pr0( +, +, +, cf( ))p.
Proof. By Eisworth’s theorem it holds that Pr1( +, +, +, cf( )). By the GCH and Shelah’s 4.5(3) in , also Pr0( +, +, +, cf( )) holds.
Finally, as ( +) = +, by Lemma 9(5), for every sequence p of length + of -partitions of [ ]+ it holds that Pr0( +, +, +, cf( ))p.
⊣ In the next theorem a different cardinal arithmetic assumption appears: Theorem 24. If is a singular cardinal and 2cf( ) > then for every sequence p of length + of finite partitions of [ +]2, Pr0( +, +, cf( ), ℵ0)p.
Proof. By Shelah’s 4.1 p. 67 the symbol Pr1( +, +, cf( ), cf( )) holds in ZFC. Choose = cf( ). So 2 ≥ +, <ℵ0 = and cf( +) > <ℵ0, so qualifies as an interpolant cardinal in 4.5(3) p. 170 in and Pr0( +, +, cf( ), ℵ0) follows. Now use Lemma 9(5).
⊣ Lastly in this section, we show that | •(κ), an axiom (stated in the proof below), which does not imply 2κ = κ+, implies the following rectangular square-brackets symbol.
Theorem 25. If κ is a cardinal and | •(κ+) holds then for every sequence of partitions p = ⟨pγ : γ < κ+⟩, where pγ : [κ+]2 →γ and γ < cf(κ) for each γ < κ+, it holds that κ+ ↛p [κ⊛κ+ ⧸ 1⊛κ+]2 κ+.
That is, there exists a coloring f : [κ+]2 →κ+ such that for every (κ+, κ+)-rectangle A ⊛B and γ < 1 there is j < γ and X ∈[A]κ such that κ+ = ran f ↾[(X ⊛B) ∩p–1 γ (j)] .
Proof. Suppose a sequence of partitions p = ⟨pγ : γ < κ+⟩is given as above and we shall define the required f assuming | •(κ+). Fix a sequence ⟨Xi : i < κ+⟩which witnesses | •(κ+), that is: Xi ⊆κ+, otp(Xi) = κ for each i < κ+ and for every A ∈[κ+]κ+ there exists some i < κ+ such that Xi ⊆A.
Let < κ+ be arbitrary. Towards defining f(α, ) for α < , let us define, for every triple ⟨γ, i, j⟩such that γ, i < and j < γ, A ⟨γ,i,j⟩= {α < : α ∈Xi ∧pγ(α, ) = j}.
(2) Let A = A ⟨γ,i,j⟩: γ, i < ∧j < γ ∧|A ⟨γ,i,j⟩| = κ .
(3) As A is a family of at most κ subsets of , each of cardinality κ, we may fix a disjoint refinement D = {D ⟨γ,i,j⟩: A ⟨γ,i,j⟩∈A}, that is, each D ⟨γ,i,j⟩⊆ A ⟨γ,i,j⟩has cardinality κ and ⟨γ, i, j⟩̸= ⟨γ′, i′, j′⟩⇒D ⟨γ,i,j⟩∩D ⟨γ′,i′,j′⟩= ∅ for any A ⟨γ,i,j⟩, A ⟨γ′,i′,j′⟩∈A.
Let us define now f(α, ) for all α below our fixed by cases. For each D ⟨γ,i,j⟩∈D define f ↾(D ⟨γ,i,j⟩⊛{}) to be some function onto . This is possible since |D ⟨γ,i,j⟩| = κ and < κ+ (so |D ⟨γ,i,j⟩| = ||) and because the
D ⟨γ,i,j⟩are pairwise disjoint, hence (D ⟨γ,i,j⟩⊛{}) ∩(D ⟨γ′,i′,j′⟩⊛{}) = ∅ when ⟨γ, i, j⟩̸= ⟨γ′, i′, j′⟩.
For α ∈ \ D define f(α, ) arbitrarily (say, as 0). As was arbitrary, we have defined f(α, ) for all α < < κ+. By this definition, for all < κ+ and D ⟨γ,i,j⟩∈D, = ran(f ↾(D ⟨γ,i,j⟩⊛{}).
(4) To see that f satisfies what Theorem 25 states, let A, B ⊆κ+ be arbitrary with |A| = |B| = κ+ and let γ < κ+ be given. Using the properties of the | •(κ+)-sequence, fix some i < κ+ such that Xi ⊆A.
(5) As Xi ⊆κ+ and otp(Xi) = κ, sup(Xi) < κ+, hence 0:= max{γ, i, supXi} < κ+.
If ∈B is any ordinal such that > 0 then Xi ⊆ and as |Xi| = κ while γ < cfκ, there exists some j() < γ such that |{α ∈Xi : pγ(α, ) = j}| = κ, that is, by (2) and (3), A ⟨γ,i,j()⟩∈A.
By the regularity of κ+ and the assumption that γ < cfκ < κ+, we can fix some B′ ⊆B \ (0 + 1) and j(∗) < γ such that j() = j(∗) for all ∈B.
For each ∈B′ it holds, then, that A ⟨γ,i,j(∗)⟩belongs to A, and therefore also D ⟨γ,i,j(∗)⟩∈D.
(6) Now, for each ∈B′ we have by (6) and (4) that = ran(c ↾D ⟨γ,i,j(∗)⟩⊛{}), and as D ⟨γ,i,j(∗)⟩⊆Xi ∩p–1 γ (j(∗)) by (2), = ran(f ↾[(Xi ⊛{}) ∩p–1 γ (j(∗))]).
As B′ ⊆B is unbounded in κ+ it follows, after setting X = Xi and j = j(∗), that κ+ = f ↾[(X ⊛B) ∩p–1 γ (j)].
⊣ §5. Independence results on ℵ1. In this section we shall show that the existence of strong colorings over countable partitions of [1]2 is independent over ZFC and over ZFC + 2ℵ0 > ℵ1.
Theorem 26. If the CH holds, then the following five symbols are valid for every sequence of partitions p = ⟨p : < 1⟩ where p : [1]2 →:
• ℵ1 ↛p [ℵ1]2 ℵ1,
• ℵ1 ↛p [ℵ1 ⊛ℵ1]2 ℵ1,
• ℵ1 ↛p [ℵ0⊛ℵ1 ⧸ 1⊛ℵ1]2 ℵ1,
• Pr1(ℵ1, ℵ1, ℵ1, ℵ0)p,
• Pr0(ℵ1, ℵ1, ℵ1, ℵ0)p.
Proof. Assume CH, that is, 2ℵ0 = ℵ1. Then Pr1(ℵ1, ℵ1, ℵ1, ℵ0) holds by (a slight strengthening of) Galvin’s theorem. By Shelah’s 4.5(3) from , also Pr0(ℵ1, ℵ1, ℵ1, ℵ0) holds. The CH also implies that (ℵ1)ℵ0 = (2ℵ0)ℵ0 = 2ℵ0 = ℵ1. By Lemma 9(5), then, for every 1-sequence p of countable partitions of [1]2 it holds that Pr0(ℵ1, ℵ1, ℵ1, ℵ0)p and therefore by Lemma 8 also Pr1(ℵ1, ℵ1, ℵ1, ℵ0)p, 1 ↛p [1]2 1 and 1 ↛p [1 ⊛1]2 1.
Similarly, by the CH and Theorem 21 in the previous section, ℵ1 ↛p [ℵ0⊛ℵ1 ⧸ 1⊛ℵ1]2 ℵ1.
⊣ We prove next that these five symbols are valid in all models of ZFC obtained by adding ℵ2 Cohen reals over an arbitrary model V of ZFC, and, more generally, by forcing with a finite-support 2-iteration of -linked posets over an arbitrary model V of ZFC.
Before proving yet another combinatorial property in a Cohen extension let us recall Roitman's proof that the addition of a single Cohen real introduces an S-space, Todorčević's presentation in , p. 26 and Rinot's blog-post in which it is shown that a single Cohen real introduces Pr0(ℵ1, ℵ1, ℵ0, ℵ0). For a short proof of Shelah's theorem that a single Cohen real introduces a Suslin line see . Fleissner proved that adding Cohen reals introduces two ccc spaces whose product is not -cc. Hajnal and Komjáth proved that adding one Cohen subset to a cardinal κ = κ<κ forces the statement Q(κ+) they defined, following : for every graph G = ⟨κ+, E⟩with (G) = κ+ there is a coloring f : E →κ+ such that for every partition of κ+ to κ parts, all colors are gotten by f on edges from a single part. It is still open if Q(ℵ1) holds in ZFC.
Theorem 27. If Cℵ2 is the partial order for adding ℵ2 Cohen reals then for every sequence p = ⟨p : < 1⟩of partitions p : [1]2 → in the forcing extension by Cℵ2, 1 ⊩Cℵ2 “ℵ1 ↛p [ℵ0⊛ℵ1 ⧸ 1⊛ℵ1]2 ℵ1.′′ Proof. Let Cα be the partial order of finite partial functions from [α]2 to . Let V be a model of set theory and let G ⊆C2 be generic over V. Then G : [2]2 →.
Now suppose that p = ⟨p : < 1⟩is an arbitrary sequence of partitions p : [1]2 → in V [G]. As there is some α ∈2 such that p ∈V [G ∩Cα], it may be assumed that p ∈V . Let c = G ↾[1]2. So c : [1]2 →. In V, fix a sequence ⟨eα : ≤α < 1⟩, where eα : →α is a bijection. In the
generic extension, define a coloring f : [1]2 →1 by f(α, ) = e(c(α, )), for ≥ and as 0 otherwise.
To see that f witnesses ℵ1 ↛p [ℵ1]2 ℵ1 suppose that there is some < 1 for which f fails to witness ℵ1 ↛p [ℵ1]2 ℵ1. This means that in V G there are A ∈[1] and B ∈[1]1 such that for all α ∈A there is some W (α) ∈ 1 such that for all ∈B \ (α + 1) it holds that f(α, ) ̸= W (α)(p(α, )).
Let ˙ A and ˙ W be countable names for A and W and let ˙ B be a name for B.
Let r ∈G decide and force r ⊩“(∀α ∈˙ A)(∀ ∈˙ B \ (α + 1)) (f(α, )) ̸= ˙ W (α)(p(α, )).” Let M be a countable elementary submodel of H(2, ˙ A, ˙ B, ˙ W , r).
Fix an extension r′ ∈G of r and an ordinal ∈1 \ sup(M ∩1) such that r′ ⊩ ∈˙ B. Let r0 = r′ ∩M. Inside M extend r0 to r1 such that r1 ⊩“α ∈˙ A” for an ordinal α which is not in dom (r′) and r1 decides W (α)(p(α, )). Thus, {α, } / ∈dom (r′ ∪r1). Let r∗= r′ ∪r1 ∪ ⟨{α, }, e–1 (W (α)(p(α, ))⟩ .
Since r∗extends r and f(α, ) = e(c(α, )) = W (α)(p(α, )), this is a contradiction to the choice of r.
The forcing for adding a single Cohen real is obviously σ-linked. Thus, the next theorem applies to a broader class of posets than Cohen forcing.
The previous theorem holds also in this generality.
Theorem 28. If P is an ω2-length finite support iteration of σ-linked partial orders then 1 ⊩P “ Pr0(ℵ1, ℵ1, ℵ1, ℵ0) ¯ p′′ for any ω1 sequence of partitions ¯ p = ⟨p : < 1⟩such that p : [1]2 → for all < 1.
Proof. Let Pα be the finite support iteration of the first α partial orders and suppose that 1 ⊩Pα “Qα = n∈ Qα,n and each Qα,n is linked,” (7) 1 ⊩Pα “{ ˙ qα,n}n∈ is a maximal antichain in Qα.” (8) Let B : [2]2 →2 be a bijection and let e : → be a bijection for each infinite ∈1. Let V be a model of set theory, let G ⊆P be generic over V and let Gα be the generic filter induced on Qα by G.
Now suppose that a sequence of partitions ¯ p = ⟨p : < 1⟩such that p : [1]2 → belongs to V [G]. As there is some α ∈2 such that ¯ p ∈ V [G ∩Pα], it may be assumed that ¯ p ∈V . There is no harm in assuming that B maps [1]2 to 1 so let c : [1]2 →1 be defined by c(α, ) = if and only if qB(α,),k ∈GB(α,) and = eB(α,)(k).
To see that c witnesses Pr0(ℵ1, ℵ1, ℵ1, ℵ0) ¯ p suppose that: • ∈1.
• k > 0.
• ˙ α ,i < 1 are P-names for < 1 and 1 ≤i ≤k of distinct ordinals such that for every < 1 the sequence ⟨˙ α ,i : 1 ≤i ≤κ⟩is increasing with i.
• ˙ Ni,j is a P-name for a k × k matrix with entries in 1.
We may fix q ∈G such that: (1) q ⊩P “ ˙ α ,i = α ,i ” for all 1 ≤i ≤k, (2) {α ,1, α ,2, ... , α ,k} ⊆d = B–1(dom (q )), (3) for all ∈dom (q ) there is n , such that q ↾ ⊩P “q ( ) ∈ Q ,n , .” Let {M}∈+1 be countable elementary submodels of H(2, {q , {α ,1, α ,2, ... , α ,k}} ∈1, B, ˙ N, G) such that Mj ≺Mj+1 ≺M and 1 ∩Mj ∈Mj+1 for each j ∈. Let ∈1 \ M. By elementarity there are j ∈1 ∩Mj such that: (1) dom (q ) ∩Mj ⊆dom (q j).
(2) n , j = n , for each ∈dom (q ) ∩Mj.
Note that {α ,1, α ,2, ... , α ,k} ∩M = ∅and hence (∀j ∈)(∀u ∈k)(∀v ∈k) B(α j,u, α ,v) / ∈M.
(9) Furthermore, note that B–1(dom (q )) is finite and so there is J such that B–1(dom (q )) ∩M ⊆MJ.
From (9) it follows that B(α J ,u, α ,v) / ∈dom (q J ) ∪dom (q ).
(10) From condition (19) in the choice of q and condition (1) in the choice of j, it follows that there is q∗such that q∗≤q J and q∗≤q and dom (q∗) = dom (q J ) ∪dom (q ).
Let A ∈M be a maximal antichain such that for every conditions r ∈A, r ⊩P “Mu,v = ˙ Nu,v(p(a J ,u, a ,v))” for some k × k matrix (Mi,j) with entries in 1.
By the countable chain condition, A is countable and hence A ⊆M. Let r ∈A be such that r is compatible with q∗and let (Mi,j) be the k × k matrix which witnesses that r ∈A. Let q∗∗≤q∗, r.
Note that B(α J ,u, α ,v) / ∈dom (q∗∗) because dom (q∗∗) \ (dom (q J ) ∪ dom (q )) ⊆M and (9) and (10) hold. Let ˆ q() = ⎧ ⎨ ⎩ q∗∗() if / ∈{B(a J ,u, a ,v)}u,v∈k, q,e–1 B(a J ,u,a ,v)(Mu,v) if = B(a J ,u, a ,v).
Then by the definition of c ˆ q ⊩P “c(α J ,u, α ,v) = Mu,v = ˙ Nu,v((p(a J ,u, a ,v))” for each u and v as required.
⊣ Corollary 29. It is consistent with MAℵ1(σ-linked) that Pr0(ℵ1, ℵ1, ℵ1, ℵ0)¯ p holds for any ω1 sequence of partitions ¯ p = {p } ∈1 such that p : [1]2→.
Now we prove that the symbol 1 ↛p [1]2 1 can consistently fail for some p : [1]2 →.
We actually prove more. The failure of the symbol above over a partition p : [1]2 →, symbolically written as 1 →p [1]2 1, means that for every coloring f : [1]2 →1 there is a set A ∈[1]ℵ1 such that f ↾([A]2 ∩p–1(i)) omits at least one color for every i < . Let us introduce the following symbol: 1 →p [1]2 1\1, to say that for every coloring f : [1]2 →1 there is a set A ∈[1]ℵ1 such that for every i < a set of size ℵ1 of colors is omitted by f ↾([A]2 ∩p–1(i)).
An even stronger failure (via breaking 1 to two disjoint equinumerous sets and identifying all colors in each part) is 1 →p [1]2 2.
It is the consistency of the latter symbol which we prove. Note that with the rounded-brackets symbol in (1) from the introduction we may write this failure as: 1 →p (1)2 2, whose meaning is that for every coloring f : [1]2 →2 there is A ∈[1]ℵ1 such that for every i < the set [A]2 ∩p–1(i) is f -monochromatic. Thus, while 1 ↛[1]2 1 holds in ZFC, it is consistent that for a suitable countable partition p the symbol 1 ↛p [1]2 1 fails pretty badly.
Theorem 30. It is consistent that 2ℵ0 = ℵ2 and there is a partition p : [1]2 → such that 1 →p [1]2 2.
Corollary 31. It is consistent that 2ℵ0 = ℵ2 and there is some p : [1]2 → such that 1 →p [1]2 1\1 and hence 1 →p [1]2 1.
Proof of the Theorem. Let P be the partial order of finite partial functions from [1]2 → ordered by inclusion. More precisely, each condition q ∈P has associated to it a finite subset of 1 which, abusing notation, will be called dom (q). Then q is a function [dom (q)]2 →.
Given any partition p : [1]2 → and a colouring c : [1]2 →2 define the partial order Q(p, c) to be the set of all pairs (h, w) such that • w ∈[1]<ℵ0, • h : m →2 for some m ∈ so that m ⊇p([w]2), • c({α, }) ̸= h(p({α, })) for each {α, } ∈[w]2 and order Q(p, c) by coordinatewise extension. Let V be a model of set theory in which 2ℵ1 = ℵ2 and let {c } ∈2 enumerate cofinally often the subsets of hereditary cardinality less than ℵ2. If G ⊆P is generic over V, in V [G] define pG = G. Then define a finite support iteration {Q
}
∈2 such that Q1 = P and if c is a Q
-name such that 1 ⊩Q “c : [1]2 →2” then Q
+1 = Q ∗Q(pG, c
).
It suffices to establish the following two claims.
Claim 32. For each ∈2 greater than 1 and ∈1 the set of q ∈Q
+1 such that q ↾ ⊩Q “q(
) = (h, w) and w \ ̸= ∅′′ is dense in Q
+1.
Proof. Given q it may be assumed that there are h and w such that q ↾ ⊩Q “q(
) = (ˇ h, ˇ w).” Let ∈1 be so large that > max(dom (q(0))), max(w), . Let f : w → be any one-to-one function so that ran(f) ∩dom (h) = ∅and let f : {{, }}∈w → be defined by f({, }) = f(). Note that since q(0) ∪ f ∈P it is possible to find ¯ q ≤q ↾ such that: • f ⊆¯ q(0) and • ¯ q ⊩Q “c
({, }) = ˇ k” for some family of integers {k}∈w equal to 0 or 1.
Then let ¯ h ⊇h be any finite function such that ¯ h(f()) = 1 – k and let ¯ w = w ∪{}. Then ¯ q ∗(¯ h, ¯ w) is the desired condition.
⊣ Claim 33. The partial order Q2 satisfies the ccc.
Proof. By a standard argument, there is a dense subset of Q2 of conditions q such that for each ∈dom (q) with > 0, there are h and w so that q ↾ ⊩Q2 “q(
) = (ˇ h, ˇ w).” We will assume that all conditions that we work with are members of this dense subset.
Let {q : < 1} be conditions in Q2. By thinning out, we can assume that their domains form a Δ-system with root {0, 0, 1, ... , k}. We can further assume that: • each of the sets {dom (q (0)) : < 1}, and {w ,
i : < 1} for each i ≤k form a Δ-system,
• the functions q (0) agree on the root of the Δ-system of their domains, and • there are hi, i ≤k, so that for all < we have hi = h ,
i, where q ↾ ⊩Q2 “q (
) = (ˇ h ,
, ˇ w ,
).” Let = max{dom (q0(0)), w0,
i : i ≤k}. Pick γ < 1 so that each of the values min(dom (qγ(0)) \ dom (q0(0))), min(wγ,
i \ w0,
i) for i ≤k, are above (if defined).
Arguing as in Claim 32, we see that q0 and qγ are compatible conditions.
⊣ This completes the proof of the theorem.
⊣ Definition 34. The symbol κ →p [κ]2 ,< for a partition p : [κ]2 → means that for every coloring f : [κ]2 → there is a set A ∈[κ]κ such that |ran(f ↾([A]2 ∩p–1(i)))| < for all i < .
Note that for ≤ this symbol is stronger than κ →p [κ]2 \. Thus the next theorem, which uses ideas from , gives a stronger consistency than the previous one.
Theorem 35. Given any regular κ > ℵ1 it is consistent that: • non(L) = ℵ1, • b = ℵ2 = 2ℵ0, • 2ℵ1 = κ, and • there is a p : [1]2 → such that 1 →p [1]2 ,<.
Theorem 36. Given any regular κ > ℵ1 it is consistent that: • non(M) = ℵ1, • b = ℵ1, • d = ℵ2 = 2ℵ0, • 2ℵ1 = κ, and • there is a p : [1]2 → such that 1 →p [1]2 ,<.
The proofs of both theorems are similar, using ideas from ; only the proof of Theorem 35 will be given in detail. Both rely on the following definition: Definition 37. Let be some probability measure on under which each singleton has positive measure, for example ({n}) = 2–n. A sequence of functions P = {p}∈1 will be said to have full outer measure if: • p : → and • for each ∈1 the set {p ↾}> has measure one in the measure space (, ).
The sequence P is defined to be nowhere meagre similarly, by requiring that for each ∈1 the set {p ↾}> is nowhere meagre in (, ) with the usual product topology. In both cases define p = p(P) by p(α, ) = p(α) if α < .
By enumerating all functions from a countable ordinal into , we have: Proposition 38. Assuming the Continuum Hypothesis there is a sequence P = {p}∈1 such that {p ↾}> = for each ∈1. Hence P has full outer measure as in Definition 37.
While it is, of course, impossible to preserve the property that {p ↾ }> = when adding reals, the goal of the following arguments is to show that the properties of Definition 37 can be preserved in certain circumstances.
The following definition is from and will play a key role in this context.
Definition 39. A function : < →[1]<ℵ0 satisfying that (s) ∩ (t) = ∅unless s = t will be said to have disjoint range. If for each t ∈< there is k such that |(t⌢j)| < k for all j ∈ then will be called bounded with disjoint range. If G is a filter of subtrees of < and has disjoint range define S(G, ) = t∈ G (t).
If G is a generic filter of trees over a model V define Sb(G) = {S(G, ) | ∈V and is bounded with disjoint range}.
It is shown in that Lemmas 40 and 42 hold.
Lemma 40. If G ⊆L is generic over V then Sb(G) is a P-ideal in V [G].
Lemma 41 is the content of Section 3 of . Recall that if I is an ideal then X is said to be orthogonal to I if X ∩A is finite for each A ∈I.
Lemma 41 (Abraham and Todorčević). Let I be a P-ideal on 1 that is generated by a family of ℵ1 countable sets and such that 1 is not the union of countably many sets orthogonal to I. Then there is a proper partial order PI, that adds no reals, even when iterated with countable support, such that there is a PI-name ˙ Z for an uncountable subset of 1 such that 1 ⊩PI “(∀ ∈ 1) ˙ Z ∩ ∈I.′′
Lemma 43. Let P be a sequence with full outer measure and suppose that p = p(P). Suppose further that • c : [1]2 →, • G ⊆L is generic over V, and • H ⊆PSb( ˙ G) is generic over V [G].
Then there is an uncountable X ⊆1 in V [G][H] and L : → such that L(p(α, )) > c(α, ) for all {α, } ∈[X]2.
Proof. In V [G] let L = G be the Laver real. In V [G][H] let R be the uncountable set given by Lemma 42. Construct by induction distinct ∈R such that if ∈ then L(p( , )) > c( , ). To carry out the induction assume that R = { } ∈ have been chosen and satisfy the inductive hypothesis.
By the choice of R it follows that R ∈Sb(G). Since PSb( ˙ G) adds no new reals it follows that R ∈V [G] and so there is T ∈G and ∈V with bounded, disjoint range such that T ⊩L “ ˙ R = S( ˙ G, ).” Let be so large that T ⊩L “ ˙ R ⊆ ” and let r be the root of T. For t ∈T define Wt = x ∈2 x ↾(t) has constant value |t| and then define W+ t = x ∈2 (∃∞s ∈succT(t)) x ∈Ws .
Note that W+ t has measure one in 2 for each t ⊇r. To see this note that for a random h ∈2 the probability that h(
) = |t| + 1 is 2–(|t|+1). Also, note that since is bounded—see Definition 39—there is some k such that |(s)| ≤k for each s ∈succT(t). Hence, the probability of h belonging to Ws is bounded below by 2–(|t|+1)k for all s ∈succT(t) and these events are independent because the (s) are pairwise disjoint for s ∈succT(t).
Define f on j≤|r| (r ↾j) to have constant value |r| and note that the domain of f is disjoint from each (s) where s ⊋r. Hence the probability that f ⊆h is non-zero and independent from belonging to each W+. Since p has full outer measure it follows that ∈1 f ⊆p ↾ ∈ r⊆t∈T W+ t is uncountable and belongs to V [G]. Therefore by Lemma 42 there is some ∈R \ R such that f ⊆p ↾ and such that for all t ∈T containing r there are infinitely many s ∈succT(t) such that p(α, ) = |s| for all α ∈ (s).
Using this and the definition of f, it is possible to start with r and successively thin out the successors of each t ∈T to find a tree T ∗⊆T with root r such that p(α, ) = |t| for all t ∈T ∗and for all α ∈(t). Once again starting with r and removing only finitely many elements of succT ∗(t) for each t ∈T ∗it is possible to find T ∗∗⊆T ∗with root r such that (∀t ∈T ∗∗)(∀s ∈succT ∗∗(t))(∀α ∈(t)) s(|t|) = s(p(α, )) > c(α, ) and this implies that T ∗∗⊩L“(∀α ∈˙ R) ˙ L(p(α, )) > c(α, ).” Since this holds for any T, genericity yields that in V [G][H] there is some ∈R \ R such that L(p( , )) > c( , ) for each ∈. Define = to continue the induction. Since limit stages are immediate, this completes the proof.
⊣ Proof of Theorem 35. The required model is the one obtained by starting with a model of the Continuum Hypothesis in which 2ℵ1 = κ. Then iterate with countable support the partial order L ∗PSb( ˙ G). In the initial model there is, by Proposition 38, a sequence with full outer measure. To see this, begin by observing that it is shown in Theorem 7.3.39 of that L preserves ⊑Random. Since PSb( ˙ G) is proper and adds no new reals it is immediate that it also preserves ⊑Random. It follows by Theorem 6.1.13 of that the entire countable support iteration preserves outer measure sets and, hence, any sequence with full outer measure in the initial model maintains this property throughout the iteration.
To see that for every function c : [1]2 → there is an uncountable set witnessing ℵ1→p[ℵ1]ℵ0,<ℵ0 use Lemma 3.4 and Lemma 3.6 of to conclude that each partial order in the 2 length iteration is proper and has the ℵ2-pic of Definition 2.1 on page 409 of . By Lemma 2.4 on page 410 of it follows that the iteration has the ℵ2 chain condition and, hence, that c appears at some stage. It is then routine to apply Lemma 43.
That b = ℵ2 is a standard argument using that Laver forcing adds a dominating real.
⊣ Remark 44. The proof of Theorem 36 is similar but uses Miller reals instead of Laver reals. This requires that nowhere meagreness play the role of full outer measure.
Remark 45. Note that there is no partition p such that 1→p[1]2 1,<1 because a colouring c : [1]2 →1 that is a bijection will provide a counterexample.
§6. Concluding Remarks and Open Questions. It turns out, via Lemma 9, that getting strong coloring symbols over finite partitions is not harder than getting them without partitions; so one immediately gets many strong coloring symbols over partitions outright in ZFC. If the number of colors raised to the number of cells in a partition is not too large, Lemma 9 applies again, and consequently all GCH symbols gotten by Erdős, Hajnal and Milner on κ+ hold under the GCH over arbitrary κ-partitions. Even without instances of the GCH, strong coloring symbols over countable partitions are valid in Cohen-type forcing extensions, by Theorems 27 and 28.
Yet, it is not the case that every time a strong-coloring symbol holds at a successor of a regular, it also holds over countable partitions: by Theorems 30 and 35 the ZFC symbol ℵ1 ↛[ℵ1]2 ℵ1, and hence all stronger ones, consistently fail quite badly over sufficiently generic countable partitions.
Thus, strong coloring symbols over partitions are a subject of their own, in which the independence phenomenon is manifested prominently.
Many natural questions about the combinatorial and set-theoretic connections between coloring and partition arise. We hope that this subject will get attention in the near future both in the infinite combinatorics and
in the forcing communities. For example, by Fact 11, there is always a set of 2-partitions of [κ+]2 such that no coloring is strong over all of them. What is the least cardinality of such a set? In the case of = κ = ℵ0, the results in Section 5 show that this cardinal may be as small as 1 or at least as large as ℵ2 = κ++. Can this number ever be κ or, say, κ+ < 2κ?
We conclude with a short selection of open questions.
Question 46. If Pr1(ℵ1, ℵ1, ℵ1, ℵ0)p holds for all countable p, does also Pr0(ℵ1, ℵ1, ℵ1, ℵ0)p hold for all countable p?
Question 47. Suppose Pr0(ℵ1, ℵ1, ℵ0, ℵ0)p holds for some countable partition p. Does Pr0(ℵ1, ℵ1, ℵ1, ℵ0)p hold as well?
Without partitions, both implications above hold.
Question 48. Does MA-linked or p = c or even full MAℵ1 imply that Pr0(ℵ1, ℵ1, ℵ1, ℵ0) ¯ p holds for every 1 sequence of partitions ¯ p = ⟨p : < 1⟩such that p : [1]2 →?
Question 49. Is it consistent that there is a partition p such that ℵ1 →p [ℵ1]2 ℵ0,<k for some integer k?
Question 50. Is ℵ2 ↛p [ℵ0⊛ℵ2 ⧸ 1⊛ℵ2]2 ℵ2 consistent for all ℵ0- or ℵ1-partitions p? That is, can there be a coloring f : [2]2 →2 such that for every (one, or sequence of 2 many) 1-partition(s) of [2]2, for every B ∈[2]2, for all but finitely many α < 2 there is i < 1 such that for every color < 2 there is ∈B such that p(α, ) = i and f(α, ) = .
The consistency of this symbol is open even without the p. A negative answer may be easier to get with p.
Added in proof: Problems 46–49 above are solved in .
Acknowledgments. The first author’s research for this paper was partially supported by an Israeli Science Foundation grant number 665/20. The third author’s research for this paper was partially supported by NSERC of Canada.
REFERENCES
U. Abraham and S. Todorčević, Partition properties of ω1 compatible with CH. Fundamenta Mathematicae, vol. 152 (1997), no. 2, pp. 165–181.
T. Bartoszyński and H. Judah, Set Theory: On the Structure of the Real Line, A. K. Peters, Boca Raton, 1995.
J. E. Baumgartner, Partition relations for countable topological spaces. The Journal of Combinatorial Theory, Series A, vol. 43 (1986), no. 2, pp. 178–195.
P. Erdős, F. Galvin and A. Hajnal, On set-systems having large chromatic number and not containing prescribed subsystems, Infinite and Finite Sets (Colloq. Keszthely 1973; dedicated to P. Erdős on his 60th birthday), Vol. I, Colloq. Math. Soc. J. Bolyai, Vol. 10, North Holland, Amsterdam, 1975, pp. 425–513.
P. Erdős, A. Hajnal, A. Máté, and R. Rado, Combinatorial Set Theory: Partition Relations for Cardinals, Vol. 106, Studies in Logic and the Foundations of Mathematics, North Holland, Amsterdam, 1984.
T. Eisworth, A note on strong negative partition relations. Fundamenta Mathematicae, vol. 202 (2009), pp. 97–123.
———, Club guessing, stationary reflection, and coloring theorems. Annals of Pure and Applied Logic, vol. 161 (2010), pp. 1216–1243.
———, Successors of singular cardinals, Handbook of Set Theory (M. Foreman and A. Kanamori, editors), Springer, Dordrecht, 2010, pp. 1229–1350.
———, Getting more colors I. Journal of Symbolic Logic, vol. 78 (2013), no. 1, pp. 1–16.
———, Getting more colors II. Journal of Symbolic Logic, vol. 78 (2013), no. 1, pp. 17–38.
———, On idealized versions of Pr1( ,+, +, +, cf( )). Archive for Mathematical Logic, vol. 53 (2014), pp. 809–824.
T. Eisworth and S. Shelah, Successors of singular cardinals and coloring theorems I.
Archive for Mathematical Logic, vol. 44 (2005), no. 5, pp. 597–618.
———, Successors of singular cardinals and coloring theorems II. Journal of Symbolic Logic, vol. 74 (2009), no. 4, pp. 1287–1309.
W. G. Fleissner, Some spaces related to topological inequalities proven by the Erdős–Rado theorem. Proceedings of the American Mathematical Society, vol. 71 (1978), no. 2, pp. 313–320.
F. Galvin, Chain conditions and products. Fundamenta Mathematicae, vol. 108 (1980), no. 1, pp. 33–48.
O. Guzman, On (1, 1)-weakly universal functions. Fundamenta Mathematicae, vol. 247 (2019), no. 1, pp. 87–98.
A. Hajnal and P. Komjath, Some remarks on the simultaneous chromatic number.
Combinatorica, vol. 23 (2003), no. 1, pp. 89–104.
M. Kojman, A. Rinot and J. Steprans, Advances on strong colorings over partitions, Preprint February 2021. Available at 2021.
C. Lambie-Hanson and A. Rinot, Knaster and friends II: The C-sequence number.
Journal of Mathematical Logic, vol. 21 (2021), no. 1, p. 2150002, 54.
J. T. Moore, A solution to the L space problem. Journal of the American Mathematical Society, vol. 19 (2006), pp. 717–736.
Y. Peng and L. Wu, A Lindelöf group with non-Lindelöf square. Advances in Mathematics, vol. 325 (2018), pp. 215–242.
D. Raghavan and S. Todorčević, Proof of a conjecture of Galvin. Forum of Mathematics, Pi, vol. 8 (2020), p. e15.
A. Rinot, Transforming rectangles into squares, with applications to strong colorings.
Advances in Mathematics, vol. 231 (2012), no. 2, pp. 1085–1099.
———, Complicated colorings. Mathematical Research Letters, vol. 21 (2014), no. 6, pp. 1367–1388.
———, Chain conditions of products, and weakly compact cardinals, this Journal, vol. 20 (2014), no. 3, pp. 293–314.
———, An S-space from a Cohen real.
Blog post Available at
———, Personal communication. 2019.
A. Rinot and S. Todorčević, Rectangular square-bracket operation for successor of regular cardinals. Fundamenta Mathematicae, vol. 220 (2013), no. 2, pp. 119–128.
A. Rinot and J. Zhang, Strongest transformations, Preprint February 2021. Available at 2021.
J. Roitman, Adding a random or a Cohen real: Topological consequences and the effect on Martin’s axiom. Fundamenta Mathematicae, vol. 103 (1979), no. 1, pp. 47–60.
S. Shelah, Was Sierpiński right? I. Israel Journal of Mathematics, vol. 62 (1988), pp. 355–380.
———, Strong partition relations below the power set: Consistency, was Sierpiński right, II? Proceedings of the Conference on Set Theory and its Applications in Honor of A. Hajnal and V. T. Sós, Budapest, 1/91, 1991, pp. 637–668.
———, Was Sierpiński right? III. Can continuum–c.c. times c.c.c. be continuum–c.c.?
Annals of Pure and Applied Logic, vol. 78 (1996), pp. 259–269.
———, Colouring and non-productivity of ℵ2-cc. Annals Pure and Applied Logic, vol. 84 (1997), pp. 153–174.
———, Proper and Improper Forcing, second ed., Perspectives in Mathematical Logic, Springer-Verlag, Berlin, 1998.
———, Was Sierpiński right? IV. Journal of Symbolic Logic, vol. 65 (2000), pp. 1031–1054.
———, Successors of singulars, cofinalities of reduced products of cardinals and productivity of chain conditions. Israel Journal of Mathematics, vol. 62 (1988), pp. 213–256.
———, Cardinal Arithmetic, Vol. 29, Oxford Logic Guides, The Clarendon Press, Oxford University Press, New York, NY , 1994.
S. Shelah and J. Steprāns, Universal graphs and functions on ω1, preprint, 2016.
J. Steprāns, Combinatorial consequences of adding Cohen reals. Set Theory of the Reals, pp. 583–617, Israel Math. Conf. Proc., 6, Bar-Ilan Univ., Ramat Gan, 1993.
S. Todorčević, Partitioning pairs of countable ordinals. Acta Mathematica, vol. 159 (1987), no. 3–4, pp. 261–294.
———, Partition Problems in Topology, vol. 84, Contemporary Mathematics, American Mathematical Society, Providence, RI, 1989.
———, Some partitions of three-dimensional combinatorial cubes. The Journal of Combinatorial Theory, Series A, vol. 68 (1994), no. 2, pp. 410–437.
———, Oscillations of sets of integers. Advances in Applied Mathematics, vol. 20 (1998), no. 2, pp. 220–252.
DEPARTMENT OF MATHEMATICS & STATISTICS
YORK UNIVERSITY, 4700 KEELE STREET
TORONTO, ON M3J 1P3, CANADA
E-mail: [email protected]
E-mail: [email protected]
DEPARTMENT OF MATHEMATICS
BEN-GURION UNIVERSITY OF THE NEGEV
P.O.B. 653, BE'ER SHEVA 84105, ISRAEL
E-mail: [email protected]
Diophantine Set - SotaZK Labs docs
===============
Diophantine Set
=========================================================================================================
Definition
Read the definition of Diophantine equation first.
Note
A set $S \subseteq \mathbb{Z}^k$ is called Diophantine if and only if there exists an integer-coefficient multivariate polynomial $R_S$ such that $\mu \in S \Leftrightarrow \exists \omega \in \mathbb{Z}^{k'}$ such that $R_S(\mu, \omega) = 0$, i.e., $\mu \in S$ iff $\mu$ is part of a solution of a fixed Diophantine equation. We call $R_S$ a representing polynomial of $S$, and $\omega$ an auxiliary witness.
Consider the Diophantine equation:
$$\mu = \omega_1 \cdot \omega_2$$
Here, $\mu$ is a parameter, and $\omega_1$ and $\omega_2$ are unknowns.
The equation has a solution in $\omega_1$ and $\omega_2$ greater than $1$ precisely when $\mu$ can be expressed as a product of two integers greater than $1$ (i.e., when $\mu$ is a composite number).
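As a concrete illustration (my own sketch, not part of the original page), membership in this Diophantine set can be tested by searching for a witness pair; the function name and the search bounds below are illustrative assumptions, and the search only terminates on every input because this particular set happens to admit a bounded witness search.

```python
# Minimal sketch (not from the original page): membership in the Diophantine set
# of composite numbers, phrased as a search for a witness pair (omega1, omega2)
# with R(mu, omega1, omega2) = mu - omega1 * omega2 = 0 and omega1, omega2 > 1.
# The function name and the search bounds are illustrative assumptions.

def is_member(mu: int) -> bool:
    """Return True iff some witness pair exists, i.e. mu is composite."""
    for omega1 in range(2, mu):
        for omega2 in range(2, mu):
            if mu - omega1 * omega2 == 0:   # the representing polynomial vanishes
                return True
    return False

print([m for m in range(2, 20) if is_member(m)])
# [4, 6, 8, 9, 10, 12, 14, 15, 16, 18]
```

For a general Diophantine set the witness search need not terminate on non-members, which is exactly the connection to computably enumerable sets made below.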
MRDP Theorem
The MRDP theorem states that any recursively enumerable set can be represented by a Diophantine equation.
[Diophantine Sets] = [Computably Enumerable Sets]
Computably Enumerable Sets
A set S is computably enumerable if and only if there is an algorithm that halts if the input is a member of S, and runs forever otherwise.
Or, a set S is computably enumerable if there is an algorithm (not necessarily halting) that enumerates the members of S.
Fact
Suppose we have two Diophantine sets $S_1$ and $S_2$, with representing polynomials $P_1$ and $P_2$ respectively. Then:
$$R_{S_1 \cup S_2}(\mu; \omega_1, \omega_2) = P_1(\mu, \omega_1) \cdot P_2(\mu, \omega_2)$$
$$R_{S_1 \cap S_2}(\mu; \omega_1, \omega_2) = P_1(\mu, \omega_1)^2 + P_2(\mu, \omega_2)^2$$
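A small numerical sanity check of these two constructions (again my own sketch; the toy sets and the bounded witness range are assumptions, since a real Diophantine witness search is unbounded):

```python
# Toy check (my own sketch, not from the page) of the union/intersection
# polynomials, using two simple Diophantine sets and a bounded witness search:
# S1 = even numbers with P1(mu, w) = mu - 2*w, S2 = multiples of 3 with
# P2(mu, w) = mu - 3*w.

P1 = lambda mu, w: mu - 2 * w
P2 = lambda mu, w: mu - 3 * w

R_union = lambda mu, w1, w2: P1(mu, w1) * P2(mu, w2)            # = 0 iff either vanishes
R_inter = lambda mu, w1, w2: P1(mu, w1) ** 2 + P2(mu, w2) ** 2  # = 0 iff both vanish

def has_witness(R, mu, bound=50):
    return any(R(mu, w1, w2) == 0 for w1 in range(bound) for w2 in range(bound))

for mu in range(1, 13):
    assert has_witness(R_union, mu) == (mu % 2 == 0 or mu % 3 == 0)
    assert has_witness(R_inter, mu) == (mu % 6 == 0)
print("union and intersection polynomials represent the expected sets")
```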
8.4: Boltzmann's Equation
Jeremy Tatum
University of Victoria
If we have a large number of atoms in a hot, dense gas, the atoms will constantly be experiencing collisions with each other, leading to excitation to the various possible energy levels. Collisional excitation will be followed, typically on timescales of the order of nanoseconds, by radiative deexcitation. If the temperature and pressure remain constant, there will exist a sort of dynamic equilibrium between collisional excitations and radiative de-excitations, leading to a certain distribution of the atoms among their various energy levels. Most of the atoms will be in low-lying levels; the number of atoms in higher levels will decrease exponentially with energy level. The lower the temperature, the faster will be the population drop at the higher levels. Only at very high temperatures will high-lying energy levels be occupied by an appreciable number of atoms. Boltzmann's Equation shows just what the distribution of the atoms will be among the various energy levels as a function of energy and temperature.
Let's imagine a box (constant volume) holding $N$ atoms, each of which has $m$ possible energy levels. Suppose that there are $N_j$ atoms in energy level $E_j$. The total number $N$ of atoms is given by
$$N = \sum_{i=1}^m N_i. \tag{8.4.1}$$
Here, $i$ is a running integer going from $1$ to $m$, including $j$ as one of them.
The total internal energy $U$ of the system is
$$U = \sum_{i=1}^m N_i E_i. \tag{8.4.2}$$
We now need to establish how many ways there are of arranging $N$ atoms such that there are $N_1$ in the first energy level, $N_2$ in the second, and so on. We shall denote this number by $X$. To some, it will be intuitive that
$$X = \frac{N!}{N_1!\,N_2!\cdots N_j!\cdots N_m!}. \tag{8.4.3}$$
That is,
$$X = \frac{N!}{\prod_{i=1}^m N_i!}. \tag{8.4.4}$$
I don't find it immediately obvious myself, and I am happier with at least a minimal proof. Thus, the number of ways in which $N_1$ atoms can be chosen from $N$ to occupy the first level is $\binom{N}{N_1}$, where the parentheses denote the usual binomial coefficient. For each of these ways, we need to know the number of ways in which $N_2$ atoms can be chosen from the remaining $N-N_1$. This is, of course, $\binom{N-N_1}{N_2}$. Thus the number of ways of populating the first two levels is $\binom{N}{N_1}\binom{N-N_1}{N_2}$.
On continuing with this argument, we eventually arrive at
$$X = \binom{N}{N_1}\binom{N-N_1}{N_2}\binom{N-N_1-N_2}{N_3}\cdots\binom{N-\sum_{i=1}^{m-1}N_i}{N_m}.$$
If the binomial coefficients are written out in full (do it - don't just take my word for it), there will be lots of cancellations and you almost immediately arrive at Equation 8.4.3.
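A quick numerical check of this cancellation (the occupation numbers below are illustrative values of my own, not from the text):

```python
# Quick numerical check (illustrative occupation numbers, not from the text)
# that the product of binomial coefficients telescopes to N!/(N1! N2! ... Nm!).
from math import comb, factorial, prod

occupations = [3, 1, 4, 2]            # N1 ... Nm for a toy system
N = sum(occupations)                  # N = 10

X_binomials = 1
remaining = N
for n_i in occupations:
    X_binomials *= comb(remaining, n_i)
    remaining -= n_i

X_multinomial = factorial(N) // prod(factorial(n) for n in occupations)
print(X_binomials, X_multinomial)     # both 12600
```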
We now need to know the most probable partition - i.e. the most probable numbers $N_1$, $N_2$, etc. The most probable partition is the one that maximizes $X$ with respect to each of the $N_j$ - subject to the constraints represented by Equations 8.4.1 and 8.4.2.
Mathematically it is easier to maximize $\ln X$, which amounts to the same thing. Taking the logarithm of Equation 8.4.3, we obtain
$$\ln X = \ln N! - \ln N_1! - \ln N_2! - \dots$$
Apply Stirling's approximation to the factorials of all the variables. (You'll see in a moment that it won't matter whether or not you also apply it to the constant term $\ln N!$) We obtain
$$\ln X \cong \ln N! - (N_1\ln N_1 - N_1) - (N_2\ln N_2 - N_2) - \dots \tag{8.4.7}$$
Let us now maximize $\ln X$ with respect to one of the variables, for example $N_j$, in a manner that is consistent with the constraints of Equations 8.4.1 and 8.4.2. Using the method of Lagrangian multipliers, we obtain, for the most probable occupation number of the $j$th level, the condition
$$\frac{\partial \ln X}{\partial N_j} + \lambda \frac{\partial N}{\partial N_j} + \mu \frac{\partial U}{\partial N_j} = 0.$$
Upon carrying out the differentiations, we obtain
$$-\ln N_j + \lambda + \mu E_j = 0. \tag{8.4.9}$$
That is to say:
$$N_j = e^{\lambda + \mu E_j} = C e^{\mu E_j}. \tag{8.4.10}$$
What now remains is to identify the Lagrangian multipliers $\lambda$ (or $C = e^\lambda$) and $\mu$. Multiply both sides of Equation 8.4.9 by $N_j$. Recall that $i$ is a running subscript going from $1$ to $m$, and that $j$ is one particular value of $i$. Therefore now change the subscript from $j$ to $i$, and sum from $i=1$ to $m$, and Equation 8.4.9 now becomes
$$-\sum_{i=1}^m N_i\ln N_i + \lambda N + \mu U = 0,$$
where we have made use of Equations 8.4.1 and 8.4.2. From Equation 8.4.7, we see that
$$-\sum_{i=1}^m N_i\ln N_i = \ln X - \ln N! - N,$$
so that $\ln X = \ln N! - (\lambda - 1)N - \mu U.$
Now apply Equation 8.3.3, followed by Equation 8.3.2, and we immediately make the identification
$$\mu = -\frac{1}{kT}.$$
Thus Equation 8.4.10 becomes
$$N_j = C e^{-E_j/(kT)}. \tag{8.4.15}$$
We still have to determine $C$. If we change the subscript in Equation 8.4.15 from $j$ to $i$ and sum from $1$ to $m$, we immediately find that
$$C = \frac{N}{\sum_{i=1}^m e^{-E_i/(kT)}}.$$
Thus
$$\frac{N_j}{N} = \frac{e^{-E_j/(kT)}}{\sum e^{-E_i/(kT)}}, \tag{8.4.17}$$
where I have omitted the summation limits ($1$ and $m$) as understood.
However, there is one factor we have not yet considered. Most energy levels in an atom are degenerate; that is to say there are several states with the same energy. Therefore, to find the population of a level, we have to add together the populations of the constituent states. Thus each term in Equation 8.4.17 must be multiplied by the statistical weight ϖ of the level. (This is unfortunately often given the symbol g. See section 7.14 for the distinction between d, g and ϖ. The symbol ϖ is a form of the Greek letter pi.) Thus we arrive at Boltzmann's Equation:
$$\frac{N_j}{N} = \frac{\varpi_j e^{-E_j/(kT)}}{\sum \varpi_i e^{-E_i/(kT)}} \tag{8.4.18}$$
The denominator of the expression is called the partition function (die Zustandssumme). It is often given the symbol $u$ or $Q$ or $Z$.
The statistical weight of a level of an atom with zero nuclear spin is 2J+1. If the nuclear spin is I, the statistical weight of a level is (2I+1)(2J+1). However, the same factor 2I+1 occurs in the numerator and in every term of the denominator of equation 8.4.18, and it therefore cancels out from top and bottom. Consequently, in working with Boltzmann's equation, under most circumstances it is not necessary to be concerned about whether the atom has any nuclear spin, and the statistical weight of each level in equation 8.4.18 can usually be safely taken to be (2J+1).
In equation 8.4.18 we have compared the number of atoms in level $j$ with the number of atoms in all levels. We can also compare the number of atoms in level $j$ with the number in the ground level $0$:
$$\frac{N_j}{N_0} = \frac{\varpi_j}{\varpi_0} e^{-E_j/(kT)}$$
Or we could compare the number in level $2$ to the number in level $1$, where "$1$" and "$2$" represent any two levels, level $2$ lying higher than level $1$:
$$\frac{N_2}{N_1} = \frac{\varpi_2}{\varpi_1} e^{-(E_2-E_1)/(kT)} = \frac{\varpi_2}{\varpi_1} e^{-h\nu/(kT)}.$$
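As a worked illustration of equation 8.4.18 (my own sketch; the energies, J values and temperature below are made-up numbers, not data from the text), one can compute the fractional populations and the partition function directly:

```python
# Worked illustration of equation 8.4.18 (my own sketch: the energies, J values
# and temperature below are made-up numbers, not data from the text).
import numpy as np

k_B = 1.380649e-23                    # Boltzmann constant, J/K
eV = 1.602176634e-19                  # joules per electron volt

E = np.array([0.0, 1.0, 1.5]) * eV    # hypothetical level energies
J = np.array([0, 1, 2])               # hypothetical J values
varpi = 2 * J + 1                     # statistical weights 2J + 1
T = 10_000.0                          # temperature in kelvin

terms = varpi * np.exp(-E / (k_B * T))
u = terms.sum()                       # the partition function (Zustandssumme)
fractions = terms / u                 # N_j / N for each level

print(fractions, fractions.sum())     # populations sum to 1
```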
probability - How exactly are the beta and gamma distributions related? - Mathematics Stack Exchange
===============
How exactly are the beta and gamma distributions related?
According to Wikipedia, the Beta distribution is related to the gamma distribution by the following relation:
$$\lim_{n\to\infty} n\,\mathrm{B}(k,n) = \Gamma(k,1)$$
Can you point me to a derivation of this fact? Can it be generalized? For example, is there a similar relation that results in something other than a constant 1 for the Gamma second parameter? What if we have
$$\lim_{n\to\infty,\ m\to\infty,\ n=mb} n\,\mathrm{B}(k,m)$$
That is, the two variables go to infinity while maintaining a constant ratio b.
The reason I'm asking is because I'm trying to figure out how to simplify a hierarchical Bayesian model involving the beta distribution.
(This is my first post; sorry for the math notation, the MathJaX syntax was too daunting, but I'll try to learn)
probability
probability-distributions
It's not MathJaX, it's LaTex, and there's a very very helpful post here: meta.math.stackexchange.com/questions/5020/… :) –Daniel Commented Sep 3, 2012 at 22:32
Your "fact" is wrong. For any integer $k \ge 2$, $\lim_{n\to\infty} n\,\mathrm{B}(n,k) = \lim_{n\to\infty} \frac{\Gamma(k)}{(n+1)(n+2)\cdots(n+k-1)} = 0$ –Robert Israel Commented Sep 3, 2012 at 23:04
Perhaps you meant $\lim_{n\to\infty} n^k\,\mathrm{B}(n,k) = \Gamma(k)$ –Robert Israel Commented Sep 3, 2012 at 23:06
For fixed $b>0$, $\mathrm{B}(n,bn) \sim \sqrt{2\pi/n}\; b^{bn-1/2}(1+b)^{1/2-n-bn}$ as $n\to\infty$ –Robert Israel Commented Sep 3, 2012 at 23:12
3 Robert, I think you are talking about the Beta and Gamma functions, whereas my question concerns the Beta and Gamma distributions. –Sten Linnarsson Commented Sep 6, 2012 at 7:49
This concerns the relationship between the Gamma and Beta distributions as opposed to the Gamma and Beta functions. Let $X \sim \mathrm{Gamma}(\alpha, 1)$ and $Y \sim \mathrm{Gamma}(\beta, 1)$ where the parameterization is such that $\alpha$ is the shape parameter. Then
$$\frac{X}{X+Y} \sim \mathrm{Beta}(\alpha, \beta).$$
To prove this, write the joint pdf $f_{X,Y}(x,y) = \frac{1}{\Gamma(\alpha)\Gamma(\beta)} x^{\alpha-1} y^{\beta-1} e^{-(x+y)}$ (on $\mathbb{R}_+^2$) and make the transformation $U = \frac{X}{X+Y}$ and $V = X+Y$. The Jacobian of the transformation $X = VU$, $Y = V(1-U)$ is equal to $V$ so the joint distribution of $U$ and $V$ has pdf
$$\frac{v}{\Gamma(\alpha)\Gamma(\beta)} (vu)^{\alpha-1} (v(1-u))^{\beta-1} e^{-v} = \frac{1}{\Gamma(\alpha)\Gamma(\beta)} v^{\alpha+\beta-1} e^{-v}\, u^{\alpha-1}(1-u)^{\beta-1}$$
(on $\mathbb{R}_+ \times [0,1]$) and hence $U$ and $V$ are independent (because the pdf factors over $u$ and $v$) with $V \sim \mathrm{Gamma}(\alpha+\beta, 1)$ and $U \sim \mathrm{Beta}(\alpha, \beta)$, which is apparent from the terms $v^{\alpha+\beta-1} e^{-v}$ and $u^{\alpha-1}(1-u)^{\beta-1}$ respectively.
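A quick Monte Carlo check of this result (my own addition; the parameter values are arbitrary):

```python
# Quick Monte Carlo check (my own addition; parameter values are arbitrary)
# that X/(X+Y) ~ Beta(alpha, beta) for independent X ~ Gamma(alpha, 1),
# Y ~ Gamma(beta, 1).
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 2.5, 4.0, 200_000

X = rng.gamma(shape=alpha, scale=1.0, size=n)
Y = rng.gamma(shape=beta, scale=1.0, size=n)
U = X / (X + Y)

mean_theory = alpha / (alpha + beta)
var_theory = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(U.mean(), mean_theory)          # ~0.3846
print(U.var(), var_theory)            # ~0.0316
```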
Do we need X and Y to be independent? –Jackie Commented Jan 8, 2023 at 15:32
@Jackie Yes, if not you can't say much. For example, if $\alpha = \beta$ and $X = Y$ then $U = 1/2$ and $X + Y \sim \mathrm{Gam}(\alpha, 1/2)$ –guy Commented Jan 8, 2023 at 20:07
Fix some $k$ and, for every $n$, let $X_n$ denote a random variable with beta distribution $\mathrm{B}(k,n)$ and $Y_n = nX_n$. Then, for every $s \geqslant 0$, $E(Y_n^s) = n^s E(X_n^s)$ and one knows the value of $E(X_n^s)$, hence
$$E(Y_n^s) = n^s\,\frac{\mathrm{B}(k+s,n)}{\mathrm{B}(k,n)} = n^s\,\frac{\Gamma(k+s)\Gamma(k+n)}{\Gamma(k+s+n)\Gamma(k)} \longrightarrow \frac{\Gamma(k+s)}{\Gamma(k)}.$$
This is $E(Z^s)$ for any random variable $Z$ with gamma distribution $\Gamma(k)$ hence $Y_n \to Z$ in distribution.
Let $X'_n$ denote a random variable with beta distribution $\mathrm{B}(k,n/b)$, and $Y'_n = nX'_n$. Then $X'_n$ is distributed like $X_{n/b}$ hence $Y'_n$ is distributed like $b\,Y_{n/b}$ and $Y'_n \to bZ$ in distribution.
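A small simulation (my own addition; the values of k and n are arbitrary) illustrating the limit $Y_n = nX_n \to \Gamma(k)$ in distribution, here checked through the first two moments:

```python
# Small simulation (my own addition; k and n arbitrary) of Y_n = n * X_n with
# X_n ~ Beta(k, n): for large n its first two moments match Gamma(k, 1).
import numpy as np

rng = np.random.default_rng(1)
k, n, size = 3.0, 2000, 200_000

Y = n * rng.beta(k, n, size=size)
Z = rng.gamma(shape=k, scale=1.0, size=size)

print(Y.mean(), Z.mean())             # both close to k = 3
print(Y.var(), Z.var())               # both close to k = 3
```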
There is another way to view the relationship between the gamma distribution and the beta distribution through the Dirichlet distribution. This post ( talks about exactly how they are related without the Dirichlet distribution, but here is a slightly broader view:
Let $Z_1, Z_2, \ldots, Z_n$ be independent random variables such that
$$Z_i \sim \mathrm{Gamma}(\alpha_i, 1), \quad i = 1, \ldots, n$$
where $\alpha_i \ge 0$ are the shape parameters. If $Y_j$ is defined as follows,
$$Y_j = Z_j \Big/ \sum_{i=1}^n Z_i, \quad j = 1, \ldots, n$$
then $(Y_1, \ldots, Y_n)$ is Dirichlet distributed with parameter $(\alpha_1, \ldots, \alpha_n)$. When $n = 2$, the Dirichlet distribution reduces to the Beta distribution, denoted by $\mathrm{Beta}(\alpha_1, \alpha_2)$.
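A short simulation sketch of this construction (my own addition; the parameter vector is arbitrary), normalizing independent Gamma draws and comparing with NumPy's built-in Dirichlet sampler:

```python
# Simulation sketch (my own addition; the parameter vector is arbitrary):
# normalizing independent Gamma(alpha_i, 1) draws gives Dirichlet(alpha) samples,
# compared here against NumPy's built-in Dirichlet sampler.
import numpy as np

rng = np.random.default_rng(2)
alphas = np.array([1.5, 2.0, 3.5])
size = 200_000

Z = rng.gamma(shape=alphas, scale=1.0, size=(size, len(alphas)))
Y = Z / Z.sum(axis=1, keepdims=True)             # Dirichlet(alphas) samples

print(Y.mean(axis=0))                            # ~ alphas / alphas.sum()
print(rng.dirichlet(alphas, size).mean(axis=0))  # reference sampler
```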
Here is a link to Ferguson's paper that mentions the above relationship:
You must log in to answer this question.
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
probability
probability-distributions
See similar questions with these tags.
Featured on Meta
Will you help build our new visual identity?
Upcoming initiatives on Stack Overflow and across the Stack Exchange network...
Community help needed to clean up goo.gl links (by August 25)
Report this ad
Linked
2Ratio of Gamma random variables
0Random Sampling of Beta Distribution using direct method
1Variance of random variables (density)
8X,Y are independent exponentially distributed then what is the distribution of X/(X+Y)
8PDF of z=A B−C D A 2+B 2+C 2+D 2 z=A B−C D A 2+B 2+C 2+D 2, where A A, B B, C C, and D D are independent Gaussian random variables with mean 0 0 and variance 1 1
6distribution of one random over the sum of random variables
5Convex combination of Dirichlet random variables
2Prove that U=X+Y U=X+Y and V=X/(X+Y)V=X/(X+Y) are independent
6If U∼χ 2 m U∼χ m 2 independently of V∼χ 2 n V∼χ n 2 then prove that V U+V∼β(n 2,m 2)V U+V∼β(n 2,m 2)
3Does lim n→∞((∫1 0(1−s 2)n d s)−1∫1 A(1−s 2)n d s)=0 lim n→∞((∫0 1(1−s 2)n d s)−1∫A 1(1−s 2)n d s)=0 hold?
See more linked questions
Related
8Distribution of Ratio of Exponential and Gamma random variable
2Limit of Beta distribution on [0,A][0,A] as A→∞A→∞ with constant expectation and variance
0Derivation of the Beta posterior distribution
0Why isn't the moment generator function for the beta distribution divergent?
2Intuition for "If r.v. X∼G a m m a(a+b,λ)X∼G a m m a(a+b,λ) and Y∼B e t a(a,b)Y∼B e t a(a,b) are independent, then X Y∼G a m m a(a,λ)X Y∼G a m m a(a,λ)"
Hot Network Questions
What's the difference between democracy and totalitarianism if, even in democracy, we must respect laws set by parties we didn't vote for?
If I remove the point in a dataset which is furthest from the mean, does the sample variance automatically decrease, or at least not increase?
If I self-publish a book and give it away for free, would it meet a future publisher's desire to be "first publishing rights"?
Why isn't gauge symmetry a symmetry while global symmetry is?
How soon after parking a car in a paid parking area must I provide proof of payment?
I failed to make Claus benzene. (With sticks.)
Why are illegal immigrants counted towards congressional district apportionment and allocation of Electoral College votes in the United States?
Do you email authors whose results you have improved?
Graphical software tools for quick and easy diagrams
|
13
|
Lebesgue differentiation theorem - Wikipedia
===============
From Wikipedia, the free encyclopedia
Mathematical theorem in real analysis
In mathematics, the Lebesgue differentiation theorem is a theorem of real analysis, which states that for almost every point, the value of an integrable function is the limiting average taken around the point. The theorem is named for Henri Lebesgue.
Statement
For a Lebesgue integrable real or complex-valued function f on R^n, the indefinite integral is a set function which maps a measurable set A to the Lebesgue integral of $f \cdot \mathbf{1}_A$, where $\mathbf{1}_A$ denotes the characteristic function of the set A. It is usually written
$$A \mapsto \int_A f \,\mathrm{d}\lambda,$$
with $\lambda$ the n-dimensional Lebesgue measure.
The derivative of this integral at x is defined to be
$$\lim_{B \to x} \frac{1}{|B|} \int_B f \,\mathrm{d}\lambda,$$
where |B| denotes the volume (i.e., the Lebesgue measure) of a ball B centered at x, and B → x means that the diameter of B tends to 0.
The Lebesgue differentiation theorem (Lebesgue 1910) states that this derivative exists and is equal to f(x) at almost every point x ∈ R^n. In fact a slightly stronger statement is true. Note that:
$$\left| \frac{1}{|B|} \int_B f(y)\,\mathrm{d}\lambda(y) - f(x) \right| = \left| \frac{1}{|B|} \int_B \bigl(f(y)-f(x)\bigr)\,\mathrm{d}\lambda(y) \right| \le \frac{1}{|B|} \int_B |f(y)-f(x)|\,\mathrm{d}\lambda(y).$$
The stronger assertion is that the right hand side tends to zero for almost every point x. The points x for which this is true are called the Lebesgue points of f.
A more general version also holds. One may replace the balls B by a family $\mathcal{V}$ of sets U of bounded eccentricity. This means that there exists some fixed c > 0 such that each set U from the family is contained in a ball B with $|U| \ge c\,|B|$. It is also assumed that every point x ∈ R^n is contained in arbitrarily small sets from $\mathcal{V}$. When these sets shrink to x, the same result holds: for almost every point x,
$$f(x) = \lim_{U \to x,\, U \in \mathcal{V}} \frac{1}{|U|} \int_U f \,\mathrm{d}\lambda.$$
The family of cubes is an example of such a family $\mathcal{V}$, as is the family $\mathcal{V}(m)$ of rectangles in R^2 such that the ratio of sides stays between $m^{-1}$ and $m$, for some fixed m ≥ 1. If an arbitrary norm is given on R^n, the family of balls for the metric associated to the norm is another example.
The one-dimensional case was proved earlier by Lebesgue (1904). If f is integrable on the real line, the function
$$F(x) = \int_{(-\infty, x]} f(t)\,\mathrm{d}t$$
is almost everywhere differentiable, with F′(x) = f(x). Were F defined by a Riemann integral this would be essentially the fundamental theorem of calculus, but Lebesgue proved that it remains true when using the Lebesgue integral.
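The following short numerical sketch (my own illustration, not part of the article) makes the one-dimensional statement concrete: it averages a step function over shrinking intervals B = [x − r, x + r]. At a point of continuity the averages approach f(x); at the jump they approach the average of the one-sided limits, so the jump is not a Lebesgue point. The particular function and grid size are arbitrary choices.

```python
import numpy as np

# Minimal sketch: average an integrable step function over shrinking
# intervals B = [x - r, x + r].  The function below is an arbitrary choice;
# its jump at 0 is the one point that fails to be a Lebesgue point.

def f(y):
    return np.where(y < 0.0, -1.0, 1.0)

def ball_average(x, r, n=200_001):
    """Approximate (1/|B|) * integral_B f dλ by a uniform-grid average."""
    ys = np.linspace(x - r, x + r, n)
    return f(ys).mean()

for x in (0.3, 0.0):   # continuity point vs. jump point
    print(x, [round(ball_average(x, r), 4) for r in (0.5, 0.1, 0.01, 0.001)])
# At x = 0.3 the averages tend to f(0.3) = 1; at the jump x = 0 they stay
# near 0 (the average of the one-sided limits), so 0 is not a Lebesgue point.
```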
Proof
The theorem in its stronger form—that almost every point is a Lebesgue point of a locally integrable function f—can be proved as a consequence of the weak-$L^1$ estimates for the Hardy–Littlewood maximal function. The proof below follows the standard treatment that can be found in Benedetto & Czaja (2009), Stein & Shakarchi (2005), Wheeden & Zygmund (1977) and Rudin (1987).
Since the statement is local in character, f can be assumed to be zero outside some ball of finite radius and hence integrable. It is then sufficient to prove that the set
$$E_\alpha = \Bigl\{ x \in \mathbf{R}^n : \limsup_{|B| \to 0,\, x \in B} \frac{1}{|B|} \Bigl| \int_B f(y) - f(x)\,\mathrm{d}y \Bigr| > 2\alpha \Bigr\}$$
has measure 0 for all α > 0.
Let ε > 0 be given. Using the density of continuous functions of compact support in $L^1(\mathbf{R}^n)$, one can find such a function g satisfying
$$\|f-g\|_{L^1} = \int_{\mathbf{R}^n} |f(x)-g(x)|\,\mathrm{d}x < \varepsilon.$$
It is then helpful to rewrite the main difference as
$$\frac{1}{|B|}\int_B f(y)\,\mathrm{d}y - f(x) = \Bigl(\frac{1}{|B|}\int_B \bigl(f(y)-g(y)\bigr)\,\mathrm{d}y\Bigr) + \Bigl(\frac{1}{|B|}\int_B g(y)\,\mathrm{d}y - g(x)\Bigr) + \bigl(g(x)-f(x)\bigr).$$
The first term can be bounded by the value at x of the maximal function for f − g, denoted here by $(f-g)^*(x)$:
$$\frac{1}{|B|}\int_B |f(y)-g(y)|\,\mathrm{d}y \le \sup_{r>0} \frac{1}{|B_r(x)|}\int_{B_r(x)} |f(y)-g(y)|\,\mathrm{d}y = (f-g)^*(x).$$
The second term disappears in the limit since g is a continuous function, and the third term is bounded by |f(x) − g(x)|. For the absolute value of the original difference to be greater than 2α in the limit, at least one of the first or third terms must be greater than α in absolute value. However, the estimate on the Hardy–Littlewood function says that
$$\bigl|\{x : (f-g)^*(x) > \alpha\}\bigr| \le \frac{A_n}{\alpha}\,\|f-g\|_{L^1} < \frac{A_n}{\alpha}\,\varepsilon,$$
for some constant $A_n$ depending only upon the dimension n. The Markov inequality (also called Tchebyshev's inequality) says that
$$\bigl|\{x : |f(x)-g(x)| > \alpha\}\bigr| \le \frac{1}{\alpha}\,\|f-g\|_{L^1} < \frac{1}{\alpha}\,\varepsilon,$$
thus
$$|E_\alpha| \le \frac{A_n + 1}{\alpha}\,\varepsilon.$$
Since ε was arbitrary, it can be taken to be arbitrarily small, and the theorem follows.
Discussion of proof
The Vitali covering lemma is vital to the proof of this theorem; its role lies in proving the estimate for the Hardy–Littlewood maximal function.
The theorem also holds if balls are replaced, in the definition of the derivative, by families of sets with diameter tending to zero satisfying Lebesgue's regularity condition, defined above as a family of sets of bounded eccentricity. This follows since the same substitution can be made in the statement of the Vitali covering lemma.
Discussion
This is an analogue, and a generalization, of the fundamental theorem of calculus, which equates a Riemann integrable function and the derivative of its (indefinite) integral. It is also possible to show a converse – that every differentiable function is equal to the integral of its derivative, but this requires a Henstock–Kurzweil integral in order to be able to integrate an arbitrary derivative.
A special case of the Lebesgue differentiation theorem is the Lebesgue density theorem, which is equivalent to the differentiation theorem for characteristic functions of measurable sets. The density theorem is usually proved using a simpler method (e.g. see Measure and Category).
This theorem is also true for every finite Borel measure on Rn instead of Lebesgue measure (a proof can be found in e.g. (Ledrappier & Young 1985)). More generally, it is true of any finite Borel measure on a separable metric space such that at least one of the following holds:
the metric space is a Riemannian manifold,
the metric space is a locally compact ultrametric space,
the measure is doubling.
A proof of these results can be found in sections 2.8–2.9 of (Federer 1969).
See also
Lebesgue's density theorem
References
Folland, G. B. (1999). Real analysis: modern techniques and their applications (2nd ed.). New York: Wiley. Chapter 3. ISBN 0-471-31716-0. OCLC 39849337.
McDonald, John N.; Weiss, N. A. (2013). A course in real analysis (2nd ed.). Boston, Mass.: Academic Press/Elsevier. ISBN 978-0-12-387774-1. OCLC 754105634.
Lebesgue, Henri (1904). Leçons sur l'Intégration et la recherche des fonctions primitives. Paris: Gauthier-Villars.
Lebesgue, Henri (1910). "Sur l'intégration des fonctions discontinues". Annales Scientifiques de l'École Normale Supérieure. 27: 361–450. doi:10.24033/asens.624.
Wheeden, Richard L.; Zygmund, Antoni (1977). Measure and Integral – An introduction to Real Analysis. Marcel Dekker.
Oxtoby, John C. (1980). Measure and Category. Springer-Verlag.
Stein, Elias M.; Shakarchi, Rami (2005). Real analysis. Princeton Lectures in Analysis, III. Princeton, NJ: Princeton University Press. pp. xx+402. ISBN 0-691-11386-6. MR 2129625.
Benedetto, John J.; Czaja, Wojciech (2009). Integration And Modern Analysis. Birkhäuser Advanced Texts. Springer. pp. 361–364. ISBN 978-0817643065.
Rudin, Walter (1987). Real and complex analysis. International Series in Pure and Applied Mathematics (3rd ed.). McGraw–Hill. ISBN 0070542341.
Ledrappier, F.; Young, L. S. (1985). "The Metric Entropy of Diffeomorphisms: Part I: Characterization of Measures Satisfying Pesin's Entropy Formula". Annals of Mathematics. 122 (3): 509–539. doi:10.2307/1971328. JSTOR 1971328.
Federer, Herbert (1969). Geometric measure theory. Die Grundlehren der mathematischen Wissenschaften, Band 153. New York: Springer-Verlag New York Inc.
|
14
|
#A1 INTEGERS 18 (2018)

2-ADIC VALUATIONS OF GENERALIZED FIBONACCI NUMBERS OF ODD ORDER

Paul Thomas Young, Department of Mathematics, College of Charleston, Charleston, South Carolina. [email protected]

Received: 5/3/17, Accepted: 12/21/17, Published: 1/16/18

Abstract. Let $T_n$ denote the generalized Fibonacci number of order $k$ defined by the recurrence $T_n = T_{n-1} + T_{n-2} + \cdots + T_{n-k}$ for $n \ge k$, with initial conditions $T_0 = 0$ and $T_i = 1$ for $1 \le i < k$. In this paper we establish the 2-adic valuation of $T_n$ in almost all cases when $k$ is odd. Our results settle some conjectures of Lengyel and Marques.
1. Introduction

Let $T_n$ denote the generalized Fibonacci number of order $k$ defined by the recurrence $T_n = T_{n-1} + T_{n-2} + \cdots + T_{n-k}$ for $n \ge k$, with initial conditions $T_0 = 0$ and $T_i = 1$ for $1 \le i < k$. When $k = 2$ this is the usual Fibonacci sequence, whereas for $k = 5$ we have the sequence
$$0, 1, 1, 1, 1, 4, 8, 15, 29, 57, 113, 222, 436, 857, 1685, 3313, 6513, \ldots \tag{1.1}$$
The 2-adic valuation $\nu_2(T_n)$ has been a topic of recent interest, having been determined when $k = 3$, $k = 4$, $k = 5$ in almost all cases, and $k$ even.
Motivated by the formulas and conjectures in that work, in the present article we focus primarily on the case where $k \ge 5$ is odd, and answer those conjectures (one affirmatively, one negatively) by considering 2-adic analytic functions which interpolate subsequences of $(T_n)$ in residue classes modulo $2k+2$. The main result is the following:

Theorem 1. If $k \ge 5$ is odd, then for all integers $n$ we have
$$\nu_2(T_n) = \begin{cases} 0, & \text{if } n \not\equiv 0, k \pmod{k+1}, \\ \nu_2(k-1), & \text{if } n \equiv k \pmod{2k+2}, \\ \nu_2(k-3), & \text{if } n \equiv -1 \pmod{2k+2}, \\ \nu_2(n-k-1), & \text{if } n \equiv k+1 \pmod{2k+2} \text{ and } \nu_2(n-k-1) < \nu_2(k^2-1), \\ \nu_2(n-2)+1, & \text{if } n \equiv k+1 \pmod{2k+2} \text{ and } \nu_2(n-k-1) > \nu_2(k^2-1), \\ \nu_2(n) - \nu_2(k+1) + 1, & \text{if } n \equiv 0 \pmod{2k+2}. \end{cases}$$

We remark that in the case $k = 5$, the above Theorem 1 is equivalent to Theorem 2 of that earlier work. The above theorem also implies the odd $k$ case of Conjecture 2 there, which hypothesizes that for integers $r, k, s$ with $r \ge 1$, $k \ge 2$, and $s$ odd, the 2-adic valuation of the subsequence $(T_{s(k+1)2^r})$ has the form
$$\nu_2(T_{s(k+1)2^r}) = r + c(k) \tag{1.2}$$
where $c(2) = 2$, and otherwise $c(k) = \nu_2(k-2) + 1$.
The even $k$ case of this conjectured formula (1.2) was recently proved by Sobolewski, using a different method than the present paper.
For odd $k \ge 5$, Theorem 1 gives the exact valuation $\nu_2(T_n)$ in all cases except when $n \equiv k+1 \pmod{2k+2}$ and $\nu_2(n-k-1) = \nu_2(k^2-1)$. In the case $k = 5$, Lengyel and Marques (Conjecture 1) conjectured a formula for $\nu_2(T_n)$ when $n = 12m+6$ and $\nu_2(n-6) = 3$. Although their formula is correct for positive integers $n$ less than three million, we will show in the last section that it is not correct in general. However, the conjectured formula is correct in spirit; in fact, we have the following:

Theorem 2. Suppose $k \ge 5$ is odd and let $a = \nu_2(k-1)$. Then there exists a 2-adic integer $z \in \mathbf{Z}_2$ with $\nu_2(z) = a-1$ and
$$z \equiv \frac{k-1}{4-2k} \pmod{2^{3a-1}\mathbf{Z}_2}$$
such that $\nu_2(T_n) = \nu_2(m - z) + 2$ when $n$ is of the form $n = (2k+2)m + k + 1$.
When n is not a multiple of k + 1 the above Theorem 1 can be established by simplifying the recurrence for (Tn), as we show in the next section. To handle the cases where n is a multiple of k + 1, we will rely on the following theorem which may be proved using elementary 2-adic analysis.
Theorem 3. Write $k+1 = 2^e l$ with $l$ odd. Then for each $j \in \mathbf{Z}$ there exists a continuous function $f_j : \mathbf{Z}_2 \to \mathbf{Z}_2$ such that $f_j(n) = T_{ln+j}$ for all $n \in \mathbf{Z}$. Furthermore, for each $j \in \mathbf{Z}$ there exists a function $g_j$ which is analytic on $D = \{x \in \mathbf{C}_2 : \nu_2(x) > -1\}$ such that $g_j(n) = T_{2(k+1)n+j}$ for all $n \in \mathbf{Z}$.
The continuous functions $f_j$ described above will not play a computational role in the present paper, but they do illustrate an interesting property of the sequences $(T_n)$; for example, when $k = 7$ the sequence $(T_n)$ extends to a continuous function of $n$ on $\mathbf{Z}_2$, but for $k = 5$ it does not. The analytic functions $g_j$ will be of much greater use in establishing the valuations $\nu_2(T_n)$.
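Before turning to the proofs, Theorem 1 is easy to sanity-check directly from the recurrence. The sketch below is my own check, not part of the paper: it computes $T_n$ for a few odd $k \ge 5$ and compares $\nu_2(T_n)$ with the case formula, skipping the one case the theorem leaves open.

```python
def nu2(m):
    """2-adic valuation; nu2(0) is treated as +infinity."""
    if m == 0:
        return float("inf")
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

def gen_fib(k, n_max):
    """T_0, ..., T_{n_max}: T_0 = 0, T_i = 1 for 1 <= i < k, then the order-k sum."""
    T = [0] + [1] * (k - 1)
    for n in range(k, n_max + 1):
        T.append(sum(T[n - k:n]))
    return T

def theorem1(k, n):
    """Predicted nu_2(T_n) from Theorem 1; None in the case the theorem leaves open."""
    if n % (k + 1) not in (0, k):
        return 0
    s = n % (2 * k + 2)
    if s == k:
        return nu2(k - 1)
    if s == 2 * k + 1:                     # n = -1 mod 2k+2
        return nu2(k - 3)
    if s == 0:
        return nu2(n) - nu2(k + 1) + 1
    # remaining case: n = k+1 mod 2k+2
    if nu2(n - k - 1) < nu2(k * k - 1):
        return nu2(n - k - 1)
    if nu2(n - k - 1) > nu2(k * k - 1):
        return nu2(n - 2) + 1
    return None                            # nu_2(n-k-1) = nu_2(k^2-1): not covered

for k in (5, 7, 9, 11):
    T = gen_fib(k, 2000)
    for n in range(1, 2001):
        expected = theorem1(k, n)
        if expected is not None:
            assert nu2(T[n]) == expected, (k, n)
print("Theorem 1 matches the recurrence for k = 5, 7, 9, 11 and 1 <= n <= 2000.")
```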
2. Generalized Fibonacci Numbers

The characteristic polynomial of the recurrence for $(T_n)$ is
$$p(x) = x^k - x^{k-1} - x^{k-2} - \cdots - x - 1 = \frac{x^{k+1} - 2x^k + 1}{x-1} = \frac{q(x)}{x-1}. \tag{2.1}$$
Therefore the order $k$ recurrence
$$T_n = T_{n-1} + \cdots + T_{n-k} \tag{2.2}$$
is equivalent to the order $k+1$ recurrence
$$T_{n+1} = 2T_n - T_{n-k}. \tag{2.3}$$
It is then easily seen that $(T_n)$ is periodic modulo 2 with period $k+1$. Moreover, considering the initial conditions, for even $k$ we have $T_n$ even if and only if $n \equiv 0 \pmod{k+1}$, whereas for odd $k$ we have $T_n$ even if and only if $n \equiv 0, -1 \pmod{k+1}$. From this recurrence one can easily compute $T_n$ for $n$ near zero, giving

$T_{-2k-2} = 4k - 8$ (if $k \ge 3$)
$T_{-2k-1} = 13 - 4k$
$T_{-2k} = -3$
$T_{-k-i} = 1$ for $3 \le i \le k-1$
$T_{-k-2} = k - 1$ (if $k \ge 3$)
$T_{-k-1} = 6 - 2k$
$T_{-k} = -1$
$T_{-i} = 1$ for $2 \le i \le k-1$
$T_{-1} = 3 - k$
$T_0 = 0$
$T_i = 1$ for $1 \le i \le k-1$
$T_k = k - 1$
$T_{k+i} = 2^{i-1}(2k-3) + 1$ for $1 \le i \le k$
$T_{2k+1} = 2^k(2k-3) - k + 3$
$T_{2k+2} = 2^{k+1}(2k-3) - 4k + 8.$
From these initial values, it is easy to establish the following proposition by induction on r.
Proposition 1. For all nonnegative integers $r$ we have
$$T_{r(k+1)+i} \equiv 1 \pmod{2^i}, \quad 1 \le i \le k-1,$$
$$T_{r(k+1)+k} \equiv \begin{cases} k-1, & r \text{ even}, \\ 3-k, & r \text{ odd}, \end{cases} \pmod{2^k},$$
$$T_{r(k+1)} \equiv \begin{cases} 4r - 2rk, & r \text{ even}, \\ 2rk - 4r + 2, & r \text{ odd}, \end{cases} \pmod{2^{k+1}}.$$
Remark. Once we show that the functions $T_{(2k+2)m+j}$ are 2-adically continuous functions of $m$ in the next section, it will follow that the above Proposition 1 is valid for negative integers $r$ as well.
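As a quick consistency check (mine, not the paper's), the script below verifies that the recurrences (2.2) and (2.3) generate the same sequence and that the congruences of Proposition 1 hold for small nonnegative $r$ and a few odd $k$.

```python
def T_via_22(k, n_max):
    """T_0..T_{n_max} from the order-k recurrence (2.2)."""
    T = [0] + [1] * (k - 1)
    for n in range(k, n_max + 1):
        T.append(sum(T[-k:]))
    return T

def T_via_23(k, n_max):
    """T_0..T_{n_max} from the order-(k+1) recurrence (2.3), seeded with T_k = k-1."""
    T = [0] + [1] * (k - 1) + [k - 1]
    for n in range(k, n_max):
        T.append(2 * T[n] - T[n - k])
    return T

for k in (5, 7, 9):
    n_max = 40 * (k + 1)
    A, B = T_via_22(k, n_max), T_via_23(k, n_max)
    assert A == B                                  # (2.2) and (2.3) agree
    for r in range(30):                            # the three congruences of Proposition 1
        b = r * (k + 1)
        assert all((A[b + i] - 1) % 2**i == 0 for i in range(1, k))
        assert (A[b + k] - ((k - 1) if r % 2 == 0 else (3 - k))) % 2**k == 0
        assert (A[b] - ((4*r - 2*r*k) if r % 2 == 0 else (2*r*k - 4*r + 2))) % 2**(k + 1) == 0
print("(2.2) and (2.3) agree, and Proposition 1 holds for k = 5, 7, 9 and r < 30.")
```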
3. Construction of Interpolating Functions

Let $\mathbf{Z}_2$ denote the ring of 2-adic integers, $\mathbf{Q}_2$ the field of 2-adic numbers, and $\mathbf{C}_2$ the completion of an algebraic closure of $\mathbf{Q}_2$. The 2-adic valuation $\nu_2(n)$ of an integer $n$ is equal to the highest power of 2 which divides $n$, with the convention that $\nu_2(0) = +\infty$. This valuation extends uniquely to $\mathbf{C}_2$, on which it takes rational values (for example, $\nu_2(\sqrt{6}) = 1/2$).

For a polynomial $f(x) = \sum_{i=0}^{n} a_i (x-\alpha)^i \in \mathbf{C}_2[x]$, the Newton polygon of $f$ at $\alpha$ is the upper convex hull of the set of points $\{(i, \nu_2(a_i)) : 0 \le i \le n\}$. A basic property (Ch. IV.3, Lemma 4; Theorem 9.1) is that the Newton polygon of $f$ at $\alpha$ has a side of slope $m$ and horizontal run $l$ if and only if $f$ has $l$ zeros (counted with multiplicity) $\alpha_i \in \mathbf{C}_2$ with $\nu_2(\alpha_i - \alpha) = -m$.
In order to 2-adically interpolate the sequence $(T_n)$, we first use the theory of Newton polygons to locate the roots of $p(x)$ in $\mathbf{C}_2$. If $k+1 = 2^e l$ with $l$ odd, then the roots are partitioned into $l$ subsets, each of which lie close to an $l$-th root of unity in $\mathbf{C}_2$. If $\zeta^l = 1$ and $\alpha$ is a root of $p(x)$ with $\nu_2(\alpha - \zeta) > 0$, we will say $\alpha$ corresponds to $\zeta$; this means that they have the same image in the residue class field of $\mathbf{C}_2$.
Proposition 2. Write $k+1 = 2^e l$ with $l$ odd. Corresponding to each nontrivial solution $\zeta \in \mathbf{C}_2$ to $\zeta^l = 1$ there are $2^e$ roots $\alpha$ of $p(x)$ which satisfy $\nu_2(\alpha - \zeta) = 2^{-e}$. Corresponding to the trivial solution $\zeta = 1$, there are $2^e - 1$ roots $\alpha$ of $p(x)$ which satisfy $\nu_2(\alpha - 1) > 0$. When $e = 1$, this root satisfies $\nu_2(\alpha - 1) = \nu_2(k-1)$; when $e > 1$ these $2^e - 1$ roots all satisfy $\nu_2(\alpha - 1) = (2^e - 1)^{-1}$. If $\alpha_i, \alpha_j$ are two roots of $p(x)$ which correspond to the same $\zeta$, then $\nu_2(\alpha_i - \alpha_j) = (2^e - 1)^{-1}$; otherwise $\nu_2(\alpha_i - \alpha_j) = 0$ for roots $\alpha_i, \alpha_j$ of $p(x)$ corresponding to distinct solutions to $\zeta^l = 1$.
Proof. For any ↵2 C2 with ⌫2(↵) = 0 we have q(x) = (↵+ x −↵)k+1 −2(↵+ x −↵)k + 1 = k+1 X i=0 ri(x −↵)i (3.1) INTEGERS: 18 (2018) 5 where ri = ✓k + 1 i ◆ ↵k+1−i −2 ✓k i ◆ ↵k−i + δi,0.
(3.2) Since ⌫2( n m + ) equals the number of carries in the binary addition m + (n −m) = n, we see that i = 2e is the least positive index for which ⌫2(ri) = 0. We also have r1 = (k + 1)↵k −2k↵k−1 = ↵k−1((k + 1)↵−2k). If e = 0 we thus have ⌫2(r1) = 0.
If e > 1 we have ⌫2(r1) = 1, and if e = 1 we have ⌫2(r1) = 1 unless ↵corresponds to 1.
First take ↵= ⇣in (3.1), (3.2), where ⇣l = 1. In this case r0 = q(⇣) = 2(1−⇣−1) has positive 2-adic valuation; this valuation is 1 unless ⇣= 1, in which case it is +1. Therefore the vertices of the Newton polygon of q at ⇣are (0, 1), (2e, 0), and (k + 1, 0) when ⇣6= 1. This establishes ⌫2(↵−⇣) = 2−e for each of the 2e roots ↵ of p(x) corresponding to each nontrivial l-th root of unity ⇣. At ⇣= 1 the vertices are (0, +1), (1, 1), (2e, 0), and (k + 1, 0) when e > 1, and (0, +1), (1, ⌫2(k −1)), (2, 0), and (k + 1, 0) when e = 1. This establishes the stated valuations ⌫2(↵−1) for the roots of p(x) corresponding to 1. (Recall that 1 is a root of q but not of p.) Now assume e > 0, let ↵i be any root of p(x), and assume that e > 1 if ↵i corresponds to 1. Under these assumptions, from (3.1), (3.2), the vertices of the Newton polygon of q at ↵i are (0, +1), (1, 1), (2e, 0), and (k+1, 0). This shows that each root ↵i of p(x) has either 2e −2 or 2e −1 other roots ↵j of p(x) (according to whether ↵i corresponds to 1) which satisfy ⌫2(↵i−↵j) = (2e−1)−1. This completes the proof.
Having determined the location of the roots of the characteristic polynomial p(x) in C2, we may now give the proof of Theorem 3. The required functions are constructed as linear combinations of power functions of the form (1 + z)x := 1 X m=0 ✓x m ◆ zm (3.3) which are continuous functions of x 2 Z2 when ⌫2(z) > 0 (, Theorem 51.1) and analytic functions of x 2 Z2 when ⌫2(z) > 1 (, Theorem 54.4).
Proof of Theorem 3. According to Proposition 2, the roots of the characteristic polynomial p(x) are distinct, so the sequence Tn may be expressed in Binet form Tn = Pk i=1 ci↵n i , where ↵1, ..., ↵k are the roots of p(x) in C2 and ci 2 C2. More-over, each root ↵i may be expressed in the form ↵i = ⇣i(1 + "i), where ⇣l i = 1 and ⌫2("i) > 2−e. Given j 2 Z, define the function fj : Z2 ! Z2 by fj(x) := k X i=1 ci↵j i(1 + γi)x = k X i=1 ci↵j i 1 X m=0 ✓x m ◆ γm i (3.4) INTEGERS: 18 (2018) 6 where (1 + "i)l = 1 + γi with ⌫2(γi) > 2−e. This gives an expansion of fj(x) in terms of the basis { x m + } with coefficients P i ci↵j iγm i tending to 0 as m ! 1. By Mahler’s Theorem (, Theorem 51.1), fj is a continuous function on Z2. For any integer n we have Tln+j = k X i=1 ci↵ln+j i = k X i=1 ci↵j i(⇣i(1 + "i))ln = k X i=1 ci↵j i(1 + γi)n = fj(n).
(3.5) Since fj is continuous on Z2 and maps Z to Z, it maps Z2 to Z2. Therefore the existence of the required continuous functions fj has been established.
We now construct the analytic functions gj, paying attention to the coefficients.
If ⌫2(") = r 2 (0, 1], then (1 + ")2l = 1 + ⌘with ⌫2(⌘) > 2r. By induction it follows that if ⌫2("i) > 2−e, then (1 + "i)2(k+1) = 1 + ⌘i with ⌫2(⌘i) > 2. Since ⌫2(⌘i) > 2, we have log2(1 + ⌘i) := 1 X m=1 (−1)m+1 m ⌘m i = λi (3.6) with ⌫2(λi) = ⌫2(⌘i) > 2. Since ⌫2(λi) > 2 we have exp2(xλi) := 1 X m=0 λm i m! xm = 1 X m=0 bi,mxm (3.7) with ⌫2(bi,m) > 2m −⌫2(m!) = m + S2(m), where S2(m) denotes the binary digit sum of m. Using the fact that (1 + ⌘i)x = exp2(x log2(1 + ⌘i)) (3.8) when x 2 Z2 and ⌫2(⌘i) > 1 (, Theorem 47.10), we define gj(x) := k X i=1 ci↵j i(1 + ⌘i)x = k X i=1 ci↵j i 1 X m=0 bi,mxm !
= 1 X m=0 amxm (3.9) with coefficients am = P i ci↵j ibi,m. Since ⌫2(bi,m) > m, the series (3.9) converges on D = {x 2 C2 : ⌫2(x) > −1} to the function gj, which is therefore analytic on INTEGERS: 18 (2018) 7 this disc. Finally, for any integer n we compute T2(k+1)n+j = k X i=1 ci↵2(k+1)n+j i = k X i=1 ci↵j i(⇣i(1 + "i))2(k+1)n = k X i=1 ci↵j i(1 + ⌘i)n = gj(n).
(3.10) Thus the existence of the required analytic functions gj has been established.
Remark. Although the functions $f_j(x)$ and $g_j(x)$ may be evaluated at rational arguments $x \in \mathbf{Z}_2$, we caution that the values obtained do not correspond to values of $T_n$ when $x \notin \mathbf{Z}$. For example, when $k = 5$ the function $g_0(x)$ interpolates the values $\{T_{12x}\}$ when $x \in \mathbf{Z}$ and converges at $x = 1/3 \in \mathbf{Z}_2$, but $g_0(1/3)$ does not equal $T_4$. We will see in the next section that $\nu_2(g_0(1/3)) = 2$, while of course $T_4 = 1$. The reason for this is that $(\alpha^n)^{1/n}$ does not equal $\alpha$ in general.

Corollary 1. The sequence $(T_n)$ may be extended to a continuous function of $n \in \mathbf{Z}_2$ if and only if $k$ is of the form $k = 2^e - 1$.

Proof. If $k = 2^e - 1$ then $l = 1$ and the function $f_0$ constructed above provides the required extension. If $k$ is not of the form $2^e - 1$, then $k+1$ has an odd prime factor, so for any positive integer $n$, $2^n \not\equiv 0 \pmod{k+1}$ if $k$ is even, and $2^n \not\equiv 0, -1 \pmod{k+1}$ if $k$ is odd. So the sequence $(2^n)$ converges to 0 in $\mathbf{Z}_2$, but the sequence $(T_{2^n})$ consists only of odd integers, and therefore cannot converge to $T_0 = 0$ in $\mathbf{Z}_2$. Therefore $(T_n)$ cannot be extended to a continuous function on $\mathbf{Z}_2$.
Coefficients of the Analytic Functions gj(x) Proposition 3. The sequence Tn may be expressed in Binet form Tn = Pk i=1 ci↵n i , where ↵1, ..., ↵k are the roots of p(x) in C2 and ci 2 C2. If k is even, then ⌫2(ci) > 0 for all i, and if k is odd then ⌫2(ci) > −1 for all i.
Proof. Let e = ⌫2(k + 1) and a = ⌫2(k −1).
The initial conditions on Tn for n = 0, 1, ..., k −1 determine the constants ci according to the equation AC = T, where C = [c1 c2 · · · ck]T , T = [0 1 · · · 1]T , and A = 2 6 6 6 4 1 1 · · · 1 ↵1 ↵2 · · · ↵k .
.
.
.
.
.
...
.
.
.
↵k−1 1 ↵k−1 2 · · · ↵k−1 k 3 7 7 7 5 = Vk(↵1, ↵2, ..., ↵k) (4.1) INTEGERS: 18 (2018) 8 is the k ⇥k Vandermonde matrix with parameters ↵1, ↵2, ..., ↵k. By Cramer’s Rule, the solution to this system is given by ci = det(Ai)/ det(A), where Ai is the matrix obtained by replacing the i-th column of A with T. We have det(A) = Y 16i 0 we use Proposition 2 and (4.2) to compute ⌫2(det(A)) = (l −1) ✓2e 2 ◆ (2e −1)−1 + ✓2e −1 2 ◆ (2e −1)−1 = (l −1)2e−1 + (2e−1 −1) = (k −1)/2.
(4.3) The matrix Ai is formed by replacing the i-th column of A with T = [0 1 · · · 1]T .
We then calculate det(Ai) by cofactor expansion along the first row. This expresses det(Ai) as a sum of k −1 nonzero (k −1) ⇥(k −1) cofactors of the form ± ↵1 · · · ↵k ↵i↵j Vk−1(↵1, · · · , 1i, · · · , ˆ ↵j, · · · , ↵k) (4.4) where the symbol ˆ ↵j means that ↵j is omitted from the parameter list, and 1i means that ↵i is replaced with 1. We think of each of these Vk−1 Vandermonde matrices in (4.4) as being obtained from the Vk matrix (4.1) by inserting 1 among the parameters to obtain Vk+1(↵1, ..., ↵k, 1) (up to permutation of columns), and then removing two parameters ↵i and ↵j. Suppose that e > 1. Then from Proposition 2 and (4.2) we see that including 1 increases the valuation of the determinant by (2e −1)(2e −1)−1 = 1, while removing each of ↵i and ↵j decreases the valuation of the determinant by at most 1. Since det(Ai) is a sum of unit multiples of k −1 such cofactors, we have ⌫2(det(Ai)) > ⌫2(det(A))+1−2, which implies that ⌫2(ci) > −1.
In the case e = 1, including 1 increases the valuation by a, and removing ↵i and ↵j decreases the valuation by at most a + 1, so ⌫2(det(Ai)) > ⌫2(det(A)) −1, which implies that ⌫2(ci) > −1 in that case as well.
We now consider in detail the coefficients am of the analytic functions gj(x) = P m amxm. From (3.9) and Proposition 3 we see that a priori ⌫2(am) > ( m + S2(m) −1, k odd, m + S2(m), k even.
(4.5) We will primarily focus on the case where k is odd, since the even k case is similar.
It is immediate that Tj = gj(0) = a0. In general one may approximate the coeffi-cients am by computing gj(n) for several integers n and solving a system of linear equations. For example, for any exponent r, considering gj(2r) −gj(−2r) = 2r+1a1 + 23r+1a3 + 25r+1a5 + · · · (4.6) INTEGERS: 18 (2018) 9 leads to the determination a1 ⌘gj(2r) −gj(−2r) 2r+1 (mod 22r+4Z2), (4.7) and similarly a2 ⌘gj(2r) + gj(−2r) −2gj(0) 22r+1 (mod 22r+4Z2).
(4.8) As an example, in the case j = 0, taking r = bk/3c −1 in (4.7) and observing from Proposition 1 that g0(n) ⌘(8 −4k)n (mod 2k+1) for all integers n yields a1 ⌘8 −4k (mod 2b(2k+4)/3cZ2) (for j = 0), (4.9) and taking r = bk/4c −1 in (4.8) gives a2 ⌘0 (mod 22bk/4c+2Z2) (for j = 0).
(4.10) Although simple congruences such as these are sufficient for our purposes here, we remark that one may obtain stronger congruences by solving larger systems of equations. For example, if A is the k⇥k Vandermonde submatrix whose (i, j) entry is ji, and ~ b denotes the first column of A−1, then one may compute the i-th entry bi = (−1)i+1k i + /i. It follows that k X i=1 bigj(i) − k X i=1 bi !
gj(0) = a1 + k!ak+1 + · · · .
(4.11) In the case j = 0 we may conclude from Proposition 1 the stronger congruence a1 ⌘8 −4k (mod 2k+1−blog2 kcZ2) (for j = 0).
(4.12) The following table summarizes a few congruences for the coefficients a1 relevant to the valuation of Tn.
Coefficients of gj, k odd j a0 a1 0 0 8 −4k (mod 2k+1−blog2 kcZ2) 1 6 i 6 k −1 1 k k −1 0 (mod 2k−blog2 kcZ2) k + 1 2k −2 4k −8 (mod 2k+1−blog2 kcZ2) −1 3 −k 0 (mod 2k−blog2 kcZ2) 1 −k 6 i 6 −2 1 −k −1 INTEGERS: 18 (2018) 10 5. 2-adic Valuation of Tn Proof of Theorem 1. All cases of Theorem 1 except the n ⌘0 (mod k + 1) cases follow directly from Proposition 1. Suppose that n ⌘0 (mod 2k + 2), and write n = (2k + 2)m with m 2 Z. Then we have Tn = g0(m) = a1m + a2m2 + a3m3 + · · · .
(5.1) Since (4.9), (4.10) imply that ⌫2(a1) = 2 and ⌫2(ai) > 4 for i > 2, we have ⌫2(Tn) = 2 + ⌫2(m), proving the result in that case.
Now suppose that n = k + 1 + (2k + 2)m with m 2 Z. Then Tn = gk+1(m) = 2k −2 + a1m + a2m2 + a3m3 + · · · , (5.2) with ⌫2(a1) = 2 and ⌫2(ai) > 3 for i > 2. Let a = ⌫2(k −1). First consider the case where ⌫2(m) > a −1. In this case we have ⌫2(Tn) = ⌫2(2k −2) = a + 1 from (5.2). Also in this case, ⌫2((2k + 2)m) > a, so ⌫2(n −2) = ⌫2(k −1) = a. Since ⌫2(m) = ⌫2(n−k−1)−1−⌫2(k+1), the condition ⌫2(m) < ⌫2(k−1)−1 is therefore equivalent to ⌫2(n −k −1) < ⌫2(k2 −1).
Finally suppose that n = k + 1 + (2k + 2)m with ⌫2(m) < a −1. For this case to hold, we must have a > 2; since k −1 is a multiple of 4 we then have ⌫2(k + 1) = 1, which implies ⌫2(n −k −1) = ⌫2(m) + 2, so that ⌫2(n −k −1) < a + 1 = ⌫2(k2 −1).
In this case, we have from (5.2) that ⌫2(Tn) = ⌫2(a1m) = ⌫2(m) + 2. Therefore ⌫2(Tn) = ⌫2(n −k −1) as claimed, completing the proof.
It appears that the determination of $\nu_2(T_n)$ in the case where $n \equiv k+1 \pmod{2k+2}$ and $\nu_2(n-k-1) = \nu_2(k^2-1)$ requires more delicate analysis. We now examine the formula conjectured by Lengyel and Marques (Conjecture 1) in the case $k = 5$ and $n \equiv 6 \pmod{12}$.
Theorem 4. In the case $k = 5$, the formula
$$\nu_2(T_n) = \begin{cases} \nu_2(n+2), & \text{if } n \equiv 6 \pmod{12} \text{ and } \nu_2(n+2) < 8, \\ \nu_2(n+43266), & \text{if } n \equiv 6 \pmod{12} \text{ and } \nu_2(n+2) \ge 8 \end{cases}$$
conjectured there is correct when $\nu_2(n+2) \ne 8$, but is not correct in general.
Proof. For the affirmative part, it will suffice to compute T12m+6 modulo 29. We consider the analytic function g6(m) which interpolates the values T12m+6, and write g6(m) = P i aimi. We use the recurrence to compute the values g6(0) = 8, g6(1) = 25172, g6(2) = 83904288, g6(−1) = −4, g6(−2) = −16. As in (4.7) with r = 0, we have 12588 = a1 + a3 + a5 + · · · (5.3) INTEGERS: 18 (2018) 11 and with r = 1 we get 20976076 = a1 + 4a3 + 16a5 + · · · .
(5.4) Since ⌫2(a3) > 4, we initially get a1 ⌘12 (mod 26) from (5.4). Substituting this into (5.3) then gives a3 ⌘32 (mod 26) since ⌫2(a5) > 6. Substituting a3 = 32+64y back into (5.4) then shows a1 ⌘76 (mod 28).
A similar argument from (4.8) with r = 0 and r = 1 reveals that a2 ⌘224 (mod 29) and a4 ⌘0 (mod 27). If n = 12m + 6 with ⌫2(m) = 0 then Tn = g6(m) = 8 + a1m + · · · ⌘8 + 12m (mod 25) (5.5) and therefore ⌫2(Tn) = ⌫2(8 + 12m) = 2 = ⌫2(n + 2), proving the theorem in the case ⌫2(m) = 0. If ⌫2(m) > 2 then (5.5) shows that ⌫2(Tn) = 3 = ⌫2(n+2), proving the theorem when ⌫2(m) > 2.
Finally, we consider the case where ⌫2(m) = 1, and write m = 2u with u odd.
Then modulo 29 we compute Tn = g6(m) ⌘8 + a1(2u) + a2(4u2) + a3(8u3) ⌘8 + 152u + 896u2 + 256u3 ⌘(8 + 24u) + 128u + 896u2 + 768u3 (5.6) = (n + 2) + 128u(1 + u)(1 + 6u) (mod 29) Since 1 + u is even, we see that g6(m) ⌘(n + 2) (mod 28). It follows that ⌫2(Tn) = ⌫2(n+2) as long as ⌫2(n+2) < 8. Suppose that ⌫2(n+2) > 8. Since ⌫2(8+24u) > 8, we have ⌫2(1 + 3u) > 5. Since u is odd, this implies that ⌫2(1 + u) = 1, so that the factor 128u(1 + u)(1 + 6u) has valuation exactly 8. From (5.6) we conclude that ⌫2(Tn) = 8. But since ⌫2(43264) = 8, we also have ⌫2(n + 43266) = 8, proving the theorem in the case ⌫2(n + 2) > 8.
Numerical calculation shows that the conjectured formula $\nu_2(T_n) = \nu_2(n + 43266)$ is correct for positive integers $n = 12m+6$ less than three million; however, the formula is not correct in general. Assuming the formula were correct for positive integers $n$, it would necessarily also hold for negative integers $n$ by the continuity of the analytic function $g_6(m)$. However, the formula fails for $n = -43266$, as $\nu_2(T_n) = 20$ while $\nu_2(n + 43266) = +\infty$.

Remark. The above argument indicates how one may find actual positive integer counterexamples to the conjectured formula, using the continuity of the analytic function $g_6(m)$. However, this requires more extensive computation. The first two positive integer counterexamples are $n = 3 \cdot 2^{20} - 43266$, for which $\nu_2(T_n) = 22$ while $\nu_2(n + 43266) = 20$; and $n = 3 \cdot 2^{21} - 43266$, for which $\nu_2(T_n) = 20$ while $\nu_2(n + 43266) = 21$. From these calculations and Theorem 2, we can state the correct formula for the case $n \equiv 6 \pmod{12}$ in the form $\nu_2(T_n) = \nu_2(n - y)$, where $y \equiv 3 \cdot 2^{20} - 43266 \pmod{2^{22}\mathbf{Z}_2}$; here $y = 12z + 6$ where $z$ is the root of $g_6(x)$ guaranteed by Theorem 2.
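The affirmative half of Theorem 4 can also be checked by direct computation. The sketch below (my own check, not from the paper) verifies $\nu_2(T_n) = \nu_2(n+2)$ for $n \equiv 6 \pmod{12}$ with $\nu_2(n+2) < 8$ over a modest range; reaching the counterexamples near $3 \cdot 2^{20}$ takes a much longer run and is best done working modulo a large power of 2, since only the 2-adic valuation matters.

```python
def nu2(m):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

T = [0, 1, 1, 1, 1]                # k = 5 initial values T_0, ..., T_4
for n in range(5, 5000):
    T.append(sum(T[-5:]))          # T_n = T_{n-1} + ... + T_{n-5}

for n in range(6, 5000, 12):       # n = 6 mod 12
    if nu2(n + 2) < 8:
        assert nu2(T[n]) == nu2(n + 2), n
print("nu_2(T_n) = nu_2(n + 2) for all tested n = 6 mod 12 with nu_2(n + 2) < 8.")
```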
Proof of Theorem 2. We consider the Newton polygon of the power series gk+1(x) = P m amxm.
As was done in (4.9), (4.10) in the case j = 0, we compute from Proposition 1 that a1 ⌘4k −8 (mod 2b(2k+4)/3cZ2) (for j = k + 1) (5.7) and a2 ⌘0 (mod 22bk/4c+2Z2) (for j = k + 1).
(5.8) Since g(0) = a0 = 2k −2, we have (0, a + 1) and (1, 2) as vertices of the Newton polygon for the power series g(x) at 0, with all other points (i, ⌫2(ai)) lying on or above the diagonal line through the origin with slope 1. All sides of the Newton polygon beyond the first therefore have slope at least 1. Since the first side has horizontal run 1 and slope 1 −a, the power series g has precisely one root z 2 C2 with ⌫2(z) = a −1, and no other roots with valuation less than −1.
Consider the power series h(x) = gk+1(x)/4, which has coefficients in Z2. If x0 = (k −1)/(4 −2k), then gk+1(x0) = (2k −2) + a1x0 + a2x2 0 + a3x3 0 + · · · ⌘0 (mod 23a+1Z2).
(5.9) Since h(x0) ⌘0 (mod 2Z2) and h0(x0) ⌘k 6⌘0 (mod 2Z2), by Hensel’s Lemma (, Theorem 3) there exists z 2 Z2 with z ⌘x0 (mod 2Z2) and h(z) = 0. This root is therefore the root z described in the preceding paragraph, and thus lies in Z2. We then have (2k −2) + a1z ⌘0 (mod 23a+1Z2), (5.10) and dividing by a1, which has valuation 2, gives the congruence of the Theorem.
Now write gk+1(x) = P m amxm = (x −z) P m bmxm. Then b0 = −a0/z has ⌫2(b0) = 2, and bm = −(a0 + a1z + · · · + amzm)/zm+1 for m > 1. Since gk+1(z) = 0 we have ⌫2(b1) > 4 and ⌫2(bm) > m + 1 for all m > 0. Therefore P m bmxm also converges on D = {x 2 C2 : ⌫2(x) > −1}. Since ⌫2(bm) > 2 for all m > 0, we have ⌫2(P m bmxm) = 2 for all x 2 Z2. It follows that ⌫2(gk+1(x)) = ⌫2(x −z) + 2 for all x 2 Z2, completing the proof.
Acknowledgement.
All numerical computation was done using the PARI-GP calculator created by C. Batut, K. Belabas, D. Bernardi, H. Cohen and M. Olivier.
References

N. Koblitz, p-adic Numbers, p-adic Analysis, and Zeta Functions, Second Edition, Springer-Verlag, New York, 1984.
T. Lengyel, The order of the Fibonacci and Lucas numbers, Fibonacci Quart. 33 (1995), 234-239.
T. Lengyel and D.Marques, The 2-adic order of some generalized Fibonacci numbers, Integers 17 (2017), Article #A5, 10pp.
D. Marques and T. Lengyel, The 2-adic order of the Tribonacci numbers and the equation Tn = m!, J. Integer Seq. 17 (2014), Article 14.10.1, 1-8.
M. R. Murty, Introduction to p-adic Analytic Number Theory, AMS Studies in Advanced Mathematics Vol. 27, American Mathematical Society, Providence, 2002.
W. H. Schikhof, Ultrametric Calculus. An Introduction to p-adic Analysis, Cambridge Uni-versity Press, London, 1984.
B. Sobolewski, The 2-adic valuation of generalized Fibonacci sequences with an application to certain Diophantine equations, J. Number Theory 180 (2017), 730-742.
|
15
|
rt.representation theory - permutation representation of $S_n$ - MathOverflow
===============
permutation representation of $S_n$
Ask Question
Asked 12 years, 4 months ago
Modified 12 years, 3 months ago
Viewed 997 times
If a group $G$ acts on a set $X$, then we can speak of the permutation representation on $K[X]$. Now the set $X_k$ of all $k$-subsets of $\{1, 2, \ldots, n\}$ is an $S_n$-set, and we can speak about the permutation representation on $K[X_k]$. By decomposing it we get $[k/2]$ irreducible representations $V_0, \ldots, V_{[k/2]}$ for $k$ less than or equal to $n/2$. Similarly we can do this for any partition of $n$ instead of $(k, n-k)$, which we consider in the above case, and we obtain all irreducible representations of $S_n$, with the assumption that $K[X]$ is completely reducible. I am looking for any other method (instead of permutation representations) which gives all irreducible representations of $S_n$ and their character values in a simpler way. Is there any method like this? Thank you.
rt.representation-theory
gr.group-theory
asked Apr 14, 2013 at 11:00
GA316GA316
1,299 11 11 silver badges 24 24 bronze badges
6
What is $K$? –Gerry Myerson Commented Apr 14, 2013 at 11:05
any field which is algebraically closed. –GA316 Commented Apr 14, 2013 at 11:11
we have assumed that K[X] is completely reducible. –GA316 Commented Apr 14, 2013 at 11:11
thanks. Is there any way to get simple representations? –GA316 Commented Apr 14, 2013 at 12:09
Sorry, but your method is not clear to me... how do you get the non-hook irreps? Anyway, you are aware of the fact that if we fix a faithful rep $V$ of the finite group $G$, then every irrep of $G$ is a direct summand of some tensor power of $V$? –darij grinberg Commented Apr 14, 2013 at 16:04
2 Answers
Since you only care about the completely reducible case, I'll assume $K = \mathbf{C}$.
The easiest way that I know of to construct irreducible $S_n$ representations is a special case of the construction here. In particular, the symmetric group is a quotient of the (degenerate) affine Hecke algebra appearing there (obtained by setting $x_1 = 0$). If you take $\mu = 0$ in the link, then the skew shape $\lambda/\mu$ is just $\lambda$ and the module that is constructed is the irreducible representation $S^\lambda$.
This recovers the character formula $s_\lambda = \sum_T x^T$, where the sum is over all standard tableaux of shape $\lambda$ (as described in Macdonald's text, for example).
edited Apr 13, 2017 at 12:58
CommunityBot
1 2 2 silver badges 3 3 bronze badges
answered Apr 16, 2013 at 20:31
David HillDavid Hill
1,502 8 8 silver badges 12 12 bronze badges
There is a general study of representations of symmetric groups via Young tableaux. In particular there is a combinatorial expression for the characters. I think that's as simple as it gets currently.
answered Apr 26, 2013 at 22:24
LenaLena
35 1 1 bronze badge
1
This is precisely the construction I linked to above. –David Hill Commented Apr 27, 2013 at 16:09
|
16
|
Demand Systems in Industrial Organization*
Allan Collard-Wexler [Econ 890-02: Fall 2019]

*These notes draw from a variety of sources: in particular Ariel Pakes' lecture notes, and from (co-teaching with) John Asker at NYU Stern, and Robin Lee's notes at Harvard.

Contents
1 Overview
  1.1 Why spend time on Demand Systems?
2 Brief Theory Review
  2.1 Homogenous Goods
    2.1.1 The Cournot Model
    2.1.2 Homogeneous Products Bertrand Competition
    2.1.3 Strategic Complements and Strategic Substitutes: The Cournot Model
  2.2 Models of Product Differentiation
  2.3 Differentiated Products Bertrand Pricing
3 Approaches to demand estimation
4 On Demand Estimation
  4.1 Data... (briefly)
  4.2 Basics: Endogeneity of Prices and Other Definitions
  4.3 Single Product Demand Estimation
    4.3.1 Some History
    4.3.2 Representative Agent vs. Heterogeneous Agents
  4.4 Multi-product Systems
    4.4.1 Product vs Characteristic Space
5 Product Space Approaches: AIDS Models
  5.1 Overview
  5.2 Hausman, Leonard & Zona (1994) on Beer
  5.3 Chaudhuri, Goldberg and Jia (2006) on Quinolones
  5.4 Estimation in the Linear Cournot Model. [Extra Notes.]
6 Characteristic Space Approaches to Demand Estimation
  6.1 Formal Treatment
    6.1.1 Aside on utility functions
  6.2 Examples (Briefly)
  6.3 Estimation from Product Level Aggregate Data
    6.3.1 Illustrative Case: Vertical Model
  6.4 Identification
  6.5 Problems with Estimates from Simple Models
  6.6 Dealing with Simultaneity
7 Generalizing Demand to allow for more Realistic Substitution Patterns: BLP
  7.1 Estimation: Step by step overview
  7.2 Identification in these models
  7.3 Adding in "Supply Side" Moments
  7.4 Overview of BLP Results
  7.5 Nevo 1998
8 Some Applications of Characteristics Based Demand Systems
  8.1 "Micro"-BLP (2004 JPE)
  8.2 Petrin (2002 JPE)
  8.3 Gentzkow 2007
1 Overview

Demand systems often form the bedrock upon which empirical work in industrial organization rests. The next few lectures aim to introduce you to the different ways empirical researchers have approached the issue of demand estimation in the applied contexts that we typically confront as IO economists. I will start by briefly overviewing the types of research questions and various instances in which demand estimation is useful, and the core problems we face when estimating demand.
We will begin with a basic overview of homogeneous product market competition (with which you should be familiar), and an overview of estimation in these markets. We will then move to models of differentiated product demand systems. I will review basic theory and standard data forms, after which I will go on to talk about the standard approaches to demand estimation and their advantages and disadvantages. All these approaches try to deal with the problem of estimating demand when we are in a market with many, differentiated goods. Specific papers will be used to illustrate the techniques once they have been discussed.
I will expect you to remember your basic econometrics, particularly the standard endogeneity problem of estimating demand (see Working 1927 or the treatment in standard econometrics texts, e.g. Hayashi 2000 in Ch 3).
There has been an explosion in the sophistication of technique used in demand estimation over the last decade, due to a combination of advances in econometric technique, computation and data availability.
1.1 Why spend time on Demand Systems?
Many questions in IO require understanding how consumers choose among various goods and services as a function of market and individual characteristics. Though properly estimating a demand system in its own right may be an objective of interest, demand systems (and their underlying parameters) are more often than not used as an input into answering other, perhaps larger, questions. E.g., they are often used as providing the incentives for examining firm behavior (pricing, investment, product introduction, entry/exit, etc...), or computing consumer welfare from a policy change. For example...
– Example: Bresnahan 1987 Competition and Collusion in 1950s Auto Market Bresnahan wanted to examine the hypothesis that the dramatic increase in quantity (45% greater than in two surrounding years) and decrease in the price of Autos in 1955 was due to the temporary breakdown of a collusive agreement. Unlikely to be demand shock: “any explanation of all of the 1955 events from the demand side will need to be fairly fancy.” His idea was to assume that marginal costs were not varying and then ask whether the relationship between pricing and demand elasticities changed in a manner consistent with a shift from collusion to oligopolistic pricing.
3 He exploits data on P and Q for different makes of automobiles. He has about 85 models over 3 years. The “magic” in these approaches is using demand data combined with an equilibrium assumption on firm conduct to back out marginal costs, without using any cost data. We’ll come back to this later.
• Welfare impacts: to conduct welfare calculations subsequent to some market change brought about by, say, policy intervention, product introduction, or etc., one needs a well specified demand system. It allows us to quantify the “Value of Innovation”: e.g., compute consumer surplus from the introduction of a new good (e.g., minivans, CAT scans) with similar “char-acteristics” of existing ones.
• Determinants of Innovation: with a demand system, a researcher can compute predicted markups for a given good; consequently, one will understand the types of products a firm will want to produce (e.g., minivans or SUV’s, cancer drugs instead of malaria treatments).
Demand systems, in other words, help us measure the incentives for investing in new goods.
• Usually demand is important to think about various forms of comparative statics: common ones for IO researchers include pre and post merger pricing, tax incidence, monopoly vs duopoly pricing, effect of predatory pricing policies, impact of new product introductions, etc.
• In IO and Marketing, there is considerable work on advertising which usually involves some demand estimation. This about policy questions of direct-to-consumer drug adverting, or advertising as a barrier to entry. Furthermore, carefully specified demand systems can assist with decomposing the mechanisms or channels through which various advertising (and other) effects work. E.g., persuasive vs. informative advertising.
• Understanding the cross-price elasticities of good is often crucial to “preliminary” issues in policy work, such as market definition in antitrust cases. Also, they inform determinants of market power: should we allow two firms to merge? Is there collusion going on in this industry (unusually large markups)? Cross-price elasticities are one input into this equation.
(We will talk a bit (later) about the myriad antitrust applications of demand models. Note that this is the largest consumer of Ph.D’s in Empirical I.O. by a long shot!) • The tools used in demand estimation are starting to be applied in a variety of other contexts (e.g., political economy, development, education, health...) to confront empirical issues, of there is likely to be some intellectual arbitrage for your future research.
2 Brief Theory Review

Before diving into estimation, it is useful to begin with a few basic theoretical models of quantity and price determination in markets. Predictions in these models will depend on characteristics of demand, which we will in turn discuss ways of estimating.
2.1 Homogenous Goods.
We begin with simple homogenous good markets provided by an oligopolistic set of firms competing in quantities (Cournot) or price setting (Bertrand).
2.1.1 The Cournot Model.
Assume that:
• firms choose quantities, and
• a Nash equilibrium in quantities results,
where:
• $p(Q)$ is the inverse demand curve,
• $Q = \sum_j q_j$, where $q_j$ is the output of firm $j$,
• $C_j(\cdot)$ and $mc_j(\cdot)$ are the total and marginal cost functions.
Note.
We now work explicitly with different mc curves for each firm, but sufficing with an “aggregate” approximation to the demand curve. Provided that is a good enough approximation, it will not hurt the implications of the model we do investigate (like price, efficiency in production)...
However it will not allow us to get at the implications of the equilibrium on the distribution of consumer utilities and welfare.
Admitting heterogeneous marginal costs, however, will enable us to study "distributional" implications of the equilibrium on the supply side; for e.g., how efficient is the allocation compared to the least cost allocation of production for a given total quantity. This is a question which underlies many regulatory issues (e.g. increasing returns or large fixed cost justifications for monopoly and the output allocations that are generated by the ensuing regulations...)

The profit function for firm $j$ is
$$\pi_j(q_j) = p(Q)\,q_j - C_j(q_j).$$
Assuming differentiability, and that all firms produce positive quantities, the f.o.c. for a Cournot-Nash equilibrium provide a system of J equations in J unknowns
$$p(Q) + q_j \frac{\partial p}{\partial Q} - mc_j(q_j) = 0 \quad \forall j.$$
Of course even if q > 0, this is only a necessary condition for an equilibrium. For the solution to this system to truly be an equilibrium the choice of $q_j$ must also satisfy the second order condition for each agent
$$2\frac{\partial p}{\partial Q} + q_j \frac{\partial^2 p}{\partial Q^2} - \frac{\partial mc_j}{\partial q_j} < 0.$$
Jointly sufficient conditions for this are that: • marginal revenue slopes down, and • marginal cost slopes up.
Of course one can get by with weaker conditions.
Question. Assume m.c. is constant and that the demand curve has a constant elasticity. What then is a sufficient condition for the s.o.c. to be satisfied?
Note that both the necessary and sufficient conditions change when either; • there is a fixed or sunk cost of production, for either of these may provide a reason for a plant to not close down but still not produce (i.e. to "mothball" the plant), even if there is a cost to mothballing (a cost of maintaining the plant and the cost of re-entering in a future year).
I.e. in these cases we cannot assume a priori that q > 0 for all active plants, and we have to take account of corner solutions.
• there are capacity constraints, in which case the derivative of the cost function doesn’t exist at the capacity constraint. We cannot perform the experiment of increasing q above capacity, so we cannot get the derivative from the right. We can get the derivative from the left and what we know is that for us to produce at the capacity constraint it must be positive.
These two cases are cases when the cost function is non-differentiable at a point of interest (either at zero, or at the capacity constraint), and any other nondifferentiability will cause related problems; it is just that these two are often relevant in applied work. We come back to them presently, but first we consider the implications of this model when the cost functions are "sufficiently smooth".
What are the efficiency implications of the allocation?
• Among the set of interior firms, larger firms have lower marginal cost at the produced quantities. So if you believe your data is being generated by a homogeneous product quantity setting model, and you believe that all firms are at an interior equilibrium point, then, provided your data show a large variance in output across firms (and recall that almost all data sets do), you must believe that some firms have much lower marginal costs and much higher markups at the outputs they produce than do other firms. This in turn implies that from the point of view of allocating a fixed amount of production we can do better than the market by allocating more output to larger firms (a "social planner" would equate marginal costs).
• Compare the "smooth" allocation to the situation in either competition or monopoly. We think that the allocation of a fixed amount of output would be worse if that output were allocated according to a Nash equilibrium than it would be if, say, a multiplant monopolist were allocating the output. Where the gains from competition come in is not from the efficiency of the allocation of a given amount of output, but rather from the quantity of output produced (which we expect to be larger under competition in a Nash equilibrium).
• Thus consolidation of the industry (through mergers or buyouts) is likely to have two effects:
– there is an additional incentive to increase price, thereby decreasing consumer surplus;
– it is likely to improve the productivity or efficiency of the production allocation, which, all else equal, increases producer surplus.
You can find functional forms where either the first or second dominates in calculating total surplus. Indeed you can find functional forms where the gains in productivity actually are enough to overcome the incentive to increase price, and price will fall.
• The price term is often exhibited as a percentage markup, which in turn equals our Lerner index; i.e.
$$p(Q) = mc_j - q_j \frac{\partial p}{\partial Q} \;\Rightarrow\; L_j = (p - mc_j)/p = s_j/\eta$$
where $s_j$ is the market share, and $\eta$ is the absolute value of the demand elasticity. Consequently the average price-cost margin is just
$$\sum_j s_j (p - mc_j)/p = \sum_j s_j^2/\eta = H/\eta,$$
where H is the Herfindahl index of concentration. (A small numerical check of these relationships appears at the end of this list.)
• If there is a unique equilibrium, the first order conditions define a best reply function for each firm to each vector of rivals' outputs; these are generally written as $q_j = r_j(q_{-j})$.
However even if there is not a unique equilibrium, once the competitors' play is fixed the response of the firm in question is "generically" unique. This fact will be quite useful when we move to empirical work. We come back to other properties of the reaction function below.
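To make the markup–concentration relationship above concrete, here is a minimal numerical sketch (not from the notes; the linear inverse demand and cost numbers are purely illustrative). It solves a three-firm Cournot equilibrium with heterogeneous constant marginal costs and checks that each firm's Lerner index equals $s_j/\eta$ and that the share-weighted average margin equals $H/\eta$.

```python
import numpy as np

# Linear inverse demand p(Q) = a - b*Q with heterogeneous constant marginal costs (illustrative).
a, b = 100.0, 1.0
c = np.array([10.0, 20.0, 30.0])          # marginal costs of the 3 firms
J = len(c)

# Cournot-Nash with linear demand has a closed form:
# FOC: a - b*Q - b*q_j - c_j = 0  =>  summing over j pins down Q.
Q = (J * a - c.sum()) / (b * (J + 1))
q = (a - b * Q - c) / b                    # firm-level outputs (all interior here)
p = a - b * Q

s = q / Q                                  # market shares
eta = p / (b * Q)                          # absolute demand elasticity at equilibrium
lerner = (p - c) / p

print("shares     :", s.round(3))
print("Lerner L_j :", lerner.round(3))
print("s_j / eta  :", (s / eta).round(3))            # equals L_j firm by firm
print("avg margin :", round((s * lerner).sum(), 3))
print("H / eta    :", round((s**2).sum() / eta, 3))  # equals the average margin
```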
2.1.2 Homogeneous Products Bertrand Competition
The next analysis of the homogeneous product model was by Bertrand. Noting that firms often seem to set prices, he assumed price competition rather than quantity competition. The model then has properties which do not seem to square with reality. As a result it is a model which is not really used in empirical work, except in modified form. Still it was a useful theoretical device as it generated both questions and insights that ended up pointing the way towards a series of developments. Moreover it is used repeatedly in applied theory papers because of the simplicity of the equilibria it generates.
The standard Bertrand homogeneous product model with two agents (and it will become obvious how to generalize to a larger number of agents) has D(p) being total demand, and if $D_1(p_1, p_2)$ is the demand for the first firm's product given both prices we have
$$D_1(p_1, p_2) = \begin{cases} D(p_1) & \text{if } p_1 < p_2,\\ \tfrac{1}{2}D(p_1) & \text{if } p_1 = p_2,\\ 0 & \text{if } p_1 > p_2.\end{cases}$$
If there are constant costs, π1(p1, p2) = D1(p1, p2)(p1 −c).
Note that viewed as a function of $p_1$, $D_1(\cdot)$ and hence $\pi_1(\cdot)$ has a jump in it at $p_1 = p_2$. Thus one cannot analyze necessary conditions as "zero derivative conditions", and to analyze Nash equilibrium behavior we have to just compute profits at each $p_1$ given $p_2$ and figure out which price maximizes profits. If you run through the reasoning here you will see that the only Nash equilibrium is $p_1 = p_2 = c$.
(It is easy to show it is an equilibrium; to show that nothing else can be, let one firm have a price higher than c and show that the second firm's optimal response is to cut price....).
There are strong implications of this model.
• First it says that provided there are two firms in the industry with the lowest cost, the industry will act as if it is a “price taker” in the sense that the equilibrium is p = mc (at that cost).
• With different, but constant, costs, there is a single producer but it produces at a price just under the cost of the second lowest cost producer. So now we have production efficiency, but we are producing below the total surplus maximizing quantity. The extent of the deviation between price and minimal marginal cost depends on how close the most efficient rival's cost is to the efficient cost.
The results are suspect for many reasons.
• First and foremost, if there are any sunk or any fixed costs, this industry will never see entry (or production), by more than one plant. Thus the industry should be a monopoly, and the monopolist should not charge p = mc for it maximizes current profit by setting the monopoly price, and its ability to deter entry depends only on its costs; i.e. provided the firm’s costs can be revealed in a verifiable way and there are no collusive possibilities, there will be no entry no matter what price it sets. So we ought not ever see this equilibrium.
• Similarly we often think we observe small differences in prices existing without one firm dropping to zero demand. That is the discontinuity in the demand curve seems not to be true empirically.
There are a number of ways of getting around these unrealistic predictions, and we list some of them here. Note however that they do not necessarily get around all the problems, at least not without further assumptions1.
• Differentiated Product Models.
The goods marketed are then not perfect substitutes for one another (at least not to all consumers), so when one firm decreases its price below its competitor's price not all consumers jump to it. We will discuss this in quite a lot of detail, as it has become the dominant form of analysis in empirical work on consumer goods markets, and to a lesser extent on producer goods markets also (many of these have relatively homogeneous goods, think industrial chemicals, but location and the costs of transportation are differentiating factors).
• Prices have rigidities (i.e. they cannot be adjusted too quickly). Simplest case: two firms, each must hold price fixed for two periods, and firms move in alternating periods. We are then in a dynamic game, where we make a price choice for this period and it determines both my price, and indirectly, the price of my competitor in the next period. We come back to this when we discuss dynamic games, where we show that there can be more than one equilibrium; one we will develop is an Edgeworth cycle ("wars of attrition"), another is kinked demand curves; these solutions are discussed in the Maskin Tirole articles.
1E.g. it is easy enough to write down differentiated product models wherein a Bertrand equilibrium does not exist.
• Collusion (richer strategy spaces). Begins in the framework of "repeated" games; Green and Porter, Abreu, Pearce and Stacchetti, .... We set a price higher than mc and we enforce this price by punishing a firm for deviating from this price in future periods. See our analysis of collusion below.
• Capacity constraints, or more generally, increasing marginal cost. We discuss this briefly here.
Two Firms w/ Capacity Constraints, Competing a la Bertrand.
Consider the following solution to a two-firm model with capacity constraints. Of course if the capacities are never binding (capacity greater than D(c) for both firms), then the capacity constraints will not change the nature of the equilibrium. So assume that the capacity constraint could be binding for at least one firm, say $q_1 \le \bar{q}_1 < D(c)$.
To go further we have to respecify demand.
The reason is that if now one firm undercuts the price of its competitor, and that firm has an effective capacity constraint, then even though $p_1 < p_2$, $D_1(p_1, p_2) \neq D(p_1)$ (since $D(p_1)$ might be greater than $\bar{q}_1$).
What we have to do to proceed is specify who gets the low cost good when not everyone can; i.e. we need a rationing mechanism. This is because the residual demand faced by the higher priced firm will depend on precisely which consumers get the lower priced good. Different rationing rules have been introduced in the literature, though the one that seems most popular is that higher valuation consumers get the good first2. What this effectively does is "lop off" the first $\bar{q}_1$ consumers. So shift the vertical axis of the demand curve to $q = \bar{q}_1$, relabel the horizontal axis "zero" at that point, and call the demand curve which results the residual demand curve of the higher priced firm.
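A minimal sketch of the rationing logic just described, under an assumed linear market demand D(p) = a − bp and a hypothetical capacity for the low-priced firm: with "efficient" (high-valuation-first) rationing, the higher priced firm faces the market demand shifted left by that capacity.

```python
import numpy as np

# Assumed linear market demand D(p) = a - b*p; firm 1 has capacity kbar1 and the lower price.
a, b = 100.0, 1.0
kbar1 = 30.0

def market_demand(p):
    return np.maximum(a - b * p, 0.0)

def residual_demand(p2):
    # "Efficient" rationing: the kbar1 highest-valuation consumers buy from firm 1,
    # so firm 2's residual demand is the market demand shifted left by kbar1.
    return np.maximum(market_demand(p2) - kbar1, 0.0)

# Firm 2's profit-maximizing price against the residual demand (constant mc = c),
# ignoring for the moment the option of undercutting firm 1:
c = 10.0
p2_star = ((a - kbar1) / b + c) / 2.0
print("price against residual demand:", p2_star)
print("residual demand at that price:", residual_demand(p2_star))
```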
Now assume firm 1 plays $p_1$ where $D(p_1) > \bar{q}_1$ (so the first firm cannot supply market demand at this price). Firm 2 has two options; it can play a price less than $p_1$ or play a price greater than $p_1$. If greater than $p_1$ it will play a price that maximizes against its residual demand curve (at least if it has sufficient capacity). Is this a "Nash" equilibrium? We have to check:
• conditional on firm 1's price, whether firm 2 wants to play a price below firm 1's (to do so we: give firm 2 the whole demand curve or its capacity, whichever is smallest; have the firm choose the profit maximizing price conditional on it being lower than $p_1$; and compute the resultant profits), and
• conditional on firm 2's price, whether firm 1 has an incentive to play a higher price (it would never play a lower price since that would generate less profits).
Depending on the shape of the demand curve and the capacity constraint, you can get no $(p_1, p_2)$ equilibrium, a unique such equilibrium, or more than one such equilibrium.
Kreps and Scheinkman (1983, Bell Journal) extend the analysis into a full information two stage game; in the first stage you choose capacities, and in the second stage you choose prices conditional on capacity. They use the rationing rule above, constant marginal costs, a fixed cost per unit of capacity built, and a concave demand curve. They use a clever argument to show that the equilibrium in the two stage game is the equilibrium from the one stage Cournot game; thereby providing some support for Cournot in a world where it looks like firms set prices.
To get a bit of the intuition note that no matter the quantities chosen in period one, the Nash equilibrium price vector in period 2 is $p = D^{-1}(q_1 + q_2)$ provided this is greater than mc. They would never build capacity such that this would be less than mc, as it would provide no benefits; i.e. they would not sell at a price less than marginal cost. If price were higher than this, firm 1 could lower its price by ϵ and receive an increment in profits of all the quantity it could provide times ≈ p (losing only ϵ times its initial sales, and this can be made arbitrarily close to zero). If price were lower than this, price could be raised with no loss in quantity. Since the Nash price is the price determined by throwing all the capacity in the market, this is essentially a quantity setting game, and one can show that a Nash equilibrium in quantity would result (i.e. we play as if the cost curve was the cost of capacity plus c and play quantities that maximize the profits from this rule).
2Another rationing rule is to assume that each agent gets an equal fraction of what its demand would have been were price at the low level.
The Kreps–Scheinkman assumptions are extreme. In particular a marginal cost curve which is constant up to the capacity constraint, and has infinite slope thereafter, is questionable. The basic idea is that something about "scale" is determined earlier on, and though it may be changed later, the change is costly (in terms of rising marginal cost or a cost of inventory). This is true in many industries. For example in autos the firms decide which plants have stamping machines for which vehicles early on. They then determine how many shifts at the plants as information on demand rolls in. For smaller changes in demand, they will use (overtime)... The adjustments are discontinuous, but respond to market conditions.
Of course at the same time as they are modifying production they are modifying price, and there is movement back and forth, with “days of inventory” taking unexpected shocks.
That is, the actual interactions between controls and equilibrating forces are, not surprisingly, more complicated than our simple models indicate.
A Further Note on Equilibrium Notions.
The Nash in price and Nash in quantity concepts have been the standard tools of static applied analysis. As noted they are not rich enough for many applied situations, particularly those where some form of dynamics comes into play. Indeed though, as we will see, our models do incredibly well in analyzing the distribution of prices in a market, we do much less well in analyzing the movements in price over time. This generates all sorts of mini literatures, like the literature on exchange rate pass through, or the literature on “sticky” prices.
When facing a particular market setting one should keep an open mind to how to model the price setting mechanism, and analyze its implications, as this is often a rich area for research. A good example of where relaxing standard assumptions can throw light on market outcomes is a recent paper by Leslie and Sorensen (forthcoming ReStud) which investigates concert ticket re-sale (there is re-sale in many other markets as well). "Scalping" was illegal in many jurisdictions, at least until recently, which seems like a strange response to a mutually beneficial relationship. The worry stemmed from the fact that were re-sale allowed, rent seeking activity in the primary market would diminish the welfare generated by the event and distort investment incentives. This has to be placed against the potential increase in welfare from trade that allows individuals to re-optimize given new information and the heterogeneity in the value of time in going on the primary market. To evaluate the impacts of re-sale they have to compute equilibria with and without re-sale. In the equilibria with re-sale there are brokers who enter the market not because they want to view the event, but rather to purchase early and sell later at a markup. Notice that the different institutions generate different producer surplus and consumer surplus, as well as distributive effects.
We note that the institutions for re-sale of tickets to entertainment events have changed recently with an increasing number of web sites for ticket re-sale, and an increase in the use of auctions in the primary market. The implications of this are also being explored (e.g., Budish and Bhave, working paper).
2.1.3 Strategic Complements and Strategic Substitutes: The Cournot Model.
An often asked question is the following. When something changes in the environment determining the profitability of the actions of one firm, and as a result that firm, say, increases its control (in this case quantity), will other firms respond by increasing or decreasing their control? If another firm reacts by increasing its control we say the controls of the two firms are strategic complements; if that firm reacts by decreasing its control, we say the two firms' controls are strategic substitutes. These terms were introduced by Bulow, Geanakoplos, and Klemperer (1985, JPE) who also discuss several applications.
Whether or not controls are strategic complements or substitutes in a given situation has important implications for the likely impacts of just about any environmental change. For example, when there is a merger and, as a result of the change in incentives generated by the change in ownership, we think the merged firms will decrease their quantities, the question of the impact of the merger on price, and hence on consumer welfare, will depend critically on what the other firms (i.e. the firms not in the merger) will do in response to the decrease in quantities of the merged firms. If they also decrease their quantities, the impact on price is likely to be even more adverse; if they increase their quantities they will ameliorate the impact of the merger on prices. Alternatively when there is a tariff (or a voluntary export restraint, or a tax) which affects the costs and hence the control (quantities or prices) of one competitor, the impact on prices as a whole will depend on the response of the other competitors to the change in the control of the competitor whose costs are affected.
A couple of points should be kept in mind when dealing with these concepts; • Though to find the answer to the question of whether two firms' controls are strategic complements or strategic substitutes we would have to solve, say, J equations (the price equations) in J unknowns (the prices), the strategic complement/substitute concept is a pairwise concept.
I.e. two firms in one market may have their static controls being strategic complements with respect to each other, whereas another couple of firms in the same market may have controls which are strategic substitutes.
• Also two firms can be strategic complements at one distribution of costs and set of demand conditions, and strategic substitutes at another. That is the concept is specific to a particular equilibrium configuration, and can change over configurations.
As a result whether controls are strategic complements or strategic substitutes is a matter of functional forms and the equilibrium being played; in many cases then it can not be answered without some empirical work. “Intuitive” generalizations from simpler models can easily be wrong (we come back to this when we consider Nash in prices equilibrium, where simple functional forms indicate that prices are strategic complements, but more realistic functional forms often indicate that this is not so). Things get even more complicated in dynamic models where the question is often quite important but even simple functional forms rarely generate an analytic result. E.g.
Wei Tan's (2006; Review of Industrial Organization) analysis of the impact of the reduction in advertising forced by the Master Settlement Agreement on cigarette demand by minors (there was a ban on advertising that was directed at minors). The paper's claim is that advertising and prices are strategic complements: advertising goes down, so does price, and this increases demand by minors disproportionately since they are disproportionately price sensitive.
Consider two rivals setting controls, say (x1, x2) (these are usually either price or quantity in static models), and assume that equilibrium is Nash.
Profits of the two firms are given by $\pi_i(x_i, x_{-i}; \cdot)$. The question of concern is whether agent i increases or decreases its x in response to an increase in the x of its rival. Assume that both before and after the change that induced the rival's price increase, all choices are interior and satisfy a f.o.c., and that the market equilibrium is always unique.
The f.o.c. both before and after the change in the rival's control is
$$\frac{\partial \pi_i}{\partial x_i} = 0,$$
while the second order condition ensures
$$\frac{\partial^2 \pi_i}{\partial x_i^2} < 0.$$
Totally differentiating the f.o.c. we get
$$\frac{\partial^2 \pi_i(x_i, x_{-i})}{\partial x_i^2}\, dx_i + \frac{\partial^2 \pi_i(x_i, x_{-i})}{\partial x_i \partial x_{-i}}\, dx_{-i} = 0.$$
Consequently
$$\frac{dx_i}{dx_{-i}} = \left[\frac{\partial^2 \pi_i(x_i, x_{-i})}{\partial x_i \partial x_{-i}}\right] \Big/ \left[-\frac{\partial^2 \pi_i(x_i, x_{-i})}{\partial x_i^2}\right].$$
Since the denominator has to be positive for the initial x choices to be a Nash equilibrium, the sign of the l.h.s is the sign of the numerator; or the strategies are strategic substitutes at a point if the cross partial of the profit function is negative at that point, and are strategic complements if that cross partial is positive.
Strategic Complements and Strategic Substitutes; Homogeneous Product Market with Nash in Quantities Equilibrium.
The profit function is $\pi(\cdot) = p(Q)q - c(q),$
which gives the f.o.c.
$$p(Q) - mc(q) + p'(Q)q = 0.$$
The needed cross partial is then p′(Q) + p′′(Q)q, which, unless p(·) is exceptionally convex, is negative. This generates the standard intuition that in homogeneous goods models for which the appropriate equilibrium concept is Cournot, quantities are strategic substitutes. For markets which are nearly homogeneous goods markets we expect the actions of firms which are outside of the merger to partly counterbalance the effect of the merger on price.
Note that p(·) cannot be too convex, else the original point was not an equilibrium. I.e. the second order condition for the initial choice to be an equilibrium was
$$2p'(Q) + p''(Q)q - \frac{\partial mc_j}{\partial q_j} < 0.$$
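As a quick numerical illustration of the cross-partial logic (made-up functional forms evaluated at an arbitrary point, not an equilibrium calculation), one can approximate $\partial^2 \pi_i/\partial q_i \partial q_j$ by finite differences: with linear demand it is negative (strategic substitutes), while with a sufficiently convex inverse demand it can turn positive.

```python
import numpy as np

def profit_i(qi, qj, inv_demand, c=10.0):
    # Firm i's profit with constant marginal cost c and homogeneous product.
    Q = qi + qj
    return inv_demand(Q) * qi - c * qi

def cross_partial(qi, qj, inv_demand, h=1e-4):
    # Central finite-difference approximation of d^2 pi_i / (d qi d qj).
    f = lambda a, b: profit_i(a, b, inv_demand)
    return (f(qi + h, qj + h) - f(qi + h, qj - h)
            - f(qi - h, qj + h) + f(qi - h, qj - h)) / (4 * h * h)

linear     = lambda Q: 100.0 - Q             # p'' = 0
isoelastic = lambda Q: 200.0 * Q ** (-2.0)   # very convex inverse demand

print(cross_partial(20.0, 20.0, linear))      # negative: strategic substitutes
print(cross_partial(2.0, 2.0, isoelastic))    # positive here: the sign can flip with enough convexity
```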
Questions.
• Assume constant marginal costs, and show that there are degrees of convexity that would generate equilibria in which the quantities are strategic complements (recall that second order conditions must be satisfied at an equilibrium).
• Assume an isoelastic demand curve and initially constant and equal marginal costs. Work out equilibrium price responses to a tariff which only affects one of the firms' marginal costs (i.e. there is a domestic and a foreign producer).
The discussion thus far has dealt with the concept of strategic complements and strategic substitutes in the context of smoothly differentiable static profit functions with unique equilibria.
This concept has been extended in many ways; • Allow for corners (so a plant can be mothballed). Here the work uses the concept of super-modularity which is an extension of “cross-partials” to situations where cross partials don’t exist.
• Allow for non-uniqueness.
• Allow for dynamics.
See Milgrom and Shannon (Econometrica, 1994), and the literature cited there. The concept of strategic complements and substitutes becomes quite important in dynamics. For example, the question of whether capacity is a strategic complement or substitute is playing a big role in the debate on whether the response to entry by an incumbent airline is to increase its capacity. If this is not the “natural response”, i.e. if this response is only profitable were the entrant to exit, then the DOJ would call the observation that the incumbents did increase capacity, “predatory”, and there would be a case to be made. If this were the optimal response even if the entrant were to stay, say because the demand curve for the incumbent is now more elastic, then there would be no case against the incumbents. However to prove whether a pair of dynamic strategies are strategic complements or substitutes is often quite challenging.
2.2 Models of Product Differentiation
We now move away from homogeneous goods towards markets with product differentiation. This part will be much briefer.
Broadly speaking, goods can be seen as being differentiated along the following dimensions:
• Address vs. Non-Address Models:
– Address model: Consumers prefer goods close to their "address" or location. In these types of models, an address can represent a physical location or a theoretical location in some sort of ideal space (e.g., characteristics). [E.g., local competition]
– Non-address model: Each product is a substitute for every other good in the market. [E.g., global competition]
• Vertical vs. Horizontal Models:
– Horizontal Differentiation: Consumers differ in preferred products at the same price; i.e., at the same price more than one good may be sold. Captured by having consumers differ in terms of tastes and/or preference for variety.
– (Pure) Vertical Differentiation: Consumers agree on which products are better/more preferred than others; i.e., at the same price, only one good is sold. All consumers rank products' non-price attributes similarly, but different goods may be sold as consumers may differ in how they trade off "quality" with price.
Horizontal Differentiation Example (Hotelling 1929): • Consumers distributed uniformly along a “linear city” of length 1. Consider two goods located at the end points of the city (x = 0 and x = 1). Consumers have transportation cost t per unit of length they must travel.
• Consumer with coordinate x derives utility (if he consumes) of
$$U = \begin{cases} s - p_1 - t x & \text{if he buys from store 1,}\\ s - p_2 - t(1 - x) & \text{if he buys from store 2.}\end{cases}$$
• There is an $\tilde{x}$ who is indifferent between stores: $\tilde{x}(p_1, p_2) = (p_2 - p_1 + t)/(2t)$.
• As long as prices are "not too high" and the price difference between the two shops doesn't exceed the transportation cost t along the city, this generates demand $D_1(p_1, p_2) = N\tilde{x}$ and $D_2(p_1, p_2) = N(1 - \tilde{x})$. See also Salop (1979).
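A minimal sketch of the Hotelling demands above (parameter values are arbitrary, and the interior case assumes the gross utility s is high enough that the market is covered):

```python
import numpy as np

# Hotelling linear city of length 1, firms at the endpoints, N consumers,
# transport cost t per unit of distance.
N, t = 1000, 2.0

def demands(p1, p2):
    x_tilde = (p2 - p1 + t) / (2 * t)      # indifferent consumer
    x_tilde = np.clip(x_tilde, 0.0, 1.0)   # corner cases: one firm serves the whole market
    return N * x_tilde, N * (1 - x_tilde)

print(demands(5.0, 5.0))   # equal prices -> the market is split evenly
print(demands(4.0, 5.0))   # the cheaper firm 1 serves more than half the city
```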
Vertical Differentiation Example:
• Consumers buy one or zero units of a good. Goods are characterized by a quality index s. Utility of a consumer is given by:
$$U = \begin{cases} \theta s - p & \text{if he buys a good with quality } s \text{ for price } p,\\ 0 & \text{otherwise,}\end{cases}$$
where θ is a taste parameter distributed according to density f(θ) with CDF F(θ) with support [0, ∞).
• Two interpretations: either consumers have different tastes for quality, or they have different MRS between income and quality.
• If there is only one good, then demand for the good is simply: $D(p) = N[1 - F(p/s)]$ where N is the mass of consumers.
• With two goods, one can show:
– If $s_2/p_2 \ge s_1/p_1$ (good 2 delivers more quality per dollar than good 1), only good 2 will be consumed, and demand is as above (with one good);
– Otherwise, let $\tilde{\theta} \equiv (p_2 - p_1)/(s_2 - s_1)$. All consumers with $\theta \ge \tilde{\theta}$ buy good 2, those with $\theta \in [p_1/s_1, \tilde{\theta})$ buy good 1, and the rest don't consume. Hence, $D_2(p_1, p_2) = N[1 - F(\tilde{\theta})]$ and $D_1(p_1, p_2) = N[F(\tilde{\theta}) - F(p_1/s_1)]$.

2.3 Differentiated Products Bertrand Pricing
Just to recap: assume firms are differentiated and compete a la Bertrand in prices. The pricing rule of a monopolist is to maximize profits (assuming constant marginal costs for now):
$$\pi_{jt} = (p_{jt} - c_{jt})\, q_{jt}(p) \quad (1)$$
where p is the vector of all prices. The F.O.C. for each firm of this problem is:
$$\frac{\partial \pi_{jt}}{\partial p_{jt}} = q_{jt}(p) + (p_{jt} - c_{jt})\frac{\partial q_{jt}(p)}{\partial p_{jt}} = 0$$
$$\Rightarrow\; p_{jt} = c_{jt} - q_{jt}\frac{\partial p_{jt}}{\partial q_{jt}} = c_{jt} - p_{jt}\frac{1}{\eta_{jj}} \;\Rightarrow\; (1 + 1/\eta_{jj})\, p_{jt} = c_{jt}.$$
Thus, if we could estimate the price elasticity, observe prices, and assume that firms optimally set prices, we could "infer" or recover marginal costs $c_{jt}$.
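A small illustration of this inversion with hypothetical numbers (the own-price elasticity $\eta_{jj}$ is assumed to have been estimated elsewhere):

```python
# Recovering marginal cost from an observed price and an estimated own-price elasticity,
# using the single-product Bertrand FOC above:  (1 + 1/eta_jj) * p = c.
def implied_mc(price, own_elasticity):
    # own_elasticity should be negative and below -1 for an interior optimum
    return price * (1.0 + 1.0 / own_elasticity)

print(implied_mc(10.0, -2.0))   # 5.0 : implied Lerner index (p - c)/p = 0.5 = -1/eta
print(implied_mc(10.0, -5.0))   # 8.0 : more elastic demand -> smaller markup
```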
3 Approaches to demand estimation
Approaches break down along the following lines:
• single vs. multi-product
• within multi-product: whether you use a product space or characteristic space approach
• representative agent vs. heterogeneous agent
• other breakdowns: continuous vs. discrete choice, horizontal vs. vertical, dynamic vs. static...
We will primarily focus on multi-product, demand systems with heterogeneous agents. We will cover both product and characteristics space approaches. We will focus on static settings, and later discuss methods for dealing with dynamics.
4 On Demand Estimation
4.1 Data... (briefly)
As always, the credibility and success of empirical work will hinge on the data that is leveraged.
Depending on the industry and the application, data may be plentiful or sparse; it is always preferable to rely on richer data (when available and accessible at reasonable cost (both time and financial)) to inform our estimates than to implicitly assume them through structure or assumptions.
That said, research is all about navigating these tradeoffs (and being explicit and honest about them).
To anchor discussion, the data that we should have in mind when discussing demand estimation tends to look as follows:
• The unit of observation will be the quantity of a product purchased (say 12 oz Bud Light beer) together with a price for a given time period (say a week) at a location (Store, ZIP, MSA, state, country...).
• You will generally need to take a stance on the relevant market and set of products within a consumer’s choice set; in addition, there typically is an outside good (e.g., non purchase) that you will need to control for (either with data or via assumptions).
• There is now a large amount of consumer-level purchase data collected by marketing firms (for instance the ERIM panel used by Ackerberg RAND 1997 to look at the effects of TV ads on yogurt purchases). However, the vast majority of demand data is aggregated at some level. As we will discuss, less-aggregated data tends to allow us to estimate more detailed (ambitious) models.
• Note that you often have a lot of information: you can get many characteristics of the good (alcohol by volume, calories, etc.) from the manufacturer or industry publications or packaging since you know the brand. The location means we can merge the demand observation with census data to get information on consumer characteristics. The date means we can look and see what the spot prices of likely inputs were at the time (say gas, electricity etc).
• Typical data sources: industry organizations, marketing and survey firms (e.g. AC Nielsen), proprietary data from manufacturers; marketing departments have some scanner data online (e.g. Chicago GSB).
• The survey of consumer expenditures also has some information on person-level consumption on product groups like cars or soft-drinks.
• More often than not, data will require some ingenuity, luck, and a lot of elbow grease to obtain.
Theory can help fill in some holes, but at the end of the day, good data (and variation!) is necessary for a convincing paper.
4.2 Basics: Endogeneity of Prices and Other Definitions
Consider a market equilibrium in a competitive market with the following components:
Aggregate Demand.
Say it takes a constant elasticity form, i.e.
$$\ln(Q_n) = x_n\beta - \alpha \ln(p_n) + \epsilon_n$$
where n indexes markets, x are observed and ϵ are unobserved (by the econometrician) factors that cause differences in demand at a given price. E.g.: parameters of the income distribution, prices of substitutes or complements, environmental factors that cause differences in the demand for the good,...
Aggregate supply.
$$mc_n = w_n\gamma + \lambda Q_n + \omega_n$$
where w are observed and ω are unobserved (by the analyst) factors that cause differences in marginal cost. The marginal cost curve is the marginal cost of the market maker; it need not be the true social marginal cost.
Equilibrium.
We assume the market is in equilibrium, i.e. demand = supply, or that the auctioneer sets price at a level where the quantity it induces equates demand and supply: $p_n = mc_n$.
Note that under an auctioneer interpretation, this assumes that he knows (ϵ, ω). More generally there often are variables that are either observed by all agents, or revealed while finding the equilibrium price, that we do not have good measures of in our data sets.
Keep in mind that: • if there are differences in ϵ or in ω that are not known by the ”auctioneers” (i.e. not incor-porated in price) then there can be excess demand or supply. You can introduce that into your model, but you need a way of dealing with it. In many markets you could introduce inventories (though then you might want to add dynamics) or a rationing system. One of the important facts about electricity generation is that it is very hard (though not impossible) to store energy, and this rules out inventories. What the market maker does in electricity generation is have a special reserve market where the ISO pays a “holding” fee to generators, and can bring them up or down from a central computer to make sure the market balances at all times.
• we have simplified by assuming that last period's price does not affect either marginal cost or demand (in keeping with the simple static framework). As noted in the first lecture there are many reasons why it might, but this would put us into a world where demand or supply today depends on past, and perceptions of future, prices. I.e. a world where to analyze the determinants of current price and quantity we need dynamics.
4.3 Single Product Demand Estimation
Let's now move away from competitive markets, and abstract from the supply side for a moment.
• Begin with one homogeneous product. Assume demand for product j in market t could be given by $q_{jt} = D(p_{jt}, X_{jt}, \xi_{jt})$, where $q_{jt}$ are quantities, $p_{jt}$ are prices, $X_{jt}$ are exogenous variables, and $\xi_{jt}$ are random shocks.
• Let's assume now demand is iso-elastic:
$$\ln(q_{jt}) = \alpha_j \ln p_{jt} + X_{jt}\beta + \xi_{jt} \quad (2)$$
so that the price elasticity is $\eta_{jt} = \alpha_j$. $X_{jt}$ could just be an intercept for now (constant term) or a vector of demand shifters. $\xi_{jt}$ is a one-dimensional unobserved component of demand.
Problem 1: Endogeneity of Prices
Recall from the monopoly discussion that we might be interested in price elasticities: doing so would allow us to use theory to perhaps recover ("infer") marginal cost by simply observing the price charged in a market.
• Suppose we are in a situation where the error term ξjt is correlated with higher prices (pjt), i.e. E(ξjtpjt) > 0.
• Let's decompose this correlation into: $\xi_{jt} = \lambda \ln p_{jt} + \epsilon_{jt}$, where $\epsilon_{jt}$ is the remaining uncorrelated part, and λ will typically be positive. Then we can put this back in:
$$\ln(q_{jt}) = \alpha_j \ln p_{jt} + X_{jt}\beta + \xi_{jt} = \alpha_j \ln p_{jt} + X_{jt}\beta + \lambda \ln p_{jt} + \epsilon_{jt} = \underbrace{(\alpha_j + \lambda)}_{\hat{\alpha}_j} \ln p_{jt} + X_{jt}\beta + \epsilon_{jt}$$
So the coefficient that we estimate, denoted $\hat{\alpha}_j$, will be biased upwards. This will lead to unrealistically low estimates of the price elasticity. We call this the simultaneity problem. The simultaneity (or endogeneity) problem is a recurrent theme in Empirical I.O.
• In I.O. we almost never get experimental or quasi-experimental data.
• Unlike what you’ve been taught in econometrics, we need to think very hard about what goes into the “unobservables” in the model (try to avoid the use of the word error term, it masks what really goes into the ϵ’s in I.O. models).
• Finally, it is a very strong assumption to think that the firm does not react to the unobservable because it does not see it – just because I don’t have the data doesn’t mean a firm doesn’t!
• Remember that these guys spend their lives thinking about pricing.
• Moreover, won’t firms react if they see higher than expected demand yesterday?
• Note: From here on, when you are reading the papers, think hard about "is there an endogeneity problem that could be generating erroneous conclusions, and how do the authors deal with this problem?"
4.3.1 Some History.
• Henry Moore's (1914) O.L.S. analysis of quantity on price (an attempt to estimate demand curves). Finds:
– Demand curves for agricultural products sloped down.
– Demand curves for manufacturing products sloped up.
• Working's (1927) pictures. How do we connect equilibrium dots?
Figure 1: Working (1929 QJE)
• Needed assumption for O.L.S. on demand: E[ϵ|x, p] = 0, or even E[ϵ·(x, p)] = 0, contradicts the model and common sense (at least if the auctioneer or the firm that is pricing knows or discovers ϵ). I.e. for this to be true there is nothing that affects demand that the auctioneer knows that the empirical analyst does not know.
• Similarly needed equation for ”supply” or price curve contradicts model • Solve for price and quantity as a function of (x, w, ω, ϵ).
• Possible Solutions: – Estimation by 2SLS, – Estimation by covariance restrictions between the disturbances in the demand and supply equation.
See any standard textbook, e.g. Goldberger(1991).
Lesson.
Thought should be given to what is likely to generate the disturbances in our models, and given that knowledge we should try to think through their likely properties.
Review: What is an instrument
The broadest definition of an instrument is as follows: a variable Z such that for all possible values of Z,
$$\Pr[Z|\xi] = \Pr[Z|\xi'],$$
but for certain values of X we have
$$\Pr[X|Z] \neq \Pr[X|Z'].$$
So the intuition is that Z is not affected by ξ, but has some effect on X. The usual way to express these conditions is that an instrument is such that:
$$E[Z\xi] = 0 \quad \text{and} \quad E[XZ] \neq 0.$$
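To fix ideas before turning to the fish-market example below, here is a small simulated sketch (all numbers invented) in the spirit of a weather instrument: OLS on an endogenous price is badly biased, while a hand-rolled 2SLS using a cost shifter that is independent of the demand shock recovers the true elasticity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
alpha_true = -1.0                     # true price elasticity of demand

xi = rng.normal(0, 1, n)              # demand shock, observed by the price setter
z  = rng.normal(0, 1, n)              # cost shifter (e.g. stormy weather), independent of xi
log_p = 1.0 + 0.5 * z + 0.5 * xi + rng.normal(0, 0.1, n)   # price responds to xi -> endogenous
log_q = 2.0 + alpha_true * log_p + xi

X = np.column_stack([np.ones(n), log_p])

# OLS: biased toward zero because cov(log_p, xi) > 0
ols = np.linalg.lstsq(X, log_q, rcond=None)[0]

# 2SLS by hand: project log_p on the instrument, then regress log_q on the fitted values
Z = np.column_stack([np.ones(n), z])
p_hat = Z @ np.linalg.lstsq(Z, log_p, rcond=None)[0]
X_hat = np.column_stack([np.ones(n), p_hat])
iv = np.linalg.lstsq(X_hat, log_q, rcond=None)[0]

print("OLS slope :", round(ols[1], 2))   # far from -1
print("2SLS slope:", round(iv[1], 2))    # close to -1
```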
[Figure 2 reproduces Table 2 and the surrounding discussion from Graddy (2006 JEP): OLS and IV estimates of the demand for whiting at the Fulton fish market, treating fish as an approximately homogeneous product and using stormy weather as an instrument for price. The OLS log-price coefficient is about −0.54 (s.e. 0.18), with or without day-of-week and on-shore weather controls; the IV estimates are roughly twice as large, −1.08 (0.48) and −1.22 (0.55), on 111 daily observations.]
216 Journal of Economic Perspectives Figure 2: Graddy (2006 JEP) 4.3.2 Representative Agent vs. Heterogeneous Agents So far we have a representative agent model; to make it a heterogenous agent model we would have to build a micro model to make sure everything aggregated nicely, and then end up estimating something that looked something like qj = Z γig (dγ) + Z αipjf (dα) + βxj + ϵj (3) 20 Where αi ∼F (α|θ) and γi ∼G (α|τ) with θ and τ to be estimated. This is called a random coefficient model. Identification of the random coefficient parameters comes from differences in the sensitivity of demand to movements in price, as the price level changes. (Think about whether the model would be identified if the demand intercept were constant across all consumers). We will come back to this.
4.4 Multi-product Systems
Now let's think of a multiproduct demand system to capture the fact that most products have substitutes for each other. Generally this would be given by the relationship q = D(p, X, ξ), where q, p, ξ are J × 1 vectors of quantities, prices, and random shocks, and X are exogenous variables. We can follow the same approach as before and assume that demand takes the following isoelastic form:
$$\ln q_{1t} = \sum_{j\in J} \gamma_{1j} \ln p_{jt} + \beta x_{1t} + \xi_{1t}$$
$$\vdots$$
$$\ln q_{Jt} = \sum_{j\in J} \gamma_{Jj} \ln p_{jt} + \beta x_{Jt} + \xi_{Jt}$$
4.4.1 Product vs Characteristic Space
We can think of products as being:
• a single fully integrated entity (a Lexus SUV); or
• a collection of various characteristics (a 1500 hp engine, four wheels and the colour blue).
It follows that we can model consumers as having preferences over products, or over characteristics.
The first approach embodies the product space conception of goods, while the second embodies the characteristic space approach (see Lancaster (1966, 75, 79)).
Product Space: disadvantages for estimation [Note that disadvantages of one approach tend to correspond to the advantages of the other]
• Dimensionality: if there are J products then we have on the order of J² parameters to estimate to get the cross-price effects alone (the γjk terms above).
– Can get around this to some extent by imposing more structure.
For example, one can use functional form assumptions on utility: this leads to ”grouping” or ”nesting” approaches whereby we group products together and consider substitution across and within groups as separate things - means that ex ante assumptions need to be made that do not always make sense. More on this later.
– Can also impose symmetry: e.g., CES demand over J products with utility given by
$$U(q_1, \ldots, q_J) = \Big(\sum_{i=1}^{J} q_i^{\rho}\Big)^{1/\rho}$$
yields demand for good k:
$$q_k = \frac{p_k^{-1/(1-\rho)}}{\sum_{i=1}^{J} p_i^{-\rho/(1-\rho)}}\, I$$
where I is the income of the consumer. Note that we now only have to estimate ρ as opposed to a number of parameters proportional to J². However, note this model implies:
$$\frac{\partial q_i}{\partial p_j}\frac{p_j}{q_i} = \frac{\partial q_k}{\partial p_j}\frac{p_j}{q_k} \quad \forall i, k, j$$
which means all goods i and k have the same cross-price elasticities with respect to good j. This is an extremely strong assumption, and imposes strong restrictions on the demand system. Though popular for analytic tractability, it is not generally used in empirical IO.
• Product space methods are not well suited to handle the introduction of new goods prior to their introduction (consider how this may hinder the counterfactual exercise of working out welfare if a product had been introduced earlier - see Hausman on Cell Phones in Brookings Papers 1997 - or working out the profits to entry in successive stages of an entry game...)
Characteristic Space: disadvantages for estimation
• getting data on the relevant characteristics may be very hard, as is dealing with situations where many characteristics are relevant
• dealing with new goods when new goods have new dimensions is hard (consider the introduc-tion of the laptop into the personal computing market) • dealing with multiple choices and complements is a area of ongoing research, currently a limitation although work advances slowly each year.
We will explore product space approaches and then spend a fair amount of time on the char-acteristic space approach to demand.
Most recent work in methodology has tended to use a characteristics approach and this also tends to be the more involved of the two approaches.
5 Product Space Approaches: AIDS Models
I will spend more than an average amount of time on AIDS (the Almost Ideal Demand System of Deaton and Muellbauer 1980 AER, which wins the prize for worst acronym in all of economics), which remains the state of the art for product space approaches. Moreover, AIDS models are still the dominant choice for applied work in things like merger analysis and can be coded up and estimated in a matter of days (rather than weeks for characteristics based approaches). Moreover, the AIDS
The main disadvantage with AIDS approaches, is that when anything changes in the model (more consumers, adding new products, imperfect availability in some markets), it is difficult to modify the AIDS approach to account for this type of problem.
• Starting point for dealing with multiple goods in product space: ln qj = αpj + βpK + γxj + ϵj • What is in the unobservable (ϵj)?
– anything that shifts quantity demanded about that is not in the set of regressors – Think about the pricing problem of the firm ... depending on the pricing assumption and possibly the shape of the cost function (e.g. if constant cost and perfect comp, versus differentiated bertrand etc) then prices will almost certainly be endogenous. In particular, all prices will be endogenous.
– This calls for a very demanding IV strategy, at the very least • Also, as the number of products increases the number of parameters to be estimated will get very large, very fast: in particular, there will be J2 price terms to estimate and J constant terms, so if there are 9 products in a market we need at least 90 periods of data!
The last point is the one to be dealt with first, then, given the specification we can think about the usual endogeniety problems. The way to reduce the dimensionality of the estimation problem is to put more structure on the choice problem being faced by consumers. This is done by thinking about specific forms of the underlying utility functions that generate empricially convenient properties. (Note that we will also use helpful functional forms in the characteristics approach, although for somewhat different reasons) The usual empirical approach is to use a model of multi-level budgeting: • The idea is to impose something akin to a “utility tree” – steps: 1. group your products together is some sensible fashion (make sure you are happy to be grilled on the pros and cons of whatever approach you use). In Hausmann et al, the segments are Premium, Light and Standard.
2. allocate expenditures to these groups [part of the estimation procedure].
3. allocate expenditures within the groups [again, part of the estimation procedure]: Molson, Coors, Budweiser and etc...
Dealing with each step in reverse order: 3. When allocating expenditures within groups it is assumed that the division of expenditure within one group is independent of that within any other group. That is, the effect of a price change for a good in another group is only felt via the change in expenditures at the group level.
If the expenditure on a group does not change (even if the division of expenditures within it does) then there will be no effect on goods outside that group.
23 2. To be allocate expenditures across groups you have to be able to come up with a price index which can be calculated without knowing what is chosen within the group.
These two requirements lead to restrictive utility specifications, the most commonly used being the Almost Ideal Demand System (AIDS) of Deaton and Muellbauer (1980 AER).
5.1 Overview This comes out of the work on aggregation of preferences in the 1970s and before. (Recall Chapter 5 of Mas-Colell, Whinston and Green) Starting at the within-group level: assume expenditure functions for utility u and price vector p look like log(e(u, p) = (1 −u) log(a(p)) + u log(b(p)) where it is assumed: log(a(p)) = α0 + X k αk log pk + 1 2 X k X j γ∗ kj log pk log pj log(b(p)) = log(a(p)) + β0Πkpβk k Using Shepards Lemma we can get shares of expenditure within groups as: wi = ∂log(e(u, p)) ∂log pi = αi + X j γij log (pj) + βi log x P where x is total expenditure on the group, γij = 1 2(γ∗ ij + γ∗ ji), P is a price index for the group and everything else should be self explanatory.
Dealing with the price index can be a pain. It can be thought of as a price index that “deflates” income. There are two ways that are used. One is the ”proper” specification log (P) = α0 + X k αk log (pk) + 1 2 X j X k γkj log (pk) log (pj) which is used in the Goldberg paper, or a linear approximation (as in Stone 1954) used by most of the empirical litterature: log (P) = X k wk log (pk) 24 Deaton and Muellbauer go through all the micro-foundations in their AER paper.
For the allocation of expenditures across groups you just treat the groups as individual goods, with prices being the price indexes for each group. Again, note how much depends on the initial choice about how grouping works.
Steps 1. Calculate expenditure share wi of each good i using prices pi, quantities qi, and total expen-diture x = P k pkqk.
2. Compute Stone price index: log P = P k wk log(pk) 3. Run regression (e.g., IV): wi = αi + X k γik log(pk) + βi log( x P ) + ξi where ξi is the error term.
4. Recover J + 2 parameters (αi, γi1, . . . , γiJ, βi) 5.2 Hausman, Leonard & Zona (1994) on Beer This is Hausman, Leonard & Zona (1994) Competitive Analysis with Differentiated Products, Annales d’Econ. et Stat.
Here the authors want to estimate a demand system so as to be able to do merger analysis and also to discuss how you might test what model of competition best applies. The industry that they consider is the American domestic beer industry.
Note, that this is a well known paper due to the types of instruments used to control for endogeniety at the individual product level.
They use a three-stage budgeting approach: the top level captures the demand for the product, the next level the demand for the various groups and the last level the demand for individual products with the groups.
The bottom level uses the AIDS specification where spending on brand i in city n at time t is given by: wi,n,t = αin + X j γij log (pjnt) + βi log yGnt Pnt + εint where yGnt is expenditure on segment G. [note the paper makes the point that the exact form of the price index is not usually that important for the results] The next level uses a log-log demand system log qmnt = βm log yBnt + X k δk log (πknt) + αmn + εmnt where qmnt is the segment quantity purchased, yBnt is total expenditure on beer, π are segment price indices and α is a constant. [Does it make sense to switch from revenue shares at the bottom level, to quantities at the middle level?] The top level just estimates at similar equation as the middle level, but looking at the choice to buy beer overall. Again it is a log-log formulation.
log ut = β0 + β1 log yt + β2 log Πt + Ztδ + εt 25 where ut is overall spending on beer, yt is disposable income and Πt is a Price Index for Beer overall, and Zt are variables controlling for demographics, monthly factors, and minimum age requirements.
Identification of price coefficients: • recall that, as usual, price is likely to be correlated with the unobservable (nothing in the complexity that has been introduced gets us away from this problem) • what instruments are available, especially at the individual brand level?
– The authors propose using the prices in one city to instrument for prices in another.
This works under the assumption that the pricing rule looks like: log(pjnt) = δj log(cjt) + αjn + ωjnt where pjnt is the price of good j in city n at time t, cjt represents nation-wide product-costs at time t, αjn are city specific shifters which reflect transportation costs or local wage differentials, and ωjnt is a mean zero stochastic disturbance (e.g., local sales pro-motions.
Here they are claiming that city demand shocks ωjnt are uncorrelated. This allows us to use prices in other markets for the same product in the same time period as instruments (if you have a market fixed effect). Often these are referred to as Hausman instruments.
This has been criticized for ignoring the phenomena of nation-wide ad campaigns. Still, it is a pretty cool idea and has been used in different ways in several different studies.
• Often people use factor price instruments, such as wages, the price of malt or sugar as variables that shift marginal costs (and hence prices), but don’t affect the ξ’s.
• You can also use instruments if there is a large price change in one period for some external reason (like a strategic shift in all the companies’s pricing decisions). Then the instrument is just an indicator for the pricing shift having occurred or not.
Substitution Patterns The AIDS model makes some assumptions about the substitution patterns between products. You can’t get rid of estimating J2 coefficients without some assumptions!
• Top level: Coors and another product (chips). If the price of Coors goes up, then the price index of beer PB increases.
• Medium level: Coors and Old Style, two beers in separate segements. Increase in the price of Coors raises πP , which raises the quantity of light beer sold (and hence increases the sales of Old Style in particular).
• Bottom level: Coors and Budweiser, two beers in the same segment. Increase in the price of Coors affects Budweiser through γc,b.
So the AIDS model restricts subsitution patterns to be the same between two products any two products in different segments. Is this a reasonable assumption?
Figure 3: Demand Equations: Middle Level - Segment Choice
Figure 4: Demand Equations: Bottom-Level Brand Choice
Figure 5: Segment & Overall Elasticities
Merger Analysis (Preview)
Recall a single firm sets price according to
$$\frac{p_1 - mc_1}{p_1} = -\frac{1}{\eta_{11}}$$
Imagine the firm owns goods j = 1 . . . m. Then the first order condition for the firm will be, for each j:
$$\frac{p_j}{\sum_{k=1}^{m} p_k q_k}\frac{\partial \pi}{\partial p_j} = s_j + \sum_{k=1}^{m} \frac{p_k - mc_k}{p_k}\, s_k\, \eta_{kj} = 0$$
HLZ consider a hypothetical merger between two premium beers, Labatt and Coors. They find post-merger prices do not rise by too much – Coors' price is constrained by Budweiser, and Labatt's by Molson (another Canadian import). Without the premium beers constraining their prices, the estimates predict post-merger prices would rise by > 20%.
We will come back to these types of analysis later.
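Here is a minimal numerical sketch of that logic using an assumed linear demand system with symmetric cross effects (not the AIDS system HLZ actually estimate): stacking the multiproduct first order conditions above and changing the ownership pattern gives pre- and post-merger prices.

```python
import numpy as np

# Assumed linear demand q = a - B p for two single-product firms, constant marginal costs c.
a = np.array([10.0, 10.0])
B = np.array([[2.0, -1.0],
              [-1.0, 2.0]])            # own-price effects on the diagonal, cross effects off it
c = np.array([1.0, 1.0])

def equilibrium_prices(ownership):
    # Stacked FOCs: a - B p - (ownership * B.T)(p - c) = 0, solved for p;
    # the elementwise product keeps only the cross terms internalized by a common owner.
    M = ownership * B.T
    return np.linalg.solve(B + M, a + M @ c)

pre  = equilibrium_prices(np.eye(2))        # separate owners
post = equilibrium_prices(np.ones((2, 2)))  # merged firm internalizes the cross effect
print("pre-merger prices :", pre.round(3))   # 4.0
print("post-merger prices:", post.round(3))  # 5.5: prices rise after the merger
```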
Figure 6: Merger Effects
5.3 Chaudhuri, Goldberg and Jia (2006) on Quinolones
Question: The WTO has imposed rules on patent protection (both duration and enforcement) on member countries. There is a large debate on whether we should allow foreign multinationals to extend their drug patents in poor countries such as India, which would raise prices considerably.
• Increase in IP rights raises the profits of patented drug firms, giving them greater incentives to innovate and create new drugs (or formulations such as long shelf life which could be quite useful in a country like India).
• Lower consumer surplus dues to generic drugs being taken offthe market.
To understand the tradeoffinherent in patent protection, we need to estimate the magnitude of these two effects. This is what CGJ do.
Market: Indian Market for antibiotics • Foreign and Domestic, Licensed and Non-Licensed producers.
• Different types of Antibiotics, in particular CGJ look at a particular class: Quinolones.
• Different brands, packages, dosages etc...
• Question: What would prices and quantities look like if there were no unlicensed firms selling this product in the market? 3 Data • The Data come from a market research firm. This is often the case for demand data since the firms in this market are willing to pay large amounts of money to track how well they are doing with respect to their competitors. However, prying data from these guys when they sell it for 10 000 a month to firms in the industry involves a lot of work and emailing.
• Monthly sales data for 4 regions, by product (down to the SKU level) and prices.
• The data come from audits of pharmacies, i.e. people go to a sample of pharmacies and collect the data.
• Some products have different dosages than others. How does one construct quantity for this market?
• Some products enter and exit the sample. How can the AIDS model deal with this?
3One of the reasons I.O. economists use structural models is that there is often no experiment in the data, i.e. a case where some markets have this regulation and others don’t.
30 Estimation and Results • CGJ estimate the AIDS specification with the aggregation of different brands to product level.
Product groups are defined to be indexed by molecule M and domestic/foreign status DF.
Revenue share of each product group i in each region r at time t: ωirt = αi + αir + X j γij ln pjrt + βi ln(XQrt PQrt ) + ϵirt (4) where ωirt = xirt/XQrt, prices for each group are aggregated/weighted over individual SKUs, and XQrt is expenditures on quinolones; and price index: ln PQ = α0 + X i αi ln pi + 1 2 X i X j ˜ γij ln pi ln pj (5) and upper level demand: ωGrt = αG + αGr + X H γGH ln PHrt + βG ln(Xrt Prt ) + ϵGrt (6) across different segments H of antibiotics.
• Do not model the choice of individual SKU products: – Large # of SKUs within each group (dimensionality), lack of price variation at SKU level, and varying choice sets over time (entry/exit of SKUs).
– Discrete choice approach difficult due to the difficulty of mapping revenue shares to physical shares – dosage of drugs is not well defined.
• Problem for the AIDS model: over 300 different products, i.e. 90,000 cross-product interaction terms to estimate! CGJ need to do some serious aggregating of products to get rid of this problem: they aggregate products by therapeutic class into 4 of these, interacted with the nationality of the producer. I.e., each product will have an own-price coefficient $\gamma_{i,i}$, and a price coefficient for products of different molecules and/or nationalities, denoted $\gamma_{i,10}, \gamma_{i,01}, \gamma_{i,00}$. (Note that these coefficients do not depend on whether or not the molecule is licensed.)
Thus, a product i will exhibit the same cross-price elasticity for two different drugs if those two drugs differ in the same way both in molecule and foreign/domestic status. This yields 7 product groups (one group is only produced by foreign firms), and 7 × 4 price terms.
• Simultaneity bias: SKU revenue share weights (used in the computation of the price index for each product group) depend on expenditure, and will be correlated with the demand shock. Instruments: # SKUs within group (violated if the # of SKUs affects the perceived quality of a drug or is correlated with advertising), prices at the SKU level (due to price controls).
• Supply Side: You can get upper and lower bounds on marginal costs by assuming either that firms are perfect competitors within the segment (i.e. $p = mc$) or that firms operate a cartel which can price at the monopoly level (i.e. $p = \frac{mc}{1 + 1/\eta_{jj}}$). This is very smart: you just get a worst-case scenario and show that even in the case with the highest possible producer profits, these profits are small compared to the loss in consumer surplus.
Often it is better to bound the bias from some estimates rather than attempt to solve the problem.
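For one product group, the bounding logic reduces to two lines of algebra. The sketch below uses made-up numbers (price, quantity, own-price elasticity), not CGJ's estimates; note the cartel bound only makes sense when the own-price elasticity exceeds one in absolute value.

```python
# Bracket marginal cost, markup, and profit between the competitive (p = mc)
# and cartel (p = mc/(1 + 1/eta)) benchmarks. Inputs are illustrative only.
def mc_bounds(price, quantity, eta_own):
    """price: observed price; quantity: units sold; eta_own: own-price elasticity (< -1)."""
    mc_upper = price                          # competitive benchmark: p = mc
    mc_lower = price * (1.0 + 1.0 / eta_own)  # cartel benchmark (needs |eta| > 1)
    markup_upper = (price - mc_lower) / price # = -1/eta_own
    profit_upper = (price - mc_lower) * quantity
    return mc_lower, mc_upper, markup_upper, profit_upper

lo, hi, mkup, prof = mc_bounds(price=10.0, quantity=1e6, eta_own=-2.0)
print(f"mc in [{lo:.2f}, {hi:.2f}], markup <= {mkup:.0%}, profit <= {prof/1e6:.1f} mill")
```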
• Use the estimated demand system to compute the prices for domestic unlicensed products that would drive expenditures on these products to 0 (this is what "virtual prices" means).
• Figure out what producer profits would be in the world without unlicensed firms (just (p−c)q in this setup).
• Compute the change in consumer surplus (think of integrating under the demand curve).
– Product Variety Effect
– Expenditure Switching Effect (substitution to other types of antibiotics, not quinolones); holds fixed the prices of other products
– Reduced-Competition Effect: firms adjust prices upwards due to the removal of domestic products

Figure 7: Summary Statistics (CGJ Table 3: the quinolones subsegment, 1999–2000)
Figure 8: Elasticity Estimates (CGJ Table 6A: unconditional price and expenditure elasticities within the quinolones subsegment, northern region)
Figure 9: Marginal Costs (CGJ Table 7: upper and lower bounds for marginal cost, markup, and annual profit by product group)
Figure 10: Counterfactuals

5.4 Estimation in the Linear Cournot Model. [Extra Notes.]
Again we go back to the case of a cross-section of markets (indexed by $n$), but now we begin with a linear demand curve, which in inverse form is written as:
$$p_n = x_n\beta_x - \beta_q Q_n + \epsilon_n.$$
where, as in the perfectly competitive model, $\epsilon$ and $x$ are unobserved and observed determinants of the level of demand. If $w$ are observed determinants of costs, then provided $E[\epsilon|x, w] = 0$, we can estimate the parameters of the demand equation with IV techniques.
For the supply side we now have a set of J f.o.c. for each market (these take the place of the supply curve of the competitive model). Assume linear m.c. (that is, variable costs are quadratic) so that
$$mc_{n,j} = w_{n,j}\gamma + \lambda q_{n,j} + \omega_{n,j}.$$
Then if $q_{n,j} > 0$, the Cournot f.o.c. ($p + q_j(\partial p/\partial q_j) - mc_j = 0$) reduce to
$$p_n - w_{n,j}\gamma - (\lambda + \beta_q)q_{n,j} - \omega_{n,j} = 0.$$
Provided $E[\omega|x, w] = 0$, estimation off of these first order conditions is fairly straightforward, though there are some details you might want to keep in mind.
Here are the standard steps in estimation.
Note that you have NJ conditions of the form E[ωn,j|xn, wn,j] = 0.
• Write ωn,j(θ) ≡pn −wn,jγ −(λ + βq)qn,j, where θ ≡(γ, λ, βq).
• Find a sufficiently rich vector-valued function $f(x, w)$ and form the sample moment conditions
$$G_N(\theta) = (NJ)^{-1} \sum_{n,j} \omega_{n,j}(\theta) f(x_n, w_{n,j})$$
• Search for the value of $\theta$ that makes $\|G_N(\theta)\|$ as close as possible to zero.
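A minimal sketch of these steps on made-up arrays (prices, quantities, cost and demand shifters). Everything here is placeholder data used only to show the mechanics; note that the f.o.c. moments alone only pin down the sum $\lambda + \beta_q$, which is exactly the identification issue discussed below.

```python
# GMM on the Cournot f.o.c.: omega_{n,j}(theta) = p_n - gamma*w_{n,j} - (lambda+beta_q)*q_{n,j}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, J = 200, 3
x = rng.normal(size=(N, 1))                     # demand shifter
w = rng.normal(size=(N, J))                     # cost shifter
q = np.abs(rng.normal(1.0, 0.2, size=(N, J)))   # placeholder quantities
p = 2.0 + 0.5 * w.mean(axis=1) + rng.normal(0, 0.1, size=N)  # placeholder prices

def omega(theta):
    gamma, slope = theta                        # slope stands for lambda + beta_q
    return p[:, None] - gamma * w - slope * q

def gbar(theta):
    # instruments f(x, w): a constant, the cost shifter, and the demand shifter
    res = omega(theta)
    inst = [np.ones((N, J)), w, np.repeat(x, J, axis=1)]
    return np.array([(res * z).mean() for z in inst])

obj = lambda theta: gbar(theta) @ gbar(theta)   # ||G_N(theta)||^2 with identity weights
theta_hat = minimize(obj, x0=[0.0, 0.0], method="Nelder-Mead").x
print("estimated (gamma, lambda + beta_q):", theta_hat.round(3))
```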
Question. If $G(\theta) = E[G_N(\theta)]$, we usually call $G(\theta)$ the limit function, and $G(\theta_0) = 0$. What does "sufficiently rich" mean in terms of $G(\theta)$; i.e. what is the identification condition for this model?
Can you derive the limit distribution of the parameters estimated in this way?
Here are some of the details you want to keep in mind when engaging in such an exercise.
• Provided we assume that, no matter the realization of $\omega$, every firm produces positive output, the moment conditions hold at any equilibrium. So to estimate the parameters of this model we do not need "to choose" among equilibria. On the other hand, to do policy analysis we might have to.
• Note that output and price depend on the whole distribution of cost shifters. This gives us more instruments (for both demand and costs) than were available for the perfectly competitive model. It also raises the question of how to use them efficiently (Chamberlain, 1986).
• If this is a market where we only observe q for a particular subset of ω (say for those whose profits would be greater than fixed costs in equilibrium), even if the original sample was a draw from a population that satisfied E[ω|x, w] = 0 then generally E[ω|x, w, q > 0] ̸= 0 and you have a selection problem. The only way out of this is to build a model of the conditions which generate q = 0.
• When you have data on a cross section of markets, or a given market over time, you have to decide whether the $\omega$'s of different firms in the same market are correlated. The first thing to do is to look at market (or time) averages of residuals. If they are correlated, you need a variance correction, and could use a more efficient estimation algorithm. This problem is also implicitly present when you are analyzing a single cross section in a given market, and one has to be careful to allow for the proper assumptions on the realizations of the errors.
• To gain efficiency you would generally estimate the demand equation along with the f.o.c.; you should convince yourself you know how to do this - after allowing for a covariance between the demand and cost disturbances.
The Simple Linear Cournot Model and Identification.
Note that the f.o.c. for quantity do not, per se, determine the slope of either the marginal cost or the demand function. It is only by combining the slope coefficients from the demand and f.o.c.
that we can identify either.
Another way to say this is that in this linear case the f.o.c. alone cannot tell us whether the market acts "as if" it is populated by price-taking firms or by Cournot competitors. One time we interpret the slope of the "pricing equation" as being the slope of the m.c. curve, and the other time we interpret it as the sum of the slopes of demand and cost.4 On the other hand, if we had variables which shifted the slope of the demand curve but not the cost curves, we could distinguish between the two (Bresnahan, 1982, Economics Letters).
This is because, holding $Q$ constant, changes in the slope of the demand curve will not affect price in a price-taking regime, but will in a Nash-Cournot regime. Formally, consider an inverse demand curve with interactions between quantity and demand shifters:
$$p_n = x_n\beta_x - \beta_q Q_n - Q_n x_n \beta_{q,x} + \epsilon$$
Then the f.o.c. for the Cournot model becomes
$$p_n - (\beta_q + \lambda + x_n\beta_{q,x})\, q_{n,j} - w_{n,j}\gamma - \omega_{n,j} = 0$$
but the pricing equation under p.c. does not change. I.e. the f.o.c. in the p.c. world does not depend on $x_n$, while it does in the Cournot world. In some sense this is a "nonparametric" test of the model. I.e. we find an $x_n$ that affects the slope of the demand curve but does not affect the slope of the cost curve, and see if it interacts with the individual quantities in the f.o.c.
4Of course one could ask whether the “slope” of the f.o.c. is the same as the slope of the demand curve; but you may not have data on total demand, and even if one did one might worry about that being a bit dependent on functional form assumptions, the precision of your estimators, etc. As a result a slightly different literature developed.
Here are some further things one wants to keep in mind.
• What would you expect to affect the slope of the demand equation (in contrast to the intercept) and not affect marginal costs? How could you use the distribution of income to help here? How about past advertising? The literature has discussed advertising in this context (i.e. advertising makes the demand curve for a product less elastic), but the mechanism through which this occurs is not really specified. To analyze the impact of advertising we would want to specify a micro-level model of how it affects individual demands and then aggregate up. This is an important topic for many reasons, and we come back to it below.
• Note that it is not clear that if we reject price-taking behavior we should immediately accept the Cournot-Nash behavior.
• All of this infers the m.c. function from price and a behavioral assumption about how price is set. I.e. there is no cost data. Thus it should not be surprising that we don't get an estimate of the slope of the marginal cost function without imposing a behavioral assumption.
Remember there are lots of conditions under which static optimizing behavioral choices don't make a lot of sense. Also you might think of what you could do if you did have cost data, though, as noted, this is rarely the case.
The “Conduct” Parameter in the Cournot Setting.
Sometimes the literature goes further than this and looks for a ”conduct” parameter. The typical discussion considers the following form of a quantity setting f.o.c.
$$p(Q) + q_j \frac{\partial p}{\partial Q}\frac{\partial Q}{\partial q_j} - mc_j = 0$$
and then interprets the term $\frac{\partial Q}{\partial q_j}$ as firm $j$'s conjecture about the response of total industry output to increases in its own output. The conjecture is then estimated as a parameter, or as a function of the firm's market share. A simple case would be
$$p(Q) + q_j \frac{\partial p}{\partial Q}\left(\theta_1 + \theta_2 \frac{1}{s_j}\right) - mc_j = 0$$
Then:
• Under competition: $\theta_1 = \theta_2 = 0$,
• Cournot: $\theta_1 = 1$, $\theta_2 = 0$,
• Prove to yourself that under joint profit maximization (monopoly): $\theta_1 = 0$, $\theta_2 = 1$.
Note: If one of these three combinations is not observed, then there is no obvious interpretation of "equilibrium" that is consistent with the estimates. Moreover, before one believed a value of $\theta_2 = 1$ in a market with many firms, you would want to provide evidence that one can support monopoly pricing (that there is a mechanism that insures that no firm has an incentive to deviate).
This suggests that you should be wary of an equilibrium interpretation of such results (there is no "mode of competition" that gives you a $\theta_2$ between zero and one). You may want to "test" zero and one, but what any other value tells you is that the equilibrium interpretation is wrong (of course you should also be wary of test statistics, since something that shows up significant with a lot of data may well be insignificant economically and vice versa).
Some Additional Notes on Estimating the Cournot Model.
• Fundamentally this all relies a bit heavily on functional form (cost data are never directly used, presumably because the researchers didn't have them). You should keep in mind that to really let the data tell us what type of equilibrium is a good approximation, what we would like to do is compare price to marginal cost, and ask what behavioral model could explain the difference. There are some studies which have credibly been able to do this; a study by Genesove and Mullin (1999, RAND) on the sugar cartel is an interesting example. But for most studies there just simply isn't enough data (especially cost data). What we do then is read about the industry, impose an equilibrium notion, ensure that it is not grossly at odds with the data, and then move on to whatever we want to analyze.
• Though cost data are quite rare, as noted by Gollop and Roberts (1979), sometimes input demands are available, and they can also be used to help in estimation. That is, provided factor markets are competitive, we should have
$$\left[p + q_j \frac{\partial p}{\partial Q}\right]\frac{\partial f_j(x_j)}{\partial x_{j,k}} = w_{j,k}$$
where $f_j(\cdot)$ is the production function for firm $j$, $x_{j,k}$ is the input choice, and $w_{j,k}$ is the factor price. There ought to be one first order condition for each input, and this might help both in determining the nature of equilibrium, and in recovering cost. On the other hand you will have to be in a setting where estimating product-level production functions makes sense (most times we see factor choices they are for multi-product firms or plants; see below) and make explicit assumptions on how factor choices are made (again we come back to this below).
• Often the situation is worse than the one we have focused on (a situation in which all we have is data on outputs and prices and possibly some exogenous cost and demand shifters).
Sometimes we don’t have the distribution of outputs. Appelbaum(1982) and Porter(1983) both work with this situation and aggregate up the first order condition to the market level.
We will show you how to do this in going over extensions to Houthakker's example below.
6 Characteristic Space Approaches to Demand Estimation
Basic approach:
• Consider products as bundles of characteristics
• Define consumer preferences over characteristics
• Let each consumer choose the bundle which maximizes their utility. We restrict the consumer to choosing only one bundle. You will see why we do this as we develop the formal model; multiple purchases are easy to incorporate conceptually but incur a big computational cost and require more detailed data than we usually have. Working on elegant ways around this problem is an open area for research.
• Since we normally have aggregate demand data we get the aggregate demand implied by the model by summing over the consumers.
6.1 Formal Treatment
• Utility of the individual: $U_{ij} = U(x_j, p_j, v_i; \theta)$ for $j \in \{0, 1, 2, 3, \ldots, J\}$.
• Good 0 is generally referred to as the outside good. It represents the option chosen when none of the observed goods are chosen. A maintained assumption is that the pricing of the outside good is set exogenously.
• $J$ is the number of goods in the industry
• $x_j$ are non-price characteristics of good $j$
• $p_j$ is the price
• $v_i$ are characteristics of consumer $i$
• $\theta$ are the parameters of the model
• Note that the product characteristics do not vary over consumers; this is most commonly a problem when the choice sets of consumers are different and we do not observe the differences in the choice sets.
• Consumer $i$ chooses good $j$ when
$$U_{ij} > U_{ik} \;\;\forall k \qquad (7)$$
[note that all preference relations are assumed to be strict]
• This means that the set of consumers that choose good $j$ is given by
$$S_j(\theta) = \{v \mid U_{ij} > U_{ik} \;\;\forall k\}$$
and given a distribution over the $v$'s, $f(v)$, we can recover the share of good $j$ as
$$s_j(x, p \mid \theta) = \int_{\nu \in S_j(\theta)} f(d\nu)$$
Obviously, if we let the market size be $M$ then total demand is $M \times s_j(x, p \mid \theta)$.
• This is the formal analog of the basic approach outlined above. The rest of our discussion of the characteristic space approach to demand will consider the steps involved in making this operational for the purposes of estimation.
6.1.1 Aside on utility functions
• Recall from basic micro that ordinal rankings of choices are invariant to affine transformations of the underlying utility function. More specifically, choices are invariant to multiplication of $U(\cdot)$ by a positive number and the addition of any constant.
• This means that in modelling utility we need to make some normalizations - that is we need to bolt down a zero to measure things against. Normally we do the following: 1. Normalize the mean utility of the outside good to zero.
2. Normalize the coefficient on the idiosyncratic error term to 1.
This allows us to interpret our coefficients and do estimation.
6.2 Examples (Briefly)
Anderson, de Palma and Thisse go through many of these in very close detail. In the spring, Pierre Dubois will spend more time on variations as well.
Horizontally Differentiated vs Vertically Differentiated - Recall: horizontally differentiated means that, setting aside price, people disagree over which product is best.
Vertically differentiated means that, price aside, everyone agrees on which good is best, they just differ in how much they value additional quality.
1. Pure Horizontal Model
– This is the Hotelling model ($n$ ice-cream sellers on the beach, with consumers distributed along the beach)
– Utility for a consumer at some point captured by $\nu_i$ is
$$U_{ij} = u - p_j - \theta(\delta_j - \nu_i)^2$$
where the $(\delta_j - \nu_i)^2$ term captures a quadratic "transportation cost".
– It is a standard workhorse for theory models exploring ideas to do with product location.
2. Pure Vertical Model
– Used by Shaked and Sutton, Mussa-Rosen (monopoly pricing, slightly different), Bresnahan (demand for autos) and many others
– Utility given by $U_{ij} = u - \nu_i p_j + \delta_j$
– This model is used most commonly in screening problems such as Mussa-Rosen, where the problem is to set $(p, q)$ tuples that induce high-value and low-value customers to self-select (2nd degree price discrimination). The model has also been used to consider product development issues, notably in computational work.
3. Logit
– This model assumes everyone has the same taste for quality but has different idiosyncratic tastes for the product. Utility is given by $U_{ij} = \delta_j + \epsilon_{ij}$
– $\epsilon_{ij}$ iid $\sim$ extreme value type I [$F(\epsilon) = e^{-e^{-\epsilon}}$]. This is a very helpful assumption as it allows the aggregate shares to have an analytical form.
I.e.:
$$\Pr(U_{ij} \geq U_{ik} \;\forall k) = \frac{\exp(\delta_j)}{\sum_{k=0,\ldots,J} \exp(\delta_k)} \qquad (8)$$
– This ease in aggregation comes at a cost: the embedded assumption on the distribution of tastes creates more structure than we would like on the aggregate substitution matrix.
– Independence of Irrelevant Alternatives (IIA): the ratio of choice probabilities between two options $j$ and $k$ doesn't depend on the utilities of any other product. I.e.:
$$\frac{P_{ij}}{P_{ik}} = \frac{e^{\delta_{ij}}}{e^{\delta_{ik}}}$$
(Red bus-Blue bus issue)
– See McFadden 1972 for details on the construction.
4. Nested Logit • As in the AIDS Model, we need to make some “ex-ante” classification of goods into different segments, so each good j ∈S(j).
• $U_{ij} = V_{ij} + \epsilon_{ij}$ where goods are divided into nests, and the joint distribution of the errors is
$$F(\cdot) = \exp\left(-\sum_{s=1}^{S}\Big(\sum_{j \in s} e^{-\epsilon_{nj}/\lambda_s}\Big)^{\lambda_s}\right)$$
$\lambda_k \in (0, 1]$ is the degree of independence in unobserved components within nest $k$ (higher means more independence).
For two different goods in different segments, the relative choice probabilities are:
$$\frac{P_{ni}}{P_{nm}} = \frac{e^{V_{ni}/\lambda_k}\big(\sum_{j \in S_k(i)} e^{V_{nj}/\lambda_k}\big)^{\lambda_k - 1}}{e^{V_{nm}/\lambda_l}\big(\sum_{j \in S_l(m)} e^{V_{nj}/\lambda_l}\big)^{\lambda_l - 1}}$$
• The best example of using Nested Logit for an IO application is Goldberg (1995), Econometrica (in the same issue as BLP, on the same industry!).
• One can classify goods into a hierarchy of nests (car or truck, foreign or domestic, Nissan or Toyota, Camry or Corolla).
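For concreteness, here is a small sketch of nested-logit choice probabilities computed from the standard closed form (consistent with the ratio above), for made-up utilities and a made-up two-nest structure.

```python
# Nested-logit choice probabilities: P_j = exp(V_j/lam_k) * I_k^(lam_k - 1) / sum_l I_l^lam_l,
# where I_k is the inclusive sum of nest k. All inputs below are illustrative.
import numpy as np

V = np.array([1.0, 1.2, 0.8, 0.5])      # deterministic utilities
nest = np.array([0, 0, 1, 1])           # goods 0,1 in nest 0; goods 2,3 in nest 1
lam = np.array([0.6, 0.9])              # lambda_k for each nest

def nested_logit_probs(V, nest, lam):
    inclusive = np.array([np.exp(V[nest == k] / lam[k]).sum() for k in range(lam.size)])
    denom = (inclusive ** lam).sum()     # add 1.0 here if there is an outside good
    probs = np.empty_like(V)
    for j, k in enumerate(nest):
        probs[j] = np.exp(V[j] / lam[k]) * inclusive[k] ** (lam[k] - 1.0) / denom
    return probs

print(nested_logit_probs(V, nest, lam).round(4))  # sums to 1
```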
5. "Generalized Extreme Value Models": Bresnahan, Trajtenberg and Stern (RAND 1997) have looked at extensions of nested logit which allow for overlapping nests: foreign or domestic computer maker in one nest and high-end or standard performance level. The advantage of this approach is that there is no need to choose which nest comes first.
6. Ken Train (2002) discusses many different models of discrete choice. This is a great reference to get into the details of how to do these procedures. Moreover we will focus on cases where we have aggregate data, but having individual level data can help you A LOT.
7. "Ideal Type" (ADT) or "Pure Characteristic" (Berry & Pakes)
– Utility given by
$$U_{ij} = f(\nu_i, p_j) + \sum_k \sum_r g(x_{jk}, \nu_{ir}, \theta_{kr})$$
This nests the pure horizontal and pure vertical models (once you make a few functional form assumptions and some normalizations).
8. BLP (1995)
– This is a parameterized version of the above case, with the logit error term tacked on.
It is probably the most commonly used demand model in the empirical literature, when differentiated goods are being dealt with.
$$U_{ij} = f(\nu_i, p_j) + \sum_k \sum_r x_{jk}\nu_{ir}\theta_{kr} + \epsilon_{ij}$$

6.3 Estimation from Product Level Aggregate Data
• The data typically are shares, prices and characteristics
• That is: $\{(s_j, p_j, x_j)\}_{j=1}^{J}$
• We will start by looking at the simpler cases (the vertical model and the logit) and then move on to an examination of BLP.
• Remember that all the standard problems, like price being endogenous and wider issues of identification, will continue to be a problem here. So don’t lose sight of this in all the fancy modelling!
6.3.1 Illustrative Case: Vertical Model
Note that this is what Bresnahan estimates when he looks at the possibility of collusion explaining the relative dip in auto prices in 1955.
• In the vertical model people agree on the relative quality of products, hence there is a clear ranking of products in terms of quality
• The only difference between people is that some have less willingness to pay for quality than others
• Hence (recall) utility will look like $U_{ij} = u - \nu_i p_j + \delta_j$
• To gain the shares predicted by the model we need to:
1. Order the goods by increasing $p$. Note that this requires the ordering to also be increasing in $\delta$ if the goods in the sample all have non-zero share. (A good with higher $p$ and lower $\delta$ will not be purchased by anyone.)
2. The lowest good is the outside good (good 0) - we normalise this to zero ($u = 0$).
3. Choose 0 if $0 > \max_{j \geq 1}(\delta_j - \nu_i p_j)$; this implies $\nu_i > \delta_1/p_1$.
4. Hence $S_0 = \{\nu \mid \nu > \delta_1/p_1\}$. Thus if $\nu$ is distributed lognormally, $\nu = \exp(\sigma x + \mu)$ where $x$ is distributed standard normal, then good 0 is chosen if $\exp(\sigma x + \mu) \geq \delta_1/p_1$, i.e. if $x$ crosses a threshold built from $\sigma^{-1}\big[\log\frac{\delta_1}{p_1} - \mu\big]$; defining the cutoff $\psi_0(\theta)$ accordingly, the model gives $s_0 = F(\psi_0(\theta))$, where $F$ is the standard normal CDF.
5. Similarly, choose good 1 iff $0 < \delta_1 - \nu p_1$ and $\delta_1 - \nu p_1 \geq \delta_2 - \nu p_2$, giving $s_1(\theta) = F(\psi_1(\theta)) - F(\psi_0(\theta))$; more generally $s_j(\theta) = F(\psi_j(\theta)) - F(\psi_{j-1}(\theta))$ for $j = 1, \ldots, J$.
• Question: What parameters are identified in θ? What are the sources of identification for each parameter? What are the implications for cross-price elasticities?
Estimation
To complete estimation we need to specify a data generating process. We assume we observe the choices of a random sample of size $n$. Each individual chooses one from a finite number of cells; choices are mutually exclusive and exhaustive.
This suggests a multinomial distribution of outcomes
$$L \propto \prod_j s_j(\theta)^{n_j}$$
Hence, choose $\theta$ to maximise the log-likelihood
$$\max_\theta \sum_j n_j \log[s_j(\theta)]$$
where $n_j$ is the count of individuals choosing object $j$.
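Here is a minimal sketch of that maximization for the vertical model: map the $\delta$'s into $\nu$-cutoffs, turn cutoffs into shares via the normal CDF, and maximize the multinomial log-likelihood. All counts and prices are made up, and $\mu$ and $\sigma$ are held fixed so the problem stays just-identified (the identification discussion below explains why that matters).

```python
# Vertical-model multinomial MLE mechanics on toy counts.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

p = np.array([1.0, 1.5, 2.5])          # prices, ordered
n = np.array([480, 110, 120, 290])     # counts: outside good, then goods 1..3
mu, sigma = 0.0, 1.0                   # lognormal parameters of nu, held fixed

def shares(delta):
    # cutoffs between adjacent goods in nu-space; must be positive and decreasing
    cuts = np.r_[delta[0] / p[0], np.diff(delta) / np.diff(p)]
    psi = (mu - np.log(np.maximum(cuts, 1e-12))) / sigma
    F = norm.cdf(psi)                  # s0 = F(psi_0), s_j = F(psi_j) - F(psi_{j-1})
    return np.clip(np.r_[F[0], np.diff(F), 1.0 - F[-1]], 1e-12, 1.0)

negll = lambda delta: -(n * np.log(shares(delta))).sum()
delta_hat = minimize(negll, x0=np.array([1.0, 1.4, 2.0]), method="Nelder-Mead").x
print("estimated deltas:", delta_hat.round(3), " fitted shares:", shares(delta_hat).round(3))
```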
Another Example: Logit
Here the utility is $U_{ij} = \delta_j + \epsilon_{ij}$ where $\epsilon_{ij}$ iid $\sim$ extreme value type I [$F(\epsilon) = e^{-e^{-\epsilon}}$].
This yields the closed form expressions for the share of consumers who purchase inside good $j$ and outside good 0:
$$s_j = \frac{\exp[\delta_j - p_j]}{1 + \sum_{q \geq 1} \exp[\delta_q - p_q]} \qquad s_0 = \frac{1}{1 + \sum_{q \geq 1} \exp[\delta_q - p_q]}$$

6.4 Identification:
Identification is the key issue, always. Here we have to get all the identification off the shares. Since $s_0 = 1 - \sum_{j \geq 1} s_j$ we have $J$ shares to use to identify $J + 2$ parameters (if we let $\theta = \{\delta_1, \ldots, \delta_J, \mu, \sigma\}$).
(You should be able to explain this with a simple diagram.) Thus we hit the dimensionality problem. To solve this we need more structure. Typically we reduce the dimensionality by "projecting" product quality down onto characteristics, so that:
$$\delta_j = \sum_k \beta_k x_{kj}$$
This makes life a lot easier and we can now estimate via MLE.
An alternative approach would have been to use data from different regions or time periods, which would help with this curse of dimensionality. Note that we are still in much better shape than the AIDS model since there are only $J + 2$ parameters to estimate versus $J^2 + J$ of them.
6.5 Problems with Estimates from Simple Models:
Each model has its own problems and they share one problem in common:
• Vertical Model:
1. Cross-price elasticities are only with respect to neighbouring goods - highly constrained substitution matrix.
2. Own-price elasticities are often not smaller for high priced goods, even though we might think this makes more sense (higher income →less price sensitivity).
• Logit Model:
1. Own-price elasticities $\eta_{jj} = \frac{\partial s_j}{\partial p_j}\frac{p_j}{s_j} = -\alpha p_j(1 - s_j)$. If shares are close to 0, own-price elasticities are proportional to price – higher-priced goods have higher elasticities.
2. Cross-price elasticities $\eta_{jk} = \frac{\partial s_j}{\partial p_k}\frac{p_k}{s_j} = \alpha p_k s_k$. This means the cross-price elasticity for a change in product $k$'s price is the same for all other products $j \neq k$, and is solely a function of prices and shares, not the relative proximity of products in characteristic space. This is a bit crazy for most products (e.g., cars). This is a function of the IIA assumption.
– Note: if you run logit, and your results do not generate these results you have bad code.
This is a helpful diagnostic for programming.
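In that spirit, here is a small numerical check of the two formulas above (and of the IIA property) that you can run against your own logit code; the utilities, price coefficient, and prices are arbitrary illustrative numbers.

```python
# Check the logit elasticity formulas against numerical derivatives.
import numpy as np

alpha = 1.5
p = np.array([1.0, 1.5, 2.0])
delta = np.array([0.5, 1.0, 1.2])

def shares(prices):
    e = np.exp(delta - alpha * prices)
    return e / (1.0 + e.sum())

s = shares(p)
print("own-price (formula) :", (-alpha * p * (1 - s)).round(4))     # eta_jj = -alpha p_j (1 - s_j)
print("cross wrt good 2 (formula):", round(alpha * p[2] * s[2], 4)) # eta_jk = alpha p_k s_k

# numerical elasticities reproduce them, and the cross elasticity is identical
# for every j != k -- the IIA property
h = 1e-6
for k in range(3):
    pk = p.copy(); pk[k] += h
    eta_col = (shares(pk) - s) / h * p[k] / s
    print(f"numerical elasticities wrt p_{k}:", eta_col.round(4))
```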
• Simultaneity: No way to control for endogeneity via simultaneity. This leads to the same economically stupid results that we see in single-product demand estimation that ignores endogeneity (like upward-sloping demand etc).
6.6 Dealing with Simultaneity
The problem formally is that the regressors are correlated with an unobservable (we can't separate variation due to cost shocks from variation due to demand shocks), so to deal with this we need to have an unobservable component in the model.
Let product quality be
$$\delta_j = \sum_k \beta_k x_{kj} - \alpha p_j + \xi_j$$
where the elements of $\xi$ are unobserved product characteristics.
Estimation Strategy
1. Assume $n$ large
2. So $s^o_j = s_j(\xi_1, \ldots, \xi_J \mid \theta)$
3. For each $\theta$ there exists a $\xi$ such that the model shares and observed shares are equal.
4. Thus we invert the model to find ξ as a function of the parameters.
5. This allows us to construct moments to drive estimation (we are going to run everything using GMM) • Note: sometimes inversion is easy, sometimes it is a real pain.
Example: The Logit Model
Logit is the easiest inversion to do, since
$$\ln[s_j] - \ln[s_0] = \delta_j = \sum_k \beta_k x_{kj} - \alpha p_j + \xi_j \;\;\Rightarrow\;\; \xi_j = \ln[s_j] - \ln[s_0] - \left(\sum_k \beta_k x_{kj} - \alpha p_j\right)$$
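Here is a minimal sketch of that inversion followed by IV on fabricated data: shares are generated from a logit with prices correlated with $\xi$, and a made-up cost shifter $w$ serves as the instrument. The 2SLS step is written out by hand rather than with any particular package's routine.

```python
# Berry-style logit inversion, then 2SLS on fabricated data.
import numpy as np

rng = np.random.default_rng(1)
T, J = 100, 4                                  # markets x products
xi = rng.normal(0, 0.3, size=(T, J))           # unobserved quality
w = rng.normal(size=(T, J))                    # cost shifter (instrument)
x = rng.normal(size=(T, J))                    # observed characteristic
p = 1.0 + 0.8 * w + 0.5 * xi + 0.1 * rng.normal(size=(T, J))   # price correlated with xi

beta_true, alpha_true = 1.0, 2.0
u = beta_true * x - alpha_true * p + xi
e = np.exp(u)
s = e / (1.0 + e.sum(axis=1, keepdims=True))   # logit shares within each market
s0 = 1.0 - s.sum(axis=1, keepdims=True)        # outside good share

y = (np.log(s) - np.log(s0)).ravel()           # inverted mean utilities
X = np.column_stack([np.ones(T * J), x.ravel(), p.ravel()])
Z = np.column_stack([np.ones(T * J), x.ravel(), w.ravel()])

# 2SLS by hand: b = (X' Pz X)^{-1} X' Pz y, Pz the projection onto Z
Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
b = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)
print("IV estimates (const, beta, -alpha):", b.round(3))
```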
• Note that as far as estimation goes, we now are in a linear world where we can run things in the same way as we run OLS or IV or whatever. The precise routine to run will depend, as always, on what we think are the properties of ξ.
• Further simple examples are in Berry 1994.
More on Estimation
• Regardless of the model, we now have to choose the moment restriction we are going to use for estimation.
• This is where we can now properly deal with simultaneity in our model.
• Since consumers know $\xi_j$ we should probably assume the firms do as well. Thus in standard pricing models you will have $p_j = p(x_j, \xi_j, x_{-j}, \xi_{-j})$
• Since $p$ is a function of the unobservable, $\xi$, we should not use a moment restriction which interacts $p$ and $\xi$. This is the standard endogeneity problem in demand estimation.
• It implies we need some instruments.
• There is nothing special about $p$ in this context; if $E(\xi x) \neq 0$, then we need instruments for $x$ as well.
Some assumptions used for identification in the literature:
1. $E(\xi \mid x, w) = 0$, where $x$ contains the vector of characteristics other than price and $w$ contains cost-side variables. Note that they are all valid instruments for price so long as the structure of the model implies they are correlated with $p_j$.
Question: how do the vertical and logit models differ in this regard?
2. Multiple markets: here assume something like ξjr = ξj + ujr and put assumptions on ujr. Essentially treat the problem as a panel data problem, with the panel across region not time.
7 Generalizing Demand to allow for more Realistic Substitution Patterns: BLP
• BLP is an extension of the logit model that allows for unobserved product characteristics and, most importantly, allows for consumer heterogeneity in tastes for characteristics.
• Since it is based on a solid micro foundation it can be adapted to a variety of data types and several papers have done this in particular applications.
• The single most important contribution of BLP is showing how to do the inversion in a random-coefficient logit model, which allows the error to be popped out, and thus allows endogeneity problems to be addressed. The next most important contribution is showing that all the machinery can produce results that make a lot of sense.
• Lastly, use the NBER working paper version - it is easier to read.
Details: The Micro Model
$$U_{ij} = \sum_k x_{jk}\beta_{ik} + \xi_j + \epsilon_{ij} \quad\text{with}\quad \beta_{ik} = \lambda_k + \beta^o_k z_i + \beta^u_k v_i$$
Definitions:
$x_{jk}$: observed characteristic $k$ of product $j$
$\xi_j$: unobserved characteristics of product $j$
$\epsilon_{ij}$: the logit idiosyncratic error
$\lambda_k$: the mean impact of characteristic $k$
$z_i$: a vector of observed individual characteristics
$\beta^o_k$: a vector of coefficients determining the impact of the elements of $z_i$ on the taste for characteristic $x_{jk}$
$v_i$: a vector of unobserved individual characteristics
$\beta^u_k$: a vector of coefficients determining the impact of the elements of $v_i$ on the taste for characteristic $x_{jk}$
• Substituting the definition of $\beta_{ik}$ into the utility function you get
$$U_{ij} = \sum_k x_{jk}\lambda_k + \sum_k x_{jk}\beta^o_k z_i + \sum_k x_{jk}\beta^u_k v_i + \xi_j + \epsilon_{ij}$$
or, as is usually the way this is written (and also the way you end up thinking about things when you code up the resulting estimator),
$$U_{ij} = \delta_j + \sum_k x_{jk}\beta^o_k z_i + \sum_k x_{jk}\beta^u_k v_i + \epsilon_{ij} \quad\text{where}\quad \delta_j = \sum_k x_{jk}\lambda_k + \xi_j$$
• Note that this model has two different types of interactions between consumer characteristics and product characteristics:
1. interactions between observed consumer characteristics $z_i$ and the product characteristics $x_{jk}$; and
2. interactions between unobserved consumer characteristics $v_i$ and the product characteristics $x_{jk}$.
• These interactions are the key things in terms of why this model is different from and preferred to the logit model. These interactions kill the IIA problem and mean that the aggregate substitution patterns are now far more reasonable (which is to say they are not constrained to have the logit form).
– Question: Are the substitution patterns at the individual level any different from the logit model?
The intuition for why things are better now runs as follows: • If the price of product j (say a BMW 7 series) increases, very specific customers will leave the car - those customers who have a preference for the car’s characteristics and consequently will like cars close to it in the characteristic space that the empirical researcher is using.
• Thus they will substitute to cars that are close to the BMW in characteristic space (say a Lexus, and not a Reliant Regal, a three-wheeled engineering horror story still sometimes seen in the UK).
• Also, price effects will be different for different products. Products with high prices, but low shares, will be bought by people who don't respond much to price and so they will likely have a higher markup than a cheap product with the same share.
Figure 11: The Reliant Regal
• This model also means that products can be either strategic complements or substitutes in the pricing game. (in Logit they are strategic complements).
• Usually, we only have product level data at the aggregate level so the source of consumer information is the distribution of zi from the census. That is, we are usually working with the vi part of the model. However, a few studies have used micro data of one form or another, notably MicroBLP (JPE 2004).
• With micro data you need to think about whether the individual specific data you have is enough to capture the richness of choices. If not, then you need to also include the unobserved part of the model as well.
7.1 Estimation: Step by step overview
We consider product level data (so there are no observed consumer characteristics). Thus we only have to deal with the $v$'s.
$$U_{ij} = \underbrace{\delta_j}_{\sum_k x_{jk}\lambda_k + \xi_j} + \sum_k x_{jk}\beta^u_k v_i + \epsilon_{ij}$$
Step 0: Search over $\theta$
Generally, we are looking for $\theta \equiv \{\lambda, \beta\}$ that minimizes a GMM objective (see section notes). If the distribution of $\nu_i$ is unknown but can be parameterized, these parameters are also estimated within $\theta$.
For a given evaluation of the objective function and a given θ, we will be looking for the implied set of product “mean-utilities” {δj}∀jso that the predicted shares of the model match those observed in the data. Once these are recovered, we can recover ξ(θ) that, along with our candidate parameter vector (θ), induces our model to match the shares in the data, and then compute the GMM objective G(ξ(θ), Z; θ) where Z represent instruments.
Thus, one performs the following steps:
Step 1: Work out the aggregate shares conditional on $(\delta, \beta)$
• Take ns simulation draws from f (v). This gives you the simulated analog b sns j (δ, β) = X r exp [δj + P k xjkvirβk] 1 + P q≥1 exp [δq + P k xqkvirβk] Note the following points: • The logit error is very useful as it allows use to gain some precision in simulation at low cost.
• If the distribution of a characteristic is known from Census data then we can draw from that distribution (BLP fits a Lognormal to census income data and draws from that) • By using simulation you introduce a new source of error into the estimation routine (which goes away if you have “enough” simulations draws...). Working out what is enough is able to be evaluated (see BLP). The moments that you construct from the simulation will account for the simulation error without doing special tricks so this is mainly a point for interpreting standard errors.
• The are lots of ways to use simulation to evaluate integrals, some of them are quite involved.
Depending on the computational demands of your problem it could be worth investing some time in learning some of these methods. (Ken Judd has a book in computation methods in economics that is a good starting point, Ali can also talk to you about using an extension of Halton Draws, called the Latin Cube to perform this task) Step 2: Recover the ξ from the shares.
Remember from basic econometrics that when we want to estimate using GMM we want to exploit the orthogonality conditions that we impose on the data. To do this we need to be able to compute the unobservable, so as to evaluate the sample moments. So how to do this? This is one of the main contributions of BLP: • BLP point out that iterating on the system δk j (β) = δk−1 j (β) + ln so j −ln h b sns j δk−1, β i has a unique solution (the system is a contraction mapping with modulus less than one and so has a fixed point to which it converges monotonically at a geometric rate). Both Nevo or BLP also exploit the fact that the following is also a contraction exp h δk j (β) i = exp h δk−1 j (β) i so j b sns j (δk−1, β) This is what people actually use in the programming of the estimator.
50 • So given we have δ (β, so, P ns) we have an analytical form for λ and ξ (which we be determined by the exact indentifying assumptions you are using). In other words ξ (β, so, P ns) = δ (β, so, P ns) − X k xkλk and depending on the moments you are using, you can “concentrate out” λk as a function of the non-linear parameters (see Nevo JEMS RA guide appendix; similar to OLS but need to account for potentially different GMM weighting matrix).
• The implication is that you should only be doing a nonlinear search over the elements of β.
Step 3: Construct the Moments We want to interact ξ (β, so, P ns) with the instruments which will be the exogenous elements of x and our instrumental variables w (recall that we will be instrumenting for price etc).
You need to make sure that you have enough moment restrictions to identify the parameters of interest.
Step 4 →0 : Iterate until have reached a minimum • Recall that we want to estimate (λ, β) . Given the β the λ have analytic form, we only need to search over the β that minimize our objective function for minimizing the moment restrictions.
• Look back at the expression for the share and you will realize that the β is the only thing in there that we need to determine to do the rest of these steps. However, since it enters nonlinearly we need to do a nonlinear search to recover the values of β that minimize our objective function over the moments restrictions.
• You will need to decide on a stopping point.
• Some things to note about this: – This means that estimation is computationally intensive.
– You will need to use Matlab, Fortran, C, Gauss, R etc to code it up. I like Matlab, personally.
– There are different ways to numerically search for minimums: the advantage of a simplex search algorithm over derivative based methods is that they are a bit more robust to poorly behaved functions, but take longer. Also start you code from several different places before believing a given set of results. [in matlab fminsearch is a good tool]. Also newer search methods (e.g., KNITRO) with some cost (learning, financial) have been reported to be better than built in Matlab search functions.
– Alternatively, and even better thing to do is to use a program that can search for global minima so that you don’t have to worry too much about starting values. These can take about 10-20 times longer, but at least you can trust your estimates. Some are Differential Evolution and Simulated Annealing. You can get these in MATLAB, C or FORTRAN offthe web.
– Aviv Nevo has sample code posted on the web and this is a very useful place to look to see the various tricks in programming this estimator.
51 – Due to the non-linear nature of the estimator, the computation of the standard errors can be a arduous process, particularly if the data structure is complex.
– Taking numerical derivatives will often help you out in the computation of standard errors.
– For more details on how to construct simulation estimators and the standard errors for nonlinear estimators, look to you econometrics classes (and the GMM section notes) 7.2 Identification in these models The sources of identification in the standard set up are going be: 1. differences in choice sets across time or markets (i.e. changes in characteristics like price, and the other x’s) 2. differences in underlying preferences (and hence choices) over time or across markets 3. observed differences in the distribution of consumer characteristics (like income) across mar-kets 4. the functional form will play a role (although this is common to any model, and it is not overly strong here) • so if you are especially interesting in recovering the entire distribution of preferences from aggregate data you may be able to do it with sufficiently rich data, but it will likely be tough without some additional information or structure.
• additional sources of help can be: – adding a pricing equation (this is what BLP does) – add data, like micro data on consumer characteristics, impose additional moments from other data sources to help identify effects of consumer characteristics (see Petrin on the introduction of the minivan), survey data on who purchases what (MicroBLP).
7.3 Adding in “Supply Side” Moments One can also lean on a theoretical model of price competition in order to restrict the behavior of the estimated demand system. This is what BLP also do (recall ideas from Bresnahan 87).
A firm maximizes its profits over the set of products it produces: ΠF (j) = M X j∈F(j) sjt(pjt −mcjt) (9) where M is market size. Taking the first-order condition (and dropping out M) you get: ∂Π ∂pjt = sjt + X k∈F(j) skt pjt (pkt −mckt) = 0 ∀j (10) 52 Define the ownership matrix as Ωwhere Ωjk = 1(product j and k are owned by the same firm).
Then we can stack all the FOCs accross all products j in market t to get: s + Ω· ∗∂s ∂p(p −mc) = 0 (11) where ·∗is the element-by-element matrix product. Rearranging we get marginal costs: mc = p + (Ω· ∗∂s ∂p)−1s (12) We can use the supply side as an extra moment condition when estimating demand. Suppose that marginal cost as determined by: ln(mcjt) = Xjtγ + ωjt (13) where the X’s are things like car weight, horsepower and other factors that can change marginal costs. In the soft drink industry I know that all coke brands in the same bottle size have the same marginal costs, and I can impose this by having a coke brand dummy in the X’s.
Since the RHS of (12) is a function of θ, we can recover an ω(θ) during our estimation routine (once we add γ to the parameters being estimated), and we can add E(ω(θ)Z) = 0 to the previous moment conditions.
Idea: As in Bresnahan 87: 7.4 Overview of BLP Results BLP estimates this system for the US car market using data on essentially all car makes from 1971-1990. The characteristics are: • cylinders • # doors • weight • engine displacement • horsepower • length • width • wheelbase • EPA miles per gallon • dummies for automatic, front wheel drive, power steering and air conditioning as standard features.
• price (which is the list price) all in 1983 dollars 53 Figure 12: Bresnahan: Intuition for Supply Side Moments and Conduct.
year/model is an observation = 2217 obs Instruments: • Products that face substitutes will tend to have low markups, and those with poor substitutes will tend to have high markups.
• Hence, BLP motivate use of characteristics of products produced by rival firms and those of other products within the same firm as instruments.
7.5 Nevo 1998 • Nevo is trying to understand why there are such high markups in the Ready-to-Eat Cereal Industry, and where does market power come from in this industry.
54 S. B E R R Y , J. LEVINSOHN, AND A. PAKES T A B L E IV ESTIMATED AND PRICING EQUATIONS: PARAMETERSOF THE DEMAND BLP SPECIFICATION, 2217 OBSERVATIONS Parameter Standard Parameter Standard Demand Side Parameters Variable Estimate Error Estimate Error Means (p's) Constant HP/ Weigh t Air MP$ Size Std. Deviations (rrp's) Constant HP/ Weight Air MP$ Size Term o n Price (a) ln(y - p ) Cost Side Parameters Constant In (HP/ Weight) Air I n (MPG) In (Size) Trend I n k ) cients that are approximately the sum of the effect of the characteristic on marginal cost and the coefficient obtained from the auxiliary regression of the percentage markup on the characteristics. Comparing the cost side parameters in Table IV with the hedonic regression in Table I11 we find that the only two coefficients that seem to differ a great deal between tables are the constant term and the coefficient on size. The fall in these two coefficients tells us that there is a positive average percentage markup, and that this markup tends to increase in size. The coefficients on MPG and size may be a result of our constant returns to scale assumption. Note that, due to data limitations, neither sales nor produc- tion enter the cost function. Almost all domestic production is sold in the U.S., hence domestic sales is an excellent proxy for production. The same is not true for foreign production, and we do not have data on model-level production for foreign automobiles. The negative coefficient on MPG may result because the best selling cars are also those that have high MPG.By imposing constant returns to scale, we may force these cars to have a smaller marginal cost than they actually do. Due, to the positive correlation between both MPG and size and sales conditional on other attributes the coefficients on MPG and size are Figure 13: BLP Model Estimates.
• Part of the story comes from the production side: there are only 13 plants in the U.S. that manufacture RTE cereal. Nevo will focus on the demand side.
• Data: Regional Sales of Brand name RTE cereal over several years. (Actually a fair bit of data) • Unlike BLP, Nevo does not need to use supply side moments and uses brand dummies.
The supply side is: A firm maximizes it’s profits over the set of products it produces: ΠF (j) = M X j∈F(j) sjt(pjt −mcjt) (14) where M is market size. Taking the first-order condition (and dropping out M) you get: ∂Π ∂pjt = sjt + X k∈F(j) skt pkt (pjt −mcjt) = 0 (15) 55 TABLE VI A SAMPLE OWN-SEM~-ELASTIC~T~ES: FROM 1990OF ESTIMATED AND CROSS-PRICE BASED ON TABLE IV (CRTS) ESTIMATES r m Mazda Nissan Ford Chevy Honda Ford Buick Nissan Acura Lincoln Cadillac Lems BMW m 323 Sentra Escort Cavalier Accord Taurus Century Maxima Legend TownCar Seville LS400 7351 2 323 -125.933 1.518 8.954 9.680 2.185 0.852 0.485 0.056 0.009 0.012 0.002 0.002 0.000 -Sentra 0.705 -115.319 8.024 8.435 2.473 0.909 0.516 0.093 0.015 0.019 0.003 0.003 0.000 Escort 0.713 1.375 -106.497 7.570 2.298 0.708 0.445 0.082 0.015 0.015 0.003 0.003 0.000 m Cavalier 0.754 1.414 7.406 -110.972 2.291 1.083 0.646 0.087 0.015 0.023 0.004 0.003 0.000 Accord 0.120 0.293 1.590 1.621 -51.637 1.532 0.463 0.310 0.095 0.169 0.034 0.030 0.005 Taurus 0.063 0.144 0.653 1.020 2.041 -43.634 0.335 0.245 0.091 0.291 0.045 0.024 0.006 3 Century 0.099 0.228 1.146 1.700 1.722 0.937 -66.635 0.773 0.152 0.278 0.039 0.029 0.005 3 Maxima 0.013 0.046 0.236 0.256 1.293 0.768 0.866 -35.378 0.271 0.579 0.116 0.115 0.020 Legend 0.004 0.014 0.083 0.084 0.736 0.532 0.318 0.506 -21.820 0.775 0.183 0.210 0.043 i 2 TownCar 0.002 0.006 0.029 0.046 0.475 0.614 0.210 0.389 0.280 -20.175 0.226 0.168 0.048 Seville 0.001 0.005 0.026 0.035 0.425 0.420 0.131 0.351 0.296 1.011 -16.313 0.263 0.068 ' LS400 0.001 0.003 0.018 0.019 0.302 0.185 0.079 0.280 0.274 0.606 0.212 -11.199 0.086 7353 0.000 0.002 0.009 0.012 0.203 0.176 0.050 0.190 0.223 0.685 0.215 0.336 -9.376 c n Note: Cell entries r , ] , where i indexes row and j wlumn, give the percentage change in market share of i with a $1000 change in the price of j. Figure 14: Elasticities from the BLP Model.
Figure 15: BLP: Comparison b/w RC and Logit.
Define the ownership matrix as Ωwhere Ωjk = 1(product j and k are owned by the same firm).
Then we can stack all the FOCs accross all products j in market t to get: s + Ω· ∗∂s ∂p(p −c) = 0 (16) where ·∗is the element-by-element matrix product. Rearranging we get marginal costs: c = p + (Ω∂s ∂p)−1s (17) We can use the supply side as an extra moment condition when estimating demand. Suppose that marginal cost as determined by: ln(mcjt) = Xjtγ + ωjt (18) 56 Figure 16: BLP: Markups.
where the X’s are things like car weight, horsepower and other factors that can change marginal costs. In the soft drink industry I know that all coke brands in the same bottle size have the same marginal costs, and I can impose this by having a coke brand dummy in the X’s.
The additional moment condition become E(ωZ) = 0 which we can just add to the previous moment conditions.
8 Some Applications of Characteristics Based Demand Systems 8.1 “Micro”-BLP (2004 JPE) Uses CAMIP data provided by GM (not available outside company). Survey is a sample of 1993 vehicle registrations, where a given # of purchasers of each vehicle are sampled. Almost all vehicles sold in US (not just GM); 37K observations. HH attributes (income, age of HoH, family size, place of residence (urban, rural, etc.),...) Tends to be higher-income than CPS.
Key in this paper is the use of “second-choice” data conditional on purchase. “Direct, data-based measure of substitution” that provides identifying power w/o exogenous changes in choice sets.
$$u_{ij} = \sum_k x_{jk}\beta_{ik} + \xi_j + \epsilon_{ij}, \qquad \text{where } \beta_{ik} = \beta_k + \sum_r z_{ir}\beta^o_{kr} + \beta^u_k \nu_{ik}$$
[Figure 17: Costs and back-of-the-envelope markups in the RTE cereal industry. The reproduced table (Nevo, Table III, from Cotterill (1996)) gives detailed accounting estimates of production costs and margins; the gross price–average variable cost margin for the RTE cereal industry is 64.4%, compared to 26.5% for the aggregate food sector.]
where $z_i$ and $\nu_i$ are observed and unobserved consumer attributes. This can be re-expressed as:
$$u_{ij} = \underbrace{\sum_k x_{jk}\beta_k + \xi_j}_{\delta_j} + \sum_k x_{jk}\Big(\sum_r z_{ir}\beta^o_{kr} + \beta^u_k\nu_{ik}\Big) + \epsilon_{ij}$$
With parametric assumptions on $z$ and $\nu$ and standard regularity conditions, one can obtain consistent estimates of $\theta \equiv \{\delta, \beta^o, \beta^u\}$ from micro-level data; however, things such as price elasticities (which depend on knowing how $\delta$ changes when price changes) will require additional assumptions on the $\xi$'s.
Can assume (as in BLP) that ξ is mean independent of nonprice characteristics of all the products.
Use as moments:
1. the covariance of the observed first-choice product characteristics ($x$) with observed consumer attributes ($z$) (e.g., family size and first-choice vehicle size);
2. the covariance between the first-choice product characteristics and the second-choice characteristics (e.g., the covariance of the size of the first-choice vehicle with the size of the second-choice vehicle);
3. the market shares of the $J$ products.
(A sketch of the first two sample moments follows below.)
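The sketch below builds sample analogues of moments 1 and 2 from hypothetical micro data (consumer attributes, first-choice and second-choice characteristics); the data-generating numbers are invented for illustration only.

```python
import numpy as np

# Hypothetical micro data in the spirit of the moments above: for each surveyed
# household we observe an attribute z_i (say family size), a characteristic of the
# first-choice vehicle, and the same characteristic of the stated second choice.
rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)                         # consumer attribute
x_first = 0.5 * z + rng.normal(size=n)         # first-choice characteristic
x_second = 0.7 * x_first + rng.normal(size=n)  # second-choice characteristic

m1 = np.cov(x_first, z)[0, 1]        # moment 1: cov(first-choice char, attribute)
m2 = np.cov(x_first, x_second)[0, 1] # moment 2: cov(first-choice, second-choice char)
print(m1, m2)
# In estimation these sample moments are matched to model counterparts simulated at
# candidate values of (beta_o, beta_u); the observed market shares then pin down delta.
```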
[Figure 18: Estimated BLP Model. The reproduced table reports GMM estimates from the full random-coefficients model: means of the taste distribution, standard deviations, and interactions with demographics (income, income squared, age, child) for price, advertising, and product characteristics, with brand and time dummies included.]
The first set of moments helps identify $\beta^o$; the second, $\beta^u$. Given $\{\beta^o, \beta^u\}$, there is a unique $\delta$ that matches observed market shares to predicted shares; hence the use of moment (3) to identify $\delta$.
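In practice the "unique δ matching observed shares" is usually computed with the BLP contraction mapping. The sketch below is a minimal single-market example with one random coefficient; the shares, characteristic, and σ are toy values, not estimates from any of the papers discussed.

```python
import numpy as np

# Share inversion: find delta such that simulated shares equal observed shares.
rng = np.random.default_rng(1)
J, ns = 4, 500
x = rng.normal(size=J)                        # one product characteristic
s_obs = np.array([0.20, 0.15, 0.10, 0.05])    # observed shares (outside good = 0.50)
sigma = 0.8                                   # random-coefficient std dev (taken as given)
nu = rng.normal(size=ns)                      # simulated consumer draws

def predicted_shares(delta):
    u = delta[None, :] + sigma * np.outer(nu, x)             # u_ij = delta_j + sigma*nu_i*x_j
    expu = np.exp(u)
    probs = expu / (1.0 + expu.sum(axis=1, keepdims=True))   # logit with outside good
    return probs.mean(axis=0)

delta = np.zeros(J)
for _ in range(1000):
    s_hat = predicted_shares(delta)
    delta_new = delta + np.log(s_obs) - np.log(s_hat)        # BLP contraction step
    if np.max(np.abs(delta_new - delta)) < 1e-12:
        delta = delta_new
        break
    delta = delta_new
print(delta, predicted_shares(delta))
```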
Once $\delta$ is obtained, one can try to get the $\beta_k$:
$$\delta_j = p_j\beta_p + \sum_{k\neq p} x_{jk}\beta_k + \xi_j$$
The issue is that, as opposed to BLP (20 annual cross sections), here we only have a single 1993 cross section.
1. Could estimate with IV. Estimate: 2.5 with a standard error of 25 (extremely imprecise).
2. Could mimic BLP and use supply-side moment restrictions. This relies on functional form and the pricing assumption. Estimate of -3.58 with a standard error of 0.22.
3. Use the GM idea that the aggregate (market) elasticity for new products is -1; this gives βp = 11.
Though the levels of the price elasticities vary, the patterns are the same: semi-elasticities decrease with price, and (holding price fixed) the elasticities of pickups, vans, SUVs, and (to a lesser extent) sports cars are lower (explaining larger markups).
Uses estimates to simulate introduction of Toyota and Mercedes SUV (sets new ξ to average of that firm’s products), and the shutting down of Oldsmobile division by GM.
8.2 Petrin (2002 JPE)

• We often want to quantify the benefits of innovation. Theory is often ambiguous (e.g., new products may be introduced to gain market power, but new products may also fill unserved needs).
• You need a demand curve to do this since we want to know what people would be willing to pay above the market price.
• Brief history: Chrysler/Dodge introduced the Caravan in 1984 and sold 170K in the first year; others (GM, Ford) tried to imitate but didn't do as well, since they relied on a truck (as opposed to car) platform. Chrysler maintained a large market share (44% after 14 years), but cannibalized station wagon sales.
• Petrin quantifies the social benefit of the minivan: it increases total welfare by about 2.9 billion dollars over 1984–88, most of which is consumer surplus rather than profits captured by firms.
Also uses “micro-moments”: i.e., moment conditions coming from micro data. For example, one might have data from the CEX on the average amount of money spent on soft drinks by people who earn less than $10,000 a year, which I call $\hat s_{t|I<10000}$. The model's prediction is:
$$s_{t|I<10000,\theta} = \sum_{j>0} \frac{1}{\sum_k \mathbb{1}(I_k<10000)} \sum_k s_{kjt}(\theta)\,\mathbb{1}(I_k<10000) \qquad (19)$$
So we can build an error into the model, $\zeta_t = \hat s_{t|I<10000} - s_{t|I<10000,\theta}$, and treat it like all our other moment conditions.
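A minimal sketch of this kind of micro-moment, using simulated individual choice probabilities and a made-up "observed" CEX-style share (all numbers are invented for illustration):

```python
import numpy as np

# s_kj holds simulated choice probabilities for consumers k (rows) over inside goods j
# (columns); incomes are simulated. The observed low-income share s_hat is made up.
rng = np.random.default_rng(2)
K, J = 200, 5
s_kj = rng.dirichlet(np.ones(J + 1), size=K)[:, 1:]    # drop the outside-good column
income = rng.lognormal(mean=9.5, sigma=0.8, size=K)

low = income < 10_000
s_model = s_kj[low].sum(axis=1).mean()   # model: average prob. of buying any inside good
s_hat = 0.35                             # "observed" share among low-income consumers
zeta = s_hat - s_model                   # micro-moment error, stacked with other moments
print(s_model, zeta)
```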
Petrin's estimating utility equation:
$$u_{ij} = \alpha_i \ln(y_i - p_j) + X_j\beta + \sum_k \gamma_{ik}x_{jk} + \xi_j + \epsilon_{ij} \qquad (20)$$
where $\gamma_{ik} = \gamma_k\nu_{ik}$ and $\nu_{ik}$ is an idiosyncratic taste for characteristic $k$. In particular, he parameterizes tastes for minivans and station wagons as:
$$\gamma_{i,mi} = \gamma_{mi}\ln(fs_i)\,\nu_{i,fv}, \qquad \gamma_{i,sw} = \gamma_{sw}\ln(fs_i)\,\nu_{i,fv}$$
where $fs_i$ is family size and there is a common preference for $fv$ (family vehicles).
Petrin uses two particular micro-moments: the probability that a family purchases any new vehicle given its income, and the average family size of a consumer conditional on purchasing a particular car type (minivan, station wagon, SUV, or full-size van).
The data rely on the CEX automobile supplement: 2660 new vehicle purchases over a 6-year period, with income and family size for the vehicles of interest. Uses supply-side pricing moments as in BLP.
[Figure 19: Estimated Cross-Price Elasticities.]
[Figure 20: Petrin: Parameter Estimates. The reproduced table (Petrin, Table 4) reports demand-side parameter estimates across four specifications: OLS logit, IV logit, random coefficients, and random coefficients with micro data.]
[Figure 21: Petrin: Implied Markups. The reproduced table (Petrin, Table 9) reports markups implied by the demand estimates and Bertrand–Nash pricing, 1981–93: the average percentage markup is about 40% in the random-coefficients model and about 17% once micro data are added, both within the range of previously reported estimates; the logit specifications imply negative marginal costs for many vehicles.]
[Figure 22: Petrin: Welfare Estimates. The reproduced table (Petrin, Table 13) reports the change in U.S. welfare from the minivan innovation, 1984–88 (1982–84 CPI-adjusted $ millions): total compensating variation of about $2,805M and a total welfare change of about $2,910M, with consumer benefits far outweighing the profits obtained by the innovator.]
8.3 Gentzkow (2007)

Attempts to understand whether print and online newspapers are substitutes or complements, and to determine the welfare impact of the introduction of the online edition of the Washington Post.
[Figure 23: Gentzkow: Linear Probability Model. The reproduced table (Gentzkow, Table 4) reports OLS and IV linear probability models of Post readership on post.com readership; the IV regressions instrument for post.com consumption with dummies for Internet access at work, a fast home connection, and work/education-related Internet use.]
Issue: identification between complementarity vs. correlated preferences.
Uses variation in the accessibility of the Internet at work: this affects the utility/price of the online post.com, but not of the paper version. The exclusion restriction means this is the only channel through which paper demand for the Post can respond; so if the goods are complements, increasing Internet accessibility should increase demand for the paper version of the Post.
Also panel variation: complementarity makes it unlikely that only one good will be consumed on a given day for a consumer, but with correlated random effects (conditional on a consumer’s propensity to consume goods), variation in consumption should be uncorrelated across goods.
Data: a survey (16K adults) in the DC DMA between March 2000 and February 2003, with individual/household characteristics, an enumeration of all local print newspapers read over the past 24 hours and past 5 days, and readership of local online newspapers. Papers: Washington Post (print + online) and Washington Times. The outside option is other news sources and non-consumption.
Utility from good $j$:
$$\bar u_{ijt} = -\alpha p_j + \delta_j + x_i\beta_j + \nu_{ij} + \tau_{it}$$
and utility from bundle $r$:
$$u_{irt} = \sum_{j\in r}\bar u_{ijt} + \Gamma_r + \epsilon_{irt}$$
Identification of $\Gamma$ relies on excluded variables: whether the consumer uses the Internet at work, uses the Internet for work- or education-related tasks, or has high-speed Internet at home (all included in post.com utility but excluded from Post print and Times print).
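To make the bundle structure concrete, here is a minimal sketch of bundle-level utilities and logit bundle choice probabilities with pairwise interaction terms Γ (negative values mean substitutes). The utilities and Γ's below are illustrative numbers only, and the third-order interaction term in the paper is omitted for simplicity.

```python
import numpy as np
from itertools import combinations

goods = ["P", "O", "T"]                        # Post print, post.com, Times print
u_single = {"P": 1.0, "O": 0.2, "T": -0.5}     # stand-alone mean utilities (toy values)
Gamma = {("P", "O"): -1.3, ("P", "T"): 0.1, ("O", "T"): -1.2}   # pairwise interactions

bundles = [()]                                  # outside option: consume nothing
for k in (1, 2, 3):
    bundles += list(combinations(goods, k))

def bundle_utility(r):
    u = sum(u_single[j] for j in r)
    u += sum(Gamma[pair] for pair in combinations(r, 2))   # add pairwise Gamma terms
    return u

v = np.array([bundle_utility(r) for r in bundles])
probs = np.exp(v) / np.exp(v).sum()             # logit over bundles (i.i.d. EV errors)
for r, pr in zip(bundles, probs):
    print("+".join(r) or "none", round(float(pr), 3))
```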
Uses supply-side pricing moments to assist with the identification of $\alpha$.
Estimated with both MSM and SML.
Computes industry outcomes and changes in producer and consumer surplus if Post.com is removed.
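The counterfactual consumer-surplus calculations in both Petrin and Gentzkow ultimately rest on the logit "log-sum" formula for expected surplus; a minimal sketch with invented utilities and an assumed price coefficient:

```python
import numpy as np

alpha = 2.0                                  # marginal utility of income (assumed)
V = np.array([0.0, 1.2, 0.4, 0.8])           # mean utilities: outside good + 3 products

def consumer_surplus(V):
    # expected maximum utility under i.i.d. extreme-value errors, in dollars
    return np.log(np.exp(V).sum()) / alpha

cs_loss = consumer_surplus(V) - consumer_surplus(np.delete(V, 3))  # drop product 3
print("per-consumer surplus loss from removing product 3:", cs_loss)
# Multiplying by market size and summing over markets/years gives aggregate
# compensating-variation numbers of the kind reported in the welfare tables above.
```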
[Figure 24: Gentzkow: Parameter Estimates. The reproduced table (Gentzkow, Table 5) reports SML estimates of the coefficients on observable characteristics in the utilities of the Post, post.com, and Times; Post utility rises with education, income, and age, post.com utility is higher for younger, full-time, computer-related workers, and Times utility is higher for registered Republicans.]
[Figure 25: Gentzkow: Parameter Estimates. The reproduced tables (Gentzkow, Tables 6 and 7) report the remaining parameters: both print–online interaction terms (Γ) are significantly negative (so the post.com is a substitute for both print papers), the Post–Times term is small and insignificant, and the coefficients on the excluded Internet-access variables and the covariance structure of the unobservables are also shown.]
[Figure 26: Gentzkow: Counterfactual Estimates. The reproduced table (Gentzkow, Table 8) reports three measures of the online edition's impact on print demand (cross-price derivative, change in print readership, change in print profits) under the full model, an observables-only model, and a no-heterogeneity model: the full model implies that introducing the post.com reduces Post readership by about 27,000 readers per day and Post profits by about $5.5 million per year, far from one-for-one crowding out.]
Local polynomial factorisation: improving the Montes algorithm
Adrien Poteaux¹ and Martin Weimann²
¹ Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
² LMNO, Université de Caen-Normandie
Abstract
We improve significantly the Nart–Montes algorithm for factoring polynomials over a complete discrete valuation ring A. Our first contribution is to extend the Hensel lemma to the context of generalised Newton polygons, from which we derive a new divide and conquer strategy. Also, if A has residual characteristic zero or high enough, we prove that approximate roots are convenient representatives of types, leading finally to an almost optimal complexity both for irreducibility and factorisation issues, plus the cost of factorisations over the residue field. For instance, to compute an OM-factorisation of F ∈ A[x], we improve the previously known complexity results by a factor δ, the discriminant valuation of F.
1 Introduction
Let A be a complete discrete valuation ring with residue field F and consider F ∈ A[x], monic and separable of degree d. The aim of this paper is to improve complexity bounds for the factorisation of F. Such a polynomial factorisation is a fundamental task of computer algebra, with various applications in number theory and algebraic geometry. As such, our complexity results speed up various computational problems, such as Okutsu frames, integral bases, or the genus of plane curves (see Section 6 for further details). Our work is based on the seminal Montes algorithm, for which the best known complexity was given in earlier work, whose authors conclude their paper by:

    Probably, an optimal local factorisation algorithm would consist in the application of the Montes algorithm as a fast method to get an Okutsu approximation to each irreducible factor, combined with an efficient “Hensel lift” routine able to improve these initial approximations by doubling the precision at each iteration. One may speculate that Newton polygons of higher order might also be used to design a similar acceleration procedure.

With S. Pauli, Guardia and Nart partially answered this question thanks to the single-factor lifting algorithm, which can be viewed as a Newton-like method to lift a single factor with quadratic convergence; this led to the overall complexity analysis cited above. In this paper, we answer this question more precisely, by showing that the classical Hensel algorithm can be adapted to the context of Newton polygons of higher order. We also provide a new divide and conquer strategy using this adapted Hensel algorithm, enabling us to lift all factors of F at the same time, with a complexity almost linear in the size of the output. These two elements allow us to gain a factor d in comparison to the previous complexity result. Moreover, we show that when char(F) ∤ d, we can use approximate roots as strongly optimal representatives of a type¹. This yields an irreducibility test with a complexity almost linear in δ, the valuation of the discriminant of F; see Theorem 2. This improvement propagates to factorisation under a slightly stronger assumption
Assumption 1. char (F) = 0 or char (F) > d
leading to a complexity almost linear in d·n for a required precision n ≥ δ; see Theorem 4.
Related work. Classical implemented algorithms for factoring polynomials over Qp (see e.g. [4, 6, 24, 25]) are based on the Zassenhaus Round Four algorithm, and suffer from loss of precision when computing characteristic polynomials. Later authors introduced a new technique combining the Montes algorithm [9, 10], which exploits Newton polygons of higher order, with a Newton-like single-factor lifting; further complexity improvements followed. The present work is in the same vein, with the notable difference that we introduce a multi-factor lifting, which is used in the course of the Montes algorithm whenever a non-trivial factorisation is discovered. For rings of Laurent series K((t)) of characteristic zero or high enough, Newton–Puiseux-like algorithms can be used. The best complexity in this context is softly linear, as in the present paper, but such algorithms are more difficult to implement and slower for irreducibility issues. This led us, in earlier work, to introduce approximate roots à la Abhyankar [1, 26] in order to derive a faster and easy-to-implement irreducibility test, quite close to the algorithm à la Montes, although not dealing with small characteristic. This was a first step towards the present work, where we now use approximate roots in the factorisation context, as allowed by a systematic use of our generalised Hensel lifting. Note that the divide and conquer strategy of the present paper is quite different from the one of [29, Section 4.4]: in particular, the initialisation of the Hensel algorithm does not use the not-yet-implemented generalisation of the half-gcd algorithm described there. Finally, let us insist that our algorithm computes as a byproduct an Okutsu frame of each irreducible factor of F, containing the most significant arithmetic information and closely related to various computational problems of number theory and algebraic geometry, such as the computation of integral bases (see Section 6 for further details).
¹ See Sections 2 and 3 for the definitions of these terms.
Organisation of the paper. We start with a summary of important definitions related to the Montes algorithm in Section 2. Then we focus on the irreducibility test when char(F) ∤ d in Section 3, leading to Theorem 2. In Section 4, we show how to adapt the Hensel algorithm to the context of Newton polygons of higher order. Section 5 uses this latter algorithm on a well-chosen type in order to derive a divide and conquer algorithm, leading to Theorem 4. Finally, we discuss some direct applications in Section 6.
Complexity model. Polynomials in A[x] considered in this paper are supposed given in a dense representation, with coefficients available up to an arbitrary precision (e.g. represented as tables, as we always use truncation bounds). We use the algebraic RAM model of Kaltofen [16, Section 2], counting only the number of arithmetic operations in the residue field F. We classically write $O(\cdot)$ and $\tilde O(\cdot)$ to respectively hide constant and logarithmic factors in our complexity results; see e.g. [7, Chapter 25, Section 7]. We additionally let $\mathcal O^\star(d) = O(d^{1+\varepsilon(d)})$ with $\varepsilon(d)\to 0$. We have $\tilde O(d)\subset\mathcal O^\star(d)$, and we freely speak of "almost linear in d" for both notations. Fast multiplication in F[y] is used, i.e. we multiply two polynomials of degree at most d within $\tilde O(d)$ operations in F [7, Section 8.3]. We assume that univariate factorisation over F is available. Intermediate finite extensions $\mathbb F_k$ of degree $f_{k-1}$ over F will occur (see Section 2), naturally represented as a quotient of $\mathbb F[y_0,\ldots,y_{k-1}]$ by a triangular prime ideal $(P_0(y_0),\ldots,P_{k-1}(y_0,\ldots,y_{k-1}))$.
Lemma 1. An operation in $\mathbb F_k$ takes $O(f_{k-1})$ operations over $\mathbb F$.

Proof. If $\mathrm{Card}(\mathbb F)\geq\binom{d}{2}$, use [14, Theorem 4]. If $\mathrm{Card}(\mathbb F) < \binom{d}{2}$, proceed as in [27, proof of Theorem 1] (roughly speaking, keep the first $i$ levels of the triangular set so that $\mathrm{Card}(\mathbb F)^{f_{i-1}}\geq\binom{d}{2}$ with $i$ minimal, and apply [14, Theorem 4] over $\mathbb F_i$; as $i\in O(\log\log d)$, an operation in $\mathbb F_i$ costs $\tilde O(f_{i-1})$ via [17, Proposition 2]).
Remark 1. Since some subroutines use the triangular representation of $\mathbb F_k$ (Remark 3), introducing a randomised Las Vegas subroutine to speed up the arithmetic in $\mathbb F_k$ via a primitive representation would not a priori be sufficient to express our complexity results with $\tilde O(\cdot)$ instead of $O(\cdot)$, in contrast to previous work.
2 Types and factorisation
Let L be a complete discrete valuation field, $v : L\to\mathbb Z\cup\{+\infty\}$ any normalised and surjective valuation on L and $\pi$ a uniformiser. We denote by $A\subset L$ the ring of integers of $(L, v)$ and by $\mathbb F = A/(\pi)$ the residue field of $v$. The two fields we have in mind in this paper are $L = \mathbb Q_p$, the field of p-adic numbers, and $L = K((t))$, the field of Laurent series over a field K.
2.1 Types
We start with types of order 0, denoting the residue field F0 := F.
Definition 1. A type of order 0 is t0 = [ P0], where P0 ∈ F0[y] is a monic irreducible polynomial.
For any $G\in L[x]$, a type of order 0 comes together with the Gauss valuation $v_0(\sum_i a_i x^i) := \min_i(v(a_i))$ and the residual polynomial operator $R_0(G) := G(y)/\pi^{v_0(G)}\bmod\pi$. Types $t_k = [P_0, (\phi_1,\lambda_1,P_1),\ldots,(\phi_k,\lambda_k,P_k)]$ of order $k\geq 1$ are defined inductively below. If $1\leq i\leq k-1$, we denote $t_i = [P_0, (\phi_1,\lambda_1,P_1),\ldots,(\phi_i,\lambda_i,P_i)]$. For any field K and $P, Q\in K[y]$, we write $P(y)\sim Q(y)$ if there exists $c\in K^\times$ such that $P(y) = c\,Q(y)$. Also, we denote by $\mathcal P$ the semigroup of polygons (i.e. the set of all open convex polygons of the plane, attached to finite formal sums of sides) and by $\mathcal P^-\subset\mathcal P$ the semigroup of polygons with negative slopes (principal polygons). See [10, Section 1.1] for details. Assume that types of order $k-1$ have been defined and that we can attach to a type of order $k-1$ a valuation $v_{k-1} : L[x]\to\mathbb Z$, a field extension $\mathbb F_{k-1}$ of $\mathbb F$ and a residual polynomial operator $R_{k-1} : L[x]\to\mathbb F_{k-1}[y]$.
Definition 2. Let $k\geq 1$. $t_k = [P_0, (\phi_1,\lambda_1,P_1),\ldots,(\phi_k,\lambda_k,P_k)]$ is a type of order $k$ if $t_{k-1}$ is a type of order $k-1$ and:
• $\phi_k\in A[x]$ is monic, irreducible and satisfies $R_{k-1}(\phi_k)\sim P_{k-1}$;
• $\lambda_k = -m_k/q_k\in\mathbb Q^-$, with $(q_k, m_k)\in\mathbb N^2$ coprime. We denote by $(\alpha_k,\beta_k)$ the pair such that $\alpha_k q_k - \beta_k m_k = 1$ with $0\leq\beta_k < q_k$;
• $P_k\neq y\in\mathbb F_k[y]$ is monic, irreducible over $\mathbb F_k := \mathbb F_{k-1}[y]/(P_{k-1})$. We let $\ell_k := \deg(P_k)$ and $z_k := y\bmod P_k(y)\in\mathbb F_{k+1}$.
We will denote $e_k := q_1\cdots q_k$ and $f_k := \ell_0\cdots\ell_k = [\mathbb F_{k+1} : \mathbb F]$².
2.2 Associated operators and representatives
We fix a type $t = [P_0, (\phi_1,\lambda_1,P_1),\ldots,(\phi_k,\lambda_k,P_k)]$ of order $k\geq 1$ and detail several operators associated to it. If $G\in L[x]$, we denote by $G = \sum_i a'_i\phi_{k-1}^i$ and $G = \sum_i a_i\phi_k^i$ its $\phi_{k-1}$- and $\phi_k$-adic expansions³.
Augmented valuation. $v_k : L[x]\to\mathbb Z$ is defined from $v_{k-1}$ as
$$v_k(G) := \min_i\big(q_{k-1}\,v_{k-1}(a'_i\phi_{k-1}^i) + m_{k-1}\,i\big). \qquad (1)$$
This is indeed an “augmented valuation” as introduced by Mac Lane [18, 19] (see e.g. [10, page 379] for a detailed explanation). Notice in particular that $v_k(G) = \min_i v_k(a'_i\phi_{k-1}^i)$.
Newton polygon of higher order. The polygon operator $N_k : L[x]\to\mathcal P$ associates to $G$ the lower convex hull of $\{(i, v_k(a_i\phi_k^i)) : a_i\neq 0\}$. We let $N_k^-(G)\in\mathcal P^-$ stand for the principal part of $N_k(G)$.
² A reader used to the work of Nart et al. should pay attention that the notations $e_k$ and $f_k$ used there are here denoted $q_k$ and $\ell_k$. We rather use $e_k$ and $f_k$ for the ramification index and residual degree discovered so far.
³ If $k = 1$, we let $\phi_0 = x$, $q_0 = 1$ and $m_0 = 0$, so that $v_1 = v_0$.
Residual polynomial operator. We need several intermediate operators. We let $S_k(G) := \{(i,j)\in N_k(G) : m_k i + q_k j \text{ is minimal}\}$, $I_k(G) := \{i\in\mathbb N : (i, v_k(a_i\phi_k^i))\in S_k(G)\}$ and $i_k(G) := \min(I_k(G))$ (letting additionally $i_0(G) = 0$). We let
$$\tau_{k,i} := \frac{i_{k-1}(a_i) + \beta_{k-1}\,v_k(a_i\phi_k^i)}{q_{k-1}}\in\mathbb Z$$
(see the cited reference, after Definition 2.19). Then $R_k : L[x]\to\mathbb F_k[y]$ is defined inductively as
$$R_k(G) = \sum_{i\in I_k(G)} z_{k-1}^{\tau_{k,i}}\, R_{k-1}(a_i\phi_k^i)(z_{k-1})\; y^{\frac{i - i_k(G)}{q_k}}.$$
Remark 2. Let us summarise the dependencies of these operators: $v_k$ depends only on $v_{k-1}$, $\phi_{k-1}$ and $\lambda_{k-1}$, thus only on $t_{k-1}$. $N_k$ depends on $v_k$ and $\phi_k$. Finally, $R_k$ depends on $R_{k-1}$, $\phi_k$ and $\lambda_k$.
Representatives of $t_k$. They are defined as follows.
Definition 3. Let $G\in A[x]$ be monic. We say that $G$ is of type $t$ if, for $0\leq i\leq k$, $N_i(G)$ is one-sided of slope $\lambda_i$, and, for $1\leq i\leq k$, $R_i(G)\sim P_i^{N_i}$ for some $N_i\in\mathbb N^\times$. We denote by $G_t$ the product of all monic irreducible factors of $G$ of type $t$. We say that $t$ divides $G$ if $\deg(G_t) > 1$. Finally, $G$ is said to be a representative of $t$ if additionally $\mathrm{ord}_t(G) := \mathrm{ord}_{P_k} R_k(G) = 1$.
2.3 Factorisation according to a type
The following theorem summarises the main results that led to the Montes algorithm:
Theorem 1. Let $k\geq 1$ and let $t$ be a type of order $k-1$ together with a representative $\phi_k$. Denote by $v_k$ the associated augmented valuation and assume $F\in A[x]$ monic.
1. (Theorem of the polygon, [10, Theorem 3.1]) Suppose that $N_k^-(F) = S_1 + \cdots + S_g$, where the polygons $S_1,\ldots,S_g$ are one-sided of distinct slopes $\lambda_{k,1},\ldots,\lambda_{k,g}$. Denote by $R_{k,i}$ the residual polynomial operator associated to $v_k$, $\phi_k$ and the slope $\lambda_{k,i}$. The polynomial $F_t$ admits a factorisation
$$F_t = F_{t,1}\cdots F_{t,g}\in A[x]$$
where $N_k^-(F_{t,i}) = S_i$ up to translation and $R_{k,i}(F_{t,i})\sim R_{k,i}(F)$.
2. (Theorem of the residual polynomial, [10, Theorem 3.7]) Let $R_{k,i}(F)\sim P_{k,i,1}^{a_1}\cdots P_{k,i,r}^{a_r}$, where the $P_{k,i,j}$ are pairwise coprime irreducible polynomials. Then $F_{t,i}$ admits a factorisation
$$F_{t,i} = F_{t,i,1}\cdots F_{t,i,r}\in A[x]$$
where $N_k(F_{t,i,j})$ is straight of slope $\lambda_{k,i}$ and $R_{k,i}(F_{t,i,j})\sim P_{k,i,j}^{a_j}$.
3. (Irreducibility criterion) If $a_j = 1$, then $F_{t,i,j}$ is irreducible.
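For intuition, here is a small sketch of the order-one special case of the theorem of the polygon over $A = \mathbb Z_p$ ($v = v_0$ the Gauss valuation, $\phi = x$): every side of the principal Newton polygon of slope $-m/q$ and horizontal length $\ell$ corresponds to a factor of $F$ of degree $\ell$ whose roots have valuation $m/q$. This only illustrates the classical case, not the higher-order machinery.

```python
from fractions import Fraction

def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def lower_hull(points):
    """Lower convex hull of a set of (i, val) points, left to right."""
    pts = sorted(points)
    hull = []
    for pt in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if the turn hull[-2] -> hull[-1] -> pt is not convex from below
            if (y2 - y1) * (pt[0] - x1) >= (pt[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(pt)
    return hull

p = 2
coeffs = [8, 12, 2, 3, 1]        # F = x^4 + 3x^3 + 2x^2 + 12x + 8 over Z_2 (toy input)
points = [(i, vp(c, p)) for i, c in enumerate(coeffs) if c != 0]
hull = lower_hull(points)
for (i0, w0), (i1, w1) in zip(hull, hull[1:]):
    slope = Fraction(w1 - w0, i1 - i0)
    if slope < 0:                 # principal part N^-(F): sides of negative slope only
        print(f"side of slope {slope}, length {i1 - i0} -> factor of degree {i1 - i0}")
```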
3 Testing irreducibility
3.1 Approximate roots
Proposition 1. Let $F\in A[x]$ be monic of degree $d$, with $\mathrm{char}(A)\nmid d$. Let $N\in\mathbb N$ dividing $d$. There exists a unique polynomial $\psi\in A[x]$, monic of degree $d/N$, such that $\deg(F - \psi^N) < d - d/N$. We call it the $N$-th approximate root of $F$, denoted by $\sqrt[N]{F}$. It can be computed in less than $\tilde O(d)$ operations in $A$.

Proof. See e.g. [26, Proposition 3.1] for the existence, and [27, Proposition 11] for the computation.
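As an illustration of Proposition 1, the sketch below computes the $N$-th approximate root over $\mathbb Q$ (so $N$ is invertible, matching the hypothesis on the characteristic) by taking the polynomial part of the expansion of $F^{1/N}$ at infinity. This is just one convenient way to realise the definition in characteristic 0, not necessarily the algorithm of the cited references.

```python
from fractions import Fraction

def approximate_root(F, N):
    """F: coefficients [a_0, ..., a_d] over Q (Fraction), monic, with N | d.
    Returns the coefficients of psi, monic of degree m = d/N, such that
    deg(F - psi^N) < d - m, i.e. the N-th approximate root of F."""
    d = len(F) - 1
    assert d % N == 0 and F[-1] == 1
    m = d // N
    # Write F = x^d (1 + u) with u = sum_{j>=1} F[d-j] x^{-j}; only the first m
    # coefficients of u matter for the polynomial part of F^(1/N) = x^m (1+u)^(1/N).
    u = [Fraction(0)] + [Fraction(F[d - j]) for j in range(1, m + 1)]
    series = [Fraction(1)] + [Fraction(0)] * m   # (1+u)^(1/N), truncated in x^{-1}
    term = [Fraction(1)] + [Fraction(0)] * m     # running truncated power u^j
    binom = Fraction(1)                          # binomial(1/N, j)
    for j in range(1, m + 1):
        binom *= (Fraction(1, N) - (j - 1)) / j
        new = [Fraction(0)] * (m + 1)            # term <- term * u, truncated
        for a in range(m + 1):
            if term[a]:
                for b in range(m + 1 - a):
                    new[a + b] += term[a] * u[b]
        term = new
        for k in range(m + 1):
            series[k] += binom * term[k]
    return [series[m - i] for i in range(m + 1)]  # psi = x^m * series, polynomial part

# Toy check: F = (x^2 + 3x + 1)^2 + x, so the 2nd approximate root should be x^2 + 3x + 1.
F = [Fraction(c) for c in [1, 7, 11, 6, 1]]
print(approximate_root(F, 2))   # -> [1, 3, 1], i.e. x^2 + 3x + 1
```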
Proposition 2. Let $F\in A[x]$ be a monic separable polynomial of type $t$ and denote $N = \mathrm{ord}_t(F)$. Suppose $\mathrm{char}(\mathbb F)\nmid N$ and let $\psi = \sqrt[N]{F}$.
1. $\psi$ is a representative of $t$.
2. If $F$ is of type $t\cup(\psi, -m/q, P)$, then $q\deg(P) > 1$.

Proof. Point 1 can be shown with arguments similar to [27, Lemma 7]. Point 2 is a direct consequence of the fact that the coefficient of $\psi^{N-1}$ in the $\psi$-adic expansion of $F$ is zero.

Without dealing with the precision of computations in $A$, this leads to the following irreducibility test algorithm:
Algorithm: FastIrreducible(F)
Input: $F\in A[x]$ monic separable such that $\mathrm{char}(\mathbb F)\nmid\deg(F)$.
Output: A Boolean (is $F$ irreducible?), a type $t$ and a representative $\phi$ of $t$.
1  if $R_0(F)$ is not some $P_0^{N_0}$ then return (False, [ ]);
2  $t\leftarrow[P_0]$, $k\leftarrow 1$, $N\leftarrow N_0$;
3  while $N > 1$ do
4      $\phi_k\leftarrow\sqrt[N]{F}$;
5      if $N_k^-(F)$ is not one-sided then return (False, $t$, $\phi_k$);
6      if $R_k(F)$ is not some $P_k^{N_k}$ then return (False, $t$, $\phi_k$);
7      $t\leftarrow t\cup(\phi_k,\lambda_k,P_k)$;   // $\lambda_k$ the slope of $N_k(F)$
8      $N\leftarrow N_k$, $k\leftarrow k+1$;
9  return (True, $t$, $F$)
This algorithm is similar to the one of Montes et al., with the exception of the way we construct the representatives: approximate roots enable a quick computation of the representative $\phi_k$, together with the additional property $q_k\ell_k > 1$, which ensures $k\in O(\log(d))$.

3.2 Precision and complexity
It remains to deal with the precision necessary to conduct operations in $A$ and to get complexity bounds for the computation of $N_k^-(F)$ and $R_k(F)$. We proceed as in [27, Section 5.4]: starting with a small precision $\sigma$, we check at each iteration whether $\sigma$ is sufficient to certify that the computed data of $F\bmod\pi^\sigma$ is truly the data of $F$. If the precision is not sufficient, we double it and restart the whole computation. This process multiplies the overall complexity by at most 2. We need a certificate that the current precision $\sigma$ is high enough, and an upper bound for $\sigma$. Assume that we computed a type $t_{k-1}$ dividing $F$ using a precision $\sigma$. Compute $N_k^-(F)$ with precision $\sigma$ and denote by $\lambda_{\min}$ its right-hand slope, with the convention $\lambda_{\min}(F) = +\infty$ if it is reduced to a vertex.
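The doubling strategy itself is generic; a minimal sketch (the routine names `compute` and `certified` are hypothetical placeholders for "compute the data of F mod π^(σ+1)" and for the certificate of Lemma 2 below, not functions from the paper):

```python
def with_adaptive_precision(compute, certified, sigma0=1, max_doublings=64):
    """Run `compute(sigma)` at increasing precision until `certified` accepts it."""
    sigma = sigma0
    for _ in range(max_doublings):
        result = compute(sigma)          # work with F mod pi^(sigma+1)
        if certified(result, sigma):     # e.g. the bound of Lemma 2 holds
            return result, sigma
        sigma *= 2                       # double the precision and restart
    raise RuntimeError("certified precision not reached")

# Since sigma doubles at each restart, the total work is at most about twice the
# work of the final (successful) run, for a cost roughly linear in sigma.
```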
Lemma 2. Let $F\in A[x]$ be monic and divisible by $t_{k-1}$. If $\sigma > \frac{v_k(F) + N|\lambda_{\min}|}{v_k(\pi)}$, then truncating computations modulo $\pi^{\sigma+1}$ will compute the correct right-hand edge of $N_k^-(F)$. If moreover $F$ is of type $t_{k-1}$, then $\frac{v_k(F) + N|\lambda_{\min}|}{v_k(\pi)} < \frac{2\delta}{d}$.

Proof. This is [3, Lemmas 2.9 and 2.8].
Lemma 3. One can compute $v_k(F)$ in $\tilde O(d)$ operations in $A$.

Proof. Compute the $\phi_{k-1}$-adic expansion of $F$ in $\tilde O(d)$ operations in $A$ [7, Theorem 9.15]. If $k > 1$, compute recursively each $v_{k-1}(a_i\phi_{k-1}^i)$. As there is a closed formula for $v_{k-1}(\phi_{k-1})$ [10, Proposition 2.15], the bound $\deg(a_i) < \deg(\phi_{k-1})$ concludes.
Proposition 3. One can compute $N_k^-(F)$ with precision $\sigma$ in less than $\tilde O(d\,\sigma)$ operations in $\mathbb F$.

Proof. First compute the $\phi_k$-adic expansion of $F$, then the different values $v_k(a_i\phi_k^i)$. The complexity then comes from [7, Theorem 9.15] and Lemma 3.
Proposition 4. Up to the cost of operations already done while computing $N_k^-(F)$, one can compute $R_k(F)$ in $O(d\,f_{k-1})$ operations in $\mathbb F$.

Proof. This is [3, Lemma 5.6].
Lemma 4. Let $F\in A[x]$ be monic of type $t_{k-1}$ such that $\mathrm{ord}_{t_0} F > 1$. Then $f_{k-1}\,d\leq 2\,\ell_0\,\delta$.

Proof. We know that $F\equiv P_0^{N_0}\bmod\pi$ for some irreducible polynomial $P_0\in\mathbb F[x]$ of degree $\ell_0$. Let $\theta$ be a root of $F$, say with minimal polynomial $G$ (a prime factor of $F$). Thus $\theta\bmod\pi$ is a root of $P_0$ and there are $N_0 - 1$ other roots $\theta'$ of $F$ such that $\theta\equiv\theta'\bmod\pi$. It follows that $v_0(\theta-\theta') > 0$, hence $v_0(\theta-\theta')\geq\frac{1}{e_G}$, where $e_G$ is the ramification index of $G$. Summing over all $\theta$ and all $\theta'$, we get $\delta\geq\sum_G\deg(G)(N_0-1)/e_G$, the sum being over all prime factors of $F$. We have $e_G f_G = \deg(G)$ ($f_G$ the residual degree) and $f_{k-1}\,|\,f_G$ for all $G$. It follows that $\delta\geq f_{k-1}(N_0-1)$. The equality $N_0\ell_0 = d$ concludes.
Theorem 2. If $\mathrm{char}(\mathbb F)\nmid d$, then there exists an algorithm that tests whether $F$ is irreducible in less than $O(\delta\,\ell_0)$ operations in $\mathbb F$ and at most $\log_2(d)$ univariate irreducibility tests.

Proof. The algorithm is FastIrreducible with the above modification to deal with the precision of the computations. As we use a precision $\sigma\leq 2\delta/d$, the complexity comes from all the intermediate results of this section.
The case $A = \mathbb Z_p$. We say that $F\in\mathbb Z_p[x]$ is Weierstrass if $F\equiv x^d\bmod p$. Also, we say that $p$ is small if it is polynomial in $d$ and $\delta$.
Corollary 1. If $p$ is small and does not divide $d$, we can test the irreducibility of a separable Weierstrass polynomial $F\in\mathbb Z_p[x]$ with $O(\delta)$ operations in $\mathbb F_p$.

Proof. $F$ being Weierstrass, we have $\ell_0 = 1$ and $d\leq 2\delta$ by Lemma 4. We can check whether $R_k(F)\in\mathbb F_k[x]$ is a prime power (and then compute $P_k$) within $O(d\log(p))$ operations in $\mathbb F_p$ using [15, Corollary 2]⁴, together with $\deg(R_k(F))\,f_{k-1}\leq d$.

Under the same hypothesis, the Montes irreducibility test would require $O(\delta^2)$ operations in $\mathbb F_p$; see [3, Theorem 5.10].
3.3 The case char (F) | d
When $\mathrm{char}(\mathbb F)\,|\,d$, approximate roots cannot be used as representatives of types. Following earlier work, we compute these representatives in another way (Proposition 6 below, in $O(d\,w(F)/w(\pi))\subset O(\delta)$ operations in $\mathbb F$). Unfortunately, we might now have $q_k\ell_k = 1$ (refinement steps) and the number of iterations is not in $O(\log(d))$ anymore. From [3, Lemma 5.11], we can bound the number of recursive calls by $O(\delta/\ell_0)$⁵. This leads to a complexity in $O(\delta^2)$ operations in $\mathbb F$, plus the univariate irreducibility tests. We can still run only $k\leq\log_2(d)$ of them by first checking that $R_k(F)$ has a single root (i.e. $q_k\ell_k = 1$), using a univariate shift, in $\tilde O(d)$ operations in $\mathbb F$, and using the univariate irreducibility test only when $q_k\ell_k > 1$. We proved the following:
Proposition 5. If $\mathrm{char}(\mathbb F)$ divides $d$, one can check whether $F$ is irreducible in less than $O(\delta^2)$ operations in $\mathbb F$ and at most $\log_2(d)$ univariate irreducibility tests.
This result is very similar to [3, Theorem 5.10] (we just use some minor complexity improvements in some intermediate results, described in Section 3.2). We still call
FastIrreducible the underlying algorithm.
4 Generalised slope factorisation
Algorithm FastIrreducible of the previous section will either prove that $F$ is irreducible, or provide a type $t_{k-1}$ together with a representative $\phi_k$ such that either $N_k(F)$ has two or more distinct slopes, or $R_k(F)$ is not a power of an irreducible polynomial. To summarise these two possibilities, it is convenient to consider a slight modification of the residual polynomial operator $R_k$ attached to the right-hand slope $\lambda_k = -m_k/q_k$ of $N_k^-(F)$:
⁴ This result extends to towers of fields following the proof of [15, Corollary 3].
⁵ Equation (5.4) there actually shows that it is bounded by $O(\delta/f_{k-1})$.
Definition 4. The modified residual polynomial of $G\in L[x]$ (attached to $\lambda_k$) is defined by $\tilde R_k(G)(y) := y^{i_k(G)}\,R_k(G)(y^{q_k})$.
If $F$ is not irreducible, we get $\tilde R_k(F) = h_0 h_1\cdots h_r$ with $h_0\sim y^{i_k(F)}$ and $h_1,\ldots,h_r\in\mathbb F_k[y^{q_k}]$ coprime, this factorisation containing at least two different non-trivial factors. From Theorem 1, there exist monic $F_0, F_1,\ldots,F_r\in A[x]$ such that
$$F = F_0 F_1\cdots F_r \quad\text{with}\quad \tilde R_k(F_i)\sim h_i,\ 0\leq i\leq r.$$
This section describes a Hensel-like algorithm to compute such a factorisation up to an arbitrary precision with complexity almost linear in the size of the output.
4.1 A (slightly) more general problem
We express our problem in a slightly more general context that will be useful in Section 5. Let $F\in A[x]$ be a monic polynomial, $t = t_{k-1}$ a type of order $k-1$ dividing $F$, and $\phi = \phi_k$ a representative of $t$. We assume that either $F_t$ is a proper factor of $F$, or $F_t$ admits a non-trivial factorisation at order $k$ induced by Theorem 1. Let $v = v_k$ be the augmented valuation built from $t$ (i.e. from $v_{k-1}$, $\phi_{k-1}$ and $\lambda_{k-1}$) and $N(F) = N_k(F)$ the generalised Newton polygon defined by $v$ and $\phi$. Denote by $\lambda = -\frac{m}{q}\in\mathbb Q^-$ the right-hand slope of $N^-(F)$ and by $R(F)$ (resp. $\tilde R(F)$) the (modified) residual polynomial defined by $t$, $\phi$ and $\lambda$.
Lemma 5. Let $G\in A[x]$ be monic and $g = \tilde R(G)$. If $G$ is of type $t$, then
$$N(G) = N^-(G), \qquad \deg(G) = \deg(\phi)\deg(g), \qquad \mathrm{lc}(g) = R(\phi)^{\deg(g)}.$$
If $t$ does not divide $G$, then $g\in\mathbb F_k^\times$.

Proof. See e.g. the cited references.

Thanks to this lemma and our assumption on $F$, we get $\tilde R(F) = h_0 h_1\cdots h_r h_\infty$ with $h_0 = R(\phi)^s y^s$ (where $s = i_k(F)$), $h_\infty\in\mathbb F_k^\times$, and $h_1,\ldots,h_r\in\mathbb F_k[y^q]$ powers of pairwise coprime irreducible polynomials satisfying $h_i(0)\neq 0$ and $\mathrm{lc}(h_i) = R(\phi)^{\deg(h_i)}$. Moreover, there are uniquely determined monic polynomials $F_0,\ldots,F_r,F_\infty\in A[x]$ such that
$$F = F_0 F_1\cdots F_r F_\infty \quad\text{with}\quad \tilde R(F_i) = h_i,\ i = 0,\ldots,r,\infty.$$
In particular, we have $F_t = F_0 F_1\cdots F_r$ and $F = F_t F_\infty$. Such a factorisation can be seen as a generalisation of the classical “slope factorisation”. It is natural to express the precision using the augmented valuation $w = v_{k+1}$ defined by $v$, $\phi$ and $\lambda$ following (1). Thanks to a dichotomic argument, we are reduced to computing $F = \tilde G\tilde H$ (up to some precision w.r.t. $w$), with $\tilde G, \tilde H$ products of some of the $F_i$'s above. We proceed as follows:
1. Initialisation. Compute $G, H\in A[x]$ and $U, V\in L[x]$ such that $w(F - GH) > w(F)$ and $w(UG + VH - 1) > 0$.
2. Lifting. Run the classical Hensel lemma with adapted truncations to compute $\tilde G$ and $\tilde H$ such that $w(F - \tilde G\tilde H)\geq w(F) + n$ for a given precision $n$.
4.2 Initialisation
A key point is to compute polynomials with prescribed (modified) residual polynomial. We start with a definition.
Definition 5. A polynomial $H\in A[x]$ is said to be monic in $\phi$ if it has $\phi$-adic expansion $H = \sum_{i=0}^N a_i\phi^i$ with $a_N = 1$. We say that $H$ is strongly monic in $\phi$ (with respect to $w$) if moreover $w(H) = N\,w(\phi)$.
Proposition 6. Let $h = y^s\bar h(y^q)$ with $\bar h\in\mathbb F_k[y]$ and $W\in\mathbb Z$. We can compute $H\in L[x]$ of smallest degree such that
$$\tilde R(H) = h, \qquad w(H) = W, \qquad \deg(H) < (\deg(h) + 1)\deg(\phi).$$
It takes $\tilde O\!\left(\deg(H)\,(\deg(h)+1)\,\frac{w(\phi)}{w(\pi)}\right)$ operations in $\mathbb F$. Furthermore:
1. If $W\geq\deg(h)\,w(\phi)$, then $H\in A[x]$.
2. If $W = \deg(h)\,w(\phi)$ and $\mathrm{lc}(h) = R(\phi)^{\deg(h)}$, then $H$ is strongly monic in $\phi$.

Proof. First use [3, Lemma 5.7] to compute $G\in A[x]$ such that $\tilde R(G) = h$ and $w(G) = W + w(\pi)\,n$ with $n = \left\lceil\frac{\deg(h)\,w(\phi) - W}{w(\pi)}\right\rceil$. We can divide their complexity result by a factor $\deg(h)$ by replacing the Horner evaluation therein by a divide and conquer strategy. Finally, output $H = \pi^{-n}G$.
Remark 3. The representation of $\mathbb F_k$ as a tower of fields as described before Lemma 1 is used in the proof of [3, Lemma 5.7]. In particular, no operation in $\mathbb F_k$ is performed, so that the complexity can be expressed with $\tilde O(\cdot)$ and not $O(\cdot)$.
Lemma 6. Let $G, H\in L[x]$ be such that $w(G) = w(H)$.
1. $\tilde R_k(G) = \tilde R_k(H)$ if and only if $w(G - H) > w(G)$.
2. If $w(G + H) = w(G)$, then $\tilde R_k(G + H) = \tilde R_k(G) + \tilde R_k(H)$.

Proof. (1) Since $R(G)$ and $R(H)$ have non-zero constant term, Definition 4 implies that $\tilde R(G) = \tilde R(H)$ if and only if $R(G) = R(H)$ and $i_k(G) = i_k(H)$. As $w(G) = w(H)$, this is equivalent to $R(G) = R(H)$ and $S(G) = S(H)$; [10, Proposition 2.8] concludes. (2) The equality $w(G + H) = w(G)$ implies that $S_k(G)$, $S_k(H)$, $S_k(G + H)$ lie on a same segment $T$ of slope $\lambda$. We conclude with [10, Lemma 2.23 and eq. (19)], together with Definition 4.

Let us now consider our initialisation problem. By the discussion above, we can assume that $\tilde R(F) = g\,h$ with $g, h$ coprime, $g\in\mathbb F_k[y^q]$ and $h(y) = y^s\bar h(y^q)$, with $\bar h\in\mathbb F_k[y]$, $\bar h(0)\neq 0$. We can additionally assume that $\mathrm{lc}_y(h) = R(\phi)^{\deg(h)}$. Notice that we might have $g\in\mathbb F_k^\times$.
Proposition 7. Let $g, h$ be as above. One can compute $G, H\in A[x]$ and $U, V\in L[x]$ such that $H$ is strongly monic in $\phi$, $\tilde R(G) = g$, $\tilde R(H) = h$, $\deg(G) + \deg(H)\leq\deg(F)$, $w(F - GH) > w(F)$, $\deg(U) < \deg(H)$, $\deg(V) < \deg(G)$ and $w(UG + VH - 1) > 0$, in less than $\tilde O(d\,w(F)/w(\pi))$ operations in $\mathbb F$.

Proof. Use Proposition 6 to compute $G, H\in A[x]$ with $H$ strongly monic in $\phi$ and such that $\tilde R(H) = h$, $\tilde R(G) = g$, $w(F) = w(GH)$, with $\tilde O(d\,w(F)/w(\pi))$ operations in $\mathbb F$ (this bound being a consequence of $w(F) = \deg(\tilde R(F))\,w(\phi)$). Notice that $\deg(G) + \deg(H)\leq\deg(F)$, as we construct $G, H$ of minimal degrees. As $w(GH) = w(F)$ and $\tilde R(GH) = \tilde R(F)$, point 1 of Lemma 6 gives $w(F - GH) > w(F)$. For $U, V$, we first compute $u, v\in\mathbb F_k[y]$ such that $u\,g + v\,h = 1$, $\deg(u) < \deg(h)$, $\deg(v) < \deg(g)$, with $O(\deg(gh)\,f_{k-1})\subset O(d)$ operations in $\mathbb F$ [7, Corollary 11.9]. We necessarily have $v\in\mathbb F_k[y^q]$ and $u = y^t\bar u$ with $\bar u\in\mathbb F_k[y^q]$, so we may apply Proposition 6 again to compute $U, V\in L[x]$ such that $\tilde R(U) = u$, $\tilde R(V) = v$, $\deg(U) < \deg(H)$, $\deg(V) < \deg(G)$, $w(U) = -w(G)$ and $w(V) = -w(H)$, within the same complexity bound. We get $\tilde R(UG) + \tilde R(VH) = ug + vh = 1\neq 0$, and Lemma 6 (point 1) implies that $w(UG + VH) = w(UG) = w(VH)$, so that $\tilde R(UG + VH) = \tilde R(UG) + \tilde R(VH)$ (point 2). We get $\tilde R(UG + VH) = 1 = \tilde R(1)$ and $w(UG + VH) = 0 = w(1)$. Point 1 again gives $w(UG + VH - 1) > w(1) = 0$.
4.3 Lifting: a valuated Hensel Lemma
Let QuoRem denote the usual Euclidean division algorithm.
Lemma 7. Let $A, B\in L[x]$ with $B$ strongly monic in $\phi$. Then $Q, R = \mathrm{QuoRem}(A, B)$ satisfy $w(R)\geq w(A)$ and $w(Q)\geq w(A) - w(B)$.

Proof. We focus on the computation of $R$. First note that it can be computed as follows⁶: write $A = \sum_{i=0}^N a_i\phi^i$ for the $\phi$-adic expansion of $A$ and $B = \phi^b + \cdots$. If $N < b$, we get $R = A$; otherwise, compute $\tilde A = A - a_N\phi^{N-b}B$ and apply this strategy recursively to $\tilde A$. As $\tilde A = \sum_{i=0}^{N-1}\tilde a_i\phi^i$, this procedure converges towards the unique remainder $R$. We now prove the result by induction on the value $N\geq b$. By linearity, we can assume $A = a_N\phi^N$. We then have $w(\tilde A)\geq w(A)$, as $w(a_N\phi^{N-b}B) = w(A)$ from the assumption $w(B) = b\,w(\phi)$. By induction, this proves the lemma for $R$. The result for $Q$ is then a straightforward consequence, as $w(QB) = w(A - R)$.

This lemma will enable us to prove that the classical Hensel lemma [7, Algorithm 15.10], when starting with correct initial polynomials, “doubles the precision” according to the extended valuation $(w,\phi)$. The only difference with the classical algorithm is the way we truncate polynomials: for any polynomial $F\in A[x]$ and $n\in\mathbb Z$, we denote by $\lceil F\rceil^n = \lceil F\rceil^n_{k+1}$
6in practice, we use the classical algorithm of A[x], this is only for this proof purpose
the "truncation of $F$ according to the valuation $w = v_{k+1}$", defined recursively as follows. We let $\lceil F \rceil^n_0$ be the usual truncation of $F$ up to precision $\pi^n$ and, if $F = \sum_i a_i \phi_k^i$ and $k \ge 0$, then
$$\lceil F \rceil^n_{k+1} \;=\; \sum_i \lceil a_i \rceil_k^{\,(n - v_{k+1}(\phi_k^i))/q_k}\; \phi_k^i.$$
This is indeed a natural definition, as $\deg(a_i) < \deg(\phi_k)$ implies $v_{k+1}(a_i) = q_k v_k(a_i)$. In other words, we remove all terms of $F$ that have $w$-valuation greater than $n$ in its $(\phi_0, \ldots, \phi_k)$-multiadic expansion [27, Section 3].
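The truncations $\lceil \cdot \rceil^n_{k+1}$ only require $\phi_k$-adic expansions, i.e. repeated Euclidean division by the (monic) key polynomial. As a small illustration, assuming SymPy and a monic $\phi$, a $\phi$-adic expansion can be sketched as follows; this is not the paper's code, and the names are ours.

```python
# Minimal sketch of a phi-adic expansion: write A = sum_i a_i * phi^i with
# deg(a_i) < deg(phi), by repeated Euclidean division (phi assumed monic).
from sympy import symbols, Poly, div

x = symbols("x")

def phi_adic_expansion(A, phi):
    """Return [a_0, a_1, ...] with A = sum_i a_i * phi**i and deg(a_i) < deg(phi)."""
    A, phi = Poly(A, x), Poly(phi, x)
    digits = []
    while not A.is_zero:
        A, r = div(A, phi)      # A = q*phi + r with deg(r) < deg(phi); keep r, recurse on q
        digits.append(r)
    return digits

# Example: expand x^5 + 3x + 1 in base phi = x^2 + 1
digits = phi_adic_expansion(x**5 + 3*x + 1, x**2 + 1)
print([d.as_expr() for d in digits])   # expected [4*x + 1, -2*x, x]
```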
Lemma 8. Let $G \in \mathbb{L}[x]$ with $\deg(G) \le d$ and $n \in \mathbb{N}$. We can compute $\lceil G \rceil^n$ in $\tilde O(d\,(n/w(\pi) - v_0(G)))$ operations in $\mathbb{F}$.

Proof. Compute $\lceil \pi^{-v_0(G)} G \rceil^{\,n - v_0(G)\,w(\pi)}$ with precision $\lceil n/w(\pi) \rceil - v_0(G)$, following the recursive definition above. The main cost is to compute the $\phi_k$-adic expansions, as in the proof of Lemma 3.

Algorithm HenselStep below takes as input $F, G, H \in \mathbb{A}[x]$ with $H$ strongly monic in $\phi$, $U, V \in \mathbb{L}[x]$ and $n \in \mathbb{N}^{\times}$ such that

• $w(F - GH) \ge w(F) + n$, with $\deg(F) \ge \deg(G) + \deg(H)$;
• $w(UG + VH - 1) \ge n$, with $\deg(U) < \deg(H)$, $\deg(V) < \deg(G)$, $w(U) = -w(G)$ and $w(V) = -w(H)$.

It computes $\tilde G, \tilde H \in \mathbb{A}[x]$ with $\tilde H$ strongly monic in $\phi$, and $\tilde U, \tilde V \in \mathbb{L}[x]$, such that

• $w(F - \tilde G \tilde H) \ge w(F) + 2n$, with $\deg(\tilde H) = \deg(H)$, $\deg(F) \ge \deg(\tilde G) + \deg(\tilde H)$, $w(\tilde H - H) \ge w(H) + n$ and $w(\tilde G - G) \ge w(G) + n$;
• $w(\tilde U \tilde G + \tilde V \tilde H - 1) \ge 2n$, with $\deg(\tilde U) < \deg(\tilde H)$, $\deg(\tilde V) < \deg(\tilde G)$, $w(\tilde U) = -w(\tilde G)$ and $w(\tilde V) = -w(\tilde H)$.
Algorithm: HenselStep($F, G, H, U, V, n$)
1: $E \leftarrow \lceil F - G\,H \rceil^{w(F)+2n}$
2: $Q, R \leftarrow \lceil \mathrm{QuoRem}(U E, H) \rceil^{w(F)+2n}$
3: $\tilde G \leftarrow \lceil G + E\,V + Q\,G \rceil^{w(G)+2n}$
4: $\tilde H \leftarrow \lceil H + R \rceil^{w(H)+2n}$
5: $B \leftarrow \lceil U \tilde G + V \tilde H - 1 \rceil^{2n}$
6: $C, D \leftarrow \lceil \mathrm{QuoRem}(U B, \tilde H) \rceil^{2n}$
7: $\tilde U \leftarrow \lceil U - D \rceil^{2n - w(G)}$
8: $\tilde V \leftarrow \lceil V - B\,V - C\,\tilde G \rceil^{2n - w(H)}$
9: return $\tilde H, \tilde G, \tilde U, \tilde V$
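For comparison, here is a sketch of the classical Hensel step [7, Algorithm 15.10] that HenselStep mirrors, written for polynomials over $\mathbb{Z}/m$ with a monic cofactor $h$; the paper's algorithm replaces the plain coefficient-wise truncations used here by the valuated truncations $\lceil \cdot \rceil^n$ defined above. This sketch is illustrative only: the polynomial representation (coefficient lists, low degree first) and helper names are ours.

```python
# One classical Hensel step: lift f = g*h (mod m), s*g + t*h = 1 (mod m),
# with h monic, to the same relations mod m**2.

def padd(a, b, m):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % m for i in range(n)]

def psub(a, b, m):
    return padd(a, [(-c) % m for c in b], m)

def pmul(a, b, m):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % m
    return out

def pdivmod(a, b, m):
    # b is assumed monic, so division works over Z/m
    a = a[:]
    q = [0] * max(1, len(a) - len(b) + 1)
    for i in range(len(a) - len(b), -1, -1):
        q[i] = a[i + len(b) - 1] % m
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - q[i] * bj) % m
    return q, a[:len(b) - 1] or [0]

def hensel_step(f, g, h, s, t, m):
    """Given f = g*h (mod m), s*g + t*h = 1 (mod m), h monic, return
    (g2, h2, s2, t2) satisfying the same relations mod m**2."""
    M = m * m
    e = psub(f, pmul(g, h, M), M)
    q, r = pdivmod(pmul(s, e, M), h, M)
    g2 = padd(g, padd(pmul(t, e, M), pmul(q, g, M), M), M)
    h2 = padd(h, r, M)
    b = psub(padd(pmul(s, g2, M), pmul(t, h2, M), M), [1], M)
    c, d = pdivmod(pmul(s, b, M), h2, M)
    s2 = psub(s, d, M)
    t2 = psub(t, padd(pmul(t, b, M), pmul(c, g2, M), M), M)
    return g2, h2, s2, t2

# toy example: f = x^2 - 1 factors as (x+4)(x+1) mod 5; lift the factors to mod 25
f = [24, 0, 1]            # x^2 - 1 mod 25
g, h = [4, 1], [1, 1]     # (x + 4), (x + 1)
s, t = [2], [3]           # 2*(x+4) + 3*(x+1) = 1 (mod 5)
print(hensel_step(f, g, h, s, t, 5))
```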
Proposition 8. Algorithm HenselStep is correct. It takes less than $\tilde O\!\left(\frac{n + w(F)}{w(\pi)}\, d\right)$ operations in $\mathbb{F}$.
Proof. We start with the correctness. $H$ being strongly monic in $\phi$, Lemma 7 ensures $w(R) \ge w(H) + n > w(H)$, as $w(E) \ge w(F) + n$ by assumption. As $\deg(R) < \deg(H)$, this proves $\deg(\tilde H) = \deg(H)$, $w(\tilde H - H) \ge w(H) + n$ and that $\tilde H$ is strongly monic in $\phi$. As $w(Q) \ge n$, we also get $w(\tilde G - G) = w(VE + QG) \ge w(G) + n$. Using these results and the equality
$$\lceil F - \tilde G \tilde H \rceil^{w(F)+2n} = \lceil E\,(1 - UG - VH) - R\,(EV + QG) \rceil^{w(F)+2n},$$
we get $\lceil F - \tilde G \tilde H \rceil^{w(F)+2n} = 0$, i.e. $w(F - \tilde G \tilde H) \ge w(F) + 2n$, from which we also deduce $\deg(F) \ge \deg(\tilde G) + \deg(\tilde H)$. Similarly, using $\lceil C \tilde H + D \rceil^{2n} = \lceil U B \rceil^{2n}$, we get
$$\lceil \tilde U \tilde G + \tilde V \tilde H - 1 \rceil^{2n} = \lceil U\tilde G + V\tilde H - 1 - B\,(U\tilde G + V\tilde H) \rceil^{2n} = \lceil B^2 \rceil^{2n} = 0,$$
proving $w(\tilde U \tilde G + \tilde V \tilde H - 1) \ge 2n$. Using Lemma 7 once again, we easily deduce the remaining properties of the output. Finally, the complexity is a direct consequence of Lemma 8 (and of the usual complexity analysis of the Hensel algorithm [7, Theorem 15.11]).
Corollary 2. Let $F \in \mathbb{A}[x]$, $t$ a type dividing $F$, a factorisation $\tilde R(F) = h_0 \cdots h_r h_\infty$ and $n \in \mathbb{N}$. One can compute $F_0, \ldots, F_r, F_\infty$ such that $\tilde R(F_i) = h_i$ and $w(F - F_0 \cdots F_r F_\infty) > n + w(F)$ in $\tilde O\!\left(\frac{n + w(F)}{w(\pi)}\, d\right)$ operations in $\mathbb{F}$. If $\deg(F_t) > d/2$, this is $\tilde O(n\, d / w(\pi) + \delta)$.

Proof. This algorithm is similar to the classical multifactor Hensel lifting [7, Algorithm 15.17]. We start by building a subproduct tree of the factorisation $\tilde R(F) = h_0 \cdots h_r h_\infty$. Then, for each node (from top to bottom), we initialise the polynomials $G, H, U, V$ and run HenselStep until we reach the required precision. Complexity and correctness follow from Propositions 7 and 8. If $\deg(F_t) > d/2$, Lemma 2 implies $w(F)/w(\pi) < 4\delta/d$.

In order to get a factorisation $F = F_0 \cdots F_r F_\infty \bmod \pi^{\sigma+1}$ for some given $\sigma \in \mathbb{N}$, we need to relate the valuations $v_0$ and $w$:
Lemma 9. Let $G \in \mathbb{L}[x]$ and denote $N = \lfloor \deg(G)/\deg(\phi) \rfloor + 1$. Then we have
$$v_0(G)\, w(\pi) \;\le\; w(G) \;\le\; v_0(G)\, w(\pi) + N\, w(\phi).$$

Proof. The first inequality is clear. For the second inequality, we can safely suppose that $v_0(G) = 0$. We first consider the case $\deg(G) < \deg(\phi)$ (so $N = 1$) and proceed by induction (recall that $\phi = \phi_k$ and $w = v_{k+1}$). If $k = 0$, the claim is obvious. Assume $k \ge 1$. Then, from (1) and the assumption $\deg(G) < \deg(\phi_k)$, we have $v_{k+1}(G) = q_k v_k(G)$. We also get $v_{k+1}(\phi_k) = q_k v_k(\phi_k) + m_k$. It is thus sufficient to show that $v_k(G) \le v_k(\phi_k)$. Write $G = \sum_{0 \le i < q_{k-1}\ell_{k-1}} a'_i\, \phi_{k-1}^i$. As there is at least one $a'_i$ not divisible by $\pi$, i.e. $v_k(a'_i) \le v_k(\phi_{k-1})$ by recursion, we get $v_k(G) \le q_{k-1}\ell_{k-1}\, v_k(\phi_{k-1})$. As $v_k(\phi_k) = q_{k-1}\ell_{k-1}\, v_k(\phi_{k-1})$ by [10, Theorem 2.11], this concludes. If $\deg(G) \ge \deg(\phi)$, we write $G = \sum_{i=0}^{N-1} a_i \phi^i$ with $\deg(a_i) < \deg(\phi)$. Let $i$ be such that $v_0(a_i) = 0$. We get $w(G) \le w(a_i \phi^i) \le (i+1)\, w(\phi) \le N\, w(\phi)$.

Theorem 3. There exists an algorithm SlopeFacto that, given $F \in \mathbb{A}[x]$, a type $t$ dividing $F$, a factorisation $\tilde R(F) = h_0 \cdots h_r h_\infty$ and $\sigma \in \mathbb{N}$, computes a factorisation $F = F_0 \cdots F_r F_\infty \bmod \pi^{\sigma+1}$ with $\tilde R(F_i) = h_i$. It takes $\tilde O\!\left(\sigma d + \frac{d^2\, w(\phi)}{\deg(\phi)\, w(\pi)}\right)$ operations in $\mathbb{F}$. If $\deg(F_t) > d/2$, this is $\tilde O(d\,\sigma + \delta)$.

Proof. Let $n = w(\pi^\sigma) - w(F) + (\lfloor d/\deg(\phi) \rfloor + 1)\, w(\phi)$ and apply Corollary 2 with this $n$. Then $G = F - F_0 \cdots F_r F_\infty$ satisfies $\deg(G) < d$ and $w(G) > n + w(F)$, i.e. $v_0(G) > \sigma$ by Lemma 9. The first complexity bound follows from Corollary 2. As $F_t$ is strongly monic in $\phi$, we have $w(\phi)/\deg(\phi) = w(F_t)/\deg(F_t)$, and the second bound is a consequence of Lemma 2, which implies $w(F_t)/w(\pi) < 2\delta/\deg(F_t)$.
5 A fast factorisation algorithm
From the previous two sections, we can easily deduce a factorisation algorithm. Let $n \ge \delta$ ([3, Theorem 3.13] shows that this is a sufficient precision to detect the whole factorisation).

1. Run algorithm FastIrreducible with precision $2\delta/d$. We conclude that $F$ is irreducible (and return $F$), or we get a type $t_{k-1}$.
2. Compute the factorisation $\tilde R_k(F) = h_0 h_1 \cdots h_r$ in $\mathbb{F}_k[y]$ (we have $h_\infty = 1$ since $F$ is of type $t_{k-1}$).
3. Use Algorithm SlopeFacto of Section 4 to get a factorisation $F = F_0 F_1 \cdots F_r$ with precision $n$.
4. Go back to Step 1 for each $F_i$.

If $\rho$ is the number of irreducible factors of $F$, this requires at most $\rho$ univariate factorisations and $\log_2(d)$ univariate irreducibility tests over $\mathbb{F}_k[y]$, plus a number of operations over $\mathbb{F}$ bounded by $O(\rho\, n\, d)$ under Assumption 1, and $O(\rho\, n\, d + \delta^2)$ otherwise. This section describes a divide and conquer strategy that reduces the computations over $\mathbb{F}$ to respectively $O(d\, n)$ and $O(d\, n + \delta^2)$. The idea is the following:

1. Find a type $t$ such that $F_t$ has degree $> d/2$ and such that either $F_t$ is irreducible, or every proper factor of $F_t$ has degree $\le d/2$.
2. Use Algorithm SlopeFacto of Section 4 to get a factorisation $F = F_0 F_1 \cdots F_r F_\infty$ with $F_t = F_0 F_1 \cdots F_r$.
3. Apply this strategy recursively to all factors that are not known to be irreducible.
5.1 Finding a dividing type
The aim of this section is to either find the irreducible factor of degree $> d/2$ if it exists, or find a factorisation $F = F_0 F_1 \cdots F_r F_\infty$ with each factor of degree $\le d/2$, and to do so with precision $\sigma \le 4\delta/d$. We start with the algorithm.

Algorithm: DividingType($F$, $\sigma$)
Input: $F \in \mathbb{A}[x]$ monic separable, $\sigma \in \mathbb{N}$ the precision used.
Output: A type $t$ with the properties described above.
1: $H \leftarrow F$, $d \leftarrow \deg(F)$
2: while True do
3:   $(b, t) \leftarrow$ FastIrreducible($H$, $\sigma$)
4:   if $b =$ True then return $t$
5:   Compute $\tilde R(H) = g\, h$, with $h = h_i$ the factor of highest degree
6:   if $\deg(h)\,\deg(\phi) \le d/2$ then return $t$
7:   $(G, H) \leftarrow$ SlopeFacto($H$, $t$, $[g, h]$, $\sigma$)
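The control flow of DividingType can be summarised by the following structural sketch. It is not the paper's implementation: fast_irreducible, residual_factorisation and slope_facto are placeholders for FastIrreducible, the residual factorisation of Line 5 and SlopeFacto, and t.phi_degree() is a hypothetical accessor standing for $\deg(\phi)$.

```python
# Structural sketch of DividingType; all helpers are placeholders supplied by the caller.

def dividing_type(F, sigma, fast_irreducible, residual_factorisation, slope_facto):
    H, d = F, F.degree()
    while True:
        is_irred, t = fast_irreducible(H, sigma)          # Line 3
        if is_irred:
            return t                                       # Line 4
        g, h = residual_factorisation(H, t)                # Line 5: h of largest degree
        if h.degree() * t.phi_degree() <= d / 2:           # Line 6
            return t
        G, H = slope_facto(H, t, [g, h], sigma)            # Line 7
```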
Remark 4. Note that the notation FastIrreducible($H$, $\sigma$) means to run Algorithm FastIrreducible with parameter $H$ and precision $\sigma$. Also, at Line 5, $H$ is always of type $t$, so that the factorisation $\tilde R(H)$ does not involve $h_\infty$.
Proposition 9. The function call DividingType($F$, $4\delta/d$) is correct. It takes less than $O(\rho\, \delta)$ operations in $\mathbb{F}$ if Assumption 1 is satisfied, and $O(\rho\, \delta + \delta^2)$ otherwise, plus at most $\rho$ factorisations and $\log_2(d)$ irreducibility tests over some $\mathbb{F}_k$.

Proof. Note first that $\deg(H)$ decreases at each loop (via the factorisation of Line 7), proving termination of the algorithm. Also, the test of Line 6 ensures $\deg(H) > d/2$, so that $2\,\delta(H)/\deg(H) \le 4\delta/d$ at any point of the algorithm. FastIrreducible($H$, $4\delta/d$) will thus return a correct answer thanks to Lemma 3. This precision is also sufficient to compute $\tilde R(H)$ at Line 5 (use Proposition 4). As the precision used is high enough, correctness is straightforward: we output $t$ when either $F_t$ is irreducible (Line 4), or all of its factors have degree $\le d/2$ (Line 6). Finally, the complexity is a direct consequence of Theorems 2 and 3 (and Proposition 5).
Remark 5. There are various straightforward improvements to this algorithm: for instance, one can provide the type computed so far to FastIrreducible to avoid redoing some computations. We do not include them, to keep the reading easier, but they are needed in order to run fewer than $\log_2(d)$ irreducibility tests over some $\mathbb{F}_k[y]$.
5.2 The divide and conquer algorithm
Thanks to this result, we derive a fast factorisation algorithm:
Remark 6. At Line 6, $F_0$ and $F_\infty$ are known to be irreducible if they have degree $\le 1$, while $F_i$ is known to be irreducible if $\mathrm{ord}_t F_i = 1$ for $i = 1, \ldots, r$.

Theorem 4. Let $n \ge \delta$. A function call Factorisation($F$, $n$) returns the correct output. It requires at most $\rho$ factorisations and $\log_2(d)$ irreducibility tests in some $\mathbb{F}_k[y]$, plus $O(d\, n)$ operations in $\mathbb{F}$ if Assumption 1 is satisfied, and $O(d\, n + \delta^2)$ otherwise.
Algorithm: Factorisation($F$, $n$)
Input: $F \in \mathbb{A}[x]$ monic separable, $n \in \mathbb{N}$ the precision.
Output: The list $F_1, \ldots, F_\rho$ of irreducible factors of $F$, known with precision $n$.
1: $t \leftarrow$ DividingType($F$, $4n/d$)
2: Compute $\tilde R(F) = h_0 h_1 \cdots h_r h_\infty$
3: $F_0, \ldots, F_r, F_\infty \leftarrow$ SlopeFacto($F$, $t$, $[h_0, \ldots, h_r, h_\infty]$, $n$)
4: $R \leftarrow \{\}$
5: for $i \in \{0, \ldots, r, \infty\}$ do
6:   if $F_i$ is known to be irreducible then $R \leftarrow R \cup \{F_i\}$
7:   else $R \leftarrow R \cup \{$Factorisation($F_i$, $n$)$\}$
8: return $R$
Proof. This is a direct consequence of Proposition 9 and Theorem 3, as $\sum_i \deg(F_i) = d$ at Line 3 and $\deg(F_i) \le d/2$, so that the number of levels of the recursive call tree is less than $\log_2(d)$. Precision $\delta$ is sufficient to detect all irreducible factors by [3, Theorem 3.13].
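For readers who prefer code to pseudocode, the divide-and-conquer recursion of Factorisation has the following shape. Again, this is only a structural sketch and not the paper's implementation: the three helpers passed as arguments stand for DividingType, the residual factorisation of Line 2 and SlopeFacto, and the irreducibility criterion of Remark 6 is left as a stub.

```python
# Structural sketch of the divide-and-conquer recursion of Factorisation.

def factorisation(F, n, dividing_type, residual_factorisation, slope_facto):
    """Return the irreducible factors of F, known with precision n."""
    d = F.degree()
    t = dividing_type(F, 4 * n / d)                 # Line 1
    residuals = residual_factorisation(F, t)        # Line 2: h_0, ..., h_r, h_oo
    factors = slope_facto(F, t, residuals, n)       # Line 3: F_0, ..., F_r, F_oo
    result = []
    for Fi in factors:                              # Lines 4-7
        if is_known_irreducible(Fi, t):
            result.append(Fi)
        else:
            result.extend(factorisation(Fi, n, dividing_type,
                                        residual_factorisation, slope_facto))
    return result

def is_known_irreducible(Fi, t):
    # Placeholder for the criterion of Remark 6 (degree <= 1, or ord_t Fi = 1).
    raise NotImplementedError
```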
Remark 7. As in Section 3, we do not know $\delta$ in advance. We proceed as explained in Section 3.2.

When considering $n \in O(\delta)$, then up to the cost of the residual factorisations, Theorem 4 improves [3, Theorem 5.15] by a factor $\delta$ if Assumption 1 is satisfied, and by a factor $\min(d, \delta)$ otherwise.
5.3 Residual factorisations over finite fields
If $\mathrm{Card}(\mathbb{F}) = q$, we factorise $F \bmod \pi$ with an expected $\tilde O(d^2 + d \log(q))$ operations over $\mathbb{F}$ by [7, Corollary 14.30]. The remaining residual factorisations performed by algorithm Factorisation then use an expected $O(d\,\delta \log(q))$ operations in $\mathbb{F}$; see the proof of [3, Theorem 5.14]. Together with Theorem 4, this proves:
Corollary 3. Let $F \in \mathbb{Z}_p[x]$ be separable, with $p$ small. Given the univariate factorisation of $F \bmod p$, we compute the irreducible factors of $F$ with precision $n \ge \delta$ with an expected $O(nd)$ operations over $\mathbb{F}_p$ if $p > d$, and $O(nd + \delta^2)$ otherwise. The same result holds for $F \in \mathbb{F}_p[x]$.

Corollary 3 significantly improves [3, Theorem 5.18], which leads to the complexity estimate $O(d\delta^2 + nd^2)$ under the same hypotheses.
Remark 8. Corollary 3 requires, a priori, computing a primitive representation of $\mathbb{F}_k$ over $\mathbb{F}$ before applying the factorisation algorithm [7, Corollary 14.30] (this issue does not seem to be considered in [ ]). To this aim, we may use a Las Vegas subroutine [29, Proposition 4] whose complexity fits within our target bound. There are recent faster factorisation algorithms [15, Corollary 3] (both for primitive and triangular representations), but they are not yet implemented.
16 5.4 Avoiding residual factorisations
If $\mathbb{F}$ has large cardinality, then the residual factorisations will probably dominate the cost of the whole algorithm. It might thus be preferable to rely on dynamic evaluation [14, 29]: we allow the $P_k$'s to be square-free, hence the $\mathbb{F}_k$'s to be products of fields. If at some point we find a zero divisor (while computing some gcd's), then we continue over each discovered summand of $\mathbb{F}_k$ (or we return false if we run an irreducibility test). At the end, we perform a unique residual factorisation (of expected small degree) for each discovered factor of $F$ to deduce its irreducible factorisation. Notice that this last step may be unnecessary, depending on the arithmetic information we want to compute. It is not needed, for instance, if we only want the ramification indices (e.g. for the computation of the genus of a plane curve), or the valuation $\delta$ of the discriminant of $F$, or the equisingularity type of a germ of plane curve. In these cases, we may use only square-free residual factorisations (fast gcd's) and the underlying algorithm is entirely deterministic.
6 Direct Applications
Our complexity estimates have several direct consequences for various tasks of computational number theory and algebraic geometry. In what follows, the complexity results are given up to the cost of the residual univariate factorisations.
OM-factorisation. Let $K = \mathbb{Q}[x]/(F)$ be a number field and let $p \in \mathbb{Z}$ be a prime. The first main consequence of Theorem 4 is that we can compute an Okutsu–Montes (OM) representation of the prime ideals dividing $p$ with $O(d\delta)$ operations in $\mathbb{F}_p$ if $p > d$, or $O(d\delta + \delta^2)$ otherwise, improving [3, Theorem 5.15] respectively by a factor $\delta$ or $\min(d, \delta)$. These OM-representations carry essential data about the corresponding extensions of local fields and give useful tools for various local and global arithmetic tasks.
Valuations of discriminants and resultants. As a straightforward application of fast OM-factorisation, we may compute $\delta = v_p(\mathrm{Disc}\, F)$ with $O(d\delta)$ operations in $\mathbb{F}_p$ if $p > d$, or $O(d\delta + \delta^2)$ otherwise, improving [22, Theorem 2.5] respectively by a factor $\delta$ or $\min(d, \delta)$. If $G, H \in \mathbb{Z}[x]$ are coprime of degrees at most $d$, we compute $\delta = v_p(\mathrm{Res}(G, H))$ within the same bound, improving now [22, Theorem 3.3] by a factor $\delta$ or $\min(d, \delta)$. As mentioned in Section 5.4, we may rely on dynamic evaluation for this task: only square-free residual factorisation is required and the algorithm is deterministic.
Local integral basis. Combined with Bauch's algorithm, our results allow us to compute a $\mathbb{Z}_{(p)}$-basis of the integral closure of $\mathbb{Z}_{(p)}$ in $K$ with $O(d^2\delta)$ operations in $\mathbb{F}_p$ if $p > d$, or $O(d^2\delta + \delta^2)$ otherwise, improving the bound $O(d^2\delta + d\delta^2)$ of [2, Lemma 3.10]. This impacts the computation of a global integral basis of $K/\mathbb{Q}$, obtained from local ones via Hermite normal forms and Chinese remaindering.

Function fields. All complexity results above extend trivially to function fields satisfying Assumption 1. In this context, we may moreover use the Riemann–Hurwitz formula to compute the genus of a degree $d$ plane curve over $\mathbb{F}$ within $O(d^3)$ operations in $\mathbb{F}$ (following a similar strategy, but using the easier-to-implement algorithm Factorisation). Here again, only square-free residual factorisation is required and the algorithm is deterministic.
More applications. There are several other computational consequences for global fields, such as computing the valuation at a prime ideal, factoring fractional ideals or Chinese remaindering, factoring bivariate polynomials, computing roots of polynomials, or computing Riemann–Roch spaces, but going into these details is beyond the scope of this paper.
References
[1] S. Abhyankar. Algebraic Geometry for Scientists and Engineers, volume 35 of Mathematical Surveys and Monographs. Amer. Math. Soc., 1990.
[2] J.-D. Bauch. Computation of integral bases. Journal of Number Theory, 165:382–407, 2016.
[3] J.-D. Bauch, E. Nart, and H. Stainsby. Complexity of the OM factorizations of polynomials over local fields. LMS Journal of Computation and Mathematics, 16:139–171, 2013.
[4] D. G. Cantor and D. Gordon. Factoring polynomials over p-adic fields. In ANTS-IV, volume 1838 of LNCS. Springer Verlag, 2000.
[5] X. Caruso, D. Roe, and T. Vaccon. Division and slope factorization of p-adic polynomials. In Proceedings of ISSAC '16, pages 159–166, New York, NY, USA, 2016. ACM.
[6] D. Ford, S. Pauli, and X.-F. Roblot. A fast algorithm for polynomial factorization over Qp. Journal de Théorie des Nombres de Bordeaux, 14:151–169, 2002.
[7] J. v. z. Gathen and J. Gerhard. Modern Computer Algebra. Cambridge University Press, New York, NY, USA, 3rd edition, 2013.
[8] J. Guàrdia, J. Montes, and E. Nart. Okutsu invariants and Newton polygons. Acta Arithmetica, 145:83–108, 2010.
[9] J. Guàrdia, J. Montes, and E. Nart. Higher Newton polygons in the computation of discriminants and prime ideal decomposition in number fields. Journal de Théorie des Nombres de Bordeaux, 23(3):667–696, 2011.
[10] J. Guàrdia, J. Montes, and E. Nart. Newton polygons of higher order in algebraic number theory. Transactions of the American Mathematical Society, 364:361–416, 2012.
[11] J. Guàrdia, E. Nart, and S. Pauli. Single-factor lifting and factorization of polynomials over local fields. Journal of Symbolic Computation, 47(11):1318–1346, 2012.
[12] W. Hart and A. Novocin. Practical divide-and-conquer algorithms for polynomial arithmetic. In Computer Algebra in Scientific Computing, pages 200–214. Springer, Berlin, Heidelberg, 2011.
[13] F. Hess. Computing Riemann–Roch spaces in algebraic function fields and related topics. Journal of Symbolic Computation, 33(4):425–445, 2002.
[14] J. v. d. Hoeven and G. Lecerf. Directed evaluation. Journal of Complexity, 60:101498, 2020.
[15] J. v. d. Hoeven and G. Lecerf. Univariate polynomial factorization over finite fields with large extension degree. Applicable Algebra in Engineering, Communication and Computing, 2022.
[16] E. Kaltofen. Greatest common divisors of polynomials given by straight-line programs. J. ACM, 35(1):231–264, 1988.
[17] R. Lebreton. Relaxed Hensel lifting of triangular sets. Journal of Symbolic Computation, 68, Part 2:230–258, 2015.
[18] S. Mac Lane. A construction for prime ideals as absolute values of an algebraic field. Duke Math. J., 2(3):492–510, 1936.
[19] S. MacLane. A construction for absolute values in polynomial rings. Trans. Amer. Math. Soc., 40(3):363–395, 1936.
[20] G. Moroz and É. Schost. A fast algorithm for computing the truncated resultant. In Proceedings of ISSAC '16, pages 1–8, New York, NY, USA, 2016. ACM.
[21] E. Nart. Okutsu–Montes representations of prime ideals of one-dimensional integral closures. Publicacions Matemàtiques, 55, 2011.
[22] E. Nart. Local computation of differents and discriminants. Mathematics of Computation, 83(287):1513–1534, 2014.
[23] V. Neiger, J. Rosenkilde, and É. Schost. Fast computation of the roots of polynomials over the ring of power series. In Proceedings of ISSAC '17, pages 349–356. ACM, 2017.
[24] S. Pauli. Factoring polynomials over local fields. J. Symb. Comp., 32:533–547, 2001.
[25] S. Pauli. Factoring polynomials over local fields, II. In ANTS-IX, LNCS. Springer Verlag, 2010.
[26] P. Popescu-Pampu. Approximate roots. Fields Institute Communications, 33:1–37, 2002.
[27] A. Poteaux and M. Weimann. A quasi-linear irreducibility test in K[[x]][y]. Preprint, pages 1–21, 2018.
[28] A. Poteaux and M. Weimann. Computing the equisingularity type of a pseudo-irreducible polynomial. Applicable Algebra in Engineering, Communication and Computing, 31:435–460, 2020.
[29] A. Poteaux and M. Weimann. Computing Puiseux series: a fast divide and conquer algorithm. Annales Henri Lebesgue, 4:1061–1102, 2021.
[30] M. Weimann. Bivariate factorization using a critical fiber. Foundations of Computational Mathematics, pages 1–45, 2016.
|
18
|
Published Time: 2023-05-24
Myc-tag: An epitope tag for protein characterization, protein interaction analysis, and purification. | Proteintech Group
===============
Myc-tag is a peptide tag derived from the c-Myc protein. The Myc-tag can be used for many capture and detection applications such as immunoprecipitation, immunofluorescence and protein purification.
What is Myc-tag?
How does Myc-tag work?
3D structure of Myc-tag
Origin
Properties
Size of Myc-tag
Specifications of the Myc-tag
Myc-tag epitope tag sequences
Applications
Myc-tag vs. Flag
Myc-tag vs. HA
Myc-tag vs. Spot-Tag
How to elute Myc-tagged proteins?
Myc-tagged plasmids
Myc-tag Nanobodies and antibodies
Myc-Trap: The best anti-Myc-tag antibody for immunoprecipitation
Reference
What is Myc-tag?
Myc-tag is an epitope tag derived from the c-Myc protein.
How does Myc-tag work?
A Myc-tag can be used to detect expression of recombinant proteins. For this purpose, the Myc-tag is genetically fused/cloned to a protein of interest (POI). After expression, the Myc-tagged protein can be captured and identified in crude biological samples. Common applications are immunoprecipitation (IP) & co-immunoprecipitation (co-IP), immunofluorescence, ELISA, flow cytometry, protein purification, and Western blotting (WB).
3D structure of Myc-tag
Origin
The Myc-tag is derived from the human c-myc oncogene and corresponds to amino acid residues 410–419 of the C-terminus of the human c-Myc protein. Human c-Myc was discovered as a cellular homolog of the v-myc oncogenes, which were identified through analyses of avian tumors [1, 2]. c-Myc is a transcription factor.
Properties
The Myc-tag contains 10 amino acids (aa) and the sequence is EQKLISEEDL.
Size of the Myc-tag
Number of amino acids: 10
Molecular weight (MW): 1203.31Da
Specifications of the Myc-tag
Theoretical isoelectric point (pI): 4.00
Total number of negatively charged residues (Asp + Glu): 4
Total number of positively charged residues (Arg + Lys): 1
Myc-tag epitope tag sequences
Myc-tag amino acid sequence:
EQKLISEEDL
Myc-tag DNA sequence:
GAA CAA AAA CTC ATC TCA GAA GAG GAT CTG
Please note, the mentioned sequence is optimized for mammalian expression.
Applications
Myc-tagged proteins are used in immunoprecipitation (IP), protein purification, Western blotting, ELISA, flow cytometry, and immunofluorescence (IF).
Myc-tag vs. Flag-tag

Comparison of Myc-tag and Flag-tag:

| | Myc-tag | Flag-tag |
|---|---|---|
| Origin | human c-Myc | artificial design |
| Amino acid sequence | EQKLISEEDL | DYKDDDDK |
| Number of amino acids | 10 | 8 |
| Molecular weight (Da) | 1203.31 | 1012.98 |
| Theoretical isoelectric point (pI) | 4.00 | 3.97 |
| Negatively charged residues (Asp + Glu) | 4 | 5 |
| Positively charged residues (Arg + Lys) | 1 | 2 |
Myc-tag vs. HA-tag

Comparison of Myc-tag and HA-tag:

| | Myc-tag | HA-tag |
|---|---|---|
| Origin | human c-Myc | human influenza hemagglutinin (HA), derived from the human influenza virus |
| Amino acid sequence | EQKLISEEDL | YPYDVPDYA |
| Number of amino acids | 10 | 9 |
| Molecular weight (Da) | 1203.31 | 1102.17 |
| Theoretical isoelectric point (pI) | 4.00 | 3.56 |
| Negatively charged residues (Asp + Glu) | 4 | 2 |
| Positively charged residues (Arg + Lys) | 1 | 0 |
Myc-tag vs. Spot-Tag®

Comparison of Myc-tag and Spot-Tag®:

| | Myc-tag | Spot-Tag® |
|---|---|---|
| Origin | human c-Myc | derived from human beta-catenin |
| Amino acid sequence | EQKLISEEDL | PDRVRAVSHWSS |
| Number of amino acids | 10 | 12 |
| Molecular weight (Da) | 1203.31 | 1396.53 |
| Theoretical isoelectric point (pI) | 4.00 | 10.03 |
| Negatively charged residues (Asp + Glu) | 4 | 1 |
| Positively charged residues (Arg + Lys) | 1 | 2 |
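The molecular weights and charged-residue counts quoted above follow directly from the peptide sequences. The short script below (an independent check, not a Proteintech tool) reproduces them from standard average residue masses; the pI values are omitted since they require a titration model rather than simple arithmetic.

```python
# Independent arithmetic check of the molecular weights and charged-residue
# counts in the comparison tables.  Standard average residue masses (Da) are
# used; one water (18.0153 Da) is added per peptide.

AVG_RESIDUE_MASS = {
    "A": 71.0788, "D": 115.0886, "E": 129.1155, "H": 137.1411, "I": 113.1594,
    "K": 128.1741, "L": 113.1594, "P": 97.1167, "Q": 128.1307, "R": 156.1875,
    "S": 87.0782, "V": 99.1326, "W": 186.2132, "Y": 163.1760,
}
WATER = 18.0153

def peptide_stats(seq):
    mw = sum(AVG_RESIDUE_MASS[aa] for aa in seq) + WATER
    negative = sum(seq.count(aa) for aa in "DE")   # Asp + Glu
    positive = sum(seq.count(aa) for aa in "RK")   # Arg + Lys
    return round(mw, 2), negative, positive

tags = {"Myc-tag": "EQKLISEEDL", "Flag-tag": "DYKDDDDK",
        "HA-tag": "YPYDVPDYA", "Spot-Tag": "PDRVRAVSHWSS"}
for name, seq in tags.items():
    print(name, peptide_stats(seq))
# Expected: Myc-tag (1203.31, 4, 1), Flag-tag (1012.98, 5, 2),
#           HA-tag (1102.17, 2, 0), Spot-Tag (1396.53, 1, 2)
```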
How to elute Myc-tagged proteins?
Myc-tagged proteins are normally eluted using glycine or citric acid or by competitive elution with 1x Myc-peptide (EQKLISEEDL) or 2x Myc-peptide (EQKLISEEDLEQKLISEEDL).
Myc-tagged plasmids
Vectors for expression with Myc-tag for mammalian cells, Drosophila, etc. are available from Merck (Sigma-Aldrich) and ThermoFisher (Invitrogen).
Myc-tag Nanobodies and antibodies
For experimental analysis of Myc-tagged proteins, Nanobodies, monoclonal antibodies, and polyclonal antibodies are available:
• Recombinant Nanobody: ChromoTek Myc-Trap® or ChromoTek Myc VHH, recombinant binding protein
• Monoclonal antibody: Myc-tag monoclonal antibody (9E1) or MYC tag monoclonal antibody
• Polyclonal antibodies: MYC tag polyclonal antibody
Myc-Trap: The best anti-Myc-tag resin for immunoprecipitation
ChromoTek offers an anti-Myc-tag Nanobody conjugated to beads for immunoprecipitation:
Myc-Trap Agarose: anti-Myc-tag Nanobody conjugated to Agarose beads
Myc-Trap Magnetic Agarose: anti-Myc-tag Nanobody conjugated to Magnetic Agarose beads
Benefits of ChromoTek Myc-Trap in IP
• No heavy & light antibody chains in downstream applications
• Efficient and fast pulldown of Myc-tagged proteins
• One-step immunoprecipitation
• Peptide elution of native proteins
Request a free Myc-Trap sample
Reference
(1) Duesberg PH, Vogt PK. Avian acute leukemia viruses MC29 and MH2 share specific RNA sequences: evidence for a second class of transforming genes. Proc Natl Acad Sci U S A. 1979;76:1633–1637.
(2) Sheiness D, Bishop JM. DNA and RNA from uninfected vertebrate cells contain nucleotide sequences related to the putative transforming gene of avian myelocytomatosis virus. J Virol. 1979;31:514–521.
Related Content
Mass spec-compatible immunoprecipitation for GFP, mNeonGreen, Myc, RFP, Spot, and TurboGFP
Tags for protein purification
Immunoprecipitation of Myc-tagged proteins – How it works
How to immunoprecipitate Flag®-tagged proteins
Immunoprecipitation without additional bands
How to conduct a Co-immunoprecipitation (Co-IP)
Which beads should I use for my immunoprecipitation?
Advantages and limitations of different antibody formats in immunoprecipitation
8 Top Tips For Immunoprecipitation
|
19
|
Galois extensions of degrees $p$ and $p^{n-1}$ given a Galois extension of $p^n$ - Mathematics Stack Exchange
===============
Galois extensions of degrees $p$ and $p^{n-1}$ given a Galois extension of $p^n$
Asked 13 years, 8 months ago
Modified 4 years, 8 months ago
Viewed 3k times
Suppose $K$ is a Galois extension of a field $F$ of degree $p^n$, for $p$ a prime.

I want to see if there are Galois extensions of degrees $p$ and $p^{n-1}$ over $F$.

If $G = \mathrm{Gal}(K/F)$, then $|G| = p^n$. If $G$ is abelian, I know there are subgroups of order $p^i$ for $0 \le i \le n$, so there are subgroups of orders $p^{n-1}$ and $p$, and their corresponding fixed fields are of degrees $p$ and $p^{n-1}$ over $F$, and are Galois extensions since the subgroups are obviously normal in $G$.

But if $G$ is not abelian, is this still true?
galois-theory
asked Nov 24, 2011 at 22:06 by Turk
3 Answers
A group of order $p^n$ always has subgroups of order $p^{n-1}$ (in fact, all maximal subgroups are of order $p^{n-1}$), and they are always normal; and it always has subgroups of order $p$ that are normal (in fact central).

To see this, we use the class equation. Recall that if $G$ is a finite group and $Z(G)$ is the center of the group (the set of all elements $g \in G$ such that $gx = xg$ for all $x \in G$), then
$$|G| = |Z(G)| + \sum [G : C(x_i)],$$
where $x_1, \ldots, x_n$ are representatives of the conjugacy classes of $G$ with more than one element. If $|G| = p^n$, then $[G : C(x_i)]$ is a multiple of $p$ for every $i$, so considering the equation modulo $p$ we conclude that $|Z(G)| \equiv 0 \pmod{p}$. Since $Z(G)$ has at least one element (the identity), it must be nontrivial.

Since $Z(G)$ is nontrivial, and is abelian, it has a subgroup of order $p$. This subgroup is normal in $G$. This gives you a normal subgroup of order $p$, and hence a field of degree $p^{n-1}$ over $F$ that is Galois.

To show that all maximal subgroups are of order $p^{n-1}$ and that they are all normal, we proceed by induction on $n$. If $n = 1$ or $n = 2$, then $G$ is abelian and we know the result is true. Assume the result is true for groups of order $p^{n-1}$, and let $G$ be of order $p^n$.

Let $N$ be a subgroup of $Z(G)$ of order $p$, and let $H$ be a maximal subgroup of $G$. If $N \subseteq H$, then consider $G/N$. This has order $p^{n-1}$, and $H/N$ is maximal (by the Lattice Isomorphism Theorem); hence $H/N$ is of order $p^{n-2}$ and normal in $G/N$, so $|H| = |N| \times |H/N| = p^{n-1}$, and $H$ is normal in $G$.

If $N$ is not contained in $H$, then $HN$ is a subgroup of $G$ that contains $H$ (since $N$ is central, it is normal, so $HN$ is a subgroup), hence $HN = G$. Since $|G| = p^n = |HN| = |H||N|/|H \cap N|$, and $|H \cap N| = 1$ ($N$ is cyclic of order $p$ and not contained in $H$), then $|H| = p^{n-1}$. Moreover, given any $g \in G$, we can write $g = hn$ with $h \in H$ and $n \in N$. Then $gHg^{-1} = hnHn^{-1}h^{-1} = hHnn^{-1}h^{-1} = hHh^{-1} = H$ (with the second equality because $n$ is central). Thus, $H$ is normal in $G$.

Hence, every maximal subgroup of $G$ has order $p^{n-1}$ and is normal.

In conclusion, you can always find normal subgroups of $G$ of order $p$ and of order $p^{n-1}$, when $|G| = p^n$.

Added. You can now use this to show that $G$ always has subgroups of order $p^i$ that are normal for every $i$, $0 \le i \le n$.

Proceed by induction on $n$. The result holds for groups of order $p$ and $p^2$. Assume the result holds for any group of order $p^n$, and let $G$ have order $p^{n+1}$. Let $N$ be a normal subgroup of order $p$, and consider $G/N$. Then $G/N$ has subgroups $\overline{H_i}$ of order $p^i$ that are normal in $G/N$, $i = 0, \ldots, n$. These correspond to subgroups $H_i$ of $G$ that contain $N$, of order $p^{i+1}$, and that are normal in $G$. So $G$ has subgroups of order $p, \ldots, p^{n+1}$ that are normal; together with the trivial group of order $p^0$, this gives you subgroups of order $p^i$ that are normal for every $i$, $0 \le i \le n+1$. $\square$
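The group-theoretic facts used above (nontrivial center, class equation, normal maximal subgroups) are easy to check computationally on a concrete non-abelian $p$-group. The following small script (not part of the original answer) does this for the Heisenberg group of order $p^3$ with $p = 3$; the encoding of group elements as triples is an arbitrary choice made for this illustration.

```python
# Sanity check on the Heisenberg group of order p^3: triples (a, b, c) with
# (a,b,c)*(a',b',c') = (a+a', b+b', c+c'+a*b') mod p (upper unitriangular 3x3
# matrices over Z/p).
from itertools import product

p = 3
G = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]

def mul(x, y):
    return ((x[0]+y[0]) % p, (x[1]+y[1]) % p, (x[2]+y[2]+x[0]*y[1]) % p)

def inv(x):
    a, b, c = x
    return ((-a) % p, (-b) % p, (a*b - c) % p)

e = (0, 0, 0)

# 1) the center is nontrivial
center = [z for z in G if all(mul(z, g) == mul(g, z) for g in G)]
assert len(center) == p

# 2) class equation: |G| = |Z(G)| + sum of nontrivial conjugacy class sizes
classes = set(frozenset(mul(mul(g, x), inv(g)) for g in G) for x in G)
sizes = sorted(len(c) for c in classes)
assert sum(sizes) == len(G)
assert all(len(c) == 1 or len(c) % p == 0 for c in classes)

# 3) every subgroup of order p^2 (every maximal subgroup) is normal
def generated(gens):
    S, frontier = {e}, set(gens)
    while frontier:
        S |= frontier
        frontier = {mul(x, y) for x in S for y in gens} - S
    return S

maximal = set()
for x, y in product(G, repeat=2):
    S = generated([x, y])
    if len(S) == p*p:
        maximal.add(frozenset(S))
assert all(all(frozenset(mul(mul(g, s), inv(g)) for s in S) == S for g in G)
           for S in maximal)
print("center size:", len(center), "| class sizes:", sizes,
      "| maximal subgroups:", len(maximal), "(all normal)")
```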
answered Nov 24, 2011 at 22:16 (edited Nov 24, 2011 at 22:43) by Arturo Magidin
Comment: Thanks, this was easy to follow. – Turk, Nov 24, 2011 at 22:40
I recall having seen this question in Dummit & Foote's Abstract Algebra; if you go to page 188 of that book you will see that Theorem 1 (3) gives you that $\mathrm{Gal}(K/F)$ has a normal subgroup of order $p^k$ for $0 \le k \le n$, thus giving you the Galois extensions you were looking for. The proofs are in the book.

Hope that helps,
answered Nov 24, 2011 at 22:35 by Patrick Da Silva
Comment: Thanks mate, I'll try to find this. – Turk, Nov 24, 2011 at 22:40
I recognize this question was asked 9 years ago, but I was doing this problem (in Dummit and Foote) and found what I believe to be a simpler solution. I'll write it here to brag and to help future students of Dummit and Foote.
Using the class equation (as in the accepted answer here), find a normal subgroup of order $p$, $\mathbb{Z}_p \trianglelefteq \mathrm{Gal}(K/\mathbb{Q})$. Its fixed field $K_1 \subseteq K$ is indeed a Galois extension of $\mathbb{Q}$ of degree $p^{n-1}$. Now repeat this argument with $K_1$, finding again $\mathbb{Z}_p \trianglelefteq \mathrm{Gal}(K_1/\mathbb{Q})$ with fixed field $K_2 \subseteq K_1$. We now have a Galois extension of order $p^{n-2}$. Continuing by the same argument for $n$ steps gives $\mathbb{Q} \subseteq K_{n-1} \subseteq \ldots \subseteq K_1 \subseteq K$, where each $K_i$ is Galois over $\mathbb{Q}$ of degree $p^{n-i}$.
answered Nov 20, 2020 at 4:44 by Jake Mirra
Comment: I think you meant "Continuing by the same argument for $n-1$ steps". – 19021605, Jun 27 at 9:45
|
20
|
The Indo-Aryan Languages | Cambridge University Press & Assessment
===============
Darwin
Dermatology
Emergency medicine
Endocrinology
Epidemiology, public health and medical statistics
Gastroenterology
Geriatrics
Hematology
Infectious disease
Internal medicine
Medical imaging
Medical law, ethics and forensic medicine
Medicine: general interest
Mental health, psychiatry and clinical psychology
Neurology and clinical neuroscience
Neuropsychology
Nursing
Obstetrics and gynecology, reproductive medicine
Oncology
Pathology and laboratory science
Pediatrics and child health
Respiratory medicine
Surgery
Music
Dance
Eighteenth-century music
Ethnomusicology
Medieval and renaissance music
Music criticism
Music performance
Music: general interest
Nineteenth-century music
Opera
Seventeenth-century music
Twentieth-century and contemporary music
Philosophy
Classical philosophy
Early modern philosophy
Eighteenth-century philosophy
Epistemology and metaphysics
Ethics
History of philosophy
Legal philosophy
Logic
Medieval philosophy
Nineteenth-century philosophy
Non-western philosophy
Philosophy of mind and language
Philosophy of science
Philosophy of social science
Philosophy texts
Philosophy: general interest
Political philosophy
Renaissance philosophy
Twentieth-century philosophy
Physics and astronomy
Amateur and popular astronomy
Astronomy (general)
Astrophysics
Atomic physics, molecular physics and chemical physics
Biological physics and soft matter physics
Condensed matter physics, nanoscience and mesoscopic physics
Cosmology, relativity and gravitation
Econophysics, financial physics and social physics
Electronics for physicists
General and classical physics
History and philosophy of physics and astronomy
Mathematical and computational methods and modelling
Nonlinear science and fluid dynamics
Observational astronomy, techniques and instrumentation
Optics, optoelectronics and photonics
Particle physics and nuclear physics
Planetary systems and astrobiology
Plasma physics and fusion physics
Quantum physics, quantum information and quantum computation
Solar and space plasma physics
Statistical physics, network science and complex systems
Theoretical physics and mathematical physics
Politics and international relations
African government, politics and policy
American government, politics and policy
American political development
Australian politics
British government, politics and policy
Comparative politics
East Asian government, politics and policy
European government, politics and policy
History of ideas
International relations and international organisations
Latin American government, politics and policy
Middle East government, politics and policy
Political economy
Political theory
Politics: general interest
Research methods in politics
Russian and east European government, politics and policy
South Asian government, politics and policy
South-east Asian government, politics and policy
Stay up to date with APSA Preprints
Texts in political thought
Psychology
Applied psychology
Biological psychology
Cognition
Critical psychology
Cultural psychology
Developmental psychology
Educational psychology
Experimental psychology
Health and clinical psychology
History of psychology
Personality psychology and individual differences
Psychology and Data Science Collection Australia and New Zealand
Psychology research methods and statistics
Psychology: general interest
Social psychology
Religion
Biblical studies - New Testament
Biblical studies - Old Testament, Hebrew bible
Buddhism and Eastern religions
Church history
History of religion
Islam
Judaism
Philosophy of religion
Religion: general interest
Religious ethics
Theology
Social science research methods
Qualitative methods
Quantitative methods
Sociology
Criminology
Demography, social statistics
Historical sociology
Organisational sociology
Political sociology
Research methods in sociology and criminology
Social Work Australia and New Zealand
Social policy and social work
Social theory
Sociology of gender
Sociology of race and ethnicity
Sociology of religion
Sociology of science and medicine
Sociology: general interest
Statistics and probability
Applied probability and stochastic networks
Computational statistics, machine learning and information science
Optimization, OR and risk
Probability theory and stochastic processes
Statistical theory and methods
Statistics and probability: general interest
Statistics for econometrics, finance and insurance
Statistics for environmental sciences
Statistics for life sciences, medicine and health
Statistics for physical sciences and engineering
Statistics for social sciences, behavioural sciences and law
What we offerWhat we offer
What we offer
Bibles
Research publishing
Teaching and learning
Quick links
Cambridge Advance Online
Cambridge Aspire
Cambridge Core
Cambridge Open Engage
Who we serveWho we serve
Who we serve
Authors
Editors
Instructors
Librarians
Publishing partners
Researchers
Students
Conferences & events
About usAbout us
About us
Awards
Environmental sustainability
Media office
News and blogs
Support
Recommended product
Popular links
Popular links
The Indo-Aryan Languages
Series:
Cambridge Language Surveys
Author:
Colin P. Masica
Published:
September 1993
Availability:
Available
Format:
Paperback
ISBN:
9780521299442
Price: $119.00 USD (Paperback)
Description
In his ambitious survey of the Indo-Aryan languages, Masica has provided a fundamental, comparative introduction that will interest not only general and theoretical linguists but also students of one or more languages (Hindi, Urdu, Bengali, Punjabi, Gujurati, Marathi, Sinhalese, etc.) who want to acquaint themselves with the broader linguistic context. Generally synchronic in approach, concentrating on the phonology, morphology and syntax of the modern representatives of the group, the volume also covers their historical development, writing systems, and aspects of sociolinguistics.
Comprehensive, up-to-date survey of major language group
Of interest to theoretical linguists but also accessible to students of particular Indian languages
Includes appendices of Indo-Aryan languages and dialects and their subclassifications
Reviews & endorsements
"...this should prove to be a practical and serviceable reference book for many years to come." Canadian Journal of Linguistics
"This is a wonderful piece of scholarship...No anthropological linguist planning work in South Asia should leave home without it." American Anthropologist
"...Masica's text is a valuable addition to the Indo-Europeanist's library and is a must for Aryan studies. It is destined to become a classic." Jacob Caflisch, Sr., Language Quarterly
"...Dr. Masica is to be congratulated on assembling and organizing so much material, much of it little known, with the clarity and accuracy he has done and in maintaining the identifiable perspective of a single scholar on it....This very fine book will be much used and will not easily be replaced as a guide to the structure, modes of functioning, evolving interrelationships, and unresolved complexities of its varied subject-matter." Journal of the American Oriental Society
"...a genuine monument of erudition....Dr. Masica is remarkably conversant with all the available information and he digests it exceedingly well for the non-specialist and for the scholar as well....The volume is completed by rich appendices, copious notes and indices, as well as a very detailed bibliography. Clear tables illustrate the discussion." Journal of Indo-European Studies
Product details
Published: September 1993
Format: Paperback
ISBN: 9780521299442
Length: 560 pages
Dimensions: 229 × 152 × 32 mm
Weight: 0.81kg
Availability: Available
Contents
Introduction
The modern Indo-Aryan languages and dialects
The historical context and development of Indo-Aryan
The nature of the New Indo-Aryan lexicon
NIA descriptive phonology
Writing systems
Historical phonology
Nominal forms and categories
Verbal forms and categories
Syntax
Appendix I Inventory of NIA languages and dialects
Appendix II Schemes of NIA subclassification.
About the authors
Author
Colin P. Masica
|
21
|
Siberian Mathematical Journal, Vol. 58, No. 1, pp. , 2017. Original Russian Text Copyright © 2017 Bernshteyn A. Yu., Kostochka A. V., and Pron S. P.
ON DP-COLORING OF GRAPHS AND MULTIGRAPHS
A. Yu. Bernshteyn, A. V. Kostochka, and S. P. Pron
UDC 519.17
Abstract: While solving a question on the list coloring of planar graphs, Dvořák and Postle introduced the new notion of DP-coloring (they called it correspondence coloring). A DP-coloring of a graph G reduces the problem of finding a coloring of G from a given list L to the problem of finding a "large" independent set in the auxiliary graph H(G, L) with vertex set {(v, c) : v ∈ V(G) and c ∈ L(v)}. It is similar to the old reduction by Plesnevič and Vizing of the k-coloring problem to the problem of finding an independent set of size |V(G)| in the Cartesian product G □ Kk, but DP-coloring seems more promising and useful than the Plesnevič–Vizing reduction. Some properties of the DP-chromatic number χDP(G) resemble the properties of the list chromatic number χℓ(G) but some differ quite a lot. It is always the case that χDP(G) ≥ χℓ(G). The goal of this note is to introduce DP-colorings for multigraphs and to prove for them an analog of the result of Borodin and Erdős–Rubin–Taylor characterizing the multigraphs that do not admit DP-colorings from some DP-degree-lists. This characterization yields an analog of Gallai's Theorem on the minimum number of edges in n-vertex graphs critical with respect to DP-coloring.
DOI: 10.1134/S0037446617010049
Keywords: vertex degrees, list coloring, critical graphs
1. Introduction
All graphs in this note are assumed simple; i.e., they cannot have parallel edges or loops; multigraphs may have multiple edges but not loops. The complete n-vertex graph is denoted by Kn, and the n-vertex cycle is denoted by Cn. If G is a (multi)graph and v, u ∈ V(G), then EG(v, u) denotes the set of all edges in G connecting v and u, eG(v, u) := |EG(v, u)|, and degG(v) := Σ_{u ∈ V(G)\{v}} eG(v, u). For A ⊆ V(G), G[A] denotes the sub(multi)graph of G induced by A, and for disjoint A, B ⊆ V(G), G[A, B] denotes the maximal bipartite sub(multi)graph of G with parts A and B. If G1, ..., Gk are (multi)graphs, then G1 + ··· + Gk denotes the (multi)graph with vertex set V(G1) ∪ ··· ∪ V(Gk) and edge set E(G1) ∪ ··· ∪ E(Gk).
The independence number of G is denoted by α(G). Given k ∈Z>0, let [k] denote the set {1, . . . , k}.
Recall that a (proper) k-coloring of G is a mapping f : V(G) → [k] such that f(v) ≠ f(u) whenever vu ∈ E(G). The smallest k such that G has a k-coloring is called the chromatic number of G and is denoted by χ(G). Plesnevič and Vizing proved that G has a k-coloring if and only if the Cartesian product G □ Kk includes an independent set of size |V(G)|, i.e., α(G □ Kk) = |V(G)|.
In order to tackle some graph coloring problems, Vizing and independently Erdős, Rubin, and Taylor introduced a more general notion of list coloring. A list L for a graph G is a map L : V(G) → Pow(Z>0) that assigns to each vertex v ∈ V(G) a set L(v) ⊆ Z>0. An L-coloring of G is a mapping f : V(G) → Z>0 such that f(v) ∈ L(v) for each v ∈ V(G) and f(v) ≠ f(u) whenever vu ∈ E(G). The list chromatic number, χℓ(G), is the minimum k such that G has an L-coloring for each L satisfying |L(v)| = k for every v ∈ V(G).
Since G is k-colorable if and only if G is L-colorable with the list L : v →[k], we have χℓ(G) ≥χ(G) for every G; however, the difference χℓ(G) −χ(G) can be arbitrarily large.
Moreover, graphs with chromatic number 2 may have arbitrarily high list chromatic number. While 2-colorable graphs may have arbitrarily high minimum degree, Alon showed that χℓ(G) ≥ (1/2 − o(1)) log2 δ for each graph G with minimum degree δ.
(The first author was supported by the Illinois Distinguished Fellowship. The second author was supported by the Russian Foundation for Basic Research (Grants 15-01-05867 and 16-01-00499) and the NSF (Grants DMS-1266016 and DMS-1600592). Urbana–Champaign; Novosibirsk; Barnaul. Translated from Sibirskiĭ Matematicheskiĭ Zhurnal, Vol. 58, No. 1, pp. 36-47, January–February, 2017; DOI: 10.17377/smzh.2017.58.104. Original article submitted March 21, 2016.)
On the other hand, some well-known upper bounds for χ(G) in terms of vertex degrees hold for χℓ(G) as well. For example, Brooks' Theorem and the degeneracy upper bound hold for χℓ(G). Furthermore, Borodin [5, 6] and independently Erdős, Rubin, and Taylor generalized Brooks' Theorem to degree lists.
Recall that a list L for G is a degree list if |L(v)| = degG(v) for every v ∈V (G).
Theorem 1 [3, 5, 6] (a simple proof in ). Suppose that G is a connected graph. Then G is not L-colorable for some degree list L if and only if each block of G is either a complete graph or an odd cycle.
This result yields an extension of Gallai’s bound on the minimum number of edges in n-vertex k-critical graphs (i.e., graphs G with χ(G) = k such that after deletion of any edge or vertex the chromatic number decreases) to n-vertex list-k-critical graphs (i.e., graphs G with χℓ(G) = k such that after deletion of any edge or vertex the list chromatic number decreases).
List coloring proved useful in establishing a number of results for ordinary graph coloring; however, generally it is often much harder to prove upper bounds on the list chromatic number than on the chromatic number. In order to prove such an upper bound for a class of planar graphs, Dvořák and Postle introduced and heavily used a new generalization of list coloring; they called it correspondence coloring, and we will call it DP-coloring, for short.
First, we show how to reduce to DP-coloring the problem of L-coloring of a graph G. Given a list L for G, the vertex set of the auxiliary graph H = H(G, L) is {(v, c) : v ∈V (G) and c ∈L(v)}, and two distinct vertices (v, c) and (v′, c′) are adjacent in H if and only if either c = c′ and vv′ ∈E(G), or v = v′.
Note that the independence number of H is at most |V (G)|, since V (H) is covered by |V (G)| cliques.
If H has an independent set I with |I| = |V (G)|, then, for each v ∈V (G), there is a unique c ∈L(v) such that (v, c) ∈I. Moreover, the same color c is not chosen for every two adjacent vertices. In other words, the map f : V (G) →Z>0 defined by (v, f(v)) ∈I is an L-coloring of G. On the other hand, if G has an L-coloring f, then the set {(v, f(v)) : v ∈V (G)} is an independent set of size |V (G)| in H.
By construction, for every distinct v, v′ ∈ V(G), the set of edges of H connecting {(v, c) : c ∈ L(v)} and {(v′, c′) : c′ ∈ L(v′)} is empty if vv′ ∉ E(G) and forms a matching (possibly empty) if vv′ ∈ E(G).
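The construction of H(G, L) is easy to carry out mechanically. The following Python sketch is our own illustration, not part of the paper; the representation of G as an edge list and of L as a dict is an assumption made only for the example.

```python
from itertools import combinations

def build_H(G_edges, L):
    """Build the auxiliary graph H(G, L): vertices are pairs (v, c) with c in L(v);
    (v, c) and (v', c') are adjacent iff v = v', or c = c' and vv' is an edge of G."""
    vertices = [(v, c) for v in L for c in L[v]]
    edges = set()
    for v in L:                                   # each list induces a clique
        for c1, c2 in combinations(L[v], 2):
            edges.add(frozenset(((v, c1), (v, c2))))
    for v, w in G_edges:                          # equal colors on adjacent vertices of G
        for c in set(L[v]) & set(L[w]):
            edges.add(frozenset(((v, c), (w, c))))
    return vertices, edges
```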
Based on these properties of H(G, L), Dvořák and Postle introduced the DP-coloring. The phrasing below is slightly different, but the essence and the spirit are theirs.
Definition 2. Let G be a graph. A cover of G is a pair (L, H), where L is an assignment of pairwise disjoint sets to the vertices of G and H is a graph with vertex set ⋃_{v∈V(G)} L(v) satisfying the following conditions:
1. H[L(v)] is a complete graph for each v ∈ V(G).
2. For each uv ∈ E(G) the edges between L(u) and L(v) form a matching (possibly empty).
3. For every two distinct u, v ∈ V(G) with uv ∉ E(G) no edges of H connect L(u) and L(v).
Definition 3. Suppose that G is a graph and (L, H) is a cover of G. An (L, H)-coloring of G is an independent set I ⊆V (H) of size |V (G)|. In this context, we refer to the vertices of H as the colors.
G is said to be (L, H)-colorable if G admits an (L, H)-coloring.
Note that if (L, H) is a cover of G and I is an (L, H)-coloring, then |I ∩L(v)| = 1 for all v ∈V (G).
Fig. 1 shows an example of two distinct covers of G ≅ C4.
Definition 4. Let G be a graph and let f: V (G) →Z≥0 be an assignment of nonnegative integers to the vertices of G. Say that G is DP-f-colorable if G is (L, H)-colorable whenever (L, H) is a cover of G and |L(v)| ≥f(v) for all v ∈V (G). If G is DP-degG-colorable, then G is said to be DP-degree-colorable.
Definition 5. The DP-chromatic number, χDP (G), is the minimum k such that G is (L, H)-colorable for each choice of (L, H) with |L(v)| ≥k for all v ∈V (G).
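Definitions 3–5 can be tested directly, if only by brute force: pick one color from each list and check that the chosen set is independent in H. The minimal Python sketch below is again our own illustration (it reuses the set-of-frozensets edge representation of the sketch above); it is exponential and meant only for tiny examples such as the two covers of C4 in Fig. 1.

```python
from itertools import combinations, product

def find_LH_coloring(L, H_edges):
    """Search for an (L, H)-coloring: one color per vertex, no H-edge inside the choice.
    Returns a dict vertex -> chosen color, or None if G is not (L, H)-colorable."""
    order = list(L)
    for choice in product(*(L[v] for v in order)):
        if all(frozenset(pair) not in H_edges for pair in combinations(choice, 2)):
            return dict(zip(order, choice))
    return None
```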
Dvořák and Postle observed that χDP(G) ≤ k + 1 for every k-degenerate graph G and that Brooks' Theorem almost holds for DP-colorings, with the exception that χDP(Cn) = 3 for every cycle Cn and not only for odd n, as for list coloring.
[Fig. 1. The graph C4 and two covers of C4 such that C4 is (L, H1)-colorable but not (L, H2)-colorable.]
The fact that χDP(C4) = 3 marks an important difference between DP-coloring and list coloring, since it implies that the orientation theorems of Alon–Tarsi and the Bondy–Boppana–Siegel Lemma (see ) on list coloring do not extend to DP-coloring. Dvořák and Postle also point out that the proof of Thomassen's Theorem on list-5-colorability of planar graphs extends to DP-coloring. The first author of this note showed that the lower bound on the DP-chromatic number of a graph G with minimum degree δ is much stronger than Alon's bound for list coloring; namely, χDP(G) ≥ Ω(δ/log δ). On the other hand, he proved an analog of Johansson's upper bound on the list chromatic number of triangle-free graphs with given maximum degree.
The goal of this article is to naturally extend the notion of DP-coloring to multigraphs and to derive some simple properties of DP-colorings of multigraphs. The main result is an analog of Theorem 1: a characterization of connected multigraphs that are not DP-degree-colorable. This result also yields a lower bound on the number of edges in n-vertex DP-critical graphs (we define such graphs in the next section).
The structure of the article is as follows: In the next section we define DP-coloring of multigraphs and related notions, discuss some examples, and state our main result, Theorem 9. In Section 3 we prove the main result. In Section 4 we briefly discuss DP-critical (multi)graphs and show a bound on the number of edges in them which is implied by the main result. For completeness, in the Appendix we present a DP-version of Gallai’s proof of his lemma on the number of edges in so-called Gallai trees (the original paper is in German).
2. Definitions and the Main Result
To define DP-coloring for multigraphs, we only need to change Definition 2 as below and replace the word graph with the word multigraph in Definitions 3–5. The new version of Definition 2 is
Definition 6. Let G be a multigraph. A cover of G is a pair (L, H), where L is an assignment of pairwise disjoint sets to the vertices of G and H is a graph with vertex set ⋃_{v∈V(G)} L(v) satisfying the following conditions:
1. H[L(v)] is a complete graph for each v ∈ V(G).
2. The set of edges between L(u) and L(v) is the union of eG(u, v) (possibly empty) matchings for every two distinct u, v ∈ V(G).
Given a positive integer k and a multigraph G, let G^k denote the multigraph that is obtained from G by replacing each edge in G with a set of k parallel edges. In particular, G^1 = G for every G. The next two lemmas demonstrate the two classes of multigraphs that are not DP-degree-colorable; the first of them exhibits multigraphs whose DP-chromatic number exceeds the number of vertices. In particular, for each k ≥ 2, the 2-vertex multigraph K_2^k has DP-chromatic number k + 1.
Lemma 7. K_n^k is not DP-degree-colorable.
Proof. Let G := K_n^k. For each v ∈ V(G), let L(v) := {(v, i, j) : i ∈ [n−1], j ∈ [k]}, and (v1, i1, j1)(v2, i2, j2) ∈ E(H) ⟺ v1 = v2 ∨ i1 = i2.
Then (L, H) is a cover of G and |L(v)| = k(n−1) = degG(v) for all v ∈ V(G). We claim that G is not (L, H)-colorable. Indeed, if I ⊆ V(H) is such that |I ∩ L(v)| = 1 for all v ∈ V, then for some distinct (v1, i1, j1), (v2, i2, j2) ∈ I, we have i1 = i2. Thus, I is not an independent set. □
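The cover built in this proof can be generated explicitly and handed to the brute-force search sketched earlier, which for small n and k confirms that no (L, H)-coloring exists. The encoding below (vertices of K_n^k taken to be 1, ..., n) is our own illustration, not part of the paper.

```python
def cover_for_Knk(n, k):
    """Cover (L, H) from the proof of Lemma 7 for G = K_n^k (illustrative encoding).
    Colors are triples (v, i, j), i in [n-1], j in [k]; two distinct colors are
    H-adjacent iff they lie on the same vertex v or carry the same index i."""
    L = {v: [(v, i, j) for i in range(1, n) for j in range(1, k + 1)]
         for v in range(1, n + 1)}
    colors = [c for lst in L.values() for c in lst]
    H_edges = {frozenset((a, b)) for a in colors for b in colors
               if a != b and (a[0] == b[0] or a[1] == b[1])}
    return L, H_edges

# Expected: find_LH_coloring(*cover_for_Knk(3, 2)) is None, since |L(v)| = k(n-1) = deg(v).
```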
Lemma 8. C_n^k is not DP-degree-colorable.
Proof. Let G := C_n^k. Without loss of generality, assume that V(G) = [n] and eG(u, v) = k if and only if |u − v| = 1 or {u, v} = {1, n}. For each v ∈ [n], let L(v) := {(v, i, j) : i ∈ {0, 1}, j ∈ [k]}, and let (v1, i1, j1)(v2, i2, j2) ∈ E(H) :⟺ v1 = v2 ∨ (|v1 − v2| = 1 ∧ i1 = i2) ∨ ({v1, v2} = {1, n} ∧ i1 = i2 + 1 + n (mod 2)).
Then (L, H) is a cover of G and |L(v)| = 2k = degG(v) for all v ∈ [n]. We claim that G is not (L, H)-colorable. Indeed, suppose that I ⊂ V(H) is an (L, H)-coloring of G. Let I = {(v, iv, jv) : v ∈ [n]}. Without loss of generality, assume that i1 = 1. Then for each v ∈ [n], iv = v (mod 2). Thus, i1 = in + 1 + n (mod 2), so (1, i1, j1)(n, in, jn) ∈ E(H); therefore, I is not independent. □
Our main result shows that the above lemmas describe all 2-connected multigraphs that are not DP-degree-colorable.
Theorem 9. Suppose that G is a connected multigraph. Then G is not DP-degree-colorable if and only if each block of G is one of the graphs K_n^k and C_n^k for some n and k.
The result has an implication for the number of edges in DP-k-critical graphs and multigraphs, i.e., (multi)graphs G with χDP(G) = k such that every proper sub(multi)graph of G has a smaller DP-chromatic number. It is easy to show (and follows from the above lemmas and Lemma 12 in the next section) that K_n^k is DP-(k(n−1) + 1)-critical and C_n^k is DP-(2k + 1)-critical. It is also easy to show (and follows from Theorem 9) that
2|E(G)| ≥ (k − 1) n for every n-vertex DP-k-critical multigraph G.  (1)
The examples of C_n^k show that, for each odd k ≥ 3, there are infinitely many 2-connected DP-k-critical multigraphs G with equality in (1). However, if we consider only simple graphs, then Theorem 9 implies a stronger bound than (1), which is an analog of Gallai's bound for ordinary coloring (see for list coloring):
Corollary 10. Let k ≥ 4 and let G be a DP-k-critical graph distinct from Kk. Then
2|E(G)| ≥ (k − 1 + (k−3)/(k²−3)) n.  (2)
We will prove Theorem 9 in the next section and derive Corollary 10 in Section 4.
3. Proof of Theorem 9
We proceed by a series of lemmas.
Lemma 11. Suppose that G is a regular n-vertex multigraph whose underlying simple graph is a cycle. Then G is not DP-degree-colorable if and only if G ≅ C_n^k for some k.
Proof. Without loss of generality, assume that V(G) = [n] and eG(u, v) > 0 if and only if |u − v| = 1 or {u, v} = {1, n}. Suppose that G ≇ C_n^k. Since G is regular, this implies that n is even and for some distinct positive r, s, eG(v, v + 1) = r for all odd v ∈ [n] and eG(1, n) = eG(v, v + 1) = s for all even v ∈ [n − 1]. Without loss of generality, assume that s > r.
Let (L, H) be a cover of G such that |L(v)| = degG(v) = r + s for all v ∈[n].
We will show that G is (L, H)-colorable.
For x ∈ L(1), say that a color y ∈ L(v) is x-admissible if there exists I ⊆ V(H) independent in H − EH(L(1), L(n)) such that |I ∩ L(u)| = 1 for all u ∈ [v] and {x, y} ⊆ I.
Let Ax(v) ⊆L(v) denote the set of all x-admissible colors in L(v). Clearly, |Ax(2)| ≥s and |Ax(3)| ≥r for each x ∈L(1). Suppose that |Ax(3)| > r for some x ∈L(1). Since each color in L(4) has at most r neighbors in L(3), Ax(4) = L(4). Similarly, Ax(v) = L(v) for all v ≥4. In particular, Ax(n) = L(n).
Take any y ∈ L(n) \ NH(x). Since y ∈ Ax(n), there is a set I ⊆ V(H) independent in H − EH(L(1), L(n)) such that |I ∩ L(u)| = 1 for all u ∈ [n] and {x, y} ⊆ I. But then I is independent in H, and so I is an (L, H)-coloring of G. Thus, we may assume that |Ax(3)| = r for all x ∈ L(1). Note that L(3) \ Ax(3) = L(3) ∩ ⋂_{y∈Ax(2)} NH(y).
Therefore, L(3)∩NH(y) is the same set of size s for all y ∈Ax(2). Since each vertex in L(3) has at most s neighbors in L(2), the graph H[Ax(2) ∪(L(3) \ Ax(3))] is a complete 2s-vertex graph. Since every vertex in L(2) is x-admissible for some x ∈L(1), H[L(2)∪L(3)] includes a disjoint union of at least two complete 2s-vertex graphs. Hence, |L(2) ∪L(3)| ≥4s. But |L(2)| = |L(3)| = r + s < 2s; a contradiction.
□ Lemma 12. Let G be a connected multigraph and suppose that (L, H) is a cover of G such that |L(v)| ≥degG(v) for all v ∈V (G), and |L(v0)| > degG(v0) for some v0 ∈V (G). Then G is (L, H)-colorable.
Proof. If |V (G)| = 1 then the claim is obvious. Suppose now that G is a counterexample with the fewest vertices. Consider the multigraph G′ := G −v0. For each v ∈V (G′), let L′(v) := L(v), and let H′ := H −L(v0). By construction, (L′, H′) is a cover of G′ such that for all v ∈V (G′), |L(v)| ≥degG′(v).
Moreover, since G is connected, each connected component of G′ contains a vertex u adjacent in G to v0 and thus satisfying degG′(u) < degG(u). Hence, by the minimality assumption, G′ is (L′, H′)-colorable.
Let I′ ⊆V (H′) be an (L′, H′)-coloring of G′. Then |NG(I′) ∩L(v0)| ≤degG(v0), so L(v0) \ NG(I′) ̸= ∅.
Thus, I′ can be extended to an (L, H)-coloring I of G; a contradiction.
□ Lemma 13. Let G be a connected multigraph and let (L, H) be a cover of G. Suppose that there are v1 ∈V (G) and x1 ∈L(v1) such that G −v1 is connected and for some v2 ∈V (G) \ {v1}, x1 has fewer than eG(v1, v2) neighbors in L(v2). Then G is (L, H)-colorable.
Proof. Put G′ := G −v1. For each v ∈V (G′), let L′(v) := L(v) \ NH(x1), and H′ := H −L(v1) − NH(x1). Then (L′, H′) is a cover of G′. Moreover, for each v ∈V (G′), |L′(v)| = |L(v)| −|L(v) ∩NH(x1)| ≥degG(v) −eG(v, v1) = degG′(v), and |L′(v2)| = |L(v2)| −|L(v2) ∩NH(x1)| > degG(v2) −eG(v2, v1) = degG′(v2).
Since G′ is connected, Lemma 12 implies that G′ is (L′, H′)-colorable. But if I′ ⊆V (H′) is an (L′, H′)-coloring of G′, then I′ ∪{x1} is an (L, H)-coloring of G, as desired.
□ Lemma 14. Suppose that G is a 2-connected multigraph and (L, H) is a cover of G with |L(v)| ≥ degG(v) for each v ∈V (G). If G is not (L, H)-colorable, then G is regular and for each pair of adjacent vertices v1, v2 ∈V (G), the bipartite graph H[L(v1), L(v2)] is eG(v1, v2)-regular.
Proof. Consider any two adjacent v1, v2 ∈V (G). By Lemma 13, H[L(v1), L(v2)] is an eG(v1, v2)-regular bipartite graph with parts L(v1) and L(v2). Therefore, |L(v1)| = |L(v2)|, and so degG(v1) = degG(v2), as desired. Since G is connected while v1 and v2 are arbitrary adjacent vertices in G, this implies that G is regular.
□ Lemma 15. Let G be a 2-connected multigraph. Suppose that u1, u2, w ∈V (G) are distinct vertices such that G −u1 −u2 is connected, eG(u1, u2) < eG(u1, w), and eG(u2, w) ≥1. Then G is DP-degree-colorable.
Proof. Suppose that G is not (L, H)-colorable for some cover (L, H) with |L(v)| = degG(v) for all v ∈ V(G). We show first that there are nonadjacent x1 ∈ L(u1) and x2 ∈ L(u2) with
NH(x1) ∩ NH(x2) ∩ L(w) ≠ ∅.  (3)
Indeed, consider any x2 ∈ L(u2).
By Lemma 14, |L(w) ∩NH(x2)| = eG(u2, w) ≥1.
Similarly, for each y ∈L(w) ∩NH(x2), |L(u1) ∩NH(y)| = eG(u1, w) > eG(u1, u2) = |L(u1) ∩NH(x2)|. Thus, there exists x1 ∈(L(u1) ∩NH(y)) \ (L(u1) ∩NH(x2)). By the choice, x1 and x2 are nonadjacent and y ∈ NH(x1) ∩NH(x2) ∩L(w). This proves (3).
Let x1 and x2 satisfy (3). Let G′ := G −u1 −u2. Given v ∈V (G′), put L′(v) := L(v) \ (NH(x1) ∪ NH(x2)), and H′ := H −L(u1) −L(u2) −NH(x1) −NH(x2). Then G′ is connected and (L′, H′) is a cover of G′ satisfying the conditions of Lemma 12 with w in the role of v0. Thus G′ is (L′, H′)-colorable, and hence G is (L, H)-colorable; a contradiction.
Lemma 16. Suppose that G is an n-vertex 2-connected multigraph that contains a vertex adjacent to all other vertices. Then either G ≅ K_n^k for some k or G is DP-degree-colorable.
Proof. Suppose that G is an n-vertex multigraph that is not DP-degree-colorable and assume that w ∈ V(G) is adjacent to all other vertices. If some distinct u1, u2 ∈ V(G) \ {w} are nonadjacent, then the triple u1, u2, w satisfies the conditions of Lemma 15, and so G is DP-degree-colorable. Hence every two vertices in G are adjacent; in other words, the underlying simple graph of G is Kn. It remains to show that every two vertices in G are connected by the same number of edges. Indeed, if u1, u2, u3 ∈ V(G) are such that eG(u1, u2) < eG(u1, u3), then, by Lemma 15 again, G is DP-degree-colorable. □
Lemma 17. Suppose that G is a 2-connected n-vertex multigraph in which each vertex has at most 2 neighbors. Then either G ≅ C_n^k for some k, or G is DP-degree-colorable.
Proof. Suppose that G is a 2-connected n-vertex multigraph in which each vertex has at most 2 neighbors and that is not DP-degree-colorable. Then the underlying simple graph of G is a cycle and Lemma 14 implies that G is regular. Hence, G ≅ C_n^k by Lemma 11. □
Lemma 18. Suppose that G is a 2-connected n-vertex multigraph that is not DP-degree-colorable. Then G ≅ K_n^k or C_n^k for some k.
Proof. By Lemmas 16 and 17, we may assume that G contains a vertex u such that 3 ≤|NG(u)| ≤ n −2. Since G is 2-connected, G −u is connected. However, G −u is not 2-connected. Indeed, let u1 be any vertex in V (G) \ ({u} ∪NG(u)) that shares a neighbor w with u. By Lemma 15 with u in place of u2, G −u1 −u is disconnected, so u1 is a cut vertex in G −u.
Therefore, G − u has at least two leaf blocks, say B1 and B2. For i ∈ [2], let xi be the cut vertex of G − u contained in Bi. Since G itself is 2-connected, u has a neighbor ui ∈ Bi − xi for each i ∈ [2].
Then u1 and u2 are nonadjacent and G − u − u1 − u2 is connected. Since u has at least 3 neighbors, G − u1 − u2 is also connected. Hence, we are done by Lemma 15 with u in the role of w.
□ Lemma 19. Suppose that w ∈V (G), G = G1 + G2, and V (G1) ∩V (G2) = {w}. If G1 and G2 are not DP-degree-colorable, then G is not DP-degree-colorable.
Proof. Suppose that G1 is not (L1, H1)-colorable and G2 is not (L2, H2)-colorable, where for each i ∈ [2], (Li, Hi) is a cover of Gi such that |Li(v)| = degGi(v) for all v ∈ V(Gi). Without loss of generality, assume that L1(v1) ∩ L2(v2) = ∅ for all v1 ∈ V(G1) and v2 ∈ V(G2). For each v ∈ V(G), let
L(v) := L1(v) if v ∈ V(G1) \ {w};  L2(v) if v ∈ V(G2) \ {w};  L1(w) ∪ L2(w) if v = w,
and let H := H1 + H2 + K(L(w)), where K(L(w)) denotes the complete graph with vertex set L(w).
Then (L, H) is a cover of G and for each v ∈V (G), |L(v)| = degG(v). Suppose that G is (L, H)-colorable and let I be an (L, H)-coloring of G. Without loss of generality, assume that I ∩L(w) ⊆L1(w). Then I ∩V (H1) is an (L1, H1)-coloring of G1; a contradiction.
Proof of Theorem 9. Lemmas 7, 8, and 19 show that if each block of G is isomorphic to one of the multigraphs K_n^k and C_n^k for some n and k, then G is not DP-degree-colorable.
Now assume that G is a connected multigraph that is not DP-degree-colorable. If G is 2-connected, then we are done by Lemma 18. Therefore, we may assume that G has a cut vertex w ∈ V(G). Let G1 and G2 be nontrivial connected subgraphs of G such that G = G1 + G2 and V(G1) ∩ V(G2) = {w}. It remains to show that neither G1 nor G2 is DP-degree-colorable, since then we will be done by induction.
Suppose towards a contradiction that G1 is DP-degree-colorable. Let (L, H) be a cover of G such that |L(v)| = degG(v) for all v ∈ V(G). By Lemma 12 applied to the connected components of G2 − w, there exists an independent set I2 ⊆ ⋃_{v∈V(G2)\{w}} L(v) such that |L(v) ∩ I2| = 1 for all v ∈ V(G2) \ {w}. Given v ∈ V(G1), put L1(v) := L(v) \ NH(I2). (Note that L1(v) = L(v) for all v ∈ V(G1) \ {w}.) Also, let H1 := H[⋃_{v∈V(G1)} L1(v)].
Then (L1, H1) is a cover of G1. Note that |L1(v)| = |L(v)| = degG(v) = degG1(v) for each v ∈ V(G1) \ {w}; and for w we have |L1(w)| = |L(w)| − |NH(I2) ∩ L(w)| ≥ degG(w) − degG2(w) = degG1(w). Since G1 is DP-degree-colorable, it is (L1, H1)-colorable. But if I1 is an (L1, H1)-coloring of G1, then I1 ∪ I2 is an (L, H)-coloring of G.
□
4. On DP-Critical Graphs
Gallai in proved (2) for ordinary k-critical n-vertex graphs using an upper bound on the number of edges in Gallai trees—the connected graphs in which every block is a complete graph or an odd cycle.
We will need the same statement for GDP-trees—the graphs in which each block is a complete graph or a cycle (not necessarily odd).
Lemma 20. Let k ≥ 4 and let T be an n-vertex GDP-tree with maximum degree Δ(T) ≤ k − 1 not containing Kk. Then
2|E(T)| ≤ (k − 2 + 2/(k−1)) n.  (4)
The proof is the same as Gallai's. We present the proof in the Appendix, since Gallai's paper is in German. Below is the rest of the proof of Corollary 10. It is based on Gallai's ideas but is shorter.
We use discharging. Let G be an n-vertex DP-k-critical graph distinct from Kk. Note that the minimum degree of G is at least k − 1. The initial charge of each vertex v ∈ V(G) is ch(v) := degG(v).
The only discharging rule is this:
(R1) Each vertex v ∈ V(G) with degG(v) ≥ k sends to each neighbor the charge (k−1)/(k²−3).
Denote the new charge of each vertex v by ch*(v). We will show that
Σ_{v∈V(G)} ch*(v) ≥ (k − 1 + (k−3)/(k²−3)) n.  (5)
Indeed, if degG(v) ≥ k, then
ch*(v) ≥ degG(v) − (k−1)/(k²−3) · degG(v) ≥ k (1 − (k−1)/(k²−3)) = k − 1 + (k−3)/(k²−3).  (6)
Also, if T is a component of the subgraph G′ of G induced by the vertices of degree k − 1, then
Σ_{v∈V(T)} ch*(v) ≥ (k − 1)|V(T)| + (k−1)/(k²−3) · |EG(V(T), V(G) \ V(T))|.
Since T is a GDP-tree and does not contain Kk, by Lemma 20,
|EG(V(T), V(G) \ V(T))| ≥ (k − 1)|V(T)| − (k − 2 + 2/(k−1)) |V(T)| = ((k−3)/(k−1)) |V(T)|.
Thus for every component T of G′ we have
Σ_{v∈V(T)} ch*(v) ≥ (k − 1)|V(T)| + (k−1)/(k²−3) · ((k−3)/(k−1)) · |V(T)| = (k − 1 + (k−3)/(k²−3)) |V(T)|.
Together with (6), this implies (5).
5. Appendix
We essentially repeat Gallai's proof of Lemma 20 by induction on the number of blocks. If T is a block, then, since T ≇ Kk and k ≥ 4, Δ(T) ≤ k − 2, which is stronger than (4).
Suppose that (4) holds for all GDP-trees with at most s blocks and T is a GDP-tree with s + 1 blocks. Let B be a leaf block in T and let x be the cut vertex in V (B). Let D := Δ(B).
Case 1: D ≤k −3.
Let T ′ := T −(V (B) \ {x}).
Then T ′ is a GDP-tree with s blocks.
So 2|E(T)| = 2|E(T′)| + D|V(B)| and, by induction,
2|E(T′)| ≤ (k − 2 + 2/(k−1)) (n − |V(B)| + 1).
If B = Kr, then r = D + 1 ≤ k − 2. So in this case
2|E(T)| − (k − 2 + 2/(k−1)) n ≤ (k − 2 + 2/(k−1)) (n − D) + D(D + 1) − (k − 2 + 2/(k−1)) n = D (−k + 2 − 2/(k−1) + D + 1) ≤ −2D/(k−1) < 0,
as claimed. Similarly, if B = Ct, then, by the case, k ≥ 5 and
2|E(T)| − (k − 2 + 2/(k−1)) n ≤ (k − 2 + 2/(k−1)) (n − t + 1) + 2t − n (k − 2 + 2/(k−1)) = (t − 1) (−k + 2 − 2/(k−1) + 2) + 2 < 2(−k + 4) + 2 ≤ 0.
Case 2: D = k − 2. Since Δ(T) ≤ k − 1, only one block B′ apart from B may contain x and this B′ must be K2. Let T′′ = T − V(B). Then T′′ is a GDP-tree with s − 1 blocks. So 2|E(T)| = 2|E(T′′)| + D|V(B)| + 2 and, by induction,
2|E(T′′)| ≤ (k − 2 + 2/(k−1)) (n − |V(B)|).
Hence in this case, since |V(B)| ≥ D + 1 = k − 1,
2|E(T)| − (k − 2 + 2/(k−1)) n ≤ (k − 2 + 2/(k−1)) (n − |V(B)|) + (k − 2)|V(B)| + 2 − (k − 2 + 2/(k−1)) n = |V(B)| (−k + 2 − 2/(k−1) + k − 2) + 2 = −(2/(k−1)) |V(B)| + 2 ≤ 0,
again.
□ Acknowledgment. The authors are grateful to the referee for insightful comments.
References
1. Plesnevič G. S. and Vizing V. G., "On the problem of the minimal coloring of the vertices of a graph," Sib. Mat. Zh., vol. 6, no. 1, 234–236 (1965).
2. Vizing V. G., “Colouring the vertices of a graph with prescribed colours,” Metody Diskretnogo Analiza v Teorii Kodov i Skhem, vol. 29, 3–10 (1976).
3. Erdős P., Rubin A. L., and Taylor H., "Choosability in graphs," Proc. West Coast Conf. Combinatorics, Graph Theory and Computing (Humboldt State Univ., Arcata, CA, 1979) / Congr. Numer., vol. 26, 125–157 (1980).
4. Alon N., “Degrees and choice numbers,” Random Struct. Algorithms, vol. 16, 364–368 (2000).
5. Borodin O. V., “Criterion of chromaticity of a degree prescription,” in: Abstracts of IV All-Union Conf. on Theoretical Cybernetics [Russian], Novosibirsk, 1977, 127–128.
6. Borodin O. V., Problems of Coloring and of Covering the Vertex Set of a Graph by Induced Subgraphs [Russian], Ph. D. Thesis, Novosibirsk Univ., Novosibirsk (1979).
7. Kostochka A. V., Stiebitz M., and Wirth B., “The colour theorems of Brooks and Gallai extended,” Discrete Math., vol. 162, 299–303 (1996).
8. Gallai T., “Kritische Graphen. I,” Publ. Math. Inst. Hungar. Acad. Sci., vol. 8, 165–192 (1963).
9. Dvořák Z. and Postle L., "List-coloring embedded graphs without cycles of lengths 4 to 8," available at /abs/1508.03437.
10. Alon N. and Tarsi M., “Colorings and orientations of graphs,” Combinatorica, vol. 12, 125–134 (1992).
11. Bernshteyn A., "The asymptotic behavior of the correspondence chromatic number," available at http://arXiv.org/abs/1602.00347.
12. Johansson A., Asymptotic Choice Number for Triangle-Free Graphs. Technical Report 91–95, DIMACS, 1996.
13. Borodin O. V., “Colorings of plane graphs: A survey,” Discrete Math., vol. 313, 517–539 (2013).
14. Gallai T., “Kritische Graphen. II,” Publ. Math. Inst. Hungar. Acad. Sci., vol. 8, 373–395 (1963).
A. Yu. Bernshteyn
Department of Mathematics, University of Illinois at Urbana–Champaign, IL, USA
E-mail address: [email protected]
A. V. Kostochka
University of Illinois at Urbana–Champaign, IL, USA; Sobolev Institute of Mathematics, Novosibirsk, Russia
E-mail address: [email protected]
S. P. Pron
Altai State University, Barnaul, Russia
E-mail address: [email protected]
|
22
|
ELSEVIER  Theoretical Computer Science 222 (1999) 77-111  www.elsevier.com/locate/tcs
The powerset operator on abstract interpretations
Gilberto Filé, Francesco Ranzato
Dipartimento di Matematica Pura ed Applicata, Università di Padova, Via Belzoni 7, 35131 Padova, Italy
Communicated by G. Levi
Corresponding author. E-mail: [email protected].
Abstract
In the context of the standard Cousot and Cousot framework, refinement operators that systematically produce more precise abstract interpretations from simpler ones are useful. We present a theoretical study of one such operator: the powerset. For any given abstract interpretation, i.e. an abstract domain equipped with corresponding abstract operations, the powerset operator yields a new abstract interpretation, where the abstract domain is (very close to) the powerset of the original one and the operations are accordingly extended. It turns out that the refined powerset domain is able to represent in the best possible way the concrete disjunction. Conditions that guarantee the correctness of the powerset operator are given, and the relationship, with respect to the precision, between any abstract interpretation and its powerset is studied. The general theory is applied to the well-known abstract interpretation POS, typically used for ground-dependency analysis of logic languages. We show that the powerset P(POS) is strictly more precise than POS both at the domain and operations level. Furthermore, the standard bottom-up abstract semantics of logic programs based on POS and P(POS) are compared by exhibiting a completeness relationship between them, i.e. the first semantics can be obtained by abstracting back the second one. © 1999 Elsevier Science B.V. All rights reserved.
Keywords: Abstract interpretation; Powerset operator; Logic program ground-dependency analysis
1. Introduction
Abstract interpretation [13, 14] is a widely known methodology for programming language semantics approximation, which is primarily used for specifying static program analysis frameworks. Its basic idea is as follows. Let L be any programming language and SEM be a semantic description of L parameterized w.r.t. the domain of computation and the semantic operations. Several semantics of L can be specified by giving different interpretations of SEM. The standard or concrete semantics is obtained by specifying the concrete interpretation 𝒞 = (C, o_C^1, ..., o_C^k), where C is the actual domain of computation of the programs, and o_C^1, ..., o_C^k are the operations that can be evaluated during the execution. In this setting, an approximate semantics is specified by giving a nonstandard or abstract interpretation 𝒟 = (D, o_1, ..., o_k) of SEM, where D represents the approximate properties of C, and o_1, ..., o_k mimic on D the operations o_C^1, ..., o_C^k. Following the standard terminology, C and o_C^1, ..., o_C^k are called the concrete domain and operations, respectively, while D and o_1, ..., o_k are, respectively, the abstract domain and operations. The concrete and abstract domains are equipped with partial ordering relations describing the relative precision of domain values (where the top element gives no information). The correctness of the approximation is guaranteed when 𝒟 satisfies some conditions relating it to 𝒞 (in this case, we say that 𝒟 abstracts 𝒞).
In particular, the correspondence between the domains C and D must be given by a Galois connection (γ, D, C, α), where α(c) ≤_D d (or, equivalently, c ≤_C γ(d)) holds if d is an abstract approximation of the concrete value c. Here, α(c) is the most precise abstract (in D) approximation of c, while γ(d) is the concrete meaning of d. The main advantages of abstract interpretation over "ad hoc" dataflow analysis methods are its generality and the fact that it supports the correctness proof of the analysis. Clearly, the accuracy of a semantics approximation depends on the expressiveness of the chosen abstract interpretation. Thus, it is very interesting to define refinement operators that systematically produce new and more precise abstract interpretations from simpler ones (see for a general treatment of refinement operators at the abstract domain level). Examples of well-known refinement operators on abstract domains are Cousot and Cousot's reduced product and Nielson's tensor product. The powerset operator (also called disjunctive completion) on abstract domains was originally introduced by Cousot and Cousot in their seminal work, where it has been exploited in order to demonstrate that a merge-over-all-paths dataflow analysis can be expressed in least fixpoint form. It has been further studied in [15, 17], and applied, e.g., for the definition of comportment analysis for higher-order functional languages in , and in Jensen's disjunctive strictness logic for functional languages. The basic idea of the powerset operator is simple. Given an abstract domain D, where the concrete domain is C, any subset S of D is considered as a denotation for the concrete disjunction of its elements, namely for the lub in C of the meaning of the values in S. Let us consider a classical and very simple example. Assume that the concrete domain is the powerset ℘(ℤ) of the set of integers, equipped with the set-inclusion ordering. The abstract domain is Sign depicted in Fig. 1, which enjoys an obvious Galois connection with ℘(ℤ) (for instance, γ(+) = {x ∈ ℤ : x > 0} and α({−1, −3}) = −). In this case, the concrete disjunction is given by the union of sets of integers. It is simple to verify that the powerset abstract domain P(Sign) is that depicted in Fig. 1. P(Sign) contains three new elements with respect to Sign: For example, [+, −] represents precisely the disjunction of + and −, and therefore [+, −] is a denotation for the set of nonzero integers. Also notice that [⊤] = [+, 0, −], since ℤ = γ(⊤) = γ(+) ∪ γ(0) ∪ γ(−).
[Fig. 1. The abstract domain Sign and its powerset P(Sign).]
It should be clear that the powerset operator can generate very expressive abstract interpretations that allow to improve the precision of a program analysis. Let us consider the example of Mycroft's strictness analysis for functional languages [4, 33]. A function f is said to be strict if it is undefined when applied to undefined arguments. If ⊥ denotes a generic undefined value, then f is strict if f(⊥) = ⊥. For example, the abstract domain for the basic type Int of integers, i.e. for ℘(ℤ_⊥) (where ⊥ denotes undefinedness), is the two-point domain str = {0, 1} (with 0 < 1), where γ(0) = {⊥} and γ(1) = ℤ_⊥. In this case, whenever the strictness analysis of some function f of type Int → Int is able to detect that the semantic abstraction f^a over str of f is such that f^a(0) = 0, then one can infer that f is strict.
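Returning to the Sign and P(Sign) example above, the effect of the powerset construction can be checked mechanically. The Python sketch below is our own illustration, not taken from the paper; a small finite UNIVERSE stands in for ℤ purely so the assertions can run, and the element names are ad hoc.

```python
from itertools import chain

UNIVERSE = frozenset(range(-3, 4))        # finite stand-in for Z (illustration only)
GAMMA = {                                 # concretization of the base domain Sign
    "bot": frozenset(),
    "-":   frozenset(x for x in UNIVERSE if x < 0),
    "0":   frozenset({0}),
    "+":   frozenset(x for x in UNIVERSE if x > 0),
    "top": UNIVERSE,
}

def gamma_powerset(S):
    """Meaning of an element [d1, ..., dm] of P(Sign): the union (the concrete
    disjunction) of the meanings of its members."""
    return frozenset(chain.from_iterable(GAMMA[d] for d in S))

# [+, -] denotes exactly the nonzero integers, which no single element of Sign expresses:
assert gamma_powerset({"+", "-"}) == UNIVERSE - {0}
# [top] and [+, 0, -] have the same meaning, so they are identified in P(Sign):
assert gamma_powerset({"top"}) == gamma_powerset({"+", "0", "-"})
```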
As observed in , the standard abstract domains used for strictness analysis are not able to model precisely the logical disjunction. The abstract domain for the product type Int × Int is given by the cartesian product abstract domain str × str depicted below.
[Diagram: the abstract domain str × str, with elements (0, 0), (0, 1), (1, 0) and (1, 1).]
The meaning of the abstract values of str × str is the most natural: For instance, (0, 1) represents the set of pairs of integers whose first component is undefined, whereas (1, 1) represents the whole set of pairs of integers (defined or not). It is clear that the lub (1, 1) = (0, 1) ∨ (1, 0) is not precise:
γ((0, 1)) ∪ γ((1, 0)) = {(z1, z2) ∈ ℤ_⊥ × ℤ_⊥ : z1 = ⊥ or z2 = ⊥} ⊂ ℤ_⊥ × ℤ_⊥ = γ((1, 1)),
and therefore str × str is less precise than its powerset. Consider now the following functional program suggested by Jensen , where sum performs the addition of a pair of integers:
f(x) = if B then (x, 3) else (3, x)
g(x) = sum(f(x))
Suppose that the value of the Boolean expression B cannot be statically determined, and that we already know that sum is strict in both its components. It is clear that g is a strict function. However, by using the abstract domain str × str we are not able to detect the strictness of f. In fact, the best approximation of applying f to an undefined argument is given by α({(⊥, 3)}) ∨ α({(3, ⊥)}), which in str × str is (1, 1). But evaluating sum for such an abstract value, we cannot detect the strictness of g. On the other hand, if one uses the powerset abstract domain P(str × str) then α({(⊥, 3)}) ∨ α({(3, ⊥)}) is given by [(0, 1), (1, 0)]. Then, we get [sum((0, 1)), sum((1, 0))] = [0], and therefore we detect that g is strict.
This paper contains a general study of the powerset operator on abstract interpretations. For any abstract interpretation 𝒟 = (D, o_1, ..., o_k) abstracting 𝒞 = (C, o_C^1, ..., o_C^k), its powerset P(𝒟) = (P(D), o_1^P, ..., o_k^P) is systematically defined. Thus, our approach considers full abstract interpretations, namely abstract domains equipped with corresponding abstract operations. Conditions on the concrete interpretation 𝒞 that assure the correctness of the powerset P(𝒟) are stated: The concrete domain C must be a completely distributive lattice, and any concrete operation o_C^i must satisfy a restricted form of additivity. We show that the powerset operator actually is a refinement operator in the sense of [21], and therefore P(𝒟) (when it is correct) is an abstract interpretation that is always better than 𝒟. Also, we demonstrate that under certain conditions it is possible to sharpen the definition of the systematic lifting to the powerset of an abstract operation o_i, so that when o_i is complete (i.e., α ∘ o_C^i = o_i ∘ α holds), its lifting to the powerset is complete too. This will be the case of an abstract operation considered in our application to logic program analysis. As recalled above, the powerset operator was first introduced by Cousot and Cousot in , and then successively generalized in [15, 17]. However, in those papers the powerset is applied to abstract domains only, whereas the corresponding new abstract operations are only defined implicitly as the best correct approximations of the concrete ones. In contrast, our approach allows to derive correct abstract operations for the powerset domain that are directly based on the definition of those of the original domain. Moreover, if the latter are finitely computable (and the abstract domain is finite) then so are the former.
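Jensen's example above can be replayed in a few lines. The encoding below is our own illustration (0 and 1 standing for the two points of str); it only shows why the componentwise join loses the strictness of g while the disjunctive, powerset-based analysis keeps it.

```python
BOT, TOP = 0, 1                      # the two-point strictness domain str

def sum_abs(pair):
    """Abstract addition on str x str: strict in both components."""
    a, b = pair
    return BOT if BOT in (a, b) else TOP

def f_abs_join(x):
    """f over str x str: the branches (x, 3) and (3, x) abstract to (x, TOP) and
    (TOP, x); since B is unknown, their componentwise lub is taken."""
    return (max(x, TOP), max(TOP, x))          # = (TOP, TOP) even when x = BOT

def f_abs_powerset(x):
    """f over P(str x str): keep both branch results instead of joining them."""
    return {(x, TOP), (TOP, x)}

assert sum_abs(f_abs_join(BOT)) == TOP                      # strictness of g is lost
assert {sum_abs(p) for p in f_abs_powerset(BOT)} == {BOT}   # ... and recovered here
```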
As far as the abstract domain is concerned, we show that our definition of the powerset is equivalent to those of [15, 17], although they differ from a syntactic perspective. In particular, our approach defines a simple and natural ordering relation on the powerset abstract domain that is based on the obvious set-inclusion relation. The powerset operator is applied to the well-known abstract interpretation POS [1, 10, 31, 32], typically used for ground-dependency analysis of logic languages. The G. Fil~, F Ranzato/Theoretical Computer Science 222 (1999) 7~111 81 abstract domain Pos consists of positive (i.e., true under the unitary truth-assignment) propositional formulae, while the abstract operations are logical conjunction (that sim- ulates concrete unification), disjunction, and existential quantification. We show that the powerset abstract interpretation P(POS) is strictly better than POS. This result is somehow against the intuition, given that the abstract domain Pos is already closed under logical disjunction. In order to clarify this phenomenon, we characterize precisely the subsets of formulae of Pos for which the concretization map preserves their logical disjunction. Furthermore, we show that this is the case for every subset consisting only of monotone formulae, which are exactly the abstraction in POS of all the sets of ground substitutions. We also study the relationship between the standard bottom-up abstract semantics (cf. [3, 5,32]) instantiated to the abstract interpretations POS and P(POS). We prove that the semantics using POS is complete with respect to that us- ing P(POS), in the sense that the former semantics can be obtained by abstracting the latter back to Pos. From this result it follows that, using P(POS) instead of POS for analysing logic programs, one cannot gain plain ground-dependency Pos-information, but possibly only disjunctive ground-dependency information, i.e. information that the base abstract domain Pos is not able to represent with no loss of precision. The rest of the paper is organized as follows. Section 2 contains the basic notations and notions on abstract interpretation used throughout the paper. The powerset operator is defined and studied in Sections 3 and 4. Section 3 is concerned with the powerset abstract domain, while in Section 4 the abstract operations for the powerset domain are defined and studied. The application of the powerset operator to the abstract inter- pretation POS is described in Section 5. Finally, Section 6 contains some concluding remarks. A preliminary short version of this paper appeared as . 2. Preliminaries In this section, we briefly introduce some notation used throughout the paper and summarize some well-known notions concerning abstract interpretation. For more de- tails on Galois connections see e.g. , while for abstract interpretation see the survey . 2.1. Galois connections Throughout the paper, we will use the following basic notation and terminology. If f is a function defined on the set X and A CA" then f(A) = {f(a) : a C A}. We use o for function composition. If X and Y are sets, we write X \ Y to denote the set difference between X and Y, and Y CX to denote that Y is a proper subset of X. If ~< is a partial ordering then a < b stands for a ~< b and a ~ b. If P is a partially ordered set (poset) and y c_p, then max(Y) = {y E Y : Vz E Y. y<<,ez ~ y = z} denotes the set of maximal elements of Y, while ~ Y = {x c P : 3y C Y. x<<,ey} denotes the downward closure of Y. 
A subset Y C_ P is downward closed if Y = ~ Y. A function f • L ~ M between complete lattices is additive if for any X c_ L, f(VL X) = 82 G. Filk, E Ranzato / Theoretical Computer Science 222 (1999) 77-111 VM f(X) (co-additivity is dually defined). A complete lattice L is join-generated by a subset S c L if for all z E L there exists X _c S such that z = VL X. An element x E L of a complete lattice is (completely) join-irreducible if for all X _ L, x = VL X implies x E X; the set of join-irreducible elements of L is denoted by JI(L). We recall the definitions of Galois connection and insertion. If C and D are two posets and ~ : C -~ D, 7 : D -+ C are monotonic functions such that Vc E C. c ~< c 7(cffc)) and Vd E D. ~(7(d))<~od, then the quadruple (7,D, C, ~) is a Galois connection (G.c. for short) between D and C. If in addition Vd E D. ~(7(d)) = d, then (7,D,C,~) is a Galois insertion (G.i. for short) of D in C. We also recall that the above definition of G.c. is equivalent to that of adjunction: (7,D,C,~) is an adjunction if Vc E C.Vd E D. ct(c)<~Dd ¢~ c<~cT(d). The map ~ (7) is called the left (riyht) adjoint to 7 (ct). The following are some well-known properties of Galois connections and insertions that will be useful later on. (i) If (7,D,C,~) is a G.i., then 7 and ~ are 1-1 and onto, respectively. Also, 7 is an embedding, i.e. d <~md' < = > 7(d)<~cT(d'). (ii) If (7,D,C,~) is a G.c. between the posets D and C, then ct preserves lub's (i.e., if for some S C C the lub Vc S exists then the lub VD ct(S) exists, and ~(Vc s) = VD 7(S)), and 7 preserves glb's. (iii) If (7,D, C, ct) is a G.i. of the poset D in the complete lattice C, then D is actually a complete lattice. (iv) Let C and D be posets, and suppose that 7 : D ~ C preserves glb's; in addition, for all c E C assume that ]ko{d E D : c<~c7(d)} exists. If we define ~ : C ~ D as ~(c) = ]ko{d E D : c<~c7(d)}, then (7,D,C,~).is a G.c. between D and C. Moreover, if 7 is 1-1 then it is a G.i. of D in C. (v) In any G.c., one of the two functions uniquely determines the other. Whenever C and D are complete lattices, property (ii) says that ~ and ~ are, re- spectively, additive and co-additive. By property (v), the function ~ as defined in (iv) will be the only mapping such that (7,D,C,7) is a G.c. (it is "the" left adjoint to 7). Moreover, starting with ~ : C ~ D it is possible to state the dual version of (iv). A G.c. (7,A,C,~) is the composition of two G.c.'s (yA,o,A,D,c~o,A) and (TD, c,D,C, ~c,o) if ~ = CtO.A o CtC, O (or, equivalently, 7 = 7D, c o 7A,D). 2.2. Abstract interpretation basics As recalled in the introduction, in the setting of abstract interpretation, the concrete and abstract domains, C and D, are related by a Galois connection (7,D,C,~). Fol- lowing the standard terminology, ~ and 7 are called the abstraction and concretization maps, respectively, and D is also called an abstraction of C. The intuition is that the abstract domain is a representation of some approximate properties of the values of the concrete domain. Both on the concrete and on the abstract domain, a partial order relation describing the relative precision of the values is defined: x ~< y means that x is more precise than y. The concretization map gives the concrete value corresponding to an abstract denotation (i.e. its semantics), whereas for a concrete value the abstraction G. FilO, F Ranzato/Theoretical Computer Science 222 (1999) 77-111 83 map gives its best (w.r.t. the ordering of D) abstract approximation (cf. property (iv) in Section 2.1). 
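As a small illustration of property (iv) above, the following Python sketch derives the abstraction map from a glb-preserving concretization on a deliberately tiny pair of domains and then checks the adjunction and the preservation of lubs by the abstraction. Both domains and all names are invented for the illustration.

```python
# Toy check of property (iv): alpha(c) = glb{ d : c <= gamma(d) } is the left adjoint.
# Invented domains: concrete = subsets of {1, 2} ordered by inclusion;
# abstract = the two-point chain 'empty' <= 'any'.
from itertools import chain, combinations

UNIVERSE = {1, 2}
CONCRETE = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(UNIVERSE), r) for r in range(len(UNIVERSE) + 1))]
ABSTRACT = ['empty', 'any']

def gamma(d):
    return frozenset() if d == 'empty' else frozenset(UNIVERSE)

def abs_leq(d1, d2):
    return d1 == d2 or (d1, d2) == ('empty', 'any')

def alpha(c):
    # glb of all abstract values whose concretization covers c;
    # on a two-point chain this is just the minimum of that (nonempty) set
    covering = [d for d in ABSTRACT if c <= gamma(d)]
    return 'empty' if 'empty' in covering else 'any'

# the adjunction: alpha(c) <= d  iff  c <= gamma(d)
for c in CONCRETE:
    for d in ABSTRACT:
        assert abs_leq(alpha(c), d) == (c <= gamma(d))

# property (ii): alpha preserves lubs (unions on the concrete side)
for c1 in CONCRETE:
    for c2 in CONCRETE:
        expected = 'any' if 'any' in (alpha(c1), alpha(c2)) else 'empty'
        assert alpha(c1 | c2) == expected

print("adjunction and lub-preservation hold on the toy example")
```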
Thus, an abstract value y C D approximates a concrete value x E C if x<~cT(Y), or equivalently (by adjunction), if ~(x)<<,Dy. When c = 7(d), we will say that the abstract denotation d represents precisely (i.e. with no approximation) the concrete value c. If (7,D, C,~) is G.i., each value of the abstract domain D is useful in the representation of the concrete domain C, because all the elements of D represent distinct members of C. In practice, C and D are very often complete lattices; however, for the sake of generality, we will assume that they are mere posets, unless otherwise specified. If o~ .... , o~ are the operations defined on the concrete domain C, that are involved in the standard semantic definition, then cg 1 k = (C,o c ..... Oc) is called the concrete interpretation. For a G.c. (7,D,C,~), the abstract operations over D must correctly simulate the behavior of the concrete operations on the properties represented by D. Let us assume that oc : C × X ~ C is a concrete operation of cg, where X is any set of possible auxiliary parameters, also mathematically unstructured. 1 Then, following the standard Cousot and Cousot approach, a corresponding abstract operation oD : D × X D is a (correct) approximation of (or (correctly) approximates) oc if Vc E C.Vd E D.Vx C X. ~(c)<~Dd ~ ~(oc(c,x))<~DoD(d,x). The intuition for this definition should be clear: If d is an approximate description of c, then the concrete computation of oc(c,x) is approximated at the abstract level (on D) by oD(d,x). It is possible to state this notion of approximation by several equivalent formulations (cf. [13, 14]). In fact, it is easily shown that oD approximates oc iff Vc E C.Vx E X. o~(oc(c,x))<~DOD(O~(c),x) iff Vd E D.Vx C X. oc(];(d),x)<~cT(OD(d,x)). i is approximated by the corresponding oi, then we will say that ~ = If each o c (D, Ol ..... ok) abstracts (or is an abstract interpretation of) cg = (C, oc,...,10c).k Assume that Ol and 02 are two abstract operations (for D) both approximating a common concrete operation oc. Following the standard Cousot and Cousot defini- tion , we say that ol is more precise than 02 if for any d C D and x c X, ol(d,x)<~Do2(d,x) (namely, if Ol is lower than o2 with respect to the standard func- tional pointwise ordering). Cousot and Cousot showed in that it is always possible to define the best (correct) approximation of a concrete operation oc: This is defined as oboest(d,x) = ~(Oc(~(d),x)), for any d E D and x E X. Actually, it is easy to verify that this o~ ~st is more precise than every approximation of oc. The notion of completeness for an abstract operation is also well-known [15, 34]. We say that oD is complete for oc if for any c E C and x E X, oD(~(c),x) = ~(oc(c,x)). Notice that if oD is complete for oc then oD is the best approximation of oc, while the converse is in general not true. If C and D are complete lattices and we consider the concrete and abstract lub's as operations, then we always have that VD is complete for Vc: In I The extension to the more general case Oc : C n × X ~ C m × Y is straightforward. 84 G. Filk, F Ranzato / Theoretieal Computer Science 222 (1999) 77-111 fact, by (ii) in Section 2.1, for any Sc_C, we have that VD ~(S) = ~(VcS), hence showing the completeness. Completeness is a quite strong property for abstract oper- ations (see ). It also implies that if the least fixpoints lfp(oD) and lfp(oc) exist, then ~(lfp(oc))= lfp(oD) (see ). 2.3. 
Comparing abstract interpretations As far as precision of representation is concerned, the standard criterion for com- paring abstract domains, introduced by Cousot and Cousot in , is as follows. Let G1 = (71,D1,C,~1) and G2 = (72,O2, C, Ix2) be two G.c.'s. Then, D2 is better than D1 whenever 71(D1)C 72(D2), while O2 is strictly better than DI if D2 is better than D1 and DI is not better than D2, i.e. if 71(D1)C 7z(D2). Also, Dl and D2 are equivalent when D2 is better than DI and D1 is better than D2, i.e. when 71(Dl) = 72(D2). Thus, intuitively, D2 is better than D1 when D2 is able to represent precisely at least all the concrete elements that are represented precisely by D1. Lemma 2.1. (i) If G2 is a G.i., then 02 is better than Dl iff G1,2 ---- (~2 o 71,D1,D2, ~1 o 72) & a G.c. (ii) If G1,G2 are G.i.'s, then D2 is better than D1 zff G1,2 is a G.i. . Proof. We prove (i) (3) Monotonicity of ~2 o71 and ~X l 072 follows from that of their components. Thus, it is enough to show the following two points. - Vdl c D1. Ctl(72(a2(71(dl))))<-..D, dl: By hypothesis, there exists d2 E D2 such that 71(dl) = 72(d2); hence, a1(72(ct2(71(dl)))) = ~q(72(~2(Y2(d2)))) = ~1(72(d2)) = CXl (71 (dl)) ~< D, dl. - Vd2 E D2. d2--.<O2~2(Tl(~l(72(d2)))): Since 72(d2)--.<c 71(~1(72(d2))), by monotonic- ity of ~z2, we get d2 = IX2(72(d2))~D 2 lx2(71(IXl(72(d2)))). (¢=) It suffices to prove that Vdl E DI. 71(dl ) ---- 72(~2(71(dl))). Observe that, since Gl,2 is a G.c., oq(72(o~2(71(dl))))~Dldl. Hence, 72(~2(71(dl)))<<.CTl(dl), that joint with 71(dl)<...c72(o~2(7i(dl))) concludes the proof. [] It is also worth noting that if D2 is better than D1 (and G2 is a G.i.) then GI is the composition of Gl,2 and G2, i.e. for all dl C DI, 71(dl) ---- 72(~2(71(dl))) (this is a direct consequence of the fact that if 71(dl)= 72(d2) then d2 = ~2(71(dl))). We extend in the most natural way the notion of being better to full abstract interpre- tations. Let us assume that ol : D1 ×X --+ D1 and 02 " D2 xX --~ D2 are approximations of the same concrete operation oc. Following the standard criterion of comparison of Cousot and Cousot (cf. ), we say that the abstract operation 02 is better than the corresponding O1 if Vc E C.Vx E X. 72(02((ZE(C),X))~cTl(Ol(O~l(C),X)), Furthermore, 02 is strictly better than Ol if 02 is better than Ol and ot is not better than 02, i.e. there exist c E C and x c X such that 72(o2(~2(c),x))<c71(o1(~1(c),x)). Assume now that ~1 1 k /D 01 k \ = (Dl,00, ..... oo, ) and 92 = \ 2, 02 ..... OD2 / are two abstract interpretations of a common interpretation cg = (C,o~c ..... Okc). @2 is better than ~l if the abstract G. Filk, F. Ranzato / Theoretical Computer Science 222 (1999) 77-111 85 domain D2 is better than D1, and each operation o i is better than the corresponding D2 operation o i Moreover, ~2 is strictly better than ~1 when it is better, the abstract DI" domain D2 is strictly better than Dl, and at least one operation o~) 2 is strictly better than the corresponding o~. Thus, according to this definition, an abstract interpretation is strictly better than another one when its domain is strictly more expressive and when at least one of its operations takes advantage of this extra expressivity in order to be more precise than the corresponding operation of the other domain. 3. The powerset abstract domain In the following, we assume that the concrete domain C enjoys the standard gener- alized form of infinite distributivity, i.e. that C is a completely distributive lattice. 
This .fcil iE1 C C, where I means that C is a complete lattice such that for each subset t jJjEJ(i)- i where ji and, for any i E 1, J(i) are sets of indices, Ai~z VjcJ(i)c~. = V~EJ, Aic, c~(i), is the set of all the functions q~ : I ~ UiEz J(i) such that for any i E I, q~(i) E J(i). The dual condition, where meet and join are exchanged, is equivalent (cf. ). This condition of complete distributivity is satisfied by any complete ring of sets, i.e. any (complete lattice isomorphic to a) subset of a powerset, ordered by the subset or super- set relation and closed under arbitrary unions and intersections (see ). In particular, the powerset of any set, ordered with the subset or superset relation, is completely distributive, and therefore this class comprises the concrete domains used in collecting semantics for analysis (cf. ). The concrete domain C is related to the abstract do- main D by a Galois connection (7,D,C,~). Notice that if we assume a G.i. of D in C, then, by property (ii) in Section 2.1, the fact that C is a complete lattice implies that so is D. In the powerset construction, any subset S of the abstract domain D is intended to represent the concrete disjunction of its elements, namely, its concrete meaning is given by the lub Vc 7(S). Thus, we define the following equivalence relation between subsets of D: if Sh $2 C D then $1 -~ $2 ¢~ Vc ?($1 ) = Vc 7($2), where, as usual, Vc 0 = A-c. If S C D then we denote its equivalence class for -~ by [S] -- {Z C D : S -~ Z}. In order to simplify the notation, we will often denote the equivalence class of a finite subset {dl ..... dk} C_D simply by [dl ..... dk]. The powerset domain of D, denoted by P(D), is defined as the quotient with -7 of the set go(D) of all the subsets of D: P(D) def go(D)/= = {[S] : S G D}. For each class [S] E P(D) it will be useful to have a canonical representative. The fat of [S] is defined as t~[S] = U{ZCD : Z E [S]}. We now show that U[S] =7 S, and therefore ®[S] turns out to be the greatest element in [S]. 86 G. Filk, F Ranzato/Theoretical Computer Science 222 (1999) 77-111 Lemma 3.1. For [S] E P(D), (i) w[S] -~ S; (ii) t~[S] = {d E O : 7(d)~<c Vc 7(S)}; (iii) ~[S] = l +~[S]. Proof. (i) Vc ~,(w[S]) = Vc 7(U{ ZC-D:Z E [S]}) = Vc{Vc 7(Z) E c:z E [s]} = Vc (ii) On the one hand, ifZ E [S] and d E Z, then 7(d)<~cVcT(Z ) = Vc 7(s). On the other hand, if 7(d)~< c Vc 7(s) then Vc 7(s u {d}) = Vc 7(s), from which d E t~[S]. (iii) If d~< D d for some d E W[S], then 7(d)~< c 7(d)~ c Vc 7(s). Consequently, d' E ~[S] because VcT(S u {f}) = Vc7(S). [] It is well-known (see, e.g., ) that a poset satisfies the ascending chain condition (ACC for short, namely it does not contain infinite strictly increasing chains) iff for every nonempty subset S of the poset, max(S) ~ ~). As a consequence, whenever an abstract domain D satisfies the ACC, any equivalence class [S] E P(D) contains the set of maximal elements of S, i.e. S =--~ max(S). However, it is worth noting that max(S) could not be taken as the canonical representative of an equivalence class [S]. For instance, considering the domain Sign in Section 1, we have that [+,0,-] = IT], whereas max({+,0,-}) = {+,0,-} ~ {T} = max({T}). We exploit the fat sets in order to give the ordering relation on P(D): if [S], [T] E P(D) then [S] E [T] ¢:~ t~[S] C U[T]. Notice that r- is a partial order on P(D) (antisymmetry is a consequence of Lemma 3.1 (i)). 
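The equivalence classes, the fat canonical representatives and the ordering just defined can be replayed on the Sign example with a small Python sketch. The encoding is illustrative only: a finite carrier {-2,...,2} stands in for Z so that concretizations are finite sets, and the helper names are invented.

```python
# Sketch of the powerset construction over Sign (illustrative encoding only:
# the finite carrier {-2,...,2} stands in for Z, so concretizations are finite sets).

CARRIER = {-2, -1, 0, 1, 2}
GAMMA = {
    'bot': frozenset(),
    '+':   frozenset(z for z in CARRIER if z > 0),
    '0':   frozenset({0}),
    '-':   frozenset(z for z in CARRIER if z < 0),
    'top': frozenset(CARRIER),
}

def gamma_of_set(S):
    """Concrete meaning of a subset S of Sign: the union (lub) of its members."""
    return frozenset().union(*(GAMMA[d] for d in S))

def fat(S):
    """Canonical representative of [S]: all d whose meaning lies below that of S
    (cf. Lemma 3.1 (ii))."""
    meaning = gamma_of_set(S)
    return frozenset(d for d in GAMMA if GAMMA[d] <= meaning)

def below(S, T):
    """The ordering [S] <= [T] on P(Sign), via inclusion of fat sets."""
    return fat(S) <= fat(T)

print(fat({'+', '0', '-'}) == fat({'top'}))        # True: [+,0,-] = [top]
print(fat({'+', '-'}) == fat({'+', '-', 'bot'}))   # True: bot adds nothing
print(below({'+'}, {'+', '-'}), below({'+', '-'}, {'+'}))   # True False
```

In particular, fat({'+','0','-'}) coincides with fat({'top'}), mirroring the identity [T] = [+,0,-] discussed for the Sign example in the introduction.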
For this partial order, P(D) has top and bottom elements: By ordering defini- tion, Tp(D) = [D], and, by Lemma 3.1 (ii), it is easy to verify that /p(D) = . If D has the top element To (as observed above, this always holds if we start from a G.i. rather than a G.c.), then Tp(D) = [TD]. Proposition 3.2. P(D) is a complete lattice, where UiEI[Si] = [UiEl~[Si]] and Riel[Si] = [Ni~I t~[Si]], for all {[Si]}iEI C_ P(D). Proof. We first show that if SC_T then [S] _ [T]: If d E t~[S] then 7(d)~< c Vc 7(S) ~<c Vc 7(T), and therefore, since Vc 7( T U {d}) = Vc 7(T), we obtain that d E ~[T]. We only prove the existence of glb's. For lub's the proof is dual. By Lemma 3.1 (i) and the above claim, E [~[Si]] = [Si], for each i E I. Suppose now that [T] F- [Si] for each i E I. Then, ~[T] C_ NiE1 [~[St], and again by Lemma 3.1 (i) and the previous claim, [T] -- [~[T]] _ [Nieit~[Si]]. From this it follows that [NiE1 ~[Si]] is the glb. [] As it is natural to expect, the powerset abstract domain is always completely dis- tributive. G. FilO, F. Ranzato/Theoretical Computer Seience 222 (1999) 77-111 87 Proposition 3.3. P(D) is completely distributive. Proof. Complete distributivity of P(D) is a simple consequence of the definition of lub and glb of Proposition 3.2, since each powerset of some set is obviously completely distributive. [] The concretization map for the powerset abstract domain 7 : P(D) -- C is the obvious one, namely it provides the concrete disjunction of an abstract subset: 7"([S]) = Vc 7(s). The hypothesis of complete distributivity of the concrete domain is central in the proof of the next basic lemma. Lemma 3.4. 7 is 1-1 and co-additive. Proof. That 7 is 1-1 follows from the definition of P(D). In order to show that it is co-additive, consider any subset 5 e = {[S/] E P(D) : i E I}. For each i E I, we pick out a set of indices J(i) such that U[Si] = {d} E D :j E J(i)}. Thus, we have 7([']~) = Vc 7 (niEl{d~ " : J E J(i)}), /\c7"(5") = Ac {VcT(W[Si]):i E I} (by Lemma 3.1 (i)) = Ac {Vc{7(d}) : J E J(i)} : i E I} = VC {AC{7(di~(i)) : i E I}: (p E jl} (by complete distributivity of C) = Vc {7 (Ao{d;(~): i E I}): q) E j1} (by co-additivity of 7). It suffices now to verify that (]i~,{d} : j E J(i)} = {AD{d~(i) • i E I}" ~o E j1}. (C_) If dE niel{d~j:jEd(i)} then there exists q) EJ I such that d = AD{di~(i)'i E l} = AD{d} = d. (2) Let q) E J' and d = AD{d~(~) 'i E I}. Then, d<,Dd~( 0 for all i E I, and therefore, by Lemma 3.1 (iii), d E ~[Si] for all i E I. Thus, d E niEl W[S,.]. [] Thus, the above lemma implies that we have correctly defined an abstract domain. By property (iv) in Section 2.1, the abstraction map c~ : C ~ P(D) can be defined as the left adjoint of 7: ~(c) = M{[S] E P(D) : c<~cT([S])}. Notice that ~(e) is well-defined, since P(D) is a complete lattice. We have therefore shown the following basic result. Theorem 3.5. Let C be a completely distributive lattice. If (7,D,C,~) is a Galois connection then (7,P(D), C,~) is a Galois insertion. We also exploit Lemma 3.4 in order to characterize the lub and glb in the powerset abstract domain as follows. 88 G. Filk, F. Ranzato/Theoretical Computer Science 222 (1999) 77-111 Proposition 3.6. For any subset {[S/])i~1 c_ P(D), Ilicl[Si] = [Ui~1 &] and [~iEI[S/] = [{AiEI xi : Vi E I. xi E S/}]. Proof. By Proposition 3.2, I [iEt[Si] = [Uiczu[Si]]. Further, VcT(uictsi) = Vc{VcT(Si) : i E I} = Vc{Vc?(~[si]) : i E I} = VcT(Ui6lW[Si]), and therefore I IiEz[S/] = [U/czS/]. 
Consider now, for any i E I, a suitable set of indices J(i) such that Si = {xj : j E J(i)}. Thus, notice that [{AiEiXi : Vi E I. xi E Si}] = [{Ai~lx~u) : ~o E J~}]. Using co-additivity of 7 and complete distributivity of C, we have that i 7 ([{AiEix~(i) q~ C jl}]) = V~oEJ, i Y(AiElX~o(i)) = V~oEj, i • AiE1 7(&o(i)) = AiE,%Ej(i)Y(xj) = AiE, Vv(Si) • It is now clear that for any i E I, 7([{AiElxi : Vi E L xi E Si}])<~cV([Si]), and hence, since 7 is an embedding, [{Aiclxi : Vi E L xi E S/}] is a lower bound. On the other hand, if [T] E P(D) is a lower bound, then 7([T])<~cAiezVv(Si), and therefore, [T] E [{AiclXi:Vi E I. xi E Si}]. This concludes the proof. [] In order to clarify the above characterization of the glb on the powerset, consider the example of Sign and P(Sign) in Fig. 1: Then, for instance, we have that [+,-] M = [+ A 0,- A 0] = [±]. When the concrete domain C is join-generated by its join-irreducible elements, i.e. for any c E C there exists S c_JI(C) such that c -- Vc S, we can give a very useful characterization for the abstraction map ~. Notice that any collecting concrete domain, i.e. go(Z) (ordered by the subset or superset relation) for some set Z, is join-generated by its join-irreducible elements (e.g., JI({go(Z),C_}) = {{z} E go(Z) : z E Z}). First, we need a standard lattice-theoretic lemma (see, e.g., [2, p. 244]). Lemma 3.7. (Balbes and Dwinger [2, p. 244])• Let C be a completely distributive lattice• Then, x E JI(C) iff for any S C C, x<<. V S implies x<~s for some s E S. Proposition 3.8. ff C is join-generated by JI(C), then, for any c E C,g(c) = [{~(x) : x E JI(C), x<~c}]. Proof. First, let us show that if x E JI(C) then ~(x) = [~(x)]. On the one hand, by observing that x<<.cV(Ct(x)) -- 7([~(x)]), we get ~(x) r [~(x)]. On the other, for all AC_D such that x<~cT([A]) --- Vc{7(a) : a E A}, by Lemma 3.7, there exists a E A such that x<<.c y(a), and therefore such that ct(x)~<o a. Then, 7([~(x)]) = 7(~(x)) ~< c 7(a)~< c 7([A]), from which, since 7 is an embedding, [~(x)] _E [A]. Hence, [~(x)] __ ~(x), and therefore, ~(x) = [~(x)]. Consider now any c E C. By hypothesis, c = Vc{x E C :x E JI(C), x<.%c}. Thus, ~(c) = (by additivity of ~) = Ll{~(x) : x E JI(C), x<.c} = U{[~(x)] :x E JI(C), x<.c} = (by Proposition 3.6) = [{~(x) :x E JI(C), x<~e}]. [] From this result, in particular, we get that for any c E JI(C), a(c) = [c¢(c)]. As a further consequence of the above characterization, for a collecting concrete domain (go(Z), _C), for some set Z, we have that for any S E go(Z), c~(S) = [{ct({s}) : s E S}]. G. Fil£ F. Ranzato/Theoretical Computer Science 222 (1999) 77-111 89 Fil6 et al. introduced in the notion of abstract domain refinement, formalizing the idea of systematic operators devoted to enhance the precision of representation of abstract domains. A refinement is an operator on abstract domains that improves the precision of abstract domains, and that is monotonic and idempotent (namely, it refines domains all at once). The powerset operator defined above turns out to be a refinement of abstract domains. This fact is precisely stated by the following result. Proposition 3.9. If D, E are two abstractions of C then: (i) P(D) is better than D; (ii) If D is better than E then P(D) is better than P(E); (iii) P(P(D)) is equivalent to P(D). Proof. (i) It is sufficient to verify that 7(D) _C 7(P(D)): by definition of 7, if d E D then 7(d) = 7([d]). (ii) By hypothesis, 7E(E)C_ 7D(D). Thus, if [S] E P(E) then there exists T C D such that Vc 7E(S) = Vc 7D(T). 
Then, 7~([S]) = 7~([T]), proving that 7~(P(E)) C_ 7o(p(E)). (iii) We have to prove that 7(P(D)) = 7(P(P(D))). (C_) For [S]r E P(D) and [{[S]J]< E P(P(D)),7([{[S]J]<) = 7"([S]7). (_D) For [A]< E P(P(D)), define UA = U{®[S]~. C_D : [S]y E A}C_D, and con- sider [UA]~, E P(D). Thus, 7([UA]r) = Vc{Vc 7(w[s]7) : [S]r E A} = (by Lemma 3.1 (i)) = VcT(A) = 7([A]<). [] By (i) above, Lemma 2.1 and Theorem 3.5, we get that (~ ov, D,P(D),~ov ) is a G.c., and if in addition (7,D, C, ~) is a G.i., then (c~o 7,D,P(D), ~ o 7) is a G.i. We can be more precise about this latter G.i. Proposition 3.10. If (7,D,C,~) is a G.i., then for all [S]EP(D) and d ED, cffV([S])) = VD S and ~(7(d)) = [d]. Proof. By additivity of c~, we get ~(7"([S])) = cffV c 7(s)) = VD ~(7(S)) = VD S. More- over, u(7(d)) = M{[T] E P(D):7(d)<,.cV([T])}. Observe that 7(d)<,cV([T]) is equivalent to [d] I- [T] (as V is an embedding), from which ~(7(d)) = [d]. [] In particular, by the observation following Lemma 2.1, we get that (7,D, C,~) is the composition of (2d.[d],D,P(D), 2[S].VD S) and (7,P(D), C, ~). It should be clear that whenever the concretization map of (7,D,C,~) is additive, the abstract domain D is already able to represent precisely, by means of its lub, the concrete disjunction of its elements. Thus, in this case, the powerset of D is equivalent to D. The next result shows that absence of additivity of the concretization map is a necessary and sufficient condition in order to get a strict improvement of precision by powerset. Proposition 3.11. Assume that (7,D, C,~) is a G.i. Then, P(D) is strictly better than D iff 7 is not additive. 90 G. Filb, F. Ranzato/Theoretical Computer Science 222 (1999) 77-111 Proof. By definition, P(D) is strictly better than D ¢¢, 7(D)C 7(P(D)) ¢~ 3[S] E P(D). 7"([S])~ 7(D) ¢~ (as 7 is an embedding) 3SC_D. Vc 7(S)~ c Vc 7(s) >~ c 7(VD S). [] We say that a value in a powerset P(D) is new, if it is able to represent precisely a concrete denotation otherwise not representable precisely in D. Hence, [S] E P(D) is new iff Vc 7(S) <c ~(VD S). Following their seminal work , Cousot and Cousot first proposed in a general definition of the powerset operator in a generic setting where abstract domains are spec- ified by Galois connections. Successively, and independently of the conference version of this paper, Cousot and Cousot introduced in various disjunctive comple- tions of an abstract domain, all defined by some form of powerset. They exploited their definitions in order to present the new powerful comportment analysis for higher-order functional languages. More specifically, they introduced the standard disjunctive com- pletion, the order ideal completion and the anti-chain completion (this latter completion is isomorphic to a further Scott closed ideal completion, cf. ). These definitions can be quickly summarized as follows. For subsets of a domain D abstracting a completely distributive lattice C, Cousot and Cousot consider the standard lower powerdomain pre-ordering relation (cf. ): If X, YC_D, then XC_ v Y ¢:~ Vx E X.3y E Y. X<,Dy. Then, one can define the equivalence relation X ~v y ¢:~ X c_ v Y & Y c_VX. - The disjunctive completion of D is defined as the quotient of go(D) with respect to the equivalence relation ~v. It is a complete lattice with respect to the lower powerdomain ordering: [X]~v _ [Y]~v ¢~ X c v Y. G. FilO. F. RanzatolTheoretical Computer Science 222 (1999) 77-111 91 - The order ideal completion of D consists of all the downward closed subsets of D. 
It is a complete lattice with respect to the lower powerdomain ordering c v above. - The anti-chain completion of D consists of all subsets S of D such that S = max(S). Also in this case, the complete ordering relation is given by c v. The concretization function (for all the definitions above) is obviously the same as that given in this paper: For any element S of the completion, the concretization is given by Vc 7(s). Cousot and Cousot showed that the disjunctive and order-ideal completions are equivalent, and for abstract domains satisfying the ACC, they are in turn equivalent to the anti-chain completion. It is easy to verify that also our powerset domain is equivalent to those above. Proposition 3.14. P(D) is equivalent to each of the above completions. Proof. It is enough to observe that {Vc 7(S) : S _ _ _ D} is the image by the concretization map of each completion, and it coincides with 7(P(D)). [] Although from a semantic viewpoint our powerset definition is equivalent to those in [ 17], they differ from a synctatic perspective. In particular, our definition of the ordering relation on the powerset abstract domain is more natural, since it directly relies on the set-inclusion relation. Moreover, as we will see in the next section, our use of fat sets as canonical representatives permits to lift, by an explicit direct definition, the abstract operations to the powerset domain. This is particularly relevant as far as implementation details are concerned. It is also worth mentioning that all these definitions of the powerset abstract domain have been generalized by Giacobazzi and Ranzato in , who give a general, but implicit, definition of the disjunctive completion by using closure operators, which only requires that the concrete domain is a mere complete lattice. On the other hand, Giacobazzi and Ranzato show how to supply an explicit powerset-like definition of the disjunctive completion, under the hypothesis of complete distributivity for the concrete domain. 4. Lifting abstract operations to the powerset Let us suppose that (7,D, C, ~) is a G.c., where C is a completely distributive lattice, and that oc : C x X ~ C is a concrete operation approximated by oo : D x X ~ D. We want to define an operation on the powerset abstract domain P(D), such that it extends OD and that still approximates oo Let us consider the following definition: 0 D : P(D) × X --~ P(D) OD([S],x) = [{oD(d,x) E D :d 6 ®[S]}]. The definition of this operation is the most natural one since it directly relies on that of OD: It consists in first applying OD to the elements of the canonical representative, 92 G. Filk, 1~ Ranzato/Theoretical Computer Science 222 (1999) 77-111 and then take the equivalence class of the obtained subset of D. It is worthwhile to observe that, whenever D is finite and oD is finitely computable, o~ is also finitely computable. Proposition 4.1. o~ is monotonic. Proof. If [S] I- [T] and x E X then, by definition of , we have {oD(d,x) : d E U[S]} C{oD(d,x) : d E t~[T]}, and therefore o~([S],x) E oo([T],x). [] In general, it is not true that o~ approximates oc. An example showing this fact is given below. Example 4.2. Let us consider the concrete domain (ga(Z), C), the abstract domain Sign of Fig. 1, and its powerset P(Sign), also in Fig. 1. Consider this concrete operation sq : ga(77) ~ go(Z): {a2:aEA} if0~A orA={0} sq(A) = 7/ otherwise. Thus, sq is the square operation on the sets of integers not containing zero and on {0}, while maps the remaining sets to the top Z. 
It is simple to check that sq is monotonic, and that it is abstractly approximated by the (monotonic) operation Sqa on Sign defined as follows: Sqa={IHI, + ~--~ +, 0 ~--~ 0, --H+, T ~-~ T}. The operation sq on the powerset domain P(Sign) is therefore defined as follows: sq~([±]) = [_L], sq~([+]) = sq~([-]) = sqS([+,-]) = [+], sq~() = , sqS([+,0])= sq~([-,0])= [+,0], sq~([7-])=- [7-]. In spite of the fact that sq~ is an approximation of sq, its extension sq~ to the powerset does not approximate sq. In fact, considering [-, 0] E P(Sign), we get sq(7([-,0])) = sq({a E Y : a~<0}) = 7/ 7(sq([-,0])) = 7"([+,0]) = {a E 7/ : a~>0}. [] The following condition guarantees the correctness of o~). Proposition 4.3. If oc preserves lub's of the subsets 7(S), for all S C D, i.e. for any x E X, oc(Vc{7(d) : d E S},x) = Vc{oc(7(d),x) : d E S}, then o~ is an approxima- tion of oo G. Filk, F RanzatolTheoretical Computer Science 222 (1999) 77-111 93 Proof. Let [S] c P(D) and x E X. Then, ÷ oc(7([s]),x) = oc(~ ([w[s]]),x) = Vc{oc(7(d),x)'d c W[S]} Vc{7(oD(d,x)) : d C W[S]} = ~(o~([S],x)). [] (by Lemma 3.1 (i)) (by hypothesis on oc) (by correctness of oo) Clearly, if oc is (fully) additive then it also verifies the condition in Proposition 4.3, and thus, in such a case, o~ approximates oc. Even though this condition of additivity may seem restrictive, we will show later that it is trivially satisfied by the standard concrete operations used in a typical logic program abstract interpretation framework. Additivity of abstract operations is also considered and discussed in , with particular emphasis to additive abstract operations used in functional program analysis. As an important example, let us consider the systematic liffing of the glb operation from D to P(D). Notice that this is possible since the hypothesis of complete distribu- tivity of the concrete domain implies that the concrete glb is additive. As expected, it tums out that the glb of P(D), that we explicitly defined in the previous section, coincides with this systematically defined operation. Proposition 4.4. The lifted glb A on P(D) actually is the glb 73. Proof. Assume that {[S/]}iCl C P(D). Also, for any i C I, we consider a suitable set of indices K(i) such that tS[&] = {x~ : k E K(i)}. Observe that, by definition, Aiel[&] = [{A,~t ' Y~oli) : ~P E K1}]. Therefore, by co-additivity of ? and complete distributiv- ity of C, we have that 7(Ae,[si]) = V~,c,:, 7(Ai~l yi~(i)) = V¢eK, Ale, i 7(y~(/)) = Aici Vkex(0 2(Y~) = A~cl v{7(s) : s E tJ[si]}. Thus, by Lemma Yl(i) and co-additivity of 7'' 7 (AicI[Si]) = Aicl ]2([si]) = ~(~iGl[Si]). This, because 7" is l-1, concludes the proof. [] We can be more precise about the relationship between the glb of P(D) and that of D. In fact, whenever D is completely distributive, the glb of D results to be complete with respect to the glb of P(D), where the Galois insertion (2d.[d],D,P(D), 2[S].VD S) of D in P(D) is that given by Proposition 3.10. Proposition 4.5. If D is completely distributive then the glb A on D is complete for the glb ~ on P(D) (w.r.t. the G.i. (2d.[d],D,P(D),2[S].VDS)). Proof. Consider any {[Si]}ici c_ P(D). We have to show that V([~icl[Si]) = A,Mvs,). For any i E I, we consider a suitable set of indices J(i) such that S/= {x} :j E J(i)}. Note that, by Proposition 3.6, we then have [~iEI[Si] = [{AiE/ xi ~o(i) : ~0 G jl}]. Thus, by complete distributivity, we have that Aici(vsi) = i V~j, Aicl x~(i), and therefore this concludes the proof. [] 94 G. 
Filk, F RanzatolTheoretical Computer Science 222 (1999) 77-111 Notice that in the above proposition, if the glb is considered as a finitary operation, then it suffices that D is (finitely) distributive. Further, it is worth noting that, as observed in Section 2.2, an analogous result for the lub trivially holds. We next investigate the relationship existing between Oo and o~. We already know that P(D) is better than D. Therefore, it should not come as a surprise that o~ is an extension of o D and that it is better than oD. Proposition 4.6. Assume that (7,D, C, c¢) is a G.i. (and that the hypotheses of Propo- sition 4.3 hold). Then, (i) o~ extends Oo, i.e. Vd C D.Vx C X. OD([d],x ) = [oo(d,x)]; (ii) o~ is better than Oo. Proof. (i) The following chain of equalities holds: y(Oo([d],x)) = Vc{7(oD(dt,x)) " d' C ~[{d}]} = Vc{7(oo(dt,x))'y(dt)<-.cT(d)} (by Lemma 3.1 (ii)) = Vc{?(oD(d',x)) :d' <~Dd} (as 7 is an embedding) = 7([oD(d,x)]) (by monotonicity of OD and 7). Since Y is 1-1, the thesis follows. (ii) If c E C then ~(c) r-- [~(c)]. Thus, applying 7", we get 7(~t(c))<~cT(~(c)). We want then to show that, for any x E X, 7(o~(~(c),x))<~c 7(OD(~(C),X)): 7(OD(O~(C),X)) = VC{7(oD(d,x)) : d E U~(c)} = Vc{7(oD(d,x)):7(d)<~cT(~(c)) ) <.c Vc{Y(OD(d,x)) : 7(d)<~cY(~(c))} = MC{7(oD(d,x))'d~DO~(c)} = 7(OD(O~(C),X)) (by Lemma 3.1 (ii)) (by the above observation) (as V is an embedding) (by monotonicity of OD and 7), and this concludes the proof. [] It is also straightforward to verify that (ii) above implies that OD approximates o~ w.r.t, the G.i. (2d.[d],D,P(D), 2[S].VD S). We denote by P(~)= (P(D), o~ ..... o~) the abstract interpretation obtained by ap- plying the powerset operator to ~ = (D, ol ..... ok). The following theorem summarizes some of the results achieved so far. G. Filk, F Ranzato/Theoretical Computer Science 222 (1999) 77-111 95 Theorem 4.7. Let ~ -- (C, Olc ..... okc) be a concrete interpretation, ~ = (D, ol .... , ok> abstracting cg, and P(~) = (P(D), o~ ..... o~) be the powerset of ~. If C is completely distributive and each oCi preserves lub's of the subsets 7(S), for all S c_ D, then: (i) P(~) abstracts (£; (ii) If (7,D,C,~ ) is a G.i., then P(~) is better than 9. In general, the systematic lifting of an abstract operation to the powerset does not preserve either the property of being complete or that of being the best correct approx- imation, as the following example shows. Example 4.8. Let us consider again the abstract domain Sign and its powerset P(Sign), depicted in Fig. 1. As concrete operation f : ~d(Z) --~ ~(Z) let us consider f = 2,K.{x • z E Z : x C X, z C Z, z ~ 0}, i.e. f(X) is the pointwise multiplication of X with the set of integers different from 0. It is then easy to see that the best correct approximations fsign and fe(sign) of f on Sign and P(Sign) are, respectively, defined as follows: fsig~={IHI, + ~--', q-, 0H0, -~--~ T, THT}; fP(s,g~) = {[/] ~ [L], [+] ~ [+,-], ~ , [-] ~ [+,-], [+,0] ~ [T], [+,-] H [+,-], [-,0] H [T], [T] ~--~ [T]}. Also, it is not too hard to verify that both fSign and fP(Sign) are complete for f, i.e. for all X C ~a(Z), ~(f(X)) = fsig,(~(X)) and ~(f(X)) = fP(sig,)(~(X)). Notice that f is additive, and therefore, by Proposition 4.3, one can correctly consider the systematic lifting fso, of fsig, to the powerset P(Sign). Although fsig, is the best correct approximation of f, it tums out that fso, is not. 
In fact, we have that fS~0n([+]) = [fSiq,(+)] = [T], whilst for the best correct approximation fP(sig,) we get a strictly precise result, that is, fP(sig,)([+]) -- [+,-]. Moreover, this also proves that fs~0, is not complete for f (of. Section 2.2). [] We now show that, under certain conditions, the definition of the lifting to the powerset of an abstract operation can be suitably modified, still remaining systematic, so that this step does preserve the property of being complete. Lemma 4.9. If C is join-generated by JI(C) and D is join-generated by ~(JI(C)), then for any [S] c P(D) there exists S ° C ~(JI(C)) such that IS] = [S°]. Proof. Let [S] E P(D) and S- = S \ ~(JI(C)). For any x C S-, let Ax = {d C ~(JI(C)) : d<<.DX}, and observe that, by hypothesis, x = V~Ax. Then, let S ° = (SN~(JI(C)))U(Ux~ s_ Ax). We demonstrate that [S] = [S°]. To show this, it is enough to verify that for any x C S-, Vc{V(d) : d c Az} = 7(x). The inequality Vc{7(d) : d E Ax} ~<c 7(x) trivially holds. On the other hand, note that if z E JI(C) and z ~<c 7(x), then ~(z)<<.Dx, and therefore ~(z) E Ax. Hence, z<~cT(~(z))<<. Vc{7(d) : d E Az}. Since, by hypothesis, 7(x) = Vc{Z E Jl(C):z<~c 7(x)}, we get that 7(x)~<c Vc{7(d) : dEAx}. [] 96 G. Filk, F. Ranzato / Theoretical Computer Science 222 (1999) 77-111 Thus, under the hypotheses of the above lemma, any [S] E P(D) can be transformed in an equivalent [S °] E P(D) such that any element x E S ° is the image, via the abstrac- tion map, of a concrete join-irreducible point. For instance, the G.c. (7,Sign, ga(Z),~) evidently satisfies such hypotheses: Hence, for [--] E P(Sign) we get that {Y} ° = {+,0,-}, and [T] = [+,0,-], where +, 0, and - are all the image of a singleton, i.e. a join-irreducible element, in ~o(Z). Given an abstract operation oD : D × X --, D, we exploit the above lemma for defining the following operation o~ : P(D) × X --~ P(D) on the powerset abstract domain P(D): For all [S] E P(D) and x E X, O ~D([S],x) = [{OD(t,x) E D : t C (++~[S])°}]. In other terms, we consider (t+3[S]) ° as canonical representative of an equivalence class [S] E P(D), and we apply pointwise oD to each element of (U[S]) °. It is straight- forward to verify by a simple inspection of the proof of Proposition 4.3, and under its hypotheses, that o~ is a correct approximation of oc. Here, the interesting point is that when oD is complete for oc and oc preserves any join-irreducible element (i.e., if c E JI(C) then, for all x E X, oc(c,x) E JI(C)), then o~9 on P(D) is still complete for oc, and its definition can be simplified by substituting U[S] with S. Proposition 4,10. Let C be join-generated by JI(C),D be join-generated by ~(JI(C) ), oc be additive and preserving join-irreducible elements. If oD is complete jbr oc then o~ is complete for oc as well, and, for all [S] E P(D) and x E X, o~([S],x) = [{Oo(t,x):t E SO}]. Proof. Let us fix a generic parameter x E X. In the proof, we will use the defini- tion o~([S],x) = [{oD(t,x) :t E S°}], and we will demonstrate at the end of the proof that this is correct. We first show that for a join-irreducible h E JI(C), ~(oc(h,x)) = o~(~(h),x). By hypothesis, oc(h,x) E JI(C), and so, by Proposition 3.8, ct(oc(h,x)) = [ct(oc(h,x))]. Thus, by completeness, ~(oc(h,x)) = [oo(~(h),x)]. Moreover, again by Proposition 3.8, ~(h) = [~(h)]. Therefore, since {~(h)} ° = {~(h)}, o~o(~(h),x) = [oo(~(h),x)] -- 7(oc(h,x)). Now, let c E C. By hypothesis, c = VcHc, where Hc = {h E JI(C) :h <<-c c}. 
Then, by additivity of Oc and ~ and by the above observations, ~(oC(c,x)) =- [ [hEHc ~(oc(h,x)) = [ [hEH, O~D(~(h),x) = ~hEHc[Oo(~(h),x)]. Thus, by Proposition 3.6, ~(oc(c,x)) = [{oo(ot(h),x) : h E Hc}]. Observe now that, by Proposition 3.8, ~(c) = [{~(h) : h E /-/~}], and, trivially, (~(Hc))° = ~(Hc). Thus, o~o(c~(e),c) -- [{oD(cffh),x) " h E Hc}] = c~(Oc(C,X)). To conclude, let us observe that if [R] = [S] then [{oD(t,x) : t E R°}] ---- [{oo(t,x) : t E S°}] = [{oD(t,x) : t E (~[S])°}]: In fact, there exists some c E C such that ~(c) = [R], and therefore, by what just proved above, these three equivalence classes are all equal to ~(oc(c,x)). [] We will exemplify this result in the next section, by applying it for the abstract operation of existential quantification used in ground-dependency analysis of logic lan- guages over an abstract domain of propositional formulae. G. FilO, F Ranzato/Theoretical Computer Science 222 (1999) 77-111 97 5. Powerset of the logic program abstract interpretation POS POS (cf. [1, 10, 31, 32]) is a well-known abstract interpretation for ground-depen- dency analysis of logic languages, whose underlying abstract domain Pos consists of certain propositional formulae. The groundness information results to be useful to a Prolog compiler for a number of relevant optimizations (see, e.g., [27, 37]). Minor variants of this abstract interpretation have been also used for a variety of other anal- yses, such as suspension analysis of concurrent logic programs . In this section, we apply the powerset operator to POS. We show that P(POS) is a strictly better inter- pretation than POS, although by abstracting back to Pos the outcome of an analysis performed with P(POS) one gets the corresponding analysis for POS. 5.1. The concrete &terpretation LP We briefly recall the standard concrete domain and operations used in an abstract interpretation framework for logic program analysis (for more details, see e.g. ). Assume that Var is an infinite set of variables, and IVar C_ Var is an infinite de- numerable subset of variables of interest. For any syntactic object o, var(o) denotes the set of variables occurring in o. A substitution 0. (over Var and a fixed alphabet of constant and function symbols) is denoted by its set of nontrivial bindings, i.e. 0. = {x/0.(x) " 0.(x) ¢ x}. The domain of definition of 0. is dom(0.) = {x E Var " 0.(x) ¢ x}, while its variable range is rng(g) = U{var(a(x)) "x E dom(6)}. The composition of substitutions a and 0 is denoted by 0.0. The empty substitution is denoted by e. If W C IVar is any subset of variables of interest and 0. is a substitution, then a/g, is any substitution obtained rom a by projecting variables of 0. over W (i.e., by re- naming variables in dom(a)U rnq(0.) belonging to IVar\W with variables of nonin- terest). If 0. is a substitution and E is any syntactic entity, then E0. stands for the result of applying 0. to E. The set of idempotent substitutions over Var is denoted by Sub. If 0. c Sub then eqn(6) denotes the corresponding set of term equations in solved form (the correspondence between idempotent substitutions and sets of syn- tactic equations in solved form is well-known, cf. ). Over Sub the usual relation ~ of instantiation is defined as follows: If O-l,O- 2 C Sub then 61_0. 2 iff there ex- ists a (possibly nonidempotent) substitution 0 such that 0.1 = a20. 
For any set E of term equations, mgu(E) is defined as follows: If E is not unifiable then mgu(E) = 13, otherwise mgu(E) = {0.}, for an arbitrary idempotent most general unifier a c Sub of E (recall that all the idempotent mgu's of E are equivalent up to renaming, cf. ). The concrete domain of interpretation is given by the standard collecting domain (~o(Sub), C). We follow in defining the following concrete operation of unification: u : ga(Sub) × go(Sub) ---+ 8d(Sub) u(S, O) = U mgu(eqn(a) U eqn(O)). aEZ, OEO 98 G. Filk, F. Ranzato/Theoretical Computer Science 222 (1999) 77-111 This operation of unification u is general enough to subsume other forms of unification and composition of substitutions used in most logic program analysis frameworks (see for more details and examples). For instance, it is simple to observe that if cr E Sub is a calling substitution and 0 is an idempotent mgu of an equation between atoms H = B then mgu(eqn(a)U eqn(O)) = {a~ : ~p E mgu(Ha = Ba)}, up to renaming. The operation of projection over a set of variables of interest is defined as follows: rt : ga(Sub) x ~d(IVar) ~ ga(Sub) ~t(s, rv) = {a/w : a E s}. The last trivial concrete operation is given by the union of sets of idempotent sub- stitutions, 2 U : go(Sub) × go(Sub) ~ ga(Sub), namely the lub operation of the con- crete domain. The concrete interpretation for logic programs is then given by LP = (go(Sub),u,U,n). It is immediate to note that all the concrete operations of LP are additive functions, due to the collecting nature of the concrete domain. We will make use of this obvious observation later on. 5.2. The abstract interpretation POS Let us succinctly recall the definitions of the abstract domains Mon, Def and Pos, and of their abstract operations. For more details see [1, 10, 31]. Let VI C_ I Var be a finite (nonempty) subset of variables of interest. In order to fix the notation, suppose that VI = {xl,... ,xn}. Assume also that B = {false, true} is the two point lattice. A Boolean function on VI is any function f : B n ~ B (where the i-th component of B n corresponds to the variable xi). Obviously, Boolean functions can be represented by means of propositional formulae: If F is the propositional formula over VI representing the Boolean function f, then f(bl ..... bn) = true iff {xi E VI : bi = true} is a truth-assignment (namely, the set of true logical variables) which is a model of F. On the other hand, each class of logically equivalent propositional formulae over VI determines a Boolean function. Thus, in the following, we will use without distinc- tion Boolean functions and propositional formulae as equivalent concepts. Any subset M C_ VI is considered as a truth-assignment for a Boolean function f, and mod(f) de- notes the set of truth-assignments which are models of f (it is then a subset of gd(VI)). The set Bool of Boolean functions on VI forms a (finite) Boolean lattice for the stan- dard ordering: j] -< J~ iff mod(fl )c_ mod(f2). The lub and glb are given by logical disjunction and conjunction: For two Boolean functions J] and j~, they are denoted by J] V J~ and fl A J~, respectively (obviously, for the corresponding propositional for- mulae, this corresponds to consider their syntactic disjunction and conjunction). With a slight abuse of notation, false and true denote the bottom and top elements of Bool. A Boolean function f E BooI is monotone if mod(f) is upward closed (that is, 2 Obviously, considering union as a binary operator is sufficient to get finite union. G. 
Fil~, F Ranzato/Theoretical Computer Science 222 (1999) 77-111 99 true true lrue z y y z y z^Y T false false false Fig. 2. Mon, Def and Pos for VI = {x, y}. for any M CN C VI, ifM C rood(f) then N C rood(f)). We denote by Mon the set of monotone Boolean functions. It is well-known that a propositional formula f E Bool is monotone iff f is equivalent either to false, or to true, or to a formula built using only the connectives V and A (see, e.g., ). It is immediate to observe that (Mon,-<) is a sublattice of (Bool, ~). A Boolean function f E Bool is positive if VIE mod(f) (or, equivalently, if f(true ..... true) = true). The set of positive Boolean functions is denoted by Pos. It is shown in [8, Corollary 1, p. 325] that a propositional formula f E Bool is positive iff f is equivalent to a formula built using only the connectives A, V and +-% successively, this synctatic characterization has been sharpened in [1, Theorem 3.1], where it is shown that f c Bool is positive iff f is equivalent to a formula built using only the connectives A and --~. It is easy to note that Pos is a sublattice of Bool, and that Pos is distributive. A Boolean function f C Bool is definite if f E Pos and rood(f) is closed under intersection (i.e., if M,N c mod(f) then MNN C rood(f)). As observed by Armstrong et al. , from [18, Proposition 3.1] one gets that a propositional formula f is definite iff f is equivalent to a conjunction of formulae having the shape xi~ A ... A xik -+ xj, where k may be 0 (in this case, the formula is simply xj). The subset of Pos given by the definite formulae is denoted by Def. It turns out that Def is merely a meet subsemilattice of Pos (and hence of Bool). This means that the glb of Def is given by logical conjunction, while its lub can be defined in the usual way in terms of the glb: Ji VDef f2 ---~ A{f E Def : fl ~ f, J~ _ f}. Note that Mon is not a subset of Def and Pos, because false (f Pos. However, since the element false is useful in order to represent precisely the empty set of substitutions (and therefore to represent the information of reachability), as it is common practice, we add false both to Def and Pos, and we still use Def and Pos to denote these lattices. The lattices Mon, Def and Pos are depicted in Fig. 2 for the case of two variables of interest VI = {x, y}. Let us now recall the Galois insertion of Pos into the concrete domain fJ(Sub). For a substitution a C Sub, the truth-assignment specifying which variables of interest are bound by G to ground terms and which are not, is given by 9r~ = {x E VI : var(a(x)) = 100 G. Filk, F Ranzato/Theoretical Computer Science 222 (1999) 77-111 0}. On the other hand, the propositional formula expressing the ground-dependencies in a of the variables of interest is given by 3 depa = 3-~. A {x ~ Avar(a(x)) : x E dom(cr)}. Note that dep~ = true. The abstraction ~ : ga(Sub) ~ Pos and concretization : Pos ~ ga(Sub) mappings are then defined as follows: • (Z) = V{dep~ : a E Z}, y(f) = {a C Sub : Va'~_ a. 9r~, 6 mod(f)}. Note that a(0) = false, for any a c Sub, ~({a}) = dep~ c Def, and y(false) = 9. These two mappings form the Galois insertion (7, Pos, go(Sub), ~). The restriction of 7 to Mon and Def also yields the right adjoint of a Galois insertion. Example 5.1. Assume that VI = {x,y,z,v}. The formula xA(y ~ z) is an element of Pos (and Def) representing all the substitutions a such that for any instance # of a the following conditions hold: (i) a'(x) is ground, (ii) #(y) is ground iff #(z) is ground. 
Thus, 4 a = {x/a,y/b,z/c,w/d} and 0 = {x/a,y/w,z/w} satisfy these conditions, and therefore {a, 0} C 7(x A (y ~ z)). On the other hand, we have that ~t({a}) = x A y A z and ct({0}) =x A(y ~ z). [] Let us now recall the abstract operations defined on Pos. Abstract unification is simply given by logical conjunction A : Pos x Pos ~ Pos. On the other hand, logical disjunction V : Pos x Pos ~ Pos is the obvious abstract operation corresponding to the concrete union of sets of substitutions. Finally, the concrete operation of projection is simulated in Pos by existential quantification as follows: rCp : Pos x ~o(IVar) --~ Pos gp(f, W) = 3VI\W. f. Hence, 3vi\w.f projects away the variables of interest which are not in W. As noted by Armstrong et al. [1, Theorem 3.2], this operation is well-defined (i.e., 3vz\w.f E Pos). Thus, the full abstract interpretation is given by POS = (Pos, A, V, ne). These ab- stract operations of Pos are correct approximations of the corresponding concrete oper- ations. Actually, as it is shown in , we can be more precise, since it turns out that A is the best correct approximation of the concrete unification u [10, Theorem 5.7], ne is complete for g [10, Lemma 6.3], and V is trivially complete for the union of sets of substitutions (see Section 2.2). 5.3. The powerset abstract domain P(Pos) Since the concrete domain go(Sub) is collecting (and therefore completely distribu- tive), we can correctly apply the construction in Section 3 of the powerset abstract 3 ~ is the existential quantification over the variables of noninterest (i.e. those in ~ = Var \ VI). 4 By a, b, c, ..., we denote ground terms. G. FilO, F RanzatolTheoretical Computer Science 222 (1999) 77-111 101 domain to Pos. By Proposition 3.5, we get the Galois insertion (?,P(Pos), go(Sub), ~ ). Note that, by Proposition 3.8, the abstraction in P(Pos) of a set of substitutions Z E go(Sub) is simply given by the set of the abstractions in Pos of every substitution in Z, i.e. ~(Z) = [{dep~ : a E Z}]. It is also worth noting that by Proposition 3.3, the lifting of Pos to P(Pos) preserves the property of distributivity of the domain. Obviously, by Proposition 3.9 (i), we know that P(Pos) is better than Pos. More precisely, we know that (2f.[f],Pos, P(Pos),2[S].V S) is a G.i. On the other hand, we now show that P(Pos) actually is strictly better than Pos. To this end, by Propo- sition 3.11, it suffices to find two formulae of Pos such that Y does not preserve their logical disjunction. The following example shows this phenomenon. Example 5.2. Assume that the set VI of variables of interest contains at least two variables x,y, and consider x V (x --+ y) E Pos. We want to verify that 7(x)tJ ?(x ~ y)C 7(x V (x -+ y)). Let a = {x/v}, where v is any other variable (possi- bly not in VI). Obviously, a ~ 7(x), because x is not ground for a, and a ~ 7(x -+ y), because considering the instance a' = {x/a, v/a} of a we have that gr~, ({ mod(x -+ y). On the other hand, note that the logical disjunction x V (x --+ y) is logically equivalent to true. Then, 7(x V (x -+ y)) = Sub, and therefore a E 7(x V (x --+ y)). [~ As an immediate consequence of the above example, we get the following result. Theorem 5.3. P(Pos) is strictly better than Pos. This result is somehow in opposition to the intuition. 
In fact, since the lub of Pos is logical disjunction, it is natural to think that the meaning of a disjunction f V g of two formulae of Pos is exactly given by the logical disjunction of the meanings of f and g, namely by the union of their concretizations. However, the above result shows that this is not the case. We now give a characterization of the subsets of formulae of Pos whose logical disjunction is preserved by the concretization. This characterization makes use of the definite formulae of Def. First, we need a preliminary lemma. Lemma 5.4. For any f E Def there exists af E Sub such that depo I = f. Proof. We prove indirectly this fact, by using other known results. In [7, Theo- rem 8.4.2], it is shown that there exists a Galois insertion of Def into the well-known Jacobs and Langen abstract domain Sharing . Thus, for any f E Def there exists af E Sharing such that ~Sharing, Def(af) = f. On the other hand, as observed in , for any a E Sharing there exists aa E Sub such that O~ga(Sub),Sharing({fa} ) ~- a. Thus, consider the substitution aai E Sub. In [7, Theorem 8.4.5], it is shown that for any a E Sub, dep~ = ~Sharino, Del'(O~¢a(Sub),Sharing({a})). Thus, dep% = f. [] Theorem 5.5. Let ~ C Pos with • ¢ O. Then, U(?(f) :f E q~} = 7(Vq~) if and only if gg E Def. (g ~ V~) =~ (3f E 4. g --< f). 102 G. Fil$, F. Ranzato/Theoretical Computer Science 222 (1999) 77-111 [t~e] ,x ----~ y] [;] [y] [false] Fig. 3. P(Pos) for VI = {x, y}. Proof. (if) By monotonicity of 7, it is sufficient to verify that 7(V~) C_ U{7(f) : f E • }. Consider a E 7(V~). Thus, ct({tr}) = dep~ _ _ _ V~ and dep~ E Def. By hypothesis, there exists f E • such that dep~ ~ f. Thus, 7(dep,)c_ 7(f), and, since a E 7(dep,), we get a E 7(f). (only if) Let g E Def such that g --< V~. By Lemma 5.4, there exists trg E Sub such that ~({trg}) = g. Therefore, ct({ag}) _ _ _ V~, implies ag E 7(V~) = U{7(f) I f E ~}. Thus, we get that there exists f E • such that ag E 7(f). We conclude by observing that ~({ag}) -- g ~ f. [] By using this characterization of Theorem 5.5, it is simple to verify that the pow- erset domain P(Pos) for VI = {x, y} is the lattice depicted in Fig. 3. In fact, it is easy to detect directly from the Hasse diagrams of Def and Pos in Fig. 2, that [x, x ~-~ y], [y,x ~-~ y], [x,x ~-~ y, y], [y, y --~ x], [x,x --~ y], [y ~ x,x --~ y] are all and only the new elements of P(Pos). For instance, by Theorem 5.5, we have that 7(x)U 7 (x ~ y) C 7(x V (x ~ y)) = 7(true), because y ~ x -< true but y ~ x ~ x and y---~ x ~ x--~ y. The following result proves that 7 preserves the logical disjunction of monotone formulae. G. Filk, F. Ranzato / Theoretical Computer Science 222 (1999) 77 111 103 Proposition 5.6. If 4 C Mon then U{7(f) : f E 4} = 7(V4). Proof. Obvious if 4 = O. Thus, assume that 4 -¢ 0. Let us suppose by contradiction that U{7(f) : f E 4} C 7(V4). Then, by Theorem 5.5, there exists g E Def such that mod(g)C_ rood(V4) = UfE, mod(f) and for any f E 4, rood(g) ~ mod(f). Thus, for any f E 4, there exists Mf E rood(g)\ mod(f). Since 4 is nonempty, by definiteness of g, we get that NfEq~ Mf E rood(g). Hence, there exists h E 4 such that NfEc b Mf E rood(h). Observe now that, since h is monotone, NfEq~ Mf C M h implies that Mh E rood(h), which is a contradiction. [] The converse of the above proposition does not hold. It is enough to consider the formulae x, which is monotone, and y ~ x, which is not: since x -< y --~ x, 7 trivially preserves their disjunction. 
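Theorem 5.5 and Proposition 5.6 can be checked mechanically in the two-variable case. The following Python sketch is illustrative only: a Boolean function is encoded by its set of models (each model a frozenset of the variables assigned true), and all helper names are invented. It evaluates the criterion of Theorem 5.5 for the set {x, x -> y}, which fails with y -> x as a witness, and for the monotone set {x, y}, which passes as predicted by Proposition 5.6.

```python
# Sketch of the Theorem 5.5 criterion over VI = {x, y} (illustrative encoding).
from itertools import chain, combinations

VI = frozenset({'x', 'y'})
ASSIGNMENTS = [frozenset(s) for s in
               chain.from_iterable(combinations(sorted(VI), r) for r in range(3))]

def models(pred):
    return frozenset(m for m in ASSIGNMENTS if pred(m))

# some formulas of Pos, represented by their model sets
X      = models(lambda m: 'x' in m)
Y      = models(lambda m: 'y' in m)
X_TO_Y = models(lambda m: 'x' not in m or 'y' in m)
Y_TO_X = models(lambda m: 'y' not in m or 'x' in m)

def entails(f, g):      # f <= g iff mod(f) is a subset of mod(g)
    return f <= g

def is_definite(f):     # positive and closed under intersection of models
    return VI in f and all(m1 & m2 in f for m1 in f for m2 in f)

def disjunction_preserved(phi):
    """Theorem 5.5: gamma preserves the disjunction of phi iff every definite g
    below the disjunction is already below some member of phi."""
    disj = frozenset().union(*phi)
    definite = [g for g in map(frozenset, chain.from_iterable(
        combinations(ASSIGNMENTS, r) for r in range(len(ASSIGNMENTS) + 1)))
        if is_definite(g)]
    return all(any(entails(g, f) for f in phi)
               for g in definite if entails(g, disj))

print(disjunction_preserved({X, X_TO_Y}))   # False: y -> x is a witness, as in Example 5.2
print(disjunction_preserved({X, Y}))        # True: both formulas are monotone
```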
The following result shows that the abstraction in Pos of all the ground substitutions yields precisely all the monotone formulae. In this sense, monotone formulae are only able to express plain groundness and no information of ground-dependency between variables. This fact also provides an intuitive justification to Proposition 5.6. Proposition 5.7. If Sub aR = {a E Sub " rng(a) = ~} is the subset of Sub of the ground substitutions, then a( £~( Sub aR ) ) = Mon. Proof. (c) Note that by definition of ~, for any a E Sub GR, ~({a}) = A(dom(a)D Vl), and hence ~({a}) E Mon (in particular, ~({~}) = true). Thus, if S E ~(Sub Gn) then, by definition of ~ and since Mon is closed by logical disjunction, we get that a(S) E Mon. (_D) If f E Mon, then it is possible to transform f in an equivalent formula in disjunctive normal form (see, e.g., ). Thus, we can assume that f = %EJ(AiEI xj;), for some finite J and I (where xj, E VI). For every j E J, consider the substitution aj = {xj;/a : i E I} E Sub Gn. Then, we have that ~({aj : j E J}) = Vjej ~({aj}) = V;~j(Ai~; x;,) = f . [] 5.4. Abstract operations on P(Pos) Let us now turn to lifting abstract operations from Pos to P(Pos). As far as the lub is concerned, we noticed in Section 2.2 that the lub of any abstract domain is always complete with respect to the concrete one. Thus, in view of this simple obser- vation, for the powerset P(Pos) we will consider its lub U, which is characterized by Proposition 3.6, as approximating the concrete union of sets of substitutions. Next, let us see how to lift the existential quantification 1re from Pos to P(Pos). We recalled above that 7re is complete for the corresponding concrete operation of projection ~ (cf. [10, Lemma 6.3]). Thus, we can hope to apply Proposition 4.10, so that to maintain this desirable property by lifting ~te to the powerset. We show that indeed this is the case. In fact, first note that the hypotheses of Lemma 4.9 are satisfied by the G.c. of Pos into go(Sub): The concrete domain ga(Sub) is collecting, and 104 G. Filb, IE Ranzato/Theoretical Computer Science 222 (1999) 77-111 therefore it is join-generated by its join-irreducible elements, namely by the singletons {a} with a E Sub; moreover, by Lemma 5.4, ~(Jl(ga(Sub))) = Def, and therefore Pos is join-generated by ~(JI(gd(Sub))). Hence, for every [S] E P(Pos) there exists IS °] E P(Pos) such that S ° C Def and [S] = [S°]. Such S ° can be obtained as in the proof of Lemma 4.9: If f E S and f E Pos\Def, then f is substituted with the set of formulae of Def less than f itself. Furthermore, the concrete projection n trivially preserves join- irreducible elements, i.e. singletons. Thus, we can apply Proposition 4.10 in defining n~e over P(Pos) as follows: For all [S] E P(Pos) and W E ga(IVar), n~([S], W) -- [{np(f W) : f E S°}]. Finally, by Proposition 4.4, lifting the logical conjunction of Pos to P(Pos) actually gives rise to the glb M of P(Pos). The following result shows the precision of these abstract operations on P(Pos) just defined. Proposition 5.8. The abstract operations U and r C ~ p are complete for the corresponding concrete operations U and 7z, while M is the best correct approximation of u. Proof. As recalled above, 13 is trivially complete for U. Also, by the observations above and by Proposition 4.10, we get that 7t~ is complete for ~z. Let us now turn to the glb M. In order to show that M is the best correct approximation of n, we have to prove that for any [R],[S] E P(Pos), a(n(7([R]),7([S]))) = [g] M [S]. 
Let us first show the following particular case: For all f9 E Def, a(u(7([f]),7([g]))) = [f] ~ [g], i.e. ~(u(7(f), 7(9))) = [f A 9]. To prove this, we need to note that by exploiting the proof of [10, Theorem 5.7], it is not too hard to demonstrate the following fact: If f9 E Def and f A g ¢ false then there exist af E 7(f) and 0 o E 7(g) such that ct(n({af}, {0g})) = f A g (this fact strongly relies on the hypothesis that both f and g are formulae of Def). We will refer to this observation by (t). If f A g = false then either f =false or g =false. Hence, either 7(f) or 7(g) is the empty set, and so a(u(7(f), 7(g))) = a(O) = [false] = [f A 9]. Thus, let us assume that f A g C false. By Proposition 3.8, a(u(7(f),7(9))) = [{~(u({a},{0})) : a E 7(f), 0 E 7(g)}]. Moreover, observe that for any a E 7(f) and 0 E 7(g), by monotonicity of u and ~t, ct(u({a}, {0})) _ a(u(7(f),7(9))), and therefore from f A 9 = ct(u(7(f), 7(g))) (viz., A is the best correct approximation on Pos of u, cf. [10, Theorem 5.7]), we get that a(u({a},{0})) ~ f A g- Thus, by (t), we get a(u(7(f),7(g))) = [f A g]. Let us now turn to the general case. As already observed above, by Lemma 4.9, there exist [R°],[S °] E P(Pos) such that [R] = [R °] and [S] = [S<>], and R°,S ° C_Def Thus, w.l.o.g., let R,S C _ Def. Hence, the following equalities hold: ~(u(7([R]),7([s]))) = U a(u(7(f), 7(g))) f GR,gGS ]] [fAg] f ER,gES (by additivity of u and a) (by what proved above) G. Filb, F. Ranzato/Theoretical Computer Science 222 (1999) 77-111 105 and this closes the proof. = [{fAg:fcR, gCS}] = [R] n IS] [] (by Proposition 3.6) (by Proposition 3.6), To conclude, let us observe that R is not complete for u: In fact, by considering 0" = {x/a} and 0 = {x/b}, we get ~(u({0"}, {0})) = ~(0) = [false], whereas ~({0"})m ~({0}) = [x] n [x] = [x]. 5.5. Comparing POS and P(POS) Let us first show that all the operations of P(POS) = (Pos, R,U,n~) are strictly better than the corresponding ones of POS. Theorem 5.9. P(POS) is strictly better than POS. Proof. Theorem 5.3 showed that P(Pos) is strictly better than Pos. Let us consider here the abstract operations. First, consider the lub operations of Pos and P(Pos). According to the definitions in Section 2.3, we have to show that there exist Xl, S2 E gd(Sub) such that 7"(~(S1) O a(X2)) C 7(~(Z~l ) V 9~(z~2) ). Consider 61 = {x/a} and 0" 2 = {x/f(v,w),y/v}, where x,y c VI and v,w ~ VI, such that a({al}) = [~({al})] = [x] and ~({a2}) = [a({a2})] = [x --+ y]. In this case, Example 5.2 directly shows the thesis. Let us now turn to the glb. Consider a3 = {y/a} and 0" 4 = {X/V, y/f(v,w)}. Then, ~({0"1,0"2}) = [x,x ~ y], ~({0"3,0"4}) = [Y, Y ~ x], while ~({0"1,0"2}) = true = 0~({0"3, 0-4}). Moreover, by Proposition 3.6, [x,x ~ y]R[y,y ---+ x] = [x,x ~ y,y]. Then, analogous to Example 5.2, it is immediate to show the thesis. Finally, consider the abstract projections. Consider 0"5 = {x/f(y,z)}, where z E VI. Then, c~({0"1,0"5}) = [x,x +-+ (yAz)] and ~({0"1,0"5}) = xV(x +-~ (yAz)). Thus, n~e([x,x +-~ (yAz)], {x, y}) = [3z.X, 3z.x +-+ (yAz)] = [x,x -- y], whereas ne(xV(x +-+ (y Az)), {x, y} ) = 3z.xV(x (y A z)) = true. Hence, as before, we conclude that n~p is strictly better than np. O We can go more in depth about the relationship between P(POS) and POS. In fact, it tums out that the operations of POS are complete for the corresponding operations of P(POS), where Pos and P(Pos) are related by the Galois insertion (2f.[f], Pos, P(Pos), 2[S].V S), as recalled at the beginning of Section 5.3. Proposition 5.10. 
A, V, and ne are complete, respectively, for m, O and n~p. Proof. Completeness for A follows by Proposition 4.5, since Pos is distributive. Com- pleteness for the lub V always holds (see Section 2.2). As far as the projections are concerned, we have to show that for any [S] E P(Pos), and W c_ IVar, Vn~([S], W) = ne(VS, W). This is a consequence of the fact that the existential quantification is ad- ditive on all formulae of Bool (see, e.g., ): Vn~([S], W) = Vfcs o ~vI\w.f = 3VI\W. V S ° ~- 3vI\w. V S z ltp(VS, W). [] 106 G. Fil~, F. Ranzato/Theoretical Computer Science 222 (1999) 77-111 We now present an example where analysing a logic program using P(POS) we get an output which is strictly better than the corresponding one for POS. We follow the approach outlined in [1,31], where the abstract semantics of a logic program is obtained by means of a simple computation of a least fixpoint. This abstract semantics is the well-known abstraction of the declarative s-semantics (cf. ) characterizing the computed answer substitutions. We do not present all the details of the computation of the abstract semantics, and we refer the reader to [1, 31] for a full exposition. Example 5.11. Let us consider the following logic program P : p(x,x). p(a,y). p(x,a) : - p(x,z), p(a,x). This program allows us to illustrate the role played by all the abstract operations in the computation of the abstract semantics. As an intermediate step, let us compute the Clark completion of P, which is as follows: p(x,y) ~ (x = y) V (x = a) V 3z.(y = a A p(x,z) A p(a,x)) Then, one considers the mgu (if any) of each constraint appearing in the completion. Thus, by abstracting in Pos these mgu's (if unification fails we abstract to false), we get the following recursive definition for the Boolean function p : p(x, y ) = (x +- y) V x V ne(y A p(x,z ) A p( true, x ), {x,y}) = (y ~ x) V ne(y A p(x,z) A p(true, x), {x,y}) Using Kleene iteration starting from the bottom element false, we get the following sequence: p°(x, y) = false pl(x, y) = (y ~ x) V false = y -~ x p2(x, y) = (y -- x) V 3z.(y A (z --- x) A (x ~ true)) = (y -- x) V y = true (least fixpoint) Thus, an analyzer performing the analysis of P with the domain Pos yields the top propositional formula true. This means that we get no ground-dependency information. Notice that this lack of ground-dependency information for P is due to the fact that the two logical disjunctions (x ~ y) V x and (y -, x) V y are not disjunctions in the concrete sense, namely they do not faithfully model the union of the corresponding sets of substitutions. In contrast, by using the powerset abstract domain P(Pos), the analyzer is able to mimic in a precise way the union of sets of substitutions by using the lub of P(Pos). In fact, for P(Pos) and using its abstract operations that we defined G. Filb, F Ranzato/Theoretical Computer Science 222 (1999) 77-111 107 above, we get the following recursive definition for p(x, y) E P(Pos) : p(x,y) = [x ~-~ y] U [x] U n~p([y] R p(x,z) 19 p(true, x), {x,y}) = [{x ~ y,x}] tA n~([y] R p(x,z) 19 p(true, x), {x, y}) In this case, starting from the bottom element [false] of P(Pos), the Kleene iteration is as follows. 
p°(x, y) = [false] pl(x, y) = [x ~ y,x] U n~e([false], {x,y}) = [x ~ y,x] pZ(x,y) = [x +-~ y,x] U [3z.y A (x ~ z) A (true ~ x),3z.y A (x +-~ z) A true, 3z.y A x A (true ~ x), 3~.y A x A true] =[x ~ y,x, xAy, y, xAy, xAy] = [x ~--~ y,x,y] p3(x, y) = [x ~ y,x] U [3z.y A (x ~ z) A (true ~ x), 3z.y /x (x ~ z)/x true, 3z.y A (x ~ z) A x, 3z.y A x A (true ~ x), 3z.y A x A true, 3z.y A x A x, 3~.y A z A (true ~ x), 3z.y A z/~ true, 3z.y A z A x] = Ix ~ y,x,x A y,y,x A y, xA y, xA y, xA y,x A y,y, xA y] = [x ~-+ y,x,y] (least fixpoint) Thus, using P(Pos) we are able to infer that in each computed answer substitution for the predicate p, either its first argument is ground or its second argument is ground or they are equivalent (namely, they are bound to the same variables). [] In the previous example, we can observe that abstracting in Pos the abstract seman- tics of the program P obtained for P(Pos), we get exactly the abstract semantics of P for Pos. In fact, the logical disjunction of the formulae in Ix ~ y,x,y] coincides with true. In other terms, this means that there is completeness between the abstract semantics for POS and P(POS). It turns out that this relationship of completeness al- ways holds, whenever the abstract semantics is the standard bottom-up abstraction (cf. [3, 5, 32]) of the denotational s-semantics. Let us introduce the following notation: For any logic program P, I[P]]D denotes the abstract bottom-up s-semantics of P instantiated to the abstract domain D (and corresponding abstract operations). Proposition 5.12. For any program P, e(Pos). Proofi We do not consider the details of a particular definition of an abstract bottom- up semantics, but we reason on their general pattern of definition. W.l.o.g., we can suppose that for an abstract interpretation ~ = (D, AD, VD, riD) abstracting LP, for any program P, pos = V[P]p(pos). [] We can draw the following consequence of the above result: Using P(POS) in- stead of POS for analysing logic programs, one cannot gain plain ground-dependency Pos-information, but possibly only disjunctive ground-dependency information, i.e. the information represented precisely by the new elements of P(Pos) and that the base abstract domain Pos is not able to represent with no loss of precision. Although the above results set a limit to what can be achieved by using the powerset abstract domain P(Pos), it is worthwhile to remark that, in general, it is not possible to recover ~P]]p(pos) from [~P]]Pos. For instance, if for some P the analyzer yields the answer liP]leone. = true, for P(Pos) we could have the answers [[P]p(eos~ = [x,y, x +-+ y], [z,z --+ y],[w --+ x,x --+ w], etc., namely the corresponding analysis of P with P(Pos) might well be strictly more precise. 6. Conclusion This paper proposed a general study of the powerset refinement operator on abstract interpretations. This operator, given an abstract interpretation ~ = (D, ol ..... ok), pro- duces a new full abstract interpretation P(9) = (P(D), o~,..., o~), i.e., it defines both a new refined powerset domain P(D) which is able to represent in the best possible way the concrete disjunction, and new conveniently defined abstract operations o for it. We have given conditions guaranteeing the correctness of P(~), and we have studied the relationship, as far as precision is concerned, between ~ and P(9). 
The general theory is applied to the well-known abstract interpretation POS (whose abstract do- main Pos consists of certain propositional formulae) for ground-dependency analysis of logic languages. We obtained a strict improvement by lifting POS to its powerset P(POS). This is somehow an unexpected result, since the abstract domain Pos is al- ready closed by logical disjunction of formulae. We have also characterized precisely the relationship between the abstract semantics using POS and P(POS), by showing the existence of a form of completeness between them. We have not addressed the complexity issue of the powerset operator, since our main aim was to study the powerset operator from a theoretical perspective, consid- ering those aspects related to the precision. It is clear that a static program analysis based on a powerset abstract interpretation P(~) will be more costly than one based on 9, being, in general, the size of the powerset domain P(D) exponential with respect to the size of D. It could be interesting to study practically the trade-off between the loss in efficiency and the gain in precision for some specific abstract interpretations. However, it is worth mentioning that abstract interpretation does not apply only to G. Filk, F. Ranzato/Theoretical Computer Science 222 (1999) 7~111 109 program analysis, but it is also useful in many other fields. For instance, abstract inter- pretation is a valuable tool in comparative semantics, i.e. for studying the relationship occurring between semantics of programming languages at different levels of abstrac- tion (cf. [11, 12, 16]). In this context, where obviously complexity issues are much less important, we believe that the powerset operator can be successfully applied in order to systematically derive many well-known collecting semantics by powerset of some base semantics. This might be particularly useful in order to simplify their semantic definitions, given that the results of allow to characterize the optimal (i.e. the simplest) base semantics. A similar application for logic program semantics has been given in , where complementation , i.e. the inverse operation to reduced product, has been exploited in order to systematically derive new semantic definitions. Acknowledgements We are grateful to Roberto Giacobazzi and Harald Sondergaard for many valuable suggestions and discussions. This research was partly supported by Progetto Finaliz- zato Sistemi Informatici e Calcolo Parallelo of Italian CNR under grant no. 93.01603 PF69. References T. Armstrong, K. Marriott, P. Schachte, H. Sondergaard, Two classes of Boolean functions for dependency analysis, Sci. Comput. Program., to appear. A preliminary version appeared in Proc. SAS '94, Lecture Notes in Computer Science, vol. 864, Springer, Berlin, 1994. 12] R. Balbes, P. Dwinger, Distributive Lattices, University of Missouri Press, Columbia, Missouri, 1974. R. Barbuti, R. Giacobazzi, G. Levi, A general framework for semantics-based bottom-up abstract interpretation of logic programs, ACM Trans. Program. Lang. Syst. 15(1) (1993) 133-181. G.L. Burn, C.L. Hankin, S. Abramsky, Strictness analysis for higher-order functions, Sci. Comput. Program. 7 (1986) 249-278. M. Codish, D. Dams, E. Yardeni, Bottom-up abstract interpretation of logic programs, Theoret. Compnt. Sci. 124(1) (1994) 93-126. M. Codish, M. Falaschi, K. Marriott, Suspension analyses for concurrent logic programs, ACM Trans. Program. Lang. Syst. 16(3) (1994) 649-686. A. Cortesi, G. Filr, R. Giacobazzi, C. Palamidessi, F. 
Ranzato, Complementation in abstract interpretation, ACM Trans. Program. Lang. Syst. 19(1 ) (1997) 7~,7. A. Cortesi, G. Filr, W. Winsborough, Prop revisited: propositional formula as abstract domain for groundness analysis, in: Proc. 6th IEEE Symp. on Logic in Computer Science (LICS '91), IEEE Computer Society Press, Los Alamitos, CA, 1991, pp. 322-327. A. Cortesi, G. Filr, W. Winsborough, Comparison of abstract interpretations, in: W. Kuich (Ed), Proc. 19th Intemat. Colloq. on Automata, Languages and Programming (ICALP '92), Lecture Notes in Computer Science, vol. 623, Springer, Berlin, 1992, pp. 521-532. A. Cortesi, G. Filr, W. Winsborough, Optimal groundness analysis using propositional logic, J. Logic Program. 27(2) (1996) 137-167. [1 I] P. Cousot, Abstract interpretation, ACM Compnt. Surveys 28(2) (1996) 324-328. P. Cousot, Types as abstract interpretations (Invited Paper), in Conf. Record of the 24th ACM Symp. on Principles of Programming Languages (POPL '97), ACM Press, New York, 1997, pp. 316-331. 110 G. FilO, F. Ranzato/Theoretical Computer Science 222 (1999) 77-111 P. Cousot, R. Cousot, Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints, in: Conf. Record of the 4th ACM Symp. on Principles of Programming Languages (POPL '77), ACM Press, New York, 1977, pp. 238-252. P. Cousot, R. Cousot, Systematic design of program analysis frameworks, in: Conf. Record of the 6th ACM Symp. on Principles of Programming Languages (POPL '79), ACM Press, New York, 1979, pp. 269-282. P. Cousot, R. Cousot, Abstract interpretation and application to logic programs, J. Logic Program. 13(2-3) (1992) 103-179. P. Cousot, R. Cousot, Inductive definitions, semantics and abstract interpretation, in: Conf. Record of the 19th ACM Syrup. on Principles of Programming Languages (POPL '92), ACM Press, New York, 1992, pp. 83-94. P. Cousot, R. Cousot, Higher-order abstract interpretation (and application to comportment analysis generalizing strictness, termination, projection and PER analysis of functional languages) (Invited Paper), in: Proc. IEEE Internat. Conf. on Computer Languages (ICCL '94), IEEE Computer Society Press, Los Alamitos, CA, 1994, pp. 95-112. P. Dart, On derived dependencies and connected databases, J. Logic Program. 11(2) (1991) 163-188. B.A. Davey, H.A. Priestley, Introduction to Lattices and Order, Cambridge University Press, Cambridge, UK, 1990. M. Falaschi, G. Levi, M. Martelli, C. Palamidessi, Declarative modeling of the operational behavior of logic languages, Theoret. Comput. Sci. 69(3) (1989) 289-318. G. Filr, R. Giacobazzi, F. Ranzato, A unifying view of abstract domain design, ACM Comput. Surveys 28(2) (1996) 333-336. G. Filr, F. Ranzato, Improving abstract interpretations by systematic lifting to the powerset, in: M. Bruynooghe (Ed.), Proc. tnternat. Logic Programming Symp. (ILPS '94), The MIT Press, Cambridge, MA, 1994, 655~569. R. Giacobazzi, F. Ranzato, Complementing logic program semantics, in: M. Hanus, M. Rodriguez- Artalejo (Eds.), Proc. 5th Internat. Conf. on Algebraic and Logic Programming (ALP '96), Lecture Notes in Computer Science, vol. 1139, Springer, Berlin, 1996, pp. 238-253. R. Giacobazzi, F. Ranzato, Optimal domains for disjunctive abstract interpretation, Sci. Comput. Program., 1998, to appear. G. Gierz, K.H. Hofmann, K. Keimel, J.D. Lawson, M. Mislove, D.S. Scott, A Compendium of Continuous Lattices, Springer, Berlin, 1980. C.A. Gunter, D.S. Scott, Semantic domains, in: J. 
van Leeuwen (Ed.), Handbook of Theoretical Computer Science, vol. B: Formal Models and Semantics, Elsevier, Amsterdam, and The MIT Press, Cambridge, MA, 1990, pp. 633-674. M. Hermenegildo, D.S. Warren, S.K. Debray, Global flow analysis as a practical compilation tool, J. Logic Program. 13(4) (1992) 349-366. D. Jacobs, A. Langen, Static analysis of logic programs for independent AND-parallelism, J. Logic Program. 13(2-3) (1992) 154-165. T.P. Jensen, Disjunctive strictness analysis, in: Proc. 7th IEEE Symp. on Logic in Computer Science (LICS '92), IEEE Computer Society Press, Los Alamitos, CA, 1992, pp. 174-185. J.L. Lassez, M.J. Maher, K. Marriott, Unification revisited, in: J. Minker (Ed.), Foundations of Deductive Databases and Logic Programming, Morgan Kanfmann, Los Altos, CA, 1988, pp. 587~525. K. Marriott, H. Sondergaard, Precise and efficient groundness analysis for logic programs, ACM Lett. Program. Lang. Syst. 2(1-4) (1993) 181-196. K. Marriott, H. Sondergaard, N.D. Jones, Denotational abstract interpretation of logic programs, ACM Trans. Program. Lang. Syst. 16(3) (1994) 607~48. A. Mycroft, The theory and practice of transforming call-by-need into call-by-value, in: B. Robinet (Ed.), Proc. 4th lntemat. Symp. on Programming, Lecture Notes in Computer Science, vol. 83, Springer, Berlin, 1980, pp. 270-281. A. Mycroft, Completeness and predicate-based abstract interpretation, in: Proc. ACM Symp. on Partial Evaluation and Program Manipulation (PEPM '93), ACM Press, New York, 1993, pp. 179-185. F. Nielson, Tensor products generalize the relational data flow analysis method, in: M. Aratr, I. Kfitai, L. Varga, (Eds.), Proc. 4th Hungarian Computer Science Conf., 1985, pp. 211-225. H.R. Nielson, F. Nielson, Bounded fixed point iteration, in: Conf. Record of the 19th ACM Symp. on Principles of Programming Languages (POPL '92), ACM Press, New York, 1992, pp. 71-82. G. Filb, F Ranzato / Theoretical Computer Science 222 (1999) 77-111 111 P. Van Roy, A.M. Despain, The benefits of global dataflow analysis for an optimizing Prolog compiler, in: S.K. Debray, M. Hermenegildo (Eds.), Proc. North American Conf. on Logic Programming (NACLP '90), The MIT Press, Cambridge, MA, 1990, pp. 501-515. I. Wegener, The Complexity of Boolean Functions, Wiley-Teubner Series in Computer Science, Wiley, New York, 1987.
Abstraction Techniques for Parameterized Verification

Muralidhar Talupur

November 2006
CMU-CS-06-169

School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Thesis Committee:
Edmund M. Clarke, Chair
Randal E. Bryant
Amir Pnueli, New York University
Jeannette M. Wing

Copyright © 2006 Muralidhar Talupur

This research was sponsored by the Gigascale Systems Research Center (GSRC), the Semiconductor Research Corporation (SRC), the Office of Naval Research (ONR), the Naval Research Laboratory (NRL), and the Army Research Office (ARO).
The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the sponsoring institutions, the U.S. Government, or any other entity.
Keywords: Formal methods, model checking, abstract interpretation, abstraction, parameterized systems, cache coherence protocols, mutual exclusion protocols.
To my parents.
Abstract

Model checking is a well known formal verification technique that has been particularly successful for finite state systems such as hardware systems. Model checking essentially works by a thorough exploration of the state space of a given system. As such, model checking is not directly applicable to systems with unbounded state spaces like parameterized systems. The standard approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. Properties of interest can then be verified over the finite abstract models.

In this thesis, we propose a novel abstraction technique for model checking parameterized systems. Parameterized systems are systems with replicated processes in which the number of processes is a parameter. This kind of replicated structure is quite common in practice. Standard examples of systems with replicated processes are cache coherence protocols, mutual exclusion protocols, and controllers on automobiles. As the exact number of processes is a parameter, the system is essentially an unbounded system. The abstraction technique we propose, called environment abstraction, tries to simulate the way a human designer thinks about systems with replicated processes. The abstract models we construct are easy to compute and powerful enough to verify properties of interest without giving any spurious counterexamples. We have applied this abstraction method to several well known parameterized systems like cache coherence protocols and mutual exclusion protocols to demonstrate its efficacy. Importantly, we show how to remove a commonly used, but severely restricting assumption, called the atomicity assumption, while verifying parameterized systems.
We also apply insights from environment abstraction in a slightly different setting, namely, that of systems consisting of identical processes placed on a network graph.
Adapting principles from environment abstraction, we show how the verification of a system with a large network graph can be decomposed into verification of a collection of systems, each with a small constant sized network graph. As far as we are aware, ours is the first result to show that verification of systems with complex network graphs can be decomposed into smaller problems.
Acknowledgments

I would like to thank my advisor Prof. Edmund Clarke for providing me the opportunity to pursue my research interests. Not only did I benefit academically from him, but I also had a chance to learn valuable life lessons from him. His encouraging attitude towards his students, and his insistence on simplicity have significantly changed my view of things.

I would also like to thank my thesis committee members, Prof. Randal Bryant, Prof. Amir Pnueli, and Prof. Jeannette Wing, for their insightful comments and suggestions regarding my work. My discussions with Prof. Pnueli helped me concretize my ideas. Prof. Wing's thorough comments on the early drafts of my thesis were very useful.
It has been a pleasure to work with my collaborator and co-author Helmut Veith. His suggestions for improving the presentation of my work, including this thesis, have been invaluable.
My friends Himanshu Jain, Shuvendu Lahiri, Flavio Lerda, and Nishant Sinha gave me excellent company and I have gained a lot, academically and otherwise, from them. It was fun sharing my office with Owen Cheng, who also rescued me from technical difficulties more times than I can remember.
Contents

1 Introduction
  1.1 Introduction
    1.1.1 Systems with Replicated Processes
    1.1.2 Thesis Outline
2 Environment Abstraction
  2.1 Introduction
  2.2 A Generic Framework for Environment Abstraction
    2.2.1 Description of the Abstract System PA
  2.3 Soundness
    2.3.1 Simulation Modulo Renaming
    2.3.2 Proof of Soundness
  2.4 Trade-Off between Expressive Labels and Index Variables
  2.5 Extending Environment Abstraction
    2.5.1 Multiple Reference Processes
    2.5.2 Adding Monitor Processes
  2.6 Example of Environment Abstraction
    2.6.1 Abstract Descriptions
  2.7 Related Work
    2.7.1 Predicate abstraction
    2.7.2 Indexed Predicates
    2.7.3 Three Valued Logical Analysis (TVLA)
    2.7.4 Counter Abstraction
  2.8 Conclusion
3 Environment Abstraction for Verification of Cache Coherence Protocols
  3.1 Introduction
    3.1.1 Cache Coherence Protocols
  3.2 Discussion of Related Work
  3.3 System Model for Cache Coherence Protocols
    3.3.1 State Variables
    3.3.2 Program Description for the Caches
    3.3.3 Program Description for the Directory
    3.3.4 Describing Real-Life Protocols
  3.4 Environment Abstraction for Cache Coherence Protocols
    3.4.1 Specifications and Labels
    3.4.2 Abstract Model
  3.5 Optimizations to Reduce the Abstract State Space
    3.5.1 Eliminating Unreachable Environments
    3.5.2 Redundancy of the Abstract Set Variables
  3.6 Computing the Abstract Model
    3.6.1 Cache Transitions
    3.6.2 Directory Transitions
  3.7 Experiments
  3.8 Conclusion
  3.9 Protocol Descriptions
4 Environment Abstraction for Verification of Mutex Protocols
  4.1 Introduction
  4.2 System Model for Mutual Exclusion Protocols
    4.2.1 Local State Variables
    4.2.2 Transition Constructs
  4.3 Environment Abstraction for Mutual Exclusion Protocols
    4.3.1 Specifications and Labels
    4.3.2 Abstract Descriptions
  4.4 Extensions for Fairness and Liveness
    4.4.1 Abstract Fairness Conditions
    4.4.2 Soundness in the Presence of Fairness Conditions
    4.4.3 Proof of Soundness
  4.5 Computing the Abstract Model
    4.5.1 Case 1: Guarded Transition for Reference Process
    4.5.2 Case 2: Guarded Transition for Environment Processes
    4.5.3 Case 3: Update Transition for Reference Process
    4.5.4 Case 4: Update Transition for Environment Processes
  4.6 Experimental Results
  4.7 Protocols and Specifications
5 Removing the Atomicity Assumption for Mutex Protocols
  5.1 Introduction
  5.2 Modeling Mutual Exclusion Protocols without Atomicity Assumption
  5.3 Atomicity Assumption
  5.4 Monitors for Handling Non-atomicity
    5.4.1 Abstracting the Monitor Variables
  5.5 Computing the Abstract Model
    5.5.1 Case 1: Guarded Transition for Reference Process
    5.5.2 Case 2: Guarded Transition for Environment Processes
    5.5.3 Case 3: Update Transition for Reference Process
    5.5.4 Case 4: Update Transition for Environment Processes
  5.6 Experimental Results
6 Verification by Network Decomposition
  6.1 Introduction
  6.2 Related Work
  6.3 Computation Model
  6.4 Reductions for Indexed LTL\X Specifications
    6.4.1 Existential 2-indexed LTL\X Specifications
    6.4.2 Existential k-indexed LTL\X Specifications
    6.4.3 Specifications with General Quantifier Prefixes
    6.4.4 Cut-Offs for Network Topologies
  6.5 Bounded Reductions for CTL\X are Impossible
  6.6 Conclusion
  6.7 Proofs of Lemmas
  6.8 Connection Topologies for 2-Indices
7 Conclusion
  7.1 Summary
  7.2 Extensions
Bibliography

List of Figures

1.1 Counter example guided model checking loop
1.2 Tool chain for environment abstraction
3.1 Results for Cache Coherence Protocols
4.1 Abstraction Mapping
4.2 Process 7 changes its internal state, but the abstract state is not affected. Thus, there is a self-loop around the abstract state. The abstract infinite path consisting of repeated executions of this loop has no corresponding concrete infinite path.
4.3 Running Times
4.4 Szymanski's Mutual Exclusion Protocol
4.5 Lamport's Bakery Algorithm
5.1 Evaluation of a Guard
5.2 Evaluation of a Wait condition
5.3 Evaluation of an Update
5.4 A possible execution trace of the system with three processes
5.5 A more complicated trace of the system
5.6 Execution trace seen from the "outside"
5.7 Update procedure for monitor variables pertaining to guarded transitions
5.8 Procedure for updating monitor variables pertaining to guarded transitions
5.9 Procedure for updating monitor variables pertaining to update transitions
5.10 Function ω
5.11 Function Ωt
5.12 Function Ωb
5.13 Running Times for the bakery protocol. Bakery(A) and Bakery(NA) stand for the bakery protocol with and without the atomicity assumption
6.1 Network Graphs A, B, realizing two different characteristic vectors
6.2 A system with grid like network graph with 9 nodes
6.3 Connection topologies for the grid-like network graph
6.4 An example of a 5-index connection topology
6.5 The Kripke structure K, constructed for three levels. The dashed lines indicate the connections necessary to achieve a strongly connected graph
Chapter 1

Introduction

1.1 Introduction

Modern hardware and software systems are extremely large and intricate. Designing such systems is necessarily an error prone process because of their complexity. A significant percentage of development time is taken up in identifying bugs. Error finding is primarily accomplished through informal/incomplete techniques like testing and simulation. These techniques are incomplete in that they are not guaranteed to find all the bugs in the system. The few errors that escape testing and simulation can still undermine a system, leading to huge financial losses (the Intel floating point error) or even potentially fatal consequences (the Ariane 5 disaster).
Formal verification techniques like model checking and theorem proving provide an alternative to incomplete techniques. These techniques explore every possible behavior of a system model and thus find all the bugs in the model. While formal verification methods tend to be expensive (both time wise and labor wise), they are worth the effort put in.

The SLAM project at Microsoft, which is one of the well known success stories of model checking, managed to exhaustively verify, against a set of properties, the device drivers in a Windows machine. It had been previously observed that most of the crashes of Windows systems occurred due to bugs in the device drivers that escaped detection using testing and simulation. The SLAM project succeeded in eliminating many of the subtle bugs responsible for system crashes using model checking. Thus, the latest versions of the Windows operating systems have benefitted significantly from this project. Model checking has been even more successful in the hardware industry. In fact, most chip design companies, such as Intel and AMD, have dedicated model checking teams as part of the development process. Spurred on by successes like these in the software industry and the hardware industry, there is an increasing adoption of formal verification methods as an integral part of system development.
The central question in formal methods is the following: given a model M and a property Φ, does the property Φ hold on system M? Formally this is expressed as: M ⊨ Φ?
Model checking, which is the formal verification technique considered in this thesis, works by a thorough exploration of the state space of a given system. The system M is usually given as a Kripke structure and the property Φ is expressed in a temporal logic. Kripke structures are specified by tuples of the form (S, I, T, L) where

• S is a finite collection of states,
• I is the set of initial states,
• T ⊆ S × S is the transition relation,
• L is a labelling function that associates every state in S with a finite set of labels.

Essentially, a Kripke structure is a non-deterministic finite state transition system. Since we are interested in the evolution of a system, we need the notion of time to express properties of interest. These properties are expressed in temporal logic, usually CTL or LTL. Traditional temporal logics are interpreted over Kripke structures.
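To make the (S, I, T, L) definition concrete, here is a minimal sketch of a Kripke structure together with a toy two-state example; the class and field names are ours, not the thesis's.

```python
from dataclasses import dataclass

@dataclass
class KripkeStructure:
    states: set        # S: finite collection of states
    initial: set       # I: set of initial states
    transitions: set   # T: subset of S x S, given as pairs (s, s')
    labels: dict       # L: maps each state to a finite set of atomic labels

    def successors(self, s):
        """States reachable from s in one transition."""
        return {t for (u, t) in self.transitions if u == s}

# Toy example: a single process alternating between "idle" and "busy".
M = KripkeStructure(
    states={"idle", "busy"},
    initial={"idle"},
    transitions={("idle", "busy"), ("busy", "idle"), ("busy", "busy")},
    labels={"idle": {"ready"}, "busy": {"working"}},
)
assert M.successors("idle") == {"busy"}
```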
In recent years, a whole range of powerful model checkers have been developed starting with Ken McMillan's seminal Binary Decision Diagram (BDD) based model checker SMV. BDD based model checkers represent sets of states in a symbolic fashion. The representation of sets of states as BDDs is usually compact, and they can be efficiently manipulated using the standard operations on BDDs. Explicit state model checkers like SPIN, on the other hand, represent states explicitly. While explicit representation of states can end up being cumbersome (especially if the reachable state space is very large), the fact that we can examine individual states in detail allows for clever pruning of the search space. For highly parallel systems, techniques like symmetry reduction in conjunction with explicit state model checkers are among the best options, time and space wise, available. In the last few years, the advent of powerful Boolean satisfiability solvers (or SAT solvers) has led to the development of a new class of model checkers.
SAT based Bounded Model Checkers, which convert the model checking question into a SAT problem, are extremely fast and very useful in finding bugs that can be reached in a small number of transitions (called shallow bugs). Interpolant based model checkers and proof based abstraction [46; 59; 61] too make use of fast SAT solvers, and are currently the fastest for a wide range of problems. (Deciding whether M ⊨ Φ is computationally very hard, and it is unlikely that any one method performs the best on all problems.)
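The SAT-based reduction mentioned here has a standard shape: unroll the transition relation k steps and ask whether a state violating the property is reachable within the bound. The helper below only assembles that propositional query as a string, as an illustration; a real bounded model checker would translate it to CNF and hand it to a SAT solver.

```python
# Sketch of the standard BMC unrolling for a safety property P:
#   I(s0) & T(s0,s1) & ... & T(s_{k-1},s_k) & (~P(s0) | ... | ~P(s_k))
# Illustrative only; not the encoding of any particular tool.

def bmc_query(init, trans, prop, k):
    conj = [f"{init}(s0)"]
    conj += [f"{trans}(s{i},s{i+1})" for i in range(k)]
    bad = " | ".join(f"~{prop}(s{i})" for i in range(k + 1))
    return " & ".join(conj) + f" & ({bad})"

print(bmc_query("I", "T", "P", 3))
# I(s0) & T(s0,s1) & T(s1,s2) & T(s2,s3) & (~P(s0) | ~P(s1) | ~P(s2) | ~P(s3))
```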
All the different model checkers essentially perform a thorough exploration of the state space. As such, model checking cannot be applied to large real world systems directly.
Successful application of model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite state system, A, called the abstract system, from a given large or infinite concrete system C. The abstract system is usually a conservative abstraction of the concrete system, which means every behavior seen in C is also seen in A. It can be shown that if a universal property – a property that talks about all paths of a system – holds on the abstract system then it will also hold on the concrete model; these results form the basis for abstraction. Thus, instead of model checking C directly, we can model check A and infer the properties satisfied by C.
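For finite systems, a conservative abstraction of this kind can be computed from an abstraction function h mapping concrete states to abstract states: an abstract transition is added whenever some pair of concrete states it covers has a transition (the standard existential abstraction). The sketch below illustrates that generic construction; it is not the specific abstraction developed later in the thesis.

```python
# Generic existential abstraction: every concrete behavior is preserved in the
# abstract system, possibly together with extra (spurious) behaviors.
def existential_abstraction(states, initial, transitions, h):
    abs_states = {h(s) for s in states}
    abs_initial = {h(s) for s in initial}
    abs_transitions = {(h(s), h(t)) for (s, t) in transitions}
    return abs_states, abs_initial, abs_transitions

# Toy use: collapse a modulo-4 counter to its parity.
states = {0, 1, 2, 3}
transitions = {(i, (i + 1) % 4) for i in states}
_, _, abs_T = existential_abstraction(states, {0}, transitions, lambda s: s % 2)
assert abs_T == {(0, 1), (1, 0)}
```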
Creating abstract models involves balancing two conflicting aims:

• Small Abstract Models. The abstract model has to be small enough that we can model check it efficiently.

• Precise Abstract Models. The smaller the abstract system, the more behaviors it allows. For instance, if the transition relation were true (that is, there is a transition from every state to all the other states), the abstract model would be the smallest possible one and would allow every trace. Abstract systems that have too many extraneous traces lead to spurious counter examples – traces that violate the property but do not appear in the concrete system. Thus, while the abstract model should be small, it should also be precise. This latter condition tends to make the abstract system large.
When we model check the abstract model, there are two possible outcomes, as shown in Figure 1.1:

[Figure 1.1: Counter example guided model checking loop.]
(i) The model checker returns true, that is, the abstract model satisfies the universal property Φ. In this case, the concrete model also satisfies the property Φ.
(ii) The model checker returns false, that is, the abstract model violates the universal property Φ. In this case, we can check if the counter example trace is a real counter example or a spurious counter example. If the trace is a real counter example, we have a valid counter example to Φ. Otherwise, we cannot say whether the concrete system satisfies Φ or not. In such a scenario, another abstract model, more refined than the previous one, is built and model checked. This process is continued until a definitive result is reached or the system capacity is exceeded.
In practice, it is never sufficient to build just one abstract model. It usually takes several abstract models – each more precise than the previous one – to reach a result.
Since the question of whether a (possibly infinite) system satisfies a temporal property Φ is undecidable in general, the abstraction refinement loop is not guaranteed to terminate.
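Figure 1.1 can be phrased as a small driver loop: model check the abstract model; report success on true; on false, replay the counterexample concretely and either report it or refine and try again, with no termination guarantee. The sketch below is schematic; the four helper arguments are placeholders for whichever abstraction, model checker, simulation check, and refinement procedure is plugged in.

```python
# Schematic abstraction-refinement loop (placeholders, not the thesis's tool chain).
def cegar(concrete, phi, abstract, model_check, concretize, refine, precision, max_iters=100):
    for _ in range(max_iters):
        abstract_model = abstract(concrete, precision)
        holds, abstract_trace = model_check(abstract_model, phi)
        if holds:
            return "property holds", None
        concrete_cex = concretize(concrete, abstract_trace)
        if concrete_cex is not None:
            return "property fails", concrete_cex      # real counterexample
        precision = refine(precision, abstract_trace)  # spurious: refine and retry
    return "undetermined", None  # the loop need not terminate in general
```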
To extract useful abstract models, the abstraction technique must be domain specific.
This is because the class of systems is too rich, including sequential software, concurrent protocols, and time triggered systems. The commonalities between these classes are not yet sufficiently understood that we can devise a general abstraction mechanism. All notable successes of model checking (in fact, of formal verification in general) have come from projects which have focused on a specific class of systems, for instance, the class of device drivers in the SLAM project. Following this trend, this thesis proposes a new abstraction technique for concurrent systems that have replicated components such as cache coherence protocols and mutual exclusion protocols.
We have applied this abstraction technique to various real world examples to demonstrate its efficacy.
1.1.1 Systems with Replicated Processes

Many real world systems consist of concurrently executing replicated components. Classic examples of such systems are cache coherence protocols which consist of several processes (local caches) executing the exact same cache coherence protocol. That is, the same protocol is replicated at several different processes. As another example, consider controllers in an automobile that are connected via a common bus. The controllers themselves might be different with each controller performing a different function. All controllers use some set of rules, i.e., a protocol to access the bus in a safe and coordinated manner. This bus access protocol must be the same in all the controllers. Thus, if we consider the sub-system consisting of the bus access protocol, we again have an instance of the replicated structure.
Replication is a widely occurring feature in real systems. In fact, any scenario in which a collection of processes are contending for a common resource will necessarily involve replication (of protocols/algorithms).
The main classes of replicated systems that researchers in formal verification have considered are cache coherence protocols, mutual exclusion protocols, and time triggered protocols. Such protocols are crucial parts of modern computer systems. Systems with replicated components/processes are usually designed to be correct no matter what the exact number of processes is. Systems with replicated components that have a parameterized number of processes are called parameterized systems. In general, systems can be parameterized not just by the number of processes but also by other parameters such as the size of the buffers available per communication channel, the width of the data path, and so on.
All parameterized systems are essentially unbounded systems.
Applying model checking to such parameterized systems is challenging because they lack fixed state spaces. One way to formally reason about a parameterized system is to use model checking. In this approach, a finite state, conservative abstraction of the system is extracted and model checked. This is the approach followed by Pnueli et al., Lahiri et al. [52; 53], Delzanno et al. [28; 29], Chou et al., German and Sistla, Namjoshi, and Kahlon et al. The abstraction created is a conservative (or sound) abstraction. This means any universal property (a property that talks about all paths) that holds on the abstract model will also hold on the concrete model. The implication in the other direction does not usually hold, that is, if the universal property holds on the parameterized system then it may or may not hold on an abstract model. There are other model checking based techniques like Invisible Invariants [52; 53] and McMillan's Compositional Reasoning which use model checking in a different fashion.
An alternate approach to verifying parameterized systems is to use theorem proving (we classify any technique that requires the users to supply lemmas about the system as theorem proving). McMillan's Compositional Reasoning, mentioned earlier, is a good example in this class (model checking is used in this approach but the user has to come up with non trivial lemmas). Rushby et al. have used the PVS theorem prover to establish properties of certain clock synchronization algorithms (used in automobiles) and other systems with a parameterized number of replicated processes, see [49; 69].
One of the main contributions of this thesis is an abstraction technique, named environment abstraction, developed for reasoning about parameterized systems. Environment abstraction exploits the replicated structure of a parameterized system to make its verification easy. Ideas from this abstraction can be used even if the number of replicated processes in a system is fixed. The essential principle is to create an abstraction that matches human reasoning closely. When a human designer creates a system with replicated processes, (s)he reasons about its correctness by focussing on the execution of one reference process and sees how the other processes might interfere with its execution. Following this idea, our abstraction maintains detailed information on the reference process and abstracts the other processes in relation to the reference process. The resulting abstraction is quite powerful and we believe it is the most natural abstraction (that is, it corresponds most closely to the abstraction humans use in reasoning about parameterized systems).
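One simplified way to picture this idea, meant only to convey the flavour (the precise abstract state space is defined in Chapter 2), is to keep the reference process's local state exactly and to record, for each "environment condition" relating the reference process to another process, a single bit saying whether some other process currently satisfies it.

```python
# Simplified illustration of the environment abstraction idea (flavour only;
# the actual abstract states are defined in Chapter 2).
def abstract_state(global_state, ref_index, env_conditions):
    ref = global_state[ref_index]
    env_bits = tuple(
        any(cond(ref, other)
            for j, other in enumerate(global_state) if j != ref_index)
        for cond in env_conditions
    )
    return (ref, env_bits)

# Toy use: local states are ticket numbers; the single environment condition is
# "some other process holds a smaller nonzero ticket than the reference process".
conds = [lambda ref, other: 0 < other < ref]
assert abstract_state((3, 0, 1, 5), ref_index=0, env_conditions=conds) == (3, (True,))
```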
In the tradition of classical model checking, our approach provides an automated tool chain (shown in Figure 1.2).
[Figure 1.2: Tool chain for environment abstraction.]
1. The behavior of the distributed algorithm/protocol is described in a suitable input language, cf. Section 3.3 and Section 4.2. The user's role ends with inputting the protocol to the verification tool.

2. The environment abstraction tool extracts a finite state model from the protocol description, and puts the model in SMV format.

3. SMV verifies the specified properties.
We have used this abstraction based method to prove properties of well known cache coherence systems, mutual exclusion algorithms, and real time protocols.
Typically, handling liveness properties is much harder (theoretically) than handling safety properties. For instance, the Invisible Invariants method requires significant additional work before it can handle liveness properties and the Indexed Predicates method [52; 53] cannot handle liveness properties at all. Informally, this is because verification of safety properties depends only on the reachable set of states, whereas verification of liveness properties depends also on the order in which the various states are reached. Ranking functions are needed to argue that desirable states are eventually reached. Finding such ranking functions is typically a non-trivial task.
In contrast, extending our method to handle liveness is very simple. Since our abstract model simulates the execution of one single process in precise detail, liveness properties of a single process are easy to reason about. We only need to rule out spurious loops introduced by the abstraction.
Importantly, other model checking based approaches to parameterized verification make the atomicity assumption while handling parameterized protocols. The atomicity assumption in essence states that, in a distributed system with several components, any component can know (or rather read) the state of all the other components instantaneously.
This is quite unrealistic and simplifies a distributed protocol significantly. In this thesis, we describe a simple extension to remove the atomicity assumption. Note that the term atomicity is used in a different sense from the classical usage in distributed computing literature. In the latter usage, atomicity is used to qualify a single read or write operation. An atomic read or write operation is one which happens in an atomic time unit and thus, no other operation can interfere with its execution. Atomicity, as used in this thesis, qualifies a set of read/write operations.
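The difference can be illustrated with a small sketch (ours, not the thesis's modeling language): under the atomicity assumption a guard that mentions all other processes is evaluated in one indivisible step, whereas without it the same check becomes a sequence of individual reads between which the other processes may move.

```python
# Illustration of the atomicity assumption for a guard such as
# "no other process holds a smaller nonzero ticket than process i".
def guard_atomic(states, i):
    # Atomic: every other process is inspected in one indivisible step.
    return all(not (0 < states[j] < states[i]) for j in range(len(states)) if j != i)

def guard_non_atomic(states, i, scheduler):
    # Non-atomic: one read per step; `scheduler` lets other processes move in
    # between reads, so the guard may observe a mixture of old and new states.
    for j in range(len(states)):
        if j == i:
            continue
        if 0 < states[j] < states[i]:
            return False
        scheduler(states)
    return True

assert not guard_atomic([3, 2, 0], 0)  # process 1 holds the smaller ticket 2
```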
The idea of looking at a system from the point of view of a reference process can be carried over into other settings as well. We consider systems with replicated processes which are arranged at the nodes of an underlying network graph. The processes communicate by passing tokens among themselves. If we are interested in checking two-process properties of such a system, we can show that it is enough to consider how the system looks from the point of view of pairs of processes. This result lets us decompose the verification problem of a system with a large network graph into verification of a collection of systems with small, constant sized network graphs. This network decomposition result lays the ground for reasoning about systems with network graphs and richer inter-process communication (such as complex leader election protocols).
1.1.2 Thesis Outline

The outline for the rest of the thesis is as follows. In the next chapter we present environment abstraction in general terms and derive its mathematical properties. We show that environment abstraction is sound for indexed temporal logic specifications in a very general framework, and discuss the relationship of our method to counter abstraction, canonical abstraction, and predicate abstraction. This chapter lays the foundation for our work on verification of parameterized systems with replicated processes. The same chapter also describes extensions to environment abstraction and considers some of the issues involved in applying this abstraction method to practical systems.
State-of-the-art architectures crucially rely on cache coherence protocols for increased performance. These protocols are extremely intricate and, usually, several processors run these protocols concurrently. Thus, ensuring the correctness of such protocols is a chal-lenging problem and formal verification techniques are indispensable. Since the number of processors executing the cache protocol can vary, cache coherence verification is a clas-sical example of the parameterized verification problem. In Chapter 3, we show how to apply environment abstraction for verifying cache coherence protocols. We first propose a simple programming language that allows us to model cache protocols at an algorithmic level. We then describe the precise abstract state space used in abstracting cache protocols.
Environment abstraction as presented in Chapter 2 talks only about the general structure of the abstract state space. The precise form of the abstract states depends on the class of systems under consideration. Chapter 3 also deals with the crucial issue of how exactly we compute the abstract model. We have applied this method to verify safety properties of 12 several cache coherence protocols, including several variants of GERMAN’s protocol and a modified version of the FLASH protocol. The language constructs used in describing cache coherence protocols are quite simple so that the essential principle behind the com-putation of the abstract model is easy to understand. It is for this reason that we consider cache coherence protocols as the first example.
In Chapter 4, we show how environment abstraction can be applied to mutual ex-clusion protocols, which exhibit complex inter-process communication. As with cache coherence protocols, we first describe a simple programming language that allows us to describe mutual exclusion protocols at an algorithmic level. The precise form of the ab-stract states is then described, followed by a section on how to compute the abstract model.
We demonstrate the power of our approach by verifying Lamport’s Bakery algorithm and Szymanski’s mutual exclusion protocol. Note that in Chapter 4, we verify mutual exclu-sion protocols under the atomicity assumption.
In Chapter 5, we show how to verify mutual exclusion protocols without the atomicity assumption. The atomicity assumption, which says that any component can know the state of all the other components instantaneously, significantly reduces the complexity of a protocol. To handle protocols in full generality, without the atomicity assumption, we need to keep track of history information. We introduce monitor processes for this purpose and show how we can apply environment abstraction in presence of these monitor processes.
In Chapter 6, we consider a different system model, namely systems built around network graphs. For example, in routing protocols, the underlying topology of the system plays a crucial role. Similarly, in many wireless applications, the system performance depends on how the different wireless entities are connected. Formal verification research has only now begun to address the problem of verifying these systems with complex network graphs. As a first step towards this larger problem, we consider systems consisting of a collection of identical processes arranged on the nodes of a network graph with very limited communication between the processes. We describe a new method to verify such networks of homogeneous processes that communicate by token passing. Given an arbitrary network graph and an indexed LTL \ X property, we show how to decompose the network graph into multiple constant size networks, thereby reducing one model checking call on a large network to several calls on small networks. We thus obtain cut-offs for arbitrary classes of networks, adding to previous work by Emerson and Namjoshi on the ring topology. Our results on LTL \ X are complemented by a negative result that precludes the existence of reductions for CTL \ X on general networks.
The last chapter concludes this thesis with a summary of contributions and possible extensions to the work presented here. We also discuss some of the outstanding challenges in parameterized verification.
Chapter 2
Environment Abstraction
2.1 Introduction
When a human engineer designs a hardware or software system, the correctness of the system, naturally, is among the main concerns of the designer. Although the reasoning of the designer is usually not available to the verification engineer in terms of assertions or proofs, the reasons for correctness are often reflected in the way a program is written.
Knowledge of these implicit design principles can be systematically exploited for the con-struction of abstract models. For example, it is natural for us to assume that control flow conditions yield important predicates for reasoning about software, and that polygons are good approximations of numeric data that are human generated. Thus, the presence of a human engineer renders the analysis of hardware and software very different from the analysis of systems in physics, chemistry, or biology.
To pinpoint this difference, consider an example frequently discussed in the history of science, namely the Ptolemaic system, in which the sun revolves around the planet earth.
The persistence of Ptolemy’s viewpoint over many centuries shows the intuitive reasoning which the human mind applies to complex systems: we tend to imagine systems with the human observer in the center. While a Ptolemaic viewpoint is known to be wrong (or, more precisely, infeasible) in physics, it naturally appears in the systems we construct.
Consequently, the Ptolemaic viewpoint yields a natural abstraction principle for computer systems.
In this chapter, we explore a Ptolemaic viewpoint of concurrent systems to devise an abstraction method for concurrent systems with replicated processes which we call environment abstraction. Our systems are parameterized, i.e., the number of processes is a parameter, and all processes execute the same program. We write P(K) to denote a system with K processes 1. We argue that during the construction of such a system, the programmer naturally imagines him/herself to be in the position of one reference process, around which the other processes – which constitute the environment – evolve. Thus, in many cases, an abstract model that describes the system from the viewpoint of a reference process contains sufficient information to reason about specifications of interest.
The goal of environment abstraction is to put this intuition into a formal framework. In environment abstraction, an abstract state is a description of a concrete system state from the point of view of a single reference process and its environment. The properties of the reference process are computed as if the process were chosen without loss of generality.
Thus, verification results about the reference process generalize to all processes in the system.
1 We will later also consider a finite number of non-replicated processes in addition.
From a practical perspective, environment abstraction shares many properties with predicate abstraction as used in SLAM , BLAST , and MAGIC : • Environment abstraction computes a finite-state abstract model on which a stan-dard model checker can verify a property. To verify an indexed temporal property ∀x.φ(x) on all parameterized models P(K), K ≥1, the model checker just needs to verify the quantifier-free property φ(x) on a single abstract model P A which interprets the variable x. The model PA is obtained by a variation of existential abstraction that quantifies over the parameter K and the index variable x.
• Instead of computing the precise abstract model, environment abstraction over-approximates the abstract model. To this end, each statement of the concurrent program is approximated separately using decision procedures. Thus, similar to SLAM, BLAST, MAGIC, the abstract model used in the verification is an over-approximation of PA.
The aim of this chapter is to describe environment abstraction from first principles.
We derive environment abstraction from a few simple logical principles, and show its soundness for a large class of indexed ACTL⋆properties. In addition, we put the method in perspective to other abstraction approaches such as Indexed Predicates, and TVLA’s Canonical Abstraction.
2.2 A Generic Framework for Environment Abstraction
We consider parameterized concurrent systems P(K), where the parameter K > 1 denotes the number of replicated processes. The processes are distinguished by unique indices in {1, . . . , K} which serve as process ids. Each process executes the same program which has access to its process id. We do not make any specific assumptions about the processes, in particular we do not require them to be finite state processes.
Consider a system P(K) with a set SK of states. Each state s ∈SK contains the whole state information for each of the K concurrent processes, i.e., s is a vector ⟨s1, . . . , sK⟩ Technically, P(K) is a Kripke structure (SK, IK, RK, LK) where IK is the set of initial states and RK is the transition relation. We will discuss the labeling LK for the states in SK below.
Remark 1. While we consider systems composed solely of replicated processes, sys-tems with a constant number of non-replicated processes, in addition to a set of replicated processes, can also be similarly handled. For such systems, each state is of the form ⟨s1, . . . , sK, t⟩where t is the combined state of all non-replicated components. With this minor change, the treatment presented below can be carried as is to this modified setting as well.
Process Properties.
We will describe properties of P(K) using formulas with one free index variable x which denotes the index of a process. We will call such formulas process properties, as they may or may not hold true for a process in a given state. For a process property φ(x), we write s |= φ(c) to express that in state s, process c has property φ. We assume that for each state s and process c, we have either s |= φ(c) or s |= ¬φ(c).
Example 2.2.1. The following statements are sample process properties: • “Process x has program counter position 5.” We will express this fact by the formula pc[x] = 5. We may use this process property in all systems where the processes have a variable pc.
• “There exists a process y ̸= x where pc[y] = 5.” This property is expressed by the quantified formula ∃y ̸= x.pc[y] = 5. Note that in this formula, only variable x is free. Intuitively, this property means that a process in the environment of x has program counter position 5. We shall therefore write 5 ∈env(x) to express this property.
• “Process x has program counter position 5, and there exist two other processes t1 and t2 in program counter position 1 such that the data variable d satisfies d[x] < d[t1] = d[t2].” This property, too, can be expressed easily with two quantifiers and one free variable x as shown below:
∃t1, t2. t1 ̸= t2 ∧ x ̸∈ {t1, t2} ∧ pc[x] = 5 ∧ pc[t1] = 1 ∧ pc[t2] = 1 ∧ d[x] < d[t1] ∧ d[t1] = d[t2]
Note that the labels discussed in the first two items are highly relevant in our applications and will be discussed below in detail.
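To make these three properties concrete, the following small Python sketch (an editorial illustration rather than part of the formal development; the fields pc and d and the explicit state vector are hypothetical) evaluates them on a four-process state.

```python
# Illustrative sketch (not from the thesis): a concrete state of P(K) as a list of
# per-process records, with hypothetical fields pc (control location) and d (data).
state = [{"pc": 5, "d": 1}, {"pc": 1, "d": 3}, {"pc": 1, "d": 3}, {"pc": 2, "d": 0}]

def prop_pc_is_5(s, x):
    # "Process x has program counter position 5", i.e. pc[x] = 5
    return s[x]["pc"] == 5

def prop_5_in_env(s, x):
    # "There exists a process y != x with pc[y] = 5", i.e. 5 in env(x)
    return any(s[y]["pc"] == 5 for y in range(len(s)) if y != x)

def prop_two_witnesses(s, x):
    # The third property: x at location 5 and two distinct processes t1, t2 (both != x)
    # at location 1 with d[x] < d[t1] = d[t2].
    K = len(s)
    return any(
        s[x]["pc"] == 5 and s[t1]["pc"] == 1 and s[t2]["pc"] == 1
        and s[x]["d"] < s[t1]["d"] == s[t2]["d"]
        for t1 in range(K) for t2 in range(K)
        if t1 != t2 and x not in (t1, t2)
    )

print(prop_pc_is_5(state, 0), prop_5_in_env(state, 0), prop_two_witnesses(state, 0))
# -> True False True
```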
Labels and Descriptions.
In environment abstraction, we distinguish two sets of process properties that we use for different purposes: (a) Labels. A label is a process property l(x) that we use in a specification. The set of all labels is denoted by L. For example, for l(x) = (pc[x] = 10), we may write ∀x. AG ¬(pc[x] = 10) to denote that no process reaches program counter 10. For a process c, a c-label is an instantiated formula l(c) where l(x) ∈L. We write L(c) to denote the set of c-labels.
In the Kripke structure P(K), a state s has a label l(c), if s | = l(c), i.e., LK(s) = {l(c) : s | = l(c), c ∈[1..K]}.
(b) Descriptions. A description is a process property ∆(x) which typically describes not only the process, but also its environment, as in the second and the third items of Example 2.2.1. The set of all descriptions D is our abstract state space.
Intuitively, an abstract state ∆(x) ∈D is an abstraction of a concrete state s if there exists a concrete process c which has property ∆, i.e., if s | = ∆(c). For example, the description pc[x] = 5 represents all states s which have a process c whose pc variable equals 5. In our applications, the descriptions will usually be relatively large and intricate formulas.
Remark 2. Note that our process properties contain a free index variable x. While the name of the free index variable is immaterial, we have chosen to call it x as it makes the presentation less cluttered. We also use x in other places, for example, in single index formulas ∀x.Φ(x). The usage should be clear from the context.
Soundness Requirements for Labels and Descriptions.
We will need two require-ments on the set D of descriptions and the set L of labels to make them useful as building blocks for the abstract model: 1. Coverage. For each system P(K),K ≥2, each state s in SK and each process c there is some description ∆(x) ∈D which describes the properties of c, i.e., s | = ∆(c).
The coverage property means that every concrete situation is reflected by some ab-stract state.
2. Congruence. For each description ∆(x) ∈D and each label l(x) ∈L it holds that either ∆(x) →l(x) or ∆(x) →¬l(x).
In other words, the descriptions in D contain enough information about a process to conclude whether a label holds true for this process or not.
The congruence property enables us to give natural labels to each state of the abstract system: An abstract state ∆(x) has the label l(x) if ∆(x) →l(x).
2.2.1 Description of the Abstract System PA.
Given two sets D and L of descriptions and labels that satisfy the coverage and congruence criteria, the abstract system PA is a Kripke structure ⟨D, IA, RA, LA⟩ where each ∆(x) ∈ D has a label l(x) ∈ L if ∆(x) → l(x), i.e., LA(∆(x)) = {l(x) : ∆(x) → l(x)}. Before we describe IA and RA, we can already state the following lemma about preservation of labels.
Lemma 2.2.2. Suppose that s | = ∆(c). Then the following are equivalent: (i) The concrete state s has label l(c).
(ii) The abstract state ∆(x) has label l(x).
Proof. Assume that (i) but not (ii). Then by the congruence property, we have ∆(x) → ¬l(x). Together with the assumption s | = ∆(c) of the lemma, we conclude that s | = ¬l(c), which contradicts (i). The converse implication is trivial.
Note that the proof of the lemma requires the congruence property.
This motivates the following abstraction function: Definition 2.2.3. Given a concrete state s and a process c, the abstraction of s with refer-ence process c is given by the set αc(s) = {∆(x) ∈D : s | = ∆(c)}.
Note the following remarks on this definition: • The coverage requirement guarantees that αc(s) is always non-empty.
• If the ∆(x) are mutually exclusive, then αc(s) always contains exactly one descrip-tion ∆(x).
• Two processes c and d of the same state s will in general give rise to different ab-stractions, i.e., αc(s) = αd(s) is in general not true.
Remark 3. In our application of environment abstraction to various distributed protocols, it is usually the case that the abstract descriptions ∆(x)’s are mutually exclusive. Thus, given a state s and reference process c, αc(s) will contain exactly one abstract description ∆(x). In such cases, we simply write αc(s) = ∆(x).
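The abstraction mapping of Definition 2.2.3 is easy to state operationally. The following Python sketch (an illustration added here, assuming descriptions are given as Boolean predicates over a state and a reference index) computes αc(s) and shows the two remarks above: with mutually exclusive descriptions exactly one matches, and different reference processes can give different abstractions.

```python
# Illustrative sketch (not from the thesis) of Definition 2.2.3: alpha_c(s) is the set of
# descriptions that state s satisfies with the free variable x instantiated by process c.
def alpha(s, c, descriptions):
    """Return alpha_c(s) = {Delta in D : s |= Delta(c)}."""
    return [D for D in descriptions if D(s, c)]

# Hypothetical, mutually exclusive descriptions: "x is at location 5" / "x is elsewhere".
descriptions = [
    lambda s, c: s[c]["pc"] == 5,
    lambda s, c: s[c]["pc"] != 5,
]
s = [{"pc": 5}, {"pc": 1}, {"pc": 1}]
print(len(alpha(s, 0, descriptions)))                          # exactly one (Remark 3)
print(alpha(s, 0, descriptions) == alpha(s, 1, descriptions))  # in general False
```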
Now we define the transition relation of the abstract system by a variation of existential abstraction: RA contains a transition between ∆1(x) and ∆2(x) if there exists a concrete system P(K), two states s1, s2 and a process r such that 1. ∆1(x) ∈αr(s1), 2. ∆2(x) ∈αr(s2), and 3. there is a transition from s1 to s2 in P(K), i.e., (s1, s2) ∈RK.
We note three important properties of this definition: • We existentially quantify over K, s1, s2, and r. This is different from standard existential abstraction where we only quantify over s1 and s2. For fixed K and r, our definition is equivalent to existential abstraction.
• Both abstractions ∆1 and ∆2 use the same process r. Thus, the point of view of the abstraction is not changed in the transition.
• The process that actually makes the transition can be any process in P(K), it does not have to be r.
Finally, the set IA of abstract initial states is the union of the abstractions of concrete states, i.e., ∆(x) ∈IA if there exists a system P(K) with state s ∈IK and process r such that ∆(x) ∈αr(s).
To summarize, PA is a Kripke structure (D, IA, RA, LA) such that the set of abstract descriptions D satisfies the coverage and congruence conditions with respect to the set of labels L and the transition relation RA is defined in an existential fashion.
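Operationally, the existential construction of IA and RA can be read as an enumeration over concrete instances. The sketch below is an illustration under simplifying assumptions: it samples only a few small values of K (the actual definition ranges over all K), and concrete_instances and alpha are assumed helpers producing the initial states and transitions of P(K) and the abstraction of a state.

```python
# Illustrative sketch (not from the thesis): the existential construction of IA and RA.
# concrete_instances(K) and alpha(s, r, descriptions) are assumed helpers returning
# (initial_states, transitions) of P(K) and the abstraction of state s w.r.t. process r.
def build_abstract_model(concrete_instances, alpha, descriptions, ks=(2, 3, 4)):
    IA, RA = set(), set()
    for K in ks:
        initial_states, transitions = concrete_instances(K)
        for s in initial_states:
            for r in range(K):                       # existential choice of reference r
                IA.update(alpha(s, r, descriptions))
        for (s1, s2) in transitions:
            for r in range(K):                       # same reference process on both sides
                for d1 in alpha(s1, r, descriptions):
                    for d2 in alpha(s2, r, descriptions):
                        RA.add((d1, d2))
    return IA, RA
```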
Remark 4. It will be convenient later on to represent the abstract descriptions as tuples.
For example, if the abstract descriptions were all of the form ±P1(x) ∧. . . ± PT(x), T > 1 where P1(x), . . . , PT(x) are some process properties and ±Pi(x) indicates that property Pi(x) can appear negated or unnegated, then we can represent an abstract description ∆(x) as a tuple ⟨p1, . . . , pT⟩ where pi = 1 ⇔∆(x) ⇒Pi(x). That is, the value of each bit pi reflects the polarity of the corresponding predicate Pi(x) in ∆(x).
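As a small illustration of this tuple encoding (not from the thesis; the two properties used are hypothetical), a description that is a cube over process properties can be represented by its polarity vector:

```python
# Illustrative sketch (not from the thesis) of Remark 4: a description that is a cube over
# process properties P1(x), ..., PT(x) is represented by its polarity tuple <p1, ..., pT>.
def encode(s, c, props):
    """Return the tuple with pi = 1 iff s |= Pi(c)."""
    return tuple(1 if P(s, c) else 0 for P in props)

props = [
    lambda s, c: s[c] == "crit",                                       # hypothetical P1(x)
    lambda s, c: any(v == "crit" for i, v in enumerate(s) if i != c),  # hypothetical P2(x)
]
print(encode(["crit", "idle", "idle"], 0, props))  # -> (1, 0)
```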
Single-Indexed Specifications and Soundness of Environment Abstraction.
We con-sider an indexed temporal specification language where specifications have the form ∀x.φ(x).
Here, φ(x) is an ACTL⋆formula whose atomic formulas are labels in L. We say that P(K) | = ∀x.φ(x) if for all c ∈{1 . . .K} we have P(K) | = φ(c).
Despite the single index, this specification language is powerful, because the labels in L can talk about other processes. For example, using the label 5 ∈env(x) from Exam-ple 2.2.1 above, we can express mutual exclusion by the formula ∀x.AG (pc[x] = 5) →¬(5 ∈env(x)) as well as many other properties. For a more thorough discussion of the expressive power of this language, see Section 2.4. In Section 2.5.1 we will also consider abstractions with multiple reference processes for specifications with multiple indices.
For environment abstractions with L and D that satisfy coverage and congruence, we have the following general soundness theorem.
Theorem 2.2.4 (Soundness of Environment Abstraction). Let P(K) be a parameter-ized system and PA be its abstraction as described above. Then for single indexed ACTL⋆ specifications ∀x.φ(x) the following holds: PA | = φ(x) implies ∀K.P(K) | = ∀x.φ(x).
2.3 Soundness
We will now give a proof of correctness for Theorem 2.2.4. Before we give the proof of the soundness theorem, we introduce some notation to simplify the later proofs.
2.3.1 Simulation Modulo Renaming Given a fixed process c, we write Pc(K) to denote the Kripke structure obtained from P(K) where LK is restricted to only those labels which refer to process c. Thus, Pc(K) is labeled only with c-labels.
Fact 1. Let c be a process in P(K) and φ(x) be a temporal formula over atomic labels from L. Then P(K) | = φ(c) if and only if Pc(K) | = φ(c).
This follows directly from the fact that the truth of φ(c) depends only on c-labels.
Our soundness proofs will require a simple variation of the classical abstraction the-orem . Recall that the classical abstraction theorem for ACTL∗says that for ACTL∗ specifications φ and two Kripke structures K1 and K2 it holds that K1 ⪰K2 and K1 | = φ together imply K2 | = φ. That is, if K1 simulates K2 then any ACTL∗property satisfied by K1 is also satisfied by K2.
Definition 2.3.1 (Simulation Modulo Renaming). Let K be a Kripke structure, and c and d be processes. Then K[c/d] denotes the Kripke structure obtained from K by replacing each label of the form l(c) by l(d). Simulation modulo renaming ⪯c/d is defined as follows: K1 ⪯c/d K2 iff K1[c/d] ⪯ K2.
Then ⪯c/d gives rise to a simple variation of the classical abstraction theorem: Fact 2 (Abstraction Theorem Modulo Renaming). Let φ(x) be a temporal formula over atomic labels from L, and let K1, K2 be Kripke structures which are labelled only with c1-labels and c2-labels respectively.
If K2 ⪯c2/c1 K1 and K1 | = φ(c1), then K2 | = φ(c2).
Proof. First note that K2 | = φ(c2) is equivalent to K2[c2/c1] | = φ(c1): if the labels in the Kripke structure and the atomic propositions in the specification are consistently renamed, then the satisfaction relation does not change.
Thus, given that K2 ⪯c2/c1 K1 and K1 | = φ(c1), it is enough to show that K2[c2/c1] | = φ(c1) . By the definition of ⪯c/d, K2 ⪯c2/c1 K1 iff K2[c2/c1] ⪯K1 and by the classi-cal abstraction theorem , K1 | = φ(c1) implies K2[c2/c1] | = φ(c1). This proves the abstraction theorem.
2.3.2 Proof of Soundness We will show that environment abstraction preserves indexed properties of the form ∀x.φ(x) where φ(x) is an ACTL⋆formula over atomic labels from L.
Step 1: Reduction to Simulation. Formally, we have to show that PA |= φ(x) implies ∀K.P(K) |= ∀x.φ(x).
By the semantics of our specification language, this is equivalent to saying that for all K > 1, PA | = φ(x) implies ∀c ∈[1..K].P(K) | = φ(c).
Thus, we need to show that for all K > 1 and all processes c ∈[1..K] PA | = φ(x) implies P(K) | = φ(c).
Recall that Pc(K) is the Kripke structure obtained from P(K) that contains only c-labels.
By Fact 1 we know that P(K) | = φ(c) iff Pc(K) | = φ(c). Thus, we need to show that for all K > 1 and for all c ∈[1..K] PA | = φ(x) implies Pc(K) | = φ(c).
Now, by the abstraction theorem modulo renaming (Fact 2), it suffices to show that Pc(K) ⪯c/x PA for all K and c ∈[1..K] where ⪯c/x denotes simulation modulo renaming as defined previously.
We will now prove these simulations.
Step 2: Proof of Simulation. We will now show how to establish the simulation relation Pc(K) ⪯c/x PA between Pc(K) and PA for all K > 1 and c ∈ [1..K]. To this end, for each K and c, we will construct an intermediate abstract system PA c,K such that Pc(K) ⪯c/x PA c,K (Simulation 1) and PA c,K ⪯ PA.
(Simulation 2) The required simulation then follows by transitivity of simulation. Intuitively, the inter-mediate model PA c,K is the abstraction of the K-process non-parameterized system P(K) where the reference process c is fixed. Thus, PA c,K is obtained from Pc(K) by “classi-cal” predicate abstraction. Note however that PA c,K is a mathematical construction to show soundness of the abstract model PA. In the implementation, we directly construct an approximation of PA.
Construction of PA c,K. The abstract model PA c,K = ⟨D, IA c,K, RA c,K, LA⟩ is defined analogously to PA for the special case where K and c are fixed. Thus, P A c,K is the abstract model of the concrete system Pc(K) with a fixed number K of processes and reference process c. More precisely, PA c,K is defined as follows: (a) The state space D is the same as in PA.
(b) The set of initial states IA c,K is the subset of the initial states IA of PA for the special case of K and c. Thus, IA c,K is given by those abstract states ∆(x) for which there exists an initial state s of Pc(K) such that ∆(x) ∈ αc(s).
(c) The transition relation RA c,K is the subset of the transition relation RA of PA for the special case of K and c. Thus, there is a transition from ∆1(x) to ∆2(x) in RA c,K if and only if there are two states s1, s2 in Pc(K) such that ∆1(x) ∈ αc(s1), ∆2(x) ∈ αc(s2), and (s1, s2) ∈ RK.
(d) The labeling function LA is the same as in PA.
Proof of Simulation 1. We need to show that Pc(K) ⪯c/x PA c,K, which by definition of ⪯c/x is equivalent to Pc(K)[c/x] ⪯PA c,K.
Consider the structure Pc(K)[c/x]. This is just the K-process system P(K) restricted to the labels for process c, but because of the renaming the labels have the form l(x) instead of l(c). Thus, the labels of Pc(K)[c/x] are taken from the set L. Note that the labels of the abstract system PA are also taken from the set L. The proof idea below is similar to the construction of a simulation relation for existential abstraction.
Consider the relation I = {⟨s, ∆(x)⟩: s | = ∆(c), s ∈SK, ∆(x) ∈D}.
We claim that I is a simulation relation between Pc(K)[c/x] and PA c,K: 1. Lemma 2.2.2 together with the renaming of c to x guarantees that for every tuple ⟨s, ∆(x)⟩ ∈ I, the states s and ∆(x) have the same labels.
2. Consider a tuple ⟨s, ∆(x)⟩∈I. Assume that s has a successor state s′, i.e., (s, s′) ∈ RK. We need to show that there exists an abstract state ∆′(x) such that (i) (∆(x), ∆′(x)) ∈RA c,K, and (ii) ⟨s′, ∆′(x)⟩∈I.
To find such a ∆′(x), consider the abstraction αc(s′) of s′, and choose some description Γ(x) ∈ αc(s′). By the coverage condition, αc(s′) is non-empty.
We will show by contradiction that each such Γ(x) fulfills the properties (i) and (ii) mentioned above.
Property (i) Assume that Γ(x) does not fulfill property (i), i.e., (∆(x), Γ(x)) ̸∈ RA c,K. Then for all states s1 and s2 it must hold that whenever ∆(x) ∈αc(s1) and Γ(x) ∈αc(s2) that there is no transition between s1 and s2. On the other hand, we assumed above that ∆(x) ∈αc(s), Γ(x) ∈αc(s′) and there is a transition from s to s′. Hence we have a contradiction.
Property (ii) Assume now that Γ(x) does not fulfill property (ii), i.e., ⟨s′, Γ(x)⟩ ̸∈ I. By the definition of I, this means that s′ ̸|= Γ(c), and thus, Γ(x) ̸∈ αc(s′).
This gives us the required contradiction.
Thus, ∆′(x) can be chosen from among the descriptions in αc(s′).
3. Finally, the coverage property guarantees that for every initial state s ∈IK there exists some ∆(x) ∈IA c,K s.t. ⟨s, ∆(x)⟩∈I.
Proof of Simulation 2. By construction, IA c,K ⊆IA and RA c,K ⊆RA. Therefore, PA is an over-approximation of P A c,K, and the simulation follows.
Remark 5. Note that the coverage and congruence requirements for D and L are used in crucial parts of Simulation 1 in the soundness proof. Congruence is used in the proof of Lemma 2.2.2 which gives us property 1 of Simulation 1. Property 2 of Simulation 1 requires coverage to make sure that αc(s′) is non-empty. Property 3 of Simulation 1 also requires coverage to ensure the existence of an abstract initial state.
Remark 6. In the formulation above, we have not assumed that processes in P(K) execute synchronously or asynchronously. That is, our definitions are not affected by how the system evolves. We only assume that there is a global transition relation for P(K). Thus, the results described above hold whether the processes in P(K) execute synchronously or asynchronously. This fact will allow us to later augment P(K) by adding synchronously executing monitor processes.
2.4 Trade-Off between Expressive Labels and Index Variables
In this section we argue why a well-chosen set of labels L often makes it possible to use a single index variable. The Ptolemaic system view explains why we seldom find more than two indices in practical specifications: when we specify a system, we tend to track properties the reference process has in relation to other processes out there, one at a time.
Thus, two-indexed specifications of the form ∀x, y. x ̸= y →φ(x, y) often suffice to express the specifications of interest. Properties involving three processes at a time are typically complicated, as we need to consider a triangle of processes and their relationships.
In our work on verifying mutual exclusion and cache coherence protocols, we used two kinds of labels (see also Example 2.2.1): • pc[x] = L and • L ∈env(x) which semantically stands for ∃y ̸= x.pc[y] = L.
Note that the label pc[x] = L refers only to process x whereas L ∈env(x) also refers to the environment of x using a hidden quantification. This hidden quantification in the environment label gives surprising power to single-indexed specifications.
To see this, consider the standard mutual exclusion property. The classical way to specify mutual exclusion is expressed in a formula such as ∀x, y.x ̸= y →AG (pc[x] = 5) →(pc[y] ̸= 5).
It is easy to see that using the label 5 ∈env(x), we can express this specification by the logically equivalent single-indexed formula ∀x.AG (pc[x] = 5) →¬(∃y ̸= x.pc[y] = 5).
which is in turn equivalent to ∀x.AG (pc[x] = 5) → ¬(5 ∈ env(x)). The difference between the three formulas is that in the first specification the index quantifiers are in prenex form, while in the second and third formula, the quantifier for y has been distributed inside the formula, and is hidden in the label 5 ∈ env(x). Again, the Ptolemaic viewpoint explains why such situations are likely to happen: in many specifications, we consider our process over time (i.e., using a temporal logic specification), but only at the individual time points do we evaluate its relationship to other processes. Thus, a time-local quantification suffices.
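The following short sketch (an added illustration; states are simplified to vectors of control locations, and location 5 plays the role of the critical section) checks, state by state, that the two-indexed mutual exclusion condition and its single-indexed reformulation via the env(x) label agree:

```python
# Illustrative sketch (not from the thesis): on any concrete state, the two-indexed mutual
# exclusion condition and its single-indexed reformulation via env(x) coincide.
def label_pc(s, x, L):        # pc[x] = L
    return s[x] == L

def label_env(s, x, L):       # L in env(x): some other process is at location L
    return any(s[y] == L for y in range(len(s)) if y != x)

def two_indexed(s):           # forall x != y: pc[x] = 5  ->  pc[y] != 5
    K = len(s)
    return all(not (label_pc(s, x, 5) and label_pc(s, y, 5))
               for x in range(K) for y in range(K) if x != y)

def single_indexed(s):        # forall x: pc[x] = 5  ->  not (5 in env(x))
    return all(not label_pc(s, x, 5) or not label_env(s, x, 5) for x in range(len(s)))

for s in ([5, 1, 2], [5, 5, 2], [1, 1, 1]):
    assert two_indexed(s) == single_indexed(s)
```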
The interplay between labels and index variables gives rise to interesting logical con-siderations that we will discuss briefly now.
Distributive Fragments of CTL and LTL.
It is natural to ask when a double-indexed specification can be translated into a single-indexed specification as in the example above.
Somewhat surprisingly, this question is related to previous work on temporal logic query languages [73; 74; 75]. A temporal logic query is a formula γ with one occurrence of a distinguished atomic subformula “?” (called a placeholder). Given γ and a formula ψ, we write γ[ψ] to denote the formula obtained by replacing ? with ψ. In [73; 74; 75], syntactic characterizations for CTL and LTL queries with the distributivity property γ[ψ1 ∧ψ2] ↔γ[ψ1] ∧γ[ψ2].
are described. A template grammar for the distributive fragment of LTL is given in the appendix of .
The prototypical example of a distributive query is AG?, and we have seen above that for AG properties, we can translate double indexed properties into single-indexed properties. As argued above, this translation actually amounts to distributing one universal quantifier inside the temporal formula.
Such a translation is possible for all specifications which are distributive with respect to one index variable: consider a double-indexed specification ∀x, y. x ̸= y → φ(x, y) where all occurrences of y in φ are located in a subformula θ(x, y) of φ. Then we can write φ as a query γ[θ]. Now suppose that γ is distributive. On each finite P(K), the universal quantification reduces to a conjunction, i.e.,
P(K) |= ∀x, y. x ̸= y → γ[θ(x, y)]   iff   P(K) |= ∀x. ⋀_{1 ≤ c ≤ K, c ̸= x} γ[θ(x, c)]
which by distributivity of γ is equivalent to
P(K) |= ∀x. γ[ ⋀_{1 ≤ c ≤ K, c ̸= x} θ(x, c) ]
and thus to
P(K) |= ∀x. γ[ ∀y. x ̸= y → θ(x, y) ].
For a suitable label l(x) := ∀y. x ̸= y → θ(x, y) this can be written as
P(K) |= ∀x. γ[l(x)].
For the important special case where θ(x, y) has the form pc[y] ̸= L, this is equivalent to
P(K) |= ∀x. γ[L ̸∈ env(x)].
While the characterization of distributive queries gives us a good understanding about the scope of single-indexed specifications, it is clear that not all two-indexed specifications can be rewritten with a single index. Consider, for example, the formula ∀x, y.x ̸= y →AF(pc[x] = 5 ∧pc[y] = 5).
Here it is evidently not possible to move the quantifier inside. This can also be derived from the characterization in . Consequently, this specification cannot be expressed with a single index.
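A toy illustration of the distinction (added here for concreteness, with AG and AF read over a single finite path rather than a full Kripke structure) is the following:

```python
# Illustrative sketch (not from the thesis): the query AG? distributes over conjunction,
# AF? does not.  Both are read over one finite path: a list of states, each a set of
# atomic propositions that hold there.
def AG(path, phi):  # phi holds in every state of the path
    return all(phi(s) for s in path)

def AF(path, phi):  # phi holds in some state of the path
    return any(phi(s) for s in path)

p1 = lambda s: "a" in s
p2 = lambda s: "b" in s
both = lambda s: p1(s) and p2(s)

path = [{"a"}, {"b"}]
assert AG(path, both) == (AG(path, p1) and AG(path, p2))   # AG distributes here
assert AF(path, both) != (AF(path, p1) and AF(path, p2))   # AF fails to distribute
```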
In Section 2.5.1 we will show how to extend environment abstraction to multiple reference processes. Of course, having more reference processes will, in general, make the abstract model larger, and, thus, harder to analyze. This motivates the following approach to deal with two-indexed specifications σ: 1. Using the grammar characterizations of distributive queries, determine whether σ can be written with a single index.
2. Otherwise, use an abstraction with two reference processes, as described in Sec-tion 2.5.1.
2.5 Extending Environment Abstraction In this section, we will describe a few easy extensions to environment abstraction.
2.5.1 Multiple Reference Processes In the preceding sections, we focused on a framework for single-indexed specifications of the form ∀x.φ(x). Extending this framework to two reference processes is simple – essentially, we need to replace the free variable x in the process properties by a pair x, y, and carry this modification through all definitions and proofs. The generalization to more indices is straightforward, and left to the reader.
For the set D of descriptions, we will now use descriptions of the form ∆(x, y) which capture the state of two reference processes x, y and the environment around them. Thus, we can track the mutual relationship of two processes in greater detail. Similarly, we can extend the set of labels. The set L of labels is partitioned into unary labels L1 of the form l(x) and binary labels L2 of the form l(x, y). Note that, in practice, the single-indexed labels will usually suffice. A state s of system P(K) is labeled with l(c) if and only if s |= l(c). State s is labeled with l(c, d) if and only if s |= l(c, d).
The coverage and congruence requirements are generalized analogously: 1. Coverage. For each system P(K), each state s in P(K) and any two processes c, d there is some description ∆(x, y) ∈D which describes the properties of c, d, i.e., s | = ∆(c, d).
2. Congruence. For each description ∆(x, y) ∈D and each label l(x, y) ∈L2 it holds that either ∆(x, y) →l(x, y) or ∆(x, y) →¬l(x, y). An analogous condition holds for labels in L1.
Thus, we obtain a natural definition of the abstraction mapping: Definition 2.5.1. Given a concrete state s and two processes c and d, the abstraction of s with reference processes c and d is given by the set αc,d(s) = {∆(x, y) ∈D : s | = ∆(c, d)}.
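As with the single-index case, the mapping can be read operationally; the sketch below (an illustration with hypothetical binary descriptions over control locations) computes αc,d(s):

```python
# Illustrative sketch (not from the thesis) of Definition 2.5.1: abstraction with two
# reference processes c and d.
def alpha2(s, c, d, descriptions):
    """Return alpha_{c,d}(s) = {Delta(x, y) in D : s |= Delta(c, d)}."""
    return [D for D in descriptions if D(s, c, d)]

descriptions = [
    lambda s, c, d: s[c] == 5 and s[d] == 5,          # both reference processes critical
    lambda s, c, d: not (s[c] == 5 and s[d] == 5),    # the complementary description
]
print(len(alpha2([5, 5, 1], 0, 1, descriptions)))     # -> 1 (the first description)
```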
The construction of the abstract model is analogous to the single index case. To indicate the number of reference processes in the abstract model, we write PA 2 for the abstract model with two reference processes. Analogously to the single-index case, we attach labels to each state of PA 2 such that the abstract state ∆(x, y) has label l(x, y) iff ∆(x, y) → l(x, y).
Theorem 2.5.2 (Soundness of Double-Index Environment Abstraction). Let P(K) be a parameterized system and PA 2 be its abstraction with two reference processes. Then for double indexed ACTL⋆ specifications ∀x ̸= y.φ(x, y) the following holds: PA 2 |= φ(x, y) implies ∀K.P(K) |= ∀x ̸= y.φ(x, y).
The environment abstraction principle can be easily extended to incorporate more than two reference processes. As argued above, it is quite unlikely that a practical verification problem will require the use of three reference processes.
2.5.2 Adding Monitor Processes
Oftentimes it is necessary to augment a given parameterized system P(K) by adding non-interfering monitor processes. Monitors are essentially synchronous processes (i.e., they execute at every step of P(K)) that maintain history information regarding the processes in P(K). Adding monitors gives more information about the evolution of the system. Thus, taking monitors into account during abstraction can give us better abstract models. A typical case where monitors are needed is for handling liveness properties.
As we will see later in Section 4.4, environment abstraction, as described in the earlier sections, is too coarse to handle liveness properties. This is because the abstraction can introduce spurious loops, which can lead to false negatives. These spurious abstract behaviors can be eliminated by augmenting the system P(K) with monitors and abstracting the augmented system. While the precise details of monitor processes are considered later in Chapter 4, we will consider here the theoretical basis for adding monitors.
Consider a parameterized system P(K) and assume that we augment it by adding a collection of identical monitor processes M(1), . . . , M(K). Each M(i) is exactly the same as the other monitor processes except for its id. Denote the augmented parameterized system by PM(K). The states of PM(K) are given by tuples of the form sM := ⟨L1, . . . , LK, M1, . . . , MK⟩ where Li denotes the local state of process P(i) and Mi denotes the local state of the monitor process M(i).
The results presented in Section 2.2 assume that there is only one collection of replicated processes.
To make the results of Section 2.2 applicable, we can compose each M(i) with the corresponding P(i) to create a hybrid process PM(i). The augmented system PM(K) := ⟨SM, IM, RM, LM⟩ is a parameterized system with the PM(i)’s as the constituting processes. The set of labels LM is usually the same as the set of labels L of P(K). To apply environment abstraction to PM(K) we just have to pick the appropriate set of abstract descriptions satisfying the congruence and coverage properties together with labels in L. Let DM be a collection of abstract descriptions ∆M(x) and αM be the abstraction mapping from SM to DM such that DM satisfies the coverage and congruence conditions with respect to the set of labels L.
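The composition of each monitor with its process can be pictured as simple pairing of local states; the following sketch (an added illustration, with made-up local states) turns a state ⟨L1, . . . , LK, M1, . . . , MK⟩ of PM(K) into a state of K hybrid processes:

```python
# Illustrative sketch (not from the thesis): folding each monitor M(i) into its process
# P(i) to obtain hybrid processes PM(i), so that the augmented system is again a single
# collection of replicated processes.  The local states used here are made up.
def compose(process_states, monitor_states):
    """Pair the local state Li of P(i) with the local state Mi of M(i)."""
    assert len(process_states) == len(monitor_states)
    return [(L, M) for L, M in zip(process_states, monitor_states)]

print(compose(["try", "crit"], [{"seen_crit": False}, {"seen_crit": True}]))
```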
Define the augmented abstract model PA M in the usual fashion.
Definition 2.5.3 (Augmented Abstract Model). The abstract model PA M of a parameterized system PM(K) is defined as the Kripke structure (DM, IA M, RA M, LA M) where • DM is the set of all augmented abstract descriptions, • IA M, the set of initial abstract states, is the set of augmented abstract states ŝM such that there exists a concrete initial state sM of a concrete system PM(K), K > 1, and a process p ∈ [1..K], such that αMp(sM) = ŝM.
• RA M is defined as follows: there is a transition from abstract state ŝM1 to abstract state ŝM2 if there exist (i) a concrete system PM(K), K > 1, with a process p, and (ii) a concrete transition from concrete state sM1 to sM2 in PM(K), such that αMp(sM1) = ŝM1 and αMp(sM2) = ŝM2.
• ∆M(x) is labeled with l(x) ∈L if and only if ∆M(x) ⇒l(x).
Corollary 1. Let PM(K) be the augmented parameterized system corresponding to the parameterized system P(K). Let PA M be the augmented abstract model as described above.
Then, for any single indexed ACTL∗specification ∀x.φ(x), where φ(x) is a formula over labels L, we have PA M | = φ(x) ⇒∀K > 1.PM(K) | = ∀x.φ(x) Proof. This follows simply from Theorem 2.2.4. Note that we are using the fact that Theorem 2.2.4 holds whether the parameterized system P(K) executes asynchronously or not.
Since we have assumed that the monitors are non-interfering, PM(K) |= ∀x.φ(x) implies P(K) |= ∀x.φ(x). Thus PA M |= φ(x) ⇒ ∀K > 1.PM(K) |= ∀x.φ(x) ⇒ ∀K > 1.P(K) |= ∀x.φ(x).
Remark 7. Note that the number of monitor processes is exactly the same as the number of processes. This does not reduce the generality of the results above for the following reasons: if the number of monitor processes is constant (i.e., independent of K) then they can be treated as one single non-replicated process. On the other hand, if the number of monitors were a function of K then we can compose a set of monitors and processes (instead of one monitor and one process) to create composite processes. For example, suppose we had only K/2 monitors in the system P(K) with K processes. Then we can compose two processes and one monitor to create a larger composite process P2 M(i), and the augmented parameterized system is composed of K/2 such composite processes.
Thus, our results will still be applicable.
2.6 Example of Environment Abstraction We have thus far described environment abstraction in its most general terms. We have not indicated what descriptions to choose or what labels to use beyond specifying their general forms. In the following, we discuss, using an example, some of these issues which let us apply this abstraction method to practical systems.
2.6.1 Abstract Descriptions
Consider abstract descriptions of the form ∆(x) consisting of a single reference process. A description ∆(x) can provide very detailed information on process x and its environment.
In our work on verifying mutual exclusion protocols (see Chapter 4), we found it useful to have descriptions ∆(x) of the following form:
∆(x) := pc[x] = L ∧ ∃y ̸= x.E1(x, y) ∧ . . . ∧ ∃y ̸= x.ET(x, y), T ≥ 1.
Informally, the condition pc[x] = L describes the control location of the reference process x. Each of the conditions ∃y ̸= x.Ei(x, y) tells that there exists a process y in the environment of x satisfying a certain predicate Ei(x, y) over the state variables of processes x, y.
Each Ei(x, y) itself is of the form
Ei(x, y) := ±R1(x, y) ∧ . . . ∧ ±RM(x, y) ∧ pc[y] = L, M ≥ 1,
where each Ri(x, y) is an atomic predicate relating the data variables of the two processes x, y.
The condition pc[y] = L says that process y is in control state L. That is, we take every possible cube over the atomic predicates R1(x, y), . . . , RM(x, y) (that is, every expression of the form ±R1(x, y) ∧ . . . ∧ ±RM(x, y)) and conjoin it with every possible predicate of the form pc[y] = L to obtain the full set of Ei(x, y) predicates. It is easy to see that every process y in the environment of a process x will satisfy one of the Ei(x, y) predicates. It is also easy to see that the set of descriptions as constructed above has the required coverage property: for all concrete systems P(K), each concrete state s of P(K) and process c ∈ [1..K], s |= ∆(c) for some description ∆(x).
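The construction of this set of descriptions is purely syntactic, as the following sketch shows (an added illustration; the number of Ri predicates and the set of control locations are hypothetical parameters). Following Remark 4, each description is encoded as the control location of x together with one polarity bit per Ei(x, y) condition, recording whether some environment process satisfies it:

```python
# Illustrative sketch (not from the thesis): enumerating the abstract descriptions Delta(x).
from itertools import product

def enumerate_descriptions(num_R, locations):
    # Each Ei(x, y) is a cube over R1..RM together with a control location for y.
    E_conditions = list(product(product((0, 1), repeat=num_R), locations))
    # A description: pc[x] = L plus one existential polarity bit per Ei condition.
    return [(L, bits)
            for L in locations
            for bits in product((0, 1), repeat=len(E_conditions))]

descs = enumerate_descriptions(num_R=1, locations=(1, 5))
print(len(descs))   # 2 locations * 2^(2*2) = 32 abstract descriptions
```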
The choice of the set of descriptions was dictated by the properties that we were interested in verifying, namely, two-index safety properties of the form ∀x, y. x ̸= y → (pc[x] = crit ⇒ ¬(pc[y] = crit)). As discussed earlier, this property can be equivalently written as ∀x. (pc[x] = crit ⇒ ¬(∃y ̸= x.pc[y] = crit)). Thus, the two-indexed property is essentially composed of two kinds of labels: • pc[x] = L, and • ∃y ̸= x.pc[y] = L.
Observe first that only x occurs free in both types of labels. Further, each description ∆(x) either implies pc[x] = L or its negation. Similarly, each ∆(x) either implies ∃y ̸= x.pc[y] = L or its negation, so we also have the required congruence property. Thus, the set of descriptions we chose has both the congruence and coverage properties required by our abstraction framework.
As an aside, if we let ∃k stand for the generalization of the usual existential quantifier ∃, meaning that there exist at least k different elements, then our descriptions can be made even stronger. Instead of the descriptions above we can use
∆(x) := pc[x] = L ∧ ∃ky ̸= x.E1(x, y) ∧ . . . ∧ ∃ky ̸= x.ET(x, y).
Instead of just telling us whether there is a process satisfying Ei(x, y), these descriptions also tell us whether there are at least k such processes or not. Note that this is quite close in spirit to counting abstraction, which also counts processes satisfying certain conditions (though there is no notion of a reference process in counting abstraction).
2.7 Related Work
Verification of parameterized systems is well known to be undecidable, see [2; 76].
Nonetheless, many interesting approaches to this problem have been developed over the years, including the use of symbolic automata-based techniques [1; 10; 12; 51], invari-ant based techniques [3; 64], predicate abstraction , and symmetry [24; 31; 38; 39; 40]. Some of the earliest work on verifying parameterized systems includes the works by Browne et al [14; 15], German and Sistla , and Emerson and Sistla . Pa-pers that handle systems similar to the parameterized systems considered in this thesis are [3; 6; 7; 41; 42; 52; 53; 64; 66]. The paper by Pnueli et al., which introduces the term counter abstraction, inspired our work.
Environment abstraction fits the Abstract Interpretation framework of Cousot and Cousot . In the Abstract Interpretation framework one studies the effect of a program in an abstract domain instead of the concrete domain that the program is supposed to handle. The abstract domain is designed to be sound so that a property that holds in the abstract domain will also hold in the concrete domain. While this provides a general methodology, it provides no guidance on what abstract domain to choose. The choice of the abstract domain to consider is in fact the toughest question facing any Abstract Interpretation based method.
In the context of verifying software and hardware systems, several different alternatives have been proposed to construct abstract domains. Any such method must address two conflicting issues:
Generality: The abstract domains must be as widely applicable as possible. It defeats the purpose of automated and efficient program analysis if the user has to figure out the abstract domain for each and every program separately. Thus, it is required that methods for constructing the abstract domains should not be too specific.
Usability: On the other hand, widely applicable but trivial abstract domains can be constructed quite easily. Such abstract domains are useless in proving interesting properties of a program under consideration. It is typically the case that the more widely a method (for constructing abstract domains) is applicable, the less powerful it is.
In this thesis, we are essentially proposing a new approach for constructing abstract domains. This approach is applicable to any system that has replicated components. For such systems, the abstract domain we consider has detailed information on one reference component, and the rest of the components are considered in less detail and in relation to the reference component. It is our claim that this is the way a human designer thinks (when designing systems with replicated components), and, hence, the abstract domains constructed according to this pattern will be powerful. It is to be noted that we have not specified all the details of the abstract domain as they necessarily depend on the specific class of programs under consideration. But following this general structure, we hope that filling in the details will be easy.
In the following sections we discuss some of the well known abstraction methods and how they relate to our work.
2.7.1 Predicate Abstraction
This method, proposed by Graf and Saidi [70; 72], has, over the years, become one of the most widely used abstraction mechanisms for handling systems with large or unbounded state spaces. The basic idea of this approach is to consider the effect of the program on a set of (carefully chosen) predicates. The abstract domain consists of a set of predicates over the program variables. Assume that the set of predicates is P := {P1, . . . , Pn}. The abstract domain consists of all possible valuations ⟨b1, . . . , bn⟩ of the predicates P1, . . . , Pn.
Denote the set of concrete states by S and the set of abstract states by Ŝ. We can then define the standard abstraction mapping α from S to abstract states Ŝ as follows. For any concrete state s ∈ S and ⟨b1, . . . , bn⟩ ∈ Ŝ,
α(s) := ⟨b1, . . . , bn⟩ such that each bi = 1 iff s |= Pi.
The corresponding concretization mapping γ from Ŝ to 2^S is then defined as
γ(⟨b1, . . . , bn⟩) := {s | α(s) = ⟨b1, . . . , bn⟩}.
Once the abstraction mapping is defined, the abstract model is described using the well-known existential definition: given two abstract states ŝ1, ŝ2 there is an abstract transition from ŝ1 to ŝ2 if there exist two concrete states s1, s2 such that • α(s1) = ŝ1, • α(s2) = ŝ2, and • there is a concrete transition from s1 to s2.
It can be shown that the abstract model so defined is a conservative abstraction of the concrete system.
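For concreteness, the following sketch (an added illustration over a hypothetical one-variable program; it is not the SLAM/BLAST/MAGIC implementation) computes the abstraction mapping and the existentially defined abstract transitions from an explicit list of concrete transitions:

```python
# Illustrative sketch (not from the thesis): standard predicate abstraction.  alpha maps a
# concrete state to the valuation vector of the chosen predicates; abstract transitions
# are defined existentially from the concrete ones.
def alpha(s, predicates):
    return tuple(1 if P(s) else 0 for P in predicates)

def abstract_transitions(concrete_transitions, predicates):
    return {(alpha(s1, predicates), alpha(s2, predicates))
            for (s1, s2) in concrete_transitions}

# Hypothetical one-variable program abstracted with the predicates x > 0 and x is even.
preds = [lambda s: s > 0, lambda s: s % 2 == 0]
concrete = [(0, 1), (1, 2), (2, 3)]
print(abstract_transitions(concrete, preds))
# three abstract transitions: (0,1)->(1,0), (1,0)->(1,1), (1,1)->(1,0)
```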
Predicate abstraction is a very general method. The main problem in applying pred-icate abstraction is in deciding what set of predicates to use. This is an active area of research and several heuristics are used to discover relevant predicates to use (for example the CEGAR loop ). In contrast, our method does provide a framework for constructing predicates.
There are some crucial differences between standard predicate abstraction and our method. Given a fixed set of predicates, each concrete state can map only to one ab-stract state in usual predicate abstraction. On the other hand, in our abstraction method, a concrete state can map to multiple abstract states depending on which process is chosen as the reference process.
Further, in standard predicate abstraction, the predicates typically involve the variables of the same process/program. In our approach, the predicates span multiple processes and relate the states of different components in the system. TVLA [71; 80] was the first work to identify the importance of such predicates and it has been successfully used to verify various multi-threaded systems and heap properties. We believe the use of predicates re-lating different processes/components within a system is a natural and powerful extension of standard predicate abstraction. Such predicates are required if one wants to verify multi process systems or reason about heap properties.
2.7.2 Indexed Predicates
The Indexed Predicates approach was proposed by Lahiri and Bryant [52; 53] to handle unbounded systems such as those with replicated processes or unbounded data structures.
The invariants of such systems are usually quantified over the parameters and the indices of the various components of the system. Typically, the scope of the quantifiers contains complex formulas (which are themselves composed of smaller predicates containing free index variables). If one were to use standard predicate abstraction, discovering such invari-ants would involve predicates which are almost as complex as the invariants themselves.
To get around this problem, the Indexed Predicates approach uses simple predicates which can contain free-index variables and tries to build complex quantified invariants from these indexed predicates. The invariants discovered using this method contain only universal quantifiers.
The Indexed Predicates method starts with a set of predicates P := {P1, . . . , Pn} which can contain free index variables from a set X. As with standard predicate abstraction, the abstract state space Ŝ is just the set of all possible valuations ⟨b1, . . . , bn⟩ of the atomic predicates in P. The abstraction mapping function though is quite different. A concrete state s maps to an abstract state ŝ := ⟨b1, . . . , bn⟩ if for some valuation of the index variables in X the value of each predicate Pi in P matches the corresponding bi in ŝ. More formally, let v(X) denote some valuation of the index variables in X. Then
α(s) := {ŝ ∈ Ŝ | ∃v(X). (s |=v(X) Pi ⇔ bi)}
where s |=v(X) Pi means state s satisfies predicate Pi with all the free index variables occurring in Pi fixed according to the valuation v(X).
Since there are multiple possible valuations for variables in X, a single concrete state can map to several different abstract states. Note that, in our method, a single concrete state can also map to several different abstract states depending on which process is chosen as the reference process (the reference process in our method can be modeled as a free index variable). Thus, the abstraction mappings used in our method and in Indexed Predicates are essentially the same. But unlike our method, the Indexed Predicates method defines a concretization function for a set of abstract states, not for a single abstract state. The concretization of a set of abstract states Ĉ is the set C of concrete states such that, for all valuations of the free index variables, every state s ∈ C maps to some state ŝ ∈ Ĉ. More formally, for a set Ĉ of abstract states,
γ(Ĉ) := {C ⊆ S | ∀s ∈ C. α(s) ⊆ Ĉ}.
The abstract reachability is carried out by defining a reachability function that operates on sets of abstract states instead of single abstract states. Denote the concrete transition relation by ρ. Then the abstract reachability function ρ̂ is defined as
ρ̂(Ŝ) := α(ρ(γ(Ŝ))).
Let R, R̂ be the set of concrete and abstract reachable states. Then it can be shown that α(R) ⊆ R̂.
Thus, an over-approximation of the concrete reachable states can be found by doing a reachability analysis on the abstract model.
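The following sketch (an added illustration; the explicit state enumeration and the single free index variable are simplifying assumptions, whereas the actual method works symbolically with CLU formulas) shows the indexed abstraction mapping and one application of the abstract reachability function:

```python
# Illustrative sketch (not from the thesis): the Indexed Predicates abstraction mapping
# and one abstract reachability step rho_hat(S) = alpha(rho(gamma(S))), with a single
# free index variable ranging over the K process ids.
def alpha_indexed(s, indexed_preds, K):
    # A concrete state maps to one abstract valuation per valuation of the index variable.
    return {tuple(1 if P(s, x) else 0 for P in indexed_preds) for x in range(K)}

def rho_hat(abstract_set, states, transitions, indexed_preds, K):
    # gamma: concrete states all of whose abstract valuations lie inside abstract_set.
    gamma = [s for s in states if alpha_indexed(s, indexed_preds, K) <= abstract_set]
    # rho followed by alpha: abstract the concrete successors of those states.
    images = [alpha_indexed(s2, indexed_preds, K)
              for (s1, s2) in transitions if s1 in gamma]
    return set().union(*images)
```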
A crucial difference between our method and the Indexed Predicates method is that we can define a concretization function that operates on individual abstract states instead of sets of abstract states. In our framework, the concretization function γ is defined as follows.
For an abstract state ŝ,
γ(ŝ) := {s ∈ S | ∃c. αc(s) = ŝ}
where c is the reference process.
Because we can talk of concrete states corresponding to each abstract state, we can also define an abstract transition relation, not just an abstract reachability function operating on sets of states (as in Indexed Predicates).2 As a consequence, the Indexed Predicates method is not suited for handling liveness properties, which require an abstract transition relation.
Indexed Predicates method can verify only safety properties. In contrast, our approach can handle both safety and liveness properties.
In Indexed Predicates, the computation of the abstract reachable states is done symbol-ically by reducing each step of the reachability analysis to finding solutions of a quantified CLU formula (CLU logic is a subset of first order logic with uninterpreted functions ).
Quantified CLU formulas are then solved by posing them as Boolean SAT problems .
Observe that this method of abstract reachability does not really exploit our knowledge of the concrete transition statements. On the flip side, the systems that Indexed Predicates can handle are limited in theory only by the availability of solvers for first order logic formulas.
In contrast, in our approach, we consider each statement of the protocol and compute an over-approximation of all the abstract transitions it can lead to. In doing this, we are able to exploit our knowledge of the transition statements whereas Indexed Predicates cannot. This means that, for each type of transition statement, we have to write by hand an over-approximation for it. We believe this is a novel feature of our approach: the user has to provide an over-approximation for each type of transition statement in the system.
2 As an aside, we believe the Indexed Predicates method can also be generalized to have an abstract transition relation.
Since there are a limited number of different types of transition statements (in our work on protocol verification we had around 6 types), this is a fairly easy task.
Another difference between the two approaches is that Indexed Predicates is a general framework much in the spirit of standard predicate abstraction and TVLA. It provides no guidance on what predicates to use. This problem is compounded by the fact that, unlike predicate abstraction, there is no possibility of applying the Counterexample Guided Ab-straction loop to extract useful predicates. This is because there is no abstract transition relation in Indexed Predicates and consequently, no notion of an abstract trace. Our ap-proach, in contrast, does provide a guideline for what predicates to use. Moreover, as we have an abstract transition relation, automatic predicate discovery guided by counterexam-ple traces is also possible.
2.7.3 Three Valued Logical Analysis (TVLA) The TVLA method proposed by Reps et al. [71; 80] is an abstract interpretation based approach for verifying safety properties of multi-threaded systems, and for doing shape analysis. This is a widely applicable method that uses the universe of first order logical structures as the abstract domain. To make verification of unbounded systems possible, they use the notion of summarization, which is similar to the idea of counting abstraction.
51 The essential idea is to represent the state of a system as a first order structure (called a configuration) consisting of objects and predicates over these objects. The objects in the first order structure can be used to represent threads and heap allocated data structures.
The predicates can be used to represent relationships between the objects. For example, the fact that a pointer p points to thread t can be represented by using a predicate pointsTo(p,t).
Papers on TVLA were the first to observe that predicates spanning or relating multiple components of a system are essential if we want to reason about multi-threaded systems and heap properties.
Once a set of relevant predicates has been picked, the mapping from concrete states to abstract states is straightforward. There is a one-to-one mapping from the threads and other components of the concrete system to the objects in the abstract domain. Further, the valuations of the different predicates are known from the concrete state being considered.
If the number of threads and other components in the concrete system are bounded then the number of objects necessary in the abstract domain is also finite.
To handle the case where the concrete system can have an unbounded number of threads and other components, TVLA uses the notion of summarization which is essentially a form of counting abstraction. Suppose components c1, . . . , cn all satisfy the same set of unary predicates.3 Then instead of mapping them to different objects o1, . . . , on in the abstract domain, they are mapped to one abstract object ô. Thus, an unbounded number of concrete components can be summarized using a single abstract object ô. Observe that summarization of o1, . . . , on into one abstract object ô introduces uncertainty in the properties satisfied by ô. For instance, suppose object o1 satisfies a certain binary predicate Bin(o1, t) for some fixed argument t, but the rest of the objects do not. It is not clear in this case whether ô should satisfy Bin(ô, t) or not. To deal with situations like these, TVLA uses three-valued logic (hence the name) instead of the standard two-valued logic.
3 TVLA uses binary predicates to specify relationships between different components and unary predicates to specify properties of a particular component.
While summarization of similar objects is a powerful feature that lets TVLA deal with unbounded systems, it is sometimes necessary to track one object,say o1, separately from other objects o2, . . . , on even though they may have the same properties. For this sake, special unary predicates can be used to select some particular object as a special object and thus track its execution in detail. Such unary predicates used to distinguish individual objects are called instrumentation predicates.
It might seem that instrumentation predicates can be used to simulate our notion of a reference process, but that is not the case. The only thing that distinguishes a reference process from other processes in the system is its id. Thus, if we use instrumentation predicates to simulate the notion of a reference process, the predicates will have to refer to the process ids. This means that once a reference process is chosen by instrumentation predicates it cannot change. But, in our abstraction, the identity of the process that serves as the reference process may change from transition to transition. Thus, the notion of a reference process cannot be simulated using instrumentation predicates.
To explore the state space of a given system, TVLA starts with the initial set of ab-stract configurations each of which corresponds to some concrete initial state. The actions 53 of the concrete system rewrite the abstract configurations into new configurations. TVLA’s model checker performs on-the-fly model checking by exploring new configurations until all the configurations are covered. Because of summarization, the set of abstract configura-tions is bounded, and the explicit exploration of the abstract domain will terminate. Thus, no abstract model is built up front on which model checking is performed. In contrast, in our method we build an abstract model up front.
The framework proposed by TVLA is extremely general; essentially any real world system can be handled by this framework. Consequently, no method for choosing the predicates can be specified, and the central problem in predicate abstraction, namely what predicates to use, is left unsolved. For the examples considered in , the authors manually pick the predicates. In contrast, our method specifies a framework for what type of predicates to pick. In our case studies, the relevant predicates were constructed just by a syntactic exploration of the protocol code.
2.7.4 Counter Abstraction

Counter abstraction is an intuitive method to use on parameterized systems, and it has been employed by various researchers in different contexts [5; 28; 34; 66]. Pnueli et al. , who coined the term counter abstraction, show how concurrent systems composed of symmetric and finite state processes can be handled automatically. The essential idea in counter abstraction is to have a counter Ci for each possible local state i of the processes. Counter Ci then counts the number of processes in state i in a given concrete system configuration. The counters are typically bounded by a small value so that the abstract system consisting of the counters is finite state. Environment abstraction generalizes counter abstraction since the abstract descriptions ∆(x) can serve as counters. But, instead of counting the processes simply according to their local states, we count processes according to their local states and according to their relationship to the reference process.
It is the latter feature that lets us handle systems in which each replicated process has infinite state space.
In a symmetric protocol, the identities of the processes cannot be used in the protocol code. For instance, a condition of the form ∀j < i. Φ(j) appearing in the code of process i breaks the symmetry, because the process with id 1 will exhibit different behavior from a process with id m > 1 (the condition is trivially true for process 1 and not so for processes with ids greater than 1). Most real life systems are not symmetric, that is, the code for each process can make use of the process id. Thus, the verification of Szymanski's protocol in requires manual introduction of new variables.
Our method does not require each process to be finite state nor do we require the processes to be symmetric.
In , the notion of "all-but-one" counter abstraction is described. The idea here is to apply counter abstraction to all processes except one. By tracking one special process in detail, they are able to reason about single index liveness properties. It is important to note the following:
• In a symmetric protocol, any process can be chosen as the special process; it makes no difference to the abstraction.
• Further, the other processes in the system are abstracted (or counted) according to their local states alone, not based on their relationship to the special process. This is the crucial difference between our method and counter abstraction.
There are also important differences in how we compute the abstract model and how the abstract model is computed in . In , the abstract model is computed precisely using symbolic techniques. In contrast, we over-approximate the abstract model by considering each transition statement of the protocol code.
Another approach that uses, among other things, counter abstraction is the method proposed by Henzinger et al. . Like the "all-but-one" abstraction of Pnueli et al. , Henzinger et al. also track one thread in detail (called the main thread). As with counter abstraction, however, the main thread does not serve as a reference process. The other threads in the system are abstracted independently of the main thread.
2.8 Conclusion

In this chapter we presented the mathematical principles underlying environment abstraction. This abstraction framework is designed specifically for systems with replicated components. Informally, this framework is built around the insight that when we humans reason about systems with replicated components, we focus on one particular component while considering the other components only abstractly.
In this chapter, we assumed that the replicated components were processes. In general, the replicated components can vary. For instance, a memory bank can be treated as a collection of identical memory cells. Our method can be extended to all these instances as well.
It is crucial to distinguish two different issues that have been covered in this chapter: (i) “what abstract state space to consider?” and (ii) “how to build the abstract model?” or rather, “how to use this abstract state space to accomplish verification?”. In answer to the first question we propose using an abstract state space of descriptions ∆(x). In answer to the second question, we propose constructing, up front, an over-approximate abstract model. It is not necessary for using environment abstraction that we build the abstract model up front. We can use an explicit state exploration as done by TVLA as well. However, we think that for protocols, which can usually be expressed using only a few types of basic constructs, our way of building the abstract model up front is the best possible choice.
In the next chapter, we instantiate environment abstraction in the context of cache coherence verification. We will cover all the issues raised in this chapter from descriptions to computing the abstract model.
Chapter 3

Environment Abstraction for Verification of Cache Coherence Protocols

3.1 Introduction

The performance advantages of multi-core shared memory architectures have created a strong industrial trend towards multi-core designs. Such state-of-the-art architectures crucially rely on caching mechanisms for increased performance. The increasing complexity of such systems is reflected in the intricate cache protocols they employ. As these cache coherence protocols are inherently parameterized, it is a challenging task to ensure their correctness by automatic verification methods. In this chapter, we show how to use environment abstraction to verify directory based cache coherence protocols – the most widely employed class of cache coherence protocols. We use this abstraction method to verify the standard safety property of several versions of GERMAN'S protocol and of a modified version of the FLASH protocol.
3.1.1 Cache Coherence Protocols

Caching mechanisms are ubiquitous in modern computer systems. Computer systems usually have several memory banks, each with different latency. To reduce the time needed to access data items and thus improve performance, caching mechanisms are used to store frequently accessed data items in the fastest available memory bank.
Modern processors typically come with several levels of caches. A cache is a small, fast memory bank that sits close to the processor. The farther a cache is from the processor, the higher its latency. A data item that is frequently used by the processor can be stored in one of its caches. When the data item is needed again, instead of going all the way to main memory, the data item can be supplied from the cache itself.
While the availability of such caches dramatically increases the performance of a multi-processor system, care must be taken to prevent processors from accessing data items in an unsafe manner. For instance, two processors P1 and P2 might both have a data item d in their local caches. After performing some computations, both processors may decide to write back their local values of d to the main memory. If this activity is not coordinated properly, the value of d as determined by one of the processors will be lost. In the presence of multiple data items, such loss can lead to computations that are not feasible in any legal execution of the processors. Thus, cache coherence protocols are used to coordinate the activities of the different processors in a multiple processor system and to provide a consistent view of the memory to all the processors.
There are broadly two types of cache coherence protocols, namely snoopy and directory based protocols. The first class of protocols is broadcast based with no central coordination. The second type, the directory based protocols, are based on point to point communication and have centralized coordination. In snoopy protocols, all the processors (more precisely, their cache controllers) monitor the activities on the common system bus. Since every processor knows what data items the other processors are using, cache coherence can be achieved quite simply. In snoopy protocols, there is no centralized decision making; the actions of the local caches, which have full knowledge of other caches, are enough to ensure cache coherence. Snoopy protocols are typically used in systems which have a small number of processors.
Directory based protocols, on the other hand, use centralized decision making to ensure cache coherence. For each data item, one of the processors is designated as the home or the directory process. Requests by the processors to access a data item are sent to the home process for that item. The home process maintains detailed information about which processors are using the item and can respond appropriately to each request. Directory protocols are more widely used as they scale better .
A crucial issue in the design of cache protocols is the speed with which a data item is delivered to the requesting process. Depending on how this issue is handled, directory protocols are of two types: lazy and eager protocols. In lazy protocols, the directory process does not grant exclusive access to a requesting processor until it has received acknowledgements from the other processors in the system that were sharing the data item and were sent invalidate messages. Eager protocols, on the other hand, do not wait for the acknowledgements. In our experiments, we considered an eager version of the FLASH protocol and a lazy version of the GERMAN protocol.
There is no consensus on which type of cache coherence protocol – snoopy or directory based – is better. While snoopy protocols tend to have lower latency, they require a totally-ordered interconnect with a broadcast mechanism (usually a bus) connecting all the processors. Directory protocols do away with such an interconnect in exchange for higher latency. In an informative article , Martin revisits this debate from a verification point of view.
There are multiple correctness issues to be considered while designing cache coherence protocols. The simplest correctness properties talk about the way a single data item is accessed (called coherence properties). For instance, all cache protocols require that a data item cannot be held in exclusive (or dirty) state while it is held in shared state by some other processor. It is also required that a requesting process will eventually get the data item. In our work, we have dealt with correctness properties involving only one data item. Cache properties involving multiple data items (called consistency properties) are usually complex and very hard to verify formally. For example, verifying whether all the executions that a cache protocol allows are legal under the chosen memory consistency model is a very hard problem. While there has been some effort to address this problem, it is far from being solved .
In this chapter, we will first formalize the system model for cache coherence protocols.
Our model will contain one non-replicated process (the central process) representing the home processes, and an unbounded number of replicated processes (the local processes) representing the caches. The transitions executed by the caches are very simple, whereas the central directory can perform quite complex actions. For instance, the directory can keep pointer variables, which point to the caches, and modify the local states of the caches.
We will describe a simple language for writing the transitions of local processes and the central process. The constructs used in this language ignore the low level implementation details and describe the protocol at an algorithmic level. In fact, these constructs correspond to the way system designers think about cache protocols.
We will then use the environment abstraction presented in the previous chapter to parametrically verify the safety property of cache coherence protocols.
Outline

In Section 3.3 we describe a modeling language that accounts for the specifics of cache coherence protocols, and in Section 3.4 we describe how to apply environment abstraction to verify cache coherence protocols. Section 3.5 describes a redundancy criterion for removing set variables which drastically reduces the size of the abstract models. Section 3.6 presents our approach to over-approximating the abstract model. The last two sections contain experimental results and conclusions.
In the rest of this chapter, we will, for the sake of simplicity, speak of "caches" and "directory" instead of "local processes" and "central process" respectively. We will consider only coherence properties involving a single data item.
3.2 Discussion of Related Work

Parameterized verification of cache coherence protocols has received considerable attention, see [21; 28; 29; 52; 58; 64; 67; 68].
The papers closest to our approach are [67; 68] and . Delzanno and Bultan describe a constraint based verification method for handling the safety and liveness properties of GERMAN'S protocol. Their approach avoids the problem of handling variables which store cache ids and sets of cache ids by exploiting synchronization labels for actions. But real protocols do not use such synchronization mechanisms, which are unsuited to model cache coherence protocols. For example, when using such synchronization labels, staggered reception of messages by different caches (during a broadcast transition) cannot be modeled.
Pong and DuBois [67; 68] developed an explicit state model checking method that uses a technique very similar to counter abstraction to exploit the symmetry and homogeneity of cache coherence protocols. They handle snoopy protocols as well as directory based protocols. Note that neither [67; 68] nor have the notion of a reference process. Consequently, in contrast to our approach, they cannot verify single index liveness properties.
Furthermore, their abstraction explicitly considers the set variables. In our abstraction, we are able to eliminate the set variables from the abstract model, which drastically reduces the size of the abstract model.
The compositional method of McMillan uses compositional reasoning to handle infinite state systems including directory based protocols. This technique, which requires user intervention at various stages, has been applied to verify safety and liveness properties of the FLASH protocol. The paper by Chou et al. presents a method along similar lines that was used to verify safety of FLASH and GERMAN’s protocol. The aggregated transactions method pioneered by Park and Dill is based on theorem proving, and has been used to verify directory based protocols such as the FLASH protocol. The essential idea behind this technique is to collect the various statements in the protocol code into a set of 7-8 high level transactions. The user has to provide proofs of correspondence between the high level transactions and the protocol code.
Pnueli et al. show how to verify safety of GERMAN’s cache coherence protocol.
They do not verify liveness properties nor have they handled FLASH protocol.
Bingham et al. describe a method for verifying infinite state systems that can be modelled as Well Structured Transition Systems or WSTS systems. WSTS systems are a well-studied class of infinite state systems for which the problem of reachability of error states is decidable (subject to some technicalities). They applied this method to GERMAN'S protocol and verified data coherence (that is, a read on a data item returns the last written value).
3.3 System Model for Cache Coherence Protocols

Our system model reflects the structure of real-life cache coherence protocols. A typical cache coherence system contains several caches, one of which is designated as the home cache. The home cache maintains a directory and regulates the access to the data items for which it is responsible. Following , we will model the home cache as consisting only of the directory and call it the directory or directory process. Since the number of caches in the system is not fixed, cache coherence protocols are classical instances of parameterized systems. Note, however, that the presence of the directory in the system breaks the symmetry between the processes. Since we are concerned with coherence properties of a cache protocol, it is enough to consider only one data item. Thus, we will implicitly assume that there is only one data item in the system.
In our formal model, we consider asynchronous systems consisting of K caches running the same program P and one directory running a different program C. For given programs P and C, the system consisting of K caches and one directory is denoted by P(K). Each cache has a distinct id in the range R = {1, . . . , K}. As all caches are identical, their sets of variables are also named identically. When necessary, we will write v(i) to refer to variable v of cache i.
The system P(K) is formally modelled as a Kripke structure (S_K, I_K, R_K, L_K). The set of states S_K is given by tuples of the form ⟨L1, . . . , LK, C⟩, where each Li is the local state of cache i and C is the state of the central process C. In the following sections, we will describe the state spaces of the caches and the central directory. Then, we will define the transition relation R_K in terms of the transitions the caches and the central process take.
3.3.1 State Variables

The caches are essentially finite state machines, and thus each cache has one finite range control variable pcL with range {1, . . . , T}. Since multiple finite range variables can always be encoded as one variable, there is no loss of generality. In our implementation and in examples later in the chapter, we will, in fact, tacitly use multiple finite range variables.
The directory has three different kinds of variables, distinguished by the way they are used:
• The control variable, pcC, has finite range {1, . . . , F}, F ≥ 1, and represents the control locations of the directory.
• The pointer variables, ptr1, . . . , ptrb, where b ≥ 1, are used to store the ids of caches. Thus, in a system P(K), the range of the pointer variables is R.
• The set variables, set1, . . . , setc, where c ≥ 1, are used to store sets of cache ids, and their range is the powerset 2^R.
Example. In GERMAN’S protocol, the variable currclient holds the id of the cache that the directory is currently communicating with. This variable is naturally modeled as a pointer variable. Similarly, GERMAN’S protocol has a list sharlist containing all caches that hold the data item in a shared state. This list is naturally modeled as a set variable.
A state of system P(K) is a tuple ⟨L1, . . . , LK, C⟩ where the Li are the control locations of the caches, and C is the state of the directory. The state of the directory, C, is a valuation of the tuple ⟨pcC, ptr1, . . . , ptrb, set1, . . . , setc⟩.
We shall see below that the ptr variables are used solely to access the state of the caches. That is, no arithmetic or comparison on ptr variables is allowed. Similarly, set variables are used either to access caches or in membership queries (i.e., whether a cache belongs to the set or not). We assume that all variables are used in a type-safe way, that is, they are assigned or compared only against values from their ranges.
The initial state of the directory and the caches is given by a fixed valuation of all variables.
3.3.2 Program Description for the Caches

We will describe the transitions of the caches and the central process using a few high level constructs. Caches have very simple control flow structures, as they can move only from one control location to another. We can describe the cache transitions using statements of the following form:

pcL = L^L_1 : goto pcL = L^L_2

The semantics of the transition is simple: a cache P(i) in control location L^L_1 can, at a nondeterministically chosen timepoint, change its state variable to L^L_2. The goto statement is deterministic in the sense that for each location L^L_1, there is at most one jump goal L^L_2.
Note that the state of a cache can also be changed by the directory, see Section 3.3.3.
3.3.3 Program Description for the Directory

The directory can execute more complex programs than the caches. In particular, it can execute a
• simple action to change its control variables, or
• update action to update its pointer or set variables, or
• remote action to change the state of a cache referenced by a pointer.
These basic actions reflect the operations used in a typical directory based cache coherence protocol. We will see that, of the above actions, the update action and the remote action depend on the state of the caches. However, only the remote action can change the state of a cache. Below we will define the actions in more detail.
A directory transition statement has the form

guard : do actions A1, A2, . . . , Ak

where A1, . . . , Ak are basic actions as described below, and guard is a condition of the form pcC = L ∧ Φ(ptr, set). Here, L is a directory control location and Φ(ptr, set) is a Boolean combination of expressions of the form pcL[ptri] = L^L, ptri ∈ setj, or setj = ∅.
The semantics of this statement is as follows:
1. If guard is true, then execute the actions A1, . . . , Ak.
2. The whole transition, including the evaluation of the guard, is executed atomically in one time step, with actions A1, . . . , Ak being executed in that order.
3. We will assume that the basic actions in a transition do not conflict with each other. In other words, no variable should be modified by more than one action. This implies that there is only one simple action per transition, that no ptr variable is updated by more than one action, that only one set variable is updated, and that remote actions are executed on different caches.
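For concreteness, the following sketch (hypothetical Python, assuming a dictionary-based encoding of the directory and cache states; it is not our input language or implementation) shows one way to represent such a guarded transition statement and to fire it atomically.

```python
# A guarded directory transition "guard : do actions A1, ..., Ak".
# The guard and the basic actions are callables over the directory state
# and the map from cache ids to cache states; fire() is atomic in the
# sense that the guard is evaluated and all actions run in one step.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Transition:
    guard: Callable[[dict, Dict[int, int]], bool]
    actions: List[Callable[[dict, Dict[int, int]], None]]

    def fire(self, directory: dict, caches: Dict[int, int]) -> bool:
        if self.guard(directory, caches):
            for action in self.actions:   # executed in the given order
                action(directory, caches)
            return True
        return False

# Hypothetical example: if the directory is at location "L1" and the cache
# pointed to by ptr1 is at cache location 3, move the directory to "L2".
t = Transition(
    guard=lambda d, c: d["pc"] == "L1" and c[d["ptr1"]] == 3,
    actions=[lambda d, c: d.update(pc="L2")],
)
print(t.fire({"pc": "L1", "ptr1": 0}, {0: 3, 1: 1}))   # True
```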
We will now describe the basic actions in more detail.
Simple Actions have the format goto pcC = L^C, where L^C is a directory control location. The semantics of this action is that the directory control variable pcC is set to L^C.
Update Actions come in several formats:
◦ assign ptri = ptrj and assign seti = setj. The next value of ptri (respectively seti) is set to the current value of ptrj (respectively setj).
◦ add ptri to setj and remove ptri from setj. Add or remove the cache pointed to by ptri from the set setj.
◦ pick ptri from S^L, where S^L is a list of (constant) cache control locations. The semantics of this action is that the variable ptri is nondeterministically made to point to one of the caches whose control location is in S^L. If there is no such cache, then ptri is unchanged.
Remote Actions have the form remote V : goto pcL = L^L, where L^L is a cache control location and V is a pointer variable. This action enforces the new control location L^L on the cache pointed to by V. In general, the remote action can also have the form remote V : map, where map is a switch statement of the form:

switch pcL {
  L^L_1 : goto pcL = L^L'_1
  L^L_2 : goto pcL = L^L'_2
  . . .
}

This action forces the cache pointed to by V to execute the switch statement. The remote action is analogously defined for set variables. A remote action for a set variable forces all the caches in the set variable to execute the switch statement simultaneously. While GERMAN'S protocol does not require remote actions on set variables, the FLASH protocol does.
The remote action is used to model the communication from the central process to the local caches. For example, in GERMAN'S protocol, the central directory process sends an invalidate message to all the caches present in the invlist set variable, one cache at a time.
The central process first picks a cache present in invlist by assigning a pointer variable temptr1 appropriately. Then the central process writes an invalid message to the incoming channel, chan2, of the cache pointed to by temptr1. Since we model communication channels as internal variables of the caches, the effect of the central process writing to chan2 can be accurately modelled as a remote action with the general switch statement.
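The following sketch (hypothetical Python; the function name, the dictionary layout, and the example control locations are illustrative assumptions) captures the intended semantics of a remote action with a switch statement, applied either to the single cache named by a pointer variable or simultaneously to all caches in a set variable.

```python
# remote V : map  --  apply the switch statement `switch` to the cache(s)
# denoted by V.  `caches` maps cache ids to control locations.
def remote(caches, targets, switch):
    if not isinstance(targets, (set, frozenset)):
        targets = {targets}            # a pointer variable names one cache
    for cid in targets:
        loc = caches[cid]
        if loc in switch:              # locations absent from the switch are unchanged
            caches[cid] = switch[loc]

# Hypothetical invalidate broadcast to the caches in a set variable:
caches = {1: "shared", 2: "shared", 3: "invalid"}
remote(caches, {1, 2}, {"shared": "inv_pending"})
print(caches)    # {1: 'inv_pending', 2: 'inv_pending', 3: 'invalid'}
```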
3.3.4 Describing Real-Life Protocols

GERMAN'S protocol and the FLASH protocol can be naturally expressed in our protocol description language. These protocols share a common basic functionality: when a cache requests shared access to a data item, the directory grants the request if the data item is not held in exclusive state by any other cache. Otherwise, the directory sends a message to the cache having exclusive access to the data item to relinquish control over the data item. Subsequently, the directory grants shared access to the cache that issued the request. When a cache requests exclusive access to the data item, the directory grants the request if no other cache has any form of access to the data item. Otherwise, the directory sends messages to all caches having access to the data item to invalidate their local copies. The directory can either wait to receive acknowledgements from the caches (lazy operating mode) or grant exclusive access to the cache which issued the request immediately (eager operating mode). In this thesis, we consider the FLASH protocol operating in eager mode and GERMAN'S protocol operating in lazy mode.
While the basic functionality of many cache coherence protocols essentially follows the above description, there are a lot of additional low level details that add to the complexity of a directory based protocol and need to be accounted for in our input language.
In a typical protocol, the caches communicate with the directory process using dedicated communication channels. The caches execute relatively independently of each other.
Thus, the simple goto statements for caches suffice to model the transitions of the caches.
The directory process usually maintains pointers to caches and also sets of caches. The pointer and set variables are used to receive and send messages to specific recipients.
Following other work in this area [29; 64], we assume that the communication channels between caches and the directory are of length 1. The communication channels are modeled using local variables of the caches. Since the directory can read and write to the local variables via the remote action, the local variables can simulate communication channels. For instance, in GERMAN'S protocol we have a central transition statement

currcmd = empty ∧ read = yes ∧ chan1[currclient] = reqshar :
  do actions goto read = no ∧ currcmd = reqshar,
             remote currclient : goto chan1 = empty

which shows how the directory communicates with a cache. Here, the pointer variable currclient points to a local process, and chan1[currclient] is the variable that serves as a communication channel from the cache to the directory. Note also that there is more than one control variable in the directory, namely read and currcmd.
The above transition says that if there is a reqshar message in channel chan1, the directory process reads it by updating the variable currcmd using the goto action. After reading it, the directory removes the message from chan1 using the remote action which sets chan1 to empty. Broadcast actions can also be described succinctly using remote actions.
Note that in our language, the protocol is described at a high level without getting into implementation details, reflecting the abstraction level at which designers think about protocols. This approach is consistent with the current trend towards synthesis of low level designs from reliable and easily verifiable high level designs.
The full descriptions of the FLASH protocol and GERMAN’S protocol are given in Section 3.9.
3.4 Environment Abstraction for Cache Coherence Protocols

In this section, we instantiate environment abstraction for verifying cache coherence protocols.
3.4.1 Specifications and Labels

Most properties of interest in parameterized systems refer to the control locations: for example, typical safety properties say that no two caches can hold the same data item in exclusive state at the same time. Usually we are interested in verifying such properties for each cache in the system, not for a specific cache. In this chapter, we will consider the two-indexed safety property

∀x, y. x ≠ y ∧ pc[x] = crit ⇒ ¬(pc[y] = crit)

This can be equivalently written as a single index property

∀x. (pc[x] = crit ⇒ ¬(crit ∈ env(x)))

To handle such specifications, the set of labels L we use will have labels of two types:
• pcL[x] = L, and
• L ∈ env(x).
3.4.2 Abstract Model

As mentioned previously, we will represent abstract descriptions as tuples, as this simplifies the presentation significantly. The abstract states will contain information about
• the internal state of the reference cache,
• the internal states that occur in other caches, and
• the internal state of the directory.
Formally, an abstract state is a tuple

ŝ = ⟨pcL, e1, . . . , eT ; pcC, ptr̂1, . . . , ptr̂b, set̂1, . . . , set̂c⟩

whose semantics we will explain in the following paragraphs.
First, and importantly, ŝ describes the system from the viewpoint of the reference cache: pcL is the control location of the reference cache, and each bit ei tells whether some other cache is in control location i. Moreover, ŝ contains information about the directory: pcC is the control location of the directory, and the ptr̂i and set̂i are abstractions of the pointers and sets of the directory.
Thus, the variables have the following ranges: pcL ∈ {1, . . . , T} is a cache control location, pcC ∈ {1, . . . , F} is a directory control location, and the ei are Boolean values representing the "environments". The bit ei has value 1 if there exists a cache y different from x that is in control location i, i.e., "the environment of x contains a cache in control location i". This is expressed by the quantified formula

Ei(x) := ∃y ≠ x. pcL[y] = i

which we call the environment predicate. Note that an environment predicate EL(x) and its corresponding bit eL in the abstract state tell us if the atomic property L ∈ env(x) holds true in a state.
Concerning the pointers, it is important to note that in the abstract model, a pointer cannot refer to a concrete cache, but only to an abstracted cache, i.e., an environment or the reference cache itself. Thus, we introduce the set {ref} ∪ {1, . . . , T} of abstract locations. The abstract locations are the possible values for the pointers in the abstract model. An abstract pointer value i ∈ {1, . . . , T} means that the pointer refers to a cache in control location i, and an abstract pointer value ref means that the pointer refers to the reference cache.
Analogously, the abstract set variables set̂i range over the powerset of the set {ref} ∪ {1, . . . , T} of abstract locations.
Definition 3.4.1. Let s be a concrete state in a concrete system P(K), and consider a cache p in P(K). Then ŝ is the abstraction of state s induced by cache p, in symbols

αp(s) = ⟨pcL, e1, . . . , eT ; pcC, ptr̂1, . . . , ptr̂b, set̂1, . . . , set̂c⟩

if the following conditions hold:
1. In state s, cache p is in control location pcL, i.e., s ⊨ pcL = pcL[p]. ("The reference cache is in control location pcL.")
2. Each ei is the truth value of the environment predicate Ei(x) for cache p, i.e., s ⊨ ∃y ≠ p. pcL[y] = i iff ei = 1. ("The environment contains a cache in control location i.")
3. The directory is in control location pcC, i.e., s ⊨ pcC = pcC. ("The directory is in control location pcC.")
4. Each pointer ptr̂i has value abs(ptri), where abs(ptri) is the abstract location pointed to by ptri, i.e., abs(ptri) := ref if s ⊨ ptri = p, and abs(ptri) := pcL[ptri] otherwise. ("The i-th pointer points to the abstract location ptr̂i.")
5. The sets set̂i generalize the pointers in the natural way, i.e., set̂i := {abs(q) : q ∈ seti}. ("The i-th set variable points to the set set̂i of abstract locations.")
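As an illustration of Definition 3.4.1, the following sketch (hypothetical Python, not part of our tool) computes the abstraction induced by a reference cache p from a concrete state given by the cache control locations, the directory control location, and the concrete pointer values; set variables are omitted for brevity.

```python
# alpha_p(s): abstract a concrete state from the viewpoint of cache p.
def alpha(p, cache_pc, dir_pc, pointers, T):
    ref_pc = cache_pc[p]
    # e_i = 1 iff some cache other than p is at control location i
    env = tuple(1 if any(q != p and pc == i for q, pc in cache_pc.items()) else 0
                for i in range(1, T + 1))
    # abstract pointer: 'ref' if it denotes p, else the control location
    # of the cache it points to
    abs_ptrs = tuple('ref' if tgt == p else cache_pc[tgt] for tgt in pointers)
    return (ref_pc, env, dir_pc, abs_ptrs)

# Three caches, four cache locations, reference cache 1, one pointer to cache 2:
print(alpha(1, {1: 2, 2: 3, 3: 3}, 'L0', [2], T=4))
# (2, (0, 0, 1, 0), 'L0', (3,))
```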
Before we can apply environment abstraction, we have to prove that the set of abstract states SA and the set of labels L satisfy the coverage and congruence conditions.
Proposition 1. For the abstraction mapping α given above, the set of abstract states SA satisfies the coverage condition.
Proof. Our abstract state space SA consists of all possible tuples of the form ⟨pcL, e1, . . . , eT ; pcC, ptr̂1, . . . , ptr̂b, set̂1, . . . , set̂c⟩. This fact, combined with the abstraction mapping defined above, ensures that no matter what concrete state s and what process c we consider, αc(s) ∈ SA. Thus, the coverage condition is trivially satisfied by our abstract state space.
Proposition 2. For every label l(x) ∈ L and every abstract state ŝ = ⟨pcL, e1, . . . , eT ; pcC, ptr̂1, . . . , ptr̂b, set̂1, . . . , set̂c⟩, the abstract description ∆(x) corresponding to ŝ either implies l(x) or its negation. That is, ∆(x) ⇒ l(x) or ∆(x) ⇒ ¬l(x).

Proof. Clearly, if the label l(x) is of the form pcL[x] = L, then the abstract description ∆(x) either implies l(x) or its negation. In case l(x) is of the form L ∈ env(x), then again ∆(x) implies l(x) or its negation. This follows easily from the fact that each ei indicates whether or not there is an environment process with control location i. If the bit eL corresponding to control location L is 1 in the tuple corresponding to ∆(x), then ∆(x) ⇒ l(x). Otherwise, ∆(x) ⇒ ¬l(x).
The abstract model PA := (SA, IA, RA, LA) is defined as in Section 2.2. The following corollary is then just an instantiation of Theorem 2.2.4.

Corollary 2 (Soundness of Abstraction). Let P(N) be a parameterized cache coherence system and PA the corresponding abstract model. Consider a control specification ∀x. φ(x). If PA ⊨ φ(x), then P(N) ⊨ ∀x. φ(x).
From Environment Bits to Counters

To keep the presentation simple, we have represented the variables ei as bits which indicate whether there exists a cache in control location i. To make the abstraction more precise, the ei can easily be generalized to counters with range, e.g., {0, 1, 2}, where 2 is called the counter threshold. Then ei = 0 means that there is no cache y ≠ x in control location i, ei = 1 means that there is exactly one cache y ≠ x in control location i, and ei = 2 means that at least two caches y ≠ x are in control location i. All results in this chapter can be readily generalized to counter thresholds, and our tool also supports arbitrary counter thresholds.
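A minimal sketch of this generalization (hypothetical Python; names are illustrative) computes the saturated counter value for one control location:

```python
# Count the caches other than the reference cache p at `location`,
# saturated at the counter threshold (2 means "two or more").
def env_counter(p, cache_pc, location, threshold=2):
    n = sum(1 for q, pc in cache_pc.items() if q != p and pc == location)
    return min(n, threshold)

print(env_counter(1, {1: 2, 2: 3, 3: 3, 4: 3}, location=3))   # 2 ("at least two")
print(env_counter(1, {1: 2, 2: 3, 3: 1}, location=3))         # 1 ("exactly one")
```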
3.5 Optimizations to Reduce the Abstract State Space

3.5.1 Eliminating Unreachable Environments

The abstract model as described so far has an environment bit eL for each possible local state L of the caches. It may be the case that not all possible local states are indeed reachable, and the corresponding abstract bits (or counters), which are redundant, can be eliminated. Our experiments in fact indicate that this kind of optimization achieves a significant reduction in the size of the abstract model.
Finding the reachable local states can be done as follows. First note that the local state of a cache can change in two ways: 1) the cache executes a local goto action, or 2) the central process changes the state of the cache using a remote action. Considering the former case, if a local state s1 is reachable and there is a local transition pcL = s1 : goto pcL = s2, then local state s2 is reachable as well. Thus, we add a transition (s1, s2) to a reachability relation R (the reachability relation R is initially empty).
For the latter case, consider a remote action remote V : map, where map is a switch statement of the form

switch pcL {
  L^L_1 : goto pcL = L^L'_1
  L^L_2 : goto pcL = L^L'_2
  . . .
}

Here V can be a pointer variable or a set variable. In case V is a pointer variable, we will say V can point to a local state s if a cache in state s can be pointed to by V. Similarly, if V is a set variable, we will say V can point to a local state s if a cache in state s can belong to the set V.
Now, if V can point to local state s1 = L^L_1 and s1 is a reachable local state, then the local state s2 = L^L'_1 is also reachable. Thus, we add (s1, s2) to R as well. By syntactically examining the protocol code, we can determine an over-approximation of all the local states that V can point to, as described below.
First consider the case where V is a pointer variable. Suppose V is the pointer variable ptri and the central process assigns V a value using an action of the form pick ptri from S^L. The pointer V = ptri can point to any location in S^L. Taking the union of the S^L's from all actions that modify V gives us an over-approximation of the set of all cache locations that V can point to. Call this set of locations S. For every s ∈ S, we add (s, s′), where s′ is the location that s is mapped to by map, to the reachability relation R.
In case V is a set variable, the over-approximation of the set of locations V can point to is computed as in Remark 8.
Once we have R, an over-approximation of the set of reachable local states is given by R*(init), where init is the initial state of the caches. It is enough to have counters corresponding to only these reachable locations.
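The following sketch (hypothetical Python; the edge-list representation is an illustrative assumption) computes R*(init) from the pairs contributed by local goto actions and by remote actions, as described above.

```python
# Over-approximate the reachable cache control locations: build the edge
# relation R from local gotos and remote actions, then close it from init.
def reachable_locations(init, local_gotos, remote_edges):
    edges = {}
    for s1, s2 in list(local_gotos) + list(remote_edges):
        edges.setdefault(s1, set()).add(s2)
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for t in edges.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Hypothetical example: 'idle' -> 'req' by a local goto, 'req' -> 'shared'
# by a remote action; an 'excl' location that never appears can be dropped.
print(reachable_locations('idle', [('idle', 'req')], [('req', 'shared')]))
# {'idle', 'req', 'shared'}
```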
3.5.2 Redundancy of the Abstract Set Variables

In this section we will describe how the set variables can be eliminated from many real protocols, including GERMAN'S protocol and the FLASH protocol, by a straightforward program analysis. In the following sections, we can therefore assume that no set variables are present. The evident motivation to eliminate the set variables is state explosion. Since each concrete set variable gives rise to an abstract set variable whose domain is the powerset of {1, . . . , T, ref}, the abstract model may become prohibitively large.
Our method is based on the observation that in many real-life protocols, the following pattern occurs: whenever a cache is added to a set by an add action, the same transition also contains a remote action which determines the control location of the cache (that is, when an add ptri to setj action occurs, a remote ptri ... action occurs as well). In practice, this means that whenever a cache is added to a list, it also receives a message. Similarly, each remove action is also accompanied by a remote action. Set variables following this pattern are in fact often redundant, that is, conditions involving sets can be replaced by equivalent conditions on the local states of the caches. We will now describe how to determine if a set is redundant.
Let us fix a set variable setj. Then we can partition the remote actions in the program D of the directory into three sets:
• T^in_j is the set of remote actions which occur together with an action of the form add ptri to setj.
• T^out_j is the set of remote actions which occur together with an action of the form remove ptri from setj.
• The remaining remote actions in the program are collected in the set T^rest_j.
Using these three sets T^in_j, T^out_j, and T^rest_j, we will compute three sets of cache states R^in_j, R^out_j, and R^rest_j. Intuitively, R^in_j will be the set of all states that a cache can have while it is a member of setj. Similarly, R^out_j contains all states that occur in caches that are not members of setj.
Given a set of cache states S, the set r(S) is the set of all states reachable from states in S by local transitions (i.e., goto's in the program of the cache) and by remote actions in T^rest_j. Note that for a given set S, r(S) can be obtained by a simple syntactic computation on the program. With this notation, we can easily describe R^in_j, R^out_j, and R^rest_j.
• R^rest_j is the set of cache states reachable from the initial cache states, i.e., R^rest_j = r(I_init).
• R^in_j is computed as follows: we collect all jump goals of the remote actions in T^in_j into a set I_in. (Jump goals of a remote action are simply the control locations appearing after the goto's in the remote action.) Then R^in_j is the set of cache states reachable from I_in, i.e., r(I_in).
• R^out_j is computed analogously to R^in_j, with I_in replaced by I_out, the set of jump goals of the remote actions in T^out_j.
If the sets R^rest_j and R^out_j do not share any common elements with R^in_j, then the variable setj is redundant in the sense of the following theorem.

Theorem 3.5.1. Assume that R^rest_j ∩ R^in_j = ∅ and R^out_j ∩ R^in_j = ∅, and consider a global state s of a concrete system P(K) with a process p. Then s ⊨ p ∈ setj iff s ⊨ pcL[p] ∈ R^in_j, i.e., process p is contained in setj iff its control location is in R^in_j.
Proof. Consider first the sets R^rest_j and R^in_j. Since R^rest_j ∩ R^in_j = ∅ (that is, these two sets are mutually exclusive), a process can have a state from R^in_j only if some central transition (of the directory) adds it to the variable setj. Recall that we assumed that a process is put on a list simultaneously with being sent a message. Further, since R^in_j and R^out_j are mutually exclusive, i.e., R^out_j ∩ R^in_j = ∅, a process with a state in R^in_j must belong to the set variable setj. Thus, a process belongs to setj if and only if its state is in R^in_j.
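The redundancy check of this section can be summarized by the following sketch (hypothetical Python; representing r(·) as the closure of an explicit edge list, and all names, are illustrative assumptions). Here rest_edges contains the local goto edges together with the edges induced by the remote actions in T^rest_j.

```python
def closure(start, edges):
    # r(S): locations reachable from `start` via the given edges
    seen, frontier = set(start), list(start)
    while frontier:
        s = frontier.pop()
        for src, dst in edges:
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def set_is_redundant(init, jump_goals_in, jump_goals_out, rest_edges):
    r_rest = closure(init, rest_edges)            # R^rest_j = r(I_init)
    r_in = closure(jump_goals_in, rest_edges)     # R^in_j   = r(I_in)
    r_out = closure(jump_goals_out, rest_edges)   # R^out_j  = r(I_out)
    return r_rest.isdisjoint(r_in) and r_out.isdisjoint(r_in)

# Hypothetical sharlist-like example: caches added to the set are driven
# to 'shared'; caches removed from it are driven back to 'invalid'.
print(set_is_redundant({'invalid'}, {'shared'}, {'invalid'},
                       rest_edges=[('invalid', 'req'), ('req', 'invalid')]))
# True
```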
Remark 8. Note that, for the optimization presented in the previous section, an over-approximation of the local states that setj can point to is given by R^in_j.
In the following sections we will assume that all the set variables appearing in a protocol are redundant according to the criterion presented in this section.
3.6 Computing the Abstract Model

In this section we describe how to extract an over-approximation of the abstract model PA from the program text. The main challenge arises from the fact that there are infinitely many concrete systems to consider. To solve this problem, we consider each transition statement of the program separately and over-approximate the set of abstract transitions it can lead to. This over-approximation can be expressed by an invariant on the current state and next state variables. The disjunction of all these invariants is the abstract transition relation. To keep the presentation simple, we will assume that set variables have been removed using the redundancy criterion presented previously.
The abstract transition relation RA is computed as a series of transition invariants between the current abstract state ŝ and the next abstract state ŝ′. We consider each transition statement t appearing in the protocol code and find out what abstract transitions it can lead to. The set of abstract transitions corresponding to a concrete transition statement is described by a transition invariant I(t). The abstract transition relation RA is then given by the disjunction of I(t) over all transition statements t. We first consider the case where t is a local transition statement of a cache and later consider the more complicated case where t is a central transition statement.
3.6.1 Cache Transitions

Recall that caches can only make simple transitions t of the form pcL = L^L_1 : goto pcL = L^L_2. This transition can be made either (i) by the reference cache or (ii) by one of the environment caches. We will now give conditions for when to include an abstract transition from ŝ = ⟨pcL, e1, . . . , eT, pcC, ptr̂1, . . . , ptr̂b⟩ to ŝ′ = ⟨pc′L, e′1, . . . , e′T, pc′C, ptr̂′1, . . . , ptr̂′b⟩ corresponding to the transition statement t.
Case (i): Transition by the reference cache. The local transition t is executed by the reference cache. In this case, we require that

pcL = L^L_1 ∧ pc′L = L^L_2     (3.1)

and all other variables are the same in ŝ and ŝ′. Note that no abstract pointers of the directory need to be changed, because the abstract pointers have the special value ref for the reference cache.
Case (ii): Transition by a cache in the environment. The local transition t is executed by an environment cache. In this case, we have the obvious condition that there is a cache in state L^L_1 before the transition, and also a cache in L^L_2 after the transition:

e_{L^L_1} = 1 ∧ e′_{L^L_2} = 1.     (3.2)

Moreover, we have to make sure that the pointers of the directory are changed in accordance with the transition. Let "if φ then α else β" denote the formula (φ ∧ α) ∨ (¬φ ∧ β).
Then, looping over all pointer variables ptr1 to ptrb, we include the condition below, which we denote by Λ(L^L_1, L^L_2):

⋀_{1≤i≤b}  if ptr̂i ≠ L^L_1 then ptr̂i = ptr̂′i
           else { if e′_{L^L_1} = 0 then ptr̂′i = L^L_2 else ptr̂′i ∈ {L^L_1, L^L_2} }.

Intuitively, Λ(L^L_1, L^L_2) expresses the following: if the pointer does not point at L^L_1, then it remains unchanged. Otherwise, one of two things can happen after the transition. First, if there is no cache left in location L^L_1, i.e., e′_{L^L_1} = 0, then the cache referred to by the pointer must have moved, and thus the pointer has to be updated to point to L^L_2. Second, if a cache is left in location L^L_1, then it is not clear which cache moved, and we over-approximate.
Again, all other variables are the same in ŝ and ŝ′.
The abstract invariant I(t) corresponding to the transition statement t is given by the disjunction of condition (3.1) and condition (3.2) conjoined with Λ(L^L_1, L^L_2).
Lemma 3.6.1. Let s, s′ be two states of a concrete system P(K). Let there be a transition from s to s′ with process c executing the local transition statement t. Then the abstract states αc(s) and αc(s′) satisfy the invariant described by I(t).
Proof. The proof of this lemma follows simply from the way we constructed I(t).
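As an illustration of conditions (3.1) and (3.2), the following sketch (hypothetical Python; the directory pointers, and hence the condition Λ, are omitted) enumerates the abstract successors that a cache goto statement L1 → L2 contributes for an abstract state consisting of the reference location and the environment bits.

```python
# Abstract successors induced by the cache transition "pcL = L1 : goto pcL = L2".
def cache_goto_successors(abs_state, L1, L2):
    ref_pc, env = abs_state          # env is a tuple of environment bits
    succs = set()
    # Case (i): the reference cache itself moves; the environment is unchanged.
    if ref_pc == L1:
        succs.add((L2, env))
    # Case (ii): an environment cache moves; e_{L1} must be 1 before and
    # e_{L2} must be 1 after, while e_{L1} may stay 1 or drop to 0.
    if env[L1 - 1] == 1:
        for e_L1_after in (0, 1):
            new_env = list(env)
            new_env[L1 - 1] = e_L1_after
            new_env[L2 - 1] = 1
            succs.add((ref_pc, tuple(new_env)))
    return succs

# Reference cache at 1, one environment cache at 1, transition 1 -> 2:
print(sorted(cache_goto_successors((1, (1, 0)), L1=1, L2=2)))
```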
3.6.2 Directory Transitions

Consider now the case where the transition statement t is a directory transition. Recall that directory transitions have the form pcC = L ∧ Φ(ptr, set) : do actions A1, . . . , Ak. Each directory transition t will be translated into a condition

pcC = L ∧ Φ̂ ∧ I_{A1} ∧ . . . ∧ I_{Ak} ∧ R

where the I_{Ai} are the abstract conditions corresponding to the actions Ai, and R constrains all the abstract variables not appearing elsewhere to be the same in ŝ and ŝ′. We will first show how to translate each basic action Ai into a condition I_{Ai}.
• For the simple action goto pcC = L^C we obtain the natural condition pc′C = L^C.
• For the update action assign ptr1 = ptr2 we obtain the condition ptr̂′1 = ptr̂2.
• For the update action pick ptr from S^L we obtain the condition

(pcL ∈ S^L ∧ ptr̂′ = ref) ∨ ⋁_{j ∈ S^L} (ej = 1 ∧ ptr̂′ = j) ∨ (pcL ∉ S^L ∧ ⋀_{j ∈ S^L} ej = 0 ∧ ptr̂′ = ptr̂).

Intuitively, if the reference process has a control location from the set S^L then, in the new state, ptr can point to the reference process. Thus, we have the disjunct pcL ∈ S^L ∧ ptr̂′ = ref. Alternatively, some environment process might have a control location from the set S^L and the pointer variable can point to it in the next state. Thus, we have the disjunct ej = 1 ∧ ptr̂′ = j for each j ∈ S^L. Lastly, it is possible that none of the caches have control locations from the set S^L. In this case, the value of the pointer variable does not change. Hence we have the disjunct pcL ∉ S^L ∧ ⋀_{j ∈ S^L} ej = 0 ∧ ptr̂′ = ptr̂.
• For the remote action remote ptr : goto pcL = L^L we obtain the condition

(ptr̂ = ref ∧ pc′L = L^L)     (3.3)
∨ ⋁_{1≤L≤T} (ptr̂ = L ∧ ptr̂′ = L^L ∧ e′_{L^L} = 1 ∧ Λ_ptr(L, L^L))     (3.4)

where Λ_ptr(L, L^L) is defined as

⋀_{1≤i≤b, ptri ≠ ptr}  if ptr̂i ≠ L then ptr̂i = ptr̂′i
                        else { if e′_L = 0 then ptr̂′i = L^L else ptr̂′i ∈ {L, L^L} }.

Note that Λ_ptr(L, L^L) is similar to Λ(L, L^L) defined in Section 3.6.1 except that the pointer ptr is left unchanged.
The explanation for this abstract transition is quite simple. If the pointer ptr, which is used in the remote action, points to the reference process, then the control location of the reference process is changed to L^L. Thus, we have the disjunct (ptr̂ = ref ∧ pc′L = L^L) shown in Equation 3.3.
To understand the second disjunct, shown in Equation 3.4, consider the case where ptr points to an environment process. Suppose the environment process is in environment eL, that is, ptr̂ = L. Then the following hold:
– In the next state, the pointer variable points to a process in environment e_{L^L}, because the new state of the cache pointed to by ptr is L^L. Thus, we have the condition ptr̂′ = L^L.
– In the next state, the environment e_{L^L} is non-empty, that is, e′_{L^L} = 1. The environment eL could be 0 or 1 in the next state.
– Since a process moves from environment eL to e_{L^L}, pointer variables other than ptr must be updated according to the condition Λ_ptr(L, L^L).
Putting all the above together, we have the condition (ptr̂ = L ∧ ptr̂′ = L^L ∧ e′_{L^L} = 1 ∧ Λ_ptr(L, L^L)).
The case where the remote action is of the more general form with map involving set variables is similar to the case described above.
Remark 9. Since the set variables are redundant, add and remove actions are irrelevant for the construction of the abstract model.
Assuming that the set variables are redundant in the sense of Section 3.5, the abstraction Φ̂ of the condition Φ(ptr, set) is obtained by abstracting each atomic subformula:
• pcL[ptri] = L^L is abstracted into (ptr̂i = ref ∧ pcL = L^L) ∨ ptr̂i = L^L.
• ptri ∈ setj is abstracted into (ptr̂i = ref ∧ pcL ∈ R^in_j) ∨ (ptr̂i ≠ ref ∧ ptr̂i ∈ R^in_j).
• setj = ∅ is abstracted into pcL ∉ R^in_j ∧ ⋀_{s ∈ R^in_j} es = 0. In other words, no cache should be in a state from R^in_j; hence pcL ∉ R^in_j, and all counters corresponding to states in R^in_j must be 0.
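The following sketch (hypothetical Python; the function names are illustrative) spells out these three abstraction rules for atomic guard conditions, assuming the set variable setj is redundant so that membership in it is characterized by R^in_j.

```python
# pcL[ptr_i] = L  ~~>  (ptr̂_i = ref and the reference cache is at L) or ptr̂_i = L
def abs_pc_of_ptr_eq(abs_ptr, ref_pc, L):
    return (abs_ptr == 'ref' and ref_pc == L) or abs_ptr == L

# ptr_i in set_j  ~~>  the cache denoted by ptr̂_i is at a location in R^in_j
def abs_ptr_in_set(abs_ptr, ref_pc, r_in):
    loc = ref_pc if abs_ptr == 'ref' else abs_ptr
    return loc in r_in

# set_j = {}  ~~>  neither the reference cache nor any environment cache
# is at a location in R^in_j
def abs_set_empty(ref_pc, env, r_in):
    return ref_pc not in r_in and all(env[l - 1] == 0 for l in r_in)

print(abs_ptr_in_set('ref', 2, {2, 3}))       # True
print(abs_set_empty(1, (0, 0, 1), {2, 3}))    # False: some cache is at location 3
```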
Lemma 3.6.2. Let s, s′ be two states of a concrete system P(K). Let there be a transition from s to s′ with the directory process executing the statement t. Then the abstract states αc(s) and αc(s′) for any process c ∈ [1..K] together satisfy the transition invariant I(t).
Proof. The proof of this lemma follows simply from the way we constructed I(t).
3.7 Experiments

GERMAN'S cache coherence protocol [44; 66] and the FLASH cache coherence protocol are the two most widely studied cache coherence protocols. We applied our abstraction technique to several versions, including the standard, correct version, of GERMAN'S protocol and to a simplified version of the FLASH protocol.
GERMAN’s protocol, which operates in lazy mode, has two set variables sharlist and invlist. Whenever a cache enters a shared state it is added to sharlist. The variable sharlist is redundant according to the criterion in Section 3.5. The variable invlist is used to send invalidate messages to caches which are in a shared state. Initially, invlist is set equal to sharlist. When an invalidate message is sent to a cache in invlist, it is removed from invlist.
While invlist is not redundant according to the criterion of Section 3.5, a simple change makes our criterion applicable: instead of initializing invlist by assigning sharlist to it, we can add a cache to invlist whenever it is added to sharlist. This simple change makes the set variable invlist redundant, too. All the different versions of GERMAN'S protocol that we verified had this modification. (Alternatively, we could also create a stronger redundancy criterion for set variables, which would ensure that invlist is redundant without any modification. But the modification we introduced is minor and does not change the protocol much.)
In addition to verifying the standard correct version of GERMAN'S protocol, we also tried our method on buggy versions of GERMAN'S protocol, including one supplied by Steve German . These buggy versions of the standard GERMAN'S protocol, referred to as BUGGY 1 and BUGGY 2, are described in the last section of this chapter. In addition, we also applied our method to a variant of GERMAN'S protocol which has four channels instead of the usual three. We will refer to this version as GERMAN 4-CHAN.
For the FLASH protocol, we eliminated the local pointer variables from the caches.
These local pointers are used to handle the three-hop case where the directory forwards the id of the cache requesting exclusive access to the cache already holding that data item in an exclusive state. For the three-hop case, we exploit the fact that at any point, for a given data item, there can be only one three-hop transaction going on. Thus, to reduce the state space, instead of storing a pointer at each cache, we store one pointer in the central directory.
Hence, we can model the three-hop transaction as a remote action of the directory without changing the semantics of the three-hop transaction. While the modifications in this case are significant, the resultant protocol is still quite complicated and retains enough similarity to the original protocol to justify calling it a variation of the FLASH protocol.
The safety property considered for all the protocols was

∀x. AG (pcL[x] = excl ⇒ (excl ∉ env(x) ∧ shar ∉ env(x)))

i.e., if cache x holds the data item exclusively (pcL[x] = excl) then no other cache can hold the data item in shared or exclusive state. The results of our experiments are described below.
Standard German's protocol. We first applied our method to the standard, correct version of GERMAN'S protocol. We did not use the optimization to eliminate the counters for unreachable local states. For this unoptimized version, Cadence SMV took about 3 hours (11400 seconds) to verify the safety property. We then applied the optimization to eliminate the counters corresponding to unreachable local states. Instead of writing a procedure to find the unreachable states, we supplied a list of unreachable local states to the abstraction program manually, as it is easy to figure out by inspection which local states are unreachable. For instance, for GERMAN'S protocol, it is easy to see that if the outgoing channel chan3 is carrying an invack message then the cache state must be invalid. While the list of states we supplied may not be exhaustive, it still gives a significant reduction in the abstract state space. With this optimization, SMV takes about 5 minutes to complete the verification. This running time compares favorably with other verification efforts involving GERMAN'S protocol, see for instance .
Version Buggy 1. In the BUGGY 1 version, after the directory grants exclusive access to a cache, it fails to set the grantexcl variable to true. Thus, when another cache requests shared access, it gets the access even though the first cache holds it in exclusive state. We applied our abstraction (without the optimization to eliminate counters corresponding to unreachable states) and used Cadence SMV's Bounded Model Checker. BMC takes around 15 mins to find the bug at depth 12 (that is, the bug is reached after 12 transitions have been executed by the cache coherence system).
Version Buggy 2. In the BUGGY 2 version, the directory grants a shared request even if the grantexcl variable is true. As with the previous version, we constructed the abstract model without using the optimization to remove counters for unreachable states. BMC again takes under 15 mins to find the bug at depth 12.
German 4-Chan. In this variant of GERMAN'S protocol, there are four channels instead of the usual three. Specifically, instead of just one incoming channel, there are two incoming channels, chan2 and chan4, for every cache. In the original version, the single incoming channel carries all three types of messages: grantshar, grantexcl, and invalid. In the four channel version, one of the incoming channels, chan2, carries grantshar and grantexcl messages while the other one, chan4, carries the invalid message.
Having two incoming channels leads to the following subtle bug: cache 1 requests a shared access, and while this is being processed, it sends out another request. The first request is honored and cache 1 gets shared access (while the other request for shared access is still pending). Now the central process reads the second request from cache 1, and sends it another grantshar message on chan2.
Immediately after this, another cache, say cache 2, requests exclusive access. Before granting exclusive access to cache 2, the central process sends out an invalidate message to all caches with shared access, including cache 1 on the second incoming channel chan4.
Cache 1 reads the invalid message on chan4 (while chan2 still has the grantshar message), transitions to the invalid state, and sends an acknowledgement to the central process (on chan3). Once the central process sees all the acknowledgements, it grants exclusive access to cache 2. But the grantshar message is still present in chan2 of cache 1, and this leads cache 1 to transition to a shared state. Thus, cache 1 ends up with shared access while cache 2 still has exclusive access.
We applied our abstraction method (with both the optimizations described in Section 3.5) and used a BDD based model checker to find the bug. It took SMV 7 mins to find the bug at depth 15. BMC runs out of memory at depth 15. Note that BMC takes less than 15 mins for the two buggy versions because the counterexample depth is only 12. For these buggy versions, the BDD based model checker does not finish even after an hour (the abstract models for the buggy versions were not optimized).
Flash protocol. We constructed an abstract model for FLASH protocol using both the optimizations described in Section 3.5. With counter threshold 1 (cf. Section 3.4.2), we get a spurious counterexample due to the three-hop case. The spurious counter example is as follows: suppose counter eexcl corresponds to an exclusive local state. Suppose now that the reference process requests exclusive access. The central process forwards this request to the environment process which is represented by the counter eexcl. After serving the request the environment process goes into an invalid state, and thus eexcl should become 0.
But, since 1 stands for "many" in the abstract model, there is an abstract transition that keeps eexcl at 1. This leads to a violation of the safety property.
To get rid of this spurious counterexample, we track counters corresponding to exclusive local states more carefully. We refine the abstract model by increasing the counter threshold to 2 for those environments where the cache is in the exclusive state. The resulting model is precise enough to prove the safety property. The model checking time was about 7 hours (25700 seconds).
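The following Python sketch illustrates our reading of the counter threshold semantics (cf. Section 3.4.2): counts below the threshold are kept exact, the threshold value means "that many or more", and decrementing the top value is therefore nondeterministic. The function names are ours; the sketch only reproduces the effect described above, where threshold 1 admits the spurious transition and threshold 2 rules it out.

def abstract_counter(n, threshold):
    """Map a concrete count n to its abstract value under a counter threshold.

    Values 0..threshold-1 are kept exact; the value `threshold` means
    'threshold or more' (so with threshold 1, the value 1 already means
    "one or more").
    """
    return min(n, threshold)

def abstract_decrement(a, threshold):
    """Possible abstract successors when one concrete process leaves the
    environment counted by `a`.  Returns a set because the top value is a
    lossy 'many', so the decrement is nondeterministic."""
    if a == 0:
        return {0}                      # nothing to remove
    if a < threshold:
        return {a - 1}                  # exact value, exact decrement
    # a == threshold: the concrete count is anywhere in [threshold, inf),
    # so after removing one process it may or may not drop below the threshold.
    return {a - 1, a}

# Threshold 1: 'eexcl = 1' means "one or more exclusive copies".
# Removing the only exclusive copy still allows the abstract model to keep
# eexcl at 1 -- exactly the spurious behaviour described above.
print(abstract_decrement(1, threshold=1))   # {0, 1}

# Threshold 2: 'eexcl = 1' now means "exactly one exclusive copy",
# so serving the request forces the counter down to 0.
print(abstract_decrement(1, threshold=2))   # {0}
print(abstract_decrement(2, threshold=2))   # {1, 2}  (the 'many' case)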
The table shown in Figure 3.1 summarizes our experimental results.
Protocol          Optimizations  MC   Cex           Time
German(std)       1              BDD  No            3 hrs
German(std)       1, 2           BDD  No            5 mins
German(Buggy 1)   1              BMC  Yes (len=12)  15 mins
German(Buggy 2)   1              BMC  Yes (len=12)  15 mins
German(4-Chan)    1, 2           BDD  Yes (len=15)  7 mins
Flash             1, 2           BDD  No            7 hrs

Figure 3.1: Results for Cache Coherence Protocols

Remark 10. For the protocols that do not satisfy the cache coherence property, the counterexamples always involve just two caches. For example, the version of GERMAN's protocol with four channels has a bug involving only two caches. It seems to be the case that having just 3 caches might exhaust all the possibilities for a cache coherence protocol, but this is hard to prove.
All the experiments were run on a 1.5 GHz machine with 3GB main memory. Since the time for extracting the abstract model is negligible compared to the model checking time, the reported times are runtimes of the model checker.
3.8 Conclusion
We have presented a natural application of environment abstraction that allows us to automatically verify complex cache coherence protocols. We first describe a high level description language to model such protocols. Our language is natural and facilitates easy protocol descriptions in the spirit of Lamport's TLA (although it is much more restricted).
In contrast to previous approaches , we use symbolic model checkers in this chapter.
To keep the computation feasible, we over-approximate the abstract transition system one statement at a time, similar to predicate abstraction for software. Moreover, we use the results of Section 3.5 to eliminate semantically redundant set variables, and thus to reduce the size of the abstract models.
In the next section, we present the descriptions of the protocols that we verified. In the next chapter we will consider the application of environment abstraction to mutual exclusion protocols.
3.9 Protocol Descriptions
A simplified version of the FLASH cache coherence protocol is shown in our input language below. The simplifications are as follows:
• In the original FLASH protocol, the directory (i.e. the central process) distinguishes between the home cache and the other caches. While our abstraction method can handle the full version, the current model checkers cannot handle the abstract model that is generated.
• The communication between the central process (directory) and local processes (the caches) is modelled by having two variables chanin and chanout per cache. These two variables serve as incoming and outgoing channels for each cache. Use of two variables implies that the communication buffers are bounded, in fact, are of size 1.
This restriction is similar to the restriction seen in GERMAN’s Protocol.
• For the three-hop case, we exploit the fact that at any point in time, for a given cache line, there can be only one three-hop transaction. This fact can be seen just by examining the code for the central process. So to reduce state space, instead of storing a pointer at each cache, we store one pointer at the central process (named threehoptr in the model below). Since there is only one three-hop transaction and all the information on the caches involved is known, we model the three-hop transaction as part of the central process. This does not change the semantics of the three-hop transaction in any way; it is just a convenient representation in our modelling language.
The central process reads a message from a cache via a transition involving a pick action. For example, the transition
currcmd = empty ∧ read = no: Do Actions
goto currcmd = get ∧ read = yes
pick temptr from chanout = get
says: if the current command (CURRCMD) is empty, and nothing has been read (READ = no), then do the action pick temptr from chanout = get. This action sets the pointer temptr to some cache satisfying the condition chanout = get, that is, some cache which has sent a get message. Then the current command is set to get and READ is marked yes.
The expression sharlist = Φ indicates that the set sharlist is empty. Finally, the statement remote sharlist goto chanin = inv denotes the action where all caches present in sharlist get an invalidate (inv) message.
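As an informal illustration of the pick and remote actions just described, the following Python sketch executes them over a small, made-up array of per-cache channel variables; the data layout and the use of a random choice to model nondeterminism are assumptions of the sketch, not part of the modelling language.

import random

# Tiny illustration of the 'pick' and 'remote' actions of the input
# language, over an assumed array of per-cache channel variables.
caches = [
    {"chanout": "get",   "chanin": "empty"},   # cache 0
    {"chanout": "empty", "chanin": "empty"},   # cache 1
    {"chanout": "get",   "chanin": "empty"},   # cache 2
]
sharlist = {1, 2}          # caches currently holding shared copies

def pick(condition):
    """pick temptr from <condition>: nondeterministically choose one cache
    index satisfying the condition (modelled here by a random choice)."""
    candidates = [i for i, c in enumerate(caches) if condition(c)]
    return random.choice(candidates) if candidates else None

def remote(targets, update):
    """remote <set> goto <assignment>: apply the assignment to every cache
    in the given set of indices."""
    for i in targets:
        caches[i].update(update)

# pick temptr from chanout = get
temptr = pick(lambda c: c["chanout"] == "get")
print("picked cache", temptr)

# remote sharlist goto chanin = inv
remote(sharlist, {"chanin": "inv"})
print(caches)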
98 FLASH PROTOCOL Local Process Local Vars CACHESTATE: inv, shar, excl; INVMARKED: yes, no; CHANIN: empty, put, putx, inv, NAK; CHANOUT: empty, get, getx, invack; Local Transitions cachestate = inv ∧chanout = emtpy: goto chanout = get cachestate = inv ∧chanout = emtpy: goto chanout = getx cachestate = shar ∧chanout = emtpy: goto chanout = getx cachestate = inv ∧chanin = inv: goto invmarked = yes cachestate = inv ∧invmarked = no ∧chanin = put: goto cachestate = shar ∧ chanin = empty 99 cachestate = inv ∧invmarked = yes ∧chanin = put: goto invmarked = no ∧ chanin = empty cachestate = inv ∧invmarked = no ∧chanin = putx: goto cachestate = excl ∧ chanin = empty cachestate = inv ∧invmarked = yes ∧chanin = putx: goto invmarked = no ∧ chanin = empty cachestate = shar ∧chanin = inv: goto cachestate = inv ∧chanout = invack cachestate = excl ∧chanin = inv: goto cachestate = inv ∧chanout = invack cachestate = inv ∧chanin = NAK: goto chanin = empty cachestate = shar ∧chanin = NAK: goto chanin = empty Central Process Central vars DIRTY: no, yes; PENDING: no, yes; HDPTR: ptr; HDVALID: no, yes; CURRCMD: empty, get, getx, invack; THREEHOP: empty, get, get1, getx, getx1; THREEHOPTR: ptr; CHECKSHRLIST: no, yes; SHARLIST: set; TEMPTR: ptr; 100 Central Transitions currcmd = empty ∧read = no: Do Actions goto currcmd = get ∧read = yes pick temptr from chanout = get currcmd = empty ∧read = no: Do Actions goto currcmd = getx ∧read = yes pick temptr from chanout = getx currcmd = empty ∧read = no: Do Actions goto currcmd = invack ∧read = yes pick temptr from chanout = invack currcmd = empty ∧read = yes: Do Actions goto read = no remote temptr goto chanout = empty 101 currcmd = get∧read = no∧pending = no∧dirty = no∧chanout[temptr] = empty: Do Actions goto currcmd = empty remote temptr goto chanin = put currcmd = get∧read = no∧pending = no∧dirty = yes∧chanout[temptr] = empty: Do Actions goto currcmd = empty ∧threehop = get ∧pending = yes assign threehoptr = temptr threehop = get ∧cachestate[hdptr] = excl: Do Actions goto threehop = get1 remote hdptr goto cachestate = inv currcmd = get∧read = no∧pending = yes∧chanout[temptr] = empty: Do Actions goto currcmd = empty remote temptr goto chanin = NAK currcmd = getx ∧read = no ∧pending = yes ∧chanout[temptr] = empty: Do Actions goto currcmd = empty remote temptr goto chanin = NAK 102 currcmd = getx ∧read = no ∧pending = no ∧dirty = no ∧hdvalid = no ∧ chanout[temptr] = empty: Do Actions goto currcmd = empty remote temptr goto chanin = putx currcmd = getx ∧read = no ∧pending = no ∧dirty = yes ∧chanout[temptr] = empty: Do Actions goto currcmd = empty ∧threehop = getx ∧pending = yes assign threehoptr = temptr currcmd = getx ∧read = no ∧pending = no ∧dirty = no ∧hdvalid = yes ∧ chanout[temptr] = empty: Do Actions goto currcmd = empty ∧pending = yes remote sharlist goto chanin = inv remote temptr goto chanin = putx threehop = getx ∧cachestate[hdptr] = excl: Do Actions goto threehop = getx1 remote hdptr goto cachestate = inv currcmd = invack ∧read = no: Do Actions goto currcmd = empty ∧checksharlist = yes checksharlist = yes ∧sharlist = Φ: Do Actions goto chechsharlist = no ∧pending = no 103 STANDARD GERMAN’S PROTOCOL Local Process Local Vars Cachestate: {invalid, shar, excl} chan1: {Empty, reqshar, reqexcl} chan2: {Empty, invalid, grantshar, grantexcl} chan3: {Empty, Invack} Local Transitions cachestate = invalid ∧chan1 = empty: goto chan1 = reqshar; cachestate = invalid ∧chan1 = empty: goto chan1 = reqexcl; cachestate = shar ∧chan1 = empty: goto chan1 = reqexcl; chan2 = invalid 
∧chan3 = empty: goto chan2 = empty ∧chan3 = invack ∧ cachestate = invalid; chan2 = grantshar: goto chan2 = empty ∧cachestate = shar; chan2 = grantexcl: goto chan2 = empty ∧cachestate = excl; 104 Central Process Central Vars exclgrant: {yes, no} currcmd: {empty, reqshar, reqexcl} currclient: ptr sharlist: set Invlist: set read: {yes, no} tmpread1: {no, yes} temptr2: ptr tmpread2: {no, yes} temptr1: ptr Central Transitions currcmd = empty ∧read = no: Do Actions goto read = yes pick currclient from {local| chan1[local]=reqshar ∨ chan1[local]=reqexcl} currcmd = empty ∧read = yes ∧chan1[currclient] = reqshar: Do Actions goto read = no ∧currcmd = reqshar remote currclient goto chan1 = Empty 105 currcmd = empty ∧read = yes ∧chan1[currclient] = reqexcl: Do Actions goto read = no ∧currcmd = reqexcl remote currclient goto chan1 = Empty assign Invlist = sharlist currcmd = reqshar ∧grantexcl = no ∧chan2[currclient] = empty: Do Actions goto currcmd = empty remote currclient goto chan2 = grantshar Add currclient to sharlist currcmd = reqexcl ∧sharlist = Φ ∧chan2[currclient] = empty: Do Actions goto currcmd = empty ∧grantexcl = yes remote currclient goto chan2 = grantexcl Add currclient to sharlist currcmd = reqshar ∧tmpread1 = no ∧grantexcl = yes: Do Actions goto tmpread1 = yes pick temptr1 from {local|(local ∈Invlist) ∧chan2[local] = Empty} currcmd = reqexcl ∧tmpread1 = no: Do Actions goto tmpread1 = yes pick temptr1 from {local|(local ∈Invlist) ∧chan2[local] = Empty} currcmd = reqshar ∧tmpread1 = yes: Do Actions goto tmpread = no remote temptr1 goto chan2 = invalid Remove temptr1 from Invlist 106 currcmd = reqexcl ∧tmpread1 = no: Do Actions goto tmpread1 = yes remote temptr1 goto chan2 = invalid Remove temptr1 from Invlist currcmd = reqshar ∧tmpread2 = no ∧grantexcl = yes: Do Actions goto tmpread2 = yes pick temptr2 from {local|chan3[local] = invack} currcmd = reqexcl ∧tmpread2 = no: Do Actions goto tmpread1 = yes pick temptr2 from {local|chan3[local] = invack} currcmd = reqshar ∧tmpread2 = yes: Do Actions goto tmpread2 = no ∧grantexcl = no remote temptr2 goto chan3 = Empty currcmd = reqexcl ∧tmpread2 = yes: Do Actions goto tmpread2 = no ∧grantexcl = no remote temptr2 goto chan2 = invalid Remove temptr2 from sharlist 107 BUGGY VERSIONS OF GERMAN’S PROTOCOL As a sanity check, we created two buggy versions of GERMAN’S protocol to see if our method is able to catch the bugs. The buggy versions are described below.
Buggy version 1. In the first buggy version, after the directory grants exclusive access to a cache, it fails to set the grantexcl variable to true. That is, instead of the correct transition currcmd = reqexcl ∧sharlist = Φ ∧chan2[currclient] = empty: Do Actions goto currcmd = empty ∧grantexcl = yes remote currclient goto chan2 = grantexcl Add currclient to sharlist we have the faulty version currcmd = reqexcl ∧sharlist = Φ ∧chan2[currclient] = empty: Do Actions goto currcmd = empty remote currclient goto chan2 = grantexcl Add currclient to sharlist 108 Buggy version 2. For the second buggy version, the directory grants a shared request even if grantexcl variable is true (that is, some cache has been granted exclusive access). Thus, instead of the normal transition currcmd = reqshar ∧grantexcl = no ∧chan2[currclient] = empty: Do Actions goto currcmd = empty remote currclient goto chan2 = grantshar Add currclient to sharlist we have currcmd = reqshar ∧grantexcl = yes ∧chan2[currclient] = empty: Do Actions goto currcmd = empty remote currclient goto chan2 = grantshar Add currclient to sharlist 109 GERMAN’S PROTOCOL WITH EXTRA CHANNELS (4-CHAN) Local Process Local Vars Cachestate: {invalid, shar, excl} chan1: {Empty, reqshar, reqexcl} chan2: {Empty, grantshar, grantexcl} chan3: {Empty, Invack} chan4: {Empty, invalid } Local Transitions cachestate = invalid ∧chan1 = empty: goto chan1 = reqshar; cachestate = invalid ∧chan1 = empty: goto chan1 = reqexcl; cachestate = shar ∧chan1 = empty: goto chan1 = reqexcl; chan4 = invalid ∧chan3 = empty: goto chan2 = empty ∧chan3 = invack ∧ cachestate = invalid chan2 = grantshar: goto chan2 = empty ∧cachestate = shar chan2 = grantexcl: goto chan2 = empty ∧cachestate = excl 110 Central Process Central Vars exclgrant: {yes, no} currcmd: {empty, reqshar, reqexcl} currclient: ptr sharlist: set Invlist: set read: {yes, no} tmpread1: {no, yes} temptr2: ptr tmpread2: {no, yes} temptr1: ptr Central Transitions currcmd = empty ∧read = no: Do Actions goto read = yes pick currclient from {local| chan1[local]=reqshar ∨ chan1[local]=reqexcl} 111 currcmd = empty ∧read = yes ∧chan1[currclient] = reqshar: Do Actions goto read = no ∧currcmd = reqshar remote currclient goto chan1 = Empty currcmd = empty ∧read = yes ∧chan1[currclient] = reqexcl: Do Actions goto read = no ∧currcmd = reqexcl remote currclient goto chan1 = Empty Assign Invlist = sharlist currcmd = reqshar ∧grantexcl = no ∧chan2[currclient] = empty: Do Actions goto currcmd = empty remote currclient goto chan2 = grantshar Add currclient to sharlist currcmd = reqexcl ∧sharlist = Φ ∧chan2[currclient] = empty: Do Actions goto currcmd = empty ∧grantexcl = yes remote currclient goto chan2 = grantexcl Add currclient to sharlist currcmd = reqshar ∧tmpread1 = no ∧grantexcl = yes: Do Actions goto tmpread1 = yes pick temptr1 from {local|Invlist[local] = in ∧chan2[local] = Empty} currcmd = reqexcl ∧tmpread1 = no: Do Actions goto tmpread1 = yes pick temptr1 from {local|Invlist[local] = in ∧chan2[local] = Empty} 112 currcmd = reqshar ∧tmpread1 = yes: Do Actions goto tmpread = no remote temptr1 goto chan4 = invalid Remove temptr1 from Invlist currcmd = reqexcl ∧tmpread1 = no: Do Actions goto tmpread1 = yes remote temptr1 goto chan2 = invalid Remove temptr1 from Invlist currcmd = reqshar ∧tmpread2 = no ∧grantexcl = yes: Do Actions goto tmpread2 = yes pick temptr2 from {local|chan3[local] = invack} currcmd = reqexcl ∧tmpread2 = no: Do Actions goto tmpread1 = yes pick temptr2 from {local|chan3[local] = invack} currcmd = reqshar 
∧tmpread2 = yes: Do Actions goto tmpread2 = no ∧grantexcl = no remote temptr2 goto chan3 = Empty
currcmd = reqexcl ∧tmpread2 = yes: Do Actions goto tmpread2 = no ∧grantexcl = no remote temptr2 goto chan2 = invalid Remove temptr2 from sharlist

Chapter 4
Environment Abstraction for Verification of Mutex Protocols

4.1 Introduction
Given a set of contending processes, providing them mutually exclusive access to resources is among the most basic primitives that any computer system requires. As such, mutual exclusion protocols have received considerable attention in the distributed computing literature. These protocols are usually designed to be correct no matter what the exact number of processes running them. Thus, mutual exclusion protocols are classic examples of parameterized systems. Note that, in contrast to cache coherence protocols, in mutual exclusion protocols each individual process itself might have an infinite state space, as processes can have unbounded data variables in addition to finite control variables.
Several model checking based methods, including Indexed Predicates , Invisible Invariants , and counter abstraction , have been proposed to parametrically verify mutual exclusion protocols. The Indexed Predicates method [52; 53], as already mentioned, is a new form of predicate abstraction for infinite state systems. This method works only for safety properties and not for liveness properties.
The idea behind the Invisible Invariants technique, introduced in a series of papers [3; 41; 42; 64], is to find an invariant for the parameterized system by examining concrete systems for low valuations of the parameter(s). In , a modified version of the Bakery algorithm is verified – the original Bakery algorithm is modified to eliminate unbounded data variables.
Pnueli et al. , who coined the term counter abstraction, show how systems composed of symmetric and finite state processes can be handled automatically. However, protocols that either break symmetry by exploiting knowledge of process ids or that have infinite state spaces require manual intervention. Thus, the verification of Szymanski's and the Bakery protocol in requires manual introduction of new variables. All three methods mentioned above make use of the atomicity assumption.
In this chapter, we will show how environment abstraction can be used to verify mutual exclusion protocols automatically under the atomicity assumption. Environment abstraction essentially addresses the two disadvantages of counter abstraction by generalizing the idea of counting: since the state space is infinite, we do not count the processes in a given state as in traditional counter abstraction, but instead we count the number of processes satisfying a given predicate.

Figure 4.1: Abstraction Mapping.
Figure 4.1 visualizes the intuition underlying environment abstraction. The grey box on the left hand side represents a concrete state of a system with 16 concurrent processes.
The different colors of the disks/processes represent the internal states of the processes, i.e., the state of the control variables.
The star-shaped graph on the right hand side of Figure 4.1 represents an abstract state.
The abstract state contains one distinguished process, the reference process x, which is at the center of the star. In this example, the reference process x represents process 1 of the concrete state. The disks on the circumference of the star represent the environment of the reference process. Intuitively, the goal of the abstraction is to embed the reference process x of the abstract state into an abstract environment as rich as the environment that process 1 has in the concrete state. Thus, the abstract state represents the concrete state "from the point of view of process 1." To describe the environment of a process, we need to consider the relationships which can hold between the data variables of two processes. We can graphically indicate a specific relationship between any two processes by a corresponding arrow between the processes; the form of the arrow (full, dashed, etc.) determines which relationship the two processes have. In Figure 4.1, we assume that we have only two relationships R1 and R2.
For example, R1(x, y) might say that the local variable t of process x has the same value as local variable t in process y, while R2(x, y) might say that t has different values in processes x and y. Relationship R1 is indicated by a full arrow, and R2 is indicated by a dashed arrow. For better readability, not all relationships between the 16 processes are drawn.
Note that a single abstract state generally represents an infinite number of concrete states. Moreover, a given concrete state gives rise to several abstract states, each of which is induced by choosing a different possible reference process. For example, the concrete state in Figure 4.1 may induce up to 16 abstract states, one for each process.
Using the abstraction method described here, we have been able to verify automatically the safety and liveness properties of two well known mutual exclusion algorithms, namely Lamport's Bakery algorithm and Szymanski's algorithm . While safety and liveness properties of Szymanski's algorithm have been automatically verified with the atomicity assumption by Baukus et al. , this is the first time both safety and liveness of Lamport's Bakery algorithm have been verified (with the atomicity assumption) at this level of automation.
4.2 System Model for Mutual Exclusion Protocols
As in Section 2.2, we consider a parameterized system P(K), K > 1, composed of (parameter) K processes. Unlike the system model for cache coherence protocols, there is no central process in the systems considered in this chapter. Technically, P(K) is a Kripke structure ⟨SK, IK, LK, RK⟩. The set of global states SK and the global transition relation RK are formed by composing the individual states and the transition relations of the K processes. Since mutual exclusion protocols are asynchronous systems, the global transition relation RK is the asynchronous composition of the individual transition relations.
In the following we will describe the state spaces and transition relations of the individual processes.
4.2.1 Local State Variables
Each process has two sets of variables: the control variables and the data variables. Intuitively, the two sets of variables serve different purposes. The control variables determine the internal control state of the process. Without loss of generality, we can assume that there is only one control variable pc per process. The set of data variables, U ≐ {u1, . . . , ud}, contains actual data which can be read by other processes to calculate their own data variables. We could also assume that there is only one data variable per process, but computation of the abstract model in the presence of multiple data variables is different from the single data variable case. Hence, we consider the full general model.
We will usually refer to processes and their variables via their process ids. In particular, pc[i] and uk[i] denote the variables pc and uk of the process with id i. A process can use the reserved expression slf to refer to its own process id. When a protocol text contains the variables pc or uk without explicit reference to a process id, then this stands for pc[slf] and uk[slf] respectively. Note that all processes in a system P(K) are identical except for their ids. Thus, the process ids are the only means to break the symmetry between the processes.
A formula of the form pc = const is called a control assignment. The range of pc is called the set of control locations.
Though we assume that there is only one control variable, in program texts we may take the liberty of using more than one finite range control variable, as it makes the program more readable.
4.2.2 Transition Constructs We will describe the transition relation of the processes in terms of two basic constructs, guarded transitions for the finite control, and the more complicated update transitions for modifying the data variables. A guarded transition has the form 120 pc = L1 : if ∀otr ̸= slf.G(slf, otr) then goto pc = L2 else goto pc = L3 or shorter L1 : if ∀otr ̸= slf.G(slf, otr) then goto L2 else goto L3 where L1, L2, L3 are control locations. In the guard ∀otr ̸= slf.G(slf, otr) the variable otr ranges over the process ids of all other processes. The condition G(slf, otr) can be any formula involving the data variables of processes slf, otr and the pc variable of otr. The semantics of a guarded transition is straightforward: in control location L1, the process evaluates the guard and changes to control location L2 or L3 accordingly.
Update transitions are needed to describe protocols such as the Bakery algorithm where a process computes a data value depending on all values that it can read from other pro-cesses. For example, the Bakery algorithm has to compute the maximum of a certain data variable (the “ticket variable”) in all other processes. Thus, we define an update transition to have the general form L1 : for all otr ̸= slf if T (slf, otr) then uk := Φ(otr) goto L2 where L1 and L2 are control assignments, and T (slf, otr) is a condition involving data variables of processes slf, otr. The semantics of the update transition is best understood in an operational manner: in control location L1, the process scans over all the other processes (in a nondeterministically chosen order), and for each process otr, checks if the formula T (slf, otr) is true. In this case, the process changes the value of its data variable 121 uk according to uk := Φ(otr), where Φ(otr) is an expression involving variables of process otr. Thus, the variable uk can be reassigned multiple times within a transition. Finally, the process changes to control location L2. We assume that both guarded and update transitions are atomic, i.e., during their execution no other process makes a move.
Example 4.2.1.
As an example of a protocol written in this language, consider a parameterized system P(N) where each process P has one finite variable pc : {1, 2, 3} representing a program counter, one unbounded/integer variable t : Int, and executes the following program:
1 : goto 2
2 : if ∀otr ≠ slf. t[slf] ≠ t[otr] then goto 3
3 : t := t[otr] + 1; goto 1
The statement 1 : goto 2 is syntactic sugar for
pc = 1 : if ∀otr ≠ slf. true then goto pc = 2 else goto pc = 1
Similarly, 3 : t := t[otr] + 1; goto 1 is syntactic sugar for
pc = 3 : if ∀otr ≠ slf. true then t := t[otr] + 1 goto pc = 1.
This example illustrates that most commonly occurring transition statements in protocols can be written in our input language. □
Note that we have not specified the operations and predicates that are used in the conditions and assignments. Essentially, this choice depends on the protocols and the power of the decision procedures used. For the protocols considered in this paper, we need linear order and equality on data variables as well as incrementation, i.e., addition by one.
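For readers who prefer an operational view, the following Python sketch simulates the protocol of Example 4.2.1 under the atomicity assumption: one process executes one whole guarded or update transition per step, and the scan order of the update transition is chosen nondeterministically. The number of processes and the initial values are assumptions made only for the sake of the illustration.

import random

K = 4                                    # number of processes (assumed)
pc = [1] * K                             # control locations, all start at 1 (assumed)
t  = [0] * K                             # the unbounded data variable

def step(i):
    """Execute one atomic transition of process i."""
    others = [j for j in range(K) if j != i]
    if pc[i] == 1:                       # 1 : goto 2
        pc[i] = 2
    elif pc[i] == 2:                     # 2 : if forall otr. t[slf] != t[otr] then goto 3
        if all(t[i] != t[j] for j in others):
            pc[i] = 3                    # guard true
        # otherwise the guard is blocked and the process stays at 2
    elif pc[i] == 3:                     # 3 : t := t[otr] + 1 for each otr, then goto 1
        for j in random.sample(others, len(others)):   # nondeterministic scan order
            t[i] = t[j] + 1              # the data variable may be reassigned several times
        pc[i] = 1

random.seed(0)
for _ in range(20):
    step(random.randrange(K))            # asynchronous interleaving of the processes
print("pc =", pc, " t =", t)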
The last section of this chapter contains Szymanski’s protocol and the Bakery protocol described in our input language.
4.3 Environment Abstraction for Mutual Exclusion Protocols
In this section, we show how to apply environment abstraction for mutual exclusion protocols. In Section 4.5, we will discuss how to actually compute abstract models.
To apply environment abstraction, we have to give the abstract descriptions and the abstraction mapping from the concrete states to the abstract states. We also have to prove that the abstract descriptions satisfy the coverage and congruence properties with respect to the set of labels we use. We consider these issues below.
4.3.1 Specifications and Labels The typical properties that we are interested in verifying can be expressed as shown below.
• A single process liveness property can be written as ∀x.AG(pc[x] = try ⇒F(pc[x] = crit)) “For all processes x, the following holds: If process x is trying to enter the critical section then it eventually will.” 123 • ∀x.AG(pc[x] = crit ⇒(crit / ∈env(x))) “For all processes x the following invariant holds: If process x is in the critical section, then no other process is in the critial section” Consequently, the set of labels L again has two types of labels: • pc[x] = L, and • L ∈env(x).
4.3.2 Abstract Descriptions Technically, our descriptions reuse the predicates which occur in the control statements of the protocol description. Let SL be the number of control locations in the program P. The internal state of a process x can be described by a predicate of the form pc[x] = L where L ∈{1..SL} is a control location.
In order to describe the relations between the data variables of different processes we collect all predicates EP1(x, y), . . ., EPr(x, y) which occur in the guards of the program.
From now on we will refer to these predicates as the inter-predicates of the program.
Since in most practical protocols, synchronization between processes involves only one or two data variables, the number of inter-predicates is usually quite small. The relationship 124 between a process x and a process y is now described by a formula of the form Ri(x, y) .
= ±EP1(x, y) ∧. . . ∧±EPr(x, y) where ±EPi stands for EPi or its negation ¬EPi. It is easy to see that there are 2r possible relationships R1(x, y), . . ., R2r(x, y) between x and y. In the example of Figure 4.1, the two relationship predicates R1 and R2 are visualized by full and dashed arrows.
Fact 3. The relationship conditions R1(x, y), . . ., R2r(x, y) are mutually exclusive.
Before we explain the descriptions ∆(x) in detail, let us first describe the most im-portant building blocks for the descriptions, which we call environment predicates. An environment predicate expresses that for process x we can find another process y which has a given relationship to process x and a certain internal state. The environment predi-cates thus have the form ∃y.y ̸= x ∧Ri(x, y) ∧pc[y] = j.
An environment predicate says the following: there exists a process y different from x whose relationship to x is described by the EP predicates in Ri and whose internal state is j. There are T := 2r × SL different environment predicates; we name them E1(x), . . . , ET(x), and their quantifier-free matrices E1(x, y), . . ., ET(x, y). Note that each Ek(x, y) has the form y ̸= x ∧R(x, y) ∧pc[y] = L.
Fact 4. If an environment process y satisfies an environment condition Ei(x, y), then it cannot simultaneously satisfy any other environment condition Ej(x, y), i ̸= j.
125 Proof. Each environment condition Ek(x, y) has the form y ̸= x ∧R(x, y) ∧pc[y] = L.
Thus let Ei(x, y) .
= y ̸= x ∧Ri(x, y) ∧pc[y] = Li and Ej(x, y) .
= y ̸= x ∧Rj(x, y) ∧pc[y] = Lj Since Ei(x, y) and Ej(x, y) are different, either Ri(x, y) is different from Rj(x, y) or Li ̸= Lj.
In the former case, by Fact 3, Ri(x, y) and Rj(x, y) are mutually exclusive. Thus, if process y satisfies Ei(x, y) then it cannot satisfy Ej(x, y).
In the latter case, if process y satisfies Ei(x, y) then the control location of process y is Li. Since Li ̸= Lj, process y cannot satisfy Ej(x, y).
Hence, in both the cases we have shown that if process y satisfies environment condi-tion Ei(x, y) then it cannot satisfy any other environment condition Ej(x, y).
Fact 5. Given a state s and two different processes c and d, there exists a unique environ-ment condition Ei(x, y) such that s | = Ei(c, d).
Proof. Let L be the control location of process d in state s. Thus, s | = pc[d] = L holds.
Given processes c and d, for each inter-predicate EP k(x, y) we have either s | = EPk(c, d) or s ̸| = EPk(c, d). Consider the formula F(x, y) .
= y ̸= x ∧pc[y] = L ∧ ^ s| =EPk(c,d) EPk(x, y) ∧ ^ s̸| =EPk(c,d) ¬EPk(x, y).
126 Clearly, s | = F(c, d) by construction. Syntactically, F(x, y) is identical to a unique environment condition. By Fact 4, processes c, d can satisfy at most one environment condition.
Fact 6. Let Ei(x, y) be an environment condition and G(x, y) be a boolean formula over the inter-predicates EP1(x, y), . . ., EPr(x, y) and predicates of the form pc[y] = L. Then either Ei(x, y) ⇒G(x, y) or Ei(x, y) ⇒¬G(x, y).
Proof. Since Ei(x, y) has the form pc[y] = j ∧Rk(x, y) where Rk(x, y) is a min-term over the inter-predicates EP1(x, y), . . ., EPr(x, y), Ei(x, y) enforces a unique truth value for all atomic subformulas of G(x, y).
We are ready to return to the descriptions ∆(x). A description ∆(x) has the format pc[x] = i ∧ ±E1(x) ∧±E2(x) ∧· · · ∧±ET(x), where i ∈[1..S].
Intuitively, a description ∆(x) gives detailed information on the internal state of process x, and how the other processes are related to process x. Note the correspondence of ∆(x) to the abstract state in Figure 4.1: the control location i determines the color of the central circle, and the Ej determine the processes surrounding the central one.
Definition 4.3.1 (Abstraction Mapping). Let P(K), K > 1, be a concrete system and p ∈[1..K] be a process. The abstraction mapping αp induced by p maps a global state s of P(K) to an abstract state ⟨pc, e1, . . . , eT⟩where pc = the value of pc[p] in state s and for all ej we have ej = 1 ⇔s | = Ej(p).
We will now prove the coverage and congruence conditions that let us apply environ-ment abstraction.
127 Lemma 4.3.2. Consider a description ∆(x) and a label l(x). Then either ∆(x) ⇒l(x) or ∆(x) ⇒¬l(x) Proof. Consider first the case where l(x) .
= pc[x] = L. Since ∆(x) is of the form pc[x] = L ∧ ±E1(x) ∧±E2(x) ∧· · · ∧±ET(x) it is easy to see that, in this case, ∆(x) either ∆(x) ⇒l(x) or ∆(x) ⇒¬l(x).
Consider the second case where label l(x) .
= L ∈env(x). Recall that L ∈env(x) is syntactic sugar for ∃y.y ̸= x.pc[y] = L. The description ∆(x) is of the form pc[x] = L ∧ ±E1(x) ∧±E2(x) ∧· · · ∧±ET(x) where each Ei(x) is of the form ∃y.y ̸= x ∧Rj(x, y) ∧pc[y] = Li Consider all those environment conditions Ei(x) of the form ∃y.y ̸= x ∧Rj(x, y) ∧pc[y] = L That is, those environment conditions that require the other process y to be in control location L. Denote this set of environment conditions by EL.
Now ∆(x) ⇒ _ Ei(x)∈EL ±Ei(x) (∗) where the polarity of each Ei(x) in the consequent is exactly as in the description ∆(x).
Suppose at least one environment condition, say Ej(x), in EL appears un-negated in the consequent of (∗). Then, ∆(x) ⇒Ej(x) (†) 128 Since Ej(x) is of the form ∃y.y ̸= x ∧Rk(x, y) ∧pc[y] = L it follows that Ej(x) ⇒∃y.y ̸= x ∧pc[y] = L.
Thus, we have ∆(x) ⇒∃y.y ̸= x ∧pc[y] = L in case at least one of the environment conditions in EL appears un-negated in ∆(x).
For the other case, suppose none of the environment conditions in EL appear unnegated in ∆(x). Then we have ∆(x) ⇒ ^ Ei(x)∈EL ¬Ei(x) or equivalently, ∆(x) ⇒¬( _ Ei(x)∈EL Ei(x)) Now Ei(x) is of the form ∃y.y ̸= x ∧Rk(x, y) ∧pc[y] = L and where each Rk(x, y) is of the form Rk(x, y) .
= ±EP1(x, y) ∧. . . ∧±EPr(x, y) Since the set of relation predicates Rk(x, y) is formed by taking all possible cubes over the inter-predicates EP1(x, y), . . ., EPT(x, y) it follows that _ Rk(x, y) = true 129 This means at least one of the relation predicates must be true. Assume without loss of generality that Rp(x, y) is true. Let the corresponding environment condition from EL which involves Rp(x, y) be Ep(x, y). Now ∆(x) ⇒¬( _ Ei(x)∈EL Ei(x)) which implies ∆(x) ⇒¬(Ep(x)) that is ∆(x) ⇒∃y.y ̸= x ∧Rp(x, y) ∧pc[y] = L Since Rp(x, y) is true Ep(x) .
= ∃y.y ̸= x ∧pc[y] = L So we have ∆(x) ⇒∃y.y ̸= x ∧pc[y] = L or equivalently ∆(x) ⇒¬l(x) Thus, in case none of the environment conditions in EL appear unnegated in ∆(x), ∆(x) ⇒ ¬l(x). Hence either ∆(x) ⇒l(x) or ∆(x) ⇒¬l(x) and the lemma is proved. Note that, this lemma establishes the congruence property described in Section 2.2.
Remark 11. Consider the full set of descriptions pc[x] = L ∧ ±E1(x) ∧±E2(x) ∧· · · ∧±ET(x), where L ∈[1..S].
130 Given any concrete state s and process c in a system P(K) it is clear that s | = ∆(c) for some description ∆(x). This is true simply because we take every possible conjunction of the predicates E1(x), . . . , ET(x) with every possible predicate pc[x] = L. Thus, the coverage condition discussed in Section 2.2 holds for the set of descriptions given above.
The other property required to make environment abstraction sound, namely the con-gruence property is established by the above lemma. Thus, for the chosen set of abstract descriptions and labels, environment abstract is sound.
We will now represent descriptions ∆(x) by tuples of values, as usual in predicate abstraction. The possible descriptions (∗) only differ in the value of the program counter pc[x] and in where they have negations in front of the E(x) predicates. Denoting negation by 0 and absence of negation by 1, every description ∆(x) can be identified with a tuple ⟨pc, e1, . . . eT⟩where pc is a control location, and each ei is a boolean variable.
Example 4.3.3.
Consider again the protocol shown in Example 4.2.1. There is only one inter-predicate EP1(x, y) .
= t[x] ̸= t[y]. Thus, we have two possible relationship conditions R1(x, y) .
= t[x] = t[y] and R2(x, y) .
= t[x] ̸= t[y]. Consequently, we have 6 different environment predicates: E1(x) .
= ∃y ̸= x.pc[y] = 1 ∧R1(x, y) E4(x) .
= ∃y ̸= x.pc[y] = 1 ∧R2(x, y) E2(x) .
= ∃y ̸= x.pc[y] = 2 ∧R1(x, y) E5(x) .
= ∃y ̸= x.pc[y] = 2 ∧R2(x, y) E3(x) .
= ∃y ̸= x.pc[y] = 3 ∧R1(x, y) E6(x) .
= ∃y ̸= x.pc[y] = 3 ∧R2(x, y) The abstract state then is a 7-tuple ⟨pc, e1, . . . , e6⟩where pc refers to the internal state of the reference process x. For each i ∈[1..6], the bit ei tells whether there is an 131 environment process y ̸= x such that the environment predicate Ei(x) becomes true. 2 We build the abstract model PA exactly as in Section 2.2. Since the congruence and coverage conditions hold for the set of descriptions and labels we have chosen, we have the following corollary of Theorem 2.2.4: Corollary 3 (Soundness of Abstraction). Let P(N) be a parameterized mutual exclusion system and PA be its abstraction. For an indexed property ∀x.Φ(x), where Φ(x) is a control condition, we have PA | = Φ(x) ⇒∀K.P(K) | = ∀x.Φ(x).
4.4 Extensions for Fairness and Liveness The abstract model that we have described, while sound, might be too coarse in practice to be able to verify liveness properties. The reason is two fold: (i) Spurious Infinite Paths. Our abstract model may have infinite paths which cannot occur in any concrete system. Figure 4.2 shows one such instance, where a self-loop in the abstract model leads to a spurious infinite path. The two concrete states s1 and s2, such that s1 transitions to s2, map to the same abstract state ˆ s, leading to a self-loop involving ˆ s. This self-loop can lead to a spurious infinite path. Such spurious paths hinder the verification of liveness properties.
132 Figure 4.2: Process 7 changes its internal state, but the abstract state is not affected. Thus, there is a self-loop around the abstract state. The abstract infinite path consisting of repeated executions of this loop has no corresponding concrete infinite path.
(ii) Fairness Conditions. Liveness properties are usually expected to hold under some fairness conditions. A typical example of a fairness condition is that every process x must leave the critical section a finite amount of time after entering it. This is expressed formally by the fairness condition pc[x] ̸= crit. In this work, we will consider fairness conditions pc[x] ̸= L, where L is a control location. Liveness properties are then expected to hold on fair paths: an infinite path in a concrete system P(K), K > 1 is fair only if, for each process i, the fairness condition pc[i] ̸= L holds infinitely often.
133 Monitor Processes for Liveness To handle these situations, we adapt a method developed by Pnueli et al. in the context of counter abstraction to our environment abstraction. The extension to handle liveness essentially consists of adding monitor processes. We first augment the concrete system P(K) by adding monitor processes M(1), . . . , M(K) where each M(i) has two sets of variables • variables fromi 1, . . . , fromi T where T is the number of different environments, and • variables toi 1, . . ., toi T where T is the number of different environments.
Intuitively, the fromi and toi variables keep track of the processes coming and going out of the environments E1(i), . . . , ET(i) as viewed from process i.
Updating Monitor Variables Suppose the system P(K) is initially in state s1 and some process j changes its state resulting in state s2 for system P(K). Monitor process M(i) then updates its variables as follows.
• Case 1: A process j ̸= i changes its state By Fact 4, we have uniquely defined environment predicates Ep(i) and Eq(i) such that s1 | = Ep(i, j) and s2 | = Eq(i, j). Thus monitor process M(i) sets fromi p = true and toi q = true in state s2. The rest of the fromi and toi variables are set to false in state s2.
134 • Case 2: j = i and the process changes its state using update transition For each environment condition Ep(i) such that there is a process y satisfying Ep(i, y) in s1, fromi p = true in state s2. For each environment condition Ep(i) such that there is a process z satisfying Ep(i, z) in s2, toi p = true in state s2. In all other cases, the fromi and toi variables in s2 are false.
• Case 3: j = i and the process changes its state using a guarded transition In this case, all the fromi and toi variables in s2 are false.
Denote the system obtained by augmenting P(K) with monitor processes M(1), . . . , M(K) by PM(K). The states of PM(K) are given by tuples of the form ⟨L1, . . . , LK, M1, . . . , MK⟩ where Li is denotes local state of process i and Mi denotes the local state of the monitor process M(i). The augmented abstract states, given by tuples of the form ⟨pc, e1, . . . eT, from1, . . . , fromT, to1, . . ., toT⟩, carry monitor information for reference process unchanged.
Definition 4.4.1 (Abstraction Mapping). Let PM(K), K > 1, be an augmented system and p ∈[1..K] be a process. The abstraction mapping αp induced by p maps a global state s of PM(K) to an abstract state ⟨pc, e1, . . . , eT, from1, . . ., fromT, to1, . . . , toT⟩where • pc = the value of pc[p] in state s • for all ej we have ej = 1 ⇔s | = Ej(p).
• ∀j.
fromj = fromp j • ∀j.
toj = top j 135 Intuitively, the from, and to variables keep track of the immediate history of an ab-stract state, that is, the last step by which the abstract state was reached.
Example 4.4.2. Referring to Figure 4.2, suppose process 7 in state s1 satisfies the environ-ment condition Ei(1, 7). Then, in the new augmented abstract state, variable fromi will be set to true to indicate that a process satisfying environment condition Ei(1) made the move. Similarly, suppose that in the new concrete state s2, process 7 satisfies the environ-ment condition Ej(1, 7). Then in the new augmented abstract state, the variable toj is set true to indicate that after the transition process 7 satisfies the new environment condition Ej(1).
Remark 12. Note that the abstract model does not retain the id of the process which was responsible for the transition (process 7 in this case). The abstract model only retains the environment predicates satisfied by the process before and after the transition. We are doing this for two reasons: • During abstraction, all the processes except the reference process lose their identi-ties.
• Remembering the environment predicate satisfied by the active process will give us a sufficiently precise abstraction to verify the properties of interest.
To recapitulate, using the from and to variables we are able to keep track of the last step of the route by which an abstract state was reached.
For an augmented abstract state ˆ s, we denote its projection consisting of only the pc and ei variables by π(ˆ s). The following notation is also useful: let s1, s2 be two concrete 136 states in a system P(K) such that there is a transition from s1 to s2. Denote by s1 s2 the index of the process whose local transition lead to the global transition from s1 to s2, e.g.
process 7 in Figure 4.2. Recall that in an asynchronous system, only one process at a time changes its state, i.e., for each global transition, there exists a single process causing the transition.
The augmented abstract model PA a .
= (SA a , IA a , RA a , LA a ) of the augment parameterized system PM(K) is defined as in Section 2.5.2.
Note that coverage and congruence conditions for the augmented abstract descriptions are trivial to establish given that the original abstract descriptions satisfy both conditions.
It follows from Section 2.5.2 that adding the additional monitor information does not affect the soundness of our abstract model. Thus, we have P A a | = Φ(x) ⇒PM(K) | = ∀x.Φ(x) where Φ(x) is a control condition.
Corollary 4 (Soundness of Augmented Abstraction).
Let P(N) be a parameterized system and PM(N) be an augmentation of P(N) with monitor processes as described above and PA a be its augmented abstraction. For an indexed property ∀x.Φ(x), where Φ(x) is a control condition we have PA a | = Φ(x) ⇒∀K.PM(K) | = ∀x.Φ(x) ⇒∀K.P(K) | = ∀x.Φ(x) 4.4.1 Abstract Fairness Conditions We will now show how to deal with the two problems mentioned in the beginning of this section, i.e., (i) spurious paths and (ii) fairness conditions.
137 Eliminating Spurious Infinite Paths.
Recall from the example in Figure 4.2 that due to the abstraction there may exist infinite spurious paths that do not have any corresponding concrete paths, in particular such paths where fromi is true infinitely often but toi is not. Such a path cannot correspond to any concrete path because: • By definition, the variable fromi is true if a process having satisfied Ei(x) in the previous state, does not satisfy Ei(x) in the current state.
• By definition, the variable toi is true if a process having satisfied Ej(x) in the previ-ous state, does satisfy Ei(x) in the current state.
• Each concrete system has only a finite number of processes.
• Thus, for a finite number of processes to make fromi true infinitely often, it is necessary for toi to be true infinitely often as well.
Therefore, to eliminate the spurious infinite paths arising from loops described above, we add for each i ∈[1..T] a compassion condition ⟨fromi, toi⟩which says if fromi = true holds infinitely often in a path, then toi = true must hold infinitely often as well.
We will denote this set of fairness conditions by F1.
Adding Abstract Fairness Conditions.
Assume that in the concrete model each process has a fairness condition of the form pc ̸= L. This means that a process is not allowed to stay at control location L forever. To abstract 138 the concrete fairness condition we have to find two sets of abstract fairness conditions, one for the reference process IA and the other for the environment processes.
Fairness conditions for the environment processes. The abstract model maintains the properties of the environment processes only in terms of the environment predicates. Thus, the concrete fairness conditions on the environment processes have to be translated to fairness conditions involving the environment predicates.
More precisely, given a fairness condition pc ̸= L, we need to consider those environ-ment conditions that require the environment process to be in control location L, i.e., those environment conditions Ei(x, y) where Ei(x, y) .
= Rj(x, y) ∧pc[y] = L. For each such Ei(x, y) we add the fairness condition ¬(fromi = false ∧ei = 1).
This condition excludes the cases where along an infinite path, the set of processes sat-isfying the environment condition Ei(x, y) is non-empty (i.e., ei = 1) and none of these processes ever changes its state (i.e., fromi = false). We will denote this set of fairness conditions by F2 Fairness conditions for the reference process. The abstract fairness condition correspond-ing to the reference process is given by pc ̸= L. This expresses the requirement that the control location of the reference process is not L infinitely often. We will denote this set of fairness conditions by F3.
139 4.4.2 Soundness in the Presence of Fairness Conditions Now we will show that adding these fairness conditions do not rule out any legitimate paths in the abstract model. Thus, the augmented abstracted model will be sound.
We are usually interested in verifying single index liveness properties of the form ∀x.AG(φ(x) →Fψ(x)).
For example, for mutual exclusion protocols, the standard liveness property, which says if process is trying to get into the critical section it eventually will, can be written as ∀x.AG(pc[x] = try →F(pc[x] = crit)).
The following theorem claims that, for single index liveness properties, the augmented abstract model that we constructed is sound.
Theorem 4.4.3 (Soundness of Abstraction). Let P(K) be a parameterized system with fairness constraint pc ̸= L and PA a be its augmented abstraction using the abstract fair-ness and compassion conditions. Given the single-indexed liveness property ∀x.Φ(x), PA a | = Φ(x) under the abstract fairness conditions implies P(K) | = ∀x.Φ(x) under the concrete fairness condition.
The augmented abstraction thus obtained is precise enough to prove liveness properties for the two mutual exclusion protocols we considered. The following section gives a proof of the soundness theorem.
140 4.4.3 Proof of Soundness Given concrete fairness condition pc[x] ̸= L, the augmented abstract model has three sets of fairness conditions: F1. For each i ∈[1..T], the compassion condition ⟨fromi, toi⟩saying that if fromi = true infinitely often along an abstract path then toi = true infinitely often as well.
F2. The fairness conditions ¬(fromi = true ∧ei = 1) for each i such that the environ-ment condition Ei(x, y) requires process y to be in control location L.
F3. The fairness condition pc ̸= L requiring that the reference process satisfies the concrete fairness condition pc[x] ̸= L.
The proof of Theorem 4.4.3 relies on the following lemma.
Lemma 4.4.4. Let P(K) be a concrete system with process c and let σ .
= g0, g1, . . . be a fair path under the fairness constraint pc[x] ̸= L. Then the augmented abstract model P A a has a path ˆ σ .
= ˆ g0, ˆ g1, . . . such that 1. for each i ≥0, π(ˆ gi) = αc(gi), and 2. the abstract fairness conditions F1, F2, F3 hold for ˆ σ.
Proof. The lemma claims that corresponding to every concrete fair path and a given ref-erence process c there is an abstract fair path in the augmented abstract model. In other words, our abstract fairness conditions do not remove any fair paths.
141 Given a concrete fair path σ of system P(K) we construct an abstract fair path ˆ σ as follows: • For the first state ˆ g0 we require π(ˆ g0) = αc(g0), and the fromi and toi variables can have any value.
• For each ˆ gi, i > 0 we require αc(gi) = π(ˆ g1), and the toi, and fromi are set accord-ing to the definition of the augmented abstract transition relation in Section 4.4.
Thus, item 1 of the lemma is satisfied by construction. The fact that ˆ σ is a valid trace in the abstract model also follows by construction.
We will now show that if σ is a fair path then ˆ σ satisfies the abstract fairness conditions as well. We consider the different ways in which ˆ σ might fail to satisfy the abstract fairness conditions and argue that each case leads to a contradiction.
Violation of F1. Assume towards a contradiction that ˆ σ violates the compassion condition ⟨fromk, tok⟩for some k ∈[1..T], i.e., there exists an i ≥0 such that • fromk is true infinitely often in states ˆ gi, ˆ gi+1, . . .
(†) • but tok = false in all the states ˆ gi, ˆ gi+1, . . .
() There are two cases in which fromk holds true in a certain state ˆ gl: (a) A process y satisfying enviroment condition Ek(c, y) in gl−1 moves to a new envi-ronment in gl.
142 (b) In state gl−1, there is a process y satisfying the environment condition Ek(c, y), and the reference process c makes an update transition from gl−1 to gl.
For fromk to be true infinitely often, either case (a) or case (b) has to hold infinitely often. We will show that both cases lead to a contradiction. First assume case (a). As there are only a finite number of processes, fromk being true inifinitely often requires tok to be true infinitely often as well. This contradicts ().
In case (b) the fromk is true in a state ˆ gl because the reference process made an update transition and there was process y satisfying the environment Ek(c, y) in state gl−1. After such an update transition we again have two cases: (b.1) There is a process y satisfying the condition Ek(c, y) in state gl, i.e., ek is 1 in ˆ gl.
In this case tok is set to true by our definition of the augmented abstract transition relation in Section 4.4, or (b.2) there is no process y satisfying the condition Ek(c, y) in state g1 i.e., ek = 0.
The former case (b.1) immediately contradicts the assumption (). In the latter case (b.2), if ek continues to be 0, then, by definition, fromk cannot be true again. This contradicts the assumption (†).
Thus, we have proved that the compassion condition ⟨fromi, toi⟩cannot be violated in the abstract trace ˆ σ.
Violation of F2. Assume towards a contradiction that ˆ σ violates the fairness condition ¬(fromk = false ∧ek = 1) where the environment condition Ek(x, y) requires process y to be in control location L. That is, there exists an i ≥0 such that ˆ gj | = fromk = 143 false ∧ek = 1 for all j ≥i. In other words, in the concrete trace in all the states gj, gj+1, . . . of the concrete trace there is a process y satisfying the environment condition Ek(c, y), and this process y never leaves the environment corresponding to Ek(x, y). Since Ek(x, y) requires process y to be in control location L, process y violates the concrete fairness condition pc[slf] ̸= L, and thus, the assumption of the lemma.
Violation of F3. Assume towards a contradiction that ˆ σ violates the fairness condition pc ̸= L, i.e., there is an i ≥0 such that for all ˆ gj, j > i, ˆ gj | = pc = L holds. This is possible only if process c stays in control location L after concrete state gi, thus violating the concrete fairness condition, and thus, the assumption of the lemma.
We see that in all the three cases we are led to a contradiction. Consequently, the abstract trace ˆ σ does not violate any abstract fairness conditions.
Theorem 4.4.3. Consider a single index liveness property ∀x.AG(φ(x) →Fψ(x)). As-sume that PA a | = Φ(x) under abstract fairness conditions, and assume towards a contradiction that there is a fair path σ .
= g0, g1, . . . in system P(K) such that σ ̸| = φ(c) →Fψ(c) for some process c. Thus, there exists an i ≥0 such that gi | = φ(c) and gj ̸| = ψ(c) for all j ≥i. By Lemma 4.4.4 there is a fair abstract path ˆ σ .
= ˆ g0, ˆ g1, . . . such that for all k, π(ˆ gk) = αc(gk).
By definition, ˆ gi | = φ(x) and for all j ≥i, ˆ gj ̸| = ψ(x). Thus, there is a fair path ˆ σ in the abstract model PA a that violates the liveness property Φ(x), contradicting our assumption that PA a | = Φ(x). This completes our proof.
144 4.5 Computing the Abstract Model We have thus far presented the theoretical description of the abstract model and the prop-erties it satisfies. We have not described how to actually obtain such an abstract model from a given parameterized system. In this section, we will show how to construct the abstract model.
Computing the abstract model is evidently complicated by the fact there is an infinite number of concrete systems. Further, it is well known that in predicate abstraction and related methods, computing the exact abstract model is computationally very expensive.
Instead of finding the most precise abstract model, we find an over-approximation of the abstract model. We consider each concrete transition statement of the program separately and over-approximate the set of abstract transitions it can lead to. The union of these sets will be the abstract transition relation. A concrete transition can either be a guarded transition or an update transition. Each transition can be executed by the reference process or one of the environment processes. Thus, there are four cases to consider: Active process is . . .
guarded transition update transition . . . reference process Case 1 Case 2 . . . environment process Case 3 Case 4 We will show how we abstract in each of these cases and argue why the computed abstract transition is an over-approximation. Before we begin we recall the following facts: Fact 7.
For any two environment predicates Ei(x, y) and Ej(x, y), i ̸= j Ei(x, y) ⇒ ¬Ej(x, y).
145 Fact 8. Given any formula G(x, y) involving inter-predicates EP 1(x, y), . . ., EPr(x, y) either Ei(x, y) ⇒¬G(x, y) or Ei(x, y) ⇒G(x, y).
We now introduce some useful notation. The environment condition Ei(x, y) .
= y ̸= x ∧Rj(x, y) ∧pc[y] = L will be denoted by E(j,L)(x, y). The variables and formulas cor-responding to this environment condition are referred to using the same subscript {j, L}, e.g., the corresponding environment predicate is referred to as E(j,L)(x) and the corre-sponding abstract variable is e(j,L). The set of all environment conditions E(j,L)(x, y) is referred to as EL.
4.5.1 Case 1: Guarded Transition for Reference Process Consider first the case of guarded transitions being executed by the reference process.
Consider the guarded transition statement tG: L1 : if ∀otr ̸= slf.G(slf, otr) then goto L2 else goto L3 Suppose the reference process is executing this guarded transition statement. If at least one of the environment processes contradicts the guard G then the reference process transitions to control location L3, i.e., the else branch. Otherwise, the reference process goes to L2. We will now formalize the conditions under which the if and else branches are taken.
Definition 4.5.1 (Blocking Set for Reference Process). Let G .
= ∀otr ̸= slf.G(slf, otr) be 146 a guard. We say that an environment condition Ei(x, y) blocks the guard G if Ei(x, y) ⇒ ¬G(x, y). The set Bx(G ) of all indices i such that Ei(x, y) blocks G is called the blocking set of the reference process for guard G .
Note that, by Fact 8, either Ei(x, y) ⇒ ¬G(x, y) or Ei(x, y) ⇒ G(x, y) holds for every environment condition Ei(x, y). The intuitive idea behind the definition is that Bx(G) contains the indices of all environment conditions which enforce the else branch.
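To make the construction concrete, the following is a minimal sketch (in Python, not taken from the thesis implementation) of how a blocking set Bx(G) could be computed when both the environment conditions and the guard body are built from the simple inter-predicates <, >, = over one data variable. The exhaustive check over a small sample domain stands in for the real decision procedure; for these simple comparison predicates the check is exact, but all names, domains, and the example guard below are hypothetical.

```python
# Illustrative sketch: computing a blocking set B_x(G) when environment conditions and
# the guard body are built from the inter-predicates {<, >, =} over one data variable u.

from itertools import product

DATA = range(4)          # small sample domain for data values; enough to separate <, >, =
LOCS = range(3)          # sample control locations

def implies_not_guard(env_cond, guard_body):
    """Check E_i(x, y) => not G(x, y) by exhaustive evaluation over the sample domain."""
    for ux, uy, pcy in product(DATA, DATA, LOCS):
        if env_cond(ux, uy, pcy) and guard_body(ux, uy, pcy):
            return False
    return True

def blocking_set(env_conds, guard_body):
    """B_x(G): indices of environment conditions that force the else branch."""
    return {i for i, e in enumerate(env_conds) if implies_not_guard(e, guard_body)}

# Example: guard body G(x, y) := u[y] <= u[x]; each environment condition combines an
# inter-predicate on (u[x], u[y]) with a control location of y.
guard = lambda ux, uy, pcy: uy <= ux
envs = [
    lambda ux, uy, pcy: uy < ux and pcy == 0,    # E_0
    lambda ux, uy, pcy: uy > ux and pcy == 0,    # E_1  (blocks the guard)
    lambda ux, uy, pcy: uy == ux and pcy == 1,   # E_2
]
print(blocking_set(envs, guard))   # -> {1}
```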
We will now explain how to represent the guarded transition tG in the abstract model: we introduce an abstract transition from ŝ1 = ⟨pc, e1, .., eT, from1, .., fromT, to1, .., toT⟩ to ŝ2 = ⟨pc′, e1, .., eT, from′1, .., from′T, to′1, .., to′T⟩ if
GR1. pc = L1, i.e., the reference process is in location L1,
GR2. one of the following two conditions holds:
  • Then Branch: ∀i ∈ Bx(G). (ei = 0) and pc′ = L2, i.e., the guard is true and the reference process moves to control state L2.
  • Else Branch: ¬∀i ∈ Bx(G). (ei = 0) and pc′ = L3, i.e., the guard is false and the reference process moves to control state L3.
GR3. all the variables from′1, .., from′T and to′1, .., to′T are false, expressing that none of the environment processes changes its state.
Together, these three conditions can be viewed as a transition invariant Ix(tG) between the current and the next abstract states. The following fact shows that the set of abstract transitions represented by Ix(tG) is indeed an over-approximation of the set of abstract transitions that tG gives rise to.
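The conditions GR1–GR3 determine the abstract successor directly from the blocking set and the environment bits. A minimal sketch of this computation, again hypothetical and simplified to states of the form (pc, e) with the from/to bits implicitly all false as required by GR3:

```python
# Minimal sketch of the Case 1 abstract transition (conditions GR1-GR3), assuming the
# blocking set Bx(G) has already been computed. Abstract states are modelled as (pc, e)
# with e a tuple of 0/1 environment bits.

def case1_successors(abs_state, blocking, L1, L2, L3):
    pc, e = abs_state
    if pc != L1:                                   # GR1: reference process must be at L1
        return []
    guard_true = all(e[i] == 0 for i in blocking)  # GR2: no process in a blocking environment
    new_pc = L2 if guard_true else L3
    return [(new_pc, e)]                           # environment bits unchanged in Case 1

# Example with three environment conditions and Bx(G) = {1}:
print(case1_successors((0, (1, 0, 1)), {1}, L1=0, L2=1, L3=2))   # guard true  -> [(1, (1, 0, 1))]
print(case1_successors((0, (1, 1, 0)), {1}, L1=0, L2=1, L3=2))   # guard false -> [(2, (1, 1, 0))]
```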
Lemma 4.5.2. Let s1 be a state in a concrete system P(K), and suppose that a process c executes a guarded transition tG which leads to state s2. Then the abstract states ŝ1 = αc(s1) and ŝ2 = αc(s2) satisfy the invariant Ix(tG).
Proof. Assume there is a guarded transition tG from state s1 to s2 with the reference process c as the active process. There are two cases to consider:
  • The concrete guard was true in state s1 and process c's new control location in state s2 is L2. By Fact 8, each environment condition Ei(x, y) either implies the guard condition G(x, y) or its negation. Any process y satisfying an environment condition Ej(x, y) with j ∈ Bx(G) would block the guard G(c, y). In other words, for the then branch to be taken, in state s1 every concrete process y ≠ c must have satisfied only environment conditions whose indices are not in the blocking set Bx(G). Thus, the condition ∀i ∈ Bx(G). ei = 0 is true in ŝ1 and the abstract model transitions to state ŝ2.
  • The concrete guard was false in state s1 and process c's new control location in state s2 is L3. Again by Fact 8, each environment condition Ei(x, y) either implies the guard condition G(x, y) or its negation. For the else branch to be taken, there must be at least one process y in state s1 satisfying an environment condition Ej(x, y) with j ∈ Bx(G), so that the guard G(c, y) evaluates to false. Thus, the abstract condition ∀i ∈ Bx(G). ei = 0 is false for ŝ1, and the abstract model again transitions to state ŝ2.
4.5.2 Case 2: Guarded Transition for Environment Processes

Suppose that the guarded transition tG

L1 : if ∀otr ≠ slf. G(slf, otr) then goto L2 else goto L3

is executed by a concrete process y satisfying the environment condition E(i,L1)(x, y).
The active process thus switches from environment condition E(i,L1)(x, y) to environment condition E(i,L2)(x, y) or E(i,L3)(x, y). Note that in a guarded transition, only the pc of the active process changes.
We will now define a blocking set for this environment condition E(i,L1)(x, y) as fol-lows. The difference from Definition 4.5.1 is that the guard for process y can be blocked either by the reference process or by another environment process. Therefore we need to distinguish two cases in the definition.
Definition 4.5.3 (Blocking Set for Environment E(i,L1)(x, y)). Let G(slf) = ∀otr ≠ slf. G(slf, otr) be a guard. We say that
1. An environment condition Ej(x, z) blocks the guard for process y satisfying E(i,L1)(x, y) if E(i,L1)(x, y) ∧ Ej(x, z) ⇒ ¬G(y, z). Let B¹(i,L1)(G) be the set of all such indices j.
2. The control location L of the reference process x blocks the guard for process y satisfying E(i,L1)(x, y) if E(i,L1)(x, y) ∧ pc[x] = L ⇒ ¬G(y, x). Let B²(i,L1)(G) be the set of all such control locations L.
Note that we consider the guards G(y, z) and G(y, x) because y is the active process, i.e., y executes the transition. We define the abstract guard G_i for the guard G(slf) = ∀otr ≠ slf. G(slf, otr) and the environment condition Ei(x, y) as follows:

∀j ∈ B¹(i,L1)(G). (ej = 0)  ∧  pc ∉ B²(i,L1)(G).

Since the transition starts in control location L1 and the active process is an environment process, we will describe the abstract transition invariant Iy^(i,L1)(tG) for each E(i,L1)(x, y) ∈ EL1 by a list of conditions as in Case 1. The abstract transition invariant for Case 2 will then be

Iy(tG) = ⋁_{E(i,L1)(x,y) ∈ EL1} Iy^(i,L1)(tG).
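The abstract guard just defined is a purely syntactic check on the abstract state once the two blocking sets of Definition 4.5.3 are available. A small hypothetical sketch of that check:

```python
# Sketch of the abstract guard for Case 2: b1 holds the indices of blocking environment
# conditions, b2 the blocking control locations of the reference process.

def abstract_guard_holds(pc, e, b1, b2):
    return all(e[j] == 0 for j in b1) and pc not in b2

print(abstract_guard_holds(pc=2, e=(0, 0, 1), b1={0, 1}, b2={3}))   # True
print(abstract_guard_holds(pc=3, e=(0, 0, 1), b1={0, 1}, b2={3}))   # False: the reference process blocks the guard
```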
Consider one such E(i,L1)(x, y) ∈ EL1. The abstract transition relation Iy^(i,L1)(tG) has a transition from ŝ = ⟨pc, e1, . . . , eT, from1, . . . , fromT, to1, . . . , toT⟩ to ŝ′ = ⟨pc′, e′1, . . . , e′T, from′1, . . . , from′T, to′1, . . . , to′T⟩ if the following conditions hold:
GE1. e(i,L1) = 1, that is, there is a process satisfying environment condition E(i,L1)(x, y).
GE2. from′(i,L1) = true, that is, the active process switches from environment condition E(i,L1)(x, y) to some other environment condition.
GE3. e′(i,L1) ∈ {0, 1}, that is, due to the active process switching, there may or may not remain a process satisfying the environment condition E(i,L1)(x, y).
Depending on the value of the abstract guard, one of the following two cases holds:
  1. The guard G_i is true, i.e., ŝ ⊨ G_i, and either
     • the then branch is taken, i.e., e′(i,L2) = 1 and to′(i,L2) = true, or
     • the else branch is taken, i.e., e′(i,L3) = 1 and to′(i,L3) = true.
     We will explain this case below.
  2. The guard is false, i.e., ŝ ⊭ G_i, and the else branch is taken, i.e., e′(i,L3) = 1 and to′(i,L3) = true.
GE4. The rest of the ej variables are the same in ŝ and ŝ′.
GE5. The from′j and to′j variables are set to false by default unless they are set to true by one of the above conditions.
The reason for the two else-branches is the fact that knowledge about a single process suffices to block the guard, while knowledge about all processes is necessary to make sure the guard is not blocked. The environment predicates Ej(x, y) only contain accurate information about the relationship between the data variables of the reference process x and the data variables of environment process y. If it follows already from this partial information that the guard is violated, then the else branch is enforced. If, however, the abstract guard G_i is true, this may be due to lack of information in the abstract predicates, and we over-approximate the possible abstract transitions.
Note that Case 2 is different from Case 1 because, in Case 1, the reference process makes the guarded transition, while in Case 2, an environment process makes the transition. In the case of the reference process, our abstraction maintains the relationship of its data variables to the other variables. In the case of an environment process, we only know the relationship of its data variables to those of the reference process.
Lemma 4.5.4. Let s1 be a state in a concrete system P(K), and let c be a process used as reference process. Suppose that a process d ̸= c executes a guarded transition tG that leads to state s2. Then the abstract states αc(s1) and αc(s2) satisfy the invariant Iy(tG).
Proof. This follows directly from the construction of the transition invariant I y(tG).
4.5.3 Case 3: Update Transition for Reference Process

Consider the case where the reference process is executing an update transition tU:

L1 : for all otr ≠ slf if T(slf, otr) then uk := Φ(otr) goto L2

Recall that each process has data variables u1, . . . , ud. We denote the next-state value of each variable um by u′m.
When the reference process x changes the value of its data variables, the valuations of the environment predicates E1(x), . . . , ET(x) will change. For a process y satisfying environment condition Ei(x, y) we need to figure out the possible new environment conditions Ej(x, y) that y might satisfy after the reference process x has executed the update transition. The set of possible new environment conditions for process y is called the outset Oi for condition Ei(x, y). (Technically, the outset is the set of the indices of these environment conditions.) We will now explain how to compute the outset.
Denote by T(x, y) the formula

(T(x, y) ∧ u′k[x] = Φ(y)) ∨ (¬T(x, y) ∧ u′k[x] = uk[x]).

We call T(x, y) the update formula. Given the update formula, we find which inter-predicates involving u′k[x] and uk[y] can be true. Formally, the set C1(uk) of these inter-predicates is given by all formulas uk[x] ≺ uk[y], where ≺ ∈ {<, >, =}, such that u′k[x] ≺ uk[y] ∧ T(x, y) is satisfiable.
Thus, C1(uk) contains all possible ways uk[x] and uk[y] can relate to each other after the update. The possible relationships between uk[x] and uk[y] might change again when, in the course of evaluating the update transition, x repeatedly updates its uk value by looking at other processes.
Suppose x looks at another process z and updates its uk[x] value again. We now find the set C2(uk) of possible relationships between uk[x] and uk[y] after the new update involving process z, under the assumption that a relation from C1(uk) holds before the update. Thus, the new set C2(uk) of relationships is given by all formulas uk[x] ≺ uk[y], where ≺ ∈ {<, >, =}, such that u′k[x] ≺ uk[y] ∧ T(x, z) ∧ ψ(x, y) is satisfiable for some ψ(x, y) ∈ C1(uk).
Note that C1(uk) ⊆C2(uk) because the definition of T(x, z) allows the possibility that the value of uk[x] remains unchanged. We similarly compute sets C3(uk), C4(uk) . . . until a fixpoint is reached. Since the number of possible inter-predicates is finite, a fixpoint always exists; for simple inter-predicates involving <, >, and =, the fixpoint computation takes three iterations at the most. We denote this fixpoint by C(uk).
In the environment condition Ei(x, y), let θ be the inter-predicate that describes the relation between uk[x] and uk[y]. Consider the set of environment conditions Ej(x, y) that are obtained from Ei(x, y) by replacing θ by a formula in the fixpoint C(uk) – the indices of these environment conditions constitute the outset Oi of Ei(x, y). Correspondingly, the inset Ik ⊆{1..T} for environment condition Ek(x, y) consists of all j such that k ∈Oj.
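The fixpoint iteration is easy to mechanize for the simple inter-predicates used here. Below is a hypothetical Python sketch, specialised to a Bakery-style update ("if t[slf] < t[otr] then t := t[otr] + 1") and using brute-force enumeration over a small integer domain as a stand-in for the satisfiability queries; the outset Oi is then obtained by substituting each relation in the resulting fixpoint for the inter-predicate θ in Ei(x, y). All names and the domain bound are assumptions made for the example.

```python
# Illustrative sketch of the fixpoint C(u_k), specialised to the update
# "if t[slf] < t[otr] then t := t[otr] + 1" and checked by brute force over a small domain.

import operator

RELS = {'<': operator.lt, '>': operator.gt, '=': operator.eq}
DOM = range(5)

def step(prev_rels):
    """One iteration: which relations u'_k[x] ? u_k[y] are realisable if some relation
    from prev_rels held between u_k[x] and u_k[y] before this update step."""
    new_rels = set()
    for ux, uy, uz in ((a, b, c) for a in DOM for b in DOM for c in DOM):
        if not any(RELS[r](ux, uy) for r in prev_rels):
            continue                                   # pre-state must satisfy some psi
        for ux_new in ({uz + 1} if ux < uz else {ux}) | {ux}:   # uk[x] overwritten via z, or unchanged
            for name, rel in RELS.items():
                if rel(ux_new, uy):
                    new_rels.add(name)
    return new_rels

def fixpoint(initial):
    rels = set(initial)
    while True:
        nxt = rels | step(rels)
        if nxt == rels:
            return rels
        rels = nxt

print(fixpoint({'='}))   # -> {'=', '>'} (in some order): the ticket can only stay equal or grow
print(fixpoint({'<'}))   # -> all of {'<', '=', '>'}
```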
We denote the abstract transition invariant corresponding to the concrete update transition tU by Ix_up(tU). Ix_up(tU) has a transition from the abstract state ŝ = ⟨pc, e1, . . . , eT, from1, . . . , fromT, to1, . . . , toT⟩ to the abstract state ŝ′ = ⟨pc′, e′1, . . . , e′T, from′1, . . . , from′T, to′1, . . . , to′T⟩ if the following conditions hold:
UR1. pc = L1, i.e., the reference process is in control location L1 before the transition.
UR2. pc′ = L2, i.e., the reference process moves to control location L2.
UR3. ∀k ∈ [1..T]. (ek = 1 ⇒ ∃j ∈ Ok. e′j = 1), i.e., if there was a process in environment Ek(x) before the transition then there must be a process in one of the outset environments Ok of Ek(x, y) after the transition.
UR4. ∀j ∈Ik.(ej = 0 ⇒e′ k = 0), i.e., if there is no process satisfying the inset environ-ments Ik of environment Ek(x, y) before the transition then after the transition there can be no process in environment Ek(x, y).
UR5. ∀k ∈[1..T].(e′ k = 1 ⇔to′ k = true). The variable to′ k indicates if after the transition there is a process satisfying Ek(x, y).
UR6. ∀k ∈[1..T].(ek = 1 ⇔from′ k = true). The variable from′ k indicates if before the transition there is a process satisfying Ek(x, y).
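Conditions UR3–UR6 can be read as a filter on candidate abstract successor states. The following hypothetical sketch checks them for given outsets; it follows the prose reading of UR4 (if all inset environments are empty before the transition, the target environment must be empty afterwards), and all example data is invented for illustration.

```python
# Sketch of the Case 3 conditions UR3-UR6 as a check on a candidate abstract transition,
# assuming the outsets O_k (and the derived insets) have been computed as described above.

def inset(k, outsets):
    return {j for j, O in enumerate(outsets) if k in O}

def ur_conditions_hold(e, e_new, frm_new, to_new, outsets):
    T = len(e)
    ur3 = all(e[k] == 0 or any(e_new[j] == 1 for j in outsets[k]) for k in range(T))
    ur4 = all(e_new[k] == 0 or any(e[j] == 1 for j in inset(k, outsets)) for k in range(T))
    ur5 = all((e_new[k] == 1) == to_new[k] for k in range(T))
    ur6 = all((e[k] == 1) == frm_new[k] for k in range(T))
    return ur3 and ur4 and ur5 and ur6

# Two environments; after the reference process updates, a process in E_0 may end up in
# E_0 or E_1, while a process in E_1 stays in E_1: outsets = [{0, 1}, {1}].
outsets = [{0, 1}, {1}]
print(ur_conditions_hold((1, 0), (0, 1), (True, False), (False, True), outsets))   # True
print(ur_conditions_hold((1, 0), (0, 0), (True, False), (False, False), outsets))  # False (violates UR3)
```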
Lemma 4.5.5. Let s1 be a state in a concrete system P(K), and suppose that a process c executes an update transition tU which leads to state s2. Then the abstract states αc(s1) and αc(s2) satisfy the invariant Ix up(tU).
Proof. This follows directly from the construction of the invariant I x up(tU).
4.5.4 Case 4: Update Transition for Environment Processes

This case is quite similar to Case 3. Recall that E(i,L1)(x, y) denotes the environment condition y ≠ x ∧ Ri(x, y) ∧ pc[y] = L1. Consider the case where a generic process y satisfying environment E(i,L1)(x, y) is executing an update transition tU:

L1 : for all otr ≠ slf if T(slf, otr) then uk := Φ(otr) goto L2

After the update transition, process y will have a new control location, and the relationship of its data variables to those of the reference process x will have changed. The outset O(i,L1) for environment E(i,L1)(x, y) will consist of all those environments E(j,L2)(x, y) that process y may satisfy after the update transition. To compute the outset O(i,L1) we proceed as follows. As in the previous case, we find a fixpoint C(uk) that contains the possible relationships between uk[x] and uk[y]. The initial set of relationships C1(uk) is the set of all uk[x] ≺ uk[y], ≺ ∈ {<, >, =}, such that T(y, x) ∧ uk[x] ≺ u′k[y] is satisfiable, where T(y, x) is the update formula as defined in Section 4.5.3. Note that we consider T(y, x) (and not T(x, y) as in the previous section) because y is the active process. As y updates its uk variable repeatedly, the relationship between uk[x] and uk[y] will also change.
To compute all the possible relationships we use an approach similar to the fixpoint com-putation in Case 3. Thus, we find the set C2(uk) of all uk[x] ≺uk[y], ≺∈{<, >, =} such that T(y, z) ∧uk[x] ≺u′ k[y] ∧ψ(x, y) is satisfiable where ψ(x, y) ∈C1(uk). We similarly compute C3(uk), C4(uk), . . . until we reach a fixpoint C(uk).
In the environment condition E(i,L1)(x, y), let θ be the inter-predicate that describes the relation between uk[x] and uk[y]. Consider the set of environment conditions E(j,L2)(x, y) that are obtained from E(i,L1)(x, y) by replacing θ by a formula in the fixpoint C(uk) and replacing the condition pc[y] = L1 by pc[y] = L2 – the indices of these environment conditions, written as pairs (j, L2), constitute the outset O(i,L1) of E(i,L1)(x, y).
Since the transition starts at control location L1 and a generic process executes it, we will describe the abstract transition Iy^{(i,L1),(j,L2)}(tU) for each environment condition E(i,L1)(x, y) and each (j, L2) ∈ O(i,L1). The abstract transition Iy_up(tU) for Case 4 will be

Iy_up(tU) = ⋁_{E(i,L1)(x,y)} ⋁_{(j,L2) ∈ O(i,L1)} Iy^{(i,L1),(j,L2)}(tU).
As above, we will define Iy^{(i,L1),(j,L2)}(tU) by a list of conditions. Iy^{(i,L1),(j,L2)}(tU) has a transition from ŝ = ⟨pc, e1, . . . , eT, from1, . . . , fromT, to1, . . . , toT⟩ to ŝ′ = ⟨pc′, e′1, . . . , e′T, from′1, . . . , from′T, to′1, . . . , to′T⟩ if the following conditions hold:
UE1. pc = pc′, i.e., the reference process does not move.
UE2. e(i,L1) = 1, i.e., there is a process in environment E(i,L1)(x,y) before the transition.
UE3. e′ (j,L2) = 1, i.e., there is a process in environment E(j,L2)(x,y) after the transition.
UE4. e′l = el for l ∉ {(i, L1), (j, L2)}, i.e., all the e variables except e′(i,L1) and e′(j,L2) remain the same.
UE5. from′ (i,L1) = true and the rest of the from′ l variables are false, i.e., only a process satisfying environment condition E(i,L1)(x, y) moves, and no other process moves.
UE6. to′(j,L2) = true and the rest of the to′l variables are false, i.e., only the environment condition E(j,L2)(x, y) gains a new process and no other environment condition gains a new process.
Lemma 4.5.6. Let s1 be a state in a concrete system P(K), and let c be the process used as reference process. Suppose that a process d ̸= c executes an update transition tU that leads to state s2. Then the abstract states αc(s1) and αc(s2) satisfy the invariant Iy up(tU).
Proof. This follows directly from the construction of the invariant I y up(tU).
4.6 Experimental Results

In most mutual exclusion protocols, the predicates appearing in the guards are simple linear expressions involving the <, >, and = operators. Thus, the decision problems that arise during abstraction are simple and are handled by our abstraction program internally.
We verified the safety and liveness properties of Szymanski’s mutual exclusion protocol and Lamport’s bakery algorithm. These two protocols have an intricate combinatorial structure and have been used widely as benchmarks for parameterized verification. For safety properties, we verified that no two processes can be present in the critical section at the same time. For liveness, we verified the property that if a process wishes to enter the critical section then it eventually will.
Note that these protocols have been analyzed by other methods, but in most cases either the protocols have been simplified (in addition to the atomicity assumption) or the method cannot handle both protocols. Pnueli et al. have verified Szymanski's and the Bakery protocol using counter abstraction, but they manually introduce new auxiliary variables. Lahiri and Bryant verified the Bakery protocol but not Szymanski's protocol. Pnueli et al. have verified a modified version of the Bakery protocol in which the unbounded ticket variable is replaced by a bounded variable. The method described in can handle Szymanski's protocol but not the Bakery protocol because it has unbounded integer variables. A possible exception is regular model checking, but this method is very different from ours and encoding protocols as regular languages is a complex and error-prone process.

             Inter-preds   Intra-preds   Reachable states   Safety    Liveness
  Szymanski  1             8             O(2^14)            0.1s      1.82s
  Bakery     3             5             O(2^146)           68.55s    755.0s

Figure 4.3: Running Times
We used the Cadence SMV model checker to verify the finite abstract model. The model checking times are shown in Figure 4.3. The abstraction time is negligible, less than 0.1s. Figure 4.3 also shows the number of predicates and the size of the reachable state space as reported by SMV. All experiments were run on a 2.4 GHz Pentium machine with 512 MB main memory.
4.7 Protocols and Specifications

The details of the two protocols that we verified are given below.
F = {pc}, pc ∈ {0, 1, 2, 3, 4, 5, 6, 7}
pc = 0 : goto pc = 1
pc = 1 : if ∀otr ≠ slf. pc[otr] ∈ {0, 1, 2, 4} then goto pc = 2 else goto pc = 1
pc = 2 : goto pc = 3
pc = 3 : if ∀otr ≠ slf. pc[otr] ∉ {1, 2} then goto pc = 5 else goto pc = 4
pc = 4 : if ∀otr ≠ slf. pc[otr] ∉ {5, 6, 7} then goto pc = 4 else goto pc = 5
pc = 5 : if ∀otr ≠ slf. pc[otr] ∉ {2, 3, 4} then goto pc = 6 else goto pc = 5
pc = 6 : if ∀otr > slf. pc[otr] ∈ {0, 1, 2} then goto pc = 7 else goto pc = 6
pc = 7 : goto pc = 0

Figure 4.4: Szymanski's Mutual Exclusion Protocol

Szymanski's mutual exclusion protocol written in our specification language is shown in Figure 4.4. This protocol has been taken from . The protocol presented there has wait statements that, under the atomicity assumption, can be modeled by guarded statements.
The transition pc = 0 : goto pc = 1 is syntactic sugar for the more complicated but equivalent guarded statement pc = 0 : if ∀otr ≠ slf. true then goto pc = 1 else goto pc = 1.
The safety property that we verified for Szymanski is ∀x ≠ y. AG ¬(pc[x] = 7 ∧ pc[y] = 7) and the liveness property that we verified is ∀x. AG (pc[x] = 1 → F pc[x] = 7).
Note that pc = 7 corresponds to the critical state and pc = 1 corresponds to the trying state.
The only inter-predicate is x < y, where x, y are index variables. As mentioned previously, the inter-predicates and the control assignments of the form pc[x] = L constitute all the predicates that occur in the protocol text.
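Since the protocol text in Figure 4.4 is complete, the safety property can also be cross-checked for one small fixed instance by explicit enumeration. The following Python sketch is only such a sanity check for N = 3 under the atomicity assumption; it is hypothetical code and not part of the verification toolchain, and it is of course no substitute for the parameterized argument.

```python
# Explicit-state exploration of Szymanski's protocol (Figure 4.4) for N = 3, atomic guards.

N = 3

def target(pc, i, state):
    others = [state[j] for j in range(N) if j != i]
    if pc == 0: return 1
    if pc == 1: return 2 if all(p in {0, 1, 2, 4} for p in others) else 1
    if pc == 2: return 3
    if pc == 3: return 5 if all(p not in {1, 2} for p in others) else 4
    if pc == 4: return 4 if all(p not in {5, 6, 7} for p in others) else 5
    if pc == 5: return 6 if all(p not in {2, 3, 4} for p in others) else 5
    if pc == 6: return 7 if all(state[j] in {0, 1, 2} for j in range(N) if j > i) else 6
    if pc == 7: return 0

def successors(state):
    for i in range(N):
        yield state[:i] + (target(state[i], i, state),) + state[i + 1:]

init = (0,) * N
seen, frontier = {init}, [init]
while frontier:
    s = frontier.pop()
    assert sum(1 for p in s if p == 7) <= 1, f"mutual exclusion violated in {s}"
    for t in successors(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)
print(f"explored {len(seen)} states, mutual exclusion holds for N = {N}")
```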
Lamport’s bakery algorithm is shown in Figure 4.5. The update transition pc = 2 ∧ch = 0 : update t := 0 then goto pc = 0 ∧ch = 0 is syntactic sugar for pc = 2 ∧ch = 0 : for all otr ̸= slf. (if true then t := 0) goto pc = 0 ∧ch = 0.
Note that here we have two finite variables pc and ch which together determine the control location. In Section 4.2 we have argued that without loss of generality we can have only one finite variable pc. In fact, we can easily write the Bakery protocol using just one finite variable pc with domain {0, 1, 2} × {0, 1}. Our implementation allows a protocol to have multiple finite variables. Thus, we did not have to rewrite the Bakery protocol before verifying it.
The variable ch indicates whether a process is updating its ticket variable t or not.
A process updates its t value by choosing the maximum among all other t values and incrementing it by 1. In Lamport's original paper , a process i does the following check before entering the critical section:

for all j ∈ [1..N]
  L2 : if ch[j] ≠ 0 goto L2 else goto L3
  L3 : if t[j] > 0 ∧ ((t[otr], otr) ≺ (t, slf)) then goto L3 else goto crit
crit

Here (t[otr], otr) ≺ (t, slf) stands for t[otr] < t ∨ (t[otr] = t ∧ otr < slf). Following the atomicity assumption discussed in Section 4.2, we model the for loop in the original Bakery algorithm as a guarded transition:

pc = 1 ∧ ch = 0 : if ∀otr ≠ slf. ch[otr] = 0 ∧ ¬(t[otr] > 0 ∧ ((t[otr], otr) ≺ (t, slf)))
                  then goto pc = 2 ∧ ch = 0 else goto pc = 1 ∧ ch = 0

F = {pc, ch}, pc ∈ {0, 1, 2}, ch ∈ {0, 1}
pc = 0 ∧ ch = 0 : goto pc = 0 ∧ ch = 1
pc = 0 ∧ ch = 1 : for all (otr ≠ slf). if (t < t[otr]) then t := t[otr] + 1 goto pc = 1 ∧ ch = 0
pc = 1 ∧ ch = 0 : if ∀otr ≠ slf. ch[otr] = 0 ∧ ¬(t[otr] > 0 ∧ ((t[otr], otr) ≺ (t, slf))) then goto pc = 2 ∧ ch = 0 else goto pc = 1 ∧ ch = 0
pc = 2 ∧ ch = 0 : update t := 0 goto pc = 0 ∧ ch = 0

Figure 4.5: Lamport's Bakery Algorithm

The safety property that we verified is ∀x ≠ y. AG ¬((pc[x] = 2 ∧ ch[x] = 0) ∧ (pc[y] = 2 ∧ ch[y] = 0)) and the liveness property that we verified is ∀x. AG ((pc[x] = 0 ∧ ch[x] = 1) → F (pc[x] = 2 ∧ ch[x] = 0)).
Note that pc = 2 ∧ch = 0 corresponds to the critical state, and pc = 0 ∧ch = 0 corresponds to the trying state. The inter-predicates that we used are x < y, t(x) < t(y), t(x) = t(y), that is, all predicates appearing in the protocol code that compare variables of two different processes.
Chapter 5
Removing the Atomicity Assumption for Mutex Protocols

5.1 Introduction

In Chapter 4, we showed how environment abstraction can be applied to verify mutual exclusion protocols, like the Bakery protocol and Szymanski's protocol, completely automatically. But the verification was carried out under the atomicity assumption. The atomicity assumption, in essence, says that any process in a distributed system consisting of a collection of processes can know the state of all the other processes instantaneously.
As we will see in Section 5.3, this assumption is quite restrictive. In this chapter, we will show how this assumption can be removed with the help of non-interfering monitor processes and thus verify mutual exclusion protocols in their full generality.
All the previous model checking based methods for parameterized verification have as-sumed atomicity to some extent. Counter abstraction makes use of this assumption as does the work on Invisible Invariants [3; 41; 42; 64]. Removing the atomicity assumption in the latter method is theoretically possible but the reported experiments have made use of the atomicity assumption. The Indexed Predicates method [52; 53] too makes partial use of atomicity – the update transition appearing in the bakery protocol is assumed to happen atomically. As with the Invisible Invariants method, removing the atomicity assumption is theoretically possible in the Indexed Predicates method, but the cost of verification is prob-ably high. The Inductive Method, presented in , is an exception to this trend. It has been applied to verify both safety and liveness of the Bakery algorithm without assuming atomicity. This approach however is not automatic as the user is required to provide lem-mas and theorems to prove the properties under consideration. In contrast, our approach is a fully automatic procedure.
The outline for the rest of the chapter is as follows. In the next section, we will present the formal system model. In section 5.3, we will show, with the help of an example, why the atomicity assumption significantly reduces the complexity of a protocol. We will then discuss how monitors can be used to remove the assumption and show how to perform the abstraction in the presence of these monitor processes. In the last section, we will present experimental results to illustrate our method.
5.2 Modeling Mutual Exclusion Protocols without Atomicity Assumption

As before, we consider a parameterized system P(K) with K identical processes running asynchronously and communicating via shared variables. The state variables are exactly the same as in the model considered in Section 4.2. While we used only two transition constructs in the previous chapter, we will need three different transition constructs to describe mutual exclusion protocols in their full generality. We use guarded transitions and wait transitions for describing transitions involving only finite control, and the more complicated update transitions for transitions that modify data variables. Though guarded and update transitions are syntactically similar to their counterparts in Section 4.2, their semantics are quite different. The wait transition, as the name indicates, is used to model processes waiting for some global condition to hold before moving. The sections below describe the transitions in detail.
Guarded Transitions

A guarded transition has the form

pc = L1 : if ∀otr ≠ slf. G(slf, otr) then goto pc = L2 else goto pc = L3

or, shorter,

L1 : if ∀otr ≠ slf. G(slf, otr) then goto L2 else goto L3

where L1, L2, and L3 are control locations. In the guard ∀otr ≠ slf. G(slf, otr), the variable otr ranges over the process ids of all other processes. The condition G(slf, otr) is any formula involving the data variables of processes slf and otr and the pc variable of otr. The semantics of a guarded transition is as follows. A process slf executing the transition first evaluates the guard ∀otr ≠ slf. G(slf, otr) according to the pseudocode shown in Figure 5.1.

Obligations := {1, .., K} \ {slf}
Loop Forever {
  1. Pick otr ∈ Obligations
  2. If G(slf, otr) then Obligations := Obligations \ {otr} else Exit Loop with false
  3. If Obligations is empty Exit Loop with true
}

Figure 5.1: Evaluation of a Guard
In executing the loop, each line in the code is executed atomically. This is not a restricting assumption because each line is an internal action of a process.
The then branch is taken if the loop is exited with value true and pc is set to L2.
Otherwise, the else branch is taken and pc is set to L3.
Wait Transitions

A wait transition has the form

pc = L1 : wait till ∀otr ≠ slf. G(slf, otr) then goto pc = L2

or, shorter,

L1 : wait till ∀otr ≠ slf. G(slf, otr) then goto L2

where L1, L2 are control locations. A process slf executing the transition first evaluates the guard ∀otr ≠ slf. G(slf, otr) according to the loop shown in Figure 5.2. As with guarded transitions, each line of the pseudocode is executed atomically.

Obligations := {1, .., K} \ {slf}
Loop Forever {
  1. Pick otr ∈ Obligations
  2. If G(slf, otr) then Obligations := Obligations \ {otr}
  3. If Obligations is empty Exit Loop
}

Figure 5.2: Evaluation of a Wait condition
Note that unlike the loop for a guarded transition, the loop for a wait transition cannot be exited until the set Obligations is empty. Once the loop is exited, the process transitions to the new control location L2. Wait transitions are found often in protocols. This construct was not present in Chapter 4 because, under the atomicity assumption, the wait transition L1 : wait till ∀otr ≠ slf. G(slf, otr) then goto L2 is equivalent to the guarded transition L1 : if ∀otr ≠ slf. G(slf, otr) then goto L2 else goto L1.

Update Transitions

Recall that update transitions are needed to describe protocols such as the Bakery algorithm, where a process computes a data value depending on all values that it can read from other processes. Update transitions are syntactically of the form

pc = L1 : for all otr ≠ slf if T(slf, otr) then uk := Φ(otr) goto pc = L2

or, shorter,

L1 : for all otr ≠ slf if T(slf, otr) then uk := Φ(otr) goto L2

where L1 and L2 are control locations, and T(slf, otr) is a condition involving data variables of processes slf and otr. The semantics of the update transition is best understood in an operational manner. A process slf executing the update transition first executes the loop shown in Figure 5.3. Each line in the pseudocode is executed atomically.
Once the loop is exited, the process transitions to control location L2. In control loca-tion L1, the process scans over all the other processes (in an arbitrary nondeterministically chosen order), and, for each process otr, checks if the formula T (slf, otr) is true. In this case, the process changes the value of its data variable uk according to uk := Φ(otr), where Φ(otr) is an expression involving variables of process otr. Thus, the variable uk can be reassigned multiple times within a transition.
Note that in the three loops above, process otr is chosen non-deterministically from the set Obligations. In real implementations, processes are usually evaluated in a fixed deterministic order. Since our semantics allows processes to be checked in any order, the protocols described in our language contain more behaviors than the actual implementations. Thus, correctness (involving ACTL* properties) of a protocol written in our language implies the correctness of the implementation as well.

Obligations := {1, .., K} \ {slf}
Loop Forever {
  1. Pick otr ∈ Obligations
  2. If T(slf, otr) then uk[slf] := Φ(otr); Obligations := Obligations \ {otr}
  3. If Obligations is empty Exit Loop
}

Figure 5.3: Evaluation of an Update
Remark 13. In our system model, we do not consider how the loops described above are actually implemented. Clearly, implementing these loops will require additional state variables. We will treat such variables as invisible variables.
5.3 Atomicity Assumption

In this section, we discuss, with the help of a running example, how removing the atomicity assumption makes a protocol considerably more complex. Although the atomicity assumption simplifies a protocol considerably, powerful machinery is still required to prove protocols correct automatically.
Consider the following simple protocol in which each process has just one variable pc.

init: pc = 1 : goto pc = 2
try:  pc = 2 : if ∀otr ≠ slf. pc[otr] ≠ 3 then goto pc = 3 else goto pc = 1;
crit: pc = 3 : goto pc = 1;

The state of each process is given by the valuation of its pc variable. If we assume that the transitions are all atomic, it is easy to see that this protocol ensures mutual exclusion.
This is because the guard condition G = ∀otr ≠ slf. G(otr, slf), where G(otr, slf) = pc[otr] ≠ 3, evaluates to true only when no process is in state pc = 3. While this simple protocol can ensure mutual exclusion under the atomicity assumption, it cannot do so under real life conditions, as we describe below.
Consider the concrete system P(3) with three processes P(1), P(2), and P(3). Fig-ure 5.4 shows a possible execution sequence. Note that, in giving this sequence, we assume we have knowledge of the “insides” of a process: for example, steps like “G true of 2” are not visible. The only things visible are the pc and the data variables of a process.
The local states for each of the three processes are shown, and the executing process at each step is indicated by an arrow (←). The observation step 'G true of 2' appearing under the column for P(1) denotes the step in which process P(1) evaluates the guard condition G for process P(2) and concludes that it is satisfied.

Process          P(1)              P(2)              P(3)
Initial States   pc = 1            pc = 1            pc = 1
                 pc = 2 ←          idle              idle
                 idle              pc = 2 ←          idle
                 G true of 2 ←     idle              idle
                 idle              G true of 1 ←     idle
                 G true of 3 ←     idle              idle
                 idle              G true of 3 ←     idle
                 pc = 3 ←          idle              idle
                 idle              pc = 3 ←          idle

Figure 5.4: A possible execution trace of the system with three processes.

Observe that in the last state both P(1) and P(2) are in state pc = 3, violating mutual exclusion. Consider a more complicated execution sequence, shown in Figure 5.5.

Process          P(1)              P(2)              P(3)
Local States     pc = 1            pc = 1            pc = 1
                 pc = 2 ←          pc = 1            pc = 1
                 pc = 2            pc = 1            pc = 2 ←
                 pc = 2            pc = 1            G true of 1 ←
                 pc = 2            pc = 1            G true of 2 ←
                 G true of 3 ←     pc = 1            pc = 2
                 pc = 2            pc = 1            pc = 3 ←
                 G true of 2 ←     pc = 1            pc = 3
                 pc = 2            pc = 2 ←          pc = 3
                 pc = 2            G false of 3 ←    pc = 3
                 pc = 3 ←          pc = 2            pc = 3
                 pc = 3            pc = 1 ←          pc = 3

Figure 5.5: A more complicated trace of the system.
In this sequence, the actions are interleaved such that process P(2) observes P(3) while P(3) is in state pc = 3. Thus the guard G is false for P(2). Process P(1) sees P(3) when it is in pc = 2, thus P(3) does not block P(1). It is clear from these two examples that the interleaving of actions is crucial and adds considerable complexity to the protocol.
In fact, under the atomicity assumption neither of the traces shown above are legal traces.
In particular, the execution sequences where the observation steps of different processes are interleaved are excluded by the atomicity assumption. It is precisely because of such execution sequences that designing a distributed mutual exclusion protocol is challenging.
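The kind of interleaving shown in Figure 5.4 is easy to replay mechanically. The following hypothetical Python sketch models the simple try/crit protocol above with guard evaluation broken into one observation step per other process; driving it with the bad schedule puts two processes into pc = 3. Process indices 0, 1, 2 stand for P(1), P(2), P(3), and all names are invented for the example.

```python
# Sketch reproducing the essence of the trace in Figure 5.4: non-atomic guard evaluation
# lets two processes of the simple protocol enter pc = 3 at the same time.

class Proc:
    def __init__(self, pid, n):
        self.pid, self.n, self.pc, self.pending = pid, n, 1, None

    def step(self):
        if self.pc == 1:
            self.pc = 2
        elif self.pc == 2 and self.pending is None:
            self.pending = [j for j in range(self.n) if j != self.pid]   # start evaluating the guard
        elif self.pc == 2:
            j = self.pending.pop(0)                                      # observe one other process
            if PROCS[j].pc == 3:
                self.pc, self.pending = 1, None                          # guard blocked
            elif not self.pending:
                self.pc, self.pending = 3, None                          # all observations passed
        elif self.pc == 3:
            self.pc = 1

PROCS = [Proc(i, 3) for i in range(3)]
# Interleaving from Figure 5.4: P(1) and P(2) finish all observations before either enters pc = 3.
schedule = [0, 1, 0, 1, 0, 1, 0, 1]
for pid in schedule:
    PROCS[pid].step()
print([p.pc for p in PROCS])   # -> [3, 3, 1]: both P(1) and P(2) are in the critical section
```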
5.4 Monitors for Handling Non-atomicity Recall that each process in our system has one pc variable and a collection of data vari-ables. While it is clear that interleaving of observation steps add considerable complexity to the protocol, none of the variables used in our systems really tracks the state of these observations steps. For example, consider again the sample trace shown in Figures 5.4.
Since the observation steps are hidden to the observers on the outside, the execution trace seen from the outside looks as shown in Figure 5.6.
The current state of process P(i) gives us no information about how much of the global condition it has finished evaluating and how much is still left. For example, at the state marked idle under the column marked P(1) in the figure above, we do not know how much of the guard condition G = ∀otr ≠ slf. pc[otr] ≠ 3 has already been evaluated by process P(1). Thus, if we consider only the visible state of processes, comprising the pc and data variables, we have no way of knowing the truth or falsity of global conditions.¹

Process          P(1)         P(2)         P(3)
Initial States   pc = 1       pc = 1       pc = 1
                 pc = 2 ←     idle         idle
                 idle         pc = 2 ←     idle
                 pc = 3 ←     idle         idle
                 idle         pc = 3 ←     idle

Figure 5.6: Execution trace seen from the "outside".
Fortunately, even when the observation steps are invisible, we can gather some in-formation about the truth or falsity of the guards by looking at the state of each process.
For this, we have to consider the previous states, in addition to the current states, of the processes. To this end, we will define a collection of monitor processes that track the evolution of the local states of the processes. These monitor processes are non-interfering and are composed synchronously with the concrete systems P(K). By synchronously composed we mean the following: every time a process in P(K) moves, all the monitor processes run simultaneously and update their variables based on the current state of the processes in P(K). Crucially, the construction of the monitor processes is not specific to any particular protocol. In other words, for any mutual exclusion protocol we can automatically construct the monitor processes defined below.
¹Note that, if we assume atomicity, the truth or falsity of guards can be known just by observing the current states of all processes.
For each process P(i) in P(K) we have a monitor process Mg(i). The monitor process Mg(i) has the following variables • K −1 variables mg(i, j), j ̸= i one for each process P(j), j ̸= i with range {clean, dirty, idle}. These monitor variables are used to handle guarded transitions.
• Another set of K −1 variables mu(i, j), j ̸= i one for each process P(j), j ̸= i with range {clean, dirty, idle}. These variables are used to handle update transitions.
• In addition, there is one variable em[i] with range {clean, dirty, idle}.
Monitor variables have value idle if they are not in use. Usually, monitor variables transition to value dirty from value idle. Typically, a monitor variable being dirty indicates that certain actions are not possible (an exception to this is the value dirty of em variable, which actually permits more behaviors). After this the monitor variable may transition to value clean. This value for a monitor variable usually indicates that the monitor variable has seen enough history information to allow all behaviors. Sometimes a monitor variable can transition from value idle to clean directly. Once a variable has become clean, it will stay clean until it is reset to idle.
In the next two subsections we will describe how the monitor variables are updated by the monitor processes. We will also formalize the exact information that we gain from monitor processes.
177 Monitor Variables for Guarded Transition The variable mg(i, j) keeps track of process j and is updated as shown in the Figure 5.7.
The value of mg(i, j) is computed as follows:
1. If process i is not evaluating any guard, then mg(i, j) = idle.
2. If mg(i, j) = idle, process i is evaluating a guard with condition G(slf, otr), and G(i, j) is false, then mg(i, j) = dirty.
3. If process i is evaluating a guard with condition G(slf, otr) and G(i, j) is true, then mg(i, j) = clean.
4. Otherwise mg(i, j) retains its value.

Figure 5.7: Update procedure for monitor variables pertaining to guarded transitions.
Intuitively, the variable mg(i, j) present in monitor Mg(i) tracks whether process j entered any state that makes the guard condition G(i, j) true while process i is evaluating the guard G (slf, otr) .
= ∀otr ̸= slf.G(slf, otr). In such a case, the monitor variable is clean.
Otherwise it is dirty. Informally, the variable mg(i, j) being dirty means that process j will block the guard G (slf, otr) for process i.
This code is run by the monitor process after each step of the asynchronous system P(K). Note that the monitor process does not interfere with the execution of P(K) in any way.
The variable em(i) with range {clean, dirty, idle} is updated as shown in Figure 5.8.
The value of variable em(i) is fixed as follows:
1. If process i is not evaluating any guard, then em(i) = idle.
2. If process i is evaluating a guard with condition G(slf, otr) and there exists a process j ≠ i such that G(i, j) is false, then em(i) = dirty.
3. If process i is evaluating a guard with condition G(slf, otr), em(i) ≠ dirty, and G(i, j) is true for all processes j ≠ i, then em(i) = clean.
4. Otherwise em(i) retains its value.

Figure 5.8: Procedure for updating the monitor variable em pertaining to guarded transitions.
Intuitively, if any process j ̸= i was in a state that falsified the guard G(i, j) while process i was evaluating it, then em(i) becomes dirty. It stays dirty until it is reset to idle.
Against the general trend, the value of em can go from idle to clean to dirty. In fact, the value dirty for em actually means that more behaviors are possible.
Variable em(i) tracks whether any process j ̸= i was in a state which makes G(i, j) false while P(i) is evaluating G(i, j). If such a process exists, em(i) is set to dirty. If P(i) is not evaluating any guard, then em(i) is set to the default value clean.
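The two update procedures are simple enough to state as code. The following hypothetical Python sketch performs one synchronous monitor update for process i, following the intended reading of Figures 5.7 and 5.8; evaluating(i) and G(i, j) are stand-ins for "process i is currently evaluating a guard with condition G" and for the guard body over the current visible states, and each call is assumed to run atomically as in the text.

```python
# Sketch of one synchronous monitor update for process i (Figures 5.7 and 5.8).

def update_monitor(i, K, mg, em, evaluating, G):
    if not evaluating(i):
        for j in range(K):
            if j != i:
                mg[(i, j)] = 'idle'
        em[i] = 'idle'
        return
    for j in range(K):
        if j == i:
            continue
        if G(i, j):
            mg[(i, j)] = 'clean'                       # j was seen in a state satisfying the guard
        elif mg[(i, j)] == 'idle':
            mg[(i, j)] = 'dirty'                       # j currently blocks the guard
        # otherwise mg(i, j) retains its value
    if any(not G(i, j) for j in range(K) if j != i):
        em[i] = 'dirty'                                # some process falsifies G while i evaluates
    elif em[i] != 'dirty':
        em[i] = 'clean'
```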
The information given by monitor processes can be used to decide – approximately– the truth or falsity of the guards. The following lemma formalizes the relation between the monitor variables and guards.
Lemma 5.4.1. Let process i in a concrete system P(K) be evaluating a guard with con-dition G(slf, otr). Then we have the following: 179 • If process i concludes that the guard is true, then all monitor variables mg(i, j), j ̸= i, must be clean.
• If the process i concludes that the guard is false, then the variable em(i) is dirty.
Proof. This lemma follows trivially from the way we defined the monitor variables mg(i, j), j ̸= i and em(i).
Monitor Variables for Update Transitions

Consider an update transition

L1 : for all otr ≠ slf if T(slf, otr) then uk := Φ(otr) goto L2

This transition updates the variable uk of the executing process and, thus, affects the mutual relationships between the uk variables of the different processes. To predict which relations (more precisely, predicates) can hold between uk[i] and uk[j] after process i executes the above transition, we described an automatic procedure in Section 4.5. The fixpoint-based computation presented in Section 4.5 assumes atomicity, that is, when process i is performing an update all the other processes stay fixed. Under this assumption, we can find a set F(uk[i], uk[j]) of all predicates of interest that can hold between uk[i] and uk[j] after the update.
But, without the atomicity assumption, the fixpoint computation is no longer valid.
More precisely, if process j also performs an update operation on variable uk while process 180 i is doing the same, then we cannot use the fixed point computation to predict which relationships hold between uk[i] and uk[j] after the update operation. In this case, we just say that the set of all possible relations between uk[i], uk[j] is simply F(uk[i], uk[j]), where F(uk[i], uk[j]) is the set of all possible predicates of interest (usually syntactically picked from the protocol code).
Thus, if we know that two processes were not updating the same variable t simultaneously, then we can better predict the set of possible relations after the update using the fixpoint computation.
The K −1 variables mu(i, 1), . . ., mu(i, i −1), mu(i, i + 1), . . . , mu(i, K) with range {clean, dirty, idle} try to track precisely this information: the variable mu(i, j) tells us whether process j was updating the same data variable at the same time as process i. The value of the variable mu(i, j) , j ̸= i is computed as shown in Figure 5.9.
Intuitively, mu(i, j) being clean indicates that, at some point when process i was up-dating its variable t, process j was also updating the same variable t. We can use the information contained in the monitor processes to abstract the concrete behaviors as fol-lows: • If there is a process j such that mu(i, j) is clean then the valuation of a predicate involving uk[j] and uk[i] could be anything as uk[j] might have changed while uk[i] was being updated.
• If process j is such that mu(i, j) is dirty then we know that uk[j] could not have changed while i was executing the update transition. It is possible to figure out the possible relationships, after the update, between uk[i] and uk[j] as described above.
The value of the variable mu(i, j) is computed as follows:
• If process i is not evaluating any update transition, then mu(i, j) = idle.
• If mu(i, j) = idle, process i is evaluating an update transition involving variable t, and process j is not doing any update involving t, then mu(i, j) = dirty.
• If both processes i and j are doing update transitions involving the same unbounded variable t, then mu(i, j) = clean.
• Otherwise mu(i, j) retains its value.

Figure 5.9: Procedure for updating monitor variables pertaining to update transitions.
182 The following lemma formalizes the relationship between the monitor variables and the update transitions.
Lemma 5.4.2. Suppose process i is updating variable t in an update transition with the update expression T(slf, otr). Let F(uk[i], uk[j]) be the fixpoint of predicates as computed in Section 4.5. If mu(i, j) = dirty, then the set of predicates that hold between uk[i] and uk[j] after process i has finished the update transition is a subset of F(uk[i], uk[j]).
Proof. The proof follows from the fact that mu(i, j) is dirty only if process j was not updating its uk[j] variable while process i was updating its uk[i] variable. Thus, by the way we compute the fixpoint F(uk[i], uk[j]), it contains all the possible relationships between uk[i] and uk[j] after the update by process i.
Thus, our lack of information about the invisible/hidden steps (used in evaluating guards and updates) can be overcome by making use of synchronously composed non interfering monitors and we can build a sound abstraction of the actual behaviors.
Remark 14. Note that a process can either execute a guarded transition or an update tran-sition, but not both at the same time. Thus, instead of having two sets of variables, namely mg(i, j), j ̸= i and mu(i, j), j ̸= i, we can just have one set of variables m(i, j), j ̸= i with range {clean, dirty}.
From now on, each monitor process Mg(i) will have variables {m(i, j)|j ̸= i} and the variable em(i).
5.4.1 Abstracting the Monitor Variables

As in Chapter 4, we start with descriptions ∆(x) having the format

pc[x] = i ∧ ±E1(x) ∧ ±E2(x) ∧ · · · ∧ ±ET(x), where i ∈ [1..S],

where the environment predicates Ei(x) are constructed as before. But the abstract model constructed using the descriptions ∆(x) given above will not have enough detail to verify a protocol without the atomicity assumption. Therefore, we augment our abstract states so that, in addition to the state of the reference process and its environment, they also contain the history information recorded by the monitor processes. Our augmented abstract states will be of the form

⟨pc, e1, . . . , eT, t1, . . . , tT, b1, . . . , bT, te⟩

where the variables t1, . . . , tT and te (called trackers) abstract the monitor variables of the reference process, and b1, . . . , bT (called backers) abstract the monitor variables of the environment processes. We now describe how to abstract the different monitor variables.
Abstracting Trackers

Consider first the reference process x. Apart from the reference process, no other process is individually identifiable in the abstract state. Corresponding to each environment condition Ei, we have an abstract variable ti with range {clean, dirty, both} which abstracts the information present in the monitors. The value of ti is computed as follows:
• If for all the processes y satisfying environment predicate Ei(x, y) the variable m(x, y) = clean, then ti = clean.
• If for all the processes y satisfying environment predicate Ei(x, y) the variable m(x, y) = dirty, then ti = dirty.
• Otherwise ti = both.
Given a concrete state s of a system P(K) with reference process x and an environment Ei, the value of tracker ti is uniquely determined. We denote the function from (s, x, i) to ti by F t. This function will be used later on.
In addition, we have another variable te that abstracts the monitor variable em(x). The value of te is exactly the same as value of em(x).
Abstracting Backers

Trackers maintain history information that is relevant for the reference process. We also need to abstract the information present in the monitors for processes other than the reference process x. In particular, for each environment process y, we are interested in the monitor variable m(y, x). As noted earlier, environment processes are grouped according to the environment condition they satisfy. For an environment Ei, we maintain a variable bci that combines the m(y, x) variables of all processes y in the environment Ei. The value of bci is computed as follows:
• If for all processes y satisfying Ei(x, y) we have m(y, x) = clean, then bci = clean.
• If for all processes y satisfying Ei(x, y) we have m(y, x) = dirty, then bci = dirty.
• Otherwise bci = both.
Given a concrete state s of a system P(K) with reference process x and an environment Ei, the value of bci is uniquely determined. We can denote the function from (s, x, i) to bci by F b. This function will be used later on.
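Both F^t and F^b collapse the monitor entries of one environment class into a single three-valued summary. A hypothetical sketch of this collapse (using the two-valued m(·, ·) variables of Remark 14, and defaulting empty environments to clean, which is the most permissive value in the abstraction that follows):

```python
# Sketch of the tracker and backer abstraction functions F^t and F^b.

def summarize(values):
    vals = set(values)
    if not vals or vals == {'clean'}:
        return 'clean'                   # empty environments default to clean (most permissive)
    if vals == {'dirty'}:
        return 'dirty'
    return 'both'

def tracker(i, procs_in_env, m, x):
    """F^t: summary of m(x, y) over all processes y currently satisfying E_i(x, y)."""
    return summarize(m[(x, y)] for y in procs_in_env[i])

def backer(i, procs_in_env, m, x):
    """F^b: summary of m(y, x) over all processes y currently satisfying E_i(x, y)."""
    return summarize(m[(y, x)] for y in procs_in_env[i])

# Example: environment E_0 holds processes {2, 3}; both have clean entries towards the
# reference process 1, but process 3's own entry about 1 is dirty.
m = {(1, 2): 'clean', (1, 3): 'clean', (2, 1): 'clean', (3, 1): 'dirty'}
print(tracker(0, {0: {2, 3}}, m, x=1))   # -> clean
print(backer(0, {0: {2, 3}}, m, x=1))    # -> both
```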
We will now define the abstraction mapping from augmented concrete states to aug-mented abstract states.
Definition 5.4.3 (Abstraction Mapping). Let P(K), K > 1, be a concrete system and p ∈ [1..K] be a process. The abstraction mapping αp induced by p maps a global state s of P(K) to an abstract state ⟨pc, e1, . . . , eT, t1, . . . , tT, b1, . . . , bT, te⟩ where
• pc = the value of pc[p] in state s, and for all ej we have ej = 1 ⇔ s ⊨ Ej(p);
• for all j we have tj = F^t(s, p, j), and for all j we have bj = F^b(s, p, j).
The corresponding augmented abstract model P^A is defined as in Section 2.5.2. The set of labels is the same as the labels used in Section 4.2. From the coverage and congruence properties of the original abstract descriptions we can conclude that the same properties hold for the augmented abstract descriptions as well. Thus, the following corollary follows from Theorem 2.2.4.
Corollary 5 (Soundness of Augmented Abstraction).
Let P(N) be a parameterized mutual exclusion system, PM(N) be an augmentation of P(N) with monitor processes as described above, and PM^A be its augmented abstraction. For an indexed property ∀x.Φ(x), where Φ(x) is a control condition, we have

PM^A ⊨ Φ(x) ⇒ ∀K. PM(K) ⊨ ∀x.Φ(x) ⇒ ∀K. P(K) ⊨ ∀x.Φ(x)

5.5 Computing the Abstract Model

As in the atomicity case, we consider the following four cases for computing an over-approximation of a transition statement:

  Active process is ...        guarded transition    update transition
  ... reference process        Case 1                Case 2
  ... environment process      Case 3                Case 4

Before we begin, we recall the notation introduced earlier in Section 4.5. The environment condition Ei(x, y) = y ≠ x ∧ Rj(x, y) ∧ pc[y] = L is denoted by E(j,L)(x, y). The corresponding environment predicate is referred to as E(j,L)(x) and the corresponding abstract variable is e(j,L). The set of all environment conditions E(j,L)(x, y) is referred to as EL.
5.5.1 Case 1: Guarded Transition for Reference Process

Let us now turn to Case 1 and consider the guarded transition tG:

L1 : if ∀otr ≠ slf. G(slf, otr) then goto L2 else goto L3

Suppose at least one of the trackers ti, i ∈ [1..T], is not clean. Then the reference process cannot conclude that the guard is true. If all the trackers are clean, then we may conclude that the guard is true or false. Once the reference process x ends up in a new control location, we have to appropriately assign new values to the trackers and the backers. To do this we need the following two definitions. The first definition is exactly the same as the one in Chapter 4, but we repeat it for the sake of completeness.
Definition 5.5.1 (Blocking Set for Reference Process). Let G = ∀otr ≠ slf. G(slf, otr) be a guard. We say that an environment condition Ei(x, y) blocks the guard G if Ei(x, y) ⇒ ¬G(x, y). The set Bx(G) of all indices i such that Ei(x, y) blocks G is called the blocking set of the reference process for guard G.
Note that either Ei(x, y) ⇒¬G(x, y) or Ei(x, y) ⇒G(x, y) holds for every environ-ment Ei(x, y).
Each environment Ei uniquely determines the control location of the processes satis-fying it. We will assume, for simplicity, that there is only one transition starting at each control location 2. Thus, each environment Ei has an unique guard or update expression as-sociated with it. The following notion of dependent environments for guarded transitions is similar to the blocking set for the reference process.
Definition 5.5.2 (Dependent Set for Guards). Let pc = L be a control location of the reference process. The guard dependent set of L, Dg(L), contains all those environments Ei whose associated guard G = ∀otr ≠ slf. G(slf, otr) is such that Ei(y, x) ∧ G(y, x) ∧ (pc[x] = L) is satisfiable.
²Extension to the general case is simple.
188 Intuitively, the guard dependent set of a control location pc = L is the set of all those environments whose associated guards are such that the reference process x in control location pc = L does not contradict the guards. Thus, a process y present in any such environment could have seen the reference process x satisfy process y’s guard. We define update dependent sets similarly.
Definition 5.5.3 (Dependent Set for Updates). Let pc = L be a control location of the reference process. The update dependent set of L, Du(L), contains those environments whose associated update expression updates the same data variable as the update transition associated with L. If there is no update transition associated with pc = L then the set is empty.
We will now explain how to abstract the guarded transition tG L1 : if ∀otr ̸= slf.G(slf, otr) then goto L2 else goto L3.
We will represent the set of abstract transitions arising from this case by an invariant (between current and next states) Ix(tG). The invariant, structured similarly to the one in Section 4.5, will be presented in terms of three conditions GR1, GR2, GR3. The abstract model will have a transition from ŝ1 = ⟨pc, e1, .., eT, t1, . . . , tT, bc1, . . . , bcT, te⟩ to ŝ2 = ⟨pc′, e1, .., eT, t′1, . . . , t′T, bc′1, . . . , bc′T, te′⟩ if
GR1. pc = L1, i.e., the reference process is in location L1,
GR2. one of the following two conditions holds:
  • Then Branch: ∀i. (ti = clean) and pc′ = L2, i.e., the guard is true and the reference process moves to control state L2.
  • Else Branch: ¬(∀i. ti = clean) ∨ (∀i. ti = clean ∧ te = dirty), and pc′ = L3, i.e., the guard is false and the reference process moves to control state L3. Note that the condition te = dirty indicates that at least one tracker was dirty at some point in the past.
GR3. Assuming pc′ = L where L ∈ {L2, L3}, the following conditions hold.
• If the transition associated with control location L is a guarded transition with G .
= ∀otr ̸= slf.G (slf, otr), then the following conditions hold.
– If i ∈Bx(G) or e′ i = 0 then t′ i = clean. Else t′ i = dirty, i.e., if Ei is a not blocking environment or if the environment is empty the corresponding tracker is clean. Otherwise, it is set to dirty as there is a process in a blocking environment.
– If i ∈Dg(L) ∪Du(L), then bc′ i = clean else bc′ i = bci. i.e., if the reference process does not block the guard associated with environment Ei or updates the same variable as the update transition associated with Ei then bci is set to clean, otherwise it is set to dirty.
– If there exists an i such that t′ i = dirty then te = dirty, i.e., the variable te is dirty if at least one of the trackers is dirty.
• If the transition associated with control location L is an update transition then the following conditions hold 190 – If i ∈Du(L) and e′ i = 1 then t′ i = clean. Else t′ i = dirty. That is, if environment ei updates the same data variable as the reference process in control location L, then the tracker ti must be set to clean otherwise ti is set to dirty. This indicates that both the reference process and a process in ei can change the same data variables simultaneously.
– If i ∈Dg(L), then bc′ i = clean else bc′ i = bci. That is, if control location L is such that the guard associated with ei is not blocked by the reference process in control location L, then the backer bci is set to clean.
– te′ = clean. This is a default value for te as it is not really used for update transitions.
Similar to the concrete monitor variables, the value clean for trackers and backers is the most permissive –that is, if backers and trackers are clean, then the possible set of transitions is maximal. The value both is slightly more restrictive than clean: the envi-ronments corresponding to trackers and backers that are in the both state cannot be empty.
The value dirty is the most restrictive. A tracker being dirty prevents the reference pro-cess from moving forward. Similarly, a backer being dirty prevents the processes of the corresponding environment from moving forward.
Lemma 5.5.4. If states s1 and s2 in a concrete system P(K), K > 1 are such that αc(s1) = ˆ s1 and αc(s2) = ˆ s2 and there is a transition from s1 to s2 via process c exe-cuting a guarded transition tG then ˆ s1 and ˆ s2 satisfy the transition invariant Ix tG.
Proof. This follows simply from the way we constructed the invariant.
191 Note that all we have done is to translate the lemmas listed in Section 5.4 in terms of the reference process and its environment. This is precisely where the power of this approach comes from. Constructing the abstract model is theoretically simple and it is easily extendible in case new constructs are allowed in the concrete protocols.
5.5.2 Case 2: Guarded Transition for Environment Processes

Suppose that the guarded transition tG

L1 : if ∀otr ≠ slf. G(slf, otr) then goto L2 else goto L3

is executed by a concrete process y satisfying the environment condition E(i,L1)(x, y).
The active process thus switches from environment condition E(i,L1)(x, y) to environment condition E(i,L2)(x, y) or E(i,L3)(x, y). Note that in a guarded transition, only the pc of the active process changes.
We denote the abstract transition corresponding to this case by an invariant I y i (tG). We introduce an abstract transition from ˆ s1 = ⟨pc, e1, .., eT, t1. . . . , tT, bc1, . . . , bcT, te⟩to ˆ s2 = ⟨pc′, e′ 1, .., e′ T, t′ 1. . . . , t′ T, bc′ 1, . . ., bc′ T, te⟩if the following conditions hold.
For brevity we will represent the environment condition E(i,L1) by E1, E(i,L2) by E2, E(i,L3) by E3.
GE1. e1 = 1, that is, there is an environment process in control location L1. (The requirement that, for each control location L, there be only one transition starting from L, is being used here.)
GE2. One of the following two conditions holds:
• Then Branch: bck ∈ {clean, both} and e′_2 = 1, i.e., the guard is true and the environment process moves to control location L2. e′_1 can be 0 or 1.
• Else Branch: e′_3 = 1, i.e., the guard is false and the environment process moves to control location L3. e′_1 can be 0 or 1.
GE3. pc′ = pc. That is, the control location of the reference does not change.
GE4. Let the new control location of the environment process be L ∈ {L2, L3}. Denote the environment E(i,L) by Ej for the sake of brevity. The following conditions must hold:
• t′_j = ω(t1, tj). Function ω, described below, takes the current values of the trackers t1, tj and returns the new value for tj.
• t′_1 = Ωt(t1, e′_1). Function Ωt, described below, takes the current value of a tracker (or a backer) and the next state value of the corresponding environment and returns the next state value of the tracker (or the backer).
• bc′_1 = Ωt(bc1, e′_1).
• If the transition associated with L is a guarded transition then bc′_j = Ωb(Dg(L), bcj). Function Ωb finds the new value of backer bcj as a function of the current value of the backer bcj and the guard dependent set of control location L.
• If the transition associated with L is an update transition, then bc′_j = Ωb(Du(L), bcj).
Function ω(ti, tj) is shown in tabular form in Figure 5.10; it returns the new value of tj given the current values of ti and tj.
tj          ti          ω(ti, tj)
clean       dirty       t′_j = both
dirty       dirty       t′_j = dirty
clean       clean       t′_j = clean
dirty       clean       t′_j = both
clean       both        t′_j = both
dirty       both        t′_j = dirty
both        (any)       t′_j = clean
Figure 5.10: Function ω.
Informally, this new value of tj should reflect the collective status of processes in the environment ej. When a new process moves into the environment ej, we can figure out the status of this new process by looking at the tracker value associated with its old environment. Depending on these two values, the current values of tj and ti, we can figure out the new value of tj so that it reflects the collective condition of the processes in the environment ej.
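Read as code, the table is a small total function on the three abstract values. The following Python sketch is ours and only restates Figure 5.10; the string encoding of clean/dirty/both is an arbitrary choice.

```python
# Sketch of the tracker-combination function of Figure 5.10.
CLEAN, DIRTY, BOTH = "clean", "dirty", "both"

def omega(t_i, t_j):
    """New value of tracker t_j when a process whose old environment has
    tracker value t_i moves into the environment tracked by t_j."""
    if t_j == BOTH:
        return CLEAN                      # last row of the table
    if t_j == CLEAN:
        return {DIRTY: BOTH, CLEAN: CLEAN, BOTH: BOTH}[t_i]
    return {DIRTY: DIRTY, CLEAN: BOTH, BOTH: DIRTY}[t_i]   # t_j == DIRTY
```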
Function Ωt(e′_i, ti) is shown in Figure 5.11. The function code is self-explanatory, as is the function Ωb(Set, bcj) given in Figure 5.12. Function Ωt(e′_i, ti) returns the new value of the tracker ti given the current value of the tracker and the next value of the corresponding environment bit ei.
• If e′_i = 0 then t′_i = clean
• Otherwise
– If ti = clean then t′_i = clean
– If ti = dirty then t′_i = dirty
– If ti = both then t′_i ∈ {clean, dirty, both}
Figure 5.11: Function Ωt.
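In the same spirit, here is a sketch of Ωt and of Ωb (the latter anticipating Figure 5.12 below). Because Ωt is nondeterministic when the tracker is both, the sketch returns the set of allowed next values; the names and encodings are ours.

```python
CLEAN, DIRTY, BOTH = "clean", "dirty", "both"

def omega_t(e_next, t_i):
    """Possible next values of tracker (or backer) t_i, given the next
    value of the corresponding environment bit e_i (Figure 5.11)."""
    if e_next == 0:
        return {CLEAN}
    if t_i in (CLEAN, DIRTY):
        return {t_i}
    return {CLEAN, DIRTY, BOTH}           # t_i == BOTH: unconstrained

def omega_b(dep_set, j, bc_j):
    """Next value of backer bc_j, given the dependent set of environment
    indices passed as first argument (Figure 5.12)."""
    if j in dep_set:
        return CLEAN
    return BOTH if bc_j in (CLEAN, BOTH) else DIRTY
```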
Lemma 5.5.5. If states s1 and s2 in a concrete system P(K), K > 1, are such that αc(s1) = ŝ1 and αc(s2) = ŝ2, c ∈ [1..K], and there is a transition from s1 to s2 via process d ≠ c executing a guarded transition tG, then ŝ1 and ŝ2 satisfy the transition invariant Iy(tG).
Proof. The proof of this lemma follows directly from the way we constructed Iy(tG).
Function Ωb takes a set of environment conditions and one backer as its arguments and returns the new value of the backer.
• If Ej ∈ Set then bc′_j = clean
• If Ej ∉ Set then one of the following holds:
– If bcj = clean or bcj = both then bc′_j = both
– If bcj = dirty then bc′_j = dirty
Figure 5.12: Function Ωb.

5.5.3 Case 3: Update Transition for Reference Process
Consider the case where the reference process is executing an update transition tU:
L1 : for all otr ≠ slf if T(slf, otr) then uk := Φ(otr) goto L2          (†)
Recall that each process has data variables u1, ..., ud. We denote the next state value of each variable um by u′_m.
When the reference process x changes the value of its data variables, the valuations of the environment predicates E1(x) . . . ET(x) will change. For a process y satisfying envi-ronment condition Ei(x, y), we need to figure out the possible new environment conditions Ej(x, y) that y will satisfy after the reference process x has executed the update transition.
Recall from Chapter 4 that the set of possible new environment conditions for process y satisfying the condition Ei(x, y) is called the outset Oi. (Technically, the outset is the set of the indices of these environment conditions.) For the sake of completeness, we will briefly explain again how to compute the outset.
Case A.
The first case we need to consider is when process y does not update uk while the reference process is updating its variable. Denote by T(x, y) the update formula (T(x, y) ∧ u′_k[x] := φ(y)) ∨ (¬T(x, y) ∧ u′_k[x] := uk[x]), where the T(x, y) occurring inside the formula is the guard of the update transition. Given the update formula, we find what possible inter-predicates involving u′_k[x], uk[y] can be true. Formally, the set C1(uk) of these inter-predicates is given by all formulas uk[x] ≺ uk[y], where ≺ ∈ {<, >, =}, such that u′_k[x] ≺ uk[y] ∧ T(x, y) is satisfiable.
The possible relationships between uk[x] and uk[y] might change when x repeatedly updates its uk value by looking at other processes. Suppose x looks at another process z and updates its uk[x] value again. We now find the set C2(uk) of possible relationships between uk[x] and uk[y] after the new update involving process z, under the assumption that a relation from C1(uk) holds before the update. Thus, the new set C2(uk) of relationships is given by all formulas uk[x] ≺ uk[y], where ≺ ∈ {<, >, =}, such that u′_k[x] ≺ uk[y] ∧ T(x, z) ∧ ψ(x, y) is satisfiable and ψ(x, y) ∈ C1(uk).
Note that C1(uk) ⊆C2(uk) because the definition of T(x, z) allows the possibility that the value of uk[x] remains unchanged. We similarly compute sets C3(uk), C4(uk) . . . until a fixpoint, C(uk), is reached.
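A sketch of this iteration follows; the satisfiability test is kept abstract (in practice it would be discharged by a decision procedure over the inter-predicates), and the helper names are ours.

```python
RELATIONS = ("<", ">", "=")

def relation_fixpoint(is_satisfiable):
    """Compute the fixpoint C(u_k) of possible relations between u_k[x] and
    u_k[y].  is_satisfiable(rel, assumed) should decide whether
    u'_k[x] rel u_k[y] can hold after one more update step, assuming that
    one of the relations in `assumed` held before it (empty for C_1)."""
    current = {r for r in RELATIONS if is_satisfiable(r, frozenset())}
    while True:
        new = current | {r for r in RELATIONS
                         if is_satisfiable(r, frozenset(current))}
        if new == current:                # C_{n+1} = C_n: fixpoint reached
            return current
        current = new
```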
In the environment condition Ei(x, y), let θ be the (unique) inter-predicate that describes the relation between uk[x] and uk[y]. Consider the set of environment conditions Ej(x, y) that are obtained from Ei(x, y) by replacing θ with a formula in the fixpoint C(uk): the indices of these environment conditions constitute the outset Oi of Ei(x, y).
Correspondingly, the inset Ik ⊆{1..T} for environment condition Ek(x, y) consists of all j such that k ∈Oj.
Case B.
In the second case, process y is also updating its uk variable. In this case, the set C(uk) is the set of all possible predicates involving uk[x] and uk[y]. In other words, we cannot predict what the relationship between uk[x] and uk[y] is. The outset consisting of [the indices of] environments is then computed as described.
In the abstract model, to compute the outset for environment em we use Case A if the associated tracker tm is dirty; otherwise we use Case B. Observe that, if the tracker is clean, more behaviors are possible.
Denote the set of abstract transitions corresponding to the concrete update transition (†) by Ix(tU). Ix(tU) has a transition from ŝ1 = ⟨pc, e1, ..., eT, t1, ..., tT, bc1, ..., bcT, te⟩ to ŝ2 = ⟨pc′, e′_1, ..., e′_T, t′_1, ..., t′_T, bc′_1, ..., bc′_T, te⟩ if the following conditions hold:
UR1. pc = L1, i.e., the reference process is initially in control location L1.
UR2. pc′ = L2, i.e., the reference process moves to control location L2.
UR3. ∀k ∈[1..T].(ek = 1 ⇒∃j ∈Ok.e′ j = 1), i.e., if there was a process in environ-ment Ek(x) before the transition, then there must be a process in one of the outset environments Ok of Ek(x, y) after the transition.
UR4. (∀j ∈ Ik. ej = 0) ⇒ e′_k = 0, i.e., if there is no process satisfying any of the inset environments Ik of environment Ek(x, y) before the transition, then after the transition there can be no process in environment Ek(x, y).
UR5. For each k ∈ [1..T], the value of bc′_k is computed as follows (a sketch of this computation is given after UR6):
• if e′_k = 0 or k ∈ Dg(L2) ∪ Du(L2) then bc′_k = clean
• otherwise we have three cases:
– if ∀j ∈ Ik. bcj = clean then bc′_k = clean
– if ∀j ∈ Ik. bcj = dirty then bc′_k = dirty
– if ∃j ∈ Ik. bcj = clean and ∃j ∈ Ik. bcj = dirty then bc′_k can take any value in {clean, dirty, both}
UR6. For each k ∈ [1..T], if k ∈ D(L2) then t′_k = clean, else t′_k = dirty, where D(L2) is either Bx(G), if the transition associated with L2 is a guarded transition with guard condition G, or Du(L2), if the transition associated with L2 is an update transition.
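The following sketch of the UR5 computation is ours; the list-based encoding of environment bits, backers, and insets is an assumption of the sketch, not part of the construction.

```python
CLEAN, DIRTY, BOTH = "clean", "dirty", "both"

def next_backers(e_next, bc, inset, dep_L2):
    """For each environment k, the set of values allowed for bc'_k by UR5.
    e_next[k] is e'_k, bc[k] the current backer, inset[k] the inset I_k,
    and dep_L2 the set Dg(L2) union Du(L2)."""
    allowed = []
    for k in range(len(bc)):
        if e_next[k] == 0 or k in dep_L2:
            allowed.append({CLEAN})
            continue
        vals = {bc[j] for j in inset[k]}
        if vals == {CLEAN}:
            allowed.append({CLEAN})
        elif vals == {DIRTY}:
            allowed.append({DIRTY})
        else:
            # mixed inset backers: UR5 allows any value here
            allowed.append({CLEAN, DIRTY, BOTH})
    return allowed
```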
Lemma 5.5.6. If states s1 and s2 in a concrete system P(K), K > 1, are such that αc(s1) = ŝ1 and αc(s2) = ŝ2, with c ∈ [1..K], and there is a transition from s1 to s2 via process c executing an update transition tU, then ŝ1 and ŝ2 satisfy Ix(tU).
Proof. The proof of this lemma follows directly from the way we constructed Ix(tU).
5.5.4 Case 4: Update Transition for Environment Processes
Consider the case where a generic process y satisfying environment E(i,L1)(x, y) is executing an update transition tU:
L1 : for all otr ≠ slf if T(slf, otr) then uk := Φ(otr) goto L2
After the update transition, process y will have a new control location, and the relationship of its data variables to those of the reference process x will also have changed. Recall the notation E(i,L1)(x, y) used to denote the environment condition y ≠ x ∧ Ri(x, y) ∧ pc[y] = L1. The outset O(i,L1) for environment E(i,L1)(x, y) will consist of all those environments E(j,L2)(x, y) that process y may satisfy after the update transition. To compute the outset O(i,L1) we proceed as follows. As in the previous case, we find a fixpoint C(uk) that contains the possible relationships between uk[x] and uk[y].
Case A.
Consider first the case where the reference process is not updating its variable uk. The initial set of relationships C1(uk) is the set of all uk[x] ≺ uk[y], ≺ ∈ {<, >, =}, such that T(y, x) ∧ uk[x] ≺ u′_k[y] is satisfiable, where T(y, x) is the update condition as defined in Section 5.5.3. Note that we consider T(y, x) (and not T(x, y) as in the previous section) because y is the active process. As y updates its uk variable repeatedly, the relationship between uk[x] and uk[y] will also change. To compute all the possible relationships, we use an approach similar to the fixpoint computation in Case 3. Thus, we find the set C2(uk) of all uk[x] ≺ uk[y], ≺ ∈ {<, >, =}, such that T(y, z) ∧ uk[x] ≺ u′_k[y] ∧ ψ(x, y) is satisfiable, where ψ(x, y) ∈ C1(uk). We similarly compute C3(uk), C4(uk), ... until we reach a fixpoint C(uk).
Case B.
Consider now the case where the reference process is also updating its uk variable. In this case, C(uk) will consist of all possible relations on uk[x] and uk[y], denoting that we do not have enough information.
In the environment condition E(i,L1)(x, y), let θ be the (unique) inter-predicate that describes the relation between uk[x] and uk[y]. Consider the set of environment conditions E(j,L2)(x, y) that are obtained from E(i,L1)(x, y) by replacing θ by a formula in the fixpoint C(uk) and replacing the condition pc[y] = L1 by pc[y] = L2. The indices of these environment conditions constitute the outset O(i,L1) of E(i,L1)(x, y).
To compute the outset for an environment e(i,L1), we will use Case A if the associated backer bc(i,L1) is dirty. Otherwise, we use Case B. Note again that bc(i,L1) being clean or both allows more behaviors.
Since the transition starts at control location L1 and a generic process executes it, we will describe the abstract transition I^{i,k}_y(tU) for each environment condition E(i,L1)(x, y) and each k ∈ O(i,L1). The abstract transition Iy(tU) for Case 4 will be Iy(tU) = ⋁_{E(i,L1)(x,y)} ⋁_{k ∈ O(i,L1)} I^{i,k}_y(tU).
I^{i,k}_y(tU) has a transition from ŝ1 = ⟨pc, e1, ..., eT, t1, ..., tT, bc1, ..., bcT, te⟩ to ŝ2 = ⟨pc′, e′_1, ..., e′_T, t′_1, ..., t′_T, bc′_1, ..., bc′_T, te⟩ if the following conditions hold. For brevity we will represent the environment condition E(i,L1) by E1 and E(j,L2) by E2.
UR1. pc = pc′, i.e., the reference process does not move.
UR2. e1 = 1, i.e., there is a process in environment E(i,L1)(x,y) before the transition.
UR3. e′ 2 = 1, i.e., there is a process in environment E(j,L2)(x,y) after the transition.
UR4. The e variables other than e′_1, e′_2 do not change, i.e., e′_l = el for l ∉ {(i, L1), (j, L2)}.
UR5. Assuming the new control location of the environment process that moved was L ∈ {L2, L3}, denote the environment E(i,L) by Ej. The following conditions must hold:
• t′_2 = ω(t1, t2)
• t′_1 = Ωt(e1, t1)
• bc′_1 = Ωt(e1, bc1)
• If the transition associated with L is a guarded transition, bc′_j = Ωb(Dg(L), bci, bcj).
• If the transition associated with L is an update transition, bc′_j = Ωb(Du(L), bci, bcj).
            Inter-preds   Intra-preds   Reachable states   Safety
Bakery (NA)      3             5           O(2^400)        3800s
Bakery (A)       3             5           O(2^146)        68.55s
Figure 5.13: Running times for the bakery protocol. Bakery (A) and Bakery (NA) stand for the bakery protocol with and without the atomicity assumption.

Lemma 5.5.7. If states s1 and s2 in a concrete system P(K), K > 1, are such that αc(s1) = ŝ1 and αc(s2) = ŝ2, with c ∈ [1..K], and there is a transition from s1 to s2 via process d ≠ c executing an update transition tU, then ŝ1 and ŝ2 satisfy Iy(tU).
Proof. The proof of this lemma follows directly from the way we constructed Iy(tU).
5.6 Experimental Results
We applied our abstraction method to the Bakery and Szymanski's protocols without the atomicity assumption. We were able to verify the safety property of the Bakery protocol, namely ∀x. ∀y ≠ x. AG(pc[x] = crit ⇒ pc[y] ≠ crit), in about 2 hours. Figure 5.13 shows the run times and other statistics in the non-atomic case and for the same verification carried out under the atomicity assumption. Note the enormous increase in the state space size once we remove the atomicity assumption. The increase in the model checking time is equally dramatic. This again underlines the significant reduction in complexity of protocols due to the atomicity assumption.
We were not able to verify the safety property of Szymanski's protocol. The correctness of Szymanski's protocol depends on the specific order in which a process looks at the other processes in the system. Szymanski's protocol is correct only if a process looks at the other processes in the increasing order of the index [55; 77]. The semantics we assigned to our guarded and update transitions was such that the order of processes was immaterial.
Hence we cannot accurately model Szymanski’s protocol in our input language.
We also applied our abstraction to the toy protocol described in Section 5.3. As expected, our method finds a trace violating mutual exclusion in under 5 minutes.
Chapter 6
Verification by Network Decomposition
6.1 Introduction
Despite the great success of model checking in hardware and software verification, the classical approach to model checking can handle only finite state systems. Consequently, applying model checking techniques to systems involving unlimited concurrency, unlimited memory, or unlimited domain sizes is a major challenge. Researchers have sought to address these issues by different verification methods including, among others, abstraction, regular model checking, static analysis, and theorem proving.
Many software and hardware systems, however, are described in terms of natural parameters and, for each concrete value of the parameters, the systems have a finite state space. Verifying a property of a parameterized system amounts to verifying this property for all values of the parameters. Examples of parameterized systems include mutual exclusion protocols, cache coherence protocols, and multi-threaded systems.
While there has been considerable effort in verifying parameterized systems such as cache coherence and mutual exclusion protocols, which have replicated processes but no underlying network graph, there is little work on parameterized systems that have both replicated processes and underlying network graphs. Common examples of systems that are required to operate on arbitrary network topologies are network routing protocols. Leader election protocols, for example, are usually designed to operate regardless of the underlying network topology of the system. Verifying such systems is obviously complicated by the fact that the network graph can be arbitrary (in addition to the fact that the network graph induces asymmetry in the system).
In a seminal paper, Emerson and Namjoshi consider systems composed of iden-tical asynchronous processes which are arranged in a ring topology and communicate by passing a Boolean token. For several classes of indexed CTL∗\ X properties they provide cutoffs, i.e., reductions to single systems of constant small size. Consequently, CTL∗\ X properties over an infinite class of networks can be reduced to a single model checking call.
In this chapter, we extend the results of Emerson and Namjoshi from rings to arbitrary classes of networks. There are two modifications, however: first, our results hold true only for LTL\X, and second, we introduce a more refined notion of cut-offs. The first restriction is necessary: we show in Section 6.5 that with CTL\X it is impossible to obtain cut-offs for arbitrary networks.
The second modification actually provides an interesting new view on the notion of cut-offs: in order to verify the parametrized system, we are allowed to model check a constant number c of small systems whose network graphs have sizes bounded by a constant s.
Then, the verification result for the parametrized system is a Boolean combination of the collected results for the small systems. We call such a reduction to a finite case distinction a (c, s)-bounded reduction.
Our main results can be summarized as follows:
• Verification by Network Decomposition: Verifying systems with fixed large network graphs G (e.g., concrete instantiations of a parametrized system) can be as challenging as verifying parameterized systems. Note that when |Q| is the size of the state space of the individual processes, the state space of the whole network can be as large as |Q|^n, where n is the number of nodes. We show that the verification of an indexed LTL\X property ϕ for a system with network graph G can be achieved by an efficiently computable (c, s)-bounded reduction. For the important case of 2-indexed properties, it is sufficient to model check at most 36 networks of size 4.
• Offline Verification: In a scenario where ϕ is known in advance and the network G can change for different applications, we can first verify a constant number of small systems offline. Later, when we get to know the network graph G, the correctness of G with respect to specification ϕ can be verified online by simply evaluating a constant size Boolean function, regardless of the size of the processes.
Again, for 2-indexed properties, the offline computation involves at most 36 calls to the model checker for networks of size 4.
• Cut-Offs: For every class of networks T and k-indexed LTL\X property ϕ one can verify if ϕ holds on all networks in T by a (c, s)-bounded reduction, where c and s depend only on k.
Depending on the complexity of the networks in T, finding a suitable (c, s)-bounded reduction will in general still involve manual algorithm design. Similar to famous results about linear time algorithms for bounded tree-width, our proofs just guarantee the existence of small reductions.
Our results lay the foundation for reasoning about systems with arbitrary network graphs. While communication between the processes is simple, the results we obtain are non-trivial. In fact, we were surprised to discover that for CTL\X specifications there are no cutoffs even for this simple communication model. The generalized notion of cutoffs we present will be crucial to reasoning about systems with more complicated communication.
This chapter is organized as follows: the next section discusses the work most closely related to ours. In Section 3, we describe the system model in detail. Section 4 contains the main cutoff results. Section 5 shows that no cutoffs exist for CTL\X. Finally, the conclusion in Section 6 briefly considers further performance enhancements for practical applications of our method.
6.2 Related Work
Verification of parameterized systems is well known to be undecidable [2; 76]. Many interesting approaches to this problem have been developed over the years, including the use of symbolic automata-based techniques [1; 10; 12; 13; 51; 78], network invariants [3; 64], predicate abstraction [52; 53], and symmetry reduction [24; 31; 38; 39; 40]. In , cut-offs were used for the verification of systems sharing common resources, where the access to the resources is managed according to a FIFO-based policy.
In addition to the work mentioned above, Emerson et al. have shown a large number of fundamental results involving cut-offs. The paper by Emerson and Kahlon also considers LTL\X cut-offs for arbitrary network topologies with multiple tokens, but each token is confined to two processes, which renders their model incomparable to ours. Other previous work by Emerson and Kahlon [32; 34; 35] considers other restricted forms of process interaction. Finally, considers the verification of single index properties for systems with multiple synchronous processes.
Indexed temporal logic was introduced in . The paper also considers identical processes arranged in ring topology.
The work that is closest in spirit to our negative results on CTL∗\ X logic is the work by Browne, Clarke and Grumberg in that shows how to characterize Kripke structures up to bisimilarity using fragments of CTL⋆. Our results show that even CTL∗\ X with only two atomic propositions is sufficient to describe an infinite class of Kripke structures that are not bisimilar to each other. In other words, bisimilarity over the class of Kripke structures with two labels gives rise to an infinite number of equivalence classes.
6.3 Computation Model
Network Topologies. A network graph is a finite directed graph G = (S, C) without self-loops, where S is the set of sites, and C is the set of connections. Without loss of generality we assume that the sites are numbers, i.e., S = {1, 2, ..., |S|}. A (network) topology T is a class of network graphs.
Token Passing Process. A single token passing process P (process) is a labeled transition system (Q, Σ, δ, I) such that:
• Q = Q̂ × B, where Q̂ is a finite, nonempty set and B = {0, 1}. Elements of Q will be called local states. The boolean component of a local state indicates the possession of the token. We say that a local state (q, b) holds the token if b = 1.
• Σ = Σf ∪ Σd ∪ {rcv, snd} is the set of actions. The actions in Σd are token dependent actions, those of Σf are called token independent actions, and {rcv, snd} are actions to receive and send the token. The sets Σf and Σd are disjoint.
• δ ⊆ Q × Σ × Q is a transition relation, such that every ((q, b), a, (q′, b′)) ∈ δ fulfills the following conditions:
(a) A free transition does not change token possession: a ∈ Σf ⇒ b = b′
(b) A dependent transition can execute only if the process possesses the token: a ∈ Σd ⇒ b = b′ = 1
(c) A receive establishes possession of the token: a = rcv ⇒ b = 0, b′ = 1
(d) A send revokes possession of the token: a = snd ⇒ b = 1, b′ = 0
• I ⊆ Q is the set of initial states.
Topological Composition. Let G = (S, C) be a network graph and P = (Q, Σ, δ, I) be a single token process. Then P G denotes the concurrent system containing n = |S| instances of P denoted by Ps, s ∈ S. The only synchronization mechanism between the processes is the passage of a token according to the network graph G. Formally, the system P G is associated with a transition system (Q, ∆, I) defined as follows:
• Q = {(q1, ..., qn) ∈ Q^n | exactly one of the qi holds the token}.
• ∆⊆Q2n is defined as follows: a transition (q1, q2, . . . , qn) →(q′ 1, q′ 2, . . . , q′ n) is in ∆in one of two cases: (a) Asynchronous Transition: there exist an index j ∈{1, . . ., n} and an action a ∈Σf ∪Σd such that (qj, a, q′ j) ∈δ, and for all indices i ̸= j we have qi = q′ i.
In other words, only process Pj makes a transition (different from a send or receive).
(b) Token Transition: there exist a network connection (j, k) ∈C in the network graph, such that (qj, snd, q′ j) ∈δ, (qk, rcv, q′ k) ∈δ, and qi = q′ i for all indices i different from j, k.
• I = {(q1, . . . , qn) ∈In | exactly one of the qi holds the token}.
An execution path is considered fair if and only if every process Pi receives and sends the token infinitely often. We assume that every system P G that we consider has fair paths. An 211 immediate consequence of the fairness condition is that a system P G can have fair paths only if G is strongly connected.
We shall use indexed temporal logics, which can refer explicitly to the atomic propo-sitions of each process Pi, to specify properties of the compound systems. For each local state q in Q we introduce propositional variables q(1), . . . , q(n). The atomic proposition q(i) says that process Pi is in state q. Thus, for a global state g we define g | = q(i) iffin global state g, process Pi is in state q.
Starting from this definition for atomic propositions, we can easily define common tempo-ral logics such as CTL or LTL in a canonical way. Throughout this paper, we will assume that the path quantifiers A and E quantify over fair paths. Further we assume that LTL formulas are implicitly quantified by E. This restriction simplifies our proofs but does not restrict generality.
Example 6.3.1. The formula G(q(1) ⇒Fq(2)) says that whenever process P1 is in state q then process P2 will be in state q sometime in the future.
For increased expressibility we permit that in an atomic formula q(x) the process index x is a variable (called index variable) which can take any value from 1 to |S|, the total num-ber of processes. Thus, x can refer to arbitrary processes. We shall write ϕ(x1, . . . , xn) to indicate that the temporal formula ϕ depends on the index variables x1, . . .xn. We can substitute the index variables in a formula ϕ(x1, . . . , xk) by integer values i1, . . . , ik in the natural way, and denote the resulting formula by ϕ(i1, . . . , ik).
In addition to substitution by constants, we can also quantify over the index variables x1, . . . xn using a prefix of existential and universal quantifiers with the natural seman-212 tics. Such formulas are called quantified temporal formulas. For example, the formula ∀x∃y.ϕ(x, y) means “For all processes x there exists a process y, such that the temporal formula ϕ(x, y) holds.” A formula without quantifier prefix is called quantifier-free. If all index variables in a formula are bound by quantifiers we say that the formula is closed, and open otherwise. The quantifier-free part of a quantified formula is called the matrix of a formula.
Example 6.3.2. The formula ∃x, y.G(q(x) ⇒Fq(y)) says that there exist two processes Px and Py, such that whenever process Px is in state q then process Py will be in state q some time in future.
The formal semantics of this logic is straightforward and is omitted for the sake of brevity.
Definition 6.3.3 (k-indexed Temporal Formula). Let L be a temporal logic. A k-indexed temporal formula is a formula whose matrix refers to at most k different processes, i.e., there are at most k different constant indices and index variables.
6.4 Reductions for Indexed LTL\X Specifications In this section, we will show how to reduce the model checking question P G | = ϕ to a series of model checking questions on smaller systems P Gi’s where we can bound the size of the network graphs Gi as well as the number of the Gi’s. For the sake of simplicity, we will start with the special case of 2-indexed existential LTL\X specifications, which can be readily generalized to the full case.
213 6.4.1 Existential 2-indexed LTL\X Specifications In this section we show how to verify simple 2-indexed LTL\X properties of the form ∃i, j.ϕ(i, j), where i ̸= j. We will use the insights we obtain from this case to obtain the more general results later on.
Recall that 2-indexed properties are concerned only with properties of two processes in a given system. Our process communication model implies that two processes Pi and Pj can only affect each other by passing or receiving a token. Consequently, the synchro-nization between Pi and Pj crucially depends on the paths between sites i and j in the network graph. The following example is crucial to understanding the intuition behind our approach: Example 6.4.1. The Figure below shows one path π = i, a, b, i, j, b, c, i, c, j, . . . that the token takes in a network graph.
Φ→(i, j) Φ;(j, i) Φ;(i, j) Φ⟲(i, j) a b b c c i i j i j Suppose that we are only interested in properties concerning the processes Pi and Pj, but not in processes Pa, Pb, Pc. Then only the sequence of the i’s and j’s in the path are of interest. Looking at π from left to right, we see four possibilities for what can happen between i and j: (1) Pi sends a token, and receives it back without Pj seeing it (formally, we will write Φ⟲(i, j) to denote this); (2) Pi passes the token directly to Pj (Φ→(i, j)); (3) Pj sends the token to Pi through several intermediate sites (Φ;(j, i)); and (4) Pi sends 214 the token back to Pj through several intermediate sites (Φ;(i, j)). There are two more possibilities which do not occur in π: (5) Φ→(j, i) and (6) Φ⟲(j, i). The important insight is the following: If we know which of these 6 cases can occur in a network graph G, then we have all information needed to reason about the communication between Pi and Pj.
We will later construct small network graphs with 4 nodes where the sites i and j are represented by two distinguished nodes site1 and site2, while all other sites are repre-sented by two “hub” nodes hub1 and hub2.
This example motivates the following definitions: Definition 6.4.2 (Free Path). Let I be a set of indices, and π be a path in a network graph G. We say that π is I-free, if π does not contain a site from I.
We now define three kinds of path types that will be shown to capture all relevant token paths between two processes Pi and Pj.
Definition 6.4.3 (Connectivity, Characteristic Vectors). Let i, j be indices in a network graph G. We define three connectivity properties of the indices i, j: G | = Φ⟲(i, j) ”There is a {j}-free path from i to itself.” G | = Φ;(i, j) ”There is a path from i to j via a third node not in {i, j}.” G | = Φ→(i, j) ”There is a direct edge from i to j.” Using the connectivity properties, we define an equivalence relation ∼2 on network graphs: Given two network graphs G1 and G2 along with two pairs of indices a1, b1 and a2, b2, we 215 define (G1, a1, b1) ∼2 (G2, a2, b2) iff for every Φ ∈{Φ⟲, Φ;, Φ→}, G1 | = Φ(a1, b1) ⇐ ⇒ G2 | = Φ(a2, b2) and G1 | = Φ(b1, a1) ⇐ ⇒ G2 | = Φ(b2, a2) If (G1, a1, b1) ∼2 (G2, a2, b2) we say that the indices a1, b1 in G1 have the same con-nectivity as the indices a2, b2 in G2.
The characteristic vector v(G1, a1, b1) is the 6-tuple containing the truth values of G1 | = Φ⟲(a1, b1), G1 | = Φ;(a1, b1), G1 | = Φ→(a1, b1) G1 | = Φ⟲(b1, a1), G1 | = Φ→(b1, a1), and G1 | = Φ;(b1, a1), By definition it holds that (G1, a1, b1) ∼2 (G2, a2, b2) iff they have the same character-istic vectors, i.e., v(G1, a1, b1) = v(G2, a2, b2). Since the number of characteristic vectors is constant, it follows that ∼2 has finite index. The characteristic vectors can be viewed as representatives of the equivalence classes.
site1 hub1 site2 hub2 site1 hub1 site2 hub2 Figure 6.1: Network Graphs A, B, realizing two different characteristic vectors 216 Example 6.4.4. Consider the network graphs A, B of Figure 6.1. It is easy to see that (A, site1, site2) has characteristic vector (1, 1, 1, 1, 1, 1), i.e., A | = Φ⟲(site1, site2) ∧Φ;(site1, site2) ∧Φ→(site1, site2) ∧ Φ⟲(site2, site1) ∧Φ;(site2, site1) ∧Φ→(site2, site1) and (B, site1, site2) has characteristic vector (0, 1, 0, 1, 1, 0), i.e., B | = ¬Φ⟲(site1, site2) ∧Φ;(site1, site2) ∧¬Φ→(site1, site2) ∧ Φ⟲(site2, site1) ∧Φ;(site2, site1) ∧¬Φ→(site2, site1).
Note that a network graph will in general have several characteristic vectors depending on the indices we consider. The set of characteristic vectors of a graph G can be effi-ciently computed from G in quadratic time. The crucial insight in our proof is that for two processes Pi and Pj, the connectivity between their indices i, j in the network graph determines the satisfaction of quantifier-free LTL\X properties ϕ(i, j) over P G: Lemma 6.4.5 (2-Index Reduction Lemma). Let G1, G2 be network graphs, P a process, and ϕ(x, y) a 2-indexed quantifier-free LTL\X property. Let a1, b1 be a pair of indices on G1, and a2, b2 a pair of indices on G2. The following are equivalent: (a) (G1, a1, b1) ∼2 (G2, a2, b2), i.e., a1, b1 and a2, b2 have the same connectivity.
(b) P G1 | = ϕ(a1, b1) iff P G2 | = ϕ(a2, b2).
Proof of this lemma and other claims in this chapter have been moved to the last section for better readibility.
217 The lemma motivates the following model checking strategy: Given a (possibly com-plicated) network graph G1 and two of its sites i, j, we can try to obtain a simpler network G2 := G(i,j), with two special nodes site1 and site2 that have the same connectivity in G2 as the indices i and j in G1, and thus satisfies condition (a) of the lemma. For the case of two indices, we can always find such a network graph G(i,j) with at most 4 sites.
Proposition 3. For each graph G and indices i, j there exists a 4-node graph G(i,j) called the connection topology of i, j, having two special sites site1 and site2 such that (G, i, j) ∼2 (G(i,j), site1, site2).
In other words, the indices i and j in G have the same connectivity as the indices site1 and site2 in G(i,j).
Since G(i,j) satisfies condition (a) of Lemma 6.4.5, we obtain the following important consequence: Corollary 6. Let ϕ(i, j) be a 2-indexed quantifier-free LTL\X property. Then P G | = ϕ(i, j) iff P G(i,j) | = ϕ(site1, site2).
Thus, we have achieved a reduction from a potentially large network graph G to a 4-node network graph G(i,j). We will now show how to actually construct the connection topology G(i,j).
Construction of G(i,j). We construct the reduction graphs as follows. G(i,j) has four sites: site1, site2, hub1, and hub2. The sites site1 and site2 are called primary sites. They represent the sites of interest i and j. The other sites are called hubs, and they represent 218 the other nodes of the graph G. Let us describe in more detail the role of these different nodes. Recall that to satisfy Proposition 3, the sites site1 and site2 in G(i,j) should have the same connectivity as i, j in G. Therefore: • If Φ;(i, j) holds in G (i.e., there exists a path from i to j in G that goes through a third node), then Φ;(site1, site2) has also to hold in G(i,j), i.e., there should exist in G(i,j) a path from site1 to site2 that goes through a third node. The site hub1 will play the role of this “third node”. Therefore, in this case, G(i,j) contains an edge from site1 to hub1, and from hub1 to site2.
• In the same manner, if Φ⟲(i, j) holds in G (i.e., there exists a path from i to itself in G that does not go through j), then Φ⟲(site1, site2) should also be true in G(i,j).
As previously, this is ensured by considering the following edges: (site1, hub1) and (hub1, site1).
• Finally, if Φ→(i, j) holds in G (i.e., there exists a direct edge in G from i to j), then G(i,j) should also contain the edge (site1, site2).
• The paths from j to i are treated in a symmetrical way.
For example, let H be a graph having as sites i, j, k, and l (among others), such that v(H, i, j) = (1, 1, 1, 1, 1, 1), and v(H, k, l) = (0, 1, 0, 1, 1, 0); then the graphs A and B of Example 6.4.4 correspond respectively to the reduction graphs H(i,j) and H(k,l).
Since our fairness assumption implies that the network is strongly connected, not all char-acteristic vectors actually occur in practice. A closer analysis yields the following bound: 219 Proposition 4. For 2 indices, there exist at most 36 connection topologies.
All the 36 connection topologies are shown in the Section 6.8.
Let us now return to the question of verifying properties of the form ∃x, y.ϕ(x, y). Note that Corollary 6 only provides us with a way to verify one quantifier-free formula ϕ(i, j).
Given a system P G, we define its 2-topology, denoted by T2(G), as the collection of all different connection topologies appearing in G. Formally, Definition 6.4.6. Given a network graph G = (S, C), the 2-topology of G is given by T2(G) = {G(i,j) | i, j ∈S, i ̸= j}.
By Proposition 4, we know that |T2(G)| ≤36. Since we can express ∃x, y.ϕ(x, y) as a disjunction W i,j∈S ϕ(i, j) we obtain the following result as a consequence of Corollary 6: Theorem 6.4.7. The following are equivalent: (i) P G | = ∃x, y.ϕ(x, y) (ii) There exists a connection topology T ∈T2(G), such that P T | = ϕ(site1, site2).
Thus, we obtain the following reduction algorithm for model checking P G | = ∃x, y.ϕ(x, y): 1: Determine T2(G).
2: For each T ∈T2(G), model check P T | = ϕ(site1, site2).
3: If one of the model checking calls is successful then output “true” else output “false”.
220 Example 6.4.8.
Figure 6.2: A system with grid like network graph with 9 nodes.
Consider a system P G with a grid like network graph G shown in Figure 6.2. Assume that each edge of the network is bidirectional. To verify a 2-indexed LTL \ X property ∃x, y.ϕ(x, y) of this system, it is enough to consider two systems P G1 and P G2 with net-work graphs G1, G2 shown in Figure 6.3 and check ϕ(site1, site2) on each of them.
If either system satisfies ϕ(site1, site2) then P G | = ∃x, y.ϕ(x, y). Otherwise, it P G ̸| = ∃x, y.ϕ(x, y).
Relation with Environment Abstraction In this section we will consider the relationship between the decomposition presented here and environment abstraction presented in the earlier chapters. For ease of comparison, we will consider environment abstraction with single reference process and decompositions 221 site1 hub1 hub2 site2 site1 hub1 hub2 site2 Figure 6.3: Connection topologies for the grid-like network graph.
for two indexed properties.
First note that both the methods deal with properties of a fixed number of processes.
In the case of environment abstraction, we considered primarily single index properties, that is, properties of one process and its environment. Here we consider double indexed properties, that is properties satisfied by two processes and their common environment. To build the abstract model in environment abstraction, we begin by asking how the system looks when viewed from the reference process. The environment of the reference process is captured using an appropriately chosen set of predicates. Our soundness theorem of Chapter 2 shows that the abstract model built using these predicates is sound and our experiments show that the abstract models are quite precise.
In this chapter too, we ask how the system looks like from the point of view of two processes. But this time, the environment around the two processes is described mainly in terms of the network topology. Note that the reduced system P G(i,j) corresponding to processes i, j in a system P G can be thought of as an abstraction of P G. But, unlike usual abstractions, the set of properties (involving only processes i, j) satisfied by P G(i,j) 222 is exactly the same as the set of properties satisfied by P G.
One way of looking at environment abstraction is to first consider the abstract models obtained by fixing the reference process (as we do in the proof of soundness). That is, for a given system P(K) consider the abstract models PA 1 , . . ., PA K. If we can show, for each i ∈[1..K], PA i | = Φ(i) then we can conclude that P(K) | = ∀x.Φ(x) But, it is not feasible to check each of the abstract models P A i individually because there is no bound on K. So instead of verifying each abstract model separately, we create a new abstract model PA by combining all the individual models PA 1 , . . . , PA K to obtain an even more abstract model. By the existential abstraction principle, we have PA | = Φ(x) ⇒∀i.PA i | = Φ(i) Thus, it is enough to verify the abstract model PA.
In contrast, in this chapter, we take every possible pair of processes i, j, and construct the abstract model P G(i,j) specific to each of them. But then, instead of grouping all these abstract models, we keep them separate and check each of them individually. This is pos-sible because Proposition 4 guarantees that there are only 36 different possible reduction graphs (or abstract models). This could not be done in the case of environment abstrac-tion, because we don’t know apriori how many different individual abstract models are there nor do we know how to find them efficiently.
223 To summarize, the reduction presented here and the environment abstraction both in-volve describing the world around a fixed number of processes. Importantly, the results presented in this chapter amount to reductions, that is , the properties under consideration are preserved exactly. In contrast, in environment abstraction, the abstract model exhibits more behaviors than the concrete system.
6.4.2 Existential k-indexed LTL\X Specifications We will now show how to generalize the results of the previous section to k-indexed prop-erties. Throughout this section, we will write expressions such as ¯ i to denote k-tuples of indices, and ¯ x to denote k-tuples of variables. We will first adapt the notion of connectivity as follows. Let ¯ i = i1, i2 . . . ik be a sequence of indices, and I = {i1, i2 . . . ik}. Then we define the following connectivity properties: G | = Φ⟲(x, I) ”There is an (I \ {x})-free path from x to itself.” G | = Φ;(x, y, I) ”There is a path from x to y via a third node not in I.” G | = Φ→(x, y) ”There is a direct edge from x to y.” By instantiating the variables x and y by the indices i1, . . . , ik in all possible ways, we ob-tain a finite number of different conditions which will describe all possible connectivities between the indices i1, . . . , ik.
As in the previous section, we can define an equivalence relation ∼k, where (G1,¯ i) ∼k (G2,¯ j) iff the indices ¯ i have the same connectivity in G1 as the indices ¯ j in G2. Since the 224 hub1 site1 site2 site3 site4 site5 hub2 hub3 hub4 hub5 Figure 6.4: An example of a 5-index connection topology number of conditions is bounded, ∼k is an equivalence relation of finite index, and we can describe each equivalence class by a characteristic vector v(G, ¯ v). As in the previous sec-tion, we define the k-connection topologies, G(i1,i2...ik) of the processes Pi1, Pi2 . . . Pik in G as the smallest graphs that preserve all the connectivity properties between the processes Pi1, Pi2 . . . Pik. The construction of the topology graphs is illustrated in Figure 6.4.
The unfilled nodes site1, . . ., sitek in the graph are the primary sites. There is a hub site associated with each primary site. Moreover, there is an edge from each hub hubj back to its primary sitej if there is an (I {ij})-free path from ij to itself. There is an edge from hubj to sitel if there is a path from ij to il in G via a third node not in I, and there is an edge from sitej to sitel if there exists a direct edge (ij, il) in G.
Analogous to the bounds on 2-connection topologies it can be shown that each k-connection topology has at most 2k processes and that there are at most 3k(k−1)2k distinct k-connection topologies.
By an argument analogous to that of the previous section, we obtain the following corollary 225 Corollary 7. Let ϕ(¯ x) be a k-indexed quantifier-free LTL\X property. Then P G | = ϕ(¯ i) iff P G(¯ i) | = ϕ(site1, site2, . . . , sitek).
The notion of k-topology is defined analogously: Definition 6.4.9. Given a network graph G = (S, C) the k-topology of G is given by Tk(G) = {G(¯ i) | ¯ i ∈Sk, all indices in ¯ i are distinct}.
Consequently, we obtain a model checking procedure from the following theorem, similar to the case of 2-indices: Theorem 6.4.10. The following are equivalent: (i) P G | = ∃¯ x.ϕ(¯ x) (ii) There exists a connection topology T ∈Tk(G), such that P T | = ϕ(site1, site2, . . . , sitek).
As mentioned before |Tk(G)| ≤3k(k−1)2k.
6.4.3 Specifications with General Quantifier Prefixes In this section we will show how to obtain reductions for k-indexed specifications with first order prefixes.
Let us for simplicity consider the 2-indexed formula Φ := ∀x∃y.ϕ(x, y). Over a network graph G = (S, C), |S| = n, it is clear that Φ is equivalent to ∧1≤i≤n ∨1≤j≤n ϕ(i, j). A naive application of Corollary 7 would therefore require n2 calls to the model 226 checker, which may be expensive for practical values of n. In practice, however, we can bound the number of model checker calls by |T2(G)| since this is the maximum number of different connection topologies. We conclude that the n2 model checker calls must contain repetitions. We can make sure that at most 36 calls to the model checker are needed. We obtain the following algorithm: 1: Determine T2(G).
2: For each T ∈T2(G) 3: model check P T | = ϕ(site1, site2) 4: g[T] := 1 iff model checking successful, and 0 otherwise 5: Output V 1≤i≤n W 1≤j≤n g[G(i,j)].
By simplifying the formula in line 5, we may further increase performance. The algo-rithm can be adapted for k indices in the obvious way. To state the main theorem of this section, we define (c, s)-bounded reductions, where c bounds the number of calls to the model checker, and s bounds the size of the network graph.
Definition 6.4.11 ((c, s)-bounded Reduction). Let G, P be as above, and ϕ be a closed k-indexed formula with matrix ϕ′(x1, . . ., xk). Let Ψ denote a property of interest (e.g., the model checking property ′′P G | = ϕ′′). A (c, s)-bounded reduction of property Ψ is given by: • a sequence of c reduced network graphs Gi = (Si, Ci), 1 ≤i ≤c such that |Si| ≤s.
called reduction graphs.
227 • a boolean function B over c variables g1, . . . , gc, such that Ψ iff B(g1, . . . , gc) = 1 where gi := 1 iff GP i | = ϕ′(site1, . . . , sitek) In other words, property Ψ is decided by c calls to the model checker, where in each call the network graph is bounded by s.
Further, we say that a class L of specifications has (c, s) bounded reduction if for all network graphs G and any ϕ ∈L, the property P G | = ϕ has (c, s)-bounded reduction. We can now state our main result: Theorem 6.4.12. Let ϕ be any k-indexed LTL\X specification. Then the model checking problem ′′P G | = ϕ′′ has polynomial-time1 computable (3k(k−1)2k, 2k)-bounded reductions.
In fact, the sequence of reduced network graphs is just the different k-connection topolo-gies occurring in G. This implies that given k and network graph G, all k-indexed LTL\X specifications have the same reduction. Stated another way, LTL\X has (3k(k−1)2k, 2k)-bounded reduction.
6.4.4 Cut-Offs for Network Topologies In this section, we prove the existence of cutoffs for network topologies, i.e., (infinite) classes of network graphs. We say that a class of network graphs has cutoff (c, s), if the question whether all the network graphs in this topology satisfy the specification has a (c, s)-bounded reduction.
1in the size of the network graph G 228 Definition 6.4.13 (Cut-off). Let T be a network topology, and L a class of specifications.
T has a cut-off (c, s) for L if for all specifications ϕ ∈L the property Ψ := “ ∀G ∈T . P G | = ϕ ” has a (c, s)-bounded reduction.
It is not hard to prove that a (c, s)-bounded reduction for a network graph translates to a cut-off for a network topology: Theorem 6.4.14. For k-indexed specifications, all network topologies T have (2k, 3k(k−1)2k)-bounded reductions.
Note that the theorem does not provide us with an effective means to find the reduc-tion; it does however guarantee that at least in principle we can always find a cutoff by investigating the topology T.
6.5 Bounded Reductions for CTL\X are Impossible In this section, we show that indexed CTL\ X formulas over two indices do not have (c, s)-bounded reductions. We will first show the following generic result about CTL\ X: Theorem 6.5.1. For each number i there exists an CTL\ X formula ϕi with the following properties: • ϕi is satisfiable (and has a finite model).
• ϕi uses only two atomic propositions l and r.
229 • Every Kripke structure K where ϕi is true has at least i states.
• ϕi has the form EFϕ′ i.
The result is true even when the Kripke structure is required to have a strongly con-nected transition relation.
Remark 15. This result is closely related to early results about characterizing Kripke structures up to bisimulation in . The results in give rise to the following proof idea for Theorem 6.5.1: Let K1, . . . , Kn be all Kripke structures with 2 labels of size ≤i, and let f1, . . ., fn be CTL\ X formulas which characterize them up to stuttering bisimulation. Consider now the formula ϕi := V 1≤j≤n ¬fj. By construction every model of ϕi must have > i states. At this point, however, the proof breaks down, because we do not know from the construction if ϕi is satisfiable at all. The natural way to show that ϕi has a model would be to prove that stuttering bisimulation over a 2-symbol alphabet has infinite index. This property however is a corollary to Theorem 6.5.1, and we are not aware of a proof in the literature.
For properties involving only the presence of the token, a system P G, where G = (S, C) essentially behaves like a Kripke structure with set of states S and transition relation C. To see this, consider a system P G, where P is a trivial process which can always receive a token, and immediately send the token to a neighbor process. Let ti and tj be propositional formulas stating that the token is with process i and j respectively. Since the processes do not influence the path taken by the token, the token moves only according to the network graph G, and thus for each path on P G there exists a corresponding path in G.
Consequently, if a path on P G satisfies a property without X, then the corresponding path 230 on G also satisfies this property. Now we can show by contradiction that indexed CTL\ X cannot have bounded reductions. Suppose CTL\X did have (c, s)-bounded reduction for some s. Then, by Theorem 6.5.1, we can always find a CTL\X formula Φ such that the network graph underlying any system that satisfies Φ must have size at least c + 1. Thus CTL\X does not have bounded reductions. Consequently, we also have the following corollary: Corollary 8. There exists a network topology T for which 2-indexed CTL\ X does not have cut-offs.
A detailed proof can be found in the last section of this chapter.
6.6 Conclusion We have described a systematic approach for reducing the verification of large and pa-rameterized systems to the verification of a sequence of much smaller systems. We will conclude this chapter with further considerations concerning the practical complexity of model checking.
For simplicity, let us again consider the case of 2-indexed properties. Suppose the processes P in our network have state space |Q|. Then our reduction requires to model check up to 36 network graphs with 4 sites, resulting in a state space of |Q|4 . Even this model checking problem may be too expensive in practice. By a close analysis of our proofs, it is however possible to reduce the state space even further to O(|Q|2).
It is easy to show that Lemma 6.4.5 will hold even when the processes at the hubs 231 are simple dummy processes containing two states whose mere task is to send and receive the token infinitely often. Consequently, the systems P G(i,j) will have state space of size 22 × |Q|2.
The results in this chapter on LTL\X were derived assuming fairness condition on the systems. We can obtain similar reductions by removing this assumption. Doing away with fairness necessitates the consideration of two more path types other than the ones described in Section 6.4.1. Consequently, the topology graphs have more than 4 sites and also the number of different topology graphs increases.
6.7 Proofs of Lemmas Proposition 4. For 2 indices, there exist at most 36 connection topologies.
Proof. By our fairness assumption, every connection topology must be strongly con-nected. This implies that the following conditions must hold: • At least one of Φ→(i, j) or Φ;(i, j) must be true.
• At least one of Φ→(j, i) or Φ;(j, i) must be true.
This means in the characteristic vector of connection topology the following must hold: • At least one of the second and third elements (corresponding to the connectivity properties discussed above) must be 1. This gives us three choices in picking the second and third elements of the vector.
232 • At least one of the fifth and sixth elements must be 1. This again gives us three choices in picking the fifth and the sixth elements of the vector.
• First and fourth elements can be either 0 or 1. This gives us four choices in picking the first and the fourth elements of the vector.
Consequently the number of different possible characteristic vectors is 3 × 3 × 4 = 36.
Lemma 6.4.5. Let G1, G2 be network graphs, P a process, and ϕ(x, y) a 2-indexed quantifier-free LTL\X property. Let a1, b1 be a pair of indices on G1, and a2, b2 a pair of indices on G2. The following are equivalent: (a) (G1, a1, b1) ∼2 (G2, a2, b2), i.e., a1, b1 and a2, b2 have the same connectivity.
(b) P G1 | = ϕ(a1, b1) iff P G2 | = ϕ(a2, b2).
We first define some notions which will be helpful in proving the lemma. Let P G be a system with m processes An execution trace of the system P G is a series of global states in such that there is a transition from every kth state in the trace to the (k + 1)th state.
Given a trace t, we will denote the nth state in t by tn.
A witness in system P G for a LTL \ X formula ϕ(i, j) (where Pi and Pj are two processes in G) is an execution trace of S that satisfies the LTL \ X formula.
233 We now define the projection of an execution trace with respect to a set of indices I = {i1, . . .ik}. First we describe the collapse of a trace with respect to I.
Definition 6.7.1. Given an execution trace t of a system P G and a set of indices I, the collapse of t with respect to I is obtained by removing every global state, tn+1 in t such that ∀i ∈I.tn(i) = tn+1(i) Informally, a collapse of trace t is obtained by removing those global states from the trace which do not change the states of processes with indices I.
Definition 6.7.2. Given a collapsed trace tc of P G with respect to I the projection of t with respect to I is the series of states obtained by projecting each global state in tc onto the processes in I.
Lemma 6.7.3. If two execution traces, t1 and t2 have the same projection with respect to a set of processes, I, then the two traces satisfy exactly the same set of LTL\X properties over I.
Proof. This follows from the semantics of LTL \ X properties.
Lemma 6.7.4. A system P G with two indices i and j satisfies an LTL\X property ϕ(i, j) if and only if the system P G(i,j) satisfies the property ϕ(site1, site2).
Proof. We will prove that if P G satisifes a property ϕ(i, j) then P G(i,j) satisfies ϕ(site1, site2).
The proof for the other direction is exactly the same.
Consider any two-indexed LTL \ X property ϕ(i, j). Let system P G satisfy ϕ(i, j).
Consider a witness, w, for ϕ(i, j) in the system S. Obtain the projection, wp, of w with 234 respect to indices i, j. Note that each state in wp will be of the form (qi, qj) where qi, qj are local states of processes i, j respectively.
We will say a trace w′ of P G(i,j) matches wp if the projection of w′ with respect to in-dices site1 and site2 is isomorphic to wp modulo renaming. Clearly, if a trace w′ matching wp exists in P G(i,j) then the system P G(i,j) satisfies the property ϕ(site1, site2).
We will now construct a trace w′ of P G(i,j) that matches wp. The state of process i in P G will be matched by the state of process site1 in P G(i,j) and the state of process j will be matched by the state of process site2. Consider the first state (qi, qj) of the trace wp. Since (qi, qj) is the first state of the trace wp, both qi, qj must be the initial local states. The first state of w′ will then be (qsite1, qhub1, qsite2, qhub2) where qsite1, qsite2 are initial local states.
The hubs can be in any local state, so by default we require them to be in initial states as well.
The token could be held in three possible ways the state (qi, qj): • By process i.
• By process j.
• By neither i nor j In case the token is with process i then in w′ the token will be with process site1. In case the token is with process j then in w′ the token will be with process site2. In the last case, the first global state of w′, (qsite1, qhub1, qsite2, qhub2), is such that token is with qhub1 or qhub2. It is easy to see that the first state of w′ thus constructed matches the first state of wp.
235 Assume that we have been able to construct a prefix pf of w′ which matches the prefix of wp of length k. Denote the mth state in wp by wm p and the prefix of length m by m-prefix.
To extend the trace w′ to k + 1 states consider the states wk p and wk+1 p of wp. We have the following cases to consider: • In going from wk p to wk+1 p there is no change in the process holding the token. As-sume, without loss of generality, that the difference between wk p and wk+1 p is a change in the local state of process i. Consider w′k, the kth state of w′. Since w′ matches the k-prefix of wp, the states of process i in wk p and process site1 in w′k must match.
This means whatever action i can take, the same action can be taken by site1. Thus, we can extend w′ to k + 1 states by replicating the action of process i using process site1.
• In wk p the token is with i and in wk+1 p is with j. We can then infer that there must be a direct edge in G from process i to j, that is, G | = Φ→(i, j) must be true. Thus there must be a similar direct edge in G(i,j) from site1 to site2, that is in G(i,j) | = Φ→(i, j) is true. And since the prefix pf of w′ matches the k-prefix of wp, in the last state of pf, w′k, the token must be with process site1. Further, the states of site1 and site2 in w′k must be the same as the states of i and j (respectively) in wk p. Thus, we can extend w′ with the state that is obtained by a token transfer from site1 to site2. Thus we have a prefix of w′ that matches the k + 1-prefix of wp. The case where token is with j in wk p and with i in wk+1 p is analogous.
• In w_p^k the token is with neither i nor j and in w_p^{k+1} it is with j. This case has the following three sub-cases.
– In the k-prefix of w_p the token was last with j. That is, there is a state w_p^m, m < k, such that the token is with process j in w_p^m and in no state w_p^n, m < n ≤ k, is the token with either process i or j.
This implies that there is a path from j to itself in G that does not go through i, that is, G ⊨ Φ⟲(j, i). Then there must be a similar path from site2 to itself in G(i,j) which does not go through site1, that is, G(i,j) ⊨ Φ⟲(site2, site1) must hold. Since pf matches the k-prefix of w_p, the process that last had the token in pf must be site2. In the last state l of pf the token is with neither site1 nor site2.
We can infer that the token must be with process hub2, because site2 can send the token to either site1 or hub2 and the token is not with site1. Then we can add a series of states to pf such that, at the end, the token is transferred back to site2 and the only process that changes its state in this series of states prior to the token transfer is hub2. This is always possible because of our assumption that each process can send and receive the token infinitely often. Thus we now have a prefix of w′ that matches the first k + 1 states of w_p.
– In the k-prefix of w_p the token was last with i. This means that there is a path from i to j in G that goes through a third process, that is, Φ;(i, j) must hold in G. Then there must be a similar path from site1 to site2 in G(i,j) that goes through hub1. In the last state l of pf the token is with hub1. To see this, note that site1 can send the token either to hub1 or to site2, and in l the token cannot be with site2 (otherwise pf would not match the k-prefix of w_p). As before, we can add a series of states to pf such that at the end the token is with site2 and the only process that changes state prior to the token transfer is process hub1.
– In the k-prefix of w_p the token was never with i or j. That is, w_p^{k+1} is the first state where the token is with j. We have constructed pf such that it matches the k-prefix of w_p. Since there were no token transitions involving either i or j, all the transitions of i and j in pf must have been local transitions. Thus, it does not matter where the token was initially in pf. Since we are interested only in existential properties, we can construct pf such that the token is with process hub2 in the last state l. This means that from l we can have a token transition from process hub2 to site2. Thus we can extend pf so that it matches the (k + 1)-prefix of w_p.
The case where the token is with neither i nor j in the state w_p^k and with i in the state w_p^{k+1} is analogous.
Thus, we can construct a trace w′ of P^{G(i,j)} which matches w_p.
Note that we have implicitly used the fairness assumption for P^G. The assumption is implicit in the fact that there is always a (k + 1)-th state in pf that is to be matched.
Lemma 6.7.5. Let P^{G1} and P^{G2} be two systems. Further, let there be two processes indexed i and j in both G1 and G2. If for every two-indexed LTL\X property ϕ(i, j), P^{G1} ⊨ ϕ(i, j) ⇔ P^{G2} ⊨ ϕ(i, j), then G1(i,j) = G2(i,j).
Proof. The proof strategy is the following. For each path-type, we will give a two-indexed LTL \ X formula Ψ(i, j) such that if Ψ(i, j) holds on a system P G then the associated path-type exists between i and j in the network G.
The three formulas are:
• For Φ⟲(i): F(t_i ∧ (¬t_j U ¬(t_i ∨ t_j)) U t_i)
• For Φ;(i, j): F(t_i ∧ (¬t_j U ¬(t_i ∨ t_j)) U t_j)
• For Φ→(i, j): F(t_i ∧ (t_i U t_j))
It is easy to see that each of the three formulas implies the associated path type.
Now if P^{G1} satisfies exactly the same two-indexed properties as P^{G2}, then the two systems must satisfy exactly the same path-type formulas. Hence G1(i,j) = G2(i,j).
Lemma 6.4.5. We first prove that (i) ⇒ (ii). Assume that (G1, a1, b1) ∼2 (G2, a2, b2).
Then we know that G1(a1,b1) = G2(a2,b2).
By Lemma 6.7.4, P^{G1} ⊨ ϕ(a1, b1) ⇔ P^{G1(a1,b1)} ⊨ ϕ(site1, site2) ⇔ P^{G2(a2,b2)} ⊨ ϕ(site1, site2) ⇔ P^{G2} ⊨ ϕ(a2, b2).
For the other direction, assume P^{G1} ⊨ ϕ(a1, b1) ⇔ P^{G2} ⊨ ϕ(a2, b2). Now, P^{G1} ⊨ ϕ(a1, b1) ⇔ P^{G1(a1,b1)} ⊨ ϕ(site1, site2) and P^{G2} ⊨ ϕ(a2, b2) ⇔ P^{G2(a2,b2)} ⊨ ϕ(site1, site2).
Thus P^{G1(a1,b1)} ⊨ ϕ(site1, site2) ⇔ P^{G2(a2,b2)} ⊨ ϕ(site1, site2), which implies, by Lemma 6.7.5, that G1(a1,b1) = G2(a2,b2) and therefore (G1, a1, b1) ∼2 (G2, a2, b2).
Theorem 6.4.14. For k-indexed specifications, all network topologies T have (2k, 3^{k(k−1)}2^k)-bounded reductions.
Proof. Let ϕ be a k-indexed specification and G1, G2, . . . be an enumeration of the network graphs in T. Since model checking for each graph Gi ∈ T is (2k, 3^{k(k−1)}2^k)-bounded regardless of the size of Gi, we obtain a sequence of Boolean functions Bi over the same variables g_1, . . . , g_{3^{k(k−1)}2^k}. Consider now the (infinitary) conjunction B := ⋀_{i≥1} Bi. By Corollary 7, the function B expresses that for all Gi we have P^{Gi} ⊨ ϕ. It remains to show that B is equivalent to a finite formula. Since B depends only on a finite number (3^{k(k−1)}2^k) of Boolean variables, functional completeness of Boolean logic implies that B is equivalent to a finite formula of size at most 2^{3^{k(k−1)}2^k}.
Theorem 6.5.1.
For each number i there exists a CTL\X formula ϕ_i with the following properties:
• ϕ_i is satisfiable (and has a finite model).
• ϕ_i uses only two atomic propositions l and r.
• Every Kripke structure K where ϕ_i is true has at least i states.
• ϕ_i has the form EF ϕ′_i.
The result remains true when the Kripke structure is required to have a strongly connected transition relation.
Proof. Our goal is to describe a formula ϕ_i using atomic propositions l and r whose models must have at least i states. We will construct a large conjunction ⋀_{ψ∈Γ} ψ, and describe which formulas to put in Γ. The idea is simple: Γ needs to contain i CTL\X formulas which describe the existence of i different states. Then the formula EF ⋀_{ψ∈Γ} ψ will be the sought-for ϕ_i.
Consider a Kripke structure K as in Figure 6.5:
• In Level 0, it contains two distinct states L, R labelled with l and r respectively. To express the presence of these states, we let ψ^1_0 := (l ∧ ¬r) and ψ^2_0 := (r ∧ ¬l), and include EFψ^1_0 and EFψ^2_0 in Γ.
It is clear that EFψ^1_0 and EFψ^2_0 express the presence of two mutually exclusive states.
• In Level 1, K contains 2^2 − 1 = 3 states, such that the first one has {L, R}-free paths to L and R, the second one an {L, R}-free path only to L, and the third one an {L, R}-free path only to R.
[Figure 6.5: The Kripke structure K, constructed for three levels (the Level 0 states L and R, Level 1, an auxiliary node, and Level 2). The dashed lines indicate the connections necessary to achieve a strongly connected graph.]
The characteristic properties of Level 1 states are expressed by the formulas
ψ^1_1 := EF⁻ψ^1_0 ∧ EF⁻ψ^2_0, ψ^2_1 := EF⁻ψ^1_0 ∧ ¬EF⁻ψ^2_0, ψ^3_1 := ¬EF⁻ψ^1_0 ∧ EF⁻ψ^2_0,
where EF⁻x denotes E(¬l ∧ ¬r)U x, i.e., a variant of EF which forbids paths through L and R. To enforce the existence of the Level 1 states in the Kripke structure, we include EFψ^1_1, EFψ^2_1, EFψ^3_1 in Γ.
• In Level 2, K contains 2^3 − 1 = 7 states, such that every state in Level 2 can reach one of the 7 non-empty subsets of Level 1. The characteristic properties of Level 2 states can be expressed by formulas such as ψ^1_2 := EF⁻ψ^1_1 ∧ EF⁻ψ^2_1 ∧ EF⁻ψ^3_1 and ψ^2_2 := ¬EF⁻ψ^1_1 ∧ EF⁻ψ^2_1 ∧ EF⁻ψ^3_1, etc., including ψ^3_2 to ψ^7_2. To enforce the presence of Level 2 states in the Kripke structure, we include the formulas EFψ^i_2 for i = 1, . . . , 7 in Γ.
• In general, each Level k has at least 2^{k+1} − 1 states that differ in their relationship to the states in Level k − 1. The presence of such states is expressed by formulas EFψ^x_k.
All these formulas are included in Γ until the requested number i of different states is reached. By construction, all properties required by the theorem are trivially fulfilled. In particular, Figure 6.5 demonstrates that there always exists a strongly connected model.
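A small sketch of this construction (not from the thesis; plain Python with formulas built as strings, writing EF- for the restricted operator E(¬l ∧ ¬r)U): it generates the level formulas and keeps adding EFψ formulas to Γ until at least i of them are available, then returns EF of their conjunction.

```python
from itertools import combinations

def phi(i):
    """Build a CTL\\X-style formula EF(/\\ Gamma) forcing at least i distinct states.

    Formulas are plain strings; 'EF-' stands for the restricted operator
    E (~l & ~r) U x used in the construction above.
    """
    level = ["(l & ~r)", "(r & ~l)"]        # psi_0^1, psi_0^2
    gamma = [f"EF {f}" for f in level]
    while len(gamma) < i:
        nxt = []
        for k in range(1, len(level) + 1):  # every non-empty subset of the previous level
            for subset in combinations(level, k):
                conj = " & ".join(
                    (f"EF- {f}" if f in subset else f"~EF- {f}") for f in level
                )
                nxt.append(f"({conj})")
        level = nxt
        gamma += [f"EF {f}" for f in level]
    return "EF (" + " & ".join(gamma[:i]) + ")"

print(phi(5))
```

The sketch makes concrete how a two-proposition CTL\X formula can force arbitrarily many states, which is what Corollary 9 below exploits.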
The formula ϕ_i uses two labels l, r. To use the above theorem in the setting of systems with network graphs, we replace the labels l, r by atomic propositions t_x, t_y. Recall that an atomic proposition t_x states that the token is with process x. We will denote the modified formula by ϕ_i(x, y). We have the following proposition as a consequence of the above theorem.
Corollary 9. If P^G ⊨ ∃x, y. ϕ_i(x, y), where G = (S, C), then |S| ≥ i.
Proof. Consider the formula ∃x, y. ϕ_i(x, y) and suppose, towards a contradiction, that there is a system P^G, G = (S, C), where |S| < i, such that P^G ⊨ ∃x, y. ϕ_i(x, y). Then there exist indices a, b such that P^G ⊨ ϕ_i(a, b). We construct the Kripke structure K with state space S, transition relation C, initial state 1, and two atomic propositions t_a, t_b which hold true on states a and b respectively. Note that since the formula ϕ_i(a, b) is of the form EF ϕ′_i(a, b) and C is strongly connected, satisfaction of ϕ_i(a, b) does not depend on the choice of the initial state. Since we know that for all paths on P^G, the corresponding paths on G preserve properties without X, it follows that K ⊨ ϕ_i(a, b). By the above theorem, K must have at least i states, which contradicts our assumption that |S| < i. Thus, we have a proof by contradiction that if P^G ⊨ ∃x, y. ϕ_i(x, y) then the network graph G must have at least i nodes.
Corollary 8.
There exists a network topology T for which 2-indexed CTL \ X does not have cut-offs.
Proof. Let T be the class of strongly connected graphs. Then Corollary 9 tells us that ∃x, y.ϕn(x, y) does not have a cut-off for T.
6.8 Connection Topologies for 2-Indices
All the 36 possible connection topologies between two processes are presented below.
[Figures: the 36 connection topologies, each a graph over the four processes site1, hub1, hub2 and site2.]
Chapter 7
Conclusion
7.1 Summary
This thesis presents an efficient abstraction technique to facilitate model checking of parameterized systems with replicated processes. All successful applications of model checking thus far have made use of domain-specific abstraction techniques. Continuing this trend, we exploit the domain knowledge about parameterized systems to devise an appropriate abstraction method.
The problem of verifying parameterized systems is both challenging theoretically (because of their unboundedness) and very relevant practically (because many crucial components of real systems are parameterized). For example, in recent years, verification of cache coherence protocols has become a very important problem in the hardware industry. All the modern multi-core architectures have very intricate cache coherence protocols, and there are no rigorous techniques for their verification. Similarly, the number of controllers used in embedded applications, for example on automobiles, is also increasing, and, to facilitate efficient communication between the controllers, complex time-triggered protocols are being developed. These protocols are also parameterized as the number of controllers can vary. As with cache coherence protocols, there are no efficient automated techniques for verifying these protocols. To model check complex protocols like these, we need efficient abstraction techniques.
In this thesis, we present an abstraction method called environment abstraction for verifying parameterized systems with replicated processes. The main insight in this technique is that, when a human designer reasons about a system with replicated components, (s)he tends to focus on a reference component and consider the environment around it.
We formalize this insight and provide a rigorous framework for constructing such abstract models. The abstract models are quite precise and easy to construct. In most abstraction methods, liveness properties are more difficult to handle, even theoretically, than safety properties. Our method, however, has a simple extension to handle liveness properties.
Finally, most automatic abstraction-based methods for verifying parameterized systems use the atomicity assumption. In contrast, we are able to remove the atomicity assumption by adding monitor processes and, thus, verify protocols in their full generality. Our experiments with different cache and mutual exclusion protocols suggest that environment abstraction works extremely well in practice.
The insight of constructing an abstract model by considering one reference process and looking at the world around it can be generalized to different settings. Instead of just considering a collection of processes, each of which can talk with every other process, we consider a richer model in which processes are arranged on the nodes of a network graph. Thus, in addition to replication of processes, we also have an arbitrary network graph to reason about. This problem structure is quite common in real life. For instance, network routing protocols, which route data through complicated networks of machines, have to function no matter what the structure of the network is. Each of these machines runs the exact same routing protocol. While there has been work on verifying systems with replicated processes, there has not been much work on verifying systems with network graphs. In Chapter 6, we take the first steps towards verifying parameterized systems with network graphs. We consider the verification of two-process properties and show how to decompose a system with a large network into a collection of systems with constant-sized network graphs. The main idea is that it suffices to consider how the network looks from a pair of processes to figure out what properties the pair satisfies. It can also be shown that, for any pair of processes, there are only a finite number of possibilities for what the network around them can look like. The results presented in Chapter 6 also highlight an interesting contrast in the expressive power of LTL and CTL specifications.
We show that, while decomposition of large network into smaller ones is possible for LTL specifications, it is not possible for CTL specifications. Informally, two process CTL specifications can encode information about the number of other processes in the system and, thus, decomposition is not possible for CTL properties.
7.2 Extensions
In this thesis, we considered parameterized systems with replicated processes. Environment abstraction is quite general and can be applied even when the replicated components are not processes or if there are multiple types of replication. For instance, we can think of a memory bank as a collection of identical components (ignoring the contents of the memory). Similarly, we can treat a collection of jobs waiting to be scheduled in a queue as an instance of replication (ignoring the specifics of the job). We believe this viewpoint will lead to useful abstractions.
The abstract model that we construct is doubly exponential in the number of local state variables. In real cache coherence protocols, the internal state of each cache can be quite complex and thus our method might fail. To get around this, the internal states of local caches themselves might have to be abstracted before applying environment abstraction.
An interesting extension to our work would be to combine environment abstraction with standard abstraction for the internal states of the caches.
Our work in Chapter 6 lays the foundational results for the verification of parameterized systems with network graphs. While the system model does consider a network graph, the communication between the processes is very simple. An extension to our work would be to consider richer communication between processes. However, we suspect that the decomposition results may not exist even for LTL properties once we allow more than one token. It would be interesting to consider what restrictions to impose on the system model so that we can still obtain decomposition results.
The abstraction-based approach we have presented for verification of parameterized systems is just one possible approach. In most real-world parameterized systems, it seems to be the case that all possible two-process behaviors are exhausted when the parameter value is just 4 or 5. If such a cutoff really exists, then parameterized verification is no different from ordinary verification. But finding such cutoffs is very hard and no such cutoffs are currently known. A related idea is to determine a cutoff on trace length: it would be extremely useful in practice if we could show that all interesting behaviors are exhibited by traces of length less than a certain cutoff c. For instance, we could use bounded model checkers, which are typically faster than other types of model checkers, to explore the parameterized system up to depth c. If no bug is found, then the cutoff result ensures that the parameterized system is correct. While it seems such trace cutoffs must exist, no one has succeeded in finding them yet. Finding cutoff results is a challenging problem with significant practical impact.
Distributed and parallel systems are among the hardest systems for humans to reason about. Yet parallelism seems to afford the easiest route to scalability and increased performance. Consequently, highly parallel, distributed computer systems are becoming quite pervasive. Powerful verification techniques are required to ensure the correct functioning of these systems. Model checking, which performs an exhaustive search of the state space, seems ideally suited for verification of distributed systems. In this thesis, we have addressed the problem of model checking distributed protocols like cache coherence and mutual exclusion protocols and demonstrated that it is possible to efficiently and automatically model check such protocols in their full generality.
2.5x raise not a good play IMO for micros because of the rake and the fact that too many people are going to overcall and facing every pot multiway is not ideal. Am I correct in thinking this way? I use oldschool 3x open for micros. I press pot button for limpers. : r/poker
===============
Rvr_phoenix • 5 yr. ago
2.5x raise not a good play IMO for micros because of the rake and the fact that too many people are going to overcall and facing every pot multiway is not ideal. Am I correct in thinking this way? I use oldschool 3x open for micros. I press pot button for limpers.
GitHub - lichess-org/fishnet: Distributed Stockfish analysis for lichess.org
===============
fishnet: distributed Stockfish analysis for lichess.org
Installation
Request your personal fishnet key:
Install and run the fishnet client.
Download standalone binary
Select the binary for your platform from the latest release and run it.
```shell
# After download:
mv fishnet-x86_64-unknown-linux-musl fishnet
chmod +x fishnet
./fishnet --auto-update
```
Useful commands
```shell
./fishnet configure               # Rerun config dialog
./fishnet systemd --auto-update   # Print a .service file
./fishnet --help                  # List commands and options
```
Other installation methods: From source, Docker, Kubernetes, OpenShift
Pick an update strategy.
Automatic updates
Run with --auto-update as recommended above.
Subscribe to release announcements
With a GitHub account, you can watch this repository (can be set to release announcements only). See the top right corner on this page.
Video introduction
Watch @arex explain fishnet.
FAQ
Which engine does fishnet use?
fishnet uses Stockfish (hence the name) and Fairy-Stockfish for chess variants.
What are the requirements?
| Available for | 64-bit Intel and AMD | ARMv8 / Silicon |
| --- | --- | --- |
| Linux | x86_64-unknown-linux-musl | aarch64-unknown-linux-musl |
| Windows | x86_64-pc-windows-gnu.exe | |
| macOS | x86_64-apple-darwin | aarch64-apple-darwin |
| FreeBSD | build from source | |
Needs Linux or an operating system from around 2019 or later
Will max out the configured number of CPU cores
Uses about 64 MiB RAM per CPU core
A small amount of disk space
Low-bandwidth network communication with Lichess servers (only outgoing HTTP requests, so probably no firewall configuration required, IPv4 not required)
Is my CPU fast enough?
Almost all processors will be able to meet the requirement of ~2 meganodes in 6 seconds. Clients on the faster end will automatically be assigned analysis jobs that have humans waiting for the result (the user queue, as opposed to the system queue for slower clients).
Why does my client remain idle?
Your client may remain idle if fishnet estimates that another client would be able to complete the next batch more quickly, or if the client has been configured to join the queue only if a backlog is building up. By standing by, you're still contributing to reliability by providing redundancy, and also to the potential maximum throughput in case requests peak.
What happens if I stop my client?
Feel free to turn your client on and off at any time. By default, the client will try to finish any batches it has already started. On immediate shutdown, the client tries to inform Lichess that batches should be reassigned. If even that fails, Lichess will reassign the batches after a timeout.
Will fishnet use my GPU?
No, Stockfish is a classical alpha-beta engine. The neural network evaluation of Stockfish NNUE works efficiently on CPUs.
Why do I need a key?
The key allows us to trace provided analysis back to your Lichess account. You can use a single key to run multiple instances.
You do not need to request a key (nor our permission) to run private instances.
Is fishnet secure?
To the best of our knowledge. All engine input is carefully validated.
Note that you implicitly trust the authors and the GitHub and Amazon S3 infrastructure when running with --auto-update. You can mitigate this by running fishnet as an unprivileged user.
cargo-crev is used to review the trustworthiness of dependencies. cargo-auditable is used to embed dependency meta data into binaries.
Is there a leaderboard of contributors?
No, sorry, not publicly. It would incentivize gaming the metrics.
Can I autoscale fishnet in the cloud?
There is currently no ready-made solution, but an API for monitoring the job queue status is provided.
Protocol
See protocol.md for details. Also supports SSLKEYLOGFILE for inspection at runtime.
License
fishnet is licensed under the GPLv3+. See LICENSE.txt or ./fishnet license for the full license text.
Randomized Algorithms
Lecture 18: Total variation distance and coupling
Raul Garcia-Patron, School of Informatics, University of Edinburgh
Markov chains and mixing times
▶Sampling from a given probability distribution is a fundamental algorithmic tool.
▶We have seen that in some cases one can design a Markov chain that has as stationary distribution our target distribution.
▶After sufficiently many steps we converge to the target distribution regardless of the initial state.
▶To achieve our goal, we need a guarantee of convergence to the target distribution; this is the goal of this and the next lecture.
1. This lecture: notion of distance + coupling as a tool to prove mixing times.
2. Next lecture: path coupling to prove mixing times.
Total variation distance
Definition (Definition 12.1). The total variation distance between two distributions D1 and D2 on a countable state space S is given by ∥D1 − D2∥ = (1/2) ∑_{x∈S} |D1(x) − D2(x)|.
Properties:
1. Triangle inequality: ∥D1 − D3∥ ≤ ∥D1 − D2∥ + ∥D2 − D3∥.
2. ∥D1 − D2∥ = 0 only if D1 = D2.
3. 0 ≤ ∥D1 − D2∥ ≤ 1.
▶Let π be the stationary distribution of a Markov chain M. We want to bound the distance between the distribution of the chain after t steps when starting at state x, i.e., bound ∥p_x^t − π∥.
▶We want to show that it becomes smaller than ϵ within a number of steps t that is polynomial in the size of the problem.
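A quick numeric check of Definition 12.1 (a standalone Python sketch, not part of the lecture notes); it also reproduces the two examples on the next slide.

```python
# Sketch: total variation distance between two distributions on a common finite state space,
# ||D1 - D2|| = (1/2) * sum_x |D1(x) - D2(x)|   (Definition 12.1).

def tv_distance(d1, d2):
    states = set(d1) | set(d2)
    return 0.5 * sum(abs(d1.get(x, 0.0) - d2.get(x, 0.0)) for x in states)

p = 0.3
coin1 = {0: p, 1: 1 - p}
coin2 = {0: 1 - p, 1: p}
print(tv_distance(coin1, coin2))             # 1 - 2p = 0.4

# Distributions with non-overlapping supports are at the maximal distance 1.
print(tv_distance({"a": 1.0}, {"b": 1.0}))   # 1.0
```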
Examples
▶Two biased coins:
1. {p(0) = p, p(1) = 1 − p},
2. {q(0) = 1 − p, q(1) = p} (where 0 ≤ p ≤ 1/2),
∥p − q∥ = (1/2)(|p − (1 − p)| + |(1 − p) − p|) = 1 − 2p.
▶Non-overlapping supports:
1. for all W ⊆ A, D1(W) > 0 and D2(W) = 0,
2. for all W ⊆ Ā, D1(W) = 0 and D2(W) ≥ 0.
∥D1 − D2∥ = (1/2) ∑_{x∈S} |D1(x) − D2(x)| = (1/2) ∑_{x∈A} D1(x) + (1/2) ∑_{x∈Ā} D2(x) = 1.
Operational interpretation
Lemma 12.1. Let ∥D1 − D2∥ = (1/2) ∑_{x∈S} |D1(x) − D2(x)| and, for any A ⊆ S, let Di(A) = ∑_{x∈A} Di(x), i.e., the weight of the subset A. Then ∥D1 − D2∥ = max_{A⊆S} |D1(A) − D2(A)|.
1. For any B ⊆ S we have ∥D1 − D2∥ ≥ |D1(B) − D2(B)|.
▶It can also be used to prove non-convergence: if there exists B such that |D1(B) − D2(B)| > c, then also ∥D1 − D2∥ > c.
2. If ∥D1 − D2∥ ≤ ϵ, then D1 and D2 cannot be distinguished up to error ϵ, i.e., whether you sample from one or the other is indistinguishable on any subset B ⊆ S!
▶Probability of guessing distribution 1 or 2 right: P^max_guess = (1/2)(1 + ∥D1 − D2∥).
Proof of Lemma 12.1
▶∥D1 − D2∥ = (1/2) ∑_{x∈S} |D1(x) − D2(x)|.
▶For A ⊆ S: ∥D1 − D2∥ = max_{A⊆S} |D1(A) − D2(A)|.
Proof.
1. Let S+ ⊆ S be the set of x such that D1(x) ≥ D2(x), and let S− be its complement.
2. max_{A⊆S} (D1(A) − D2(A)) = D1(S+) − D2(S+).
3. max_{A⊆S} (D2(A) − D1(A)) = D2(S−) − D1(S−).
4. D1(S+) + D1(S−) = 1 = D2(S+) + D2(S−), hence D1(S+) − D2(S+) = D2(S−) − D1(S−).
5. max_{A⊆S} |D1(A) − D2(A)| = |D1(S+) − D2(S+)| = |D1(S−) − D2(S−)|.
6. |D1(S+) − D2(S+)| + |D1(S−) − D2(S−)| = 2∥D1 − D2∥ (definition of TV).
Mixing time
Definition (Definition 12.2). Let M be a finite, irreducible and aperiodic Markov chain over the state space Ω and let π be its stationary distribution. We define ∆_x(t) = ∥M^t[x, ·] − π∥ and ∆(t) = max_{x∈Ω} ∆_x(t).
We also define τ_x(ϵ) = min{t : ∆_x(t) ≤ ϵ} and τ(ϵ) = max_{x∈Ω} τ_x(ϵ).
1. τ(ϵ) is called mixing time.
2. A chain is rapidly mixing if τ(ϵ) is polynomial in log(1/ϵ) and the size of the problem.
3. There are two main techniques for upper-bounding mixing time: ▶Coupling: nice tight bounds when it works.
▶Conductance: worse bounds, works on a larger pool.
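For a small chain, ∆(t) and τ(ϵ) from Definition 12.2 can be computed directly by powering the transition matrix. The sketch below (not from the notes; plain Python, no external libraries) does this for a lazy random walk on a 3-cycle.

```python
# Sketch: compute Delta(t) = max_x ||P^t[x, .] - pi|| and tau(eps) for a small chain
# by repeated matrix-vector multiplication (Definition 12.2).

def step(row, P):
    n = len(P)
    return [sum(row[k] * P[k][j] for k in range(n)) for j in range(n)]

def mixing_time(P, pi, eps, t_max=10_000):
    n = len(P)
    rows = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]  # one start per state x
    for t in range(1, t_max + 1):
        rows = [step(r, P) for r in rows]
        delta = max(0.5 * sum(abs(r[j] - pi[j]) for j in range(n)) for r in rows)
        if delta <= eps:
            return t
    return None

# Lazy random walk on a 3-cycle: stay put with probability 1/2, otherwise move to a uniform neighbour.
P = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
pi = [1 / 3, 1 / 3, 1 / 3]
print(mixing_time(P, pi, eps=0.01))
```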
Coupling as an upper bound on the TV distance
Definition (Definition 12.2). A coupling of two probability distributions µ and ν is a pair of random variables (X, Y) defined on a single probability space, i.e., a joint probability distribution q on Ω × Ω such that ∑_{y∈Ω} q(x, y) = µ(x) and ∑_{x∈Ω} q(x, y) = ν(y).
Lemma 12.3. Given distributions µ(x) and ν(x) on a state space Ω, all couplings (X, Y) satisfy inf Pr(X ≠ Y) ≥ ∥µ − ν∥.
▶This will allow us to upper-bound distances between two Markov chains at step t, and also with respect to the stationary distribution, which leads to upper bounds on mixing times.
Coupling example I
Coupling: ∑_{y∈Ω} q(x, y) = µ(x) and ∑_{x∈Ω} q(x, y) = ν(y). Bound on TV: inf Pr(X ≠ Y) ≥ ∥µ − ν∥.
Consider two fair coins: µ(x) = ν(x) = c(x) = 1/2.
It is trivial to see that ∥µ − ν∥ = ∥c − c∥ = 0, as both are equal.
A trivial coupling consists of two independent coins:
▶Because q(x, y) = c(x)c(y) ⇒ ∑_{y∈Ω} q(x, y) = c(x) = 1/2.
▶Remark that Pr(X ≠ Y) = 1/2 > 0 = ∥µ − ν∥.
Coupling example II
Consider again two fair coins, µ(x) = ν(x) = c(x) = 1/2, so ∥µ − ν∥ = 0.
Consider now a perfectly correlated coin:
▶Because q(0, 0) = q(1, 1) = 1/2 ⇒ ∑_{y∈Ω} q(x, y) = c(x) = 1/2.
▶Remark that Pr(X ≠ Y) = 0 = ∥µ − ν∥.
▶We can saturate the lower bound by building a correlated joint distribution!
Coupling example III
Consider the distributions
1. {µ(0) = 3/4, µ(1) = 1/4},
2. {ν(0) = 1/4, ν(1) = 3/4}.
▶Remark that ∥µ − ν∥ = 1/2.
▶The following algorithm generates a coupling.
Algorithm COUPLING COINS()
1. Generate a random bit b1 with p(b1 = 0) = 1/2.
2. If b1 = 0 then generate a perfect random bit b2 and fix X = Y = b2.
3. Else fix X = 0 and Y = 1.
▶There always exists a coupling saturating ∥D1 − D2∥.
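The algorithm above is easy to simulate. The standalone sketch below (not part of the notes) draws from the coupling, checks the two marginals, and estimates Pr(X ≠ Y), which should come out close to ∥µ − ν∥ = 1/2.

```python
import random

def coupled_coins():
    """One draw from the coupling of mu = {0: 3/4, 1: 1/4} and nu = {0: 1/4, 1: 3/4}."""
    if random.random() < 0.5:          # b1 = 0: copy one fresh fair bit into both coordinates
        b2 = random.randint(0, 1)
        return b2, b2
    return 0, 1                        # b1 = 1: X = 0 and Y = 1

n = 100_000
samples = [coupled_coins() for _ in range(n)]
print(sum(x == 0 for x, _ in samples) / n)   # marginal of X: Pr(X = 0) ~ 3/4
print(sum(y == 0 for _, y in samples) / n)   # marginal of Y: Pr(Y = 0) ~ 1/4
print(sum(x != y for x, y in samples) / n)   # Pr(X != Y) ~ 1/2 = ||mu - nu||
```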
▶Bound on TV: inf Pr(X ̸= Y) ≥∥µ −ν∥ Proof.
1. µ(A) −ν(A) = Pr{X ∈A} −Pr{Y ∈A} 2.
µ(A) −ν(A) = (Pr{X ∈A, y ∈A} + Pr{X ∈A, Y / ∈A}) − (Pr{X ∈A, Y ∈A} + Pr{X / ∈A, Y ∈A}) = Pr{X ∈A, Y / ∈A} −Pr{X / ∈A, Y ∈A} ≤ Pr{X / ∈A, Y ∈A} ≤ Pr{X ̸= Y} It has to hold for all coupling (X, Y), including the one that provides the minimum of Pr(X ̸= Y).
RA (2022/23) – Lecture 18 – slide 12 Coupling of Markov chains 1. We have two chains Xt and Yt that independently behave like the original one, governed by the transition rule P.
2. We couple the chains X and Y, on a joint chain Z = (X, Y).
3. We design a Markov process M acting on Z (both X and Y), such that locally on each chain it still behaves as P, but globally the process is correlated.
▶Pr(Xt+1 = x ′|Zt = (x, y)) = P(x, x ′), ▶Pr(Yt+1 = y ′|Zt = (x, y)) = P(x, x ′).
4. We are interested in chains that: ▶Bring the two copies of the chain to the same state ▶Once in same state they will make exactly the same move and remain equal RA (2022/23) – Lecture 18 – slide 13 Coupling Lemma Lemma (Lemma 12.2|Coupling lemma) Let Zt = (Xt, Yt) be a coupling for a Markov chain M on a state space S. Suppose there is a T such that, for every x, y ∈S, Pr(XT ̸= YT|X0 = x, Y0 = y) ≤ϵ Then τ(ϵ) ≤T.
RA (2022/23) – Lecture 18 – slide 14 Shuffling cards ▶We want to couple two chains: think of having two decks of card arranged in different configurations X0 and Y0.
▶Each configuration is a given arrangement of the cards of a deck.
▶Coupling: 1. Choose a position j uniformly at random from deck 1 and then generate Xt+1 from Xt by moving the j-th card to the top. Let’s call that card C.
2. Search for card C on the second deck and move it to the top to obtain Yt+1 from Yt.
▶The movements of card when looking at the deck independently is a standard reshuffling of card with probability 1/n.
▶Once a card is moved to the top it follows the same trajectory on both decks.
▶Mixing when all cards have been moved to the top.
▶We have mapped the problem to coupon collector: τ(ϵ) = n log(n/ϵ).
RA (2022/23) – Lecture 18 – slide 15 Lazy random Walk on Hypercube ▶Lazy random walk on the hypercube ¯ x = (x1, x2, ..., xn).
1. Select a coordinate uniformly at random from 1 to n.
2. Set the value to 0 or 1 with equal probability 1/2.
▶Remark that with probability 1/2 you remain in the same configuration ⇒aperiodicity.
▶Coupling between Xt and Yt via implementing the same move on both chains.
▶Once the i-th coordinate chosen both chain will agree on i in future moves.
▶The problem is again mapped to a coupon collector: τ(ϵ) = n log(n/ϵ).
RA (2022/23) – Lecture 18 – slide 16 Proof of Lemma 12.2: Coupling lemma ▶Pr(XT ̸= YT|X0 = x, Y0 = y) ≤ϵ ⇒ τ(ϵ) ≤T ▶Choose Y0 according the uniform distribution and X0 takes an arbitrary value. For T, ϵ such that lemma is satisfied and for any A ⊆S: Pr(XT ∈A) ≥ Pr((XT = YT) ∩(YT ∈A)) = 1 −Pr((XT ̸= YT) ∪YT / ∈A) ≥(1 −Pr(YT / ∈A)) −Pr(XT ̸= YT) ≥Pr(YT ∈A) −ϵ = π(A) −ϵ ▶The same argument for S −A shows: Pr(XT / ∈A) ≥π(S −A) −ϵ or equivalently Pr(XT ∈A) ≤π(A) + ϵ.
▶It follows: ||pT x −π|| = max x,A |pT x (A) −π(A)| ≤ϵ.
RA (2022/23) – Lecture 18 – slide 17 Geometric Convergence Theorem (Theorem 12.5) Let P be the transition matrix for a finite, irreducible, aperiodic Markov chain. Let mj be the smallest entry in the j-th column of the matrix, and let m = P j mj. Then, for all x and t, ||pt x −π|| ≤(1 −m)t.
Proof.
1. The chain reached j with probability at least mj.
2. Design coupling where both chain move the j with probability at lest mj.
3. As it holds for all j. The two chains couple with probability m at each step.
4. The probability they do not couple at step t is (1 −m)t.
RA (2022/23) – Lecture 18 – slide 18
|
27
|
Quantum Computation and Quantum Information | Higher Education from Cambridge
===============
Skip to main contentAccessibility help
t d
We use cookies to distinguish you from other users and to provide you with a better experience on our websites. Close this message to accept cookies or find out how to manage your cookie settings.
[x]
Internet Explorer 11 is being discontinued by Microsoft in August 2021. If you have difficulties viewing the site on Internet Explorer 11 we recommend using a different browser such as Microsoft Edge, Google Chrome, Apple Safari or Mozilla Firefox.
X
Discover Content
Products and Services
Register
Log In
(0) Cart
Search
Browse
Services
Institution Login
Search
Home
Subjects
Quantum Computation and Quantum Information
Quantum Computation and Quantum Information 10th Anniversary Edition
[x] Search within full text
eTextbook
US$89.00
Add to cart
Hardback
US$89.00
Buy the print book
Request instructor examination copy
Textbook
Senior/First Year Graduate
Library eCollections
Authors
Michael A. Nielsen
,
Isaac L. Chuang
,Massachusetts Institute of Technology
Published 2010
Description
One of the most cited books in physics of all time, Quantum Computation and Quantum Information remains the best textbook in this exciting field of science. This 10th anniversary edition includes an introduction from the authors setting the work in context. This comprehensive textbook describes such remarkable effects as fast quantum algorithms, quantum teleportation, quantum cryptography and quantum error-correction. Quantum mechanics and computer science are introduced before moving on to describe what a quantum computer is, how it can be…
Key features
The best introduction to quantum computing and quantum information, written by experts on the subject
Gives a comprehensive introduction to the main ideas and techniques, with hundreds of exercises and figures
Contains extensive background material so it can be understood without prior knowledge of quantum mechanics or quantum science
About the book
Subjects: Computer Science, Cryptography, Cryptology and Coding, Physics and Astronomy, Quantum Physics and Quantum Information
Format: Hardback
Publication date: 31 January 2011
ISBN: 9781107002173
Dimensions (mm): 247 x 174 mm
Weight: 1.5 kg
Contains: 200 b/w illus., 10 tables, 598 exercises
Page extent: 702 pages
Format: Digital
Publication date: 05 June 2012
ISBN: 9780511976667
Authors
Michael A. Nielsen
Michael Nielsen was educated at the University of Queensland, and as a Fulbright Scholar at the University of New Mexico. He worked as the Richard Chace Tolman Fellow at Caltech at Los Alamos National Laboratory, was Foundation Professor of Quantum Information Science and a Federation Fellow at the University of Queensland, and a Senior Faculty Member at the Perimeter Institute for Theoretical Physics.
Isaac L. Chuang,Massachusetts Institute of Technology
Isaac Chuang is an Associate Professor at the Massachusetts Institute of Technology, jointly appointed in Electrical Engineering and Computer Science and in Physics. He leads the quanta research group at the Center for Ultracold Atoms, in the MIT Research Laboratory of Electronics, which seeks to understand and create information technology and intelligence from the fundamental building blocks of physical systems, atoms and molecules.
|
28
|
Dirichlet's Theorem on Arithmetic Progressions
Anthony Várilly
Harvard University, Cambridge, MA 02138
1 Introduction
Dirichlet's theorem on arithmetic progressions is a gem of number theory. A great part of its beauty lies in the simplicity of its statement.
Theorem 1.1 (Dirichlet). Let $a, m \in \mathbb{Z}$, with $(a, m) = 1$. Then there are infinitely many prime numbers in the sequence of integers $a,\ a + m,\ a + 2m,\ \ldots,\ a + km,\ \ldots$ for $k \in \mathbb{N}$.
A sixth grader knows enough mathematics to understand this particular formulation of the theorem. However, many deep ideas of algebra and analysis are required to prove it.
In order to motivate some of the ideas we will introduce, we will sketch how to show there are infinitely many primes of the form 4k + 1, the special case a = 1, m = 4 of Theorem 1.1.
We shall follow Knapp’s exposition in our sketch .
Define the (real valued) Riemann zeta function as
$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}, \qquad s > 1. \tag{1}$$
Throughout this paper, $p$ shall denote a prime number, unless otherwise indicated. It is possible to write the zeta function as the infinite product
$$\zeta(s) = \prod_{p} \frac{1}{1 - p^{-s}}. \tag{2}$$
To see why this is true, notice that for finite $N$,
$$\prod_{p \le N} \frac{1}{1 - p^{-s}} = \sum_{n \in S} \frac{1}{n^s}$$
where $S$ is the set of natural numbers whose prime factors do not exceed $N$. Letting $N \to \infty$ we obtain the result. With this product formula for $\zeta(s)$, it is possible to show (and we will do so in the proof of Theorem 1.1) that
$$\log \zeta(s) = \sum_{p} \frac{1}{p^s} + g(s) \tag{3}$$
where $g(s)$ is bounded as $s \to 1$.
Define a function $\chi : \mathbb{Z} \to \{-1, 0, 1\}$ by
$$\chi(a) = \begin{cases} 0 & \text{if } a \text{ is even}, \\ 1 & \text{if } a \equiv 1 \bmod 4, \\ -1 & \text{if } a \equiv 3 \bmod 4. \end{cases}$$
This function will allow us to distinguish primes of the form $4k+1$ and $4k+3$ from one another. Notice that $\chi(mn) = \chi(m)\chi(n)$ for all integers $m$ and $n$. Now let
$$L(s, \chi) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n^s}. \tag{4}$$
Since $\chi$ is multiplicative for all integers (we say $\chi$ is strictly multiplicative in this case), one can write, just like in the case of the zeta function,
$$L(s, \chi) = \prod_{p} \frac{1}{1 - \chi(p)p^{-s}}. \tag{5}$$
This product formula can be used to show that
$$\log L(s, \chi) = \sum_{p} \frac{\chi(p)}{p^s} + g_1(s, \chi) \tag{6}$$
where $g_1(s, \chi)$ is a function that remains bounded as $s \to 1$. Combining (3) and (6) we get
$$\log \zeta(s) + \log L(s, \chi) = 2 \sum_{p \equiv 1 (4)} \frac{1}{p^s} + \frac{1}{2^s} + g(s) + g_1(s, \chi), \tag{7}$$
$$\log \zeta(s) - \log L(s, \chi) = 2 \sum_{p \equiv 3 (4)} \frac{1}{p^s} + \frac{1}{2^s} + g(s) - g_1(s, \chi). \tag{8}$$
From the Taylor expansion of $\arctan x$, $L(1, \chi) = \pi/4 > 0$. But $\zeta(s)$ diverges as $s \to 1$, so the left hand sides of (7) and (8) tend to infinity as $s \to 1$. Since both $1/2^s + g(s) + g_1(s, \chi)$ and $1/2^s + g(s) - g_1(s, \chi)$ remain bounded, it follows that both sums $\sum_{p \equiv 1(4)} 1/p^s$ and $\sum_{p \equiv 3(4)} 1/p^s$ diverge as $s \to 1$. This proves there are infinitely many primes of the form $4k+1$. As a bonus, we obtained the existence of infinitely many primes of the form $4k+3$.
There were two crucial ideas that made this last proof possible. First, it was imperative that L(1, χ) was finite and non-zero, so that its logarithm remain bounded in (8). The other key idea was the use of the function χ to ‘filter out’ the primes of the form 4k + 1 from all other primes. To prove Dirichlet’s theorem, we’ll need functions like χ that will filter out primes of the form a+km. We thus direct our attention to such functions: group characters.
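As a purely illustrative check (ours, not part of the paper), one can verify numerically that the partial sums of L(1, χ) approach π/4 and that the two filtered sums over primes p ≡ 1 and p ≡ 3 (mod 4) both keep growing as more primes are included; the helper names below are arbitrary.

```python
import math

def primes_up_to(n):
    """Naive sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def chi(a):
    """The non-trivial character modulo 4 used above."""
    if a % 2 == 0:
        return 0
    return 1 if a % 4 == 1 else -1

# Partial sums of L(1, chi) = 1 - 1/3 + 1/5 - ... approach pi/4.
print(sum(chi(n) / n for n in range(1, 200001)), math.pi / 4)

# The filtered sums over primes p = 1 and p = 3 (mod 4) both keep growing
# (slowly) as the cutoff increases, in line with the divergence argument.
for cutoff in (10_000, 1_000_000):
    ps = primes_up_to(cutoff)
    s1 = sum(1 / p for p in ps if p % 4 == 1)
    s3 = sum(1 / p for p in ps if p % 4 == 3)
    print(cutoff, s1, s3)
```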
2 Group Characters
Let $G$ be a finite abelian group. A group character is a homomorphism $\chi : G \to \mathbb{C}^*$. The characters of a group themselves form a group under pointwise multiplication. We call this group the dual of $G$ and denote it $\widehat{G}$.
If $G$ is a cyclic group of order $n$, then it is easy to describe $\widehat{G}$. Let $g$ be a generator of $G$. Then $\chi(g) = w$ for some $w \in \mathbb{C}^*$. Since $\chi$ is a homomorphism, $1 = \chi(1) = \chi(g^n) = \chi(g)^n = w^n$. Hence $w$ is an $n$th root of unity. Conversely, let $w$ be an $n$th root of unity. Then we can define a character $\chi \in \widehat{G}$ by setting $\chi(g) = w$. Notice that $\chi^{-1}(a) = \overline{\chi(a)}$ for all $a \in G$. We have a bijective correspondence between the group of $n$th roots of unity $\mu_n$ and $\widehat{G}$. In fact, it is easy to see that this correspondence gives an isomorphism. Since $\mu_n \cong \widehat{G}$, it follows that $G \cong \widehat{G}$.
Now let $G$ be any finite abelian group. The structure theorem for finite abelian groups tells us $G$ can be written as a direct product of cyclic groups, $G \cong C_{n_1} \times \cdots \times C_{n_k}$. Let $g_i$ be a generator of $C_{n_i}$. Every element of $G$ can be written as a product of the $g_i$'s to the appropriate powers, so a character of $G$ is completely determined by the images of the $g_i$'s. These images must again be roots of unity.
Conversely, we can define a character $\chi_i \in \widehat{G}$ by sending $g_i$ to an $n_i$th root of unity $w_{n_i}$ and all other generators $g_j$ to the identity element. It is easy to see that in this case $G \cong \widehat{G}$ as well. In particular, a group and its dual have the same order.
2.1 Examples of Group Characters
Example 2.1. The trivial character $\chi : G \to \mathbb{C}^*$ defined by $\chi(a) = 1$ for all $a \in G$.
Example 2.2. Dirichlet characters modulo $m$: Let $G = (\mathbb{Z}/m\mathbb{Z})^*$. Then $G$ is a finite abelian group with $\varphi(m)$ elements (here $\varphi$ is the Euler totient function). The Dirichlet characters can be extended to all of $\mathbb{Z}$ by setting $\chi(a) = 0$ if $(a, m) > 1$ and letting $\chi(a + m) = \chi(a)$ for all integers $a$. These extensions are not themselves group characters (a character can't take the value 0), but they are multiplicative functions on $\mathbb{Z}$. Through an abuse of language, we will often refer to these extensions as Dirichlet characters modulo $m$.
Example 2.3. The principal Dirichlet character modulo $m$ is the extension to $\mathbb{Z}$ (as a multiplicative function) of the trivial character of $(\mathbb{Z}/m\mathbb{Z})^*$:
$$\chi_0(a) = \begin{cases} 1 & \text{if } (a, m) = 1, \\ 0 & \text{otherwise.} \end{cases}$$
We shall often write $1$ for the principal character instead of $\chi_0$.
Example 2.4. Let $m = 4$ in Example 2.2. Then $(\mathbb{Z}/4\mathbb{Z})^* \cong \mathbb{Z}/2\mathbb{Z}$, so the dual of $(\mathbb{Z}/4\mathbb{Z})^*$ has one non-trivial character; it is given by
$$\chi(a) = \begin{cases} 1 & \text{if } a \equiv 1 \bmod 4, \\ -1 & \text{if } a \equiv 3 \bmod 4, \\ 0 & \text{otherwise.} \end{cases}$$
Example 2.5. Let $m = p$ in Example 2.2; here $p$ is an odd prime number. The dual of $(\mathbb{Z}/p\mathbb{Z})^*$ will be cyclic of order $p - 1$. Hence there will be a character $\chi$ of order 2, that is, $\chi^2 = \chi_0$. If $a$ is a quadratic residue modulo $p$, then $\chi(a)$ is forced to be 1. If $a$ is a quadratic non-residue, then $\chi(a)$ is forced to be $-1$. Thus we can identify $\chi$ with the familiar Legendre symbol, $\chi(a) = \left(\frac{a}{p}\right)$.
Remark. The characters $\chi$ of $\widehat{G}$ are strictly multiplicative, that is, $\chi(ab) = \chi(a)\chi(b)$ for all $a, b \in G$. This follows from the definition of group homomorphism.
2.2 Orthogonality Relations As we said earlier, group characters are essential to the proof of Dirichlet’s theorem because they let us ‘filter out’ the primes of the form a + km. But how is this the case? It turns out the characters of a group satisfy certain orthogonality relations. These relations hold the key to the process of ‘filtering primes’.
Theorem 2.1. Let $\chi \in \widehat{G}$. Then
$$\frac{1}{|G|} \sum_{a \in G} \chi(a) = \begin{cases} 1 & \text{if } \chi = 1, \\ 0 & \text{if } \chi \ne 1. \end{cases} \tag{9}$$
Proof. If $\chi = 1$, the sum adds up to the number of elements in $G$. Otherwise, choose $b \in G$ such that $\chi(b) \ne 1$. Then
$$\chi(b) \cdot \frac{1}{|G|} \sum_{a \in G} \chi(a) = \frac{1}{|G|} \sum_{a \in G} \chi(a)\chi(b) = \frac{1}{|G|} \sum_{a \in G} \chi(ab) = \frac{1}{|G|} \sum_{a \in G} \chi(a).$$
The last equality follows from the fact that as $a$ ranges through the elements of $G$, so does $ab$. Hence we have
$$(\chi(b) - 1) \cdot \frac{1}{|G|} \sum_{a \in G} \chi(a) = 0.$$
Since $\chi(b) \ne 1$, (9) follows.
By applying Theorem 2.1 to the dual group $\widehat{G}$ and since $G \cong \widehat{\widehat{G}}$, we get the following result.
Corollary 2.2. Let $a \in G$. Then
$$\frac{1}{|\widehat{G}|} \sum_{\chi \in \widehat{G}} \chi(a) = \begin{cases} 1 & \text{if } a = 1, \\ 0 & \text{if } a \ne 1. \end{cases} \tag{10}$$
Consider now two characters $\chi, \psi \in \widehat{G}$. Since $\widehat{G}$ is a group, $\chi\psi^{-1} \in \widehat{G}$, and $\sum_a \chi\psi^{-1}(a) = \sum_a \chi(a)\psi^{-1}(a) = \sum_a \chi(a)\overline{\psi(a)}$. By Theorem 2.1, we have
$$\frac{1}{|G|} \sum_{a \in G} \chi(a)\overline{\psi(a)} = \begin{cases} 1 & \text{if } \chi = \psi, \\ 0 & \text{otherwise.} \end{cases} \tag{11}$$
Similarly, $\sum_\chi \chi(ab^{-1}) = \sum_\chi \chi(a)\chi(b^{-1}) = \sum_\chi \chi(a)\overline{\chi(b)}$, so that by Corollary 2.2, we get
$$\frac{1}{|\widehat{G}|} \sum_{\chi \in \widehat{G}} \chi(a)\overline{\chi(b)} = \begin{cases} 1 & \text{if } a = b, \\ 0 & \text{otherwise.} \end{cases} \tag{12}$$
Equations (11) and (12) are referred to as the orthogonality relations for group characters. A special case of these relations, which is of interest to us, occurs when $G = (\mathbb{Z}/m\mathbb{Z})^*$.
Corollary 2.3 (Orthogonality relations for Dirichlet Characters). Let $\chi$ and $\psi$ be Dirichlet characters modulo $m$, and let $a, b$ be integers. Then
$$\frac{1}{\varphi(m)} \sum_{a=0}^{m-1} \chi(a)\overline{\psi(a)} = \begin{cases} 1 & \text{if } \chi = \psi, \\ 0 & \text{otherwise,} \end{cases} \tag{13}$$
$$\frac{1}{\varphi(m)} \sum_{\chi} \chi(a)\overline{\chi(b)} = \begin{cases} 1 & \text{if } a \equiv b \bmod m, \\ 0 & \text{otherwise.} \end{cases} \tag{14}$$
We now turn our attention to series like (4). A careful study of them, together with our knowledge of group characters is enough to prove Theorem 1.1.
3 Dirichlet Series
A series $\sum_{n=1}^{\infty} \frac{a_n}{n^s}$ with $a_n$ and $s$ complex is called a Dirichlet series. We will be primarily concerned with series where $a_n$ is a Dirichlet character modulo $m$. First, we must know something about a Dirichlet series' region of convergence. We follow Knapp's treatment on Dirichlet series for the following theorems.
Theorem 3.1. Let $\sum_{n=1}^{\infty} \frac{a_n}{n^s}$ be a Dirichlet series. If the series converges for a particular $s = s_0$, then it converges uniformly on the open half-plane $\operatorname{Re} s > \operatorname{Re} s_0$. Furthermore, the sum is analytic in this region.
We will need Abel's summation formula to prove the theorem. Suppose $\{u_n\}$ and $\{v_n\}$ are sequences of complex numbers such that $\sum_{n=1}^{\infty} u_n v_n$ converges. Let $U_n = \sum_{i=1}^{n} u_i$; if $U_n v_n \to 0$ as $n \to \infty$, then
$$\sum_{n=1}^{\infty} u_n v_n = \sum_{n=1}^{\infty} U_n (v_n - v_{n+1}).$$
Proof of Theorem 3.1. We have $\frac{a_n}{n^s} = \frac{a_n}{n^{s_0}} \cdot \frac{1}{n^{s-s_0}}$. Let $u_n = \frac{a_n}{n^{s_0}}$ and $v_n = \frac{1}{n^{s-s_0}}$. We know $\{U_n\}$ is convergent by hypothesis, and $v_n \to 0$ uniformly on the half-plane $\operatorname{Re} s > \operatorname{Re} s_0$. Thus $U_n v_n \to 0$ as $n \to \infty$ in this region. Say $U_n \to U$ as $n \to \infty$. Then
$$\left| \sum u_n v_n \right| = \left| \sum U_n (v_n - v_{n+1}) \right| \le \sum |U_n|\,|v_n - v_{n+1}| \le U \sum |v_n - v_{n+1}|.$$
If we can show that
$$\sum |v_n - v_{n+1}| = \sum \left| \frac{1}{n^{s-s_0}} - \frac{1}{(n+1)^{s-s_0}} \right|$$
converges uniformly on the half-plane $\operatorname{Re} s > \operatorname{Re} s_0$, we will be done. For $n \le t \le n+1$, we have
$$\left| n^{-(s-s_0)} - t^{-(s-s_0)} \right| \le 1 \cdot \sup_{n \le t \le n+1} \left| \frac{d}{dt}\left( n^{-(s-s_0)} - t^{-(s-s_0)} \right) \right| = \sup_{n \le t \le n+1} \left| \frac{s-s_0}{t^{s-s_0+1}} \right| \le \frac{|s-s_0|}{n^{1+\operatorname{Re}(s-s_0)}}, \tag{15}$$
and so $|v_n - v_{n+1}| \le \frac{|s-s_0|}{n^{1+\operatorname{Re}(s-s_0)}}$. Hence
$$\sum_n |v_n - v_{n+1}| \le |s - s_0| \sum_n \frac{1}{n^{1+\operatorname{Re}(s-s_0)}},$$
and this last expression converges uniformly when $\operatorname{Re}(s - s_0) > 0$. The analyticity of the sum follows from the analyticity of each term in the half-plane.
Corollary 3.2. If the Dirichlet series $\sum_{n=1}^{\infty} a_n/n^s$ converges absolutely at $s = s_0$, then it converges uniformly and absolutely in the half-plane $\operatorname{Re} s \ge \operatorname{Re} s_0$.
Proof. With absolute convergence, we deduce
$$\sum \left| \frac{a_n}{n^s} \right| = \sum \left| \frac{a_n}{n^{s_0}} \right| \left| \frac{1}{n^{s-s_0}} \right| \le \sum \left| \frac{a_n}{n^{s_0}} \right|,$$
and since the sum on the right hand side of these relations converges, we get the result by a simple application of the Weierstrass M-test.
3.1 Dirichlet L-series and Euler Products
Dirichlet series that have Dirichlet characters modulo $m$ (extended to $\mathbb{Z}$) as their coefficients are called L-functions:
$$L(s, \chi) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n^s}. \tag{16}$$
When we studied primes of the form $4k+1$, we came across an example of an L-function. Notice though that a general L-function can have $s$ and $\chi(n)$ take complex values.
Remark. L(s, 1) looks like a zeta function with complex s that is missing all integers n that are divisible by m, since χ(n) = 0 for such n.
Lemma 3.3. The zeta function ζ(s) is meromorphic in the half-plane Re s > 0. Its only pole is s = 1 and it is simple.
Proof. We have
$$\zeta(s) = \frac{1}{s-1} + \left( \sum_{n=1}^{\infty} \frac{1}{n^s} - \frac{1}{s-1} \right) = \frac{1}{s-1} + \sum_{n=1}^{\infty} \frac{1}{n^s} - \int_1^{\infty} \frac{dt}{t^s} = \frac{1}{s-1} + \sum_{n=1}^{\infty} \left( \frac{1}{n^s} - \int_n^{n+1} \frac{dt}{t^s} \right) = \frac{1}{s-1} + \sum_{n=1}^{\infty} \int_n^{n+1} \left( \frac{1}{n^s} - \frac{1}{t^s} \right) dt.$$
Notice that $\int_n^{n+1} (n^{-s} - t^{-s})\,dt$ is an analytic function for $\operatorname{Re} s > 0$. To show the sum of such integrals (as $n$ ranges from 1 to $\infty$) is analytic, all we need is convergence on compact sets for which $\operatorname{Re} s > 0$. Now,
$$\left| \int_n^{n+1} \left( n^{-s} - t^{-s} \right) dt \right| \le \int_n^{n+1} \left| n^{-s} - t^{-s} \right| dt \le \sup_{n \le t \le n+1} \left| n^{-s} - t^{-s} \right|,$$
and this last expression is at most $\frac{|s|}{n^{1+\operatorname{Re} s}}$ by (15). The series $\sum_n \frac{1}{n^{1+\operatorname{Re} s}}$ converges for $\operatorname{Re} s > 0$. Hence the desired series of integrals converges in this region as well.
Our next goal is to obtain a product expansion for L(s, χ) like that of the zeta function.
We use the crucial fact that Dirichlet characters are strictly multiplicative.
Lemma 3.4. The Dirichlet series $\sum_n \frac{\chi(n)}{n^s}$ converges absolutely for $\operatorname{Re} s > 1$. Furthermore,
$$\sum_{n=1}^{\infty} \frac{\chi(n)}{n^s} = \prod_p \frac{1}{1 - \chi(p)p^{-s}}. \tag{17}$$
Proof. $\chi$ is a bounded function. This gives the desired absolute convergence for $\operatorname{Re} s > 1$. To see why the product expansion holds, note that for $\operatorname{Re} s > 1$ and a fixed prime number $q$,
$$\prod_{p \le q} \frac{1}{1 - \chi(p)p^{-s}} = \prod_{p \le q} \left( 1 + \chi(p)p^{-s} + \chi^2(p)p^{-2s} + \cdots \right) \tag{18}$$
$$= \prod_{p \le q} \left( 1 + \chi(p)p^{-s} + \chi(p^2)p^{-2s} + \cdots \right) = \sum_{n \in S} \frac{\chi(n)}{n^s} \tag{19}$$
where $S$ is the set of natural numbers whose prime factors do not exceed $q$. This means the partial product (18) is equal to a convergent infinite sum. Now fix a natural number $N$. We have
$$\sum_{n=1}^{N} \frac{\chi(n)}{n^s} = \prod_{p \le r} \frac{1}{1 - \chi(p)p^{-s}} - \sum_{\substack{n \in S \\ n > N}} \frac{\chi(n)}{n^s} \tag{20}$$
where $r$ is the largest prime number less than or equal to $N$, and now $S$ is the set of natural numbers whose prime factors do not exceed $r$. Letting $q \to \infty$ in (18) and $N \to \infty$ in (20) we see that the product expansion and the series converge or diverge together. Since we know the series converges for $\operatorname{Re} s > 1$, the product expansion must also converge in that region. Furthermore, letting $q \to \infty$ in (18), we obtain (17).
With the above three lemmas in hand, we can extend our remark about $L(s, 1)$. Applying Lemma 3.4 to the principal character, we have
$$L(s, 1) = \prod_{p \nmid m} \frac{1}{1 - p^{-s}} = \prod_{p \mid m} (1 - p^{-s}) \, \zeta(s).$$
This last equality follows from the fact that we have extended the zeta function to the region $\operatorname{Re} s > 0$ in Lemma 3.3. Since the product over $p \mid m$ is finite, it follows that $L(s, 1)$ is meromorphic in the region $\operatorname{Re} s > 0$ and its only pole is simple at $s = 1$. Note, however, that the product expansion of $L(s, 1)$ is only valid in the region $\operatorname{Re} s > 1$.
The product expression (17) is an example of an Euler product of first degree.
If χ is not the principal character, then we can go further and show that the series L(s, χ) is convergent and analytic in the region Re s > 0.
Theorem 3.5. Let $\chi$ be a Dirichlet character modulo $m$ different from the principal character.
Then the series L(s, χ) converges and is analytic in Re s > 0.
Proof. We extended Dirichlet characters to $\mathbb{Z}$ by setting $\chi(a) = 0$ when $(a, m) > 1$ and by letting $\chi(a + m) = \chi(a)$ for all integers $a$. Using the extended characters, it follows, by Theorem 2.1, that
$$\sum_{n=1}^{m} \chi(n + a) = 0 \tag{21}$$
for any $a$. Let $s > 0$ for now. We use Abel's summation formula with $u_n = \chi(n)$ and $v_n = 1/n^s$. Equation (21) says $\{U_n\}$ is bounded; say $|U_n| \le U$. It is easy to see that $U_n v_n \to 0$ as $n \to \infty$. Hence
$$\left| \sum_{n=M}^{\infty} \frac{\chi(n)}{n^s} \right| = \left| \sum_{n=M}^{\infty} u_n v_n \right| = \left| \sum_{n=M}^{\infty} U_n (v_n - v_{n+1}) \right| \le U \sum_{n=M}^{\infty} |v_n - v_{n+1}| = \frac{U}{M^s}$$
for any finite $M$. The last equality follows because $|v_n - v_{n+1}| = v_n - v_{n+1}$ for $s > 0$. As $M \to \infty$, the last expression tends to zero. Therefore the series $\sum_n \chi(n)/n^s$ is convergent for $s$ real and positive. By Theorem 3.1, the series is convergent and analytic in the region $\operatorname{Re} s > 0$.
As a consequence of Theorem 3.5, we see that when χ is not the principal character, L(s, χ) is well defined at s = 1. We will need to show that in fact it is not zero to prove Dirichlet’s theorem.
Theorem 3.6. For non-principal $\chi$, $L(1, \chi) \ne 0$.
We will postpone the proof of this theorem until we prove Dirichlet’s theorem.
4 Dirichlet’s theorem We are now in a position to prove Theorem 1.1. For the first part of the proof we will loosely follow Knapp’s treatment.
Proof of Theorem 1.1. First, we will show that for a Dirichlet character modulo $m$,
$$\log L(s, \chi) = \sum_p \frac{\chi(p)}{p^s} + g(s, \chi) \tag{22}$$
for real $s > 1$, where $g(s, \chi)$ is a function that remains bounded as $s \to 1$. Even if $s$ is real, $L(s, \chi)$ could still be complex valued, so if we want to take its logarithm, we had better choose a branch. For a given $p$ and $s \ge 1$, define the value of the logarithm of the $p$th factor in the L-function's Euler product by
$$\log \frac{1}{1 - \chi(p)p^{-s}} = \frac{\chi(p)}{p^s} + \sum_{n=2}^{\infty} \frac{\chi(p^n)}{n p^{ns}}$$
and let $g(s, \chi, p) = \sum_{n=2}^{\infty} \frac{\chi(p^n)}{n p^{ns}}$. For this choice of branch, and for $|z| \le 1/2$, we have
$$\left| \log \frac{1}{1-z} - z \right| = \left| \sum_{n=2}^{\infty} \frac{z^n}{n} \right| \le \sum_{n=2}^{\infty} \frac{|z|^n}{n} \le |z|^2 \sum_{n=0}^{\infty} \frac{1}{n+2} \left( \frac{1}{2} \right)^n \le |z|^2 \sum_{n=0}^{\infty} \left( \frac{1}{2} \right)^{n+1} = |z|^2.$$
Now set $z = \frac{\chi(p)}{p^s}$. Since $\left| \frac{\chi(p)}{p^s} \right| \le \frac{1}{2}$ we obtain
$$|g(s, \chi, p)| = \left| \log \frac{1}{1 - \chi(p)p^{-s}} - \frac{\chi(p)}{p^s} \right| \le \left| \frac{\chi(p)}{p^s} \right|^2 \le \frac{1}{p^2}$$
for $s \ge 1$. Finally, set $g(s, \chi) = \sum_p g(s, \chi, p)$. Now $\sum_p |g(s, \chi, p)| \le \sum_p \frac{1}{p^2} \le \sum_n \frac{1}{n^2}$, which converges. Thus, $g(s, \chi)$ remains bounded as $s \to 1$.
By adding all the logarithms of the factors in the L-function's Euler product, we obtain a branch of $\log L(s, \chi)$,
$$\sum_p \log \frac{1}{1 - \chi(p)p^{-s}} = \sum_p \frac{\chi(p)}{p^s} + g(s, \chi).$$
This establishes (22). Now we use group characters to 'filter out' the primes of the form $a + km$. Recall from our discussion of Dirichlet characters modulo $m$ the following orthogonality relation:
$$\frac{1}{\varphi(m)} \sum_{\chi} \chi(a)\overline{\chi(b)} = \begin{cases} 1 & \text{if } a \equiv b \bmod m, \\ 0 & \text{otherwise.} \end{cases}$$
If we multiply (22) by $\overline{\chi(a)}$ and sum over all $\chi$ we get
$$\sum_{\chi} \overline{\chi(a)} \log L(s, \chi) = \sum_{\chi} \overline{\chi(a)} \sum_p \frac{\chi(p)}{p^s} + \sum_{\chi} \overline{\chi(a)}\, g(s, \chi).$$
Using the orthogonality relation, we rearrange this last equation to give
$$\varphi(m) \sum_{p \equiv a (m)} \frac{1}{p^s} = \sum_{\chi} \overline{\chi(a)} \log L(s, \chi) - \sum_{\chi} \overline{\chi(a)}\, g(s, \chi). \tag{23}$$
We know $L(s, 1)$ has a pole at $s = 1$; in fact $L(s, 1) \to \infty$ as $s \to 1$ for real $s$. On the other hand, by Theorem 3.6, we know $\log L(s, \chi)$ is bounded at $s = 1$ for $\chi$ non-principal. Hence the sum $\sum_{\chi} \overline{\chi(a)} \log L(s, \chi)$ has only one unbounded term as $s \to 1$. This means the sum must itself be unbounded as $s \to 1$. This is why Theorem 3.6 is so important. Had two or more terms of the previous sum been unbounded, they could have cancelled, depending on the sign of $\overline{\chi(a)}$. The term $\sum_{\chi} \overline{\chi(a)}\, g(s, \chi)$ is bounded as $s \to 1$ because $g(s, \chi)$ is. Thus, overall, the right hand side of (23) is unbounded as $s$ approaches 1. This means there must be infinitely many primes contributing to the sum on the left hand side. This concludes the proof of the theorem.
4.1 The missing link
We have to prove Theorem 3.6. Let $\chi$ be a Dirichlet character modulo $m$. Define $\zeta_m(s)$ by
$$\zeta_m(s) = \prod_{\chi} L(s, \chi). \tag{24}$$
We know $L(s, 1)$ has a simple pole at $s = 1$ and all other $L(s, \chi)$ are analytic for $\operatorname{Re} s > 0$. Suppose there is some non-principal $\chi$ such that $L(1, \chi) = 0$. The function $\zeta_m(s)$ would then be analytic on $\operatorname{Re} s > 0$. We will prove this is not the case. The theorem will follow.
Suppose $p$ is a prime not dividing $m$. Let $f(p)$ be the order of the image $\bar{p}$ of $p$ in $(\mathbb{Z}/m\mathbb{Z})^*$. Define $g(p) = \varphi(m)/f(p)$, that is, the order of the quotient of $(\mathbb{Z}/m\mathbb{Z})^*$ by $(\bar{p})$.
Lemma 4.1. If $p \nmid m$ then
$$\prod_{\chi} \left( 1 - \frac{\chi(p)}{p^s} \right) = \left( 1 - \frac{1}{p^{f(p)s}} \right)^{g(p)}. \tag{25}$$
Proof. Let $\mu_n$ denote the group of $n$th roots of unity, and write $n = f(p)$. Then
$$\prod_{w \in \mu_{f(p)}} (1 - wx) = \prod_{w \in \mu_{f(p)}} w \cdot \prod_{w \in \mu_{f(p)}} \left( \frac{1}{w} - x \right) = e^{i\pi(n-1)} \prod_{w \in \mu_{f(p)}} (w - x) = (-1)^{n-1}(-1)^{n} \prod_{w \in \mu_{f(p)}} (x - w) = 1 - x^{f(p)}.$$
For any $w \in \mu_{f(p)}$, there are $\varphi(m)/f(p) = g(p)$ Dirichlet characters modulo $m$ such that $\chi(p) = w$. Letting $x = \frac{1}{p^s}$ we obtain the desired result.
Lemma 4.2. The function $\zeta_m(s)$ has a product expansion for $\operatorname{Re} s > 1$ given by
$$\zeta_m(s) = \prod_{p \nmid m} \left( \frac{1}{1 - p^{-f(p)s}} \right)^{g(p)}. \tag{26}$$
Proof. We have
$$\zeta_m(s) = \prod_{\chi} L(s, \chi) = \prod_{p \nmid m} \left( \prod_{\chi} \frac{1}{1 - \chi(p)p^{-s}} \right) = \prod_{p \nmid m} \left( \frac{1}{1 - p^{-f(p)s}} \right)^{g(p)},$$
where the last step follows from Lemma 4.1.
Notice that $\frac{1}{1 - p^{-f(p)s}}$ is given by a Dirichlet series with positive coefficients. Hence, Lemma 4.2 shows $\zeta_m(s)$ is itself a Dirichlet series with positive coefficients. This is the key fact we shall exploit to obtain our contradiction.
Lemma 4.3. Let $\sum_n a_n/n^s$ be a Dirichlet series with coefficients $a_n \ge 0$. Suppose the series converges for $\operatorname{Re} s > s_0$ for some real $s_0$, and that it extends analytically to an analytic function in a neighborhood around the point $s_0$. Then there is an $\epsilon > 0$ for which the series converges for $\operatorname{Re} s > s_0 - \epsilon$.
Proof. We will follow Serre's [3, p. 67] ideas in this proof, though our proof is not as general as his. We may assume without loss of generality that $s_0 = 0$: just replace $s$ with $s - s_0$. For convenience, denote the series above by $f(s)$. Because $f(s)$ is analytic in the region $\operatorname{Re} s > 0$ and in a neighborhood around 0, there is an $\epsilon > 0$ such that $f(s)$ is analytic in the disc $|s - 1| \le 1 + \epsilon$. This means the Taylor series of $f(s)$ must converge in this disc. The $p$th derivative of $f(s)$ is given by
$$f^{(p)}(s) = \sum_{n=1}^{\infty} \frac{a_n (-\log n)^p}{n^s} \quad \Longrightarrow \quad f^{(p)}(1) = \sum_{n=1}^{\infty} \frac{(-1)^p a_n (\log n)^p}{n}.$$
The Taylor series of $f(s)$ around $s = 1$ is given by
$$f(s) = \sum_{p \ge 0} \frac{f^{(p)}(1)}{p!} (s-1)^p, \qquad |s - 1| \le 1 + \epsilon.$$
Now for $s = -\epsilon$, we obtain
$$f(-\epsilon) = \sum_{p \ge 0} \frac{f^{(p)}(1)}{p!} (1 + \epsilon)^p (-1)^p.$$
But
$$(-1)^p f^{(p)}(1) = \sum_{n=1}^{\infty} \frac{a_n (\log n)^p}{n}$$
is a convergent series with positive terms. This means the following double sum converges:
$$f(-\epsilon) = \sum_p \sum_n \frac{a_n}{n} \cdot \frac{1}{p!} (1 + \epsilon)^p (\log n)^p.$$
But this sum can be re-expressed as
$$f(-\epsilon) = \sum_n \frac{a_n}{n} \sum_{p \ge 0} \frac{1}{p!} \bigl( (1 + \epsilon) \log n \bigr)^p = \sum_n \frac{a_n}{n} \, e^{(1+\epsilon)\log n} = \sum_n a_n n^{\epsilon}.$$
This is the Dirichlet series we started with evaluated at $-\epsilon$! Therefore, the series converges at $s = -\epsilon$, and thus, by Theorem 3.1, for all $\operatorname{Re} s > -\epsilon$.
So far we have argued that if $L(1, \chi) = 0$ for some non-principal $\chi$, then $\zeta_m(s)$ must be analytic for $\operatorname{Re} s > 0$. We saw that $\zeta_m(s)$ is a Dirichlet series with positive coefficients.
Moreover, since all L(s, χ) are convergent for Re(s) > 1, ζm(s) converges in this region as well. Since ζm(s) is analytic in the region Re(s) > 0, Lemma 4.3 tells us we can push back the region of convergence of ζm(s) to Re(s) > 0. Our contradiction is at hand.
Let $s$ be a real number greater than 1. Consider the $p$th factor in the product expansion of $\zeta_m(s)$ (which is valid for $\operatorname{Re} s > 1$):
$$\left( \frac{1}{1 - p^{-f(p)s}} \right)^{g(p)} = \left( 1 + p^{-f(p)s} + p^{-2f(p)s} + \cdots \right)^{g(p)} \ge 1 + p^{-f(p)g(p)s} + p^{-2f(p)g(p)s} + \cdots = 1 + p^{-\varphi(m)s} + p^{-2\varphi(m)s} + \cdots = \frac{1}{1 - p^{-\varphi(m)s}}.$$
This shows that for $s > 1$,
$$\zeta_m(s) = \prod_{\chi} L(s, \chi) = \prod_{\chi} \prod_{p} \frac{1}{1 - \chi(p)p^{-s}} = \prod_{p \nmid m} \left( \frac{1}{1 - p^{-f(p)s}} \right)^{g(p)} \ge \prod_{p \nmid m} \frac{1}{1 - p^{-\varphi(m)s}} = \sum_{(n,m)=1} n^{-\varphi(m)s}. \tag{27}$$
Thus, $\zeta_m(s)$ has all its coefficients greater than those of the series in (27). These coefficients of $\zeta_m(s)$ remain unchanged if we take $s$ between 0 and 1. But the series (27) diverges for $s = \frac{1}{\varphi(m)} > 0$. Hence $\zeta_m(s)$ is unbounded for this value of $s$, and thus the series $\zeta_m(s)$ diverges for a value of $s$ whose real part is greater than zero. This is a contradiction, since we showed $\zeta_m(s)$ converges for $\operatorname{Re} s > 0$. This completes the proof of Theorem 3.6.
Remark. Even though the product expansion of $\zeta_m(s)$ is only valid for $\operatorname{Re} s > 1$, we were able to use the expansion to look at the coefficients of the series representation for $\zeta_m(s)$, which is valid for $\operatorname{Re} s > 0$.
References
[1] K. Ireland, M. Rosen, A Classical Introduction to Modern Number Theory, Second Edition. Springer, New York, 1990.
[2] A. Knapp, Elliptic Curves. Princeton UP, New Jersey, 1992.
[3] J-P. Serre, A Course in Arithmetic. Springer, New York, 1973.
[4] J-P. Serre, Linear Representations of Finite Groups. Springer, New York, 1977.
|
29
|
Published Time: 2004-08-10T06:10:45Z
Rhotacism - Wikipedia
===============
Sound change converting an alveolar consonant to a rhotic consonant
This article is about the sound change. For the speech errors, see Speech sound disorder §Rhotacism. For other uses, see Rhotic.
Rhotacism (/ˈ r oʊ t ə s ɪ z əm/ROH-tə-siz-əm) or rhotacization is a sound change that converts one consonant (usually a voiced alveolar consonant: /z/, /d/, /l/, or /n/) to a rhotic consonant in a certain environment. The most common may be of /z/ to /r/. When a dialect or member of a language family resists the change and keeps a /z/ sound, this is sometimes known as zetacism.
The term comes from the Greek letterrho, denoting /r/.
Albanian
The southern (Tosk) dialects, the base of Standard Albanian, changed /n/ to /r/, but the northern (Gheg) dialects did not:
zëri vs. zâni 'the voice'
gjuri vs. gjuni 'the knee'
Shqipëria vs. Shqypnia 'Albania'
Arbëria vs. Arbënia 'Albania' (older name of the country)
i djegur vs. i djegun 'burnt'
druri vs. druni 'wood'
bëra vs. bona 'did'
zura vs. zuna 'caught'
pluhur vs. pluhun 'dust'
dashuri vs. dashni 'love'
Aramaic
In Aramaic, Proto-Semitic n changed to r in a few words:
bar "son" as compared to Hebrew בֵן ben (from Proto-Semitic bnu)
trên and tartên "two" (masculine and feminine form respectively) as compared to Demotic Arabic tnēn and tintēn, from Proto-Semitic ṯnaimi and ṯnataimi. Compare also Aramaic tinyânâ "the second one", without the shift.
Basque
Aquitanian l changed to the tapped r between vowels in Basque. It can be observed in words borrowed from Latin; for example, Latin caelum (meaning "sky, heaven") became zeru in Basque (caelum>celu>zeru; compare cielo in Spanish). The original l is preserved in the Souletin dialect: caelum>celu>zelü.
Finnish
Western dialects of Finnish are characterised by the pronunciation /r/ or /ɾ/ of the consonant written d in Standard Finnish: kahden kesken becomes kahren kesken ('two together', i.e. one on one). The reconstructed older pronunciation is ð.
Goidelic languages
In Manx, Scottish Gaelic and some dialects of Irish, /n/ becomes /r/ in a variety of consonant clusters, often with nasalization of the following vowel. For example, the /kn/ cluster developed into /kr/, as in Scottish Gaelic cnoc[krɔ̃xk] ‘hill’. Within Ireland, this phenomenon is most prevalent in northern dialects and absent from the most southern dialects. Some examples of rhotacized clusters include /kn/ (cnó), /mn/ (mná), /ɡn/ (gnó), and /tn/ (tnáith), while /sn/ (snámh) is never rhotacized even in the most innovative dialects. This can lead to interesting pairs such as nominative an sneachta/ə ˈʃnʲæːxt̪ˠə/ versus genitive an tsneachta/ə ˈt̪ɾʲæːxt̪ˠə/.
Germanic languages
See also: Grammatischer Wechsel
All surviving Germanic languages, which are members of the North and West Germanic families, changed /z/ to /r/, implying a more approximant-like rhotic consonant in Proto-Germanic. As attested by runes, the shift affected Old Norse later than the Continental Germanic languages. Some languages later changed all forms to r, but Gothic, an extinct East Germanic language, did not undergo rhotacism.
| Proto-Germanic | Gothic | Old Norse | (Old English) Modern English | Old Frisian | Dutch | (Old High German) Modern German |
| --- | --- | --- | --- | --- | --- | --- |
| was (1st/3rd sg), wēzum (1st pl) | was, wēsum | var, várum | (wæs, wǣron) was, were | was, wēren | was, waren | (was, wārum) war, waren |
| fraleusaną (inf.), fraluzanaz (p. part.) | fraliusan, fralusans | — | (forlēosan, forloren) forlese, forlorn | urliāsa, urlāren | verliezen, verloren | (farliosan, farloren) verlieren, verloren |
Note that the Modern German forms have levelled the rhotic consonant to forms that did not originally have it. However, the original sound can still be seen in some nouns such as Wesen, "being" (from the same root as war/waren) as well as Verlust, "loss" and Verlies, "dungeon" (both from the same root as verlieren/verloren).
Because of the presence of words that did not undergo rhotacisation from the same root as those that did, the result of the process remains visible in a few modern English word pairs:
is and are (PGmc. isti vs izi)
was and were (PGmc. was- vs wēz-)
the comparative and superlative suffixes -er and -est (PGmc. -izô vs -istaz) and derived words such as more and most (maizô vs maistaz), better and best (batizô vs batistaz), etc
rise and rear (as in 'to bring up'; PGmc. rīsaną vs raizijaną)
loss and forlorn (PGmc. lusą vs fraluzanaz)
English
See also: Rhoticity in English
Intervocalic /t/ and /d/ are commonly lenited to [ɾ] in most accents of North American and Australian English and some accents of Irish English and English English, a process known as tapping or less accurately as flapping:got a lot of/ˈɡɒtə ˈlɒtə/ becomes [ˈɡɒɾə ˈlɒɾə]. Contrast is usually maintained with /r/, and the [ɾ] sound is rarely perceived as /r/.
German
In Central German dialects, especially Rhine Franconian and Hessian, /d/ is frequently realised as [ɾ] in intervocalic position. The change also occurs in Mecklenburg dialects. Compare Borrem (Central Hessian) and Boden (Standard German).
Romance languages and Latin
Latin
Reflecting a highly-regular change in pre-Classical Latin, intervocalic /s/ in Old Latin, which is assumed to have been pronounced [z], invariably became r, resulting in pairs such as these:
flōs (nom.) — flōrem (acc.) (Old Latin flōsem)
genus (nom.) — generis (gen.) (from geneses, cf. Sanskrit janasas)
rōbus, rōbustus — rōbur, corrōborāre (verb from conrobosare)
jūstus — de jūre (from de jouse)
est — erō (from esō)
gessī, gestō — gerō (from gesō)
Intervocalic s in Classical Latin suggests either borrowing (rosa) or reduction of an earlier ss after a long vowel or a diphthong (pausa < paussa, vīsum < vīssum < weid-tom). The s was preserved initially (septum) and finally and in consonant clusters.
Old Latin honos became honor in Late Latin by analogy with the rhotacised forms in other cases such as genitive, dative and accusative honoris, honori, honorem.
Another form of rhotacism in Latin was dissimilation of d to r before another d and dissimilation of l to r before another l, resulting in pairs such as these:
medius — merīdiēs (instead of medi-diēs)
caelum — caeruleus (instead of cael-uleus)
The phenomenon was noted by the Romans themselves:
In many words in which the ancients said s, they later said r... foedesum foederum, plusima plurima, meliosem meliorem, asenam arenam
— Varro, De lingua Latina, VII, 26, In multis verbis, in quo antiqui dicebant s, postea dicunt r... foedesum foederum, plusima plurima, meliosem meliorem, asenam arenam
Neapolitan
In Neapolitan, rhotacism affects words that etymologically contained intervocalic or initial /d/, when this is followed by a vowel; and when /l/ is followed by another consonant. This last characteristic, however, is not very common in modern speech.
LAT. DENTE(M) > Neap. dente [ˈrɛndə] "tooth"
LAT. PEDE(M) > Neap. pere [ˈpɛːrə] "foot"
LAT. SOLDU(M) > Neap. sòrdo [ˈsɔːrdə] (or [ˈsɔːldə]) "money"
Portuguese and Galician
In Galician-Portuguese, rhotacism occurred from /l/ to /r/, mainly in consonant clusters ending in /l/ such as in the words obrigado, "thank you" (originally from "obliged [in honourably serving my Sir]"); praia, "beach"; prato, "plate" or "dish"; branco, "white"; prazer/pracer, "pleasure"; praça/praza, "square". Compare Spanish obligado (obliged), playa, plato, blanco, placer, plaza from Latin obligatus, plagia, platus, blancus (Germanic origin), placere (verb), platea.
In contemporary Brazilian Portuguese, rhotacism of /l/ in the syllable coda is characteristic of the Caipira dialect. Further rhotacism in the nationwide vernacular includes planta, "plant", as [ˈpɾɐ̃tɐ], lava, "lava", as /ˈlarvɐ/ (then homophonous with larva, worm/maggot), lagarto, "lizard", as [laʁˈɡaʁtu] (in dialects with guttural coda r instead of a tap) and advogado, "lawyer", as [ɐ̞de̞vo̞ʁˈɡadu]. The nonstandard patterns are largely marginalised, and rhotacism is regarded as a sign of speech-language pathology or illiteracy.
Romanesco Italian
Rhotacism, in Romanesco, shifts l to r before a consonant, like certain Andalusian dialects of Spanish. Thus, Latin altus (tall) is alto in Italian but becomes arto in Romanesco. Rhotacism used to happen when l was preceded by a consonant, as in the word ingrese (English), but modern speech has lost that characteristic.
Another change related to r was the shortening of the geminated rr, which is not rhotacism. Italian errore, guerra and marrone "error", "war", "brown" become erore, guera and marone.
Romanian
In Romanian, rhotacism shifted intervocalic l to r and n to r.
Thus, Latin caelum ‘sky; heaven’ became Romanian cer, Latin fenestra ‘window’ Romanian fereastră and Latin felicitas ‘happiness’ Romanian fericire.
Some northern Romanian dialects and Istro-Romanian also changed all intervocalic [n] to [ɾ] in words of Latin origin. For example, Latin bonus became Istro-Romanian bur: compare to standard Daco-Romanian bun.
Sicilian
Rhotacism is particularly widespread in the island of Sicily, but it is almost completely absent in the Sicilian varieties of the mainland (Calabrese and Salentino). It affects intervocalic and initial /d/: cura from Latin caudam, peri from Latin pedem, 'reci from Latin decem.
Spanish
In Andalusian Spanish, particularly in Seville, at the end of a syllable before another consonant, l is replaced with r: Huerva for Huelva. The reverse occurs in Caribbean Spanish: Puelto Rico for Puerto Rico (lambdacism).
Other languages
Rhotacism (mola > mora, filum > fir, sal > sare) exists in some Gallo-Italic languages as well: Lombard (Western and Alpine) and Ligurian.
In Umbrian but not Oscan, rhotacism of intervocalic s occurred as in Latin.
Turkic
Among the Turkic languages, the Oghur branch exhibits /r/, as opposed to the rest of Turkic, which exhibits /z/. In this case, rhotacism refers to the development of -/r/, -/z/, and -/d/ to /r/, -/k/, -/kh/ in this branch.
South Slavic languages
(This section relies on the treatment in Greenberg 1999.)
In some South Slavic languages, rhotacism occasionally changes a voiced palatal fricative [ʒ] to a dental or alveolar tap or trill [r] between vowels:
moreš (Slovene, KajkavianCroatian) 'you can' from earlier možešь
kdor (Slovene) from earlier kъto-že
The beginning of the change is attested in the Freising manuscripts from the 10th century AD, which show both the archaism (ise 'which' < jь-že) and the innovation (tere 'also' < te-že). The shift is also found in individual lexical items in Bulgarian dialects, дорде 'until' (< do-že-dĕ), and in Macedonian, сеѓере (archaic: 'always' < vьsegъda-že). However, the results of the sound change have largely been reversed by lexical replacement in dialects in Serbia and Bosnia from the 14th century.
Dialects in Croatia and Slovenia have preserved more of the lexical items with the change and have even extended grammatical markers in -r from many sources that formally merged with the rhotic forms that arose from the sound change: Slovene dialect nocor 'tonight' (< not'ь-sь-ǫ- + -r-) on the model of večer 'evening' (< večerъ). The reversal of the change is evident in dialects in Serbia in which the -r- formant is systematically removed: Serbian veče 'evening'.
See also
Lambdacism, the related condition or phonetic shift with regard to the sound /l/
References
^"American English Dictionary: Definition of rhotacism". Collins. Retrieved December 13, 2013.
^ abcdCatford (2001:178)
^Trask, R. Larry (2008), Wheeler, Max W. (ed.), A Historical Dictionary of Basque(PDF), University of Essex, p.29, archived from the original(PDF) on June 7, 2011, retrieved January 22, 2011
^Catford (2001:179)
^D. Hofmann, A.T. Popkema, Altfriesisches Handwörterbuch (Heidelberg 2008).
^Harris, John (1994). English Sound Structure. Blackwell. p.121. ISBN0-631-18741-3.
^Ladefoged, Peter (2006). A Course in Phonetics. Thomson. pp.171–3. ISBN978-1-4130-0688-9.
^robus 1; rōbur. Charlton T. Lewis and Charles Short. A Latin Dictionary on Perseus Project.
^Malte Rosemeyer (15 April 2014). Auxiliary Selection in Spanish: Gradience, gradualness, and conservation. John Benjamins Publishing Company. p.81. ISBN978-90-272-7040-5.
^Nandris (1963:255–258)
^Buck, Carl Darling. 1904. A grammar of Oscan and Umbrian: with a collection of inscriptions and a glossary
^Larry Clark, "Chuvash", in The Turkic Languages, eds. Lars Johanson & Éva Ágnes Csató (London–NY: Routledge, 2006), 434–452.
^Greenberg (1999)
Bibliography
Catford, J.C. (2001), "On Rs, rhotacism and paleophony", Journal of the International Phonetic Association, 31 (2): 171–185, doi:10.1017/S0025100301002018, S2CID143611805
Crowley, Terry (1997). An Introduction to Historical Linguistics (3rd ed.). Oxford University Press. ISBN9780195583786.
Greenberg, Marc L. (1999), "Multiple Causation in the Spread and Reversal of a Sound Change: Rhotacism in South Slavic", Slovenski Jezik/Slovene Linguistics Studies, 2: 63–76
Nandris, O (1963), Phonétique Historique du Roumain, Paris: Klincksiek
|
30
|
A beginner’s guide to Shamir’s Secret Sharing
An introduction to this privacy-preserving cryptographic technique and how Keyless is using it to transform the way we share and store private data across the internet.
Shamir’s Secret Sharing scheme is an important cryptographic algorithm that allows private information— “secrets” — to be distributed securely amongst an untrusted network.
It is one of the cryptographic techniques that Keyless uses to ensure that personal data is kept safe and secure — whether that’s biometric data, private keys or any other personal information that should not be made public.
To understand Shamir’s Secret Sharing, first it’s important to understand what secret sharing aims to achieve.
What is secret sharing?
In cryptography, secret sharing is a way to securely distribute fragments of important private information amongst a distributed network or group, making such schemes particularly useful for safeguarding highly sensitive information like private cryptographic keys or biometric data.
Secret sharing works by splitting private information into smaller pieces — or shares — and then distributing those shares amongst a group or network.
Each individual share is useless on its own but when all the shares are together, they reconstruct an original secret.
Imagine that you had one million dollars that you kept in a bank account, and in order to access this bank account you used the password: secret.
You could split it up and distribute a letter each to six trusted shareholders.
s_____, _e____, __c___, ___r__, ____e_, _____t
The only information that each shareholder would have is the letter that they hold, essentially making their individual shares useless.
Secret sharing schemes can also be hierarchical depending on how the shares are distributed. This allows the secret owner to distribute shares based on how much the shareholders are trusted.
Let’s say you wanted to safely store your private key that you used to access your cryptocurrency wallet.
Private keys are used to send cryptocurrency from one address to another. They consist of a sequence of random and unique numbers and are given to users at the time they open a wallet.
Firstly, you wouldn’t want to give anyone the entire sequence, so say you split the key into eight shares. Then you distribute copies of those shares between your closest friends and trusted family members.
You may give eight shares to each of your parents, who you trust without a doubt, four each to your brother and your sister, who you trust for the most part, and one each to eight of your friends, who you somewhat trust.
This hierarchical distribution scheme allows for secret owners to distribute shares based on how much they trust their shareholders.
But what about when there is zero-trust between the secret owner and the shareholders?
In most schemes an added encryption layer is implemented to ensure additional privacy and security, allowing the shares to be distributed amongst a network or group that are unknown to the secret owner.
Let’s say that each shareholder only holds what seems to be random numbers:
19_____, _5____, __3___, ___18__, ____5_,_____20
With encryption, when all the separate shares (numbers) are together, they still require a decrypting key to reveal the secret (letters) that they represent in the alphabet.
This important step protects private information from organized attacks; even if each shareholder were to collude to recreate the original secret, they wouldn’t be able to learn anything about that secret, as the original secret is encrypted.
Shamir’s Secret Sharing Scheme
One of the challenges of distributing shares is that they can often be lost or compromised. Shareholders can die, lose their shares or have them stolen. At other times, shareholders themselves turn rogue. When many different shares are distributed, it’s also impractical and inefficient to require all shares to reconstruct the secret.
Shamir’s Secret Sharing scheme is an algorithm that was first proposed in 1979 by the renowned Israeli cryptographer Adi Shamir. It allows for information to be broken into many shares, while only requiring a fraction of those shares to reconstruct the original secret.
This means that, instead of requiring all shares to reconstruct the original secret, Shamir’s scheme requires a minimum number of shares — this minimum is referred to as the threshold.
One of the benefits of Shamir’s algorithm is that it is flexible and extensible — meaning that the secret owner could add, amend or remove shares at anytime if they wanted to, without modifying the original secret.
The threshold needs to be met in order to reconstruct the secret. If there is anything less than the threshold, the secret cannot be reconstructed, thus making Shamir’s Secret Sharing secure against an adversary — a malicious attacker — that has unlimited computational power; in cryptography this is what we call information theoretically secure.
Information theoretically secure simply means that not even an adversary with unlimited computational power would be able to break the encrypted secret.
For example:
Using the same example from earlier, say that the threshold to reveal the password is 3:
When three shares are presented:
19_____, _5____, __3___ = 19,5,3,18,5,20 = secret
When two shares are presented:
19_____, _5____ = 19_____, _5____
It’s important to note that with Shamir’s algorithm, shareholders never find out what the other encrypted shares are in a secret. Only the secret owner has access to the entire set of decrypted shares once the secret is reconstructed.
How Shamir’s Secret Sharing works
Shamir’s method for secret sharing relies on polynomial interpolation, which is an algebraic method of estimating unknown values in a gap between two known data points — without needing to know anything about what is on either side of those points.
We will go into further detail on polynomial interpolation in another blog piece, but for the purpose of explaining how SSS works, you can think of it like this:
SSS encodes a "secret" into a polynomial, then splits it into pieces and distributes them. It's possible to use polynomial interpolation to efficiently reconstruct that secret without requiring every single share. Instead, only the threshold number of shares is needed, which provides enough data points to correctly estimate the values in the gaps between the encrypted shares.
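To make the idea concrete, here is a minimal sketch of Shamir's scheme over a prime field. This is illustrative only (it is not Keyless's implementation), and the prime, the parameters and the helper names are our own choices.

```python
import random

# Illustrative parameters: a Mersenne prime field large enough for a short secret.
P = 2 ** 127 - 1

def split(secret, n, t):
    """Split `secret` (an integer < P) into n shares, any t of which suffice."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # evaluate the random degree-(t-1) polynomial at x (mod P)
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return secret

secret = int.from_bytes(b"secret", "big")
shares = split(secret, n=6, t=3)
print(reconstruct(random.sample(shares, 3)) == secret)  # True: threshold met
print(reconstruct(random.sample(shares, 2)) == secret)  # almost surely False
```

Any three of the six shares rebuild the secret, while any smaller set reveals nothing useful about it.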
Why Shamir’s Secret Sharing is essential to maintaining data privacy
Shamir’s Secret Sharing makes it possible for multiple parties who do not know each other to store private information. In Keyless’s case, this would be for securely storing user secrets — whether that’s personal information or private cryptographic keys — across our distributed network.
Because Shamir’s Secret Sharing scheme is information theoretically secure, even an attacker with unlimited computational power cannot break the decrypted share to access the data without having enough shares to meet the threshold — or minimum number of shares.
When combined with other cryptographic techniques, like secure multiparty computation and zero-knowledge cryptography, SSS offers an extra layer of security, making data sharing and storage secure, private, and resilient to accidental data loss and external attacks.
How Keyless uses Shamir’s scheme to keep your biometric data private
Thanks to this algorithm, we can safely distribute secret data in a way that is efficient, secure and private. Instead of storing sensitive data on centralized servers, Keyless is able to split encrypted secrets into pieces, distributing those randomly to nodes across a zero-trust network.
Imagine that you write down a secret message on a piece of paper. The message that you wrote uses whole words to substitute letters, but only you know that. For example, PIG stands for P. You place the piece of paper into an envelope, and then seal it and cut it into twenty different pieces, and give those pieces out to random strangers at Shibuya crossing in Tokyo — the busiest pedestrian crossing in the world.
Since the encrypted data is split into ‘shares’ and randomly assigned to Keyless nodes, there is no longer a centralized storage system that adversaries — also known as hackers or bad players — can target.
Someone who wanted to find those pieces of the envelope and use them illegally, wouldn’t know where to start looking.
To reconstruct the message, a minimum number of shares need to be collected from nodes in our network. So in order to compromise the user’s “secrets”, someone would need to take over enough nodes in the network to acquire the minimum number of shares to meet the threshold.
Against those odds, that person would need to find at least half of the people carrying different pieces of the envelope. They would then need to try to steal the pieces from each of those strangers — who may have their own weapons to fight off the attacker.
The last line of defense is that the shares are encrypted, so even if an attacker compromises all the nodes of the network, it can’t decrypt the shares because they are encrypted with a key that is only stored within the user’s device.
Imagine the attacker finally managed to steal five of those pieces of the envelope you wrote your message in. Now he can finally learn what the message is. However, when he goes to open the pieces, he finds a bunch of random words, and he is unable to make sense of it. The only person that knows how to decrypt the message is the person who created it — you.
The potential of secret sharing
As our physical and digital worlds continue to converge and blend together, SSS, combined with zero-knowledge encryption and secure multiparty computation, will most likely be used to decentralize risk across all industries, while enabling users to confidently share private data in a way that is secure and empowering.
Thinking beyond biometric authentication, Keyless is using SSS to build platforms that allow us to securely manage our private cryptographic keys online, as well as our entire digital identities. These technologies will help transform the way we interact with the internet and the world around us, giving unmatched power and control back to the user.
Written by Keyless Technologies
Keyless is a deeptech, cybersecurity company building the world’s first privacy-preserving biometric authentication and personal identity management platform.
|
31
|
Published Time: Wed, 13 Aug 2025 15:12:25 GMT
HAL Id: hal-01322850
Submitted on 10 Apr 2020
A probabilistic approach to some binomial identities
Victor H. Moll, Christophe Vignat
To cite this version:
Victor H. Moll, Christophe Vignat. A probabilistic approach to some binomial identities. Elemente der Mathematik, 2015, 70 (2), pp.55. 10.4171/em/275. hal-01322850 arXiv:1111.3732v1 [math.CO] 16 Nov 2011
A PROBABILISTIC APPROACH TO SOME BINOMIAL IDENTITIES
CHRISTOPHE VIGNAT AND VICTOR H. MOLL
Abstract. Classical binomial identities are established by giving probabilistic interpretations to the summands. The examples include Vandermonde identity and some generalizations.
1. Introduction
The evaluation of finite sums involving binomial coefficients appears throughout the undergraduate curriculum. Students are often exposed to the identity
$$\sum_{k=0}^{n} \binom{n}{k} = 2^n. \tag{1.1}$$
Elementary proofs abound: simply choose $x = y = 1$ in the binomial expansion of $(x + y)^n$. The reader is surely aware of many other proofs, including some combinatorial in nature. At the end of the previous century, the evaluation of these sums was trivialized by the work of H. Wilf and D. Zeilberger . In the preface to the charming book , the authors begin with the phrase
You’ve been up all night working on your new theory, you found the answer, and it is in the form that involves factorials, binomial coefficients, and so on, ...
and then proceed to introduce the method of creative telescoping . This technique provides an automatic tool for the verification of this type of identities. Even in the presence of a powerful technique, such as the WZ-method, it is often a good pedagogical idea to present a simple identity from many different points of view. The reader will find in this approach with the example
$$\sum_{k=0}^{m} 2^{-2k} \binom{2k}{k} \binom{2m-k}{m} = \sum_{k=0}^{m} 2^{-2k} \binom{2k}{k} \binom{2m+1}{2k}. \tag{1.2}$$
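Identity (1.2) is also easy to check numerically; the short script below (ours, not part of the paper) compares both sides for small m.

```python
from math import comb

def lhs(m):
    return sum(comb(2 * k, k) * comb(2 * m - k, m) / 4 ** k for k in range(m + 1))

def rhs(m):
    return sum(comb(2 * k, k) * comb(2 * m + 1, 2 * k) / 4 ** k for k in range(m + 1))

for m in range(8):
    print(m, lhs(m), rhs(m))   # the two sides agree for every m tested
```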
Date: July 1, 2018. 1991 Mathematics Subject Classification: Primary 05A10; Secondary 33B15, 60C99. Key words and phrases: binomial sums, gamma distributed random variables, Vandermonde identity, orthogonal polynomials.
The current paper presents probabilistic arguments for the evaluation of certain binomial sums. The background required is minimal. The continuous random variables $X$ considered here have a probability density function. This is a nonnegative function $f_X(x)$, such that
$$\Pr(X < x) = \int_{-\infty}^{x} f_X(y)\, dy. \tag{1.3}$$
In particular, $f_X$ must have total mass 1. Thus, all computations are reduced to the evaluation of integrals. For instance, the expectation of a function of the random variable $X$ is computed as
$$\mathbb{E}\, g(X) = \int_{-\infty}^{\infty} g(y) f_X(y)\, dy. \tag{1.4}$$
In elementary courses, the reader has been exposed to normal random variables, written as $X \sim N(0, 1)$, with density
$$f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}, \tag{1.5}$$
and exponential random variables, with probability density function
$$f(x; \lambda) = \begin{cases} \lambda e^{-\lambda x} & \text{for } x \ge 0; \\ 0 & \text{otherwise.} \end{cases} \tag{1.6}$$
The examples employed in the arguments presented here have a gamma distribution with shape parameter $k$ and scale parameter $\theta$, written as $X \sim \Gamma(k, \theta)$. These are defined by the density function
$$f(x; k, \theta) = \begin{cases} x^{k-1} e^{-x/\theta} / \theta^k \Gamma(k) & \text{for } x \ge 0; \\ 0 & \text{otherwise.} \end{cases} \tag{1.7}$$
Here $\Gamma(s)$ is the classical gamma function, defined by
$$\Gamma(s) = \int_{0}^{\infty} x^{s-1} e^{-x}\, dx \tag{1.8}$$
for $\operatorname{Re} s > 0$. Observe that if $X \sim \Gamma(a, \theta)$, then $X = \theta Y$ where $Y \sim \Gamma(a, 1)$. Moreover $\mathbb{E} X^n = \theta^n (a)_n$, where
$$(a)_n = \frac{\Gamma(a + n)}{\Gamma(a)} = a(a+1)\cdots(a+n-1) \tag{1.9}$$
is the Pochhammer symbol. The main property of these random variables employed in this paper is the following: assume $X_i \sim \Gamma(k_i, \theta)$ are independent, then
$$X_1 + \cdots + X_n \sim \Gamma(k_1 + \cdots + k_n, \theta). \tag{1.10}$$
This follows from the fact that that the density probability function for the sum of two independent random variables is the convolution of the individual ones. Related random variables include those with a beta distribution (1.11) fa,b (x) =
{
xa−1(1 − x)b−1/B (a, b ) for 0 ≤ x ≤ 1; 0 otherwise .
Here B(a, b ) is the beta function defined by (1.12) B(a, b ) =
∫ 10
xa−1(1 − x)b−1 dx A PROBABILISTIC APPROACH TO SOME BINOMIAL IDENTITIES 3
and also the symmetric beta distributed random variable $Z_c$, with density proportional to $(1 - x^2)^{c-1}$ for $-1 \le x \le 1$. The first class of random variables can be generated as

(1.13)  $B_{a,b} = \frac{\Gamma_a}{\Gamma_a + \Gamma_b},$

where $\Gamma_a$ and $\Gamma_b$ are independent gamma distributed with shape parameters $a$ and $b$, respectively, and the second type is distributed as $1 - 2B_{c,c}$, that is,

(1.14)  $Z_c = 1 - \frac{2\Gamma_c}{\Gamma_c + \Gamma'_c} = \frac{\Gamma_c - \Gamma'_c}{\Gamma_c + \Gamma'_c},$

where $\Gamma_c$ and $\Gamma'_c$ are independent gamma distributed with shape parameter $c$. A well-known result is that $B_{a,b}$ and $\Gamma_a + \Gamma_b$ are independent in (1.13); similarly, $\Gamma_c + \Gamma'_c$ and $Z_c$ are independent in (1.14).
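As an aside (not in the original paper), the moment formula $\mathbb{E}X^n = \theta^n (a)_n$ and the additivity property (1.10) can be illustrated with a quick Monte Carlo check; the parameter values below are arbitrary.

```python
# Monte Carlo illustration of E[X^n] = theta^n (a)_n and of property (1.10)
# (added sketch; all parameter values are arbitrary choices for the check).
import numpy as np
from scipy.special import poch

rng = np.random.default_rng(1)
theta, n = 2.0, 3

# Moment formula for a single gamma-distributed variable
a = 0.7
x = rng.gamma(shape=a, scale=theta, size=10**6)
print(np.mean(x ** n), theta ** n * poch(a, n))        # both numbers should be close

# (1.10): a sum of independent Gamma(k_i, theta) variables is Gamma(k_1 + ... + k_m, theta)
ks = [0.5, 0.5, 1.0]
s = sum(rng.gamma(shape=k, scale=theta, size=10**6) for k in ks)
print(np.mean(s ** n), theta ** n * poch(sum(ks), n))  # close up to sampling error
```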
2. A sum involving central binomial coefficients

Many finite sums may be evaluated via the generating function of the terms appearing in them. For instance, a sum of the form

(2.1)  $S_2(n) = \sum_{i+j=n} a_i a_j$

is recognized as the coefficient of $x^n$ in the expansion of $f(x)^2$, where

(2.2)  $f(x) = \sum_{j=0}^{\infty} a_j x^j$

is the generating function of the sequence $\{a_i\}$. Similarly,

(2.3)  $S_m(n) = \sum_{k_1 + \cdots + k_m = n} a_{k_1} \cdots a_{k_m}$

is given by the coefficient of $x^n$ in $f(x)^m$. The classical example

(2.4)  $\frac{1}{\sqrt{1 - 4x}} = \sum_{j=0}^{\infty} \binom{2j}{j} x^j$

gives the sums

(2.5)  $\sum_{i=0}^{n} \binom{2i}{i} \binom{2n-2i}{n-i} = 4^{n}$

and

(2.6)  $\sum_{k_1 + \cdots + k_m = n} \binom{2k_1}{k_1} \cdots \binom{2k_m}{k_m} = \frac{2^{2n}}{n!}\, \frac{\Gamma(\frac{m}{2} + n)}{\Gamma(\frac{m}{2})}.$

The powers of $(1 - 4x)^{-1/2}$ are obtained from the binomial expansion

(2.7)  $(1 - 4x)^{-a} = \sum_{j=0}^{\infty} \frac{(a)_j}{j!} (4x)^j,$

where $(a)_j$ is the Pochhammer symbol. The identity (2.5) is elementary and there are many proofs in the literature. A nice combinatorial proof of (2.6) appeared in 2006 in this journal. In a more recent contribution, G. Chang and C. Xu present a probabilistic proof of these identities.
Their approach is elementary: take $m$ independent gamma random variables $X_i \sim \Gamma(\tfrac{1}{2}, 1)$ and write

(2.8)  $\mathbb{E}\Big( \sum_{i=1}^{m} X_i \Big)^{n} = \sum_{k_1 + \cdots + k_m = n} \binom{n}{k_1, \cdots, k_m} \mathbb{E}X_1^{k_1} \cdots \mathbb{E}X_m^{k_m},$

where $\mathbb{E}$ denotes the expectation operator. For each random variable $X_i$, the moments are given by

(2.9)  $\mathbb{E}X_i^{k_i} = \frac{\Gamma(k_i + \frac{1}{2})}{\Gamma(\frac{1}{2})} = 2^{-2k_i}\, \frac{(2k_i)!}{k_i!} = \frac{k_i!}{2^{2k_i}} \binom{2k_i}{k_i},$

using Euler's duplication formula for the gamma function

(2.10)  $\Gamma(2z) = \frac{1}{\sqrt{\pi}}\, 2^{2z-1} \Gamma(z) \Gamma(z + \tfrac{1}{2})$

(see , 5.5.5) to obtain the second form. The expression

(2.11)  $\binom{n}{k_1, \cdots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!}$

for the multinomial coefficients shows that the right-hand side of (2.8) is

(2.12)  $\frac{n!}{2^{2n}} \sum_{k_1 + \cdots + k_m = n} \binom{2k_1}{k_1} \cdots \binom{2k_m}{k_m}.$

To evaluate the left-hand side of (2.8), recall that the sum of $m$ independent $\Gamma(\tfrac{1}{2}, 1)$ random variables has a $\Gamma(\tfrac{m}{2}, 1)$ distribution. Therefore, the left-hand side of (2.8) is

(2.13)  $\frac{\Gamma(\frac{m}{2} + n)}{\Gamma(\frac{m}{2})}.$

This gives (2.6). The special case $m = 2$ produces (2.5).
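A brute-force check of (2.5) and (2.6) is straightforward; the following sketch (added here, not part of the paper) enumerates the compositions $k_1 + \cdots + k_m = n$ and compares with the closed form, using exact rational arithmetic for the Pochhammer factor.

```python
# Exact enumeration check of (2.5) and (2.6) (added illustration, not from the paper).
from fractions import Fraction
from itertools import product
from math import comb, factorial

def central_sum(m, n):
    """Left-hand side of (2.6): sum over k_1 + ... + k_m = n of prod binom(2k_i, k_i)."""
    total = 0
    for ks in product(range(n + 1), repeat=m):
        if sum(ks) == n:
            term = 1
            for k in ks:
                term *= comb(2 * k, k)
            total += term
    return total

def poch(a, n):
    """Pochhammer symbol (a)_n with exact rational arithmetic."""
    out = Fraction(1)
    for i in range(n):
        out *= a + i
    return out

# (2.5): the case m = 2 gives 4^n
assert all(central_sum(2, n) == 4 ** n for n in range(8))

# (2.6): general closed form 2^{2n} (m/2)_n / n!
for m in range(1, 6):
    for n in range(6):
        assert central_sum(m, n) == Fraction(4 ** n) * poch(Fraction(m, 2), n) / factorial(n)
```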
3. More sums involving central binomial coefficients

The next example deals with the identity

(3.1)  $\sum_{k=0}^{n} \binom{4k}{2k} \binom{4n-4k}{2n-2k} = 2^{4n-1} + 2^{2n-1} \binom{2n}{n}$

that appears as entry 4.2.5.74 in . The proof presented here employs the famous dissection technique, first introduced by Simpson, in the simplification of

(3.2)  $\frac{1}{2} \left( \mathbb{E}(X_1 + X_2)^{2n} + \mathbb{E}(X_1 - X_2)^{2n} \right),$

where $X_1, X_2$ are independent random variables distributed as $\Gamma(\tfrac{1}{2}, 1)$. The left-hand side is evaluated by expanding the binomials to obtain

$\frac{1}{2} \left( \mathbb{E}(X_1 + X_2)^{2n} + \mathbb{E}(X_1 - X_2)^{2n} \right) = \frac{1}{2} \sum_{k=0}^{2n} \binom{2n}{k} \mathbb{E}X_1^{k}\, \mathbb{E}X_2^{2n-k} + \frac{1}{2} \sum_{k=0}^{2n} (-1)^{k} \binom{2n}{k} \mathbb{E}X_1^{k}\, \mathbb{E}X_2^{2n-k}.$

This gives

$\frac{1}{2} \left( \mathbb{E}(X_1 + X_2)^{2n} + \mathbb{E}(X_1 - X_2)^{2n} \right) = \sum_{k=0}^{n} \binom{2n}{2k} \mathbb{E}X_1^{2k}\, \mathbb{E}X_2^{2n-2k}.$

Using (2.9), this reduces to

(3.3)  $\frac{1}{2} \left( \mathbb{E}(X_1 + X_2)^{2n} + \mathbb{E}(X_1 - X_2)^{2n} \right) = \frac{(2n)!}{2^{4n}} \sum_{k=0}^{n} \binom{4k}{2k} \binom{4n-4k}{2n-2k}.$

The random variable $X_1 + X_2$ is $\Gamma(1, 1)$ distributed, so

(3.4)  $\mathbb{E}(X_1 + X_2)^{2n} = (2n)!,$

and the random variable $X_1 - X_2$ is distributed as $(X_1 + X_2) Z_{1/2}$, where $Z_{1/2}$ is independent of $X_1 + X_2$ and has a symmetric beta distribution with density $f_{Z_{1/2}}(z) = 1 / \big( \pi \sqrt{1 - z^2} \big)$. In particular, the even moments are given by

(3.5)  $\frac{1}{\pi} \int_{-1}^{1} \frac{z^{2n}\, dz}{\sqrt{1 - z^2}} = \frac{1}{2^{2n}} \binom{2n}{n}.$

Therefore,

(3.6)  $\mathbb{E}(X_1 - X_2)^{2n} = \mathbb{E}(X_1 + X_2)^{2n}\, \mathbb{E}Z_{1/2}^{2n} = \frac{(2n)!}{2^{2n}} \binom{2n}{n}.$

It follows that

(3.7)  $\mathbb{E}(X_1 + X_2)^{2n} + \mathbb{E}(X_1 - X_2)^{2n} = (2n)! + \frac{(2n)!}{2^{2n}} \binom{2n}{n}.$

The evaluations (3.3) and (3.7) imply (3.1).
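For completeness, (3.1) can be confirmed numerically; the snippet below is an added illustration, not part of the paper.

```python
# Exact check of identity (3.1) for small n (added illustration, not from the paper).
from math import comb

def lhs(n):
    return sum(comb(4 * k, 2 * k) * comb(4 * (n - k), 2 * (n - k)) for k in range(n + 1))

def rhs(n):
    return 2 ** (4 * n - 1) + 2 ** (2 * n - 1) * comb(2 * n, n)

assert all(lhs(n) == rhs(n) for n in range(1, 12))
print("identity (3.1) holds for n = 1, ..., 11")
```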
4. An extension related to Legendre polynomials

A key point in the evaluation given in the previous section is the elementary identity

(4.1)  $1 + (-1)^{k} = 2$ if $k$ is even, and $0$ otherwise.

This reduces the number of terms in the sum (3.3) from $2n$ to $n$. A similar cancellation occurs for any $p \in \mathbb{N}$. Indeed, the natural extension of (4.1) is given by

(4.2)  $\sum_{j=0}^{p-1} \omega^{jr} = p$ if $r \equiv 0 \pmod{p}$, and $0$ otherwise.

Here $\omega = e^{2\pi i/p}$ is a complex $p$-th root of unity. Observe that (4.2) reduces to (4.1) when $p = 2$. The goal of this section is to discuss the extension of (3.1). The main result is given in the next theorem. The Legendre polynomials appearing in the next theorem are defined by

(4.3)  $P_n(x) = \frac{1}{2^{n} n!} \left( \frac{d}{dx} \right)^{n} (x^2 - 1)^{n}.$

Theorem 4.1. Let $n, p$ be positive integers. Then

(4.4)  $\sum_{k=0}^{n} \binom{2kp}{kp} \binom{2(n-k)p}{(n-k)p} = \frac{2^{2np}}{p} \sum_{\ell=0}^{p-1} e^{i\pi\ell n}\, P_{np}\!\left( \cos\left( \frac{\pi\ell}{p} \right) \right).$

Proof. Replace the random variable $X_1 - X_2$ considered in the previous section by $X_1 + W X_2$, where $W$ is a complex random variable with uniform distribution among the $p$-th roots of unity. That is,

(4.5)  $\Pr\{ W = \omega^{\ell} \} = \frac{1}{p}, \quad \text{for } 0 \le \ell \le p - 1.$

The identity (4.2) gives

(4.6)  $\mathbb{E}W^{r} = 1$ if $r \equiv 0 \pmod{p}$, and $0$ otherwise.

This is the cancellation alluded to above. Now proceed as in the previous section to obtain the moments

(4.7)  $\mathbb{E}(X_1 + W X_2)^{np} = \sum_{k=0}^{n} \binom{np}{kp} \mathbb{E}X_1^{(n-k)p}\, \mathbb{E}X_2^{kp} = \frac{(np)!}{2^{2np}} \sum_{k=0}^{n} \binom{2kp}{kp} \binom{2(n-k)p}{(n-k)p}.$

A second expression for $\mathbb{E}(X_1 + W X_2)^{np}$ employs an alternative form of the Legendre polynomial $P_n(x)$ defined in (4.3).

Proposition 4.2. The Legendre polynomial is given by

(4.8)  $P_n(x) = \frac{1}{n!}\, \mathbb{E}\left[ (x + \sqrt{x^2 - 1})\, X_1 + (x - \sqrt{x^2 - 1})\, X_2 \right]^{n},$

where $X_1$ and $X_2$ are independent $\Gamma(\tfrac{1}{2}, 1)$ random variables.

Proof. The proof is based on characteristic functions. Compute the sum

(4.9)  $\mathbb{E}e^{t(x + \sqrt{x^2-1}) X_1}\, \mathbb{E}e^{t(x - \sqrt{x^2-1}) X_2} = \sum_{n=0}^{\infty} \frac{t^{n}}{n!}\, \mathbb{E}\left[ (x + \sqrt{x^2 - 1})\, X_1 + (x - \sqrt{x^2 - 1})\, X_2 \right]^{n}.$

The moment generating function for a $\Gamma(\tfrac{1}{2}, 1)$ random variable is

(4.10)  $\mathbb{E}e^{tX} = (1 - t)^{-1/2}.$

This reduces (4.9) to

$\left( 1 - t(x + \sqrt{x^2 - 1}) \right)^{-1/2} \left( 1 - t(x - \sqrt{x^2 - 1}) \right)^{-1/2} = (1 - 2tx + t^2)^{-1/2},$

which is the generating function of the Legendre polynomials.

This concludes the proof of Theorem 4.1.

Corollary 4.3. Let $x$ be a variable and $\Gamma_1, \Gamma_2$ as before. Then

(4.11)  $\mathbb{E}(\Gamma_1 + x^2 \Gamma_2)^{n} = n!\, x^{n}\, P_n\!\left( \tfrac{1}{2}(x + x^{-1}) \right).$

Proof. This result follows from Proposition 4.2 and the change of variables $x \mapsto \tfrac{1}{2}(x + x^{-1})$, known as the Joukowsky transform.

Replacing $x$ by $W^{1/2}$ in (4.11) and averaging over the values of $W$ gives the second expression for $\mathbb{E}(X_1 + W X_2)^{np}$. The proof of Theorem 4.1 is complete.
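Theorem 4.1 can be spot-checked with SciPy's Legendre polynomials. The sketch below is an added illustration, not part of the paper; it uses the fact that $e^{i\pi\ell n} = (-1)^{\ell n}$ for integers $\ell, n$.

```python
# Numerical check of Theorem 4.1 (added illustration); uses e^{i*pi*l*n} = (-1)^{l*n}.
import numpy as np
from math import comb, cos, pi
from scipy.special import eval_legendre

def lhs(n, p):
    return sum(comb(2 * k * p, k * p) * comb(2 * (n - k) * p, (n - k) * p)
               for k in range(n + 1))

def rhs(n, p):
    s = sum((-1) ** (l * n) * eval_legendre(n * p, cos(pi * l / p)) for l in range(p))
    return 2 ** (2 * n * p) / p * s

for p in (1, 2, 3):
    for n in (1, 2, 3):
        assert np.isclose(lhs(n, p), rhs(n, p))
```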
5. Chu-Vandermonde

The arguments presented here to prove (2.5) can be generalized by replacing the random variables $\Gamma(\tfrac{1}{2}, 1)$ by two random variables $\Gamma(a_i, 1)$ with shape parameters $a_1$ and $a_2$, respectively. The resulting identity is the Chu-Vandermonde theorem.

Theorem 5.1. Let $a_1$ and $a_2$ be positive real numbers. Then

(5.1)  $\sum_{k=0}^{n} \frac{(a_1)_k}{k!}\, \frac{(a_2)_{n-k}}{(n-k)!} = \frac{(a_1 + a_2)_n}{n!}.$

The reader will find in  a more traditional proof. The paper  describes how to find and prove this identity in automatic form. Exactly the same argument for (2.6) provides a multivariable generalization of the Chu-Vandermonde identity.

Theorem 5.2. Let $\{a_i\}_{1 \le i \le m}$ be a collection of $m$ positive real numbers. Then

(5.2)  $\sum_{k_1 + \cdots + k_m = n} \frac{(a_1)_{k_1}}{k_1!} \cdots \frac{(a_m)_{k_m}}{k_m!} = \frac{1}{n!} (a_1 + \cdots + a_m)_n.$
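Both theorems are easy to test numerically. The following sketch (added, not from the paper) uses scipy.special.poch for the Pochhammer symbol and checks (5.1) and (5.2) for a few arbitrary parameter choices.

```python
# Numerical check of Theorems 5.1 and 5.2 (added illustration, arbitrary parameters).
import numpy as np
from itertools import product
from math import factorial
from scipy.special import poch

def vandermonde_sum(a, n):
    """Left-hand side of (5.2) for a tuple of shape parameters a = (a_1, ..., a_m)."""
    total = 0.0
    for ks in product(range(n + 1), repeat=len(a)):
        if sum(ks) == n:
            term = 1.0
            for ai, ki in zip(a, ks):
                term *= poch(ai, ki) / factorial(ki)
            total += term
    return total

a1, a2, n = 0.3, 1.7, 6
assert np.isclose(vandermonde_sum((a1, a2), n), poch(a1 + a2, n) / factorial(n))   # (5.1)
a = (0.3, 1.7, 2.5)
assert np.isclose(vandermonde_sum(a, 5), poch(sum(a), 5) / factorial(5))           # (5.2)
```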
The final stated result presents a generalization of Theorem 4.1.

Theorem 5.3. Let $n, p \in \mathbb{N}$, $a \in \mathbb{R}^{+}$ and $\omega = e^{i\pi/p}$. Then

(5.3)  $\sum_{k=0}^{n} \frac{(a)_{kp}}{(kp)!}\, \frac{(a)_{(n-k)p}}{((n-k)p)!}\, z^{2kp} = \frac{1}{p} \sum_{\ell=0}^{p-1} e^{i\pi\ell n}\, z^{np}\, C^{(a)}_{np}\!\left( \tfrac{1}{2}(z\omega^{\ell} + z^{-1}\omega^{-\ell}) \right).$

Here $C^{(a)}_{n}(x)$ is the Gegenbauer polynomial of degree $n$ and parameter $a$.

Proof. Start with the moment representation for the Gegenbauer polynomials

(5.4)  $C^{(a)}_{n}(x) = \frac{1}{n!}\, \mathbb{E}_{U,V}\left( U(x + \sqrt{x^2 - 1}) + V(x - \sqrt{x^2 - 1}) \right)^{n}$

with $U$ and $V$ independent $\Gamma(a, 1)$ random variables. This representation is proved in the same way as the proof for the Legendre polynomial, replacing the exponent $-1/2$ by an exponent $-a$. Note that the Legendre polynomials are Gegenbauer polynomials with parameter $a = \tfrac{1}{2}$. This result can also be found in Theorem 3 of .

Note 5.4. The value $z = 1$ in (5.3) gives

(5.5)  $\sum_{k=0}^{n} \frac{(a)_{kp}}{(kp)!}\, \frac{(a)_{(n-k)p}}{((n-k)p)!} = \frac{1}{p} \sum_{\ell=0}^{p-1} e^{i\pi\ell n}\, C^{(a)}_{np}\!\left( \cos\left( \frac{\pi\ell}{p} \right) \right).$

This is a generalization of Chu-Vandermonde.
The techniques presented here may be extended to a variety of situations. Two examples illustrate the type of identities that may be proven. They involve the Hermite polynomials defined by

(5.6)  $H_n(x) = (-1)^{n} e^{x^2} \left( \frac{d}{dx} \right)^{n} e^{-x^2}.$

Theorem 5.5. Let $m \in \mathbb{N}$. The Hermite polynomials satisfy

(5.7)  $\frac{1}{n!} H_n\!\left( \frac{x_1 + \cdots + x_m}{\sqrt{m}} \right) = m^{-n/2} \sum_{k_1 + \cdots + k_m = n} \frac{H_{k_1}(x_1)}{k_1!} \cdots \frac{H_{k_m}(x_m)}{k_m!}.$

Proof. Start with the moment representation for the Hermite polynomials

(5.8)  $H_n(x) = 2^{n}\, \mathbb{E}(x + iN)^{n},$

where $N$ is normal with mean $0$ and variance $\tfrac{1}{2}$. The details are left to the reader.
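A numerical spot check of (5.7) for m = 2 (an added illustration, using SciPy's physicists' Hermite polynomials) looks as follows; the evaluation points are arbitrary.

```python
# Spot check of Theorem 5.5 for m = 2 (added illustration; arbitrary evaluation points).
import numpy as np
from math import factorial, sqrt
from scipy.special import eval_hermite   # physicists' Hermite polynomials H_n

def lhs(n, x1, x2):
    return eval_hermite(n, (x1 + x2) / sqrt(2)) / factorial(n)

def rhs(n, x1, x2):
    return 2 ** (-n / 2) * sum(
        eval_hermite(k, x1) / factorial(k) * eval_hermite(n - k, x2) / factorial(n - k)
        for k in range(n + 1))

for n in range(7):
    assert np.isclose(lhs(n, 0.4, -1.3), rhs(n, 0.4, -1.3))
```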
The moment representation for the Gegenbauer polynomials (5.4) yields the final result presented here.
Theorem 5.6. Let $m \in \mathbb{N}$. The Gegenbauer polynomials $C^{(a)}_{n}(x)$ satisfy

(5.9)  $C^{(a_1 + \cdots + a_m)}_{n}(x) = \sum_{k_1 + \cdots + k_m = n} C^{(a_1)}_{k_1}(x) \cdots C^{(a_m)}_{k_m}(x).$
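Identity (5.9) can be verified directly with SciPy's Gegenbauer polynomials; the sketch below (added here, with arbitrary parameters $a_i$, $n$ and evaluation point $x$) enumerates the compositions on the right-hand side.

```python
# Direct check of identity (5.9) (added illustration; parameters a_i, n and x are arbitrary).
import numpy as np
from itertools import product
from scipy.special import eval_gegenbauer   # C_n^{(a)}(x)

def conv_sum(alphas, n, x):
    """Right-hand side of (5.9): sum over k_1 + ... + k_m = n of prod C_{k_i}^{(a_i)}(x)."""
    total = 0.0
    for ks in product(range(n + 1), repeat=len(alphas)):
        if sum(ks) == n:
            term = 1.0
            for a, k in zip(alphas, ks):
                term *= eval_gegenbauer(k, a, x)
            total += term
    return total

alphas, n, x = (0.6, 1.4, 2.0), 4, 0.37
assert np.isclose(eval_gegenbauer(n, sum(alphas), x), conv_sum(alphas, n, x))
```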
Remark 5.7. A relation between Gegenbauer and Hermite polynomials is given by

(5.10)  $\lim_{a \to \infty} \frac{1}{a^{n/2}}\, C^{(a)}_{n}\!\left( \frac{x}{\sqrt{a}} \right) = \frac{1}{n!} H_n(x).$

This relation allows one to easily recover identity (5.7) from identity (5.9). The examples presented here show that many of the classical identities for special functions may be established by probabilistic methods. The reader is encouraged to try this method on his/her favorite identity.
Acknowledgements. The work of the second author was partially supported by NSF-DMS 0070567.
References
T. Amdeberhan, V. De Angelis, M. Lin, V. Moll, and B. Sury. A pretty binomial identity. Elem. Math., to appear, 2012.
G. Andrews, R. Askey, and R. Roy. Special Functions, volume 71 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, New York, 1999.
V. De Angelis. Pairings and signed permutations. Amer. Math. Monthly, 113:642–644, 2006.
Y. A. Brychkov. Handbook of Special Functions. Derivatives, Integrals, Series and Other Formulas. Taylor and Francis, Boca Raton, Florida, 2008.
Guisong Chang and Chen Xu. Generalization and Probabilistic Proof of a Combinatorial Identity. Amer. Math. Monthly, 118:175–177, 2011.
F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors. NIST Handbook of Mathematical Functions. Cambridge University Press, 2010.
M. Petkovsek, H. Wilf, and D. Zeilberger. A=B. A. K. Peters, Ltd., 1st edition, 1996.
T. Simpson. The invention of a general method for determining the sum of every second, third, fourth, or fifth, etc. term of a series, taken in order; the sum of the whole series being known. Phil. Trans. Royal Soc. London, 50:757–769, 1759.
P. Sun. Moment representation of Bernoulli polynomial, Euler polynomial and Gegenbauer polynomials. Stat. and Prob. Letters, 77:748, 2007.
D. Zeilberger. Three recitations on holonomic functions and hypergeometric series. Jour. Symb. Comp., 20:699–724, 1995.
Information Theory Laboratory, E.P.F.L., 1015 Lausanne, Switzerland
E-mail address : [email protected]
Department of Mathematics, Tulane University, New Orleans, LA 70118
E-mail address : [email protected]
|
32
|
Crack problem for superconducting strip with finite thickness - ScienceDirect
===============
International Journal of Solids and Structures
Volume 51, Issues 3–4, February 2014, Pages 886-893
Crack problem for superconducting strip with finite thickness
Hua-Dong Yong, Ze Jing, You-He Zhou
Abstract
The inclined crack problems are considered for a thin strip and a strip with finite thickness in a perpendicular magnetic field. The critical current density is assumed to be a constant. The crack orientation is varied and the effect of crack on the magnetic field distribution is neglected. Based on the analytical results and variational inequality, the field and current distributions are computed for both thin strip and strip with finite thickness cases, respectively. Then, the stress intensity factors at the crack tip are determined using the finite element method for magnetic field loads. The numerical results are presented for different inclined crack angles, magnetization processes and geometry parameters of the strip. The results show that the fracture behavior of the strip with finite thickness is more complicated than that of the thin strip. With the numerical results, we can predict the largest possibility of cracking as the strip is in an external field.
Keywords
Superconducting strip
Inclined crack
Variational inequality
Stress intensity factor
1. Introduction
Superconductivity is a phenomenon of zero electrical resistance below a certain temperature. The critical current density is one of the most important characteristics of the superconductors, which represents the maximum current density that can be carried in a superconductor. In addition, high magnetic field can be trapped in the superconductors, and trapped field is dependent on the critical current density. Due to their unique properties of high critical current density and trapped field, high temperature superconductors (HTS) may be widely applied in many applications, such as magnetic separation, motor, generator and magnetron spattering (Oka et al., 2012, Yokoyama et al., 2011). Recently, with the development of the superconductors of strong flux pinning properties, high critical current were achieved. At liquid nitrogen temperature, YBCO on metal substrates could reach current values in excess of 1 MA cm−2 (Foltyn et al., 1999). The critical current density J c of very thin YBCO films can approach 10 MA cm−2 at 77 K in self field (Foltyn et al., 2009).
Although the superconductors have remarkable properties, the mechanical behavior of the superconductor has been paid much attention in recent years (Johansen, 2000). A major problem is the brittleness of the superconductors. Application of the magnetic field will induce the shielding current. As the magnetic field is smaller than the lower critical field B c 1, the shielding current only flows near the surfaces. When the magnetic field becomes large enough, the magnetic flux and shielding current begin to penetrate into the sample. Then, due to the effects of defects and flux pinning, the electromagnetic body force (Lorentz force) is generated by the interaction of the magnetic field and current. Loaded by the electromagnetic force, the superconductor will undergo magnetostriction. With the increasing of the critical current density and applied magnetic field, the stress induced by the electromagnetic body force may be greater than critical value and lead to the mechanical failure of the superconductors. Ikuta et al. (1993) studied a large magnetostriction in a single crystal firstly, and a quantitative model was proposed. Subsequently, the magnetostriction was investigated based on three critical state models (Ikuta et al., 1994). The pinning induced magnetostriction in an isotropic superconductor was calculated for different sample shapes, such as an infinite slab (Ikuta et al., 1993), a rectangular slab (Johansen, 1999b), a square cylinder (Johansen et al., 1998) and a circular cylinder (Johansen et al., 1995, Johansen, 1999a), a hollow cylinder and a clamped cylinder (Johansen et al., 2000, Johansen et al., 2001), a superconducting thin circular disk (Johansen and Shantsev, 2003), the full magnetization cycle and virgin branch in thin long strip (Eremenko et al., 1998, Nabialek et al., 1998). Eremenko et al. (1998) also performed measurements for a quantitative comparison, and the results showed that measured and calculated curves have similar features and close magnitudes of deformation. The flux pinning induced magnetostriction was studied experimentally for different temperatures (Nabialek et al., 1997). The contributions of Meissner current and normal state to the magnetostriction were also investigated (Celebi et al., 2005, Celebi et al., 2007). The stress distributions were calculated for a slab based on the critical state Kim model and a superconducting strip with transport current (Yong and Zhou, 2008, Yong and Zhou, 2011b). Feng et al. (2011) investigated the stress and magnetostriction in the functionally graded slab.
In addition, cracks and defects may be found during fabrication and low temperature conditions (Diko and Krabbes, 2003, Katagiri et al., 2008). As the superconductors are subjected to high magnetic field in service, the initiation and propagation of crack may result in the failure of these materials. Ren et al. (1995) observed cracking in a mini-magnet activated by an applied field of 14 T. The cracking of the samples will limit the trapped field of bulk superconductors at lower temperature (Fuchs et al., 2000, Tomita and Murakami, 2003). Zhou and Yong (2007) analyzed the central crack problem for the long rectangular slab under the electromagnetic force. It was found that the stress intensity factor can be reduced by decreasing the maximum field. The central crack, interface crack and two collinear cracks problems in the thin strip, and the crack problem in thin strip by considering the decreasing of critical current with thickness were also investigated (He et al., 2013, Yong and Zhou, 2011a, Yong and Zhou, 2012, Yong et al., 2013). The shear and transverse stress distributions in coated conductors were analyzed (Jing et al., 2013). The interaction of two collinear cracks and crack-inclusion problems in a rectangular slab were studied (Gao et al., 2010a, Gao et al., 2010b). The fracture behavior for an inhomogeneous orthotropic superconducting slab was investigated by Feng et al. (2012). The inclined crack problem in the rectangular slab for different magnetization processes was also studied (Wang et al., 2013).
In spite of all the extensive theoretical work done on the fracture behavior, the crack problem in the strip of finite thickness has not been studied. In this paper, the general problems of an inclined crack in a superconducting thin strip and a strip with finite thickness are studied. Since the thickness is assumed to be half of the width, the strip with finite thickness can be regarded as a bulk. The state of deformation is assumed to be plane strain. After we present the electromagnetic behavior in the strip, the stress intensity factors are obtained numerically. Both the increasing field case and decreasing field case are considered. The effects of the geometry parameters of the strip and the orientations of crack on the stress intensity factors are discussed.
2. Thin strip in a perpendicular field
Consider a long superconducting thin strip of width 2 a and thickness 2 b which is attached to a substrate, as shown in Fig. 1(a). The strip is infinitely long in the y direction and assumed to be isotropic. An inclined crack of length 2 c is in the strip and the crack is located at the center (Fig. 1(b)). The inclined angle is θ, and the crack can be oriented from a parallel position (θ=0) to a perpendicular position (θ=π/2). A uniform magnetic fieldB a is applied parallel to the z axis. Due to the symmetry of the problem, the shielding current inside the superconductor is only along the y direction. In addition, the shielding current induced by the magnetic field screens the interior magnetic field of the superconductor. Since the shielding current flows in a pattern of rectangular loop, the direction of the shielding current is opposite for the left and right parts of the strip and the current density J has opposite sign. When the superconductor is sufficiently long in the applied field direction, the demagnetization effect is negligible. However, strong demagnetization effect will occur in the strip geometry. For simplicity, we neglect the effect of the crack on the magnetic field distribution. Under this assumption the sample will contain the symmetric flux and current distributions.
Fig. 1. (a) Schematic of the superconducting strip with substrate. An external field is applied perpendicular to the strip. (b) Crack geometry in the strip. (c) The loadings in thin strip for vertical crack and inclined crack.
By ignoring the effect of the lower critical field B c 1, we can obtain the flux density and current density distributions for the magnetized state. Due to the interaction of the magnetic field and current, the superconductor is subjected to the electromagnetic body force and shows a giant magnetostriction. Furthermore, the local loadings for the vertical crack and inclined crack are different (Fig. 1(c)). In the following part, we will consider the zero-field cooling condition. The strip is cooled in zero field, then the magnetic field B a is applied. In order to obtain the mechanical behavior of the superconductor, it is necessary to determine the flux profiles and current density firstly.
2.1. Increasing field case
Start from the exact expressions of the shielding current distribution for a thin superconducting strip in a perpendicular magnetic field. For the Bean model, the shielding current density J in the flux penetration region is equal to the critical current density $J_c$ and independent of the magnetic field. In the center of the strip, where the magnetic field is screened and $B_z = 0$, the shielding current density is less than $J_c$. When the applied field increases from the virgin state, one can obtain the following current profiles with conformal transformation (Brandt and Indenbom, 1993, Eremenko et al., 1998):
(1)  $J(x) = \frac{2 J_c}{\pi} \arctan \sqrt{\frac{1 - (a_0/a)^2}{(a_0/x)^2 - 1}}, \quad |x| < a_0$
(2)  $J(x) = J_c\, x/|x|, \quad a_0 < |x| < a$
where the penetrating flux front is $a_0 = a / \cosh(B_a/B_0)$ and the characteristic field is $B_0 = 2\mu_0 J_c b / \pi$. By using the Biot-Savart law, the magnetic field is $B_z(x) = 0$ for $|x| < a_0$ and $B_z(x) = \frac{2\mu_0 J_c b}{\pi} \operatorname{arctanh} \sqrt{\frac{1 - a_0^2/x^2}{1 - a_0^2/a^2}}$ for $a_0 < |x| < a$ (Brandt and Indenbom, 1993). It is to be noted that for the thin strip, where $b \ll a$, the current density and the magnetic field are averaged over the thickness. In addition, the magnetic field is only along the z direction and $B_x = 0$. Both the current and the magnetic field can be considered as scalar quantities. The body force, i.e. the electromagnetic force $\vec{f} = \vec{J} \times \vec{B}$, is only along the x direction. Furthermore, because the strip is infinite along the y direction, we can treat this problem as a plane strain case. In addition, the electromagnetic body force is compressive for the increasing field. In the nonpenetrated region, where $B_z = 0$, the body force is zero. It is to be noted that the thin strip may become unstable when the body forces are compressive, and buckling of the thin strip may occur. In order to prevent strip buckling, the thin strip is always mechanically supported by the substrate. Then, we assume that the thin strip is attached to a substrate with a thickness of 10b. The stress and displacement are continuous at the interface between the strip and the substrate. The Young's moduli of the strip and the substrate are given as 97 GPa and 128 GPa (Wang, 2013). The Poisson's ratios are the same and equal to 0.3.
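To make the profile formulas concrete, the following Python sketch (an added illustration, not part of the article; the material and field values are assumed, not taken from the paper) evaluates Eqs. (1) and (2) and the corresponding $B_z(x)$ on the right half of the strip. Note that the ideal thin-strip solution has a logarithmic divergence of $B_z$ at the strip edge, so the evaluation stops slightly inside $x = a$.

```python
# Bean-model profiles for the thin strip, Eqs. (1)-(2) (illustrative sketch).
# All parameter values below are assumed for the example, not taken from the paper.
import numpy as np

mu0 = 4e-7 * np.pi
a, b, Jc = 5e-3, 0.5e-3, 1.0e8          # half-width [m], half-thickness [m], J_c [A/m^2]
Ba = 0.02                                # applied field [T]

B0 = 2 * mu0 * Jc * b / np.pi            # characteristic field
a0 = a / np.cosh(Ba / B0)                # penetrating flux front

x = np.linspace(1e-4 * a, 0.98 * a, 8)   # right half of the strip, away from the edge
inner = x < a0

# Eq. (1) inside the flux front, Eq. (2) (J = Jc) in the penetrated region
J = np.where(inner,
             2 * Jc / np.pi * np.arctan(np.sqrt((1 - (a0 / a) ** 2) /
                                                np.maximum((a0 / x) ** 2 - 1, 1e-30))),
             Jc)

# Biot-Savart result quoted in the text: B_z = 0 inside, arctanh expression outside
ratio = np.clip((1 - a0 ** 2 / x ** 2) / (1 - a0 ** 2 / a ** 2), 0.0, None)
Bz = np.where(inner, 0.0, 2 * mu0 * Jc * b / np.pi * np.arctanh(np.sqrt(ratio)))

for xi, Ji, Bi in zip(x, J, Bz):
    print(f"x = {xi:.4e} m   J = {Ji:.3e} A/m^2   Bz = {Bi:.4e} T")
```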
After determining the flux and current density profiles, the elastic response can be calculated. The strip is elastically isotropic. To determine the fracture behavior in the superconducting strip, we use the finite element method to obtain the elastic crack tip stress intensity factors. The expressions of the stress intensity factors for an elastic material are (Broek, 1986)
(3)  $K_I = \lim_{r \to 0} \sqrt{2\pi r}\, \sigma_x(r, \theta = 0)$
(4)  $K_{II} = \lim_{r \to 0} \sqrt{2\pi r}\, \sigma_{xz}(r, \theta = 0)$
in which the origin of the polar coordinates $(r, \theta)$ is at the crack tip. $\sigma_x$ is the normal stress along the x axis and $\sigma_{xz}$ is the shear stress. $K_I$ and $K_{II}$ are the Mode I and Mode II stress intensity factors, respectively. In addition, Mode I is defined as the opening mode, in which the displacements of the crack surfaces are vertical to the crack plane. Mode II is denoted as the in-plane shear mode, in which the displacements of the crack surfaces are parallel to the crack plane. Here, we only present the results of the stress intensity factor at the upper crack tip for the thin strip case. The top surface is free of stress, i.e.,
(5)  $\sigma_z = \sigma_{xz} = 0, \quad z = b$
Based on the software ABAQUS, the numerical results of stress intensity factors for the body force can be obtained. We assume that the displacement in z direction is equal to zero at z=0 and x=±a during the following computation. In addition, when the strip does not contain a central crack, the stress intensity factor will not exist in this case. However, the mechanical behavior of the strip such as the magnetostriction and stress distributions can also be obtained with analytical or finite element method, and the magnetostriction of thin strip in magnetic field has been given by Eremenko et al. (1998).
Fig. 2 shows the stress intensity factors for an embedded crack with different inclined angles, where $K_0 = 2\mu_0 J_c^2 ab\sqrt{\pi c}$. The magnetic field $B_a$ is increased from 0 to $4B_0$. The crack length c is half of the thickness b. Although the length of the crack is assumed to be half of the thickness, in an actual material many small cracks may be scattered through the sample. It can be found that the stress intensity factors decrease with the increasing of the magnetic field. Since the body forces are compressive during the increasing field, the Mode I stress intensity factor $K_I$ is always negative. However, the Mode II stress intensity factor $K_{II}$ increases with the increasing of the field. At $\theta = 0$ and $\theta = \pi/2$, $K_{II} = 0$. For a fixed field, $K_{II}$ has a maximum at $\theta = \pi/4$. In addition, negative $K_I$ means crack contact and is meaningless (Konda and Erdogan, 1994).
Fig. 2. Variation of the normalized stress intensity factorsK I/K 0 (a) and K II/K 0 (b) with applied field for the increasing field case.
2.2. Decreasing field case
We now turn our attention to the decreasing field. As the magnetic field is increased to B m=4 B 0 and then decreases, the shielding current direction will reverse in the outer part of the strip. We denote the maximum applied field by B m. The magnetic field and current density will redistribute in the remagnetizated state. However, the distributions depend on the previous maximum field. Unlike the increasing field case, the strip is divided into three regions for the decreasing field case. The flux and current distributions vary dramatically along x axis. Using the linear superposition approach, the critical state profiles for decreasing field can also be obtained (Brandt et al., 1993, Eremenko et al., 1998). During the decreasing field case, the current direction reverses in the outer region, and the body force will become tensile in the outer part. Both the tensile and compressive body forces exist during the field reduction. We will now discuss the remagnetization process which gives different fracture behavior.
Fig. 3(a) and (b) shows the stress intensity factors while the applied field B a is reduced from B m=4 B 0 to 0. Due to the reason that both positive and negative body forces are simultaneously presented, the stress intensity factors change sign as the field decreases. Although negative K I is meaningless, we will still present the numerical results for better insight into the mechanical behavior of the strip. At B a≈2.3 B 0, both K I and K II are equal to zero, which means that the effect of the tensile body force is the same as that of the compressive body force for B a≈2.3 B 0. The stress intensity factors are positive for B a<2.3 B 0, where the tensile body force dominates. However, the stress intensity factors increase with the decreasing of the field firstly, reach the peak value and then decrease. The trend is similar to that of thin strip without substrate (Yong and Zhou, 2012, Yong et al., 2013). In addition, the absolute value of the K I decreases with θ, and the absolute value of the K II for θ=π/4 is larger than those of other angles. The results are similar to the increasing field case. Since the stress intensity factor becomes positive during field descent, the cracking may occur only for the decreasing field.
Fig. 3. Variation of the normalized stress intensity factorsK I/K 0 (a) and K II/K 0 (b) with applied field for the decreasing field case, in which B m=4 B 0.
Fig. 4(a) and (b) shows the stress intensity factors while the applied field B a is reduced from B m=B 0 to 0 for different inclined angles. It can be found that the stress intensity factors are almost negative during the entire field range. That is to say, the compressive body force always dominates. The trends of the stress intensity factors are similar to those of increasing field case. Comparing Fig. 3 with Fig. 4, we can see that the stress intensity factor decreases with the maximum field B m. In other words, a smaller maximum field B m leads to a more stable structure. During the magnetization process, it is necessary to avoid the high magnetic field. In addition, the stress intensity factors reach the same value as the applied field decreases to zero.
Fig. 4. Variation of the normalized stress intensity factors K I/K 0 (a) and K II/K 0 (b) with applied field for the decreasing field case, in which B m=B 0.
3. Strip with finite thickness in a field
So far we have analyzed the fracture behavior of the thin strip ($b \ll a$). As the thin strip is subjected to the applied magnetic field, the analytical solutions of the flux and current distributions can be obtained for the Bean model. However, real strips for applications have a finite thickness. We now analyze the strip with finite thickness, i.e., b/a is a finite value. In this case, we neglect the effect of the substrate. Both $B_z$ and $B_x$ exist for the strip with finite thickness. It is difficult to obtain the analytic solution for the strip with finite thickness. Based on the method of the variational inequality given by Prigozhin, 1996, Prigozhin, 1997, we can obtain the numerical solutions of the current and flux distributions. The stationary variational inequalities are equivalent to the constrained optimization problems (Prigozhin, 1996):
(6)  $\min_{J \in R}\ \tfrac{1}{2}(LJ, J) - (g, J)$
where $L$ is the convolution with $G = (1/2\pi)\ln(1/|x|)$ and $(u, v)$ is the scalar product of the two vector functions. For the two-dimensional problems, $g = L\hat{J} - x(H_z - \hat{H}_z) + z(H_x - \hat{H}_x)$, and "^" denotes the initial value for the stationary problem (Prigozhin, 1996). After the above coefficients are determined, the optimization problems can be solved using the method of point underrelaxation with projection (Trémolières et al., 1981). The general procedure is that we first divide the strip into N rectangular elements and set the initial current density in each element. Then, Eq. (6) can be discretized by evaluating G and (u, v) at the different elements. With the method of underrelaxation with projection, the current density in each element can then be obtained to within a specified tolerance.
3.1. Increasing field case
We first analyze the body force distributions in the strip of finite thickness as the field is applied. The current density and field distributions are similar to the results given by Brandt (1996) and Prigozhin (1996). With the variational formulation, we can obtain the body force distribution.
Fig. 5 shows the distributions of the body force along x and z directions for the increasing field, in which f 0=μ 0 J c 2 a. It can be found that f x is larger than f z. Note that the body force is perpendicular to the magnetic field. Then the body force f x is induced by B z. In addition, as the central crack is parallel to the z axis, we can only consider the effect of B z and neglect that of B x. It is interesting that f x is an odd function with respect to z coordinate, while f z is an even function. On the other hand, the largest value of f x is located at x=±a, and the largest f z is located at z=±b. In the thin strip, the body force is only in one direction. For the strip with finite thickness, the body forces f x and f z are along two directions. This means that the fracture behavior of the strip with finite thickness is more complicated than that of the thin strip.
Fig. 5. The body force distributions f x/f 0 (upper) and f z/f 0 (lower) in the strip with finite thickness as the applied field is increased from zero to B a=B 0.
Fig. 6 shows the stress intensity factors of the strip with finite thickness for the increasing field, in which the ratio $b/a = 0.5$ and $K_0 = \mu_0 J_c^2 a^2 \sqrt{\pi c}$. Because both the geometry and the body forces are symmetric with respect to the x axis, the stress intensity factors are the same at the two crack tips for the central crack. As expected, the stress intensity factors are always negative and the trends are similar to those of the increasing field case for the thin strip. It is interesting that the stress intensity factors $K_{II}$ for $\theta = \pi/3$ and $\theta = \pi/6$ are also very close. For larger field $B_a$, the stress intensity factors change in magnitude linearly with the applied field.
Fig. 6. Variation of the normalized stress intensity factors K I/K 0 (a) and K II/K 0 (b) with applied field for the increasing field case, in which b/a=0.5.
3.2. Decreasing field case
Fig. 7 shows the body force distributions in the cross section of the strip for the decreasing field. The direction of f x is reversed in the outer part of the strip. Similar to Fig. 5, f x is an odd function with respect to z coordinate. The variation of f z with x is more complex. In addition, since the current direction will change in different parts, both the tensile and compressive body forces are presented simultaneously. The cracking is most likely to be initiated.
Fig. 7. The body force distributions f x/f 0 (upper) and f z/f 0 (lower) in the strip with finite thickness as the applied field is decreased from B m=B 0 to B a=0.6 B 0.
Fig. 8 shows the stress intensity factors for different ratios b/a as the applied field is decreased from B m=B 0 to zero. The crack is parallel to z axis, thus only the Mode I stress intensity factor K I exists. It can be found that with the increasing of ratio b/a, the stress intensity factor decreases obviously in the most of field region. This is due to the reason that it is easier for the magnetic field to penetrate into the smaller structure, which means that a larger cross section will lead to a more stable structure. As b/a=2.0, the stress intensity factors are almost negative in the entire field region. For b/a<0.5, the stress intensity factor is not a monotonic function of the applied field.
Fig. 8. Variation of the normalized stress intensity factors with applied field for different ratios b/a, in which θ=0.
Fig. 9 shows the stress intensity factors of ratio b/a=0.5 for decreasing field. The applied field B a is decreased from B m=4 B 0 to zero. As expected, the stress intensity factors K II for θ=π/3 and θ=π/6 are very close. In addition, the magnitude of the normalized stress intensity factor for the strip with finite thickness is larger than that for thin strip. The magnetic field B a which corresponds to the maximum of stress intensity factor is about B a=3 B 0, which is different with the thin strip case. The magnetic field B a where the highest cracking probability can be expected is dependent on the size of the structure and the maximum of applied field B m.
Fig. 9. Variation of the normalized stress intensity factors K I/K 0 (a) and K II/K 0 (b) with applied field for the decreasing field case, in which b/a=0.5.
3.3. Rotating field case
We have analyzed the situations in which the applied field is increased and decreased. Now we generalize our results to account for rotating field. When the direction of the field B a is changed, the response of the system depends on the previous values. As shown in Fig. 10, the applied field rotates with respect to the strip. The rotating angle between the applied field and z axis is ϕ, and is changed from 0 to π. In addition, the magnitude of applied field B a keeps constant during the rotation.
Fig. 10. A strip with a central crack loaded by rotating field.
Fig. 11(a) and (b) shows the stress intensity factors for the ratio b/a=0.5 as the applied field is rotated. The magnitude of the applied field B a is B 0. From Fig. 11(a), it can be found that the stress intensity factor K I increases with ϕ, reaches the peak value and then decreases. In addition, for a fixed ϕ, the stress intensity factor K I varies with θ. Especially, the overall peak value of K I is located at θ=π/4. The angle ϕ which corresponds to the maximum of the stress intensity factor also decreases with θ. In other words, the maximum of the stress intensity factor is dependent on both ϕ and θ. Then, it is necessary to consider the crack inclined angle and rotating field angle as the applied field is rotated. As can be seen in Fig. 11(b), the variation of the stress intensity factor K II with ϕ is more complicated. Comparing with stress intensity factor K I in Fig. 11(a), we can find that the trend of K II with respect to ϕ is dependent on θ. In addition, K II at ϕ=0 is the same as that at ϕ=π. Compared to other inclined crack angles, the change of the stress intensity factor K II with ϕ is small for θ=0. It is interesting that the peak values of K II for different inclined angles are close.
Fig. 11. Variation of the normalized stress intensity factors K I/K 0 (a) and K II/K 0 (b) with applied field for the rotating field case, in which b/a=0.5.
4. Conclusions
The fracture behavior of a superconducting strip with finite thickness in a magnetic field was analyzed in the framework of the Bean model where J c is a constant. The flux and current distributions are calculated numerically with variational inequality firstly. Then, the stress intensity factors are obtained for different inclined crack angles and magnetization processes. The results show that the stress intensity factor is dependent on geometry parameter and maximum field, and the value may become positive during field decreasing and rotating cases. As the magnetic field decreases, the stress intensity factor decreases with thickness, which means that cracking is prone to occur in thin strip. In addition, during field decreasing, the fields corresponding to the maximum stress intensity factors (the largest cracking possibility) are different for thin strip and strip with finite thickness. The results can help researchers to understand the complicated mechanical phenomena in high temperature superconductors.
Acknowledgments
We thank Dr L. Prigozhin for helpful discussions. This research was supported by the Fund of Natural Science Foundation of China (Nos. 11032006, 11121202 and 11202087) and National Key Project of Magneto-Restriction Fusion Energy Development Program (No. 2013GB110002). The Fundamental Research Funds for the Central Universities, Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20110211120027 and Program for New Century Excellent Talents in University of Ministry of Education of China (NCET-13-0266). The authors gratefully acknowledge these supports.
References
Brandt, 1996. E.H. Brandt. Superconductors of finite thickness in a perpendicular magnetic field: strips and slabs. Phys. Rev. B, 54 (1996), p. 4246.
Brandt and Indenbom, 1993. E.H. Brandt, M. Indenbom. Type-II superconductor strip with current in a perpendicular magnetic field. Phys. Rev. B, 48 (1993), p. 12893.
Brandt et al., 1993. E.H. Brandt, M.V. Indenbom, A. Forkl. Type-II superconducting strip in perpendicular magnetic field. Europhys. Lett., 22 (1993), p. 735.
Broek, 1986. D. Broek. Elementary Engineering Fracture Mechanics. Kluwer Academic Pub (1986).
Celebi et al., 2005. S. Celebi, F. Inanir, M. LeBlanc. Contribution of the Meissner current to the magnetostriction in a high Tc superconductor. Supercond. Sci. Technol., 18 (2005), p. 14.
Celebi et al., 2007. S. Celebi, F. Inanir, M. LeBlanc. Coexistence of critical and normal state magnetostrictions in type II superconductors: a model exploration. J. Appl. Phys., 101 (2007), p. 013906.
Diko and Krabbes, 2003. P. Diko, G. Krabbes. Macro-cracking in melt-grown YBaCuO superconductor induced by surface oxygenation. Supercond. Sci. Technol., 16 (2003), p. 90.
Eremenko et al., 1998. V. Eremenko, V. Sirenko, H. Szymczak, A. Nabialek, M. Balbashov. Magnetostriction of thin flat superconductor in a transverse magnetic field. Superlattices Microstruct., 24 (1998), pp. 221-226.
Feng et al., 2011. W. Feng, X. Han, P. Ma. Flux-pinning-induced stress and magnetostriction in a functionally graded long rectangular superconductor slab. J. Appl. Phys., 110 (2011), p. 063917.
Feng et al., 2012. W. Feng, R. Zhang, H. Ding. Crack problem for an inhomogeneous orthotropic superconducting slab under an electromagnetic force. Physica C, 477 (2012), pp. 32-35.
Foltyn et al., 1999. S.R. Foltyn, Q.X. Jia, P.N. Arendt, L. Kinder, Y. Fan, J.F. Smith. Relationship between film thickness and the critical current of YBa2Cu3O7-δ coated conductors. Appl. Phys. Lett., 75 (1999), pp. 3692-3694.
Foltyn et al., 2009. S.R. Foltyn, H. Wang, L. Civale, B. Maiorov, Q. Jia. The role of interfacial defects in enhancing the critical current density of YBa2Cu3O7−δ coatings. Supercond. Sci. Technol., 22 (2009), p. 125002.
Fuchs et al., 2000. G. Fuchs, P. Schatzle, G. Krabbes, S. Grub, P. Verges, K.H. Muller, J. Fink, L. Schultz. Trapped magnetic fields larger than 14 T in bulk YBaCuO. Appl. Phys. Lett., 76 (2000), p. 2107.
Gao et al., 2010a. Z.-W. Gao, Y.-H. Zhou, K.Y. Lee. The interaction of two collinear cracks in a rectangular superconductor slab under an electromagnetic force. Physica C, 470 (2010), pp. 654-658.
Gao et al., 2010b. Z.-W. Gao, Y.-H. Zhou, K.Y. Lee. Crack-inclusion problem for a long rectangular slab of superconductor under an electromagnetic force. Comput. Mater. Sci., 50 (2010), pp. 279-282.
He et al., 2013. A. He, C. Xue, H.D. Yong, Y.H. Zhou. Fracture behaviors of thin superconducting films with field-dependent critical current density. Physica C, 492 (2013), pp. 25-31.
Ikuta et al., 1993. H. Ikuta, N. Hirota, Y. Nakayama, K. Kishio, K. Kitazawa. Giant magnetostriction in Bi2Sr2CaCu2O8 single crystal in the superconducting state and its mechanism. Phys. Rev. Lett., 70 (1993), pp. 2166-2169.
Ikuta et al., 1994. H. Ikuta, K. Kishio, K. Kitazawa. Critical state models for flux-pinning-induced magnetostriction in type-II superconductors. J. Appl. Phys., 76 (1994), pp. 4776-4786.
Jing et al., 2013. Z. Jing, H.D. Yong, Y.H. Zhou. Shear and transverse stress in a thin superconducting layer in simplified coated conductor architecture with a pre-existing detachment. J. Appl. Phys., 114 (2013), p. 033907.
Johansen, 1999a. T.H. Johansen. Flux-pinning-induced stress and strain in superconductors: case of a long circular cylinder. Phys. Rev. B, 60 (1999), p. 9690.
Johansen, 1999b. T.H. Johansen. Flux-pinning-induced stress and strain in superconductors: long rectangular slab. Phys. Rev. B, 59 (1999), p. 11187.
Johansen, 2000. T.H. Johansen. Flux-pinning-induced stress and magnetostriction in bulk superconductors. Supercond. Sci. Technol., 13 (2000), p. R121.
Johansen and Shantsev, 2003. T.H. Johansen, D.V. Shantsev. Magnetostrictive behaviour of thin superconducting disks. Supercond. Sci. Technol., 16 (2003), p. 1109.
Johansen et al., 1995. T.H. Johansen, J. Lothe, H. Bratsberg. Flux-pinning-induced magnetostriction of cylindrical superconductors. In: A. Barone, D. Fiorani, A. Tampieri (Eds.), Fourth Euro Ceramics - High Tc Superconductors, Part II, Gruppo Editoriale Faenza Editrice, Faenza (1995), pp. 117-120.
Johansen et al., 1998. T.H. Johansen, J. Lothe, H. Bratsberg. Shape distortion by irreversible flux-pinning-induced magnetostriction. Phys. Rev. Lett., 80 (1998), pp. 4757-4760.
Johansen et al., 2000. T.H. Johansen, C. Wang, Q.Y. Chen, W.-K. Chu. Enhancement of tensile stress near a hole in superconducting trapped-field magnets. J. Appl. Phys., 88 (2000), pp. 2730-2733.
Johansen et al., 2001. T.H. Johansen, Q.Y. Chen, W.-K. Chu. Pinning-induced stress in clamped superconductors. Physica C, 349 (2001), pp. 201-210.
Katagiri et al., 2008. K. Katagiri, S. Sato, N. Tsuchiya, K. Kasaba. Bending fatigue characteristics of DyBaCuO bulk superconductor. Physica C, 468 (2008), pp. 1424-1427.
Konda and Erdogan, 1994. N. Konda, F. Erdogan. The mixed mode crack problem in a nonhomogeneous elastic medium. Eng. Fract. Mech., 47 (1994), pp. 533-545.
Nabialek et al., 1997. A. Nabialek, P. Komorowski, M. Gutowska, M. Balbashov, J. Gorecka, H. Szymczak, O. Mironov. Giant magnetostriction and magnetostriction jumps in superconducting single crystalline. Supercond. Sci. Technol., 10 (1997), p. 786.
Nabialek et al., 1998. A. Nabialek, H. Szymczak, V. Sirenko, A. Dyachenko. Influence of the real shape of a sample on the pinning induced magnetostriction. J. Appl. Phys., 84 (1998), pp. 3770-3775.
Oka et al., 2012. T. Oka, T. Muraya, N. Kawasaki, S. Fukui, J. Ogawa, T. Sato, T. Terasawa. Magnetizing of permanent magnets using HTS bulk magnets. Cryogenics, 52 (2012), pp. 27-31.
Prigozhin, 1996. L. Prigozhin. The Bean model in superconductivity: variational formulation and numerical solution. J. Comput. Phys., 129 (1996), pp. 190-200.
Prigozhin, 1997. L. Prigozhin. Analysis of critical-state problems in type-II superconductivity. IEEE Trans. Appl. Supercond., 7 (1997), pp. 3866-3873.
Ren et al., 1995. Y. Ren, R. Weinstein, J. Liu, R. Sawh, C. Foster. Damage caused by magnetic pressure at high trapped field in quasi-permanent magnets composed of melt-textured Y-Ba-Cu-O superconductor. Physica C, 251 (1995), pp. 15-26.
Tomita and Murakami, 2003. M. Tomita, M. Murakami. High-temperature superconductor bulk magnets that can trap magnetic fields of over 17 tesla at 29 K. Nature, 421 (2003), pp. 517-520.
Trémolières et al., 1981. R. Trémolières, J.-L. Lions, R. Glowinski. Numerical Analysis of Variational Inequalities. North Holland (1981).
Wang, 2013. Y.S. Wang. Fundamental Elements of Applied Superconductivity in Electrical Engineering. Wiley (2013).
Wang et al., 2013. X. Wang, H.D. Yong, C. Xue, Y.H. Zhou. Inclined crack problem in a rectangular slab of superconductor under an electromagnetic force. J. Appl. Phys., 114 (2013), p. 083901.
Yokoyama et al., 2011. K. Yokoyama, T. Oka, K. Noto. Improvement of a magnetization method on a small-size superconducting bulk magnet system. Physica C, 471 (2011), pp. 901-904.
Yong and Zhou, 2008. H.D. Yong, Y.H. Zhou. Kim model of stress induced by flux pinning in type-II superconductors. J. Appl. Phys., 103 (2008), p. 113903.
Yong and Zhou, 2011a. H.D. Yong, Y.H. Zhou. Interface crack between superconducting film and substrate. J. Appl. Phys., 110 (2011), p. 063924.
Yong and Zhou, 2011b. H.D. Yong, Y.H. Zhou. Stress distribution in a flat superconducting strip with transport current. J. Appl. Phys., 109 (2011), p. 073902.
Yong and Zhou, 2012. H.D. Yong, Y.H. Zhou. Crack problem for thin superconducting strip in a perpendicular magnetic field. IEEE Trans. Appl. Supercond., 22 (2012), 8400905.
Yong et al., 2013. H.D. Yong, C. Xue, Y.H. Zhou. Thickness dependence of fracture behaviour in a superconducting strip. Supercond. Sci. Technol., 26 (2013), p. 055003.
Zhou and Yong, 2007. Y.H. Zhou, H.D. Yong. Crack problem for a long rectangular slab of superconductor under an electromagnetic force. Phys. Rev. B, 76 (2007), p. 094523.
|
33
|
Macroscale, humidity-insensitive, and stable structural superlubricity achieved with hydrogen-free graphene nanoflakes | Nature Communications
===============
Article | Open access | Published: 24 October 2024

Macroscale, humidity-insensitive, and stable structural superlubricity achieved with hydrogen-free graphene nanoflakes

Ruiyun Li1, Xing Yang2, Jiacheng Li3, Yongfu Wang2 (ORCID: 0000-0002-2340-4791), …, Ming Ma1,3 (ORCID: 0000-0001-6016-286X)

Nature Communications, volume 15, Article number: 9197 (2024)
Abstract
Achieving solid superlubricity in high-humidity environments is of great practical importance yet remains challenging nowadays, due to the complex physicochemical roles of water and concomitant oxidation on solid surfaces. Here we report a facile way to access humidity-insensitive solid superlubricity (coefficient of friction 0.0035) without detectable wear and running-in at a humidity range of 2–80%. Inspired by the concept of structural superlubricity, this is achieved between Au-capped microscale graphite flake and graphene nanoflake-covered hydrogen-free amorphous carbon (GNC a-C). Such GNC a-C exhibits reduced pinning effects of water molecules and weak oxidation, which demonstrates stable structural superlubricity even after air exposure of the surfaces for 365 days. The manufacturability of such design enables the macroscopic scale-up of structural superlubricity, achieving the leap from 4 μm × 4 μm contact to 3 mm ball-supported contact with a wide range of materials. Our results suggest a strategy for the macroscale application of structural superlubricity under ambient condition.
Introduction
Friction accounts for an estimated 23% of the world's energy consumption, costing 119 EJ per year1,2,3. Meanwhile, the accompanying wear leads to about 80% of mechanical component failures2,4. Even a modest reduction in friction and wear remarkably impacts cost economics. Solid superlubricity, the phenomenon of an extremely low coefficient of friction (of order 10−3), offers a fundamental way to solve these problems5. Solid superlubricity has been demonstrated through two main approaches with different types of solid interfaces: (1) incommensurate crystalline contact interfaces and (2) disordered solid-based interfaces. The former is widely observed in single finite-size junctions at homogeneous (graphite/graphite6,7,8, graphene/graphene9,10 and MoS2/MoS2 11,12) and heterogeneous (graphite/MoS2 3,13,14, graphite/hexagonal boron nitride15,16,17 and MXene/MoS2 18,19,20) contacts. Such contacts have proved robust against humidity, but they suffer from their limited scale (e.g., micrometres21) and low manufacturability for mechanical equipment22. Ingenious control and design of multiple finite-size junctions in nanocomposites has recently been explored to achieve the macroscopic scale-up of superlubricity in non-humid environments23,24. The typical materials of the latter approach are highly hydrogenated diamond-like carbon films (DLC, H > 40 at.%)25,26. DLC, which demands a dry environment (such as vacuum or inert gas), can achieve superlubricity at the macroscale, thereby compensating for the drawbacks of the former; however, the superlubricity is always lost in open air. With carbon nanostructure and composition design, the relative humidity (RH) up to which DLC superlubricity survives can be increased to 45%27,28.
The search for more stable and humidity-insensitive superlubric material systems continues for practical applications. Most engineering surfaces are exposed to a wide humidity range, from relative humidity 20% to 70% according to the European standard EN 16798-1:201929, which leads to frequently alternating water dissociation and adsorption on solid surfaces; these form complex solid-liquid contact states and affect friction. Water adsorption can trigger chemical reactions due to the high surface activity of most engineering parts, causing surface roughening and oxidation that lead to high friction30. Because of these challenges, superlubricity has so far only been achieved under limited conditions such as low humidity (RH < 45%)31,32 or single-crystalline surfaces10,21.
While the two approaches have their own limitations, they also indicate a possibility to achieve macroscale, humidity-insensitive solid superlubricity by combining their advantages. Here we demonstrate humidity-insensitive and stable superlubricity (coefficient of friction \(\mu\) ~ 0.0035) between an Au-capped microscale graphite flake and graphene nanoflake-covered amorphous carbon (GNC a-C) over a wide humidity range of 2–80%. The graphite flake was driven to slide on the surfaces using a tungsten microtip in contact with the Au cap. Together with the absence of detectable wear, this shows that structural superlubricity22 is in fact achieved. No running-in process is required to achieve such properties. With a series of carefully designed comparative experiments, we find that the orientation of the graphene nanoflakes of GNC a-C, as well as its hydrogen-free nature, is the key. Detailed atomic-scale characterization and full atomic molecular simulations further reveal the mechanism to be the reduced pinning of water molecules on the surfaces and weak oxidation. Due to the high scalability of this design, macroscopic structural superlubricity under ambient conditions is further achieved.
Results and discussion
Structure analysis of hydrogen-free GNC a-C
Inspired by the concept of structural superlubricity, which makes use of incommensurability, we designed and fabricated a hydrogen-free GNC a-C coating whose surface is covered with graphene nanoflakes. The silicon-supported GNC a-C was synthesized by a magnetron sputtering method using the bombardment of Ar ions on a graphite target (see the Methods section). When sputtering from graphite targets with Ar ions of a few hundred eV, the films reflect the target composition and structure, which makes the GNC a-C preparation insensitive to the substrate (Supplementary Fig. 1.1 and Supplementary Table 1). The cross-section TEM image of GNC a-C shows that the randomly oriented planes have an interlayer distance of about 0.34 nm, in accordance with the (002) plane of graphene33 (Fig. 1a, b). Graphene planes that are not parallel to the Si substrate extend to the coating surface, thereby achieving the coverage of graphene nanoflakes on the surface (Fig. 1c). The surface structure of GNC a-C was revealed by depositing a thin GNC a-C film (about 20 nm) on a NaCl substrate under the same conditions as the Si-substrate sample and then analyzing it in transmission electron microscopy (TEM) after transferring the GNC a-C onto a Cu grid by dissolving the underlying NaCl (Supplementary Fig. 1.2). The top-view TEM image of GNC a-C clearly reveals that nanoscale graphene flakes arranged in different orientations cover most of the surface (Fig. 1c–g). The electron energy loss spectra (EELS) of the region show a strong π peak at ~285 eV, the same as pyrolytic graphite26, implying that its atoms are highly sp2-bonded (Fig. 1h). The sp2 fraction of GNC a-C calculated from the position and intensity of the π peak34 is 85%. The chemical bonding is supported by the Raman spectra of the films, which show a high I_D/I_G intensity ratio (0.99 for GNC a-C vs 0.43 for a-C in Fig. 1i). The higher I_D/I_G ratio indicates higher carbon structural ordering of the surfaces35, which reveals the graphite-like carbon character of GNC a-C. The Raman data on the global scale and the TEM/EELS data on the local scale bring out the key feature of the carbon films: graphene nanoflakes covering the film surfaces. The structure can therefore be considered as graphene nanoflakes covering an amorphous carbon matrix, resembling snowflakes covering a smooth road.
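The two-window method described here (the π area over the total π+σ area, with 5 eV and 15 eV integration windows) is easy to reproduce numerically. Below is a minimal sketch, not the authors' actual processing pipeline: it assumes a background-subtracted C K-edge spectrum supplied as energy/intensity arrays, an edge onset near 285 eV, and an optional normalization against an HOPG reference ratio; all of these inputs are placeholders.

```python
import numpy as np

def _integrate(y, x):
    # Simple trapezoidal integration (kept explicit to avoid NumPy version differences).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sp2_fraction(energy_eV, intensity, edge_onset=285.0,
                 pi_window=5.0, total_window=15.0, hopg_ratio=None):
    """Estimate the sp2 fraction from a background-subtracted C K-edge EELS spectrum.

    The pi* contribution is integrated over `pi_window` eV from the edge onset and the
    (pi + sigma) contribution over `total_window` eV; their ratio can optionally be
    normalized by the same ratio measured on an HOPG reference (taken as 100 % sp2).
    """
    e = np.asarray(energy_eV, dtype=float)
    i = np.asarray(intensity, dtype=float)

    pi_mask = (e >= edge_onset) & (e < edge_onset + pi_window)
    tot_mask = (e >= edge_onset) & (e < edge_onset + total_window)

    ratio = _integrate(i[pi_mask], e[pi_mask]) / _integrate(i[tot_mask], e[tot_mask])
    if hopg_ratio is not None:
        ratio /= hopg_ratio          # normalize so that HOPG -> 1.0 (100 % sp2)
    return ratio

# Example with synthetic data: a sharp pi* peak near 285 eV on a broad sigma* step.
energy = np.linspace(280, 320, 2000)
spectrum = np.exp(-((energy - 285.5) / 1.0) ** 2) + 2.0 / (1 + np.exp(-(energy - 292)))
print(f"pi/(pi+sigma) ratio: {sp2_fraction(energy, spectrum):.2f}")
```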
Fig. 1: Structural characterization of graphene nanoflake-covered amorphous carbon (GNC a-C).
a Side TEM image of GNC a-C marked in (b). b Cross-section TEM image of GNC a-C. c Top TEM image of GNC a-C marked in (b). d–f Magnified TEM images of (I-III) marked in (c), respectively, which demonstrate that nanoscale graphene flakes arranged in different orientations cover most of the surfaces. g Structure schematic illustration of (c). h EELS results of amorphous carbon (a-C), GNC a-C and highly ordered pyrolytic graphite (HOPG). The sp 2 fraction in C K-edge is calculated by the ratio of π area/total (π + σ) area which the π energy window is 5 eV, and the (π + σ) energy window is 15 eV. i Raman results of a-C, GNC a-C and HOPG.
Full size image
Microscale and humidity-insensitive structural superlubricity
The Au-capped microscale graphite flakes were obtained by shear-induced cleavage of square graphite mesas, exploiting their inherent self-retracting motion8,36. The microscale graphite flakes with single-crystalline surfaces8,36 were transferred onto the GNC a-C surfaces, forming a tribo-pair (see the Methods section and Supplementary Figs. 2.1–2.3). They were then driven to slide on the surfaces using a tungsten microtip in contact with the gold cap.
To demonstrate the humidity-insensitive superlubricity, we used an atomic force microscope (AFM) manipulator (the Methods section gives further details of the experimental set-up) to measure the friction force of a 4 μm × 4 μm graphite flake on GNC a-C surfaces (Fig. 2a). The specific calibration process is shown in Supplementary Section 3. The friction force of a graphite flake on a-C surfaces was also measured for comparison (Supplementary Fig. 4.1). First, the frictional measurements were performed under ambient atmosphere at a relative humidity of 60 ± 3%. Figure 2b, c shows a positive linear dependence between friction force \(F_f\) and normal load \(F_N\) in the loading and unloading regimes. The slope of the \(F_f\)–\(F_N\) fitting curve is defined as the coefficient of friction, i.e. the differential friction coefficient. For GNC a-C surfaces, \(\mu\) is found to be 0.0036 ± 0.0001 for loading and 0.0035 ± 0.0002 for unloading, with a friction stress \(\tau_f = F_f/A\) of about 8.8–15.4 kPa, where the contact area A is 16 μm². Such a low friction coefficient and friction stress clearly show the presence of superlubricity. Cross-section TEM images of the graphite flake/GNC a-C tribo-pair provide a clear picture that the graphite flake keeps full contact with the GNC a-C substrate, which contributes to robust superlubric sliding (Supplementary Fig. 5). In sharp contrast, the \(\mu\) of a microscale graphite flake on pure a-C is 0.0521 ± 0.0015 for loading and 0.0537 ± 0.0027 for unloading, with \(\tau_f\) about 413.8–505.0 kPa, much larger than that of the graphite flake/GNC a-C tribo-pair. By extrapolating the linear dependence to zero normal load, a finite friction of 0.045 μN is found between graphite flake and GNC a-C, much lower than that of the graphite flake/a-C tribo-pair (4.222 μN). The durability of the superlubricity was measured by sliding an 8 μm × 8 μm graphite flake on GNC a-C for 1.8 × 10⁵ cycles at a speed of 2 μm/s and a normal force of 35.5 μN (a sliding distance of 0.36 m). An extremely low friction force of 0.295 μN is recorded, corresponding to a friction stress of 4.6 kPa (Fig. 2e).
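The differential friction coefficient quoted here is simply the slope of a least-squares line through the friction-versus-load data, with the zero-load intercept giving the residual friction and the flake area converting force into friction stress. A minimal sketch of that analysis is shown below; the load/friction values are made-up numbers of the right magnitude, standing in for the measured loading sweep.

```python
import numpy as np

# Hypothetical load (uN) and friction force (uN) pairs for one loading sweep;
# the real data are the repeated measurements described in the text.
normal_load = np.array([28.0, 34.0, 40.0, 46.0, 52.0, 56.0])
friction    = np.array([0.145, 0.166, 0.188, 0.210, 0.232, 0.246])

# Linear fit F_f = mu * F_N + F_0: slope = differential friction coefficient,
# intercept = finite friction extrapolated to zero load.
(mu, f0), cov = np.polyfit(normal_load, friction, 1, cov=True)
mu_err, f0_err = np.sqrt(np.diag(cov))

contact_area_um2 = 16.0                       # 4 um x 4 um flake
tau_kpa = friction / contact_area_um2 * 1e3   # uN/um^2 = MPa, converted to kPa

print(f"mu = {mu:.4f} +/- {mu_err:.4f}, F_f(0) = {f0:.3f} uN")
print(f"friction stress range: {tau_kpa.min():.1f}-{tau_kpa.max():.1f} kPa")
```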
Fig. 2: Friction performances of microscale graphite flake on GNC a-C.
a Frictional test illustration of graphite flake/GNC a-C. The Si-supported GNC a-C was fixed to the piezoelectric ceramic transducer (PZT) stage and an Au-capped graphite flake was placed on its top. Normal and lateral forces were exerted using an AFM tip in contact with the central region of the Au cap. An objective lens was attached to the AFM head to monitor the relative movement of the sheared flake with respect to the GNC a-C in situ. b, c Frictional forces of the microscale graphite flake on GNC a-C and a-C under different normal loads with a sliding speed of 2 μm/s, respectively. Orange and blue points correspond to loading and unloading processes, respectively. The error bars represent the standard deviation of 20 friction measurements on the same sample in (b, c). d Friction forces of the microscale graphite flake on GNC a-C and a-C surfaces at RH 2%, RH 20%, RH 40%, RH 60% and RH 80%. The functional form of the blue line is \(F_f=(10.155\pm 1.453)x^{2}+(2.522\pm 0.949)x+(4.892\pm 0.125)\). The functional form of the red line is \(F_f=(3.575\times 10^{-6}\pm 1.013)\exp\left(-\frac{x}{-0.070\pm 0.017}\right)+(0.176\pm 0.009)\). Error bars are the standard deviation of the data points. e The durability of superlubricity, measured by sliding an 8 μm × 8 μm graphite flake on GNC a-C with a speed of 2 μm/s and a normal force of 35.5 μN. f AFM result of the GNC a-C friction surface after 1.8 × 10⁵ cycles in (g). g Optical image of the graphite flake sliding on the GNC a-C surface; the orange dashed box is the sliding area. h Orange dashed box in (g). i Corresponding Raman spectra of dots 1-9 on the microscale graphite flake in (h).
Full size image
To examine whether wear is present on the surfaces after the frictional test, the 8 μm × 8 μm graphite flake friction surface was characterized by Raman spectrometry and the GNC a-C friction surface was characterized using AFM (Fig. 2f, g). The Raman spectra obtained at different positions (points 1–9 in the illustration) on the slid graphite flake surface show the absence of a D peak, which indicates negligible wear (Fig. 2g–i). For the slid region on the GNC a-C surface, there is no detectable wear debris and no obvious roughening in the AFM images. Altogether, the small friction stress (\(\tau_f\) < 15.4 kPa), low friction coefficient (on the order of 0.001), and negligible wear confirm the achievement of robust and humidity-insensitive structural superlubricity between the microscale graphite flake and GNC a-C.
To reveal the relation between friction force and relative humidity, we further performed the frictional tests over a wide humidity range of 2–80%. The friction forces of the microscale graphite flake on GNC a-C surfaces at RH 2%, RH 20%, RH 40%, RH 60% and RH 80% are 0.142 ± 0.024, 0.181 ± 0.011, 0.183 ± 0.015, 0.195 ± 0.009 and 0.515 ± 0.010 μN, respectively, indicating the achievement of humidity-insensitive superlubricity (Fig. 2d). In contrast, the friction forces on pure a-C surfaces at RH 2%, RH 20%, RH 40%, RH 60% and RH 80% are 4.949 ± 0.120, 5.939 ± 0.415, 7.473 ± 0.130, 10.467 ± 0.346 and 13.240 ± 0.382 μN, respectively, showing a rapid increase of friction with increasing humidity.
Subsequently, the environmental stability was investigated by exposing the GNC a-C surfaces to open air with free-changing humidity, and the surface oxidation and roughening were detected by X-ray photoelectron spectroscopy (XPS) and AFM, respectively. Figure3a, b shows the C1s spectra of the a-C and GNC a-C surfaces after 0, 30, 180 and 365 days exposure. It is indicated that three approximately unchanged peaks located around 284.6 eV (C = C), 285.3 eV (C–C) and 286.6 eV (C–O) are present on the GNC a-C surfaces28,37, whereas the a-C surfaces show obviously increased C–O and a new peak at 288.5 eV. The C–O fractions at 0, 30, 180 and 365 days change from 7.47% to 7.68% (Fig.3e and Supplementary Table2), and the corresponding surface roughness increases from 131.6 pm to 196.1 pm, indicating that the oxidation and roughening of GNC a-C surfaces are weak (Fig.3c–e). The corresponding friction stresses at 0, 30, 180 and 365 days are 11.4 ± 1.5, 12.1 ± 0.7, 12.8 ± 0.8, 13.3 ± 0.5 kPa, respectively, indicating the achievement of super-low friction (Fig.3f). In comparison, the oxidation and roughening of graphite flake/exposed a-C tribo-pairs have an obvious increase with the increase of air exposure time (Supplementary Fig.6), leading to a large increase in friction (Fig.3f). The results confirm that the GNC a-C surfaces are competent to act as a stable superlubric counterface for microscale graphite flakes, where they can achieve humidity-insensitive superlubricity on the premise of no detectable wear.
Fig. 3: Environmental stability for the structural superlubricity of microscale graphite flake on GNC a-C surfaces.
a C1s spectra of GNC a-C surfaces exposed in air for 0, 30, 180 and 365 days. b C1s spectra of a-C surfaces exposed in air for 0, 30, 180 and 365 days. The red, yellow, blue and dark blue shaded areas correspond to C=C, C−C, C−O and C=O bonds, respectively; the grey line is the raw C1s spectrum and the grey points are the envelope fitted to the raw C1s. c AFM results of a-C surfaces before and after the 365 days' exposure. d AFM results of GNC a-C surfaces before and after the 365 days' exposure. The scan areas in (c) and (d) are 5 μm × 5 μm. e GNC a-C surface roughness and the fraction of C−O and C=O bonds at different air exposure times. f Corresponding friction stress of the microscale graphite flake on exposed GNC a-C and a-C surfaces. g Measured friction forces of GNC a-C pruned by H+ ion bombardment. Error bars are the standard deviation of the data points.
Full size image
Effect of doped-H on GNC a-C superlubricity
To understand the effect of film structure on structural superlubricity, we tuned the film structure using H+ ion bombardment. The incorporated H atoms preferentially etch the sp2 phase, which destroys the hexagonal benzene-ring structure of the graphene nanoflakes and makes them amorphous. This structural evolution is confirmed by the weakening π-peak signal at ~284.5 eV with increasing H2, and the corresponding changes in chemical bond fractions (Supplementary Fig. 7.1). The doped H atoms smooth the GNC a-C surfaces (from 160.4 pm to 118.5 pm, Supplementary Fig. 7.2); however, they increase the friction forces with microscale graphite flakes (from 0.181 ± 0.003 to 8.339 ± 0.071 μN, Fig. 3g). This is probably because the incorporated H atoms trigger an edge-pinning effect, in which the H atoms tend to approach the edge sp3-bonded carbon atoms of the graphite flake owing to H passivation26,38,39.
The friction behaviors of graphite flakes of different sizes (4 μm × 4 μm, 6 μm × 6 μm, 8 μm × 8 μm and 10 μm × 10 μm) were also measured on two types of GNC a-C deposited under the 120 H2 and 0 H2 conditions. The friction stresses on the 0 H2 GNC a-C are 11.3 ± 0.5, 7.2 ± 0.4, 4.4 ± 0.4 and 4.0 ± 0.4 kPa, respectively, showing a tendency to saturate at a few kPa (inset in Fig. 4a). In contrast, the friction stresses of the 4 μm × 4 μm and 6 μm × 6 μm graphite flakes on the 120 H2 films are 579.3 ± 11.3 kPa and 334.1 ± 9.0 kPa, respectively (Fig. 4a), and the friction stresses of the 8 μm × 8 μm and 10 μm × 10 μm graphite flakes are difficult to measure because the flakes become locked in the carbon films by the edge-pinning effect, which is enhanced by their significantly larger edges.
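The size dependence in Fig. 4a can be checked with a quick log-log fit of friction stress against contact area. The sketch below uses only the stress values quoted in this paragraph; the recovered exponent for the 120 H2 film should land near the −0.677 reported in the Fig. 4 caption, while the hydrogen-free film simply levels off at a few kPa. This is an illustrative re-fit, not the authors' fitting procedure.

```python
import numpy as np

sizes_um = np.array([4.0, 6.0, 8.0, 10.0])   # flake edge lengths
area_um2 = sizes_um ** 2

# Friction stresses (kPa) quoted in the text.
tau_0H2   = np.array([11.3, 7.2, 4.4, 4.0])   # hydrogen-free GNC a-C
tau_120H2 = np.array([579.3, 334.1])          # 120 H2 film (larger flakes lock)

# Power-law fit tau = c * A**n for the hydrogenated film:
# log(tau) = n*log(A) + log(c); compare with the exponent in the Fig. 4 caption.
n_h, _ = np.polyfit(np.log(area_um2[:2]), np.log(tau_120H2), 1)
print(f"120 H2 film: tau ~ A^{n_h:.2f}")

# For the hydrogen-free film the stress simply levels off at a few kPa.
print("0 H2 GNC a-C friction stress (kPa):", tau_0H2)
```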
Fig. 4: Microscale-to-macroscale superlubricity.
a Friction stress of graphite flakes of different sizes (4 μm × 4 μm, 6 μm × 6 μm, 8 μm × 8 μm and 10 μm × 10 μm) on 0 H2 GNC a-C and 120 H2 GNC a-C surfaces. Error bars are the standard deviation of the data points. The blue and red dashed lines are fits to the corresponding data points. The functional form for 120 H2 GNC a-C is \(\tau_f=(3.776\pm 0.083)\times A^{-0.677\pm 0.007}\). The functional form for 0 H2 GNC a-C is \(\tau_f=(-9.156\times 10^{-9})A^{3}+(3.253\times 10^{-6})A^{2}-(3.591\times 10^{-4})A+0.016\). b Macroscale friction coefficient curves of 3 mm SiO2-supported GNC a-C/graphite and 3 mm SiO2/graphite contacts at an RH of 60 ± 3%. The inset is a picture of the friction test. c Sliding surfaces on graphite. d Sliding surfaces on 3 mm SiO2-supported GNC a-C. e Raman mapping of the sliding surfaces marked in (c). f 3D white-light morphology of the sliding surface marked in (d). g Macroscale friction coefficient (\(F_f/F_N\)) of GNC a-C/graphite, GNC a-C/MoS2, GNC a-C/MoSe2 and self-mated GNC a-C tribopairs. h A friction stress comparison between the present work and other superlubric DLC systems.
Full size image
Macroscale humidity-insensitive superlubricity
The hydrogen-free nature of GNC a-C, together with the different orientations of the graphene flakes on the films, leads to an ultralow and nearly saturated friction stress for contact areas up to 10 μm × 10 μm. The achievement of a nearly saturated friction stress at the microscale drives us to explore the macroscopic scale-up of the superlubricity, which is relevant to practical applications in fields such as bearings22. The structural manufacturability of GNC a-C allows it to be deposited on a SiO2 ball with a diameter of 3 mm while keeping the same structure as the Si-supported samples. The frictional measurements of SiO2-supported GNC a-C against graphite were conducted with a ball-on-disk tribometer (CSM Instruments, Switzerland) at RH of 3 ± 3%, 30 ± 3% and 60 ± 3% (Fig. 4b). Sliding the GNC a-C coated SiO2 ball on graphite leads to robust superlubricity with no running-in stage (Fig. 4b and Supplementary Fig. 8). The corresponding optical microscope images indicate the absence of detectable wear on the 0.04 mm² contact surfaces (Fig. 4c, d), supported by Raman mapping (Fig. 4e) and 3D white-light morphology (Fig. 4f) measurements. The wear rate should be below \(8.5\times 10^{-16}\ \mathrm{mm^{3}/(N\,m)}\), which is the detection limit of the Raman equipment (see Supplementary Section 9)40,41. The absence of wear indicates the survival of hydrogen-free, differently oriented graphene nanoflakes on the outermost surface of GNC a-C, which shows that structural superlubricity is achieved22. The macroscale interfacial friction stress is about 81 kPa (Supplementary Fig. 10), markedly lower than those for macroscale DLC superlubricity established via hydrogenated, nanostructured or nanocomposite tribofilms (0.57–6.92 MPa, Fig. 4h)18,24,25,26,27,28,31,42,43,44,45,46. The no running-in, no detectable wear, superlubric state at a relative humidity of 60 ± 3% can be extended to self-mated GNC a-C, MoS2 and MoSe2 counterfaces (Fig. 4g, Supplementary Figs. 11, 12).
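A simple consistency check on the macroscale numbers is to convert a friction coefficient into an interfacial friction stress using the optically measured contact area. The sketch below uses the 0.25 N load from the Methods section and the 0.04 mm² contact area mentioned above; the friction-coefficient value is a placeholder read off the Fig. 4b traces rather than a number stated in the text.

```python
# Convert a macroscale friction coefficient into an interfacial friction stress.
normal_load_N = 0.25           # applied load given in the Methods section
contact_area_mm2 = 0.04        # optically measured contact area (Fig. 4c, d)
mu_macro = 0.013               # placeholder value read off the Fig. 4b traces

friction_force_N = mu_macro * normal_load_N
# mm^2 -> m^2 conversion, then Pa -> kPa.
stress_kPa = friction_force_N / (contact_area_mm2 * 1e-6) / 1e3
print(f"interfacial friction stress ~ {stress_kPa:.0f} kPa")   # close to the ~81 kPa quoted
```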
Mechanism of humidity-insensitive superlubricity
To reveal the mechanism of humidity-insensitive structural superlubricity pair, we analyzed the water distribution on the GNC a-C and pure a-C surfaces under a RH 60 ± 3% atmosphere using Raman mapping method. The Raman spectra of the two surfaces show different I D/I G ratio, and two peaks at 3400 cm−1 and 3600 cm−1 that attribute to OH stretch modes in water molecules47,48 (Fig.5a, b). The maps of I D/I G ratio are displayed in Fig.5c, e, and the corresponding intensity maps of water OH stretch bands are displayed in Fig.5d, f. The Raman mapping data reveal that the graphene nanoflake-covered design of carbon films leads to the spreading-to-localizing transformation of water. First, the GNC a-C surfaces have lots of small-area and structure-different regions (Fig.5c), whereas the pure a-C surfaces are filled with large-area and structure-uniform bands, such as blue regions in Fig.5e. Second, the higher I D/I G region of GNC a-C surfaces (red regions in Fig.5c) has less water molecules (green regions in Fig.5d), as is shown by the red-green correspondence of points 1-5 between its I D/I G and OH stretch maps. The red-green correspondence can be also observed on the right boundaries of the I D/I G and OH stretch maps of GNC a-C (Fig.5c, d). In sharp contrast, the pure a-C surfaces are filled with large-area water bands, as the red-green correspondence phenomenon is lost in the points I and II, and at the right boundaries of I D/I G and OH stretch maps (Fig.5e, f). The localization transformation of water on the GNC a-C surfaces can be supported by the weaker absorption band in the range of 3400-3600 cm-1 than that of a-C surfaces (Supplementary Fig.13).
Fig. 5: Raman analysis of water distribution on carbon films.
a, b Raman spectra of GNC a-C and a-C surfaces, respectively. The arrows in (a, b) correspond to two peaks at 3400 cm−1 and 3600 cm−1 that attribute to OH stretch modes in water molecules. c, e Maps of the intensity ratio of D and G peaks from GNC a-C and a-C surfaces, respectively. d, f Maps of water OH stretching band (3400 and 3600 cm−1) from GNC a-C and a-C surfaces, respectively. The points 1-5 in (c) and (d) show the red-green correspondence, whereas the regions I and II in (e) and (f) do not have the red-green correspondence phenomenon. The arrows between (c) and (d), and between (e) and (f) indicate whether the water band exists or not. All the Raman tests were performed under a RH 60±3% atmosphere, and the Raman laser intensity was controlled below 0.5 MW·m−2 to reduce the effects on water distribution and carbon structures.
Full size image
The difference in water distribution on the GNC a-C and pure a-C surfaces results in an obvious increase of the contact angle (from ~69.4° for pure a-C to ~101.0° for GNC a-C, Supplementary Fig. 14). The increased contact angle indicates a weakened resistance to the movement of water clusters on the GNC a-C surfaces. The slip length of water on the GNC a-C surfaces is measured to be 30.0 ± 1.2 nm, based on the relation \(l_s=\eta/\lambda\), where \(\eta\) is the liquid viscosity and \(\lambda\) is the interfacial friction coefficient between water molecules and the solid surface49,50. Compared with the 4.3 ± 3.5 nm slip length of water on graphite, the high slip length of water on the GNC a-C surfaces indicates weak interfacial friction and a weak pinning effect of water molecules on the surfaces50. These changes support the spreading-to-localizing transformation of water on GNC a-C surfaces, which promotes the achievement of humidity-insensitive structural superlubricity.
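The slip length here comes from the hydrodynamic relation \(l_s=\eta/\lambda\), so the measured slip lengths can be inverted to give the water-solid friction coefficient \(\lambda\). A minimal sketch, assuming a room-temperature water viscosity of about 0.89 mPa·s (a standard value, not one given in the paper):

```python
eta_water = 0.89e-3            # Pa*s, water viscosity near 25 degC (assumed)

def slip_length(eta, lam):
    """l_s = eta / lambda, with lambda the liquid-solid friction coefficient (Pa*s/m)."""
    return eta / lam

def friction_coefficient(eta, l_s):
    """Invert the relation to get lambda from a measured slip length."""
    return eta / l_s

# Back out lambda for the two surfaces from the slip lengths quoted in the text.
for name, l_s in [("GNC a-C", 30.0e-9), ("graphite", 4.3e-9)]:
    lam = friction_coefficient(eta_water, l_s)
    print(f"{name:8s}: l_s = {l_s*1e9:4.1f} nm -> lambda = {lam:.2e} Pa*s/m")
```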
All the above results demonstrate that the graphene nanoflake-covered design of carbon films leads to the reduced pinning effects of water molecules and weak oxidation, which promotes the water spreading-to-localizing transformation on the GNC a-C surfaces, thereby achieving stable structural superlubricity at a wide humidity range of 2–80%.
Theoretical simulations
To understand the mechanism at the atomic level, we investigated the water localization transformation on the GNC a-C surfaces over the RH 2–80% range, as well as the friction behavior of graphite on the humidified surfaces, via theoretical simulations. First, we built the structural model of GNC a-C based on the TEM observation of graphene nanoflakes arranged in different orientations and the EELS fitting result of 85% sp2-bonded carbon (details of the simulations are provided in the Methods section and Supplementary Figs. 15.1 and 15.2). We then simulated the morphology of the surfaces under the different humidities studied in the experiments. RH 2–80% is imitated by using a variable number of H2O molecules in the N2. With RH increasing from 2% to 80%, the distributions of water molecules show spreading and water-band formation on the a-C surface (Fig. 6a), and a localization transformation via water-cluster formation on the GNC a-C surface (Fig. 6b). Specifically, for RH < 40%, the a-C surface is randomly populated by two-molecule clusters, while the GNC a-C surface has large clusters nucleating at the a-C component. As RH increases to 60%, long water bands form on the a-C surface, which are absent on the GNC a-C surface. At an RH of 80%, all the water molecules form a water band on the a-C surface, while only a few water clusters are present on the GNC a-C surface. This unique water localization transformation from a-C to GNC a-C is further evidenced by quantifying the number of water molecules in each water cluster (insets in Fig. 6a, b). In addition, the number of water molecules in direct contact with the surface is monitored to identify their interactions. The GNC coverage of a-C leads to a remarkable reduction of such water molecules as humidity increases, which promotes the water localization transformation on the GNC a-C surface (Fig. 6c, d). The qualitative agreement of the simulation results with the observations in the Raman mapping images (Fig. 5) validates the simulation setup.
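The cluster statistics described here (cluster sizes and the number of molecules in direct contact with the surface) can be extracted from the simulated water coordinates with a simple distance-based grouping. The sketch below is a generic post-processing routine, not the authors' analysis script: the 3.5 Å hydrogen-bond-like cutoff, the adsorption criterion, and the toy coordinates are all assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def water_clusters(o_positions, cutoff=3.5):
    """Group water molecules into clusters: two molecules belong to the same cluster
    if their oxygen atoms are within `cutoff` angstrom. Returns sorted cluster sizes."""
    n = len(o_positions)
    pairs = cKDTree(o_positions).query_pairs(cutoff)

    # Union-find over the contact pairs.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

def n_adsorbed(o_positions, surface_z, dz=3.5):
    """Count molecules whose oxygen sits within dz angstrom of the surface plane."""
    return int(np.sum(np.abs(o_positions[:, 2] - surface_z) < dz))

# Toy example: 200 oxygen positions scattered above a surface at z = 0.
rng = np.random.default_rng(0)
oxy = rng.uniform([0, 0, 2], [100, 100, 20], size=(200, 3))
print("largest clusters:", water_clusters(oxy)[:5])
print("adsorbed molecules:", n_adsorbed(oxy, surface_z=0.0))
```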
Fig. 6: MD simulations of water molecule distribution under wide-humidity conditions.
a Snapshots of H2O molecules (200-6400 molecules for RH 2–80%) adsorbed on a-C (black) after simulations of 300 ps. b Snapshots of H2O molecules on GNC a-C (yellow), as in (a). The insets give the corresponding number of water clusters and the number of water molecules they contain; white regions represent water clusters. c Number of water molecules adsorbed directly on the a-C component of a-C. d Number of water molecules adsorbed directly on the a-C and graphene components of GNC a-C.
Full size image
The tribological processes in the simulations are investigated by sliding a graphite flake (6.62 nm × 6.23 nm) on both surfaces under different humidities at a velocity of 10 m/s. As shown in Fig. 7a, the \(\mu\) of the graphite flake/GNC a-C tribopair remains small (from 0.004 at RH 2% to 0.01 at RH 80%), increasing only mildly with humidity, indicating the achievement of humidity-insensitive structural superlubricity (Supplementary Sections 16, 17). In sharp contrast, the \(\mu\) of the graphite flake/a-C tribopair is well above 0.02 and increases significantly as RH increases. This difference in \(\mu\) also agrees qualitatively with that measured in the experiments (Fig. 2d). By carefully examining the sliding process at the atomic scale, we found that the graphite flake sweeps away the water clusters much more efficiently on the GNC a-C surface than on the a-C surface, as evidenced in Fig. 7b–d. This phenomenon can be further quantified by counting the number of water molecules attached to the edge of the graphite flake: fewer water molecules stick to the graphite flake edge on the GNC a-C surface than on the a-C surface (Fig. 7b).
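In this kind of sliding simulation the friction coefficient is typically extracted from the force traces as the time-averaged lateral force resisting motion divided by the average normal force. The sketch below shows such a post-processing step on a synthetic trace; the discarded equilibration fraction and the toy numbers are assumptions, chosen so the result lands near the 0.004 reported for GNC a-C at low RH.

```python
import numpy as np

def friction_coefficient(lateral_force, normal_force, discard=0.2):
    """Time-averaged |lateral| / normal force ratio from an MD force trace.

    `discard` drops the first fraction of the trajectory so the average is taken
    over steady-state sliding only (the equilibration length is an assumption)."""
    start = int(len(lateral_force) * discard)
    f_lat = np.mean(np.abs(lateral_force[start:]))
    f_norm = np.mean(normal_force[start:])
    return f_lat / f_norm

# Toy trace: a noisy stick-slip-like lateral force around 0.4 nN under a 100 nN load.
rng = np.random.default_rng(1)
t = np.arange(50_000)
lat = 0.4 + 0.2 * np.sin(2 * np.pi * t / 500) + rng.normal(0, 0.05, t.size)
norm = np.full(t.size, 100.0)
print(f"mu = {friction_coefficient(lat, norm):.4f}")
```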
Fig. 7: Full atomic molecular simulations of sliding friction under wide-humidity conditions.
a Coefficient of friction at different relative humidities. b Number of water molecules attached to the edge of the graphite flake in the 5 Å region marked in (c, d). c, d Snapshots of the graphite flake (pink) sliding on the a-C (black) and GNC a-C (yellow) surfaces, respectively. The white in (c, d) represents water clusters. e DFT models and interfacial energy ΔE results of a-C/H2O, graphene/H2O and graphene edge/H2O contacts. The blue and red in (e) represent electron accumulation and depletion.
Full size image
To understand the morphologies during both static and dynamic processes, we used density functional theory (DFT) to calculate the interfacial energy ΔE of all the interfaces present in the systems, namely graphene/a-C, graphene/graphene, a-C/H2O, graphene/H2O and graphene edge/H2O contacts (see the Methods section for details). For the solid-solid interactions, ΔE for graphene/a-C (0.048 eV/atom) is one order of magnitude larger than that for graphene/graphene (0.005 eV/atom) (Supplementary Fig. 18). For the solid-liquid interactions, the a-C surface shows a stronger interaction (ΔE = 0.07 eV/molecule) than the GNC a-C surface (ΔE = 0.02 eV/molecule), whereas the edge is the most favorable binding site (ΔE = 0.14 eV/molecule, Fig. 7e). Detailed energy decomposition analysis (with more details in Tables S3 and S4) suggests that all three solid-liquid interactions are non-bonded, with a-C/H2O and graphene edge/H2O exhibiting larger chemical interaction components than graphene/H2O. The weaker chemical interaction of graphene/H2O indicates a weak pinning effect of water molecules on the GNC a-C surface, which is responsible for the water localization transformation and the high slippage of water clusters on the surface (Fig. 7b–d). Together with the cohesive energy of water molecules, which is 0.23 eV51, it becomes evident that the morphologies are governed by the relative strengths of the three solid-liquid interactions. Since the GNC coverage of a-C and the graphite surface show approximately no RH dependence52, the solid-solid interface tends to remain clean.
The energy dissipation during the sliding process can be understood by considering the contributions from the nominal contact area and from the associated edge. Within the nominal contact area, for both the solid-graphite and solid-water interactions, the GNC a-C surface shows a lower interfacial energy than the a-C surface. As the bond lengths of both surfaces are similar, it is reasonable to assume that a lower interfacial energy leads to less energy dissipation. Therefore, the friction within the nominal contact area of the GNC a-C/graphite tribopair is always smaller than that of the a-C/graphite tribopair, regardless of the amount of confined water. For the edge contribution, fewer water molecules are attached to the edge of the graphite flake on the GNC a-C surface (Fig. 7b), forming a smaller contact area with the surface, and the interaction between water and the GNC a-C surface is weaker than with the a-C surface; the contribution of the edge to friction for GNC a-C is therefore also smaller than that on the a-C surface. As a result, the overall friction for the graphite flake sliding on the GNC a-C surface is significantly smaller than that on the a-C surface, which agrees with the experimental measurements.
In summary, we have demonstrated the achievement of macroscale, humidity-insensitive, and stable structural superlubricity over a wide humidity range of 2–80%. This is accomplished by making the best use of the two existing approaches towards macroscale structural superlubricity, combining intrinsic incommensurability with flexible manufacturability. By incorporating numerous graphene nanoflakes into hydrogen-free a-C via magnetron sputtering and forming a friction pair with graphite, the pinning effects of water molecules on the surface and the potential oxidation are greatly reduced. Our microscale experiments together with atomistic simulations clearly reveal the mechanism, while macroscale tests with several types of friction pair (self-mated GNC a-C, graphite, MoS2 and MoSe2 counterfaces) on curved surfaces unambiguously demonstrate macroscale structural superlubricity, therefore constituting a critical step in establishing the connection between microscopic and macroscopic manufacturing levels.
Methods section
Carbon film preparation
GNC a-C (about 100 μm) was prepared on silicon substrates by an unbalanced magnetron sputtering system using Ar gas. The silicon substrates were fixed on a platform 100 mm away from the graphite target (99.99% purity) and rotated at a constant velocity of 3 rpm. The structures of GNC a-C were pruned by H+ ion bombardment, obtained by introducing H2 gas at flow rates of 0, 30, 60, 90 and 120 SCCM. The target power was 0.50 kW and the substrate bias voltage was −700 V (duty cycle of 0.6 and pulse frequency of 60 kHz). The 3 mm SiO2 ball-supported GNC a-C was prepared in pure Ar gas, with the balls rotated at a constant velocity of 5 rpm. All substrates and balls were cleaned with a mixture of acetone and ethanol before deposition.
Microscale graphite mesa preparation
Graphite mesas were fabricated on HOPG (ZYB grade, 12 mm × 12 mm, Bruker) as follows: first, the fresh HOPG surface was coated with a double-layer photoresist of LOR (200 nm) and ZEP (400 nm). Second, graphite patterns were created in the photoresist using electron beam lithography. Subsequently, oxygen plasma was used to remove contaminants from the graphite patterns, and a 200 nm thick Au film was deposited on the patterned surfaces using plasma-enhanced chemical vapor deposition. The Au cap protects the graphite flake when the AFM tip is pressed on its top. Finally, the Au-capped HOPG was subjected to lift-off and reactive ion etching processes36,53 (Supplementary Fig. 2.1) to produce microscale graphite mesas.
Graphite flakes with lateral dimensions of 4 μm × 4 μm, 6 μm × 6 μm, 8 μm × 8 μm and 10 μm × 10 μm were sheared from the mesas according to their self-recovery behavior8,36. Self-recovery is the spontaneous return of the sheared top flake to its initial state due to the natural incommensurate contact between the top and bottom flakes. Because of this self-recovery behavior, the sheared top flake is single-crystalline8,36. Existing studies demonstrate that the contacting surfaces are single-crystalline and incommensurate, which preserves an ultralow friction that is smaller than the self-retraction force arising from surface energy54. The shearing process was controlled by a micromanipulator using a tungsten microtip in contact with the Au cap of the mesa. After the self-recovery shearing test, the top graphite flakes were selected and transferred onto the GNC a-C and a-C surfaces (Supplementary Fig. 2.2).
Microscale friction tests
The friction measurements of the graphite flake/GNC a-C tribo-pairs were conducted on an AFM system in contact mode under various relative humidities (RH 2%, RH 20%, RH 40%, RH 60%, RH 80%) at a temperature of 25 ± 2 °C. The microscale frictional test system was made up of a commercial AFM (NT-MDT), a 100 μm piezoelectric scanning tube, a 100× objective lens with a numerical aperture of 0.7 (Mitutoyo, Japan) and an environmental chamber (Fig. 2a). The relative humidity was controlled by the flow rate of dry N2 gas through deionized water: different flow rates produce different partial pressures of water vapor in the chamber. To obtain a stable humidity environment, the tribo-pair was held at the target relative humidity for 30 min, allowing water films to disperse between the graphite flakes and GNC a-C55,56.
In the friction tests, a top-visual AFM tip (VIT-P/IR; nominal spring constant 50 N/m) was pressed on the Au cap of the graphite flake and slid the flake laterally on the GNC a-C surfaces at a constant speed. All experiments were conducted with a sliding velocity of 2 μm/s in reciprocating mode. For the 4 μm × 4 μm graphite flake, the normal load was varied from 27.97 μN to 55.95 μN, generating a contact pressure range of 1.75 MPa to 3.50 MPa. In the measurements, the normal load was first increased and then decreased, termed the loading and unloading processes, respectively. Lateral force and normal force signals were acquired simultaneously during scanning. Before the tests, the AFM tip was calibrated using the Sader method for normal forces and a maglev calibration method for lateral forces (Supplementary Fig. 3). All the microscale friction tests were repeated at least three times to ensure reproducibility of the results.
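The quoted contact pressures follow directly from dividing the applied load by the flake area (1 μN/μm² = 1 MPa). A one-line check using the values in this paragraph:

```python
import numpy as np

loads_uN = np.array([27.97, 55.95])      # normal load range from the text
area_um2 = 4.0 * 4.0                     # 4 um x 4 um graphite flake

pressure_MPa = loads_uN / area_um2       # uN / um^2 = MPa
print(pressure_MPa)                      # approximately 1.75 and 3.50 MPa
```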
Macroscale friction tests
Graphite, MoS 2 and MoSe 2 were purchased from Nanjing MKNANO Tech. Co., Ltd. All macroscale tribological tests were performed by a ball-on-disk tribometer (CSM instrument, Switzerland) at room temperature. The SiO 2 ball-supported GNC a-C was driven to slide against graphite, MoS 2, MoSe 2, and GNC a-C in a linear reciprocating mode. The applied normal load was set as 0.25 N and the sliding velocity was 2 mm/s with the amplitude of 1 mm. The corresponding contact pressure is about 81 MPa. The relative humidity of frictional chamber was controlled by the flow rate of dry N 2 gas through deionized water, to maintain the relative humidity of 3 ± 3%, 30 ± 3% and 60 ± 3%. At the beginning of each test, zero calibration was conducted automatically. All the macroscale tribological tests were repeated at least three times to ensure the reproducibility of measurement results.
Nanostructure and interfacial characterization of the frictional system
The roughness of all the carbon films was measured by AFM (Cypher S, Oxford Instruments). The structures of the microscale graphite flakes and GNC a-C before and after the frictional tests were characterized by Raman spectroscopy (Horiba Jobin Yvon HR800, 532 nm). The distributions of water on the GNC a-C and a-C surfaces under a RH 60 ± 3% atmosphere were measured by Raman mapping. The Raman spectrometer had a visible 532 nm laser diode and a 300 lines/mm grating that ensured a ~2 cm−1 resolution over a spectral range from 100 to 3500 cm−1, and was coupled with an Olympus confocal microscope with a ×50 objective. In the Raman measurements, the integration time was 0.25–0.50 s for each spectrum, and each scan was repeated 2–3 times. The Raman laser intensity was strictly kept below 0.5 MW·m−2 to reduce its effects on the water distribution and carbon structures. The chemical bond composition of the films was investigated by X-ray photoelectron spectroscopy (XPS, ESCALAB 250Xi; ThermoFisher Scientific) with monochromated Al Kα radiation as the excitation source. Before each measurement, a 0.2 nm thick Au film was deposited on the film surfaces to reduce the charging effect24. The binding-energy scale was calibrated using the C1s peak at 284.6 eV. To obtain the XPS signal intensities, a Shirley background subtraction was performed.
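The Shirley background mentioned here is a standard iterative construction in which the background at each binding energy is proportional to the peak area accumulated at lower binding energies. The sketch below is a generic implementation of that textbook algorithm, applied to a synthetic C1s-like region; it is not the authors' fitting software, and the convergence tolerance and toy spectrum are assumptions.

```python
import numpy as np

def shirley_background(be, intensity, max_iter=50, tol=1e-6):
    """Iterative Shirley background for an XPS region.

    `be` is binding energy in increasing order; the background at each point is
    proportional to the peak area accumulated at lower binding energies, and the
    construction is iterated until it stops changing."""
    y = np.asarray(intensity, dtype=float)
    i_lo, i_hi = y[0], y[-1]             # endpoint intensities of the region
    bg = np.full_like(y, i_lo)

    for _ in range(max_iter):
        peak = y - bg
        # Cumulative trapezoidal area of the background-subtracted signal.
        cum = np.concatenate(
            ([0.0], np.cumsum(0.5 * (peak[1:] + peak[:-1]) * np.diff(be))))
        new_bg = i_lo + (i_hi - i_lo) * cum / cum[-1]
        if np.max(np.abs(new_bg - bg)) < tol:
            bg = new_bg
            break
        bg = new_bg
    return bg

# Toy C1s-like region: a Gaussian peak on a step background plus an offset.
be = np.linspace(282, 292, 500)
spectrum = (np.exp(-((be - 284.6) / 0.6) ** 2)
            + 0.3 / (1 + np.exp(-(be - 284.6) / 0.3)) + 0.05)
corrected = spectrum - shirley_background(be, spectrum)
print(f"residual intensity at the endpoints: {corrected[0]:.3f}, {corrected[-1]:.3f}")
```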
The surface morphologies of the ball-supported GNC a-C and graphite (before and after the frictional tests) were observed by optical microscopy (Olympus BX35) with a ×100 objective. The topographies and sectional profiles of the sliding surfaces were measured with a Zygo NewView laser-interference surface profilometer with a ×50 objective.
To reveal the outermost surface structures, a GNC a-C film about 20 nm thick was deposited on a NaCl substrate under the same conditions as for the Si substrate, followed by NaCl dissolution and drying of the GNC a-C. Lamellar specimens cut from the microscale graphite flake/GNC a-C interfaces were prepared with a dual-beam SEM/FIB system (Helios Nanolab 600). Before FIB cutting, a protective Pt layer was deposited in an ion sputtering system. Thinning of the lamellar specimens to about 100 nm was performed at a voltage of 16 kV and a current of 47 pA, and further thinning to 30-50 nm at a voltage of 5 kV and a current of 17 pA. The structures of all the carbon films and graphite were characterized with a spherical-aberration-corrected TEM (JEOL JEM-ARM200F) at an accelerating voltage of 200 kV (entrance aperture of 2.0 mm and a channel step size of 0.3 eV). These working conditions yield a bright-field (BF) imaging resolution of 0.14 nm and were demonstrated to reduce sample damage and enhance the accuracy of characterization24,33.
Theoretical simulation
The adsorption sites and distribution of water molecules on a-C and GNC a-C substrates (about 54,000 atoms) under different relative humidities (2%, 10%, 20%, 30%, 40%, 50%, 60%, 70% and 80%) were investigated using molecular dynamics (MD) simulations. Here, the a-C and GNC a-C (Supplementary Fig. 15.1) were modeled using an annealing strategy from 4500 K to 300 K and by artificially adding graphene flakes onto a-C surfaces, respectively. Moreover, we mixed N2 and H2O molecules (200–6400 molecules for RH 2–80%) evenly to imitate the relative-humidity environment at constant density (1.29 kg/m³). The simulations were performed for 300 ps at 300 K, controlled using a Nosé-Hoover thermostat57,58, with the bottom ~4 Å layer of the substrate constrained. Optimized carbon structures and stable water-molecule distributions were obtained from these simulations.
Subsequently, a square graphite flake with about 4800 atoms was added on the surfaces obtained by the above simulations and was moved at the velocity of 10 m/s along x direction. To clearly describe the sliding process, the parameters including total friction force, normal and lateral forces between graphite face and substrate (N GF/Sub and L GF/Sub), normal and lateral forces between graphite face and interfacial water cluster (N GF/I-water and L GF/I-water), as well as the interaction forces between graphite edge (or substrate) and exterior water cluster (L GE/E-water or N Sub/E-water) were collected. The sliding simulations for 5 ns were carried out at 300 K through Nosé-Hoover thermostats.
For these simulations, performed in the LAMMPS software59, the TIP4P water model was used, the AIREBO potential60 was used for graphite and a-C, and Lennard-Jones (L-J) potentials were applied for N2 and for the interlayer interactions between water, N2, graphite and a-C.
To further understand the interface interactions, five models including graphene/a-C, graphene/graphene, a-C/H 2 O, graphene/H 2 O and graphene edge/H 2 O contacts were constructed. The models were optimized by standard density functional theory (DFT) simulations in VASP software with the generalized gradient approximation (GGA) including the Perdew-Burke-Ernzerhof (PBE) functional61,62,63. The plane-wave cut off was set at 400 eV and the total energy threshold was 0.1 meV/atom. In these models, bilayer graphene, a-C and water contained 240 carbon atoms, 280 carbon atoms and 10 molecules, respectively.
Based on the optimal adsorption structures from the VASP calculations, the interaction energies were recalculated using the B3LYP-D3/6-311+G(d,p) method in the Gaussian16 software64, and then decomposed using the sobEDAw method in the Multiwfn software65,66. Here, the total interaction energy \(\Delta E_{\mathrm{int}}\) is the sum of the electrostatic \(\Delta E_{\mathrm{els}}\), exchange-repulsion \(\Delta E_{\mathrm{xrep}}\), orbital \(\Delta E_{\mathrm{orb}}\) and dispersion \(\Delta E_{\mathrm{disp}}\) interactions, which can be represented as:
$$\Delta E_{\mathrm{int}}=\Delta E_{\mathrm{els}}+\Delta E_{\mathrm{xrep}}+\Delta E_{\mathrm{orb}}+\Delta E_{\mathrm{disp}}$$
This method can be used for weak interactions, chemical bond interactions, open-shell systems and so on67,68.
Data availability
The Source Data file has been deposited in Figshare; source data are provided with this paper.
Code availability
All codes used in this study are available from the corresponding author (M.M.) upon request.
References
Holmberg, K. & Erdemir, A. Influence of tribology on global energy consumption, costs and emissions. Friction5, 263–284 (2017).
Holmberg, K., Andersson, P. & Erdemir, A. Global energy consumption due to friction in passenger cars. Tribol. Int.47, 221–234 (2012).
Liao, M. et al. UItra-low friction and edge-pinning effect in large-lattice-mismatch van der Waals heterostructures. Nat. Mater.21, 47–53 (2022).
Berman, D., Deshmukh, S. A., Sankaranarayanan, S. K. R. S., Erdemir, A. & Sumant, A. V. Macroscale superlubricity enabled by graphene nanoscroll formation. Science348, 1118–1122 (2015).
Hirano, M. & Shinjo, K. Atomistic locking and friction. Phys. Rev. B41, 11837–11851 (1990).
Dienwiebel, M. et al. Superlubricity of graphite. Phys. Rev. Lett.92, 126101 (2004).
Filippov, A. E., Dienwiebel, M., Frenken, J. W., Klafter, J. & Urbakh, M. Torque and twist against superlubricity. Phys. Rev. Lett.100, 046102 (2008).
Liu, Z. et al. Observation of microscale superlubricity in graphite. Phys. Rev. Lett.108, 205503 (2012).
Feng, X., Kwon, S., Park, J. Y. & Salmeron, M. Superlubric sliding of graphene nanoflakes on graphene. ACS Nano7, 1718–1724 (2013).
Androulidakis, C., Koukaras, E. N., Paterakis, G., Trakakis, G. & Galiotis, C. Tunable macroscale structural superlubricity in two-layer graphene via strain engineering. Nat. Commun.11, 1595 (2020).
Li, H. et al. Superlubricity between MoS 2 monolayers. Adv. Mater.29, 1701474 (2017).
Martin, J. M., Donnet, C., Le Mogne, T. & Epicier, T. Superlubricity of molybdenum disulphide. Phys. Rev. B48, 10583–10586 (1993).
Wang, L. et al. Superlubricity of a graphene/MoS 2 heterostructure: a combined experimental and DFT study. Nanoscale9, 10846–10853 (2017).
Vazirisereshk, M. R. et al. Origin of nanoscale friction contrast between supported graphene, MoS 2, and a graphene/MoS 2 heterostructure. Nano Lett.19, 5496–5505 (2019).
Song, Y. et al. Robust microscale superlubricity in graphite/hexagonal boron nitride layered heterojunctions. Nat. Mater.17, 894–899 (2018).
Mandelli, D., Leven, I., Hod, O. & Urbakh, M. Sliding friction of graphene/hexagonal -boron nitride heterojunctions: a route to robust superlubricity. Sci. Rep.7, 10851 (2017).
Mandelli, D., Ouyang, W., Hod, O. & Urbakh, M. Negative friction coefficients in superlubric graphite-hexagonal boron nitride heterojunctions. Phys. Rev. Lett.122, 076102 (2019).
Macknojia, A. et al. Macroscale superlubricity induced by MXene/MoS 2 nanocomposites on rough steel surfaces under high contact stresses. ACS Nano17, 2421–2430 (2023).
Boidi, G. et al. Solid lubrication performance of hybrid Ti 3 C 2 T x/MoS 2 coatings. Carbon225, 119067 (2024).
Zambrano-Mera, D. F. et al. Solid lubrication performance of sandwich Ti 3 C 2 T x-MoS 2 composite coatings. Appl. Surf. Sci.640, 158295 (2023).
Zhang, R. et al. Superlubricity in centimetres-long double-walled carbon nanotubes under ambient conditions. Nat. Nanotechnol.8, 912–916 (2013).
Hod, O., Meyer, E., Zheng, Q. & Urbakh, M. Structural superlubricity and ultralow friction across the length scales. Nature563, 485–492 (2018).
Frerot, L. et al. From molecular to multiasperity contacts: how roughness bridges the friction scale gap. ACS Nano17, 2205–2211 (2023).
Yang, X., Li, R., Wang, Y. & Zhang, J. Tunable, wide-temperature, and macroscale superlubricity enabled by nanoscale van der waals heterojunction-to-homojunction transformation. Adv. Mater.35, 2303580 (2023).
Erdemir, A., Eryilmaz, O. L. & Fenske, G. Synthesis of diamondlike carbon films with superlow friction and wear properties. J. Vac. Sci. Technol. A18, 1987–1992 (2000).
Chen, X. et al. Evolution of tribo-induced interfacial nanostructures governing superlubricity in a-C:H and a-C:H:Si films. Nat. Commun.8, 1675 (2017).
Wang, C., Yang, S., Wang, Q., Wang, Z. & Zhang, J. Super-low friction and super-elastic hydrogenated carbon films originated from a unique fullerene-like nanostructure. Nanotechnology19, 225709 (2008).
Chen, X. et al. Atomic-scale insights into the interfacial instability of superlubricity in hydrogenated amorphous carbon films. Sci. Adv.6, eaay1272 (2020).
CEN. EN 16798-1: Indoor Environmental Input Parameters for Design and Assessment of Energy Performance of Buildings Addressing Indoor Air Quality, Thermal Environment, Lighting and Acoustics - Module M1-6 (European Committee for Standardization, Brussels, 2019).
Berman, D., Erdemir, A. & Sumant, A. V. Approaches for achieving superlubricity in two-dimensional materials. ACS Nano12, 2122–2137 (2018).
Wang, Y., Gao, K., Zhang, B., Wang, Q. & Zhang, J. Structure effects of sp 2-rich carbon films under super-low friction contact. Carbon137, 49–56 (2018).
Liu, X. et al. A near-frictionless and extremely elastic hydrogenated amorphous carbon film with self-assembled dual nanostructure. Adv. Mater.24, 4614–4617 (2012).
Li, R. et al. Operando formation of Van der Waals heterostructures for achieving macroscale superlubricity on engineering rough and worn surfaces. Adv. Funct. Mater.32, 2111365 (2022).
Berger, S. D., McKenzie, D. R. & Martin, P. J. EELS analysis of vacuum arc-deposited diamond-like films. Philos. Mag. Lett.57, 285–290 (1988).
Casiraghi, C., Ferrari, A. C. & Robertson, J. Raman spectroscopy of hydrogenated amorphous carbons. Phys. Rev. B72, 085401 (2005).
Zheng, Q. et al. Self-retracting motion of graphite microflakes. Phys. Rev. Lett.100, 067205 (2008).
Tang, C. et al. Layer-dependent nanowear of graphene oxide. ACS Nano17, 2497–2505 (2023).
Erdemir, A. & Donnet, C. Tribology of diamond-like carbon films: recent progress and future prospects. J. Phys. D: Appl. Phys.39, R311–R327 (2006).
Cui, L., Lu, Z. & Wang, L. Toward low friction in high vacuum for hydrogenated diamondlike carbon by tailoring sliding interface. ACS Appl. Mater. Interfaces5, 5889–5893 (2013).
Ferrari, A. C. & Basko, D. M. Raman spectroscopy as a versatile tool for studying the properties of graphene. Nat. Nanotechnol.8, 235–246 (2013).
ArticleADSCASPubMedGoogle Scholar
Lucchese, M. M. et al. Quantifying ion-induced defects and Raman relaxation length in graphene. Carbon48, 1592–1597 (2010).
ArticleCASGoogle Scholar
Fontaine, J., Le Mogne, T., Loubet, J. L. & Belin, M. Achieving superlow friction with hydrogenated amorphous carbon: some key requirements. Thin Solid Films482, 99–108 (2005).
ArticleADSCASGoogle Scholar
Li, R., Yang, X., Wang, Y., Zhang, J. & Li, J. Graphitic encapsulation and electronic shielding of metal nanoparticles to achieve meta-carbon interfacial superlubricity. ACS Appl. Mater. Interfaces13, 3397–3407 (2021).
ArticleCASPubMedGoogle Scholar
Song, H. et al. Perspectives of friction mechanism of a-C:H film in vacuum concerning the onion-like carbon transformation at the sliding interface. RSC Adv.5, 8904–8911 (2015).
ArticleADSCASGoogle Scholar
Huang, S., Mutyala, K. C., Sumant, A. V. & Mochalin, V. N. Achieving superlubricity with 2D transition metal carbides (MXenes) and MXene/graphene coatings. Mater. Today Adv.9, 100133 (2021).
ArticleCASGoogle Scholar
Zhu, D. et al. Robust macroscale superlubricity in humid air via designing amorphous DLC crystalline. Adv. Funct. Mater. 34, 2316036 (2024).
Feng, W. et al. Physical state of water controls friction of gabbro-built faults. Nat. Commun.14, 4612 (2023).
ArticleADSCASPubMedPubMed CentralGoogle Scholar
Seki, T. et al. The bending mode of water: a powerful probe for hydrogen bond structure of aqueous systems. J. Phys. Chem. Lett.11, 8459–8469 (2020).
ArticleCASPubMedPubMed CentralGoogle Scholar
Bocquet, L. & Barrat, J.-L. Flow boundary conditions from nano- to micro-scales. Soft Matter3, 685–693 (2007).
ArticleADSCASPubMedGoogle Scholar
Li, H., Xu, Z., Ma, C. & Ma, M. Translucency and negative temperature-dependence for the slip length of water on graphene. Nanoscale14, 14636–14644 (2022).
ArticleCASPubMedGoogle Scholar
Fellers, R. S., Leforestier, C., Braly, L. B., Brown, M. G. & Saykally, R. J. Spectroscopic determination of the water pair potential. Science284, 945–948 (1999).
ArticleADSCASPubMedGoogle Scholar
Tang, B. et al. Nanoscopic humidity-dependent adhesion behaviors of 2D materials. Appl. Surf. Sci.572, 151394 (2022).
ArticleCASGoogle Scholar
Lu, X., Yu, M., Huang, H. & Ruoff, R. S. Tailoring graphite with the goal of achieving single sheets. Nanotechnology10, 269–272 (1999).
ArticleADSCASGoogle Scholar
Qu, C. et al. Origin of friction in superlubric graphite contacts. Phys. Rev. Lett.125, 126102 (2020).
ArticleADSCASPubMedGoogle Scholar
Yan, C., Chen, H.-Y., Lai, P.-Y. & Tong, P. Statistical laws of stick-slip friction at mesoscale. Nat. Commun.14, 6221 (2023).
ArticleADSCASPubMedPubMed CentralGoogle Scholar
Ding, H. et al. Chemical scissor-mediated structural editing of layered transition metal carbides. Science379, 1130–1135 (2023).
ArticleADSCASPubMedGoogle Scholar
Nosé, S. A unified formulation of the constant temperature molecular dynamics methods. J. Chem. Phys.81, 511–519 (1984).
ArticleADSGoogle Scholar
Hoover, W. G. Canonical dynamics: Equilibrium phase-space distributions. Phys. Rev. A31, 1695–1697 (1985).
ArticleADSCASGoogle Scholar
Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 117 (1995).
Stuart, S. J., Tutein, A. B. & Harrison, J. A. A reactive potential for hydrocarbons with intermolecular interactions. J. Chem. Phys.112, 6472–6486 (2000).
ArticleADSCASGoogle Scholar
Kresse, G. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave Basis set. Phys. Rev. B54, 11169–11186 (1996).
ArticleADSCASGoogle Scholar
Kresse, G. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B59, 1758–1775 (1999).
ArticleADSCASGoogle Scholar
Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett.77, 3865–3868 (1996).
ArticleADSCASPubMedGoogle Scholar
Frisch, M. J. et al. Gaussian 16, Revision A.03 (Gaussian, Inc., 2016).
Lu, T. & Chen, Q. Simple, efficient, and universal energy decomposition analysis method based on dispersion-corrected density functional theory. J. Phys. Chem. Lett.127, 7023–7035 (2023).
CASGoogle Scholar
Lu, T. & Chen, F. Multiwfn: a multifunctional wavefunction analyzer. J. Comput. Chem.33, 580–592 (2011).
ArticlePubMedGoogle Scholar
Yang, E. et al. A gatekeeper residue controls aromatic acceptor specificity of the PHB-type UbiA prenyltransferases. ACS Catal.13, 13717–13728 (2023).
ArticleCASGoogle Scholar
Zhao, M., Yuan, H. & Zhang, J. Origin of ligand and acid effects on the Pd-catalyzed regiodivergent coupling reaction of indazoles and isoprene: a DFT study. J. Org. Chem.88, 16132–16143 (2023).
ArticleCASPubMedGoogle Scholar
Li, R., Yang, X., Li, J., Wang, Y. & Ma, M. Source data of figures in the article “Macroscale, humidity-insensitive, and stable structural superlubricity achieved with hydrogen-free graphene nanoflakes”. figshare, (2024).
Download references
Acknowledgements
M.M. acknowledges the financial support from MOST (2023YFB4603601), NSFC (12372112) and Shenzhen Fundamental Research Key Project (JCYJ20200109150608043). We acknowledge the support from Prof. Yunxia Wang of Lanzhou Institute of Chemical Physics, Chinese Academy of Science for her help with Raman characterization.
Author information
Authors and Affiliations
Institute of Superlubricity Technology, Research Institute of Tsinghua University in Shenzhen, Shenzhen, 518057, China
Ruiyun Li & Ming Ma
State Key Laboratory of Solid Lubrication, Lanzhou Institute of Chemical Physics, Chinese Academy of Science, Lanzhou, 730000, China
Xing Yang & Yongfu Wang
State Key Laboratory of Tribology in Advanced Equipment, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, China
Jiacheng Li & Ming Ma
Authors
Ruiyun Li, Xing Yang, Jiacheng Li, Yongfu Wang, Ming Ma
Contributions
R.L. and X.Y. contributed equally to this work. Y.W. and M.M. initiated and supervised the project. R.L. and J.L. performed the research and analyzed the data. X.Y. designed and performed the simulations. The manuscript was written by R.L. and revised by Y.W. and M.M. All authors participated in the interpretation and discussion of the manuscript.
Corresponding authors
Correspondence to Yongfu Wang or Ming Ma.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks Chen Xiao and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Source data
Transparent Peer Review file
Source Data
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit
Reprints and permissions
About this article
Cite this article
Li, R., Yang, X., Li, J. et al. Macroscale, humidity-insensitive, and stable structural superlubricity achieved with hydrogen-free graphene nanoflakes. Nat. Commun. 15, 9197 (2024).
Received: 21 March 2024
Accepted: 14 October 2024
Published: 24 October 2024
DOI:
Subjects
Materials for devices
Mechanical engineering
Structural properties
Fig. 1: Structural characterization of graphene nanoflake-covered amorphous carbon (GNC a-C).
Fig. 2: Friction performances of microscale graphite flake on GNC a-C.
Fig. 3: Environmental stability for the structural superlubricity of microscale graphite flake on GNC a-C surfaces.
Fig. 4: Microscale-to-macroscale superlubricity.
Fig. 5: Raman analysis of water distribution on carbon films.
Fig. 6: MD simulations of water molecule distribution under wide-humidity conditions.
Fig. 7: Full atomic molecular simulations of sliding friction under wide-humidity conditions.
|
34
|
Derivation of Basquin Constants from S-N curve
From a given material’s fatigue strength S-N curve, you can derive the Basquin equation constants, or let the program calculate the Basquin constants by specifying the number of data points on the S-N curve to include in the curve-fitting calculations.
Some materials available from the SOLIDWORKS Material database and the SOLIDWORKS Material Web Portal include fatigue S-N curve data. For example, the S-N curve of the material Ti-6Al-4V (Metal_Ti Alpha-Beta Alloy) downloaded from the SOLIDWORKS Material Web Portal (material database format .sldmat) is shown on a log S - log N scale.
The numerical values of the first four S-N data points are given in the table.
Basquin's equation is a power-law relationship that describes the linear relationship, on a log-log plot, between the applied cyclic stress (S) on the y-axis and the number of cycles to failure (N) on the x-axis.
It can be defined as:
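A common way to write Basquin's relation that matches the definitions given below (B the stress at one cycle, m the slope constant of the log-log curve) is:

$$ S_r = B \cdot N^{-1/m}, \qquad \text{equivalently} \qquad N \cdot S_r^{\,m} = B^{\,m}, $$

so that $\log S_r = \log B - \tfrac{1}{m}\log N$ plots as a straight line on the log S - log N axes.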
where N is the number of cycles to failure (usually more than 10^4), Sr is the reference value of fatigue strength (in Simulation this is the stress range, which is taken as 2 × the alternating stress), m is the slope constant of the log S - log N fatigue strength curve, and B is the value of the stress at one cycle.
To calculate the slope m of the Basquin equation, solve the system of equations:
To solve for m, take the log of both expressions:
By substituting the first two S-N data points from the table above, calculate first m and then B:
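As a sketch of this step, under the same form of the equation and leaving the table's numerical values aside (they are not reproduced here), write the relation for two S-N data points $(N_1, S_{r1})$ and $(N_2, S_{r2})$ and take logarithms:

$$ S_{r1} = B\,N_1^{-1/m}, \quad S_{r2} = B\,N_2^{-1/m} \;\;\Longrightarrow\;\; \log S_{r1} = \log B - \tfrac{1}{m}\log N_1, \quad \log S_{r2} = \log B - \tfrac{1}{m}\log N_2, $$

from which

$$ m = \frac{\log N_2 - \log N_1}{\log S_{r1} - \log S_{r2}}, \qquad B = S_{r1}\, N_1^{1/m}. $$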
For the constant B, the program considers the stress range value (from the maximum cyclic stress to the minimum cyclic stress).
If the stress values of the S-N curve are given as alternating stresses (which is the common practice), multiply these stresses by 2 to calculate the constant B (stress range = 2 × alternating stress, assuming a zero mean stress and full reversal of the cyclic load).
If the S-N curve data are given in stress range values, apply them directly in the equation for estimating the constant B.
For the calculation of the slope constant m, multiplying the stresses by 2 does not alter the slope value.
You can also find plots of fatigue strength curves in codes such as the Eurocode 9: Design of aluminum structures: Structures susceptible to fatigue, Ref. EN 1999-1-3:2007/A1.
Example of Fatigue Strength S-N Curve
In Eurocode 9 you can find numerical values for the constant slope m for different detail categories, and then calculate B.
For example, from Ref. Table J.2 - Detail categories for plain members, EN 1999-1-3:2007/A1, for a simple plate with holes, the stress range Δσ = 100 MPa at N = 2×10^6 cycles and the slope m = 7; B is then:
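With the Basquin form sketched above (B equal to the stress at one cycle), this works out to roughly:

$$ B = \Delta\sigma \cdot N^{1/m} = 100\ \text{MPa} \times \left(2 \times 10^{6}\right)^{1/7} \approx 795\ \text{MPa}. $$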
To let the program perform the curve-fitting on a given set of S-N data to a straight line, select Estimate Basquin constants from S-N curve. In this case, make sure that Interpolate is set to Log-log and select the last S-N data point to consider for the curve-fitting in Consider the cut-off point for the S-N curve at row.
The two graphs show the superposition of an original S-N curve (red line) with the Basquin equation curve-fitting line (green line) for 2 (a) and 22 (b) S-N data points respectively. It is recommended to check the quality of the Basquin curve-fitting before you proceed with the analysis. The quality of the curve-fitting line in approximating the original S-N curve is best for the portion of the S-N curve up to the cut-off point.
(a) Basquin curve-fitting with 2 S-N data points (green line). (b) Basquin curve-fitting with 22 S-N data points (green line).
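The topic does not spell out the curve-fitting procedure itself. Purely as an illustrative sketch (not the software's actual implementation), a least-squares fit of the Basquin form in log-log space could look like the following; the S-N data in the script are hypothetical:

```python
import numpy as np

def fit_basquin(cycles, stress_range):
    """Least-squares fit of S_r = B * N**(-1/m) to S-N data in log-log space.

    cycles       : cycles-to-failure values N (e.g. rows up to the cut-off point)
    stress_range : stress range S_r at each N (2 x alternating stress)
    Returns the Basquin constants (m, B).
    """
    logN = np.log10(np.asarray(cycles, dtype=float))
    logS = np.log10(np.asarray(stress_range, dtype=float))
    # Fit the straight line log(S_r) = log(B) - (1/m) * log(N)
    slope, intercept = np.polyfit(logN, logS, 1)
    m = -1.0 / slope          # slope constant of the S-N curve
    B = 10.0 ** intercept     # stress at one cycle (N = 1)
    return m, B

# Hypothetical S-N data (N, stress range in MPa), for illustration only.
N = [1.0e4, 1.0e5, 1.0e6, 1.0e7]
Sr = [900.0, 650.0, 470.0, 340.0]
m, B = fit_basquin(N, Sr)
print(f"m = {m:.2f}, B = {B:.0f} MPa")
```

Checking the quality of the fit against the original S-N points up to the chosen cut-off row, as recommended above, applies equally to a fit produced this way.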
Fatigue Strength S-N Curve
Reference: Figure J.1 , EN 1999-1-3:2007 : Annex J, Eurocode 9: Design of aluminum structures - Structures susceptible to fatigue
|
35
|
Bubbles: Spheres, Volume I: Microspherology - The Brooklyn Rail
===============
Art Books, February 2012
Bubbles: Spheres, Volume I: Microspherology
By John Ganz
Peter Sloterdijk
Bubbles: Spheres, Volume I: Microspherology
(Semiotext(e) / Foreign Agents, 2011)
It could almost be a proverb: The difference between the United States and Europe is that in Europe a philosopher can have a television show. The German philosopher Peter Sloterdijk hosts just such a weekly talk show. It’s also sadly hard to imagine that in this country—despite many virtues over that cultural rival of ours—we could have a real, heavyweight public intellectual, let alone one whose provocations would lead to a national debate on the meaning of the state and democracy. In Germany and France, Sloterdijk’s 1999 lecture “Rules for the Human Zoo”—with its brilliant, biologically-tinged take on humanism—on the framing of philosophy and literature since Plato as a technique for “taming the human beast,” and on the production of pacific citizens, caused a major controversy that was extensively covered in the media. The lecture—which was also published in the newspaper Die Zeit— brought a cry from the critical theory establishment that Sloterdijk had betrayed his leftist roots and become a radical neoconservative, with the inevitable insinuations that his apparent “hatred for democracy” was really treading on more sinister, fascist grounds.
But Sloterdijk’s point was that humanity has been “abandoned by the wise,” that today there remain no humanists who serve to transmit the civilizing literatures of the past. It is a profound point and it should have special resonance in the United States, where Sloterdijk’s work has been relegated to relative obscurity. Only his Critique of Cynical Reason, which diagnoses contemporary culture as being sickly obsessed with the notion of all-pervading self-interest, was something of a 1980s academic cause célèbre stateside. With Semiotext(e)’s recent translation of the first volume of his magnum opus Spheres trilogy, Bubbles: Spheres I (it was first published in Germany in 1998), Sloterdijk’s name in this country ought to become better known.
Sloterdijk’s concern in Spheres is the same as every German philosopher since Kant: What is humanity in the condition of modernity? That is to say: What is humanity without the all-encompassing presence of religion, whose persistence in the modern world is either ineffectually subcultural or violently retrograde, and, in any case, is clearly incapable of offering a satisfying universal? What is humanity without the predictable cycles of the quasi-natural, communal lifeworld, and without the unquestioned legitimacy of the social, spiritual, and aesthetic hierarchies that once regulated that lifeworld? And how should we best offer solace to the lonely, confused, and rootless subject that emerges with the triumph of mass society, capitalism, scientism, technology, the destruction of traditional life, and the disenchantment of the world? (Just to make it sunnier, we can now also add to the list impending ecological crisis.) Sloterdijk describes humanity at the end of this process: “[d]isappointed, cold, and abandoned, they wrap themselves in surrogates of older conceptions of the world, as long as these still hold a trace of the warmth of old human illusions of encompassedness.”
For Sloterdijk, this crisis of modernity and post-enlightenment sketched above is a spherological crisis: it concerns the gradual destruction of those protective—or immunological, to use Sloterdijk’s terminology—membranes that mankind dwelled in for millennia, the bursting of the shared spaces that human beings had cultivated to provide meaning, metaphysical comfort, and shelter from the inhuman exterior. This metaphor of the sphere—the preservation, growth, and development of which can be thought of as the sole preoccupation of what we call culture—shares with Sloterdijk’s style in general the quality of being astonishing, strange, and novel, as well as being, at the same time, familiar, intuitive, and even self-evident.
Philosophers are often harsh judges of human nature, and the concept of Spheres is unusually generous, kind, and good-natured: it construes human life through an effort to create conditions of warmth, closeness, and security. Sloterdijk’s patience and his lack of a concern for purity allows him a great deal of freedom and variety in terms of his source material. As learned as he is philosophical, Sloterdijk seems equally comfortable drawing on medieval theology, media theory, sociology, theoretical biology, antique numismatics, psychoanalysis, Roman superstitions and domestic cults, and Buddhist sculpture. And that is only a partial catalogue. The result is that reading Bubbles: Spheres I can at first feel a little like being locked inside a cabinet of curiosities, or like reading a book from the Renaissance, when knowledge wasn’t yet splintered into hundreds of specialist fields, when the universe was a vast system of analogies, and when books on medicine would not be considered complete if they didn’t include extended meditations on alchemy, astrology, demonology, the nature of the soul, and the meaning of the holy trinity. Hannah Arendt once wrote something to the effect of, “Schopenhauer was a charlatan who wrote like a philosopher, and Nietzsche was a philosopher who wrote like a charlatan.” The latter description could equally be applied to Sloterdijk.
Once the dazzling effect of Sloterdijk’s erudition wears off and the arguments of the book come into clearer focus, Bubbles’s place in the entire spherological system emerges. The first volume spells out the most intimate type of sphere—the microsphere or bubble—the original form of being-in-spheres. As Sloterdijk somewhat opaquely puts it, bubbles “constitute the intimate forms of the rounded being-in form and the basic molecule of the strong relationship.” Put in terms of what one might call “ordinary philosophical vocabulary,” Bubbles comprises Sloterdijk’s theory of human subjectivity and the anthropological ground that his theoretical edifice will rest upon. In an attempt to move past the legacy of Heidegger—perhaps Sloterdijk’s main foil and point of reference—Sloterdijk thinks that the traditional philosophical account of the subject—the self-contained, rational, alternately contemplative, and emotional “I” that can either observe or decide to act upon an exterior Nature—is a woefully inadequate description of the human condition that reflects the self-image of where humanity has arrived historically rather than the eternal essence that it purports itself to be. Sloterdijk believes that the modern, existentialist heroic myth of the isolated individual, fighting for its own place and “suspended in nothingness,” obscures more than it reveals about human existence, and cannot offer a radical interpretation of the meaning of human life.
For Sloterdijk, the human subject is always in—at the very least—a dyadic microsphere, with another being that animates it: “Only the ideologia perennis of the mainstream of individualistic abstraction speaks of the unaccompanied single person…. ‘[H]uman existing’ is thus no longer to be understood as the solitary individual standing out into the indeterminate openness.” Instead, “existence includes the presence of a pre-objective something floating around me; its purpose is to let me be and support me.” According to Sloterdijk, “people are ecstatic, as Heidegger says, but not because they are contained in nothingness, but rather in the souls of others, or in the field of the soul of others, and vice versa.”
Probably the most fundamental microsphere that underlies the investigation in Bubbles is the fetus in the mother’s womb. The centrality of this theme allows Sloterdijk to posit a fundamental state for the formation of the human soul that predates any kind of conscious self in an animating and immunizing sphere. To this end, Sloterdijk crafts absolutely beautiful passages about sound coming through the medium of the womb that extends to the songs of the nursery to form an original musicality of the human soul. Sloterdijk presents the womb-state as an original type of human ecstasy that is at the root of subsequent religious, erotic, communal, and political sphere formations.
The centrality of the womb for Sloterdijk also hints at a genetic principle of ever-expanding sphere formation: “All amniotic sacs, organic models of autogenous vessels, live towards their bursting; with the turbulent waters of birth, every life is washed up on the coast of harder facts. Those who reach it can use those facts to explain what drives the intimate, all too intimate bubbles to failure and forces their inhabitant into transformations.” In other words, bubbles burst, we are born—biologically and ontologically—thrown out of our intimate spheres, and we are ever set about forming new ones.
But what does the theory of the microsphere provide other than an intellectual high? It would be supposing a bit much that something that is frankly so odd could quickly enter into mainstream discourse. But I believe Sloterdijk successfully puts to rest the notion that we are essentially isolated beings in a field of meaningless objects, and puts in its place a way to conceive of human existence as incumbent upon highly convoluted and delicate systems of augmentation, nurturing, and growth. In the process, Sloterdijk is able to find new meaning in the cast aside achievements of past culture. This is particularly true in the case of archaic mysticism and theology, which Sloterdijk does not treat as the ideological relics of backward societies, but rather as containing subtle lessons on the nature of human solidarity and intimacy. And he does this without calling for the uncritical readoption of a pre-modern religiosity or by succumbing to tasteless, New Age pseudo-spirituality (some puzzling words of admiration for the deplorable mountebank Osho in one interview notwithstanding), but by permitting the spirit of the past to breathe into and reanimate the present.
The language-game of spheres leads to an ecological understanding of culture; a term whose etymology in Latin denotes the care of plants and the tilling of the earth. In that light, those involved in humanistic endeavors should concern themselves with the preservation and cultivation of the atmospheres that permit human beings to flourish. It’s easy to see how this could lead quickly into a belief in the necessity of mindless, cloying communal life, or reactionary conservative politics, its ugly political correlate. The other risk with all these horticultural metaphors—one of Sloterdijk’s terms is “anthropogenic hothouse”—is that they could also lead to a philosophical outlook that might tend to replace the inquiry into the being of the human animal with the being of the human vegetable. But there are two volumes yet to be translated, so it remains to be seen for the non-German-speaking English reader how Sloterdijk deals with these problems.
One might wonder also if the sphere as a figure of thought is not a little too good. Sloterdijk writes that it’s characteristic of the old philosophical systems—as the metaphysical correlates of states and empires—to attempt to pull everything into their purview and “round off” the world. Although he expresses doubt about the ability of philosophy to provide such all-encompassing universals in the contemporary period, the relentless certainty with which Sloterdijk deploys his thought might be accused of sharing the same megalomaniac delusions of grandeur. The sphere starts to become claustrophobic at times, and it can become exhausting to think along with Sloterdijk as one’s imagination turns either into an ever-expanding amoeba inexorably sucking everything inside, or a foaming sea of bubbles.
But trying to keep pace with Sloterdijk’s intellectual athleticism is—on the whole—invigorating. This touches on a theme that’s been dealt with in his recent work: the idea of philosophy as a “spiritual exercise,” an idea, which he has taken up from the French historian of ancient philosophy Pierre Hadot. (Hadot was also a major influence on Michel Foucault’s late work about the care of the self.) Rather than taking philosophy as a purely theoretical enterprise concerned with developing a disinterested, complete picture of the world, this conception treats it as a therapeutic method, a way in which to affect change in oneself. As a very ancient technique for the support of human life, it’s unclear whether philosophy can compete with the rapid proliferation of new technologies of human augmentation. If philosophy has a place in this world, it looks like this.
John Ganz
|
36
|
Nov 18, 2014
Jerome v. Clements Motor Sales Ltd., 1958 CanLII 90 (ON CA)
by K. Asnani — Western University's Law Students' Association
Facts: A seller agreed to sell a car to the buyer in exchange for 2 cars. He also agreed to do repairs, and the buyer would keep 1 trade-in car until the repairs were made. The seller had completed some of the repairs, but not all, and a fire destroyed the car.
Issue: did the risk of loss pass to the buyer?
Held: the risk of loss remained with the seller
Ratio: Risk passes with property, and since property had not passed, neither did risk
|
37
|
winter semester 2019/20, Claudia Scheimbauer, Eilind Karlsson
Algebraic Topology – Exercise 13, Sketch of solutions

(1) Compute the homology of $\mathbb{R}^3 \setminus A$, where $A$ is the upper hemisphere of the unit sphere $S^2$ in $\mathbb{R}^3$.
Solution: Note that $A$ deformation retracts to $\{x\}$ for any point $x \in A$, so their complements in $\mathbb{R}^3$ are homotopy equivalent. (Why?) By homotopy invariance of homology, we need to understand the homology groups of $\mathbb{R}^3 \setminus \{x\}$. Removing one point from $\mathbb{R}^3$ gives a space homotopy equivalent to $S^2$, so we get
$$H_n(\mathbb{R}^3 \setminus A) \cong H_n(S^2) = \begin{cases} \mathbb{Z}, & n = 0, 2, \\ 0, & \text{otherwise.} \end{cases}$$
(2) Compute the reduced homology of the Klein bottle using the given cover.
Solution: Similarly to the Mayer-Vietoris sequence we have seen in class, one can obtain a reduced version of the Mayer-Vietoris sequence. In more detail, the long exact sequence follows from a short exact sequence of certain chain complexes (cf. the proof), so we only need to check that the analogous chain complexes for reduced homology, given by extending the chain complexes by a copy of $\mathbb{Z}$ in degree $-1$ (cf. Exercise 4 on Sheet 12), also form a short exact sequence. Do this! It follows that we have a long exact sequence of the form
$$0 \leftarrow \tilde{H}_0(K) \leftarrow \tilde{H}_0(X_1) \oplus \tilde{H}_0(X_2) \leftarrow \tilde{H}_0(X_1 \cap X_2) \leftarrow \tilde{H}_1(K) \leftarrow \cdots \qquad (1)$$
In this exercise both $X_1$ and $X_2$ are Möbius bands, and the same is true for $X_3 = X_1 \cap X_2$. We have seen in class that a Möbius band deformation retracts to $S^1$ (recall the argument), so for $i \in \{1, 2, 3\}$ we have that
$$\tilde{H}_n(X_i) \cong \tilde{H}_n(S^1) = \begin{cases} \mathbb{Z}, & n = 1, \\ 0, & \text{otherwise.} \end{cases}$$
First we note that since $\tilde{H}_0(X_1) \oplus \tilde{H}_0(X_2) = 0$, by exactness of (1) we have that $\tilde{H}_0(K) = 0$. The same argument also ensures that $\tilde{H}_n(K) = 0$ when $n > 2$. Substituting the known homology groups of the $X_i$ into the long exact sequence for $n = 1, 2$ we have the exact sequence
$$0 \leftarrow \tilde{H}_1(K) \xleftarrow{\;g\;} \tilde{H}_1(X_1) \oplus \tilde{H}_1(X_2) \cong \mathbb{Z} \oplus \mathbb{Z} \xleftarrow{\tilde{H}_1(\iota_1, \iota_2)} \tilde{H}_1(X_1 \cap X_2) \cong \mathbb{Z} \leftarrow \tilde{H}_2(K) \leftarrow 0 \oplus 0. \qquad (2)$$
We need to understand the maps in the long exact sequence to find the unknown homology groups. We have inclusions $\iota_i : X_1 \cap X_2 \hookrightarrow X_i$. Consider a loop in $X_1 \cap X_2$ which is a generator of its first homology group. Under the inclusion into $X_i$ this is sent to a loop running twice along a generator of the first homology of $X_i$, so on the level of homology groups we have $\tilde{H}_1(\iota_1, \iota_2)(1) = (2, 2)$.¹ Hence the kernel of this map is $0$, so the map $\tilde{H}_2(K) \to \mathbb{Z}$ in the long exact sequence (2) is the zero map; but since it is also injective by exactness, we find that $H_2(K) \cong \tilde{H}_2(K) = 0$.
Exactness of (2) also tells us that $\ker(g) = \operatorname{im}(\tilde{H}_1(\iota_1, \iota_2)) = \mathbb{Z}(2,2)$. Note that $\mathbb{Z}(2,2) \neq 2\mathbb{Z} \oplus 2\mathbb{Z}$! In addition, $\operatorname{im}(g) = \ker(0) = \tilde{H}_1(K)$. Hence we find that
$$\tilde{H}_1(K) \cong \big(\mathbb{Z}(1,0) \oplus \mathbb{Z}(0,1)\big)/\mathbb{Z}(2,2) \cong \big(\mathbb{Z}(0,1) \oplus \mathbb{Z}(1,1)\big)/2\mathbb{Z}(1,1) \cong \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}.$$
¹ Depending on how you choose $X_i \simeq S^1$ you can also run along $X_i$ in the opposite direction and hence map $1$ to e.g. $(2,-2)$, but this does not affect the computation, as long as you are consistent with your choices.
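A brief justification of the change of basis used in the last line: the vectors $(1,1)$ and $(0,1)$ form a $\mathbb{Z}$-basis of $\mathbb{Z}^2$ (the corresponding matrix has determinant $\pm 1$), and in this basis the relation $(2,2) = 2\cdot(1,1)$ only affects the $(1,1)$-summand, so
$$\big(\mathbb{Z}(0,1) \oplus \mathbb{Z}(1,1)\big)/2\mathbb{Z}(1,1) \cong \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}.$$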
Remark. The fact that $H_2(K) \cong \tilde{H}_2(K) = 0$ implies that $K$ is a "non-orientable surface". We briefly talked about orientability in connection with triangulations of surfaces. You may like to think about (or look up) the connection if you are interested.
(3) Use the Mayer-Vietoris Theorem to compute $H_*(F_g)$.
Solution: Choose X1 and X2 as in the Figures below, i.e. X1 “ Fgztsmall disk in the center of Fgu and X2 “ slightly bigger disk, chosen such that it covers the disk that is removed in X1.
X1 X2 First we saw in class (recall the argument for this!) that X1 deformation retracts onto the boundary of Fg which is a wedge sum of circles, i.e. X1 » Ž2g i“1 S1. Secondly, we have that X2 – D2 » t˚u and X1 X X2 is the complement of a disk in another disk, which deformation retracts to S1 (Why?). Thus, since homology is homotopy invariant, HnpX1q – $ ’ & ’ % Z, n “ 0, Z2g, n “ 1, 0, else; HnpX2q – # Z, n “ 0, 0, else; HnpX1 X X2q – # Z, n “ 0, 1, 0, else.
We use the Mayer-Vietoris sequence for the cover of F_g given by X_1 and X_2.
As in the previous exercise, for n > 2 the exact sequence reads
    0 ≅ H_{n−1}(X_1 ∩ X_2) ← H_n(F_g) ← H_n(X_1) ⊕ H_n(X_2) ≅ 0 ⊕ 0,
so by exactness, for n > 2, we have H_n(F_g) ≅ 0.
¹ Depending on how you choose the identification X_i ≃ S^1, you can also run along X_i in the opposite direction and hence map 1 to e.g. (2, −2), but this does not affect the computation, as long as you are consistent with your choices.
To compute H_2(F_g), consider the following terms in the Mayer-Vietoris sequence:
    ⋯ ← H_1(F_g) ← H_1(X_1) ⊕ H_1(X_2) ←^{H_1(ι_1)} H_1(X_1 ∩ X_2) ← H_2(F_g) ← 0 ⊕ 0 ← ⋯
First, note that by exactness, H_2(F_g) injects into H_1(X_1 ∩ X_2). To compute its image, we compute the kernel of H_1(ι_1).
Since H_1(X_2) ≅ 0, the map H_1(ι_1) reduces to a linear map ῑ : H_1(X_1 ∩ X_2) ≅ Z → Z^{2g} ≅ H_1(X_1), induced from including a loop close to the edge into X_1. On homology, this gives ῑ(1) = [a_g, b_g] ⋯ [a_1, b_1] = 0, where the last equality follows from the fact that H_1 is abelian.
Thus, from the long exact sequence we have that H_2(F_g) ≅ H_1(X_1 ∩ X_2) ≅ Z.
Continuing the long exact sequence,
    ⋯ ← H_0(X_1) ⊕ H_0(X_2) ←^{H_0(ι_1,ι_2)} H_0(X_1 ∩ X_2) ← H_1(F_g) ← H_1(X_1) ⊕ H_1(X_2) ←^0 ⋯
Including X_1 ∩ X_2 ≃ S^1 into X_1 and X_2 induces the map H_0(ι_1, ι_2)([1]) = ([1], [1]) on the zeroth homology (Why?). In particular this is an injective map with trivial kernel, so by exactness we deduce that H_1(F_g) ≅ Z^{2g}.
Finally, we saw in class that H_0 of a topological space simply counts connected components, of which F_g only has one. Hence, H_0(F_g) = Z. Summarizing, we computed
    H_*(F_g) = Z for ∗ = 0, 2;  Z^{2g} for ∗ = 1;  0 for ∗ ≥ 3.
(4) Compute the homology groups of the torus T. Hint: Use Mayer-Vietoris, for example with two overlapping cylinders as indicated in the figure below.
Solution: One can simply plug g = 1 into the results of the previous exercise and we are done. However, practice makes perfect, so here we go: For the given cover we have that A and B are homeomorphic to S^1 times some interval, such that the two cylinders slightly overlap. Their intersection A ∩ B is homeomorphic to the disjoint union of two copies of S^1 times a small interval. All intervals could be open or closed, but let us choose them to be closed. Hence we already know all of their homology groups, since S^1 × [a, b] deformation retracts onto S^1 (Why?), and we saw in class that the homology of a disjoint union is the direct sum of the homologies. Concretely,
    H_n(A) = H_n(B) ≅ Z for n = 0, 1, and 0 else;
    H_n(A ∩ B) ≅ Z ⊕ Z for n = 0, 1, and 0 else.
As in the previous exercises we find that H_n(T) = 0 for n > 2 (you could have also used a dimension argument here). Thus, the interesting calculations concern n = 0, 1, 2. First we consider the following part of the Mayer-Vietoris sequence:
    ⋯ ← H_1(T) ← H_1(A) ⊕ H_1(B) ≅ Z ⊕ Z ←^{H_1(i,j)} H_1(A ∩ B) ≅ Z ⊕ Z ←^r H_2(T) ← H_2(A) ⊕ H_2(B) ≅ 0 ⊕ 0 ← ⋯
We now start by considering the map induced by the inclusions i and j of A ∩ B into A and B, respectively. Since A ∩ B is a disjoint union of two cylinders, it has two generators, one for each cylinder. Denote these two loops by α and β. Similarly, we have one generator each for H_1(A) and H_1(B) (draw this!). We choose the directions of the loops in the following way.
(What does this mean in the picture?) We view them in H_1(A) ⊕ H_1(B) by concatenating with the inclusion and denote them by the generators (1, 0) and (0, 1) in Z ⊕ Z, respectively.
Again concatenating with the inclusion into H_1(A) ⊕ H_1(B), we have that
    H_1(i)(α) = (1, 0),  H_1(j)(α) = (0, 1),  H_1(i)(β) = (1, 0),  H_1(j)(β) = (0, 1).
This depends on the choices of directions of the two loops, and one has to be consistent in these choices. With other choices, signs will appear. The computation on homology does not depend on these choices.
Note that with our choices the two generators α and β are mapped to the same generating circle. Then we have that H_1(i, j)(α) = (1, 1) = H_1(i, j)(β), and hence H_1(i, j)(k·α + l·β) = (k+l, k+l). Hence, we have that im(H_1(i, j)) = Z(1, 1) ≅ Z and ker(H_1(i, j)) = Z(α − β) ≅ Z. Another way of understanding this map is to note that H_1(i, j) corresponds to the matrix (1 1; 1 1) : Z ⊕ Z → Z ⊕ Z, all of whose entries are 1.
This gives us enough information to calculate H_2(T). First, since the part of the long exact sequence above starts with a trivial group, we know that r is injective. Exactness gives us im(r) = ker(H_1(i, j)) ≅ Z, so it follows that H_2(T) ≅ Z.
To calculate H_0(T) and H_1(T) we need further pieces of the long exact sequence. To simplify calculations we will work with reduced homology (which only differs from ordinary homology in the zeroth degree). Thus, we consider
    0 ← H̃_0(T) ← H̃_0(A) ⊕ H̃_0(B) ≅ 0 ⊕ 0 ← H̃_0(A ∩ B) ≅ Z ←^∂ H_1(T) ←^{H_1(f−g)} H_1(A) ⊕ H_1(B) ≅ Z ⊕ Z ←^{H_1(i,j)} ⋯
It follows by exactness that H̃_0(T) = 0. One can also find H_0(T) by noting that the torus only has one connected component, so H_0(T) = Z.
The inclusions of A and B into T are denoted by f and g, respectively; they determine the map from the direct sum H_1(A) ⊕ H_1(B) to H_1(T). From exactness we have ker(H_1(f − g)) = im(H_1(i, j)) ≅ Z. Hence, ker(∂) = im(H_1(f − g)) ≅ (Z ⊕ Z)/ker(H_1(f − g)) ≅ Z.
Moreover, we know that ∂ is surjective (Why?). So the short exact sequence associated to ∂ becomes
    0 → ker(∂) → H_1(T) → im(∂) → 0,
with ker(∂) ≅ Z and im(∂) ≅ Z. As Z is a free abelian group this short exact sequence splits and we can conclude that H_1(T) ≅ Z ⊕ Z.²
Aside: If you want to solve this without resorting to reduced homology, you need to work out the map induced by the inclusions on zeroth homology. It basically works the same way as H_1(i, j), so if you find it necessary to practice these calculations, I highly encourage you to work it out that way as well!
(5) Let the topological space M be Hausdorff and locally Euclidean of dimension d ≥ 1 (for example, M could be a manifold).
(a) Use excision to compute H_n(M, M ∖ {x}) for any point x ∈ M.
(b) Consider the case when M is an open Möbius strip, i.e. has no boundary. Pick a generator µ_x ∈ H_n(M, M ∖ {x}) ≅ Z and describe what happens with µ_x if one walks along the meridian of the Möbius strip.
Solution: Pick an arbitrary point x ∈ M. Since M is locally Euclidean, we can find a small neighborhood U_x ⊂ M for any x such that U_x ≅ R^d, where d is the dimension of the space.³
The subset M ∖ U_x ⊂ M ∖ {x} satisfies the conditions for excision (check them!), so we have the following isomorphism of homology groups:
    H_*(M, M ∖ {x}) ≅ H_*(M ∖ (M ∖ U_x), (M ∖ {x}) ∖ (M ∖ U_x)) ≅ H_*(U_x, U_x ∖ {x}).
By construction we have that U_x is homeomorphic to R^d. In addition U_x ∖ {x} ≅ R^d ∖ {x}, which we have seen in class to deformation retract onto S^{d−1}. Hence, by homotopy invariance,
    H_n(U_x) ≅ Z for n = 0, and 0 else;
    H_n(U_x ∖ {x}) ≅ Z for n = 0, d − 1, and 0 else.
² Note that this (fortunately) agrees with what we would have gotten by setting g = 1 in the previous exercise. It is a good habit to always double-check your answer when you can!
³ Note that we changed the notation for the dimension from n to d to avoid confusion with the indices.
We use the long exact sequence for relative homology for the pair U_x ∖ {x} ⊂ U_x.
For n ≥ 2 we find that H_n(U_x, U_x ∖ {x}) ≅ H_{n−1}(U_x ∖ {x}) ≅ H_{n−1}(S^{d−1}), since the surrounding homology groups of U_x are trivial. For the lower degrees we have
    0 ← H_0(U_x, U_x ∖ {x}) ← H_0(U_x) ≅ Z ←^{H_0(ι)} H_0(U_x ∖ {x}) ≅ Z ← H_1(U_x, U_x ∖ {x}) ← H_1(U_x) ≅ 0 ← ⋯
The inclusion ι : U_x ∖ {x} ↪ U_x induces an isomorphism H_0(ι) on the zeroth homology groups (Why?).
Hence, from exactness of the sequence we learn that H_1(U_x, U_x ∖ {x}) = 0 = H_0(U_x, U_x ∖ {x}). Summarizing, we have
    H_n(M, M ∖ {x}) ≅ H_n(U_x, U_x ∖ {x}) ≅ Z for n = d, and 0 else.
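In particular (spelling out the case needed in part (b), an added remark): for the open Möbius strip we have d = 2, so H_n(M, M ∖ {x}) ≅ Z for n = 2 and vanishes otherwise, and the generator µ_x of part (b) therefore lives in H_2(M, M ∖ {x}).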
For part (b) pick a point x ∈ M and choose a 2-simplex α : Δ² → M such that ∂α ∈ C_1(M ∖ {x}).
In particular this 2-simplex comes with an orientation!
Walking along the meridian of the Möbius band reverses the orientation of the 2-simplex (an illustration of this, only with a crab instead of a 2-simplex, can be found at: Illustration). This shows that the Möbius band is non-orientable.
(6) Prove the following properties of the degree map:
(a) Let f^{(n)} : S^n → S^n be the map (x_0, x_1, ..., x_n) ↦ (−x_0, x_1, ..., x_n). Show that f^{(n)} has degree −1.
(b) Show that for f, g : S^n → S^n we have deg(F ∘ (f ∨ g) ∘ T) = deg(f) + deg(g).
Solution: Recall from the computation of the homology groups of spheres (lecture on Tuesday 28.01) that we have isomorphisms
    H_k(S^n) →^δ H_{k−1}(S^{n−1} × (0, 1)) →^{H_{n−1}(ι)^{−1}} H_{k−1}(S^{n−1}).
We denote this composition by D, i.e. D := H_{n−1}(ι)^{−1} ∘ δ.
Let µ_0 := [1] − [−1] ∈ H_0(S^0), and let µ_1 ∈ H_1(S^1) ≅ π_1(S^1)⁴ be the class of the degree one map (i.e. the class of the identity on S^1, which corresponds to the class of the loop t ↦ e^{2πit}). Define the higher µ_n by Dµ_n = µ_{n−1}. Then µ_n is called the fundamental class in H_n(S^n).
We prove the claim by induction. First, f^{(0)}(µ_0) = f^{(0)}([1] − [−1]) = [−1] − [1] = −µ_0.
The morphism D is natural, so we have
    H_n(f^{(n)})µ_n = H_n(f^{(n)})D^{−1}µ_{n−1} = D^{−1}H_{n−1}(f^{(n−1)})µ_{n−1} =^{(∗)} D^{−1}(−µ_{n−1}) = −µ_n,
where step (∗) follows from the induction hypothesis. Thus, we can conclude that the degree of f^{(n)} is −1.
⁴ Here the fundamental group is abelian, so in particular it is equal to its abelianization. This isomorphism is not true in general, c.f. Exc 5 Sheet 11.
For part (b) we note that the map H̃_n(T) sends µ_n to (µ_n, µ_n) ∈ H̃_n(S^n) ⊕ H̃_n(S^n) ≅ H̃_n(S^n ∨ S^n), where the isomorphism follows from the fact that our spaces are well-pointed (i.e., they satisfy the conditions for excision). Under the isomorphism, the map H̃_n(f ∨ g) corresponds to (µ_n, µ_n) ↦ (H̃_n(f)µ_n, H̃_n(g)µ_n), and this yields (deg(f)µ_n, deg(g)µ_n). Under the fold map this is sent to the sum deg(f)µ_n + deg(g)µ_n, which is exactly what we wanted to show.
Note that this construction gives a generalization of the additivity relation deg(ω_2 ⋆ ω_1) = deg(ω_2) + deg(ω_1), which follows from concatenation of paths in the case of S^1.
(7) (a) Let SX denote the suspension of a topological space X. Show by a Mayer-Vietoris sequence that for all n there are isomorphisms H̃_n(SX) ≅ H̃_{n−1}(X).
(b) For f : S^n → S^n show that suspension leaves the degree invariant, i.e.
    deg(S(f)) = deg(f).
Conclude that for every integer k ∈ Z there is a continuous map f : S^n → S^n with deg(f) = k. Hint: Recall that for X = S^n we have SX ≅ S^{n+1}.
Solution: Recall that the suspension of a topological space is defined as SX := X × [0, 1] / (X × {0}, X × {1}).
The suspension of a space can be seen as two cones glued together at their bases. Denoting the upper cone by C_+X and the lower cone by C_−X, we have SX = C_+X ∪_X C_−X. Their intersection is homeomorphic to the topological space X. The Mayer-Vietoris sequence becomes
    ⋯ ← H̃_n(C_+X) ⊕ H̃_n(C_−X) ≅ 0 ⊕ 0 ← H̃_n(X) ← H̃_{n+1}(SX) ← 0 ⊕ 0 ← ⋯
since cones are contractible and hence have trivial homology groups. Exactness of the sequence then gives the desired isomorphism.
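For instance (an added sanity check, not on the original sheet), iterating the isomorphism from (a) together with SS^k ≅ S^{k+1} recovers the homology of spheres computed in class:
    H̃_n(S^n) ≅ H̃_{n−1}(S^{n−1}) ≅ ⋯ ≅ H̃_0(S^0) ≅ Z.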
For part (b) we first recall that the suspension of S^n is S^{n+1}, which together with the isomorphism from (a) gives H̃_{n+1}(SS^n) ≅ H̃_{n+1}(S^{n+1}) ≅ H̃_n(S^n).
Note that the isomorphism in (a) comes from the connecting homomorphism δ, which in particular is functorial (Proposition 1 in Lecture nr 25). Also, it sends µ_{n+1} ∈ H̃_{n+1}(S^{n+1}) to ±µ_n ∈ H̃_n(S^n). We have the commuting square
    H̃_{n+1}(S^{n+1}) ≅ H̃_{n+1}(SS^n)  →^{H̃_{n+1}(Sf)}  H̃_{n+1}(SS^n) ≅ H̃_{n+1}(S^{n+1})
          δ ↓                                                   ↓ δ
        H̃_n(S^n)                      →^{H̃_n(f)}                  H̃_n(S^n)
By tracing ±µ_{n+1} ∈ H̃_{n+1}(SS^n) through the diagram we find that ±deg(f)µ_n = ±deg(Sf)µ_n, with the same sign on both sides. Hence, suspension leaves the degree of the map f invariant. Recall that we have maps of any degree k on S^1 (what are they?), so by using the isomorphism SS^n ≅ S^{n+1} and the result of this exercise it follows that the same is true for any S^n as well.
(8) We define the Euler characteristic χ(X) as the alternating sum χ(X) := Σ_i (−1)^i Rank(C_i).
Show that χ(X) = Σ_n (−1)^n Rank(H_n(X)).
Solution: The proof is exactly the same as that of Theorem 2.44 in Hatcher. Hatcher uses the chain complex from a CW complex structure on the topological space X, but the argument is purely algebraic and works exactly the same way when working with the chain complex coming from a Δ-complex structure on X.
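For instance (an added illustration, not on the original sheet), for the torus T computed in Exercise (4) we have Rank H_0(T) = 1, Rank H_1(T) = 2 and Rank H_2(T) = 1, so
    χ(T) = 1 − 2 + 1 = 0.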
In particular, this means that χ(X) only depends on the homotopy type of X and is independent of the choice of Δ-structure (or CW-complex structure) on X!
(9) (a) Let X be a path-connected, locally path-connected, and simply connected topological space. Let p : E → B be a covering with E contractible. Prove that every continuous map f : X → B induces only zero maps in reduced homology, i.e. for all n ∈ N_0 we have H̃_n(f) = 0.
(b) Show that for n, m ∈ N with m ≥ 2, any map S^m → T^n = (S^1)^n induces the zero map on all reduced homology groups. Give a counterexample for m = 1.
Solution: Here, for every continuous map f : X → B there exists a lift f̃ : X → E (why?) such that p ∘ f̃ = f, i.e. the triangle formed by E, X and B commutes.
Since E is contractible, we know that H_n(E) = 0 for all n > 0, which implies that H_n(p) = 0 for n > 0.
On the level of homology we have H_n(f) = H_n(p ∘ f̃) = H_n(p) ∘ H_n(f̃), since H_* is functorial.
Hence H_n(f) must also be trivial for all n > 0. Finally, for n = 0 we simply use that X is path-connected, so H̃_0(X) = 0, which ensures that H̃_0(f) = 0 as well.
In part (b) we note that when m ≥ 2 the sphere S^m is path-connected, locally path-connected and simply connected.
Also, T^n has the covering p : R^n → T^n given by (φ_1, ..., φ_n) ↦ (e^{iπφ_1}, ..., e^{iπφ_n}). We know that R^n is contractible, so the result in part (a) applies to this situation.
For the case m = 1 we can e.g. consider the identity map id : S^1 → T^1, which gives H̃_1(id) = H_1(id), sending µ_1 to µ_1. This is non-trivial and hence gives a counterexample.
BROOKS-TYPE RESULTS FOR COLORING OF DIGRAPHS
by Ararat Harutyunyan
B.Sc., McGill University, 2006; M.Sc., McGill University, 2008
A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Mathematics
© Ararat Harutyunyan 2011
SIMON FRASER UNIVERSITY, Summer 2011
All rights reserved. However, in accordance with the Copyright Act of Canada, this work may be reproduced without authorization under the conditions for Fair Dealing. Therefore, limited reproduction of this work for the purposes of private study, research, criticism, review and news reporting is likely to be in accordance with the law, particularly if cited appropriately.
APPROVAL
Name: Ararat Harutyunyan
Degree: Doctor of Philosophy
Title of Thesis: Brooks-type results for coloring of digraphs
Examining Committee: Dr. Luis Goddyn (Chair), Dr. Bojan Mohar (Senior Supervisor), Dr. Matt DeVos (Supervisor), Dr. Pavol Hell (SFU Examiner), Dr. Bruce Reed (External Examiner, Professor of Computer Science, McGill University)
Date Approved: June 6, 2011

Abstract
In the thesis, the coloring of digraphs is studied. The chromatic number of a digraph D is the smallest integer k so that the vertices of D can be partitioned into at most k sets, each of which induces an acyclic subdigraph.
A set of four topics on the chromatic number is presented. First, the dependence of the chromatic number of digraphs on the maximum degree is explored. An analog of Gallai's Theorem is proved and some algorithmic questions involving list colorings are studied. Secondly, an upper bound on the chromatic number of digraphs without directed cycles of length two is obtained, strengthening the upper bound of Brooks' Theorem by a multiplicative factor of α < 1. Thirdly, evidence is provided for the global nature of the digraph chromatic number by proving that sparse digraphs with maximum degree ∆ can have chromatic number as large as Ω(∆/ log ∆), as well as showing the existence of digraphs with arbitrarily large chromatic number where every constant fraction of the vertices is 2-colorable. Finally, a generalization of digraph coloring to acyclic homomorphisms is considered, and a result linking D-colorability and girth is presented.
Acknowledgments
First and foremost I would like to thank Bojan Mohar, my senior supervisor, for all that I have learned from him, and for his guidance, support and faith in me throughout my graduate studies. Bojan's dedication to mathematics and his constant readiness to meet for discussions has truly inspired me. A special thanks goes to Matt DeVos, my supervisor, from whom I have learned a great deal and from whose enthusiasm for mathematics I have greatly benefited. I also wish to express my gratitude to FQRNT for having funded my Ph.D. studies.
I want to thank all my friends and colleagues who all have in one way or another made my studies enjoyable. My family has always been very supportive of my academic career. Their constant love and encouragement has always been with me, for which I am very grateful.
Contents
Approval
Abstract
Acknowledgments
Contents
List of Figures
1 Preliminaries
  1.1 Basic notation and definitions
  1.2 Overview of the thesis
2 The Chromatic Number of a Digraph
  2.1 Brooks theorem for graphs and digraphs
    2.1.1 Spectral version of Brooks' Theorem
  2.2 Further motivation - the circular chromatic number
    2.2.1 Circular chromatic number of graphs
  2.3 Some preliminary results and tournaments
    2.3.1 Extremal results on tournaments
    2.3.2 The digraph chromatic number and the Erdős-Hajnal Conjecture
  2.4 Planar digraphs and vertex-arboricity
3 Gallai's Theorem and List Coloring of Digraphs
  3.1 Introduction
  3.2 List coloring and Gallai's Theorem
  3.3 Complexity of list coloring of digraphs with Brooks' condition
4 Brooks Theorem for Digraphs of Girth Three
  4.1 Introduction
  4.2 Strengthening Brooks' Theorem for large ∆̃
  4.3 Brooks' Theorem for small ∆̃
5 Non-locality of the digraph chromatic number
  5.1 Introduction
  5.2 Chromatic number and girth
  5.3 Local 2-colorings and the chromatic number
6 Acyclic Homomorphisms
  6.1 Introduction
  6.2 D-colorable digraphs of large girth
  6.3 Proof of Theorem 6.2.3
  6.4 Uniquely D-colorable digraphs
  6.5 Circular chromatic number of digraphs
7 3-colorings of planar graphs
  7.1 Introduction
  7.2 Unavoidable configurations
  7.3 Reducibility
  7.4 Proof of the main theorem
8 Conclusion and Future Work
  8.1 Upper bounds on χ(D) in terms of ∆(D)
  8.2 The planar digraph conjecture
  8.3 The relationship between χ(G) and χ(D)
  8.4 Chromatic polynomial for digraphs
    8.4.1 The chromatic polynomial and planar digraphs
  8.5 Hedetniemi's Conjecture for digraphs
A Probabilistic Preliminaries
  A.1 The First Moment Method
  A.2 The Poisson Paradigm
    A.2.1 The Janson Inequalities
  A.3 The Lovász Local Lemma
  A.4 Concentration Inequalities
Bibliography

List of Figures
2.1 A digraph D on ten vertices with α(D) ≤ 6.
3.1 Possible blocks in Gallai trees: (a) a directed cycle, (b) a bidirected odd cycle, and (c) a bidirected complete graph.
3.2 An L-critical digraph with |L(v)| ≥ min{d+(v), d−(v)} that is not Eulerian.
3.3 Constructing an L-critical digraph with |L(v)| ≥ min{d+(v), d−(v)} of arbitrary order that is not Eulerian.
4.1 Constructing a 3-regular digraph D with χ(D) = 3.
4.2 Constructing a 3-chromatic 3-regular digraph from the Fano plane.
7.1 Unavoidable configurations. The listed numbers refer to the degree function δ, and the notation d− at a vertex v means all such configurations where the value δ(v) is either d or d − 1.
7.2 Lemma 7.3.1 applies to several configurations.
Chapter 1
Preliminaries
Digraphs have not been as thoroughly studied in the literature as (undirected) graphs. In this thesis, we study the chromatic number of digraphs. One of the most common ways of defining the chromatic number of a directed graph D is to forget the orientation of the edges of D and define the chromatic number of D as the chromatic number of the underlying graph (we call this the orientation-forgetful chromatic number of D). This seems to be the most common coloring parameter for digraphs studied in the literature by researchers. The disadvantage of this definition is that digraphs with very different structures can have the same chromatic number if their underlying graphs are the same.
We study a particular coloring variant of digraphs called the dichromatic number in .
This is the smallest integer k such that the vertex set of the digraph D can be partitioned into k acyclic sets. We have decided to call this parameter the chromatic number of D for the reasons that will become apparent later. The problem does not seem to have received much attention in the literature until recently. We will show that the chromatic number is the natural coloring invariant for digraphs by presenting some old and new results that generalize analogous results from graph coloring.
In the next section, we present some common notation and definitions used in the thesis.
More specific terms are defined throughout the thesis when they are needed.
1.1 Basic notation and definitions
In this section we give the main definitions and terminology that are used in the thesis. The basic definitions below can be found in .
A graph G = (V, E) consists of a set V of vertices and a set E of edges and a relation that associates with each edge two vertices called its endpoints. The edge uv ∈ E joins vertices u and v. A loop is an edge vv ∈ E from a vertex v to itself. Multiple edges are edges having the same pair of endpoints. A graph is called simple if it does not have any loops or multiple edges. Every graph considered in this thesis is simple unless otherwise stated. The order of G is the number of vertices of G and the size of G is the number of edges. Vertices u and v are said to be adjacent or neighbors if they are the endpoints of the same edge. The degree of a vertex v, denoted by deg(v), is the number of neighbors of v. The maximum degree of a graph G is denoted by ∆(G). G is a d-regular graph if every vertex of G has degree d. Given a vertex v ∈ V, the open neighborhood of v, denoted N(v), is defined as N(v) = {u : uv ∈ E}.
A path is a simple graph whose vertices can be ordered so that two vertices are adjacent if and only if they are consecutive in the list. A cycle is a graph with an equal number of vertices and edges whose vertices can be placed around a circle so that two vertices are adjacent if and only if they appear consecutively along the circle. The length of a cycle is the number of vertices in the cycle. The girth of a graph is the length of its shortest cycle. If the graph does not have any cycles, its girth is infinite. A subgraph of a graph G is a graph H such that V(H) ⊂ V(G) and E(H) ⊂ E(G). Let S be a subset of vertices. We denote by G[S] the graph that has the vertex set S and edge set E(S), where uv ∈ E(S) if and only if uv ∈ E(G). G[S] is called the induced subgraph on S.
G is connected if for all u, v ∈ V there is a path in G containing u and v. A cut-vertex in a graph G is a vertex v whose removal increases the number of connected components of G. A maximal connected subgraph of G that has no cut-vertex is called a 2-connected component or a block of G. A cut-edge (or bridge) is an edge whose removal increases the number of connected components of G. A graph is planar if it can be drawn in the plane so that no two edges cross. A set S ⊂ V is independent if no two vertices in S are adjacent. We denote by α(G) the size of the largest independent set in G. The chromatic number of G, denoted by χ(G), is the smallest integer k so that V(G) can be partitioned into k independent sets.
Now, we introduce some definitions and notations for digraphs. The notation is standard and we refer the reader to for an extensive treatment of digraphs. A digraph is obtained from a graph by giving each edge an orientation.
We use xy to denote the arc joining vertices x and y, where x is called the initial vertex and y is called the terminal vertex of the arc xy. We denote by A(D) the set of arcs of the digraph D. The vertex set of D will be denoted by V (D). Digraphs discussed in this thesis will not have loops or parallel arcs.
Such digraphs are called simple. We do allow, however, the existence of two arcs between two vertices going in opposite directions. For v ∈V (D) and e ∈A(D), we denote by D −v and D −e the subdigraph of D obtained by deleting v and the subdigraph obtained by removing e, respectively. We let d+ D(v) and d− D(v) denote the out-degree (the number of arcs whose initial vertex is v) and the in-degree (the number of arcs whose terminal vertex is v) of v in D, respectively. The total degree of a vertex v is d+(v) + d−(v). A vertex v ∈V (D) is said to be Eulerian if d+(v) = d−(v). The digraph D is Eulerian if every v ∈V (D) is Eulerian. We say that u is an out-neighbor (in-neighbor) of v if vu (uv) is an arc. We denote by N+(v) and N−(v) the set of out-neighbors and in-neighbors of v, respectively.
Every undirected graph G determines a bidirected digraph D(G) that is obtained from G by replacing each edge with two oppositely directed edges joining the same pair of vertices.
If D is a digraph, we let G(D) be the underlying undirected graph obtained from D by “forgetting” all the orientations of the arcs. A digraph D is said to be (weakly) connected if G(D) is connected. The blocks of a digraph D are the maximal subdigraphs D′ of D whose underlying undirected graph G(D′) is 2-connected. We say that D is strongly connected if for every vertex u and v, there is directed path from u to v. A cycle in a digraph D is a cycle in G(D) that does not use parallel edges. A directed cycle in D is a subdigraph forming a directed closed walk in D whose vertices are all distinct. A directed cycle consisting of exactly two vertices is called a digon. A vertex set S ⊂V (D) is called acyclic if the induced subdigraph D[S] has no directed cycles. A k-coloring of D is a partition of V (D) into k acyclic sets. The minimum integer k for which there exists a k-coloring of D is the chromatic number χ(D) of the digraph D.
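The following minimal sketch (not part of the thesis; the representation of a digraph as a Python dictionary mapping each vertex to its list of out-neighbors, and the use of Kahn's algorithm as the acyclicity test, are assumptions made here) checks whether a given partition of V(D) into color classes is a k-coloring in the above sense, i.e. whether every class induces an acyclic subdigraph.

    from collections import deque

    def induces_acyclic(digraph, vertices):
        # Kahn's algorithm restricted to the induced subdigraph on `vertices`.
        vertices = set(vertices)
        indeg = {v: 0 for v in vertices}
        for u in vertices:
            for w in digraph.get(u, ()):
                if w in vertices:
                    indeg[w] += 1
        queue = deque(v for v in vertices if indeg[v] == 0)
        seen = 0
        while queue:
            u = queue.popleft()
            seen += 1
            for w in digraph.get(u, ()):
                if w in vertices:
                    indeg[w] -= 1
                    if indeg[w] == 0:
                        queue.append(w)
        return seen == len(vertices)  # all vertices removed <=> no directed cycle

    def is_k_coloring(digraph, classes):
        return all(induces_acyclic(digraph, c) for c in classes)

    # Example: the directed triangle needs 2 colors; {0, 1} is acyclic, {2} trivially so.
    triangle = {0: [1], 1: [2], 2: [0]}
    assert not is_k_coloring(triangle, [[0, 1, 2]])
    assert is_k_coloring(triangle, [[0, 1], [2]])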
1.2 Overview of the thesis
The rest of the thesis is organized as follows. In Chapter 2, we discuss the known results in the literature and motivate the problem. In Chapter 3, we prove Gallai's Theorem for list coloring of digraphs and study the algorithmic complexity of list coloring digraphs.
In Chapter 4, we derive an upper bound on the chromatic number of digon-free digraphs in terms of the maximum average degree of the digraph. In Chapter 5, we derive analogs of two well-known theorems in graph theory which show that the chromatic number of a digraph, like the chromatic number of a graph, is not a local parameter. In Chapter 6, we look at the extension of the chromatic number to acyclic homomorphisms and derive a result about D-colorability and girth. In Chapter 7, we highlight related problems and future work.
Chapter 2
The Chromatic Number of a Digraph
In this chapter we introduce the problem studied in this thesis - the chromatic number of a digraph (see Chapter 1). We give a motivation for studying this digraph invariant by discussing some results for the chromatic number of (undirected) graphs and see how they generalize to digraphs. We will see that the digraph chromatic number introduced in Chapter 1 is the natural coloring invariant for digraphs. We also discuss some results in the literature and look at related problems.
Recall that, given a digraph D, the chromatic number χ(D) of D is the smallest integer k such that the vertices of D can be colored with k colors so that no directed cycle is monochromatic. This coloring parameter was first introduced by Neumann-Lara in 1982. There are a few papers that appeared in the literature on the topic in the following decade. However, recently there seems to be a newfound interest in the chromatic number due to some results that highlight its close relationship with the chromatic number of an (undirected) graph. In the rest of this chapter, we closely study this relationship.
2.1 Brooks theorem for graphs and digraphs
Recall that the chromatic number χ(G) of a graph G is the smallest integer k such that the vertices of G can be colored with k colors so that no two adjacent vertices receive the same color. Note that the chromatic number of a graph G is equal to the chromatic number of its bidirected digraph D(G). One of the earliest results in graph coloring is the following theorem of Brooks .
Theorem 2.1.1. Let G be a connected graph of maximum degree ∆. Then χ(G) ≤∆+ 1 with equality only for odd cycles and complete graphs.
For digraphs, it is not hard to see that the following tight upper bound holds, as proved by Neumann-Lara .
Theorem 2.1.2 (). Let D be a digraph and denote by ∆o and ∆i the maximum out-degree and in-degree of D, respectively. Then χ(D) ≤min{∆o, ∆i} + 1.
It turns out that Brooks’ Theorem has an analog for digraphs. We say that a digraph D is k-critical if χ(D) = k and for every vertex v, χ(D −v) < χ(D). Mohar proved the following theorem.
Theorem 2.1.3 (). Suppose that D is a k-critical digraph in which every vertex v satisfies d+(v) = d−(v) = k −1. Then one of the following cases occurs: 1. k = 2 and D is a directed cycle of length n ≥2.
2. k = 3 and D is a bidirected cycle of odd length n ≥3.
3. D is bidirected complete graph of order k ≥4.
[Figure: the three block types (a)-(c) appearing in Theorem 2.1.3.]
The above theorem shows that the only obstructions preventing a critical (k − 1)-regular digraph from being (k − 1)-colorable are the obvious ones. Note that odd cycles in Brooks' theorem are replaced by odd bidirected cycles and cliques are replaced by bidirected cliques.
However, we also have an additional structure – the directed cycle. This new structure will also appear later when we study Gallai’s Theorem for digraphs.
2.1.1 Spectral version of Brooks' Theorem
Brooks' Theorem also has its analog in spectral graph theory. Given a simple graph G of order n, one can define the adjacency matrix A = A(G) of G. This is the n × n matrix A = (a_ij), where a_ij = 1 if vertex i is adjacent to vertex j, and 0 otherwise. The eigenvalues of A(G) reveal a lot of information about G and are well treated in the literature, see for example [17, 31]. Let λ_n(G) be the largest eigenvalue of A(G). It is not hard to show that ∆(G) is always an upper bound on λ_n(G). Wilf proved the following Brooks-type result about the chromatic number. Note that the upper bound in the theorem is at most the upper bound of ∆ + 1 in Brooks' Theorem.
Theorem 2.1.4 (). Let G be a simple graph. Then χ(G) ≤λn + 1 with equality if and only if G is an odd cycle or a complete graph.
Surprisingly, this result generalizes to the digraph chromatic number. Given a simple digraph D of order n, the adjacency matrix A(D) of D is the n × n 0-1 matrix where a_ij = 1 if ij is an arc and 0 otherwise. Note that A(D) is not necessarily a symmetric matrix. The spectral radius of D, denoted by ρ(D), is the largest modulus of an eigenvalue of A(D).
It is known from the Perron-Frobenius theorem (see, for example ) that ρ(D) is an eigenvalue of D with a corresponding non-negative eigenvector.
More properties on the spectra of digraphs can be found in .
It turns out that the spectral radius gives a Brooks-type theorem on the digraph chromatic number, as shown by Mohar .
Theorem 2.1.5 (). Let D be a loopless digraph. Then
    χ(D) ≤ ρ(A(D)) + 1.   (2.1)
If D is strongly connected, then equality holds in (2.1) if and only if D is one of the digraphs listed in cases (1)-(3) in Theorem 2.1.3 for k = χ(D).
This theorem is further evidence that the digraph chromatic number is the natural coloring parameter for digraphs. Note that the assumption of strong connectivity in the above theorem is required – it is known that transitive tournaments (tournaments which are acyclic) have all eigenvalues equal to zero. Incidentally, this also shows that the upper bound in the above theorem will not hold for the “orientation-forgetful” chromatic number of digraphs.
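As a hedged illustration of the bound in Theorem 2.1.5 (this snippet is not from the thesis; the use of numpy and the choice of example digraph are my own), one can evaluate ρ(A(D)) + 1 numerically. For the directed triangle the adjacency matrix has spectral radius 1, so the bound gives 2, matching the fact that a directed cycle has chromatic number 2.

    import numpy as np

    def spectral_radius_bound(adjacency):
        # Upper bound chi(D) <= rho(A(D)) + 1 from Theorem 2.1.5.
        eigenvalues = np.linalg.eigvals(np.asarray(adjacency, dtype=float))
        return max(abs(eigenvalues)) + 1

    directed_triangle = [[0, 1, 0],
                         [0, 0, 1],
                         [1, 0, 0]]
    print(spectral_radius_bound(directed_triangle))  # approximately 2.0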
Lin and Shu recently obtained another spectral result for the chromatic number of digraphs. Let D_{n,k} be the set of digraphs of order n and chromatic number k. They characterize the digraph which has the maximal spectral radius in D_{n,k}.
2.2 Further motivation - the circular chromatic number
In recent years, researchers have worked on different coloring invariants of graphs that refine the chromatic number. One of the well-studied problems in this area is the circular chromatic number. The circular chromatic number of a graph G is greater than χ(G) − 1 and at most χ(G). Thus, it is a refinement of the chromatic number. Recently, the circular chromatic number was generalized to digraphs. In this section, we explore the relationship between the circular chromatic number and the chromatic number of a digraph. As we shall see, the circular chromatic number of a digraph refines the digraph chromatic number in much the same fashion as the circular chromatic number of a graph refines the chromatic number.
2.2.1 Circular chromatic number of graphs
Let G be a simple graph. For q ∈ Q, we define a circular q-coloring of G to be a map φ : V(G) → S_q, where S_q is the circle of perimeter q, such that for all xy ∈ E(G), φ(x) ≠ φ(y) and the shortest distance d_S(φ(x), φ(y)) from φ(x) to φ(y) on the circle is at least 1. We say that G is q-circular colorable if there exists a circular q-coloring φ of G. The circular chromatic number of G, denoted by χ_c(G), is defined as
    χ_c(G) = inf{q : G has a circular q-coloring}.
This notion is studied by many authors in the literature. We refer the reader to a survey by Zhu . The circular chromatic number χc(G) refines the chromatic number χ(G) in the following sense.
Theorem 2.2.1. For any graph G, χ(G) −1 < χc(G) ≤χ(G).
It is known that if χc(G) = r, then G is r-circular colorable, i.e. the infimum is attained.
The circular chromatic number was first introduced by Vince in 1988 as ‘the star-chromatic number’.
The original definition of Vince, which is equivalent to the above definition, is as follows. For two integers 1 ≤ d ≤ k, a (k,d)-coloring of a graph G is a coloring c of the vertices of G with colors {0, 1, ..., k − 1} such that for all xy ∈ E(G), d ≤ |c(x) − c(y)| ≤ k − d. The circular chromatic number is defined as
    χ_c(G) = inf{k/d : there is a (k, d)-coloring of G}.
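As a small illustration of Vince's condition (again not from the thesis; the edge-list representation and the example are assumptions made here), one can check a candidate (k,d)-coloring directly. The 5-cycle below admits a (5,2)-coloring, consistent with the well-known value χ_c(C_5) = 5/2.

    def is_kd_coloring(edges, c, k, d):
        # Vince's condition: d <= |c(x) - c(y)| <= k - d for every edge xy.
        return all(d <= abs(c[x] - c[y]) <= k - d for x, y in edges)

    c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    coloring = {0: 0, 1: 2, 2: 4, 3: 1, 4: 3}
    assert is_kd_coloring(c5_edges, coloring, 5, 2)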
The definition of circular chromatic number has a natural extension to digraphs as treated by Mohar and Bokal et. al . A circular q-coloring of a digraph D is a function φ : V (D) →Sq such that for all xy ∈E(D), φ(x) ̸= φ(y) and the distance dS(φ(x), φ(y)) from φ(x) to φ(y) in the clockwise direction on the circle is at least 1. If D has at least one arc, we define the circular chromatic number χc(D) as χc(D) = inf{q : D has a circular q-coloring}.
If D has no arcs, then we define χc(D) = 1. The above definition was introduced in . As opposed to the circular chromatic number for graphs, it is possible that D does not admit a χc(D)-coloring, i.e., the infimum is not necessarily attained. However, an alternative definition of a circular coloring overcomes this problem. Let q ≥1. A map φ : V (D) →Sq is called a weak circular q-coloring of D if, for every arc uv ∈A(D), either φ(u) = φ(v) or the distance dS(φ(u), φ(v)) from φ(u) to φ(y) in the clockwise direction on the circle is at least 1, and for every x ∈Sq, the color class φ−1(x) is an acyclic vertex set of D. It is easy to see that χc(D) is equal to the infimum of all real numbers q ≥1 for which there exists a weak circular q-coloring of D. It turns out that results by Mohar show that this infimum is always attained; i.e., every digraph D admits a weak circular χc(D)-coloring. Moreover, it is also shown in that χc(D) is a rational number for every D.
Interestingly, the circular chromatic number of a digraph is related to its analog for graphs. If G is a simple graph, then χc(G) = χc(D(G)), where D(G) is the bidirected digraph obtained from G by replacing each edge with two oppositely oriented arcs. The following extension of Theorem 2.2.1 shows that the digraph circular chromatic number is related to the digraph chromatic number.
Theorem 2.2.2 (). For every digraph D, χ(D) −1 < χc(D) ≤χ(D).
Since the proof is short, we present it here.
Proof. Given a k-coloring of D, we can map each of the k color classes to a single point on S_k so that two points corresponding to two color classes are at least distance 1 apart. This shows that a k-coloring of D determines a weak circular k-coloring of D, proving the second inequality.
For the first inequality, let p = χc(D), k = ⌈p⌉, and ϵ = p/2n, where n is the order of D.
Let c be a circular (p + ϵ)-coloring. Then Sp+ϵ can be written as the disjoint union of k + 1 arcs A0, A1, ..., Ak, each of length less than 1, and such that c−1(A0) = ∅. Let Vi = c−1(Ai), for i = 1, ..., k. Clearly, each Vi is acyclic. The partition of V (D) into these acyclic sets gives a k-coloring of D.
2.3 Some preliminary results and tournaments
Following the introduction of the chromatic number by Neumann-Lara, several papers appeared on the subject. In particular, the study of tournaments has received some attention.
Neumann-Lara and Urrutia proved the existence of an infinite family of vertex-critical r-chromatic regular tournaments for every r ≥3, r ̸= 4. In particular, they proved the following theorems.
Theorem 2.3.1 (). For each pair of positive odd integers r = 2l + 1, i ≥7, there exists a vertex-critical r-chromatic regular tournament with 3l−1 · i vertices.
Theorem 2.3.2 (). For each even integer r = 2l, l ≥3, and each odd i ≥7, there exists a vertex critical 2l-chromatic regular tournament with 3l−1 · i vertices.
The authors actually show a method of constructing such tournaments.
They also conjecture that there exists an infinite family of vertex-critical 4-chromatic circulant tournaments. Circulant tournaments are defined as follows. Let Z_{2n+1} be the set of integers mod 2n + 1 and J an n-subset of Z_{2n+1} − {0} such that for every w ∈ Z_{2n+1}, w ∈ J if and only if −w ∉ J. The circulant tournament C_{2n+1}(J) is defined by V(C_{2n+1}(J)) = Z_{2n+1}, A(C_{2n+1}(J)) = {ij : i, j ∈ Z_{2n+1}, j − i ∈ J}.
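A minimal sketch of this construction (my own illustration, not from the thesis; the dictionary representation of the arc set is an assumption):

    def circulant_tournament(n, J):
        m = 2 * n + 1
        J = {j % m for j in J}
        # J must contain exactly one of {w, -w} for every nonzero w mod m.
        assert all((w in J) != ((-w) % m in J) for w in range(1, m))
        # Arc i -> i + j (mod m) for every j in J.
        return {i: [(i + j) % m for j in J] for i in range(m)}

    # The rotational tournament on 5 vertices: J = {1, 2}.
    print(circulant_tournament(2, {1, 2}))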
In , Neumann-Lara solves the aforementioned conjecture in the affirmative. He also conjectures the following: Conjecture 2.3.3 (). There is an infinite family of vertex-critical r-chromatic circulant tournaments for each r ≥3.
Neumann-Lara settles the above conjecture for all k ≥ 3, k ≠ 7. In , the authors prove the conjecture for k = 7 and construct other infinite families of critical k-chromatic circulant tournaments for general k.
2.3.1 Extremal results on tournaments
The study of the chromatic number of tournaments has received some attention in the literature. Neumann-Lara showed that the minimum order of a 3-chromatic tournament is seven, and the minimum order of a 4-chromatic tournament is eleven. In particular, he proved the following theorem.
Theorem 2.3.4 (). There are exactly four non-isomorphic 3-chromatic tournaments of order 7, and one 4-chromatic tournament of order 11.
All the tournaments in the above theorem are also characterized.
Erdős, Gimbel and Kratsch derived an extremal result for general digraphs. For a graph G, define d(G) = max{χ(D) | D is an orientation of G}. For an integer k, let d(k) be the minimum number of edges a graph G satisfying d(G) = k can have. Then the following bounds on d(k) hold.
Theorem 2.3.5 (). There exist positive constants c_1, c_2 such that c_1 k^2 log^2 k ≤ d(k) ≤ c_2 k^2 log^2 k.
The first inequality in Theorem 2.3.5 implies that any digraph D of order n has χ(D) = O(n/log n).
The second inequality implies that there exists a digraph D of order k with χ(D) = Ω(k/log k). The proof of the above theorem relies on previous results. Here, we give shorter proofs from first principles. We say that almost all tournaments have a property P if the random tournament T_n on n vertices, obtained from K_n by randomly orienting the edges, satisfies property P with probability tending to 1 as n approaches infinity.
Given a digraph D, we let α(D) be the size of the largest acyclic set of vertices in D.
Theorem 2.3.6. Almost all tournaments of order n have chromatic number at least (1/2) · n/(log n + 1).
Proof. Let T_n be the random tournament of order n. Let A be a fixed subset of vertices of T_n of size 2⌈log n⌉ + 2. Note that if the subdigraph T_n[A] induced by A is acyclic, then there is an ordering of the vertices of A such that all the arcs of T_n[A] go forward with respect to the ordering. Thus,
    P[α(T_n) ≥ 2⌈log n⌉ + 2] ≤ (n choose 2⌈log n⌉+2) · P[A is acyclic]
                              ≤ (n choose 2⌈log n⌉+2) · (2⌈log n⌉+2)! · (1/2)^{(2⌈log n⌉+2 choose 2)}
                              ≤ n^{2⌈log n⌉+2} · 1/n^{2⌈log n⌉+1} · 1/n^2 = o(1).
Since χ(D) ≥ |V(D)|/α(D) for any digraph D, the theorem follows.
In the proof of the above theorem we used the fact that with high probability a random tournament has no acyclic set of size greater than 2 log n+2. On the other hand, it is known (see, for example, [20, 59]) that every tournament has an acyclic set of size at least c log2 n, for some positive constant c. The following lemma can be readily derived.
Lemma 2.3.7. Let D be a tournament of order n. Then D has an acyclic set of size at least log n.
Proof. Greedily pick vertices to be in the acyclic set in the following manner. In the first step pick any vertex v, and remove from the graph either the set of its in-neighbors or out-neighbors, whichever is smaller in size. Then pick one of the remaining neighbors and put it in the acyclic set. Repeat this process until there are no vertices remaining. Clearly, the resulting set is acyclic. Since in each iteration we remove at most (n −1)/2 vertices from the graph, we pick at least log n vertices.
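The greedy procedure in this proof can be sketched as follows (an illustrative implementation, not from the thesis; the tournament is assumed to be given as a dictionary mapping each vertex to the set of its out-neighbors). At each step the chosen vertex is kept and the smaller of its in- or out-neighborhood among the remaining vertices is discarded, so at most half of the remaining vertices are lost per step and roughly log_2 n vertices survive; the chosen set is acyclic for the reason given in the proof.

    def greedy_acyclic_set(tournament):
        remaining = set(tournament)
        chosen = []
        while remaining:
            v = next(iter(remaining))
            chosen.append(v)
            remaining.discard(v)
            outs = tournament[v] & remaining
            ins = remaining - outs
            # Keep the larger side; the discarded vertices are never revisited.
            remaining = outs if len(outs) >= len(ins) else ins
        return chosen

    # Example on the directed triangle viewed as a tournament on 3 vertices:
    t3 = {0: {1}, 1: {2}, 2: {0}}
    print(greedy_acyclic_set(t3))  # e.g. [0, 1] -- an acyclic set of size 2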
Theorem 2.3.8. Let T be a tournament of order n. Then χ(T) ≤ (n/log n)(1 + o(1)).
Proof. The theorem is intuitively clear from Lemma 2.3.7. In each iteration, using a single color we color and remove from the digraph a large acyclic set whose existence is guaranteed by Lemma 2.3.7. Of course, as we remove a color class from the digraph the size of the subsequent acyclic set decreases. However, it turns out that we still only need roughly n/log n colors. We now make the argument more precise.
We color the vertices of the digraph using the procedure described above until there are at most ⌊n/log² n⌋ uncolored vertices remaining. At this point we stop the procedure and greedily finish by assigning a new color to each uncolored vertex. This gives a valid coloring.
During each iteration of the first phase of the procedure we remove at least log(n/log² n) ≥ log n − 2 log log n vertices. Hence, there are at most n/(log n − 2 log log n) such iterations. Hence the total number of colors needed is at most n/(log n − 2 log log n) + ⌊n/log² n⌋ ≤ (1 + o(1)) n/log n.
Theorem 2.3.8 is thought to extend to all digraphs, with n replaced by the maximum total degree of a vertex. McDiarmid and Mohar conjectured the following:
Conjecture 2.3.9 (). Every digraph D without digons and with maximum total degree ∆ has χ(D) = O(∆/log ∆).
2.3.2 The digraph chromatic number and the Erdős-Hajnal Conjecture
The chromatic number of tournaments is also related to the well-known Erdős-Hajnal conjecture.
The Erdős-Hajnal conjecture is one of the fundamental conjectures in Ramsey theory. Recall that for a graph G, α(G) is the size of the largest independent set in G and ω(G) is the order of the largest complete graph in G. It is known by Ramsey theory that if G is a graph of order n, then max{α(G), ω(G)} ≥ (1/2) log_2 n (see ). Erdős showed that this is essentially best possible by proving the existence of a graph G with max{α(G), ω(G)} < 2 log_2 n. However, it may be true that forbidding subgraphs will yield a polynomial lower bound on max{α(G), ω(G)} rather than a logarithmic one.
If H is not an induced subgraph of G, then we say that G is an H-free graph. Erdős and Hajnal conjectured the following.
Conjecture 2.3.10 (Erdős-Hajnal Conjecture). For every graph H, there exists a positive ϵ = ϵ(H) such that every H-free graph G with n vertices has max{α(G), ω(G)} ≥ n^ϵ.
The conjecture is known to hold for some classes of graphs. Alon, Pach and Solymosi raised the following conjecture on tournaments that has a similar flavor. Recall that a tournament is called transitive if it is acyclic.
Conjecture 2.3.11 (). For every tournament T, there exists a positive constant ϵ = ϵ(T) such that every T-free tournament with n vertices has a transitive subtournament of size at least n^ϵ.
We have shown above that every n-vertex tournament has a transitive subtournament of order at least log_2 n. Results of Erdős et al. and Spencer show that up to a multiplicative constant this is best possible. Alon, Pach and Solymosi prove the following.
Theorem 2.3.12 (). Conjecture 2.3.10 and Conjecture 2.3.11 are equivalent.
P. Seymour raised the following question: is it true that for every tournament H, there exists a constant c = c(H) such that every H-free tournament T has χ(T) ≤c? It turns out that the answer to this question is negative as shown by Berger et al. . One can pose the following weaker conjecture.
Conjecture 2.3.13 (). For every tournament H, there exist c > 0 and ϵ < 1, such that if T is an H-free tournament, then χ(T) ≤ c|V(T)|^ϵ.
It is not hard to see that Conjecture 2.3.13 is equivalent to Conjecture 2.3.11, and thus to the Erd˝ os-Hajnal Conjecture.
2.4 Planar digraphs and vertex-arboricity
For planar digraphs there is the following conjecture, raised by Neumann-Lara (1982) and Skrekovski (2001).
Conjecture 2.4.1. If D is a planar digraph without directed cycles of length 2, then χ(D) ≤ 2.
The conjecture is still very much open and seems quite difficult. Some results in this area follow from the theory of vertex arboricity of graphs. The vertex arboricity or point-arboricity of a graph G, denoted by a(G), is the minimum number of sets in a partition of V (G) into sets each of which induces a forest. The following observation is clear.
Observation 2.4.2. If G is a graph and D is any orientation of the edges of G, then χ(D) ≤a(G).
It is known that Conjecture 2.4.1 does not hold for vertex arboricity as proved by Chartrand, Kronk and Wall .
Theorem 2.4.3 (). a(G) ≤3 when G is planar, and this bound is sharp.
Note that Theorem 2.4.3 implies that χ(D) ≤3 for every digon-free planar digraph D.
For general graphs, the following upper bound on vertex-arboricity is given by Kronk and Mitchem .
Theorem 2.4.4 (). If G is a connected graph that is neither a complete graph of odd order nor a cycle, then a(G) ≤ ⌈∆(G)/2⌉.
Planar graphs without short cycles have vertex arboricity two, as shown by Raspaud and Wang .
Theorem 2.4.5 (). If k ∈{3, 4, 5, 6}, then a(G) ≤2 for every planar graph G having no k-cycles.
Raspaud and Wang also show the following.
Theorem 2.4.6 (). If G is a planar graph with no two triangles having vertices that are shared or adjacent, then a(G) ≤2.
Theorem 2.4.5 with Observation 2.4.2 implies that χ(D) ≤2 for every planar digon-free digraph D having no k-cycles if k ∈{3, 4, 5, 6}. Similarly, Theorem 2.4.6 implies that χ(D) ≤2 for every digon-free digraph D without triangles.
Albertson proposed the following weaker version of Conjecture 2.4.1.
Conjecture 2.4.7 (). Every planar digraph of order n without digons has an acyclic set of vertices of size at least n/2.
It may be true that n/2 in the above conjecture could be replaced by αn for some α > 1/2. Below we show that α cannot be greater than 3/5.
Proposition 2.4.8. The largest acyclic set in the digraph D below is at most six.
Proof. Suppose, for contradiction, that D has an acyclic set S of size seven. Then S contains exactly four vertices from one of the two directed pentagons. By symmetry, we may assume that the outer pentagon contains four vertices of S. Now, it is easy to see that the inner pentagon cannot contain more than two vertices of S, a contradiction.
The above conjecture is a weakening of a much older conjecture due to Albertson and Berman .
Conjecture 2.4.9 (). Let G be a planar graph of order n and let k be the size of a largest set of vertices in G which induces a forest. Then k ≥ n/2.
By results of Borodin , it is known that k ≥2n/5.
Figure 2.1: A digraph D on ten vertices with α(D) ≤ 6.
Chapter 3
Gallai's Theorem and List Coloring of Digraphs
3.1 Introduction
A theorem of Gallai describes the structure of low degree vertices in graphs that are critical for the chromatic number. It states that the induced subgraph on the vertices of degree k − 1 in a k-critical graph is composed of blocks that are either complete graphs or odd cycles. In this chapter, we consider the chromatic number of digraphs and show that Gallai's theorem can be extended to this setting. It is interesting to note that another structure appears in addition to cliques and odd cycles. These are directed cycles of any length.
For a parallel, we observe that this kind of graph also occurs in the version of Brooks' Theorem for digraphs, see Theorem 3.1.3 below.
The Gallai theorem has a natural setting in terms of list colorings.
For undirected graphs, it can be viewed as a list coloring problem where the list at each vertex has the same number of available colors as the degree of that vertex. The coloring problem for this type of lists is easily solvable for undirected graphs. However, as we show in Section 3.3, the list coloring problem of this type on digraphs is NP-hard.
List colorings and Gallai trees
A graph G is k-color-critical or k-critical if χ(G) = k and χ(H) < χ(G) for every proper subgraph H ⊂ G. The minimum degree of a k-critical graph is at least k − 1. A classical theorem of Gallai states that in every k-critical graph, the vertices of degree k − 1 induce a graph whose blocks are either odd cycles or complete graphs. Because of this result, a connected graph all of whose blocks are either odd cycles or complete graphs is called a Gallai tree.
A natural setting of applying Gallai’s theorem is that of list colorings. Given a graph G and a list L(v) of colors for each vertex v, we say G is L-colorable if there is a proper coloring of G (i.e. each color class is an independent set) such that each vertex v is assigned a color from L(v). Having a k-critical graph G, one may assume that we have (somehow) colored vertices of degree larger than k −1 with k −1 colors and that only vertices whose degree in G is k −1 are left to be colored. Denote the subgraph induced by the vertices of degree k −1 by S. Now, each vertex v ∈V (S) has a list L(v) of available colors, and |L(v)| = degS(v). This setting is used to formulate Gallai’s theorem for list colorings. It was obtained independently by Borodin and Erd˝ os et al. . Kostochka et al. generalized it to hypergraphs.
Theorem 3.1.1 (,). Let G be a connected graph, and L a list-assignment for G.
Suppose that |L(x)| ≥deg(x) for each x ∈V (G), and G is not L-colorable. Then G is a Gallai tree.
The following strengthening of the previous theorem has been proved by Thomassen , while the generalization to hypergraphs can be found in .
Theorem 3.1.2. Let L be an arbitrary list-assignment for a graph G. Let X be a subset of vertices such that G[X] is connected and |L(x)| ≥degG(x) for each x ∈X. Assume that G−X is L-colorable. If G is not L-colorable, then G[X] is a Gallai tree and |L(x)| = degG(x) for every x ∈X.
Digraph colorings and Brooks’ Theorem Note that the blocks in Gallai’s theorem for undirected graphs are precisely complete graphs and odd cycles, which also appear in Brooks’ theorem. For digraphs, a version of Brooks’ theorem was proved in , as mentioned in a previous chapter.
CHAPTER 3. GALLAI’S THEOREM AND LIST COLORING OF DIGRAPHS 19 Theorem 3.1.3 (). Suppose that D is a k-critical digraph in which for every vertex v ∈V (D), d+(v) = d−(v) = k −1. Then one of the following cases occurs: 1. k = 2 and D is a directed cycle of length n ≥2.
2. k = 3 and D is a bidirected cycle of odd length n ≥3.
3. D is bidirected complete graph of order k ≥4.
Note that the last two cases of Theorem 3.1.3 are the analogues of odd cycles and complete graphs in the undirected version of Brooks’ and Gallai’s theorems. Thus, it is expected that the first case of Theorem 3.1.3 will appear in the Gallai’s theorem for digraphs, which is proved in the sequel.
The rest of the chapter is organized as follows. In Section 3.2, we derive an analogue of Gallai’s theorem for directed graphs. In Section 3.3, we consider algorithmic questions for list coloring a digraph.
3.2 List coloring and Gallai's Theorem
We define list colorings of digraphs in an analogous way as for undirected graphs. Let C be a finite set of colors. Given a digraph D, let L : v ↦ L(v) ⊆ C be a list-assignment for D, which assigns to each vertex v ∈ V(D) a set of colors. The set L(v) is called the list (or the set of admissible colors) for v. We say D is L-colorable if there is an L-coloring of D, i.e., each vertex v is assigned a color from L(v) such that every color class induces an acyclic subdigraph in D. We say that D is L-critical if D is not L-colorable but every proper subdigraph of D is L-colorable. Clearly, by saying that a subdigraph H is L-colorable, we use the restriction of the list-assignment L to V(H). The main result of this section is the following digraph analogue of Gallai's Theorem.
Theorem 3.2.1. Let D be a connected digraph, and L an assignment of colors to the vertices of D such that |L(v)| ≥max{d+(v), d−(v)}. Suppose that D is not L-colorable. Then D is Eulerian and every block of D is one of the following: (a) directed cycle, (b) an odd bidirected cycle, or CHAPTER 3. GALLAI’S THEOREM AND LIST COLORING OF DIGRAPHS 20 (c) a bidirected complete digraph.
Moreover, for each block B of D, there is a set CB of colors so that for each vertex v ∈V (D), we have L(v) = {∪CB | B is a block of D and v ∈V (B)}.
Furthermore, |L(v)| = d+(v), implying that the blocks B containing v have pairwise disjoint color sets CB.
Figure 3.1: Possible blocks in Gallai trees: (a) a directed cycle, (b) a bidirected odd cycle, and (c) a bidirected complete graph.
The proof of Theorem 3.2.1 relies on several lemmas. The first of these gives information about the lists of L-critical Eulerian digraphs.
Lemma 3.2.2. Let D be an Eulerian digraph, and let L be an assignment of colors to the vertices of D. Suppose that |L(v)| = d+(v) (v ∈V (D)) and that D is L-critical. Given a vertex v ∈V (D), let f be an L-coloring of D −v. Then the following holds: 1. L(v) = {f(u) | u ∈N−(v)} = {f(w) | w ∈N+(v)}, and so each color in L(v) appears exactly once in N−(v) and once in N+(v).
2. If u is a neighbor of v with f(u) = c, then uncoloring u and coloring v with c gives an L-coloring of D −u.
Proof. If a color c ∈L(v) would not appear on the out-neighborhood of v, we could color v by c and obtain an L-coloring of D. Similarly, each color c ∈L(v) also appears on the in-neighborhood of v. This establishes the first claim.
To prove the second claim, remove color c from u and color v with c. Suppose, without loss of generality, that u is an out-neighbor of v. Since c appeared on the out-neighbors of v only once, we get an L-coloring of D −u.
Lemma 3.2.3. Let D be a connected digraph. Let L be an assignment of colors to the vertices of D with |L(v)| ≥max{d+(v), d−(v)} for each v ∈V (D). Suppose that D is not L-colorable. Then 1. D is Eulerian and |L(v)| = d+(v) = d−(v) for every v ∈V (D).
2. D is L-critical.
Proof. To prove 1), we will use induction on |V (D)|. The claim is clear if |V (D)| = 1. If |V (D)| = 2, then D is either a directed edge (and hence L-colorable since L(v) ̸= ∅for v ∈V (D)) or a digon, in which case 1) holds. So, assume now that |V (D)| ≥3.
Suppose there exists a vertex v ∈V (D) such that |L(v)| > min{d+_D(v), d−_D(v)}; note that this is the case, in particular, whenever d+_D(v) ≠ d−_D(v), since |L(v)| ≥ max{d+_D(v), d−_D(v)}. Let D′ = D −v.
If D′ were L-colorable, so would be D, since one of the colors in L(v) would not appear among the in-neighbors or among the out-neighbors of v. Thus, D′ is not L-colorable. Then D′ has a connected component D′′ that is not L-colorable. Applying the induction hypothesis to D′′, we conclude that D′′ is Eulerian and |L(u)| = d+_{D′′}(u) = d−_{D′′}(u) for every u ∈V (D′′). Now, choosing a vertex u ∈V (D′′) which is a neighbor of v, we obtain that d+_{D′′}(u) = |L(u)| ≥ max{d+_D(u), d−_D(u)} ≥ d+_{D′′}(u) + 1, a contradiction. Therefore, |L(v)| = min{d+_D(v), d−_D(v)} = max{d+_D(v), d−_D(v)} for every v ∈V (D), and the result follows.
To prove 2), we use induction on |A(D)|. The claim is clearly true when |A(D)| ≤2.
So, suppose |A(D)| ≥3. Now, let e = uv be any arc, and let D′ = D −e. Let D′′ be any component of D′. By part 1), D is Eulerian which implies that D′′ is not Eulerian.
Therefore, by the induction hypothesis, D′′ is L-colorable. Similarly, if there exists a second component of D′, it is also L-colorable. Therefore, D′ is L-colorable, and thus D is L-critical.
Let C = v1v2...vk be a cycle (not necessarily directed) in a digraph D.
Let f be a coloring of D −v1. A shift of colors around C is a color assignment g for D −v1, where g(v2) = f(v3), g(v3) = f(v4), ..., g(vk) = f(v2) and g(v) = f(v) for v ∈V (D)\V (C). Let us observe that in the case of Eulerian L-critical digraphs, Lemma 3.2.2 guarantees that g is a (proper) L-coloring of D −v1, since g can be obtained by repeatedly using part (2) of Lemma 3.2.2: first we uncolor v2 and color v1, then uncolor v3 and color v2, etc., until the last step when we uncolor v1 and color vk. This fact will be used throughout this section.
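As an illustration (the dictionary-based representation and the function name below are ours, not part of the text), the following Python sketch performs a shift of colors around a cycle v1v2...vk in which v1 is the uncolored vertex.

```python
def shift_colors(coloring, cycle):
    """Shift colors around a cycle v1, v2, ..., vk (vertex v1 is assumed uncolored).

    After the shift, v2 gets the old color of v3, v3 the old color of v4, ...,
    and vk the old color of v2; all other vertices keep their colors.
    Returns a new dictionary mapping vertices to colors.
    """
    v = cycle            # v[0] = v1 is the uncolored vertex
    k = len(v)
    new_coloring = dict(coloring)
    for i in range(1, k):
        # v_i receives the old color of the next vertex on the cycle (cyclically)
        donor = v[i + 1] if i + 1 < k else v[1]
        new_coloring[v[i]] = coloring[donor]
    return new_coloring

# Example: a 4-cycle v1 v2 v3 v4 with v1 uncolored.
f = {"v2": "a", "v3": "c", "v4": "b"}
g = shift_colors(f, ["v1", "v2", "v3", "v4"])
# g == {"v2": "c", "v3": "b", "v4": "a"}
```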
Lemma 3.2.4. Let D be a connected digraph, and L an assignment of colors to the vertices of D such that |L(v)| = max{d+(v), d−(v)} for each v ∈V (D). Suppose that D is not L-colorable. Let C be a cycle of length 3 or 4 in the underlying graph G(D). If the orientation of the edges of C in D is not cyclic (i.e., E(C) does not induce a directed cycle in D), then V (C) induces a complete bidirected graph in D.
Proof. By Lemma 3.2.3, D is Eulerian and L-critical. First, assume that C = v1v2v3 has length three. We may assume that the edges of C are directed as follows: v3v1, v1v2 and v3v2. We will show that the arcs v1v3, v2v3 and v2v1 are also present in D. Consider a coloring f of D −v1. Let f(v2) = a. If f(v3) = a, then uncoloring v3 and coloring v1 with a would give an L-coloring of D−v3 where v3 has two out-neighbors colored a, a contradiction by Lemma 3.2.2. Therefore, f(v3) = b ̸= a. Now, the out-neighbor of v1 that is colored b must be on the cycle C since otherwise doing a shift of colors around C we would get a new L-coloring of D −v1 with v1 having two out-neighbors colored b, so we could complete the coloring. The only way the out-neighbor of v1 colored b is on C is when v1v3 ∈A(D).
By a similar reasoning, v2v1 ∈A(D). To show the existence of the arc v2v3, consider an L-coloring of D −v3 and the cycle C′ consisting of the arcs v1v2, v1v3, and v3v2. The same proof as above shows that v2v3 ∈A(D). This settles the case when C is a cycle of length 3.
Suppose now that C = v1v2v3v4v1 is a 4-cycle, and assume that the arcs of C are not cyclic. We may assume that the vertex v1 has both vertices, v2 and v4, as its out-neighbors.
Now, by criticality, D −v1 is L-colorable. Moreover, every coloring f assigns different colors to v2 and v4 by Lemma 3.2.2. So suppose f(v2) = a and f(v4) = b, a ≠ b. Now, f(v3) ≠ a, since otherwise making the counter-clockwise shift of colors around C we would get two out-neighbors of v1 colored a. Similarly, if we do a clockwise shift of colors around C we deduce that f(v3) ≠ b. Therefore, assume f(v3) = c, c ≠ a, b. Now, if we do a clockwise shift of colors around C we get that the color a disappears from the out-neighborhood of v1, unless the vertex v3 is an out-neighbor of v1. Thus, by Lemma 3.2.2, v1v3 ∈A(D).
Now, regardless of the orientation of edges v2v3 and v3v4, the two triangles v1v2v3 and v1v3v4 have acyclic orientations and therefore by the first part of the proof, these sets induce bidirected cycles in D. Therefore, we have that C is a bidirected cycle that also has the chords v1v3 and v3v1.
Now we apply the same proof to the cycle C′ with arcs v2v3, v3v4, v4v1, v2v1 in which v2 has two out-neighbors. We conclude that also the chords v2v4 and v4v2 are in D. This completes the proof of the lemma.
Using Lemma 3.2.4, we now obtain the following.
Lemma 3.2.5. Let D be a connected digraph, and L an assignment of colors to the vertices of D such that |L(v)| = max{d+(v), d−(v)} for each v ∈V (D). Suppose that D is not L-colorable. Let C = v1v2...vkv1, k ≥3, be a cycle of length k in the underlying graph.
Suppose that the orientation of the edges of C is not cyclic. Then the following holds: 1. If k is even, then V (C) induces a complete bidirected subdigraph in D, 2. If k is odd, then V (C) either induces a bidirected cycle or a complete bidirected subdigraph in D.
Proof. By Lemma 3.2.3, D is Eulerian and L-critical. We proceed by induction on k. The cases k = 3 and k = 4 are established by Lemma 3.2.4. So we assume that k ≥5. First, suppose that k is odd. We may assume that the two neighbors of v1 on the cycle C, v2 and vk, are an out-neighbor and an in-neighbor, respectively. Such a vertex must exist by parity.
We consider two cases. First, suppose there is a chord incident to v1, say v1vi, 2 < i < k.
Then regardless of the orientation of the edge v1vi, one of the two cycles v1v2...viv1 and v1vivi+1...vkv1 has acyclic orientation. By induction, we must have the arcs v1vi and viv1 present in D. The arcs v1vi and viv1 divide the cycle C into an odd cycle and an even cycle. Suppose C1 = v1v2...vi is the even cycle. We can make sure that C1 has its edges oriented acyclically by appropriately picking either the arc v1vi or viv1. Thus, by induction, C1 induces a complete bidirected digraph. Similarly, C2 = v1vivi+1...vkv1 induces either a bidirected cycle or a bidirected clique. Now, consider the cycle C3 = v2vivi+1...vkv1v2. We can choose the appropriate bidirected arcs to ensure that C3 has acyclic orientation. Since C3 is an even cycle and it is shorter than C, it follows that C3, and hence also C2, induces a complete bidirected digraph. It remains to show that every vertex on C1 has bidirected arcs to every vertex on C2. But this is clear, since for any vj on C1, v1vjvivi+1...vkv1 is an even cycle and thus induces a complete bidirected graph by the same argument as used above.
Now, suppose there is no chord incident to v1. Let f be an L-coloring of D −v1. First, we claim that f(vk) ≠ f(v2). Suppose, for a contradiction, that f(vk) = f(v2) = a. By making a shift of colors around C, we conclude that f(v3) = a. Moreover, by repeatedly making a shift of colors around C, we conclude that all the original colors on C were equal to a. Let vi be a vertex on C that has both of its neighbors on C as in-neighbors. Passing the color of v2 to v1 (by using Lemma 3.2.2(2)), the color of v3 to v2, ..., the color of vi to vi−1, we get a proper L-coloring of D −vi. But now vi has two in-neighbors colored a, so we can complete the coloring to a coloring of D, a contradiction. So we may assume that f(v2) = a and f(vk) = b, a ≠ b. Now, the out-neighbor of v1 that has color b must be vk for otherwise doing a shift of colors we would get a coloring of D −v1 with two out-neighbors colored b. So, v1vk ∈A(D). By a similar argument, v2v1 ∈A(D). Now, consider the vertex v2 and a coloring of D −v2. Since the edges v1v2, v2v1, v1vk, vkv1 exist, we can change C to a non-directed cycle C′ in which v2 has an in-neighbor and an out-neighbor. As above, we either get a bidirected clique or both arcs v2v3 and v3v2. Repeating this argument, we deduce that V (C) induces a bidirected cycle or a bidirected clique.
Next, suppose k is even.
We may assume that v1’s neighbors on C, v2 and vk, are both in-neighbors. We claim that there is a chord of C incident to v1 and directed inwards (i.e., v1 has another in-neighbor on C). Suppose not. Consider a coloring of D −v1 and let f(v2) = a and f(vk) = b. Now if we do a shift of colors around C we deduce that f(v3) = f(v5) = f(v7) = · · · = f(vk−1) = b. But this is impossible since after performing a shift of colors in the opposite direction, we will obtain a valid coloring of D −v1 with vk and v2 both colored b. Therefore, there is an arc viv1 ∈A(D). If this arc divides C into two even cycles, then by an inductive argument similar to the case when k is odd we can deduce that C is a complete bidirected digraph. Therefore, assume that i is odd so that viv1 splits the cycle C into two odd cycles C1 = v1v2...viv1 and C2 = v1vivi+1...vkv1.
By induction, we have that all the edges of C are actually bidirected arcs. Also, we know that viv1, v1vi ∈A(D). Next, we show that there must be further chords incident to v1 in addition to those coming from vi. Suppose not. Consider a coloring g of D −v1, and suppose g(v2) = a, g(vk) = b and g(vi) = c.
Now, if we do a shift of colors around C1, we conclude that g(v2) = g(v4) = ... = g(vi−1) = a and g(v3) = g(v5) = ... = g(vi) = c.
Similarly, doing a shift of colors around C2 we conclude that g(vi) = g(vi+2) = ... = g(vk−1) = c and g(vi+1) = g(vi+3) = ... = g(vk) = b. Since k ≥6, if we now do two shifts of colors around C, we will get a coloring of D −v1 where the same color appears twice in the neighborhood of v1, contradicting Lemma 3.2.2. Therefore, there are other chords incident to v1 besides the ones coming from vi. This implies that one of the cycles C1 or C2 is divided into an even cycle and an odd cycle, and we are done by a similar argument as in the case when k is odd.
Now, we can prove the main result of this section.
Proof of Theorem 3.2.1. By Lemma 3.2.3, D is Eulerian and L-critical. First, we prove the first claim of the theorem. Let H be a block of D for which none of (a)–(c) applies. Note that H cannot be a single arc by L-criticality. The theorem is clear if |V (H)| ≤3. Note that H cannot be a non-directed cycle or a cycle with some but not all edges bidirected, since every such cycle induces new arcs by Lemma 3.2.5. So we may assume that |V (H)| ≥4 and that H (as an undirected graph) is not a cycle. Then there are two vertices in H with three internally vertex-disjoint paths between them, say P1, P2, P3. Two of these paths, say P1 and P2, create a cycle C of even length. We claim that the cycle C induces a complete bidirected graph. Suppose not. Then C is a directed cycle by Lemma 3.2.5. This implies that at least one of the cycles P1 ∪P3 or P2 ∪P3 is not directed. By applying Lemma 3.2.5 again, this new cycle induces at least a bidirected cycle and therefore some of the arcs of C are bidirected. But this is a contradiction, which shows that C induces a complete bidirected digraph.
Let v be any vertex of H that is not on C. Since H is a block, there are two paths P and Q from v to C whose only common vertex is v. Now, simply take an even cycle C′ that contains the path P ∪Q and one or two additional arcs of C. We may choose the arcs of C′ so that C′ is a non-directed cycle. Now, Lemma 3.2.5 shows that C′ induces a complete bidirected digraph. By using different vertices of C when making C′ (by possibly including more than two arcs of C), we conclude that every vertex of P ∪Q is adjacent to each other and to every vertex on C. Therefore, if we take any maximal bidirected clique K in H we conclude that all the vertices of H are on K. Hence, H is a complete bidirected digraph.
It remains to prove the last part of the theorem. Let us consider a block B of D. Note that B satisfies one of (a)–(c). If B = D, then it is easy to see that the only list assignment L, for which D is not L-colorable, has all lists L(v), v ∈V (D), equal to each other. So, we may assume that B ̸= D. Next, we L-color D′ = D −V (B). After this, each vertex v ∈V (B) is left with at least d+ B(v) colors that do not appear on N(v). Let L′(v) ⊆L(v) denote these colors. Now, every L′-coloring of B gives rise to an L-coloring of D, so B is not L′-colorable. But since |L′(v)| ≥d+ B(v) for all v ∈V (B), we conclude, by the same arguments as above, that |L′(v)| = d+ B(v) for each v ∈V (B) and that all lists L′(v) are the same. By denoting this common color set by CB, we obtain the last part of the theorem.
Since |L(v)| = d+(v), it is easy to see that the color sets CB of all blocks B containing v are pairwise disjoint.
Note that the condition |L(v)| ≥ max{d+(v), d−(v)} in Theorem 3.2.1 cannot be strengthened to, say, |L(v)| ≥ d+(v), since we could take any digraph which has a vertex with no out-neighbors and an empty list of colors. However, this becomes possible if we know that the digraph is L-critical.
Corollary 3.2.6. Let D be a connected digraph and L an assignment of colors to the vertices of D such that |L(v)| ≥d+(v), for every v ∈V (D). Suppose that D is L-critical. Then D is Eulerian, and hence the conclusions of Theorem 3.2.1 hold.
Proof. If D is not Eulerian, then there exists a vertex v ∈V with d+(v) > d−(v). Consider an L-coloring of D −v. Now, since |L(v)| ≥d+(v) > d−(v), there is a color c ∈L(v) that does not appear on the in-neighborhood of v. Coloring v with color c gives an L-coloring of D, a contradiction.
The next corollary obtains a similar result when the criticality condition is dropped, but we insist that vertices whose out-degree is larger than their in-degree have an extra admissible color.
Corollary 3.2.7. Let D be a connected digraph, and L an assignment of colors to the vertices of D such that |L(v)| ≥ d−(v) if d+(v) ≤ d−(v) and |L(v)| ≥ d−(v) + 1 otherwise. Suppose that D is not L-colorable. Then D is Eulerian, and hence the conclusions of Theorem 3.2.1 hold.
Proof. We use induction on |A(D)|. If |A(D)| ≤3 and D is not Eulerian, then D is L-colorable for any choice of L. So, we may assume from now on that |A(D)| ≥4.
We first show that D is L-critical.
Let e = uv be an arc of D and suppose for a contradiction that D −uv is not L-colorable. Consider a component C of D −uv that is not L-colorable. By the induction hypothesis, we have that C is Eulerian and that conclusions of Theorem 3.2.1 hold.
If u ∈V (C) (say), then u is not an Eulerian vertex in D, so |L(u)| > d+ C(u), which contradicts the conclusions of Theorem 3.2.1 for C.
Now, suppose that D is not Eulerian. Since ∑_v d+(v) = ∑_v d−(v) = |A(D)|, there exists a vertex v such that d+(v) > d−(v). Then |L(v)| ≥ d−(v) + 1. Remove an arc e incident to v from D, and choose an L-coloring of D −e. Now, putting the edge e back, we see that we still have a color in L(v) not appearing on the in-neighborhood of v, allowing us to complete the coloring to an L-coloring of D, a contradiction.
The reader may wonder why we require an additional color for non-Eulerian vertices. As we shall see in the next section, the situation changes drastically if this were not the case.
3.3 Complexity of list coloring of digraphs with Brooks’ condition
It is natural to ask whether the condition of Corollary 3.2.7 can be relaxed to |L(v)| ≥ min{d+(v), d−(v)}. It turns out that the answer is negative even if the digraph is L-critical.
There is an example on four vertices; see Figure 3.2, where the numbers at the vertices indicate the corresponding lists of colors. Further examples of digraphs that are L-critical with |L(v)| ≥ min{d+(v), d−(v)} for every v ∈V (D), and yet do not admit a block decomposition described by Theorem 3.2.1, are not hard to construct, as shown by Figure 3.3. One can extend the construction in Figure 3.3 to get counterexamples of any order by subdividing any of the arcs.
Figure 3.2: An L-critical digraph with |L(v)| ≥ min{d+(v), d−(v)} that is not Eulerian.
Figure 3.3: Constructing an L-critical digraph with |L(v)| ≥ min{d+(v), d−(v)} of arbitrary order that is not Eulerian.
Not only are there many such examples, it turns out that the list coloring problem restricted to such a class of instances is NP-hard. This (surprising) fact and its proof is the subject of the remainder of this section.
Computational complexity of digraph colorings has been studied by several authors. We have the following complexity theorem for digraphs proven in Bokal et al. .
Theorem 3.3.1 (). Let D be a digraph. It is NP-complete to decide whether χ(D) ≤2.
Stronger results were obtained by Feder, Hell and Mohar . These results will be discussed in a later chapter.
We study the following problem.
Problem: List Coloring with Brooks’ Condition Instance: A digraph D, a list-assignment L such that for every vertex v ∈ V (D), |L(v)| = min{d+(v), d−(v)}.
Question: Is the digraph D L-colorable?
If we restrict the instances to planar graphs, we get the Planar List Coloring Prob-lem with Brooks’ Condition.
Theorem 3.3.2. The Planar List Coloring Problem with Brooks’ Condition is NP-complete.
For a polynomial time reduction, we shall use the following problem, which was proved to be NP-complete in .
Problem: Planar (≤3, 3)-Satisfiability Instance: A formula Φ in conjunctive normal form with a set C of clauses over a set X of boolean variables such that (1) each clause involves at most three distinct variables, (2) every variable occurs in exactly three clauses, once positive and twice negative, and (3) the graph GΦ = (X ∪C, {xc | x ∈X, x ∈c ∈C or ¬x ∈c ∈C}) is planar.
Question: Is Φ satisfiable?
Proof. Clearly, every list coloring problem is in NP since after guessing an L-coloring, one can check in polynomial time whether each color class induces an acyclic subdigraph using Breadth-First-Search.
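To make the membership in NP concrete, the following Python sketch (our own illustration; digraphs are given as vertex and arc sets) verifies a proposed L-coloring: every vertex must receive a color from its list, and every color class must induce an acyclic subdigraph, tested here with Kahn's BFS-style topological sort.

```python
from collections import defaultdict, deque

def is_acyclic(vertices, arcs):
    """Kahn's algorithm: True iff the digraph (vertices, arcs) has no directed cycle."""
    out = defaultdict(list)
    indeg = {v: 0 for v in vertices}
    for u, v in arcs:
        out[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in vertices if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for w in out[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(vertices)

def is_valid_L_coloring(vertices, arcs, lists, coloring):
    """Check that the coloring respects the lists and that every color class is acyclic."""
    if any(coloring[v] not in lists[v] for v in vertices):
        return False
    classes = defaultdict(set)
    for v in vertices:
        classes[coloring[v]].add(v)
    for cls in classes.values():
        induced = [(u, v) for (u, v) in arcs if u in cls and v in cls]
        if not is_acyclic(cls, induced):
            return False
    return True
```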
Let the formula Φ be an instance of Planar (≤3, 3)-Satisfiability. Note that G = GΦ is a bipartite graph with bipartition {X, C}. We create an instance of list coloring for digraphs as follows.
• Direct all the edges of G from X to C.
• For each x ∈X, we create a new vertex x′ and add the arcs x′x and c1x′, c2x′, where c1, c2 are the two clauses that contain ¬x.
• Add the arc c3x, where c3 is the clause containing the literal x.
• For every variable x ∈X, we define two colors, x and x̄.
For each x ∈X, set L(x) = {x, x̄}. For each c ∈C, we set L(c) = {x̄ | x ∈c} ∪ {x | ¬x ∈c}. Finally, let L(x′) = {x} for every x′.
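As a sketch of this construction (the data encoding, the vertex names such as x + "'" for x′, and the clause indexing below are our own choices, not part of the reduction as stated), the following Python fragment builds the digraph and the list-assignment from a formula given as a list of clauses.

```python
def build_instance(clauses):
    """Build the digraph (vertices, arcs) and lists from a CNF formula.

    `clauses` is a list of clauses; each clause is a list of pairs (var, positive),
    e.g. [('x', True), ('y', False)] stands for (x or not y).  Every variable is
    assumed to occur once positively and twice negatively, as in the problem.
    Colors are encoded as ('x', True) for x and ('x', False) for the barred color.
    """
    arcs, lists = set(), {}
    variables = {v for clause in clauses for (v, _) in clause}
    for i, clause in enumerate(clauses):
        c = ('clause', i)
        lists[c] = set()
        for (x, positive) in clause:
            arcs.add((x, c))                 # edge of G_Phi, directed from X to C
            if positive:
                lists[c].add((x, False))     # admissible color \bar{x}
                arcs.add((c, x))             # arc c3 -> x for the positive occurrence
            else:
                lists[c].add((x, True))      # admissible color x
                arcs.add((c, x + "'"))       # arc from a negative clause to x'
    for x in variables:
        lists[x] = {(x, True), (x, False)}
        lists[x + "'"] = {(x, True)}
        arcs.add((x + "'", x))               # arc x' -> x
    vertices = set(lists)
    return vertices, arcs, lists
```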
Let D be the resulting digraph. We first claim that D is planar. Note that the graph GΦ is assumed to be planar. Clearly, adding the arcs c3x, where c3 is the clause containing the literal x, preserve the planarity. All that remains to show is that the vertices x′ and their incident arcs can be added in a way as to preserve the planarity. But this is clear because we can add the vertices x′ one by one in the face defined by the vertices x, c1 and c2, where c1 and c2 are the clauses containing ¬x.
Next, we consider the sizes of the lists. Clearly, every x ∈X has out-degree 3 and in-degree 2 because x appears in three clauses, twice negative and once positive. Therefore, |L(x)| = min{d+(x), d−(x)}. For a given clause c ∈C, for every arc xc we have exactly one of the two arcs cx or cx′. Therefore, d+(c) = d−(c) = |L(c)|. Now, every x′ has in-degree 2 and out-degree 1, which implies that |L(x′)| = min{d+(x′), d−(x′)}. Therefore, all the list sizes match the minimum of the in- and out-degrees. Now, we claim that Φ is satisfiable if and only if D is L-colorable.
Suppose first that f is an L-coloring of D.
Define a truth assignment φ as follows: φ(x) = true if f(x) = x and φ(x) = false if f(x) = x̄. We need to show that every clause c is satisfied. If f(c) = x for some variable x, then ¬x ∈c. Also, f(x) ≠ x, for otherwise we would have a monochromatic directed triangle cx′x of color x. Therefore, f(x) = x̄, thus φ(x) = false, and hence c is satisfied. Similarly, if f(c) = x̄, then x ∈c. Further, f(x) = x, for otherwise we would have a monochromatic digon. Therefore, φ(x) = true and c is satisfied.
Conversely, let φ be a satisfying truth assignment. Define the following L-coloring f: f(x) = x if φ(x) = true, and f(x) = x̄ if φ(x) = false. For each clause c, choose a variable x which satisfies c and set f(c) = x if ¬x ∈c, and f(c) = x̄ if x ∈c. Clearly, f(x′) = x for all x′. To see that f is a coloring, consider an arc xc. We claim that f(x) ≠ f(c). Suppose f(x) = x (the other case is similar) and that ¬x ∈c. Since f(x) = x, φ(x) = true, which implies that the literal ¬x is false. Therefore, f(c) ≠ x. Thus, no arc from X to C is monochromatic. Since every directed cycle of D contains an arc from X to C (all arcs of D go from X to C, from C to X, from C to the vertices x′, or from a vertex x′ to X), no color class contains a directed cycle, so f is a coloring. This completes the proof.
We note that the above proof implies the following immediate corollary.
Corollary 3.3.3. List coloring of digraphs is NP-complete even if restricted to planar digraphs where each vertex v has d0(v) = min{d+(v), d−(v)} ≤3 and the list size for v is equal to d0(v).
Proof. Note that all the vertices v of the digraph D in the above proof satisfy the conditions d0(v) ≤3 and d0(v) = |L(v)|.
Next, we consider the problem where the list sizes of vertices with d+(v) > d−(v) have an additional color.
Problem: List Coloring With Relaxed Brooks’ Condition Instance: A digraph D, a list-assignment L such that for every vertex v ∈V (D) with d+(v) ≤d−(v), |L(v)| ≥d+(v), and for every vertex v with d+(v) > d−(v), we have |L(v)| ≥d−(v) + 1.
Question: Is the digraph D L-colorable?
Theorem 3.3.4. The problem List Coloring With Relaxed Brooks’ Condition can be solved in linear time O(|V (D)| + |A(D)|).
Proof. Note that it is sufficient to provide an algorithm for connected digraphs because we can then apply it to all the components. We first give an algorithm for the Eulerian instances of D, and then show that the general case can be reduced to the Eulerian case.
So suppose D is Eulerian. We will apply Theorem 3.2.1. If there exists a vertex v ∈V (D) such that |L(v)| > d+(v), then D is L-colorable by Theorem 3.2.1. So we may assume that |L(v)| = d+(v) for all v ∈V (D). We first find the blocks of D; this can be done in time O(|V (D)| + |A(D)|) using Depth-First-Search, see for example . By Theorem 3.2.1, if there exists a block of D that is not of type (a)–(c), then D is L-colorable. So we may assume that all blocks of D are of type (a), (b) or (c). Let B be a leaf block in the block-cutpoint tree of D. If B = D, then as mentioned in the proof of Theorem 3.2.1, D is not L-colorable if and only if all the lists of D are the same. This can be checked in linear time. Otherwise, let v ∈V (B) be the single cut-vertex in B. If there are two vertices u, w ∈V (B)\{v} with L(u) ≠ L(w), or there exists a vertex x ∈V (B)\{v} such that L(x) ⊈ L(v), then D is L-colorable by Theorem 3.2.1. Therefore, we may assume that for all u, w ∈V (B)\{v}, L(u) = L(w) and L(u) ⊆ L(v). In this case, it is easy to see that D is L-colorable if and only if D −(V (B)\{v}) is L′-colorable, where L′(v) = L(v)\L(u), for some u ∈V (B)\{v}, and L′(x) = L(x) for all x ∈V (D)\V (B). Thus, we can reduce the problem by deleting a leaf block B at each step using at most O(|V (B)| + |A(B)|) time, which results in O(|V (D)| + |A(D)|) overall time.
Next, suppose that D is not Eulerian. We give a linear time reduction to the Eulerian case. Since ∑_v d+(v) = ∑_v d−(v) = |A(D)|, there exists a vertex u such that d+(u) > d−(u). Consider D −u. We claim that D is L-colorable if and only if D −u is L-colorable.
Clearly, if D is L-colorable then D −u is L-colorable. Now, suppose D −u is L-colorable, and let f be such a coloring. Since d+(u) > d−(u), we have that there is a color in L(u) that does not appear in the in-neighborhood of u. By using such a color, we can complete the coloring of D −u to an L-coloring of D.
Repeating this reduction we will obtain a (possibly empty) digraph D∗ such that d+_{D∗}(v) = d−_{D∗}(v) for every v ∈V (D∗). Since d+(v) ≥ d+_{D∗}(v), it follows that |L(v)| ≥ d+_{D∗}(v) = d−_{D∗}(v).
Now, using the algorithm for the Eulerian case, we can decide whether each component of D∗is L-colorable. Then clearly D is L-colorable if and only if each component of D∗is L-colorable.
Keeping the list of vertices v with d+(v) > d−(v), and updating this list after every vertex removal, takes linear time overall. We only need to consider at most min{d+(v), d−(v)} + 1 colors at v, so when comparing the lists in the blocks we only need O(|V (D)| + |A(D)|) time. Thus, it takes O(|V (D)| + |A(D)|) time to reduce D to the Eulerian digraph D∗.
Since we need linear time to decide whether an Eulerian digraph is L-colorable, we have an O(|V (D)| + |A(D)|) algorithm.
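A minimal sketch of the peeling step used in this reduction (our own digraph representation; the argument above shows that with suitable degree bookkeeping the whole procedure runs in linear time, while the version below favors clarity over efficiency):

```python
def peel_to_eulerian(vertices, arcs):
    """Repeatedly delete vertices whose out-degree exceeds their in-degree.

    Returns the vertex set of the remaining digraph D*, in which every vertex
    satisfies d+(v) = d-(v).  As argued in the proof, D is L-colorable if and
    only if every component of D* is; a deleted vertex can always be re-colored
    greedily (in reverse order of deletion) with a color missing from its
    in-neighborhood.
    """
    remaining = set(vertices)
    arcs = set(arcs)
    def outdeg(v): return sum(1 for (a, b) in arcs if a == v)
    def indeg(v): return sum(1 for (a, b) in arcs if b == v)
    changed = True
    while changed:
        changed = False
        for v in list(remaining):
            if outdeg(v) > indeg(v):
                remaining.remove(v)
                arcs = {(a, b) for (a, b) in arcs if a != v and b != v}
                changed = True
    return remaining
```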
Chapter 4
Brooks’ Theorem for Digraphs of Girth Three
4.1 Introduction
Brooks’ Theorem states that if G is a connected graph with maximum degree ∆, then χ(G) ≤ ∆ + 1, where equality is attained only for odd cycles and complete graphs. The presence of triangles has a significant influence on the chromatic number of a graph. A result of Johansson states that if G is triangle-free, then χ(G) = O(∆/ log ∆). In this chapter, we show that Brooks’ Theorem for digraphs can also be improved when we forbid directed cycles of length 2.
Digraph colorings and the Brooks Theorem
Recall that for digraphs, a version of Brooks’ theorem was proved in . Here, a digraph D is k-critical if χ(D) = k, and χ(H) < k for every proper subdigraph H of D.
Theorem 4.1.1 (). Suppose that D is a k-critical digraph in which for every vertex v ∈V (D), d+(v) = d−(v) = k −1. Then one of the following cases occurs: 1. k = 2 and D is a directed cycle of length n ≥2.
2. k = 3 and D is a bidirected cycle of odd length n ≥3.
3. D is a bidirected complete graph of order k ≥4.
A tight upper bound on the chromatic number of a digraph was first given by Neumann-Lara .
Theorem 4.1.2 (). Let D be a digraph and denote by ∆o and ∆i the maximum out-degree and in-degree of D, respectively. Then χ(D) ≤min{∆o, ∆i} + 1.
In this chapter, we study improvements of this result using the following substitute for the maximum degree. If D is a digraph, we let ˜∆ = ˜∆(D) = max{√(d+(v) d−(v)) | v ∈V (D)} be the maximum geometric mean of the in-degree and out-degree of the vertices. Observe that ˜∆ ≤ (∆o + ∆i)/2, by the arithmetic-geometric mean inequality (where ∆o and ∆i are as in Theorem 4.1.2). We show that when ˜∆ is large (roughly ˜∆ ≥ 10^10), then every digraph D without digons has χ(D) ≤ α˜∆, for some absolute constant α < 1. We do not make an attempt to optimize α, but show that α = 1 − e^{−13} suffices. To improve the value of α significantly, a new approach may be required.
It may be true that the following analog of Johansson’s result holds for digon-free digraphs, as conjectured by McDiarmid and Mohar .
Conjecture 4.1.3 (). Every digraph D without digons has χ(D) = O(˜∆ / log ˜∆).
If true, this result would be asymptotically best possible in view of the chromatic number of random tournaments of order n, whose chromatic number is Ω(n / log n) and which have ˜∆ > (1/2 − o(1)) n, as shown by Erdős et al. .
We also believe that the following conjecture of Reed generalizes to digraphs without digons.
Conjecture 4.1.4 (). Let ∆ be the maximum degree of (an undirected) graph G, and let ω be the size of the largest clique. Then χ(G) ≤ ⌈(∆ + 1 + ω)/2⌉.
If we define ω = 1 for digraphs without digons, we can pose the following conjecture for digraphs. Note that a digraph is ∆-regular if d+(v) = d−(v) = ∆for every vertex v.
Conjecture 4.1.5. Let D be a ∆-regular digraph without digons. Then χ(D) ≤ ∆/2 + 1.
Conjecture 4.1.5 is trivial for ∆= 1, and follows from Lemma 4.3.2 for ∆= 2, 3. We believe that the conjecture is also true for non-regular digraphs with ∆replaced by ˜ ∆.
The rest of the chapter is organized as follows. In Section 4.2, we improve Brooks’ bound for digraphs that have sufficiently large degrees. In Section 4.3, we consider the problem for arbitrary degrees.
4.2 Strengthening Brooks’ Theorem for large ˜∆
The main result in this section is the following theorem.
Theorem 4.2.1. There is an absolute constant ∆1 such that every digon-free digraph D with ˜∆ = ˜∆(D) ≥ ∆1 has χ(D) ≤ (1 − e^{−13}) ˜∆.
The rest of this section is the proof of Theorem 4.2.1. The proof is a modification of an argument found in Molloy and Reed for usual coloring of undirected graphs. We first note the following simple lemma.
Lemma 4.2.2. Let D be a digraph with maximum out-degree ∆o, and suppose we have a partial proper coloring of D with at most ∆o + 1 − r colors. Suppose that for every uncolored vertex v there are at least r colors that appear on vertices in N+(v) at least twice. Then D is (∆o + 1 − r)-colorable.
Proof. The proof is easy: since many colors are repeated on the out-neighborhood of v, there are many colors that are not used on N+(v). In particular, there are at most ∆o − r distinct colors appearing on N+(v). Thus, one can “greedily” extend the partial coloring.
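A minimal sketch of this greedy extension (our own code and names; the palette, the adjacency structure, and the repetition hypothesis of Lemma 4.2.2 are assumed to be given):

```python
def extend_greedily(uncolored, out_neighbors, coloring, colors):
    """Greedily extend a partial coloring, in the spirit of Lemma 4.2.2.

    `colors` is a palette of size Delta_o + 1 - r.  We assume that for every
    uncolored vertex v at least r colors are repeated on N+(v), so at most
    Delta_o - r distinct colors appear there and a free color always exists.
    Coloring v with a color absent from N+(v) cannot complete a monochromatic
    directed cycle: the last-colored vertex of such a cycle would already have
    an out-neighbor of its own color at the moment it was colored.
    """
    for v in uncolored:
        used = {coloring[u] for u in out_neighbors[v] if u in coloring}
        coloring[v] = next(c for c in colors if c not in used)
    return coloring
```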
Proof of Theorem 4.2.1. We may assume that c1˜∆ < d+(v) < c2˜∆ and c1˜∆ < d−(v) < c2˜∆ for each v ∈V (D), where c1 = 1 − (1/3)e^{−11} and c2 = 1 + (1/3)e^{−11}. If not, we remove all the vertices v not satisfying the above inequality and obtain a coloring for the remaining graph with (1 − e^{−13})˜∆ colors. Now, if a vertex does not satisfy the above condition, then either one of d+(v) or d−(v) is at most c1˜∆, or one of d+(v) or d−(v) is at most (1/c2)˜∆.
Note that 1 − e^{−13} > max{c1, 1/c2}. This ensures that there is a color that either does not appear in the in-neighborhood or does not appear in the out-neighborhood of v, allowing us to complete the coloring.
The core of the proof is probabilistic, and we refer the reader to Appendix A for all the probabilistic tools used in the sequel. We color the vertices of D randomly with C colors, C = ⌊˜ ∆/2⌋. That is, for each vertex v we assign v a color from {1, 2, ..., C} uniformly at random. After the random coloring, we uncolor all the vertices that are in a monochromatic directed path of length at least 2. Clearly, this results in a proper partial coloring of D since D has no digons. For each vertex v, we are interested in the number of colors which are assigned to at least two out-neighbors of v and are retained by at least two of these vertices.
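The two-phase procedure is easy to simulate. The sketch below (our own representation: arcs as ordered pairs, `num_colors` playing the role of C) performs the random coloring and then uncolors every vertex lying on a monochromatic directed path with two arcs.

```python
import random

def random_partial_coloring(vertices, arcs, num_colors, seed=0):
    """Phase 1: color every vertex uniformly at random from {0, ..., num_colors-1}.
       Phase 2: uncolor every vertex lying on a monochromatic directed path with two arcs.
       Returns the resulting partial coloring (a dict; uncolored vertices are absent)."""
    rng = random.Random(seed)
    col = {v: rng.randrange(num_colors) for v in vertices}
    mono_out = {}  # v -> out-neighbors of v having the same color as v
    for (u, v) in arcs:
        if col[u] == col[v]:
            mono_out.setdefault(u, []).append(v)
    doomed = set()
    for u, ws in mono_out.items():
        for w in ws:
            for z in mono_out.get(w, []):
                # u -> w -> z is a monochromatic directed path of length 2
                doomed.update((u, w, z))
    return {v: c for v, c in col.items() if v not in doomed}
```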
For analysis, it is better to define a slightly simpler random variable. Let v ∈V (D). For each color i, 1 ≤i ≤C, let Oi be the set of out-neighbors of v that have color i assigned to them in the first phase. Let Xv be the number of colors i for which |Oi| ≥2 and such that all vertices in Oi retain their color after the uncoloring process.
For every vertex v, we let Av be the event that Xv is less than (1/2)e^{−11}˜∆ + 1. We will show that with positive probability none of the events Av occur. Then Lemma 4.2.2 will imply that χ(D) ≤ (c2 − (1/2)e^{−11})˜∆ ≤ (1 − e^{−13})˜∆, finishing the proof. We will use the symmetric version of the Lovász Local Lemma (see Appendix, Theorem A.3.2). Note that the color assigned initially to a vertex u can affect Xv only if u and v are joined by a path of length at most 3. Thus, Av is mutually independent of all except at most (2c2˜∆) + (2c2˜∆)² + (2c2˜∆)³ + (2c2˜∆)⁴ + (2c2˜∆)⁵ + (2c2˜∆)⁶ ≤ 100˜∆⁶ other events Aw. Therefore, by the symmetric version of the Local Lemma, it suffices to show that for each event Av, 4 · 100˜∆⁶ · P[Av] < 1. We will show that P[Av] < ˜∆^{−7}. We do this by proving the following two lemmas.
Lemma 4.2.3. E[Xv] ≥ e^{−11}˜∆ − 1.
Proof. Let X′_v be the random variable denoting the number of colors that are assigned to exactly two out-neighbors of v and are retained by both of these vertices. Clearly, Xv ≥ X′_v, and therefore it suffices to consider E[X′_v].
Note that color i will be counted by X′_v if two vertices u, w ∈ N+(v) are colored i and no other vertex in S = N(u) ∪ N+(v) ∪ N(w) is assigned color i. This will give us a lower bound on E[X′_v]. There are C choices for color i and at least (c1˜∆ choose 2) choices for the set {u, w}. The probability that no vertex in S gets color i is at least (1 − 1/C)^{|S|} ≥ (1 − 1/C)^{5c2˜∆}. Therefore, by linearity of expectation, and using the inequality (1 − p)^n > e^{−pn−p}, we can estimate:
E[X′_v] ≥ C · (c1˜∆ choose 2) · (1/C)² · (1 − 1/C)^{5c2˜∆} ≥ c1(c1˜∆ − 1) exp(−5c2˜∆/C − 1/C) ≥ ˜∆/e^{11} − 1
for ˜∆ sufficiently large.
Lemma 4.2.4. P[ |Xv − E[Xv]| > (log ˜∆)√(E[Xv]) ] < ˜∆^{−7}.
Proof. Let ATv be the random variable counting the number of colors assigned to at least two out-neighbors of v, and Delv the random variable that counts the number of colors assigned to at least two out-neighbors of v but removed from at least one of them. Clearly, Xv = ATv − Delv, and therefore it suffices to show that each of ATv and Delv is sufficiently concentrated around its mean. We will show that for t = (1/2)(log ˜∆)√(E[Xv]) the following estimates hold: Claim 1: P[ |ATv − E[ATv]| > t ] < 2e^{−t²/(8˜∆)}.
Claim 2: P[ |Delv − E[Delv]| > t ] < 4e^{−t²/(100˜∆)}.
The two above inequalities yield that, for ˜∆ sufficiently large, P[ |Xv − E[Xv]| > (log ˜∆)√(E[Xv]) ] ≤ 2e^{−t²/(8˜∆)} + 4e^{−t²/(100˜∆)} ≤ ˜∆^{−log ˜∆} < ˜∆^{−7}, as we require. So, it remains to establish both claims.
To prove Claim 1, we use a version of Azuma’s inequality found in , called the Simple Concentration Bound (see Appendix A, Theorem A.4.2).
Theorem 4.2.5 (Simple Concentration Bound). Let X be a random variable determined by n independent trials T1, ..., Tn, and satisfying the property that changing the outcome of any single trial can affect X by at most c. Then P[ |X − E[X]| > t ] ≤ 2e^{−t²/(2c²n)}.
Note that ATv depends only on the colors assigned to the out-neighbors of v, and each such random choice can affect ATv by at most 1. Therefore, we can take c = 1 in the Simple Concentration Bound for X = ATv. Since the random color assignments are made independently over the vertices and since d+(v) ≤ c2˜∆, Claim 1 follows immediately.
For Claim 2, we use the following variant of Talagrand’s Inequality (see Appendix A, Theorem A.4.4).
Theorem 4.2.6 (Talagrand’s Inequality). Let X be a nonnegative random variable, not equal to 0, which is determined by n independent trials T1, . . . , Tn, and satisfies the following conditions for some c, r > 0: 1. Changing the outcome of any single trial can affect X by at most c.
2. For any s, if X ≥s, there are at most rs trials whose exposure certifies that X ≥s.
Then for any 0 ≤ λ ≤ E[X], P[ |X − E[X]| > λ + 60c√(r E[X]) ] ≤ 4e^{−λ²/(8c²r E[X])}.
We apply Talagrand’s inequality to the random variable Delv. Note that we can take c = 1 since any single random color assignment can affect Delv by at most 1. Now, suppose that Delv ≥ s. One can certify that Delv ≥ s by exposing, for each of the s colors i, two random color assignments in N+(v) that certify that at least two vertices got color i, and exposing at most two other color assignments which show that at least one vertex colored i lost its color. Therefore, Delv ≥ s can be certified by exposing 4s random choices, and hence we may take r = 4 in Talagrand’s inequality. Note that t = (1/2)(log ˜∆)√(E[Xv]) ≫ 60c√(r E[Delv]), since E[Xv] ≥ ˜∆/e^{11} − 1 and E[Delv] ≤ c2˜∆. Now, taking λ in Talagrand’s inequality to be λ = t/2, we obtain that P[ |Delv − E[Delv]| > t ] ≤ P[ |Delv − E[Delv]| > λ + 60c√(r E[Delv]) ].
Therefore, provided that λ ≤ E[Delv], Claim 2 is confirmed.
It is sufficient to show that E[Delv] = Ω(˜∆), since λ = O((log ˜∆)√˜∆). The probability that exactly two vertices in N+(v) are assigned a particular color c is at least (c1˜∆ choose 2) C^{−2} (1 − 1/C)^{c2˜∆} ≈ 2e^{−10}, a constant. It remains to show that the probability that at least one of these vertices loses its color is also (at least) a constant. We use Janson’s Inequality (see Appendix, Theorem A.2.1). Let u be one of the two vertices colored c. We only compute the probability that u gets uncolored. We may assume that the other vertex colored c is not a neighbor of u since this will only increase the probability. We show that with large probability there exists a monochromatic directed path of length at least 2 starting at u.
Let Ω = N+(u) ∪ N++(u), where N++(u) is the second out-neighborhood of u. Each vertex in Ω is colored c with probability 2/˜∆. Enumerate all the directed paths of length 2 starting at u and let Pi be the ith path. Clearly, there are at least (c1˜∆)² such paths Pi. Let Ai be the set of vertices of Pi, and denote by Bi the event that all vertices in Ai receive the same color. Then, clearly P[Bi] = 1/⌊˜∆/2⌋² ≥ 4/˜∆². Then, µ = ∑ P[Bi] ≥ (4/˜∆²) · (c1˜∆)² = 4c1². Now, if δ = ∑_{i,j: Ai∩Aj≠∅} P[Bi ∩ Bj] in Janson’s Inequality satisfies δ < µ, then applying Janson’s Inequality, with the sets Ai and events Bi, we obtain that the probability that none of the events Bi occur is at most e^{−1}, and hence the probability that u does not retain its color is at least 1 − e^{−1}, as required. Now, assume that δ ≥ µ. The following gives an upper bound on δ: δ = ∑_{i,j: Ai∩Aj≠∅} P[Bi ∩ Bj] = ∑_{i,j: Ai∩Aj≠∅} 1/⌊˜∆/2⌋³ ≤ (c2˜∆)² · 2c2˜∆ · 8/(˜∆ − 2)³ < 32, for ˜∆ ≥ 100. Now, we apply Extended Janson’s Inequality (see Appendix, Theorem A.2.2).
This inequality now implies that the probability that none of the events Bi occur is at most e^{−c1²/4}, a constant. Therefore, by linearity of expectation E[Delv] = Ω(˜∆).
Clearly, since E[Xv] ≤ c2˜∆, Lemmas 4.2.3 and 4.2.4 imply that P[Av] < ˜∆^{−7}. This completes the proof of Theorem 4.2.1.
4.3 Brooks’ Theorem for small ˜∆
The bound in Theorem 4.2.1 is only useful for large ˜∆.
Rough estimates suggest that ˜∆ needs to be at least of the order of 10^10, and a more detailed analysis of the above approach is unlikely to improve this bound significantly. In this section, we improve Brooks’ Theorem for all values of ˜∆. We achieve this by using the result on list colorings found in Chapter 3.
Theorem 4.3.1 (). Let D be a connected digraph, and L an assignment of colors to the vertices of D such that |L(v)| ≥d+(v) if d+(v) = d−(v) and |L(v)| ≥min{d+(v), d−(v)}+1 otherwise. Suppose that D is not L-colorable. Then D is Eulerian, |L(v)| = d+(v) for each v ∈V (D), and every block of D is one of the following: (a) a directed cycle (possibly a digon), (b) an odd bidirected cycle, or (c) a bidirected complete digraph.
D is said to be k-choosable if D is L-colorable for every list-assignment L with |L(v)| ≥k for each v ∈V (D). We denote by χl(D) the smallest integer k for which D is k-choosable.
Now, we can state the next result of this section.
Lemma 4.3.2. Let D be a connected digraph without digons, and let ˜ ∆= ˜ ∆(D). If ˜ ∆> 1, then χl(D) ≤⌈˜ ∆⌉.
Proof. We apply Theorem 4.3.1 with all lists L(v), v ∈V (D) having cardinality ⌈˜ ∆⌉. It is clear that the conditions of Theorem 4.3.1 are satisfied for every Eulerian vertex v. It is easy to see that the conditions are also satisfied for non-Eulerian vertices. Now, if D is not L-colorable, then by Theorem 4.3.1, D is Eulerian and d+(v) = ⌈˜ ∆⌉for every vertex v.
This implies that D is ⌈˜ ∆⌉-regular. Now, the conclusion of Theorem 4.3.1 implies that D consists of a single block of type (a), (b) or (c). This means that either D is a directed cycle (and hence ˜ ∆= 1), or D contains a digon, a contradiction. This completes the proof.
We can now prove the main result of this section, which improves Brooks’ bound for all digraphs without digons.
Theorem 4.3.3. Let D be a connected digraph without digons, and let ˜ ∆= ˜ ∆(D).
If ˜ ∆> 1, then χ(D) ≤α( ˜ ∆+ 1) for some absolute constant α < 1.
Proof. We define α = max{∆1/(∆1 + 1), 1 − e^{−13}}, where ∆1 is the constant in the statement of Theorem 4.2.1. Now, if ˜∆ < ∆1 then by Lemma 4.3.2, it follows that χ(D) ≤ ⌈˜∆⌉ ≤ α(˜∆ + 1). If ˜∆ ≥ ∆1, then by Theorem 4.2.1 we obtain that χ(D) ≤ (1 − e^{−13})˜∆ ≤ α(˜∆ + 1), as required.
An interesting question to consider is the tightness of the bound of Lemma 4.3.2. It is easy to see that the bound is tight for ⌈˜∆⌉ = 2 by considering, for example, a directed cycle with an additional chord or a digraph consisting of two directed triangles sharing a common vertex. The graph in Figure 4.1 shows that the bound is also tight for ⌈˜∆⌉ = 3. It is easy to verify that, up to symmetry, the coloring outlined in the figure is the unique 2-coloring.
Now, adding an additional vertex, whose three out-neighbors are the vertices of the middle triangle and the three in-neighbors are the remaining vertices, we obtain a 3-regular digraph where three colors are required to complete the coloring.
Another example of a digon-free 3-regular digraph on 7 vertices requiring three colors is the following. Take the Fano plane and label its points by 1, 2, ..., 7. For every line of the Fano plane containing points a, b, c, take a directed cycle through a, b, c (with either orientation). There is a unique directed 3-cycle through any two vertices because every two points lie on exactly one line. This shows that the Fano plane digraphs are not isomorphic to the digraph from the previous paragraph. Finally, it is easy to verify that the resulting digraph needs three colors.
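The Fano-plane example is small enough to generate and check by brute force. The sketch below (our own choice of the seven lines and of the orientation of each 3-cycle) builds one such digraph and verifies that it is 3-regular, digon-free, and not 2-colorable.

```python
from itertools import product

LINES = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6)]
ARCS = {(a, b) for line in LINES for a, b in zip(line, line[1:] + line[:1])}

def acyclic(vs, arcs):
    """True iff the subdigraph induced by vs has no directed cycle."""
    vs = set(vs)
    arcs = {(u, v) for (u, v) in arcs if u in vs and v in vs}
    while vs:
        no_in = {v for v in vs if not any(b == v for (_, b) in arcs)}
        if not no_in:
            return False  # every remaining vertex has an in-arc: a directed cycle exists
        vs -= no_in
        arcs = {(u, v) for (u, v) in arcs if u in vs and v in vs}
    return True

assert all(sum(1 for (u, _) in ARCS if u == v) == 3 for v in range(1, 8))  # 3-regular
assert not any((v, u) in ARCS for (u, v) in ARCS)                           # digon-free
two_colorable = any(
    acyclic([v for v in range(1, 8) if part[v - 1] == 0], ARCS)
    and acyclic([v for v in range(1, 8) if part[v - 1] == 1], ARCS)
    for part in product((0, 1), repeat=7)
)
print("2-colorable:", two_colorable)  # expected: False, so three colors are needed
```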
Figure 4.1: Constructing a 3-regular digraph D with χ(D) = 3.
Figure 4.2: Constructing a 3-chromatic 3-regular digraph from the Fano plane.
Note that the digraphs in the above examples are 3-regular tournaments on 7 vertices. It is not hard to check that every tournament on 9 vertices has ⌈˜∆⌉ = 4, and yet is 3-colorable (simply choose three vertices that do not induce a directed triangle and color them with the same color; the remaining 6 vertices can be 2-colored). In general, we pose the following problem.
Question 4.3.4. What is the smallest integer ∆0 such that every digraph D without digons with ⌈˜∆(D)⌉ = ∆0 satisfies χ(D) ≤ ∆0 − 1?
Note that this is a weak version of Conjecture 4.1.5.
By Theorem 4.2.1, ∆0 exists.
However, we believe that ∆0 is small, possibly equal to 4. The following proposition shows that the above holds for every ⌈˜ ∆⌉≥∆0.
Proposition 4.3.5. Let ∆0 be defined as in Question 4.3.4. Then every digon-free digraph D with ⌈˜ ∆(D)⌉≥∆0 satisfies χ(D) ≤⌈˜ ∆(D)⌉−1.
Proof. The proof is by induction on ⌈˜ ∆⌉. If ⌈˜ ∆⌉= ∆0 this holds by the definition of ∆0.
Otherwise, let U be a maximal acyclic subset of D. Then ⌈˜ ∆(D −U)⌉≤⌈˜ ∆(D)⌉−1 for otherwise U is not maximal. Since we can color U by a single color, we can apply the induction hypothesis to complete the proof.
As a corollary we get: Corollary 4.3.6. There exists a positive constant α < 1 such that for every digon-free digraph D with ⌈˜ ∆(D)⌉≥∆0, χ(D) ≤α⌈˜ ∆⌉.
Proof. Let α = max{⌈∆1⌉/(⌈∆1⌉ + 1), 1 − e^{−13}}, where ∆1 is the constant in the statement of Theorem 4.2.1. Now, applying Theorem 4.2.1 or Proposition 4.3.5 gives the result.
Chapter 5
Non-locality of the digraph chromatic number
5.1 Introduction
In this chapter we prove, using a standard probabilistic approach, that two further analogues of graph coloring results carry over to digraphs. The first result provides evidence that the digraph chromatic number, like the graph chromatic number, is a global parameter that cannot be deduced from local considerations. The second result, see Theorem 5.3.1, shows that there are digraphs with large chromatic number k in which every set of at most c|V (D)| vertices is 2-colorable, where c > 0 is a constant that only depends on k. The analogous result for graphs was proved by Erdős with the assumption being that all sets of at most cn vertices are 3-colorable. Both the 3-colorability in Erdős’ result and the 2-colorability in Theorem 5.3.1 are best possible.
Concerning the first result, it is well-known that there exist graphs with large girth and large chromatic number. Bollobás and, independently, Kostochka and Mazurova proved that there exist graphs of maximum degree at most ∆ and of arbitrarily large girth whose chromatic number is Ω(∆/ log ∆). We present a theorem (Theorem 5.2.1) that provides an extension to digraphs.
The bound of Ω(∆/ log ∆) from [9, 40] is essentially best possible: a result of Johansson shows that if G is triangle-free, then the chromatic number is O(∆/ log ∆). Similarly, Theorem 5.3.1 is also essentially best possible: we showed that every tournament on n vertices has chromatic number at most (n/ log n)(1 + o(1)).
In general, it may be true that the following analog of Johansson’s result holds for digon-free digraphs, as conjectured by McDiarmid and Mohar .
Conjecture 5.1.1. Every digraph D without digons and with maximum total degree ∆ has χ(D) = O(∆/ log ∆).
Theorem 5.2.1 shows that Conjecture 5.1.1, if true, is essentially best possible.
5.2 Chromatic number and girth
First, we need some basic definitions. The total degree of a vertex v is the number of arcs incident to v. The maximum total degree of D, denoted by ∆(D), is the maximum of all total degrees of vertices in D.
It is proved in that there are digraphs of arbitrarily large digirth and chromatic number. Our result is an analogue of the aforementioned result of Bollobás and Kostochka and Mazurova . Note that the result involves the girth and not the digirth.
Theorem 5.2.1. Let g and ∆ be positive integers. There exists a digraph D of girth at least g, with ∆(D) ≤ ∆, and χ(D) ≥ a∆/ log ∆ for some absolute constant a > 0. For ∆ sufficiently large we may take a = 1/(5e).
Proof. Our proof is in the spirit of Bollobás . We may assume that ∆ is sufficiently large.
Let D = D(n, p) be a random digraph of order n defined as follows. For every u, v ∈ V (D), we connect uv with probability 2p, independently. Now we randomly (with probability 1/2) assign an orientation to every edge that is present. Observe that D has no digons.
We will use the value p = ∆/(4en), where e is the base of the natural logarithm.
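A sketch of this random model (our own code; each unordered pair becomes an edge with probability 2p and is then oriented uniformly at random, so no digons arise):

```python
import random

def random_digraph(n, p, seed=0):
    """Sample D(n, p): each unordered pair becomes an edge with probability 2p,
    and each present edge is then oriented uniformly at random."""
    rng = random.Random(seed)
    arcs = set()
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < 2 * p:
                arcs.add((u, v) if rng.random() < 0.5 else (v, u))
    return arcs
```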
Claim 5.2.2. D has no more than ∆^g cycles of length less than g with probability at least 1 − 1/∆.
Proof. Let Nl be the number of cycles of length l in D. Then, by linearity of expectation, E[Nl] ≤ (n choose l) · l! · (2p)^l ≤ n^l (2p)^l ≤ (∆/4)^l.
Therefore, the expected number of cycles of length less than g is at most ∆^{g−1}. So the probability that D has more than ∆^g cycles of length less than g is at most 1/∆ by Markov’s inequality (see Appendix A, Theorem A.1.2).
Claim 5.2.3. There is a set A of at most n/1000 vertices of D such that ∆(D − A) ≤ ∆ with probability at least 1/2.
Proof. As in , define the excess degree of D to be ex(D) = ∑_{di>∆} (di − ∆), where di is the total degree of the ith vertex. Clearly, there is a set of at most ex(D) arcs (or vertices) whose removal reduces the maximum total degree of D to ∆. Let Xd be the number of vertices of total degree d, d = 0, 1, ..., n − 1. Then ex(D) = ∑_{d=∆+1}^{n−1} (d − ∆) Xd.
Now, we estimate the expectation of Xd. By linearity of expectation, and using the bound (n choose k) ≤ (en/k)^k, we have: E[Xd] ≤ n (n−1 choose d) (2p)^d ≤ n (e(n − 1)/d)^d (∆/(2en))^d ≤ n (∆/(2d))^d.
Therefore, by linearity of expectation we have that, for ∆ sufficiently large,
E[ex(D)] ≤ ∑_{d=∆+1}^{n−1} n d (∆/(2d))^d ≤ (n∆/2) ∑_{d=∆+1}^{n−1} (∆/(2d))^{d−1} ≤ (n∆/2) ∑_{d=∆+1}^{n−1} (1/2)^{d−1} ≤ (n∆/2) · (1/2)^∆ / (1 − 1/2) = n · ∆/2^∆ ≤ n/2000.
Now, by Markov’s inequality, P[ex(D) > n/1000] < 1/2.
Let α(D) be the size of a maximum acyclic set of vertices in D. The following result will be used in the proof of our next claim and also in Section 5.3.
Theorem 5.2.4 (). Let D ∈ D(n, p) and w = np. There is an absolute constant W such that: if p satisfies w ≥ W, then, asymptotically almost surely (in n), α(D) ≤ (2/ log q)(log w + 3e), where q = (1 − p)^{−1}.
Claim 5.2.5. Let α(D) be the size of a maximum acyclic set of vertices in D.
Then α(D) ≤ 4en log ∆/∆ with high probability, where the asymptotics are in terms of n.
Proof. Since ∆ is sufficiently large, Theorem 5.2.4 applies, and using the fact that 1 − p ≤ e^{−p}, the result follows.
Now, pick a digraph D that satisfies the three claims. After removing at most n/1000 + ∆^g ≤ n/100 vertices, the resulting digraph D∗ has maximum degree at most ∆ and girth at least g. Clearly, α(D∗) ≤ α(D). Therefore, χ(D∗) ≥ n(1 − 1/100) / (4en log ∆/∆) ≥ ∆/(5e log ∆).
5.3 Local 2-colorings and the chromatic number
A result of Erdős states that there exist graphs of large chromatic number where the induced subgraph on any constant fraction of the vertices is 3-colorable. In particular, it is proved that for every k there exists ϵ > 0 such that for all n sufficiently large there exists a graph G of order n with χ(G) > k and yet χ(G[S]) ≤ 3 for every S ⊂ V (G) with |S| ≤ ϵn.
The 3-colorability in the aforementioned theorem cannot be improved. A result of Kierstead, Szemerédi and Trotter (with later improvements by Nilli and Jiang ) shows that every 4-chromatic graph of order n contains an odd cycle of length at most 8√n.
We prove the following analog for digraphs. Our proof follows the proof of the result of Erdős found in .
Theorem 5.3.1. For every k, there exists ϵ > 0 such that for every sufficiently large integer n there exists a digraph D of order n with χ(D) > k and yet χ(D[S]) ≤2 for every S ⊂V (D) with |S| ≤ϵn.
Proof. Clearly, we may assume that log k ≥ 3 and k ≥ √W, where W is the constant in Theorem 5.2.4.
Let us consider the random digraph D = D(n, p) with p = k²/n, and let 0 < ϵ < k^{−5}.
We first show that χ(D) > k with high probability. Since k is sufficiently large, Theorem 5.2.4 implies that α(D) ≤ 6n log k/k² with high probability. Therefore, almost surely χ(D) ≥ (1/6)k²/ log k > k.
Now, we show that with high probability every set of at most ϵn vertices can be colored with at most two colors. Suppose there exists a set S with |S| ≤ ϵn such that χ(D[S]) ≥ 3.
Let T ⊂ S be a 3-critical subset, i.e., for every v ∈ T, χ(D[T] − v) ≤ 2. Let t = |T|. For every v ∈ T, min{d+_{D[T]}(v), d−_{D[T]}(v)} ≥ 2, for otherwise a 2-coloring of D[T] − v could be extended to D[T]. Therefore, every vertex in T has total degree at least 4 in D[T], which implies that D[T] has at least 2t arcs. The probability of this is at most
∑_{3≤t≤ϵn} (n choose t) (2(t choose 2) choose 2t) (k²/n)^{2t} ≤ ∑_{3≤t≤ϵn} (en/t)^t (e t(t−1)/(2t))^{2t} (k²/n)^{2t} ≤ ∑_{3≤t≤ϵn} (e³tk⁴/(4n))^t ≤ ϵn · max_{3≤t≤ϵn} (7tk⁴/n)^t.   (5.1)
If 3 ≤ t ≤ log² n, then (7tk⁴/n)^t ≤ (7(log² n)k⁴/n)^t ≤ (7(log² n)k⁴/n)³ = o(1/n).
Similarly, if log² n ≤ t ≤ ϵn, then (7tk⁴/n)^t ≤ (7ϵk⁴)^t ≤ (7/k)^t ≤ (7/k)^{log² n} = o(1/n).
These estimates and (5.1) imply that the probability that some set S with |S| ≤ ϵn has χ(D[S]) ≥ 3 is o(1). This completes the proof.
The 2-colorability in the previous theorem cannot be decreased to 1, as the following theorem shows.
Theorem 5.3.2. If D is a digraph with χ(D) ≥3 and of order n, then it contains a directed cycle of length o(n).
Proof. In the proof we shall use the following digraph analogue of the Erdős–Pósa theorem.
Reed et al. proved that for every integer t, there exists an integer f(t) so that every digraph either has t vertex-disjoint directed cycles or a set of at most f(t) vertices whose removal makes the digraph acyclic.
Define h(n) = max{t : tf(t) ≤n}. It is clear that h(n) →∞. Let c be the length of a shortest directed cycle in D.
If D has h(n) vertex-disjoint directed cycles, then c·h(n) ≤ n, which implies that c ≤ n/h(n) = o(n).
Otherwise, suppose that h(n) = t.
There exists a set S of vertices with |S| = f(t) such that V (D)\S is acyclic. Since χ(D) ≥ 3, we have that χ(D[S]) ≥ 2, which implies that S contains a directed cycle of length at most |S| = f(t) ≤ n/t = n/h(n) = o(n).
Chapter 6
Acyclic Homomorphisms
6.1 Introduction
In this chapter, we study a generalization of the digraph chromatic number. All the new results discussed here can be found in . The main result of this chapter is Theorem 6.2.3, which can be found, along with an alternate proof, in . For undirected graphs, a natural generalization of coloring is the homomorphism of graphs. Given graphs G and H, a homomorphism from G to H is a function φ : V (G) → V (H) such that for every uv ∈ E(G), φ(u)φ(v) ∈ E(H). It is well-known (and easy to see) that a graph G is r-colorable if and only if there exists a homomorphism from G to the complete graph Kr. In general, we say that G is H-colorable if there is a homomorphism from G to H. Graph homomorphisms have been studied extensively in the literature and we refer the reader to .
One can generalize the notion of the digraph chromatic number. In a similar fashion, our digraphs are simple, i.e. loopless and without multiple arcs. However, we allow two vertices u, v to be joined by two oppositely directed arcs, uv and vu.
An acyclic homomorphism of a digraph D into a digraph C is a function φ: V (D) → V (C) such that: (i) for every vertex v ∈V (C), the subdigraph of D induced by φ−1(v) is acyclic; (ii) for every arc uv ∈E(D), either φ(u) = φ(v), or φ(u)φ(v) is an arc of C.
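Conditions (i) and (ii) are easy to check mechanically. The following Python sketch (our own representation: digraphs as arc sets, the map φ as a dictionary) verifies that a given map is an acyclic homomorphism.

```python
def is_acyclic_homomorphism(arcs_D, arcs_C, phi):
    """Check conditions (i) and (ii) of an acyclic homomorphism phi: V(D) -> V(C)."""
    # (ii) every arc of D is either collapsed (same image) or mapped onto an arc of C
    for (u, v) in arcs_D:
        if phi[u] != phi[v] and (phi[u], phi[v]) not in arcs_C:
            return False
    # (i) each fibre phi^{-1}(w) induces an acyclic subdigraph of D
    for w in set(phi.values()):
        fibre = {x for x in phi if phi[x] == w}
        arcs = {(u, v) for (u, v) in arcs_D if u in fibre and v in fibre}
        while fibre:
            no_in = {x for x in fibre if not any(b == x for (_, b) in arcs)}
            if not no_in:
                return False  # every vertex of the fibre has an in-arc: directed cycle
            fibre -= no_in
            arcs = {(u, v) for (u, v) in arcs if u in fibre and v in fibre}
    return True
```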
If digraphs C and D are obtained from undirected graphs G and H, respectively, by replacing every edge by two oppositely directed arcs, then acyclic homomorphisms between C and D correspond to usual graph homomorphisms between G and H.
In this sense, acyclic homomorphisms can be viewed as a generalization of the notion of homomorphisms of undirected graphs.
In the same way as usual graph homomorphisms generalize the notion of graph colorings, the acyclic homomorphisms generalize colorings of digraphs, where complete graphs are replaced by complete bidirected graphs. Motivated by this, we say that a digraph D is C-colorable if there is an acyclic homomorphism from D to C.
Acyclic homomorphisms were introduced in . The authors studied the complexity of D-coloring. They proved the following theorems.
Theorem 6.1.1 (). Let D be a digraph that contains a directed cycle. Then the acyclic D-coloring problem is NP-complete.
Let C3 be the directed triangle. Then the above theorem can be strengthened.
Theorem 6.1.2 (). The acyclic C3-coloring problem is NP-complete even when restricted to planar digraphs.
Let C2 be the directed two cycle, i.e., the digon. It is easy to see that for a digraph D, χ(D) ≤2 if and only if D is C2-colorable. We mentioned previously that deciding 2-colorability for general digraphs is NP-complete. The next result strengthens this theorem.
Theorem 6.1.3 (). The acyclic C2-coloring problem is NP-complete even when restricted to planar digraphs.
6.2 D-colorable digraphs of large girth
A classical result of Erdős asserts that for all integers k and g there exist graphs with chromatic number k and with girth at least g. Bollobás and Sauer strengthened this result by showing that there are such graphs which are, moreover, uniquely k-colorable. Zhu extended Bollobás and Sauer’s result to homomorphisms into general graphs. Rather recently, the results of have been extended by Nešetřil and Zhu to give a simultaneous generalization of Zhu’s two primary results. The results of this chapter extend these theorems to digraphs with acyclic homomorphisms.
Zhu generalized Erdős' result as follows.
Theorem 6.2.1 (). If G and H are graphs such that G is not H-colorable, then for every positive integer g, there exists a graph G∗ of girth at least g that is G-colorable but not H-colorable.
To recover Erdős' result, we simply take G = Kk and H = Kk−1.
For digraphs, the following analog of Erdős' theorem was proved by Bokal et al. .
Theorem 6.2.2 ( ). For every g ≥3 and k ≥1, there exists a digraph D with digirth at least g and χ(D) ≥k.
The proof of the above theorem is in the same vein as that of Erdős.
In fact, the method of proof yields a stronger result: the digirth in the statement of Theorem 6.2.2 can be replaced with girth. The purpose of this chapter is to extend Theorem 6.2.2 to acyclic homomorphisms. We will prove the following.
Theorem 6.2.3. If D and C are digraphs such that D is not C-colorable, then for any positive integer g, there exists a digraph D∗of girth at least g that is D-colorable but not C-colorable.
6.3 Proof of Theorem 6.2.3

This section is devoted to the proof of Theorem 6.2.3. Suppose that V(D) = {1, 2, . . . , k} and that q = |E(D)|.
Let n be a (large) positive integer, and let Dn be the digraph obtained from D as follows: replace every vertex i with a stable set Vi of n ordered vertices v1, v2, ..., vn, and replace each arc ij of D by the set of all possible n^2 arcs from Vi to Vj.
Clearly, |V(Dn)| = kn and |E(Dn)| = qn^2.
Now fix a positive ε < 1/(4g).
Our random digraph model D = D(Dn, p) consists of those spanning subdigraphs of Dn in which the arcs of Dn are chosen randomly and independently with probability p = n^{ε−1}.
As usual in nonconstructive probabilistic proofs of results of this nature, the idea is to show that most digraphs in D have only a few short cycles, and for most digraphs H ∈D, the subdigraph of H obtained by removing an arbitrary small set of arcs is not C-colorable.
Choosing an H ∈D with both these properties, we can force the girth to be large by deleting an arc from each short cycle. Since the set A0 of deleted arcs is small, the resulting digraph H −A0 satisfies the desired conclusion of Theorem 6.2.3.
To make this description more precise, let D1 denote the set of digraphs in D containing at most ⌈n^{gε}⌉ cycles of length less than g, and let D2 be the set of digraphs H ∈ D that have the property that H − A0 is not C-colorable for any set A0 of at most ⌈n^{gε}⌉ arcs. We will show that

|D1| > (1 − n^{−ε/2}) |D|  (6.1)

and

|D2| > (1 − e^{−n}) |D|.  (6.2)

Since (6.1) and (6.2) imply that D1 ∩ D2 ≠ ∅ (for sufficiently large n), there exists a digraph H ∈ D1 ∩ D2. Now H ∈ D1 implies that there is a set A0 of at most ⌈n^{gε}⌉ arcs whose removal leaves a digraph D∗ := H − A0 of girth at least g, while H ∈ D2 means that D∗ is not C-colorable. Thus, it remains to establish (6.1) and (6.2).
Proof of (6.1). The expected number N_ℓ of cycles of length ℓ in a digraph H ∈ D is at most

\binom{kn}{ℓ} (ℓ − 1)! p^ℓ  (6.3)

since there are \binom{kn}{ℓ} (ℓ − 1)! ways of choosing a cyclic sequence of ℓ vertices as a candidate for a cycle, and such an ℓ-cycle occurs in D with probability either 0 or p^ℓ. It is easy to see that the product of the first two factors in (6.3) is smaller than (kn)^ℓ/ℓ. Therefore, if n is large enough, then

∑_{ℓ=2}^{g−1} N_ℓ ≤ ∑_{ℓ=2}^{g−1} (1/ℓ)(kn^ε)^ℓ < k^{g−1} n^{(g−1)ε} < n^{−ε/2} n^{gε}.
Now (6.1) follows easily from Markov’s Inequality.
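Indeed, writing the Markov step out explicitly: the probability that a digraph H ∈ D contains more than ⌈n^{gε}⌉ cycles of length less than g is at most

(∑_{ℓ=2}^{g−1} N_ℓ) / n^{gε} < n^{−ε/2},

so the proportion of digraphs of D lying outside D1 is less than n^{−ε/2}, which is exactly (6.1).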
Proof of (6.2). We shall argue that |D ∖ D2| < e^{−n}|D|. If H ∈ D ∖ D2, then there is a set A0 of at most ⌈n^{gε}⌉ arcs of H so that H − A0 admits an acyclic homomorphism h to C. Let k′ = |V(C)|. By the pigeonhole principle, for each i ∈ V(D), there exists a vertex xi ∈ V(C) such that |Vi ∩ h^{−1}(xi)| ≥ n/k′. Define φ: V(D) → V(C) by setting φ(i) = xi.
Since n/k′ ≫ n^{gε}, the set Vi ∩ h^{−1}(xi) contains a subset Wi of cardinality w := ⌈n/(2k′)⌉ such that no arc in A0 has an end vertex in Wi.
Since D is not C-colorable, the function φ is not an acyclic homomorphism. Therefore, either there is an arc ij ∈ E(D) such that φ(i) ≠ φ(j) and φ(i)φ(j) is not an arc of C, or there is a v ∈ V(C) such that the subdigraph of D induced on φ^{−1}(v) contains a cycle.
We first consider the case when ij is an arc of D such that φ(i) ≠ φ(j) and φ(i)φ(j) is not an arc of C. Since h is an acyclic homomorphism, there are no arcs from Wi to Wj in H − A0. By the definition of Wi and Wj, neither are there such arcs in H.
Let us now estimate the expected number M of pairs of sets A ⊆ Vi, B ⊆ Vj, with |A| = |B| = w, such that ij ∈ E(D) and such that there is no arc from A to B in H ∈ D (we call such a pair A, B a bad pair). By the linearity of expectation, we have

M = q \binom{n}{w}^2 (1 − p)^{w^2} < q (n^w/w!)^2 (1 − p)^{w^2} = q (n^2(1 − p)^w)^w / (w!)^2.  (6.4)

Since w grows linearly with n, for sufficiently large n we have n^2(1 − p)^w < e^{−2k′} and q/(w!)^2 < 1/2.
Therefore Markov's Inequality and (6.4) yield

Pr(∃ a bad pair) < e^{−n}/2.  (6.5)

Suppose now that there is a v ∈ V(C) such that D contains a cycle Q whose vertices are all in φ^{−1}(v). Suppose that Q = i1 i2 · · · it. Observe that 2 ≤ t ≤ k. Since φ(Q) = {v}, we conclude that h(W_{i1}) = h(W_{i2}) = · · · = h(W_{it}) = {v}. Since h is an acyclic homomorphism, the subdigraph of H induced on W_{i1} ∪ W_{i2} ∪ · · · ∪ W_{it} is acyclic.
Let us consider all sequences of sets U_{j1}, U_{j2}, . . . , U_{jℓ} such that, for r = 1, 2, . . . , ℓ, we have U_{jr} ⊆ V_{jr} and |U_{jr}| = w, and the vertex sequence j1 j2 · · · jℓ is a cycle in D. Let U(ℓ) be the subdigraph of H induced on U_{j1} ∪ U_{j2} ∪ · · · ∪ U_{jℓ}, and let Pℓ := Pr(U(ℓ) is acyclic). We say that such a sequence is bad if U(ℓ) is acyclic. Since the expected number N of bad sequences is the sum of the corresponding expectations over all possible cycle lengths, we have

N ≤ ∑_{ℓ=2}^{k} \binom{k}{ℓ} (ℓ − 1)! \binom{n}{w}^ℓ Pℓ.  (6.6)

In order to bound N, we first bound the probabilities Pℓ.
Lemma 6.3.1. There exists a constant γ > 0 (not depending on n) such that Pℓ ≤ e^{−γ n^{1+ε}} for every integer ℓ ∈ {2, 3, . . . , k}.
The proof invokes the Janson Inequalities (see Appendix, Theorems A.2.1 and A.2.2).
Proof of Lemma 6.3.1.
We use the Janson Inequalities.
Here, Ω denotes the set of all potential arcs (in Dn, as defined at the start of Section 6.3) between the sets U_{ji}, i = 1, ..., ℓ (introduced just prior to our statement of Lemma 6.3.1); each arc in Ω appears with probability p.
Let s be a (large) multiple of ℓ; the value of s will be independent of n and specified below. Now, let us enumerate those cycles of Dn that are of length s, and that cyclically traverse U_{j1}, U_{j2}, ..., U_{jℓ} s/ℓ times. For j ≥ 1, denote by Sj the arc set of the jth such cycle and by Bj the event that the arcs in Sj all appear in H (i.e. the cycle determined by Sj is present in H). Let the random variable X count those Bj that occur. Since Pr(X = 0) (the probability that there is no such cycle of length s) is an upper bound for Pℓ (the chance that U(ℓ) is acyclic), we can bound Pℓ by bounding Pr(X = 0), and estimating the latter quantity is exactly the purpose of Janson's Inequalities. In the Janson paradigm, the value of ∆ is defined by

∆ = ∑_{Si ∼ Sj} Pr(Bi ∩ Bj),  (6.7)

where Si ∼ Sj if the two cycles determined by Si and Sj have at least one arc in common.
First, we find an upper bound for ∆. Letting i remain fixed, we (rather crudely) obtain

∆ ≤ n^s ∑_{j: Si∼Sj} Pr(Bi ∩ Bj),  (6.8)

since each |Ur| ≤ n and each |Si| = s. The sum on the right side satisfies

∑_{j: Si∼Sj} Pr(Bi ∩ Bj) ≤ ∑_{r=1}^{s−1} \binom{s}{r} p^{2s−r} w^{s−(r+1)}.  (6.9)

The binomial coefficient in (6.9) accounts for the number of ways to choose the arcs of Si ∩ Sj, the power of p is Pr(Bj | Bi) Pr(Bi), and the power of w reflects the facts that each U-set has cardinality w and, with i fixed, there are at most s − (r + 1) vertices in the Sj-cycle not already in the Si-cycle. Recalling that w = ⌈n/(2k′)⌉ (so that w < n), using the gross bound \binom{s}{r} < 2^s, and replacing p with n^{ε−1}, we find that

∑_{j: Si∼Sj} Pr(Bi ∩ Bj) < 2^s ∑_{r=1}^{s−1} p^{2s−r} n^{s−(r+1)} = 2^s ∑_{r=1}^{s−1} n^{2εs−s−rε−1} < 2^s s n^{2εs−s−ε−1}.
With (6.8), the last estimate yields

∆ < 2^s s n^{2εs−ε−1}.  (6.10)

Next, we find a lower bound for µ := E[X]. Since there are ℓ U-sets, each containing w vertices, and each ordered choice of s/ℓ vertices from each (up to the choice of the first vertex) contributes 1 to X with probability at least p^s, we have

µ ≥ (1/s) \binom{w}{s/ℓ}^ℓ [(s/ℓ)!]^ℓ p^s.

Therefore,

µ ≥ (1/s) [w!/(w − s/ℓ)!]^ℓ p^s ≥ (1/s) (w − s/ℓ)^s p^s ≥ (1/s) (n/(4k′))^s n^{εs−s} = n^{εs} / (s(4k′)^s).  (6.11)

We distinguish two cases.
Case 1: ∆ ≥ µ. Here, we have the hypotheses of the Extended Janson Inequality, which, along with our bounds (6.10) and (6.11), gives

Pr(X = 0) ≤ e^{−µ^2/(2∆)} < e^{−n^{1+ε}/(2s^3(32k′^2)^s)}.
Case 2: ∆ < µ. Now we have the hypotheses of the basic Janson Inequality, which together with (6.11) gives

Pr(X = 0) ≤ e^{−µ+∆/2} < e^{−µ/2} ≤ e^{−n^{εs}/(2s(4k′)^s)}.

Let s > 1 + (1 + ε)/ε be a multiple of ℓ. Then the last bound shows that

Pr(X = 0) ≤ e^{−n^{1+ε}(n^ε/(2s(4k′)^s))} ≤ e^{−n^{1+ε}}.
Since s and k′ are constants (not depending on n), in either case we see that Pℓ ≤ Pr(X = 0) ≤ e^{−γ n^{1+ε}} for some constant γ > 0. This gives us Lemma 6.3.1.
We return to our estimation of the expected number N of bad sequences in (6.6), repeated here for convenience:

N ≤ ∑_{ℓ=2}^{k} \binom{k}{ℓ} (ℓ − 1)! \binom{n}{w}^ℓ Pℓ.

Using Lemma 6.3.1 to bound the factors Pℓ in this sum shows that for n large enough,

N ≤ ∑_{ℓ=2}^{k} \binom{k}{ℓ} (ℓ − 1)! \binom{n}{w}^ℓ e^{−γ n^{1+ε}} < ∑_{ℓ=2}^{k} e^{−n}/(2k) < e^{−n}/2.  (6.12)

From (6.12) and Markov's Inequality, we conclude that

Pr(∃ a bad sequence) < e^{−n}/2.  (6.13)
Since φ fails to be an acyclic homomorphism exactly when there exists a bad pair or there exists a bad sequence, (6.5) and (6.13) now show that

|D ∖ D2| ≤ (Pr(∃ bad pair) + Pr(∃ bad sequence)) |D| < e^{−n} |D|,

which yields (6.2).
6.4 Uniquely D-colorable digraphs

The notion of colorability can be extended to unique colorability. A graph (digraph) G is uniquely H-colorable if it is surjectively H-colorable and, for any two colorings φ, ψ of G, there is an automorphism π of H such that φ = πψ. A graph (digraph) G is a core if it is uniquely G-colorable.
Theorem 6.2.3 has the following counterpart for unique colorability. The proof can be found in .
Theorem 6.4.1 (). For any core D and any positive integer g, there is a digraph D∗of girth at least g that is uniquely D-colorable.
Theorem 6.4.1 is a generalization of its graph analog proved by Zhu .
Theorem 6.4.2 (). For any graph H that is a core and any positive integer g, there is a graph H∗of girth at least g that is uniquely H-colorable.
Theorem 6.4.1 immediately applies to digraph circular colorings, as discussed in the next section.

6.5 Circular chromatic number of digraphs

Recall that there are digraphs with arbitrarily large digirth and chromatic number. In , the authors proved the following generalization to the circular chromatic number.
Theorem 6.5.1 (). There exist digraphs with arbitrarily large girth and arbitrarily large circular chromatic number.
Here we present a theorem that generalizes the above result.
Let d ≥1 and k ≥d be integers. Let C(k, d) be the digraph with vertex set Zk = {0, 1, . . . , k −1} and arcs E(C(k, d)) = {ij | j −i ∈{d, d + 1, . . . , k −1}}, where the subtraction is considered in the cyclic group Zk of integers modulo k.
Acyclic homomorphisms into C(k, d) are an important concept because of their relation to the circular chromatic number of digraphs; cf. . It is shown in that D is C(k, d)-colorable if and only if k/d ≥χc(D).
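For example, C(5, 2) has vertex set Z5 = {0, 1, 2, 3, 4} and an arc from each i to i + 2, i + 3 and i + 4 (mod 5), so every vertex has out-degree 3; by the statement above, a digraph D is C(5, 2)-colorable precisely when χc(D) ≤ 5/2. Note also that C(k, 1) is the bidirected complete digraph on k vertices, so C(k, 1)-colorability coincides with k-colorability.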
In , the authors show that Theorem 6.4.1 implies the following generalization of Theorem 6.5.1.
Theorem 6.5.2 (). If k and d are relatively prime integers and 1 ≤d ≤k, then for every integer g, there exists a uniquely C(k, d)-colorable digraph of girth at least g and with circular chromatic number equal to k/d.
Chapter 7

3-colorings of planar graphs

7.1 Introduction

In Chapter 2, we mentioned that every planar graph can be 3-colored so that each color class induces a forest and that this bound is sharp (see Chartrand et al. ). In this chapter, we show that there are in fact exponentially many 3-colorings of this kind for any planar graph. The same result holds in the setting of 3-list-colorings.
Let us recall that a partition of vertices of a graph G into classes V1∪· · ·∪Vk is an arboreal partition if each Vi (1 ≤i ≤k) induces a forest in G. A function f : V (G) →{1, . . . , k} is called an arboreal k-coloring if Vi = f−1(i), i = 1, . . . , k, form an arboreal partition. The vertex-arboricity a(G) of the graph G is the minimum k such that G admits an arboreal k-coloring.
It is an easy consequence of the 5-degeneracy of planar graphs that every planar digraph D without cycles of length at most 2 and its associated underlying planar graph G satisfy

χ(D) ≤ a(G) ≤ 3.  (7.1)

The main result of this chapter is a relaxation of Conjecture 2.4.1 and a strengthening of the above stated inequality (7.1). In particular, we prove the following.
Theorem 7.1.1. Every planar graph of order n has at least 2^{n/9} arboreal 3-colorings.
Corollary 7.1.2. Every planar digraph of order n without cycles of length at most 2 has at least 2^{n/9} 3-colorings.
Let us observe that Theorem 7.1.1 cannot be extended to graphs embedded in the torus since a(K7) = 4 and K7 admits an embedding in the torus. However, for every orientation D of K7, we have χ(D) ≤ 3 (and in some cases χ(D) = 3), so it is possible that Corollary 7.1.2 extends.
The proof of Theorem 7.1.1 is deferred until Section 7.4. Actually, we shall prove an extended version in the setting of list-colorings. Given a list-assignment L for the vertices of graph G, we say that L is a k-list-assignment if |L(v)| = k for every v ∈V (G).
Theorem 7.1.3. Let L be a 3-list-assignment for a planar graph G of order n. Then G has at least 2^{n/9} L-colorings.
Corollary 7.1.2 then extends to the list coloring of digraphs.
7.2 Unavoidable configurations

We define a configuration as a plane graph C together with a function δ: V(C) → N such that δ(v) ≥ deg_C(v) for every v ∈ V(C). A plane graph G contains the configuration (C, δ) if there is an injective mapping h: V(C) → V(G) such that the following statements hold:

(i) For every edge ab ∈ E(C), h(a)h(b) is an edge of G.
(ii) For every facial walk a1 . . . ak in C, except for the unbounded face, the image h(a1) . . . h(ak) is a facial walk in G.
(iii) For every a ∈V (C), the degree of h(a) in G is equal to δ(a).
If v is a vertex of degree k in G, then we call it a k-vertex, and a vertex of degree at least k (at most k) will also be referred to as a k+-vertex (k−-vertex). A neighbor of v whose degree is k is a k-neighbor (similarly k+- and k−-neighbor).
We will prove the following theorem.
Theorem 7.2.1. Every planar triangulation contains one of the configurations listed in Fig. 7.1.
Proof. The proof uses the discharging method. Assume, for a contradiction, that there is a planar triangulation G that contains none of the configurations shown in Fig. 7.1. We shall refer to these configurations as Q1, Q2, . . . , Q23.
Let G be a counterexample of minimum order. To each vertex v of G, we assign a charge of c(v) = deg(v) − 6. A well-known consequence of Euler's formula is that the total charge is always negative; in fact, ∑_{v∈V(G)} c(v) = −12. We are going to apply the following discharging rules:

R1: A 7-vertex sends charge of 1/3 to each adjacent 5-vertex.
R2: A 7-vertex sends charge of 1/2 to each adjacent 4-vertex.
R3: An 8+-vertex sends charge of 1/2 to each adjacent 5-vertex.
R4: An 8+-vertex sends charge of 3/2 to each adjacent 4-vertex whose neighbors have degrees 8+, 7, 6, 6.
R5: An 8+-vertex sends charge of 3/4 to each adjacent 4-vertex whose neighbors have degrees 8+, 8+, 7+, 6.
R6: An 8+-vertex sends charge of 1/2 to each adjacent 4-vertex whose neighbors have degrees 8+, 7+, 7+, 7+.
R7: An 8+-vertex sends charge of 1 to each adjacent 4-vertex whose neighbors have degrees 8+, 8+, 6, 6 or 8+, 7, 7, 6.
Let c∗(v) be the final charge obtained by applying rules R1–R7 to all vertices in G. We will show that every vertex has non-negative final charge. This will yield a contradiction since the initial total charge of −12 must be preserved.
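For completeness, the value −12 used above is the standard consequence of Euler's formula: a planar triangulation on n vertices has exactly 3n − 6 edges, so

∑_{v∈V(G)} c(v) = ∑_{v∈V(G)} (deg(v) − 6) = 2(3n − 6) − 6n = −12.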
We say that a 4-vertex is bad if its neighbors have degrees 8+, 7, 6, 6, i.e., the rule R4 applies to it and its 8+-neighbor. Let us observe that the clockwise order of the neighbors of a bad vertex is 8+, 7, 6, 6 (or 8+, 6, 6, 7) since Q7 is excluded.
First, note that G has no 3−-vertices since the configuration Q1 is excluded and since a triangulation cannot have 2−-vertices.
4-vertices: Let v be a 4-vertex. Note that v cannot have a 5−-neighbor since Q2 is excluded. If all of v’s neighbors have degree at most 7, then they all have degree exactly 7 since Q6, Q7 and Q8 are excluded. Since the vertex v has initial charge of −2, and each 7-neighbor sends a charge of 1/2 to it, the final charge of v is 0.
Now, assume that v is adjacent to an 8+-vertex. First, assume that the remaining three neighbors v1, v2, v3 of v are all 7−-vertices. The vertices v1, v2, v3 cannot all have degree 6 since Q8 is excluded. If deg(v1) = 7 and deg(v2) = deg(v3) = 6, then the rules R2 and R4 imply that v receives a charge of 2, resulting in the final charge of 0. If deg(v1) = deg(v2) = 7 and deg(v3) = 6, then by rules R2 and R7, v again receives a charge of 2. The case where deg(v1) = deg(v2) = deg(v3) = 7 is similar through rules R2 and R6.
Next, assume that v has exactly two 8+-neighbors v1, v2. If the remaining two vertices v3, v4 are both 7-vertices, then rules R2 and R7 imply that v receives a total charge of at least 3, giving it the final charge of 1. If the remaining two vertices are both 6-vertices, then rule R7 implies that v receives a total charge of 2, resulting in 0 final charge. Therefore, we may assume that deg(v3) = 7 and deg(v4) = 6. In this case, both v1 and v2 send a charge of 3/4 to v by R5, and v3 sends a charge of 1/2, resulting in a final charge of 0 for v.
Finally, assume that v has at least three 8+-neighbors. By rule R5 (if v has a 6-neighbor), or by rules R2 and R6 (if v has a 7-neighbor), or by rule R6 (otherwise), we see that v receives a total charge of at least 2, so c∗(v) ≥0.
5-vertices: Let v be a 5-vertex.
Note that v is not adjacent to a 4-vertex.
If all neighbors of v are 7−-vertices, then exclusion of Q8 and Q10 implies that v has at least three 7-neighbors. By R1, each such neighbor sends a charge of 1/3 to v. Since v has initial charge of −1, its final charge is at least 0. Next, suppose that v has an 8+-neighbor. If v has at least two 8+-neighbors, then by rule R3, v receives a charge of 1/2 from each of them, resulting in the final charge of at least 0 for v. Therefore, we may suppose that v has exactly one 8+-neighbor. If v has at least two 7-neighbors, then by R1 and R3, v receives a total charge of at least 1/2+1/3+1/3 > 1, resulting in a positive final charge for v. Finally, if v has at most one 7-neighbor, then we get the configuration Q8 or Q10.
6-vertices: They have initial charge of 0, and by the discharging rules, they do not give or receive any charge, which implies that they have a final charge of 0.
7-vertices: They have an initial charge of +1 and they send charge only to 4-vertices and 5-vertices.
Let v be a 7-vertex.
If v has no 4-neighbors then it has at most three 5-neighbors since Q11 is excluded. Therefore, it sends a charge of 1/3 to each such vertex, resulting in the final charge of 0 for v. Next, suppose that v has at least one 4-neighbor.
Since Q12 is excluded, v has at most one other 5−-neighbor. Therefore, v sends a charge of at most 1/2 + 1/2 = 1, resulting in the final charge of at least 0 for v.
8-vertices: An 8-vertex v has initial charge of +2. Since Q13 is excluded, v has at most two 4-neighbors. Now, suppose that v has exactly two 4-neighbors, say v1 and v2.
We consider two subcases. First, assume that v has a 5-neighbor. Excluding Q2 and Q15, no vertex in N(v1) ∩N(v) and N(v2) ∩N(v) has degree at most 6. If the two vertices in N(v1) ∩N(v) are both 7-vertices, then v1 has no 6−-neighbor (Q2 and Q16 being excluded).
This implies that v sends charge of 1/2 to v1. Otherwise, the two vertices in N(v1)∩N(v) are an 8+ and a 7+-vertex, respectively. This implies that by rules R5 and R6, v sends charge of 3/4 or 1/2 to v1. Therefore, in all cases, v sends no more than 3/4 charge to v1. An identical argument shows that v sends a charge of at most 3/4 to v2. Since v sends a charge of 1/2 to a 5-vertex, we have that v sends a total charge of at most 3/4 + 3/4 + 1/2 = 2.
Secondly, assume that v has no 5-neighbors. Consider v1. Excluding Q7 and Q17, v1 is not a bad 4-vertex. Therefore, v sends charge of at most 1 to v1. An identical argument shows that v sends charge of at most 1 to v2. Therefore, the final charge of v is non-negative.
Next, suppose that v has exactly one 4-neighbor, say v1. First, suppose that v1 is a bad 4-vertex. Excluding Q7 and Q16, v has at most one 5-neighbor. Since v sends a charge of at most 3/2 to v1 and charge 1/2 to its 5-neighbor, its final charge is at least 0. Thus, we may assume that v1 is not a bad 4-vertex. Then v sends at most charge of 1 to v1. Because Q18 is excluded, v has at most two 5-neighbors, to each of which it sends a charge of 1/2.
Therefore, v sends a total charge of at most 1 + 1/2 + 1/2 = 2, which implies that it has a non-negative final charge.
Finally, suppose that v has no 4-neighbors.
Excluding Q19, v has at most four 5-neighbors, to each of which it sends charge of 1/2. Therefore, the final charge of v is again non-negative.
9-vertices: A 9-vertex v has a charge of +3. Since Q20 is excluded, v has at most three 4-neighbors. First, suppose that v has exactly three 4-neighbors. Since Q20 is excluded, v has no 5-neighbor and since Q21 is excluded, none of its 4-neighbors are bad. Therefore, in this case v sends charge of at most 1 to each 4-neighbor, resulting in a non-negative final charge. Secondly, suppose that v has exactly two 4-neighbors. We consider two subcases.
For the first subcase, suppose that none of the 4-neighbors are bad. Now, v has at most two 5-neighbors since Q22 is excluded. This implies that v sends total charge of at most 1 + 1 + 1/2 + 1/2 = 3 to its neighbors, resulting in a non-negative final charge for v. For the second subcase, assume that v has at least one bad 4-neighbor. Now, the exclusion of Q21 implies that v has no 5-neighbors. Thus, v sends total charge of at most 3/2 + 3/2 = 3, and therefore c∗(v) ≥ 0. Thirdly, suppose that v has exactly one 4-neighbor. The exclusion of Q22 implies that v has at most three 5-neighbors, and hence it sends out a total charge of at most 3/2 + 1/2 + 1/2 + 1/2 = 3, resulting in c∗(v) ≥ 0. Lastly, assume that v has no 4-neighbors. Excluding Q4 we see that v has at most six 5-neighbors. This implies that v sends a total charge of at most 6 × 1/2 = 3 to its neighbors, thus c∗(v) ≥ 0.
10-vertices: A 10-vertex v has a charge of +4. Let v1, . . . , v10 be the neighbors of v in the cyclic order around v. If vi is a bad 4-neighbor of v and deg(vi−1) = 7, deg(vi+1) = 6, then the absence of Q3 and Q9 implies that deg(vi+2) ≥6 and deg(vi−2) ≥5. The absence of Q5 also implies that if vi+3 is another bad 4-neighbor, then deg(vi+2) = 7, thus deg(vi+4) = 6 and deg(vi+5) ≥6 (all indices modulo 10). By excluding Q23 and Q4, we conclude that if v has two bad 4-neighbors, then it has no other 4-neighbor and has at most two 5-neighbors.
This implies that c∗(v) ≥0.
Suppose now that v has one bad 4-neighbor, say v2.
We may assume deg(v1) = 7, deg(v3) = 6 and by the arguments given above, deg(v10) ≥5, deg(v4) ≥6. Excluding Q4, v can have at most four 5-neighbors. Thus, the only possibility that c∗(v) < 0 is that v has 3 more 4-neighbors (and the only way to have this is that the 4-neighbors are v5, v7, v9) or that v has two more 4-neighbors and two 5-neighbors (in which case 4-neighbors are v5, v7 and 5-neighbors are v9, v10). In each of these cases, we see, by excluding Q3 and Q5, that deg(v4) ≥7, deg(v6) ≥7 and deg(v8) ≥7. Thus, excluding Q9, v sends charge of at most 3/4 to each of v5 and v7 and at most 1 together to both v9 and v10. Thus, c∗(v) ≥4 −3/2 −2 × 3/4 −1 = 0.
Suppose now that v has no bad 4-neighbors. If v has five 4-neighbors, then they are (without loss of generality) v1, v3, v5, v7, v9 and excluding Q3 and Q4 we see that deg(vj) ≥7 for j = 2, 4, 6, 8, 10. This implies (by the argument as used above) that v sends charge of at most 3/4 to each 4-neighbor, thus c∗(v) ≥4−5×3/4 > 0. Similarly, if v has one 5-neighbor v1 and four 4-neighbors v3, v5, v7, v9, then we see as above that v sends charge of at most 3/4 to each 4-neighbor, and thus c∗(v) ≥4 −4 × 3/4 −1/2 > 0. If v has three 4-neighbors, then the exclusion of Q2 and Q4 implies that it has at most two 5-neighbors. Similarly, if v has two 4-neighbors, then it has at most four 5-neighbors. If v has one 4-neighbor, then it has at most five 5-neighbors. If v has no 4-neighbors, it has at most six 5-neighbors. In each case, c∗(v) ≥0.
11+-vertices: Let v be a d-vertex, with d ≥ 11. Let v1, . . . , vd be the neighbors of v in cyclic clockwise order, indices modulo d. Suppose that vi is a bad 4-vertex. Then we may assume that deg(vi−1) = 7 and deg(vi+1) = 6 (or vice versa), since Q7 is excluded.
By noting that the fourth neighbor of vi has degree 6, we see that deg(vi+2) ≥6 (since Q3 is excluded) and deg(vi−2) ≥5 (since Q9 is excluded). If vi is a good 4-vertex, then its neighbors are 6+-vertices. Now, we redistribute the charge sent from v to its neighbors so that from each bad 4-vertex vi we give 1/2 to vi−1 and 1/2 to vi+1, and from each good 4-vertex vi we give 1/4 to vi−1 and 1/4 to vi+1. We claim that after the redistribution, each neighbor of v receives from v at most 1/2 charge in total. This is clear for 4-neighbors of v.
A 5-neighbor of v is not adjacent to a 4-vertex since Q2 is excluded, so it gets charge of at most 1/2 as well. The claim is clear for each 6-neighbor of v since it is adjacent to at most one 4-vertex (Q3 is excluded). If a 7-neighbor vj of v satisfies deg(vj+1) = deg(vj−1) = 4, the exclusion of Q9 implies that both vj−1 and vj+1 are good 4-vertices. Thus, the claim holds for 7-neighbors of v. An 8+-neighbor of v cannot be adjacent to a bad 4-neighbor of v, and therefore it receives charge of at most 1/2 from v after the redistribution. This implies that if d ≥ 12, then the final charge at v is c∗(v) ≥ c(v) − d/2 ≥ 0.
Thus, it remains to consider the case when d = 11. In this case the same conclusion as above can be made if we show that either the redistributed charge at one of the vertices vi is 0, or that there are two vertices whose redistributed charge is at most 1/4. If there exists a good 4-vertex, then there exists a good 4-vertex vi, one of whose neighbors, say vi−1, gets 1/4 total redistributed charge. This is easy to see since d = 11 is odd and Q9 is excluded.
Let t ≥0 be the largest integer such that vi, vi+2, . . . , vi+2t are all good 4-neighbors of v.
Then it is clear that vi+2t+1 has total redistributed charge 1/4 and that vi−1 ̸= vi+2t+1 (by parity). This shows that the total charge sent from v is at most 5, thus the final charge c∗(v) is non-negative. Thus, we may assume that v has no good 4-neighbors. If v has a bad 4-neighbor vi, then we may assume that deg(vi−1) = 7 and deg(vi+1) = 6. As mentioned above, we conclude that deg(vi+2) ≥6. We are done if this vertex has 0 redistributed charge.
Otherwise, vi+2 is adjacent to another bad 4-neighbor vi+3 of v. Since vi, vi+1, vi+2, vi+3 do not correspond to the excluded configuration Q5, we conclude that deg(vi+2) = 7. Now we can repeat the argument with vi+3 to conclude that vi+6, vi+9 are also bad 4-vertices and deg(vi+8) = 7. However, since deg(vi−1) = 7, we conclude that vi+9 cannot be a bad 4-vertex and hence there is a neighbor of v with redistributed charge 0.
Thus, v has no 4-neighbors. Now the only way to send charge 1/2 to each neighbor of v is that all neighbors of v are 5-vertices. However, in this case we have the configuration Q4.
7.3 Reducibility

This section is devoted to the reducibility part of the proof of our main result, Theorem 7.1.3.
Let G be a planar graph and L a 3-list-assignment.
It is sufficient to prove the theorem when G is a triangulation. Otherwise, we triangulate G and any L-coloring of the triangulation is an L-coloring of G. Of course, we only consider arboreal L-colorings, and we omit the adverb “arboreal” in the sequel.
A configuration C contained in G is called reducible if |C| ≤ 9 and any L-coloring of G − C can be extended to an L-coloring of G in at least two ways. Showing that every triangulation G contains a reducible configuration will imply that G has at least 2^{|V(G)|/9} arboreal L-colorings.
Here we prove our main theorem by showing that each configuration from Section 7.2 is reducible. The following lemma will be used throughout this section to prove reducibility.
Lemma 7.3.1. Let G be a planar graph, L a 3-list-assignment for G, and v1, . . . , vk ∈V (G).
Let Gi = G −{vi+1, . . . , vk} for i = 0, . . . , k and suppose that: (1) for every i = 1, . . . , k, degGi(vi) ≤5, and (2) there exists an i such that degGi(vi) ≤3.
Then every arboreal L-coloring of G0 can be extended to G in at least two ways. If only (1) holds, then every arboreal L-coloring of G0 can be extended to G.
Proof. Let f be an L-coloring of G0. Since v1 has degree at most 5 in G1 and |L(v1)| = 3, there is a color c ∈ L(v1) such that c appears at most once on NG1(v1). Therefore, coloring v1 with c gives an L-coloring of G1. Repeating this argument, we see that the L-coloring of G0 can be extended to an L-coloring of G by consecutively L-coloring v1, v2, . . . , vk. If (2) holds for i, then there are actually two possible colors that can be used to color vi. Therefore, every L-coloring of G0 can be extended to G in at least two ways.
Lemma 7.3.2. Configurations Q1, . . . , Q5, Q8, . . . , Q12, Q16, . . . , Q19, Q21, Q22 listed in Fig. 7.1 are reducible.
The configuration Q′ 23 that is obtained from Q23 by deleting the pendant vertex with δ(v) = 4 is also reducible.
Proof. For these configurations Qi, Q′ j we simply apply Lemma 7.3.1. The corresponding enumeration v1, . . . , vk (k = |V (Qi)| or k = |V (Q′ j)|) is shown in Figure 7.2 and the vertex for which condition (2) of Lemma 7.3.1 applies is shown by a larger circle.
Lemma 7.3.3. Configuration Q6 in Fig. 7.1 is reducible.
Proof. Let u be the 4-vertex and let u1, u2, u3, u4 be its neighbors in cyclic order and let C be the cycle u1u2u3u4. Suppose that deg(u1) = deg(u2) = 7, deg(u3) ≤7 and deg(u4) = 6.
Let f be an L-coloring of G − {u, u1, u2, u3, u4}. Now, consider u2. If there are at least two ways to extend the coloring f to u2, then we can obtain at least two different colorings of G by sequentially coloring u3, u1, u4, u using Lemma 7.3.1. Therefore, we may assume that L(u2) = {1, 2, 3} and that colors 1 and 2 each appear exactly twice on N(u2). Now, let us color u2 with color 3. We next consider coloring u1 and u3. We claim that at least one of u1 and u3 must be forced to be colored 3. Otherwise, we color u1 and u3 without using color 3, and then we color u4 arbitrarily (this is possible since u is yet uncolored). Now, if 3 ∈ L(u), then we can color u with 3 since u2 has no neighbor of color 3. Moreover, there is at most one color (other than color 3) that can appear on the neighborhood of u twice, so u has another available color in its list. Therefore, there are two ways to color u. Similarly, we get two different colorings of u when 3 ∉ L(u). This proves the claim, and we may assume that L(u1) = {a, b, 3}, that u1 is forced to be colored 3, and that the four colored neighbors of u1 not on C have colors a, a, b, b. Now, we color u3 arbitrarily with a color c. We may assume that c ≠ 3, for otherwise we color u4 arbitrarily and we will have two available colors for u.
To complete the proof it is sufficient to show that u4 can be colored with a color that is not c, for then we could color u with at least two different colors. If u4 is forced to be colored c, then for every color x ∈L(u4), x ̸= c, the color x must appear at least twice on N(u4).
This implies that the three colored neighbors of u4 not on the cycle have colors 3, y, y, for some color y and that 3, y ∈L(u4). But recall that u1 and u2 have no neighbors outside C having color 3. Therefore, coloring u4 with color 3 gives a proper coloring of G −u. Now, u can be colored with at least two colors to obtain a coloring of G.
Lemma 7.3.4. Let u be a 4-vertex, and suppose u1, u2, u3, u4 are the neighbors of u in cyclic order. Suppose that deg(u1) ≤6, deg(u2) ≤7 and deg(u3) ≤6. This configuration is reducible. In particular, the configuration Q7 in Fig. 7.1 is reducible.
Proof. Let f be an L-coloring of G − {u, u1, u2, u3}. Suppose that f(u4) = 3. Let C be the cycle u1u2u3u4u1. Now, consider u1. Since only four of u1's neighbors are colored and f(u4) = 3, we can color u1 with a color other than 3, say 2. Now, consider coloring u2. We have two cases. First suppose that it is possible to color u2 with a color that is not 2. In this case, we color u2 with a color x ≠ 2, and then arbitrarily color u3 with a color y (this is possible since u is not yet colored). Now, if u does not have two colors on N(u), each appearing twice, we have two different available colors in L(u). Therefore, we may assume that x = 3 and y = 2, and that 2, 3 ∈ L(u). Now, let z ∈ L(u) \ {2, 3}. Clearly, coloring u with color z gives a proper coloring of G. But by planarity of G, for one of the colors 2 and 3, coloring u with this color will not create a monochromatic cycle, since a 2-colored path joining u1 and u3 and a 3-colored path from u2 to u4 would cross. Therefore, there are two colors available for u.
Next suppose that u2 is forced to be colored with color 2. Let L(u2) = {a, b, 2}. Since u2 is forced to be colored 2, we have that the four neighbors of u2 not on C have colors a, a, b, b.
Now, consider u3. We may assume that u3 is forced to be colored with color 3, for otherwise we could color u with two different colors afterwards. This implies that 2, 3 ∈ L(u3) and that the three neighbors of u3 not on C have colors 2, d, d, where d ∈ L(u3) \ {2, 3}. Note that this way we get one coloring extension of f. We need to get another one. Now, since u3 cannot be colored 2, and u2 has no neighbor outside C of color 2, it follows that u1 must have a neighbor of color 2 not on C. Now we color u3 with color 3. Since u1 has five colored neighbors and color 2 appears on N(u1) at least twice, we may change the color 2 of u1 to another color in its list. Now, an extension of this coloring to u gives us the second L-coloring.
Lemma 7.3.5. A configuration consisting of an 8-vertex that is adjacent to at least three 4-vertices (configuration Q13) is reducible.
Proof. Let v1 be an 8-vertex and suppose v2, v3, v4 are 4-vertices adjacent to v1. Let C be the cycle on the neighbors of v1.
Let L(v1) = {1, 2, 3}.
Consider an L-coloring f of G1 = G − {v2, v3, v4}.
We may assume that v1 is colored 3.
Since every L-coloring of G1 extends to G, we may assume that v1 cannot be recolored. Thus, (at least) two of its neighbors are colored 1 and two are colored 2. If no neighbor of v1 is colored 3, then we can extend the coloring to v2 in two ways since the vertex v1 cannot be part of a monochromatic cycle. Thus, color 3 appears exactly once on N(v1) and colors 1 and 2 appear precisely twice. It is also clear that 3 ∈ L(v2) ∩ L(v3) ∩ L(v4). Let v5 be the neighbor of v1 with f(v5) = 3. Without loss of generality, v5 is not the neighbor of v2 on the cycle C. If v2 has no neighbor colored 3 except v1, then we may extend the coloring f to G − {v3, v4} in at least two ways. We can then extend these colorings to G. Therefore, we may assume that v2's neighbor distinct from its neighbors on the cycle C is colored 3. Now, v2's neighbors on the cycle both have the same color for otherwise we can extend f to G2 = G − {v3, v4} in at least two ways. Therefore, we may assume that the neighbors of v2 on the cycle are colored 1 and that 1 ∈ L(v2). But now, by planarity, coloring v2 by either 1 or 3 gives a proper L-coloring of G2. Coloring v2 with the other remaining color in its list gives a second coloring of G2. Both of these colorings can be then extended to G. Therefore, there are at least two ways to extend a coloring of G1 to G.
Lemma 7.3.6. Let u be an 8-vertex and assume its neighbors (in the clockwise cyclic order) are u1, . . . , u8 and let C be the 8-cycle u1u2 . . . u8u1. Suppose that u is adjacent to two 4-vertices, one 5-vertex, and a 6-vertex that is adjacent to either the 5-vertex or one of the two 4-vertices on C. Then this configuration (Q14 or Q15) is reducible.
Proof. Suppose that deg(ui) = deg(uj) = 4, deg(uk) = 5 and deg(ul) = 6, where i, j, k, l ∈ {1, . . . , 8} and i ≠ j. First assume that uk and ul are adjacent on C. We may assume l = k+1. Let L(u) = {1, 2, 3} and consider an L-coloring f of G − {u, ui, uj, uk, ul}. Without loss of generality, we may assume that colors 1 and 2 each appear exactly twice on N(u) in the coloring f. Otherwise, there are two ways to extend the coloring f of G − {u, ui, uj, uk, ul} to a coloring of G − {ui, uj, uk, ul}, and applying Lemma 7.3.1 we can extend each of these to a coloring of G. Therefore, color 3 does not appear in the neighborhood of u in the coloring f. We color u with color 3 to obtain a coloring g of G − {ui, uj, uk, ul}. Now, consider the 6-vertex uk+1. Since uk+1 has only five colored neighbors so far, we have at least one available color for it from its list L(uk+1). If 3 ∉ L(uk+1), we color uk+1 arbitrarily with an available color. If 3 ∈ L(uk+1), we color uk+1 with 3 if color 3 does not appear on N(uk+1) \ {u}. If color 3 appears on N(uk+1) \ {u}, we color uk+1 with any other color in its list except 3 (this is possible since the remaining three colored neighbors of uk+1 can forbid only one additional color from L(uk+1)). Now, consider one of the 4-vertices, say ui. We may assume that ui ≠ uk+2, otherwise we consider uj. First, assume that 3 ∉ L(ui). Since ui has only three colored neighbors and u is colored 3, there are at least two available colors in L(ui) that can be used to color ui. Each coloring can then be extended to a coloring of G by Lemma 7.3.1. Therefore, we may assume that 3 ∈ L(ui). Recall that no neighbor of u, except possibly uk+1, is colored 3. Therefore, ui can be colored with color 3 without creating a monochromatic cycle of color 3, since any such cycle must use the vertex uk+1, and by assumption if uk+1 is colored 3, it has no neighbor except u that is colored 3. Therefore, the four colored neighbors of ui can forbid at most one color from L(ui), which implies that we can color ui with two different colors. Now, applying Lemma 7.3.1 to G − {uk, uj}, we see that each of these two colorings can be extended to a coloring of G.
Next, assume that ul and uj are adjacent on C. We may assume that ul = uj+1. If ui ≠ uj+2, then the above proof works also in this case. Thus, we have ui = uj+2. However, in this case we can use Lemma 7.3.1 (with v1 = ui, v2 = u, v3 = uj+1, v4 = uj, v5 = uk), where property (2) applies for v1.
Lemma 7.3.7. A configuration consisting of a 9-vertex adjacent to at least three 4-vertices and at least one other 5−-vertex is reducible. In particular, Q20 is reducible.
Proof. Let v1 be a 9-vertex and suppose v2, v3, v4 are 4-vertices and v5 is a 5-vertex adjacent to v1. Let L(v1) = {1, 2, 3}. Consider an L-coloring f of G1 = G − {v1, . . . , v5}. This coloring can be extended to v1 and then to v2, . . . , v5 by Lemma 7.3.1. We may assume that v1 is forced to be colored 3; otherwise we are done. This implies that each of the colors 1 and 2 appears on N(v1) at least twice. If color 3 does not occur on N(v1), then we can extend f to v2 in at least two ways, since color 3 does not give any restriction on the extension to v2, and the remaining three neighbors of v2 prevent at most one color from being used. Therefore, we may assume that colors 1 and 2 appear exactly twice and color 3 appears exactly once on N(v1) in the coloring f. Let v6 be the neighbor of v1 with f(v6) = 3. Without loss of generality, v2 is not contained in the triangular faces containing the edge v1v6. Let v7, v8 be the common neighbors of v1 and v2. If v2 has no neighbor of color 3 except v1, or if 3 ∉ L(v2), then we can extend the coloring f to v2 in at least two ways. We can then extend these colorings to G. Therefore, we may assume that the neighbor of v2 distinct from v1, v7, v8 is colored 3. Now, v7 and v8 both have the same color, for otherwise we can extend f to v2 in at least two ways. This implies that we may assume that v7 and v8 are colored 1 and that 1 ∈ L(v2). But now, by planarity, coloring v2 by one of the colors 1 and 3 gives rise to a proper coloring, since a path joining v7 and v8 colored 1 and a path colored 3 joining v1 and the fourth neighbor of v2 would cross. Coloring v2 with the other remaining color in its list gives another extension of f. Both of these colorings can then be extended to G by Lemma 7.3.1. This shows that the considered configuration is reducible.
7.4 Proof of the main theorem

It is easy to see that every plane graph is a spanning subgraph of a triangulation; we can always add edges joining distinct nonadjacent vertices until we obtain a triangulation.
Proof of Theorem 7.1.3. The proof is by induction on the number of vertices, n = |G|.
We may assume that G is a triangulation. By Theorem 7.2.1 and Lemmas 7.3.2–7.3.7, G contains a reducible configuration C on k ≤ 9 vertices. By the induction hypothesis, G − V(C) has at least 2^{(n−k)/9} arboreal L-colorings. Since C is reducible, each of these colorings extends to G in at least two ways, giving at least 2 × 2^{(n−k)/9} ≥ 2^{n/9} arboreal L-colorings in total.
Figure 7.1: Unavoidable configurations. The listed numbers refer to the degree function δ, and the notation d− at a vertex v means all such configurations where the value δ(v) is either d or d − 1.
Figure 7.2: Lemma 7.3.1 applies to several configurations.
Chapter 8

Conclusion and Future Work

In this chapter, we outline possible directions for future research.
8.1 Upper bounds on χ(D) in terms of ∆(D)

In the thesis, we proved that every digon-free digraph D has χ(D) ≤ (1 − e^{−13}) ∆̃. A natural question is whether the proof of the theorem can be extended to list colorings, i.e., is it true that χ_l(D) ≤ (1 − e^{−13}) ∆̃? The main difficulty seems to be in showing that the concentration inequalities would still hold. We also mentioned that the constant 1 − e^{−13} is probably not best possible, and we conjecture the following.
Conjecture 8.1.1. Let D be a digon-free digraph. Then χ(D) ≤ ⌈∆̃/2⌉ + 1.
We may also generalize the above-mentioned conjecture to list colorings. In general, we believe that the asymptotic order of the chromatic number of a digon-free digraph should be ∆̃/log ∆̃, as conjectured by Mohar and McDiarmid .
Conjecture 8.1.2. Every digraph D without digons has χ(D) = O(∆̃/log ∆̃).
Again, we believe this should generalize to list colorings as well. As mentioned in Chapter 2, the conjecture is best possible due to results known about tournaments (see Theorem 2.3.6).
8.2 The planar digraph conjecture

The main conjecture concerning colorings of planar digraphs is that every digon-free planar digraph is 2-colorable; see Conjecture 2.4.1. Since this is likely to be difficult, we pose the following relaxations of this conjecture.
Conjecture 8.2.1. Let D be a digon-free planar digraph such that all cycles of length 3 (directed or otherwise) are vertex-disjoint. Then χ(D) ≤2.
The above conjecture is a weakening of a conjecture for vertex-arboricity of graphs posed by Raspaud and Wang in 2008. The authors conjectured that every planar graph in which all triangles are vertex-disjoint has vertex-arboricity at most 2. Clearly, this conjecture would imply Conjecture 8.2.1, since an arboreal 2-coloring of a graph G gives a 2-coloring of the digraph D, where D is obtained by arbitrarily orienting the edges of G. We can also try weakening Conjecture 2.4.1 by forbidding directed cycles of certain lengths.
Conjecture 8.2.2. Let D be a planar digraph with digirth at least 4. Then χ(D) ≤2.
If we also forbid non-directed cycles of length three, then the above conjecture follows trivially. This follows from the fact that every triangle-free planar graph has a vertex of degree at most three. This implies that the digraph has a vertex v with min{d+(v), d−(v)} ≤ 1. Now, applying induction on D − v we get that D is 2-colorable.
One can also state the following weakening of Conjecture 8.2.2.
Conjecture 8.2.3. There exists a k such that every planar digraph with digirth at least k is 2-colorable.
8.3 The relationship between χ(G) and χ(D)

Recall that every graph G has an orientation D of its edges so that χ(D) = 1. Neumann-Lara asked whether there is an orientation D of the edges of G so that χ(D) is large.
Clearly, χ(D) is always bounded above by χ(G) for any orientation D.
Neumann-Lara conjectured the following.
Conjecture 8.3.1. For every k ≥1 there exists an r = r(k) so that every graph G with χ(G) ≥r has an orientation of edges D such that χ(D) ≥k.
The conjecture clearly holds for k = 1. For k = 2, it is easy to observe that we may take r(k) = 3. However, we do not even know if the conjecture holds for k = 3. Conjecture 8.3.1 also holds for complete graphs, where r = Θ(k log k), as can be seen from the discussion of the chromatic number of tournaments in Chapter 2.
8.4 Chromatic polynomial for digraphs

An interesting notion is the chromatic polynomial of a digraph. For an (undirected) graph G and a positive integer x, P(G; x) is defined to be the number of proper colorings of G with x colors.
Note that two colorings that induce the same partition into color classes but assign different colors to the classes are counted as distinct. It can be shown that P(G; x) is a polynomial in x for every graph G. Hence, P(G; x) is called the chromatic polynomial of the graph G. One of the main motivations for studying P(G; x) is that χ(G) = min{k : P(G; k) > 0}. For every graph G, the chromatic polynomial is known to admit the following nice recurrence.
Theorem 8.4.1. Let G be a graph and uv ∈ E(G). Then

P(G; x) = P(G − uv; x) − P(G/uv; x),  (8.1)

where G/uv is the graph obtained from G by deleting the edge uv and identifying the vertices u and v.
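As a quick added illustration of (8.1), take G = K3 with an edge uv: then K3 − uv is the path on three vertices and K3/uv is a single edge, so

P(K3; x) = x(x − 1)^2 − x(x − 1) = x(x − 1)(x − 2),

the familiar falling factorial for the triangle.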
One can similarly define a chromatic polynomial for a digraph.
Given a digraph D and a positive integer x, we can define P(D; x) to be the number of colorings of D with x colors. One can derive that P(D; x) is a polynomial in x from results on hypergraph polynomials. Note that a hypergraph H = (X, E) is a set X of elements called vertices and a set E of subsets of X called hyperedges or simply edges. Note that graphs are precisely those hypergraphs where each hyperedge contains two elements. The chromatic polynomials have been studied in the setting of hypergraphs. The following theorem is from .
Theorem 8.4.2. Given a hypergraph H = (X, E), where X = {v1, ..., vn} and E = {e1, ..., em}, let f_{x1,...,xm}(H, λ) denote the number of different λ-colorings of H satisfying the condition that in each edge ei there appear at least xi different colors. Then f is a polynomial in λ.
The case where all xi = 2 has been proved earlier in , along with an inclusion-exclusion recurrence. Note that digraph coloring can be formulated as a hypergraph coloring, where the vertices of the hypergraph are the vertices of the digraph and the hyperedges are all the sets of vertices which contain a directed cycle in the digraph. Then Theorem 8.4.2 with xi = 2 for all i immediately implies the following.
Proposition 8.4.3. P(D; x) is a polynomial in x for every digraph D.
It is easy to compute the chromatic polynomial of the following digraphs.
Proposition 8.4.4. Let D be an acyclic digraph of order n. Then P(D; x) = x^n.
Proof. Every assignment of colors to vertices of D is a proper coloring and there are x^n color assignments.
Proposition 8.4.5. Let D = C⃗n, the directed cycle of length n. Then P(D; x) = x^n − x.
Proof. Every assignment of colors to vertices of D is a proper coloring except those where all vertices are assigned the same color in {1, 2, ..., x}. There are x such assignments. Since the total number of color assignments is x^n, the result follows.
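The small formulas above are easy to check by brute force on tiny digraphs. The following sketch (an added illustration; the function and variable names are ours and not part of the thesis) evaluates P(D; x) directly by counting color assignments with no monochromatic directed cycle:

from itertools import product

def has_dicycle(vertices, arcs):
    # A vertex set induces a directed cycle iff repeatedly deleting vertices
    # of out-degree zero (within the set) never empties the set.
    verts = set(vertices)
    while verts:
        outdeg = {v: 0 for v in verts}
        for (a, b) in arcs:
            if a in verts and b in verts:
                outdeg[a] += 1
        sinks = [v for v in verts if outdeg[v] == 0]
        if not sinks:
            return True
        verts -= set(sinks)
    return False

def chromatic_poly_value(vertices, arcs, x):
    # Count assignments of x colors to the vertices in which no color class
    # contains a directed cycle, i.e. the value P(D; x).
    count = 0
    for assignment in product(range(x), repeat=len(vertices)):
        coloring = dict(zip(vertices, assignment))
        if all(not has_dicycle([v for v in vertices if coloring[v] == c], arcs)
               for c in range(x)):
            count += 1
    return count

# Directed triangle: P(D; x) = x^3 - x (Proposition 8.4.5).
assert chromatic_poly_value([1, 2, 3], [(1, 2), (2, 3), (3, 1)], 3) == 3**3 - 3
# A single arc (acyclic digraph): P(D; x) = x^2 (Proposition 8.4.4).
assert chromatic_poly_value([1, 2], [(1, 2)], 4) == 4**2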
We can also express the chromatic polynomial of a digraph in terms of the strongly connected components.
Proposition 8.4.6. Let D be a digraph and let D1, D2, ..., Dk be the strongly connected components of D. Then

P(D; x) = ∏_{i=1}^{k} P(Di; x).
Proof. Note that two strongly connected components Di and Dj, i ≠ j, do not share a vertex in common, for otherwise Di and Dj would form a single component. Therefore, P(D; x) ≤ ∏_{i=1}^{k} P(Di; x). Now, for i = 1, ..., k, let πi be an x-coloring of Di. We claim that π = ∪_{i=1}^{k} πi is an x-coloring of D. If not, then there is a monochromatic directed cycle that uses vertices of at least two components Di and Dj. But this implies that the block digraph of strongly connected components of D is not acyclic, a contradiction.
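For example, let D consist of a directed triangle T together with one extra vertex z and a single arc from a vertex of T to z. The strongly connected components are T and {z}, so the proposition gives P(D; x) = (x^3 − x) · x, which agrees with the direct count: the only directed cycle of D is T, and exactly x · x of the x^4 color assignments make T monochromatic.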
In general, it seems difficult to get a recursive formula for P(D; x) similar to (8.1). However, we can show the following.
Proposition 8.4.7. Let C be a directed cycle in a digraph D such that no edges of C appear in any other cycle of D. Then

P(D; x) = P(D − E(C); x) − P(D/C; x),

where D/C is the digraph obtained from D by deleting E(C) and identifying all vertices of C.
Proof. Note that, by assumption, C cannot have any chords. Since no arc of C appears in any other cycle, it follows that every x-coloring of D −E(C) is an x-coloring of D except for those colorings which have vertices of C colored with the same color. Thus, it is easy to see that the number of x-colorings of D −E(C) where the set V (C) is monochromatic is P(D/C; x).
We can also reduce vertices of low degree.
Proposition 8.4.8. Let v be a vertex with min{d+(v), d−(v)} = 0 in a digraph D. Then P(D; x) = xP(D −v; x).
Proof. Note that D1 = v is a strongly connected component of D. The result now follows by Proposition 8.4.6.
Proposition 8.4.9. Let D be a digraph and v a vertex with d+(v) = d−(v) = 1. Let w be the in-neighbor of v and u the out-neighbor of v, where w ≠ u. Let D′ be the digraph with V(D′) = V(D) \ {v} and E(D′) = E(D − v) ∪ {wu}. Then

P(D; x) = P(D′; x) + (x − 1)P(D − v; x).
Proof. We first claim that the number of x-colorings of D − v with no monochromatic uw-path (i.e. a path starting at u and ending at w) is P(D′; x). Note that every coloring of D − v with no monochromatic uw-path is a coloring of D′, and no proper coloring of D′ can have a monochromatic uw-path. This establishes the claim. Next, note that the number of colorings of D − v with a monochromatic uw-path is P(D − v; x) − P(D′; x). Now, a coloring of D − v with no monochromatic path from u to w can be extended to a coloring of D in x ways by coloring v arbitrarily. On the other hand, a coloring of D − v with a monochromatic uw-path can be extended to D by coloring v with one of the x − 1 colors that do not appear on u and w. Thus, we have

P(D; x) = xP(D′; x) + (x − 1)[P(D − v; x) − P(D′; x)] = P(D′; x) + (x − 1)P(D − v; x).
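As a consistency check (an added example), apply Proposition 8.4.9 to the directed triangle D with vertices w, v, u and arcs wv, vu, uw, suppressing v: here D − v is the single arc uw, so P(D − v; x) = x^2, while D′ is the digon on {u, w}, so P(D′; x) = x^2 − x. The proposition gives P(D; x) = (x^2 − x) + (x − 1)x^2 = x^3 − x, in agreement with Proposition 8.4.5.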
8.4.1 The chromatic polynomial and planar digraphs

We mentioned that for a digraph D, P(D; x) is closely related to χ(D). Note that Conjecture 2.4.1 is equivalent to stating that for every digon-free planar digraph D, we have P(D; 2) > 0.
Since this conjecture seems difficult, we may try to find a lower bound on P(D; 3). Since every digon-free planar digraph is 3-colorable (see Chapter 2), we know that P(D; 3) > 0.
In Chapter 7, we proved that P(D; 3) ≥ 2^{n/9}, where n is the order of D. On the other hand, we believe that P(D; 2) should be bounded by an absolute constant for any digon-free digraph D.
Conjecture 8.4.10. There exists a constant C such that P(D; 2) < C for any digon-free digraph D.
Conjecture 8.4.10 seems to be difficult. In fact, it does not seem to be easy even if we replace C by a polynomial in |V (D)|.
8.5 Hedetniemi's Conjecture for digraphs

In this section, we propose an analog of Hedetniemi's Conjecture for digraphs. First, we need a definition. Given graphs G and H, the direct product G × H of G and H is the graph with vertex set V(G × H) = V(G) × V(H), where two vertices (u, u′) and (v, v′) are adjacent if and only if u is adjacent to v and u′ is adjacent to v′. Hedetniemi's conjecture states:

Conjecture 8.5.1 (Hedetniemi's Conjecture). Let G and H be simple graphs. Then χ(G × H) = min{χ(G), χ(H)}.
It is easy to verify that χ(G×H) ≤min{χ(G), χ(H)} by simply considering the natural homomorphism projections G × H →G and G × H →H. Aside from small values of χ(G) and χ(H), Hedetniemi’s conjecture is largely open. Zhu generalized the conjecture to circular colorings.
Conjecture 8.5.2 (Zhu's Conjecture). Let G and H be simple graphs. Then χc(G × H) = min{χc(G), χc(H)}.
One can also ask if Hedetniemi’s conjecture generalizes to digraphs. Given digraphs D and D′, the direct product D × D′ of D and D′ is the digraph with vertex set V (D × D′) = V (D) × V (D′) where there is an arc from vertex (u, u′) to vertex (v, v′) if and only if uv is an arc in D and u′v′ is an arc in D′.
We conjecture that the following analog of Hedetniemi’s conjecture holds.
Conjecture 8.5.3. Let D and D′ be simple digraphs. Then χ(D × D′) = min{χ(D), χ(D′)}.
It is easy to see that χ(D × D′) ≤min{χ(D), χ(D′)}.
Proposition 8.5.4. Conjecture 8.5.3 holds if min{χ(D), χ(D′)} ≤2.
Proof. Suppose that min{χ(D), χ(D′)} = 1. We may assume that χ(D) = 1 and it follows that D is acyclic. Note that D × D′ cannot contain a cycle for otherwise the projection of the cycle onto D would be a cycle in D. Therefore, χ(D × D′) = 1.
Next, suppose that min{χ(D), χ(D′)} = 2. Let C = v1v2...vkv1 be a directed cycle in D and C′ = u1u2...ul be a directed cycle in D′. Then clearly the walk (v1, u1)(v2, u2)...
will eventually (after lcm(k, l) steps) reach (v1, u1). Therefore, D × D′ contains a cycle and hence χ(D × D′) = 2.
Appendix A

Probabilistic Preliminaries

Here we present all the probabilistic tools used in the thesis. The results presented here can be found in  and . The most fundamental property used in probabilistic analysis is the linearity of expectation.
Theorem A.0.5 (Linearity of Expectation). Let X1, X2, ..., Xl be random variables. Then

E[∑_{i=1}^{l} Xi] = ∑_{i=1}^{l} E[Xi].
A.1 The First Moment Method

The first moment method can essentially be summarized as follows.
Theorem A.1.1 (The First Moment Principle). Let X be a random variable. If E[X] ≤t then P[X ≤t] > 0.
The following inequality is frequently used in probabilistic analysis.
Theorem A.1.2 (Markov's Inequality). For any positive random variable X,

P[X ≥ t] ≤ E[X]/t.
If X is non-negative and integer-valued, Markov's inequality (applied with t = 1) implies the following: Theorem A.1.3. P[X > 0] ≤ E[X].
The first moment method allows us to bound from above the probability that a random variable is large by computing its expected value. The power of the method lies in the fact that expected values are usually straightforward to compute due to linearity of expectation.
In the next section we discuss bounding from above the probability that a random variable is small.
A.2 The Poisson Paradigm
When we have a random variable X that depends on many rarely occurring and mostly independent indicator random variables, we would like to say that X has a distribution that is close to Poisson. In particular, we would like to say that P[X = 0] ≈ e^{−µ}, where µ is the expectation of X. In this section, we present inequalities that achieve this.
A.2.1 The Janson Inequalities
Given a set of bad events Bi, i ∈ I, each of which has a small probability of occurring, we would like to say that P[∩_{i∈I} \bar{Bi}] is small. If the events Bi are mutually independent of each other, then this indeed is the case, since P[∩_{i∈I} \bar{Bi}] = \prod_{i∈I} P[\bar{Bi}]. The Janson Inequalities (see Chapter 8, ) allow us to make a similar claim when the Bi are “almost” independent.
Let Ω be a finite universal set and let R be a random subset of Ω constructed as follows.
For each r ∈ Ω, we put r ∈ R with some probability p_r, independently. Let Ai, i ∈ I, be subsets of Ω, where I is a finite index set. Let Bi be the event that Ai ⊆ R. That is, Bi is the event that all the elements of Ai “won” their random coin flips and were put in R.
Let Xi be the indicator random variable for Bi, i.e. Xi = 1 if the event Bi occurred and 0 otherwise. Set X = \sum_{i∈I} Xi. Note that X counts the number of events Bi that occur, and therefore, P[X = 0] = P[∩_{i∈I} \bar{Bi}]. For i, j ∈ I, we write i ∼ j if i ≠ j and Ai ∩ Aj ≠ ∅. We define ∆ = \sum_{i∼j} P[Bi ∩ Bj].
The sum above is over all ordered pairs. Note that if i ≠ j and not i ∼ j, then the events Bi and Bj are independent. This means that ∆ is a kind of total measure of mutual dependence of the Bi. Finally, we set µ = E[X] = \sum_{i∈I} P[Bi].
Theorem A.2.1 (The Janson Inequality). Let Bi, i ∈ I, ∆ and µ be as above. Then P[X = 0] = P[∩_{i∈I} \bar{Bi}] ≤ e^{−µ + ∆/2}.
Note that when ∆ ≥ 2µ, the bound in Theorem A.2.1 is useless.
Fortunately, the following extension can often be applied.
Theorem A.2.2 (The Extended Janson Inequality). Under the assumptions of Theorem A.2.1 and the further assumption that ∆ ≥ µ, P[X = 0] = P[∩_{i∈I} \bar{Bi}] ≤ e^{−µ²/(2∆)}.
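As a standard illustration of how these inequalities are used (not taken from the thesis): let Ω be the edge set of K_n, let R be the edge set of the random graph G(n, p), and let the sets Ai be the edge sets of the \binom{n}{3} triangles of K_n. Then µ = \binom{n}{3} p^3, and ∆ is of order n^4 p^5, since i ∼ j corresponds to an ordered pair of distinct triangles sharing an edge. For p = c/n this gives ∆ = o(1) and µ → c^3/6, so Theorem A.2.1 yields P[G(n, p) contains no triangle] ≤ e^{−µ + ∆/2} = e^{−c^3/6 + o(1)}.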
A.3 The Lovász Local Lemma
If we have n mutually independent events, each of which holds with a positive probability p, we know that all events hold simultaneously with probability p^n > 0. The local lemma generalizes this statement to events which are only locally dependent.
Theorem A.3.1 (The Local Lemma). Let A1, ..., An be events in an arbitrary probability space.
A directed graph D = (V, E) on the set of vertices V = {1, 2, ..., n} is called a dependency digraph for the events A1, ..., An if, for each i, 1 ≤ i ≤ n, the event Ai is mutually independent of all the events {Aj : (i, j) ∉ E}. Suppose that D = (V, E) is a dependency digraph for the above events and suppose there are real numbers x1, ..., xn such that 0 ≤ xi < 1 and P[Ai] ≤ xi \prod_{(i,j)∈E} (1 − xj) for all 1 ≤ i ≤ n. Then P[∩_{i=1}^{n} \bar{Ai}] ≥ \prod_{i=1}^{n} (1 − xi).
In particular, with positive probability no event Ai holds.
In practice, the following version is usually the most useful.
Theorem A.3.2 (The Local Lemma; Symmetric Version). Let A1, ..., An be events in an arbitrary probability space. Suppose that each event Ai is mutually independent of all the other events Aj except at most d of them, and that P[Ai] ≤ p for all 1 ≤ i ≤ n. If 4pd ≤ 1, then P[∩_{i=1}^{n} \bar{Ai}] > 0.
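A standard application (our illustration, not from the thesis): let Φ be a CNF formula in which every clause contains k distinct variables and shares a variable with at most 2^{k−2} other clauses. Under a uniformly random truth assignment, let Ai be the event that the i-th clause is violated; then P[Ai] = 2^{−k}, and Ai is mutually independent of the events of all clauses sharing no variable with clause i. Taking p = 2^{−k} and d = 2^{k−2} gives 4pd = 1, so Theorem A.3.2 shows that some assignment satisfies Φ.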
A.4 Concentration Inequalities
The first moment method states that a random variable X is at most E[X] with positive probability.
Often we would like to show that X is very close to E[X] with very high probability. If this is the case, we say that X is concentrated. Concentration inequalities are widely used in probabilistic combinatorics. The most common example of a concentrated random variable is the binomial, which is defined as follows. Suppose X = \sum_{i=1}^{n} Xi, where each Xi is a random variable that takes value 1 with probability p and 0 otherwise. If the Xi are all mutually independent, we say that X is a binomial random variable and write it as X = BIN(n, p). The following theorem, due to Chernoff, shows that the binomial random variable BIN(n, p) is strongly concentrated around its mean np.
Theorem A.4.1 (Chernoff Bound). For any 0 ≤ t ≤ np, P[|BIN(n, p) − np| > t] < 2e^{−t²/(3np)}.
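For a quick numerical illustration (ours, not from the thesis): BIN(10000, 1/2) has mean np = 5000, and taking t = 300 in Theorem A.4.1 gives P[|BIN(10000, 1/2) − 5000| > 300] < 2e^{−300²/(3·5000)} = 2e^{−6} ≈ 0.005.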
Chernoff-type bounds generalize to other random variables which are functions of independent trials. The following theorem (see ) is an example of one such generalization.
Theorem A.4.2 (Simple Concentration Bound). Let X be a random variable determined by n independent trials T1, ..., Tn, and satisfying the property that changing the outcome of any single trial can affect X by at most c. Then P[|X − E[X]| > t] ≤ 2e^{−t²/(2c²n)}.
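For example (a standard illustration, not from the thesis): throw m balls independently and uniformly at random into n bins and let X be the number of empty bins. The trials are the m ball placements, and moving a single ball changes X by at most 1, so c = 1 and Theorem A.4.2 gives P[|X − E[X]| > t] ≤ 2e^{−t²/(2m)}. With m = n we have E[X] = n(1 − 1/n)^n ≈ n/e = Ω(n), so taking t to be a small constant fraction of E[X] already makes this bound exponentially small.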
Typically, c in Theorem A.4.2 is a constant not depending on n, and t is a constant fraction of E[X]. Therefore, the above bound is generally good when E[X] = Ω(n). Fortunately, under some additional conditions we can still get strong concentration of X even if E[X] = o(n). This can be achieved by Talagrand’s Inequality. The original inequality yields concentration around the median Med(X) of a random variable X.
Theorem A.4.3 (Talagrand’s Inequality (Median)). Let X be a nonnegative random variable, not equal to 0, which is determined by n independent trials, T1, . . . , Tn and satisfies the following conditions for some c, r > 0: 1. Changing the outcome of any single trial can affect X by at most c.
2. For any s, if X ≥ s, there are at most rs trials whose outcomes certify that X ≥ s.
Then for any 0 ≤ t ≤ Med(X), P[|X − Med(X)| > t] ≤ 4e^{−t²/(8c²r·Med(X))}.
To be clear, condition 2 states that for any s, there is a set of trials T_{i_1}, ..., T_{i_{f(s)}} for some f(s) ≤ rs so that changing the outcomes of all the other trials cannot cause X to be less than s. In other words, showing the outcome of the trials T_{i_1}, ..., T_{i_{f(s)}} is sufficient to demonstrate that X ≥ s.
A problem with Theorem A.4.3 is that medians are often difficult to compute and therefore the inequality may not be easy to apply. Fortunately, there exists the following version of the inequality that replaces Med(X) with E[X].
Theorem A.4.4 (Talagrand’s Inequality (Mean)). Let X be a nonnegative random variable, not equal to 0, which is determined by n independent trials, T1, . . . , Tn and satisfies the following conditions for some c, r > 0: 1. Changing the outcome of any single trial can affect X by at most c.
2. For any s, if X ≥ s, there are at most rs trials whose outcomes certify that X ≥ s.
Then for any 0 ≤ t ≤ E[X], P[|X − E[X]| > t + 60c\sqrt{rE[X]}] ≤ 4e^{−t²/(8c²rE[X])}.
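A classical illustration (ours, not from the thesis): let X be the length of the longest increasing subsequence of n independent uniform random numbers. Changing a single number changes X by at most 1, and if X ≥ s then the s positions of some increasing subsequence certify this, so we may take c = r = 1. Since E[X] = Θ(\sqrt{n}), Theorem A.4.4 concentrates X in a window of order n^{1/4} around its mean, whereas Theorem A.4.2 only gives a window of order \sqrt{n}.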
Bibliography
M. Albertson, private communication to B. Mohar; see mohar/Problems/P0207AcyclicPartitions.html
M. Albertson, D. Berman, A conjecture on planar graphs, Graph Theory and Related Topics, ed. by Bondy and Murty, Academic Press, 1979, p. 357.
N. Alon, J. Pach, J. Solymosi, Ramsey-type theorems with forbidden subgraphs, Combinatorica 21 (2001), 155–170.
N. Alon, J. Spencer, The Probabilistic Method, Wiley, 1992.
G. Araujo-Pardo, M. Olsen, A conjecture of Neumann-Lara on infinite families of r-dichromatic circulant tournaments, Discrete Mathematics 310 (2010), 489–492.
J. Bang-Jensen, G. Gutin, Digraphs. Theory, Algorithms and Applications, Springer, 2001.
E. Berger, K. Choromanski, M. Chudnovsky, J. Fox, M. Loebl, A. Scott, P. Seymour, S. Thomasse, Tournaments and Colouring, preprint.
D. Bokal, G. Fijavž, M. Juvan, P. M. Kayll, B. Mohar, The circular chromatic number of a digraph, J. Graph Theory 46 (2004), 227–240.
B. Bollobás, Chromatic number, girth and maximal degree, Discrete Math. 24 (3) (1978), 311–314.
B. Bollobás, N. Sauer, Uniquely colourable graphs with large girth, Canadian Journal of Mathematics 28 (1976), 1340–1344.
O. V. Borodin, On acyclic colorings of planar graphs, Discrete Mathematics 25 (1979), 211-236.
O. V. Borodin, Problems of colouring and of covering the vertex set of a graph by induced subgraphs. Ph.D. Thesis, Novosibirsk State University, Novosibirsk, 1979 (in Russian).
R. L. Brooks , On Coloring the Nodes of a Network, Proc. Cambridge Philos. Soc. 37 (1941), 194–197.
R. Brualdi, Spectra of digraphs, Linear Algebra Appl. 432 (2010), 2181–2213.
G. Chartrand, H. V. Kronk, C. E. Wall, The point-arboricity of a graph, Israel J. Math. 6 (1968), 169–175.
T. Cormen, C. Leiserson, R. Rivest, C. Stein, Introduction to Algorithms (2nd Edition), MIT Press and McGraw-Hill, 2001.
D. M. Cvetkovic, M. Doob, H. Sachs, Spectra of Graphs, 3rd Ed., Johann Ambrosius Barth Verlag, 1995.
K. Dohmen, A broken-circuits-theorem for hypergraphs, Arch. Math. 64 (1995), 159–162.
E. Drgas-Burchardt, E. Lazuka, Chromatic Polynomials of Hypergraphs, Applied Mathematics Letters 20 (2007), 1250–1254.
R. C. Entringer, P. Erdős, C. C. Harner, Some extremal properties concerning transitivity in graphs, Periodica Mathematica Hungarica 3 (1972), 275–279.
P. Erdős, Some remarks on the theory of graphs, Bulletin of the American Mathematical Society 53 (1947), 292–294.
P. Erdős, Graph theory and probability, Canadian Journal of Mathematics 11 (1959), 34–38.
P. Erdős, On circuits and subgraphs of chromatic graphs, Mathematika 9 (1962), 170–175.
P. Erdős, J. Gimbel, D. Kratsch, Some extremal results in cochromatic and dichromatic theory, J. Graph Theory 15 (1991), 579–585.
P. Erdős, A. Hajnal, Ramsey-type theorems, Discrete Mathematics 25 (1989), 37–52.
P. Erdős, A. L. Rubin and H. Taylor, Choosability in graphs, Proc. West Coast Conf. on Combinatorics, Graph Theory and Computing, Congressus Numerantium XXV (1979), 125–157.
P. Erdős, G. Szekeres, A combinatorial problem in geometry, Compositio Math. 2 (1935), 436–470.
T. Feder, P. Hell, B. Mohar, Acyclic homomorphisms and circular colorings of digraphs, SIAM J. Discrete Math. 17 (2003), 161–169.
M. R. Fellows, J. Kratochvil, M. Middendorf, F. Pfeiffer, The complexity of induced minors and related problems, Algorithmica 13 (1995), 266–282.
T. Gallai, Kritische Graphen I, Publ. Math. Inst. Hung. Acad. Sci. 8 (1963), 373–395.
C. Godsil, G. Royle, Algebraic Graph Theory, Springer, 2001.
A. Harutyunyan, M. Kayll, B. Mohar, L. Rafferty, Uniquely D-colorable digraphs of large girth, submitted to The Canadian Journal of Mathematics.
A. Harutyunyan, B. Mohar, Gallai’s Theorem for List Coloring of Digraphs, SIAM Journal on Discrete Mathematics 25(1) (2011), 170–180.
A. Harutyunyan, B. Mohar, Strengthened Brooks Theorem for digraphs of girth three, submitted to The Electronic Journal of Combinatorics.
P. Hell, J. Nesetril, Graphs and Homomorphisms, Oxford Lecture Series in Mathematics and its Applications, 2004.
R. A. Horn, C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
T. Jiang, Small odd cycles in 4-chromatic graphs, J. Graph Theory 37 (2001), 115–117.
A. Johansson, Asymptotic choice number for triangle free graphs, DIMACS Technical Report (1996) 91–95.
H. Kierstead, E. Szemerédi, W. T. Trotter, On coloring graphs with locally small chromatic number, Combinatorica 4 (1984), 183–185.
A. V. Kostochka, N. P. Mazurova, An inequality in the theory of graph coloring, Met Diskret Analiz 30 (1977), 23-29 (in Russian).
A. V. Kostochka, M. Stiebitz, B. Wirth, The colour theorems of Brooks and Gallai extended, Discrete Mathematics 162 (1996) 299–303.
H. V. Kronk, J. Mitchem, Critical point arboritic graphs, J. London Math. Soc. 9 (1974/75), 459–466.
H. Lin, J. Shu, Spectral radius of digraphs with given dichromatic number, Linear Algebra and its Applications 434 (2011), 2462–2467.
C. McDiarmid, B. Mohar, private communication, 2002.
B. Mohar, Circular colorings of edge-weighted graphs, J. Graph Theory 43 (2003), 107–116.
B. Mohar, Eigenvalues and colorings of digraphs, Linear Algebra and its Applications 432 (2010) 2273–2277.
M. Molloy, B. Reed, Graph Colouring and the Probabilistic Method, Springer, 2002.
J. Nešetřil, X. Zhu, On sparse graphs with given colorings and homomorphisms, Journal of Combinatorial Theory, Ser. B 90 (2004), 161–172.
V. Neumann-Lara, The dichromatic number of a digraph, J. Combin. Theory, Ser. B 33 (1982) 265–270.
V. Neumann-Lara, The 3 and 4-dichromatic tournaments of minimum order, Discrete Mathematics 135 (1994), 233–243.
V. Neumann-Lara, Vertex critical 4-dichromatic circulant tournaments, Discrete Mathematics 170 (1997), 289–291.
V. Neumann-Lara, Dichromatic number, circulant tournaments and Zykov sums of digraphs, Discussiones Mathematicae Graph Theory 20 (2000), 197–207.
V. Neumann-Lara, J. Urrutia, Vertex critical r-dichromatic tournaments, Discrete Mathematics 49 (1984), 83–87.
A. Nilli, Short odd cycles in 4-chromatic graphs, J. Graph Theory 31 (1999), 145–147.
A. Raspaud, W. Wang, On the vertex-arboricity of planar graphs, European Journal of Combinatorics 29 (2008), 1064–1075.
B. Reed, ω, ∆, and χ, J. Graph Theory 27 (1998), 177–212.
B. Reed, N. Robertson, P. Seymour, R. Thomas, Packing Directed Circuits, Combinatorica 16(4) (1996), 535–554.
P. Seymour, communicated in the DIMACS Workshop on Graph Colouring and Structure, May 7-11, 2009.
J. Spencer, Random regular tournaments, Periodica Mathematica Hungarica 5 (1974), 105–120.
J. Spencer, C. R. Subramanian, On the size of induced acyclic subgraphs in random digraphs, Discrete Mathematics and Theoretical Computer Science 10(2) (2008), 47–54.
C. Thomassen, Color-critical graphs on a fixed surface, J. Combin. Theory, Ser. B 70 (1997) 67–100.
A. Vince, Star chromatic number, J. Graph Theory 12 (1988), 551–559.
D. West, Introduction to Graph Theory, 2nd edition, Prentice Hall, 2001.
H. S. Wilf, The eigenvalues of a graph and its chromatic number, J. London Math. Soc. 42 (1967), 330–332.
X. Zhu, Star chromatic numbers and products of graphs, J. Graph Theory 16 (1992), 557–569.
X. Zhu, Uniquely H-colorable graphs with large girth, J. Graph Theory 23 (1996), 33–41.
X. Zhu, Circular chromatic number: a survey, Discrete Mathematics 229 (2001), 371–410.
The geography of spheres: an introduction and critical assessment of Peter Sloterdijk's concept of spheres
Huib Ernste
With his three-volume magnum opus on spheres, Peter Sloterdijk introduces a
critical philosophical and cultural view of the spatiality of current
society. His spatial metaphors serve as an intriguing source for inspiration
for geographers. He describes the topological conditions of society by means
of three different forms of spherical conditions of life: bubbles, globes,
and foams. To understand, assess, and critique our current society we,
according to Sloterdijk, need to replace the arrogant and cynical academic
view of Plato and his followers with the more serene composure of the
kinetic view of Diogenes. In this contribution, on the one hand we shall
elaborate the spatial metaphor Sloterdijk uses. On the other hand we want to
scrutinise Sloterdijk's ideas by drawing some parallels between his ideas and
those of other philosophical anthropological thinkers. Finally, we very
briefly want to point to a suitable conceptual framework for empirically
investigating the spherology of human being in the world.
Ernste, H.: The geography of spheres: an introduction and critical assessment of Peter Sloterdijk's concept of spheres, Geogr. Helv., 73, 273–284, 2018.
Peter Sloterdijk has written almost about everything, and in doing so has
developed a great number of inspiring as well as provocative new ideas and
new critical perspectives on old ideas. You love him or you hate him. He is
sometimes accused of being a philosophical knock, knock, ginger prankster – a
thinker who yells something and then quickly hides away. He loves to throw
out some grand ideas, out of the blue, in a language which is bombastic,
swollen and full of neologisms and with which he amuses but also confuses his
audience. As Koen Haegens (2011) in a
review essay in De Groene Amsterdammer once wrote, “when you read
Sloterdijk you regularly get the feeling that with his wildest assertions he
does not do justice to the facts. But before you are able to pin that down,
the philosopher is already two, or three steps further in his
argumentation”, and you stay back, helplessly baffled. Carlin Romano (2012),
writing for the The Chronicle of Higher Education, in the first instance
describes Peter Sloterdijk as a hip European philosopher, choosing obscurity
over clarity using abstract language and neologisms, the uglier the better,
referring en passant to the endlessly interpretable giants of
continental tradition, being prolific even if he does not have much to say,
and rigorously avoiding clear-minded science, as if philosophy commands its
own territory and outsiders must pay a literacy fee at the door. In the second
instance he unveils some more of Sloterdijk's more substantial ideas.
Peter Sloterdijk loves to provoke and to think the unthinkable and in
eloquently doing so, he does not care for precision, thoroughness, or
completeness. As such it is not surprising that the interviews he gave and
public conversation he had are summarised in a book with the title
Selected Exaggerations (2016a).
Even if his work seems to be rather eclectic and fuzzy, there is also a basic
line of spatial argumentation in his work, which makes him, especially
for geographers, a very interesting and inspiring thinker. In his magnum opus,
the trilogy Spheres, the first volume of which was titled
Bubbles (2011), the second Globes (2014) and the concluding
one Foams (2016b), he develops his main ideas about the spatiality of the human
being. It has been attempted several times to summarise this more than 2500-page trilogy, but because of his style of writing, the many very diverse
examples, and his ubiquitous use of neologisms, the secondary literature also
presupposes a lot and is often difficult to digest for less philosophically
engrained geographers, and therefore only addresses an in-crowd, who actually
did not need it to gain access to the world of Sloterdijk. But as
Nigel Thrift already noted, Sloterdijk writes as a philosopher, but to
underpin his story, he draws on many empirical cases from a wide
variety of sources (Thrift, 2009, p. 125). So he pretends to do more than
just philosophy. I will, therefore, critically interpret his work from the
social scientific perspective and look at it as a hypothetical social
theory, based on philosophical insights, about the relation between human
being, and space and place.
In this contribution I will briefly describe the main points of Sloterdijk's
Spheres trilogy before I discuss some parallels and
critiques. I will mainly focus on one important, but often overlooked parallel,
namely with the philosophical anthropology of Helmuth Plessner.
Peter Sloterdijk claims that his theory of spheres is an elaboration of the
spatiality of Martin Heidegger's Being and Time (1927). Helmuth Plessner, as a contemporary of Martin Heidegger,
already developed a spatial theory of human being and thus anticipated many
of Sloterdijk's ideas. But if one compares them, an important critical
difference with Sloterdijk's conceptualisation of spheres is also unveiled, which
underscores the topicality of Helmuth Plessner's contribution to the current
debate. The debate on the ontological foundations of current
conceptualisations of the relationship between a human being and space is of
course very important, but only indirectly helps geographers to do empirical
research. We therefore need a more detailed conceptual framework with which
we could address the different aspects of this relationship between a human
being and space. For this purpose many different conceptual frameworks might
be useful or could be developed further in these respects. In the last part
of this contribution, as a very brief outlook, I will point to the current practice-theoretical turn as one possible promising conceptual framework for
empirically investigating the role of spheres in today's society.
Sloterdijk's philosophical starting point is Martin Heidegger's Being and Time (1927), in which Heidegger dealt
with the temporality of human existence (Dasein), which Sloterdijk
tries to reformulate in a philosophy of Being and Space
(Noordegraaf-Eelens and Schinkel, 2005). In doing so he also tries to
criticise the dominant analytical and instrumental way of looking at the
world, in which it is assumed that we can take the world apart and divide it
up in its components and understand the causal relations between them, in
such a way that we instrumentally manipulate the world in whatever way we
like. It is this latter analytic and instrumental view which puts a human being
as the manipulator of the world at the centre of the world and at the same time
apart from it, from where he can rule the world. Sloterdijk tries to rethink
our relation to the world by not starting with the individual in the
face of the world, but by noting that to be human already implies that we are
taking part in an intimate space that we share with other human beings and with
other objects. In his view we cannot even think of ourselves if not as part
of this sphere. This sphere is, however, not clearly demarcated or
bordered, but is a rather diffuse feeling of connectedness. Spheres are
affective orderings of living together (Boos, 2013, p. 55).
This affectivity is an important element in Sloterdijk's thinking about
spheres. Like in all phenomenological approaches the embodiment of the human
being plays a central role. The lived body (Leib) is the starting
point and through embodiment we constitute the world. The lived body unites
the physical body (Körper) with the mind, and therefore overcomes
the separation of the physical outer world and mental inner world. They are both
an integral part of the human lifeworld and cannot be segregated from each
other. We observe the world through our bodily senses and through our bodily
movements and observations, we make sense of the world, and we experience this
sense in the form of a (spatial) ordering of experiences and meanings around our
bodily being. We thus create a topology of our lifeworld, with regions
closer by and regions which are at a further distance (Boos, 2013, p. 62).
Sloterdijk extends this view by not putting the individual subjective
embodiment, but the dividual con-subjective embodiment at the centre of his
phenomenology of spheres. So it is not through our individual experience of
the world, but through a joint clearing, conceding, and giving space
(Einräumen1), a joint creation of a topological network of relations, that we
create our sphere (Sloterdijk, 2012)2. The topological
replaces the transcendental (Malpas, 2012, p. 78; Günzel, 2007).
The term “sphere” used by Sloterdijk in this sense is not thought of in a
territorial way, but rather in a relational way. Maybe it is even better to
compare it with a network of relations, which somehow caries us as human
beings and in which human beings emerge as one node among others out of the
densification of the network. This network, however, has no clear borders.
Some relations reach further than others. Even though the metaphor of a
network seems telling in this respect, Sloterdijk does not prefer the term
“network”, because, in his view, this still suggests too much that the
human being is at the centre of this network. One could also associate the
idea of a sphere with the idea of a rhizome of Gilles Deleuze. A rhizome can
extend in all directions without having a clear core. As such, Sloterdijk's
idea of this being, not as an isolated lonely creature, but as part of an
intimate sphere, does not really allow the experience of an “outside”. There is
no initial outside. The outside is at best something we create from the
inside. So in the first instance we co-exist in a sphere, and only in the second
instance do we exit as individuals differentiated from the outside
other (van Tuinen, 2006, p. 48). According to Peter Sloterdijk, in a
sphere, we are never alone. A sphere is always a shared space. Dasein is
always a being “with” and a being “within”. The idea of an individual is,
instead, a derived, secondary phenomenon. In the first instance we are not an
individual, but as Nietzsche (1886) called it, we are a
dividual3. In the formulation of Heidegger the human being is inherently
standing out in the openness of being, is ecstatic (van Tuinen,
2004, p. 55), and this is seen as structurally, immanently given. Sloterdijk
therefore describes Dasein as ecstatic immanence (2011, p. 625). As a dividual,
we are more or less footloose within our sphere and are both here and there.
Spheres thus are characterised by a multiplicity of different positions.
Spheres inherently comprise more than one person, so they are by definition
communities of dividuals. But this description also needs to be interpreted
carefully. All too easily, one could assume that within a sphere there are
two or more clearly distinguishable individuals or individual positions.
Sloterdijk, however, assumes persons within a sphere to be real dividuals, to
be inherently entangled, and part of each other.
The original idea of the philosophy of consciousness – that there is a real
“I” which has a clear identity and position in the world and as such has
a specific place in which it is at home, can feel intimately secure, and
can be who it is – is an illusion according to Sloterdijk. Being
in a sphere is an act of creation. Spheres, with their inherent
multiplicities, challenge us to actively create a home for ourselves.
Spheres, therefore, need to be taken care of and need to be created by
ecstatic creatures who feel how the outside, the unfamiliar, the unfaithful,
the strange and far away, which are socially constructed from the inside,
affect them (Sloterdijk, 2011, p. 28). Through these creative actions, the
human beings in a sphere jointly attempt to immunise and protect their sphere
from the “monstrous” outside. This is not the act of an individual subject
in the face of the big world out there, but an act of what Sloterdijk calls the
“con-subject” seeking a secure home.
According to Sloterdijk, in our western thinking in terms of unities and
substances and as independent knowing subjects, we seem to have forgotten
the con-subjectivity and floating relationality of our being in the world
(van Tuinen, 2004, p. 91ff.). This shifts the subjectivity from the
individual subject to the con-subjectivity of the sphere, or as Boos (2013,
p. 69) formulates it in the terminology of Heidegger, “Through the shift
from subject to Dasein the initial perspective changes to the human community
as a whole, from subject as producer of a lifeworld to the community as
creative constructor of spheres of strong ties”.
By the con-subjective immunising strategies, the sphere is to a certain
degree being insulated from the outside world by creating shared norms and
values of how to jointly deal with irritations and intrusions from the
outside world. This usually takes place by means of a combination of
internalisation, externalisation, objectivation, and routinisation (Boos,
2013, p. 73), and is described by Sloterdijk as the “air conditioning” of the
sphere. This to a certain degree reduces the complexity of living
together, but Sloterdijk immediately adds that this is not the general
mechanism of complexity reduction, which Niklas Luhmann describes in his
theory of social systems (Borch, 2013), because in his view human beings
actually create a lot of complexity in dealing with each other and with their
situation. And it is this complexity which also allows the con-subject to
react creatively in different ways, in different situations, and on different
occasions. In this way the sphere can also adapt itself to new situations and
can even adopt and internalise parts of the outside world in its own sphere,
partly also changing the character of the sphere as a whole. Or one could
also describe this as the co-productive transmission of parts of the own
sphere to the outside world, whereby the unfamiliar and distrusted outside is
transformed into the familiar and trusted, extending the “comfort zone”. One
aspect of this complexity within a sphere is also that human beings
taking part in this sphere are usually taking part in other spheres as well
and thus are also actively involved in the transmission between these
spheres. Taking care of the inside is inherently entangled with taking care
of the outside. So irrespective of the continuous immunising strategies,
spheres are never fully closed entities, but always comprise multiplicities
(Elden and Mendieta, 2009, p. 7). Through this continuous creative production
of spheres the “community” protects itself from the naked outside world,
but also creates a positionality and identity which enables communication and
interaction with and relating to the outside world (see also Fig. 1).
Spheres, therefore, mediate between the inside and the outside. They are
inner worlds which enable the human being to inhabit the outside world
(Lemmens and Zwart, 2004, pp. 5â6).
Figure 1. Relating to the outside world from a secure bubble. Aquarelle by
Rien Poortvliet (1973) Te hooi en te gras. van Holkema & Warendorf, Bussum. Source: Ernste (2016, p. 43).
If, through the step-by-step extension of the sphere into the outside,
larger and more comprising spheres come into being, we might think of
finally ending up at the scale of a global sphere, an all-comprising, total,
overall, singular, borderless sphere. But according to Sloterdijk such a
global sphere must be an illusion, as, following Sloterdijk, every
de-bordering is accompanied by re-bordering, and as a consequence living
together on a global scale does not reduce complexity but actually increases
complexity as we cannot be unified under one single institutionalised
normative whole. Sloterdijk outs himself here as a theorist of globalisation,
and the Spheres trilogy becomes a historical description of different
stages of globalisation and sphere making.
In Bubbles (2011) Peter Sloterdijk develops his ontological view
that the human being is never alone, but is always accompanied by other human
beings and things in a shared living space. To underpin this thesis he goes
back to being born into this world, through which the original intimate
relationship with the mother is broken up, and to the experience of floating
in and with each other and of being in between (p. 139). The
original double-unity of mother and child is described as a pure inner space
without an outer space. In this pre-birth primal sphere we could speak of a
pre-eminent con-subjectivity. Being born into this world in this respect is a
primordial catastrophe, which is the exemplary event for all later
destructions and transformations of spheres (van Tuinen, 2006, p. 54), which
causes our lifelong search for new relations, or as Sloterdijk calls it, new
mediated resonances. To speak with Marijn Nieuwenhuis (2014, p. 21): “The longing for the perfect
union in the bubble of the broken womb will, as we are told, throughout the
subject's lifetime compel her to travel, create, and dwell in many different
spheres”. By suggesting this con-subjectivity as a kind of pre-subjectivity
Sloterdijk also counters the classical philosophical idea that we should
start thinking from the premise of a subjectâobject dichotomy. Subject and
object are not divided, and a new view of cultural and natural objects, which
comprise a sphere, comes about. In this micro-sphere we become aware
that everyone and everything we encounter takes part of us or takes part in
us, and we experience ourselves as a penetrable and receptive body. In the
view of Sloterdijk (2011, p. 94), this, by the way, also disqualifies the
enlightenment dream of human autonomy and individuality, and the myth of
modernity assuming humans as individuals in harsh competition for survival in
a state of war, based on the anthropology of a “pure”, “born alone”,
“solitary” individual without any “being with” (Couture, 2009, p. 158).
The neglect of the individual or the subject is not new and can also be
observed in the thinking of Niklas Luhmann's Social Systems Theory, and for
example in the work of Michel Foucault and his followers. Both of them emphasise
communication and discourse rather than individual subjectivity. Sloterdijk,
however by and large replaces communication with imitation, a term
he borrows from Gabriel Tarde (1903), related to the non-linguistic
contagiousâaffective relationships between con-subjects, which Tarde
describes as a kind of somnambulistic suggestion. “The individual and his or
her desires, inclinations, gestures, etc., are seen as hypnotically
transmitted and therefore not specific or characteristic to the individual in
question” (Borch, 2009, p. 229). Imitation between members of a sphere
founds a kind of anonymous “group mind”. The individual is thus nothing
more than a node of various rays of imitation. This kind of mimetic
suggestion thus undermines the notion of individuality and at the same time
emphasises affect rather than deliberation and conscious choices and
purposive action.
In Globes (2014), in the wake of Friedrich Hegel
(Phenomenology of Spirit) and Oswald Spengler (The Decline of the West), Peter Sloterdijk provides us with a morphological history of
globalisation by distinguishing three periods of globalisation – the
metaphysical, the terrestrial, and the contemporary period of foams (Morin,
2009, p. 58). The first metaphysical phase of globalisation is, according to
Sloterdijk, based on the conviction that the best strategy with which to immunise the
interior is by integration of the outside. “In this phase, the goal of
human existence is the construction of a metaphysical globe, an
all-encompassing sphere in which humans could find a sense of security, of
immunity. By swallowing up the outside, this absolute totality (under the
form either of a cosmos or of a God) is supposed to be in a position to offer
absolute immunity to its inhabitants” (Morin, 2009,
p. 62). The internal ordering is prescribed by its final Aristotelian
cosmological teleological structure striving towards perfection, where
everything has its assigned place. Also, the politics in such a spherical
community would be directed towards keeping everything turning around its
centre (Morin, 2009, p. 63). With this objective, the
individual is subordinated to the divine centre. In the classical metaphysics
to protect the mortal individual one assumed the eternal, which actually
ignores every individuality. In face of God we are all equal. This logic does
not really change after Kant's Copernican turn, because it then becomes
reason which directs us towards the anticipated transcendental idea of a
universal whole (as if we know what the world is teleologically directed to).
The kind of politics related to this view is the politics in which the
particular and local being is replaced by being a citizen of the whole, of
the cosmopolis, and being part of a world government or a universal culture.
According to Sloterdijk and following Nietzsche, however, such a creation of a
total immune sphere is doomed to fail, because it lacks a unifying outside.
An absolute sphere with no outside, or to repeat a dictum of Blaise Pascal,
“an infinite sphere whose centre is everywhere and whose circumference is
nowhere”, cannot be used by anyone to create a sphere of intimacy.
Instead of offering absolute protection, it ends up offering no protection at
all and negates all human demands for immunity (Sloterdijk, 2014,
pp. 526–528 as quoted in Morin, 2009, p. 64).
The metaphysical focus on the eternal whole changes slowly but surely,
according to Sloterdijk, with the discoveries of Copernicus. From then on,
one did not so much seek a spiritual whole as an eternal sphere, but one
sought a terrestrial, territorial whole as a global sphere. The vertical
transcendence is now replaced by a horizontal transcendence, implying the
conquest of the outer world. God seekers become state seekers (van Tuinen,
2006, p. 57). By means of imperialistic strategies that are used to conquer the world, one
tried to accommodate and assimilate the outside into an inside. One wanted to
control the whole world. Microspheres thus coalesce to macrospheres. But also
in such a global sphere one is bound to fail, as, without an outside, the
destructive influences come from the inside. Larger communities, therefore,
do not automatically lead to greater immunity, as, according to Sloterdijk,
was shown by the fall of the Roman Empire (van Tuinen, 2006, p. 58).
In Foams (2016b), he describes how we
break with these globalising tendencies when, through the immense speed with
which goods, human beings, capital, and information flash around the globe,
we lose our centre and notice that where everything has become a centre, we
do not have a valid centre anymore. The virtual space has become the overall
outside, which cannot be internalised anymore (Sloterdijk, 2011, p. 66). We
become footloose and homeless. In this third phase of globalisation, we lose
the typical spherical form or being in the world and our existence becomes
rather formless, which Sloterdijk tends to describe as foam, an irregular
agglomeration of bubbles. As Morin (2009, p. 67) describes it, “each bubble
is a ‘world’, a place of sense, an intimate room that resonates or
oscillates with its own interior life” while at the same time being
connected to all other bubbles and therefore highly interdependent of each
other. It could be described as the connected isolation of living apart
together in a system of co-fragility and co-isolation (Morin, 2009, p. 67). From the inside of each foamy bubble, one does not
have a view of the whole, but only of the adjacent bubbles. In contrast to
the metaphysical or terrestrial globes, for these foamy bubbles there is an
overall outside, and for every inside there is a related outside from which one is not
fully immunised. Without this overall outside there can also be no ruling
from the whole to the multitude of its parts, or resistance from the parts
towards the whole. âEach bubble resists its dissolution and integration into
a whole or a uniform sphere but without being opposed to or directly fighting
against it since each of them requires the whole for its own stabilisation”
(Morin, 2009, p. 68).
Sloterdijk also characterises the topological structure of these foamy
spheres – “… closely bound to the hominisation process:
chirotope (the hand accessible domain), phonotop (the vocal
bell under which coexisting beings listen to each other), uterotop
(the maternal zone and its early social metaphorisation), thermotop
(the heated circle of comfort), erotop (the place for primacy erotic
energy transfer), ergotop (the shared spirit of cooperation in
common work), aethotop (the continuity of the collective world view),
theotop (the space of revelation for elders and gods), and
nomotop (the social architecture and its political constitution).
These are seen as the promising fields of inquiry for any future spatial
analysis of humans as in-world insulated creatures” (Couture, 2009, p. 162).
In retrospect, this seems very much an inside-oriented topology, and one is
tempted to think of additional or alternative ways of describing and
categorising such a spherological topology.
Human beings in these foamy spheres will need to take care of themselves from
these small foam-bubbles. They need to position themselves or are being
positioned in these topological structures. This is, therefore, not big
politics but small politics (van Tuinen, 2004, p. 83).
The conservation of the personal foam-bubble then becomes a condition for
solidarity. This kind of self-care or limited care stands in stark contrast
to the global responsibility of taking care of the whole even if it implies
acting against one's own particular interests (van der Ven, 2002,
pp. 503–507). In contrast to classical politics, the politics in foams does
not address the belonging to an overall whole, but is a politics of
self-regeneration and self-continuation that Sloterdijk calls “hyperpolitics”
(Morin, 2012, p. 68). This is a shift from macromanagement to
micromanagement. Accordingly Sloterdijk pleads for a politics of
dispassionateness, or to use a phrase borrowed from Georg Simmel, of a
disengaged blasé political attitude towards other spheres. This is a light,
frivolous and floating attitude, opposing the heavy, demanding and pressing
character of the totalising pretentions of the global whole.
In the following I want to focus on an alternative view of the ontological
aspects of the concept of spheres and leave the issue of a theory of
globalisation aside, which is how Sloterdijk's spherology is also read.
Although Sloterdijk's writings are very baroque and evocative and therefore
thought provoking and inspiring, his thinking is to a large degree not
totally new and finds many parallels, even in writings he fully ignores or
tends to criticise. Without going into the nitty gritty details and without
the pretention of completeness, which anyhow is also not Peter Sloterdijk's
style, let me just observe a number of them and discuss the issues for debate
related to them.
Peter Sloterdijk usually presents his ideas by lustfully breaking all kind of
taboos, sometimes even causing a scandal. If one reads his texts closely, it
is striking how many valuing adjectives he uses, without really underpinning
these implicit judgments. The inherent provocation makes it sound like something
totally new and unheard of, but the basic philosophical ideas he presents are
to a large part not that new at all. For example, already in the 1920s, in
the year after the publication of Heidegger's Being and Time, the German
philosopher Helmuth Plessner designed a general theory of being and space
and created an alternative philosophy of human being, through which he also
revised the traditional European humanism. With his book Die Stufen des Organischen und der Mensch (1975) he formulated from a
critical-phenomenological point of view a philosophical anthropology from a
spatial perspective. Interestingly enough, Peter Sloterdijk does not seem to
mention or acknowledge Helmuth Plessner at all, but that is not to say that he fully
ignores Helmuth Plessner's work, even though he, as a philosophical glutton,
must be well aware of it (van Tuinen, 2004, p. 103). The Helmuth Plessner
Association, on the other hand, together with the municipality of Wiesbaden
(the place where Plessner lived for a long time) awarded Peter Sloterdijk the
Helmuth Plessner prize 2017, but not without creating a scandal within
the Helmuth Plessner Association itself. So, one is tempted to say that therefore the
bubble around Sloterdijk seems to
have a hypnotic mimetic effect on the bubble of Helmuth Plessner scholars almost in a spherological way.
In the heydays of philosophical anthropology in the 1920s, under the
influence of the revolutionary developments in the natural sciences in the
second half of the nineteenth century, it was the main endeavour of the
philosophical anthropology to rethink the special position of the human being and
teleological phenomena, by either assuming an Aristotelian notion of entelechy
denoting the vital function of a living organism, which actualises a vital
potential and gives form to the matter it is comprised of (Hans Driesch), or
by assuming a divine spiritual metaphysical dimension (Max Scheler) (de Mul,
2014, p. 458). It is this kind of philosophical anthropology which came under
attack after World War II with opposition to its alleged essentialism and
anthropocentrism (de Mul, 2014, p. 461). The
philosophical anthropology of Helmuth Plessner is, however, of a different
kind. To a large degree he accepted the materialistic and mechanistic
world view, but at the same time gave a critique by asserting that
this clarifies “how the vital and psychic functions of living
organisms are being materialised, but not what life in its subsequent stages
and various expressions is” (de Mul, 2014,
p. 459). In the same way, he was also critical about the transcendentalist
position of Scheler and Driesch. Similarly to Sloterdijk he assumes that being
in the world, being alive, presumes a unity between the material and the
psychological.
On this basis, Plessner develops important categories of human life and of the
human being in the world from a spatial perspective (see also the more
elaborate account in Ernste, 2004 and 2014). He describes how human beings on one hand live a centric life and are centrically positioned, at the “centre” of
their body and distinguished from the environment by a clear boundary, from
which the human being is directed towards their environment. On the other hand, human beings live an eccentric life or are eccentrically positioned,
from where they can look back on themselves and on their situation but also
look outward as if it is part of their inner life. This is not “a
reproduction of the Cartesian dualism with its separation of bodily existence
and human consciousness. On the contrary, it is an essential element of
Plessner's theory that these are two sides of the same coin” (Ernste, 2004,
p. 444) or what he denotes as double aspectivity. From this perspective human
beings are always aware of the contingency of their current centric
positionality, or one might also say that they are simultaneously aware
of the inside and of the outside. They have a directed
relationship with their immediate environment (Umwelt) but at the
same time also have a view of the world at large (Welt).
Interestingly enough, the boundary which envelops us, according to
Plessner, is not an immunising protective mechanism, but always an interface,
which hides certain aspects from the outer world, but which is also a
projection surface through which the human being expresses itself to the
outer world and is depicted by the outer world and through which it gains
identity and individuality. It is the medium through which the person's
topological being in the world is constituted (Malpas, 2017, pp. 8â9). In
contrast to Sloterdijk, Plessner refrains from using many judgemental
adjectives, and his phenomenological analysis allows different conclusions.
Sloterdijk seems to build a picture of a monstrous outer world, from
which we can only expect threat and danger against which we tend to immunise, while for Plessner boundary work always has two sides. On the one hand it
distinguishes and isolates us, but on the other hand it relates and opens us
to the outer world. While Sloterdijk assumes that we feel safe and
comfortable within our immunising boundaries, therefore implicitly
essentialising our positionality, Plessner notes that from our eccentric
position, we are always aware of the uncomfortable narrowing limitation,
localisation and temporalisation of our centric positionality and thus can
never feel truly at home. We are thus bound to continuously reinvent and
recreate our centred being without ever losing the basic human experience
of the contingency of our being in the world. So there is no such thing as an
immunised place we can call home and therefore also in the foamy globalising
world of Sloterdijk, there is no such thing as conflict-free acquiescence
towards neighbouring and related spheres. Plessner describes human being in
the world in a non-essentialising way as homo absconditus, the
hidden human being, or to paraphrase a famous quote of Robert Musil (2017),
as “a human being without qualities”.
Being human in this world therefore does not just let us retreat behind
immunising borders but actually lets us transgress these borders and venture
into the world and encounter “the other”, seeking a place where we can be
what we are as human beings in this world. Our openness to the world is not
monstrous, but part of our dwelling, or our home. The parallels with
Sloterdijk are striking, but the nuanced differences in
conceptualisation and valuation are also apparent. One other difference between
them seems to be the focus on the affective aspects of the sphere in the
work of Peter Sloterdijk, in contrast to the focus on conscious reflexivity
in the work of Helmuth Plessner. While Peter Sloterdijk, following the work of
Hermann Schmitz (2007, 2010, 2011), decentres the affectivity from the
individual subject to the sphere, where it also figures as an emergent
relationship between different persons, without any conscious intermediation
of the individual subject (Demmerling and Landweer, 2007; Fuchs,
2000), Helmuth Plessner preserves the subjective cognition and centred
performativity of the individual human being, without excluding affective
relationships. This is also clearly reflected in his The Limits of Community: A Critique of Social Radicalism, first published in 1924 (1999)4. In this writing he reacts to
Ferdinand Tönnies' ground-breaking book (2011) Community and Society, but also to the societal and political circumstances of those times,
which seem to a certain degree to resemble current conditions. When Helmuth
Plessner was writing, these were the first years of the Weimar Republic, with
unstable conditions, intense resentment against the rule of law and against
democracy, pressing reparation payments, galloping inflation, and the Hitler Putsch in 1923. In those times, both from the left as well as from the right,
extremist calls could be heard, which often also used the call for community.
These were radical times (Hellmann, 2008, p. 2). The similarities with
current times, with economic uncertainty, political moroseness, populism,
xenophobia, protectionism, and no-future youngsters are obvious, and are partly also
related to the failed globalism Sloterdijk is describing in his
Spheres
trilogy.
In exactly these circumstances Plessner felt the call to write his
critique against social radicalism, which tends to glorify the community, the
“we” against the evil others and which also tends towards an imperialistic
moral radicalism. Plessner sees it as a strength under these circumstances
to vote for society instead of community. Society demands much more from the
individual human being than a community, which tends to take the individual
under one's wing and therefore obliterates the individual
(Hellmann, 2008, p. 3). The (affective) intimacy which
is presupposed in these communitarian spheres cannot simply be superimposed
onto modern or post-modern societies, which with their functional
differentiation anyhow demand a different regime for self-control. And as
Plessner states, the idea of such a communitarian sphere is anyhow an
illusion, since even in archaic communities the complete resorption of a
person by the community does not exist. Even in these situations, for the sake
of human dignity, a minimum of individuality, non-shared intimacy and privacy
is needed. So Plessner does not oppose the idea of community in general,
but points to its limits. To cope with these limits, Plessner suggests, in a
rather pragmatic way, that we should look for compromises with each other and in relation
to the unknown “other”, even if it were the devil, instead of the mere idealistic
blissful repulsion. It is important to note that for Plessner the possibility
for the political in society is based on the anthropological conditions of
a human being in the world and is not just based on his diagnosis of the
historical situation at that time (Edinger, 2017, p. 327). For Plessner
community and society are inviolably dialectically united. The political is
just one side of Plessner's social ontology of everyday Dasein (Krüger,
2016). On this basis, Gesa Lindemann developed the concept of a reflexive
anthropology (Lindemann, 1999), in which both the anthropological conditions
of human being and the historical situation are openly reflected and
can be politicised.
With respect to the critique against the imperialistic idea of a global
community, Peter Sloterdijk and Helmuth Plessner are in one line. But with
respect to the alternative, they clearly differ. While Sloterdijk opts for a
conceptualisation of the (post-)modern world as a world of foams and
suggests small politics and an attitude of composure and limited solidarity,
Helmuth Plessner, in my interpretation, would opt for large politics with
awareness of the limitations and contingencies and therefore without
essentialising a transcendental whole. Bude and Dürrschmidt, in a
thought-provoking paper based on Plessner's conceptualisation of human
being, also ask themselves âWhat's wrong with globalisation?â and come to the
following insight.
“though as a bodily existence always deeply entangled in the
here and now, man is also ‘ahead’ of himself in terms of reflexive distance
towards here and now. Structurally he lives in an open horizon of
possibilities, pressured to solidify some of them into existence by his
ultimately final life trajectory (Plessner, 1975, p. 343). It is this
unalterable human condition of ‘eccentric positionality’, or as one might
also refer to it, as a ‘half-opened being’ (Metcalfe & Ferguson, 2001),
which forces him to ‘lead’ a life in the most literal meaning of the word
(ein Leben ‘führen’)” (Bude and Dürrschmidt, 2010,
p. 494).
So instead of opting for a politics of the non-human, in the course
of the post-phenomenological (Ash and Simpson, 2016, p. 63) thrust towards
embodied consciousness – a rather contradictory move as it lets the
component of consciousness disappear from the embodied consciousness –
Plessner opts for a real double aspectivity of the embodied consciousness
(Richter, 2005).
These different ontological assumptions and historical diagnosis are also at
the core of the fierce debate between Peter Sloterdijk and Jürgen
Habermas. In the first instance this seemed to be only about the Nazi-tainted
provocative statements of Peter Sloterdijk in the lectures he gave in 1997
and 1999 that had the title Rules for the Human Zoo: A response to Heidegger's letter on humanism, and contained words like
Züchtung (breeding) and Selektion (selection) and
references to the “failure of humanism”, but on the bottom line, the
dispute was about Habermas' observation of Sloterdijk's seeming move to
radical neo-conservatism with a whiff of fascism and eugenics as well as
hatred of democracy, related to it (Romano, 2012). This shows that the
political geographic implications of Sloterdijk's thinking are far from
neutral, as Benedikt Korf and Doris Wastl-Walter (2016, p. 106) tended to
describe it, and need, in general, to be critically scrutinised.
Given this critique and alternative conceptualisation of the human being in this
world, one may ask whether there is a conceptual framework which can be
operationalised and applied in the field of geography in such a way that it
allows comprehensive empirical research into the spatialities and everyday
practices of spheres, which potentially could take into account the double aspectivity of
a human being in the world. Based on this critique, it is clear
that contrary to Sloterdijk, Helmuth Plessner does think of spatiality and
of the political aspects of spatiality in a much more relational and
procedural way. Being human for Helmuth Plessner implies that one is already
beyond one's own cocoon, and beyond the strategies of immunisation, and
therefore to be human brings with it, to be a zoon politicon, which
is constitutively entangled with the “mutual world” (Mitwelt)
(Hetzel, 2005, p. 236). As such this is a plea for an even more radical
relational thinking in human geography and for conceptualising these
relationships in a fundamentally political fashion. Obviously, the relational
approaches currently fashionable in the field of geography (complexity
theory, actor network theory, assemblage theory, practice theory, mobility
theory) are a good starting point. In the following I very briefly focus on
one of them, namely practice theory.
Although practice
theory5
is not presented as a theory of spheres as such, but rather as a social theory
grasping the complexities and ambivalences of our being in the world, it is formulated in a less
polemic and better underpinned way than Sloterdijk's theory of spheres (Schatzki, 2001,
2012; Reckwitz, 2002; Everts et al., 2011; Schäfer, 2016). Although
practice theories come in many different forms and are interpreted in
different ways, a number of key elements show that some parallels can be
found between Peter Sloterdijk's spherology, Plessner's view of human
spheres and current praxeological approaches. Of course there are also some
differences and tensions, which I will not deny. In this section of this
contribution, I would like to point to those parallels and underscore the
potential compatibilities. Since there is already a rich field of empirical
applications of the praxeological approach in geography, this approach, or a
further developed version of it, might also be helpful in operationalising and
critically investigating how human spheres emerge, develop, and are politicised.
Like Sloterdijk's attempt to conceptualise the being in the world from a
relational, topological perspective, practice theory does this based on the
concept of everyday practices, which create the dynamic topologies of the human
being and of human positioning. What Sloterdijk tends to circumscribe as
spheres is conceptualised in practice theories as practical situations, or
"sites" of the social (Schatzki, 2002). Crucial to Schatzki's version of
practice theory is that he clearly shows how, on the one hand, these
theories of practice also decentre human subjectivity to the practical situation.
They position subjectivity in relation to the practical situation and
therefore move in the direction of a post-humanist view, but on the other
hand they still defend a residual humanism, in the same way as Helmuth
Plessner, based on his concept of double aspectivity, and therefore do
not release the subject from a boundary transgressing political
responsibility (Ernste, 2004). The topological arrangements, according to
Schatzki (2002), impute, prefigure, and lead to agency, a necessary agency,
because human activity is fundamentally indeterminate and inherently
contingent. Although some scholars of practice theories only refer to this
aspect of the decentring of the subject, as agents formed by the structures of
practice, I think that this interpretation does not do justice to the
ontological assumptions about the subject on which Schatzkian practice
theory is based, which also find their parallels in "Heidegger's early
conceptions of thrownness and of the priority of involved practical dealing
over reflection and theory; in Wittgenstein's account of rule following and
in his conviction that action underlies language, thought, and reason; in
H.-G. Gadamer's notion of
continuous concept formation; in Derrida's and Judith Butler's notions of the
performative citation of norms; and in what [Schatzki is] calling the
'indeterminacy' of action" (Schatzki, 2002, p. 233). Here I see a great
opportunity for a mutually constructive debate between practice theory and the
Plessner-inspired theory of "approaches to the world" as formulated by
Gesa Lindemann (2014), in which practice is not prioritised over reflection,
but human practices are themselves conceptualised as reflective.
A typical element of practice theories is that they explicitly address the
"change" of practices in everyday life (Shove, Pantzar, and Watson, 2012).
They deal with small shifts but also with larger transformations. So without
being presented as a theory of globalisation, as Peter Sloterdijk implicitly
presents his Spheres trilogy, practice theories do provide a very open
conceptual framework in which to address these changes, without prescribing
the direction in which these changes take place. From this view of the dynamics
of practices, human spheres (Lindemann, 2017) are seen as emergent and becoming,
and are therefore also in a process of continuous negotiation with "the other"
in different settings and at different times.
Practice theory takes practices, rather than individuals or whole societies,
as the primary unit of investigation and analysis. Distinguished practices
can be viewed as practices of being in the world, or as Sloterdijk would
probably express it, as practical sphere making. Sloterdijk conceptualises
spheres as affective communities in which the affective bonding in relation
with a specific spatiality plays a central role, and in a similar way
Andreas Reckwitz (2012) conceptualises these affective spaces from a
praxeological point of view. Practices then become constitutive for the
development of affective spheres. Like a sphere, a practice consists of
socially embodied activities ("sayings and doings") combined with material
arrangements and linked into a nexus by understandings ("knowing how to
carry out desired actions"), rules ("explicitly formulated directive,
remonstration, instruction, or edicts") and teleoaffective structures
("ends, projects, tasks, purposes, beliefs, emotions, and moods") (Schatzki,
2012). One could say that these concepts describe the topological structure
of practices, including the human beings taking part in it. Although
practices are social entities, they are performed by individual carriers who
actualise and sustain these social entities. "[P]ractices not only generate
emotions, but […] emotions themselves can be viewed as a practical
engagement with the world. Conceiving of emotions as practices means
understanding them as emerging from bodily dispositions conditioned by a
social context, which always has cultural and historical specificity.
Emotion-as-practice is bound up with and dependent on 'emotional practices'"
(Scheer, 2012, p. 193). In these practice theories it is essential that
practices are executed by knowledgeable human beings, but this individual
bodily subject, according to practice theory, emerges from social practices in
which bodies and things are mutually entangled through emotional
relationships. These practices are never just limited to the boundaries of a
sphere or situation but reach well beyond them. At the same time, as human
beings, we are always involved in many different practices on different scales
and in many different political frames, from local foamy spheres to global
globes. To be more precise, we as human beings are continuously creating and
taking part in different spheres and thus are creating and taking part in
different places. As such we are never just within a sphere but always also
beyond that sphere.
In practice theories the choices people make in these situations are
addressed from a pragmatic point of view, and there is an attempt to reconstruct
human activities as practical sensemaking in those specific situations. This
somehow suggests that a suitable fit between (political) choices and
practical situations, or current and local practices, is feasible.
However, if practice theory fully took into account the double aspectivity of the human being
in the world as suggested by Helmuth Plessner, it would
also need to address the inherent homelessness of the human being in these
situations. Making sense of a practical situation is an act of
meaning making, but as a consequence of the double aspectivity of the human
being, meaning needs to be defined as the "unity of the difference between
actuality and potentiality" (Henkel, 2016), or of the difference between the
actual and the virtual (Delanda, 2005). So the pragmatics of "meaning
making" in practice theories sometimes still tends to disguise too much of the
political aspects of everyday practices and the insufficiencies of everyday
compromises; on the other hand, a broader conceptualisation
of meaning could also serve as a framework in which to address them without
taking a position beforehand too easily. The pragmatist conceptualisation of
human practices in current practice theories would at least allow this and
could foster the further development of these practice theories in these
directions, which could prove to be very promising and useful for
geographical research. As such, current practice theories (Hui et al., 2017)
seem to provide a comprehensive framework for productive geographical
research on spheres of human being and human activities.
In this contribution I have in the first instance tried to give a brief overview
of some of the core aspects of Peter Sloterdijk's inspiring endeavour as put
forward in his magnum opus, the trilogy Spheres. This endeavour has also evoked
a lot of critique, which partly targets his style and performance but also
addresses some of the core issues of his theory. Without pretending to be
comprehensive or complete, I highlighted some of those critiques, not so much
from a philosophical as from a critical social-theoretical and geographical
perspective. But critique is always easy. More
difficult is offering an alternative. In this contribution I showed that the
philosophical anthropological perspective of Helmuth Plessner offers us a
well-founded and well-underpinned alternative phenomenology of the human
spatial being in the world, with far-reaching political consequences
for how to deal with the current state of globalisation. Second, I suggested
that current practice theories also offer us a good alternative
social-theoretical conceptual framework to investigate the kind of
relationalities and topologies which Sloterdijk suggests, but which he
approaches from a rather one-sided and sometimes even flawed angle, without an
elaborated and critical conceptualisation of these relations. Practice theories
do not really take a critical stance themselves, but they allow elaboration of
the multidimensional complexities of the political choices and positionings
constantly made in everyday practice. As shown above, in certain aspects
practice theory is still not radical enough in its relational thinking from a
Plessnerian perspective, since it still seems to think of human beings as
elements in practices, instead of thinking of the human being as a relational
phenomenon, with all the inherent political aspects of that relationship.
Practice theory is seemingly apolitical, but this openness or indeterminateness
makes it an especially good candidate for developing further in the direction
of Plessner's alternative spherology, so that the politics of spheres and of
human spatiality become much more apparent. These potentialities still need to
be put into practice and are thus far from ready to use; they will need further
elaboration along the lines suggested to come up with a fully fledged
alternative theory of Spheres and a mature framework for empirical
geographical research on the practices of sphere making.
No data sets were used in this article.
The author declares that he has no conflict of
interest.
I am very grateful for the very useful comments from the editors of this
special issue on an earlier version of this contribution.
Edited by: Benedikt Korf
Reviewed by: two
anonymous referees
Ash, J. and Simpson, P.: Geography and post-phenomenology, Prog. Hum. Geog., 40, 48–66, 2016.
Boos, T.: Ethnische Sphären, Über die emotionale Konstruktion von Gemeinschaft bei syrisch- und libanesischstämmigen Argentiniern, Transcript, Bielefeld, 2013.
Borch, C.: Organizational Atmospheres: Foam, Affect and Architecture, Organization, 17, 223–241, 2009.
Borch, C.: Spatiality, Imitation, Immunization: Luhmann and Sloterdijk on the Social, in: Luhmann Observed, Radical Theoretical Encounters, edited by: la Cour, A. and Philippopoulos-Mihalopoulos, A., Palgrave Macmillan, Basingstoke, 150–168, 2013.
Bude, H. and Dürrschmidt, J.: What's wrong with globalisation?: Contra "flow speak" – towards an existential turn in the theory of globalization, Eur. J. Soc. Theory, 13, 481–500, 2010.
Couture, J. P.: Spacing emancipation? Or how spherology can be seen as a therapy for modernity, Environ. Plann. D, 27, 157–163, 2009.
Delanda, M.: Space: Extensive and Intensive, Actual and Virtual, in: Deleuze and Space, edited by: Buchanan, I. and Lambert, G., Edinburgh University Press, Edinburgh, 80–88, 2005.
Demmerling, C. and Landweer, H.: Philosophie der Gefühle, Metzler, Stuttgart, 2007.
de Mul, J. (Ed.): Plessner's Philosophical Anthropology, Perspectives and Prospects, University of Amsterdam Press, Amsterdam, 2014.
Edinger, S.: Das Politische in der Ontologie der Person, Helmuth Plessners Philosophische Anthropologie im Verhältnis zu den Substanzontologien von Aristoteles und Edith Stein, de Gruyter, Berlin, 2017.
Elden, S. and Mendieta, E.: Being-with as making worlds: the "second coming" of Peter Sloterdijk, Environ. Plann. D, 27, 1–11, 2009.
Ernste, H.: The pragmatism of life in poststructuralist times, Environ. Plann. A, 36, 437–450, 2004.
Ernste, H.: Eccentric Positionality and Urban Space, in: Plessner's Philosophical Anthropology, Perspectives and Prospects, edited by: de Mul, J., University of Amsterdam Press, Amsterdam, 243–260, 2014.
Ernste, H.: Klassieker: Over de relatie tussen mens en de ruimte, Agora, 32, 42–43, 2016.
Eßbach, W., Fischer, J., and Lethen, H. (Eds.): Plessners "Grenzen der Gemeinschaft", Eine Debatte, Suhrkamp, Frankfurt am Main, 2002.
Everts, J., Lahr-Kurten, M., and Watson, M.: Practice Matters! Geographical inquiry and theories of practice, Erdkunde, 65, 323–334, 2011.
Fuchs, T.: Leib, Raum, Person, Entwurf einer phänomenologischen Anthropologie, Klett-Cotta, Stuttgart, 2000.
Günzel, S.: Topologie, Zur Raumbeschreibung in den Kultur- und Medienwissenschaften, Transcript, Bielefeld, 2007.
Haegens, K.: Wie is Peter Sloterdijk?, available at: (last access: 1 May 2017).
Heidegger, M.: Being and Time, HarperPerennial, New York, 2008.
Hellmann, K.-U.: Grenzen der Gemeinschaft, Helmuth Plessner, René König und Joseph R. Gusfield, available at: (last access: 1 May 2017), 2008.
Henkel, A.: Posthumanism, the Social and the Dynamics of Material Systems, Theor. Cult. Soc., 33, 65–89, 2016.
Hetzel, A.: Der Mensch als "praktischer Anspruch", Zum Primat des Politischen in Helmuth Plessners Anthropologie, in: Zwischen Anthropologie und Gesellschaftstheorie, Zur Renaissance Helmuth Plessners im Kontext der modernen Lebenswissenschaften, edited by: Gamm, G., Gutmann, M., and Manzei, A., Transcript, Bielefeld, 233–258, 2005.
Hui, A., Schatzki, T., and Shove, E. (Eds.): The Nexus of Practices, Connections, constellations, practitioners, Routledge, Oxon, 2017.
Korf, B. and Wastl-Walter, D.: Kultur und Politik, in: Humangeographie kompakt, edited by: Freytag, T., Gebhardt, H., Gerhard, U., and Wastl-Walter, D., Springer, Berlin, 2016.
Krüger, H.-P.: Kritische Anthropologie? Zum Verhältnis zwischen Philosophischer Anthropologie und Kritischer Theorie, Deut. Z. Philos., 64, 553–580, 2016.
Lemmens, P. and Zwart, H.: Sloterdijk in vogelvlucht, Wijsgerig Perspectief, 44, 4–13, 2004.
Lindemann, G.: Doppelte Kontingenz und reflexive Anthropologie, Z. Soziol., 28, 165–181, 1999.
Lindemann, G.: Weltzugänge: Die mehrdimensionale Ordnung des Sozialen, Velbrück Wissenschaft, Weilerswist, 2014.
Lindemann, G.: Die Sphäre des Menschen, in: Helmuth Plessner: Die Stufen des Organischen und der Mensch, edited by: Krüger, H.-P., De Gruyter, Berlin, 163–177, 2017.
Malpas, J.: Heidegger and the Thinking of Place: Explorations in the Topology of Being, MIT Press, Cambridge, 2012.
Malpas, J.: Re-Orienting Thinking: Philosophy in the Midst of the World, in: Commonplace Commitments: Thinking Through the Legacy of Joseph P. Fell, edited by: Fosl, P. S., McGandy, M. J., and Moorman, M. D., Bucknell University Press, Lewisburg, 169–188, 2016.
Malpas, J.: In the Vicinity of the Human, Int. J. Philos. Stud., 25, 423–436, 2017.
Mauthner, F.: Zur Sprache und Zur Psychologie, Beiträge zu einer Kritik der Sprache, 1, Cotta'sche Buchhandlung, Stuttgart, 1906.
Metcalfe, A. and Ferguson, L.: Half-opened being, in: Timespace: Geographies of temporality, edited by: May, J. and Thrift, N., Routledge, London, 2001.
Morin, M.-E.: Cohabitating in the globalised world: Peter Sloterdijk's global foams and Bruno Latour's cosmopolitics, Environ. Plann. D, 27, 58–72, 2009.
Morin, M.-E.: The coming-to-the-world of the Human Animal, in: Sloterdijk Now, edited by: Elden, S., Polity Press, Malden, 77–95, 2012.
Musil, R.: The Man Without Qualities, Pan Macmillan, London, 2017.
Nicolini, D.: Practice theory, work, and organization: an introduction, Oxford University Press, Oxford, 2013.
Nietzsche, F.: Menschliches, Allzumenschliches, I, Moral als Selbstzerteilung des Menschen, Kritische Studienausgabe KSA, 2, 76, 1886.
Nieuwenhuis, M.: Taking up the challenge of space: New conceptualisations of space in the work of Peter Sloterdijk and Graham Harman, Continent, no. 4.1, 17–37, available at: (last access: 1 May 2017).
Noordegraaf-Eelens, L. and Schinkel, W.: De inspirerende ruimte van Peter Sloterdijk, Krisis, 2, 84–93, 2005.
Plessner, H.: Die Stufen des Organischen und der Mensch, Einleitung in die philosophische Anthropologie, De Gruyter, Berlin, 1975.
Plessner, H.: The Limits of Community: A Critique of Social Radicalism, Humanity Books, New York, 1999.
Reckwitz, A.: Toward a Theory of Social Practices, A Development in Culturalist Theorizing, Eur. J. Soc. Theory, 5, 243–263, 2002.
Reckwitz, A.: Affective spaces: a praxeological outlook, Rethinking History, Journal of Theory and Practice, 16, 241–258, 2012.
Richter, N. A.: Grenzen der Ordnung, Bausteine einer Philosophie des politischen Handelns nach Plessner und Foucault, Campus, Frankfurt am Main, 2005.
Romano, C.: Slippery Sloterdijk: the Edgy European Philosopher, Circa 2012, in: The Chronicle of Higher Education, 5 November 2012.
Schäfer, H. (Ed.): Praxistheorie, Ein soziologisches Forschungsprogramm, Transcript, Bielefeld, 2016.
Schatzki, T. R.: Introduction: practice theory, in: The practice turn in contemporary theory, edited by: Schatzki, T. R., Knorr Cetina, K., and von Savigny, E., Routledge, London, 1–14, 2001.
Schatzki, T. R.: The site of the social: a philosophical account of the constitution of social life and change, Pennsylvania State University Press, University Park, 2002.
Schatzki, T. R.: A primer on practices, in: Practice-based education: perspectives and strategies, edited by: Higgs, J., Barnett, R., Billett, S., Hutchings, M., and Trede, F., Sense, Rotterdam, 13–26, 2012.
Scheer, M.: Are Emotions a Kind of Practice and Is That What Makes Them Have a History? A Bourdieuian Approach to Understanding Emotion, Hist. Theory, 51, 193–220, 2012.
Schmitz, H.: Der Leib, der Raum und die Gefühle, Aisthesis, Bielefeld, 2007.
Schmitz, H.: Kurze Einführung in die neue Phänomenologie, Karl Alber, Freiburg, 2010.
Schmitz, H., Müllan, R. O., and Slaby, J.: Emotions outside the box – the new phenomenology of feeling and corporeality, Phenomenol. Cogn. Sci., 10, 241–259, 2011.
Shove, E., Pantzar, M., and Watson, M.: The Dynamics of Social Practice: Everyday life and how it changes, Sage, London, 2012.
Sloterdijk, P.: Spheres, Volume I: Bubbles Microspherology, Semiotexte, Los Angeles, 2011.
Sloterdijk, P.: Nearness and Da-sein: The Spatiality of Being and Time, Theor. Cult. Soc., 29, 36–42, 2012.
Sloterdijk, P.: Spheres, Volume II: Globes Macrospherology, Semiotexte, Los Angeles, 2014.
Sloterdijk, P.: Selected Exaggerations: Conversations and Interviews 1993–2012, Polity, Cambridge, 2016a.
Sloterdijk, P.: Spheres, Volume III: Foams Plural Spherology, Semiotexte, Los Angeles, 2016b.
Tarde, G.: The Laws of Imitation, Henry Holt, New York, 1903.
Thrift, N.: Different atmospheres: of Sloterdijk, China and site, Environ. Plann. D, 27, 119–138, 2009.
Tönnies, F.: Community and Society, Dover Publications, Mineola, 2011.
van Tuinen, S.: Sloterdijk binnenstebuiten denken, Klement, Kampen, 2004.
van Tuinen, S.: Peter Sloterdijk: Ein Profil, Fink, Paderborn, 2006.
van der Ven, B.: Sferen en globalisering, Ethische aspecten van Sloterdijks bijdrage aan het globaliseringsdebat, Tijdschr Filos, 64, 479–507, 2002.
The Erörterung of Ort
(Placing of Place), or Einräumung of Raum (Spacing of
Space).
Jeff Malpas, however, notes that Sloterdijk takes up the issue of spatiality in
his Spheres trilogy, but does this in a rather superficial way: "presenting
itself as a new approach to space and place, it actually does little more than
mobilise a set of spatial and topological tropes and ideas without ever
interrogating their spatial and topological content or addressing the spatial
and topological notions that they presuppose" (Malpas, 2016, p. 170).
What a dividual implies in a psychological sense is more clearly explained by
Mauthner (1906, p. 650ff.): a consciousness that comprises the here and now
while at the same time having the ability to put oneself in the position of
the other at another place and time.
See also Eßbach et al. (2002).
For an overview see also Nicolini (2013).
| 40 |
Descemet Stripping Only (DSO) Technique for Corneal Endothelial Damage in Mice - PMC
===============
Cornea. Author manuscript; available in PMC: 2024 Apr 1.
Published in final edited form as: Cornea. 2022 Dec 22;42(4):470–475. doi: 10.1097/ICO.0000000000003223
Descemet Stripping Only (DSO) Technique for Corneal Endothelial Damage in Mice
Hayate Nakagawa, M.D., Ph.D.1; Hamid Alemi, M.D., M.P.H.1; Shudan Wang, M.D., Ph.D.1; Francesca Kahale, M.D.1; Tomas Blanco, Ph.D.1; Catherine Liu, M.D.1; Jia Yin, M.D., M.P.H., Ph.D.1; Thomas H. Dohlman, M.D.1; Reza Dana, M.D., M.P.H., M.Sc.1
1 Laboratory of Corneal Immunology, Transplantation and Regeneration, Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
Dr. Nakagawa and Dr. Alemi contributed equally as co-first authors.
Corresponding Author: Thomas H. Dohlman, M.D., Laboratory of Corneal Immunology, Transplantation and Regeneration, Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford Street, Boston, MA 02114, USA, Tel: +1-617-912-7489; Fax: +1-617-912-0117, [email protected]
Issue date 2023 Apr 1.
PMCID: PMC10117527 NIHMSID: NIHMS1852219 PMID: 36728991
The publisher's version of this article is available at Cornea
Abstract
Purpose:
Descemet Stripping Only (DSO) is an emerging surgical technique used to remove central Descemet’s membrane and corneal endothelial cells (CEnC) in patients with corneal endothelial disease. Here we describe a murine model of this procedure to help facilitate basic science investigation and evaluation of postoperative outcomes using this surgical technique.
Methods:
Slit lamp biomicroscopy, central corneal thickness (CCT) assessment (by optical coherence tomography) and immunohistochemistry were used to assess the model through 7 weeks of follow-up.
Results:
Complete removal of the endothelium and Descemet’s membrane was confirmed by slit lamp biomicroscopy and by histology. CCT peaked at day 1 post-injury and then declined over the course of 2 weeks to a stable level of persistent edema. Seven weeks post-injury, immunohistochemical staining for ZO-1 showed the area of Descemet stripping was fully covered by enlarged and dysmorphic CEnC. No significant ocular complications were appreciated through the end of the follow up.
Conclusion:
We demonstrate the feasibility of and provide detailed instructions for a murine model of DSO. This model provides a potential in vivo platform to investigate the mechanisms and biology of this emerging surgical procedure.
Keywords: Descemet Stripping Only (DSO), Descemetorhexis Without Endothelial Keratoplasty (DWEK), Corneal Endothelium, Animal Model, Fuchs Corneal Endothelial Dystrophy (FECD)
INTRODUCTION
Descemet Stripping Only (DSO) (a.k.a. Descemetorhexis Without Endothelial Keratoplasty [DWEK]) is a corneal surgical technique consisting of removal of Descemet’s membrane (DM), without subsequent endothelial transplantation, that has proven to be efficacious in select patients with endothelial dysfunction, primarily in the setting of Fuchs’ Endothelial Corneal Dystrophy (FECD).1 FECD is a non-inflammatory dystrophy of the corneal endothelium that leads to corneal edema and subsequent impairment of visual acuity, potentially leading to corneal blindness.2 DSO reduces the need for corneal transplantation and is now a viable treatment option in select patients with FECD.3
Animal models of mechanical corneal injury resembling DSO have been described previously, including in rats4, rabbits5 and non-human primates6. While larger animals are excellent models for reproducing human surgical techniques, cost and administrative restrictions in animal research can limit their use.7 This makes development of a model of DSO in smaller animals appealing, and a murine model would be invaluable thanks to the well-defined genetics of inbred mouse strains and the wide availability of reagents. A murine model of DSO would provide a foundation for basic scientific investigation into this surgical technique and the underlying mechanisms of corneal endothelial wound healing. Of note, the lower proliferation rate of corneal endothelial cells (CEnC) in adult mice relative to, e.g., rabbits approximates the minimal proliferative capacity of CEnC in humans.7
Over the past three decades our laboratory has developed experimental murine models for a number of corneal procedures in order to gain a better understanding of corneal transplant failure, the host alloimmune response to corneal transplants, the role of angiogenesis in immunological responses and assessment of novel therapeutics. Here, we report a novel adaptation of the DSO technique, as commonly used in the clinical setting, to a murine model. A step-by-step description of the technique is provided, and we additionally describe follow-up with slit-lamp and optical coherence tomography (OCT) imaging as well as pathology sections to confirm the feasibility of this technique.
METHODS
Animals
Surgery was performed in six-to-eight-week-old male BALB/c mice. The BALB/c strain was chosen to maintain consistency with well-established murine models of corneal transplantation and endothelial keratoplasty.8–10 In addition, BALB/c mice have transparent irises, which facilitates intraocular structure visualization and helps to prevent perioperative injuries.
Descemet Stripping Only (DSO) Procedure
Animals were housed in the Schepens Eye Research Institute animal vivarium and treated according to the guidelines set forth by the Association for Research in Vision and Ophthalmology (ARVO). All animal experiments were reviewed and approved by the Institutional Animal Care and Use Committee. Figures 1 and 2 depict the DSO procedure and details are described below (Also see Supplementary Video 1). Required reagents and materials are listed in Supplementary Table 1.
Fig. 1. Schematic of the Descemet Stripping Only (DSO) Surgical Procedure in Mice:
(A) Mark the central cornea with a 1.5-mm trephine; (B) support the globe using jeweler's forceps, and make a tunnel incision into the cornea at ~6 to 9 o'clock along the border of the marked area (red dashed cross) using a 30G needle with the bevel pointed down; (C) use the proximal 1/4 of a 30G needle (by cutting the distal 3/4) to inject viscoelastic substance to fill the anterior chamber; (D) using a needle driver, bend the tip of a 30G needle; (E) introduce the bent 30G needle into the anterior chamber via the stromal tunnel and make a semicircular scratch of ~120° from 10 o'clock to 2 o'clock within the marked zone. Use the bent tip of the needle and scrape the endothelium gently to remove the endothelium in the marked area; (F) close the tunnel with an 11–0 nylon intrastromal suture; (G) irrigate the anterior chamber with PBS; (H) evaluate anterior chamber depth, iris integrity, pupil shape and suture placement. The red area depicts the extent of the scraped Descemet's membrane.
Fig. 2. Surgical Technique of Descemet Stripping Only (DSO) in mice.
(A) Support the globe by holding the corneal limbus at the 6 and 12 o'clock positions with jeweler's forceps. Mark the Descemetorhexis field using a 1.5-mm trephine. Make a tunnel incision in the cornea extending at least 0.3 mm in width between the 6 and 9 o'clock surgical positions using a 30G needle with the bevel pointing downwards. (B) Fill the anterior chamber with viscoelastic using a blunt 30G needle. Insert a bent 30G needle through the tunnel incision, with the bevel pointing downwards, and scratch the corneal endothelium from 10 o'clock to 2 o'clock (120°) in the previously marked area. Use the bent tip of the needle to contact the detached edge of Descemet's membrane and scrape it gently in a circular fashion to extract the detached membrane. The scraped area appears relatively darker and hazier as compared to unscraped tissue when viewed through a microscope (outlined by dashed border). (C) Close the incision using an 11–0 nylon intrastromal suture. (D) Irrigate the anterior chamber with PBS through the sutured incision using the tip of a 30G needle (with bevel pointing away from the suture). Examine the depth of the anterior chamber, integrity of the iris, the pupil shape, and tightness of the suture.
Preparation: Six to eight week old male BALB/c mice were anesthetized by intraperitoneal injection of ketamine (120 mg/kg) and xylazine (20 mg/kg) using a 25G needle. A drop of 0.5% proparacaine was applied to the mouse ocular surface. After the animal was fully anesthetized, as confirmed by hind limb pinch, one drop each of 2.5% phenylephrine hydrochloride and 1% tropicamide was applied to achieve mydriasis during the procedure.
Corneal Incision: The mouse was placed in the lateral recumbent position ensuring the head is positioned so the eye can be visualized under the microscope. The ocular surface was irrigated with PBS and dried using eye spears. The central cornea was marked with a 1.5-mm diameter trephine to outline the area from which Descemet’s membrane is to be removed. Adequate pressure was applied to make a superficial partial thickness trephination without perforating the cornea (Fig. 1A). A paracentral corneal tunnel was made using a 30G needle to enter the anterior chamber and confirmed by positive Seidel sign and partial collapse of the anterior chamber (Fig. 1B). The distal 3/4 of the 30G needle was removed and the proximal 1/4 of the needle was used to enter the anterior chamber and deliver the ocular viscoelastic device (OVD). The depth of the anterior chamber was restored by injecting OVD and the corneal surface was dried carefully with eye spears. (Fig. 1C).
Scraping of the Endothelial Layer: The beveled tip of a 30G needle was bent using a needle-driver (Fig. 1D) and inserted into the anterior chamber through the prepared tunnel, with the bevel pointing downwards, and then it was rotated so its tip pointed upwards. Then corneal endothelium was gently contacted, and the bent sharp needle tip was used to engage endothelium and Descemet’s membrane and scrape them within the area previously demarcated with the trephine (Fig. 1E) until complete removal of Descemet’s membrane was achieved by pulling and externalizing scraped tissue. To ensure complete removal of the tissue, the scraped region was visualized under the microscope as a darker and slightly hazier area relative to the surrounding cornea with intact Descemet’s membrane. Thereafter, to maintain the anterior chamber depth, OVD was injected as needed and an 11–0 nylon suture was placed to close the corneal incision (Fig. 1F). OVD was removed by irrigating the anterior chamber with PBS (Fig. 1G).
Final Intraoperative Evaluation: The eye was examined to confirm a round pupil and normal anterior chamber depth (Fig. 1H). Antibiotic ointment was applied to the operated eye. To control post-operative pain, buprenorphine (0.05–0.1 mg/kg) was subcutaneously injected with a 25G needle immediately after the surgery and then every 12h for the next 48 hours.
Postoperative Clinical Assessment: Mice were followed post-operatively on a weekly basis for a total duration of 8 weeks. The corneal suture was removed after 1 week. At each follow up time point, the cornea and anterior chamber were examined by slit-lamp examination and corneal thickness was recorded using optical coherence tomography (OCT). Eyes were examined for development of any post-operative complications including hyphema, cataract, infection and collapsed anterior chamber.
RESULTS
The DSO technique was successfully developed in mice as shown in the schematic outline in Fig. 1. The critical steps of this procedure are depicted in Fig. 2 and include creating a stromal tunnel incision (Fig. 2A), scraping of the endothelium in the marked area (Fig. 2B), closure of the incision using an 11–0 nylon intrastromal suture (Fig. 2C), and examination of the depth of the anterior chamber, integrity of the iris, pupil shape, and tightness of the suture (Fig. 2D). Removal of Descemet's membrane can be confirmed by direct visualization, as the scraped area appears relatively darker and hazier compared to the unscraped area (Fig. 2B, 3A). Following surgery, we were able to evaluate the cornea's response to DSO by slit-lamp examination and by OCT (Fig. 3B, C). As expected, on postoperative day 1, the cornea was edematous and partially opaque (Fig. 3A). Pre-injury, the CCT of mice was measured to be 116±9 μm, and on post-operative day 1, the average corneal thickness was approximately 300 μm and gradually subsided over 2 weeks to eventually stabilize at approximately 128–140 μm between days 14 and 35, with a statistically significant difference compared to pre-injury corneal thickness (Fig. 3B–C). No complications were observed through 8 weeks of follow-up. The anterior chamber remained formed after the procedure and no synechiae were observed. We did not observe corneal neovascularization, including neovascularization to the placed suture. Periodic Acid–Schiff (PAS) staining was used to confirm successful removal of the corneal endothelium and its associated Descemet's membrane (Fig. 4A–B).
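As an illustration of how this kind of per-time-point comparison against the pre-injury baseline could be computed, the sketch below uses invented CCT values and SciPy's paired t-test; neither the numbers nor the software correspond to the study's actual data or analysis pipeline.

```python
# Hypothetical sketch of per-time-point paired comparisons of CCT against the
# pre-injury baseline. All numbers are invented for illustration only.
import numpy as np
from scipy import stats

baseline = np.array([110, 118, 112, 120, 115, 121])   # pre-injury CCT (um), n = 6 eyes
followup = {                                           # post-DSO CCT (um) at each time point
    "day 1":  np.array([295, 310, 288, 305, 299, 312]),
    "day 14": np.array([135, 141, 128, 139, 133, 142]),
    "day 35": np.array([130, 137, 126, 134, 131, 138]),
}

for timepoint, cct in followup.items():
    t_stat, p_value = stats.ttest_rel(cct, baseline)   # paired t-test (same eyes over time)
    print(f"{timepoint}: {cct.mean():.0f} ± {stats.sem(cct):.1f} um, "
          f"t = {t_stat:.2f}, p = {p_value:.4f}")
```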
Fig. 3. Corneal thickness following Descemet Stripping Only (DSO).
(A) Stripped corneal endothelium and increased corneal thickness can be visualized on slit-lamp exam using a narrow slit-lamp light beam. (B) AS-OCT examination shows corneal thickness greater than 400 μm at post-operative day 1. (C) Serial corneal thickness measurements via OCT examination show a decrease in corneal thickness through 35 days of follow-up. Data are presented as mean ± SEM; comparison was made by multiple paired t-tests at each time point (DSO vs. No DSO [Pre-injury]), N=6/group; *, P<0.05; **, P<0.01; ***, P<0.001. CCT, Central Corneal Thickness. (D) Staining of corneal endothelium with ZO-1 35 days (7wks) after DSO showed impaired corneal wound healing, evidenced by enlarged and dysmorphic corneal endothelial cells as compared to uninjured corneas.
Fig. 4. Histologic Confirmation of Corneal Endothelium Removal following DSO.
After finishing the procedure, corneas were collected and fixed in formalin (10%) and paraffin-embedded sections were stained with Periodic acid–Schiff (PAS) to visualize Descemet’s membrane. (A) Anterior chamber depth was preserved after the procedure and no synechiae were observed. (B) Removal of Descemet’s membrane is confirmed by the absence of membrane centrally (red arrowhead), which is in contrast to uninjured areas where Descemet’s membrane is visualized (blue arrow).
Of note, potential complications of the procedure include hyphema, which can result from trauma to the iris or conjunctival blood vessels, and cataract. Cataract may result from contact with the lens capsule, and the chance of cataract formation can be minimized by using sufficient OVD in the anterior chamber, and potentially also by using older mice, which have a relatively larger anterior chamber.11
DISCUSSION
DSO is an emerging surgical technique for the management of patients with diseases affecting the central corneal endothelium, including Fuchs’ Endothelial Dystrophy. To date, surgical interventions for endothelial disease consist of penetrating keratoplasty (PK) and endothelial keratoplasty (EK). The clinical importance of DSO lies in the fact that no donor tissue is needed, eliminating the possibility of graft rejection or failure. In addition, DSO is minimally invasive, induces little astigmatism and has fewer complications compared to PK and EK.12 Interestingly, one study has reported similar refractive results in DSO as compared to Descemet Stripping Automated Endothelial Keratoplasty (DSAEK).12 DSO may also be useful when surgery is performed in areas with limited access to donor tissue and in circumstances where patients decline donor tissue.13 In the present report, we describe a novel murine model of this procedure which may permit future basic science investigation into mechanisms of DSO success or failure, as well as corneal endothelial cell biology.
DSO has been shown to be an effective management strategy for patients with diseases affecting the central corneal endothelium. Corneal endothelial cells are responsible for maintaining appropriate corneal hydration and are thus critical to corneal clarity and vision. A prerequisite for DSO is the presence of healthy peripheral endothelial cells, as these cells are believed to migrate and cover the induced central defect in the weeks to months following surgery. When performing DSO, a "peeling" technique (i.e. removal of Descemet's membrane in a smooth and continuous fashion) is preferred over a "scoring" technique so as to avoid stromal trauma and stromal tags which impede endothelial cell migration from the periphery towards the center.14,15 One main limitation of DSO is that it takes longer for corneal edema to clear after surgery as compared to endothelial keratoplasty;12 in humans, corneal clearance can take up to 1 to 3 months after the procedure.12,14 Clinically, following DSO, the central cornea initially undergoes thickening before deturgescence. In this study, we used slit-lamp microscopy and OCT examination to assess corneal thickness through 8 weeks of follow-up and we observed a similar pattern in our murine model of DSO (Fig. 3C). It is important to note that CEnC wound healing is more delayed with DM stripping as compared to simple CEnC scraping; in this model we removed both CEnC and DM together.
Compared to other DSO models, as the mouse cornea is smaller than that of other animals, we adjusted the size of the injury area to 1.5-mm diameter based on our previously established PK model in mice.9,16 Recovery after endothelial injury and DSO depends on the intrinsic proliferative capacity of CEnC, which seems to vary among species.7 Interestingly, in this model, we saw repopulation of the denuded area with CEnC, similar to DSO in humans, and CCT remained elevated compared to pre-injury through the follow up period. Although CCT has been reported to return to baseline levels in a feline and rat scraping model4,17, several other reports using scraping in cats18, as well as in rabbits19–21 and non-human primates6,19, showed persistent corneal edema. We believe the technique we describe in this report using mice is a feasible and reproducible model to study corneal endothelial regeneration while also reproducing clinical outcomes in humans.
An emerging area of research in endothelial cell biology is the role of Rho-associated kinase inhibitors in promoting the migration of endothelial cells. Treatment with Ripasudil, approved in Japan as an anti-glaucoma therapy, has been shown to decrease time to corneal clearance and increase final endothelial cell counts in DSO.22 A pilot study using Netarsudil showed a significant reduction in the time to corneal clearance after DSO,23,24 and an international, multi-center, placebo-controlled trial of Ripasudil after DSO is currently enrolling and should yield results this year. In the future, one potential role for the murine model of DSO described here could be to help investigate the effect of Rho-associated kinase inhibitors on endothelial cell migration and the underlying mechanisms.
In summary, in the present report we show that DSO is feasible and reproducible in a murine model, and that this model recapitulates the post-operative course seen clinically. This model may be a useful tool to aid the study of endothelial cell biology and to better understand the behavior of corneal endothelial cells after DSO in order to improve clinical outcomes in patients undergoing this procedure.
Supplementary Material
Supplemental Video File
Supplemental Video 1. Video of Descemet Stripping Only (DSO) Surgical Technique in Mice. Video depicting the step-by-step technique for this procedure and the appearance of the mouse cornea immediately following completion.
Supplemental Data File (.doc, .tif, pdf, etc.)
Supplemental Table 1. Equipment and Reagents for Descemet Stripping Only (DSO) Technique in Mice.
ACKNOWLEDGEMENT
Funding:
This study was supported by the National Eye Institute/National Institutes of Health (R01 EY012963 to RD and K08 EY031759 to THD) and Department of Defense (W81XWH2110855 to RD)
Footnotes
Conflict of interest: None
REFERENCES
1. Borkar DS, Veldman P, Colby KA. Treatment of Fuchs Endothelial Dystrophy by Descemet Stripping Without Endothelial Keratoplasty. Cornea. 2016;35(10):1267–1273. doi: 10.1097/ICO.0000000000000915
2. Ong Tone S, Kocaba V, Böhm M, Wylegala A, White TL, Jurkunas UV. Fuchs endothelial corneal dystrophy: The vicious cycle of Fuchs pathogenesis. Progress in Retinal and Eye Research. 2021;80(November 2019):100863. doi: 10.1016/j.preteyeres.2020.100863
3. Huang MJ, Kane S, Dhaliwal DK. Descemetorhexis without Endothelial Keratoplasty Versus DMEK for Treatment of Fuchs Endothelial Corneal Dystrophy. Cornea. 2018;37(12):1479–1483. doi: 10.1097/ICO.0000000000001742
4. Tuft SJ, Williams KA, Coster DJ. Endothelial repair in the rat cornea. Investigative Ophthalmology and Visual Science. 1986;27(8):1199–1204.
5. Mimura T, Yamagami S, Yokoo S, et al. Cultured human corneal endothelial cell transplantation with a collagen sheet in a rabbit model. Investigative Ophthalmology and Visual Science. Published online 2004. doi: 10.1167/iovs.03-1174
6. Kimoto M, Shima N, Yamaguchi M, Hiraoka Y, Amano S, Yamagami S. Development of a bioengineered corneal endothelial cell sheet to fit the corneal curvature. Investigative Ophthalmology and Visual Science. Published online 2014. doi: 10.1167/iovs.13-13167
7. Park S, Leonard BC, Raghunathan VK, et al. Animal models of corneal endothelial dysfunction to facilitate development of novel therapies. Annals of Translational Medicine. 2021;0(0):0–0. doi: 10.21037/atm-20-4389
8. Inomata T, Mashaghi A, Di Zazzo A, Dana R. Ocular surgical models for immune and angiogenic responses. J Biol Methods. 2015;2(3):e27. doi: 10.14440/jbm.2015.78
9. Nakagawa H, Blanco T, Kahale F, Bir Singh R, Dohlman TH, Dana R. Novel adaptation of a running suture technique in a mouse model of corneal transplantation. J Biol Methods. 2021;8(4):e156. doi: 10.14440/jbm.2021.373
10. Lužnik Marzidovšek Z, Blanco T, Sun Z, et al. The Neuropeptide Alpha-Melanocyte-Stimulating Hormone Is Critical for Corneal Endothelial Cell Protection and Graft Survival after Transplantation. Am J Pathol. 2022;192(2):270–280. doi: 10.1016/j.ajpath.2021.10.016
11. Puk O, Dalke C, Favor J, de Angelis MH, Graw J. Variations of eye size parameters among different strains of mice. Mamm Genome. 2006;17(8):851–857. doi: 10.1007/s00335-006-0019-5
12. Huang MJ, Kane S, Dhaliwal DK. Descemetorhexis Without Endothelial Keratoplasty Versus DMEK for Treatment of Fuchs Endothelial Corneal Dystrophy. Cornea. 2018;37(12):1479–1483. doi: 10.1097/ICO.0000000000001742
13. Rocha-de-Lossada C, Rachwani-Anil R, Borroni D, et al. New Horizons in the Treatment of Corneal Endothelial Dysfunction. J Ophthalmol. 2021;2021:6644114. doi: 10.1155/2021/6644114
14. Davies E, Jurkunas U, Pineda R. Predictive Factors for Corneal Clearance After Descemetorhexis Without Endothelial Keratoplasty. Cornea. 2018;37(2):137–140. doi: 10.1097/ICO.0000000000001427
15. Garcerant D, Hirnschall N, Toalster N, Zhu M, Wen L, Moloney G. Descemet's stripping without endothelial keratoplasty. Curr Opin Ophthalmol. 2019;30(4):275–285. doi: 10.1097/ICU.0000000000000579
16. Nakagawa H, Blanco T, Kahale F, et al. A Novel Murine Model of Endothelial Keratoplasty. Cornea. Published online May 13, 2022. doi: 10.1097/ICO.0000000000003047
17. Ling TL, Vannas A, Holden BA. Long-term changes in corneal endothelial morphology following wounding in the cat. Invest Ophthalmol Vis Sci. 1988;29(9):1407–1412.
18. Solomon A, Solberg Y, Belkin M, Landshman N. Effect of corticosteroids on healing of the corneal endothelium in cats. Graefes Arch Clin Exp Ophthalmol. 1997;235(5):325–329. doi: 10.1007/BF01739643
19. Okumura N, Okazaki Y, Inoue R, et al. Effect of the Rho-Associated Kinase Inhibitor Eye Drop (Ripasudil) on Corneal Endothelial Wound Healing. Invest Ophthalmol Vis Sci. 2016;57(3):1284–1292. doi: 10.1167/iovs.15-18586
20. Chen J, Li Z, Zhang L, et al. Descemet's Membrane Supports Corneal Endothelial Cell Regeneration in Rabbits. Sci Rep. 2017;7(1):6983. doi: 10.1038/s41598-017-07557-2
21. Meekins LC, Rosado-Adames N, Maddala R, Zhao JJ, Rao PV, Afshari NA. Corneal Endothelial Cell Migration and Proliferation Enhanced by Rho Kinase (ROCK) Inhibitors in In Vitro and In Vivo Models. Invest Ophthalmol Vis Sci. 2016;57(15):6731–6738. doi: 10.1167/iovs.16-20414
22. Macsai MS, Shiloach M. Use of Topical Rho Kinase Inhibitors in the Treatment of Fuchs Dystrophy After Descemet Stripping Only. Cornea. 2019;38(5):529–534. doi: 10.1097/ICO.0000000000001883
23. Davies E. Case Series: Novel Utilization of Rho-Kinase Inhibitor for the Treatment of Corneal Edema. Cornea. 2021;40(1):116–120. doi: 10.1097/ICO.0000000000002421
24. Davies E, Jurkunas U, Pineda R. Pilot Study of Corneal Clearance With the Use of a Rho-Kinase Inhibitor After Descemetorhexis Without Endothelial Keratoplasty for Fuchs Endothelial Corneal Dystrophy. Cornea. 2021;40(7):899–902. doi: 10.1097/ICO.0000000000002691
| 41 |
Published Time: 2018-09-10
Superconducting and normal-state properties of the noncentrosymmetric superconductor Re3Ta | Phys. Rev. B
===============
Superconducting and normal-state properties of the noncentrosymmetric superconductor Re3Ta
J. A. T. Barker1,2, B. D. Breen1, R. Hanson1, A. D. Hillier3, M. R. Lees1, G. Balakrishnan1, D. McK. Paul1, and R. P. Singh4
1 Physics Department, University of Warwick, Coventry CV4 7AL, United Kingdom
2 Laboratory for Muon-Spin Spectroscopy, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
3 ISIS facility, STFC Rutherford Appleton Laboratory, Harwell Science and Innovation Campus, Oxfordshire OX11 0QX, United Kingdom
4 Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal 462066, India
[email protected]
Phys. Rev. B 98, 104506 – Published 10 September, 2018
Abstract
The noncentrosymmetric superconductor, Re3Ta, has been characterized in detail with a combination of magnetization, heat capacity, and electrical resistivity measurements, as well as a microscopic investigation of the internal magnetic fields using muon spin spectroscopy (μSR). In low applied fields, we observe 100% flux expulsion at a temperature of Tc = 4.68 K, which is concomitant with a sudden decrease of the electrical resistivity to zero and a sharp discontinuity in the heat capacity, confirming bulk superconductivity in this material. We find that Re3Ta is a poor metal, with superconductivity occurring in the dirty limit, and in which the disorder in the structure dominates the physical properties. Zero-field μSR shows that the superconducting state preserves time-reversal symmetry, and transverse-field measurements of the superfluid density are well described by an isotropic s-wave model. A careful analysis of the internal field distribution reveals a high level of disorder in the vortex lattice. Furthermore, we have combined the experimental data and calculated the effective mass, carrier density, and electronic mean-free path in this material, and ultimately show that Re3Ta lies close to the unconventional region of the Uemura plot.
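For context on the isotropic s-wave description mentioned above, a commonly used dirty-limit BCS expression for the temperature dependence of the superfluid density, written in terms of the magnetic penetration depth λ, is the standard textbook form quoted here only as background, not as the paper's exact parametrisation:

\[
\frac{\lambda^{-2}(T)}{\lambda^{-2}(0)} \;=\; \frac{\Delta(T)}{\Delta(0)}\,\tanh\!\left[\frac{\Delta(T)}{2k_{\mathrm{B}}T}\right],
\]

where Δ(T) is the superconducting energy gap and λ⁻²(T) is proportional to the superfluid density extracted from the transverse-field μSR lineshape.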
Physics Subject Headings (PhySH)
Magnetic susceptibility
Superconductivity
Transport phenomena
Vortex lattices
Superconductors
Muon spin relaxation & rotation
Vol. 98, Iss. 10 — 1 September 2018
Received 12 February 2018
©2018 American Physical Society
|
42
|
Published Time: 2024-07-08T21:00:00.000Z
Big O Notation and Time Complexity Guide: Intuition and Math | DataCamp
===============
Big O Notation and Time Complexity Guide: Intuition and Math
Time complexity is the measure of how an algorithm's runtime scales with input size, often expressed using Big-O notation, which provides an upper bound on the worst-case scenario.
Jul 8, 2024 · 15 min read
Contents
Does Time Complexity Still Matter in 2024?
Basic Operations and Basic Instructions
Big-O Notation
Big-O notation: intuitive explanation
Big-O notation: mathematical definition
Types of Time Complexity
Constant time: O(1)
Linear time: O(N)
Logarithmic time: O(log(N))
Quadratic time: O(N²)
Loglinear time: O(N log(N))
Cubic time: O(N³)
Exponential time complexity: O(2^N) and O(N!)
Time Complexity: Worst Case and Best Case
Time Complexity: Practical Implications
Conclusion
FAQs
Time complexity analysis provides a way to analyze and predict the efficiency of algorithms in a way that is independent of both the language in which we implement them and the hardware in which they’re executed.
The goal of time complexity analysis isn’t to predict the exact runtime of an algorithm but rather to be able to answer these questions:
Given two algorithms that solve the same problem, which one is expected to run faster if the same amount of data is provided to both?
If we doubled the data provided to the algorithm, how would the execution time be affected? Will it scale linearly and double as well? Will it remain the same? Or something else entirely?
By the end of this article, you’ll know how to analyze the time complexity of an algorithm and answer these questions.
Does Time Complexity Still Matter in 2024?
Computers are getting faster and faster so one might wonder if there’s any need anymore in worrying about time complexity. Surely nowadays a supercomputer can handle any problem regardless of the algorithm we use to solve it, right?
Well, not quite. The AI boom is largely due to immense advances in hardware performance in recent years. However, even on the most advanced hardware, simple problems can lead to catastrophic performance if a slow algorithm is used.
For example, imagine using a naive algorithm to sort a database table with ten million entries. Even on a modern computer, this simple operation would take several days to complete, making the database unusable for any practical application. By contrast, using an efficient algorithm, we can expect it to take less than one-tenth of a second.
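As a rough, machine-dependent illustration (my own sketch, not from the original article), the gap is already visible on a list of only 10,000 elements when a quadratic sort is compared against Python's built-in O(N log(N)) sort:

```python
import random
import time

def slow_sort(lst):
    # Quadratic: repeatedly scan for the minimum of the remaining elements.
    result = []
    while lst:
        smallest = min(lst)      # O(N) scan
        lst.remove(smallest)     # O(N) removal
        result.append(smallest)
    return result

data = [random.random() for _ in range(10_000)]

start = time.perf_counter()
slow_sort(data.copy())
print("quadratic sort:", round(time.perf_counter() - start, 3), "s")

start = time.perf_counter()
sorted(data)                     # built-in Timsort, O(N log(N))
print("built-in sort:", round(time.perf_counter() - start, 3), "s")
```

Scaling the same experiment up to millions of rows is what turns "a fraction of a second" into days for the quadratic version.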
Basic Operations and Basic Instructions
We analyze an algorithm using the Random Access Machine (RAM or RA-machine) model. This model assumes the following operations take exactly one time step:
Arithmetic operations
Logic operations (<, >, ==, etc.)
Statements like if or return
Accessing a memory location, such as writing or reading a variable value
These operations are called basic operations. A line of code that executes a constant number of basic operations is called a basic instruction.
Because the number of operations in a basic instruction is constant, it doesn’t depend on the amount of data. Since we care about how the execution time grows as the amount of data grows, we can focus on counting the number of basic instructions the algorithm performs.
Function calls and loops like for and while are evaluated by adding up the number of basic instructions inside of them. Consider the following code that sums the elements of a list.
```python
def sum_list(lst):
    total = 0           # 1 instruction
    for value in lst:
        total += value  # executed len(lst) times
    return total        # 1 instruction
```
We added comments to each line to help visualize the number of basic instructions. If N is the length of lst, then the total number of basic instructions is 1 + N + 1 = N + 2.
Big-O Notation
Imagine we have two sorting algorithms. We calculated the number of basic instructions for each of them and got the following expressions:
| First algorithm | Second algorithm |
| --- | --- |
| 4N² + 2N + 7 | 3N² + 5N + 13 |
At first glance, it’s not obvious which one scales better. Big-O notation gives a way to simplify these expressions further by applying these simplifications:
Drop the terms with lower exponents
Remove multiplicative constants
If we apply this process to both expressions, we get N² in both cases.
The expression we obtain after simplifying is the algorithm's time complexity, which we denote using O(). In this case, we can write that 4N² + 2N + 7 = O(N²) and 3N² + 5N + 13 = O(N²). This means that both algorithms have the same time complexity, which is proportional to the square of the list's number of elements.
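A quick way to see why the lower-order terms and the constants don't matter (my own illustration, not part of the article) is to divide each expression by N² and watch the ratio settle to a constant as N grows:

```python
f = lambda n: 4 * n**2 + 2 * n + 7    # first algorithm
g = lambda n: 3 * n**2 + 5 * n + 13   # second algorithm

for n in (10, 1_000, 100_000):
    # Both ratios approach a constant (4 and 3), so both expressions
    # grow proportionally to N² and are therefore O(N²).
    print(n, f(n) / n**2, g(n) / n**2)
```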
Above, we calculated that the sum_list algorithm performed N + 2 operations, where N is the length of the list. Applying the same simplifications, we obtain N, so the time complexity of sum_list is O(N).
This means that the execution time is proportional to the list size. This is expected because when computing the sum, doubling the number of elements in a list should require double the amount of work, not less, not more.
Big-O notation: intuitive explanation
We can ignore the lower terms because, as N grows, their impact on the total sum becomes insignificant when compared to the highest term. The following plot shows the contribution of each of the three terms 4N², 2N, and 7 to the total sum.
The quadratic term 4N² very quickly accounts for almost 100% of the total number of operations, even for small values of N. Therefore, when dealing with large amounts of data, the time corresponding to lower-order terms will be negligible compared to the higher-order terms.
We drop the multiplicative constant because this value is independent of the amount of data.
Big-O notation: mathematical definition
The above steps are sufficient for most practical purposes to accurately derive an algorithm’s time complexity. We can count the number of basic instructions an algorithm performs and obtain some expression f(N). After applying the two simplification steps, we obtain some simplified expression g(N). Then, we can write that f(N) = O(g(N)). We read it as “f is big O of g.”
However, this isn’t a formal mathematical definition, and in some cases, it isn’t sufficient.
The mathematical definition is that a positive function f(N) = O(g(N)) if we can find a constant C such that for large values of N, the following holds:

f(N) ≤ C × g(N)

We can use the definition to show that f(N) = 4N² + 2N + 7 = O(N²). In this case, we can choose C = 5 and see that for any N > 4, f(N) ≤ 5 × N².
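A brute-force check of this bound (my own sketch, not from the article):

```python
f = lambda n: 4 * n**2 + 2 * n + 7

# The bound f(N) <= 5 * N² holds from N = 4 onward; below that, the
# constant term 7 still dominates and the inequality can fail.
assert all(f(n) <= 5 * n**2 for n in range(4, 100_000))
assert not f(3) <= 5 * 3**2   # 49 > 45, so very small N are excluded
print("f(N) <= 5 * N² verified for 4 <= N < 100,000")
```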
In practice, we’ll just use the above simplification rules.
Types of Time Complexity
Now that we understand the basics of Big-O notation, let's explore common time complexities and their implications, starting with the most efficient: constant time.
Constant time: O(1)
As the name suggests, constant-time algorithms have a constant time complexity, meaning their performance does not depend on the amount of data. These algorithms typically involve calculations with a fixed number of inputs, such as computing the distance between two points.
The distance between two points is calculated by taking the differences between their x and y coordinates, squaring them, and computing the square root of the sum of those squares.
The dist function shown below computes the distance between two points. It consists of three basic instructions, which is a constant. In this case, we write that the time complexity is O(1).
```python
def dist(p, q):
    dx = (p[0] - q[0]) ** 2       # 1 instruction
    dy = (p[1] - q[1]) ** 2       # 1 instruction
    return (dx + dy) ** 0.5       # 1 instruction
```
Interestingly, it is entirely possible for a function to exhibit constant time complexity, even when processing large datasets. Take, for instance, the case of calculating the length of a list using the len function. This operation is O(1) because the implementation internally keeps track of the list's size, eliminating the need to count the elements each time the length is requested.
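A small experiment (my own sketch, not from the article) shows that the cost of len() does not change with the list size:

```python
import time

for size in (1_000, 100_000, 10_000_000):
    lst = [0] * size
    start = time.perf_counter()
    for _ in range(1_000_000):
        len(lst)                  # O(1): reads a stored counter
    elapsed = time.perf_counter() - start
    print(f"{size:>10} elements: {elapsed:.3f} s for a million len() calls")
```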
Linear time: O(N)
We have previously discussed sum_list as a linear time function example. Generally, linear time algorithms execute tasks that need to inspect each data point individually. Calculating the minimum, maximum, or average is a typical operation in this category.
Let’s practice by analyzing the following function for computing the minimum value of a list:
```python
def minimum(lst):
    min_value = lst[0]                       # 1 instruction
    for i in range(1, len(lst)):
        min_value = min(min_value, lst[i])   # executed len(lst) - 1 times
    return min_value                         # 1 instruction
```
If the list has N elements, the complexity is 1 + (N - 1) + 1 = N + 1 = O(N).
The execution time of linear time algorithms is directly proportional to the amount of data. Thus, if the amount of data is doubled, the time it takes for the algorithm to process the data should also double.
Logarithmic time: O(log(N))
The number-guessing game involves attempting to guess a hidden number between 1 and N, aiming to identify this number with the fewest possible guesses. After every guess, we receive feedback indicating whether the hidden number is higher, lower, or the same as our guess. If the guess is correct, we win the game; otherwise, we continue guessing.
One effective strategy for this game is to consistently guess the midpoint. Suppose N = 15. In this case, our first guess would be 8. If the secret number is smaller than 8, we know it must be between 1 and 7. If it's larger, then it lies between 9 and 15. We continue to guess the midpoint of the remaining range until we find the correct answer. The diagram below illustrates the potential paths this strategy can follow.
Let's examine the time complexity of this approach. Unlike the previous examples, the number of steps this algorithm takes isn't solely dependent on the input size N, as there's a possibility of guessing correctly on the first attempt.
When analyzing these algorithms, we concentrate on the worst-case scenario. For instance, with N = 15, in the worst case, we would need to make 4 guesses. Each guess, being the midpoint, eliminates half of the remaining possibilities. In the worst-case scenario, we continue guessing until only one possibility remains. Therefore, the question we need to address is:
How many times must we divide N by 2 to obtain 1?
The answer is the base-2 logarithm of N, denoted as log2(N). The algorithm deployed to pinpoint the correct number is known as binary search. Given that the maximum number of guesses is log2(N), we can express its complexity as O(log2(N)) or simply O(log(N)) since constants are insignificant in time complexity notation.
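Here is a small sketch of the midpoint-guessing strategy (my own illustration, not code from the article); the worst-case number of guesses printed for each N tracks log2(N):

```python
import math

def count_guesses(secret, n):
    # Always guess the midpoint of the remaining range [low, high].
    low, high, guesses = 1, n, 0
    while True:
        guesses += 1
        mid = (low + high) // 2
        if mid == secret:
            return guesses
        if mid < secret:
            low = mid + 1    # the hidden number is higher
        else:
            high = mid - 1   # the hidden number is lower

for n in (15, 1_000, 100_000):
    worst = max(count_guesses(secret, n) for secret in range(1, n + 1))
    print(f"N = {n:>7}: worst case {worst} guesses (log2(N) ≈ {math.log2(n):.1f})")
```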
Due to its extremely slow growth rate, logarithmic time complexity is highly sought after in practice, behaving almost like constant time complexity. Even for large values of N, the total number of operations remains remarkably low. For instance, even if N is one billion, the number of operations required is only about 30.
Binary search is a fundamental algorithm in computer science with numerous applications. For example, in natural language processing, spell checkers and autocorrect systems can utilize binary search variants to efficiently identify potential candidate words.
Quadratic time: O(N²)
Sorting data is a fundamental task that computers frequently need to perform. One method to sort a list of N numbers involves iteratively identifying the minimum element of the list.
```python
def selection_sort(lst):
    sorted_lst = []                   # 1 instruction
    for _ in range(len(lst)):
        minimum = min(lst)            # executed len(lst) times
        lst.remove(minimum)           # executed len(lst) times
        sorted_lst.append(minimum)    # executed len(lst) times
    return sorted_lst                 # 1 instruction
```
To analyze the complexity of a for loop, we evaluate the complexity of the instructions within it and then multiply that result by the number of iterations the loop is executed.
Computing the minimum of a list and removing an element from a list are both O(N) operations. Appending is done in constant time, O(1). Therefore, each iteration of the for loop has a complexity of O(N + N + 1) = O(2N + 1), which simplifies to O(N). Since there are N iterations, the complexity is N × O(N) = O(N²). The rest of the selection_sort() function consists of three simple instructions, so the overall complexity is O(N² + 3) = O(N²).
Our analysis of the for loop involved an oversimplification. In each iteration, we remove one element from the list, meaning not all iterations execute the same number of instructions. The first iteration executes N instructions, the second executes N - 1, the third N - 2, and this pattern continues until the last iteration, which only executes 1 instruction.
This implies that the true complexity of the for loop is:
N + (N - 1) + (N - 2) + … + 1
This sum can be shown to be equal to (N² + N) / 2. However,

(N² + N) / 2 = ½N² + ½N = O(N²)

Even though we overcounted, the overall complexity is still quadratic. The selection_sort() function exemplifies a slower sorting algorithm due to its quadratic complexity, O(N²).
Functions with quadratic complexity scale poorly, making them suitable for small lists but impractical for sorting millions of data points, as they can take days to complete the task. Doubling the amount of data leads to a quadrupling of the execution time.
Loglinear time: O(N log(N))
It is possible to devise an algorithm for sorting a list with O(N log(N)) time complexity. One such algorithm is called merge sort. We won’t go into the details of its implementation, but if you want to learn more about it, check out this quick introduction to merge sort from this course on Data Structures and Algorithms in Python.
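For reference, here is a compact merge sort sketch (my own illustration, not the course's implementation). Each of the O(log(N)) levels of recursion does O(N) work merging, which is where the O(N log(N)) total comes from:

```python
def merge_sort(lst):
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])     # sort each half recursively
    right = merge_sort(lst[mid:])

    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
```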
As mentioned previously, logarithmic complexity behaves similarly to constant complexity in practice. Therefore, the execution time of an O(N log(N)) algorithm can be comparable to that of a linear-time algorithm in real-world applications.
The following plot shows that N log(N) and N grow at similar rates, while N² grows much faster, meaning the corresponding algorithm quickly becomes very slow.
Cubic time: O(N³)
A widely used cubic time complexity algorithm in AI is matrix multiplication, which plays a crucial role in the functionality of large language models like GPT. This algorithm is pivotal to the calculations carried out during both the training and inference phases.
To multiply two N by N matrices, A and B, we need to multiply each row of the first matrix A by each column of the second matrix B. Specifically, the value of the product at entry (i, j) is determined by multiplying the elements in row i of A by the corresponding elements in column j of B.
Here’s a Python implementation of matrix multiplication:
```python
def matrix_mul(A, B):
    n = len(A)                                          # 1 instruction
    res = [[0 for _ in range(n)] for _ in range(n)]     # N² instructions
    for i in range(n):
        for j in range(n):
            for k in range(n):
                res[i][j] += A[i][k] * B[k][j]          # executed N×N×N = N³ times
    return res                                          # 1 instruction
```
In total, matrix_mul() executes N³ + N² + 2 instructions, so its time complexity is O(N³).
Alternatively, we could have analyzed the complexity by reasoning about the calculations. The result of the matrix multiplication has N² entries. Each entry takes O(N) time to compute, so the overall complexity is N² × O(N) = O(N³).
Cubic time algorithms are quite slow. Doubling the amount of data results in a factor of eight in the execution time, making such algorithms scale very poorly. Even for a relatively small value of N, the number of instructions explodes very quickly.
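We can see the factor-of-eight behavior directly by timing matrix_mul() from above on random matrices of size N and 2N (my own sketch, not from the article; exact timings depend on the machine):

```python
import random
import time

def random_matrix(n):
    return [[random.random() for _ in range(n)] for _ in range(n)]

for n in (100, 200):
    A, B = random_matrix(n), random_matrix(n)
    start = time.perf_counter()
    matrix_mul(A, B)
    print(f"N = {n}: {time.perf_counter() - start:.2f} s")
# Doubling N should multiply the runtime by roughly 2³ = 8.
```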
Matrix multiplication is an example of a computational challenge where advances in hardware have made a significant impact. Although there are slightly more efficient algorithms for matrix multiplication, it is primarily the advances in the design of GPU chips — tailored specifically for performing matrix multiplication — that have significantly facilitated the training of large models like GPT within reasonable timeframes.
More recently, researchers have proposed eliminating matrix multiplication from large learning models—you can read more about this in this article on MatMul-Free LLMs: Key Concepts Explained.
Exponential time complexity: O(2^N) and O(N!)
Exponential time algorithms typically arise when a problem is solved by exploring all possible solutions. Consider planning a route for a delivery truck required to visit N delivery locations. One approach to address this problem is to evaluate every possible delivery order. The Python itertools package offers a straightforward method to cycle through all permutations of a list.
```python
import itertools

for order in itertools.permutations([1, 2, 3]):
    print(order)
```
```
(1, 2, 3)
(1, 3, 2)
(2, 1, 3)
(2, 3, 1)
(3, 1, 2)
(3, 2, 1)
```
The number of permutations of a list of length N is denoted by N!, read as "N factorial." The factorial function exhibits exponential growth. For instance, when N = 13, the number of permutations exceeds 1 billion.
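The claim is easy to check with math.factorial (a quick aside of mine, not from the article):

```python
import math

for n in (5, 10, 13, 20):
    print(n, math.factorial(n))
# 13! = 6,227,020,800 -- already past one billion permutations.
```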
Suppose the delivery locations are given as a list of 2D points. In that case, we can determine the delivery sequence that minimizes the total distance by iterating through all possible sequences while keeping track of the best one.
```python
def optimize_route(locations):
    minimum_distance = float("inf")                      # 1 instruction
    best_order = None                                    # 1 instruction
    for order in itertools.permutations(locations):      # N! iterations
        distance = 0                                     # 1 instruction, N! times
        for i in range(1, len(order)):
            distance += dist(order[i - 1], order[i])     # N-1 instructions, N! times
        if distance < minimum_distance:
            minimum_distance = distance                  # 1 instruction, N! times
            best_order = order                           # 1 instruction, N! times
    return best_order                                    # 1 instruction
```
The first for loop is executed N! times. Therefore, the number of operations in optimize_route() is N! x (1 + N - 1 + 2) + 3 = N! x (N + 2) + 3 = N! x N + 2N! + 3.
We learned that when calculating time complexity, we ignore lower exponent terms. However, in this scenario, we encounter an N! term, which isn’t a power of N. We can refine our simplification strategy by choosing to omit the terms with slower growth rates. The table below lists terms commonly found in an algorithm's time complexity, arranged from lowest to highest growth rate.
1 < log(N) < N < N² < N³ < N^k < 2^N < N!

Thus, we can express the time complexity of optimize_route() as N! × N + 2N! + 3 = O(N! × N).
Exponential time complexity algorithms are typically only efficient for solving very small instances. Even with just 13 locations, the number of permutations exceeds 1 billion, rendering this algorithm impractical for real-world scenarios.
Time Complexity: Worst Case and Best Case
With the number-guessing game, we focused on the worst-case complexity. By focusing on the worst case, we guarantee the rate of growth of the algorithm's execution time.
In the best case, we guess correctly on the first attempt, so a best-case complexity analysis would result in O(1) complexity. This is accurate: in the best case, we need a single constant operation. However, this isn't very useful in practice, because such a lucky guess is very unlikely.
In general, when we analyze the complexity of an algorithm, we always focus on the worst case because:
Guarantee of performance: By focusing on the worst-case complexity, we can ensure that our algorithm will never perform worse than a certain threshold. This is crucial for applications that require reliable performance, such as real-time systems, where delays can cause significant issues.
Safety and reliability: Worst-case analysis helps design robust algorithms that can handle the most demanding scenarios. It ensures that the algorithm will still function within acceptable limits even in the worst possible situations.
Upper bound: Knowing the worst-case complexity gives us an upper bound on the performance. This is useful for understanding the maximum resources (time, memory) an algorithm might require.
Time Complexity: Practical Implications
Time complexity may seem like a very theoretical subject, but it can prove very useful in practice.
For example, if a function consistently handles a small dataset, an easy-to-understand O(N²) algorithm may be more appropriate than a complex and difficult-to-maintain O(N) algorithm. However, this should be a deliberate decision to avoid being caught off-guard when the application cannot handle increased user traffic.
Time complexity analysis reveals that minor code optimizations have a negligible effect on an algorithm's execution time. Significant improvements are achieved by fundamentally reducing the number of basic operations needed to compute the result. Generally, maintaining clean and comprehensible code is preferable over optimizing it to the extent that it becomes unintelligible.
Slower parts of the code will dominate the others. It’s usually pointless to optimize the faster part before optimizing the slower one. For example, imagine we have a function that performs two tasks:
```python
def process_data(data):
    clean_data(data)
    analyze_data(data)
```
If the clean_data() function has complexity O(N²) and analyze_data() has complexity O(N³), then reducing the complexity of clean_data() will not improve the overall complexity of process_data(). It's better to focus on improving the analyze_data() function.
Furthermore, time complexity can inform hardware decisions. For example, an O(N³) algorithm, despite its poor scalability, might still be fast enough for tasks with limited data if run on superior hardware or a faster programming language. Conversely, if an algorithm has an exponential time complexity O(2^N), no hardware upgrade will suffice, indicating the necessity for a more efficient algorithm.
Conclusion
Despite computers getting faster, we're dealing with more data than ever. Think of it like this: every time we use the Internet, smartphones, or smart home devices, we create tons of information. As this mountain of data grows, sorting through it and finding what's important becomes harder. That's why it's really important to keep making our tools—especially algorithms—better at handling and understanding all this information.
Time complexity analysis offers a straightforward framework for reasoning about an algorithm's execution time. Its primary goal is to provide insights into the growth rate of the time complexity rather than predicting exact execution times. Despite the model's numerous simplifications and assumptions, it has proven to be extremely useful in practice, effectively capturing the essence of algorithm execution time behaviors.
If you want to explore more computer science subjects, check out my article on Data Structures: A Comprehensive Guide With Python Examples.
FAQs
How can I know the time complexity of Python built-in functions?
Sometimes, the documentation mentions the complexity. Otherwise, looking at the source code and analyzing it is a good exercise. Most languages follow standard implementations for common data structures such as lists and dictionaries, so looking up the complexity of these data structures yields the correct answer. We recommend following a generic course on data structures and algorithms to better understand common operations' complexity.
Is the complexity of the form O(N^k) always better than O(2^N), regardless of the value of k?
From a theoretical point of view, yes. For large enough N, the exponential time algorithm will eventually become slower. However, the exponential time algorithm may perform much better for small values of N. In some rare cases, we only care about small datasets, so the O(2^N) algorithm may be preferable.
Are there other complexity functions besides the ones we learned here?
Yes. There exist algorithms with very strange complexity functions, but for almost all algorithms, the complexity will involve only the functions we mention in this article.
How can I know if it’s possible to find a better algorithm?
This is a hard question in general. Proving that there’s no algorithm with a better time complexity involves reasoning about the problem and showing that solving it using fewer operations is impossible. In some cases, it’s easy. For example, to find out if a list contains a given value, it’s impossible to have an algorithm that has a complexity lower than O(N) because, in the worst case, we need to look at all elements to find the answer.
How can we analyze the time complexity if the algorithm involves randomness?
Even with randomness involved, it is often possible to reason about the worst possible case of the random outcomes and derive the complexity. Alternatively, for some randomized algorithms, we analyze the average complexity by studying the expected number of operations.
Author
François Aubry
Full-stack engineer & founder at CheapGPT. Teaching has always been my passion. From my early days as a student, I eagerly sought out opportunities to tutor and assist other students. This passion led me to pursue a PhD, where I also served as a teaching assistant to support my academic endeavors. During those years, I found immense fulfillment in the traditional classroom setting, fostering connections and facilitating learning. However, with the advent of online learning platforms, I recognized the transformative potential of digital education. In fact, I was actively involved in the development of one such platform at our university. I am deeply committed to integrating traditional teaching principles with innovative digital methodologies. My passion is to create courses that are not only engaging and informative but also accessible to learners in this digital age.
|
43
|
Published Time: 2022-03-21T11:57:34+00:00
A Visualization of Causality and Stability in z-Transform | Wireless Pi
===============
By Qasim Chaudhari
Most of the books and resources explain the z-Transform as a mathematical concept rather than a signal processing idea. Today I will provide a simple explanation of how the z-Transform helps in determining whether a system is causal and stable. I hope that this visual approach will help my readers learn this concept in a better manner.
The z-Transform
For a discrete-time signal h[n] (that is, the impulse response of a system), the z-Transform is defined as

$$H(z) = \sum_{n=-\infty}^{\infty} h[n]\, z^{-n} \tag{1}$$

Then, z is a complex number and hence can be written as

$$z = r e^{j\omega}$$

Putting r = 1 provides the classic Discrete-Time Fourier Transform (DTFT) given as

$$H(e^{j\omega}) = \sum_{n=-\infty}^{\infty} h[n]\, e^{-j\omega n} \tag{2}$$

As r = 1 above, the DTFT is the z-Transform evaluated on the unit circle on a complex plane.
Traditional Approach
From here, the concept of stable and causal systems having all poles inside the unit circle is established as follows. Using a causal signal
$$h[n] = a^n u[n]$$

the z-Transform from Eq (1) is derived as

$$H(z) = \sum_{n=0}^{\infty} a^n z^{-n} = \sum_{n=0}^{\infty} \left(a z^{-1}\right)^n = \frac{1}{1 - a z^{-1}}, \qquad \text{Region of Convergence: } |z| > |a|$$

where the last step follows from the geometric series formula and the region of convergence is outside the outermost pole. If an additional requirement of stability is desired, then all poles of a signal must lie inside the unit circle as shown in the figure below.
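A quick numerical sanity check of this geometric-series result (my own sketch, not part of the original post): for a point z outside the circle |z| = |a|, the partial sums settle to 1/(1 − a z⁻¹), while for a point inside that circle they keep growing.

```python
import numpy as np

a = 0.8
z_in_roc = 1.2 * np.exp(1j * 0.3)    # |z| > |a|: inside the region of convergence
z_outside = 0.5 * np.exp(1j * 0.3)   # |z| < |a|: the series diverges here

def partial_sum(z, n_terms):
    n = np.arange(n_terms)
    return np.sum((a / z) ** n)       # a^n * z^(-n) = (a / z)^n

print(partial_sum(z_in_roc, 200))        # approaches 1 / (1 - a / z)
print(1 / (1 - a / z_in_roc))
print(abs(partial_sum(z_outside, 200)))  # huge: no convergence
```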
If all this sounds confusing to you, there is another intuitive way to study causality and stability. Let us take that route.
The Concept of Fourier Transform
Recall from the concept of frequency and the Discrete Fourier Transform (DFT) the following points.
A complex sinusoid is a fundamental signal that consists of a cosine wave on the real axis and a sine wave on the imaginary axis, as shown below. This is for z = r e^{jω} when r = 1.
Regardless of the signal shape, most signals of practical interest can be considered as a sum of complex sinusoids oscillating at different frequencies, similar to how any point in space can be represented with a combination of x, y and z components, or different colours can be made from the participation of just three basic colours: Red, Green and Blue (a demo for the sinusoids forming a signal is shown here).
A set of N orthogonal complex sinusoids is shown below.
A continuum of such frequencies defines our frequency axis ω that is employed in the DTFT of Eq (2) above.
Causality
Similar to the DFT, the z-Transform can be thought of as constructing a signal in the time domain through complex sinusoids with unit magnitude as well as complex sinusoids scaled by a factor ≠ 1.
When this factor r ≠ 1, there are two possibilities: either r > 1 (outside the unit circle) or r < 1 (inside the unit circle).
The complex sinusoids with such magnitude scaling are shown in the figures below. Since z = r e^{jω} and r > 1, the magnitude keeps increasing with time for z^n (n is substituted by t for ease of exposition). See the cosine and sine waves growing with time.
On the other hand, the magnitude decays with time for z^n as r < 1. See the cosine and sine waves diminishing with time.
All the points on the complex z-plane indicate numbers z that (as a function of time) come together to form our signal. Since z = r e^{jω} and a negative power is used in the definition of the z-Transform, we have

$$z^{-n} = r^{-n} e^{-j\omega n} \tag{3}$$
This will have a different impact in case of causal and non-causal impulse responses.
Causal Systems
Causal systems have impulse responses that exist only for n ≥ 0. For positive n, Eq (3) becomes

$$z^{-n} = r^{-n} e^{-j\omega n} = \frac{1}{r^n}\, e^{-j\omega n} \tag{4}$$

Clearly, all signals have a region of convergence when the complex signals decay with time.
The magnitude is determined by the part involving r because |e^{-jωn}| = 1.
Since |r|^n appears in the denominator, that happens for |z| = |r| > 1 here as shown in the figure below.
For a general case where the outermost pole is a, the region of convergence is |z| = |r| > |a|. This is because from Eq (4), we have

$$a^{n} \cdot z^{-n} = a^{n} \cdot r^{-n} e^{-j\omega n} = \left(\frac{a}{r}\right)^{n} e^{-j\omega n}$$

For all r greater than |a|, this sequence converges to zero with time n.
In the above figure, the direction of rotations should be clockwise due to the negative frequency in e^{-jωn}, but that is ignored for simplicity.
Non-Causal Systems
Non-causal systems are the ones that exist only for n < 0. For these negative values of n, denoted by −n, Eq (3) becomes

$$z^{-(-n)} = r^{n} e^{j\omega n} \tag{5}$$

Again, all signals should have a region of convergence when the complex signals decay with time.
Since |r|^n appears in the numerator, that happens for |z| = |r| < 1 here as shown in the figure below.
For a general case where the innermost pole is a, the region of convergence is |z| = |r| < |a|. This is because from Eq (5), we have

$$a^{-n} \cdot z^{-(-n)} = a^{-n} \cdot r^{n} e^{j\omega n} = \left(\frac{r}{a}\right)^{n} e^{j\omega n}$$

For all r less than |a|, this sequence converges to zero with time n.
Stability
With causality established above, it is straightforward to find the stability condition for a system. The system output is determined not by the z-Transform but by the convolution between the impulse response h[n] and the input signal x[n]. For a stable system, this convolution output should be finite.

$$\left|\sum_{n} h[n] \cdot x[m-n]\right| \le \sum_{n} |h[n]| \cdot |x[m-n]| \quad \text{is finite}$$

For a bounded input x[n] that stays below a certain value, the impulse response h[n] should not grow without bound, so that it produces a bounded output after convolution.

$$\sum_{n} |h[n]| \quad \text{is finite}$$

Both the causal and non-causal scenarios above involve the role of r for convergence. However, the above expression only focuses on the impulse response samples without r, which happens on the unit circle z = 1·e^{jω} because

$$\left|h[n] \cdot 1 \cdot e^{-j\omega n}\right| = |h[n]|$$
We conclude that the unit circle must be included in the region of convergence for a system to be stable.
Condition for Causal and Stable Systems
The condition for both causality and stability can now be derived as follows.
A causal system should have a region of convergence outside the outermost pole.
A stable system should have the unit circle in its region of convergence.
Therefore, a causal and stable system should have all poles inside the unit circle.
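As a small numerical illustration (my own sketch, not from the original post), a causal impulse response h[n] = aⁿ u[n] is absolutely summable when its pole a sits inside the unit circle, and is not when the pole sits outside:

```python
import numpy as np

n = np.arange(200)

h_stable = 0.9 ** n      # pole at z = 0.9, inside the unit circle
h_unstable = 1.1 ** n    # pole at z = 1.1, outside the unit circle

print(np.sum(np.abs(h_stable)))    # approaches 1 / (1 - 0.9) = 10
print(np.sum(np.abs(h_unstable)))  # keeps growing as more samples are added
```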
|
44
|
CZGate
class qiskit.circuit.library.CZGate(*args, _force_mutable=False, **kwargs)

Bases: SingletonControlledGate
Controlled-Z gate.
This is a Clifford and symmetric gate.
Can be applied to a QuantumCircuit with the cz() method.
Circuit symbol:
q_0: ─■─
│
q_1: ─■─
Matrix representation:

CZ q_1, q_0 =
    [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, -1]]

In the computational basis, this gate flips the phase of the target qubit if the control qubit is in the |1⟩ state.
Create new CZ gate.
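A minimal usage sketch (my own example, not part of the reference text): build a two-qubit circuit, apply the gate with cz(), and inspect the resulting unitary.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

qc = QuantumCircuit(2)
qc.cz(0, 1)   # equivalent to qc.append(CZGate(), [0, 1])

print(Operator(qc).data.real)
# diag(1, 1, 1, -1): only the |11> amplitude picks up a -1 phase
```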
Attributes
base_class
Get the base class of this instruction. This is guaranteed to be in the inheritance tree of self.
The “base class” of an instruction is the lowest class in its inheritance tree that the object should be considered entirely compatible with for _all_ circuit applications. This typically means that the subclass is defined purely to offer some sort of programmer convenience over the base class, and the base class is the “true” class for a behavioral perspective. In particular, you should not override base_class if you are defining a custom version of an instruction that will be implemented differently by hardware, such as an alternative measurement strategy, or a version of a parametrized gate with a particular set of parameters for the purposes of distinguishing it in a Target from the full parametrized gate.
This is often exactly equivalent to type(obj), except in the case of singleton instances of standard-library instructions. These singleton instances are special subclasses of their base class, and this property will return that base. For example:
```python
>>> isinstance(XGate(), XGate)
True
>>> type(XGate()) is XGate
False
>>> XGate().base_class is XGate
True
```
In general, you should not rely on the precise class of an instruction; within a given circuit, it is expected that Instruction.name should be a more suitable discriminator in most situations.
ctrl_state
Return the control state of the gate as a decimal integer.
decompositions
Get the decompositions of the instruction from the SessionEquivalenceLibrary.
definition
Return definition in terms of other basic gates. If the gate has open controls, as determined from ctrl_state, the returned definition is conjugated with X without changing the internal _definition.
label
Return instruction label
mutable
Is this instance is a mutable unique instance or not.
If this attribute is False the gate instance is a shared singleton and is not mutable.
False
name
Get name of gate. If the gate has open controls the gate name will become:
<original_name>_o<ctrl_state>

where <original_name> is the gate name for the default case of closed control qubits and <ctrl_state> is the integer value of the control state for the gate.
num_clbits
Return the number of clbits.
num_ctrl_qubits
Get number of control qubits.
Returns
The number of control qubits for the gate.
Return type
int
num_qubits
Return the number of qubits.
params
Get parameters from base_gate.
Returns
List of gate parameters.
Return type
list
Raises
CircuitError – Controlled gate does not define a base gate
Methods
add_decomposition
add_decomposition(decomposition)
Add a decomposition of the instruction to the SessionEquivalenceLibrary.
broadcast_arguments
broadcast_arguments(qargs, cargs)
Validation and handling of the arguments and its relationship.
For example, cx([q[0], q[1]], q[2]) means cx(q[0], q[2]); cx(q[1], q[2]). This method yields the arguments in the right grouping. In the given example:

in: [[q[0], q[1]], q[2]], []
outs: [q[0], q[2]], []
      [q[1], q[2]], []

The general broadcasting rules are:

If len(qargs) == 1:
    [[q[0], q[1]]] -> [q[0]], [q[1]]
If len(qargs) == 2:
    [[q[0], q[1]], [r[0], r[1]]] -> [q[0], r[0]], [q[1], r[1]]
    [[q[0]], [r[0], r[1]]] -> [q[0], r[0]], [q[0], r[1]]
    [[q[0], q[1]], [r[0]]] -> [q[0], r[0]], [q[1], r[0]]
If len(qargs) >= 3:
    [[q[0], q[1]], [r[0], r[1]], ...] -> [q[0], r[0], ...], [q[1], r[1], ...]
Parameters
Returns
A tuple with single arguments.
Raises
CircuitError – If the input is not valid. For example, the number of arguments does not match the gate expectation.
Return type
Iterable[tuple[list, list]]
control
control(num_ctrl_qubits=1, label=None, ctrl_state=None, annotated=None)
Return the controlled version of itself.
Implemented either as a controlled gate (ref. ControlledGate) or as an annotated operation (ref. AnnotatedOperation).
Parameters
num_ctrl_qubits (int) – number of control qubits to add (default: 1).
label (str | None) – optional label for the returned gate.
ctrl_state (int | str | None) – control state expressed as an integer or a bitstring (e.g. '111'). If None, it defaults to 2**num_ctrl_qubits - 1, i.e. all controls in the |1⟩ state.
annotated (bool | None) – indicates whether the controlled gate should be implemented as an AnnotatedOperation (True) or as a ControlledGate (False).
Returns
Controlled version of the given operation.
Raises
QiskitError – unrecognized mode or invalid ctrl_state
copy
copy(name=None)
Copy of the instruction.
Parameters
name (str) – name to be given to the copied circuit, if None then the name stays the same.
Returns
a copy of the current instruction, with the name updated if it was provided
Return type
qiskit.circuit.Instruction
inverse
inverse(annotated=False)
Return inverted CZ gate (itself).
Parameters
annotated (bool) – when set to True, this is typically used to return an AnnotatedOperation with an inverse modifier set instead of a concrete Gate. However, for this class this argument is ignored as this gate is self-inverse.
Returns
inverse gate (self-inverse).
Return type
CZGate
is_parameterized
is_parameterized()
Return whether the Instruction contains compile-time parameters.
power
power(exponent, annotated=False)
Raise this gate to the power of exponent.
Implemented either as a unitary gate (ref. UnitaryGate) or as an annotated operation (ref. AnnotatedOperation). In the case of several standard gates, such as RXGate, when the power of a gate can be expressed in terms of another standard gate, that gate is returned directly.
Parameters
exponent (float) – the power to raise the gate to.
annotated (bool) – indicates whether the power gate should be implemented as an annotated operation.
Returns
An operation implementing gate^exponent
Raises
CircuitError – If gate is not unitary
repeat
repeat(n)
Creates an instruction with self repeated n times.
Parameters
n (int) – Number of times to repeat the instruction
Returns
Containing the definition.
Return type
qiskit.circuit.Instruction
Raises
CircuitError – If n < 1.
reverse_ops
reverse_ops()
For a composite instruction, reverse the order of sub-instructions.
This is done by recursively reversing all sub-instructions. It does not invert any gate.
Returns
a new instruction with sub-instructions reversed.
Return type
qiskit.circuit.Instruction
soft_compare
soft_compare(other)
Soft comparison between gates. Their names, number of qubits, and classical bit numbers must match. The number of parameters must match. Each parameter is compared. If one is a ParameterExpression then it is not taken into account.
Parameters
other (instruction) – other instruction.
Returns
are self and other equal up to parameter expressions.
Return type
bool
to_matrix
to_matrix()
Return a Numpy.array for the gate unitary matrix.
Returns
if the Gate subclass has a matrix definition.
Return type
np.ndarray
Raises
CircuitError – If a Gate subclass does not implement this method an exception will be raised when this base class method is called.
to_mutable
to_mutable()
Return a mutable copy of this gate.
This method will return a new mutable copy of this gate instance. If a singleton instance is being used this will be a new unique instance that can be mutated. If the instance is already mutable it will be a deepcopy of that instance.
validate_parameter
validate_parameter(parameter)
Gate parameters should be int, float, or ParameterExpression
|
45
|
Traditional Chinese marriage
Traditional Chinese marriage (Chinese: 婚姻; pinyin: hūnyīn) is a ceremonial ritual within Chinese societies that involves not only a union between spouses but also a union between the two families of a man and a woman, sometimes established by pre-arrangement between families. Marriage and family are inextricably linked, and marriage involves the interests of both families. Within Chinese culture, romantic love and monogamy were the norm for most citizens. Around the end of primitive society, traditional Chinese marriage rituals were formed, with deer skin betrothal in the Fuxi era, the appearance of the "meeting hall" during the Xia and Shang dynasties, and then in the Zhou dynasty, a complete set of marriage etiquette ("six rituals") gradually formed. The richness of this series of rituals proves the importance the ancients attached to marriage.[citation needed] In addition to the unique nature of the "three letters and six rituals",[citation needed] monogamy, remarriage and divorce in traditional Chinese marriage culture are also distinctive.
Etymology
The two-Chinese character word 婚姻 (hūnyīn; 'marriage') can be analyzed as follows:
Marriage in a Confucian context
In Confucian thought, marriage is of grave significance to both families and society, as well as being important for the cultivation of virtue. Traditionally, incest has been defined as marriage between people with the same surname.
"One of the earliest marriage prohibitions, and one surviving to this day, was that forbidding persons of the same surname to marry. An imperial decree of 484 A.D. states that this rule was promulgated far back in the Zhou dynasty, which was from 1122 to 255 B.C.' Any one marrying within his clan received sixty blows, and the marriage was declared null and void. It was feared that such mating would produce weak offspring."
From the perspective of a Confucian family, marriage brings together families of different surnames and continues the family line of the paternal clan. This is generally why giving birth to a boy is preferred over a girl. Therefore, the benefits and demerits of any marriage are important to the entire family, not just the individual couples. Socially, the married couple is thought to be the basic unit of society. In Chinese history, there have been many times when marriages have affected the country's political stability and international relations. In international relations, "intermarriage has continued throughout Chinese history as a means of establishing and maintaining relations among families in the private sphere, as well as a factor in political careers." For example, "Marriage alliances, or ho-ch'in 和亲, literally 'harmonious kinship,' was something new in its Han-era application. It was a part of a formal peace treaty arrangement at the interstate level, designed to pacify the powerful Hsiung-nu (匈奴) empire." During the Han dynasty, the rulers of the powerful Xiongnu tribe demanded women from the imperial family. Many periods of Chinese history were dominated by the families of the wife or mother of the ruling emperor. For the country's political stability, during the Qing dynasty, although there is no "evidence of prohibitions against ethnic intermarriage within the Eight Banners", "in elite families of the ruling class, primary wives were almost entirely Manchu, while qie (commonly translated as "concubines") and other partners of lower status could be Han". In the Qing dynasty, most of the high officials were Manchu, so in order to protect the interests of the family, the selection of a wife was very important, in particular whether the woman was born into the "eight banners". For example, "the ethnicity apparent in the maiden names of wives in genealogies from elite Manchu descent groups, such as the Imperial Lineage."
Role of women in marriages
The bride had to leave her family to become a daughter-in-law, subject to the authority of her husband's mother. In this role, she could witness the addition of secondary wives or concubinage, especially if she failed to produce a male heir. The husband could repudiate her for various reasons, and in the event of his death, remarrying was a challenge. This situation underscored the lack of economic independence for women, as their labor focused on household duties without bringing in income. Farm women were largely illiterate, and they had minimal to no property rights.
Ancient China perceived the world as the result of the interplay between two complementary elements, yin and yang. Yin represented all things female, dark, weak, and passive, while yang represented all things male, bright, strong, and active. Although both male and female were deemed necessary and complementary, one was passive in relation to the other. Building on these ideological foundations, Chinese male moralists developed behavioral norms of obedience and passivity expected of women.
These norms placed girls subordinate to boys from infancy and maintained the wife's subordination to her husband and the mother's subordination to her grown son. Status within the family was formally outlined in the renowned "three bonds" accentuated by Confucian philosophers. These bonds included the allegiance of subjects to rulers, the filial obedience of sons to fathers, and the chastity expected from wives but not husbands. While the theory did not emphasize the relationship between mother and son it held practical importance.
When a father perceived the emergence of individuality and independence in his son, he harbored concerns about potential disruption to the family. Strong bonds of intimacy between the son and either mother or wife posed a potential threat to the vertical lines of loyalty and respect that upheld the family structure and the father's authority. Women were deemed destabilizers: even though they held the promise of descendants, they also posed a constant threat to the bond of obedience between parents and sons.
Ancient Chinese marriages
Marriages in early societies
Women and men were married relatively young. For the women, it was soon after puberty and men were not much later, around fifteen and twenty respectively.
Mythological origin
The story about the marriage of sister and brother Nüwa and Fu Xi told how they invented proper marriage procedures after marrying. At that time, the world was unpopulated so the siblings wanted to get married, but at the same time, they felt ashamed. They went up to Kunlun Mountains and prayed to the heavens. They asked for permission for their marriage and said, "if you allow us to marry, please make the mist surround us." The heavens gave permission to the couple, and promptly the peak was covered in mist. It is said that in order to hide her shyness, Nüwa covered her blushing face with a fan. Nowadays in some villages in China, the brides still follow the custom and use a fan to shield their faces.[citation needed]
Historic marriage practices
Endogamy was practiced among the different classes in China: the upper class, such as the Shi, married among themselves, while commoners likewise married among themselves, avoiding marriage with slaves and other ordinary people. This practice was enforced under the law.
Maternal marriage and monogamy
In a maternal marriage, the husband moved into the woman's family home after the marriage. This happened during the transformation of antithetic marriage into monogamy, which signified the decline of matriarchy and the growing dominance of patriarchy in ancient China.[citation needed]
Marriage during the Han dynasty (202 BC – 220 AD)
Marriages during this time included a number of mandatory steps. The most important of them was the presentation of betrothal gifts from the groom and his family to the bride and her family. The bride's family then countered with a dowry. Sometimes the bride's family would buy goods with the betrothal money. Using a betrothal gift for family financial needs rather than saving it for the bride was viewed as dishonorable because it appeared as though the bride had been sold. A marriage without a dowry or betrothal gifts was also seen as dishonorable, as the bride was then seen as a concubine instead of a wife. Once all the goods were exchanged, the bride was taken to the groom's ancestral home. There she was expected to obey her husband and live with his relatives. Women continued to belong to their husbands' families even after their husbands had died. If the widow's birth family wanted her to marry again, they would often have to ransom her back from her deceased husband's family. Any children she had stayed with his family.
Marriage match-maker during the Ming dynasty
In the Ming period, marriage was considered solemn and according to the law written in The Ming Code (Da Ming Lü), all commoners' marriages must follow the rules written in Duke Wen's Family Rules (Wen Gong Jia Li). The rules stated that "in order to arrange a marriage, an agent must come and deliver messages between the two families." A marriage match-maker had the license to play important roles by arranging marriages between two families. Sometimes both families were influential and wealthy and the matchmaker bonded the two families into powerful households. Studies have shown that, "In the Ming and Qing dynasties, a number of noble families emerged in Jiaxing of Zhejiang, where marriage is the most important way to expand their clan strength." Hence, marriage match-makers were crucial during the Ming era, and offer an insight into the lives of the Ming commoners.
Instead of using the more gender general term "mei ren" (媒人), texts more frequently referred to marriage match-makers as "mei po" (媒婆). Since "po" (婆) translates to "granny" in English, it suggests that elderly female characters dominated the "marriage market". Indeed, in the novel The Golden Lotus (Jin Ping Mei), the four matchmakers, Wang, Xue, Wen, Feng were all elderly female characters. In ancient China, people believed that marriages belong to the "Yin" side (the opposite of "Yang"), which corresponds to females. In order to maintain the balance between Yin and Yang, women should not interfere with the Yang side and men should not interfere with the Yin side. Since breaking the balance may lead to disorder and misfortune, men were rarely seen in marriage arrangements. Furthermore, unmarried girls were not in the occupation because they themselves knew little about marriage and were not credible in arranging marriages. As a result, almost all marriage match-makers in the literary work were presented as elderly females.
Being a successful marriage match-maker required various special skills. First, the match-maker had to be very persuasive: she had to persuade both sides of the marriage that the arrangement was impeccable, even though many times it was not perfect. In Feng Menglong's "Old Man Zhang Grows Melons and Marries Wennü" in the collection Stories Old and New (Gu Jin Xiao Shuo), he wrote about an eighty-year-old man who married an eighteen-year-old girl. The marriage was arranged by two matchmakers, Zhang and Li. Given the age difference, the marriage seemed impossible, but the two match-makers still managed to persuade the father of the girl to marry her to the old man. Feng Menglong described them as "Once they start to speak the match is successfully arranged, and when they open their mouths they only spoke about harmony." The match-makers gave powerful persuasions by avoiding mentioning the differences between the couples they arranged. In addition to persuasion techniques, the match-makers had to possess great social skills: they needed to know a network of people so that, when the time came for marriage, families knew whom to approach for a match-maker's services. Finally, when someone came to the match-maker, she had to be able to pick out a matching suitor according to her knowledge of local residents. Normally, a perfect couple had similar social status, economic status, and age. Wealthy families would look for a bride of similar social status who could manage the family finances and, most importantly, produce sons to inherit the family's wealth. Poor families, on the other hand, would not be as demanding and would only look for a bride who was willing to work hard in the fields. Sometimes the match-makers even needed to travel to neighboring towns for a match, hence the verse "Traveling to the east household, traveling to the west household, their feet are always busy and their voices are always loud." Furthermore, mediators were required to know simple mathematics and characters in order to write the matrimonial contract. The contract included "the sum of the bride price, the identity and age of both partners, and the identity of the person who presided over the wedding ceremony, usually the parents or grandparents."
Matchmakers made a living not only by facilitating successful marriage arrangements but also by delivering messages between the two families. When they visited the households to deliver messages, the hosts usually provided them with food and drinks to enjoy, hence the verse "Asking for a cup of tea, asking for a cup of alcohol, their faces are 3.3 inches thick (they are really cheeky)." However, these "visiting payments" were tiny compared to the payment they received for a successful marriage: the visiting payment was always measured in "wen" (cash), whereas the final payment was measured in "liang" (taels), one tael being equivalent to a thousand wen. Therefore, the match-makers would spend most of their time travelling back and forth between the two households to persuade them to marry. In addition, the matchmakers received payments for introducing young girls to wealthy men. In Zhang Dai's short essay collection The Dream Collection of Tao'an (Tao'an Meng Yi), he described a scene in which matchmakers brought young beautiful girls to the houses of wealthy customers to choose from. Even if the customer was not satisfied, he would reward the matchmaker several hundred wen.
As marriage match-makers, these grannies also possessed "guilty knowledge" of secret affairs. In The Golden Lotus (Jin Ping Mei), the matchmaker Wang speculated that Ximen Qing was fond of the married woman Pan Jinlian, so she introduced Pan to Ximen, helped them to have an affair and then hid the secret for them. According to the law, a married woman had to be loyal to her husband, and anyone who discovered a woman having an affair was supposed to report her immediately. Matchmakers were licensed to keep secrets about affairs because protecting their clients' privacy was their obligation. Even so, they were usually criticized for doing so; in The Golden Lotus, Wang was blamed for encouraging ladies to have improper affairs.[citation needed]
Marriage matters in Xinjiang (1880–1949)
Even though Muslim women are forbidden to marry non-Muslims in Islamic law, from 1880 to 1949, it was frequently violated in Xinjiang since Chinese men married Muslim Turki (Uyghur) women. Turki women who married Chinese were labelled as whores by the Turki community; these marriages were illegitimate according to Islamic law. Turki women obtained benefits from marrying Chinese men since the Chinese defended them from Islamic authorities. These women were not subjected to the tax on prostitution and were able to save their income for themselves.[citation needed]
Chinese men gave their Turki wives privileges which Turki men's wives did not have: the wives of Chinese men did not have to wear a veil, and a Chinese man in Kashgar once beat a mullah who tried to force his Turki Kashgari wife to veil.[citation needed] Turki women also were not subject to any legal binding to their Chinese husbands, and they could make their Chinese husbands provide them with as much money as they wanted for their relatives and themselves, since the women could leave whenever they wanted to. Any property the Chinese men owned was left to their Turki wives after they died.
Because they were viewed as "impure", Islamic cemeteries banned Turki wives of Chinese men from being buried within them. Turki women got around this problem by giving shrines donations to buy a grave in other towns. Besides Chinese men, other men such as Armenians, Jews, Russians, and Badakhshanis intermarried with local Turki women.
Local Turki society accepted the mixed offspring of Turki women and Chinese men as their own people, despite the marriages being in violation of Islamic law. Turki women also conducted temporary marriages with Chinese soldiers temporarily stationed around them. Frequently, when the soldiers' time at a post ended, they would sell their wives and daughters to other Chinese soldiers stationed nearby, taking their sons with them if they could afford to and abandoning them if they could not.
Traditional marriage rituals
Chinese marriage became a custom between 402 and 221 BC. Despite China's long history and many different geographical areas, there are essentially six rituals, generally known as the three letters and six etiquettes (三書六禮). Unfortunately for some traditional families, the wife's mother cannot visit her son-in-law's family until one year (reckoned by the Chinese lunar calendar) has elapsed after the wedding. However, during this year the daughter can go back at any time.
Six etiquettes
The wedding ceremony consisted of six basic procedures: making a proposal of marriage (nacai), requesting the bride's name and date of birth (wenming), sending news of divination results and betrothal gifts (naji), sending wedding presents to the bride's house (nazheng), requesting the date of the wedding (qingqi), and fetching the bride in person (qinying). Details of each ritual could vary.
Modern practices
Since the late 1990s,[clarification needed] it has become popular to create an elaborate wedding album, often taken at a photography studio. The album usually consists of many pictures of the bride and groom taken at various locations with many different outfits. In Singapore, these outfits often include wedding outfits belonging to different cultures, including Arab and Japanese wedding outfits. In contrast to Western wedding pictures, the Chinese wedding album will not contain pictures of the actual ceremony and wedding itself.
In Mandarin Chinese, a mang nian (盲年), or 'blind year', is a year in which no first day of spring falls; such a year, for example 2010, a Year of the Tiger, is considered an ominous time to marry or start a business. The preceding year, by contrast, contained two first days of spring.
In recent years, Confucian wedding rituals have become popular among Chinese couples. In such ceremonies, which are a recent innovation with no historic antecedent, the bride and groom bow and pay respects to a large portrait of Confucius hanging in the banquet hall while wedding attendants and the couple themselves are dressed in traditional Chinese robes.
Before the bride and groom enter the nuptial chambers, they exchange nuptial cups and perform ceremonial bows as follows:
Traditional divorce process
In traditional Chinese society, there are three major ways to dissolve a marriage.
The first was no-fault divorce. According to the Tang Code, the legal code of the Tang dynasty (618–907), a marriage may be dissolved due to personal incompatibility, provided that the husband writes a divorce note.
The second (義絕) was through state-mandated annulment of marriage. This applied when one spouse committed a serious crime (variously defined, usually more broadly for the wife) against the other or his/her clan. If the couple did not take the initiative to divorce when the situation for criminal annulment (義絕) arose,[clarification needed] the state would intervene to force them to divorce. If one side refused to divorce, the law required an investigation of that party's criminal liability, with a one-year prison sentence as the punishment. Once a divorce was decreed, the couple could not be reunited.
The third way was mutual divorce (和離), by which both husband and wife had the power to divorce. It required agreement between the two and was meant to ensure that both husband and wife had equal power to protect themselves and their property. It also enhanced the concept of responsibility: divorce was seen as a responsibility to each other, and the state or government would not typically intervene.
Last, the husband could unilaterally declare a divorce. To be legally recognized, it had to be based on one of the following seven reasons (七出):
There are, however, three clearly defined exceptions (三不去), under which unilateral divorce was forbidden, despite the presence of any of the seven aforementioned grounds:[citation needed]
The above law about unilateral divorce was in force from the Tang dynasty up to its final abolition in the Republic of China's Civil Code (Part IV) Section 5, passed in 1930.
Divorce in contemporary China
After the establishment of the People's Republic in 1949, the country's new Marriage Law also explicitly provided for lawful divorces. Women were permitted to divorce their husbands and many did, sparking resistance especially from rural males. Kay Ann Johnson reported that tens of thousands of women in north central China were killed for seeking divorces or committed suicide when blocked from doing so.
Divorce was rare during the Mao era (1949–1976), but it has become easier and more commonplace in the post-reform era. A USC U.S.-China Institute article reports that the divorce rate in 2006 was about 1.4/1000 people, about twice what it was in 1990 and more than three times what it was in 1982. Still, the divorce rate in China is less than half of that in the United States. One of the most important breakthroughs in the marriage institution was the set of amendments added to the Marriage Law in 2001, which shortened the divorce-application procedure and added legitimate grounds for divorce, including an emphasis on faithfulness within the married couple; this was partly a response to the rising number of marriages failing because of extramarital affairs. With rising divorce rates, public discussion and governmental organs often criticize the lack of effort many couples put into maintaining their marriages. This is evident, for example, in the 'divorce buffer zones' established in the marriage registration offices of certain provinces: a room where couples wait, typically for 30 days, as a stage of the divorce application procedure, and are encouraged to talk things over and consider giving their marriage another chance. However, such measures have not decreased divorce rates in China.
Amendments have also been made to Article 32 of the revised 2001 Marriage Law. Parties to a marriage can apply for divorce under, and by showing, the following grounds:
Monogamy
In ancient China, women's social status was not as good as men's. A woman could only obey and rely on her husband. She shared her husband's class, whether he was a peasant, merchant, or official. The clothes she could wear and the etiquette she was expected to display depended on her husband's background and achievements. If her husband died, she could remarry, but doing so was regarded as indecent. The neo-Confucian opposition to widow remarriage was expressed in an oft-quoted aphorism of Zhu Xi: "It is a small matter to starve to death, but a large matter to lose one's virtue." Moreover, the government also issued measures against widow remarriage. For example, "The state reinforced the neo-Confucian attitude against widow remarriage by erecting commemorative arches to honour women who refused to remarry. In 1304, the Yuan government issued a proclamation declaring that all women widowed before they were thirty who remained chaste widows until they were fifty were to be so honoured. The Ming and Qing continued the practice."
The virtues of chaste widowhood were extolled in instructions for women, such as the Nu Lun Yu (Analects for Women). A man, by contrast, could have only one wife but many concubines, and he could marry a new wife if his wife died before him. High dignitaries likewise had only one wife but many concubines.[citation needed]
Sororate marriage
Sororate marriage is a custom in which a man marries his wife's sister(s). Later it is expanded to include her cousins or females from the same clan. The Chinese name is 妹媵 (妹=younger sister, 媵=co-bride/concubinage). It can happen at the same time as he marries the first wife, at a later time while the wife is still alive, or after she dies. This practice occurred frequently among the nobility of the Zhou dynasty (1045–256 BC), with cases occurring at later times.[citation needed]
Multiple wives with equal status
Besides the traditional desire for male children to carry on the family name, this allowance partially resolved a dilemma created by the emperor himself. He had recently banned all non-patrilineal forms of inheritance while wanting to preserve the proper order in Chinese kinship. Therefore, a couple without a son could not adopt one from within the extended family; they either had to adopt from outside (which many regarded as passing the family wealth to unrelated "outsiders") or become heirless. Multiple inheritance marriages provided a way out when the husband's brother had a son.
Ruzhui marriage
The custom of ruzhui (入贅) applied when a relatively wealthy family had no male heirs, and a poorer family had multiple male children. Under these circumstances, a male from the poorer family, generally a younger sibling, would marry into the wealthier family in order to continue its family line. In a ruzhui (lit., 'the [man] becoming superfluous') marriage, the children would take on the surname of the wife.
Concubinage
In ancient China, concubinage was very common, and men who could afford it usually bought concubines and took them into their homes in addition to their wives. The standard Chinese term translated as "concubine" means "concubine: my and your servant." In the family hierarchy, the principal wife (diqi) ranked second only to her husband, while a concubine was always inferior to the wife, even if her relations with the husband were more intimate. Women in concubinage (妾) were treated as inferior and were expected to be subservient to the wife (if there was one). The women were not wedded in a formal ceremony, had fewer rights in the relationship, and could be divorced arbitrarily. They generally came from a lower social status or were bought as slaves.
Women who had eloped may have also become concubines since a formal wedding requires her parents' participation.
The number of concubines was sometimes regulated, and differed according to the man's rank. In ancient China, men of higher social status often supported several concubines, and Chinese emperors almost always had dozens, or even hundreds, of royal concubines.
Polyandry
Polyandry is the practice of a woman having multiple husbands. It was not rare in traditional Chinese society, especially among the wealthy elite, and it was legal in Hong Kong until as recently as 1971.[citation needed] A compendium of miscellaneous facts compiled in the Ming dynasty (1368–1644) mentioned a coastal village in present-day Zhejiang province called Shoujin'ao, where it was customary for brothers to marry the same woman. In fact, the wife preferred this arrangement for reasons of financial security. With a handkerchief hung outside the bedroom door, the husbands indicated whose turn it was to have conjugal relations.
In Yunnan, Pumi society has been traditionally organized into exogamous clans with marriages arranged by the parents. Marriage between cross-cousins and marriage within the clan is prohibited. Today there is a great variety of marriage patterns and styles, with intermarriage to other ethnic groups common in some areas while not so common in others. Some polyandry exists among the Pumi. Those that live near the Mosuo have adopted some of their marriage customs. Generally marriage is patrilocal, with men inheriting property, except in the area around Mosuo-dominated Lugu Lake and Yongning, where the Pumi seem to have adopted the Mosuo practice of the 'walking marriage', in which husbands visit their wives' homes at night but return to their maternal home in the day to work. Also, where Pumi live alongside Mosuo, it is not unusual for the two groups to intermarry.
Fraternal polyandry in Tibet is widely considered to be a means of preventing the division of a family's resources among its male heirs. As a family resource preservation strategy, Tibetan polyandry accomplishes the same goal as European family systems, but in a very different way. Researchers have suggested that polyandry developed in Tibet because it provides a household with enough male laborers to fully exploit the marginal agricultural lands in the Himalayas, because it serves as a means of population control, or because it serves as a way of reducing tax obligations to feudal Tibetan lords.[citation needed] A more convincing explanation of why Tibetan polyandry is practiced is provided by Nancy E. Levine. She claims that polyandry provides a household with a large labor force, enabling the family to pursue simultaneous and extensive involvement in the three sectors of the Tibetan economy: agriculture, herding, and trading (1988).[citation needed] Since Tibetan polyandry provides such important economic advantages to households, one can assume that the dissolution of polyandrous marriages is largely driven by individual interests. Levine (1981) and Melvyn C. Goldstein (1981) find that the breakup of polyandrous marriages is usually caused by the younger brothers of the household, due to unhappiness with their spouse, their lower reproductive success than older brothers, a desire for personal autonomy, or difficulty in maintaining a large household. Goldstein (1981) also finds that brothers are more likely to leave polyandrous marriages when unexpected economic opportunities arise.
arXiv:1604.08266v2 [math-ph] 15 Nov 2016
Contact Hamiltonian Mechanics
Alessandro Bravetti a, Hans Cruz b, Diego Tapias c
a Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, A. P. 70543, México, DF 04510, México.
b Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, A. P. 70543, México, DF 04510, México.
c Facultad de Ciencias, Universidad Nacional Autónoma de México, A. P. 70543, México, DF 04510, México.
Abstract
In this work we introduce contact Hamiltonian mechanics, an extension of symplectic Hamiltonian mechanics, and show that it is a natural candidate for a geometric description of non-dissipative and dissipative systems. For this purpose we review in detail the major features of standard symplectic Hamiltonian dynamics and show that all of them can be generalized to the contact case.
Keywords: Hamiltonian mechanics, Dissipative systems, Contact geometry
Email addresses: [email protected] (Alessandro Bravetti), [email protected] (Hans Cruz), [email protected] (Diego Tapias)

Contents

1 Introduction
2 Symplectic mechanics of non-dissipative systems
  2.1 Time-independent Hamiltonian mechanics
  2.2 Canonical transformations and Liouville's theorem
  2.3 Time-dependent Hamiltonian systems
  2.4 Hamilton-Jacobi formulation
3 Contact mechanics of dissipative systems
  3.1 Time-independent contact Hamiltonian mechanics
  3.2 Time evolution of the contact Hamiltonian and mechanical energy
  3.3 Contact transformations and Liouville's theorem
  3.4 Time-dependent contact Hamiltonian systems
  3.5 Hamilton-Jacobi formulation
  3.6 Example: the damped parametric oscillator
    3.6.1 First route to the solution: contact transformations
    3.6.2 Second route to the solution: the invariants
    3.6.3 Third route to the solution: the contact Hamilton-Jacobi equation
4 Conclusions and perspectives
Appendix A Invariants for the damped parametric oscillator
Appendix B Equivalence between the contact Hamilton-Jacobi equation and the contact Hamiltonian equations
1. Introduction
The Hamiltonian formulation of classical mechanics is a very useful tool for the description of mechanical systems due to its remarkable geometrical properties, and because it provides a natural way to extend the classical theory to the quantum context by means of standard quantization. However, this formulation exclusively describes isolated systems with reversible dynamics, while real systems are constantly in interaction with an environment that introduces the phenomena of dissipation and irreversibility. Therefore a major question is whether it is possible to construct a classical mechanical theory that not only contains all the advantages of the Hamiltonian formalism, but also takes into account the effects of the environment on the system. Several programmes have been proposed for this purpose (see e.g. [1] for a recent review). For example, one can introduce stochastic dynamics to model the effect of fluctuations due to the environment on the system of interest. This leads to stochastic equations of the Langevin or Fokker-Planck type with diffusion terms [2, 3]. A different although related approach is the system-plus-reservoir technique, in which the system of interest is coupled to an environment (usually modeled as a collection of harmonic oscillators). The system and the environment together are considered as an isolated Hamiltonian system, and after averaging out the environmental degrees of freedom one obtains the equations of motion for the system of interest, including dissipative terms. This is the case, for example, of the Caldeira-Leggett formalism [4–6]. An alternative approach is to propose effective Hamiltonians with an explicit time dependence that reproduce the correct Newtonian equation, including the dissipative forces. A famous example is the Caldirola-Kanai (CK) model [7–9]. Another proposal, based on a nonconservative action principle, allows for time-irreversible processes, such as dissipation, to be included at the level of the action [10, 11]. Finally, a more geometrical attempt towards the description of dissipative systems is given by the so-called bracket formulation of dynamical systems [12]. Here one generalizes the standard Poisson bracket to a noncanonical Poisson bracket and exploits the algebraic properties of the latter to include dissipation. The literature on all these proposals is very extensive and it is not our purpose here to review them in detail. We refer the interested reader to the standard references cited above and references therein.

Here we discuss a new proposal which consists in extending the symplectic phase space of classical mechanics by adding an extra dimension, thus dealing with a contact manifold instead of a symplectic one. Contact geometry arises naturally in mechanics. First of all, in describing mechanical systems where the Hamiltonian function explicitly depends on time, one usually appeals to an extended phase space, the additional dimension being time, endowed with the Poincaré-Cartan 1-form, which defines a contact structure on the extended space [13–15]. Besides, the time-dependent Hamilton-Jacobi theory is naturally formulated in this extended phase space [16, 17]. Furthermore, it has recently been argued that symmetries of the contact phase space can be relevant for a (non-canonical) quantization of nonlinear systems [18].

In this work we consider the phase space of any (time-independent) mechanical system (either non-dissipative or dissipative) to be a contact manifold, but we take a different route from previous works. In fact, there are two main differences between our proposal and the previous ones. First, we do not assume that the additional dimension is time, letting the additional dimension be represented by a non-trivial dynamical variable. Second, we derive the equations of motion for the system from contact Hamiltonian dynamics, which is the most natural extension of symplectic Hamiltonian dynamics [14]. Contact Hamiltonian dynamics has been used already in thermodynamics (both equilibrium and not [19–24]) and in the description of dissipative systems at the mesoscopic level [25]. Furthermore, it has been recently introduced in the study of mechanical systems exchanging energy with a reservoir [26, 27]. However, a detailed analysis of the dynamics of mechanical systems and a thorough investigation of the analogy with standard symplectic mechanics have never been pursued before.

We show that the advantages of contact Hamiltonian mechanics are that it includes within the same formalism both non-dissipative and dissipative systems, giving a precise prescription to distinguish between them; that it extends canonical transformations to contact transformations, thus offering more techniques to find the invariants of motion and to solve the dynamics; and that it leads to a contact version of the Hamilton-Jacobi equation. We argue that these additional properties play a similar role as their symplectic counterparts for dissipative systems. The structure of the paper is as follows: in section 2, in order to make the paper self-contained, we review the main aspects of the standard mechanics of non-dissipative systems, with emphasis on the symplectic geometry of the phase space and the Hamilton-Jacobi formulation. In section 3 the same analysis is extended to the case of contact Hamiltonian systems and it is shown by some general examples that this formulation reproduces the correct equations of motion for mechanical systems with dissipative terms. Besides, an illustrative example (the damped parametric oscillator) is worked out in detail in this section in order to show the usefulness of our method. Finally, Section 4 is devoted to a summary of the results and to highlighting future directions. In particular, we discuss a possible extension of our formalism to quantum systems. Finally, in Appendix A and Appendix B we provide respectively a derivation of the invariants of the damped parametric oscillator and a constructive proof of the equivalence between the contact Hamilton-Jacobi equation and the contact Hamiltonian dynamics.

Before starting, let us fix a few important notations that are used throughout the text. Both symplectic mechanics of conservative systems and contact mechanics of dissipative systems are presented first in a coordinate-free manner and then in special local coordinates (canonical and contact coordinates), labelled as $(q^a, p_a)$ and $(q^a, p_a, S)$ respectively. Moreover, the symplectic phase space is always indicated by $\Gamma$, while the contact phase space by $\mathcal{T}$. The extension of any geometric object to a quantity that explicitly includes time as an independent variable is always indicated with a superscript $E$ over the corresponding object (e.g. $\Gamma^E$). Finally, we always use the notation $H$ for the usual symplectic Hamiltonian function and $\mathcal{H}$ for the corresponding contact analogue.
2. Symplectic mechanics of non-dissipative systems
The description of isolated mechanical systems can be given in terms of the Hamiltonian function and of Hamilton’s equations of motion in the phase space, which has a natural symplectic structure. In this section we review Hamiltonian dynamics in the symplectic phase space, in order to compare it with the generalization to the contact phase space that is given in the next section.
2.1. Time-independent Hamiltonian mechanics
The phase space of a conservative system is the cotangent bundle of the configuration manifold, which is a $2n$-dimensional manifold $\Gamma$. Such a manifold is naturally endowed with a canonical 1-form $\alpha$, whose exterior derivative $\Omega = \mathrm{d}\alpha$ is non-degenerate and therefore defines the standard symplectic form on $\Gamma$. Given a Hamiltonian function $H$ on $\Gamma$, Hamilton's equations of motion follow from

$$-\mathrm{d}H = \Omega(X_H)\,, \qquad (1)$$

with $X_H$ the Hamiltonian vector field defining the evolution of the system. By a theorem of Darboux, one can always find local coordinates $(q^a, p_a)$ with $a = 1, \dots, n$ (called canonical coordinates) in which the canonical form is expressed as

$$\alpha = p_a\,\mathrm{d}q^a\,, \qquad (2)$$

where here and in the following Einstein's summation convention over repeated indices is assumed. In such coordinates

$$\Omega = \mathrm{d}\alpha = \mathrm{d}p_a \wedge \mathrm{d}q^a \qquad (3)$$

and from (1) it follows that the Hamiltonian vector field reads

$$X_H = -\frac{\partial H}{\partial q^a}\frac{\partial}{\partial p_a} + \frac{\partial H}{\partial p_a}\frac{\partial}{\partial q^a}\,. \qquad (4)$$

Usually, the canonical coordinates $q^a$ and $p_a$ correspond to the particles' generalized positions and momenta. From (4) the equations of motion take the standard Hamiltonian form

$$\dot q^a = \frac{\partial H}{\partial p_a}\,, \qquad \dot p_a = -\frac{\partial H}{\partial q^a}\,. \qquad (5)$$

A system whose evolution is governed by (5) is usually called a Hamiltonian system. The time evolution of any (not explicitly time-dependent) function $G \in C^\infty(\Gamma)$ is determined by the phase space trajectories generated by the Hamiltonian vector field $X_H$, that is

$$\frac{\mathrm{d}G}{\mathrm{d}t} = X_H[G] = \Omega(X_H, X_G) = \{G, H\}_{(q^a, p_a)}\,, \qquad (6)$$

where we have introduced the notation $\{G, H\}_{(q^a, p_a)}$ for the standard Poisson bracket between the two functions $G$ and $H$, which in canonical coordinates reads

$$\{G, H\}_{(q^a, p_a)} = \frac{\partial G}{\partial q^a}\frac{\partial H}{\partial p_a} - \frac{\partial G}{\partial p_a}\frac{\partial H}{\partial q^a}\,. \qquad (7)$$

Equations (6) and (7) imply immediately that $H$ is a first integral of the flow, that is, energy is conserved. In addition, any function commuting with $H$ is also a first integral.
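As a quick numerical illustration of equations (5) and of the conservation of $H$ (this sketch is ours, not part of the original paper; the parameter values and function names are arbitrary choices), one can integrate a one-dimensional harmonic oscillator with a semi-implicit (symplectic) Euler step and monitor the energy:

```python
import numpy as np

# Illustrative choice: H = p^2/(2m) + k q^2/2 for a 1-d harmonic oscillator
m, k = 1.0, 1.0
dH_dq = lambda q, p: k * q          # partial H / partial q
dH_dp = lambda q, p: p / m          # partial H / partial p

def symplectic_euler(q, p, dt, steps):
    """Integrate Hamilton's equations q' = dH/dp, p' = -dH/dq (eq. 5)."""
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * dH_dq(q, p)    # update the momentum first (semi-implicit step)
        q = q + dt * dH_dp(q, p)    # then update the position with the new momentum
        traj.append((q, p))
    return np.array(traj)

traj = symplectic_euler(q=1.0, p=0.0, dt=1e-3, steps=10_000)
H = traj[:, 1]**2 / (2 * m) + k * traj[:, 0]**2 / 2
print("relative energy drift:", abs(H[-1] - H[0]) / H[0])   # small: H is (nearly) conserved
```

The small residual drift is only the discretization error of the integrator; along the exact flow $H$ is a first integral, as stated above.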
2.2. Canonical transformations and Liouville’s theorem
Canonical transformations are an extremely important tool in classical mechanics, as they are strongly related to the symmetries and to the conserved quantities of the system and hence they are useful to simplify the equations of motion. They can be classified in time-independent transformations, the ones that preserve the form of the Hamiltonian function, and time-dependent transformations, which include time in the transformation and therefore are properly defined in an extended phase space. Here we consider time-independent transformations only. Time-dependent transformations are introduced below.

Canonical transformations are changes of coordinates in the phase space that leave Hamilton's equations (5) invariant. From (1), this amounts to finding a change of coordinates in the phase space that preserves the symplectic form $\Omega$ [14]. This definition immediately yields a way to check whether a coordinate transformation is canonical. Given the transformation $(q^a, p_a) \to (Q^a, P_a)$, invariance of $\Omega$ implies the following conditions

$$\{Q^a, Q^b\}_{(q^i, p_i)} = 0\,, \qquad \{P_a, P_b\}_{(q^i, p_i)} = 0\,, \qquad \{Q^a, P_b\}_{(q^i, p_i)} = \delta^a_b\,. \qquad (8)$$

As a consequence, canonical transformations also leave the canonical form $\alpha$ invariant up to an exact differential, that is

$$p_a\,\mathrm{d}q^a = P_a\,\mathrm{d}Q^a + \mathrm{d}F_1\,, \qquad (9)$$

where $F_1(q^a, Q^a)$ is called the generating function of the canonical transformation and obeys the relations

$$p_a = \frac{\partial F_1}{\partial q^a}\,, \qquad P_a = -\frac{\partial F_1}{\partial Q^a}\,. \qquad (10)$$

Furthermore, as $\Omega^n$ is the volume element of the phase space, it follows that canonical transformations preserve the phase space volume. A particular case of canonical transformations is the Hamiltonian evolution (5). In fact, symplectic Hamiltonian vector fields $X_H$ are the infinitesimal generators of canonical transformations. Therefore Liouville's theorem

$$\pounds_{X_H}\,\Omega^n = 0 \qquad (11)$$

follows directly, where $\pounds_{X_H}$ is the Lie derivative along the Hamiltonian vector field $X_H$ [14].
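The canonicity conditions (8) can be checked symbolically for a concrete change of variables. In the following sketch (our own example, not from the paper) $P = (q^2 + p^2)/2$ and $Q = \arctan(q/p)$ play the role of action-angle-like coordinates for the one-dimensional oscillator with $m = \omega = 1$:

```python
import sympy as sp

q, p = sp.symbols('q p', positive=True)

# Candidate transformation (illustrative): new momentum P and new coordinate Q
P = (q**2 + p**2) / 2
Q = sp.atan(q / p)

def poisson(f, g):
    """Standard Poisson bracket {f, g}_(q,p) of equation (7)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

# Condition (8): {Q, P}_(q,p) must equal 1 ({Q,Q} and {P,P} vanish trivially)
print(sp.simplify(poisson(Q, P)))   # -> 1, so the transformation is canonical
```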
2.3. Time-dependent Hamiltonian systems
For mechanical systems whose Hamiltonian depends explicitly on time the equations (1) are no longer valid, since the differential of the Hamiltonian depends on time. Moreover, also in the case of time-independent systems, it is useful to consider time-dependent canonical transformations, for which the differential of the corresponding generating functions does not satisfy the canonical condition (9). In order to deal with time-dependent systems and time-dependent canonical transformations, one usually extends the phase space with an extra dimension representing time. The extended phase space $\Gamma^E = \Gamma \times \mathbb{R}$ is therefore a $(2n+1)$-dimensional manifold endowed with a 1-form

$$\eta_{PC} = p_a\,\mathrm{d}q^a - H\,\mathrm{d}t\,, \qquad (12)$$

called the Poincaré-Cartan 1-form, where the Hamiltonian $H$ can either depend explicitly on time or not.¹ Then one proceeds to define a dynamics on $\Gamma^E$ that correctly extends Hamiltonian dynamics to the case where the Hamiltonian depends explicitly on time. A direct calculation shows that the condition

$$\mathrm{d}\eta_{PC}(X^E_H) = 0 \qquad (13)$$

is satisfied if and only if the vector field $X^E_H$ in these coordinates takes the form

$$X^E_H = X_H + \frac{\partial}{\partial t}\,, \qquad (14)$$

where $X_H$ is given by (4). Therefore the equations of motion for this vector field read

$$\dot q^a = \frac{\partial H}{\partial p_a}\,, \qquad \dot p_a = -\frac{\partial H}{\partial q^a}\,, \qquad \dot t = 1\,, \qquad (15)$$

which are just Hamilton's equations (5), augmented with the trivial equation $\dot t = 1$. This makes clear that Hamilton's equations in the extended phase space (15) are equivalent to the condition (13). It follows that the evolution of an arbitrary function $G \in C^\infty(\Gamma^E)$ is given by

$$\frac{\mathrm{d}G}{\mathrm{d}t} = \{G, H\}_{(q^a, p_a)} + \frac{\partial G}{\partial t}\,, \qquad (16)$$

and consequently for time-dependent Hamiltonian systems the Hamiltonian itself is not conserved.

[Footnote 1: Notice that $(\Gamma^E, \eta_{PC})$ is a contact manifold (cf. section 3.1), but it is not the standard (natural) contactification of $(\Gamma, \Omega)$ (see [28]), since $\eta_{PC}$ depends on $H$ and hence on the system.]

Now let us study time-dependent canonical transformations and their generating functions. To do so, we need to find a change of coordinates from $(q^a, p_a, t)$ to $(Q^a, P_a, t)$ that leaves the form of the extended Hamilton's equations (15) unchanged. Since in condition (13) only the differential of $\eta_{PC}$ is involved, we find out that we can make a transformation that changes $\eta_{PC}$ by the addition of an exact differential, so that equation (13) is not affected. Let us consider such a transformation and write

$$p_a\,\mathrm{d}q^a - H\,\mathrm{d}t - (P_a\,\mathrm{d}Q^a - K\,\mathrm{d}t) = \mathrm{d}F_1\,, \qquad (17)$$

where $K$ is a function on $\Gamma^E$ which is going to be the new Hamiltonian function after the transformation. Let us further assume that we can choose coordinates in which $Q^a$ and $q^a$ are independent, so that the independent variables in (17) are $(q^a, Q^a, t)$. We rewrite (17) as

$$\left(p_a - \frac{\partial F_1}{\partial q^a}\right)\mathrm{d}q^a - \left(P_a + \frac{\partial F_1}{\partial Q^a}\right)\mathrm{d}Q^a + \left(K - H - \frac{\partial F_1}{\partial t}\right)\mathrm{d}t = 0\,, \qquad (18)$$

which implies that the generating function of the canonical transformation $F_1(q^a, Q^a, t)$ satisfies the relations

$$p_a = \frac{\partial F_1}{\partial q^a}\,, \qquad P_a = -\frac{\partial F_1}{\partial Q^a}\,, \qquad K = H + \frac{\partial F_1}{\partial t}\,. \qquad (19)$$

Hamilton's equations (15) in the new coordinates can be written as

$$\dot Q^a = \frac{\partial K}{\partial P_a}\,, \qquad \dot P_a = -\frac{\partial K}{\partial Q^a}\,, \qquad \dot t = 1\,, \qquad (20)$$

with $K$ the new Hamiltonian.

Systems with explicit time dependence are used for the effective description of dissipative systems within the Hamiltonian formalism. The idea is to introduce a convenient time dependence into the Hamiltonian so that it reproduces the phenomenological equations of motion with energy dissipation. As an example, let us consider the approach by Caldirola [7] and Kanai [8] for a 1-dimensional dissipative system with a friction force linear in the velocity. This model considers the time-dependent Hamiltonian

$$H_{CK} = e^{-\gamma t}\,\frac{p_{CK}^2}{2m} + e^{\gamma t}\,V(q_{CK})\,, \qquad (21)$$

where $p_{CK}$ and $q_{CK}$ are the canonical coordinates in phase space, which are related to the physical positions and momenta by the non-canonical transformation

$$p_{CK} = e^{\gamma t}\,p\,, \qquad q_{CK} = q\,. \qquad (22)$$

It is easy to show that Hamilton's equations (15) for $H$ as in (21) give the correct equation of motion for the position including the friction force, i.e. the damped Newton equation

$$\ddot q + \gamma\,\dot q + \frac{1}{m}\frac{\partial V(q)}{\partial q} = 0\,. \qquad (23)$$

However, although this model reproduces the correct phenomenological equation of motion, it has the drawback that in order to describe dissipative systems one needs to take into account the non-canonical relationship (22) between canonical and physical quantities. As a consequence, at the quantum level this model has generated quite a dispute on whether it can describe a dissipative system without violating the Heisenberg uncertainty principle; we refer to e.g. the discussion in [29–34] and references therein.
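The following sketch (ours; the parameter values and the choice $V(q) = k q^2/2$ are illustrative assumptions) integrates Hamilton's equations (15) for the Caldirola-Kanai Hamiltonian (21) and compares the resulting position with a direct integration of the damped Newton equation (23):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, gamma = 1.0, 1.0, 0.2                 # illustrative values, V(q) = k q^2 / 2

def caldirola_kanai(t, y):
    """Hamilton's equations (15) for the CK Hamiltonian (21)."""
    q_ck, p_ck = y
    dq = np.exp(-gamma * t) * p_ck / m      #  dH_CK/dp_CK
    dp = -np.exp(gamma * t) * k * q_ck      # -dH_CK/dq_CK
    return [dq, dp]

def damped_newton(t, y):
    """Phenomenological equation (23): q'' + gamma q' + (k/m) q = 0."""
    q, v = y
    return [v, -gamma * v - k * q / m]

t_eval = np.linspace(0.0, 20.0, 400)
ck = solve_ivp(caldirola_kanai, (0, 20), [1.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
dn = solve_ivp(damped_newton, (0, 20), [1.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

# q_CK coincides with the physical position q (eq. 22), so the two curves should agree
print("max |q_CK - q|:", np.max(np.abs(ck.y[0] - dn.y[0])))
```

Note that the physical momentum must still be recovered through the non-canonical relation (22), $p = e^{-\gamma t} p_{CK}$, which is precisely the drawback discussed above.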
2.4. Hamilton-Jacobi formulation
The Hamilton-Jacobi formulation is a powerful tool which enables one to re-express Hamilton's equations in terms of a single partial differential equation whose solution, a function on the configuration space, has all the necessary information to obtain the trajectories of the mechanical system. Moreover, this formulation gives rise to a new and more geometric point of view that allows one to relate classical mechanics with wave phenomena and thus with quantum mechanics.

The Hamilton-Jacobi equation can be introduced as a special case of a time-dependent canonical transformation (19). Consider the case in which the new Hamiltonian $K$ vanishes and write the generating function $F_1$ in this particular case as $S$. Using (19), we can write the Hamilton-Jacobi equation

$$H\!\left(q^a, \frac{\partial S}{\partial q^a}, t\right) = -\frac{\partial S}{\partial t}\,. \qquad (24)$$

A complete solution $S(q^a, t)$, called Hamilton's principal function, determines completely the dynamics of the system [15]. Besides, since $K = 0$ for such a transformation, it is clear from (20) that Hamilton's equations in the new coordinates read

$$\dot Q^a = 0\,, \qquad \dot P_a = 0\,. \qquad (25)$$

Therefore, the new system of coordinates moves along the Hamiltonian flow. In fact, the functions $Q^a(q^i, p_i, t)$ and $P_a(q^i, p_i, t)$ are (generalized) Noether invariants associated with the Noether symmetries $\partial/\partial P_a$ and $\partial/\partial Q^a$ respectively [18]. Finally, the time derivative of Hamilton's principal function is given by

$$\dot S = \frac{\partial S}{\partial q^a}\,\dot q^a + \frac{\partial S}{\partial t} = p_a\,\dot q^a - H\,, \qquad (26)$$

where in the second identity we have used both (19) and (24). Since the right hand side of (26) is the Lagrangian of the system, one concludes that

$$S(q^a, t) = \int L(q^a, \dot q^a, t)\,\mathrm{d}t\,, \qquad (27)$$

i.e. that Hamilton's principal function is the action, up to an undetermined additive constant [15].
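For concreteness, here is a standard worked example of equation (24) (our own illustration, not part of the paper): the free particle, $H = p^2/(2m)$.

```latex
% Hamilton-Jacobi equation (24) for H = p^2/(2m):
\[
  \frac{1}{2m}\left(\frac{\partial S}{\partial q}\right)^{2} = -\frac{\partial S}{\partial t} .
\]
% Separating variables with S(q,t) = W(q) - E t gives W'(q) = \sqrt{2mE}, hence
\[
  S(q,t) = \sqrt{2mE}\, q - E t .
\]
% Then p = \partial S / \partial q = \sqrt{2mE} is constant, and the conjugate constant
% Q = \partial S / \partial E = q\sqrt{m/(2E)} - t is the Noether invariant of (25),
% so q(t) = \sqrt{2E/m}\,(t + Q) is the expected free motion.
```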
3. Contact mechanics of dissipative systems
So far we have only reviewed the standard Hamiltonian description of mechanical systems. In this section we introduce the formalism of contact Hamiltonian mechanics and show that it can be applied to describe both non-dissipative and dissipative systems. Some of the material in sections 3.1 and 3.3 has already been presented in [26, 27].
3.1. Time-independent contact Hamiltonian mechanics
A contact manifold $\mathcal{T}$ is a $(2n+1)$-dimensional manifold endowed with a 1-form $\eta$, called the contact form, that satisfies the condition [28]

$$\eta \wedge (\mathrm{d}\eta)^n \neq 0\,. \qquad (28)$$

The left hand side in (28) provides the standard volume form on $\mathcal{T}$, analogously to $\Omega^n$ for the symplectic case. Hereafter we assume that the phase space of time-independent mechanical systems (both dissipative and non-dissipative) is a contact manifold, called the contact phase space², and that the equations of motion are always given by the so-called contact Hamiltonian equations. We show that in this way one can construct a Hamiltonian formalism for any mechanical system.

[Footnote 2: The reader familiar with the geometric representation of quantum mechanics might notice the similarity between the concepts of contact phase space and quantum phase space. Both of them may be seen as a fiber bundle over the symplectic phase space [35–38].]

First let us define the dynamics in the phase space $\mathcal{T}$. Given the 1-form $\eta$, one can associate to every differentiable function $\mathcal{H} : \mathcal{T} \to \mathbb{R}$ a vector field $X_\mathcal{H}$, called the contact Hamiltonian vector field generated by $\mathcal{H}$, defined through the two (intrinsic) relations

$$\pounds_{X_\mathcal{H}} \eta = f_\mathcal{H}\,\eta \qquad \text{and} \qquad -\mathcal{H} = \eta(X_\mathcal{H})\,, \qquad (29)$$

where $f_\mathcal{H} \in C^\infty(\mathcal{T})$ is a function depending on $\mathcal{H}$ to be fixed below, cf. equations (33) and (35), and $\mathcal{H}$ is called the contact Hamiltonian [19, 28, 39]. The first condition in (29) means that $X_\mathcal{H}$ generates a contact transformation (see section 3.3 below), while the second condition guarantees that it is generated by a Hamiltonian function. Using Cartan's identity [14]

$$\pounds_{X_\mathcal{H}} \eta = \mathrm{d}\eta(X_\mathcal{H}) + \mathrm{d}[\eta(X_\mathcal{H})] \qquad (30)$$

and the second condition in (29), it follows that

$$\mathrm{d}\mathcal{H} = \mathrm{d}\eta(X_\mathcal{H}) - \pounds_{X_\mathcal{H}} \eta\,, \qquad (31)$$

from which it is clear that the definition of a contact Hamiltonian vector field generalizes that of a symplectic Hamiltonian vector field to the case where the defining 1-form is not preserved along the flow [cf. equations (31) and (1)]. An example of a contact manifold is the extended phase space $\Gamma^E$ that we have introduced in section 2.3 in order to account for time-dependent Hamiltonian systems. In fact, it is easy to prove that the Poincaré-Cartan 1-form $\eta_{PC}$ satisfies the condition (28) and therefore it defines a contact structure on $\Gamma^E$.

Associated with the definition of the contact 1-form on a contact manifold, there is another fundamental object called the Reeb vector field $\xi$, which is defined intrinsically by the conditions

$$\eta(\xi) = 1\,, \qquad \mathrm{d}\eta(\xi) = 0\,. \qquad (32)$$

It can be shown that such a vector field is unique and that it defines at every point a 'vertical' direction with respect to the horizontal distribution $D = \ker(\eta)$. Finally, using (31) and (32), it is easy to prove that the two conditions in (29) imply

$$f_\mathcal{H} = -\xi(\mathcal{H})\,. \qquad (33)$$

It is always possible to find a set of local (Darboux) coordinates $(q^a, p_a, S)$ for $\mathcal{T}$ [14], to which we refer as contact coordinates, such that the 1-form $\eta$ and the Reeb vector field $\xi$ can be written as

$$\eta = \mathrm{d}S - p_a\,\mathrm{d}q^a\,, \qquad \xi = \frac{\partial}{\partial S}\,. \qquad (34)$$

We remark that $\eta$ as in (34) is the standard (natural) contactification of a symplectic manifold whose symplectic structure is exact, as defined e.g. in [28], and that the second expression in (34) directly implies that in these coordinates

$$f_\mathcal{H} = -\xi(\mathcal{H}) = -\frac{\partial\mathcal{H}}{\partial S}\,. \qquad (35)$$

Besides, in these coordinates, the contact Hamiltonian vector field $X_\mathcal{H}$ takes the form
$$X_\mathcal{H} = \left(p_a\frac{\partial\mathcal{H}}{\partial p_a} - \mathcal{H}\right)\frac{\partial}{\partial S} - \left(p_a\frac{\partial\mathcal{H}}{\partial S} + \frac{\partial\mathcal{H}}{\partial q^a}\right)\frac{\partial}{\partial p_a} + \frac{\partial\mathcal{H}}{\partial p_a}\,\frac{\partial}{\partial q^a}\,. \qquad (36)$$

According to equation (36), the flow of $X_\mathcal{H}$ can be explicitly written in contact coordinates as

$$\dot q^a = \frac{\partial\mathcal{H}}{\partial p_a}\,, \qquad (37)$$
$$\dot p_a = -\frac{\partial\mathcal{H}}{\partial q^a} - p_a\frac{\partial\mathcal{H}}{\partial S}\,, \qquad (38)$$
$$\dot S = p_a\frac{\partial\mathcal{H}}{\partial p_a} - \mathcal{H}\,. \qquad (39)$$

The similarity of equations (37)-(39) with Hamilton's equations of symplectic mechanics (5) is manifest. In fact, these are the generalization of Hamilton's equations to a contact manifold. In particular, when $\mathcal{H}$ does not depend on $S$, equations (37) and (38) give exactly Hamilton's equations in the symplectic phase space and $\mathcal{H}$ is an integral of motion. Finally, the remaining equation (39) in this case is the usual definition of Hamilton's principal function, cf. equation (26). Therefore (37)-(39) generalize the equations of motion for the positions, the momenta and Hamilton's principal function of the standard Hamiltonian theory and can include a much larger class of models, such as the dynamics of basic dissipative systems (that we consider below) and that of systems in equilibrium with a heat bath, i.e. the so-called "thermostatted dynamics" [27, 40–42] (see also [22–24] for applications to non-equilibrium thermodynamics).
HS = p2
2m + V (q) + γ S (40) where V (q) is the mechanical potential and γ is a constant parameter, the equations of motion ( 37 )-( 39 ) read ˙q = p
m , (41) ˙p = −∂V (q)
∂q − γ p , (42) ˙S = p2
2m − V (q) − γ S . (43) From ( 41 ) and ( 42 ) it is easy to derive the damped Newtonian equation ( 23 ), which describes all systems with a friction force that depends linearly on the velocity. Notice that the derivation through the use of contact Hamiltonian dynamics guarantees that the canonical and physical momenta and positions coincide, contrary to what happens in the case of a description by means of explicit time dependence, as for instance in the Caldirola-Kanai model ( 21 ). Before concluding this section, let us remark an important difference between our approach and previous uses of contact geometry to describe non-conservative systems. As we showed in section 2.3 , the evolution of a non-conservative mechanical system whose Hamiltonian depends explicitly on time is usually given in the extended phase space Γ E, endowed with the Poincar´ e-Cartan 1-form ( 12 ), which provides the contact 1-form for Γ E [13 ]. Usual treatments of time-dependent mechanical systems give the dynamics as in ( 14 ). Therefore, according to ( 29 ) one finds that the corresponding contact Hamiltonian is
−H = ηPC (XE
H
) = pa
∂H
∂p a
− H = [ H] , (44) where [ H] stands for the total Legendre transform of H [14 ]. Moreover, from the condition ( 13 ), defining the Hamiltonian dynamics in Γ E, and from the definition of the Reeb vector field ( 32 ), one finds immediately that XE
H
is pro-portional to the Reeb vector field ξ in the extended phase space, the propor-tionality being given by −H . One concludes then that any time-dependent 14 mechanical system can be described in Γ E by the contact Hamiltonian vector field
XH = −H ξ =
(
pa
∂H
∂p a
− H
) ∂
∂S . (45) The flow of this vector field in contact coordinates ( Qa, P a, S ) is ˙Qa = 0 (46) ˙Pa = 0 (47) ˙S = [H] , (48) which coincides with the flow ( 25 )-( 26 ), i.e. the natural evolution in the adapted coordinates found after performing the proper (Hamilton-Jacobi) time-dependent canonical transformation [ 18 ]. In this work we decide not to take this description for time-dependent Hamiltonian systems. In fact, we always consider here time-independent symplectic systems as embedded into the contact phase space T and we use the mechanical Hamiltonian Hmec (qa, p a) as a contact Hamiltonian to write the equations of motion in the form ( 37 )-( 39 ). It is easy to see that since
Hmec does not depend on the additional variable S explicitly, the equations of motion thus derived are Hamilton’s equations ( 5) for the time-independent case. In order to consider the time-dependent case, we develop in section 3.4
a formalism for time-dependent contact Hamiltonian systems and then again we recover standard mechanical systems given by a mechanical Hamilto-nian of the type Hmec (qa, p a, t ) as a particular case of the more general time-dependent contact Hamiltonian evolution, thus obtaining again the correct equations ( 15 ) as a particular case. The two main advantages of our per-spective are that we can always identify the canonical variables ( qa, p a) with the physical ones and that – as we show below – we can classify mechanical systems as dissipative or non-dissipative in terms of the contraction of the phase space volume.
3.2. Time evolution of the contact Hamiltonian and mechanical energy
In this section we derive the evolution of the contact Hamiltonian and the mechanical energy for a system evolving according to the contact Hamil-tonian equations ( 37 )-( 39 ) and we show that there is a constant of motion that can help to simplify the solution of the dynamics in particular cases. 15 Given any function in the contact phase space F ∈ C∞(T ), its evolution according to equations ( 37 )-( 39 ) is given by dF
dt = XH [F ]= −H ∂F
∂S + pa
[ ∂F
∂S ∂H
∂p a
− ∂F
∂p a
∂H
∂S
]
∂F
∂q a
∂H
∂p a
− ∂F
∂p a
∂H
∂q a
= −H ∂F
∂S + pa {F , H }(S,p a) + {F , H }(qa ,p a) , (49) where { , }(qa,p a) is the standard Poisson bracket as in ( 7) and the remaining terms are contact corrections. We point out that the bracket { , }(S,p a) is just a shorthand notation and we do not provide any intrinsic definition for it. We say that a function F ∈ C∞(T ) is a first integral (or invariant ) of the contact dynamics given by XH if F is constant along the flow of XH , that is if XH [F ] = 0. From the above equations, it follows that the evolution of the contact Hamiltonian function along its flow is dH
dt = −H ∂H
∂S . (50) Therefore in general H is constant if and only if H = 0 or if H does not depend on S. The latter case corresponds to a non-dissipative mechan-ical system, for which H = Hmec (qa, p a) and thus the mechanical energy is conserved. Let us consider a more general case, in which
H = Hmec (qa, p a) + h(S) , (51) where Hmec (qa, p a) is the mechanical energy of the system and h(S) char-acterizes effectively the interaction with the environment. From ( 49 ), the evolution of the mechanical energy is dHmec
dt = −pa
∂H mec
∂p a
h′(S) , (52) from which it is clear that h is a potential that generates dissipative forces. For example, in the case of mechanical systems with linear friction repre-sented by the contact Hamiltonian ( 40 ), we see that the rate of dissipation of the mechanical energy is dHmec
dt = −m γ ˙q2 , (53) 16 which agrees with standard results based on Rayleigh’s dissipation func-tion [ 15 ]. Furthermore, the evolution of the contact Hamiltonian ( 51 ) can be formally obtained from ( 50 ) to be
H (t) = H0 exp
{
−
∫ t
0
h′(S) d τ
}
. (54) In the example of the mechanical system with linear friction, the contact Hamiltonian ( 40 ) depends linearly on S and therefore its evolution reads
HS (t) = HS, 0 e−γt , (55) where HS, 0 is the value of HS at t = 0. Equation ( 54 ) introduces the constant of motion H0, which eliminates one degree of freedom from the equations of motion ( 37 )-( 39 ). In fact, inserting the contact Hamiltonian ( 51 ) into ( 54 ) one obtains in general
Hmec (qa, p a) + h(S) = H0 exp
{
−
∫ t
0
h′(S) d τ
}
, (56) and in principle one can solve this equation for any contact coordinate. In particular, it is possible to solve ( 56 ) to obtain S as a function of qa, p a and
t and therefore the solution of the system ( 37 )-( 39 ) then amounts only to solve the 2 n equations for the momenta and the positions, as in the standard symplectic case. For example, with HS as in ( 40 ) one obtains
S(q, p, t ) = 1
γ
[
HS, 0 e−γt − p2
2m − V (q)
]
. (57)
3.3. Contact transformations and Liouville’s theorem
In the preceding sections we have introduced the contact phase space for time-independent mechanical systems, equipped with the local coordinates (qa, p a, S ), called contact coordinates. In these variables the equations of motion are expressed in terms of the contact Hamiltonian equations ( 37 )-( 39 )and the contact form is expressed as in ( 34 ). As in the symplectic case, we are now interested in introducing those transformations that leave the contact structure unchanged, which are known as contact transformations [14 , 39 ]. Here we consider only time-independent contact transformations and in the next subsection we introduce the time-dependent case. 17 A contact transformation is a transformation that leaves the contact form invariant up to multiplication by a conformal factor [ 16 , 17 ], that is ˜η = f η . (58) From ( 58 ), an arbitrary transformation of coordinates from ( qa, p a, S ) to ( ˜Qa, ˜Pa, ˜S) is a contact transformation if
f (d S − padqa) = d ˜S − ˜Pad ˜Qa , (59) which is equivalent to
f = ∂ ˜S
∂S − ˜Pa
∂ ˜Qa
∂S (60)
−f p i = ∂ ˜S
∂q i − ˜Pa
∂ ˜Qa
∂q i (61) 0 = ∂ ˜S
∂p i
− ˜Pa
∂ ˜Qa
∂p i
. (62) As in the standard symplectic theory, we can obtain the generating func-tion of a contact transformation. Assuming that the coordinates ( qa, ˜Qa, S )are independent, we compute the differential of the generating function ˜S(qa, ˜Qa, S ), namely d ˜S = ∂ ˜S
∂S dS + ∂ ˜S
∂q a dqa + ∂ ˜S
∂ ˜Qa d ˜Qa . (63) Substituting ( 63 ) into ( 59 ) we obtain the following conditions for ˜Sf = ∂ ˜S
∂S , f p a = − ∂ ˜S
∂q a , ˜Pa = ∂ ˜S
∂ ˜Qa . (64) In particular, for contact transformations with f = 1 the conditions in ( 64 )imply that the generating function has the form ˜S = S − F1(qa, ˜Qa) , (65) where F1(qa, ˜Qa) is the generating function of a symplectic canonical trans-formation, cf. equation ( 10 ). This result is remarkable, since it implies that all canonical transformations are a special case of contact transformations corresponding to f = 1. 18 While canonical transformations preserve the symplectic volume form Ω n,we show now that contact transformations induce a re-scaling of the contact volume form η ∧ (d η)n. Let us assume that we have a transformation that induces the change ˜ η = f η ; then d˜ η = d f ∧ η + f dη. It follows that ˜η ∧ (d˜ η)n = f n+1 η ∧ (d η)n , (66) i.e. the volume form is rescaled by a term f n+1 , with f given in general in ( 60 ). Note that canonical transformations are a special case with f = 1 and therefore they preserve the contact volume form. Finally, applying the contact Hamiltonian vector field XH to η, we see from ( 29 ) and ( 35 ) that
£XH η = fH η = −∂H
∂S η . (67) Comparing ( 67 ) with ( 58 ), we conclude that contact Hamiltonian vector fields are the infinitesimal generators of contact transformations [ 16 , 17 ]. Again, this is the analogue of the fact that symplectic Hamiltonian vector fields are the infinitesimal generators of canonical transformations. Moreover, equation ( 67 ) also implies that the volume element contracts (or expands) along the contact Hamiltonian flow according to [ 26 ]
£XH (η ∧ (d η)n) = −(n + 1) ∂H
∂S (η ∧ (d η)n) , (68) which means that the contact flow has a non-zero divergence div( XH ) = −(n + 1) ∂H
∂S (69) and therefore Liouville’s theorem ( 11 ) does not hold. However, an analogous statement of Liouville’s theorem for contact flows has been proved in [ 26 ]. In fact, although the volume element η∧(d η)n is not preserved along the contact Hamiltonian flow, nevertheless a unique invariant measure depending only on H can be found whenever H 6 = 0, given by dμ = |H |−(n+1) (η ∧ (d η)n) , (70) where the absolute value | · | has been introduced in order to ensure that the probability distribution is positive. As it provides an invariant measure for 19 the flow, this is the analogue of Liouville’s theorem for contact Hamiltonian flows. Since the presence of a non-zero divergence is usually interpreted as a sign of dissipation [ 43 , 44 ], here we classify systems as non-dissipative or dissi-pative depending on whether the divergence ( 69 ) of the associated dynamics vanishes or not.
3.4. Time-dependent contact Hamiltonian systems
In the preceding sections we have seen that contact Hamiltonian mechan-ics can account for the dynamics of mechanical systems with dissipation and we have proven some results that extend the symplectic formalism to the contact case. However, so far we have considered only time-independent sys-tems. Now we introduce contact Hamiltonian systems that explicitly depend on time. The results of this and the following sections are all new. To begin, let us extend the contact phase space by adding the time vari-able to it. Therefore we have an extended manifold T E = T × R with natural coordinates derived from contact coordinates as ( qa, p a, S, t ). Then we extend the contact 1-form ( 34 ) to the 1-form
ηE = d S − padqa + H dt , (71) where H is the contact Hamiltonian, that in this case is allowed to depend on t too. Notice that whenever H depends on S, d ηE is non-degenerate (and closed) and therefore ( T E, dηE) is a symplectification of ( T , η ). However, such symplectification is not the standard (natural) one defined e.g. in [ 28 ]. Our symplectification depends on the Hamiltonian of the system as it is clear from equation ( 71 ). This is the same as it happens with the contactification of the symplectic phase space given by the Poincar´ e-Cartan 1-form ( 12 ). Besides, the coordinates ( qa, p a, S, t ) are non-canonical coordinates for d ηE, as it is easy to check. Now we want to define the dynamics on T E. To do so, we set the two (intrinsic) simultaneous conditions
£XE
H
ηE = gH ηE and ηE (XE
H
) = 0 , (72) with gH ∈ C∞(T E) a function depending on H to be fixed below, cf. equa-tion ( 77 ). Notice that ( 72 ) is the natural extension of ( 29 ) to T E. We argue that these two conditions define a vector field XE
H
on T E which is completely 20 equivalent to the contact Hamiltonian flow ( 36 ). To prove this, let us first use Cartan’s identity ( 30 ) to re-write ( 72 ) as dηE(XE
H
) = gH ηE and ηE(XE
H
) = 0 . (73) Then we use the second condition in ( 73 ). In local coordinates we can write this condition as (d S − padqa + H dt)
(
XS ∂
∂S + Xqa ∂
∂q a + Xpa ∂
∂p a
Xt ∂
∂t
)
= 0 , (74) where the Xi are the general components of the vector field XE
H
in these coordinates. We are free to fix a normalization for XE
H
such that Xt = 1. Now condition ( 74 ) yields
XS = paXqa
− H . (75) Using ( 75 ) we can write the first condition in ( 73 ) as dηE
(
[pa Xqa
− H ] ∂
∂S + Xqa ∂
∂q a + Xpa ∂
∂p a
∂
∂t
)
= gH ηE , (76) and, after a direct calculation, one arrives at
gH = −∂H
∂S , Xqa
= ∂H
∂p a
, Xpa = −∂H
∂q a − pa
∂H
∂S . (77) Finally, considering all the above conditions, we can write the resulting vector field XE
H
satisfying both conditions in ( 73 ) in its general form as
XE
H
= XH + ∂
∂t , (78) with XH given by ( 36 ). From this it is immediate to recognize that the equations of motion given by such field on T E are the same as those of the contact Hamiltonian vector field ( 36 ), with the addition of the trivial equation ˙t = 1. We call a system defined by a contact Hamiltonian H (qa, p a, S, t ) and by the vector field XE
H
of the form ( 78 ) a time-dependent contact Hamiltonian system 3. From ( 78 ) and ( 49 ) it follows that the evolution of any function F ∈
3We emphasize that, although ( TE,dηE) is a symplectic manifold, the flow XE
Hhas a non-vanishing divergence and therefore it is not a standard symplectic Hamiltonian dynam-ics, nor it can be introduced in terms of the usual Dirac formalism for time-independent constrained systems.
21 C∞(T E) under the dynamics given by a time-dependent contact Hamiltonian system reads dF
dt = −H ∂F
∂S + pa {F , H }(S,p a) + {F , H }(qa,p a) + ∂F
∂t . (79) Now that we have found a formal prescription to write the equations of motion for time-dependent contact Hamiltonian systems, let us discuss time-dependent contact transformations and their generating functions. Time-dependent contact transformations are transformation of coordinates (qa, p a, S, t ) → ( ˜Qa, ˜Pa, ˜S, t ) , (80) that leave the equations of motion, i.e. the vector field XE
H
, invariant. By def-inition, this amounts at finding a transformation that leaves both conditions in ( 73 ) unchanged. To find such a transformation, we start with the second condition and write the invariance as the fact that the transformed extended 1-form must have the same form as the original one up to multiplication by a non-zero function f , that is
f (d S − padqa + H dt) = d ˜S − ˜Pad ˜Qa + K dt , (81) where K is a function on T E which is going to be the new contact Hamil-tonian in the transformed coordinates. This condition provides a way to check whether a transformation of the type ( 80 ) is a time-dependent con-tact transformation. Indeed, inserting the differentials of ˜Qa and ˜S into ( 81 )one obtains the standard conditions ( 60 )-( 62 ) for a time-independent contact transformation, together with the following rule for the transformation of the Hamiltonians
f H = ∂ ˜S
∂t − ˜Pa
∂ ˜Qa
∂t + K . (82) As in the time-independent case, in order to find the conditions on the generating function ˜S(qa, ˜Qa, S, t ) we assume that the coordinates ( qa, ˜Qa, S, t )are independent. Thus, from ( 81 ) one finds that ˜S must satisfy ( 64 ) and the additional constraint
f H = ∂ ˜S
∂t + K , (83) which defines the new contact Hamiltonian for the new coordinates. In the special case f = 1 the generating function reduces to ˜S = S − F1(qa, ˜Qa, t ), 22 where F1(qa, ˜Qa, t ) is the generating function of the time-dependent canonical transformation, cf. ( 19 ). Now let us consider also the condition on ˜S imposed by invariance of the first equation in ( 73 ). Rewriting such equation after the transformation we get d˜ ηE( ˜XE
H
) = ˜ gK ˜ηE . (84) with ˜gK = −∂K
∂ ˜S (85) in the new coordinates, cf. the first condition in ( 77 ). Using that ˜ ηE = f η E
and that ˜XE
H
= XE
H
and the two equations in ( 73 ), one arrives directly at the following relation
f ˜gK = f g H − df (XE
H
) . (86) Notice that for f = 1, which corresponds to canonical transformations, ( 86 )reads ˜ gK = gH , from which we infer that if H does not depend on S,then K = 0 is a possible solution of ( 86 ) and in such case ( 83 ) reduces to the standard Hamilton-Jacobi transformation ( 24 ). However, in the general case f is a function of the extended phase space and thus time-dependent contact transformations extend canonical transformations, as we show with the following example. To illustrate the formalism developed so far, we consider an example of an important time-dependent contact transformation, i.e. we prove that the Caldirola-Kanai Hamiltonian ( 21 ) and the contact Hamiltonian ( 40 ) – which both give the same damped Newtonian equation – are related by a time-dependent contact transformation with f = eγt . To do so, let us consider the Caldirola-Kanai Hamiltonian HCK as a function on the extended contact phase space T E written in the coordinates ( qCK , p CK , S CK , t ) and the contact Hamiltonian HS as a function on T E written in the coordinates ( q, p, S, t ). Defining the change of coordinates 30 , 32 –34 → (qCK = q, p CK = eγt p, S CK = eγt S, t ) , (87) it is easy to check that the conditions ( 60 )-( 62 ) and ( 82 ) are satisfied and therefore ( 87 ) is a time-dependent contact transformation. 23 3.5. Hamilton-Jacobi formulation
In this section we introduce a Hamilton-Jacobi formulation of contact Hamiltonian systems. This formulation has a major importance, because it establishes a connection with the configuration space, where the phenomeno-logical equations are defined. The Hamilton-Jacobi equation is a re-formulation of the dynamical equa-tions in terms of a single partial differential equation (PDE) for the function
S(qa, t ). Thus, we are looking for a PDE of the form
F
(
qa, ∂S
∂q a , S, t, ∂S
∂t
)
= 0 , (88) whose characteristic curves are equivalent to the contact Hamiltonian dy-namics ( 37 )-( 39 ). To construct such PDE, let us define the function
F (qa, p a, S, t, E ) ≡ E − H (qa, p a, S, t ) . (89) It turns out that the solution of the equation F = 0 on the configuration space defined by
ηE = d S − padqa + H dt = 0 , (90) that is by the two conditions
pa = ∂S
∂q a and H
(
qa, ∂S
∂q a , S, t
)
= −∂S
∂t (91) gives exactly the contact Hamiltonian equations ( 37 )-( 39 ), together with ˙ t =1 and ˙H = −H ∂H /∂S + ∂H /∂t , which is the evolution of the time-dependent contact Hamiltonian ( 79 ). Therefore we call the second equation in ( 91 ) the contact Hamilton-Jacobi equation .In the symplectic case the Hamilton-Jacobi equation is also the time-dependent canonical transformation induced by the Hamiltonian dynamics. To find an equivalent formulation for the contact case, one must find a gener-ating function ˜S(qa, Q a, S, t ) such that ( 83 ) reduces to ( 91 ), where from ( 67 )the function f is
f = exp
(
−
∫ t
0
∂H
∂S dτ
)
. (92) However, contrary to the symplectic case, in general such transformation does not lead to a vanishing K , cf. equation ( 86 ). 24 In the Appendix B we give an alternative proof of the equivalence be-tween the contact Hamilton-Jacobi equation and ( 37 )-( 39 ). Such proof is useful because it yields explicitly the algebraic conditions needed to recover the solution qi(t) from knowledge of the complete solution of ( 91 ), cf. equa-tion ( B.10 ). An example of such procedure is worked out in detail in section
3.6.3 .
3.6. Example: the damped parametric oscillator
In this section we provide an important example, which enables us to show the usefulness of our formalism. The example considered here is the one-dimensional damped parametric oscillator with mass m and time-dependent frequency ω(t), whose contact Hamiltonian is
H = p2
2m + 1
2mω 2(t)q2 + γ S . (93) Clearly the damped harmonic oscillator is obtained for ω(t) = ω0 and the damped free particle is recovered when ω(t) = 0. The dynamics of the system is given by the contact Hamiltonian equations ( 41 )-( 43 ), with the time-dependent potential V = 1
2
mω 2(t)q2. Our aim is to use the tools of contact geometry to solve the dynamics. We show three different ways to solve this system, the first of them using contact transformations, the second one using the integrals of motion and the last one by means of the contact Hamilton-Jacobi equation.
3.6.1. First route to the solution: contact transformations
In this section we show how to use time-dependent contact transforma-tions to reduce the system to a known form and thus find a solution. Let us start by introducing the contact transformation (qE , p E, S E, t ) =
(
q e γt
2
,
[
p + mγ
2 q
]
eγt
2
,
[
S + mγ
4 q2
]
eγt , t
)
. (94) The new coordinates qE, pE and SE are known in the literature as the ex-panding coordinates [32 –34 ]. The new Hamiltonian in these coordinates is obtained from ( 82 ) to be
HE = eγt H − ∂S E
∂t = p2
E
2m + m
2
(
ω2(t) − γ2
4
)
q2
E
. (95) 25 The Hamiltonian HE is known as the expanding Hamiltonian and represents a parametric oscillator with shifted frequency ω2(t) − γ2
4
. This model has been extensively studied since the sixties and there are many methods to obtain the solutions of the equations of motion, see for example [ 45 –47 ]. It is interesting to note that in the case ω(t) = ω0 the Hamiltonian HE itself is an invariant of motion.
3.6.2. Second route to the solution: the invariants
As in the standard symplectic theory, an important tool to solve the contact Hamiltonian equations are the invariants (or first integrals) of the system, which are functions of the (extended) contact phase space that do not vary along the flow, cf. equation ( 79 ). In Appendix A we prove that the damped parametric oscillator possesses the quadratic invariant
I (q, p, t ) = m e γ t
2
[(
α(t) p
m −
[
˙α(t) − γ
2 α(t)
]
q
)2
+
( q
α(t)
)2]
, (96) where the purely time-dependent function α(t) satisfies the Ermakov equation
¨α +
(
ω2(t) − γ2
4
)
α = 1
α3 , (97) and the S-dependent invariant
G (q, p, S, t ) = eγt
[
S − q p
2
]
. (98) The invariant I (q, p, t ) is a generalization of the canonical invariant found by H. R. Lewis Jr. for the parametric oscillator [ 45 ], which is recovered when
γ → 0. Besides, the invariant G is completely new. To solve the equations of motion of the system ( 93 ) in the general case, we use the invariants I and G to define the time-dependent contact trans-formation ˜Q = arctan
(
α
[
˙α − γ
2 α
]
− α2 p
m q
)
(99) ˜P = I (q, p, t ) (100) ˜S = G (q, p, S, t ) (101)
t = t . (102) 26 The conformal factor in equation ( 81 ) for this transformation is f = eγt and the new contact Hamiltonian, from equation ( 82 ), takes the simple form
K = I
α2 . (103) Thus, as K does not involve the variables ˜Q and ˜S, the new contact Hamil-tonian equations have the trivial form ˙˜Qa = 1
α2 , ˙˜Pa = 0 , ˙˜S = 0 , (104) with solutions ˜Q(t) =
∫ t dτ
α2(τ ) , ˜P (t) = I and ˜S(t) = G . (105) Now, inverting the transformation ( 99 )-( 101 ) and using ( 105 ), one obtains the solutions in the original (physical) coordinates, namely
q(t) =
√2I
m eγt α(t) cos φ(t) , (106)
p(t) = √2 mI eγt
[(
˙α − γ
2 α
)
cos φ(t) − 1
α sin φ(t)
]
, (107)
S(t) = e−γt G + q(t)p(t)
2 , (108) where φ(t) = ˜Q(t) and the values of the constants I and G are determined by the initial conditions. Therefore, we have derived here the solutions of the equations of motion of the damped parametric oscillator using the invariants of the contact Hamiltonian system and a proper contact transformation. From ( 106 )-( 108 ) we see that all the dynamics of the system is encoded in the Ermakov equation ( 97 ).
3.6.3. Third route to the solution: the contact Hamilton-Jacobi equation
We show here another way to find the evolution of the system ( 93 ), that is, by solving the corresponding contact Hamilton-Jacobi equation ( 91 ), which in this case reads 1
2m
(∂S
∂q
)2
1
2 mω 2(t)q2 + γS = −∂S
∂t . (109) 27 Due to the form of the left hand side of the above equation, one can propose that S(q, t ) is a polynomial with respect to q. Thus we choose the ansatz
S(q, t ) = m C (t)
[q2
2 − λ(t)q
]
mq ˙λ + s(t) , (110) where C(t), λ(t) and s(t) are purely time-dependent functions. It follows directly that
p(q, t ) = ∂S
∂q = m C (t) [ q − λ(t)] + m ˙λ(t) . (111) Besides, inserting S(q, t ) into the contact Hamilton-Jacobi equation ( 109 ) and comparing the coefficients of the same order in q, we can find the conditions on C(t), λ(t) and s(t). After a direct calculation one obtains that C(t) obeys the Riccati equation ˙C + C2 + γ C + ω2(t) = 0 , (112)
λ(t) satisfies the damped Newtonian equation ¨λ + γ ˙λ + ω2(t) λ = 0 (113) and ˙s = −m
2
[
C2λ2 − 2Cλ ˙λ + ˙λ2
]
− γs . (114) Now one can use the Riccati equation ( 112 ) and the Newton equation ( 113 )to integrate ( 114 ) and obtain
s(t) = m
2
[
C(t)λ2(t) − λ(t) ˙λ(t)
]
. (115) Substituting into ( 110 ), one finds that the solution of the contact Hamilton-Jacobi equation ( 109 ) is
S(q, t ) = m
2 C(t) [ q − λ(t)] 2 + m ˙λ(t) [ q − λ(t)] + m
2 λ(t) ˙λ(t) . (116) Let us mention that solutions C(t) of the Riccati equation are connected to solutions λ(t) of the damped Newton equation by means of the transforma-tion C(t) = ˙λ(t)/λ (t). Therefore, in order to determine S(q, t ) it is sufficient to solve only one of these equations. Now given the solution ( 116 ) of the contact Hamilton-Jacobi equation (91 ), depending on the non-additive constant C0 = C(0), the trajectory of 28 the particle q(t) can be obtained using ( B.3 ) and ( B.10 ) as follows. For this system ( B.10 ) implies b(t) = b0 e−γt and therefore ( B.3 ) reads
∂S
∂C 0
= m
2
∂C
∂C 0
q2 = b0 e−γt , (117) which can be inverted to find the solution trajectory
q(t) =
√
2 b0 e−γt
m
( ∂C
∂C 0
)−1
. (118) For instance, in the case of a damped free particle, ω(t) = 0, the solution of the Riccati equation is
C(t) = e−γt
1
C0
1
γ
(1 − e−γt ) , (119) and from ( 118 ) we can recover the correct trajectories
q(t) =
√ 2b0
m
[
1 + C0
γ
(1 − e−γt )]
, (120) where the constants b0 and C0 are related to the initial conditions via
b0 = m
2 q20 and C0 = ˙ q0 . (121) Interestingly, with the method presented in this section the evolution of the particle is ultimately determined by the solution of the Riccati equa-tion ( 112 ), while with the method given in section 3.6.1 one has to directly solve the Newton equation arising from HE and with the method introduced in section 3.6.2 the solution is given in terms of the solution of the Ermakov equation ( 97 ). This shows that the three methods presented here within the framework of contact geometry are related to the three standard techniques for the solution of this system.
Conclusions and perspectives
In this work we have proposed a new geometric perspective for the Hamil-tonian description of mechanical systems. The defining features of our for-mulation are that the phase space of any (dissipative or non-dissipative) 29 mechanical system is assumed to be a contact manifold and that the evolu-tion equations are given as contact Hamiltonian equations, see ( 37 )-( 39 ). We have shown that contact Hamiltonian dynamics on the one hand recovers all the results of standard symplectic dynamics when the contact Hamiltonian
H does not depend explicitly on S and on the other hand can account for the evolution of systems with different types of dissipation in the more general case in which H depends on S.We have considered both time-independent and time-dependent contact systems and we have found in both cases the transformations (called con-tact transformations) that leave the contact Hamiltonian equations invari-ant, showing that canonical transformations of symplectic dynamics are a special case. To show the usefulness of contact transformations, we have provided an explicit example (the Caldirola-Kanai model for systems with linear dissipation) in which a non-canonical but contact transformation ( 87 )allows to move from the usual time-dependent canonical description in terms of non-physical variables to a contact description in terms of the physical variables. By computing the divergence of the contact Hamiltonian flow ( 69 ), we have provided a formal definition of dissipation in our formalism in terms of the contraction of the phase space, which is usually associated with irre-versible entropy production [ 43 , 44 ]. In addition, we have derived a contact Hamilton-Jacobi equation ( 91 )whose complete solution is equivalent to solve the Hamiltonian dynamics, as it happens in standard symplectic mechanics. Finally, we have worked out in detail a specific important example (the damped parametric oscillator) for which we have solved the dynamics in three different ways: using contact transformations, using the invariants of the sys-tem and resorting to the solution of the associated contact Hamilton-Jacobi equation. This example thus provides a direct evidence of the usefulness of our formalism. Given the importance of the symplectic perspective in the classical me-chanics of conservative systems, we consider that the contact perspective could play a similar role in the mechanics of dissipative systems. For in-stance, a relevant question is that of a quantization of our formalism. Here we sketch briefly such possibility. Using the fact that the additional contact variable is a generalization of Hamilton’s principal function which satisfies the contact version of the Hamilton-Jacobi equation, and that the canonical momenta and positions in our formalism coincide with the physical ones, we 30 suggest a canonical quantization of the contact Hamiltonian based on the standard rules of canonical quantization, namely
pa → ℏ
i∂
∂q a , qa → ˆqa, S(qa, t ) → ℏ
i ln Ψ( qa, t ) . (122) Using such rules to quantize the contact Hamiltonian H and obtain the operator ˆH , one can define the “contact Schr¨ odinger equation”
iℏ ∂Ψ
∂t = ˆH Ψ . (123) This equation has a fundamental property: in the case in which the con-tact Hamiltonian reduces to a symplectic Hamiltonian (i.e. when H does not depend on S explicitly) and the dynamics reduces to a standard con-servative dynamics, equation ( 123 ) obviously reduces to the standard linear Schr¨ odinger equation and all the known results for the quantization of con-servative systems are recovered. However, this equation has the disadvantage that in general it does not conserve the norm of the wave function [ 32 ]. For systems with contact Hamiltonian of the form H = Hmec + h(S), see ( 51 ), normalization is achieved following the procedure of Gisin [ 48 ], which con-sists in subtracting the mean value of h, that is 〈ˆh〉 = ∫ Ψ∗ ˆh Ψ d qa. This leads to the nonlinear Schr¨ odinger equation
iℏ ∂Ψ
∂t = ˆHmec Ψ + ( ˆh − 〈 ˆh〉)Ψ . (124) Applying ( 124 ) to the contact Hamiltonian HS with linear dependence on S
given in equation ( 40 ) and using ( 123 ), we get the evolution equation
iℏ ∂Ψ
∂t =
(
− ℏ2
2m ∇2 + V (qa) + γ ℏ
i [ln Ψ − 〈 lnΨ 〉]
)
Ψ , (125) which is exactly the phenomenological nonlinear Schr¨ odinger equation intro-duced in [ 49 –51 ] for the description of dissipative systems, see also [ 30 , 32 –34 ]. This fact, together with the result that the contact dynamics generated by HS coincides with the classical Newtonian equations for systems with linear dissipation ( 23 ), provides a further theoretical justification for the introduction of the nonlinear phenomenological Schr¨ odinger equation ( 125 )and, most importantly, displays an intriguing consistency between the clas-sical and quantum descriptions in our proposal. A more detailed study of 31 the extension of contact Hamiltonian mechanics to quantum systems will be presented in a future work. For instance, it will be worth trying a more geometric quantization program, e.g. following the lines of [ 52 ]. Finally, we have not considered here the Lagrangian formulation. This as-pect is fundamental in order to have a complete picture of contact mechanics and it is of primary interest for extension to field theory.
Acknowledgements
The authors would like to thank the unknown referee for interesting com-ments and suggestions. AB is funded by a DGAPA postdoctoral fellowship. DT acknowledges financial support from CONACyT, CVU No. 442828.
Appendix A. Invariants for the damped parametric oscillator
In this appendix we prove that I (q, p, t ) and G (q, p, S, t ) given in equa-tions ( 96 ) and ( 98 ) are two invariants of the damped parametric oscillator defined by the contact Hamiltonian ( 93 ). An invariant is a function F of the (extended) contact phase space that satisfies the partial differential equation
−H ∂F
∂S + pa {F , H }(S,p a) + {F , H }(qa,p a) = −∂F
∂t , (A.1) where we use the same notation as in ( 49 ). To find a solution, we propose the ansatz
F (q, p, S, t ) = β(t)p2 − 2ξ(t)qp + η(t)q2 + ζ(t)S . (A.2) Inserting ( A.2 ) into ( A.1 ), we get the system of ordinary differential equations ˙β = 2
m ξ + 2 γβ − 1
2mζ , (A.3) ˙η = −2mω 2ξ + 1
2 mω 2ζ , (A.4) ˙ξ = 1
m η + γξ − mω 2β , (A.5) ˙ζ = γ ζ . (A.6) Then clearly
ζ(t) = ζ0eγt , (A.7) 32 and we are left with the problem of solving the system ( A.3 )-( A.5 ). To do so, we consider the change of variables ˜β(t) = e−γt β(t), ˜ η(t) = e−γt η(t) and ˜ξ(t) = e−γt ξ(t), which yields the equivalent system ˙˜β = 2
m ˜ξ + γ ˜β − ζ0
2m , (A.8) ˙˜η = −2mω ˜ξ − γ ˜η + ζ0
2 mω 2 , (A.9) ˙˜ξ = 1
m ˜η − mω 2 ˜β . (A.10) To solve this system, we re-write it as a third-order ordinary differential equation for ˜β(t) ... ˜β + 4 Ω 2 ˙˜β + 4Ω ˙Ω ˜β = 0 , (A.11) where for simplicity we define Ω 2 = ω2 − γ2
4
. The above equation is known as the normal form of a third order equation of maximal symmetry [ 53 ]. Now, using the further change of variable ˜β(t) = 1
2mα2(t) (A.12) in ( A.11 ), one obtains that ˜β(t) is a solution of ( A.11 ) if and only if α(t)is a solution of the Ermakov equation ( 97 ). Moreover, from ( A.12 ) one can re-write the remaining two equations as ˜η(t) = m
2
([
˙α(t) − γ
2 α(t)
]2
1
α2(t)
)
, (A.13) ˜ξ(t) = α(t)
2
(
˙α(t) − γ
2 α(t)
)
1
4 . (A.14) Finally, using ( A.7 ),( A.12 )-( A.14 ) and β(t) = eγt ˜β(t), η(t) = eγt ˜η(t),
ξ(t) = eγt ˜ξ(t) into the ansatz ( A.2 ), we find that
F (q, p, S, t ) = I (q, p, t ) + ζ0G (q, p, S, t ) , (A.15) with
I (q, p, t ) = m e γ t
2
[(
α(t) p
m −
[
˙α(t) − γ
2 α(t)
]
q
)2
+
( q
α(t)
)2]
, (A.16) 33 and
G (q, p, S, t ) = eγt
[
S − q(t)p(t)
2
]
. (A.17) Since F (q, p, S, t ) is an invariant for any choice of the initial conditions and since ζ0 only depends on the initial conditions, it follows that I (q, p, t ) and
G (q, p, S, t ) separately are invariants of the system.
Appendix B. Equivalence between the contact Hamilton-Jacobi equation and the contact Hamiltonian equations
In this appendix we prove that finding the complete solution of the con-tact Hamilton-Jacobi equation ( 91 ) is equivalent to solving the equations of motion ( 37 )-( 39 ). This proof is a generalization of the standard proof for the symplectic case [ 15 ]. To begin, let S(q1, . . . , q n, c 1, . . . , c n, t ) be the complete solution of ( 91 ), where ci are n constants and suppose
∣∣∣∣
∂2S
∂q i∂c j
∣∣∣∣ 6 = 0 . (B.1) Using the quantities pi(q1, . . . , q n, c 1, . . . , c n, t ) = ∂S
∂q i
, we can rewrite ( 91 ) as
H (q1, . . . , q n, p 1, . . . , p n, S, t ) = −∂S
∂t . (B.2) Besides, defining
bi = ∂S
∂c i , (B.3) we obtain ˙bi = ∂2S
∂q j ∂c i ˙qj + ∂2S
∂t ∂c i (B.4) and deriving ( B.2 ) with respect to ci we have
∂2S
∂c i∂t = −∂H
∂S bi − ∂H
∂p j
∂2S
∂c i∂q j . (B.5) Combining ( B.4 ) and ( B.5 ) we get ˙bi = ∂2S
∂q j ∂c i
(
˙qj − ∂H
∂p j
)
− ∂H
∂S bi . (B.6) 34 Now, from the definition of pi, it follows that ˙pi = ∂2S
∂q j ∂q i ˙qj + ∂2S
∂t ∂q i , (B.7) and deriving ( B.2 ) with respect to qi one obtains
∂2S
∂q i∂t = −∂H
∂q i − ∂H
∂S pi − ∂H
∂p j
∂2S
∂q i∂q j . (B.8) From ( B.7 ) and ( B.8 ) we get ˙pi = ∂2S
∂q j ∂q i
(
˙qj − ∂H
∂p j
)
− ∂H
∂q i − pi
∂H
∂S . (B.9) It is thus easy to see that imposing ˙bi = −∂H
∂S bi , (B.10) equations ( B.6 ) and ( B.9 ) reduce to ˙qi = ∂H
∂p i
, (B.11) ˙pi = −∂H
∂q i − pi
∂H
∂S , (B.12) which coincide with ( 37 ) and ( 38 ). Finally, using the fact that ci are constants of motion, the equation for the evolution of S reads ˙S = ∂S
∂q i ˙qi + ∂S
∂t = pi
∂H
∂p i
− H , (B.13) where in the last equality we have used that pi = ∂S/∂q i, ˙ qi = ∂H /∂p i and that H = −∂S/∂t . Equations ( B.11 )-( B.13 ) are exactly equivalent to ( 37 )-(39 ). Therefore we have proved that the contact Hamilton-Jacobi equation (91 ) is equivalent to the contact Hamiltonian dynamics, provided the condi-tion ( B.10 ) holds. Therefore such condition has a primary importance. In fact, this yields the algebraic conditions to be solved for qi in order to recover the solution qi(t) from knowledge of S(qi, c i, t ). For an explicit example see section 3.6.3 .35 References
M. Razavy, Classical and quantum dissipative systems . World Scientific, 2005. U. Weiss, Quantum dissipative systems , vol. 10. World Scientific, 1999. S. Chandrasekhar, “Stochastic problems in physics and astronomy,” Re-views of modern physics , vol. 15, no. 1, p. 1, 1943. N. G. Van Kampen, Stochastic processes in physics and chemistry , vol. 1. Elsevier, 1992. A. O. Caldeira and A. J. Leggett, “Influence of dissipation on quantum tunneling in macroscopic systems,” Physical Review Letters , vol. 46, no. 4, p. 211, 1981. A. Caldeira and A. J. Leggett, “Quantum tunnelling in a dissipative system,” Annals of Physics , vol. 149, no. 2, pp. 374–456, 1983. P. Caldirola, “Forze non conservative nella meccanica quantistica,” Il Nuovo Cimento (1924-1942) , vol. 18, no. 9, pp. 393–400, 1941. E. Kanai, “On the quantization of the dissipative systems,” Progress of Theoretical Physics , vol. 3, no. 4, pp. 440–442, 1948. M. Lakshmanan and V. Chandrasekar, “Generating finite dimensional integrable nonlinear dynamical systems,” The European Physical Jour-nal Special Topics , vol. 222, no. 3-4, pp. 665–688, 2013. C. R. Galley, “Classical mechanics of nonconservative systems,” Phys. Rev. Lett. , vol. 110, p. 174301, Apr 2013. C. R. Galley, D. Tsang, and L. C. Stein, “The principle of stationary nonconservative action for classical mechanics and field theories,” arXiv preprint arXiv:1412.3082 , 2014. P. Morrison, “Thoughts on brackets and dissipation: old and new,” in
Journal of Physics: Conference Series , vol. 169, p. 012006, IOP Pub-lishing, 2009. 36 R. Abraham, J. E. Marsden, and J. E. Marsden, Foundations of mechan-ics . Benjamin/Cummings Publishing Company Reading, Massachusetts, 1978. V. I. Arnold, Mathematical methods of classical mechanics , vol. 60. Springer Science & Business Media, 1989. H. Goldstein, C. Poole, and J. Safko, “Classical mechanics,” Classical mechanics (3rd ed.) by H. Goldstein, C. Poolo, and J. Safko. San Fran-cisco: Addison-Wesley, 2002. , vol. 1, 2002. S. Rajeev, “Quantization of contact manifolds and thermodynamics,”
Annals of Physics , vol. 323, no. 3, pp. 768 – 782, 2008. S. Rajeev, “A Hamilton–Jacobi formalism for thermodynamics,” Annals of Physics , vol. 323, no. 9, pp. 2265–2285, 2008. V. Aldaya, J. Guerrero, F. F. L´ opez-Ruiz, and F. Coss´ ıo, “Contact sym-metries in non-linear mechanics: a preliminary step to (non-canonical) quantization,” arXiv preprint arXiv:1406.6828 , 2014. R. Mrugala, “On a special family of thermodynamic processes and their invariants,” Reports on Mathematical Physics , vol. 46, no. 3, 2000. A. Favache, D. Dochain, and B. Maschke, “An entropy-based formu-lation of irreversible processes based on contact structures,” Chemical Engineering Science , vol. 65, no. 18, pp. 5204–5216, 2010. M. Dolfin and M. Francaviglia, “A geometric perspective on irreversible thermodynamics. part I: general concepts,” Communications in Applied and Industrial Mathematics , vol. 1, no. 2, pp. 135–152, 2011. A. Bravetti, C. Lopez-Monsalvo, and F. Nettel, “Contact symmetries and Hamiltonian thermodynamics,” Annals of Physics , vol. 361, pp. 377 – 400, 2015. S.-I. Goto, “Legendre submanifolds in contact manifolds as attractors and geometric nonequilibrium thermodynamics,” Journal of Mathemat-ical Physics , vol. 56, no. 7, 2015. 37 S.-I. Goto, “Contact geometric descriptions of vector fields on dually flat spaces and their applications in electric circuit models and nonequilib-rium statistical mechanics,” arXiv preprint arXiv:1512.00950 , 2015. M. Grmela, “Contact geometry of mesoscopic thermodynamics and dy-namics,” Entropy , vol. 16, no. 3, pp. 1652–1686, 2014. A. Bravetti and D. Tapias, “Liouville’s theorem and the canonical mea-sure for nonconservative systems from contact geometry,” Journal of Physics A: Mathematical and Theoretical , vol. 48, no. 24, p. 245001, 2015. A. Bravetti and D. Tapias, “Thermostat algorithm for generating target ensembles,” Phys. Rev. E , vol. 93, p. 022139, Feb 2016. V. I. Arnold and S. P. Novikov, Dynamical Systems IV: Symplectic Ge-ometry and Its Applications . Dinamicheskie sistemy, Springer, 2001. D. M. Greenberger, “A critique of the major approaches to damping in quantum theory,” Journal of Mathematical Physics , vol. 20, no. 5, pp. 762–770, 1979. D. Schuch, “Nonunitary connection between explicitly time-dependent and nonlinear approaches for the description of dissipative quantum sys-tems,” Physical Review A , vol. 55, no. 2, p. 935, 1997. C.-I. Um, K.-H. Yeon, and T. F. George, “The quantum damped har-monic oscillator,” Physics Reports , vol. 362, no. 2, pp. 63–192, 2002. D. Schuch, “Dynamical invariants in systems with and without bro-ken time-reversal symmetry,” in LATIN-AMERICAN SCHOOL OF PHYSICSXL ELAF: Symmetries in Physics , vol. 1334, pp. 291–340, AIP Publishing, 2011. H. Cruz, D. Schuch, O. Casta˜ nos, and O. Rosas-Ortiz, “Time-evolution of quantum systems via a complex nonlinear riccati equation. i. conser-vative systems with time-independent hamiltonian,” Annals of Physics ,vol. 360, pp. 44–60, 2015. H. Cruz, D. Schuch, O. Castanos, and O. Rosas-Ortiz, “Time-evolution of quantum systems via a complex nonlinear riccati equation II. dissi-pative systems,” arXiv preprint arXiv:1602.02314 , 2016. 38 D. C. Brody and L. P. Hughston, “Geometric quantum mechanics,”
Journal of geometry and physics , vol. 38, no. 1, pp. 19–53, 2001. J. M. Isidro, “Duality and the geometry of quantum mechanics,” Journal of Physics A: Mathematical and General , vol. 35, no. 14, p. 3305, 2002. L. C. Venuti and P. Zanardi, “Quantum critical scaling of the geometric tensors,” Physical review letters , vol. 99, no. 9, p. 095701, 2007. H. Heydari, “Geometry and structure of quantum phase space,” Foun-dations of Physics , vol. 45, no. 7, pp. 851–857, 2015. C. P. Boyer, “Completely integrable contact Hamiltonian systems and toric contact structures on S2 × S3,” SIGMA, Symmetry Integrability Geom. Methods Appl. , vol. 7, pp. 058, 22, 2011. M. E. Tuckerman, Statistical Mechanics: Theory and Molecular Simu-lation . Oxford graduate texts, Oxford: Oxford University Press, 2010. D. Evans and G. Morriss, Statistical Mechanics of Nonequilirium Liq-uids . Cambridge University Press, 2008. D. Tapias, D. P. Sanders, and A. Bravetti, “Geometric integrator for sim-ulations in the canonical ensemble,” arXiv preprint arXiv:1605.01654 ,2016. D. Daems and G. Nicolis, “Entropy production and phase space volume contraction,” Physical Review E , vol. 59, no. 4, p. 4000, 1999. G. Gallavotti and E. Cohen, “Nonequilibrium stationary states and en-tropy,” Physical Review E , vol. 69, no. 3, p. 035104, 2004. H. R. Lewis Jr, “Class of exact invariants for classical and quantum time-dependent harmonic oscillators,” Journal of Mathematical Physics ,vol. 9, no. 11, pp. 1976–1986, 1968. H. R. Lewis Jr and W. Riesenfeld, “An exact quantum theory of the time-dependent harmonic oscillator and of a charged particle in a time-dependent electromagnetic field,” Journal of Mathematical Physics ,vol. 10, no. 8, pp. 1458–1473, 1969. 39 I. Malkin, V. Man’Ko, and D. Trifonov, “Linear adiabatic invariants and coherent states,” Journal of Mathematical Physics , vol. 14, no. 5, pp. 576–582, 1973. N. Gisin, “Irreversible quantum dynamics and the Hilbert space struc-ture of quantum kinematics,” Journal of Mathematical Physics , vol. 24, no. 7, pp. 1779–1782, 1983. D. Schuch, K.-M. Chung, and H. Hartmann, “Nonlinear Schr¨ odinger-type field equation for the description of dissipative systems. I. Deriva-tion of the nonlinear field equation and one-dimensional example,” Jour-nal of Mathematical Physics , vol. 24, no. 6, pp. 1652–1660, 1983. D. Schuch, K.-M. Chung, and H. Hartmann, “Nonlinear Schr¨ odinger-type field equation for the description of dissipative systems. III. Fric-tionally damped free motion as an example for an aperiodic motion,”
Journal of mathematical physics , vol. 25, no. 10, pp. 3086–3092, 1984. D. Schuch and K.-M. Chung, “From macroscopic irreversibility to mi-croscopic reversibility via a nonlinear Schr¨ odinger-type field equation,”
International Journal of Quantum Chemistry , vol. 29, no. 5, pp. 1561– 1573, 1986. S. Fitzpatrick, “On the geometric quantization of contact manifolds,”
Journal of Geometry and Physics , vol. 61, no. 12, pp. 2384–2399, 2011. P. Leach and K. Andriopoulos, “The Ermakov equation: a commentary,”
Applicable Analysis and Discrete Mathematics , pp. 146–157, 2008. 40
|
47
|
23 11 Article 14.2.1 Journal of Integer Sequences, Vol. 17 (2014), 2 3 6 1 47 Asymptotic Expansions of Central Binomial Coefficients and Catalan Numbers Neven Elezovi´ c Faculty of Electrical Engineering and Computing University of Zagreb Unska 3 10000 Zagreb Croatia [email protected] Abstract We give a systematic view of the asymptotic expansion of two well-known sequences, the central binomial coefficients and the Catalan numbers. The main point is explana-tion of the nature of the best shift in variable n, in order to obtain “nice” asymptotic expansions. We also give a complete asymptotic expansion of partial sums of these sequences.
1 Introduction One of the most beautiful formulas in mathematics is the classical Stirling approximation of the factorial function: n! ≈ √ 2πn n e n .
This is the beginning of the following full asymptotic expansions [1, 3], Laplace expansion: n! ∼ √ 2πn n e n 1 + 1 12n + 1 288n2 − 139 51840n3 − 571 2488320n4 + . . .
(1) 1 and Stirling series n! ∼ √ 2πn n e n exp 1 12n − 1 360n3 + 1 1260n5 + . . .
.
(2) The central binomial coefficient has a well-known asymptotic approximation; see e.g., [3, p. 35]: 2n n ∼22n √nπ 1 −1 8n + 1 128n2 + 5 1024n3 − 21 32768n4 + . . .
.
(3) Luschny gives the following nice expansions: 2n n ∼ 4n p Nπ/2 2 −2 N 2 + 21 N 4 −671 N 6 + 45081 N 8 (4) where N = 8n + 2, and for the Catalan numbers 1 n + 1 2n n ∼ 4n−2 M √ Mπ 128 + 160 M 2 + 84 M 4 + 715 M 6 −10180 M 8 (5) where M = 4n + 3. Here, for the sake of the beauty, the exact value 45080 3 4 is replaced by 45081, and 10179 13 16 is replaced by 10180.
We would like to thank the anonymous referee who brought to our attention the existence of the manuscript where similar problems are treated. D. Kessler and J. Schiffproved that expansion mentioned above contains only odd powers of n + 1 4 (for the central binomial coefficient) and n + 3 4 (for Catalan numbers). In this paper we explain why this happens.
The main subject of this paper is to explain why N = 8n+2 and M = 4n+3 are the best choices in such expansions, and also to obtain general form of these expansions, especially in the case of the Laplace expansions. In the last section, the asymptotic expansion of the partial sums of binomial coefficients and Catalan numbers are derived, using a simple and efficient recursive algorithm.
2 Central binomial coefficients Although the central binomial coefficient is expressed as Γ(2n+1)/Γ(n+1)2, expansions (1) or (2) cannot be used for direct derivation of (3). Instead, one should use the asymptotic expansion of the ratio of two gamma functions. In the standard reference , the connection with generalized Bernoulli polynomials is used. This approach is improved in a series of recent papers –. Namely, from the duplication formula for the gamma function we have 2n n = Γ(2n + 1) Γ(n + 1)2 = 4n √π · Γ(n + 1 2) Γ(n + 1).
(6) 2 In , the following general asymptotic expansion of the quotient of two gamma functions is given: Γ(x + t) Γ(x + s) ∼xt−s ∞ X m=0 Pm(t, s, r)x−m !1/r .
(7) Here, s, t and r ̸= 0 are real numbers. Coefficients Pm = Pm(t, s, r) are polynomials defined by P0(t, s, r) = 1, (8) Pm(t, s, r) = r m m X k=1 (−1)k+1Bk+1(t) −Bk+1(s) k + 1 Pm−k(t, s, r) (9) and Bk(t) stands for the Bernoulli polynomials.
In the sequel, we shall use the following properties of Bernoulli polynomials and Bernoulli numbers: (−1)nBn(−x) = Bn(x) + nxn−1, Bn(1 + x) = Bn(x) + nxn−1, B2n+1 = 0, (n ≥1), Bn(0) = (−1)nBn(1) = Bn, Bn( 1 2) = −(1 −21−n)Bn, Bn(−1 2) = −(1 −21−n)Bn + (−1)n n 2n−1, Bn( 1 4) = −2−n(1 −21−n)Bn −n4−nEn−1, Bn( 3 4) = (−1)n+12−n(1 −21−n)Bn + n4−nEn−1.
(10) Denote x = n + α, t = 1/2 −α, s = 1 −α. Applying (6), we have 2n n ∼22n √xπ ∞ X k=0 Pk xk !1/r (11) where sequence (Pn) is defined by P0 = 1 and Pm = r m m X k=1 Bk+1( 1 2 + α) −Bk+1(α) k + 1 Pm−k, m ≥1.
In order to obtain a useful formula, the parameter α should be chosen in such a way that the values of Bernoulli polynomials can be (easily) calculated. Some simplifications are also possible if these coefficients are connected in a way which reduces complexity of this expression. Therefore, the following choices are indicated: 1) α = 0: this gives “natural” expansion in terms of powers of n. Although natural, this choice usually is not the best one.
3 2) α = 1 2: this value leads to easily computable coefficients.
3) 1 2 −α = 1 −(1 −α): wherefrom it follows α = 1 4. Here, the symmetry property of Bernoulli polynomials is used, and this is the best choice for α.
4) 1 2 −α = −(1 −α), i.e., α = 3 4: This choice will also reduce computation.
The value of the Bernoulli polynomials may be calculated explicitly (in terms of Bernoulli and Euler numbers) for some other constants α, for example α = 1 6, but the values will be “complicated” compared to the ones chosen above.
The expansions of the central binomial coefficients are given in the following theorem.
Theorem 1. The following asymptotic expansion is valid: 2n n ∼ 4n p π(n + α) ∞ X m=0 Pm(α)(n + α)−m !1/r , (12) where P0 = 1 and 1. for α = 0 Pm = r m ⌊(m+1)/2⌋ X k=1 (2−2k −1)B2k k Pm−2k+1; (13) 2. for α = 1 4 Pm = r m ⌊m/2⌋ X k=1 2−2k−1EkPm−2k; (14) 3. for α = 1 2 Pm = r m ⌊(m+1)/2⌋ X k=1 (1 −2−2k)B2k k Pm−2k+1; (15) 4. for α = 3 4 Pm = r m ⌊m/2⌋ X k=1 2−2k−1(2 −Ek)Pm−2k; (16) Proof. Let us write bk(α) = [Bk+1( 1 2 + α) −Bk+1(α)].
We have bk(0) = Bk+1( 1 2) −Bk+1.
This value is equal to 0 for even k, and equal to (2−k −2)Bk+1 for odd k, and hence (13) follows.
For α = 1 4, bk( 1 4) = Bk+1( 1 4)[(−1)k+1 −1] 4 This is equal to 0 for odd k, and equal to (k + 1)2−2k−1Ek for even k.
Further, bk( 1 2) = (−1)k+1[Bk+1(0) −Bk+1( 1 2)] and (15) follows similarly as in the first case.
Finally, bk( 3 4) = Bk+1( 1 4)[1 −(−1)k+1] + (k + 1)4−k As before, this is equal to 0 for odd k, and equal to (k + 1)2−2k−1(2 −Ek) for even k. Thus (16) follows, completing the proof of the theorem.
It is obvious that the choice α = 1 4 is superior to others. In fact, the equation Bk+1(1/2 −α) = Bk+1(1 −α) is an identity for each odd k only if α = 1 4. Hence, this value of α is unique with the property that the asymptotic expansion reduces to even terms.
We shall give the first few terms of asymptotic expansions of Pm(α) for the values of the shift α observed in Theorem 1. Using r = 1 we get: 2n n ∼ 4n √πn 1 −1 8n + 1 128n2 + 5 1024n3 − 21 32768n4 − 399 262144n5 + 869 4194304n6 + · · · , (17) 2n n ∼ 4n q π(n + 1 4) 1 − 1 64 n + 1 4 2 + 21 8192 n + 1 4 4 − 671 524288 n + 1 4 6 + 180323 134217728 n + 1 4 8 − 20898423 8589934592 n + 1 4 10 + · · · , (18) 2n n ∼ 4n q π(n + 1 2) 1 + 1 8 n + 1 2 + 1 128 n + 1 2 2 − 5 1024 n + 1 2 3 − 21 32768 n + 1 2 4 + 399 262144 n + 1 2 5 + 869 4194304 n + 1 2 6 + · · · , (19) 2n n ∼ 4n q π(n + 3 4) 1 + 1 4 n + 3 4 + 5 64 n + 3 4 2 + 5 256 n + 3 4 3 + 21 8192 n + 3 4 4 + 21 32768 n + 3 4 5 + 715 524288 n + 3 4 6 + · · · , (20) 5 3 The role of exponent r The coefficients Pm(t, s, r) are polynomials in r of degree m, which follows directly from the recursive formula (9).
Theorem 2. Let α = 0. Then Pm(t, s, −r) = (−1)mPm(t, s, r).
(21) Proof. By induction. (21) holds for m = 0 and m = 1. The rest is obvious from (13).
As a corollary, we get that the coefficients of the expansion for r = −1 are identical, up to the sign of odd powers, to the coefficients of the expansion for r = 1. Therefore, from (17) it follows immediately that 2n n −1 ∼ √πn 4n 1 + 1 8n + 1 128n2 − 5 1024n3 − 21 32768n4 + 399 262144n5 + 869 4194304n6 + · · · .
(22) Various choices of r may give useful expansions. For example, r = 4 and N = 4n leads to a good approximation with the first two terms: 2n n ∼22n+1 √ πN 4 r 1 −2 N + 2 N 2 −2 N 4 −4 N 5 −12 N 6 + . . .
while r = 2 and N = 8n + 2 gives a good square root analogue of the formula (4): 2n n ∼4n+1 √ πN r 1 2 −1 N 2 + 11 N 4 −346 N 6 + 22931 N 8 + . . ..
4 Catalan numbers The standard definition of Catalan numbers is given by a recurrence relation, C0 = 1 and Cn+1 = n X k=1 CkCn−k.
Catalan numbers occur in various situations. For instance, Stanley [12, p. 219] explains 66 such situations.
The starting point for us will be the following explicit formula: Cn = 1 n + 1 2n n = Γ(2n + 1) Γ(n + 1)Γ(n + 2) (23) 6 Hence, Catalan numbers can be expressed as a ratio of two gamma functions Cn = 4n √π · Γ(n + 1 2) Γ(n + 2).
(24) Putting x = n + α, t = 1 2 −α, s = 2 −α, from (9) we get Cn ∼4n √πx−3/2 ∞ X m=0 Pmx−m !1/r , (25) with P0 = 1 and Pm = r m m X k=1 ck(α)Pm−k, (26) where we denote ck(α) = Bn+1(α + 1 2) −Bk+1(α −1) k + 1 .
(27) As before, 0 and 1 2 are the natural choice for α. Two other good values follow from α+ 1 2 = 1−(α−1) and α+ 1 2 = −(α−1), wherefrom one gets α = 3 4 and α = 1 4, respectively.
Theorem 3. The following asymptotic expansion holds: Cn ∼4n √π(n + α)−3/2 ∞ X m=0 Pm(α)(n + α)−m !1/r , (28) where P0 = 1 and 1. for α = 0 Pm = r m m X k=1 (2−k −2)Bk+1 k + 1 + (−1)k Pm−k; (29) 2. for α = 1 2 Pm = r m m X k=1 (2 −2−k)Bk+1 k + 1 + (−1)k+1 2k Pm−k; (30) 3. for α = 3 4 Pm = r m ⌊m/2⌋ X k=1 2 · 4−2k−1(4 −E2k)Pm−2k; (31) 4. for α = 1 4 Pm = r m m X k=1 [2−2k−1Ek + (−3 4)k]Pm−k.
(32) 7 Proof. We need to compute explicit coefficients in formula (25).
1) α = 0: ck(0) = Bk+1( 1 2) −Bk+1(−1) k + 1 Using (10), we have ck(0) = −(2 −2−k)Bk+1 k + 1 −(−1)k+1 and (29) follows.
2) α = 1 2: ck( 1 2) = Bk+1(1) −Bk+1(−1 2) k + 1 Using (10), we have ck( 1 2) = (2 −2−k)Bk+1 k + 1 + (−1)k 2k .
3) α = 3 4: ck( 3 4) = Bk+1( 5 4) −Bk+1(−1 4) k + 1 Since Bk+1( 5 4) = Bk+1( 1 4) + (k + 1) 1 4k , Bk+1(−1 4) = (−1)k+1 Bk+1( 1 4) + k + 1 4k it follows ck( 3 4) = 1 + (−1)k k + 1 Bk+1( 1 4) + k + 1 4k .
Therefore, for odd k we have ck( 3 4) = 0. For even k it follows ck( 3 4) = 2Bk+1( 1 4) k + 1 + 2 4k = −2 · 4−k−1Ek + 2 · 4−k.
This proves (31).
4) α = 1 4: ck( 1 4) = Bk+1( 3 4) −Bk+1(−3 4) k + 1 Since Bk+1(−3 4) = (−1)k+1 Bk+1( 3 4) + (k + 1)( 3 4)k 8 it follows ck( 5 4) = 1 k + 1Bk+1( 3 4)[(−1)k + 1] + (−1)k( 3 4)k.
Therefore, for odd k it holds ck( 1 4) = (−3 4)k.
For even k, after reducing in a similar way as before, we get ck( 1 4) = −2 · 4−k−1Ek + (−3 4)k.
For odd k this values coincides with previous one, since E2n+1 = 0. Hence, (32) is proved.
For the convenience of the reader, here is the short list of the observed coefficients, for r = 1: Cn ∼ 4n √ πn3 1 −9 8n + 145 128n2 − 1155 1024n3 + 36939 32768n4 − 295911 262144n5 + −4735445 4194304n6 + · · · , (33) Cn ∼ 4n q π(n + 1 2)3 1 − 3 8 n + 1 2 + 25 128 n + 1 2 2 − 105 1024 n + 1 2 3 + 1659 32768 n + 1 2 4 − 6237 262144 n + 1 2 5 + 50765 4194304 n + 1 2 6 + · · · .
(34) Cn ∼ 4n q π(n + 3 4)3 1 + 5 64 n + 3 4 2 + 21 8192 n + 3 4 4 + 715 524288 n + 3 4 6 − 162877 134217728 n + 3 4 8 + 19840275 8589934592 n + 3 4 10 + · · · .
(35) Cn ∼ 4n q π(n + 1 4)3 1 − 3 4 n + 1 4 + 35 64 n + 1 4 2 − 105 256 n + 1 4 3 + 2541 8192 n + 1 4 4 − 7623 32768 n + 1 4 5 + 90805 524288 n + 1 4 6 + · · · .
(36) As one can see, the expansion in terms of n + 3 4 has additional property that it contains only odd terms.
9 Corollary 4. The value α = 3 4 is the unique value for which asymptotic expansion of Catalan numbers contains only odd terms.
Proof. If this is the case, then coefficient c1(α) from (27) must vanish: B2(α + 1 2) −B2(α −1) = 0.
From this one obtain α = 3 4.
5 Expansions of Stirling’s type Stirling expansion of the factorial function (2) includes the exponential function. Using the known asymptotic expansion ln Γ(x + t) = (x + t −1 2) ln x −x + 1 2 ln(2π) + ∞ X k=1 (−1)k+1Bk+1(t) k(k + 1 x−k we immediately get Γ(x + t) Γ(x + s) ∼xt−s · exp ∞ X k=1 Qk(t, s)x−k !
(37) where Qk(t, s) are polynomials of order k defined by Qk(t, s) = (−1)k+1Bk+1(t) −Bk+1(s) k(k + 1) .
(38) Hence, we obtain: Theorem 5. The binomial coefficient has the following asymptotic expansions of Stirling’s type: 2n n ∼ 4n √πn exp ∞ X k=1 (2−2k −1)B2k k(2k −1) n−2k+1 (39) 2n n ∼ 4n q π(n + 1 4) exp ∞ X k=1 2−4k−2E2k k n−2k .
(40) The proof is already carried out in the Theorem 1.
A similar result can be stated for Catalan numbers: Theorem 6.
Cn ∼ 4n n√πn exp ∞ X k=1 (−1)k+1(2−k −2)Bk+1 −k −1 k(k + 1) n−k (41) Cn ∼ 4n q π(n + 3 4)3 exp ∞ X k=1 2−4k−2(4 −E2k) k (n + 3 4)−2k .
(42) Formulae (39)–(42) are derived in the manuscript .
10 6 The sum of binomial coefficients and Catalan num-bers In a recent paper, Mattarei proves the following asymptotic expansions: n X k=0 2k k = 4n+1 3√πn 1 + 1 24n + 59 384n2 + 2425 9216n3 + O(n−4) (43) n X k=0 Cn = 4n+1 3n√πn 1 −5 8n + 475 384n2 + 1225 9216n3 + O(n−4) (44) The calculation was tedious. For example, it relies on a computer algebra system, since it is based on the following formulas: n X k=0 2k k = 4 3 2n n m X j=0 3−j (2n −1)(2n/3 −1) · (2n/(2j −1) −1) + O(4nn−m−3/2), n X k=0 Cn = 2 3 2n n m X j=0 (−3)j + 3−j (2n −1)(2n/3 −1) · (2n/(2j + 1) −1) + O(4nn−m−5/2) The final calculation of the coefficients (43) and (44) was carried out using Maple, with m = 4.
We shall derive an efficient algorithm for recursive calculations of asymptotic expansions of this and similar sums, which enables an easy calculation of the arbitrary coefficient in these expansions.
The theorem will be formulated in such a way that it may be easily applied to both binomial and Catalan sums. It is evident that a similar statement is valid for the asymptotic expansion of the sum of more general functions.
Theorem 7. Suppose that a(n) has the following expansion, P0(α) = 1 and a(n) ∼4n √π ∞ X k=0 Pk(α)(n + α)−k−r, (45) where r > 0 is a real number. Then n X k=0 a(k) ∼4 3 · 4n+1 √π ∞ X k=0 Sk(α)(n + α)−k−r (46) where the coefficients of this expansion satisfy S0(α) = 1 and Sk(α) = Pk(α) + 1 3 k−1 X j=0 (−1)k−j −j −r k −j Sj(α) (47) 11 Proof. Denote Σ(n) = Pn k=0 a(k). Suppose that Σ(n) ∼C · 4n √πn−r + O(n−r−1).
Then Σ(n) ∼a(n) + Σ(n −1) ∼4n √π n−r + C · 4n−1 √π (n −1)−r + O(n−r−1) ∼4n √π n−r + C · 4n−1 √π n−r + O(n−r−1) ∼C · 4n √πn−r + O(n−r−1) and from here it follows that C = 4 3. The fact that Σ(n) indeed has the asymptotic behavior of this type may be proved in the same way as it is done for the case r = 1/2 in .
Hence, we obtain that $\Sigma(n)$ has the asymptotic expansion of the following form:
$$\Sigma(n) = \frac{4^{n+1}}{3\sqrt{\pi}}\sum_{k=0}^{\infty}S_k(\alpha)\,(n+\alpha)^{-k-r}. \tag{48}$$
Then, using the asymptotic expansion (45), we get
$$\frac{4^{n+1}}{3\sqrt{\pi}}\sum_{k=0}^{\infty}S_k(\alpha)\,(n+\alpha)^{-k-r} = \frac{4^n}{\sqrt{\pi}}\sum_{k=0}^{\infty}P_k(\alpha)\,(n+\alpha)^{-k-r} + \frac{4^n}{3\sqrt{\pi}}\sum_{k=0}^{\infty}S_k(\alpha)\,(n+\alpha-1)^{-k-r}$$
$$= \frac{4^n}{\sqrt{\pi}}\sum_{k=0}^{\infty}P_k(\alpha)\,(n+\alpha)^{-k-r} + \frac{4^n}{3\sqrt{\pi}}\sum_{k=0}^{\infty}S_k(\alpha)\,(n+\alpha)^{-k-r}\sum_{j=0}^{\infty}(-1)^j\binom{-k-r}{j}(n+\alpha)^{-j}.$$
Hence
$$4S_k(\alpha) = 3P_k(\alpha) + \sum_{j=0}^{k}(-1)^{k-j}\binom{-j-r}{k-j}S_j(\alpha).$$
Extracting the term containing $S_k$ from the right-hand side, we get (47).
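The recursion (47) is straightforward to implement with exact rational arithmetic. A minimal sketch of my own follows; fed with the standard coefficients of the central binomial expansion ($\alpha = 0$, $r = 1/2$), it reproduces the coefficients of (43):

```python
from fractions import Fraction
from math import factorial

def gbinom(x, m):
    """Generalized binomial coefficient C(x, m) for rational x and integer m >= 0."""
    num = Fraction(1)
    for i in range(m):
        num *= x - i
    return num / factorial(m)

def sum_expansion(P, r):
    """Coefficients S_k of the partial-sum expansion (46), obtained from the
    summand coefficients P_k of (45) through the recursion (47)."""
    S = [Fraction(1)]                                   # S_0(alpha) = 1
    for k in range(1, len(P)):
        acc = Fraction(P[k])
        for j in range(k):
            acc += Fraction(1, 3) * (-1) ** (k - j) * gbinom(-j - r, k - j) * S[j]
        S.append(acc)
    return S

# Central binomial coefficients, alpha = 0, r = 1/2; the P_k below are the standard
# coefficients of binom(2n, n) ~ 4^n/sqrt(pi n) * (1 - 1/(8n) + 1/(128 n^2) + 5/(1024 n^3) + ...).
P = [Fraction(1), Fraction(-1, 8), Fraction(1, 128), Fraction(5, 1024)]
print(sum_expansion(P, Fraction(1, 2)))
# -> 1, 1/24, 59/384, 2425/9216: the coefficients appearing in (43)
```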
Taking $\alpha = 0$ and $r = 1/2$ or $r = 3/2$, it is easy to obtain the following asymptotics:
$$\sum_{k=0}^{n}\binom{2k}{k} \sim \frac{4^{n+1}}{3\sqrt{\pi n}}\left(1 + \frac{1}{24n} + \frac{59}{384n^2} + \frac{2425}{9216n^3} + \frac{576793}{884736n^4} + \frac{5000317}{2359296n^5} + \frac{953111599}{113246208n^6} + \cdots\right)$$
$$\sum_{k=0}^{n}C_k \sim \frac{4^{n+1}}{3n\sqrt{\pi n}}\left(1 - \frac{5}{8n} + \frac{475}{384n^2} + \frac{1225}{9216n^3} + \frac{395857}{98304n^4} + \frac{27786605}{2359296n^5} + \frac{6798801295}{113246208n^6} + \cdots\right).$$
Here there is no good value of $\alpha$ that would lead to an expansion similar to (18) or (35), as is evident from formula (47).
References

M. Abramowitz and I. A. Stegun, eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, National Bureau of Standards, Applied Mathematics Series 55, 9th printing, Washington, 1970.
A. Erdélyi, Asymptotic Expansions, Dover Publications, New York, 1956.
Y. L. Luke, The Special Functions and Their Approximations, Vol. I, Academic Press, New York, 1969.
T. Burić and N. Elezović, Bernoulli polynomials and asymptotic expansions of the quotient of gamma functions, J. Comput. Appl. Math. 235 (2011), 3315–3331.
T. Burić and N. Elezović, New asymptotic expansions of the gamma function and improvements of Stirling's type formulas, J. Comput. Anal. Appl. 13 (2011), 785–795.
T. Burić and N. Elezović, New asymptotic expansions of the quotient of gamma functions, Integral Transform. Spec. Funct. 23 (2012), 355–368.
T. Burić, N. Elezović, and R. Šimić, Asymptotic expansions of the multiple quotients of gamma functions and applications, Math. Inequal. Appl., to appear.
N. Elezović, C. Giordano, and J. Pečarić, The best bounds in Gautschi's inequality, Math. Inequal. Appl. 3 (2000), 239–252.
P. Luschny, Approximation formulas for the factorial function n!,
D. Kessler and J. Schiff, The asymptotics of factorials, binomial coefficients and Catalan numbers, manuscript,
S. Mattarei, Asymptotics of partial sums of central binomial coefficients and Catalan numbers, preprint,
Richard P. Stanley, Enumerative Combinatorics, Vol. 2, Cambridge University Press, 1999.
F. G. Tricomi and A. Erdélyi, The asymptotic expansion of a ratio of Gamma functions, Pacific J. Math. 1 (1951), 133–142.
2010 Mathematics Subject Classification: Primary 11B65; Secondary 41A60, 33B15.
Keywords: binomial coefficient; Catalan number; asymptotic expansion; gamma function.
(Concerned with sequences A001163, A001164, A046968, and A046969.) Received March 21 2013; revised versions received June 6 2013; October 14 2013; November 26 2013. Published in Journal of Integer Sequences, January 3 2014.
|
48
|
Newest linked questions - Mathematics Stack Exchange
===============
Linked Questions
23 questions linked to/from Derivation of asymptotic solution of tan(x) = x.
2 votes
1 answer
156 views
Asymptotic expansion of the solution of tan x = x.
I am trying to find an asymptotic expansion of the solution x_n of the equation tan x = x in the interval I_n = ]−π/2 + nπ, π/2 + nπ[. I have shown that $$x_n = ...
real-analysis
asymptotics
M-S
312
asked Dec 18, 2022 at 21:06
2 votes
1 answer
223 views
How do I solve sin x + x cos x = 0? [closed]
How do I solve sin x + x cos x = 0? I've tried several different trigonometric identities and I'm aware it can also be written as
tan x = −x.
One of the answers is zero, but the other answers ...
trigonometry
numerical-methods
Oscar Yuleton
19
asked Dec 14, 2021 at 19:56
2 votes
1 answer
227 views
How did Euler and Rayleigh calculate tan(x) = x? [closed]
I wanted to solve tan(x) = x without Newton's method. Along the way I have found various questions here on Mathematics like: Solution of tan x = x? (Answer of JJacquelin), Derivation of asymptotic solution ...
analysis
trigonometry
asymptotics
OcK
25
asked Dec 31, 2020 at 19:52
1 vote
1 answer
149 views
Evaluate lim_{n→∞} n(nπ + π/2 − x_n) [duplicate]
Where x_n is the solution of tan(x) = x in the interval (nπ, nπ + π/2), n ≥ 0. Any hints on how to approach this problem with elementary methods in the first place? The answer should ...
calculus
limits
trigonometry
limits-without-lhopital
Rareș Stanca
433
asked Jun 7, 2019 at 10:38
3 votes
3 answers
268 views
Given f(x) = x sin(1/x), find roots of f′(x) in the interval 0 ≤ x ≤ 1/π.
This question is taken from the book Advanced Calculus: An Introduction to Classical Analysis, by Louis Brand. The book is concerned with introductory analysis. If $f(x) = x \sin\frac1x\;(x\ne 0), f(0) ...
trigonometry
rolles-theorem
jiten
4,972
asked Jun 5, 2019 at 8:32
0 votes
1 answer
218 views
Sketching functions with an uncountable number of turning points (sinusoids)
In my attempt to sketch functions, I usually employ the following algorithm: Differentiate the function. Find which x has dy/dx = 0. Find which y correspond to the aforementioned x. Find ...
functions
graphing-functions
sangstar
2,005
asked Jul 8, 2017 at 6:06
2 votes
4 answers
413 views
Solution to x tan(π/2 − π/x) = π
In the following equation, is it possible to solve for a numerical value of x?
x tan(π/2 − π/x) = π
algebra-precalculus
trigonometry
Hashbrowns
187
asked Nov 10, 2016 at 20:33
7 votes
2 answers
228 views
Is there standard terminology to describe the not-quite-a-limit behavior of tan(log x)/x as x approaches infinity?
Suppose I want to describe the long term behavior of tan(log x)/x as x increases towards positive real infinity. Now,
lim_{x→∞} tan(log x)/x
obviously ...
calculus
real-analysis
limits
terminology
asymptotics
Nathan McKenzie
1,007
asked Dec 18, 2015 at 14:56
9 votes
4 answers
2k views
Solving ln x = tan x with infinitely many solutions
Let's take f(x) = ln x and g(x) = tan x. When f(x) = g(x), that is ln x = tan x, we see that the graph is like: Hence we see that there are infinitely many solutions to x but the two graphs ...
special-functions
transcendental-equations
lambert-w
newton-raphson
NeilRoy
2,251
asked May 11, 2015 at 6:37
1 vote
2 answers
172 views
Roots of tan x − x
The function tan x − x has exactly one root x_n in the interval (nπ, (n + 1/2)π). Show that
x_n = nπ + π/2 − 1/(nπ) + r_n
where $\lim_{n\rightarrow \infty} n ...
real-analysis
taylor-expansion
roots
user654236
11
asked Jul 24, 2014 at 21:40
0 votes
0 answers
250 views
The roots of the transcendental equation tan(x) = x [duplicate]
Can we find the roots of the equation tan(x) = x? I once found a formula which gives its roots approximately. Any link will be welcome.
trigonometry
transcendental-equations
Roger209
1,253
asked Jul 24, 2014 at 8:22
1 vote
1 answer
1k views
Lagrange Bürmann Inversion Series Example
I am trying to understand how one applies Lagrange Bürmann Inversion to solve an implicit equation in real variables(given that the equation satisfies the needed conditions). I have tried looking for ...
real-analysis
complex-analysis
power-series
taylor-expansion
implicit-function-theorem
QTHalfTau
342
asked Jun 27, 2014 at 18:05
1 vote
1 answer
152 views
Solving ctg x = x/b
I have no problems finding the first solution (both b → 0 and b → ∞). My solutions are on the photos. I got stuck trying to find the solution when x → ∞. As I think, the solution for x will have $...
calculus
roots
perturbation-theory
trigonometric-series
Resha
19
asked Apr 22, 2014 at 12:33
5 votes
1 answer
545 views
Limits of the solutions to x sin x = 1
Let x_n be the sequence of increasing solutions to x sin x = 1. Define
a = lim_{n→∞} n(x_{2n+1} − 2πn)
and $$b = \lim_{n \to \infty} n^3 \left( x_{2n+1} - 2\pi n - \frac{a}{n} ...
calculus
limits
contest-math
taylor-expansion
roots
Ayesha
2,740
asked Mar 20, 2014 at 2:23
4 votes
1 answer
273 views
Roots of x^x − tan(x)
I conjecture that the function f(x) = x^x − tan x has exactly one root in any of the intervals [(2n+1)π/2, (2n+3)π/2], where n is a nonnegative integer. Does anyone ...
analysis
Peter
86.7k
asked Oct 18, 2013 at 10:59
|
49
|
Homology of a polyhedron - Encyclopedia of Mathematics
===============
Homology of a polyhedron
From Encyclopedia of Mathematics
A homology theory of a topological space which is a polyhedron (cf. Polyhedron, abstract). Homology of a polyhedron first appeared in the works of H. Poincaré (1895) in a study of manifolds in Euclidean spaces. He considered $ r $-dimensional closed submanifolds of a given manifold, known as $ r $-dimensional cycles. If the manifold includes a bounded $ ( r + 1 ) $-dimensional submanifold with as boundary a given $ r $-dimensional cycle, this cycle is said to be homologous to zero in the given manifold. Thus, a circle which is concentric with the circles bounding an annulus is not homologous to zero, whereas the circle forming the boundary of a disc contained in the annulus is homologous to zero in this annulus. The initial analytic definition of a manifold was replaced by Poincaré by its representation by simplices (or simplexes) with adjacent boundaries, forming a complex. Such a method for studying homology may be applied to any space that can be triangulated as a simplicial complex, i.e. that can be seen as rectilinear polyhedra, or their homeomorphic images — curvilinear polyhedra. The geometrical meaning of cycles and their homology is preserved. Thus, a one-dimensional cycle is a closed polygonal line with one-dimensional simplices as its segments. It is homologous to zero if it is the boundary of a two-dimensional subcomplex of the given complex. Two cycles of equal dimension are homologous to each other if, taken together, they bound a subcomplex of the given complex. This is an equivalence relation the result of which is a subdivision of the set of cycles with the same dimension into classes. An algebraic structure may be introduced into the set of classes if the sum of two classes is the class containing the sum of two cycles arbitrarily chosen out of the classes being added. The introduction of a direction of traversal, i.e. of oriented simplices, leads to the concept of the inverse class. A strict interpretation of these illustrative concepts makes it possible to define the concept of the homology groups of a polyhedron.
Let there be given a triangulation $ K $ of a polyhedron $ P $ and an Abelian group $ G $. An $ r $-dimensional chain of the complex $ K $ over the coefficient group $ G $ is an arbitrary function $ c _ {r} $ that assigns to each oriented $ r $-dimensional simplex $ t ^ {r} $ from $ K $ a certain element of $ G $, and that is non-zero only for a finite number of simplices; moreover, $ c _ {r} ( - t ^ {r} ) = - c _ {r} ( t ^ {r} ) $. By adding $ r $-dimensional chains as linear forms one obtains the Abelian group $ C _ {r} ( K, G) $ of all $ r $-dimensional chains of $ K $ over $ G $. Starting from the concept of the boundary of a simplex, and defining the boundary of a chain by additivity, one arrives at a homomorphism
$$ \partial _ {r} : C _ {r} ( K, G) \rightarrow C _ {r - 1 } ( K, G) $$
with the property $ \partial _ {r- 1} \partial _ {r} = 0 $, and the chain complex
$$ { C _ {r} ( K, G), \partial _ {r} } . $$
A chain $ c _ {r} $ is called a cycle if its boundary is the zero chain: $ \partial _ {r} c _ {r} = 0 $. A cycle $ z _ {r} $ is said to be bounding (a boundary) if $ K $ contains an $ ( r + 1) $-dimensional chain $ c _ {r+ 1} $ such that $ z _ {r} = \partial _ {r+ 1} c _ {r+ 1} $. The kernel of the homomorphism $ \partial _ {r} $, i.e. the group $ Z _ {r} ( K, G) $ of all $ r $-dimensional cycles, contains the image under the homomorphism $ \partial _ {r+ 1} $, i.e. the subgroup $ B _ {r} ( K, G) $ of all bounding $ r $-dimensional cycles. The quotient group $ H _ {r} ( K, G) $ of $ Z _ {r} ( K, G) $ by $ B _ {r} ( K, G) $ is the $ r $-dimensional homology group of $ K $ over $ G $. It is taken as the $ r $-dimensional homology group $ H _ {r} ( P, G) $ of the polyhedron $ P $ over $ G $, since it can be proved that all triangulations of $ P $ have isomorphic $ r $-dimensional homology groups over $ G $. In view of the universal coefficient theorem, the group $ H _ {r} ( P, G) $ is defined, for an arbitrary $ G $, by integer groups $ H _ {s} ( P, \mathbf Z ) $, where $ \mathbf Z $ is the group of integers. Furthermore, if the polyhedron is finite, the integer group, which is an Abelian group with a finite number of generators, has a complete system of numerical invariants — the Betti number and the torsion coefficients, i.e. the rank and the torsion coefficients of the group $ H _ {r} ( P, \mathbf Z ) $.
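To make these definitions concrete, here is a small illustrative sketch (my own, not part of the article): it computes Betti numbers over the rationals by row-reducing the boundary matrices of a finite simplicial complex, here the boundary of a triangle, i.e., a triangulated circle. Working over a field recovers only the Betti numbers; the torsion coefficients would require integer (Smith normal form) reduction instead.

```python
from fractions import Fraction

def mat_rank(mat):
    """Rank of a matrix over the rationals, via Gauss-Jordan elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    rk, rows = 0, len(m)
    cols = len(m[0]) if rows else 0
    for c in range(cols):
        pivot = next((i for i in range(rk, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for i in range(rows):
            if i != rk and m[i][c] != 0:
                f = m[i][c] / m[rk][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

def boundary_matrix(simplices_r, simplices_r1):
    """Matrix of the boundary homomorphism from r-chains to (r-1)-chains."""
    index = {s: i for i, s in enumerate(simplices_r1)}
    mat = [[0] * len(simplices_r) for _ in simplices_r1]
    for j, s in enumerate(simplices_r):
        for k in range(len(s)):
            face = s[:k] + s[k + 1:]          # delete the k-th vertex
            mat[index[face]][j] = (-1) ** k   # with alternating sign
    return mat

def betti_numbers(simplices):
    """Betti numbers over Q of a complex given as lists of sorted vertex tuples per dimension."""
    dims = len(simplices)
    ranks = [0] * (dims + 1)
    for r in range(1, dims):
        ranks[r] = mat_rank(boundary_matrix(simplices[r], simplices[r - 1]))
    # b_r = dim C_r - rank(boundary_r) - rank(boundary_{r+1}) = dim Z_r - dim B_r
    return [len(simplices[r]) - ranks[r] - ranks[r + 1] for r in range(dims)]

# The boundary of a triangle, i.e., a triangulated circle:
circle = [
    [(0,), (1,), (2,)],            # 0-simplices
    [(0, 1), (0, 2), (1, 2)],      # 1-simplices
]
print(betti_numbers(circle))       # [1, 1]: one component, one 1-dimensional hole
```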
References
P.S. Aleksandrov, "Combinatorial topology" , Graylock , Rochester (1956) (Translated from Russian)
L.S. Pontryagin, "Grundzüge der kombinatorischen Topologie" , Deutsch. Verlag Wissenschaft. (1956) (Translated from Russian)
H. Seifert, W. Threlfall, "A textbook of topology" , Acad. Press (1980) (Translated from German)
P.J. Hilton, S. Wylie, "Homology theory. An introduction to algebraic topology" , Cambridge Univ. Press (1960)
Comments
References
[a1]E.H. Spanier, "Algebraic topology" , McGraw-Hill (1966)
How to Cite This Entry:
Homology of a polyhedron. Encyclopedia of Mathematics. URL:
This article was adapted from an original article by G.S. Chogoshvili (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
|
50
|
In Solaris (1972), Andrei Tarkovsky presents a vision of contemporary society as one that has become cut off from nature, and provides a narrative that illustrates the possibility of remaining human in the inhuman world that is the result. The film contrasts a life-affirming natural landscape to an urban, constructed landscape where the natural world is submerged and invisible. The Solaris space station is both a projection of this second, inhuman, landscape and an allegory for Tarkovsky’s view of urban life. The narrative of the film concerns the journey by the central character, Kris Kelvin (Donatas Banionis), from emotional deadness to a rediscovery of his humanity as he charts a course between these two worlds, and the role that art, whether painting, music or film, plays in this.
Solaris begins with the leisurely contemplation of a landscape where natural events pass before the camera lens. Water ripples over seaweed, a breeze rustles through a meadow, and in the midst of this is Kris. He stands still, but his eyes move, showing he is aware and attentive to what is around him. His place as part of the natural setting is emphasized by a shot centering on a tree in a field, where Kris enters from the left and crosses the frame in the background, disappearing behind a bush before reappearing, showing him as merely another feature of the landscape.
This immersion in nature reaches its peak when he welcomes and enjoys being drenched by a sudden shower. As Vida Johnson and Graham Petrie note, he is “intimately associated with the sights and sounds of nature – the pond, the trees, the horse, and the dog, his enjoyment of the rain” (Johnson and Petrie 104). The horse in this sequence is particularly interesting, as the meaning Tarkovsky ascribes to horses in his films is straightforward. As he told an interviewer, “for me the horse symbolizes life” (Gianvito 25). Here the horse suddenly appears, with, as Stephen Dillon points out, “no coherent spatial relationship with Kris” (Dillon 15). However, there is a relationship through editing. Kris is seen walking left to right across the frame, there is then a cut to the horse trotting left to right, and then a return to Kris, still walking left to right. The inference through association is that while we soon find out Kris is emotionally dead, his relationship to this setting suggests he has the potential for resurrection.
These opening sequences also serve to predict future events and themes of the film. As Dillon writes, the “watery images with which the film begins recall ahead of time the swirling waters of the planet Solaris” (Dillon 15). This conflation of Earth’s world with Solaris gains further importance in the light of Kris raising the possibility with Berton that one of the options he might recommend is the destruction of the Solaris ocean through bombardment with x-rays, something that would find an echo in the scientist’s plan to blow up the room in the Zone in Tarkovsky’s later film Stalker (1979). So when Kris’ father says that outer space is too “fragile” to have people like Kris making decisions about it, he is also talking about the natural world around them.
In this context, the brief scenes involving Berton’s son gain interest. He arrives from the city, and is soon paired off with a young girl who is never identified and whose presence is never explained (or questioned). She can be seen as a personification of the natural landscape, and her running with the boy and the dog in the rain both a celebration of nature and a precursor to the later connection between Kris and Hari. We later see the boy with the old woman, describing to her being frightened by the horse, the expression of life, and being led back to it as she describes its beauty. This parallels Kris’ later journey which Timothy Hyman describes as “the transformation of . . . initial fear of the ocean . . . eventually into love,” which he argues “constitutes the true narrative of the film” (Hyman 55).
The natural setting shown here is not representative of contemporary society, which enters the film when Kris, after washing his hands in the pond, stands and looks directly at the camera (the audience). What he sees is Berton and his son arriving by car. There is a road running by the side of the property, revealing it to be “an oasis of tranquility in an otherwise ugly environment” (Johnson and Petrie 211).
In fact, this idyllic setting is itself partially a construct. The dacha’s integration with the natural environment is shown during a conversation between Kris’ father and Berton, where the garden remains in shot through a window and the father at one point disappears from the frame and then reappears outside, continuing the uninterrupted conversation through that window and unifying the interior and exterior spaces. But the dacha turns out to be a replica of one that belonged to Kris’ great-grandfather. It is also an imperfect replica, since it has modern gadgets that would not have been in the original, and it is the first of many objects that have been created to recapture something which is lost, all of which turn out to be imperfect. Even here, the modern world is encroaching, the natural world no longer exists as it once did and only through interaction with objects can it to a certain extent be recreated.
The long drive to the city made by Berton and his son presents a contrast between the natural and modern worlds, but also expresses in metaphoric terms Kris’ journey from Earth to outer space and suggests the idealized natural world has been lost. Berton’s trip begins in black and white, following the car through a series of concrete tunnels and overpasses with brief glimpses of trees and shrubbery becoming scarce as the environment becomes increasingly constructed, while sounds of traffic and mechanical noises rise in volume until the car bursts into a maze of freeways and the colours of a garish, neon-lit city at night. As Timothy Hyman writes, the “landscape of these first sequences spells out a polarity between garden and city, organic and inorganic, humanistic and anti-humanistic” (Hyman 55). This polarity is reinforced and broadened by a tempo change as the reality expressed through the studied contemplation of the details of the countryside in the early scenes gives way to the abstract, fast-paced, almost hallucinogenic cityscape, the “stark contrast between the chaotic time-pressure in the technological montage and the tranquil rhythm in the shot of the natural landscape reflects one of the film’s thematic conflicts of technology/nature, space/earth” (Totaro 25).
Contrast edit: from (chaotic/modern) to (tranquil/rural).
In this way, the journey from country to city is transformed into one from earth to sky. In narrative terms, the actual space flight is somewhat perfunctory, shown almost entirely through a close-up on Kris’ face. An effect much more akin to a rocket launch is achieved early in Berton’s drive by an increase in mechanical sound as the car approaches and enters a tunnel, while the difficulty of interplanetary travel is effectively suggested by the length of the sequence. As C. Claire Thomson writes, the “job of expressing allegorically the unimaginable distance which Kelvin must cover falls to Berton’s interminable drive along all-too concrete motorways and underpasses” (Thomson 9).
This extremely negative representation of the urban landscape is not some future dystopia, it is 1971 Tokyo. Except for a few token gadgets, there is little effort to “futurize” what is shown. As Dillon notes, “we are simply supposed to pretend that this obviously contemporary world belongs to the future” (Dillon 16). The implication of Tarkovsky presenting the present as the future is that it suggests Solaris is not intended as a prediction, but as a critique of the world he saw around him. Chris Fujiwara argues that “Tarkovsky’s films are nostalgic: they view the world as in danger of being lost, and see it from the point of view of someone striving to hold on to it” (Fujiwara 51). However, the film can also be read to suggest the natural world is not just in danger of being lost, but is already gone. Following the trip to the city sequence, the film returns to the dacha, but it is now a black and white world where Kris is burning artifacts of his past, including a photo of his dead ex-wife, Hari. Kris’ father tells him he will make plans “if something should happen” and the old woman turns away and cries, implying this is the last time they will see each other. The tone is melancholic and the lack of the greens, browns and blues of the opening sequence conveys the sense that the journey to the city was also a chronicle of society’s move away from nature, and the film is showing a lost world. The sense of loss and disharmony is emphasized in Kris’ last walk around the dacha. As he passes the open doorway, the horse can be seen trotting by, this time in the opposite direction from Kris.
The space station is an extension of Tarkovsky’s view of the urban world and reflects a lack of interest in the details of space travel that can be summed up in his comment that “for me, the sky is empty” (Gianvito 25). Like the city, the station is a constructed environment where nothing grows. But instead of the dizzying colours of the neon nightscape, it is dreary and run down, the urban landscape by day. What we are shown is the world we live in now that we have cut ourselves off from our roots in the natural world. As Hyman writes, “Tarkovsky in Solaris is clearly talking about our life, today” (Hyman 54). A visual link between the station and the city is made when Kris first enters the station. The hallway is shown in long shot, the dominant chrome-grey colour reminiscent of the tunnels shown in the journey to the city.
Matching chrome-grey tone: tunnel and Solaris space station.
The intense unnaturalness of this environment presents a challenge to the three scientists to remain sane. Or, as Tarkovsky has said, “the people in the space station have only to solve one problem: how to remain human” (Gianvito 42). Their response is to attempt to replicate the experience of their former lives on Earth through creating objects (such as the strips of paper attached to air vents that, Snaut claims, at night “sound like the rustling of leaves”) or through the creations of others, works of art such as the Breughel paintings that hang on the walls of the station library. Interestingly, the library also has a bust of Plato, a copy of which also could be seen in Kris’ father’s dacha. Plato’s belief that everything on Earth was merely a false, illusory duplication of an “ideal” form (in Heaven) is echoed in the series of flawed imitations we are shown. Indeed, even the dacha with the “original” bust is itself an object constructed to replicate a lost experience, which will be (imperfectly) replicated again later in the film. These are all imperfect replicas: the rustling paper or a Breughel canvas are not the same as actually being in a forest, and the butterfly collection on Snaut’s wall is not the same as a real living butterfly, but they offer the possibility of providing connections to human experience.
Tarkovsky writes that “art is a meta-language, with the help of which people try to communicate with one another . . . this has not to do with practical advantage but with realizing the idea of love” (Tarkovsky 40). In light of this, it is possible to see Hari as a work of art. Like a Breughel painting, she is an imperfect replica of something lost, and through interaction with her, Kris rediscovers what it is to be human and to love. This is the key difference between the film and Stanislaw Lem’s original novel, which was a critique of the idea of anthropomorphically ascribing human values to a universe possibly organized in a manner beyond human comprehension, and in which these values are probably meaningless. As Johnson and Petrie write, “Tarkovsky’s film, by contrast, is a celebration of human values and of the power of love in an indifferent or hostile universe” (Johnson and Petrie 102). They note further that Kris’ transformation under Hari’s influence is visually charted as the white spacesuit, which “initially associates him with the coldly geometric layout” is replaced by grey sweater and slacks (Johnson and Petrie 108). In fact, it goes much further than this. As the relationship between Kris and Hari becomes more intense and they both become more “human,” their clothes seem to be progressively stripped away and we become more aware of their bodies until, during Hari’s suicide attempt and resurrection, which is when they are at their most human, Kris is in his underwear and Hari wears a transparent negligee through which her naked body is clearly visible.
It is in the levitation sequence that the interaction between Hari, the replica of a human, the replications of human experience (which is a possible view of painting, music and film) and Kris, the human, reaches a climax. There are a series of cuts between Kris watching Hari, Hari staring at Breughel’s “Hunters in the Snow,” camera pans of close-up details of that painting and flashbacks of the home movie of Kris as a child in a snowy landscape that they watched earlier. Suddenly, the period of weightlessness begins. First candles begin to float, then the chandelier ripples with sound, and Kris and Hari begin to float as the Bach Prelude heard at the beginning of the film and during the showing of the home movie returns, an evocative work of art “associated with Earth and its values – nature, art, love” (Johnson and Petrie 108). As they levitate, the camera cuts twice more to “Hunters in the Snow” as Hari also begins to take in the other Breughel paintings. It seems to be a moment of transcendence, even exaltation, in a way reminiscent of the balloon flight at the beginning of Andrei Rublev (1966). And when it is done, they relax on the ground with Kris’ head on Hari’s lap, and it is almost as if they have just had sex. There is then a shot of the swirling Solaris ocean as electronic noises rise and overpower the Bach. This is followed by the sound of a crash and a sudden cut to the smoking vial, as Hari has just tried to commit suicide. A way of reading this scene is that through the evocative power of Breughel’s snowscape, Hari is able to relate it to the home movie of young Kris in the snow, and she is able to understand what being human means and to fully love Kris. As Hyman writes, “when she turns to Kris, we realize through Breughel she has been able to apprehend what it is to be a human being on earth” (Hyman 56). The shot of the Solaris ocean in tumultuous activity which follows reinforces this, as if the synapses of a giant brain are popping frantically as it assimilates information.
That Hari’s understanding of what it is to be human is followed immediately by a suicide attempt is not a contradiction. Johnson and Petrie see it as an act of love, that the “new Hari’s sacrifice is a redemptive one from which Kris is able to learn and benefit, rather than a gesture of despair,” which the original Hari’s suicide had been (Johnson and Petrie 105). Dillon has a different interpretation. Relating it to his reading of Solaris as a reflexive meditation on cinema, with the relationship of Kris and Hari analogous to that between audience and film, he suggests the “levitation is beautiful but temporary. Their time together is really one manner of disorientation after another . . . This scene does not imply the “naturalness” and “timeliness” of art, but instead the ghostly artifice of art” (Dillon 14). He goes on to write that “the next sequence begins with the revelation that Hari has drunk liquid oxygen, further eroding any idealistic reading of the levitation” (Dillon 14). This is a problematic reading because it ignores what is expressed in so many of Tarkovsky’s films, that for love, “the meaning . . . is in sacrifice” (Tarkovsky 40). Hari’s ability to make the ultimate sacrifice is a sign of hope because it proves that even on the space station, and therefore in the soulless contemporary society, it is possible to act as a human, and that this gaining of humanity is possible through interaction with art. That it requires a non-human (Hari) to instruct a human (Kris) on being human adds a layer of irony to the situation. Tarkovsky’s intention seems similar to how he describes a transcendent moment from Cries and Whispers (1973, Ingmar Bergman) where Bach is also used, that “even this illusory flight gives the audience the possibility of catharsis, of that spiritual cleaning and liberation which is attained through art” (Tarkovsky 192).
The “death” of the replica Hari is then followed by a finale in which Kris seems to return to the dacha and kneel for his father’s blessing in a pose taken from Rembrandt’s “Return of the Prodigal.” This suggests that Kris, now emotionally resurrected and painfully aware of the new Hari’s absence, has embraced the humanistic beliefs of his father which he had been skeptical about in the opening section. This is a replica Kris on a replica dacha, like the original dacha an island in an otherwise completely different landscape, in this case the Solaris ocean. It is also another imperfect replica, as the water does not ripple and the rain falls inside the dacha instead of outside. But it still reflects the transformed Kris. As Tarkovsky says, Kris “has been recreated by the Ocean – the materialization of his homesickness has been taken from him and reconstructed on the planet” (Gianvito 72). This is another major change from the novel, where Kris travels to the planet and is left standing on an indifferent ocean, a consciousness he will never connect with (Lem 211).
Rembrandt’s “Return of the Prodigal”
According to Tarkovsky, the controlling idea of Solaris is that “human beings have to remain human beings, even if they find themselves in inhumane conditions” (Gianvito 136). In terms of the plot, these inhumane conditions are on a distant planet, but what the film tells us is that today we live in a time where the natural world has been paved over by a modern, constructed one and there has been a “total destruction in people’s awareness of all that goes with a conscious sense of the beautiful” (Tarkovsky 42). In this situation, it is through interaction with art (which, as mentioned, deals with “realizing the idea of love”) that modern man can remain human. Solaris is about the world Tarkovsky saw around him.
Works Cited
Andrei Tarkovsky Interviews. Ed. John Gianvito. Jackson, MS: University Press of Mississippi, 2006.
Dillon, Steven. The Solaris Effect: Art & Artifice in Contemporary American Film. Austin, TX: University of Texas Press, 2006.
Fujiwara, Chris. “Solaris.” Cineaste 29.3 (Summer 2003): 51.
Hyman, Timothy. “Solaris.” Film Quarterly 29.3 (Spring 1976): 54-8.
Johnson, Vida T., and Graham Petrie. The Films of Andrei Tarkovsky: A Visual Fugue. Bloomington, IN: Indiana University Press, 1994.
Lem, Stanislaw. Solaris. New York, NY: Berkley Publishing Corp., 1971.
Tarkovsky, Andrei. Sculpting in Time. Austin, TX: University of Texas Press, 1987.
Thomson, C. Claire. “It’s All About Snow: Limning the Post-Human Body in Solaris (Tarkovsky, 1972) and It’s All About Love (Vinterberg, 2003).” New Cinemas: Journal of Contemporary Film 5.1 (2007): 3-21.
Totaro, Donato. “Time and the Film Aesthetics of Andrei Tarkovsky.” Canadian Journal of Film Studies 2.1 (1992): 21-30.
David Hanley has a BFA and MA in Film Studies from Concordia University and is currently pursuing a PhD in Canadian Studies at Carleton University, where he has also taught in the Department of Film Studies. As well as being a frequent contributor to Offscreen, he has had pieces published by the University of Toronto Quarterly, Canadian Journal of Irish Studies, Synoptique, The Projector, Isis, and Nuacht. He also contributed several entries to the Historical Dictionary of South American Cinema by Dr. Peter H. Rist (Rowman & Littlefield, 2014), and chapters to the books Reclaiming 1940s Horror Cinema: Traces of a Lost Decade (Lexington Books, 2015) and The Spaces and Places of Canadian Popular Culture (Canadian Scholars’ Press, 2019). He has been a programmer for Cine Gael of Montreal’s annual series of contemporary Irish films since 2011.
Volume 15, Issue 1 / January 2011
Essays
andrei tarkovsky science fiction
|
51
|
Details - A generic classification of the Nearctic sawflies (Hymenoptera, Symphyta) - Biodiversity Heritage Library
===============
A generic classification of the Nearctic sawflies
Title
A generic classification of the Nearctic sawflies (Hymenoptera, Symphyta)
Related Titles
Series:Illinois biological monographs, v. 15,no. 2
By
Ross, Herbert H. (Herbert Holdsworth), 1908-1978
Type
Book
Material
Published material
Publication info
Urbana, Ill, 1937
Notes
"Contribution no. 188 from the Entomological laboratories of the University of Illinois, in cooperation with the Illinois State natural history survey."
Descriptive letterpress on versos facing the plates.
On cover: University of Illinois bulletin. vol. XXXIV, no. 94. July 23, 1967.
"An elaboration of a thesis submitted...for the degree of doctor of philosophy in entomology in the Graduate school of the University of Illinois in 1933."
Subjects
Hymenoptera , North America , Saw-flies
Call Number
QL568.T3 R725
Classification
595.793
Language
English
Identifiers
DOI:
OCLC: 771487
Wikidata:
Find in a local library
Cite This Publication
Volumes
v.15:no.2 (1937)
Holding Institution:
University Library, University of Illinois Urbana Champaign
Sponsor:
University of Illinois Urbana-Champaign
Date Scanned:
03/01/2011
View Volume
Copyright & Usage:
Copyright Status: Not provided. Contact Holding Institution to verify copyright status.
Download: All, JP2, OCR, PDF
|
52
|
cosmology - Cosmological redshift vs doppler redshift - Astronomy Stack Exchange
===============
Join Astronomy
By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy.
Sign up with Google
OR
Email
Password
Sign up
Already have an account? Log in
Skip to main content
Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
Loading…
Tour Start here for a quick overview of the site
Help Center Detailed answers to any questions you might have
Meta Discuss the workings and policies of this site
About Us Learn more about Stack Overflow the company, and our products
current community
Astronomy helpchat
Astronomy Meta
your communities
Sign up or log in to customize your list.
more stack exchange communities
company blog
Log in
Sign up
Astronomy
Home
Questions
Unanswered
AI Assist Labs
Tags
Chat
Users
Teams
Ask questions, find answers and collaborate at work with Stack Overflow for Teams.
Try Teams for freeExplore Teams
3. Teams
4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Hang on, you can't upvote just yet.
You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it?
Instead, you can save this post to reference later.
Save this post for later Not now
Cosmological redshift vs doppler redshift
Asked 5 years, 11 months ago
Modified 8 months ago
Viewed 4k times
I'm reading Harrison's "Cosmology: Science of the universe" because Harrison focuses on the distinction between cosmological redshift (he calls it expansion redshift) and the Doppler redshift.
He states that "they [Doppler redshifts] are produced by peculiar and not by recession velocities" and "[expansion redshifts] are produced by recession and not peculiar velocities".
I understand the concepts of both kinds of redshifts but have a hard time understanding this strict separation. Please correct me, where/if I am wrong:
Suppose a galaxy has no peculiar movement. This means that its position will stay (approximately) the same in comoving coordinates. In fact though ( proper distance), it IS moving away from Earth with recession velocity V, caused by the expansion of the universe. So it should have a Doppler effect based on this recession velocity AND a cosmological (expansion) redshift should also take effect because the light gets stretched with the expansion of the universe. Even though the recession velocity is not due to a peculiar movement, it means that the source of light is moving away from the observer and hence the light should be redshifted and on top the light gets redshifted on its way through expanding space.
Please correct me or tell me if I am right or wrong, I have spent a lot of time reading but still don't fully understand this. Thanks.
cosmology
expansion
redshift
doppler-effect
asked Sep 15, 2019 at 12:56
user120112
1 An idea that I have, is that my assumption, that recession velocities cause a Doppler redshift is wrong and that they cannot be compared to other "ordinary" or peculiar velocities. Maybe recession velocities don't cause Doppler redshifts because the galaxy moving away due to recession velocity is not moving compared to its environment. Objects in a Doppler scenario (moving light source on Earth or galaxy with peculiar motion), are moved against their environment (or their environment is moved, if in the objects rest frame). Maybe this solves my problem. I'd love to discuss/get feedback. –user120112 Commented Sep 16, 2019 at 2:02
This article might be of interest arxiv.org/abs/astro-ph/0310808v2 –usernumber Commented Jan 13, 2020 at 10:40
1 The current accepted answer by pela is incorrect. Davis and Lineweaver (linked in the previous comment) are also incorrect on this point, unfortunately, though I like their paper in general. Since the upshot of the paper is that even famous physicists can be and have been wrong about basic properties of FLRW cosmology, I hope you can accept that there may be lingering misconceptions that they didn't correct, and may even still suffer from themselves. This is one of them. Please see my comments on pela's answer, my answer, and (for more details) the other answers linked from it. –benrg Commented Sep 1, 2020 at 0:21
They're the same thing. If you fix the Inconsistency between Velocity vs Redshift and Scale Factor vs Time plots you will see why. –user57748 Commented Jul 16, 2024 at 11:07
Simplified expansion simulation with arbitrary scale factor function / Time reversal symmetry of Doppler expansion –user59549 Commented Nov 11, 2024 at 9:55
4 Answers
After considering @benrg's comments, I realize that my first answer contained too strong statements about the relation between the two redshifts. I try here to moderate my answer, but you might want to accept their answer instead.
It is common to think of the two redshifts as having nothing to do with each other. Doppler shifts arise when the observer and/or the emitter moves through space, whereas the cosmological redshift can be derived considering stationary emitters and stationary observers in an expanding space.
Because the cosmological redshift doesn't involve movement through space, it is often considered completely different from the Doppler. However, it is also possible to derive the cosmological redshift by considering it as infinitely many infinitesimally small Doppler shift (e.g. Lewis 2016). I admit that I'm not well enough versed in general relativity to be certain about my statements, but just because an infinitesimally small patch of spacetime is flat doesn't necessarily mean that infinitely many such patches add up to be flat. However, as @benrg says, in GR there is only one redshift.
Different or the same?
The reason I think it makes sense to view the Doppler shift and the cosmological redshift as two separate mechanisms is the following:
In principle you could have a universe (non-capitalized, since it's not our universe, the Universe) that was static when a distant galaxy emitted a photon, then at some point expanded quickly by a factor of 2, and then again is static. In this hypothetical case, the observer would still measure the photon to have been redshifted by a factor of 2 (i.e. $z = 1$).
That this is true can bee seen from considering the mathematical derivation of the cosmological redshift (see e.g. here) which involves an integral, the result of which only depends on the initial and the final state, not on the expansion history.
In contrast, if you and your friend stand still with respect to each other while your friend shines her flashlight at you, then run away from each other with a relative velocity of 0.6c, then stand still before you receive the light (i.e. analogously to the hypothetical universe above), then you would measure no redshift; you wouldn't measure the special relativistic Doppler shift of $z + 1 = \sqrt{\frac{1 + v/c}{1 - v/c}} = 2$ that you would if you were receding from each other while either emitting or observing.
In the real Universe, galaxies move through space (i.e. they change their comoving coordinate $\chi$), and space expands (i.e. the scale factor $a$ evolves). If the physical distance to a galaxy is
$$d = a\chi,$$
then the change in this distance gives their total velocity wrt. us, and is obtained through differentiation:
$$v_\mathrm{tot} = \dot{a}\chi + a\dot{\chi} \equiv v_\mathrm{rec} + v_\mathrm{pec},$$
where dots denote differentiation with respect to time, and the two terms have been identified as the cosmological recession velocity, and the peculiar, "normal" velocity. Each of these terms give rise to a redshift, but through two very different mechanisms. Only the latter term is called a Doppler shift.
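A tiny worked example of that decomposition, with made-up illustrative numbers (not from the answer):

```python
# Hubble's law for the recession term: v_rec = H_0 * d (all numbers below are assumptions).
H0 = 70.0        # km/s/Mpc, assumed present-day Hubble constant
d = 100.0        # Mpc, assumed proper distance to the galaxy
v_pec = 600.0    # km/s, assumed peculiar velocity
v_rec = H0 * d
print(v_rec, v_rec + v_pec)   # 7000.0 km/s recession, 7600.0 km/s total
```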
edited Sep 5, 2020 at 7:36
answered Sep 16, 2019 at 12:23
pela
1 Up vote because the reference to semantics. For many cosmological red shift is a doppler shift. I think focusing on comoving and proper coordinates remove the confusion or whatever the need to call things a way or another. –Alchimista Commented Sep 17, 2019 at 10:29
1 @user120112 Yes, exactly! :) –pela Commented Sep 17, 2019 at 15:08
1 @SpaceBread Yes, maybe it's a bit confusing to write it like this, but it's just Hubble's law. Without (or with negligible) peculiar velocities, $v_\mathrm{rec} = \dot{a}\chi = \dot{a}d/a \equiv H_0 d$. –pela Commented Sep 17, 2019 at 15:29
1 @SpaceBread I think I misunderstood your comment. The "$\dot{a}$" doesn't, in general, refer to the change in $a$ at the time of emission, but to the general value at the time you wish to know the recession velocity. So in the hypothetical universe that is static except for a brief, intermediate expansion, you have $\dot{a}(t=0)=0$ and $\dot{a}(t=t_\mathrm{em})=0$, meaning that $v_\mathrm{rec}=0$ at $t=0$ and at $t_\mathrm{em}$. In our Universe, however, $\dot{a}(t=0)=H_0\neq 0$. –pela Commented Sep 19, 2019 at 11:03
1 There's only one kind of redshift in GR: the ratio of the distances between differentially separated null geodesics at emission and detection. There's no generally covariant way to separate it into "cosmological" and "relative motion" redshift; it's all part and parcel of the same thing. There's also no way to define a covariant concept of "expanding space" in GR (e.g. there's no space-expansion tensor that's nonzero where space is expanding and zero where it isn't). Expanding space is just a particular coordinate description of a manifold that can be described in other ways. See my answer. –benrg Commented Nov 11, 2019 at 4:59
There's only one kind of redshift in general relativity. The cosmological redshift, gravitational redshift, and special-relativistic redshift formulas are special cases of it, which apply to spacetimes with certain symmetries.
If you put approximate Minkowski coordinates on a patch of spacetime that's small enough to be approximately flat, you'll find that objects in that patch that are moving with the Hubble flow are moving away from each other in the special-relativistic sense with respect to those coordinates. If you use the special-relativistic redshift formula to calculate the redshift between objects A and B on that patch, then do the same in an approximately flat patch containing B and C, and keep doing that until you get to a very distant object Z, and multiply all those redshift factors together, you'll get the correct cosmological redshift between A and Z, up to an error arising from the deviation of each patch from flatness. In the limit of very small patches, this becomes exact.
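A small numeric sketch of this multiplicative picture (my own illustration, assuming a toy matter-dominated scale factor a(t) = t^(2/3)): accumulating the infinitesimal Doppler factors 1 + H dt along the photon's path reproduces 1 + z = a(t_obs)/a(t_emit).

```python
a = lambda t: t ** (2 / 3)      # toy scale factor (an assumption for illustration)
H = lambda t: (2 / 3) / t       # H(t) = a'(t) / a(t) for this choice

t_emit, t_obs, steps = 0.2, 1.0, 200_000
dt = (t_obs - t_emit) / steps

doppler = 1.0
for i in range(steps):
    t = t_emit + (i + 0.5) * dt
    doppler *= 1.0 + H(t) * dt  # chain of small special-relativistic Doppler factors

print(doppler)                  # ~2.924
print(a(t_obs) / a(t_emit))     # 1 + z = a(obs)/a(emit), also ~2.924
```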
So the answer to your question is that cosmological redshift and redshift due to relative motion don't add together because they're the same thing. Adding them would count the same redshift twice.
It's an extremely common misconception, found in many textbooks and even in Davis and Lineweaver, that there is some sort of fundamental difference between ordinary relative motion and "expansion of space" in general relativity. In reality, there is no generally covariant way to distinguish them. Spacetime is just a manifold, and worldlines are just worldlines. As a (quite close) analogy, consider lines of constant longitude on a globe. On a local map (small enough that there's negligible distortion when flattening it), they converge to a point at the poles and there's a nonzero angle (angle ~ rapidity) between them everywhere except near the equator. But you could also say that they are "at rest" at fixed longitudes but the metric distance between them increases as you move away from the poles. These descriptions are equivalent. You can calculate the coordinates of a rhumb line in the latter picture by integrating the reciprocal of the latitudinal scale factor, just as in cosmology. If you look at the distance between rhumb lines of the same slope at different latitudes, you'll find that it grows in proportion to the latitudinal scale factor, just like cosmological redshift. This does not mean the distance metric is expanding in any objective sense. It's just a simple consequence of the global symmetries of the manifold.
I've written some previous answers to similar questions that go into more mathematical detail. Here's one; here's another.
answered Nov 11, 2019 at 4:24 by benrg
There are several sorts of redshift, and they are not always easy to separate from one another. First there is cosmological redshift; these redshifts tend to be so high that they could not be caused by anything else. Then there are gravitational redshifts, which, when they originate with distant quasars having a mass equivalent to trillions of suns, can become entangled with their cosmological redshifts. We also have redshift (sometimes blueshift) arising from the proper motion of the galaxy in question. These red- or blueshifts tend to mask the cosmological redshift of nearby galaxies, which is in any case trivial. The Andromeda galaxy, for example, is slightly blueshifted, as it is coming towards us and is 'only' 2.5 million light years away. The distant quasars I referred to are supermassive black holes many billions of light years away.
answered Sep 15, 2019 at 20:18 by Michael Walsby
4 This doesn’t address the question; also, quasars do not have masses of “trillions of suns”. –Peter Erwin Commented Sep 16, 2019 at 0:20
I think the answers so far are predicated on a flat, that is Euclidean, universe with no curvature.
If the universe has curvature it is quite possible that the direction of expansion is perpendicular to the three spatial dimensions. In this case there will be no cosmological Doppler effect.
In addition it seems to be assumed that redshift due to gravitational time dilation is a peculiar or local effect only, which might be true for a flat universe. But a curved universe can have gravitational time dilation on a cosmological scale.
In a curved universe it seems possible that all of cosmological redshift is due to gravitational time dilation and none is due to the Doppler effect. This is the case for the FLRW curvature-only metric.
I admit that the full reasoning behind this has not been peer reviewed. I would very much welcome such a review.
answered Nov 30, 2024 at 19:22 by John Hobson
arXiv:2304.09619v1 [math.GR] 19 Apr 2023
MEASURE DOUBLING OF SMALL SETS IN SO(3 , R)
YIFAN JING, CHIEU-MINH TRAN, AND RUIXIANG ZHANG
Abstract. Let SO(3, R) be the 3D-rotation group equipped with the real-manifold topology and the normalized Haar measure μ. Resolving a problem by Breuillard and Green, we show that if A ⊆ SO(3, R) is an open subset with sufficiently small measure, then
μ(A²) > 3.99 μ(A).
We also show a more general result for the product of two sets, which can be seen as a Brunn–Minkowski-type inequality for sets with small measure in SO(3, R).
1. Introduction
1.1. Results and backgrounds. Throughout, let SO(3, R) be the 3D-rotation group, or more precisely,
SO(3, R) := {Q ∈ M_{3,3}(R) : QQ^T = Q^T Q = I₃, det(Q) = 1}
with M_{3,3}(R) the set of (3 × 3)-matrices with real coefficients, Q^T the transpose of Q, I₃ the identity (3 × 3)-matrix, and the group operation on SO(3, R) given by matrix multiplication. As a consequence, SO(3, R) can be identified with a Euclidean closed subset of R⁹, which gives it a real-manifold topology. It is well known that the group SO(3, R) is compact and connected with respect to this topology. In particular, it has a normalized Haar measure μ. For A, B ⊆ SO(3, R), we are interested in their product set AB := {ab : a ∈ A, b ∈ B}. We write A² instead of AA, and define the k-fold product A^k similarly. The following question was described in Green's list of open problems [Gre], where it was also attributed to discussions with Breuillard:
When A ⊆ SO(3 , R) is open and has sufficiently small measure, is μ(A2) > 3.99 μ(A)?
It was also remarked in [Gre] that, if true, this would be the best possible, as seen by considering small neighbourhoods of a 1-dimensional subgroup. (Indeed, the construction in Section 3 yields such open A ⊆ SO(3, R) with 3.99 μ(A) < μ(A²) < 4μ(A) and μ(A) arbitrarily small, so we cannot replace 3.99 by 4.) In less precise form, the question traces back to the much earlier work of Henstock and Macbeath [HM53], where they proposed the problem of determining minimal doubling in nonabelian locally compact groups. From this angle, SO(3, R) is of interest as it is the first nontrivial compact and connected case. Our main result answers the question by Breuillard and Green positively:

Theorem 1.1. For all ε > 0, there is δ > 0 such that if A ⊆ SO(3, R) is an open subset with μ(A) < δ, then
μ(A²) > (4 − ε) μ(A).
2020 Mathematics Subject Classification. Primary 20G20; Secondary 43A75, 22E30, 03C20, 11B30. YJ was supported by Ben Green’s Simons Investigator Grant, ID:376201. RZ was supported by the NSF grant DMS-2207281, NSF CAREER Award DMS-2143989, and the Alfred P. Sloan Foundation.
Our proof, in fact, gives the same conclusion for all compact semisimple Lie groups G (Theorem 10.4), but the inequality is not sharp unless G contains SO(3, R) as a closed subgroup. Nevertheless, this provides a sharp contrast with compact and connected Lie groups which are not semisimple. There, the lower bound μ(A²) ≥ min{1, 2μ(A)} given by the Kemperman inequality cannot be improved. For open subsets of SO(3, R) with very small measure, Theorem 1.1 improves the measure expansion gap by the first two authors [JT22], which says that there is a constant η > 0 such that
μ(A²) ≥ min{1, 2μ(A) + η μ(A) |1 − 2μ(A)|}
whenever G is a compact semisimple Lie group and A ⊆ G is open. Note that the construction in Section 3 mentioned earlier also provides open A ⊆ SO(3, R) with measure close to 1/2 such that 2μ(A) < μ(A²) < 2.01 μ(A). Hence, the condition that A has sufficiently small measure is necessary, and Theorem 1.1 does not provide information about general open A ⊆ SO(3, R).
Breuillard and Green studied the aforementioned question in connection with product theorems in groups of Lie type over finite fields [Hel08, BGT11, PS16] and results on approximate groups [Hru12, BGT12]. While not stated explicitly, subgroups and their neighborhoods are expected to be the explanation for small doubling in SO(3, R), just as they are in the abelian settings of additive combinatorics. Thus, behind the question by Breuillard and Green is the challenge to generalize results in additive combinatorics to nonabelian groups. The result in this paper and the authors' earlier work on the nonabelian Brunn–Minkowski inequality [JTZ21] are currently the only product-type theorems with sharp bounds. Recall that, for R^n equipped with the usual Lebesgue measure λ, the Brunn–Minkowski inequality tells us that if X, Y ⊆ R^n are open, then
λ(X + Y)^{1/n} ≥ λ(X)^{1/n} + λ(Y)^{1/n},
where we set X + Y = {x + y : x ∈ X, y ∈ Y}.
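As a quick illustration (ours, not from the paper) of the classical inequality just quoted, one can check it numerically for axis-aligned boxes in R², where the Minkowski sum is again a box:

```python
# A quick numeric check (ours) of Brunn-Minkowski in R^2 for axis-aligned boxes,
# where the Minkowski sum X + Y is again a box with summed side lengths.
def area(widths):
    a = 1.0
    for w in widths:
        a *= w
    return a

X = (1.0, 2.0)                                   # a 1 x 2 box
Y = (3.0, 1.0)                                   # a 3 x 1 box
X_plus_Y = tuple(x + y for x, y in zip(X, Y))    # a 4 x 3 box

n = 2
lhs = area(X_plus_Y) ** (1 / n)                  # sqrt(12) ~ 3.464
rhs = area(X) ** (1 / n) + area(Y) ** (1 / n)    # sqrt(2) + sqrt(3) ~ 3.146
assert lhs >= rhs
print(lhs, rhs)
```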
For SO(3, R), our proof of Theorem 1.1 also yields the following more general asymmetric result, which can be seen as a Brunn–Minkowski type inequality for SO(3, R).

Theorem 1.2. For all ε and N, there is c = c(ε, N) such that whenever A, B ⊆ SO(3, R) are open, 0 < μ(A), μ(B) < c, and μ(A)/N < μ(B) < N μ(A), we have
μ(AB)^{\frac{1}{2}+ε} ≥ μ(A)^{\frac{1}{2}+ε} + μ(B)^{\frac{1}{2}+ε}.
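As a consistency check (ours, and contingent on reading the exponent in the display above as 1/2 + ε), the spherical-cap family constructed in Section 3 satisfies this inequality once its measure is small enough, while at ε = 0 it just fails, matching the sharpness of the constant 4 in Theorem 1.1:

```python
# A consistency check (ours, and contingent on reading the exponent as 1/2 + eps):
# the spherical-cap sets of Section 3 have mu(A) = (1 - cos theta)/2 and
# mu(A^2) = 4*mu(A)*(1 - mu(A)).  With A = B the inequality holds for small mu(A)
# when eps > 0, and just fails at eps = 0.
def holds(eps, mu):
    lhs = (4 * mu * (1 - mu)) ** (0.5 + eps)
    rhs = 2 * mu ** (0.5 + eps)
    return lhs >= rhs

for mu in [1e-2, 1e-4, 1e-6]:
    print(mu, holds(0.05, mu), holds(0.0, mu))   # True, False for each small mu
```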
In Section 3, we will propose conjectural Brunn–Minkowski type inequalities for other compact connected Lie groups. The proofs of our theorems, in fact, provide a reduction of these results to the nonabelian Brunn–Minkowski conjecture for noncompact groups and a measure expansion gap result; see Remark 10.5. We end this background discussion by proposing a conjectural strengthening of Theorem 1.1:
Conjecture 1.3 (Strong Breuillard–Green Conjecture) . If A ⊆ SO(3 , R) is open, then
μ(A2) ≥ min {1, 4μ(A)(1 − μ(A)) }.
Moreover, if μ(A) < 1/2, the equality happens if and only if A is of the form
{g ∈ SO(3, R) : ∠(u, gu) < arccos(1 − 2μ(A))}
with u ∈ R³ a unit vector, gu its image under the g-action, and ∠(u, gu) the angle between them.
We will discuss the construction behind this conjecture and generalizations to simple Lie groups with finite center in Section 3.
1.2. Overview of the proof. The central idea of the proofs of Theorem 1.1 and Theorem 1.2 is to link the compact setting of SO(3, R) to the setting of noncompact Lie groups using ultraproducts and the Massicot–Wagner version [MW15] of the so-called Hrushovski Lie model theorem [Hru12]. The compact setting differs from the noncompact one in that there is a useful nonabelian Brunn–Minkowski inequality in the latter; this was proven by the authors [JTZ21] using the Iwasawa decomposition to reduce the problem to lower dimensions. For a compact semisimple Lie group, the Iwasawa decomposition returns the group itself. For concreteness, towards a contradiction, let us assume that the conclusion of Theorem 1.1 is false for ε = 0.01. Then there is a sequence (A_n) of open subsets of SO(3, R) such that
μ((A_n)²) < 3.99 μ(A_n)  and  lim_{n→∞} μ(A_n) = 0.
Let μ_n be the normalization of μ on A_n (i.e., setting μ_n(X) = μ(X)/μ(A_n) for measurable X ⊆ SO(3, R)); we get
μ_n(A_n) = 1,  μ_n((A_n)²) < 3.99 μ_n(A_n),  and  lim_{n→∞} μ_n(SO(3, R)) = ∞.
Taking a "suitable limit" of the sequence of triples (SO(3, R), A_n, μ_n) we arrive at a triple (G_∞, A_∞, μ_∞) where G_∞ is a "pseudo-compact" group, A_∞ ⊆ G_∞ is "pseudo-open", μ_∞ is a "pseudo-Haar" measure, and
μ_∞(A_∞) = 1,  μ_∞((A_∞)²) < 3.99 μ_∞(A_∞),  and  μ_∞(G_∞) = ∞.
The suitable-limit notion we use here is the ultraproduct from model theory, and one can think of it as taking the average of (SO(3, R), A_n, μ_n). Notions like "pseudo-Haar" must also be carefully defined, but we will not do that in this overview. Despite what the names "pseudo-compact" and "pseudo-open" might suggest, there is no automatic topological data on G_∞. However, the measure-theoretic data on G_∞ is enough to apply tools from the study of approximate groups to modify A_∞ "slightly" and construct a "good" surjective group homomorphism
π : ⟨A_∞⟩ → L
where L is a noncompact, unimodular (i.e., left Haar measures are also right-invariant), connected Lie group. Such a π is often called a Lie model. We will postpone explaining this point for now and proceed with the argument. Using ideas from real/harmonic analysis, also to be revisited later, one shows that
λ(X²) < 3.99 λ(X)
with X = π(A_∞) and λ a left (and hence right) Haar measure on L. Let d be the dimension of L, and m the maximum dimension of a compact subgroup of L. The nonabelian Brunn–Minkowski conjecture [JTZ21, Conjecture 1.4] predicts that
λ(X²) ≥ 2^{d−m} λ(X).
This is still not known in general, but we do know that
λ(X²) ≥ 2^{d−m−⌊(d−m)/3⌋} λ(X).
Applying this to our case, we get d − m < 2, which implies d − m = 1 because L is noncompact. In other words, L has a compact subgroup H of codimension 1. By a standard Lie-theoretic argument, we learn that H must be a normal subgroup of L, and L/H must be an isomorphic copy of R.
Now, let I ⊆ R be the interval (0, 1). Then I + I = {x + y : x, y ∈ I} is the interval (0, 2), which has exactly twice the Lebesgue measure of I. Let ρ : ⟨A_∞⟩ → R denote the composition φ ∘ π of π : ⟨A_∞⟩ → L with the quotient map φ : L → R, and set B_∞ = ρ^{−1}(I). Then from the fact that π is well-behaved, we learn that
μ_∞((B_∞)²) = 2 μ_∞(B_∞).
As G_∞ is the limit of copies of SO(3, R), and from the fact that π : ⟨A_∞⟩ → L is "good", we get B_n ⊆ SO(3, R) with very small measure such that
μ(B_n²) < (2 + 10^{−12}) μ(B_n).
This is known to be impossible by the measure expansion gap for semisimple Lie groups developed by the first two authors in [JT22]. This completes our proof via "bootstrapping".
Approximate groups and Lie models. We now come back to an earlier point where we "slightly" modify A_∞ and obtain a Lie model π : ⟨A_∞⟩ → L. Conceptually, such a homomorphism reflects the expectation that sets with small doubling in any setting are supposed to have their origin in Lie groups. However, the small doubling condition μ_∞(A_∞²) < 3.99 μ_∞(A_∞) is not strong enough to guarantee the existence of such a π in the literal sense. One must go around this problem by first using a technique by Tao [Tao15] to construct an approximate subgroup S_∞ ⊆ G_∞ closely related to A_∞. (Recall that S_∞ ⊆ G_∞ is called an approximate group if id_{G_∞} ∈ S_∞, S_∞ = S_∞^{−1}, and S_∞² is covered by finitely many translates of S_∞.) Under an assumption called definable amenability, a version of Hrushovski's Lie model theorem by Massicot and Wagner allows the construction of a "good" surjective group homomorphism π : ⟨S_∞⟩ → L where L is a locally compact group (but not yet a connected Lie group). Fortunately, one can modify A_∞ and S_∞ to make them "pseudo-semialgebraic" and arrange that the definability condition is satisfied. Next, we use the Gleason–Yamabe theorem [Gle52, Yam53] to obtain an open subgroup L′ of L, and a normal compact subgroup K of L′ such that L′/K is a connected Lie group. Replacing S_∞ with S_∞⁴ ∩ π^{−1}(L′), the locally compact group L with the Lie group L′/K, and π with φ ∘ π restricted to ⟨S_∞⁴ ∩ π^{−1}(L′)⟩, we arrange that L is a connected Lie group.
We cannot completely replace A_∞ by S_∞ because the latter might have doubling rate much larger than 3.99, even though still bounded. In a mock version of the actual argument, we construct A′_∞ ⊆ ⟨S_∞⟩ from A_∞ and S_∞ such that μ_∞((A′_∞)²) < 3.99 μ_∞(A′_∞). The set A′_∞ is obtained by taking the intersection of ⟨S_∞⟩ and a random translate of A_∞ with respect to a carefully chosen probability measure. Noting that ⟨A′_∞⟩ = ⟨S_∞⟩ because of the connectedness of L, we replace the original A_∞ by this A′_∞. The actual argument is slightly more complicated than the above mock version. Instead of getting A′_∞ as above, we get two sets Ã_∞ and Ã′_∞ satisfying an asymmetric small doubling condition in the form of a Brunn–Minkowski-type inequality. Therefore, even if we are only interested in Theorem 1.1, the proof essentially requires handling Theorem 1.2 as well.
We end this part by remarking that the argument via Hrushovski's Lie model theorem has deep roots within model theory. The origin of such an approach traces back to Robinson's nonstandard analysis [Rob96] and is related to other results like van den Dries–Wilkie's reproof of Gromov's theorem using ultraproducts [vdDW84], the body of work surrounding Pillay's conjecture [HPP08], and Goldbring's proof of Hilbert's 5th problem for local groups [Gol10].
Density of definable sets over Lie models. We next briefly indicate the ideas from real/harmonic analysis used in deducing λ(X²) < 3.99 λ(X) when we have the Lie model π : ⟨A_∞⟩ → L, with X = π(A_∞) and λ a Haar measure on L. If ⟨A_∞⟩ were a locally compact group, the pseudo-Haar measure μ_∞ and the Haar measure λ on L could be linked together by a Haar measure ν on ker π and a quotient integral formula. This would allow us to relate μ_∞(A_∞) and μ_∞(A_∞²) with the measures of the images λ(X) and λ(X²) using the density function f_A(x) = ν(ker π ∩ x^{−1}A). The argument then proceeds using a "spillover" technique in [JT22] and [JTZ21].
There is no similar measure on the kernel ker π of our Lie model. The Radon–Nikodym theorem does imply that there is a density function defined almost everywhere linking μ_∞ and λ. This is still not good enough for us, for the following reason. We need to study the relationship between the density functions f_{A_∞} and f_{A_∞²}. As A_∞² is generally an uncountable union of translates of A_∞, the bad behavior on a null set of points might contribute too much in a product, making such a relationship unclear. In our approach, we approximate f_{A_∞} by a family of better behaved functions (f^ε_{A_∞})_{ε ∈ R>0}, where f^ε_{A_∞} is obtained by considering the average behavior of f_{A_∞} in a suitable ε-ball around the point under consideration. The Lebesgue differentiation theorem for the Lie group L implies the convergence of (f^ε_{A_∞})_{ε ∈ R>0} to f_{A_∞} almost everywhere. We do the same for f_{A_∞²}. It turns out that the (f^ε_{A_∞})_{ε ∈ R>0} are well-behaved enough to replace the role played by the usual density function and allow us to obtain the desired conclusion. We remark that the Lebesgue differentiation theorem over manifolds we use is a consequence of the weak-L¹ estimates for the Hardy–Littlewood maximal function from harmonic analysis.

1.3. Structure of the paper. The paper is organized as follows. Section 2 includes some facts about Haar measures, Riemannian metrics, and Lie groups, which will be used in the subsequent parts of the paper. In Section 3, we construct examples in SO(3, R) to show that for every c ∈ (0, 1) there is a set A with μ(A) = c and μ(A²) = 4μ(A)(1 − μ(A)). We also generalize this construction to other simple Lie groups and make some more general conjectures. Section 4 allows us to find sets with doubling smaller than 4 − ε in the ultraproduct group. This step corresponds to the "taking limit" step in the overview. In Section 5, we will find a definably amenable open approximate group that is commensurable to the original set with small doubling in the ultraproduct group. In Section 6, we will find a subgroup of the ultraproduct group and then go down to a connected Lie model via the Massicot–Wagner version of Hrushovski's Lie model theorem. In Section 7, we will reconstruct sets with doubling smaller than 4 − ε in the subgroup of the ultraproduct group obtained in Section 6 from the approximate group. In Section 8, we will introduce a density function to connect the doubling of sets in the ultraproduct group and the doubling of their projections in the Lie model. Section 9 allows us to produce a set with doubling smaller than 4 − ε in the connected Lie group using the density function. In Section 10, we prove the main theorem by pulling back sets with doubling close to 2 in the Lie model and obtaining a contradiction to the measure growth gaps.

1.4. Notation and convention. From now on, k and l are in the ordered ring Z of integers, m and n are in the ordered semiring N = {0, 1, . . .} of natural numbers. By a constant, we mean an element in the positive cone R>0 of the ordered ring R of real numbers.
We will let G, with possible decorations, denote a multiplicative group, possibly with more data (topology, differentiable structures, etc.). By a measure on G, we mean a nonnegative measure μ : Σ → R_{≥0} on a σ-algebra Σ of subsets of G. If a set A is in this σ-algebra Σ, we will say that A is measurable with respect to μ. By a locally compact group, we mean a topological group which is locally compact as a topological space. Similar conventions apply to compact groups, connected groups, etc. A measure on a locally compact group is always assumed to be a Haar measure. By a measurable subset of a locally compact group, we mean a set measurable with respect to some Haar measure, equivalently, with respect to the up-to-constant unique complete Haar measure. See Section 2.1 for more details. Given a set A, we write A^n for the n-fold product set of A, that is {a₁ ⋯ a_n | a₁, . . . , a_n ∈ A}. We write A^[n] for the n-dimensional Cartesian product of A, but we still write R^n for the n-dimensional Euclidean space, following the usual notational convention. We use asymptotic notation from harmonic analysis. That is, f ≲ g means f = O(g), and f ∼ g if f ≲ g and g ≲ f. We write f = O_{α₁,...,α_n}(g), f ≲_{α₁,...,α_n} g, and f ∼_{α₁,...,α_n} g when the hidden constant(s) depend on α₁, . . . , α_n.

2. Preliminaries
We recall here a number of standard facts about measures, Riemannian metrics, locally compact groups, and Lie groups. Advanced concepts more directly related to the main argument will be introduced later on.

2.1. Measures and locally compact groups. Recall that a premeasure μ₀ on an ambient set Ω is a nonnegative real-valued function on a Boolean algebra (closed under finite unions and taking relative complements) of subsets of Ω such that the following hold:
(PM1) μ₀(∅) = 0.
(PM2) (σ-additivity) If (A_n) is a sequence of pairwise disjoint sets such that μ₀(A_n) for each n and μ₀(⋃_n A_n) are well defined, then
μ₀(⋃_n A_n) = Σ_n μ₀(A_n).
In other words, the premeasure μ₀ behaves like a measure, except that the collection of sets it applies to might not be a σ-algebra (i.e., also closed under countable unions). We will later need the following fact, which allows us to construct measures from premeasures:
Fact 2.1 (Carathéodory's extension theorem). Suppose μ₀ is a premeasure on an ambient set Ω. Then there is a measure on Ω extending μ₀.
We say that μ is complete if every subset of a μ-null set is measurable. It is well known that there is a smallest complete measure extending μ, which we will refer to as the completion of μ. Let Ω be a topological space. A measure μ on Ω is a Borel measure if the following hold:
(BM1) Every Borel subset (i.e., member of the σ-algebra generated by open sets) of Ω is μ-measurable.
(BM2) μ is the completion of its restriction to the σ-algebra of Borel subsets of Ω.
Again let Ω be a topological space. A Borel measure μ on Ω is an outer Radon measure if the following hold:
(RM1) (Locally finite) Every x ∈ Ω has an open neighborhood with finite measure.
(RM2) (Outer regularity) The measure of a Borel A ⊆ Ω is the infimum of the measures of the open subsets of Ω containing A.
(RM3) (Inner regularity of open sets) The measure of an open U ⊆ Ω is the supremum of the measures of the compact subsets of U.
Suppose G is a group and H ≤ G. A measure μ on the left-coset space G/H is left-invariant if for all measurable A ⊆ G/H and g ∈ G, we have μ(A) = μ(gA). A locally compact group G is a group equipped with a locally compact and Hausdorff topology on its underlying set such that multiplication and inversion are continuous. It is easy to see that when H is a compact subgroup of G, the left-coset space G/H equipped with the quotient topology is also a locally compact and Hausdorff topological space. We have the following fact [DE09, Chapter 1]:

Fact 2.2. Suppose G is a locally compact group, and H ≤ G is compact. Then there is a left-invariant nonzero Radon measure on the left-coset space G/H. Moreover, any two such measures differ by a constant.

In particular, with H = {1_G}, the locally compact group G can be equipped with a left-invariant complete nonzero Radon measure, which is called a left Haar measure. Any two left Haar measures differ only by a positive constant. If a left Haar measure μ on G is right-invariant (i.e., μ(A) = μ(Ag) for all measurable A ⊆ G and g ∈ G), we call it a Haar measure. If one (equivalently, every) left Haar measure on G is right-invariant, we say that G is unimodular. The additive group R^d with the Euclidean topology and compact groups are, in particular, unimodular. Below are some other facts about the Radon measure in Fact 2.2 that we will use.

Fact 2.3. Suppose G is a locally compact group, and H ≤ G is compact. Suppose μ is a left-invariant complete nonzero Radon measure on the left-coset space G/H. Then we have the following:
(1) Open sets have positive measure.
(2) Compact sets have finite measure.
2.2. Riemannian metrics and Lie groups. The material in this section is standard and can be found in any book on Riemannian geometry; see [Lee18], for example. Let M be a smooth manifold. A Riemannian metric g on M assigns, in a smooth fashion, to each point p ∈ M a positive definite inner product g_p on the tangent space T_pM of M at the point p. Here, positive definite means g_p(v, v) ≥ 0 for each v ∈ T_pM, with equality if and only if v = 0. A Riemannian manifold (M, g) is a smooth manifold together with a Riemannian metric g on M. Suppose (M, g) is a Riemannian manifold of dimension n. Let U ⊆ M be a coordinate patch, and x : U → R^n, p ↦ (x₁(p), . . . , x_n(p)) the coordinate function. If K ⊆ U is compact, we define
Vol_g(K) = ∫_{z ∈ x(K)} √(G ∘ x^{−1}(z)) dλ,
where G = det(g_{ij})_{(i,j) ∈ [n]×[n]}, g_{ij} = g(∂_i, ∂_j), and λ is the Lebesgue measure on R^n. Recall that A ⊆ M is measurable if for each coordinate patch U ⊆ M and coordinate function x : U → R^n, the image x(A ∩ U) is measurable. For a measurable A ⊆ M, we can define its volume Vol_g(A) to be the supremum of the volumes of finite unions of compact sets each contained in a coordinate patch. We have the following fact:

Fact 2.4. Vol_g is a complete measure on M compatible with the topology on M. Moreover, Vol_g is independent of the choice of coordinate patches.
Suppose (M, g) is a Riemannian manifold of dimension n, and let L_g(γ) denote the length of an admissible curve γ, defined in the usual way. It is a well-known fact that in a connected Riemannian manifold, every two points p and q are connected by an admissible curve. We define
d_g(p, q) = inf{L_g(γ) : γ is an admissible curve linking p and q}.

Fact 2.5. d_g is a metric compatible with the topology on M.
If d is a metric on M such that d = dg for some Riemannian metric g, we say d is the distance function induced by a Riemannian metric g. The following fact gives us the existence of an invariant Riemannian metric.
Fact 2.6. If a Lie group G acts smoothly and transitively on a smooth manifold M with compact isotropy groups, then there exists a G-invariant Riemannian metric on M.
Suppose (M, g) is an n-dimensional Riemannian manifold. The Ricci curvature is the covariant 2-tensor field defined as the trace of the curvature endomorphism on its first and last indices, and the scalar curvature is the function S defined as the trace of the Ricci curvature. The next fact shows that the volume of an infinitesimal ball is controlled by the scalar curvature.
Fact 2.7. Let (M, g) be an n-dimensional Riemannian manifold with constant scalar curvature S. Then, as r → 0, a ball of radius r at the identity point has volume
(1/n) α_{n−1} r^n · (1 − S r² / (6(n + 2)) + O(r³)),
where α_n = 2^{2k+1} π^k k! / (2k)! when n = 2k with k ∈ Z, and α_n = 2 π^{k+1} / k! when n = 2k + 1 with k ∈ Z.
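As a small symbolic sanity check (ours) of this expansion, one can compare it with the exact geodesic-ball area on the unit 2-sphere, where S = 2 and α₁ = 2π:

```python
# A symbolic sanity check (ours) of Fact 2.7 on the unit 2-sphere (n = 2, S = 2,
# alpha_1 = 2*pi): the exact geodesic-ball area 2*pi*(1 - cos r) should agree with
# pi*r**2*(1 - r**2/12) up to the stated error term.
import sympy as sp

r = sp.symbols('r', positive=True)
exact = 2 * sp.pi * (1 - sp.cos(r))
predicted = sp.pi * r**2 * (1 - r**2 / 12)

print(sp.series(exact, r, 0, 6))   # pi*r**2 - pi*r**4/12 + O(r**6)
print(sp.expand(predicted))        # pi*r**2 - pi*r**4/12
```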
3. Constructions and conjectures

In this section, we will go through in more detail the earlier mentioned example of an open A ⊆ SO(3, R) with measure smaller than 1/2 such that
μ(A²) = 4μ(A) − 4(μ(A))² < 4μ(A).
We will state the relevant conjectures and generalizations for semisimple Lie groups. Before stating the example, we need a couple of lemmas about some basic properties of SO(3, R). Let u, v, and w range over unit vectors in R³, and let α(u, v) := arccos(u·v) be the angle between u and v. Let φ and θ range over R. Use R^φ_u to denote the counter-clockwise
rotation with signed angle φ where the axis and the positive direction (under the right-hand rule) is specified by the unit vector u. We first recall Euler’s rotation theorem:
Fact 3.1. Each nontrivial element g ∈ SO(3, R) is of the form R^φ_u with φ ∈ (0, 2π). Moreover, the set of elements of R³ fixed by such g is span(u) = {λu | λ ∈ R}.
The following lemma is essentially a variation of the above fact.
Lemma 3.2. Let u be a fixed unit vector. Then every g ∈ SO(3, R) is of the form R^θ_v R^φ_u with v a unit vector orthogonal to u. Likewise, every g ∈ SO(3, R) is of the form R^{φ′}_u R^{θ′}_w with w a unit vector orthogonal to u.
Proof. We prove the first assertion. Choose v to be the normal vector of span(u, g(u)). Then v is orthogonal to u, and there is θ such that g(u) = R^θ_v(u). Now, g^{−1} R^θ_v fixes u, so by Fact 3.1, g^{−1} R^θ_v = R^{−φ}_u for some φ ∈ R. Thus, g = R^θ_v R^φ_u.
The second assertion can be obtained by applying the first assertion to g^{−1}.
We will need the following inequality:
Lemma 3.3. Let u be a fixed unit vector. Suppose g₁, g₂ ∈ SO(3, R) are such that α(u, g_i(u)) ≤ π/2 for i ∈ {1, 2}. Then we have α(u, g₁g₂(u)) ≤ α(u, g₁(u)) + α(u, g₂(u)).
Proof. Applying Lemma 3.2, we can write g₂ as R^{θ₂}_v R^{φ₂}_u and g₁ as R^{φ₁}_u R^{θ₁}_w with v and w orthogonal to u. It is easy to see that
α(u, g₁(u)) = α(u, R^{θ₁}_w(u))  and  α(u, g₂(u)) = α(u, R^{θ₂}_v(u)).
On the other hand,
α(u, g₁g₂(u)) = α(u, (R^{θ₁}_w R^{θ₂}_v)(u)).
The triangle inequality in terms of angles gives us
α(u, (R^{θ₁}_w R^{θ₂}_v)(u)) ≤ α(u, R^{θ₂}_v(u)) + α(R^{θ₂}_v(u), (R^{θ₁}_w R^{θ₂}_v)(u)).
As w is orthogonal to u and possibly not to R^{θ₂}_v(u), we have
α(R^{θ₂}_v(u), (R^{θ₁}_w R^{θ₂}_v)(u)) ≤ α(u, R^{θ₁}_w(u)).
The desired conclusion follows.
Lemma 3.4. Let u be a unit vector, ε ∈ (0, π/2), and A = {g ∈ SO(3, R) | α(u, g(u)) ≤ π/2 − ε}. Then
A² = {g ∈ SO(3, R) | α(u, g(u)) ≤ π − 2ε}.
Proof. The inclusion A² ⊆ {g ∈ SO(3, R) | α(u, g(u)) ≤ π − 2ε} is immediate from Lemma 3.3. Now suppose g ∈ SO(3, R) satisfies α(u, g(u)) ≤ π − 2ε. Then Lemma 3.2 yields g = R^θ_v R^φ_u with v orthogonal to u and θ ∈ [2ε − π, π − 2ε]. We can rewrite g = R^{θ/2}_v (R^{θ/2}_v R^φ_u), and each of the two factors moves u by an angle of at most π/2 − ε, so both lie in A. The other inclusion follows.
We need the following facts about the unit sphere:
Fact 3.5. Let S² = {u ∈ R³ : ‖u‖ = 1}. Then there is a complete Radon measure ν on S² satisfying the following conditions:
(1) ν is invariant with respect to the left action by SO(3, R) and ν(S²) = 1.
(2) If X ⊆ S² is the set {u ∈ R³ : θ₁ ≤ θ(u) < θ₂, φ₁ ≤ φ(u) < φ₂} in spherical coordinates, then ν(X) is given by the Riemann integral
(1/4π) ∫_{θ₁}^{θ₂} ∫_{φ₁}^{φ₂} sin(θ) dθ dφ.
The next proposition is our construction. It can be seen as a generalization of the example given in [JT22] by the first two authors.

Proposition 3.6. Suppose an open A ⊆ SO(3, R) is of the form {g ∈ SO(3, R) : ∠(u, gu) < θ} with u ∈ R³ a unit vector and θ ∈ (0, π/2]. Then we have
μ(A²) = 4μ(A)(1 − μ(A)).
In particular, μ(A²) < 4μ(A) and the measure of A can be arbitrarily small.
Proof. Let u and A be as in Lemma 3.4. Recall that the group SO(3, R) acts transitively on the 2-sphere S² consisting of all unit vectors in R³. Let T ≤ SO(3, R) be the stabilizer of u. Then SO(3, R)/T can be identified with S². With πA and πA² the projections of A and A² in SO(3, R)/T respectively, we have
πA = {v | α(u, v) ≤ π/2 − ε}  and  πA² = {v | α(u, v) ≤ π − 2ε}.
Let ν be the Radon measure induced by μ on SO(3, R)/T via the quotient integral formula. From the way we construct A and Lemma 3.4, we have A = AT and A² = A²T. Hence, μ(A) = ν(πA) and μ(A²) = ν(πA²). Finally, note that ν is the normalized Euclidean measure, so an obvious computation yields the desired conclusion.
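A Monte Carlo sketch (ours) of Proposition 3.6: sampling Haar-random rotations, the empirical measures of A and A² match (1 − cos θ)/2 and 4μ(A)(1 − μ(A)) respectively:

```python
# A Monte Carlo sketch (ours) of Proposition 3.6: for A = {g : angle(u, g u) < theta},
# mu(A) is the normalized area of a spherical cap, (1 - cos theta)/2, and by Lemma 3.4
# (up to boundary) A^2 = {g : angle(u, g u) < 2*theta}, so mu(A^2) = 4*mu(A)*(1 - mu(A)).
import numpy as np
from scipy.spatial.transform import Rotation

theta = 0.4
u = np.array([0.0, 0.0, 1.0])

gs = Rotation.random(200_000, random_state=1)          # Haar-random rotations
angles = np.arccos(np.clip(gs.apply(u) @ u, -1.0, 1.0))

mu_A = np.mean(angles < theta)
mu_A2 = np.mean(angles < 2 * theta)
print(mu_A, (1 - np.cos(theta)) / 2)                   # both ~ 0.0395
print(mu_A2, 4 * mu_A * (1 - mu_A))                    # both ~ 0.1516
```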
In light of Proposition 3.6, it is natural to generalize the aforementioned Breuillard–Green conjecture to sets with all possible measures in SO(3 , R).
Conjecture 3.7 (Strong Breuillard–Green Conjecture) . Suppose A ⊆ SO(3 , R) is open. Then
μ(A²) ≥ min{1, 4μ(A)(1 − μ(A))}.
Moreover, if μ(A) < 1/2, the equality happens if and only if A is of the form
{g ∈ SO(3, R) : ∠(u, gu) < arccos(1 − 2μ(A))}
where u ∈ R3 is a unit vector.
The aforementioned Breuillard–Green Conjecture for SO(3 , R) also has a natural gener-alization to all compact simple Lie groups.
Conjecture 3.8 (Breuillard–Green Conjecture for compact simple Lie groups) . Let G be a compact simple Lie group with dimension d equipped with a normalized Haar measure μ,and the dimension of the maximal compact proper subgroup is m. Then for every ε > 0,every compact A ⊆ G with sufficiently small measures,
μ(A2) > (2 d−m − ε)μ(A).
We now discuss a more general construction. We start with some facts about compact simple Lie groups and symmetric spaces [Lee18].

Fact 3.9. Suppose G is a compact simple Lie group. Then, up to a constant, there is a unique bi-invariant Riemannian metric on G inducing a distance function d̃ on G. Moreover, let H ≤ G be a closed and hence compact subgroup of G. Then there is a unique left-invariant Riemannian metric on G/H such that if d is the distance function it induces on G/H, then
d(g₁H, g₂H) := E_{h∈H} d̃(g₁h, g₂h).
This Riemannian metric has positive constant scalar curvature.
The following example shows that, if Conjecture 3.8 is true, the 2^{d−m} factor in it is sharp and the −ε term is necessary.

Proposition 3.10. Let G be a compact simple Lie group of dimension d equipped with a normalized Haar measure μ. Let m be the dimension of a maximal compact proper subgroup. Then for all sufficiently small c > 0 there is A ⊆ G with μ(A) = c and
μ(A²) < 2^{d−m} μ(A).
Proof. Let H ≤ G be a maximal proper compact subgroup. Then dim H = m. By Fact 3.9, G/H is a symmetric space and hence has a unique G-invariant Riemannian metric, and this metric induces a volume measure λ on G/H. Note that the projection measure of μ on G/H equals λ up to a constant scalar, as the projection measure of μ on G/H is also a G-invariant Borel measure. Let B_r be an open ball of radius r in G/H, and D_r = π^{−1}(B_r) where π : G → G/H is the projection map. We claim that D_r² ⊆ D_{2r}.
To see the claim, let g₁ and g₂ be two arbitrary elements of D_r, and let πg₁ and πg₂ be the projections of g₁ and g₂ in G/H. For i = 1, 2, let γ_i be the geodesic curve connecting πg_i and the identity coset in G/H; the length of each γ_i is strictly smaller than r by the choice of g_i and Fact 2.5. As the metric on G/H is G-invariant, (πg₁)γ₂ has the same length as γ₂. Now let γ be the curve formed by (πg₁)γ₂ after γ₁; then it is a curve connecting πg₁g₂ and the identity coset, and it has length strictly smaller than 2r. Thus g₁g₂ ∈ D_{2r} and hence D_r² ⊆ D_{2r}.
Using Fact 2.7 when r is sufficiently small, we have
μ(D_r²)/μ(D_r) ≤ μ(D_{2r})/μ(D_r) = λ(B_{2r})/λ(B_r) → 2^{d−m}  (as r → 0).
By Fact 3.9 the metric on G/H has positive constant scalar curvature. Thus, using Fact 2.7, we have μ(D_r²)/μ(D_r) < 2^{d−m} when r is sufficiently small.
One may compare Conjecture 3.8 with the recently developed Brunn–Minkowski phenomenon by the authors [JTZ21]:

Fact 3.11 (Symmetric Brunn–Minkowski for simple Lie groups with finite center). Let G be a simple Lie group with finite center, and μ a Haar measure on G. Let d be the topological dimension of G, and m be the dimension of a maximal compact subgroup of G. Then for every compact A ⊆ G,
μ(A²) ≥ 2^{d−m} μ(A).
For a more general Brunn–Minkowski result we refer to [JTZ21]. We also remark that this Brunn–Minkowski inequality becomes trivial when the ambient group is compact, as in this case m = d in Fact 3.11, while in Conjecture 3.8 we have m < d. Now we look at a fact about noncompact simple Lie groups with finite center [HN12].
Fact 3.12. Suppose G is a noncompact simple Lie group with a finite center, and let H ≤ G
be a closed and connected compact subgroup of G having maximal dimension. Then there is a unique left-invariant Riemannian metric on G/H . This Riemannian metric has negative constant scalar curvature.
We will refer to the metric obtained in Facts 3.9 and 3.12 as the canonical metric on G/H. We now state a general conjecture:

Conjecture 3.13 (Minimal measure doubling conjecture). Suppose G is a simple Lie group equipped with a Haar measure μ, and A ⊆ G is open. Let H ≤ G be a proper compact connected subgroup of G with maximal dimension, let π : G → G/H be the quotient map, let B_r ⊆ G/H be the ball of radius r centered at the coset H, and let B_{2r} be defined similarly. Assuming μ(A) = μ(π^{−1}(B_r)), we have
μ(A²) ≥ min{μ(π^{−1}(B_{2r})), μ(G)}.
Moreover, equality happens if and only if A is a conjugate of π^{−1}(B_r).
We will next deduce a number of consequences of the above conjecture.

Proposition 3.14. Assume the minimal measure doubling conjecture holds. Then we have the following:
(1) The Strong Breuillard–Green conjecture holds.
(2) If A ⊆ SL(2, R) is open, then there is a Haar measure μ such that
μ(A²) ≥ 4μ(A)(1 + μ(A)).
Moreover, consider the action of SL(2, R) on the upper half plane by linear fractional transformations. Equality happens if and only if A is of the form
{g ∈ SL(2, R) : d(p, gp) < r}.
(3) Fact 3.11 holds.
Proof. Statement (1) is clear from our construction, as well as the uniqueness of the G-invariant metric given by Fact 3.9. Similarly, Statement (3) follows from Fact 3.12, the fact that the scalar curvature is negative, and Fact 2.7 when the radius of the ball approaches 0. We will only show (2) here, which can also be viewed as the first non-trivial case of Conjecture 3.13 for noncompact groups. Let H ≅ SO(2, R) be a maximal compact subgroup of G := SL(2, R). Then G/H is isometric to the hyperbolic plane H². Choose a Haar measure μ on G so that under the induced metric a ball of radius r in H² has volume ∫₀^r sinh(t) dt. Let π : G → G/H be the quotient map and A = π^{−1}(B_r). Using the same proof as in Proposition 3.10 together with Fact 3.12, it is easy to see that A² ⊆ π^{−1}(B_{2r}). Thus
μ(A²) ≤ cosh(2r) − 1 = 2 cosh(r)² − 2 = 4μ(A)(1 + μ(A)),
as desired.
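A quick numeric illustration (ours) of the ball-volume normalization just used: the hyperbolic doubling ratio tends to 4 = 2^{d−m} (d = 3, m = 1 for SL(2, R)) as r → 0 and grows without bound for large r:

```python
# Numeric illustration (ours): with vol(B_r) = cosh(r) - 1 as above, the doubling
# ratio vol(B_2r)/vol(B_r) tends to 4 = 2^(d - m) as r -> 0 (d = 3, m = 1 for SL(2, R))
# and grows without bound for large r.
import numpy as np

def vol(r):
    return np.cosh(r) - 1.0

for r in [1e-3, 1e-2, 0.1, 1.0, 5.0]:
    print(r, vol(2 * r) / vol(r))   # ~ 4.000, 4.000, 4.010, 5.086, 150.4
```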
4. Small growth in ultraproducts

In this section, we will start from a sequence of small growth sets in SO(3, R) of decreasing size and construct an infinitesimal small growth set in an ultraproduct of SO(3, R). We will treat, more generally, ultraproducts of compact groups. As mentioned earlier in the introduction, our proof necessitates considering the asymmetric problem for products of two sets. Let G be a unimodular locally compact group, and μ a complete Haar measure. Suppose A, B ⊆ G are such that A, B, and AB are measurable, and 0 < μ(A), μ(B), μ(AB) < ∞. We define the Brunn–Minkowski growth BM(A, B) to be the unique r > 0 such that
μ(AB)^{1/r} = μ(A)^{1/r} + μ(B)^{1/r}.
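As a quick illustration (ours; the helper name bm_growth is made up for this sketch), the defining equation can be solved for r numerically, and for A = B it reduces to the doubling formula noted next:

```python
# A small numerical helper (ours) that solves mu(AB)^(1/r) = mu(A)^(1/r) + mu(B)^(1/r)
# for r by bisection.  Assumes 0 < mu_A, mu_B < 1 and that mu_AB is large enough for a
# root to exist inside the bracket.
import math

def bm_growth(mu_A, mu_B, mu_AB, lo=0.5, hi=100.0):
    f = lambda r: mu_AB ** (1 / r) - mu_A ** (1 / r) - mu_B ** (1 / r)
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid          # f still positive: the root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

mu_A = 0.01
print(bm_growth(mu_A, mu_A, 4 * mu_A))   # ~ 2.0: doubling by 4 means BM(A, A) = 2
print(math.log2(4 * mu_A / mu_A))        # 2.0, the A = B formula noted below
```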
In particular, with A = B, we have BM(A, A) ≤ r if and only if μ(A²) ≤ 2^r μ(A). Note that this definition is independent of the choice of the complete Haar measure μ due to the up-to-constant uniqueness of complete Haar measures (Fact 2.2). At first sight, this definition seems to also work in the nonunimodular setting with left Haar measures. However, the correct definition for nonunimodular locally compact groups involves both the left and the right Haar measure; see [JTZ21] for details. We will not discuss this issue further here as it will not be useful for the current purpose.
We now recall some elements of the theory of ultrafilters and ultraproducts from logic. For a systematic treatment of ultrafilters and applications, see [Gol22]. A nonprincipal ultrafilter U on N is a collection of infinite subsets of N which satisfies the following conditions:
(NU1) If I, J ∈ U, then I ∩ J ∈ U.
(NU2) If I ∈ U and I ⊆ J, then J ∈ U.
(NU3) For all I ⊆ N, either I or N \ I is in U.
One can think of a nonprincipal ultrafilter on N as choosing a notion of "almost everywhere/almost every" on N, where I ∈ U if and only if "almost every" natural number n is in I. This can be made precise by considering the finitely additive measure
P(N) → {0, 1},  I ↦ 1 if I ∈ U, and I ↦ 0 if I ∉ U.
Note that this assignment is not σ-additive, and hence not a measure. Indeed, for each n, the set {n} is not in U as it is finite, but their union N is in U, being cofinite. Nevertheless, this is a very useful heuristic, and we will use it further. For a fixed nonprincipal ultrafilter U, when P is a property of natural numbers, we write "a.e. n satisfies P" if {n : P(n) holds} is in U. The existence of a nonprincipal ultrafilter depends on the axiom of choice. However, the truth of our theorem is independent of the axiom of choice by a Shoenfield absoluteness argument. Throughout, we fix an ambient set V which contains all the mathematical objects we care about; for instance, we can choose V to be an initial segment of the set-theoretic universe. Fix a nonprincipal ultrafilter U on N. Two sequences (a_n) and (b_n) of elements of V are U-equal if a_n = b_n for a.e. n. It is easy to see that U-equality is an equivalence relation on V. For a sequence (a_n) of elements of V, we let (a_n)/U denote its equivalence class under U-equality. The ultraproduct ∏_U A_n of a sequence (A_n) of subsets of V with respect to U is the set
{(a_n)/U : a_n ∈ A_n for each n}.
The ultraproduct ∏
U
An can be seen as an averaging of ( An) reflecting certain types of behaviors that hold in U-a.e. An. The following fact makes this intuition precise.
Fact 4.1. Fix a nonprincipal ultrafilter U. Suppose (An), (Bn), and (Cn) are a sequence of sets, U is a nonprincipal ultrafilter on N, A = ∏
U
An, B = ∏
U
Bn, and C = ∏
U
Cn. Then (1) We have A = ∅ if and only if An = ∅ for a.e. n.(2) We have A ⊆ B if and only if An ⊆ Bn for a.e. n.(3) A ∪ B = ∏
U
(An ∪ Bn), A ∩ B = ∏
U
(An ∩ Bn), A \ B = ∏
U
(An \ Bn).(4) With × denoting the Cartesian product, A × B = ∏
U
(An × Bn).(5) Suppose π : B × C → B and πn : Bn × Cn → Bn for all n are the projections to the first coordinates, and assume A ⊆ B × C and An ⊆ Bn × Cn for all n. Then
π(A) = ∏
U
πn(An).
The following “compactness” property (also called ℵ1-saturated property) is a new fea-ture of sets obtained by ultraproducts. It is behind a major advantage with working with ultraproduct: One can replace approximation by actual equality if willing to give up some quantitative information.
Fact 4.2. Suppose (An) is a sequence of sets, each obtained by taking ultraproducts of a sequence of sets in V . If for each finite I ⊆ N, the intersection ⋂
n∈I
An is nonempty, then the intersection ⋂
n
An is nonempty.
The following remark, on the other hand, highlight a difficulty of working with ultraprod-uct, namely, we completely lose meaningful topological data.
Remark 4.3. Let An = {0, 1}, and A = ∏
U
An. Using (the dual of) Fact 4.2, it can be shown that A is uncountable. Suppose we equip An with the discrete topology. Then A
must be “pseudo-compact”, and the singleton set {a} for each a ∈ A is “pseudo-open”. The collection {{ a} : a ∈ A} forms a cover of A, but there is no finite subcover. As we are interested in handling group and products of sets on them, the following fact is needed:
Fact 4.4. Fix a nonprincipal ultrafilter U. Suppose (Gn) is a sequence of groups, G is the ultraproduct ∏
U
Gn of the underlying set. (1) Then the map
(∏ Gn
)
→
(∏ Gn
)
, (( an), (bn)) 7 → (anbn)
induces a binary operation from G × G to G with (( an)/U, (bn)/U) 7 → (anbn)/U, and the set G together with this binary operation map is a group. (2) Suppose (An) and (Bn) are sequences with An, B n ⊆ Gn, A = ∏
U
An, B = ∏
U
Bn,and AB is the product set of A and B with respect to the group operation defined in (1). Then AB = ∏
U
AnBn.
For a sequence ( Gn) of groups and a nonprincipal ultrafilter U on N, the ultraproduct
of ( Gn) with respect to U, denoted by ∏
U
Gn, is the group G whose underlying set is the ultraproduct of the sequence of underlying sets of ( Gn), and whose group operation on G is given by Fact 4.4(1). MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 15
A pseudo-compact group G is a group equipped with the additional data of a sequence (G_n) of compact groups and a nonprincipal ultrafilter U on N such that G = ∏_U G_n. Very often, we will treat G just as a group and suppress the data about (G_n) and U. We will say the pseudo-compact group G is built from the sequence (G_n) of compact groups and the ultrafilter U if we want to make the other pieces of information clear. Note that by Remark 4.3, there is no obvious topology one can equip such a G with. There is also an obvious notion of pseudo-locally-compact groups, but we will not discuss this further as it will not be used. Unlike topological information, we will see that measure-theoretic information is meaningfully preserved by ultraproducts. Suppose G is a pseudo-locally-compact group built from a sequence (G_n) of groups and a nonprincipal ultrafilter U. We say that A ⊆ G is pseudo-measurable if there is a sequence (A_n) with A_n ⊆ G_n measurable such that A = ∏_U A_n. It follows from Fact 4.1(4) that the collection of pseudo-measurable subsets of G forms a Boolean algebra (closed under taking finite intersections, finite unions, and relative complements). A pleasure of working with ultrafilters is the following easy fact:
For a sequence ( rn) of real numbers and nonprincipal ultrafilter U, the limit of (rn)
under U, denoted by lim U rn, is defined to be the unique element in R ∪ {±∞} such that one of the three cases of Fact 4.5 holds. This notion of limit behaves as it should. Fact 4.6 records some behavior that we will actually use.
Fact 4.6. Fix a nonprincipal ultrafilter U on N. Suppose (rn), (sn), and (tn) are sequences of nonnegative real number, and K is a constant. Then we have the following (1) lim U (rn + sn) = lim U rn + lim U sn
(2) lim U Kr n = K lim U rn
(3) if rn ≤ sn for a.e. n, then lim U rn ≤ lim U sn.(4) If f : R3 → R is a continuous function, and lim U rn, lim U sn, lim U tn < ∞, then
f
(
lim
U
rn, lim
U
sn, lim
U
tn
)
= lim
U
f (rn, s n, t n).
Here, we hold the convention that r + ∞ = ∞ + ∞ = K · ∞ = ∞ and r < ∞ for r ∈ R>0.
Using Fact 4.6(1), one can deduce Lemma 4.7 below. We omit the obvious proof.
Lemma 4.7. Suppose G is a pseudo-locally-compact group built from a sequence (Gn) of locally compact groups and ultrafilter U. Let (μn) be a sequence with μn a left Haar measure on Gn. Then we have the following: (1) If A ⊆ G is pseudo-measurable, A = ∏
U
An = ∏
U
A′
n
for two sequences (An) and
(A′
n
) where An, A ′
n
⊆ Gn are measurable, then
lim
U
μn(An) = lim
U
μn(A′
n
).MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 16
(2) If A, B ⊆ G are pseudo-measurable and disjoint, A = ∏
U
An and B = ∏
U
Bn for two sequences (An) and (Bn) where An, B n ⊆ Gn are measurable, then
lim
U
μn(An ∪ Bn) = lim
U
μn(An) + lim
U
μn(Bn).
Suppose G is a pseudo-compact group built from a sequence ( Gn) of compact groups and an ultrafilter U. A measure μ on G is a pseudo-Haar measure if the following holds: (PH1) there is a sequence ( μn) with μn a Haar measure on Gn such that for every pseudo-measurable A ⊆ G, we have
μ(A) = lim
U
μn(An)where ( An) is a sequence with An ⊆ G measurable and A = ∏
U
An
(PH2) μ is the completion of its restriction to the σ-algebra generated by pseudo-measurable sets. To simplify notation, we will write (PH1) as μ = lim U μn on pseudo-measurable sets. By Fact 4.6(2), a constant multiple of a pseudo-Haar measure is a pseudo-Haar measure. We note that every pseudo-measurable set is a measurable set with respect to a pseudo-Haar measure, but the converse is not true. The next lemma shows that a pseudo-Haar measure can be constructed a sequence of Haar measure.
Lemma 4.8. Suppose G is a pseudo-compact group built from a sequence (Gn) of compact groups and an ultrafilter U. Let (μn) be a sequence where μn is a Haar measure on Gn.Then there is a pseudo-Haar measure on G such that μ = lim U μn on pseudo-measurable subsets of G.Proof. Let μ0 be the function on the Boolean algebra of pseudo-measurable subsets of G
given by
μ0(A) = lim
U
μn(An)where ( An) is a sequence with An ⊆ G measurable and A = ∏
U
An. We note that μ0 is well-defined by Fact 4.7(1). From the definition of limit and ultraproduct, we have μ0(∅) = 0. If a pseudo-measurable A ⊆ G is a countable union of pseudo-measurable subsets of G,it follows from Fact 4.2 that A is equal to a finite union of these subsets of G. Hence, it follows from Fact 4.7(2) that μ0 is a premeasure. Thus, by Carath´ eodory’s extension theorem (Fact 2.1) μ0 can be extended to a measure on the σ algebra of subsets of G generated by the pseudo-measurable sets. Using completion, we obtain a pseudo-Haar measure with the desired properties.
The next lemma shows that a pseudo-compact group is in a sense unimodular:
Lemma 4.9. Suppose G is pseudo-compact, A ⊆ G is pseudo-measurable, and μ is a pseudo-Haar measure on G. Then for all g ∈ G, the sets gA and Ag are pseudo-measurable, and
μ(gA ) = μ(A) = μ(Ag ).Proof. We will only show gA is pseudo-measurable and μ(gA ) = μ(A), as the remaining parts are similar. Suppose G is built from the sequence of compact subgroup Gn and a nonprincipal ultrafilter U. Let ( gn), ( An), and ( μn) be sequences such that gn ∈ Gn,
g = ( gn)/U, An ∈ A is measurable, A = ∏
U
An, μn is a Haar measure on Gn, and μ(A) = lim U μn(An). One can then check that gA = ∏
U
(gnAn), so gA is pseudo-measurable. The equality μ(gA ) = μ(A) follows from the equality μn(An) = μn(gnAn). MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 17
Let G be a pseudo-compact group. We say that pseudo-measurable sets A, B ⊆ G are
commensurable if there is a pseudo-Haar-measure μ on G such that 0 < μ (A), μ (B) < ∞.
We say that A is infinitesimal compared to B if there is a pseudo-Haar-measure μ on G
such that 0 < μ (A) < ∞ and μ(B) = ∞.
The following lemma is the counterpart of the uniqueness up to constant of Haar measures.
Lemma 4.10. Let G be a pseudo-compact group, and μ and μ′ be pseudo-Haar measures on G. Suppose there is a pseudo-measurable set A ⊆ G such that 0 < μ (A) < ∞ and
0 < μ ′(A) < ∞. Then there is K ∈ R≥0 such that μ′ = Kμ .Proof. Suppose G is built from the sequence ( Gn) and the ultrafilter U. Let K = μ′(A)/μ (A). We will show μ′(B) = Kμ (B) for an arbitrary pseudo-measurable B ⊆ G. This is enough because μ can be obtained uniquely via completion and Carath´ eodory’s theorem from the premeasure μ0 which is the restriction of μ to the pseudo-measurable subsets of B. Let ( An)and ( Bn) be sequences such that An, B n ⊆ Gn are measurable, A = ∏
U
An, and B = ∏
U
Bn.Let ( μn) and ( μ′
n
) be two sequences such that μn and μ′
n
are Haar measure on Gn, and
μ(A) = lim
U
μn(An) and μ′(A) = lim
U
μ′
n
(An).
By definition, for any ε > 0, for a.e. n, we have
μn(An) < (1 + ε)μ(A) and μ′(A) < (1 + ε)μn(An).
Hence, for these n, Kμ n(An) < (1 + ε)2μ′
n
(An). Since, μn and μ′
n
differs by a constant, it leads to Kμ n(Bn) < (1 + ε)2μ′
n
(Bn). It follows that μ(B) ≤ K(1 + ε)2μ′(B). Since ε can be taken arbitrarily, we get μ′(B) ≤ K(B). A similar argument yields, μ′(B) ≥ Kμ . Thus,
μ′(B) = Kμ (B) completing the proof.
We deduce some consequence for commensurability and being infinitesimal. There are other facts along this line, but we leave those to interested readers.
Corollary 4.11. Suppose G is a pseudo-compact group, and A, B, C ⊆ G are pseudo-measurable. Then we have the following: (1) If A and B are commensurable, and B and C are commensurable, then A and C are commensurable. (2) If A and B are commensurable, and A is infinitesimal compared to C, then B is infinitesimal compared to C.Proof. We will only prove (1), as the proof of (2) is similar. Let μ and μ′ be pseudo-Haar measure on G such that 0 < μ (A), μ (B) < ∞ and 0 < μ ′(B), μ ′(C) < 0. Then by Lemma 4.10, μ and μ′ differs by a constant K. It follows that 0 < μ (A) < ∞, which yields the desired conclusion.
Let G be a pseudo-compact group. Suppose A, B ⊆ G are such that A, B, and AB are pseudo-measurable, and A and B are commensurable. Let μ be a pseudo-Haar measure on G such that 0 < μ (A), μ (B) < ∞. Note that 0 < μ (AB ) by Fact 4.9. We define the
Brunn–Minkowski growth BM( A, B ) to be the unique r > 0 such that
μ(AB )1
r
= μ(A)1
r
μ(B)1
rMEASURE DOUBLING OF SMALL SETS IN SO(3 ,R)18
if μ(AB ) < ∞, otherwise we set BM( A, B ) = ∞. By Fact 4.10, this definition is independent of the choice of the pseudo-Haar measure μ as long as 0 < μ (A) < μ (B) < ∞.We now prove the main result of this section:
Proposition 4.12. Suppose G is a pseudo-compact group built from a sequence (Gn) of compact groups and a nonprincipal ultrafilter U. Let (An), (Bn) and (μn) be sequences such that An, B n, A nBn ⊆ Gn has positive measure, μn is a Haar measure on Gn. Let A, B ⊆ G
be pseudo-measurable sets and μ a pseudo-Haar measure on G such that A = ∏
U
An and
B = ∏
U
Bn. Then we have the following. (1) If there is a constant N such that μn(An)/N ≤ μn(Bn) ≤ Nμ n(A)n, then A and B
are commensurable. (2) If lim n→∞ μn(An)/μ n(Gn) = lim n→∞ μn(Bn)/μ n(Gn) = 0 , then A and B are infini-tesimal compared to G.(3) If A and B are commensurable and r is a constant such that BM( An, B n) ≤ r, then
BM( A, B ) ≤ r.Proof. We first prove (1). For each n, scaling μn by a constant factor, we can assume that
μn(An) = 1. Let μ be a pseudo-Haar measure on G such that μ = lim U μn. We can check that μ(A) = 1 and 1 /N < μ (B) < N . This shows A and B are commensurable. The proof for (2) can be obtained similarly. We now prove (3). Let μ be a pseudo-Haar measure on G such that 0 < μ (A), μ (B) < ∞. By suitably scaling, assume μ = lim U μn.Then we have lim
U
μn(An) = μ(A), lim
U
μn(Bn) = μ(B), and lim
U
μn(AnBn) = μ(AB ).
The condition that BM( An, B n) ≤ r implies μn(AnBn) ≤ (μn(An)1
r
μn(Bn)1
r
)r. It follows from Fact 4.6(3,4) that we will also get μ(AB ) ≤ (μ(A)1
r
μ(B)1
r
)r. In particular, this implies μ(AB ) < ∞. Apply Fact 4.6(4) again, we get BM( A, B ) = lim U BM( An, B n). The conclusion then follows from Fact 4.6(3).
Approximate groups from small growth
In this section, we will link the sets in the ultraproduct having small measure product with approximate groups, the intermediate object used to connect to noncompact Lie groups. For a constant K, we call S ⊆ G a K-approximate group if (1) id G ∈ S,(2) S = S−1,(3) S2 can be covered by K-many left translates of S (equivalently, K-many right trans-lates of S). Clearly, there is no K-approximate group if K < 1, and a 1-approximate group is a subgroup. We say that S ⊆ G is an approximate group if it is a K-approximate group for some K.For instance, a open subset of a compact group is an approximate group. If G is a compact and connected group and S ⊆ G is a 2 r-approximate group such that
S2 is also measurable, then BM( S, S ) ≤ r. Hence, under rather mild assumption that S2 is still measurable, an approximate group has small measure doubling. The following fact from [Tao08] establish a partial converse: Approximate groups arise from sets with small measure growth. MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 19
Fact 5.1. Let G be a compact group equipped with a Haar measure μ, and A, B ⊆ G are such that A, B, and AB are measurable. Suppose μ(A)/N < μ (B) < Nμ (A), and BM( A, B ) ≤ r.Then there is an open ON,r (1) -approximate group S such that μ(S) ∼N,r μ(A), A is contained in ON,r (1) left translates of S, and B is contained in ON,r (1) right translates of S.
The following lemma provides the first sign that approximate groups are robust and easier to handle.
Lemma 5.2. Suppose S ⊆ G is a K-approximate group. Then Sn can be covered by Kn−1
left translates of S when n > 0.Proof. For n = 1, the statement follows from the definition. Suppose we have shown the conclusion for n, then Sn+1 = SnS can be covered by K(n−1) left translates of S2, which can in turn be covered by Kn left translates of S.
In order to be able to use the model-theoretic machinery later on, we need special kind of approximate groups. Let Ω be an ambient set. A structure Σ on Ω is a sequence (Σ n)satisfying the following conditions: (S1) For each n, Σ n is a Boolean subalgebra of P(Ω [n]). (S2) The diagonal {(a, a ) : a ∈ Ω} is in Σ 2.(S3) If m ≤ n and π : Ω [n] → Ω[m] is the projection to some m out of n coordinates,
A ⊆ Ω[m] is in Σ m, then π−1(A) is in Σ n.(S4) If m ≤ n and π : Ω [n] → Ω[m] is the projection to some m out of n coordinates,
A ⊆ Ω[n] is in Σ n, then π(A) is in Σ m.We say that D ⊆ Ω[n] is definable in Σ if D is an element of Σ n. A subset of Ω [n] is
δ-definable in Σ if it is a countable intersection of subsets of Ω [n] definable in Σ, and a subset of G[n] is σ-definable in Σ if it is a countable union of subsets of G[n] definable in Σ. A structure Σ on an ambient set Ω is ℵ1-saturated if whenever ( Dn) is a sequence of subsets of Ω k is definable in Σ, and every finite intersection is nonempty, then ( Dn) has nonempty intersection. The following easy observation about ℵ1-saturation will become very useful later on.
Lemma 5.3. Suppose Σ is an ℵ1-saturated structure on an ambient set Ω. If A ⊆ Ω[n] is
δ-definable in Σ, B ⊆ Ω[n] is σ-definable in Σ, and A ⊆ B, then there is D ⊆ Ω[n] definable in Σ such that A ⊆ D ⊆ B.Proof. Suppose A = ⋂
m
Am and B = ⋃
m
Bm with Am, B m ⊆ Ω[n] definable in Σ for each
m. Then (⋂
m
Am
)
∩
(⋂
m
(Ω [n] \ Bm
)
= ∅.
By ℵ1-saturation, there is N such that
(⋂Nm=1 Am
)
∩
(⋂Nm=1 (Ω [n] \ Bm
)
= ∅. it is easy to check that D =
(⋂Nm=1 Am
)
satisfies desired conditions.
An expansion of a group is a pair ( G, Σ) such that G is a group, Σ is a structure on the underlying set of G, and the graph Γ = {(a, b, c ) ∈ G3 : ab = c} of multiplication in G is definable in Σ. The following lemma will be useful later on. MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 20
Lemma 5.4. Let (G, Σ) be an expansion of a group. If A, B ⊆ G are definable in Σ, then
AB ⊆ G is definable in Σ. Hence, if A ⊆ G is definable in Σ, then An ⊆ G is definable in
Σ for all n.Proof. Let A, B be as in the statement of the lemma. By (S3) A × G × G ⊆ G and
G × B × G ⊆ G are definable in Σ. Set
C := ( A × G × G) ∩ (G × B × G) ∩ Γ.
Then C ⊆ G is definable in Σ by (S3) and the definition of an expansion. Note that
AB ⊆ G is the projection C ⊆ G to the last coordinate, hence AB is definable in Σ by (S4). The second statement of the lemma is immediate from the first.
For a collection ( Ai)i∈I of subsets of G there is clearly a smallest expansion ( G, Σ) of the group G where, for each i ∈ I, the set Ai is definable in Σ. We call this ( G, Σ) the expansion generated by (Ai)i∈I . An expansion ( G, Σ) of a group is ℵ1-saturated if the structure Σ is ℵ1-saturated. This property appears naturally here due to its relationship with ultra-product construction.
Lemma 5.5. Suppose (G, Σ) is an expansion of a group generated by a collection of pseudo-measurable subsets of G. Then Σ is ℵ1-saturated. Proof. Suppose G is built from the sequence ( Gn) of compact groups and a nonprincipal ultraproduct U. For each k, let Σ ′
k
be the collection of subsets of G[k] of the form C = ∏
U
Cn
with Cn ⊆ (Gn)[k]. It follows from Fact 4.1, that Σ ′ := (Σ ′
k
) is a structure. Note that the graph Γ is definable in Σ ′, and by definition A, B, and S are also definable in Σ ′ as they are pseudo-measurable. Thus, we have Σ k ⊆ Σ′
k
for all k. By Fact 4.2, Σ ′ is ℵ1-saturated, which implies Σ is ℵ1-saturated.
We now arrive at the main concept which will allow us to use the model-theoretic ma-chinery later on. Suppose G be a group, and ( G, Σ) is an expansion of G. We say that S
is definably amenable in ( G, Σ) if there is a finitely-additive left-invariant measure μ on
〈S〉 satisfying: (DA1) μ(S) = 1 (DA2) Every D ⊆ 〈 S〉 definable in Σ is μ-measurable. In view of Lemma 5.4, one can see this as a strengthening of the condition that Sn is measurable for each n > 0. The next lemma explain why this can arise in our setting.
Lemma 5.6. Suppose (G, Σ) is an expansion of a pseudo-compact group such that every
D ⊆ G definable in Σ is pseudo-measurable, and there is a pseudo Haar measure μ such that 0 < μ (S) < ∞. Then S is definably amenable in (G, Σ) .Proof. Recall that every pseudo measurable set is μ-measurable. Define the finitely additive measure ν on 〈S〉 by setting ˜ μ(D) = μ(D)/μ (S) for D ⊆ 〈 S〉 definable in Σ. The desired conclusions are immediate.
We now discuss how to obtain the conditions for Lemma 5.6 in our setting. Recall that X
is semialgebraic if it is a finite union of the solution sets of systems of algebraic inequalities. The following fact is a restatement of the Tarski-Seidenberg theorem [vdD98, (2.10)].
Fact 5.7. Let Σn be the collection of semialgebraic subsets of Rn. Then Σ := (Σ n) is a structure on R.MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 21
We say X ⊆ Rn is algebraic if it is the solution set of a system of polynomial equation with coefficient in R. Identifying the underlying set of the general linear group GL( d, R) in the obvious way as an algebraic subset of Rd2
, we say that G ≤ GL( d, R) is an algebraic subgroup of GL( d, R) if its underlying set is algebraic. A subset of G[n] is semi-algebraic
if it is semialgebraic under the obvious identification of G[n] with a subset of Rnd 2
. The Tarski-Seidenberg theorem translates into the following lemma for G:
Lemma 5.8. Let G is an algebraic subgroup of GL( d, R), and Σn the collection of semial-gebraic subsets of G[n]. Then (G, Σ) is an expansion of the group G where we set Σ := (Σ n).Proof. Note that the diagonal {(g, g ) : g ∈ G} can be obtained as the intersection of G × G
and an n-times Cartersian product of the diagonal {(x, x ) : x ∈ R}. Also note that the projection G[n] → G[m] to m out of n coordinates can be obtained from a suitable projection
Rnd 2
→ Rmd 2
. Finally, the graph of multiplication in G is semialgebraic, in fact, algebraic. The conclusion then follows easily from Fact 5.7.
We next lemma allow us to replace small-growth pairs ( A, B ) with semialgebraic ones.
Lemma 5.9. Suppose G is an algebraic subgroup of GL( d, R), μ is a Haar measure of G,and A, B ⊆ G are such that A, B, and AB are measurable, μ(A)/N < μ (B) < Nμ (A),and BM( A, B ) ≤ r. Then for all ε > 0, there are semialgebraic A′, B ′ ⊆ G such that
μ(A)/(1+ ε) < μ (A′) < (1+ ε)μ(A), μ(B)/(1+ ε) < μ (B′) < (1+ ε)μ(B), and BM( A′, B ′) ≤
r + ε.Proof. We will deduce the lemma from two weaker statements.
Claim 1. The statement of the lemma holds if we replace the semialgebraic requirement on A′ and B′ with the requirement that they are compact.
Proof of claim 1. Recall that μ is the completion of its restriction to Borel sets and μ has outer regularity. Applying these properties to G \ A and G \ B, for any δ can we obtain compact A′ ⊆ A and B′ ⊆ B such that μ(A′) > μ (A)/(1 + δ) and μ(B′) > μ (A)/(1 + δ). Choosing δ sufficiently small, we see that the desired properties are satisfied. ⊲⊳
Claim 2. The statement of the lemma holds if we add the assumption that A and B are compact.
Proof of claim 2. By a similar argument as in the proof of Claim 1, we choose open U ⊆ G
containing AB such that μ(U) < (1 + δ)μ(AB ) where we will determine δ later. Let d be an invariant distance on G. Set d0 = d(AB, G \ U). Let UA = {x ∈ G : d(x, A ) < d 0/2}
and UB = {x ∈ G : d(x, A ) < d 0/2}. Then UAUB ⊆ U. As A is compact, we can choose A′
which is a union of finitely many Euclidean open balls in GL( d, R) intersecting G such that
A ⊆ A′ ⊆ UA. Choose B′ similarly such that B ⊆ B′ ⊆ UB . With δ small enough, we see that the conditions are satisfied. ⊲⊳
Apply Claim 1 and then Claim 2 suitably, we get the statement of the Lemma.
The next lemma allow us to replace approximate groups with semialgebraic ones.
Lemma 5.10. Suppose G is an algebraic subgroup of GL( d, R), and S ⊆ G is an open
K-approximate group. Then there is a semialgebraic open K3-approximate group S′ such that S ⊆ S′ ⊆ S2. The set S′ is definably amenable with respect to some left Haar measure of G.MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 22
Proof. Let S be the closure of S with respect to the topology in G. We first note that
S ⊆ S2. Indeed, the open neighborhood aS −1 of a ∈ S contains a point in a′ ∈ S, so
a ∈ a′S ⊆ S2.Choose an open cover ( Ui)i∈I of S such that for each i each Ui ⊆ S2, and Ui is of the form
Bε ∩ G with Bε a ball of radius ε in Rd2
. Since G is compact, S is also compact. Choose a finite subcover of ( Ui)i∈I and take S′ to be their union. It is clear that S ⊆ S′ ⊆ S2, and S′
is semialgebraic. Replace S′ by S′ ∪ (S′)−1, we can make S′ symmetric. Finally, note that ( S′)2 ⊆ S4, which can be covered by K3 left-translates of S, and in turn covered by K3 left-translates of S′.We note that S′ is an open subset of G, so we can choose a left Haar measure μ of G such that μ(S′) = 1. By Lemma 5.8, every subset of G definable in ( G, S ) is semi-algebraic. In particular, they are Borel, and hence μ-measurable.
To link to compact Lie groups, we also need the following fact [HN12, Theorem 12.3.9].
Fact 5.11. Every compact Lie group is isomorphic as a topological group to an algebraic subgroup of GL( d, R) for some d.
Let G be a pseudo-compact group built from a sequence ( Gn) of compact groups and a nonprincipal ultrafiter U. We say that G is pseudo-Lie if each Gn is also a Lie group. We now prove the main proposition of this section.
Proposition 5.12. Suppose G is a pseudo-Lie pseudo-compact group, and A, B ⊆ G
are such that A,B, and AB are pseudo-measurable, A and B are commensurable, and
BM( A, B ) ≤ r. Then for each ε > 0, there are A′, B ′, S ⊆ G such that the following conditions hold: (1) Let (G, Σ) be the expansion of the group G generated by A′, B′, and S. Then every
D ⊆ G definable in Σ is pseudo-measurable. It follows that S is definably amenable in (G, Σ) .(2) A, B, A ′, B ′, S are all commensurable to one another (3) BM (A′, B ′) ≤ r + ε
(4) S is an approximate groups (5) A′ can be covered by finitely many left-translates of S, and B′ can be covered by finitely many right-translates of S
Proof. Suppose G is built from a sequence ( Gn) of compact Lie groups and a nonprincipal ultrafilter U. Using Fact 5.11, we can arrange that for each n, Gn is a closed subgroup of a general linear group. Let ( An), ( Bn), and ( Cn) be sequences such that An, B n, C n ⊆ Gn
are measurable such that A = ∏
U
An, B = ∏
U
Bn, and AB = ∏
U
Cn. Recall that AB =∏
U
(AnBn) by Fact 4.4(2). Hence, by fact 4.1(1,3) we must have AnBn = Cn for a.e. n. In particular, for a.e. n, the product AnBn is measurable. Now let μ be a pseudo-Haar measure such that 0 < μ (A), μ (B) < ∞. Then there is N
such that μ(A)/N < μ (B) < Nμ (A). Let ( μn) be a sequence such that μn is a Haar measure on Gn and μ = lim U μn. Using Fact 4.6, for every ε > 0 and for a.e. n,
μn(An)/(2 N) < μ n(Bn) < (2 N)μn(An) and BM( An, B n) ≤ (r + ε/ 2) .
Using Lemma 5.9, for a.e. n, we obtain semialgebraic A′
n
and B′
n
such that
μ(An)/2 < μ (A′
n
) < 2μ(An), mu (Bn)/2 < μ (B′
n
) < 2μ(Bn), and BM( A′
n
, B ′
n
) ≤ (r + ε).MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 23
Using Fact 5.1 together with Lemma 5.10, we produce for a.e. n an ON,r (1)-approximate group Sn such that Sn is open, Sn is definably amenable with respect to some Haar measure of Gn, we have μn(Sn) ∼N,r μn(A′
n
), the set A′
n
can be covered by ON,r (1)-left translates of
Sn, and B′
n
can be covered by ON,r (1)-right translates of Sn.Let A′ = ∏
U
A′
n
, B′ = ∏
U
B′
n
, and S = ∏
U
Sn. It remains to verify that the conditions of the lemma are satisfied. We note that (3) is immediate from the construction. The set S is an ON,r (1)-approximate group, and in particular, an approximate group. Also,
μ(S) = ON,r (1) μ(A), we see that S is commensurable with A′. By Lemma 4.11, S is also commensurable with A, B, and B′. Hence, we also get (2) and (4). We now verify (1). By scaling μn suitably, we can harmlessly arrange that μn(Sn) = 1. Then μ(S) = 1. Let Σ ′
m
be the subcollection of P(G[m]) consisting of D ⊆ G[m] of the form
D = ∏
U
Dn
where Dn ⊆ (Gn)[m] is semialgebraic. It follows from Fact 4.1, Fact 4.4, and Lemma 5.8 that with Σ ′ = (Σ ′
m
) the pair ( G, Σ′) is an expansion of the group G. As A′, B′, and S are definable in Σ ′, we have Σ is a substructure of Σ ′. In particular, if D ⊆ G is definable in Σ, then D is also definable in Σ ′. Hence, such D pseudo-measurable, and so μ-measurable. Thus, S is definably amenable by Lemma 5.6. Finally, we verify (5). We can choose m = m(N, r ) such that for a.e. n, there are at most
m-many left-translates of Sn needed to cover A′
n
. For these n, adding more left-translates of Sn if necessary, we assume that the m translates an, 1Sn, . . . , a n,m Sn can be used to cover
A′
n
. For i ∈ [m], set ai = ( an,i )/U. We claim that ( aiS)i∈[m] is a cover of A′. This is the case because,
A′ \ (a1S ∪ amS) = ∏
U
A′
n
\ (an, 1S ∪ an,m S) = ∅.
A similar argument shows that finitely many right translates of S is needed to cover B′.
Lie models from approximate groups
In this section, we will define locally compact models of approximate groups and describe the Hrushovski’s locally compact model theorem [Hru12], which allows us to construct them from definably amenable approximate groups. In fact, an improvement of Hrushovski’s theorem by Massicot and Wagner [MW15] for the definably amenable case will be discussed. This suits our purpose better even though the original theorem by Hrushovski also suffices with some extra steps. We also supplement this result with an additional step which allows us to build connected Lie models. This later step is folkloric, so we only gather the facts together. (Note that Hrushovski’s locally compact model theorem is also called Hrushovski’s Lie model theorem. This is in viewed of the fact that this theorem can be used in combination with the Gleason–Yamabe theorem to build Lie models. We split them apart here as it is done so in [MW15], and we also think it is conceptually clearer.) Let G be a group, and S ⊆ G is an approximate subgroup. A locally compact model of
S is a surjective group homomorphism such that the following two conditions are satisfied: (LM1) (Thick image) There is an open neighborhood U of id L such that π−1(U) ⊆ S
(consequently, ker π ⊆ S and U ⊆ π(S)). (LM2) (Compact image) π(S) in L is precompact. MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 24
Behind this definition is the observation that approximate groups arise naturally from group homomorphisms into locally compact groups. This can be seen through the lemma below:
Lemma 6.1. Suppose G is a group, S is a subset of G with id G ∈ S and S = S−1, L is a locally compact group, and π : 〈S〉 → L is such that conditions (LM1) and (LM2) are satisfied, Then S is an approximate group. In particular, an open an precompact subset of a locally compact group is an approximate group. Proof. Indeed, by (LM2), such S2 will be contained in an inverse image π−1(F ) of a compact set F ⊆ L, so S2 is covered by finitely many left translates of π−1(U) ⊆ S in (LM1). The last sentence is a special case with G = L and π the identity map.
We caution the reader that the word “model” in “locally compact model” is unrelated to model-theory. This word used comes from the Freiman homomorphism and Ruzsa’s model lemma in the abelian settings. Let G be a group, S ⊆ G is an approximate group, and π : 〈S〉 → L is a locally compact model of S. If L is a Lie group, we call π a Lie model of S. We say that π : 〈S〉 → L is
noncompact if L is noncompact. We define unimodular and connected for π similarly. Let ( G, Σ) be an ℵ1-saturated expansion of a group G, and H ⊆ G is σ-definable in Σ. A surjective group homomorphism π : 〈H〉 → L with L a locally compact group is definable
(in the sense of continuous logic) in Σ if the topology on L is induced by Σ as follows: (CD) X ⊆ L is compact in L if and only if π−1(X) is δ-definable in Σ, and X ⊆ L is open in L if and only if π−1(X) is σ-definable in Σ. It is known that (CD) is equivalent to (CD ′) Whenever F ⊆ U ⊆ L are such that F is compact and U is open, there is D ⊆ 〈 S〉
that is definable in Σ and π−1(F ) ⊆ D ⊆ π−1(U). We will not be using this equivalence, so we will leave it to the interested reader. The the definition good model given in [BGT12] is essentially combines the definition of locally compact model and (CD ′), so it is in a sense the original definition. Our definition of locally compact model is not the definition given in [MW15]. (We chose it due to conceptual clarity, and its later use for constructing of Lie models). To discuss the definition of locally compact models in [MW15], we introduce two further conditions: (LM1 ′) (Containing the kernel) ker π ⊆ S
(LM2 ′) (Bounded index) If A, B ⊆ 〈 S〉 are definable in Σ and ker π ⊆ A, then finitely many left-translates of A by elements in 〈S〉 can be used to cover B.The equivalence between the definition is given by the following lemma:
Lemma 6.2. Suppose (G, Σ) is an ℵ1-saturate expansion of a group, and S ⊆ G is definable in Σ. Let L be a locally compact group, and π : 〈S〉 → L a surjective group homomorphism continuously definable in Σ. Then (LM1) is equivalent to (LM1 ′) and (LM2) is equivalent to (LM2 ′). Proof. We first show the equivalent between (LM1) and (LM1’), and clearly only the back-ward implication is needed. Choose a sequence ( Un) of open neighborhood of id G. Then ⋂
n
π−1(Un) = ker π is a subset of S. It suffices to show there is n such that π−1(Un) ⊆ S.Suppose this is not the case. By (4), for each n, π−1(Un) is a countable union of subsets MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 25
of G definable in ( G; S). Using induction, one can construct a sequence ( Dn) such that
Dn ⊆ π−1(Un) is definable in Σ and
n
⋂
i=1
Di ∩ ⋂
j>n
π−1(Un)is not a subset of S. In particular, any finite intersection of ( Dn) is not a subset of S.Finally, the ℵ1-saturation of ( G; Σ) yields the desired conclusion. Next we show that (LM2) implies (LM2’). For the forward direction, let A, B ⊆ 〈 S〉 be definable in Σ, and ker π ⊆ A. An argument like in the equivalence between (LM1) and (LM1’) shows that there is an open neighborhood U of id L such that π−1(U) ⊆ A. On the other hand, by ℵ1-saturation or Lemma 5.3, B is contained in Sn for some n. This implies that π(B) is precompact. It follows that finitely many left-translates of A can be used to cover B.Finally, we show (LM2’) implies (LM2). Take U a precompact open neighborhood of id L,which exists due to the fact that L is locally compact. Then π−1(U) is σ-definable and contains ker π. By ℵ1-saturation via Lemma 5.3, there is D ⊆ π−1(U) such that ker π ⊆ D.By (LM2’), finitely many left translates of D covers S. This implies that finitely many left translates of U covers π(S), which implies that π(S) is precompact.
The following fact [MW15] is the main model-theoretic input that will allow us construct Lie model. We use this as a black box in this paper.
Fact 6.3 (Hrushovski’s Locally Compact Model Theorem via Massicot–Wagner) . Suppose
(G, Σ) is an ℵ1-saturated expansion of a group, and S ⊆ G is an approximate group definable in Σ and definably amenable in (G, Σ) . Then there is a locally group L, and surjective group homomorphism π : 〈S〉 → L satisfying (LM1’), (LM2’), and (CD) with S replaced by S4.
To obtain connected Lie models from locally compact model, we need the solution of Hilbert’s 5th problem, which is known as the Gleason–Yamabe Theorem [Gle52, Yam53]. Here is a version of this result, which is another major black box.
Fact 6.4 (Gleason–Yamabe Theorem) . Suppose L is a locally compact group and U is an open neighborhood of id L. Then there is an open subgroup L′ of L and a compact normal subgroup K of L′ such that K ⊆ U and the L′/K is a connected Lie group.
Suppose G is a group. Two approximate groups S and S′ of G are equivalent if each is contained in finitely many left (equivalently, right) translations of the other. It is easy to see that this is indeed an equivalence relation. Moreover, by Lemma 5.2, if S is an approximate group, then Sn is an approximate group equivalent to S for all n.
Proposition 6.5. Suppose (G, Σ) is an ℵ1-saturated expansion of a group, and S ⊆ G is an approximate subgroup definable in Σ and definably amenable in (G, Σ) . Then there is approximate subgroup S′ ⊆ G definable in Σ and definably amenable in (G, Σ) such that S′
is equivalent to S, and S′ has a connected Lie model definable in Σ.Proof. By Lemma 6.2 and Fact 6.3, S4 has a locally compact model π : 〈S4〉 → L. Since S4
is an approximate group equivalent to S, it is harmless to replace S with S4 and assume π is already the locally compact model of S. In particular, using (LM1), we can choose an open neighborhood U of id L such that π−1(U) ⊆ S. Applying the Gleason–Yamabe theorem, we MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 26
get an open subgroup L′ of L and a compact normal subgroup K of L′ such that K ⊆ U
and the quotient group L′/K is a connected Lie group. Now, the group L′ is open in L. Hence, L′ is also closed in L. Therefore, π−1(L′) is both
δ-definable and σ-definable in Σ. By ℵ1-saturation via Lemma 5.3, we learn that π−1(L′) is definable in Σ. Set S′ = S ∩ π−1(L′). It remains to verify that S′ has the desired property. As both S and π−1(L) are symmetric and contain id G, so is S′. Moreover, S′ contains ker π as a subset, and ( S′)2 is definable in Σ. It follows from (LM2 ′) in Lemma 6.2 that finitely many left-translates of S′ covers ( S′)2. We have verified that S′ is an approximate group. Note that both S and S′ = S ∩ π−1(L′) contains ker π = π−1(id L). Hence, it again follows from (LM2 ′) in Lemma 6.2 that S and S′ are equivalent approximate groups. The set S′ = S ∩π−1(L′) is definable in Σ as it is the intersection of two sets definable in Σ. Let μ be the finitely additive measure on 〈S〉 witnessing the fact that S is definably amenable in ( G, Σ), that is μ is left-invariant, μ(S) = 1, and every subset of 〈S〉 definable in Σ is
μ-measurable. Because S′ ⊆ 〈 S〉, S′ is definable in Σ, and S′ is an equivalent approximate group to S, we have 0 < μ (S′) < ∞. Scaling μ appropriately, we get a measure witnessing the fact that S′ is definably amenable in ( G, Σ) Now, as π is a locally compact model of S, by (LM1), π(S′) = π(S) ∩ L′ contains an open subset of L′. Since L′ is connected, L′ = 〈π(S′)〉. Hence, π|〈S′〉 : 〈S′〉 → L′ is surjective. We now define
π′ : 〈S′〉 → L′/K, π′ = φ ◦ π|〈S′〉
with φ : L′ → L′/K the quotient map. By construction ker π′ = π−(H), and recall that
H ⊆ U ∩ L′ with π−1(U) ⊆ S. Hence, ker π′ ⊆ S′. It follows that π′ satisfy (LM1’) from Lemma 6.2. Since, ker π ⊆ ker π′, it follows that π′ satisfy (LM2’) from Lemma 6.2 too. The group homomorphism π′ satisfy (CD) because π satisfy (CD) and φ is continuous. Thus, by Lemma 6.2, π′ is a locally compact model. Finally, L′/K is a connected Lie group, so π′
is a connected Lie model.
We end with a remark about plausible alternative approach:
Remark 6.6. Here, we are using the Massicot–Wagner version of Hrushovski’s locally com-pact model theorem. This is easier for us as it is compatible with the setting with measure. However, one can also use the original version of the theorem by Hrushovski [Hru12]. For that one will need to verify in addition that the collection of null sets with respect to the pseudo-Haar measure form an S1-ideal, which is possible. In his thesis [Car15], Carolino provides an alternative way to link pseudo-measurable ap-proximate groups to connected Lie group. Instead of using definably amenable approximate groups as we do here, he works with a notion of pseudo-open approximate groups and follow the strategy in [BGT12] to get a suitable version of Hrushovski’s Lie model theorem. Even though we believe his results are by-and-large correct, the thesis is not peer-reviewed and we think some details must be further clarified. For example, in Lemma 6.8 of [Car15], it was not verified that π−1(G′) is pseudo-open and deduce that ˜A is an ultra-approximate group in the sense he defined. This is particular important for us because without the knowledge of pseudo-open, the product sets might fail to be measurable with respect to a pseudo-Haar measure. If such issues can be addressed, Carolino’s thesis will provide an alternative path to a connected Lie model. MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 27
Small growth above Lie models
In the previous sections, we construct a suitable approximate group S based on A and
B. However, the Brunn–Minkowski growth BM( S, S ) may be much larger than BM( A, B ). The purpose of the section is to find two sets A′, B ′ ⊆ 〈 S〉 so that BM( A′, B ′) is very close to BM( A, B ). This step is the essential reason we also need to treat the asymmetric case. Let G be a pseudo-compact group built from a sequence ( Gn) of compact groups and a nonprincipal ultrafilter U. We say that G is pseudo-connected if each Gn is connected. The primary use of the connectedness assumption is the inequality below by Kemper-man [Kem64]. Note that this inequality does not hold when the group is disconnected by choosing the set to be the identity component.
Fact 7.1 (Kemperman Theorem) . Suppose G is a connected unimodular locally compact groups equipped with a Haar measure μ. If A, B ⊆ G are such that A,B, and AB are measurable, then μ(AB ) ≥ min {μ(A) + μ(B), μ (G)}.
The following lemma is the version of Kemperman theorem for pseudo-connected pseudo-compact groups.
Lemma 7.2. Suppose G is pseudo-compact and pseudo-connected with a pseudo-Haar mea-sure μ. If A, B ⊆ G are such that A,B, and AB are pseudo-measurable, then μ(AB ) ≥
min {μ(A) + μ(B), μ (G)}.Proof. Suppose G is the pseudo-compact group built from a sequence ( Gn) of compact and connected groups and a nonprincipal ultrafilter U. Let ( An) and ( Bn), and sequences such that An, B n ⊆ Gn is measurable for each n. Applying the Kemperman theorem and taking limit we get the desired conclusion.
We now prove the main result of this section.
Proposition 7.3. Suppose (G, Σ) is an expansion of a pseudo-connected pseudo-compact group such that every D ⊆ G definable in Σ is pseudo-measurable. Let A, B ⊆ G be definable in Σ, commensurable to one another, and infinitesimal compared to G, let S ⊆ G
be an approximate group definable in Σ and commensurable to both A and B, and assume that A can be covered by finitely many left-translates of S, and B can be covered by finitely many right-translates of S. Then for all ε > 0, there are A′, B ′ ⊆ 〈 S〉 definable in Σ,commensurable to A, B, and S such that BM( A′, B ′) ≤ BM( A, B ) + ε.Proof. Let μ be a pseudo-Haar measure on G such that 0 < μ (A), μ (B), μ (S) < ∞, so
μ(G) = ∞. As G is a pseudo-connected and pseudo-compact group, applying Lemma 7.2, we get
μ(AB ) ≥ min {μ(A) + μ(B), μ (G)}.
As μ(G) = ∞, we have BM( A, B ) ≥ 1. For each left coset α ∈ G/ 〈S〉, choose a representative g(α) ∈ G, and for each right coset
β ∈ 〈 S〉\ G, choose a representative g(β) ∈ G. By the relationship between A, B, and S,there are only finitely many left cosets of 〈S〉 intersecting A nonemptily, and a similar claim holds for B with right cosets of 〈S〉. Therefore, (1) μ(A) = ∑
α∈G/ 〈S〉
μ〈S〉(g(α)−1A ∩ 〈 S〉), μ(B) = ∑
β∈〈 S〉\ G
μ〈S〉(Bg (β)−1 ∩ 〈 S〉).MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 28
For simplicity, for every α ∈ G/ 〈S〉 and β ∈ 〈 S〉\ G, we write
Aα := g(α)−1A ∩ 〈 S〉, and Bβ := Bg (β)−1 ∩ 〈 S〉.
As A, B are definable in Σ, for each α ∈ G/ 〈S〉 and β ∈ 〈 S〉\ G, the sets Aα, B β ⊆ 〈 S〉 are also definable in Σ. It is harmless to delete such Aα, B β when they have μ-measure 0 and replacing A and B with subsets accordingly. Hence, we can assume for each α ∈ G/ 〈S〉 and
β ∈ 〈 S〉\ G, the sets Aα, B β are commensurable to S.Let r = BM( A, B ). Then r ≥ 1. We will show that there are α ∈ G/ 〈S〉 and β ∈ 〈 S〉\ G
such that BM( Aα, B β ) ≤ r + ε for all positive ε. From now on we assume that there is ε > 0such that for all α ∈ G/ 〈S〉 and β ∈ 〈 S〉\ G, BM( Aα, B β ) > r + ε. Equivalently, we have (2)
( μ〈S〉(Aα)
μ〈S〉(AαBβ )
) 1
r+ε
+
( μ〈S〉(Bβ )
μ〈S〉(AαBβ )
) 1
r+ε
≤ 1.
We now define two probability measures on G/ 〈S〉 and 〈S〉\ G based on the shapes of A
and B respectively as follows. For α ∈ G/ 〈S〉, define pA(α) = μ〈S〉(Aα)
μ(A) .
And similarly, for β ∈ 〈 S〉\ G, let pB (β) = μ〈S〉(Bβ )
μ(B) .
Choose α from G/ 〈S〉 randomly with respect to p A. We obtain that (3) EpA(α)
( μ〈S〉(Aα)
μ〈S〉(AαBβ )
) 1
r+ε
= 1
μ(A)
∑
α∈G/ 〈S〉
(μ〈S〉(Aα))r+ε+1
r+ε
(μ〈S〉(AαBβ )) 1
r+ε
.
On the other hand, by H¨ older’s inequality (with exponents ( r+ε)/(r+ε+1) and 1 /(r+ε+1)) we get
∑
α∈G/ 〈S〉
μ〈S〉(Aα)
r+ε+1
r+ε
≤ ∑
α∈G/ 〈S〉
μ〈S〉(Aα)r+ε+1
r+ε
μ〈S〉(AαBβ ) 1
r+ε
∑
α∈G/ 〈S〉
μ〈S〉(AαBβ )
1
r+ε
.
Using (1) and the fact that
AαBβ = (g(α)−1A ∩ 〈 S〉) ( Bg (β)−1 ∩ 〈 S〉) ⊆ (g(α)−1ABg (β)−1) ∩ 〈 S〉,
together with the unimodularity of G we conclude that
∑
α∈G/ 〈S〉
μ〈S〉(Aα) = μ(A), and ∑
α∈G/ 〈S〉
μ〈S〉(AαBβ ) ≤ μ(ABg (β)−1) = μ(AB ).
Finally, by using (3), we get (4) EpA(α)
( μ〈S〉(Aα)
μ〈S〉(AαBβ )
) 1
r+ε
≥
( μ(A)
μ(AB )
) 1
r+ε
.MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 29
Similarly by choosing β from 〈S〉\ G randomly with respect to p B , we will get (5) EpB (β)
( μ〈S〉(Bβ )
μ〈S〉(AαBβ )
) 1
r+ε
≥
( μ(B)
μ(AB )
) 1
r+ε
.
Inequalities (4) and (5) together with the Fubini and (2) give us
( μ(A)
μ(AB )
) 1
r+ε
+
( μ(B)
μ(AB )
) 1
r+ε
≤ EpA(α)EpB (β)
[( μ〈S〉(Aα)
μ〈S〉(AαBβ )
) 1
r+ε
+
( μ〈S〉(Bβ )
μ〈S〉(AαBβ )
) 1
r+ε
]
≤ 1.
This contradicts the choice of r.
Density functions of definable sets over Lie models
When H is a locally compact group, and π : H → L is a continuous and surjective group homomorphism, a left Haar measure μ of H and a left Haar measure λ of L are linked together by the corresponding left Haar measure ν of ker π via a quotient integral formula. This allows us to relate the measure of a measurable set A ⊆ G and the measure of its image π(A) using the density function fA(x) = ν(ker π ∩ x−1A). There is no similar measure on the kernel of a locally compact model or Lie model. Nevertheless, we will now show there are still density functions which are well-behaved enough for our later purpose. Suppose H is a group equipped with a left-invariant measure μ, the set A ⊆ H is mea-surable, π : H → L is a surjective group homomorphism, and the group L is equipped with a left-invariant measure λ. We say that fA : H → R≥0 is a density function for A with respect to π if for all measurable X ⊆ L, the set π−1(X) is μ-measurable, and
μ(A ∩ π−1(X)) =
∫
X
fA dλ.
We have a similar definition replacing left by right. It is clear from the above definition that for densities to exist, the measure on H must be sufficiently rich. The following lemma verifies that this is the case when we are handling locally compact models.
Lemma 8.1. Suppose (G, Σ) is an ℵ1-saturated expansion of a group, S ⊆ G is an approx-imate group definable in Σ, S is definably amenable in (G, Σ) witnessed by a left-invariant measure μ on 〈S〉, and S has a locally compact model π : 〈S〉 → L continuously definable in
Σ. Then π−1(X) ⊆ 〈 S〉 is μ-measurable whenever X ⊆ L is measurable. Moreover, if λ is the pushforward of μ on L, given by
λ(X) := μ(π−1(X))
for measurable X ⊆ L, then λ is a left Haar measure on L.Proof. We first note that if X ⊆ L is Borel, then π−1(X) is μ-measurable. Indeed, it is enough to check for open X ⊆ L , that π−1(X) is μ-measurable. This is the case because from the condition that π is definable in Σ, π−1(U) is open. MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 30
Now define a Borel measure ˜λ on L as follows. When X ⊆ L is Borel, set ˜λ(X) = μ(π−1(X)) .
The σ-additivity of ˜λ on Borel sets and ˜λ(∅) = 0 follow from the corresponding facts for μ.Moreover, ˜λ is the completion of its restriction to the collection of Borel subsets. It will turn out that ˜λ is a Haar measure, but will postpone this proof and show why the desired conclusions follow from it. By the uniqueness of Haar measure, we get constant α
such that λ(X) = α˜λ(X) for all measurable X. When B is Borel, from our construction, this already gives us that λ(B) = αμ (π−1(B)). If X is measurable, and λ(X) = 0, then X
is a subset of Borel B ⊆ L with λ(B) = 0 as λ is the completion of its restriction to the Borel sets. For such X, the set π−1(X) is a subset of π−1(B) which has μ(π−1(B)) = 0, so π−1(X) is measurable and has zero μ measure by the completeness of μ. For a general measurable X ⊆ L, a standard argument produces Borel B ⊆ L such that λ(X△B) = 0. It then follows from the earlier cases that π−1(X) is measurable and λ(X) = αμ (π−1(X)) .
Finally, we verify that ˜λ is a left Haar measure of L. It is clear that ˜λ is left-invariant. So we only need to show that ˜λ is a Radon measure. By (LM1) of the definition of locally compact model, there is an open neighborhood U of id L such that π−1(U) ⊆ S. Then for each x ∈ U, xU is is open neighborhood with ˜λ(U) = μ(π−1(U)) < μ (S) = 1 .
From this we established the locally finiteness (RM1) property of ˜λ. It remains to show that ˜λ is inner regular for open sets and outer regular. We will show this through several claims.
Claim 1: If D ⊆ 〈 S〉 is definable in Σ, then π(D) is compact in L.
Proof of Claim 1: Recall that the definability of π implies that C ⊆ L is closed if and only if π−1(C) is δ-definable. So we need to show π−1(π(D)) = D(ker π) is δ-definable. As ker π = π−1({id L}) is δ-definable, we obtain a decreasing sequence ( En) definable in Σ such that ker π = ⋂
n
En.
Clearly, we have D(ker π) ⊆ ⋂
n
DE n. We will now show that ⋂
n
DE n ⊆ D(ker π). Take
a ∈ ⋂
n
DE n. Then a = dnen with dn ∈ D and en ∈ En. Hence, D ∩ (a(En)−1) containing
a(en)−1 = dn is nonempty. By ℵ1-saturation, D ∩ ⋂
n
a(En)−1 is nonempty. It follows that there is d ∈ D and e−1 ∈ ⋂
n
(En)−1 such that d = ae . Thus, a = de is in D ker π.Finally, by ℵ1-saturation, D is a subset of Sn for some n. Hence, π(D) is a subset of (π(S)) n, which is precompact. Therefore, π(D) is compact. ⊲⊳
Claim 2: Suppose U ⊆ L is open. Then U is a countable union of compact sets. In particular, ˜λ has inner regularity for open sets.
Proof of Claim 2: Recall that the definability of π implies that U ⊆ L is open if and only if
π−1(U) is σ-definable in Σ. Hence, π−1(U) = ⋃
n
Dn with Dn definable in Σ for each n. It then follows that
U = ⋃
n
π(Dn).
From claim 1, it follows that U is a countable union of compact subsets of L. ⊲⊳ MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 31
Claim 3: Suppose U ⊆ L is open and precompact, and B ⊆ U is Borel. Then ˜λ is both inner regular and outer regular on U.
Proof of Claim 3: We will prove using induction on the Borel complexity of B. When B
is open, the inner regularity follows from Claim 2 and the outer regularity is immediate. If
B = U \ B1 and we have proven the statement for B1, then the statement for B will simply follows by switching inner and outer regularity. Suppose B = ∪nBn, and we have proven the statement for Bn for each n. Fix ε > 0. Let BN = ⋃Nn=1 Bn such that ˜λ(B \ Bn) < ε/ 2. For n ∈ [N], using the induction hypothesis, we choose compact Kn ⊆ Bn such that ˜λ(Bn \ Kn) < (ε/ 2)(1 /2) n. Then, it is easy to see that ˜λB \ ⋃Nn=1 Kn. This shows the inner regularity of ˜λ for B. A similar approximation argument shows the outer regularity of ˜λ for
B. ⊲⊳
Claim 4: Suppose B ⊆ L is Borel. Then ˜λ is outer regular on B.
Proof of Claim 4: From Claim 3, it is enough to establish that L is a countable union of open an precompact subsets of it. As π is a locally compact model, there is a neighborhood U of id L such that U ⊆ πS and π(S) is precompact. Hence, this set U is open and precompact. Note that L = ⋃
n
π(Sn). For each n, π(Sn) is precompact, and hence can be covered by finitely many translation of U. Therefore, L can be covered by countably many translates of U, which completes the proof. ⊲⊳
This completes the proof.
We recall some concept from analysis. Let Ω be a topological space equipped with a measure λ. A measure κ with the same σ-algebra as λ is absolutely continuous with respect to λ if and only if for each measurable X ⊆ L, we have μ(X) = 0 whenever
λ(X) = 0. We need the following fact:
Fact 8.2 (Radon–Nikodym Theorem) . Suppose κ and λ are σ-finite Borel measure on L
and κ is absolutely continuous with respect to λ (i.e, for all Borel X ⊆ L, λ(X) = 0 implies
κ(X) = 0 ). Then there is a λ-measurable function f such that for all Borel X ⊆ L we have
κ(X) =
∫
X
f dλx.
We now prove the promised existence:
Proposition 8.3. Suppose G is a group equipped with a complete left invariant measure μ,and S ⊆ G is an approximate subgroup with π : 〈S〉 → L is a locally compact model. If
A ⊆ 〈 S〉 is definable in (G, S ), then A has a density function fA with respect to π which is a.e. bounded. Moreover, if gA is another density function of A with respect to π, then
fA(x) = gA(x) for a.e. x ∈ L.Proof. Define the Borel measure κA on L by setting
κA(X) := ν(A ∩ π−1X)whenever X ⊆ L is Borel. From the definition of Lie model, one has 0 ≤ κA(X) ≤ ν(X). In particular, κA is absolutely continuous with respect to λ. As L is connected, λ is σ-finite, and so is κA. Thus, we can apply Fact 8.2 to get the desired conclusion. The last statement is immediate. MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 32
The fact that there is no single canonical density function creates more problem than it might initially seem. We will later need to study the relationship between the density functions fA, fB , and fAB for definable A, B ∈ 〈 S〉. As AB is generally a uncountable union of left-translates of B and a uncountable union of right-translates of A, such relationships becomes unclear. We will introduce the variant notion of approximate density functions, which is canonical. Suppose H is a group equipped with a left invariant measure μ, the set A ⊆ H is measurable, π : H → L is a surjective group homomorphism, and the group L equipped with a left-invariant measure λ. Suppose moreover, that d is a left-invariant metric on L
such that λ is a nonzero Radon measure with respect to the topology generated by d and open balls has finite positive measure. For ε > 0, the ( d, ε )-density function f d,ε A with respect to π is given by
f d,ε A (x) := μ(A ∩ π−1(Bε(x))
λ(Bε(x)) .
for x ∈ L. It is immediate to see that if fA is a density function of A with respect to π then we have
f d,ε A (x) = Et∈Bε (x)fA(t).
Note that the above equation does not depends on the choice of the density function fA.Earlier, we have seen that approximate density functions can be understood in term of density functions. Next, we will see that we can go in the other direction in the setting of Lie model. Let (Ω , d ) be a metric space, and μ is a Radon measure on Ω such that open balls have positive and finite measure. We call x ∈ Ω a Lebesgue point of the function f if lim
ε→0+
Et∈Bε (x)|f (x) − f (t)| = 0 .
We say that (Ω , d ) is locally doubling if there r > 0 and a constant K such that for all
x ∈ Ω and 0 < ε < r , we have
λ(B2ε(x)
λ(Bε(x)) < K.
The following fact [HKST15, Section 3.4] from real/harmonic analysis is the key to our approach. The theorem in its stronger form, that almost every point is a Lebesgue point of a locally integrable function f , can be proved as a consequence of the weak-L1 estimates for the Hardy–Littlewood maximal function.
Fact 8.4 (Lebesgue differentiation theorem for locally doubling metric space) . Let (Ω , d ) be a metric space, and μ is a Radon measure on Ω such that open balls have positive and finite measure. Suppose (Ω , d ) is locally doubling, and f is a λ-integrable function on Ω. Then
λ-a.e. x ∈ Ω is a Lebesgue point of f .
We now prove the second main result of this section:
Proposition 8.5. Suppose H is a group equipped with a left invariant measure μ, π : H → L
is a surjective group homomorphism, and L is a connected Lie group equipped with a left Haar measure λ. Suppose moreover, d is the distance function induced by a left-invariant MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 33
Riemannian metric L. Let fA be a density function for A with respect to π, and for ε > 0
let f d,ε A be the (d, ε )-density function for A with respect to π. Then for a.e. x ∈ L, we have
fA(x) = lim
ε→0+
f d,ε (x).
Similar statements hold replacing left invariant metric and Haar measure with right invari-ant metric and Haar measure. Proof. The desired conclusion will follow from the two claims below and Fact 8.4.
Claim 1. The metric space ( L, d ) is locally doubling.
Proof of Claim 1. Note that lim
r→0+
λ(Bid L,2r)
λ(Bid L,r ) = 2 n.
where n is the dimension of L. Hence, we can choose K = 2 n + 1 and r0 sufficiently small such that λ(Bid L,2r)/λ (Bid L,r ) < K . Using the fact that both λ and d are left-invariant, we arrive at the desired conclusion. ⊲⊳
Claim 2. If x ∈ L is a Lebesgue point of fA, then lim
ε→0+
f d,ε A (x) = fA(x).
Proof of Claim 2. Note that f d,ε (x) − f (x) = Et∈Bε(x)(f (t) − f (x)). Hence, the conclusion follows from the definition of Lebesgue points. ⊲⊳
The proof for the last statement for the right measures and right metrics is similar.
Small growth in Lie models
In this section we will prove there are sets in the Lie model with small measure growth. We starts by noting some additional properties of the Lie model in the situation we are interested in.
Proposition 9.1. Suppose (G, Σ) is an expansion of a pseudo-connected and pseudo-compact group such that every D ⊆ G definable in Σ is pseudo-measurable, S ⊆ G is an approximate group definable in Σ such that 0 < μ (S) < μ (G) = ∞ for some pseudo-Haar measure μ,and π : 〈S〉 → L is a connected Lie model of S continuously definable in Σ. Then L is a unimodular noncompact connected Lie group. Proof. We first show that L is unimodular. Without loss of generality, we can assume that (G, Σ) is generated by the collection of subsets of G definable in Σ. Then by Lemma 5.5, (G, Σ) is ℵ1-saturated. Let μ also denote its restriction to 〈S〉. Scaling μ by a constant, we can arrange that μ(S) = 1. Hence, by Lemma 5.6, μ witness that S is a definably amenable approximate group. Thus, we are in the setting to apply Lemma 8.1. In particular, the pushforward of μ on L is a Haar measure λ on L. The pseudo-Haar measure is right invariant by Lemma 4.9. It follows that λ is also right invariant. Thus L is unimodular. Next we show that L is noncompact. Suppose to the contrary that this is not the case. By (LM1) in the definition of locally compact model, there is an open neighborhood U
of id L such that π−1(U) ⊆ S. Since L is connected, applying the Kemperman inequality (Fact 7.1), we have
λ(Un+1 ) ≥ min {λ(Un) + λ(U), λ (L)}.MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 34
As L is compact, λ(U) < ∞. Hence, there is n such that λ(Un) ≥ λ(L), which implies that
Un+1 = L. For such n, we see that Sn+1 = 〈S〉. Since S is an approximate group, we then learn that 〈S〉 is contained in contained in finitely many translates of S. So μ(〈S〉) < ∞.On the other hand S is infinitesimal compared to G, Lemma 7.2 applies and, for all n,
μ(Sn+1 ) ≥ μ(Sn) + μ(S).
This results in a contradiction.
Recall that for a given function f : G → R, the superlevel set of f is
L+
f
(t) := {g ∈ G | f (g) ≥ t}.
Lemma 9.2. Suppose G is a group equipped with an invariant measure μ, and S ⊆ G is an approximate group definably amenable with respect to μ such that S has a connected Lie model π : 〈S〉 → L. Let A, B ⊆ 〈 S〉 be μ-measurable such that π(A), π (B) have positive measure and are contained compact sets. Let fA, fB , and fAB be the λ-a.e. density of A, B,and AB . For all constant α, β > 0, there are σ-compact Xα, Y β ⊆ L such that the following holds: (1) Xα ⊆ L+
fA
(α) and λ(Xα) = λ(L+
fA
(α))
(2) Yβ ⊆ L+
fB
(β)) and λ(Yβ ) = λ(L+
fB
(β))
(3) XαYβ is σ-compact, (4) With γ = max {α, β }, we have λ(XαYβ \ L+
fAB
(γ)) = 0 .Proof. We first consider the case where α ≤ β. Using the inner regularity of the Haar measure on L, choose σ-compact Xα ⊆ L+
fA
(α) such that λ(Xα) = λ(L+
fA
(α)). Let d :
L → R≥0 be the distance function induced by a left-invariant Riemannian metric on L.By Proposition 8.5, a.e. y ∈ G we have (6) fB (y) = lim
ε→0+
f d,ε B (t)where f d,ε B is the ( d, ε )-density function of B. Using this observation and the inner regularity of the Haar measure on L, we obtain σ-compact Yβ ⊆ L+
fB
(β) such that λ(Yβ ) = λ(L+
fB
(β)) and Yβ consists only of y ∈ G such that equation (6) holds. The product XαYβ is σ-compact, and hence measurable. It remains to show that
λ(XαYβ \ L+
fAB
(β)) = 0 Again by Proposition 3.2, for a.e. z ∈ XαYβ , we have (7) fAB (z) = lim
ε→0+
f d,ε AB (z).
where f d,ε AB is the ( d, ε )-density function of AB with respect to π. It is enough to consider one such z and show that fAB (z) ≥ β. Let x ∈ Xα and y ∈ Yβ be such that z = xy , and choose a ∈ A such that π(a) = x. If W ⊆ L is measurable, then
μ(aB ∩ π−1(W )) = μ(B ∩ π−1(x−1W )) .
If Ur(y) is a ball centered at y with radius r, then by the left invariance of d, xU (y) is the ball
Ur(z) centered at z and with the same radius. It follows that f d,ε B (x−1t) is the ( d, ε )-density function for aB with respect to π.MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 35
Since aB ⊆ AB , it implies that f d,ε B (x−1t) ≤ f d,ε AB (t) for all t ∈ L. In particular, with
t = z, we get
f d,ε B (y) ≤ f d,ε AB (z)sing equations (6) and (7), we get fB (y) ≤ fAB (z), so fAB (z) ≥ β completing the proof. Note that all the results in Section 8 have obvious counterparts with “left” replaced by “right”. Moreover, the “right” counterparts follow from their “left”-version by considering the opposite group Gop where we define the group operation ·op by g1 ·op g2 := g2g1. Thus, we handle the case where α > β similarly switching the roles of L+
fA
(α) and L+
fB
(β) and use a right-invariant metric instead of a left-invariant metric.
We use the following simple consequence of Fubini’s theorem concerning the superlevel sets [Rud87, Theorem 8.16]:
Fact 9.3. Let μ be a positive measure on some σ-algebra in the set Ω. Suppose f : Ω →
[0 , ∞] be a compactly support measurable function. For every r > 0,
∫
Ω
f r(x) d μ(x) =
∫
R≥0
rx r−1μ(L+
f
(x)) d x.
We now show we can find subsets of the Lie model with small growth.
Proposition 9.4. Suppose G is a pseudo-compact group equipped with a pseudo-Haar mea-sure μ, and S ⊆ G is a open approximate group with connected Lie model π : 〈S〉 → L.Suppose A, B ⊆ 〈 S〉 are μ-measurable such that π(A), π (B) are contained in compact sets. Then for all ε > 0, there are compact X, Y ⊆ L such that BM( X, Y ) ≤ BM( A, B ) + ε.Proof. If L is compact we are done. Hence, we can assume that L is noncompact. By Kemperman’s inequality, for each X, Y ⊆ L, we have BM( X, Y ) ≥ 1. Let r = BM( A, B )and assume there is ε > 0 such that for every pair of sets X, Y in L we always have BM( X, Y ) > r + ε.Let fA, fB , and fAB denote the density function of A, B, and AB respectively. Applying Lemma 9.2, for every α, β > 0 there are Xα ⊆ L+
fA
(α) ⊆ L and Yβ ⊆ L+
fB
(β) ⊆ L such that
λ(Xα) = λ(L+
fA
(α)), λ(Yβ ) = λ(L+
fB
(β)), and (8) λ(XαYβ \ L+
fAB
(max {α, β })) = 0 .
By the assumption we have BM( Xα, Y β) > r + ε. Together with (8) we get
λ (L+
fAB
(max {α, β })) 1
r+ε
≥ λ (XαYβ) 1
r+ε
≥ λ (Xα) 1
r+ε
λ (Yβ ) 1
r+ε
= λ (L+
fA
(α)) 1
r+ε
λ (L+
fB
(β)) 1
r+ε
.(9) To simplify the notation we define functions FA, F B , F AB : R → R that
FA(t) = λ(L+
fA
(t)) 1
r+ε
, FB (t) = λ(L+
fB
(t)) 1
r+ε
, and FAB (t) = λ(L+
fAB
(t)) 1
r+ε
.
Clearly FA, F B , F AB are non-increasing functions, and hence measurable. By (9), we have
FAB (max {α, β }) ≥ FA(α) + FB (β).
for all α, β ∈ R. This means, if we choose α, β ∈ R and assume that FA(α) ≥ t1 and
FB (β) ≥ t2, then FAB (max {α, β }) ≥ t1 + t2. Hence L+
FA
(t1) ∪ L+
FB
(t2) ⊆ L+
FAB
(t1 + t2). Let MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 36
λR be the Lebesgue measure on R, we thus have (10) λR(L+
FAB
(t1 + t2)) ≥ max {λR(L+
FA
(t1)) , λ R(L+
FB
(t2)) }.
Finally, let us compute μ(AB ). Using Fact 9.3 we have
μ(AB ) 1
r+ε
=
(∫
R>0
F r+εAB (t) d t
) 1
r+ε
=
(∫
R>0
(r + ε)tr+ε−1λR(L+
FAB
(t)) d t
) 1
r+ε
.(11) Let Ω A ⊆ R and Ω B ⊆ R be the essential support of FA and FB respectively. Let PA =
λR(Ω A) and PB = λR(Ω B ). Using (10) by letting t1 and t2 approach to 0, the essential support of FAB has a λR-measure at least PA + PB . Thus by a change of variable we have
(∫
R>0
tr+ε−1λR(L+
FAB
(t)) d t
) 1
r+ε
≥
(∫ PA+PB
0
tr+ε−1λR(L+
FAB
(t)) d t
) 1
r+ε
=
(
(PA + PB )r+ε
∫ 10
tr+ε−1λR(L+
FAB
(( PA + PB )t)) d t
) 1
r+ε
.
Now using (10) again, the right hand side of the above inequality is at least
(
(PA + PB )r+ε max
{∫ 10
tr+ε−1λR(L+
FA
(PAt)) d t,
∫ 10
tr+ε−1λR(L+
FB
(PB t)) d t
}) 1
r+ε
.
Using H¨ older’s inequality with exponents (0 , ∞), the above quantity is at least
(
P r+εA
∫ 10
tr+ε−1λR(L+
FA
(PAt)) d t
) 1
r+ε
+
(
P r+εB
∫ 10
tr+ε−1λR(L+
FB
(PB t)) d t
) 1
r+ε
.
By a change of variable, and the definitions of PA and PB , the above quantity is equal to
(∫
R>0
tr+ε−1λR(LFA )( t)) d t
) 1
r+ε
+
(∫
R>0
tr+ε−1λR(LFB )( t)) d t
) 1
r+ε
.
Therefore using (11) we obtain
μ(AB ) 1
r+ε
≥ μ(A) 1
r+ε
μ(B) 1
r+ε
,
which contradicts the assumption that BM( A, B ) = r.
Proof of the main theorems
A main ingredient is the following measure growth gap result by the first two authors [JT22].
Fact 10.1 (Measure expansion gaps for semisimple Lie groups) . Suppose G is a compact semisimple Lie group. If A ⊆ G is compact with sufficiently small measure, then we have
μ(A2) ≥ (2 + 10 −12 )μ(A).
We will also use the solution of the noncompact Kemperman Inverse Problem by An and the authors [AJTZ21, Theorem 3.12]. The main ingredient of this result is the nonabelian Brunn–Minkowski inequality in [JTZ21] by the authors. MEASURE DOUBLING OF SMALL SETS IN SO(3 , R) 37
Fact 10.2 (Kemperman inverse theorem for noncompact groups) . Let G be a connected unimodular locally compact noncompact group equipped with a Haar measure μ, and A, B ⊆
G be a compact set satisfying
μ(AB ) < μ (A) + μ(B) + 2( μ(A)μ(B)) 1
2
.
Then there is a continuous surjective group homomorphism χ : G → R with compact kernel.
We prove a version of theorem 1.2 for simple Lie groups. The proof, in fact, also work for compact groups that does not have R/Z as a quotient by replacing Fact 10.1 with its generalization from [JT22]. As the exponent is not sharp for any group other than SO(3 , R), we refrain from stating the result in the most general form for the sake of readability.
Theorem 10.3. For all ε > 0 and N > 0, there is c = c(ε, N ) such that if G is a compact semisimple Lie group with normalized Haar measure μ, and A, B ⊆ G are chosen with A,
B, and AB measurable, 0 < μ (A), μ (B) < c , and μ(A)/N < μ (B) < Nμ (A), then
μ(AB )1
2+ε
μ (A)1
2+ε
μ(B)1
2+ε
.
Proof. Fix ε and N as in the statement of the theorem. Suppose to the contrary that no such c exists. Then we obtain a sequence (G_n) of compact semisimple Lie groups and sequences (A_n) and (B_n) of sets such that
(1_n) A_n, B_n, A_nB_n ⊆ G_n are measurable;
(2_n) lim_{n→∞} μ_n(A_n) = lim_{n→∞} μ_n(B_n) = 0, and μ_n(A_n)/N < μ_n(B_n) < Nμ_n(A_n), where μ_n is the normalized Haar measure on G_n;
(3_n) for each n, μ_n(A_nB_n)^{1/2+ε} ≤ μ_n(A_n)^{1/2+ε} + μ_n(B_n)^{1/2+ε}, or equivalently, BM(A_n, B_n) ≤ 2 − 4ε′, where we set ε′ = ε/(1 + 2ε).
Now choose an arbitrary nonprincipal ultrafilter U. Let G be the ultraproduct ∏_U G_n of the sequence (G_n), and set A = ∏_U A_n and B = ∏_U B_n. Applying Proposition 4.12 we deduce that
(1) A, B, and AB are pseudo-measurable in G;
(2) the sets A and B are commensurable and infinitesimal compared to G;
(3) BM(A, B) ≤ 2 − 4ε′.
Noting also that G is a pseudo-Lie pseudo-compact group, we apply Proposition 5.12 to get A′, B′, S ⊆ G such that the following conditions hold:
(1′) Let (G, Σ) be the expansion of the group G generated by A′, B′, and S. Then every D ⊆ G definable in Σ is pseudo-measurable.
(2′) A, B, A′, B′, S are all commensurable to one another.
(3′) BM(A′, B′) ≤ 2 − 3ε′.
(4′) S is an approximate group.
(5′) A′ can be covered by finitely many left-translates of S, and B′ can be covered by finitely many right-translates of S.
Next, we apply Proposition 6.5 and replace S if necessary to arrange that it has a connected Lie model π : 〈S〉 → L continuously definable in Σ.
Recall that a semisimple Lie group is connected. Hence, G is a pseudo-connected and pseudo-compact group. Conditions (1′) and (2′) then allow us to apply Proposition 7.3 to get A′′, B′′ ⊆ 〈S〉 such that the following hold:
(1′′) A′′ and B′′ are definable in Σ.
(2′′) A, B, A′, B′, A′′, B′′, S are all commensurable to one another.
(3′′) BM(A′′, B′′) ≤ 2 − 2ε′.
Now apply Proposition 9.4 to produce compact sets X, Y ⊆ L such that BM(X, Y) ≤ 2 − ε′.
Using Fact 10.2, there is a continuous and surjective group homomorphism with compact kernel φ : L → R. Let I ⊆ R be an open interval; then λ_R(I + I) = 2λ_R(I). Setting E = π^{−1}(φ^{−1}(I)), we get μ(E²) = 2μ(E). Since E is σ-definable in Σ, we obtain D ⊆ E definable in Σ such that
\[
\mu(D^2) < (2 + 10^{-12})\,\mu(D).
\]
Since D is definable in Σ, it is pseudo-measurable and is equal to ∏_U D_n with D_n ⊆ G_n measurable. Hence, there is n sufficiently large such that μ_n(D_n²) < (2 + 10^{−12})μ_n(D_n) and μ_n(D_n) is arbitrarily small. This contradicts Fact 10.1, which completes the proof.
We now get a generalization of Theorem 1.1.
Theorem 10.4. For every ε > 0 there is c > 0 such that whenever G is a compact semisimple Lie group with normalized Haar measure μ and A ⊆ G is open with μ(A) < c, we have
\[
\mu(A^2) > (4 - \varepsilon)\,\mu(A).
\]
Proof. Choose δ sufficiently small such that 2^{2−δ} > 4 − ε/10, set N = 1, and let c = c(δ, N) be as in Theorem 10.3. Suppose A ⊆ G is open and μ(A) < c. By the inner regularity of the Haar measure, one can choose compact A′ ⊆ A with μ(A′) ≥ (1 − ε/10)μ(A). Then we have
\[
\mu(A^2) \geq \mu\bigl((A')^2\bigr) \geq \Bigl(4 - \frac{\varepsilon}{10}\Bigr)\mu(A') \geq \Bigl(4 - \frac{\varepsilon}{10}\Bigr)\Bigl(1 - \frac{\varepsilon}{10}\Bigr)\mu(A) > (4 - \varepsilon)\mu(A),
\]
where the last inequality holds because (4 − ε/10)(1 − ε/10) = 4 − ε/2 + ε²/100 > 4 − ε for every ε > 0.
This completes the proof.
Remark 10.5. Fact 10.2 has a natural generalization to all noncompact Lie groups if one can remove the helix dimension (defined in [JTZ21]) term from the Brunn–Minkowski inequality; equivalently, if we have the following form of the Nonabelian Brunn–Minkowski Conjecture [JTZ21, Conjecture 1.4 and Theorem 1.5].
Conjecture 10.6 (Nonabelian Brunn–Minkowski Conjecture). Let G be a simply connected simple Lie group equipped with a Haar measure μ, with d the dimension of G and m the dimension of a maximal compact subgroup of G. Then for all compact sets A, B ⊆ G,
\[
\mu(AB)^{\frac{1}{d-m}} \geq \mu(A)^{\frac{1}{d-m}} + \mu(B)^{\frac{1}{d-m}}.
\]
The method developed in this paper would allow us to prove the generalized Breuillard–Green conjecture for all compact simple Lie groups, given the Nonabelian Brunn–Minkowski Conjecture (Conjecture 10.6) together with a suitable generalization of Fact 10.1, namely that μ(A²) ≥ (2^{d−m−1} + η)μ(A) for some η > 0. It is worth noting that although the Nonabelian Brunn–Minkowski Conjecture remains open, in [JTZ21] the authors proved the following theorem with an extra 2/3 factor on the exponents:
Fact 10.7 (Nonabelian Brunn–Minkowski inequality for Lie groups). Let L be a connected Lie group of dimension d, let m be the maximal dimension of a compact subgroup of L, and set n = d − m. Suppose L is unimodular and equipped with a Haar measure μ. For all compact X, Y ⊆ L with positive measure, we have
\[
\mu(XY)^{\frac{1}{\lceil 2n/3\rceil}} \geq \mu(X)^{\frac{1}{\lceil 2n/3\rceil}} + \mu(Y)^{\frac{1}{\lceil 2n/3\rceil}}.
\]
With Fact 10.7 and a suitable generalization of Fact 10.1, it is possible to use our method to show a weaker version of Conjecture 3.8, namely that μ(A²) > (2^{2(n−m)/3} − ε)μ(A) for sufficiently small A.
Acknowledgements
The authors thank Arturo Rodriguez Fanlo, Ben Green, Ehud Hrushovski, Simon Machado, and Jinhe Ye for discussions.
References
[AJTZ21] Jinpeng An, Yifan Jing, Chieu-Minh Tran, and Ruixiang Zhang, On the small measure expansion phenomenon in connected noncompact nonabelian groups, arXiv:2111.05236 (2021).
[BGT11] Emmanuel Breuillard, Ben Green, and Terence Tao, Approximate subgroups of linear groups, Geom. Funct. Anal. 21 (2011), no. 4, 774–819. MR 2827010
[BGT12] ———, The structure of approximate groups, Publ. Math. Inst. Hautes Études Sci. 116 (2012), 115–221. MR 3090256
[Car15] Pietro Kreitlon Carolino, The Structure of Locally Compact Approximate Groups, ProQuest LLC, Ann Arbor, MI, 2015, Thesis (Ph.D.)–University of California, Los Angeles. MR 3438951
[DE09] Anton Deitmar and Siegfried Echterhoff, Principles of harmonic analysis, Universitext, Springer, New York, 2009. MR 2457798
[Gle52] Andrew M. Gleason, Groups without small subgroups, Ann. of Math. (2) 56 (1952), 193–212. MR 49203
[Gol10] Isaac Goldbring, Hilbert's fifth problem for local groups, Ann. of Math. (2) 172 (2010), no. 2, 1269–1314. MR 2680491
[Gol22] ———, Ultrafilters throughout mathematics, Graduate Studies in Mathematics, vol. 220, American Mathematical Society, Providence, RI, 2022. MR 4454845
[Gre] Ben Green, 100 open problems, manuscript.
[Hel08] Harald Helfgott, Growth and generation in SL_2(Z/pZ), Ann. of Math. (2) 167 (2008), no. 2, 601–623. MR 2415382
[HKST15] Juha Heinonen, Pekka Koskela, Nageswari Shanmugalingam, and Jeremy T. Tyson, Sobolev spaces on metric measure spaces, New Mathematical Monographs, vol. 27, Cambridge University Press, Cambridge, 2015, An approach based on upper gradients. MR 3363168
[HM53] Ralph Henstock and Murray Macbeath, On the measure of sum-sets. I. The theorems of Brunn, Minkowski, and Lusternik, Proc. London Math. Soc. (3) 3 (1953), 182–194. MR 56669
[HN12] Joachim Hilgert and Karl-Hermann Neeb, Structure and geometry of Lie groups, Springer Monographs in Mathematics, Springer, New York, 2012. MR 3025417
[HPP08] Ehud Hrushovski, Ya'acov Peterzil, and Anand Pillay, Groups, measures, and the NIP, J. Amer. Math. Soc. 21 (2008), no. 2, 563–596. MR 2373360
[Hru12] Ehud Hrushovski, Stable group theory and approximate subgroups, J. Amer. Math. Soc. 25 (2012), no. 1, 189–243. MR 2833482
[JT22] Yifan Jing and Chieu-Minh Tran, Measure growth in compact semisimple Lie groups and the Kemperman inverse problem, arXiv:2303.15628 (2022).
[JTZ21] Yifan Jing, Chieu-Minh Tran, and Ruixiang Zhang, A nonabelian Brunn-Minkowski inequality, arXiv:2101.07782 (2021).
[Kem64] Johannes Kemperman, On products of sets in a locally compact group, Fund. Math. 56 (1964), 51–68. MR 202913
[Lee18] John M. Lee, Introduction to Riemannian manifolds, Graduate Texts in Mathematics, vol. 176, Springer, Cham, 2018. MR 3887684
[MW15] Jean-Cyrille Massicot and Frank O. Wagner, Approximate subgroups, J. Éc. polytech. Math. 2 (2015), 55–64. MR 3345797
[PS16] László Pyber and Endre Szabó, Growth in finite simple groups of Lie type, J. Amer. Math. Soc. 29 (2016), no. 1, 95–146. MR 3402696
[Rob96] Abraham Robinson, Non-standard analysis, Princeton Landmarks in Mathematics, Princeton University Press, Princeton, NJ, 1996, Reprint of the second (1974) edition, With a foreword by Wilhelmus A. J. Luxemburg. MR 1373196
[Rud87] Walter Rudin, Real and complex analysis, third ed., McGraw-Hill Book Co., New York, 1987. MR 924157
[Tao08] Terence Tao, Product set estimates for non-commutative groups, Combinatorica 28 (2008), no. 5, 547–594. MR 2501249
[Tao15] ———, Expansion in finite simple groups of Lie type, Graduate Studies in Mathematics, vol. 164, American Mathematical Society, Providence, RI, 2015. MR 3309986
[vdD98] Lou van den Dries, Tame topology and o-minimal structures, London Mathematical Society Lecture Note Series, vol. 248, Cambridge University Press, Cambridge, 1998. MR 1633348
[vdDW84] Lou van den Dries and Alex J. Wilkie, Gromov's theorem on groups of polynomial growth and elementary logic, J. Algebra 89 (1984), no. 2, 349–374. MR 751150
[Yam53] Hidehiko Yamabe, A generalization of a theorem of Gleason, Ann. of Math. (2) 58 (1953), 351–365. MR 58607
Mathematical Institute, University of Oxford, Oxford OX2 6GG, UK
Email address : [email protected]
Department of Mathematics, National University of Singapore, Singapore
Email address : [email protected]
Department of Mathematics, University of California Berkeley, CA, USA
Email address : [email protected]
|
54
|
The Radial Basis Function Kernel
The Radial basis function kernel, also called the RBF kernel, or Gaussian kernel, is a kernel that is in the form of a radial basis function (more specifically, a Gaussian function). The RBF kernel is defined as
\[
K_{\mathrm{RBF}}(x, x') = \exp\bigl[-\gamma\,\|x - x'\|^2\bigr]
\]
where γ is a parameter that sets the "spread" of the kernel.
The RBF kernel as a projection into infinite dimensions
Recall a kernel is any function of the form
\[
K(x, x') = \langle\psi(x), \psi(x')\rangle
\]
where ψ is a function that projects vectors x into a new vector space. The kernel function computes the inner product between two projected vectors.
As we prove below, the ψ function for an RBF kernel projects vectors into an infinite-dimensional space. For Euclidean vectors, this space is an infinite-dimensional Euclidean space. That is, we prove that ψ_RBF : ℝⁿ → ℝ^∞.
Proof:
1. Without loss of generality, let γ = 1/2.
\[
\begin{aligned}
K_{\mathrm{RBF}}(x, x') &= \exp\Bigl[-\tfrac{1}{2}\|x - x'\|^2\Bigr] \\
&= \exp\Bigl[-\tfrac{1}{2}\langle x - x', x - x'\rangle\Bigr] \\
&= \exp\Bigl[-\tfrac{1}{2}\bigl(\langle x, x - x'\rangle - \langle x', x - x'\rangle\bigr)\Bigr] \\
&= \exp\Bigl[-\tfrac{1}{2}\bigl(\langle x, x\rangle - \langle x, x'\rangle - \langle x', x\rangle + \langle x', x'\rangle\bigr)\Bigr] \\
&= \exp\Bigl[-\tfrac{1}{2}\bigl(\|x\|^2 + \|x'\|^2 - 2\langle x, x'\rangle\bigr)\Bigr] \\
&= \exp\Bigl[-\tfrac{1}{2}\|x\|^2 - \tfrac{1}{2}\|x'\|^2\Bigr]\exp\bigl[\langle x, x'\rangle\bigr] \\
&= C\, e^{\langle x, x'\rangle}, \qquad C := \exp\Bigl[-\tfrac{1}{2}\|x\|^2 - \tfrac{1}{2}\|x'\|^2\Bigr]\ \text{is a constant} \\
&= C \sum_{n=0}^{\infty} \frac{\langle x, x'\rangle^n}{n!} \qquad \text{(Taylor expansion of } e^x\text{)} \\
&= C \sum_{n=0}^{\infty} \frac{K_{\mathrm{poly}(n)}(x, x')}{n!}
\end{aligned}
\]
We see that the RBF kernel is formed by taking an infinite sum over polynomial kernels.
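As a quick numerical sanity check of this expansion (our own illustration, not part of the original note; the function names are made up), the following Python sketch compares a direct evaluation of the RBF kernel with the truncated sum of scaled polynomial kernels for the γ = 1/2 case.

import math
import numpy as np

def rbf_kernel(x, xp, gamma=0.5):
    """Direct evaluation: exp(-gamma * ||x - x'||^2)."""
    d = np.asarray(x, float) - np.asarray(xp, float)
    return math.exp(-gamma * d.dot(d))

def rbf_via_polynomial_series(x, xp, terms=30):
    """Truncated series C * sum_n <x, x'>^n / n!  (the gamma = 1/2 derivation above)."""
    x, xp = np.asarray(x, float), np.asarray(xp, float)
    C = math.exp(-0.5 * x.dot(x) - 0.5 * xp.dot(xp))
    dot = x.dot(xp)
    return C * sum(dot ** n / math.factorial(n) for n in range(terms))

x, xp = [1.0, 2.0], [1.5, 1.0]
print(rbf_kernel(x, xp))                 # ~0.5353
print(rbf_via_polynomial_series(x, xp))  # agrees to many decimal places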
2. As proven previously, recall that the sum of two kernels
\[
K_c(x, x') := K_a(x, x') + K_b(x, x')
\]
implies that the ψ_c function is defined so that it forms vectors of the form ψ_c(x) := (ψ_a(x), ψ_b(x)). That is, the vector ψ_c(x) is a tuple where the first element of the tuple is the vector ψ_a(x) and the second element is ψ_b(x). The inner product on the vector space of ψ_c is defined as
\[
\langle\psi_c(x), \psi_c(x')\rangle := \langle\psi_a(x), \psi_a(x')\rangle + \langle\psi_b(x), \psi_b(x')\rangle.
\]
For Euclidean vector spaces, this means that ψ_c(x) is the vector formed by appending the elements of ψ_b(x) onto ψ_a(x), and that
\[
\langle\psi_c(x), \psi_c(x')\rangle := \sum_{i=1}^{\dim(a)} \psi_{a,i}(x)\psi_{a,i}(x') + \sum_{j=1}^{\dim(b)} \psi_{b,j}(x)\psi_{b,j}(x') = \sum_{i=1}^{\dim(a)+\dim(b)} \psi_{c,i}(x)\psi_{c,i}(x').
\]
Since the RBF is an infinite sum over such appendages of vectors, we see that the projection is into a vector space with infinite dimension.
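A tiny numerical illustration of this fact (ours, with made-up toy feature maps): concatenating two feature maps gives a kernel equal to the sum of the two kernels.

import numpy as np

def psi_a(x):  # toy feature map a (made up)
    return np.array([x[0], x[1]])

def psi_b(x):  # toy feature map b (made up)
    return np.array([x[0] * x[1]])

def psi_c(x):  # concatenation of the two feature maps
    return np.concatenate([psi_a(x), psi_b(x)])

x, xp = np.array([1.0, 2.0]), np.array([3.0, 4.0])
lhs = psi_c(x).dot(psi_c(xp))
rhs = psi_a(x).dot(psi_a(xp)) + psi_b(x).dot(psi_b(xp))
print(lhs, rhs)  # both 35.0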
□
The γ parameter
Recall a kernel expresses a measure of similarity between vectors. The RBF kernel represents this similarity as a decaying function of the distance between the vectors (i.e., the squared norm of their difference). That is, if the two vectors are close together, then ‖x − x′‖ will be small, and so long as γ > 0, −γ‖x − x′‖² will be closer to zero. Thus, closer vectors have a larger RBF kernel value than farther vectors. This function has the form of a bell-shaped curve.
The γ parameter sets the width of the bell-shaped curve. The larger the value of γ, the narrower the bell. Small values of γ yield wide bells. This is illustrated in Figure 1.
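To see the effect numerically (again our own made-up illustration), evaluate the kernel at a fixed distance for several values of γ:

import math

dist_sq = 1.0  # squared distance ||x - x'||^2
for gamma in (0.1, 1.0, 10.0):
    print(gamma, math.exp(-gamma * dist_sq))
# 0.1 -> ~0.905 (wide bell), 1.0 -> ~0.368, 10.0 -> ~4.5e-05 (narrow bell)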
Figure 1: (a) Large γ. (b) Small γ.
|
55
|
anti-transpose ( matrix -- newmatrix ) - Factor Documentation
===============
anti-transpose ( matrix -- newmatrix )
Matrix operations
Prev:transpose ( matrix -- newmatrix )
Next:matrix-nth ( pair matrix -- elt )
Vocabulary
math.matrices
Inputs
matrix a matrix
Outputs
newmatrix a matrix
Word description
Like transpose except that the matrix is transposed over the anti-diagonal, so that the anti-diagonal itself is preserved and the main-diagonal is reversed.
Notes
This word is the opposite variant of transpose.
Examples
USING: math.matrices sequences prettyprint ; 5 anti-transpose .
{ { 4 0 0 0 0 } { 0 3 0 0 0 } { 0 0 2 0 0 } { 0 0 0 1 0 } { 0 0 0 0 0 } }
Definition
USING: kernel math.matrices.private sequences ;
IN: math.matrices
: anti-transpose ( matrix -- newmatrix )
    dup empty?
|
56
|
Spherical coordinates - Math Insight
===============
Spherical coordinates
Suggested background
Polar coordinates
Spherical coordinates can be a little challenging to understand at first. Spherical coordinates determine the position of a point in three-dimensional space based on the distance ρ from the origin and two angles θ and ϕ. If one is familiar with polar coordinates, then the angle θ isn't too difficult to understand as it is essentially the same as the angle θ from polar coordinates. But some people have trouble grasping what the angle ϕ is all about.
The following graphics and interactive applets may help you understand spherical coordinates better. On this page, we derive the relationship between spherical and Cartesian coordinates, show an applet that allows you to explore the influence of each spherical coordinate, and illustrate simple spherical coordinate surfaces.
Relationship between spherical and Cartesian coordinates
Spherical coordinates are defined as indicated in the following figure, which illustrates the spherical coordinates of the point P.
The coordinate ρ is the distance from P to the origin. If the point Q is the projection of P to the xy-plane, then θ is the angle between the positive x-axis and the line segment from the origin to Q. Lastly, ϕ is the angle between the positive z-axis and the line segment from the origin to P.
We can calculate the relationship between the Cartesian coordinates (x, y, z) of the point P and its spherical coordinates (ρ, θ, ϕ) using trigonometry. The pink triangle above is the right triangle whose vertices are the origin, the point P, and its projection onto the z-axis. As the length of the hypotenuse is ρ and ϕ is the angle the hypotenuse makes with the z-axis leg of the right triangle, the z-coordinate of P (i.e., the height of the triangle) is z = ρ cos ϕ. The length of the other leg of the right triangle is the distance from P to the z-axis, which is r = ρ sin ϕ. The distance of the point Q from the origin is the same quantity.
The cyan triangle, shown in both the original 3D coordinate system on the left and in the xy-plane on the right, is the right triangle whose vertices are the origin, the point Q, and its projection onto the x-axis. In the right plot, the distance from Q to the origin, which is the length of the hypotenuse of the right triangle, is labeled just as r. As θ is the angle this hypotenuse makes with the x-axis, the x- and y-components of the point Q (which are the same as the x- and y-components of the point P) are given by x = r cos θ and y = r sin θ. Since r = ρ sin ϕ, these components can be rewritten as x = ρ sin ϕ cos θ and y = ρ sin ϕ sin θ. In summary, the formulas for Cartesian coordinates in terms of spherical coordinates are
x = ρ sin ϕ cos θ
y = ρ sin ϕ sin θ
z = ρ cos ϕ.    (1)
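The formulas in (1) translate directly into code. Here is a small Python helper (ours, not part of the page) that maps (ρ, θ, ϕ) to (x, y, z), following the convention used above:

import math

def spherical_to_cartesian(rho, theta, phi):
    """(rho, theta, phi) -> (x, y, z), with theta measured in the xy-plane
    from the positive x-axis and phi measured from the positive z-axis."""
    x = rho * math.sin(phi) * math.cos(theta)
    y = rho * math.sin(phi) * math.sin(theta)
    z = rho * math.cos(phi)
    return x, y, z

# phi = pi/2 puts the point in the xy-plane; theta = 0 puts it on the x-axis
print(spherical_to_cartesian(2.0, 0.0, math.pi / 2))  # ~(2, 0, 0)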
Exploring the influence of each spherical coordinate
The below applet allows you to see how the location of a point changes as you vary ρ, θ, and ϕ. The point P corresponding to the value of the coordinates is shown as a large purple point. The green dot is the point Q, i.e., the projection of P in the xy-plane.
Spherical coordinates. Given the values for spherical coordinates ρ, θ, and ϕ, which you can change by dragging the points on the sliders, the large red point shows the corresponding position in Cartesian coordinates. The green dot is the projection of the point in the xy-plane. You can visualize each of the spherical coordinates by the geometric structures that are colored corresponding to the slider colors. The length of the red line segment from the origin is ρ. The angle of the green portion of the disk in the xy-plane is θ. The angle of the blue portion of the vertical disk is ϕ. You can also move the large red point and the green projection of that point directly with the mouse.
Notice how you can obtain any point even though we restrict ρ ≥ 0, 0 ≤ θ < 2π, and 0 ≤ ϕ ≤ π. Can you see why we only need ϕ to go up to π?
These restrictions removed much of the non-uniqueness of spherical coordinates. Notice there is still non-uniqueness at ρ = 0, at ϕ = 0, and at ϕ = π. When any of these conditions are true, you can change the value of one or more of the other coordinates without moving the point.
Unfortunately, the convention for the notation of spherical coordinates is not standardized across disciplines. For example, in physics, the roles of θ and ϕ are typically reversed. In order to correctly understand someone's use of spherical coordinates, you must first determine what notational convention they are using. You cannot assume they follow the convention used here.
Simple spherical coordinate surfaces
These next three applets may help you understand what each of the three spherical coordinates means. They show what the surfaces ϕ = constant, θ = constant, and ρ = constant look like. The value of the constant is determined by the position of the sliders. In all cases, we restrict the surfaces to the region ρ < 5.
Constant ϕ
What does it mean for a point to have the spherical coordinate ϕ = π/3? Take a look at the surfaces that are defined by the equation ϕ = constant.
Surfaces of constant ϕ in spherical coordinates. The conical surface of ϕ = constant is shown, where the value of ϕ is determined by the blue point on the slider. Only the part of the surface where ρ < 5 is shown.
The surface ϕ = constant is simply a single cone, pointing either upward or downward. If you know that ϕ = π/3, then you know the point is somewhere on a (wide) single cone that opens upward, i.e., the equation ϕ = π/3 specifies a surface that is a single cone opening upward. The equation ϕ = π/2 corresponds to the xy-plane.
The surface ϕ = constant is rotationally symmetric around the z-axis. Therefore it must depend on x and y only via the distance √(x² + y²) from the z-axis. Using the relationship (1) between spherical and Cartesian coordinates, one can calculate that
x² + y² = ρ² sin²ϕ (cos²θ + sin²θ) = ρ² sin²ϕ
or √(x² + y²) = ρ sin ϕ. (Given that 0 ≤ ϕ ≤ π, we know that sin ϕ ≥ 0 and the positive square root is ρ sin ϕ.) If we divide by z = ρ cos ϕ, we obtain a formula for ϕ in terms of Cartesian coordinates
√(x² + y²) / z = tan ϕ.
We can rewrite the surface ϕ = constant as
z = C √(x² + y²)
where C = 1/tan ϕ, which is indeed the equation for a cone.
Constant θ
The surface θ = constant is a half-plane off the z-axis. (It is plotted as a half-disk only because we restrict the plot to ρ < 5.)
Surfaces of constant θ in spherical coordinates. The half-plane surface of θ = constant is shown, where the value of θ is determined by the blue point on the slider. Only the part of the surface where ρ < 5 is shown, which makes the half-plane appear like a half-disk.
If a point has θ = π/2, then you know the point is on the half of the yz-plane where y values are positive. The equation θ = π/2 is the equation for this half-plane.
From relationship (1), the ratio between x and y can be written, for example, as y/x = tan θ. If θ is held constant, then the ratio between x and y is constant. Thus, the equation θ = constant gives a line through the origin in the xy-plane. Since z is unrestricted, we get a vertical plane. Looking back at relationship (1), we see it is only a half plane because ρ sin ϕ cannot be negative.
Constant ρ
Most people don't have trouble understanding what ρ = 3 means. It is the sphere of radius 3 centered at the origin. In general, the surface ρ = constant is a sphere of radius ρ centered at the origin.
Surfaces of constant ρ in spherical coordinates. The spherical surface of ρ = constant is shown, where the value of ρ is determined by the blue point on the slider.
From relationship (1), we can calculate that
x² + y² + z² = ρ² sin²ϕ (cos²θ + sin²θ) + ρ² cos²ϕ = ρ² (sin²ϕ + cos²ϕ) = ρ²
verifying that ρ = constant is the sphere of radius ρ centered at the origin.
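Conversely, the identities ρ² = x² + y² + z², tan θ = y/x, and tan ϕ = √(x² + y²)/z give the inverse map. A small Python sketch (ours), using atan2 to pick the correct quadrant and to handle points on the z-axis:

import math

def cartesian_to_spherical(x, y, z):
    """(x, y, z) -> (rho, theta, phi) with 0 <= theta < 2*pi and 0 <= phi <= pi."""
    rho = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x) % (2 * math.pi)
    phi = math.atan2(math.hypot(x, y), z)  # angle measured from the positive z-axis
    return rho, theta, phi

print(cartesian_to_spherical(0.0, 2.0, 0.0))  # ~(2, pi/2, pi/2)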
Cite this as: Nykamp DQ, "Spherical coordinates." From Math Insight.
Spherical coordinates by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us.
|
57
|
A GENTZEN SYSTEM AND DECIDABILITY FOR RESIDUATED LATTICES PETER JIPSEN Abstract. This note presents details of a result of Okada and Terui which shows that the equational theory of residuated lat-tices is decidable and gives an effective algorithm based on a Gentzen system for propositional intuitionistic linear logic.
The variety of residuated lattices is denoted by RL.
An algebra (L, ∨, ∧, ∗, \, /, 1) is a member of this variety if (L, ∨, ∧) is a lattice, (L, ∗, 1) is a monoid, and x∗y ≤ z iff y ≤ x\z iff x ≤ z/y for all x, y, z ∈ L (these equivalences can be expressed by equations). We use s, t, u for terms in the language of RL, and γ, δ, ρ, σ for (finite) sequences of terms. Concatenation of sequences γ and δ is denoted by γδ, and terms are considered as sequences of length 1. (In this note, multiplication in RL is written explicitly as s∗t.) A pair (σ, t) is called a sequent and is written σ ⊢ t. The symbol ⊢ is read as yields, and the semantic interpretation of s₁s₂…sₙ ⊢ t is that the inclusion s₁∗s₂∗···∗sₙ ≤ t holds in RL. The empty sequence ε is interpreted as the multiplicative unit 1. Using this notation, we list below some quasi-inclusions s₁ ≤ t₁ & ··· & sₙ ≤ tₙ ⇒ s ≤ t in the style of Gentzen rules; each rule is written here in the form "premise(s) ⟹ conclusion (name of rule)".
With the given semantic interpretation it is straightforward to check that all these quasi-inclusions hold in RL. The details in the proof of the completeness lemma (Lemma 3 below) justify the specific form of these rules.
Id:       t ⊢ t
1-left:   γδ ⊢ u ⟹ γ1δ ⊢ u
1-right:  ε ⊢ 1
∗left:    γstδ ⊢ u ⟹ γ(s∗t)δ ⊢ u
∗right:   γ ⊢ s, δ ⊢ t ⟹ γδ ⊢ s∗t
\left:    σ ⊢ s, γtδ ⊢ u ⟹ γσ(s\t)δ ⊢ u
\right:   sγ ⊢ t ⟹ γ ⊢ s\t
/left:    σ ⊢ s, γtδ ⊢ u ⟹ γ(t/s)σδ ⊢ u
/right:   γs ⊢ t ⟹ γ ⊢ t/s
∨left:    γsδ ⊢ u, γtδ ⊢ u ⟹ γ(s∨t)δ ⊢ u
∨right1:  γ ⊢ s ⟹ γ ⊢ s∨t
∨right2:  γ ⊢ t ⟹ γ ⊢ s∨t
∧left1:   γsδ ⊢ u ⟹ γ(s∧t)δ ⊢ u
∧left2:   γtδ ⊢ u ⟹ γ(s∧t)δ ⊢ u
∧right:   γ ⊢ s, γ ⊢ t ⟹ γ ⊢ s∧t
Date: April 3, 2000.
A proof-tree is a tree in which each node is a sequent, and if σ₁ ⊢ t₁, …, σₙ ⊢ tₙ are all the child nodes of node σ ⊢ t, then n ∈ {0, 1, 2} and the rule "σ₁ ⊢ t₁, …, σₙ ⊢ tₙ ⟹ σ ⊢ t" matches one of the above rules. Hence each node has at most 2 child nodes, and a sequent can appear at a leaf iff it matches one of the rules Id or 1-right. A sequent is said to be provable¹ if there exists a proof-tree with this sequent as the root. A subtree of a proof-tree is again a proof-tree, hence all the nodes in a proof-tree are provable.
Note that for each of the rules, there are only a finite number of ways a given sequent can match the denominator of a rule, and this determines exactly what sequents must appear in the numerator of the rule. In each case the sequents in the numerator are structurally simpler than the sequent in the denominator, so the depth of a proof-tree is bounded by the size (defined in a suitable way) of the sequent at the root. Hence it is decidable whether a given sequent is provable.
As an exercise it is instructive to show that sequents such as x∗(y∨z) ⊢ x∗y ∨ x∗z and x\(y∧z) ⊢ x\y ∧ x\z are provable, whereas x∧(y∨z) ⊢ (x∧y)∨(x∧z) is not. For lattice theorists it is also interesting to note that the 6 rules for ∨ and ∧ are essentially equivalent to Whitman's method for deciding if s ≤ t holds in all lattices, where s, t are lattice terms, and γ, δ in the rules are taken to be empty sequences. A different method for deciding lattice inclusions, due to Skolem, is described by Burris.
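For the pure lattice fragment just mentioned, the ∨/∧ rules (with γ, δ empty) amount to Whitman's method, which is easy to implement directly. The following Python sketch (our own, with lattice terms encoded as nested tuples) decides whether s ≤ t holds in all lattices:

def leq(s, t):
    """Whitman's method: does s <= t hold in every lattice?
    Terms are ('var', name), ('join', a, b) or ('meet', a, b)."""
    if s[0] == 'join':                      # s1 v s2 <= t  iff  s1 <= t and s2 <= t
        return leq(s[1], t) and leq(s[2], t)
    if t[0] == 'meet':                      # s <= t1 ^ t2  iff  s <= t1 and s <= t2
        return leq(s, t[1]) and leq(s, t[2])
    if s[0] == 'var' and t[0] == 'var':
        return s[1] == t[1]
    if s[0] == 'meet' and (leq(s[1], t) or leq(s[2], t)):
        return True                         # s1 ^ s2 <= t if some s_i <= t
    if t[0] == 'join' and (leq(s, t[1]) or leq(s, t[2])):
        return True                         # s <= t1 v t2 if s <= some t_j
    return False

x, y, z = ('var', 'x'), ('var', 'y'), ('var', 'z')
# x ^ (y v z) <= (x ^ y) v (x ^ z) fails in general lattices:
print(leq(('meet', x, ('join', y, z)), ('join', ('meet', x, y), ('meet', x, z))))  # False
# but the reverse inclusion holds:
print(leq(('join', ('meet', x, y), ('meet', x, z)), ('meet', x, ('join', y, z))))  # True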
We now prove the result of Okada and Terui which shows that for terms s, t, the sequent s ⊢ t is provable iff the inclusion s ≤ t holds in RL. The forward implication is the soundness of the proof procedure, and follows from the observation that all the rules are valid (as quasi-inclusions) in RL. The reverse implication is completeness, which is proved by defining a semantics for residuated lattices, based on a non-commutative version of the phase spaces of linear logic. In the theory of quantales, such phase spaces can also be viewed as quantic nuclei of powerset quantales. However the argument below does not require any knowledge of linear logic or quantales.
¹ In the literature on Gentzen systems this corresponds to cut-free provable, since the Gentzen system presented here does not mention the so-called cut-rule σ ⊢ t, γtδ ⊢ u ⟹ γσδ ⊢ u.
A non-commutative phase space is of the form (M, L), where M is a monoid (with unit e and x·y written as xy), and L ⊆ P(M) such that (P1) L is closed under arbitrary intersections and (P2) for all X ⊆ M, Y ∈ L we have X\Y and Y/X ∈ L, where X\Y = {z ∈ M : X{z} ⊆ Y}, Y/X = {z ∈ M : {z}X ⊆ Y} and XY = {xy : x ∈ X, y ∈ Y}. We also define
X^C = ⋂{Z ∈ L : X ⊆ Z}   (the closure of X)
X ∨ Y = (X ∪ Y)^C
X ∧ Y = X ∩ Y
X ∗ Y = (XY)^C
1 = {e}^C
Lemma 1. For any non-commutative phase space (M, L), the algebra (L, ∨, ∧, ∗, \, /, 1) is a residuated lattice.
Proof. It is a lattice (in fact a complete lattice) since it is the collection of closed sets of a closure operation. By definition of \ we have Z ⊆ X\Y iff XZ ⊆ Y, and for Y ∈ L this is equivalent to X∗Z = (XZ)^C ⊆ Y. Similarly, Z ⊆ Y/X is equivalent to Z∗X ⊆ Y. It remains to show that ∗ is associative and 1 is an identity.
For all X, Y ⊆ M, XY ⊆ (XY)^C implies Y ⊆ X\(XY)^C, hence
XY^C ⊆ X(X\(XY)^C)^C = X(X\(X∗Y)) ⊆ X∗Y,
where the middle equality makes use of the fact that X\(XY)^C is closed by (P2). Similarly X^C Y ⊆ X∗Y, hence X^C Y^C ⊆ X^C ∗ Y = (X^C Y)^C ⊆ (X∗Y)^C = X∗Y. Since we also have XY ⊆ X^C Y^C, it follows that (XY)^C = (X^C Y^C)^C. Now (X∗Y)∗Z = ((XY)^C Z)^C = ((XY)^{CC} Z^C)^C = ((XY)Z)^C = (X(YZ))^C = (X^C (YZ)^C)^C = X∗(Y∗Z), and 1∗X = ({e}^C X)^C = ({e}^{CC} X^C)^C = ({e}X)^C = X^C = X for X ∈ L.
□ In logic it is common to refer to a structure with an assignment as a model. Let X be a set of variables. A non-commutative phase model M = (M, L, h) is a non-commutative phase space (M, L) together with an assignment h : X →L. As usual, h extends to a homomorphism from the absolutely free term algebra T(X) to L, with h(1) defined as 1.
A term p is satisfied in M if e ∈ h(p). This is equivalent to h(1) ⊆ h(p), which agrees with the usual algebraic notion of satisfaction for the inclusion 1 ≤ p under the assignment h. Since s ≤ t is equivalent to 1 ≤ s\t, the satisfaction of arbitrary inclusions is captured by this notion.
Given a term p, let S(p) be the set of subterms of p. We now define a syntactical model M(p) = (M(p), L(p), h). The universe M(p) is the free monoid generated by S(p), i.e. the set of all finite sequences of subterms of p.
The empty sequence is again denoted by ε.
For γ, δ ∈M(p), u ∈S(p), define [γ δ ⊢u] = {σ ∈M(p) : γσδ ⊢u is provable}.
The notation [u] is shorthand for [ε ε ⊢ u], called the value of u. Further let
L₀ = {[γ δ ⊢ u] : γ, δ ∈ M(p), u ∈ S(p)}
L(p) = {⋂K : K ⊆ L₀}
h(x) = [x] for any variable x in p.
In the subsequent proofs we will frequently make use of the following observation: (∗) For any X ⊆M(p), t ∈S(p), t ∈XC if and only if for all γ, δ ∈M(p) and u ∈S(p), X ⊆[γ δ ⊢u] implies t ∈[γ δ ⊢u].
Lemma 2. M(p) is a non-commutative phase model.
Proof. (P1) holds by construction. To prove (P2), let X ⊆ M(p) and Y ∈ L(p). Now σ ∈ X\Y iff X{σ} ⊆ Y iff for all ρ ∈ X, ρσ ∈ Y = Y^C. By (∗) this is equivalent to showing that Y ⊆ [γ δ ⊢ u] implies ρσ ∈ [γ δ ⊢ u]. This last containment holds iff γρσδ ⊢ u is provable iff σ ∈ [γρ δ ⊢ u]. Hence σ ∈ X\Y iff σ ∈ ⋂{[γρ δ ⊢ u] : ρ ∈ X and Y ⊆ [γ δ ⊢ u]}, which implies that X\Y ∈ L, and Y/X is similar.
□ The following result is the central part of the completeness argument.
Lemma 3. Let M(p) be defined as above. For any subterm t of p we have t ∈h(t) ⊆[t]. In particular, if ε ∈h(t) then the sequent ε ⊢t is provable.
Proof. By induction on the structure of the subterm. If it is a variable of p, say x, then h(x) = [x] by definition, and x ∈[x] since x ⊢x is provable (using Id). Suppose s, t are subterms of p, and s ∈h(s) ⊆[s], t ∈h(t) ⊆[t].
s ∨t ∈h(s ∨t) ⊆[s ∨t]: Note that h(s ∨t) = (h(s) ∪h(t))C. Let γ ∈h(s) ∪h(t). If γ ∈h(s), then γ ∈[s], so γ ⊢s is provable. By the ∨right1 rule it follows that γ ⊢s ∨t is provable, hence γ ∈[s ∨t] and therefore h(s) ⊆[s ∨t]. Similarly h(t) ⊆[s ∨t], and since [s ∨t] is closed, h(s ∨t) ⊆[s ∨t].
To see that s ∨t ∈h(s ∨t), we use observation (∗): Suppose h(s) ∪ h(t) ⊆[γ δ ⊢u] where γ, δ ∈M(p), u ∈S(p). Then γsδ ⊢u and γtδ ⊢u are provable (since s ∈h(s) and t ∈h(t)). Therefore γs∨tδ ⊢u is provable by ∨left and so s ∨t ∈[γ δ ⊢u]. By (∗) we conclude that s ∨t ∈(h(s) ∪h(t))C = h(s ∨t).
s ∧t ∈h(s ∧t) ⊆[s ∧t]: Let γ ∈h(s ∧t) = h(s) ∩h(t). Then γ ∈[s] ∩[t], hence γ ⊢s and γ ⊢t are provable. So now γ ⊢s ∧t is provable by the ∧right rule, which shows that γ ∈[s ∧t].
Suppose h(s) ⊆[γ δ ⊢u]. Then γsδ ⊢u is provable, and by the ∧left1 rule, γs∧tδ ⊢u is provable. Therefore s ∧t ∈[γ δ ⊢u], and it follows from (∗) that s∧t ∈h(s)C = h(s). Similarly s∧t ∈h(t), hence s ∧t ∈h(s ∧t).
s∗t ∈h(s∗t) ⊆[s∗t]: Note that h(s∗t) = (h(s)h(t))C, and let σ ∈ h(s)h(t). Then σ = γδ, where γ ∈h(s) ⊆[s] and δ ∈h(t) ⊆[t].
Therefore γ ⊢s and δ ⊢t are provable, hence by ∗right γδ ⊢s∗t is provable, and so σ ∈[s∗t]. It follows that h(s)h(t) ⊆[s∗t], and since [s∗t] is closed, h(s ∗t) ⊆[s∗t].
Suppose h(s)h(t) ⊆[γ δ ⊢u]. Then st ∈[γ δ ⊢u] since s ∈h(s) and t ∈h(t). Thus γstδ ⊢u is provable, and by ∗left, γs∗tδ ⊢u is provable.
This implies s∗t ∈[γ δ ⊢u], so by (∗), it follows that s∗t ∈h(s ∗t).
s\t ∈h(s\t) ⊆[s\t]: Here h(s\t) = h(s)\h(t) = {γ ∈M(p) : h(s){γ} ⊆h(t)}. Thus γ ∈h(s\t) implies sγ ∈h(t) ⊆[t], since we are assuming s ∈h(s). This means sγ ⊢t is provable, so by \right γ ∈[s\t].
Suppose h(t) ⊆[γ δ ⊢u], then t ∈h(t) implies γtδ ⊢u is provable.
For any σ ∈h(s) ⊆[s] we have that σ ⊢s is provable, so from \left we get that σs\t ∈[γ δ ⊢u]. By (∗) it follows that σs\t ∈h(t) whenever σ ∈h(s), hence h(s){s\t} ⊆h(t). This implies s\t ∈h(s)\h(t) = h(s\t).
The case for s/t ∈ h(s/t) ⊆ [s/t] is similar. Since we are assuming that h has been extended to a homomorphism from the term algebra to L, we have h(1) = 1 = {ε}C. Suppose {ε} ⊆ [γ δ ⊢u]; then γδ ⊢u is provable, and by the 1-left rule 1 ∈ [γ δ ⊢u]. Hence (∗) implies 1 ∈ h(1). Finally, h(1) ⊆ [1] holds since {ε} ⊆ [1] follows from 1-right.
The second statement is a simple consequence: if ε ∈h(t) then ε ∈[t] which means ε ⊢t is provable.
□
Theorem 4. For any term p the following statements are equivalent:
(i) RL ⊨ 1 ≤ p
(ii) ε ∈ h(p) in M(p)
(iii) ε ⊢ p is provable.
Proof. (i) implies (ii) by Lemma 2, (ii) implies (iii) by Lemma 3, and (iii) implies (i) by a standard soundness argument using the observation that all the (quasi-inclusions corresponding to) sequent rules are valid in RL.
Since it was observed earlier that condition (iii) is decidable, and since any equation can be reduced to this form, the equational theory of RL is decidable. Okada and Terui go on to prove that RL is generated by its finite members, and they also consider several subvarieties and expansions of RL. For example, to decide inclusions for bounded residuated lattices, one simply adds the two rules γ0δ ⊢ u and γ ⊢ ⊤. In fact their results are formulated for what amounts to bounded commutative residuated lattices, and the non-commutative case is only mentioned briefly at the end. However their method of proving decidability and the finite model property is very versatile and can perhaps be adapted to cover other subvarieties of RL, such as the varieties of distributive or of cancellative residuated lattices, or the variety generated by residuated chains.
References
S. Burris, Polynomial time uniform word problems, Math. Logic Quarterly 41 (1995), 173–182.
M. Okada and K. Terui, The finite model property for various fragments of intuitionistic linear logic, Journal of Symbolic Logic, 64(2) (1999) 790–802.
|
58
|
Problem - 472A - Codeforces
===============
| Codeforces Round 270 |
| --- |
| Finished |
→ Problem tags
math
number theory
800
→ Contest materials
Codeforces Round #270
Editorial (en)
A. Design Tutorial: Learn from Math
time limit per test
1 second
memory limit per test
256 megabytes
input
stdin
output
stdout
One way to create a task is to learn from math. You can generate some random math statement or modify some theorems to get something new and build a new task from that.
For example, there is a statement called the "Goldbach's conjecture". It says: "each even number no less than four can be expressed as the sum of two primes". Let's modify it. How about a statement like that: "each integer no less than 12 can be expressed as the sum of two composite numbers." Not like the Goldbach's conjecture, I can prove this theorem.
You are given an integer n no less than 12, express it as a sum of two composite numbers.
Input
The only line contains an integer n (12 ≤ n ≤ 10^6).
Output
Output two composite integers x and y (1 < x, y < n) such that x + y = n. If there are multiple solutions, you can output any of them.
Examples
Input
12
Output
4 8
Input
15
Output
6 9
Input
23
Output
8 15
Input
1000000
Output
500000 500000
Note
In the first example, 12 = 4 + 8 and both 4, 8 are composite numbers. You can output "6 6" or "8 4" as well.
In the second example, 15 = 6 + 9. Note that you can't output "1 14" because 1 is not a composite number.
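One simple constructive approach (a sketch of a possible solution, not the official editorial): for even n output 4 and n − 4, which are both even and at least 4, hence composite; for odd n output 9 and n − 9, since 9 = 3·3 and n − 9 is even and at least 4.

n = int(input())
if n % 2 == 0:
    print(4, n - 4)   # n - 4 is even and >= 8, so composite
else:
    print(9, n - 9)   # n - 9 is even and >= 4, so composite; 9 = 3 * 3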
|
59
|
DEADLOCK ON THE BOARD∗
Jason Roderick Donaldson†   Nadya Malenko‡   Giorgia Piacentino§
November 17, 2017
Abstract
We develop a dynamic model of board decision making. We show that directors may knowingly retain the policy they all think is the worst just because they fear they may disagree about what policy is best in the future—the fear of deadlock begets deadlock. Board diversity can exacerbate deadlock. Hence, shareholders may optimally appoint a biased director to avoid deadlock. On the other hand, the CEO may appoint unbiased directors, or even directors biased against him, to create deadlock and thereby entrench himself.
Still, shareholders may optimally give the CEO some power to appoint directors.
Our theory thus gives a new explanation for CEO entrenchment. It also gives a new perspective on director tenure, staggered boards, and short-termism.
∗For helpful comments, we thank Patrick Bolton, Francesca Cornelli, Daniel Ferreira, Slava Fos, Simon Gervais, Armando Gomes, Radha Gopalan, Todd Gormley, Mark Leary, Mina Lee, Andrey Malenko, Ernst Maug as well as seminar audiences at the 2017 WAPFIN@Stern and Washington University in St. Louis.
†Washington University in St. Louis; [email protected].
‡Boston College; [email protected].
§Columbia University; [email protected].
1 Introduction
The board of directors is the highest decision-making authority in a corporation. But sometimes boards struggle to make decisions. In surveys, 67% of directors report the inability to decide about some issues in the boardroom. Moreover, 37% say they have encountered a boardroom dispute threatening the very survival of the corporation (IFC (2014), p. 2).1 Such a “division among the directors” that “may render the board unable to take effective management action”—such deadlock on the board—can even lead directors to “vote wholly in disregard of the interests of the corporation” (Kim (2003), pp. 113, 120).2 Deadlock on the board can be so costly to US corporations that most states have adopted deadlock statutes, which often give courts the power to dissolve a deadlocked corporation, a power they rarely have otherwise, except in the event of default or fraud. A substantial legal literature studies how corporations can resolve deadlock ex post.3 In this paper, we ask how deadlock can be avoided ex ante. Can the right mix of directors ensure a board makes efficient decisions?
And, if so, how should director elections be structured to help achieve the right board composition? Should director elections be staggered or should all directors be chosen at once? Should director tenure be limited? And should shareholders have all the power to choose directors or should the CEO have some power as well?
To address these questions, we develop a dynamic model of board decision making in which deadlock on the board is the result of the fear of future deadlock: directors 1Further, from 2004–2006, 166 directors experienced disputes so severe that they publicly re-signed from their boards at US public corporations, accepting potential damage to their careers (Marshall (2013); see also Agrawal and Chen (2017)).
2Last summer, deadlock on the board made it hard for Uber to appoint a CEO. According to the New York Times, “Uber’s C.E.O. selection...illustrates the high-wire act of herding eight board members...toward consensus.” Moreover, deadlock on the board led one frontrunner for the job, Meg Whitman, to withdraw her name from consideration, saying “it was becoming clear that the board was still too fractured to make progress on the issues that were important to me” (“Inside Uber’s Wild Ride in a Search of a New C.E.O.” New York Times, August 29, 2017). Whitman’s description of Uber’s board mirrors the dictionary definition of deadlock: “a situation, typically one involving opposing parties, in which no progress can be made” (New Oxford American Dictionary).
3See, e.g., Duke (1972), Howe (1967), Kim (2003), Landeo and Spier (2014a, 2014b), Lew (1979– 1980), McDonald (1979).
refuse to replace a current policy with a new one because they fear that other directors will refuse to replace the new policy in the future. Shareholders suffer, since a deadlocked board struggles to remove low-quality policies or executives—a deadlocked board leads to an entrenched CEO. We find that boardroom diversity can exacerbate deadlock (its benefits notwithstanding).4 So can long director tenures, another hotly debated policy issue.5 Moreover, the anticipation of deadlock can affect board composition via director elections. Shareholders elect directors to avoid deadlock, possibly voting for a director who does not represent their interest but will get along with the rest of the board. In contrast, a CEO may aim to create deadlock, possibly favoring a director who does not get along with the rest of the board, since a deadlocked board will struggle to fire him.
Model preview. In the model, a board made up of multiple directors decides on a corporate policy at each date. The model is based on three key assumptions, reflecting how real-world boards operate.
(i) Directors have different preferences over policies. We refer to these different preferences as “biases,” as they could reflect misspecified beliefs or anticipated perks. However, they could also reflect reasonable diversity of opinion (as we formalize in Subsection 7.1). For example, in the context of CEO turnover decisions, an activist’s representative on the board could be biased toward an outside candidate with a history of asset divestitures, and an executive director could be biased toward an internal candidate with experience at the firm.
(ii) The set of feasible policies changes over time. For example, different candidates are available to replace the CEO at each date. (iii) The incumbent stays in place whenever the board does not come to a decision. For example, if the board cannot agree on a replacement, the current CEO keeps the job.
Results preview. First, we ask when the board will replace an existing policy with a new one. We find that deadlock on the board can lead directors to knowingly retain a Pareto-dominated policy. In the context of CEO turnover, this implies that 4See Ferreira (2010) for a survey of the literature on boardroom diversity.
5See, e.g., “Big Investors Question Corporate Board Tenures” (Wall Street Journal, March 23, 2016) and Katz and McIntosh (2014).
a CEO can be so severely entrenched that he is not fired even if all directors prefer a replacement.
To see why, consider a firm with a bad incumbent CEO, whom the board is considering replacing with an alternative. Suppose all directors agree that the alternative is better than the incumbent, but some directors are especially biased toward him. For example, activist representatives could be biased toward an alternative with a history of divestment, as touched on above. Then, if the alternative becomes the new CEO, the biased directors will try to keep him in place, voting down alternatives in the future, no matter how much other directors prefer them—the new CEO will become entrenched. To prevent this, other (sufficiently patient) directors block the alternative today, keeping the bad incumbent CEO in place to retain the option to get their way in the future—the incumbent CEO becomes entrenched. The fear of entrenchment begets entrenchment.
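To make this trade-off concrete, the following back-of-the-envelope computation (an editorial illustration, not part of the paper; the parameter values are invented but satisfy Assumption 1 of the two-date model introduced in Section 2) compares, for a director biased toward type-β policies on a diverse board, the expected payoff from accepting a high-quality type-α candidate today with the payoff from blocking and keeping the bad incumbent.

# Illustrative parameters (made up): policy values, discounting, and probabilities.
v_h, v_l, v_0 = 1.0, 0.5, 0.0      # high, low, and initial-incumbent value
delta = 1.0                         # time preference
p_h = p_l = 0.5                     # quality probabilities
p_alpha = p_beta = 0.5              # bias-type probabilities
b = 6.0                             # director bias, large enough for Assumption 1

# Assumption 1 threshold: b > max((v_h - v_0 + delta*p_l*(v_h - v_l)) / (delta*p_tau*p_l), v_h - v_l)
threshold = max((v_h - v_0 + delta * p_l * (v_h - v_l)) / (delta * p_beta * p_l), v_h - v_l)
assert b > threshold

v_bar = p_h * v_h + p_l * v_l       # expected quality of a fresh alternative

# If the beta-biased director accepts a high-quality alpha candidate at date 1,
# the alpha-biased director keeps him in place at date 2, so the beta director gets:
accept = v_h + delta * v_h

# If she blocks, the bad incumbent stays at date 1, but at date 2 any alternative
# dominates the incumbent and is adopted, giving her v_bar plus her bias with prob p_beta:
block = v_0 + delta * (v_bar + p_beta * b)

print(accept, block, "block" if block > accept else "accept")  # blocking wins here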
This mechanism resonates with practice. For example, when Uber’s recent search for a new CEO was hindered by disagreement among its directors, one director was pushing for a weak CEO who would be easy to replace in the future. According to Bloomberg: The company hopes to lock in a CEO by early September.
The big question is whether the board can get on the same page.
Getting a majority of the eight-person group to support a single candidate is looking to be difficult.... Some...have argued...that Kalanick [a current director and former CEO of Uber] would prefer a weak CEO just to increase his chance of making a comeback (“Behind Uber’s Messy CEO Search Is a Divided Boardroom,” Bloomberg Technology, July 28, 2017, emphasis added).6 Second, we ask how director tenure affects deadlock. In the current debate (e.g., 6See also “Investor Benchmark Capital Sues Uber Ex-CEO Travis Kalanick” (Wall Street Jour-nal, August 10, 2017), according to which “some investors have alleged that Mr. Kalanick...[was] impeding the search, including by rejecting qualified candidates.” Cf. footnote 2.
Katz and McIntosh (2014)),7 arguments against long director tenure focus on concerns about independence and the lack of fresh ideas. Our analysis suggests a distinct yet complementary argument for shorter tenures: in anticipation of a long tenure, directors behave strategically, blocking good candidates, creating deadlock.
This provides a counterpoint to the broadly negative view of corporate short-termism.
Third, we ask how board composition affects deadlock. We find that board diversity has a downside: it can exacerbate deadlock. For example, the deadlock caused by an activist's bias toward divestiture is not resolved by adding some executive directors biased toward investment. These directors will block divestiture-oriented policies, even if they agree that they are optimal today, just to preserve a strong bargaining position for the future. More generally, heterogeneous director biases do not cancel out—they do not yield a board that implements policies in shareholders' interests. Rather, they can yield a board that does not implement any policies at all. This is in line with the empirical findings in Goodstein, Gautam, and Boeker (1994) and Knyazeva, Knyazeva, and Raheja (2013) that diversity of directors' skill and experience is negatively associated with strategic change and firm value, respectively. Our results thus offer a counterpoint to the blanket view that a "board should reflect a diversity of thought, backgrounds, skills, experiences and expertise" (Business Roundtable (2016), p. 11). However, that is not to say that a diverse board is all bad in our model. The short-term deadlock created by opposing biases can also benefit shareholders—by blocking policies that some directors are biased toward, a diverse board can prevent permanent tyranny of a biased board.
Continuing the analysis of board composition, we ask how adding unbiased directors affects deadlock. We find that even if unbiased directors act purely in shareholders' interest, adding them to the board can make shareholders strictly worse off.
7While many countries, such as the UK, Hong Kong, Singapore, and several EU countries, have adopted some form of term limits for independent directors, the US and Canada do not yet have any specific regulatory guidelines on director tenure. However, many institutional investors, such as BlackRock and State Street, deem director tenures in the US as too long and are voting against reappointments, leading commentators to suggest that director tenure is “the next boardroom bat-tle” (Libit and Freier (2016), p. 5; see also Francis and Lublin (2016)).
To see why, observe that if all directors are biased the same way, they are never deadlocked (although sometimes they act against the interest of shareholders). If some directors are replaced with unbiased directors, the biased directors will respond strategically. They have extra incentive to block shareholder-friendly policies to improve their future bargaining positions. That said, unbiased directors can also benefit shareholders. Like a diverse board, they block policies that other directors are biased toward to prevent them from becoming entrenched. In so doing, unbiased directors can appear passive or even biased in the short-term: they may block policies that enhance short-term value so that they can implement policies that maximize long-term value in the future.
This mechanism was recently manifested at railroad company CSX. There, ac-tivist investor Paul Hilal demanded that CSX replace the incumbent CEO by veteran railroad executive Hunter Harrison and, in addition, give Hilal and Harrison six seats on the board. Although Harrison was widely considered to be the perfect candidate to lead CSX, directors were reluctant to agree to the activist’s demands: they proba-bly worried that, given support from the new directors, the new CEO would be hard to replace in the future. Hence, they seemed biased, blocking an alternative that was good in the short term, to prevent entrenchment, which could be bad for the firm in the long term.8 Fourth, we ask how deadlock affects director appointments. As an immediate result of our board-composition analysis, we find that shareholders may choose to appoint a biased director, since adding an unbiased director to a board with biased incumbent directors may create deadlock. This points to a downside of staggered boards. If only a subset of directors is replaced at a time, today’s newly appointed biased directors become tomorrow’s incumbent directors. Hence, if shareholders still want to avoid deadlock tomorrow, they will appoint biased directors again, and so on ad infinitum. The board may remain biased forever, even after shareholders have 8See, e.g., “The $10 Billion Battle for CSX Stock Will Be Decided Shortly” (Fortune, February 15, 2017). Eventually, after gathering the opinion of the company’s investors, the board agreed to the activist’s demands.
5 replaced all the directors.
In practice, shareholders do not have full control over director appointments.
The CEO often exerts influence over the appointment of new board members (e.g., Hermalin and Weisbach (1998), Shivdasani and Yermack (1999)). We find that even if the CEO’s only goal is to retain his position, he will not always appoint directors who are biased towards him. He may prefer directors who are unbiased, or even biased against him. The reason is that they may exacerbate deadlock on the board.
Since deadlock makes it hard to fire the CEO, such strategic director appointments can help the CEO entrench himself.
Colloquially, deadlock on the board can be better for the CEO than buddies on the board.
Fifth, we ask whether shareholders should give the CEO power over director appointments. We find that by ceding power to the CEO, shareholders can commit not to block his preferred policies in the future, and hence prevent deadlock today.
But they should not give the CEO full power over board appointments, so his bias does not take over the board.
Typically, they should give the CEO an interior amount of a power, sometimes letting him choose directors and sometimes choosing them themselves.
Related literature. A relatively small number of theory papers studies strategic decision making by multiple directors on a corporate board.9 We contribute to this literature by including dynamic interactions, which none of these papers study.10 Indeed, none of our results would obtain with a one-shot decision since deadlock would not arise.
We also add to the broader theory literature on boards.11 Our finding that board diversity can exacerbate deadlock complements existing work on the downsides of director independence, since independent directors are likely to have different 9See Baranchuk and Dybvig (2009), Chemmanur and Fedaseyeu (2017), Harris and Raviv (2008), Levit and Malenko (2016), Malenko (2014), and Warther (1998).
10One paper that features directors’ dynamic interactions, but not their strategic decision making, is Garlappi, Giammarino, and Lazrak (2017)—in their model, the board maximizes a weighted average of directors’ utilities. Cf. footnote 14.
11See Adams, Hermalin, and Weisbach (2010) for a survey.
6 views than insiders on the board.12 And our finding that a CEO may prefer to appoint unbiased directors, even when they may fire him in the future, contrasts with Hermalin and Weisbach (1998), another paper in which a CEO appoints directors with the power to fire him.
At an abstract level, our model of board decisions falls within the class of dynamic collective choice models with endogenous status quo explored in the political economy literature, notably in Dziuda and Loeper (2016).13 We embed this literature’s notion of deadlock in a corporate finance framework to apply it to corporate boards.14 This allows us to study board/committee composition, director appointments/elections, and the role of the CEO. This leads to our main results, none of which have parallels in that literature.
Our explanation of entrenchment, which is based only on directors’ strategic behavior, contrasts with those in the finance literature, which are based largely on a CEO’s actively entrenching himself (e.g., “invest[ing] in businesses related to their own background and experience”)15 or directors’ direct utility costs of firing a CEO (e.g., because he is a friend).16 Layout.
In Section 2, we present the model.
In Section 3, we describe the baseline mechanism of deadlock on the board and entrenched policies. In Section 4, we analyze board composition. In Section 5 and Section 6, we study director appointments and who should appoint directors. In Section 7, we discuss robustness and analyze extensions. In Section 8, we discuss our model’s empirical implications 12See Adams and Ferreira (2007), Kumar and Sivaramakrishnan (2008), Laux (2008), and Malenko (2014).
13See also Austen-Smith, Dziuda, Harstad, and Leoper (2016), Duggan and Kalandrakis (2012), Dziuda and Leoper (2017), and Zápal (2012).
14Deadlock in our Proposition 2 is a feature of the equilibrium in Dziuda and Loeper’s (2016) Corollary 2. Garlappi, Giammarino, and Lazrak (2017) also find a version of this result: a board passes up an investment all directors believe is good, knowing they will disagree about how to manage it later. This results in underinvestment, but not full entrenchment because directors in their model do not act strategically (see Appendix Subsection A.2.2).
15Shleifer and Vishny (1989), p. 125. See also, e.g., Zwiebel (1996).
16See, e.g., Chemmanur and Fedaseyeu (2017), Coles, Daniel, and Naveen (2014), Taylor (2010), and Warther (1998).
7 and how to test them. Section 9 concludes.
2 Model There is a board comprising two directors, i ∈{1, 2}, who decide on a policy at each of two dates, t ∈{1, 2}.17 (See Section 7 for N-director and infinite-horizon extensions.) At date t, the board can replace the current “incumbent” policy xt−1 with an alternative policy yt. Decisions are made by strict majority voting: if both directors vote for the alternative yt, then yt becomes the incumbent policy, xt = yt; otherwise, the incumbent policy stays in place, xt = xt−1. The policy in place creates value v(xt) at date t, so shareholders get v(x1) + δv(x2), where δ is the rate of time preference. (We allow for δ > 1, since date 2 may represent more calendar time than date 1.) Directors care about firm value, but they can be biased. Each director i maximizes the sum v(x1) + bi(x1) + δ v(x2) + bi(x2) , where bi is her bias.18 We discuss different interpretations of directors’ biases in Section 7.1.
Policies differ in two dimensions: in how much value they create for shareholders and in how much they appeal to biased directors. We capture shareholder value with “quality” q ∈{h, ℓ}. If the date-t policy xt is of high quality h, then v(xt) = vh; if xt is of low(er) quality ℓ, then v(xt) = vℓ< vh. We capture the appeal to biased directors by adding a “bias” type τ ∈{α, β} to each policy and allowing directors to be either α- or β-biased, where a τ-biased director gets bi(xt) = bτ if the policy xt is type τ and bi(xt) = 0 otherwise. We also allow for unbiased directors, for whom bi(xt) = 0 for all policies xt.
We assume that the qualities and bias types are i.i.d. at date 1 and date 2. pq and pτ denote the probabilities that an alternative yt is of quality q ∈ {h, ℓ} and of bias type τ ∈ {α, β}, respectively. v̄ := phvh + pℓvℓ denotes the average quality of yt and v0 := v(x0) denotes the quality of the initial incumbent policy x0.19 As touched on in the Introduction, disagreement among directors is common on corporate boards.20 To capture this, we assume that the directors' biases are relatively large.
17By restricting attention to two-director boards, we circumvent the concern that different decision-making protocols lead to different results. E.g., unanimity and majority voting are equivalent. That said, this restriction does not drive the results. Proposition 2, which underlies most of our analysis, holds in an N-director version of our model; see the proof and footnote 23.
18v(xt) need not represent the common value of all shareholders, but could rather represent the average value of shareholders with heterogeneous biases; e.g., half of the shareholders could value xt above v(xt) and half below. Thus, directors' biases could also reflect the heterogeneous biases/preferences of individual shareholders.
Assumption 1 Biased directors are sufficiently biased: for τ ∈ {α, β},
bτ > max{ [vh − v0 + δpℓ(vh − vℓ)] / (δpτpℓ), vh − vℓ }. (1)
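As a quick numerical illustration (not from the paper; the parameter values below are hypothetical and chosen only to satisfy vh > vℓ > v0), Assumption 1 can be checked directly:

```python
# Illustrative parameters (hypothetical), used only to evaluate the bound in equation (1).
v_h, v_l, v_0 = 1.0, 0.5, 0.2   # policy values: high quality, low quality, initial incumbent
p_h, p_l = 0.5, 0.5             # probability an alternative is high / low quality
p_tau = 0.5                     # probability an alternative is of bias type tau
delta = 1.0                     # rate of time preference
b_tau = 5.0                     # candidate bias of a tau-biased director

# Assumption 1: b_tau must exceed both terms inside the max in equation (1).
threshold = max((v_h - v_0 + delta * p_l * (v_h - v_l)) / (delta * p_tau * p_l),
                v_h - v_l)
print(f"threshold = {threshold:.2f}, b_tau = {b_tau}, Assumption 1 holds: {b_tau > threshold}")
```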
Solution concept. We solve for subgame perfect equilibria—sequentially rational strategies for each director i ∈ {1, 2} to vote for/against yt for t ∈ {1, 2}, given consistent beliefs—such that directors use the following tie-breaking rules if they are indifferent.
Assumption 2 Directors do not vote against strictly Pareto-dominant policies at the final date.21 If both directors are indifferent, the incumbent stays in place.
Board composition. If a director is unbiased we indicate her type with ν. If a director is biased toward τ-policies, we refer to her as τ-biased and indicate her type with τ (so τ can represent a director type as well as a policy type). We use primes to denote the opposite director or policy: if τ = α, then τ′ = β, and vice versa. Hence, a ν-ν board is an "unbiased" board in which both directors are unbiased; a τ-τ board is a "fully biased" board in which directors have the same bias; a τ-ν board is a "partially biased" board in which one director is τ-biased and the other is unbiased; and a τ-τ′ board is a "diverse" board in which directors have opposing biases.
19An alternative policy yt is one of four types hα, hβ, ℓα, and ℓβ. However, the initial policy x0 is not necessarily one of these types. We allow for this because we are interested in the case in which a policy x0 is entrenched even though it is "worse" than any alternative yt (see Section 3).
20For example, a recent survey of global directors emphasizes the importance of disagreement on boards as follows: "In the boardroom, disagreements are often unavoidable—especially when the board is composed of independent-minded, skilled, and outspoken directors. This is not a bad thing. There should be a debate in the boardroom" (IFC (2014), p. 2).
21In particular, if both directors weakly prefer the alternative y2 to the incumbent x1 and one director strictly prefers y2 to x1, then (i) if one director is indifferent between y2 and x1, she votes in the interest of the director with strict preference and (ii) if the director with strict preference is indifferent between voting for and against (because she is not pivotal), she votes sincerely.
3 Entrenchment
Until stated otherwise (cf. Section 5), suppose that the initial policy x0 is "very bad," in that it is worse for shareholders than low-quality alternatives, v0 < vℓ, and no director is biased toward it, bi(x0) = 0 for i ∈ {1, 2}. Thus, directors prefer any alternative yt to x0. In a one-shot model, they always vote to replace it.
Proposition 1 (One-shot benchmark.) Suppose the board votes only once.22 The incumbent policy x0 is always replaced, regardless of board composition.
Do directors also vote to replace x0 in our dynamic model? Not if the board is diverse, since in a dynamic model directors with opposing biases vote strategically. In particular, with a diverse board, the α-biased director votes against all β-alternatives and the β-biased director votes against all α-alternatives. This leaves x0 in place at date 1, even though both directors would be strictly better off with any other policy.
Proposition 2 (Entrenchment.) Given a diverse (τ-τ′) board, the incumbent policy x0 is entrenched: no replacement is ever appointed at date 1.23
Intuitively, the τ-biased director knows that if a τ′-alternative is chosen, the τ′-biased director will vote against replacing it with any τ-alternative at date 2 (given bτ′ > vh − vℓ by Assumption 1). In contrast, if the incumbent bad policy x0 stays in place, the τ′-biased director will vote in favor of any τ-alternative at date 2. Because the τ-biased director's bias towards τ-policies is sufficiently large (by Assumption 1), she blocks any τ′-policy at date 1. Even though retaining the very bad incumbent is costly in the short term, she wants to preserve the option to get her way in the long term. There is complete deadlock: each director votes against policies that would make her better off today to preserve the option of implementing a policy that would make her even better off in the future.24
Perhaps the most important function of real-world boards is appointing CEOs. If the incumbent policy x0 represents the incumbent CEO, and the alternatives yt represent potential replacement CEOs, our model generates CEO entrenchment, which seems to be a major source of corporate inefficiency (Taylor (2010)). In our model, unlike in others, entrenchment arises without any opportunistic behavior by the CEO or director disutility of firing. Rather, it arises only due to the constraints imposed by the dynamic consistency of multiple strategic directors.
22This is tantamount to supposing there is no second date in our model.
23As we spell out formally in the proof, this result does not rely on there being only two directors. If there are N > 2 directors and N possible alternative policies, then the same intuition leads to the same result: directors block Pareto-dominating policies at date 1 to preserve the option to implement their preferred policy at date 2.
Deadlock in our model results from directors’ concern about board negotiations that will occur in the future—directors vote strategically to increase their chances of implementing their preferred policies later in their tenure on the board. The rate of time preference δ in our model can be viewed as a measure of directors’ remaining tenure: if a director has a short tenure, she does not care about future policies, so δ is low; in contrast, if she has a long tenure, she cares a lot about them, so δ is high.
This interpretation yields the next corollary.
Corollary 1 (Tenure.) Suppose (instead of Assumption 1) that bτ > (vh −vℓ)/pτ for each τ. Given a diverse board, increasing director tenure leads to entrenchment in the sense that x0 is always replaced at date 1 for δ sufficiently small but never replaced for δ sufficiently large.
24This extends the standard real options intuition that it is optimal to delay irreversible decisions (see, e.g., Dixit and Pindyck (1994)). Here, if a director exercises her option to replace the incumbent today, her choice is endogenously irreversible, since the other director will refuse to exercise her option to replace the new incumbent in the future.
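To illustrate the tenure effect behind Corollary 1, the sketch below uses the date-2 payoff comparison from the proof of Proposition 2 and varies δ; the parameter values are hypothetical, with bτ > (vh − vℓ)/pτ as the corollary requires:

```python
# Hypothetical parameters; b_tau exceeds (v_h - v_l)/p_tau = 1.0, as assumed in Corollary 1.
v_h, v_l, v_0 = 1.0, 0.5, 0.2
p_h, p_l = 0.5, 0.5
p_tau = 0.5
b_tau = 1.2
v_bar = p_h * v_h + p_l * v_l    # average quality of an alternative

def blocks_h_other_type(delta):
    """True if a tau-biased director prefers keeping x0 over a high-quality tau'-alternative."""
    keep_x0  = v_0 + delta * (v_bar + p_tau * b_tau)  # x0 stays; any date-2 alternative passes
    accept_h = v_h + delta * v_h                      # the h-tau' policy stays in place at date 2
    return keep_x0 > accept_h

for delta in (0.0, 0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"delta = {delta:>4}: blocks the h-tau' alternative -> {blocks_h_other_type(delta)}")
```

For small δ the director accepts the high-quality alternative (no entrenchment); once δ is large enough she blocks it, which is the sense in which longer remaining tenure creates deadlock.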
Deeming director tenures too long, a number of institutional investors, such as BlackRock and State Street, are now voting against reappointments, leading commentators to suggest that director tenure is "the next boardroom battle" (Libit and Freier (2016), p. 5; see also Francis and Lublin (2016)). The argument for shorter tenures has centered around the idea that after a long tenure, a director may become too close to management and may also lack fresh ideas about the business. Our analysis offers a new, complementary perspective on the downside of long tenures: in anticipation of a long tenure, directors behave strategically, creating deadlock.
More generally, our analysis uncovers a cost of long-termism: long-termism can incentivize strategic voting, exacerbating deadlock.
This provides a counterpoint to the broadly negative view of corporate short-termism; see, e.g., the former Vice President Joe Biden’s opinion that short-termism “saps the economy” (Biden (2016)).
4 Board Composition
Our results so far show a downside of diverse boards: directors with opposing biases create deadlock. Now we ask how board composition can mitigate/aggravate deadlock. Does an unbiased director on the board resolve deadlock? No. The unbiased director votes in the interest of shareholders at each date. But, anticipating as much, the biased director responds strategically. Just as in the case of a diverse board, she strategically blocks high-quality policies not of her preferred type today, anticipating that the unbiased director will make them hard to replace in the future.
Lemma 1 (Cost of director heterogeneity.) Consider a τ-ν-partially bi-ased board. The τ-biased director votes against the high-quality τ ′-alternative and votes in favor of all other alternatives.
Although an unbiased director does not completely resolve deadlock, she prevents x0 from becoming fully entrenched. But perhaps a biased director can resolve deadlock even further? Yes, in fact. If the other director is biased the same way, she does not strategically block alternatives today, knowing she will always be able to implement her preferred policies in the future. With a fully biased board, one director does not have to make sure that the other director is dissatisfied with the incumbent to preserve the option to replace it. Hence, director heterogeneity can be bad for shareholders, since deadlock prevents some high-quality policies from getting through.
Proposition 3 (Shareholder optimal board composition.) Define
∆τ := δ(1 − pτ)ph(vh − vℓ) − (vℓ − v0). (2)
Shareholders are better off with a fully τ-biased board than a τ-ν-partially biased board if and only if
pℓpτ∆τ < pτ′ph[vh − v0 + δpτ′pℓ(vh − vℓ)], (3)
and are always better off with a fully biased or partially biased board than a diverse board.
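The comparison in Proposition 3 can be checked numerically using the shareholder-value expressions derived in Appendix A.5.2 (equations (32), (33), (36), and (42)); the parameter values below are purely illustrative:

```python
# Illustrative parameters satisfying v_h > v_l > v_0.
v_h, v_l, v_0 = 1.0, 0.5, 0.2
p_h, p_l = 0.5, 0.5
p_t, p_tp = 0.5, 0.5        # p_tau and p_tau'
delta = 1.0
v_bar = p_h * v_h + p_l * v_l

delta_tau = delta * (1 - p_t) * p_h * (v_h - v_l) - (v_l - v_0)   # equation (2)

V_fully = (p_t * p_h * (v_h + delta * v_h)
           + p_tp * p_h * (v_h + delta * (p_t * p_l * v_l + (1 - p_t * p_l) * v_h))
           + p_t * p_l * (v_l + delta * (p_t * p_h * v_h + (1 - p_t * p_h) * v_l))
           + p_tp * p_l * (v_l + delta * v_bar))                   # eq. (32)
if delta_tau > 0:   # unbiased director blocks the low-quality tau-alternative
    V_partial = (p_t * p_h * (v_h + delta * v_h) + p_tp * p_h * (v_0 + delta * v_bar)
                 + p_t * p_l * (v_0 + delta * v_bar) + p_tp * p_l * (v_l + delta * v_bar))  # eq. (36)
else:
    V_partial = (p_t * p_h * (v_h + delta * v_h) + p_tp * p_h * (v_0 + delta * v_bar)
                 + p_t * p_l * (v_l + delta * (p_t * p_h * v_h + (1 - p_t * p_h) * v_l))
                 + p_tp * p_l * (v_l + delta * v_bar))              # eq. (33)
V_diverse = v_0 + delta * v_bar                                     # eq. (42): full entrenchment

condition_3 = p_l * p_t * delta_tau < p_tp * p_h * (v_h - v_0 + delta * p_tp * p_l * (v_h - v_l))
print(f"Delta_tau = {delta_tau:.3f}")
print(f"fully biased = {V_fully:.3f}, partially biased = {V_partial:.3f}, diverse = {V_diverse:.3f}")
print(f"condition (3) holds: {condition_3}; fully biased preferred: {V_fully > V_partial}")
```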
Although this result stresses a cost of director heterogeneity, it also suggests a benefit: given a τ-director on the board, a ν-director on the board can prevent low-quality τ-policies from becoming entrenched. Namely, she can strategically block low quality τ-policies that the τ-biased director would make hard to replace.
Corollary 2 (Benefit of director heterogeneity.) Consider a τ-ν-partially biased board. The unbiased director votes against the low-quality τ-alternative if and only if ∆τ > 0 and votes in favor of all other alternatives.
Observe that the unbiased director may appear passive, or even biased, in the short-term, voting against some alternatives even though the incumbent policy is even worse (v0 < vℓ). This is because she wants to avoid being stuck with a low-quality policy in the long-term: by blocking the low-quality alternative that the other direc-tor is biased toward, she increases her chances of implementing a high-quality policy in the future.
In summary, an unbiased director acts in shareholders' interest, strategically blocking alternatives as long as the long-term benefit of implementing a high-quality τ′-policy outweighs the short-term cost of keeping the incumbent policy x0 in place (this benefit and cost correspond to the two terms in ∆τ). However, the biased director responds strategically, which can make shareholders worse off with a partially biased board than with a fully biased board. This is the case whenever the benefit from the unbiased director strategically blocking low-quality τ-alternatives is less than the cost from the τ-biased director strategically blocking high-quality τ′-alternatives (this benefit and cost correspond to the left-hand side and right-hand side of equation (3)).
Finally, note that, like a partially biased board, a diverse board has the benefit of preventing some low-quality policies from being implemented. However, in this baseline specification, this benefit is less valuable than that of the fully biased board, i.e. than preventing x0 from being entrenched at date 1 (and hence getting a better set of policies to choose from at date 2). So shareholders always prefer the fully biased board to the diverse board. This is no longer the case if we relax the assumption that the alternative quality is identically distributed, as we show in Appendix A.5.3, to stress this potential benefit of diversity.
5 Appointing Directors
In this section, we study how deadlock affects director appointments. Suppose, first, that shareholders have full control over director appointments and consider a board with a τ-biased director in place and an empty seat to be filled at date 0. Will shareholders necessarily appoint an unbiased director who will act in their interest?
Or a τ ′-biased director who will counteract the τ-biased director? Not necessarily.
Diverse and partially biased boards are not always good for shareholders, since they are prone to deadlock (Proposition 3). Hence, shareholders may appoint a τ-biased director, creating a fully biased board that makes some bad decisions but avoids deadlock.
Corollary 3 (Shareholders’ director appointments.) Suppose there is a τ-biased director in place and an empty board seat. Shareholders appoint a τ-biased director if condition (3) holds. Otherwise they appoint an unbiased director.
In our setup, shareholders would like to replace all directors at once with unbiased directors, since an unbiased board always acts exactly in their interest. But practical concerns could make this unattractive, because, e.g., some incumbent directors have indispensable expertise. Hence, shareholders’ director appointments must account for the biases of incumbent directors. Given the costs of deadlock, the best response may be to exacerbate these biases, rather than to attenuate them.
Another source of shareholders' inability to replace all directors at once is a staggered board, which "prevents shareholders from replacing a majority of the board of directors without the passage of at least two annual elections" (Bebchuk and Cohen (2005), p. 410). The literature emphasizes that this can prevent efficient takeovers and proxy fights by forcing bidders and activists to win two far-apart elections (Bebchuk, Coates, and Subramanian (2002)). Our analysis suggests it may be even worse than we thought. If shareholders want to avoid deadlock today, they appoint new directors with biases in line with the current incumbent directors. But with staggered elections, today's new biased directors become tomorrow's incumbent directors. And if shareholders want to avoid deadlock tomorrow, they will appoint biased directors again, and so on ad infinitum. In other words, our analysis suggests that staggered elections may lead the board to stay biased forever, even after shareholders have replaced every director with a new director.
CEO appoints directors. In practice, shareholders do not always have full control over director appointments: the CEO can appoint some directors to the board as well (Hermalin and Weisbach (1998), Shivdasani and Yermack (1999)).
Hence, we ask which directors the CEO will appoint. If his sole objective is to keep his position,25 will he always appoint directors who are biased toward him? No. In fact, he may prefer to appoint directors biased against him, since this may exacerbate deadlock on the board and make it hard to fire him (Proposition 2).
25I.e., the CEO's objective function is U = P[employed at date 1] w1 + P[employed at date 2] w2.
Here, we return to the interpretation of the incumbent policy x0 as the incumbent CEO and of the alternatives yt as potential replacement CEOs (cf. the Introduction and Section 3). It follows from Proposition 2 that a “very bad” CEO chooses a diverse board to entrench himself.
Corollary 4 ("Very bad" CEO's board ranking.) Given v0 < vℓ and bi(x0) = 0, the incumbent CEO's preference over boards is as follows:
diverse ≻ partially biased ⪰ unbiased ∼ fully biased. (4)
So far, we assumed that no director was biased toward the incumbent policy, to explore how a "very bad" policy/CEO could become entrenched. Now, we assume that the CEO is of type τ, to explore how the CEO appoints directors biased toward/against him. A high-quality CEO is only at risk of being fired if a director is biased against him and hence always prefers directors biased towards him:
Proposition 4 (High-quality CEO's board ranking.) Given v0 = vh and bi(x0) = bτ, the incumbent τ-CEO's preference over boards is as follows:
fully τ-biased ∼ diverse ∼ τ-ν-partially biased ∼ unbiased ≻ τ′-ν-partially biased ≻ fully τ′-biased. (5)
In contrast to a high-quality CEO, a low-quality CEO is at risk of being fired even by directors biased toward him, since they prefer a high-quality CEO of the same bias type. Thus, like the very bad CEO above, a low-quality CEO wants to exploit deadlock on the board to avoid being fired. In fact, deadlock on the board can be more valuable for him than favoritism from the board.
for some weights or “wages” w1 and w2. Only the proof of Proposition 5 depends on the form of the CEO’s objective.
Proposition 5 (Low-quality CEO's board ranking.) Given v0 = vℓ and bi(x0) = bτ, as long as pτ is sufficiently large,26 the incumbent τ-CEO's preference over boards is as follows:
τ′-ν-partially biased ≻ fully τ-biased ∼ diverse ∼ τ-ν-partially biased ≻ unbiased ≻ fully τ′-biased. (7)
The low-quality τ-CEO benefits from having a τ-biased director on the board to prevent him from being replaced by any τ′-CEO. However, with a τ-biased director on the board, he is always replaced when a high-quality τ-CEO is available. There is no deadlock: even if the other director is τ′-biased, she will not vote strategically at date 1 because she knows that the τ-biased director will prevent her from getting her way at date 2 anyway. Thus, the CEO may be better off with a τ′-ν-partially biased board, because there is deadlock: the τ′-biased director votes against the high-quality τ-CEO (to preserve her option of appointing a τ′-CEO tomorrow) and the unbiased director votes against the low-quality τ′-CEO (to prevent his entrenchment). Hence, given an empty board seat, the CEO may appoint a director biased against him.
Corollary 5 (Low-quality τ-CEO’s director appointments.) Suppose there is an unbiased director in place and an empty board seat. A τ-CEO appoints a τ ′-biased director for some parameters (specified in the proof).
6 Who Should Appoint Directors?
As we touched on in Section 5, CEOs often have the power to appoint new directors.
Could it be optimal for shareholders to give the CEO this power? In our baseline setup, the answer is no. Since directors are appointed at date 0, shareholders appoint the best director(s) for them, taking into account potential deadlock in the future.
26Specifically, we require that
(pτ − pτ′)phw1 > [pτ′ − pτph(pτ′ph + pτpℓ)] w2, (6)
where w1 and w2 are as in footnote 25. In the proof we also give the low-quality τ-CEO's rankings for other parameters.
However, when directors are appointed at date 1, this is no longer necessarily the case, as we show in a modified setup here.
Given that in most firms CEOs have board seats and some power to appoint directors, we assume now that one director represents the CEO. We assume that she is τ-biased, reflecting, e.g., her private benefits of control or concerns about future employment. The other director can be of any bias type. But, unlike above, we assume she retires after date 1 (but before y2 is realized). How her replacement is chosen depends on the CEO’s power, denoted by π: with probability π, the CEO chooses the replacement and with probability 1 −π shareholders do.
Ceding power to the CEO can help prevent deadlock. When the CEO controls the board at date 2, she does not block policies at date 1, since she does not need to improve her future bargaining position. By ceding power to the CEO, shareholders are able to commit not to block her preferred policies at date 2, and hence to improve date-1 outcomes. The next result summarizes how much power shareholders optimally give to the CEO to manage the tradeoff between avoiding deadlock at date 1 and not getting their preferred policy at date 2.
Proposition 6 Define
π̄ := 1 − [vh − v0 + δpℓpτ′(vh − vℓ)] / (δpℓpτ[bτ − (vh − vℓ)]) ∈ (0, 1). (8)
Given a very bad incumbent x0, the shareholder optimal CEO power π∗ is
π∗ = π̄ if v0 + δv̄ < vh + δ(1 − π̄)vh + δπ̄[vh(1 − pτpℓ) + vℓpτpℓ], and π∗ ∈ [0, π̄) otherwise. (9)
Intuitively, shareholders optimally give some power over director elections to the CEO if the costs of deadlock are sufficiently high: observe π∗ > 0 when v0 is much lower than vh, by the condition in equation (9). The cutoff π̄ represents the least power the CEO must have not to block high-quality τ′-alternatives, i.e., to avoid deadlock.
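As a numerical sketch of Proposition 6 (again with purely illustrative parameters), one can evaluate the cutoff in equation (8) and the condition in equation (9) directly:

```python
# Illustrative parameters; b_tau is large enough for Assumption 1 to hold.
v_h, v_l, v_0 = 1.0, 0.5, 0.2
p_h, p_l = 0.5, 0.5
p_t, p_tp = 0.5, 0.5
delta = 1.0
b_tau = 5.0
v_bar = p_h * v_h + p_l * v_l

# Equation (8): minimum CEO power at which the CEO-director stops blocking h-tau' alternatives.
pi_bar = 1 - (v_h - v_0 + delta * p_l * p_tp * (v_h - v_l)) / (delta * p_l * p_t * (b_tau - (v_h - v_l)))

# Equation (9): shareholders choose pi* = pi_bar when avoiding date-1 deadlock is worth
# occasionally losing an h-tau' policy at date 2; otherwise any pi in [0, pi_bar) is optimal.
lhs = v_0 + delta * v_bar
rhs = v_h + delta * (1 - pi_bar) * v_h + delta * pi_bar * (v_h * (1 - p_t * p_l) + v_l * p_t * p_l)
pi_star = pi_bar if lhs < rhs else 0.0   # 0.0 chosen for concreteness in the "otherwise" case
print(f"pi_bar = {pi_bar:.3f}, shareholder-optimal CEO power pi* = {pi_star:.3f}")
```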
7 Robustness and Extensions
7.1 Interpretation of Biases
Heterogeneous biases.
Heterogeneous director biases are the key driver of our results. These biases capture realistic heterogeneity among directors. For example, in start-ups, founding entrepreneurs often sit on boards beside capital providers like VCs, which have different objectives for the corporation. Indeed, early this year at Applied Cleantech, a technology start-up, deadlock on the board was so severe that the investors on the board sued the founder for control. In mature firms, equity blockholders typically sit on the board.
The blockholder could be an heir to a family firm, with an interest in preserving her legacy. Or an activist investor, with an interest in preserving her reputation for fast value-enhancement.
Other kinds of director heterogeneity are common. For example, in Germany it is common for directors to represent stakeholders such as bank creditors or employees/unions.
Director heterogeneity can also reflect heterogeneity among shareholders them-selves, who have different preferences, e.g., due to different beliefs and portfolio positions. In close corporations, diverse shareholders sit directly on the board. But even in public corporations, diverse shareholders appoint directors to represent their diverse interests.
Preferences vs. beliefs. We have described directors' biases as reflecting differences in their preferences (i.e., tastes) over policies. But they can reflect differences in beliefs. To see why, consider the following setup, which is equivalent to ours. At the end of each date the policy xt either "succeeds," generating value V, or "fails," generating zero. Directors agree to disagree about the success probability. An unbiased director believes the policy succeeds with probability πν(xt), so that her value of the policy coincides with shareholders', i.e., πν(xt)V = v(xt), or πν(xt) = v(xt)/V.
A τ-biased director believes the success probability of a τ-policy is πτ(xt), so that her value of the policy is v(xt) + bτ, i.e., πτ(xt)V = v(xt) + bτ, or
πτ(xt) = [v(xt) + bτ]/V = πν(xt) + bτ/V. (10)
Note that, by our definition, although unbiased directors have the same beliefs as shareholders, these are not necessarily the "true" beliefs. "Biased" directors may be able to assess success probabilities better than shareholders.
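A quick illustration of this equivalence, with hypothetical numbers, is below: a taste-based bias bτ maps into an optimism wedge of bτ/V in beliefs, as in equation (10).

```python
# Restating a preference bias as a belief bias (equation (10)); values are hypothetical.
V = 10.0              # payoff if the policy succeeds
v_policy = 6.0        # shareholder value of a tau-policy, so v(x_t) = pi_nu * V
b_tau = 1.5           # the director's bias toward tau-policies

pi_nu  = v_policy / V              # unbiased (shareholder-aligned) success probability
pi_tau = (v_policy + b_tau) / V    # belief that rationalizes the biased director's valuation
print(f"pi_nu = {pi_nu:.2f}, pi_tau = {pi_tau:.2f}, wedge = b_tau / V = {b_tau / V:.2f}")
```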
7.2 N > 2 Directors and Uncertain Biases
Here we show that our results are not specific to boards with just two directors.
Suppose now that there are N directors, but still just two alternatives, and decisions are made by majority voting. All directors are either τ-biased or τ ′-biased. Each director knows her bias, but not the biases of other directors.
Define qτ as the probability that most directors are τ-biased, i.e.,
qτ := P[at least (N + 1)/2 directors are τ-biased]. (11)
Here, we ask whether a very bad incumbent policy x0 can still be entrenched in this setup. Will a τ-biased director prefer to vote against a high-quality τ′ alternative to retain the very bad incumbent x0 at date 1?
At date 2, all directors vote sincerely (it is a weakly dominant strategy). Thus, if the high-quality alternative is in place, it is retained unless the date-2 alternative is type-τ and the majority of directors are τ-biased. Hence, given a high-quality τ′ incumbent, a τ-biased director's expected date-2 payoff is
[τ-biased director's date-2 payoff | x1 is hτ′] = qτ′vh + qτ[pτ(v̄ + bτ) + pτ′vh]. (12)
Whereas her payoff given the incumbent x0 is as in the two-director model, since the alternative is always implemented at date 2:
[τ-biased director's date-2 payoff | x1 = x0] = v̄ + pτbτ. (13)
Adding the date-1 payoffs to the expressions above, we get the following condition for when a τ-biased director prefers to retain the incumbent x0 rather than to implement a high-quality τ′-alternative:
v0 + δ(v̄ + pτbτ) > vh + δ{qτ′vh + qτ[pτ(v̄ + bτ) + pτ′vh]}, (14)
which yields the following proposition.
Proposition 7 With N directors and uncertain biases, a τ-biased director votes against a high-quality τ′-alternative as long as her bias is sufficiently large, i.e.,
bτ > [vh − v0 + δpℓ(1 − qτpτ)(vh − vℓ)] / (δpτqτ′). (15)
This implies that a version of deadlock can arise even if the majority of the board is biased the same way. As long as directors are not certain that most other directors are biased the same way, they vote to keep the very bad incumbent policy in place, blocking high-quality alternatives. Observe, however, that if qτ′ → 0, the condition in the proposition (equation (15)) is never satisfied. In words, if τ-directors know they are in the majority, they never block high-quality alternatives.
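The threshold in equation (15) can be traced out numerically as a function of qτ; the values below are hypothetical and only meant to show how the threshold explodes as qτ′ vanishes:

```python
# N-director version: q_tau is the probability that a majority of directors share the tau bias.
v_h, v_l, v_0 = 1.0, 0.5, 0.2
p_h, p_l = 0.5, 0.5
p_t = 0.5
delta = 1.0
b_tau = 5.0

def blocking_threshold(q_tau):
    """Right-hand side of equation (15); blocking requires b_tau above this threshold."""
    q_tp = 1 - q_tau
    return (v_h - v_0 + delta * p_l * (1 - q_tau * p_t) * (v_h - v_l)) / (delta * p_t * q_tp)

for q_tau in (0.1, 0.5, 0.9, 0.99):
    thr = blocking_threshold(q_tau)
    print(f"q_tau = {q_tau:4.2f}: threshold = {thr:8.2f}, tau-director blocks h-tau' -> {b_tau > thr}")
```

With these numbers, directors who are fairly sure they are in the majority (qτ near one) no longer block, consistent with the discussion above.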
7.3 Infinite Horizon
Here we show that our results are not specific to our two-date setup.
To do so, we consider an infinite horizon version of our baseline model and show that a very bad policy x0 will still be entrenched with a diverse board. Here, we define x0 as entrenched if all ℓ-quality policies are blocked. This definition is stronger than in the baseline model, since it applies to all dates (not just date 1), but weaker in that it applies only to ℓ-quality policies (not to h-quality policies).27 Assume that v0 = 0 (a normalization), that pα = pβ = 1/2 (for simplicity), and that δ ∈ (0, 1), so that the value functions are well defined.
Proposition 8 (Infinite-horizon Entrenchment.) Suppose that v0 = 0, pα = pβ = 1/2, and δ ∈ (0, 1). Given a diverse board, there is entrenchment in the infinite horizon version of the model as long as directors' biases are neither too high nor too low, i.e., as long as
max{ 2[2vℓ − δ(phvh + 2(1 − ph)vℓ)] / (δph(2 − 2δ + δph)), 2(vh − vℓ)/(2 − 2δ + δph) } ≤ bτ/(1 − δ) ≤ 2vh/(δph). (16)
Observe that this result requires not only that directors' biases are not too small, as in the corresponding result in the baseline model (Proposition 2), but also that they are not too large (relative to vh). This ensures that the τ-director does not block the h-quality τ′-alternative.
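For one illustrative parameterization (hypothetical values, with v0 = 0 and pα = pβ = 1/2 as the proposition requires), the band in condition (16) can be evaluated directly:

```python
# Infinite-horizon entrenchment band from equation (16); parameter values are illustrative.
v_h, v_l = 1.0, 0.5
p_h = 0.5
delta = 0.9

lower = max(2 * (2 * v_l - delta * (p_h * v_h + 2 * (1 - p_h) * v_l)) / (delta * p_h * (2 - 2 * delta + delta * p_h)),
            2 * (v_h - v_l) / (2 - 2 * delta + delta * p_h))
upper = 2 * v_h / (delta * p_h)
print(f"entrenchment requires {lower:.2f} <= b_tau / (1 - delta) <= {upper:.2f}")
print(f"band is non-empty: {lower <= upper}")
```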
8 Empirical Implications
Turning to our model's empirical content, we discuss empirical proxies for our model's key quantities and empirical predictions corresponding to its main results.
Proxies. Boards meet in the privacy of the boardroom without disclosing their minutes.
Hence, deadlock is unlikely to be revealed publicly except in the most extreme cases, such as those that wind up in court, that result in director resignations, or that record directors voting in dissent.28 This lack of data makes it hard to test for deadlock directly. But our model suggests a way to test for deadlock indirectly: deadlock is manifested in boards' retaining incumbent policies, even when superior alternatives are available (Proposition 2). Applied to boards' key decisions, CEO turnover and corporate strategy, deadlock can be measured/proxied for by the following:
1. longer CEO tenure and, conditional on CEO termination, longer periods to appoint a new CEO (as with Uber's deadlocked board);
2. slow changes in strategy in response to a changing environment, even at the expense of the firm's competitiveness (as is common in corporations, Hannan and Freeman (1984), Hopkins, Mallette, and Hopkins (2013)).
A number of our predictions require proxies not only for deadlock, but also for directors' "biases" bτ representing their preferences/private benefits or beliefs (Subsection 7.1). Proxies for directors' preferences include the stakeholders they represent—directors could represent VC investors, activists, founding families, employee unions, outside creditors, and corporate executives, all of which are likely to have different preferences over/private benefits from different company policies. Proxies for directors' beliefs include diversity in directors' experience, expertise, backgrounds, or skills, all of which are likely to lead to different views on the best policy for a company.
Predictions. Our main results correspond to testable predictions on the determinants of deadlock.
Prediction 1 All else equal, deadlock is more likely on more diverse boards (cf. Proposition 2).
This is consistent with Goodstein, Gautam, and Boeker's (1994) finding that diversity in directors' occupational or professional backgrounds is associated with less strategic change, such as fewer divestitures and reorganizations. Likewise, it is consistent with Knyazeva, Knyazeva, and Raheja's (2013) finding that diversity in directors' expertise and incentives leads to lower investment and lower firm value.
27Even in the baseline model, the outcome in which x0 stays in place forever is not an equilibrium. At date 2, an alternative is always implemented. Analogously, in the infinite horizon version, this extreme form of entrenchment is not an equilibrium (given the tie-breaking rule in Assumption 2). Both directors would be better off with any alternative, and would have a profitable one-shot deviation to implement it.
28Translation company Transperfect and startup Applied Cleantech are recent examples of deadlock cases that have gone all the way to court. Agrawal and Chen (2017) and Marshall (2013) analyze director resignations resulting from board disputes, which US companies must disclose by a 2004 SEC law. Jiang, Wan, and Zhao (2016) study independent directors voting in dissent, which Chinese firms must disclose by law. Of course, companies typically want to keep such disagreements private, so boards on which directors resign or vote in dissent should make up only a fraction of deadlocked boards.
Prediction 2 All else equal, deadlock is more likely when directors’ remaining tenures are longer (cf. Corollary 1).
In contrast to much of the literature, which focuses on directors’ past tenure, this prediction underscores the costs and benefits of directors’ future tenure. Strategic voting and deadlock on the board result from directors’ incentive to improve their bargaining positions in anticipation of future negotiations. Hence, our model suggests that deadlock is less likely to arise if many directors are likely to leave a board soon, e.g., because they are nearing retirement or they are reaching the legal maximum tenure (in jurisdiction where such a maximum exists, such as the UK, Hong Kong, Singapore, and several EU countries).
In our model, a director strategically blocks an alternative because she wants to prevent other directors from blocking other alternatives in the future. Hence, given data on individual director voting, we have the following testable predictions: Prediction 3 All else equal, a director is more likely to vote against a policy if (a) there are other directors on the board who especially favor this alternative; (b) these other directors have long expected remaining tenure; (c) the director himself has longer expected remaining tenure.
This suggests a director is relatively likely to vote against a CEO candidate nominated by an influential blockholder on the board, since the blockholder is likely to nominate someone she is biased toward. For example, hedge fund activist campaigns are increasingly including the demand to replace the incumbent CEO. Our model suggests that directors on the board are relatively likely to vote against the activist's candidate if the activist has (or will get) board representation. Indeed, as discussed in the Introduction, this is exactly what happened during Paul Hilal's activist campaign at CSX. Likewise, directors at Uber blocked candidates during its CEO search last summer. Some directors were opposed to Meg Whitman because they viewed her as "potentially compromised by her strong affiliation with Benchmark," the VC blockholder that had a seat on the board.29 That said, we acknowledge that our model is stylized, and we have abstracted away from at least one force pushing in the opposite direction: fear of future alienation could make a director reluctant to vote against a powerful director's proposal.
We hope future work will study the theoretical interaction between and empirical relevance of these two mechanisms.
Finally, our analysis of director appointments (Section 5) speaks to how CEO power affects deadlock.
Prediction 4 Among companies with poor quality CEOs, deadlock is more likely if the CEO has more power to appoint directors (cf. Corollary 4).
9 Conclusion
We argue that deadlock on the board can cause pervasive entrenchment, and hence explain why corporations are often too slow to turn over their top management and to adapt their strategies to a changing competitive environment. Our results hinge on the dynamic interaction between multiple directors' decisions, something new to the literature on corporate boards. Indeed, deadlock in our model is entirely a consequence of dynamic consistency: the board is deadlocked because it fears it will become deadlocked in the future.
This dynamic model gives a new take on board composition, director appoint-ments, and director tenure.
It suggests board diversity has a downside: it can exacerbate deadlock. As such, even adding unbiased directors to the board can create deadlock. Hence, shareholders may optimally appoint a biased director to avoid deadlock. On the other hand, the CEO may appoint unbiased directors, or even directors biased against him, to create deadlock and thereby entrench himself. Still, shareholders may optimally give the CEO some power to appoint directors. We also uncover a cost of long director tenure: the more directors focus on the future, the more they vote strategically; they block policies today to preserve a strong bargaining position in the future, creating deadlock in the process.
29"Inside Uber's Wild Ride in a Search of a New C.E.O." New York Times, August 29, 2017.
A Proofs
A.1 Proof of Proposition 1
Given x0 is very bad, voting for the alternative is a strict best response if the other director votes for. Hence, replacing the incumbent is always an equilibrium, and there is no equilibrium in which one director votes for and the other votes against. The tie-breaking rule in Assumption 2 rules out an equilibrium in which either director votes against.30
30Note, however, that without this assumption, both directors voting against would be an equilibrium, since if one director votes against, the incumbent always stays in place, making voting against a weak best response to voting against.
A.2 Proof of Proposition 2
To prove the proposition, we solve the model backward. The key observation is that if the "very bad" incumbent policy x0 is in place at date 2, no alternative is blocked.
This means that directors have incentive to keep x0 in place at date 1 to preserve the option to implement their preferred alternatives at date 2. Thus, at date 1, the τ-biased director blocks all τ ′ alternatives and, symmetrically, the τ ′-biased director blocks all τ alternatives.
We now proceed to characterize a τ-biased director's payoffs at date 2 and then to show that she blocks all τ′ alternatives at date 1. (The argument for the τ′-biased director is identical.)
Date 2. Since bτ > vh − vℓ by Assumption 1, a τ-biased director prefers a low-quality τ-policy to a high-quality τ′-policy. Thus, she blocks any τ′-policy if any τ-policy is in place. A high-quality τ-alternative gets through at date 2 if x0 or a low-quality τ-policy is in place. Thus, the τ-biased director's payoffs as a function of the date-1 policy x1 are as follows:
τ-director's payoff = v0 + δ(v̄ + pτbτ) if x1 = x0,
vℓ + bτ + δ[pτphvh + (1 − pτph)vℓ + bτ] if x1 is type ℓτ,
vℓ + δ[pτ′phvh + (1 − pτ′ph)vℓ] if x1 is type ℓτ′,
vh + bτ + δ(vh + bτ) if x1 is type hτ,
vh + δvh if x1 is type hτ′. (17)
Date 1. Observe immediately that the τ-biased director prefers high-quality τ′ policies to low-quality τ′ policies at date 1. Now observe further that she prefers x0 to high-quality τ′-policies, since
v0 + δ(v̄ + pτbτ) > vh + δvh (18)
if and only if
bτ > [vh − v0 + δ(vh − v̄)] / (δpτ) (19)
= [vh − v0 + δpℓ(vh − vℓ)] / (δpτ), (20)
which is implied by Assumption 1. Thus, she blocks any τ′-alternative policy.
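The payoffs in (17) and the blocking comparison (18) can be tabulated for one illustrative parameterization (the numbers are hypothetical, not from the paper):

```python
# A tau-biased director's total payoff as a function of the date-1 policy x1 (equation (17)),
# on a diverse board with a "very bad" incumbent x0.
v_h, v_l, v_0 = 1.0, 0.5, 0.2
p_h, p_l = 0.5, 0.5
p_t, p_tp = 0.5, 0.5
delta = 1.0
b = 5.0
v_bar = p_h * v_h + p_l * v_l

payoff = {
    "x0":     v_0 + delta * (v_bar + p_t * b),
    "l-tau":  v_l + b + delta * (p_t * p_h * v_h + (1 - p_t * p_h) * v_l + b),
    "l-tau'": v_l + delta * (p_tp * p_h * v_h + (1 - p_tp * p_h) * v_l),
    "h-tau":  v_h + b + delta * (v_h + b),
    "h-tau'": v_h + delta * v_h,
}
for policy, u in payoff.items():
    print(f"x1 = {policy:7s}: payoff = {u:.3f}")
print("blocks h-tau' at date 1:", payoff["x0"] > payoff["h-tau'"])   # inequality (18)
```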
A.2.1 What if there are N directors?
The result above does not depend on the number of directors. To see this, suppose, instead, that there are N directors on the board and N (or more) policies τ1, ..., τN, where each policy τn is the date-2 alternative with probability pτn. Now consider a diverse board, with one director of each bias type. Observe that the condition for a τn-biased director to prefer the incumbent policy x0 to a high-quality policy of a different type (i.e., not her preferred type τn) is the same as in the two-director, two-policy case. It is given by equation (18) above (with τ replaced by τn). The intuition is also unchanged. Each director knows that she will be able to implement her preferred policy at date 2 only if the incumbent x0 stays in place, so she votes against all alternatives not of her preferred type.31
31Unlike in many group decision-making environments with N > 2, this argument is not sensitive to the decision rule. Since N − 1 directors want to keep the incumbent policy in place at date 1, it does not matter if they block alternatives via a veto rule, majority voting, or supermajority. Implementing a high-quality τn-policy requires either that the τn-biased director is a dictator or, similarly, that unanimity is required not to replace the incumbent.
A.2.2 What if there is no strategic interaction?
Here we illustrate that entrenchment is the result of strategic blocking. It does not obtain if the board follows a non-strategic group decision protocol, as in Garlappi, Giammarino, and Lazrak (2017) (although a kind of inertia/underinvestment exists, in line with Garlappi, Giammarino, and Lazrak's findings). Consider a diverse board that maximizes the weighted average of the payoffs of the α-biased director and the β-biased director, as in Garlappi, Giammarino, and Lazrak (2017). Call λ the weight on the α-biased director's payoff. Thus, the payoff from the very bad policy x0 is
payoff(x0) = λv0 + (1 − λ)v0 + δ[λ(v̄ + pαb) + (1 − λ)(v̄ + pβb)] (21)
= v0 + δ[v̄ + (λpα + (1 − λ)pβ)b], (22)
where we have used the fact that the alternative policy y2 is always implemented at date 2, no matter what it is. For entrenchment to occur with this specification, this payoff has to be bigger than the payoff given any alternative; in particular it must be that
payoff(x0) > payoff(x1 is hα) and payoff(x0) > payoff(x1 is hβ). (23)
Consider these two inequalities in turn. When x1 is hα, we require that
v0 + δ[v̄ + (λpα + (1 − λ)pβ)b] > vh + λb + δ(vh + λb). (24)
Given all the v-terms are bigger on the right, the above implies that λpα + (1 − λ)pβ > λ, or
λ < pβ / (1 − pα + pβ) = 1/2, (25)
where we have used the fact that pα + pβ = 1. And, likewise,
v0 + δ[v̄ + (λpα + (1 − λ)pβ)b] > vh + (1 − λ)b + δ(vh + (1 − λ)b), (26)
which implies that λpα + (1 − λ)pβ > 1 − λ, or
λ > (1 − pβ) / (1 + pα − pβ) = 1/2. (27)
Clearly the inequalities in (25) and (27) are inconsistent. Hence, preference aggregation without strategic interaction does not generate entrenchment.
Note that for fixed λ, you can get that the board does not implement one of the policies, either hα or hβ. This is analogous to Garlappi, Giammarino, and Lazrak’s underinvestment. But with strategic directors, the board implements neither of the policies, neither hα nor hβ. This is our entrenchment.
A.3 Proof of Corollary 1
The result follows from two observations. (i) For δ = 0, directors care only about today's policy. Hence, they implement any alternative at date 1 (see the benchmark in Proposition 1). There is no entrenchment.
(ii) For δ →∞, the condition in the corollary implies Assumption 1 (recalling that we allow for δ > 1 since date 2 can represent more calendar time than date 1). Hence, there is entrenchment by Proposition 2.
A.4 Proof of Lemma 1
First observe that, on a τ-ν partially biased board, the unbiased director votes for τ-policies over τ′-policies of the same quality, given the tie-breaking rule in Assumption 2. Thus, the only state in which the unbiased director votes against the τ-biased director at date 2 is when x1 is hτ′ and y2 is ℓτ; in words, when the incumbent is an h-quality τ′-policy and the alternative is an ℓ-quality τ-policy. In anticipation of this, the τ-biased director blocks the hτ′-alternative at date 1 whenever
v0 + δ(v̄ + pτbτ) > vh + δvh + δpτphbτ, (28)
or
bτ > [vh − v0 + δpℓ(vh − vℓ)] / (δpτpℓ), (29)
which holds by Assumption 1.
The τ-biased director votes for all other date-1 alternatives since they all increase her date-1 payoff and do not decrease her date-2 payoff.
A.5 Proof of Corollary 2 and Proposition 3
Here, we prove Corollary 2 first and Proposition 3 second.
A.5.1 Proof of Corollary 2
Consider the unbiased director on the τ-ν board. And suppose the date-1 alternative y1 is ℓτ. If it becomes the incumbent, i.e., if x1 = y1, then the τ-director will block the hτ′-alternative at date 2, since vh − vℓ < bτ by Assumption 1. Thus, the unbiased director's payoffs as a function of the date-1 policy x1 are:
ν-director's payoff (given y1 is ℓτ) = v0 + δv̄ if x1 = x0,
vℓ + δ[pτphvh + (1 − pτph)vℓ] if x1 is ℓτ. (30)
Comparing these payoffs, we find that the independent director blocks the ℓτ-alternative if and only if
vℓ − v0 < δ(1 − pτ)ph(vh − vℓ), (31)
or ∆τ > 0, which is the condition in the proposition.
The unbiased director votes for all other policies: he votes for any high-quality policy, and he does not block the lτ ′-policy because the other director will always agree to replace it by a high-quality alternative in the future.
A.5.2 Proof of Proposition 3
τ-τ board vs. τ-ν board. On a fully τ-biased board, directors always agree at date 2. Hence, there is no strategic blocking at date 1. Since v0 < vℓ, directors will always replace the inferior manager at date 1. Shareholders' expected payoff is
Vτ-τ = pτph(vh + δvh) + pτ′ph[vh + δpτpℓvℓ + δ(1 − pτpℓ)vh] + pτpℓ[vℓ + δpτphvh + δ(1 − pτph)vℓ] + pτ′pℓ(vℓ + δv̄). (32)
Note that the second and third terms follow from the fact that vh − vℓ < bτ by Assumption 1: at date 2, τ-directors will replace an hτ′-policy with an ℓτ-policy but not an ℓτ-policy with an hτ′-policy.
On a τ-ν board, the analysis follows from Lemma 1 and Corollary 2. Recall that the ν-director’s strategy depends on whether ∆τ ≶0. Hence, we consider these cases in turn.
Case 1: ∆τ < 0. Shareholders' expected payoff V^{∆τ<0}_{τ-ν} is
V^{∆τ<0}_{τ-ν} = pτph(vh + δvh) + pτ′ph(v0 + δv̄) + pτpℓ[vℓ + δpτphvh + δ(1 − pτph)vℓ] + pτ′pℓ(vℓ + δv̄). (33)
Hence
Vτ-τ − V^{∆τ<0}_{τ-ν} = pτ′ph[vh + δpτpℓvℓ + δ(1 − pτpℓ)vh − v0 − δv̄] (34)
= pτ′ph[vh − v0 + δpτ′pℓ(vh − vℓ)] > 0. (35)
Case 2: ∆τ > 0. Here, shareholders' expected payoff V^{∆τ>0}_{τ-ν} is
V^{∆τ>0}_{τ-ν} = pτph(vh + δvh) + pτ′ph(v0 + δv̄) + pτpℓ(v0 + δv̄) + pτ′pℓ(vℓ + δv̄). (36)
Hence
Vτ-τ − V^{∆τ>0}_{τ-ν} = pτ′ph[vh + δpτpℓvℓ + δ(1 − pτpℓ)vh − v0 − δv̄] (37)
+ pτpℓ[vℓ + δpτphvh + δ(1 − pτph)vℓ − v0 − δv̄] (38)
= pτ′ph[vh − v0 + δpτ′pℓ(vh − vℓ)] + pτpℓ[vℓ − v0 − δpτ′ph(vh − vℓ)] (39)
= pτ′ph[vh − v0 + δpτ′pℓ(vh − vℓ)] − pτpℓ∆τ. (40)
This is positive exactly when condition (3) in the statement of the proposition is satisfied.
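The algebra leading to (40) is easy to spot-check numerically; in the sketch below (random but admissible parameter draws), the direct difference of (32) and (36) agrees with the closed form in (40):

```python
import random

random.seed(0)
for _ in range(3):
    v_h = random.uniform(1.0, 2.0)
    v_l = random.uniform(0.5, v_h)
    v_0 = random.uniform(0.0, v_l)
    p_h = random.uniform(0.1, 0.9); p_l = 1 - p_h
    p_t = random.uniform(0.1, 0.9); p_tp = 1 - p_t
    delta = random.uniform(0.5, 1.5)
    v_bar = p_h * v_h + p_l * v_l
    delta_tau = delta * (1 - p_t) * p_h * (v_h - v_l) - (v_l - v_0)

    V_tt = (p_t * p_h * (v_h + delta * v_h)
            + p_tp * p_h * (v_h + delta * (p_t * p_l * v_l + (1 - p_t * p_l) * v_h))
            + p_t * p_l * (v_l + delta * (p_t * p_h * v_h + (1 - p_t * p_h) * v_l))
            + p_tp * p_l * (v_l + delta * v_bar))                                   # eq. (32)
    V_tn = (p_t * p_h * (v_h + delta * v_h) + p_tp * p_h * (v_0 + delta * v_bar)
            + p_t * p_l * (v_0 + delta * v_bar) + p_tp * p_l * (v_l + delta * v_bar))  # eq. (36)
    closed_form = p_tp * p_h * (v_h - v_0 + delta * p_tp * p_l * (v_h - v_l)) - p_t * p_l * delta_tau  # eq. (40)
    print(f"difference = {V_tt - V_tn:.6f}, closed form = {closed_form:.6f}")
```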
τ-ν board vs. τ-τ ′ board. Here, we show that shareholders always prefer a τ-ν board to a τ-τ ′ board, i.e.
Vτ-ν −Vτ-τ ′ > 0, (41) where Vτ-τ ′ = v0 + δ¯ v, (42) by Proposition 2, and Vτ-ν is given by equation (33) if ∆τ ≤0 and by equation (36) if ∆τ ≥0.
Again, consider the two cases for ∆τ ≶0.
Case 1: ∆τ < 0. Substituting equations (42) and (33) into inequality (41) and simplifying, we see that the partially biased board is better than the diverse board if
pτph(vh − v0) + pℓ(vℓ − v0) + δpτ²pℓph(vh − vℓ) > 0. (43)
This is always satisfied since vh > vℓ > v0.
Case 2: ∆τ > 0. Substituting equations (42) and (36) into inequality (41) and simplifying, we see that the independent board is better than the diverse board if
pτph[vh − v0 + δ(vh − v̄)] + pτ′pℓ(vℓ − v0) > 0. (44)
This is always satisfied since vh > vℓ > v0.
τ-τ board vs. τ-τ′ board. Here, we show that a τ-biased board is always preferred to a diverse board, i.e.,
Vτ-τ > Vτ-τ′, (45)
where Vτ-τ and Vτ-τ′ are given by equations (32) and (42) respectively. Substituting, a τ-biased board is preferred to a diverse board if and only if
pτph(vh + δvh) + pτ′ph[vh + δpτpℓvℓ + δ(1 − pτpℓ)vh] + pτpℓ[vℓ + δpτphvh + δ(1 − pτph)vℓ] + pτ′pℓ(vℓ + δv̄) > v0 + δv̄.
Simplifying and rearranging, we get that a τ-biased board is preferred to a diverse board if and only if
v̄ − v0 + δpτ²pℓph(vh − vℓ) + δpτ′²phpℓ(vh − vℓ) > 0, (46)
which is always satisfied.
A.5.3 Non-stationary Qualities
Here, we relax the assumption that alternative qualities are identically distributed.
In this setup, a diverse board can be preferred to a biased board. Hence, we can highlight that a diverse board has the benefit of preventing some low-quality poli-cies from being implemented and becoming entrenched (as a partially biased board does (Corollary 2)). It has this benefit in the baseline model too, but it is always outweighed by another benefit of the biased board: by preventing the date-1 en-trenchment of x0, the biased board gets a better set of policies to choose from at date 2.
We use the following notation. As above, ph denotes the probability that the alternative is of type h at date 1, but now let p̂h ≠ ph denote the probability that the alternative is of type h at date 2. Analogously, as above, v̄ = phvh + pℓvℓ denotes the average value of date-1 alternatives, but let v̂ := p̂hvh + p̂ℓvℓ denote the average value of date-2 alternatives (where p̂ℓ := 1 − p̂h).
We now compare the value Vτ-τ of a τ-τ board with the value Vτ-τ′ of a τ-τ′ board: Vτ-τ ≥ Vτ-τ′ if and only if
pτph(vh + δvh) + pτ′ph[vh + δpτp̂ℓvℓ + δ(1 − pτp̂ℓ)vh] + pτpℓ[vℓ + δpτp̂hvh + δ(1 − pτp̂h)vℓ] + pτ′pℓ(vℓ + δv̂) − (v0 + δv̂) ≥ 0. (47)
Simplifying this expression is lengthy (although elementary), so we divide it into a few steps.
• Date-1 value. We can group the terms not multiplied by δ as follows, phvh + pℓvl −v0 = ¯ v −v0.
(48) This is always positive, implying a fully biased board always increases the date-1 value.
• Date-2 value. We can group the terms multiplied by δ as follows (omitting δ):
pτ[phvh + pℓ(pτp̂hvh + (1 − pτp̂h)vℓ) − v̂] (49)
+ pτ′[ph(pτp̂ℓvℓ + (1 − pτp̂ℓ)vh) + pℓv̂ − v̂]. (50)
The first term in square brackets above can be rewritten as
phvh + pτpℓp̂h(vh − vℓ) + pℓvℓ − v̂ = pτpℓp̂h(vh − vℓ) + v̄ − v̂.
The second term in square brackets above can be rewritten as
−pτphp̂ℓ(vh − vℓ) + phvh + pℓv̂ − v̂ (51)
= −pτphp̂ℓ(vh − vℓ) + phvh − (1 − pℓ)v̂ (52)
= −pτphp̂ℓ(vh − vℓ) + ph(vh − v̂) (53)
= −pτphp̂ℓ(vh − vℓ) + ph[vh − (1 − p̂ℓ)vh − p̂ℓvℓ] (54)
= −pτphp̂ℓ(vh − vℓ) + php̂ℓ(vh − vℓ) (55)
= pτ′php̂ℓ(vh − vℓ). (56)
In summary, the fully τ-biased board is better than the diverse board if and only if
pτ[pτpℓp̂h(vh − vℓ) + v̄ − v̂] + pτ′²php̂ℓ(vh − vℓ) ≥ 0.
To see that this may be violated, set vℓ = 0 and p̂h = 1, so p̂ℓ = 0, v̂ = vh, and v̄ = phvh. The condition becomes
pτ[pτpℓvh + phvh − vh] ≥ 0, (57)
which is never satisfied since pℓpτ + ph = 1 − pℓ(1 − pτ) < 1.
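As a quick numerical check of this counterexample logic (illustrative values, with vℓ = 0 and p̂h = 1 as above), the date-2 term from the summary condition is negative, so the diverse board can be preferred:

```python
# Non-stationary qualities: hat_p_h is the date-2 probability of a high-quality alternative.
v_h, v_l = 1.0, 0.0            # v_l = 0 as in the counterexample
p_h, p_l = 0.6, 0.4
p_t, p_tp = 0.5, 0.5
hat_p_h = 1.0
hat_p_l = 1 - hat_p_h
v_bar = p_h * v_h + p_l * v_l
v_hat = hat_p_h * v_h + hat_p_l * v_l

# Date-2 part of the fully-biased-vs-diverse comparison (the summary condition before (57)).
date2_term = (p_t * (p_t * p_l * hat_p_h * (v_h - v_l) + v_bar - v_hat)
              + p_tp ** 2 * p_h * hat_p_l * (v_h - v_l))
print(f"date-2 term = {date2_term:.3f}  (negative: the diverse board can be preferred)")
```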
36 A.6 Proof of Corollary 3 The result follows immediately from Proposition 3.
A.7 Proof of Corollary 4 First observe that, since v0 < vl (the incumbent CEO is “very bad”), he is always fired at date 2. Hence, he just wants to minimize the probability he is fired at date 1, which varies with board composition as follows.
• With a τ-τ or ν-ν board, he is always fired at date 1, since there is no strategic blocking at date 1.
• With a τ-τ ′ board, on the other hand, he is never fired at date 1—he is en-trenched by Proposition 2.
• With a τ-ν board, he is retained when y1 is type hτ ′ (by Lemma 1) and, for some parameters, when y1 is type ℓτ (Corollary 2) and fired otherwise.
This yields the ranking stated in the corollary.
A.8 Proof of Proposition 4 • With a τ-τ, τ-τ ′, τ-ν, or ν-ν board, the CEO is never fired: the hτ-CEO is the best policy for both τ-biased directors and unbiased directors. Hence he is never fired because they always block less-preferred alternatives (and keep him given equally-preferred alternatives by Assumption 2).
• With a τ ′-ν board, he gets fired the first time there is an hτ ′-alternative, i.e.
yt is type hτ ′.
• With a τ ′-τ ′ board he is fired the first time there is a τ ′-alternative, i.e. yt is type hτ ′ or ℓτ ′.
37 For a given yt the τ ′-ν board fires the CEO only if the τ ′-τ ′ board does so (and the τ ′-τ ′ board also fires the CEO for other realizations of yt). Hence, the CEO (strictly) prefers the τ ′-ν board to the τ ′-τ ′ board.
In summary, the CEO’s ranking is as stated in the proposition.
A.9 Proof of Proposition 5 CEO’s objective. Recall that the CEO maximizes his expected tenure (see footnote 25). Since we want to allow date 1 and date 2 to represent different amounts of calendar time, we assume his objective is given by U = P h employed at date 1 i w1 + P h employed at date 2 i w2, (58) where the weights wt could represent his wage at date t or, alternatively, the ratio w2/w1 could represent his rate of time preference if he just values being employed.
CEO payoff given board compositions. Consider each of the six possible board compositions.
1. τ-τ board. Here, the CEO is fired the first time there is an hτ-alternative.
(Recall that the τ-biased director prefers the ℓτ-CEO to the hτ ′-CEO by As-sumption 1.) Hence, Uτ-τ = (1 −pτph)w1 + (1 −pτph)2w2.
(59) 2. τ-ν board. Here, the board’s decision rule coincides with that of the τ-τ board.
Hence, Uτ-ν = (1 −pτph)w1 + (1 −pτph)2w2.
(60) 3. τ-τ ′ board. Here, as in the simpler cases above, there is no strategic blocking.
The reason is that the τ-director blocks any τ ′-alternative (since she prefers the ℓτ-CEO to an hτ ′-CEO by Assumption 1). As a result, the τ ′-director knows 38 she can never hire a τ ′-CEO, and hence wants to hire a high-quality τ-CEO as soon as possible. The CEO is fired the first time there is a hτ-alternative, as with the τ-τ and τ-ν boards. Hence, Uτ-τ ′ = (1 −pτph)w1 + (1 −pτph)2w2.
(61) 4. ν-ν board. Here, the CEO is fired the first time there is a high-quality alterna-tive. Hence, Uν-ν = (1 −ph)w1 + (1 −ph)2w2 = pℓw1 + p2 ℓw2.
(62) 5. τ ′-τ ′ board. Here, the CEO is fired the first time there is a τ ′- or high-quality alternative. Hence, Uτ ′-τ ′ = pτpℓw1 + (pτpℓ)2w2.
6. τ ′-ν board.
Here, there is strategic blocking.
Specifically, by an argument analogous to that of Lemma 1, the τ ′-biased director strategically blocks hτ-alternatives, since vh + δ vh + pτ ′phbτ ′ < vℓ+ δ ¯ v + pτ ′bτ ′ (63) by Assumption 1.32 And, by an argument analogous to Corollary 2, the inde-pendent director blocks ℓτ ′: she is indifferent between the incumbent ℓτ and the alternative ℓτ ′ today, but if ℓτ ′ is appointed today, the τ ′-biased director will prevent her from appointing an hτ-alternative in the future.
32To see this, observe equation (63) can be rewritten as bτ ′ > vh −vℓ+ δ(vh −¯ v) δpℓpτ ′ = vh −vℓ+ δpℓ(vh −vℓ) δpℓpτ ′ , (64) which is implied by Assumption 1 given v0 < vℓ.
39 Hence, Uτ ′-ν = (1 −pτ ′ph)w1 + (1 −pτ ′ph)pτpℓw2.
(65) CEO’s ranking. From the computations above, we can observe immediately that Uτ-τ = Uτ-ν = Uτ-τ ′ > Uν-ν > Uτ ′-τ ′.
(66) The question is how Uτ ′-ν compares with the above.
• Uτ ′-ν > Uτ-τ if (1 −pτ ′ph)w1 + (1 −pτ ′ph)pτpℓw2 > (1 −pτph)w1 + (1 −pτph)2w2 (67) or (pτ −pτ ′)phw1 + (−pτ ′ −pτ ′pτphpℓ−p2 τp2 h + pτph)w2 > 0 (68) or (pτ −pτ ′)phw1 > pτ ′ −pτph(pτ ′ph + pτpℓ) w2.
(69) This is always satisfied for pτ sufficiently large (i.e. pτ ′ sufficiently small), giving the ranking in the proposition.
• Uτ ′-ν > Uν-ν if (pℓpτ + phpτ ′)w1 + (pℓpτ + phpτ ′)(1 −pℓpτ)w2 > pℓw1 + p2 ℓw2.
(70) • Uτ ′-ν > Uτ-τ ′ if (pℓpτ+phpτ ′)w1+(pℓpτ+phpτ ′)(1−pℓpτ)w2 > (1−pτph)w1+(1−pτph)2w2. (71) In summary, τ-τ ∼τ-ν ∼τ-τ ′ ≻ν-ν ≻τ ′-τ ′ and the ranking of τ ′-ν depends on the inequalities (67), (70), and (71) above, as stated in the proposition.
40 A.10 Proof of Proposition 6 Appointments. Consider appointment decision after date 1. Since this is the last date, the new board will make a one-shot decision. By Proposition 1, directors vote sincerely, for their preferred policy. Hence, whoever makes the appointment chooses the director that represents its interests: shareholders appoint an unbiased director; the CEO appoints a τ-biased director. (This is in contrast to our analysis in Section 5. There, appointments took into account strategic voting and deadlock.) Date 2. At date 2, the board is fully biased with probability π and partially τ-biased with probability 1 −π.
First consider the fully biased board. There are four possibilities for the incum-bent policy x1: • If a τℓpolicy is in place, it is replaced only with a τh policy (and kept in place otherwise).
• If a τh policy is in place, it is never replaced.
• If a τ ′l policy is in place, the board replaces it with any alternative except τ ′l.
• If a τ ′h policy is in place, it is replaced by τl and τh and is not replaced otherwise.
• If x0 is in place, it is always replaced.
Now consider the partially biased board. There are five possibilities for the in-cumbent policy x1: • If a τℓpolicy is in place, it is replaced only with a τh policy (the τ-biased CEO blocks all other alternatives) • If a τh policy is in place, it is never replaced.
• If a τ ′ℓpolicy is in place, it is replaced by any alternative except τ ′ℓ.
41 • If a τ ′h policy is in place, it is replaced only by a τh policy (the unbiased director votes against low-quality alternatives).
• If x0 is in place, it is always replaced.
Date 1. Since one director retires at the end of date 1, she only maximizes her date-1 payoff. She does not vote strategically, but rather votes for of any alternative regardless of her bias, as in the one-shot benchmark (Proposition 1).
Consider the CEO’s voting decision. There are four possible alternatives.
• If y1 is of type τ (τℓor τh) she votes for it given her bias.
• If y1 is of type τ ′l, she votes for it, since with the policy in place, she will be still able to implement any τ policy at date 2 regardless of the composition of the board.
• If y1 is of type τ ′h, voting for/against comes with a trade-off. If she votes for, and policy becomes the incumbent, her payoffis vh + δ(1 −π) vh + bτpτph + δπ bτpτ + vlpτpl + vh (1 −pτpl) .
(72) If she votes against, her payoffis v0 + δ ¯ v + bτpτ .
(73) Comparing the two payoffs, the CEO votes for the τ ′h-alternative if and only if π ≥1 −vh −v0 + δplpτ ′ (vh −vl) δplpτ bτ −(vh −vl) =: ¯ π (74) because, by Assumption 1, the denominator is positive. Note also that, by Assumption 1, ¯ π ∈(0, 1).
Shareholder optimal CEO power. Now we calculate shareholders’ expected payoffs in case π ≥¯ π and π < ¯ π.
42 Case 1: π ≥¯ π.
In this case, the CEO does not block the τ ′h alternative.
Shareholder value is plpτ (vl + δvl(1 −pτph) + vhpτph (75) + phpτ(vh + δvh) + plpτ ′(vl + δ¯ v) (76) + phpτ ′ vh + δ(1 −π)vh + δπ vh (1 −pτpl) + vlpτpl (77) In this case, π∗= ¯ π: shareholders optimally choose the lowest CEO power, to minimize probability that a τ ′h incumbent is replaced by a τl alternative at date 2.
Case 2. π < ¯ π. In this case, the CEO strategically blocks the τ ′h alternative.
Shareholder value is plpτ vl + δvl(1 −pτph) + vhpτph (78) + phpτ(vh + δvh) + plpτ ′(vl + δ¯ v) (79) + phpτ ′(v0 + δ¯ v).
(80) In this case, π does not affect shareholder value.
Hence, π∗= ¯ π if v0 + δ¯ v < vh + δ(1 −¯ π)vh + δ¯ π vh (1 −pτpl) + vlpτpl (81) and π∗∈[0, ¯ π) otherwise.
It may also be worth pointing out as an aside that if v0+δ¯ v < vh+δ (vh (1 −pτpl) + vlpτpl), then π = 1 is better for shareholders than π = 0: full CEO control over director appointments can be better for shareholders than full shareholder control.
A.11 Proof of Proposition 8 Here, we first consider an outcome with entrenchment. Then, given this outcome, we compute the value functions at each date as a function of the incumbent policy.
43 Finally, we check the inequalities in equation (92) given the expressions for the value functions.
Entrenchment outcome. Consider the following outcome: • If x0 is in place, the τ-biased director votes for τ- and hτ ′-policies, but against ℓτ ′-policies.
• If an hτ policy is in place, the τ-biased director votes against all alternatives.
• If an ℓτ ′ policy were in place (offequilibrium), the τ ′-biased director votes against all τ-policies.
Continuation values. Defining ux τ as a τ-director’s continuation value at any date t given x is chosen at date t (but before the date-t flow payoffs are realized).
We state the value functions as a lemma. For clarity, we compute them for general parameters even though we formulate the proposition only for v0 = 0 and pτ = pτ ′ = 1/2.
Lemma 2 (Value functions). The value functions are as follows:
$$u^{h\tau}_\tau = \frac{v_h + b}{1-\delta}, \tag{82}$$
$$u^{h\tau'}_\tau = \frac{v_h}{1-\delta}, \tag{83}$$
$$u^{\ell\tau}_\tau = \frac{1}{1-\delta(1-p_\tau p_h)}\left(v_\ell + b + \delta p_\tau p_h\,\frac{v_h + b}{1-\delta}\right), \tag{84}$$
$$u^{\ell\tau'}_\tau = \frac{1}{1-\delta(1-p_{\tau'} p_h)}\left(v_\ell + \delta p_{\tau'} p_h\,\frac{v_h}{1-\delta}\right), \tag{85}$$
$$u^{x_0}_\tau = \frac{1}{1-\delta p_\ell}\left(v_0 + \delta\Bigl(p_\tau p_h\,\frac{v_h + b}{1-\delta} + p_{\tau'} p_h\,\frac{v_h}{1-\delta}\Bigr)\right). \tag{86}$$
These expressions follow from direct computation given the supposed outcome.
Indeed:
• $u^{h\tau}_\tau$ and $u^{h\tau'}_\tau$. If an h-policy is in place, it stays in place forever. Hence, we can write the value functions $u^{h\tau}_\tau$ and $u^{h\tau'}_\tau$ recursively as
$$u^{h\tau}_\tau = v_h + b + \delta u^{h\tau}_\tau \tag{87}$$
and
$$u^{h\tau'}_\tau = v_h + \delta u^{h\tau'}_\tau. \tag{88}$$
Solving for $u^{h\tau}_\tau$ and $u^{h\tau'}_\tau$ gives the expressions in the lemma.
• $u^{\ell\tau}_\tau$ and $u^{\ell\tau'}_\tau$. If an ℓ-policy is in place (off equilibrium), it stays in place until it is replaced with an h-policy of the same bias-type. Hence, we can write the value functions $u^{\ell\tau}_\tau$ and $u^{\ell\tau'}_\tau$ recursively as
$$u^{\ell\tau}_\tau = v_\ell + b + \delta\bigl(p_\tau p_h u^{h\tau}_\tau + (1 - p_\tau p_h)u^{\ell\tau}_\tau\bigr) \tag{89}$$
and
$$u^{\ell\tau'}_\tau = v_\ell + \delta\bigl(p_{\tau'} p_h u^{h\tau'}_\tau + (1 - p_{\tau'} p_h)u^{\ell\tau'}_\tau\bigr). \tag{90}$$
Substituting for $u^{h\tau}_\tau$ and $u^{h\tau'}_\tau$ from above and solving for $u^{\ell\tau}_\tau$ and $u^{\ell\tau'}_\tau$ gives the expressions in the lemma.
• $u^{x_0}_\tau$. If x0 is in place, it stays in place until it is replaced with an h-policy (of either type). Hence, we can write the value function $u^{x_0}_\tau$ recursively as
$$u^{x_0}_\tau = v_0 + \delta\bigl(p_\ell u^{x_0}_\tau + p_\tau p_h u^{h\tau}_\tau + p_{\tau'} p_h u^{h\tau'}_\tau\bigr). \tag{91}$$
Substituting for $u^{h\tau}_\tau$ and $u^{h\tau'}_\tau$ from above and solving for $u^{x_0}_\tau$ gives the expression in the lemma.
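As a check on the algebra (an added sketch, not part of the paper), the snippet below iterates the recursions (87)–(91) to their fixed point and compares the result with the closed forms in Lemma 2; parameter values are illustrative assumptions.

```python
# Sketch (my own, not from the paper): verify Lemma 2's closed forms against the
# recursions (87)-(91) by fixed-point (Jacobi) iteration. Parameters are illustrative.

v_h, v_l, v_0, b = 1.0, 0.2, 0.0, 0.5
delta, p_h = 0.9, 0.5
p_l = 1.0 - p_h
p_tau = p_tau_p = 0.5

# Closed forms, equations (82)-(86).
u_htau   = (v_h + b) / (1 - delta)
u_htau_p = v_h / (1 - delta)
u_ltau   = (v_l + b + delta * p_tau * p_h * u_htau) / (1 - delta * (1 - p_tau * p_h))
u_ltau_p = (v_l + delta * p_tau_p * p_h * u_htau_p) / (1 - delta * (1 - p_tau_p * p_h))
u_x0     = (v_0 + delta * (p_tau * p_h * u_htau + p_tau_p * p_h * u_htau_p)) / (1 - delta * p_l)
closed = [u_htau, u_htau_p, u_ltau, u_ltau_p, u_x0]

# Fixed-point iteration on the recursions; converges because delta < 1.
u = [0.0] * 5   # order: u^{h tau}, u^{h tau'}, u^{l tau}, u^{l tau'}, u^{x0}
for _ in range(2000):
    u = [
        v_h + b + delta * u[0],
        v_h + delta * u[1],
        v_l + b + delta * (p_tau * p_h * u[0] + (1 - p_tau * p_h) * u[2]),
        v_l + delta * (p_tau_p * p_h * u[1] + (1 - p_tau_p * p_h) * u[3]),
        v_0 + delta * (p_l * u[4] + p_tau * p_h * u[0] + p_tau_p * p_h * u[1]),
    ]

for name, c, it in zip(["u^{h tau}", "u^{h tau'}", "u^{l tau}", "u^{l tau'}", "u^{x0}"], closed, u):
    print(f"{name}: closed form {c:.6f}, iterated {it:.6f}")
```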
Equilibrium. The outcome above is a subgame perfect equilibrium³³ as long as
$$u^{h\tau}_\tau \;\geq\; u^{\ell\tau}_\tau \;\geq\; u^{h\tau'}_\tau \;\geq\; u^{x_0}_\tau \;\geq\; u^{\ell\tau'}_\tau, \tag{92}$$
where the last inequality reflects deadlock. Now we set v0 = 0 and pτ = pτ′ = 1/2 and show that these inequalities are satisfied given the expressions above for the value functions.
• $u^{h\tau}_\tau \geq u^{\ell\tau}_\tau$ is immediate.
• $u^{\ell\tau}_\tau \geq u^{h\tau'}_\tau$ reduces to
$$b_\tau \;\geq\; \frac{2(1-\delta)(v_h - v_\ell)}{2 - 2\delta + \delta p_h}. \tag{93}$$
• $u^{h\tau'}_\tau \geq u^{x_0}_\tau$ reduces to
$$b_\tau \;\leq\; \frac{2(1-\delta)v_h}{\delta p_h}. \tag{94}$$
• $u^{x_0}_\tau \geq u^{\ell\tau'}_\tau$ reduces to
$$b_\tau \;\geq\; \frac{2(1-\delta)\bigl(2v_\ell - \delta\bigl(p_h v_h + 2(1-p_h)v_\ell\bigr)\bigr)}{\delta p_h\,(2 - 2\delta + \delta p_h)}. \tag{95}$$
Together, the inequalities above yield the condition in the proposition.
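The following sketch (added here, not the paper's) evaluates the bounds (93)–(95) for illustrative parameter values with v0 = 0 and pτ = pτ′ = 1/2, picks a bias bτ inside the implied interval, and confirms the ordering (92) numerically.

```python
# Sketch (my own, not from the paper): check the entrenchment conditions (93)-(95)
# and the ordering (92) for v_0 = 0 and p_tau = p_tau' = 1/2. Numbers are illustrative.

v_h, v_l, delta, p_h = 1.0, 0.2, 0.9, 0.5
p_l = 1.0 - p_h

lower_93 = 2 * (1 - delta) * (v_h - v_l) / (2 - 2 * delta + delta * p_h)
upper_94 = 2 * (1 - delta) * v_h / (delta * p_h)
lower_95 = (2 * (1 - delta) * (2 * v_l - delta * (p_h * v_h + 2 * (1 - p_h) * v_l))
            / (delta * p_h * (2 - 2 * delta + delta * p_h)))
lo, hi = max(lower_93, lower_95), upper_94
print(f"b_tau must lie in [{lo:.3f}, {hi:.3f}]")

b = 0.5 * (lo + hi)   # pick a point inside the interval

# Value functions from Lemma 2 with v_0 = 0 and p_tau = p_tau' = 1/2.
u_htau   = (v_h + b) / (1 - delta)
u_htau_p = v_h / (1 - delta)
u_ltau   = (v_l + b + delta * 0.5 * p_h * u_htau) / (1 - delta * (1 - 0.5 * p_h))
u_ltau_p = (v_l + delta * 0.5 * p_h * u_htau_p) / (1 - delta * (1 - 0.5 * p_h))
u_x0     = delta * 0.5 * p_h * (u_htau + u_htau_p) / (1 - delta * p_l)

print(f"b_tau = {b:.3f}")
print("ordering (92) holds:", u_htau >= u_ltau >= u_htau_p >= u_x0 >= u_ltau_p)
```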
³³ In line with Assumption 2, this preference ordering implies that the equilibrium is not the result of directors' indifference: directors' votes are not driven merely by the fact that, if one director votes against, the other director is never pivotal. Note, however, that this ranking is only a sufficient condition, and other equilibria could exist that do not satisfy it.
|
60
|
Intramolecular vibrational energy redistribution
Intramolecular vibrational energy redistribution (IVR) is the process by which energy is redistributed among the different quantum states of a vibrationally excited molecule. It is required by successful theories of unimolecular reaction rates such as RRKM theory, which assume a full statistical redistribution among all vibrational modes. Restricted redistribution, by contrast, could enable bond-selective chemistry, since the deposited energy would then remain in a particular mode for as long as the desired reaction takes to occur.
|
61
|
linear algebra - Symmetric Zero-Diagonal Matrices - MathOverflow
===============
Symmetric Zero-Diagonal Matrices
Asked 10 years, 11 months ago
Modified10 years, 11 months ago
Viewed 1k times
Consider matrices with entries in a field $F$ of characteristic $2$. Let $\Omega$ denote the $2n \times 2n$ matrix $\begin{bmatrix} 0 & 1_n \\ 1_n & 0 \end{bmatrix}$. Then $X^t \Omega X$ is symmetric with $0$ diagonal, for each $2n \times 2n$ matrix $X$.
Question: can we express each symmetric matrix with zero diagonal in the form $X^t \Omega X$, for some $X$?
Note: This is a simpler version of a question I asked yesterday.
linear-algebra
matrices
1 Answer
In fact, a symmetric matrix with zero diagonal over $F$ with $\operatorname{char} F = 2$ is skew-symmetric. It is a standard fact that every skew-symmetric (bilinear) form in some basis has matrix $\Omega$ surrounded by zeroes. Each such matrix can easily be obtained from $\Omega$ by an appropriate $X$.
I think I understand. More precisely: for $1 \le m \le n$, the $2n \times 2n$ block matrix $\begin{bmatrix} 0 & 1_m & 0 \\ 1_m & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ is $X^t \Omega X$ where $X$ is a simple $0,1$-matrix. –John Murray Commented Sep 12, 2014 at 15:31
Thanks very much, although I prefer the terminology 'symplectic form' (a symmetric bilinear form which is zero on the diagonal) to 'skew symmetric' in characteristic $2$. –John Murray Commented Sep 12, 2014 at 15:42
I just removed the requirement that $F$ is quadratically closed, which is redundant. –John Murray Commented Sep 12, 2014 at 19:19
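A quick numerical illustration of the construction in the first comment (a sketch added here, not part of the original post): working mod 2 with NumPy, a simple 0/1 matrix X built from standard basis vectors reproduces the stated block matrix as $X^t \Omega X$.

```python
# Sketch (not from the post): verify over GF(2) that X^t Omega X equals the block matrix
# [[0, 1_m, 0], [1_m, 0, 0], [0, 0, 0]] for a simple 0/1 matrix X.
import numpy as np

n, m = 3, 2
I, Z = np.eye(n, dtype=int), np.zeros((n, n), dtype=int)
Omega = np.block([[Z, I], [I, Z]])

# Columns 1..m of X are e_1..e_m, columns m+1..2m are e_{n+1}..e_{n+m}, the rest are zero.
X = np.zeros((2 * n, 2 * n), dtype=int)
for i in range(m):
    X[i, i] = 1          # column i is the standard basis vector e_i
    X[n + i, m + i] = 1  # column m+i is e_{n+i}

result = (X.T @ Omega @ X) % 2

expected = np.zeros((2 * n, 2 * n), dtype=int)
expected[:m, m:2 * m] = np.eye(m, dtype=int)
expected[m:2 * m, :m] = np.eye(m, dtype=int)

print(result)
print("matches the block form from the comment:", np.array_equal(result, expected))
```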
|
62
|
Voix des Arts: A Voice for the Performing Arts throughout the World: PERFORMANCE REVIEW: Gaetano Donizetti — LA FAVORITE (K. Lindsey, R. Bills, J. Arrey, J. Relyea, J. Harvey, R. Sanz; Washington Concert Opera, 4 March 2016)
===============
05 March 2016
PERFORMANCE REVIEW: Gaetano Donizetti — LA FAVORITE (K. Lindsey, R. Bills, J. Arrey, J. Relyea, J. Harvey, R. Sanz; Washington Concert Opera, 4 March 2016)
GAETANO DONIZETTI (1797 – 1848): La favorite—Kate Lindsey (Léonor de Guzmán), Randall Bills (Fernand), Javier Arrey (Alphonse XI), John Relyea (Balthazar), Joélle Harvey (Inès), Rolando Sanz (Don Gaspar); Washington Concert Opera Chorus and Orchestra; Antony Walker, conductor [Lisner Auditorium, The George Washington University, Washington, D.C., USA; Friday, 4 March 2016]
It might never be deduced from its lamentably few appearances in the world’s major opera houses in recent seasons that La favorite is one of Gaetano Donizetti’s finest scores. Composed in fulfillment of a commission from the Paris Opéra, an offer that any ambitious composer could hardly refuse, La favorite was in part adapted from the never-performed L’ange di Nisida, replacing the aborted Le duc d’Albe. Premièred at the Académie Royale de Musique on 2 December 1840, by a cast headed by mezzo-soprano Rosine Stoltz, whose reign as prima donna of both the Opéra and its manager, Léon Pillet, may have played at least a small part in the demise of Le duc d’Albe, the heroine of which was written for a higher voice, and the famous tenor Gilbert Duprez, La favorite solidified Donizetti’s reputation in the French capital, his home since an irreconcilable feud with the Neapolitan censors prompted him to turn his back on his native Italy. Despite the advocacy of singers as gifted as Giulietta Simionato, Fiorenza Cossotto, and Shirley Verrett, the appreciation that La favorite rightfully garnered in the Nineteenth Century has not persisted in the Twentieth and Twenty-First Centuries. Last heard at New York’s Metropolitan Opera in 1978, the opera has been best served in recent years by concert performances, including Opera Orchestra of New York outings in 1975 with Shirley Verrett, Alfredo Kraus, and Pablo Elvira and in 2003 with Jennifer Larmore, Gregory Kunde, and Dmitri Hvorostovsky; a 1989 Wiener Konzerthaus presentation with Agnes Baltsa, Kraus, and Paolo Gavanelli; a previous Washington Concert Opera showing in 1991 with Florence Quivar, Vinson Cole, and Christopher Robertson; the 2014 Salzburger Festspiele account with Elīna Garanča, Juan Diego Flórez, and Ludovic Tézier; and 2015’s Bel Canto at Caramoor offering with Clémentine Margaine, Santiago Ballerini, and Stephen Powell. Compared with recorded souvenirs of these performances, Washington Concert Opera’s 2016 performance in Lisner Auditorium was finer than any of them. Opera lovers’ affection for the genre is sustained by those gloriously few occasions when every aspect of a performance excels. In the past several decades, aficionados have learned to subsist on very meager diets of memorable performances. This La favorite was a gluttonously fulfilling experience for ears and hearts that hunger for genuine bel canto.
Written by Alphonse Royer and Gustave Vaëz, the libretto of La favorite examines collisions of regal authority, the power of the Church, and individual emotions in the piquant setting of Fourteenth-Century Castile. Alphonse XI, King of Castile, is a archetypical Latin lover, a playboy whose amorous appetite is not quenched by the attentions of his consort, the daughter of Balthazar, superior of the monastery of the Order of Santiago de Compostela. The King keeps as his preferred mistress—voilà, la favorite—Léonor de Gusmán, a beautiful lady of the court whose fervor at prayer has been noticed by Fernand, a postulant in the monastery who eventually abandons his ecclesiastical intentions, accepts a commission in Alphonse’s army procured for him by Léonor, wins royal favor in battle, and claims as his reward from his sovereign Léonor’s hand in marriage—a hand given with the knowledge of everyone except Fernand that her other hand remains firmly grasped by the King. Fernand rejoices at being granted his wish to marry Léonor without knowing of her liaison with Alphonse: Léonor’s confidante Inès, dispatched before the wedding ceremony to reveal Léonor’s past, having been arrested before communicating the crucial information, Fernand pledges himself to a woman he does not truly know and who believes that she is accepted and loved despite her transgressions. Such a plot can be difficult to sort out in staged performances, and concert presentations can make it even more incomprehensible for listeners, especially those without good French—or, more frequently in the case of this opera, Italian. The atmosphere established by the efforts of all participants in Washington Concert Opera’s La favorite lent the performance a strong dramatic profile, elucidating plot elements despite erratic interactions among the principals. The singers’—soloists and choristers—generally very good diction was advantageous. Concert performances of operas often provide opportunities to more intimately savor scores’ musical qualities without visual distractions, but this La favorite in concert was more histrionically effective than many fully-staged productions of familiar works manage to be.
The leadership of Washington Concert Opera’s Artistic Director and Conductor Antony Walker reliably brings the excitement of staged opera to the concert setting, never more so than in this performance of La favorite. His work with Pinchgut Opera in his native Australia has revealed the stylistic versatility of his conducting, but his appearances with Washington Concert Opera, with which company his repertoire encompasses lesser-known scores by Rossini, Donizetti, Bellini, Verdi, and Richard Strauss, have confirmed that he has a special affinity for bel canto, spotlighting the inherent elements of bel canto as much in Strauss’s Guntram as in Rossini’s Semiramide and Bellini’s I Capuleti ed i Montecchi. In Walker’s hands, the kinship between La favorite and Verdi’s mature style was particularly apparent. Donizetti’s music for Alphonse XI, the King of Castile, would dovetail perfectly with Verdi’s music for the Conte di Luna in Il trovatore, and Fernand’s high-centered vocal lines might be uttered just as convincingly by Henri in Les vêpres sicilienne. Balthazar’s scenes might have been cut from the same cloth as similar episodes in La forza del destino and Don Carlos. Without applying pressure greater than the music can withstand, Walker’s approach made Donizetti as much a peer of Verdi, Ponchielli, and Boito as of Rossini and Bellini, and the lesson in this is unmistakably legitimized by the composers’ bodies of work. Rodolfo’s ‘Quando le sere al placido’ in Verdi’s Luisa Miller is a close relative of Fernand’s ‘Ange si pur,’ and what is la Cieca’s ‘Voce di donna’ in Ponchielli’s La gioconda if not bel canto? Walker’s tempi were consistently appropriate for music and musicians, and he enhanced the continuity of the score by refusing to linger over ‘purple’ passages. Every emotion, gleeful or doleful, was given its due but not allowed to dominate unless its domination was clearly Donizetti’s intention. The circumstances of the company’s performances prohibit extensive periods of rehearsal, but such was Walker’s commitment—and the commitment that he inspired in his colleagues on the Lisner Auditorium stage—that this La favorite sounded like the culmination of a lifetime of study and preparation.
Under Walker’s guidance, the quality of the playing by the Washington Concert Opera Orchestra continues to improve, the musicians’ slightly rough-edged account of the Ouverture’s opening Larghetto smoothing to a well-integrated, exciting account of the Allegretto mosso. The Act Two ballet, de rigueur in a score commissioned by the Opéra, was omitted from Washington Concert Opera’s performance, but plentiful opportunities for orchestral glory remained. There were a few very small mistakes and instances of imperfect ensemble, but the playing mostly set and adhered to a high standard. The horns that introduced Léonor’s celebrated ‘O mon Fernand’ were commendably sure of intonation, and harpist Eric Sabatino’s playing was always heard with pleasure. Among the sometimes thin-sounding strings, principal cellist Gita Ladd’s spirited rallying of her section remains a marvel: even her pizzicato playing is emotionally charged. As the Santiago de Compostela organist in Act Four, Joel Ayau phrased his music with bel canto sensibility.
La favorite et ses hommes: (from left to right) Mezzo-soprano Kate Lindsey as Léonor, tenor Randall Bills as Fernand, Artistic Director and Conductor Antony Walker, baritone Javier Arrey as Alphonse, tenor Rolando Sanz as Don Gaspar, and bass John Relyea as Balthazar in Washington Concert Opera’s performance of Gaetano Donizetti’s La favorite in Lisner Auditorium, 4 March 2016 [Photo by Don Lassell, © by Washington Concert Opera]
Prepared by Assistant Conductor and Chorus Master Bruce Stasyna, the ladies and gentlemen of the Washington Concert Opera Chorus sang with potency and impressive balance. The men intoned the Andante introduction in Act One, ‘Pieux monastère, de ton sanctuaire que notre prière monte vers les cieux,’ expansively, and the ladies were luminous in the scene with Inès, sounding aptly girlish in ‘Rayons dorés, tiède zéphyre, de fleurs parez ce doux séjour.’ In both the Act Three finale and the first scene of Act Four, the dramatic force of the choral singing was gripping. Their accounts of ‘Frères, creusons l’asile où la douleur s’en dort’ and ‘Que du Très-Haut la faveur t’accompagne,’ the latter sung from the wings as Donizetti stipulated, were deeply poignant. Choral music plays a very important part in La favorite, and the success of this performance was considerably influenced by the choristers’ skillful contributions.
Interpreting the part of Don Gaspar, an officer in service to Alphonse, tenor Rolando Sanz acquitted himself expertly, his intuitive mastery of Donizetti’s style evident even in his character’s declamatory lines. Considering the quality of Sanz’s instrument, it was atypically regrettable that Donizetti and his librettists did not give Don Gaspar an aria. This talented tenor made the most of all that his character had to do, however, his voice ringing heroically—no whimpering character tenor, he!—in the scene with Alphonse at the start of Act Two. Sanz proclaimed Don Gaspar’s dramatically portentous lines in the Act Two finale with the machismo of a world-class Pollione. Of similar quality was his execution of his music in the Act Three finale. Sanz’s voice was always audible in ensembles, and even in the concert setting he was the smug, insinuating courtier to the life. Few operatic courtiers match their machinations with such firm, focused singing. It is too much to expect a Don Gaspar to sound as though he might respectably sing Fernand should circumstances necessitate it, but Sanz was one who seemed more than up to the task.
As Léonor’s confidante Inès, beautiful soprano Joélle Harvey enlivened the otherwise dark drama with singing as radiant as her smile. In her Act One scene with the young ladies of Alphonse’s court, she voiced ‘Rayons dorés, tiède zéphyre, de fleurs parez ce doux séjour’ with girlish glee, unleashing a splendid top B♭ in the cadenza. Then, her ‘Doux zéphyr, sois-lui fidèle’ wafted the fragrances and warmth of spring through the chilly auditorium, the spot-on accuracy of her pitch complemented by well-supported projection. She performed her part in the Act Two finale with poise and tireless assurance above the stave. As much a victim of Alphonse’s jealous cruelty as Léonor and Fernand, Harvey’s Inès was as good-natured and golden-voiced a champion of illicit love as Donizetti and the Washington audience could have hoped to hear in the rôle.
At the opposite end of the vocal and dramatic spectrum, the Balthazar of bass John Relyea pronounced the teachings and dictates of the Church with thundering tones that scorched the air with fire and brimstone. In the first scene of Act One, Relyea declaimed ‘Ne vas-tu pas prier avec eux?’ with gravitas, and his handling of Balthazar’s stern counseling of Fernand in the Allegro duet drew from him an imposing ‘Toi, mon fils, ma seule espérance.’ The bass’s voice relayed the wills of God and Pope in the finales of Acts Two and Three with the unanswerable authority of a man personally acquainted with both the Holy Spirit and the Holy Father. Welcoming Fernand into the monastic brotherhood at the start of Act Four, Relyea’s Balthazar assumed a paternal benevolence that shone in his singing of ‘Les cieux s’emplissent d’étincelles.’ Hearing Relyea’s portrayal, utterly solid throughout the part’s two-octave range, it is interesting to note how often Balthazar is easily ignored by recorded Alphonses. Relyea’s emphatic, smoldering singing could not be ignored by King or commoners, but who could have wanted to close his ears to such an electrifying performance of great music?
Le roi et ses plus belles dames: (from left to right) Soprano Joélle Harvey as Inès, mezzo-soprano Kate Lindsey as Léonor, and baritone Javier Arrey as Alphonse in Washington Concert Opera’s performance of Gaetano Donizetti’s La favorite in Lisner Auditorium, 4 March 2016 [Photo by Don Lassell, © by Washington Concert Opera]
Baritone Javier Arrey endowed the throne of Castile in this La favorite with a young, virile Alphonse XI whose vocalism was as handsomely chiseled as his visage. Among high-octane colleagues, he dominated Act Two, phrasing with distinction and respecting Donizetti enough to make an honorable effort at the trill asked of him. Arrey dispatched the libidinous King’s Larghetto aria ‘Léonor! Viens, j’abandonne Dieu, mon peuple et ma couronne’ and cabaletta ‘Léonor, mon amour brave’ with contrasting sensuality and swagger, his easy top Es and Fs ricocheting through the auditorium like musket balls. He and his Léonor blended their voices stirringly in their Larghetto duet, ‘Léonor, Léonor, tais-toi,’ and his vitriolic singing in the Act Two finale was galvanizing. To the trio with Léonor and Fernand, ‘Fernand de votre amour, Madame, vient de me faire ici l’aveu,’ Arrey brought the bemused confidence of royal prerogative, his voice radiating offended pride. A noticeably softer heart pulsed at the core of Alphonse’s Act Three aria ‘Pour tant d’amour ne soyez pas ingrate,’ the baritone revealing the soul of the man rather than the persona of the King. In the Act Three finale, Arrey depicted a touchingly wounded, suddenly frightened monarch on the brink of collapse: denounced by Rome, abandoned by his lover, and mocked by his court, he was a Mediterranean Macbeth stained by sin. Minimizing the significance of a few suspect pitches and moments of compromised tonal quality, Arrey’s performance was both pompous and poetic—and, most winningly, sung with style and nuance.
La favorite et le malheureux: Mezzo-soprano Kate Lindsey as Léonor (left) and tenor Randall Bills as Fernand (right) in Washington Concert Opera’s performance of Gaetano Donizetti’s La favorite in Lisner Auditorium, 4 March 2016 [Photo by Don Lassell, © by Washington Concert Opera]
Tall and as attractive in white tie and tails as a fair-haired Tony Curtis, tenor Randall Bills was a boyish, earnest Fernand who sang with heartwarming expressivity. In a rôle created by Gilbert Duprez, credited as having been the first tenor to publicly unveil the now-expected ut de poitrine, Bills unsurprisingly faced high tessitura, but his voice retained its youthful bloom to the top of the range. In Fernand’s Larghetto cavatine in Act One, ‘Un ange, une femme inconnue,’ he managed the ecstatic rise to top C♯ without strain, but the most gratifying aspects of his singing were his smooth, clear timbre and impeccable breath control. In the duet with Balthazar, his exclamation of ‘Mon père, je l’aime!’ soared with lovesick sincerity, and he subsequently greeted Inès with a believably awestruck ‘Gentille messagère et nymphe si discrète.’ Finally united with his beloved Léonor, her identity still withheld from him, ‘Pour toi des saints autels j’ai brisé l’esclavage’ poured from him like lava, his vocalism igniting one of Donizetti’s most incendiary duets. Bills gave an understated performance of the martiale aria ‘Oui, ta voix m’inspire,’ its sentiments being in his hands a statement of very private resolve. The first scene of Act Three was defined by Bills’s affectionately-phrased utterance of ‘Me voici donc près d’elle,’ his urgent, athletic singing in the trio with Léonor and Alphonse and the act’s finale surging with emotion and musicality. Hesitating before taking his final vows as a brother in the fraternity of Santiago de Compostela in Act Four, the tenor’s Fernand voiced ‘Dans un instant, mon frère’ with humility. Like the Duca’s ‘La donna è mobile’ in Rigoletto and Rodolfo’s ‘Che gelida manina’ in La bohème, it is Fernand’s C-major Larghetto aria ‘Ange si pur, que dans un songe’ for which audiences eagerly wait in La favorite, and Bills’s performance of the piece, one of Donizetti’s most inspired arias for tenor, fulfilled the expectation engendered by his effective singing throughout the evening. Shaping the aria with obvious mastery of bel canto, he faithfully observed Donizetti’s dynamic marking by taking the famous top C in genuine voix mixte, sustaining the tone beautifully and with the softness requested by the composer. In the harrowing final duet with the dying Léonor, he seemed transformed by ‘Ses pleurs, sa voix jadis si chère portent le trouble dans mes sens,’ his coldness towards his one true love thawed in an instant. This was a Fernand whose suggestion that his fellow monks’ prayers for the repose of Léonor’s soul would on the following day be lifted in requests of intercession for his own seemed inevitable: having borne too much, one could virtually feel the sensitive young man’s heart breaking. Particularly in early scenes, Bills’s gestures revealed nervousness, but the thoughtful young artist’s preparation and innate stylishness prevailed. Further experience will undoubtedly increase his comfort in the rôle, but few of even the most acclaimed Fernands have sung the music so securely and serenely.
After her début at the Opéra in 1837, Rosine Stoltz was frequently compared to one of the most popular singers in Paris, the sui generis Cornélie Falcon. Acclaimed for performances of rôles composed by Rossini for Isabella Colbran, as well as Falcon parts like Rachel in Halévy’s La Juive and Valentine in Meyerbeer’s Les Huguenots, Stoltz was admired for the excellent quality of her voice throughout its wide range and the dramatic verisimilitude of her characterizations, attributes that likely made her Léonor de Gusmán a memorable portrayal. The same praise can be justifiably directed at mezzo-soprano Kate Lindsey, whose Léonor for Washington Concert Opera was a spectacular junction of singer and rôle. In her Act One duet with Fernand, Lindsey caressed the line and cajoled her Fernand with a bewitching ‘Mon idole, mon idole, Dieu t’envoie.’ Her singing in the Act Two duet with Alphonse was better still, the bitterness that flooded her enunciation of ‘Dans vos palais, ma pauvre âme soupire’ altering the mood of the scene and of the opera as a whole. Her voice rocketed through the tricky writing in the Act Two finale. After enduring crippling shame in the trio with Fernand and Alphonse, depicted by Lindsey with unaffected dignity, Léonor’s majestic solo scene is the centerpiece of Act Three and the climax of the opera. Lindsey phrased the recitative ‘L’ai-je bien entendu?’ with great feeling, and her performances of the aria ‘O mon Fernand! tout les biens de la terre’ and cabaletta ‘Mon arrêt descend du ciel’ were galvanizing, a masterclass in the art of dramatic bel canto. Lindsey has flashing, unforced top Bs, used sparingly and to great effect, and her upper register was on sterling form throughout the performance, not least in the difficult Act Three finale. Entering in Act Four, Lindsey delivered ‘Fernand! Fernand! pourrai-je le trouver?’ with a voice already touched by death, and her piano singing of ‘Fernand, imite la clémence du ciel à qui tu t’es lié’ in the final duet was ravishingly plaintive. When singing quietly, Lindsey's tones sporadically lost focus, and her cautious management of vocal registers, commendably maintaining head resonance in the interest of preserving the line, led to a few moments of awkwardness at the bottom of the range. Like Bills, however, she reduced minor imperfections to immateriality with a performance that, taken as a whole, qualified her as a Léonor worthy of the legacy of Simionato, Cossotto, and Verrett.
It is never easy to explain why some of a composer’s operas enjoy enduring success while others of equal or greater quality languish in relative obscurity. For Donizetti’s La favorite, the argument is often made that the opera is neglected because there are no singers active today who are capable of doing justice to the score. Washington Concert Opera’s performance delightfully disavowed that notion. Are audiences’ collective attention spans too brief to enable exploration beyond the handful of Donizetti’s operas that remain in the standard repertory? Do today’s listeners fail to respond to the tragedy of La favorite as readily as Nineteenth-Century observers must have done? Whichever reasons are most valid for explaining the infrequency with which La favorite adorns the world’s stages, performances of the prowess of Washington Concert Opera’s traversal of the magnificent score are worth waiting for.
Receiving thanks for a job well done: (from left to right) Bass John Relyea (Balthazar), mezzo-soprano Kate Lindsey (Léonor), tenor Randall Bills (Fernand), Artistic Director and Conductor Antony Walker, baritone Javier Arrey (Alphonse), soprano Joélle Harvey (Inès), and tenor Rolando Sanz (Don Gaspar) in Washington Concert Opera’s performance of Gaetano Donizetti’s La favorite in Lisner Auditorium, 4 March 2016 [Photo by Don Lassell, © by Washington Concert Opera]
© by Joseph Newsome
|
63
|
Rev:19
===============
Յայտնութիւն / Revelation - 19
< PreviousՅայտնութիւն - 19 Revelation - 19Next > jg▾tr▾ab▾ac▾mh▾all ▾ ###### Matthew Henry: Concise Commentary on the Whole Bible - 1706 In this chapter we have, I. A further account of the triumphant song of angels and saints for the fall of Babylon, ver. 1-4. II. The marriage between Christ and the church proclaimed and perfected, ver. 5-10. III. Another warlike expedition of the glorious head and husband of the church, with the success of it, ver. 10, &c. ###### Adam Clarke: Commentary on the Bible - 1831 The whole heavenly host give glory to God, because he has judged the great whore, and avenged the blood of his saints, Rev 19:1-6. The marriage of the Lamb and his bride, Rev 19:7-9. John offers to worship the angel, but is prevented, Rev 19:10. Heaven is opened, and Jesus the Word of God appears on a white horse; he and his armies described, Rev 19:11-16. An angel in the sun invites all the fowls of heaven to come to the supper of the great God, Rev 19:17, Rev 19:18. The beast, the false prophet, and the kings of the earth, gather together to make war with him who sits on the white horse; but they are all discomfited, and utterly destroyed, Rev 19:19-21. ###### Albert Barnes: Notes on the Bible - 1834 19:0: This chapter Rev_. 19, as well as the last Rev_. 18, is an episode, delaying the final catastrophe, and describing more fully the effect of the destruction of the mystical Babylon. The chapter consists of the following parts: I. A hymn of the heavenly hosts in view of the destruction of the mystical Babylon, Rev 19:1-7; (a) A voice is heard in heaven shouting Hallelujah, in view of the fact that God had judged the great harlot that had corrupted the earth, Rev 19:1-2. (b) The sound is echoed and repeated as the smoke of her torment ascends, Rev 19:3. (c) The four and twenty elders, and the four living creatures, as interested in all that pertains to the church, unite in that shout of Hallelujah, Rev 19:4. (d) A voice is heard from the throne commanding them to praise God, Rev 19:5; and, (e) the mighty shout of Hallelujah is echoed and repeated from unnumbered hosts, Rev 19:6-7. II. The marriage of the Lamb, Rev 19:8-9. The Lamb of God is united to his bride - the church - never more to be separated; and after all the persecutions, conflicts, and embarrassments which had existed, this long-desired union is consummated, and the glorious triumph of the church is described under the image of a joyous wedding ceremony. III. John is so overcome with this representation, that in his transports of feeling he prostrates himself before the angel who shows him all this, ready to worship one who discloses such bright and glorious scenes, Rev 19:10. He is gently rebuked for allowing himself to be so overcome that he would render divine homage to any creature, and is told that he who communicates this to him is but a fellow-servant, and that God only is to be worshipped. IV. The final conquest over the beast and the false prophet, and the subjugation of all the foes of the church, Rev 19:11-21; (a) A description of the conqueror - the Son of God, Rev 19:11-16. He appears on a white horse - emblem of victory. He has on his head many crowns; wears a vesture dipped in blood; is followed by the armies of heaven on white horses; from his mouth goes a sharp sword; and his name is prominently written on his vesture and his thigh - all emblematic of certain victory. 
(b) An angel is seen standing in the sun, calling on all the fowls of heaven to come to the great feast prepared for them in the destruction of the enemies of God - as if there were a great slaughter sufficient to supply all the fowls that feed on flesh, Rev 19:17-18. (c) The final war, Rev 19:19, Rev 19:21. The beast, and the kings of the earth, and their armies are gathered together for battle; the beast and the false prophet are taken, and are cast into the lake that burns with fire and brimstone; and all that remain of the enemies of God are slain, and the fowls are satisfied with their flesh. The last obstacle that pRev_ented the dawn of the millennial morning is taken away, and the church is triumphant. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 Rev 19:1, God is praised in heaven for judging the great whore, and avenging the blood of his saints; Rev 19:7, The marriage of the Lamb; Rev 19:10, The angel will not be worshipped; Rev 19:17, The fowls called to the great slaughter. ###### John Gill INTRODUCTION TO REVELATION 19 This chapter contains the triumph of the saints over Babylon, and their thanksgiving to God because of his judgments on her; the marriage of Christ and his church, and a battle between him and his and her enemies, with the success of it. The congratulations are first of a promiscuous multitude in the church, ascribing salvation, praise, honour, glory, and power to God, because of the righteousness of his judgments, and because of the perpetuity of them, Rev_ 19:1 and then of the four and twenty elders and four living creatures, who worship God, assent to what had been before said, and join in praising the Lord, Rev_ 19:4 and then another voice out of the throne is heard, calling upon all the servants of the Lord, and those that fear him, whether small or great, to praise our God, Rev_ 19:5 after which is heard the voice of a great multitude, stirring up one another to praise, because of the reign of the Lord God Almighty, and to rejoice and be glad because the time of the Lamb's marriage with his bride was come; who is described by her dress, the righteousness of the saints, comparable to fine linen, clean and white, Rev_ 19:6 upon which an angel bids John write those persons happy who are invited to the marriage supper of the Lamb, and affirms these to be the true sayings of God; wherefore John, in a transport of joy, was just going to worship the angel, had he not been forbidden by him; from which he dissuades him, by observing that he was his fellow servant, that God only is the object of worship, and that the testimony of Jesus is the spirit of prophecy, Rev_ 19:9 next follows a vision of a battle between Christ and his enemies; and first he the General is described, by the horse he sat upon, a white one; by the characters he bears, faithful and true; by what he did, judging and making war in righteousness; by his eyes, which were as a flame of fire; by his having many crowns on his head; by having a name, or names unknown, and particularly one, which is the Word of God; by his habit, a vesture dipped in blood; by the armies he was at the head of, riding on white horses, and clothed in fine linen; by a sharp sword coming out of his mouth, with which he should utterly destroy the nations; and by having a name on his vesture and thigh, King of kings, and Lord of lords, Rev_ 19:11 upon which an angel is seen standing in the sun, and calling to all the fowls of the heaven to come to the supper of the great God, and to eat the flesh of 
kings, captains, mighty men, horses and horsemen, of all ranks, and degrees, Rev_ 19:17 and next an account is given of the armies of the beast, and of the kings of the earth, that came to make war with the above warrior, Rev_ 19:19 the issue and success of which follow; the beast and false prophet are taken, and cast alive into a lake of fire and brimstone; and the rest are killed by the sword of the above General, and the fowls have a feast of their flesh, Rev_ 19:20. 19:119:1: Եւ յետ այնորիկ լուայ ձա՛յն մեծ բազմութեան յերկինս՝ ասելով. Ալէ՛լուիա, փրկութիւն եւ փառք եւ պատիւ եւ զօրութիւն Աստուծոյ մերում. Ոմանք. Եւ յետ այսորիկ լուայ։ Ոսկան. Աստուծոյ մերոյ։ 1 Այնուհետեւ լսեցի՝ ինչպէս բազմութեան մի բարձր ձայն երկնքում, որ ասում էր. «Ալէլուիա՜, փրկութի՜ւն եւ փա՜ռք եւ պատի՜ւ եւ զօրութի՜ւն մեր Աստծուն, 19 Անկէ ետքը շատ բազմութեան մեծ ձայն մը լսեցի երկնքէն, ըսելով. «Ալէլուիա՜. փրկութիւն ու փառք եւ պատիւ ու զօրութիւն մեր Աստուծոյն. Եւյետայնորիկլուայձայնմեծբազմութեանյերկինսասելով. Ալէլուիա, փրկութիւնեւփառքեւպատիւեւզօրութիւնԱստուծոյմերում: 19:1: Եւ յետ այնորիկ լուայ ձա՛յն մեծ բազմութեան յերկինս՝ ասելով. Ալէ՛լուիա, փրկութիւն եւ փառք եւ պատիւ եւ զօրութիւն Աստուծոյ մերում. Ոմանք. Եւ յետ այսորիկ լուայ։ Ոսկան. Աստուծոյ մերոյ։ 1 Այնուհետեւ լսեցի՝ ինչպէս բազմութեան մի բարձր ձայն երկնքում, որ ասում էր. «Ալէլուիա՜, փրկութի՜ւն եւ փա՜ռք եւ պատի՜ւ եւ զօրութի՜ւն մեր Աստծուն, 19 Անկէ ետքը շատ բազմութեան մեծ ձայն մը լսեցի երկնքէն, ըսելով. «Ալէլուիա՜. փրկութիւն ու փառք եւ պատիւ ու զօրութիւն մեր Աստուծոյն. zohrab-1805▾eastern-1994▾western am▾ 19:11: После сего я услышал на небе громкий голос как бы многочисленного народа, который говорил: аллилуия! спасение и слава, и честь и сила Господу нашему! 19:1 μετὰ ταῦτα ἤκουσα ὡς φωνὴν μεγάλην ὄχλου πολλοῦ ἐν τῶ οὐρανῶ λεγόντων, ἁλληλουϊά· ἡ σωτηρία καὶ ἡ δόξα καὶ ἡ δύναμις τοῦ θεοῦ ἡμῶν, 19:1. Μετὰ (With) ταῦτα (to-the-ones-these) ἤκουσα (I-heard) ὡς (as) φωνὴν (to-a-sound) μεγάλην (to-great) ὄχλου (of-a-crowd) πολλοῦ (of-much) ἐν (in) τῷ (unto-the-one) οὐρανῷ (unto-a-sky) λεγόντων ( of-forthing ,"Ἁλληλουιά: (Hallelouia,"ἡ (the-one) σωτηρία (a-savioring-unto) καὶ (and) ἡ (the-one) δόξα (a-recognition) καὶ (and) ἡ (the-one) δύναμις (an-ability) τοῦ (of-the-one) θεοῦ (of-a-Deity) ἡμῶν, (of-us," 19:1. post haec audivi quasi vocem magnam turbarum multarum in caelo dicentium alleluia salus et gloria et virtus Deo nostro estAfter these things, I heard as it were the voice of much people in heaven, saying: Alleluia. Salvation and glory and power is to our God. 1. After these things I heard as it were a great voice of a great multitude in heaven, saying, Hallelujah; Salvation, and glory, and power, belong to our God: 19:1. And after these things I heard a great voice of much people in heaven, saying, Alleluia; Salvation, and glory, and honour, and power, unto the Lord our God: 19:1. After these things, I heard something like the voice of many multitudes in heaven, saying: “Alleluia! Praise and glory and power is for our God. And after these things I heard a great voice of much people in heaven, saying, Alleluia; Salvation, and glory, and honour, and power, unto the Lord our God: 1: После сего я услышал на небе громкий голос как бы многочисленного народа, который говорил: аллилуия! спасение и слава, и честь и сила Господу нашему! 19:1 μετὰ ταῦτα ἤκουσα ὡς φωνὴν μεγάλην ὄχλου πολλοῦ ἐν τῶ οὐρανῶ λεγόντων, ἁλληλουϊά· ἡ σωτηρία καὶ ἡ δόξα καὶ ἡ δύναμις τοῦ θεοῦ ἡμῶν, 19:1. 
###### А. П. Лопухин: Tолковая Библия или комментарий на все книги Св.Писания Ветхого и Нового Заветов - 1903-1914

1: Chapter 19 speaks of the solemn joy over the fall of Babylon, for that event foretold the near and final triumph of good and truth. The holy seer hears a new, loud, heavenly voice (in heaven, in its contrast to the earth), that is, the sound of singing (cf. X:3; XVI:18) of the blessed Angels alone [Ewald], with the four seraphim-living creatures at their head (IV:8). They cry out "alleluia" (from the Hebrew, "praise God") (cf. Ps CV:48). They give praise for the salvation, which is to be understood as the complete deliverance of the Christian community from the wiles of the devil. By glory is to be understood the glory of God, which belongs to God from eternity; and power, as the divine omnipotence, is the ground of this victory, of this triumph.

###### Matthew Henry: Concise Commentary on the Whole Bible - 1706

The Triumph of the Saints. A. D. 95.

1 And after these things I heard a great voice of much people in heaven, saying, Alleluia; Salvation, and glory, and honour, and power, unto the Lord our God: 2 For true and righteous are his judgments: for he hath judged the great whore, which did corrupt the earth with her fornication, and hath avenged the blood of his servants at her hand. 3 And again they said, Alleluia. And her smoke rose up for ever and ever. 4 And the four and twenty elders and the four beasts fell down and worshipped God that sat on the throne, saying, Amen; Alleluia.

The fall of Babylon being fixed, finished, and declared to be irrecoverable in the foregoing chapter, this begins with a holy triumph over her, in pursuance of the order given forth: Rejoice over her, thou heaven, and you holy apostles and prophets, ch. xviii. 20. They now gladly answer the call; and here you have, 1. The form of their thanksgiving, in that heavenly and most comprehensive word, Alleluia, praise you the Lord: with this they begin, with this they go on, and with this they end (v. 4); their prayers are now turned into praises, their hosannas end in halleluias. 2.
The matter of their thanksgiving: they praise him for the truth of his word, and the righteousness of his providential conduct, especially in this great event--the ruin of Babylon, which had been a mother, nurse, and nest of idolatry, lewdness, and cruelty (v. 2), for which signal example of divine justice they ascribe salvation, and glory, and honour, and power, unto our God. 3. The effect of these their praises: when the angels and saints cried Alleluia, her fire burned more fiercely and her smoke ascended for ever and ever, v. 3. The surest way to have our deliverances continued and completed is to give God the glory of what he has done for us. Praising God for what we have is praying in the most effectual manner for what is yet further to be done for us; the praises of the saints blow up the fire of God's wrath against the common enemy. 4. The blessed harmony between the angels and the saints in this triumphant song, v. 4. The churches and their ministers take the melodious sound from the angels, and repeat it; falling down, and worshipping God, they cry, Amen, Alleluia.

###### Adam Clarke: Commentary on the Bible - 1831

19:1: I heard a great voice of much people in heaven - The idolatrous city being destroyed, and the blood of the martyred saints being avenged, there is a universal joy among the redeemed of the Lord, which they commence with the word הללו יה Hallelu-Yah, praise ye Jah or Jehovah; which the Septuagint, and St. John from them, put into Greek letters thus: Αλληλουΐα, Allelou-ia, a form of praise which the heathens appear to have borrowed from the Jews, as is evident from their paeans, or hymns in honor of Apollo, which began and ended with ελελευ ιη, eleleu ie; a mere corruption of the Hebrew words. It is worthy of remark that the Indians of North America have the same word in their religious worship, and use it in the same sense. "In their places of worship, or beloved square, they dance sometimes for a whole night always in a bowing posture, and frequently singing halleluyah Ye ho wah; praise ye Yah, Ye ho vah:" probably the true pronunciation of the Hebrew יהוה, which we call Jehovah. See Adair's History of the American Indians. Salvation - He is the sole author of deliverance from sin; the glory of this belongs to him, the honor should be ascribed to him, and his power is that alone by which it is effected.

###### Albert Barnes: Notes on the Bible - 1834

19:1: And after these things - The things particularly that were exhibited in the previous chapter. See the notes on Rev 18:1. I heard a great voice of much people in heaven - The voice of the worshippers before the throne. Saying, Alleluia - The Greek method of writing "Hallelujah." This word - ἀλληλούΐα allēlouia - occurs in the New Testament only in this chapter, Rev 19:1, Rev 19:3-4, Rev 19:6. The Hebrew phrase - הללוּ יה haleluw Yah "Hallelujah" - occurs often in the Old Testament. It means, properly, "Praise Yahweh," or "Praise the Lord." The occasion on which it is introduced here is very appropriate. It is uttered by the inhabitants of heaven, in the immediate presence of God himself, and in view of the final overthrow of the enemies of the church, and the triumph of the gospel. In such circumstances it was fit that heaven should render praise, and that a song of thanksgiving should be uttered in which all holy beings could unite. Salvation - That is, the salvation is to be ascribed to God. See the notes on Rev 7:10. And glory, and honour - notes on Rev 5:12. And power - notes on Rev 5:13.
Unto the Lord our God - That is, all that there is of honor, glory, power, in the redemption of the world belongs to God, and should be ascribed to him. This is expressive of the true feelings of piety always; this will constitute the song of heaven. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:1: after: Rev_. 18:1-24 I heard: Rev 11:15, Rev 18:20 Alleluia: Rev 19:3, Rev 19:4, Rev 19:6; Psa 106:1, Psa 111:1, Psa 115:18, Psa 146:1, Psa 148:1, Psa 149:1, Psa 150:1 marg. Salvation: Rev 4:10, Rev 4:11, Rev 5:9-13, Rev 7:10-12, Rev 11:15, Rev 12:10; Ch1 29:11; Psa 3:8; Jon 2:9; Mat 6:13; Ti1 1:16, Ti1 1:17 ###### Geneva 1599 19:1 And (1) after these things I heard a great voice of much people in heaven, saying, (a) (2) Alleluia; Salvation, and glory, and honour, and power, unto the Lord our God: (1) This chapter has in summary two parts, one transitory or of passage to the things that follow, to the tenth verse, (Rev_ 19:2-10), another historical of the victory of Christ over both the beasts, to the end of the chapter (Rev_ 19:11-21), which I said was the second history of this argument, (Rev_ 17:1). The transition has two places, one of praising God for the overthrow done to Babylon in (Rev_ 19:4): and another likewise of praise and prophecy, for the coming of Christ to his kingdom, and his most royal marriage with his Church, thence to the tenth verse (Rev_ 19:5-10). The former praise has three parts, distinguished after the ancient manner of those that sing: an invitation in (Rev_ 19:1-2), a response or answer in (Rev_ 19:3), and a close or joining together in harmony in (Rev_ 19:4), all which I thought good of purpose to distinguish in this place, lest any man should with Porphyrius, or other like dogs, object to John, or the heavenly Church, a childish and idle repetition of speech. (a) Praise the Lord. (2) The proposition of praise with exhortation in this verse, and the cause of it in (Rev_ 19:2). ###### John Gill 19:1 And after these things,.... 
After the angel had declared the fall of Babylon, a voice from heaven had called the people of God out of her, and had ordered them to take vengeance on her; after the mournful lamentation of the kings, merchants, and seafaring men; after another voice had called upon the saints to rejoice at her overthrow, and a mighty angel had described the manner of it, and had expressed her ruin in the strongest terms, with the reasons of it, John heard the songs of the righteous, as follow: I heard a great voice of much people in heaven: not literally taken, for these are not the innumerable company of angels, who are never called people; nor the spirits of just men made perfect, or the souls of departed saints, but men on earth; wherefore heaven designs the church, as in Rev_ 18:20 and frequently in this book; the people are the same with the 144000 seen with the Lamb on Mount Zion, Rev_ 14:1 and with those on the sea of glass, who had got the victory over the beast, Rev_ 15:2 and are no other than God's covenant people, who are given to Christ, and made willing to be his in the day of his power; and though they are but a seed, a remnant, a small company, when compared with the world and carnal professors; yet are a large body of themselves, especially they will be at this time, when the nation of the Jews shall be born at once, and the fulness of the Gentiles will be brought in: and their voice on this occasion, the downfall of Rome, is said to be "great" partly on account of their number, who will join together in acclamations of praise, and partly on account of their great affection and vehemency of spirit, which will be raised hereby: saying Alleluia; an Hebrew word, which signifies "praise ye the Lord". The Jews say (n), that the book of Psalms consists of ten sorts of songs, but Hallelujah is the greatest of them, because it comprehends the name (Jehovah) and praise in one word: and it is observable that this word, which is often used in the Psalms, is first used when the Psalmist desires the utter consumption and destruction of sinners and wicked men on earth, and is here taken up by the saints at the destruction of the man of sin and son of perdition; see Ps 104:35 and its being an Hebrew word shows that at this time the Jews will be converted, and that Jews and Gentiles will become one church state, and will worship and praise the Lord together; for the word is a call upon the saints to join together in solemn praise and thanksgiving; who is to be praised for the perfections of his nature, for the works of his hands, both of nature and grace; and for his righteous judgments on his and his church's enemies; and this is to be done in concert: salvation, and glory, and honour, and power, unto the Lord our God: salvation, temporal, spiritual, and eternal, is of God; "salvation" from antichristian power and tyranny, and from all enemies, and the everlasting salvation of the soul; and the "glory" of it belongs to all the three Persons; they are glorious in themselves, and deserve all glory to be ascribed to them by man, and especially by the saints: "honour" is also their due; God the Father is to be honoured because he is the Father, and the Son is to he honoured as the Father is, and the Holy Spirit is not to be grieved, but to be highly esteemed and valued, and equally with the other two Persons: and "power" belongs to them all, and is seen in the works of creation, redemption, and sanctification. (n) Yalkut Simeoni, par. 2. fol. 89. 1. T. Bab. Pesachim, fol. 117. 1. 
###### John Wesley 19:1 I heard a loud voice of a great multitude - Whose blood the great whore had shed. Saying, Hallelujah - This Hebrew word signifies, Praise ye Jah, or Him that is. God named himself to Moses, EHEIEH, that is, I will be, Ex 3:14; and at the same time, "Jehovah," that is, "He that is, and was, and is to come:" during the trumpet of the seventh angel, he is styled, "He that is and was," Rev_ 16:5; and not "He that is to come;" because his long - expected coming is under this trumpet actually present. At length he is styled, "Jah," "He that is;" the past together with the future being swallowed up in the present, the former things being no more mentioned, for the greatness of those that now are. This title is of all others the most peculiar to the everlasting God. The salvation - Is opposed to the destruction which the great whore had brought upon the earth. His power and glory - Appear from the judgment executed on her, and from the setting up his kingdom to endure through all ages. ###### Robert Jamieson, A. R. Fausset and David Brown 19:1 THE CHURCH'S THANKSGIVING IN HEAVEN FOR THE JUDGMENT ON THE HARLOT. THE MARRIAGE OF THE LAMB: THE SUPPER: THE BRIDE'S PREPARATION: JOHN IS FORBIDDEN TO WORSHIP THE ANGEL: THE LORD AND HIS HOSTS COME FORTH FOR WAR: THE BEAST AND THE FALSE PROPHET CAST INTO THE LAKE OF FIRE: THE KINGS AND THEIR FOLLOWERS SLAIN BY THE SWORD OUT OF CHRIST'S MOUTH. (Rev. 19:1-21) As in the case of the opening of the prophecy, Rev_ 4:8; Rev_ 5:9, &c.; so now, at one of the great closing events seen in vision, the judgment on the harlot (described in Rev. 18:1-24), there is a song of praise in heaven to God: compare Rev_ 7:10, &c., toward the close of the seals, and Rev_ 11:15-18, at the close of the trumpets: Rev_ 15:3, at the saints' victory over the beast. And--so ANDREAS. But A, B, C, Vulgate, Syriac, and Coptic omit. a great voice--A, B, C, Vulgate, Coptic, and ANDREAS read, "as it were a great voice." What a contrast to the lamentations Rev. 18:1-24! Compare Jer 51:48. The great manifestation of God's power in destroying Babylon calls forth a great voice of praise in heaven. people--Greek, "multitude." Alleluia--Hebrew, "Praise ye JAH," or JEHOVAH: here first used in Revelation, whence ELLICOTT infers the Jews bear a prominent part in this thanksgiving. JAH is not a contraction of "JEHOVAH," as it sometimes occurs jointly with the latter. It means "He who Is": whereas Jehovah is "He who will be, is, and was." It implies God experienced as a PRESENT help; so that "Hallelujah," says KIMCHI in BENGEL, is found first in the Psalms on the destruction of the ungodly. "Hallelu-Jah" occurs four times in this passage. Compare Ps 149:4-9, which is plainly parallel, and indeed identical in many of the phrases, as well as the general idea. Israel, especially, will join in the Hallelujah, when "her warfare is accomplished" and her foe destroyed. Salvation, &c.--Greek, "The salvation . . . the glory . . . the power." and honour--so Coptic. But A, B, C, and Syriac omit. unto the Lord our God--so ANDREAS. But A, B, C, and Coptic read, "(Is) of our God," that is, belongs to Him. 19:219:2: զի ճշմարի՛տ եւ արդա՛ր են դատաստանք նորա. 
զի դատեա՛ց զպոռնիկն մեծ ՚ի պոռնկութեան իւրում, եւ խնդրեաց զվրէժ արեան ծառայից իւրոց ՚ի ձեռաց նորա: 2 որովհետեւ ճշմարիտ եւ արդար են նրա դատաստանները, քանի որ նա դատեց մեծ պոռնիկին իր պոռնկութեան մէջ եւ նրանից լուծեց իր ծառաների արեան վրէժը»: 2 Վասն զի ճշմարիտ ու արդար են անոր դատաստանները, քանզի դատեց այն մեծ պոռնիկը, որ իր պոռնկութիւնովը երկիրը ապականեց ու իր ծառաներուն արեան վրէժը անոր ձեռքէն պահանջեց»։ զիճշմարիտեւարդարենդատաստանքնորա. զիդատեացզպոռնիկնմեծիպոռնկութեանիւրում, եւխնդրեացզվրէժարեանծառայիցիւրոցիձեռացնորա: 19:2: զի ճշմարի՛տ եւ արդա՛ր են դատաստանք նորա. զի դատեա՛ց զպոռնիկն մեծ ՚ի պոռնկութեան իւրում, եւ խնդրեաց զվրէժ արեան ծառայից իւրոց ՚ի ձեռաց նորա: 2 որովհետեւ ճշմարիտ եւ արդար են նրա դատաստանները, քանի որ նա դատեց մեծ պոռնիկին իր պոռնկութեան մէջ եւ նրանից լուծեց իր ծառաների արեան վրէժը»: 2 Վասն զի ճշմարիտ ու արդար են անոր դատաստանները, քանզի դատեց այն մեծ պոռնիկը, որ իր պոռնկութիւնովը երկիրը ապականեց ու իր ծառաներուն արեան վրէժը անոր ձեռքէն պահանջեց»։ zohrab-1805▾eastern-1994▾western am▾ 19:22: Ибо истинны и праведны суды Его: потому что Он осудил ту великую любодейцу, которая растлила землю любодейством своим, и взыскал кровь рабов Своих от руки ее. 19:2 ὅτι ἀληθιναὶ καὶ δίκαιαι αἱ κρίσεις αὐτοῦ· ὅτι ἔκρινεν τὴν πόρνην τὴν μεγάλην ἥτις ἔφθειρεν τὴν γῆν ἐν τῇ πορνείᾳ αὐτῆς, καὶ ἐξεδίκησεν τὸ αἷμα τῶν δούλων αὐτοῦ ἐκ χειρὸς αὐτῆς. 19:2. ὅτι (to-which-a-one) ἀληθιναὶ ( un-secluded-belonged-to ) καὶ (and) δίκαιαι ( course-belonged ) αἱ ( the-ones ) κρίσεις ( separatings ) αὐτοῦ : ( of-it ,"ὅτι (to-which-a-one) ἔκρινεν (it-separated) τὴν (to-the-one) πόρνην (to-a-harlot) τὴν (to-the-one) μεγάλην (to-great) ἥτις (which-a-one) ἔφθειρεν (it-was-degrading) τὴν (to-the-one) γῆν (to-a-soil) ἐν (in) τῇ (unto-the-one) πορνείᾳ (unto-a-harloting-of) αὐτῆς, (of-it,"καὶ (and) ἐξεδίκησεν ( it-coursed-out-unto ) τὸ ( to-the-one ) αἷμα ( to-a-blood ) τῶν ( of-the-ones ) δουλων ( of-bondees ) αὐτοῦ (of-it) ἐκ ( out ) χειρὸς ( of-a-hand ) αὐτῆς. (of-it) 19:2. quia vera et iusta iudicia sunt eius quia iudicavit de meretrice magna quae corrupit terram in prostitutione sua et vindicavit sanguinem servorum suorum de manibus eiusFor true and just are his judgments, who hath judged the great harlot which corrupted the earth with her fornication and hath revenged the blood of his servants, at her hands. 2. for true and righteous are his judgments; for he hath judged the great harlot, which did corrupt the earth with her fornication, and he hath avenged the blood of his servants at her hand. 19:2. For true and righteous [are] his judgments: for he hath judged the great whore, which did corrupt the earth with her fornication, and hath avenged the blood of his servants at her hand. 19:2. For true and just are his judgments, he who has judged the great harlot that corrupted the earth by her prostitution. And he has vindicated the blood of his servants from her hands.” For true and righteous [are] his judgments: for he hath judged the great whore, which did corrupt the earth with her fornication, and hath avenged the blood of his servants at her hand: 2: Ибо истинны и праведны суды Его: потому что Он осудил ту великую любодейцу, которая растлила землю любодейством своим, и взыскал кровь рабов Своих от руки ее. 19:2 ὅτι ἀληθιναὶ καὶ δίκαιαι αἱ κρίσεις αὐτοῦ· ὅτι ἔκρινεν τὴν πόρνην τὴν μεγάλην ἥτις ἔφθειρεν τὴν γῆν ἐν τῇ πορνείᾳ αὐτῆς, καὶ ἐξεδίκησεν τὸ αἷμα τῶν δούλων αὐτοῦ ἐκ χειρὸς αὐτῆς. 19:2. 
ὅτι (to-which-a-one) ἀληθιναὶ (un-secluded-belonged-to) καὶ (and) δίκαιαι (course-belonged) αἱ (the-ones) κρίσεις (separatings) αὐτοῦ: (of-it,"ὅτι (to-which-a-one) ἔκρινεν (it-separated) τὴν (to-the-one) πόρνην (to-a-harlot) τὴν (to-the-one) μεγάλην (to-great) ἥτις (which-a-one) ἔφθειρεν (it-was-degrading) τὴν (to-the-one) γῆν (to-a-soil) ἐν (in) τῇ (unto-the-one) πορνείᾳ (unto-a-harloting-of) αὐτῆς, (of-it,"καὶ (and) ἐξεδίκησεν (it-coursed-out-unto) τὸ (to-the-one) αἷμα (to-a-blood) τῶν (of-the-ones) δουλων (of-bondees) αὐτοῦ (of-it) ἐκ (out) χειρὸς (of-a-hand) αὐτῆς. (of-it) 19:2. quia vera et iusta iudicia sunt eius quia iudicavit de meretrice magna quae corrupit terram in prostitutione sua et vindicavit sanguinem servorum suorum de manibus eius For true and just are his judgments, who hath judged the great harlot which corrupted the earth with her fornication and hath revenged the blood of his servants, at her hands. 2. fortrueandrighteousarehisjudgments; forhehathjudgedthegreatharlot, whichdidcorrupttheearthwithherfornication, andhehathavengedthe blood ofhisservantsatherhand. 19:2. For true and righteous [are] his judgments: for he hath judged the great whore, which did corrupt the earth with her fornication, and hath avenged the blood of his servants at her hand. 19:2. For true and just are his judgments, he who has judged the great harlot that corrupted the earth by her prostitution. And he has vindicated the blood of his servants from her hands.” ru▾el▾el-en-gloss▾vulgate▾erva_1895▾kjv_1900▾catholic_pdv▾ jfb▾jw▾jg▾tr▾ab▾ac▾all ▾ ###### Adam Clarke: Commentary on the Bible - 1831 19:2: For true and righteous - His judgments displayed in supporting his followers, and punishing his enemies, are true - according to his predictions; and righteous, being all according to infinite justice and equity. ###### Albert Barnes: Notes on the Bible - 1834 19:2: For true and righteous are his judgments - That is, the calamities that come upon the power here referred to are deserved. For he hath judged the great whore - The power represented by the harlot. See the notes on Rev 17:1. Which did corrupt the earth with her fornication - See the notes on Rev 14:8; Rev 17:2, Rev 17:4-5; Rev 18:3. Compare the notes on Rev 9:21. And hath avenged the blood of his servants - See the notes on Rev 18:20, Rev 18:24. At her hand - Shed by her hand, ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:2: true: Rev 15:3, Rev 16:5-7; Deu 32:4; Psa 19:9; Isa 25:1 judged: Rev 17:1, Rev 17:2, Rev 17:15, Rev 17:16, Rev 18:3, Rev 18:9, Rev 18:10, Rev 18:23 and hath: Rev 6:10, Rev 18:20, Rev 18:24; Deu 32:35, Deu 32:43 ###### John Gill 19:2 For true and righteous are his judgments,.... 
As in See Gill on Rev_ 15:3; see Gill on Rev_ 16:7, this is to be understood of God's judgments in general, and is a reason of the attribution of praise and glory to him; which may be said to be true, because, being threatened, are now fulfilled; and to be "righteous", because according to the demerit of sin; and particularly God's judgments on antichrist are intended: for he hath judged the great whore; Jezebel, Babylon, the Romish antichrist, before spoken of, Rev_ 17:1 not only by passing a sentence of condemnation on her, but by executing it, putting it into the hearts of the kings to hate and burn her, and utterly destroy her; and which is judging right, since it follows: which did corrupt the earth with her fornication; drew the kings and inhabitants of the Roman empire into wicked and idolatrous practices, and so corrupted and destroyed them in soul, body, and estate; See Gill on Rev_ 11:18 for this vision is contemporary with the seventh trumpet: and hath avenged the blood of his servants at her hand; shed by her, Rev_ 18:20 and this being done in righteous judgment, is matter of joy and praise to the saints. ###### John Wesley 19:2 For true and righteous are his judgments - Thus is the cry of the souls under the altar changed into a song of praise. ###### Robert Jamieson, A. R. Fausset and David Brown 19:2 which did corrupt the earth--Greek, "used to corrupt" continually. "Instead of opposing and lessening, she promoted the sinful life and decay of the world by her own earthliness, allowing the salt to lose its savor" [AUBERLEN]. avenged--Greek, "exacted in retribution." A particular application of the principle (Gen 9:5). blood of his servants--literally shed by the Old Testament adulterous Church, and by the New Testament apostate Church; also virtually, though not literally, by all who, though called Christians, hate their brother, or love not the brethren of Christ, but shrink from the reproach of the cross, and show unkindness towards those who bear it. 19:319:3: Եւ կրկին անգամ օրհնեցին՝ եւ ասացին. Ալէ՛լուիա։ Եւ ծո՛ւխ նորա ելանէր յաւիտեանս յաւիտենից. 3 Եւ կրկին անգամ օրհներգեցին ու ասացին. «Ալէլուիա՜. եւ նրա ծուխը բարձրանում է յաւիտեանս յաւիտենից»: 3 Եւ կրկին անգամ ըսին. «Ալէլուիա՜. անոր ծուխը յաւիտեանս յաւիտենից պիտի ելլէ»։ Եւկրկինանգամօրհնեցինեւասացին"). Ալէլուիա"): Եւ")ծուխ")նորա")ելանէր")յաւիտեանս")յաւիտենից"): 19:3: Եւ կրկին անգամ օրհնեցին՝ եւ ասացին. Ալէ՛լուիա։ Եւ ծո՛ւխ նորա ելանէր յաւիտեանս յաւիտենից. 3 Եւ կրկին անգամ օրհներգեցին ու ասացին. «Ալէլուիա՜. եւ նրա ծուխը բարձրանում է յաւիտեանս յաւիտենից»: 3 Եւ կրկին անգամ ըսին. «Ալէլուիա՜. անոր ծուխը յաւիտեանս յաւիտենից պիտի ելլէ»։ zohrab-1805▾ δεύτερον (to-second) εἴρηκαν (they-hath-had-come-to-utter,"Ἁλληλουιά: (Hallelouia," καὶ ( and ) ὁ ( the-one ) καπνὸς ( a-smoke ) αὐτῆς ( of-it ) ἀναβαίνει ( it-steppeth-up ) εἰς ( into ) τοὺς ( to-the-ones ) αἰῶνας ( to-ages ) τῶν (of-the-ones) αἰώνων. (of-ages) 19:3. et iterum dixerunt alleluia et fumus eius ascendit in saecula saeculorumAnd again they said: Alleluia. And her smoke ascendeth for ever and ever. 3. And a second time they say, Hallelujah. And her smoke goeth up for ever and ever. 19:3. And again they said, Alleluia. And her smoke rose up for ever and ever. 19:3. And again, they said: “Alleluia! For her smoke ascends forever and ever.” And again they said, Alleluia. And her smoke rose up for ever and ever: 3: И вторично сказали: аллилуия! И дым ее восходил во веки веков. 19:3 καὶ δεύτερον εἴρηκαν, ἁλληλουϊά· καὶ ὁ καπνὸς αὐτῆς ἀναβαίνει εἰς τοὺς αἰῶνας τῶν αἰώνων. 19:3. 
###### Adam Clarke: Commentary on the Bible - 1831

19:3: Her smoke rose up - There was, and shall be, a continual evidence of God's judgments executed on this great whore or idolatrous city; nor shall it ever be restored.

###### Albert Barnes: Notes on the Bible - 1834

19:3: And again they said, Alleluia - See the notes on Rev 19:1. The event was so glorious and so important; the final destruction of the great enemy of the church was of so much moment in its bearing on the welfare of the world, as to call forth repeated expressions of praise. And her smoke rose up forever and ever - See the notes on Rev 14:11. This is an image of final ruin; the image being derived probably from the description in Genesis of the smoke that ascended from the cities of the plain, Gen 19:28. On the joy expressed here in her destruction, compare the notes on Rev 18:20.

###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880

19:3: Alleluia: Rev 19:1 And her: Rev 14:11, Rev 18:9, Rev 18:18; Gen 19:28; Isa 34:10; Jde 1:7

###### Geneva 1599

19:3 And again they said, (3) Alleluia. And her smoke rose up for ever and ever. (3) The song of the Antiphony or response, containing an amplification of the praise of God, from the continuous and certain testimony of his divine judgment as was done at Sodom and Gomorrah, (Gen. 19:1-38).

###### John Gill

19:3 And again they said, Alleluia,.... Or a "second time" they said it; they began and ended their solemn worship and service with it; so some psalms begin and end with this word, translated in the Old Testament by the words "Praise ye the LORD", as in Ps 106:1 &c. and the repeating of the word shows how hearty, earnest, and constant they were in the work of praise on this account: and her smoke rose up for ever and ever; they repeated their hallelujah, or gave one spiritual "huzza" more at the burning of Rome, and this followed: or the words may be rendered, "for her smoke rose", &c. and so are a reason for the second "hallelujah": it looks as if Rome, like another Sodom and Gomorrah, would sink into a sulphurous burning lake, and continue so: respect is had to the everlasting punishment of antichrist and his followers in hell, and to the everlasting burnings that will follow Rome's temporal destruction, which was an example and symbol of the vengeance of eternal fire; see Rev_ 14:11 so the Jews (o) say of the burning of Rome, that its fire shall not be quenched for ever, and that "its smoke shall rise up for ever". (o) Yalkut Simeoni, par. 2. fol. 48. 2.

###### Robert Jamieson, A. R. Fausset and David Brown

19:3 again--Greek, "a second time." rose up--Greek, "goeth up." for ever and ever--Greek, "to the ages of the ages."
19:4: եւ անկա՛ն քսան եւ չորք երիցունքն՝ եւ չորք կենդանիքն, եւ երկի՛ր պագին Աստուծոյ՝ որ նստէ՛ր յաթոռ փառաց՝ ասելով. Ամէ՛ն՝ ալէ՛լուիա: Ոմանք. Քսան եւ չորս երի՛՛... եւ երկրպագեցին Աստուծոյ որ... յաթոռն։
4 Եւ ծնկի եկան քսանչորս երէցներն ու չորս կենդանիները եւ երկրպագեցին փառքի գահի վրայ նստած Աստծուն՝ ասելով. «Ամէն. Ալէլուիա՜»:
4 Քսանըչորս երէցները ու չորս կենդանիները ինկան ու երկրպագութիւն ըրին Աստուծոյ, որ աթոռը կը նստի, ըսելով. «Ամէ՛ն, Ալէլուիա՜»։
4: Тогда двадцать четыре старца и четыре животных пали и поклонились Богу, сидящему на престоле, говоря: аминь! аллилуия!
19:4 καὶ ἔπεσαν οἱ πρεσβύτεροι οἱ εἴκοσι τέσσαρες καὶ τὰ τέσσαρα ζῶα, καὶ προσεκύνησαν τῶ θεῶ τῶ καθημένῳ ἐπὶ τῶ θρόνῳ, λέγοντες, ἀμήν, ἁλληλουϊά.
19:4. καὶ (And) ἔπεσαν (they-fell) οἱ (the-ones) πρεσβύτεροι (more-eldered) οἱ (the-ones) εἴκοσι (twenty) τέσσαρες (four) καὶ (and) τὰ (the-ones) τέσσερα (four) ζῷα (lifelets), καὶ (and) προσεκύνησαν (they-kissed-toward-unto) τῷ (unto-the-one) θεῷ (unto-a-Deity) τῷ (unto-the-one) καθημένῳ (unto-sitting-down) ἐπὶ (upon) τῷ (unto-the-one) θρόνῳ (unto-a-throne) λέγοντες (forthing), Ἀμήν (Amen), Ἁλληλουιά (Hallelouia).
19:4. et ceciderunt seniores viginti quattuor et quattuor animalia et adoraverunt Deum sedentem super thronum dicentes amen alleluia
And the four and twenty ancients and the four living creatures fell down and adored God that sitteth upon the throne, saying: Amen. Alleluia.
4. And the four and twenty elders and the four living creatures fell down and worshipped God that sitteth on the throne, saying, Amen; Hallelujah.
19:4. And the four and twenty elders and the four beasts fell down and worshipped God that sat on the throne, saying, Amen; Alleluia.
19:4. And the twenty-four elders and the four living creatures fell down and worshiped God, sitting upon the throne, saying: “Amen! Alleluia!”
19:5: Եւ ձայն յաթոռոյն ելանէր՝ ասելով. Օրհնեցէ՛ք զԱստուած ամենայն ծառայք նորա, եւ որք երկնչիք ՚ի նմանէ՝ փոքունք եւ մեծամեծք( "Ոմանք. ԶԱստուած մեր ամենայն ծա՛՛։"): Ոմանք.
ԶԱստուած մեր ամենայն ծա՛՛։ 5 Եւ գահից մի ձայն ելաւ, որ ասում էր. «Օրհներգեցէ՛ք մեր Աստծուն, նրա բոլոր ծառանե՛րդ, եւ դո՛ւք, որ երկնչում էք նրանից, փոքրե՛ր եւ մեծե՛ր»: 5 Եւ աթոռէն ձայն մը ելաւ, ըսելով. «Մեր Աստուծոյն օրհնութիւն տուէք, ո՛վ բոլոր ծառաներ ու դուք որ իրմէ կը վախնաք, պզտիկներ ու մեծեր»։ zohrab-1805▾ φωνὴ (a-sound) ἀπὸ (off) τοῦ (of-the-one) θρόνου (of-a-throne) ἐξῆλθεν (it-had-came-out) λέγουσα (forthing," Αἰνεῖτε ( Ye-should-laud-unto ) τῷ (unto-the-one) θεῷ (unto-a-Deity) ἡμῶν, (of-us," πάντες ( all ) οἱ ( the-ones ) δοῦλοι ( bondees ) αὐτοῦ (of-it) οἱ ( the-ones ) φοβούμενοι ( feareeing-unto ) αὐτόν , ( to-it ," οἱ ( the-ones ) μικροὶ ( small ) καὶ ( and ) οἱ ( the-ones ) μεγάλοι . ( great ) 19:5. et vox de throno exivit dicens laudem dicite Deo nostro omnes servi eius et qui timetis eum pusilli et magniAnd a voice came out from the throne, saying: Give praise to our God, all ye his servants: and you that fear him, little and great. 5. And a voice came forth from the throne, saying, Give praise to our God, all ye his servants, ye that fear him, the small and the great. 19:5. And a voice came out of the throne, saying, Praise our God, all ye his servants, and ye that fear him, both small and great. 19:5. And a voice went out from the throne, saying: “Express praise to our God, all you his servants, and you who fear him, small and great.” And a voice came out of the throne, saying, Praise our God, all ye his servants, and ye that fear him, both small and great: 5: И голос от престола исшел, говорящий: хвалите Бога нашего, все рабы Его и боящиеся Его, малые и великие. 19:5 καὶ φωνὴ ἀπὸ τοῦ θρόνου ἐξῆλθεν λέγουσα, αἰνεῖτε τῶ θεῶ ἡμῶν, πάντες οἱ δοῦλοι αὐτοῦ, [καὶ] οἱ φοβούμενοι αὐτόν, οἱ μικροὶ καὶ οἱ μεγάλοι. 19:5. καὶ (And) φωνὴ (a-sound) ἀπὸ (off) τοῦ (of-the-one) θρόνου (of-a-throne) ἐξῆλθεν (it-had-came-out) λέγουσα (forthing,"Αἰνεῖτε (Ye-should-laud-unto) τῷ (unto-the-one) θεῷ (unto-a-Deity) ἡμῶν, (of-us,"πάντες (all) οἱ (the-ones) δοῦλοι (bondees) αὐτοῦ (of-it) οἱ (the-ones) φοβούμενοι (feareeing-unto) αὐτόν, (to-it,"οἱ (the-ones) μικροὶ (small) καὶ (and) οἱ (the-ones) μεγάλοι. (great) 19:5. et vox de throno exivit dicens laudem dicite Deo nostro omnes servi eius et qui timetis eum pusilli et magni And a voice came out from the throne, saying: Give praise to our God, all ye his servants: and you that fear him, little and great. 5. Andavoicecameforthfromthethrone, saying, Give praise toourGod, allyehisservants, yethatfearhim, thesmallandthegreat. 19:5. And a voice came out of the throne, saying, Praise our God, all ye his servants, and ye that fear him, both small and great. 19:5. And a voice went out from the throne, saying: “Express praise to our God, all you his servants, and you who fear him, small and great.” ru▾, yet is declared to be such as would make all those happy who were called to it, so called as to accept the invitation, a feast made up of the promises of the gospel, the true sayings of God, v. 9. These promises, opened, applied, sealed, and earnested by the Spirit of God, in holy eucharistical ordinances, are the marriage-feast; and the whole collective body of all those who partake of this feast is the bride, the Lamb's wife; they eat into one body, and drink into one Spirit, and are not mere spectators or guests, but coalesce into the espoused party, the mystical body of Christ. 3. The transport of joy which the apostle felt in himself at this vision. 
He fell down at the feet of the angel, to worship him, supposing him to be more than a creature, or having his thoughts at the present overpowered by the vehemency of his affections. Here observe, (1.) What honour he offered to the angel: He fell at his feet, to worship him; this prostration was a part of external worship, it was a posture of proper adoration. (2.) How the angel refused it, and this was with some resentment: "See thou do it not; have a care what thou doest, thou art doing a wrong thing." (3.) He gave a very good reason for his refusal: "I am thy fellow-servant, and of thy brethren which have the testimony of Jesus--I am a creature, thine equal in office, though not in nature; I, as an angel and messenger of God, have the testimony of Jesus, a charge to be a witness for him and to testify concerning him, and thou, as an apostle, having the Spirit of prophecy, hast the same testimony to give in; and therefore we are in this brethren and fellow-servants." (4.) He directs him to the true and only object of religious worship; namely, God: "Worship God, and him alone." This fully condemns both the practice of the papists in worshipping the elements of bread and wine, and saints, and angels, and the practice of those Socinians and Arians who do not believe that Christ is truly and by nature God, and yet pay him religious worship; and this shows what wretched fig-leaves all their evasions and excuses are which they offer in their own vindication: they stand hereby convicted of idolatry by a messenger from heaven.

###### Adam Clarke: Commentary on the Bible - 1831

19:5: Praise our God, etc. - Let all, whether redeemed from among Jews or Gentiles, give glory to God.

###### Albert Barnes: Notes on the Bible - 1834

19:5: And a voice came out of the throne - A voice seemed to come from the very midst of the throne. It is not said by whom this voice was uttered. It cannot be supposed, however, that it was uttered by God himself, for the command which it gave was this: "Praise our God," etc. For the same reason it seems hardly probable that it was the voice of the Messiah, unless it be supposed that he here identifies himself with the redeemed church, and speaks of God as his God and hers. It would seem rather that it was a responsive voice that came from those nearest the throne, calling on all to unite in praising God in view of what was done. The meaning then will be, that all heaven was interested in the triumph of the church, and that one portion of the dwellers there called on the others to unite in offering thanksgiving. Praise our God - The God that we worship. All ye his servants - All in heaven and earth; all have occasion for thankfulness. And ye that fear him - That reverence and obey him. The fear of the Lord is a common expression in the Scriptures to denote true piety. Both small and great - All of every class and condition - poor and rich - young and old; those of humble and those of exalted rank. Compare Psa 148:7-13.

###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880

19:5: a voice: Rev 7:15, Rev 11:19, Rev 16:17 Praise: Psa 103:20-22, Psa 134:1, Psa 135:1, Psa 135:19, Psa 135:20, Psa 148:11-13, Psa 150:6 both: Rev 11:18, Rev 20:12

###### Geneva 1599

19:5 (4) And a voice came out of the (5) throne, saying, Praise our God, all ye his servants, and ye that fear him, both small and great.
(4) The second place of praise, as I said See Rev_ 19:1 which first is commanded by God in this verse: and then is in most ample manner pronounced by the creatures, both because they see that kingdom of Christ to come, which they desire, (Rev_ 19:6) and also because they see the Church is called forth to be brought home to the house of her husband by holy marriage, to the fellowship of his kingdom, (Rev_ 19:7-8). Therefore John is commanded to write in a book the acclamation together with a divine testimony, (Rev_ 19:9). (5) Out of the temple from God as in (Rev_ 11:19). ###### John Gill 19:5 And a voice came out of the throne,.... Not from God the Father, that sat upon it, for the phrase, praise our God, could not be said by him with propriety and pertinence; but rather from Christ, the Lamb, in the midst of the throne, who as Mediator could say of him to his people, my God and your God, and my Father and your Father, Jn 20:17 though it seems best to understand it of the voice of one of the angels about the throne, since one of these is afterwards spoken of, whom John would have worshipped, but was forbid, Rev_ 19:9 and which may design either one of the ministering spirits, or a preacher of the Gospel, and a set of such, calling upon the saints to the discharge of their duty, or to return to it on this occasion: saying, praise our God, all ye his servants; meaning not the ministers of the Gospel only, who serve in the Gospel of Christ, by preaching and defending it, and in the administration of Gospel ordinances to the comfort of the saints, but all the people of God; for though they are sons, and no more servants to sin and Satan, and the world, yet they are servants of God and of righteousness, and serve him willingly and cheerfully in a way of duty, and without slavish fear, and with a godly one, and from principles of love and gratitude, and without mercenary views and selfish ends; and these are called upon, as a part of their service, to say hallelujah, or to sing the praises of God for his judgments on antichrist; see Ps 134:1. and ye that fear him, both small and great; who fear the Lord, not with a servile, but filial fear, with the new covenant grace of fear, which springs from, and is increased by, the goodness and grace of God; whether greater or lesser believers, fathers, young men, or children; whether Jews or Gentiles, or of whatsoever nation, kindred, or people; see Ps 115:13. ###### John Wesley 19:5 And a voice came forth from the throne - Probably from the four living creatures, saying, Praise our God - The occasion and matter of this song of praise follow immediately after, Rev_ 19:6, &c.; God was praised before, for his judgment of the great whore, Rev_ 19:1-4. Now for that which follows it: for that the Lord God, the Almighty, takes the kingdom to himself, and avenges himself on the rest of his enemies. Were all these inhabitants of heaven mistaken? If not, there is real, yea, and terrible anger in God. ###### Robert Jamieson, A. R. Fausset and David Brown 19:5 out of--Greek, "out from the throne" in A, B, C. Praise our God--Compare the solemn act of praise performed by the Levites, 1Chron 16:36; 1Chron 23:5, especially when the house of God was filled with the divine glory (2Chron 5:13). both--omitted in A, B, C, Vulgate, Coptic, and Syriac. Translate as Greek, "the small and the great." 
լուայ")( եւ")իբրեւ")զձայն")ջուրց")բազմաց"), եւ")իբրեւ")զձայն")հզօր")որոտման"), ասելով")( Ալէլուիա"), վասն")զի")թագաւորեաց")( 19:6: Եւ լուա՛յ ձայն բազմութեան մեծի՝ իբրեւ զձայն ջուրց բազմաց, եւ իբրեւ զձայն հզօ՛ր որոտման՝ ասելով դարձեալ. Ալէ՛լուիա, վասն զի թագաւորեա՛ց Աստուած Ամենակալ( "Ոմանք. Հզօր որոտմանց... Աստուած մեր Ամենա՛՛։ Ուր Ոսկան. Տէր Աստուած մեր Ա՛՛։"): Ոմանք. Հզօր որոտմանց... Աստուած մեր Ամենա՛՛։ Ուր Ոսկան. Տէր Աստուած մեր Ա՛՛։ 6 Եւ լսեցի մի ձայն՝ ինչպէս ձայնը մի մեծ բազմութեան, ինչպէս ձայնը շատ ջրերի եւ ինչպէս ձայնը ուժեղ որոտի, որ ասում էր դարձեալ. «Ալէլուիա՜. քանզի թագաւորեց մեր Ամենակալ Աստուածը: 6 Եւ լսեցի ձայն մը մեծ բազմութեան ձայնի պէս ու շատ ջուրերու ձայնի պէս ու սաստիկ որոտումներու ձայնի պէս, որոնք կ’ըսէին. «Ալէլուիա՜, վասն զի մեր Ամենակալ Տէր Աստուածը թագաւորեց։ zohrab-1805▾ ἤκουσα (I-heard) ὡς ( as ) φωνὴν ( to-a-sound ) ὄχλου ( of-a-crowd ) πολλοῦ (of-much) καὶ (and) ὡς ( as ) φωνὴν ( to-a-sound ) ὑδάτων ( of-waters ) πολλῶν ( of-much ) καὶ (and) ὡς (as) φωνὴν (to-a-sound) βροντῶν (of-thunders) ἰσχυρῶν , ( of-force-held ) λεγόντων ( of-forthing ,"Ἁλληλουιά, (Hallelouia,"ὅτι (to-which-a-one) ἐβασίλευσεν ( it-ruled-of ," Κύριος , ( Authority-belonged ," ὁ ( the-one ) θεὸς ( a-Deity ) [ἡμῶν], "[of-us]," ὁ ( the-one ) παντοκράτωρ . ( an-all-securer ) 19:6. et audivi quasi vocem turbae magnae et sicut vocem aquarum multarum et sicut vocem tonitruum magnorum dicentium alleluia quoniam regnavit Dominus Deus noster omnipotensAnd I heard as it were the voice of a great multitude, and as the voice of many waters, and as the voice of great thunders, saying: Alleluia: for the Lord our God, the Almighty, hath reigned. 6. And I heard as it were the voice of a great multitude, and as the voice of many waters, and as the voice of mighty thunders, saying, Hallelujah: for the Lord our God, the Almighty, reigneth. 19:6. And I heard as it were the voice of a great multitude, and as the voice of many waters, and as the voice of mighty thunderings, saying, Alleluia: for the Lord God omnipotent reigneth. 19:6. And I heard something like the voice of a great multitude, and like the voice of many waters, and like the voice of great thunders, saying: “Alleluia! For the Lord our God, the Almighty, has reigned. And I heard as it were the voice of a great multitude, and as the voice of many waters, and as the voice of mighty thunderings, saying, Alleluia: for the Lord God omnipotent reigneth: 6: И слышал я как бы голос многочисленного народа, как бы шум вод многих, как бы голос громов сильных, говорящих: аллилуия! ибо воцарился Господь Бог Вседержитель. 19:6 καὶ ἤκουσα ὡς φωνὴν ὄχλου πολλοῦ καὶ ὡς φωνὴν ὑδάτων πολλῶν καὶ ὡς φωνὴν βροντῶν ἰσχυρῶν λεγόντων, ἁλληλουϊά, ὅτι ἐβασίλευσεν κύριος ὁ θεὸς [ἡμῶν] ὁ παντοκράτωρ. 19:6. Καὶ (And) ἤκουσα (I-heard) ὡς (as) φωνὴν (to-a-sound) ὄχλου (of-a-crowd) πολλοῦ (of-much) καὶ (and) ὡς (as) φωνὴν (to-a-sound) ὑδάτων (of-waters) πολλῶν (of-much) καὶ (and) ὡς (as) φωνὴν (to-a-sound) βροντῶν (of-thunders) ἰσχυρῶν, (of-force-held) λεγόντων (of-forthing,"Ἁλληλουιά, (Hallelouia,"ὅτι (to-which-a-one) ἐβασίλευσεν (it-ruled-of,"Κύριος, (Authority-belonged,"ὁ (the-one) θεὸς (a-Deity) [ἡμῶν], "[of-us],"ὁ (the-one) παντοκράτωρ. (an-all-securer) 19:6. 
et audivi quasi vocem turbae magnae et sicut vocem aquarum multarum et sicut vocem tonitruum magnorum dicentium alleluia quoniam regnavit Dominus Deus noster omnipotens And I heard as it were the voice of a great multitude, and as the voice of many waters, and as the voice of great thunders, saying: Alleluia: for the Lord our God, the Almighty, hath reigned. 6. AndI heard asitwerethevoiceofagreatmultitude, andasthevoiceofmanywaters, andasthevoiceofmightythunders, saying, Hallelujah: fortheLordourGod, theAlmighty, reigneth. 19:6. And I heard as it were the voice of a great multitude, and as the voice of many waters, and as the voice of mighty thunderings, saying, Alleluia: for the Lord God omnipotent reigneth. 19:6. And I heard something like the voice of a great multitude, and like the voice of many waters, and like the voice of great thunders, saying: “Alleluia! For the Lord our God, the Almighty, has reigned. ru▾ для произнесения окончательного приговора суда. Общество верных христиан, доживших до последнего времени, и есть эта женщина, эта невеста Агнца. Оно приготовило себя к встрече своего жениха, Иисуса Христа. ###### Adam Clarke: Commentary on the Bible - 1831 19:6: The voice of a great multitude - This is the catholic or universal Church of God gathered from among the Gentiles. The Lord God Omnipotent reigneth - Εβασιλευσε Κυριος ὁ Θεος ὁ παντοκρατωρ. Many excellent MSS., most of the versions, with Andreas and Arethas, the two most ancient commentators on this book, add ἡμων, our, after ὁ Θεος· and according to this the text reads emphatically thus: Our Lord God, the Almighty, reigneth. What consolation to every genuine Christian that His Lord and God is the Almighty, and that this Almighty never trusts the reins of the government of the universe out of his hands! What therefore has his Church to fear? ###### Albert Barnes: Notes on the Bible - 1834 19:6: And I heard as it were the voice of a great multitude - In Rev 19:1 he says that he "heard a great voice of much people"; here he says he "heard as it were a voice of a great multitude." That is, in the former case he heard a shout that he at once recognized as the voice of a great multitude of persons; here he says that he heard a sound not distinctly recognized at first as such, but which resembled such a shout of a multitude. In the former case it was distinct; here it was confused - bearing a resemblance to the sound of roaring waters, or to muttering thunder, but less distinct than the former. This phrase would imply: (a) a louder sound; and, (b) that the sound was more remote, and therefore less clear and distinct. And as the voice of many waters - The comparison of the voices of a host of people with the roar of mighty waters is not uncommon in the Scriptures. See the notes on Isa 17:12-13. So in Homer: "The monarch spoke, and straight a murmur rose, Loud as the surges when the tempest blows; That dash'd on broken rocks tumultuous roar, And foam and thunder on the stony shore." And as the voice of mighty thunderings - The loud, deep, heavy voice of thunder. The distant shouts of a multitude may properly be represented by the sound of heavy thunder. Saying, Alleluia - See the notes on Rev 19:1. This is the fourth time in which this is uttered as expressive of the joy of the heavenly hosts in view of the overthrow of the enemies of the church. The occasion will be worthy of this emphatic expression of joy. For the Lord God omnipotent reigneth - Yahweh - God Almighty - the true God. 
The meaning is, that as the last enemy of the church is destroyed, he now truly reigns. This is the result of his power, and therefore it is proper that he should be praised as the "omnipotent" or "Almighty God" - for he has shown that he can overcome all his enemies, and bring the world to his feet. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:6: and as the voice of many: Rev 1:15, Rev 14:2; Eze 1:24, Eze 43:2 and as the voice of mighty: Rev 4:5, Rev 6:1, Rev 8:5, Rev 14:2, Rev 19:6; Job 40:9; Psa 29:3-9, Psa 77:18 for: Rev 11:15-18, Rev 12:10, Rev 21:22; Psa 47:2, Psa 47:7, Psa 93:1, Psa 97:1, Psa 97:12, Psa 99:1; Isa 52:7; Mat 6:13 ###### Geneva 1599 19:6 And I heard (6) as it were the voice of a great multitude, and as the voice of many waters, and as the voice of mighty thunderings, saying, Alleluia: for the Lord God omnipotent reigneth. (6) Outside the temple in heaven. ###### John Gill 19:6 And I heard, as it were, the voice of a great multitude,.... Even of all the servants of the Lord, and them that fear him, small and great; a vast multitude of converted Jews and Gentiles, in the several parts of the world, who in answer to the voice out of the throne, which came with great power and energy, lift up their voices in praise to God, both for their own conversion, and for the downfall of Babylon: and as the voice of many waters: falling down in a descent, or in rough and rocky places, which make a great noise, and is heard afar off; and such must be the united voice of so great a multitude of converts as will be gathered together everywhere at this time: the same metaphor is used of the voice of Christ in Rev_ 1:15 and as the voice of mighty thunderings; violent claps of it, which are sometimes so loud that they rend the very heavens, and strike the inhabitants of the earth with the utmost consternation: these are the same voices which will be heard in the church when the seventh angel sounds his trumpet, Rev_ 11:15 saying, Alleluia; or praise ye the Lord; they will call upon one another to celebrate the praises of God, on account of the above things, in the same manner, and using the same word the people in heaven, and the four and twenty elders and four living creatures, do; and this is the fourth time the word is used in this context, and confirms the observation that has been made, that this vision refers to the conversion of the Jews, which will quickly follow the destruction of Rome: and the Jews themselves have a notion, that when Rome is destroyed the Messiah will come; and so he will in his spiritual reign. 
They say (o), "our redemption will be immediately upon the destruction of Rome.'' And again (p), "the root of our redemption depends upon the destruction of Rome.'' The reason for their saying "hallelujah" follows, for the Lord God omnipotent reigneth; by whom is meant the Lord Jesus Christ, who is Lord of all, and God over all, blessed for ever, and is the Almighty; and though he was set up as King over the holy hill of Zion, and has reigned over the church in every age, and came as King into this world, though his kingdom was not of it, and at his resurrection was declared Lord and Christ, and his kingdom was then more manifest, and he has ever since displayed his kingly power in defending his church, and defeating the enemies of it; yet now will he reign more visibly and gloriously, his kingdom will be enlarged from one end of the earth to the other, and he will be King over all the earth, which will occasion great joy to Jews and Gentiles; see Ps 47:1 and See Gill on Rev_ 11:17. (o) Tzeror Hammor, fol. 148. 1. (p) Tzeror Hammor, fol. 163. 4. ###### John Wesley 19:6 And I heard the voice of a great multitude. So all his servants did praise him. The Almighty reigneth - More eminently and gloriously than ever before. ###### Robert Jamieson, A. R. Fausset and David Brown 19:6 many waters--Contrast the "many waters" on which the whore sitteth (Rev_ 17:1). This verse is the hearty response to the stirring call, "Alleluia! Praise our God" (Rev_ 19:4-5). the Lord God omnipotent--Greek, "the Omnipotent." reigneth--literally, "reigned": hence reigneth once for all. His reign is a fact already established. Babylon, the harlot, was one great hindrance to His reign being recognized. Her overthrow now clears the way for His advent to reign; therefore, not merely Rome, but the whole of Christendom in so far as it is carnal and compromised Christ for the world, is comprehended in the term "harlot." The beast hardly arises when he at once "goeth into perdition": so that Christ is prophetically considered as already reigning, so soon does His advent follow the judgment on the harlot. եւ")ցնծամք")եւ")տամք")փառս")նմա"), զի")եկն")հարսանիք")Գառինն"), եւ")կինն")` ( 19:7: Խնդամք՝ եւ տամք փա՛ռս նմա, զի ե՛կն հարսանիք Գառինն։ Եւ կինն հարսն նորա պատրաստեա՛ց զինքն( "Ոմանք. Խնդամք եւ ցնծամք, եւ տամք։ Ոսկան. Եւ կին նորա պատրաս՛՛։"). Ոմանք. Խնդամք եւ ցնծամք, եւ տամք։ Ոսկան. Եւ կին նորա պատրաս՛՛։ 7 Ուրախանանք եւ ցնծանք ու փառք տանք նրան, քանի որ հասաւ Գառան հարսանիքի ժամը, եւ կինը՝ նրա հարսը, պատրաստուեց. 7 Խնդա՛նք եւ ուրախանա՛նք ու փա՛ռք տանք անոր, քանզի Գառնուկին հարսանիքը հասաւ ու անոր կինը ինքզինք պատրաստեց։ zohrab-1805▾ καὶ (and) ἀγαλλιῶμεν , ( we-might-excess-jump-unto ,"καὶ (and) δώσομεν (we-shall-give) τὴν (to-the-one) δόξαν (to-a-recognition) αὐτῷ, (unto-it,"ὅτι (to-which-a-one) ἦλθεν (it-had-came,"ὁ (the-one) γάμος (a-marriage) τοῦ (of-the-one) ἀρνίου, (of-a-Lamblet,"καὶ (and) ἡ (the-one) γυνὴ (a-woman) αὐτοῦ (of-it) ἡτοίμασεν (it-readied-to) ἑαυτήν, (to-self," 19:7. gaudeamus et exultemus et demus gloriam ei quia venerunt nuptiae agni et uxor eius praeparavit seLet us be glad and rejoice and give glory to him. For the marriage of the Lamb is come: and his wife hath prepared herself. 7. Let us rejoice and be exceeding glad, and let us give the glory unto him: for the marriage of the Lamb is come, and his wife hath made herself ready. 19:7. Let us be glad and rejoice, and give honour to him: for the marriage of the Lamb is come, and his wife hath made herself ready. 19:7. Let us be glad and exult. 
And let us give glory to him. For the marriage feast of the Lamb has arrived, and his wife has prepared herself.” Let us be glad and rejoice, and give honour to him: for the marriage of the Lamb is come, and his wife hath made herself ready: 7: Возрадуемся и возвеселимся и воздадим Ему славу; ибо наступил брак Агнца, и жена Его приготовила себя. 19:7 χαίρωμεν καὶ ἀγαλλιῶμεν, καὶ δώσωμεν τὴν δόξαν αὐτῶ, ὅτι ἦλθεν ὁ γάμος τοῦ ἀρνίου, καὶ ἡ γυνὴ αὐτοῦ ἡτοίμασεν ἑαυτήν· 19:7. χαίρωμεν (We-might-joy) καὶ (and) ἀγαλλιῶμεν, (we-might-excess-jump-unto,"καὶ (and) δώσομεν (we-shall-give) τὴν (to-the-one) δόξαν (to-a-recognition) αὐτῷ, (unto-it,"ὅτι (to-which-a-one) ἦλθεν (it-had-came,"ὁ (the-one) γάμος (a-marriage) τοῦ (of-the-one) ἀρνίου, (of-a-Lamblet,"καὶ (and) ἡ (the-one) γυνὴ (a-woman) αὐτοῦ (of-it) ἡτοίμασεν (it-readied-to) ἑαυτήν, (to-self," 19:7. gaudeamus et exultemus et demus gloriam ei quia venerunt nuptiae agni et uxor eius praeparavit se Let us be glad and rejoice and give glory to him. For the marriage of the Lamb is come: and his wife hath prepared herself. 7. Letusrejoiceandbe exceeding glad, andletusgivethegloryuntohim: forthemarriageofthe Lamb iscome, andhiswifehathmadeherselfready. 19:7. Let us be glad and rejoice, and give honour to him: for the marriage of the Lamb is come, and his wife hath made herself ready. 19:7. Let us be glad and exult. And let us give glory to him. For the marriage feast of the Lamb has arrived, and his wife has prepared herself.” ru▾ made herself ready. (7) Namely, to that holy marriage, both herself in person in this verse, and also provided by her spouse with marriage gifts princely and divine, is adorned and prepared in the next verse. ###### John Gill 19:7 Let us be glad, and rejoice, and give honour to him,.... The saints particularly; the converted Jews will call upon one another to express their gladness at the glorious display of Christ's kingly power and authority, and at the destruction of his enemies, and the happy and comfortable state of his church and people; and to rejoice in him as the Lord their righteousness and strength, and to give him the honour and glory of salvation, and to return him thanks for all the benefits they shall have received from him, particularly on account of what follows: for the marriage of the Lamb is come; that is, of Christ, the Son of God, with the Jewish church more especially; there was a secret betrothing of all the elect to Christ before the world began; and there is an open espousal of every individual of them at conversion; but the public and general solemnization of the nuptials will not be until the new Jerusalem church state takes place in the personal reign of Christ, hereafter mentioned, Rev_ 21:1 but here, and as previous to that, there will be a very general and open marriage of Christ with the people of the Jews, who have long rejected and forsaken him; for if the conversion of a single person may be called a marriage with Christ, much more the conversion of such members; and which is often prophesied of under this metaphor of a marriage, as in Is 62:4. 
And now the time will be come for the accomplishment of it, the evidence of which follows: and his wife hath made herself ready, or "dressed herself"; by decking herself with jewels, and putting on her wedding garment provided for her, and given to her by her husband, the Lamb, as appears from the next verse: this preparation will lie partly in the number of converts that will be brought into the Jewish church, which she will receive and clothe herself with, as with the ornament of a bride, Is 49:18 and partly by the exercise of the several graces of the Spirit upon Christ, comparable to the jewels of a bride, with which she will be adorned for her husband; and also by putting on the robe of his righteousness, hereafter mentioned, which the old Jewish synagogue rejected, and therefore was cast off, Rom 10:3. The Arabic version reads, "the marriage of the Lamb is now come with his spouse, prepared for him"; and the Ethiopic version, "the marriage of his Lamb is come, and the wife is prepared"; and that her preparation is not by her own merits and works of righteousness, but by the grace of her husband, is clear from the following verse. Mr. Daubuz, by "the marriage of the Lamb", understands the first resurrection, and the state of the church at that time; and by "the fine linen", the dress of the church, next mentioned, the incorruptible body of the saints compared to a garment, 1Cor 15:53 and by those who are afterwards said to be "called to the marriage", the converted nations in a mortal state: but all the saints will share in the first resurrection; besides, as yet the beast and false prophet are not destroyed, which must be before the first resurrection, as the following vision shows. ###### John Wesley 19:7 The marriage of the Lamb is come - Is near at hand, to be solemnized speedily. What this implies, none of "the spirits of just men," even in paradise, yet know. O what things are those which are yet behind! And what purity of heart should there be, to meditate upon them! And his wife hath made herself ready - Even upon earth; but in a far higher sense, in that world. After a time allowed for this, the new Jerusalem comes down, both made ready and adorned, Rev_ 21:2. ###### Robert Jamieson, A. R. Fausset and David Brown 19:7 glad . . . rejoice--Greek, "rejoice . . . exult." give--so B and ANDREAS. But A reads, "we will give." glory--Greek, "the glory." the marriage of the Lamb is come--The full and final consummation is at Rev_ 21:2-9, &c. Previously there must be the overthrow of the beast, &c., at the Lord's coming, the binding of Satan, the millennial reign, the loosing of Satan and his last overthrow, and the general judgment. The elect-Church, the heavenly Bride, soon after the destruction of the harlot, is transfigured at the Lord's coming, and joins with Him in His triumph over the beast. On the emblem of the heavenly Bridegroom and Bride, compare Mt 22:2; Mt 25:6, Mt 25:10; 2Cor 11:2. Perfect union with Him personally, and participation in His holiness; joy, glory, and kingdom, are included in this symbol of "marriage"; compare Song of Solomon everywhere. Besides the heavenly Bride, the transfigured, translated, and risen Church, reigning over the earth with Christ, there is also the earthly bride, Israel, in the flesh, never yet divorced, though for a time separated, from her divine husband, who shall then be reunited to the Lord, and be the mother Church of the millennial earth, Christianized through her. 
Note, we ought, as Scripture does, restrict the language drawn from marriage-love to the Bride, the Church as a whole; not use it as individuals in our relation to Christ, which Rome does in the case of her nuns. Individually, believers are effectually-called guests; collectively, they constitute the bride. The harlot divides her affections among many lovers: the bride gives hers exclusively to Christ. տուաւ")նմա")զգենուլ")բեհեզս")սպիտակս")սուրբս")( որէբեհեզնարդարութիւն")սրբոցն"): 19:8: եւ տուաւ նմա զգենուլ բեհե՛զս սպիտակս սուրբս եւ լուսափա՛յլս. որ է բեհեզն արդարութիւն սրբոցն: 8 հարսին տրուեց հագնելու սպիտակ, մաքուր եւ լուսափայլ բեհեզ»: Եւ այդ բեհեզը խորհրդանշում է սրբերի արդարութիւնը: 8 Եւ իրեն հագնելու համար տրուեցաւ մաքուր ու լուսափայլ բեհեզ, քանզի այն բեհեզը սուրբերուն արդարութիւնն է»։ zohrab-1805▾ ἐδόθη (it-was-given) αὐτῇ (unto-it) ἵνα (so) περιβάληται ( it-might-have-had-casted-about ) βύσσινον (to-linened-belonged-to) λαμπρὸν (to-en-lamped) καθαρόν, (to-cleansed) τὸ (the-one) γὰρ (therefore) βύσσινον (linened-belonged-to,"τὰ (the-ones) δικαιώματα (en-course-belongings-to) τῶν (of-the-ones) ἁγίων ( of-hallow-belonged ) ἐστίν. (it-be) 19:8. et datum est illi ut cooperiat se byssinum splendens candidum byssinum enim iustificationes sunt sanctorumAnd it is granted to her that she should clothe herself with fine linen, glittering and white. For the fine linen are the justifications of saints. 8. And it was given unto her that she should array herself in fine linen, bright pure: for the fine linen is the righteous acts of the saints. 19:8. And to her was granted that she should be arrayed in fine linen, clean and white: for the fine linen is the righteousness of saints. 19:8. And it was granted to her that she should cover herself with fine linen, splendid and white. For the fine linen is the justifications of the Saints. And to her was granted that she should be arrayed in fine linen, clean and white: for the fine linen is the righteousness of saints: 8: И дано было ей облечься в виссон чистый и светлый; виссон же есть праведность святых. 19:8 καὶ ἐδόθη αὐτῇ ἵνα περιβάληται βύσσινον λαμπρὸν καθαρόν, τὸ γὰρ βύσσινον τὰ δικαιώματα τῶν ἁγίων ἐστίν. 19:8. καὶ (and) ἐδόθη (it-was-given) αὐτῇ (unto-it) ἵνα (so) περιβάληται (it-might-have-had-casted-about) βύσσινον (to-linened-belonged-to) λαμπρὸν (to-en-lamped) καθαρόν, (to-cleansed) τὸ (the-one) γὰρ (therefore) βύσσινον (linened-belonged-to,"τὰ (the-ones) δικαιώματα (en-course-belongings-to) τῶν (of-the-ones) ἁγίων (of-hallow-belonged) ἐστίν. (it-be) 19:8. et datum est illi ut cooperiat se byssinum splendens candidum byssinum enim iustificationes sunt sanctorum And it is granted to her that she should clothe herself with fine linen, glittering and white. For the fine linen are the justifications of saints. 8. Anditwasgivenuntoherthatsheshould array herself infinelinen, brightpure: forthefinelinenistherighteousactsofthe saints. 19:8. And to her was granted that she should be arrayed in fine linen, clean and white: for the fine linen is the righteousness of saints. 19:8. And it was granted to her that she should cover herself with fine linen, splendid and white. For the fine linen is the justifications of the Saints. ru▾ fine linen, clean and white: for the fine (9) linen is the (b) righteousness of saints. (8) As an ensign of kingly and priestly dignity, which Christ bestows on us in (Rev_ 1:6). (9) This is a gift given by the husband for marriage sake, and a most choice ornament which Christ gave to us, as to his spouse. 
(b) Good works which are lively testimonies of faith. ###### John Gill 19:8 And to her was granted that she should be arrayed in fine linen,.... Or "Byssine": the "Byssus", of which fine linen is made, is said to grow on a tree, in height like to a poplar, and its leaves like a willow, and to be brought out of Judea into Egypt, which the Egyptians used in most of their holy things (q). A dress neat and modest, and not like the attire of the whore of Rome, Rev_ 17:4 and this is said to be clean and white, and is interpreted in the next clause: for the fine linen is the righteousness of saints, or "righteousnesses"; not good works, or their own righteousness; for though these are evidences of faith, by which the saints are justified, and are what God has prepared for them, that they should walk in them; yet these are not comparable to fine linen, clean and white, but are like filthy rags, and cannot justify in the sight of God; but the righteousness of Christ is meant, and justification by that; for that is the only justifying righteousness of the saints: and though it is but one, yet it may be called "righteousnesses", or "justifications", in the plural number; partly because of the several seasons in which the act of justification passes, first in God's mind from eternity, next on Christ as the surety, when he rose from the dead, and on all the elect in him, and then in the consciences of the saints when they believe, and the sentence of it will be notified and declared to men and angels at the last judgment; and partly because of the many persons that are justified by it, as also because of the excellency of it; so the Jews use the word in the plural number: the Targumist on Zech 3:4 paraphrases the text, "I will clothe thee" "with righteousnesses" (r); upon which words Jarchi has this note, "change of beautiful garments is all one as if it had been said "righteousnesses": and because sin is like to filthy garments, righteousness is like to garments beautiful and white.'' Christ's righteousness may be compared to fine linen, clean and white, because of its spotless purity; those that are arrayed with it being unblamable and irreprovable, and without spot and blemish, and without fault before the throne; with this the Jewish church will be clothed; all the Lord's people will be righteous, they will have on the best robe, and wedding garment, which was despised by the Jews in Christ's time, who refused to come to the marriage feast; and their being arrayed with it will be owing to the grace of Christ, who grants it; and so Christ's righteousness is called the gift of righteousness, the free gift, and gift by grace, and abundance of grace; and faith, which receives it, and puts it on, is the gift of God, Rom 5:15. Not only the garment is a gift of grace, but the putting of it on is a grant from Christ, and what he himself does, Is 61:10. (q) Philostrat. Vita Apollon. l. 2. c. 9. Vid. Apul. Apolog. p. 225. Pausan. l. 5. sive Eliac. p. 294. (r) See Isa. lxi. 10. & Targum in Hos. x. 12. ###### John Wesley 19:8 And it is given to her - By God. The bride is all holy men, the whole invisible church. To be arrayed in fine linen, white and clean - This is an emblem of the righteousness of the saints - Both of their justification and sanctification. ###### Robert Jamieson, A. R. 
Fausset and David Brown 19:8 granted--Though in one sense she "made herself ready," having by the Spirit's work in her put on "the wedding garment," yet in the fullest sense it is not she, but her Lord, who makes her ready by "granting to her that she be arrayed in fine linen." It is He who, by giving Himself for her, presents her to Himself a glorious Church, not having spot, but holy and without blemish. It is He also who sanctifies her, naturally vile and without beauty, with the washing of water by the word, and puts His own comeliness on her, which thus becomes hers. clean and white--so ANDREAS. But A and B transpose. Translate, "bright and pure"; at once brilliantly splendid and spotless as in the bride herself. righteousness--Greek, "righteousnesses"; distributively used. Each saint must have this righteousness: not merely be justified, as if the righteousness belonged to the Church in the aggregate; the saints together have righteousnesses; namely, He is accounted as "the Lord our righteousness" to each saint on his believing, their robes being made white in the blood of the Lamb. The righteousness of the saint is not, as ALFORD erroneously states, inherent, but is imputed: if it were otherwise, Christ would be merely enabling the sinner to justify himself. Rom 5:18 is decisive on this. Compare Article XI, Church of England. The justification already given to the saints in title and unseen possession, is now GIVEN them in manifestation: they openly walk with Christ in white. To this, rather than to their primary justification on earth, the reference is here. Their justification before the apostate world, which had persecuted them, contrasts with the judgment and condemnation of the harlot. "Now that the harlot has fallen, the woman triumphs" [AUBERLEN]. Contrast with the pure fine linen (indicating the simplicity and purity) of the bride, the tawdry ornamentation of the harlot. Babylon, the apostate Church, is the antithesis to new Jerusalem, the transfigured Church of God. The woman (Rev_ 12:1-6), the harlot (Rev_ 17:1-7), the bride (Rev_ 19:1-10), are the three leading aspects of the Church. ( num. Eng: one (8537)")յերիցանցն")`` ասէր")ցիս: pron.acc.sg. Eng: I (5691)"). Գրեա")( Երանելիք")են")ամենեքեան: pron.nom.pl. Eng: all (3863)")որ")կոչեցեալ")են")յընթրիս")հարսանեաց")Գառինն")( 19:9: Եւ մի յերիցանցն ասէր ցիս. Գրեա՛ զայդ. Երանելի՛ք են ամենեքեան՝ որ կոչեցեա՛լ են յընթրիս հարսանեաց Գառինն( "Ոմանք. Ասէ ցիս. Գրեա՛։"): Ոմանք. Ասէ ցիս. Գրեա՛։ 9 Եւ երէցներից մէկն ինձ ասաց. «Գրի՛ր այս բանը. երանելի՜ են բոլոր նրանք, որ կանչուած են Գառան հարսանիքի ընթրիքին»: 9 Եւ ինծի ըսաւ. «Գրէ՛. Երանելի են անոնք, որ Գառնուկին հարսանիքին ընթրիքին կանչուած են» ու ինծի ըսաւ. «Ասոնք են Աստուծոյ ճշմարիտ խօսքերը»։ zohrab-1805▾ λέγει (it-fortheth) μοι (unto-me,"Γράψον (Thou-should-have-scribed," Μακάριοι ( Bless-belonged ) οἱ (the-ones) εἰς (into) τὸ (to-the-one) δεῖπνον (to-mealed) τοῦ (of-the-one) γάμου (of-a-marriage) τοῦ (of-the-one) ἀρνίου (of-a-Lamblet) κεκλημένοι . ( having-had-come-to-be-called-unto ) καὶ (And) λέγει (it-fortheth) μοι (unto-me,"Οὗτοι (The-ones-these) οἱ (the-ones) λόγοι (forthees) ἀληθινοὶ ( un-secluded-belonged-to ) τοῦ (of-the-one) θεοῦ (of-a-Deity) εἰσίν. (they-be) 19:9. et dicit mihi scribe beati qui ad cenam nuptiarum agni vocati sunt et dicit mihi haec verba vera Dei suntAnd he said to me: Write: Blessed are they that are called to the marriage supper of the Lamb. And he saith to me: These words of God are true. 9. 
And he saith unto me, Write, Blessed are they which are bidden to the marriage supper of the Lamb. And he saith unto me, These are true words of God. 19:9. And he saith unto me, Write, Blessed [are] they which are called unto the marriage supper of the Lamb. And he saith unto me, These are the true sayings of God. 19:9. And he said to me: “Write: Blessed are those who have been called to the wedding feast of the Lamb.” And he said to me, “These words of God are true.” And he saith unto me, Write, Blessed [are] they which are called unto the marriage supper of the Lamb. And he saith unto me, These are the true sayings of God: 9: И сказал мне [Ангел]: напиши: блаженны званые на брачную вечерю Агнца. И сказал мне: сии суть истинные слова Божии. 19:9 καὶ λέγει μοι, γράψον· μακάριοι οἱ εἰς τὸ δεῖπνον τοῦ γάμου τοῦ ἀρνίου κεκλημένοι. καὶ λέγει μοι, οὖτοι οἱ λόγοι ἀληθινοὶ τοῦ θεοῦ εἰσιν. And he saith unto me, Write, Blessed [are] they which are called unto the marriage supper of the Lamb. And he saith unto me, These are the true sayings of God. (10) Namely the angel, as it appears by the next verse.
###### John Gill 19:9 And he saith unto me, write,.... What follows, because of the importance of it, and to show the certainty of it, and that it may be regarded and remembered: the person speaking is either the voice from the throne, Rev_ 19:5 or the angel that attended John all along, and showed him this revelation, Rev_ 1:1 or the angel that proposed to show him the judgment of the great whore, Rev_ 17:1. Blessed are they which are called to the marriage supper of the Lamb; by which is meant the Gospel ministry and ordinances, and communion in them, to which the Jews will be called to partake of in the latter day; these at the first of the Gospel dispensation are called a "dinner", to which, the Jews were invited, but refused to come, and now a "supper", because made in the evening of that dispensation; to which being called with an effectual calling, they will come and partake of it; on which account they are pronounced blessed, being the bride, the Lamb's wife, having on his righteousness, partaking of his benefits, and being called unto, and made meet for eternal glory and happiness; or else these may design converted Gentiles, who will be invited to join with them, and will.
And he saith unto me, these are the true sayings of God; the Syriac version reads, "these my true words are of God"; being true, it is plain they are of God, and being of God, it is certain they are true; for he is the God of truth, and cannot lie, and therefore may be depended upon. ###### John Wesley 19:9 And he - The angel, saith to me, Write - St. John seems to have been so amazed at these glorious sights, that he needeth to be reminded of this. Happy are they who are invited to the marriage supper of the Lamb - Called to glory. And he saith - After a little pause. ###### Robert Jamieson, A. R. Fausset and David Brown 19:9 He--God by His angel saith unto me. called--effectually, not merely externally. The "unto," or into," seems to express this: not merely invited to (Greek, "epi"), but called INTO, so as to be partakers of (Greek, "eis"); compare 1Cor 1:9. marriage supper--Greek, "the supper of the marriage." Typified by the Lord's Supper. true--Greek, "genuine"; veritable sayings which shall surely be fulfilled, namely, all the previous revelations. ես: pron.nom.sg. Eng: I (5691)")( ( նմա"). եւ")ասէ")ցիս: pron.acc.sg. Eng: I (5691)"). ( մի՛: part.neg. Eng: not (8538)")անկանիր")առաջի")իմ"), քանզի")եւ")ես: pron.nom.sg. Eng: I (5691)")`` ծառայակից")քո (5597)")եմ")եւ")եղբարց")քոց (11165)")որք")ունին")զվկայութիւնն")Յիսուսի")( Տեառն")`` Աստուծոյ")( ( զի")վկայութիւն")Յիսուսի")է")( որիմարգարէսնէր: 19:10: Եւ ես անկեալ առաջի ոտից նորա, երկի՛ր պագի նմա. եւ ասէ ցիս. Անսա՛՝ մի՛ անկանիր առաջի իմ, քանզի եւ ես ծառայակի՛ց քո եմ, եւ եղբարց քոց՝ որք ունին զվկայութիւնն Յիսուսի Քրիստոսի. Տեառն Աստուծոյ միայն երկրպագեա՛. քանզի հաստատութեամբ Յիսուսի՛ է տեսիլդ, եւ հոգի մարգարէութեանդ. զի վկայութիւն Յիսուսի՛ է՝ Հոգւովն Սրբով՝ որ ՚ի մարգարէսն էր: Ոսկան. Ոտից նորա, զի երկրպագից նմա. եւ ասէ ցիս։ Ոմանք. Յիսուսի է, եւ Հոգւովն Սրբով։ 10 Եւ ես ընկնելով նրա ոտքերի առաջ՝ երկրպագեցի նրան. եւ նա ինձ ասաց. «Լսի՛ր, մի՛ ընկիր իմ առաջ, քանզի ես էլ ծառայակիցն եմ քո եւ քո եղբայրների, որոնք վկայում են Յիսուս Քրիստոսին: Տէր Աստծո՛ւն միայն երկրպագիր. քանզի Յիսուսի հաստատումով է այդ տեսիլքը եւ այդ մարգարէութեան հոգին, քանի որ Յիսուսի մասին վկայութիւնը, որ մարգարէների մէջ էր, Սուրբ Հոգով է»: 10 Ես ալ անոր ոտքերուն առջեւ ինկայ, որպէս զի անոր երկրպագութիւն ընեմ ու ըսաւ ինծի. «Զգուշացի՛ր, մի՛ ըներ. վասն զի ես ալ քու ծառայակիցդ եմ ու քու եղբայրներուդ՝ որոնք Յիսուսին վկաներն էին։ Աստուծոյ երկրպագութիւն ըրէ. վասն զի Յիսուսին վկայութիւնը մարգարէութեան հոգին է»։ zohrab-1805▾eastern-1994▾western am▾ 19:1010: Я пал к ногам его, чтобы поклониться ему; но он сказал мне: смотри, не делай сего; я сослужитель тебе и братьям твоим, имеющим свидетельство Иисусово; Богу поклонись; ибо свидетельство Иисусово есть дух пророчества. 19:10 καὶ ἔπεσα ἔμπροσθεν τῶν ποδῶν αὐτοῦ προσκυνῆσαι αὐτῶ. καὶ λέγει μοι, ὅρα μή· σύνδουλός σού εἰμι καὶ τῶν ἀδελφῶν σου τῶν ἐχόντων τὴν μαρτυρίαν ἰησοῦ· τῶ θεῶ προσκύνησον. ἡ γὰρ μαρτυρία ἰησοῦ ἐστιν τὸ πνεῦμα τῆς προφητείας. 19:10. καὶ (And) ἔπεσα (I-had-fallen) ἔμπροσθεν (in-toward-from) τῶν (of-the-ones) ποδῶν (of-feet) αὐτοῦ (of-it) προσκυνῆσαι (to-have-kissed-toward-unto) αὐτῷ. 
(unto-it,"καὶ (and) λέγει (it-fortheth) μοι (unto-me,"Ὅρα (Thou-should-discern-unto) μή: (lest) σύνδουλός (a-bondee-together) σού (of-THEE) εἰμι (I-be) καὶ (and) τῶν (of-the-ones) ἀδελφῶν ( of-brethrened ) σου (of-thee) τῶν (of-the-ones) ἐχόντων ( of-holding ) τὴν (to-the-one) μαρτυρίαν (to-a-witnessing-unto) Ἰησοῦ: (of-an-Iesous) τῷ (unto-the-one) θεῷ (unto-a-Deity) προσκύνησον: (thou-should-have-kissed-toward-unto) ἡ (the-one) γὰρ (therefore) μαρτυρία (a-witnessing-unto) Ἰησοῦ (of-an-Iesous) ἐστὶν (it-be) τὸ (the-one) πνεῦμα (a-currenting-to) τῆς (of-the-one) προφητείας. (of-a-declaring-before-of) 19:10. et cecidi ante pedes eius ut adorarem eum et dicit mihi vide ne feceris conservus tuus sum et fratrum tuorum habentium testimonium Iesu Deum adora testimonium enim Iesu est spiritus prophetiaeAnd I fell down before his feet, to adore him. And he saith to me: See thou do it not. I am thy fellow servant and of thy brethren who have the testimony of Jesus. Adore God. For the testimony of Jesus is the spirit of prophecy. 10. And I fell down before his feet to worship him. And he saith unto me, See thou do it not: I am a fellow-servant with thee and with thy brethren that hold the testimony of Jesus: worship God: for the testimony of Jesus is the spirit of prophecy. 19:10. And I fell at his feet to worship him. And he said unto me, See [thou do it] not: I am thy fellowservant, and of thy brethren that have the testimony of Jesus: worship God: for the testimony of Jesus is the spirit of prophecy. 19:10. And I fell down before his feet, to adore him. And he said to me: “Be careful not to do so. I am your fellow servant, and I am among your brothers, who hold to the testimony of Jesus. Adore God. For the testimony of Jesus is a spirit of prophecy.” And I fell at his feet to worship him. And he said unto me, See [thou do it] not: I am thy fellowservant, and of thy brethren that have the testimony of Jesus: worship God: for the testimony of Jesus is the spirit of prophecy: 10: Я пал к ногам его, чтобы поклониться ему; но он сказал мне: смотри, не делай сего; я сослужитель тебе и братьям твоим, имеющим свидетельство Иисусово; Богу поклонись; ибо свидетельство Иисусово есть дух пророчества. 19:10 καὶ ἔπεσα ἔμπροσθεν τῶν ποδῶν αὐτοῦ προσκυνῆσαι αὐτῶ. καὶ λέγει μοι, ὅρα μή· σύνδουλός σού εἰμι καὶ τῶν ἀδελφῶν σου τῶν ἐχόντων τὴν μαρτυρίαν ἰησοῦ· τῶ θεῶ προσκύνησον. ἡ γὰρ μαρτυρία ἰησοῦ ἐστιν τὸ πνεῦμα τῆς προφητείας. 19:10. καὶ (And) ἔπεσα (I-had-fallen) ἔμπροσθεν (in-toward-from) τῶν (of-the-ones) ποδῶν (of-feet) αὐτοῦ (of-it) προσκυνῆσαι (to-have-kissed-toward-unto) αὐτῷ. (unto-it,"καὶ (and) λέγει (it-fortheth) μοι (unto-me,"Ὅρα (Thou-should-discern-unto) μή: (lest) σύνδουλός (a-bondee-together) σού (of-THEE) εἰμι (I-be) καὶ (and) τῶν (of-the-ones) ἀδελφῶν (of-brethrened) σου (of-thee) τῶν (of-the-ones) ἐχόντων (of-holding) τὴν (to-the-one) μαρτυρίαν (to-a-witnessing-unto) Ἰησοῦ: (of-an-Iesous) τῷ (unto-the-one) θεῷ (unto-a-Deity) προσκύνησον: (thou-should-have-kissed-toward-unto) ἡ (the-one) γὰρ (therefore) μαρτυρία (a-witnessing-unto) Ἰησοῦ (of-an-Iesous) ἐστὶν (it-be) τὸ (the-one) πνεῦμα (a-currenting-to) τῆς (of-the-one) προφητείας. (of-a-declaring-before-of) 19:10. et cecidi ante pedes eius ut adorarem eum et dicit mihi vide ne feceris conservus tuus sum et fratrum tuorum habentium testimonium Iesu Deum adora testimonium enim Iesu est spiritus prophetiae And I fell down before his feet, to adore him. And he saith to me: See thou do it not. 
I am thy fellow servant and of thy brethren who have the testimony of Jesus. Adore God. For the testimony of Jesus is the spirit of prophecy.
###### A. P. Lopukhin: Explanatory Bible, or Commentary on All the Books of Holy Scripture of the Old and New Testaments - 1903-1914 10: John fell at the feet of the angel. John's prostration was a natural, involuntary consequence of the extraordinary impression made by the appearance of the angel and by his words. What he said was so striking that John could not restrain himself and fell at the feet of the speaking angel, just as the prophet Daniel fell at the feet of an angel. The angel corrects this involuntary human error of the seer and explains that, however majestic any manifestation on earth may be, people must not on its account forget God, who is the first cause of it all and the only one worthy of worship and service (Deu 6:13). The testimony of Jesus is Jesus Christ Himself, all that He taught and all that He accomplished for the salvation of the human race. This testimony is "the spirit of prophecy," an expression used in the sense of the ground of prophecy, of that which animates prophecy and constitutes its essence: in the testimony of Jesus Christ, that is, in His teaching and in the revelation He brought, it is openly declared and explained that God alone is worthy of worship and veneration. The insertion of verses 9 and 10 interrupted the course of the description of the approaching marriage supper of the Lamb; from verse 11 John returns to that description. Now he speaks of those who not only will not be counted worthy of a share in the marriage supper, but will suffer severe punishment as retribution. These are the events of the last time, the time of the dreadful judgment and of the final recompense.
###### Adam Clarke: Commentary on the Bible - 1831 19:10: I fell at his feet to worship him - Great as this angel was, St. John could not mistake him either for Jesus Christ, or for God the Father; nor was his prostration intended as an act of religious worship. It was merely an act of that sort of reverence which any Asiatic would pay to a superior. His mistake was, the considering that he was under obligation to the angel for the information which he had now received. This mistake the angel very properly corrects, showing him that it was from God alone this intelligence came, and that to him alone the praise was due. I am thy fellow servant - No higher in dignity than thyself; employed by the same God, on the same errand, and with the same testimony; and therefore not entitled to thy prostration: worship God - prostrate thyself to him, and to him give thanks.
The testimony of Jesus is the spirit of prophecy - As this is a reason given by the angel why he should not worship him, the meaning must be this: I, who have received this spirit of prophecy, am not superior to thee who hast received the testimony of Christ, to preach him among the Gentiles; for the commission containing such a testimony is equal to the gift of the spirit of prophecy. Or, the spirit of prophecy is a general testimony concerning Jesus, for he is the scope and design of the whole Scripture; to him gave all the prophets witness. Take Jesus, his grace, Spirit, and religion out of the Bible, and it has neither scope, design, object, nor end. ###### Albert Barnes: Notes on the Bible - 1834 19:10: And I fell at his feet to worship him - At the feet of the angel. See the notes on Rev 19:9. This is a common posture of adoration in the East. See Rosenmuller's "Morgenland, in loco." notes on Co1 14:25. John was entirely overcome with the majesty of the heavenly messenger, and with the amazing truths that he had disclosed to him, and in the overflowing of his feelings he fell upon the earth in the posture of adoration. Or it may be that he mistook the rank of him who addressed him, and supposed that he was the Messiah whom he had been accustomed to worship, and who had first Rev_. 1 appeared to him. If so, his error was soon corrected. He was told by the angel himself who made these communications that he had no claims to such homage, and that the praise which he offered him should be rendered to God alone. It should be observed that there is not the slightest intimation that this was the Messiah himself, and consequently this does not contain any evidence that it would be improper to worship him. The only fair conclusion from the passage is, that it is wrong to offer religious homage to an angel. And he said unto me, See thou do it not - That is, in rendering the homage which you propose to me, you would in fact render it to a creature. This may be regarded as an admonition to be careful in our worship; not to allow our feelings to overcome us; and not to render that homage to a creature which is due to God alone. Of course, this would prohibit the worship of the Virgin Mary, and of any of the saints, and all that homage rendered to a created being which is due to God only. Nothing is more carefully guarded in the Bible than the purity and simplicity of worship; nothing is more sternly rebuked than idolatry; nothing is more contrary to the divine law than rendering in any way that homage to a creature which belongs of right to the Creator. It was necessary to guard even John, the beloved disciple, on that subject; how much more needful, therefore, is it to guard the church at large from the dangers to which it is liable. I am thy fellow-servant - Evidently this was an angel, and yet he here speaks of himself as a "fellow-servant" of John. That is, he was engaged in the service of the same God; he was endeavoring to advance the same cause, and to honor the same Redeemer. The sentiment is, that in promoting religion in the world, we are associated with angels. It is no condescension in them to be engaged in the service of the Redeemer, though it seems to be condescension for them to be associated with us in anything; it constitutes no ground of merit in us to be engaged in the service of the Redeemer (compare Luk 17:10), though we may regard it as an honor to be associated with the angels, and it may raise us in conscious dignity to feel that we are united with them. 
And of thy brethren - Of other Christians; for all are engaged in the same work. That have the testimony of Jesus - Who are witnesses for the Saviour. It is possible that there may be here a particular reference to those who were engaged in preaching the gospel, though the language will apply to all who give their testimony to the value of the gospel by consistent lives. Worship God - He is the only proper object of worship; he alone is to be adored. For the testimony of Jesus - The meaning here seems to be, that this angel, and John, and their fellow-servants, were all engaged in the same work that of bearing their testimony to Jesus. Thus, in this respect, they were on a level, and one of them should not worship another, but all should unite in the common worship of God. No one in this work, though an angel, could have such a pre-eminence that it would be proper to render the homage to him which was due to God alone. There could be but one being whom it was proper to worship, and they who were engaged in simply bearing testimony to the work of the Saviour should not worship one another. Is the spirit of prophecy - The design of prophecy is to bear testimony to Jesus. The language does not mean, of course, that this is the only design of prophecy, but that this is its great and ultimate end. The word "prophecy" here seems to be used in the large sense in which it is often employed in the New Testament - meaning to make known the divine will (see the notes on Rom 12:6), and the primary reference here would seem to be to the preachers and teachers of the New Testament. The sense is, that their grand business is to bear testimony to the Saviour. They are all - whether angels, apostles, or ordinary teachers - appointed for this, and therefore should regard themselves as "fellow-servants." The design of the angel in this seems to have been, to state to John what was his own specific business in the communications which he made, and then to state a universal truth applicable to all ministers of the gospel, that they were engaged in the same work, and that no one of them should claim adoration from others. Thus understood, this passage has no direct reference to the prophecies of the Old Testament, and teaches nothing in regard to their design, though it is in fact undoubtedly true that their grand and leading object was to bear testimony to the future Messiah. But this passage will not justify the attempt so often made to "find Christ" everywhere in the prophecies of the Old Testament, or justify the many forced and unnatural interpretations by which the prophecies are often applied to him. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:10: I fell: Rev 22:8, Rev 22:9; Mar 5:22, Mar 7:25; Act 10:25, Act 10:26, Act 14:11-15; Jo1 5:21 See: Co2 8:7; Eph 5:15, Eph 5:33; Th1 5:15; Heb 12:25 I am: Psa 103:20, Psa 103:21; Dan 7:10; Luk 1:19; Heb 1:14 the testimony: Rev 1:9, Rev 12:11, Rev 12:17, Rev 22:9; Jo1 5:10 worship: Rev 4:10, Rev 14:7, Rev 15:4; Exo 34:14; Kg2 17:36; Psa 45:11; Mat 4:10; Joh 4:22-24; Phi 3:3 for the: Luk 24:25-27, Luk 24:44; Joh 5:39; Act 3:12-18, Act 10:43, Act 13:27; Rom 3:21, Rom 3:22; Pe1 1:10-12; Pe2 1:19-21 ###### Geneva 1599 19:10 (11) And I fell at his feet to worship him. And he said unto me, See [thou do it] not: I am thy fellowservant, and of thy brethren that have the (c)testimony of Jesus: worship God: for the testimony of (d) Jesus is the spirit of prophecy. 
(11) The particular history of this verse is brought in by occasion, and as it were besides the purpose that John might make a public example of his own infirmity and of the modest sanctimony of the angel, who both renounced for himself the divine honours, and recalled all the servants of God, to the worship of him alone: as also (Rev_ 22:8). (c) Who are commanded to bear witness of Jesus. (d) For Jesus is the mark that all the prophecies shoot at. ###### John Gill 19:10 And I fell at his feet to worship him,.... Being transported with the news he brought him of the marriage, or conversion of his countrymen the Jews, and struck with reverence and awe of the glory and majesty in which the angel appeared to him; and forgetting himself, that worship was only due to God, he behaved in this manner; which is not to be excused nor justified, as appears from the angel's words: and he said unto me, see thou do it not; the words are in the original very short and concise, and are spoken in an abrupt manner, and in great haste; as fearing he would be guilty of idolatry, before he could speak all his mind, and use the arguments that were necessary to dissuade from it: I am thy fellow servant, and of thy brethren that have the testimony of Jesus; if this was one of the ministering spirits, he was a servant of the same Lord as John; and if he was a minister of the Gospel, he was still more literally a fellow servant of his, and of the apostles, and preachers of the Gospel; which is meant by the testimony of Jesus, that bearing testimony to the person, office, grace, obedience, sufferings, and death of Christ, and the glory following; and therefore being but a servant, and a servant in common with John and his brethren, was by no means to be worshipped; not the servant, but master; not the creature, but the Creator: worship God and him only, even God the Father, Son, and Spirit; not the Father to the exclusion of the Son, the firstborn, whom all the angels are called upon to worship; nor of the Spirit, who is equally joined with the Father and Son in baptism, a part of religious worship, and in other parts of it also; but this excludes all creatures, angels, and men, things animate or inanimate, and images of them; the worshipping of which will now be no more, or at least will be quickly at an end. For the testimony of Jesus is the spirit of prophecy that is, the testimony of Jesus, or the Gospel which John and his brethren had, is the very spirit, life, and soul of the prophecy of this book; for as all the prophets bore witness to Christ, so does the Spirit of God in this; or the testimony which they had, and bore to Christ, was equal to the spirit of prophecy with which this angel was endowed; so that he and they were upon an equal foot; and he was no more a proper object of divine and religious adoration than they were. ###### John Wesley 19:10 And I fell before his feet to worship him - It seems, mistaking him for the angel of the covenant. But he saith, See thou do it not - In the original, it is only, See not, with a beautiful abruptness. To pray to or worship the highest creature is flat idolatry. I am thy fellowservant and of thy brethren that have the testimony of Jesus - I am now employed as your fellowservant, to testify of the Lord Jesus, by the same Spirit which inspired the prophets of old. ###### Robert Jamieson, A. R. Fausset and David Brown 19:10 at--Greek, "before." 
John's intending to worship the angel here, as in Rev_ 22:8, on having revealed to him the glory of the new Jerusalem, is the involuntary impulse of adoring joy at so blessed a prospect. It forms a marked contrast to the sorrowful wonder with which he had looked on the Church in her apostasy as the harlot (Rev_ 17:6). It exemplifies the corrupt tendencies of our fallen nature that even John, an apostle, should have all but fallen into "voluntary humility and worshipping of angels," which Paul warns us against. and of thy brethren--that is, a fellow servant of thy brethren. have the testimony of Jesus--(See on Rev_ 12:17). the testimony of--that is, respecting Jesus. is the spirit of prophecy--is the result of the same spirit of prophecy in you as in myself. We angels, and you apostles, all alike have the testimony of (bear testimony concerning) Jesus by the operation of one and the same Spirit, who enables me to show you these revelations and enables you to record them: wherefore we are fellow servants, not I your lord to be worshipped by you. Compare Rev_ 22:9, "I am fellow servant of thee and of thy brethren the prophets"; whence the "FOR the testimony," &c., here, may be explained as giving the reason for his adding "and (fellow servant) of thy brethren that have the testimony of Jesus." I mean, of the prophets; "for it is of Jesus that thy brethren, the prophets, testify by the Spirit in them." A clear condemnation of Romish invocation of saints as if they were our superiors to be adored. 19:1119:11: Եւ տեսի զերկինս բացեալ, եւ ահա ձի՛ սպիտակ, որ հեծեալն էր ՚ի նմա՝ Հաւատարի՛մ եւ Ճշմարիտ, եւ արդարութեամբ դատի՝ եւ պատերազմի. Ոմանք. Եւ որ հեծեալն էր։ 11 Ապա տեսայ երկինքը բացուած. եւ ահա մի սպիտակ ձի. ով հեծել էր նրա վրայ, կոչւում է Հաւատարիմ եւ Ճշմարիտ. նա արդարութեամբ է դատում ու պատերազմում: 11 Տեսայ երկինքը բացուած եւ ճերմակ ձի մը։ Անոր վրայ հեծնողը Հաւատարիմ ու Ճշմարիտ կը կոչուի։ Արդարութեամբ դատաստան կը տեսնէ ու կը պատերազմի։ Եւտեսիզերկինսբացեալեւահաձիսպիտակ, եւորհեծեալնէրինմա( եւ")արդարութեամբ")դատի")եւ")պատերազմի"): 19:11: Եւ տեսի զերկինս բացեալ, եւ ահա ձի՛ սպիտակ, որ հեծեալն էր ՚ի նմա՝ Հաւատարի՛մ եւ Ճշմարիտ, եւ արդարութեամբ դատի՝ եւ պատերազմի( "Ոմանք. Եւ որ հեծեալն էր։"). Ոմանք. Եւ որ հեծեալն էր։ 11 Ապա տեսայ երկինքը բացուած. եւ ահա մի սպիտակ ձի. ով հեծել էր նրա վրայ, կոչւում է Հաւատարիմ եւ Ճշմարիտ. նա արդարութեամբ է դատում ու պատերազմում: 11 Տեսայ երկինքը բացուած եւ ճերմակ ձի մը։ Անոր վրայ հեծնողը Հաւատարիմ ու Ճշմարիտ կը կոչուի։ Արդարութեամբ դատաստան կը տեսնէ ու կը պատերազմի։ zohrab-1805▾ εἶδον ( I-had-seen ) τὸν ( to-the-one ) οὐρανὸν ( to-a-sky ) ἠνεῳγμένον , ( to-having-hath-had-come-to-be-opened-up ,"καὶ (and) ἰδοὺ ( thou-should-have-had-seen ,"ἵππος (a-horse) λευκός, (white,"καὶ (and) ὁ (the-one) καθήμενος ( sitting-down ) ἐπ' (upon) αὐτὸν (to-it) πιστὸς (trusted) [καλούμενος] "[being-called-unto]"καὶ (and) ἀληθινός, (un-secluded-belonged-to,"καὶ (and) ἐν ( in ) δικαιοσύνῃ ( unto-a-course-belongedness ) κρίνει ( it-separateth ) καὶ (and) πολεμεῖ. (it-warreth-unto) 19:11. et vidi caelum apertum et ecce equus albus et qui sedebat super eum vocabatur Fidelis et Verax vocatur et iustitia iudicat et pugnatAnd I saw heaven opened: and behold a white horse. And he that sat upon him was called faithful and true: and with justice doth he judge and fight. 11. And I saw the heaven opened; and behold, a white horse, and he that sat thereon, called Faithful and True; and in righteousness he doth judge and make war. 19:11. 
And I saw heaven opened, and behold a white horse; and he that sat upon him [was] called Faithful and True, and in righteousness he doth judge and make war. 19:11. And I saw heaven opened, and behold, a white horse. And he who was sitting upon it was called Faithful and True. And with justice does he judge and fight. And I saw heaven opened, and behold a white horse; and he that sat upon him [was] called Faithful and True, and in righteousness he doth judge and make war: 11: И увидел я отверстое небо, и вот конь белый, и сидящий на нем называется Верный и Истинный, Который праведно судит и воинствует. 19:11 καὶ εἶδον τὸν οὐρανὸν ἠνεῳγμένον, καὶ ἰδοὺ ἵππος λευκός, καὶ ὁ καθήμενος ἐπ᾽ αὐτὸν [καλούμενος] πιστὸς καὶ ἀληθινός, καὶ ἐν δικαιοσύνῃ κρίνει καὶ πολεμεῖ.
a very large one, made up of many armies; angels and saints followed his conduct, and resembled him in their equipage, and in their armour of purity and righteousness--chosen, and called, and faithful. III. The weapons of his warfare--A sharp sword proceeding from his mouth (v. 15), with which he smites the nations, either the threatenings of the written word, which now he is going to execute, or rather his word of command calling on his followers to take a just revenge on his and their enemies, who are now put into the wine-press of the wrath of God, to be trodden under foot by him. IV. The ensigns of his authority, his coat of arms--a name written on his vesture and thigh, King of kings, and Lord of lords, asserting his authority and power, and the cause of the quarrel, v. 16. V. An invitation given to the fowls of heaven, that they should come and see the battle, and share in the spoil and pillage of the field (v. 17, 18), intimating that this great decisive engagement should leave the enemies of the church a feast for the birds of prey, and that all the world should have cause to rejoice in the issue of it. VI. The battle joined. The enemy falls on with great fury, headed by the beast, and the kings of the earth; the powers of earth and hell gathered, to make their utmost effort, v. 19. VII.
The victory gained by the great and glorious head of the church: The beast and the false prophet, the leaders of the army, are taken prisoners, both he who led them by power and he who led them by policy and falsehood; these are taken and cast into the burning lake, made incapable of molesting the church of God any more; and their followers, whether officers or common soldiers, are given up to military execution, and made a feast for the fowls of heaven. Though the divine vengeance will chiefly fall upon the beast, and the false prophet, yet it will be no excuse to those who fight under their banner that they only followed their leaders and obeyed their command; since they would fight for them, they must fall and perish with them. Be wise now therefore, O you kings, be instructed, you rulers of the earth; kiss the Son, lest he be angry, and you perish from the way, Ps. ii. 10, 12. ###### Adam Clarke: Commentary on the Bible - 1831 19:11: A white horse - This is an exhibition of the triumph of Christ after the destruction of his enemies. The white horse is the emblem of this, and Faithful and True are characters of Christ. See Rev 3:14. In righteousness he doth judge and make war - The wars which he wages are from no principle of ambition, lust of power, or extension of conquest and dominion; they are righteous in their principle and in their object. And this is perhaps what no earthly potentate could ever say. ###### Albert Barnes: Notes on the Bible - 1834 19:11: And I saw heaven opened - He saw a new vision, as if an opening were made through the sky, and he was permitted to look into heaven. See the notes on Rev 4:1. And behold, a white horse - On the white horse as a symbol, see the notes on Rev 6:2. He is here the symbol of the final victory that is to be obtained over the beast and the false prophet Rev 19:20, and of the final triumph of the church. And he that sat upon him was called Faithful and True - He is not designated here by his usual and real name, but by his attributes. There can be no doubt that the Messiah is intended, as he goes forth to the subjugation of the world to himself. The attributes here referred to - faithful and true - are especially appropriate, for they are not only strongly marked attributes of his character, but they would be particularly manifested in the events that are described. He would thus show that he was faithful - or worthy of the confidence of his church in delivering it from all its enemies; and true to all the promises that he has made to it. And in righteousness he doth judge - All his acts of judgment in determining the destiny of people are righteous. See the notes on Isa 11:3-5. And make war - That is, the war which he wages is not a war of ambition; it is not for the mere purpose of conquest; it is to save the righteous, and to punish the wicked. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:11: heaven: Rev 4:1, Rev 11:19, Rev 15:5 a white: Rev 6:2; Zac 1:8 Faithful: Rev 1:5, Rev 3:7, Rev 3:14; Joh 14:6 and in: Rev 15:3-7; Psa 45:3-7, Psa 50:6, Psa 72:2-4, Psa 96:13, Psa 98:9, Psa 99:4; Isa 11:3-5, Isa 32:1; Isa 45:21, Isa 63:1-5; Jer 23:5, Jer 23:6, Jer 33:15; Zac 9:9, Zac 9:10; Heb 7:1, Heb 7:2 ###### Geneva 1599 19:11 (12) And I saw (13) heaven opened, and behold a white horse; and he that sat upon him [was] called Faithful and True, and in righteousness he doth judge and make war. 
(12) The second part of this chapter (as I said in) See Rev_ 19:1 is of the victory gained by Christ against both the beasts: in which first Christ is described as one ready to fight, to the sixteenth verse (Rev_ 19:12-16), then the battle is shown to begin, there to the eighteenth verse (Rev_ 19:17-18), lastly is set forth the victory, to the end the chapter (Rev_ 19:19-21). In this place the most excellent properties of Christ as our heavenly judge and avenger shine forth, according to his person, company, effects and names. (13) Properties belonging to his person, that he is heavenly, judge, faithful, true, just, in this verse, knowing all things, ruling over all, to be known by no one, (Rev_ 19:12), the triumpher and in essence, the Word of God, in (Rev_ 19:13). ###### John Gill 19:11 And I saw heaven opened,.... This vision refers not to the same time the first seal does, Rev_ 6:2 for though a white horse, with a rider on it, is seen here, as there; that respects the first times of the Gospel, this the latter part of the dispensation of it; nor to the war in heaven between Michael and the dragon, and their angels, Rev_ 12:7 that issued in the downfall of Paganism in the Roman empire, this will issue in the downfall of the Papacy in it; nor to the personal coming of Christ to the last judgment, of which an account is given in the following chapter; but to the battle at Armageddon, to which the sixth vial is a preparation, and which is finished under the seventh, Rev_ 16:13 and what is briefly hinted at there is at large related here; in which Christ, the General, and his armies, on the one hand, and the kings of the earth, with the beast and false prophet, and their armies, on the other hand, appear to give battle to each other: and the issue of the battle is particularly represented, in order to have a view of which, "John saw heaven opened": not literally, as at Christ's baptism, and at the stoning of Stephen, nor in a spiritual sense, by the blood of Christ, but visionally, as in Rev_ 4:1 and since heaven, often in this book, signifies the church on earth, a more glorious and comfortable state of the church may be designed; when her gates shall be opened continually, and not shut day nor night, to receive the forces of the Gentiles, and their kings, Is 60:15 such a state as is referred to in Rev_ 11:19 to which visions this is contemporary; and it may denote a very glorious appearing of Christ, not in person, which will be after this, but in his kingdom and power, in defeating his enemies, and reigning spiritually with his saints: and it may also design the clear revelation and discerning John had of the following things: and behold a white horse which, as in Rev_ 6:2 may be a symbol of the Gospel, and Gospel ministers, as there in the former, here in the latter part of the Gospel dispensation; signified by a horse, to denote the swift progress of the Gospel in the latter day, the majesty, power, and authority with which it will come, bearing down all opposition made against it; and by a white horse, to express the purity of the Gospel, and of its preachers and professors, and the peace it publishes, and gives, and the joy it brings, and the triumphs that will attend it. 
And he that sat upon him was called Faithful and True: that Christ is here meant, is evident from the description of his eyes, Rev_ 19:12 being the same as in Rev_ 1:14 and from his name, Rev_ 19:13 which is the peculiar name of the Son of God, Jn 1:1 and he sits upon, and is bore by, and rides forth in the Gospel, and the ministry of it, with glory and majesty, and prosperously, Ps 45:3 and the characters of faithful and true well agree with him; See Gill on Rev_ 3:7. See Gill on Rev_ 3:14. He is "faithful and true" to God, who appointed him a Leader and Commander of the people, and to them he is the Commander of: and these characters well suit him now, when he will accomplish all the glorious things spoken of the church, relating to her spiritual and happy state in the latter day, and serve greatly to recommend him as a General. And in righteousness he doth judge and make war; which is to be understood not of the last judgment, though that will be executed in righteousness, and therefore is called the righteous judgment, yet in that day there will be no war, no opposition, the wicked will at once submit; but of Christ's judging of his people, and avenging their blood on their enemies, and the remainder of them among Papists, Pagans, and Mahometans; who will be gathered together at Armageddon in battle array against them, when there will be an utter discomfiture of them in righteous judgment; for as in times past the beast made war with the saints and witnesses, and overcame them, Christ will enable his people to make war with him and his accomplices, and overcome them, as the sequel of this vision shows, Christ being at the head of them, though not in person, yet in power. ###### John Wesley 19:11 And I saw the heaven opened - This is a new and peculiar opening of it, in order to show the magnificent expedition of Christ and his attendants, against his great adversary. And behold a white horse - Many little regarded Christ, when he came meek, "riding upon an ass;" but what will they say, when he goes forth upon his white horse, with the sword of his mouth? White - Such as generals use in solemn triumph. And he that sitteth on him, called Faithful - In performing all his promises. And True - In executing all his threatenings. And in righteousness - With the utmost justice. He judgeth and maketh war - Often the sentence and execution go together. ###### Robert Jamieson, A. R. Fausset and David Brown 19:11 behold a white horse; and he that sat upon him--identical with Rev_ 6:2. Here as there he comes forth "conquering and to conquer." Compare the ass-colt on which He rode into Jerusalem (Mt 21:1-7). The horse was used for war: and here He is going forth to war with the beast. The ass is for peace. His riding on it into Jerusalem is an earnest of His reign in Jerusalem over the earth, as the Prince of peace, after all hostile powers have been overthrown. When the security of the world power, and the distress of the people of God, have reached the highest point, the Lord Jesus shall appear visibly from heaven to put an end to the whole course of the world, and establish His kingdom of glory. He comes to judge with vengeance the world power, and to bring to the Church redemption, transfiguration, and power over the world. Distinguish between this coming (Mt 24:27, Mt 24:29, Mt 24:37, Mt 24:39; Greek, "parousia") and the end, or final judgment (Mt 25:31; 1Cor 15:23). Powerful natural phenomena shall accompany His advent [AUBERLEN]. աչք: noun.nom.acc.pl. 
19:12: եւ աչք նորա բո՛ց հրոյ, եւ ՚ի վերայ գլխոյ նորա պսա՛կս բազումս. եւ ունէր անուն գրեալ, զոր ո՛չ ոք գիտէ՝ բայց միայն ի՛նքն: 12 Նրա աչքերը նման էին կրակի բոցի, եւ նրա գլխի վրայ կային բազում պսակներ: Նա ունէր գրուած մի անուն, որը ոչ ոք չգիտէ, այլ միայն՝ ինքը: 12 Անոր աչքերը կրակի բոցի պէս էին։ Անոր գլխուն վրայ շատ թագեր կային ու գրուած անուն մը ունէր, որ մէ՛կը չի գիտեր, բայց միայն ինք։ 19:12. οἱ (The-ones) δὲ (moreover) ὀφθαλμοὶ (eyes) αὐτοῦ (of-it) φλὸξ (a-blaze) πυρός, (of-a-fire,"καὶ (and) ἐπὶ (upon) τὴν (to-the-one) κεφαλὴν (to-a-head) αὐτοῦ (of-it) διαδήματα (bindings-through-to) πολλά, (much,"ἔχων (holding) ὄνομα (to-a-name) γεγραμμένον (to-having-had-come-to-be-scribed) ὃ (to-which) οὐδεὶς (not-moreover-one) οἶδεν (it-had-come-to-see) εἰ (if) μὴ (lest) αὐτός, (it," 19:12. oculi autem eius sicut flamma ignis et in capite eius diademata multa habens nomen scriptum quod nemo novit nisi ipse And his eyes were as a flame of fire: and on his head were many diadems. And he had a name written, which no man knoweth but himself. 12. And his eyes a flame of fire, and upon his head many diadems; and he hath a name written, which no one knoweth but he himself. 19:12. His eyes [were] as a flame of fire, and on his head [were] many crowns; and he had a name written, that no man knew, but he himself. 19:12. And his eyes are like a flame of fire, and on his head are many diadems, having a name written, which no one knows except himself. His eyes [were] as a flame of fire, and on his head [were] many crowns; and he had a name written, that no man knew, but he himself: 12: Очи у Него как пламень огненный, и на голове Его много диадим. [Он] имел имя написанное, которого никто не знал, кроме Его Самого. 19:12 οἱ δὲ ὀφθαλμοὶ αὐτοῦ [ὡς] φλὸξ πυρός, καὶ ἐπὶ τὴν κεφαλὴν αὐτοῦ διαδήματα πολλά, ἔχων ὄνομα γεγραμμένον ὃ οὐδεὶς οἶδεν εἰ μὴ αὐτός,
garlands of victory, but royal crowns, as KING OF KINGS. Christ's diadem comprises all the diadems of the earth and of heavenly powers too. Contrast the papal tiara composed of three diadems. Compare also the little horn (Antichrist) that overcomes the three horns or kingdoms, Dan 7:8, Dan 7:24 (Quære, the Papacy?
or some three kingdoms that succeed the papacy, which itself, as a temporal kingdom, was made up at first of three kingdoms, the exarchate of Ravenna, the kingdom of the Lombards, and the state of Rome, obtained by Pope Zachary and Stephen II from Pepin, the usurper of the French dominion). Also, the seven crowns (diadems) on the seven heads of the dragon (Rev_ 12:3), and ten diadems on the ten heads of the beast. These usurpers claim the diadems which belong to Christ alone. he had a name written--B and Syriac insert, "He had names written, and a name written," &c., meaning that the names of the dominion which each diadem indicated were written on them severally. But A, Vulgate, ORIGEN, and CYPRIAN omits the words, as English Version. name . . . that no man knew but . . . himself-- (Judg 13:18; 1Cor 2:9, 1Cor 2:11; 1Jn 3:2). The same is said of the "new name" of believers. In this, as in all other respects, the disciple is made like his Lord. The Lord's own "new name" is to be theirs, and to be "in their foreheads"; whence we may infer that His as yet unknown name also is written on His forehead; as the high priest had "Holiness to the Lord" inscribed on the miter on his brow. John saw it as "written," but knew not its meaning. It is, therefore, a name which in all its glorious significancy can be only understood when the union of His saints with Him, and His and their joint triumph and reign, shall be perfectly manifested at the final consummation. արկեալ")զիւրեաւ")հանդերձ: noun.nom.acc.loc.sg. Eng: clothes (7709)")ներկեալ")արեամբ"). եւ")կոչէր")անուն")նորա")Բան")Աստուծոյ"): 19:13: Եւ արկեալ զիւրեւ հանդերձ ներկեա՛լ արեամբ. եւ կոչէր անուն նորա՝ Բա՛ն Աստուծոյ( "Ոմանք. Արկեալ զիւրեաւ հանդ՛՛։"): Ոմանք. Արկեալ զիւրեաւ հանդ՛՛։ 13 Եւ իր վրայ գցել էր արիւնով ներկուած մի զգեստ. նրա անունն էր՝ Բանն Աստծու: 13 Արիւնով ներկուած հանդերձ մը հագեր էր։ Անոր անունը Բանն Աստուծոյ էր։ zohrab-1805▾ περιβεβλημένος (having-had-come-to-be-casted-about) ἱμάτιον (to-an-apparelet) ῤεραντισμένον (to-having-had-come-to-be-sprinkled-to) αἵματι, (unto-a-blood,"καὶ (and) κέκληται (it-had-come-to-be-called-unto) τὸ (the-one) ὄνομα (a-name) αὐτοῦ (of-it,"Ὁ (The-one) Λόγος (a-Forthee) τοῦ (of-the-one) Θεοῦ. (of-a-Deity) 19:13. et vestitus erat vestem aspersam sanguine et vocatur nomen eius Verbum DeiAnd he was clothed with a garment sprinkled with blood. And his name is called: THE WORD OF GOD. 13. And he arrayed in a garment sprinkled with blood: and his name is called The Word of God. 19:13. And he [was] clothed with a vesture dipped in blood: and his name is called The Word of God. 19:13. And he was clothed with a vestment sprinkled with blood. And his name is called: THE WORD OF GOD. And he [was] clothed with a vesture dipped in blood: and his name is called The Word of God: 13: [Он был] облечен в одежду, обагренную кровью. Имя Ему: 'Слово Божие'. 19:13 καὶ περιβεβλημένος ἱμάτιον βεβαμμένον αἵματι, καὶ κέκληται τὸ ὄνομα αὐτοῦ ὁ λόγος τοῦ θεοῦ. 19:13. καὶ (and) περιβεβλημένος (having-had-come-to-be-casted-about) ἱμάτιον (to-an-apparelet) ῤεραντισμένον (to-having-had-come-to-be-sprinkled-to) αἵματι, (unto-a-blood,"καὶ (and) κέκληται (it-had-come-to-be-called-unto) τὸ (the-one) ὄνομα (a-name) αὐτοῦ (of-it,"Ὁ (The-one) Λόγος (a-Forthee) τοῦ (of-the-one) Θεοῦ. (of-a-Deity) 19:13. et vestitus erat vestem aspersam sanguine et vocatur nomen eius Verbum Dei And he was clothed with a garment sprinkled with blood. And his name is called: THE WORD OF GOD. 13. 
Andhearrayedinagarmentsprinkledwith blood: andhisnameiscalledTheWordofGod. 19:13. And he [was] clothed with a vesture dipped in blood: and his name is called The Word of God. 19:13. And he was clothed with a vestment sprinkled with blood. And his name is called: THE WORD OF GOD. ru▾ No one but he can understand its full import, as it implies so high a knowledge of the nature of the Deity; (2) no one but he can understand the relation which it supposes in regard to God, or the relation of the Son to the Father; (3) no one but he can understand what is implied in it, regarded as the method in which God Rev_eals himself to his creatures on earth; (4) no one but he can understand what is implied in it in respect to the manner in which God makes himself known to other worlds. It may be added, as a further illustration of this, that none of the attempts made to explain it have left the matter so that there are no questions unsolved which one would be glad to ask. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:13: clothed: Rev 14:20; Psa 58:10; Isa 9:5, Isa 34:3-8, Isa 63:1-6 The: Joh 1:1, Joh 1:14; Jo1 1:1, Jo1 5:7 ###### John Gill 19:13 And he was clothed with a vesture dipped in blood,.... Either in his own, by which he became the Saviour of his church and people; or else in the blood of his saints, he now comes to avenge; or rather in the blood of his enemies, with which he appears as stained, before the battle is fought, the victory being sure, and their slaughter unavoidable: the metaphor is taken from persons treading in a winepress, whose garments are stained with blood of grapes; see Rev_ 19:15. Here may be also an allusion to the Roman general's vesture, which was sometimes purple or scarlet, in which he fought, as did Lucullus (s). And his name is called the Word of God; the name of Christ, often used by John in his Gospel, epistles, and in this book, Jn 1:11Jn 1:1. Of the signification, reason, and import of this name; see Gill on Jn 1:1. The reason why he is called by it here may be partly to express his greatness, glory, and majesty, this being a name which principally belongs to him, is a person, as the Creator of all things, and as previous to his incarnation; and partly because all the promises of God in his word, and which are all yea, and amen in Christ, will be now shortly fulfilled. (s) Alex. ab Alex. Genial. Dier. l. 1. c. 20. ###### John Wesley 19:13 And he is clothed in a vesture dipped in blood - The blood of the enemies he hath already conquered. Is 63:1, &c ###### Robert Jamieson, A. R. Fausset and David Brown 19:13 vesture dipped in blood-- Is 63:2 is alluded to here, and in Rev_ 19:15, end. There the blood is not His own, but that of His foes. So here the blood on His "vesture," reminding us of His own blood shed for even the ungodly who trample on it, is a premonition of the shedding of their blood in righteous retribution. He sheds the blood, not of the godly, as the harlot and beast did, but of the blood-stained ungodly, including them both. The Word of God--who made the world, is He also who under the same character and attributes shall make it anew. His title, Son of God, is applicable in a lower sense, also to His people; but "the Word of God" indicates His incommunicable Godhead, joined to His manhood, which He shall then manifest in glory. "The Bride does not fear the Bridegroom; her love casteth out fear. She welcomes Him; she cannot be happy but at His side. 
The Lamb [Rev_ 19:9, the aspect of Christ to His people at His coming] is the symbol of Christ in His gentleness. Who would be afraid of a lamb? Even a little child, instead of being scared, desires to caress it. There is nothing to make us afraid of God but sin, and Jesus is the Lamb of God that taketh away the sin of the world. What a fearful contrast is the aspect which He will wear towards His enemies! Not as the Bridegroom and the Lamb, but as the [avenging] judge and warrior stained in the blood of His enemies." ( զօրք")երկնից")զհետ")երթային")նորա")( զգեցեալք")բեհեզս")սուրբս")եւ")սպիտակս"): 19:14: Եւ զօրավարք երկնից՝ եւ զօրք երկնից զհե՛տ երթային նորա ձիովք՝ զգեցեալք բեհեզս սուրբս եւ սպիտակս( "Ոսկան յաւելու. Ձիովք սպիտակօք։"): Ոսկան յաւելու. Ձիովք սպիտակօք։ 14 Եւ երկնքի զօրավարներ ու երկնքի զօրքեր ձիերով գնում էին նրա յետեւից՝ հագած մաքուր եւ սպիտակ բեհեզներ: 14 Երկնքի մէջ եղող զօրքերը անոր ետեւէն կ’երթային ճերմակ ձիերով, ճերմակ ու մաքուր բեհեզներ հագած։ zohrab-1805▾ τὰ (the-ones) στρατεύματα (amassings-to) τὰ (the-ones) ἐν (in) τῷ (unto-the-one) οὐρανῷ (unto-a-sky) ἠκολούθει (it-was-pathing-along-unto) αὐτῷ (unto-it) ἐφ' (upon) ἵπποις (unto-horses) λευκοῖς , ( unto-white ," ἐνδεδυμένοι ( having-had-come-to-vest-in ) βύσσινον (to-linened-belonged-to) λευκὸν (to-white) καθαρόν. (to-cleansed) 19:14. et exercitus qui sunt in caelo sequebantur eum in equis albis vestiti byssinum album mundumAnd the armies that are in heaven followed him on white horses, clothed in fine linen, white and clean. 14. And the armies which are in heaven followed him upon white horses, clothed in fine linen, white pure. 19:14. And the armies [which were] in heaven followed him upon white horses, clothed in fine linen, white and clean. 19:14. And the armies that are in heaven were following him on white horses, clothed in fine linen, white and clean. And the armies [which were] in heaven followed him upon white horses, clothed in fine linen, white and clean: 14: И воинства небесные следовали за Ним на конях белых, облеченные в виссон белый и чистый. 19:14 καὶ τὰ στρατεύματα [τὰ] ἐν τῶ οὐρανῶ ἠκολούθει αὐτῶ ἐφ᾽ ἵπποις λευκοῖς, ἐνδεδυμένοι βύσσινον λευκὸν καθαρόν. 19:14. καὶ (And) τὰ (the-ones) στρατεύματα (amassings-to) τὰ (the-ones) ἐν (in) τῷ (unto-the-one) οὐρανῷ (unto-a-sky) ἠκολούθει (it-was-pathing-along-unto) αὐτῷ (unto-it) ἐφ' (upon) ἵπποις (unto-horses) λευκοῖς, (unto-white,"ἐνδεδυμένοι (having-had-come-to-vest-in) βύσσινον (to-linened-belonged-to) λευκὸν (to-white) καθαρόν. (to-cleansed) 19:14. et exercitus qui sunt in caelo sequebantur eum in equis albis vestiti byssinum album mundum And the armies that are in heaven followed him on white horses, clothed in fine linen, white and clean. 14. Andthearmieswhichareinheaven followed himuponwhitehorses, clothedinfinelinen, whitepure. 19:14. And the armies [which were] in heaven followed him upon white horses, clothed in fine linen, white and clean. 19:14. And the armies that are in heaven were following him on white horses, clothed in fine linen, white and clean. ru▾. Воинство следовало на белых конях, чтобы соответствовать Своему предводителю, а одежда их была из белого виссона. Оружие, которым Христос побеждал врагов, есть меч, выходящий из Его уст (I:16; II:12). Этот меч есть Его слово, слово Его всемогущества и всевеления. Он пасет народ жезлом железным, ибо всецело подчиняет их Своей власти и Своему решению. Он топчет точило вина ярости и гнева Божия (XIV:19-20). 
Весь символический образ есть изображение страшного суда и мздовоздания, когда нечестивые будут приведены к сознанию своего полного ничтожества. Христос, как всемогущий Царь, один только и может носить имя Царя и Господа. ###### Adam Clarke: Commentary on the Bible - 1831 19:14: The armies which were in heaven - Angels and saints over whom Jesus Christ is Captain, Clothed in fine linen - All holy, pure, and righteous. ###### Albert Barnes: Notes on the Bible - 1834 19:14: And the armies which were in heaven followed him - The heavenly hosts; particularly, it would seem, the redeemed, as there would be some incongruity in representing the angels as riding in this manner. Doubtless the original of this picture is Isa 63:3; "I have trodden the wine-press alone, and of the people there was none with me." These hosts of the redeemed on white horses accompany him to be witnesses of his victory, and to participate in the joy of the triumph, not to engage in the work of blood. Upon white horses - Emblems of triumph or victory. See the notes on Rev 6:2. Clothed in fine linen, white and clean - The usual raiment of those who are in heaven, as everywhere represented in this book. See Rev 3:4-5; Rev 4:4; Rev 7:9, Rev 7:13; Rev 15:6. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:14: the armies: Rev 14:1, Rev 14:20, Rev 17:14; Psa 68:17, Psa 149:6-9; Zac 14:5; Mat 26:53; Th2 1:7; Jde 1:14 white horses: Rev 19:11 clothed: Rev 19:8, Rev 4:4, Rev 7:9; Mat 28:3 ###### Geneva 1599 19:14 (14) And the armies [which were] in heaven followed him upon white horses, clothed in fine linen, white and clean. (14) The company or retinue of Christ, holy, innumerable, heavenly, judicial, royal and pure. ###### John Gill 19:14 And the armies which were in heaven,.... Not the angels, though they are God's host, and are the armies of the heavens; they are in heaven, and dwell there, and follow Christ, attend upon him, and minister to him, and have been sometimes represented by horses and horsemen, 4Kings 2:11 and they are pure and holy creatures, and will come with Christ to judgment: but this vision refers not to the day of judgment; and besides, the saints are meant, as appears by their habit, for the fine linen, white and clean, is the righteousness of the saints, Rev_ 19:8 and the righteousness of angels and saints is not the same. Moreover, these are the same company described in Rev_ 17:14. The saints are in a state of warfare, have many enemies to fight with, sin, Satan, and the world; they are enlisted as volunteers under Christ, the Captain of salvation; they are provided with the whole armour of God, and are very numerous, and always more than conquerors through Christ: these are described by the place where they were, "in heaven"; not being glorified saints in heaven; with these indeed Christ will come to judgment, even with all his saints with him; but members of the church militant, said to be in heaven, because that is often called the kingdom of heaven; and because their names are written in heaven, and they are of heavenly extract; they are born from above, and are partakers of the heavenly calling; they belong to heaven, they are citizens of it, and are pressing on to it. 
And these followed him; Christ, their Leader and Commander in the exercise of grace, and in the discharge of duty; having gone on in his ways through a train of sufferings, and cleaved unto him, and now attend him; not to assist in fighting, but to add to the glorious and triumphant appearance of their General; and therefore are said to be upon white horses; they had served Christ in his Gospel, which some of them had preached, and all professed, and had maintained the purity of it in doctrine and practice, and now triumphed in Christ, and along with him, riding upon horses of the same colour with his, as being his princes and nobles, and whom he had made kings as well as priests; the former may be signified by their horses; see Judg 5:10 and the latter by their following habit, clothed in fine linen, white and clean; not the horses, but the armies on them; which designs not their inward purity, which was very glorious; nor their outward conversation garments, washed and made white in the blood of the Lamb; but the robe of Christ's righteousness, which is pure and spotless: these have no armour on, for they are not to strike a blow, only their General, who has the bloody garment on, is to tread the winepress, and destroy antichrist with the breath of his mouth, or with his sword proceeding out of it, as follows. ###### Robert Jamieson, A. R. Fausset and David Brown 19:14 the armies . . . in heaven--Compare "the horse bridles," Rev_ 14:20. The glorified saints whom God "will bring with" Christ at His advent; compare Rev_ 17:14, "they that are with Him, called, chosen, faithful"; as also "His mighty angels." white and clean--Greek, "pure." A, B, Vulgate, Syriac, and CYPRIAN omit "and," which ORIGEN and ANDREAS retain, as English Version. ի")բերանոյ")նորա")ելանէր")սուր")սրեալ"), զի")նովաւ")հարցէ")զազգս"). եւ")նա")հովուեսցէ")զնոսա")գաւազանաւ")երկաթեաւ"), եւ")նա")կոխեսցէ")զհնձան")գինւոյ")ցասման")բարկութեան")Աստուծոյ")Ամենակալի"): 19:15: Եւ ՚ի բերանոյ նորա ելանէր սո՛ւր սրեալ. զի նովա՛ւ հարցէ զազգս. եւ նա հովուեսցէ զնոսա գաւազանաւ երկաթեաւ. եւ նա՛ կոխեսցէ զհնծան գինւոյ՝ ցասման բարկութեան Աստուծոյ Ամենակալի: 15 Եւ նրա բերանից ելնում էր սրած սուր, որպէսզի նրանով հարուածի ազգերին. եւ նա պիտի իշխի նրանց վրայ երկաթէ մականով. նա պիտի կոխոտի Ամենակալ Աստծու ցասման գինու հնձանը: 15 Անոր բերնէն սրած սուր մը կ’ելլէր, որպէս զի անով ազգերը զարնէ ու ինք երկաթէ գաւազանով պիտի հովուէ զանոնք ու Ամենակալ Աստուծոյ սաստիկ բարկութեանը գինիին հնձանը ինք պիտի կոխէ։ zohrab-1805▾ ἐκ (out) τοῦ ( of-the-one ) στόματος ( of-a-mouth ) αὐτοῦ (of-it) ἐκπορεύεται ( it-traverseth-out-of ,"ῥομφαία (a-sword) ὀξεῖα, (sharp,"ἵνα (so) ἐν (in) αὐτῇ (unto-it) πατάξῃ ( it-might-have-smote ) τὰ ( to-the-ones ) ἔθνη , ( to-nations ,"καὶ (and) αὐτὸς (it) ποιμανεῖ ( it-shall-shepherd ) αὐτοὺς ( to-them ) ἐν ( in ) ῥάβδῳ ( unto-a-rod ) σιδηρᾷ : ( unto-iron ) καὶ (and) αὐτὸς (it) πατεῖ ( it-treadeth-unto ) τὴν ( to-the-one ) ληνὸν ( to-a-trough ) τοῦ (of-the-one) οἴνου (of-a-wine) τοῦ (of-the-one) θυμοῦ (of-a-passion) τῆς (of-the-one) ὀργῆς (of-a-stressing) τοῦ ( of-the-one ) θεοῦ ( of-a-Deity ) τοῦ ( of-the-one ) παντοκράτορος . ( of-an-all-securer ) 19:15. et de ore ipsius procedit gladius acutus ut in ipso percutiat gentes et ipse reget eos in virga ferrea et ipse calcat torcular vini furoris irae Dei omnipotentisAnd out of his mouth proceedeth a sharp two-edged sword, that with it he may strike the nations. And he shall rule them with a rod of iron: and he treadeth the winepress of the fierceness of the wrath of God the Almighty. 15. 
And out of his mouth proceedeth a sharp sword, that with it he should smite the nations: and he shall rule them with a rod of iron: and he treadeth the winepress of the fierceness of the wrath of Almighty God. 19:15. And out of his mouth goeth a sharp sword, that with it he should smite the nations: and he shall rule them with a rod of iron: and he treadeth the winepress of the fierceness and wrath of Almighty God. 19:15. And from his mouth proceeded a sharp two-edged sword, so that with it he may strike the nations. And he shall rule them with an iron rod. And he treads the winepress of the fury of the wrath of God Almighty. 15: Из уст же Его исходит острый меч, чтобы им поражать народы. Он пасет их жезлом железным; Он топчет точило вина ярости и гнева Бога Вседержителя. 19:15 καὶ ἐκ τοῦ στόματος αὐτοῦ ἐκπορεύεται ῥομφαία ὀξεῖα, ἵνα ἐν αὐτῇ πατάξῃ τὰ ἔθνη, καὶ αὐτὸς ποιμανεῖ αὐτοὺς ἐν ῥάβδῳ σιδηρᾷ· καὶ αὐτὸς πατεῖ τὴν ληνὸν τοῦ οἴνου τοῦ θυμοῦ τῆς ὀργῆς τοῦ θεοῦ τοῦ παντοκράτορος. This appears to mean the word of the Gospel, by which his enemies are confounded, and his friends supported and comforted. With a rod of iron - He shall execute the severest judgment on the opposers of his truth. He treadeth the winepress - As the grapes are trodden to express the juice, so his enemies shall be bruised and beaten, so that their life's blood shall be poured out. ###### Albert Barnes: Notes on the Bible - 1834 19:15: And out of his mouth goeth a sharp sword - See the notes on Rev 1:16. In that place the sword seems to be an emblem of his words or doctrines, as penetrating the hearts of people; here it is the emblem of a work of destruction worked on his foes.
That with it he should smite the nations - The nations that were opposed to him; to wit, those especially who were represented by the beast and the false prophet, Rev 19:18-20. And he shall rule them with a rod of iron - See the notes on Rev 2:27; Rev 12:5. And he treadeth the wine-press of the fierceness and wrath of Almighty God - This language is probably derived from Isa 63:1-4. See it explained in the notes on that place, and on Rev 14:19-20. It means here that his enemies would be certainly crushed before him - as grapes are crushed under the feet of him that treads in the winevat. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:15: out: Rev 19:21, Rev 1:16, Rev 2:12, Rev 2:16; Isa 11:4, Isa 30:33; Th2 2:8 and he shall: Rev 2:27, Rev 12:5; Psa 2:9 and he treadeth: Rev 14:17-20; Isa 63:2-6 ###### Geneva 1599 19:15 (15) And out of his mouth goeth a sharp sword, that with it he should smite the nations: and he shall rule them with a rod of iron: and he treadeth the winepress of the fierceness and wrath of Almighty God. (15) The effects of Christ prepared to fight, that with his mouth he strikes the Gentiles, rules and destroys. ###### John Gill 19:15 And out of his mouth goeth a sharp sword,.... The Complutensian edition, and all the Oriental versions, with the Vulture Latin, read, "a sharp twoedged sword". The word of God, or the judiciary sentence of Christ according to it, and which he will fully execute, to the utter destruction of all his enemies; See Gill on Rev_ 1:16 that with it he should smite the nations; the Gentiles, the Papists, the antichristian states, those that have adhered to Babylon, and have drunk of the wine of her fornication. This is predicted in Num 24:17 and on account of this the nations will be angry under the sounding of the seventh trumpet, with which this vision is contemporary, Rev_ 11:18. And he shall rule them with a rod of iron; use them with the utmost severity; the phrase is taken out of Ps 2:9 a prophecy of Christ, and mentioned twice before in this book; see Gill on Rev_ 2:27, Rev_ 12:5, and he treadeth the winepress of the fierceness and wrath of Almighty God; the fierce wrath of God against sinners is compared to a winepress; and the wicked antichristian party are likened to clusters of grapes; who being ripe for destruction, are cast into it, and pressed, squeezed, and trodden down by the mighty power of Christ, the Word of God, whose vesture is therefore before said to be dipped in blood; the same metaphor is used in Rev_ 14:19 the allusion seems to be to Is 63:3. ###### John Wesley 19:15 And he shall rule them - Who are not slain by his sword. With a rod of iron - That is, if they will not submit to his golden sceptre. And he treadeth the wine press of the wrath of God - That is, he executes his judgments on the ungodly. This ruler of the nations was born (or appeared as such) immediately after the seventh angel began to sound. He now appears, not as a child, but as a victorious warrior. The nations have long ago felt his "iron rod," partly while the heathen Romans, after their savage persecution of the Christians, themselves groaned under numberless plagues and calamities, by his righteous vengeance; partly, while other heathens have been broken in pieces by those who bore the Christian name. 
For although the cruelty, for example, of the Spaniards in America, was unrighteous and detestable, yet did God therein execute his righteous judgment on the unbelieving nations; but they shall experience his iron rod as they never did yet, and then will they all return to their rightful Lord. ###### Robert Jamieson, A. R. Fausset and David Brown 19:15 out of his mouth . . . sword-- (Rev_ 1:16; Rev_ 2:12, Rev_ 2:16). Here in its avenging power, Th2 2:8, "consume with the Spirit of His mouth" (Is 11:4, to which there is allusion here); not in its convicting and converting efficacy (Eph 6:17; Heb 4:12-13, where also the judicial keenness of the sword-like word is included). The Father commits the judgment to the Son. he shall rule--The HE is emphatic, He and none other, in contrast to the usurpers who have misruled on earth. "Rule," literally, "tend as a shepherd"; but here in a punitive sense. He, who would have shepherded them with pastoral rod and with the golden scepter of His love, shall dash them in pieces, as refractory rebels, with "a rod of iron." treadeth . . . wine-press-- (Is 63:3). of the fierceness and wrath--So ANDREAS reads. But A, B, Vulgate, Coptic, and ORIGEN read, "of the fierceness (or boiling indignation) of the wrath," omitting "and." Almighty--The fierceness of Christ's wrath against His foes will be executed with the resources of omnipotence. ունէր")ի")վերայ")պատմուճանի")իւրոյ")եւ")ի")վերայ")( գիր")գրեալ") ԹԱԳԱՒՈՐԹԱԳԱՒՈՐԱՑԵՒՏԷՐՏԵՐԱՆՑ: 19:16: Եւ ունէր ՚ի վերայ պատմուճանի իւրոյ, եւ ՚ի վերայ անդամոց իւրոց գիր գրեալ. թագաւո՛ր թագաւորաց եւ տէր տերանց: 16 Եւ իր պատմուճանի ու իր մարմնի մասերի վրայ կար գրուած մի գրութիւն՝ Թագաւոր թագաւորների եւ Տէր տէրերի: Յունարէնն ունի ազդրի վրայ: 16 Եւ իր պատմուճանին վրայ ու իր ազդրին վրայ գրուած գիր մը ունէր. «Թագաւոր Թագաւորաց եւ Տէր Տերանց»։ zohrab-1805▾eastern-1994▾western am▾ 19:1616: На одежде и на бедре Его написано имя: 'Царь царей и Господь господствующих'. 19:16 καὶ ἔχει ἐπὶ τὸ ἱμάτιον καὶ ἐπὶ τὸν μηρὸν αὐτοῦ ὄνομα γεγραμμένον· βασιλεὺς βασιλέων καὶ κύριος κυρίων. 19:16. καὶ (And) ἔχει (it-holdeth) ἐπὶ (upon) τὸ (to-the-one) ἱμάτιον (to-an-apparelet) καὶ (and) ἐπὶ (upon) τὸν (to-the-one) μηρὸν (to-a-thigh) αὐτοῦ (of-it) ὄνομα (to-a-name) γεγραμμένον (to-having-had-come-to-be-scribed,"ΒΑΣΙΛΕΥΣ (A-Ruler-of) ΒΑΣΙΛΕΩΝ (of-rulers-of) ΚΑΙ (and) ΚΥΡΙΟΣ (Authority-belonged) ΚΥΡΙΩΝ . ( of-authority-belonged ) 19:16. et habet in vestimento et in femore suo scriptum rex regum et Dominus dominantiumAnd he hath on his garment and on his thigh written: KING OF KINGS AND LORD OF LORDS.And he hath on his garment and on his thigh written: KING OF KINGS AND LORD OF LORDS. 16. And he hath on his garment and on his thigh a name written, KING OF KINGS, AND LORD OF LORDS. 19:16. And he hath on [his] vesture and on his thigh a name written, KING OF KINGS, AND LORD OF LORDS. 19:16. And he has on his garment and on his thigh written: KING OF KINGS AND LORD OF LORDS. And he hath on [his] vesture and on his thigh a name written, KING OF KINGS, AND LORD OF LORDS: 16: На одежде и на бедре Его написано имя: 'Царь царей и Господь господствующих'. 19:16 καὶ ἔχει ἐπὶ τὸ ἱμάτιον καὶ ἐπὶ τὸν μηρὸν αὐτοῦ ὄνομα γεγραμμένον· βασιλεὺς βασιλέων καὶ κύριος κυρίων. 19:16. 
###### Adam Clarke: Commentary on the Bible - 1831 19:16: On his vesture and on his thigh a name written - Dr. Dodd has well observed on this passage, that "it appears to have been an ancient custom among several nations to adorn the images of their deities, princes, victors at public games, and other eminent persons, with inscriptions, expressing either the character of the persons, their names, or some other circumstance which might contribute to their honor; and to that custom the description here given of Christ may possibly have some allusion. "There are several such images yet extant, with an inscription written either on the garment, or on one of the thighs, or on that part of the garment which was over the thigh; and probably this is the meaning of the apostle. And as these inscriptions are placed on the upper garment, Grotius seems very justly to have explained the words επι το ἱματιον, by his imperial robe, that his power in this victory might be conspicuous to all. But as a farther confirmation of this sense of the passage it may not be improper here to describe briefly several remarkable figures of this sort, which are still extant." This description I shall give from my own examination. 1. Herodotus, Euterpe, lib. ii. p. 127, edit. Gale, speaking of the actions of Sesostris, and of the images he set up in the countries which he conquered, has the following words: Εισι δε περι Ιωνιην δυο τυποι εν πετρῃσι εγκεκολαμμενοι τουτου του ανδρος, κ. τ. λ. "Two images likewise of this man are seen in Ionia, on the way that leads from Ephesus to Phocaea, and from Sardis to Smyrna. The figure is five palms in height; in his right hand he holds a dart, in his left a bow, armed after the manner of the Egyptians and Ethiopians. On a line drawn across the breast, from one shoulder to the other, are these words, written in Egyptian hieroglyphics: Εγω τηνδε την χωρην ωμοισι τοισι εμοισι εκτησαμην· 'I obtained this country by these my shoulders;'" i.e., by my own power. 2. In the Etruria Regalis of Dempster, in the appendix at the end of vol. ii., there is a beautiful female figure of brass, about twelve inches high, the hair gracefully plaited, and the head adorned with a diadem. She has a tunic without sleeves, and over that a sort of pallium. On the outside of the right thigh, close to the tunic, and probably on it, in the original, is an inscription in Etruscan characters. What these import I cannot say. Dempster has given a general explanation of the image in the appendix to the above volume, p. 108. The plate itself is the eighty-third of the work. 3.
There are two other images found in the same author, vol. i., p. 91, tab. xxiv.; the first is naked, with the exception of a short loose jupe, or petticoat, which goes round the loins, and over the left arm. On the left thigh of this image there is an inscription in Etruscan characters. The second has a similar jupe, but much longer, which extends to the calf of the leg, and is supported over the bended left arm. Over the right thigh, on this vesture, there is an Etruscan inscription in two lines. 4. Montfaucon, Antiquite Expliquee, vol. iii., part 2, p. 268, has introduced an account of two fine images, which are represented tab. CLVII. The first is a warrior entirely naked, except a collar, one bracelet, and boots. On his left thigh, extending from the groin to a little below the knee, is an inscription in very ancient Etruscan characters, in two lines, but the import is unknown. The second is a small figure of brass, about six inches long, with a loose tunic, which is suspended from the left shoulder down to the calf of the legs. On this tunic, over the left thigh, is an inscription (perhaps) in very ancient Latin characters, but in the Etruscan language, as the learned author conjectures. It is in one line, but what it means is equally unknown. 5. In the same work, p. 269, tab. CLVIII., another Etruscan warrior is represented entirely naked; on the left thigh is the following words in uncial Greek letters, ΚΑΦΙΣΟΔΩΡΟΣ, and on the right thigh, ΑΙΣΧΛΑΜΙΟΥ, i.e., "Kaphisodorus, the son of Aischlamius." All these inscriptions are written longitudinally on the thigh. 6. Gruter, vol. iii., p. DCCCCLXXXIX, sub. tit. Affectus Servorum et Libertinorum inter se, et in suos, gives us the figure of a naked warrior, with his left hand on an axe, the end of whose helve rests on the ground, with the following inscription on the inside of his left thigh, longitudinally written, as in all other cases: - A. Poblicius. D. L. Antioc. Ti. Barbius. Q. P. L. Tiber. 7. The rabbins say, that "God gave to the Israelites a sword, on which the ineffable name יהוה Yehovah was inscribed; and as long as they held that sword the angel of death had no power over them." Shemoth Rabba, sec. 51, fol. 143, 2. Bemidbar Rabba, sec. 12, fol. 214, 2. In the latter tract, sec. 16, fol. 232, 3, and in Rab. Tanchum, fol. 66, mention is made of the guardian angels of the Israelites, who were clothed with purple vestments, on which was inscribed שם המפורש shem hammephorash, the ineffable name. See more in Schoettgen. 8. But what comes nearer to the point, in reference to the title given here to Christ, is what is related of Sesostris by Diodorus Siculus, lib. i. c. 55, p. 166, edit. Bipont, of whom he says: "Having pushed his conquests as far as Thrace, he erected pillars, on which were the following words in Egyptian hieroglyphics: Τηνδε την χωραν ὁπλοις κατεστρεψατο τοις ἑαυτου Βασιλευς Βασιλεων, και Δεσποτης Δεσποτων, Σεσοωσις·" This province, Sesoosis, (Sesostris), King of Kings and Lord of Lords, conquered by his own arms. This inscription is conceived almost in the words of St. John. Now the Greek historian did not borrow the words from the apostle, as he died in the reign of Augustus, about the time of our Lord's incarnation. This cannot be the same inscription mentioned above by Herodotus, the one being in Ionia, the other in Thrace: but as he erected several of those pillars or images, probably a nearly similar inscription was found on each. 9. This custom seems to have been common among the ancient Egyptians. 
Inscriptions are frequently found on the images of Isis, Osiris, Anubis, etc., at the feet, on the head, on the back, on the girdle, etc., etc. Eight of those ancient images in my own collection abound with these inscriptions. 1. Osiris, four inches and a quarter high, standing on a thrones all covered over with hieroglyphics exquisitely engraved. 2. Anubis, six inches high, with a tiara, on the back of which is cut ΛΕΓΟΡΝΥΘ , in uncial Greek characters. 3. The Cercopithecus, seven inches long, sitting on a pedestal, and at his feet, in the same characters, ΧΑΔΕΟ. 4. An Isis, about eight inches high, on her back ΔΡΥΓΟ. 5. Ditto, seven inches, beautifully cut, standing, holding a serpent in her left hand, and at her feet ΕΤΑΠΥΓΙ. 6. Ditto, five inches and a quarter, round whose girdle is ΠΙΕΥΧΥΔΙ; but part of this inscription appears to be hidden under her arms, which are extended by her side. 7. Ditto, five inches high, hooded, with a loose stola, down the back of which are seven lines of Greek uncial characters, but nearly obliterated. 8. Ditto, four inches high, with a girdle going round the back immediately under the arms, the front of which is hidden under a sort of a stomacher; on the part that appears are these characters, ΧΕΝΛΑ. These may be all intended as a kind of abrasaxas or tutelary deities; and I give this notice of them, and the inscriptions upon them, partly in illustration of the text, and partly to engage my learned and antiquarian readers in attempts to decipher them. I would have given the Etruscan characters on the other images described above, but have no method of imitating them except by an engraving. As these kinds of inscriptions on the thigh, the garments, and different parts of the body, were in use among different nations, to express character, conduct, qualities, and conquests, we may rest assured that to them St. John alludes when he represents our sovereign Lord with an inscription upon his vesture and upon his thigh; and had we not found it a custom among other nations, we should have been at a loss to account for its introduction and meaning here. ###### Albert Barnes: Notes on the Bible - 1834 19:16: And he hath on his vesture - That is, this name was conspicuously written on his garment - probably his military robe. And on his thigh - The robe or military cloak may be conceived of as open and flowing, so as to expose the limbs of the rider; and the idea is, that the name was conspicuously written not only on the flowing robe, but on the other parts of his dress, so that it must be conspicuous whether his military cloak were wrapped closely around him, or whether it was open to the breeze. Grotius supposes that this name was on the edge or hilt of the sword which depended from his thigh. A name written - Or a title descriptive of his character. King of kings, and Lord of lords - As in Rev 17:5, so here, there is nothing in the original to denote that this should be distinguished, as it is, by capital letters. As a conspicuous title, however, it is not improper. It means that he is, in fact, the sovereign over the kings of the earth, and that all nobles and princes are under his control - a rank that properly belongs to the Son of God. Compare the notes on Eph 1:20-22. See also Rev 19:12 of this chapter. The custom here alluded to of inscribing the name or rank of distinguished individuals on their garments, so that they might be readily recognized, was not uncommon in ancient times. For full proof of this, see Rosenmuller, Morgenland, vol. iii. pp. 
232-236. The authorities quoted there are, Thevenot's Travels, vol. i. p. 149; Gruter, p. 989; Dempster's Etruria Regalis, t. ii. tab. 93; Montfaucon, Antiq. Expliq. t. iii. tab. 39. Thus Herodotus (vol. ii. p. 196), speaking of the figures of Sesostris in Ionia, says that, "Across his breast, from shoulder to shoulder, there is this inscription in the sacred characters of Egypt, 'I conquered this country by the force of my arms.'" Compare Cic. Verr. iv. 23; LeMoyne a. d. Jer 23:6; Munter, Diss. a. d. Rev 17:5, as referred to by Prof. Stuart, in loco. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:16: on his vesture: Rev 19:12, Rev 19:13 KING: Rev 17:14; Psa 72:11; Pro 8:15, Pro 8:16; Dan 2:47; Phi 2:9-11; Ti1 6:15 ###### Geneva 1599 19:16 (16) And he hath on [his] vesture and on his thigh a name written, KING OF KINGS, AND LORD OF LORDS. (16) The name agreeing to Christ according to the former qualities, expressed after the manner of the Hebrews. ###### John Gill 19:16 And he hath on his vesture and on his thigh a name written,.... This name, afterwards expressed, is said to be written on his vesture, in allusion to the custom of persons of note and eminence having their names interwoven in their garments, and which was sometimes done in letters of gold, as Zeuxis had (t); and it is expressive of the conspicuousness of Christ's kingdom, which now will come with observation; his judgments, the administrations of his kingly office, will be manifest, and he will reign before his ancients gloriously: and its being said to be written on his thigh may mean either that it was upon that part of his garment which covered his thigh; or else that it was also on his sword, which he sometimes girt upon his thigh. Mr. Daubuz has given an instance out of Victor Vitensis, of Clementianus, a monk, who had written on his thigh, ""a manichee" disciple of Jesus Christ.'' And this being done in Africa, he supposes it to be a Phoenician custom continued. It may here denote the perpetuity of Christ's name, power, and dominion, which will continue to the latest posterity, Ps 72:17 which spring from the thigh; and it may denote the subjection of his people to him, signified by the putting the hand under the thigh, Gen 24:2. And this name is King of kings and Lord of lords; which will well suit him now when he shall be openly King over all the earth; See Gill on Rev_ 17:14. (t) Plin. Nat. Hist. l. 35. c. 9. ###### John Wesley 19:16 And he hath on his vesture and on his thigh - That is, on the part of his vesture which is upon his thigh. A name written - It was usual of old, for great personages in the eastern countries, to have magnificent titles affixed to their garments. ###### Robert Jamieson, A. R. Fausset and David Brown 19:16 "His name written on His vesture and on His thigh," was written partly on the vesture, partly on the thigh itself, at the part where in an equestrian figure the robe drops from the thigh. The thigh symbolizes Christ's humanity as having come, after the flesh, from the loins of David, and now appearing as the glorified "Son of man." On the other hand, His incommunicable divine name, "which no man knew," is on His head (Rev_ 19:12), [MENOCHIUS]. KING OF KINGS--Compare Rev_ 17:14, in contrast with Rev_ 19:17, the beast being in attempted usurpation a king of kings, the ten kings delivering their kingdom to him. 19:1719:17: Եւ տեսի ա՛յլ հրեշտակ՝ որ կայր ՚ի վերայ արեգական, զի աղաղակեա՛ց ՚ի ձայն մեծ ամենայն հաւուց թռուցելոց ընդ մէջ երկնից՝ ասելով. 
Եկա՛յք ժողովեցարո՛ւք յընթրիս մեծին Աստուծոյ. Ոսկան. Եւ ժողովեցարուք։ 17 Տեսայ նաեւ մի ուրիշ հրեշտակ, որ կանգնած էր արեգակի վրայ. նա աղաղակեց բարձր ձայնով՝ երկնքի մէջ թռչող բոլոր թռչուններին ասելով. «Եկէք հաւաքուեցէ՛ք Աստծու մեծ ընթրիքին. 17 Եւ հրեշտակ մըն ալ տեսայ, որ արեւին մէջ կայներ էր ու մեծ ձայնով աղաղակեց երկնքի մէջ թռչող բոլոր թռչուններուն՝ ըսելով. «Եկէք, Աստուծոյ մեծ ընթրիքին հաւաքուեցէ՛ք Եւտեսիայլհրեշտակորկայրիվերայարեգական, զիաղաղակեացիձայնմեծամենայնհաւուցթռուցելոցընդմէջերկնիցասելով"). Եկայք")ժողովեցարուք")( 19:17: Եւ տեսի ա՛յլ հրեշտակ՝ որ կայր ՚ի վերայ արեգական, զի աղաղակեա՛ց ՚ի ձայն մեծ ամենայն հաւուց թռուցելոց ընդ մէջ երկնից՝ ասելով. Եկա՛յք ժողովեցարո՛ւք յընթրիս մեծին Աստուծոյ( "Ոսկան. Եւ ժողովեցարուք։"). Ոսկան. Եւ ժողովեցարուք։ 17 Տեսայ նաեւ մի ուրիշ հրեշտակ, որ կանգնած էր արեգակի վրայ. նա աղաղակեց բարձր ձայնով՝ երկնքի մէջ թռչող բոլոր թռչուններին ասելով. «Եկէք հաւաքուեցէ՛ք Աստծու մեծ ընթրիքին. 17 Եւ հրեշտակ մըն ալ տեսայ, որ արեւին մէջ կայներ էր ու մեծ ձայնով աղաղակեց երկնքի մէջ թռչող բոլոր թռչուններուն՝ ըսելով. «Եկէք, Աստուծոյ մեծ ընթրիքին հաւաքուեցէ՛ք zohrab-1805▾ εἶδον (I-had-seen) ἕνα (to-one) ἄγγελον (to-a-messenger) ἑστῶτα (to-having-had-come-to-stand) ἐν (in) τῷ (unto-the-one) ἡλίῳ, (unto-a-sun) καὶ (and) ἔκραξεν (it-clamored-to) [ἐν] "[in]"φωνῇ (unto-a-sound) μεγάλῃ (unto-great) λέγων ( forthing ) πᾶσι ( unto-all ) τοῖς ( unto-the-ones ) ὀρνέοις ( unto-en-birdings ) τοῖς ( unto-the-ones ) πετομένοις ( unto-flying ) ἐν (in) μεσουρανήματι (unto-a-mid-skying-to," Δεῦτε ( Ye-should-hitherto ," συνάχθητε ( ye-should-have-been-led-together ) εἰς ( into ) τὸ ( to-the-one ) δεῖπνον (to-mealed) τὸ (to-the-one) μέγα (to-great) τοῦ (of-the-one) θεοῦ, (of-a-Deity," 19:17. et vidi unum angelum stantem in sole et clamavit voce magna dicens omnibus avibus quae volabant per medium caeli venite congregamini ad cenam magnam DeiAnd I saw an angel standing in the sun: and he cried with a loud voice, saying to all the birds that did fly through the midst of heaven: Come, gather yourselves together to the great supper of God: 17. And I saw an angel standing in the sun; and he cried with a loud voice, saying to all the birds that fly in mid heaven, Come be gathered together unto the great supper of God; 19:17. And I saw an angel standing in the sun; and he cried with a loud voice, saying to all the fowls that fly in the midst of heaven, Come and gather yourselves together unto the supper of the great God; 19:17. And I saw a certain Angel, standing in the sun. And he cried out with a great voice, saying to all the birds that were flying through the midst of the sky, “Come and gather together for the great supper of God, And I saw an angel standing in the sun; and he cried with a loud voice, saying to all the fowls that fly in the midst of heaven, Come and gather yourselves together unto the supper of the great God: 17: И увидел я одного Ангела, стоящего на солнце; и он воскликнул громким голосом, говоря всем птицам, летающим по средине неба: летите, собирайтесь на великую вечерю Божию, 19:17 καὶ εἶδον ἕνα ἄγγελον ἑστῶτα ἐν τῶ ἡλίῳ, καὶ ἔκραξεν [ἐν] φωνῇ μεγάλῃ λέγων πᾶσιν τοῖς ὀρνέοις τοῖς πετομένοις ἐν μεσουρανήματι, δεῦτε συνάχθητε εἰς τὸ δεῖπνον τὸ μέγα τοῦ θεοῦ, 19:17. 
Должны подвергнуться этому изменению и грешники, последователи антихриста. И если его изменение для праведников будет блаженным, спокойным и радостным, то для нечестивых оно будет мучительно. Ангельский призыв хищных птиц питаться трупами врагов Царства Божия есть указание на ужасы и страдания этих последних при окончательном перевороте. ###### Adam Clarke: Commentary on the Bible - 1831 19:17: An angel standing in the sun - Exceedingly luminous; every part of him emitting rays of light. From this representation, Milton has taken his description of Uriel, the angel of the sun. Paradise Lost, b. iii. l. 648 - "The Archangel Uriel, one of the seven Who, in God's presence, nearest to his throne Stands ready at command and are his eyes That run through all the heavens, or down to the earth Bears his swift errands over moist and dry, Over sea and land." All the fowls that fly - The carcasses of God's enemies shall be food for all the fowls of heaven. This is according to a Jewish tradition, Synopsis Sohar, p. 114, n. 25: "In the time when God shall execute vengeance for the people of Israel, he shall feed all the beasts of the earth for twelve months with their flesh and all the fowls for seven years." It is well known that both beasts and birds of prey are accustomed to frequent fields of battle, and live upon the slain. ###### Albert Barnes: Notes on the Bible - 1834 19:17: And I saw an angel standing in the sun - A different angel evidently from the one which had before appeared to him. The number of angels that appeared to John, as referred to in this book, was very great, and each one came on a new errand, or with a new message. Everyone must be struck with the image here. The description is as simple as it can be; and yet as sublime. The fewest words possible are used; and yet the image is distinct and clear.
A heavenly being stands in the blaze of the brightest of the orbs that God permits us here to see - yet not consumed, and himself so bright that he can be distinctly seen amidst the dazzling splendors of that luminary. It is difficult to conceive of an image more sublime than this. Why he has his place in the sun is not stated, for there does not appear to be anything more intended by this than to give grandeur and impressiveness to the scene. And he cried with a loud voice - So that all the fowls of heaven could hear. Saying to all the fowls that fly in the midst of heaven - That is, to all the birds of prey - all that feed on flesh - such as hover over a battlefield. Compare the notes on Isa 18:6; Isa 56:9. See also Jer 7:33; Jer 12:9; Ezek. 39:4-20. Come and gather yourselves together - All this imagery is taken from the idea that there would be a great slaughter, and that the bodies of the dead would be left unburied to the birds of prey. Unto the supper of the great God - As if the great God were about to give you a feast - to wit, the carcasses of those slain. It is called "his supper" because he gives it; and the image is merely that there would be a great slaughter of his foes, as is specified in the following verse. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:17: an angel: Rev 8:13, Rev 14:6; Isa 34:1-8 saying: Rev 19:21; Isa 56:9; Jer 12:9; Eze 39:17-20 ###### Geneva 1599 19:17 (17) And I saw an angel standing in the (18) sun; and he cried with a loud voice, saying to all the fowls that fly in the (19) midst of heaven, Come and gather yourselves together unto the supper of the great God; (17) The second part, as I said in See Rev_ 19:11. A reproachful calling forth of his enemies into battle: in which not themselves (for why should they be called forth by the king of the world, or provoked being his subjects? for that is not comely) but in their hearing, the birds of the air are called to eat their carcasses. (18) That is, openly, and in sight of all, as in (Num 25:4; 2Kings 12:11). (19) That is, through this inferior heaven, and which is nearer to us: a Hebrew phrase. ###### John Gill 19:17 And I saw an angel standing in the sun,.... 
By whom is meant, not the angel of the fourth vial, who poured it on the sun, taken in another sense than here, and therefore could not stand in it; nor the archangel with the last trumpet, for as yet the dead rise not, nor does the judgment come on; nor one of the ministering spirits; nor Christ himself, for he is the great God, to whose supper this angel invites, but a minister of the Gospel; or rather a set of Gospel ministers, such as in Rev_ 14:1 who may be said to stand in the sun, in like manner as the woman, the church, was seen clothed with it, Rev_ 12:1 and may denote the conspicuousness of Gospel preachers; for, as the church now will be established upon the top of the mountains, so her teachers shall not be removed into corners any more, but her eyes shall behold her teachers; and also the clear sight they shall have of the doctrines and mysteries of the Gospel, who shall now see eye to eye; and particularly the further breakings forth of the glory of the latter day, and the ensuing victory of Christ over all his enemies; and also shows the great strength of their sight, who, far from being like moles and bats, will be able both to look upon the sun, and to stand in it: and it may likewise signify the glory and majesty of Christ's kingdom; the comfortable influence of him, the sun of righteousness, who will now arise upon his people with healing in his wings; and the steadfastness of Christ's ministers to him, and his pure Gospel, and the glorious truths of it. And he cried with a loud voice; that he might be heard far and near, having something of moment and importance to publish: saying to all the fowls that fly in the midst of heaven; meaning not the barbarous nations, the Goths, and Vandals, and Saracens, which overrun and destroyed the western and eastern empires; these times are too late for them, they rose up under the six first trumpets; nor devils and unclean spirits, which will prey upon and torment antichrist, and his followers, in hell; nor military and avaricious men among Protestants, but Christian princes, and their people, are designed; they are such as are in heaven, the church, and of note there, who will share the spoils of the antichristian people, and possess their kingdoms, substance, and estates: these are invited by the angel, saying, come and gather yourselves together unto the supper of the great God. The Alexandrian copy, the Vulgate Latin, and Syriac versions, read, "to the great supper of God"; and so the Complutensian edition; not the Lord's supper, where not the flesh of men, but the flesh of Christ is eat, by faith; nor the marriage supper of the Lamb, which will be of another kind than this; nor is any spiritual repast intended, such as living by faith on Christ, and supping with him, being entertained with his promises, presence, and the discoveries of his love; but the slaughter of Christ's enemies, and his victory over them, which is his sacrifice; and these are the guests he bids, see Zeph 1:7 and whom he calls to share in the conquest and spoils, and to express their joy on this occasion: "the great God" is no other than Christ, the general of the armies in heaven, called before the Word of God, and King of kings, and Lord of lords; who will gain this victory, and will be known to be the great God by the judgment he will execute. This is a proof of our Lord's divinity; see Tit 2:13. 
###### John Wesley 19:17 Gather yourselves together to the great supper of God - As to a great feast, which the vengeance of God will soon provide; a strongly figurative expression, (taken from Ezek 39:17,) denoting the vastness of the ensuing slaughter. ###### Robert Jamieson, A. R. Fausset and David Brown 19:17 an--Greek, "one." in the sun--so as to be conspicuous in sight of the whole world. to all the fowls-- (Ezek 39:17-20). and gather yourselves--A, B, Vulgate, Syriac, Coptic, and ANDREAS read, "be gathered," omitting "and." of the great God--A, B, Vulgate, Syriac, Coptic, and ANDREAS read, "the great supper (that is, banquet) of God." `` կերիջիք")զմարմինս")թագաւորաց"), ( եւ")զմարմինս")երիվարաց")եւ")( verb.des./inf./pfv.gen.dat.abl.pl. Eng: mount, ride (7851)")ի")նոսա"), եւ")( մեծի")եւ")փոքու"): 19:18: եւ կերիջիք զմարմինս թագաւորաց, եւ զմարմինս երիվարաց՝ եւ ամենայն հեծելոց ՚ի նոսա, եւ զմարմինս ազատա՛ց եւ ծառայից՝ մեծի եւ փոքու( "Ոմանք յաւելուն. Թագաւորաց, եւ զմարմինս հզօրաց, եւ զմարմինս երիվա՛՛։"): Ոմանք յաւելուն. Թագաւորաց, եւ զմարմինս հզօրաց, եւ զմարմինս երիվա՛՛։ 18 եւ պիտի ուտէք թագաւորների մարմիններ, հզօրների մարմիններ եւ երիվարների ու նրանց վրայ նստած ամենայն հեծեալների մարմիններ, մարմիններ ազատների եւ ծառաների, մեծի եւ փոքրի»: 18 Որպէս զի ուտէք թագաւորներուն մարմինները ու հազարապետներուն մարմինները ու ձիերուն եւ անոնց վրայ հեծնողներուն մարմինները ու ամենուն մարմինները, թէ՛ ազատներուն եւ թէ՛ ծառաներուն, թէ՛ պզտիկներուն եւ թէ՛ մեծերուն»։ zohrab-1805▾ φάγητε ( ye-might-have-had-devoured ) σάρκας (to-fleshes) βασιλέων ( of-rulers-of ) καὶ (and) σάρκας (to-fleshes) χιλιάρχων (of-firsts-of-thousand) καὶ (and) σάρκας ( to-fleshes ) ἰσχυρῶν ( of-force-held ) καὶ (and) σάρκας (to-fleshes) ἵππων (of-horses) καὶ (and) τῶν (of-the-ones) καθημένων ( of-sitting-down ) ἐπ' (upon) αὐτούς, (to-them) καὶ (and) σάρκας (to-fleshes) πάντων ( of-all ) ἐλευθέρων ( of-en-freed ) τε (also) καὶ (and) δούλων (of-bondees) καὶ (and) μικρῶν ( of-small ) καὶ (and) μεγάλων . ( of-great ) 19:18. ut manducetis carnes regum et carnes tribunorum et carnes fortium et carnes equorum et sedentium in ipsis et carnes omnium liberorum ac servorum et pusillorum ac magnorumThat you may eat the flesh of kings and the flesh of tribunes and the flesh of mighty men and the flesh of horses and of them that sit on them: and the flesh of all freemen and bondmen and of little and of great. 18. that ye may eat the flesh of kings, and the flesh of captains, and the flesh of mighty men, and the flesh of horses and of them that sit thereon, and the flesh of all men, both free and bond, and small and great. 19:18. That ye may eat the flesh of kings, and the flesh of captains, and the flesh of mighty men, and the flesh of horses, and of them that sit on them, and the flesh of all [men, both] free and bond, both small and great. 19:18. so that you may eat the flesh of kings, and the flesh of tribunes, and the flesh of the strong, and the flesh of horses and those sitting on them, and the flesh of all: free and servant, small and great.” That ye may eat the flesh of kings, and the flesh of captains, and the flesh of mighty men, and the flesh of horses, and of them that sit on them, and the flesh of all [men, both] free and bond, both small and great: 18: чтобы пожрать трупы царей, трупы сильных, трупы тысяченачальников, трупы коней и сидящих на них, трупы всех свободных и рабов, и малых и великих. 
19:18 ἵνα φάγητε σάρκας βασιλέων καὶ σάρκας χιλιάρχων καὶ σάρκας ἰσχυρῶν καὶ σάρκας ἵππων καὶ τῶν καθημένων ἐπ᾽ αὐτῶν καὶ σάρκας πάντων ἐλευθέρων τε καὶ δούλων καὶ μικρῶν καὶ μεγάλων. 19:18. ἵνα (so) φάγητε (ye-might-have-had-devoured) σάρκας (to-fleshes) βασιλέων (of-rulers-of) καὶ (and) σάρκας (to-fleshes) χιλιάρχων (of-firsts-of-thousand) καὶ (and) σάρκας (to-fleshes) ἰσχυρῶν (of-force-held) καὶ (and) σάρκας (to-fleshes) ἵππων (of-horses) καὶ (and) τῶν (of-the-ones) καθημένων (of-sitting-down) ἐπ' (upon) αὐτούς, (to-them) καὶ (and) σάρκας (to-fleshes) πάντων (of-all) ἐλευθέρων (of-en-freed) τε (also) καὶ (and) δούλων (of-bondees) καὶ (and) μικρῶν (of-small) καὶ (and) μεγάλων. (of-great) 19:18. ut manducetis carnes regum et carnes tribunorum et carnes fortium et carnes equorum et sedentium in ipsis et carnes omnium liberorum ac servorum et pusillorum ac magnorum That you may eat the flesh of kings and the flesh of tribunes and the flesh of mighty men and the flesh of horses and of them that sit on them: and the flesh of all freemen and bondmen and of little and of great. 18. thatyemayeatthefleshofkings, andthefleshofcaptains, andthefleshofmightymen, andthefleshofhorsesandofthemthatsitthereon, andthefleshofallmen, bothfreeandbond, andsmallandgreat. 19:18. That ye may eat the flesh of kings, and the flesh of captains, and the flesh of mighty men, and the flesh of horses, and of them that sit on them, and the flesh of all [men, both] free and bond, both small and great. 19:18. so that you may eat the flesh of kings, and the flesh of tribunes, and the flesh of the strong, and the flesh of horses and those sitting on them, and the flesh of all: free and servant, small and great.” ru▾. Indeed, it is common in most armies that a considerable portion of the enlistments are from those in early life; and besides this, it is usual to employ mere boys on various services about a camp. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:18: ye: Deu 28:26; Sa1 17:44, Sa1 17:46; Psa 110:5, Psa 110:6; Jer 7:33, Jer 16:4, Jer 19:7, Jer 34:20; Eze 29:5, Eze 39:18-20; Mat 24:28; Luk 17:37 of all: Rev 6:15, Rev 13:16 ###### John Gill 19:18 That ye may eat the flesh of kings,.... Of the earth, and of the whole world, who will fall in the battle at Armageddon; see Rev_ 16:14 and design either those antichristian kings and states, which have drunk of the wine of Rome's fornication, and will bewail the downfall of Babylon, Rev_ 18:3 or those Pagan kings which will be drawn in by the pope and Turk, to aid and assist in this war, or both, when, upon the defeat of them, the Christian princes will seize upon, possess, and enjoy their kingdoms and dominions, which is meant by eating their flesh; See Gill on Rev_ 17:16. And the flesh of captains; of their generals, and officers under them: and the flesh of mighty men; the common soldiers, who are so called, Jer 46:5 who will be rifled and plundered: and the flesh of horses, and of them that sit on them; whose rich trappings and clothes will be taken away from them: and the flesh of all men, both free and bond, both small and great; that is, the substance of all the inhabitants of the antichristian empire, both eastern and western, of whatsoever rank, state, and condition they be, Rev_ 13:16. 
The metaphors are taken from, and there is a manifest allusion to, Ezek 39:17 and the whole denotes the entire slaughter and utter ruin of the whole antichristian army, and the certainty of Christ's victory over it before hand; and also the destruction of all that are the followers of antichrist, throughout his dominions, which will now wholly fall into the hands of the saints, and be enjoyed by them. ###### Robert Jamieson, A. R. Fausset and David Brown 19:18 Contrast with this "supper," Rev_ 19:17-18, the marriage supper of the Lamb, Rev_ 19:9. captains--Greek, "captains of thousands," that is, chief captains. The "kings" are "the ten" who "give their power unto the beast." free and bond--specified in Rev_ 13:16, as "receiving the mark of the beast." The repetition of flesh (in the Greek it is plural: masses of flesh) five times in this verse, marks the gross carnality of the followers of the beast. Again, the giving of their flesh to the fowls to eat, is a righteous retribution for their not suffering the dead bodies of Christ's witnesses to be put in graves. տեսի")զգազանն")եւ")զթագաւորս")երկրի")եւ")զզօրս")նոցա")ժողովեալս") առնելպատերազմընդայնմորհեծեալնէրիձինսպիտակեւընդզօրսնորա: 19:19: Եւ տեսի զգազանն՝ եւ զթագաւորս երկրի՝ եւ զզօրս նոցա ժողովեալս առնել պատերազմ ընդ այնմ, որ հեծեալն էր ՚ի ձին սպիտակ՝ եւ ընդ զօրս նորա: 19 Ապա տեսայ գազանին, երկրի թագաւորներին եւ նրանց զօրքերին՝ հաւաքուած, որպէսզի պատերազմ մղեն ընդդէմ նրա, ով հեծել էր սպիտակ ձիու վրայ, նաեւ ընդդէմ նրա զօրքի: 19 Տեսայ գազանը ու երկրի թագաւորները ու անոնց զօրքերը մէկտեղ հաւաքուած պատերազմ ընելու անոր հետ՝ որ ձիուն վրայ հեծեր էր ու անոր զօրքերուն հետ։ zohrab-1805▾eastern-1994▾western am▾ 19:1919: И увидел я зверя и царей земных и воинства их, собранные, чтобы сразиться с Сидящим на коне и с воинством Его. 19:19 καὶ εἶδον τὸ θηρίον καὶ τοὺς βασιλεῖς τῆς γῆς καὶ τὰ στρατεύματα αὐτῶν συνηγμένα ποιῆσαι τὸν πόλεμον μετὰ τοῦ καθημένου ἐπὶ τοῦ ἵππου καὶ μετὰ τοῦ στρατεύματος αὐτοῦ. 19:19. Καὶ (And) εἶδον (I-had-seen) τὸ (to-the-one) θηρίον (to-a-beastlet) καὶ (and) τους ( to-the-ones ) βασιλεῖς ( to-rulers-of ) τῆς ( of-the-one ) γῆς ( of-a-soil ) καὶ (and) τὰ (to-the-ones) στρατεύματα (to-amassings-to) αὐτῶν (of-them) συνηγμένα ( to-having-had-come-to-be-led-together ) ποιῆσαι (to-have-done-unto) τὸν (to-the-one) πόλεμον (to-a-war) μετὰ (with) τοῦ (of-the-one) καθημένου ( of-sitting-down ) ἐπὶ (upon) τοῦ (of-the-one) ἵππου (of-a-horse) καὶ (and) μετὰ (with) τοῦ (of-the-one) στρατεύματος (of-an-amassing-to) αὐτοῦ. (of-it) 19:19. et vidi bestiam et reges terrae et exercitus eorum congregatos ad faciendum proelium cum illo qui sedebat in equo et cum exercitu eiusAnd I saw the beast and the kings of the earth and their armies, gathered together to make war with him that sat upon the horse and with his army. 19. And I saw the beast, and the kings of the earth, and their armies, gathered together to make war against him that sat upon the horse, and against his army. 19:19. And I saw the beast, and the kings of the earth, and their armies, gathered together to make war against him that sat on the horse, and against his army. 19:19. And I saw the beast and the kings of the earth and their armies, having been gathered together to do battle against him who was sitting upon the horse, and against his army. 
And I saw the beast, and the kings of the earth, and their armies, gathered together to make war against him that sat on the horse, and against his army: 19: И увидел я зверя и царей земных и воинства их, собранные, чтобы сразиться с Сидящим на коне и с воинством Его. 19:19 καὶ εἶδον τὸ θηρίον καὶ τοὺς βασιλεῖς τῆς γῆς καὶ τὰ στρατεύματα αὐτῶν συνηγμένα ποιῆσαι τὸν πόλεμον μετὰ τοῦ καθημένου ἐπὶ τοῦ ἵππου καὶ μετὰ τοῦ στρατεύματος αὐτοῦ. 19:19. Καὶ (And) εἶδον (I-had-seen) τὸ (to-the-one) θηρίον (to-a-beastlet) καὶ (and) τους (to-the-ones) βασιλεῖς (to-rulers-of) τῆς (of-the-one) γῆς (of-a-soil) καὶ (and) τὰ (to-the-ones) στρατεύματα (to-amassings-to) αὐτῶν (of-them) συνηγμένα (to-having-had-come-to-be-led-together) ποιῆσαι (to-have-done-unto) τὸν (to-the-one) πόλεμον (to-a-war) μετὰ (with) τοῦ (of-the-one) καθημένου (of-sitting-down) ἐπὶ (upon) τοῦ (of-the-one) ἵππου (of-a-horse) καὶ (and) μετὰ (with) τοῦ (of-the-one) στρατεύματος (of-an-amassing-to) αὐτοῦ. (of-it) 19:19. et vidi bestiam et reges terrae et exercitus eorum congregatos ad faciendum proelium cum illo qui sedebat in equo et cum exercitu eius And I saw the beast and the kings of the earth and their armies, gathered together to make war with him that sat upon the horse and with his army. 19. AndIsawthebeast, andthekingsoftheearth, andtheirarmies, gatheredtogethertomakewaragainsthimthatsatuponthehorse, andagainsthisarmy. 19:19. And I saw the beast, and the kings of the earth, and their armies, gathered together to make war against him that sat on the horse, and against his army. 19:19. And I saw the beast and the kings of the earth and their armies, having been gathered together to do battle against him who was sitting upon the horse, and against his army. ru▾el▾el-en-gloss▾vulgate▾erva_1895▾kjv_1900▾catholic_pdv▾ jfb▾jw▾jg▾gnv▾tr▾ab▾ac▾tb▾all ▾ ###### А. П. Лопухин: Tолковая Библия или комментарий на все книги Св.Писания Ветхого и Нового Заветов - 1903-1914 19: Под воинством 19: ст. нужно разуметь крайнее напряжение и ycиление нечестия и боговраждебности пред пришествием Господа. Нечестивые уподобятся войску, выступившему на сражение и вызывающему Бога на борьбу с собою. Но коротка развязка этой долгой истории нечестия. Возмездие началось с тех, кто были виновниками человеческого нечестия последнего времени, - с антихриста и лжепророка. А так как их нечестие и их заслуженность вечных мучений будут для всех несомненны, то для них не будет даже и суда - они без суда будут живыми брошены в озеро огненное, геенну, на вечные мучения. Зверь-антихрист и его лжепророк первые получат возмездие, будут уничтожены духом уст Божиих и удалены с глаз остальных людей, собранных Господом для последнего суда. Страшное, мучительное изменение произойдет по приговору Божию: нечестивые (прочие) будут убиты мечом сидящего на коне, т.е. по действию всемогущества Божия и суда. Их прежние тела сделаются пищею птиц; в мучительном процессе они переродятся в новые [Андрей Кесар.], которые бы соответствовали предстоящим вечным мучениям, вечному ощущению непрерывающихся болей. ###### Adam Clarke: Commentary on the Bible - 1831 19:19: I saw the beast - See the notes on Revelation 12 (note), Revelation 13 (note) and Revelation 17 (note). ###### Albert Barnes: Notes on the Bible - 1834 19:19: And I saw the beast - notes on Rev 13:1, Rev 13:11. Compare Rev 17:13. 
And the kings of the earth, and their armies, gathered together - There is allusion here to the same assembling of hostile forces which is described in Rev 16:13-14, for the great decisive battle that is to determine the destiny of the world - the question whether the Messiah or antichrist shall reign. There can be no doubt that the writer in these passages designed to refer to the same events - the still future scenes that are to occur when the Roman, the pagan, and the Muhammedan powers shall be aroused to make common cause against the true religion, and shall stake all on the issue of the great conflict. See the notes on Rev 16:13-14. Against him that sat on the horse - The Messiah - the Son of God. notes on Rev 19:11. And against his army - The hosts that are associated with him - his redeemed people. See the notes on Rev 19:14. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:19: I saw: Rev 13:1-10, Rev 14:9, Rev 16:14, Rev 16:16, Rev 17:12-14, Rev 18:9; Eze 38:8-18; Dan 7:21-26; Dan 8:25, Dan 11:40-45; Joe 3:9-14 him: Rev 19:11-14 ###### Geneva 1599 19:19 (20) And I saw the beast, and the kings of the earth, and their armies, gathered together to make war against him that sat on the horse, and against his army. (20) The third part (as was said in) (Rev_ 19:11) by the victory obtained by Christ. Two things pertain to this: his fighting with the beast and his forces, in this verse: and the event most magnificent, described after the manner of men, in the verses following. All these things are plain. ###### John Gill 19:19 And l saw the beast,.... Not the devil, for after this he is taken and bound for a thousand years, and then loosed, and laid hold on again, and cast into the lake of fire; not but that this war will be by his instigation, and under his influence, Rev_ 16:14 not the Roman Pagan empire, which has been destroyed long ago, under the sixth seal, and was the issue of the battle between Michael and his angels, and the dragon and his; but the antichristian civil powers, or antichrist in his civil capacity; and which, though it may chiefly regard the western antichrist, and the remains of the Latin idolatry, yet may take in the eastern antichrist, or the Mahometan powers, which may all join together in this battle; the beast will survive for a while the downfall of his seat, Babylon or Rome. And the kings of the earth; these, as they stand distinguished from the beast, or the antichristian kings, and civil states, may design as many of the Pagan kings and princes, as the pope and Turk by their emissaries can persuade to assist them in this war; See Gill on Rev_ 16:14. And their armies gathered together; at Armageddon, or in the valley of Jehoshaphat, Rev_ 16:16 to make war against him that sat on the horse; the white horse, Rev_ 19:11 as the Arabic and Ethiopic versions read, which must be downright folly and madness, since he is the Word of God, the great God, the King of kings, and Lord of lords: and against his army, Rev_ 19:14 who, though unarmed, and only clothed in fine linen, have nothing to fear, since Christ, the Captain of their salvation, is at the head of them. ###### John Wesley 19:19 And I saw the kings of the earth - The ten kings mentioned Rev_ 17:12; who had now drawn the other kings of the earth to them, whether Popish, Mahometan, or pagan. Gathered together to make war with him that sat upon the horse - All beings, good and evil, visible and invisible, will be concerned in this grand contest. See Zech 14:1, &c. 
###### Robert Jamieson, A. R. Fausset and David Brown 19:19 gathered together--at Armageddon, under the sixth vial. For "their armies" in B and ANDREAS, there is found "His armies" in A. war--so ANDREAS. But A and B read, "the war," namely, that foretold, Rev_ 16:14; Rev_ 17:4. 19:2019:20: Եւ ըմբռնեցա՛ւ գազանն, եւ որ ընդ նմա սուտ մարգարէն, որ առնէր զնշանս առաջի նորա, եւ զորս մոլորեցոյց եւ ե՛տ առնուլ զդրոշմ գազանին՝ եւ զերկրպագուս պատկերի նորա. եւ կենդանւո՛յն արկին զնոսա ՚ի լիճ հրոյ այրեցելոյ ծծըմբով. Ոմանք. Եւ ընդ նմա սուտ։ Յօրինակին. Արկին զնա ՚ի լիճ։ 20 Եւ բռնուեց գազանը ու նրա հետ եղող սուտ մարգարէն, որ նրա առաջ նշաններ էր գործում. դրանցով մոլորեցնում էր նրանց, որոնց ստիպել էր առնել գազանի դրոշմը եւ երկրպագուներ էր դարձրել իր արձանին: Եւ նրանց կենդանի գցեցին ծծմբով այրուող կրակէ լճի մէջ: 20 Գազանը բռնուեցաւ ու անոր հետ սուտ մարգարէն, որ անոր առջեւ նշաններ ըրաւ, որոնցմով մոլորեցուց գազանին դրոշմը ընդունողները եւ անոր պատկերին երկրպագութիւն ընողները։ Երկուքն ալ ողջ ողջ ծծումբով վառուած կրակին լիճը ձգուեցան։ Եւըմբռնեցաւգազաննեւորընդնմասուտմարգարէնորառնէրզնշանսնառաջինորա, եւզորսմոլորեցոյցեւետառնուլզդրոշմ")գազանին"), եւ")զերկրպագուս")պատկերի")նորա"). եւ")կենդանւոյն")( իլիճհրոյայրեցելոյծծմբով: 19:20: Եւ ըմբռնեցա՛ւ գազանն, եւ որ ընդ նմա սուտ մարգարէն, որ առնէր զնշանս առաջի նորա, եւ զորս մոլորեցոյց եւ ե՛տ առնուլ զդրոշմ գազանին՝ եւ զերկրպագուս պատկերի նորա. եւ կենդանւո՛յն արկին զնոսա ՚ի լիճ հրոյ այրեցելոյ ծծըմբով. Ոմանք. Եւ ընդ նմա սուտ։ Յօրինակին. Արկին զնա ՚ի լիճ։ 20 Եւ բռնուեց գազանը ու նրա հետ եղող սուտ մարգարէն, որ նրա առաջ նշաններ էր գործում. դրանցով մոլորեցնում էր նրանց, որոնց ստիպել էր առնել գազանի դրոշմը եւ երկրպագուներ էր դարձրել իր արձանին: Եւ նրանց կենդանի գցեցին ծծմբով այրուող կրակէ լճի մէջ: 20 Գազանը բռնուեցաւ ու անոր հետ սուտ մարգարէն, որ անոր առջեւ նշաններ ըրաւ, որոնցմով մոլորեցուց գազանին դրոշմը ընդունողները եւ անոր պատկերին երկրպագութիւն ընողները։ Երկուքն ալ ողջ ողջ ծծումբով վառուած կրակին լիճը ձգուեցան։ zohrab-1805▾eastern-1994▾western am▾ 19:2020: И схвачен был зверь и с ним лжепророк, производивший чудеса пред ним, которыми он обольстил принявших начертание зверя и поклоняющихся его изображению: оба живые брошены в озеро огненное, горящее серою; 19:20 καὶ ἐπιάσθη τὸ θηρίον καὶ μετ᾽ αὐτοῦ ὁ ψευδοπροφήτης ὁ ποιήσας τὰ σημεῖα ἐνώπιον αὐτοῦ, ἐν οἷς ἐπλάνησεν τοὺς λαβόντας τὸ χάραγμα τοῦ θηρίου καὶ τοὺς προσκυνοῦντας τῇ εἰκόνι αὐτοῦ· ζῶντες ἐβλήθησαν οἱ δύο εἰς τὴν λίμνην τοῦ πυρὸς τῆς καιομένης ἐν θείῳ. 19:20. καὶ (And) ἐπιάσθη (it-was-squeezed-to) τὸ (the-one) θηρίον (a-beastlet,"καὶ (and) μετ' (with) αὐτοῦ (of-it) ὁ (the-one) ψευδοπροφήτης (a-false-declarer-before) ὁ (the-one) ποιήσας (having-done-unto) τὰ (to-the-ones) σημεῖα (to-signlets-of) ἐνώπιον (in-looked) αὐτοῦ, (of-it) ἐν (in) οἷς ( unto-which ) ἐπλάνησεν (it-wandered-unto) τοὺς (to-the-ones) λαβόντας ( to-having-had-taken ) τὸ (to-the-one) χάραγμα (to-a-graving-to) τοῦ (of-the-one) θηρίου (of-a-beastlet) καὶ (and) τοὺς (to-the-ones) προσκυνοῦντας ( to-kissing-toward-unto ) τῇ (unto-the-one) εἰκόνι (unto-a-resemblance) αὐτοῦ: (of-it) ζῶντες ( lifing-unto ) ἐβλήθησαν (they-were-casted) οἱ (the-ones) δύο (two) εἰς (into) τὴν (to-the-one) λίμνην (to-a-lake) τοῦ (of-the-one) πυρὸς (of-a-fire) τῆς (of-the-one) καιομένης ( of-being-burned ) ἐν ( in ) θείῳ . ( unto-a-sulphur ) 19:20. 
et adprehensa est bestia et cum illo pseudopropheta qui fecit signa coram ipso quibus seduxit eos qui acceperunt caracterem bestiae qui et adorant imaginem eius vivi missi sunt hii duo in stagnum ignis ardentis sulphureAnd the beast was taken, and with him the false prophet who wrought signs before him, wherewith he seduced them who received the character of the beast and who adored his image. These two were cast alive into the pool of fire burning with brimstone. 20. And the beast was taken, and with him the false prophet that wrought the signs in his sight, wherewith he deceived them that had received the mark of the beast, and them that worshipped his image: they twain were cast alive into the lake of fire that burneth with brimstone: 19:20. And the beast was taken, and with him the false prophet that wrought miracles before him, with which he deceived them that had received the mark of the beast, and them that worshipped his image. These both were cast alive into a lake of fire burning with brimstone. 19:20. And the beast was apprehended, and with him the false prophetess, who in his presence caused the signs, by which she seduced those who accepted the character of the beast and who worshiped his image. These two were cast alive into the pool of fire burning with sulphur. And the beast was taken, and with him the false prophet that wrought miracles before him, with which he deceived them that had received the mark of the beast, and them that worshipped his image. These both were cast alive into a lake of fire burning with brimstone: 20: И схвачен был зверь и с ним лжепророк, производивший чудеса пред ним, которыми он обольстил принявших начертание зверя и поклоняющихся его изображению: оба живые брошены в озеро огненное, горящее серою; 19:20 καὶ ἐπιάσθη τὸ θηρίον καὶ μετ᾽ αὐτοῦ ὁ ψευδοπροφήτης ὁ ποιήσας τὰ σημεῖα ἐνώπιον αὐτοῦ, ἐν οἷς ἐπλάνησεν τοὺς λαβόντας τὸ χάραγμα τοῦ θηρίου καὶ τοὺς προσκυνοῦντας τῇ εἰκόνι αὐτοῦ· ζῶντες ἐβλήθησαν οἱ δύο εἰς τὴν λίμνην τοῦ πυρὸς τῆς καιομένης ἐν θείῳ. 19:20. καὶ (And) ἐπιάσθη (it-was-squeezed-to) τὸ (the-one) θηρίον (a-beastlet,"καὶ (and) μετ' (with) αὐτοῦ (of-it) ὁ (the-one) ψευδοπροφήτης (a-false-declarer-before) ὁ (the-one) ποιήσας (having-done-unto) τὰ (to-the-ones) σημεῖα (to-signlets-of) ἐνώπιον (in-looked) αὐτοῦ, (of-it) ἐν (in) οἷς (unto-which) ἐπλάνησεν (it-wandered-unto) τοὺς (to-the-ones) λαβόντας (to-having-had-taken) τὸ (to-the-one) χάραγμα (to-a-graving-to) τοῦ (of-the-one) θηρίου (of-a-beastlet) καὶ (and) τοὺς (to-the-ones) προσκυνοῦντας (to-kissing-toward-unto) τῇ (unto-the-one) εἰκόνι (unto-a-resemblance) αὐτοῦ: (of-it) ζῶντες (lifing-unto) ἐβλήθησαν (they-were-casted) οἱ (the-ones) δύο (two) εἰς (into) τὴν (to-the-one) λίμνην (to-a-lake) τοῦ (of-the-one) πυρὸς (of-a-fire) τῆς (of-the-one) καιομένης (of-being-burned) ἐν (in) θείῳ. (unto-a-sulphur) 19:20. et adprehensa est bestia et cum illo pseudopropheta qui fecit signa coram ipso quibus seduxit eos qui acceperunt caracterem bestiae qui et adorant imaginem eius vivi missi sunt hii duo in stagnum ignis ardentis sulphure And the beast was taken, and with him the false prophet who wrought signs before him, wherewith he seduced them who received the character of the beast and who adored his image. These two were cast alive into the pool of fire burning with brimstone. 20. 
Andthebeastwastaken, andwithhimthefalseprophetthatwroughtthesignsinhissight, wherewithhedeceivedthemthathadreceivedthemarkofthebeast, andthemthatworshippedhisimage: theytwainwerecastaliveintothelakeoffirethatburnethwithbrimstone: 19:20. And the beast was taken, and with him the false prophet that wrought miracles before him, with which he deceived them that had received the mark of the beast, and them that worshipped his image. These both were cast alive into a lake of fire burning with brimstone. 19:20. And the beast was apprehended, and with him the false prophetess, who in his presence caused the signs, by which she seduced those who accepted the character of the beast and who worshiped his image. These two were cast alive into the pool of fire burning with sulphur. ru▾el▾el-en-gloss▾vulgate▾erva_1895▾kjv_1900▾catholic_pdv▾ jfb▾jw▾jg▾gnv▾tr▾ab▾ac▾all ▾ ###### Adam Clarke: Commentary on the Bible - 1831 19:20: And the beast was taken, and - the false prophet - See the notes on Rev 17:8, etc. That worshipped his image - The beast has been represented as the Latin empire; the image of the beast, the popes of Rome; and the false prophet, the papal clergy. Were cast alive into a lake of fire - Were discomfited when alive - in the zenith of their power, and destroyed with an utter destruction. ###### Albert Barnes: Notes on the Bible - 1834 19:20: And the beast was taken - That is, was taken alive, to be thrown into the lake of fire. The hosts were slain Rev 19:21, but the leaders were made prisoners of war. The general idea is, that these armies were overcome, and that the Messiah was victorious; but there is a propriety in the representation here that the leaders - the authors of the war should be taken captive, and reserved for severer punishment than death on the battlefield would be - for they had stirred up their hosts, and summoned these armies to make rebellion against the Messiah. The beast here, as all along, refers to the papal power; and the idea is that of its complete and utter overthrow, as if the leader of an army were taken captive and tormented in burning flames, and all his followers were cut down on the field of battle. And with him the false prophet - As they had been practically associated together, there was a propriety that they should share the same fate. In regard to the false prophet, and the nature of this alliance, see the notes on Rev 16:13. That wrought miracles before him - That is, the false prophet had been united with the beast in deceiving the nations of the earth. See the notes on Rev 16:14. With which he deceived them that had received the mark of the beast - notes on Rev 13:16-18. By these arts they had been deceived - that is, they had been led into the alliance, and had been sustained in their opposition to the truth. The whole representation is that of an alliance to pRev_ent the spread of the true religion, as if the papacy and Mohammedanism were combined, and the one was sustained by the pretended miracles of the other. There would be a practical array against the reign of the Son of God, as if these great powers should act in concert, and as if the special claims which each set up in behalf of its own divine origin became a claim which went to support the whole combined organization. These both were cast alive into a lake of fire - The beast and the false prophet. 
That is, the overthrow will be as signal, and the destruction as complete, as if the leaders of the combined hosts should be taken alive, and thrown into a pit or lake that burns with an intense heat. There is no necessity for supposing that this is to be literally inflicted - for the whole scene is symbolical - meaning that the destruction of these powers would be as complete as if they were thrown into such a burning lake. Compare the notes on Rev 14:10-11. Burning with brimstone - Sulphur - the usual expression to denote intense heat, and especially as referring to the punishment of the wicked. See the notes on Rev 14:10. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:20: the beast: Rev 19:19, Rev 13:1-8, Rev 13:18, Rev 17:3-8, Rev 17:12; Dan 2:40-45, Dan 7:7, Dan 7:12-14, Dan 7:19-21, Dan 7:23 the false: Rev 13:11-17, Rev 16:13, Rev 16:14, Rev 20:10, Rev 22:15; Dan 7:8-11, Dan 7:24-26, Dan 8:24, Dan 8:26; Th2 2:8-11 These: Rev 20:10, Rev 20:14; Dan 7:11, Dan 11:45 burning: Rev 14:10, Rev 21:8; Gen 19:24; Deu 29:23; Job 18:15; Psa 11:6; Isa 30:33, Isa 34:9; Eze 38:22 ###### Geneva 1599 19:20 And the beast (21) was taken, and with him (22) the false prophet that wrought miracles before him, with which he deceived them that had received the mark of the beast, and them that worshipped his image. These both were cast alive into a lake of fire burning with brimstone. (21) Namely, that beast with seven heads; (Rev_ 13:1; Rev_ 17:3). (22) That is, that beast with two heads; (Rev_ 13:11; Rev_ 16:14). ###### John Gill 19:20 And the beast was taken,.... At the first onset, as soon as ever the battle begun, and carried away captive, as it was threatened he should, Rev_ 13:10 and this by Christ, who will destroy him with the breath of his mouth, and the brightness of his coming, Th2 2:8. And with him the false prophet; that is, the second beast in Rev_ 13:11 as appears by the characters by which he is here described, or antichrist in his ecclesiastic capacity; that is, the pope with his clergy: and indeed, when the antichristian princes and states are taken and destroyed, which are the support of the Papacy, that must in consequence sink, be crushed and ruined; the Alexandrian copy reads, "and they that are with him, the false prophet Jezebel"; the false prophetess and her children, who will now be killed with death, Rev_ 2:20 that wrought miracles before him: the beast, or the civil antichristian powers, even signs and lying wonders, which were approved of, and applauded by him, by which, believing them, he was confirmed in antichristian principles: with which he deceived them that had the mark of the beast, and them that worshipped his image; the several subjects of the antichristian states; see Rev_ 13:14 but none of God's elect, who cannot be seduced by such means, Mt 24:24. These both were cast alive into a lake of fire burning with brimstone; which is the second death, Rev_ 21:8. The severe punishment of antichrist, considered in both his capacities, civil and ecclesiastic, is expressed by being "cast into a lake of fire", not material fire, but the wrath of God, which will be poured out like fire, and will be intolerable; and by this lake "burning with brimstone", which, giving a nauseous stench, aggravates the punishment. Says R. Joden (t), when a man smells the smell of brimstone, why does his breath draw back at it (or catch)? because he knows he shall be judged with it in the world to come. 
The allusion seems to be to the place where Sodom and Gomorrah stood, which is become a sulphurous lake, and is an emblem of the vengeance of eternal fire, Jude 1:7 and these two are said to be "cast alive" into it, which shows that they will not only suffer a corporeal death at this battle, and in the issue of it, but will be destroyed, body and soul, in hell: the phrase denotes the awfulness, inevitableness, and severity of their punishment; there seems to be some reference to the earth's swallowing up Korah and his company alive, Num 16:33 see Dan 7:11. (t) Bereshit Rabba, sect. 51. fol. 45. 4. ###### John Wesley 19:20 The false prophet, who had wrought the miracles before him - And therefore shared in his punishment; these two ungodly men were cast alive - Without undergoing bodily death. Into the lake of fire - And that before the devil himself, Rev_ 20:10. Here is the last of the beast. After several repeated strokes of omnipotence, he is gone alive into hell. There were two that went alive into heaven; perhaps there are two that go alive into hell. It may be, Enoch and Elijah entered at once into glory, without first waiting in paradise; the beast and the false prophet plunge at once into the extremest degree of torment, without being reserved in chains of darkness till the judgment of the great day. Surely, none but the beast of Rome would have hardened himself thus against the God he pretended to adore, or refused to have repented under such dreadful, repeated visitations! Well is he styled a beast, from his carnal and vile affections; a wild beast, from his savage and cruel spirit! The rest were slain - A like difference is afterwards made between the devil, and Gog and Magog, Rev_ 20:9-10. ###### Robert Jamieson, A. R. Fausset and David Brown 19:20 and with him the false prophet--A reads, "and those with him." B reads, "and he who was with him, the false prophet." miracles--Greek, "the miracles" (literally, "signs") recorded already (Rev_ 13:14) as wrought by the second beast before (literally, 'in sight of') the first beast. Hence it follows the second beast is identical with the false prophet. Many expositors represent the first beast to be the secular, the second beast to be the ecclesiastical power of Rome; and account for the change of title for the latter from the "other beast" to the "false prophet," is because by the judgment on the harlot, the ecclesiastical power will then retain nothing of its former character save the power to deceive. I think it not unlikely that the false prophet will be the successor of the spiritual pretensions of the papacy; while the beast in its last form as the fully revealed Antichrist will be the secular representative and embodiment of the fourth world kingdom, Rome, in its last form of intensified opposition to God. Compare with this prophecy, Eze. 38:1-39:29; Dan 2:34-35, Dan 2:44; Dan 11:44-45; Dan 12:1; Joel 3:9-17; Zec. 12:1-14:21. Daniel (Dan 7:8) makes no mention of the second beast, or false prophet, but mentions that "the little horn" has "the eyes of a man," that is, cunning and intellectual culture; this is not a feature of the first beast in the thirteenth chapter, but is expressed by the Apocalyptic "false prophet," the embodiment of man's unsanctified knowledge, and the subtlety of the old serpent. The first beast is a political power; the second is a spiritual power--the power of ideas. But both are beasts, the worldly Antichristian wisdom serving the worldly Antichristian power. The dragon is both lion and serpent. 
As the first law in God's moral government is that "judgment should begin at the house of God," and be executed on the harlot, the faithless Church, by the world power with which she had committed spiritual adultery, so it is a second law that the world power, after having served as God's instrument of punishment, is itself punished. As the harlot is judged by the beast and the ten kings, so these are destroyed by the Lord Himself coming in person. So Zep. 1:1-18 compared with Zeph 2:1-15. And Jeremiah, after denouncing Jerusalem's judgment by Babylon, ends with denouncing Babylon's own doom. Between the judgment on the harlot and the Lord's destruction of the beast, will intervene that season in which earthly-mindedness will reach its culmination, and Antichristianity triumph for its short three and a half days during which the two witnesses lie dead. Then shall the Church be ripe for her glorification, the Antichristian world for destruction. The world at the highest development of its material and spiritual power is but a decorated carcass round which the eagles gather. It is characteristic that Antichrist and his kings, in their blindness, imagine that they can wage war against the King of heaven with earthly hosts; herein is shown the extreme folly of Babylonian confusion. The Lord's mere appearance, without any actual encounter, shows Antichrist his nothingness; compare the effect of Jesus' appearance even in His humiliation, Jn 18:6 [AUBERLEN]. had received--rather as Greek, "received," once for all. them; that worshipped--literally, "them worshipping" not an act once for all done, as the "received" implies, but those in the habit of "worshipping." These both were cast . . . into a lake--Greek, ". . . the lake of fire," Gehenna. Satan is subsequently cast into it, at the close of the outbreak which succeeds the millennium (Rev_ 20:10). Then Death and Hell, as well those not found at the general judgment "written in the book of life"; this constitutes "the second death." alive--a living death; not mere annihilation. "Their worm dieth not, their fire is not quenched." 19:2119:21: եւ այլքն մեռա՛ն ՚ի սրոյ հեծելոյն ՚ի վերայ ձիոյն՝ յորոյ բերանոյ ելանէր սուրն. եւ ամենայն թռչունք յագեցա՛ն ՚ի մարմնոց նոցա: 21 Մնացածները մեռան ձիու վրայ հեծնողի սրից, որը ելնում էր նրա բերանից: Եւ բոլոր թռչունները յագեցան նրանց մարմիններից: 21 Եւ ուրիշներ ալ սպաննուեցան ձիուն վրայ հեծնողին սուրէն, որ անոր բերնէն կ’ելլէր ու բոլոր թռչունները անոնց մարմիններէն կշտացան։ Եւայլքնմեռանիսրոյհեծելոյնիվերայձիոյն, յորոյբերանոյելանէրսուրն. եւամենայնթռչունքյագեցանիմարմնոցնոցա: 19:21: եւ այլքն մեռա՛ն ՚ի սրոյ հեծելոյն ՚ի վերայ ձիոյն՝ յորոյ բերանոյ ելանէր սուրն. եւ ամենայն թռչունք յագեցա՛ն ՚ի մարմնոց նոցա: 21 Մնացածները մեռան ձիու վրայ հեծնողի սրից, որը ելնում էր նրա բերանից: Եւ բոլոր թռչունները յագեցան նրանց մարմիններից: 21 Եւ ուրիշներ ալ սպաննուեցան ձիուն վրայ հեծնողին սուրէն, որ անոր բերնէն կ’ելլէր ու բոլոր թռչունները անոնց մարմիններէն կշտացան։ zohrab-1805▾eastern-1994▾western am▾ 19:2121: а прочие убиты мечом Сидящего на коне, исходящим из уст Его, и все птицы напитались их трупами. 19:21 καὶ οἱ λοιποὶ ἀπεκτάνθησαν ἐν τῇ ῥομφαίᾳ τοῦ καθημένου ἐπὶ τοῦ ἵππου τῇ ἐξελθούσῃ ἐκ τοῦ στόματος αὐτοῦ, καὶ πάντα τὰ ὄρνεα ἐχορτάσθησαν ἐκ τῶν σαρκῶν αὐτῶν. 19:21. 
καὶ (And) οἱ (the-ones) λοιποὶ ( remaindered ) ἀπεκτάνθησαν (they-were-killed-off) ἐν (in) τῇ (unto-the-one) ῥομφαίᾳ (unto-a-sword) τοῦ (of-the-one) καθημένου ( of-sitting-down ) ἐπὶ (upon) τοῦ (of-the-one) ἵππου (of-a-horse) τῇ (unto-the-one) ἐξελθούσῃ (unto-having-had-came-out) ἐκ (out) τοῦ (of-the-one) στόματος (of-a-mouth) αὐτοῦ, (of-it,"καὶ (and) πάντα ( all ) τὰ ( the-ones ) ὄρνεα ( en-birdings ) ἐχορτάσθησαν ( they-were-victualaged-to ) ἐκ ( out ) τῶν ( of-the-ones ) σαρκῶν ( of-fleshes ) αὐτῶν. (of-them) 19:21. et ceteri occisi sunt in gladio sedentis super equum qui procedit de ore ipsius et omnes aves saturatae sunt carnibus eorumAnd the rest were slain by the sword of him that sitteth upon the horse, which proceedeth out of his mouth: and all the birds were filled with their flesh. 21. and the rest were killed with the sword of him that sat upon the horse, which came forth out of his mouth: and all the birds were filled with their flesh. 19:21. And the remnant were slain with the sword of him that sat upon the horse, which [sword] proceeded out of his mouth: and all the fowls were filled with their flesh. 19:21. And the others were slain by the sword that proceeds from the mouth of him who was sitting upon the horse. And all the birds were sated with their flesh. And the remnant were slain with the sword of him that sat upon the horse, which [sword] proceeded out of his mouth: and all the fowls were filled with their flesh: 21: а прочие убиты мечом Сидящего на коне, исходящим из уст Его, и все птицы напитались их трупами. 19:21 καὶ οἱ λοιποὶ ἀπεκτάνθησαν ἐν τῇ ῥομφαίᾳ τοῦ καθημένου ἐπὶ τοῦ ἵππου τῇ ἐξελθούσῃ ἐκ τοῦ στόματος αὐτοῦ, καὶ πάντα τὰ ὄρνεα ἐχορτάσθησαν ἐκ τῶν σαρκῶν αὐτῶν. 19:21. καὶ (And) οἱ (the-ones) λοιποὶ (remaindered) ἀπεκτάνθησαν (they-were-killed-off) ἐν (in) τῇ (unto-the-one) ῥομφαίᾳ (unto-a-sword) τοῦ (of-the-one) καθημένου (of-sitting-down) ἐπὶ (upon) τοῦ (of-the-one) ἵππου (of-a-horse) τῇ (unto-the-one) ἐξελθούσῃ (unto-having-had-came-out) ἐκ (out) τοῦ (of-the-one) στόματος (of-a-mouth) αὐτοῦ, (of-it,"καὶ (and) πάντα (all) τὰ (the-ones) ὄρνεα (en-birdings) ἐχορτάσθησαν (they-were-victualaged-to) ἐκ (out) τῶν (of-the-ones) σαρκῶν (of-fleshes) αὐτῶν. (of-them) 19:21. et ceteri occisi sunt in gladio sedentis super equum qui procedit de ore ipsius et omnes aves saturatae sunt carnibus eorum And the rest were slain by the sword of him that sitteth upon the horse, which proceedeth out of his mouth: and all the birds were filled with their flesh. 21. andtherestwere killed withtheswordofhimthatsatuponthehorse, whichcameforthoutofhismouth: andallthebirdswerefilledwiththeirflesh. 19:21. And the remnant were slain with the sword of him that sat upon the horse, which [sword] proceeded out of his mouth: and all the fowls were filled with their flesh. 19:21. And the others were slain by the sword that proceeds from the mouth of him who was sitting upon the horse. And all the birds were sated with their flesh. ru▾el▾el-en-gloss▾vulgate▾erva_1895▾kjv_1900▾catholic_pdv▾ jfb▾jw▾jg▾tr▾ab▾ac▾all ▾ ###### Adam Clarke: Commentary on the Bible - 1831 19:21: With the sword of him that sat upon the horse - He who sat on the white horse is Christ; and his sword is his word - the unadulterated Gospel. ###### Albert Barnes: Notes on the Bible - 1834 19:21: And the remnant - The remainder of the assembled hosts - the army at large, in contradistinction from the leaders. Were slain with the sword - Cut down with the sword; not rescued for protracted torment. 
A proper distinction is thus made between the deceived multitudes and the leaders who had deceived them. Of him that sat upon the horse - The Messiah, Rev 19:11. Which sword proceeded out of his mouth - notes on Rev 19:15. That is, they were cut down by a word. They fell before him as he spake, as if they were slain by the sword. Perhaps this indicates that the effect that is to be produced when these great powers shall be destroyed is a moral effect; that is, that they will be subdued by the word of the Son of God. And all the fowls were filled with their flesh - notes on Rev 19:17. An effect was produced as if the fowls of heaven should feed upon the carcasses of the slain. The general idea here is, that these great anti-Christian powers which had so long resisted the gospel, and pRev_ented its being spread over the earth; which had shed so much blood in persecution, and had so long corrupted and deceived mankind, would be subdued. The true religion would be as triumphant as if the Son of God should go forth as a warrior in his own might, and secure their leaders for punishment, and give up their hosts to the birds of prey. This destruction of these great enemies - which the whole course of the interpretation leads us to suppose is still future - prepares the way for the millennial reign of the Son of God - as stated in the following chapter. The "beast" and the "false prophet" are disposed of, and there remains only the subjugation of the great dragon - the source of all this evil - to prepare the way for the long-anticipated triumph of the gospel. The subjugation of the great original source of all those evil influences is stated in Rev 20:1-3; and then follows the account of the thousand years' rest of the saints, the resurrection of the dead, and the final judgment. ###### R. A. Torrey - Treasury: Treasury of Scriptural Knowledge - 1880 19:21: the remnant: Rev 19:11-15, Rev 1:16 and all: Rev 19:17, Rev 19:18, Rev 17:16 ###### John Gill 19:21 And the remnant were slain,.... Not only the kings of the earth, and their armies, that will now be gathered together, but all the remains of Papists, Pagans, and Mahometans, in the several parts of the world, even all the enemies of Christ: these will be slain with the sword of him that sat upon the horse; upon the white horse, as the Ethiopic version reads; the Arabic version reads, that sat upon the throne, which sword proceedeth out of his mouth, Rev_ 19:15 and is the word of God, or the judiciary sentence of Christ according to it; and the meaning is, either that these shall be subdued, conquered, and converted by the word; and so are fitly called a "remnant", a remnant according to the election of grace among the antichristian party; and which sense agrees with Rev_ 11:13 or else that they will be convicted and confounded, and not be able to stand against the light and evidence of the word of God, and will be sentenced by Christ to everlasting punishment; and it may be partly one, and partly the other. And all the fowls were filled with their flesh; all the Christian princes and people will be satisfied with their kingdoms, riches, and wealth, and will rejoice at their destruction, and in the righteousness of God, which will be displayed in it; and now the world being clear of all Christ's enemies, Pagan, Papal, and Mahometan, the way will be prepared for Christ's open and glorious kingdom in it. ###### John Wesley 19:21 Here is a most magnificent description of the overthrow of the beast and his adherents. 
It has, in particular, one exquisite beauty; that, after exhibiting the two opposite armies, and all the apparatus for a battle, Rev_ 19:11-19; then follows immediately, Rev_ 19:20, the account of the victory, without one word of an engagement or fighting. Here is the most exact propriety; for what struggle can there be between omnipotence, and the power of all the creation united against it! Every description must have fallen short of this admirable silence. ###### Robert Jamieson, A. R. Fausset and David Brown 19:21 the remnant--Greek, "the rest," that is, "the kings and their armies" (Rev_ 19:19) classed together in one indiscriminate mass. A solemn confirmation of the warning in Ps 2:10.
APPLICATIONS OF REPRESENTATION THEORY TO COMBINATORICS
ALEX GHORBANI
Abstract. This paper contains an introduction to the representation theory of finite groups, an application of this theory to the symmetric group, and an exploration into the combinatorial nature of objects discussed along the way.
We begin with the basic results of representation theory and use them to develop ideas about the symmetric group and its irreducible representations. Young Tableaux are introduced, and two proofs are given for the Hook-Length Formula with an application to the Catalan numbers. We then conclude our treatment of the representation theory of the symmetric group with an overview of Young’s Rule and the Kostka Numbers.
Contents
1. Introduction
2. Representations of Groups
3. Representations of the Symmetric Group
4. Hook-Length Formula
5. Probabilistic Proof of the Hook-Length Formula
6. Catalan Numbers
7. Specht Modules and the Decomposition of M λ
8. Basis for Sλ and Kostka Numbers
9. Acknowledgments
References

1. Introduction

Representation theory is a field of mathematics that systematically translates objects from the abstract world of group theory to the more concrete environment of vector spaces. This lets mathematicians use the extensive and powerful tools of linear algebra to answer questions about groups and their properties. In this paper, we will look at the basic ways that groups get translated into vector space automorphisms and the important results from the introduction to representation theory that follow.
We will apply these results to the study of the symmetric group, which is the group of bijective functions from a set of n objects to itself.
The symmetric group plays an important role in the field of combinatorics and, as we will see, some of its combinatorial properties can be understood from the perspective of representation theory.
Date: August 17, 2020.
2. Representations of Groups

Definition 2.1. Let G be a finite group, and let V be a finite dimensional vector space over the complex numbers. We say that a representation of G on V is a group homomorphism ρ : G → GL(V), where GL(V) is the set of all invertible linear transformations from V to itself, otherwise known as the general linear group of V.
This map is what lets us translate the elements of our group into linear trans-formations of a vector space. The vector space V is sometimes called a G-module, because the group acts on V in a way that preserves its abelian structure, and is said to carry a representation of G. It is convention to omit ρ and simply write gv instead of ρ(g)v.
Definition 2.2. Let V be a G-module, and let W be a subspace of V . We say that W is a submodule of V if W is a vector subspace of V and is closed under the action of G: gw ∈W for all g ∈G, w ∈W.
Given a vector space V , consider the subspaces W = V and W = {0}. The reader can verify that these subspaces are indeed submodules, but we call these trivial submodules. This idea of the triviality of representations brings us to one of the first major ideas of representation theory.
Definition 2.3. Let V be G-module. V is said to be irreducible if V contains only trivial submodules. If V contains non-trivial submodules then V is reducible.
Now, we have some conception of “atomic units” of representations, i.e., repre-sentations that do not contain other nontrivial representations. Our goal is now to decompose representations into these atomic units, bringing us to our next impor-tant idea.
Definition 2.4. Given two G-modules, U and W, there exists a new G-module, U ⊕W, where ⊕denotes the usual direct sum of vector spaces.
This means that we can construct new representations from old representations.
Here are some examples of group representations.
Example 2.5.
(1) The trivial representation of a group G is the map ρ : G → GL(C) such that gz = z for all z ∈ C.
(2) Consider a set S = {s1, s2, · · · , sn} on which G acts. Now we can look at the vector space CS which consists of formal linear combinations c1s1 + c2s2 + · · · + cnsn, where ci ∈C for all i. The elements of S are acting as our basis. The reader can verify that this vector space is indeed a G-module.
If a group G acts on a set S then the G-module CS is called the per-mutation representation and the elements of S are called the natural or standard basis.
(3) Groups act on themselves, so when we let S = G itself, we can also form a G-module, which we call the group algebra, or the regular representation.
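As a concrete illustration of Example 2.5, the following is a minimal sketch (my own, not from the paper; the function names are assumptions) that builds the permutation representation of S3 on C^3 as permutation matrices and checks the homomorphism property ρ(στ) = ρ(σ)ρ(τ).

```python
from itertools import permutations
import numpy as np

def perm_matrix(sigma):
    """Return the permutation matrix sending basis vector e_i to e_{sigma(i)}."""
    n = len(sigma)
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        M[sigma[i], i] = 1
    return M

def compose(sigma, tau):
    """(sigma o tau)(i) = sigma(tau(i)), with permutations as tuples on {0, ..., n-1}."""
    return tuple(sigma[tau[i]] for i in range(len(tau)))

S3 = list(permutations(range(3)))
# Check that sigma -> perm_matrix(sigma) respects composition, i.e. is a group homomorphism.
for sigma in S3:
    for tau in S3:
        assert np.array_equal(perm_matrix(compose(sigma, tau)),
                              perm_matrix(sigma) @ perm_matrix(tau))
print("sigma -> perm_matrix(sigma) is a homomorphism on all of S3")
```

The same construction works verbatim for any finite set on which a group acts, which is exactly the permutation representation of Example 2.5(2).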
To begin our discussion of the two central results of this section, Schur's Lemma and Maschke's Theorem, we first need the maps between representations.
Definition 2.6. Let V and W be G-modules. A G-homomorphism or G-linear map is a linear transformation φ : V →W such that φ(gv) = gφ(v) for all g ∈G, v ∈V.
Definition 2.7. Let V and W be G-modules. V and W are said to be G-equivalent, written V ∼ = W, if there exists a bijective G-homomorphism φ : V →W.
Lemma 2.8. Let V and W be G-modules and let φ : V →W be a G-linear map.
Then Ker(φ) is a submodule of V and Im(φ) is a submodule of W.
Proof. Because φ is a linear transformation, Ker(φ) is a subspace of V and Im(φ) is a subspace of W. Take an arbitrary v ∈ Ker(φ) and an arbitrary g ∈ G.
Since v lies in the kernel and φ is G-linear, we have φ(gv) = gφ(v) = g0 = 0, so gv ∈ Ker(φ).
Thus Ker(φ) is closed with respect to G.
Similarly, take an arbitrary w ∈ Im(φ), say w = φ(v) for some v ∈ V. Then gw = gφ(v) = φ(gv), which lies in the image of φ, so Im(φ) is closed with respect to G.
□ This gives us the following important result.
Lemma 2.9 (Schur’s Lemma). Let V and W be irreducible G-modules. If φ : V → W is a G-linear map, then exactly one of the following holds.
(1) φ is a G-isomorphism.
(2) φ is the zero map.
Proof. By Lemma 2.8, we have that Ker(φ) is a submodule of V and Im(φ) is a submodule of W. We also know that V is irreducible, so Ker(φ) is either {0} or V. Similarly, because W is irreducible, Im(φ) is either {0} or W.
If Ker(φ) = V or Im(φ) = {0}, then φ must be the zero map. If Ker(φ) = {0} and Im(φ) = W, then φ is an isomorphism.
□ Lemma 2.10. Let V be a G-module with W as a submodule. Then there exists some submodule U complementary to W so that V = W ⊕U.
Before the proof of this lemma, we will introduce some technical definitions.
Definition 2.11. An inner product on V is invariant under the action of G if ⟨gv, gw⟩= ⟨v, w⟩, for all g ∈G and v, w ∈V .
Definition 2.12. Given an inner product on a vector space V and a subspace W ≤V , we can form the orthogonal complement of W, denoted W ⊥, as follows: W ⊥= {v ∈V | ⟨v, w⟩= 0 for all w ∈W}.
It is always true that V = W ⊕ W⊥ when W is a subspace. We can construct a G-invariant inner product from an arbitrary inner product on V by the following definition. Given an arbitrary inner product ⟨·, ·⟩′ on V, we can define a G-invariant inner product by
$$\langle u, v \rangle = \sum_{g \in G} \langle gu, gv \rangle'.$$
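To make the averaging construction concrete, here is a small numerical sketch (my own illustration, not part of the paper), using a real symmetric form on R^3 for simplicity: starting from an arbitrary positive-definite form, averaging over the permutation matrices of S3 yields a form that is invariant under the group action.

```python
from itertools import permutations
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix sending e_i to e_{sigma(i)}."""
    n = len(sigma)
    M = np.zeros((n, n))
    for i in range(n):
        M[sigma[i], i] = 1.0
    return M

G = [perm_matrix(s) for s in permutations(range(3))]

# An arbitrary positive-definite inner product <u, v>' = u^T A v.
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
A = B.T @ B + np.eye(3)

# Averaged form: <u, v> = sum_{g in G} <gu, gv>' = u^T (sum_g g^T A g) v.
A_avg = sum(g.T @ A @ g for g in G)

# Invariance check: <hu, hv> = <u, v> for every h in G, i.e. h^T A_avg h = A_avg.
for h in G:
    assert np.allclose(h.T @ A_avg @ h, A_avg)
print("the averaged inner product is G-invariant")
```

The invariance follows because g maps to gh is a bijection of the group, which is exactly the argument used implicitly in the definition above.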
Now, we have the tools to prove the lemma.
Proof. Let U = W ⊥. We want to show that, for all g ∈G and u ∈W ⊥, we have gu ∈W ⊥. Take an arbitrary w ∈W. Then, we can say ⟨gu, w⟩= ⟨g−1gu, g−1w⟩, since the inner product is G-invariant. By the properties of the group action, we get ⟨g−1gu, g−1w⟩= ⟨u, g−1w⟩.
We know that u ∈W ⊥and g−1w ∈W so we can say ⟨gu, w⟩= 0.
Thus gu ∈W ⊥, giving us that W ⊥is a submodule of V . Finally, we can write V = W ⊕W ⊥.
□ The following theorem lets us talk about the decomposition of any representation in terms of irreducible subrepresentations.
Theorem 2.13 (Maschke’s Theorem). Let G be a finite group and let V be a non-zero G-module. Then, V = W (1) ⊕W (2) ⊕· · · ⊕W (k), where each (not necessarily distinct) W (i) is an irreducible submodule of V .
Proof. We will use strong induction on the dimension of V. If dim(V) = 1, then V is irreducible, and we are done. Now assume the result holds for every module of dimension at most n, and consider the case dim(V) = n + 1. If V is irreducible, we are done. Otherwise, V contains at least one nontrivial submodule W. Then, by Lemma 2.10, V = W ⊕ W⊥, where dim(W) and dim(W⊥) are each at most n. This means we can apply the inductive hypothesis, decomposing each piece into irreducibles, and we are done.
□ Definition 2.14. We say that a representation V is completely reducible if it can be written as the direct sum of irreducible submodules.
Maschke's Theorem tells us that every representation of a finite group over the complex numbers is completely reducible. This lets us study a group representation in terms of its irreducible components, which is often a simpler task.
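As a small worked example of complete reducibility (my own illustration; the paper returns to closely related decompositions later, in the context of Specht modules), consider the permutation representation of $S_n$ on $\mathbb{C}^n$ from Example 2.5. It splits into two submodules:

```latex
\[
  \mathbb{C}^{n}
  \;=\;
  \underbrace{\operatorname{span}\{(1,1,\dots,1)\}}_{\text{trivial submodule}}
  \;\oplus\;
  \underbrace{\{(x_{1},\dots,x_{n}) \mid x_{1}+\cdots+x_{n}=0\}}_{(n-1)\text{-dimensional submodule}}.
\]
```

Both summands are clearly preserved by permuting coordinates, and for $n \ge 2$ the second summand turns out to be irreducible, so this is the full decomposition guaranteed by Maschke's Theorem.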
3. Representations of the Symmetric Group

In this section, we will use the tools of representation theory that we developed in the last section to talk about the symmetric group.
Definition 3.1. Suppose λ = (λ1, λ2, λ3, · · · , λl) partitions n, written λ ⊢n. This means that n = λ1 + λ2 + · · · + λl, where n ≥λ1 ≥λ2 ≥· · · ≥λl > 0. The shape of λ is an array of n dots with l left-justified rows with row i containing λi dots for 1 ≤i ≤l.
Example 3.2. Let λ = (4, 2, 2, 1) partition 9. The corresponding shape is
sh(λ) =
• • • •
• •
• •
•
Now we can associate a given λ ⊢n with a subgroup of Sn.
Definition 3.3. Let λ = (λ1, λ2, · · · , λl) partition n. The corresponding Young subgroup of Sn is Sλ = S{1,2,··· ,λ1} × S{λ1+1,λ1+2,··· ,λ1+λ2} × · · · × S{n−λl+1,n−λl+2,··· ,n} Definition 3.4. Let λ partition n. A Young diagram, or tableau, of shape λ is an array t obtained by replacing the dots of shape λ with the numbers 1, 2, · · · , n bijectively.
Example 3.5. Let λ = (4, 3, 2, 1) be a partition of 10. A Young tableau t of shape λ might look like
1 2 3 4
5 6 7
8 9
10
We will now introduce an equivalence relation on the tableaux.
Definition 3.6. Two Young-tableaux t1 and t2 are row equivalent, t1 ∼t2, if corresponding rows of the two tableaux contain the same elements. A tabloid of shape λ is then {t} = {t1 | t1 ∼t} where the shape of t is λ.
Example 3.7. Suppose we have the following two tableaux, t1 and t2 respectively:
t1 =
3 1 4
2
t2 =
4 3 1
2
These two tableaux are row equivalent, and the corresponding tabloid is given by
{t} =
1 3 4
2
Definition 3.8. Let λ partition n. Define M λ = C{{t1}, · · · , {tk}}, where {t1}, · · · , {tk} is a complete list of λ-tabloids. M λ is, in fact, naturally an Sn-module. Then, M λ is called the permutation module corresponding to λ.
Definition 3.9. A G-module M is cyclic if there is a v ∈M such that M = C [Gv] , where Gv = {gv | g ∈G}. In this case, we say that M is generated by v.
Proposition 3.10. If λ partitions n, then M λ is cyclic, generated by any given λ-tabloid.
Proof. We know that any tabloid of shape λ can be taken to another tabloid of shape λ by some permutation in Sn, so M λ is cyclic.
Theorem 3.11. Let λ partition n. Consider the Young subgroup Sλ and an arbitrary tabloid {tλ}. Then the representations V λ = C[Sn/Sλ] and M λ = C[Sn{tλ}] are Sn-isomorphic.
Proof. We can map between the Young subgroup
Sλ = S{1,2,··· ,λ1} × S{λ1+1,λ1+2,··· ,λ1+λ2} × · · · × S{n−λl+1,n−λl+2,··· ,n}
and the tabloid {tλ} whose first row is 1, 2, ··· , λ1, whose second row is λ1 + 1, λ1 + 2, ··· , λ1 + λ2, and so on, down to the last row n − λl + 1, ··· , n.
From this, we can think of the coset πSλ corresponding with the tabloid {πtλ}.
So, let π1, π2, ··· , πk be a transversal (a set containing exactly one element from each coset) for Sλ. Consider the map θ : V λ → M λ such that θ(πiSλ) = {πitλ} for 1 ≤ i ≤ k, and vice-versa. Because θ preserves the action of permutation, our map is indeed an Sn-homomorphism. The correspondence above is bijective, so our map is also an Sn-isomorphism.
□ In our examples of representations in the first section, we have already seen examples of these modules.
Example 3.12.
a) If λ = (n), then M (n) = C{ 1 2 ··· n } ≅ C, which gives us the trivial representation.
b) If λ = (1^n), then each equivalence class {t} consists of only one tableau. We can think of this tableau as a permutation written in one-line notation, but transposed. The action of Sn is preserved by this correspondence, so we can say M (1^n) ≅ C[Sn].
This gives us the regular representation.
c) If λ = (n − 1, 1), then each λ-tabloid is uniquely determined by the element in the second row of the shape. Thus, each tabloid can just be thought of as a number 1, 2, ··· , n. The action of Sn is preserved by this correspondence, so we can say M (n−1,1) ≅ C{1, 2, ··· , n}.
This gives us the defining representation.
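To connect these modules with plain counting, here is a short sketch (my own, not from the paper) that enumerates the λ-tabloids for a small shape and checks that their number is n!/(λ1! λ2! ··· λl!), which is the dimension of M λ.

```python
from itertools import permutations
from math import factorial

def tabloids(shape):
    """All tabloids of the given shape, each encoded as a tuple of frozensets (one per row)."""
    n = sum(shape)
    found = set()
    for perm in permutations(range(1, n + 1)):
        rows, start = [], 0
        for length in shape:
            rows.append(frozenset(perm[start:start + length]))
            start += length
        found.add(tuple(rows))
    return found

shape = (2, 1)                          # the partition (2, 1) of n = 3
expected = factorial(sum(shape))
for part in shape:
    expected //= factorial(part)
print(len(tabloids(shape)), expected)   # both print 3: dim M^(2,1) = 3!/(2! 1!) = 3
```

For λ = (n − 1, 1) the same count gives n, matching the defining representation in part c) above.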
4. Hook-Length Formula

In this section, we will discuss the notion of standard tableaux and use the Hook-Length Formula to count the number of standard tableaux with a given partition λ ⊢ n.
Definition 4.1. A tableau t is standard if the rows and columns of t are increasing sequences. We say f λ is the number of standard tableaux of shape λ.
Definition 4.2. If (i, j) is a node in the diagram of λ, then it has hook Hi,j = {(i, j′) | j′ ≥j} ∪{(i′, j) | i′ ≥i} with corresponding hook length hi,j = |Hi,j|.
The arm and leg of Hi,j are Ai,j = {(i, j′) | j′ ≥j} and Li,j = {(i′, j) | i′ ≥i}, respectively, where arm length and leg length are given as ali,j = |Ai,j| and lli,j = |Li,j|.
Example 4.3. Let λ = (3, 2, 1) partition 6. The hook of cell (1, 1) is given by
H1,1 =
• • •
•
•
giving h1,1 = 5.
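The hook lengths of a partition are easy to compute directly from the definition. The following Python sketch (my own illustration; names are not from the paper) tabulates h_{i,j} for every cell of a partition, and for λ = (3, 2, 1) it reproduces the value h_{1,1} = 5 from Example 4.3.

```python
def hook_lengths(shape):
    """Return a list of rows, where row i holds the hook lengths h_{i,j} of the partition."""
    # Column lengths (the conjugate partition): number of rows whose length exceeds j.
    conjugate = [sum(1 for row_len in shape if row_len > j) for j in range(shape[0])]
    table = []
    for i, row_len in enumerate(shape):
        row = []
        for j in range(row_len):
            arm = row_len - (j + 1)          # cells strictly to the right of (i, j)
            leg = conjugate[j] - (i + 1)     # cells strictly below (i, j)
            row.append(arm + leg + 1)        # the hook counts both, plus the cell itself
        table.append(row)
    return table

print(hook_lengths((3, 2, 1)))   # [[5, 3, 1], [3, 1], [1]]
```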
The Hook Formula is motivated by the fact that the number of standard Young tableaux of a given shape, f λ, must be less than the number of total Young tableaux of a given shape, n!. After experimentation, the following expression was produced.
Theorem 4.4 (Hook Formula). If λ ⊢ n, then
$$f^{\lambda} = \frac{n!}{\prod_{(i,j)\in\lambda} h_{i,j}}.$$
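A quick way to gain confidence in the formula is to check it computationally on small shapes. The sketch below (my own; the helper names are assumptions) computes f λ by the Hook Formula and compares it with a brute-force count of the standard tableaux of the same shape.

```python
from itertools import permutations
from math import factorial

def hook_product(shape):
    """Product of all hook lengths of the partition."""
    conjugate = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    prod = 1
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            prod *= (row_len - j - 1) + (conjugate[j] - i - 1) + 1
    return prod

def standard_tableaux_count(shape):
    """Brute force: fill the cells row by row with a permutation of 1..n and test standardness."""
    n = sum(shape)
    cells = [(i, j) for i, row_len in enumerate(shape) for j in range(row_len)]
    count = 0
    for perm in permutations(range(1, n + 1)):
        filling = dict(zip(cells, perm))
        rows_ok = all(filling[(i, j)] < filling[(i, j + 1)]
                      for (i, j) in cells if (i, j + 1) in filling)
        cols_ok = all(filling[(i, j)] < filling[(i + 1, j)]
                      for (i, j) in cells if (i + 1, j) in filling)
        count += rows_ok and cols_ok
    return count

for shape in [(3, 2, 1), (4, 2), (2, 2, 1)]:
    by_formula = factorial(sum(shape)) // hook_product(shape)
    print(shape, by_formula, standard_tableaux_count(shape))  # the two counts agree
```

For example, for λ = (3, 2, 1) both methods give 16 standard tableaux.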
Before proving this formula, we will rewrite it so that it is easier to create a bijective proof:
$$n! = f^{\lambda} \prod_{(i,j)\in\lambda} h_{i,j}.$$
We know that n! is the number of Young tableau of a given shape, and we know that f λ is the number of standard tableau of a given shape. So we want our bijection to map from a Young tableau, T, to a pair (P, J) where P is a standard tableau and J is a tableau of shape λ such that the number of choices for the cell Ji,j is hi,j, where the shapes of T, P, J are all λ.
Now we will give a total order to the cells in a tableau. We say that (i, j) ≤ (i′, j′) if and only if j > j′, or j = j′ and i ≤ i′.
Then label the cells as c1 < c2 < c3 < ··· < cn. Given a tableau T, we then define the tableau T^{≤c} to be the tableau with all the cells c′ of T such that c′ ≤ c.
Our algorithm will construct a sequence of pairs (Ti, Ji), with the first pair being (T, 0), where 0 denotes an empty hook tableau, and the final pair being (P, J). Each iteration of the algorithm produces a new tableau, jc(T), as follows:
1) Pick c such that T^{<c} is a standard tableau.
2) While T^{≤c} is not a standard tableau do
a) If c = (i, j), then let c′ be the cell of min{T_{i+1,j}, T_{i,j+1}}.
b) Exchange T_c and T_{c′} and let c := c′.
The process terminates when a standard tableau, P, is reached.
Definition 4.5. The sequence of cells that c passes through is called the path of c.
If the path of jck starts at (i, j) and ends at (i′, j′) after one iteration, then we have Jk = Jk−1, except for the positions
$$(J_k)_{h,j} = \begin{cases} (J_{k-1})_{h+1,j} - 1 & \text{for } i \le h < i', \\ j' - j & \text{for } h = i'. \end{cases}$$
To prove that the mapping is a bijection, we will create an inverse. To see that this inverse algorithm is well-defined, or to see worked examples, consult the references.
We first consider the set of candidate cells for the end of the path of j_{c_k}(T_{k−1}). We define this set to be
$$C_k = \{(i', j') \mid i' \ge i_0,\ j' = j_0 + (J_k)_{i',j_0},\ (J_k)_{i',j_0} \ge 0\}.$$
For ease of notation, let (T′, J′) = (T_k, J_k). Then our algorithm is as follows:
1) Pick c ∈ C_k, where c_k = (i_0, j_0).
2) While c ≠ c_k, do
a) If c = (i, j), then let c′ be the cell containing max{T′_{i−1,j}, T′_{i,j−1}}, where T′_{k,l} = 0 if k < 0 or l < j_0.
b) Exchange T′_c and T′_{c′} and let c := c′.
5. Probabilistic Proof of the Hook-Length Formula
Besides the combinatorial proof given above, there are many other proofs of the hook-length formula. In this section, we will look at a short probabilistic proof of the formula.
Let λ be a partition of n, and let t be a λ-shaped standard tableau. Because of the condition that rows and columns are strictly increasing, the maximal element n in t is in one of the corner cells of the tableau. For our proof, we will remove a corner from t to get a new standard tableau which we will say has shape µ, written as µ ≺λ.
Now, we introduce the notation
$$e_\lambda = \frac{n!}{\prod h_{i,j}}$$
to denote the ratio between the total number of permutations and the product of the hook lengths of the partition λ. If we can establish that the following identity is true, we can use induction over the size of each of the smaller tableaux until we reach a base case:
$$e_\lambda = \sum_{\mu \prec \lambda} e_\mu,$$
where µ ranges over shapes obtained by removing a corner from the shape λ. We can rewrite the above as
$$1 = \sum_{\mu \prec \lambda} \frac{e_\mu}{e_\lambda}.$$
Now we will look at the corners of t in detail. A random cell (i, j) in t has a 1/n chance of being chosen. Now consider the hook, H_{i,j}, and a random cell, (i′, j′) ≠ (i, j), within it. That cell has a 1/(h_{i,j} − 1) chance of being chosen. Repeat the process with a new cell chosen from H_{i′,j′}. Eventually, a corner will be chosen and the process will stop. This cell, (x, y), is called the terminal cell; any corner can be a terminal cell. Let p(x, y) be the probability that (x, y) is the terminal cell for a given trial of the process.
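This random process is easy to simulate. The following Python sketch of the hook walk is my own illustration (the function names are invented); running many trials and tallying the terminal corners gives empirical estimates of p(x, y).

```python
import random

def hook_walk(shape):
    """One trial of the hook walk on a partition `shape` (e.g. [3, 2, 1]).
    Returns the (row, col) of the terminal corner, 0-based."""
    cells = [(i, j) for i, row_len in enumerate(shape) for j in range(row_len)]
    i, j = random.choice(cells)              # uniformly random starting cell
    while True:
        arm = [(i, j2) for j2 in range(j + 1, shape[i])]
        leg = [(i2, j) for i2 in range(i + 1, len(shape)) if shape[i2] > j]
        rest_of_hook = arm + leg             # the hook of (i, j) minus (i, j) itself
        if not rest_of_hook:                 # (i, j) is a corner: the terminal cell
            return i, j
        i, j = random.choice(rest_of_hook)
```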
Consider the path, P, consisting of cells (a, b) = (a1, b1), (a2, b2), · · · , (am, bm) = (x, y), which records a trial starting at (a, b) and ending at the terminal cell (x, y).
Definition 5.1. The vertical and horizontal projections of P are the sequences A = (a1, a2, · · · , am) and B = (b1, b2, · · · , bm) respectively.
Denote the probability that a random trial beginning at cell (a, b) and having the projections (A, B) by p(A, B | a, b).
Lemma 5.2. With (x, y) = (a_m, b_m) the terminal cell of the trial,
$$p(A, B \mid a, b) = \prod_{i \in A \setminus \{x\}} \frac{1}{h_{iy} - 1} \;\prod_{j \in B \setminus \{y\}} \frac{1}{h_{xj} - 1}.$$
Proof. Conditioning on the first step of the trial, the left-hand side decomposes as
$$p(A, B \mid a, b) = \frac{1}{h_{ab} - 1}\Bigl(p(A \setminus \{a_1\}, B \mid a_2, b) + p(A, B \setminus \{b_1\} \mid a, b_2)\Bigr).$$
Using induction over m gives us that
$$p(A \setminus \{a_1\}, B \mid a_2, b) = (h_{a_1 y} - 1) \prod_{i \in A \setminus \{x\}} \frac{1}{h_{iy} - 1} \prod_{j \in B \setminus \{y\}} \frac{1}{h_{xj} - 1},$$
and
$$p(A, B \setminus \{b_1\} \mid a, b_2) = (h_{x b_1} - 1) \prod_{i \in A \setminus \{x\}} \frac{1}{h_{iy} - 1} \prod_{j \in B \setminus \{y\}} \frac{1}{h_{xj} - 1}.$$
Therefore,
$$p(A, B \mid a, b) = \frac{(h_{a_1 y} - 1) + (h_{x b_1} - 1)}{h_{ab} - 1} \prod_{i \in A \setminus \{x\}} \frac{1}{h_{iy} - 1} \prod_{j \in B \setminus \{y\}} \frac{1}{h_{xj} - 1}.$$
From the identity h_{ab} − 1 = (h_{a_1 y} − 1) + (h_{x b_1} − 1), we finally get that
$$p(A, B \mid a, b) = \prod_{i \in A \setminus \{x\}} \frac{1}{h_{iy} - 1} \prod_{j \in B \setminus \{y\}} \frac{1}{h_{xj} - 1},$$
and we are done.
□ Theorem 5.3. If (x, y) is a terminal cell and µ is the shape obtained from λ by removing the corner (x, y), then p(x, y) = e_µ / e_λ.
Proof. We can expand the right-hand side of the equation to
$$\frac{e_\mu}{e_\lambda} = \frac{1}{n} \prod_{1 \le i < x} \frac{h_{iy}}{h_{iy} - 1} \;\prod_{1 \le j < y} \frac{h_{xj}}{h_{xj} - 1}.$$
Recall also the dominance order on partitions: λ ⊵ µ means that λ_1 + λ_2 + · · · + λ_i ≥ µ_1 + µ_2 + · · · + µ_i for every i, where we take λ_i = 0 for i > l, the number of parts of λ, and similarly for µ.
The following two results are used to show that we have a complete list of irreducibles of M λ.
Theorem 7.4 (Submodule Theorem). Let U be a submodule of M^λ. Then U ⊇ S^λ or U ⊆ S^{λ⊥}.
Moreover, the Sλ are irreducible.
Proposition 7.5. Let θ be a nonzero G-homomorphism from Sλ to M µ. Then λ ⊵µ. If λ = µ, then θ is multiplication by a scalar.
Now, we can prove the following theorem.
Theorem 7.6. Let λ partition n. The Sλ form a complete list of irreducible Sn-modules.
Proof. By the Submodule Theorem, the S^λ are irreducible. We also know that there is a one-to-one correspondence between the Specht modules S^λ and the conjugacy classes of S_n, which is exactly the number of irreducible representations. Now we must show that the Specht modules are pairwise inequivalent. If S^λ ≅ S^µ, then there is a nonzero G-homomorphism between S^λ and M^µ, by the Submodule Theorem. Then by the above proposition, we can say λ ⊵ µ. By symmetry, we can conclude that µ ⊵ λ.
Thus λ = µ, and the modules are the same.
□ Theorem 7.7 (Decomposition of M^µ). A permutation module M^µ decomposes as
$$M^\mu = \bigoplus_{\lambda \unrhd \mu} m_{\lambda\mu} S^\lambda,$$
for some m_{λµ} ∈ N_0.
Proof. This decomposition follows from Proposition 7.5, because if S^λ appears inside M^µ with a nonzero coefficient, then λ ⊵ µ.
□ We will see that the coefficients, mλµ, have a combinatorial interpretation later.
8. Basis for S^λ and Kostka Numbers
If we want to select a natural basis for the Specht modules, we need to find a spanning set of linearly independent polytabloids. There is a clever way of choosing such a collection, namely to choose the set of polytabloids created from standard tableaux. We have already seen that standard tableaux are interesting objects, and this fact reinforces that impression.
Theorem 8.1. Given a partition λ of n, the set {et | t is a standard tableau of shape λ} forms a basis of Sλ.
Proof. A complete treatment of this theorem can be found in the references. Here, we will just look at a sketch of the proof.
First, to prove that the set is linearly independent, a partial ordering is given for tabloids. Then, we consider a set of m vectors in M λ, {v1, v2, · · · , vm}. If, for each vi, we can choose a maximal tabloid based on our partial order, {ti}, such that all the {ti} are distinct, then those m vectors are linearly independent.
Paired with some technical lemmas meant to bridge the gap between tabloids and polytabloids, we can conclude that the set of polytabloids constructed from the standard tableaux of a given shape is linearly independent.
To show that the set spans Sλ, we introduce a process called the straightening algorithm. This process lets us take an arbitrary polytabloid as a linear combination of standard polytabloids.
We start with an arbitrary tableau t.
We can assume that t has increasing columns, because we can always find a permutation within the column stabilizer that will give us increasing columns on t.
Then we want to find a set of permutations, {π}, such that (1) in each tableau πt, pairs of adjacent, out-of-order elements in a row are eliminated, and (2) there is a special element, called the Garnir element,
$$g = e + \sum_{\pi \in \{\pi\}} \operatorname{sgn}(\pi)\, \pi,$$
which satisfies g e_t = 0. Then, we can write
$$e_t = -\sum_{\pi} e_{\pi t}.$$
This process is repeated to give us the desired linear combination. Each iteration gives a linear combination consisting of polytabloids that are closer to being standard than those from the last iteration. Once they are all standard, we can write any polytabloid as a linear combination of standard polytabloids.
□ Because the standard tableaux form a basis for a given Specht module, the dimension can be given by the following corollary.
Corollary 8.2. For any partition λ of n, dim(S^λ) = f^λ.
Definition 8.3. A semistandard tableau of shape λ is like a Young tableau with slightly different conditions: the numbers in the boxes may repeat, and the rows weakly increase while the columns strictly increase. The content of a semistandard tableau is µ = (µ_1, µ_2, · · · , µ_m), where µ_i is the number of i's in the tableau.
Definition 8.4. Given two partitions of n, λ and µ, the Kostka number Kλµ is the number of semistandard tableaux of shape λ and content µ.
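To make the definition concrete, here is a small brute-force Python sketch (mine, not the paper's) that counts semistandard tableaux of a given shape and content; it is only practical for small shapes.

```python
from itertools import permutations

def kostka(shape, content):
    """Brute-force K_{lambda, mu}: count fillings of `shape` with `content`
    whose rows weakly increase and whose columns strictly increase."""
    entries = [k + 1 for k, mult in enumerate(content) for _ in range(mult)]
    count = 0
    for filling in set(permutations(entries)):
        rows, pos = [], 0
        for length in shape:                 # cut the filling into rows of the shape
            rows.append(filling[pos:pos + length])
            pos += length
        weak_rows = all(r[j] <= r[j + 1] for r in rows for j in range(len(r) - 1))
        strict_cols = all(rows[i][j] < rows[i + 1][j]
                          for i in range(len(rows) - 1) for j in range(len(rows[i + 1])))
        if weak_rows and strict_cols:
            count += 1
    return count

print(kostka([2, 1], [2, 1]))      # 1, as in Proposition 8.6
print(kostka([2, 1], [1, 1, 1]))   # 2 = f^{(2,1)}, as in Proposition 8.8
```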
As a consequence of the semistandard basis theorem (see the references), we can give the following theorem, known as Young's Rule.
Theorem 8.5 (Young's Rule). The multiplicity of S^λ in M^µ is equal to the number of semistandard tableaux of shape λ and content µ:
$$M^\mu \cong \bigoplus_{\lambda \unrhd \mu} K_{\lambda\mu} S^\lambda.$$
Now we give some examples of Kλµ for familiar partitions λ, µ.
Proposition 8.6. Given a partition µ of n, Kµµ = 1.
Proof. The only way to construct a semistandard tableau of shape µ and content µ would be to fill the first row with all 1’s, the second row with all the 2’s, and so on.
□ Proposition 8.7. Given a partition µ of n, K(n)µ = 1.
Proof. There is only one way to arrange a collection of numbers in weakly increasing order, so there is only 1 semistandard tableau of this specific configuration.
□ The above proposition, in conjunction with Young's rule, implies that M^µ contains exactly one copy of S^{(n)}, which is the trivial representation.
Proposition 8.8. Given a partition λ of n, Kλ(1n) = f λ.
Proof. A semistandard tableau with content (1n) is a standard tableau because there will be no repetitions of numbers in the boxes, and the rows and columns will both increase strictly. So, the number of semistandard tableaux of shape λ and content (1n) is equal to the number of standard tableaux of shape λ.
□ Young’s rule then says that M (1n) ∼ = M λ f λSλ.
We know that M^{(1^n)} is just the regular representation. If we take the dimension of both sides of this expression, we find that
$$n! = \sum_{\lambda} (f^\lambda)^2.$$
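This identity is easy to check by machine for small n. The following Python sketch (mine, not the paper's) computes each f^λ with the hook formula from Section 4 and sums the squares over all partitions of n.

```python
from math import factorial, prod

def partitions(n, largest=None):
    """All partitions of n as weakly decreasing tuples."""
    if n == 0:
        yield ()
        return
    largest = n if largest is None else largest
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def f(shape):
    """f^lambda via the hook formula."""
    hooks = [(shape[i] - j) + sum(1 for r in shape[i + 1:] if r > j)
             for i in range(len(shape)) for j in range(shape[i])]
    return factorial(sum(shape)) // prod(hooks)

for n in range(1, 7):
    assert sum(f(p) ** 2 for p in partitions(n)) == factorial(n)
print("n! = sum over lambda of (f^lambda)^2 holds for n = 1, ..., 6")
```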
9. Acknowledgments
First and foremost, I would like to thank my mentor Calder Sheagren for helping me learn the material in this paper and for helping me edit my writing. And I would like to thank Pallav Goyal and Peter Huxford for looking over my paper. I would also like to thank Daniil Rudenko for running the apprentice program and writing fun and challenging problem sets, as well as Peter Huxford and Alicia Xiao for hosting the apprentice problem sessions. Finally, I would like to thank Peter May for organizing this year's REU.
References
J.-C. Novelli, I. Pak, and A. V. Stoyanovskii. A direct bijective proof of the hook-length formula.
B. E. Sagan. The Symmetric Group: Representations, Combinatorial Algorithms, and Symmetric Functions. Springer, 2000.
R. P. Stanley. Catalan addendum. 2013.
Par Part 1: Sequent Calculus - Ryan Brewer
===============
January 13, 2025
This post is the first in a series on Logic. These ideas are very useful in understanding many important papers on programming language theory, especially papers on type theory and the lambda calculus. I will start with an explanation of sequents and Sequent Calculus as a system for doing logic, then I'll dive into linear logic in the next post. I'll finish the trilogy with a third post on Par and computational interpretations of classical logic.
I'm not going to present all the rules of any logical system, in this post. Instead I'll just give examples of rules in each system, to explain how they work and help you read and write such rules yourself in the future. This is also partly because different sources present the same systems with slightly different, but ultimately equivalent, rules.
Sequents
Logic, as a field of study, has thousands of years of history. But in the early 1900s it exploded into new and interesting directions, as researchers in mathematics looked to logic to try and understand what it was that they were doing and why they could trust it. A brilliant logician named Gerhard Gentzen (unfortunately a Nazi, though some argue it was apathetically, or even under duress) developed the following notation for reasoning in 1934, which will be important to understand for the rest of the post. Why do you need a notation to understand this post? Some mathematical notations are brilliant for revealing what's really going on and instilling the perfect intuition in your brain, and Gentzen's notation is truly one of the best in this category.
[\begin{prooftree} \AxiomC{$A,B\vdash P$} \AxiomC{$C\vdash Q$} \BinaryInfC{$D\vdash R,S$} \end{prooftree} ]
This is called a "sequent notation," because the pair of (possibly empty) lists of symbols surrounding a "turnstile" (\vdash) is called a "sequent." As you can see here there are three sequents. Sequent notation can be challenging, and my best advice is to try to "turn your brain off" and see it as a dumb, mechanical symbol-rewriter. Let me explain.
The horizontal bar is describing "inputs," on the top, and the "output," on the bottom. This lets you create chains, like so:
[\begin{prooftree} \AxiomC{$C\vdash R$} \AxiomC{$A\vdash P$} \AxiomC{$B\vdash Q$} \BinaryInfC{$D\vdash S$} \BinaryInfC{$E\vdash T$} \end{prooftree} ]
This notation is used in two ways: "inference rules" and "derivation trees." An inference rule only has one horizontal line, so only one "top" and one "bottom." A derivation tree is a chain (like the one above) of inference rules, where each horizontal line (called a "step," when talking about derivation trees) must match an inference rule. So the chain above, if it's a derivation tree, might be using the following two inference rules:
[\begin{prooftree} \AxiomC{$A\vdash P$} \AxiomC{$B\vdash Q$} \BinaryInfC{$D\vdash S$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$C\vdash R$} \AxiomC{$D\vdash S$} \BinaryInfC{$E\vdash T$} \end{prooftree} ]
Notice how the (D\vdash S) is acting as a point where the inference rules can be connected, because it's the bottom of one inference rule and part of the top of the other. Like legos, we can stack inference rules when the top of the bottom piece matches the bottom of the top piece. Since inference rules are descriptions of valid steps in a derivation tree, we often fill their top sequents with "metavariables," which are like wildcards that match anything. The way to specify if something is a metavariable or not will depend on what you're doing, but typically any Greek or Latin (English) letter will be a metavariable.
Unfortunately I really do mean "typically," and not "always:" there are many, many exceptions to this, and you'll often find yourself using context clues to decipher sequent notation.
If the same metavariable appears multiple times on the top of a sequent, then those parts of the "input" sequent have to be equal to each other for it to be considered a valid match, because each metavariable can only match one thing, even if multiple times. Note that metavariables can stand in for some component of an expression, an entire expression, or even multiple adjacent expressions in a list, but generally not for the whole sequent (both lists). Understanding what a metavariable is supposed to stand in for unfortunately also requires context clues! But when you see enough of these you quickly develop trustworthy expectations and intuitions, and the examples in this post will help with that.
Lastly, the bottom of an inference rule will often mention the metavariables on the top, which just means "whatever that metavariable matched in the input, fill that in here in the output." If there's a metavariable on the bottom that isn't mentioned on the top, then it could be filled with anything that the metavariable could match; it's up to you! You'll see clarifying examples in a moment.
Sequent Calculus
Let's do some reasoning with Gentzen's notation! The left side of the turnstile (\vdash) will be our assumptions, and the right side will be our conclusions. We'll interpret this as "if all the propositions on the left are true, then at least one of the propositions on the right is true." In other words, the left can be seen as a big conjunction ("and") and the right can be seen as a big disjunction ("or"). If the left side is empty then it should be impossible for everything on the right to be false (it's a tautological disjunction), and if the right side is empty then it should be impossible for everything on the left to be true (it's a contradictory conjunction).
Allow me to emphasize: we are now giving an interpretation to the sequent notation. This is not the inherent meaning of the notation. This meaning comes from our choice of inference rules, which we've carefully designed to simulate formal logic. The underlying system, the sequent notation, is still just a dumb symbol rewriter, which we configure by our choice of inference rules.
Here are some simple inference rules for reasoning around conjunctions:
[\begin{prooftree} \AxiomC{$\Gamma\vdash p,\Delta$} \AxiomC{$\Gamma\vdash q,\Delta$} \RightLabel{$\land$-R} \BinaryInfC{$\Gamma\vdash p\land q,\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma,p\vdash\Delta$} \RightLabel{$\land$-L${}_1$} \UnaryInfC{$\Gamma,p\land q\vdash\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma,q\vdash\Delta$} \RightLabel{$\land$-L${}_2$} \UnaryInfC{$\Gamma,p\land q\vdash\Delta$} \end{prooftree} ]
Here the capital greek letters (\Gamma) and (\Delta) ("gamma" and "delta") are metavariables for zero or more adjacent propositions in a list, while lowercase latin letters (p) and (q) are metavariables for single propositions. To use these rules in a proof (derivation tree) we need a few more rules. First we need a way to start proofs, that is, a rule with a blank top:
[\begin{prooftree} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$p\vdash p$} \end{prooftree} ]
This says that no matter what (empty top), if you assume something then you can conclude it ((p\vdash p)). This is often called the Axiom rule, because it doesn't need to match anything on the top. Next we need the "structural rules," which let us manage our lists. Each of the three structural rules comes in two variants, called the left structural rules and the right structural rules.
[\begin{prooftree} \AxiomC{$\Gamma_1,p,q,\Gamma_2\vdash\Delta$} \RightLabel{Exch-L} \UnaryInfC{$\Gamma_1,q,p,\Gamma_2\vdash\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma\vdash\Delta$} \RightLabel{Weak-L} \UnaryInfC{$\Gamma,p\vdash\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma,p,p\vdash\Delta$} \RightLabel{Contr-L} \UnaryInfC{$\Gamma,p\vdash\Delta$} \end{prooftree} ]
[\begin{prooftree} \AxiomC{$\Gamma\vdash\Delta_1,p,q,\Delta_2$} \RightLabel{Exch-R} \UnaryInfC{$\Gamma\vdash\Delta_1,q,p,\Delta_2$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma\vdash\Delta$} \RightLabel{Weak-R} \UnaryInfC{$\Gamma\vdash p,\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma\vdash p,p,\Delta$} \RightLabel{Contr-R} \UnaryInfC{$\Gamma\vdash p,\Delta$} \end{prooftree} ]
The Exch ("Exchange") rules let us reorder the lists, the Weak ("Weakening") rules let us introduce more assumptions or conclusions than we need (making the proof "weaker"), and the Contr ("Contraction") rules let us remove duplicate assumptions or conclusions.
Now we can start proving things, by constructing derivation trees. As long as the top is all "Axiom" steps and each step matches an inference rule, the derivation tree is a proof of the bottom sequent!
[\begin{prooftree} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$P\vdash P$} \RightLabel{Weak-L} \UnaryInfC{$P,Q\vdash P$} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$Q\vdash Q$} \RightLabel{Weak-L} \UnaryInfC{$Q,P\vdash Q$} \RightLabel{Exch-L} \UnaryInfC{$P,Q\vdash Q$} \RightLabel{$\land$-R} \BinaryInfC{$P,Q\vdash P\land Q$} \RightLabel{Weak-L} \UnaryInfC{$P,Q,R\vdash P\land Q$} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$R\vdash R$} \RightLabel{Weak-L} \UnaryInfC{$R,P\vdash R$} \RightLabel{Weak-L} \UnaryInfC{$R,P,Q\vdash R$} \RightLabel{Exch-L} \UnaryInfC{$P,R,Q\vdash R$} \RightLabel{Exch-L} \UnaryInfC{$P,Q,R\vdash R$} \BinaryInfC{$P,Q,R\vdash(P\land Q)\land R$} \end{prooftree} ]
Natural Deduction
What we have so far is called the Sequent Calculus, and it's so explicit and mechanical that it's a nice object of study in the field of Logic. But so many of these steps are administrative junk, so here's a tree with only the steps that matter to humans:
[\begin{prooftree} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$P\vdash P$} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$Q\vdash Q$} \RightLabel{$\land$-R} \BinaryInfC{$P,Q\vdash P\land Q$} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$R\vdash R$} \RightLabel{$\land$-R} \BinaryInfC{$P,Q,R\vdash(P\land Q)\land R$} \end{prooftree} ]
This is a system called Sequent Natural Deduction. Only one proposition is allowed on the right of a turnstile. To see if two sequents match (so we can create a chain), we don't just look at what's written, but also imagine all the different ways of applying structural rules, to see if that produces any matches. The resulting system is much more complicated to study, but much nicer to use. One might say that a natural deduction proof distills the essence of the proof, presenting only the steps that matter. But there is an even more "natural" system than Sequent Natural Deduction, namely Natural Deduction:
[\begin{prooftree} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$P$} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$Q$} \RightLabel{$\land$-R} \BinaryInfC{$P\land Q$} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$R$} \RightLabel{$\land$-R} \BinaryInfC{$(P\land Q)\land R$} \end{prooftree} ]
Since a Sequent Natural Deduction sequent must always have exactly one proposition on the right side of the turnstile, it's possible to simply drop the assumptions and turnstile like this and have something that makes sense. Because really, the assumptions that matter for a proposition can always be found directly above it! As an object of study in the field of Logic, natural deduction is even more challenging than sequent natural deduction, exactly because it leaves more implicit so it can look like human reasoning.
(I'm borrowing the Stanford Encyclopedia of Philosophy's terminology here, because I like it, but people typically call sequent natural deduction and natural deduction both just "natural deduction," without distinction.)
Notice that in the Sequent Calculus, the rules for conjunction either introduce conjunctions on the left or introduce conjunctions on the right. Sequent natural deduction (and therefore natural deduction, too) instead uses "introduction rules" and "elimination rules." Here are examples:
[\begin{prooftree} \AxiomC{$\Gamma\vdash p$} \AxiomC{$\Gamma\vdash q$} \RightLabel{$\land$-Intro-R} \BinaryInfC{$\Gamma\vdash p\land q$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma\vdash p\land q$} \RightLabel{$\land$-Elim-R${}_1$} \UnaryInfC{$\Gamma\vdash p$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma\vdash p\land q$} \RightLabel{$\land$-Elim-R${}_2$} \UnaryInfC{$\Gamma\vdash q$} \end{prooftree} ]
[\begin{prooftree} \AxiomC{$\Gamma,p\vdash\Delta$} \RightLabel{$\land$-Intro-L${}_1$} \UnaryInfC{$\Gamma,p\land q\vdash\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma,q\vdash\Delta$} \RightLabel{$\land$-Intro-L${}_2$} \UnaryInfC{$\Gamma,p\land q\vdash\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma,p\land q\vdash\Delta$} \RightLabel{$\land$-Elim-L} \UnaryInfC{$\Gamma,p,q\vdash\Delta$} \end{prooftree} ]
Notice how all the rules are kind of duplicated, for operating on each side of the turnstile. That said, natural deduction doesn't have the right contraction and right weakening rules, or any other rules that would require or introduce multiple propositions on the right of the turnstile. It's also worth being aware that conjunction on the left is equivalent to the comma (hence the funny left conjunction elimination rule) so the left conjunction introduction rules are basically just left weakening.
One-Sided Sequents
Now we've got way too many rules, it appears. Each rule is duplicated because there's one for the right of the turnstile and one for the left. One-sided sequent calculus can be done instead, where we just do everything on the right of the turnstile. To do things this way, let's start with introduction and elimination rules for negation ("not"). We do introduction and elimination rules instead of left-introduction and right-introduction rules because now we only have one side!
[\begin{prooftree} \AxiomC{} \RightLabel{$\neg$-Intro} \UnaryInfC{$\vdash p,\neg p$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\vdash p,\Delta$} \AxiomC{$\vdash\neg p,\Delta$} \RightLabel{$\neg$-Elim or “Cut"} \BinaryInfC{$\vdash\Delta$} \end{prooftree} ]
These rules are starting to look very strange. It helps to remember that the comma here (on the right of the turnstile) is disjunction, so a one-sided sequent calculus is saying "one of these propositions is true."
Now, disjunction and negation is all you need to define the other logical constants in classical logic. For example, (P\Rightarrow Q) (implication) is equivalent to (\neg P\lor Q). Having implication means we can simulate assumptions (aka the left side of the turnstile). So now we can easily translate our Sequent Calculus inference rules for conjunction from earlier into this one-sided sequent system! First I'll present the natural deduction rules, which were one-sided in the case of conjunction anyway, and then show derivable sequent calculus rules for conjunction on the left of the turnstile:
[\begin{prooftree} \AxiomC{$\vdash p,\Delta$} \AxiomC{$\vdash q,\Delta$} \RightLabel{$\land$-Intro} \BinaryInfC{$\vdash p\land q,\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\vdash p\land q,\Delta$} \RightLabel{$\land$-Elim${}_1$} \UnaryInfC{$\vdash p,\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\vdash p\land q,\Delta$} \RightLabel{$\land$-Elim${}_2$} \UnaryInfC{$\vdash q,\Delta$} \end{prooftree} ]
[\begin{prooftree} \AxiomC{$\vdash\neg p,\Delta$} \UnaryInfC{$\vdash\neg(p\land q),\Delta$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\vdash\neg q,\Delta$} \UnaryInfC{$\vdash\neg(p\land q),\Delta$} \end{prooftree} ]
If the introduction and elimination rules for conjunction aren't making sense with how you understand conjunction, think of it this way. The comma on the right is disjunction, so at least one proposition in the list is true. Let's look at the two (\land)-Elim rules. If the conjunction is true, then either of its conjuncts are true, so replacing the conjunction with a conjunct is valid. If the conjunction is false, then something else in the list is true, so replacing the conjunction with anything, true or false, is valid. Now look at the (\land)-Intro rule. If (p) and (q) are both true, then it's possible every proposition in (\Delta) might be false, but the disjunction of (p\land q) and (\Delta) would still be true, since (p\land q) is true. On the other hand, if either (p) or (q) is false, then something in (\Delta) must be true, so the disjunction of (p\land q) and (\Delta) would be true even though (p\land q) is false. To summarize, the attitude to have is "if everything in (\Delta) is false we can ignore it, otherwise what we're doing doesn't matter." This comes from the (\texttt{False}\lor P\equiv P) and (\texttt{True}\lor P\equiv\texttt{True}) rules of logic. This makes our rules look a little like the single-proposition-on-the-right calculi, such as natural deduction.
What you'll hopefully notice is that in the one-sided sequent calculus you can pretend that something is on the left simply by negating it. This is thought of as a deep beauty of logic, that the structure of the assumptions is just a negation of the structure of the potential conclusions. I'll explore this beauty more in future posts. It's this fact about simulating one side from the other with negation that allows one-sided sequent calculi to avoid duplicating rules (since there are no left rules), and thus have far fewer rules overall.
Intuitionistic Logic
As elegant as the one-sided sequent calculus is, it isn't used much in the field. This is because it is inherently classical. Note that our Axiom rule became the Law of Excluded Middle! Intuitionistic logic requires a left side, because implication is no longer equivalent to disjunction with a negation, and the left and right rules are no longer negations of each other. The typical rules for a comma on the right don't really match the funny way disjunction behaves in intuitionistic logic, so intuitionistic logic is always done as (sequent) natural deduction, with only one proposition on the right, which may be an intuitionistic disjunction. You could theoretically redefine the comma on the right to be intuitionistic disjunction, since right now the right comma in intuitionistic logic is undefined, but there would be no benefit to doing so. Intuitionistic disjunction has completely different rules, which aren't dual to or comparable to the comma on the left at all, and a disjunction symbol on the right serves its purpose well enough without needing to introduce anything extra like special right-comma rules. Here are the intuitionistic disjunction inference rules:
[\begin{prooftree} \AxiomC{$\Gamma\vdash p$} \RightLabel{$\lor$-Intro-R${}_1$} \UnaryInfC{$\Gamma\vdash p\lor q$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma\vdash q$} \RightLabel{$\lor$-Intro-R${}_2$} \UnaryInfC{$\Gamma\vdash p\lor q$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma\vdash p\lor q$} \AxiomC{$\Gamma,p\vdash r$} \AxiomC{$\Gamma,q\vdash r$} \RightLabel{$\lor$-Elim-R} \TrinaryInfC{$\Gamma\vdash r$} \end{prooftree} ]
[\begin{prooftree} \AxiomC{$\Gamma,p\vdash r$} \AxiomC{$\Gamma,q\vdash r$} \RightLabel{$\lor$-Intro-L} \BinaryInfC{$\Gamma,p\lor q\vdash r$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma,p\lor q\vdash r$} \RightLabel{$\lor$-Elim-L${}_1$} \UnaryInfC{$\Gamma,p\vdash r$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma,p\lor q\vdash r$} \RightLabel{$\lor$-Elim-L${}_2$} \UnaryInfC{$\Gamma,q\vdash r$} \end{prooftree} ]
The connection between natural deduction and intuitionistic logic is fairly strong. (One might say natural!) Classical natural deduction is possible but famously disappointing/unsatisfactory. It can be done by introducing a second Axiom rule that matches our (classical) one-sided sequent calculus's Axiom rule, that is, the law of excluded middle:
[\begin{prooftree} \AxiomC{} \RightLabel{LEM} \UnaryInfC{$\vdash p\lor\neg p$} \end{prooftree} ]
However, the useful thing about intuitionistic logic is that it can be given computational "proof terms." These are expressions in some typed lambda calculus that have a normal form (don't loop forever during computation) only if their type is equivalent to a true proposition of intuitionistic logic. Hence the name "proof terms:" the existence of a fully-evaluated expression of a type is itself a proof of the corresponding proposition. In general, the study of classical logic doesn't use proof terms, though in later posts I'll explore ways that you can, in fact, give proof terms for classical logic. But for now I'll focus the discussion on the simple case of intuitionistic logic.
For simplicity we'll limit intuitionistic logic to only implication and (\texttt{False}), which can express conjunction, disjunction, and negation. Then the correspondence between types and propositions is as follows: function types correspond to implication, and (\texttt{False}) corresponds to (\texttt{void}) (the type with no values). Then that simply-typed lambda calculus gives proof terms to this "implicational fragment" of propositional intuitionistic logic. We use slightly new notation for this:
[\begin{prooftree} \AxiomC{} \RightLabel{Axiom} \UnaryInfC{$x:p\vdash x:p$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma,x:p\vdash e:q$} \RightLabel{$\Rightarrow$-Intro} \UnaryInfC{$\Gamma\vdash(\lambda x.e):p\Rightarrow q$} \end{prooftree} \quad\quad \begin{prooftree} \AxiomC{$\Gamma\vdash e_1:p\Rightarrow q$} \AxiomC{$\Gamma\vdash e_2:p$} \RightLabel{$\Rightarrow$-Elim} \BinaryInfC{$\Gamma\vdash(e_1\;e_2):q$} \end{prooftree} ]
Now we have these colons everywhere: (e: p) means (e) is a proof of (p), or equivalently that (e) is a simply-typed lambda calculus expression of type (p). A note on metavariables: (x) refers to variables in the lambda calculus, while (e) refers to any lambda calculus expressions. Therefore the proof terms in the context (\Gamma) will only be variables. (\texttt{False}) can be introduced by picking it as (p) when using the Axiom rule. To stress that the types "mean" propositions, I'm using logical notation for the types (such as (p\Rightarrow q) rather than the more typical (t\to u)). Because this is a simply-typed lambda calculus, you will find that these inference rules can't produce any expressions that loop forever during computation, so every expression on the right side of the turnstile will be a valid proof of its type/proposition.
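To make the proof-term reading concrete, here is a tiny Python checker for these three rules. It is my own sketch, not code from the post, and it takes one liberty: the rules above assign types to bare lambdas, but a simple checker is easier to write if each lambda carries the type of its bound variable, so lambdas are annotated here.

```python
def infer(ctx, e):
    """Infer the proposition proved by term e in context ctx (a dict from
    variable names to propositions).  Terms: a variable is a string, a lambda
    is ('lam', x, p, body) with p the annotation of x, an application is
    ('app', f, arg).  Propositions: atoms like 'p' or 'False', or ('=>', p, q)."""
    if isinstance(e, str):                    # Axiom: x : p follows from x : p in ctx
        return ctx[e]
    if e[0] == 'lam':                         # =>-Intro
        _, x, p, body = e
        return ('=>', p, infer({**ctx, x: p}, body))
    if e[0] == 'app':                         # =>-Elim
        _, func, arg = e
        t = infer(ctx, func)
        if not (isinstance(t, tuple) and t[0] == '=>') or infer(ctx, arg) != t[1]:
            raise TypeError("ill-typed application")
        return t[2]
    raise ValueError("unknown term")

print(infer({}, ('lam', 'x', 'p', 'x')))      # ('=>', 'p', 'p'): lambda x. x proves p => p
print(infer({}, ('lam', 'f', ('=>', 'p', 'q'),
                 ('lam', 'x', 'p', ('app', 'f', 'x')))))
```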
Since metavariable use is a little more complex here, I'll introduce the proper formal notation for describing them. We usually specify the behavior of metavariables with a "formal grammar," technically called a "BNF grammar" or "Backus-Naur Form," written as follows:
[e::=x\mid\lambda x.e\mid e\;e ]
[p,q::=\texttt{False}\mid p\Rightarrow p ]
[\Gamma::=\cdot\mid\Gamma,x:p ]
On the left of the (::=) are the metavariables, and on the right are the things they can be replaced with. The (\mid) symbol means "or," so the above grammar says that (e) can be any of the following: a variable like (x) or (y), a lambda abstraction (\lambda x.e), or an application (e\;e). The other two lines are similar. Notice how these definitions are recursive. The (\cdot) symbol in the context is worth mentioning: it means the empty context, and technically a context like (x:p,y:q) is just a shorthand for (\cdot,x:p,y:q). But we've been using this kind of shorthand for the whole post, as everyone does, instead of explicitly writing the empty context. In the wild it almost exclusively appears in formal grammars, not inference rules or derivation trees, so I've continued that tradition here. With all that in mind, we would write that our proof system for intuitionistic logic has the following "judgement form:"
[\Gamma\vdash e:p ]
This specifies the form of a valid sequent (often called a "judgement" in this context), and should be thought of as equivalent to a statement like "a valid sentence is a noun phrase followed by a verb phrase." In this case it would be like "a valid sequent is a context ((\Gamma)) on the left, then a turnstile, then an expression ((e)), then a colon, then a proposition ((p))." We could have formally given grammars and judgement forms for the earlier inference rules, but the usage of metavariables was simple enough to omit them.
Conclusion
Thus ends the crash course on sequent notation and sequent calculi! A sequel post on linear logic is in the works, which makes heavy and elegant use of this system.
The Four-Numbers Game
Sarah Prendergast
Mathematics Senior Exercise, Fall 2008
1. Introduction to the Four Numbers Game
The Four-Numbers game is a simple yet interesting problem that illustrates the fact that an elementary game relying only on basic arithmetic can exhibit complexity worthy of advanced analysis. Although the Four-Numbers game has been examined by many different people, the earliest record of the game is credited to E. Ducci of Italy in the late nineteenth century. Hence it is sometimes called a Ducci Sequence. The rules of the Four-Numbers game are very simple. In fact, the game can be enjoyed by elementary school math students and college students alike.
The most basic form of the game begins with four nonnegative integers, a, b, c, d and a square. One number is placed at each corner of the square. The resulting square with numbers at each vertex is called the “start square”.
Fig 1: The initial configuration of the Four Numbers game.
Consider the start square as ‘step 0’ or S0 of the game S. The first step is then obtained by creating a new square inside the original square with corners at the midpoints of the sides of the previous square. Each new corner is labeled with the absolute value of the difference between the two neighboring labels. This process continues until the difference between the numbers at each of the vertices is zero. The game is over when all four labels are zeros.
Definition 1.1. If a Four Numbers game S = S_0 = (a, b, c, d) has a = b = c = d = 0, then S is called the zero game. All other games are nonzero.
Fig. 2: (Left): The (6, 9, 0, 5)-game after 1 step.
(Right): The (6, 9, 0, 5)-game ends after 5 steps.
Definition 1.2. The length of the game S = S_0 = (a, b, c, d) is defined as the number of steps, n, it takes to end the game. The length is denoted L(a, b, c, d) = L(S_0) = n.
For example, the (6, 9, 0, 5)-game has length 5, and the (12, 4, 16, 8)-game has length 3. See Figures 2 and 3.
Fig. 3: The (12, 4, 16, 8)-game has length 3.
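These lengths are easy to check by machine. The following short Python sketch (my own, not from the paper) iterates the rule in the quadruple form used later in the paper, S_i = (|a − d|, |a − b|, |b − c|, |c − d|), and counts steps until the zero game is reached.

```python
def step(a, b, c, d):
    """One step of the Four Numbers game in quadruple form."""
    return abs(a - d), abs(a - b), abs(b - c), abs(c - d)

def length(square):
    """Number of steps until the zero game (0, 0, 0, 0) is reached."""
    n = 0
    while any(square):
        square = step(*square)
        n += 1
    return n

print(length((6, 9, 0, 5)))      # 5
print(length((12, 4, 16, 8)))    # 3
print(length((11, 9, 7, 3)))     # 7
```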
Each of the examples we have considered has ended in finitely many steps. Such games are said to have finite length. If a game does not end in finitely many steps, then it is said to have infinite length.
An interesting observation is that the length of a game is not necessarily dependent on the size of the numbers involved. For example the (11, 9, 7, 3)-game has length 7 but the (17231205, 61305, 5371311, 322)-game only has length 4. At first glance it would be easy to jump to the conclusion that the second game would be longer due to the larger numbers and the fact that they are further apart from one another.
The Four Numbers game is complex, and we will need to examine its behavior carefully to understand the possible long-term outcomes of the game. We will start by considering the symmetry inherent to the game.
2. Symmetries of a Square
In general, we use the word symmetry when describing objects, regions or patterns that exhibit similarity in size, shape, color, etc. In art, symmetry describes something that is visually appealing under certain guidelines. With respect to math, we generally describe a figure or expression as symmetric if its elements or parts can be interchanged in some logical way without affecting the overall object.
In order to apply symmetry to the Four Numbers game, consider the symmetry of a square. (This section on the symmetries of a square follows the presentation provided by Gallian.) Imagine a glass square with corners labeled 1, 2, 3, 4. Suppose we pick the glass square up off of a surface and move it around in some way before placing it back down. We are not able to change the position of the labeled corners, but we can move the square in such a way that changes the order in which we see the labels. Consider rotation. If we pick up the square and rotate it 90° in the clockwise direction, we have maintained the order of the labels but their locations have changed. We can rotate the square 90°, 180°, or 270° before repeating the original positions of the labels; see Figure 4.
Fig. 4: All of the possible rotations of the square.
We can also consider reflecting the square about the axes of the square. That is, we can pick up the square and rotate it in three-dimensional space, as opposed to the two-dimensional rotations seen in Figure 4. These three-dimensional rotations about the horizontal, vertical or one of the two diagonal axes result in the reflection of the labels of the corners. See Figure 5 for a clear visual representation of the reflections of the square.
Fig. 5: All of the possible reflections of the square.
We do not necessarily have to pick up the square and move it in only one way before setting it back down. We are able to move the square in more than one way, a move in one way followed by a move in another. In fact, applying two such moves on the square defines a "multiplication" on the set of symmetries of the square. This multiplication is an important notion in group theory because the set of symmetries of the square together with the multiplication exhibits a nice structure known as a dihedral group. The set of the eight symmetries of the square is known as the dihedral group of order 8, or D8. The notion of a dihedral group can be generalized to describe the symmetries of any regular polygon having n sides. As it turns out, dihedral groups are a staple of group theory and much can be said about them. However, in this paper we will not need more than what has already been presented.
To illustrate an interesting and surprising observation consider the (a, b, c, d)-game. Reflect it over D2, then rotate it by R180.
Fig. 6: Top: The (a, b, c, d)-game reflected over D2 and rotated by R180.
Bottom:The (a, b, c, d)-game reflected over D1.
Again start with the (a, b, c, d)-game. Rotate it by R90, reflect it over V then over H.
Fig. 7: Top: The (a, b, c, d)-game rotated by R90, reflected over V then over H.
Bottom:The (a, b, c, d)-game rotated by R270.
These examples should motivate the observation that the resulting start square of any product of the 8 symmetries of D8 can be obtained from any one single rotation or reflection.
3. Symmetry of the Four Numbers Game
Since the Four Numbers game is defined on a square, we can use the symmetries of the square described in Section 2 to better understand the Four Numbers game and its symmetries. To begin, consider the (a, b, c, d)-game and its 4 rotations: the (a, b, c, d)-game, the (d, a, b, c)-game, the (c, d, a, b)-game and the (b, c, d, a)-game. It is not difficult to see that these 4 games are really the same game. All of the steps of the games are exactly the same, just with the labels in rotated positions. Thus it is no surprise that these 4 games have the same length. More generally, we can apply any of the eight symmetries of D8 to the (a, b, c, d)-game (see Figure 6) without changing the length of the game.
Fig.8: All of the possible rotations and reflections of the (a, b, c, d)-game under D8.
These observations about the symmetry within the Four Numbers game should motivate the following definition.
Definition 3.1. A Four Numbers game is equivalent to the (a, b, c, d)-game if it can be obtained from the (a, b, c, d)-game through rotations and/or reflections. Therefore all combinations in D8 are equivalent.
We know that for any (a, b, c, d)-game, we can rotate or reflect the start square with any combination of the symmetries of D8 without affecting the length of the game. Thus all equivalent games have the same length.
Furthermore, we can see that, in all of the symmetries presented in Figure 6, the labels a and b are next to each other. The same can be said for b and c, c and d, and d and a.
Definition 3.2. Any of these pairs of labels that are next to each other in a start square, and thus in all possible rotations or reflections of that start square, are called next neighbors.
It is clear that the symmetries of the square preserve next neighbors.
4. The Four Numbers Game Beyond the Integers
Thus far we have only seen integer-valued Four Numbers games.¹ Although this simple form of the game is fun and interesting, the curious reader will question what happens to the Four Numbers game if non-integer labels are used.
Consider a rational-valued Four Numbers game.
Fig. 9: The (6, 1/2, 3/2, 4)-game has length 5.
The rational-valued Four Numbers game is played with the same rules as the integer-valued game. In fact, the rational-valued Four Numbers game exhibits many of the same properties as the integer-valued Four Numbers game, as we will demonstrate in the next section.
While the added complexity of the rational-valued Four Numbers game may satisfy the aforementioned curious reader, an even more inquisitive reader may wonder what happens to the Four Numbers game if irrational, real-valued labels are used. In moving to the reals it will be helpful to introduce a new formulation of the game. We will no longer be only using squares to illustrate the steps of the Four Numbers game; rather, we will work with quadruples in R^4. Letting S_0 = (a_0, b_0, c_0, d_0) represent the start square of a game and S_i = (a_i, b_i, c_i, d_i) the ith step, we see then that S_i = (|a_{i−1} − d_{i−1}|, |a_{i−1} − b_{i−1}|, |b_{i−1} − c_{i−1}|, |c_{i−1} − d_{i−1}|).
Let’s consider the ( √ 2, e, π, √ 3)-game.
¹In actuality, we only considered nonnegative integer-valued Four Numbers games. However, because the iteration rule of the Four Numbers game is defined using the absolute value of differences, negative integer games quickly become positive integer games. Therefore, although negative-valued games are interesting, they are not worthy of additional analysis.
S_0 = (√2, e, π, √3)
S_1 = (|√2 − e|, |e − π|, |π − √3|, |√3 − √2|)
S_2 = (||√2 − e| − |e − π||, ||e − π| − |π − √3||, ||π − √3| − |√3 − √2||, ||√3 − √2| − |√2 − e||)
S_3 = (|||√2 − e| − |e − π|| − ||e − π| − |π − √3|||, |||e − π| − |π − √3|| − ||π − √3| − |√3 − √2|||, |||π − √3| − |√3 − √2|| − ||√3 − √2| − |√2 − e|||, |||√3 − √2| − |√2 − e|| − ||√2 − e| − |e − π|||)
S_4 = (0, 0, 0, 0)
Surprisingly, after only 4 steps the entries of the (√2, e, π, √3)-game cancel themselves out! We will continue our discussion of real-valued Four Numbers games in Sections 7 and 8.
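A quick numerical check of this computation (my own sketch, not the paper's): iterating the neighbouring-difference rule on floating-point approximations reproduces the collapse, except that the fourth step's entries are zero only up to rounding, roughly of size 10⁻¹⁶.

```python
import math

def step(a, b, c, d):
    # differences of neighbouring labels, matching the displayed S_1
    return abs(a - b), abs(b - c), abs(c - d), abs(d - a)

s = (math.sqrt(2), math.e, math.pi, math.sqrt(3))
for k in range(5):
    print(k, [round(x, 6) for x in s])
    s = step(*s)
# step 3 is four copies of about 0.105474, so step 4 is (numerically) zero
```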
5. Four Numbers Games of Finite Length
We claim that every Four Numbers game with nonnegative rational number labels has finite length. Furthermore, a maximum value for the length of any game S can be calculated from the values of the labels of S_0. In order to prove this fact we will need the following observations and lemma. (Observations 1 and 2 and Lemma 1 follow from the examples and proof outlined in Sally.)
Observation 5.1: Multiplication of the four nonnegative rational start numbers of a game by a positive integer m does not change the length of the game.
To illustrate consider the (6, 4, 9, 8)-game and (18, 12, 27, 24)-game. Both have length 5.
More generally, if m ∈ Z+ then it is not hard to see that the (a, b, c, d)-game and the (ma, mb, mc, md)-game have the same length. Consider the entries of the kth step of the (ma, mb, mc, md)-game. They are equal to m times the entries of the kth step of the (a, b, c, d)-game. If the (a, b, c, d)-game has length L, then all of the entries in step L are equal to zero, and in steps 0 through L − 1 at least one entry is nonzero. Since m is a positive integer, m · n = 0 only when n = 0. Thus if at least one entry in step L − 1 is nonzero, then m times that entry is still nonzero, so the (ma, mb, mc, md)-game does not end in the first L − 1 steps. Since the (a, b, c, d)-game has length L, all of the entries in its step L are equal to zero, and since m · 0 = 0, all the entries of the (ma, mb, mc, md)-game in step L are also equal to zero. Therefore the (ma, mb, mc, md)-game has length L as well.
Observation 5.2: If a Four Numbers game with nonnegative integer start numbers has length at least 4, then all the numbers appearing from step 4 onward are even.
To verify Observation 2 we must consider all possible cases of different combinations of even and odd numbers as the labels of a start square. It will be helpful to introduce a new labeling system. If a label for the start square is an even number, replace it with the letter 'e'. If the label is an odd number, replace it with an 'o'. For example, the (6, 9, 4, 7)-game becomes the (e, o, e, o)-game. Since there are two possible labels for each of the four corners of the square, there are 2⁴ = 16 possible starting configurations using this new labeling system.
However, some of these starting configurations are equivalent games under reflection and rotation. In particular, we can use rotations and reflections to obtain all 16 possible starting configurations from the following 6 cases of start games:
(i) (e, e, e, e)
(ii) (e, e, e, o)
(iii) (e, e, o, o)
(iv) (e, o, e, o)
(v) (e, o, o, o)
(vi) (o, o, o, o)
The following rules will be helpful: e − e = e, o − o = e, e − o = o, o − e = o.
Figures 10-12 below show that for all 6 case start games all of the labels are even in 4 or fewer steps. Therefore, with use of rotation and reflection, all 16 possible starting configurations will have all even labels in 4 or fewer steps.
Fig 10: (Left): Case (i): All labels are even after zero steps.
(Right): Case (ii): All labels are even after four steps.
Fig 11: (Left): Case (iii): All labels are even after three steps.
(Right): Case (iv): All labels are even after two steps.
9 Fig 12: (Left): Case (v): All labels are even after four steps.
(Right): Case (vi): All labels are even after one step.
Hence, Observation 2 holds true.
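The same case analysis can also be confirmed by machine. Working modulo 2 (e = 0, o = 1), one step of the game adds neighbouring labels mod 2, and every one of the 16 parity patterns reaches (e, e, e, e) within 4 steps; the following Python sketch (mine, not the paper's) checks this exhaustively.

```python
from itertools import product

def parity_step(a, b, c, d):
    # |x - y| has the same parity as x + y, so we can work mod 2
    return (a + d) % 2, (a + b) % 2, (b + c) % 2, (c + d) % 2

for start in product((0, 1), repeat=4):      # 0 = even, 1 = odd
    s = start
    for _ in range(4):
        s = parity_step(*s)
    assert s == (0, 0, 0, 0), start
print("all 16 parity patterns are all-even after at most 4 steps")
```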
Observations 1 and 2 are helpful in proving that all integer-valued Four Numbers games end in finitely many steps. In fact, we can even provide an upper bound on the length.
Lemma 5.1: Every Four Numbers game played with nonnegative integers has finite length.
In fact, if we let A be the largest of the four nonnegative start integers and if k is the least integer such that A/2^k < 1, then the game has length at most 4k.
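For instance, for the (6, 9, 0, 5)-game of Figure 2 we have A = 9, and the least k with 9/2^k < 1 is k = 4 (since 9/8 ≥ 1 but 9/16 < 1), so the lemma bounds the length by 4k = 16; the actual length is 5.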
Proof: We consider two cases: the length of the game is at most 4, or the length of the game is greater than 4.
Case 1: Let A be the largest entry of the start square and assume the length of the game is at most 4. If A = 0 then all entries must be zero and A < 1 = 2^0. We have defined k as the least integer such that A/2^k < 1, that is, A < 2^k, so in this case k = 0, and the length of the zero game (the (0, 0, 0, 0)-game) is 0 = 4 · 0. If A > 0 then k cannot equal zero, so k ≥ 1, and certainly 4 ≤ 4k when k ≥ 1, so a game of length at most 4 satisfies the bound.
Case 2: Consider a game with length t, where t > 4. We know all entries in steps 4 through t are even by Observation 2. Let A equal the largest of the four integer entries in step 4.
Because all of the entries are even, we can divide them by 2 without changing the length of the game, by Observation 1. Thus the largest integer in this new game will be A/2. If the length of the new game is s, then s + 4 = t. If s > 4 then (by applying Observation 2 again) we know that all the entries in steps 4 through s of the new game are even integers. Thus we can see that if s > 4 then t > 8 and all the entries in steps 8 through t of the original game are divisible by 4 = 2². We can repeat this process by creating another game by dividing the entries of the 4th step of the new game by 2. This yields a game with largest integer A/2² and with length q, where q + 8 = t. Hence we can say that the original game has length greater than or equal to 8, and the integer entries of steps 8 through t can be divided by 2². So if A is the largest integer in step 8 of the original game, we know that A/2² ∈ Z+.
So A/2² ≥ 1. If 2 is the largest integer J such that A/2^J ≥ 1, then A/2^{J+1} < 1. So in this case J = 2, J + 1 = 3, and A < 2³. That is, k = 3 is the smallest integer satisfying A < 2^k.
We note now that the length of the game is at most 12. Indeed, if the length of the original game were greater than 12, then the process could be repeated again so that A could be divided by 2³. But we have said that J is the largest integer such that A/2^J ≥ 1 and J = 2, so J ≠ 3. Hence the length of the game is not greater than 12. Therefore we have 8 ≤ t ≤ 12.
We have defined k = 3 to be the least integer such that A/2^k < 1, and 8 ≤ t ≤ 12 = 4 · 3 = 4k.
Therefore the length of the game is at most 4k.
□ We are now ready to prove the main result of this section.
Theorem 5.1 Every Four Numbers game played with nonnegative rational numbers has finite length.
In fact, if N is the largest numerator occurring when the four start numbers are written with a common denominator, then the length is at most 4k, where k is the least integer such that N < 2^k.
Proof: We know that every rational number can be expressed as some fraction, a/b, where a, b ∈Z and b ̸= 0. Consider the (e, f, g, h)-game, where e, f, g, h ∈Q+. We can express each of these positive rational numbers as a fraction of two positive integers. We can also manipulate the fractions so that they are expressed with a common denominator, J, where J ∈Z+. Let N ∈Z+ be the largest numerator occurring when the four start numbers are written with a common denominator J. Then N/J is the largest of the four rational start numbers. Let the length of the original (e, f, g, h)-game be l. We know from Observation 1 that we can multiply the four start numbers of a game by an integer and not change the length of the game. So if we multiply each of e, f, g and h by J then the length of the (Je, Jf, Jg, Jh)-game is still l, and Je, Jf, Jg, Jh ∈Z+. Also N = max(Je, Jf, Jg, Jh).
From Lemma 1 we know that if N is the largest of the four nonnegative integer start numbers and k is the least integer such that N/2^k < 1, then the length of the game is at most 4k, so l ≤ 4k.
□ We can conclude that any Four Numbers game with nonnegative integer or rational start labels has finite length. Next consider the Four Numbers game with long but finite length.
6. Four Numbers Games of Long but Finite Length
We have shown that all Four Numbers games with nonnegative integer or rational start labels have finite length. And we can see from our examples in Figures 2 and 3 that the (6, 9, 0, 5)-game has length 5 and the (12, 4, 16, 8)-game has length 3; both games have relatively short length. In fact, the longest game we have discussed is the (11, 9, 7, 3)-game, which has length 7. This begs the question: Are there long Four Numbers games, and if yes, how do we construct them?
In order to explore Four Numbers games of long length we will consider the Tribonacci games. Following the argument in Sally, we define a Tribonacci game as a Four Numbers game played with four successive terms of the Tribonacci sequence, where the Tribonacci sequence is defined recursively much like the Fibonacci sequence. In particular, by definition, t_0 = 0, t_1 = 1, t_2 = 1 and for all n ≥ 3, t_n = t_{n−1} + t_{n−2} + t_{n−3}.
The length of a Tribonacci game becomes longer as the labels of the game get further out in the Tribonacci sequence. We will show that we can compute the exact length of any Tribonacci game and that we can produce long games, but first we must familiarize ourselves with a few new concepts.
First, let us introduce some notation to simplify the upcoming computations. Let T_n be the (t_n, t_{n−1}, t_{n−2}, t_{n−3})-Tribonacci game. Then DT_n = (t_n − t_{n−3}, t_n − t_{n−1}, t_{n−1} − t_{n−2}, t_{n−2} − t_{n−3}), and the kth step of the T_n game is denoted D^k T_n.
Next, recall that the greatest integer function, denoted [r] for any real number r, is the greatest integer less than or equal to r. For example: [2.6] = 2, [7/2] = 3, [0.333333] = 0.
Before we are able to prove the formula for computing the length of a Tribonacci game we need the following Lemma.
Lemma 6.1: The Four Numbers game that starts with the third step of the T_n-game has the same length as the T_{n−2}-game. That is, D³T_n = 2T_{n−2}. (Recall from Observation 5.1 that we can multiply the labels of a Four Numbers game by an integer without affecting the length of the game, so L(T_{n−2}) = L(2T_{n−2}).)
Proof: Using the operator D we can compute
D³T_n = (t_n − t_{n−1} + t_{n−2} − t_{n−3}, t_n − t_{n−1} − t_{n−2} + t_{n−3}, −t_n + 3t_{n−1} − t_{n−2} − t_{n−3}, t_n − 3t_{n−1} + 3t_{n−2} − t_{n−3}),
and using Observation 5.1 we see that 2T_{n−2} = (2t_{n−2}, 2t_{n−3}, 2t_{n−4}, 2t_{n−5}). Solving t_{n−1} = t_{n−2} + t_{n−3} + t_{n−4} for t_{n−4} yields t_{n−4} = t_{n−1} − t_{n−2} − t_{n−3}, and solving t_{n−2} = t_{n−3} + t_{n−4} + t_{n−5} = t_{n−3} + (t_{n−1} − t_{n−2} − t_{n−3}) + t_{n−5} for t_{n−5} yields t_{n−5} = 2t_{n−2} − t_{n−1}. Thus, using only the terms t_n, t_{n−1}, t_{n−2}, and t_{n−3}, we can see that
2T_{n−2} = (2t_{n−2}, 2t_{n−3}, 2t_{n−1} − 2t_{n−2} − 2t_{n−3}, 4t_{n−2} − 2t_{n−1}).
Thus we can manipulate the labels of each of the games to see:
t_n − t_{n−1} + t_{n−2} − t_{n−3} = (t_{n−1} + t_{n−2} + t_{n−3}) − t_{n−1} + t_{n−2} − t_{n−3} = 2t_{n−2},
t_n − t_{n−1} − t_{n−2} + t_{n−3} = (t_{n−1} + t_{n−2} + t_{n−3}) − t_{n−1} − t_{n−2} + t_{n−3} = 2t_{n−3},
−t_n + 3t_{n−1} − t_{n−2} − t_{n−3} = −(t_{n−1} + t_{n−2} + t_{n−3}) + 3t_{n−1} − t_{n−2} − t_{n−3} = 2t_{n−1} − 2t_{n−2} − 2t_{n−3},
t_n − 3t_{n−1} + 3t_{n−2} − t_{n−3} = (t_{n−1} + t_{n−2} + t_{n−3}) − 3t_{n−1} + 3t_{n−2} − t_{n−3} = 4t_{n−2} − 2t_{n−1}.
Therefore D³T_n = 2T_{n−2}.
□ Lemma 6.2: For all n ∈ N, [(n − 2)/2] + 1 = [n/2].
Proof: Write n/2 = [n/2] + ε; since n ∈ N we know that ε = 0 or ε = 1/2. Then [n/2 − 1] = [[n/2] + ε − 1] = [n/2] − 1, whether ε = 0 or ε = 1/2.
Therefore [(n − 2)/2] + 1 = [n/2 − 1] + 1 = ([n/2] − 1) + 1 = [n/2].
□ Now we are ready for our next theorem regarding long games.
Theorem 6.1 Given any n ∈ N, the Tribonacci Four Numbers game T_n = (t_n, t_{n−1}, t_{n−2}, t_{n−3}) has length 3[n/2].
Proof: We proceed by complete induction.
For the base case, n = 3, we know that t_3 = t_2 + t_1 + t_0, that is, t_3 = 2 = 1 + 1 + 0. We can easily compute the length of the (2, 1, 1, 0)-game:
Fig. 13: L(T_3) = 3.
We see that [3/2] = 1. Thus L(T_3) = 3 = 3 · 1 = 3[3/2]. Now assume n ≥ 4 and L(T_k) = 3[k/2] for all k ∈ N such that 3 ≤ k < n, and consider L(T_n). We know from Lemma 6.1 that D³T_n = 2T_{n−2}, which is to say that L(T_n) = L(T_{n−2}) + 3. Since n ≥ 4 we know that n − 2 ≥ 2.
Thus by our inductive assumption we know that L(T_{n−2}) = 3[(n − 2)/2]. Therefore, using Lemma 6.2, we can see that
L(T_n) = L(T_{n−2}) + 3 = 3[(n − 2)/2] + 3 = 3([(n − 2)/2] + 1) = 3[n/2].
Therefore for all n ∈ N, L(T_n) = 3[n/2].
For example, consider t18 = 19,513. Then T18 = (19513, 10609, 5768, 3136), and we can use Theorem 6.1 to see that L(T18) = 3[18/2] = 27. We can also use this result to produce games of arbitrarily long length. Let us construct a Four Numbers game of length greater than 100: we need 100 < L(Tn) = 3[n/2]. Solving for n gives n/2 > 33.3, so n ≥ 67; but [67/2] = 33 yields only 3 · 33 = 99, so we must add 1 to our n. Thus L(T68) > 100; in fact L(T68) = 102.
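This computation is easy to reproduce mechanically. The following Python sketch is our own illustration (the helper names are not from the paper); it plays the game with the step rule F(a, b, c, d) = (|a − d|, |a − b|, |b − c|, |c − d|) and checks the value 3[n/2] for a range of n:

```python
def F(q):
    """One step of the Four Numbers game: F(a, b, c, d) = (|a-d|, |a-b|, |b-c|, |c-d|)."""
    a, b, c, d = q
    return (abs(a - d), abs(a - b), abs(b - c), abs(c - d))

def length(q):
    """Number of steps until (0, 0, 0, 0) is reached."""
    n = 0
    while q != (0, 0, 0, 0):
        q = F(q)
        n += 1
    return n

# Tribonacci numbers: t0 = 0, t1 = 1, t2 = 1, t_n = t_{n-1} + t_{n-2} + t_{n-3}.
t = [0, 1, 1]
while len(t) <= 70:
    t.append(t[-1] + t[-2] + t[-3])

for n in range(3, 25):
    assert length((t[n], t[n - 1], t[n - 2], t[n - 3])) == 3 * (n // 2)

print(length((t[18], t[17], t[16], t[15])))   # 27, i.e. L(T_18)
print(length((t[68], t[67], t[66], t[65])))   # 102, i.e. L(T_68)
```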
Thus by considering the Tribonacci game Tn for sufficiently large n we can construct Four Numbers games of arbitrarily large length.
7. The Four Numbers Game over the Real Numbers
Thus far our examination of the Four Numbers game has been relatively basic. Our analysis has relied largely on arithmetic and the fact that there is a minimal positive integer. Note that there is no minimal positive real number, so the Four Numbers game with real valued labels is quite different from the rational or integer cases.
In moving to the reals our analysis will become more complex, and we will use some mathematical tools from linear algebra. It is natural to think of a real-valued quadruple as a vector in R4. Thus the iteration rule for the Four Numbers game is the function F : R4 → R4 defined by F(a, b, c, d) = (|a − d|, |a − b|, |b − c|, |c − d|). Can we represent F with a matrix? That is, is F a linear transformation? Recall that F : R4 → R4 is a linear transformation if for all vectors ⃗u, ⃗v ∈ R4 and for any real number r, F(⃗u + ⃗v) = F(⃗u) + F(⃗v) and F(r⃗u) = r(F(⃗u)).
Claim: F is not a linear transformation on R4.
We can verify our claim with a counterexample. Let ⃗ u and ⃗ v be quadruples in R4. Let ⃗ u = (6, 0, 3, 1) and ⃗ v = (0, 4, 1, 9), so that ⃗ u + ⃗ v = (6, 4, 4, 10). Then F(⃗ u + ⃗ v) = (|6 −10|, |6 −4|, |4 −4|, |4 −10|) = (4, 2, 0, 6), while F(⃗ u) + F(⃗ v) = (|6 −1|, |6 −0|, |0 −3|, |3 −1|) + (|0 −9|, |0 −4|, |4 −1|, |1 −9|) = (5, 6, 3, 2) + (9, 4, 3, 8) = (14, 10, 6, 10) ̸= F(⃗ u + ⃗ v).
Thus F is not a linear transformation on R4.
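For readers who wish to reproduce the check, here is a minimal Python sketch (an illustration of ours; the tuples are exactly the ones chosen in the counterexample above):

```python
def F(q):
    """The Four Numbers step: F(a, b, c, d) = (|a-d|, |a-b|, |b-c|, |c-d|)."""
    a, b, c, d = q
    return (abs(a - d), abs(a - b), abs(b - c), abs(c - d))

u = (6, 0, 3, 1)
v = (0, 4, 1, 9)
uv = tuple(x + y for x, y in zip(u, v))            # (6, 4, 4, 10)

print(F(uv))                                       # (4, 2, 0, 6)
print(tuple(x + y for x, y in zip(F(u), F(v))))    # (14, 10, 6, 10): different, so F is not linear
```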
The reader should not be too disappointed, however, because the linearity of our function is not completely lost. If we restrict F to the vectors in R4 having strictly decreasing components, then we can salvage the linearity. Let S be the set {(a, b, c, d) ∈ R4 : a > b > c > d}. Then we can define a new function FS such that for all ⃗s = (a, b, c, d) ∈ S,
FS(⃗s) = FS(a, b, c, d) = (a − d, a − b, b − c, c − d).
Claim: FS : S → R4 is a linear transformation.
Proof: Let ⃗u, ⃗v ∈ S. So ⃗u = (g, h, i, j) and ⃗v = (k, l, m, n), where g > h > i > j and k > l > m > n. Then ⃗u + ⃗v = (g + k, h + l, i + m, j + n), and we can compute FS(⃗u), FS(⃗v) and FS(⃗u + ⃗v). Then
FS(⃗u + ⃗v) = ((g + k) − (j + n), (g + k) − (h + l), (h + l) − (i + m), (i + m) − (j + n))
= ((g − j) + (k − n), (g − h) + (k − l), (h − i) + (l − m), (i − j) + (m − n))
= (g − j, g − h, h − i, i − j) + (k − n, k − l, l − m, m − n) = FS(⃗u) + FS(⃗v).
Thus the first condition of linearity is satisfied. To show that the second condition also holds let ⃗ u ∈S. So ⃗ u = (g, h, i, j) ∈R4 such that g > h > i > j. Let r be any real number.
Then we can easily compute FS for these vectors.
FS(r⃗u) = (rg − rj, rg − rh, rh − ri, ri − rj) = r(g − j, g − h, h − i, i − j) = r(FS(⃗u)). □
Thus the second condition is satisfied and FS : S → R4 is a linear transformation, hence FS can be represented by a matrix. In fact, the linear transformation FS : S → R4 is represented by the matrix
M = [ 1  0  0 −1
      1 −1  0  0
      0  1 −1  0
      0  0  1 −1 ],
because
FS(a, b, c, d) = M (a, b, c, d)ᵀ = (a − d, a − b, b − c, c − d)ᵀ.
Since the transition rule of the Four Numbers game is now described by a matrix, we can observe that the Four Numbers game ⃗s0 = (a, b, c, d) ∈ S has length n if F_S^n(⃗s0) = M^n ⃗s0 = ⃗0. Additionally, a game ⃗s0 has infinite length if F_S^n(⃗s0) = M^n ⃗s0 ≠ ⃗0 for all n ∈ N. In the next section we will use eigenvectors to construct such a game.
8. Four Numbers Games of Infinite Length
Suppose that ⃗s0 ∈ S is an eigenvector of M for a positive eigenvalue λ. Then FS(⃗s0) = λ⃗s0 ≠ ⃗0. Applying FS again we conclude that FS(FS(⃗s0)) = λ²⃗s0 ≠ ⃗0. More generally, F_S^n(⃗s0) = λ^n ⃗s0 ≠ ⃗0. Hence no matter how many times FS is applied to ⃗s0, the zero vector is unobtainable. Thus ⃗s0 corresponds to a Four Numbers game of infinite length. It remains to show that there exists such an eigenvector ⃗s0.
The process for finding eigenvectors relies on the characteristic polynomial p(λ) of the matrix M, where p(λ) = det(M − λI4). Recall that FS(⃗s0) = M⃗s0 = λ⃗s0 has a nonzero solution if det(M − λI4) equals zero.
So compute
det(M − λI4) = det [ 1−λ   0     0    −1
                      1   −1−λ   0     0
                      0    1    −1−λ   0
                      0    0     1    −1−λ ] = λ⁴ + 2λ³ − 2λ.
Thus p(λ) = λ(λ³ + 2λ² − 2). Setting p(λ) = 0 we see that λ = 0 is a root. Using Maple to find the other roots, one root is real,
λ0 = (1/3)(19 + 3√33)^{1/3} + 4/(3(19 + 3√33)^{1/3}) − 2/3 = 0.8392867553…,
and the other two roots are the complex conjugates
λ1,2 = −(1/6)(19 + 3√33)^{1/3} − 2/(3(19 + 3√33)^{1/3}) − 2/3 ± (√3/2)((1/3)(19 + 3√33)^{1/3} − 4/(3(19 + 3√33)^{1/3})) i = −1.419643378 ± 0.6062907300 i.
Then if ⃗s0 = (a0, b0, c0, d0) is an eigenvector associated with the real eigenvalue λ0, then
[ 1−λ0   0      0     −1
  1     −1−λ0   0      0
  0      1     −1−λ0   0
  0      0      1     −1−λ0 ] (a0, b0, c0, d0)ᵀ = (0, 0, 0, 0)ᵀ.
Multiplying then yields
(a0(1 − λ0) − d0, b0(−1 − λ0) + a0, c0(−1 − λ0) + b0, d0(−1 − λ0) + c0) = (0, 0, 0, 0).
Therefore
(1 − λ0)a0 = d0,  (1 + λ0)b0 = a0,  (1 + λ0)c0 = b0,  (1 + λ0)d0 = c0.
Letting d0 = 1 we can solve the equations for a0, b0 and c0 simultaneously, resulting in ⃗s0 = ((1 + λ0)³, (1 + λ0)², 1 + λ0, 1).
Noting that 0 < λ0 < 1, we know (1 + λ0)³ > (1 + λ0)² > 1 + λ0 > 1. Thus ⃗s0 is indeed in S, and so the linear map FS may be applied to ⃗s0.
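As a numerical sanity check of this construction (our own illustration, not part of the argument; it assumes numpy is available), one can recover λ0 as the real root of λ³ + 2λ² − 2 and confirm that ⃗s0 is an eigenvector of M, so that repeated application of FS only rescales it:

```python
import numpy as np

M = np.array([[1, 0, 0, -1],
              [1, -1, 0, 0],
              [0, 1, -1, 0],
              [0, 0, 1, -1]], dtype=float)

# lam0 is the real root of p(lambda)/lambda = lambda^3 + 2*lambda^2 - 2.
roots = np.roots([1.0, 2.0, 0.0, -2.0])
lam0 = next(r.real for r in roots if abs(r.imag) < 1e-9)
print(lam0)                                  # ~0.8392867553

s0 = np.array([(1 + lam0) ** 3, (1 + lam0) ** 2, 1 + lam0, 1.0])
print(np.allclose(M @ s0, lam0 * s0))        # True: s0 is an eigenvector of M

# F_S only rescales s0 by lam0 in (0, 1), so the game never reaches zero.
q = s0.copy()
for _ in range(10):
    q = M @ q
print(q / s0)                                # each entry is (approximately) lam0 ** 10
```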
This series of computations (which follow the argument in Sally ) shows that the ma-trix M has an eigenvector ⃗ s0 and therefore there exists a real-valued Four Numbers game of infinite length. In fact, as the next theorem states, there exists an infinite set of Four Numbers games with infinite length.
Theorem 8.1 There exist infinitely many Four Numbers games of infinite length.
Proof: We have shown that the Four Numbers game with entries (a0, b0, c0, d0) = ⃗ s0, where ⃗ s0 is the eigenvector corresponding to the eigenvalue λ0, has infinite length.
Let r be a positive real number.
Consider the vector ⃗ r ∈R4 where ⃗ r = (r, r, r, r).
Then ⃗ s0 + ⃗ r = (a0 + r, b0 + r, c0 + r, d0 + r).
We can apply the function FS : S → R4 to ⃗s0 + ⃗r because ⃗s0 ∈ S, so a0 > b0 > c0 > d0, and since r > 0 we have a0 + r > b0 + r > c0 + r > d0 + r. Compute:
FS(⃗s0 + ⃗r) = ((a0 + r) − (d0 + r), (a0 + r) − (b0 + r), (b0 + r) − (c0 + r), (c0 + r) − (d0 + r))
= (a0 − d0, a0 − b0, b0 − c0, c0 − d0) = FS(⃗s0).
Since F_S^n(⃗s0) = λ0^n ⃗s0 ≠ ⃗0 for every n, it follows that F_S^n(⃗s0 + ⃗r) = F_S^{n−1}(FS(⃗s0)) = F_S^n(⃗s0) ≠ ⃗0 as well. Thus for every r > 0, ⃗s0 + ⃗r represents a different Four Numbers game of infinite length.
We know from Observation 1 that for all m ∈ Z+, L(ma, mb, mc, md) = L(a, b, c, d). A parallel argument shows that L(ka, kb, kc, kd) = L(a, b, c, d) for every positive real k. Hence ⃗s0 and k⃗s0 + ⃗r have the same (infinite) length for all k, r ∈ R+.
□ Now we know that not only does there exist an infinitely long Four Numbers game over the reals, but there are infinitely many of them! Thus we have shown that there are games of finite bounded length, long but finite length, and infinite length.
9. The Three Numbers Game? Five? Six?
While the Four Numbers game is very interesting, the question of the possibility of a Three Numbers game arises. Consider the (1, 1, 2)-game: Fig. 14: The (1, 1, 2)-game and the first 6 steps written out.
As it turns out, this Three Numbers game does not have finite length. In fact, all Three Numbers games have infinite length unless the three start numbers are equal.
A Three Numbers game that does not have three equal start numbers will, after a certain number of steps, begin to cycle through triples of the form x, x, 0 for some positive x (for the (1, 1, 2)-game, through 1, 1, and 0). Such a game can obviously never reach the (0, 0, 0) step.
Thus there are three possible lengths for any Three Numbers game, 0, 1, or ∞.
The Five Numbers game follows a similar pattern. Unless all five entries are the same, the Five Numbers game will have infinite length.
Fig. 15: The first 7 steps of the (1, 8, 4, 3, 6)-game and the first 10 steps written out.
One can experiment and conclude that the Six and Seven Numbers games do not end in finite length either, unless all of the entries are equal. Does such behavior continue for all K-Numbers games with K > 7? Let's examine the Eight Numbers game. In particular, consider the (4, 12, 6, 3, 0, 9, 1, 2)-game.
S0 = (4, 12, 6, 3, 0, 9, 1, 2)
S1 = (8, 6, 3, 3, 9, 8, 1, 2)
S2 = (2, 3, 0, 6, 1, 7, 1, 6)
S3 = (1, 3, 6, 5, 6, 6, 5, 4)
S4 = (2, 3, 1, 1, 0, 1, 1, 3)
S5 = (1, 2, 0, 1, 1, 0, 2, 1)
S6 = (1, 2, 1, 0, 1, 2, 1, 0)
S7 = (1, 1, 1, 1, 1, 1, 1, 1)
S8 = (0, 0, 0, 0, 0, 0, 0, 0)
Further experimentation will lead to more finite games. Observe that both 4 and 8 are powers of 2. In fact, it is this property of 4 and 8 that ensures the finitude of the Four and Eight Numbers games. We state without proof a theorem from Sally.³
Theorem 9.1 Let k be an integer greater than 2. Every k-Numbers game has finite length if and only if k is a positive power of 2.
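The behaviour described in this section is easy to explore experimentally. The sketch below is our own illustration (the step cap of 1000 is an arbitrary cut-off, not a proof of infinitude); it plays the k-Numbers game on the examples used above:

```python
def ducci_length(t, cap=1000):
    """Steps of cyclic absolute differences until the all-zero tuple,
    or None if the cap is reached first (suggesting an infinite game)."""
    steps = 0
    while any(t):
        if steps >= cap:
            return None
        t = tuple(abs(t[i] - t[(i + 1) % len(t)]) for i in range(len(t)))
        steps += 1
    return steps

print(ducci_length((1, 1, 2)))                   # None: the Three Numbers game cycles forever
print(ducci_length((1, 8, 4, 3, 6)))             # None: the Five Numbers game cycles forever
print(ducci_length((4, 12, 6, 3, 0, 9, 1, 2)))   # 8: the Eight Numbers game above terminates
```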
10. Conclusion
We have shown through our analysis of the Four Numbers game that games exist of finite length, arbitrarily long length and infinite length. Our observations and lemmas have yielded some very interesting characteristics of the Four Numbers game. In fact, there is even more to the Four Numbers game than discussed in this paper. With further analysis we can involve statistics to determine the probability that a Four Numbers game will end in k steps. We can also, with additional research, illustrate the connection between the Four Numbers game and Pascal's triangle. Overall, we have exhibited through detailed examination with advanced mathematics and linear algebra that the Four Numbers game is much more than a simple game of differences; it is interesting and worthy of meticulous analysis.
³The proof involves the polynomial ring Z2 and is beyond the scope of this paper; see Sally, Judith D., and Paul J. Sally, "The Four Numbers Problem," Roots to Research: A Vertical Development of Mathematical Problems.
11. Bibliography
Chamberland, Marc, and Diana M. Thomas. "The N-Number Ducci Game." Journal of Difference Equations and Applications 10 (2004): 339-42.
Fraleigh, John B., and Raymond A. Beauregard. Linear Algebra. 3rd ed. Addison-Wesley Company, 1995.
Gallian, Joseph A. “Introduction to Groups.” Contemporary Abstract Algebra. 2nd ed. D.C. Heath and Company, 1990. 23-28.
Ji, Jun, and Charles Kincey. "The Four Numbers Game and Pascal's Triangle." The Mathematical Gazette, Vol. 85, No. 503 (July 2001). The Mathematical Association.
Sally, Judith D., and Paul J. Sally. "The Four Numbers Problem." Roots to Research: A Vertical Development of Mathematical Problems.
Ullman, Daniel. "More on the Four-Numbers Game." Mathematics Magazine 65 (1992): 170-74.
Annals of Mathematics 175 (2012), 1575–1627
Vinogradov's mean value theorem via efficient congruencing
By Trevor D. Wooley
Abstract. We obtain estimates for Vinogradov's integral that for the first time approach those conjectured to be the best possible. Several applications of these new bounds are provided. In particular, the conjectured asymptotic formula in Waring's problem holds for sums of s kth powers of natural numbers whenever s ⩾ 2k² + 2k − 3.
1. Introduction
Exponential sums of large degree play a prominent role in the analysis of problems spanning the analytic theory of numbers, and in consequence the estimation of their mean values is of central significance. Some seventy-five years ago, I. M. Vinogradov obtained new estimates for such mean values by exploiting the translation-dilation invariance of associated systems of Diophantine equations. Thereby, he was able to derive powerful new estimates for exponential sums going well beyond those made available via the differencing methods of Weyl and van der Corput. Decisive progress followed in such topics as Waring's problem, the zero-free region for the Riemann zeta function and the distribution modulo 1 of polynomial sequences (see , and ).
Following a decade or so of technical improvement, Vinogradov’s mean value theorem evolved into a form little different from that familiar to present day workers, one which for problems of degree d falls short of the strength expected by a factor of order log d. In this paper we obtain significant improvements in estimates associated with Vinogradov’s mean value theorem, coming within a stone’s throw of the sharpest possible bounds. As we explain in due course, progress of a similar scale may now be realised in numerous allied problems.
In order to describe our conclusions, we must introduce some notation.
When k is a natural number and α ∈ R^k, we consider the exponential sum
(1.1) f_k(α; X) = Σ_{1⩽x⩽X} e(α₁x + ⋯ + α_k x^k),
where e(z) denotes e^{2πiz}. It follows from orthogonality that, for natural numbers s, the mean value
(1.2) J_{s,k}(X) = ∫_{[0,1)^k} |f_k(α; X)|^{2s} dα
counts the number of integral solutions of the system of equations
(1.3) x₁^j + ⋯ + x_s^j = y₁^j + ⋯ + y_s^j (1 ⩽ j ⩽ k),
with 1 ⩽ x_i, y_i ⩽ X (1 ⩽ i ⩽ s). Motivated by a heuristic application of the circle method, it is widely expected that whenever ε > 0, one should have¹
(1.4) J_{s,k}(X) ≪ X^ε (X^s + X^{2s − k(k+1)/2}).
Indeed, the discussion surrounding [30, eq. (7.5)] supplies an ε-free version of such a conjecture for k > 2. The corresponding lower bound
(1.5) J_{s,k}(X) ≫ X^s + X^{2s − k(k+1)/2},
meanwhile, is easily established (see [30, eq. (7.4)]). The main conclusion of this paper, the proof of which we complete in Section 7, is that the estimate (1.4) holds whenever s ⩾ k(k + 1).
(Supported by a Royal Society Wolfson Research Merit Award.)
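For very small parameters, J_{s,k}(X) can be computed by brute force, which makes the conjectured shape (1.4) easy to observe experimentally. The following Python sketch is an illustration only (the function name and the tiny parameter choices are ours, not part of the paper); it counts solutions of (1.3) by grouping tuples according to their power sums:

```python
from itertools import product

def J(s, k, X):
    """Number of 1 <= x_i, y_i <= X with sum_i x_i^j = sum_i y_i^j for j = 1..k."""
    counts = {}
    for xs in product(range(1, X + 1), repeat=s):
        key = tuple(sum(x ** j for x in xs) for j in range(1, k + 1))
        counts[key] = counts.get(key, 0) + 1
    # Each pair (x, y) with matching power sums is one solution, hence the sum of squares.
    return sum(c * c for c in counts.values())

# s = 3, k = 2: the exponent 2s - k(k+1)/2 equals 3, so both terms in (1.4) are X^3.
for X in (2, 4, 6, 8):
    print(X, J(3, 2, X), X ** 3)
```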
Theorem 1.1. Suppose that s and k are natural numbers with k ⩾ 2 and s ⩾ k(k + 1). Then for each ε > 0, one has J_{s,k}(X) ≪ X^{2s − k(k+1)/2 + ε}.
If valid, the conjectured bound (1.4) would imply a conclusion of the same shape as that of Theorem 1.1, provided only that s ⩾ k(k + 1)/2. In some sense, therefore, Theorem 1.1 comes within a factor 2 of the best possible result of its type. For additive Diophantine systems of large degree k, this is the first occasion on which a conclusion so close to the best possible has been established, for in all previous results one misses the conjectured bounds by a factor of order log k.
A comparison with previous results on Vinogradov's mean value theorem deserves a discussion in two parts. The original method of Vinogradov for estimating J_{s,k}(X) was refined by means of the p-adic argument of Linnik and achieved its most polished form in the work of Karatsuba and Stechkin. Thus, for each natural number s with s ⩾ k, one has a bound of the shape
(1.6) J_{s,k}(X) ⩽ D(s, k) X^{2s − k(k+1)/2 + η_{s,k}},
where D(s, k) is independent of X and η_{s,k} = (1/2)k²(1 − 1/k)^{[s/k]} ⩽ k²e^{−s/k²}.
For large integers k, the exponent η_{s,k} is appreciably smaller than 1/k as soon as s ⩾ 3k²(log k + log log k). When s is sufficiently large in terms of k, this observation permits the proof of an asymptotic formula of the shape
(1.7) J_{s,k}(X) ∼ C(s, k) X^{2s − k(k+1)/2},
wherein C(s, k) is a positive number depending at most on s and k.
Note that the positivity of C(s, k) is a consequence of the lower bound (1.5). Let V(k) denote the least natural number s for which the anticipated relation (1.7) holds. Then this classical version of Vinogradov's mean value theorem leads to the upper bound V(k) ⩽ 3k²(log k + O(log log k)) (see [30, Th. 7.4]).
¹Here and throughout, implicit constants in Vinogradov's notation ≪ and ≫ depend at most on s, k and ε, unless otherwise indicated.
The author’s thesis work , on repeated efficient differencing meth-ods led to sizeable improvements in the conclusions reported in the last para-graph. Roughly speaking, the upper bound (1.6) was established with ηs,k ≈ k2e−2s/k2 for s ⩽k2 log k and with ηs,k ≈(log k)4e−3s/(2k2) for s > k2 log k (see [37, Th. 1.2] for a precise statement). In the range critical in applications, the rate of decay of ηs,k with respect to s stemming from this progress is twice that previously available. As a consequence of these developments, we established that V (k) ⩽k2(log k + 2 log log k + O(1)) (see [42, Th. 3]). We are now able to improve matters significantly.
Define the singular series
(1.8) S(s, k) = Σ_{q=1}^∞ Σ_{1⩽a₁,…,a_k⩽q, (a₁,…,a_k,q)=1} | q^{−1} Σ_{r=1}^q e((a₁r + ⋯ + a_k r^k)/q) |^{2s}
and the singular integral
(1.9) J(s, k) = ∫_{R^k} | ∫_0^1 e(β₁γ + ⋯ + β_k γ^k) dγ |^{2s} dβ.
It transpires that the positive number C(s, k) occurring in the putative asymptotic formula (1.7) is then given by C(s, k) = S(s, k) J(s, k). In Section 9 we establish the asymptotic formula (1.7) for s ⩾ k² + k + 1.
Theorem 1.2. When k ⩾ 3, one has V(k) ⩽ k² + k + 1.
The lower bound (1.5) implies that the asymptotic formula (1.7) cannot hold for s < k(k+1)/2. The condition on s imposed in Theorem 1.2 is therefore only a factor 2 away from the best possible conclusion of its type.
The estimate recorded in Theorem 1.1 also leads to improvements in available bounds relating to Tarry's problem. When h, k and s are positive integers with h ⩾ 2, consider the Diophantine system
(1.10) Σ_{i=1}^s x_{i1}^j = Σ_{i=1}^s x_{i2}^j = … = Σ_{i=1}^s x_{ih}^j (1 ⩽ j ⩽ k).
Let W(k, h) denote the least natural number s having the property that the simultaneous equations (1.10) possess an integral solution x with
Σ_{i=1}^s x_{iu}^{k+1} ≠ Σ_{i=1}^s x_{iv}^{k+1} (1 ⩽ u < v ⩽ h).
The problem of estimating W(k, h) was investigated extensively by E. M. Wright and L.-K. Hua (see , , ), and very recently upper bounds for W(k, h) have played a role in work of Croot and Hart on the sum-product conjecture. L.-K. Hua was able to show that W(k, h) ⩽ k²(log k + O(1)) for h ⩾ 2, a conclusion improved by the present author when h = 2 with the bound W(k, 2) ⩽ (1/2)k²(log k + log log k + O(1)) (see [42, Th. 1]). We improve both estimates in Section 9.
Theorem 1.3. When h and k are natural numbers with h ⩾ 2 and k ⩾ 2, one has W(k, h) ⩽ k² + k − 2.
For small values of k, although nothing explicit appears to be available in the literature, some improvement would appear to be possible. It is a simple exercise to show that W(2, h) = 3 for h ⩾2, for example, and the methods of Section 9 of this paper combine with the estimates of [17, Chap. V.5] to confirm that 4 ⩽W(3, h) ⩽8. On the other hand, explicit numerical examples are available2 that may be applied to show that W(k, 2) = k+1 for 2 ⩽k ⩽10 and k = 12.
Next we discuss the asymptotic formula in Waring's problem. When s and k are natural numbers, we denote by R_{s,k}(n) the number of representations of the natural number n as the sum of s kth powers of positive integers. A heuristic application of the circle method suggests that for k ⩾ 3 and s ⩾ k + 1, one should have
(1.11) R_{s,k}(n) = (Γ(1 + 1/k)^s / Γ(s/k)) S_{s,k}(n) n^{s/k − 1} + o(n^{s/k − 1}),
where
(1.12) S_{s,k}(n) = Σ_{q=1}^∞ Σ_{1⩽a⩽q, (a,q)=1} ( q^{−1} Σ_{r=1}^q e(ar^k/q) )^s e(−na/q).
Under modest congruence conditions, one has 1 ≪ S_{s,k}(n) ≪ n^ε, and thus the conjectural relation (1.11) may be interpreted as an honest asymptotic formula (see [30, §§4.3, 4.5 and 4.6] for details). Let G̃(k) denote the least integer t with the property that, for all s ⩾ t, and all sufficiently large natural numbers n, one has the asymptotic formula (1.11). As a consequence of Theorem 1.1, we derive the new upper bound for G̃(k) presented in the following theorem.
²See the website
Theorem 1.4. When k ⩾ 2, one has G̃(k) ⩽ 2k² + 2k − 3.
The first to obtain a bound for G̃(k) were Hardy and Littlewood, who established the bound G̃(k) ⩽ (k − 2)2^{k−1} + 5. The sharpest bounds currently available for smaller values of k are G̃(k) ⩽ 2^k (k = 3, 4, 5), due to Vaughan, and G̃(k) ⩽ (7/8)2^k (k ⩾ 6), due to Boklan. For larger values of k, the story begins with Vinogradov, who showed that G̃(k) ⩽ 183k⁹(log k + 1)². By 1949, Hua had shown that G̃(k) ⩽ (4 + o(1))k² log k. This upper bound was improved first by the author to G̃(k) ⩽ (2 + o(1))k² log k, and most recently by Ford to G̃(k) ⩽ (1 + o(1))k² log k. The latter two authors, Parsell, and most recently Boklan and Wooley, have also computed explicit upper bounds for G̃(k) when k ⩽ 20. In particular, one has the bounds G̃(7) ⩽ 112, G̃(8) ⩽ 224 due to Boklan, and G̃(9) ⩽ 365, G̃(10) ⩽ 497, G̃(11) ⩽ 627, G̃(12) ⩽ 771, G̃(13) ⩽ 934, G̃(14) ⩽ 1112, G̃(15) ⩽ 1307, G̃(16) ⩽ 1517, G̃(17) ⩽ 1747, G̃(18) ⩽ 1992, G̃(19) ⩽ 2255, G̃(20) ⩽ 2534 due to Boklan and Wooley. The conclusion of Theorem 1.4 supersedes all of these previous results for k ⩾ 7, establishing that G̃(7) ⩽ 109, G̃(8) ⩽ 141, G̃(9) ⩽ 177, …, G̃(20) ⩽ 837.
Furthermore, the strength of Theorem 1.1 opens new possibilities for transforming estimates for Js,k(X) into bounds for auxiliary mean values suitable for investigating Waring’s problem. This is a matter that we shall pursue further elsewhere (see ).
We turn next to estimates of Weyl type for exponential sums. Here we present conclusions of two types, one applicable to exponential sums fk(α; X) defined by (1.1) wherein a single coefficient αj is poorly approximable and a second applicable when α is poorly approximable as a k-tuple.
Theorem 1.5. Let k be an integer with k ⩾ 2, and let α ∈ R^k. Suppose that there exists a natural number j with 2 ⩽ j ⩽ k such that, for some a ∈ Z and q ∈ N with (a, q) = 1, one has |α_j − a/q| ⩽ q^{−2} and q ⩽ X^j. Then one has
f_k(α; X) ≪ X^{1+ε}(q^{−1} + X^{−1} + qX^{−j})^{σ(k)},
where σ(k)^{−1} = 2k(k − 1).
We remark that the factor X^ε in the conclusion of Theorem 1.5 may be replaced by log(2X) if one increases σ(k)^{−1} from 2k(k − 1) to 2k² − 2k + 1.
Theorem 1.6. Let k be an integer with k ⩾ 2, and let τ and δ be real numbers with τ^{−1} > 4k(k − 1) and δ > kτ. Suppose that X is sufficiently large in terms of k, δ and τ, and further that |f_k(α; X)| ⩾ X^{1−τ}. Then there exist integers q, a₁, …, a_k such that 1 ⩽ q ⩽ X^δ and |qα_j − a_j| ⩽ X^{δ−j} (1 ⩽ j ⩽ k).
The conclusion of Theorem 1.5 may be compared, for smaller exponents k, with Weyl's inequality (see [30, Lemma 2.4]). The latter provides an estimate of the same shape as that of Theorem 1.5 in the special case j = k, with the exponent 2^{k−1} in place of 2k(k − 1). The conclusion of Theorem 1.5 is therefore superior to Weyl's inequality for k ⩾ 8. Subject to the condition k ⩾ 6, Heath-Brown has shown that whenever there exist a ∈ Z and q ∈ N with (a, q) = 1 and |α − a/q| ⩽ q^{−2}, then one has
(1.13) Σ_{1⩽x⩽X} e(αx^k) ≪ X^{1 − (8/3)2^{−k} + ε}(X³q^{−1} + 1 + qX^{3−k})^{(4/3)2^{−k}}.
With the same conditions on α, Robert and Sargos [24, Th. 4 et Lemme 7] have shown that for k ⩾ 8, one has
(1.14) Σ_{1⩽x⩽X} e(αx^k) ≪ X^{1 − 3·2^{−k} + ε}(X⁴q^{−1} + 1 + qX^{4−k})^{(8/5)2^{−k}}.
When k ⩾9, our conclusions in these special situations are superior to those of Heath-Brown, and those of Robert and Sargos, even for the restricted set of α for which either (1.13) or (1.14) prove superior to Weyl’s inequality. Finally, the methods of Vinogradov yield results of the type provided by Theorem 1.5 with the exponent 2k(k −1) replaced by (C + o(1))k2 log k for suitable values of C. For example, Linnik obtained the permissible value C = 22400, Hua obtained C = 4, and the sharpest bound available hitherto, due to the author , is tantamount to C = 3/2. We note also that Wooley , Ford , Parsell , and most recently Boklan and Wooley , have computed explicit upper bounds for σ(k) when k ⩽20. The conclusion of Theorem 1.5 is superior to these earlier numerical conclusions in all cases and is transparently sharper for larger values of k by a factor asymptotically of order log k. Similar comments apply to the conclusion of Theorem 1.6, a suitable reference to earlier work being [3, Chaps. 4 and 5].
Our final result concerns the distribution modulo 1 of polynomial sequences. Here, we write ∥θ∥ for min_{y∈Z} |θ − y|.
Theorem 1.7. Let k be an integer with k ⩾ 2, and define τ(k) by τ(k)^{−1} = 4k(k − 1). Then whenever α ∈ R^k and N is sufficiently large in terms of k and ε, one has
min_{1⩽n⩽N} ∥α₁n + α₂n² + ⋯ + α_k n^k∥ < N^{ε − τ(k)}.
For comparison, R. C. Baker [3, Th. 4.5] provides a similar conclusion in which the exponent 4k(k −1) is replaced by (8 + o(1))k2 log k, a conclusion subsequently improved by the author to (4 + o(1))k2 log k (see [37, Cor. 1.3]).
For smaller values of k, meanwhile, a conclusion similar to that of Theorem 1.7 is delivered by [3, Th. 5.2], but with the exponent 2^{k−1} in place of 4k(k − 1).
The conclusion of Theorem 1.7 is superior to these earlier results for k ⩾10.
Given the scale of the improvement in estimates made available via Theorem 1.1, it is natural to enquire whether it is now possible to derive visible improvements in the zero-free region for the Riemann zeta function. The estimate supplied by Theorem 1.1 has the shape J_{s,k}(X) ⩽ D(k, ε)X^{2s − k(k+1)/2 + ε} for s ⩾ k(k + 1), and the nature of the quantity D(k, ε) plays a critical role in determining the rate of growth of |ζ(σ + it)| with respect to t when σ is close to 1.
It seems clear that, while some numerical improvement will be made available via the methods underlying Theorem 1.1, such improvements will not lead to asymptotically significant improvements in the zero-free region.
We refer the reader to the work of Ford for a discussion of recent numerical improvements to which our new results may be expected to contribute.
The arguments that underlie our proof of Theorem 1.1, which in a nod to the earlier use of efficient differencing we refer to loosely as efficient congruencing methods, change little when the setting for Vinogradov's mean value theorem is shifted from Z to the ring of integers of a number field. A significant feature of our estimates in this respect is that when s ⩾ k(k + 1), one is at most a factor X^ε away from the truth. In common with Birch's application of Hua's lemma in number fields, this aspect of our estimates makes them robust to variation in the degree of the field extension, since the strength of corresponding Weyl-type estimates for exponential sums no longer plays a significant role in applications. Thus, in any number field, one is able to establish the validity of the Hasse Principle, and of Weak Approximation, for diagonal equations of degree d in 2d² + 2d + 1 or more variables, and moreover one is able to obtain the expected density of rational solutions of such equations. Hitherto, such a conclusion was available via the methods of Birch only for diagonal forms of degree d in 2^d + 1 or more variables. In a similar manner, the robustness of the efficient congruencing method permits conclusions to be drawn over function fields, such as F_q(t), matching in strength what is to be found within this paper. We intend to record the consequences of our methods for such problems in forthcoming work.
Finally, the efficient congruencing operation may be applied with success in a number of multidimensional problems related to Vinogradov’s mean value theorem. Thus, the work of Arkhipov, Chubarikov and Karatsuba, and Parsell, on exponential sums in many variables (see and , for example) may be improved in a manner no less dramatic than can be seen in the context of the version of Vinogradov’s mean value theorem described within this paper. This again is a topic to which we intend to return elsewhere.
The methods underlying our proof of Theorem 1.1 are complicated by the need to control nonsingularity constraints modulo various powers of a prime, and this control must be exercised within an iterative process a step ahead of its application. This and other complicating factors obscure the key ideas of our argument, and so we have taken the liberty of providing, in Section 2 below, a sketch of the fundamental efficient congruencing process. The reader will also find there an outline of the classical approach of Vinogradov, together with the repeated efficient differencing process. Next, in Section 3, we prepare the notation and basic notions required in our subsequent deliberations. The concern of Section 4 is an estimate for a system of basic congruences, and Section 5 describes the conditioning process required to guarantee appropriate nonsingularity conditions in subsequent steps of the efficient congruencing argument. In Section 6 we discuss the efficient congruencing process itself. We combine these tools in Section 7 with an account of the iterative process, ultimately delivering Theorem 1.1. In Section 8 we turn to our first applications, with the proof of Theorems 1.5, 1.6 and 1.7. Next, in Section 9, we consider Tarry's problem, and establish Theorems 1.2 and 1.3. Finally, in Section 10, we consider the asymptotic formula in Waring's problem and establish Theorem 1.4.
We finish in Section 11 by describing a heuristic argument that implies essentially the best possible bound of the shape (1.4).
The author is grateful to the two referees of this paper for useful comments.
2. A sketch of the efficient congruencing process
Our goal in this section is to offer an indication of the strategy underlying the efficient congruencing process key to our new bounds. At the same time, it is expedient to introduce some notation of use throughout this paper. In what follows, the letter k denotes a fixed integer exceeding 1, the letter s will be a positive integer and ε denotes a sufficiently small positive number.
We take X to be a large real number depending at most on k, s and ε, unless otherwise indicated. In an effort to simplify our analysis, we adopt the following convention concerning the number ε.
Whenever ε appears in a statement, either implicitly or explicitly, we assert that the statement holds for each ε > 0. Note that the “value” of ε may consequently change from statement to statement. Finally, we make use of vector notation in a slightly unconventional manner. Thus, we may write a ⩽z ⩽b to denote that a ⩽zi ⩽b for 1 ⩽i ⩽t, we may write z ≡w (mod p) to denote that zi ≡wi (mod p) (1 ⩽i ⩽t), or on occasion z ≡ξ (mod p) to denote that zi ≡ξ (mod p) (1 ⩽i ⩽t).
Confusion should not arise if the reader interprets similar statements in like manner.
We take this opportunity to highlight our use of an important convention throughout Sections 2–7 and Section 11. Since k is considered fixed, we usually find it convenient to drop explicit mention of k from the exponential sum fk(α; X) and its mean value Js,k(X), defined in (1.1) and (1.2), respectively.
VINOGRADOV’S MEAN VALUE THEOREM 1583 Although potentially a source of confusion, this manoeuvre removes clutter from notation already burdened by other complexities.
We refer to the exponent λs as permissible when, for each positive number ε, and for any real number X sufficiently large in terms of s, k and ε, one has Js(X) ≪Xλs+ε. Define λ∗ s to be the infimum of the set of exponents λs permissible for s and k, and then put ηs = λ∗ s −2s+ 1 2k(k+1). Thus, whenever X is sufficiently large in terms of s, k and ε, one has (2.1) Js(X) ≪Xλ∗ s+ε, where (2.2) λ∗ s = 2s −1 2k(k + 1) + ηs.
Note that, in view of the lower bound (1.5) and the trivial estimate Js(X) ⩽ X2s, one has 0 ⩽ηs ⩽1 2k(k + 1) for s ∈N. Vinogradov’s method employs the translation-dilation invariance of the system (1.3) to bound ηs+k in terms of ηs by efficiently engineering a strong congruence condition on the variables.
After Linnik, the classical approach to Vinogradov's mean value theorem imposes an initial congruence condition on the variables of the system (1.3) by dividing into congruence classes modulo p for a suitably chosen prime p. Let θ be a positive number with 0 < θ ⩽ 1/k, and consider a prime number p with X^θ < p ⩽ 2X^θ. The existence of such a prime is guaranteed by the Prime Number Theorem, or indeed by weaker results such as Bertrand's Postulate. Next, when c and ξ are nonnegative integers, and α ∈ [0, 1)^k, define
(2.3) f_c(α; ξ) = Σ_{1⩽x⩽X, x≡ξ (mod p^c)} e(ψ(x; α)),
where
(2.4) ψ(x; α) = α₁x + α₂x² + ⋯ + α_k x^k.
An application of Hölder's inequality now leads from (1.1) to the bound
(2.5) |f(α; X)|^{2s} = | Σ_{ξ=1}^{p^c} Σ_{1⩽x⩽X, x≡ξ (mod p^c)} e(ψ(x; α)) |^{2s} ⩽ (p^c)^{2s−1} Σ_{ξ=1}^{p^c} |f_c(α; ξ)|^{2s}.
Let us focus now on the mean value J_{s+k}(X) defined via (1.2). In order to save clutter, when G : [0, 1)^k → C is measurable, we write
∮ G(α) dα = ∫_{[0,1)^k} G(α) dα.
On substituting (2.5) into the analogue of (1.2) with s replaced by s + k, we find that
(2.6) J_{s+k}(X) ≪ X^{2sθ} max_{1⩽ξ⩽p} ∮ |f(α; X)^{2k} f₁(α; ξ)^{2s}| dα.
The mean value on the right-hand side of (2.6) counts the number of integral solutions of the system
(2.7) Σ_{i=1}^k (x_i^j − y_i^j) = Σ_{l=1}^s ((pu_l + ξ)^j − (pv_l + ξ)^j) (1 ⩽ j ⩽ k),
with 1 ⩽ x, y ⩽ X and (1 − ξ)/p ⩽ u, v ⩽ (X − ξ)/p. But, as a consequence of the Binomial Theorem, the validity of the equations (2.7) implies that
(2.8) Σ_{i=1}^k ((x_i − ξ)^j − (y_i − ξ)^j) = p^j Σ_{l=1}^s (u_l^j − v_l^j) (1 ⩽ j ⩽ k),
whence
(2.9) Σ_{i=1}^k (x_i − ξ)^j ≡ Σ_{i=1}^k (y_i − ξ)^j (mod p^j) (1 ⩽ j ⩽ k).
The congruences (2.9) provide the efficient congruence condition mentioned earlier, with the artificial condition modulo p imposed via (2.5) converted into a system of congruence conditions modulo p^j for 1 ⩽ j ⩽ k, as opposed merely to a system of congruence conditions modulo p. Suppose that x is well-conditioned, by which we mean that x₁, …, x_k lie in distinct congruence classes modulo p. Then, given an integral k-tuple n, the solutions of the system
Σ_{i=1}^k (x_i − ξ)^j ≡ n_j (mod p) (1 ⩽ j ⩽ k), with 1 ⩽ x ⩽ p,
may be lifted uniquely to solutions of the system
Σ_{i=1}^k (x_i − ξ)^j ≡ n_j (mod p^k) (1 ⩽ j ⩽ k), with 1 ⩽ x ⩽ p^k.
In this way, the congruences (2.9) essentially imply that
(2.10) x ≡ y (mod p^k),
provided that we inflate our estimates by the combinatorial factor k! to account for the multiplicity of solutions modulo p, together with a factor p^{k(k−1)/2} to account for solutions introduced as one considers, for a fixed n′, the possible choices for n (mod p^k) with n_j ≡ n′_j (mod p^j) (1 ⩽ j ⩽ k).
In the classical argument, one chooses θ = 1/k, so that p^k > X. Since 1 ⩽ x, y ⩽ X, one is then forced to conclude from the congruential condition (2.10) that x = y, and in (2.8) this in turn implies that
Σ_{l=1}^s (u_l^j − v_l^j) = 0 (1 ⩽ j ⩽ k).
VINOGRADOV’S MEAN VALUE THEOREM 1585 The number of solutions of this system with (1 −ξ)/p ⩽u, v ⩽(X −ξ)/p is readily seen to be O(Js(X/p)), and thus the corresponding number of solutions of (2.8) with x = y is O(XkJs(X/p)). Thus, in view of (2.1), one obtains Js+k(X) ≪(Xθ)2s+ 1 2 k(k−1)XkJs(X/p) ≪(Xθ)2s+ 1 2 k(k−1)Xk(X1−θ)λ∗ s+ε.
Since θ = 1/k, it follows from (2.2) that λ∗ s+k ⩽2(s + k) −1 2k(k + 1) + ηs+k, with ηs+k ⩽ηs(1 −1/k). On recalling the estimate Jk(X) ⩽k!Xk, stemming from Newton’s formulae, the classical bound (1.6) with ηs = 1 2k2(1 −1/k)[s/k] follows by induction.
Suppose now that we take θ < 1/k, and interpret the condition (2.10) by defining the k-tuple h by means of the relation h = (x − y)p^{−k}, so that x = y + hp^k. On substituting into (2.8), we obtain the new system
(2.11) Σ_{i=1}^k Ψ_j(y_i, h_i, p) = Σ_{l=1}^s (u_l^j − v_l^j) (1 ⩽ j ⩽ k),
where
Ψ_j(y, h, p) = p^{−j}((y + hp^k − ξ)^j − (y − ξ)^j) (1 ⩽ j ⩽ k).
The number of solutions of the system (2.11) subject to the associated conditions 1 ⩽ y ⩽ X, |h| ⩽ Xp^{−k} and (1 − ξ)/p ⩽ u, v ⩽ (X − ξ)/p may be reinterpreted by means of an associated mean value of exponential sums. An application of Schwarz's inequality³ bounds this mean value in terms of J_s(X/p) and a new mean value that counts integral solutions of the system
(2.12) Σ_{i=1}^k (Ψ_j(x_i, h, p) − Ψ_j(y_i, h, p)) = Σ_{l=1}^s (u_l^j − v_l^j) (1 ⩽ j ⩽ k),
with variables satisfying similar conditions to those above. We now have the option of repeating the process of imposing an efficient congruence condition on x and y, much as before, by pushing the variables u and v into congruence classes modulo a suitable new prime number ϖ. In this way, one may estimate J_{s+k}(X) by iteratively bounding the number of solutions of a system of type (2.12) by a similar one, wherein the polynomial Ψ_j(z) = Ψ_j(z, h, p) is replaced for 1 ⩽ j ⩽ k by one of the shape Φ_j(z, g, ϖ) = ϖ^{−j}(Ψ_j(z + gϖ^k) − Ψ_j(z)). This repeated efficient differencing process, so-called owing to its resemblance to classical Weyl differencing, delivers the more efficient choice of parameter θ ≈ k/(k² + η_s). In the most important range for s, one obtains an estimate roughly of the shape η_{s+k} ⩽ η_s(1 − 2k/(k² + η_s)), and this yields η_s ≈ k²e^{−2s/k²} (see for details).
3At the prompting of one of the referees, we point out that less pedantic readers may prefer to refer to this inequality as the Cauchy-Schwarz inequality, or arguably more precisely as Bunyakovsky’s inequality.
The strategy underlying Vinogradov's method, as seen in both its classical and repeated efficient differencing formulations, is that of transforming an initial congruence condition into a differencing step, with the ultimate aim in (2.7), and its variants such as (2.12), of forcing 2k variables to obey a diagonal condition. In this paper we instead view Vinogradov's method as an efficient generator of congruence conditions.
Thus, the initial condition modulo p amongst 2s variables underlying the mean value J_{s+k}(X) efficiently generates the stronger condition modulo p^k visible in (2.10). Our strategy now is to exploit this condition so as to push 2s variables into the same congruence class modulo p^k within a new mean value and efficiently extract from this a fresh congruence condition modulo p^{k²}. By repeating this process, one extracts successively stronger congruence conditions, and these may be expected to yield successively stronger mean value estimates.
There is a critical detail concerning which we have, thus far, remained silent.
We supposed in advance of (2.10) that the k-tuple x was well-con-ditioned, and indeed similar assumptions must be made at each point of the repeated efficient differencing process. There are several possible approaches to the challenge of ensuring this well-conditioning of variables, the most straight-forward being to preselect the prime so that the bulk of solutions are well-conditioned (see for a transparent application of this idea). The problem of ensuring well-conditioning causes considerable difficulty in the analysis of the efficient congruencing argument in this paper, for our prime is fixed, once and for all, at the outset of our argument. For now we ignore this complication so as to better expose the underlying ideas.
We now outline the repeated efficient congruencing argument. In the first instance, we take 0 < θ ⩽ 1/k². Observe that, in view of the condition (2.10), one may derive from (2.7) the upper bound
J_{s+k}(X) ≪ (X^θ)^{2s + k(k−1)/2} max_{1⩽ξ⩽p} ∮ ( Σ_{η=1}^{p^k} |f_k(α; η)|² )^k |f₁(α; ξ)|^{2s} dα.
By Hölder's inequality, therefore, one sees that
(2.13) J_{s+k}(X) ≪ (X^θ)^{2s + k(k−1)/2} (X^{kθ})^k max_{1⩽ξ⩽p} max_{1⩽η⩽p^k} I(ξ, η),
where
I(ξ, η) = ∮ |f_k(α; η)^{2k} f₁(α; ξ)^{2s}| dα.
A further application of Hölder's inequality shows that
(2.14) I(ξ, η) ⩽ ( ∮ |f₁(α; ξ)|^{2s+2k} dα )^{1 − k/s} ( ∮ |f₁(α; ξ)^{2k} f_k(α; η)^{2s}| dα )^{k/s}.
Notice that in the second mean value on the right-hand side of (2.14), there is a reversal of rôles of the generating functions f₁(α; ξ) and f_k(α; η) as compared to the corresponding mean value defining I(ξ, η). As we explain below, it is this manoeuvre that permits repeated application of the congruencing step.
The first integral on the right-hand side of (2.14) counts the number of integral solutions of the system
Σ_{i=1}^{s+k} ((pu_i + ξ)^j − (pv_i + ξ)^j) = 0 (1 ⩽ j ⩽ k),
with (1 − ξ)/p ⩽ u, v ⩽ (X − ξ)/p. An application of the Binomial Theorem shows this to be O(J_{s+k}(X/p)). By orthogonality, meanwhile, the second integral is bounded above by the number of solutions of the system
(2.15) Σ_{i=1}^k (x_i^j − y_i^j) = Σ_{l=1}^s ((p^k u_l + η)^j − (p^k v_l + η)^j) (1 ⩽ j ⩽ k),
with 1 ⩽ x, y ⩽ X, x ≡ y ≡ ξ (mod p) and (1 − η)/p^k ⩽ u, v ⩽ (X − η)/p^k.
As in the classical treatment sketched above, it follows as a consequence of the Binomial Theorem that the validity of the equations (2.15) implies that
Σ_{i=1}^k ((x_i − η)^j − (y_i − η)^j) = p^{jk} Σ_{l=1}^s (u_l^j − v_l^j) (1 ⩽ j ⩽ k),
whence
(2.16) Σ_{i=1}^k (x_i − η)^j ≡ Σ_{i=1}^k (y_i − η)^j (mod p^{jk}) (1 ⩽ j ⩽ k).
The system (2.16) provides an even more efficient congruence condition than that offered by (2.9), tempered with a slightly diminished return stemming from the fact that the x_i and y_i all lie in the common congruence class ξ modulo p. On the face of it, the latter unequivocally prevents these variables being well-conditioned. However, let us assume for now that x₁, …, x_k are distinct modulo p², and likewise y₁, …, y_k. It transpires that on this occasion, one may lift solutions modulo p² to solutions modulo p^{k²}. Indeed, the congruences (2.16) essentially imply that
(2.17) x ≡ y (mod p^{k²}),
provided that one inserts a compensating factor k!(p^{k+1})^{k(k−1)/2} into the concomitant estimates. At this point one could repeat the whole process, employing (2.17) to engineer a fresh congruence condition modulo p^{k³}, then modulo p^{k⁴}, and so on. However, in order to illuminate this efficient congruencing argument, we examine instead the consequences of the assumption that θ = 1/k².
In such circumstances, one has p^{k²} > X, and so it follows from (2.17) that x = y. Since x ≡ y ≡ ξ (mod p), the number of possible choices for x and y is O((X/p)^k). Substituting into (2.15), we deduce that
∮ |f₁(α; ξ)^{2k} f_k(α; η)^{2s}| dα ≪ (X^θ)^{k(k²−1)/2} (X^{1−θ})^k ∮ |f_k(α; η)|^{2s} dα
(2.18) ≪ (X^θ)^{k(k²−1)/2} (X^{1−θ})^k J_s(X/p^k).
If we now substitute (2.18) into (2.14), we obtain
I(ξ, η) ≪ (J_{s+k}(X/p))^{1−k/s} ( (X^θ)^{k(k²−1)/2} (X^{1−θ})^k J_s(X/p^k) )^{k/s},
whence, in view of (2.1), it follows from (2.13) that
J_{s+k}(X) ≪ ( (X^θ)^{2s+2k−k(k+1)/2} (X^{1−θ})^{λ*_{s+k}+ε} )^{1−k/s} × ( (X^θ)^{2ks−k(k+1)/2+2k+k(k²−1)/2} (X^{1−θ})^k (X^{1−kθ})^{λ*_s+ε} )^{k/s}.
Consequently, from (2.2) we discern the upper bound
J_{s+k}(X) ≪ (X^{η_{s+k}(1−θ)})^{1−k/s} ( X^{−k+k³θ} X^{η_s(1−kθ)} )^{k/s} X^{2s+2k−k(k+1)/2+ε}.
Recall that θ = 1/k². Since λ*_{s+k} = 2s + 2k − k(k + 1)/2 + η_{s+k} is an infimal exponent, it follows that for a sequence of values of X tending to ∞, one has
X^{η_{s+k}−ε} ≪ X^ε (X^{1−1/k²})^{(1−k/s)η_{s+k}} (X^{1−1/k})^{(k/s)η_s},
whence for each positive number ε, one has
η_{s+k} ⩽ (1 − k/s)(1 − 1/k²)η_{s+k} + (k/s)(1 − 1/k)η_s + ε.
Noting again the infimal definition of λ*_{s+k}, we therefore deduce that
(2.19) η_{s+k} ⩽ (1 − 1/k)η_s / (1 + (s/k − 1)(1/k²)).
Provided that s is no larger than about k^{5/2}, a modest computation leads from the iterative relation (2.19) to the upper bound η_{s+k} ⩽ (1 − s/k³)η_s ⩽ e^{−s/k³}η_s. One therefore sees that η_s is no larger than about k²e^{−(1/2)(s/k²)²}. By comparison with the classical bound η_s ⩽ k²e^{−s/k²} mentioned following (1.6), one has considerable additional decay in the upper bound for η_s as soon as s is a little larger than k². Indeed, even an estimate of this quality would establish, for example, that G̃(k) ≪ k²(log k)^{1/2}, greatly improving the bound G̃(k) ⩽ (1 + o(1))k² log k due to Ford.
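The modest computation alluded to here is easily reproduced numerically. The following Python sketch is our own illustration and is only heuristic, since (2.19) was derived under the simplifying assumptions of this outline; it iterates (2.19) starting from the trivial bound η_k ⩽ k(k − 1)/2 and compares the outcome with the classical decay k²e^{−s/k²} quoted after (1.6):

```python
import math

k = 10
eta = {k: k * (k - 1) / 2}          # trivial starting bound, from J_k(X) <= k! X^k
s = k
while s <= 4 * k ** 2:
    # one application of the iterative relation (2.19)
    eta[s + k] = (1 - 1 / k) * eta[s] / (1 + (s / k - 1) / k ** 2)
    s += k

for s in range(k ** 2, 3 * k ** 2 + 1, 5 * k):
    classical = k ** 2 * math.exp(-s / k ** 2)
    print(s, round(eta[s], 4), round(classical, 4))
```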
For each natural number N, the pursuit of an N-fold repeated efficient congruencing process delivers bounds with the approximate shape
η_{s+k} ⩽ η_s / (1 + (s/k)^N (1/k^{N+1})).
When s > k², it is apparent that the upper bound on the right-hand side here converges to zero as N goes to infinity. Such a bound comes close to delivering Theorem 1.1. Two serious obstructions remain. The first is the removal of the assumption throughout that variables are suitably well-conditioned whenever this is essential. Since our auxiliary prime number p is fixed, once and for all, at the opening of our argument, we are forced to engineer well-conditioning directly using this single prime p.
Such has the potential to weaken sub-stantially our conclusions, and we are forced to consider a complex iterative process rather difficult to control. The second obstruction is less severe. The condition s > k2 must be replaced by s = k2, and owing to the possibility of ill-conditioned solutions, a direct approach would be successful, at best, only when s ⩾k2 + k. Once again, therefore, we are forced to negotiate delicate issues associated with a complex iterative process.
3. Preliminary manoeuvres
We begin in this section with some notation and definitions of use in our subsequent discussion. Let k be a fixed integer with k ⩾ 2, and let δ be a small positive number. We consider a natural number u with u ⩾ k, and we put s = uk. Our goal is to show that λ*_{s+k} = 2(s + k) − k(k + 1)/2, whence η_{s+k} = 0. In view of the infimal definition of λ*_{s+k}, there exists a sequence of natural numbers (X_n)_{n=1}^∞, tending to infinity, with the property that
(3.1) J_{s+k}(X_n) > X_n^{λ*_{s+k} − δ} (n ∈ N).
Provided that X_n is sufficiently large, we have also for X_n^{δ²} < Y ⩽ X_n the corresponding upper bounds
(3.2) J_t(Y) < Y^{λ*_t + δ} (t = s, s + k).
Notice that when s > k², the trivial inequality |f(α; X)| ⩽ X leads from (1.2) to the upper bound
J_{s+k}(X) ⩽ X^{2(s−k²)} ∮ |f(α; X)|^{2k(k+1)} dα ⩽ X^{2(s−k²)} J_{k(k+1)}(X).
It then follows from the above discussion that whenever s > k², one has η_{s+k} ⩽ η_{k(k+1)}. With an eye toward future applications, we shall continue to consider general values of s with s ⩾ k² until the very climax of the proof of Theorem 1.1, and only at that point specialise to the situation with s = k². As we have just shown, the desired conclusion when s > k² is an easy consequence of this special case. Finally, we take N to be a natural number sufficiently large in terms of s and k, and we put θ = (1/2)(k/s)^{N+1}. Note that we are at liberty to take δ to be a positive number with δ < (Ns)^{−3N}, so that δ is in particular small compared to θ. We focus now on a fixed element X = X_n of the sequence (X_n), which we may assume to be sufficiently large in terms of s, k, N and δ, and put M = X^θ. Thus we have X^δ < M^{1/N}.
Let p be a fixed prime number with M < p ⩽ 2M to be chosen in due course. That such a prime exists is a consequence of the Prime Number Theorem. We will find it necessary to consider well-conditioned k-tuples of integers belonging to distinct congruence classes modulo a suitable power of p. Denote by Ξ_c(ξ) the set of k-tuples (ξ₁, …, ξ_k), with 1 ⩽ ξ_i ⩽ p^{c+1} and ξ_i ≡ ξ (mod p^c) (1 ⩽ i ⩽ k), and satisfying the property that ξ_i ≡ ξ_j (mod p^{c+1}) for no i and j with 1 ⩽ i < j ⩽ k. In addition, write Σ_k = {1, −1}^k, and consider an element σ of Σ_k. Recalling the definition (2.3), we then put
(3.3) F_c^σ(α; ξ) = Σ_{ξ∈Ξ_c(ξ)} Π_{i=1}^k f_{c+1}(σ_i α; ξ_i).
Two mixed mean values play leading roles in our arguments. When a and b are nonnegative integers, and σ, τ ∈ Σ_k, we define
(3.4) I_{a,b}^σ(X; ξ, η) = ∮ |F_a^σ(α; ξ)² f_b(α; η)^{2s}| dα
and
(3.5) K_{a,b}^{σ,τ}(X; ξ, η) = ∮ |F_a^σ(α; ξ)² F_b^τ(α; η)^{2u}| dα.
It is convenient then to put
(3.6) I_{a,b}(X) = max_{1⩽ξ⩽p^a} max_{1⩽η⩽p^b} max_{σ∈Σ_k} I_{a,b}^σ(X; ξ, η)
and
(3.7) K_{a,b}(X) = max_{1⩽ξ⩽p^a} max_{1⩽η⩽p^b} max_{σ,τ∈Σ_k} K_{a,b}^{σ,τ}(X; ξ, η).
Notice here that these mean values depend on our choice of p. However, since we will shortly fix this choice of p once and for all, we suppress mention of this prime when referring to Ia,b(X) and Ka,b(X).
Our arguments are simplified considerably by making transparent the relationship between various mean values on the one hand and the anticipated magnitude of these mean values on the other. Of course, such a concept may not be well defined, and so we indicate in what follows quite concretely what is intended. We define the normalised magnitude of a mean value M relative to its anticipated size M* to be M/M*, a quantity we denote by ⟦M⟧. In particular, we define
(3.8) ⟦J_t(X)⟧ = J_{t,k}(X) / X^{2t − k(k+1)/2} (t = s, s + k),
and when 0 ⩽ a < b, we define
(3.9) ⟦I_{a,b}(X)⟧ = I_{a,b}(X) / ( (X/M^b)^{2s} (X/M^a)^{2k − k(k+1)/2} ), ⟦K_{a,b}(X)⟧ = K_{a,b}(X) / ( (X/M^b)^{2s} (X/M^a)^{2k − k(k+1)/2} ).
Note that the lower bound (3.1) implies that
(3.10) ⟦J_{s+k}(X)⟧ > X^{η_{s+k} − δ},
while the upper bound (3.2) ensures that, whenever X^{δ²} < Y ⩽ X, one has
(3.11) ⟦J_t(Y)⟧ < Y^{η_t + δ} (t = s, s + k).
Mean values of the exponential sum fc(α; ξ) are easily bounded by ex-ploiting the translation-dilation invariance of the solution sets of the system of equations (1.3). The argument is relatively familiar, though we provide details for the sake of completeness.
Lemma 3.1. Suppose that c is a nonnegative integer with cθ ⩽ 1. Then for each natural number t, one has
(3.12) max_{1⩽ξ⩽p^c} ∮ |f_c(α; ξ)|^{2t} dα ≪_t J_t(X/M^c).
Proof. Let ξ be an integer with 1 ⩽ ξ ⩽ p^c. From the definition (2.3) of the exponential sum f_c(α; ξ), one has
f_c(α; ξ) = Σ_{(1−ξ)/p^c ⩽ y ⩽ (X−ξ)/p^c} e(ψ(p^c y + ξ; α)),
in which ψ(z; α) is given by (2.4). By orthogonality, therefore, one finds that the integral on the left-hand side of (3.12) counts the number of integral solutions of the system of equations
(3.13) Σ_{i=1}^t (p^c y_i + ξ)^j = Σ_{i=1}^t (p^c z_i + ξ)^j (1 ⩽ j ⩽ k),
with 0 ⩽ y, z ⩽ (X − ξ)/p^c. An application of the Binomial Theorem shows that the pair y, z satisfies (3.13) if and only if it satisfies the system
Σ_{i=1}^t y_i^j = Σ_{i=1}^t z_i^j (1 ⩽ j ⩽ k).
Thus, on considering the underlying Diophantine system and recalling (1.1) and (1.2), we find that
∮ |f_c(α; ξ)|^{2t} dα ⩽ ∮ |1 + f(α; X/p^c)|^{2t} dα ≪_t 1 + ∮ |f(α; X/p^c)|^{2t} dα = 1 + J_t(X/p^c).
The desired conclusion follows on noting that diagonal solutions alone ensure that Jt(X/Mc) ⩾1.
□ Our next preparatory manoeuvre concerns the initiation of the iterative procedure, and it is here that we fix our choice for p. It is convenient here and elsewhere to write 1 for the k-tuple (1, . . . , 1).
Lemma 3.2. There exists a prime number p with M < p ⩽ 2M for which J_{s+k}(X) ≪ M^{2s} I_{0,1}(X).
Proof. The quantity Js+k(X) counts the number of integral solutions of the system s+k X i=1 (xj i −yj i ) = 0 (1 ⩽j ⩽k), with 1 ⩽x, y ⩽X. Let T0 denote the number of such solutions in which xi = xj for some i and j with 1 ⩽i < j ⩽k, and let T1 denote the corresponding number of solutions with xi = xj for no i and j with 1 ⩽i < j ⩽k.
On considering the underlying Diophantine system, one finds that T0 ≪ I f(2α; X)f(α; X)s+k−2f(−α; X)s+k dα, whence by H¨ older’s inequality, it follows that T0 ≪ I |f(α; X)|2s+2k dα 1−1/(s+k)I |f(2α; X)|2s+2k dα 1/(2s+2k) .
Thus, by a change of variables, we obtain the upper bound (3.14) T0 ≪(Js+k(X))1−1/(2s+2k).
Consider next a solution x, y counted by T1. Write ∆(x) = Y 1⩽i<j⩽k |xi −xj|, and note that 0 < ∆(x) < Xk(k−1). Let P denote any set of [k3/θ] + 1 distinct prime numbers with M < p ⩽2M. Such a set exists by the Prime Number Theorem. It follows that Y p∈P p > Mk3/θ = Xk3 > ∆(x), VINOGRADOV’S MEAN VALUE THEOREM 1593 and hence at least one of the elements of P does not divide ∆(x). In particular, there exists a prime p ∈P for which xi ≡xj (mod p) for no i and j with 1 ⩽ i < j ⩽k. On considering the underlying Diophantine system, we therefore see that T1 ≪ X p∈P I F1 0(α; 0)f(α; X)sf(−α; X)s+k dα.
Therefore, as a consequence of Schwarz’s inequality, one finds that T1 ≪max p∈P I |F1 0(α; 0)2f(α; X)2s| dα 1/2I |f(α; X)|2s+2k dα 1/2 = max p∈P I |F1 0(α; 0)2f0(α; 0)2s| dα 1/2Ä Js+k(X) 1/2 .
In this way, we deduce that a prime number p with M < p ⩽2M exists for which (3.15) T1 ≪(I0,0(X))1/2(Js+k(X))1/2.
On recalling that Js+k(X) = T0 + T1, we find from (3.14) and (3.15) that (3.16) Js+k(X) ≪1 + I0,0(X) ≪I0,0(X).
Next, we split the summation in the definition (2.3) of f0(α; 0) into arith-metic progressions modulo p. Thus we obtain f0(α; 0) = p X ξ=1 f1(α; ξ), whence by H¨ older’s inequality one has |f0(α; 0)|2s ⩽p2s−1 p X ξ=1 |f1(α; ξ)|2s.
It therefore follows from (3.4) and (3.6) that (3.17) I0,0(X) ≪M2s max 1⩽ξ⩽p max σ∈Σk I |Fσ 0 (α; 0)2f1(α; ξ)2s| dα ⩽M2sI0,1(X).
The conclusion of the lemma is obtained by substituting (3.17) into (3.16).
□ We now fix the prime number p, once and for all, so that the upper bound Js+k(X) ≪M2sI0,1(X) holds.
4. The auxiliary system of congruences
The efficient congruencing process delivers a strong congruence condition on a subset of variables. In order to be useful in further congruencing activities, this condition must be converted into a restriction of certain variables to higher level arithmetic progressions. It is to this task that we attend in the present section.
When σ ∈ Σ_k, denote by B_{a,b}^σ(m; ξ, η) the set of solutions of the system of congruences
(4.1) Σ_{i=1}^k σ_i(z_i − η)^j ≡ m_j (mod p^{jb}) (1 ⩽ j ⩽ k),
with 1 ⩽ z ⩽ p^{kb} and z ≡ ξ (mod p^{a+1}) for some ξ ∈ Ξ_a(ξ).
Lemma 4.1. Suppose that a and b are nonnegative integers with b > a.
Then max 1⩽ξ⩽pa max 1⩽η⩽pb max σ∈Σk card Ä Bσ a,b(m; ξ, η) ä ⩽k!p 1 2 k(k−1)(a+b).
Proof. Consider fixed integers a and b with 0 ⩽a < b, a fixed k-tuple σ ∈Σk, and fixed integers ξ and η with 1 ⩽ξ ⩽pa and 1 ⩽η ⩽pb. Denote by D1(n) the set of solutions of the system of congruences (4.2) k X i=1 σi(zi −η)j ≡nj (mod pkb) (1 ⩽j ⩽k), with 1 ⩽z ⩽pkb and z ≡ξ (mod pa+1) for some ξ ∈Ξa(ξ). Then it follows from (4.1) that we have card(Bσ a,b(m; ξ, η)) = X n1≡m1 (mod pb) 1⩽n1⩽pkb . . .
X nk≡mk (mod pkb) 1⩽nk⩽pkb card(D1(n)).
Counting the number of k-tuples n with 1 ⩽n ⩽pkb for which nj ≡mj (mod pjb) (1 ⩽j ⩽k), therefore, we see that (4.3) card(Bσ a,b(m; ξ, η)) ⩽p 1 2 k(k−1)b max 1⩽n⩽pkb card(D1(n)).
We now examine the system (4.2). We begin by rewriting each variable zi in the shape zi = payi+ξ. In view of the hypothesis that z ≡ξ (mod pa+1) for some ξ ∈Ξa(ξ), we find that the k-tuple y satisfies the condition that yi ≡yj (mod p) for no i and j with 1 ⩽i < j ⩽k. With this substitution in (4.2), we find by the Binomial Theorem that the set of solutions D1(n) is in bijective correspondence with the set of solutions of the system of congruences (4.4) j X l=0 Çj l å (ξ −η)j−lpla k X i=1 σiyl i ≡nj (mod pkb) (1 ⩽j ⩽k), with 1 ⩽y ⩽pkb−a and yi ≡yj (mod p) for no i and j with 1 ⩽i < j ⩽k.
Let y = w be any solution of this system, if indeed a solution exists. Then it follows from (4.4) that all other solutions y satisfy the system of congruences (4.5) j X l=0 Çj l å (ξ −η)j−lpla k X i=1 σi(yl i −wl i) ≡0 (mod pkb) (1 ⩽j ⩽k).
VINOGRADOV’S MEAN VALUE THEOREM 1595 By taking linear combinations of the congruences here, we find that the system (4.5) is equivalent to the new system k X i=1 σiyj i ≡ k X i=1 σiwj i (mod pkb−ja) (1 ⩽j ⩽k).
Next, we write D2(u) for the set of solutions of the system of congruences k X i=1 σiyj i ≡uj (mod pkb−ja) (1 ⩽j ⩽k), with 1 ⩽y ⩽pkb−a and yi ≡yj (mod p) for no i and j with 1 ⩽i < j ⩽k.
Then it follows from our discussion thus far that (4.6) card(D1(n)) ⩽ max 1⩽u⩽pkb−a card(D2(u)).
Denote by D3(v) the set of solutions of the system of congruences (4.7) k X i=1 σiyj i ≡vj (mod pkb−a) (1 ⩽j ⩽k), with 1 ⩽y ⩽pkb−a and yi ≡yj (mod p) for no i and j with 1 ⩽i < j ⩽k.
Then we have card(D2(u)) ⩽ X v1≡u1 (mod pkb−a) 1⩽v1⩽pkb−a . . .
X vk≡uk (mod pkb−ka) 1⩽vk⩽pkb−a card(D3(v)).
Counting the number of k-tuples v with 1 ⩽v ⩽pkb−a for which vj ≡uj (mod pkb−ja) (1 ⩽j ⩽k), therefore, we deduce that card(D2(u)) ⩽p 1 2 k(k−1)a max 1⩽v⩽pkb−a card(D3(v)).
Consequently, in combination with (4.3) and (4.6), we have shown thus far that (4.8) card(Bσ a,b(m; ξ, η)) ⩽p 1 2 k(k−1)(a+b) max 1⩽v⩽pkb−a card(D3(v)).
Suppose now that y = z is any solution of (4.7) belonging to D3(v), if one exists. Then all other solutions y satisfy the system k X i=1 σiyj i ≡ k X i=1 σizj i (mod pkb−a) (1 ⩽j ⩽k).
Let I denote the set of indices i with 1 ⩽i ⩽k for which σi = 1, and let J denote the corresponding set of indices for which σi = −1. Then this system of congruences is equivalent to the new system X i∈I yj i + X l∈J zj l ≡ X i∈I zj i + X l∈J yj l (mod pkb−a) (1 ⩽j ⩽k).
We are at liberty to assume that p > k. Consequently, from Newton's formulae relating the sums of powers of the roots of a polynomial with its coefficients, we find that
Π_{i∈I}(t − y_i) Π_{l∈J}(t − z_l) ≡ Π_{j∈I}(t − z_j) Π_{m∈J}(t − y_m) (mod p^{kb−a}).
But zl ≡zm (mod p) for no l and m with 1 ⩽l < m ⩽k. Then for each j with j ∈I, by putting t = zj we deduce that Y i∈I (zj −yi) Y l∈J (zj −zl) ≡0 (mod pkb−a), whence for some i with i ∈I, one has yi ≡zj (mod pkb−a). Similarly, for each l with l ∈J , we deduce that for some m with m ∈J , one has ym ≡zl (mod pkb−a). It follows that the sets {y1, . . . , yk} and {z1, . . . , zk} are mutually congruent modulo pkb−a, whence card(D3(v)) ⩽k!.
The conclusion of the lemma now follows at once from (4.8).
□
5. The conditioning process
The mean value I_{a,b}^σ(X; ξ, η), defined via (3.4), is already in a form suitable for the extraction of an efficient congruence.
Unfortunately, however, one would be poorly positioned to extract the next efficient congruence following the one at hand were one not to plan ahead by conditioning the auxiliary variables encoded by the exponential sum fb(α; η). In this section we show that the factor fb(α; η)2s occurring in (3.4) can, in essence, be replaced by the conditioned factor Fτ b (α; η)2u. The latter involves k-tuples of variables in residue classes distinct modulo pb+1 and is suitable for subsequent congruencing operations.
Lemma 5.1. Let $a$ and $b$ be integers with $b > a \geqslant 0$. Then one has $I_{a,b}(X) \ll K_{a,b}(X) + M^{k-1} I_{a,b+1}(X)$.
Proof. Consider fixed integers ξ and η with 1 ⩽ξ ⩽pa and 1 ⩽η ⩽pb, and a k-tuple σ ∈Σk. Then on considering the underlying Diophantine system, one finds from (3.4) that Iσ a,b(X; ξ, η) counts the number of integral solutions of the system (5.1) k X i=1 σi(xj i −yj i ) = s X l=1 (vj l −wj l ) (1 ⩽j ⩽k), with 1 ⩽x, y, v, w ⩽X, v ≡w ≡η (mod pb) and satisfying the property that there exist ξ, ζ ∈Ξa(ξ) for which x ≡ξ (mod pa+1) and y ≡ζ (mod pa+1).
VINOGRADOV’S MEAN VALUE THEOREM 1597 Let T1 denote the number of integral solutions x, y, v, w of the system (5.1), counted by Iσ a,b(X; ξ, η), in which the 2s integers v1, . . . , vs and w1, . . . , ws together lie in at most k −1 distinct residue classes modulo pb+1, and let T2 denote the corresponding number of solutions in which the 2s integers v1, . . . , vs and w1, . . . , ws together occupy at least k distinct residue classes modulo pb+1.
Then we have Iσ a,b(X; ξ, η) ⩽T1 + T2.
On considering the underlying Diophantine system, it is apparent that T1 ≪ X 1⩽η1,...,ηk−1⩽pb+1 η≡η (mod pb) X 0⩽e⩽2s I |Fσ a (α; ξ)2fb+1(α; η1)e1 . . . fb+1(α; ηk−1)ek−1| dα, in which the summation over e is subject to the condition e1 + e2 + · · · + ek−1 = 2s.
In view of the elementary inequality |z1 · · · zn| ⩽|z1|n + · · · + |zn|n, we find that |fb+1(α; η1)e1 · · · fb+1(α; ηk−1)ek−1| ⩽ k−1 X i=1 |fb+1(α; ηi)|2s.
Thus we deduce that T1 ≪ X 1⩽η1,...,ηk−1⩽pb+1 η≡η (mod pb) k−1 X i=1 I |Fσ a (α; ξ)2fb+1(α; ηi)2s| dα (5.2) ≪pk−1 max 1⩽η0⩽pb+1 Iσ a,b+1(X; ξ, η0).
We turn our attention next to the solutions x, y, v, w counted by T2. The 2s integers v1, . . . , vs and w1, . . . , ws now together contain at least k distinct residue classes modulo pb+1. By relabelling variables if necessary, therefore, there is no loss of generality in supposing that v1, . . . , vk lie in distinct residue classes modulo pb+1.
We emphasise here that, not only the indices of the elements vi may need adjustment in this relabelling process, but also certain elements wj may need to be renamed as one of the new integers v1, . . . , vk. In the former cases, the associated signs indexed by τi will be +1, and in the latter cases they will be −1. On considering the underlying Diophantine system, we thus deduce that for some τ ∈Σk, one has T2 ≪ I |Fσ a (α; ξ)|2Fτ b (α; η)fb(α; η)s−r+fb(−α; η)s−r−dα.
1598 TREVOR D. WOOLEY Here, we have written r+ for the number of the coordinates of τ that are +1 and r−for the number that are −1. Thus, in particular, one has r+ + r−= k.
On recalling that s = uk, an application of H¨ older’s inequality leads from here to the bound T2 ≪ I |Fσ a (α; ξ)2Fτ b (α; η)2u| dα 1/(2u)I |Fσ a (α; ξ)2fb(α; η)2s| dα 1−1/(2u) .
Hence, in view of the definitions (3.4) and (3.5), we arrive at the estimate (5.3) T2 ≪(Kσ,τ a,b (X; ξ, η))1/(2u)(Iσ a,b(X; ξ, η))1−1/(2u).
Combining (5.2) and (5.3), and recalling (3.6) and (3.7), we deduce that Ia,b(X) ≪Mk−1Ia,b+1(X) + (Ka,b(X))1/(2u)(Ia,b(X))1−1/(2u).
The conclusion of the lemma now follows on disentangling this inequality.
□ Repeated application of Lemma 5.1 shows that whenever $a$, $b$ and $H$ are nonnegative integers with $b > a \geqslant 0$, then
(5.4)   $I_{a,b}(X) \ll \sum_{h=0}^{H-1} M^{h(k-1)} K_{a,b+h}(X) + M^{H(k-1)} I_{a,b+H}(X).$
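The passage from Lemma 5.1 to (5.4) is a mechanical unrolling of the recursion. The following sketch is illustrative only: the symbols K(h) and I(h) are stand-ins for $K_{a,b+h}(X)$ and $I_{a,b+h}(X)$ with $a$ fixed and $b = 0$, implicit constants are suppressed, and the values of k and H are arbitrary choices. It confirms symbolically that H applications of the inequality of Lemma 5.1 reproduce the right-hand side of (5.4).

    import sympy as sp

    # Symbolic placeholders (assumed names), with implicit constants dropped.
    M = sp.symbols('M', positive=True)
    K, I = sp.Function('K'), sp.Function('I')
    k, H = 3, 4  # illustrative degree and number of applications of Lemma 5.1

    def unroll(h, steps):
        # One application of Lemma 5.1: I(h) <= K(h) + M**(k-1) * I(h+1).
        if steps == 0:
            return I(h)
        return K(h) + M**(k - 1) * unroll(h + 1, steps - 1)

    expanded = sp.expand(unroll(0, H))
    expected = sum(M**(h * (k - 1)) * K(h) for h in range(H)) + M**(H * (k - 1)) * I(H)
    print(sp.expand(expanded - expected) == 0)   # True: this is the shape of (5.4)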
Since for large values of H, quantities of the type Ia,b+H(X) are an irritant to our argument, we show in the next lemma that values of H exceeding 1 2(b −a) are harmless.
Lemma 5.2. Let $a$, $b$ and $H$ be nonnegative integers with $0 < \frac{1}{2}(b-a) \leqslant H \leqslant \theta^{-1} - b$. Then one has
$M^{H(k-1)} I_{a,b+H}(X) \ll M^{-H/2} (X/M^b)^{2s} (X/M^a)^{2k - \frac{1}{2}k(k+1) + \eta_{s+k}}.$
Proof. On considering the underlying Diophantine systems, it follows from (3.3) and (3.4) that when 1 ⩽ξ ⩽pa and 1 ⩽η ⩽pb+H, and σ ∈Σk, one has Iσ a,b+H(X; ξ, η) ⩽ I |fa(α; ξ)2kfb+H(α; η)2s| dα.
Then an application of H¨ older’s inequality in combination with Lemma 3.1 leads to the upper bound Iσ a,b+H(X; ξ, η) ⩽ I |fa(α; ξ)|2s+2k dα k/(s+k)I |fb+H(α; η)|2s+2k dα s/(s+k) ≪(Js+k(X/Ma))k/(s+k)(Js+k(X/Mb+H))s/(s+k).
Consequently, in view of (3.2), we have Ia,b+H(X) ≪((X/Ma)k/(s+k)(X/Mb+H)s/(s+k))2s+2k−1 2 k(k+1)+ηs+k+δ (5.5) ≪Xδ(X/Ma)2k−1 2 k(k+1)+ηs+k(X/Mb)2sΥ, VINOGRADOV’S MEAN VALUE THEOREM 1599 where Υ = (Mb−a+H) 1 2 k(k+1)s/(s+k)M−2sH.
But when s ⩾k2 and H ⩾1 2(b −a), one has s s + k Ä 2(s + k) −1 2k(k + 1) ä H ⩾ s s + k Ä3 2k(k + 1) ä H ⩾ s s + k Ä1 2k(k + 1) ä (b −a) + 1 2k2H.
Thus we see that for k ⩾2, one has MH(k−1)Υ ⩽MH(k−1−1 2 k2) ⩽M−H, whence XδMH(k−1)Υ ⩽M−H/2.
The conclusion of the lemma follows on substituting this estimate into (5.5).
□ Combining Lemma 5.2 with the upper bound (5.4), we may conclude as follows. Here, as usual, when β ∈R, we write ⌈β⌉for the least integer no smaller than β.
Lemma 5.3. Let $a$ and $b$ be integers with $0 \leqslant a < b$, and put $H = \lceil \frac{1}{2}(b-a) \rceil$. Suppose that $b + H \leqslant \theta^{-1}$. Then there exists an integer $h$ with $0 \leqslant h < H$ having the property that
$I_{a,b}(X) \ll M^{h(k-1)} K_{a,b+h}(X) + M^{-H/2} (X/M^b)^{2s} (X/M^a)^{2k - \frac{1}{2}k(k+1) + \eta_{s+k}}.$
By making use of the special case of Lemma 5.3 in which a = 0 and b = 1, we are able to refine Lemma 3.2 into a form more directly applicable.
Lemma 5.4. One has $J_{s+k}(X) \ll M^{2s} K_{0,1}(X)$.
Proof. Observe first that when a = 0 and b = 1, then ⌈1 2(b −a)⌉= 1.
Thus we deduce from Lemma 5.3 that I0,1(X) ≪K0,1(X) + M−1/2(X/M)2sX2k−1 2 k(k+1)+ηs+k.
Since we may suppose that M1/2 > X4δ, it follows from Lemma 3.2 that Js+k(X) ≪M2sI0,1(X) ≪M2sK0,1(X) + X2s+2k−1 2 k(k+1)+ηs+k−2δ.
But in view of (3.10), we have Js+k(X) ≫X2s+2k−1 2 k(k+1)+ηs+k−δ, and hence we arrive at the upper bound Js+k(X) ≪M2sK0,1(X) + X−δJs+k(X).
The conclusion of the lemma follows on disentangling this inequality.
□

6. The efficient congruencing step

The mean value $K_{a,b}(X)$ contains a powerful latent congruence condition.
Our task in this section is to convert this condition into one that may be exploited by means of an iterative procedure.
Lemma 6.1. Suppose that $a$ and $b$ are integers with $0 \leqslant a < b \leqslant \theta^{-1}$. Then one has
$K_{a,b}(X) \ll M^{\frac{1}{2}k(k-1)(b+a)} (M^{kb-a})^{k} \big( J_{s+k}(X/M^b) \big)^{1-k/s} \big( I_{b,kb}(X) \big)^{k/s}.$
Proof. Consider fixed integers ξ and η with 1 ⩽ξ ⩽pa and 1 ⩽η ⩽pb, and k-tuples σ, τ ∈Σk.
Then on considering the underlying Diophantine system, one finds from (3.5) that Kσ,τ a,b (X; ξ, η) counts the number of integral solutions of the system (6.1) k X i=1 σi(xj i −yj i ) = u X l=1 k X m=1 τm(vj lm −wj lm) (1 ⩽j ⩽k), in which, for some ξ, ζ ∈Ξa(ξ), one has 1 ⩽x, y ⩽X, x ≡ξ (mod pa+1) and y ≡ζ (mod pa+1), and for 1 ⩽l ⩽u, for some ηl, νl ∈Ξb(η), one has 1 ⩽vl, wl ⩽X, vl ≡ηl (mod pb+1) and wl ≡νl (mod pb+1).
By applying the Binomial Theorem, we see that the system (6.1) is equivalent to the new system of equations (6.2) k X i=1 σi((xi −η)j −(yi −η)j) = u X l=1 k X m=1 τm((vlm −η)j −(wlm −η)j) (1 ⩽j ⩽k).
But in any solution x, y, v, w counted by Kσ,τ a,b (X; ξ, η), one has v ≡w ≡η (mod pb). We therefore deduce from (6.2) that (6.3) k X i=1 σi(xi −η)j ≡ k X i=1 σi(yi −η)j (mod pjb) (1 ⩽j ⩽k).
Recall the notation from the preamble to Lemma 4.1, and write Gσ a,b(α; ξ, η; m) = X ζ∈Bσ a,b(m;ξ,η) k Y i=1 fkb(σiα; ζi).
Then on considering the underlying Diophantine system, it follows from (6.1) and (6.3) that (6.4) Kσ,τ a,b (X; ξ, η) = pb X m1=1 · · · pkb X mk=1 I |Gσ a,b(α; ξ, η; m)2Fτ b (α; η)2u| dα.
VINOGRADOV’S MEAN VALUE THEOREM 1601 An application of Cauchy’s inequality in combination with Lemma 4.1 yields the upper bound |Gσ a,b(α; ξ, η; m)|2 ⩽card(Bσ a,b(m; ξ, η)) X ζ∈Bσ a,b(m;ξ,η) k Y i=1 |fkb(α; ζi)|2 (6.5) ≪M 1 2 k(k−1)(a+b) X ζ∈Bσ a,b(m;ξ,η) k Y i=1 |fkb(α; ζi)|2.
Next, on substituting (6.5) into (6.4) and considering the underlying Diophan-tine system, we deduce that (6.6) Kσ,τ a,b (X; ξ, η) ≪M 1 2 k(k−1)(a+b) X 1⩽ζ⩽pkb ζ≡ξ (mod pa) I k Y i=1 |fkb(α; ζi)|2 |Fτ b (α; η)|2u dα.
Notice that the utility of the conditioning of the two initial blocks of k variables in (6.1) has now expired, and indeed in (6.6) this conditioning is abandoned without ill consequences for the argument to follow.
Observe next that by H¨ older’s inequality, one has X 1⩽ζ⩽pkb ζ≡ξ (mod pa) k Y i=1 |fkb(α; ζi)|2 = X 1⩽ζ⩽pkb ζ≡ξ (mod pa) |fkb(α; ζ)|2k ⩽(pkb−a)k−1 X 1⩽ζ⩽pkb ζ≡ξ (mod pa) |fkb(α; ζ)|2k.
Then it follows from (6.6) that (6.7) Kσ,τ a,b (X; ξ, η) ≪M 1 2 k(k−1)(a+b)(Mkb−a)k max 1⩽ζ⩽pkb I |fkb(α; ζ)2kFτ b (α; η)2u| dα.
On recalling that s = uk, an application of H¨ older’s inequality supplies the bound (6.8) I |fkb(α; ζ)2kFτ b (α; η)2u| dα ⩽U 1−k/s 1 U k/s 2 , where U1 = I |Fτ b (α; η)|2u+2 dα and U2 = I |Fτ b (α; η)2fkb(α; ζ)2s| dα.
1602 TREVOR D. WOOLEY On considering the underlying Diophantine system, it follows from Lemma 3.1 that U1 ⩽ I |fb(α; η)|2s+2k dα ≪Js+k(X/Mb).
Thus, on recalling the definition (3.4), we find that I |fkb(α; ζ)2kFτ b (α; η)2u| dα ≪(Js+k(X/Mb))1−k/s(Iτ b,kb(X; η, ζ))k/s ≪(Js+k(X/Mb))1−k/s(Ib,kb(X))k/s.
Finally, on substituting the latter estimate into (6.7), the conclusion of the lemma is immediate.
□ Before proceeding further, we pause to extract a crude but simple bound for Ka,b(X) of value when b is large.
Lemma 6.2. Suppose that $a$ and $b$ are integers with $0 \leqslant a < b \leqslant \theta^{-1}$. Then one has
$[\![ K_{a,b}(X) ]\!] \ll X^{\eta_{s+k}+\delta} (M^{b-a})^{\frac{1}{2}k(k+1)},$
in which $[\![ \,\cdot\, ]\!]$ denotes the normalised mean value introduced in (3.9).
Proof. Consider fixed integers ξ and η with 1 ⩽ξ ⩽pa and 1 ⩽η ⩽pb, and k-tuples σ, τ ∈Σk. On considering the underlying Diophantine system and applying H¨ older’s inequality, we deduce from (3.5) that Kσ,τ a,b (X; ξ, η) ⩽ I |fa(α; ξ)2kfb(α; η)2s| dα ⩽ I |fa(α; ξ)|2s+2k dα k/(s+k)I |fb(α; η)|2s+2k dα s/(s+k) .
In view of the hypothesis b ⩽θ−1, we therefore deduce from Lemma 3.1 that Ka,b(X)≪(Js+k(X/Ma))k/(s+k)(Js+k(X/Mb))s/(s+k).
Consequently, on recalling (3.9) and (3.11), it follows that ≪ Xδ Ä (X/Ma)k/(s+k)(X/Mb)s/(s+k)ä2s+2k−1 2 k(k+1)+ηs+k (X/Mb)2s(X/Ma)2k−1 2 k(k+1) ≪Xηs+k+δ(Mb−a) 1 2 k(k+1)s/(s+k).
The conclusion of the lemma is now immediate.
□ By substituting the estimate supplied by Lemma 5.3 into the conclusion of Lemma 6.1, we obtain the basic iterative relation.
Lemma 6.3. Suppose that $a$ and $b$ are integers with $0 \leqslant a < b \leqslant \frac{2}{3}(k\theta)^{-1}$. Put $H = \lceil \frac{1}{2}(k-1)b \rceil$. Then there exists an integer $h$, with $0 \leqslant h < H$, having the property that
$[\![ K_{a,b}(X) ]\!] \ll X^{\delta} M^{-7kh/4} \big( (X/M^b)^{\eta_{s+k}} \big)^{1-k/s} [\![ K_{b,kb+h}(X) ]\!]^{k/s} + M^{-kH/(3s)} (X/M^b)^{\eta_{s+k}}.$
Proof. On recalling (3.9), it follows from Lemma 6.1 that (6.9) ≪(Mb)2s(Ma)2k−1 2 k(k+1)M 1 2 k(k−1)(b+a)(Mkb−a)kT 1−k/s 1 T k/s 2 , where T1 = Js+k(X/Mb) X2s+2k−1 2 k(k+1) and T2 = Ib,kb(X) X2s+2k−1 2 k(k+1) .
But in view of (3.2), one has (6.10) T1 ≪(M−b)2s+2k−1 2 k(k+1)(X/Mb)ηs+k+δ.
Write H = ⌈1 2(k −1)b⌉, and note that the hypotheses of the statement of the lemma ensure that kb + H ⩽kb + 1 2(k −1)b + 1 2 ⩽3 2kb ⩽θ−1.
Consequently, it follows from Lemma 5.3 that there exists an integer h with 0 ⩽h < H having the property that T2 ≪Mh(k−1)Kb,kb+h(X) X2s+2k−1 2 k(k+1) + M−H/2(X/Mb)ηs+k (Mkb)2s(Mb)2k−1 2 k(k+1) .
On recalling (3.9), we therefore see that (6.11) T2 ≪(M−kb)2s(M−b)2k−1 2 k(k+1)Ω, in which we have written Ω= M−(2s−k+1)h + M−H/2(X/Mb)ηs+k.
Substituting (6.10) and (6.11) into (6.9), we deduce that ≪Mω(a,b)(X/Mb)(1−k/s)(ηs+k+δ)Ωk/s, in which we have written ω(a, b) = 2sb + (2k −1 2k(k + 1))a + 1 2k(k −1)(b + a) + k(kb −a) −(1 −k/s)(2s + 2k −1 2k(k + 1))b −(2skb + (2k −1 2k(k + 1))b)k/s.
A modicum of computation reveals that ω(a, b) = 0, and thus we may infer that ≪(M−H/2)k/s(X/Mb)ηs+k+δ(1−k/s) + XδM−(2s−k+1)hk/s(X/Mb)ηs+k(1−k/s)k/s.
1604 TREVOR D. WOOLEY The conclusion of the lemma follows on noting that δ may be assumed small enough that (X/Mb)δ(1−k/s) ≪MkH/(6s), and further that the assumptions s ⩾k2 and k ⩾2 together imply that 2s −k + 1 ⩾7 4s.
□

7. The iterative process

The estimate supplied by Lemma 5.4 bounds $J_{s+k}(X)$ in terms of $K_{0,1}(X)$, and Lemma 6.3 relates $K_{a,b}(X)$, for $b > a \geqslant 0$, to $K_{b,kb+h}(X)$ for some integer $h$ with $0 \leqslant h \leqslant \frac{1}{2}(k-1)b$. By repeatedly applying Lemma 6.3, therefore, we are able to bound $J_{s+k}(X)$ in terms of the quantity $K_{c,d}(X)$, with $c$ and $d$ essentially as large as we please. Unfortunately, this process is not particularly simple to control, largely owing to the possibility that at any point in our iteration, a value of $h$ in the expression $K_{b,kb+h}(X)$ may be forced upon us with $h > 0$. This defect in our procedure may accelerate us too rapidly towards the final step of the iteration. Our goal in this section, therefore, is to control the iterative process at a fine enough level that its potential is not substantially eroded.
Lemma 7.1. Suppose that $a$ and $b$ are integers with $0 \leqslant a < b \leqslant \frac{2}{3}(k\theta)^{-1}$. Suppose, in addition, that there exist nonnegative numbers $\psi$, $c$ and $\gamma$, with $c \leqslant (2s/k)^N$, for which
(7.1)   $X^{\eta_{s+k}(1+\psi\theta)} \ll X^{c\delta} M^{-\gamma} [\![ K_{a,b}(X) ]\!].$
Then, for some nonnegative integer $h$ with $h \leqslant \frac{1}{2}(k-1)b$, one has
$X^{\eta_{s+k}(1+\psi'\theta)} \ll X^{c'\delta} M^{-\gamma'} [\![ K_{a',b'}(X) ]\!],$
where $\psi' = (s/k)\psi + (s/k - 1)b$, $c' = (s/k)(c+1)$, $\gamma' = (s/k)\gamma + \frac{7}{4}sh$, $a' = b$ and $b' = kb + h$.
Proof. Since we may suppose that c ⩽(2s/k)N and δ < (Ns)−3N, we have cδ < s−2N/3 < θ/(3s), and hence Xcδ < M1/(3s). In addition, one has M1/(3s) > Xδ. Consequently, it follows from Lemma 6.3 that there exists an integer h with 0 ⩽h < ⌈1 2(k−1)b⌉ with the property that ≪M−k/(3s)Xηs+k +XδM−7kh/4(X/Mb)(1−k/s)ηs+kk/s.
In view of the hypothesised upper bound (7.1), therefore, we deduce that Xηs+k(1+ψθ) ≪Xηs+k−δ+X(c+1)δM−γ−7kh/4(X/Mb)(1−k/s)ηs+kk/s, whence Xηs+k(k/s+(ψ+(1−k/s)b)θ) ≪X(c+1)δM−γ−7kh/4k/s.
The conclusion of the lemma follows on raising left- and right-hand sides here to the power $s/k$.
□ Repeated application of Lemma 7.1 provides a series of upper bounds for ηs+k. What remains is to ensure that the upper bound b ⩽2 3(kθ)−1, required by the hypotheses of the lemma, does not preclude the possibility of making many iterations.
Lemma 7.2. Whenever $s \geqslant k^2$, one has $\eta_{s+k} = 0$.
Proof. In the final moments of our proof, we find it convenient to restrict s to be k2. However, our argument is made more illuminating by avoiding this restriction in the opening stages. We may suppose that ηs+k > 0, for otherwise there is nothing to prove. We begin by defining three sequences (an), (bn), (hn) of nonnegative integers for 0 ⩽n ⩽N. We put a0 = 0 and b0 = 1. Then, when 0 ⩽n < N, we fix any integer hn with 0 ⩽hn ⩽1 2(k −1)bn, and then define (7.2) an+1 = bn and bn+1 = kbn + hn.
Next we define the auxiliary sequences (ψn), (cn), (γn) of nonnegative real numbers for 0 ⩽n ⩽N by putting ψ0 = 0, c0 = 1, γ0 = 0.
Then, for $0 \leqslant n < N$, we define
(7.3)   $\psi_{n+1} = (s/k)\psi_n + (s/k - 1)b_n,$
(7.4)   $c_{n+1} = (s/k)(c_n + 1),$
(7.5)   $\gamma_{n+1} = (s/k)\gamma_n + \tfrac{7}{4}s h_n.$
Notice here that an inductive argument readily confirms that for $0 \leqslant n \leqslant N$, one has
$c_n = \frac{2s-k}{s-k} \Big( \frac{s}{k} \Big)^{n} - \frac{s}{s-k} \leqslant \Big( 2 + \frac{1}{k-1} \Big) \Big( \frac{s}{k} \Big)^{n} \leqslant 3(s/k)^{n}.$
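As a numerical sanity check on these recursions, the sketch below iterates (7.2) to (7.5) and confirms the closed form for $c_n$, the bound $c_n \leqslant 3(s/k)^n$, the lower bound (7.8) for $\gamma_n$ appearing below, and the lower bound $\psi_n \geqslant n(k-1)k^{n-1}$ used at the end of the proof. It is not part of the original argument: the choices $k = 4$, $s = k^2$, the horizon N and the randomly drawn values of $h_n$ are assumptions made purely for illustration; the checked inequalities hold for every admissible choice of the $h_n$.

    import random

    # Illustrative parameters (assumed): s = k**2, so r = s/k is an integer.
    k, N = 4, 12
    s = k * k
    r = s // k
    random.seed(0)

    a, b = 0, 1                # a_0 = 0, b_0 = 1
    psi, c, gamma = 0, 1, 0    # psi_0 = 0, c_0 = 1, gamma_0 = 0

    for n in range(N):
        # Closed form for c_n and the bound c_n <= 3 (s/k)^n stated in the text.
        assert c == ((2 * s - k) * r**n - s) // (s - k)
        assert c <= 3 * r**n
        # (7.8): gamma_n >= (7/4) k^2 (b_n - (s/k)^n), written without fractions.
        assert 4 * gamma >= 7 * k * k * (b - r**n)
        # psi_n >= n (k-1) k^(n-1), the growth rate used in the closing argument.
        if n:
            assert psi >= n * (k - 1) * k**(n - 1)

        h = random.randint(0, (k - 1) * b // 2)   # any h_n with 0 <= h_n <= (k-1) b_n / 2
        psi = r * psi + (r - 1) * b               # (7.3)
        gamma = r * gamma + (7 * s * h) // 4      # (7.5); exact since 4 divides s here
        c = r * (c + 1)                           # (7.4)
        a, b = b, k * b + h                       # (7.2)

    print("all recursion checks passed")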
We claim that a choice may be made for the sequence $(h_n)$ in such a manner that for $0 \leqslant n \leqslant N$, one has
(7.6)   $b_n < 2(s/k)^n$
and
(7.7)   $X^{\eta_{s+k}(1+\psi_n\theta)} \ll X^{c_n\delta} M^{-\gamma_n} [\![ K_{a_n,b_n}(X) ]\!].$
When $n = 0$, the validity of the relation (7.6) follows by definition, whilst (7.7) is immediate from (3.9), (3.10) and Lemma 5.4, since the latter together imply that $X^{\eta_{s+k}-\delta} \ll [\![ K_{0,1}(X) ]\!]$.
1606 TREVOR D. WOOLEY We prepare the ground for the treatment of larger indices n with a pre-liminary discussion of the recurrence relations (7.2) to (7.5). Observe first that when m ⩾0, one has γm+1 −7 4k2bm+1 ⩾γm+1 −7 4sbm+1 = (s/k)(γm −7 4k2bm).
But γ0 −7 4k2b0 = −7 4k2, and so it follows by induction that when 0 ⩽m ⩽N, one has (7.8) γm ⩾7 4k2(bm −(s/k)m).
Suppose now that the desired conclusions (7.6) and (7.7) have been es-tablished for the index n < N.
Then as a consequence of (7.6) one has kbnθ < k(s/k)n−N−1 < 2 3, whence bn < 2 3(kθ)−1. We may therefore apply Lemma 7.1 to deduce from (7.7) that there exists a nonnegative integer h, with h ⩽1 2(k −1)bn, for which one has the upper bound (7.9) Xηs+k(1+ψ′θ) ≪Xc′δM−γ′, where (7.10) a′ = bn = an+1, b′ = kbn + h, ψ′ = (s/k)ψn + (s/k −1)bn = ψn+1, c′ = (s/k)(cn + 1) = cn+1, γ′ = (s/k)γn + 7 4sh.
(7.11) Let us suppose, if possible, that b′ ⩾2(s/k)n+1. The relations (7.10) and (7.11) then combine with (7.8) to show that γ′ = (s/k)γn + 7 4s(b′ −kbn) (7.12) ⩾(s/k)(γn −7 4k2bn) + 7 4k2b′ ⩾7 4k2(b′ −(s/k)n+1) ⩾7 8k2b′.
But b′ = kbn + h ⩽3 2kbn < θ−1, and so it follows from Lemma 6.2 that (7.13) ≪Xηs+k+δ(Mb′) 1 2 k(k+1).
Thus, on substituting (7.12) and (7.13) into (7.9), we arrive at the upper bound Xηs+k(1+ψn+1θ) ≪Xηs+k+(cn+1+1)δ(Mb′) 1 2 k(k+1)−7 8 k2.
We now recall that cn+1 ⩽3(s/k)n+1 and thus confirm that X(cn+1+1)δ < M1/2.
In this way, we obtain the upper bound Xηs+kψn+1θ ≪M−1/2. Since ψn+1 and θ are both positive, we are forced to conclude that ηs+k < 0, contradicting our opening hypothesis.
The assumption that b′ ⩾2(s/k)n+1 is therefore untenable, and so we must in fact have b′ < 2(s/k)n+1. We take hn to be the integer h at hand, so that b′ = bn+1 and γ′ = γn+1, and thereby we obtain the VINOGRADOV’S MEAN VALUE THEOREM 1607 desired conclusion that (7.6) and (7.7) hold with n replaced by n + 1. This completes the present inductive step.
At this point, we have confirmed the validity of the relations (7.6) and (7.7) for 0 ⩽n ⩽N. We next bound the sequences occurring in (7.7) so as to extract a suitable conclusion. The bound cn ⩽3(s/k)n has already been confirmed, and the lower bound γn ⩾0 already suffices for our purposes at this stage. In addition, the relation (7.2) plainly implies that bn ⩾kn, whence from (7.3) we deduce that for s ⩾k2, one has ψn+1 ⩾kψn + (k −1)kn, and by induction this delivers the lower bound ψn ⩾n(k −1)kn−1. Finally, we find from (7.6) that bNθ < k/s < 1, whence bN < θ−1. Making use of Lemma 6.2, therefore, we find from (7.7) that (7.14) Xηs+k(1+ψNθ) ≪Xηs+k+(cN+1)δ(MbN ) 1 2 k(k+1) ≪Xηs+k+k2.
But since θ = 1 2(k/s)N+1, it follows that ηs+k ⩽ k2 ψNθ ⩽2k2(s/k)N+1 N(k −1)kN−1 .
It is at this point only that we restrict s to be k2, and thus we obtain the upper bound ηk(k+1) ⩽2k4/N. But we are at liberty to take N as large as we please in terms of k, and thus ηk(k+1) can be made arbitrarily small. We are therefore forced to conclude that in fact ηk(k+1) = 0. But then, as in the discussion of the opening paragraph of Section 3, we may conclude that ηs = 0 whenever s ⩾k(k + 1). This completes the proof of the lemma.
□ We have now reached the crescendo of this opus, for in view of (2.1) and (2.2), the conclusion of Lemma 7.2 already establishes Theorem 1.1.
A perusal of the proof of Lemma 7.2 might give the impression that it is critical to the success of our iterative process that s = k2, and that the method is inherently unstable. This notion is, however, mistaken. If one were to have s > 3 2k2, then one easily reaches the conclusion that ηs+k = 0 simply by comparing the rates of growth of ψn and bn in the above argument. Such a procedure can also be adapted, with care, to the range s > 5 4k2. It is only when k2 ⩽s ⩽5 4k2 that the behaviour of the sequences (bn) and (ψn), depending as they do on (hn), become so difficult to control. The restriction to the case s = k2 should, therefore, be seen rather as a simplifying manoeuvre rather than an inescapable mandate.
8. Estimates of Weyl type

The derivation of our upper bounds for Weyl sums, and the application of these estimates to analyse the distribution of polynomials modulo 1, is easily accomplished by applying Theorem 1.1 within results familiar from the literature. We are therefore concise in our discussion of the associated arguments.
The proof of Theorem 1.5. With the hypotheses of the statement of The-orem 1.5, it follows from [30, Th. 5.2] that for each natural number s, one has fk(α; X) ≪(Js,k−1(2X)X 1 2 k(k−1)(q−1 + X−1 + qX−j))1/(2s) log(2X).
But from Theorem 1.1 it follows that when s = k(k −1), one has Js,k−1(2X) ≪X2s−1 2 k(k−1)+ε, and thus fk(α; X) ≪X1+ε(q−1 + X−1 + qX−j)1/(2k(k−1)).
As we shall find in Section 9 below, when s ⩾k2 −k + 1, one has also the ε-free upper bound Js,k−1(X) ≪X2s−1 2 k(k−1), and in like manner this delivers the estimate fk(α; X) ≪X(q−1 + X−1 + qX−j)1/(2k2−2k+2) log(2X).
□ The proof of Theorem 1.6. One may establish Theorem 1.6 by applying the argument underlying the proofs of [3, Ths. 4.3 and 4.4]. Let ε be a suffi-ciently small positive number. We begin by fixing τ to be a positive number with τ ⩽1/(4k(k −1)) −ε and then put A = X1−τ. Observe next that Theo-rem 1.1 shows that one may replace θ by ε in the case l = k of [3, Th. 4.3]. In this way, we find that the hypotheses of the statement of Theorem 1.6 imply that there exist coprime pairs of integers qj, bj (2 ⩽j ⩽k) such that qj ⩾1, |qjαj −bj| ⩽Xε−j(X/A)2k(k−1) (2 ⩽j ⩽k) and such that the least common multiple q0 of q2, . . . , qk satisfies q0 ⩽Xε(X/A)2k(k−1).
Notice here that Xε(X/A)2k(k−1) ⩽Xε(Xτ)2k(k−1) ⩽X 1 2 −3ε.
Write r for q0 and vj for bjq0/qj (2 ⩽j ⩽k). Then one has |rαj −vj| ⩽X2ε−j(X/A)4k(k−1) ⩽X1−j/(4k4) (2 ⩽j ⩽k).
Next, denote by d the greatest common divisor d = (r, v2, . . . , vk).
Then, with the hypotheses of the statement of Theorem 1.6, it is a consequence of [3, Lemma 4.6] that there is a natural number t with t ⩽2k2 such that trd−1 ⩽(X/A)kX3kε, t|rαj −vj|d−1 ⩽(X/A)kX3kε−j (2 ⩽j ⩽k), ∥trd−1α1∥⩽(X/A)kX3kε−1.
VINOGRADOV’S MEAN VALUE THEOREM 1609 But (X/A)k = Xkτ, and so whenever δ > kτ + 3kε, one may conclude that there exist integers q, a1, . . . , ak such that 1 ⩽q ⩽Xδ and |qαj −aj| ⩽Xδ−j (1 ⩽j ⩽k).
Since we have supposed ε to be sufficiently small, the same conclusion follows whenever δ > kτ, and so the proof of Theorem 1.6 is complete.
□ The proof of Theorem 1.7. We may apply the argument of the proof of [3, Th. 4.4], substituting the modifications available from Theorem 1.6 above and its proof.
Let δ be a positive number.
Suppose that P ≪X and (MXP −1)4k(k−1) ⩽X1−δ. Then we find that when M X m=1 |fk(mα; X)| ⩾P, then there exist integers y, u1, . . . , uk such that 1 ⩽y ⩽M(MXP −1)kXε and |yαj −uj| ⩽(MXP −1)kXε−j (1 ⩽j ⩽k).
From here, as in the proof of [3, Th. 4.5], the remaining part of our argument is straightforward. If one has (8.1) min 1⩽n⩽X ∥α1n + · · · + αknk∥> Xδ−τ(k), then with M = [Xτ(k)−δ] + 1, one obtains the lower bound M X m=1 |fk(mα; X)| > 1 6X.
The above discussion then shows that there exists a natural number y such that y ≪Mk+1Xε ≪X(k+1)τ(k)+ε and ∥yαj∥≪Xkτ(k)−j+ε (1 ⩽j ⩽k).
Thus we find that y ⩽X and that ∥α1y + · · · + αkyk∥⩽ k X j=1 Xj−1∥yαj∥≪Xkτ(k)−1+ε < X−τ(k).
This upper bound contradicts our earlier hypothesis (8.1), and thus we are forced to conclude that min 1⩽n⩽X ∥α1n + · · · + αknk∥⩽Xδ−τ(k).
This completes the proof of Theorem 1.7.
□

9. Tarry's problem, and related topics

Our discussion of Tarry's problem follows a familiar path.
Let s be a natural number with s ⩽k3, and define ρ(h) to be the number of integral solutions of the system of equations s X i=1 xj i = hj (1 ⩽j ⩽k), with 1 ⩽x ⩽X. In addition, let σ(g) denote the number of integral solutions of the system of equations s X i=1 xj i = gj (1 ⩽j ⩽k + 1), with 1 ⩽x ⩽X. Observe that ρ(h) = X 1⩽gk+1⩽sXk+1 σ(h, gk+1).
Consequently, if for all values of h one were to have σ(h, gk+1) ̸= 0 only for a set A(h) of values of gk+1 of cardinality at most t, then it would follow from Cauchy’s inequality that ρ(h)2 ⩽ X 1⩽gk+1⩽sXk+1 gk+1∈A(h) σ(h, gk+1) 2 ⩽card(A(h)) X 1⩽gk+1⩽sXk+1 σ(h, gk+1)2.
If such were the case, then one would have Js,k(X) = X 1⩽h1⩽sX · · · X 1⩽hk⩽sXk ρ(h)2 ⩽t X 1⩽h1⩽sX · · · X 1⩽hk⩽sXk X 1⩽gk+1⩽sXk+1 σ(h, gk+1)2 = tJs,k+1(X).
What we have shown is that when $X$ is sufficiently large, and $J_{s,k}(X) > t J_{s,k+1}(X)$, then there exists a choice of $\mathbf{h}$ such that there are more than $t$ choices for $g_{k+1}$ with $\sigma(\mathbf{h}, g_{k+1}) > 0$. There therefore exists a solution of the system
$\sum_{i=1}^{s} x_{i1}^{j} = \sum_{i=1}^{s} x_{i2}^{j} = \cdots = \sum_{i=1}^{s} x_{it}^{j} \quad (1 \leqslant j \leqslant k),$
in which the sums $\sum_{i=1}^{s} x_{il}^{k+1}$ $(1 \leqslant l \leqslant t)$ take distinct values. We have therefore shown that whenever
(9.1)   $J_{s,k}(X) > t J_{s,k+1}(X),$
then $W(k, t) \leqslant s$.
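For very small parameters the quantities in this argument can be tabulated directly. The Python sketch below is purely illustrative and proves nothing about $W(k,t)$ itself; the tiny values of s, k and X are assumptions chosen so that the exhaustive count finishes instantly. It computes $\rho(\mathbf{h})$ and $\sigma(\mathbf{g})$ by brute force, evaluates $J_{s,k}(X)$ and $J_{s,k+1}(X)$, and reports the largest $t$ for which the criterion (9.1) holds at this scale.

    from itertools import product
    from collections import Counter

    # Illustrative (assumed) parameters: s summands, degree k, variables up to X.
    s, k, X = 3, 2, 6

    def moment_counts(degree):
        # Count rho(h): tuples (x_1,...,x_s) in [1,X]^s with prescribed power sums up to 'degree'.
        counts = Counter()
        for x in product(range(1, X + 1), repeat=s):
            counts[tuple(sum(t ** j for t in x) for j in range(1, degree + 1))] += 1
        return counts

    rho = moment_counts(k)        # rho(h), h = (h_1,...,h_k)
    sigma = moment_counts(k + 1)  # sigma(g), g = (g_1,...,g_{k+1})

    J_sk = sum(v * v for v in rho.values())      # J_{s,k}(X)
    J_sk1 = sum(v * v for v in sigma.values())   # J_{s,k+1}(X)
    print("J_{s,k}(X) =", J_sk, "  J_{s,k+1}(X) =", J_sk1)
    # Whenever J_{s,k}(X) > t * J_{s,k+1}(X), some h admits more than t values of g_{k+1};
    # this is the combinatorial step behind the bound W(k, t) <= s.
    print("largest t with J_{s,k}(X) > t*J_{s,k+1}(X):", (J_sk - 1) // J_sk1)

At realistic parameters such enumeration is of course hopeless; the point of the efficient congruencing machinery is precisely to control $J_{s,k}(X)$ without it.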
We seek to establish that for some positive number δ, one has (9.2) Js,k+1(X) ≪X2s−1 2 k(k+1)−δ.
VINOGRADOV’S MEAN VALUE THEOREM 1611 In view of the lower bound (1.5), an estimate of this quality suffices to establish (9.1). But from Theorem 1.1, one has Js,k+1(X) ≪X2s−1 2 (k+1)(k+2)+ε whenever s ⩾(k + 1)(k + 2). Moreover, the estimate Jk+2,k+1(X) ≪Xk+2 follows from , and indeed earlier results would suffice here. By interpolating via H¨ older’s inequality, therefore, we find that when s is an integer with k+2 ⩽ s ⩽(k + 1)(k + 2), then Js,k+1(X) ≪X2s−1 2 (k+1)(k+2)+ηs+ε, where ηs = ((k + 1)(k + 2) −s) Ç 1 2(k + 1)(k + 2) −(k + 2) (k + 1)(k + 2) −(k + 2) å = 1 2(1 −1/k)((k + 1)(k + 2) −s).
It follows that the condition (9.2) is satisfied whenever 1 2(1 −1/k)((k + 1)(k + 2) −s) < k + 1, or equivalently, (k + 1)(k + 2) −s < 2k Åk + 1 k −1 ã = 2k + 4 + 4 k −1.
We deduce that (9.2) holds whenever s ⩾(k + 1)(k + 2) −2k −4, and hence W(k, t) ⩽k2 + k −2. This completes the proof of Theorem 1.3.
There may be some scope for improvement in the upper bound presented in Theorem 1.3 by exploiting the sharpest bounds available from Vinogradov’s mean value theorem for smaller moments (see , , and ). In this way, one might hope to improve even the coefficient of k in the upper bound for W(k, h), though not that of k2. Of course, in the low degree cases in which k ⩽4, the above proof of Theorem 1.3 already yields W(k, t) ⩽k2 + k −3.
However, stronger conclusions are available in such circumstances.
The proof of Theorem 1.2. Let s and k be natural numbers with k ⩾3 and s ⩾k2 + k + 1, and let X be a positive number sufficiently large in terms of s and k. We follow the argument of the proof of [42, Th. 3]. When 1 ⩽q ⩽X1/k, 1 ⩽aj ⩽q (1 ⩽j ⩽k) and (q, a1, . . . , ak) = 1, define the major arc M(q, a) by M(q, a) = {α ∈[0, 1)k : |qαj −aj| ⩽X1/k−j (1 ⩽j ⩽k)}.
1612 TREVOR D. WOOLEY It is not hard to check that the arcs M(q, a) are disjoint. Let M denote the union of the major arcs M(q, a) with q and a as above, and define the minor arcs m by m = [0, 1)k \ M. Then from (1.2), we have (9.3) Js,k(X) = Z M |f(α; X)|2s dα + Z m |f(α; X)|2s dα.
We first bound the contribution of the minor arcs. As a consequence of Theorem 1.6, one finds that sup α∈m |f(α; X)| ⩽X1−τ+ε, where τ −1 = 4k(k −1). Then it follows from Theorem 1.1 that Z m |f(α; X)|2s dα ≪ Å sup α∈m |f(α; X)| ã2s−2k2−2k I |f(α; X)|2k2+2k dα (9.4) ≪(X1−τ+ε)2s−2k2−2kX 3 2 k(k+1)+ε ≪X2s−1 2 k(k+1)−1/(3k2).
Next we discuss the major arc contribution. When α ∈M(q, a) ⊆M, write V (α; q, a) = q−1S(q, a)I(α −a/q; X), where S(q, a) = q X r=1 e((a1r + · · · + akrk)/q) and I(β; X) = Z X 0 e(β1γ + · · · + βkγk) dγ.
In addition, define the function V (α) to be V (α; q, a) when α ∈M(q, a) ⊆M and to be zero otherwise. Then the argument concluding [42, §3] shows that Z M |f(α; X)|2s dα − Z M |V (α)|2s dα (9.5) ≪X1+2/kI |f(α; X)|2s−2 dα + I |V (α)|2s−2 dα .
When α ∈M(q, a) ⊆M, one has (q, a1, . . . , ak) = 1 and |qαj −aj| ⩽ X1/k−j (1 ⩽j ⩽k). Then it follows from [30, Ths. 7.1 and 7.3] that when α ∈M(q, a) ⊆M, one has V (α) ≪Xqε(q + |qα1 −a1|X + · · · + |qαk −ak|Xk)−1/k.
Consequently, one finds that when t ⩾1 2k(k + 1), one has Z M |V (α)|2t dα ≪X2tWZ, VINOGRADOV’S MEAN VALUE THEOREM 1613 where W = X 1⩽q⩽X1/k q X a1=1 · · · q X ak=1 (qε−1/k)2t and Z = k Y j=1 Z X1/k−j 0 (1 + βjXj)−2t/k2 dβj.
But since 2t ⩾k(k + 1), we obtain the upper bounds (9.6) W ≪X1/(3k) ∞ X q=1 q−5/4 ≪X1/(3k) and (9.7) Z ≪ k Y j=1 Z ∞ 0 (1 + βjXj)−1−1/k dβj ≪X−1 2 k(k+1).
Thus, in particular, we deduce that when s ⩾k2 + k + 1, then Z M |V (α)|2s−2 dα ≪X2s−2−1 2 k(k+1)+1/(3k).
In combination with Theorem 1.1, this leads from (9.5) to the asymptotic relation (9.8) Z M |f(α; X)|2s dα − Z M |V (α)|2s dα ≪X2s−1 2 k(k+1)−1/(3k).
The argument employed in deriving (9.6) and (9.7) is readily adapted to show that the singular series S(s, k) defined in (1.8), and the singular integral J(s, k) defined in (1.9), both converge absolutely, and that Z M |V (α)|2s dα = S(s, k)J(s, k) + O(X2s−1 2 k(k+1)−1/(3k)).
The asymptotic formula claimed implicitly in Theorem 1.2 now follows by substituting (9.4) and (9.8) into (9.3).
This completes the proof of Theo-rem 1.2.
□ As essentially was observed by Vaughan, one must have both S(s, k) ≫1 and J(s, k) ≫1 (see the conclusion of [30, §7.3]). For otherwise one would have I |f(α; X)|2s dα = o(X2s−1 2 k(k+1)), which contradicts the elementary lower bound (1.5).
An argument similar to that employed in the proof of Theorem 1.2 delivers an asymptotic formula for the number of solutions of a more general diagonal 1614 TREVOR D. WOOLEY Diophantine system. When s and k are natural numbers, and aij are integers for 1 ⩽i ⩽k and 1 ⩽j ⩽s, we write φi(x) = s X j=1 aijxi j (1 ⩽i ⩽k), and we consider the Diophantine system (9.9) φi(x) = 0 (1 ⩽i ⩽k).
We write $N(B)$ for the number of integral solutions of the system (9.9) with $|\mathbf{x}| \leqslant B$. We next define the (formal) real and $p$-adic densities associated with the system (9.9), and here we follow Schmidt. When $L > 0$, define
$\lambda_L(\eta) = \begin{cases} L(1 - L|\eta|), & \text{when } |\eta| \leqslant L^{-1}, \\ 0, & \text{otherwise.} \end{cases}$
We then put
$\mu_L = \int_{|\boldsymbol{\xi}| \leqslant 1} \prod_{i=1}^{k} \lambda_L(\varphi_i(\boldsymbol{\xi})) \, d\boldsymbol{\xi}.$
The limit $\sigma_\infty = \lim_{L \to \infty} \mu_L$, when it exists, is called the real density. Meanwhile, given a natural number $q$, we write
$M(q) = \mathrm{card}\{ \mathbf{x} \in (\mathbb{Z}/q\mathbb{Z})^s : \varphi_i(\mathbf{x}) \equiv 0 \pmod{q} \ (1 \leqslant i \leqslant k) \}.$
For each prime number $p$, we then put
$\sigma_p = \lim_{H \to \infty} p^{H(k-s)} M(p^H),$
provided that this limit exists, and refer to $\sigma_p$ as the $p$-adic density.
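To illustrate these definitions, the sketch below approximates a $p$-adic density by counting solutions modulo $p^H$ and applying the normalisation $p^{H(k-s)} M(p^H)$. The example is not drawn from the paper: the toy system, the prime p and the truncation levels H are assumptions made for demonstration only.

    from itertools import product

    # Toy diagonal system (assumed, for illustration only): k = 2 forms in s = 3 variables,
    #   phi_1(x) = x1 + x2 + x3,   phi_2(x) = x1^2 + x2^2 - 2*x3^2.
    k, s = 2, 3
    coeffs = [[1, 1, 1], [1, 1, -2]]
    p = 3

    def M(q):
        # M(q) = number of x in (Z/qZ)^s with phi_i(x) == 0 (mod q) for 1 <= i <= k.
        total = 0
        for x in product(range(q), repeat=s):
            if all(sum(a * t ** (i + 1) for a, t in zip(row, x)) % q == 0
                   for i, row in enumerate(coeffs)):
                total += 1
        return total

    for H in (1, 2, 3):
        q = p ** H
        approx = p ** (H * (k - s)) * M(q)
        print(f"H = {H}:  p^(H(k-s)) M(p^H) = {approx:.6f}")
    # The p-adic density sigma_p is the limit of these normalised counts as H -> infinity.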
Theorem 9.1. Let $s$ and $k$ be natural numbers with $k \geqslant 3$ and $s \geqslant 2k^2 + 2k + 1$. Suppose that $a_{ij}$ $(1 \leqslant i \leqslant k,\ 1 \leqslant j \leqslant s)$ are nonzero integers. Suppose, in addition, that the system of equations (9.9) possesses nonsingular real and $p$-adic solutions for each prime number $p$. Then one has
$N(B) \sim \sigma_\infty \prod_{p} \sigma_p \, B^{s - \frac{1}{2}k(k+1)}.$
In particular, the system (9.9) satisfies the Hasse Principle.
We will not offer any details of the proof here, the argument following in most respects that of the proof of Theorem 1.2. We note only that the system (9.9), if singular, is easily shown to have a singular locus of affine dimension at most k −1, which is harmless in the analysis. We note also that the restriction that aij ̸= 0 (1 ⩽i ⩽k, 1 ⩽j ⩽s) may be largely removed by elaborating on the basic argument. We emphasise that the most striking feature of Theorem 9.1 is that such a conclusion cannot possibly hold when s < 1 2k(k+1). Thus, for the very first time for a system of diagonal equations of VINOGRADOV’S MEAN VALUE THEOREM 1615 higher degree, we have an asymptotic formula in which the number of variables is just four times the best possible result. Hitherto, the number of variables required to achieve a successful analysis would be roughly 2 log k times the best possible result, a factor which becomes arbitrarily large as k increases.
We turn our attention next to the Hilbert-Kamke problem, a generali-sation of Waring’s problem considered first by Hilbert . When n1, . . . , nk are natural numbers, let Rs,k(n) denote the number of solutions in natural numbers x of the system of equations (9.10) s X i=1 xj i = nj (1 ⩽j ⩽k).
Put X = max 1⩽j⩽k n1/j j , and then write Js,k(n) = Z Rk I(β; 1)se(−β1n1/X −· · · −βknk/Xk) dβ and Ss,k(n) = ∞ X q=1 X 1⩽a⩽q (q,a1,...,ak)=1 (q−1S(q, a))se(−(a1n1 + · · · + aknk)/q).
The local solubility conditions associated with the system (9.10) are quite subtle, and we refer the reader to for a discussion of the conditions under which real and p-adic solutions may be expected to exist for the system (9.10).
It is easy to see, however, that the conditions nj/k k ⩽nj ⩽s1−j/knj/k k (1 ⩽j ⩽k) are needed. One also finds that p-adic solubility is not assured without at least 2k variables.
Theorem 9.2. Let $s$ and $k$ be natural numbers with $k \geqslant 3$ and $s \geqslant 2k^2 + 2k + 1$. Suppose that the natural numbers $n_1, \ldots, n_k$ are sufficiently large in terms of $s$ and $k$. Put $X = \max_{1 \leqslant j \leqslant k} n_j^{1/j}$. Suppose, in addition, that the system (9.10) has nonsingular real and $p$-adic solutions. Then one has
$R_{s,k}(\mathbf{n}) = J_{s,k}(\mathbf{n}) S_{s,k}(\mathbf{n}) X^{s - \frac{1}{2}k(k+1)} + o\big( X^{s - \frac{1}{2}k(k+1)} \big).$
We refer the reader to , , for the many details associated with a successful treatment of this problem. The technology available at the time of writing of the latter papers made necessary the constraint s ⩾(4+o(1))k2 log k in place of the lower bound s ⩾2k2 + 2k + 1 in Theorem 9.2. Our observation here is that a successful local-global analysis is now available via the circle 1616 TREVOR D. WOOLEY method when the number of variables grows like 2k2 + 2k + 1, only a factor of 4 away from what is likely to be best possible.
10. The asymptotic formula in Waring's problem

The proof of Theorem 1.4 would be routine were our goal the less precise bound $\widetilde{G}(k) \leqslant 2k^2 + 2k + 1$. Saving four additional variables requires some discussion that hints at possible new strategies for transforming estimates for $J_{s,k}(X)$ into upper bounds for $\widetilde{G}(k)$.
En route, we also improve some old estimates of Hua .
Write g(α) = X 1⩽x⩽X e(αxk), and when s ∈N, define Is(X) = Z 1 0 |g(α)|2s dα.
Then on considering the underlying Diophantine system, one has Is(X) = X |h1|⩽sX · · · X |hk−1|⩽sXk−1 I |f(α; X)|2se(−h1α1 −· · · −hk−1αk−1) dα ≪X 1 2 k(k−1) I |f(α; X)|2s dα = X 1 2 k(k−1)Js,k(X).
Thus we obtain the classical bound (10.1) Is(X) ≪X2s−k+ηs+ε.
Ford obtained a bound potentially sharper, valid for each natural num-ber m with 1 ⩽m ⩽k, and s ⩾1 2m(m −1), which is tantamount to Is(X) ≪X2s−k+η∗ s,m+ε, where η∗ s,m = 1 mηs−1 2 m(m−1). A little later, this conclusion was obtained in-dependently by Ustinov .
Owing to the efficiency of Theorem 1.1, this estimate proves to be no sharper than that provided by (10.1), at least in ap-plications to the asymptotic formula in Waring’s problem. Instead we offer a very modest refinement of (10.1). The idea underlying this refinement is related to one first shown to the author by Bob Vaughan in the first year of the author’s Ph.D. studies, in 1988.
Lemma 10.1. For each natural number $s$, one has $I_s(X) \ll X^{\varepsilon}\big( X^{2s-k-1+\eta_{s,k}} + X^{2s-k+\eta_{s,k-1}} \big)$.
Proof. Define the exponential sum $F(\boldsymbol{\beta}) = F_k(\boldsymbol{\beta}; X)$ by
$F(\boldsymbol{\beta}) = \sum_{1 \leqslant x \leqslant X} e(\beta_k x^k + \beta_{k-2} x^{k-2} + \cdots + \beta_1 x).$
Thus, to be precise, the argument of the exponentials in F(β) is a polynomial of degree k in which the coefficient of the monomial of degree k −1 is zero.
Also, define Υk(X; h) to be the number of integral solutions of the Diophantine system s X i=1 (xj i −yj i ) = 0 (1 ⩽j ⩽k, j ̸= k −1), (10.2) s X i=1 (xk−1 i −yk−1 i ) = h, with 1 ⩽x, y ⩽X. Then on considering the underlying Diophantine system, one finds that (10.3) I |F(β)|2s dβ = X |h|⩽sXk−1 Υk(X; h).
By applying an integer shift z to the variables in the system (10.2), we find that Υk(X; h) counts the number of integral solutions of the Diophantine system s X i=1 ((xi −z)j −(yi −z)j) = 0 (1 ⩽j ⩽k, j ̸= k −1), s X i=1 ((xi −z)k−1 −(yi −z)k−1) = h, with 1 + z ⩽x, y ⩽X + z. But by applying the Binomial Theorem, we find that x, y satisfies this system of equations if and only if s X i=1 (xj i −yj i ) = 0 (1 ⩽j ⩽k −2) (10.4) s X i=1 (xk−1 i −yk−1 i ) = h, s X i=1 (xk i −yk i ) = khz.
If we restrict the shifts z to lie in the interval 1 ⩽z ⩽X, then we see that an upper bound for Υk(X; h) is given by the number of integral solutions of the system (10.4) with 1 ⩽x, y ⩽2X. On considering the underlying Diophantine system, we therefore deduce from (10.3) that for each integer z with 1 ⩽z ⩽X, 1618 TREVOR D. WOOLEY one has I |F(β)|2s dβ ⩽ X |h|⩽sXk−1 I |f(α; 2X)|2se(−(kzαk + αk−1)h) dα.
Hence I |F(β)|2s dβ ≪X−1 X 1⩽z⩽X I |f(α; 2X)|2s min{Xk−1, ∥kzαk + αk−1∥−1} dα (10.5) = X−1 I |f(α; 2X)|2sΨ(αk, αk−1) dα, where we have written Ψ(αk, αk−1) = X 1⩽z⩽X min{Xk−1, ∥kzαk + αk−1∥−1}.
Suppose that αk ∈R and that b ∈Z and r ∈N satisfy (b, r) = 1 and |αk −b/r| ⩽r−2. Then it follows from [3, Lemma 3.2]4 that Ψ(αk, αk−1) ≪(Xk−1 + r log(2r))(X/r + 1) (10.6) ≪Xk(X−1 + r−1 + rX−k)(log(2r)).
Applying a standard transference principle (compare Exercise 2 of [30, §2.8]), it follows that (10.7) Ψ(αk, αk−1) ≪Xk+ε(X−1+(r+Xk|rαk−b|)−1+(r+Xk|rαk−b|)X−k).
We now return to consider the relation (10.5). Let m denote the set of real numbers α ∈[0, 1) having the property that whenever q ∈N and ∥qα∥⩽X1−k, then q > X. Also, let M denote the complementary set [0, 1)\m. By Dirichlet’s theorem on Diophantine approximation, whenever αk ∈m, there exists q ∈N with q ⩽Xk−1 such that ∥qαk∥⩽X1−k. From the definition of m, one must have q > X, and hence it follows from (10.6) that sup αk∈m Ψ(αk, αk−1) ≪Xk−1+ε.
Thus we deduce from (1.2) that Z m×[0,1)k−1 |f(α; 2X)|2sΨ(αk, αk−1) dα ≪Xk−1+ε I |f(α; 2X)|2s dα ≪Xk−1+εJs,k(2X).
4We note that the strict inequality |αk −b/r| < r−2 imposed by Baker is unnecessary in the proof of [3, Lemma 3.2] VINOGRADOV’S MEAN VALUE THEOREM 1619 Substituting this conclusion into (10.5), we see that I |F(β)|2s dβ ≪Xk−2+εJs,k(2X) (10.8) + X−1 Z M×[0,1)k−1 |f(α; 2X)|2sΨ(αk, αk−1) dα.
Let M(q, a) denote the set of real numbers αk ∈[0, 1) with |qαk −a| ⩽ X1−k. Then M is the union of the sets M(q, a) with 0 ⩽a ⩽q ⩽X and (a, q) = 1. From (10.7) it follows that when αk ∈M(q, a) ⊆M, one has Ψ(αk, αk−1) ≪Xk−1+ε + Xk+ε(q + Xk|qαk −a|)−1.
Define the function Φ(θ) for θ ∈M by putting Φ(θ) = (q + Xk|qθ −a|)−1 when θ ∈M(q, a) ⊆M. Then we deduce from (10.8) that (10.9) I |F(β)|2s dβ ≪Xk−2+εJs,k(2X) + Xk−1+εT , where T = Z M Φ(αk) I |f(β, αk; 2X)|2s dβ dαk.
From Br¨ udern [7, Lemma 2], we find that Z M Φ(αk)|f(β, αk; 2X)|2s dαk ≪Xε−k X Z 1 0 |f(β, αk; 2X)|2s dαk + |f(β, 0; 2X)|2s , and hence T ≪Xε−k X I |fk(α; 2X)|2s dα + I |fk−1(β; 2X)|2s dβ ≪Xε−k(XJs,k(2X) + Js,k−1(2X)).
Consequently, from (10.9) we conclude that (10.10) I |F(β)|2s dβ ≪Xk−2+εJs,k(2X) + Xε−1Js,k−1(2X).
Next we observe that, on considering the underlying Diophantine system, one has Is(X) = X |h1|⩽sX · · · X |hk−2|⩽sXk−2 R(X; h), 1620 TREVOR D. WOOLEY where R(X; h) denotes the number of integral solutions of the system s X i=1 (xj i −yj i ) = hj (1 ⩽j ⩽k −2), s X i=1 (xk i −yk i ) = 0, with 1 ⩽x, y ⩽X.
Thus, again considering the underlying Diophantine system, we obtain the upper bound Is(X) ≪ X |h1|⩽sX · · · X |hk−2|⩽sXk−2 I |F(β)|2se(−β1h1 −· · · −βk−2hk−2) dβ ≪X 1 2 (k−1)(k−2) I |F(β)|2s dβ.
In view of (10.10), we therefore arrive at the estimate Is(X) ≪X 1 2 (k+1)(k−2)+εJs,k(2X) + X 1 2 (k−1)(k−2)−1+εJs,k−1(2X) ≪X2s−k−1+ηs,k+ε + X2s−k+ηs,k−1+ε.
This completes the proof of the lemma.
□ From Theorem 1.1, we have ηs,k−1 = 0 for s ⩾k(k −1). By H¨ older’s inequality, moreover, one finds from Theorem 1.1 that I |fk(α; X)|2k2+2k−4 dα ⩽ I |fk(α; X)|2k2+2k dα 1−2/(k2+k) ≪ X 3 2 (k2+k)+ε1−2/(k2+k) ≪X 3 2 (k2+k)−3+ε.
Consequently, one has ηs,k ⩽1 for s ⩾k2 + k −2. Then by Lemma 10.1, we obtain the following corollary to Theorem 1.1.
Corollary 10.2. When $s \geqslant k^2 + k - 2$, one has $I_s(X) \ll X^{2s-k+\varepsilon}$.
Having prepared the ground, the proof of Theorem 1.4 is now swift. Con-sider a large integer n, put X = [n1/k] and recall the definition of the sets of arcs m and M from the proof of Lemma 10.1. From Corollary 10.2 and Weyl’s inequality (see [30, Lemma 2.4]), one finds that when t ⩾2k2 +2k −3, one has Z m g(α)te(−nα) dα ≪ Å sup α∈m |g(α)| ãt−(2k2+2k−4) Z 1 0 |g(α)|2k2+2k−4 dα ≪(X1−21−k+ε)t−(2k2+2k−4)X(2k2+2k−4)−k ≪Xt−k−2−k.
VINOGRADOV’S MEAN VALUE THEOREM 1621 Notice here, of course, that we could have employed the conclusion of Theo-rem 1.5 in place of Weyl’s inequality. Meanwhile, the methods of [30, §4.4] show that, under the same conditions on t, one has Z M g(α)te(−nα) dα ∼Γ(1 + 1/k)t Γ(t/k) St,k(n)nt/k−1 + o(nt/k−1), where St,k(n) is defined as in (1.12). Thus we deduce that for t ⩾2k2 +2k−3, one has Rt,k(n) = Z M g(α)te(−nα) dα + Z m g(α)te(−nα) dα = Γ(1 + 1/k)t Γ(t/k) St,k(n)nt/k−1 + o(nt/k−1), whence ‹ G(k) ⩽2k2 + 2k −3. This completes the proof of Theorem 1.4.
We take this opportunity to point out that L.-K. Hua investigated the problem of bounding the least integer Ck such that, whenever s ⩾Ck, one has I |fk(α; X)|s dα ≪Xs−1 2 k(k+1)+ε, and likewise the least integer Sk such that, whenever s ⩾Sk, one has I |Fk(β; X)|s dβ ≪Xs−1 2 (k2−k+2)+ε, pursuing in particular the situation for smaller values of k. His arguments in-volve a clever application of Weyl differencing in a style that we would describe in the single equation situation as underlying Hua’s lemma. In Chapter 5 of , one finds tables recording the upper bounds C3 ⩽16, C4 ⩽46, C5 ⩽110, . . .
and S3 ⩽10, S4 ⩽32, S5 ⩽86, . . . .
The conclusion of Theorem 1.1 shows that Ck ⩽2k(k + 1), an upper bound superior to the conclusions of Hua for k ⩾4. Meanwhile, as a consequence of the estimate (10.10), one obtains the estimate contained in the following theorem.
Theorem 10.3. Suppose that $k \geqslant 3$ and $s \geqslant k^2 + k - 2$. Then one has
$\oint |F_k(\boldsymbol{\beta}; X)|^{2s} \, d\boldsymbol{\beta} \ll X^{2s - \frac{1}{2}(k^2-k+2) + \varepsilon}.$
Proof. The discussion leading to Corollary 10.2 shows that ηs,k−1 = 0 for s ⩾k(k−1) and ηs,k ⩽1 for s ⩾k2 +k−2. The desired conclusion is therefore immediate from (3.8), (3.11) and (10.10).
□ Thus we have Sk ⩽2k2 +2k −4, an upper bound superior to those of Hua for k ⩾5.
11. A heuristic argument

We take the opportunity in this section to discuss a heuristic argument that delivers the bound
(11.1)   $J_{s+k}(X) \ll X^{2s+2k-\frac{1}{2}k(k+1)+\varepsilon}$
for $s \geqslant \frac{1}{2}k(k+1)$. In view of the lower bound (1.5), of course, the bound (11.1) cannot hold for $s + k < \frac{1}{2}k(k+1)$, so is in a strong sense close to best possible.
Our starting point is a heuristic interpretation of Lemma 6.1. In the course of the proof of Lemma 6.1, a critical role is played by the interpretation of the system of equations (6.1) by means of the implied congruences (6.3). In some sense, for each fixed choice of y in (6.3), the conclusion of Lemma 4.1 indicates that there are at most k!p 1 2 k(k−1)(a+b) possible choices for x with 1 ⩽x ⩽pkb and x ≡ξ (mod pa+1) for some ξ ∈Ξa(ξ). This is transformed via Cauchy’s inequality into the statement that, with a compensating factor k!p 1 2 k(k−1)(a+b), the variables in (6.1) are constrained by the additional congruence relations x ≡y (mod pkb). Such an interpretation is embodied in the relation (6.6).
An alternative interpretation, which we emphasise is heuristic in nature and not a statement of fact, is that, by relabelling variables if necessary, the congruences (6.3) essentially amount in (6.1) to the constraint xj ≡yj (mod pjb) (1 ⩽j ⩽k), with an additional compensating factor of k!p 1 2 k(k−1)a.
Indeed, one can prove the initial statement that xj ≡yj (mod pb) (1 ⩽j ⩽k) with precisely this compensating factor. Then, by fixing the variables x1, y1, and considering the system (6.3) with 2 ⩽j ⩽k, one might suppose that a cor-responding constraint xj ≡yj (mod p2b) (2 ⩽j ⩽k) might be imposed. Then, by fixing the variables x2, y2, and considering the system (6.3) with 3 ⩽j ⩽k, one seeks a corresponding constraint xj ≡yj (mod p3b) (3 ⩽j ⩽k), and so on. Such a heuristic implies a new relation to replace (6.6) of the shape Kσ,τ a,b (X; ξ, η) ≪M 1 2 k(k−1)a X 1⩽ζ1⩽pb ζ1≡ξ (mod pa) · · · X 1⩽ζk⩽pkb ζk≡ξ (mod pa) I(ζ), where I(ζ) = I k Y i=1 |fib(α; ζi)|2 |Fτ b (α; η)|2u dα.
Such an assertion at least carries the weight of correctly accounting for the number of available residue classes, though of course one cannot hope for the implied degree of independence to be true in anything but an average sense.
From here, an application of H¨ older’s inequality leads to the bound (11.2) Kσ,τ a,b (X; ξ, η) ≪M 1 2 k(k−1)a k Y i=1 Mib−aΘib,b(X; η)1/k , VINOGRADOV’S MEAN VALUE THEOREM 1623 where Θc,b(X; η) = max 1⩽ζ⩽pc I |fc(α; ζ)2kFτ b (α; η)2u| dα.
A further application of H¨ older’s inequality shows as in (6.8) that Θc,b(X; η) ≪(Js+k(X/Mb))1−k/s(Ib,c(X))k/s, and thus we find from (11.2) that Ka,b(X) ≪M 1 2 k(k−1)(a+b)+k(b−a)(Js+k(X/Mb))1−k/s k Y i=1 (Ib,ib(X))1/s.
Each mean value Ib,ib(X) may be conditioned via Lemma 5.3, and thus one deduces as in Lemma 6.3 that there exist integers h1, . . . , hk, none too large in terms of b, with the property that ≪M−k/(3s)(X/Mb)ηs+k + Xδ(X/Mb)ηs+k(1−k/s) k Y i=1 M−7hi/41/s.
(11.3) It is (11.3) that represents the critical step in our iteration. Starting from the relation Xηs+k−δ < ≪, one may apply (11.3) successively to bound Xηs+k−δ in terms first of the k expressions of the shape (1 ⩽i ⩽k), then of k2 expressions of the shape (1 ⩽j ⩽k), in which b takes values i+hi (1 ⩽i ⩽k), and so on. This iteration may be analysed in a manner very similar to that used in the proof of Lemma 7.2, though the complexity is now increased substantially.
The important feature is the number of iterations taken before the exponents ib + hi occurring in (11.3) become large in terms of θ. In the argument of the proof of Lemma 7.2, one finds that at the nth iteration, the relevant exponents have size roughly kn.
From the relation (11.3), one obtains an explosively growing tree of chains of relations, with the exponents bn increasing from one step to the next by a factor close to 1, 2, . . . , k −1 or k. When one considers the set of all chains, one finds that almost all possible chains have the property that the exponent bn grows on average like ( 1 2(k + 1))n. In order to see this, observe that if l1, . . . , ln are the factors at each step of one possible chain, then by the Arithmetic-Geometric Mean inequality, one has l1 · · · ln ⩽ l1 + · · · + ln n n .
If one randomly chooses l1, . . . , ln from {1, 2, . . . , k} with equal probability, then almost all values of (l1 + · · · + ln)/n will be concentrated towards the mean of {1, 2, . . . , k}, which is 1 2(k + 1). This is a consequence of the Central Limit Theorem.
In this way, one sees that the number of steps permitted 1624 TREVOR D. WOOLEY before the iteration begins to exhaust its usefulness is roughly N if we take θ = 1 2((k + 1)/2)−N−1 at the outset in place of θ = 1 2k−N−1. Note that the latter is indeed the value that we chose for θ in Section 3 when s = k2.
We are led now to a relation of similar shape to (7.14), but replaced now by (11.4) Xηs+k(1+(s/k−1)(s/k)N−1θ) ≪Xηs+k+k2.
Note here that we have made use of the growth rate of the exponents ψn from Section 7, with scale factor s/k. Thus, when s > 1 2k(k + 1), since now we have θ = 1 2((k + 1)/2)−N−1, we find that (s/k −1)(s/k)N−1θ ≫ s k(k + 1)/2 N , which tends to infinity as N tends to infinity.
In particular, on taking N sufficiently large, the relation (11.4) implies that ηs+k = 0.
The above heuristic shows that when s > 1 2k(k + 1), then one has (11.5) Js+k(X) ≪X2s+2k−1 2 k(k+1)+ε.
One might complain that this fails to prove that the relation (11.5) holds for s = 1 2k(k + 1). Apart from anything else, on the face of it, the integer s needs to be a multiple of k in our treatment, so that one may need to require that s ⩾1 2k(k + 3). But this issue may be circumvented. For this, one reinterprets the methods of this paper in the form of fractional moments of exponential sums along the lines of the author’s work on breaking classical convexity in Waring’s problem.
This was, in fact, the author’s original approach to Theorem 1.1 and feasible with sufficient effort. Such would permit the proof of (11.5) with s = 1 2k(k + 1) + ν for any positive number ν. But then an application of H¨ older’s inequality shows that (11.5) holds with s = 1 2k(k + 1) with the positive number ε bounded above by ν. Taking ν sufficiently small completes the heuristic proof.
A final word is in order concerning the value of such a heuristic argument.
A more sweeping heuristic of classical nature asserts that one should expect square-root cancellation in fk(α; X) when one subtracts the expected major arc approximation, and this leads to the conjectured estimate (11.5) for s + k ⩾ 1 2k(k + 1). This amounts to the assumption of very significant global rigid structure within the mean value Js+k(X). Our heuristic in this section also amounts to a structural assumption, but now of a rather weak congruential variety.
This is, most assuredly, an unproven assumption, but a relatively modest one of local type. Thus one can say, at least, that the conjectured estimate (11.5) for s ⩾1 2k(k+1) now rests on only a relatively mild assumption.
References

G. I. Arkhipov, The Hilbert-Kamke problem, Izv. Akad. Nauk SSSR Ser. Mat. 48 (1984), 3–52. MR 0733357. Zbl 0539.10016.
G. I. Arkhipov, V. N. Chubarikov, and A. A. Karatsuba, Trigonometric Sums in Number Theory and Analysis, de Gruyter Exp. Math. 39, Walter de Gruyter GmbH & Co. KG, Berlin, 2004. MR 2113479. Zbl 1074.11043.
R. C. Baker, Diophantine Inequalities, London Math. Soc. Monogr. (N.S.) 1, The Clarendon Press Oxford University Press, New York, 1986. MR 0865981.
Zbl 0592.10029.
B. J. Birch, Waring’s problem in algebraic number fields, Proc. Cambridge Philos. Soc. 57 (1961), 449–459. MR 0143754. Zbl 0111.25104.
org/10.1017/S0305004100035490.
K. D. Boklan, The asymptotic formula in Waring’s problem, Mathematika 41 (1994), 329–347. MR 1316613. Zbl 0815.11050. S0025579300007439.
K. D. Boklan and T. D. Wooley, On Weyl sums for smaller exponents, Funct.
Approx. Comment. Math., in press.
J. Br¨ udern, A problem in additive number theory, Math. Proc. Cambridge Philos. Soc. 103 (1988), 27–33. MR 0913447. Zbl 0655.10041.
org/10.1017/S0305004100064586.
E. Croot and D. Hart, h-fold sums from a set with few products, SIAM J.
Discrete Math. 24 (2010), 505–519. MR 2646099. Zbl 1221.11202.
doi.org/10.1137/090756041.
K. B. Ford, New estimates for mean values of Weyl sums, Internat. Math. Res.
Notices (1995), 155–171. MR 1321702. Zbl 0821.11050.
1155/S1073792895000122.
, Vinogradov’s integral and bounds for the Riemann zeta function, Proc.
London Math. Soc. 85 (2002), 565–633. MR 1936814. Zbl 1034.11044. http: //dx.doi.org/10.1112/S0024611502013655.
G. H. Hardy and J. E. Littlewood, Some problems of ‘Partitio Numerorum’: IV. The singular series in Waring’s Problem and the value of the number G(k), Math. Z. 12 (1922), 161–188. MR 1544511. JFM 48.0146.01. 10.1007/BF01482074.
D. R. Heath-Brown, Weyl’s inequality, Hua’s inequality, and Waring’s prob-lem, J. London Math. Soc. 38 (1988), 216–230. MR 0966294. Zbl 0619.10046.
D. Hilbert, Beweis f¨ ur die Darstellbarkeit der ganzen Zahlen durch eine feste Anzahl nter Potenzen (Waringsches Problem), Math. Ann. 67 (1909), 281–300.
MR 1511530. JFM 40.0236.03.
L.-K. Hua, On Tarry’s problem, Quart. J. Math. Oxford 9 (1938), 315–320.
Zbl 0020.00501.
, Improvement of a result of Wright, J. London Math. Soc. 24 (1949), 157– 159. MR 0030980. Zbl 0038.18104.
L.-K. Hua, An improvement of Vinogradov's mean-value theorem and several applications, Quart. J. Math., Oxford Ser. 20 (1949), 48–61. MR 0029415.
Zbl 0039.27403.
, Additive Theory of Prime Numbers, Transl. Math. Monogr. 13, Amer.
Math. Soc., Providence, R.I., 1965. MR 0194404. Zbl 0192.39304.
A. A. Karatsuba, The mean value of the modulus of a trigonometric sum, Izv.
Akad. Nauk SSSR Ser. Mat. 37 (1973), 1203–1227. MR 0337817. Zbl 0294.10025.
Yu. V. Linnik, On Weyl’s sums, Rec. Math. [Mat. Sbornik] N.S. 12(54) (1943), 28–39. MR 0009776. Zbl 0063.03578. Available at archive.phtml?wshow=paper&jrnid=sm&paperid=6141&option lang=eng.
D. A. Mit′kin, An estimate for the number of summands in the Hilbert-Kamke problem, Mat. Sb. 129(171) (1986), 549–577, 592. MR 0842400. Zbl 0608.10020.
, An estimate for the number of summands in the Hilbert-Kamke problem.
II, Mat. Sb. 132(174) (1987), 345–351, 444–445. MR 0889596. Zbl 0619.10013.
S. T. Parsell, A generalization of Vinogradov’s mean value theorem, Proc.
London Math. Soc. 91 (2005), 1–32. MR 2149529. Zbl 1119.11024.
doi.org/10.1112/S002461150501525X.
, On the Bombieri–Korobov estimate for Weyl sums, Acta Arith.
138 (2009), 363–372. MR 2534142. Zbl 05615154. aa138-4-7.
O. Robert and P. Sargos, Un th´ eoreme de moyenne pour les sommes d’exponentielles. Application a l’in´ egalit´ e de Weyl, Publ. Inst. Math. (Beograd) 67(81) (2000), 14–30. MR 1761299. Zbl 1006.11046.
W. M. Schmidt, The density of integer points on homogeneous varieties, Acta Math. 154 (1985), 243–296. MR 0781588. Zbl 0561.10010. 10.1007/BF02392473.
S. B. Stechkin, Mean values of the modulus of a trigonometric sum, Trudy Mat. Inst. Steklov. 134 (1975), 283–309, 411. MR 0396431. Zbl 0319.10045.
Available at tm&paperid=2719&option lang=eng.
A. V. Ustinov, On the number of summands in the asymptotic formula for the number of solutions of the Waring equation, Mat. Zametki 64 (1998), 285–296.
MR 1681004. Zbl 0923.11138.
R. C. Vaughan, On Waring’s problem for cubes, J. Reine Angew. Math. 365 (1986), 122–170. MR 0826156. Zbl 0574.10046.
1986.365.122.
, On Waring’s problem for smaller exponents. II, Mathematika 33 (1986), 6–22.
MR 0859494.
Zbl 0601.10037.
S0025579300013838.
, The Hardy-Littlewood Method, second ed., Cambridge Tracts in Math.
125, Cambridge Univ. Press, Cambridge, 1997. MR 1435742. Zbl 0868.11046.
R. C. Vaughan and T. D. Wooley, A special case of Vinogradov's mean value theorem, Acta Arith. 79 (1997), 193–204. MR 1438823. Zbl 0887.11042.
I. M. Vinogradov, New estimates for Weyl sums, Dokl. Akad. Nauk SSSR 8 (1935), 195–198.
, The method of trigonometrical sums in the theory of numbers, Trav.
Inst. Math. Stekloff23 (1947), 109 pp. MR 0029417. Zbl 0041.37002.
, A new estimate of the function ζ(1 + it), Izv.
Akad.
Nauk SSSR.
Ser.
Mat.
22 (1958), 161–164.
MR 0103861.
Zbl 0097.26302.
Available at im&paperid=3962&option lang=eng.
A. Z. Walfisz, Weylsche Exponentialsummen in der neueren Zahlentheorie, Math. Forsch. 15, Berlin, 1963. Zbl 0146.06003.
T. D. Wooley, Large improvements in Waring’s problem, Ann. of Math.
135 (1992), 131–164. MR 1147960. Zbl 0754.11026. 2946566.
, On Vinogradov’s mean value theorem, Mathematika 39 (1992), 379–399.
MR 1203293. Zbl 0769.11036.
, On Vinogradov’s mean value theorem, II, Michigan Math. J. 40 (1993), 175–180. MR 1214062. Zbl 0805.11072. 1029004681.
, Quasi-diagonal behaviour in certain mean value theorems of additive number theory, J. Amer. Math. Soc. 7 (1994), 221–245. MR 1224595. Zbl 0786.
11053.
, Breaking classical convexity in Waring’s problem: sums of cubes and quasi-diagonal behaviour, Invent. Math. 122 (1995), 421–451. MR 1359599.
Zbl 0851.11055.
, New estimates for Weyl sums, Quart. J. Math. Oxford Ser. 46 (1995), 119–127. MR 1326136. Zbl 0855.11043.
119.
, Some remarks on Vinogradov’s mean value theorem and Tarry’s problem, Monatsh. Math. 122 (1996), 265–273. MR 1414861. Zbl 0881.11043.
doi.org/10.1007/BF01320189.
, The asymptotic formula in Waring’s problem, Internat. Math. Res. No-tices (2012), no. 7, 1485–1504.
E. M. Wright, The Prouhet-Lehmer problem, J. London Math. Soc. 23 (1948), 279–285. MR 0028858. Zbl 0033.35102.
279.
(Received: December 3, 2010) (Revised: July 11, 2011) School of Mathematics, University of Bristol, Bristol, UK E-mail : [email protected]
Invariance Properties of the Entropy Production, and the Entropic Pairing of Inertial Frames of Reference by Shear-Flow Systems - PMC
===============
Entropy (Basel). 2021 Nov 15; 23(11): 1515. doi: 10.3390/e23111515
Robert K Niven
School of Engineering and Information Technology, The University of New South Wales, Canberra, ACT 2600, Australia; [email protected]
Academic Editor: Armin Feldhoff
Received 2021 Oct 8; accepted 2021 Nov 10; collection date 2021 Nov.
© 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
PMCID: PMC8623158; PMID: 34828213
Abstract
This study examines the invariance properties of the thermodynamic entropy production in its global (integral), local (differential), bilinear, and macroscopic formulations, including dimensional scaling, invariance to fixed displacements, rotations or reflections of the coordinates, time antisymmetry, Galilean invariance, and Lie point symmetry. The Lie invariance is shown to be the most general, encompassing the other invariances. In a shear-flow system involving fluid flow relative to a solid boundary at steady state, the Galilean invariance property is then shown to preference a unique pair of inertial frames of reference—here termed an entropic pair—respectively moving with the solid or the mean fluid flow. This challenges the Newtonian viewpoint that all inertial frames of reference are equivalent. Furthermore, the existence of a shear flow subsystem with an entropic pair different to that of the surrounding system, or a subsystem with one or more changing entropic pair(s), requires a source of negentropy—a power source scaled by an absolute temperature—to drive the subsystem. Through the analysis of different shear flow subsystems, we present a series of governing principles to describe their entropic pairing properties and sources of negentropy. These are unaffected by Galilean transformations, and so can be understood to “lie above” the Galilean inertial framework of Newtonian mechanics. The analyses provide a new perspective into the field of entropic mechanics, the study of the relative motions of objects with friction.
Keywords: entropy production, invariance properties, Lie symmetries, inertial frames of reference, negentropy, shear flow systems
1. Introduction
In his major life’s work, Isaac Newton provided the three laws of motion that constitute what is now described as classical or Newtonian mechanics [1,2,3]:
(1)
First law (law of inertia): Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon.
(2)
Second law (equation of motion): The alteration of motion is ever proportional to the motive force impressed, and is made in the direction of the right line in which that force is impressed.
(3)
Third law (law of action and reaction): To every action there is always opposed an equal reaction, or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.
The first two laws invoke the concept of an inertial frame of reference, defined as a frame of reference (a coordinate system) that is not undergoing acceleration, i.e., which is either at rest or in motion with a constant velocity. In contrast, a non-inertial frame of reference is undergoing acceleration, due to changes in velocity and/or direction (such as rotation). Newtonian mechanics builds upon the earlier viewpoint of Galileo , now referred to as Galilean invariance, that the laws of motion are the same in all inertial frames of reference. In contrast, non-inertial frames require additional correction terms (inertial forces or “fictitious forces”) to the first and second laws of mechanics, and so are not self-contained. For this reason, inertial frames of reference are privileged over non-inertial frames in Newtonian mechanics. Apart from this distinction, all inertial frames of reference are considered to be equivalent. An important corollary of this statement—unappreciated in Newton’s time, but now viewed as fundamental—is that there is no “preferred” or “absolute” inertial frame of reference for the universe.
The process of conversion between two inertial frames of reference is referred to as Galilean transformation. It is readily shown that the major differential and integral conservation equations of fluid mechanics, including of fluid mass, momentum and energy—as well as subsidiary equations such as the Navier–Stokes equations—remain unchanged under Galilean transformation , and are, therefore, Galilean invariant. They also exhibit other important invariance properties, including invariance to certain dimensionless transformations [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24], invariance to fixed displacements in the time or space coordinates [5,20], invariance to fixed reflections or rotations of the coordinate system [5,20] and, more generally, invariance to the one-parameter Lie group of point transformations, e.g., [25,26,27,28,29,30]. Further invariance properties are also satisfied by some conservation laws: for example, the continuity equation is invariant to reversal of the time coordinate, whereas the momentum and energy equations (including the Navier–Stokes equations) are not.
The aim of this study is to examine the invariance properties of the thermodynamic entropy production, in its different formulations. This work is set out as follows. In Section 2, we provide the major equations for the entropy production, including its global (integral), local (differential), local bilinear, and global macroscopic formulations. In Section 3, we examine the invariance properties of these equations, including dimensional scaling, invariance to fixed coordinate displacements, rotations or reflections, Galilean invariance, and one-parameter Lie symmetries. Of these, the Lie invariance provides a general framework for dimensional analysis and is shown to encompass the other invariances.
In Section 4.1 and Section 4.2, we then examine the macroscopic entropy production for shear flow systems—consisting of fluid flow relative to a solid boundary—at steady state. This reveals a peculiar invariance property of such systems, in that they preference a unique pair (an entropic pair) of inertial frames of reference, from the infinite set of equivalent such frames for the system. In Section 4.3, we draw out an apparent paradox arising from the coexistence of different entropic pairs for different parts of the same system. In each case, the paradox reveals the presence of at least one independent source of negentropy—a power source scaled by the absolute temperature—being depleted by one or more of the different parts. Extended forms of the entropic pairing property are examined in Section 4.4 for a wide range of other steady-state shear flow systems, and in Section 5 for unsteady shear flows, presented as a series of governing principles. The conclusions of this study are then presented in Section 6.
2. Thermodynamic Entropy Balance and Entropy Production
Although overlooked by many fluid mechanicists, the production of entropy is fundamental to studies of nonequilibrium or irreversible processes, and therefore all flow systems in which there is friction (dissipation), diffusion, or chemical reaction. From the second law of thermodynamics, the thermodynamic entropy is not conserved; however, once created it cannot be destroyed. We, therefore, say that entropy is preserved . From the Reynolds transport theorem for the motion of a body of fluid (the fluid volume, FV) through a defined region of space (the control volume, CV), we can extract the following integral law of preservation for the thermodynamic entropy [31,32,33,34,35,36,37,38,39]:
$$\frac{DS}{Dt} = \frac{\partial}{\partial t}\int_{CV} \rho s \, dV + \oint_{CS} \rho s \, (\boldsymbol{u} \cdot \boldsymbol{n}) \, dA = \int_{CV} \left[ \frac{\partial (\rho s)}{\partial t} + \nabla \cdot (\rho s \boldsymbol{u}) \right] dV \tag{1}$$
using Cartesian spatial coordinates x = (x, y, z) [m] and time t [s], where S is the thermodynamic entropy [J K⁻¹], s is the specific thermodynamic entropy (per unit mass of fluid) [J K⁻¹ kg⁻¹ = m² s⁻² K⁻¹], ρ is the fluid density [kg m⁻³], 𝒖 is the fluid velocity vector [m s⁻¹], D/Dt is the substantial or material derivative [s⁻¹], ∂/∂t is the partial time derivative [s⁻¹], ∇ is the gradient operator in Cartesian coordinates [m⁻¹], dV is an infinitesimal volume element [m³], dA is an infinitesimal area element on the boundary [m²], and 𝒏 is the unit outward normal. The left-hand side of (1) refers to the rate of change of entropy in the fluid volume as a function of time, while the next two integrals are calculated, respectively, over the control volume coincident with the fluid volume at time t, and its control surface. The second and third parts of (1) are equivalent by Gauss' divergence theorem. Usually, the left-hand side of (1) is further separated by the de Donder method into internally- and externally-driven components, respectively:
$$\frac{DS}{Dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt} \tag{2}$$
The first term in the expanded part of (2) is the internal rate of entropy production (or rate of entropy generation [40,41,42]), commonly designated $\dot{\sigma}$ [J K⁻¹ s⁻¹], for example, due to dissipative frictional or chemical processes. From the second law of thermodynamics, this must be nonnegative. This can be further decomposed in terms of the local rate of entropy production $\hat{\dot{\sigma}}$ [J K⁻¹ m⁻³ s⁻¹]:
$$\frac{d_i S}{dt} = \dot{\sigma} = \int_{CV} \hat{\dot{\sigma}} \, dV \ge 0 \tag{3}$$
The second term in the expanded part of (2) is the externally driven rate of change of entropy, for example, due to nonfluid flows, and is commonly represented by:
$$\frac{d_e S}{dt} = -\oint_{CS} \boldsymbol{j}_s \cdot \boldsymbol{n} \, dA \tag{4}$$
where $\boldsymbol{j}_s$ is the local nonfluid entropy flux [J K⁻¹ m⁻² s⁻¹], with the negative sign arising from the sign convention for outward flows. Assembly of (1)–(4) and rearrangement gives an integral equation for the entropy production [31,32,33,34,35,36,37,38,39]:
$$\dot{\sigma} = \frac{\partial}{\partial t}\int_{CV} \rho s \, dV + \oint_{CS} \left( \rho s \boldsymbol{u} + \boldsymbol{j}_s \right) \cdot \boldsymbol{n} \, dA \ge 0 \tag{5}$$
Equation (5) must apply to every control volume, including each infinitesimal element dV, so from the fundamental lemma of the calculus of variations [43,44] it can be decomposed to give a differential equation for the local entropy production [31,32,33,34,35,36,37,38]:
$$\hat{\dot{\sigma}} = \frac{\partial (\rho s)}{\partial t} + \nabla \cdot \left( \rho s \boldsymbol{u} + \boldsymbol{j}_s \right) \ge 0 \tag{6}$$
The local entropy production field must be everywhere nonnegative, otherwise it would be possible to construct control volumes for which (5) is negative.
We now consider the definition of steady-state flow. For a differential system, we can adopt the strict definition for each fluid element, giving from (6):
(7)
For macroscopic flow systems, however, it is more meaningful to consider the mean steady state defined by the time average (or some other average) of (5):
(8)
where the overbar denotes the time average of some quantity a over an appropriate averaging period. As evident, at the mean steady state the entropy at any position may be fluctuating in time, but its integrated total remains constant. In either case (7) or (8), the concept of steady state precludes net rotational motion, except for small fluctuations in the time mean formulation (8).
For nonradiative processes, by substitution of the Gibbs equation and local conservation laws, the local entropy production (6) can be reduced to the bilinear form [31,32,34,35,36,37,38,39]:
$$\hat{\dot{\sigma}} = \sum_i \boldsymbol{J}_i \cdot \boldsymbol{X}_i \ge 0 \tag{9}$$
based on conjugate pairs of fluxes or rates $\boldsymbol{J}_i$ and thermodynamic forces or gradients $\boldsymbol{X}_i$, selected from those for the transport of heat, chemical species, momentum, or charge, or from the rates of chemical reaction processes. The diffusive fluxes of chemical species and charged particles in (9) are usually defined relative to the mass-average velocity of the fluid [34,36,37], with other choices also possible. In simple systems, (9) can be manifested at macroscopic scales [35,38].
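To illustrate the bilinear structure numerically, the following Python sketch (with invented flux and force values, not taken from this study) evaluates a two-term sum of conjugate fluxes and forces under linear Onsager-type closures with positive coefficients, which guarantees a nonnegative result:

```python
import numpy as np

# Illustrative, with invented numbers: local entropy production in bilinear form,
# sigma_hat = sum_i J_i . X_i, where each flux J_i is related to its conjugate
# force X_i by a linear (Onsager-type) closure J_i = L_i * X_i with L_i > 0.

rng = np.random.default_rng(0)

# Two conjugate pairs, e.g., a heat-like flux and a species-like flux (3-vectors).
X_heat = rng.normal(size=3)        # thermodynamic force, e.g., grad(1/T)
X_species = rng.normal(size=3)     # thermodynamic force, e.g., -grad(mu/T)
L_heat, L_species = 2.0, 0.5       # positive phenomenological coefficients

J_heat = L_heat * X_heat           # conjugate flux
J_species = L_species * X_species  # conjugate flux

sigma_hat = J_heat @ X_heat + J_species @ X_species
print(f"bilinear entropy production = {sigma_hat:.4f}  (nonnegative: {sigma_hat >= 0})")
```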
Finally, various expressions have been derived for the bulk or macroscopic entropy production in different flow systems. Commonly this is expressed in terms of observable parameters, e.g., for a steady-state system [40,41,42,45]:
$$\dot{\sigma} = \frac{P}{T} \tag{10}$$
where P is the power [W = J s⁻¹], equal to the rate of work loss or energy dissipation, and T is a reference temperature [K]. Equation (10) should be equivalent to the steady-state time average (8). However, it has the advantage of avoiding the well-known “problem of closure”, due to means of products of fluctuating quantities within (8) that cannot readily be quantified [31,40,45]. Several forms of (10) are examined further in Section 4.
3. Invariance Properties
We now examine several invariance properties of the entropy production, firstly in its local or differential form (6), here numbered in Roman numerals:
(I)
Dimensionless invariance: As established by nearly two centuries of mathematical and physical insight [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24], a fundamental property of a conservation equation is its invariance to dimensionless transformations. Carvallo and Vaschy referred to this as similitude, later also described as similarity or self-similarity, e.g., [5,15,46,47,48,49,50,51,52,53] (n.b., some researchers use similarity as the general term, reserving self-similarity for systems with a solution similar to itself, such as a fractal ). Birkhoff interpreted similarity from the perspective of group theory, as invariance under the “dimensional group of positive scalar transformations of units”. Some researchers, e.g., [18,22,23] distinguish between complete similarity or self similarity of the first kind, which can be revealed by dimensional analysis alone, and incomplete similarity or self similarity of the second kind, which cannot be revealed purely by dimensional analysis, due to divergent asymptotic behaviour of the governing equations in the limit of one or more dimensionless groups. The second category has been analyzed by the method of intermediate asymptotics [21,22,23], shown to be closely related to the method of renormalization groups, e.g., [22,23,25,54].
To transform the local entropy production, we choose an appropriate length scale L, together with velocity, temperature, and density scales, to define the following dimensionless variables:
(11)
where the transformed variables are, respectively, the dimensionless time, position, velocity, and density R. Using the scaling parameters, we also construct the following dimensionless groups:
(12)
(13)
(14)
The geometric scaling conditions impose the condition of geometric similarity; the velocities and fluxes impose the condition of kinematic similarity; while the remaining groups impose the equivalence of forces, or dynamic similarity [47,48,49,50,51,52].
Substitution of (11)–(14) into (6) and simplification gives the nondimensional entropy production equation:
(15)
This system consists of 17 dimensional parameters, counting each vector component and the four introduced scaling parameters, with 4 classical dimensions {s, m, kg, K} (here written in SI units rather than the usual dimensional notation). Applying the Buckingham Π theorem, we confirm that this yields a nondimensional equation between 17 − 4 = 13 dimensionless groups.
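This parameter count can be checked with a short linear-algebra exercise: assemble the matrix of dimensional exponents (one column per parameter, one row per base dimension) and compare the number of parameters with the matrix rank. The Python sketch below uses an illustrative parameter list consistent with the description above; the precise parameter set adopted in this study may differ in detail.

```python
import numpy as np

# Columns: exponents of each parameter in the base dimensions [s, m, kg, K].
# Parameters (17, counting vector components and four scaling parameters):
# t, x, y, z, u, v, w, rho, s (specific entropy), sig (local entropy production),
# j_sx, j_sy, j_sz (nonfluid entropy flux), and the scales L, U0, T0, rho0.
params = {
    "t":    [ 1,  0, 0,  0],
    "x":    [ 0,  1, 0,  0],
    "y":    [ 0,  1, 0,  0],
    "z":    [ 0,  1, 0,  0],
    "u":    [-1,  1, 0,  0],
    "v":    [-1,  1, 0,  0],
    "w":    [-1,  1, 0,  0],
    "rho":  [ 0, -3, 1,  0],
    "s":    [-2,  2, 0, -1],   # J K^-1 kg^-1 = m^2 s^-2 K^-1
    "sig":  [-3, -1, 1, -1],   # J K^-1 m^-3 s^-1 = kg m^-1 s^-3 K^-1
    "j_sx": [-3,  0, 1, -1],   # J K^-1 m^-2 s^-1 = kg s^-3 K^-1
    "j_sy": [-3,  0, 1, -1],
    "j_sz": [-3,  0, 1, -1],
    "L":    [ 0,  1, 0,  0],
    "U0":   [-1,  1, 0,  0],
    "T0":   [ 0,  0, 0,  1],
    "rho0": [ 0, -3, 1,  0],
}

D = np.array(list(params.values())).T          # 4 x 17 dimensional matrix
rank = np.linalg.matrix_rank(D)
print(f"parameters: {D.shape[1]}, rank (independent dimensions): {rank}")
print(f"independent dimensionless groups: {D.shape[1] - rank}")   # expect 17 - 4 = 13
```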
For anisotropic systems, (15) can be rewritten in the more comprehensive formulation:
(16)
which incorporates the vector dimensionless group:
(17)
where ⊙ is the element-wise (Hadamard) product of two vectors to give a vector, and ⊘ is an element-wise division operator between two vectors. As evident, this vector group expresses the component-wise ratios of the nonfluid and fluid entropy fluxes. Other dimensionless forms of (6) can also be derived using different choices of the reference parameters, e.g., [40,41,42].
In either form (15) or (16), the nondimensional entropy production equation is invariant to transformations that maintain the same values of the dimensionless groups (11)–(14) (or with (17)), known as dimensionless invariance. Provided that the assumptions inherent in the derivation of (6) do not break down, such systems satisfy the properties of geometric, kinematic, and dynamic similarity, and will exhibit the same physics regardless of their absolute scale.
(II)Invariance to fixed displacements in the time or position coordinates: For this, we adopt the modified dimensionless coordinates [5,20]:
(18)
based on constant displacements in the time and position coordinates, respectively, with the velocities, density, and other dimensionless variables unchanged. This transformation of (6) returns (15), hence (15) is invariant to fixed displacements of the time or position coordinates.
(III)Invariance to fixed reflections or rotations of the coordinate system: Here we define a third-order coordinate rotation or reflection matrix A, an orthogonal matrix composed of direction cosine terms, giving the modified dimensionless positions, velocities, and fluxes [5,20]:
(19)
Transformation of (6) based on these coordinates, with the time, density, and other dimensionless variables unchanged, yields—with some effort—(15). The latter is therefore invariant to fixed reflections or rotations of the coordinate system.
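A minimal numerical illustration of this property—using the fact that dot products such as those appearing in the flux terms are preserved by orthogonal transformations—is given below; the rotation matrix and vectors are arbitrary, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a random orthogonal matrix A (rotation or reflection) via QR decomposition.
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))
assert np.allclose(A.T @ A, np.eye(3))

# Arbitrary velocity-like and flux-like vectors (illustrative values only).
u = rng.normal(size=3)
j = rng.normal(size=3)

# Dot products (and hence bilinear terms of the entropy production) are
# unchanged under the orthogonal transformation u -> A u, j -> A j.
print(np.isclose(u @ j, (A @ u) @ (A @ j)))   # True
```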
(IV)Time antisymmetry: Here we consider reversal of the time coordinate, and also of the velocities and fluxes, in dimensionless form:
(20)
with the positions, density, and other dimensionless variables unchanged. Transformation of (6) then yields the negative of (15) on the right-hand side. Equation (15) therefore exhibits antisymmetry with respect to time reversal, as expected for an entropy production equation.
(V)Galilean invariance: For this we adopt the dimensionless velocities and fluxes [5,20]:
(21)
based on a constant velocity, which provides a Galilean transformation between two inertial frames of reference. The time, density, and other dimensionless variables are unchanged. This transformation of (6) returns (15), hence (15) is Galilean invariant.
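This can be illustrated numerically: applying the same constant boost to the solid-frame and fluid-frame velocities leaves their difference—and hence any quantity constructed from it, such as the Reynolds number used later in Section 4—unchanged. A minimal sketch with invented property values:

```python
import numpy as np

rho, mu, ell = 1000.0, 1.0e-3, 0.1   # illustrative water-like properties and length scale [SI]
u_solid, u_fluid = 0.0, 2.0          # velocities of the two frames in some lab frame [m/s]

def reynolds(u_solid, u_fluid):
    """Reynolds number built from the velocity *difference* between the frames."""
    U = u_fluid - u_solid
    return rho * U * ell / mu

for boost in (0.0, 5.0, -17.3):      # Galilean boosts applied to *both* frames
    Re = reynolds(u_solid + boost, u_fluid + boost)
    print(f"boost = {boost:6.1f} m/s  ->  Re = {Re:.1f}")
# The printed Reynolds number is identical for every boost.
```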
(VI)Lie invariance: The invariance properties or symmetries associated with infinitesimal Lie transformations of a differential or integral equation constitute a large topic, e.g., [25,26,55,56,57,58]. For the present study, we restrict the discussion to the one-parameter Lie group of point scaling transformations, e.g., [25,26,27,28,29,30]. For the local entropy production (6), this is defined by the 13-parameter map:
(22)
where the map involves a single scaling parameter raised to a set of scaling exponents, and the capital or Greek letters denote the transformed variables (here, not necessarily dimensionless). Substitution into (6) and simplification gives the mathematical form (15) subject to 7 auxiliary relations for the exponents, which can be solved to give:
(23)
This result can be interpreted in several ways:
(a)
If the transformed parameters in (22) have the same dimensions as the original parameters—the standard interpretation—they must be considered as rescaled dimensional variables, while the scaling terms are dimensionless conversion factors. Transformation gives a rescaled form of (6) rather than a nondimensional equation, while the auxiliary relations (23) provide the relations between conversion factors for the dependent and independent variables. This interpretation is useful, representing a rescaling between a model and a prototype to maintain similarity, e.g., [27,28,29], or rescaling by a change of units. However, since this interpretation can be handled by the more general apparatus of dimensionless invariance (I), it need not be considered further.
(b)
Alternatively, if the transformed variables in (22) are intended to be dimensionless, the scaling terms must be interpreted as dimensional scaling parameters. Assuming a positive dimensionless scaling parameter, each scaling term then carries the logarithm (in the base of that parameter) of the dimensions of the corresponding quantity, written in SI units. In consequence, the auxiliary relations (23) simply express the relations between the dependent and independent dimensions within the system, enabling the transformation of (6) into the nondimensional form (15).
In this example, there are 7 auxiliary relations (23) composed of 13 terms, so there must be 13 − 7 = 6 independent terms, hence, 6 independent dimensions of the system (see the sketch following this list). We have chosen to express (6) in terms of the dimensions {s, m, m, m, kg m⁻¹ m⁻¹ m⁻¹, J K⁻¹ kg⁻¹} of the 6 linearly independent parameters—the time, the three position coordinates, the fluid density, and the specific entropy—counting each vector component of the position, to span the set of 6 fundamental units {s, m, m, m, kg, K}. This choice of dimensions—which distinguishes the length scale in each Cartesian direction, and which adopts the dimensions of the fluid density and specific entropy rather than mass and temperature—is rather unexpected, but it does represent the intrinsic or “indigenous” dimensions of the local entropy production (6). Other choices of independent variables are possible, provided that they span the set of 6 fundamental units. The resulting Lie transformation (22) and (23) is consistent with the Buckingham Π theorem, reducing a system of 13 parameters (including the 6 independent terms) and 6 dimensions to the nondimensional form (15) with 13 − 6 = 7 dimensionless groups. It is curious that the number and character of the dimensions and the vector formulation of (15) are revealed automatically by the Lie apparatus, seemingly for free!
(c)
We can also consider hybrid interpretations, in which the transformed parameters and the scaling terms in (22) both carry dimensions, in combination giving those of the original parameters. Such representations may be feasible, but owing to their complexity, we exclude them from further discussion.
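The counting in interpretation (b) can be reproduced with a small linear-algebra sketch. Assuming the local entropy production takes the standard form $\hat{\dot{\sigma}} = \partial(\rho s)/\partial t + \nabla \cdot (\rho s \boldsymbol{u}) + \nabla \cdot \boldsymbol{j}_s$ (consistent with Section 2), requiring each additive term to scale identically under the 13-parameter map yields 7 independent linear constraints on the scaling exponents, leaving 6 free exponents—the 6 independent dimensions noted above:

```python
import numpy as np

# Scaling exponents, one per variable of the 13-parameter Lie point map.
vars_ = ["t", "x", "y", "z", "u", "v", "w", "rho", "s", "sig", "jx", "jy", "jz"]
idx = {v: i for i, v in enumerate(vars_)}

def row(plus, minus):
    """Constraint: exponents in `plus` minus those in `minus` must equal a_sig."""
    r = np.zeros(len(vars_))
    for v in plus:
        r[idx[v]] += 1
    for v in minus:
        r[idx[v]] -= 1
    r[idx["sig"]] -= 1
    return r

# Each additive term of sigma_hat = d(rho s)/dt + div(rho s u) + div(j_s)
# must scale identically to sigma_hat itself:
C = np.array([
    row(["rho", "s"], ["t"]),        # d(rho s)/dt
    row(["rho", "s", "u"], ["x"]),   # d(rho s u)/dx
    row(["rho", "s", "v"], ["y"]),   # d(rho s v)/dy
    row(["rho", "s", "w"], ["z"]),   # d(rho s w)/dz
    row(["jx"], ["x"]),              # d(j_sx)/dx
    row(["jy"], ["y"]),              # d(j_sy)/dy
    row(["jz"], ["z"]),              # d(j_sz)/dz
])

rank = np.linalg.matrix_rank(C)
print(f"auxiliary relations: {C.shape[0]} (rank {rank}); "
      f"free scaling exponents / independent dimensions: {len(vars_) - rank}")
# expect: 7 relations (rank 7), 13 - 7 = 6 free exponents
```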
We see that the second interpretation (b) of the Lie transformation provides an alternative formulation of the dimensionless invariance (I), expressed in terms of fundamental quantities that better reflect the underlying symmetries of (6). This equivalence between dimensionless invariance and Lie point symmetry has been recognized by some researchers, e.g., , but is not well-known in either the mathematical or engineering literature. By remapping the scaling parameters in the manner of (18)–(21), we can also extract the other invariances (II)–(V), hence, the Lie invariance can be considered to subsume all of these invariances.
In their analyses of other conservation laws, some authors deduce further equivalences, which for the present study would yield equal scaling exponents for the three position coordinates, for the three velocity components, and for the three components of the nonfluid entropy flux, e.g., [27,29]. These relations are not provided by the above Lie transformation of (6), nor are they evident from its vector structure. They can, however, be interpreted as consequences of the invariance to fixed rotations or reflections of the coordinate system (III).
We note that Lie and later workers developed comprehensive algorithmic methods for the analysis of infinitesimal Lie symmetries of differential equations [20,26,28,55,57,58]. Furthermore, several researchers have suggested multiparametric extensions of the above Lie apparatus, applicable to all conservation equations [26,58,59,60]. The current level of understanding of the Lie invariances of the local entropy production (6) is therefore incomplete, and warrants further examination.
Now examining the bilinear form of the local entropy production (9), it is evident that each flux and thermodynamic force can be converted to a nondimensional form, hence, this equation also satisfies dimensionless invariance (I). Since the fluid, momentum, and heat fluxes are expressed relative to a common inertial frame of reference, while the diffusive fluxes are expressed relative to the mass-average velocity , (9) will also be Galilean invariant (V) to a change in the common inertial frame of reference . It is also readily verified that (9) satisfies equivalent forms of the other invariance principles (II)–(IV) and (VI), which it must, since it is equivalent to the differential equation (6).
Examining the integral entropy production (5), this can be decomposed into a field of constituent differential equations for , each of which satisfies the invariance properties (I)–(VI). The integral equation will therefore satisfy the same transformations, provided they are applied identically throughout the fluid and control volumes. Furthermore, the invariance to fixed displacements (II)—which does not affect any velocities, accelerations, fluxes, or rates of change—can alternatively be applied in a smoothly-varying fashion to each infinitesimal element, to select a single set of coordinates for the entire control volume. Invariance to fixed rotations or reflections (III) and Galilean invariance (V) can then be demonstrated for this global coordinate system by mathematical induction.
Finally, the macroscopic entropy production (10), being equivalent to the time average of the integral Equation (8), should satisfy the same invariance properties (I)–(VI), based on macroscopic analogs of the local dimensionless groups (11)–(14). We examine several such nondimensional transformations (I) in the remainder of this study, as well as the Galilean invariance (V) of the macroscopic entropy production. These are used to draw out an additional, previously unrecognized, invariance property of shear flow systems.
4. Macroscopic Shear Flow Systems and an Entropic Invariance Principle
4.1. External and Internal Shear Flow Systems
We now examine two macroscopic shear flow systems involving fluid flow relative to a solid boundary at steady state, to reveal a rather different invariance property of these systems. In keeping with the convention for idealized flows in fluid mechanics, in this section we consider only drag forces, and neglect lift forces, gravitational or buoyancy forces, electromagnetic fields, rotational or vibrational motions, as well as the effects of special or general relativity. We assume that solid objects are rigid and remain at constant elevation or position. While we recognize that all dissipative processes generate heat, we restrict the analysis to the relative motions of solids and fluids under (approximately) isothermal conditions, and do not consider convective flows due to temperature gradients. We return to the consideration of lift forces in Section 4.3.2(c), and body forces and accelerations in Section 5.
4.1.1. External Flows
In the first example, consider the steady irrotational non-vibrational motion of a fluid relative to a solid object, such as a moving sphere or aircraft, in fluid mechanics referred to as an external flow, e.g., [47,48,50,51,52]. This is represented by the control volume illustrated in Figure 1a. We note that the selected inertial frame of reference, in which the fluid is considered to be moving around a stationary object, represents only one choice from an infinite set of equivalent inertial frames of reference for this system. An alternative inertial frame of reference is shown in Figure 1b, in which the ambient fluid (referenced at infinite distance) is considered stationary (in the mean), while the object is moving. Insofar as the subsystem consisting of the object and its surrounding fluid is concerned, there is no distinction in Newtonian mechanics between the flows described in these two inertial frames of reference, nor between these and any other inertial frames.
Figure 1.
Cross-sections of two representations of an external flow system at steady state, for the inertial frame of reference (a) relative to the solid or (b) relative to the fluid (compare ). In both cases, the fluid extends beyond the rectangular control volume shown.
The flow regime in Figure 1a,b is commonly described by the Reynolds number [46,47,48,49,50,51,52,53,62]:
$$Re = \frac{\rho U \ell}{\mu} \tag{24}$$
where ℓ is a consistent length scale [m], usually obtained from the solid object, U is the mean velocity of the free stream of fluid relative to the solid [m s⁻¹], and μ is the dynamic viscosity [Pa s]. We allow Re < 0 when U < 0, expressed in the coordinate system of Figure 1a. For simple geometries, the frictional and pressure drag force is expressed by the drag coefficient [46,47,48,49,50,51,52,53,62]:
$$C_D = \frac{F_D}{\tfrac{1}{2} \rho U^2 A} \tag{25}$$
where F_D is the (scalar) drag force of the fluid on the object [N] and A is the cross-sectional area of the solid normal to the flow [m²]. We allow C_D < 0 when F_D < 0. Correlations for C_D as a function of Re are available for a range of solid shapes, typically in graphical form, e.g., [20,46,47,48,49,50,51,52,63].
(26)
From (24)–(25), this can be expressed in the nondimensional form:
(27)
where g is the acceleration due to gravity [m s⁻²] and Ga is the Galileo number, which expresses the ratio of gravity to viscous forces [64,65,66]. We note that in all instances, U and F_D will have the same sign (see later discussions), hence, Re and C_D will also, enforcing the nonnegativity of (26) and (27) in the event of flow reversal.
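As a worked illustration of (24)–(26), the Python sketch below estimates the macroscopic entropy production for a small sphere in a uniform stream. The Schiller–Naumann drag correlation is used here purely as a convenient stand-in for the tabulated C_D(Re) curves cited above, and all property values are invented for illustration:

```python
import numpy as np

# Illustrative properties (not from the paper): a small sphere in water.
rho, mu, T = 1000.0, 1.0e-3, 293.15     # density [kg/m3], viscosity [Pa s], temperature [K]
d = 0.01                                 # sphere diameter, used as the length scale [m]
U = 0.05                                 # free-stream speed relative to the solid [m/s]
A = np.pi * d**2 / 4                     # projected (cross-sectional) area [m2]

Re = rho * U * d / mu                                  # Eq. (24)
C_D = 24.0 / Re * (1.0 + 0.15 * Re**0.687)             # Schiller-Naumann correlation, Re < ~1000
F_D = C_D * 0.5 * rho * U**2 * A                       # from Eq. (25)
P = F_D * U                                            # rate of work against drag [W]
sigma_dot = P / T                                      # Eq. (26): macroscopic entropy production [W/K]

print(f"Re = {Re:.0f}, C_D = {C_D:.3f}, F_D = {F_D*1e3:.3f} mN, sigma_dot = {sigma_dot:.2e} W/K")
```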
4.1.2. Internal Flows
In the second example, consider the steady irrotational motion of a fluid within a solid conduit such as a pipe—commonly termed an internal flow—under a pressure gradient (Poiseuille flow), as shown in Figure 2a. Again this choice of inertial frame of reference, in which the fluid is considered to be moving while the solid is stationary, represents only one choice from an infinite set of equivalent inertial frames of reference for this system. An alternative inertial frame of reference is represented in Figure 2b, in which the fluid is considered stationary (in the mean), while the enclosing solid is in motion. Once again there is no distinction in Newtonian mechanics between the flows described in these two inertial frames of reference, nor in any other inertial frame.
Figure 2.
Two representations of an internal flow system at steady state, in the region of developed flow, for the inertial frame of reference (a) relative to the solid or (b) relative to the fluid. The velocity arrows are drawn relative to the chosen inertial frame of reference.
The flow regime for internal flow through a conduit of noncircular cross-section is generally described by the Reynolds number [46,50,62]:
$$Re = \frac{\rho U d_H}{\mu}, \qquad d_H = \frac{4A}{W}, \qquad U = \frac{Q}{A} \tag{28}$$
where d_H is the hydraulic diameter [m], A is the flow cross-sectional area [m²], W is the wetted perimeter [m], U is now the mean cross-sectional velocity of the fluid relative to the solid [m s⁻¹], and Q is the volumetric flow rate [m³ s⁻¹]. The total pressure loss Δp [Pa] can be expressed as [46,47,48,49,50,51,52,62]:
$$\Delta p = \rho g h_L = \left( \frac{f L}{d_H} + \sum K \right) \frac{\rho U^2}{2} \tag{29}$$
where h_L is the total head loss [m], f is the Darcy-Weisbach friction factor [-], L is the flow length [m], and K is the loss coefficient for a pipe fitting [-], summed over the total number of fittings. To enable flow reversal, we impose Re < 0, f < 0, and h_L < 0 (a pressure, head or frictional gain) for flows with U < 0 and Q < 0, c.f., [66,67,68,69]. Equations (28) and (29) reduce to the expressions for a circular pipe of diameter D [46,47,48,49,50,51,52,62], while variants are available for conduits of different cross-section, e.g., [37,46,48,50,62,65,70,71]. Furthermore, f is a function of Re, with an analytical solution for laminar flow, and various correlations for turbulent flow as a function of Reynolds number and surface roughness [46,47,48,49,50,51,52,72]. For isothermal flow, the macroscopic entropy production (10) is then given by [40,41,42,66]:
$$\dot{\sigma} = \frac{\rho g Q h_L}{T} = \frac{Q \, \Delta p}{T} \tag{30}$$
From (28) and (29), this can be expressed in the nondimensional form:
(31)
where Ga_H is the hydraulic Galileo number. Since h_L and U (or Δp and Q) have the same sign, (30) and (31) will remain nonnegative in the event of flow reversal.
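A corresponding sketch of (28)–(30) for fully developed flow in a circular pipe is given below. The laminar result f = 64/Re and the Haaland explicit approximation for turbulent flow are used as stand-ins for the friction-factor correlations cited above, and all numbers are again illustrative:

```python
import numpy as np

# Illustrative values (not from the paper): water in a circular pipe.
rho, mu, g, T = 1000.0, 1.0e-3, 9.81, 293.15
D, L_pipe, eps = 0.05, 10.0, 1.5e-6       # diameter [m], length [m], roughness [m]
Q = 2.0e-3                                 # volumetric flow rate [m3/s]

A = np.pi * D**2 / 4                       # flow area [m2]; W = pi*D, so d_H = 4A/W = D
U = Q / A                                  # mean velocity [m/s]
Re = rho * U * D / mu                      # Eq. (28)

if Re < 2300.0:                            # conventional laminar threshold (illustrative)
    f = 64.0 / Re
else:                                      # Haaland explicit approximation for turbulent flow
    f = (-1.8 * np.log10((eps / D / 3.7)**1.11 + 6.9 / Re))**-2

h_L = f * (L_pipe / D) * U**2 / (2 * g)    # Darcy-Weisbach head loss, Eq. (29) with no fittings
sigma_dot = rho * g * Q * h_L / T          # Eq. (30): macroscopic entropy production [W/K]

print(f"U = {U:.2f} m/s, Re = {Re:.0f}, f = {f:.4f}, h_L = {h_L:.3f} m, sigma_dot = {sigma_dot:.2e} W/K")
```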
The dimensionless entropy production terms for external and internal flows, (27) and (31), cannot be compared directly, being nondimensionalized by different parameters. However, it is always possible to compare their dimensional values, measured in a consistent set of units:
(32)
The above principles extend naturally to other shear flow systems, a number of examples of which are examined further in Section 4.4 and Section 5.
4.2. An Invariance Principle of Entropic Pairing
From the analyses in Section 3, in each of the above examples, the macroscopic entropy production (26) or (30) (or its nondimensional form (27) or (31)) must be Galilean invariant, since it arises from the time average of the integral formulation (5). Scrutinizing the examples carefully, it is evident that the velocity U used to define the Reynolds number (24) or (28) is not situated within a single inertial frame of reference. Instead, it represents the difference in velocity between two inertial frames of reference: (i) the frame moving with the solid or, in other words, that in which the solid is stationary, and (ii) the frame moving with the mean fluid flow, or that in which the mean flow is stationary. These correspond to the pair of frames respectively illustrated in Figure 1a,b or Figure 2a,b. In other words, the Reynolds number provides a dimensionless Galilean transformation between the two unique inertial frames of reference that define the shear flow. It is precisely for this reason that the Reynolds number—and, consequently, any derived quantity such as the entropy production—is independent of any individual inertial frame of reference, and so is the Galilean invariant.
We can take this argument further to consider that in each example, the Reynolds number (24) or (28) preferentially selects the solid and free-stream inertial frames of reference from the infinite set of equivalent inertial frames of reference for that system. This insight is profound, since it challenges the usual Galilean viewpoint that all inertial frames are equivalent: clearly, from the perspective of a shear flow subsystem defined by fluid flow relative to a solid boundary, they are not. Indeed, the very notion of dimensionless invariance (I) of a macroscopic shear flow system requires the existence of a special pair of inertial frames of reference, since otherwise the dimensionless groups used to represent dynamic similarity (such as the Reynolds number) could not be defined.
The two inertial frames identified by each Reynolds number could be described as a Reynolds pair of inertial frames of reference. However, they are not preferenced by the Reynolds number alone, but also by the drag coefficient (25), the friction factor (29), the entropy production (26) or (30), and every dimensionless group that invokes inertial processes. Examining the Reynolds number, we know that it discriminates between two flow regimes characterized by different rates of entropy production [66,73,74,75]:
(a)
Laminar flow, in which the diffusion of momentum is dominated by viscous processes, leading to a low Reynolds number and lower entropy production, and
(b)
Turbulent flow, in which the diffusion of momentum is dominated by inertial processes, leading to a higher Reynolds number and a commensurately higher rate of entropy production.
In other words, the Reynolds number is an entropic parameter, which expresses the relative influence of viscous and inertial momentum transport processes on the entropy production. For this reason, the two selected inertial frames of reference should be described as an entropic pair of inertial frames of reference, which are entropically paired by the shear flow system. We also refer to the distance between the two selected frames as their inertial separation, in each case given in dimensional form by the inertial velocity U.
Variants of the above arguments apply to all other shear flows, modified if necessary to encompass the relative motions of more than one solid or fluid. A variety of other such systems are examined in Section 4.4. In each shear flow, the entropic pair of inertial frames of reference will be invariant to the transformations (I)–(VI) identified in Section 3, and so can be identified as an additional invariance property of the system. In this respect, the invariant property of entropic pairing can be considered to “sit above” the Galilean inertial framework of Newtonian mechanics.
4.3. An Entropic Paradox and Its Resolution
4.3.1. Statement of the Paradox
Now consider the macroscopic flow system illustrated in Figure 3, containing multiple examples of external flow subsystems (subsidiary control volumes), each involving steady irrotational non-vibrational motion between a solid object and a fluid. Such a flow system can be seen, for example, in the movements of multiple fish in the ocean, or of multiple aircraft above most inhabited parts of the Earth. As drawn, Figure 3 adopts an inertial frame of reference relative to a reference solid such as the Earth’s surface, with the ambient fluid moving at constant velocity U relative to this surface (ignoring boundary-layer effects). As noted, however, this inertial frame of reference is not unique and is adopted here simply for convenience. Within the fluid, we consider four shear flow subsystems created by various solid objects:
(a)
Subsystem A, containing Object A, which is stationary with respect to the reference solid, and experiences an incoming flow field of ambient velocity U;
(b)
Subsystem B, containing Object B moving at a constant speed relative to the reference solid (i.e., in motion upstream);
(c)
Subsystem C, containing Object C moving at a constant speed relative to the reference solid (i.e., in motion downstream); and
(d)
Subsystem D, consisting of an internal flow field of constant ambient velocity established within a container or region, within which Object D is held stationary with respect to the Subsystem D boundary and the reference solid.
Figure 3.
Multiple external flow subsystems within a flow system adopting an inertial frame of reference relative to a reference solid (compare ). For the subsystem control volumes, permeable boundaries are drawn with dashed lines, and impermeable boundaries with solid lines. All velocities are defined relative to the reference solid.
Considering the ambient flow in Figure 3 to be an internal flow driven by a pressure gradient (Poiseuille flow) without fittings, its Reynolds number (28) and head loss (29) are:
(33)
where subscript f denotes the ambient fluid. The dimensionless entropy production (31) for the ambient flow is therefore:
(34)
For the external flows associated with Objects A to D, the Reynolds numbers (24) and drag coefficients (25) are, respectively:
(35)
based on appropriate choices of length scales, drag forces, and cross-sectional areas. In consequence, their macroscopic dimensionless entropy production terms (27) are, respectively:
(36)
for Objects A to D, respectively, evaluated using the velocity of the ambient flow relative to each object. We can also analyze the flow in Subsystem D as a separate internal flow for which the Reynolds number (28), friction factor (29), and dimensionless entropy production (31) are given, respectively, by:
(37)
where the subscript D denotes the enclosed Subsystem D, with its own hydraulic diameter [m], flow cross-sectional area [m²], head loss [m], flow length [m], and mean velocity [m s⁻¹]. Subsystem D provides an example of an external flow subsystem nested inside an internal flow subsystem, in turn nested inside the bulk flow system. For even greater generality, we could imagine Subsystem D to be detached from the reference solid and moving at a constant velocity through the bulk fluid, requiring an additional Reynolds number, drag coefficient, and entropy production term associated with its motion.
As evident, in this example there are six different Reynolds numbers, drag or friction coefficients, and rates of entropy production, associated with the bulk flow of the ambient fluid, with each external flow in Subsystems A to D, and with the internal flow field in Subsystem D. From Section 4.2, we know that each system or subsystem has its own entropic pair of inertial frames of reference moving with its solid and its mean fluid flow. In consequence, different parts of a connected flow system can coexist with different entropically paired inertial frames of reference. This creates an entropic paradox: how can a shear flow system preferentially select many different—and unrelated—entropic pairs of inertial frames of reference?
4.3.2. Resolution of the Paradox
To resolve this paradox, we first revisit the concept of negative entropy or negentropy, conceived by many prominent researchers [76,77] and named by Brillouin [78]. In this perspective, the universe contains a finite store or reservoir of negentropy, which is continually and irreversibly depleted by dissipative processes. The negentropy of the universe therefore provides a thermodynamic potential N, which decreases in the direction of spontaneous change. It therefore generalizes the availability, available work, free energy, affinity, and Planck potential concepts [79,80,81,82,83,84,85] and subsumes related ideas such as exergy and essergy [86,87]. In a flow system, the rate of change of negentropy due to processes within a fluid volume is, by definition:
(38)
hence, from the de Donder separation (2):
(39)
where the first term is the internally driven rate of change of negentropy or, more simply, the rate of negentropy production, and the second term is the externally driven rate of change of negentropy. We see that the rate of negentropy production is identical to the rate of entropy production but of opposite sign, and likewise in time-averaged dimensionless form. We can also refer to the rate of negentropy consumption, defined by the corresponding absolute value.
Now consider each subsystem shown in Figure 3 in turn:
(a)
Subsystem A: From Figure 3, we see that Object A shares the same inertial frame of reference as the reference solid (it could even be joined to it by some physical framework or magnetic coupling). The entropic pair for Subsystem A is, therefore, the same as for the bulk flow, with the inertial separation U. Furthermore, the entropy production (36) reveals the existence of a source of negentropy for Subsystem A, which is being continuously depleted by the reaction to the drag force, i.e., by the need to continuously do work against the frictional and pressure drag between the fluid and Object A. This rate of work is, at minimum, the drag force on Object A multiplied by U (assuming 100% efficiency), and the associated rate of negentropy consumption is, at minimum, this power divided by the temperature T, or its equivalent in dimensionless form. From the information provided in Figure 3, we do not know if this source of negentropy is situated at Object A itself, for example, a source of motive power attached to the object, or if it is incurred by the source of negentropy driving the ambient fluid stream, enabling it to perform the required added work against Object A held in a fixed position. We do know, however, that there must be a source of negentropy for Subsystem A, and that this must be situated with either (or both) Object A or the ambient flow field.
(b)Subsystem B: Now consider Subsystem B, which preferences the entropic pair of inertial frames of reference defined by the ambient flow and Object B, with the inertial separation given by the sum of U and the upstream speed of Object B. In all cases of upstream motion, this entropic pair will differ from that for the ambient flow, due to the different solid velocities. There must be a source of negentropy for the entropy production incurred by Subsystem B. Furthermore—and this is the crucial point—even if there is a physical connection or coupling between Object B and the reference solid (e.g., a chassis and a set of wheels), we know that the ambient flow field can only contribute to this entropy production to the extent of its inertial separation U from the reference solid, i.e., for which:
(40)
(assuming a constant length scale and cross-sectional area). Any positive excess must therefore be incurred by an independent source of negentropy associated with Object B, for example a source of motive power attached to this object, or connected to it by other means such as a magnetic coupling. Alternatively, if there is no connection between Object B and the reference solid, nor with any other solid in the system, then all of the negentropy consumption for Subsystem B, not just the excess , must be incurred by the source attached to Object B.
(c)
Subsystem C: Next, consider Subsystem C, which preferences the entropic pair of inertial frames defined by the ambient stream and Object C, with the inertial separation given by the difference between U and the downstream speed of Object C. The Reynolds number, drag coefficient, and entropy production are given in (35) and (36). For the downstream motion of Object C, there are three scenarios:
(i)
For a downstream speed less than U, Object C will move more slowly than the downstream flow, and the subsystem will have a positive Reynolds number, drag coefficient, and entropy production.
(ii)
For the special case in which the downstream speed equals U, Object C will move with the fluid stream, hence, the relative velocity, Reynolds number, and drag force all vanish, so (for this ideal case) there is no entropic pair of inertial frames of reference and no entropy production.
(iii)
For a downstream speed greater than U, Object C will move more rapidly than the downstream flow, incurring a drag force in the opposite direction, again leading to a nonzero Reynolds number and drag coefficient, and a positive entropy production.
As for Subsystem B, the ambient flow can only contribute to the entropy production of Subsystem C to the extent of its inertial separation U, i.e., for which:
(41)
The excess will be negative (non-physical), zero, or positive, respectively, for the above three cases. For case (i), it is thus possible for all of the negentropy consumption to be harnessed from the ambient flow, with the solid object partly carried by the fluid. For case (ii), both terms in the excess vanish, and there is no entropy production. For case (iii), however, the positive excess reveals the existence of a source of negentropy for Object C, independent of that for the ambient flow. Alternatively, regardless of the above categories, if the negentropy harnessed by Object C from the flow is less than its total rate of negentropy consumption, then this difference must also be provided by the independent source of negentropy for Object C.
If the object is in translational motion in a different direction to the surrounding flow field, the above arguments must be modified to account for the relative velocity vectors. For example, consider an ambient flow of vector velocity U, within which Object C moves at a constant vector velocity, in both cases measured in Cartesian coordinates with respect to the reference solid. The macroscopic entropy production (10) is now given by the two- or three-dimensional vector scalar product:
(42)
where the first factor is the drag-lift force vector, containing drag and lift components aligned with and normal to the direction of motion, respectively (in three-dimensional systems there can be two lift components). Equation (42) represents the combined effect of frictional and pressure forces. The parameters in (42) can be expressed in terms of a vector Reynolds number and vector drag-lift coefficient, to give:
(43)
where the magnitudes of the corresponding vectors are used in the nondimensionalization. As evident from (42) and (43), there must be an acute angle between the drag-lift force vector and the relative velocity vector (or between their dimensionless counterparts), to ensure the nonnegativity of the entropy production. From the Kutta-Joukowski theorem, in a potential flow the lift component will be proportional to the fluid circulation on any closed path around the object, evaluated along the intrinsic coordinate of the path [47,48,52,53,62]. Since a two-dimensional asymmetric object (e.g., an airfoil or hydrofoil) creates a non-zero fluid circulation, it will produce lift. For an object with no lift forces, the force and relative velocity vectors will be oriented in the same direction, and we recover the scalar form used in (36). (A short numerical sketch of this vector form is given after this list.)
Using a mechanism mounted on Object C (e.g., a sail), the ambient flow can be harnessed to extract the negentropy required for motion in almost any direction, but only to some maximum extent that is harnessable from the flow (allowing for the possibility of direction-dependent parameters such as areas, length scales, and drag coefficients). Beyond this, any positive excess must have an independent source of negentropy, most likely attached to the object. Similar arguments apply to other external flow subsystems. Reexamining Object B, we note it may be very difficult to harness the ambient flow to enable countercurrent motion, but it is certainly possible to facilitate motion on an oblique reverse trajectory and to thereby construct a zigzag course (“tacking“) to achieve a net upstream motion. In all cases, the negentropy not actually harnessed from the ambient flow or extracted by a connection to another solid must be provided by the independent source of negentropy associated with the object.
(d)Subsystem D: Now consider Subsystem D, an external flow subsystem within an internal flow subsystem, in turn situated within the main flow field. This preferences the entropic pair of inertial frames defined by its internal flow and Object D, with the inertial separation given by the mean velocity of that internal flow. For the example shown in Figure 3, the surrounding ambient flow field does not directly affect the fluid inertial frame of reference, but due to the connection between Object D and the reference solid, the subsystem has the same solid inertial frame of reference as the ambient flow. For any nonzero internal flow velocity, the subsystem will have a positive Reynolds number, drag coefficient, and entropy production due to the drag force on Object D, and also a positive Reynolds number, friction factor, and entropy production due to the internal flow within Subsystem D. In dimensional form, from (32) the total is:
(44)
Due to the physical connection between Object D and the reference solid, we know that the source of negentropy for (44) cannot reside with Object D. It is possible that negentropy could be harnessed from the bulk flow, but only to the maximum extent that is harnessable, for example from (10):
(45)
where the numerator is the maximum power extractable by Subsystem D and the denominator is its reference temperature. For example, considering only the kinetic energy component of the ambient flow entering through the intake area of Subsystem D, continuity fixes the corresponding volumetric flow rate, and the harnessable power then follows from (30). In general, the form of (45) will depend on the design of Subsystem D and its conversion efficiency, so this is not analyzed further here. Here, it is sufficient to state that the excess between (45) and (44), if positive, must be provided by an independent source of negentropy, which in this example must power the mechanism (such as a pump, blower, or compressor) used to create the flow field for Subsystem D. If no negentropy is harnessed from the bulk flow, then all of the total (44) must be provided by this independent source of negentropy.
(e)
Other Variants: If we now release Object D from its solid connection, we create an independent external flow subsystem within the internal flow of Subsystem D. At steady state, Object D could harness negentropy from the Subsystem D flow field to provide for its entropy production but only to the extent that this is harnessable. Any excess must be incurred by yet another independent source of negentropy attached to Object D, to maintain it in a fixed position.
Finally, we can imagine Subsystem D to be in steady motion with respect to the bulk flow field, thus involving several nested external and internal flow systems. This will require sources of negentropy for the entropy production of all component subsystems. By connections or coupling, some of these could extract negentropy from their surrounding systems, but only to the extent possible, with any excess requiring one or more independent sources of negentropy.
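The vector bookkeeping of (42) and (43) can be sketched numerically as follows. The decomposition below simply prescribes illustrative drag and lift magnitudes (rather than using any particular correlation), aligns the drag component with the fluid velocity relative to the object and the lift component normal to it, and confirms that the resulting entropy production is nonnegative because the angle between the drag-lift force and the relative velocity is acute:

```python
import numpy as np

T = 293.15                                   # reference temperature [K]
U_fluid = np.array([2.0, 0.0])               # ambient fluid velocity, lab frame [m/s] (illustrative)
U_obj = np.array([0.5, 0.5])                 # object velocity, lab frame [m/s] (illustrative)

U_rel = U_fluid - U_obj                      # fluid velocity relative to the object
e_drag = U_rel / np.linalg.norm(U_rel)       # drag direction: along the relative velocity
e_lift = np.array([-e_drag[1], e_drag[0]])   # lift direction: normal to the relative velocity

F_drag, F_lift = 1.2, 0.8                    # illustrative force magnitudes [N]
F_dl = F_drag * e_drag + F_lift * e_lift     # drag-lift force of the fluid on the object

P = F_dl @ U_rel                             # rate of work done against the relative motion [W]
sigma_dot = P / T                            # macroscopic entropy production, cf. Eq. (42)

cos_angle = F_dl @ U_rel / (np.linalg.norm(F_dl) * np.linalg.norm(U_rel))
angle = np.degrees(np.arccos(cos_angle))
print(f"angle(F_dl, U_rel) = {angle:.1f} deg, sigma_dot = {sigma_dot:.2e} W/K "
      f"(nonnegative: {sigma_dot >= 0})")
```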
4.3.3. Governing Principles
From the above examples, we can draw out the following governing principles:
(1)(A)
A shear flow subsystem at steady state, defined by fluid flow relative to a solid object, preferentially selects an entropic pair of inertial frames of reference, consisting of (i) the frame moving with the solid, i.e., that in which the solid is stationary, and (ii) the frame moving with the mean fluid flow, i.e., that in which the mean fluid flow is stationary. These frames are unique. They can be described as being entropically paired by the subsystem and inertially separated by their difference in velocities.
(2)(A)
An entropically paired shear flow subsystem must have at least one source of negentropy for its entropy production.
(3)
If the entropic pair of a shear flow subsystem differs from the frames of reference that define the surrounding flow, then:
(A)
The shear flow subsystem may be harnessing negentropy from its external environment, either from the ambient flow field or by exploiting some other connection, but it can only do so to the extent that these are harnessable by the subsystem.
(B)
Any excess entropy production, above what is actually harnessed from the external environment, reveals the existence of at least one independent source of negentropy for the shear flow subsystem.
We further note that the solid and fluid flow frames of reference that define an entropic pair need not be contained within the subsystem, but can be nonlocal to that subsystem. Indeed, for the external flow systems shown in Figure 1 and Subsystems A to C in Figure 3, the ambient flow field is referenced at a long distance upstream. Furthermore, in all subsystems shown in Figure 1, Figure 2 and Figure 3, the no-slip condition will require the fluid velocity to vanish at the solid surface, so there must be a physical separation between the solid and fluid flow which define the entropic pair. In all cases described, the nonlocal effects are transmitted by the flow field.
The above examples become more complicated if we also include the effects of gravitational, buoyancy, or lift forces, and the requirements of thermodynamic efficiency. For an object lighter or denser than the fluid, or which experiences a lift force, maintaining its vertical position will make an additional contribution to the entropy production. This will add to the negentropy required by the subsystem (see Section 5). Similarly, for processes of less than 100% efficiency, there must be a source of negentropy for the lost work component, which must be taken into account in the above calculations. This can be represented by a modified rate of negentropy consumption for each process, in dimensional or time-averaged dimensionless form:
(46)
where η is the thermodynamic efficiency. Additional complications will arise in flow fields with nonuniform fluid or thermodynamic parameters, such as velocity, temperature, fluid density, or dynamic viscosity, and in subsystems with acceleration.
The phenomena revealed in this section are very different to the motions of frictionless objects in Newtonian mechanics, which do not require the consumption of a source of negentropy for an object or fluid to maintain a constant velocity. Taken together, they provide new perspectives into the long-neglected field of entropic mechanics, the science of the relative motions of objects with friction. As hinted at in the above discussion, this will necessarily include the motions of all motor vehicles and other craft (irrespective of power source) and of all living organisms, relative to any fluid.
4.4. Other Steady-State Shear Flow Systems
The foregoing arguments apply with some modification to all other shear flow systems at steady state. Consider the following idealized classes of shear flow systems, as illustrated in Figure 4 [5,46,48,49,51,52,88,89]:
(a)
In a two-dimensional or three-dimensional external flow, as shown in Figure 1 or Figure 4a, the entropic pair consists of the solid and the ambient flow, the latter referenced to its upstream mean velocity profile. In the turbulent wake downstream of the solid, the flow field is subject to nonlocal influences of both the original fluid stream and the solid. As discussed in Section 4.1.1, there must be at least one source of negentropy associated with the solid and/or the ambient flow, to maintain the steady-state flow.
(b)
In a two-dimensional or three-dimensional internal flow under a pressure gradient (Poiseuille flow), as shown in Figure 2 or Figure 4b, the entropic pair consists of the solid wall(s) and the internal flow, the latter referenced at its mean velocity. As discussed in Section 4.1.2, there must be at least one source of negentropy, associated with the solid wall(s) and/or the fluid flow, to maintain the steady-state flow.
(c)
In Couette flow, involving the relative motion of parallel plates or concentric cylinders in contact with a fluid as shown in Figure 4c, the entropic pair is provided by the two solid walls of the system. At least one of the solids must have an independent source of negentropy, such as an engine connected to a driveshaft or crankshaft, to drive the relative motion, which in turn generates the fluid flow.
(d)
In a combined Couette-Poiseuille flow, consisting of fluid flow under a pressure gradient between moving parallel plates or concentric cylinders, as shown in Figure 4d, the two solids and the internal flow (referenced at its mean velocity) provide an entropic triple of inertial frames of reference for the system. From the above principles, there must be at least two independent sources of negentropy: one for the internal flow, and at least one for the relative motions of the solids.
(e)
For a boundary layer flow, consisting of fluid flow relative to a solid boundary such as that shown in Figure 4e, the flow is commonly analyzed using the Reynolds number Re_x or Re_δ, based on the distance x [m] or boundary-layer thickness δ [m] from the start of the plate, respectively [5,46,47,48,49,50,51,52]. These Reynolds numbers are functions of position, but all contain the same reference velocity of the ambient flow. The entropic pair is therefore provided by the solid and the ambient flow (referenced to the upstream flow field or at infinite vertical distance). The system must have at least one source of negentropy, which could be associated with the solid object and/or the ambient flow field.
(f)
Consider the two-dimensional turbulent mixing layer, in which two fluid streams separated by a solid plate are allowed to merge beyond the end of the plate, as shown in Figure 4f. Here, the two fluid streams (referenced at infinite distance) and solid provide an entropic triple for the system. This example illustrates the nonlocal influence of a solid on the downstream fluid—its influence cannot be neglected even for long distances downstream. There must be at least two independent sources of negentropy associated with this triple, to drive the two independent flows.
(g)
Consider the two-dimensional or axisymmetric turbulent jet issuing into a stationary fluid, as shown in Figure 4g. Here, the solid nozzle and its internal flow field (referenced at its mean velocity) provide the entropic pair for the system. This system again illustrates the nonlocal influence of a solid to create the wedge-shaped or conical zone of fluid flow produced by shear against the ambient fluid. The fluid flow must be driven by at least one independent source of negentropy, associated with the fluid jet and/or solid nozzle.
(h)
Consider the two-dimensional or axisymmetric turbulent jet issuing into an ambient parallel flow, as shown in Figure 4h. Here the solid nozzle and two fluid flows provide an entropic triple for the system. There must be at least two independent sources of negentropy associated with this triple, to drive the two fluid flows.
Figure 4.
Schematic diagrams of several idealized classes of shear flow systems: (a) external flow and the turbulent wake (c.f., Figure 1), (b) internal (Poiseuille) flow (c.f., Figure 2), (c) Couette flow, (d) combined Couette-Poiseuille flow, (e) boundary layer flow, (f) the two-dimensional mixing layer, (g) the two-dimensional or axisymmetric turbulent jet, and (h) the two-dimensional or axisymmetric turbulent jet issuing into a parallel ambient flow, e.g., [5,46,48,49,50,51,52,88,89]. The time-averaged velocity profiles are all drawn to include the small transverse or radial velocity component, if present.
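As a minimal numerical sketch of the position dependence noted in item (e) above, the local Reynolds number and the classical laminar (Blasius) boundary-layer thickness can be tabulated as follows; the free-stream velocity, the viscosity and the Blasius estimate are illustrative assumptions, not values or equations from this paper.

```python
# Illustrative only: position dependence of the boundary-layer Reynolds numbers.
# U and nu are assumed values; delta uses the classical laminar Blasius estimate.
U = 10.0      # free-stream velocity [m/s] (assumed)
nu = 1.5e-5   # kinematic viscosity of air [m^2/s] (assumed)

for x in [0.01, 0.1, 0.5, 1.0]:      # distance from the leading edge [m]
    Re_x = U * x / nu                # Reynolds number based on x
    delta = 5.0 * x / Re_x**0.5      # laminar boundary-layer thickness [m]
    Re_delta = U * delta / nu        # Reynolds number based on delta
    print(f"x = {x:4.2f} m   Re_x = {Re_x:9.2e}   delta = {delta:7.5f} m   Re_delta = {Re_delta:9.2e}")
```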
From these examples, we can add the following governing principles to those listed in Section 4.3.3:
(1)(B)
A shear flow subsystem at steady state, generated by the relative motion of two solid objects within a fluid, preferentially selects an entropic pair of inertial frames of reference, consisting of the frames moving with each solid.
(C)
A shear flow subsystem at steady state, defined by the relative motions of two solid objects and a fluid stream, or two fluid streams and a solid object, preferentially selects an entropic triple of inertial frames of reference, consisting of the frames moving with each solid and fluid flow. The entropic triple can be analyzed in terms of its three constituent entropic pairs.
(2)(B)
An entropically tripled shear flow subsystem must have at least two independent sources of negentropy for its entropy production.
We can also modify principle (3) to include entropically tripled systems. Similar considerations apply to more complicated fluid steady-state flow systems, for example, the jet boundary, the double boundary layer, the turbulent jet issuing into a cross-flow, or the buoyant or dense jet, c.f. [5,46,48,49,51,52,88,89].
The above analysis can be extended ad infinitum. Consider the multiple mixing layer consisting of m independent flows separated by n parallel plates, as shown in Figure 5a. Assuming that the flows are generated by independent pressure gradients, and that the solids are not connected, this system preferentially selects the entropic (m + n)-tuple of inertial frames of reference moving with each fluid flow and each solid. This must have at least m + n − 1 independent sources of negentropy, to drive the relative fluid and solid motions. If, however, there are dependencies or connections between the fluid flows or solids, such as for flow through a set of nested cylinders or a lattice, the number of degrees of freedom of the entropic tuple and the number of independent sources of negentropy will diminish accordingly.
Figure 5.
Schematic diagrams of more complicated shear flow systems: (a) the multiple mixing layer, and (b) a generalized shear flow subsystem.
This example provides the additional governing principles:
(1)(D)
A shear flow subsystem at steady state, defined by the relative motions of m independent solid objects and n independent fluid streams, preferentially selects an entropic (m + n)-tuple of inertial frames of reference, consisting of the frames moving with each solid and fluid flow. The entropic (m + n)-tuple can be analyzed in terms of its constituent entropic pairs.
(2)(C)
An entropically (m + n)-tupled shear flow subsystem must have at least m + n − 1 independent sources of negentropy for its entropy production.
We can also modify principle (3) to include entropically tupled systems of any order.
Finally, consider a small shear flow subsystem at steady state within a complicated but steady flow field, as shown in Figure 5b. Here, the ambient flow has been influenced by k upstream solids or fluid flows, hence, the external flow subsystem preferentially selects an entropic -tuple of inertial frames of reference. Nonetheless, the governing principles 2(A) and (3)(A)–(B) in Section 4.3.3 still apply; if the entropic pair of the subsystem differs from the inertial frames of reference that define the ambient flow, the subsystem must have an independent source of negentropy for its excess entropy production, above what is harnessed from the ambient flow. In the example shown, if the solid object is connected to the reference solid, its entropy production will be borne by the ambient flow field; however, if not, then it must have its own independent source of negentropy to maintain its stationary position or motion, in accordance with the analyses in Section 4.3.2. Similar arguments apply to systems with higher order influences, such as an entropically ℓ-tupled shear flow subsystem embedded within an entropically -tupled flow system.
5. Unsteady Shear Flow Systems
We now consider unsteady shear flow systems or, in other words, systems with acceleration, including changes in speed and/or direction. This necessarily includes all shear flows with rotation. For the sake of brevity, we restrict the discussion to unsteady extensions of the flows examined in Section 4.1. For these systems, it is appropriate to include the action of body forces due to a gravitational or electromagnetic field, omitted from the idealized steady-state flows in Section 4.
5.1. Unsteady External Flows
Consider the two- or three-dimensional unsteady external flow represented by Figure 6a. This encompasses a wide variety of flow systems with different reference frames, from a fluid in motion relative to a stationary object (e.g., flow past a model in a wind tunnel), through to a solid object in motion relative to a stationary fluid (e.g., a soccer ball in flight).
Figure 6.
Schematic diagrams of unsteady shear flow systems: (a) unsteady external flow and (b) unsteady internal flow. The velocities are expressed relative to an inertial frame of reference in which the control volume is stationary.
First consider the unsteady purely translational (irrotational) motion of an isolated rigid sphere with velocity [m s−1] in a uniform flow field of velocity [m s−1], both relative to a common inertial frame of reference (Figure 6a without rotation). A force balance at low velocities yields [90,91,92,93,94,95,96,97,98]:
(47)
where [kg] is the mass of the solid, [N] is a propulsion force on the object, [N] is the net body force (e.g., gravity minus buoyancy), [N] is the drag-lift force on the object due to viscous and pressure forces, [N] is the inertial force due to the “added mass” of fluid accelerated by the object, [N] is a history-dependent force to account for acceleration memory effects, and is the fluid force due to acceleration of the local fluid. The total resistance force on the solid is , hence, the entropy production is:
(48)
assuming isothermal conditions. Equation (48) can be nondimensionalized in a manner similar to steady-state flows (43) to give the dot product between the vector Reynolds number and a sum of vector drag-lift coefficients, each moderated by a function of the norm of its corresponding velocity or acceleration term.
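As an illustrative sketch only, the structure described in (47) and (48) can be written schematically as below; the symbols are placeholders for the quantities listed above and are not necessarily the notation of the original equations.

```latex
% Schematic sketch of a force balance of the type described by Equation (47)
% (placeholder symbols: propulsion, body, drag-lift, added-mass, history and
%  local-fluid-acceleration forces acting on the solid of mass m_s):
m_s \frac{\mathrm{d}\mathbf{u}_s}{\mathrm{d}t}
  = \mathbf{F}_P + \mathbf{F}_B + \mathbf{F}_D + \mathbf{F}_A + \mathbf{F}_H + \mathbf{F}_S

% Schematic sketch of an isothermal entropy production of the type described by
% Equation (48): dissipated power (total resistance force times the relative
% velocity between fluid and solid) divided by the absolute temperature T:
\dot{\sigma} = \frac{\mathbf{F}_R \cdot (\mathbf{u}_f - \mathbf{u}_s)}{T} \;\geq\; 0
```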
Now consider purely rotational motion, in which a rigid sphere of radius vector [m] rotates about its centroid at the angular velocity [s−1], due to a torque [N m] on the solid (Figure 6a without translation). The entropy production is:
(49)
assuming isothermal conditions. Equation (49) can be nondimensionalized as the dot product between a vector rotational Reynolds number (Taylor number) and a vector torque coefficient, c.f., [99,100], moderated by the norm of an angular acceleration term. For different solid shapes or centers of rotation, more comprehensive torque equations can be derived based on moments of inertia.
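Schematically, and again as an illustrative sketch with placeholder symbols rather than the original notation, the isothermal entropy production for this purely rotational case takes the form of the dissipated rotational power divided by the absolute temperature:

```latex
% Illustrative sketch of the structure described for Equation (49):
% torque (here written T_q to distinguish it from the temperature T) times the
% angular velocity, i.e., the dissipated rotational power, divided by T.
\dot{\sigma} = \frac{\mathbf{T}_q \cdot \boldsymbol{\Omega}}{T} \;\geq\; 0
```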
Now consider the combined unsteady translational and rotational system, as shown in Figure 6a. The entropy production is given by a combination of (48) and (49), with additional terms to account for coupling effects such as rotation-induced lift (the Magnus force). In Section 4.3.2(c), we saw that an object with mirror asymmetry (e.g., an airfoil) will create a non-zero fluid circulation, thereby inducing lift. Extending this idea, an object with rotational symmetry and mirror asymmetry, mounted on a fixed axis (e.g., a turbine), will interact with the flow to produce a torque on the object, thereby harnessing negentropy from the flow to cause its rotation. Even without the fixed mounting, such an object will undergo rotation as well as translation, thus harnessing the flow for a proportion of its entropy production. For complex solids and/or nonuniform flow fields, the drag-lift and torque coefficients are generally found by numerical integration of pressures and viscous stresses around the solid surface, calculated using a turbulence model [101,102]. For flow-induced vibrations, the coefficients will also be functions of the Strouhal number St, where f is the vibration frequency [s−1] [50,52,103,104]. Considerable research is now underway on more complicated systems such as tethered solids with various degrees of freedom, of interest to the study of fluid–structure interactions, e.g., [105].
Finally, transonic and supersonic flows (travelling close to or faster than the speed of sound) cause the formation of shock waves, with sudden changes in velocity, pressure, density, and temperature [47,52,62]. With an increasing Mach number, these cause a sharp increase in drag, modify the lift, and cause significant heating, substantially increasing the entropy production.
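A minimal sketch of this Mach-number dependence, using the standard normal-shock relations for a calorically perfect gas (assumed gas properties; an illustration, not equations taken from this paper):

```python
import math

def normal_shock_entropy_jump(M1, gamma=1.4, R=287.0):
    """Specific-entropy rise across a normal shock for a calorically perfect gas,
    from the standard Rankine-Hugoniot relations (illustrative values for air)."""
    p_ratio = (2.0 * gamma * M1**2 - (gamma - 1.0)) / (gamma + 1.0)       # p2/p1
    rho_ratio = ((gamma + 1.0) * M1**2) / ((gamma - 1.0) * M1**2 + 2.0)   # rho2/rho1
    cv = R / (gamma - 1.0)
    return cv * math.log(p_ratio / rho_ratio**gamma)                      # s2 - s1 [J/(kg K)]

for M1 in [1.0, 1.2, 1.5, 2.0, 3.0]:
    print(f"M1 = {M1:.1f}   ds = {normal_shock_entropy_jump(M1):8.3f} J/(kg K)")
```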
In these unsteady external flows, the system will preferentially select the frames of reference moving with the reference fluid flow and solid at each time instant. For translational motion, this defines a single entropic pair made up of the flow field and solid, while for rotational motion, it defines a joint or disjoint continuous set of entropic pairs, consisting of the flow field and each point on the solid surface. However, the flow field and/or the solid within each entropic pair will change with time due to the unsteady motion. In addition, at least one of the frames from each entropic pair, and possibly both, will be a non-inertial frame of reference. In consequence, we can add the following governing principle to those given previously:
(1)(E)
A shear flow subsystem in unsteady flow, defined by the relative motions of a fluid and a solid, preferentially selects an entropic pair of frames of reference—or a set of entropic pairs—that is changing in time. At least one frame from each entropic pair will be a non-inertial frame of reference.
From principles 2(A) and 3(A)–(B) from Section 4.3.3, the system must have an independent source of negentropy for its entropy production, above what is being harnessed from the flow. Given the possibility of inducing rotation of an object to harness negentropy from the flow, which can then be used to power translational motion, it is not possible to be too definitive as to the location of these source(s) of negentropy. We can however make the following addition to governing principles 3(A)–(B):
(3)
If the entropic pair (or other entropic tuple) of a shear flow subsystem differs from the frames of reference that define the surrounding flow, including changes with time, then:
(A)
The shear flow subsystem may be harnessing negentropy from its external environment, either from the ambient flow field or by exploiting some other connection, but can only do so to the extent that these are harnessable by the subsystem.
(B)
Any excess entropy production, above what is actually harnessed from the external environment, reveals the existence of at least one independent source of negentropy for the shear flow subsystem.
5.2. Unsteady Internal Flows
Now consider the two- or three-dimensional unsteady internal flow represented by Figure 6b, in which a fluid moves at the instantaneous velocity and mean velocity relative to the solid walls, which are stationary with respect to an inertial frame of reference. This can again be represented by the scalar Reynolds number (28), pressure loss (29), and entropy production (30) and (31) for internal flows, now as functions of time. Such flows can be divided into two classes. For a rigid fluid and solid walls, slow changes in velocity, such as the onset of flow, can be calculated from the one-dimensional momentum equation [48,49,52]. In contrast, for an elastic fluid and solid walls, a sudden change in velocity will create pressure waves migrating in both directions, for which the wave speed can be determined from the continuity and momentum equations [48,49,52]. Both flows will alter the instantaneous entropy production (30) and (31), predominantly through the cubic velocity term. In transonic and supersonic internal flows, the formation of shock waves will also substantially increase the entropy production as a function of the Mach number.
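For the elastic (water-hammer) case, a minimal sketch using the classical pressure-wave speed and Joukowsky relations is given below; the pipe and fluid properties are assumed values, and these textbook formulas are illustrative rather than taken from this paper.

```python
import math

# Illustrative only: wave speed and Joukowsky pressure rise for a sudden velocity change.
K = 2.2e9     # bulk modulus of water [Pa] (assumed)
rho = 1000.0  # water density [kg/m^3] (assumed)
E = 200e9     # Young's modulus of a steel pipe wall [Pa] (assumed)
D = 0.5       # pipe internal diameter [m] (assumed)
e = 0.01      # pipe wall thickness [m] (assumed)
du = 1.0      # sudden change in mean velocity [m/s] (assumed)

a_rigid = math.sqrt(K / rho)                             # wave speed, rigid pipe walls
a_elastic = a_rigid / math.sqrt(1.0 + K * D / (E * e))   # wave speed, elastic pipe walls
dp = rho * a_elastic * du                                # Joukowsky pressure rise [Pa]
print(f"a(rigid) = {a_rigid:.0f} m/s, a(elastic) = {a_elastic:.0f} m/s, dp = {dp/1e5:.1f} bar")
```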
A second category of internal flows is provided by the open channel flow of a liquid under gravity, bounded by solid walls and a free liquid surface [48,49,52]. Uniform steady channel flows can be analyzed by channel variants of the Reynolds number (28), pressure loss (29), and entropy production (30) and (31), expressed in terms of the water surface elevation rather than pressure head, requiring solution of the continuity and momentum equations [106,107,108]. Nonuniform or unsteady channel flows require successive additional terms in the momentum equation [106,107,108], while oscillatory surface waves require a different treatment beyond the scope of the current discussion [107,108]. Channel flows with obstacles have the features of both external and internal flows, requiring the synthesis of (26) and (30). As with unsteady external flows in Section 5.1, a rotationally symmetric object mounted on a fixed axis, intruding into a channel (e.g., a water wheel), can harness negentropy from the flow in the form of rotational motion. Similarly, an object in contact with a solid wall (e.g., a sediment particle) can harness the flow for its transport, by suspension, a bouncing motion (saltation), tumbling, rolling, or sliding [107,109]. Finally, critical and supercritical channel flows (travelling at or faster than the speed of a surface wave) can cause the formation of an hydraulic jump, with a sudden increase in water level, substantially increasing the entropy production as a function of the Froude number.
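As a rough illustration of the last point, the standard sequent-depth and head-loss relations for a hydraulic jump yield a simple isothermal estimate of the associated entropy production; the numerical values below are assumptions, and the final step (dissipated power divided by the absolute temperature) is a generic estimate rather than an equation from this paper.

```python
import math

def hydraulic_jump(y1, V1, g=9.81, T_abs=293.15, rho=1000.0):
    """Sequent depth, head loss and an isothermal estimate of entropy production
    per unit channel width across a hydraulic jump (standard open-channel relations)."""
    Fr1 = V1 / math.sqrt(g * y1)                            # upstream Froude number
    y2 = 0.5 * y1 * (math.sqrt(1.0 + 8.0 * Fr1**2) - 1.0)   # sequent (downstream) depth [m]
    dE = (y2 - y1)**3 / (4.0 * y1 * y2)                     # head loss across the jump [m]
    q = V1 * y1                                             # discharge per unit width [m^2/s]
    sigma_dot = rho * g * q * dE / T_abs                    # entropy production [W/(K m)]
    return Fr1, y2, dE, sigma_dot

Fr1, y2, dE, s = hydraulic_jump(y1=0.5, V1=6.0)             # assumed upstream depth and velocity
print(f"Fr1 = {Fr1:.2f}, y2 = {y2:.2f} m, head loss = {dE:.3f} m, entropy production = {s:.1f} W/(K m)")
```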
The above unsteady internal flows exhibit similar properties to the external flows of Section 5.1 in that they preferentially select an entropic pair of frames of reference (or a set of pairs) that change with time. In consequence, we can draw the same conclusions as those previously, concerning the selection of entropic pair(s) and the source(s) of negentropy for each subsystem.
6. Conclusions
This study examined the invariance properties of the thermodynamic entropy production, based on its global (integral), local (differential), local bilinear, and global macroscopic formulations, as defined in Section 2. The mathematical invariance properties of these equations were examined in Section 3, including dimensional scaling, invariance to fixed coordinate displacements, rotations or reflections, Galilean invariance, and the one-parameter Lie group of point transformations. Of these, the Lie invariance can be reinterpreted as a generalized dimensionless invariance, which reveals and is expressed in terms of the intrinsic or ‘indigenous’ dimensions of the system. The Lie invariance can also be shown to encompass the other invariances, and is therefore the most general.
We then examined a number of shear flow systems involving relative motion between fluid(s) and solid(s), first for steady-state flow (Section 4) and then for unsteady flow (Section 5). In a steady-state shear flow system consisting of a single fluid and solid, the Galilean invariance property was shown to preference a unique pair of inertial frames of reference for the system—here referred to as an entropic pair—from the infinite set of available reference frames for the system. This challenges the Newtonian viewpoint that all inertial frames of reference are equivalent. This entropic pairing can be considered to be an additional invariant property of the system, enabling the Reynolds number, drag coefficient, and entropy production to be uniquely defined and Galilean invariant. In Section 4.3, we drew out an apparent paradox arising from the coexistence of different entropic pairs for different shear flow subsystems within a flow system. For each subsystem, the paradox was resolved by the fact that it reveals the presence of at least one independent source of negentropy—a power source scaled by the absolute temperature—being depleted by one or more of the different parts. By the analysis of a variety of steady-state and unsteady shear flow subsystems, we drew out a series of governing principles to describe their entropic pairing properties and sources of negentropy. These are reiterated here in consolidated form:
(1)(A)
A shear flow subsystem at steady state, defined by fluid flow relative to a solid object, preferentially selects an entropic pair of inertial frames of reference, consisting of (i) the frame moving with the solid, i.e., that in which the solid is stationary, and (ii) the frame moving with the mean fluid flow, i.e., that in which the mean fluid flow is stationary. These frames are unique. They can be described as being entropically paired by the subsystem and inertially separated by their difference in velocities.
(B)
A shear flow subsystem at steady state, generated by the relative motion of two solid objects within a fluid, preferentially selects an entropic pair of inertial frames of reference, consisting of the frames moving with each solid.
(C)
A shear flow subsystem at steady state, defined by the relative motions of two solid objects and a fluid stream, or two fluid streams and a solid object, preferentially selects an entropic triple of inertial frames of reference, consisting of the frames moving with each solid and fluid flow. The entropic triple can be analyzed in terms of its three constituent entropic pairs.
(D)
A shear flow subsystem at steady state, defined by the relative motions of m independent solid objects and n independent fluid streams, preferentially selects an entropic (m + n)-tuple of inertial frames of reference, consisting of the frames moving with each solid and fluid flow. The entropic (m + n)-tuple can be analyzed in terms of its constituent entropic pairs.
(E)
A shear flow subsystem in unsteady flow, defined by the relative motions of a fluid and a solid, preferentially selects an entropic pair of frames of reference—or a set of entropic pairs—that is changing in time. At least one frame from each entropic pair will be a non-inertial frame of reference.
(2)(A)
An entropically paired shear flow subsystem must have at least one source of negentropy for its entropy production.
(B)
An entropically tripled shear flow subsystem must have at least two independent sources of negentropy for its entropy production.
(C)
An entropically (m + n)-tupled shear flow subsystem must have at least m + n − 1 independent sources of negentropy for its entropy production.
(3)
If the entropic pair (or other entropic tuple) of a shear flow subsystem differs from the frames of reference that define the surrounding flow, including changes with time, then:
(A)
The shear flow subsystem may be harnessing negentropy from its external environment, either from the ambient flow field or by exploiting some other connection, but can only do so to the extent that these are harnessable by the subsystem.
(B)
Any excess entropy production, above what is actually harnessed from the external environment, reveals the existence of at least one independent source of negentropy for the shear flow subsystem.
The above principles are unaffected by Galilean transformations and so can be understood to “lie above” the Galilean inertial framework of Newtonian mechanics.
The phenomena revealed in this study are very different to the motions of frictionless objects in Newtonian mechanics, which do not require the consumption of a source of negentropy for an object or fluid to maintain a constant velocity. Taken together, they provide new perspectives into the long-neglected field of entropic mechanics, the study of the relative motions of objects with friction. This encompasses the motions of all motor vehicles and other craft (irrespective of power source) and of all living organisms within a fluid, whether they be in the atmosphere, on the land surface, or at any location on the surface of, within, or at the base of a water body or any other liquid.
Further research is required to elucidate the complete set of Lie symmetries associated with the entropy production equations, including multivariate Lie symmetries associated with multivariate continuous groups [26,58,59]. Further research is also warranted on analogs of the entropic pairing principles for other types of dissipative systems, including heat transfer, chemical reaction, and living systems [34,35,38,110], and their implications for the sources of negentropy that drive these systems.
Acknowledgments
The author thanks Harald Kleine, Jong-Leng Liow, Methma Rajamuni and Matthias Kramer for discussions of unsteady flows.
Funding
This work was largely completed during sabbatical leave in 2021 supported by UNSW, based on research conducted at UNSW and Institute Pprime, CNRS, Poitiers, France.
Conflicts of Interest
The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Footnotes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1.Newton I. Philosophiæ Naturalis Principia Mathematica. Joseph Streater, Royal Society; London, UK: 1687. [Google Scholar]
2.Motte A. Newton’s Principia, The Mathematical Principles of Natural Philosophy. 3rd ed. Middle-Temple-Gate, Fleetstreet; London, UK: 1846. pp. 18–20. [Google Scholar]
3.Wikipedia Classical Mechanics. [(accessed on 15 September 2021)]. Available online:
4.Galileo G.L. Dialogo Sopra i due Massimi Sistemi del Mondo. Per Gio. Batista Landini; Florence, Italy: 1632. [Google Scholar]
5.Pope S.B. Turbulent Flows. Cambridge Univ. Press; Cambridge, UK: 2000. [Google Scholar]
6.Fourier J.B.J. Théorie Analytique de la Chaleur. Didot; Paris, France: 1822. [Google Scholar]
7.Rayleigh J.W. The Theory of Sound. Volume 1 Macmillan and Co.; London, UK: 1877. [Google Scholar]
8.Bertrand J. Sur l’homogénéité dans les formules de physique. Comptes Rendus l’Acad. Sci. 1878;86:916–920. [Google Scholar]
9.Rayleigh J.W. On the question of the stability of the flow of liquids. Phil. Mag. 1892;34:59–70. doi: 10.1080/14786449208620167. [DOI] [Google Scholar]
10.Carvallo E. Sur une similitude dans les fonctions des machines. J. Phys. Theor. Appl. 1892;1:209–212. doi: 10.1051/jphystap:018920010020901. [DOI] [Google Scholar]
11.Vaschy A. Théorie de l’Électricité: Exposé des Phénomènes Électriques et Magnétiques Fondé Uniquement sur L’expérience et le Raisonnement. Librairie Polytechnique, Baudry et Cie; Paris, France: 1892. [Google Scholar]
12.Vaschy A. Sur les lois de similitude en physique. Ann. Télégraphiques. 1892;19:25–28. [Google Scholar]
13.Federman A. On some general methods of integration of first-order partial differential equations. Proc. St.-Petersburg Polytech. Inst. Sect. Tech. Nat. Sci. Math. 1911;16:97–155. [Google Scholar]
14.Riabouchinsky D. Méthode des variables de dimension zéro, et son application en aérodynamique. L’Aérophile. 1911;1:407–408. [Google Scholar]
15.Buckingham E. On physically similar systems; illustrations of the use of dimensional equations. Phys. Rev. 1914;4:345–376. doi: 10.1103/PhysRev.4.345. [DOI] [Google Scholar]
16.Riabouchinsky D. The principle of similitude. Nature. 1915;95:591. doi: 10.1038/095591c0. [DOI] [Google Scholar]
17.Langhaar H.L. Dimensional Analysis and Theory of Models. John Wiley & Sons; New York, NY, USA: 1951. [Google Scholar]
18.Zeldovich Y.B. The motion of a gas under the action of a short term pressure (shock) Akust. Zhurnal. 1956;22:28–38. [Google Scholar]
19.Sedov L.I. Similarity and Dimensional Methods in Mechanics. Infosearch Ltd.; London, UK: 1959. [Google Scholar]
20.Birkhoff G. Hydrodynamics, a Study in Logic, Fact and Similitude. 2nd ed. Princeton Univ. Press; Princeton, NJ, USA: 1960. [Google Scholar]
21.Gratton J. Similarity and self similarity in fluid dynamics. Fundam. Cosm. Phys. 1991;15:1–106. [Google Scholar]
22.Barenblatt G.I. Scaling, Self-Similarity and Intermediate Asymptotics: Dimensional Analysis and Intermediate Asymptotics. Cambridge Univ. Press; Cambridge, UK: 1996. [Google Scholar]
23.Barenblatt G.I. Scaling. Cambridge Univ. Press; Cambridge, UK: 2003. [Google Scholar]
24.Hornung H.G. Dimensional Analysis: Examples of the Use of Symmetry. Dover Publ.; Mineola, NY, USA: 2006. [Google Scholar]
25.Burde G.I. Expanded Lie group transformations and similarity reductions of differential equations. Proc. Inst. Math. NAS Ukraine. 2002;43:93–101. [Google Scholar]
26.Oliveri F. Lie symmetries of differential equations: Classical results and recent contributions. Symmetry. 2010;2:658–706. doi: 10.3390/sym2020658. [DOI] [Google Scholar]
27.Ercan A., Kavvas M.L. Self-similarity in incompressible Navier–Stokes equations. Chaos. 2015;25:123126. doi: 10.1063/1.4938762. [DOI] [PubMed] [Google Scholar]
28.Polsinelli J., Kavvas M.L. A comparison of the modern Lie scaling method to classical scaling techniques. Hydrol. Earth Syst. Sci. 2016;20:2669–2678. doi: 10.5194/hess-20-2669-2016. [DOI] [Google Scholar]
29.Ercan A., Kavvas M.L. Scaling relations and self-similarity of 3-dimensional Reynolds-averaged Navier–Stokes equations. Sci. Rep. 2017;7:6416. doi: 10.1038/s41598-017-06669-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
30.She Z.-S., Chen X., Hussain F. Quantifying wall turbulence via a symmetry approach: A Lie group theory. J. Fluid Mech. 2017;827:322–356. doi: 10.1017/jfm.2017.464. [DOI] [Google Scholar]
31.Niven R.K., Noack B.R. Control volume analysis, entropy balance and the entropy production in flow systems. In: Dewar R.C., Lineweaver C., Niven R.K., Regenauer-Lieb K., editors. Beyond the Second Law: Entropy Production and Non-Equilibrium Systems. Springer; Berlin/Heidelberg, Germany: 2014. pp. 129–162. [Google Scholar]
32.Niven R.K., Ozawa H. Entropy production extremum principles. In: Singh V., editor. Handbook of Applied Hydrology. 2nd ed. McGraw-Hill; New York, NY, USA: 2016. Chapter 32. [Google Scholar]
33.Jaumann G. Geschlossenes System physikalischer und chemischer Differentialgesetze. Sitzungsberichte Akad. Der Wisenschaften Wien Math.—Naturwissenschaftliche Kl. 1911;120:385–530. [Google Scholar]
34.de Groot S.R., Mazur P. Non-Equilibrium Thermodynamics. Dover Publ.; Mineola, NY, USA: 1962. [Google Scholar]
35.Prigogine I. Introduction to Thermodynamics of Irreversible Processes. 3rd ed. Interscience Publ.; New York, NY, USA: 1967. [Google Scholar]
36.Kreuzer H.J. Nonequilibrium Thermodynamics and its Statistical Foundations. Clarendon Press; Oxford, UK: 1981. [Google Scholar]
37.Bird R.B., Stewart W.E., Lightfoot E.N. Transport Phenomena. 2nd ed. John Wiley & Sons; New York, NY, USA: 2006. [Google Scholar]
38.Kondepudi D., Prigogine I. Modern Thermodynamics: From Heat Engines to Dissipative Structures. 2nd ed. John Wiley & Sons; Chichester, UK: 2015. [Google Scholar]
39.Basaran C. Introduction to Unified Mechanics Theory with Applications. Springer; Cham, Switzerland: 2021. [Google Scholar]
40.Bejan A. Entropy Generation Through Heat and Fluid Flow. John Wiley & Sons; New York, NY, USA: 1982. [Google Scholar]
41.Bejan A. Entropy Generation Minimization. CRC Press; Boca Raton, FL, USA: 1996. [Google Scholar]
42.Bejan A. Advanced Engineering Thermodynamics. 4th ed. John Wiley & Sons; Hoboken, NJ, USA: 2016. [Google Scholar]
43.Weinstock R. Calculus of Variations. Dover Publ.; Mineola, NY, USA: 1952. [Google Scholar]
44.Gelfand I.M., Fomin S.V. Calculus of Variations. Dover Publ.; Mineola, NY, USA: 1963. [Google Scholar]
45.Adeyinka O.B., Naterer G.F. Modeling of entropy production in turbulent flows. J. Fluids Eng. 2004;126:893–899. doi: 10.1115/1.1845551. [DOI] [Google Scholar]
46.Schlichting H., Gersten K. Boundary Layer Theory. 8th ed. Springer; New Delhi, India: 2001. [Google Scholar]
47.Pao H.F. Fluid Mechanics. John Wiley & Sons; New York, NY, USA: 1961. [Google Scholar]
48.Street R.L., Watters G.Z., Vennard J.K. Elementary Fluid Mechanics. 7th ed. John Wiley & Sons; New York, NY, USA: 1996. [Google Scholar]
49.Streeter V.L., Wylie E.B., Bedford K.W. Fluid Mechanics. 9th ed. McGraw-Hill; Boston, MA, USA: 1998. [Google Scholar]
50.White F.M. Viscous Fluid Flow. 3rd ed. McGraw-Hill; New York, NY, USA: 2006. [Google Scholar]
51.Munson B.R., Young D.F., Okiishi T.H., Huebsch W.W. Fundamentals of Fluid Mechanics, 6th international student ed. John Wiley; Hoboken, NJ, USA: 2010. [Google Scholar]
52.Douglas J.F., Gasiorek J.M., Swaffield J.A., Jack L.B. Fluid Mechanics. 6th ed. Prentice Hall; Harlow, UK: 2011. [Google Scholar]
53.Anderson J.D., Jr. Fundamentals of Aerodynamics. McGraw-Hill; New York, NY, USA: 2001. [Google Scholar]
54.Goldenfeld N. Lectures on Phase Transitions and the Renormalization Group. Addison-Wesley; Reading, MA, USA: 1992. [Google Scholar]
55.Lie S., Engel F. Theorie der Transformationsgruppen. B.G. Teubner; Leipzig, Germany: 1888. [Google Scholar]
56.Ovsainnikov L.V. Group Analysis of Differential Equations. Academic Press; New York, NY, USA: 1982. [Google Scholar]
57.Olver P.J. Applications of Lie Groups to Differential Equations. 2nd ed. Springer; New York, NY, USA: 1993. [Google Scholar]
58.Blumen G.W., Kumei S. Symmetries and Differential Equations. Springer-Verlag; New York, NY, USA: 1989. [Google Scholar]
59.Niven R.K., Cordier L., Kaiser E., Schlegel M., Noack B.R. Rethinking the Reynolds transport theorem, Liouville equation, and Perron-Frobenius and Koopman operators. arXiv. 2019. arXiv:1810.06022. [Google Scholar]
60.Niven R.K. New classes of conservation laws based on generalized fluid densities and Reynolds transport theorems. arXiv. 2021. arXiv:2101.06113. [Google Scholar]
61.Mohammadipoor O.R., Niazmand H., Mirbozorgi S.A. Alternative curved-boundary treatment for the lattice Boltzmann method and its application in simulation of flow and potential fields. Phys. Rev. E. 2014;89:013309. doi: 10.1103/PhysRevE.89.013309. [DOI] [PubMed] [Google Scholar]
62.Spurk J.H. Fluid Mechanics. Springer; Berlin/Heidelberg, Germany: 1997. [Google Scholar]
63.Clift R., Grace J.R., Weber M.E. Bubbles, Drops and Particles. Academic Press, Inc.; New York, NY, USA: 1978. [Google Scholar]
64.Pavlov K.F., Romankov P.G., Noskov A.A. Examples and Problems to the Course of Unit Operations of Chemical Engineering. Mir Publ.; Moscow, Russia: 1979. [Google Scholar]
65.Niven R.K. Physical insight into the Ergun and Wen & Yu equations for fluid flow in packed and fluidised beds. Chem. Eng. Sci. 2002;57:527–534. [Google Scholar]
66.Niven R.K. Simultaneous extrema in the entropy production for steady-state fluid flow in parallel pipes. J. Non-Equil. Therm. 2010;35:347–378. doi: 10.1515/jnetdy.2010.022. [DOI] [Google Scholar]
67.Waldrip S.H., Niven R.K., Abel M., Schlegel M. Maximum entropy analysis of hydraulic pipe flow networks. J. Hydraul. Eng. ASCE. 2016;142:04016028. doi: 10.1061/(ASCE)HY.1943-7900.0001126. [DOI] [Google Scholar]
68.Waldrip S.H., Niven R.K., Abel M., Schlegel M. Reduced-parameter method for maximum entropy analysis of hydraulic pipe flow networks. J. Hydraul. Eng. ASCE. 2018;144:04017060. doi: 10.1061/(ASCE)HY.1943-7900.0001379. [DOI] [Google Scholar]
69.Niven R.K., Abel M., Schlegel M., Waldrip S.H. Maximum entropy analysis of flow networks: Theoretical foundation and applications. Entropy. 2019;21:776. doi: 10.3390/e21080776. [DOI] [PMC free article] [PubMed] [Google Scholar]
70.Churchill S.W. Viscous Flows—The Practical Use of Theory. Butterworths; Boston, MA, USA: 1988. [Google Scholar]
71.Cheng N.-S., Chiew Y.-M. Incipient sediment motion with upward seepage. J. Hydraul. Res. 1999;37:665–681. doi: 10.1080/00221689909498522. [DOI] [Google Scholar]
72.Colebrook C.F. Turbulent flow in pipes, with particular reference to the transition region between the smooth and rough pipe laws. J. IChemE. 1939;11:133–156. doi: 10.1680/ijoti.1939.13150. [DOI] [Google Scholar]
73.Paulus D.M., Jr. Ph.D. Thesis. Marquette University; Milwaukee, WI, USA: 2000. Second Law Applications in Modeling, Design and Optimization. [Google Scholar]
74.Paulus D.M., Jr., Gaggioli R.A. Some Observations of Entropy Extrema in Fluid Flow. Energy. 2004;29:2487–2500. doi: 10.1016/j.energy.2004.03.029. [DOI] [Google Scholar]
75.Martyushev L.M. Some interesting consequences of the maximum entropy production principle. J. Exper. Theor. Phys. 2007;104:651–654. doi: 10.1134/S1063776107040152. [DOI] [Google Scholar]
76.Tait P.G. Sketch of Thermodynamics. Edmonston and Douglas; Edinburgh, UK: 1868. p. 100. [Google Scholar]
77.Schrödinger E. What is Life? Cambridge Univ. Press; Cambridge, UK: 1944. [Google Scholar]
78.Brillouin L. The negentropy principle of information. J. Appl. Phys. 1953;24:1152–1163. doi: 10.1063/1.1721463. [DOI] [Google Scholar]
79.Gibbs J.W. On the equilibrium of heterogeneous substances. Trans. Connecticut Acad. 1877;3:108–248. doi: 10.2475/ajs.s3-16.96.441. [DOI] [Google Scholar]
80.Planck M. Treatise on Thermodynamics. 3rd ed. Dover Publ.; Mineola, NY, USA: 1922. [Google Scholar]
81.Planck M. In: Introduction to Theoretical Physics, Vol. V: Theory of Heat. Brose H.L., translator. Macmillan & Co., Ltd; London, UK: 1932. [Google Scholar]
82.Schrödinger E. Statistical Thermodynamics. Dover Publ.; Mineola, NY, USA: 1946. [Google Scholar]
83.Keenan J.H. Availability and irreversibility in thermodynamics. Brit. J. Appl. Phys. 1951;2:183–193. doi: 10.1088/0508-3443/2/7/302. [DOI] [Google Scholar]
84.Gaggioli R.A. The concepts of thermodynamic friction, thermal available energy, chemical available energy and thermal energy. Chem. Eng. Sci. 1962;17:523–530. doi: 10.1016/0009-2509(62)87003-1. [DOI] [Google Scholar]
85.Guggenheim E.A. Thermodynamics: An Advanced Treatment for Chemists and Physicists. North-Holland Publ. Co.; Amsterdam, The Netherlands: 1967. [Google Scholar]
86.Rant Z. Exergie, ein neues Wort fur, technische Arbeitsfahigkeit. Forsch. Ingenieurwesen. 1956;22:36–37. [Google Scholar]
87.Evans R.B. Ph.D. Thesis. Thayer School of Engineering, Dartmouth College; Hanover, NH, USA: 1969. A Proof that Essergy is the Only Consistent Measure of Potential Work (for Chemical Substances) [Google Scholar]
88.Rajaratnam N. Turbulent Jets. Elsevier Scientific; Amsterdam, The Netherlands: 1976. [Google Scholar]
89.Lee J.H.W., Chu V.H. Turbulent Jets and Plumes—A Lagrangian Approach. Kluwer; Boston, MA, USA: 2003. [Google Scholar]
90.Boussinesq J.V. Sur la résistance qu’oppose un fluide indéfini au repos, sans pesanteur, au mouvement varié d’une sphère solide qu’il mouille sur toute sa surface, quand les vitesses restent bien continues et assez faibles pour que leurs carrés et produits soient négligeables. Comptes Rendus l’Académie Sci. 1885;100:935–937. [Google Scholar]
91.Basset A.B. A Treatise on Hydrodynamics. Volume 2. Deighton, Bell and Co.; Cambridge, UK: 1888. Chapter 22. [Google Scholar]
92.Oseen C.W. Hydrodynamik. Akademische Verlagsgesellschaft; Leipzig, Germany: 1927. [Google Scholar]
93.Tchen C.M. Ph.D. Thesis. Technical School in Delft, Martinus Nijhoff; The Hague, The Netherlands: 1947. Mean Value and Correlation Problems Connected with the Motion of Small Particles Suspended in a Turbulent Fluid. [Google Scholar]
94.Corrsin S., Lumley J. On the equation of motion for a particle in turbulent fluid. Appl. Sci. Res. 1956;A6:114–116. doi: 10.1007/BF03185030. [DOI] [Google Scholar]
95.Odar F., Hamilton W.S. Forces on a sphere accelerating in a viscous fluid. J. Fluid Mech. 1964;18:302–314. doi: 10.1017/S0022112064000210. [DOI] [Google Scholar]
96.Maxey M.R., Riley J.J. Equation of motion for a small rigid sphere in a nonuniform flow. Phys. Fluids. 1983;26:883–889. doi: 10.1063/1.864230. [DOI] [Google Scholar]
97.Mei R. Flow due to an oscillating sphere and an expression for unsteady drag on the sphere at finite Reynolds number. J. Fluid Mech. 1994;270:133–174. doi: 10.1017/S0022112094004222. [DOI] [Google Scholar]
98.Hohermuth B., Kramer M., Felder S., Valero D. Velocity bias in intrusive gas-liquid flow measurements. Nat. Comm. 2021;12:4123. doi: 10.1038/s41467-021-24231-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
99.Lukerchenko N., Kvurt Y., Keita I., Chara Z., Vlasak P. Drag force, drag torque, and Magnus force coefficients of rotating spherical particle moving in fluid. Partic. Sci. Technol. 2012;30:55–67. doi: 10.1080/02726351.2010.544377. [DOI] [Google Scholar]
100.Sawatzki O. Das Strömungsfeld um eine rotierende Kugel. Acta Mech. 1970;9:159–214. doi: 10.1007/BF01179821. [DOI] [Google Scholar]
101.Mittal S., Kumar B. Flow past a rotating cylinder. J. Fluid Mech. 2003;476:303–334. doi: 10.1017/S0022112002002938. [DOI] [Google Scholar]
102.Versteeg H.K., Malalasekera W. An Introduction to Computational Fluid Dynamics, the Finite Volume Method. 2nd ed. Pearson, Prentice Hall; Harlow, UK: 2007. [Google Scholar]
103.Pettigrew M.J., Taylor C.E., Fisher N.J., Yetisir M., Smith B.A.W. Flow-induced vibration: Recent findings and open questions. Nucl. Eng. Des. 1998;185:249–276. doi: 10.1016/S0029-5493(98)00238-6. [DOI] [Google Scholar]
104.Naudasher E., Rockwell D. Flow-Induced Vibrations: An Engineering Guide. Dover Publ.; Mineola, NY, USA: 2005. [Google Scholar]
105.Rajamuni M.M., Thompson M.C., Hourigan K. Vortex dynamics and vibration modes of a tethered sphere. J. Fluid Mech. 2020;885:A10. doi: 10.1017/jfm.2019.928. [DOI] [Google Scholar]
106.Chow V.T. Open-Channel Hydraulics, international student edition. McGraw-Hill; New York, NY, USA: 1959. [Google Scholar]
107.Henderson F.M. Open Channel Flow. Prentice Hall; Upper Saddle River, NJ, USA: 1966. [Google Scholar]
108.Subramanya K. Flow in Open Channels. Tata McGraw-Hill Publ. Co.; New Delhi, India: 1997. [Google Scholar]
109.Reineck H.-E., Singh I.B. Depositional Sedimentary Environments. Springer; Berlin/Heidelberg, Germany: 1980. [Google Scholar]
110.Niven R.K. Steady state of a dissipative flow–controlled system and the maximum entropy production principle. Phys. Rev. E. 2009;80:021113. doi: 10.1103/PhysRevE.80.021113. [DOI] [PubMed] [Google Scholar]
Paul Erdos | What's new
===============
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
Tag Archive
You are currently browsing the tag archive for the ‘Paul Erdos’ tag.
Rough numbers between consecutive primes
10 August, 2025 in math.NT, paper | Tags: Ayla Gafni, Paul Erdos, prime gaps, rough numbers | by Terence Tao | 12 comments
First things first: due to an abrupt suspension of NSF funding to my home university of UCLA, the Institute for Pure and Applied Mathematics (which had been preliminarily approved for a five-year NSF grant to run the institute) is currently fundraising to ensure continuity of operations during the suspension, with a goal of raising $500,000. Donations can be made at this page. As incoming Director of Special Projects at IPAM, I am grateful for the support (both moral and financial) that we have already received in the last few days, but we are still short of our fundraising goal.
Back to math. Ayla Gafni and I have just uploaded to the arXiv the paper “Rough numbers between consecutive primes“. In this paper we resolve a question of Erdös concerning rough numbers between consecutive primes, and with the assistance of modern sieve theory calculations, we in fact obtain quite precise asymptotics for the problem. (As a side note, this research was supported by my personal NSF grant which is also currently suspended; I am grateful to recent donations to my own research fund which have helped me complete this research.)
Define a prime gap to be an interval between consecutive primes. We say that a prime gap contains a rough number if there is an integer in the gap whose least prime factor is at least the length of the gap. For instance, the prime gap contains the rough number , but the prime gap does not (all integers between and have a prime factor less than ). The first few n for which the prime gap contains a rough number are
Numerically, the proportion of n for which the prime gap does not contain a rough number decays slowly as n increases:
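A brute-force computation of such statistics can be sketched as follows (a minimal sketch, assuming the length of the gap between consecutive primes p &lt; q is interpreted as q − p):

```python
def primes_up_to(N):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (N + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(N**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [i for i in range(N + 1) if sieve[i]]

def least_prime_factor(m):
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m

def gap_contains_rough(p, q):
    """Does the gap (p, q) contain an integer whose least prime factor is >= q - p?"""
    return any(least_prime_factor(m) >= q - p for m in range(p + 1, q))

ps = primes_up_to(100_000)
gaps = [(p, q) for p, q in zip(ps, ps[1:]) if p > 2]   # skip the degenerate gap (2, 3)
bad = [p for p, q in gaps if not gap_contains_rough(p, q)]
print(f"{len(bad)} of {len(gaps)} prime gaps below 10^5 contain no rough number")
print("the first few such gaps begin at p =", bad[:10])
```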
Erdös initially thought that all but finitely many prime gaps should contain a rough number, but changed his mind, as per the following quote:
…I am now sure that this is not true and I “almost” have a counterexample. Pillai and Szekeres observed that for every , a set of consecutive integers always contains one which is relatively prime to the others. This is false for , the smallest counterexample being . Consider now the two arithmetic progressions and . There certainly will be infinitely many values of for which the progressions simultaneously represent primes; this follows at once from hypothesis H of Schinzel, but cannot at present be proved. These primes are consecutive and give the required counterexample. I expect that this situation is rather exceptional and that the integers for which there is no satisfying and have density .
In fact Erdös’s observation can be made simpler: any pair of cousin primes p, p + 4 with p > 3 (of which (7, 11) is the first example) will produce a prime gap that does not contain any rough numbers, since the three integers strictly between them are each divisible by 2 or 3, and so have least prime factor smaller than the gap length of 4.
The latter question of Erdös is listed as problem #682 on Thomas Bloom’s Erdös problems website. In this paper we answer Erdös’s question, and in fact give a rather precise bound for the number of counterexamples:
Theorem 1 (Erdos #682). For , let be the number of prime gaps with that do not contain a rough number. Then Assuming the Dickson–Hardy–Littlewood prime tuples conjecture, we can improve this to for some (explicitly describable) constant .
In fact we believe that , although the formula we have to compute converges very slowly. This is (weakly) supported by numerical evidence:
While many questions about prime gaps remain open, the theory of rough numbers is much better understood, thanks to modern sieve theoretic tools such as the fundamental lemma of sieve theory. The main idea is to frame the problem in terms of counting the number of rough numbers in short intervals , where ranges in some dyadic interval and is a much smaller quantity, such as for some . Here, one has to tweak the definition of “rough” to mean “no prime factors less than ” for some intermediate (e.g., for some turns out to be a reasonable choice). These problems are very analogous to the extremely well studied problem of counting primes in short intervals, but one can make more progress without needing powerful conjectures such as the Hardy–Littlewood prime tuples conjecture. In particular, because of the fundamental lemma of sieve theory, one can compute the mean and variance (i.e., the first two moments) of such counts to high accuracy, using in particular some calculations on the mean values of singular series that go back at least to the work of Montgomery from 1970. This second moment analysis turns out to be enough (after optimizing all the parameters) to answer Erdös’s problem with a weaker bound
To do better, we need to work with higher moments. The fundamental lemma also works in this setting; one now needs precise asymptotics for the mean value of singular series of -tuples, but this was fortunately worked out (in more or less exactly the format we needed) by Montgomery and Soundararajan in 2004. Their focus was establishing a central limit theorem for the distribution of primes in short intervals (conditional on the prime tuples conjecture), but their analysis can be adapted to show (unconditionally) good concentration of measure results for rough numbers in short intervals. A direct application of their estimates improves the upper bound on to
and some more careful tweaking of parameters allows one to remove the error. This latter analysis reveals that in fact the dominant contribution to will come with prime gaps of bounded length, of which our understanding is still relatively poor (it was only in 2014 that Yitang Zhang famously showed that infinitely many such gaps exist). At this point we finally have to resort to (a Dickson-type form of) the prime tuples conjecture to get the asymptotic (2).
On several irrationality problems for Ahmes series
27 November, 2024 in math.NT, paper | Tags: Ahmes series, irrationality, Paul Erdos, Vjekoslav Kovac | by Terence Tao | 6 comments
Vjeko Kovac and I have just uploaded to the arXiv our paper “On several irrationality problems for Ahmes series“. This paper resolves (or at least makes partial progress on) some open questions of Erdős and others on the irrationality of Ahmes series, which are infinite series of the form for some increasing sequence of natural numbers. Of course, since most real numbers are irrational, one expects such series to “generically” be irrational, and we make this intuition precise (in both a probabilistic sense and a Baire category sense) in our paper. However, it is often difficult to establish the irrationality of any specific series. For example, it is already a non-trivial result of Erdős that the series is irrational, while the irrationality of (equivalent to Erdős problem #69) remains open, although very recently Pratt established this conditionally on the Hardy–Littlewood prime tuples conjecture. Finally, the irrationality of (Erdős problem #68) is completely open.
On the other hand, it has long been known that if the sequence grows faster than for any , then the Ahmes series is necessarily irrational, basically because the fractional parts of can be arbitrarily small positive quantities, which is inconsistent with being rational. This growth rate is sharp, as can be seen by iterating the identity to obtain a rational Ahmes series of growth rate for any fixed .
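For concreteness, here is a minimal sketch of this mechanism, assuming the identity in question is the classical splitting identity 1/n = 1/(n+1) + 1/(n(n+1)): repeatedly splitting the last term keeps the sum rational while the denominators grow doubly exponentially, in the manner of Sylvester's sequence.

```python
from fractions import Fraction

# Illustrative sketch: iterate the splitting identity 1/n = 1/(n+1) + 1/(n(n+1))
# on the last term of the series, starting from 1/2.
terms = [2]
for _ in range(5):
    n = terms[-1]
    terms[-1:] = [n + 1, n * (n + 1)]   # replace 1/n by 1/(n+1) + 1/(n(n+1))

print("denominators:", terms)                        # grow roughly like c^(2^k)
print("sum:", sum(Fraction(1, a) for a in terms))    # remains exactly 1/2, hence rational
```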
In our paper we show that if grows somewhat slower than the above sequences in the sense that , for instance if for a fixed , then one can find a comparable sequence for which is rational. This partially addresses Erdős problem #263, which asked if the sequence had this property, and whether any sequence of exponential or slower growth (but with convergent) had this property. Unfortunately we barely miss a full solution of both parts of the problem, since the condition we need just fails to cover the case , and also does not quite hold for all sequences going to infinity at an exponential or slower rate.
We also show the following variant: if has exponential growth in the sense that with convergent, then there exist nearby natural numbers such that is rational. This answers the first part of Erdős problem #264, which asked about the case , although the second part (which asks about ) is slightly out of reach of our methods. Indeed, we show that the exponential growth hypothesis is best possible in the sense that a random sequence that grows faster than exponentially will not have this property; however, this result does not address any specific superexponential sequence such as , although it does apply to some sequence of the shape .
Our methods can also handle higher dimensional variants in which multiple series are simultaneously set to be rational. Perhaps the most striking result is this: we can find an increasing sequence of natural numbers with the property that is rational for every rational (excluding the cases to avoid division by zero)! This answers (in the negative) a question of Stolarsky (Erdős problem #266), and also reproves Erdős problem #265 (and in the latter case one can even make grow double exponentially fast).
Our methods are elementary and avoid any number-theoretic considerations, relying primarily on the countable dense nature of the rationals and an iterative approximation technique. The first observation is that the task of representing a given number as an Ahmes series with each lying in some interval (with the disjoint, and going to infinity fast enough to ensure convergence of the series), is possible if and only if the infinite sumset
to contain , where . More generally, to represent a tuple of numbers indexed by some set of numbers simultaneously as with , this is the same as asking for the infinite sumset
to contain , where now So the main problem is to get control on such infinite sumsets. Here we use a very simple observation:
Proposition 1 (Iterative approximation). Let be a Banach space, let be sets with each contained in the ball of radius around the origin for some with convergent, so that the infinite sumset is well-defined. Suppose that one has some convergent series in , and sets converging in norm to zero, such that for all . Then the infinite sumset contains .
Informally, the condition (2) asserts that occupies all of “at the scale “.
Proof: Let . Our task is to express as a series with . From (2) we may write
for some and . Iterating this, we may find and such that
for all . Sending , we obtain
as required.
In one dimension, sets of the form are dense enough that the condition (2) can be satisfied in a large number of situations, leading to most of our one-dimensional results. In higher dimension, the sets lie on curves in a high-dimensional space, and so do not directly obey usable inclusions of the form (2); however, for suitable choices of intervals , one can take some finite sums which will become dense enough to obtain usable inclusions of the form (2) once reaches the dimension of the ambient space, basically thanks to the inverse function theorem (and the non-vanishing curvatures of the curve in question). For the Stolarsky problem, which is an infinite-dimensional problem, it turns out that one can modify this approach by letting grow slowly to infinity with .
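A very simple one-dimensional analogue of this iterative-approximation idea (not the construction used in the paper) is the classical greedy expansion into unit fractions, in which each step removes the largest admissible unit fraction from the remaining error; for a rational target the remainder's numerator strictly decreases, so the process terminates.

```python
from fractions import Fraction
import math

def greedy_unit_fractions(x, max_terms=10):
    """Greedy (Sylvester-Fibonacci) expansion of a rational 0 < x < 1 into distinct
    unit fractions -- a toy analogue of iterative approximation, not the paper's method."""
    terms = []
    while x > 0 and len(terms) < max_terms:
        a = math.ceil(1 / x)       # smallest denominator with 1/a <= x
        terms.append(a)
        x -= Fraction(1, a)        # numerator of the remainder strictly decreases
    return terms

print(greedy_unit_fractions(Fraction(4, 23)))    # [6, 138]
print(greedy_unit_fractions(Fraction(5, 121)))   # denominators grow very rapidly
```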
Planar point sets with forbidden four-point patterns and few distinct distances
3 September, 2024 in math.CO, paper | Tags: Erdos distance problem, Paul Erdos | by Terence Tao | 18 comments
I’ve just uploaded to the arXiv my paper “Planar point sets with forbidden four-point patterns and few distinct distances“. This (very) short paper was a byproduct of my explorations of the Erdös problem website in recent months, with a vague emerging plan to locate a problem that might be suitable for some combination of a crowdsourced “Polymath” style project and/or a test case for emerging AI tools. The question below was one potential candidate; however, upon reviewing the literature on the problem, I noticed that the existing techniques only needed one additional tweak to fully resolve the problem. So I ended up writing this note instead to close off the problem.
I’ve arranged this post so that this additional trick is postponed to below the fold, so that the reader can, if desired, try to guess for themselves what the final missing ingredient needed to solve the problem was. Here is the problem (Erdös problem #135), which was asked multiple times by Erdös over more than two decades (and who even offered a small prize for the solution on one of these occasions):
Problem 1 (Erdös #135). Let be a set of points such that any four points in the set determine at least five distinct distances. Must determine many distances?
This is a cousin of the significantly more famous Erdös distinct distances problem (Erdös problem #89), which asks what is the minimum number of distances determined by a set of points in the plane, without the restriction on four-point configurations. The example of a square grid (assuming for sake of argument that is a perfect square), together with some standard analytic number theory calculations, shows that can determine distances, and it is conjectured that this is best possible up to constants. A celebrated result of Guth and Katz, discussed in this previous blog post, shows that will determine at least distances. Note that the lower bound here is far larger, and in fact comparable to the total number of distances available, thus expressing the belief that the “local” condition that every four points determine at least five distances forces the global collection of distances to be almost completely distinct. In fact, in one of the papers posing the problem, Erdös made the even stronger conjecture that the set must contain a subset of cardinality for which all the distances generated by are distinct.
A paper of Dumitrescu came close to resolving this problem. Firstly, the number of ways in which four points could fail to determine five distinct distances was classified in that paper, with the four-point configurations necessarily being one of the following eight patterns:
: An equilateral triangle plus an arbitrary vertex.
: A parallelogram.
: An isosceles trapezoid (four points on a line, , where , form a degenerate isosceles trapezoid).
: A star with three edges of the same length.
: A path with three edges of the same length.
: A kite.
: An isosceles triangle plus an edge incident to a base endpoint, and whose length equals the length of the base.
: An isosceles triangle plus an edge incident to the apex, and whose length equals the length of the base.
(See Figure 1 and Lemma 1 of Dumitrescu’s paper.) So the question is asking whether if an point set avoids all of these patterns , then it must generate distances.
Given that the grid determines only distances, one could seek a counterexample to this by finding a set of points in the grid that avoided all of the eight patterns .
Dumitrescu then counted how often each of the patterns occurred inside the grid . The answer is:
does not occur at all. (This is related to the irrationality of .)
occurs times.
occurs times.
occurs times.
occurs times.
occurs times.
occurs times.
occurs times.
(The bounds involving were obtained using the Szemerédi-Trotter theorem, and might not be optimal for this problem.) In particular, with the exception of the parallelogram pattern , the other seven forbidden four-point patterns occur at most times.
Using this and a standard probabilistic argument, Dumitrescu then established the following “near miss” to a negative answer to the above problem:
Theorem 2 (First near miss) If is sufficiently large, then there exists a subset of of cardinality which avoids all of the patterns .
In particular, this generates a set of points with distances that avoids seven out of the eight required forbidden patterns; it is only the parallelograms that are not avoided, and are the only remaining obstacle to a negative answer to the problem.
Proof: Let be a small constant, and let be a random subset of , formed by placing each element of with an independent probability of . A standard application of Hoeffding’s inequality (or even the second moment method) shows that this set will have cardinality with high probability if is large enough. On the other hand, each of the patterns has a probability of lying inside , so by linearity of expectation, the total number of such patterns inside is on the average. In particular, by Markov’s inequality, we can find a set of cardinality with only such patterns. Deleting all of these patterns from , we obtain a set of cardinality , which is if is a sufficiently small constant. This establishes the claim.
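To make the shape of this sample-and-delete argument concrete, here is a generic sketch in code; the function name and the toy pattern list are illustrative assumptions, not the paper's actual parameters or pattern counts.

```python
import random

def sample_and_delete(points, bad_patterns, delta, rng=random):
    """Generic probabilistic deletion method: keep each point independently
    with probability delta, then destroy every forbidden pattern that
    survives entirely inside the sample by deleting one of its points."""
    sample = {p for p in points if rng.random() < delta}
    for pattern in bad_patterns:
        if all(p in sample for p in pattern):
            sample.discard(pattern[0])  # delete one point to kill the pattern
    return sample

# Toy usage on a small grid, with one illustrative "bad pattern".
grid = [(x, y) for x in range(10) for y in range(10)]
toy_patterns = [((0, 0), (1, 1), (2, 2), (3, 3))]  # e.g. a collinear quadruple
print(len(sample_and_delete(grid, toy_patterns, 0.3)))
```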
Unfortunately, this random set contains far too many parallelograms ( such parallelograms, in fact) for this deletion argument to work. On the other hand, in earlier work of Thiele and of Dumitrescu, a separate construction of a set of points in that avoids all of the parallelograms was given:
Theorem 3 (Second near miss) For large, there exists a subset of of cardinality which contains no parallelograms . Furthermore, this set is in general position: no three points in are collinear, and no four are concyclic. As a consequence, this set in fact avoids the three patterns (the pattern in is concyclic, and the pattern does not occur at all in the grid).
Proof: One uses an explicit algebraic construction, going back to an old paper of Erdös and Turán involving constructions of Sidon sets. Namely, one considers the set
where is a prime between and (the existence of which is guaranteed by Bertrand’s postulate). Standard Gauss sum estimates can be used to show that has cardinality . If contained four points that were in a parallelogram or on a circle, or three points in a line, then one could lift up from to the finite field plane and conclude that the finite field parabola also contained four points in a parallelogram or a circle, or three points on a line. But straightforward algebraic calculations can be performed to show that none of these scenarios can occur. For instance, if were four points on a parallelogram that were contained in a parabola, this would imply that an alternating sum of the form
would vanish for some non-zero ; but this expression simplifies to , which cannot vanish for non-zero as is odd. (For the concyclic claim, the parabola in can in fact contain four points on a circle, but only if their coordinates sum to zero, and this cannot happen in .)
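Here is a hedged sketch of what I take the construction to be (an Erdős–Turán style parabola, the points (x, x² mod p) for 0 ≤ x < p; treat the exact choice of set and parameters as an assumption), together with a brute-force check of the no-parallelogram and no-collinear-triple claims for small primes:

```python
from itertools import combinations

def parabola_points(p):
    # Assumed form of the construction: the parabola over F_p, read as
    # integer points (x, x^2 mod p) with 0 <= x < p.
    return [(x, x * x % p) for x in range(p)]

def has_parallelogram(pts):
    # Two disjoint pairs of points with the same vector sum a+c = b+d
    # give a (possibly degenerate) parallelogram.
    sums = {}
    for pair in combinations(pts, 2):
        s = (pair[0][0] + pair[1][0], pair[0][1] + pair[1][1])
        for other in sums.get(s, []):
            if not set(other) & set(pair):
                return True
        sums.setdefault(s, []).append(pair)
    return False

def has_collinear_triple(pts):
    for a, b, c in combinations(pts, 3):
        if (b[0] - a[0]) * (c[1] - a[1]) == (c[0] - a[0]) * (b[1] - a[1]):
            return True
    return False

for p in [5, 7, 11, 13]:
    pts = parabola_points(p)
    print(p, has_parallelogram(pts), has_collinear_triple(pts))
```

On these small examples both checks should come back negative, in line with the lifting argument sketched in the proof.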
Given that we have one “near-miss” in the literature that avoids , and another “near-miss” that avoids , it is natural to try to combine these two constructions to obtain a set that avoids all eight patterns . This inspired the following problem of Dumitrescu (see Problem 2 of this paper):
Problem 4 Does the set in (1) contain a subset of cardinality that avoids all eight of the patterns ?
Unfortunately, this problem looked difficult, as the number-theoretic task of counting the patterns in looked quite daunting.
This ends the survey of the prior literature on this problem. Can you guess the missing ingredient needed to resolve the problem? I will place the answer below the fold.
Erdos problem #385, the parity problem, and Siegel zeroes
19 August, 2024 in expository, math.NT | Tags: parity problem, Paul Erdos, Siegel zero | by Terence Tao | 24 comments
The Erdös problem site was created last year, and announced earlier this year on this blog. Every so often, I have taken a look at a random problem from the site for fun. A few times, I was able to make progress on one of the problems, leading to a couple papers; but the more common outcome is that I play around with the problem for a while, see why the problem is difficult, and then eventually give up and do something else. But, as is common in this field, I don’t make public the observations that I made, and the next person who looks at the same problem would likely have to go through the same process of trial and error to work out what the main obstructions are.
So, as an experiment, I thought I would record here my preliminary observations on one such problem – Erdös problem #385 – to discuss why it looks difficult to solve with our current understanding of the primes. Here is the problem:
Problem 1 (Erdös Problem #385) Let
where is the least prime divisor of . Is it true that for all sufficiently large ? Does as ?
This problem is mentioned on page 73 of this 1979 paper of Erdös (where he attributes the problem to an unpublished work of Eggleton, Erdös, and Selfridge that, to my knowledge, has never actually appeared), as well as briefly on page 92 of this 1980 paper of Erdös and Graham.
At first glance, this looks like a somewhat arbitrary problem (as many of Erdös’s problems initially do), as the function is not obviously related to any other well-known function or problem. However, it turns out that this problem is closely related to the parity barrier in sieve theory (as discussed in this previous post), with the possibility of Siegel zeroes presenting a particular obstruction. I suspect that Erdös was well aware of this connection; certainly he mentions the relation with questions on gaps between primes (or almost primes), which is in turn connected to the parity problem and Siegel zeroes (as is discussed recently in my paper with Banks and Ford, and in more depth in these papers of Ford and of Granville).
Let us now explore the problem further. Let us call a natural number bad if , so the first part of the problem is asking whether there exist bad numbers that are sufficiently large. We unpack the definitions: is bad if and only if for any composite , so placing in intervals of the form we are asking to show that
for each . To put it another way, the badness of asserts that for each that the residue classes for cover all the natural numbers in the interval except for the primes.
It is now natural to try to understand this problem for a specific choice of interval as a function of . If is large in the sense that , then the claimed covering property is automatic, since every composite number less than or equal to has a prime factor less than or equal to . On the other hand, for very small, in particular , it is also possible to find with this property. Indeed, if one takes to lie in the residue class , then we see that the residue classes cover all of except for , and from Linnik’s theorem we can ensure that is prime. Thus, to rule out bad numbers, we need to understand the covering problem at intermediate scales .
A key case is when for some . Here, the residue classes for sieve out everything in except for primes and semiprimes, and specifically the semiprimes that are product of two primes between and . If one can show for some that the largest gap between semiprimes in say with prime factors in is , then this would affirmatively answer the first part of this problem (and also the second). This is certainly very plausible – it would follow from a semiprime version of the Cramér conjecture (and this would also make the more precise prediction ) – but remains well out of reach for now. Even assuming the Riemann hypothesis, the best upper bound on prime gaps in is , and the best upper bound on semiprime gaps is not significantly better than this – in particular, one cannot reach for any . (There is a remote possibility that an extremely delicate analysis near , together with additional strong conjectures on the zeta function, such as a sufficiently quantitative version of the GUE hypothesis, may barely be able to resolve this problem, but I am skeptical of this, absent some further major breakthrough in analytic number theory.)
Given that multiplicative number theory does not seem powerful enough (even on RH) to resolve these problems, the other main approach would be to use sieve theory. In this theory, we do not really know how to exploit the specific location of the interval or the specific congruence classes used, so one can study the more general problem of trying to cover an interval of length by one residue class mod for each , and only leaving a small number of survivors which could potentially be classified as “primes”. The discussion of the small case already reveals a problem with this level of generality: one can sieve out the interval by the residue classes for , and leave only one survivor, . Indeed, thanks to known bounds on Jacobsthal’s function, one can be more efficient than this; for instance, using equation (1.2) from this paper of Ford, Green, Konyagin, Maynard, and myself, it is possible to completely sieve out any interval of sufficiently large length using only those primes up to . On the other hand, from the work of Iwaniec, we know that sieving up to is insufficient to completely sieve out such an interval; related to this, if one only sieves up to for some , the linear sieve (see e.g., Theorem 2 of this previous blog post) shows that one must have at least survivors, where can be given explicitly in the regime by the formula
These lower bounds are not believed to be best possible. For instance, the Maier–Pomerance conjecture on Jacobsthal’s function would indicate that one needs to sieve out primes up to in order to completely sieve out an interval of length , and it is also believed that sieving up to should leave survivors, although even these strong conjectures are not enough to positively resolve this problem, since we are permitted to sieve all the way up to (and we are allowed to leave every prime number as a survivor, which in view of the Brun–Titchmarsh theorem could permit as many as survivors).
Unfortunately, as discussed in this previous blog post, the parity problem blocks such improvements from most standard analytic number theory methods, in particular sieve theory. A particularly dangerous enemy arises from Siegel zeroes. This is discussed in detail in the papers of Ford and of Granville mentioned previously, but an informal discussion is as follows. If there is a Siegel zero associated to the quadratic character of some conductor , this roughly speaking means that almost all primes (in certain ranges) will be quadratic non-residues mod . In particular, if one restricts attention to numbers in a residue class that is a quadratic residue, we then expect most numbers in this class to have an even number of prime factors, rather than an odd number.
This alters the effect of sieving in such residue classes. Consider for instance the classical sieve of Eratosthenes. If one sieves out for each prime , the sieve of Eratosthenes tells us that the surviving elements of are simply the primes between and , of which there are about many. However, if one restricts attention to for a quadratic residue class (and taking to be somewhat large compared to ), then by the preceding discussion, this eliminates most primes, and so now sieving out should leave almost no survivors. Shifting this example by and then dividing by , one can end up with an example of an interval of length that can be sieved by residue classes for each in such a manner as to leave almost no survivors (in particular, many). In the presence of a Siegel zero, it seems quite difficult to prevent this scenario from “infecting” the above problem, creating a bad scenario in which for all , the residue classes for already eliminate almost all elements of , leaving it mathematically possible for the remaining survivors to either be prime, or eliminated by the remaining residue classes for .
Because of this, I suspect that it will not be possible to resolve this Erdös problem without a major breakthrough on the parity problem that (at a bare minimum) is enough to exclude the possibility of Siegel zeroes existing. (But it is not at all clear that Siegel zeroes are the only “enemy” here, so absent a major advance in “inverse sieve theory”, one cannot simply assume GRH to run away from this problem.)
— 0.1. Addendum: heuristics for Siegel zero scenarios —
This post also provides a good opportunity to refine some heuristics I had previously proposed regarding Siegel zeroes and their impact on various problems in analytic number theory. In this previous blog post, I wrote
“The parity problem can also be sometimes be overcome when there is an exceptional Siegel zero … [this] suggests that to break the parity barrier, we may assume without loss of generality that there are no Siegel zeroes.”
On the other hand, it was pointed out in a more recent article of Granville that (as with the current situation) Siegel zeroes can sometimes serve to enforce the parity barrier, rather than overcome it; that article responds to my previous statement with the comment “this claim needs to be treated with caution, since its truth depends on the context”.
I actually agree with Granville here, and I propose here a synthesis of the two situations. In the absence of a Siegel zero, standard heuristic models in analytic number theory (such as the ones discussed in this post) typically suggest that a given quantity of interest in number theory (e.g., the number of primes in a certain set) obey an asymptotic law of the form
where is generally fairly well understood, while is expected to fluctuate “randomly” (or more precisely, pseudorandomly) and thus be smaller than the main term. However, a major difficulty in analytic number theory is that we often cannot prevent a “conspiracy” from occurring in which the error term becomes as large as, or even larger than the main term: the fluctuations present in that term are often too poorly understood to be under good control. The parity barrier manifests by providing examples of analogous situations in which the error term is indeed as large as the main term (with an unfavorable sign).
However, the presence of a Siegel zero tends to “magnetize” the error term by pulling most of the fluctuations in a particular direction. In many situations, what this means is that one can obtain a refined asymptotic of the form
where now fluctuates less than the original , and in particular can (in some cases) be shown to be lower order than , while is a new term that is often explicitly describable in terms of the exceptional character associated to the Siegel zero, as well as the location of the Siegel zero . A typical example is the problem of estimating the sum of primes in an arithmetic progression. The Siegel–Walfisz theorem gives a bound of the form
for any (with an ineffective constant); in the regime one can improve the error term to , but for large one cannot do better than the Brun–Titchmarsh bound of . However, when there is a Siegel zero in an appropriate range, we can obtain the refined bound
for some , where is the conductor of ; see e.g., Theorem 5.27 of Iwaniec–Kowalski. Thus we see the error term is much improved (and in fact can even be made effective), at the cost of introducing a Siegel correction term which (for close to and not too large) is of comparable size to the main term, and can either be aligned with or against the main term depending on the sign of .
The implications of this refined asymptotic then depend rather crucially on how the Siegel correction term is aligned with the main term, and also whether it is of comparable order or lower order. In many situations (particularly those concerning “average case” problems, in which one wants to understand the behavior for typical choices of parameters), the Siegel correction term ends up being lower order, and so one ends up with the situation described in my initial blog post, where we are able to get the predicted asymptotic in the Siegel zero case. However, as pointed out by Granville, there are other situations (particularly those involving “worst case” problems, in which some key parameter can be chosen adversarially) in which the Siegel correction term can align to completely cancel (or to highly reinforce) the main term. In such cases, the Siegel zero becomes a very concrete manifestation of the parity barrier, rather than a means to avoid it. (There is a tiny chance that there may be some sort of “repulsion” phenomenon in which having no semiprimes in for one value of somehow generates semiprimes in for another value of , which would allow one to solve the problem without having to directly address the Siegel issue, but I don’t see how two such intervals could “communicate” in order to achieve such a repulsion effect.)
A result of Bui–Pratt–Zaharescu, and Erdös problem #437
9 August, 2024 in expository, math.NT | Tags: Alexandru Zaharescu, Hung Bui, Kyle Pratt, Paul Erdos | by Terence Tao | 14 comments
The following problem was posed by Erdös and Graham (and is listed as problem #437 on the Erdös problems website):
Problem 1 Let be integers. How many of the partial products , , , can be squares? Is it true that, for any , there can be more than squares?
If one lets denote the maximal number of squares amongst such partial products, it was observed in the paper of Erdös and Graham that the bound is “trivial” (no proof was provided, but one can for instance argue using the fact that the number of integer solutions to hyperelliptic equations of the form for fixed is quite sparse, and in fact finite for thanks to Siegel’s theorem), and the problem then asks if .
It turns out that this problem was essentially solved (though not explicitly) by a recently published paper of Bui, Pratt, and Zaharescu, who studied a closely related quantity introduced by Erdös, Graham, and Selfridge (see also Problem B30 of Guy’s book), defined for any natural number as the least natural number such that some subset of , when multiplied together with , produces a square. Among the several results proven about in that paper was the following:
Theorem 2 (Bui–Pratt–Zaharescu, Theorem 1.2) For sufficiently large, there exist integers such that .
The arguments were in fact quite elementary, with the main tool being the theory of smooth numbers (the theory of hyperelliptic equations is used elsewhere in the paper, but not for this particular result).
If one uses this result as a “black box”, then an easy greedy algorithm argument gives the lower bound
but with a small amount of additional work, one can modify the proof of the theorem to give a slightly better bound:
Theorem 3 (Bounds for ) As , we have the lower bound
and the upper bound
In particular, for any , one has for sufficiently large .
The purpose of this blog post is to record this modification of the argument, which is short enough to present immediately. For a large , let denote the quantity
We call a natural number -smooth if all of its prime factors are at most . From a result of Hildebrand (or the older results of de Bruijn), we know that the number of -smooth numbers less than or equal to is
Let be the number of primes up to . From the prime number theorem we have
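As a small numerical illustration of these two counts (not needed for the argument), one can brute-force both quantities for modest values; the function name below is mine.

```python
from sympy import factorint, primepi

def count_smooth(x, t):
    """Brute-force count of t-smooth numbers up to x (fine for small x)."""
    return sum(1 for n in range(1, x + 1)
               if all(p <= t for p in factorint(n)))

x, t = 10**4, 100
print(count_smooth(x, t), primepi(t))  # smooth-number count vs. prime count
```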
We first prove the lower bound on , which is a variant of Theorem 2. The key observation is that given any -smooth numbers , some non-trivial subcollection of them will multiply to a square. This is essentially Lemma 4.2 of Bui–Pratt–Zaharescu, but for the convenience of the reader we give a full proof here. Consider the multiplicative homomorphism defined by
where is the prime and is the number of times divides . The vectors lie in a -dimensional vector space over , and thus are linearly dependent. Thus there exists a non-trivial collection of these vectors that sums to zero, which implies that the corresponding elements of the sequence multiply to a square.
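This linear-algebra observation is easy to make concrete. Here is a minimal sketch (the function name is mine) that, given t-smooth numbers, finds a non-trivial subcollection multiplying to a perfect square by Gaussian elimination over F_2 on exponent-parity vectors:

```python
from sympy import factorint, primerange

def square_subproduct(nums, t):
    """Given t-smooth numbers nums (more than pi(t) of them guarantees
    success), find a non-empty subcollection whose product is a perfect
    square, via elimination over F_2 on exponent-parity vectors."""
    primes = list(primerange(2, t + 1))
    index = {p: i for i, p in enumerate(primes)}
    basis = {}  # pivot bit -> (parity vector, mask of contributing inputs)
    for j, n in enumerate(nums):
        vec = 0
        for p, e in factorint(n).items():
            if e % 2:
                vec ^= 1 << index[p]
        mask = 1 << j
        while vec:
            pivot = vec.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = (vec, mask)
                break
            bvec, bmask = basis[pivot]
            vec ^= bvec
            mask ^= bmask
        else:
            # parity vector reduced to zero: these inputs multiply to a square
            return [nums[i] for i in range(len(nums)) if mask >> i & 1]
    return None

print(square_subproduct([2, 3, 10, 12, 45, 50], 5))  # e.g. [3, 12]; 3*12 = 36
```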
From (1), (2) we can find sequences of -smooth numbers in , with each sequence being to the right of the previous sequence. By the above observation, each sequence contains some non-trivial subcollection that multiplies to a square. Concatenating all these subsequences together, we obtain a single sequence with at least partial products multiplying to a square, giving the desired lower bound on .
Next, we prove the upper bound on . Suppose that a sequence has partial products that are squares for some . Then we have a square for all (with the convention ). The key observation (essentially Lemma 3.4 of Bui–Pratt–Zaharescu) is that, for each , one of the following must hold:
(i) At least one of the is -smooth.
(ii) At least one of the is divisible by for some prime .
(iii) .
Indeed, suppose that (i) and (ii) are not true; then one of the terms in the sequence is divisible by exactly one copy of for some prime . In order for the product to be a square, another element of the sequence must also be divisible by the same prime; but this implies (iii).
From (1) we see that the number of for which (i) occurs is at most . From the union bound we see that the number of for which (ii) occurs is at most
Finally, from the pigeonhole principle we see that the number of for which (iii) occurs is also at most
Thus one has , as desired. This completes the proof.
The upper bound arguments seem more crude to the author than the lower bound arguments, so I conjecture that the lower bound is in fact the truth: .
Dense sets of natural numbers with unusually large least common multiples
8 July, 2024 in math.NT, paper | Tags: least common multiples, Paul Erdos | by Terence Tao | 13 comments
I’ve just uploaded to the arXiv my paper “Dense sets of natural numbers with unusually large least common multiples“. This short paper answers (in the negative) a somewhat obscure question of Erdős and Graham:
Problem 1 Is it true that if is a set of natural numbers for which goes to infinity as , then the quantity also goes to infinity as ?
At first glance, this problem may seem rather arbitrary, but it can be motivated as follows. The hypothesis that (1) goes to infinity is a largeness condition on ; in view of Mertens’ theorem, it can be viewed as an assertion that is denser than the set of primes. On the other hand, the conclusion that (2) grows is an assertion that becomes significantly larger than on the average for large ; that is to say, that many pairs of numbers in share a common factor. Intuitively, the problem is then asking whether sets that are significantly denser than the primes must start having lots of common factors on average.
For sake of comparison, it is easy to see that if (1) goes to infinity, then at least one pair of distinct elements in must have a non-trivial common factor. For if this were not the case, then the elements of are pairwise coprime, so each prime has at most one multiple in , and so can contribute at most to the sum in (1), and hence by Mertens’ theorem, and the fact that every natural number greater than one is divisible by at least one prime , the quantity (1) stays bounded, a contradiction.
It turns out, though, that the answer to the above problem is negative; one can find sets that are denser than the primes, but for which (2) stays bounded, so that the least common multiples in the set are unusually large. It was a bit surprising to me that this question had not been resolved long ago (in fact, I was not able to find any prior literature on the problem beyond the original reference of Erdős and Graham); in contrast, another problem of Erdős and Graham concerning sets with unusually small least common multiples was extensively studied (and essentially solved) about twenty years ago, while the study of sets with unusually large greatest common divisor for many pairs in the set has recently become somewhat popular, due to their role in the proof of the Duffin-Schaeffer conjecture by Koukoulopoulos and Maynard.
To search for counterexamples, it is natural to look for numbers with relatively few prime factors, in order to reduce their common factors and increase their least common multiple. A particularly simple example, whose verification is on the level of an exercise in a graduate analytic number theory course, is the set of semiprimes (products of two primes), for which one can readily verify that (1) grows like but (2) stays bounded. With a bit more effort, I was able to optimize the construction and uncover the true threshold for boundedness of (2), which was a little unexpected:
Theorem 2
(i) For any , there exists a set of natural numbers with
for all large , for which (2) stays bounded.
(ii) Conversely, if (2) stays bounded, then
for all large .
The proofs are not particularly long or deep, but I thought I would record here some of the process towards finding them. My first step was to try to simplify the condition that (2) stays bounded. In order to use probabilistic intuition, I first expressed this condition in probabilistic terms as
for large , where are independent random variables drawn from with probability density function
The presence of the least common multiple in the denominator is annoying, but one can easily flip the expression to the greatest common divisor:
If the expression was a product of a function of and a function of , then by independence this expectation would decouple into simpler averages involving just one random variable instead of two. Of course, the greatest common divisor is not of this form, but there is a standard trick in analytic number theory to decouple the greatest common divisor, namely to use the classic Gauss identity , with the Euler totient function, to write
Inserting this formula and interchanging the sum and expectation, we can now express the condition as bounding a sum of squares:
Thus, the condition (2) is really an assertion to the effect that typical elements of do not have many divisors. From experience in sieve theory, the probabilities tend to behave multiplicatively in , so the expression here heuristically behaves like an Euler product that looks something like
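The Gauss identity invoked above is easy to sanity-check numerically; the following snippet verifies that gcd(a, b) equals the sum of the totients of the common divisors of a and b for small a and b (the decoupling in the text then follows by swapping the sum with the expectation):

```python
from math import gcd
from sympy import totient

def gcd_via_totient(a, b):
    # Gauss identity: gcd(a, b) is the sum of phi(d) over common divisors d.
    return sum(totient(d) for d in range(1, min(a, b) + 1)
               if a % d == 0 and b % d == 0)

assert all(gcd_via_totient(a, b) == gcd(a, b)
           for a in range(1, 40) for b in range(1, 40))
print("Gauss identity verified for a, b < 40")
```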
and so the condition (2) is morally an assertion that this Euler product stays bounded. Comparing this with Mertens’ theorems, this leads to the heuristic prediction that (for a typical prime much smaller than ) should decay somewhat like (ignoring for now factors of ). This can be compared to the example of the set of primes or semiprimes on one hand, where the probability is like , and the set of all natural numbers on the other hand, where the probability is like . So the critical behavior should come from sets that are in some sense “halfway” between the primes and the natural numbers.
It is then natural to try a random construction, in which one sieves out the natural numbers by permitting each natural number to survive with a probability resembling , in order to get the predicted behavior for . After performing some standard calculations, I found that this construction could keep (2) bounded with a density a little bit less than the one stated in the main theorem; after optimizing the parameters, I could only get something like
I was stuck on optimising the construction further, so I turned my attention to a positive result in the spirit of (ii) of the main theorem. On playing around with (3), I observed that one could use Cauchy-Schwarz and Mertens’ theorem to obtain the bound
which was in line with the previous heuristic that should behave like . The left-hand side had a simple interpretation: by linearity of expectation, it was the expected number of prime factors of . So the boundedness of (2) implied that a typical element of only had about prime factors, in contrast to the predicted by the Hardy-Ramanujan law. Standard methods from the anatomy of integers can then be used to see how dense a set with that many prime factors could be, and this soon led to a short proof of part (ii) of the main theorem (I eventually found for instance that Jensen’s inequality could be used to create a particularly slick argument).
It then remained to improve the lower bound construction to eliminate the losses in the exponents. By deconstructing the proof of the upper bound, it became natural to consider something like the set of natural numbers that had at most prime factors. This construction actually worked for some scales – namely those for which was a natural number – but there were some strange “discontinuities” in the analysis that prevented me from establishing the boundedness of (2) for arbitrary scales . The basic problem was that increasing the number of permitted prime factors from one natural number threshold to another ended up increasing the density of the set by an unbounded factor (of the order of , in practice), which heavily disrupted the task of trying to keep the ratio (2) bounded. Usually the resolution to these sorts of discontinuities is to use some sort of random “average” of two or more deterministic constructions – for instance, by taking some random union of some numbers with prime factors and some numbers with prime factors – but the numerology turned out to be somewhat unfavorable, allowing for some improvement in the lower bounds over my previous construction, but not enough to close the gap entirely. It was only after substantial trial and error that I was able to find a working deterministic construction, where at a given scale one collected either numbers with at most prime factors, or numbers with prime factors but with the largest prime factor in a specific range, in which I could finally get the numerator and denominator in (2) to be in balance for every . But once the construction was written down, the verification of the required properties ended up being quite routine.
On product representations of squares
20 May, 2024 in math.NT, paper | Tags: Paul Erdos | by Terence Tao | 24 comments
I’ve just uploaded to the arXiv my paper “On product representations of squares“. This short paper answers (in the negative) a (somewhat obscure) question of Erdös. Namely, for any , let be the size of the largest subset of with the property that no distinct elements of multiply to a square. In a paper by Erdös, Sárközy, and Sós, the following asymptotics were shown for fixed :
.
.
.
for .
for .
for .
Thus the asymptotics for for odd were not completely settled. Erdös asked if one had for odd . The main result of this paper is that this is not the case; that is to say, there exists such that any subset of of cardinality at least will contain distinct elements that multiply to a square, if is large enough. In fact, the argument works for all , although it is not new in the even case. I will also note that there are now quite sharp upper and lower bounds on for even , using methods from graph theory: see this recent paper of Pach and Vizer for the latest results in this direction. Thanks to the results of Granville and Soundararajan, we know that the constant cannot exceed the Hall-Montgomery constant
and I (very tentatively) conjecture that this is in fact the optimal value for this constant. This looks somewhat difficult, but a more feasible conjecture would be that the asymptotically approach the Hall-Montgomery constant as , since the aforementioned result of Granville and Soundararajan morally corresponds to the case.
In the end, the argument turned out to be relatively simple; no advanced results from additive combinatorics, graph theory, or analytic number theory were required. I found it convenient to proceed via the probabilistic method (although the more combinatorial technique of double counting would also suffice here). The main idea is to generate a tuple of distinct random natural numbers in which multiply to a square, and which are reasonably uniformly distributed throughout , in that each individual number is attained by one of the random variables with a probability of . If one can find such a distribution, then if the density of is sufficiently close to , it will happen with positive probability that each of the will lie in , giving the claim.
When , this strategy cannot work, as it contradicts the arguments of Erdös, Sárközy, and Sós. The reason can be explained as follows. The most natural way to generate a triple of random natural numbers in which multiply to a square is to set
for some random natural numbers . But if one wants all these numbers to have magnitude , one sees on taking logarithms that one would need
which by elementary linear algebra forces
so in particular each of the would have a factor comparable to . However, it follows from known results on the “multiplication table problem” (how many distinct integers are there in the multiplication table?) that most numbers up to do not have a factor comparable to . (Quick proof: by the Hardy–Ramanujan law, a typical number of size or of size has factors, hence typically a number of size will not factor into two factors of size .) So the above strategy cannot work for .
However, the situation changes for larger . For instance, for , we can try the same strategy with the ansatz
Whereas before there were three (approximate) equations constraining three unknowns, now we would have four equations and six unknowns, and so we no longer have strong constraints on any of the . So in principle we now have a chance to find a suitable random choice of the . The most significant remaining obstacle is the Hardy–Ramanujan law: since the typically have prime factors, it is natural in this case to choose each to have prime factors. As it turns out, if one does this (basically by requiring each prime to divide with an independent probability of about , for some small , and then also adding in one large prime to bring the magnitude of the to be comparable to ), the calculations all work out, and one obtains the claimed result.
Two announcements: AI for Math resources, and erdosproblems.com
19 April, 2024 in advertising, Mathematics, question | Tags: Artificial Intelligence, Paul Erdos | by Terence Tao | 16 comments
This post contains two unrelated announcements. Firstly, I would like to promote a useful list of resources for AI in Mathematics, that was initiated by Talia Ringer (with the crowdsourced assistance of many others) during the National Academies workshop on “AI in mathematical reasoning” last year. This list is now accepting new contributions, updates, or corrections; please feel free to submit them directly to the list (which I am helping Talia to edit). Incidentally, next week there will be a second followup webinar to the aforementioned workshop, building on the topics covered there. (The first webinar may be found here.)
Secondly, I would like to advertise the erdosproblems.com website, launched recently by Thomas Bloom. This is intended to be a living repository of the many mathematical problems proposed in various venues by Paul Erdős, who was particularly noted for his influential posing of such problems. For a tour of the site and an explanation of its purpose, I can recommend Thomas’s recent talk on this topic at a conference last week in honor of Timothy Gowers.
Thomas is currently issuing a call for help to develop the erdosproblems.com website in a number of ways (quoting directly from that page):
You know Github and could set a suitable project up to allow people to contribute new problems (and corrections to old ones) to the database, and could help me maintain the Github project;
You know things about web design and have suggestions for how this website could look or perform better;
You know things about Python/Flask/HTML/SQL/whatever and want to help me code cool new features on the website;
You know about accessibility and have an idea how I can make this website more accessible (to any group of people);
You are a mathematician who has thought about some of the problems here and wants to write an expanded commentary for one of them, with lots of references, comparisons to other problems, and other miscellaneous insights (mathematician here is interpreted broadly, in that if you have thought about the problems on this site and are willing to write such a commentary you qualify);
You knew Erdős and have any memories or personal correspondence concerning a particular problem;
You have solved an Erdős problem and I’ll update the website accordingly (and apologies if you solved this problem some time ago);
You have spotted a mistake, typo, or duplicate problem, or anything else that has confused you and I’ll correct things;
You are a human being with an internet connection and want to volunteer a particular Erdős paper or problem list to go through and add new problems from (please let me know before you start, to avoid duplicate efforts);
You have any other ideas or suggestions – there are probably lots of things I haven’t thought of, both in ways this site can be made better, and also what else could be done from this project. Please get in touch with any ideas!
I for instance contributed a problem to the site (#587) that Erdős himself gave to me personally (this was the topic of a somewhat well known photo of Paul and myself, and which he communicated again to me shortly afterwards on a postcard; links to both images can be found by following the above link). As it turns out, this particular problem was essentially solved in 2010 by Nguyen and Vu.
(Incidentally, I also spoke at the same conference that Thomas spoke at, on my recent work with Gowers, Green, and Manners; here is the video of my talk, and here are my slides.)
Monotone non-decreasing sequences of the Euler totient function
6 September, 2023 in math.NT, paper | Tags: Euler totient function, Paul Erdos | by Terence Tao | 23 comments
I have just uploaded to the arXiv my paper “Monotone non-decreasing sequences of the Euler totient function“. This paper concerns the quantity , defined as the length of the longest subsequence of the numbers from to for which the Euler totient function is non-decreasing. The first few values of are
(OEIS A365339). For instance, because the totient function is non-decreasing on the set or , but not on the set .
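As a quick way to reproduce the first few values of this quantity, here is a short script (the function name is mine) that computes the longest non-decreasing run of totient values among 1, …, n using the standard patience-sorting method:

```python
from bisect import bisect_right
from sympy import totient

def longest_nondecreasing_totient(n):
    """Length of the longest subsequence of 1..n along which Euler's totient
    is non-decreasing (the quantity discussed above; cf. OEIS A365339)."""
    tails = []  # tails[k] = smallest possible last totient value of such a
                # subsequence of length k + 1
    for m in range(1, n + 1):
        v = totient(m)
        i = bisect_right(tails, v)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(tails)

print([longest_nondecreasing_totient(n) for n in range(1, 13)])
```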
Since for any prime , we have , where is the prime counting function. Empirically, the primes come quite close to achieving the maximum length ; indeed it was conjectured by Pollack, Pomerance, and Treviño, based on numerical evidence, that one had
for all ; this conjecture is verified up to . The previous best known upper bound was basically of the form as for an explicit constant , from combining results from the above paper with that of Ford or of Maier-Pomerance. In this paper we obtain the asymptotic
so in particular . This answers a question of Erdős, as well as a closely related question of Pollack, Pomerance, and Treviño.
The methods of proof turn out to be mostly elementary (the most advanced result from analytic number theory we need is the prime number theorem with classical error term). The basic idea is to isolate one key prime factor of a given number which has a sizeable influence on the totient function . For instance, for “typical” numbers , one has a factorization
where is a medium sized prime, is a significantly larger prime, and is a number with all prime factors less than . This leads to an approximation
As a consequence, if we temporarily hold fixed, and also localize to a relatively short interval, then can only be non-decreasing in if is also non-decreasing at the same time. This turns out to significantly cut down on the possible length of a non-decreasing sequence in this regime, particularly if is large; this can be formalized by partitioning the range of into various subintervals and inspecting how this (and the monotonicity hypothesis on ) constrains the values of associated to each subinterval. When is small, we instead use a factorization where is very smooth (i.e., has no large prime factors), and is a large prime. Now we have the approximation and we can conclude that will have to basically be piecewise constant in order for to be non-decreasing. Pursuing this analysis more carefully (in particular controlling the size of various exceptional sets in which the above analysis breaks down), we end up achieving the main theorem so long as we can prove the preliminary inequality for all positive rational numbers . This is in fact also a necessary condition; any failure of this inequality can be easily converted to a counterexample to the bound (2), by considering numbers of the form (3) with equal to a fixed constant (and omitting a few rare values of where the approximation (4) is bad enough that is temporarily decreasing). Fortunately, there is a minor miracle, relating to the fact that the largest prime factor of denominator of in lowest terms necessarily equals the largest prime factor of , that allows one to evaluate the left-hand side of (5) almost exactly (this expression either vanishes, or is the product of for some primes ranging up to the largest prime factor of ) that allows one to easily establish (5). If one were to try to prove an analogue of our main result for the sum-of-divisors function, one would need the analogue of (5), which looks within reach of current methods (and was even claimed without proof by Erdos), but does not have a full proof in the literature at present.
In the final section of the paper we discuss some near counterexamples to the strong conjecture (1) that indicate that it is likely going to be difficult to get close to proving this conjecture without assuming some rather strong hypotheses. Firstly, we show that failure of Legendre’s conjecture on the existence of a prime between any two consecutive squares can lead to a counterexample to (1). Secondly, we show that failure of the Dickson-Hardy-Littlewood conjecture can lead to a separate (and more dramatic) failure of (1), in which the primes are no longer the dominant sequence on which the totient function is non-decreasing, but rather the numbers which are a power of two times a prime become the dominant sequence. This suggests that any significant improvement to (2) would require assuming something comparable to the prime tuples conjecture, and perhaps also some unproven hypotheses on prime gaps.
The convergence of an alternating series of Erdős, assuming the Hardy–Littlewood prime tuples conjecture
14 August, 2023 in math.NT, paper | Tags: Paul Erdos, prime numbers | by Terence Tao | 29 comments
I have just uploaded to the arXiv my paper “The convergence of an alternating series of Erdős, assuming the Hardy–Littlewood prime tuples conjecture“. This paper concerns an old problem of Erdős concerning whether the alternating series converges, where denotes the prime. The main result of this paper is that the answer to this question is affirmative assuming a sufficiently strong version of the Hardy–Littlewood prime tuples conjecture.
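Assuming the series takes the usual form sum_{n>=1} (-1)^n n / p_n with p_n the n-th prime (my reading of the question; treat this as an assumption), one can at least watch the partial sums numerically:

```python
from sympy import nextprime

def partial_sums(N):
    """Partial sums of the (assumed) series sum_{n>=1} (-1)^n * n / p_n."""
    s, p, out = 0.0, 1, []
    for n in range(1, N + 1):
        p = nextprime(p)          # p is now the n-th prime
        s += (-1) ** n * n / p
        out.append(s)
    return out

sums = partial_sums(10000)
for N in (10, 100, 1000, 10000):
    print(N, sums[N - 1])
```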
The alternating series test does not apply here because the ratios are not monotonically decreasing. The deviations of monotonicity arise from fluctuations in the prime gaps , so the enemy arises from biases in the prime gaps for odd and even . By changing variables from to (or more precisely, to integers in the range between and ), this is basically equivalent to biases in the parity of the prime counting function. Indeed, it is an unpublished observation of Said that the convergence of is equivalent to the convergence of . So this question is really about trying to get a sufficiently strong amount of equidistribution for the parity of .
The prime tuples conjecture does not directly say much about the value of ; however, it can be used to control differences for and not too large. Indeed, it is a famous calculation of Gallagher that for fixed , and chosen randomly from to , the quantity is distributed according to the Poisson distribution of mean asymptotically if the prime tuples conjecture holds. In particular, the parity of this quantity should have mean asymptotic to . An application of the van der Corput -process then gives some decay on the mean of as well. Unfortunately, this decay is a bit too weak for this problem; even if one uses the most quantitative version of Gallagher’s calculation, worked out in a recent paper of (Vivian) Kuperberg, the best bound on the mean is something like , which is not quite strong enough to overcome the doubly logarithmic divergence of .
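Gallagher's computation is pleasant to see numerically; the following sketch (a rough illustration only, not part of any proof) samples random short intervals and tallies how many primes each contains, which should look approximately Poisson with the chosen mean:

```python
import random
from collections import Counter
from math import log
from sympy import isprime

def prime_counts_in_random_intervals(x, lam, trials=2000, rng=random):
    """Empirical distribution of #{primes in (m, m + lam*log x]} for random
    m <= x; Gallagher's computation predicts an approximately Poisson(lam)
    shape (under the prime tuples conjecture)."""
    h = int(lam * log(x))
    counts = Counter()
    for _ in range(trials):
        m = rng.randrange(2, x)
        counts[sum(1 for k in range(m + 1, m + h + 1) if isprime(k))] += 1
    return counts

print(prime_counts_in_random_intervals(10**6, 2.0, trials=500))
```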
To get around this obstacle, we take advantage of the random sifted model of the primes that was introduced in a paper of Banks, Ford, and myself. To model the primes in an interval such as with drawn randomly from say , we remove one random residue class from this interval for all primes up to Pólya’s “magic cutoff”. The prime tuples conjecture can then be interpreted as the assertion that the random set produced by this sieving process is statistically a good model for the primes in . After some standard manipulations (using a version of the Bonferroni inequalities, as well as some upper bounds of Kuperberg), the problem then boils down to getting sufficiently strong estimates for the expected parity of the random sifted set .
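As a toy version of the random sifted model (with the interval and the sieving cutoff left as free parameters, since the precise "magic cutoff" is a detail of the paper), one can sieve a block of integers by one random residue class per prime:

```python
import random
from sympy import primerange

def random_sifted_set(start, length, z, rng=random):
    """Toy random sifted model: from the integers in [start, start+length),
    delete one uniformly random residue class mod p for each prime p <= z."""
    survivors = set(range(start, start + length))
    for p in primerange(2, z + 1):
        a = rng.randrange(p)
        survivors = {n for n in survivors if n % p != a}
    return survivors

print(sorted(random_sifted_set(10**6, 200, 50)))
```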
For this problem, the main advantage of working with the random sifted model, rather than with the primes or the singular series arising from the prime tuples conjecture, is that the sifted model can be studied iteratively from the partially sifted sets arising from sifting primes up to some intermediate threshold , and that the expected parity of the experiences some decay in . Indeed, once exceeds the length of the interval , sifting by an additional prime will cause to lose one element with probability , and remain unchanged with probability . If concentrates around some value , this suggests that the expected parity will decay by a factor of about as one increases to , and iterating this should give good bounds on the final expected parity . It turns out that existing second moment calculations of Montgomery and Soundararajan suffice to obtain enough concentration to make this strategy work.
What Resolution Should Your Images Be?
The best way to determine the optimum resolution is to think about the final use of your images.
For publication you’ll need the highest resolution, for desktop printing lower, and for web or classroom use, lower still. The following table is a general guide; detailed explanations follow.
Each use below lists the recommended pixel size, resolution, preferred file format, and approximate file size.

Projected in class: about 1024 pixels wide for a horizontal image, or 768 pixels high for a vertical one; 102 DPI; JPEG; roughly 300–600 K.
Web site: about 400–600 pixels wide for a large image, 100–200 for a thumbnail image; 72 DPI; JPEG; roughly 20–200 K.
Printed in a book or art magazine: multiply intended print size by resolution (e.g. an image to be printed as 6” W x 4” H would be 1800 x 1200 pixels); 300 DPI; EPS or TIFF; roughly 6–10 MB.
Printed on a laserwriter: multiply intended print size by resolution (e.g. an image to be printed as 6” W x 4” H would be 1200 x 800 pixels); 200 DPI; EPS or TIFF; roughly 2–3 MB.

Digital Camera Photos

Digital cameras have a range of preset resolutions which vary from camera to camera. Each designation below lists the pixel resolution, the maximum image size at 300 DPI, and the printable size on a color printer.

4 Megapixels: 2272 x 1704 pixels; 7.5” x 5.7” at 300 DPI; 12” x 9” on a color printer.
3 Megapixels: 2048 x 1536 pixels; 6.8” x 5” at 300 DPI; 11” x 8.5” on a color printer.
2 Megapixels: 1600 x 1200 pixels; 5.3” x 4” at 300 DPI; 6” x 4” on a color printer.
1 Megapixel: 1024 x 768 pixels; 3.5” x 2.5” at 300 DPI; 5” x 3” on a color printer.

If you can, you generally want to shoot larger than you need, then sharpen the image and reduce its size in Photoshop.
For Screen: Classroom Use and Web sites.
For images that will exist only on screens, it’s better to think in terms of pixel dimensions only.
For classroom use, the guiding factor is the presentation equipment. Your monitor might be able to show 1800 x 1440 pixels, but you won’t be able to project that. The Hitachi CP-X430W projectors we have installed in the Schermerhorn classrooms project an image of 1024 x 768 pixels (what’s known as XGA resolution). This is pretty standard for high-end digital projectors these days. Any image you’re showing that’s larger in pixel dimension will be resampled down by the projector. So if you’re saving an image for use in the classroom, there’s no need to make it much larger than 1024 pixels wide. (Of course if you’re going to zoom in on a detail of the image, you’d need it that much larger.)

If you use PowerPoint to project your images, you might notice that a 1000 pixel wide image looks tiny on the PowerPoint workspace (or perhaps unexpectedly large). That’s because PowerPoint measures its images according to the Document Size, not the Pixel Dimension, as PowerPoint is made to work at the highest resolution possible for whatever device will ultimately display the slide show. (See below for more about Document Size and Pixel Dimensions.) The PowerPoint slide is 10” wide and 7.5” high. So an image with a Document Size of 10” x 7.5” and a resolution of 50 PPI will fill your PowerPoint screen, but when it’s projected it’ll look fuzzy (in pixel terms, that image is only 500 x 375 pixels). Conversely, if you have an image that’s 4” x 3” at 300 PPI it will import into PowerPoint as rather a small image on the 10” x 7.5” field, but since the image is actually 1200 x 900 pixels you could scale it up to the full width of the PowerPoint slide without any loss in image quality when it’s projected. (You could think of it as the projector having an effective resolution of about 102.5 PPI.) It’s best to look at the pixel dimension of your images as you’re making them. As long as they’re at least about 1024 pixels wide (for a horizontal image) they should be fine for teaching.
The standard resolution for web images is 72 PPI (often called “screen resolution”). At that size, the pixels you see on the screen are all the pixels there are; an image that’s 4” long at 72 PPI will take up about 4” of your monitor. (Obviously there’ll be a lot of variation here, as most monitors have a range of resolutions they can be set at.) Most web sites are built to be visible on many different kinds of monitors. Usually a web site would be about 700-800 pixels wide. That means an image that’s about 400 or 500 pixels wide will take up a good chunk of the web page, and look pretty big on a monitor. You might want a bigger image on your site, but remember, some users might only have screens that show 800 x 600 pixels.
For Print: The dot and the line.
A bit about printing: images are printed using a halftone screen, made up of a mesh of tiny spots of varying sizes. In the old days, these patterns were formed by exposing a photograph through screens etched on glass, which were measured by counting the number of parallel lines to the inch. Thus the traditional measurement for the resolution of a printed image is still “lines per inch” or LPI.
A halftone image screen.
Newspaper images are generally printed with a very coarse screen, about 90 LPI. Magazines are usually printed with a 133-150 LPI screen, and book illustrations at least 150 LPI. Photo quality ink-jet printers print at the equivalent of about 133-150 LPI, and most laserwriters can handle about a 100 LPI screen.
Digital images are usually measured by counting the number of individual pixels (dots of image data) in an inch. Thus the resolution of digital images is often given in “Dots per Inch” (DPI) or, more precisely, “Pixels per Inch” (PPI).
[The terminology gets confusing as laserwriters are also measured in terms of Dots per Inch, referring to the spacing of individual dots of toner in making up solid forms, such as letters, or halftone spots. Since it takes a certain amount of laserwriter dots of toner to make up a halftone spot, and since the halftone spots vary in size while laserwriter dots are all the same size, a 600 DPI laserwriter can print a halftone screen of about 100 LPI.] The higher the LPI resolution of the final image, the more image data a digital image requires.
But the computer needs to create the halftone screen from the image before printing it, and it takes more than one pixel to make a halftone spot. The usual rule of thumb is: 2 pixels for every final halftone spot. That is, to print something at 150 LPI halftone resolution, you need an image of 300 PPI. However, most image processing software can get away with less. Anything within the range of 1.5 to 2 times the final LPI resolution should be OK. So, realistically, to print an image at 150 LPI, you can use a digital image anywhere from 225 PPI to 300 PPI. (You can, of course, have more image data, but it doesn’t give you any better a final result, and just takes up extra disk space and clogs your image processing software upon printing.)

Document Size and Pixel Dimensions
Image editing software, such as Photoshop, can adjust many variables in your image. Some are relative variables, and some absolute. The absolute size of the image is the “Pixel Dimension.” This is the number of individual little dots of color in the image. The Document Size (in inches or cm) and the resolution (in PPI or pixels per cm) are relative to the Pixel Dimension. The Document Size tells you how big your image can print at the given resolution.
For example, if you have an image with a 6” x 4” document size at 300 PPI resolution, you can print that image comfortably up to 6” x 4” at 150 LPI. The absolute size of the image would be 1800 x 1200 pixels, that is, the document size multiplied by the resolution (6 x 300 = 1800; 4 x 300 = 1200). You can also do the calculations in reverse. If you have an image of 1800 x 1200 pixels, and you know the magazine it’ll be published in prints at 133 LPI, then you know you’ll need a resolution of 2 x 133 or 266 PPI, then divide: 1800 / 266 = 6.77; 1200 / 266 = 4.51. So that same 1800 x 1200 pixel image could also be printed as a 133 LPI image at about 6.75” x 4.5”.
That also means that if you have a scan at 72 DPI which is, say, 900 x 600 pixels, you could send that file to the printer as long as it was going to be reproduced as a 3” x 2” image or smaller (900 pixels / 300 PPI = 3”; 600/300 = 2).
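If you find yourself doing these conversions often, they are easy to script. The short Python sketch below is only an illustration of the arithmetic described above (the helper names are invented for this example, not part of the guide): it converts between pixel dimensions, print size and resolution, and applies the 1.5 to 2 pixels-per-halftone-spot rule of thumb.

```python
# Illustrative sketch of the pixel / print-size arithmetic described above.
# Helper names are invented for this example; they are not from the guide.

def pixels_for_print(width_in, height_in, ppi):
    """Pixel dimensions needed to print at a given size (inches) and resolution (PPI)."""
    return round(width_in * ppi), round(height_in * ppi)

def max_print_size(width_px, height_px, ppi):
    """Largest print size (inches) a given pixel dimension supports at a given PPI."""
    return width_px / ppi, height_px / ppi

def ppi_for_lpi(lpi, factor=2.0):
    """Rule of thumb: 1.5 to 2 pixels per final halftone spot."""
    return lpi * factor

# A 6" x 4" image reproduced in a book at 300 PPI needs 1800 x 1200 pixels.
print(pixels_for_print(6, 4, 300))        # (1800, 1200)

# An 1800 x 1200 pixel image destined for a 133 LPI magazine needs about
# 2 x 133 = 266 PPI, so it can run at roughly 6.75" x 4.5".
ppi = ppi_for_lpi(133)                    # 266.0
print(max_print_size(1800, 1200, ppi))    # (~6.77, ~4.51)
```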
If you scale the image down, Photoshop will decrease the number of pixels in the image by resampling them (averaging the values of neighboring pixels to make new pixels); if you try to increase the number of pixels in your image, Photoshop will interpolate new pixels (inventing new pixels based on surrounding ones), giving you a bigger, but fuzzier, image.
The way to control this is with the “Resample Image” option in Photoshop’s Image Size dialogue box. If “Resample Image” is on, and you change a document Size measurement (Width, Height or Resolution), it will adjust the number of pixels accordingly, either scaling down the image, or resampling it up (decreasing image quality). If “Resample Image” is turned off, you cannot change the Pixel dimension, and changing the Resolution will affect only the Document Size and vice-versa.
Image Size dialogue box in Photoshop

Saving Images
There is a multitude of image file formats, but the most common and most cross-platform are JPEG, TIFF, EPS and GIF.
JPEG (or .jpg) is named after the Joint Photographic Experts Group, which established the file format. It’s one of the most portable formats, which means that both Macs and PCs read JPEGs.
Most image processing applications can handle them, and all web browsers can display them.
JPEG, however, is a compression scheme, which means that saving images as JPEGs will result in some loss of image quality. You can often compress an image to about 1/10 of its original size by saving it as JPEG.
Save As JPEG dialogue box in Photoshop

JPEG is the best format for images you want to use in a PowerPoint presentation, and for most web site images.
When you save an image as a JPEG you’ll usually see a dialogue box asking what quality image you want. More compressed images are smaller, less compressed images are better quality.
Usually an image compressed at high quality looks perfectly fine. On extremely compressed JPEG images you can sometimes see boxy shapes known as “artifacts.”

JPEG artifacts in a very highly compressed image, magnified 200%

A digital camera will usually record its pictures in JPEG format. You will have the option of how compressed you want your images (“Normal,” “Fine,” or “Superfine”). The trick is to find the amount of compression that records enough data, but doesn’t take up too much room on your CF card.
When editing a JPEG image in Photoshop, the program has to decompress the image and recompress it again upon saving. Thus every time you open, edit and close a JPEG image in Photoshop, the image quality degrades a bit more. Therefore it’s best to work with your images in a lossless format while editing, then convert to JPEG only once you’re sure you don’t need to edit anymore.
TIFF (Tagged Image File Format) (or .tif) is a lossless image description standard, hence TIFF files tend to be much larger than JPEGs. Most scanners will automatically produce TIFF images.
You can save TIFF images with LZW compression in Photoshop, which is a lossless compression scheme. TIFF images are written slightly differently for Macs and Windows, but in most cases either system can recognize TIFFs written on the other.
EPS (Encapsulated Postscript Format) (or .eps) is generally considered the standard for printing.
It’s a lossless image format, written in the language laserwriters speak (PostScript). EPS images are slightly larger than TIFF images, and each application that can create EPS images writes the code in a slightly different way (which can sometimes lead to problems). Still, it gives the output device the most control over the image. If you have to send a file to a printer, EPS is usually the best format to use.
Photoshop gives you several options when saving an EPS image. One is the encoding (which can be ASCII or Binary). Binary gives you a smaller file size. Another is the preview. Since EPS images are literally a set of text commands to the printer, the file also needs a smaller preview image embedded in it that a text layout program like Quark XPress can see (hence the larger file size). You can usually save the preview image as a JPEG.
GIF (Graphics Interchange Format) (or .gif) images are used exclusively on web sites. A GIF image can only have up to 256 colors, though you can specify a unique color table for each image as you save it, based on the colors used in that image. As a result, it’s not very useful for images with lots of colors and smooth transitions from color to color (like photographs and artwork).
GIF images are useful for images with flat areas of color, like a logo on a website. GIF images can have transparent areas, and can also contain multiple images so that on a web site they become an animation.
Source: Columbia University Visual Media Center

PNG files (indicated by a .png extension on a file name) are becoming increasingly common on the web. The format was created to update and replace GIF, since it can retain more colors than GIF images and can also be transparent, while being slightly smaller in file size. Good for: web graphics.
|
71
|
Published Time: Tue, 17 Mar 2015 14:27:44 GMT
Doob
===============
Doob’s h-transform: theory and examples
Alex Bloemendal, DRAFT: May 18, 2010

Abstract
We discuss what it means to condition a Markov process on its exit state, using the resulting theory to consider various examples of conditioned random walk and Brownian motion. The treatment is informal with an emphasis on computations rather than proofs.
Consider a time-homogeneous Markov process X_t on a state space S with transition kernel P_t(x, dy). Up to some measurability issues, this means the following: for each x ∈ S there is a probability measure P_x on the space Ω of paths {X_t} ⊆ S indexed over t ∈ Z_+ or t ∈ R_+, i.e. time can be discrete or continuous; for each x ∈ S and t ≥ 0, P_t(x, ·) is a probability measure on S; if P_t operates on bounded functions f : S → R by P_t f = ∫_S P_t(·, dy) f(y), then {P_t} forms a semigroup of operators, i.e. P_s P_t = P_{s+t} for s, t ≥ 0; and, of course, the Markov property holds:

E_x[f(X_{t+s}) | F_t] = P_s f(X_t),  P_x-a.s.,

where F_t is the natural filtration generated by {X_s}_{0 ≤ s ≤ t}. Recall that a more general Markov property holds in which f(X_{t+s}) is replaced with a bounded function of the future {X_s}_{s ≥ t}: if F : Ω → R is bounded and θ_t : Ω → Ω, {X_s}_{s ≥ 0} → {X_{t+s}}_{s ≥ 0} is the time shift, then

E_x[F ◦ θ_t | F_t] = E_{X_t} F,  P_x-a.s.  (1)

We want a general way of discussing the “exit state” of X_t, that is, “where it goes when t → ∞.” It is natural to consider shift-invariant functions H = H ◦ θ_t or events A = θ_t^{-1} A (all t ≥ 0), as these depend only on the infinite future. Let I denote the corresponding sigma field. As it turns out, I is intimately connected with the bounded harmonic functions on S; these are bounded h satisfying P_t h = h for all t, or equivalently, such that h(X_t) is a martingale under P_x for each x. On the one hand, a bounded H ∈ I gives rise to a bounded function h(x) = E_x H which is harmonic by (1): h(X_t) = E_x[H | F_t], manifestly a martingale. On the other hand, given a bounded harmonic function h, the limit H = lim_{t→∞} h(X_t) exists P_x-a.s. by the martingale convergence theorem, and clearly H ∈ I; we can then use bounded convergence to recover h(x) = E_x H. As long as all the states x communicate, the measures induced on I will be mutually absolutely continuous; in this case L^∞(I) is a well-defined object, known as the Poisson boundary.

It is elementary to condition our process pathwise on an event A ∈ I of positive probability. Let h(x) = P_x(A) be the corresponding harmonic function and Ŝ = {x ∈ S : h(x) > 0} the set of states from which A is accessible. For x ∈ Ŝ, the conditioned path measure P̂_x ≪ P_x and is given by

dP̂_x = (1_A / h(x)) dP_x.

Restricted to F_t, this becomes

dP̂_x |_{F_t} = E_x[1_A / h(x) | F_t] dP_x |_{F_t} = (h(X_t) / h(x)) dP_x |_{F_t}.

Our starting point is the observation that the conditioned process is also Markov.
Theorem 1. Under P̂_x with x ∈ Ŝ, X_t is a time-homogeneous Markov process on Ŝ with transition kernel

P̂_t(x, dy) = (h(y) / h(x)) P_t(x, dy).  (2)

This formula is known as Doob’s h-transform. In terms of measures, (2) expresses that the conditioned transition probability is absolutely continuous with respect to the unconditioned one and gives a formula for the Radon–Nikodym derivative. In terms of operators, (2) writes P̂_t = h^{-1} P_t h; here h is acting diagonally, i.e. by multiplication.

Proof. First, P̂_t(x, ·) is a probability measure on Ŝ because h is harmonic and zero off Ŝ; the semigroup property holds because P̂_t is just a conjugate of P_t. As for P̂_x, it is a probability measure on paths in Ŝ because P̂_x(h(X_t) > 0) = E_x[1_{h(X_t) > 0} h(X_t)] / h(x) = 1. Informally, the Markov property is inherited by the conditioned process and (2) is just Bayes’ rule:

P_x(X_{t+s} ∈ dy | A, F_t) = P_x(A | X_{t+s} = y, F_t) P_x(X_{t+s} ∈ dy | F_t) / P_x(A | F_t) = (h(y) / h(X_t)) P_s(X_t, dy).

Rigorously, use the Markov property (1) for the unconditioned process (and the absolute continuity) to write the desired Markov property for the conditioned process as

Ê_x[f(X_{t+s}) | F_t] = h^{-1}(X_t) E_x[h(X_{t+s}) f(X_{t+s}) | F_t],  P̂_x-a.s.

To establish it, note the right-hand side is an F_t-random variable, let B ∈ F_t, and compute

Ê_x[ h^{-1}(X_t) E_x[h(X_{t+s}) f(X_{t+s}) | F_t] 1_B ]
= E_x[ h^{-1}(X_t) E_x[f(X_{t+s}) h(X_{t+s}) 1_B | F_t] h(X_t) / h(x) ]
= E_x[ f(X_{t+s}) 1_B h(X_{t+s}) / h(x) ]
= Ê_x[ f(X_{t+s}) 1_B ].
Many of our examples will fit into the above framework by considering an absorbing boundary ∂S ⊆ S, meaning P_t(x, ·) is just δ_x for x ∈ ∂S. The process stops when encountering ∂S in the sense that X_t = X_{t∧T}, where T = inf{t : X_t ∈ ∂S} is the hitting time. Observe that information about where X_t lands in ∂S is contained in I: if Z ⊆ ∂S, then A = {X_T ∈ Z} ∈ I. (Sometimes, for example with a slit domain, one has to be more careful and redefine ∂S appropriately.)

Example 1. Simple random walk on Z ∩ [0, M] conditioned to hit M before 0. Here S = {0, ..., M}, ∂S = {0, M}, Z = {M}. Solving the discrete Dirichlet problem gives h(i) = i/M, 0 ≤ i ≤ M; this is “gambler’s ruin.” Hence for 0 < i < M we have P̂(i, j) = (j / 2i) 1_{|j − i| = 1}. Notice that M does not appear. For the asymmetric simple walk whose steps are positive with probability [...]

In the language of electrical networks, if Z is a conductor held at unit voltage with respect to a ground at infinity, then h is the induced potential, and its “discrete Laplacian” represents the source of the induced current flow through the network (which is the gradient of h); the total current flowing, which equals the total mass of the source, is called the “capacity” of Z; this quantity is maximal over all sources on Z whose potentials (normalized to be zero at infinity) nowhere exceed one (Dynkin and Yushkevich 1969).

Example 3. Random walk on the rooted d-regular tree, conditioned to end up among the descendants of a given child v of the root u. The symmetry makes it easy to compute h. For example, h(u) = 1/d and h(v) = (d − 1)/d; in general, h decreases by a factor of d − 1 as you step “up” toward u or away from v, and 1 − h decreases by a factor of d − 1 as you step “down” toward v or away from u. The words “up” and “down” can be pictured in terms of the flow induced by h, which incidentally has total current (d − 2)/d.

For a general reversible Markov chain on a countable state space, i.e. a random walk on a network, if P_t is induced by conductances c_xy, then P̂_t is induced by conductances ĉ_xy = h(x) h(y) c_xy. In a sense, the conditioned walk behaves as the unconditioned walk but is biased by h, “going with the flow.”
X
t
has infinitesimal generator
L
=
d dt
t
=0
P
t
, i.e.
Lf
(
x
) = lim
t
↓
0
E
x
f
(
X
t
)
−
f
(
x
)
/t
(or equivalently
P
t
=
e
tL
), the condi tione d process has gener ator
L
=
h
−
1
Lh.
(3)Here
h
is also
L
-harmonic
, meaning
Lh
= 0. These statements are obtained by differentiating the corresponding statements about
P
t
at
t
= 0.3
For a diffusion dX_t = σ(X_t) dB_t + b(X_t) dt in R^d, we have

L = (1/2) σσ† : ∇∇† + b · ∇ = (1/2) Σ_{i,j} a_{ij} ∂²/∂x_i ∂x_j + Σ_i b_i ∂/∂x_i,

where a = σσ†. We can use (3) and L-harmonicity to compute that

L̂ = L + a (∇h / h) · ∇.  (4)

We can also see this using stochastic calculus. On the one hand, the absolute continuity P̂_x |_{F_t} ≪ P_x |_{F_t} already implies the diffusion coefficients coincide, and the change of measure induced by an additional drift term b̂ − b is given by the Cameron–Martin–Girsanov formula:

dP̂_x / dP_x |_{F_t} = exp( ∫_0^t a^{-1}(b̂ − b)(X_s) · dX_s − (1/2) ∫_0^t (b̂ − b) · a^{-1}(b̂ − b)(X_s) ds ).

On the other hand, the Radon–Nikodym derivative is just E_x[1_A / h(x) | F_t] = h(X_t) / h(x). Applying Itô’s lemma and Lh = 0 to log h(X_t), we identify b̂ − b = a ∇ log h, recovering (4). To summarize, conditioning just adds a drift in the direction of increasing h, with magnitude given by its relative increase.

Example 4. Brownian motion on the interval [0, c] conditioned to hit c before 0. Here L = (1/2) d²/dx², so h(x) = x/c and L̂ = (1/2) d²/dx² + (1/x) d/dx; in other words, the conditioned process is the diffusion dX_t = dB_t + (1/X_t) dt on [0, c]. Notice that c does not appear in the SDE. For the generalization to a domain in R^d, one would solve the corresponding Dirichlet problem for the Laplacian.
5
.
Brownian motion on the interval [0
,π
] conditione d to remai n in (0
,π
) up to time
t
1
. This exam ple initia lly appears to fall outside of our frame wor k. The ke y is the “space-time trick”:
we can recover the time-homogeneous setting in a trivial way, by enlarging the state space to include the time variable.
Here,
S
= [0
,π
]
×
[0
,t
1
],
∂S
=
{
0
,π
}×
(0
,t
1
]
∪
(0
,π
)
×{
t
1
}
, and
Z
= (0
,π
)
×{
t
1
}
. (Th e boun dar y is abs orb ing as abo ve, so the state actually includes
t
∧
T
, i.e. the process remembers both where and when it stopped.)The 2-dimensional generator becomes
L
=
1 2
d
2
/dx
2
+
d/dt
, i.e.
a
= diag
1
,
0
and
b
=
0
,
1
†
. F ro m (4), we already see that the drift term
a
(
∇
h/h
)
·∇
= (
dh/dx
)(1
/h
)
d/dx
has only a spatial component, but that it is time-dependent.The equation
Lh
(
x,t
) = 0 is the heat equation with space variable
x
and time
−
t
. We are to solve it with initial data
h
(
x,t
1
) = 1 and Dirichlet boundary conditions
h
(0
,t
) =
h
(
π,t
) =0 for
t _
|
72
|
Irwin-Hall Distribution
===============
Irwin-Hall Distribution
The Irwin-Hall distribution is the continuous probability distribution of the sum of n independent random variables, each uniformly distributed on [0, 1]. To use it, you supply the value x at which the sum is evaluated and the number n of variables.
Example
We have uniformly distributed random variables whose values are between 0 and 1. If we draw 20 random variables from this distribution, what is the probability that the sum of these 20 variables is greater than 12?
The following will be the input:
Sum of variables x: 12
Number of random variables n: 20
The Statistics Study will show the following result:
IRWIN-HALL DISTRIBUTION
Your Input
Random variable x = 12
Parameter n = 20
Probability Density Function (PDF)
f(x; n) = (1/(n-1)!) · Σ_{k=0}^{i} (-1)^k · nCk · (x-k)^(n-1)
where
  i = greatest integer ≤ x
  nCk = binomial coefficient
x f(x)
----------------+-------------------+
0.00 0.0000 |
1.00 0.0000 |
2.00 0.0000 |
3.00 0.0000 |
4.00 0.0000 |
5.00 0.0023 |
6.00 0.0458 |
7.00 0.4156 |
8.00 1.8884 |
9.00 4.5809 |
10.00 6.1339 |
11.00 4.5809 |
12.00 1.8884 |
13.00 0.4156 |
14.00 0.0458 |
15.00 0.0023 |
16.00 0.0000 |
17.00 0.0000 |
18.00 0.0000 |
19.00 0.0000 |
20.00 0.0000 |
----------------+-------------------+
f(12) = 1.888386
Cumulative Distribution Function (CDF)
F(x; n) = (1/n!) · Σ_{k=0}^{i} (-1)^k · nCk · (x-k)^n
where
  i = greatest integer ≤ x
  nCk = binomial coefficient
x F(x) = P(< x)
----------------+-------------------+
0.00 0.0000 |
1.00 0.0000 |
2.00 0.0000 |
3.00 0.0000 |
4.00 0.0000 |
5.00 0.0000 |
6.00 0.0008 |
7.00 0.0097 |
8.00 0.0610 |
9.00 0.2207 |
10.00 0.5000 |
11.00 0.7793 |
12.00 0.9390 |
13.00 0.9903 |
14.00 0.9992 |
15.00 1.0000 |
16.00 1.0000 |
17.00 1.0000 |
18.00 1.0000 |
19.00 1.0000 |
20.00 1.0000 |
----------------+-------------------+
P(x < 12) = 0.939044
P(x > 12) = 0.060956
Properties of the PDF
Mean = n/2 = 10
Median = n/2 = 10
Mode = n/2 = 10
Variance = n/12 = 1.6667
Skewness = 0
Ex. kurtosis = -6/(5n) = -0.06
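The cumulative probabilities in the example are easy to reproduce directly from the CDF formula. The Python sketch below is not part of the calculator output; it just evaluates F(x; n) and should match the table's P(x < 12) and P(x > 12).

```python
# Sketch: exact Irwin-Hall CDF, F(x; n) = (1/n!) * sum_{k=0..floor(x)} (-1)^k C(n,k) (x-k)^n.
from math import comb, factorial, floor

def irwin_hall_cdf(x, n):
    if x <= 0:
        return 0.0
    if x >= n:
        return 1.0
    return sum((-1) ** k * comb(n, k) * (x - k) ** n
               for k in range(floor(x) + 1)) / factorial(n)

p_less = irwin_hall_cdf(12, 20)
print(f"P(x < 12) = {p_less:.6f}")      # should be about 0.939044, as in the table
print(f"P(x > 12) = {1 - p_less:.6f}")  # should be about 0.060956
```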
|
73
|
Published Time: Sat, 30 Nov 2024 01:26:59 GMT
arXiv:hep-th/0608066v3 10 Jan 2007
Superfield formulation of N=4 supersymmetric Yang–Mills theory in extended superspace
Ömer F. Dayi a,b,1 and Kayhan Ülker b,2
a Physics Department, Faculty of Science and Letters, Istanbul Technical University, TR-34469 Maslak–Istanbul, Turkey.
b Feza Gürsey Institute, P.O. Box 6, TR–34684, Çengelköy–Istanbul, Turkey.
Action of 4 dimensional N=4 supersymmetric Yang–Mills theory is written by employing the superfields in N=4 superspace which were used to prove the equivalence of its constraint equations and equations of motion. Integral forms of the extended superspace are engaged to collect all of the superfields in one “master” superfield. The proposed N=4 supersymmetric Yang–Mills action in extended superspace is shown to acquire a simple form in terms of the master superfield.
1
E-mail addresses: [email protected] and [email protected].
2
E-mail address: [email protected].
1 Introduction
Maximally supersymmetric gauge model in four dimensions that contains fields with spins at most one is N=4 supersymmetric Yang–Mills (SYM) theory. This theory is distinguished for its finiteness and duality properties and has been studied extensively for the last three decades (for some reviews see [2, 3]). In spite of these facts, a superfield formulation of N=4 SYM in extended superspace is still lacking. An off-shell formulation in terms of auxiliary fields of the N=4 SYM multiplet is still unknown (for a formulation with an infinite number of fields see ). Progress made in this direction was to establish the equivalence of the superfield constraint equations and the equations of motion for N=3 and N=4 SYM theories [5, 6]. In , a complete proof of this equivalence relation for N=3 SYM was given by introducing a suitable gauge choice which eliminates gauge freedom depending on Grassmann coordinates. Obviously, by superfields we mean fields written as functions of superspace variables. Indeed, this gauge choice permits one to find some recursion relations from the constraint equations to construct superfields order by order in Grassmann variables. This method was also applied to ten dimensional SYM equations. Unlike the accustomed superfields, components of the superfields constructed in [7, 8] do not encompass any auxiliary field. Hence, they do not demand an off-shell supersymmetric formulation, but at the cost of considering superfields which do not possess the usual supersymmetry properties. Superfields constructed in were employed to construct physical states in Berkovits quantization of superparticles and superstrings in ten dimensions. Also, the approach of was applied to define deformed N=4 SYM equations. Recently, in terms of these superfields an alternative superfield formulation of N=1 SYM without auxiliary fields in 4 dimensions was given.

We present a formulation of the 4 dimensional N=4 SYM action in terms of the superfields of . Moreover, we show that these fields can be written as integral forms and collected in a “master” superfield such that the N=4 action can be expressed in a simple, compact form. Though how to determine components of the superfields by recursion relations is known, the actual calculation of components which are third or fourth order in Grassmann variables is a hard task. One of the other important issues in writing an action in extended superspace is to define a measure which is invariant under the global SU(4) group. We propose a measure which is suitable to achieve our goal. Acquainted with these, we write an action in terms of superfields and prove that the action of N=4 SYM theory in terms of component fields results, after a lengthy calculation. For the coefficients appearing in the action there is more than one solution. Engaging differentials of Grassmann variables, the N=4 SYM superfields can be written as integral forms (see also ) and collected into a “master” superfield. Then the N=4 SYM action can be written in a compact way. This action is apparently first order in space–time derivatives and there are two terms which are quadratic and cubic in the master superfield. Indeed, all other powers of the master superfield give vanishing contributions to the action. Also in this case, there are some different solutions for the coefficients involved.

In the next section we recall the formulation of 4 dimensional N=4 SYM theory in terms of component fields.
In Section 3, after giving the definitions of superfields, we present their first two components in Grassmann variables of the extended superspace. The higher components are listed in the Appendix. In Section 4 we present the general formulation in terms of superfields after a choice of measure. In Section 5 we give the definition of the master superfield as a collection of integral forms. We demonstrate that the N=4 SYM theory action in extended superspace acquires a simple form. In the last section we discuss the results obtained and some open questions.
2 Component fields formulation
The N=4 Yang–Mills supermultiplet consists of one gauge field 3 aα ˙α = σμα ˙αAμ,eight Weyl fermions λiα , ¯λi˙α and six scalars φij = −φji = 1
2
ǫijkl φkl . Spinor indices are 4 α , ˙α = 1 , 2. i, j = 1 , · · · , 4, denote indices of the global symmetry group SU (4) . In fact, aα ˙α is a singlet, λαi and ¯λi˙α are in the 4 and ¯4 represen-tation and φij are in the second rank, self dual 6 representation of SU (4) . All of the fields are in the adjoint representation of a non–abelian gauge group. Hermitian conjugation which we attribute to the fields is (aα ˙β )† = −aβ ˙α , (λiα )† = ¯λi˙α , (φij )† = φij .
N=4 extended SYM action in the component fields aα ˙α, λ iα , φ ij can be writ-ten as
I =
∫
d4xTr ( 1
8 f ˙α ˙β f ˙α ˙β + 1
8 fαβ f αβ + 1
16 Dα ˙αφij D ˙αα φij − i
4 λαi Dα ˙α ¯λi ˙α
− i
8 φij {λαi , λ jα } − i
8 φij {¯λi˙α, ¯λj ˙α} + 1
64 [φij , φ kl ][ φij , φ kl ]), (1) where Dα ˙α = ∂α ˙α + [ aα ˙α, ·]. f αβ and f ˙α ˙β are self-dual and anti-self-dual field strengths defined as 5
fαβ = − 1
2 ǫ ˙α ˙β (
∂α ˙αaβ ˙β − ∂β ˙β aα ˙α + [ aα ˙α, a β ˙β ]
)
,f ˙α ˙β = − 1
2 ǫαβ (
∂α ˙αaβ ˙β − ∂β ˙β aα ˙α + [ aα ˙α, a β ˙β ]
)
.
The action (1) is invariant under the on-shell N=4 supersymmetry transfor-mations:
δa α ˙α = −ξiα ¯λi˙α − ¯ξi
˙α
λiα , (2)
3We always make the identification xα˙α=σμα˙αxμ, x ˙αα = ¯ σ˙αα μxμ, ∂ α˙α=σμα˙α∂μ.
4We use Wess-Bagger conventions to raise and lower the spinor indices: θα=
ǫαβ θβ, θ α=ǫαβ θβ, ǫ αβ ǫβγ =δγα.
5Note that, fαβ = ( σμν )γαǫγβ fμν and f˙α˙β=ǫ˙α˙γ(¯ σμν )˙γ
˙βfμν where fμν =∂μaν−∂νaμ+[aμ, a ν] as usual.
3δλ iα = 2iξ βi fαβ − iξ jα [φik , φ kj ] − 2i ¯ξj ˙αDα ˙αφij , (3)
δφ ij = ξαi λjα − ξαj λiα + ǫijkl ¯ξk
˙α
¯λl ˙α, (4) where ξi, ¯ξi are constant Weyl spinors.
3 Superfields in N=4 superspace
N=4 extended superspace is parametrized by the coordinates (xμ, θ αi , ¯θi˙α). (5) Translations in this extended superspace
xα ˙α → xα ˙α + 2 i(¯ζi
˙α
θiα + ζiα ¯θi˙α) , θαi → θαi + ζαi , ¯θi˙α → ¯θi˙α + ¯ζi
˙α
are generated by T ≡ ζαi Qiα + ¯ζi
˙α
¯Q ˙αi , where ζαi , ¯ζi ˙α are Grassmann constants. The supercharges
Qiα = ∂
∂θ αi
− i¯θi ˙α∂α ˙α , ¯Qi ˙α = − ∂
∂ ¯θi ˙α + iθ αi ∂α ˙α, (6) satisfy the graded algebra
{Qiα, ¯Q ˙αj } = 2iδ ij ∂α ˙α,
{Qiα, Q jβ } = { ¯Q ˙αi , ¯Q ˙βj } = [∂α ˙α, Q iβ ] = [ ∂α ˙α, ¯Q ˙βi ] = 0 .
To construct supersymmetric actions in superspace it is convenient to be ac-quainted with the differential operators Diα = ∂
∂θ αi
i¯θi ˙α∂α ˙α , ¯Di ˙α = − ∂
∂ ¯θi ˙α − iθ αi ∂α ˙α, (7) that anticommute with the supercharges (6):
{Qiα, Djβ } = { ¯Q ˙αi , Djα} = {Qiα, ¯Dj ˙α} = { ¯Q ˙αi , ¯D ˙βj } = 0 .
An off–shell N=4 SYM formulation is not available which would lead to con-struction of N=4 superfields living in the N=4 superspace (5) making use of accustomed methods. However, there exists another approach of introducing superfields whose components are constituted by the fields which are not auxil-iary, in terms of the constraint equations for the superconnections Aα ˙α, ω iα and ¯ωi ˙α . The supercovariant derivatives in N=4 superspace 6,
∇iα = Diα + [ ωiα, ·], (8) ¯∇i ˙α = ¯Di ˙α − [¯ ωi ˙α, ·], (9)
∇α ˙α = ∂α ˙α + [ Aα ˙α, ·]. (10)
6Note that, ( Aα˙β)†=−Aβ˙αbut ( ωiα)†= ¯ ωi˙α
4should satisfy the constraint equations
{∇ iα, ¯∇j ˙α} = −2iδ ij ∇α ˙α, (11)
{∇ iα, ∇jβ } = −2iǫ αβ Φij , { ¯∇ ˙αi , ¯∇ ˙βj } = 2 iǫ ˙α ˙β Φij , (12) [∇iα, ∇β ˙β ] = ǫαβ ¯Λi˙β , [ ¯∇i ˙α, ∇β ˙β ] = −ǫ ˙α ˙β Λiβ . (13) Here the upper–case letters indicate superfields whose first components are pro-portional to the component fields given by the lower–case letters. Let us also define the operator
D = θαi ∇iα − ¯θi ˙α ¯∇i ˙α. (14) One can show that Bianchi identities resulting from (11)–(13) lead to the recur-sion relations
DAα ˙α = −θiα ¯Λi˙α − ¯θi˙αΛiα , (15)
DΛiα = 2iθ βi Fαβ − i[Φ ik , Φkj ]θjα − 2i¯θj ˙α∇α ˙αΦij , (16)
DΦij = θαi Λjα − θαj Λiα + ǫijkl ¯θk
˙α
¯Λl ˙α. (17) Being superconnections there are some redundant parts in ω, ¯ω which should be eliminated, obviously leaving the usual Yang–Mills gauge transformations intact. Adopting the gauge fixing condition
θαi ωiα + ¯θi ˙α ¯ωi ˙α = 0 , (18) that eliminates all the gauge transformations depending on the Grassmann co-ordinates θαi , ¯θi ˙α, which is similar to the Wess–Zumino condition, the recursion relations for the spinor superconnections can be derived from (15) and (17) as (1 + D)ωiα = 2i¯θi ˙αAα ˙α − 2iΦij θαj , (19) (1 + D)¯ ωi ˙α = 2iθ αi Aα ˙α + 2 iΦij ¯θj
˙α
. (20) Note that after the gauge choice (18) the operator (14) turned to be the counting operator of the anticommuting coordinates θαi and ¯θi˙α :
D = θαi
∂
∂θ αi
¯θi ˙α ∂
∂ ¯θi ˙α .
Therefore, the superfields ω, A, Φ, Λ can be found from the recursion relations (15)–(20) order by order in θ, ¯θ.
When one replaces the upper–case letters with the lower–case ones in (15)– (16), D can be replaced with δ which is the supersymmetry transformation (2)–(4) with the replacements ξ → θ, ¯ξ → ¯θ. If the above superfields are written order by order in θ, ¯θ, as
Aα ˙α = s0A(0)
α˙α
s1A(1)
α˙α
· · · + snA(n)
α˙α
, (21) Φij = e0Φ(0)
ij
e1Φ(1)
ij
· · · + enΦ(n)
ij
, (22) Λiα = z0Λ(0)
iα
z1Λ(1)
iα
· · · + znΛ(n)
iα
, (23) 5where sm, e m, z m ; m = 0 , 1 · · · 16 , are real constants, the unique solution to any desired order can also be found as,
A(m)
α˙α
= δA (m−1)
α˙α
, Φ(m)
ij
= δΦ(m−1)
ij
, Λ(m)
iα
= δΛ(m−1)
iα
(24) by setting s0 = e0 = z0 = 1 and
sm = em , mz m = sm−1 , ms m = zm−1; m = 1 , · · · , 16 . (25) Hence, to obtain the superfields A, Λ, Φ one can proceed in two equivalent ways: Make use of the recursion relations (15)–(16) directly or perform the transformations (24). In terms of the arbitrary scale factors l and b which are real constants, let us define the zeroth order components as
A(0)
α˙α
= aα ˙α, Λ(0)
iα
= lλ iα , Φ(0)
ij
= bφ ij .
The first order components of the superfields A, Φ, Λ can be derived from these as
A(1)
α˙α
= −lθ iα ¯λi˙α − l¯θi
˙α
λiα , (26) Λ(1)
iα
= 2iθ βi fαβ − ib 2[φik , φ kj ]θjα − 2ib ¯θj ˙αDα ˙αφij , (27) Φ(1)
ij
= lθ αi λjα − lθ αj λiα + lǫ ijkl ¯θk
˙α
¯λl ˙α. (28) On the other hand, the spinor superconnection ω can be separated into two parts:
ωiα = viα + uiα,
such that the gauge condition (18) takes the form
θαi viα + ¯θi ˙α ¯vi ˙α = 0 , θαi uiα = ¯θi ˙α ¯ui ˙α = 0 .
There are no zeroth order components, their first and the second order com-ponents can be calculated from the recursion relations (19)–(20) and (26)–(28) as
v(1) iα = i¯θi ˙αaα ˙α , v(2) iα = −2il
3 ¯θi ˙α(θkα ¯λk
˙α
¯θk
˙α
λkα ), (29)
u(1) iα = −ibθ jα φij , u(2) iα = 2il
3 θjα (¯θi ˙α ¯λj
˙α
− ¯θj ˙α ¯λi˙α). (30) Here we presented the first two components of the superfields. The higher order terms are listed in Appendix.
4 N=4 SYM action in terms of superfields
We wish to find an action in terms of the superfields ω, A, Λ, Φ and the deriva-tives ∂α ˙α, such that after performing integrals over the Grassmann variables θ, ¯θ
6it attains the action in terms of component fields (1). Inspecting components of the superfields ω, A, Λ, Φ one observes that if we do not restrict the integration over θ, ¯θ but integrate over the whole superspace, the desired result cannot be achieved. We propose the action, in terms of the constant parameters k1, · · · , k 6,S = ik 1 < ¯ωi ˙α∂ ˙αα ωiα > +ik 2 < ¯ωi ˙α[A ˙αα , ω iα] >
+ik 3 < ω iα Λiα − ¯ωi ˙α ¯Λ ˙αi > +k4 < A α ˙αA ˙αα >
+ik 5 < Φij {ωiα , ω jα} + Φ ij {¯ωi ˙α, ¯ω ˙αj } > +k6 < Φij Φij >, (31) where we defined, by the normalization constant N = 1 /3200 ,< O >≡ N
(∫
d4x dθ αi dθ jα d¯θi˙αd¯θj ˙α Tr O
)
θ= ¯θ=0
. (32) Thus, the only non–vanishing θ, ¯θ contribution to the integral is
< θ αi θβj ¯θk ˙α ¯θl ˙β K(x) >= N
8 ǫαβ ǫ ˙α ˙β (δki δlj + δkj δli)
∫
d4x Tr K(x), (33) for any function K(x). With this choice of measure (32), due to mass dimensions and R-charges of the superfields (Table 1), (31) is the most general action one can write up to total derivatives. Table 1: Dimensions d, and R-weights.
Aα ˙α λi Φij ωi θi
d 1 3/2 1 1/2 -1/2
R 0 -1 -2 1 -1
Because of the choice of measure (32), which is manifestly SU(4) invariant, components of the superfields at most up to the fourth order in θ, θ̄ are required. Carrying out the integrals over the variables θ, θ̄ in (31) is a very lengthy calculation, although it is straightforward. Nevertheless, using the identity

[φij, φjk][φkl, φli] = (1/2) [φij, φkl][φij, φkl],  (34)

and performing the integrals over θ, θ̄, we conclude that to get the action (1) from (31) the coefficients k1, ..., k6 should satisfy the equations

12 k2 − 3 k1 − 4 k3 + 2 k4 = 0,  (35)
−3 k1 + 10 k3 − 8 k4 = 3 / (20 N),  (36)
k2 − 2 k5 = 0,  (37)
k4 − 2 k6 = −1 / (10 N l²),  (38)
3 k1 − 10 k3 + 16 k6 = 3 / (10 N b²),  (39)
−k2 + k3 − k4 + 2 k5 + 2 k6 = −3 / (20 N b l²),  (40)
−3 k2 − 7 k3 − 3 k4 + 18 k5 + 16 k6 = 3 / (16 N b⁴).  (41)

Although these equations possess some different solutions, by fixing

k6 = −3 k4 / 2  (42)

one obtains the solution

k1 = −8 (104 + b(282 + b(16 + 63 b))),  (43)
k2 = (1/10) (−2815 − 4 b(1974 + b(115 + 441 b))),  (44)
k3 = −(12/5) (105 + b(282 + b(20 + 63 b))),  (45)
k4 = −3 (21 + 4 b²),  (46)
k5 = k2 / 2,  (47)

with the scale factors

b = √(−2 + √26) / 2,  l = 4 √((5 − 4 b²) / 39),  (48)

whose signs can be taken diversely, i.e. b → ±b, l → ±l are also solutions.
5 A formalism by integral forms
To acquire an understanding of underlying geometrical aspects of the formula-tion given in the previous section, we would like to write superfields as integral forms [12, 13]. Let us introduce the differentials dθ, d ¯θ whose (wedge) products are commutative 7:
dθ αi ∧ dθ βj = dθ βj ∧ dθ αi ,d¯θi ˙α ∧ d¯θj ˙β = d¯θj ˙β ∧ d¯θi ˙α,dθ αi ∧ d¯θj ˙α = d¯θj ˙α ∧ dθ αi .
Obviously, to each superfield one can associate an integral form and write the action (31) in terms of these forms. This would not give a new insight. How-ever, we can collect differential forms possessing different degrees in a “master” superfield as
7Here, we write the wedge product symbol ∧explicitly to avoid the notational confusion.
8Ω = c1(uiα + viα)dθ αi + ic 2(2 Aα ˙αdθ αi ∧ d¯θi ˙α + 3Φ ij ǫαβ dθ αi ∧ dθ βj )
−2ic 3(Λ iα ǫ ˙α ˙β dθ αj ∧ d¯θi ˙α ∧ d¯θj ˙β + 4 ¯Λi˙αǫαβ d¯θj ˙α ∧ dθ αi ∧ dθ βj )+2 c4(2 Fαβ ǫ ˙α ˙β dθ αi ∧ dθ βj ∧ d¯θi ˙α ∧ d¯θj ˙β + 8 F ˙α ˙β ǫαβ d¯θi ˙α ∧ d¯θj ˙β ∧ dθ αi ∧ dθ βj
+3[Φ ik , Φkj ]ǫαβ ǫ ˙α ˙β dθ αj ∧ dθ βn ∧ d¯θi ˙α ∧ d¯θn ˙β ). (49) Construction of this master superfield is twofold: Firstly, each component is chosen to possess mass dimension equal to its form degree, e.g. the first compo-nent is a one form and it has mass dimension one. Secondly, once the first order components are chosen as one form, the second order components are related to the first ones up to the constant c2, by the recursion relations (19) replacing in the right hand side explicit θ, ¯θ with the differentials dθ, d ¯θ. The third order ones are obtained from the second order components utilizing the recursion re-lations (15), (17), up to the constant c3. Similarly, the fourth order terms are derived from the recursion relation (16) of the third order components, up to the constant c4.
To write an action we also need the hermitian conjugate of Ω : Ω† = −c1(¯ ui ˙α + ¯ vi ˙α)d¯θi ˙α + ic 2(2 Aα ˙αdθ αi ∧ d¯θi ˙α − 3Φ ij ǫ ˙α ˙β d¯θi ˙α ∧ d¯θj ˙β ) (50)
−2ic 3(4Λ iα ǫ ˙α ˙β dθ αj ∧ d¯θi ˙α ∧ d¯θj ˙β + ¯Λi˙αǫαβ d¯θj ˙α ∧ dθ αi ∧ dθ βj )
−2c4(8 Fαβ ǫ ˙α ˙β dθ αi ∧ dθ βj ∧ d¯θi ˙α ∧ d¯θj ˙β + 2 F ˙α ˙β ǫαβ d¯θi ˙α ∧ d¯θj ˙β ∧ dθ αi ∧ dθ βj
+3[Φ ik , Φkj ]ǫαβ ǫ ˙α ˙β dθ αj ∧ dθ βn ∧ d¯θi ˙α ∧ d¯θn ˙β ). (51) Let us introduce the operator
d = i∂ α ˙αdθ αi ∧ d¯θi ˙α
which corresponds to derivatives ∂/∂x μ. In terms of the constants m1, m 2, m 3,
we propose the action, suppressing superspace integrals and trace over the gauge group,
I = m1Ω† ∧ d ∧ Ω + m2Ω† ∧ Ω + m3
(Ω ∧ Ω† ∧ Ω + Ω † ∧ Ω ∧ Ω†) (52) and the SU (4) invariant 4–form
dθ αi ∧ dθ βj ∧ d¯θk ˙α ∧ d¯θl ˙β = ǫαβ ǫ ˙α ˙β (δki δlj − δkj δli)dθ γmdθ nγ d¯θm
˙γ
d¯θn ˙γ . (53) All other powers of the superfields Ω , Ω† give vanishing contributions due to the choice of the measure (32)–(33). Comparing the coefficients of (52) with (31) one can show that they are related as
k1 = 3 m1c21, k2 = −12 m3c21c2, k3 = −48 m2c1c3,k4 = −48 m2c22, k5 = −6m3c21c2, k6 = 72 m2c22. (54) 9c4 does not play any role. Note that in this case the condition (42), namely
k6 = − 3k4
2is dictated spontaneously. Replacing k1, · · · , k 6 in (31) with the values given in (54), one obtains the equations which c1, · · · , m 3 coefficients should satisfy, so that (52) reproduces the action (1). There exist several solutions to these equations. By setting
c1 = c2 = 1 one can show that there exists a solution such that
c3 = 4 + 12
5 b (4 + b2) , (55)
m1 = − 8
3 (104 + b(282 + b(16 + 63 b))) , (56)
m2 = 1
16
(21 + 4 b2) , (57)
m3 = 1
120 (2815 + 4 b(1974 + b(115 + 441 b))) , (58) where b and l are given with (48) as before.
6 Discussions
We presented a superfield formulation of N=4 SYM theory in 4 dimensions. The superfields which we deal with do not possess auxiliary fields, in contrast to the standard superfields which one engages to formulate off-shell supersymmetric theories. Thus, techniques to carry out calculations like taking variations or performing path integrals of their functionals with respect to these superfields are obscure at the moment. Hence, we also do not know how to imply supersymmetry invariance of the action (31) at the level of superfields. In spite of all these facts, being able to introduce integral forms to write the action (31) in terms of the master superfield Ω (52) is very promising. Getting a better knowledge of the geometrical aspects of the master field (49) can shed some light on the use of our formalism. One of the tools to deepen the understanding of how to operate with these superfields is to study the analogous formulations of N=1 SYM in 4 and 10 dimensions. Although the former is available, the latter case is still missing. Possessing a superfield formulation of N=4 SYM, even though without auxiliary fields, should also be helpful to study deformations of it in terms of Moyal brackets: in spite of the fact that deformed equations of motion of N=4 SYM were worked out, an underlying action is still missing.

Acknowledgments
We thank M. Hinczewski for fruitful discussions. Preliminary results of this work were announced during the meeting in honor of İ.H. Duru on the occasion of his 60th birthday at İYTE, İzmir, Turkey.

A Higher components of superfields
Here we list components of the superfields which are not given in Section 3. Because of the choice of measure (32)–(33) some of the terms of components evidently give vanishing contribution to the action (31). Below, “ · · · ” indicates these terms which are not needed for our calculation. From (26) by making use of the recursion relations (15)–(16) or performing the supersymmetry transformations (24), the components of the superfield Aα ˙α
can be obtained as
A(2)
α˙α
= −i
(
θiα ¯θi ˙β f ˙α ˙β + ¯θi˙αθβi fαβ − b2θiα ¯θj
˙α
[φik , φ kj ]
)
· · · , (59)
A(3)
α˙α
= − il
6 ¯θi˙αθβi
(
θkβ Dα ˙β ¯λk ˙β + θkα Dβ ˙β ¯λk ˙β + ¯θk ˙β Dα ˙β λkβ + ¯θk ˙β Dβ ˙β λkα + bθ jα [λkβ , φ kj ]
)
− il
6 θiα ¯θi ˙β (¯θk
˙α
Dβ ˙β λβk + ¯θk
˙β
Dβ ˙αλβk + θβk Dβ ˙β λk
˙α
θβk Dβ ˙α ¯λk
˙β
− b¯θj
˙α
[¯λk
˙β
, φ kj ]
))
· · · , (60)
A(4)
α˙α
= − i
24 θiα ¯θi ˙β (
l2θkβ
(¯θj
˙α
{¯λk
˙β
, λ βj } + ¯θj
˙β
{¯λk
˙α
, λ βj }
)
− l2θβj
(¯θk
˙β
{λkβ , ¯λj
˙α
} + ¯θk
˙α
{λkβ , ¯λj
˙β
}
)
+2 iθ jγ
(¯θj
˙α
Dβ ˙β f βγ + ¯θj
˙β
Dβ ˙αf βγ )
2 iθ βj ¯θj ˙γ (
Dβ ˙β f ˙α ˙γ + D β ˙αf ˙β ˙γ
)
+¯θj
˙α
(
−4ib 2θβm[D β ˙β φkm , φ kj ] + 2 l2θβk {¯λk
˙β
, λ jβ } − 2l2θβj {¯λk
˙β
, λ kβ }
) )
− i
24 ¯θi˙αθβi
(
l2 ¯θk
˙β
(
θjβ {λkα , ¯λj ˙β } + θjα {¯λj
˙β
, λ kβ }
)
l2 ¯θj
˙β
(
θkα {λjβ , ¯λk ˙β } + θkβ {λjα , ¯λk
˙β
}
)
+2 i¯θj
˙γ
(
θjβ Dα ˙β f ˙β ˙γ + θjα Dβ ˙β f ˙β ˙γ )
2 i¯θj ˙β θγj
(
Dβ ˙β fαγ + D α ˙β fβγ
)
+θjα
(
−4ib 2 ¯θm ˙β [D β ˙β φkm , φ kj ] + 2 l2 ¯θk ˙β {λkβ , ¯λj
˙β
} − 2l2 ¯θj ˙β {¯λk
˙β
, λ kβ
) )
· · · .(61) To derive components of the superfield Λ iα one departs from (27) and uses the recursion relations (15)–(16) or performs the supersymmetry transforma-tions (24): Λ(2)
iα
= il
2 θβi
(
θkβ Dα ˙α ¯λk ˙α + θkα Dβ ˙α ¯λk ˙α + 3 ¯θk ˙αDα ˙αλkβ
+¯θk ˙αDβ ˙αλkα + bθ jα [λkβ , φ kj ]
)
ilb
2 θjα
(
ǫiklm ¯θl ˙α[¯λm
˙α
φkj ] − ¯θk ˙α[¯λj
˙α
, φ ik ] + ¯θj ˙α[¯λk
˙α
, φ ik ]
)
+il ¯θj ˙α (
bθ kα [¯λk
˙α
, φ ij ] + b¯θk
˙α
[λkα , φ ij ] + θβj Dα ˙αλiβ + · · · .
)
· · · , (62) Λ(3)
iα
= i
6
(
θβi ¯θm
˙α
(2 l2θkβ {λmα , ¯λk ˙α} + 8 l2θkα {λmβ , ¯λk ˙α} − 2l2θmα {λkβ , ¯λk ˙α}
+6 l2 ¯θk ˙α{λmα , ¯λkβ } + 2 iθ mβ Dα ˙β f ˙α ˙β + 2 iθ mα Dβ ˙β f ˙α ˙β )+2 iθ βi ¯θm ˙α(θγmDβ ˙αfαγ + 2 b2θkβ Dα ˙α[φkj , φ jm ] + b2θkα Dβ ˙α[φkj , φ jm ] + b2θjα [D β ˙αφkm , φ kj ])
−l2 ¯θj ˙α(5 θβj θkα + θjα θβk ){λiβ , ¯λk
˙α
}−2l2ǫiklm ¯θj ˙αθjα ¯θl ˙β {¯λm
˙β
, ¯λk
˙α
} − 4l2 ¯θj ˙αθβj ¯θk ˙α{λkα , λ iβ }
11 +2 ib 2θjα θγn(ǫiklm ¯θl ˙α[D γ ˙αφmn , φ kj ] + ¯θj ˙α[D γ ˙αφnk , φ ik ])
−2ib ¯θj ˙αθβj (bθ mβ Dα ˙α[φik , φ km ] + 2 ¯θk ˙β Dα ˙αDβ ˙β φik )+4 ib ¯θj ˙α ¯θk
˙α
θβk [fαβ , φ ij ]+ib 3θjα ¯θm
˙α
(ǫikln ¯θl ˙α[[ φnp , φ pm ], φ kj ] + ¯θk ˙α[[ φnj , φ nm ], φ ik ]) +ib 3 ¯θj ˙α ¯θm
˙α
(θjα [[ φnk , φ nm ], φ ik ] + 4 θkα [[ φkn , φ nm ], φ ij ])
)
· · · . (63) Similarly components of the superfield Φ ij are calculated from (28) in terms of the recursion relations (15)–(16) or the supersymmetry transformations (24) as Φ(2) ij = − i
2
(
2b¯θi
˙α
θαk Dα ˙αφjk − bǫ ijkl ¯θn ˙αθαk Dα ˙αφln − b2 ¯θi˙α ¯θm ˙α[φjk , φ km ]+ b2
2 ǫijkl θαk θmα [φln , φ nm ]
)
i
2
(
i ←→ j
)
· · · , (64) Φ(3) ij = i
6
(
2l¯θi ˙α ¯θk ˙β θαk Dα ˙α ¯λj
˙β
− lǫ ijkl ¯θm ˙αθαk θβmDα ˙αλlβ + lbǫ jkln ¯θi ˙α ¯θm
˙α
θαl [λnα , φ km ]+lb ¯θi ˙α ¯θm
˙α
θαm[λkα , φ jk ] − 3lb ¯θi ˙α ¯θm
˙α
θαk [λmα , φ jk ] + lbǫ ijkl ¯θm ˙α ¯θn
˙α
θαk [λnα , φ lm ]+2 lbθ αk θmα ¯θi ˙α[¯λm
˙α
, φ jk ] + lb
2 ǫijkl ǫlnpr θαk θmα ¯θp ˙β [¯λr
˙β
, φ nm ] − 3lb
2 ǫijkl θαk θmα ¯θn ˙β [¯λm
˙β
, φ ln ]+ lb
2 ǫijkl θαk θmα ¯θm ˙β [¯λn
˙β
, φ ln ]
)
− i
6
(
i ←→ j
)
· · · , (65) Φ(4) ij = 1
24
(
ǫjkln ¯θi ˙α ¯θm
˙α
θαl
(b3θpα [[ φnr , φ rp ], φ km ] + 2 il 2θβm{λnα , λ kβ }) − 4il 2 ¯θi ˙α ¯θk ˙β θαk θmα [¯λm
˙α
, ¯λj
˙β
]
−b3 ¯θi ˙α ¯θm
˙α
θnα
(5θαk [φjk , [φmr , φ rn ]] − θαm[φjk , [φkr , φ rn ]] )
1
2 ǫijkl θαk θmα
(b3ǫlnpr ¯θp ˙α ¯θq
˙α
[[ φrz , φ zq ], φ nm ] + 2 il 2ǫlnpr ¯θm ˙α ¯θp ˙β {¯λn
˙α
, ¯λr
˙β
}
+5 b3 ¯θn ˙α ¯θp
˙α
[φln , [φmr , φ rp ]] − b3 ¯θm ˙α ¯θp
˙α
[φln , [pφ nr , φ rp ]] ) + 2 il 2ǫijkl θαk θβm ¯θm ˙α ¯θn
˙α
{λnα , λ lβ }
)
−4b¯θi ˙α ¯θk ˙β θαk θβl Dα ˙αDβ ˙β φjl − 2ǫijkl ¯θm ˙α ¯θn ˙β θαk θβmDα ˙αDβ ˙β φln
)
− 1
24
(
i ←→ j
)
· · · . (66) To find the third and fourth order components in θ, ¯θ of the spinor super-connections ωiα ≡ vαi + uiα one takes (29)–(30) and operates with the recursion relations (15)–(16):
v(3) iα = 1
2 ¯θi ˙α (¯θj
˙α
θβj fαβ + b2 ¯θm
˙α
θjα [φjk , φ km ]
)
· · · , (67)
u(3) iα = − b2
4 θjα ¯θm
˙α
(¯θi ˙α[φjk , φ km ] − ¯θj ˙α[φik , φ km ]) − 2b¯θj ˙β θjα θβk Dβ ˙β φik + · · · , (68)
v(4) iα = − l
15 ¯θi ˙γ θβj
(¯θk ˙β θkα (D β ˙β ¯λj
˙γ
D β ˙γ ¯λj
˙β
) + ¯θj
˙γ
θkβ Dα ˙β ¯λk ˙β
+¯θj
˙γ
θkα Dβ ˙β ¯λk ˙β + bθ j
˙γ
θmα [φmk , λ kβ ]
)
· · · , (69) 12 u(4) iα = − l
15 θjα ¯θm
˙α
(
bθ i ˙α[φjk , λ kβ ] − θβm(b¯θj ˙α[φik , λ kβ ] − 2¯θi ˙γ Dβ ˙γ ¯λj ˙α + 2 ¯θj ˙γ Dβ ˙γ ¯λi ˙α)
−3bθ βk ¯θj ˙α[φik , λ mβ ] − bǫ ikln θβl ¯θj ˙α[λnβ , φ km ]
)
· · · . (70) The higher components in θ, ¯θ which are not listed here do not play any role in our calculations.
References
F. Gliozzi, J. Scherk and D. Olive, Supersymmetry, supergravity theories and the dual spinor model , Nucl. Phys. B 122 (1977) 253. M. Sohnius, Introducing supersymmetry , Phys. Rept. 128 (1985) 39. P. Di Vecchia, Duality in N = 2,4 supersymmetric gauge theories ,Les Houches 1997, Probing the standard model of particle interactions,
hep–th/9803026. M. Sohnius, K. Stelle and P. West, Off mass shell formulation of extended supersymmetric dauge theories , Phys. Lett. B 92 (1980) 123; Dimensional reduction by Legendre transformation generates off-shell supersymmetric Yang-Mills theories , Nucl. Phys. B 173 (1980) 127. M.F. Sohnius, Bianchi identities for supersymmetric gauge theories, Nucl. Phys. B 136 (1978) 461. E.Witten, An interpretation of classical Yang-Mills theory , Phys. Lett. B
77 (1978) 394. J. Harnad, J. Hurtubise, M. Legare and S. Shnider, Constraint equations and field equations in supersymmetric N=3 Yang–Mills theory , Nucl. Phys.
B256 (1985) 609. J. Harnad and S. Shnider, Commun. Math, Phys. Constraints and field equations for ten dimensional super Yang–Mills theory , 106 (1986) 183. N. Berkovits, A new description of the superstrings, hep–th/9604123;
Super-Poincare covariant quantization of the superstring , JHEP 04 (2000) 018, hep–th/0001035. C. S¨ amann and M. Wolf, Constraint and super Yang-Mills equations on the deformed superspace R(4 |16)
ℏ
, JHEP 03 , (2004) 048, hep–th/0401147. ¨O. F. Dayi, N=1 supersymmetric Yang–Mills theory in d=4 and its Batalin– Vilkovisky quantization by spinor superfields , Mod. Phys. Lett. A 21 (2006) 2161, hep–th/0509110. 13 F. A. Berezin, Differential Forms On Supermanifolds, Yad. Fiz. 30 , (1979), 1168; Yu I. Manin, Gauge Fields and Complex Geometry, Springer, (2nd edition), 1997. B. M. Zupnik and D. G. Pak, Differential and Integral Forms in Supergauge Theories and Supergravity, Class. Quant. Grav. 6, (1989), 723. J. Wess and J. Bagger, Supersymmetry and supergravity, Princeton Uni-versity Press, Princeton, 1992. 14
|
74
|
Published Time: Sun, 22 Jan 2023 23:18:13 GMT
A micro Lie theory for state estimation in robotics
Joan Sol` a, Jeremie Deray, Dinesh Atchuthan
Abstract —A Lie group is an old mathematical abstract object dating back to the XIX century, when mathematician Sophus Lie laid the foundations of the theory of continuous transformation groups. Its influence has spread over diverse areas of science and technology many years later. In robotics, we are recently experiencing an important trend in its usage, at least in the fields of estimation, and particularly in motion estimation for navigation. Yet for a vast majority of roboticians, Lie groups are highly abstract constructions and therefore difficult to understand and to use. In estimation for robotics it is often not necessary to exploit the full capacity of the theory, and therefore an effort of selection of materials is required. In this paper, we will walk through the most basic principles of the Lie theory, with the aim of conveying clear and useful ideas, and leave a significant corpus of the Lie theory behind. Even with this mutilation, the material included here has proven to be extremely useful in modern estimation algorithms for robotics, especially in the fields of SLAM, visual odometry, and the like. Alongside this micro Lie theory, we provide a chapter with a few application examples, and a vast reference of formulas for the major Lie groups used in robotics, including most Jacobian matrices and the way to easily manipulate them. We also present a new C++ template-only library implementing all the functionality described here.
I. INTRODUCTION
There has been a remarkable effort in the last years in the robotics community to formulate estimation problems properly. This is motivated by an increasing demand for precision, consistency and stability of the solutions. Indeed, proper modeling of the states and measurements, the functions relating them, and their uncertainties, is crucial to achieving these goals. This has led to designs involving what has been known as ‘manifolds’, which in this context are no less than the smooth topologic surfaces of the Lie groups where the state representations evolve. Relying on the Lie theory (LT) we are able to construct a rigorous calculus corpus to handle uncertainties, derivatives and integrals with precision and ease. Typically, these works have focused on the well-known manifolds of rotation SO(3) and rigid motion SE(3). When being introduced to Lie groups for the first time, it is important to try to regard them from different points of view. The topological viewpoint, see Fig. 1, involves the shape of the manifold and conveys powerful intuitions of its relation to the tangent space and the exponential map. The algebraic viewpoint involves the group operations and their concrete realization, allowing the exploitation of algebraic properties to develop closed-form formulas or to simplify them. The geometrical viewpoint, particularly useful in robotics, associates group elements to the position, velocity, orientation,
Figure 1. Representation of the relation between the Lie group and the Lie algebra. The Lie algebra TE M (red plane) is the tangent space to the Lie group’s manifold M (here represented as a blue sphere) at the identity E.Through the exponential map, each straight path vt through the origin on the Lie algebra produces a path exp( vt) around the manifold which runs along the respective geodesic. Conversely, each element of the group has an equivalent in the Lie algebra. This relation is so profound that (nearly) all operations in the group, which is curved and nonlinear, have an exact equivalent in the Lie algebra, which is a linear vector space. Though the sphere in R3 is not a Lie group (we just use it as a representation that can be drawn on paper), that in R4 is, and describes the group of unit quaternions —see Fig. 4 and Ex. 5.
and/or other modifications of bodies or reference frames. The origin frame may be identified with the group’s identity, and any other point on the manifold represents a certain ‘local’ frame. By resorting to these analogies, many mathematical abstractions of the LT can be brought closer to intuitive notions in vector spaces, geometry, kinematics, and other more classical fields. Lie theory is by no means simple. To grasp a minimum idea of what LT can be, we may consider the following three references. First, Abbaspour’s “Basic Lie theory” comprises more than 400 pages. With a similar title, Howe’s
“Very basic Lie theory” comprises 24 (dense) pages, and is sometimes considered a must-read introduction. Finally, the more modern and often celebrated Stillwell’s “Naive Lie theory” comprises more than 200 pages. With such precedents labeled as ‘basic’, ‘very basic’ and ‘naive’, the aim of this paper at merely 17 pages is to simplify Lie theory even more (thus our adjective ‘micro’ in the title). This we do in two ways. First, we select a small subset of material from the LT. This subset is so small that it merely explores the potential of LT. However, it appears very useful for uncertainty management in the kind of estimation problems we deal with in robotics ( e.g. inertial pre-integration, odometry and SLAM, visual servoing, and the like), thus enabling elegant and rigorous designs of optimal optimizers. Second, we explain it in a didactical way, with plenty of redundancy so as to
reduce the entry gap to LT even more, which we believe is still needed. That is, we insist on the efforts in this direction of, to name a paradigmatic title, Stillwell’s , and provide yet a more simplified version. The main text body is generic, though we try to keep the abstraction level to a minimum. Inserted examples serve as a grounding base for the general concepts when applied to known groups (rotation and motion matrices, quaternions, etc.). Also, plenty of figures with very verbose captions re-explain the same concepts once again. We put special attention to the computation of Jacobians (a topic that is not treated in ), which are essential for most optimal estimators and the source of much trouble when designing new algorithms. We provide a chapter with some applicative examples for robot localization and mapping, implementing EKF and nonlinear optimization algorithms based on LT. And finally, several appendices contain ample reference for the most relevant details of the most commonly used groups in robotics: unit complex numbers, quaternions, 2D and 3D rotation matrices, 2D and 3D rigid motion matrices, and the trivial translation groups. Yet our most important simplification to Lie theory is in terms of scope. The following passage from Howe may serve us to illustrate what we leave behind: “ The essential phenomenon of Lie theory is that one may associate in a natural way to a Lie group G its Lie algebra g. The Lie algebra
g is first of all a vector space and secondly is endowed with a bilinear nonassociative product called the Lie bracket [...]. Amazingly, the group G is almost completely determined by g
and its Lie bracket. Thus for many purposes one can replace
G with g. Since G is a complicated nonlinear object and g
is just a vector space, it is usually vastly simpler to work with g. [...] This is one source of the power of Lie theory. ”In , Stillwell even speaks of “ the miracle of Lie theory ”. In this work, we will effectively relegate the Lie algebra to a second plane in favor of its equivalent vector space Rn,and will not introduce the Lie bracket at all. Therefore, the connection between the Lie group and its Lie algebra will not be made here as profound as it should. Our position is that, given the target application areas that we foresee, this material is often not necessary. Moreover, if included, then we would fail in the objective of being clear and useful, because the reader would have to go into mathematical concepts that, by their abstraction or subtleness, are unnecessarily complicated. Our effort is in line with other recent works on the sub-ject , , , which have also identified this need of bringing the LT closer to the roboticist. Our approach aims at appearing familiar to the target audience of this paper: an audience that is skilled in state estimation (Kalman filtering, graph-based optimization, and the like), but not yet familiar with the theoretical corpus of the Lie theory. We have for this taken some initiatives concerning notation, especially in the definition of the derivative, bringing it close to the vectorial counterparts, thus making the chain rule clearly visible. As said, we opted to practically avoid the material proper to the Lie algebra, and prefer instead to work on its isomorphic tan-gent vector space Rn, which is where we ultimately represent uncertainty or (small) state increments. All these steps are undertaken with absolutely no loss in precision or exactness, TX M
MM
TXM
XX
˙X
Figure 2. A manifold Mand the vector space TXM(in this case ∼=R2)tangent at the point X, and a convenient side-cut. The velocity element, ˙X=
∂X/∂t , does not belong to the manifold Mbut to the tangent space TXM.
and we believe they make the understanding of the LT and the manipulation of its tools easier. This paper is accompanied by a new open-source C++ header-only library, called manif , which can be found at manif implements the widely used groups SO (2) , SO (3) , SE (2) and SE (3) , with support for the creation of analytic Jacobians. The library is designed for ease of use, flexibility, and performance. II. A MICRO LIE THEORY
A. The Lie group
The Lie group encompasses the concepts of group and
smooth manifold in a unique body: a Lie group G is a smooth manifold whose elements satisfy the group axioms. We briefly present these two concepts before joining them together. On one hand, a differentiable or smooth manifold is a topological space that locally resembles linear space. The reader should be able to visualize the idea of manifold (Fig. 2): it is like a curved, smooth (hyper)-surface, with no edges or spikes, embedded in a space of higher dimension. In robotics, we say that our state vector evolves on this surface, that is, the manifold describes or is defined by the constraints imposed on the state. For example, vectors with the unit norm constraint define a spherical manifold of radius one. The smoothness of the manifold implies the existence of a unique tangent space at each point. This space is a linear or vector space on which we are allowed to do calculus. On the other hand, a group (G, ◦) is a set, G, with a composition operation, ◦, that, for elements X , Y, Z ∈ G ,satisfies the following axioms, Closure under ‘ ◦’ : X ◦ Y ∈ G (1) Identity E : E ◦ X = X ◦ E = X (2) Inverse X −1 : X −1 ◦ X = X ◦ X −1 = E (3) Associativity : (X ◦ Y ) ◦ Z = X ◦ (Y ◦ Z ) . (4) In a Lie group , the manifold looks the same at every point (like e.g. in the surface of a sphere, see Exs. 1 and 2), and therefore all tangent spaces at any point are alike. The group structure imposes that the composition of elements of the manifold remains on the manifold, (1), and that each element has an inverse also in the manifold, (3). A special one of these elements is the identity, (2), and thus a special one of the tangent spaces is the tangent at the identity, which we call the Lie algebra of the Lie group. Lie groups join the local properties of smooth manifolds, allowing us to do calculus, 3z
✓
✓
i✓= log( x⇤z)x
zS1S1
T1S1=iR⇠=R
i✓= log( z)
z= exp( i✓)
log
exp
1
z=xexp( i✓)
TxS1⇠=R
i
Figure 3. The S1manifold is a unit circle (blue) in the plane C, where the unit complex numbers z∗z= 1 live. The Lie algebra s1=TES1is the line of imaginary numbers iR(red), and any tangent space T S 1is isomorphic to the line R(red). Tangent vectors (red segment) wrap the manifold creating the arc of circle (blue arc). Mappings exp and log (arrows) map (wrap and unwrap) elements of iRto/from elements of S1(blue arc). Increments between unit complex numbers are expressed in the tangent space via composition and the exponential map (and we will define special operators ⊕,for this). See the text for explanations, and Fig. 4 for a similar group.
Example 1: The unit complex numbers group S1
Our first example of Lie group, which is the easiest to visualize, is the group of unit complex numbers under complex multiplication (Fig. 3). Unit complex numbers take the form z = cos θ + i sin θ.
– Action: Vectors x = x + iy rotate in the plane by an angle θ, through complex multiplication, x′ = z x .
– Group facts: The product of unit complex numbers is a unit complex number, the identity is 1, and the inverse is the conjugate z∗.
– Manifold facts: The unit norm constraint defines the unit circle in the complex plane (which can be viewed as the 1-sphere, and hence the name S1). This is a 1-DoF curve in 2-dimensional space. Unit complex numbers evolve with time on this circle. The group (the circle) ressembles the linear space (the tangent line) locally, but not globally. with the global properties of groups, enabling the nonlinear composition of distant objects.
B. The group actions
Importantly, Lie groups come with the power to transform elements of other sets, producing e.g. rotations, translations, scalings, and combinations of them. These are extensively used in robotics, both in 2D and 3D. Given a Lie group M and a set V, we note X · v the action
of X ∈ M on v ∈ V ,
· : M × V → V ; ( X , v ) 7 → X · v . (5) For · to be a group action, it must satisfy the axioms, Identity : E · v = v (6) Compatibility : (X ◦ Y ) · v = X · (Y · v) . (7) Common examples are the groups of rotation matrices
SO (n), the group of unit quaternions, and the groups of rigid ✓
qS3
Hp⇠=R3
q= exp( u✓)
u✓= log( q)
S3
Hp⇠=R3
✓qS3✓
q=p ✓
✓=qp
q
p
1
⇠=R3
Figure 4. The S3manifold is a unit 3-sphere (blue) in the 4-space of quaternions H, where the unit quaternions q∗q= 1 live. The Lie algebra is the space of pure imaginary quaternions ix +jy +kz ∈Hp, isomorphic to the hyperplane R3(red grid), and any other tangent space T S 3is also isomorphic to R3. Tangent vectors (red segment) wrap the manifold over the great arc or geodesic (dashed). The centre and right figures show a side-cut through this geodesic (notice how it resembles S1in Fig. 3). Mappings exp
and log (arrows) map (wrap and unwrap) elements of Hpto/from elements of
S3(blue arc). Increments between quaternions are expressed in the tangent space via the operators ⊕,(see text).
Example 2: The unit quaternions group S3
A second example of Lie group, which is also relatively easy to visualize, is the group of unit quaternions under quaternion multiplication (Fig. 4). Unit quaternions take the form q = cos( θ/ 2) + u sin( θ/ 2) , with u = iu x +
ju y + ku z a unitary axis and θ a rotation angle.
– Action: Vectors x = ix + jy + kz rotate in 3D space by an angle θ around the unit axis u through the double quaternion product x′ = q x q ∗.
– Group facts: The product of unit quaternions is a unit quaternion, the identity is 1, and the inverse is the conjugate q∗.
– Manifold facts: The unit norm constraint defines the 3-sphere S3, a spherical 3-dimensional surface or manifold
in 4-dimensional space. Unit quaternions evolve with time on this surface. The group (the sphere) ressembles the linear space (the tangent hyperplane R3 ⊂ R4)locally, but not globally. motion SE (n). Their respective actions on vectors satisfy
SO (n) : rotation matrix R · x , Rx
SE (n) : Euclidean matrix H · x , Rx + t
S1 : unit complex z · x , z x
S3 : unit quaternion q · x , q x q ∗
See Table I for a more detailed exposition, and the appendices. The group composition (1) may be viewed as an action of the group on itself, ◦ : M × M → M . Another interesting action is the adjoint action , which we will see in Section II-F.
C. The tangent spaces and the Lie algebra
Given X (t) a point moving on a Lie group’s manifold M,its velocity ˙X = ∂X /∂t belongs to the space tangent to M
at X (Fig. 2), which we note TX M. The smoothness of the manifold, i.e. , the absence of edges or spikes, implies the existence of a unique tangent space at each point. The structure of such tangent spaces is the same everywhere. 4
Table I TYPICAL LIE GROUPS USED IN 2D AND 3D MOTION , INCLUDING THE TRIVIAL Rn . S EE THE APPENDICES FOR FULL REFERENCE
Lie group M, ◦ size dim X ∈ M Constraint τ ∧ ∈ m τ ∈ Rm Exp( τ ) Comp. Action
n-D vector Rn, + n n v ∈ Rn v − v = 0 v ∈ Rn v ∈ Rn v = exp( v) v1 +v2 v + x
circle S1, · 2 1 z ∈ C z∗z = 1 iθ ∈ iR θ ∈ R z = exp( iθ ) z1 z2 z x
Rotation SO (2) , · 4 1 R R>R = I [θ]× ∈ so (2) θ ∈ R R = exp([ θ]×) R1 R2 R x
Rigid motion SE (2) , · 9 3 M = [ R t
0 1
] R>R = I
[ [θ]× ρ
00
]
∈ se (2) [ ρ
θ
] ∈ R3 exp
([ [θ]× ρ
00
])
M1 M2 R x +t
3-sphere S3, · 4 3 q ∈ H q∗q = 1 θ/2 ∈ Hp θ ∈ R3 q = exp( uθ/ 2) q1 q2 q x q ∗
Rotation SO (3) , · 9 3 R R>R = I [θ]× ∈ so (3) θ ∈ R3 R = exp([ θ]×) R1 R2 R x
Rigid motion SE (3) , · 16 6 M = [ R t
0 1
] R>R = I
[ [θ]× ρ
00
]
∈ se (3) [ ρθ
] ∈ R6 exp
([ [θ]× ρ
00
])
M1 M2 R x +tS1
1
z(t)
T1S1 = iRTzS1
z
v^ = i! 2 iR
˙z = z · i! /2 iR
1
v^ = i! 2 iR
˙z = i! 2 iR
!t
Figure 5. Let a point z ∈ S1 move at constant rotation rate ω, z(t) = cos ωt + i sin ωt . Its velocities when passing through 1 and z are in the respective tangent spaces, T1S1 and TzS1. In the case of TzS1, the velocity is ˙z = z iω = −ω sin ωt + iω cos ωt when expressed in the global coordinates, and zv∧ = iω when expressed locally. Their relation is given by zv∧ = z−1 ˙z = z∗ ˙z. In the case of T1S1, this relation is the identity
1
v∧ = ˙ z = iω . Clearly, the structure of all tangent spaces is iR, which is the Lie algebra. This is also the structure of ˙z at the identity, and this is why the Lie algebra is defined as the tangent space at the identity.
1) The Lie algebra m: The tangent space at the identity,
TE M, is called the Lie algebra of M, and noted m,Lie algebra : m , TE M . (8) Every Lie group has an associated Lie algebra. We relate the Lie group with its Lie algebra through the following facts (see Figs. 1 and 6):
•
The Lie algebra m is a vector space. 1 As such, its elements can be identified with vectors in Rm, whose dimension m is the number of degrees of freedom of
M.
•
The exponential map , exp : m → M , exactly converts elements of the Lie algebra into elements of the group. The log map is the inverse operation.
•
Vectors of the tangent space at X can be transformed to the tangent space at the identity E through a linear transform. This transform is called the adjoint .Lie algebras can be defined locally to a tangent point X ,establishing local coordinates for TX M (Fig. 5). We shall denote elements of the Lie algebras with a ‘hat’ decorator, such as v∧ for velocities or τ ∧ = ( vt)∧ = v∧t for general elements. A left superscript may also be added to specify the precise tangent space, e.g. , Xv∧ ∈ TX M and Ev∧ ∈ TE M.The structure of the Lie algebra can be found (see Exam-ples 3 and 5) by time-differentiating the group constraint (3).
1
In any Lie algebra, the vector space is endowed with a non-associative product called the Lie bracket. In this work, we will not make use of it. X 2 M
log
exp
Log
Exp
(·)_
(·)^
Manifold
⌧ ^ 2 mLie algebra
⌧ 2 Rm
Vector
Tangent
TE M
Figure 6. Mappings between the manifold M and the representations of its tangent space at the origin TE M (Lie algebra m and Cartesian Rm). Maps hat (·)∧ and vee (·)∨ are the linear invertible maps or isomorphisms (10–11),
exp( ·) and log( ·) map the Lie algebra to/from the manifold, and Exp( ·) and
Log( ·) are shortcuts to map directly the vector space Rm to/from M.
For multiplicative groups this yields the new constraint
X −1 ˙X + ˙X −1X = 0 , which applies to the elements tangent at
X (the term ˙X −1 is the derivative of the inverse). The elements of the Lie algebra are therefore of the form, 2
v∧ = X −1 ˙X = − ˙X −1X . (9)
2) The Cartesian vector space Rm: The elements τ ∧ of the Lie algebra have non-trivial structures (skew-symmetric matrices, imaginary numbers, pure quaternions, see Table I) but the key aspect for us is that they can be expressed as linear combinations of some base elements Ei, where Ei are called the generators of m (they are the derivatives of X around the origin in the i-th direction). It is then handy to manipulate just the coordinates as vectors in Rm, which we shall note simply τ . We may pass from m to Rm and vice versa through two mutually inverse linear maps or isomorphisms , commonly called hat and vee (see Fig. 6), Hat : Rm → m ; τ 7 → τ ∧ =
m
∑
i=1
τi Ei (10) Vee : m → Rm ; τ ∧ 7 → (τ ∧)∨ = τ =
m
∑
i=1
τi ei , (11) with ei the vectors of the base of Rm (we have e∧
i
= Ei). This means that m is isomorphic to the vector space Rm —one writes m ∼= Rm, or τ ∧ ∼= τ . Vectors τ ∈ Rm are handier for our purposes than their isomorphic τ ∧ ∈ m, since they can be stacked in larger state vectors, and more importantly,
2
For additive Lie groups the constraint X −X = 0 differentiates to ˙X = ˙X ,that is, no constraint affects the tangent space. This means that the tangent space is the same as the group space. See App. E for more details. 5
Example 3: The rotation group SO (3) , its Lie algebra
so (3) , and the vector space R3
In the rotation group SO (3) , of 3 × 3 rotation matrices
R, we have the orthogonality condition R>R = I. The tangent space may be found by taking the time derivative of this constraint, that is R> ˙R + ˙R>R = 0 , which we rearrange as
R> ˙R = −(R> ˙R)>.
This expression reveals that R> ˙R is a skew-symmetric matrix (the negative of its transpose). Skew-symmetric matrices are often noted [ω]× and have the form
[ω]× =
[ 0 −ωz ωy
ωz0−ωx
−ωyωx0
]
.
This gives R> ˙R = [ ω]×. When R = I we have
˙R = [ ω]× ,
that is, [ω]× is in the Lie algebra of SO (3) , which we name so (3) . Since [ω]× ∈ so (3) has 3 DoF, the dimension of SO (3) is m = 3 . The Lie algebra is a vector space whose elements can be decomposed into
[ω]× = ωxEx + ωy Ey + ωz Ez
with Ex =
[ 0 0 00 0 −10 1 0
]
, Ey =
[ 0 0 1 0 0 0
−1 0 0
]
, Ez =
[ 0 −1 0 1 0 00 0 0
]
the generators of so (3) , and where ω = ( ωx, ω y , ω z ) ∈ R3
is the vector of angular velocities. The one-to-one linear relation above allows us to identify so (3) with R3 —we write so (3) ∼= R3. We pass from so (3) to R3 and viceversa using the linear operators hat and vee ,Hat : R3 → so (3); ω 7 → ω∧ = [ ω]×
Vee : so (3) → R3; [ω]× 7 → [ω]∨× = ω .
manipulated with linear algebra using matrix operators. In this work, we enforce this preference of Rm over m, to the point that most of the operators and objects that we define (specifically: the adjoint, the Jacobians, the perturbations and their covariances matrices, as we will see soon) are on Rm.
D. The exponential map
The exponential map exp() allows us to exactly transfer elements of the Lie algebra to the group (Fig. 1), an operation generically known as retraction . Intuitively, exp() wraps the tangent element around the manifold following the great arc or geodesic (as when wrapping a string around a ball, Figs. 1, 3 and 4). The inverse map is the log() , i.e. , the unwrapping operation. The exp() map arises naturally by considering the time-derivatives of X ∈ M over the manifold, as follows. From (9) we have,
˙X = X v∧ . (12) For v constant, this is an ordinary differential equation (ODE) whose solution is
X (t) = X (0) exp( v∧t) . (13)
Example 4: The exponential map of SO (3)
We have seen in Ex. 3 that ˙R = R [ω]× ∈ TRSO (3) .
For ω constant, this is an ordinary differential equation (ODE), whose solution is R(t) = R0 exp([ ω]× t). At the origin R0 = I we have the exponential map,
R(t) = exp([ ω]× t) ∈ SO (3) .
We now define the vector θ , uθ , ωt ∈ R3 as the integrated rotation in angle-axis form, with angle θ
and unit axis u. Thus [θ]× ∈ so (3) is the total rotation expressed in the Lie algebra. We substitute it above. Then write the exponential as a power series,
R = exp([ θ]×) = ∑
k
θk
k! ([ u]×)k .
In order to find a closed-form expression, we write down a few powers of [u]×,
[u]0
×
= I, [u]1
×
= [ u]× ,
[u]2
×
= uu > − I, [u]3
×
= − [u]× ,
[u]4
×
= − [u]2
×
, · · ·
and realize that all can be expressed as multiples of I,
[u]× or [u]2
×
. We thus rewrite the series as,
R = I + [ u]×
(θ − 13! θ3 + 15! θ5 − · · · )
[ u]2
×
( 12 θ2 − 14! θ4 + 16! θ6 − · · · ) ,
where we identify the series of sin θ and cos θ, yielding the closed form,
R = exp([ uθ]×) = I + [ u]× sin θ + [ u]2
×
(1 −cos θ) .
This expression is the well known Rodrigues rotation formula. It can be used as the capitalized exponential just by doing R = Exp( uθ) = exp([ uθ]×).Since X (t) and X (0) are elements of the group, then
exp( v∧t) = X (0) −1X (t) must be in the group too, and so
exp( v∧t) maps elements v∧t of the Lie algebra to the group. This is known as the exponential map .In order to provide a more generic definition of the expo-nential map, let us define the tangent increment τ , vt ∈ Rm
as velocity per time, so that we have τ ∧ = v∧t ∈ m a point in the Lie algebra. The exponential map, and its inverse the logarithmic map, can be now written as,
exp : m → M ; τ ∧ 7 → X = exp( τ ∧) (14)
log : M → m ; X 7 → τ ∧ = log( X ) . (15) Closed forms of the exponential in multiplicative groups are obtained by writing the absolutely convergent Taylor series,
exp( τ ∧) = E + τ ∧ + 12 τ ∧2 + 13! τ ∧3 + · · · , (16) and taking advantage of the algebraic properties of the powers of τ ∧ (see Ex. 4 and 5 for developments of the exponential 6
Example 5: The unit quaternions group S3 (cont.)
In the group S3 (recall Ex. 2 and see e.g. ), the time derivative of the unit norm condition q∗q = 1 yields
q∗ ˙q = −(q∗ ˙q)∗.
This reveals that q∗ ˙q is a pure quaternion (its real part is zero). Pure quaternions uv ∈ Hp have the form
uv = ( iu x + ju y + ku z )v = iv x + jv y + kv z ,
where u , iu x + ju y + ku z is pure and unitary, v is the norm, and i, j, k are the generators of the Lie algebra
s3 = Hp. Re-writing the condition above we have,
˙q = q u v ∈ TqS3,
which integrates to q = q0 exp( uvt ). Letting q0 = 1
and defining φ , uφ , uvt we get the exponential map,
q = exp( uφ) , ∑ φk
k! uk ∈ S3 .
The powers of u follow the pattern 1, u, −1, −u, 1, · · · .Thus we group the terms in 1 and u and identify the series of cos φ and sin φ. We get the closed form,
q = exp( uφ) = cos( φ) + u sin( φ) ,
which is a beautiful extension of the Euler formula,
exp( iφ ) = cos φ+i sin φ. The elements of the Lie algebra
φ = uφ ∈ s3 can be identified with the rotation vector
θ ∈ R3 trough the mappings hat and vee ,Hat : R3 → s3; θ 7 → θ∧ = θ/2
Vee : s3 → R3; φ 7 → φ∨ = 2 φ ,
where the factor 2 accounts for the double effect of the quaternion in the rotation action, x′ = q x q ∗. With this choice of Hat and Vee, the quaternion exponential
q = Exp( uθ) = cos( θ/ 2) + u sin( θ/ 2)
is equivalent to the rotation matrix R = Exp( uθ).map in SO (3) and S3). These are then inverted to find the logarithmic map. Key properties of the exponential map are
exp(( t + s)τ ∧) = exp( tτ ∧) exp( sτ ∧) (17)
exp( tτ ∧) = exp( τ ∧)t (18)
exp( −τ ∧) = exp( τ ∧)−1 (19)
exp( X τ ∧X −1) = X exp( τ ∧)X −1 , (20) where (20), a surprising and powerful statement, can be proved easily by expanding the Taylor series and simplifying the many terms X −1X .
1) The capitalized exponential map: The capitalized Exp and Log maps are convenient shortcuts to map vector elements
τ ∈ Rm (∼= TE M) directly with elements X ∈ M . We have,
Exp : Rm → M ; τ 7 → X = Exp( τ ) (21)
Log : M → Rm ; X 7 → τ = Log( X ) . (22) M
X⌧
E⌧
X
E
Y
Y=E⌧ X=X X⌧
E
X Y=E X=X X
E⌧= Ad XX⌧
X
X
Figure 7. Two paths, X ◦ Xδand Eδ◦ X , join the origin Ewith the point
Y. They both compose the element Xwith increments or ‘deltas’ expressed either in the local frame, Xδ, or in the origin, Eδ. Due to non-commutativity, the elements Xδand Eδare not equal. Their associated tangent vectors Xτ=Log( Xδ)and Eτ= Log( Eδ)are therefore unequal too. They are related by the linear transform Eτ=Ad X Xτwhere Ad Xis the adjoint of Mat X.
Clearly from Fig. 6,
X = Exp( τ ) , exp( τ ∧) (23)
τ = Log( X ) , log( X )∨ . (24) See the Appendices for details on the implementation of these maps for different manifolds.
E. Plus and minus operators
Plus and minus allow us to introduce increments between elements of a (curved) manifold, and express them in its (flat) tangent vector space. Denoted by ⊕ and , they combine one Exp/Log operation with one composition. Because of the non-commutativity of the composition, they are defined in right-and left- versions depending on the order of the operands. The right operators are (see Fig. 4-right ), right-⊕ : Y = X ⊕ Xτ , X ◦ Exp( Xτ ) ∈ M (25) right- : Xτ = Y X , Log( X −1 ◦Y ) ∈ TX M . (26) Because in (25) Exp( Xτ ) appears at the right hand side of the composition, Xτ belongs to the tangent space at X (see (26)): we say by convention 3 that Xτ is expressed in the local frame at X — we note reference frames with a left superscript. The left operators are, left-⊕ : Y = Eτ ⊕ X , Exp( Eτ ) ◦ X ∈ M (27) left- : Eτ = Y X , Log( Y ◦X −1) ∈ TE M . (28) Now, in (27) Exp( Eτ ) is on the left and we have Eτ ∈ TE M:we say that Eτ is expressed in the global frame. Notice that while left- and right- ⊕ are distinguished by the operands order, the notation in (26) and (28) is ambiguous. In this work, we express perturbations locally by default and therefore we use the right- forms of ⊕ and by default.
F. The adjoint, and the adjoint matrix
If we identify Y in (25, 27), we arrive at Eτ ⊕ X = X ⊕ Xτ ,which determines a relation between the local and global
3The convention sticks to that of frame transformation, e.g. Gx=RLx,where the matrix R∈SO (3) transforms local vectors into global. Notice that this convention is not shared by all authors, and for example uses the opposite, Lx=RGx.7
tangent elements (Fig. 7). We develop it with (20, 25, 27) as
Exp( Eτ )X = X Exp( Xτ )exp( Eτ ∧) = X exp( Xτ ∧)X −1 = exp( X Xτ ∧X −1)
E
τ ∧ = X Xτ ∧X −1
1) The adjoint: We thus define the adjoint of M at X ,noted Ad X , to be
Ad X : m → m; τ ∧ 7 → Ad X (τ ∧) , X τ ∧X −1 , (29) so that Eτ ∧ = Ad X (Xτ ∧). This defines the adjoint action
of the group on its own Lie algebra. The adjoint has two interesting (and easy to prove) properties, Linear : Ad X (aτ ∧ + bσ∧) = aAd X (τ ∧)+ bAd X (σ∧)
Homomorphism : Ad X (Ad Y (τ ∧)) = Ad X Y (τ ∧) .
2) The adjoint matrix: Since Ad X () is linear, we can find an equivalent matrix operator Ad X that maps the Cartesian tangent vectors Eτ ∼= Eτ ∧ and Xτ ∼= Xτ ∧,
Ad X : Rm → Rm; Xτ 7 → Eτ = Ad X Xτ , (30) which we call the adjoint matrix . This can be computed by applying ∨ to (29), thus writing
Ad X τ = ( X τ ∧X −1)∨ , (31) then developing the right hand side to identify the adjoint matrix (see Ex. 6 and the appendices). Additional properties of the adjoint matrix are,
X ⊕ τ = ( Ad X τ ) ⊕ X (32)
Ad X −1 = Ad X −1 (33)
Ad X Y = Ad X Ad Y . (34) Notice in (33, 34) that the left parts of the equality are usually cheaper to compute than the right ones. We will use the adjoint matrix often as a way to linearly transform vectors of the tangent space at X onto vectors of the tangent space at the origin, with Eτ = Ad X Xτ , (30). In this work, the adjoint matrix will be referred to as simply the adjoint.
G. Derivatives on Lie groups
Among the different ways to define derivatives in the context of Lie groups, we concentrate on those in the form of Jacobian matrices mapping vector tangent spaces. This is sufficient here since in these spaces uncertainties and increments can be properly and easily defined. Using these Jacobians, the formulas for uncertainty management in Lie groups will largely resemble those in vector spaces. The Jacobians described hereafter fulfill the chain rule, so that we can easily compute any Jacobian from the partial Jacobian blocks of inversion , composition , exponentiation and
action . See Section III-A for details and proofs.
Example 6: The adjoint matrix of SE (3)
The SE (3) group of rigid body motions (see App. D) has group, Lie algebra and vector elements,
M =
[R t0 1
]
, τ ∧ =
[[θ]× ρ
0 0
]
, τ =
[ρθ
]
.
The adjoint matrix is identified by developing (31) as
Ad M τ = ( Mτ ∧M−1)∨ = · · · ==
([ R [θ]× R> −R [θ]× R>t + Rρ
0 0
]) ∨
=
([ [Rθ]× [t]× Rθ + Rρ
0 0
]) ∨
=
[[t]× Rθ + Rρ
Rθ
]
=
[R [t]× R0 R
] [ ρθ
]
where we used [Rθ]× = R [θ]× R> and [a]× b =
− [b]× a. So the adjoint matrix is
Ad M =
[R [t]× R0 R
]
∈ R6×6 .
1) Reminder: Jacobians on vector spaces: For a multivari-ate function f : Rm → Rn, the Jacobian matrix is defined as the n × m matrix stacking all partial derivatives,
J = ∂f (x)
∂x ,
∂f 1
∂x 1
· · · ∂f 1
∂x m
... ...
∂f n
∂x 1
· · · ∂f n
∂x m
∈ Rn×m . (35) It is handy to define this matrix in the following form. Let us partition J = [ j1 · · · jm], and let ji = [ ∂f 1
∂x i
· · · ∂f n
∂x i
]> be its i-th column vector. This column vector responds to
ji = ∂f (x)
∂x i
, lim
h→0
f (x + hei) − f (x)
h ∈ Rn , (36) where ei is the i-th vector of the natural basis of Rm.Regarding the numerator, notice that the vector
vi(h) , f (x + hei) − f (x) ∈ Rn (37) is the variation of f (x) when x is perturbed in the direction of ei, and that the respective Jacobian column is just ji =
∂vi(h)/∂h |h=0 = lim h→0 vi(h)/h . In this work, for the sake of convenience, we introduce the compact form,
J = ∂f (x)
∂x , lim
h→0
f (x + h) − f (x)
h ∈ Rn×m , (38) with h ∈ Rm, which aglutinates all columns (36) to form the definition of (35). We remark that (38) is just a notation convenience (just as (35) is), since division by the vector h
is undefined and proper computation requires (36). However, this form may be used to calculate Jacobians by developing the numerator into a form linear in h, and identifying the left hand side as the Jacobian, that is,
lim
h→0
f (x+h)−f (x)
h = · · · = lim
h→0
Jh h , ∂Jh
∂h = J. (39) 8f (X )⌧1 = he1
M
X
N f (X ⌧1)
f(·)
X ⌧1
TXMTf(X)N
⌧2=he2
j1
j2
2(h) 1(h)
Figure 8. Right Jacobian of a function f:M → N . The perturbation vectors in the canonical directions, τi=hei∈TXM, are propagated to perturbation vectors σi∈Tf(X)Nthrough the processes of plus, apply f() , and minus (green arrows), obtaining σi(h) = f(X ⊕ hei)f(X). For varying values of
h, notice that in Mthe perturbations τi(h) = hei(thick red) produce paths in M(blue) along the geodesic (recall Fig. 1). Notice also that in N, due to the non-linearity of f(·), the image paths (solid blue) are generally not in the geodesic (dashed blue). These image paths are lifted onto the tangent space
Tf(X)N, producing smooth curved paths (thin solid red). The column vectors
jiof J(thick red) are the derivatives of the lifted paths evaluated at f(X),i.e. ,
ji= lim h→0σi(h)/h . Each hei∈TXMgives place to a ji∈Tf(X)N,and thus the resulting Jacobian matrix J= [ j1· · · jm]∈Rn×mlinearly maps vectors from TXM ∼=Rmto Tf(X)N ∼=Rn.
Notice finally that for small values of h we have the linear approximation,
f (x + h) −−−→
h→0
f (x) + ∂f (x)
∂x h . (40)
2) Right Jacobians on Lie goups: Inspired by the standard derivative definition (38) above, we can now use our ⊕ and
operators to define Jacobians of functions f : M → N acting on manifolds (see Fig. 8). Using the right- {⊕ , } in place of
{+, −} we obtain a form akin to the standard derivative, 4
X
Df (X )
DX , lim
τ→0
f (X ⊕ τ ) f (X )
τ ∈ Rn×m (41a) which develops as,
= lim
τ→0
Log (f (X )−1 ◦ f (X ◦ Exp( τ )) )
τ (41b)
= ∂ Log (f (X )−1 ◦ f (X ◦ Exp( τ )) )
∂τ
∣∣∣∣∣τ =0
. (41c) We call this Jacobian the right Jacobian of f . Notice that (41c) is just the standard derivative (38) of the rather complicated function g(τ ) = Log (f (X )−1 ◦ f (X ◦ Exp( τ )) ). Writing it as in (41a) conveys much more intuition: it is the derivative of f (X ) with respect to X , only that we expressed the infinitesimal variations in the tangent spaces! Indeed, thanks to the way right- ⊕ and operate, variations in X and f (X )
are now expressed as vectors in the local tangent spaces, i.e. ,tangent respectively at X ∈ M and f (X ) ∈ N . This derivative is then a proper Jacobian matrix Rn×m linearly mapping the
local tangent spaces TX M → Tf (X )N (and we mark the derivative with a local ‘ X ’ superscript). Just as in vector spaces, the columns of this matrix correspond to directional derivatives. That is, the vector
σi(h) = f (X ⊕ hei) f (X ) ∈ Rn (42)
4The notation DY
DX=Df (X)
DXis chosen in front of other alternatives in order to make the chain rule readable, i.e. ,DZ
DX=DZ
DY
DY
DX. We will later introduce the lighter notation JYX,DY
DX.Y=f(X)
M
X⌧E⌧
X
E
E
N
Y
E
Ad XAd Y
XDY
DX
EDY
DX
YDY
EDX
EDY
XDX
Figure 9. Linear maps between all tangent spaces involved in a function Y=
f(X), from Mto N. The linear maps Eτ=Ad X Xτ,Eσ=Ad Y Yσ,
Eσ=EDY
DXEτ, and Yσ=XDY
DXXτ, form a loop (solid) that leads to (46). The crossed Jacobians (dashed) form more mapping loops leading to (47,48).
(see Fig. 8 again, and compare σi in (42) with vi in (37)) is the variation of f (X ) when X varies in the direction of ei.Its respective Jacobian column is ji = ∂σi(h)/∂h |h=0 .As before, we use (41a) to actually find Jacobians by resorting to the same mechanism (39). For example, for a 3D rotation f : SO (3) → R3; f (R) = Rp , we have M = SO (3)
and N = R3 and so (see App. B-C5),
R
DRp
DR = lim
θ→0
(R ⊕ θ)p Rp
θ = lim
θ→0
R Exp( θ)p − Rp
θ
= lim
θ→0
R(I + [ θ]×)p − Rp
θ = lim
θ→0
R [θ]× p
θ
= lim
θ→0
−R [p]× θθ = −R [p]× ∈ R3×3 .
Many examples of this mechanism can be observed in Sec-tion III and the appendices. Remark that whenever the function
f passes from one manifold to another, the plus and minus operators in (41a) must be selected appropriately: ⊕ for the domain M, and for the codomain or image N .For small values of τ , the following approximation holds,
f (X ⊕ Xτ ) −−−−→
Xτ→0
f (X ) ⊕
X
Df (X )
DX
X
τ ∈ N . (43)
3) Left Jacobians on Lie groups: Derivatives can also be defined from the left- plus and minus operators, leading to,
E
Df (X )
DX , lim
τ→0
f (τ ⊕ X ) f (X )
τ ∈ Rn×m (44)
= lim
τ→0
Log( f (Exp( τ ) ◦ X ) ◦ f (X )−1)
τ
= ∂ Log (f (Exp( τ ) ◦ X ) ◦ f (X )−1)
∂τ
∣∣∣∣∣τ =0
,
which we call the left Jacobian of f . Notice that now
τ ∈ TE M, and the numerator belongs to TE N , thus the left Jacobian is a n×m matrix mapping the global tangent spaces,
TE M → TE N , which are the Lie algebras of M and N (and we mark the derivative with a global or origin ‘ E’ superscript). For small values of τ the following holds,
f (Eτ ⊕ X ) −−−−→
Eτ→0
E
Df (X )
DX
E
τ ⊕ f (X ) ∈ N . (45) We can show from (32, 43, 45) (see Fig. 9) that left and right Jacobians are related by the adjoints of M and N ,
E
Df (X )
DX Ad X = Ad f (X )
X
Df (X )
DX . (46) 9M
T¯XM
¯X
Figure 10. Uncertainty around a point ¯X ∈ M is properly expressed as a covariance on the vector space tangent at the point (red). Using ⊕(51), the probability ellipses in the tangent space are wrapped over the manifold (blue), thus illustrating the probability concentration region on the group.
4) Crossed right–left Jacobians: One can also define Jaco-bians using right-plus but left-minus, or vice versa. Though improbable, these are sometimes useful, since they map local to global tangents or vice versa. To keep it short, we will just relate them to the other Jacobians through the adjoints,
E
DY
X
DX =
E
DY
E
DX Ad X = Ad YY DY
X
DX (47)
Y
DY
E
DX =
Y
DY
X
DX Ad X −1 = Ad Y −1 E DY
E
DX , (48) where Y = f (X ). Now, the upper and lower super-scripts indicate the reference frames where the differentials are ex-pressed. Respective small-tau approximations read,
f (X ⊕ X τ ) −−−−→
Xτ→0
E
Df (X )
X
DX
X
τ ⊕ f (X ) (49)
f (E τ ⊕ X ) −−−−→
Eτ→0
f (X ) ⊕
f(X)
Df (X )
E
DX
E
τ . (50)
H. Uncertainty in manifolds, covariance propagation
We define local perturbations τ around a point ¯X ∈ M in the tangent vector space T ¯X M, using right- ⊕ and ,
X = ¯X ⊕ τ , τ = X ¯X ∈ T ¯X M . (51) Covariances matrices can be properly defined on this tangent space at ¯X through the standard expectation operator E[·],
ΣX , E[τ τ >] = E[( X ¯X )( X ¯X )>] ∈ Rm×m , (52) allowing us to define Gaussian variables on manifolds, X ∼ N ( ¯X , ΣX ), see Fig. 10. Notice that although we write ΣX , the covariance is rather that of the tangent perturbation τ . Since the dimension m of T M matches the degrees of freedom of
M, these covariances are well defined. 5
Perturbations can also be expressed in the global reference, that is, in the tangent space at the origin TE M, using left- ⊕
and ,
X = τ ⊕ ¯X , τ = X ¯X ∈ TE M . (53) This allows global specification of covariance matrices using left-minus in (52). For example, a 3D orientation that is known up to rotations in the horizontal plane can be associated to
5A naive definition ΣX,E[( X − ¯X)( X − ¯X)>]is always ill-defined if
size( X)>dim( M), which is the case for most non-trivial manifolds. X0
X3
X4
3
1
⌧1
⌧2
⌧3
⌧4
2
M
X1
X2
4
M
TX0M⌧1
1
Figure 11. Motion integration on a manifold. Each motion data produces a step τk∈TXk−1M, which is wrapped to a local motion increment or ‘delta’ δk= Exp( τk)∈ M , and then composed with Xk−1to yield Xk=
Xk−1◦δk=Xk−1◦Exp( τk) = Xk−1⊕τk∈ M .
a covariance E Σ = diag( σ2
φ
, σ 2
θ
, ∞). Since “horizontal” is a global specification, E Σ must be specified in the global reference. Since global and local perturbations are related by the adjoint (30), their covariances can be transformed with
E
ΣX = Ad X X ΣX Ad X > . (54) Covariance propagation through a function f : M →N ; X 7 → Y = f (X ) just requires the linearization (43) with Jacobian matrices (41a) to yield the familiar formula,
ΣY ≈ Df DX ΣX
Df DX
∈ Rn×n . (55)
I. Discrete integration on manifolds
The exponential map X (t) = X0 ◦ Exp( vt) performs the continuous-time integral of constant velocities v ∈ TX0 M
onto the manifold. Non-constant velocities v(t) are typically handled by segmenting them into piecewise constant bits
vk ∈ TXk−1 M, of (short) duration δt k, and writing the discrete integral
Xk = X0 ◦ Exp( v1δt 1) ◦ Exp( v1δt 2) ◦ · · · ◦ Exp( vkδt k)= X0 ⊕ v1δt 1 ⊕ v1δt 2 ⊕ · · · ⊕ vkδt k .
Equivalently (Fig. 11), we can define τk = vkδt k and construct the integral as a “sum” of (small) discrete tangent steps τk ∈ TXk−1 M, i.e. , Xk , X0 ⊕ τ1 ⊕ τ2 ⊕ · · · ⊕ τk. We write all these variants in recursive form,
Xk = Xk−1 ⊕ τk = Xk−1 ◦ Exp( τk) = Xk−1 ◦ Exp( vkδt k) .
(56) Common examples are the integration of 3D angular rates
ω into the rotation matrix, Rk = Rk−1 Exp( ωkδt ), or into the quaternion, qk = qk−1 Exp( ωkδt ).III. D IFFERENTIATION RULES ON MANIFOLDS
For all the typical manifolds M that we use, we can deter-mine closed forms for the elementary Jacobians of inversion ,
composition , exponentiation and action . Moreover, some of these forms can be related to the adjoint Ad X , which becomes a central block of the differentiation process. Other forms for
Log , ⊕ and can be easily derived from them. Once these forms or ‘blocks’ are found, all other Jacobians follow by the chain rule. Except for the so-called left Jacobian , which 10
we also present below, all Jacobians developed here are right-Jacobians, i.e. , defined by (41a). By following the hints here, the interested reader should find no particular difficulties in developing the left-Jacobians. For the reader not willing to do this effort, equation (46) can be used to this end, since
E
Df (X )
DX = Ad f (X )
X
Df (X )
DX Ad X −1 . (57) We use the notations Jf (X )
X
, Df (X )
DX
and JYX , DY
DX
.We notice also that Ad X −1 should rather be implemented by Ad X −1 —see (33, 34) and the comment below them.
A. The chain rule
For Y = f (X ) and Z = g(Y) we have Z = g(f (X )) . The chain rule simply states,
DZ
DX = DZ
DY
DY
DX or JZX = JZY JYX . (58) We prove it here for the right Jacobian using (43) thrice,
g(f (X )) ⊕ JZX τ ← g(f (X ⊕ τ )) → g(f (X ) ⊕ JYX τ )
→ g(f (X )) ⊕ JZY JYX τ
with the arrows indicating limit as τ → 0, and so JZX =
JZY JYX . The proof for the left and crossed Jacobians is akin, using respectively (45, 49, 50). Notice that when mixing right, left and crossed Jacobians, we need to chain also the reference frames, as in e.g.
Z
DZ
E
DX =
Z
DZ
Y
DY
Y
DY
E
DX =
Z
DZ
E
DY
E
DY
E
DX (59)
E
DZ
X
DX =
E
DZ
Y
DY
Y
DY
X
DX =
E
DZ
E
DY
E
DY
X
DX , (60) where the first identity of (59) is proven by writing,
g(f (Eτ ⊕ X )) (50)
−−−−→
Eτ→0
g(f (X )) ⊕
Z
DZ
E
DX
E
τ ;
g(f (Eτ ⊕ X )) (50)
−−−−→
Eτ→0
g
(
f (X ) ⊕
Y
DY
E
DX
E
τ
)
→
(43)
−−−−→
Eτ→0
g(f (X )) ⊕
Z
DZ
Y
DY
Y
DY
E
DX
E
τ ,
and identifying (59) in the first and third rows.
B. Elementary Jacobian blocks 1) Inverse: We define with (41a)
JX −1
X
,
X
DX −1
DX ∈ Rm×m . (61) This can be determined from the adjoint using (20) and (31),
JX −1
X
= lim
τ→0
Log(( X −1)−1(X Exp( τ )) −1)
τ
= lim
τ→0
Log( X Exp( −τ )X −1)
τ
= lim
τ→0
(X (−τ )∧X −1)∨
τ = −Ad X . (62)
2) Composition: We define with (41a)
JX ◦Y X ,
X
DX ◦ Y
DX ∈ Rm×m (63)
JX ◦Y Y ,
Y
DX ◦ Y
DY ∈ Rm×m , (64) and using (20, 31) as above and (33),
JX ◦Y X = lim
τ→0
Log(( X Y )−1(X Exp( τ )Y))
τ
= lim
τ→0
Log( Y−1 Exp( τ )Y)
τ
= lim
τ→0
(Y−1τ ∧Y)∨
τ = Ad Y −1 (65)
JX ◦Y Y = · · · = I (66)
3) Jacobians of M: We define the right Jacobian of M as the right Jacobian of X = Exp( τ ), i.e. , for τ ∈ Rm,
Jr (τ ) ,
τ
D Exp( τ )
Dτ ∈ Rm×m , (67) which is defined with (41a). The right Jacobian maps vari-ations of the argument τ into variations in the local tangent space at Exp( τ ). From (41a) it is easy to prove that, for small
δτ , the following approximations hold,
Exp( τ + δτ ) ≈ Exp( τ ) Exp( Jr (τ )δτ ) (68)
Exp( τ ) Exp( δτ ) ≈ Exp( τ + J−1
r
(τ ) δτ ) (69)
Log(Exp( τ ) Exp( δτ )) ≈ τ + J−1
r
(τ ) δτ . (70) Complementarily, the left Jacobian of M is defined by,
Jl(τ ) ,
E
D Exp( τ )
Dτ ∈ Rm×m , (71) using the left Jacobian (44), leading to the approximations
Exp( τ + δτ ) ≈ Exp( Jl(τ )δτ ) Exp( τ ) (72)
Exp( δτ ) Exp( τ ) ≈ Exp( τ + J−1
l
(τ ) δτ ) (73)
Log(Exp( δτ ) Exp( τ )) ≈ τ + J−1
l
(τ ) δτ . (74) The left Jacobian maps variations of the argument τ into vari-ations in the global tangent space or Lie algebra. From (68, 72) we can relate left- and right- Jacobians with the adjoint,
Ad Exp( τ ) = Jl(τ ) Jr −1(τ ) . (75) Also, the chain rule allows us to relate Jr and Jl,
Jr (−τ ) , JExp( −τ )
−τ
= JExp( −τ )
τ
Jτ
−τ
= JExp( τ )−1
τ
(−I)= −JExp( τ )−1
Exp( τ)
JExp( τ )
τ
= Ad Exp( τ )Jr (τ )= Jl(τ ) . (76) Closed forms of Jr , Jr −1, Jl and Jl
−1
exist for the typical manifolds in use. See the appendices for reference.
4) Group action: For X ∈ M and v ∈ V , we define with (41a)
JX · v
X
,
X
DX · vDX (77)
JX · vv ,
v
DX · vDv . (78) Since group actions depend on the set V, these expressions cannot be generalized. See the appendices for reference. 11
C. Useful, but deduced, Jacobian blocks 1) Log map: For τ = Log( X ), and from (70),
JLog( X )
X
= J−1
r
(τ ) . (79)
2) Plus and minus: We have
JX ⊕ τ
X
= JX ◦ (Exp( τ ))
X
= Ad Exp( τ )−1 (80)
JX ⊕ ττ = JX ◦ (Exp( τ )) Exp( τ ) JExp( τ )
τ
= Jr (τ ) (81) and given Z = X −1 ◦ Y and τ = Y X = Log( Z),
JY XX = JLog( Z)
Z
JZX −1 JX −1
X
= −J−1
l
(τ ) (82)
JY XY = JLog( Z)
Z
JZY = J−1
r
(τ ) . (83) where the former is proven here
JY XX = JLog( X −1◦Y )(X −1◦Y ) J(X −1◦Y )
X−1
JX −1
X
(79 , 65 , 62) = J−1
r
(τ ) Ad Y −1 (−Ad X )(33 , 34) = −J−1
r
(τ ) Ad Y−1X
= −J−1
r
(τ ) Ad Exp( τ )−1
(75) = −J−1
l
(τ ) .
IV. C OMPOSITE MANIFOLDS
At the price of losing some consistency with the Lie theory, but at the benefit of obtaining some advantages in notation and manipulation, one can consider large and heterogeneous states as manifold composites (or bundles). A composite manifold M = 〈M 1, · · · , MM 〉 is no less than the concatenation of M non-interacting manifolds. This stems from defining identity, inverse and composition acting on each block of the composite separately,
E ,
E1
...
EM
, X ,
X −1
...
X −1
M
, X Y ,
X ◦ Y 1
...
XM ◦ Y M
,
(84) thereby fulfilling the group axioms, as well as a non-interacting retraction map, which we will also note as “ex-ponential map” for the sake of unifying notations (notice the angled brackets),
Exp 〈τ 〉 ,
Exp( τ1)
...
Exp( τM )
, Log 〈X 〉 ,
Log( X )
...
Log( XM )
, (85) thereby ensuring smoothness. These yield the composite’s right- plus and minus (notice the diamond symbols),
X τ , X Exp 〈τ 〉 (86)
Y X , Log 〈X Y〉 . (87) The key consequence of these considerations (see Ex. 7) is that new derivatives can be defined, 6 using and ,
Df (X )
DX , lim
τ→0
f (X τ ) f (X )
τ . (88)
6We assume here right derivatives, but the same applies to left derivatives.
Example 7: SE (n) vs. T (n)×SO (n) vs. 〈Rn, SO (n)〉
We consider the space of translations t ∈ Rn and rota-tions R ∈ SO (n). We have for this the well-known SE (n)
manifold of rigid motions M = [ R t 0 1 ] (see Apps. C and D), which can also be constructed as T (n)×SO (n) (see Apps. A, B and E). These two are very similar, but have different tangent parametrizations: while SE (n) uses τ =(θ, ρ) with M = exp( τ ∧), T (n)×SO (n) uses τ = ( θ, p)
with M = exp( p∧) exp( θ∧). They share the rotational part θ, but clearly ρ 6 = p (see [11, pag. 35] for further details). In short, SE (n) performs translation and rotation simultaneously as a continuum, while T (n) × SO (n)
performs chained translation+rotation. In radical contrast, in the composite 〈Rn, SO (n)〉 rotations and translations do not interact at all. By combining composition with
Exp() we obtain the (right) plus operators,
SE (n) : M ⊕ τ =
[R Exp( θ) t + RV (θ)ρ
0 1
]
T (n)×SO (n) : M ⊕ τ =
[R Exp( θ) t + Rp 0 1
]
〈Rn, SO (n)〉 : M τ =
[ t + pR Exp( θ)
]
where either ⊕ may be used for the system dynamics,
e.g. motion integration, but usually not , which might however be used to model perturbations. Their respective minus operators read,
SE (n) : M2 M1 =
[V−11 R>
1
(p2 − p1)Log( R>
1
R2)
]
T (n)×SO (n) : M2 M1 =
[R>
1
(p2 − p1)Log( R>
1
R2)
]
〈Rn, SO (n)〉 : M2 M1 =
[ p2 − p1
Log( R>
1
R2)
]
,
where now, interestingly, can be used to evaluate errors and uncertainty. This makes , valuable op-erators for computing derivatives and covariances. With this derivative, Jacobians of functions f : M → N
acting on composite manifolds can be determined in a per-block basis, which yields simple expressions requiring only knowledge on the manifold blocks of the composite,
Df (X )
DX =
Df 1
DX1
· · · Df 1
DXM
... . . . ...
Df N
DX1
· · · Df N
DXM
, (89) where Df i
DXj
are each computed with (41a). For small values of τ the following holds,
f (X τ ) −−−→
τ→0
f (X ) Df (X )
DX τ ∈ N . (90) When using these derivatives, covariances and uncertainty propagation must follow the convention. In particular, the 12
covariance matrix (52) becomes
ΣX , E[( X ¯X )( X ¯X )>] ∈ Rn×n , (91) for which the linearized propagation (55) using (88) applies. V. L ANDMARK -BASED LOCALIZATION AND MAPPING
We provide three applicative examples of the theory for robot localization and mapping. The first one is a Kalman filter for landmark-based localization. The second one is a graph-based smoothing method for simultaneous localization and mapping. The third one adds sensor self-calibration. They are based on a common setup, explained as follows. We consider a robot in the plane (see Section V-D for the 3D case) surrounded by a small number of punctual landmarks or beacons . The robot receives control actions in the form of axial and angular velocities and is able to measure the location of the beacons with respect to its own reference frame. The robot pose is in SE (2) (App. C) and the beacon positions in R2 (App. E),
X =
[R t0 1
]
∈ SE (2) , bk =
[xk
yk
]
∈ R2 .
The control signal u is a twist in se (2) comprising longitu-dinal velocity v and angular velocity ω, with no lateral velocity component, integrated over the sampling time δt . The control is corrupted by additive Gaussian noise w ∼ N (0, W). This noise accounts for possible lateral wheel slippages us through a value of σs 6 = 0 ,
u =
uv
us
uω
=
v δt
0
ω δt
+ w ∈ se (2) (92)
W =
σ2
v
δt 0 00 σ2
s
δt 00 0 σ2
w
δt
∈ R3×3. (93) At the arrival of a control uj at time j, the robot pose is updated with (56),
Xj = Xi ⊕ uj , Xi Exp( uj ) . (94) Landmark measurements are of the range and bearing type, though they are put in Cartesian form for simplicity. Their noise n ∼ N (0, N) is zero mean Gaussian,
yk = X −1 · bk + n = R>(bk − t) + n ∈ R2 (95)
N =
[σ2
x
00 σ2
y
]
∈ R2×2 , (96) where we notice the rigid motion action X −1 ·bk (see App. C).
A. Localization with error-state Kalman filter on manifold
We initially consider the beacons bk situated at known positions. We define the pose to estimate as ˆX ∈ SE (2) . The estimation error δx and its covariance P are expressed in the tangent space at ˆX with (51, 52),
δx , X ˆX ∈ R3 (97)
P , E[( X ˆX )( X ˆX )>] ∈ R3×3 . (98) X1
X2
X3
b4
b5
b6
Figure 12. SAM factor graph with 3 poses and 3 beacons. Each measurement contributes a factor in the graph. There are 2 motion factors (black) and 5 beacon factors (gray). A prior factor on X1provides global observability.
At each robot motion we apply ESKF prediction,
ˆXj = ˆXi ⊕ uj (99)
Pj = F P i F> + G W j G> , (100) with the Jacobians computed from the blocks in App. C,
F , JXj
Xi
= J ˆXi⊕uj
ˆXi
= Ad Exp( uj )−1
G , JXj
uj
= J ˆXi⊕uj
uj
= Jr (uj ) .
At each beacon measurement yk we apply ESKF correction, Innovation : z = yk − ˆX −1 · bk
Innovation cov. : Z = H P H > + N
Kalman gain : K = P H > Z−1
Observed error : δx = Kz
State update : ˆX ← ˆX ⊕ δx (101) Cov. update : P ← P − K Z K > , (102) with the Jacobian computed from the blocks in App. C,
H , JX −1·bk
X
= JX −1·bk
X−1
JX −1
X
= [R> R> × bk
] [−R × t0 −1
]
= − [I R> × (bk − t)] .
Notice that the only changes with respect to a regular EKF are in (99) and (101), where regular + are substituted by ⊕.The Jacobians on the contrary are all computed using the Lie theory (see App. C). Interstingly, their usage is the same as in standard EKF — see e.g. the equation of the Kalman gain, which is the standard K = PH >(HPH > + N)−1.
B. Smooting and Mapping with graph-based optimization
We consider now the problem of smoothing and mapping (SAM), where the variables to estimate are the beacons’ locations and the robot’s trajectory. The solver of choice is a graph-based iterative least-squares optimizer. For simplicity, we assume the trajectory comprised of three robot poses
{X 1 · · · X 3}, and a world with three beacons {b4 · · · b6}. The problem state is the composite
X = 〈X 1, X2, X3, b4, b5, b6〉, Xi ∈ SE (2) , bk ∈ R2.
(103) The resulting factor graph is shown in Fig. 12. Each prior or measurement contributes a factor in the graph. Motion measurements from pose i to j are derived from (94), while measurements of beacon k from pose i respond to (95), 13
uij = Xj Xi + wij = Log( X −1
i
Xj ) + wij (104)
yik = X −1
i
· bk + nik . (105) Each factor comes with an information matrix, Ω1 , W−11 ,
Ωij , W−1
ij
and Ωik , N−1
ik
. The expectation residuals are, prior residual : r1(X ) = Ω>/21 (X1 ˆX1)
motion residual : rij (X ) = Ω>/2
ij
(uij − ( ˆXj ˆXi))
beacon residual : rik (X ) = Ω>/2
ik
(yik − ˆX −1
i
· ˆbk) .
The optimum update step δx stems from minimizing
δx∗ = arg min
δx
∑
p∈P
rp(X δx)>rp(X δx) (106) with P = {1, 12 , 23 , 14 , 15 , 25 , 26 , 36 } the set of node pairs of each measurement (see Fig. 12). The problem is solved iteratively as follows. Each residual in the sum (106) is linearized to rp(X δx) ≈ rp(X ) Jrp
X
δx following (90), where Jrp
X
are sparse Jacobians. The non-zero blocks of these Jacobians, that is Jr1
X1
, Jrij
Xi
, Jrij
Xj
, Jrik
Xi
and Jrik
bk
, can be easily computed following the methods in Section V-A, and noticing that by definition Jf (X ⊕ δx)
δx
|δx=0 = Jf (X ⊕ δx)
X
|δx=0 = Jf (X )
X
.Building the total Jacobian matrix and residual vector,
J =
Jr1
X1
0 0 0 0 0Jr12
X1
Jr12
X2
0 0 0 00 Jr23
X2
Jr23
X3
0 0 0Jr14
X1
0 0 Jr14
b4
0 0Jr15
X1
0 0 0 Jr15
b5
00 Jr25
X2
0 0 Jr25
b5
00 Jr26
X2
0 0 0 Jr26
b6
0 0 Jr36
X3
0 0 Jr36
b6
r =
r1
r12
r23
r14
r15
r25
r26
r36
(107) the linearized (106) is now transformed to minimizing
δx∗ = arg min
δx
‖r + Jδx‖2. (108) This is solved via least-squares using the pseudoinverse of
J (for large problems, QR , or Cholesky , factorizations are required),
δx∗ = −(J>J)−1J>r , (109) yielding the optimal step δx∗ used to update the state,
X ← X δx∗ . (110) The procedure is iterated until convergence. We highlight here the use of the composite notation in (103), which allows block-wise definitions of the Jacobian (107) and the update (110). We also remark the use of the SE (2) manifold in the motion and measurement models, as we did in the ESKF case in Section V-A.
C. Smoothing and mapping with self-calibration
We consider the same problem as above but with a motion sensor affected by an unknown calibration bias c = ( cv , c ω )>,so that the control is now ˜u = ( vδt + cv , 0, ωδt + cω )> + w.We define the bias correction function c() ,
u = c (˜ u, c) ,
˜uv − cv
˜us
˜uω − cω
∈ R3 ∼= se (2) . (111) The state composite is augmented with the unknowns c,
X = 〈c, X1, X2, X3, b4, b5, b6〉 ,
c ∈ R2, Xi ∈ SE (2) , bk ∈ R2 ,
and the motion residual becomes
rij (X ) = Ω>/2
ij
(c (˜ uij , c) − ( ˆXj ˆXi)) .
The procedure is as in Section V-B above, and just the total Jacobian is modified with an extra column on the left,
J =
0 Jr1
X1
0 0 0 0 0Jr12
c
Jr12
X1
Jr12
X2
0 0 0 0Jr23
c
0 Jr23
X2
Jr23
X3
0 0 00 Jr14
X1
0 0 Jr14
b4
0 00 Jr15
X1
0 0 0 Jr15
b5
00 0 Jr25
X2
0 0 Jr25
b5
00 0 Jr26
X2
0 0 0 Jr26
b6
0 0 0 Jr36
X3
0 0 Jr36
b6
,
where Jrij
c
= Ω>/2
ij
Jc(uij ,c)
c
, with Jc(uij ,c)
c
the 3 × 2 Jacobian of (111). The optimal solution is obtained with (109, 110). The resulting optimal state X includes an optimal estimate of c,that is, the self-calibration of the sensor bias.
D. 3D implementations
It is surprisingly easy to bring all the examples above to 3D. It suffices to define all variables in the correct spaces:
X ∈ SE (3) and u ∈ R6 ∼= se (3) (App. D), and {bk, y} ∈ R3
(App. E). Jacobians and covariances matrices will follow with appropriate sizes. The interest here is in realizing that all the math in the algorithms, that is from (97) onwards, is exactly the same for 2D and 3D: the abstraction level provided by the Lie theory has made this possible. VI. C ONCLUSION
We have presented the essential of Lie theory in a form that should be useful for an audience skilled in state estimation, with a focus on robotics applications. This we have done through several initiatives: First, a selection of materials that avoids abstract mathe-matical concepts as much as possible. This helps to focus Lie theory to make its tools easier to understand and to use. Second, we chose a didactical approach, with significant redundancy. The main text is generic and covers the abstract points of Lie theory. It is accompanied by boxed examples, which ground the abstract concepts to particular Lie groups, and plenty of figures with very verbose captions. Third, we have promoted the usage of handy operators, such as the capitalized Exp() and Log() maps, and the plus and minus operators ⊕, , , . They allow us to work on the Cartesian representation of the tangent spaces, producing 14
formulas for derivatives and covariance handling that greatly resemble their counterparts in standard vector spaces. Fourth, we have made special emphasis on the definition, geometrical interpretation, and computation of Jacobians. For this, we have introduced notations for the Jacobian matrices and covariances that allow a manipulation that is visually powerful. In particular, the chain rule is clearly visible with this notation. This helps to build intuition and reducing errors. Fifth, we present in the appendices that follow an extensive compendium of formulas for the most common groups in robotics. In 2D, we present the rotation groups of unit complex numbers S1 and rotation matrices SO (2) , and the rigid motion group SE (2) . In 3D, we present the groups of unit quaternions
S3 and rotation matrices SO (3) , both used for rotations, and the rigid motion group SE (3) . We also present the translation groups for any dimension, which can be implemented by either the standard vector space Rn under addition, or by the matrix translation group T (n) under multiplication. Sixth, we have presented some applicative examples to illustrate the capacity of Lie theory to solve robotics problems with elegance and precision. The somewhat naive concept of composite group helps to unify heterogeneous state vectors into a Lie-theoretic form. Finally, we accompany this text with the new C++ library
manif implementing the tools described here. manif can be found at The applications in Section V are demonstrated in manif as examples. Though we do not introduce any new theoretical material, we believe the form in which Lie theory is here exposed will help many researchers enter the field for their future developments. We also believe this alone represents a valuable contribution. APPENDIX ATHE 2D ROTATION GROUPS S1 AND SO (2)
The Lie group S1 is the group of unit complex numbers under the complex product. Its topology is the unit circle, or the unit 1-sphere, and therefore the name S1. The group, Lie algebra and vector elements have the form,
z = cos θ + i sin θ, τ ∧ = iθ, τ = θ . (112) Inversion and composition are achieved by conjugation z−1 =
z∗, and product za ◦ zb = za zb.The group SO (2) is the group of special orthogonal matrices in the plane, or rotation matrices, under matrix multiplication. Group, Lie algebra and vector elements have the form,
R = [ cos θ − sin θ
sin θcos θ
], τ ∧ = [ θ]× , [ 0 −θθ 0
], τ = θ . (113) Inversion and composition are achieved by transposition
R−1 = R>, and product Ra ◦ Rb = Ra Rb.Both groups rotate 2-vectors, and they have isomorphic tangent spaces. We thus study them together.
A. Exp and Log maps
Exp and Log maps may be defined for complex numbers of
S1 and rotation matrices of SO (2) . For S1 we have,
z = Exp( θ) = cos θ + i sin θ ∈ C (114)
θ = Log( z) = arctan(Im( z), Re( z)) ∈ R , (115) where (114) is the Euler formula, whereas for SO (2) ,
R = Exp( θ) =
[cos θ − sin θ
sin θ cos θ
]
∈ R2×2 (116)
θ = Log( R) = arctan( r21 , r 11 ) ∈ R . (117)
B. Inverse, composition, exponential map
We consider generic 2D rotation elements, and note them with the sans-serif font, Q, R. We have
R(θ)−1 = R(−θ) (118)
Q ◦ R = R ◦ Q , (119)
i.e. , planar rotations are commutative. It follows that
Exp( θ1 + θ2) = Exp( θ1) ◦ Exp( θ2) (120)
Log( Q ◦ R) = Log( Q) + Log( R) (121)
Q R = θQ − θR . (122)
C. Jacobian blocks
Since our defined derivatives map tangent vector spaces, and these spaces coincide for the planar rotation manifolds of S1
and SO (2) , i.e. , θ = Log( z) = Log( R), it follows that the Jacobians are independent of the representation used ( z or R).
1) Adjoint and other trivial Jacobians: From (41a), Sec-tion III-B and the properties above, the following scalar derivative blocks become trivial,
Ad R = 1 ∈ R (123)
JR−1
R
= −1 ∈ R (124)
JQ◦RQ = JQ◦RR = 1 ∈ R (125)
Jr (θ) = Jl(θ) = 1 ∈ R (126)
JR⊕θ
R
= JR⊕θθ = 1 ∈ R (127)
JQ RQ = −JQ RR = 1 ∈ R (128)
2) Rotation action: For the action R · v we have,
JR·v
R
= lim
θ→0
R Exp( θ)v − Rv
θ
= lim
θ→0
R(I + [ θ]×)v − Rv
θ
= lim
θ→0
R [θ]× v
θ = R × v ∈ R2×1 (129) and
JR·vv = DRv
Dv = R ∈ R2×2 . (130) APPENDIX BTHE 3D ROTATION GROUPS S3 AND SO (3)
The Lie group S3 is the group of unit quaternions under quaternion multiplication. Its topology is the unit 3-sphere in
R4, and therefore its name S3. Quaternions (please consult for an in-depth reference) may be represented by either of these equivalent forms,
q = w + ix + jy + kz = w + v ∈ H
= [w x y z]> =
[w
v
]
∈ H , (131) 15
where w, x, y, z ∈ R, and i, j, k are three unit imaginary numbers such that i2 = j2 = k2 = ijk = −1. The scalar w is known as the scalar or real part, and v ∈ Hp as the vector or imaginary part. We note Hp the set of pure quaternions, i.e. , of null scalar part, with dimension 3. Inversion and composition are achieved by conjugation q−1 = q∗, where q∗ , w − v is the conjugate, and product qa ◦ qb = qa qb.The group SO (3) is the group of special orthogonal matrices in 3D space, or rotation matrices, under matrix multiplication. Inversion and composition are achieved with transposition and product as in all groups SO (n).Both groups rotate 3-vectors. They have isomorphic tangent spaces whose elements are identifiable with rotation vectors in
R³, so we study them together. It is in this space R³ where we define the vectors of rotation rate ω ≜ uω, angle-axis θ ≜ uθ, and all perturbations and uncertainties. The quaternion manifold S3 is a double cover of SO(3),
i.e. , q and −q represent the same rotation R. The first cover corresponds to quaternions with positive real part w > 0. The two groups can be considered isomorphic up to the first cover.
A. Exp and Log maps
The Exp and Log maps may be defined for quaternions of S3 and rotation matrices of SO(3). For quaternions q = (w, v) ∈ H we have (see Ex. 5),

    q = Exp(θu) ≜ cos(θ/2) + u sin(θ/2) ∈ H          (132)
    θu = Log(q) ≜ 2 v arctan(‖v‖, w) / ‖v‖ ∈ R³ .     (133)

We can avoid eventual problems due to the double cover of q by ensuring that its scalar part w is positive before doing the Log. If it is not, we can substitute q by −q before the Log. For rotation matrices we have (see Ex. 4),

    R = Exp(θu) ≜ I + sin θ [u]_× + (1 − cos θ) [u]_×² ∈ R^{3×3}   (134)
    θu = Log(R) ≜ θ (R − Rᵀ)^∨ / (2 sin θ) ∈ R³ ,                   (135)

with θ = cos⁻¹((trace(R) − 1)/2).
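As an illustration (ours, not part of the paper), Eqs. (134)–(135) can be implemented directly. The sketch below assumes NumPy and uses a small-angle fallback near θ = 0, where the closed forms are singular; the function names are our own.

    import numpy as np

    def hat(v):
        """Skew-symmetric matrix [v]_x of a 3-vector v."""
        x, y, z = v
        return np.array([[0., -z,  y],
                         [ z, 0., -x],
                         [-y,  x, 0.]])

    def exp_so3(theta_vec):
        """Exp map of SO(3), Eq. (134) (Rodrigues' rotation formula)."""
        angle = np.linalg.norm(theta_vec)
        if angle < 1e-9:                       # small-angle approximation
            return np.eye(3) + hat(theta_vec)
        U = hat(np.asarray(theta_vec, dtype=float) / angle)
        return np.eye(3) + np.sin(angle) * U + (1. - np.cos(angle)) * U @ U

    def log_so3(R):
        """Log map of SO(3), Eq. (135): rotation matrix -> rotation vector."""
        angle = np.arccos(np.clip((np.trace(R) - 1.) / 2., -1., 1.))
        if angle < 1e-9:
            return np.zeros(3)
        vee = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
        return angle * vee / (2. * np.sin(angle))

    theta_vec = np.array([0.1, -0.2, 0.3])
    assert np.allclose(log_so3(exp_so3(theta_vec)), theta_vec)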
B. Rotation action
Given the expressions above for the quaternion and the rotation matrix, the rotation action of quaternions on 3-vectors is performed by the double quaternion product,

    x′ = q x q* ,   (136)

while rotation matrices use a single matrix product,

    x′ = R x .   (137)

Both correspond to a right-hand rotation of θ rad around the axis u. Identifying in them x and x′ yields the identity

    R(q) = [ w²+x²−y²−z²   2(xy−wz)       2(xz+wy)     ;
             2(xy+wz)      w²−x²+y²−z²    2(yz−wx)     ;
             2(xz−wy)      2(yz+wx)       w²−x²−y²+z²  ]   (138)
C. Elementary Jacobian blocks
Since our defined derivatives map tangent vector spaces, and these spaces coincide for the 3D rotation manifolds of S3
and SO (3) , i.e. , θ = Log( q) = Log( R), it follows that the Jacobians are independent of the representation used ( q or R). We thus consider generic 3D rotation elements and note them with the sans-serif font R.
1) Adjoint: We have from (31)

    Ad_R θ = (R [θ]_× Rᵀ)^∨ = ([Rθ]_×)^∨ = Rθ ,

therefore

    Ad_R = R ,   (139)

which means, just to clarify it once again, that Ad_q = R(q), see (138), and Ad_R = R.
2) Inversion, composition: We have from Section III-B,
    J^{R⁻¹}_{R} = −Ad_R = −R         (140)
    J^{Q∘R}_{Q} = Ad_R⁻¹ = Rᵀ         (141)
    J^{Q∘R}_{R} = I .                 (142)
3) Right and left Jacobians: They admit the closed forms [11, pag. 40],
    J_r(θ) = I − ((1 − cos θ)/θ²) [θ]_× + ((θ − sin θ)/θ³) [θ]_×²              (143)
    J_r⁻¹(θ) = I + (1/2) [θ]_× + (1/θ² − (1 + cos θ)/(2θ sin θ)) [θ]_×²         (144)
    J_l(θ) = I + ((1 − cos θ)/θ²) [θ]_× + ((θ − sin θ)/θ³) [θ]_×²               (145)
    J_l⁻¹(θ) = I − (1/2) [θ]_× + (1/θ² − (1 + cos θ)/(2θ sin θ)) [θ]_×²          (146)

where we can observe that

    J_l = J_rᵀ ,   J_l⁻¹ = J_r⁻ᵀ .   (147)
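These closed forms are easy to validate numerically (our check, not part of the paper). Using the defining relation Exp(θ + δθ) ≈ Exp(θ) Exp(J_r(θ) δθ), a finite-difference Jacobian should match (143). The sketch below reuses hat(), exp_so3() and log_so3() from the SO(3) sketch above.

    import numpy as np
    # Assumes hat(), exp_so3() and log_so3() from the previous sketch are in scope.

    def jr_so3(theta_vec):
        """Right Jacobian of SO(3), closed form of Eq. (143)."""
        angle = np.linalg.norm(theta_vec)
        W = hat(theta_vec)
        if angle < 1e-7:
            return np.eye(3) - 0.5 * W
        return (np.eye(3)
                - (1. - np.cos(angle)) / angle**2 * W
                + (angle - np.sin(angle)) / angle**3 * W @ W)

    theta_vec = np.array([0.4, -0.1, 0.2])
    eps = 1e-6
    J_num = np.zeros((3, 3))
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        # Column i of Jr: Log( Exp(theta)^T Exp(theta + d) ) / eps
        J_num[:, i] = log_so3(exp_so3(theta_vec).T @ exp_so3(theta_vec + d)) / eps
    assert np.allclose(J_num, jr_so3(theta_vec), atol=1e-5)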
4) Right-plus and minus: We have for θ = Q ⊖ R,

    J^{R⊕θ}_{R} = R(θ)ᵀ ,      J^{R⊕θ}_{θ} = J_r(θ)         (148)
    J^{Q⊖R}_{Q} = J_r⁻¹(θ) ,    J^{Q⊖R}_{R} = −J_l⁻¹(θ)      (149)
5) Rotation action: We have

    J^{R·v}_{R} ≜ lim_{θ→0} ((R ⊕ θ) v − R v)/θ
               = lim_{θ→0} (R Exp(θ) v − R v)/θ
               = lim_{θ→0} (R (I + [θ]_×) v − R v)/θ
               = lim_{θ→0} R [θ]_× v / θ
               = lim_{θ→0} −R [v]_× θ / θ = −R [v]_× ,   (150)

where we used the properties Exp(θ) ≈ I + [θ]_× and [a]_× b = −[b]_× a. The second Jacobian yields,

    J^{R·v}_{v} ≜ lim_{∂v→0} (R (v + ∂v) − R v)/∂v = R .   (151)
APPENDIX C: THE 2D RIGID MOTION GROUP SE(2)

We write elements of the rigid motion group SE(2) as

    M = [R  t ; 0  1] ∈ SE(2) ⊂ R^{3×3} ,   (152)

with R ∈ SO(2) a rotation and t ∈ R² a translation. The Lie algebra and vector tangents are formed by elements of the type

    τ^∧ = [[θ]_×  ρ ; 0  0] ∈ se(2) ,   τ = [ρ ; θ] ∈ R³ .   (153)
A. Inverse, composition
Inversion and composition are performed respectively with matrix inversion and product,

    M⁻¹ = [Rᵀ  −Rᵀt ; 0  1]                        (154)
    M_a M_b = [R_a R_b   t_a + R_a t_b ; 0  1] .    (155)
B. Exp and Log maps
Exp and Log are implemented via exponential maps directly from the vector tangent space R³ ≅ se(2) = T SE(2) — see for the derivation,

    M = Exp(τ) ≜ [Exp(θ)   V(θ) ρ ; 0  1]   (156)
    τ = Log(M) ≜ [V⁻¹(θ) t ; Log(R)] ,       (157)

with

    V(θ) = (sin θ / θ) I + ((1 − cos θ)/θ) [1]_× .   (158)
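For illustration (ours, not from the paper), Eqs. (156) and (158) translate directly into code. A minimal NumPy sketch, with a small-angle fallback V → I when θ ≈ 0:

    import numpy as np

    def exp_se2(tau):
        """Exp map of SE(2), Eqs. (156),(158): tau = [rho1, rho2, theta] -> 3x3 matrix."""
        rho = np.asarray(tau[:2], dtype=float)
        theta = float(tau[2])
        if abs(theta) < 1e-9:
            V = np.eye(2)                          # limit of (158) as theta -> 0
        else:
            J1 = np.array([[0., -1.], [1., 0.]])   # [1]_x
            V = (np.sin(theta) / theta) * np.eye(2) + ((1. - np.cos(theta)) / theta) * J1
        c, s = np.cos(theta), np.sin(theta)
        M = np.eye(3)
        M[:2, :2] = [[c, -s], [s, c]]
        M[:2, 2] = V @ rho
        return M

    M = exp_se2([1.0, 2.0, 0.5])   # a 2D rigid motion built from a tangent vector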
C. Jacobian blocks

1) Adjoint: The adjoint is easily found from (31) using the fact that planar rotations commute,

    Ad_M τ = (M τ^∧ M⁻¹)^∨ = [Rρ − [θ]_× t ; θ] = Ad_M [ρ ; θ] ,

leading to

    Ad_M = [R   −[1]_× t ; 0  1] .   (159)
2) Inversion, composition: We have from Section III-B,

    J^{M⁻¹}_{M} = −Ad_M = [−R   [1]_× t ; 0  −1]                    (160)
    J^{M_a M_b}_{M_a} = Ad_{M_b}⁻¹ = [R_bᵀ   R_bᵀ [1]_× t_b ; 0  1]   (161)
    J^{M_a M_b}_{M_b} = I .                                            (162)
3) Right and left Jacobians: We have from [11, pag. 36],

    J_r = [ sin θ/θ         (1 − cos θ)/θ   (θρ₁ − ρ₂ + ρ₂ cos θ − ρ₁ sin θ)/θ² ;
            (cos θ − 1)/θ    sin θ/θ         (ρ₁ + θρ₂ − ρ₁ cos θ − ρ₂ sin θ)/θ² ;
            0                0               1 ]                                    (163)

    J_l = [ sin θ/θ         (cos θ − 1)/θ   (θρ₁ + ρ₂ − ρ₂ cos θ − ρ₁ sin θ)/θ² ;
            (1 − cos θ)/θ    sin θ/θ         (−ρ₁ + θρ₂ + ρ₁ cos θ − ρ₂ sin θ)/θ² ;
            0                0               1 ] .                                   (164)
4) Rigid motion action: We have the action on points p,

    M · p ≜ t + R p ,   (165)

therefore, and since for τ → 0 we have Exp(τ) → I + τ^∧,

    J^{M·p}_{M} = lim_{τ→0} (M Exp(τ) · p − M · p)/τ = [R   R [1]_× p]   (166)
    J^{M·p}_{p} = R .                                                      (167)

APPENDIX D: THE 3D RIGID MOTION GROUP SE(3)
We write elements of the 3D rigid motion group SE(3) as

    M = [R  t ; 0  1] ∈ SE(3) ⊂ R^{4×4} ,   (168)

with R ∈ SO(3) a rotation matrix and t ∈ R³ a translation vector. The Lie algebra and vector tangents are formed by elements of the type

    τ^∧ = [[θ]_×  ρ ; 0  0] ∈ se(3) ,   τ = [ρ ; θ] ∈ R⁶ .   (169)
A. Inverse, composition
Inversion and composition are performed respectively with matrix inversion and product,

    M⁻¹ = [Rᵀ  −Rᵀt ; 0  1]                        (170)
    M_a M_b = [R_a R_b   t_a + R_a t_b ; 0  1] .    (171)
B. Exp and Log maps
Exp and Log are implemented via exponential maps directly from the vector tangent space R⁶ ≅ se(3) = T SE(3) — see for the derivation,

    M = Exp(τ) ≜ [Exp(θ)   V(θ) ρ ; 0  1]   (172)
    τ = Log(M) ≜ [V⁻¹(θ) t ; Log(R)] ,       (173)

with (recall for Log(M) that θ = θu = Log(R))

    V(θ) = I + ((1 − cos θ)/θ²) [θ]_× + ((θ − sin θ)/θ³) [θ]_×² ,   (174)

which, notice, matches (145) exactly.
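Analogously (again our sketch, not part of the paper), Eqs. (172) and (174) give the SE(3) exponential. The code reuses hat() and exp_so3() from the SO(3) sketch in Appendix B and falls back to the first-order expansion of V near θ = 0.

    import numpy as np
    # Assumes hat() and exp_so3() from the SO(3) sketch in Appendix B are in scope.

    def exp_se3(tau):
        """Exp map of SE(3), Eqs. (172),(174): tau = [rho (3), theta (3)] -> 4x4 matrix."""
        rho = np.asarray(tau[:3], dtype=float)
        theta_vec = np.asarray(tau[3:], dtype=float)
        angle = np.linalg.norm(theta_vec)
        W = hat(theta_vec)
        if angle < 1e-7:
            V = np.eye(3) + 0.5 * W                # first-order limit of (174)
        else:
            V = (np.eye(3)
                 + (1. - np.cos(angle)) / angle**2 * W
                 + (angle - np.sin(angle)) / angle**3 * W @ W)
        M = np.eye(4)
        M[:3, :3] = exp_so3(theta_vec)
        M[:3, 3] = V @ rho
        return M

    M = exp_se3([1., 0., 0., 0.1, -0.2, 0.3])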
C. Jacobian blocks

1) Adjoint: We have (see Ex. 6),

    Ad_M τ = (M τ^∧ M⁻¹)^∨ = [Rρ + [t]_× Rθ ; Rθ] = Ad_M [ρ ; θ] ,

therefore,

    Ad_M = [R   [t]_× R ; 0  R] ∈ R^{6×6} .   (175)
2) Inversion, composition: We have from Section III-B,

    J^{M⁻¹}_{M} = −[R   [t]_× R ; 0  R]                          (176)
    J^{M_a M_b}_{M_a} = [R_bᵀ   −R_bᵀ [t_b]_× ; 0  R_bᵀ]          (177)
    J^{M_a M_b}_{M_b} = I₆ .                                       (178)
3) Right and left Jacobians: Closed forms of the left Jacobian and its inverse are given by Barfoot,

    J_l(ρ, θ) = [J_l(θ)   Q(ρ, θ) ; 0   J_l(θ)]                                    (179a)
    J_l⁻¹(ρ, θ) = [J_l⁻¹(θ)   −J_l⁻¹(θ) Q(ρ, θ) J_l⁻¹(θ) ; 0   J_l⁻¹(θ)]            (179b)

where J_l(θ) is the left Jacobian of SO(3), see (145), and

    Q(ρ, θ) = (1/2) ρ_×
              + ((θ − sin θ)/θ³) (θ_× ρ_× + ρ_× θ_× + θ_× ρ_× θ_×)
              − ((1 − θ²/2 − cos θ)/θ⁴) (θ_×² ρ_× + ρ_× θ_×² − 3 θ_× ρ_× θ_×)
              − (1/2) ((1 − θ²/2 − cos θ)/θ⁴ − 3 (θ − sin θ − θ³/6)/θ⁵) (θ_× ρ_× θ_×² + θ_×² ρ_× θ_×) .   (180)

The right Jacobian and its inverse are obtained using (76), that is, J_r(ρ, θ) = J_l(−ρ, −θ) and J_r⁻¹(ρ, θ) = J_l⁻¹(−ρ, −θ).
4) Rigid motion action: We have the action on points p,

    M · p ≜ t + R p ,   (181)

therefore, and since for τ → 0 we have Exp(τ) → I + τ^∧,

    J^{M·p}_{M} = lim_{τ→0} (M Exp(τ) · p − M · p)/τ = [R   −R [p]_×]   (182)
    J^{M·p}_{p} = R .                                                     (183)

APPENDIX E: THE TRANSLATION GROUPS (Rⁿ, +) AND T(n)
The group (Rn, +) is the group of vectors under addition and can be regarded as a translation group. We deem it trivial
in the sense that the group elements, the Lie algebra, and the tangent spaces are all the same, so t = t^∧ = Exp(t). Its equivalent matrix group (under multiplication) is the translation group T(n), whose group, Lie algebra and tangent vector elements are,

    T ≜ [I  t ; 0  1] ∈ T(n) ,   t^∧ ≜ [0  t ; 0  0] ∈ t(n) ,   t ∈ Rⁿ .
Equivalence is easily verified by observing that T(0) = I, T(−t) = T(t)⁻¹, and that the commutative composition

    T₁ T₂ = [I   t₁ + t₂ ; 0  1] ,

effectively adds the vectors t₁ and t₂ together. Since the sum in Rⁿ is commutative, so is the composition product in T(n). Since T(n) is a subgroup of SE(n) with R = I, we can easily determine its exponential map by taking (156), (172) with R = I and generalizing to any n,

    Exp : Rⁿ → T(n) ;   T = Exp(t) = [I  t ; 0  1] .   (184)

The T(n) exponential can also be obtained from the Taylor expansion of exp(t^∧), noticing that (t^∧)² = 0. This serves as immediate proof for the equivalent exponential of the (Rⁿ, +) group, which is the identity,

    Exp : Rⁿ → Rⁿ ;   t = Exp(t) .   (185)

This results in trivial, commutative, right- and left- alike, plus and minus operators in Rⁿ,

    t₁ ⊕ t₂ = t₁ + t₂     (186)
    t₂ ⊖ t₁ = t₂ − t₁ .    (187)
A. Jacobian blocks
We express translations indistinctly for T (n) and Rn, and note them S and T. The Jacobians are trivial (compare them with those of S1 and SO (2) in Section A-C1),
    Ad_T = I ∈ R^{n×n}                              (188)
    J^{T⁻¹}_{T} = −I ∈ R^{n×n}                       (189)
    J^{T∘S}_{T} = J^{T∘S}_{S} = I ∈ R^{n×n}           (190)
    J_r = J_l = I ∈ R^{n×n}                           (191)
    J^{T⊕v}_{T} = J^{T⊕v}_{v} = I ∈ R^{n×n}           (192)
    J^{S⊖T}_{S} = −J^{S⊖T}_{T} = I ∈ R^{n×n} .        (193)

REFERENCES

[1] H. Abbaspour and M. Moskowitz, Basic Lie Theory. World Scientific, 2007.
[2] R. Howe, "Very basic Lie theory," The American Mathematical Monthly, vol. 90, pp. 600–623, 1983.
[3] J. Stillwell, Naive Lie Theory. Springer-Verlag New York, 2008.
[4] T. D. Barfoot, State Estimation for Robotics. Cambridge University Press, 2017.
[5] E. Eade, "Lie groups for 2d and 3d transformations," Tech. Rep.
[6] C. Forster, L. Carlone, F. Dellaert, and D. Scaramuzza, "On-manifold preintegration for real-time visual-inertial odometry," IEEE Transactions on Robotics, vol. 33, no. 1, pp. 1–21, 2017.
[7] J. Deray and J. Sola, "Manif: A micro Lie theory library for state estimation in robotics applications," Journal of Open Source Software, vol. 5, no. 46, p. 1371, 2020.
[8] J. Sola, "Quaternion kinematics for the error-state Kalman filter," CoRR, vol. abs/1711.02508, 2017.
[9] G. Gallego and A. Yezzi, "A compact formula for the derivative of a 3-D rotation in exponential coordinates," Tech. Rep., 2013.
[10] T. D. Barfoot and P. T. Furgale, "Associating uncertainty with three-dimensional poses for use in estimation problems," IEEE Transactions on Robotics, vol. 30, no. 3, pp. 679–693, June 2014.
[11] G. Chirikjian, Stochastic Models, Information Theory, and Lie Groups, Volume 2: Analytic Methods and Modern Applications, ser. Applied and Numerical Harmonic Analysis. Birkhäuser Boston, 2011.
[12] F. Dellaert and M. Kaess, "Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing," vol. 25, no. 12, pp. 1181–1203, 2006.
[13] M. Kaess, H. Johannsson, R. Roberts, V. Ila, J. Leonard, and F. Dellaert, "iSAM2: Incremental smoothing and mapping with fluid relinearization and incremental variable reordering," in Robotics and Automation (ICRA), 2011 IEEE International Conference on, May 2011, pp. 3281–3288.
[14] R. Kummerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, "g2o: A general framework for graph optimization," in Robotics and Automation (ICRA), 2011 IEEE International Conference on, May 2011, pp. 3607–3613.
[15] V. Ila, L. Polok, M. Solony, and P. Svoboda, "SLAM++ - a highly efficient and temporally scalable incremental SLAM framework," The International Journal of Robotics Research, vol. 36, no. 2, pp. 210–230, 2017.
The angle of elevation is a widely used concept related to height and distance, especially in trigonometry. It is defined as the angle between the horizontal plane and the oblique line from the observer's eye to some object above eye level. This angle is always formed above the horizontal.
For example, if an observer is looking at a bird sitting on a rooftop, an angle is formed that is inclined from the observer's eye towards the bird. This angle of elevation is used to find distances, heights of buildings or towers, etc., with the help of trigonometric ratios such as sine, cosine and tangent.
Angle of Elevation Definition
The angle of elevation is an angle that is formed between the horizontal line and the line of sight. If the line of sight is upward from the horizontal line, then the angle formed is an angle of elevation.
In the figure above, an observer standing on the ground is looking at an object, and the line of sight makes an angle θ with the horizontal line. If we join an imaginary line between the object and the end of the horizontal line, a right-angled triangle is formed. Thus we can use trigonometry to find the distance of the observer from the tower or building. The height of the tower or building, or the height at which the object is kept, is taken as the opposite (perpendicular) side, and the horizontal line is taken as the adjacent side of the triangle. The related terminology is given below.
Terms Used for Angle of Elevation
The three terms related to the angle of elevation are
Angle
If two rays or two line segments meet at a common endpoint, that point is known as the vertex. Two straight lines meeting at a common point are said to form an angle.
Horizontal Line
A horizontal line is a straight line in the coordinate plane on which all points have the same y-coordinate. The angle of elevation is measured from the horizontal line up to the line of sight.
Line of Sight
The line which is drawn from the eyes of the observer to the point being viewed on the object is known as the line of sight.
Here, the object lies above the observer's eye level. If we know the angle of elevation, then we can easily determine the distance or the altitude. The reason trigonometric functions apply is that the angle formed from the observer's eye to the top of a building or tower produces an imaginary right triangle, where the height of the building or tower is the perpendicular (opposite) side of the triangle.
Angle of Elevation Formula
The formula for finding the angle of elevation depends on which sides of the right triangle are known: the opposite side, the adjacent side, or the hypotenuse. If the distance from the object and the height of the object are given, then the angle of elevation satisfies
Tangent of the angle of elevation = Height of the Object / Distance from the object
or
Tan θ = Opposite Side/Adjacent Side
Angle of Elevation and Depression Comparison
The angle of depression is the opposite scenario of the angle of elevation. In this case, the observer stands at a height and the object lies below the observer's eye level. If the object is kept below the eye level of the observer, then the angle formed between the horizontal line and the observer's line of sight is called the angle of depression.
The formula of the angle formed here is given by:
tangent of angle of depression = Opposite side/adjacent side
Angle of Elevation Examples
Question 1: Find the value of θ in the given figure.
Solution:
Given:
In the given triangle ABC, AC = 335 ft, BC = 249 ft
To find ∠A = θ
tan θ = Opposite side/ Adjacent side
tan θ = BC / AC
tan θ = 249/ 335
tan θ = 0.74
Therefore, θ = tan⁻¹(0.74) ≈ 36°
Hence, the value of θ is approximately 36 degrees.
Question 2: The angle of elevation from an observer to the top of a tower is 45 degrees, and the height of the tower is 150 ft. Find the horizontal distance between the observer and the tower.
Solution: Given, θ = 45
Height of tower = 150 ft.
To find: the distance between the observer and base of the tower.
By the formula, we know,
tan θ = Height of the tower/Distance between observer and tower
tan 45 = 150/D
since tan 45 = 1
Therefore,
1 = 150/D
D = 150 ft.
Hence, the distance between the observer and tower is equal to the height of the tower.
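The two worked examples can be checked with a few lines of code. The following Python sketch (ours, with hypothetical function names) uses the tangent relation above:

    import math

    def angle_of_elevation(height, distance):
        """Angle of elevation in degrees, from the opposite (height) and adjacent (distance) sides."""
        return math.degrees(math.atan2(height, distance))

    def distance_from_angle(height, angle_deg):
        """Horizontal distance to the object, given its height and the angle of elevation."""
        return height / math.tan(math.radians(angle_deg))

    print(angle_of_elevation(249, 335))    # Question 1: about 36.6 degrees
    print(distance_from_angle(150, 45))    # Question 2: about 150 ft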
Topological order in a three-dimensional toric code at finite temperature

Claudio Castelnovo (Rudolf Peierls Centre for Theoretical Physics, University of Oxford, Oxford OX1 3NP, United Kingdom) and Claudio Chamon (Department of Physics, Boston University, Boston, Massachusetts 02215, USA)

Received 12 May 2008; revised manuscript received 31 July 2008; published 21 October 2008

We study topological order in a toric code in three spatial dimensions, or a 3+1D Z2 gauge theory, at finite temperature. We compute exactly the topological entropy of the system and show that it drops, for any infinitesimal temperature, to half its value at zero temperature. The remaining half of the entropy stays constant up to a critical temperature Tc, dropping to zero above Tc. These results show that topologically ordered phases exist at finite temperatures, and we give a simple interpretation of the order in terms of fluctuating strings and membranes and how thermally induced point defects affect these extended structures. Finally, we discuss the nature of the topological order at finite temperature and its quantum and classical aspects.
DOI: 10.1103/PhysRevB.78.155120    PACS numbers: 03.65.Vf, 11.15.-q, 03.67.Mn

I. INTRODUCTION

Some quantum systems are characterized by a type of order which cannot be captured by a local order parameter that signals broken symmetries, but instead the order is topological in nature.1 One of the ways in which this topological order manifests itself is in a ground-state (GS) degeneracy that cannot be lifted by any local perturbation, and that depends on the genus of the surface in which the system is defined. Recently, there have been efforts to find characterizations of topological order other than ground-state degeneracies, in particular, exploring the entanglement in the ground-state wave function.2,3 At zero temperature, topological order can be detected using the von Neumann entanglement entropy, more precisely a topological contribution to it that can be separated from the boundary contribution by appropriate subtractions of different bipartitions of the system.2,3 Because the pure state density matrix is constructed from the ground state, it was argued in Ref. 2 that topological order is a property of the wave function, and not of the Hamiltonian, at absolute zero temperature.
An interesting question is what happens with topological order at finite temperature. The question is relevant because thermal fluctuations, no matter how small, are present in any laboratory system. To address this issue, it was proposed in Ref. 4 to use the topological entropy as a probe of topological order, but to compute it using an equilibrium mixed-state density matrix ρ̂ = Z⁻¹ e^{−βĤ}. It becomes clear that, as opposed to zero temperature, for which one can do away with the full information contained in the Hamiltonian and just use the ground-state wave function, topological order, if present at finite temperature, must be a property of the Hamiltonian.
The topological entropy was computed exactly for the two-dimensional (2D) Kitaev model5 at finite temperature T, and it was shown that the infinite system size limit and the T→0 limit do not commute and that at finite T the topological entropy vanishes in the thermodynamic limit. Thus, it was argued that the topological order in the 2D system was fragile.4,6,7 Here we show that the situation in three dimensions is rather different using the three-dimensional (3D) version of Kitaev's model as an example.8 In contrast to two dimensions, topological order survives up to a phase transition at a finite temperature Tc. The order can be probed through a nonvanishing topological entropy, as well as understood from a simple cartoon picture that we present in the paper, using the fact that in 3D strings can move around point defects, as opposed to two dimensions.
We prove in this paper that the von Neumann entropy of a subsystem A of a Z2 gauge model such as Kitaev's toric code, in any number of dimensions, can always be decomposed into two additive contributions from each of the two gauge structures (magnetic and electric),9

    S_vN(A; T) = S_vN^(S)(A; T/λ_A) + S_vN^(P)(A; T/λ_B) ,   (1.1)

where S_vN^(S) and S_vN^(P) are the separable contributions from the stars and plaquettes of the model, and λ_A and λ_B the associated coupling constants for these two structures. Consequently, the same additive separability holds for the topological entropy, which is a sum of two independent contributions,

    S_topo(T) = S_topo^(S)(T/λ_A) + S_topo^(P)(T/λ_B) .   (1.2)
One of the contributions, S_topo^(S), evaporates for any infinitesimal temperature in the thermodynamic limit, just as in two dimensions, but the other one, S_topo^(P), remains constant up to a finite-temperature phase transition at Tc ≃ 1.313346 λ_B, which occurs for the 3D case,

    S_topo^(3D)(T) = 2 ln 2   for T = 0 ,
                     ln 2     for 0 < T < Tc ,
                     0        for T > Tc .   (1.3)

As a consequence of these results, we argue that topological order can be well defined at finite temperatures in three dimensions.10 This finding raises the following interesting question: is the finite-T order classical or quantum? Perhaps another way to ask the question is the following: what kind of information can be robustly stored using the isolated topological sectors in phase space that cannot be connected by local moves (2^3 such states in three dimensions)? Classical bits or quantum (qubits) information? While we cannot argue that the system does not realize a full quantum memory, we can at the least argue that it can store probabilistic information (pbits—probabilistic bits11) in the form of a quantum superposition of states in the different topological sectors, where the square amplitudes for all states in a given sector (a probability) do not fluctuate in the thermodynamic limit if the coupling to a thermal bath is local. However, the relative phases for all these amplitudes could be scrambled. This weak type of quantum superposition is not discernible from a classical probability distribution. Finally, this example shows that the notion of classical topological order, suggested for hard constrained models in two dimensions,12 is well defined in three dimensions without resorting to any hard constraints.
II. MODEL

Consider a three-dimensional version of Kitaev's toric code,8 defined on a simple-cubic lattice of size N = L × L × L, with periodic boundary conditions (BCs) and spin-1/2 degrees of freedom σ_i living on the bonds, i = 1, ..., 3N (σ_i^x, σ_i^y, and σ_i^z being the three Pauli matrices). Let us label the centers of each single square plaquette in the lattice with p = 1, ..., 3N and each site of the cubic lattice with s = 1, ..., N.
Let us define the plaquette and star operators on the lattice,

    B_p = ∏_{i∈p} σ_i^z ,   A_s = ∏_{i∈s} σ_i^x ,   (2.1)

as illustrated in Fig. 1. The Hamiltonian of the model can then be written in terms of these operators as

    H = −λ_A Σ_s A_s − λ_B Σ_p B_p ,   (2.2)

where λ_A and λ_B are two real positive constants.
Notice that all star and plaquette operators commute, but they are not all independent. While only the product of all star operators equals the identity, therefore leaving N−1 independent star operators, the product of the plaquette operators around each cubic unit cell gives the identity, therefore introducing N−1 constraints on the 3N total plaquette operators (the product of all but one cube is equivalent to that same cube, so we have one less constraint). Moreover, three additional constraints come from the fact that the product of all plaquette operators along any crystal plane (CP) in the cubic lattice [i.e., (x,y), (x,z), or (y,z)] yields the identity, and we are finally left with 2N−2 independent plaquette operators.
The GS manifold of the system is identified by having all plaquette and star quantum numbers equal to +1, and it is 2^{3N−(N−1)−(2N−2)} = 2^3 dimensional, assuming periodic boundary conditions in all three directions. Similarly to the 2D case, one can notice that this degeneracy has a topological nature, and the different sectors are distinguished by three nonlocal operators,

    ∏_{i∈γ₁} σ_i^z ,   ∏_{i∈γ₂} σ_i^z ,   ∏_{i∈γ₃} σ_i^z ,   (2.3)

or

    ∏_{i∈ξ₁} σ_i^x ,   ∏_{i∈ξ₂} σ_i^x ,   ∏_{i∈ξ₃} σ_i^x ,   (2.4)

which are diagonal in the z and x bases, respectively. Here the γ_i can be any winding paths along the edges of the cubic lattice in each of the three crystal directions x, y, or z, and the ξ_i can be any winding planes perpendicular to each of the crystal directions and passing through the midpoints of the corresponding edges of the cubic lattice (i.e., crystal planes in the dual lattice whose sites sit at the centers of the elementary cubic cells). Two examples are shown in Fig. 2 for clarity.

FIG. 1. (Color online) Illustration of the Kitaev model in three dimensions, with explicit examples of a star operator A_s at the lattice site s and of three plaquette operators B_p at the plaquette-dual sites p1, p2, and p3. The spin index i labels, respectively, the six red spins around s and the four blue spins around p, connected by dashed lines.

In the z basis and in the topological sector where all the operators (2.3) equal +1, the GS wave function of the system can be written as
FIG. 2. (Color online) Two examples (a winding path γ₁ and a winding plane ξ₁) of the nonlocal operators needed to distinguish between the degenerate GS of the 3D Kitaev model.
    |GS⟩ = |G|^{−1/2} Σ_{g∈G} g |0⟩ ,   (2.5)

where |0⟩ is any state in the sector, say the state with all the σ_i^z = +1, and G is the Abelian group generated by all products of star operators, of dimension |G| = 2^{N−1}. In the x basis and in the topological sector where all the operators (2.4) equal +1, the GS wave function of the system can be written as in Eq.
(2.5), where now |0⟩ is any state in the sector, say the state with all the σ_i^x = +1, and G is the Abelian group generated by all products of plaquette operators, of dimension |G| = 2^{2N−2}.
Notice the two different underlying structures in the system: the closed z loops along the edges of the cubic lattice, which satisfy ∏_{i∈loop} σ_i^z = 1 identically, and the closed x membranes in the body-centered dual lattice, locally perpendicular to the edges of the original lattice, satisfying ∏_{i∈membrane} σ_i^x = 1 identically (see Fig. 3).
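As a quick illustration of the counting above (our sketch, not part of the paper): 3N spins, N − 1 independent star operators and 2N − 2 independent plaquette operators leave a 2^3-fold degenerate ground space for every system size L.

    def gs_degeneracy_exponent(L):
        """Exponent k such that the ground space on an LxLxL periodic lattice is 2^k dimensional."""
        N = L**3                                   # lattice sites
        n_spins = 3 * N                            # one spin per bond
        n_indep_stars = N - 1                      # product of all stars is the identity
        n_indep_plaquettes = 2 * N - 2             # 3N plaquettes minus (N-1) cube and 3 plane constraints
        return n_spins - n_indep_stars - n_indep_plaquettes

    for L in (2, 3, 4, 10):
        assert gs_degeneracy_exponent(L) == 3      # degeneracy 2^3, independent of L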
III. TOPOLOGICAL ENTROPY AT ZERO TEMPERATURE

Let us first compute the zero-temperature topological entropy of the system using a three-dimensional version of the bipartition scheme proposed by Levin and Wen2 in two dimensions. Notice, however, that in three dimensions a bipartition can be topologically nontrivial with respect to closed loops but not with respect to closed membranes—e.g., a donut—and vice versa—e.g., a spherical shell. Thus, there is no unique way to generalize the 2D case. Two equally valid options are illustrated in Fig. 4, based on a "spherical" (1–4) and a "donut-shaped" (5–8) bipartition scheme, respectively.
In the z basis,13 where G is generated by the star operators, the calculation of the entanglement entropy S_vN proceeds as in the 2D case.4,12,14 Using the group property of G in Eq. (2.5), one can show that

    S_vN(A) = −ln (d_A d_B / |G|) ,   (3.1)

where d_A is the dimension of the subgroup G_A ⊂ G containing all the elements of G that act as the identity on B, G_A = {g ∈ G | g = g_A ⊗ 1_B}, and similarly for subsystem B. As in the 2D case, these subgroup dimensions depend on the number N_A^s (N_B^s) of star operators acting solely on spins in A (B) and on the number m_A (m_B) of connected components of A (B),

    d_A = 2^{N_A^s + m_B − 1} ,   (3.2)
    d_B = 2^{N_B^s + m_A − 1} .   (3.3)

The m_B contribution to d_A and the m_A contribution to d_B come from the so-called collective operations, i.e., elements of the groups G_A (G_B) that cannot be expressed as products of star operators in A (B). In the 3D case, such collective operations correspond to noncontractible closed membranes.
We can then compute the topological entropy Stopo of the system in the z basis from either the spherical or the donut-shaped bipartition scheme, Stopo z = lim r,R→ −SvN 1A + SvN 2A + SvN 3A −SvN 4A = ln 2, FIG. 3. Color online Two examples of the underlying struc-tures of the 3D Kitaev model: the closed z loops along the edges of the cubic lattice, which satisfy loopi z=1 and the closed x mem-branes in the body-centered dual lattice, satisfying membranei x=1.
FIG. 4. Illustration of the two bipartition schemes, labeled (1)–(8), used for the 3D Kitaev model: spherical (top) and donut shaped (bottom).
TOPOLOGICAL ORDER IN A THREE-DIMENSIONAL… PHYSICAL REVIEW B 78, 155120 2008 155120-3 Stopo z = lim r,R→ −SvN 5A + SvN 6A + SvN 7A −SvN 8A = ln 2, 3.4 where we used the fact that all Ns contributions cancel out exactly. In fact, if we define NiAB s =Ns−NiA s−NiB s to be the number of star operators acting simultaneously on A and B, Ns=N being the total number of star operators in the sys-tem, one can show that N1A s + N1B s + N4A s + N4B s −N2A s + N2B s + N3A s + N3B s = 2N −N1AB s −N4AB s −2N + N2AB s + N3AB s = 0. 3.5 This result relies on the fact that the total boundary in bipar-titions 1 and 4 is the same—with the same multiplicity and with precisely the same edge and corner structure—as in bipartitions 2 and 3 by construction. Therefore, N1AB s +N4AB s =N2AB s +N3AB s , and similarly for bipartitions 5–8.
Let us also compute the topological entropy in the x basis,13 as it will be useful when we consider the finite-temperature case. The group G is now generated by the plaquette operators, which are highly redundant and require more involved calculations to obtain the von Neumann en-tropy SvN. In fact, while Eq. 3.1 still holds, one needs to count the number of independent plaquette generators of subgroups GA and GB in order to obtain the equivalent of Eqs. 3.2 and 3.3. Notice that the collective operations are now given by closed loops, and only bipartitions 4 and 5 allow for nontrivial i.e., noncontractible loops.
As we discussed before, G =22N−1. This arises from counting all independent generators of G as the total number of plaquettes in G all possible generators minus the number of independent constraints. These are all but one of the cubic unit cells plus three crystal planes. Similar arguments apply to the bipartitions 1–8. Notice that in all of the bipartitions, subsystem A does not contain any entire crystal plane, while subsystem B always contains all three crystal planes. Taking advantage of this simplification, in the following it will be understood that GB has three less independent generators with respect to GA.
Let us proceed case by case. For bipartitions where both A and B have only one connected component without handles, such as bipartitions 2, 3, 6, and 7 in Fig. 4, the group GA equivalently GB is generated by all the plaquette operators acting solely on A, subject to the constraints given by all cubic unit cells entirely contained in A. There are no collective operations in this case, and one obtains dA = 2NA p−NA c, 3.6 dB = 2NB p−NB c, 3.7 where NA p is the number of plaquette operators acting on spins in A, NA c is the number of cubic unit cells in A, and similarly for B.
Consider then the case of bipartition 4 equivalently 5.
Although both A and B are still connected, the presence of a handle allows now for collective operations. Take a crystal plane perpendicular to the largest surface of subsystem A and draw it so that it bisects the donut into two identical U-shaped portions see Fig. 5 top
. The intersection of this plane with A gives two rectangles of size rR−2r, a dis-tance R−2r apart. Now take the product of all plaquettes belonging to one of the rectangles plus those at its boundary.
The resulting operation acts on B alone, yet it cannot be constructed from plaquettes in B because the “outer bound-ary” of the rectangle cannot be the sole boundary of a surface in B. Notice that this collective operation can be deformed at will and moved along the donut by appropriate products of plaquettes in B; therefore there is only one such independent operation. Similar arguments apply if we repeat the construc-tion starting from a plane parallel to the largest surface of the subsystem A, again chosen so as to bisect the donut. This yields another independent collective operation acting now on A see Fig. 5 bottom
. As a result, dA = 2NA p−NA c+nA, 3.8 dB = 2NB p−NB c+nB, 3.9 where nA=1 and nB=1 are the number of collective opera-tions in A and B, respectively.
Finally, one can show that there are no collective opera-tions in the x basis in bipartitions 1 and 8. In fact, all closed loops are contractible to a point both in A and in B in these bipartitions. However, the disconnected nature of subsystem B in bipartition 1 equivalently, subsystem A in bipartition 8 requires special care in the counting of the independent gen-erators of GA respectively, GB. As in the previous cases, all plaquettes in A belong to GA, and all cubic unit cells in A act as independent constraints toward the counting of the independent generators of GA. However, in bipartition 1, there is a class of closed membranes in A that cannot be assembled as a product of cubic cells in A. This is the case, for example, of the closed cubic membranes in A that sur-round entirely the inner component of B. Any two such membranes can be obtained one from the other via multipli-cation by cubic unit cells in A. Thus, they only give rise to one additional constraint in the counting of the independent generators. In general, the number of such constraints is given by mB−1, where mB is the number of connected com-r R −2r R −2r FIG. 5. Color online Illustration of the collective operations in bipartitions 4 and 5 in the x basis, acting on subsystem B top and subsystem A bottom, respectively.
CLAUDIO CASTELNOVO AND CLAUDIO CHAMON PHYSICAL REVIEW B 78, 155120 2008 155120-4 ponents of B. Similarly for bipartition 8 and subsystem B, one obtains mA−1 additional constraints, where mA is the number of connected components of A.
Combining all of the above considerations into a general expression for the dimensions of subgroups GA and GB in the x basis, one obtains dA = 2NA p−NA c+nA−mB−1−mA CP, 3.10 dB = 2NB p−NB c+nB−mA−1−mB CP, 3.11 where mA CP mB CP is the number of crystal planes CPs en-tirely contained in A B. Recall that all bipartitions of in-terest have mA CP=0 and mB CP=3.
We can then use Eq. 3.1 to compute the topological entropy of the system using the spherical and the donut-shaped bipartition schemes in the x basis, Stopo x = lim r,R→ −SvN 1A + SvN 2A + SvN 3A −SvN 4A = −1 + 2ln 2 = ln 2, Stopo x = lim r,R→ −SvN 5A + SvN 6A + SvN 7A −SvN 8A = 2 −1ln 2 = ln 2, 3.12 where we used the fact that all Np and Nc contributions cancel out exactly. In fact, if we define NiAB p =Np−NiA p −NiB p to be the number of plaquette operators acting simul-taneously on A and B, Np=3N being the total number of plaquette operators in the system, and we define NiAB c =Nc−NiA c−NiB c to be the number of cubic unit cells simul-taneously encompassing spins in A and in B, Nc=N being the total number of cubic unit cells in the system, one can show that N1A p −N1A c + N1B p −N1B c + N4A p −N4A c + N4B p −N4B c −N2A p −N2A c + N2B p −N2B c + N3A p −N3A c + N3B p −N3B c = 4N −N1AB p −N4AB p + N1AB c + N4AB c −4N + N2AB p + N3AB p −N2AB c −N3AB c = 0.
3.13 This result relies on the fact that the total boundary in bi-partitions 1 and 4 is the same—with the same multiplicity and with precisely the same edge and corner structure—as in bipartitions 2 and 3 by construction. Therefore, N1AB p +N4AB p =N2AB p +N3AB p , N1AB c +N4AB c =N2AB c +N3AB c , and similarly for bipartitions 5–8.
Clearly, both bipartition schemes capture the topological nature of the system and provide an equally valid measure of the topological entropy. In two dimensions the choice of bi-partitions 1–4 in Ref. 2 is such that bipartition 1 is topologi-cally equivalent to bipartition 4 upon exchange of subsystem A with subsystem B, while bipartitions 2 and 3 are actually topologically invariant upon the same exchange. Hence, be-cause the von Neumann entropy for the ground state is sym-metric under the exchange of A and B, the topological con-tribution measured in the 2D scheme is bound to be double counted, namely, Stopo=2 ln D=ln D2, where D is the so-called quantum dimension of the system.2,3 In three dimen-sions, both the scheme 1–4 and the scheme 5–8 isolate the topological contribution to the entanglement entropy without double counting. Notice that all the bipartitions are topologi-cally invariant under the exchange of A and B, except for bipartitions 1 and 8. If we want to recover the symmetry of the 2D scheme, a possible solution is to define Stopo = lim r,R→ −SvN 1A + SvN 2A + SvN 3A −SvN 4A −SvN 5A + SvN 6A + SvN 7A −SvN 8A = ln D2, 3.14 with D=2. As we will see in the following, the symmetric 1–8 choice is actually required if we are interested in study-ing the finite-temperature case since the von Neumann en-tropy is no longer invariant upon exchange of A and B, and a nontopologically symmetric choice of bipartitions would lead to different results depending on whether we work with subsystem A or subsystem B.15 IV. FINITE TEMPERATURE BEHAVIOR In this section we study the behavior of the entanglement and topological entropies at finite temperature via a generali-zation of the approach used for the 2D Kitaev model in Ref.
4. A qualitative picture of the effect of thermal fluctuations can be argued by comparison with the two-dimensional case.
There the information about the topological sectors is stored in the eigenvalues of winding loop operators, namely, prod-ucts of spin operators along winding loops. On a torus, there are infinitely many choices for such winding loop operators, but the absence of magnetic and electric charges i.e., plaquettes and stars with eigenvalue −1 in the gauge struc-ture at zero temperature reduces them to only two indepen-dent ones: the two noncontractible winding loops on the torus. Any other can be obtained from these two via multi-plication by an appropriate set of plaquette or star operators, which have eigenvalue +1 at T=0. Clearly the presence of order 1 deconfined thermal defects destroys immediately all topological information stored in the system since the eigenvalues of two loops on opposite sides of a defect are no longer consistent with each other see Fig. 6.
TOPOLOGICAL ORDER IN A THREE-DIMENSIONAL… PHYSICAL REVIEW B 78, 155120 2008 155120-5 Let us now consider the case of the Kitaev model in three dimensions. First of all, we need to discuss the two gauge structures separately since they are no longer identical as in two dimensions. If we work in the x basis, then the topo-logical information is stored in the eigenvalues of winding membrane operators, given by the product of all x operators belonging to a closed winding surface locally perpendicular to the bonds of the sites it crosses see Fig. 2. All possible choices of these membranes yield the same result at zero temperature since the corresponding operators can be ob-tained one from the other by products of sets of star opera-tors, which have all eigenvalue +1 in the GS. Thermal de-fects in this case play exactly the same role as in two dimensions since two membranes on opposite sides of a de-fect read off opposite eigenvalues of the corresponding wind-ing membrane operator.
On the other hand, the situation is quite different for the loop operators defined in the z basis. There the topological information is stored in winding loop operators—as in the 2D case—but they are now embedded in three dimensions.
Clearly, localized defects have no disruptive effects on the topological information because any two winding loops with equal winding numbers can be smoothly deformed one into the other without crossing any defects at low enough temperatures see Fig. 7. This is indeed the case here, where we learn from 3D lattice gauge theory that defective plaquettes are confined at low temperatures. They are created in quadruplets by a single spin-flip operation, and they can be pairwise separated only at the cost of creating a string of defective plaquettes in between the two pairs.16,17 Therefore, the winding loop operators will keep carrying the same quan-tum information in presence of a low density of defects. If we were to read out the topological information from the system, we would be getting the correct result as long as the chosen loop does not pass directly through a defect.
However, can this information be accessed by means of the same expectation values of loop operators that are used at zero temperature Eqs. 2.3 and 2.4
? The answer to this question is negative, as it was recently shown using gauge theory arguments in Ref. 18. A simple reason as to why naively choosing a given loop operator and looking at its expectation value alone does not capture the order below Tc is that, typically, winding loops will pass through at least one defect in the thermodynamic limit the probability of a loop not crossing any defect scales as 1−defL, where def is the equilibrium density of defects at a given temperature and L is the linear size of the system
. However, only those loops that avoid the defects contain the topological information. Recall that in two dimensions the eigenvalues of loop operators, even when they do not pass through defects, differ on two sides of one defect, in contrast to the situation in three di-mensions. This implies that the average expectation value of loop operators is bound to vanish exponentially in system size for any finite density of defects, i.e., for any finite tem-perature, independently of the nature of the system. As we will show in the following, the topological entropy of the system is capable of capturing these physical differences, and it accurately reflects the topological properties of the different phases.
The physical meaning of the distinct sectors can be un-derstood as follows. Consider preparing the system in a co-herent superposition of different topological sectors at zero temperature. Raise the temperature to some value TTc and then lower it again back to zero. If defects are confined, transitions between different loop sectors are forbidden throughout the process. We are thus bound to obtain a final state where the probability magnitude of amplitude square of finding the final state in each loop sector is the same as in the initial state. In this sense, the loop sectors are protected from thermal fluctuations at low temperatures, and topologi-cal order survives at finite temperature TTc.
That the system does not change sectors during the time that it is in thermal equilibrium with the bath is a dynamical problem broken ergodicity. This can be understood by con-FIG. 6. Color online Qualitative illustration of the disruptive effect of two defects solid red dots in the 2D Kitaev model on a torus: two winding loops black wavy lines on either side of a defect solid circle, read off opposite eigenvalues of the corre-sponding winding loop operator.
FIG. 7. Color online Qualitative illustration of the reason why the topological information stored in the underlying z loop struc-ture of the 3D Kitaev model is robust to thermal fluctuations: even in presence of sparse defects solid red circles, any two winding loops black wavy lines, with equal winding numbers, can be smoothly deformed one into the other without crossing any defects.
The wiggly lines represent qualitatively the confining strings be-tween defect “pairs” discussed in the text. CLAUDIO CASTELNOVO AND CLAUDIO CHAMON PHYSICAL REVIEW B 78, 155120 2008 155120-6 trasting the time scales for mixing sectors if defects are con-fined or deconfined. Deconfined thermal defects are free to randomly walk across the system and induce transitions be-tween different topological sectors by means of creation, system-spanning propagation, and annihilation processes.
The characteristic time for a sector-changing process scales therefore as some power of the system size, deconfinedL.
In contrast, confined defects will have to overcome an energy barrier of the order of L to be able to wind around the system and induce a change in the topological sector. As a result, their characteristic time scale is instead exponential in sys-tem size, confinedecL. Even for rather small systems, con-fined defects would require time scales larger than the age of the universe to transition between sectors.
An even more interesting situation occurs when both Z2 gauge defect types are confined, so that the x and z topo-logical sectors are both protected. This case is briefly dis-cussed in Appendix A, and it is related to error recovery that was argued to be realizable, for example, in a four-dimensional 4D toric code.6 What we argue here based on the finite-temperature studies is that the system can be self-correcting: if the system is prepared in a given superposition at zero temperature and its temperature is raised and again lowered to zero without ever going above Tc, the system returns to the same original quantum state a “boomerang” effect.
The protection holds at low temperatures, but it is bound to vanquish as the density of defective plaquettes with eigen-value −1 grows with temperature: once enough defects are in place, one can no longer deform paths around them. There-fore, we expect a loss of topological information as tempera-ture is increased via a topological phase transition at finite temperature.
In analogy with 3D lattice gauge theory, we expect this transition to occur when plaquette defects deconfine at high enough temperature. This is captured by the expectation value of Wilson loop operators, which is exponentially sup-pressed with the length of the loop perimeter law at low temperatures, while it is suppressed with the area of the minimal enclosed surface area law at high temperatures.16,17,19,20 In our notation, the transition tempera-ture is set by the energy scale B, and the transition is ex-pected to occur at the critical point of the 3D lattice gauge theory.
The topological entropy is a nonlocal order parameter that detects the presence of topological order in a system. Any loss of topological information, e.g., whenever some topo-logical sectors become ill-defined, should have a measurable effect on such entropy. Indeed, we show below that this is the case and that the qualitative picture inferred from the arguments above is confirmed by an exact calculation of the topological entropy at finite temperature.
A. Density matrix Let us work for convenience in the x tensor product basis, where the Hilbert space H is spanned by the whole set of orthonormal states , labeled by the configurations of a classical Ising model on the bonds of a 3D simple-cubic lattice the value 1 of each Ising variable corresponds to the eigenvalue of the x operator at the same site. Define G to be the group generated by all plaquette operators Bp =ipi z. Recall that any two elements of the group differing by products of plaquettes around closed membranes are, in fact, the same element i.e., they are defined modulo the identities closed membraneBp=1, where we are assuming peri-odic boundary conditions, and full crystal planes are there-fore closed membranes as well. Recall also that G =22N−2, where N is the number of sites in the simple-cubic lattice.
Every two elements of the group commute with each other and g2=1, ∀gG. For later convenience, let us label with =0 the fully magnetized state x= +1.
The equilibrium properties of the system at finite tempera-ture are captured by the density matrix, T = 1 Ze−H ˆ = , e−H e−H .
4.1 For convenience of notation, let us rewrite Hamiltonian 2.2 as H = −BP −AS, P = p Bp, S = s As.
Notice that S =Ms , where Ms is the net “star magnetization,” i.e., the difference between the number of stars with eigenvalue of +1 and with eigenvalue −1 in the state . The action of any group element g is to flip plaquettes, which cannot change the sign of any star operator since they commute, and therefore Sg =Msg , ∀g G.
Thus, the denominator of Eq. 4.1 becomes e−H = eAMs eBP .
4.2 Upon expanding eBP = p cosh B + sinh BBp
, 4.3 as follows from the definition P=pBp and from the fact that Bp 21, ∀p, one can explicitly compute the last term eBP = p cosh B + sinh BBp .
4.4 All nonvanishing contributions in Eq. 4.4 come from products of plaquette operators that reduce to the identity i.e., products around closed membranes. The above equa-tion is therefore independent of , which we set to the ref-erence state 0 in the following.
The set of all possible closed membranes in a periodic 3D simple-cubic lattice is in one-to-two correspondence with all TOPOLOGICAL ORDER IN A THREE-DIMENSIONAL… PHYSICAL REVIEW B 78, 155120 2008 155120-7 possible configurations of an Ising model on the dual simple-cubic lattice the membranes are, say, the antiferromagnetic domain boundaries, provided we allow for both periodic and antiperiodic boundary conditions in all three directions. In this language, the sum of all nonvanishing contributions can be written as 0 eBP 0 = 1 2 C cosh B
3N−NAFCsinh B
NAFC, 4.5 where C is a generic configuration of the 3D Ising model with any type of boundary conditions, 3N is the total number of nearest-neighbor NN bonds, and NAFC is the number of antiferromagnetic NN bonds. The factor 1/2 comes from the Z2 symmetry: a given membrane configuration corre-sponds to two equivalent but distinct Ising configurations.
For convenience, let us introduce the simplified notation c =cosh B, s=sinh B, and t=s/c=tanh B and define J 0 such that e−2J=t recall that B0. The above expres-sion can then be further simplified to 20 eBP 0 = c3N C tNAFC = c3N C e−2JNAFC = c3N C expJ i,j SiSj −3NJ = sc3N/2 C expJ i,j SiSj sc3N/2ZJ tot, 4.6 where ZJ tot is the partition function of an Ising model on a simple-cubic lattice of size N=LLL with reduced ferro-magnetic coupling constant J, summed over all possible choices of periodic or antiperiodic boundary conditions.21 We can now move on to compute the numerator of Eq.
4.1, , e−H = , eAMs eBP = gG eAMs eBPg g, 4.7 where we used the fact that all matrix elements eBP vanish identically unless =g , ∃gG. Once again, the expectation value eBPg is independent of , and the above expression simplifies to gG eAMs0 eBPg 0 g.
4.8 The expectation value can be computed explicitly by ex-panding the exponential 0 eBPg 0 = 0 p cosh B + sinh BBp pg Bp 0.
4.9 Here, the notation pgBp represents the decomposition of g in terms of the group generators Bp. Clearly this decom-position is highly nonunique since the group elements are defined modulo the identities closed membraneBp=1, and Eq.
4.9 needs to be handled with care.
As before, all nonvanishing contributions come from products of plaquette operators that reduce to the identity. In this case, however, there are two options for every operator Bp: i it can be multiplied out directly by sinh BBp, with p=p recall that Bp 2 =1, or ii it can be completed to an identity by an appropriate product of Bp terms so that BpBp forms a closed membrane. Notice that in the second case the product over p may not include p itself.
All this can be expressed in more elegant terms in the Ising language defined previously. Case i corresponds to the two spins across the bond p being ferromagnetically aligned in the Ising model and contributing a Boltzmann factor sinh B. Case ii corresponds to the two spins across p being antiferromagnetically aligned and contributing a Boltzmann factor cosh B. Notice that the correlations be-tween the different p are automatically taken care of in the Ising language, and we obtain 20 eBPg 0 = sc3N/2 C expJ i,j ijgSiSj sc3N/2ZJ totg, 4.10 where ijg = + 1 if i,j g −1 if i,j g. 4.11 Recall that a bond in the Ising model corresponds to a plaquette in the original system and i, jg means that the corresponding plaquette operator appears in the decomposi-tion of g.
In order to derive Eq. 4.10, let us define NFC g NAFC g to be the number of bonds with ferromagnetically antiferromagnetically aligned spins in the subset of bonds corresponding to g of a given Ising configuration C. Define as well NFC g ¯ NAFC g ¯ to be the number of bonds with ferromagnetic antiferromagnetic spin alignment within bonds in the subset complementary to g. Clearly, NF/AFC =NF/AFC g+NF/AFC g ¯.
We can then rewrite Eq. 4.9 in the Ising language as 20 eBPg 0 = C cNFC g ¯sNAFC g ¯sNFC gcNAFC g = C cNFCsNAFCtNFC gt−NAFC g = c3N C e−2JNAFC+NFC g−NAFC g = c3N C expJ i,j SiSj −3N −2 i,jg SiSj = sc3N/2 C expJ i,j ijgSiSj.
In the following, it is convenient to introduce the convention that a bond ij belongs to or is inside a partition A of the system ijA if all the spins on the corresponding plaquette operator belong to A, and the bond does not belong or is outside A ijA otherwise. Similarly, we will re-fer to a cubic unit cell in or not in A if its six composing plaquettes are all in A or not. CLAUDIO CASTELNOVO AND CLAUDIO CHAMON PHYSICAL REVIEW B 78, 155120 2008 155120-8 In conclusion, the numerator of Eq. 4.1 can be mapped onto the partition function of a 3D random-bond Ising model on a simple-cubic lattice, where the randomness is controlled by the choice of g. Again, summation over all possible boundary conditions is understood.
Substituting Eqs. 4.6 and 4.10 into Eq. 4.1 gives T = gG ZJ totg ZJ tot1 eAMs Zs g, 4.12 where J=−1/2lntanhB
, Zs=eAMs is the parti-tion function of a noninteracting Ising system in a magnetic field of reduced strength A, and ZJ tot1ZJ tot.
In the limit of T→0 →
, J→0+, all g are equally weighed, Z0 totg = Z0 tot1 ∀g G, 4.13 and only the states with maximal star magnetization Ms =N, i.e., those that are eigenstates of the star operators with eigenvalue +1 everywhere, survive, eAMs Zs → 1 23 G Ms,N.
4.14 Such states are of the form g 0k, where k=1, ... ,23 labels the states obtained from 0 by the action of the nonlocal operators in Eq. 2.3. Namely, the states 0k are of the form 1 m1 2 m2 3 m3 0 for all possible choices of m1,m2,m3=0,1.
The factor 1/23 G in the above equation appears because there are precisely 23 G states with maximal star magnetiza-tion. Thus, one recovers the density matrix of the zero-temperature Kitaev model, prepared with equal probability across all topological sectors,14 T = 0 = 1 23 k=1 23 1 G g,gG g 0k0k gg.
4.15 In the limit T→ →0, J→
, all g are exponentially suppressed except for g=1, while all states become equally weighed. In this case one obtains the mixed state density matrix, T →
= 1 23N , 4.16 of a noninteracting Ising model defined on the bonds of a simple-cubic lattice.
Clearly from Eq. 4.12, one expects something to happen in the system when the value of the temperature T, i.e., the parameter J, is such that the 3D Ising model described by ZJ tot becomes critical. In order to understand how this relates to the presence of topological order at zero temperature, we need to proceed with the calculations and compute the von Neumann entropy and the topological entropy as a function of temperature.
B. von Neumann entropy Let us consider a generic bipartition of the original system S into subsystems A and B S=AB. The von Neumann entanglement entropy of partition A is given by SvN A −TrA ln A = −lim n→1 n TrA n , 4.17 where A=TrB is the reduced density matrix obtained from the full density matrix by tracing out the degrees of free-dom in subsystem B; similarly for SvN B . SvN A =SvN B holds if is a pure state density matrix.
In order to compute the von Neumann entropy 4.17 from the finite-temperature density matrix 4.12, we first obtain the reduced density matrix of the system using an approach similar to the one in Ref. 14, AT = gG ZJ totg ZJ tot1 eAMs Zs AA gAB gB B = gGA ZJ totg ZJ tot1 eAMs Zs AA gA, 4.18 where we used the generic tensor decomposition = A B, g=gA gB, and the fact that B gB B=1 if gB =1B and 0 otherwise. As in Sec. III, we denoted by GA= g G gB=1B the subgroup of G given by all operations g that act trivially on B similarly for GB.
Notice that a plaquette operator Bp can either act solely on spins in partition A represented in the following by the no-tation pA, solely on spins in partition B pB, or si-multaneously on spins belonging to A and B which we will refer to as boundary plaquette operators and represent by pAB. Recall from Sec. III that a complete set of genera-tors for the subgroup GA can be constructed by taking: i all plaquette operators that act solely on A, i.e., Bp pA NA p= Bp pA ; ii all possible independent collec-tive operators constructed from plaquettes in B and at the boundary but acting solely on A as illustrated in Sec. III, the number of such collective operators equals the number nA of noncontractible loops in subsystem A; and by iii account-ing for all constraints given by the independent closed mem-branes in A. That is, all NA c cubic unit cells in A, all pos-sible mB−1 additional closed membranes if B is disconnected, and all independent entire crystal planes inside A mA CP=0,1,2,3. Again, for all bipartitions of interest in our study mA CP=0 and mB CP=3, and for simplicity we will restrict to this specific case.
The cardinalities of the subgroups G_A and G_B are thus given by
d_A ≡ |G_A| = 2^{N_A^p − N_A^c + n_A − (m_B − 1)},   (4.19a)
d_B ≡ |G_B| = 2^{N_B^p − N_B^c + n_B − (m_A − 1) − 3}.   (4.19b)
In particular, n_A = n_B = 1 in bipartitions 4, 5 and 0 otherwise, and m_A = 2 in bipartition 8, m_B = 2 in bipartition 1, and they equal 1 in all other cases.
Let us then use Eq. 4.18 to compute the trace of the nth power of AT, TOPOLOGICAL ORDER IN A THREE-DIMENSIONAL… PHYSICAL REVIEW B 78, 155120 2008 155120-9 TrA n T = g1,...,gnGA l=1 n ZJ totgl ZJ tot1 1,...,n l=1 n eAMsl Zs 1,A g1,A 2,A2,A g2,A 3,A ¯ n,A gn,A 1,A.
4.20 Each expectation value above imposes that the two configurations l+1 and l, l=1, ... ,n with the identification n+11, can be mapped one onto the other over subsystem A via the plaquette flipping operation gl,A. This is possible only if the set g1, ... ,gnGA satisfies the condition l=1 n gl,A=1A, i.e., l=1 n gl=1. Therefore, we can decompose each element gl into a product gl=g ˜lg ˜l+1, where g ˜lGA, l=1, ... ,n with periodic boundary conditions n+11 the fact that this decomposition is highly nonunique is immaterial to the calculations below, TrA n T = g1,...,gnGA l=1 n ZJ totgl ZJ tot1 1,...,n l=1 n eAMsl Zs 0 l=1 n gl 01,A g ˜1,Ag ˜2,A 2,A2,A g ˜2,Ag ˜3,A 3,A ¯ n,A g ˜n,Ag ˜1,A 1,A = g1,...,gnGA l=1 n ZJ totgl ZJ tot1 1,...,n l=1 n eAMsl Zs 0 l=1 n gl 01,A 2,A2,A 3,A ¯ n,A 1,A, 4.21 where we used the fact that the magnetization Ms of state is the same as Msg of state g , for any gG, to do away with the g ˜l via relabeling of the states l→g ˜l l.
We can further simplify the notation by defining the function A,A=A A, and the above equation can be rewritten as TrA n T = g1,...,gnGA l=1 n ZJ totgl ZJ tot10 l=1 n gl 0 1,...,n l=1 n eAMsl Zs l=1 n−1 l,A,l+1,A = ZPn ZSn.
4.22 Notice that the product l=1 n−1l,A,l+1,A implies 1,A,n,A, which is therefore redundant and has been omitted in the previous equation. In the notation of Eq.
4.22, it becomes evident that the star S contribution, i.e., involving only the star coupling constant A, and the plaquette P contribution, i.e., involving only the plaquette coupling constant B, decouple and factorize into two sepa-rate terms, ZSn and ZPn. In particular, ZPn=1 =ZSn=1=1.
Using the replica trick, we can compute the von Neumann entropy,
S_vN(A;T) = −lim_{n→1} ∂_n Tr_A ρ_A^n
= −lim_{n→1} ∂_n [Z_P(n) Z_S(n)]
= −Z_S(1) lim_{n→1} ∂_n Z_P(n) − Z_P(1) lim_{n→1} ∂_n Z_S(n)
= −lim_{n→1} ∂_n Z_P(n) − lim_{n→1} ∂_n Z_S(n)
≡ S_vN^(P)(A; T/λ_B) + S_vN^(S)(A; T/λ_A).
4.23 Thus, from the factorizability in Eq. 4.22 above, it follows that the von Neumann entropy has two additive contributions from the star and plaquette terms that can then be computed separately.9 One can check that Eqs. 4.22 and 4.23 satisfy indeed the T→0 limit discussed in Sec. III, as well as the known T→ limit see Appendix B. Notice that although in this paper we are concerned with 3D systems, the derivation is independent of the dimensionality, and this result holds true for Z2 models in any number of dimensions.
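The step from the factorization Tr_A ρ_A^n = Z_P(n) Z_S(n) to the additivity in Eq. (4.23) is just the product rule together with the normalizations Z_P(1) = Z_S(1) = 1; the short symbolic sketch below makes this explicit using generic placeholder functions (not the actual Z_P and Z_S of the model).

import sympy as sp

n = sp.symbols('n')
ZP = sp.Function('Z_P')(n)     # placeholder for the plaquette factor
ZS = sp.Function('Z_S')(n)     # placeholder for the star factor

# -d/dn [Z_P(n) Z_S(n)] evaluated at n = 1
S_total = (-sp.diff(ZP * ZS, n)).subs(n, 1)

# Impose the normalizations Z_P(1) = Z_S(1) = 1
S_total = S_total.subs({sp.Function('Z_P')(1): 1, sp.Function('Z_S')(1): 1})

print(S_total)
# -> -dZ_P/dn|_{n=1} - dZ_S/dn|_{n=1}: the two contributions are simply additive.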
Because the von Neumann entropy is separable as the sum of the two independent contributions from star and plaquette terms, so is the topological entropy, which is a linear combination of the entanglement entropies for the partitions shown in Fig. 4,
S_topo(T) = S_topo^(S)(T/λ_A) + S_topo^(P)(T/λ_B).   (4.24)
We now turn to the separate analysis of the two contributions.
1. Star contribution S_topo^(S)(T/λ_A)
The computation of this contribution is very similar to the one in Ref. 4 for the 2D Kitaev model, where the limit λ_B→∞ was explicitly considered. In order to illustrate this analogy, let us define the following entropy differentials:
ΔS_vN(A;T) ≡ S_vN(A;T) − S_vN(A;0) = ΔS_vN^(S)(A;T/λ_A) + ΔS_vN^(P)(A;T/λ_B)   (4.25)
and
ΔS_topo(T) ≡ S_topo(T) − S_topo(0) = ΔS_topo^(S)(T/λ_A) + ΔS_topo^(P)(T/λ_B),   (4.26)
where
ΔS_vN^(S)(A;T/λ_A) ≡ S_vN^(S)(A;T/λ_A) − S_vN^(S)(A;0),   (4.27a)
ΔS_topo^(S)(T/λ_A) ≡ S_topo^(S)(T/λ_A) − S_topo^(S)(0),   (4.27b)
and
ΔS_vN^(P)(A;T/λ_B) ≡ S_vN^(P)(A;T/λ_B) − S_vN^(P)(A;0),   (4.27c)
ΔS_topo^(P)(T/λ_B) ≡ S_topo^(P)(T/λ_B) − S_topo^(P)(0).   (4.27d)
Notice that for λ_B→∞, ΔS_vN^(P)(A;T/λ_B) = 0 and ΔS_topo^(P)(T/λ_B) = 0. Thus, one obtains that
ΔS_vN^(S)(A;T/λ_A) = ΔS_vN(A;T)|_{λ_B→∞},   (4.28)
ΔS_topo^(S)(T/λ_A) = ΔS_topo(T)|_{λ_B→∞}.   (4.29)
Moreover, in the limit λ_B→∞ and choosing to work in the z basis, one can show that both the group structure of G and the collective operations in G_A are very much the same in two dimensions and in three dimensions. For example, the group G is generated by all but one star operator, and the subgroup G_A is generated by all star operators in A with the addition of all but one of the collective operations that are obtained as products of star operators belonging to each component of B times the ones along the corresponding boundary. As a result, the topologically nontrivial bipartitions 1 and 4 in two dimensions correspond to bipartitions 1 and 8 in three dimensions. All calculations generalize straightforwardly to three dimensions, and one can derive the expressions for ΔS_vN^(S) and for ΔS_topo^(S) in a finite system at finite temperature.
The actual values for S_vN^(S) and S_topo^(S) are then fixed by matching, say, the known T→0 limits.
From the 2D results in Ref. 4, we infer that the star contribution to the 3D topological entropy is fragile in the sense that it vanishes in the thermodynamic limit at any finite temperature. Namely, the behavior is singular in that the limits of T→0 and infinite size do not commute. If the thermodynamic limit is taken first,
ΔS_topo^(S)(T/λ_A) = { 0, T = 0 ; −ln 2, T > 0 }.   (4.30)
Thus, in the thermodynamic limit, the star contribution to the topological entropy evaporates at any infinitesimal temperature. The finite-temperature and finite-size expressions for the star contributions to the von Neumann and topological entropies are shown in Appendix C.
2. Plaquette contribution S_vN^(P)(A;T/λ_B)
Similarly to the above, one obtains for the plaquette contribution,
ΔS_vN^(P)(A;T/λ_B) = ΔS_vN(A;T)|_{λ_A→∞},   (4.31)
ΔS_topo^(P)(T/λ_B) = ΔS_topo(T)|_{λ_A→∞}.   (4.32)
Because of the very different nature of the 2D and 3D group structures when using the x basis, the computation of the plaquette contribution in three dimensions is not a trivial extension of that in two dimensions, and it thus requires some work. The calculations are shown in detail in Appendix D, while only the results are summarized here for conciseness and clarity. The behavior of ΔS_topo^(P)(T/λ_B) as a function of temperature, in the thermodynamic limit, is
ΔS_topo^(P)(T/λ_B) = { 0, T < Tc ; −ln 2, T > Tc },   (4.33)
where the critical temperature is associated with a 3D Ising transition and can be located at Tc = 1.313 346(3) λ_B.
V. DISCUSSION
We can now put all the pieces together and argue for the persistence of topological order at finite temperatures in the 3D Kitaev model. Adding the contributions from stars and plaquettes, which we have shown to be exactly separable, the topological entropy of the system is
S_topo^3D(T) = { 2 ln 2, T = 0 ; ln 2, 0 < T < Tc ; 0, T > Tc }.   (5.1)
This is to be contrasted to the 2D case,4
S_topo^2D(T) = { 2 ln 2, T = 0 ; 0, T > 0 },   (5.2)
where the topological order is fragile, subsiding for any finite T when the thermodynamic limit is taken first.
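For reference, Eqs. (5.1) and (5.2) can be encoded as simple piecewise functions; the helper below is only a restatement of the quoted thermodynamic-limit results, with Tc expressed in units of λ_B.

import math

def S_topo_3D(T, Tc):
    """Topological entropy of the 3D toric code in the thermodynamic limit, Eq. (5.1)."""
    if T == 0:
        return 2 * math.log(2)
    return math.log(2) if T < Tc else 0.0

def S_topo_2D(T):
    """Topological entropy of the 2D toric code in the thermodynamic limit, Eq. (5.2)."""
    return 2 * math.log(2) if T == 0 else 0.0

Tc = 1.313346        # in units of lambda_B, as quoted above
for T in (0.0, 0.5 * Tc, 2.0 * Tc):
    print(T, S_topo_3D(T, Tc), S_topo_2D(T))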
In three dimensions the order survives up to a transition temperature that is determined by the coupling constant λ_B associated with the plaquette degrees of freedom alone. The topological order in the system, as measured by the topological entropy, is thus the same as in the case where λ_A = 0, that is, in a purely classical model. In this sense, the order at finite T is classical in origin.22 Our results show that the extension of the notion of topological order to classical systems applies beyond the hard-constrained limit already discussed in Ref. 12 in two dimensions. In the 3D example discussed here, the order persists for noninfinite couplings λ_A and λ_B. Having obtained the result that topological order in the 3D toric code survives thermal fluctuations, in a classical sense, up to a finite critical temperature, we now turn to a discussion of what this type of order implies.
At zero temperature, topological sectors can be discerned according to the eigenvalues I_μ = ±1 of the winding-loop operators introduced in Eq. (2.3), with μ = 1, 2, 3. The eight ground states |I⟩ in the different topological sectors can be labeled by integers I = 0, ..., 2³−1 made up of three bits, I ≡ (I₁I₂I₃), with I_μ = 0, 1.
Suppose we prepare, at an initial time t = t_i, a superposition of states,
|ψ(t_i)⟩ = Σ_{I=0}^{2³−1} √p_I |I⟩,   (5.3)
then raise the temperature to some value 0 < T < Tc, and bring it back to T = 0 at some time t_f. The final T = 0 state will again be, assuming thermodynamic equilibrium is reached, a superposition of the eight topologically degenerate ground states.
Following the discussion in Sec. IV, for temperatures below Tc, one can take a winding loop and deform it past thermal defects and read off the same eigenvalue of the topological operator as the path is deformed. The information stored in all winding loops that do not cross a thermal defect does not disappear as long as there is a way to pass a winding loop that avoids defects. Therefore, as long as the system temperature is not raised above Tc, upon returning to T = 0 at t_f, the system should return to the same topological sector that it was originally prepared in at time t_i.
Thus, the state at t_f is a superposition,
|ψ(t_f)⟩ = Σ_{I=0}^{2³−1} √p_I e^{iφ_I} |I⟩,   (5.4)
where the phases φ_I are accumulated during the thermal cycle.
These phases, unless locked together by some specific mechanism, shall be randomized by the thermal bath. However, the magnitude of the amplitudes remains √p_I, for I = 0, ..., 2³−1, as there have been no transitions between different topological sectors if the system was never heated above Tc.
Hence, the only accessible information preserved under the time evolution from t_i to t_f is that the relative probability to find the state in sector I equals p_I. The state in Eq. (5.4) realizes a pbit, or probabilistic bit.11 It is not a qubit because of the thermal dephasing between the states |I⟩. Although still a quantum superposition of a sort, in that it has probability p_I of being in sector I, it cannot be told apart by any type of measurement from a classical probabilistic system with the same probabilities p_I. The stability of the system against local measurements only tells us that the state is not projected onto a sector until a nonlocal measurement is carried out. This effect is a nonmeasurable difference between the state in Eq. (5.3) and a classical probabilistic state: whether the projection occurs before (as in the classical state) or after (as in the pbit) the measurement is not detectable.
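A toy numerical experiment (illustrative only, not from the paper) makes the pbit picture concrete: preparing |ψ(t_f)⟩ = Σ_I √p_I e^{iφ_I}|I⟩ with phases randomized by the bath and averaging the density matrix over many thermal cycles leaves the sector probabilities p_I intact while the coherences average to zero. The probability vector below is an arbitrary example.

import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.4, 0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])   # example probabilities of the 2^3 sectors
amps = np.sqrt(p)

rho_avg = np.zeros((8, 8), dtype=complex)
n_cycles = 5000
for _ in range(n_cycles):
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=8))
    psi = amps * phases                                   # |psi(t_f)> for one realization of the bath
    rho_avg += np.outer(psi, psi.conj())
rho_avg /= n_cycles

print(np.real(np.diag(rho_avg)))                          # ~ p: sector probabilities survive
print(np.max(np.abs(rho_avg - np.diag(np.diag(rho_avg)))))  # ~ 1/sqrt(n_cycles): coherences dephase away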
VI. CONCLUSIONS
In this paper we have shown that topological order exists in the 3D toric code at finite temperatures, up to a critical temperature Tc = 1.313 346(3) λ_B which is set by the coupling associated with the plaquette terms in the Hamiltonian. This is in sharp contrast to what happens in the 2D toric code, where in the thermodynamic limit the order subsides for any infinitesimal temperature.
We first presented simple heuristic arguments for this result. These arguments are based on the observation that eigenvalues of operators defined as products of spin operators along winding loops can be used to determine the order even in the presence of thermally activated local defects, because loops can be deformed around such obstacles in three dimensions, leaving unchanged the eigenvalues of such loop operators. This is to be contrasted to the 2D case, where one cannot move a loop around a point, and thus the eigenvalues of nonlocal loop operators are unequal on opposite sides of the point defect.
We subsequently substantiated the heuristic arguments by means of an exact calculation of the von Neumann and topological entropies in the system as a function of temperature. In carrying out this exact calculation, we derived a generic result that applies to toric codes defined in any number of spatial dimensions: the von Neumann entropy is separable as a sum of two terms, one associated with stars alone and a function of the dimensionless ratio T/λ_A, and another associated with plaquettes alone and a function of the dimensionless ratio T/λ_B. The same separability follows naturally for the topological entropy, S_topo(T) = S_topo^(S)(T/λ_A) + S_topo^(P)(T/λ_B). We then showed that, in the thermodynamic limit, the star contribution S_topo^(S)(T/λ_A) vanishes for any T > 0, while the plaquette contribution S_topo^(P)(T/λ_B) remains constant for T/λ_B < 1.313 346(3) and vanishes for temperatures above this scale.
Because the critical temperature is set by λ_B and not λ_A, one can argue that the topological entropy remains nonzero when λ_A→0. The resulting Hamiltonian is purely classical, and thus one can argue that the nature of the finite-T topological order must be classical as well.
Finally, we discussed the nature of the information that can be stored robustly in the system because of the topological order at finite T. We argued that the resilient information stored in the 3D system realizes a pbit.
We end with a note on an interesting situation that should occur in systems where both Z2 gauge defect types are con-fined. In three dimensions only one of the defect types is confined, the topological entropy drops from 2 ln 2 at T=0 to ln 2 for 0TTc, and only the probabilities of being in a given topological sector are preserved magnitude square of the amplitudes but not the relative phases. If instead both defect types are confined, the notion of sectors in both the x and z bases is retained, and this implies as discussed briefly in Appendix A that, if the system is prepared in a given superposition at zero temperature and its temperature is raised and again lowered to zero without ever going above Tc, the system returns to the same original quantum state a boomerang effect.
ACKNOWLEDGMENTS We are indebted to Xiao-Gang Wen for attracting our at-tention toward the possibility of a finite-temperature topo-logical phase transition in the three-dimensional Kitaev model and to Michael Levin, John Cardy, Eduardo Fradkin, and Roderich Moessner for several insightful discussions.
This work was supported in part by EPSRC-GB under Grant No. GR/R83712/01 C.Castelnovo.
APPENDIX A: THE CONFINED-CONFINED CASE
In this appendix, we briefly discuss how the nature of the topological protection at finite temperature changes when both types of thermal defects in a Z2 gauge theory are confined at low temperature T < Tc. For concreteness and simplicity, let us consider a modification of the 2D toric code, where some ad hoc energy terms have been introduced that confine both electric and magnetic thermal defects, without inquiring on the nature of these terms. As mentioned in Sec. IV, this scenario should be realized in the 4D case without need of any additional term. The T = 0 ground-state (GS) wave function in a given topological sector is uniquely specified by the eigenvalues of two independent Wilson toric cycles, i.e., winding-loop operators. In the z basis, it is sufficient to consider the product of all σ̂_i^z operators along a horizontal (T̂_h^z) and a vertical (T̂_v^z) winding loop, respectively; similarly, in the x basis, using loop operators in the dual lattice, T̂_h^x and T̂_v^x. These loop operators satisfy the algebra {T̂_h^x, T̂_v^z} = 0 and {T̂_v^x, T̂_h^z} = 0.
Let us choose to work in the z basis and define |ψ_{a,b}⟩, with a, b = ±1, to be the normalized GS wave functions that are also eigenvectors of T̂_h^z and T̂_v^z,
T̂_h^z |ψ_{a,b}⟩ = a |ψ_{a,b}⟩,   T̂_v^z |ψ_{a,b}⟩ = b |ψ_{a,b}⟩.
Let us prepare the system in a given superposition of such basis states,
|ψ_in⟩ = Σ_{a,b=±} α_{a,b} |ψ_{a,b}⟩,   (A1)
where Σ_{a,b=±} |α_{a,b}|² = 1, and consider coupling the system to a thermal bath so that the temperature can be varied from T_in = 0, via 0 < T < Tc, back to T_fi = 0, as discussed in Sec. IV.
Trivially, the final state of the system must again be a ground state, and therefore it can be written as
|ψ_fi⟩ = Σ_{a,b=±} α̃_{a,b} |ψ_{a,b}⟩.   (A2)
Moreover, as long as the temperature was never raised beyond the deconfining transition at Tc, the coupling to the thermal bath cannot have transferred any amplitude between any of the topological sectors. Hence the following topological quantities must be conserved,
⟨ψ_in| T̂_{h/v}^{z/x} |ψ_in⟩ = ⟨ψ_fi| T̂_{h/v}^{z/x} |ψ_fi⟩.   (A3)
For simplicity, consider the case where α_{+,+} = cos(θ/2), α_{−,+} = sin(θ/2) e^{iφ}, with 0 ≤ θ < π and −π < φ ≤ π, and all others vanish. After a little algebra, one can show that the conditions in Eq. (A3) require that the only nonvanishing terms in the final GS wave function are α̃_{+,+} = cos(θ̃/2), α̃_{−,+} = sin(θ̃/2) e^{iφ̃}, and that they satisfy the relations
cos θ = cos θ̃,   (A4)
sin θ cos φ = sin θ̃ cos φ̃.   (A5)
That is, θ = θ̃ and φ = ±φ̃.
The ambiguity in the sign of φ is immediately resolved if we further require, as expected below Tc, that also the expectation values of the products i T̂_h^z T̂_v^x and i T̂_v^z T̂_h^x are conserved, leading to the relation
sin θ sin φ = sin θ̃ sin φ̃.   (A6)
Therefore, the quantum topological order in this system is fully protected from thermal fluctuations, as long as T < Tc, in the sense that the system is bound to come back to the same exact initial state upon cooling back to zero temperature.
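The conserved quantities in Eqs. (A4)-(A6) are simply the Cartesian components of a unit vector parametrized by (θ, φ), so they pin down the final state uniquely; the short check below (with arbitrary illustrative values) reconstructs (θ̃, φ̃) from them.

import numpy as np

theta, phi = 0.7, -1.2                     # arbitrary initial-state parameters

c1 = np.cos(theta)                         # conserved via Eq. (A4)
c2 = np.sin(theta) * np.cos(phi)           # conserved via Eq. (A5)
c3 = np.sin(theta) * np.sin(phi)           # conserved via Eq. (A6)

theta_f = np.arccos(c1)                    # theta in [0, pi)
phi_f = np.arctan2(c3, c2)                 # phi in (-pi, pi]

print(np.allclose([theta_f, phi_f], [theta, phi]))   # True: the state returns unchanged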
APPENDIX B: CHECK AGAINST KNOWN LIMITS As a check of the steps leading to Eqs. 4.22 and 4.23, let us verify that the known limits are indeed recovered. For T=0 i.e., for J=0 we have that eAMsl/Zs →Msl,N/23 G , while ZJ totg=ZJ tot1, ∀g. In the notation introduced below Eq. 4.14, this restricts the summation over l to states of the form l=g 0k, with gG and k =1, ... ,23 labeling the states obtained from 0 by the action of the nonlocal operators in Eq. 2.3. Namely, the states 0k are of the form 1 m1 2 m2 3 m3 0 for all possible choices of m1,m2,m3=0,1. Equation 4.22 reduces then to TrA n T = dA n−1 1 23n G n 1,...,n l=1 n Msl,N l=1 n−1 l,A,l+1,A = dA n−1 1 23n G n g1 ,...,gn G k1,...,kn l=1 n−1 gl 0klA,gl+1 0kl+1A = dA n−1 1 23n G n23n g1 ,...,gn G l=1 n−1 gl 0A,gl+1 0A TOPOLOGICAL ORDER IN A THREE-DIMENSIONAL… PHYSICAL REVIEW B 78, 155120 2008 155120-13 = dA n−1 1 G n G dB n−1 = dA n−1 dB G n−1 = dAdB G n−1 , B1 where we used the fact that, for the cases of interest, sub-system A is finite and the nonlocal operators can always be chosen so as to traverse only subsystem B, gl 0klA,gl+1 0kl+1A
gl 0A,gl+1 0A
. This in turn implies that gl gl+1 GB, and the constrained summation over g1 , ... ,gn G can be replaced by an unconstrained summa-tion over g1 G, g2 , ... ,gn GB where gl+1 gl gl+1 for l =2, ... ,n. Equation B1 is indeed the same as in the 2D case at zero temperature.4 In this limit, the von Neumann entropy is given by SvNA;T = 0 = −lim n→1 n TrA n = −ln dAdB G B2 and the topological entropy by Stopo=2 ln 2 as discussed in Sec. III for the full bipartition scheme 1–8. For T→ i.e., for J→
, we have ZJ totg/ZJ tot1→g−1, all are equally weighed, and Eq. 4.22 reduces to TrA n = 1 1 Zs n 1,...,n l=1 n−1 l,A,l+1,A = 1 1 23Nn23N2Bn−1 = 1 1 2A n−1 , B3 where A B is the number of spin degrees of freedom in A B and A+B=3N. Here we used the fact that l,A,l+1,A involves only subsystem A, hence A spins are summed over only once, while there are n independent copies of the remaining B spins.
This result leads to
S_vN(A; T→∞) = −lim_{n→1} ∂_n Tr_A ρ_A^n = ln 2^{Σ_A} = Σ_A ln 2,   (B4)
which is indeed the classical entropy of a collection of Σ_A free Ising spins (Σ_A denoting the number of spin degrees of freedom in A). The topological entropy vanishes in this limit, since the contributions from the different bipartitions cancel out exactly (recall that the total number of spins in A for bipartitions 2 and 3 is the same as that for bipartitions 1 and 4, and similarly for 6, 7 and 5, 8).
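The infinite-temperature limit in Eq. (B4) follows from Tr_A ρ_A^n = (2^{Σ_A})^{1−n}; a one-line symbolic check of the replica derivative, with Σ_A kept as a free symbol, is given below (illustrative sketch).

import sympy as sp

n, sigma_A = sp.symbols('n Sigma_A', positive=True)
tr_rho_n = (2**sigma_A)**(1 - n)           # Tr_A[rho_A^n] for the maximally mixed state

S_vN = -sp.diff(tr_rho_n, n).subs(n, 1)
print(sp.simplify(S_vN))                   # Sigma_A*log(2)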
Notice that, in our chosen factorization scheme in Eq.
(4.22), the plaquette term does not yield any contribution to the von Neumann entropy at infinite temperature, while at zero temperature the plaquette term contribution equals −ln d_A, and the star term contribution is −ln(d_B/|G|).
APPENDIX C: THE STAR CONTRIBUTION Here we present the expressions for the star contribution to the entropies for finite temperatures and finite system sizes. As we argued in the Sec. IV B 1, the star contribution to the entropies can be computed using Eqs. 4.28, which relate them to entropies evaluated for a hard constrained sys-tem where B→
. The calculation in this limit is done most conveniently in the z basis, very much along the lines of the calculation carried out for 2D systems in Ref. 4. Paralleling the steps of the computation for 2D systems, one obtains for the 3D case that SvNA;T B→ = ln cosh KA 2 N −NA sx ln x cosh KA 2 N −1 cosh KA 2 N −NA sy ln y sinh KA 2 N −1 cosh KA 2 N − i x ˜i ln x ˜i cosh KA 2 N −NAi s cosh KA 2 N − i y ˜i ln y ˜i sinh KA 2 N −NAi s cosh KA 2 N , C1 where KA=−lntanhA/T
, NAi sNBi s+NABi s is the total number of star operators acting on the ith component of sub-system B either entirely in Bi or at its boundary ABi, and x = cosh KA 2 y = sinh KA 2 , C2a x ˜i = cosh KA 2 NAi s y ˜i = sinh KA 2 NAi s.
C2b Notice that only the last two terms in Eq. C1 yield a topo-logical contribution in our bipartition scheme since N1A s −N2A s −N3A s +N4A s =0 and likewise for bipartitions 5–8.
Therefore, CLAUDIO CASTELNOVO AND CLAUDIO CHAMON PHYSICAL REVIEW B 78, 155120 2008 155120-14 Stopo S T/A,N = i=1 2 x ˜i 1 ln x ˜i 1 cosh KA 2 N −N1Ai s cosh KA 2 N + i=1 2 y ˜i 1 ln y ˜i 1 sinh KA 2 N −N1Ai s cosh KA 2 N −x ˜2 ln x ˜2 cosh KA 2 N −N2A s cosh KA 2 N −y ˜2 ln y ˜2 sinh KA 2 N −N2A s cosh KA 2 N −x ˜3 ln x ˜3 cosh KA 2 N −N3A s cosh KA 2 N −y ˜3 ln y ˜3 sinh KA 2 N −N3A s cosh KA 2 N + x ˜4 ln x ˜4 cosh KA 2 N −N4A s cosh KA 2 N + y ˜4 ln y ˜4 sinh KA 2 N −N4A s cosh KA 2 N + x ˜5 ln x ˜5 cosh KA 2 N −N5A s cosh KA 2 N + y ˜5 ln y ˜5 sinh KA 2 N −N5A s cosh KA 2 N −x ˜6 ln x ˜6 cosh KA 2 N −N6A s cosh KA 2 N −y ˜6 ln y ˜6 sinh KA 2 N −N6A s cosh KA 2 N −x ˜7 ln x ˜7 cosh KA 2 N −N7A s cosh KA 2 N −y ˜7 ln y ˜7 sinh KA 2 N −N7A s cosh KA 2 N + x ˜8 ln x ˜8 cosh KA 2 N −N8A s cosh KA 2 N + y ˜8 ln y ˜8 sinh KA 2 N −N8A s cosh KA 2 N , C3 where we used the fact that subsystem B has always one component except for bipartition 1, where it has two compo-nents.
With the expression above for ΔS_topo^(S)(T/λ_A, N), one can determine the topological entropy contribution from the star operators as a function of temperature and system sizes. In particular, let us look at two limits: that of the zero-temperature limit taken first and that of the thermodynamic limit taken first.
For T→0 first, K_A→0, and one can easily check that all terms in Eq. (C3) vanish, which is expected as the difference ΔS_topo^(S)(T/λ_A, N) is, by definition, zero at T = 0. Now, when the thermodynamic limit is taken first, i.e., when the sizes N and all of the N_{1A_i}^s for i = 1, 2 and the N_{pA}^s, p = 2–8, are taken to infinity at fixed K_A, each term in the expression in Eq. (C3) gives ±ln 2, with the sign determined by whether the partition is added or subtracted. Bipartition 1 gives −2 ln 2 (its contribution is doubled because 1B has two disconnected components) and it is added to bipartitions 4, 5, and 8, which give −ln 2 each; bipartitions 2, 3, 6, and 7 are subtracted and each of them gives +ln 2. Altogether, we obtain ΔS_topo^(S)(T/λ_A, N→∞) = −ln 2 for any temperature T. Therefore, we obtain in the thermodynamic limit the result used in Eq. (4.30).
One can finally add the zero-temperature contributions to obtain
S_vN^(S)(T/λ_A) = ΔS_vN^(S)(T/λ_A) − ln(d_B/|G|)   (C4)
and
S_topo^(S)(T/λ_A) = ΔS_topo^(S)(T/λ_A) + ln[(d_1B d_4B d_5B d_8B)/(d_2B d_3B d_6B d_7B)] = ΔS_topo^(S)(T/λ_A) + ln 2.   (C5)
APPENDIX D: THE PLAQUETTE CONTRIBUTION
As anticipated in Sec. IV B 2, the plaquette contribution in three dimensions is very different from the 2D case, and we need to carry out the calculations explicitly. Consider the expression for Z_P,
Z_P(n) = Σ_{g_1,...,g_n ∈ G_A} [ Π_{l=1}^n Z_J^tot(g_l) / Z_J^tot(1) ] ⟨0| Π_{l=1}^n g_l |0⟩,   (D1)
where
Z_J^tot(g) = Σ_{{S_i}} exp[ J Σ_{⟨ij⟩} ε_{ij}(g) S_i S_j ]   (D2)
is the partition function of the 3D random-bond Ising model summed over all possible boundary conditions, whose randomness is controlled by g according to Eq. (4.11). Namely, ε_{ij}(g) = ±1 depending on whether the plaquette perpendicular to the bond ⟨ij⟩ is flipped in configuration g (ε_{ij} = −1) or not (ε_{ij} = +1).
Recall that the group G, and therefore its subgroup GA, is defined modulo the identities closed membraneBp=1. In the lan-guage of the randomness realizations ij, this amounts to summing over gauge inequivalent configurations. In fact, any ij and ij that differ by the product of plaquettes around closed surfaces are related by ij = ij S ¯iS ¯ j, ∃ S ¯i.
D3 Specifically, S ¯i corresponds to either of the two spin con-figurations that exhibit the closed surfaces in question as their only antiferromagnetic boundary the two configura-tions are related by an overall Z2 symmetry. Recall that the product of plaquettes belonging to an infinite crystal plane is also an allowed gauge transformation, and all possible boundary conditions periodic or antiperiodic in each direc-tion should be taken into account when enumerating all con-figurations S ¯i. In conclusion, every ijg admits 2N+3 equivalent randomness realizations ij =ijS ¯iS ¯ j, labeled by all possible Ising configurations S ¯ii=1 N where S ¯ii=1 N and −S ¯ii=1 N yield the exact same ij .
In the case of a summation over the whole group G, one has then the identity gG Si expJ ij ijgSiSj 1 2N+3 ij Si expJ ij ijSiSj.
D4 For the subgroup GA, the situation is more convoluted.
First of all, the operators gGA correspond to randomness realizations ijg where all the bonds outside A can be gauged to assume the value +1. Rather than considering all the equivalent configurations as for the whole group G, it is more convenient to introduce a restricted set of randomness realizations ij A where ij A is constrained to assume the value +1 whenever ijA. Notice that we do not constrain the bonds inside A, and we are therefore overcounting all the gauge equivalent configurations with respect to these bonds.
The number of equivalent realizations in the restricted sub-group can be counted as seen in Sec. III and repeated here-after for convenience. All cubic unit cells entirely contained in A are independent generators of gauge transformations.
Also, if A contains crystal planes, there are up to three ad-ditional generators. Finally, we have one extra generator per connected component of B i.e., entirely surrounded by A but for one of them. Thus, the total number of gauge equiva-lent configurations is now 2NA c+mA CP+mB−1, where again NA c is the number of cubic unit cells entirely contained in A, mA CP is the number of independent crystal planes in A mA CP=0, mB CP=3 for all cases of interest, and mB is the number of connected components of B.
As a result, one obtains gGA Si expJ ij ijgSiSj 1 2NA c+mB−1 ij A SiexpJ ijA ij ASiSj expJ ijA SiSj.
D5 Having done so, the summation over ij A is now uncon-strained, namely, the bond variables ij A= 1 are generated by freely flipping any of the plaquettes in A, starting from the configuration with all ij A= +1 which we refer to in the following as 0 ij 0, the ferromagnetic configuration. No-tice that this accounts only for the bipartitions where the plaquette operators in A are sufficient to generate the whole group GA bipartitions 1, 2, 3 and 6, 7, 8. As discussed in Sec. III, this is not always the case and additional collective operations may be needed to generate GA bipartitions 4 and 5. The summation encompasses then all configurations ob-tained by flipping plaquettes in A starting from ij 0 and starting from the configurations derived from the ferromag-netic one via the action of each of the independent collective operations. For concreteness, in bipartitions 4 and 5 there is only one collective operation in A, illustrated in the bottom panel of Fig. 5. In this case, the configurations ij A are obtained by flipping plaquettes in A starting from the ferro-magnetic configuration 0 and starting from the configura-tion with all ij A= +1, except for those inside the blue thick line in the bottom panel of Fig. 5 i.e., plaquettes in B or at the boundary, where ij A=−1. We will refer to this con-figuration in the following as 1 ij 1. If we label ˜ A ˜ ij A the set of all configurations obtained from the fer-romagnetic one via the action of the plaquette operators in A alone, the summation in Eq.
D5 runs over 0 ˜ A1 ˜ A, where the product of two configurations represents the new configuration with variables given by the site-by-site product of the two original variables ij 0 ˜ ij A ˜ ij A and ij 1 ˜ ij A.
We can then apply the identity in Eq. D5 to simplify our expression in Eq. D1. The condition that a term is nonva-nishing, namely, 0A g1,A ¯gn,A 0A=1, translates into the condition that l=1 n ij A,lgl = S ˜iS ˜ j ∀ij, ∃ S ˜i, D6 i.e., the product of all ij A,lgl, l=1, ... ,n, is gauge equiva-lent to 0 equivalently g1,A ¯gn,A =1. The very same na-ture of a collective operation in A requires that such opera-tion cannot be completed to an identity a closed membrane by means of plaquette operators in A alone. Therefore the CLAUDIO CASTELNOVO AND CLAUDIO CHAMON PHYSICAL REVIEW B 78, 155120 2008 155120-16 above equation holds independently for the collective opera-tions and for the ˜ A configurations. Namely, it imposes that the number of collective operations appearing in ij A,ll=1 n is even and that l=1 n ˜ ij A,lgl = S ˜iS ˜ j ∀ij, ∃ S ˜i.
D7 Trivially, Eqs. D6 and D7 become equivalent if no col-lective operations are present in A.
Notice that S ˜iS ˜ j1 for all ijA: all possible S ˜i con-figurations must be ferromagnetically ordered outside A. If mB is the number of connected components in B, then the ferromagnetic order holds across each component separately, and from one component to the next the overall sign of the S ˜ spins may change. An overall sign change in the spins S ˜ is immaterial, as one can see from Eq. D7, and therefore one needs to introduce a corresponding factor of 1/2 when sum-ming over S ˜i.
Equation D1 then becomes ZPn = 1 ZJ tot1 n 1 2NA c+mB−1
n 1 2 S ˜ i ij A,l l=1 n l=1 n ij A,l=S ˜ iS ˜ j l=1 n Si l ij expJij A,lSi lSj l = 1 ZJ tot12NA c+mB−1 n1 2 S ˜ i ij A,l l=1 n l=1 n ij A,l=S ˜ iS ˜ j Si l l=1 n ij expJ l=1 n ij A,lSi lSj l = 1 ZJ tot12NA c+mB−1 n1 2 S ˜ i Si l l=1 n ij ij A,ll=1 n l=1 n ij A,l=S ˜ iS ˜ j expJ l=1 n ij A,lSi lSj l = 1 ZJ tot12NA c+mB−1 n1 2 S ˜ i Si l l=1 n ¯ ll=1 n even ijA ˜ ij A,ll=1 n l=1 n ˜ ij A,l=S ˜ iS ˜ j expJ l=1 n ¯ ij l ˜ ij A,lSi lSj l ijAexpJ l=1 n ¯ ij lSi lSj l, D8 where ¯ ll=1 n even runs over all ntuples, ¯ l0,1l=1 n , with an even number of 1 terms. Notice that the summation ˜ ij A,ll=1 n l=1 n ˜ ij A,l=S ˜ iS ˜ j expJ l=1 n ¯ ij l ˜ ij A,lSi lSj l = Zn J ¯ ij lSi lSj l;S ˜iS ˜ j, D9 where Zn J ¯ ij lSi lSj l;S ˜iS ˜ j can be interpreted as the parti-tion function of an Ising chain of degrees of freedom ˜ ij A,ll=1 n in a random field of local strength J ¯ ij lSi lSj l and subject to the condition that the product of all Ising spins l=1 n ˜ ij A,l equals S ˜iS ˜ j. By means of the change of variables ˜ ij A,l=mij A,lmij A,l+1, this becomes the partition function of a nearest-neighbor Ising chain with periodic or antiperiodic BCs depending on the sign of S ˜iS ˜ j= 1 i.e., mij A,n+1 =mij A,1S ˜iS ˜ j, Zn Zn J ¯ ij lSi lSj l;S ˜iS ˜ j = 1 2 mij A,ll=1 n BC=S ˜ iS ˜ j expJ l=1 n ¯ ij lmij A,lmij A,l+1Si lSj l.
(D10) This in turn can be computed exactly,
2 Z_n = (2 cosh J)^n + [ Π_{l=1}^n ε̄_{ij}(l) S_i^(l) S_j^(l) ] S̃_i S̃_j (2 sinh J)^n = (2 cosh J)^n + [ Π_{l=1}^n S_i^(l) S_j^(l) ] S̃_i S̃_j (2 sinh J)^n.
D11 We also used the fact that ¯ ij l= +1 if ijA by construc-tion. Notice that this convenient choice does not introduce any limitations. In general, the number of times when a −1 appears in the l=1, ... ,n sequence of ¯ ij l values must be even, and therefore l=1 n ¯ ij l= +1, ∀i, j. For convenience of notation, let us consider the following change of summation variables: S ˜i →i = l=1 n Si lS ˜i, D12 so that we can write Zn= 1 2eAneBnij, with An and Bn defined as eAn+Bn = 2 cosh Jn + 2 sinh Jn, D13a eAn−Bn = 2 cosh Jn −2 sinh Jn.
D13b Given that l=1 n Si l= 1, for all sites i whose adjacent bonds ij are solely in A, the summation over S ˜i= 1 and the summation over i= 1 are unconstrained. The case is dif-ferent for the sites i that have an adjacent bond not in A. The correlation across such bond is, in fact, ferromagnetic by construction, and if B has only one connected component, the spin S ˜i has the same sign as all other spins not entirely surrounded by bonds in A. Consequently, all the boundary spins S ˜i have the same sign, and the values of the associated spins i are determined uniquely by the product l=1 n Si l. If mB is the number of connected components in B, then the ferromagnetic order holds across each component separately, and from one component to the next the overall sign of the S ˜ spins may change. This is accounted for by summing over boundary sign variables qr= 1, r=1, ... ,mB, assigned to each boundary r defined as the set of sites that have adjacent bonds both in A and in the rth component of A.
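The transfer-matrix identity quoted above in Eq. (D11), namely that the sum over an n-spin nearest-neighbor Ising chain with periodic (ζ = +1) or antiperiodic (ζ = −1) boundary conditions equals (2 cosh J)^n ± (2 sinh J)^n, can be verified by brute-force enumeration for small chains; the sketch below (illustrative, homogeneous couplings) does exactly that.

from itertools import product
import math

def chain_sum(n, J, zeta):
    """Sum of exp(J * sum_l m_l m_{l+1}) over m_l = +/-1, with m_{n+1} = zeta * m_1."""
    total = 0.0
    for m in product((+1, -1), repeat=n):
        bonds = sum(m[l] * m[l + 1] for l in range(n - 1)) + zeta * m[-1] * m[0]
        total += math.exp(J * bonds)
    return total

J = 0.7
for n in (2, 3, 4, 5, 6):
    for zeta in (+1, -1):
        closed_form = (2 * math.cosh(J))**n + zeta * (2 * math.sinh(J))**n
        # in the notation of Eq. (D11), this equals 2*Z_n
        assert abs(chain_sum(n, J, zeta) - closed_form) < 1e-9
print("transfer-matrix identity verified for n = 2..6")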
In the end, Eq. D8 becomes ZPn = 1 ZJ tot12NA c+mB−1 n1 2 S ˜ i Si l l=1 n ijA Zn JSi lSj l;S ˜iS ˜ j ¯ ll=1 n even ijAexpJ l=1 n ¯ ij lSi lSj l = 1 ZJ tot12NA c+mB−1 n1 2 Si l l=1 n i ijA 1 2eAneBnij ¯ ll=1 n even ijAexpJ l=1 n ¯ ij lSi lSj l qr = 1r=1 mB r=1 mB ir i l=1 n Si l = qr = 1 ZJ tot12NA c+mB−1 neNA pAn 2NA p 1 2 Si l l=1 n i ijA eBnij ¯ ll=1 n even ijAexpJ l=1 n ¯ ij lSi lSj l qr = 1r=1 mB r=1 mB ir i l=1 n Si l = qr.
D14 Notice that ¯ ij l= +1 if the plaquette ij does not belong to the collective operation and that whenever ij belongs to the collective operation the value of ¯ ij l= 1 is the same for all ij. We restrict here for simplicity to the case where there is at most one collective operation in A. In order to extend to the general case one needs to repeat the derivation for each collective operation separately. Notice also that the sum over Si l that are entirely sur-rounded by bonds in A is unconstrained, and it contributes a trivial factor 2NA cn to the sum over the remaining spins. In the following, we use this simplification and all summations over Si l are intended as constrained only to the remaining spins for convenience, we do not increase the already com-plex notation.
Let us focus on the boundary condition, qr = 1r=1 mB r=1 mB ir i l=1 n Si l = qr.
D15 Given that the and the S spins can assume only the values 1, then the quantity i+l=1 n Si l can only assume the values n+1,n−1,n−3, ... ,−n−1,−n+1.23 In particular, the product il=1 n Si l is positive whenever said summation equals n+1,n−3,n−7,..., and it is negative otherwise. We CLAUDIO CASTELNOVO AND CLAUDIO CHAMON PHYSICAL REVIEW B 78, 155120 2008 155120-18 can therefore rewrite the delta function in the above equation as i l=1 n Si l = qr = p=0 ⌊n+qr/2⌋ i + l=1 n Si l = n + qr −4p, where ⌊·⌋stands for the integer part of its argument. In other words, the sum i+l=1 n Si l must equal n+qmod 4 or i + l=1 n Si l −n + qr = 0mod 4.
D16 Using the function fx = 1 4 k=0 3 expi 2 kx = 1 if x = 0mod 4 0 if x = 1,2,3mod 4, D17 we can finally write the delta function as i l=1 n Si l = qr = 1 4 ki expi 2 kii + l=1 n Si l −n + qr.
D18 Substituting into Eq. D14, we obtain ZPn = 1 ZJ tot12NA c+mB−1 neNA pAn 2NA p 2NA cn1 2 Si l l=1 n i ijA eBnij ¯ ll=1 n even ijAexpJ l=1 n ¯ ij lSi lSj l qr = 1r=1 mB r=1 mB ir 1 4 ki expi 2 kii + l=1 n Si l −n + qr = 1 ZJ tot12mB−1 neNA pAn 2NA p 1 2 1 4N qr = 1r=1 mB kii=1 N i ijA eBnij ¯ ll=1 n even Si 1expJ ijA ¯ ij 1Si 1Sj 1expi 2 r=1 mB ir kii + Si 1 −1 −qr Si l l=2 n expJ ijA l=2 n ¯ ij lSi lSj lexpi 2 i ki l=2 n Si l −1, D19 where and N are, respectively, the full set and the total number of boundary sites, i.e., sites that have adjacent bonds both in A and outside A. In the language introduced earlier, Nc=NA c+NA c=NA c+NB c+N and therefore NA c=NB c+N.
Note that the last line in Eq. D19 does not depend on the S1 or spins. If we introduce the partition functions, Z ki A,+ = Si expJ ijA SiSj + i 2 i kiSi −1, Z ki A,−= Si expJ ijA ij 1SiSj + i 2 i kiSi −1, we can carry out the summation over the even number of collective operations ¯ ll=1 n explicitly and arrive at ZPn = 1 ZJ tot12mB−1 neNA pAn 2NA p 1 2 qr = 1r=1 mB 1 4N kii=1 N i expBn ijA ij Si 1 expi 2 r=1 mB ir kii + Si 1 −1 −qr 1 2expJ ijA Si 1Sj 1 + expJ ijA ij 1Si 1Sj 1Z ki A,+ + Z ki A,−n−1 +expJ ijA Si 1Sj 1 −expJ ijA ij 1Si 1Sj 1Z ki A,+ −Z ki A,−n−1.
D20 We are finally in the position to take the derivative with respect to n and to compute the von Neumann entropy of the bipartition, TOPOLOGICAL ORDER IN A THREE-DIMENSIONAL… PHYSICAL REVIEW B 78, 155120 2008 155120-19 SvN PA;T/B = −lim n→1 nZPn = −lim n→1 n 1 ZJ tot12mB−1 neNA pAn 2NA p i expB1 ijA ij Si 1expJ ijA Si 1Sj 1 1 2 qrr=1 mB 1 4N kii=1 N expi 2 r=1 mB ir kii + Si 1 −1 −qr D21a − 1 ZJ tot12mB−1 eNA pA1 2NA p lim n→1 n i expBn ijA ij Si 1expJ ijA Si 1Sj 1 1 2 qrr=1 mB 1 4N kii=1 N expi 2 r=1 mB ir kii + Si 1 −1 −qr D21b − 1 ZJ tot12mB−1 eNA pA1 2NA p i expB1 ijA ij Si 1 1 2 qrr=1 mB 1 4N kii=1 N expi 2 r=1 mB ir kii + Si 1 −1 −qr 1 2expJ ijA Si 1Sj 1 + expJ ijA ij 1Si 1Sj 1lnZ ki A,+ + Z ki A,− +expJ ijA Si 1Sj 1 −expJ ijA ij 1Si 1Sj 1lnZ ki A,+ −Z ki A,−.
D21c The summation over ki can be carried out explicitly both in contributions D21a and D21b. This leads to a delta func-tion that identifies i=qrSi 1, ir, and r=1, ... ,mB. One can verify that the factor qr is actually immaterial, and the and S1 terms in the above equation can be gathered into a single partition function, i expB1 ijA ij Si 1 expJ ijA Si 1Sj 1 = Si expJ ij SiSj ZJ tot1, D22 where we used the fact that B1=J see Eqs. D26 below
.
The summation over qr= 1r=1 mB becomes then trivial, yielding an overall factor 2mB.
In contribution D21c, each summation over qr= 1 yields a factor 2 cosirki/2, which vanishes unless ki is even. Thus, we can constrain the summation over ki =0, ... ,3ir to satisfy this condition, and we can drop the terms expi 2 irki1−qr since 1−qr is even and the term is identically one. The summation over qr= 1r=1 mB becomes again trivial. In particular, expi 2 i kii + Si 1 = expi 2 i kii −1 + Si 1 −1
D23 for the same reasoning, and we can write the and S1 terms in a more compact form using the definition of Z ki A, and introducing the notation Z ki B,+ = i expJ ijA ij + i 2 i kii −1. D24 The labeling B instead of A is used here as a reminder that the summation over i includes both spins surrounded only by bonds in A and spins on the boundary . Therefore, the total number of spins is NB c=NA c+N. These considerations allow us to simplify Eq. D21 to CLAUDIO CASTELNOVO AND CLAUDIO CHAMON PHYSICAL REVIEW B 78, 155120 2008 155120-20 SvN PA;T/B = −lim n→1 n 1 ZJ tot12mB−1 n−1eNA pAn 2NA p D25a − 1 ZJ tot1 eNA pA1 2NA p lim n→1 n Si expBn ijA SiSj + J ijA SiSj D25b − 1 ZJ tot1 eNA pA1 2NA p 1 4N kii=1 N even Z ki B,+ 2 Z ki A,+ + Z ki A,−lnZ ki A,+ + Z ki A,− + Z ki A,+ −Z ki A,−lnZ ki A,+ −Z ki A,−.
(D25c) In order to proceed further, let us first study some of the terms in Eq. (D25) separately. From Eq. (D13) we have that
A_n = ½ ln{ [(2 cosh J)^n + (2 sinh J)^n] [(2 cosh J)^n − (2 sinh J)^n] } = ½ ln[ (2 cosh J)^{2n} − (2 sinh J)^{2n} ],   (D26a)
B_n = ½ ln{ [(2 cosh J)^n + (2 sinh J)^n] / [(2 cosh J)^n − (2 sinh J)^n] },   (D26b)
A_1 = ln 2,   (D26c)
B_1 = ½ ln[ (1 + tanh J) / (1 − tanh J) ] = J,   (D26d)
(d/dn) A_n |_{n=1} = ln 2 + cosh² J ln(cosh J) − sinh² J ln(sinh J),   (D26e)
(d/dn) B_n |_{n=1} = sinh J cosh J ln( sinh J / cosh J ).   (D26f)
Notice that (d/dn) A_n |_{n=1} → ln 2 for J→0 and (d/dn) A_n |_{n=1} ≃ J + 1/2 + O(e^{−2J}) for J→∞, and that (d/dn) B_n |_{n=1} → 0 for J→0 and (d/dn) B_n |_{n=1} → −1/2 + O(e^{−2J}) for J→∞.
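The derivatives quoted in Eqs. (D26e) and (D26f) can be checked directly from the definitions of A_n and B_n; the short sympy sketch below (illustrative only) compares the symbolic derivative at n = 1 with the quoted closed forms at a generic numerical value of J.

import sympy as sp

n, J = sp.symbols('n J', positive=True)
c, s = 2 * sp.cosh(J), 2 * sp.sinh(J)

A_n = sp.log((c**n + s**n) * (c**n - s**n)) / 2
B_n = sp.log((c**n + s**n) / (c**n - s**n)) / 2

dA = sp.diff(A_n, n).subs(n, 1)
dB = sp.diff(B_n, n).subs(n, 1)

dA_quoted = sp.log(2) + sp.cosh(J)**2 * sp.log(sp.cosh(J)) - sp.sinh(J)**2 * sp.log(sp.sinh(J))
dB_quoted = sp.sinh(J) * sp.cosh(J) * sp.log(sp.tanh(J))

# Numerical comparison at a generic coupling: both differences vanish to machine precision.
for expr, quoted in ((dA, dA_quoted), (dB, dB_quoted)):
    print(sp.N((expr - quoted).subs(J, 0.37), 15))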
We can also carry out the derivative in Eq. D25, lim n→1 n Si expBn ijA SiSj + J ijA SiSj = d dnBn n=1 Si ijA SiSjexpB1 ijA SiSj + J ijA SiSj = sinh J cosh J ln sinh J cosh J Si ijA SiSjexpJ ij SiSj = sinh J cosh J ln sinh J cosh JEAZJ tot1ZJ tot1, D27a EAZJ tot1 Si ijA SiSjexpJ ij SiSj ZJ tot1 , D27b where EA is the extensive energy of the bonds in A in units of J in the Ising model described by the equilibrium partition function ZJ tot1. The last calculation we still need is d dneNA pAn n=1 = NA p2NA pln 2 + cosh2 J lncosh J −sinh2 J lnsinh J
.
D28 Combining all the results in Eqs. D26–D28, Eq. D25 reduces to SvN PA;T/B = ln2mB−1 + ln ZJ tot1 −NA pln 2 + cosh2 J lncosh J −sinh2 J lnsinh J D29a −sinh J cosh J ln sinh J cosh JEAZJ tot1 D29b TOPOLOGICAL ORDER IN A THREE-DIMENSIONAL… PHYSICAL REVIEW B 78, 155120 2008 155120-21 − 1 ZJ tot1 1 4N kii=1 N even Z ki B,+ 2 Z ki A,+ + Z ki A,−lnZ ki A,+ + Z ki A,− + Z ki A,+ −Z ki A,−lnZ ki A,+ −Z ki A,−
.
D29c Recall that ki is even and therefore kiSi is also even, irrespective of the values of the spins Si= 1. In particular, expi 2 i kiSi −1 = ie−i/2ki cos 2 ki + ie−i/2kisin 2 kiSi = i ki even + Siki odd
, D30 and both Z ki A,+ and Z ki A,+ can be rewritten as Z ki A,+ = Si i kiodd SiexpJ ijA SiSj = ZA,+ i kiodd Si, D31 Z ki A,+ = Si i kiodd SiexpJ ijA SiSj = ZA,+ i kiodd Si, D32 where ZA,+= SiexpJijASiSj and ZA,+ = SiexpJijASiSj; similarly for Z ki A,−and Z ki A,−. Thus, all these quantities can be interpreted as correlation functions of boundary spins located at the odd entries of the set ki times a partition function. Note that the constraint irki even, ∀r, requires that the number of such odd entries is also even separately on each boundary component r=1, ... ,mB.
If we are interested in computing the topological entropy of the system, it is convenient to decompose the last term in Eq. D29 so that SvN PA;T/B = ln2mB−1 + ln ZJ tot1 −NA pln 2 + cosh2 J lncosh J −sinh2 J lnsinh J D33a −sinh J cosh J ln sinh J cosh JEAZJ tot1 D33b −1 4N kii=1 N even Z ki B,+Z ki A,+ ZJ tot1 ln Z ki A,+ D33c −1 4N kii=1 N even Z ki B,+Z ki A,+ ZJ tot1 1 21 + Z ki A,− Z ki A,+ln1 + Z ki A,− Z ki A,+ +1 − Z ki A,− Z ki A,+ln1 − Z ki A,− Z ki A,+.
D33d The result in Eqs. D33a holds for nA=1 i.e., there is only one collective operation in A. In order to compute the topological entropy of the system with the bipartition scheme in Sec. III, we also need to consider the case where nA=0.
Repeating the derivation above, from Eq. D19 to Eq. D33, in the absence of collective operations leads rather straightfor-wardly to the result that ZPn = 1 ZJ tot12mB−1 neNA pAn 2NA p 1 2 qrr=1 mB 1 4N kii=1 N i expBn ijA ij Si 1 expi 2 r=1 mB ir kii + Si 1 −1 −qrexpJ ijA Si 1Sj 1Z ki A,+n−1, D34 and CLAUDIO CASTELNOVO AND CLAUDIO CHAMON PHYSICAL REVIEW B 78, 155120 2008 155120-22 SvN PA;T/B = ln2mB−1 + ln ZJ tot1 −NA pln 2 + cosh2 J lncosh J −sinh2 J lnsinh J D35a −sinh J cosh J ln sinh J cosh JEAZJ tot1 D35b −1 4N kii=1 N even Z ki B,+Z ki A,+ ZJ tot1 ln Z ki A,+.
D35c Notice that Eq. D35 differs from Eq. D33 only in that it lacks contribution D33d.
We can finally compute the plaquette contribution to the topological entropy Stopo P T/B using the full bipartition scheme.
All the terms that do not carry a topological contribution cancel. Namely, as discussed in Sec. III, N1A p + N4A p = N2A p + N3A p D36a and on similar grounds E1AZJ tot1 + E4AZJ tot1 = E2AZJ tot1 + E3AZJ tot1, D36b likewise for bipartitions 5–8. Recall also that mB=1 and nA=0 for all bipartitions, except bipartitions 4 and 5 which have mB=1 and nA=1 and bipartition 1 which has mB=2 and nA=0. Using Eqs. D33 and D35 accordingly, we obtain Stopo P T/B = ln2−m1B+m2B+m3B−m4B + 1 4N kii=1 N even Z ki 1B,+Z ki 1A,+ ZJ tot1 ln Z ki 1A,+ − Z ki 2B,+Z ki 2A,+ ZJ tot1 ln Z ki 2A,+ − Z ki 3B,+Z ki 3A,+ ZJ tot1 ln Z ki 3A,+ + Z ki 4B,+Z ki 4A,+ ZJ tot1 ln Z ki 4A,+ + 1 4N kii=1 N even Z ki 4B,+Z ki 4A,+ ZJ tot1 1 21 + Z ki 4A,− Z ki 4A,+ln1 + Z ki 4A,− Z ki 4A,+ +1 − Z ki 4A,− Z ki 4A,+ln1 − Z ki 4A,− Z ki 4A,+ + partitions 5 – 8.
D37 Using the fact that m1B−m2B−m3B+m4B=1, that m5B−m6B−m7B+m8B=0, and that Z ki 4A,JZ ki 5A,J since bipartitions 4 and 5 are in fact identical, one arrives to the result Stopo P T/B = −ln 2 + 1 4N kii=1 N even Z ki 1B,+Z ki 1A,+ ZJ tot1 ln Z ki 1A,+ − Z ki 2B,+Z ki 2A,+ ZJ tot1 ln Z ki 2A,+ − Z ki 3B,+Z ki 3A,+ ZJ tot1 ln Z ki 3A,+ + Z ki 4B,+Z ki 4A,+ ZJ tot1 ln Z ki 4A,+ + 1 4N kii=1 N even Z ki 5B,+Z ki 5A,+ ZJ tot1 ln Z ki 5A,+ − Z ki 6B,+Z ki 6A,+ ZJ tot1 ln Z ki 6A,+ − Z ki 7B,+Z ki 7A,+ ZJ tot1 ln Z ki 7A,+ + Z ki 8B,+Z ki 8A,+ ZJ tot1 ln Z ki 8A,+ + 1 4N kii=1 N even Z ki 4B,+Z ki 4A,+ ZJ tot1 1 + Z ki 4A,− Z ki 4A,+ln1 + Z ki 4A,− Z ki 4A,+ +1 − Z ki 4A,− Z ki 4A,+ln1 − Z ki 4A,− Z ki 4A,+.
D38 TOPOLOGICAL ORDER IN A THREE-DIMENSIONAL… PHYSICAL REVIEW B 78, 155120 2008 155120-23 This expression can be cast in a more useful way by no-ticing the following. Factors like P ki p 1 4Np Z ki pB,+Z ki pA,+ ZJ tot1 = 1 4Np ZpB,+ZpA,+ ZJ tot1 ip ki odd i ip ki odd Si D39 are greater than or equal to zero, for each of the par-titions p=1–8. This is because the expectation values of the products of spins are always non-negative when the interactions are ferromagnetic this can be shown explic-itly in a high-temperature expansion, for example. Recall that the set ki contains always an even number of odd ki’s.
Moreover, one can check that kii=1 Np even P ki p = 1 ZJ tot1 i Si expJ ijpA ijexpJ ijpA SiSj 1 4Np kii=1 Np even expi 2 ip kii + Si −2 = 1 ZJ tot1 i Si expJ ijpA ijexpJ ijpA SiSj 1 4Np kii=1 Np even expi 2 ip kii + Si −1 cos 2 ip ki −i sin 2 ip ki = 1 ZJ tot1 i Si expJ ijpA ijexpJ ijpA SiSj 1 2 q=1 1 4Np kii=1 Np expi 2 ip kii + Si −1 −q = 1 ZJ tot1 i Si expJ ijA ijexpJ ijA SiSj 1 2 q=1 iSi = q = 1 ZJ tot1 ZJ tot1 = 1, D40 and thus the P ki p 0 are probability weights. Similarly, we can define a probability P ki = P1P4P5P8 ki D41 =P2P3P6P7 ki 0, D42 where the ki are defined on the total boundary of the added partitions, and we used the fact that partitions 1, 4, 5, 8 and 2, 3, 6, 7 have exactly the same total boundary. We can then define averages with respect to this measure, ¯ ki kii=1 N even P ki¯, D43 and Eq. D38 reduces to Stopo P T/B = −ln 2 +ln Z ki 1A,+Z ki 4A,+Z ki 5A,+Z ki 8A,+ Z ki 2A,+Z ki 3A,+Z ki 6A,+Z ki 7A,+ ki D44a +1 + Z ki 4A,− Z ki 4A,+ln1 + Z ki 4A,− Z ki 4A,+ +1 − Z ki 4A,− Z ki 4A,+ln1 − Z ki 4A,− Z ki 4A,+ ki .
(D44b) We can finally analyze this expression as a function of temperature. Recall that J = −(1/2) ln tanh(λ_B/T), so that J→0 when T→0, and the disordered Ising phase occurs for T < Tc ≃ 1.313 346(3) λ_B. Below the Ising transition, at J = Jc ≃ 0.221 654 4(3), one can use a high-temperature loop expansion to estimate the ratio of Z^{4A,−}_{k_i} over Z^{4A,+}_{k_i}.
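For orientation, inverting the relation J = −(1/2) ln tanh(λ_B/T) at the quoted 3D Ising critical coupling reproduces the critical temperature cited above; the numerical value of J_c used below is the one quoted in the text (illustrative check).

import math

J_c = 0.2216544                        # 3D Ising critical coupling, as quoted above
# J = J_c  <=>  tanh(lambda_B / T_c) = exp(-2 J_c)
Tc_over_lambdaB = 1.0 / math.atanh(math.exp(-2.0 * J_c))
print(Tc_over_lambdaB)                 # ~ 1.3133, consistent with T_c ~ 1.313 346(3) lambda_B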
The high-temperature expansion contains either closed loops or open strings that terminate at the boundary because an Si is inserted for each site i where ki is odd. The corre-sponding expansions for Z ki 4A,−over Z ki 4A,+ differ only by loop terms that intersect the twist surface generated by the col-lective operation in Fig. 5, bottom an odd number of times.
These terms appear indeed with opposite sign in the two expansions. This can be achieved only by closed loops that wind around the donut shape and by open strings that con-nect boundary spins Si among those identified by the set of odd ki’s see Fig. 8.
In the high-temperature limit, long loops are exponen-tially suppressed and we can safely neglect the winding loop contributions when the size of the partition is taken to infin-ity. Similarly, out of all possible ways of connecting bound-ary spins in the ki odd set, only “short” strings between spins “close” to the twist surface need be considered, as illustrated in Fig. 9.
For ki points near the twist surface, rearranging the way that points are paired does not change the parity of the num-ber of crossings of the twist surface. This is illustrated in Fig.
9, where reconnecting spins 5–8 via the dashed lines instead of the solid lines give 0 instead of 2 crossings, thus not changing the parity. Now, a reconnection that changes the parity involves drawing long strings. Below the Ising transi-tion, the probability P ki keeps the points with odd ki con-fined in pairs; thus there are ways to connect them together with short strings. But changing the parity of the intersec-tions requires rematching them in such a way that connec-tions with sites far away are made, and the total length of these strings is of order the system size. This is illustrated in Fig. 9: for example, reconnecting spins 1–4 requires strings whose total length spans the system size.
Therefore, one can verify that all the loop terms corre-sponding to a given choice of ki’s have the same parity in the number of intersections to the twist surface up to corrections that are exponentially small in the size of the bipartition. As a result, the ratio Z ki 4A,−/Z ki 4A,+ tends to 1 in the thermody-namic limit of N→
, and the sign is purely determined by the choice of ki.
Equation (D44b) is clearly symmetric under the change Z^{4A,−}_{k_i}/Z^{4A,+}_{k_i} → −Z^{4A,−}_{k_i}/Z^{4A,+}_{k_i}, and we finally arrive at the result that at low temperature, T < Tc, the term in Eq. (D44b) gives 2 ln 2.
In the Ising ordered phase (T > Tc here), on the other hand, the ratio Z^{4A,−}/Z^{4A,+} → 0 in the thermodynamic limit because of the energy cost associated with the twist in boundary conditions (a domain wall) in the "−" partition.
Hence, in this case the term in Eq. D44b gives 0.
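The two limits just quoted for the term in Eq. (D44b) follow from the behavior of g(x) = (1+x) ln(1+x) + (1−x) ln(1−x) as the ratio x = Z^{4A,−}/Z^{4A,+} tends to ±1 or to 0; a small numerical illustration:

import math

def g(x):
    """(1+x)ln(1+x) + (1-x)ln(1-x), with the y*ln(y) -> 0 limit handled at y = 0."""
    total = 0.0
    for s in (+1, -1):
        y = 1 + s * x
        total += 0.0 if y == 0 else y * math.log(y)
    return total

for x in (0.0, 0.5, 0.999999, -0.999999):
    print(x, g(x))
print(2 * math.log(2))   # limiting value as |x| -> 1, i.e. the low-temperature result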
A similar reasoning gives that the ratios entering Eq.
D44a are equal to 1 in the thermodynamic limit, and cor-rections appear only as the correlation length becomes of the order of the size of the bipartitions, i.e., infinite in the ther-modynamic limit. Thus, in the low-temperature phase, Eq.
(D44a) gives ln 1 = 0 for T < Tc.
On the other hand, for T > Tc, the partitions order ferromagnetically, and one must account for the fact that partition 1A has two disconnected components; therefore these two components can order in two ways relative to one another, giving a factor of 2 in the ratio appearing in Eq. (D44a), and hence this term gives a contribution ln 2.
Putting it all together, we obtain that
S_topo^(P)(T/λ_B) = { ln 2, T < Tc ; 0, T > Tc },   (D45)
and ΔS_topo^(P)(T/λ_B) = S_topo^(P)(T/λ_B) − S_topo^(P)(0) is given by Eq.
4.32.
FIG. 8. (Color online) Qualitative examples of terms in the loop expansion that appear with different signs in Z^{4A,−}_{k_i} and Z^{4A,+}_{k_i}: closed loops that wind around the donut shape and open strings that connect boundary spins S_i, which appear in the high-temperature expansion whenever the corresponding k_i is odd.
FIG. 9. (Color online) Schematic (projected) illustration of open strings between boundary spins. The locations of the spins 1–8 are given by the sites where k_i is odd (recall that their total number must be even). One can verify that the parity of the number of intersections with the twist surface is fixed by the choice of the locations 1–8, up to exponentially small corrections (such as the red dotted string in the figure), which vanish in the thermodynamic limit of N→∞. For example, consider the change upon reconnecting spins 5–8 via the dashed lines instead of the solid lines. Notice that the case where, say, the points 1–4 are uniformly distributed on the boundary is exponentially suppressed by the probability P_{k_i}.
[1] F. D. M. Haldane and E. H. Rezayi, Phys. Rev. B 31, 2529 (1985); X.-G. Wen and Q. Niu, ibid. 41, 9377 (1990); X.-G.
Wen, Int. J. Mod. Phys. B 4, 239 (1990); Adv. Phys. 44, 405 (1995); Phys. Rev. B 65, 165113 (2002).
[2] M. Levin and X.-G. Wen, Phys. Rev. Lett. 96, 110405 (2006).
[3] A. Y. Kitaev and J. Preskill, Phys. Rev. Lett. 96, 110404 (2006).
[4] C. Castelnovo and C. Chamon, Phys. Rev. B 76, 184442 (2007).
[5] A. Y. Kitaev, Ann. Phys. (N.Y.) 303, 2 (2003).
[6] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, J. Math. Phys. 43, 4452 (2002).
[7] Z. Nussinov and G. Ortiz, arXiv:cond-mat/0605316 (unpublished); arXiv:cond-mat/0702377 (unpublished).
[8] A. Hamma, P. Zanardi, and X.-G. Wen, Phys. Rev. B 72, 035307 (2005).
[9] Note that this decomposition is visible only at finite temperature, where the full Hamiltonian enters the calculations for the von Neumann entropy of the system via the density matrix. At zero temperature, the two contributions do remain distinct, but they cannot be told apart as there is no explicit dependence on λ_A and λ_B in the GS wave function.
[10] Our conclusions, from the topological entropy, are in disagreement with the ones obtained by Z. Nussinov and G. Ortiz in Ref. 18. While the authors discuss both phase transitions in the model, at T=0 and at finite temperature, they argue that only the former has a topological nature, and they indeed conclude that topological order is fragile at finite temperature. As explained in Sec. IV, this discrepancy is due to the fact that the authors consider winding loop operators as nonlocal order parameters, which vanish intrinsically at any finite temperature and cannot be used (at least in a naive way) to investigate the robustness of topological order to thermal fluctuations.
[11] E. Knill, R. Laflamme, H. Barnum, D. Dalvit, J. Dziarmaga, J. Gubernatis, L. Gurvits, G. Ortiz, L. Viola, and W. H. Zurek, Los Alamos Sci. 27, 2 (2002).
[12] C. Castelnovo and C. Chamon, Phys. Rev. B 76, 174416 (2007).
[13] The entanglement entropy is invariant under a local spin rotation.
[14] A. Hamma, R. Ionicioiu, and P. Zanardi, Phys. Rev. A 71, 022315 (2005).
[15] Alternatively, one could replace the von Neumann entropy with its symmetrized version (the mutual information entropy), as proposed in Ref. 12.
[16] F. Wegner, J. Math. Phys. 12, 2259 (1971).
[17] J. B. Kogut, Rev. Mod. Phys. 51, 659 (1979).
[18] Z. Nussinov and G. Ortiz, Phys. Rev. B 77, 064302 (2008).
[19] R. Savit, Rev. Mod. Phys. 52, 453 (1980).
[20] M. Caselle, M. Hasenbusch, and M. Panero, J. High Energy Phys. 2003, 057.
[21] The familiar reader may have noticed that the construction of Z_J^tot is based on the well-known duality between the 3D Ising model and the Z2 Ising gauge theory in three dimensions, discussed, for example, in Refs. 17 and 19.
[22] Note that the classical model at finite T that obtains by setting λ_A = 0 is nothing but a classical Z2 gauge theory in three dimensions. Therefore, our results show that the topological entropy of this classical system behaves as a proper nonlocal order parameter that captures its finite-temperature phase transition.
[23] We thank John Cardy for pointing us in the direction of this replica trick to handle the delta function terms in Eq. (D15).
arXiv:2005.06480v2 [hep-th] 5 Oct 2020
From Hagedorn to Lee-Yang: Partition functions of
N = 4 SYM theory at finite N
Alexander T. Kristensson and Matthias Wilhelm
Niels Bohr International Academy, Niels Bohr Institute, Copenhagen University, Blegdamsvej 17, 2100 Copenhagen Ø, Denmark
[email protected], [email protected]
Abstract
We study the thermodynamics of the maximally supersymmetric Yang-Mills theory with gauge group U( N ) on R × S3, dual to type IIB superstring theory on AdS 5 × S5.While both theories are well-known to exhibit Hagedorn behavior at infinite N , we find evidence that this is replaced by Lee-Yang behavior at large but finite N : the zeros of the partition function condense into two arcs in the complex temperature plane that pinch the real axis at the temperature of the confinement-deconfinement transition. Concretely, we demonstrate this for the free theory via exact calculations of the (unrefined and refined) partition functions at N ≤ 7 for the su (2) sector containing two complex scalars, as well as at N ≤ 5 for the su (2 |3) sector containing 3 complex scalars and 2 fermions. In order to obtain these explicit results, we use a Molien-Weyl formula for arbitrary field content, utilizing the equivalence of the partition function with what is known to mathematicians as the Poincaré series of trace algebras of generic matrices. Via this Molien-Weyl formula, we also generate exact results for larger sectors.
Keywords: N = 4 SYM theory, partition function, finite N, confinement-deconfinement transition, Lee-Yang zeros
Contents
1 Introduction 2
2 Partition functions at infinite and finite N 6
2.1 The Character Formula 6
2.2 The Molien-Weyl Formula 7
3 The su (2) sector and Lee-Yang behavior 9
4 Generalization to larger sectors 14
4.1 su (2 |3) sector 14
4.2 sl (2) sector 17
4.3 psu (1 , 1|2) sector 19
4.4 Full theory 20
5 Conclusion and outlook 21
A Derivation of the character formula 22
B Derivation of the Molien-Weyl formula 24
C Calculation of the partition function in the psu (1 , 1|2) sector at N = 2 27
1 Introduction
Gauge theories exhibit a rich thermodynamic structure, much of which is still to be un-derstood. This is even the case for what might be the simplest gauge theory, namely the maximally ( N = 4) supersymmetric Yang-Mills (SYM) theory with gauge group U( N ), on
R × S3. Via the AdS/CFT correspondence [ 1], a dual description of N = 4 SYM theory is given by type IIB superstring theory on AdS 5 × S5, which has been used to study the thermodynamic properties of both theories from early on [ 2]. Gauß’ law dictates that the states on a compact space such as S3 are color singlets, lead-ing to a phase transition in N = 4 SYM theory that bears resemblance to the confinement-deconfinement transition in QCD. The compact space S3 allows also for a direct comparison between the conformal N = 4 SYM theory and confining theories, such as QCD [ 3]; the finite radius RS3 acts as an effective infrared cutoff, limiting the running of couplings in the latter case, and making perturbation theory applicable in confining theories when tuned sufficiently small. In conformal theories, the product of the temperature T and RS3 yields a dimensionless quantity on which thermodynamic quantities can depend. In the following, we will thus set RS3 = 1 in the understanding that the dependence on RS3 can trivially 2be restored. Via the AdS/CFT correspondence, the confinement-deconfinement phase tran-sition in N = 4 SYM theory is conjectured [ 2] to be dual to the Hawking-Page phase transition [ 4] between a gas of gravitons (or closed strings) and a black hole. A theoretical description of thermal physics is based on the partition function
Z(T ) = ∑
states
e−E/T , (1.1) where the sum is over all states, E denotes the energy of a given state and T is the temperature in units of the Boltzmann constant. For example, the phase transition at temperature Tc can be detected by looking at the scaling of the free energy with respect to N :
F (T ) = −T log Z(T ) ∼
1 for T < T c ,N 2 for T > T c . (1.2) A more detailed description is obtained by including also chemical potentials Ω i for the two spins S1, S2 and three R-charges J1, J2, J3 of N = 4 SYM theory. This yields the refined partition function
Z(T ) = ∑
states
e−(E−∑3
i=1 ΩiJi−∑2
a=1 Ωa+3 Sa)/T
. (1.3) Using the state-operator map, the states on R×S3 can be described by gauge-invariant local composite operators on flat Minkowski space R1,3; their energies E are then given by the scaling dimensions ∆. At tree level, the scaling dimension of an operator is simply the operator’s engineering dimension, but quantum corrections shift it in the interacting theory. Gauge-invariant local composite operators are built as traces of products of fields that transform covariantly under gauge transformations; moreover, products of such traces are again gauge invariant. Operators containing one trace are conventionally called single-trace operators, whereas operators containing more than one trace are called multi-trace operators. At infinite N , no relations exist between single- and multi-trace operators and a basis of the latter can be generated from the former, such that it suffices to consider single-trace operators. Due to their cyclicity, single-trace operators can be thought of as necklaces built from a set of beads, and counted using Pólya theory. This observation was used by Sundborg [ 5] to calculate the free partition function of N = 4 SYM theory at infinite N .This combinatorial approach was later extended to include the first correction in the ’t Hooft coupling λ = g2
YM
N , to chemical potentials and to related theories [ 6–15 ]. An important property displayed by the partition function at infinite N is Hagedorn behavior – an exponential growth of the density of states with energy. This can even be seen without knowledge of the full partition function, by a rough estimate of the density of states. Following Ref. [ 3], consider as a toy model the so-called su (2) sector of N = 4 SYM theory, which is constructed from two complex scalars, say X = φ1 + iφ 4 and Y = φ2 + iφ 5,each having bare scaling dimension and thus energy E = ∆ = 1. The number of single-trace states built from E of these scalars, ρ(E), can be estimated to be 2E
E
≤ ρ(E) ≤ 2E , where 3the division by E in the lower bound overaccounts for the fact that a trace is invariant under the E cyclic permutations of the E matrices in it, and this fact is neglected in the upper bound. Both of the bounds, and thus also ρ(E), grow exponentially with E as
ρ(E) ∼ eE log 2 , such that a single-trace partition function Z(T ) = ∑
E
ρ(E)e−E/T given by this density of states diverges at the Hagedorn temperature T su (2) ,tree
H
= 1 /log 2, and so does the full multi-trace partition function. 1 The Hagedorn behavior is present in all (non-trivial) subsectors of the theory, including the full theory; the different field content only effects the value of the Hagedorn temperature. The Hagedorn temperature of the full
N = 4 SYM theory was calculated via the partition function at tree level [ 5] and at first order in λ . In the dual string theory, the Hagedorn behavior of the gauge theory is reflected in the well-known Hagedorn behavior of free (or tree-level) string theory [ 17 ]. In the planar limit, the scaling dimensions of all operators in N = 4 SYM theory are in principle known via integrability; see Refs. [ 18 , 19 ] for reviews. 2 Using integrability, also the Hagedorn temperature of N = 4 SYM theory and thus type IIB superstring theory on AdS 5 × S5 can be calculated at any value of the coupling [ 21 , 22 ]; explicit results exist both numerically at finite coupling as well as analytically up to the seventh order in λ
[23 , 24 ] at weak coupling. At large λ, it asymptotes to the Hagedorn temperature of type IIB superstring theory in ten-dimensional Minkowski space [ 22 ] calculated in Ref. [ 25 ]. At finite N , so-called trace relations exist that relate single-trace operators with more that N fields to sums of multi-trace operators. 3 As a consequence, the thermodynamic behavior at large but finite N drastically differs from the one at infinite N . In particu-lar, the trace relations cut off the exponential growth of the density of states with the energy for E > N , such that no Hagedorn behavior occurs for finite N , no matter how large. 4 While the low-temperature phase ceases to exist at the Hagedorn temperature, the (confinement-deconfinement) phase transition at large but finite N occurs at the lower critical temperature Tc ≤ TH . On the dual string-theory side, finite N corresponds to a non-vanishing string coupling, which allows for the Hawking-Page transition to a black hole. 5
At tree level, the partition function at finite N can be written as a power series in
x = e−1/T , where the coefficients are written in terms of Littlewood-Richardson coefficients counting the number of color singlets in a tensor product of adjoint U( N ) representations, or characters of the symmetric group Sn [3, 43 ]. This sum representation, to which we refer as ‘character formula’, was used to obtain closed expressions for the free unrefined partition
1The restriction to a subsector of the full theory can be thought of as the following limit in the partition function. Take (Ω 1, . . . , Ω5) = ( n1Ω, . . . , n 5Ω) for some n1, . . . n 5and consider Ω →1 with T′=T / (1 −Ω) fixed. Then, only states from a specific subsector survive in the sum over all states that defines the partition function. To select for instance the su (2) sector, one can choose ( n1, . . . , n 5) = (1 ,1,0,0,0). This reasoning can also be extended to non-vanishing coupling to obtain a decoupling limit, see e.g. Ref. [ 16 ].
2Interestingly, also the superconformal index could be calculated via a Bethe ansatz [ 20 ].
3A basis of operators at finite Nis given by so-called Schur operators [ 26 –28 ].
4This is also consistent with the known fact from statistical physics that the partition function of a system with a finite number of degrees of freedom on a compact space cannot have divergences at finite temperature.
5Further aspects of the thermodynamic behavior at finite N, also for further systems, have been studied in Refs. [ 29 –42 ].
4function in the su (2) sector for N ≤ 5 (as well as for larger numbers of complex scalars) [44 ] and extended to the refined partition function for N ≤ 4 [ 45 ], starting from an ansatz as a rational function in x.A powerful alternative mathematical formulation of the partition function is obtained by translating this problem from representation theory to invariant theory: in this context, the refined tree-level partition function in the su (2) sector is the bigraded Poincaré series of U( N ) invariants of two Hermitian matrices, which is the same as the bigraded Poincaré series of GL( N ) invariants of two generic matrices. Invariant theory provides an integral representation for this Poincaré series and its extension to more matrices, known as the Molien-Weyl formula, which proves to be an easier avenue to closed expressions of the partition functions at finite N than the power-series representation. In particular, the bigraded Poincaré series of GL( N ) invariants of two generic N × N matrices was calculated for N ≤ 6 already long ago [ 46 –49 ]. 6
In this paper, we employ the Molien-Weyl formula to obtain the tree-level partition function in the su (2) sector at N = 7. More importantly, we find strong evidence that the Hagedorn temperature at infinite N is replaced by so-called Lee-Yang behavior [ 51 ] at large but finite N : the zeros of the partition function appear to condense into two arcs in the complex x = e−1/T plane that pinch the real axis at the critical temperature Tc. In the case of the su (2) sector at tree level, we find T su (2) ,tree
c
= T su (2) ,tree
H
= 1 /log 2, which is consistent with the findings of Ref. [ 3] that Tc and TH coincide in the free theory. This main result of our paper is also reminiscent of the work of Witten and Maloney [ 52 ], who found that the Hawking-Page phase transition in three-dimensional quantum gravity is of Lee-Yang type. Moreover, Lee-Yang zeros are used in the context of lattice QCD to detect phase transitions, see e.g. Refs. [ 53 –57 ]. Finally, we use the generalization of the Molien-Weyl formula to a general field content [ 50 ] to generate many further explicit results. The remainder of this paper is structured as follows. In Section 2, we review the infinite sum representation of the free partition function at finite N and recall how it reproduces the infinite N results as N → ∞ . Moreover, we present the generalized Molien-Weyl formula for the finite-N partition function, which is an integral representation. In Section 3, we demonstrate how the Molien-Weyl formula can be used to obtain explicit expressions for the free partition function in the su (2) sector at fixed N , obtaining new results at N = 7. Furthermore, we show how the zeros of these partition functions condense in two arcs in the complex x = e−1/T plane, indicating Lee-Yang behavior. In Section 4, we proceed to larger sectors. We obtain explicit results for the free partition function in the fermionic su (2 |3) sector for N ≤ 5, which confirm the Lee-Yang behavior observed in the purely bosonic su (2) sector in the previous section. Moreover, we also give the Molien-Weyl formula in the non-compact bosonic sl (2) sector, in the higher-rank, non-compact, fermionic psu (1 , 1|2) sector, as well as in the full theory. As a proof of principle, we evaluate the integral formulas in the
sl (2) and psu (1 , 1|2) sector explicitly for N = 2. Our conclusion and outlook can be found in Section 5. Three appendices provide details on the derivation of the character formula (Appendix A), the generalized Molien-Weyl formula (Appendix B) and the calculation of
6In the context of partition functions in gauge theories, such a representation was used in Ref. [ 50 ].
5the partition function in the psu (1 , 1|2) sector at N = 2 (Appendix C).
2 Partition functions at infinite and finite N
In this section, we present two different (but mathematically equivalent) methods to eval-uate the (refined) partition function for free gauge theories on R × S3. We take these gauge theories to contain a general number and type of fields transforming in the adjoint representation of the gauge group U( N ).
2.1 The Character Formula
The first method takes the form of an infinite sum and was developed in Refs. [ 3, 43 ]. Since it makes use of group characters, we refer to it as the ‘character formula’. We review its derivation in Appendix A, and simply quote the resulting expression here:
Z(β) =
∞
∑
n=0
∑
k⊢n
∑
r⊢nn
∏
j=1
z(jβ )kj
kj ! jkj |χr(k)|2 , (2.1) where β = 1 /T , k and r are both integer partitions of n labeling the irreducible represen-tations of the symmetric group Sn and U( N ), respectively, and χr(k) is the character of a group element with cycle structure k in representation r.7 The Young tableau corresponding to the partition r is limited to have at most N rows. The (refined) single-particle partition function z(β) is obtained by summing over all fields of the theory:
z(β) = ∑
fields
e−β(∆ −∑3
i=1 ΩiJi−
∑2
a=1 Ωa+3 Sa)
. (2.2) In the case of N = 4 SYM theory, these are the fields corresponding to the spins in the spin-chain picture, transforming in the so-called singleton representation of the symmetry algebra psu (2 , 2|4); see e.g. Ref. [ 59 ]. In the su (2) sector, we only have two complex scalar fields X = φ1 + iφ 4 and Y = φ2 + iφ 5, which have vanishing spins and R charges JX = δX1,
JY = δY 2. The (refined) single-particle partition function in this sector is thus z(β) =
e−β(1 −Ω1 ) +e−β(1 −Ω2). If the theory contains fermions, z(jβ ) in Eq. ( 2.1 ) is to be understood as shorthand:
z(jβ ) ≡ zB (jβ ) − (−1) j zF (jβ ) , (2.3) where zB (β) and zF (β) are the single-particle partition functions for the bosonic and fermionic fields, respectively:
zB/F (β) = ∑
bosonic/fermionic fields
e−β(∆ −∑3
i=1 ΩiJi−
∑2
a=1 Ωa+3 Sa)
. (2.4) The single-particle partition function starts at least at order O(x1) in x ≡ e−β ≡ e−1/T ,since the minimal bare scaling dimension of a field in four dimensions is ∆ = 1. This means
7Recall that the characters of Snonly depend on the conjugacy class of a group element, which is specified by the cycle structure [ 58 ].
6that we can evaluate the partition function up to order O(xL) by only calculating the terms with n ≤ L, allowing a method for calculation of the power expansion of the partition function. If the structure of the partition function is simple enough, it is possible to guess the full exact function from a limited number of terms in the power expansion. This method was used in Ref. [ 44 ] for the su (q) ‘sector’ 8 (with unrefined single-particle partition function
z(x) = qx ) to calculate exact, unrefined partition functions for N ≤ 5 in the case of q = 2 and for N ≤ 3 in the cases q = 3 , 4, 5, based on an ansatz as a rational function in x. The extension to refined partition functions with non-zero chemical potentials was investigated in Ref. [ 45 ] for N ≤ 4 in the case q = 2, for N ≤ 3 in the case q = 3 and for N ≤ 2 in the case q = 4. The disadvantage of the character formula is that the exact expressions for the partition functions can be very complicated, especially as N increases. In this case, the exact partition functions are very hard to guess from the power expansion as many more terms are needed; cf. our explicit results in Sections 3 and 4.9
Let us now consider what happens in the limit of N going to infinity. If n ≤ N , the sum over partitions r is unrestricted and we can use the row orthogonality of the characters to show [ 58 ]:
∑
r⊢n
|χr(k)|2 =
n
∏
j=1
kj ! jkj . (2.5) For infinite N , we can insert this back into Eq. ( 2.1 ), which yields the simple expression
ZN →∞ (β) =
∞
∑
n=0
∑
k⊢nn
∏
j=1
z(jβ )kj . (2.6) Notice that the sums over n and partitions k ⊢ n can be replaced by an infinite set of sums over kj from 1 to ∞. These kj correspond to the number of rows with length j in the Young tableau corresponding to k. Then we have the simple expression
ZN →∞ (β) =
∞
∏
j=1
∞
∑
kj=1
z(jβ )kj =
∞
∏
j=1
1
1 − z(jβ ) . (2.7) The partition function in this limit clearly diverges when z(jβ ) = 1 for j = 1 , 2, . . . . The temperature of the lowest pole at z(βH = 1 /T H ) = 1 corresponds to the Hagedorn temper-ature for infinite N , which coincides with the confinement-deconfinement temperature in the free theory [ 3].
2.2 The Molien-Weyl Formula
Another method to calculate partition functions for gauge theories is given by the Molien-Weyl formula. In general, the Molien-Weyl formula is a way to generate the Hilbert or Poincaré series for a ring of group invariants. In the context of high-energy physics, it has been applied for example to count chiral gauge-invariant operators in SQCD (see e.g.
8N= 4 SYM theory contains of course only three complex scalars, and the su (3) sector is not closed beyond one-loop, closing to the larger su (2 |3) sector which we treat in Section 4.1 .
9Moreover, the computational cost of any given order also increases with N.
7Ref. [ 60 ] and references therein) as well as for BSM-EFT (see e.g. Ref. [ 61 ] and references therein). An explicit Molien-Weyl formula for the free partition function of N = 4 SYM theory with gauge group U( N ) and any field content was given in Ref. [ 50 ]. We review its derivation in our conventions and notation in Appendix B. The resulting expression is 10
ZN (x) = ( ZN =1 (x)) N 1
(2 πi )N −1
∮
|t1|=1
dt1
t1
· · ·
∮
|tN−1|=1
dtN −1
tN −1
∏
1≤k≤r≤N−1
1 − tk,r
φk,r
, (2.8) where x ≡ e−β = e− 1
T
, tk,r = tktk+1 · · · tr and
ZN =1 (x) =
∏
fermionic fields
(1 + x ˜∆)
∏
bosonic fields
(1 − x ˜∆) , (2.9)
φk,r =
∏
bosonic fields
(1 − x ˜∆tk,r )(1 − x ˜∆t−1
k,r
)
∏
fermionic fields
(1 + x ˜∆tk,r )(1 + x ˜∆t−1
k,r
) , (2.10) with ˜∆ = ∆ −
3
∑
i=1
ΩiJi −
2
∑
a=1
Ωa+3 Sa . (2.11) The products run over all bosonic respectively fermionic fields in the theory with ∆ denoting the conformal dimensions of the fields, Sa their spin and Ji their R-charge. Note that, compared to the character formula, the field content now appears as a product instead of a sum. The integrals all run over the unit circle in the complex planes of the tj . Because of this, we can use residue theory to replace each integral by a sum over all residues within the unit circle. Note that the partition function in the case of N = 1, ZN =1 (x) in Eq. ( 2.9 ), simply contains one Bose-Einstein factor for each bosonic field and one Fermi-Dirac factor for each fermionic field. Restricting furthermore to only bosons, the algebra of gauge invariants in this case is freely generated from tr(Φ), where Φ is running over all fields of the theory; it is simply a polynomial ring. As an important special case, consider again the su (2) sector composed of two scalar fields X = φ1 + iφ 4 and Y = φ2 + iφ 5 with conformal dimensions ∆ X = ∆ Y = 1, vanishing spins and R charges JX = δX1, JY = δY 2. The Molien-Weyl formula ( 2.8 ) then reduces to the form [ 47 –49 ]:
Zsu (2)
N
(x1, x 2) = 1
(1 − x1)N (1 − x2)N
× 1
(2 πi )N −1
∮
|t1|=1
dt1
t1
· · ·
∮
|tN−1|=1
dtN −1
tN −1
∏
1≤k≤r≤N−1
1 − tk,r
φk,r
,
(2.12) where
φk,r = (1 − x1tk,r )(1 − x2tk,r )(1 − x1t−1
k,r
)(1 − x2t−1
k,r
) . (2.13)
10 The partition function for gauge group SU( N) can be trivially obtained by replacing ( ZN=1 (x)) N→
(ZN=1 (x)) N−1in Eq. ( 2.8 ).
8Moreover, we now used the parameters x1 = e−(1 −Ω1)β , x2 = e−(1 −Ω2)β as shorthand notation for the corresponding Boltzmann factors with distinct chemical potentials. As was shown in the context of invariant theory in Ref. [ 47 ], the Molien-Weyl formula in the su (2) sector has an interesting property under inversion of its arguments: 11
Zsu (2)
N
(1 /x 1, 1/x 2) = ( −1) N −1(x1x2)N 2
Zsu (2)
N
(x1, x 2) . (2.14) Let us briefly sketch the proof [ 47 ] of this property, restricting ourselves to the case of vanishing chemical potentials for simplicity, x1 = x2 = x. In this case, the integrand ( 2.12 )is easily seen to transform with a factor x2N 2
when sending x → 1/x . It has poles at
tN −1 ∈ rN −1(x) ∪ rN −1(1 /x ), where rn(x) = {x, x
t1
, x
t1t2
, . . . , x
t1t2...t n−1
}. Since 0 ≤ T < ∞
in the physical case, we usually assume that |x|< 1, such that the poles rN −1(x) are within the unit circle. The contour integral over the unit circle in tN −1 then results in the sum over the residues at the poles rN −1(x). Sending x → 1/x results in the sum over the residues at the poles rN −1(1 /x ), which are the poles outside of the unit circle. As the sum of the residues at all poles vanishes via Cauchy’s integration theorem, the two previous sums differ by a global sign. Having integrated in tN −1 to ta+1 , the integrand in ta can be shown to have poles at ra(x) ∪ ra(x2) ∪ . . . ∪ ra(xN +1 −a) ∪ ra(1 /x ) ∪ ra(1 /x 2) ∪ . . . ∪ ra(1 /x N +1 −a). Sending x → 1/x makes the contour integral in ta pick up the residues at the poles ra(1 /x )∪
ra(1 /x 2) ∪ . . . ∪ ra(1 /x N +1 −a) instead of ra(x) ∪ ra(x2) ∪ . . . ∪ ra(xN +1 −a), again resulting in a global sign. Combining the effects of all N − 1 contour integrations, we thus arrive at a global sign ( −1) N −1, concluding the proof. Note that it was crucial for the proof that no residues at ti = 0 or ti = ∞ existed, since these positions do not change when sending
x → 1/x . For a different field content, the integrand of the Molien-Weyl formula can in fact have poles at ti = 0 and ti = ∞, such that no analog of the property ( 2.14 ) exists; we will encounter concrete examples of this in Section 4. The physical interpretation of the property ( 2.12 ) is as follows: after taking a Casimir energy xN 2
for X and Y into account, the partition function is symmetric (up to a sign) under temperature inversion T → − T ,a symmetry that was studied in detail for many other systems in Refs. [ 62 –64 ]. 12
3 The su (2) sector and Lee-Yang behavior
Using the Molien-Weyl formula ( 2.12 ), the free refined partition functions in the su (2) sec-tor built from two complex scalars X = φ1 + iφ 4 and Y = φ2 + iφ 5 can be calculated as a sum of residues. Explicit results for N ≤ 6 were originally calculated via its identi-fication with the Poincaré series of GL( N ) invariants of two generic matrices [ 46 –49 ]. 13
We have extended these known results by calculating the refined partition function for
N = 7, using a specialized Mathematica code with a computation time of approximately 3 days on a desktop machine. We attach the refined partition functions in the ancillary file
su2partitionfunctions.m .
11 This property has an immediate generalization to the su (q) sector.
12 Interestingly, the temperature reflection symmetry is absent in this sector for infinite N, cf. Eq. ( 2.7 ).
13 For N≤5, the results for the unrefined partition function have also been independently rederived in Ref. [ 44 ] by making an ansatz and determining the coefficients via the character formula ( 2.1 ).
9Below, we give the simpler, unrefined partition function, which are obtained by setting the chemical potentials to zero, i.e. setting x1 = x2 = x ≡ e−1/T . For N = 1, the matrices are numbers and the partition function is simply obtained via the geometric series:
Zsu (2)
N=1
(x) = 1
(1 − x)2 , (3.1) as can be seen immediately from Eq. ( 2.12 ). For N = 2, we have [ 46 ]
Zsu (2)
N=2
(x) = 1
(1 − x)2(1 − x2)3 . (3.2) This has again the form of a geometric series. Indeed, it was shown in Ref. [ 46 ] that the algebra of gauge invariants in this case is generated freely by tr( X), tr( Y ), tr (X2), tr( XY )and tr (Y 2).For N = 3, we have [ 47 ]
Zsu (2)
N=3
(x) = 1 + x6
(1 − x)2(1 − x2)3(1 − x3)4(1 − x4) = 1 − x2 + x4
(1 − x)2(1 − x2)4(1 − x3)4 . (3.3) Note that the partition function for N = 3 has not the form of a geometric series, indicating that the algebra of gauge invariants is not freely generated. The elements tr( X), tr( Y ), tr (X2), tr( XY ), tr (Y 2), tr (X3), tr (X2Y ), tr (XY 2), tr (Y 3) and tr (X2Y 2) are algebraically independent; they do not generate the full algebra though, only a subalgebra C. The full algebra of gauge invariants is obtained as C ⊕ (C tr (XY X 2Y 2)), as reflected in the numerator and denominator of the first form in Eq. ( 3.3 ), cf. Ref. [ 47 ]. 14
For N = 4, we have [ 48 ]
Zsu (2)
N=4
(x) = 1 − x − x2 + 2 x4 + 2 x5 − 4x7 + 2 x9 + 2 x10 − x12 − x13 + x14
(1 − x)3(1 − x2)4(1 − x3)5(1 − x4)5 , (3.4) In this case, an identification of the minimal set of generating traces similar to N = 3 becomes quickly quite tedious [ 48 ]. For N = 5 and N = 6, we have [ 49 ]
Zsu (2)
N=5
(x) = P40 (x)
(1 − x)0(1 − x2)6(1 − x3)8(1 − x4)6(1 − x5)6 , (3.5) with
P40 (x) = 1 + 2 x − 6x3 − 9x4 + 2 x5 + 25 x6 + 38 x7 + 17 x8 − 34 x9
− 68 x10 − 34 x11 + 73 x12 + 176 x13 + 171 x14 + 34 x15 − 127 x16
− 156 x17 − 2x18 + 218 x19 + 322 x20 + 218 x21 − 2x22 − 156 x23
− 127 x24 + 34 x25 + 171 x26 + 176 x27 + 73 x28 − 34 x29 − 68 x30
− 34 x31 + 17 x32 + 38 x33 + 25 x34 + 2 x35 − 9x36 − 6x37 + 2 x39 + x40 ,
(3.6)
14 The form of the partition function can also be analyzed via the so-called plethystic logarithm, see e.g. Refs. [ 44 ,65 –67 ].
10 and
Zsu (2)
N=6
(x) = P70 (x)
(1 − x)5(1 − x2)3(1 − x3)6(1 − x4)9(1 − x5)7(1 − x6)7 , (3.7) with
P70 (x) = 1 − 3x + 3 x2 − 3x3 + 3 x4 + 4 x5 − 2x6 − 8x8 − 8x9
11 x10 + x11 + 56 x12 − 24 x13 + 48 x14 − 69 x15 − 9x16 + 2 x17
78 x18 + 118 x19 + 223 x20 + 23 x21 + 158 x22 − 182 x23 + 221 x24
− 42 x25 + 600 x26 + 365 x27 + 633 x28 + 324 x29 + 303 x30 − 31 x31
484 x32 + 178 x33 + 1055 x34 + 518 x35 + 1055 x36 + 178 x37 + 484 x38
− 31 x39 + 303 x40 + 324 x41 + 633 x42 + 365 x43 + 600 x44 − 42 x45
221 x46 − 182 x47 + 158 x48 + 23 x49 + 223 x50 + 118 x51 + 78 x52
2 x53 − 9x54 − 69 x55 + 48 x56 − 24 x57 + 56 x58 + x59 + 11 x60
− 8x61 − 8x62 − 2x64 + 4 x65 + 3 x66 − 3x67 + 3 x68 − 3x69 + x70 .
(3.8) For N = 7, we found the following new result:
Zsu (2)
N=7
(x) = P136 (x)
(1 − x)0(1 − x2)4(1 − x3)8(1 − x4)12 (1 − x5)10 (1 − x6)8(1 − x7)8 , (3.9) where
P136 (x) = 1 + 2 x + 2 x2 − 2x3 − 12 x4 − 20 x5 − 10 x6 + 38 x7 + 124 x8 + 202 x9 + 186 x10
− 2x11 − 312 x12 − 494 x13 − 82 x14 + 1364 x15 + 3935 x16 + 7080 x17
9761 x18 + 11190 x19 + 12188 x20 + 16284 x21 + 29980 x22 + 61276 x23
117046 x24 + 200524 x25 + 311834 x26 + 452462 x27 + 634771 x28
891852 x29 + 1284256 x30 + 1896942 x31 + 2828447 x32 + 4174570 x33
6021068 x34 + 8452156 x35 + 11582747 x36 + 15602230 x37 + 20815499 x38
27651402 x39 + 36633392 x40 + 48308938 x41 + 63176460 x42 + 81638768 x43
104026405 x44 + 130676134 x45 + 162046094 x46 + 198782434 x47
241699563 x48 + 291628632 x49 + 349196244 x50 + 414593302 x51
487467350 x52 + 566967546 x53 + 651962894 x54 + 741302716 x55
834019828 x56 + 929323032 x57 + 1026404662 x58 + 1124098904 x59
1220612186 x60 + 1313438250 x61 + 1399593383 x62 + 1476059720 x63
1540326729 x64 + 1590748212 x65 + 1626630377 x66 + 1647969244 x67
1655036460 x68 + 1647969244 x69 + 1626630377 x70 + 1590748212 x71
1540326729 x72 + 1476059720 x73 + 1399593383 x74 + 1313438250 x75
1220612186 x76 + 1124098904 x77 + 1026404662 x78 + 929323032 x79
834019828 x80 + 741302716 x81 + 651962894 x82 + 566967546 x83
487467350 x84 + 414593302 x85 + 349196244 x86 + 291628632 x87
11 + 241699563 x88 + 198782434 x89 + 162046094 x90 + 130676134 x91
104026405 x92 + 81638768 x93 + 63176460 x94 + 48308938 x95 + 36633392 x96
27651402 x97 + 20815499 x98 + 15602230 x99 + 11582747 x100 + 8452156 x101
6021068 x102 + 4174570 x103 + 2828447 x104 + 1896942 x105 + 1284256 x106
891852 x107 + 634771 x108 + 452462 x109 + 311834 x110 + 200524 x111
117046 x112 + 61276 x113 + 29980 x114 + 16284 x115 + 12188 x116
11190 x117 + 9761 x118 + 7080 x119 + 3935 x120 + 1364 x121 − 82 x122
− 494 x123 − 312 x124 − 2x125 + 186 x126 + 202 x127 + 124 x128 + 38 x129
− 10 x130 − 20 x131 − 12 x132 − 2x133 + 2 x134 + 2 x135 + x136 . (3.10) The numerators of all partition functions are given by palindromic polynomials in x.This is a consequence of the partition functions being invariant under x → x−1 up to an overall factor of ( −1) N −1x2N 2
, as shown in Ref. [ 47 ]15 and discussed below Eq. ( 2.12 ). As also mentioned below Eq. ( 2.12 ), this property can be interpreted as a symmetry under temperature reflection ( T → − T ) up to a sign when including a Casimir energy of N 2,i.e. a Casimir energy of 1 /2 per real degree of freedom in each of the matrices X and
Y . The maximal orders of the (maximally canceled) polynomials increase rapidly with
N : 0 , 0, 4, 14 , 40 , 70 , 136 , . . . for N = 1 , 2, 3, 4, 5, 6, 7, . . . . The general structure of these polynomials for any N remains to be found; as can be seen from Eq. ( 3.3 ), the pattern is also prone to being obscured by cancellations between numerator and denominator. The denominators of the partition functions are built from products of (1 − xi)pi , with
i = 1 , ..., N . The powers, pi, all sum up to N 2 + 1. Thus for x → 1 or T → ∞ , the partition functions all have limits of the form
ZN (x) ≈ aN
(1 − x)N 2+1 for T → ∞ , (3.11) with aN some constant. This property was shown in Ref. [ 44 ] in the more general setting of the su (q) sector using Eq. ( 2.1 ), and it was argued that at high temperatures the su (q)sector on a compact space behaves as ( q − 1) N 2 + 1 independent harmonic oscillators. In Figure 1, we plot all zeros of the unrefined partition functions in the complex plane of x = e−1/T for N = 4 , . . . , 7. Remarkably, the right-most group of zeros appears to ‘condense’ in two arcs pinching the real axis, signaling a phase transition of Lee-Yang type [51 ]. The number of zeros on each arch for N = 1 , . . . , 7 is 0 , 0, 1, 3, 6, 10 , 15, which exactly follows the prescription ( N − 1)( N − 2) /2. Along with the observation that the zero closest to the real axis appears to inch closer and closer for each value of N , this further solidifies the hypothesis that the zeros are condensing on these two arcs pinching the real axis. Furthermore, the arcs appear to pinch the real axis just around the point xsu (2) ,tree
c
= 1 /2or T su (2) ,tree
c
= 1 /ln 2, which is the Hagedorn temperature at infinite N .16 This hints at the
15 The fact that the numerator polynomials are palindromic was also independently observed in examples in Ref. [ 44 ].
16 Note that there appear to be also further, smaller arcs, away from the positive real axis. The most prominent of these appear to pinch the negative real axis at x=−1/√2, which is a solution to z(jβ ) = 1
12 −1.2 −1 −0.8 −0.6 −0.4 −0.2 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6
−1
−0.8
−0.6
−0.4
−0.20.20.40.60.81 N = 4
N = 5
N = 6
N = 7
Figure 1 . Zeros of the su (2) partition functions for N = 4 . . . 7 plotted in the complex plane of
x = e−1/T . The right-most zeros appear to ‘condense’ in two arcs pinching the real axis just around the Hagedorn temperature xsu (2) ,tree
H
= 1 /2 of the infinite-N theory. This hints at a phase transition of Lee-Yang type.
remarkable result that for large but finite N , the Hagedorn behavior is replaced by Lee-Yang behavior. (Recall that the Hagedorn temperature is expected to coincide with the confinement-deconfinement temperature in the free theory, while the two are expected to differ at higher loop order [ 3].) Via the AdS/CFT correspondence, also the Hawking-Page transition of type IIB superstring theory on AdS 5 × S5 should thus be of Yang-Lee type. Our finding is reminiscent of the work of Maloney and Witten [ 52 ], who investigated the Hawking-Page phase transition in three-dimensional quantum gravity and found it to be of Lee-Yang type. With gravity being considerably simpler in three dimensions than in higher dimensions, it was also possible to rigorously prove the condensation of the zeros in this case. Moreover, the condensation of Lee-Yang zeros is in fact a well-known phenomenon in the field of lattice QCD; see for instance Refs. [ 53 –57 ]. From the explicit partition functions for N ≤ 7, we can also investigate the low- and high-temperature regimes of the su (2) sector. In Figure 2, we plot the logarithm of the partition functions for N = 1 , . . . , 7, which is proportional to the free energy; cf. Eq. ( 1.2 ).
for j= 2, i.e. it is the position of a higher Hagedorn pole, cf. Eq. ( 2.7 ) and the discussion below it. It would be interesting to develop a better understanding of these further arcs.
13 1 2 3 4 5 6 72.01 2.02 2.03 2.04
·10 −2
N
ln Z
0.020405
data
1 2 3 4 5 6 750 100 150
N
ln Z
fit: 3 .56(1) N 2 + 6 .5(3)
data
Figure 2 . Plots of the logarithm of the su (2) partition functions for N= 1 , . . . , 7 in the low tem-perature regime (left) with x= 0 .01 and the high temperature regime (right) with x= 0 .99. The plot on the left includes a constant line at ln Zsu (2)
N=7 (x= 0 .01) ∼0.020405, for comparison. On the right, we fit the data to the function c2·N2+c0, with an R2-value of 0 .999951. As evident from the respective plots, the free energy scales as N0/N2in the low/high temperature regime.
On the left, we have set x = 0 .01 corresponding to T ≈ 0.217. On the right, we have
x = 0 .99 corresponding to T ≈ 99 .5. Using a fit, we observe that the free energy scales as
N 0/N 2 in the low/high temperature regime, a clear sign of a confinement-deconfinement phase transition as described in Ref. [ 2].
4 Generalization to larger sectors
We now proceed to larger sectors. Including fermions on top of scalars, we find further evidence for Lee-Yang behavior in the su (2 |3) sector in Subsection 4.1 . We next go to the non-compact sl (2) sector, which includes arbitrary numbers of covariant derivatives on a single complex scalar. While the Molien-Weyl formula now requires to sum over infinitely many residues, it can still be evaluated, and we demonstrate how this can be done explicitly for N = 2 in Subsection 4.2 . In Subsection 4.3 , we show that the same procedure also works for the non-compact higher-rank psu (1 , 1|2) sector, which contains complex scalars, fermions and covariant derivatives. Finally, we give the Molien-Weyl formula for the full theory in Subsection 4.4 .
4.1 su (2 |3) sector
We now include fermions in the discussion, and investigate the su (2 |3) sector built from three complex scalars X = φ1 +iφ 4, Y = φ2 +iφ 5, Z = φ3 +iφ 6 and two fermions ψα=1 4 and
ψα=2 4 . While the scalars have classical scaling dimension ∆ = 1, the fermions have classical 14 scaling dimension ∆ = 3 /2, resulting in the single-particle partition functions
zB (x) = 3 x , zF (x) = 2 x3/2 . (4.1) Using Eq. ( 2.8 ) in the su (2 |3) sector, we obtain the Molien-Weyl formula
Zsu (2 |3)
N
(x) = (1 + x3/2)2N
(1 − x)3N
1
(2 πi )N −1
∮
|t1|=1
dt1
t1
· · ·
∮
|tN−1|=1
dtN −1
tN −1
∏
1≤k≤r≤N−1
1 − tk,r
φk,r
,
(4.2) where x = e−1/T , tk,r = tktk+1 · · · tr and
φk,r = (1 − xt k,r )3(1 − xt −1
k,r
)3
(1 + x3/2tk,r )2(1 + x3/2t−1
k,r
)2 . (4.3) Note that we are restricting ourselves to the unrefined partition function here for simplicity; the refined one can be obtained in a similar way. As was the case with the su (2) sector, we can evaluate this formula for low values of
N by replacing the integrals over the complex unit circle of ti with sums over the residues contained within it. Using this procedure, we were able to compute the su (2 |3) partition functions for N ≤ 5 via Mathematica , as shown below. Since the fermions contribute with half-integer scaling dimensions, it is convenient to introduce a ≡ √x = e− 1
2T
. We include our full results in the ancillary file su2slash3partitionfunctions.m .For N = 1, we have
Zsu (2 |3)
N=1
(a) = (1 + a3)2
(1 − a2)3 . (4.4) For N = 2, we find
Zsu (2 |3)
N=2
(a) = (1 + a)3(1 + a3)4
(1 − a2)4(1 − a4)5 × P11 (a) , (4.5) where
P11 (a) = 1 − 3a + 5 a2 − 9a3 + 16 a4 − 18 a5 + 19 a6 − 21 a7 + 17 a8 − 9a9 + 7 a10 − 3a11 .
(4.6) For N = 3, we find
Zsu (2 |3)
N=3
(a) = (1 + a)10 (1 + a3)6
(1 − a2)4(1 − a4)8(1 − a6)7 × P50 (a) , (4.7) 15 where
P50 (a) = 1 − 10 a + 54 a2 − 214 a3 + 698 a4 − 1972 a5 + 4977 a6 − 11458 a7 + 24401 a8
− 48556 a9 + 90987 a10 − 161476 a11 + 272627 a12 − 439456 a13 + 678225 a14
− 1004446 a15 + 1430189 a16 − 1960780 a17 + 2591626 a18 − 3305714 a19
4072484 a20 − 4848720 a21 + 5581917 a22 − 6215426 a23 + 6695473 a24
− 6978320 a25 + 7036587 a26 − 6863412 a27 + 6473870 a28 − 5902598 a29
5199179 a30 − 4421048 a31 + 3625988 a32 − 2865264 a33 + 2178627 a34
− 1591448 a35 + 1114790 a36 − 747174 a37 + 477876 a38 − 290730 a39
167604 a40 − 91122 a41 + 46455 a42 − 22050 a43 + 9654 a44
− 3852 a45 + 1380 a46 − 432 a47 + 114 a48 − 24 a49 + 3 a50 .
(4.8) For N = 4, we find
Zsu (2 |3)
N=4
(a) = (1 + a)18 (1 + a3)11
(1 − a2)6(1 − a4)8(1 − a6)10 (1 − a8)9 × P119 (a) , (4.9) where P119 (a) is a polynomial of degree 119. Since P119 (a) is one page long, we refrain from printing it here; the interested reader can find it in machine-readable form in the ancillary file su2slash3partitionfunctions.m . For N = 5, we find
Zsu (2 |3)
N=5
(a) = (1 + a)18 (1 + a3)18 (1 + a5)10
(1 − a4)12 (1 − a6)16 (1 − a8)12 (1 − a10 )11 × P219 (a) , (4.10) where P219 (a) is a two-and-a-half pages long polynomial of degree 219 which can equally be found in the ancillary file su2slash3partitionfunctions.m .Note that the polynomials in the numerator are not palindromic, in contrast to the
su (2) sector. Recall that in the su (2) sector, the palindromicness is a consequence of the uniform transformation behavior of the partition function under x → x−1, which can be proven via the Molien-Weyl formula. Its proof required the poles of the integrand at zero and infinity (or rather the corresponding residues) to vanish. Indeed, the Molien-Weyl formula ( 4.2 ) in the su (2 |3) sector has non-vanishing residues at infinity, which spoil this property. 17
In the su (2) sector, we found compelling evidence that the Hagedorn behavior at infinite
N is replaced by Lee-Yang behavior at large but finite N , from observing a condensation of the zeros of the partition function in the complex x = e−1/T plane. In Figure 3, we plot the zeros of the su (2 |3) partition functions for N ≤ 5 in the complex a = e− 1
2T
plane and observe a similar condensation of the zeros into two main arcs that appear to pinch the real axis. From Eq. ( 2.7 ), we see that the Hagedorn temperature of the su (2 |3) sector at infinite N is given by the lowest positive solution to the equation 1 = z(a) = zB (a) + zF (a) = 3 a2 + 2 a3,which is at asu (2 |3) ,tree
H
= 1 /2 corresponding to T su (2 |3) ,tree
H
= 1 /2 ln 2. (Recall again that
Tc = TH in the free theory [ 3].) In Figure 3, the zeros appear to pinch the real axis exactly at asu (2 |3) ,tree
H
= 1 /2, further supporting the statement that Lee-Yang behavior replaces Hagedorn behavior at finite N .
17 Note that temperature inversion symmetry is in general only expected to hold in the full theory [ 63 ].
16 −1 −0.8 −0.6 −0.4 −0.2 0.2 0.4 0.6 0.8 1 1.2 1.4
−1.2
−1
−0.8
−0.6
−0.4
−0.20.20.40.60.811.2 N = 2
N = 3
N = 4
N = 5
Figure 3 . Zeros of the free partition functions in the su (2 |3) sector for N = 2 . . . 5 plotted in the complex plane of a = e− 1
2T
. The right-most zeros appear to ‘condense’ in two arcs pinching the real axis just around the Hagedorn temperature asu (2 |3) ,tree
H
= 1 /2 of the infinite-N theory. This hints at a phase transition of Lee-Yang type.
4.2 sl (2) sector
The sl (2) sector is built from one complex scalar, say X = φ1 + iφ 4, and a single light-like covariant derivative that can act on it, say D ≡ D μσα=1 , ˙α=1
μ
, resulting in fields of type
Di−1X for i = 1 , 2, 3 . . . .18 Since ∆ Di−1X = i, we have the single-particle partition function
z(x) =
∞
∑
i=1
xi = x
1 − x . (4.11) Considering the sector as composed from an infinite set of fields with scaling dimensions ∆i = i, we obtain the following Molien-Weyl formula from Eq. ( 2.8 ):
Zsl (2)
N
(x) =
( ∞∏
i=1
1
1 − xi
)N 1
(2 πi )N −1
∮
|t1|=1
dt1
t1
· · ·
∮
|tN−1|=1
dtN −1
tN −1
∏
1≤k≤r≤N−1
1 − tk,r
φk,r
,
(4.12)
18 Recall that σμ= ( 1, σ 1, σ 2, σ 3), where σiare the usual Pauli matrices.
17 where tk,r = tktk+1 · · · tr and
φk,r =
∞
∏
i=1
(1 − xitk,r )(1 − xit−1
k,r
) . (4.13) For N = 1, the Molien-Weyl formula ( 4.12 ) simply reduces to
Zsl (2)
N=1
(x) =
∞
∏
i=1
1
1 − xi , (4.14) which is an infinite product of geometric series. 19 Under temperature inversion T → − T ,this expression transforms to ix − 1
12
Zsl (2)
N=1
(x); it is thus invariant when including x− 1
24
as Casimir energy. 20
For N = 2, we have
Zsl (2)
N=2
(x) =
( ∞∏
i=1
1
1 − xi
)2 1
2πi
∮
|t1|=1
dt1
t1
1 − t1
φ1,1
, (4.16) where
φ1,1 =
∞
∏
i=1
(1 − xit1)(1 − xit−11 ) . (4.17) This integral can be evaluated using the residue theorem by summing over all residues within the unit circle. Since φ1,1 diverges more rapidly than linearly at t1 = 0, there is no pole at t1 = 0 and the relevant poles come from the factor φ1,1 in the denominator. Since
|x| < 1, only the poles at t1 = xi are within the unit circle. Consider first one of the poles at t1 = xj , where the residue is Res t1=xj
[ 1 − t1
t1
1
∏∞
i=1
(1 − xit1)(1 − xit−11 )
]
= (1 − xj ) 1
∏∞
i=1
(1 − xixj )1
∏∞
i=1
i6=j
(1 − xix−j ) .
(4.18) Next, we split ∏∞
i=1
i6=j
(1 − xix−j ) = ∏∞
i=j+1
(1 − xi) ∏−1
i=1 −j
(1 − xi). Since
−1
∏
i=1 −j
(1 − xi) =
j−1
∏
i=1
(1 − xi)( −x−i) = (−1) j−1
xj(j−1) /2
j−1
∏
i=1
(1 − xi) , (4.19) we can simplify the expression by collecting terms: Res t1=xj [· · · ] = (−1) j−1x j(j−1)
2
(1 − xj )2
∏∞
i=1
(1 − xi)2 . (4.20)
19
The inverse of this infinite product is sometimes called Euler function, and it can equally be expressed in terms of the Dedekin η function η(τ ) = q 1
24
∏∞
n=1
(1 − qi) with q = e2πiτ .
20
Concretely,
Zsl (2)
N=1
(1 /x ) =
∞
∏
i=1
1
1 − x−i =
∞
∏
i=1
1
(−1)
∞
∏
i=1
xi
∞
∏
i=1
1
1 − xi = xζ(−1)
(−1) ζ(0) Zsl (2)
N=1
(x) = ix − 1
12
Zsl (2)
N=1
(x) , (4.15) where we have used ζ function regularization following Ref. [ 62 ].
18 Summing over all residues in t1 = xj for j ∈ N, we obtain
Zsl (2)
N=2
(x) =
∑∞
i=1
(−1) i−1x i(i−1)
2
(1 − xi)2
∏∞
i=1
(1 − xi)4 . (4.21) Note that the numerator is now an infinite sum instead of a polynomial as in the compact
su (2) and su (2 |3) sectors. 21 Each term in the sum in Zsl (2)
N=2
transforms with a different power of xi(i+1) under temperature reflection T → − T . Thus, temperature reflection symmetry, at least in the usual sense, seems to be absent. We can calculate the zeros of the N = 2 partition function ( 4.21 ) by truncating the infinite sum at some finite value M , observing a stabilization in the distribution of zeros as we increase M . In order to observe Lee-Yang behavior also in the sl (2) sector, we would need to evaluate the Molien-Weyl formula ( 4.12 )at N > 2; this is in principle possible, but quickly becomes quite tedious. We leave this for future work.
4.3 psu (1 , 1|2) sector
The psu (1 , 1|2) sector is built from two complex scalars X = φ1 +iφ 4 and Y = φ2 +iφ 5, two fermions ψα=1 4 and ¯ψ ˙α=1 3 as well as a single light-like covariant derivative D11 = Dμσα=1 , ˙α=1
μ
that can act on any of them. This field content results in the single-particle partition functions
zB (x) = 2
∞
∑
i=1
xi = 2x
1 − x , zF (x) = 2
∞
∑
i=1
xi+1 /2 = 2x3/2
1 − x , (4.22) where x = e−1/T .Using Eq. ( 2.8 ), the Molien-Weyl formula for this sector takes the form
Zpsu (1 ,1|2)
N
(x) =
( ∞∏
i=1
(1 + xi+1 /2)2N
(1 − xi)2N
)
× 1
(2 πi )N −1
∮
|t1|=1
dt1
t1
· · ·
∮
|tN−1|=1
dtN −1
tN −1
∏
1≤k≤r≤N−1
1 − tk,r
φk,r
,
(4.23) where tk,r = tktk+1 · · · tr and
φk,r =
∞
∏
i=1
(1 − xitk,r )2(1 − xit−1
k,r
)2
(1 + xi+1 /2tk,r )2(1 + xi+1 /2t−1
k,r
)2 . (4.24) For N = 1, the Molien-Weyl formula yields
Zpsu (1 ,1|2)
N=1
(x) =
∞
∏
i=1
(1 + xi+1 /2)2
(1 − xi)2 . (4.25) For N = 2, we have
Zpsu (1 ,1|2)
N=2
=
( ∞∏
i=1
(1 + xi+1 /2)4
(1 − xi)4
) 1
2πi
∮
|t1|=1
dt1
1 − t1
t1
∞
∏
i=1
(1 + xi+1 /2t1)2(1 + xi+1 /2t−11 )2
(1 − xit1)2(1 − xit−11 )2 .
(4.26)
21 The numerator can in fact be written in terms of so-called partial theta functions.
19 We can evaluate this integral using residue theory, similar to what was done in Section 4.2 .The full calculation is shown in Appendix C. The result is the following exact partition function for the psu (1 , 1|2) sector with N = 2:
Zpsu (1 ,1|2)
N=2
=
( ∞∏
i=1
(1 + xi+1 /2)8
(1 − xi)8
)
(1 + x1/2)4
×
∞
∑
j=1
x(j−1) (1 − xj )2(1 − xj−1/2 − 4xj − xj+1 /2 − 2x2j−1/2 − 3x2j − 2x2j+1 /2)
(1 + xj−1/2)3(1 + xj+1 /2)3 .
(4.27)
4.4 Full theory
The symmetry algebra of the full N = 4 SYM theory is psu (2 , 2|4). The field content of the theory is built from six real scalars, 16 fermions and four covariant derivatives. However, terms corresponding to the equations of motion, the Bianchi identities and their derivatives have to be subtracted. The single-particle partition functions are then given by [ 3]
zB (x) = 6x + 6 x2 − 14 x3 + 2 x4
(1 − x)4 = 6x + 12 x2 − 2x3
(1 − x)3 =
∞
∑
i=1
(8 i2 − 2) xi ,zF (x) = 16 x3/2 − 16 x5/2
(1 − x)4 = 16 x3/2
(1 − x)3 = 8
∞
∑
i=1
i(i + 1) xi+1 /2 ,
(4.28) where x = e−1/T .Using Eq. ( 2.8 ), the Molien-Weyl formula for the full N = 4 SYM theory is
Zpsu (2 ,2|4)
N
(x) =
∞
∏
i=1
( (1 + xi+1 /2)8i(i+1)
(1 − xi)8i2−2
)N
× 1
(2 πi )N −1
∮
|t1|=1
dt1
t1
· · ·
∮
|tN−1|=1
dtN −1
tN −1
∏
1≤k≤r≤N−1
1 − tk,r
φk,r
,
(4.29) where tk,r = tktk+1 · · · tr and
φk,r =
∞
∏
i=1
(1 − xitk,r )8i2−2(1 − xit−1
k,r
)8i2 −2
(1 + xi+1 /2tk,r )8i(i+1) (1 + xi+1 /2t−1
k,r
)8i(i+1) . (4.30) For N = 1, this formula yields
Zpsu (2 ,2|4)
N=1
(x) =
∞
∏
i=1
(1 + xi+1 /2)8i(i+1)
(1 − xi)8i2−2 . (4.31) While this integral can also be evaluated at larger N via residues, this becomes quite tedious due to the higher order of the poles; we leave this for future work. 20 5 Conclusion and outlook
In this paper, we have studied the partition function of free N = 4 SYM theory on R × S3
with gauge group U( N ) at finite N .We obtained closed expressions for the partition function at specific values of N via a Molien-Weyl formula, a (contour) integral over ( S1)N −1 that can be done via residues. We have explicitly evaluated this integral for reasonably high values of N in the su (2) and
su (2 |3) sector. Moreover, as a proof of principle, we have evaluated the contour integrals for N = 2 also in larger, non-compact sectors, namely the sl (2) and psu (1 , 1|2) sector. In the su (q) sector, the free partition function is given by a rational function in
x = e−1/T . Its numerator was conjectured to be palindromic [ 44 ]. This property can be rigorously proven via the Molien-Weyl formula [ 47 ]. Including also a Casimir energy, the partition function then becomes invariant under T → − T . This symmetry has been identi-fied and studied in a number of other physical systems in Refs. [ 62 –64 ]. In the higher-rank sectors including fermions that we considered, this symmetry is absent, though. The partition function allows us to study the thermodynamic behavior of N = 4 SYM theory on R×S3. While this theory is well known to exhibit Hagedorn behavior at infinite N
, we find that the Hagedorn behavior is replaced by Lee-Yang behavior at large but finite
N . Lee-Yang behavior [ 51 ] was originally found in the description of mesoscopic systems; it means that the zeros of the partition function in the complex x = e−1/T plane condense in arcs that pinch the real line at the temperature of the phase transition. Concretely, we find strong evidence for this behavior in the su (2) and su (2 |3) sector, where the zeros appear to condense in arcs that pinch the real temperature axis at the temperature of the confinement-deconfinement transition Tc. Since the Hagedorn behavior at infinite N
is present in all (non-trivial) sectors including the full theory, we expect the same of Lee-Yang behavior. 22 Thus, our findings strongly suggest that the confinement-deconfinement transition in the full N = 4 SYM theory is of Lee-Yang type as well. (While we have compelling evidence, it would be interesting to rigorously prove this.) Via the AdS/CFT dictionary, the confinement-deconfinement transition in N = 4 SYM theory on R × S3 is conjectured to be dual to the Hawking-Page transition of type IIB superstring theory on AdS 5 × S5 . Our results thus indicate that also the Hawking-Page transition of type IIB superstring theory on AdS 5 × S5 is of Lee-Yang type. This is reminiscent of the results by Maloney and Witten [ 52 ], who found that the Hawking-Page transition in 3D gravity is of Lee-Yang type. Our findings open up many interesting directions for further research. First among these is the extension to loop level. In the su (2) sector, the action of the one-loop (and higher-loop) dilatation operator at finite N can be expressed in a basis of so-called re-stricted Schur operators [ 68 –70 ]. This makes it possible to determine loop corrections to the partition function (in this sector) at higher loop orders [ 71 ]. A second direction concerns the exact calculation of the confinement-deconfinement temperature. In the cases we studied in detail, it could be seen that the arcs of zeros of
22 It might be interesting the evaluate the corresponding Molien-Weyl formulas in Subsections 4.2 -4.4 at higher Nto confirm this expectation.
21 the partition function pinch the real temperature axis roughly at the known Hagedorn temperature, which is expected [ 3] to coincides with the confinement-deconfinement tem-perature in the free theory. However, for fixed finite N , the zero closest to the real axis is still a finite distance away from the real axis. In order to determine the exact value of the confinement-deconfinement temperature in cases where it does not coincide with the Hagedorn temperature, i.e. at higher loop level, it is important to be able to determine the confinement-deconfinement temperature from the asymptotic position of the closest zero to the real axis at large N .Third, it would be very interesting to determine the confinement-deconfinement tem-perature at any value of λ via integrability, similar to what was done for the Hagedorn temperature in Refs. [ 21 , 22 ]. For the application of integrability to the Hagedorn tem-perature [ 21 , 22 ], it was crucial to determine the pole of the partition function without needing to calculate the whole partition function. Thus, it is also likely that an application of integrability to the confinement-deconfinement temperature requires us to be able to determine the asymptotic position of the closest zero without needing to know the full partition function.
Acknowledgments
We thank Robert de Mello Koch, Troels Harmark, David McGady, and Bo Sundborg for very useful discussions, David McGady for introducing the concept of Lee-Yang behavior to us, Robert de Mello Koch, Troels Harmark, and David McGady for comments on the manuscript as well as Dragomir Djokovic, and Bernd Sturmfels for communication. M.W. was supported in part by the ERC starting grant 757978 and the research grant 00015369 from Villum Fonden. Moreover, the work of M.W. and A.T.K. was supported by the re-search grant 00025445 from Villum Fonden.
A Derivation of the character formula
In this appendix, we review the derivation of the character formula ( 2.1 ) for the partition function of free Yang-Mills theory on a compact space R × S3. We closely follow the approach outlined in Refs. [ 3, 43 ], see also Ref. [ 72 ]. As we are considering the theory on a compact space, the allowed states must be singlets of the gauge group. In general, we will have a number of bosonic fields with energy
Ei, and a number of fermionic fields with energy E′
i′
. We assume all fields to be in the adjoint representation of the gauge group U( N ). The exact partition function can then be written as
Z(β) =
∞∑
n1=0
xn1E1
∞∑
n2=0
xn2E2 · · ·
∞∑
n′
1=0
xn′
1E′
1
∞∑
n′
2=0
xn′
2E′
2
· · · × d(n1, n 2, ..., n ′
1
, n ′
2
, ... ) , (A.1) where x = e−1/T = e−β and d(n1, n 2, ..., n ′
1
, n ′
2
, ... ) counts the number of singlets in the product representation sym n1
adj
⊗ sym n2
adj
⊗ · · · ⊗ anti n′
1
adj
⊗ anti n′
2
adj
⊗ · · · . This factor can be calculated by integrating the characters of the representations over the group manifold, 22 using the Haar measure [d U ]. This feature stems from the orthogonality of the characters of irreducible representations and is described in detail in Appendix A of Ref. [ 3]. Thus, we obtain
Z(β) =
∫
U( N)
[d U ] ∏
i
∞
∑
ni=0
xniEi χsym niadj
(U )
∏
i′
∞
∑
n′
i′=0
xn′
i′E′
i′
χanti n′
i′
adj
(U )
. (A.2) To continue, we can utilize the following properties of group characters:
∞
∑
n=0
xnχsym nR (U ) = exp
{ ∞∑
m=1
1
mxmχR(U m)
}
,
∞
∑
n=0
xnχanti nR (U ) = exp
{ ∞∑
m=1
(−1) m+1
m xmχR(U m)
}
,
(A.3) where R can be any representation. If we choose R = adj , this yields
Z(β) =
∫
U( N)
[d U ] ∏
i
exp
{ ∞∑
m=1
1
m xmE i χadj (U m)
} ∏
i′
exp
{ ∞∑
m=1
(−1) m+1
m xmE ′
i′
χadj (U m)
}
.
(A.4) We can now identify the bosonic and fermionic single-particle partition functions
zB (β) = ∑
i
xEi , zF (β) = ∑
i′
xE′
i′
. (A.5) Together with χadj (U ) = tr U tr U †, we obtain a compact expression:
Z(β) =
∫
U( N)
[d U ] exp
∞
∑
j=1
z(jβ )
j tr U j tr
(
U †)j
, (A.6) where z(jβ ) ≡ zB (jβ ) − (−1) j zF (jβ ) has been introduced to simplify the notation. For the next part of the derivation, we outline the method of Ref. [ 43 ]. Expanding the exponential, we write
Z(β) =
∫
U( N)
[d U ]
∞
∏
j=1
exp
( z(jβ )
j tr U j tr
(
U †)j )
=
∫
U( N)
[d U ]
∞
∏
j=1
∞
∑
kj=0
1
kj !
( z(jβ )
j tr U j tr
(
U †)j )kj
.
(A.7) The product of sums over kj can be replaced by a total sum over all integer-valued vectors
~k in an infinite dimensional configuration space:
∑
~k
∞
∏
j=1
1
kj !
( z(jβ )
j tr U j tr
(
U †)j )kj
= ∑
~k
∞
∏
j=1
z(jβ )kj
kj ! jkj Υ~k(U )Υ ~k(U †) , (A.8) where we defined Υ~k(U ) =
∞
∏
j=1
(tr U j )kj . (A.9) 23 If we let kj count the number of rows with length j, we can identify ~k with a Young tableau k containing n ≡ ∑∞
j=1
jk j boxes, and thus with an integer partition of n. With this, we can use Frobenius’ formula to rewrite Υ ~k in terms of group characters as done in Ref. [ 43 ]: Υ~k(U ) = ∑
R
χR(k) tr R U , (A.10) where the sum is over all representations R of U( N ). Now integrating over the Haar measure using the orthogonality of representations of U( N ),
∫
U( N)
[d U ] tr R(U ) tr R′ (U †) = δRR ′ , (A.11) we find
Z(β) = ∑
k
∞
∏
j=1
z(jβ )kj
kj ! jkj
∑
R
|χR(k)|2 . (A.12) It is convenient to group the sum over Young tableaux k by the total number of boxes n.The representations R can also be categorized as a set of Young tableaux with at most N
rows. We then arrive at what we call the ‘character formula’:
Z(β) = 1 +
∞
∑
n=1
∑
k⊢n
∑
r⊢nn
∏
j=1
z(jβ )kj
kj ! jkj |χr(k)|2 . (A.13) In this formula, it is very clear to see the effect of a finite N : the number of rows in the representation R is restricted to be no larger than N .Chemical potentials can be included by replacing E → E − ∑3
i=1
ΩiJi − ∑2
a=1
Ωa+3 Sa
throughout the derivation.
B Derivation of the Molien-Weyl formula
In this appendix, we review the derivation of the Molien-Weyl formula ( 2.8 ) for the free partition function with general field content, cf. Ref. [ 50 ]. We take our starting point in Eq. ( A.4 ). We can rewrite the adjoint characters in terms of traces over the fundamental representation as χadj (U ) = tr U tr U †. In the fundamental representation of U( N ), the group elements are N × N -valued matrices. We let εj denote the N eigenvalues of U with |εj | = 1, finding
χadj (U m) = tr U m tr U †m =
( N∑
r=1
εmr
)( N∑
k=1
ε−mk
)
=
N
∑
k,r =1
( εr
εk
)m
. (B.1) 24 Inserting this into Eq. ( A.4 ), we have
Z(β) =
∫
U( N)
[d U ]
N
∏
k,r =1
∏
i
exp
{ ∞∑
m=1
1
m xmE i
( εr
εk
)m} ∏
i′
exp
{ ∞∑
m=1
(−1) m+1
m xmE ′
i′
( εr
εk
)m}
=
∫
U( N)
[d U ]
N
∏
k,r =1
∏
i′
(
1 + xE′
i′εr
εk
)
∏
i
(
1 − xEi εr
εk
)
=
( ∏
i′
(1 + xE′
i′
)
∏
i
(1 − xEi )
)N ∫
U( N)
[d U ] ∏
1≤k<r ≤N
∏
i′
(
1 + xE′
i′εr
εk
)(
1 + xE′
i′εk
εr
)
∏
i
(
1 − xEi εr
εk
)(
1 − xEi εk
εr
) .
(B.2) In the second line we used the fact that − log(1 − x) = ∑∞
m=1 1
m
xm, and in the third line we split the product into three parts with k = r, k < r and k > r , respectively. The Haar measure can be replaced by an integral over the eigenvalues of the U( N )group element U , on the unit circle |εj | = 1 [ 73 ]:
∫
U( N)
[d U ] = 1
N ! (2 πi )N
∮ N∏
j=1
dεj
εj
∆ ¯∆ , (B.3) where the Vandermonde determinant is given by ∆ = ∏
k<r
(εr − εk) ¯∆ = ∏
k<r
(ε−1
r
− ε−1
k
) . (B.4) In conclusion, we can compute the partition function of any field content by the generalized Molien-Weyl formula:
Z(x) = ( ZN =1 (x)) N 1
N ! (2 πi )N
∮ N∏
j=1
dεj
εj
∆ ¯∆ ∏
1≤k<r ≤N
1
φk,r
, (B.5) where we have introduced
ZN =1 (x) =
∏
i′
(1 + xE′
i′
)
∏
i
(1 − xEi ) , (B.6)
φk,r =
∏
i
(
1 − xEi εr
εk
)(
1 − xEi εk
εr
)
∏
i′
(
1 + xE′
i′εr
εk
)(
1 + xE′
i′εk
εr
) . (B.7) We can obtain another form of the Molien-Weyl formula by utilizing the symmetry under a permutation of the N eigenvalues εi. First, notice that the measure ∏Nj=1 dεj
εj
, ∆ ¯∆and ∏
k<r
φk,r (x) are all invariant under a permutation of the εi while ∆ and ¯∆ are each invariant up to the sign of the permutation, sgn( σ). By its definition, the Vandermonde determinant can also be written as ∆ = ∏
k<r
(εr − εk ) = ∑
σ∈SN
sgn( σ) ε0
σ(1)
ε1
σ(2)
· · · εN −1
σ(N)
. (B.8) 25 Using the freedom to relabel the εi, we can permute the εi in each term with the inverse permutation σ−1 to make them all be of the form ε01ε12 · · · εN −1
N
. When doing so, ¯∆ picks up a factor of sgn( σ−1) which cancels the sign factor in ∆, and we are left with ∆ ¯∆ → N ! ε01ε12 · · · εN −1
N
¯∆ = N ! ∏
k<r
(
1 − εr
εk
)
. (B.9) We thus obtain the alternative Molien-Weyl formula
Z(x) = ( ZN =1 (x)) N 1
(2 πi )N
∮ N∏
j=1
dεj
εj
∏
1≤k<r ≤N
(
1 − εr
εk
) 1
φk,r
. (B.10) Yet another form of the Molien-Weyl formula can be obtained by a change of variables:
εj = t1 · · · tj . With this identification, we have
∏
1≤k<r ≤N
(
1 ± x εr
εk
) (
1 ± x εk
εr
)
= ∏
2≤k≤r≤N
(1 ± xt k,r )(1 ± xt −1
k,r
) , (B.11) and ∏
1≤k<r ≤N
(
1 − εr
εk
)
= ∏
2≤k≤r≤N
(1 − tk,r ) , (B.12) where tk,r = tktk+1 · · · tr. The Jacobian for the transformation is given by
J = det
[ dεi
dtj
]
= tN −11 tN −22 · · · t1
N−1
t0
N
=
N
∏
j=1
εj
tj
. (B.13) In total, we can rewrite the generalized Molien-Weyl formula as
Z(x) = ( ZN =1 (x)) N 1
(2 πi )N
∮ N∏
j=1
dtj
tj
∏
1≤k≤r≤N−1
1 − tk,r
φk,r
, (B.14) where we have relabeled the integration variables tN → tN −1 → . . . → t1 → tN and
$$Z_{N=1}(x) = \frac{\prod_{i'} \left( 1 + x^{E'_{i'}} \right)}{\prod_{i} \left( 1 - x^{E_i} \right)}\,, \tag{B.15}$$

$$\varphi_{k,r} = \frac{\prod_{i} \left( 1 - x^{E_i} t_{k,r} \right) \left( 1 - x^{E_i} t_{k,r}^{-1} \right)}{\prod_{i'} \left( 1 + x^{E'_{i'}} t_{k,r} \right) \left( 1 + x^{E'_{i'}} t_{k,r}^{-1} \right)}\,. \tag{B.16}$$

Note that $t_N$ only occurs once in the integrand, as $1/t_N$. Integrating $t_N$ then simply gives a factor of $2\pi i$, and we have

$$Z(x) = \left( Z_{N=1}(x) \right)^{N} \frac{1}{(2\pi i)^{N-1}} \oint_{|t_1|=1} \frac{dt_1}{t_1} \cdots \oint_{|t_{N-1}|=1} \frac{dt_{N-1}}{t_{N-1}} \prod_{1 \le k \le r \le N-1} \frac{1 - t_{k,r}}{\varphi_{k,r}}\,, \tag{B.17}$$

as claimed in Eq. (2.8) of the main text. Chemical potentials can be included by replacing $E \to E - \sum_{i=1}^{3} \Omega_i J_i - \sum_{a=1}^{2} \Omega_{a+3} S_a$ throughout the derivation.

C Calculation of the partition function in the psu(1,1|2) sector at N = 2
In this appendix, we provide details on the calculation of the partition function in the psu(1,1|2) sector at $N = 2$, the result of which is given in Eq. (4.27). Our starting point is the Molien-Weyl formula (4.23) for this sector with $N = 2$, which we quote here for convenience:

$$Z^{\mathfrak{psu}(1,1|2)}_{N=2} = \left( \prod_{i=1}^{\infty} \frac{\left( 1 + x^{i+1/2} \right)^{4}}{\left( 1 - x^{i} \right)^{4}} \right) \frac{1}{2\pi i} \oint_{|t_1|=1} dt_1\, \frac{1 - t_1}{t_1} \prod_{i=1}^{\infty} \frac{\left( 1 + x^{i+1/2} t_1 \right)^{2} \left( 1 + x^{i+1/2} t_1^{-1} \right)^{2}}{\left( 1 - x^{i} t_1 \right)^{2} \left( 1 - x^{i} t_1^{-1} \right)^{2}}\,. \tag{C.1}$$

The relevant poles are double poles at $t_1 = x^{j}$ for all $j \in \mathbb{N}$ (see footnote 23 below). Focusing on a single value of $j$, the residue of the pole at $t_1 = x^{j}$ is

$$\operatorname*{Res}_{t_1 = x^{j}}\, \frac{1 - t_1}{t_1} \prod_{i=1}^{\infty} \frac{\left( 1 + x^{i+1/2} t_1 \right)^{2} \left( 1 + x^{i+1/2} t_1^{-1} \right)^{2}}{\left( 1 - x^{i} t_1 \right)^{2} \left( 1 - x^{i} t_1^{-1} \right)^{2}} = \frac{\partial}{\partial t_1} \underbrace{t_1 (1 - t_1)}_{\text{pre}}\, \underbrace{\prod_{i=1}^{\infty} \left( 1 + x^{i+1/2} t_1 \right)^{2} \left( 1 + x^{i+1/2} t_1^{-1} \right)^{2}}_{\text{ferm}}\, \underbrace{\prod_{i=1}^{\infty} \frac{1}{\left( 1 - x^{i} t_1 \right)^{2}} \prod_{\substack{i=1 \\ i \neq j}}^{\infty} \frac{1}{\left( 1 - x^{i} t_1^{-1} \right)^{2}}}_{\text{scal}}\, \Bigg|_{t_1 = x^{j}} . \tag{C.3}$$
We now split the action of the derivative onto the prefactor $t_1(1 - t_1)$, the fermionic terms in the numerator and the scalar terms in the denominator. The action on the prefactor yields

$$\left( \partial_{t_1} \text{pre} \right) (\text{ferm}) (\text{scal}) \big|_{t_1 = x^{j}} = \left( 1 - 2 x^{j} \right) \Gamma(x^{j})\,, \tag{C.4}$$

where

$$\Gamma(x^{j}) = \lim_{t_1 \to x^{j}} \prod_{i=1}^{\infty} \frac{\left( 1 + x^{i+1/2} t_1 \right)^{2} \left( 1 + x^{i+1/2} t_1^{-1} \right)^{2}}{\left( 1 - x^{i} t_1 \right)^{2}} \prod_{\substack{i=1 \\ i \neq j}}^{\infty} \frac{1}{\left( 1 - x^{i} t_1^{-1} \right)^{2}}\,. \tag{C.5}$$

The action on the scalar terms yields

$$(\text{pre}) (\text{ferm}) \left( \partial_{t_1} \text{scal} \right) \big|_{t_1 = x^{j}} = x^{j} \left( 1 - x^{j} \right) \left( \sum_{k=1}^{\infty} \frac{2 x^{k}}{1 - x^{k} x^{j}} - \sum_{\substack{k=1 \\ k \neq j}}^{\infty} \frac{2 x^{k} x^{-2j}}{1 - x^{k} x^{-j}} \right) \Gamma(x^{j})\,. \tag{C.6}$$
Footnote 23: While there is a pole at $t_1 = 0$, the corresponding residue is

$$\left. (1 - t_1) \prod_{i=1}^{\infty} \frac{\left( 1 + x^{i+1/2} t_1 \right)^{2} \left( 1 + x^{i+1/2} t_1^{-1} \right)^{2}}{\left( 1 - x^{i} t_1 \right)^{2} \left( 1 - x^{i} t_1^{-1} \right)^{2}} \right|_{t_1 = 0} = \left. \prod_{i=1}^{\infty} \frac{\left( 1 + x^{i+1/2} t_1 \right)^{2} \left( t_1 + x^{i+1/2} \right)^{2}}{\left( 1 - x^{i} t_1 \right)^{2} \left( t_1 - x^{i} \right)^{2}} \right|_{t_1 = 0} = \prod_{i=1}^{\infty} \frac{\left( x^{i+1/2} \right)^{2}}{\left( -x^{i} \right)^{2}} = \prod_{i=1}^{\infty} x\,. \tag{C.2}$$

Since $|x| < 1$, this infinite product goes to 0, and the pole at $t_1 = 0$ does not contribute.
The infinite sums can be reduced in the following manner:

$$\begin{aligned}
\sum_{k=1}^{\infty} \frac{x^{k}}{1 - x^{k} x^{j}} - \sum_{\substack{k=1 \\ k \neq j}}^{\infty} \frac{x^{k} x^{-2j}}{1 - x^{k} x^{-j}} &= \sum_{i=j+1}^{\infty} \frac{x^{i-j}}{1 - x^{i}} - \sum_{i=1-j}^{-1} \frac{x^{i-j}}{1 - x^{i}} - \sum_{i=1}^{\infty} \frac{x^{i-j}}{1 - x^{i}} \\
&= \sum_{i=1}^{j-1} \frac{x^{-j}}{1 - x^{i}} - \sum_{i=1}^{j} \frac{x^{i-j}}{1 - x^{i}} \\
&= \sum_{i=1}^{j-1} \frac{x^{-j} - x^{i-j}}{1 - x^{i}} - \frac{1}{1 - x^{j}} = x^{-j} (j - 1) - \frac{1}{1 - x^{j}}\,.
\end{aligned} \tag{C.7}$$

In total, the derivative on the scalar terms contributes with

$$(\text{pre}) (\text{ferm}) \left( \partial_{t_1} \text{scal} \right) \big|_{t_1 = x^{j}} = 2 x^{j} \left( 1 - x^{j} \right) \left( x^{-j} (j - 1) - \frac{1}{1 - x^{j}} \right) \Gamma(x^{j}) = \Big( 2 \left( 1 - x^{j} \right) (j - 1) - 2 x^{j} \Big)\, \Gamma(x^{j})\,. \tag{C.8}$$
Now we calculate the effect of the derivative on the fermionic terms:

$$(\text{pre}) \left( \partial_{t_1} \text{ferm} \right) (\text{scal}) \big|_{t_1 = x^{j}} = x^{j} \left( 1 - x^{j} \right) \sum_{k=1}^{\infty} \left( \frac{2 x^{k+1/2}}{1 + x^{k+1/2} x^{j}} - \frac{2 x^{k+1/2} x^{-2j}}{1 + x^{k+1/2} x^{-j}} \right) \Gamma(x^{j})\,. \tag{C.9}$$

Again, the infinite sums can be reduced:

$$\sum_{k=1}^{\infty} \left( \frac{x^{k+1/2}}{1 + x^{k+1/2} x^{j}} - \frac{x^{k+1/2} x^{-2j}}{1 + x^{k+1/2} x^{-j}} \right) = \sum_{i=j+1}^{\infty} \frac{x^{i-j+1/2}}{1 + x^{i+1/2}} - \sum_{k=1}^{j-2} \frac{x^{k+1/2} x^{-2j}}{1 + x^{k+1/2} x^{-j}} - \frac{x^{-j+1/2}}{1 + x^{1/2}} - \frac{x^{-j-1/2}}{1 + x^{-1/2}} - \sum_{i=1}^{\infty} \frac{x^{i-j+1/2}}{1 + x^{i+1/2}}\,. \tag{C.10}$$

The two terms outside the sums join to become $-x^{-j}$. The first sum cancels all except the first $j$ terms in the last sum. The second sum becomes

$$- \sum_{k=1}^{j-2} \frac{x^{k+1/2} x^{-2j}}{1 + x^{k+1/2} x^{-j}} = - \sum_{k=-j+2}^{-1} \frac{x^{k-1/2} x^{-j}}{1 + x^{k-1/2}} = - \sum_{k=-j+2}^{-1} \frac{x^{-j}}{1 + x^{-k+1/2}} = - \sum_{i=1}^{j-2} \frac{x^{-j}}{1 + x^{i+1/2}}\,. \tag{C.11}$$

With these rearrangements, we obtain

$$\begin{aligned}
- x^{-j} - \sum_{i=1}^{j} \frac{x^{i-j+1/2}}{1 + x^{i+1/2}} - \sum_{i=1}^{j-2} \frac{x^{-j}}{1 + x^{i+1/2}} &= - x^{-j} - x^{-j} (j - 2) - \frac{x^{-1/2}}{1 + x^{j-1/2}} - \frac{x^{+1/2}}{1 + x^{j+1/2}} \\
&= - x^{-j} \left( (j - 1) + \frac{x^{j-1/2}}{1 + x^{j-1/2}} + \frac{x^{j+1/2}}{1 + x^{j+1/2}} \right) .
\end{aligned} \tag{C.12}$$

Thus, the total contribution from the derivative acting on the fermionic terms is

$$(\text{pre}) \left( \partial_{t_1} \text{ferm} \right) (\text{scal}) \big|_{t_1 = x^{j}} = -2 \left( 1 - x^{j} \right) \left( (j - 1) + \frac{x^{j-1/2}}{1 + x^{j-1/2}} + \frac{x^{j+1/2}}{1 + x^{j+1/2}} \right) \Gamma(x^{j})\,. \tag{C.13}$$
Adding the three contributions from the prefactor, the scalar and the fermionic terms, we find the full residue from the pole at $t_1 = x^{j}$:

$$\begin{aligned}
\operatorname*{Res}_{t_1 = x^{j}} [\,\cdots\,] &= \left[ \left( 1 - 2 x^{j} \right) + 2 \left( 1 - x^{j} \right) (j - 1) - 2 x^{j} - 2 \left( 1 - x^{j} \right) \left( (j - 1) + \frac{x^{j-1/2}}{1 + x^{j-1/2}} + \frac{x^{j+1/2}}{1 + x^{j+1/2}} \right) \right] \Gamma(x^{j}) \\
&= \left[ 1 - 4 x^{j} - 2 \left( 1 - x^{j} \right) \left( \frac{x^{j-1/2}}{1 + x^{j-1/2}} + \frac{x^{j+1/2}}{1 + x^{j+1/2}} \right) \right] \Gamma(x^{j}) \\
&= \frac{1 - x^{j-1/2} - 4 x^{j} - x^{j+1/2} - 2 x^{2j-1/2} - 3 x^{2j} - 2 x^{2j+1/2}}{\left( 1 + x^{j-1/2} \right) \left( 1 + x^{j+1/2} \right)}\, \Gamma(x^{j})\,.
\end{aligned} \tag{C.14}$$

Finally, we take a look at the infinite products in $\Gamma(x^{j})$. The infinite products in the denominators can be rearranged as

$$\prod_{\substack{i=1 \\ i \neq j}}^{\infty} \left( 1 - x^{i-j} \right) \prod_{i=1}^{\infty} \left( 1 - x^{i+j} \right) = \prod_{i=1-j}^{-1} \left( 1 - x^{i} \right) \prod_{i=1}^{\infty} \left( 1 - x^{i} \right) \prod_{i=1+j}^{\infty} \left( 1 - x^{i} \right) . \tag{C.15}$$

Since $\prod_{i=1-j}^{-1} \left( 1 - x^{i} \right) = \prod_{i=1}^{j-1} \left( 1 - x^{i} \right) \left( -x^{-i} \right) = (-1)^{j-1} x^{-j(j-1)/2} \prod_{i=1}^{j-1} \left( 1 - x^{i} \right)$, we can simplify the expression by collecting terms (up to a sign $(-1)^{j-1}$ that drops out of $\Gamma(x^{j})$, where this combination enters squared):

$$\prod_{\substack{i=1 \\ i \neq j}}^{\infty} \left( 1 - x^{i-j} \right) \prod_{i=1}^{\infty} \left( 1 - x^{i+j} \right) = \frac{\prod_{i=1}^{\infty} \left( 1 - x^{i} \right)^{2}}{x^{j(j-1)/2} \left( 1 - x^{j} \right)}\,. \tag{C.16}$$

The infinite products in the numerator can be rearranged as

$$\prod_{i=1}^{\infty} \left( 1 + x^{i+1/2} x^{j} \right) \left( 1 + x^{i+1/2} x^{-j} \right) = \prod_{i=1}^{\infty} \left( 1 + x^{i+j+1/2} \right) \left( 1 + x^{i-j+1/2} \right) = \prod_{i=1+j}^{\infty} \left( 1 + x^{i+1/2} \right) \prod_{i=1-j}^{0} \left( 1 + x^{i+1/2} \right) \prod_{i=1}^{\infty} \left( 1 + x^{i+1/2} \right) . \tag{C.17}$$

The middle term can be rewritten as

$$\begin{aligned}
\prod_{i=1-j}^{0} \left( 1 + x^{i+1/2} \right) &= \prod_{i=0}^{j-1} \left( 1 + x^{i-1/2} \right) x^{-i+1/2} = x^{j/2 - j(j-1)/2} \prod_{i=-1}^{j-2} \left( 1 + x^{i+1/2} \right) \\
&= x^{-j(j-2)/2}\, \frac{\left( 1 + x^{-1/2} \right) \left( 1 + x^{1/2} \right)}{\left( 1 + x^{j-1/2} \right) \left( 1 + x^{j+1/2} \right)} \prod_{i=1}^{j} \left( 1 + x^{i+1/2} \right) \\
&= x^{-j(j-2)/2 - 1/2}\, \frac{\left( 1 + x^{1/2} \right)^{2}}{\left( 1 + x^{j-1/2} \right) \left( 1 + x^{j+1/2} \right)} \prod_{i=1}^{j} \left( 1 + x^{i+1/2} \right) \\
&= x^{-(j-1)^{2}/2}\, \frac{\left( 1 + x^{1/2} \right)^{2}}{\left( 1 + x^{j-1/2} \right) \left( 1 + x^{j+1/2} \right)} \prod_{i=1}^{j} \left( 1 + x^{i+1/2} \right) .
\end{aligned} \tag{C.18}$$
We find

$$\prod_{i=1}^{\infty} \left( 1 + x^{i+1/2} x^{j} \right) \left( 1 + x^{i+1/2} x^{-j} \right) = \frac{x^{-(j-1)^{2}/2} \left( 1 + x^{1/2} \right)^{2}}{\left( 1 + x^{j-1/2} \right) \left( 1 + x^{j+1/2} \right)} \prod_{i=1}^{\infty} \left( 1 + x^{i+1/2} \right)^{2}\,. \tag{C.19}$$

In total, the $\Gamma(x^{j})$ factor thus takes the form

$$\begin{aligned}
\Gamma(x^{j}) &= \frac{\prod_{i=1}^{\infty} \left( 1 + x^{i+1/2} x^{j} \right)^{2} \left( 1 + x^{i+1/2} x^{-j} \right)^{2}}{\prod_{i=1}^{\infty} \left( 1 - x^{i+j} \right)^{2} \prod_{\substack{i=1 \\ i \neq j}}^{\infty} \left( 1 - x^{i-j} \right)^{2}} \\
&= \frac{x^{j(j-1)} \left( 1 - x^{j} \right)^{2}}{\prod_{i=1}^{\infty} \left( 1 - x^{i} \right)^{4}}\, \frac{x^{-(j-1)^{2}} \left( 1 + x^{1/2} \right)^{4}}{\left( 1 + x^{j-1/2} \right)^{2} \left( 1 + x^{j+1/2} \right)^{2}} \prod_{i=1}^{\infty} \left( 1 + x^{i+1/2} \right)^{4} \\
&= \frac{\prod_{i=1}^{\infty} \left( 1 + x^{i+1/2} \right)^{4}}{\prod_{i=1}^{\infty} \left( 1 - x^{i} \right)^{4}}\, \left( 1 + x^{1/2} \right)^{4}\, \frac{x^{j-1} \left( 1 - x^{j} \right)^{2}}{\left( 1 + x^{j-1/2} \right)^{2} \left( 1 + x^{j+1/2} \right)^{2}}\,.
\end{aligned} \tag{C.20}$$

Combining Eqs. (C.14) and (C.20), we obtain the rearranged residue of the pole at $t_1 = x^{j}$. Summing this expression over all $j \in \mathbb{N}$ we have included all residues, and (remembering the prefactor in the first line of Eq. (C.1)) we obtain the final partition function for the psu(1,1|2) sector for $N = 2$:

$$Z^{\mathfrak{psu}(1,1|2)}_{N=2} = \left( \prod_{i=1}^{\infty} \frac{\left( 1 + x^{i+1/2} \right)^{8}}{\left( 1 - x^{i} \right)^{8}} \right) \left( 1 + x^{1/2} \right)^{4} \sum_{j=1}^{\infty} \frac{x^{j-1} \left( 1 - x^{j} \right)^{2} \left( 1 - x^{j-1/2} - 4 x^{j} - x^{j+1/2} - 2 x^{2j-1/2} - 3 x^{2j} - 2 x^{2j+1/2} \right)}{\left( 1 + x^{j-1/2} \right)^{3} \left( 1 + x^{j+1/2} \right)^{3}}\,, \tag{C.21}$$

as claimed in Eq. (4.27) of the main text.
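As a rough consistency check (added here, not part of the original text), Eq. (C.21) can be expanded as a power series in q = x^(1/2) after truncating the infinite product and the sum over j; the truncation orders below are illustrative and chosen just large enough for the displayed order.

```python
# Truncated expansion of Eq. (C.21) in q = x**(1/2); cutoffs are illustrative.
import sympy as sp

q = sp.symbols('q')              # q = x**(1/2), so x**n = q**(2*n)
I_MAX, J_MAX, ORDER = 8, 7, 12   # keeps every contribution up to q**11

prefactor = (1 + q)**4           # (1 + x**(1/2))**4
for i in range(1, I_MAX):        # prod_i (1 + x**(i+1/2))**8 / (1 - x**i)**8
    prefactor *= (1 + q**(2*i + 1))**8 / (1 - q**(2*i))**8

total = sp.Integer(0)            # sum over j in Eq. (C.21)
for j in range(1, J_MAX):
    numerator = (q**(2*j - 2) * (1 - q**(2*j))**2 *
                 (1 - q**(2*j - 1) - 4*q**(2*j) - q**(2*j + 1)
                  - 2*q**(4*j - 1) - 3*q**(4*j) - 2*q**(4*j + 1)))
    denominator = (1 + q**(2*j - 1))**3 * (1 + q**(2*j + 1))**3
    total += numerator / denominator

Z = sp.series(prefactor * total, q, 0, ORDER).removeO()
print(sp.expand(Z))
```

The resulting low-order coefficients can then be compared against a direct counting of gauge-invariant states in this sector at N = 2.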
References
J. M. Maldacena, “The Large N limit of superconformal field theories and supergravity,”
Int. J. Theor. Phys. 38 (1999) 1113–1133 , arXiv:hep-th/9711200 [hep-th] . [Adv. Theor. Math. Phys.2,231(1998)]. E. Witten, “Anti-de Sitter space, thermal phase transition, and confinement in gauge theories,” Adv. Theor. Math. Phys. 2 (1998) 505–532 , arXiv:hep-th/9803131 [hep-th] . O. Aharony, J. Marsano, S. Minwalla, K. Papadodimas, and M. Van Raamsdonk, “The Hagedorn - deconfinement phase transition in weakly coupled large N gauge theories,”
Adv. Theor. Math. Phys. 8 (2005) 603–696 , arXiv:hep-th/0310285 [hep-th] . S. W. Hawking and D. N. Page, “Thermodynamics of Black Holes in anti-De Sitter Space,”
Commun. Math. Phys. 87 (1983) 577 . B. Sundborg, “The Hagedorn transition, deconfinement and N = 4 SYM theory,”
Nucl. Phys. B573 (2000) 349–363 , arXiv:hep-th/9908001 [hep-th] . M. Spradlin and A. Volovich, “A Pendant for Polya: The One-loop partition function of
N = 4 SYM on R × S3,” Nucl. Phys. B711 (2005) 199–230 ,
arXiv:hep-th/0408178 [hep-th] .
D. Yamada and L. G. Yaffe, “Phase diagram of N = 4 super-Yang-Mills theory with R-symmetry chemical potentials,” JHEP 09 (2006) 027 , arXiv:hep-th/0602074 [hep-th] . T. Harmark and M. Orselli, “Quantum mechanical sectors in thermal N = 4 super Yang-Mills on R × S3,” Nucl. Phys. B757 (2006) 117–145 ,
arXiv:hep-th/0605234 [hep-th] . M. Gomez-Reino, S. G. Naculich, and H. J. Schnitzer, “More pendants for Polya: Two loops in the SU(2) sector,” JHEP 07 (2005) 055 , arXiv:hep-th/0504222 [hep-th] . R. Suzuki, “Refined Counting of Necklaces in One-loop N = 4 SYM,” JHEP 06 (2017) 055 ,
arXiv:1703.05798 [hep-th] . O. Aharony, J. Marsano, S. Minwalla, K. Papadodimas, and M. Van Raamsdonk, “A First order deconfinement transition in large N Yang-Mills theory on a small S3,”
Phys. Rev. D 71 (2005) 125018 , arXiv:hep-th/0502149 . O. Aharony, J. Marsano, and M. Van Raamsdonk, “Two loop partition function for large N pure Yang-Mills theory on a small S3,” Phys. Rev. D 74 (2006) 105012 ,
arXiv:hep-th/0608156 . M. Mussel and R. Yacoby, “The 2-loop partition function of large N gauge theories with adjoint matter on S3,” JHEP 12 (2009) 005 , arXiv:0909.0407 [hep-th] . J. Fokken and M. Wilhelm, “One-Loop Partition Functions in Deformed N = 4 SYM Theory,” JHEP 03 (2015) 018 , arXiv:1411.7695 [hep-th] . S. Ramgoolam, M. C. Wilson, and A. Zahabi, “Quiver Asymptotics: N = 1 Free Chiral Ring,” J. Phys. A 53 no. 10, (2020) 105401 , arXiv:1811.11229 [hep-th] . T. Harmark, K. R. Kristjansson, and M. Orselli, “Decoupling limits of N = 4 super Yang-Mills on R × S3,” JHEP 09 (2007) 115 , arXiv:0707.1621 [hep-th] . J. J. Atick and E. Witten, “The Hagedorn transition and the number of degrees of freedom of string theory,” Nuclear Physics B 310 no. 2, (1988) 291 – 334 . N. Beisert et al. , “Review of AdS/CFT Integrability: An Overview,”
Lett. Math. Phys. 99 (2012) 3–32 , arXiv:1012.3982 [hep-th] . D. Bombardelli, A. Cagnazzo, R. Frassek, F. Levkovich-Maslyuk, F. Loebbert, S. Negro, I. M. Szécsényi, A. Sfondrini, S. J. van Tongeren, and A. Torrielli, “An integrability primer for the gauge-gravity correspondence: An introduction,” J. Phys. A49 no. 32, (2016) 320301 ,
arXiv:1606.02945 [hep-th] . F. Benini and P. Milan, “Black holes in 4d N = 4 Super-Yang-Mills,”
arXiv:1812.09613 [hep-th] . T. Harmark and M. Wilhelm, “Hagedorn Temperature of AdS 5/CFT 4 via Integrability,”
Phys. Rev. Lett. 120 no. 7, (2018) 071605 , arXiv:1706.03074 [hep-th] . T. Harmark and M. Wilhelm, “The Hagedorn temperature of AdS 5/CFT 4 at finite coupling via the Quantum Spectral Curve,” Phys. Lett. B786 (2018) 53–58 ,
arXiv:1803.04416 [hep-th] . M. Wilhelm. Talk at the conference “Integrability in Gauge and String Theory 2018”, 2018.
. T. Harmark and M. Wilhelm, “Solving the Hagedorn temperature of AdS 5/CFT 4 via the Quantum Spectral Curve: Chemical potentials and deformations.” To appear.
B. Sundborg, “Thermodynamics of Superstrings at High-energy Densities,”
Nucl. Phys. B254 (1985) 583–592 . S. Corley, A. Jevicki, and S. Ramgoolam, “Exact correlators of giant gravitons from dual
N = 4 SYM theory,” Adv. Theor. Math. Phys. 5 (2002) 809–839 ,
arXiv:hep-th/0111222 [hep-th] . T. W. Brown, P. Heslop, and S. Ramgoolam, “Diagonal multi-matrix correlators and BPS operators in N = 4 SYM,” JHEP 02 (2008) 030 , arXiv:0711.0176 [hep-th] . R. Bhattacharyya, S. Collins, and R. de Mello Koch, “Exact Multi-Matrix Correlators,”
JHEP 03 (2008) 044 , arXiv:0801.2061 [hep-th] . C. B. Thorn, “Infinite Nc QCD at finite temperature: Is there an ultimate temperature?,”
Phys. Lett. B 99 (1981) 458–462 . C. B. Thorn, “String Bits at Finite Temperature and the Hagedorn Phase,”
Phys. Rev. D 92 no. 6, (2015) 066007 , arXiv:1507.03036 [hep-th] . S. Raha, “Hagedorn temperature in superstring bit model and SU(N) characters,”
Phys. Rev. D 96 no. 8, (2017) 086006 , arXiv:1706.09951 [hep-th] . T. L. Curtright, S. Raha, and C. B. Thorn, “Color Characters for White Hot String Bits,”
Phys. Rev. D 96 no. 8, (2017) 086021 , arXiv:1708.03342 [hep-th] . S. Raha, “Finite N corrections to white hot string bits,”
Phys. Rev. D 100 no. 10, (2019) 106011 , arXiv:1909.08468 [hep-th] . M. Hanada and J. Maltz, “A proposal of the gauge theory description of the small Schwarzschild black hole in AdS 5×S5,” JHEP 02 (2017) 012 , arXiv:1608.03276 [hep-th] . D. Berenstein, “Submatrix deconfinement and small black holes in AdS,”
JHEP 09 (2018) 054 , arXiv:1806.05729 [hep-th] . D. Berenstein, “Negative specific heat from non-planar interactions and small black holes in AdS/CFT,” JHEP 10 (2019) 001 , arXiv:1810.07267 [hep-th] . G. Bergner, N. Bodendorfer, M. Hanada, E. Rinaldi, A. Schäfer, and P. Vranas, “Thermal phase transition in Yang-Mills matrix model,” JHEP 01 (2020) 053 ,
arXiv:1909.04592 [hep-th] . M. Hanada, A. Jevicki, C. Peng, and N. Wintergerst, “Anatomy of Deconfinement,”
JHEP 12 (2019) 167 , arXiv:1909.09118 [hep-th] . M. Hanada, G. Ishiki, and H. Watanabe, “Partial deconfinement in gauge theories,” in 37th International Symposium on Lattice Field Theory . 11, 2019. arXiv:1911.11465 [hep-lat] . A. Arabi Ardehali, J. Hong, and J. T. Liu, “Asymptotic growth of the 4d N = 4 index and partially deconfined phases,” arXiv:1912.04169 [hep-th] . M. Hanada, H. Shimada, and N. Wintergerst, “Color Confinement and Bose-Einstein Condensation,” arXiv:2001.10459 [hep-th] . H. Watanabe, G. Bergner, N. Bodendorfer, S. Shiba Funai, M. Hanada, E. Rinaldi, A. Schäfer, and P. Vranas, “Partial Deconfinement at Strong Coupling on the Lattice,”
arXiv:2005.04103 [hep-th] . S. Dutta and R. Gopakumar, “Free fermions and thermal AdS/CFT,” JHEP 03 (2008) 011 ,
arXiv:0711.0133 [hep-th] .
T. Harmark and M. Orselli, “Spin Matrix Theory: A quantum mechanical model of the AdS/CFT correspondence,” JHEP 11 (2014) 134 , arXiv:1409.4417 [hep-th] . K. E. Vardinghus, “Spin Matrix theory with chemical potentials,” Master’s thesis, Niels Bohr Inst., 2015. E. Formanek, P. Halpin, and W.-C. W. Li, “The Poincaré series of the ring of 2 × 2 generic matrices,” Journal of Algebra 69 no. 1, (1981) 105 – 112 . Y. Teranishi, “The ring of invariants of matrices,” Nagoya Math. J. 104 (1986) 149–161 . Y. Teranishi, “Linear Diophantine Equations and Invariant Theory of Matrices,” in
Commutative Algebra and Combinatorics , pp. 259–275. Mathematical Society of Japan, Tokyo, Japan, 1987. D. Ž. Ðoković, “Poincaré series of some pure and mixed trace algebras of two generic matrices,” Journal of Algebra 309 (2007) , arXiv:math/0609262v1 [math] . F. Dolan, “Counting BPS operators in N = 4 SYM,” Nucl. Phys. B 790 (2008) 432–464 ,
arXiv:0704.1038 [hep-th] . C.-N. Yang and T. D. Lee, “Statistical theory of equations of state and phase transitions. 1. Theory of condensation,” Phys. Rev. 87 (1952) 404–409 . A. Maloney and E. Witten, “Quantum Gravity Partition Functions in Three Dimensions,”
JHEP 02 (2010) 029 , arXiv:0712.0155 [hep-th] . XQCD-J Collaboration, K. Nagata, S. Motoki, Y. Nakagawa, A. Nakamura, and T. Saito, “Towards extremely dense matter on the lattice,” PTEP 2012 (2012) 01A103 ,
arXiv:1204.1412 [hep-lat] . A. Nakamura and K. Nagata, “Probing QCD phase structure using baryon multiplicity distribution,” PTEP 2016 no. 3, (2016) 033D01 , arXiv:1305.0760 [hep-ph] . K. Nagata, K. Kashiwa, A. Nakamura, and S. M. Nishigaki, “Lee-Yang zero distribution of high temperature QCD and the Roberge-Weiss phase transition,”
Phys. Rev. D91 no. 9, (2015) 094507 , arXiv:1410.0783 [hep-lat] . M. Wakayama, V. Bornyakov, D. Boyda, V. Goy, H. Iida, A. Molochkov, A. Nakamura, and V. Zakharov, “Lee-Yang zeros in lattice QCD for searching phase transition points,”
Phys. Lett. B 793 (2019) 227–233 , arXiv:1802.02014 [hep-lat] . M. Wakayama and A. Hosaka, “Search of QCD phase transition points in the canonical approach of the NJL model,” Phys. Lett. B795 (2019) 548–553 ,
arXiv:1905.10956 [hep-lat] . A. Zee, Group Theory in a Nutshell for Physicists . Princeton University Press, USA, 2016. J. A. Minahan, “Review of AdS/CFT Integrability, Chapter I.1: Spin Chains in N = 4 Super Yang-Mills,” Lett. Math. Phys. 99 (2012) 33–58 , arXiv:1012.3983 [hep-th] . J. Gray, A. Hanany, Y.-H. He, V. Jejjala, and N. Mekareeya, “SQCD: A Geometric Apercu,”
JHEP 05 (2008) 099 , arXiv:0803.4257 [hep-th] . U. Banerjee, J. Chakrabortty, S. Prakash, and S. U. Rahaman, “Characters and Group Invariant Polynomials of (Super)fields: Road to "Lagrangian",”
arXiv:2004.12830 [hep-ph] . G. Basar, A. Cherman, D. A. McGady, and M. Yamazaki, “Temperature-reflection symmetry,” Phys. Rev. D91 no. 10, (2015) 106004 , arXiv:1406.6329 [hep-th] .
D. A. McGady, “Temperature-reflection I: field theory, ensembles, and interactions,”
arXiv:1711.07536 [hep-th] . D. A. McGady, “Temperature-reflection II: Modular Invariance and T-reflection,”
arXiv:1806.09873 [hep-th] . S. Benvenuti, B. Feng, A. Hanany, and Y.-H. He, “Counting BPS Operators in Gauge Theories: Quivers, Syzygies and Plethystics,” JHEP 11 (2007) 050 , arXiv:hep-th/0608050 . B. Feng, A. Hanany, and Y.-H. He, “Counting gauge invariants: The Plethystic program,”
JHEP 03 (2007) 090 , arXiv:hep-th/0701063 . Y. Kimura, S. Ramgoolam, and D. Turton, “Free particles from Brauer algebras in complex matrix models,” JHEP 05 (2010) 052 , arXiv:0911.4408 [hep-th] . R. Bhattacharyya, R. de Mello Koch, and M. Stephanou, “Exact Multi-Restricted Schur Polynomial Correlators,” JHEP 06 (2008) 101 , arXiv:0805.3025 [hep-th] . V. De Comarmond, R. de Mello Koch, and K. Jefferies, “Surprisingly Simple Spectra,”
JHEP 02 (2011) 006 , arXiv:1012.3884 [hep-th] . R. de Mello Koch, G. Kemp, B. A. E. Mohammed, and S. Smith, “Nonplanar integrability at two loops,” JHEP 10 (2012) 144 , arXiv:1206.0813 [hep-th] . A. T. Kristensson and M. Wilhelm. In progress. B. Skagerstam, “On the Large Nc Limit of the SU( Nc) Color Quark - Gluon Partition Function,” Z. Phys. C 24 (1984) 97 . H. Weyl, The Classical Groups: Their Invariants and Representations . Princeton mathematical series. Princeton University Press, 1939.
calculus - Can Rolle's Theorem be true for the critical point where derivative doesnt exist? - Mathematics Stack Exchange
===============
Can Rolle's Theorem be true for the critical point where derivative doesnt exist?
Asked 10 years, 8 months ago
Modified10 years, 8 months ago
Viewed 741 times
Here is the problem that I met.
At 0, the derivative of f(x) doesn't exist, so 0 is a critical number, but the conclusion of Rolle's theorem is that f'(c) (here c = 0) must be 0. Is there any explanation in this case?
calculus
derivatives
edited Dec 1, 2014 at 10:06
asked Dec 1, 2014 at 9:34 by aukxn
2 Answers
A critical point is a point where the derivative exists and equals zero. So c = 0 is not a critical point of f. The point here is that f fails to satisfy all of the assumptions of Rolle's theorem, and indeed the conclusion fails, too.
answered Dec 1, 2014 at 9:41 by Siminore
c = 0, while not a critical point, is an extreme point, and a global minimum. –Arthur Commented Dec 1, 2014 at 9:49
sorry, I must say critical number to be exact as the book I use so I still don't have any explaination for this case –aukxn Commented Dec 1, 2014 at 10:06
Please, read the assumptions of Rolle's theorem. Is your function f differentiable at any point of the open interval (−1,1)? I do not think so. Hence there is no contradiction, since Rolle's theorem is not applicable to your function. –Siminore Commented Dec 1, 2014 at 10:47
I try using the definition of derivative at a point to find the derivative at c=0. The result is f'(c)=0 but if I use general derivative then there is no derivative at x=0. So if that's the case then there is point c that derivative of f will be 0 so the question is wrong because actually there is point c that f'(c) = 0 ? –aukxn Commented Dec 1, 2014 at 12:30
Let me be clear: the function x↦x^(2/3) is not, not, and not differentiable at zero. –Siminore Commented Dec 1, 2014 at 14:53
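For completeness, here is a short worked version of the example this thread is about (added for illustration), using the function f(x) = x^(2/3) on [−1, 1] mentioned in the comments above:

$$f(-1) = f(1) = 1, \qquad f'(x) = \tfrac{2}{3}\,x^{-1/3} \neq 0 \ \text{ for } x \neq 0, \qquad \lim_{h \to 0} \left| \frac{f(h) - f(0)}{h} \right| = \lim_{h \to 0} |h|^{-1/3} = \infty .$$

So f is continuous on [−1, 1] with equal endpoint values, but it is not differentiable at 0. The differentiability hypothesis of Rolle's theorem fails on the open interval, so no point c with f'(c) = 0 has to exist, and indeed none does.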
If f is differentiable, it is also continuous and has a critical point. According to Fermat's theorem, if f is differentiable at this point, its derivative must be 0. So your example does not meet the prerequisite.
edited Dec 1, 2014 at 10:13; answered Dec 1, 2014 at 9:54 by user193702
This is incorrect. The function need only be differentiable. –JP McCarthy Commented Dec 1, 2014 at 9:59
You are correct. It just uses Fermat theorem, not IVT. –user193702 Commented Dec 1, 2014 at 10:08
This is still not precise enough. The function isn't differentiable on (−1,1) --- that is the problem. –JP McCarthy Commented Dec 1, 2014 at 11:01
Published Time: 2016-11-01T23:38:50Z
5.4: Time Dilation - Physics LibreTexts
===============
5.4: Time Dilation
Last updated Mar 16, 2025
Learning Objectives
By the end of this section, you will be able to:
Explain how time intervals can be measured differently in different reference frames.
Describe how to distinguish a proper time interval from a dilated time interval.
Describe the significance of the muon experiment.
Explain why the twin paradox is not a contradiction.
Calculate time dilation given the speed of an object in a given frame.
The analysis of simultaneity shows that Einstein’s postulates imply an important effect: Time intervals have different values when measured in different inertial frames. Suppose, for example, an astronaut measures the time it takes for a pulse of light to travel a distance perpendicular to the direction of his ship’s motion (relative to an earthbound observer), bounce off a mirror, and return (Figure 5.4.1 a 5.4.1 a). How does the elapsed time that the astronaut measures in the spacecraft compare with the elapsed time that an earthbound observer measures by observing what is happening in the spacecraft?
Examining this question leads to a profound result. The elapsed time for a process depends on which observer is measuring it. In this case, the time measured by the astronaut (within the spaceship where the astronaut is at rest) is smaller than the time measured by the earthbound observer (to whom the astronaut is moving). The time elapsed for the same process is different for the observers, because the distance the light pulse travels in the astronaut’s frame is smaller than in the earthbound frame, as seen in Figure 5.4.1 b 5.4.1 b. Light travels at the same speed in each frame, so it takes more time to travel the greater distance in the earthbound frame.
Figure 5.4.1: (a) An astronaut measures the time Δτ for light to travel distance 2D in the astronaut's frame. (b) A NASA scientist on Earth sees the light follow the longer path 2s and take a longer time Δt. (c) These triangles are used to find the relationship between the two distances D and s.
Definition: Time Dilation
Time dilation is the lengthening of the time interval between two events for an observer in an inertial frame that is moving with respect to the rest frame of the events (in which the events occur at the same location).
To quantitatively compare the time measurements in the two inertial frames, we can relate the distances in Figure 5.4.1b to each other, then express each distance in terms of the time of travel (respectively either Δt or Δτ) of the pulse in the corresponding reference frame. The resulting equation can then be solved for Δt in terms of Δτ.
The lengths D and L in Figure 5.4.1c are the sides of a right triangle with hypotenuse s. From the Pythagorean theorem,
$$s^2 = D^2 + L^2 .$$
The lengths 2s and 2L are, respectively, the distances that the pulse of light and the spacecraft travel in time Δt in the earthbound observer's frame. The length D is the distance that the light pulse travels in time Δτ in the astronaut's frame. This gives us three equations:
$$2s = c\,\Delta t ; \qquad 2L = v\,\Delta t ; \qquad 2D = c\,\Delta\tau .$$
Note that we used Einstein's second postulate by taking the speed of light to be c in both inertial frames. We substitute these results into the previous expression from the Pythagorean theorem:
$$s^2 = D^2 + L^2 \quad\Rightarrow\quad \left(\frac{c\,\Delta t}{2}\right)^{2} = \left(\frac{c\,\Delta\tau}{2}\right)^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2} .$$
Then we rearrange to obtain
$$(c\,\Delta t)^2 - (v\,\Delta t)^2 = (c\,\Delta\tau)^2 .$$
Finally, solving for Δt in terms of Δτ gives us
$$\Delta t = \frac{\Delta\tau}{\sqrt{1 - (v/c)^2}} .$$
This is equivalent to
$$\Delta t = \gamma\,\Delta\tau , \tag{5.4.1}$$
where γ is the relativistic factor (often called the Lorentz factor) given by
$$\gamma = \frac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}}$$
and v and c are the speeds of the moving observer and light, respectively.
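A minimal numerical sketch of Eq. (5.4.1) is given below, added here for illustration; the function names are ours and not part of the text.

```python
# Minimal sketch of Eq. (5.4.1): the dilated interval is Delta t = gamma * Delta tau.
import math

def lorentz_gamma(v: float, c: float = 3.00e8) -> float:
    """Relativistic factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def dilated_time(proper_time: float, v: float, c: float = 3.00e8) -> float:
    """Time interval measured by an observer relative to whom the clock moves at speed v."""
    return lorentz_gamma(v, c) * proper_time

# Usage with the muon numbers quoted later in this section:
print(dilated_time(2.20e-6, 0.950 * 3.00e8))   # roughly 7.05e-6 s
```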
Note the asymmetry between the two measurements. Only one of them is a measurement of the time interval between two events—the emission and arrival of the light pulse—at the same position. It is a measurement of the time interval in the rest frame of a single clock. The measurement in the earthbound frame involves comparing the time interval between two events that occur at different locations. The time interval between events that occur at a single location has a separate name to distinguish it from the time measured by the earthbound observer, and we use the separate symbol Δτ to refer to it throughout this chapter.
Definition: Proper Time
The proper time interval Δτ between two events is the time interval measured by an observer for whom both events occur at the same location.
The equation relating Δt and Δτ is truly remarkable. First, as stated earlier, elapsed time is not the same for different observers moving relative to one another, even though both are in inertial frames. A proper time interval Δτ for an observer who, like the astronaut, is moving with the apparatus, is smaller than the time interval for other observers. It is the smallest possible measured time between two events. The earthbound observer sees time intervals within the moving system as dilated (i.e., lengthened) relative to how the observer moving relative to Earth sees them within the moving system. Alternatively, according to the earthbound observer, less time passes between events within the moving frame. Note that the shortest elapsed time between events is in the inertial frame in which the observer sees the events (e.g., the emission and arrival of the light signal) occur at the same point.
This time effect is real and is not caused by inaccurate clocks or improper measurements. Time-interval measurements of the same event differ for observers in relative motion. The dilation of time is an intrinsic property of time itself. All clocks moving relative to an observer, including biological clocks, such as a person's heartbeat, or aging, are observed to run more slowly compared with a clock that is stationary relative to the observer.
Note that if the relative velocity is much less than the speed of light (v << c), then v²/c² is extremely small, and the elapsed times Δt and Δτ are nearly equal. At low velocities, physics based on modern relativity approaches classical physics—everyday experiences involve very small relativistic effects. However, for speeds near the speed of light, v²/c² is close to one, so √(1 − v²/c²) is very small and Δt becomes significantly larger than Δτ.
Half-Life of a Muon
There is considerable experimental evidence that the equation Δt = γΔτ is correct. One example is found in cosmic ray particles that continuously rain down on Earth from deep space. Some collisions of these particles with nuclei in the upper atmosphere result in short-lived particles called muons. The half-life (amount of time for half of a material to decay) of a muon is 1.52 μs when it is at rest relative to the observer who measures the half-life. This is the proper time interval Δτ. This short time allows very few muons to reach Earth's surface and be detected if Newtonian assumptions about time and space were correct. However, muons produced by cosmic ray particles have a range of velocities, with some moving near the speed of light. It has been found that the muon's half-life as measured by an earthbound observer (Δt) varies with velocity exactly as predicted by the equation Δt = γΔτ. The faster the muon moves, the longer it lives. We on Earth see the muon last much longer than its half-life predicts within its own rest frame. As viewed from our frame, the muon decays more slowly than it does when at rest relative to us. A far larger fraction of muons reach the ground as a result.
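The following rough estimate, added for illustration, shows why this matters for the muon count at ground level; the altitude, speed and simple half-life survival model below are assumed round numbers, not values from the text.

```python
# Rough illustration: fraction of muons surviving the trip from an assumed
# production altitude to the ground, with and without time dilation.
import math

c = 3.0e8             # m/s
v = 0.995 * c         # assumed muon speed
altitude = 10_000.0   # assumed production altitude in metres
half_life = 1.52e-6   # s, rest-frame half-life quoted above

t_lab = altitude / v                              # travel time in the Earth frame
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
t_proper = t_lab / gamma                          # elapsed time on the muon's own clock

surviving_newtonian    = 0.5 ** (t_lab / half_life)     # ignoring relativity
surviving_relativistic = 0.5 ** (t_proper / half_life)  # with time dilation

print(f"gamma = {gamma:.1f}")
print(f"surviving fraction, no dilation:   {surviving_newtonian:.2e}")
print(f"surviving fraction, with dilation: {surviving_relativistic:.2f}")
```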
Before we present the first example of solving a problem in relativity, we state a strategy you can use as a guideline for these calculations.
PROBLEM-SOLVING STRATEGY: RELATIVITY
Make a list of what is given or can be inferred from the problem as stated (identify the knowns). Look in particular for information on relative velocity v.
Identify exactly what needs to be determined in the problem (identify the unknowns).
Make certain you understand the conceptual aspects of the problem before making any calculations (express the answer as an equation). Decide, for example, which observer sees time dilated or length contracted before working with the equations or using them to carry out the calculation. If you have thought about who sees what, who is moving with the event being observed, who sees proper time , and so on, you will find it much easier to determine if your calculation is reasonable.
Determine the primary type of calculation to be done to find the unknowns identified above (do the calculation). You will find the section summary helpful in determining whether a length contraction , relativistic kinetic energy , or some other concept is involved.
Note that you should not round off during the calculation. As noted in the text, you must often perform your calculations to many digits to see the desired effect. You may round off at the very end of the problem solution, but do not use a rounded number in a subsequent calculation. Also, check the answer to see if it is reasonable: Does it make sense? This may be more difficult for relativity, which has few everyday examples to provide experience with what is reasonable. But you can look for velocities greater than c or relativistic effects that are in the wrong direction (such as a time contraction where a dilation was expected).
Example 5.4.1A: Time Dilation in a High-Speed Vehicle
The Hypersonic Technology Vehicle 2 (HTV-2) is an experimental rocket vehicle capable of traveling at 21,000 km/h (5830 m/s). If an electronic clock in the HTV-2 measures a time interval of exactly 1-s duration, what would observers on Earth measure the time interval to be?
Strategy
Apply the time dilation formula to relate the proper time interval of the signal in HTV-2 to the time interval measured on the ground.
Solution
Identify the knowns: Δτ = 1 s; v = 5830 m/s.
Identify the unknown: Δt.
Express the answer as an equation: $$\Delta t = \gamma\,\Delta\tau = \frac{\Delta\tau}{\sqrt{1 - v^2/c^2}} .$$
Do the calculation. Use the expression for γ to determine Δt from Δτ: $$\Delta t = \frac{1\ \text{s}}{\sqrt{1 - \left(\frac{5830\ \text{m/s}}{3.00\times 10^{8}\ \text{m/s}}\right)^{2}}} = 1.000000000189\ \text{s} = 1\ \text{s} + 1.89\times 10^{-10}\ \text{s}.$$
Significance
The very high speed of the HTV-2 is still only about 10⁻⁵ times the speed of light. Relativistic effects for the HTV-2 are negligible for almost all purposes, but are not zero.
What Speeds are Relativistic?
How fast must a vehicle travel for 1 second of time measured on a passenger's watch in the vehicle to differ by 1% for an observer measuring it from the ground outside?
Strategy
Use the time dilation formula to find v/c for the given ratio of times.
Solution
Identify the known: Δτ/Δt = 1/1.01.
Identify the unknown: v/c.
Express the answer as an equation: $$\Delta t = \gamma\,\Delta\tau = \frac{\Delta\tau}{\sqrt{1 - v^2/c^2}} \;\Rightarrow\; \frac{\Delta\tau}{\Delta t} = \sqrt{1 - v^2/c^2} \;\Rightarrow\; \frac{v}{c} = \sqrt{1 - (\Delta\tau/\Delta t)^2} .$$
Do the calculation: $$\frac{v}{c} = \sqrt{1 - (1/1.01)^2} = 0.14 .$$
Significance
The result shows that an object must travel at very roughly 10% of the speed of light for its motion to produce significant relativistic time dilation effects.
Calculating Δt for a Relativistic Event
Suppose a cosmic ray colliding with a nucleus in Earth's upper atmosphere produces a muon that has a velocity v = 0.950c. The muon then travels at constant velocity and lives 2.20 μs as measured in the muon's frame of reference. (You can imagine this as the muon's internal clock.) How long does the muon live as measured by an earthbound observer (Figure 5.4.2)?
Figure 5.4.2: A muon in Earth's atmosphere lives longer as measured by an earthbound observer than as measured by the muon's internal clock.
As we will discuss later, in the muon's reference frame, it travels a shorter distance than measured in Earth's reference frame.
Strategy
A clock moving with the muon measures the proper time of its decay process, so the time we are given is Δτ = 2.20 μs. The earthbound observer measures Δt as given by the equation Δt = γΔτ. Because the velocity is given, we can calculate the time in Earth's frame of reference.
Solution
Identify the knowns: v = 0.950c; Δτ = 2.20 μs.
Identify the unknown: Δt.
Express the answer as an equation. Use Δt = γΔτ with $$\gamma = \frac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}} .$$
Do the calculation. Use the expression for γ to determine Δt from Δτ: $$\Delta t = \gamma\,\Delta\tau = \frac{2.20\ \mu\text{s}}{\sqrt{1 - (0.950)^2}} = 7.05\ \mu\text{s}.$$ Remember to keep extra significant figures until the final answer.
Significance
One implication of this example is that because γ = 3.20 at 95.0% of the speed of light (v = 0.950c), the relativistic effects are significant. The two time intervals differ by a factor of 3.20, when classically they would be the same. Something moving at 0.950c is said to be highly relativistic.
Example 5.4.1B: Relativistic Television
A non-flat screen, older-style television display (Figure 5.4.3) works by accelerating electrons over a short distance to relativistic speed, and then using electromagnetic fields to control where the electron beam strikes a fluorescent layer at the front of the tube. Suppose the electrons travel at 6.00×10⁷ m/s through a distance of 0.200 m from the start of the beam to the screen.
What is the time of travel of an electron in the rest frame of the television set?
What is the electron's time of travel in its own rest frame?
Figure 5.4.3: The electron beam in a cathode ray tube television display.
Strategy for (a)
(a) Calculate the time from vt = d. Even though the speed is relativistic, the calculation is entirely in one frame of reference, and relativity is therefore not involved.
Solution
Identify the knowns: v = 6.00×10⁷ m/s; d = 0.200 m.
Identify the unknown: the time of travel Δt.
Express the answer as an equation: Δt = d/v.
Do the calculation: $$\Delta t = \frac{0.200\ \text{m}}{6.00\times 10^{7}\ \text{m/s}} = 3.33\times 10^{-9}\ \text{s}.$$
Significance
The time of travel is extremely short, as expected. Because the calculation is entirely within a single frame of reference, relativity is not involved, even though the electron speed is close to c.
Strategy for (b)
(b) In the frame of reference of the electron, the vacuum tube is moving and the electron is stationary. The electron-emitting cathode leaves the electron and the front of the vacuum tube strikes the electron with the electron at the same location. Therefore we use the time dilation formula to relate the proper time in the electron rest frame to the time in the television frame.
Solution
Identify the knowns (from part a): Δt = 3.33×10⁻⁹ s; v = 6.00×10⁷ m/s; d = 0.200 m.
Identify the unknown: Δτ.
Express the answer as an equation: $$\Delta t = \gamma\,\Delta\tau = \frac{\Delta\tau}{\sqrt{1 - v^2/c^2}} .$$
Do the calculation: $$\Delta\tau = (3.33\times 10^{-9}\ \text{s})\sqrt{1 - \left(\frac{6.00\times 10^{7}\ \text{m/s}}{3.00\times 10^{8}\ \text{m/s}}\right)^{2}} = 3.26\times 10^{-9}\ \text{s}.$$
Significance
The time of travel is shorter in the electron frame of reference. Because the problem requires finding the time interval measured in different reference frames for the same process, relativity is involved. If we had tried to calculate the time in the electron rest frame by simply dividing the 0.200 m by the speed, the result would be slightly incorrect because of the relativistic speed of the electron.
Exercise 5.4.1
What is γ if v = 0.650c?
Answer
$$\gamma = \frac{1}{\sqrt{1 - \dfrac{v^2}{c^2}}} = \frac{1}{\sqrt{1 - \dfrac{(0.650c)^2}{c^2}}} = 1.32$$
The Twin Paradox
An intriguing consequence of
time dilation
is that a space traveler moving at a high velocity relative to Earth would age less than the astronaut’s earthbound twin. This is often known as the twin paradox. Imagine the astronaut moving at such a velocity that γ=30.0 γ=30.0, as in Figure 5.4.4 5.4.4. A trip that takes 2.00 years in her frame would take 60.0 years in the earthbound twin’s frame. Suppose the astronaut travels 1.00 year to another star system, briefly explores the area, and then travels 1.00 year back. An astronaut who was 40 years old at the start of the trip would be would be 42 when the spaceship returns. Everything on Earth, however, would have aged 60.0 years. The earthbound twin, if still alive, would be 100 years old.
The situation would seem different to the astronaut in Figure 5.4.4 5.4.4. Because motion is relative, the spaceship would seem to be stationary and Earth would appear to move. (This is the sensation you have when flying in a jet.) Looking out the window of the spaceship, the astronaut would see time slow down on Earth by a factor of γ=30.0 γ=30.0. Seen from the spaceship, the earthbound sibling will have aged only 2/30, or 0.07, of a year, whereas the astronaut would have aged 2.00 years.
Figure 5.4.4 5.4.4: The twin paradox consists of the conflicting conclusions about which twin ages more as a result of a long space journey at relativistic speed.
The paradox here is that the two twins cannot both be correct. As with all paradoxes, conflicting conclusions come from a false premise. In fact, the astronaut’s motion is significantly different from that of the earthbound twin. The astronaut accelerates to a high velocity and then decelerates to view the star system. To return to Earth, she again accelerates and decelerates. The spacecraft is not in a single inertial frame to which the
time dilation
formula can be directly applied. That is, the astronaut twin changes inertial references. The earthbound twin does not experience these accelerations and remains in the same inertial frame. Thus, the situation is not symmetric, and it is incorrect to claim that the astronaut observes the same effects as her twin. The lack of symmetry between the twins will be still more evident when we analyze the journey later in this chapter in terms of the path the astronaut follows through four-dimensional space-time.
In 1971, American physicists Joseph Hafele and Richard Keating verified
time dilation
at low relative velocities by flying extremely accurate atomic clocks around the world on commercial aircraft. They measured elapsed time to an accuracy of a few nanoseconds and compared it with the time measured by clocks left behind. Hafele and Keating’s results were within experimental uncertainties of the predictions of relativity. Both special and general relativity had to be taken into account, because gravity and accelerations were involved as well as relative motion.
Exercise 5.4.2A
A particle travels at 1.90×10⁸ m/s and lives 2.10×10⁻⁸ s when at rest relative to an observer. How long does the particle live as viewed in the laboratory?
Answer
$$\Delta t = \frac{\Delta\tau}{\sqrt{1 - \dfrac{v^2}{c^2}}} = \frac{2.10\times 10^{-8}\ \text{s}}{\sqrt{1 - \dfrac{(1.90\times 10^{8}\ \text{m/s})^2}{(3.00\times 10^{8}\ \text{m/s})^2}}} = 2.71\times 10^{-8}\ \text{s}.$$
Exercise 5.4.2B
Spacecraft A and B pass in opposite directions at a relative speed of 4.00×10⁷ m/s. An internal clock in spacecraft A causes it to emit a radio signal for 1.00 s. The computer in spacecraft B corrects for the beginning and end of the signal having traveled different distances, to calculate the time interval during which ship A was emitting the signal. What is the time interval that the computer in spacecraft B calculates?
Answer
Only the relative speed of the two spacecraft matters because there is no absolute motion through space. The signal is emitted from a fixed location in the frame of reference of A, so the proper time interval of its emission is Δτ = 1.00 s. The duration of the signal measured from frame of reference B is then
$$\Delta t = \frac{\Delta\tau}{\sqrt{1 - \dfrac{v^2}{c^2}}} = \frac{1.00\ \text{s}}{\sqrt{1 - \dfrac{(4.00\times 10^{7}\ \text{m/s})^2}{(3.00\times 10^{8}\ \text{m/s})^2}}} = 1.01\ \text{s}.$$
This page titled 5.4: Time Dilation is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform.
Published Time: 2017-10-17T10:23:55.923Z
Distance Attenuation Calculator
===============
Last updated: July 24, 2024
Created by Bogna Szyk. Reviewed by Steven Wooding.
Table of contents
What is the SPL (sound pressure level)?
Sound attenuation formula
Inverse square law
FAQs
This distance attenuation calculator is a tool that lets you analyze how the sound propagates in the air. The further away you are from the sound source, the lower the perceived sound intensity. We can describe the exact relationship between the sound level and distance using the sound attenuation formula.
In this article, we will show you how to calculate the exact sound level at any distance from the source (see distance calculator). We will also provide you with a rule of thumb to quickly estimate the drop in volume – without using any calculations whatsoever!
What is the SPL (sound pressure level)?
All sounds we hear are nothing but vibrations traveling through the air (or other mediums). These vibrations exert a certain pressure on our ears.
One of the ways to measure this sound pressure is to use regular units of pressure, called pascals. This approach is hugely inconvenient. Why? The quietest sound we can hear – our hearing threshold – is about 0.00002 Pa. Expressing sound levels in such tiny fractions of a pascal is anything but intuitive.
That is why, instead of regular pressure units, we use dedicated sound pressure units called decibels (see dB calculator). The decibel (dB) scale is logarithmic: an increase of roughly 3 dB corresponds to a doubling of sound power, and an increase of about 6 dB to a doubling of the sound pressure expressed in pascals.
When SPL is given in decibels, we can estimate the pressure of everyday sounds, usually in the 20-100 dB range. 120 or 130 dB is the pain threshold – for example, a jet aircraft taking off in your immediate neighborhood will emit this level of sound.
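For reference, the decibel value itself comes from comparing the measured pressure with the hearing threshold. The standard acoustics definition (not spelled out in the calculator text, but consistent with it) is

$$\text{SPL} = 20\log_{10}\left(\frac{p}{p_0}\right)\ \text{dB}, \qquad p_0 = 20\ \mu\text{Pa} = 0.00002\ \text{Pa}.$$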
💡 To convert between different pressure units, use our pressure conversion tool.
Sound attenuation formula
Sound attenuation describes how the SPL changes with increasing distance from the sound source. For example, you can imagine two houses standing close to a highway. If you measure the distance from each of the buildings to the road and the SPL of one of them, you will be able to calculate the sound level in the other house.
The sound attenuation formula is as follows:
$$\text{SPL}_2 = \text{SPL}_1 - 20\log\left(\frac{R_2}{R_1}\right),$$
where:
$\text{SPL}_1$ – Sound pressure level at point 1;
$\text{SPL}_2$ – Sound pressure level at point 2;
$R_1$ – Distance from the sound source to point 1; and
$R_2$ – Distance from the sound source to point 2.
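The formula is easy to turn into code. Here is a minimal Python sketch; the function name and the example numbers are ours, not part of the calculator.

```python
import math

def spl_at_distance(spl_1_db: float, r_1: float, r_2: float) -> float:
    """Sound pressure level at distance r_2, given the level spl_1_db measured at r_1.

    Implements SPL_2 = SPL_1 - 20 * log10(R_2 / R_1) for a point source in a free field.
    """
    return spl_1_db - 20.0 * math.log10(r_2 / r_1)

# Example: 85 dB measured 10 m from a source; level at 40 m away
print(spl_at_distance(85.0, 10.0, 40.0))   # 85 - 20*log10(4) ≈ 73.0 dB

# Halving the distance (the case derived below) raises the level by about 6 dB
print(spl_at_distance(85.0, 10.0, 5.0))    # ≈ 91.0 dB
```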
Inverse square law
Now, imagine that the distance from the sound source to point 1 is two times smaller than the distance from the source to point 2. In other words, $R_1 = 0.5 \times R_2$. In this case:
$$\begin{aligned} \text{SPL}_2 &= \text{SPL}_1 - 20\log\left(\frac{R_2}{R_1}\right)\\ &= \text{SPL}_1 - 20\log\left(\frac{R_2}{0.5 \times R_2}\right)\\ &= \text{SPL}_1 - 20\log(2)\\ &= \text{SPL}_1 - 20\times 0.301\\ &= \text{SPL}_1 - 6.02\ \text{dB} \end{aligned}$$
We have just calculated that the point lying twice as far from the sound source has an SPL lower by 6 dB – equivalently, halving the distance to the source raises the sound pressure level by 6 dB. What does that mean?
Recall that an increase of 3 dB corresponds to a doubling of the sound power. Following that logic, a gain of 6 dB is a fourfold increase in sound intensity, which is the same as a doubling of the sound pressure. Each time you reduce the distance to the source by a factor of 2, the sound intensity grows by a factor of 4.
This rule is known as the inverse square law. You can use it to roughly estimate the change in SPL without actually doing any real calculations. If you need exact numbers, though, don't hesitate to use this distance attenuation calculator!
FAQs
How do I calculate the sound pressure level change with distance?
To calculate SPL change between two points, follow these steps:
Measure the distances from the sound source to points 1 and 2. Let's denote them by R1 and R2.
Compute the ratio R2/R1.
Take the base-10 logarithm of this ratio and multiply the result by 20.
The result is the SPL difference between the two points. For example, with R1 = 2 m and R2 = 8 m, the attenuation is 20 × log10(4) ≈ 12 dB.
What is the 3 dB rule?
The 3 dB rule states that if you double the power, you gain roughly 3 dB. Conversely, halving the power implies a loss of approximately 3 dB.
What is the 6 dB rule?
The 6 dB rule states that whenever the distance separating you from the sound source doubles (e.g., you move from 100 to 200 feet away from the source), the sound pressure level drops by 6 dB. Equivalently, the sound intensity decreases by a factor of 4.
How much louder is 40 dB than 20 dB?
40 dB carries 100 times more sound power than 20 dB. Similarly, 80 dB carries 100 times more power than 60 dB. This is because the decibel scale is logarithmic, and an increase of 10 dB corresponds to ten times more power.
Published Time: Fri, 07 Jul 2017 01:33:16 GMT
University of Regensburg Department of Mathematics
Master’s Thesis in Arithmetic Geometry
Harmonic Functions on the Berkovich Projective Line
Author: Veronika Wanner
Supervisor: Prof. Dr. Walter Gubler
Presented: April 28, 2016

Zusammenfassung (Abstract)

In this thesis, harmonic functions on the Berkovich projective line P1Berk are introduced and analogues of results from classical potential theory are proved. Furthermore, a connection is established to smooth functions in the sense of the theory of forms and currents on Berkovich spaces.

In the first four chapters of the thesis we devote ourselves, as announced, to the theory of harmonic functions on P1Berk, following [BR] by Matthew Baker and Robert Rumely. The construction of the Berkovich projective line P1Berk over a field K that is algebraically closed and complete with respect to a non-trivial non-archimedean absolute value is described, and essential properties, such as its tree structure, are given. With the help of this structure a Laplacian operator can be defined, which makes it possible to define harmonic functions, as usual, as its solutions. Analogues of results from classical potential theory, such as the Maximum Principle, the Poisson formula and Harnack's Principle, are proved.

After harmonic functions on P1Berk have been studied in detail in the first chapters, smooth functions on the analytification of an arbitrary algebraic variety X over K are defined in the last chapter. These were originally introduced by Antoine Chambert-Loir and Antoine Ducros in [CD] as real-valued (0, 0)-differential forms on Berkovich analytic spaces. In the case of an algebraic variety X over K, differential forms on the analytification Xan can be defined using tools from tropical geometry; here we follow Walter Gubler's paper [Gu13]. We obtain differential operators d′ and d′′, and we pay particular attention to the kernel of the composition d′d′′ on the smooth functions. Smooth functions in the kernel of d′d′′ can be characterized via the presheaf log |O×_X|, which allows us to establish a connection to the harmonic functions on P1Berk. It can be shown that a real-valued function on an open subset W of P1Berk is harmonic if and only if it can be written as a linear combination of functions of the form log |O_{P1_K}(W)×|. This implies that the vector space of harmonic functions on W coincides with the subspace ker d′d′′ of the vector space of smooth functions on W. Consequently, every harmonic function on an open subset W of P1Berk is smooth.

In order to establish such a link in the general case as well, i.e. for an arbitrary smooth algebraic curve X over K, we introduce the sheaf of harmonic functions HX in the sense of Amaury Thuillier in [Th]. In particular, it is shown that Thuillier's definition extends the one of Baker and Rumely and that ker d′d′′ is a subsheaf of HX. Two explicit conditions are given under which the sheaf HX coincides entirely with ker d′d′′. Moreover, a curve is constructed such that we can find a harmonic function on an open subset of Xan which does not lie in the kernel of the operator d′d′′. Since we show that functions which are both smooth and harmonic already lie in the kernel of d′d′′, we can finally answer the question of whether every harmonic function is necessarily smooth with a no.

Contents
1 Introduction
2 The Berkovich projective line
2.1 The definition and structure of the Berkovich unit disc
2.2 R-trees and the tree structure of D(0, 1)
2.3 The construction of P1Berk
3 The Laplacian on the Berkovich projective line
3.1 Construction and properties of the Laplacian on a subdomain of P1Berk
3.2 Examples and the Hsia kernel
4 Harmonic functions
4.1 Harmonic functions
4.2 Harmonic functions and the main dendrite
4.3 The Maximum Principle
4.4 Poisson Formula and the Dirichlet and the Neumann Problem
4.5 Poisson Formula and the Equilibrium and the Poisson-Jensen Measure
4.6 Uniform Convergence
4.7 Harnack's Principle
5 The link to smooth functions on analytic curves
5.1 Differential forms and smooth functions on Xan
5.2 The link between the presheaf log |O×_X| and smooth functions
5.3 The link between the presheaf log |O×_X| and harmonic functions
Bibliography
1 Introduction
Potential theory is a very old area of mathematics and originates in the 18th century. One can say that the foundation was laid by Joseph-Louis Lagrange and Pierre-Simon Laplace. Lagrange discovered that gravitational forces derive from a function which was later called the potential function by George Green. A few years later, Laplace showed that in a mass-free region this function satisfies the partial differential equation which is today known as Laplace's equation. Today's classical potential theory still contains the study of solutions of Laplace's equation, which are called harmonic functions.

Potential theory, and so in particular the theory of harmonic functions, can be extended to non-archimedean analytic geometry. This is for example done by Matthew Baker and Robert Rumely in [BR] and by Amaury Thuillier in [Th]. In [BR], Baker and Rumely give an approach to potential theory on the non-archimedean projective line, and Thuillier develops in [Th] a non-archimedean potential theory for general curves. In this Master's thesis, we first follow [BR] and elaborate on the theory of harmonic functions on the Berkovich projective line P1Berk, including the construction of P1Berk and the definition of a Laplacian operator. We will see that there are analogues of the main results from complex potential theory, where we refer to the book [Ra] by Thomas Ransford for the classical results. However, there are some statements in the classical theory which are not considered in this context, for example the property that every harmonic function on an open subset of C is smooth (cf. [Ra, Corollary 1.1.4]). This statement raises the question whether there is a suitable definition of smoothness and an analogous statement in non-archimedean potential theory.

Antoine Chambert-Loir and Antoine Ducros introduced real-valued differential forms on Berkovich analytic spaces in their preprint [CD]. They define smooth functions as differential forms of bidegree (0, 0). In the algebraic situation, i.e. when the Berkovich analytic space is the analytification of an algebraic variety, this theory is summarized and compared with tropical algebraic geometry by Walter Gubler in his paper [Gu13]. For the introduction of the theory of smooth functions we will follow his approach. The link between harmonic functions and smoothness in the sense of [CD] resp. [Gu13] is the main purpose of this thesis.

We now outline the contents of this Master's thesis and emphasize the main results. In the first three chapters we do potential theory on the Berkovich projective line after [BR]. In these chapters we work over an algebraically closed field K which is complete with respect to a non-trivial non-archimedean absolute value | |. In Chapter 2 we explain the construction and the natural tree structure of the Berkovich projective line
P1Berk. In Section 2.1, we therefore recall the definition of D(0, 1) and state Berkovich's
classification theorem of the points contained in the Berkovich unit disc (Theorem 2.1.6). This says that every x ∈ D (0 , 1) corresponds to a sequence of nested closed discs (D(ai, r i)) . The nature of the intersection of these discs leads to a classification of points into four different types. D(0 , 1) has a tree structure which is explained in Section 2.2. In Section 2.3, we describe the construction of the topological space P1Berk and state some fundamental properties. One obtains P1Berk by glueing together two copies of
D(0 , 1) along a common annulus but there is also another way to construct P1Berk . Let
A1Berk denote the Berkovich spectrum of K[T]; then P1Berk can be seen as the one-point compactification of the locally compact Hausdorff space A1Berk, i.e. P1Berk = A1Berk ⊔ {∞}.
The extra point ∞ is regarded as a point of type I, i.e. a point in K. In particular, we will see that P1Berk is uniquely path-connected and it is homeomorphic to the inverse limit over all finite subgraphs Γ contained in P1Berk . Here a finite subgraph Γ of P1Berk
is the union of the unique paths between a finite set of points of type II, III or IV. In Chapter 3 we will see that this structure enables us to construct a measure-valued Laplacian operator on a class of functions f : U → R ∪ {±∞} for a domain (i.e. open and connected) U ⊂ P1Berk . Section 3.1 includes a description of the whole development and construction of the Laplacian operator. If Γ is a finite subgraph of P1Berk (or more generally a metrized graph), Baker and Rumely give in [BR] an extension ∆Γ of Zhang’s Laplacian operator, introduced in [Zh], to the space of functions of ‘bounded differential variation’ on Γ. We will denote this space by BDV(Γ) . ∆Γ(f ) is a finite signed Borel measure of total mass zero on Γ for each function f ∈ BDV(Γ) . Further, for every finite signed Borel measure μ of total mass zero on Γ there is a function f ∈ BDV(Γ) such that μ = ∆ Γ(f ). Defining BDV( U ) for a domain U of P1Berk as the class of functions
f : U → R ∪ {±∞} satisfying
• f |Γ ∈ BDV(Γ) for every finite subgraph Γ ⊂ U , and
• | ∆Γ(f )|(Γ) ≤ B(f ) for a constant B(f ) for every finite subgraph Γ ⊂ U ,the collection of measures {∆Γ(f )} is a coherent system for every f ∈ BDV( U ). This leads to a unique Borel measure ∆U (f ) of total mass zero on the subset U of the inverse limit space P1Berk . This Borel measure ∆U (f ) is called the complete Laplacian and its restriction to U is called the Laplacian .In Section 3.2, we give concrete examples of functions contained in the vector space
BDV( P1Berk ) and calculate their Laplacians. Clearly every constant function f : P1Berk →
R is contained in BDV(P1Berk) and ∆P1Berk(f) = 0. For more complicated examples we introduce the Hsia kernel δ(x, y)∞ for x, y ∈ A1Berk. By Berkovich's classification theorem, x, y ∈ A1Berk correspond to sequences of nested discs D(ai, ri) and D(bi, si); then δ(x, y)∞ := lim_{i→∞} max(ri, si, |ai − bi|). Baker and Rumely introduced the Hsia kernel as the fundamental kernel for potential theory on the Berkovich line, inspired by a function defined by Liang-Chung Hsia in [Hs]. Further, we define the generalized Hsia kernel δ(x, y)ζ with respect to an arbitrary point ζ ∈ P1Berk and show that the function f(x) := − log_v(δ(x, y)ζ) belongs to BDV(P1Berk) with ∆P1Berk(f) = δy − δζ for fixed y, ζ ∈ P1Berk. We use the notation log_v for the logarithm to the base qv, where qv > 1 is a fixed real number chosen so that log_v | | is a normalized valuation on
K. Moreover, we derive from the last equation the following version of the Poincaré-Lelong formula: if f(x) := − log_v([g]x) on P1Berk for a g ∈ K(T)× and div(g) = ∑_{i=1}^m n_i(a_i), the function belongs to BDV(P1Berk) and we have

$$\Delta_{\mathbf{P}^1_{\mathrm{Berk}}}(f) = \sum_{i=1}^{m} n_i\,\delta_{a_i}.$$
In Chapter 4 we introduce the theory of harmonic functions on P1Berk and give analogues of the main results in the classical potential theory, where we again follow [BR]. In Section 4.1, we define harmonic functions on open subsets of P1Berk as real-valued functions which are locally strongly harmonic, i.e. for every point x we can find an open and connected neighborhood U of x such that the function f is continuous, belongs to BDV(U) and ∆U(f) is supported on ∂U. Among others we give the function f(x) := − log_v([g]x) on P1Berk, for a g ∈ K(T)× with div(g) = ∑_{i=1}^m n_i(a_i), as an example of a (strongly) harmonic function on a domain U in P1Berk \ {a1, . . . , am}. Further, we state fundamental properties of (strongly) harmonic functions. In addition to the properties stated in [BR, §7.1], we say something about the behavior of a function f : U → R with Laplacian
∆U(f) = 0 on finite subgraphs Γ ⊂ U for a domain U with |∂U| < ∞. In Section 4.2, we study the main dendrite of an open subset U of P1Berk as the points contained in the interior of paths between two boundary points of U. We will see that the behavior of a harmonic function on U is determined by its values on the main dendrite. The knowledge about the main dendrite enables us to compare the terms harmonic and strongly harmonic for a function defined on a domain. We show that every harmonic function in BDV(U) is already strongly harmonic, which is not explicitly stated in [BR]. Further, we give a concrete function which is harmonic but not strongly harmonic. In Section 4.3, we formulate and prove an analogue of the Maximum Principle, i.e. that every harmonic function on a domain U of P1Berk which attains a minimum or a maximum value on U must be constant. Further, we give a strengthening of it called the Strong Maximum Principle. In Section 4.4, we consider domains with a finite boundary of points of type II, III or IV, called finite-dendrite domains. First, we see that every harmonic function on such a domain U belongs to BDV(U) (and so is strongly harmonic on U) and has a continuous extension to ∂U. In particular, we give the additional description f = f0 ◦ rΓ on U, where Γ is the main dendrite together with its endpoints, rΓ is a retraction map (cf. Definition 3.1.17) and f0 is a piecewise affine function on Γ. This description of f will help us to connect the two definitions of harmonic functions ([BR] and [Th]). Further, we give an analogue of the Poisson formula on finite-dendrite domains, i.e. the values of a harmonic function on U are recaptured only from the knowledge of f on ∂U. With the help of the Poisson formula we
will see that the Dirichlet and the Neumann problem are solvable on finite-dendrite domains. Additionally to [BR], we consider the formula explicitly in the case of a strict simple domain , i.e. a finite-dendrite domain whose boundary points are all of type II.
Corollary (4.4.9). If V is a strict simple domain with ∂V = {x1, . . . , xm} and f a harmonic function on V, then there exist c0, . . . , cm ∈ R and a1, . . . , am ∈ K not contained in V such that

$$f(x) = c_0 - \sum_{i=1}^{m} c_i \log_v([T - a_i]_x)$$

for all x ∈ V.
There is also another version of the Poisson formula (following from the first one) which is stated in Section 4.5 and implies that the Equilibrium measure (cf. Definition 4.3.3) and the Poisson-Jensen measure (cf. Definition 4.5.1) coincide. In Section 4.6, we see another implication of the Poisson formula, an analogue of uniform convergence: If f1, f 2, . . . is a sequence of harmonic functions on an open subset U of P1Berk which converge pointwise to a function f : U → R, then f is harmonic on U , and the fi
converge uniformly to f on compact subsets of U . With the help of uniform convergence one can formulate an analogue of Harnack’s Principle which is stated in Section 4.7. If U ⊂ P1Berk is a domain and f1, f 2, . . . a sequence of harmonic functions on U with
f1 ≤ f2 ≤ . . . , Harnack’s Principle says that the sequence either converges locally uniformly to ∞, or converges locally uniformly to an harmonic function on the domain. Note that we do not require that 0 ≤ f1 ≤ f2 ≤ . . . as in [BR]. In Chapter 5 we introduce smooth functions and try to link them with harmonic func-tions. In Section 5.1 we start with the definition of (p, q )-superforms on open subsets of Rr together with linear differential operators d′ and d′′ which were introduced by Lagerberg in [La]. Further, we consider their restriction to supports of polyhedral com-plexes in Rr (cf. [CD] and [Gu13]). Afterwards, we will recall Gubler’s approach to define differential forms of bidegree (p, q ) on the analytification Xan of an algebraic variety X over an algebraically closed field endowed with a non-trivial complete non-archimedean absolute value | | (cf. [Gu13]). If X is such a n-dimensional algebraic variety, we can cover X by very affine open sets U , i.e. sets U which have a closed immersion to a multiplicative torus Grm. Then there is a map trop U : U an → Rr such that Trop( U ) := trop U (U an ) is the support of a polyhedral complex of pure dimension
n. We define superforms on U an as a formal pullback of forms on Trop( U ) (for details cf. Definition 5.1.20). Therefore, we obtain for every open subset W of Xan a real vector space Ap,q (W ) of differential forms of bidegree (p, q ) and differential operators
d′ : Ap,q (W ) → Ap+1 ,q (W ) and d′′ : Ap,q (W ) → Ap,q +1 (W ). We will see that a differen-tial form of bidegree (0 , 0) is a well-defined continuous function f : W → R, and hence
smooth functions can be defined as (0, 0)-differential forms. We will denote the vector space A0,0(W) by C∞(W). In Section 5.2, it is shown that the function log |f| : W → R is smooth and satisfies d′d′′(log |f|) = 0 if W is an open subset of Xan and f ∈ O_Xan(W)×. Further, we have the following main characterization:
Theorem (5.2.4 ). Let W be an open subset of Xan . A function f : W → R belongs to the kernel of d′d′′ : C∞(W ) → A1,1(W ) if and only if for every x ∈ W there is an open neighborhood V of x in W and an open subset U of X with V ⊂ U an such that
$$f = \sum_{i=1}^{r} \lambda_i \log|f_i|$$
on V for f1, . . . , f r ∈ O X (U )× and λ1, . . . , λ r ∈ R.
In Section 5.3, we give similar characterizations of harmonic functions. If X = P1_K over an algebraically closed field K, we can formulate the following description of harmonic functions by using the above stated Poincaré-Lelong formula and the Poisson formula for strict simple domains.
Theorem (5.3.1). Let W be an open subset of P1Berk. Then f is harmonic on W if and only if for every x ∈ W there is an open neighborhood V of x in W and an open subset U of P1_K with V ⊂ Uan such that

$$f = \sum_{i=1}^{r} \lambda_i \log|f_i|$$

on V, where f1, . . . , fr ∈ O_{P1_K}(U)× and λ1, . . . , λr ∈ R.
Hence, we get the link between harmonic and smooth functions on Xan if X = P1_K.
Corollary (5.3.2 ). A function f is harmonic on an open subset W of P1Berk if and only if f is smooth on W and d′d′′ f = 0 .
In the general case, we introduce Thuillier’s approach to harmonic functions from [Th], show that his definition is an extension to the one made in Chapter 4 (cf. Proposition 5.3.14) and state the following Theorem by Thuillier (cf. [Th, Théorème 2.3.21]) in-cluding an elaboration on the proof. Thuillier works over a field k which is complete with respect to a non-trivial non-archimedean absolute value | | . Note that k is not required to be algebraically closed.
Theorem (5.3.17). Let X be a smooth strictly k-analytic curve and FX be the sheaf of R-vector spaces generated by the germs of functions log |f| where f ∈ O×_X. Then FX is a subsheaf of HX, and HX/FX is zero if one of the following conditions is satisfied:
i) The residue field k̃ is algebraic over a finite field.
ii) The curve X ⊗̂_k k̂ᵃ is locally isomorphic to P1Berk over k̂ᵃ, where k̂ᵃ is the completion of the algebraic closure of k.
The definition of a strictly k-analytic curve is given in Definition 5.3.3. For instance, the analytification of an algebraic curve over k is a strictly k-analytic curve. If K is an algebraically closed field endowed with a non-trivial complete non-archimedean absolute value and X is a smooth algebraic curve over K, Xan is a strictly k-analytic smooth curve. Further, FXan coincides with ker d′d′′ by Theorem 5.2.4. Hence, Thuillier's theorem delivers two explicit conditions in which a function is harmonic if and only if it is smooth and belongs to ker d′d′′. In particular, one can see that Thuillier's theorem leads to the same result as Theorem 5.3.1 if X = P1_K.
Corollary (5.3.21). If X is a smooth algebraic curve over K and
i) K̃ is algebraic over a finite field, or
ii) Xan is locally isomorphic to P1Berk,
a function f : W → R on an open subset W of Xan is harmonic if and only if it is smooth and d′d′′f = 0.
By the proof of Theorem 5.3.17 one can construct a smooth algebraic curve X over
K such that HXan /FXan is nonzero (cf. Corollary 5.3.23). Thus, there is a harmonic function which is not contained in ker d′d′′ . However, we cannot yet say if the function is smooth or not. To give finally an answer to the question if every harmonic function on an open subset W of Xan is smooth, we state a further Theorem:
Theorem (5.3.22 ). Every smooth function f : W → R which is harmonic satisfies
d′d′′ f = 0 .
Altogether, we have the following conclusion:
Corollary (5.3.23 ). Harmonic functions are not smooth in general, i.e. there is a smooth curve X over K and a harmonic function f : W → R on an open subset W of
Xan which is not smooth.
Terminology
If A and B are two sets with A ⊂ B, then A may be equal to B. We denote the complement of A in B by B\A. The zero is included in N. Further, all rings and algebras are with 1. For a ring R we use the notation R× for the group of multiplicative units. If K is a field, then | | denotes a non-trivial non-archimedean absolute value on K. We write |K×| for its value group. The completion of K with respect to | | is denoted by K̂ and an algebraic closure of K by Ka. A variety over a field is an irreducible separated reduced scheme of finite type.
Acknowledgements
The author would like to express her deepest gratitude to her supervisor Walter Gubler for the suggestion of the topic, the continuous support and the excellent guidance. Further, I would like to thank my advisor Philipp Jell for always being very helpful and taking time for my questions.

2 The Berkovich projective line
In this section, we will explain the structure and the topology of P1Berk . We start by studying the Berkovich unit disc D(0 , 1) and its properties in Section 2.1. In particular, we will see that there is a classification of points in D(0 , 1) . Further, in Section 2.2 we recall the basic knowledge about R-trees and show that the Berkovich unit disc carries a tree structure. By glueing two copies of the Berkovich unit disc together, one can construct the Berkovich projective line P1Berk (see Section 2.3). Hence, this construction leads to a tree structure on P1Berk as well. This property makes it possible to define a Laplacian operator on P1Berk (see Chapter 3) and do potential theory on it. In this Chapter we fix an algebraically closed field K which is complete with respect to a non-trivial non-archimedean absolute value | | , e.g. K = Cp. Let a ∈ K and r ≥ 0, then we use the notions D(a, r ) := {b ∈ K| | b − a| ≤ r} and D(a, r )− := {b ∈ K| | b − a| < r }
for the closed respectively the open ball.
2.1 The definition and structure of the Berkovich unit disc
For a real number R > 0, let

$$K\langle R^{-1}T\rangle = \Bigl\{\sum_{k=0}^{\infty} a_k T^k \;\Big|\; a_k \in K,\ \lim_{k\to\infty} R^k|a_k| = 0\Bigr\}$$

be the ring of formal power series on K converging on D(0, R). K⟨R−1T⟩ is complete under the norm ‖ ‖R defined by ‖f‖R := max_{k≥0} R^k|a_k| for f = ∑_{k=0}^∞ a_k T^k with lim_{k→∞} R^k|a_k| = 0. If R = 1, we call ‖ ‖R the Gauß norm and use the notation ‖ ‖. We will define the Berkovich unit disc D(0, 1) as the Berkovich spectrum of the Banach algebra A := K⟨T⟩.
Definition 2.1.1. A map | |x : A → R≥0 is called a bounded multiplicative seminorm on A if it satisfies
i) |0|x = 0 and |1|x = 1,
ii) |f + g|x ≤ |f|x + |g|x,
iii) |f g|x = |f|x · |g|x,
for all f, g ∈ A, and there is a constant Cx such that |f|x ≤ Cx‖f‖ for each f ∈ A.
Lemma 2.1.2. Let | |x be a bounded multiplicative seminorm on A. Then
i) |f|x ≤ ‖f‖,
ii) |c|x = |c| for all c ∈ K,
iii) |f + g|x ≤ max(|f|x, |g|x), and |f + g|x = max(|f|x, |g|x) if |f|x ≠ |g|x.
Proof. See [BR, Lemma 1.1].
Definition 2.1.3. We call the Berkovich spectrum of A, i.e. the set of all bounded multiplicative seminorms on A, the Berkovich unit disc and write D(0 , 1) . The topology on D(0 , 1) is taken to be the weakest topology such that the function
D(0, 1) → R≥0, x ↦ |f|x
is continuous for all f ∈ A .
Remark. The topology is generated by the sets of the form
U (f, α ) := {x ∈ D (0 , 1) | | f |x < α }
and
V (f, α ) := {x ∈ D (0 , 1) | | f |x > α }
with f ∈ A and α ∈ R≥0. This topology makes D(0, 1) into a compact Hausdorff space (cf. [BR, Theorem C.3]). Next, we will see that the points in D(0, 1) can be classified into four different types.
Definition 2.1.4. Let a ∈ D(0, 1) ⊂ K and r ∈ (0, 1]; then we define

$$|f|_a := |f(a)| \qquad\text{and}\qquad |f|_{D(a,r)} := \sup_{b\in D(a,r)} |f(b)|$$

for f ∈ A. If ai ∈ D(0, 1) ⊂ K and ri ∈ (0, 1] for i ≥ 1 and x = (D(ai, ri))_{i=1,2,...} is a sequence of nested discs, then

$$|f|_x := \inf_{i\ge 1} |f|_{D(a_i,r_i)}$$

for all f ∈ A.
Remark 2.1.5. The Maximum Modulus Principle in non-archimedean analysis ([BGR, Propositions 5.1.4/2 and 5.1.4/3]) says that if D(a, r) ⊂ D(0, 1) and f = ∑_{n≥0} a_n(T − a)^n in K⟨T⟩, then |f|_{D(a,r)} = max_{n≥0}(|a_n| r^n). Hence, one can verify that |f|_{D(a,r)} is multiplicative. Indeed, each of the three maps defined above is a bounded multiplicative seminorm on A (cf. [BR, §1.2 p.3]).
Theorem 2.1.6 (Berkovich's Classification Theorem). For every x ∈ D(0, 1) one can find a sequence of nested discs D(a1, r1) ⊃ D(a2, r2) ⊃ . . . such that

$$|f|_x = \inf_{i\ge 1} |f|_{D(a_i,r_i)}.$$

Moreover, if the sequence has a non-empty intersection, then ⋂_{i≥1} D(ai, ri) = D(a, r) for some r ≥ 0 and a ∈ K, and |f|_x = |f|_{D(a,r)}.
Proof. See [BR, Theorem 1.2].
Corollary 2.1.7. The points of D(0, 1) can be classified into the following four types. Let x ∈ D(0, 1) and {D(ai, ri)} be its corresponding sequence of nested discs.
Type I: If inf_i ri = 0, we call x a point of type I. Since K is complete, ⋂_{i≥1} D(ai, ri) = a ∈ D(0, 1), and so | |x = | |a.
Type II: If r := inf_i ri > 0 and r ∈ |K×|, we call x a point of type II. Then ⋂_{i≥1} D(ai, ri) = D(a, r) ≠ ∅ for some a ∈ D(0, 1) ⊂ K, and | |x = | |D(a,r).
Type III: If r := inf_i ri > 0 and r ∉ |K×|, we call x a point of type III. Then ⋂_{i≥1} D(ai, ri) = D(a, r) ≠ ∅ for some a ∈ D(0, 1) ⊂ K, and | |x = | |D(a,r).
Type IV: If the sequence has an empty intersection, we call x a point of type IV. Then necessarily inf_i ri > 0.
2.2 R-trees and the tree structure of D(0 , 1)
First, we repeat the definitions of a rooted R-tree and a parametrized R-tree. Afterwards, we give a one-to-one correspondence between the two definitions. With the help of Berkovich's classification theorem one can show that the Berkovich unit disc D(0, 1) is a parametrized R-tree, and so a rooted R-tree. This tree structure on D(0, 1) implies that P1Berk is a profinite R-tree, i.e. an inverse limit of finite R-trees, which leads directly to the construction of a Laplacian operator (cf. Chapter 3).
Definition 2.2.1. Let (X, d) be a metric space.
i) A geodesic segment is the image of an isometric embedding [a, b] → X for a real interval [a, b].
ii) An arc is an injective continuous map ι : [a, b] → X.
iii) (X, d) is an R-tree if every two points x ≠ y ∈ X are joined by a unique arc, i.e. there is an arc ι : [a, b] → X such that ι(a) = x and ι(b) = y; ι([a, b]) is a geodesic segment.
iv) A rooted tree is a triple (X, d, ζ) consisting of an R-tree (X, d) and a point ζ ∈ X, which is called the root.
v) Let (X, d) be an R-tree. A point x ∈ X is called ordinary if X \ {x} has two connected components. It is called a branch point if X \ {x} has more than two connected components. And we call x an end point if X \ {x} has only one connected component.
vi) A finite R-tree is an R-tree which is compact and has only finitely many branch points.
Definition 2.2.2. A parametrized R-tree is a partially ordered set (X, ≥) with a function α : X → R≥0 satisfying
i) X contains a unique maximal element ζ.
ii) Sx := {z ∈ X | z ≥ x} is totally ordered for all x ∈ X.
iii) α(ζ) = 0.
iv) α(x) ≥ α(y) for all x, y ∈ X with x ≤ y.
v) The restriction of α to any full totally ordered subset of X gives a bijection onto a real interval, where a totally ordered subset S is called full if x ≤ z ≤ y with x, y ∈ S implies z ∈ S.
Proposition 2.2.3. There is a one-to-one correspondence between rooted R-trees and parametrized R-trees.
Proof sketch. If (X, d, ζ) is a rooted R-tree, one can show that "x ≥ y iff y is contained in the unique geodesic segment [x, ζ]" defines a partial order on X and α(x) := d(x, ζ) is a function such that (X, ≥) is a parametrized R-tree. Conversely, if (X, ≥) is a parametrized R-tree and α the required function on X, then we can set ζ := max(X), and

$$d(x, y) := \alpha(x) + \alpha(y) - 2\alpha(x \vee y)$$

defines a metric on X, where

$$x \vee y := \alpha^{-1}\bigl(\inf(\alpha(S_x \cap S_y))\bigr) \cap S_x.$$

In particular, x ∨ y satisfies x ∨ y ≥ x, x ∨ y ≥ y, and z ≥ x and z ≥ y imply z ≥ x ∨ y.
2.2.4. Considering an R-tree (X, d), X is equipped with the topology induced by the metric d, which we will call the strong topology of X. But we can also define a weaker topology on X, which is given as follows: for a p ∈ X, we define an equivalence relation on X \ {p} by x ∼ y iff

$$\bigl([p, x]\setminus\{p\}\bigr) \cap \bigl([y, p]\setminus\{p\}\bigr) \neq \emptyset$$

for the unique geodesic segments [p, x] and [y, p]. Let Tp(X) denote the set of equivalence classes for this relation and define Bp(~v) as the set of points in an equivalence class ~v. Then we call the topology induced by the sets Bp(~v) for p ∈ X, ~v ∈ Tp(X) the weak topology of X.
Now, we will use Berkovich's classification theorem to show that D(0, 1) is homeomorphic to an R-tree endowed with its weak topology. One can define a partial order on
D(0 , 1) by x ≤ y iff |f |x ≤ | f |y for all f ∈ A . Hence, the unique maximal element is the bounded multiplicative seminorm | | D(0 ,1) which we will call the Gauss point and it is denoted by ζGauss . The minimal points under this partial order are the points of type I and IV (cf. [BR, Corollary 1.11]).
Definition 2.2.5. Let x ∈ D(0, 1); then there exists a sequence of nested discs (D(ai, ri)) such that | |x = inf_{i≥1} | |_{D(a_i,r_i)} by Berkovich's Classification Theorem. We define the diameter of x as

$$\operatorname{diam}(x) := \lim_{i\to\infty} r_i.$$
2.2.6. One can verify that the function α(x) := 1 − diam(x) makes D(0, 1) into a parametrized R-tree (cf. [BR, §1.4 p.11]). By Proposition 2.2.3, the metric

$$d(x, y) := 2\operatorname{diam}(x \vee y) - \operatorname{diam}(x) - \operatorname{diam}(y),$$

where x ∨ y denotes the least upper bound of x and y, makes D(0, 1) into an R-tree. The endpoints are given by the points of type I and IV, the ordinary points are the points of type III and the branch points coincide with the points of type II (cf. [BR, §1.4 p.12]).
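For points of type I and II this metric can be computed very concretely: the join x ∨ y of the points corresponding to discs D(a, r) and D(b, s) corresponds to the smallest disc containing both, whose diameter is max(r, s, |a − b|). The following Python sketch is our own illustration and not part of the thesis; it assumes points with rational centres and the p-adic absolute value as a stand-in for | | on K.

```python
from fractions import Fraction

def abs_p(x: Fraction, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p^(-v_p(x)) of a rational number x."""
    if x == 0:
        return Fraction(0)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(1, p) ** v if v >= 0 else Fraction(p) ** (-v)

def diam_of_join(a, r, b, s, p):
    """Diameter of the smallest disc containing D(a, r) and D(b, s)."""
    return max(r, s, abs_p(Fraction(a) - Fraction(b), p))

def small_metric(a, r, b, s, p):
    """d(x, y) = 2 diam(x ∨ y) − diam(x) − diam(y) for x = D(a, r), y = D(b, s)."""
    return 2 * diam_of_join(a, r, b, s, p) - r - s

# Example over Q_5: x = D(0, 1/5), y = D(3, 1/25); |0 − 3|_5 = 1, so x ∨ y = D(0, 1)
print(small_metric(0, Fraction(1, 5), 3, Fraction(1, 25), p=5))  # 2*1 - 1/5 - 1/25 = 44/25
```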
Definition 2.2.7. The metric d is called the small metric . Note that the topology in-duced by the small metric is not the same as the Berkovich topology. On D(0 , 1) \D(0 , 1)
we can define the big distance or logarithmic distance
ρ(x, y ) := 2 log v(diam( x ∨ y)) − log v(diam( x)) − log v(diam( y)) .
Proposition 2.2.8. D(0 , 1) with its Berkovich topology is homeomorphic to the R-tree
(D(0 , 1) , d ) with its weak topology. Further, the metric ρ makes D(0 , 1) \D(0 , 1) into an
R-tree. Proof. See [BR, Proposition 1.13] and [BR, Proposition 1.15].
Corollary 2.2.9. The space D(0 , 1) is uniquely path-connected.
Proof. This is a direct consequence of the previous proposition and the fact that an
R-tree is uniquely path-connected in its weak topology by [BR, Corollary B.20].
2.3 The construction of P1Berk
In this section, we construct P1Berk by glueing together two copies of D(0, 1), which was defined in the previous section, along the following common annulus. Consider the Berkovich spectrum of K⟨T, T−1⟩ := {∑_{n∈Z} a_n T^n | lim_{|n|→∞} |a_n| = 0}, which we denote by S1. Then the points of S1 ⊂ D(0, 1) of type I are the points a ∈ K such that |a| = 1, the points of type II and III correspond to discs D(a, r) with |a| = 1, and the points of type IV correspond to a sequence (D(ai, ri)) with |ai| = 1 for all i ≥ 1. With the help of the involution ι : S1 → S1 given by
|g(T )|ι(x) := |g(1 /T )|x
one can glue two copies of D(0 , 1) together, i.e.
E ∐ E′/(z ∈ E ∼ ι(z) ∈ E′),
where E := D(0 , 1) and E′ := D(0 , 1) . There is also a further way to construct
P1Berk. Let A1Berk denote the Berkovich spectrum of K[T], i.e. A1Berk is the set of all multiplicative seminorms on K[T] extending | |, endowed with the weakest topology such that x ↦ |f|x is continuous for all f ∈ K[T]. Note that A1Berk is homeomorphic to the union ⋃_{R>0} D(0, R) (cf. [BR, §2.1 Equ.(2.1)]). Hence, Berkovich's classification theorem can be extended to A1Berk (cf. [BR, Theorem 2.2]). P1Berk can be seen as the one-point compactification of the locally compact Hausdorff space A1Berk, i.e.

P1Berk = A1Berk ⊔ {∞}.
The extra point ∞ is regarded as a point of type I. Identifying P1Berk with A1Berk ∪ {∞}, we view the open and closed Berkovich discs

D(a, r)− := {x ∈ A1Berk | |T − a|x < r} and D(a, r) := {x ∈ A1Berk | |T − a|x ≤ r}

as subsets of P1Berk.
Lemma 2.3.1. If the intersection of two open balls D(a, r)− and D(b, s)− is non-empty, one of them contains the other.
Proof. Assume that r ≤ s, and let x be an element in the intersection. Since |T − b|a = |a − b| and

|a − b| = |a − b|x ≤ max(|T − a|x, |T − b|x) < s,

a is contained in D(b, s)−. For each y ∈ D(a, r)− we have

|T − b|y = |T − a + a − b|y ≤ max(|T − a|y, |a − b|y) < s,

i.e. y ∈ D(b, s)−.
Proposition 2.3.2. Identifying P1Berk with A1Berk ∪ {∞}, a basis for the open sets of P1Berk is given by the sets of the form

$$D(a, r)^-,\qquad D(a, r)^- \setminus \bigcup_{i=1}^{N} D(a_i, r_i),\qquad\text{and}\qquad \mathbf{P}^1_{\mathrm{Berk}} \setminus \bigcup_{i=1}^{N} D(a_i, r_i),$$

where a, ai ∈ K and r, ri > 0. If desired, one can require that the r, ri belong to |K×|.
Proof. See [BR, Proposition 2.7].
The constructed topological space P1Berk satisfies the following properties:
Proposition 2.3.3.
i) P1Berk is a compact Hausdorff topological space.
ii) Both P1(K) and P1Berk \ P1(K) are dense in P1Berk.
iii) P1Berk is uniquely path-connected.
Proof. See [BR, Proposition 2.6], [BR, Lemma 2.9] and [BR, Lemma 2.10].
Furthermore, we will see that P1Berk and HBerk := P1Berk \ P1(K) have a tree structure as well.
Definition 2.3.4.
i) ΓS ⊂ P1Berk is called a finite subgraph, if there is a finite set S ⊂ HBerk such that

$$\Gamma_S = \bigcup_{x,y \in S} [x, y],$$

where [x, y] denotes the unique path between x and y.
ii) By a vertex set for ΓS, we mean a finite set of points S such that ΓS \ S is a union of open intervals, each of which has two distinct endpoints in ΓS.
iii) Let Γ′, Γ be two finite subgraphs of P1Berk such that Γ′ ⊂ Γ. Then we have a retraction map rΓ,Γ′ : Γ → Γ′, where r(x) is given by the first point of the path [x, p] in Γ′ for a fixed point p ∈ Γ′.
iv) We define the path distance metric ρ on HBerk by ρ(x, y) := ρ(x, y) if x, y ∈ E or x, y ∈ E′, and ρ(x, y) := ρ(x, ζGauss) + ρ(y, ζGauss) if not.
Remark. Every finite subgraph endowed with the induced path distance metric ρ is a finite R-tree. Moreover, [BR, Proposition 2.29] states that HBerk is a complete metric space under ρ. In the next section we will define the retraction map in a more general way and we will see that the retraction map is well-defined, i.e. it is independent of the fixed point p ∈ Γ′. Further,

rΓ,Γ′′ = rΓ′,Γ′′ ◦ rΓ,Γ′

for finite subgraphs Γ′′ ⊂ Γ′ ⊂ Γ.
Proposition 2.3.5. There is a canonical homeomorphism

$$\mathbf{P}^1_{\mathrm{Berk}} \simeq \varprojlim_{\Gamma\in F} \Gamma,$$

where F is the set of all finite subgraphs in P1Berk and P1Berk is equipped with the Berkovich topology.
Proof. See [BR, Theorem 2.21].

3 The Laplacian on the Berkovich projective line
The goal of this chapter is to define a measure-valued Laplacian operator on a class of functions f : U → R ∪ {±∞} for a domain (i.e. open and connected) U ⊂ P1Berk . We will see that the profinite R-tree structure on P1Berk (cf. Proposition 2.3.5) leads directly to the construction of such a Laplacian operator. More precisely, we give in Section 3.1 an extension ∆Γ of the Laplacian introduced by [Zh] on the class of continuous, piecewise C2 functions to a larger class of continuous functions called BDV(Γ) for a finite subgraph Γ ⊂ P1Berk (after [BR]). This class is characterized by the fact that for every Borel measure μ of total mass zero on Γ there is a function f ∈ BDV(Γ) such that
μ = ∆ Γ(f ). We define a class of continuous functions f : P1Berk → R∪{±∞} , denoted by
BDV( P1Berk ), such that the collection of measures {∆Γ(f )} is a coherent system for every
f ∈ BDV(P1Berk). This leads to a unique Borel measure ∆(f) of total mass zero on the inverse limit space P1Berk. In Section 3.2, we give explicit examples of functions contained in BDV(P1Berk) and determine their Laplacians. Next to some natural examples, we will define the Hsia kernel, which leads to further examples of functions in BDV(P1Berk). In particular, the function f(x) := − log_v([g]x) for g ∈ K(T)× can be verified to be contained in BDV(P1Berk). We will calculate that ∆(f) = ∑_{i=1}^m n_i δ_{a_i} if div(g) = ∑_{i=1}^m n_i(a_i), which is known as the Poincaré-Lelong formula.
3.1 Construction and properties of the Laplacian on a subdomain of P1Berk
In this section, we first give a definition of the Laplacian operator on the mentioned classes of continuous functions on finite subgraphs of P1Berk , which were defined in the previous chapter. For the construction we follow [BR]. Different to [BR] we just define the Laplacian operator for finite subgraphs of P1Berk instead for general metrized graphs (cf. [BR, Chapter 3]). However, the definitions and the constructions are totally the same and also hold for metrized graphs.
Definition 3.1.1.
i) An injective length-preserving continuous map γ : [0, L] → Γ is called an isometric path, and we say that γ emanates from p and terminates at q if γ(0) = p and γ(L) = q.
ii) Two isometric paths emanating from p are said to be equivalent if they share a common initial segment.
iii) For each p ∈ Γ we define the projectivized tangent space at p as the set of equivalence classes of isometric paths in Γ emanating from p, and we write Tp(Γ).
Remark. There is a bijection between Tp(Γ) and the 'edges' of Γ emanating from p. We will associate to each element of Tp(Γ) a formal unit tangent vector ~v, and write p + t~v instead of γ(t) for a representative path γ.
Definition 3.1.2. If f : Γ → R is a function, and ~v is a unit tangent vector at p, then we define the (one-sided) derivative of f in the direction ~v to be

$$d_{\vec v}f(p) = \lim_{t\to 0^+} \frac{f(p + t\vec v) - f(p)}{t} = \lim_{t\to 0^+} \frac{f(\gamma(t)) - f(p)}{t},$$

provided the limit exists as a finite number.
Definition 3.1.3.
i) A function f : Γ → R is called piecewise affine, if there is a vertex set Xf for Γ such that f is affine on each edge in Γ \ Xf with respect to an arclength parametrization of that edge. Let CPA(Γ) be the space of continuous, piecewise affine real-valued functions on Γ.
ii) Since d~vf(p) is defined for every f ∈ CPA(Γ), for all p ∈ Γ and ~v ∈ Tp(Γ), we can introduce a Laplacian operator on CPA(Γ) like Chinburg and Rumely did in [CR]:

$$\Delta(f) := \sum_{p\in\Gamma}\Bigl(-\sum_{\vec v\in T_p(\Gamma)} d_{\vec v}f(p)\Bigr)\,\delta_p,$$

where δp is the Dirac unit measure at p. This Laplacian is a map from CPA(Γ) to the space of discrete signed measures on Γ; it can be extended to larger classes of functions, as described below.
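Since the operator above is purely combinatorial, it can be illustrated numerically. The following Python sketch is our own illustration (not taken from the thesis or [BR]); it computes ∆(f) for a piecewise affine function on a finite metrized tree given by its edges and edge lengths, using the fact that for such a function the directional derivative at a vertex along an edge is just the slope towards the neighbouring vertex.

```python
from collections import defaultdict

def cpa_laplacian(edges, values):
    """Discrete Laplacian of a piecewise affine function on a finite metrized graph.

    edges:  list of (u, v, length) with length > 0
    values: dict vertex -> f(vertex); f is assumed affine along every edge
    Returns a dict vertex -> mass of Delta(f) at that vertex,
    i.e. minus the sum of the outgoing directional derivatives at the vertex.
    """
    mass = defaultdict(float)
    for u, v, length in edges:
        slope_uv = (values[v] - values[u]) / length  # derivative at u towards v
        mass[u] -= slope_uv                           # contribution to the mass at u
        mass[v] -= -slope_uv                          # derivative at v towards u is -slope_uv
    return dict(mass)

# Star-shaped tree: centre c joined to leaves a, b by edges of length 1 and 2
edges = [("c", "a", 1.0), ("c", "b", 2.0)]
values = {"c": 0.0, "a": 1.0, "b": 4.0}

masses = cpa_laplacian(edges, values)
print(masses)                   # {'c': -3.0, 'a': 1.0, 'b': 2.0}
print(sum(masses.values()))     # 0.0 -- Delta(f) always has total mass zero
```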
Definition 3.1.4. We call a function f : Γ → R piecewise C2 if there is a vertex set Xf such that f ∈ C2(Γ \ Xf). Let Zh(Γ) be the space of continuous, piecewise C2 functions whose one-sided directional derivatives d~vf(p) exist for all p ∈ Γ and ~v ∈ Tp(Γ).
Zhang has defined in [Zh] the following Laplacian on Zh(Γ):

$$\Delta_{\mathrm{Zh}}(f) := -f''(x)\,dx + \sum_{p\in\Gamma}\Bigl(-\sum_{\vec v\in T_p(\Gamma)} d_{\vec v}f(p)\Bigr)\,\delta_p(x),$$

where f''(x) is taken relative to the arclength parametrization on each segment in the complement of an appropriate vertex set Xf for Γ, i.e. f''(x) = (d²/dt²) f(p + t~v) for x = p + t~v ∈ Γ \ Xf.
Let A := A(Γ) be the Boolean algebra of subsets of Γ generated by the connected open sets, i.e. a subset S ⊂ Γ is in A iff S is a finite disjoint union of sets isometric to open, half-open or closed intervals, where isolated points are considered as degenerate closed intervals. For this Boolean algebra we have the following Mass Formula. Let In(p, S) := {~v ∈ Tp(Γ) | p + t~v ∈ S for all sufficiently small t > 0} be the inward-directed unit vectors at p and Out(p, S) := Tp(Γ) \ In(p, S) the outward-directed unit vectors at p.
Lemma 3.1.5 (Mass Formula). Let S be a set in the Boolean algebra A. Then for each f ∈ Zh(Γ)

$$\Delta_{\mathrm{Zh}}(f)(S) = \sum_{p\in\partial S,\ p\notin S}\ \sum_{\vec v\in \mathrm{In}(p,S)} d_{\vec v}f(p)\ -\ \sum_{p\in\partial S,\ p\in S}\ \sum_{\vec v\in \mathrm{Out}(p,S)} d_{\vec v}f(p).$$

Proof. In [BR, Lemma 3.4] they give a Mass Formula for sets in A(Γ) which are a finite union of closed intervals. It is easy to see that this Mass Formula can be extended to A in the stated way. Let S ∈ A(Γ), i.e. S can be written as a finite disjoint union of points, open, closed and half-open intervals. Moreover, we can write S = (S \ E) ∪ E, where E is a finite disjoint union of points and closed intervals such that S \ E is a finite disjoint union of open intervals. Applying [BR, Lemma 3.4] to E, we get

$$\Delta_{\mathrm{Zh}}(f)(E) = -\sum_{p\in\partial E}\ \sum_{\vec v\in \mathrm{Out}(p,E)} d_{\vec v}f(p).$$

As Γ is also a finite union of closed intervals, the lemma states that

$$\Delta_{\mathrm{Zh}}(f)(\Gamma) = -\sum_{p\in\partial \Gamma}\ \sum_{\vec v\in \mathrm{Out}(p,\Gamma)} d_{\vec v}f(p) = 0,$$

because ∂Γ = ∅. Setting U := S \ E, then Γ \ U is a finite disjoint union of points and closed intervals, so we can also apply [BR, Lemma 3.4] to Γ \ U:

$$\Delta_{\mathrm{Zh}}(f)(\Gamma\setminus U) = -\sum_{p\in\partial U}\ \sum_{\vec v\in \mathrm{In}(p,U)} d_{\vec v}f(p),$$

where we have additionally used that Out(p, Γ \ U) = In(p, U). Since ∂E = ∂S ∩ S and ∂U = ∂S ∩ (Γ \ S), the three equations above imply

$$\Delta_{\mathrm{Zh}}(f)(S) = \Delta_{\mathrm{Zh}}(f)(E) - \Delta_{\mathrm{Zh}}(f)(\Gamma\setminus U) = \sum_{p\in\partial S,\ p\notin S}\ \sum_{\vec v\in \mathrm{In}(p,S)} d_{\vec v}f(p)\ -\ \sum_{p\in\partial S,\ p\in S}\ \sum_{\vec v\in \mathrm{Out}(p,S)} d_{\vec v}f(p).$$
The Mass Formula is the starting point for extending the Laplacian on Zh(Γ) to an even larger class of functions, which is called BDV(Γ).
Definition 3.1.6.
i) Let D(Γ) be the class of all functions on the finite subgraph Γ whose one-sided derivatives exist everywhere, i.e.

D(Γ) := {f : Γ → R | d~vf(p) exists for each p ∈ Γ and ~v ∈ Tp(Γ)}.

ii) For f ∈ D(Γ) we define a finitely additive set function mf on A by

$$m_f(S) := \sum_{p\in\partial S,\ p\notin S}\ \sum_{\vec v\in \mathrm{In}(p,S)} d_{\vec v}f(p)\ -\ \sum_{p\in\partial S,\ p\in S}\ \sum_{\vec v\in \mathrm{Out}(p,S)} d_{\vec v}f(p).$$

iii) We will say that f ∈ D(Γ) is of bounded differential variation, and write f ∈ BDV(Γ), if there is a constant B > 0 such that for any countable collection F of pairwise disjoint sets in A,

$$\sum_{S_i\in F} |m_f(S_i)| \le B.$$
Remark 3.1.7.
i) Since ∂∅ = ∂Γ = ∅, we have mf(∅) = mf(Γ) = 0 for all f ∈ D(Γ). Consequently, mf(Γ \ S) = −mf(S).
ii) Consider a set S in A. If S is open,

$$m_f(S) = \sum_{p\in\partial S,\ p\notin S}\ \sum_{\vec v\in \mathrm{In}(p,S)} d_{\vec v}f(p).$$

If S is closed,

$$m_f(S) = -\sum_{p\in\partial S,\ p\in S}\ \sum_{\vec v\in \mathrm{Out}(p,S)} d_{\vec v}f(p).$$

And in the case that S = {p},

$$m_f(\{p\}) = -\sum_{\vec v\in T_p(\Gamma)} d_{\vec v}f(p).$$

iii) If f1, f2 ∈ D(Γ) and c1, c2 ∈ R, then m_{c1 f1 + c2 f2} = c1 m_{f1} + c2 m_{f2}.
iv) BDV(Γ) is a linear subspace of D(Γ).
Theorem 3.1.8. If f ∈ BDV(Γ), then the finitely additive set function mf extends uniquely to a finite signed Borel measure m*_f of total mass zero on Γ.
Proof. The existence of the extension is proved in [BR, Theorem 3.6]. For the uniqueness, we assume that μ is another finite signed Borel measure of total mass zero on Γ extending mf. Since μ and ∆(f) are extending the set function mf, the measures coincide on the Boolean algebra A(Γ), which is generated by the connected open subsets of Γ. In particular, we have the identity μ(Γ) = 0 = ∆(f)(Γ). Without loss of generality, we can assume that Γ = [a, b] is a closed interval. Consider an open subset T of Γ; then T is the union of at most countably many open intervals in [a, b], i.e. A(Γ) generates the Borel σ-algebra on Γ. Thus, the Theorem of Uniqueness of Measures (cf. [El, Eindeutigkeitssatz 5.6]) states that μ and ∆(f) have to coincide.

Definition 3.1.9. For every f ∈ BDV(Γ), we define the Laplacian ∆(f) to be the measure from Theorem 3.1.8:

$$\Delta(f) := m^*_f.$$
Remark 3.1.10. Conversely, if ν is a finite signed Borel measure of total mass zero on Γ, then there exists a function h ∈ BDV(Γ) such that ∆( h) = ν. This function h is unique up to addition of a real constant.
Proof. See [BR, Corollary 3.12] and [BR, Proposition 3.14 (B)].
Lemma 3.1.11. Let f, g ∈ BDV(Γ) and α, β ∈ R, then
∆( αf + βg ) = α∆( f ) + β∆( g).
Proof. By construction, ∆( αf +βg ) extends the set function mαf +βg and α∆( f )+ β∆( g)
extends αm f + βm g. Due to mαf +βg = αm f + βm g, the uniqueness of the extension in Theorem 3.1.8 implies the equality.
Lemma 3.1.12. Zh(Γ) is a subset of BDV(Γ), and for each f ∈ Zh(Γ) ⊂ BDV(Γ) we have ∆(f) = ∆Zh(f).
Proof. Let f ∈ Zh(Γ) and Xf be a vertex set such that f ∈ C2(Γ \ Xf). By the definition of Zh(Γ), every function in Zh(Γ) belongs to D(Γ), so the directional derivatives exist as real numbers in every point p ∈ Xf. One can show that this implies f'' ∈ L1(Γ \ Xf, dx). Consider a countable family F of pairwise disjoint sets Si ∈ A(Γ). By the Mass Formula, mf(Si) = ∆Zh(f)(Si), and Si \ Xf is again a union of ni ∈ N disjoint open, closed and half-open intervals. Let aij, bij for j = 1, . . . , ni be the endpoints of these segments. Thus,

$$m_f(S_i) = -\sum_{j=1}^{n_i}\int_{a_{ij}}^{b_{ij}} f''(x)\,dx + \sum_{p\in S_i\cap X_f}\ \sum_{\vec v\in T_p(\Gamma)} d_{\vec v}f(p).$$

In sum, we have

$$\sum_{S_i\in F} |m_f(S_i)| = \sum_{S_i\in F}\Bigl|-\sum_{j=1}^{n_i}\int_{a_{ij}}^{b_{ij}} f''(x)\,dx + \sum_{p\in S_i\cap X_f}\ \sum_{\vec v\in T_p(\Gamma)} d_{\vec v}f(p)\Bigr| \le \sum_{S_i\in F}\Bigl(\sum_{j=1}^{n_i}\int_{a_{ij}}^{b_{ij}} |f''(x)|\,dx + \sum_{p\in S_i\cap X_f}\ \sum_{\vec v\in T_p(\Gamma)} |d_{\vec v}f(p)|\Bigr) \le \int_{\Gamma\setminus X_f} |f''(x)|\,dx + \sum_{p\in X_f} |m_f(\{p\})| < \infty.$$

Therefore, f ∈ BDV(Γ) with B(f) := ∫_{Γ\Xf} |f''(x)| dx + ∑_{p∈Xf} |mf({p})|, and consequently the finite signed Borel measure ∆(f) on Γ exists. Further, we show that ∆Zh(f) extends mf on A(Γ). It suffices to consider an isolated point p ∈ Γ and an open interval (c, d) contained in an edge of Γ \ Xf. Clearly, ∆Zh(f)({p}) = mf({p}) is true for all p ∈ Γ. And,

$$\Delta_{\mathrm{Zh}}(f)((c, d)) = -\int_{c}^{d} f''(x)\,dx = f'(c) - f'(d) = m_f((c, d)).$$

Since the extension of mf to a finite signed Borel measure on Γ of total mass zero is unique by Theorem 3.1.8, we have the identity ∆(f) = ∆Zh(f).
This definition makes it possible to define a Laplacian operator on the Berkovich projective line in the following way.
Definition 3.1.13. A domain U in P1Berk is a non-empty connected open subset of
P1Berk . We call a domain V simple if ∂V is a non-empty finite set {x1, . . . , x m} ⊂ HBerk ,where each xi is of type II or III. A strict simple domain is a simple domain whose boundary points are all of type II.
Remark 3.1.14.
i) Different from [BR], we will say that U0 is a subdomain of an open set U if U0 is a domain and U0 ⊂ U. If we additionally require that the closure of U0 is contained in U, this is stated explicitly.
ii) The strict simple domains (resp. simple domains) form a basis for the Berkovich topology on P1Berk (cf. [BR, §2.6 p.42]). Since P1Berk is a compact Hausdorff space, the compact subsets are just the closed subsets. The closures of the simple domains form a fundamental system of the closed, and so of the compact, neighborhoods for the Berkovich topology on P1Berk.
Definition 3.1.15. Let C(Ū) be the space of continuous functions f : Ū → R, where Ū is the closure of a domain U ⊂ P1Berk.
Recall from §2.3 that a finite subgraph of P1Berk is the union of the unique paths between a finite set of points in HBerk.
Lemma 3.1.16. Let Γ be a finite subgraph of P1Berk . Then i) Γ is a closed subset of P1Berk .ii) The metric topology on Γ coincides with the relative (i.e. subspace) topology induced from P1Berk .Proof. See [BR, Lemma 5.2]. Now we generalize the retraction map, which was only defined for finite subgraphs in §2.3, to every non-empty connected closed subset E ⊂ P1Berk equipped with the relative topology. Recall that a set is connected under the topology of P1Berk if and only if it is uniquely path-connected.
Definition 3.1.17. We define the retraction map rE : P1Berk → E by setting rE (x) as the first point p in E on the path from x to a point p0 ∈ E.
Remark. By construction, we have rE (x) = x for all x ∈ E.
Lemma 3.1.18. The map rE : P1Berk → E is well-defined.
Proof. Since P1Berk is path-connected, rE(x) exists for each x ∈ P1Berk. Furthermore, we have to show that the definition is independent of p0. Let x ∉ E, let p′0 be another fixed point in E and p′ the first point in E on the path from x to p′0. In the case of p ≠ p′, there is a path from p′ to p in E, since E is path-connected. But we also get a path from p′ to p which is not contained in E, by going from p′ to x and from x to p. This contradicts the fact that P1Berk is uniquely path-connected.
Lemma 3.1.19. For each non-empty closed connected subset E ⊂ P1Berk , the retraction map rE : P1Berk → E is continuous. Proof. See [BR, Lemma 5.3].
3.1.20. If E1 ⊂ E2 ⊂ P1Berk are two non-empty connected closed subsets, the retraction map rE1 induces a retraction map rE2,E 1 : E2 → E1 such that
rE1 (x) = rE2,E 1 (rE2 (x))
for all x ∈ P1Berk . If E1 and E2 have the relative topology, rE2,E1 is continuous as well.
Let U ⊂ P1Berk be a domain, and for each finite subgraph Γ ⊂ U we consider a finite signed Borel measure μΓ on Γ.
Definition 3.1.21. A system of measures {μΓ} on the finite subgraphs of U is called
coherent if i) For each pair of finite subgraphs Γ1, Γ2 of U with Γ1 ⊂ Γ2 we have
(rΓ2,Γ1 )∗(μΓ2 ) = μΓ1 .
ii) There is a constant B such that for each finite subgraph Γ ⊂ U
|μΓ|(Γ) ≤ B.
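To make condition i) concrete, the following minimal Python sketch (a toy illustration of our own, with invented vertex sets, retraction and masses, not data from [BR]) checks the compatibility-under-pushforward condition for discrete point-mass measures on two nested finite vertex sets.

from math import isclose

def pushforward(measure, retraction):
    # (r)_* of a discrete measure: sum the masses over each fibre of the retraction
    out = {}
    for vertex, mass in measure.items():
        image = retraction[vertex]
        out[image] = out.get(image, 0.0) + mass
    return out

# Toy data: Gamma1 = {p, q} sits inside Gamma2 = {p, q, s, t}; the retraction
# r_{Gamma2,Gamma1} collapses the attached segment {s, t} onto p.
r_21 = {"p": "p", "q": "q", "s": "p", "t": "p"}
mu_2 = {"p": 0.25, "q": -0.5, "s": 0.125, "t": 0.125}   # measure on Gamma2, total mass 0
mu_1 = {"p": 0.5, "q": -0.5}                            # measure on Gamma1, total mass 0

assert pushforward(mu_2, r_21) == mu_1        # condition i) holds for this pair
assert isclose(sum(mu_2.values()), 0.0) and isclose(sum(mu_1.values()), 0.0)

Condition ii) would additionally require a single bound B on the total variations |μΓ|(Γ) over all finite subgraphs at once, which cannot be checked on a single pair.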
3.1.22. For any two graphs Γ1, Γ2 there is a unique minimal finite subgraph Γ3 containing Γ1 and Γ2. Hence the collection of finite subgraphs Γ ⊂ U forms a directed set under containment. For every finite signed Borel measure μ on Ū, the system of measures {μΓ} on the finite subgraphs of U given by
μΓ := (r_{Ū,Γ})∗(μ)
for each Γ ⊂ U is coherent. There is a 1-1 correspondence between finite signed Borel measures μ on Ū and coherent systems of finite signed Borel measures on finite subgraphs of U:
Proposition 3.1.23. If {μΓ} is a coherent system of measures in U, the map
Λ(F) = lim_{−→ Γ} ∫_Γ F(x) dμΓ(x)
defines a bounded linear functional on C(Ū), and there is a unique Borel measure μ on Ū such that
Λ(F) = ∫_Ū F(x) dμ(x)
for each F ∈ C(Ū). This measure is characterized by the fact that
(r_{Ū,Γ})∗(μ) = μΓ
for each finite subgraph Γ ⊂ U. In particular, if μ0 is a finite signed Borel measure on Ū, and we put μΓ = (r_{Ū,Γ})∗(μ0) for each finite subgraph Γ ⊂ U, then μ0 is the unique measure associated to the coherent system {μΓ} by the construction above.
Proof. See [BR, Proposition 5.10].
Using coherent systems of measures and the Laplacian on metrized graphs, we are able to construct a measure-valued Laplacian operator on a suitable class of functions
f : U → R ∪ {±∞} .
Definition 3.1.24. Let U ⊂ P1Berk be a domain. We will say that a function f : U →
R ∪ {±∞} is of bounded differential variation on U , and write f ∈ BDV( U ), if i) f |Γ ∈ BDV(Γ) for each finite subgraph Γ ⊂ U , and ii) there is a constant B(f ) such that for each finite subgraph Γ ⊂ U ,
|∆Γ(f )|(Γ) ≤ B(f ).
Remark. Due to Γ ⊂ U ∩ HBerk for every finite subgraph Γ ⊂ U , there is nothing required on the behavior of f on U ∩ P1(K) by this definition, and so f may be undefined at some points of U ∩ P1(K). We will use the notation C(U ) ∩ BDV( U ) for the space of functions f ∈ C (U ) whose restrictions to U belong to BDV( U ).
Proposition 3.1.25. If f ∈ BDV( U ), the system of measures {∆Γ(f )}Γ⊂U is coherent. Proof. Let Γ1, Γ2 be a pair of finite subgraphs of U with Γ1 ⊂ Γ2. Since Γ2 can be obtained by sequentially attaching a finite number of edges to Γ1, it suffices to consider the case where Γ2 = Γ 1 ∪ T for an attached segment T at a point p ∈ Γ1. As a segment,
T is a finite subgraph as well. We have to show that for every Borel subset e ⊂ Γ1
∆Γ2 (f )( r−1Γ2,Γ1 (e)) = ∆ Γ1 (f )( e).
At first, we consider the case that e ⊂ Γ1\{p}. Due to rΓ2,Γ1(q) = p ∉ e for all q ∈ Γ2\Γ1, we have r_{Γ2,Γ1}^{-1}(e) ⊂ Γ1. Since rΓ2,Γ1 is the identity on Γ1, r_{Γ2,Γ1}^{-1}(e) = e. Thus,
∆Γ2(f)(r_{Γ2,Γ1}^{-1}(e)) = ∆Γ2(f)(e) = ∆Γ1(f)(e),
where the last identity follows by the definition of the Laplacian on finite subgraphs. Due to the additivity of the Laplacian, it remains to consider the Borel set {p} ⊂ Γ1.We know that rΓ2,Γ1 (q) = p for all q ∈ T and rΓ2,Γ1 |Γ1 = id Γ1 . Due to Γ2 = Γ 1 ∪ T ,
r−1Γ2,Γ1 ({p}) = T.
Since T is just a closed interval, we have seen in Remark 3.1.7 that
∆Γ2(f)(T) = m_f(T) = −∑_{~v∈Out(p,T)} d_{~v}(f)(p),
where mf is the set function on the Boolean algebra A(Γ 2). By Out( p, T ) = Tp(Γ 1)
and the two foregoing equations,
∆Γ2(f)(r_{Γ2,Γ1}^{-1}({p})) = −∑_{~v∈T_p(Γ1)} d_{~v}(f)(p) = ∆Γ1(f)(p).
By the definition of the space BDV( U ), there is a constant B such that |∆Γ(f )|(Γ) ≤ B
for all finite subgraphs Γ ⊂ U , and hence {∆Γ(f )}Γ⊂U is coherent.
Definition 3.1.26. Let U ⊂ P1Berk be a domain and f ∈ BDV(U).
i) We define the complete Laplacian ∆Ū(f) as the unique finite signed Borel measure on Ū associated to the coherent system {∆Γ(f)}Γ⊂U from Proposition 3.1.23, characterized by the property that
(r_{Ū,Γ})∗(∆Ū(f)) = ∆Γ(f)
for each finite subgraph Γ ⊂ U.
ii) We call the restriction of ∆Ū(f) to U the Laplacian
∆U(f) := ∆Ū(f)|U .
iii) The Boundary Derivative
∆∂U(f) := ∆Ū(f)|∂U
is the restriction of ∆Ū(f) to ∂U.
Remark 3.1.27. i) ∆U(f) is the Borel measure on Ū with
∆U(f)(S) = ∆Ū(f)(S ∩ U)
for each Borel set S ⊂ Ū.
ii) By construction,
∆Ū(f) = ∆U(f) + ∆∂U(f).
iii) When U = P1Berk, we have
∆Ū(f) = ∆U(f),
and we will write ∆(f) for ∆P1Berk(f).
Before we see some examples, we give some important properties of the (complete) Laplacian:
Lemma 3.1.28. Let U ⊂ P1Berk be domain, f, g ∈ BDV( U ) and α, β ∈ R. Then
∆U (αf + βg ) = α∆U (f ) + β∆U (g).
Proof. By the definition of the complete Laplacian,
∆Γ(f ) = ( rU , Γ)∗(∆ U (f ))
and
∆Γ(g) = ( rU , Γ)∗(∆ U (g))
are satisfied for every finite subgraph Γ ⊂ U . Since the Laplacian ∆Γ is linear by Lemma 3.1.11, it follows that
∆Γ(αf + βg ) = α∆Γ(f ) + β∆Γ(g)= α(rU , Γ)∗(∆ U (f )) + β(rU , Γ)∗(∆ U (g)) = ( rU , Γ)∗(α · ∆U (f ) + β · ∆U (g))
for every finite subgraph Γ ⊂ U . Due to the uniqueness in Proposition 3.1.23, the Laplacians have to coincide.
Proposition 3.1.29. Suppose U1 ⊂ U2 ⊂ P1Berk are domains, and f ∈ BDV( U2). Then
f |U1 ∈ BDV( U1) and
∆U1 (f ) = ∆ U2 (f )|U1 , ∆U1 (f ) = ( rU2,U 1 )∗(∆ U2 (f )) .
Proof. See [BR, Proposition 5.26].
Proposition 3.1.30. Let U ⊂ P1Berk be a domain, and let V1, . . . , V r ⊂ U be subdomains such that U = ⋃ri=1 Vi. Then for any function f , we have f ∈ BDV( U ) iff f |Vi ∈
BDV( Vi) for all i = 1 , . . . , r. Moreover, in the latter case, for each i = 1 , . . . , r
∆Vi (f ) = ∆ U (f )|Vi , ∆Vi (f ) = ( rU ,V i )∗(∆ U (f )) .
Proof. See [BR, Proposition 5.27].
3.2 Examples and the Hsia kernel
In this section, we give explicit examples of functions having bounded differential variation and calculate their Laplacians. Besides the obvious example of a constant function, we consider the composition f0 ◦ rΓ for a finite subgraph Γ and a function f0 ∈ BDV(Γ). For further examples we will define the Hsia kernel. The Hsia kernel δ(x, y)∞ for x, y ∈ A1Berk extends the usual distance |x − y| on K, and the function − log_v(δ(x, y)∞) is a generalization of the usual potential theory kernel − log_v(|x − y|) on K. We will also give a definition for an analogous kernel δ(x, y)ζ for an arbitrary ζ ∈ P1Berk which is called the generalized Hsia kernel. The functions of the form f(x) = − log_v(δ(x, y)ζ)
for fixed y, ζ ∈ P1Berk belong to BDV( P1Berk ) and they make it possible to verify a version of the Poincaré-Lelong formula at the end of this chapter. Moreover, these functions are used for the analogue Poisson formula in Chapter 4.
Example 3.2.1. If f (x) = C on P1Berk for a constant C ∈ R, then f ∈ BDV( P1Berk )
and
∆( f ) = 0 .
Proof. Clearly, f ∈ Zh(Γ) ⊂ BDV(Γ) and
∆(f) = ∆_Zh(f) = −f''(x)dx + ∑_{p∈Γ} ( −∑_{~v∈T_p(Γ)} d_{~v}f(p) ) δ_p(x) = 0
for each finite subgraph Γ ⊂ P1Berk . Due to the uniqueness in Proposition 3.1.23, we have ∆( f ) = 0 .
Example 3.2.2. If f = f0 ◦ rΓ0 for a finite subgraph Γ0 of P1Berk and f0 ∈ BDV(Γ 0),then f ∈ BDV( P1Berk ) and
∆( f ) = ∆ Γ0 (f0).
Proof. First, we show that f is a function in BDV( P1Berk ). We can determine the one-sided derivative of f for all p ∈ P1Berk and all directions ~v. In the case of p ∈ Γ0
and ~v ∈ Tp(Γ 0), we have d~vf (p) = d~vf0(p). If p ∈ P1Berk and ~v /∈ Tp(Γ 0), one can calculate that d~vf (p) = 0 . The case that p / ∈ Γ0 and ~v ∈ Tp(Γ 0) is not possible, because
p = lim t→0+ γ(t), γ is continuous and Γ0 is closed in P1Berk . Hence, f ∈ D (P1Berk ).Furthermore, they imply for each finite subgraph Γ of P1Berk and S ∈ A (Γ)
m_f(S) = ∑_{p∈∂S, p∉S} ∑_{~v∈In(p,S)} d_{~v}f(p) − ∑_{p∈∂S, p∈S} ∑_{~v∈Out(p,S)} d_{~v}f(p)
= ∑_{p∈∂S∩Γ0, p∉S} ∑_{~v∈In(p,S)∩T_p(Γ0)} d_{~v}f0(p) − ∑_{p∈∂S∩Γ0, p∈S} ∑_{~v∈Out(p,S)∩T_p(Γ0)} d_{~v}f0(p).
In the formula above, we may write In(p, S ∩ Γ0) instead of In(p, S) ∩ T_p(Γ0) and Out(p, S ∩ Γ0) instead of Out(p, S) ∩ T_p(Γ0). Moreover, we will see that we can replace ∂S ∩ Γ0 by ∂(S ∩ Γ0). Let p ∈ ∂S ∩ Γ0 and p ∉ S such that In(p, S ∩ Γ0) ≠ ∅. Hence, there is a ~v ∈ In(p, S ∩ Γ0) such that for a representative γ we have γ(t) ∈ S ∩ Γ0 for all sufficiently small t > 0, and so p = lim_{t→0+} γ(t) lies in the closure of S ∩ Γ0. Since we have required that p ∉ S ∩ Γ0, p belongs to ∂(S ∩ Γ0). Conversely, if p ∈ ∂(S ∩ Γ0) ⊂ Γ0 and p ∉ S such that In(p, S ∩ Γ0) ≠ ∅, then there is a continuous map γ : [0, L] → Γ0 such that γ(t) ∈ S for all sufficiently small t > 0. Due to p = lim_{t→0+} γ(t) lying in the closure of S, which is contained in Γ, we get p ∈ ∂S ∩ Γ0.
Similar arguments show the same for the sum over Out( p, S ) ∩ Tp(Γ 0) respectively
Out( p, S ∩ Γ0).Consequently, mf (S) = mf0 (S ∩ Γ0), where S ∩ Γ0 ∈ A (Γ 0). Since f0 ∈ BDV(Γ 0), there is a constant B such that
∑_{S_i∈F} |m_f(S_i)| = ∑_{S_i∈F} |m_f0(S_i ∩ Γ0)| ≤ B
for any countable collection F of pairwise disjoint sets in A(Γ) . Therefore, f |Γ is a function in BDV(Γ) for each finite subgraph Γ of P1Berk . It remains to show the existence of a constant B(f ) such that |∆Γ(f )|(Γ) ≤ B(f ) for every finite subgraph
Γ. For each finite subgraph Γ containing Γ0, f |Γ is constant on branches off Γ0, and
f |Γ0 = ( f0 ◦ rΓ0 )|Γ0 = f0, and so
∆Γ(f ) = ∆ Γ0 (f ) = ∆ Γ0 (f0)
by the definition of the Laplacian on finite subgraphs (Theorem 3.1.8). Due to f0 ∈
BDV(Γ 0), there is a constant B(f0) such that
|∆Γ(f )|(Γ) = |∆Γ0 (f0)|(Γ 0) ≤ B(f0) =: B(f ).
So we have proved that f is a function in BDV( P1Berk ). Proposition 3.1.23 states that
∆( f ) = ∆ Γ0 (f0).
For further examples we will introduce the (generalized) Hsia kernel.
Definition 3.2.3. i) For x ∈ A1Berk corresponding to a sequence of nested discs
{D(ai, r i)}, diam ∞(x) := lim i→∞ ri is called the diameter of x.ii) If x, y ∈ A1Berk , let x ∨∞ y be the point where [x, ∞] and [y, ∞] first meet. We define the Hsia kernel
δ(x, y )∞ := diam ∞(x ∨∞ y).
Remark. i) If x corresponds to a sequence of nested discs {D(ai, ri)} and y to {D(bi, si)}, then
δ(x, y)∞ = lim_{i→∞} max(ri, si, |ai − bi|).
Clearly, δ(x, x)∞ = diam∞(x) for each x ∈ A1Berk, and δ(x, y)∞ = |x − y| for all x, y ∈ A1(K). If x, y are of type I, II or III, corresponding to D(a, r) and D(b, s),
δ(x, y)∞ = max(r, s, |a − b|) = sup_{z∈D(a,r), w∈D(b,s)} |z − w|.
ii) The definitions above can be extended to P1Berk \D (0 , 1) ∼= D(0 , 1) − by setting
diam ∞(x) := diam ∞(ψ(x)) ,
where ψ is the homeomorphism from Chapter 2, which maps t to 1/t for all
t ∈ P1(K)\D(0 , 1) .
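The formula δ(x, y)∞ = max(r, s, |a − b|) from part i) of the Remark is easy to evaluate explicitly. The following Python sketch (a toy illustration of our own; the representation of a point of type I, II or III as a pair (a, r) with rational centre, and the helper p_abs, are choices made only for this example) computes the Hsia kernel over Qp.

from fractions import Fraction

def p_abs(x, p):
    # p-adic absolute value |x|_p of a rational number (with |0|_p = 0)
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return Fraction(1, p) ** v

def hsia_infty(x, y, p):
    # delta(x, y)_infty = max(r, s, |a - b|_p) for x = D(a, r), y = D(b, s)
    (a, r), (b, s) = x, y
    return max(Fraction(r), Fraction(s), p_abs(Fraction(a) - Fraction(b), p))

p = 5
zeta_gauss = (0, 1)                       # the Gauss point, corresponding to D(0, 1)
print(hsia_infty((3, 0), zeta_gauss, p))  # 1: the join of a type I point with the Gauss point
print(hsia_infty((0, 0), (25, 0), p))     # 1/25 = |0 - 25|_5, the usual p-adic distance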
Definition 3.2.4. i) Let Γ be a finite subgraph. For fixed y, z ∈ Γ, [BR, §3.3] tells us that there is a unique function jz (x, y ) ∈ CPA(Γ) on Γ such that
∆x(jz (x, y )) = δy(x) − δz (x) and jz (z, y ) = 0
for all x ∈ Γ. We call jz(x, y) the potential kernel. ii) The metric ρ : HBerk × HBerk → R≥0 defined by
ρ(x, y ) := 2 log v(diam ∞(x ∨∞ y)) − log v(diam ∞(x)) − log v(diam ∞(y)) ,
is called the path metric . We call the topology introduced by this metric the
strong topology of HBerk .
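For instance (a small check directly from this definition; we write ζ0,r for the point corresponding to the disc D(0, r) only in this illustration): for x = ζ0,r and y = ζ0,s with 0 < r ≤ s ≤ 1, the paths [x, ∞] and [y, ∞] first meet in ζ0,s, so diam∞(x ∨∞ y) = s and
ρ(x, y) = 2 log_v(s) − log_v(r) − log_v(s) = log_v(s) − log_v(r) = log_v(s/r) ≥ 0.
In particular ρ(ζ0,r, ζGauss) = − log_v(r), which tends to ∞ as r → 0, so ρ is unbounded on HBerk even though P1Berk is compact in the Berkovich topology.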
3.2.5. Let Γ be a finite subgraph, so Γ ⊂ HBerk . For x, y, z ∈ Γ, let w := wz (x, y ) be the point where the path from x to z and the path from y to z first meet. One can show that
jz(x, y) = ρ(z, w).
For a fixed z ∈ HBerk we write jz(x, y)Γ for the potential kernel on Γ, where Γ varies over the finite subgraphs of P1Berk containing z. The functions {jz(x, y)Γ} cohere to give a well-defined function jz(x, y) on HBerk × HBerk (cf. [BR, §4.2]). Further, we can extend jz to P1Berk × P1Berk by
jz(x, y) := jz(rΓ(x), rΓ(y))Γ if (x, y) ∉ Diag(K), and jz(x, y) := ∞ if (x, y) ∈ Diag(K),
where Γ is any finite subgraph containing z and wz(x, y). Explicitly, for x, y ∈ P1(K) with x ≠ y we have jz(x, y) = ρ(z, wz(x, y)). If z, ζ ∈ HBerk, then
jζ (x, y ) = jz (x, y ) − jz (x, ζ ) − jz (ζ, y ) + jz (ζ, ζ ).
Proposition 3.2.6 (Retraction Formula) . Let Γ be a finite subgraph of P1Berk and
z, x ∈ Γ. Then for any y ∈ P1Berk
jz (x, y ) = jz (x, r Γ(y)) Γ.
Proof. Since x, z ∈ Γ, the path [x, z ] lies in Γ. Clearly, wz (x, y ) ∈ [x, z ] ⊂ Γ, and so
jz (x, y ) = jz (x, r Γ(y)) Γ.
Definition 3.2.7. i) We call the function ‖·, ·‖ : P1Berk × P1Berk → [0, 1],
‖x, y‖ := q_v^{−jζGauss(x,y)},
the spherical kernel.
ii) For a fixed ζ ∈ P1Berk, we define the generalized Hsia kernel to be the function P1Berk × P1Berk → R ∪ {∞} given by
δ(x, y)ζ := ‖x, y‖ / (‖x, ζ‖ ‖y, ζ‖)
if ζ ∈ P1Berk \P1(K), and by
δ(x, y)ζ := ‖x, y‖ / (‖x, ζ‖ ‖y, ζ‖) if x, y ∈ P1Berk \{ζ}, and δ(x, y)ζ := ∞ if x = ζ or y = ζ,
if ζ ∈ P1(K).
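As a quick illustration of the spherical kernel (a direct computation using only the identity jz(x, y) = ρ(z, wz(x, y)) from 3.2.5): for two distinct type I points x, y in the closed unit disc, the paths [x, ζGauss] and [y, ζGauss] first meet in the point corresponding to D(x, |x − y|), so jζGauss(x, y) = ρ(ζGauss, wζGauss(x, y)) = − log_v |x − y| and hence
‖x, y‖ = q_v^{−jζGauss(x,y)} = |x − y|.
So on the unit disc the spherical kernel restricts to the usual absolute value, while ‖x, y‖ ≤ 1 holds everywhere by definition.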
Remark. i) Since ‖x, y ‖ = 0 if and only if x = y ∈ P1(K), the generalized Hsia kernel is well-defined in both cases. ii) The generalized Hsia kernel has the following geometric interpretation (cf. [BR, §4.4]): Let x, y ∈ P1Berk and w := x ∨ζ y, i.e. the point where the paths [x, ζ ] and
[y, ζ ] first meet. Then
δ(x, y )ζ = diam ζ (w),
where diamζ(x) := δ(x, x)ζ for all x ∈ P1Berk. One has the identity
diamζ(x) = (1/‖ζ, ζ‖) · q_v^{−ρ(x,ζ)}
for each ζ ∈ HBerk and x ∈ P1Berk (cf. [BR, §4.4 Equation (4.32)]).
Proposition 3.2.8. Fix ζ ∈ P1Berk .i) The generalized Hsia kernel δ(x, y )ζ : P1Berk × P1Berk → R ∪ {±∞} is nonnegative, symmetric, upper semicontinuous as a function of two variables and continuous in each variable separately. We have δ(x, y )ζ = 0 if and only if x = y ∈ P1(K).ii) If ζ ∈ HBerk , then δ(x, y )ζ is bounded and valued in [0 , 1/‖ζ, ζ ‖].iii) If ζ ∈ P1(K), then δ(x, y )ζ is unbounded and δ(x, y )ζ = ∞ if and only if x = ζ
or y = ζ.iv) For all x, y, z ∈ P1Berk ,
δ(x, y)ζ ≤ max(δ(x, z)ζ, δ(y, z)ζ),
with equality if δ(x, z)ζ ≠ δ(y, z)ζ.
v) For each a ∈ P1Berk and r > 0, the ‘open ball’
B(a, r)ζ^− := {x ∈ P1Berk | δ(x, a)ζ < r}
is connected and open in the Berkovich topology. It is empty if r ≤ diamζ(a), and coincides with an open ball B(b, r)ζ^− for some b ∈ P1(K) if r > diamζ(a). Likewise, the ‘closed ball’
B(a, r)ζ := {x ∈ P1Berk | δ(x, a)ζ ≤ r}
is connected and closed in the Berkovich topology. It is empty if r < diamζ(a), and coincides with B(b, r)ζ for some b ∈ P1(K) if r > diamζ(a) or if r = diamζ(a) and a is of type II or III. If r = diamζ(a) and a is of type I or IV, then B(a, r)ζ = {a}.
Proof. See [BR, Proposition 4.10].
Example 3.2.9. We fix y, ζ ∈ P1Berk , and consider the function f : P1Berk → R ∪ {±∞}
defined by f (x) = − log v(δ(x, y )ζ ). One can show, that f ∈ BDV( P1Berk ) and
∆( − log v(δ(x, y )ζ )) = δy(x) − δζ (x).
Proof. Let Γ be a finite subgraph of P1Berk . By Remark 3.2.5, we may assume that the Gauss point ζGauss is contained in Γ. Set ˜y := rΓ(y) and ˜ζ := rΓ(ζ). By the definition of the generalized Hsia kernel and by Proposition 3.2.6,
− log v(δ(x, y )ζ ) = jζGauss (x, y ) − jζGauss (x, ζ ) − jζGauss (y, ζ )= jζGauss (x, ˜y) − jζGauss (x, ˜ζ) − jζGauss (y, ζ )
for all x ∈ Γ. Since the potential kernel on Γ for fixed ˜y respectively ˜ζ belongs to
CPA(Γ) , f |Γ ∈ BDV(Γ) . Due to the definition of the potential kernel,
∆Γ(f ) = ∆ Γ(jζGauss (·, ˜y)) − ∆Γ(jζGauss (·, ˜ζ)) − ∆Γ(jζGauss (y, ζ )) = δ˜y − δζGauss − (δ˜ζ − δζGauss ) − 0= δ˜y − δ˜ζ .
Therefore, |∆Γ(f )|(Γ) = ( δ˜y + δ˜ζ )(Γ) = 2 < ∞, and so f belongs to BDV( P1Berk ).Since ∆Γ(f ) = δ˜y − δ˜ζ = rΓ∗(δy − δζ ) for each Γ, we get ∆( f ) = δy − δζ by Definition 3.1.26.
Example 3.2.10 (Poincaré-Lelong formula). Let 0 ≠ g ∈ K(T) with div(g) = ∑_{i=1}^m ni(ai). We consider the function f : P1Berk → R ∪ {±∞} defined by f(x) := − log_v([g]x). Then
f ∈ BDV(P1Berk) and
∆(− log_v([g]x)) = ∑_{i=1}^m ni δ_{ai}(x).
Proof. Let ζ ∈ P1Berk be disjoint from the support of div(g), i.e. ζ ∉ {a1, . . . , am}. The decomposition formula for the generalized Hsia kernel (cf. [BR, Corollary 4.14]) tells us that there is a constant Cζ such that
[g]x = Cζ · ∏_{i=1}^m δ(x, ai)ζ^{ni}.
Hence,
f(x) = − log_v(Cζ) + ∑_{i=1}^m (−ni) log_v(δ(x, ai)ζ).
Due to Example 3.2.9, f is a function in the vector space BDV(P1Berk), and
∆(f) = ∆(− log_v(Cζ)) + ∑_{i=1}^m ni ∆(− log_v(δ(x, ai)ζ)) = 0 + ∑_{i=1}^m ni δ_{ai}(x) − ∑_{i=1}^m ni δζ(x) = ∑_{i=1}^m ni δ_{ai}(x),
where the last equation is true because of ∑_{i=1}^m ni = 0.
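As a simple instance of the formula (a sanity check of our own, not an example taken from [BR]): for g(T) = T we have div(g) = (0) − (∞), so m = 2 with n1 = 1, n2 = −1, and
∆(− log_v([T]x)) = δ0(x) − δ∞(x).
This parallels the classical fact that (1/2π) ∆ log|z| = δ0 on C, with the two poles of the divisor playing the roles of source and sink.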
Example 3.2.11 (The potential function) . Let ν be a finite signed Borel measure on P1Berk . We define the potential function in the following way: If ζ ∈ HBerk or
ζ / ∈ supp( ν), we set
uν(x, ζ) := ∫ − log_v(δ(x, y)ζ) dν(y).
If ζ ∈ P1(K) ∩ supp( ν), then the potential function is defined by
uν (x, ζ ) := uν (x, ζ Gauss ) + ν(P1Berk ) log v(‖x, ζ ‖).
One can show that uν (x, ζ ) ∈ BDV( P1Berk ) and
∆( uν (x, ζ )) = ν − ν(P1Berk )δζ (x).
Proof. Let Γ be any finite subgraph containing ζGauss . By the Retraction formula,
Proposition 3.2.6, one have the identity jζGauss (x, y ) = jζGauss (x, r Γ(y)) for all y ∈ P1Berk
and x ∈ Γ.At first, we consider the case that ζ ∈ HBerk or ζ / ∈ supp( ν). If ζ ∈ HBerk , we can enlarge
Γ such that ζ ∈ Γ. By [BR, Proposition 3.3], jζGauss(·, ζ) : P1Berk → R is bounded, and so
Cζ := ∫ jζGauss(y, ζ) dν(y) < ∞.
If ζ ∉ supp(ν), then jζGauss(·, ζ) is real valued on supp(ν), because jζGauss(y, z) ∉ R iff y = z ∈ P1(K). Furthermore, jζGauss(·, ζ) is continuous on the compact set supp(ν) by [BR, Proposition 3.3]. Consequently, jζGauss(·, ζ) is bounded on supp(ν), and so we can set
Cζ := ∫ jζGauss(y, ζ) dν(y) < ∞
as well. For all x ∈ Γ, we have
uν(x, ζ) = ∫ − log_v(δ(x, y)ζ) dν(y)
= ∫ (jζGauss(x, y) − jζGauss(x, ζ) − jζGauss(y, ζ)) dν(y)
= ∫ jζGauss(x, rΓ(y)) dν(y) − ∫ jζGauss(x, rΓ(ζ)) dν(y) − ∫ jζGauss(y, ζ) dν(y)
= ∫_Γ jζGauss(x, t) d(rΓ∗(ν))(t) − ν(P1Berk) jζGauss(x, rΓ(ζ)) − Cζ.
Now, consider ζ ∈ P1(K) ∩ supp(ν). Since ‖y, ζGauss‖ = q_v^{−jζGauss(y,ζGauss)} = 1 for every y ∈ P1Berk, the Retraction formula implies − log_v(δ(x, y)ζGauss) = jζGauss(x, rΓ(y)) for all x ∈ Γ. Hence,
uν(x, ζ) = uν(x, ζGauss) + ν(P1Berk) log_v(‖x, ζ‖)
= ∫ jζGauss(x, rΓ(y)) dν(y) − ν(P1Berk) jζGauss(x, rΓ(ζ))
= ∫_Γ jζGauss(x, t) d(rΓ∗ν)(t) − ν(P1Berk) jζGauss(x, rΓ(ζ)).
Thus, we can calculate the Laplacian jointly for both cases. By [BR, Proposition 3.11],
h(x) := ∫_Γ − log_v(δ(x, t)ζGauss) d(rΓ∗ν)(t) ∈ BDV(Γ) and ∆Γ(h) = rΓ∗ν − (rΓ∗ν)(Γ) δζGauss.
We already know that h̃(x) := jζGauss(x, rΓ(ζ)) ∈ CPA(Γ) and ∆Γ(h̃) = δ_{rΓ(ζ)} − δζGauss.
Together, we get uν(x, ζ) ∈ BDV(Γ) and
∆Γ(uν(x, ζ)) = ∆Γ(h) − ν(P1Berk) ∆Γ(h̃)
= rΓ∗ν − (rΓ∗ν)(Γ) δζGauss − ν(P1Berk) δ_{rΓ(ζ)} + ν(P1Berk) δζGauss
= rΓ∗ν − ν(P1Berk) δζGauss − ν(P1Berk) δ_{rΓ(ζ)} + ν(P1Berk) δζGauss
= rΓ∗(ν − ν(P1Berk) δζ).
The potential function uν(x, ζ) is a function in BDV(P1Berk) by the inequality
|∆Γ(uν(x, ζ))|(Γ) = |rΓ∗(ν − ν(P1Berk) δζ)|(Γ) ≤ |ν − ν(P1Berk) δζ|(r_Γ^{-1}(Γ)) = |ν − ν(P1Berk) δζ|(P1Berk) < ∞.
Proposition 3.1.23 and Proposition 3.1.25 state that ∆( uν (x, ζ )) = ν − ν(P1Berk )δζ (x).
Example 3.2.12. In particular, if ν is a probability measure on P1Berk , we define the potential function for ζ ∈ HBerk or ζ / ∈ supp( ν) by
uν(x, ζ) := ∫ − log_v(δ(x, y)ζ) dν(y),
and for ζ ∈ P1(K) ∩ supp( ν) by
uν (x, ζ ) := uν (x, ζ Gauss ) + log v(‖x, ζ ‖).
Then uν (x, ζ ) ∈ BDV( P1Berk ) and
∆( uν (x, ζ )) = ν − δζ (x).
Moreover, if ζ ∈ HBerk or ζ ∉ supp(ν), there is a constant Cζ such that for all x ∈ P1Berk
uν (x, ζ ) = uν (x, ζ Gauss ) + log v(‖x, ζ ‖) − Cζ .
Proof. This example is just a special case of Example 3.2.11, so it remains to show the last statement. Let ζ be as required and x ∈ P1Berk , then by the calculation in Example
3.2.11 and the fact that ν(P1Berk ) = 1 , we have
uν(x, ζ) = ∫ jζGauss(x, y) dν(y) − jζGauss(x, ζ) − Cζ
= ∫ − log_v(δ(x, y)ζGauss) dν(y) + log_v(q_v^{−jζGauss(x,ζ)}) − Cζ
= uν(x, ζGauss) + log_v(‖x, ζ‖) − Cζ.
4 Harmonic functions
Classical potential theory includes the study of harmonic functions. Baker and Rumely developed the theory of harmonic functions on P1Berk and established analogues of the main results of this theory in [BR]. In [Th] this theory is developed in a more general way, extending the definition made in this chapter: one considers a general smooth strictly k-analytic curve X instead of just P1Berk. We will give a short introduction to this theory in Chapter 5 and verify that both definitions actually coincide. In this chapter we elaborate on the theory of harmonic functions on P1Berk from [BR] and try to extend it with some new or slightly modified statements. In particular, we are interested in the connection between the terms strongly harmonic and harmonic, which are defined at the beginning of Section 4.1. Next to the definitions, we give examples and fundamental properties of (strongly) harmonic functions. In Section 4.2, we will introduce the main dendrite of a domain. We will see that the values of a harmonic function on this R-tree determine the behavior of the function on the whole domain. In Sections 4.3 to 4.7, we prove analogues of the Maximum Principle (§4.3), the Poisson formula (§4.4 and §4.5), uniform convergence (§4.6) and Harnack's Principle (§4.7). Most of the mentioned main results are true in the general case. If so, we will give a reference to [Th], and refer to Section 5.3 for a precise definition of a harmonic function on X.
4.1 Harmonic functions
In the last chapter we have seen the existence of a Laplacian for a function f of bounded differential variation. Hence, we can define harmonic functions similarly to the defini-tion in the classical potential theory. These definitions are followed by some examples related to those in Chapter 3. Afterwards, we will give some nice fundamental proper-ties which are needed for the proofs of the main theorems and propositions which are stated in §4.3-4.7. In particular, we study the behavior of a function f : U → R with Laplacian ∆U (f ) = 0 on finite subgraphs Γ ⊂ U for a domain U satisfying |∂U | < ∞.
Definition 4.1.1. i) If U is a domain, a function f : U → R is called strongly harmonic on U if it is continuous on U , belongs to BDV( U ), and satisfies
∆U (f ) = 0 .
ii) If U is an arbitrary open set, then f : U → R is called harmonic on U if for each
x ∈ U there is a domain Vx ⊂ U with x ∈ Vx such that f is strongly harmonic on
Vx.At the end of Section 4.2, we give an example of a function f on a domain U which is harmonic but not strongly harmonic on U .
4.1.2. Since the Laplacian operator ∆Γ is linear by Lemma 3.1.28, the function a·f +b·g
is harmonic (resp. strongly harmonic) on V for any harmonic (resp. strongly harmonic) functions f and g on V and a, b ∈ R. We denote the space of harmonic functions on U
by H(U ).
Example 4.1.3. Let U ⊂ P1Berk be a domain and f : U → R given by f ≡ C on U for a constant C ∈ R. Then f is strongly harmonic on U .
Proof. The function f is clearly continuous, f ∈ BDV( U ) and ∆( f ) = 0 by Example 3.2.1. Using Proposition 3.1.29 with U1 = U and U2 = P1Berk , we get
∆U (f ) = ∆( f )|U = 0 .
Hence, f is strongly harmonic on U .
Example 4.1.4. Fix y, ζ ∈ P1Berk such that ζ ∉ P1(K) or y ≠ ζ. Then the function f : P1Berk \{y, ζ} → R given by f(x) := − log_v(δ(x, y)ζ) is strongly harmonic on each connected component of P1Berk \{y, ζ}.
Proof. Let U be a connected component of P1Berk { y, ζ }, then U is a domain. Since the generalized Hsia kernel is continuous in every x ∈ U by [BR, Proposition 4.1], f
is continuous as well. In Example 3.2.9, we have seen that f ∈ BDV( U ) and ∆( f ) =
δy − δζ . Due to U ⊂ P1Berk { y, ζ },
∆U (f ) = ∆( f )|U = ( δy − δζ )|U = 0
by Proposition 3.1.29.
Example 4.1.5. If 0 ≠ g ∈ K(T) and div(g) = ∑_{i=1}^m ni(ai), then f(x) = − log_v([g]x) is strongly harmonic on P1Berk \{a1, . . . , am}.
Proof. Due to a1, . . . , am ∈ K, i.e. they are of type I, U = P1Berk \{a1, . . . , am} is connected. Clearly, U is open as well, so U is by definition a domain. In Example 3.2.10, we have seen that f ∈ BDV(U) and
f(x) = − log_v(Cζ) + ∑_{i=1}^m (−ni) log_v(δ(x, ai)ζ).
Consequently, f is also continuous by [BR, Proposition 4.1]. We have also calculated that ∆(f) = ∑_{i=1}^m ni δ_{ai}, and so
∆U(f) = ∆(f)|U = (∑_{i=1}^m ni δ_{ai})|U = 0
since U = P1Berk \{a1, . . . , am}. Therefore, f is strongly harmonic on P1Berk \{a1, . . . , am}.
Remark. In Proposition 5.3.15, we will see an analogue statement in the general case.
Example 4.1.6. Let ν be a probability measure on P1Berk and ζ ∉ supp(ν); then the potential function uν(x, ζ) := ∫ − log_v(δ(x, y)ζ) dν(y) is strongly harmonic on each connected component of P1Berk \(supp(ν) ∪ {ζ}).
Proof. By Proposition 3.2.8, the generalized Hsia kernel δ(x, y)ζ is continuous in the variable x on P1Berk \(supp(ν) ∪ {ζ}). Therefore, uν(·, ζ) is continuous on P1Berk \(supp(ν) ∪ {ζ}) as well. Let U be a connected component of P1Berk \(supp(ν) ∪ {ζ}); then U is a domain. By Example 3.2.12, we know that uν(·, ζ) ∈ BDV(P1Berk) and ∆(uν(·, ζ)) = ν − δζ. Hence, uν(·, ζ)|U ∈ BDV(U) and
∆U(uν(·, ζ)) = (ν − δζ)|U = 0
since U ∩ supp(ν) = ∅ and U ∩ {ζ} = ∅. Thus, uν(·, ζ) is strongly harmonic on U.
Next we will see some properties of (strongly) harmonic functions, which are used to prove important theorems in the following sections.
Lemma 4.1.7. i) If U1 ⊂ U2 are domains, and f is strongly harmonic on U2, then
f is strongly harmonic on U1.ii) If f is harmonic on an open set V , and U is a subdomain of V with U ⊂ V , then
f is strongly harmonic on U .iii) If f is harmonic on V and E ⊂ V is compact and connected, there is a subdomain
U ⊂ V containing E such that f is strongly harmonic on U .Proof. For i), let f be strongly harmonic on U2. Since f is continuous on U2, it is continuous on U1 ⊂ U2. By Proposition 3.1.29, f ∈ BDV( U2) implies f ∈ BDV( U1)
and
∆U1 (f ) = ∆ U2 (f )|U1 = 0 .
Therefore, f is strongly harmonic on U1.For ii), we consider a harmonic function f on an open set V and a subdomain U of V
such that U ⊂ V . Therefore, there is a domain Ux ⊂ V for each x ∈ U such that f
is strongly harmonic on Ux and x ∈ Ux. Since P1Berk is compact by Proposition 2.3.3,
the closed subset Ū is compact as well. Therefore, Ū ⊂ ⋃_{x∈Ū} Ux implies that there are
Ux1 , . . . , U xm such that U ⊂ ⋃mi=1 Uxi =: W . Clearly, W is open as the union of open sets. Since U is connected, we know that ⋂mi=1 Uxi 6 = ∅. Therefore, W is connected as a union of non-disjoint connected sets. Thus, W is a domain, and we can apply Proposition 3.1.30. So we get f |W ∈ BDV( W ) and
∆W (f )|Uxi = ∆ Uxi (f ) = 0
for each i = 1 , . . . , m . Hence, ∆W (f ) = 0 . Proposition 3.1.29 implies f |U ∈ BDV( U )
and ∆U (f ) = ∆ W (f )|U = 0 for our domain U ⊂ W . Since f is continuous on Uxi for every i = 1 , . . . , m and U ⊂ ⋃mi=1 Uxi , f is continuous on U and so strongly harmonic on U .For iii), let f be harmonic on V and E ⊂ V a compact and connected subset. For each
x ∈ E ⊂ V, there is a domain Ux ⊂ V such that x ∈ Ux and f is strongly harmonic on Ux. Since E ⊂ ⋃_{x∈E} Ux and E is compact, E ⊂ ⋃_{i=1}^m Uxi =: U. As above, U is connected and f is strongly harmonic on U.
A direct consequence of part i) of the lemma above is the following:
Corollary 4.1.8. If f is harmonic on an open set U ⊂ P1Berk , then f is harmonic on each open subset V of U .Proof. Consider x ∈ V ⊂ U . Since f is harmonic on U , there is a domain Ux ⊂
U containing x such that f is strongly harmonic on Ux. Let Vx be the connected component in the open set Ux ∩ V containing x. Then Vx is a domain in Ux, and therefore f is strongly harmonic on Vx by Lemma 4.1.7 i).
Lemma 4.1.9. Let V be a domain with a finite number of boundary points {x1, . . . , x m}
and h a strongly harmonic function on V .i) The function h belongs to CPA(Γ) for every finite subgraph Γ ⊂ V .ii) If Γ is a finite subgraph of V satisfying rV , Γ({x1, . . . , x m}) ⊂ ∂Γ,
∑_{~v∈T_p(Γ)} d_{~v}h(p) = 0
for every p ∈ Γ\∂Γ.
Proof. For i), we set yi := rV,Γ(xi). Since h is strongly harmonic on V , h belongs to
BDV(Γ) and ∆V(h) = ∆∂V(h). Hence,
∆Γ(h) = (rV,Γ)∗(∆V(h)) = (rV,Γ)∗(∆∂V(h)) = ∑_{i=1}^m ci · δ_{yi},
where ci := ∆∂V(h)(xi). By [BR, Corollary 3.9], we get h ∈ CPA(Γ).
For ii), Remark 3.1.7 and the definition of ∆V(h) state
∑_{~v∈T_p(Γ)} d_{~v}h(p) = −∆Γ(h)(p) = −∆V(h)(r_{V,Γ}^{-1}(p)).
The requirements imply r_{V,Γ}^{-1}(p) ⊂ V, and so
∑_{~v∈T_p(Γ)} d_{~v}h(p) = −∆V(h)(r_{V,Γ}^{-1}(p)) = 0.
4.2 Harmonic functions and the main dendrite
The behavior of a harmonic function on a domain U is controlled by its behavior on a special subset which is called main dendrite and is closely related to the skeleton in [Th]. This subset is defined below, and some properties of it are stated afterwards. In particular, the main dendrite is an R-tree. Further, we get from the proof of this property (cf. [BR, Proposition 7.10]) a countable exhaustion of any domain different from P1Berk by subdomains on which a harmonic function is strongly harmonic. This leads to the fact that every harmonic function on a domain of bounded differential variation is actually strongly harmonic. The main result of this section is that every harmonic function is determined by its values on the main dendrite. This knowledge enables us to give an example of a harmonic function which is not strongly harmonic at the end of this section.
Definition 4.2.1. If U is a domain, the main dendrite D = D(U ) ⊂ U is the set of all
x ∈ U belonging to paths between two boundary points y, z ∈ ∂U .
Remark. The main dendrite is empty iff |∂U | ∈ { 0, 1}. Clearly, if |∂U | ∈ { 0, 1},
D(U ) = ∅. If |∂U | ≥ 2, there are at least two different points y, z ∈ ∂U . Since
U is connected, the unique path from y to z is contained in U. So there are points belonging to the path from y to z which are contained in U. We have |∂U| ∈ {0, 1} for a domain iff U = P1Berk or U is a connected component of P1Berk \{ζ} for some ζ ∈ P1Berk.
Lemma 4.2.2. Let W ⊂ P1Berk be a domain, x ∈ W and y ∈ P1Berk \W . Then the unique path Γ from x to y contains some boundary point of W .Proof. Set W ′ := P1Berk \W . Supposing Γ ∩ ∂W = ∅, we have Γ ∩ W ′ = Γ ∩ P1Berk \W.
Hence, y ∈ Γ ∩ W ′ and (Γ ∩ W ) ∩ (Γ ∩ W ′) = ∅. We also know that x ∈ Γ ∩ W . Thus, the two sets Γ ∩ W ′ and Γ ∩ W are non-empty relatively open disjoint subsets with
Γ = (Γ ∩ W ) ∪ (Γ ∩ W ′). This contradicts the fact that Γ is connected.
Proposition 4.2.3. Let U be a domain in P1Berk and D be the main dendrite of U . If
D is non-empty, then i) D is finitely branched at every point. ii) D is a countable union of finite R-trees, whose boundary points are all of type II. Proof. See [BR, Proposition 7.10]. We have defined in Section 2.1 a strict simple domain as a domain with only finitely many boundary points which are all of type II. The proof of the last proposition implies that every domain U 6 = P1Berk can be exhausted by a sequence of such domains:
Corollary 4.2.4. If U 6 = P1Berk is a domain in P1Berk , then U can be exhausted by a sequence W1 ⊂ W2 ⊂ · · · of strict simple domains with Wn ⊂ Wn+1 ⊂ U for each n.Proof. See [BR, Corollary 7.11].
Corollary 4.2.5. Let U ⊂ P1Berk be a domain and f harmonic on U . If f ∈ BDV( U ),then f is already strongly harmonic on U .Proof. Due to f ∈ BDV( U ), the Laplacian ∆U (f ) exists, and it remains to show that ∆U (f ) = 0 . At first, we consider the case that U = P1Berk . If T is a Borel measurable set in P1Berk , the compact set T ⊂ P1Berk , and so T , can be covered by finitely many subdomains Ux1 , . . . , U xm , where f is strongly harmonic on each Uxi for a point xi ∈ P1Berk . By Lemma 3.1.29 ∆P1Berk (f )|Uxi = ∆ Uxi (f ) for each i = 1 , . . . , m ,and so ∆P1Berk (f )( T ) = 0 . Thus, f is strongly harmonic on U = P1Berk .
If U ≠ P1Berk, we will use Corollary 4.2.4 to verify ∆U(f) = 0. Let (Wn)n≥1 be the exhaustion from the corollary. Then U = ⋃_{n=1}^∞ Wn, W̄n ⊂ Wn+1 and W̄n ⊂ U for all
n ≥ 1. It follows directly from the σ-additivity of ∆U (f ) that ∆U (f ) is continuous from below. Since f is strongly harmonic on Wn for each n ≥ 1 by Lemma 4.1.7 ii),
this implies
∆U(f)(T) = ∆U(f)(⋃_{n≥1} (T ∩ Wn)) = lim_{n→∞} ∆U(f)(T ∩ Wn) = lim_{n→∞} ∆Wn(f)(T ∩ Wn) = 0
for every Borel measurable subset T of U.
In the following, we will see the connection between the main dendrite and harmonic functions:
Proposition 4.2.6. Let U ⊂ P1Berk be a domain and D the main dendrite of U . If D is empty, every harmonic function on U is constant. If D is non-empty, every harmonic function f on U is constant along every path leading away from D.Proof. At first we consider the case that D 6 = ∅. We fix a y0 ∈ D and let x be a point in U \D. We denote the first point of the path [x, y 0] in D by w, and we show that f (x) = f (w). Let V be the connected component of P1Berk { w} which contains
x. V is connected and open with ∂V = {w} ⊂ U , i.e. V is a domain, and V ⊂ U .Applying Lemma 4.1.7 ii), f is strongly harmonic on V , and so ∆V (f ) = 0 . [BR, Proposition 5.25] implies ∆V (f )( {w}) = −∆V (f )( V ) = 0 , i.e. ∆∂V (f ) = 0 . Thus,
∆V (f ) = ∆ V (f ) + ∆ ∂V (f ) = 0 . [BR, Lemma 5.14] tells us, that in that case f is constant on V ∩ HBerk . We know that HBerk is dense in P1Berk , f is continuous on U
and V = V ∪ { w} ⊂ U . Therefore, f is constant on V with f (x) = f (w).If D = ∅, then U is either P1Berk or a connected component of P1Berk { ζ} for some
ζ ∈ P1Berk . We fix an element w ∈ U , and consider an arbitrary x ∈ U . There is a disc
V containing x and w such that V ⊂ U because of the description of U . Then V has a unique boundary point, and we can prove the claim as we did it in the first case.
Remark 4.2.7. Let f be a harmonic function on a domain U .i) There are only finitely many tangent directions at every point x ∈ U where f is nonconstant. This is a direct consequence of Proposition 4.2.3 and 4.2.6. ii) The function f is locally constant outside the main dendrite for the weak topology which we have defined in Remark 4.2.7 by Proposition 4.2.6 We now give an example of a function f on a domain U which is harmonic but not strongly harmonic on U . By Corollary 4.2.5, our function must not be contained in
BDV( U ).
Example 4.2.8. Let K = Cp, and fix coordinates such that P1Berk = A1Berk ∪ {∞} . At first, we verify that the set U := P1Berk \Zp is open by showing Zp is closed. Since P1Berk
is a Hausdorff space, it suffices to prove that Zp is compact with respect to the subspace topology of P1Berk. Let Zp = ⋃_{i∈I} Ui for Ui ⊂ Zp open in the Berkovich topology. As the Berkovich topology is the weakest topology on Zp such that the map Zp → R≥0 given by x ↦ |f(x)|p is continuous for all f ∈ Cp[T], and polynomials are continuous in the p-adic topology of Zp, the sets Ui are also open in this finer topology. Zp is compact in the p-adic topology, so there is a finite number of the sets Ui covering Zp. Thus, Zp is compact in the Berkovich topology, too. Due to Zp ⊂ Cp = P1(K), U is also connected, and so U is a domain. By Proposition 4.2.6, it suffices to describe the function f on the main dendrite D of U. So we try to describe D such that we can define f properly on D. At first, we will show that D is a rooted R-tree whose root is the Gauss point ζGauss. As |x|p ≤ 1 for every point x ∈ Zp, we can see the main dendrite D as an R-tree contained in D(0, 1)\D(0, 1), which is an R-tree with respect to the metric
ρ(x, y ) = 2 log p(diam( x ∨ y)) − log p(diam( x)) − log p(diam( y))
by Proposition 2.2.8. Since 0, 1 ∈ Zp = ∂U and |1 − 0|p = 1 , the first point where
[0 , ζ Gauss ] and [1 , ζ Gauss ] meet is the Gauss point. Thus, ζGauss has to be contained in
D, and so ζGauss is a root of D. Next, we determine all branches extending down from ζGauss. Because each point x ∈ Zp with |x|p < 1 is contained in the same branch off ζGauss as 0, it suffices to consider the points in Z_p^×. Let x, y ∈ Z_p^×; then we can write x = ∑_{i=0}^∞ ai p^i and y = ∑_{i=0}^∞ bi p^i where ai, bi ∈ {0, 1, . . . , p − 1} and a0, b0 ≠ 0. If a0 = b0, we have |x − y|p < 1, i.e. x and y are on the same branch. If a0 ≠ b0, then |x − y|p = 1, and so they are on different ones. Hence, there are p different branches extending down from ζGauss. Every other node of D corresponds to the disc D(a, p^{−n}) for a, n ∈ Z with n ≥ 1 and 0 ≤ a ≤ p^n − 1. One can see ζGauss as the case where n = 0, and so a = 0. Consider an arbitrary node D(a, p^{−n}). One can show that there are branches extending down from the node D(a, p^{−n}) to the nodes D(a + k · p^n, p^{−(n+1)}) with k ∈ {0, . . . , p − 1}. Since 1/p^{n+1} < 1/p^n and
|a + k · p^n − a|p = |k · p^n|p = 1/p^n,
we have D(a + k · p^n, p^{−(n+1)}) ⊊ D(a, p^{−n}). Furthermore, two such nodes D(a + k · p^n, p^{−(n+1)}) are on different branches since
|a + k · p^n − (a + k′ · p^n)|p = |k − k′|p · |p^n|p = 1/p^n ≥ 1/p^{n+1}
for k, k′ ∈ {0, . . . , p − 1}, k ≠ k′. Since we know that every node is of that form, there are clearly no other branches extending down off D(a, p^{−n}). Thus, there are p branches extending down from each node. Let x be the point corresponding to D(a, p^{−n}) and y
to D(a + k · p^n, p^{−(n+1)}) with k ∈ {0, . . . , p − 1}. Then
ρ(x, y) = 2 log_p(diam(x ∨ y)) − log_p(diam(x)) − log_p(diam(y))
= 2 log_p(diam(x)) − log_p(diam(x)) − log_p(diam(y))
= log_p(diam(x)) − log_p(diam(y))
= log_p(p^{−n}) − log_p(p^{−(n+1)}) = −n + n + 1 = 1,
i.e. each edge has length 1. Now we are able to give a proper description of f on
D. Set f (ζGauss ) = 0 and define f recursively. Let za be a node on which f (za) has been already defined. Let Na denote the slope of f on the edge entering za from above, and if za = ζGauss , we put Na = 0 . We have seen above, that there are p edges extending down from za. We choose two distinguished edges, and let f (z) have the slope Na + 1
on one and −1 on the other one until the next node. On the remaining p − 2 edges, we set f (z) = f (za) until the next node. By construction, f is continuous and locally piecewise linear. Furthermore, the sum of the slopes of f on the edges leading away from each node is 0, so f is harmonic on U (we can extend f from D to U properly by
f (x) := f (w) where w is the first point of [x, ζ Gauss ] in D for each x ∈ U ). However, f is not strongly harmonic on U . By the definition of f , there are edges of
D with arbitrarily large slopes of f . Let Γ be an edge of D ⊂ U with slope mΓ, then
|∆Γ(f )|(Γ) = 2 |mΓ|. Hence, f cannot be contained in BDV( U ), and so f cannot be strongly harmonic on U .
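The recursive slope assignment above is easy to simulate. The following Python sketch (our own illustration; the breadth-first layout and the choice of which two downward edges receive the slopes Na + 1 and −1 are arbitrary, exactly as in the construction) builds the first few levels of the dendrite for a given p, checks the harmonicity condition at every node, and records the largest edge slope per level, which grows without bound — the reason why f ∉ BDV(U).

# Each node is represented only by the slope N of f on the edge entering it from
# above (N = 0 for the Gauss point). Its p downward edges get the slopes
# N + 1, -1 and 0 (p - 2 times); together with the upward direction, whose
# directional derivative is -N, the slopes at the node sum to 0.

def child_slopes(incoming, p):
    return [incoming + 1, -1] + [0] * (p - 2)

def simulate(p=3, levels=5):
    level = [0]                              # incoming slope at the Gauss point
    for n in range(levels):
        next_level = []
        for incoming in level:
            down = child_slopes(incoming, p)
            assert -incoming + sum(down) == 0   # harmonicity at this node
            next_level.extend(down)
        print(f"level {n}: max |slope| on an edge = {max(abs(s) for s in next_level)}")
        level = next_level

simulate()   # the maximal slope grows by 1 per level, so |∆Γ(f)|(Γ) is unbounded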
4.3 The Maximum Principle
In the classical theory, a harmonic function on a domain D in C does not achieve a maximum or a minimum within the domain ([Ra, Theorem 1.1.8]). This property is called the Maximum Principle. We will prove the analogue for harmonic functions on domains of P1Berk and give a reference in [Th] for the formulation in the case of an arbitrary smooth strictly k-analytic curve. Further we give a strengthening which is called the Strong Maximum Principle. Note that the formulation of the Strong Maximum Principle differs slightly from the one in [BR]. Afterwards, the Riemann Extension Theorem and the uniqueness of the Equilibrium measure are deduced from this strengthening.
Theorem 4.3.1 (Maximum Principle) . Let U ⊂ P1Berk be a domain and f a harmonic function on U .i) If f is nonconstant on U , f does not achieve a maximum or a minimum value on U .
ii) The inequality lim sup x→∂U f (x) ≤ M implies
f (x) ≤ M for all x ∈ U.
Respectively, if lim inf x→∂U f (x) ≥ m is satisfied, we have
f (x) ≥ m for all x ∈ U.
Proof. If f is harmonic, −f is harmonic as well. Since
min(f) = − max(−f) and lim inf_{x→∂U} f(x) = − lim sup_{x→∂U} (−f(x)),
it suffices to consider the case of a maximum in i) respectively lim sup x→∂U f (x) in ii). We prove i) by contradiction, so suppose that f is achieving a maximum at a point
x ∈ U. By definition, f is strongly harmonic on a subdomain V of U containing x. At first, we will show that f is constant on V , and subsequently we will conclude that f is constant on U . Without loss of generality, we may assume that the main dendrite D of
V is non-empty because otherwise f is constant on V by Proposition 4.2.6. Let T be the branch off of D containing x, and let w be the point where T attaches to D. Then
w ∈ D, and by Proposition 4.2.6 f (w) = f (x). Thus, f is achieving the maximum in
D. Let Γ ⊂ D be a finite subgraph with w in its interior. Because of the definition of the main dendrite, we have the identity (4.1) rV , Γ(∂V ) = {z ∈ D| z endpoint of Γ} =: E.
If z ∈ rV , Γ(∂V ), there is a y ∈ ∂V such that rV , Γ(y) = z, i.e. the first point of the path
[y, w] in Γ is z, and so z is an endpoint of Γ. If z ∈ E ⊂ D, there are y, v ∈ ∂V such that z is contained in the path [y, v]. Since z ∈ E, rV,Γ(y) = z or rV,Γ(v) = z. Since f is strongly harmonic on V, ∆V̄(f) is supported on ∂V. By Equation (4.1), we know that r_{V,Γ}^{-1}(Γ\E) ⊂ V. Hence, ∆Γ(f) = (rV,Γ)∗(∆V̄(f)) implies
(4.2)  supp(∆Γ(f)) ⊂ E.
Since Γ is a finite subgraph, E is finite. Thus, ∆Γ(f ) is a discrete measure on Γ. By [BR, Corollary 3.9], f |Γ therefore belongs to CPA(Γ) . We will show that Γ coincides with the connected component of {z ∈ Γ|f (z) = f (w)} containing w which is denoted by Γw. Suppose Γw 6 = Γ , then we can find a boundary point p of Γw in Γ which is not contained in E. This point p ∈ Γw satisfies f (p) = f (w) = f (x), i.e. f (p) is maximal. Hence,
d_{~v}f(p) = lim_{t→0} (f(p + t~v) − f(p)) / t ≤ 0
for all tangent vectors ~v ∈ Tp(Γ) . The point p is a boundary point of Γw, so f is nonconstant near p. We can find therefore a tangent vector v ∈ Tp(Γ) such that 52 4.3 The Maximum Principle
d_{~v}f(p) < 0 for our piecewise affine function f on Γ. Thus,
∆Γ(f)(p) = −∑_{~v∈T_p(Γ)} d_{~v}f(p) > 0.
By Equation (4.2), p has to be contained in E, what is not possible by the choice of p.So Γ = Γ w. Because Γ can be taken arbitrary large, f is constant on D. By Proposition 4.2.6, f is constant on V .With this result, we can conclude easily that f is also constant on U . We consider the set W := {z ∈ U |f (z) = f (x)}. This set is non-empty, because x is contained in it. Since f is continuous on U and W = f −1(f (x)) , this set is closed. W is also open, because for every z0 ∈ W we have seen above that there is an open neighborhood
Vz0 ⊂ W of z0. We know that U is connected as a domain, and so the non-empty open and closed set W has to coincide with U , i.e. f is constant on U .For ii), we consider the function f ] : U → R defined by
f ](x) := f(x) for x ∈ U, and f ](x) := lim sup_{y→x, y∈U} f(y) for x ∈ ∂U.
Since f is continuous on U , the defined function f ] is upper semicontinuous by con-struction. P1Berk is compact by Proposition 2.3.3 i), so U is compact. Therefore, the upper semicontinuous function f ] is achieving a maximal value in U . By i), we know that this maximum has to be achieved on ∂U . Since we have required that
f ](x) = lim sup y→x f (y) ≤ M for every x ∈ ∂U , f (x) = f ](x) ≤ M for all x ∈ U .
Corollary 4.3.2. If U is an open set and f : U → R harmonic, then f achieves a local extremum in a point x ∈ U if and only if f is locally constant in x.Proof. Let V ⊂ U be a neighborhood of x such that f has an extremum in x on V .The connected component V0 of V containing x is a domain, f is harmonic on V0 and
f achieves an extremum on V0. Hence, f has to be constant on V0 by the Maximum Principle. The other direction is obvious.
Remark. If X is a smooth strictly k-analytic curve, then we have the same statement as in the corollary above in [Th, Proposition 3.1.1]. In the following, we see an important strengthening of the Maximum Principle. One can show that in some cases, sets of capacity 0 in ∂U can be ignored. Before we state and prove the Strong Maximum Principle, we will define capacity and prove some lemmata.
Definition 4.3.3. Fix ζ ∈ P1Berk, and let e be a compact subset of P1Berk \{ζ}.
i) Let P(e) be the collection of all probability measures ν on P1Berk with supp(ν) ⊂ e. For a given ν ∈ P(e) we define the energy integral
Iζ(ν) := ∫∫_{e×e} − log_v δ(x, y)ζ dν(x) dν(y),
where δ(x, y)ζ is the generalized Hsia kernel which was defined in Definition 3.2.7.
ii) We call
Vζ(e) := inf_{ν∈P(e)} Iζ(ν)
the Robin constant.
iii) The logarithmic capacity of e relative to ζ is defined by
γζ(e) := q_v^{−Vζ(e)}.
For example, if K = Cp we can take qv = p. If H ⊂ P1Berk is an arbitrary set, we define the logarithmic capacity as
γζ(H) := sup_{e⊂H compact} γζ(e).
iv) We call a probability measure μ supported on e with Iζ (μ) = Vζ (e) Equilibrium measure for e with respect to ζ. If γζ (e) > 0, [BR, Proposition 6.6] states the existence of such a probability measure μ. Later on, we will give a proof of the uniqueness in Corollary 4.3.10.
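For instance (a computation straight from the definitions above): take e = D(0, 1), the Berkovich closed unit disc, and ζ = ∞. For any x, y ∈ e one has δ(x, y)∞ ≤ 1, so − log_v δ(x, y)∞ ≥ 0 and therefore I∞(ν) ≥ 0 for every ν ∈ P(e). The Dirac measure ν = δζGauss gives
I∞(δζGauss) = − log_v δ(ζGauss, ζGauss)∞ = − log_v(1) = 0,
so V∞(D(0, 1)) = 0 and γ∞(D(0, 1)) = q_v^0 = 1, with δζGauss as an Equilibrium measure for the closed unit disc with respect to ∞.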
Remark. The capacity of a set e with respect to a ζ ∈ P1Berk \e is 0 if and only if the capacity of a set e is 0 to any ζ ∈ P1Berk \e (cf. [BR, Proposition 6.1]). Hence, in the following we will just say that a set has capacity 0 if γζ (e) = 0 for any ζ ∈ P1Berk \e.
Lemma 4.3.4. Let e := {a1, . . . , am} ⊂ P1(K); then e has capacity 0.
Proof. By the definition of P(e), every measure μ ∈ P(e) is supported on e = {a1, . . . , am}. Hence, we can write μ = ∑_{i=1}^m ci δ_{ai} for ci ∈ R. For any ζ ∈ P1Berk \{a1, . . . , am} we have
Iζ(μ) = ∑_{i=1}^m −c_i^2 · log_v(δ(ai, ai)ζ) = ∞
since δ(ai, ai)ζ = 0 by Proposition 3.2.8. Thus, Vζ(e) = ∞, and finally γζ(e) = 0.
Lemma 4.3.5. If e has capacity 0, then e is contained in P1(K).Proof. Suppose there is an element a ∈ e ∩ HBerk . Then the Dirac measure δa is a measure in P(e), and
Iζ (δa) = − log v δ(a, a )ζ = − log v(diam ζ (a)) < ∞
for any ζ ∉ e. Thus, Vζ(e) < ∞, i.e. γζ(e) > 0, for any ζ ∉ e, contradicting that e has capacity 0. Consequently, every point of e has to be of type I. We need the following lemma to prove a strengthening of the Maximum Principle.
Lemma 4.3.6. Let e ⊂ P1Berk be a compact set of capacity 0 and ζ ∉ e. Then there is a ν ∈ P(e) such that
lim_{x→y} uν(x, ζ) = ∞
for all y ∈ e. A function with this property is called an Evans function.
Proof. See [BR, Lemma 7.18].
Theorem 4.3.7 (Strong Maximum Principle). Let U ⊂ P1Berk be a domain and f a harmonic function on U.
i) If f is bounded above on U, and lim sup_{x→z} f(x) ≤ M is satisfied for all z ∈ ∂U\e, where e ⊊ ∂U is of capacity 0, then
f(x) ≤ M for all x ∈ U.
ii) If f is bounded below on U, and lim inf_{x→z} f(x) ≥ m is satisfied for all z ∈ ∂U\e, where e ⊊ ∂U is of capacity 0, then
f(x) ≥ m for all x ∈ U.
Proof. As in the proof of Theorem 4.3.1, it suffices to deal with the claim in i). If f is constant with f ≡ c on U and there is at least one z ∈ ∂U such that lim sup_{x→z} f(x) ≤ M, we get
f(y) = c = lim sup_{x→z} f(x) ≤ M
for each y ∈ U. So we may assume that f is nonconstant on U. We will show the claim by contradiction. Suppose that there is a function f as required and there exists an element x0 ∈ U such that f(x0) > M. Since f is nonconstant, there is a ζ ∈ U such that f(ζ) ≠ f(x0). If f(ζ) > f(x0), then we just interchange x0 and ζ. Hence, the domain U contains two points x0 and ζ satisfying f(ζ) < f(x0) and M < f(x0). Therefore, we can fix an M1 > M such that
f(x0) > M1 > f(ζ).
In the next step, we will construct a suitable compact set e1 to which Lemma 4.3.6 can be applied. We define
W := {x ∈ U | f (x) > M 1} and W ′ := {x ∈ U | f (x) < M 1}.
Since f is continuous, the sets W = f −1(( M1, ∞)) and W ′ = f −1(( −∞ , M 1)) are open. By definition, one can see that ζ ∈ W ′, x0 ∈ W and W ′ ∩ W = ∅. Let V be the connected component of W containing x0. Then V is open and connected, i.e. V ⊂ U
is a domain. We will show that e1 := ∂V ∩ ∂U 6 = ∅. Suppose that the intersection is empty, i.e.
V ⊂ U . We therefore can find for each y ∈ ∂V a neighborhood Uy ⊂ U of y with
Uy ∩ V 6 = ∅ and Uy ∩ U \V 6 = ∅. We can actually find a neighborhood, for example a connected one, such that Uy ∩ V 6 = ∅ and Uy ∩ U \W 6 = ∅, because V is a connected component of W . The points contained in Uy ∩ V satisfy f (z) > M 1 and the points
Uy ∩ U \W satisfy f (z) ≤ M1. Since f is continuous on U , f (y) = M1. Hence,
lim sup x→∂V f (x) = M1. The Maximum Principle implies f (x) ≤ M1 for all x ∈ V .This contradicts our supposition f (x0) > M 1, because x0 ∈ V by the construction of
V . Consequently, e1 = ∂V ∩ ∂U 6 = ∅.
Next, we verify that e1 has capacity 0. The closed subset e1 is compact. Further, every z ∈ e1 = ∂V ∩ ∂U is clearly contained in the boundary of U and satisfies
lim sup x→z f (x) ≥ M1 > M . Hence, e1 has to be a subset of e. By the definition of capacity,
γζ (e1) ≤ γζ (e) = 0 .
Additionally, ζ ∉ e1 ⊂ ∂U, because ζ ∈ U, so we can apply Lemma 4.3.6 to e1 and ζ. Lemma 4.3.6 states the existence of an Evans function h for e1 with respect to ζ; more specifically, there is a probability measure ν such that supp(ν) ⊂ e1 and for all y ∈ e1
(4.3)  lim_{x→y} h(x) = ∞,
where h(x) := uν (x, ζ ) for all x ∈ P1Berk .Now we will define a harmonic function with the help of h on V such that we can apply the Maximum Principle 4.3.1. Then we will get a contradiction to our suppo-sition f (x0) > M . First, we show that h is harmonic on V . We know that ζ /∈ V .Furthermore, V ∩ supp( ν) is empty, because supp( ν) ∩ V ⊂ e1 ∩ V = ∅. Therefore,
V ⊂ P1Berk (supp( ν) ∪ { ζ}), and so h is harmonic on V by Example 4.1.6. [BR, Propo-sition 6.12] tells us that h(x) := uν (x, ζ ) is lower semicontinuous on P1Berk { ζ}, and so especially on V which does not contain ζ. Since V is compact, h is bounded below, so there is a constant B > 0 such that (4.4) h(x) ≥ − B
for all x ∈ V . For η > 0 we define the function fη(x) := f (x) − ηh (x). Since f and
h are harmonic on V , each fη is harmonic on V as well. We have required that f is
bounded above on U, so (4.3) implies
(4.5)  lim sup_{x→y} fη(x) = lim sup_{x→y} (f(x) − ηh(x)) = −∞
for all y ∈ e1. Our function f is continuous in y and satisfies f(y) = M1 for each y ∈ ∂V ∩ U. Thus, we have the inequality
(4.6)  lim sup_{x→y} fη(x) = lim sup_{x→y} (f(x) − ηh(x)) ≤ M1 + ηB
for each y ∈ ∂V ∩ U by Equation (4.4). Because of the disjoint union ∂V = e1 ∪ (∂V ∩ U), (4.5) and (4.6) state that
lim sup_{x→y} fη(x) ≤ M1 + ηB
for all y ∈ ∂V. Since fη is harmonic on V, the Maximum Principle says that
fη(x) ≤ M1 + ηB
for all x ∈ V. Consequently, we get the following inequality
f(x) = fη(x) + ηh(x) ≤ M1 + ηB + ηh(x) = M1 + η(B + h(x))
on V . Letting η → 0, we have f (x) ≤ M1 for all x ∈ V . This contradicts our supposition f (x0) > M 1, because x0 ∈ V by the definition of V . Hence, f (x) ≤ M for all x ∈ U .Two nice consequences of the Strong Maximum Principle are the Riemann Extension Theorem and the uniqueness of the Equilibrium measure.
Corollary 4.3.8 (Riemann Extension Theorem) . Let U be a domain and e ⊂ U be a compact set of capacity 0. Then every bounded harmonic function f : U \e → R can be extended uniquely to a harmonic function on U .Proof. Since P1Berk is a Hausdorff space, the compact subset e is closed, and so U \e is open. We have seen in Lemma 4.3.5, that having capacity 0 implies e ⊂ P1(K). By definition, U is connected as a domain. Therefore, U \e is connected as well, i.e. U \e is actually a domain. To extend f : U \e → R properly, we will show that for each a ∈ e
there is a neighborhood of a in U on which f is constant. We consider an arbitrary point a ∈ e ⊂ P1(K). Since a is a point of type I and U is open, we can find a r ∈ R>0
such that D(a, r ) ⊂ U . Then the ball B := D(a, r )− is open and connected, B = D(a, r ),and B has a unique boundary point z in HBerk . More precisely, z is the point in P1Berk
corresponding to the disc D(a, r ).
We consider the set V := B\e. Then
(4.7)  ∂V ∩ (P1Berk \e) = ∂B = {z}.
Our strategy is to apply Theorem 4.3.7 to f|V. Note that V is a domain, by the same reasons that U\e is a domain. Additionally, f is harmonic and bounded on V = B\e ⊂ U\e, because we have required that for f on U\e. In particular, f is continuous in z ∈ U\e. Thus,
(4.8)  lim_{x→z, x∈V} f(x) = f(z).
Set e′ := e ∩ ∂V , then e′ is also a compact set of capacity 0. By Equation (4.7),
∂V \e′ = {z}. Equation (4.8) and the Strong Maximum Principle (Theorem 4.3.7 i) and ii)) imply f (x) = f (z) for all x ∈ V . By setting f (x) = f (z) for all x ∈ e ∩ B , we have f (x) = f (a) for all x ∈ B . Since such two balls are either disjoint, or they coincide (cf. Lemma 2.3.1), f is well-defined on the domain U .By Example 4.1.3, f is strongly harmonic on B as a constant function on B. We have required that f is harmonic on U \e, so f is harmonic on U = B ∪ U \e.By the construction of the extension, one can see that it has to be unique, but we also can verify that explicitly. Let h be a harmonic function on U such that h ≡ f on U \e.If a ∈ e, we have seen above that there is a r ∈ R>0 such that for B := D(a, r )− we have B ⊂ U . Since h − f is harmonic on U , h − f is also harmonic on the domain B ⊂ U
which has only one boundary point. Hence, the main dendrite of B is empty, and so the harmonic function h − f is constant on B by Proposition 4.2.6. Due to e ⊂ P1(K),the set B\ e cannot be empty which means that (h − f )( a) = 0 . Thus, h ≡ f on U .
Corollary 4.3.9. Let {a1, . . . , am} ⊂ P1(K). Then every bounded harmonic function on P1Berk \{a1, . . . , am}, or on B(a, r)ζ^− \{a1, . . . , am} for some open ball B(a, r)ζ^−, is constant.
Proof. Let U be P1Berk or an open ball B(a, r)ζ^−, and let f be a bounded harmonic function on U\{a1, . . . , am}. Clearly, the set e := {a1, . . . , am} is closed, and so e is compact. By Lemma 4.3.4, e has capacity 0. So we can extend the function f to a harmonic function on U by the Riemann Extension Theorem (Corollary 4.3.8). Since |∂U| ≤ 1, the main dendrite D of U is empty in both cases. By Proposition 4.2.6, the only harmonic functions on U are the constant ones. In particular, f is constant on P1Berk \{a1, . . . , am}, respectively on B(a, r)ζ^− \{a1, . . . , am}.
By the previous results, we can show that the Equilibrium measure, which we have defined in 4.3.3 iv), is unique.
Corollary 4.3.10. Let E ⊂ P1Berk be a compact set with positive capacity, and let
ζ ∈ P1Berk \E. Then the Equilibrium measure μζ of E with respect to ζ is unique.
Proof. Suppose μ1 and μ2 are two Equilibrium measures for E with respect to ζ, i.e.
Iζ (μ1) = Iζ (μ2) = Vζ (E) < ∞.
Since μi is supported on E, the potential function ui(x) := uμi (x, ζ ) for i = 1 , 2 is well-defined on P1Berk , continuous on P1Berk \supp( μi), and achieves its minimum at x = ζ
by [BR, Proposition 6.12]. Furthermore, the Frostman’s Theorem [BR, Theorem 6.18] states that ui(z) ≤ Vζ (E) < ∞ for all z ∈ P1Berk { ζ}, and so ui is bounded above by
Vζ (E) for all z ∈ P1Berk . Additionally, there is a Fσ set fi ⊂ E of capacity zero such that ui(z) = Vζ (E) for all z ∈ E\fi and ui is continuous on E\fi. We have seen in Example 3.2.11 that (4.9) ui ∈ BDV( P1Berk ) and ∆P1Berk (ui) = μi − δζ .
Let U be the connected component of P1Berk \E containing ζ. Then U is open and connected, and so a domain. By [BR, Proposition 6.8], the measures μi are supported on ∂U ⊂ U ∩ E. Hence, μi ∈ P(∂U ), where ∂U is compact because it is closed. Due to
ζ / ∈ U , we can consider Vζ (∂U ), and
Vζ(∂U) = inf_{ν∈P(∂U)} Iζ(ν) ≤ Iζ(μi) < ∞.
Thus, ∂U has positive capacity. We consider two cases. First, we will assume that ζ ∈ HBerk , and afterwards ζ ∈ P1(K).So let ζ ∈ HBerk , then the functions ui are bounded below by [BR, Proposition 6.12]. We have already seen that they are bounded above as well. Hence, u : P1Berk → R with
u(x) := u1(x) − u2(x) is well-defined and bounded. Furthermore, u is continuous on
U, because the potential functions ui are continuous on P1Berk \ supp(μi) for i = 1, 2, and U is contained in P1Berk \ (supp(μ1) ∪ supp(μ2)). By Equation (4.9), we know that
u ∈ BDV( P1Berk ), and so u ∈ BDV( U ), and
∆P1Berk (u) = ∆ P1Berk (u1) − ∆P1Berk (u2) = ( μ1 − δζ ) − (μ2 − δζ ) = μ1 − μ2.
Since supp( μi) ⊂ ∂U ⊂ U by [BR, Proposition 6.8] and the retraction map rP1Berk ,U
fixes U , it follows that (4.10) ∆U (u) = ( rP1Berk ,U )∗(∆ P1Berk (u)) = ∆ P1Berk (u) = μ1 − μ2
by Proposition 3.1.29. Consequently, it remains to show ∆U (u) = 0 to prove μ1 = μ2.To do that, we will apply the Strong Maximum Principle. We have already mentioned that u is continuous on U and contained in BDV( U ), so u is strongly harmonic on U
since supp(∆U(u)) = supp(μ1 − μ2) ⊂ ∂U. Furthermore, we know that u is bounded
on U . Let f := f1 ∪ f2, then f has capacity 0, because
γζ (f ) = γζ (f1 ∪ f2) = γζ (f1) = 0
by [BR, Corollary 6.21]. Since ∂U has positive capacity, ∂U \f cannot be empty. Con-sider an element z ∈ ∂U \f ⊂ E\f ⊂ E\fi, then u is continuous in z, and
u(z) = u1(z) − u2(z) = Vζ (E) − Vζ (E) = 0
by Frostman's Theorem. Hence, lim_{x→z, x∈U} u(x) = 0. We can apply the Strong Maximum Principle (Theorem 4.3.7) to u twice, and we get that u ≡ 0 on U. Thus, ∆U(u) = 0 by [BR, Lemma 5.24].
Now, let ζ ∈ P1(K). Since ui(ζ) = −∞ for i = 1, 2, u(ζ) = u1(ζ) − u2(ζ) = −∞ + ∞ is undefined. Hence, we consider the function u : P1Berk \ {ζ} → R with u(x) = u1(x) − u2(x). By [BR, Proposition 6.12] and [BR, Proposition 6.18], ui is lower semicontinuous on P1Berk \ {ζ}, so particularly ui(x) ≠ −∞ for all x ∈ P1Berk \ {ζ}, and ui is bounded above on P1Berk. Hence, u is well-defined on P1Berk \ {ζ}. Since U is a domain and ζ is of type I, U \ {ζ} is a domain as well. Again, we try to apply the Strong Maximum Principle. In contrast to the first case, we will apply it to u defined on the domain U \ {ζ}. The function u is continuous on U \ {ζ}, because the potential functions ui are continuous on P1Berk \ supp(μi) and U \ {ζ} is contained in P1Berk \ (supp(μ1) ∪ supp(μ2)).
By [BR, Proposition 6.12], there exists an open neighborhood V of ζ such that ui(z) = log_v(‖z, ζ‖) on V for i = 1, 2. Thus, u ≡ 0 on V \ {ζ}. Since P1Berk \ V is compact, the lower semicontinuous functions ui are bounded below on P1Berk \ V. By Frostman's Theorem, the functions ui are bounded above on P1Berk \ V ⊂ P1Berk \ {ζ}, and so u is bounded on P1Berk \ V. Thus, u is bounded on U \ {ζ}.
By Equation (4.9), u ∈ BDV(P1Berk \ {ζ}), and so particularly u ∈ BDV(U \ {ζ}). Further,
∆P1Berk\{ζ}(u) = ∆P1Berk\{ζ}(u1) − ∆P1Berk\{ζ}(u2) = ∆P1Berk(u1) − ∆P1Berk(u2) = μ1 − μ2.
By the same arguments as in the first case and U \ {ζ} = U, we have
∆U\{ζ}(u) = ∆U(u) = μ1 − μ2,
and supp(∆U\{ζ}(u)) ⊂ ∂U. Thus, u is strongly harmonic on U \ {ζ}. Again, we verify μ1 = μ2 by showing ∆U(u) = 0.
Consider an element z ∈ ∂U\f, which exists by the same reasons as above. We know
that u is continuous in z and u(z) = 0 , and so lim x→z, x ∈U u(x) = 0 . Additionally,
lim_{x→ζ, x∈U} u(x) = lim_{x→ζ, x∈U} (u1(x) − u2(x)) = lim_{x→ζ, x∈U} (u1(x) − log_v(‖x, ζ‖) + log_v(‖x, ζ‖) − u2(x)) = 0
by [BR, Proposition 6.12]. Together, we have
lim_{x→z, x∈U} u(x) = 0
for each z ∈ ∂(U \ {ζ})\f = (∂U \f) ∪ {ζ}. Applying the Strong Maximum Principle to u on U \ {ζ} and the exceptional set f ⊂ ∂(U \ {ζ}) of capacity 0, we get u = 0 on U \ {ζ}. Thus,
0 = ∆U\{ζ}(u) = ∆U(u) = μ1 − μ2
by [BR, Lemma 5.24]. Hence, μ1 = μ2 is also true in the second case.
4.4 Poisson Formula and the Dirichlet and the Neumann Problem
Let D be a domain in C and φ : ∂D → R a continuous function, then the Dirichlet problem (cf. [Ra, Definition 1.2.1]) is to find a harmonic function h on D such that
lim_{z→ζ} h(z) = φ(ζ)
for each ζ ∈ ∂D. The Dirichlet problem can be uniquely solved if D is an open disc with the help of the Poisson formula (cf. [Ra, Theorem 1.2.2] and [Ra, Theorem 1.2.4]). The Poisson formula in the classical theory says: if a function f is harmonic on an open disc D = {z ∈ C : |z − z0| < r} ⊂ C of radius r centered at z0, and can be extended continuously to the closure of this disc D, then for any z ∈ D the value f(z)
can be recaptured only from knowledge of f on ∂D (cf. [Ra, Corollary 1.2.6]). The Dirichlet problem is not generally solvable for domains in C (cf. [Ra, §4.1 p.85]), but if
D is a simply connected domain in the Riemann sphere C∞ such that C∞\D contains at least two points there exists a unique solution (cf. [Ra, Corollary 4.1.8] and [Ra, Theorem 4.2.1]). In this section, we would like to generalize the Poisson formula for a special class of domains in P1Berk and show that the Dirichlet problem is uniquely solvable on these domains as well. Furthermore, we will formulate the Neumann problem for these domains, and we will see that the solvability is a consequence of the unique solution of the Dirichlet problem and the Poisson formula. If U ⊂ P1Berk is a domain with ∂U = {x1, . . . , xm}, then we have the two problems which
we have mentioned above:
Dirichlet Problem. Given A1, . . . , A m ∈ R, there is a continuous function f : U → R
which is harmonic on U and satisfies
f (xi) = Ai
for all i = 1 , . . . , m.
Neumann Problem. For given c1, . . . , cm ∈ R with ∑_{i=1}^m ci = 0, there exists a continuous function f : U → R which is harmonic on U and
∆∂U(f) = ∆U(f) = ∑_{i=1}^m ci δxi.
4.4.1. Clearly, both problems are uniquely solvable for the domain U = P1Berk \ {x} where x ∈ P1Berk. But these problems are not solvable for an arbitrary domain U ⊂ P1Berk with ∂U = {x1, . . . , xm}. As an example, consider the domain U = P1Berk \ {a1, . . . , am}
where {a1, . . . , a m} ⊂ P1(K) and m ≥ 2. Assume that there is a continuous function
f : U → R which is harmonic on U . Since f is bounded by the Maximum Principle, f
is constant on U by Corollary 4.3.9, and so constant on U . Thus, there is no solution for the Dirichlet problem if A1, . . . , A m are different and no solution for the Neumann problem if not all c1, . . . , c m are equal to zero. If ∂U = {x1, . . . , x m} ⊂ HBerk , we will see that both problems are solvable and the solution is given by the analogue of the Poisson formula.
Definition 4.4.2. We call a domain U a finite-dendrite domain , if U
i) is either a connected component of P1Berk { x} for some x ∈ HBerk , or ii) is of the form U = r−1Γ (Γ 0) for some finite subgraph Γ ⊂ HBerk , where Γ0 := Γ \∂Γ.
Remark. i) In the first case, the unique boundary point of U is x, and hence the main dendrite of U is empty. In the second case the main dendrite coincides with
Γ0. If
Γ = ⋃_{i,j∈{1,...,m}} [xi, xj]
for a finite set of points {x1, . . . , xm} ⊂ HBerk and U := r−1Γ(Γ0), then
∂U = {x1, . . . , xm}.
ii) On the other hand, let U be a domain with ∂U = {x1, . . . , xm} ⊂ HBerk. If m = 1, U is a connected component of P1Berk \ {x1}. If m ≥ 2 and
Γ := ⋃_{i,j∈{1,...,m}} [xi, xj],
then Γ is the finite subgraph such that U = r−1Γ(Γ0), where Γ0 := Γ \ {x1, . . . , xm}
is the main dendrite of U .iii) By [BR, Lemma 2.28] a domain U is a simple domain if and only if U is an open Berkovich disc or U = r−1Γ (Γ 0) for a non-trivial subgraph Γ ⊂ HBerk with endpoints of type II or III. Thus, the class of finite-dendrite domains contains the class of simple domains, which are regarded as the basic open neighborhoods in
P1Berk .In the following we consider a finite-dendrite domain V with ∂V = {x1, . . . , x m} ⊂
HBerk . Before we state the Poisson formula, it is shown that a harmonic function on
V can be written as a piecewise linear function on a finite subgraph composed with the corresponding retraction map if |∂V | ≥ 2. This description is used in Chapter 5 to verify that Thuillier’s definition of harmonic functions extends the one made in this chapter.
Proposition 4.4.3. Let V be a finite-dendrite domain in P1Berk with boundary points
x1, . . . , x m ∈ HBerk . Then each harmonic function f on V belongs to BDV( V ) and has a continuous extension f : V → R.If |∂V | ≥ 2, then
f = ˜f ◦ rΓ
for a function ˜f ∈ CPA(Γ) and a finite subgraph Γ of P1Berk .Proof. We consider at first the case m = 1 . If V has only one boundary point, the main dendrite is empty and every harmonic function on V is constant by Proposition 4.2.6. We have seen in Example 4.1.3, that f ∈ BDV( V ). Clearly, we can extend f to a continuous real-valued function on V .If m ≥ 2, the main dendrite of V is the interior of the finite subgraph Γ := ⋃[xi, x j ]
which we denote by Γ0. We know by Proposition 4.2.6 that f is constant on each branch off Γ0. Hence, it suffices to show that the restriction of f to each edge of Γ0 is affine. If ˜Γ is a finite subgraph of V contained in Γ0, we can find a subdomain U of V
such that U is a finite-dendrite domain with U ⊂ V and ˜Γ is contained in U. Lemma 4.1.7 says that f is strongly harmonic on U, and so f is piecewise affine on ˜Γ by Lemma 4.1.9. Moreover, Lemma 4.1.9 implies
(4.11) −∑_{~v∈Tp(Γ)} d~v f(p) = 0
for all p ∈ Γ0. Let S be a vertex set of Γ, i.e. S contains all endpoints x1, . . . , x m and all branch points of Γ. Let e be an edge of Γ\S. If e is an edge between two branch points of Γ, we have seen above that f is piecewise affine on e. Since |Tp(Γ) | = 2 for each
p ∈ Γ\S, f has to be affine on e by Equation (4.11). We can choose points y1, . . . , y m
closer and closer to the endpoints x1, . . . , xm, and so the continuous function f has to
be also affine on each edge e ⊂ Γ0 between an endpoint and a branchpoint by the same reasons as above. Thus, the restriction of f to each edge of Γ0 is affine, and so we can extend f continuously to V . In particular, there is a ˜f ∈ CPA(Γ) such that
f = ˜f ◦ rΓ.
Due to CPA(Γ) ⊂ BDV(Γ) , we have seen in Example 3.2.2 that f ∈ BDV( P1Berk ), and so f ∈ BDV( V ) by Lemma 3.1.29.
Corollary 4.4.4. Every harmonic function on a finite-dendrite domain V is strongly harmonic on V .Proof. If f is harmonic on V , then f belongs to BDV( V ) by the previous proposition. Corollary 4.2.5 implies that f is strongly harmonic on V .
Definition 4.4.5. Let V be a finite-dendrite domain with ∂V = {x1, . . . , x m}. For any z ∈ P1Berk , we define the real (m + 1) × (m + 1) matrix M (z) as
M(z) :=
( 0   1                           · · ·   1                           )
( 1   −log_v(δ(x1, x1)_z)   · · ·   −log_v(δ(x1, xm)_z) )
( ⋮   ⋮                           ⋱   ⋮                           )
( 1   −log_v(δ(xm, x1)_z)   · · ·   −log_v(δ(xm, xm)_z) ).
We call the matrix M(z) the Cantor matrix relative to z.
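The following minimal numerical sketch (not part of the original text) only assembles the matrix of Definition 4.4.5; the values −log_v(δ(xi, xj)_z) of the generalized Hsia kernel are assumed to be supplied as plain floats, and the function name is hypothetical.

```python
import numpy as np

def cantor_matrix(neg_log_delta):
    """Assemble the (m+1) x (m+1) matrix M(z) of Definition 4.4.5.

    neg_log_delta is assumed to be an m x m array whose (i, j) entry is
    -log_v(delta(x_i, x_j)_z); computing the Hsia kernel itself is outside
    the scope of this sketch.
    """
    D = np.asarray(neg_log_delta, dtype=float)
    m = D.shape[0]
    M = np.zeros((m + 1, m + 1))
    M[0, 1:] = 1.0   # first row: (0, 1, ..., 1)
    M[1:, 0] = 1.0   # first column below the corner: all 1's
    M[1:, 1:] = D    # block of -log_v(delta(x_i, x_j)_z) values
    return M
```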
Lemma 4.4.6. For every z ∈ P1Berk , the matrix M (z) is non-singular. Proof. We will show that the matrix M := M (z) has a trivial kernel. Consider a vector
~c = (c0, . . . , cm)^T ∈ Rm+1 with M~c = 0. Then
(4.12) ∑_{i=1}^m ci = 0
and
c0 + ∑_{i=1}^m ci(−log_v(δ(xj, xi)_z)) = 0
for each j ∈ {1, . . . , m}. The latter equation is equivalent to the fact that the function
f : P1Berk → R ∪ {±∞} given by
f(x) := c0 + ∑_{i=1}^m ci(−log_v(δ(x, xi)_z))
satisfies f |∂V ≡ 0. By Example 3.2.9, f ∈ BDV( P1Berk ) and
∆P1Berk(f) = ∆P1Berk(c0) + ∑_{i=1}^m ci · ∆P1Berk(−log_v(δ(x, xi)_z)) = ∑_{i=1}^m ci(δxi − δz) = ∑_{i=1}^m ci δxi,
where the last equation is true by (4.12). Due to ∆P1Berk (f ) = ∑mi=1 ciδxi and rP1Berk ,V (xi) =
xi for each i = 1 , . . . m , we have the identity
(rP1Berk ,V )∗(∆ P1Berk (f )) = ∆ P1Berk (f ).
Proposition 3.1.29 states that f ∈ BDV( V ) and implies
∆V(f) = (rP1Berk,V)∗(∆P1Berk(f)) = ∆P1Berk(f) = ∑_{i=1}^m ci δxi.
Since ∂V = {x1, . . . , x m}, we have ∆V (f ) = 0 . The function f is continuous on V as the sum of continuous functions, i.e. f is strongly harmonic on V . We have already seen that f ≡ 0 on ∂V , so Theorem 4.3.1 ii), the Maximum Principle, says that f ≡ 0
on V . Thus, f is constant on V . Hence, ∑mi=1 ciδxi = ∆ V (f ) = 0 is true by [BR, Lemma 5.24], and so ci = ∆ V (f )( xi) = 0 for each i ∈ { 1, . . . , m }. Further, we get
c0 = f(x) − ∑_{i=1}^m ci(−log_v(δ(x, xi)_z)) = 0 − 0 = 0
for any x ∈ V by the definition of f , i.e. ~c = 0 . Thus ker( M (z)) = {0}.
Theorem 4.4.7 (Poisson Formula, Version I) . Let V be a finite-dendrite domain in
P1Berk with boundary points x1, . . . , xm ∈ HBerk. For every A1, . . . , Am ∈ R, there is a unique solution of the Dirichlet problem which is given as follows: Fix z ∈ P1Berk, and let ~c := (c0, . . . , cm)^T ∈ Rm+1 be the unique solution of the linear equation M(z)~c = (0, A1, . . . , Am)^T (which is possible by the lemma above). Then
f(x) = c0 + ∑_{i=1}^m ci(−log_v(δ(x, xi)_z))
for every x ∈ V. (This should be understood as a limit if z is of type I and x = z ∈ V.) Moreover,
∆V(f) = ∑_{i=1}^m ci δxi.
Proof. First, we show the uniqueness of a solution. Suppose there are two such functions
f1, f2, then f1 − f2 is harmonic on V and f1 − f2 ≡ 0 on ∂V. By the Maximum Principle (Theorem 4.3.1 ii)), f1 − f2 ≡ 0 on V, and so f1 ≡ f2.
Now it remains to show that the given formula satisfies all required properties. By construction, f(xi) = Ai for all i = 1, . . . , m. Proposition 3.2.8 states that the generalized Hsia kernel δ(x, y)_z is continuous in every x ∈ P1Berk, and so f is continuous on V. The function f belongs to the vector space BDV(P1Berk) by Example 3.2.9, and hence to BDV(V) by Proposition 3.1.29. Furthermore, we know by Example 3.2.9 and ∑_{i=1}^m ci = e1^T · (M(z) · ~c) = 0, that
∆P1Berk(f) = ∑_{i=1}^m ci(δxi − δz) = ∑_{i=1}^m ci δxi.
Due to ∂V = {x1, . . . , x m}, Proposition 3.1.29 implies
∆V(f) = (∑_{i=1}^m ci δxi)|V = 0,
i.e. f is strongly harmonic on V , so particularly harmonic. As in Lemma 4.4.6, we have
∆V(f) = (rP1Berk,V)∗(∆P1Berk(f)) = ∆P1Berk(f) = ∑_{i=1}^m ci δxi,
because ∆P1Berk (f ) is supported on ∂V .
Remark. We also have a similar statement in the general case: If X is a smooth strictly k-analytic curve and Y a k-affinoid domain in X, then the restriction map defines an isomorphism from the space of harmonic functions on Y to Hom(∂Y, R) (cf. [Th, Proposition 2.1.12] and [Th, Corollary 3.1.21]).
Remark 4.4.8. With the help of Cramer’s rule, we can give an explicit formula for the coefficients ci for i = 0 , . . . , m ,
ci = det( Mi(z, ~A)) / det( M (z)) ,
where Mi(z, ~A) denotes the matrix which we obtain by replacing the ith column of
M (z) by ~A := (0 , A 1, . . . , A m)T . By the explicit formula for f in Theorem 4.4.7, we have the identity
f (z) = c0 = det( M0(z, ~A)) / det( M (z)) .
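As a numerical illustration of Theorem 4.4.7 and Remark 4.4.8, the following sketch (not from the thesis; function names and the way the Hsia-kernel values are supplied are assumptions) solves the linear system M(z)~c = (0, A1, . . . , Am)^T and evaluates the resulting formula; by the remark above, each ci could equivalently be computed via Cramer's rule.

```python
import numpy as np

def poisson_coefficients(M, A):
    """Solve M(z) c = (0, A_1, ..., A_m)^T as in Theorem 4.4.7.

    M is the Cantor matrix relative to z and A the prescribed boundary
    values; returns c = (c_0, c_1, ..., c_m).
    """
    rhs = np.concatenate(([0.0], np.asarray(A, dtype=float)))
    return np.linalg.solve(M, rhs)

def evaluate_solution(c, neg_log_delta_at_x):
    """f(x) = c_0 + sum_i c_i * (-log_v delta(x, x_i)_z)."""
    return c[0] + float(np.dot(c[1:], neg_log_delta_at_x))
```

Per Remark 4.4.8, the value f(z) is simply the coefficient c[0] returned by this solve.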
Recall that a strict simple domain is a finite-dendrite domain whose boundary points are all of type II. The Poisson formula has the following corollaries:
Corollary 4.4.9. If V is a strict simple domain with ∂V = {x1, . . . , xm} and f a harmonic function on V, then there exist c0, . . . , cm ∈ R and a1, . . . , am ∈ P1(K) not
contained in V such that
f(x) = c0 − ∑_{i=1}^m ci log_v([T − ai]x)
for all x ∈ V .Proof. By a change of coordinates, we are allowed to assume that ∞ is not contained in V . Setting z := ∞, the Poisson formula Theorem 4.4.7 states the existence of
c0, . . . , c m ∈ R with ∑mi=1 ci = 0 such that
f(x) = c0 − ∑_{i=1}^m ci log_v(δ(x, xi)∞)
for all x ∈ V . We will show that we can find for every xi ∈ ∂V a point ai /∈ V of type I such that the path [ai, ∞] passes through xi and x ∨∞ ai = x ∨∞ xi for all x ∈ V . Since
V is connected and xi is of type II, there is a connected component Vi of P1Berk { xi}
such that V ∩ Vi = ∅ and ∞ /∈ Vi. The connected component Vi is open, and so it has to contain a point ai of the dense subset P1(K) of P1Berk . This type I point ai satisfies the required properties. Hence,
δ(x, x i)∞ = diam ∞(x ∨∞ xi) = diam ∞(x ∨∞ ai) = δ(x, a i)∞
for all x ∈ V . By [BR, Corollary 4.2], we have the identity δ(x, a i)∞ = [ T − ai]x on V .Thus,
f(x) = c0 − ∑_{i=1}^m ci log_v([T − ai]x)
for all x ∈ V .
Corollary 4.4.10. The Neumann problem for V is solvable. The solution is unique up to addition of a constant. Proof. Proposition 4.4.3 and Theorem 4.4.7 state that every f ∈ H (V ) belongs to
BDV( V ), has a continuous extension f : V → R, and ∆V (f ) = ∑mi=1 diδxi for suitable
di ∈ R. By [BR, Proposition 5.25], we have
(4.13) 0 = ∆V(f)(V) = ∑_{i=1}^m di.
We have ~∂(f) := (d1, . . . , dm) = 0 if and only if ∆V(f) = ∑_{i=1}^m di δxi = 0, which is equivalent to the fact that f is constant on V ∩ HBerk by [BR, Lemma 5.24]. Since f is continuous on
V , f is constant on V ∩ HBerk if and only if f is constant on V . Consequently, ~∂(f ) = 0
is equivalent to the fact that f is constant on V.
If we have a given vector ~A := (A1, . . . , Am)^T ∈ Rm, we denote the unique solution of the Dirichlet problem for A1, . . . , Am by f~A. Then the following map
L : Rm → Rm, ~A ↦ ~∂(f~A)
is R−linear by the uniqueness of the Poisson formula. Furthermore, one can show that
im(L) = {d ∈ Rm : ∑_{i=1}^m di = 0} =: H.
To see this, we will determine the dimension of the kernel of L. Suppose that ~A ∈ Rm
with ~∂(f ~A) = 0 . We have seen above that this is equivalent to fact that f ~A is constant on
V . Hence, A1 = . . . = Am. But on the other hand, if ~A ∈ Rm with A1 = . . . = Am, then
f ~A is constant on V by the Maximum Principle (Theorem 4.3.1 ii)). Hence, ~∂(f ~A) = 0 ,i.e. ~A ∈ ker( L). Thus, we have the identity ker( L) = Diag( Rm). Therefore,
dim(im( L)) = m − dim(ker( L)) = m − 1.
By Equation (4.13), im( L) ⊂ H, thus the image of L has to coincide with the hyperplane
H. Therefore, for every ~c = ( c1, . . . , c m)T ∈ Rm with ∑mi=1 ci = 0 there exists a
~A = ( A1, . . . , A m)T ∈ Rm such that ~∂(f ~A) = ~c, where f ~A is the unique solution of the Dirichlet problem. This means that f := f ~A is harmonic on V and continuous on V
with
∆∂V(f) = ∆V(f) = ∑_{i=1}^m ci δxi.
If f1, f 2 are two harmonic functions on V which are continuous on V and
∆V(f1) = ∑_{i=1}^m ci δxi = ∆V(f2),
then f1 − f2 ∈ BDV( V ), f1 − f2 is continuous on V and ∆V (f1 − f2) = 0 . Hence, f1 − f2
is constant on V ∩ HBerk by Lemma 5.24 [BR] and so on V . Consequently, the required function is unique up to addition of a constant.
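The following sketch (an illustration under stated assumptions, not part of the thesis) mirrors the linear algebra of Corollary 4.4.10: the coefficients (c1, . . . , cm) of the Dirichlet solution f~A are the last m entries of M(z)^{-1}(0, ~A)^T by Theorem 4.4.7, so the map L can be built column by column and inverted on the hyperplane H; the least-squares solve reflects that ~A is only determined up to an additive constant.

```python
import numpy as np

def neumann_boundary_values(M, c_target):
    """Numerically solve the Neumann problem of Corollary 4.4.10.

    Given the Cantor matrix M = M(z) and target masses c_1, ..., c_m with
    sum zero, find boundary values A_1, ..., A_m whose Dirichlet solution
    f_A satisfies Delta_V(f_A) = sum_i c_i delta_{x_i}.
    """
    m = len(c_target)
    # Column j of L: the coefficients (c_1, ..., c_m) of the solution
    # with boundary data equal to the j-th standard basis vector.
    L = np.column_stack([
        np.linalg.solve(M, np.concatenate(([0.0], np.eye(m)[:, j])))[1:]
        for j in range(m)
    ])
    A, *_ = np.linalg.lstsq(L, np.asarray(c_target, dtype=float), rcond=None)
    return A
```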
4.5 Poisson Formula and the Equilibrium and the Poisson-Jensen Measure
Applying the Poisson formula for each boundary point separately will give us a further description of the solution of the Dirichlet problem. This version of the Poisson formula leads to an easier proof of the uniqueness of the Equilibrium measure in special cases. Moreover, we will define the Poisson-Jensen measure for a finite-dendrite domain V, and show that this measure coincides with the Equilibrium measure relative to any point of V for the compact set ∂V. Furthermore, the second version of the Poisson formula enables us to characterize harmonic functions in a further way.
Definition 4.5.1. i) Let V be a finite-dendrite domain with ∂V = {x1, . . . , x m}.We will call the unique harmonic function on V with a continuous extension on
V and
hi(xj ) = δij ,
which is given by Theorem 4.4.7 the harmonic measure for the boundary component xi of V.
ii) If V is a finite-dendrite domain with ∂V = {x1, . . . , xm} and z ∈ P1Berk, we define the Poisson-Jensen measure μz,V on V relative to the point z as
μz,V = ∑_{i=1}^m hi(z) δxi.
4.5.2. By part ii) of the Maximum Principle, we know that 0 ≤ hi(x) ≤ 1 for all x ∈ V .Since part i) of the Maximum Principle says that hi does not achieve an extremum on
V , the inequality has to be strict, i.e.
0 < h i < 1
on V. Furthermore, h(x) := ∑_{i=1}^m hi(x) is harmonic on V and continuous on V with h(x) = ∑_{i=1}^m hi(x) = 1 for all x ∈ ∂V. Hence, the Maximum Principle part ii) implies that
∑_{i=1}^m hi = 1
on V .
Proposition 4.5.3 (Poisson Formula, Version II) . Let V be a finite-dendrite domain in
P1Berk with ∂V = {x1, . . . , x m} and A1, . . . , A m ∈ R. Then the solution of the Dirichlet problem f with f (xi) = Ai for each i = 1 , . . . , m is given by
f(z) = ∑_{i=1}^m Ai · hi(z)
for all z ∈ V, where hi is the harmonic measure for xi ∈ ∂V.
Proof. Since the functions hi are harmonic on V and continuous on V by construction,
the same is true for the function g(z) := ∑mi=1 Ai · hi(z). Moreover,
g(xj) = ∑_{i=1}^m Ai · hi(xj) = Aj
for all j ∈ { 1, . . . , m }. Thus, the second version of the Poisson formula is a direct consequence of the uniqueness in the first version of the Poisson formula (cf. Theorem 4.4.7).
Remark. By Remark 4.4.8, we have
hi(z) = det( M0(z, ˆei)) / det( M (z))
for each i = 1 , . . . , m, where ˆei ∈ Rm+1 is the vector which is 1 in the (i+1) st component and 0 elsewhere. By the second version of the Poisson formula, we can characterize harmonic functions defined on V . Afterwards, we extend this characterization to harmonic functions on general open sets.
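A small numerical sketch (hypothetical helper, not from the thesis) of the harmonic measures: by the remark above, hi(z) is the c0-coefficient of the system with boundary data êi, so the values can be obtained from the same solve as before; by 4.5.2 they should lie strictly between 0 and 1 and sum to 1, and Version II of the Poisson formula then expresses f(z) as the μz,V-weighted average of the boundary values.

```python
import numpy as np

def harmonic_measures(M):
    """Harmonic measures h_1(z), ..., h_m(z) from the Cantor matrix M = M(z).

    h_i(z) is the c_0-coefficient of the Dirichlet solution with boundary
    data e_i (cf. the remark following Proposition 4.5.3).
    """
    m = M.shape[0] - 1
    return np.array([
        np.linalg.solve(M, np.concatenate(([0.0], np.eye(m)[:, i])))[0]
        for i in range(m)
    ])

# Poisson formula, Version II: f(z) = sum_i A_i * h_i(z), i.e. integration of
# the boundary values against the Poisson-Jensen measure mu_{z,V}.
```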
Corollary 4.5.4. If V is a finite-dendrite domain with ∂V = {x1, . . . , x m}, then a continuous function f : V → R is harmonic on V if and only if
f(z) = ∫_∂V f dμ_{z,V}
for all z ∈ V.
Proof. Since
∫_∂V f dμ_{z,V} = ∑_{i=1}^m f(xi) hi(z),
the corollary follows directly from Proposition 4.5.3. Let U be an open subset in P1Berk . Recall that every simple domain is a finite-dendrite domain. We can characterize harmonic functions on an open set U in the following way:
Corollary 4.5.5. If U is an open subset of P1Berk and f : U → R is a continuous function, then f is harmonic on U if and only if for every simple subdomain V of U
satisfying V ⊂ U we have
f(z) = ∫_∂V f dμ_{z,V}
for all z ∈ V.
Proof. The closures of simple domains form a fundamental system of compact neighborhoods for the topology on P1Berk. The function f therefore is harmonic on U if and
only if its restriction to every simple subdomain of U satisfying V ⊂ U is harmonic by Corollary 4.1.8. Hence, Corollary 4.5.4 implies the claim. Now, we will see that the Poisson-Jensen measure μζ,V coincides with the Equilibrium measure μζ for ∂V relative to ζ. On the way to that, we need the following lemma, which gives also a simpler proof of the uniqueness of the Equilibrium measure for ∂V
relative to ζ.
Lemma 4.5.6. Let V be a finite-dendrite domain in P1Berk , ζ ∈ V and ν ∈ P(e) for
e := ∂V.
i) Then the following are equivalent:
a) Iζ(ν) = Vζ(e).
b) The potential function
uν(x) := uν(x, ζ) = ∫ −log_v δ(x, y)_ζ dν(y)
is constant on ∂V.
ii) The Equilibrium measure μζ for ∂V relative to ζ is the unique probability measure satisfying these equivalent conditions.
Proof. Since e is finite and contained in HBerk, the set e is compact and has positive capacity by Lemma 4.3.5. Thus, the Equilibrium measure μζ ∈ P(e) exists by [BR, Proposition 6.6] and is supported on e = ∂V by [BR, Proposition 6.8]. Let ∂V =
{x1, . . . , x m}, then any probability measure ν ∈ P(e) is supported on e = {x1, . . . , x m},i.e. ν = ∑mi=1 νiδxi for νi = ν(xi) ∈ R. Hence, ∑mi=1 νi = ν(P1Berk ) = 1 and νi = ν(xi) ≥
0 for all i = 1 , . . . , m.
At first, we will show that a) implies b). Next, we will use this direction to show that there is a unique ν ∈ P(e) satisfying b). Afterwards, we use this uniqueness to verify the other direction. Then part i) and ii) are true. If ν ∈ P(e) satisfies a), then ν is an Equilibrium measure for e relative to ζ. Since
e ⊂ HBerk , every non-empty subset of e has positive capacity. Hence, uν (x) = Vζ (e) for every x ∈ e by the Frostman’s theorem [BR, Proposition 6.18]. Thus, uν is constant on e = ∂V .Let ν ∈ P(e) such that statement b) is satisfied. The potential function uν is constant on ∂V if and only if
M(ζ)(ν0, . . . , νm)^T = (∑_{i=1}^m νi, ν0 + uν(x1), . . . , ν0 + uν(xm))^T = (1, 0, . . . , 0)^T
for some ν0 ∈ R. By Lemma 4.4.6, M (ζ) is non-singular, so there is a unique ~ν ∈ Rm+1
such that b) is satisfied. Hence, the probability measure ν is unique.
To prove the other direction, we will use the uniqueness we have just shown. Let ν ∈ P(e) such that b) is true. At the beginning, we have seen that there exists an Equilibrium measure μζ for e relative to ζ, which is contained in P(e) and satisfies a) by definition. Hence, b) is true for μζ by the first direction. Due to the uniqueness, μζ has to coincide with ν, and so ν satisfies a). With this lemma we can show that the Poisson-Jensen measure and the Equilibrium measure coincide:
Proposition 4.5.7. Let V be a finite-dendrite domain with ∂V = {x1, . . . , x m}, and let μ = μz,V be the Poisson-Jensen measure for V relative to a point z ∈ V . Then μ is the Equilibrium measure for ∂V relative to z.
We will verify that μ satisfies condition b) in Lemma 4.5.6:
Proof. For the Poisson-Jensen measure μ = ∑_{i=1}^m hi(z) δxi, the potential function
uμ(x) = ∫ −log_v(δ(x, y)_z) dμ(y) = ∑_{i=1}^m −log_v(δ(x, xi)_z) · hi(z)
is continuous on V since the generalized Hsia kernel δ(x, y )z is continuous in x by Proposition 3.2.8. We have seen in Example 3.2.11 that uμ(z) ∈ BDV( P1Berk ) and
∆( uμ) = μ−δz . By Lemma 3.1.29, uμ belongs to BDV( V ), and so uμ ∈ C (V )∩BDV( V ).
Further, (4.14) ∆V (uμ) = rV ∗(μ − δz ) = μ − δz .
due to x1, . . . , x m, z ∈ V . Set νi = δxi − δx1 for i = 1 , . . . , m , then νi is a finite signed Borel measure on V such that νi(V ) = 0 . By [BR, Proposition 5.28], there is a one-to-one correspondence between finite signed Borel measures of total mass zero on V
and functions h ∈ BDV(V) modulo constant functions. Thus, there are fi ∈ BDV(V), which are unique up to additive constants, such that
(4.15) ∆V(fi) = νi
for i = 1 , . . . , m . Since νi is supported on ∂V for all i = 1 , . . . , m , the functions fi
are all strongly harmonic on V. We have seen in Proposition 4.4.3 that any strongly harmonic function h on the finite-dendrite domain V can be extended to a continuous function on V. So we can extend the function fi such that fi ∈ C(V) ∩ BDV(V) for each i = 1, . . . , m. Above we have shown that uμ also belongs to C(V) ∩ BDV(V). Therefore, the following integrals exist and coincide by [BR, Corollary 5.39]:
(4.16) ∫_V fi ∆V(uμ) = ∫_V uμ ∆V(fi).
Let i = 1 , . . . , m , then
uμ(xi) − uμ(x1) = ∫_V uμ dνi = ∫_V uμ ∆V(fi)
by the definition of νi := δxi − δx1 and Equation (4.15). Applying (4.16) and then (4.14) leads to the identity
uμ(xi) − uμ(x1) = ∫_V fi ∆V(uμ) = (∫_V fi dμ) − fi(z).
Finally, we have uμ(xi) − uμ(x1) = 0 for each i = 1 , . . . , m by Corollary 4.5.4. Thus,
uμ is constant on ∂V .
Remark. i) One can generalize the last Proposition for an arbitrary domain U if you require that ∂U has positive capacity. The proof of this generalization uses Green functions, and can be found in [BR, Proposition 7.43]. ii) By Proposition 4.5.7 and the proof of Lemma 4.5.6, μz,V is the unique measure
μ supported on ∂V such that
(4.17) M(z)(μ0, μ(x1), . . . , μ(xm))^T = (1, 0, . . . , 0)^T ∈ Rm+1
for some μ0 ∈ R.
The last Remark and Cramer's rule provide a further explicit formula for the harmonic measure hi:
Corollary 4.5.8. Let Mi(z) denote the matrix obtained by replacing the ith column of
M (z) by (1 , 0, . . . , 0) T ∈ Rm+1 . Then the harmonic measure hi(z) for xi ∈ ∂V is given by
hi(z) = det( Mi(z)) / det( M (z))
for each i = 1 , . . . , m .Proof. Let μ denote the Poisson-Jensen measure which is given by μ = ∑mi=1 hi(z)δxi .
So we have hi(z) = μ(xi), and Equation (4.17) implies the formula.
4.6 Uniform Convergence
In the complex potential theory, it follows immediately from the Poisson formula that the limit of a sequence of harmonic functions on a domain which are converging locally uniformly is a harmonic function on the domain (cf. [Ra, Corollary 1.2.8]). In this section, we will see that this is also true in the potential theory on P1Berk, even under a much weaker condition than is required classically. This fact is, as in the classical theory, a direct consequence of the Poisson formula. In Section 4.4, we have seen that
every harmonic function on a strict simple domain can be described by functions of the form log v([ T − ai]x) for ai ∈ K. At the end of this section, we extend this description to a harmonic function on an arbitrary domain using uniform convergence.
Proposition 4.6.1. Let U be an open subset of P1Berk and f1, f 2, . . . harmonic functions on U converging pointwise to a function f : U → R. Then f is harmonic on U , and the fi converge uniformly to f on compact subsets of U .Proof. Consider a x ∈ U , then we can choose a simple domain Ux containing x such that
Ux ⊂ U . The functions fk are harmonic on Ux ⊂ U by Lemma 4.1.7 and continuous on
Ux because they are continuous on U by definition. Note that every simple domain is a finite-dendrite domain. Let ∂U x = {x1, . . . , x m}. The uniqueness in the second version of the Poisson formula, Proposition 4.5.3, implies that for each k ≥ 1 the function fk
is given in the following way
fk(z) = ∑_{i=1}^m fk(xi) hi(z)
for all z ∈ Ux. We have required that the sequence fk converges pointwise to a function f on U, so fk(xi) converges to f(xi) for each i = 1, . . . , m. Hence, fk(z) = ∑_{i=1}^m fk(xi) hi(z) converges uniformly to f(z) = ∑_{i=1}^m f(xi) hi(z) on Ux. The first version of the Poisson formula, Theorem 4.4.7, states that the harmonic measures hi are strongly harmonic on Ux, and so f is strongly harmonic on Ux as well. Thus, f is harmonic on U.
Every compact set E ⊂ U can be covered by finitely many domains Ux. Therefore, the sequence f1, f2, . . . converges uniformly to f on E.
Corollary 4.6.2. If U is a finite-dendrite domain, a sequence of harmonic functions
f1, f 2, . . . converges pointwise to a function f : U → R if and only if the sequence fi
converges uniformly to f .Proof. As we have seen in the proof above, fk(z) = ∑mi=1 fk(xi)hi(z) on U , where
∂U = {x1, . . . , xm}. Since fk converges pointwise to f, the sequence fk converges uniformly to f(z) = ∑_{i=1}^m f(xi) hi(z) as well.
With the help of Corollary 4.4.9 we can describe a harmonic function on a domain in the following way:
Proposition 4.6.3. If U is a domain and f is harmonic on U , there are rational functions g1(T ), g 2(T ), . . . ∈ K(T ) and rational numbers R1, R 2, . . . ∈ Q such that
f(x) = lim_{k→∞} Rk · log_v([gk]x)
uniformly on compact subsets of U .
Proof. If the main dendrite of U is empty, the harmonic function f on U is constant by Proposition 4.2.6. Let c ∈ R such that f ≡ c on U . Since Q ⊂ R is dense, there is a sequence (Rk)k∈N ⊂ Q such that
f(x) = lim_{k→∞} Rk
for all x ∈ U . Let α be a constant in K such that |α| = qv, then the claim is true with
gk ≡ α for all k ∈ N.Now we assume that the main dendrite is non-empty. Therefore, we can change co-ordinates if ∞ ∈ U , and so we are allowed to assume that ∞ is not contained in the domain U . By Corollary 4.2.4, we can consider an exhaustion (Uk)k≥1 of U , where Uk
are strict simple domains and Uk ⊂ U for k ≥ 1, and each fk is harmonic on Uk by Corollary 4.1.8. Let ∂U k = {xk, 1, . . . , x k,m k }, then by Corollary 4.4.9 for each k ≥ 1
there are ck,0, . . . , ck,mk ∈ R with ∑_{i=1}^{mk} ck,i = 0 and points ak,1, . . . , ak,mk ∉ Uk of type I such that
f(x) = ck,0 − ∑_{i=1}^{mk} ck,i log_v([T − ak,i]x)
for all x ∈ Uk.
Next, we will construct a sequence (fk)k≥1 of functions on U converging uniformly to f on compact subsets of U. Afterwards, we will verify that these functions coincide with the functions in the claim. First, we show that the function hk,i(x) := log_v(δ(x, ak,i)∞) = log_v([T − ak,i]x) is bounded on Uk. The last identity is true by [BR, Corollary 4.2]. The function is continuous by [BR, Proposition 4.1]. Since ak,i ∉ Uk and ∞ ∉ Uk, x ∨∞ ak,i cannot be a point of type I. Hence, δ(x, ak,i)∞ = diam∞(x ∨∞ ak,i) ∈ R>0, i.e. hk,i is real valued on Uk for each i = 1, . . . , mk. Thus, log_v(δ(x, ak,i)∞) is bounded on the compact set Uk by constants λk,i. Set λk := max_{i=1,...,mk} λk,i, and choose rational numbers dk,i such that ∑_{i=1}^{mk} dk,i = 0, and |dk,i − ck,i| < 1/(λk · mk · 2^k) for i = 1, . . . , mk, and |dk,0 − ck,0| < 1/2^k. Define
fk(x) := dk,0 − ∑_{i=1}^{mk} dk,i hk,i(x),
then
|fk(x) − f(x)| = |dk,0 − ∑_{i=1}^{mk} dk,i hk,i(x) − ck,0 + ∑_{i=1}^{mk} ck,i hk,i(x)|
≤ |dk,0 − ck,0| + ∑_{i=1}^{mk} |dk,i − ck,i| · |hk,i(x)|
< 1/2^k + mk · λk · 1/(λk · mk · 2^k) = 1/2^{k−1} ≤ 1/k
for each x ∈ Uk. Further, |fn(x) − f (x)| < 1/n ≤ 1/k for all n ≥ k since Uk ⊂ Un.Therefore, the sequence (fk+l)l∈N converges uniformly to f on Uk for all k ≥ 1. Thus,
(fk) converges uniformly to f on compact sets of U . It remains to show that the sequence (fk) has the form from the claim. Let Nk be the common denominator for the
dk,i and put nk,i = Nk · dk,i ∈ Z. Then we can find a constant bk ∈ K with |bk| = q_v^{−nk,0}. Setting
gk(T) := bk · ∏_{i=1}^{mk} (T − ak,i)^{nk,i},
we get the following identity on Uk:
fk(x) = dk,0 − ∑_{i=1}^{mk} dk,i log_v([T − ak,i]x)
= −(1/Nk) · (−nk,0 + ∑_{i=1}^{mk} log_v([T − ak,i]x^{nk,i}))
= −(1/Nk) · log_v(q_v^{−nk,0} · ∏_{i=1}^{mk} [T − ak,i]x^{nk,i})
= −(1/Nk) · log_v([bk]x · ∏_{i=1}^{mk} [T − ak,i]x^{nk,i})
= −(1/Nk) · log_v([gk]x).
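The approximation step in this proof (choosing rationals dk,i close to the real coefficients ck,i, with sum exactly zero and a common denominator) can be illustrated by the following sketch; the function name and the particular rounding scheme are hypothetical, and the tolerance plays the role of the bound 1/(λk · mk · 2^k) used above.

```python
from fractions import Fraction

def rational_approximation(c, tol):
    """Approximate reals c_1, ..., c_m with sum 0 by rationals d_i with a
    common denominator, sum exactly 0 and |d_i - c_i| <= tol."""
    m = len(c)
    N = int(m / tol) + 1                      # common denominator with 1/N <= tol/m
    d = [Fraction(round(ci * N), N) for ci in c[:-1]]
    d.append(-sum(d, Fraction(0)))            # last entry absorbs the rounding error
    return d
```

Since each rounded entry is off by at most 1/(2N) ≤ tol/(2m), the absorbed error in the last entry is below tol as well, which is all the proof needs.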
4.7 Harnack’s Principle
In the classical potential theory we have Harnack's principle which describes the behavior of an ordered sequence of harmonic functions on a domain in C∞, where C∞ is the Riemann sphere. The principle says that either the sequence converges locally uniformly to ∞, or it converges locally uniformly to a harmonic function on the domain (cf. [Ra, Theorem 1.3.9]). In this section, an analogue of Harnack's principle is given. Note that we do not require that the sequence has to be non-negative as in [BR]. To prove the principle we will first give an analogue of Harnack's inequality, which is needed in the classical theory as well.
Lemma 4.7.1 (Harnack’s Inequality) . Let U ⊂ P1Berk be a domain. Then for each
x0 ∈ U and each compact set X ⊂ U , there is a constant C = C(x0, X ) such that for any harmonic function h which is non-negative on U
(4.18) (1 /C ) · h(x0) ≤ h(x) ≤ C · h(x0)
is satisfied for all x ∈ X.
Proof. If the main dendrite D of U is empty, h ≡ h(x0) on U. Thus, Harnack's inequality (4.18) is true for all C ≥ 1. So we may assume that D ≠ ∅. If h(x0) = 0, our harmonic function h achieves a minimum on U since we have required that h is non-negative. Hence, the harmonic function h has to be constant with h ≡ 0 on U by the Maximum Principle. Again, Inequality (4.18) is true for all C ≥ 1. Therefore, it remains to consider the case where D ≠ ∅ and h(x0) > 0. We have seen in Proposition 4.2.6 that there is a point ω ∈ D such that h(ω) = h(x0), so we may assume that x0
is contained in the main dendrite D.We start with the upper bound in (4.18). Let ρ(x, y ) be the logarithmic path distance on P1Berk . By Proposition 4.2.3, the main dendrite D is finitely branched at every point
p ∈ D, i.e. there is an ε > 0 such that the closed neighborhood of p in D defined by
Γ( p, ε ) = {x ∈ D| ρ(x, p ) ≤ ε} is a star. This means that Γ( p, ε ) is the union of n closed segments of length ε emanating from p for some n ≥ 2,
Γ(p, ε) = ⋃_{i=1}^n [p, qi],
where qi are the endpoints which can be written as qi = p+ε~ vi for i = 1 , . . . , n . We take
ε as large as possible such that ε ≤ 1. As in the proof of Proposition 4.4.3, the harmonic function h is linear on each of the segments [p, q i] and ∆Γ( p,ε )(h)( p) = − ∑ni=1 d~vi h(p) = 0. Consider a point x = p + t · ~vi ∈ [p, q i]. Since the restriction of h to each segment is linear, the one-sided derivatives can be written as
d~vi(h) = (h(qi) − h(p))/ε = (h(x) − h(p))/t.
Hence,
0 = ∑_{j=1}^n d~vj(h) = (h(x) − h(p))/t + ∑_{j≠i} (h(qj) − h(p))/ε.
This equality and the fact that h(qj ) ≥ 0 for each j ∈ { 1, . . . , n } imply the following inequality
h(x) = −(∑_{j≠i} (h(qj) − h(p))/ε) · t + h(p) ≤ ∑_{j≠i} h(p) · (t/ε) + h(p) ≤ (n − 1)h(p) + h(p) = h(p) · n.
So h(x) ≤ Cp · h(p) for each x ∈ Γ(p, ε) where Cp := n. Now we will use the compactness
of X to get the upper bound for all x ∈ X. Since X is compact, there is a finite subgraph
Γ of D such that the retraction of X to D is contained in the interior of Γ. This means there exists a finite subgraph Γ ⊂ D such that rP1Berk,D(X) ⊂ Γ0, where Γ0 denotes the interior of Γ. If x0 is not contained in Γ, we can consider the union of the segment [x0, rΓ(x0)] and
Γ instead of Γ. Since Γ is compact, there is a finite number of stars which cover Γ, i.e.
Γ ⊂ ⋃mi=1 Γ( pi, ε i). Starting at the point p = x0 ∈ Γ and proceeding stepwise, we get
h(x0) ≤ C · h(x)
for all x ∈ Γ, where C := ∏mi=1 Cpi . Since h(x) = h(rP1Berk ,D (x)) for each x ∈ X by Proposition 4.2.6, the upper bound holds for all x ∈ X.For the lower bound, let {x1, . . . , x m} be the set of endpoints of Γ. Then U 0Γ := r−1Γ (Γ 0)
defines a subdomain of U with ∂U 0Γ = {x1, . . . , x m}. Let CΓ,i be the constant which we have constructed above satisfying h(x) ≤ CΓ,i · h(xi) on Γ and so on X, for each
i = 1, . . . , m. Taking C′Γ := max_{i=1,...,m} CΓ,i, then
h(x0) ≤ C′Γ · h(xi)
for each i = 1 , . . . , m . Since h is harmonic on U and U 0Γ ⊂ U , h is harmonic on U 0Γ and continuous on U 0Γ. Thus,
min( h(x1), . . . , h (xm)) ≤ h(x)
by the Maximum Principle for each x ∈ U 0Γ. As rP1Berk ,D (X) ⊂ Γ0,
rP1Berk ,Γ(X) = rD, Γ(rP1Berk ,D (X)) ⊂ Γ0,
and so X ⊂ U 0Γ. Altogether,
h(x0) ≤ C′Γ · min_{i=1,...,m} h(xi) ≤ C′Γ · h(x)
for all x ∈ X. Putting C := max(CΓ, C′Γ), we have
(1/C) · h(x0) ≤ h(x) ≤ C · h(x0)
for all x ∈ X.
Theorem 4.7.2 (Harnack’s Principle) . Let U ⊂ P1Berk be a domain and f1, f 2, . . .
harmonic functions on U with f1 ≤ f2 ≤ . . . . Then either i) lim i→∞ fi(x) = ∞ for each x ∈ U , or ii) f (x) = lim i→∞ fi(x) is finite for all x ∈ U , the fi converge uniformly to f on compact subsets of U , and f is harmonic on U .
Proof. First, we consider the case of a non-negative sequence 0 ≤ f1 ≤ f2 ≤ . . . .Suppose that i) is not true, i.e. there is some x0 ∈ U such that lim i→∞ fi(x0) is finite. Then for any x ∈ U , we can apply Lemma 4.7.1 to the compact set X := {x} ⊂ U .Thus, there is a constant C such that
(1 /C ) · fi(x0) ≤ fi(x) ≤ C · fi(x0)
for all i = 1 , 2, . . . . Since the sequence fi is bounded in x0, it is also bounded in our arbitrary x in U . Hence, the increasing sequence (fi) converge pointwise to f with
f(x) = lim_{i→∞} fi(x) < ∞ for each x ∈ U. We have seen in Proposition 4.6.1 that f is harmonic on U and the fi converge uniformly to f on compact subsets of U.
Now, let f1 ≤ f2 ≤ . . . be a sequence of harmonic functions on U which is not assumed to be non-negative. Then we can apply the first case to the sequence
0 ≤ f2 − f1 ≤ f3 − f1 ≤ . . .
of harmonic functions on U . Since f1(x) ∈ R for each x ∈ U , we either have lim i→∞ fi(x) =
∞ for all x ∈ U, or f(x) = lim_{i→∞} fi(x) is finite for all x ∈ U as well. The remainder of the proof did not use non-negativity, so the claim is also true in the general case.
Remark 4.7.3. If lim i→∞ fi(x) = ∞ for each x ∈ U , then the fi converge uniformly to ∞ on compact subsets of U as well. This is a direct consequence of Harnack’s inequality.
Remark. In the general case, Harnack's principle can be found in [Th, Proposition 3.1.2].
5 The link to smooth functions on analytic curves
In this chapter, we consider smooth functions and link them with harmonic functions. Antoine Chambert-Loir and Antoine Ducros introduced smooth functions on Berkovich analytic spaces and defined differential operators d′ and d′′ for them in [CD]. In particular, we study smooth functions and the operators d′ and d′′ on the analytification of an algebraic variety X over K following Walter Gubler in his paper [Gu]. This raises the question of whether there is a link between harmonic functions and smooth functions which belong to the kernel of d′d′′. Thuillier introduced in [Th] harmonic functions on an arbitrary strictly analytic smooth curve X and stated in [Th, Théorème 2.3.21] two explicit conditions under which all harmonic functions are locally given by functions of the form log|f| where f ∈ O×X. This result leads us to establish a connection between smoothness and functions of the form log|f| where f ∈ O×X in Chapter 5.2. We will see that a function is smooth and belongs to the kernel of d′d′′ if and only if it can locally be written as a linear combination of the functions just mentioned. If X is the projective line P1K, the same can be shown for harmonic functions using some results from Chapter 4. Hence, the harmonic functions on P1Berk coincide with the smooth functions contained in ker d′d′′. To find an answer in the general case, i.e. the analytification of a smooth algebraic variety X, we introduce Thuillier's definition of harmonic functions in Chapter 5.3, show that his definition is an extension of the one made in Chapter 4 and give the proof of [Th, Théorème 2.3.21]. At the end, we construct a smooth algebraic curve X over K such that one can find an open subset
W of Xan and a harmonic function on W which is not smooth.
5.1 Differential forms and smooth functions on Xan
In this section, we consider an algebraically closed field K endowed with a non-trivial complete non-archimedean absolute value | |. Let X be an algebraic variety over K, i.e. X is an irreducible separated reduced scheme of finite type. To define smooth functions on Xan we introduce differential forms on the algebraic variety X. First, we recall (p, q)-superforms on open subsets of Rr which were introduced originally by Lagerberg in [La, §2]. This theory of superforms leads to superforms on polyhedral complexes developed in [CD]. With the help of Bieri-Groves one can give a definition of differential forms on algebraic varieties (cf. [Gu13]). We will see that a differential form of bidegree (0, 0) indeed defines a continuous function f : Xan → R, and so we
can define smooth functions as differential forms of bidegree (0 , 0) .
Definition 5.1.1. i) Let U be an open subset of Rr, then a superform of bidegree
(p, q ) on U is an element of
Ap,q (U ) := C∞(U ) ⊗R ΛpRr∗ ⊗R ΛqRr∗.
If we choose a basis x1, . . . , x r of Rr, a superform α of bidegree (p, q ) can be written as
α = ∑_{|I|=p, |J|=q} αIJ d′xI ∧ d′′xJ
where I (resp. J) consists of i1 < · · · < i p (resp. j1 < · · · < j q) with i1, . . . , i p, j 1, . . . , j q ∈{1, . . . , r }, αIJ ∈ C∞(U ) and
d′xI ∧ d′′ xJ := ( dx i1 ∧ · · · ∧ dx ip ) ⊗ (dx j1 ∧ · · · ∧ dx jq ).
There is a natural alternating wedge product Ap,q(U) × Ap′,q′(U) → Ap+p′,q+q′(U) with (α, β) ↦ α ∧ β.
ii) There are differential operators d′ : Ap,q(U) → Ap+1,q(U) given by
d′α := ∑_{|I|=p, |J|=q} ∑_{i=1}^r (∂αIJ/∂xi) d′xi ∧ d′xI ∧ d′′xJ,
and d′′ : Ap,q(U) → Ap,q+1(U) given by
d′′α := ∑_{|I|=p, |J|=q} ∑_{j=1}^r (∂αIJ/∂xj) d′′xj ∧ d′xI ∧ d′′xJ.
Remark 5.1.2. Within the context of this thesis we are interested in the composition
d′d′′ : A0,0(U ) → A1,1(U ). Hence, we will just work with A0,0(U ) = C∞(U ), A1,0(U ) =
C∞(U ) ⊗R Rr∗, A 0,1(U ) = C∞(U ) ⊗R Rr∗, and A1,1(U ) = C∞(U ) ⊗R Rr∗ ⊗R Rr∗. In particular, the differential operators are given for any f ∈ A0,0(U ) = C∞(U ) as follows
d′f = ∑_{i=1}^r (∂f/∂xi) d′xi,   d′′f = ∑_{j=1}^r (∂f/∂xj) d′′xj,
and hence
(5.1) d′d′′f = ∑_{i,j∈{1,...,r}} (∂²f/∂xj∂xi) d′xi ⊗ d′′xj.
Recall that the linear map dxi : Rr → R sends v = ∑_{k=1}^r λk xk to the coefficient λi with respect to the basis x1, . . . , xr.
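The following short symbolic sketch (not part of the original text; the helper name is hypothetical) computes the coefficient matrix of d′d′′f from Equation (5.1) and illustrates the affine criterion stated in Lemma 5.1.3 below.

```python
import sympy as sp

def dprime_dsecond_matrix(f, variables):
    """Coefficient matrix (d^2 f / dx_j dx_i) of d'd''f from Equation (5.1).

    f is a sympy expression; the (i, j) entry is the coefficient of
    d'x_i (x) d''x_j, so the matrix vanishes exactly for affine f.
    """
    return sp.Matrix([[sp.diff(f, xj, xi) for xj in variables] for xi in variables])

x1, x2 = sp.symbols("x1 x2")
print(dprime_dsecond_matrix(3*x1 - 2*x2 + 5, [x1, x2]))   # zero matrix
print(dprime_dsecond_matrix(x1**2 + x1*x2, [x1, x2]))     # non-zero Hessian
```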
Lemma 5.1.3. Let U ⊂ Rr be an open set and f ∈ C∞(U ). Then f is affine on U if and only if d′d′′ f = 0 .Proof. This is a direct consequence of Equation (5.1). Next, we will define superforms on polyhedral complexes following [Gu13, §3] which are used later for the definition of differential forms on algebraic varieties.
Definition 5.1.4. i) A polyhedron in Rr is the intersection of finitely many half-spaces Hi := {w ∈ Rr : 〈ui, w〉 ≤ ci} with ui ∈ Rr∗.
ii) A polyhedral complex C in Rr is a finite set of polyhedra in Rr satisfying the following two properties:
a) If τ is a face of a polyhedron σ ∈ C, then τ ∈ C.
b) If σ, τ ∈ C , then σ ∩ τ is a closed face of both.
Definition 5.1.5. Let C be a polyhedral complex in Rr.i) We say that C is of dimension n if the maximal dimension of its polyhedra is n.
C is called pure dimensional of dimension n if every maximal polyhedron in C
has dimension n.ii) The support |C | of C is the union of all polyhedra in C .iii) Let σ ∈ C , then Aσ denotes the affine space which is spanned by σ and Lσ denotes the corresponding linear subspace of Rr.iv) An open subset Ω of |C | is called polyhedrally star shaped with center z if there is a polyhedral complex D such that Ω is an open subset of D and for all maximal
σ ∈ D the set σ ∩ Ω is star shaped with center z in the sense that for all x ∈ σ ∩ Ω
and for all t ∈ [0 , 1] the point z + t(x − z) is contained in σ ∩ Ω (cf. [Je, Definition 2.13]).
Definition 5.1.6. Let C be a polyhedral complex and Ω an open subset of |C |. Then a superform α ∈ Ap,q (Ω) of bidegree (p, q ) on Ω is given by a superform α′ ∈ Ap,q (V )
where V is an open subset of Rr with V ∩ | C | = Ω . Two forms α′ ∈ Ap,q (V ) and
α′′ ∈ Ap,q (W ) with V ∩ | C | = W ∩ | C | = Ω define the same form in Ap,q (Ω) if we have for each σ ∈ C
〈α′(x); v1, . . . , v p, w 1, . . . , w q〉 = 〈α′′ (x); v1, . . . , v p, w 1, . . . , w q〉
for all x ∈ σ ∩ Ω, v1, . . . , v p, w 1, . . . , w q ∈ Lσ. If this is true, we say that the restrictions
α′|σ and α′′ |σ agree. If α ∈ Ap,q (Ω) is given by α′ ∈ Ap,q (V ) we write
α′|Ω = α.
5.1.7. Let F : Rr′ → Rr be an affine map. If C′ is a polyhedral complex in Rr′ and C a polyhedral complex in Rr with F(|C′|) ⊂ |C|, then the pullback F∗ : Ap,q(|C|) → Ap,q(|C′|) is well-defined and compatible with the differential operators d′ and d′′. Hence, we also have differential operators d′ and d′′ on Ap,q(|C|), given by the restriction of the corresponding operators on Ap,q(Rr).
To introduce (p, q)-forms on Xan, we first recall the analytification of X and define tropical charts of Xan.
Definition 5.1.8 (Analytification of X). Let U = Spec( A) be an open affine subset of X, then let U an be the set of all multiplicative seminorms on A extending | | on
K, endowed with the topology generated by the functions U an → R; p 7 → p(a) with a
ranging over A. By glueing, we get a topological space Xan which is connected locally compact and Hausdorff. We can endow it with a sheaf of analytic functions leading to a Berkovich analytic space over K which we call the analytification of X. We refer to [Be] for a more detailed definition and the fundamental properties of Xan .
5.1.9. If ϕ : Y → X is a morphism of algebraic varieties over K, we get an analytic morphism
ϕan : Y an → Xan
induced by composing the multiplicative seminorms with ϕ] on suitable affine open subsets.
Definition 5.1.10. Let T := Grm be the split multiplicative torus of rank r with coordinates z1, . . . , zr.
i) We define the tropicalization map by
trop : Tan → Rr, p ↦ (−log p(z1), . . . , −log p(zr)).
ii) Let Y be a closed subvariety of T . The tropical variety associated to Y is defined by
Trop( Y ) := trop( Y an ).
Remark 5.1.11. The tropicalization map is continuous.
Definition 5.1.12. Let U be an open subset of the algebraic variety X.i) A moment map is a morphism ϕ : U → Grm.ii) The tropicalization of ϕ is defined by
ϕtrop := trop ◦ ϕan : U an → Rr.
iii) Let U′ ⊂ U be another open subset of X and ϕ′ : U′ → Gr′m a moment map. We say that ϕ′ refines ϕ if there is an affine homomorphism of tori (i.e. a group homomorphism composed with a multiplicative translation) ψ : Gr′m → Grm such that ϕ = ψ ◦ ϕ′ on U′.
Remark 5.1.13. If a moment map ϕ′ : U′ → Gr′m refines a moment map ϕ : U → Grm, the map ψ : Gr′m → Grm from above induces an affine map Trop(ψ) : Rr′ → Rr with ϕtrop = Trop(ψ) ◦ ϕ′trop on (U′)an.
Definition 5.1.14. If U is an open affine subset of X, one can construct a canonical moment map ϕU which is canonical up to multiplicative translation by an element of
TU (K) and coordinate change: The abelian group MU := O(U )×/K × is free of finite rank by [Sa, Lemme 1] and we choose representatives ϕ1, . . . , ϕ r in O(U )× of a basis. Due to
HomK−Sch(U, Grm) = HomK−Alg.(Γ(Grm, OGrm), Γ(U, OU)) = HomK−Alg.(K[z1±1, . . . , zr±1], Γ(U, OU)) = (Γ(U, OU)×)r,
this leads to a moment map ϕU : U → Grm. We will write TU for the canonical tori Grm.By construction, ϕU refines all moment maps of U .
Definition 5.1.15. An open subset U of X is called very affine if U has a closed embedding into a multiplicative torus.
Remark 5.1.16. The very affine open subsets of X form a basis for the Zariski topol-ogy. If U is a very affine open subset of X, the canonical moment map ϕU from 5.1.14 is a closed embedding. These properties are stated in [Gu13, 4.13].
Definition 5.1.17. i) For a very affine open subset U of X we define
trop U := ( ϕU )trop ,
and
Trop( U ) := trop U (U an ).
ii) A tropical chart (V, ϕ U ) on Xan consists of an open subset V of Xan contained in U an for a very affine open subset U of X with
V = trop−1U(Ω)
for some open subset Ω of Trop(U).
iii) We say that the tropical chart (V′, ϕU′) is a tropical subchart of (V, ϕU) if V′ ⊂ V
and U ′ ⊂ U .
Remark. i) If (V, ϕ U ) is a tropical chart on Xan as in the definition above, trop U (V ) = Ω is open in Trop( U ).
ii) The tropical charts form a basis of Xan , i.e. for every open subset W of Xan and for every element x in W there is a tropical chart (V, ϕ U ) such that x ∈ V ⊂ W
(cf. [Gu13, Proposition 4.16 a)]).
With the help of Bieri-Groves, we can introduce differential forms on algebraic varieties:
Proposition 5.1.18. If X is an algebraic variety of dimension n over K and U is a very affine open subset of X, then Trop( U ) is the support of an R-affine polyhedral complex of pure dimension n.Proof. A reference and further explanations are given in [Gu12, Theorem 3.3].
5.1.19. The last proposition allows us to consider a superform α ∈ Ap,q (trop U (V )) for a tropical chart (V, ϕ U ) of Xan . Let (V ′, ϕ U ′ ) be another tropical chart of Xan , then
(V ∩ V ′, ϕ U ∩U ′ ) is a tropical subchart of both by [Gu13, Proposition 4.16]. We get a canonical homomorphism ψU,U ∩U ′ : Gsm → Grm of the underlying tori with
ϕU = ψU,U ∩U ′ ◦ ϕU ∩U ′
on U ∩ U ′ and an associated affine map Trop( ψU,U ∩U ′ ) : Rs → Rr such that
trop U = Trop( ψU,U ∩U ′ ) ◦ trop U ∩U ′
and the tropical variety Trop( U ∩ U ′) is mapped onto Trop( U ) (cf. [Gu13, 5.1]). We define the restriction of α to trop U ∩U ′ (V ∩ V ′) as
Trop( ψU,U ∩U ′ )∗α ∈ Ap,q (trop U ∩U ′ (V ∩ V ′))
and write α|V ∩V ′ .
Definition 5.1.20. i) A differential form α of bidegree (p, q ) on an open subset W
of Xan is given by a covering (Vi)i∈I of W by tropical charts (Vi, ϕ Ui ) of Xan and superforms αi ∈ Ap,q (trop Ui (Vi)) such that
αi|Vi∩Vj = αj |Vi∩Vj
for every i, j ∈ I.If α′ is another differential form of bidegree (p, q ) on W given by α′
j
∈ Ap,q (trop U ′
i
(V ′
i
))
with respect to the tropical charts (V ′
i
, ϕ U ′
i
) covering W , then we consider α and
α′ as the same differential forms if and only if
αi|Vi∩V ′
j
= α′
j
|Vi∩V ′
j
for every i ∈ I and j ∈ J.ii) We denote the space of (p, q )-differential forms on an open subset W of Xan by
Ap,q (W ).iii) If α ∈ Ap,q (W ) is given by a covering of tropical charts (Vi, ϕ Ui ) and superforms
αi ∈ Ap,q (trop Ui (Vi)) , then we define d′α resp. d′′ α to be given by (Vi, ϕ Ui ) and the superforms d′αi ∈ Ap+1 ,q (trop Ui (Vi)) resp. d′′ αi ∈ Ap,q +1 (trop Ui (Vi)) .86 5.2 The link between the presheaf log |O ×
X
| and smooth functions
Remark 5.1.21. Let f be a differential form of bidegree (0 , 0) on an open subset W
of Xan . Then f : W → R is a well-defined continuous map.
Proof. If f is given by a covering (Vi, ϕ Ui )i∈I of W and fi ∈ A0,0(trop Ui (Vi)) , then
f = fi ◦ trop Ui
on Vi for every i ∈ I. Consider an arbitrary x ∈ W which is contained in charts Vi and
Vj . We have seen in 5.1.19 that
trop Ui = Trop( ψUi,U i∩Uj ) ◦ trop Ui∩Uj
and
trop Uj = Trop( ψUj ,U i∩Uj ) ◦ trop Ui∩Uj .
We have required in the definition of differential forms that fi|Vi∩Vj = fj |Vi∩Vj , i.e.
fi ◦ Trop( ψUi,U i∩Uj ) = fj ◦ Trop( ψUj ,U i∩Uj ).
Hence, fi(trop Ui (x)) = fj (trop Uj (x)) . Thus, f (x) is independent of i ∈ I, and so f is a well-defined function on W . Further, f is continuous in every x ∈ W as a composition of continuous functions.
Definition 5.1.22. Let W be an open subset of Xan . We denote the space of smooth functions on W by C∞(W ) := A0,0(W ).
5.2 The link between the presheaf log|O×X| and smooth functions
Again, we consider an algebraically closed field K endowed with a non-trivial complete non-archimedean absolute value | | . Let X be an algebraic variety over K of dimension
n. The goal of this section is to give a connection between smooth functions defined in Chapter 5.1 and functions of the form log|f| : Xan → R for an f ∈ O×X. We will see that smooth functions in the kernel of d′d′′ can be written locally as a linear combination of functions in log|O×X|. Further, we show that log|f| is smooth and contained in the kernel of d′d′′ for each f ∈ O×Xan.
Lemma 5.2.1. Let W be an open subset of Xan and f ∈ C∞(W ), then f ∈ ker d′d′′ if and only if for every x ∈ W there is a tropical chart (V, ϕ U ) with x ∈ V ⊂ W such that
f = g ◦ trop U
on V for an affine map g : Rr → R where TU = Grm.
Proof. We assume that f belongs to the kernel of d′d′′ and consider an arbitrary x ∈ W .Due to f ∈ C∞(W ), we can find a tropical chart (V, ϕ U ) of Xan such that x ∈ V ⊂ W
and f = g ◦ trop U on V for a superform g ∈ C∞(trop U (V )) . The neighborhood V of
x has the form trop −1
U
(Ω) for an open subset Ω of Trop( U ) ⊂ Rr which is the support of a polyhedral complex of pure dimension n. In particular, we have trop U (V ) = Ω .We can choose the tropical chart in the way that trop U (V ) is polyhedrally star shaped. We may assume (by translation) that the centre is the origin. Since we have required that f belongs to the kernel of d′d′′ , [Gu13, Proposition 5.6] implies that d′d′′ g = 0 in
A1,1(trop U (V )) . Hence, g is affine on each polyhedron in trop U (V ). Let σ be such a polyhedron in trop U (V ) and ν a vector in Lσ. The function g comes from a smooth function on an open set of Rr, so the linear map Dg : Rr → R satisfies Dg (0)( ν) =
∂g/∂ν. We have seen above that g is affine on σ, so g is given by g(ν) = ∂g/∂ν + g(0)
on σ. Thus, g coincides with the affine map Dg + g(0) on trop U (V ).The other direction is a direct consequence of the definitions and Lemma 5.1.3.
Remark 5.2.2. Let U be a very affine open subset of X and f ∈ OX(U)×, then the morphism ϕ : U → G1m obtained by the map K[z±1] → OX(U), z ↦ f, is refined by the canonical moment map ϕU. By Remark 5.1.13, there is an affine map Ψ : Rr → R
such that log |f | = trop ◦ ϕan = −Ψ ◦ trop U on U an . Hence, log |f | : U an → R is smooth and belongs to ker( d′d′′ ).For the analytification Xan of the algebraic variety X one obtains a morphism of locally ringed spaces from Xan to X. If U is an open affine subset of X this morphism leads to an injective map OX (U ) → O Xan (U an ) (a description of the structure sheaf OXan can be found in [Th, Remarque 2.1.11]). In the following we therefore give a generalization of the previous Remark.
Proposition 5.2.3. Let W be an open subset of X^an and f ∈ O_{X^an}(W)^×, then the function log|f| : W → R is smooth and belongs to ker d′d′′.
Proof. Let f ∈ O_{X^an}(W)^× and set T := G_m. Then f defines the analytic morphism ϕ : W → T^an which is locally given by x ↦ (F ↦ |F(f)|_x), and satisfies (trop ◦ ϕ)(x) = −log|f(x)|. [Gu13, Proposition 7.2] states that for every x ∈ W there is a very affine open subset U of X with a moment map ϕ′ : U → T and an open neighborhood V of x in U^an ∩ W such that trop ◦ ϕ = ϕ′_trop on V. The canonical moment map ϕ_U refines ϕ′ : U → T, i.e. ϕ′_trop = Trop(ψ) ◦ trop_U on U^an for an affine map Trop(ψ). Thus,
−log|f| = Trop(ψ) ◦ trop_U
is satisfied on an open neighborhood of x. Therefore, we can find a covering of W by tropical charts (V_i, ϕ_{U_i}) such that log|f| = f_i ◦ trop_{U_i} on V_i for f_i ∈ A^{0,0}(trop_{U_i}(V_i)), and so the function log|f| is smooth on W. Moreover, Lemma 5.2.1 tells us that −log|f| belongs to ker(d′d′′).
Theorem 5.2.4. Let W be an open subset of X^an. A function f : W → R belongs to the kernel of d′d′′ : C^∞(W) → A^{1,1}(W) if and only if for every x ∈ W there is an open neighborhood V of x in W and an open subset U of X with V ⊂ U^an such that
f = ∑_{i=1}^{r} λ_i log|f_i|
on V, where f_1, ..., f_r ∈ O_X(U)^× and λ_1, ..., λ_r ∈ R.
Proof. If f ∈ ker d′d′′ ⊂ C^∞(W), then for every x ∈ W there is a tropical chart (V, ϕ_U) such that
f = g ◦ trop_U
on V for an affine map g : R^r → R by Lemma 5.2.1. Due to the definition of the canonical moment map, there are f_1, ..., f_r ∈ O_X(U)^× such that
f = g ◦ (−log|f_1|, ..., −log|f_r|)
on V. Since g is affine, f is of the form ∑_{i=1}^{r} λ_i log|f_i| + C for λ_i ∈ R and a constant C ∈ R. The absolute value | | is non-trivial, so we can find λ_{r+1} ∈ R and f_{r+1} ∈ K such that C = λ_{r+1} log|f_{r+1}|.
If f has the described form, Remark 5.2.2 implies the other direction.
5.3 The link between the presheaf log|O_X^×| and harmonic functions
In Section 4, we have already defined harmonic functions on P^1_Berk. At the beginning of this section, we verify that a function on an open subset of P^1_Berk is harmonic if and only if it can be written locally as a linear combination of log|f| where f ∈ O_X^× for X = P^1_K. Using Theorem 5.2.4, we can link the terms harmonic and smooth if X = P^1_K. Afterwards, we will define harmonic functions on a smooth strictly analytic curve X in general (cf. [Th, §2.3]) and show that this definition is indeed an extension of the one made in Section 4. By [Th, Théorème 2.3.21], we get two explicit conditions under which the sheaf of harmonic functions coincides with the sheaf associated to the presheaf log|O_X^×|. Thuillier considers in [Th] smooth strictly k-analytic curves over a field k which is complete with respect to a non-trivial non-archimedean absolute value | |. He does not require k to be algebraically closed, in contrast to Baker and Rumely in [BR] or Gubler in [Gu13]. Since we do not want to limit Thuillier's definition in [Th], we use the notation K if we require that the field is algebraically closed and k if not. For the link to smoothness, we consider again a smooth algebraic curve X over K. The analytification X^an is a smooth strictly K-analytic curve, and so we can apply Thuillier's theorem to X^an. The theorem and the characterization of ker d′d′′ (cf. Theorem 5.2.4) give us two explicit conditions under which the harmonic functions coincide with the smooth functions in ker d′d′′. Further, one can construct a smooth algebraic curve X such that there is a harmonic function which is not smooth.
Theorem 5.3.1. Let W be an open subset of P^1_Berk, then f is harmonic on W if and only if for every x ∈ W there is an open neighborhood V of x in W and an open subset U of P^1_K with V ⊂ U^an such that
f = ∑_{i=1}^{r} λ_i log|f_i|
on V, where f_1, ..., f_r ∈ O_{P^1_K}(U)^× and λ_1, ..., λ_r ∈ R.
Proof. If f is harmonic on W, for every x ∈ W there is a strict simple domain V ⊂ W containing x such that f is harmonic on V. By Corollary 4.4.9, there are c_0, ..., c_m ∈ R and a_1, ..., a_m ∈ P^1(K)\V such that
f(x) = c_0 − ∑_{i=1}^{m} c_i log([T − a_i]_x)
on V, where ∂V = {x_1, ..., x_m} ⊂ H_Berk. The tropical charts form a basis of the Berkovich topology on P^1_Berk, so we can find a tropical chart (Ṽ, ϕ_Ũ) with x ∈ Ṽ ⊂ V and a_i ∉ Ũ for i = 1, ..., m. Hence, f_i := T − a_i ∈ O_X(Ũ)^× for every i = 1, ..., m. Furthermore, we can find a λ_{r+1} ∈ R and an element f_{r+1} ∈ P^1(K) such that λ_{r+1} log|f_{r+1}| = c_0. Since [f_i]_x and |f_i(x)| are just different notations, the claim is true.
Assume that for every x ∈ W the function f has the described form on an open neighborhood V of x in W. Then we can find a domain Ṽ contained in V such that x ∈ Ṽ. The functions log|f_i| are strongly harmonic on Ṽ by Example 4.1.5, and so f is strongly harmonic on Ṽ. Thus, f is harmonic on W.
Corollary 5.3.2. A function f is harmonic on an open subset W of P^1_Berk if and only if f is smooth on W and d′d′′f = 0.
Proof. This is a direct consequence of Theorem 5.2.4 and Theorem 5.3.1.
Up to now, K was an algebraically closed field which is complete with respect to a non-trivial non-archimedean absolute value | |. From now on, we work over a field k and we do not require that k is algebraically closed. We set k° := {a ∈ k : |a| ≤ 1} and k°° := {a ∈ k : |a| < 1}. The residue field k°/k°° is denoted by k̃. For a k°-algebra A we set Spf(A) := {p ∈ A | k°° ⊂ p}. Further, we use the notation S := Spf(k°).
Definition 5.3.3.
i) A Berkovich k-analytic space Y is called strictly k-affinoid if every y ∈ Y admits a fundamental system of neighborhoods consisting of compact strictly k-affinoid domains.
ii) A strictly k-analytic domain of Y is a subset V ⊂ Y which has a locally finite covering by strictly k-affinoid domains.
iii) If Y is a strictly k-affinoid space, we define the rigid site of Y as the category whose objects are the strictly k-analytic domains of Y, whose morphisms are the inclusions, and with the induced Grothendieck topology (see [Th, §2.1.1 p.20]). We will write Y_R for the rigid site of Y.
iv) A strictly k-analytic curve X is given by a paracompact topological space |X| and a sheaf of k-algebras O_X on |X| such that the ringed space (|X|, O_X) is locally isomorphic to (Y\∂Y, O_Y), where Y is a strictly k-affinoid space of pure dimension 1.
5.3.4. By [Th, Remarque 2.1.11], the analytification of a 1-dimensional algebraic variety over k is a strictly k-analytic curve, e.g. P^1_Berk. We have seen in Chapter 2 that one can classify the points of P^1_Berk into four different types. This classification can be extended to an arbitrary strictly k-analytic curve X (cf. [Th, §2.1 p.27: Classification des points]).
Definition 5.3.5.
i) An S-curve X is a formal S-scheme which is locally of finite type, flat, separated and of pure dimension 1.
ii) We call an S-curve X strictly semi-stable if X has an open covering (U_i)_{i∈I} such that there are a_i ∈ k°\{0} and étale morphisms ϕ_i : U_i → S(a_i), where
S(a_i) := Spf(k°{T_0, T_1}/(T_0T_1 − a_i)).
5.3.6. If X is a strictly semi-stable S-curve, then the generic fibre X_η is a strictly k-affinoid space which is rig-smooth (cf. [Th, Remarque 2.2.9]; for the definition of rig-smooth we refer to [Te, Definition 4.2.22]). For each strictly semi-stable S-curve X there is a unique pair (S(X), τ_X) of a polyhedral complex S(X) in X_η of dimension 1 and a retraction map τ_X : X_η → S(X) satisfying certain compatibility conditions (cf. [Th, Théorème 2.2.10]). Note that the subset S(X) of X_η just contains points of type II and III (cf. [Th, Définition 2.2.13]).
Definition 5.3.7. If X is a strictly semi-stable S-curve, we call S(X) the skeleton of X.
In the following we will restrict to quasi-compact strictly semi-stable S-curves X, which is equivalent to the fact that the topological space |X_η| is compact.
Definition 5.3.8.
i) For a polyhedral complex S of dimension 1 and a locally finite subset Γ of S, the space H(S, Γ) is defined as the space of piecewise affine functions f on S satisfying
∑_{v⃗ ∈ T_pS} d_{v⃗}f(p) = 0
for each p ∈ S\Γ.
ii) Let X be a strictly semi-stable S-curve and ∂X_η the Berkovich boundary of X_η (see [Th, §2.1.2]). Then we define
H(X) := τ_X^* H(S(X), ∂X_η),
which is a subspace of C^0(|X_η|, R).
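As a toy illustration of condition i) (our own example, not from the text): let S be a star consisting of three unit edges glued at a central point p, and let Γ be the set of the three endpoints. A piecewise affine function f with outgoing slopes 1, 1 and −2 along the three edges at p satisfies
∑_{v⃗ ∈ T_pS} d_{v⃗}f(p) = 1 + 1 − 2 = 0,
so f ∈ H(S, Γ), whereas a function with outgoing slopes 1, 1, 1 at p does not belong to H(S, Γ).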
5.3.9. Let Y be a strictly k-affinoid space of pure dimension 1 which is rig-smooth. By [Th, Théorème 2.3.8] there exists a finite Galois extension k′ of k, a strictly semi-stable curve Y and an isomorphism ϕ : Y ⊗_k k′ → Y_η. Then the real subspace (ϕ^*H(Y))^{Gal(k′/k)} of C^0(|Y|, R) is independent of k′, Y and ϕ by [Th, Proposition 2.3.3] and [Th, Proposition 2.3.7].
Definition 5.3.10. If Y is a strictly k-affinoid space of pure dimension 1 which is rig-smooth, we set H(Y) := (ϕ^*H(Y))^{Gal(k′/k)} for a strictly semi-stable curve Y and an isomorphism ϕ : Y ⊗_k k′ → Y_η.
5.3.11. Let X be a strictly k-analytic smooth curve. Let C be the category whose objects are the strictly k-affinoid spaces of pure dimension 1 which are rig-smooth; the morphisms in C are the affinoid immersions. By [Th, Proposition 2.3.3], we have a functor H : C^op → Vect_R. If V is an object in the category X_R, the strictly k-affinoid domains contained in V form an inductive filtered system I(V) where the morphisms are the inclusions. Hence,
H_X(V) := lim_{←, V′∈I(V)} H(V′)
defines a presheaf H_X on X_R. For each strictly k-affinoid domain Y in X the canonical homomorphism from H_X(Y) to H(Y) is an isomorphism. Note that H_X is not a sheaf ([Th, Remarque 2.3.11]). Every open subset Ω of X has a locally finite cover by strictly k-affinoid domains. Thus, Ω is an object in X_R and every open cover of Ω is a cover in X_R. We denote the site of the topological space |X| by |X|, and we have a canonical morphism of sites ι : X_R → |X|. [Th, Corollaire 2.3.15] says that ι_*H_X is a sheaf on |X|.
Definition 5.3.12. We denote the sheaf ι_*H_X by H_X and call it the sheaf of harmonic functions.
Remark 5.3.13. The presheaf C_X of the germs of real continuous functions on |X| is actually a sheaf on X_R (cf. [Th, Remarque 2.1.7]) and H_X is a subpresheaf of it. The elements of H_X(X) are the real continuous functions on |X| whose restrictions belong to H(Y) ⊂ C^0(|Y|, R) for every strictly k-affinoid domain Y of X.
Before we state and verify the announced link, we check that this definition of harmonic functions on open subsets of P^1_Berk coincides with the old one.
Proposition 5.3.14. Let W be an open subset of P^1_Berk, then the vector space H(W) of harmonic functions on W introduced in [BR] coincides with the vector space H_X(W).
Proof. Consider a function f in H(W), i.e. f is harmonic on W in the sense of Chapter 4. The last Remark tells us that it suffices to consider a strictly k-affinoid domain Y ⊂ W. We may assume that the interior of Y is connected, and so the interior of Y is a strict simple domain by [BR, Lemma 2.28], i.e. Y has a finite boundary and all boundary points are of type II. In particular, the interior of Y coincides with r_Γ^{-1}(Γ^0) for a finite subgraph Γ contained in Y whose endpoints are the boundary points of Y. In the setting of Chapter 4 the field K is algebraically closed, and so there is a strictly semi-stable S-curve Y such that Y is the generic fibre of Y by [Th, Théorème 2.3.8]. The induced skeleton S(Y) is connected (cf. [BPR, Proposition 3.9] and [BPR, Proposition 4.10]), and so a finite subgraph of P^1_Berk. Moreover, we may assume by [Th, Théorème 2.2.22] that the boundary points of Y are vertices of the skeleton S(Y) ⊂ Y and that Γ is contained in S(Y). Our function f is harmonic on r_Γ^{-1}(Γ^0) by Corollary 4.1.8, and we have the description f = f̃ ◦ r_Γ on Y for a function f̃ ∈ CPA(Γ) by Proposition 4.4.3. By Corollary 4.4.4, f is in particular strongly harmonic on r_Γ^{-1}(Γ^0). We know from Lemma 4.1.9 that the sum of outgoing slopes of f̃ at any point in Γ\∂Y is zero. Further, Proposition 4.2.6 states that f is constant on every path leading away from Γ. Hence, we can extend f̃ properly to S(Y) such that f ∈ H(Y).
Now assume that f ∈ H_X(W). It suffices to show that f is harmonic on each strict simple subdomain V of W whose closure is contained in W. By [BR, Lemma 2.27], the closure Y := V̄ of V is a strictly k-affinoid domain contained in W. By assumption f ∈ H(Y), and H(Y) = τ_Y^* H(S(Y), ∂Y) for a strictly semi-stable S-curve Y with Y = Y_η by [Th, Théorème 2.3.8]. Therefore, f = f̃ ◦ τ_Y on Y for a piecewise affine function f̃ on S(Y) with
∑_{v⃗ ∈ T_pS(Y)} d_{v⃗}f̃(p) = 0
for all p ∈ S(Y)\∂Y. Note that Γ := S(Y) is a finite subgraph of P^1_Berk and τ_Y = r_Γ. Therefore, Example 3.2.2 implies that f is harmonic on V.
Let X be a strictly k-analytic smooth curve. In this subsection we will link harmonic functions on X with the presheaf log|O_X^×|.
Proposition 5.3.15. For each section f ∈ Γ(X, O_X^×) the function log|f| is harmonic on X.
Proof. It suffices to show that the restriction of log|f| to every strictly k-affinoid domain Y ⊂ X belongs to the subspace H(Y) ⊂ C^0(|Y|, R). By [Th, Théorème 2.3.8] and [Th, Lemme 2.3.5], we may assume that Y is the generic fibre of a strictly semi-stable S-curve Y. [Th, Proposition 2.2.24] states that log|f| = log|f| ◦ τ_Y on Y and that the restriction of log|f| to S(Y) is harmonic on S(Y)\∂Y, i.e. log|f| ∈ H(Y).
Definition 5.3.16. Let F_X denote the sheaf associated to the presheaf on |X| which maps an open set U to the subspace of C^0(|U|, R) generated by the functions log|f| where f ∈ Γ(U, O_X^×).
Theorem 5.3.17. Let X be a strictly k-analytic smooth curve over k. Then F_X is a subsheaf of H_X and H_X/F_X is supported on a discrete set of points of type II. Moreover, H_X/F_X is zero if one of the following conditions is satisfied:
i) The residue field k̃ is algebraic over a finite field.
ii) The curve X ⊗̂_k k̂^a is locally isomorphic to P^1_Berk over k̂^a, where k̂^a is the completion of the algebraic closure of k.
By Proposition 5.3.15 and [Ha, Exercise 1.4], F_X is a subsheaf of H_X. To prove the rest of the theorem above, we need a statement analogous to a fact we have proved in Chapter 4, together with further lemmata. Theorem 4.4.7 states that the Dirichlet problem is uniquely solvable on finite-dendrite domains. Similarly, we have the following lemma in the case of affinoid domains:
Lemma 5.3.18. Let Y be a k-affinoid domain in X, then
H(Y) → Hom(∂Y, R); h ↦ h|_{∂Y}
is an isomorphism.
Proof. Note that the Shilov boundary of a k-affinoid domain in X coincides with its Berkovich boundary (cf. [Th, Proposition 2.1.12]). With this fact, the lemma is proved in [Th, Corollaire 3.1.21].
Lemma 5.3.19. Let x be a point of type II, k′ a finite Galois extension of k and x′ a point in X′ := X ⊗_k k′ contained in the preimage of x under the canonical morphism p : X ⊗_k k′ → X. Then F_{X′,x′} = H_{X′,x′} implies F_{X,x} = H_{X,x}.
Proof. Assume that F_{X′,x′} = H_{X′,x′}. By Proposition 5.3.15, it remains to verify the inclusion H_{X,x} ⊂ F_{X,x}. Let V be a strictly k-affinoid neighborhood of x in X and V′ the connected component containing x′ in V ⊗_k k′. Consider a function h ∈ H(V), then p^*h is harmonic on V′ by [Th, Lemme 2.3.5]. Hence, (p^*h)|_{V′} is a linear combination of functions log|f′| where f′ ∈ A_{V′}^×. The norm N(f′) is defined as the determinant of the A_V-linear automorphism A_{V′} → A_{V′} given by multiplication with f′, where A_{V′} is a free A_V-algebra of rank [k′ : k]. By [Th, Proposition 2.1.8], we get that f := N(f′) ∈ A_V^× and p^*|f| |_{V′} = |f′|^{[k′:k]}. Using the norm, an easy calculation shows that h can be written as a linear combination of functions log|f| with f ∈ A_V^×. Thus, F_{X′,x′} = H_{X′,x′} implies F_{X,x} = H_{X,x}.
Lemma 5.3.20. If X is isomorphic to the generic fibre of a strictly semi-stable S-curve X and x is a point of type II corresponding to a proper, smooth and geometrically connected irreducible component C_x of the special fibre X_s, then H_{X,x}/F_{X,x} is canonically isomorphic to the vector space Pic^0(C_x) ⊗_Z R.
Proof. Let V be a strictly k-affinoid neighborhood of x. By [BL, Lemma 4.4], there is an admissible blow up q : X′ → X such that X′ is a strictly semi-stable Spf(k°)-curve and there is a formal scheme U′ open in X′ such that V is isomorphic to U′_η. The morphism q induces an isomorphism from the irreducible component of X′ associated to x to C_x.
For every h ∈ H(V) we can define an R-divisor div(h) on the k̃-curve C_x. We can see this k̃-curve as an irreducible component of X′_s. Then the tangent space T_xS(X′) can be canonically identified with the set of singular points of X′_s contained in C_x, and we denote the point corresponding to v⃗ ∈ T_xS(X′) by x̃_{v⃗}. Since h is harmonic on the neighborhood V of x, we can set
div(h) = ∑_{v⃗ ∈ T_xS(X′)} d_{v⃗}h(x) [x̃_{v⃗}],
and
deg(div(h)) = ∑_{v⃗ ∈ T_xS(X′)} d_{v⃗}h(x) = 0.
This leads to a linear map
div : H_{X,x} → Div(C_x) ⊗_Z R,
and the following sequence
0 → R → H_{X,x} --div--> Div(C_x) ⊗_Z R --deg--> R → 0,
which can be verified to be exact. Obviously, the map R → H_{X,x}, which maps a real number to a constant function, is injective. Moreover, div(h) = 0 if and only if the harmonic function h is locally constant in x, and the map deg : Div(C_x) ⊗_Z R → R is surjective. Further, we have seen above that im(div) ⊂ ker(deg). So it remains to consider an R-divisor
D = ∑_{x̃ ∈ |D|} n(x̃) [x̃]
of degree 0, and find a function h ∈ H_{X,x} such that div(h) = D. One can construct a strictly semi-stable S-curve Y such that the generic fibre Y_η is a neighborhood of x contained in X = X_η and
|D| ⊂ T_xS(Y),
where T_xS(Y) can be identified canonically with a finite set of closed points in C_x.
Consider the subscheme Z := |D| ∩ (X_s\Sing(X_s)) in X_s and let q : X′ → X be a blow up of Z. That X′ is strictly semi-stable is equivalent to saying that X′_η is smooth and X′_s is a (locally algebraic) k̃-curve which has smooth irreducible components and ordinary double points as singularities ([Th, Remarque 2.2.9]). Since the points in Z are no singularities, the blow up X′ of Z is a strictly semi-stable S-curve as well. Let E := q^{-1}(Z) be the exceptional divisor on X′. By removing from each irreducible component of E a closed point disjoint from the strict transform of X in X′, i.e. the closure of X\Z in X′, we obtain Y. Then Y, which is open in X′, is a strictly semi-stable S-curve and we have |D| ⊂ T_xS(Y).
Let H be a piecewise affine function on S(Y) which satisfies
d_{v⃗}H(x) = n(x̃_{v⃗}) if x̃_{v⃗} ∈ |D|, and d_{v⃗}H(x) = 0 if x̃_{v⃗} ∉ |D|.
Due to
∑_{v⃗ ∈ T_xS(Y)} d_{v⃗}H(x) = deg(D) = 0,
the function H is harmonic at x. Hence, we can find a neighborhood of x in S(Y) on which the piecewise affine function H is harmonic. Hence, the function h := τ_Y^* H is harmonic on a neighborhood of x in X. By construction, we have div(h) = D. Thus, ker(deg) ⊂ im(div), and so the sequence is exact.
To get the claim, we consider another short sequence
0 → R → F_{X,x} --div--> Pr(C_x) → 0,
where Pr(C_x) denotes the R-vector space generated by the principal divisors on C_x. First, we show that div : F_{X,x} → Pr(C_x) is well-defined. It suffices to consider a function of the form log|f| for f ∈ O_{X,x}^× in the R-vector space F_{X,x}. Since x is of type II, we can find an N ∈ N_{≥1} such that |f(x)|^N ∈ |k^×| (cf. [Th, §2.1 p.27: Classification des points]). Let α ∈ k be an element such that |f(x)|^N = |α|. If x̃_{v⃗} is the singular point in X′_s corresponding to v⃗ ∈ T_xS(X′), we can find a small enough neighborhood of x̃_{v⃗} containing only x̃_{v⃗} as a singularity, i.e. x is the endpoint of the corresponding skeleton, and we may apply [Th, Lemme 2.2.25]. This lemma says that there is a meromorphic function f̃ on C_x induced by f^N/α such that
d_{v⃗}(N log|f|)(x) = −ord_{x̃_{v⃗}}(f̃).
Thus, div(h) belongs to the R-vector space Pr(C_x) for any h ∈ F_{X,x}. As above, the map R → F_{X,x} is injective and its image coincides with the kernel of the map div. Let f be a nonzero meromorphic function on C_x and Div(f) its principal divisor. Again, we can consider an admissible blow up such that every point in |Div(f)| is a singularity, and we may apply [Th, Lemme 2.2.25]. We therefore can find a λ ∈ R such that div(λ · log|f|) = Div(f). Hence, div : F_{X,x} → Pr(C_x) is surjective, and so the second sequence is exact as well.
Let Div^0(C_x) denote the R-vector space generated by the divisors on C_x of degree zero. Then we have the following commutative diagram of exact sequences: the two exact rows
0 → R → F_{X,x} --div--> Pr(C_x) → 0
0 → R → H_{X,x} --div--> Div^0(C_x) → 0
are connected by vertical maps given by the identity on R and the natural inclusions F_{X,x} ⊂ H_{X,x} and Pr(C_x) ⊂ Div^0(C_x); the cokernels of the last two vertical maps are H_{X,x}/F_{X,x} and Pic^0(C_x) ⊗_Z R. The snake lemma implies the isomorphism
H_{X,x}/F_{X,x} --∼--> Pic^0(C_x) ⊗_Z R.
Proof of Theorem 5.3.17. One can show that the germs of H_X and F_X coincide for each point x in X of type I, III or IV. If x is of type I or IV, there is a fundamental system of neighborhoods of x consisting of k-affinoid domains which have a unique boundary point. Lemma 5.3.18 states that the sections of H_X, and so of F_X as well, are constant on these neighborhoods of x. Thus, the germs coincide. If x is of type III, there is a fundamental system of neighborhoods of x consisting of k-affinoid domains having exactly two boundary points. Then the R-vector space H_{X,x} has dimension 2 by Lemma 5.3.18. Hence, H_{X,x} is generated by the germs of the constant function 1 and the function log|f|, where f is a global section of O_X such that |f| is not locally constant on the neighborhoods of x. This means that H_{X,x} coincides with its subspace F_{X,x}.
Now we assume that x is a point of type II. By [Th, Théorème 2.3.8], there is a finite separable field extension k′ of k such that X ⊗_k k′ is isomorphic to the generic fibre of a strictly semi-stable S-curve X and such that x corresponds to a proper and geometrically connected irreducible component C_x of the special fibre X_s. By Lemma 5.3.19, we may assume that this already holds for X itself.
To show that supp(H_X/F_X) is discrete, we verify that supp(H_X/F_X) is discrete at our arbitrary x ∈ X of type II. For every point y ∈ X of type II, Lemma 5.3.20 states H_{X,y}/F_{X,y} ≅ Pic^0(C_y) ⊗_Z R for the corresponding proper and smooth irreducible component C_y of X_s, which is uniquely determined by its function field κ̃(y) (cf. [Ha, Corollary 4.5]). Let T be the set of all points y ∈ X which are of type II and satisfy C_y ≇ P^1_{k̃}. Further, let S_0(X) denote the set consisting of the points in X which correspond to vertices of the skeleton S(X). If y ∈ X\S(X), there is an affinoid neighborhood of y which is isomorphic to a closed ball by [Th, Définition 2.2.13], and if y is contained in S(X) but not in S_0(X), there is an affinoid neighborhood of y which is isomorphic to an annulus. Hence, we can take y as a point of type II in A^1_k, and so κ̃(y) ≅ k̃(T). Since C_y is uniquely determined by κ̃(y), we get C_y = P^1_{k̃} for all y ∈ X\S_0(X), i.e. T ⊂ S_0(X). The set of vertices of a skeleton is locally finite, and so supp(H_X/F_X) is discrete at x.
Now we come to the second part of the theorem. Above we have passed over to a finite field extension of k, so we show that for every x ∈ X ⊗̂_k k̂^a of type II the equality Pic^0(C_x) ⊗_Z R = 0 is satisfied for the irreducible, proper and smooth k̃^a-curve C_x. This implies H_X/F_X = 0.
First, we assume that k̃ is algebraic over a finite field. We know that Pic^0(C_x) is isomorphic to the group of k̃^a-points of the Jacobian variety, i.e. Pic^0(C_x) ≅ J(C_x)(k̃^a). Since k̃^a is the algebraic closure of a finite field, we have
Pic^0(C_x) ≅ ⋃_{k′ ⊂ k̃^a, |k′| < ∞} J(C_x)(k′).
The Jacobian variety is a k̃^a-scheme of finite type, and so J(C_x)(k′) is finite for every finite field k′ contained in k̃^a, i.e. in particular torsion. Hence, Pic^0(C_x) ⊗_Z R = 0 for every x ∈ X ⊗̂_k k̂^a of type II.
Now, we assume that the second condition is satisfied. We consider a point x ∈ X ⊗̂_k k̂^a of type II and assume that X ⊗̂_k k̂^a is locally isomorphic to the analytification of P^1_{k̂^a}. As mentioned above, it suffices to determine the function field κ̃(x) to get C_x. Hence, we may identify X ⊗̂_k k̂^a with P^1_Berk over the field k̂^a, and so κ̃(x) = k̃^a(t) for the type II point x (cf. [BR, Proposition 2.3]). Therefore, the k̃^a-curve C_x has to be isomorphic to P^1_{k̃^a}, and so Pic^0(C_x) = 0.
To link harmonic functions to smooth functions, we consider again an algebraically closed field K endowed with a non-trivial non-archimedean complete absolute value | |. If X is a smooth algebraic curve over K, the analytification X^an is a strictly K-analytic smooth curve. Hence, we may apply Theorem 5.3.17 to X^an. Using Theorem 5.2.4, we get the following corollary.
Corollary 5.3.21. Let X be a smooth algebraic curve over K and assume that one of the following holds:
i) K̃ is algebraic over a finite field.
ii) X^an is locally isomorphic to P^1_Berk.
Then a function f : W → R on an open subset W of X^an is harmonic if and only if it is smooth and d′d′′f = 0.
Proof. If W is an open subset of X^an, the vector space F_{X^an}(W) coincides with ker d′d′′ ⊂ C^∞(W) by Theorem 5.2.4 and Proposition 5.2.3. Hence, Theorem 5.3.17 implies the claim.
In particular, one can see that Thuillier's theorem leads to the same result as Theorem 5.3.1 if X = P^1_K. To finally give an answer to the question whether every harmonic function on an open subset W of X^an is smooth, we state a further theorem:
Theorem 5.3.22. Let X be a smooth algebraic curve over K. If a smooth function f : W → R is harmonic, we have d′d′′f = 0.
Proof. Replacing X by its canonical smooth compactification, we may assume that X is proper. Consider an x ∈ W and let V be a strictly K-affinoid neighborhood of x in W. We have required that K is algebraically closed, so we can find a strictly semi-stable Spf(K°)-curve X and a formal open subscheme U of X such that V is isomorphic to U_η by [Th, Théorème 2.3.8] and [BL, Lemma 4.4]. Since f is harmonic on W, the function f belongs to H(V) and is consequently given by f = ϕ ◦ τ_U on V for a piecewise affine map ϕ : S(U) → R. [Th, Théorème 2.2.10] implies that we can extend ϕ to a piecewise affine function on the skeleton S(X) satisfying f|_V = ϕ ◦ τ_X on V. By [GH, Proposition B.7], there is a unique line bundle L on X with L|_X = O_X such that ϕ = −log‖s‖_L for the canonical invertible global section s = 1 of O_X. Hence, ‖ ‖_L coincides with the metric ‖ ‖_ϕ on O_X which is given by ‖1‖_ϕ := e^{−ϕ}. Note that the metric ‖ ‖_L is called a formal metric; for precise definitions of these metrics we refer to [GH, §1.2]. The metric ‖ ‖_ϕ = ‖ ‖_L leads to the following discrete measure
c_1(O_X, ‖ ‖_ϕ) := ∑_Y deg_L(Y) · δ_{ζ_Y},
where Y runs over all irreducible components of the special fibre X_s and ζ_Y is the unique point in X^an such that red(ζ_Y) is the generic point of Y (cf. [Be, Proposition 2.4.4]). This measure was introduced by Chambert-Loir and Ducros (in higher dimension) in [CD].
If W′ is an open subset and α ∈ A^{p,q}(W′), the support of α is the complement in W′ of the set of points x of W′ which have an open neighborhood V_x such that α|_{V_x} = 0. Let A^{p,q}_c(W′) denote the space of (p, q)-differential forms with compact support in W′. Every (1,1)-differential form ω on X^an induces a unique signed Radon measure μ_ω on X^an such that
∫_{X^an} g dμ_ω = ∫_{X^an} g ∧ ω
for every g ∈ C^∞_c(X^an), and we may identify
ω with μ_ω (cf. [Gu13, Example 6.7] and [Gu13, Example 6.8]). For the definition of the integral of a differential form over W we refer to [Gu13, 5.14]. Next to differential forms, one can also define currents on X^an; a definition of them can be found in [Gu13, 6.2]. Setting f̃ := ϕ ◦ τ_X, the function f̃ on X^an is continuous and coincides with f on V. We can define the following current
[f̃] : A^{1,1}_c(X^an) → R; ω ↦ ∫_{X^an} f̃ dμ_ω
and consider d′d′′[f̃] for the differential operators d′ and d′′. In this setting, [GK, Theorem 10.5] implies that
d′d′′[f̃] = c_1(O_X, ‖ ‖_ϕ),
where we understand d′d′′[f̃] as a measure. On the other hand, Thuillier defined in [Th, §3.2.4] a measure-valued Laplacian operator dd^c on a class of functions which contains f̃ (cf. [Th, Théorème 3.2.10]). In particular, the kernel of dd^c is the sheaf of harmonic functions H_X (cf. [Th, Corollaire 3.2.11]). By [KRZB, Theorem 2.6], we have the identity
dd^c f̃ = c_1(O_X, ‖ ‖_ϕ),
and so the measures d′d′′[f̃] and dd^c f̃ coincide. Since f = f̃|_V is harmonic on V, we have
d′d′′[f]|_{V′} = d′d′′[f̃]|_{V′} = dd^c f̃|_{V′} = dd^c f|_{V′} = 0
on an open neighborhood V′ of x contained in V. Further, we have required that f is smooth on W, and so in particular on V′. [Gu13, Theorem 5.17] implies that d′d′′[f] = [d′d′′f] on V′, where [d′d′′f](g) := ∫_{V′} d′d′′f ∧ g for every g ∈ C^∞_c(V′). Together, we get that ∫_{V′} d′d′′f ∧ g = 0 for every g ∈ C^∞_c(V′), and so d′d′′f has to be zero on the open neighborhood V′ of our arbitrary x in W.
Altogether, we have the following conclusion:
Corollary 5.3.23. Harmonic functions are not smooth in general, i.e. there is a smooth curve X over K and a harmonic function f : W → R on an open subset W of X^an which is not smooth.
Proof. Using Lemma 5.3.20 one can construct a smooth algebraic curve X over an algebraically closed field K such that H_{X^an}/F_{X^an} is nonzero. Consider C(T) with the absolute value corresponding to the vanishing order at zero and let K be the completion of an algebraic closure of C(T). Let E be a curve over K̃ = C such that Pic^0(E) ⊗_Z R is nonzero, e.g. an elliptic curve of positive rank, and let ζ be the generic point of E. Consider the smooth algebraic curve X := E ⊗_C K over K; then we obtain a reduction map red : X^an → E such that there is a unique point x ∈ X^an satisfying K̃(x) = K̃(ζ) (cf. [Be, Proposition 2.4.4]). Since the corresponding irreducible curve C_x over K̃ is uniquely determined by its function field, we have C_x = E. Hence, H_{X^an,x}/F_{X^an,x} = Pic^0(C_x) ⊗_Z R is nonzero by Lemma 5.3.20. We therefore can find an open subset W of X^an and a harmonic function f : W → R which is not contained in F_{X^an}(W). By Theorem 5.2.4 and Proposition 5.2.3, the vector space F_{X^an}(W) coincides with the kernel of the linear operator d′d′′ : C^∞(W) → A^{1,1}(W). Finally, Theorem 5.3.22 implies that f cannot be smooth.
Bibliography
[Be] V. G. Berkovich: Spectral theory and analytic geometry over nonarchimedean fields, Mathematical Surveys and Monographs, 33. Providence, RI: AMS (1990).
[BGR] S. Bosch, U. Güntzer, R. Remmert: Non-Archimedean Analysis: A Systematic Approach to Rigid Analytic Geometry, Springer-Verlag (1984).
[BL] S. Bosch, W. Lütkebohmert: Formal and rigid geometry. I. Rigid spaces, Mathematische Annalen, 295 (1993).
[BPR] M. Baker, S. Payne, J. Rabinoff: On the structure of non-archimedean analytic spaces, arXiv:1404.0279.
[BR] R. Rumely, M. Baker: Potential Theory and Dynamics on the Berkovich Projective Line, Mathematical Surveys and Monographs 159. Providence, RI: American Mathematical Society (2010).
[CD] A. Chambert-Loir, A. Ducros: Formes différentielles réelles et courants sur les espaces Berkovich, arXiv:1204.64591.
[CR] T. Chinburg, R. Rumely: The capacity pairing, J. Reine Angew. Math., 434:1–44 (1993).
[El] J. Elstrodt: Maß- und Integrationstheorie, Springer-Verlag (2007).
[GH] W. Gubler, J. Hertel: Local heights of toric varieties over non-archimedean fields, arXiv:1512.06574.
[GK] W. Gubler, K. Künnemann: A tropical approach to non-archimedean Arakelov theory, arXiv:1406.7637.
[Gu12] W. Gubler: A guide to tropicalizations, in "Algebraic and combinatorial aspects of tropical geometry", 125–189, Contemporary Mathematics, Vol. 589, Amer. Math. Soc., Providence, RI (2013).
[Gu13] W. Gubler: Forms and currents on the analytification of an algebraic variety (after Chambert-Loir and Ducros), (2013), arXiv:1303.7364.
[Ha] R. Hartshorne: Algebraic Geometry, Springer-Verlag (1977).
[Hs] L.-C. Hsia: p-adic equidistribution theorems, manuscript (2003).
[Je] P. Jell: A Poincaré lemma for real-valued differential forms on Berkovich spaces, arXiv:1409.0676.
[KRZB] E. Katz, J. Rabinoff, D. Zureick-Brown: Uniform bounds for the number of rational points on curves of small Mordell-Weil rank, arXiv:1504.00694v2.
[La] A. Lagerberg: Super currents and tropical geometry, Math. Zeitschrift 270, 1011–1050 (2012).
[Ra] T. Ransford: Potential Theory in the Complex Plane, Cambridge University Press (1995).
[Sa] P. Samuel: À propos du théorème des unités, Bull. Sci. Math. (2) 90, 89–96 (1966).
[Te] M. Temkin: Introduction to Berkovich analytic spaces, arXiv:1010.2235.
[Th] A. Thuillier: Théorie du potentiel sur les courbes en géométrie analytique non archimédienne. Applications à la théorie d'Arakelov, Thèse de l'Université de Rennes 1 (2005).
[Zh] S. Zhang: Admissible pairing on a curve, Invent. Math., 112(1): 171–193 (1993).

Selbständigkeitserklärung (Declaration of Authorship)
I, Veronika Wanner, hereby declare that I have written the submitted Master's thesis entitled "Harmonic Functions on the Berkovich Projective Line" independently and have used no sources or aids other than those indicated.
Place, date, signature
Mutation of Putative N-linked Glycosylation Sites on the Human Nucleotide Receptor P2X7 Reveals a Key Residue Important for Receptor Function - PMC
===============
Biochemistry. Author manuscript; available in PMC: 2011 Jun 8.
Published in final edited form as: Biochemistry. 2010 Jun 8;49(22):4611–4619. doi: 10.1021/bi902083n
Mutation of Putative N-linked Glycosylation Sites on the Human Nucleotide Receptor P2X 7 Reveals a Key Residue Important for Receptor Function
Lisa Y. Lenertz,1 Ziyi Wang,1 Arturo Guadarrama,1 Lindsay M. Hill,1 Monica L. Gavala,1 and Paul J. Bertics1
1 Department of Biomolecular Chemistry, The University of Wisconsin-Madison, Madison, Wisconsin 53706
To whom correspondence should be addressed: [email protected] | tel 608-262-8667 | fax 608-262-5253
PMCID: PMC2895974 NIHMSID: NIHMS205418 PMID: 20450227
The publisher's version of this article is available at Biochemistry
Abstract
The nucleotide receptor P2X 7 is an immunomodulatory cation channel and a potential therapeutic target. P2X 7 is expressed in immune cells such as monocytes/macrophages and is activated by extracellular ATP following tissue injury or infection. Ligand binding to P2X 7 can stimulate ERK1/2, the transcription factor CREB, enzymes linked to the production of reactive oxygen species and interleukin-1 isoforms, and the formation of a non-specific pore. However, little is known about the biochemistry of P2X 7, including whether the receptor is N-linked glycosylated and if this modification affects receptor function. Here we provide evidence that P2X 7 is sensitive to the glycosidases EndoH and PNGase F, and that the human receptor appears glycosylated on N187, N202, N213, N241 and N284. Mutation of N187 results in diminished P2X 7 agonist-induced phosphorylation of ERK1/2, CREB, and p90 ribosomal S6 kinase, as well as decreased pore formation. In further support of a role for glycosylation in receptor function, treatment of RAW 264.7 macrophages with the N-linked glycosylation synthesis inhibitor tunicamycin attenuates P2X 7 agonist-induced, but not phorbol ester-induced, ERK1/2 phosphorylation. Interestingly, residue N187 belongs to an N-linked glycosylation consensus sequence found in six of the seven P2X family members, suggesting this site is fundamentally important to P2X receptor function. To address the mechanism whereby N187 mutation attenuates receptor activity, we developed a live cell proteinase K digestion assay that demonstrated altered cell surface expression of P2X 7 N187A. This is the first report to map human P2X 7 glycosylation sites and reveal residue N187 is critical for receptor trafficking and function.
Keywords: P2X 7, N-linked glycosylation, cell surface expression, ERK1/2, macrophages
The nucleotide ATP can act as an important extracellular signaling molecule that regulates multiple processes, including neurotransmission and immune response mediator production (1,2). ATP is released in millimolar concentrations from cells following infection or tissue injury and can stimulate cells in the microenvironment by binding to the P2 nucleotide receptors (3). The P2 nucleotide receptors have been divided into two major subfamilies: the G protein-coupled P2Y receptors and the ionotropic P2X cation channels (4).
The cation channel P2X 7 is considered an important component of the inflammatory response (5). Activation of P2X 7 by extracellular nucleotides leads to the processing of interleukin-1β and the production of reactive oxygen species through the NADPH oxidase complex (6,7). The P2X 7 receptor stimulates a number of downstream targets including mitogen activated protein kinases (MAPK) and several transcription factors, including cyclic-AMP response element-binding protein (CREB) and activating transcription factor 1 (ATF1) (7–11). Upon ligand binding, P2X 7 mediates Ca 2+ and Na+ influx and K+ efflux. Prolonged stimulation of P2X 7 can also promote the formation of a non-specific pore, allowing for molecules up to 900 Da to enter the cell (4).
It has been proposed that P2X 7 is an attractive therapeutic target for inflammatory diseases (5). Animal studies have shown that pharmacological targeting of P2X 7 may be used to treat certain types of arthritis (12,13), and a recent clinical study has correlated P2X 7 activity with virus-induced loss of asthma control (14).
Although numerous P2X 7 studies have focused on ATP-induced cell signaling, relatively few reports have examined the biochemical properties of the receptor. One aspect of P2X 7 biology that is poorly understood involves the glycosylation status of the receptor and how glycosylation contributes to receptor function. Such an analysis is important given that glycosylation of plasma membrane-bound proteins is often critical for numerous processes, including protein folding, cell adhesion, and pathogen recognition of host cells (15,16).
In the present study, we report that P2X 7 is N-linked glycosylated on five residues and this post-translational modification is important for P2X 7 agonist-stimulated signaling and pore formation. We present the first experimental evidence that human P2X 7 is glycosylated on residues 187, 202, 213, 241 and 284, and that mutation at the conserved amino acid 187 results in decreased nucleotide-induced signaling events. In addition, we propose P2X 7 N187A exhibits attenuated activity because its expression on the plasma membrane is altered. A recent report has shown that two P2X 7 naturally-occurring single nucleotide polymorphisms (SNPs) located near residue N187, E186K and L191P, exhibit decreased channel and pore activities (17). Those findings and this current report demonstrate that the extracellular region of P2X 7 surrounding the N187 glycosylation site is critical for receptor function.
MATERIALS AND METHODS
Reagents
The potent P2X 7 agonist, 2′ (3′)-O-(4-benzoylbenzoyl)-ATP (BzATP), and tunicamycin were obtained from Sigma (St. Louis, MO). EndoH and PNGase F were purchased from New England Biolabs (Ipswich, MA), and proteinase K was purchased from Promega (Madison, WI). The anti-c-Myc antibody (cat. no. sc-40), the anti-P2X 7 antibody (cat. no. sc-25698) used for immunofluorescence and the immunoblot in Fig. 7, as well as the anti-EGFR antibody (cat. no. sc-03), were purchased from Santa Cruz Biotechnology (Santa Cruz, CA). The anti-PDI antibody (cat. no. SPA-891) was obtained from Stressgen (Ann Arbor, MI). The anti-ERK1/2 antibody (cat. no. 06-182) was purchased from Millipore (Billerica, MA), the anti-pERK1/2 antibody (cat. no. 44680G) was purchased from Invitrogen (Carlsbad, CA), and the anti-P2X 7 antibody (cat. no. 550694) used for the immunoblots in Figs. 3, 6 and 10 was purchased from BD Biosciences (San Jose, CA). The anti-CREB (cat. no. 9197), anti-pCREB (cat. no. 9191), anti-pp90RSK (cat. no. 9344) and anti-β-tubulin (cat. no. 2146) antibodies were purchased from Cell Signaling Technology (Danvers, MA).
Fig. 7. Mutation of P2X 7 N-linked glycosylation sites results in attenuated BzATP-induced signaling.
Wild-type P2X 7 and the Asn to Ala mutants were stably-expressed in HEK293 cells by selecting G418 sulfate-resistant populations. The cells were treated with 250 μM BzATP for 5 or 10 min, and cell lysates were prepared and immunoblotted with anti-P2X 7, anti-pERK1/2, anti-pCREB, anti-pp90RSK, and anti-β-tubulin antibodies. The β-tubulin immunoblot is the loading control for the anti-pERK1/2 immunoblot. These data are representative of three experiments.
Fig. 3. Endogenous and exogenous P2X 7 is sensitive to glycosidase treatment.
Proteins from murine RAW 264.7 macrophages naturally expressing wild-type P2X 7 as well as human HEK293 and monkey COS7 cells transfected with human wild-type P2X 7 were denatured and treated with either EndoH or PNGase F for 1 h at 37°C as described under Materials and Methods. The proteins were then resolved on polyacrylamide gels containing SDS and immunoblotted for P2X 7. Cleavage of EGFR was used as a positive control to show EndoH and PNGase F were active. These data are representative of three or more experiments.
Fig. 6. Treatment of RAW 264.7 macrophages with tunicamycin results in attenuated BzATP-induced ERK1/2 activation.
A) RAW 264.7 macrophages were treated with the N-linked glycosylation synthesis inhibitor tunicamycin (5 μg/mL) for 42 h, and the cells were stimulated with either 250 μM BzATP or 1 μg/mL PMA for 5 min. Cell lysates were then immunoblotted with anti-P2X 7, anti-pERK1/2 and anti-ERK1/2 antibodies. Methanol is the vehicle control for tunicamycin, HEPES is the buffer control for BzATP, and DMSO is the vehicle control for PMA. These data are representative of four experiments. B) The fold changes between tunicamycin-treated cells stimulated with BzATP or PMA versus tunicamycin-treated cells stimulated with buffer control were quantified and graphed. The band intensities of the phospho-ERK1/2 bands were normalized to total ERK1/2, and the percent activation of ERK1/2 with either BzATP or PMA in comparison to the appropriate buffer control was calculated. Student’s t-tests were performed to determine the p values. N.S. = not significant.
Fig. 10. P2X 7 N187A expression on the cell surface is altered.
HEK293 cells stably expressing pcDNA3 vector control, P2X 7 WT, P2X 7 N187A and P2X 7 N202A were treated with 333 μg/mL proteinase K for 90 min as described under Materials and Methods. The cells were then boiled for 10 min to inactivate the enzyme, diluted in 2X sample buffer, sonicated, and immunoblotted with anti-P2X 7, anti-ERK1/2 and anti-β-tubulin antibodies. These data are representative of three or more experiments.
Cell Culture, Harvesting and Protein Assays
Human HEK293 and monkey COS7 cells (American Type Culture Collection, Manassas, VA) were cultured in Dulbecco’s modified Eagle’s medium supplemented with 10% cosmic calf serum (Mediatech, Herndon, VA), 1% L-glutamine and 100 U/ml penicillin/streptomycin, and the murine macrophage RAW 264.7 and RAW 264.7 SF cells were cultured in RPMI-1640 media supplemented with 5% cosmic calf serum, 2 mM sodium pyruvate, and 1% L-glutamine. The cells were incubated at 37 °C under 5% CO 2. RAW 264.7 SF is a mutant cell line that contains a Ser to Phe mutation at amino acid 342 in the extracellular domain of P2X 7, rendering the receptor non-functional and the cell line as a useful negative control (8,18,19). Cell lysates were generally prepared by harvesting the cells in 2X sample buffer (20 mM Tris, 2 mM EDTA, 2 mM DTT, 1 mM Na 3 VO 4, 2% SDS, 20% glycerol) followed by boiling and sonicating the samples. Total protein concentrations of lysates were quantified using a BCA protein assay kit obtained from Thermo Scientific (Waltham, MA).
P2X 7 Constructs
Full-length human P2X 7 (accession number AAH11913) was subcloned into pcDNA3 and pCMV-Myc. Subcloning P2X 7 into pCMV-Myc yielded an N-terminally tagged receptor. The P2X 7 point mutants used in these studies were generated by site-directed mutagenesis using the following primers and their reverse complements:
P2X7 N187A 5′-CAGTGCCGAAGCCTTCACTGTGCTC-3′,
P2X7 N187H 5′-CAGTGCCGAACACTTCACTGTGCTC-3′,
P2X7 N187Q 5′-CAGTGCCGAACAGTTCACTGTGCTC-3′,
P2X7 N202A 5′-CCGGCCACGCCTACACCACG-3′,
P2X7 N213A 5′-CCAGGTTTAGCCATCACTTGTACC-3′,
P2X7 N241A 5′-GAAACAGGCGATGCTTTTTCAG-3′,
P2X7 N284A 5′-CAAGACCACCGCCGTGTCCTTGTAC-3′.
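The reverse complements referred to above follow mechanically from the forward primers. The short Python sketch below is ours and is provided only for illustration (it is not part of the original protocol); the function and variable names are our own, and the N187A forward primer from the list above is used as the example input.

```python
# Illustrative sketch (not from the original methods): derive the reverse
# complement of a mutagenesis primer, as needed for the reverse primers above.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(primer: str) -> str:
    """Return the reverse complement (5'->3') of a DNA primer."""
    return "".join(COMPLEMENT[base] for base in reversed(primer.upper()))

if __name__ == "__main__":
    n187a_forward = "CAGTGCCGAAGCCTTCACTGTGCTC"  # P2X7 N187A forward primer listed above
    print(reverse_complement(n187a_forward))     # -> GAGCACAGTGAAGGCTTCGGCACTG
```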
Glycosidase Assays
Cell lysates were treated with EndoH or PNGase F according to the manufacturer’s instructions. Briefly, ~30 μg of cell extract protein was denatured and then treated with 250 units of glycosidase at 37°C for 1 h.
Immunofluorescent Staining
RAW 264.7 macrophages and COS7 cells were cultured on glass cover slips, fixed with 4% paraformaldehyde, permeabilized with 1% Triton X-100, and stained with the indicated antibodies. The cells were then stained with the following secondary antibodies from Molecular Probes (Eugene, OR): Alexa Fluor® 488 donkey anti-rabbit (used with anti-P2X 7 in Figs. 4a and 9), Alexa Fluor® 488 donkey anti-mouse (used with anti-PDI), and Alexa Fluor® 594 donkey anti-rabbit (used with anti-P2X 7 in Fig. 4c). The samples were imaged with either a Zeiss Axioplan 2 microscope (Figs. 4a and 9) or a Bio-Rad Radiance 2100 MP Rainbow confocal microscope (Fig. 4c). The ImageJ software co-localization plug-in was used to generate the co-localized image in Fig. 4c.
Fig. 4. P2X 7 is highly localized in intracellular compartments but is activated in response to extracellular BzATP administration.
A) The human nucleotide receptor P2X 7 was expressed in COS7 cells, and the cells were stained with an anti-P2X 7 antibody and visualized with a Zeiss Axioplan 2 microscope. This image is representative of >6 experiments. B) COS7 cells were transfected with either P2X 7/pcDNA3 or pcDNA3 and were stimulated with either 250 μM BzATP or HEPES buffer control in the presence of 1 μM YO-PRO-1. The cells were fixed and relative dye uptake was imaged with a Zeiss Axioplan 2 microscope. These data are representative of >6 experiments. C) RAW 264.7 macrophages were fixed, stained with anti-P2X 7 and anti-PDI antibodies and visualized with a Bio-Rad Radiance 2100 MP Rainbow confocal microscope. As a negative control, the cells were stained with secondary antibody only. The represented images are of one z-stack. The anti-P2X 7 and anti-PDI images were merged using the ImageJ co-localization plug-in. These data are representative of two experiments. D) RAW 264.7 and RAW 264.7 SF macrophages were stimulated with either BzATP or HEPES buffer control in the presence of 1 μM YO-PRO-1 and visualized as live cells using an Olympus IX Fluorescence Inverted Microscope. These YO-PRO-1 dye uptake assays are representative of five experiments. E) HEK293 cells were transfected with either P2X 7/pCMV-Myc or pCMV-Myc and were stimulated with either 250 μM BzATP or HEPES buffer control for 5 min. Cell lysates were prepared and immunoblotted using anti-Myc, anti-pERK1/2 and anti-ERK antibodies. These data are representative of >8 experiments.
Fig. 9. Wild-type and mutant P2X 7 display similar localization patterns.
Wild-type P2X 7, the Asn to Ala mutants, and vector control were expressed in COS7 cells. Approximately 24 h after the cells were transfected, they were fixed with paraformaldehyde, stained with an anti-P2X 7 antibody, and imaged with a Zeiss Axioplan 2 microscope. Similar data were obtained in two separate experiments.
YO-PRO-1 Dye Uptake Assay
COS7 cells were cultured on glass cover slips and were transfected with the indicated constructs using FuGENE™ 6 transfection reagent according to the manufacturer’s instructions (Roche). The COS7 and RAW 264.7 cells were stimulated at room temperature in potassium-glutamate buffer (130 mM K-glutamate, 20 mM HEPES-KOH (pH 7.4), 5 mM KCl, 0.1% BSA, 10 mM glucose) with 250 μM BzATP or HEPES buffer control for 10 or 20 min in the presence of 1 μM YO-PRO-1 (Molecular Probes). Potassium-glutamate buffer is known to facilitate robust YO-PRO-1 uptake in response to BzATP, as previously reported (20,21). The cells were then treated with 10 mM MgCl 2 to close the pore, fixed with 4% paraformaldehyde, and imaged with either a Zeiss Axioplan 2 microscope or an Olympus IX Fluorescence Inverted Microscope (20,21).
ERK1/2, CREB and p90RSK Phosphorylation/Activation Assays
HEK293 cells were transfected with pcDNA3, P2X 7/pcDNA3 or the P2X 7 Asn to Ala mutants using FuGENE™ 6 transfection reagent, and G418 sulfate-resistant populations were selected. For the transient transfection experiments, HEK293 cells were transfected with pCMV-Myc, P2X 7/pCMV-Myc, P2X 7 N187H/pCMV-Myc, P2X 7 N187Q/pCMV-Myc, or the Myc-tagged P2X 7 Asn to Ala mutants. The cells were serum starved for 1–4 h, treated with 250 μM BzATP for 5 or 10 min, harvested, and cell extracts were immunoblotted as detailed below.
Immunoblotting
Proteins from cell lysates (~10–30 μg) were separated by electrophoresis at 15–20 mA on 10% SDS-polyacrylamide gels, transferred to polyvinylidene fluoride (PVDF) membranes, and blocked in 5% nonfat dry milk. The membranes were probed with the indicated primary antibodies, incubated with secondary antibodies conjugated to horseradish peroxidase (Santa Cruz Biotechnology), and visualized by enhanced chemiluminescence.
Proteinase K Digestion Assays
HEK293 cells stably expressing wild-type or mutant P2X 7 were washed off the plate with media and pelleted. The cells were suspended in serum-free DMEM and treated with 333 μg/mL proteinase K or buffer control for 90 min at 37°C. The samples were then boiled for 10 min, diluted in 2X sample buffer, sonicated and immunoblotted.
RESULTS
P2X 7 is sensitive to glycosidase treatment
The NetNGlyc program (Technical University of Denmark) was used to identify putative N-linked glycosylation sites in the seven human P2X family members (Figs. 1 and 2). N-linked glycosylation sites contain the consensus sequence Asn-X-Ser/Thr, where X is any amino acid except proline (22). Cell lysates from RAW 264.7 macrophages, which express endogenous P2X 7 (8), and HEK293 and COS7 cells expressing exogenous human receptor were treated with the glycosidases EndoH or PNGase F and immunoblotted for P2X 7. We chose to include COS7 and HEK293 cells in our studies because they do not express detectable levels of endogenous P2X 7 (23). The enzyme EndoH cleaves high mannose and some hybrid N-linked glycosylation modifications while PNGase F is less specific and can also cleave complex oligosaccharides with di-, tri-, and tetra-antennary arms (24). Both endogenous and exogenous P2X 7 are sensitive to EndoH and PNGase F as indicated by faster electrophoretic migration (Fig. 3). The epidermal growth factor receptor (EGFR) was immunoblotted as a positive control to demonstrate the EndoH and PNGase F enzymes were functional (25).
Fig. 1. Graphical representation of the major P2X 7 domains and its putative N-linked glycosylation sites.
The nucleotide receptor P2X 7 has two predicted transmembrane domains (TM) and an extracellular ATP binding domain. The extracellular portion of P2X 7 contains putative N-linked glycosylation sites at amino acids 187, 202, 213, 241 and 284. The PredictProtein program was used to determine where the putative transmembrane domains and N-linked glycosylation sites are located (31).
Fig. 2. All human P2X family members contain putative N-linked glycosylation sites.
The protein sequences of the seven human P2X family members were aligned using ClustalW (32), and the NetNGlyc 1.0 program was used to identify potential N-linked glycosylation sites (marked in red boxes). Identical residues are marked as black and chemically similar residues are marked as gray. The following protein accession numbers were used in the alignment: P2X1 AAC24494, P2X2 Q9UBL9, P2X3 NP_002550, P2X4 NP_002551, P2X5 AAH39015, P2X6 AAF13303, and P2X7 AAH11913.
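Locating the Asn-X-Ser/Thr consensus described above can be illustrated in a few lines of code. The sketch below is ours and is not the NetNGlyc predictor (which scores candidate sites with trained neural networks); it simply reports sequon positions with X ≠ Pro, and the input fragment is a made-up placeholder rather than the actual P2X 7 sequence.

```python
import re

# Illustrative sketch only: find N-X-S/T sequons (X != proline), the consensus
# motif for N-linked glycosylation described in the text. This is NOT NetNGlyc.
SEQUON = re.compile(r"N[^P][ST]")

def find_sequons(protein: str) -> list[int]:
    """Return 1-based positions of Asn residues starting an N-X-S/T motif (X != P)."""
    return [i + 1 for i in range(len(protein) - 2)
            if SEQUON.fullmatch(protein[i:i + 3])]

if __name__ == "__main__":
    fragment = "AAANGTAAANPSAAANAS"   # placeholder sequence, not the real P2X7 protein
    print(find_sequons(fragment))      # -> [4, 16]; the NPS at position 10 is rejected (X = P)
```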
Endogenous P2X 7 in RAW 264.7 and transfected P2X 7 in HEK293 and COS7 cells are functional
EndoH cleaves N-linked glycosylation modifications that are processed in the Golgi, and proteins that are insensitive to this enzyme are thought to have trafficked out of the Golgi and possibly to the plasma membrane (26). Because we did not observe a large EndoH-resistant population of P2X 7 in the HEK293, COS7 and RAW 264.7 cells, we tested the ideas that P2X 7 is primarily localized at intracellular sites and a small population of the receptor is expressed on the cell surface that is capable of eliciting P2X 7 agonist-induced events. As assessed by immunofluorescence, both exogenous P2X 7 in COS7 cells and endogenous P2X 7 in RAW 264.7 macrophages appear to be highly represented at intracellular sites (Fig. 4a,c). In addition, P2X 7 in RAW 264.7 and COS7 cells co-localizes with the endoplasmic reticulum (ER) marker protein disulfide isomerase (PDI) (Fig. 4c and data not shown). Although P2X 7 is not readily detected on the plasma membrane via immunofluorescence even in naturally P2X 7-expressing RAW 264.7 macrophages, we observed that both the P2X 7-transfected COS7 and RAW 264.7 cells, but not the P2X 7-defective cell line RAW 264.7 SF, are capable of taking up the fluorescent dye YO-PRO-1 in the presence of BzATP (Fig. 4b,d). BzATP is a synthetic ATP derivative that is used to activate P2X 7, and the YO-PRO-1 dye uptake assay is a method of assessing the ability of P2X 7 to stimulate the formation of a pore (20). To demonstrate that exogenously-expressed P2X 7 in HEK293 cells is functional, BzATP-induced ERK1/2 phosphorylation tests were employed (Fig. 4e). The kinases ERK1 and ERK2 are phosphorylated/activated by BzATP when wild-type P2X 7 is expressed but not when the cells are transfected with the empty vector control.
Mutation of human P2X 7 at N187, N202, N213, N241 and N284 results in increased electrophoretic mobility
To identify the residues that likely contain a glycosylation modification, the five predicted N-linked glycosylation sites were mutated to alanine either individually or doubly, and their electrophoretic mobility was examined (Fig. 5). All of the Asn to Ala single and double mutants migrated faster than wild-type P2X 7. To illustrate that these migratory differences were likely due to an altered glycosylation profile and not protein degradation, lysates were treated with EndoH or PNGase F and immunoblotted for P2X 7. After cleavage with either glycosidase, the mutants had comparable apparent molecular weights as enzyme-treated wild-type receptor (Fig. 5b,c). In addition, we observed that P2X 7 N187A appears to have a slightly higher molecular weight than receptors containing the N202A, N213A, N241A or N284A mutations (Fig. 5a,d) as assessed by SDS-polyacrylamide gel electrophoresis. It is possible that this site (N187) is differentially glycosylated compared to the other sites and/or that the protein is altered in its folding and interaction with SDS.
Fig. 5. P2X 7 is glycosylated on N187, N202, N213, N241 and N284.
A) Wild-type P2X 7 (WT) and the Asn to Ala mutants were expressed in HEK293 cells and cell lysates were immunoblotted with anti-Myc antibody. The five single Asn to Ala mutants exhibited faster electrophoretic mobility than WT receptor, and the double mutants migrated further than the single mutants. The proteins were denatured and treated with B) EndoH or C) PNGase F for ~1 h at 37°C and immunoblotted with anti-Myc antibody. The data shown in panels A-C are representative of two or more experiments. D) P2X 7 WT and Asn to Ala mutants were expressed in HEK293 cells and lysates were immunoblotted with anti-Myc. The P2X 7 N187A mutant does not migrate as fast as the other Asn to Ala mutants but does migrate faster than WT. These data are representative of >10 experiments.
Treatment of RAW 264.7 macrophages with tunicamycin results in attenuated BzATP-induced ERK1/2 activation
To determine whether N-linked glycosylation is required for P2X 7 agonist-induced ERK1/2 phosphorylation/activation, RAW 264.7 macrophages were treated with the N-linked glycosylation synthesis inhibitor tunicamycin for 42 h, and the cells were stimulated with either BzATP or phorbol 12-myristate 13-acetate (PMA) for 5 min to activate ERK1/2. As shown in Fig. 6, treatment of cells with 5 μg/mL tunicamycin decreases BzATP but not PMA-stimulated ERK1/2 phosphorylation. Treatment with tunicamycin results in the formation of faster migratory P2X 7 proteins, which are presumably newly synthesized P2X 7 receptors that do not contain glycosylation modifications. In addition, less full-length receptor is observed in the cells treated with the inhibitor, providing further evidence that a smaller pool of P2X 7 is N-linked glycosylated in these samples (Fig. 6a). It should be noted that a cell viability assay was conducted to determine whether treatment with 5 μg/mL tunicamycin for 42 h results in significant cell death, and we found 85% of the tunicamycin-treated cells were metabolically active in comparison to the methanol control-treated cells (data not shown) indicating that decreases observed after tunicamycin treatment were not due to cytotoxicity. The fold change in ERK1/2 phosphorylation between tunicamycin-treated and buffer control-treated cells is depicted in Fig. 6b. Treatment with tunicamycin results in approximately a 30% reduction in BzATP-induced ERK1/2 phosphorylation but no significant change in PMA-stimulated ERK1/2 phosphorylation.
Mutation of P2X 7 N-linked glycosylation sites results in attenuated BzATP-induced signaling and pore formation
To determine which N-linked glycosylation sites of P2X 7 are required for its activity, we examined BzATP-induced ERK1/2, CREB and p90RSK phosphorylation/activation in cells stably-expressing the wild-type receptor or the Asn to Ala mutants. The phosphorylation of CREB was assessed by immunoblotting cell lysates with an antibody that detects phosphorylated/activated CREB and by analyzing mobility shifts on an anti-CREB antibody immunoblot. Cells expressing P2X 7 N187A do not detectably exhibit ERK1/2, CREB or p90RSK phosphorylation in the presence of the P2X 7 agonist BzATP (Fig. 7). Mutation of P2X 7 N213, N241 or N284 also resulted in an apparent reduction in BzATP-induced ERK1/2 phosphorylation, but not to the same extent as the P2X 7 N187A mutation. Conversely, mutation of P2X 7 N202 to Ala did not result in attenuated ligand-stimulated ERK1/2, CREB or p90RSK phosphorylation in either the stably-expressing cells (Fig. 7) or in transient transfection experiments (data not shown). The data from three separate experiments revealed that the fold change in ERK1/2 phosphorylation following 10 minutes of BzATP treatment in wild-type P2X 7-expressing cells ranged from 8.2 to 29 fold. In contrast, the change in BzATP-induced ERK1/2 activation ranged from 1.5 to 4.8 fold for P2X 7 N213A, 3.4 to 11 fold for P2X 7 N241A, and 3.8 to 13 fold for P2X 7 N284A. In addition, attenuated BzATP-induced ERK1/2 activation was also observed in HEK293 cells transiently expressing N213A, N241A and N284A (data not shown). Similar trends in CREB activation were also obtained, but smaller fold changes were achieved in comparison to ERK1/2 (data not shown). For example, in three separate experiments, the average increase in CREB activation in BzATP-treated cells expressing wild-type P2X 7 was 2.0-fold. These decreases in BzATP-stimulated responses observed in cells expressing a mutant P2X 7 wherein the Asn at position 284 has been altered to alanine complements a prior report demonstrating that the introduction of an Asn residue at the equivalent residue in mouse P2X 7 (D284), confers increased sensitivity to BzATP and ATP (27). These data support the idea that P2X 7 N284 may also be important for human P2X 7 function.
Because P2X 7 N187A generally exhibited lower expression levels than other Asn to Ala mutants, we increased the amount of P2X 7 N187A/pCMV-Myc DNA used in transient transfections in an effort to drive its expression to a level that is comparable to wild-type receptor. Even when P2X 7 N187A is expressed at a similar or higher level as wild-type receptor, the mutant receptor does not exhibit BzATP-induced ERK1/2 activation (Fig. 8a). In addition, COS7 cells transiently expressing P2X 7 N187A exhibit reduced BzATP-induced YO-PRO-1 uptake in comparison to cells expressing wild-type receptor (Fig. 8b), providing further evidence that this residue is important for P2X 7 function. To further verify that residue N187 is important for receptor activity, and that our observations with an alanine substitution at this position are not restricted to the specific chemistry (hydrophobicity) of alanine, we also mutated P2X 7 residue N187 to the more polar residue histidine, as well as to a residue structurally related to the asparagine normally located at this site, i.e., glutamine. As shown in Figs. 8c and 8d, mutation of P2X 7 N187 to His or Gln also results in diminished BzATP-induced ERK1/2 phosphorylation, thereby supporting the specific importance of asparagine/glycosylation at this site.
Fig. 8. Mutation of P2X 7 N187 results in decreased BzATP-induced ERK1/2 activation and pore formation.
A) HEK293 cells were transfected with the indicated amounts of pCMV-Myc vector control, P2X 7 WT/pCMV-Myc or P2X 7 N187A/pCMV-Myc. After 24 h, the cells were treated with 250 μM BzATP for 5 min, and cell lysates were prepared and immunoblotted with anti-Myc, anti-pERK1/2 and anti-ERK1/2 antibodies. These data are representative of two independent experiments. B) COS7 were transfected with pCMV-Myc, P2X 7 WT/pCMV-Myc, P2X 7 N187A/pCMV-Myc or P2X 7 N202A/pCMV-Myc. After 24 h, a YO-PRO-1 dye uptake was performed. Similar data were obtained in two separate experiments. C) HEK293 cells were transfected with pCMV-Myc vector control, P2X 7 WT/pCMV-Myc or P2X 7 N187H/pCMV-Myc. After 24 h, the cells were treated with 250 μM BzATP for 5 min, and cell lysates were prepared and immunoblotted with anti-Myc, anti-pERK1/2 and anti-ERK1/2 antibodies. These data are representative of three independent experiments. D) HEK293 cells were transfected with pCMV-Myc vector control, P2X 7 WT/pCMV-Myc or P2X 7 N187Q/pCMV-Myc. After 24 h, the cells were treated with 250 μM BzATP for 5 min, and cell lysates were prepared and immunoblotted with anti-Myc, anti-pERK1/2 and anti-ERK1/2 antibodies. These data are representative of three independent experiments.
Wild-type and mutant P2X 7 display similar localization patterns
To delineate the mechanism by which P2X 7 N187A exhibits attenuated activity, we considered the possibilities that it may be mislocalized or not efficiently trafficked to the plasma membrane. To test these ideas, we expressed wild-type P2X 7 and the Asn to Ala mutants in COS7 cells and examined their overall localization patterns by immunofluorescence. As shown in Fig. 9, wild-type P2X 7 and the glycosylation mutants, including N187A, all display a perinuclear, endoplasmic reticulum-like pattern. Even though P2X 7 N187A has diminished activity, it appears to localize in a pattern comparable to wild-type P2X 7 and to the other Asn to Ala mutants. These data support the notion that the diminished activity of P2X 7 N187A is not likely the result of gross mislocalization of the receptor.
P2X 7 N187A cell surface expression is altered
Because we did not detect any major differences in the localization between P2X 7 wild-type and N187A in fixed cells as assessed by immunostaining, we used a live cell proteinase K digestion assay to analyze the plasma membrane expression of P2X 7 over a given period of time and to test the hypothesis that P2X 7 N187A does not traffic as efficiently to the plasma membrane as wild-type P2X 7. When the receptor is expressed on the surface of live cells, the extracellular domain is digested with proteinase K, liberating the C-terminal tail. In this assay, we treated cells with protease for 90 min, allowing the enzyme to digest proteins that become exposed on the cell surface at any point during the incubation period. A key advantage of this approach is that it represents an accumulation of events that occur over 90 min vs the single “snapshot” in time represented by the immunostaining approach. We predicted that treatment of live cells with proteinase K would result in the cleavage of full-length wild-type P2X 7, but not the N187A mutant, yielding a fragment containing only the intracellular domain. As depicted in Fig. 1, human P2X 7 contains an intracellular C-terminal domain that contains more than 200 amino acids. Digestion of HEK293 cells stably expressing wild-type P2X 7 or N202A but not the N187A receptor resulted in the formation of ~35 kDa proteins that are immunoreactive with an anti-P2X 7 antibody that was raised against the C-terminus of the protein (Fig. 10). These immunoreactive proteins are likely the C-terminal tail of full-length plasma membrane-localized P2X 7. The predicted size of the C-terminal tail, including the second transmembrane domain, is ~30 kDa. The ~35 kDa proteins were also immunoreactive with a different anti-P2X 7 antibody that was also raised against the intracellular domain of the protein (data not shown). The samples were immunoblotted with anti-ERK1/2 and anti-β-tubulin antibodies to demonstrate the protease did not digest intracellular proteins. These data are evidence that P2X 7 N187A does not efficiently traffic to the cell surface, providing a plausible mechanism for why this mutant displays attenuated activity.
DISCUSSION
We have presented the first evidence that the nucleotide receptor P2X 7 appears glycosylated on five asparagine residues and that residue N187, which is an amino acid that is conserved among six of the seven P2X family members, is important for receptor-stimulated function/signaling. Mutation of residues N213, N241 and N284 also results in a modest attenuation of BzATP-induced ERK1/2 phosphorylation, but not to the same extent as mutation of N187. In addition, we have provided evidence that the P2X 7 N187A point mutant displays reduced activity, at least in part, because it does not properly traffic to the plasma membrane when compared to wild-type receptor.
There are several potential explanations as to how mutation of P2X 7 N187 attenuates receptor function. One possibility is that the proper folding and assembly of P2X 7 monomers into oligomers requires glycosylation at this site (4). Cell surface-bound proteins that are not folded properly in the ER are often degraded during the unfolded protein response (28). This process may contribute to the observations that it was often difficult to express P2X 7 N187A at levels that were similar to wild-type and that P2X 7 Asn to Ala double mutants containing N187A were generally expressed at lower levels than the single mutants (Fig. 8 and data not shown). The PredictProtein program was used to determine where predicted secondary structures in human P2X 7 are located. This analysis predicted that residue N187 is located in a short loop between an alpha helix and a beta sheet. Thus, it is possible that mutation at this residue disrupts these secondary structures, altering tertiary protein structure.
Another possible explanation for why the P2X 7 N187 point mutants exhibit decreased activity is that glycosylation at this residue is required for P2X 7 trafficking to the plasma membrane. Assessing P2X 7 cell surface expression at a given time point has been challenging in the cell lines commonly used for P2X 7 analysis. It has been reported by others that P2X 7 is naturally highly represented in intracellular compartments in several cell types (29). Similarly, we have noted that endogenous P2X 7 in RAW 264.7 macrophages and over-expressed P2X 7 in COS7 cells appears to be predominantly localized at intracellular sites, as assessed by immunofluorescent staining (Figs. 4 and 9). It is plausible that the trafficking of P2X 7 to and from the cell surface occurs rapidly in cells naturally expressing the receptor as well as in those transfected with exogenous receptor. Therefore, visualizing P2X 7 localization at the plasma membrane at any single time point is likely to be limited by the possibility that only a small percentage of the receptor is present at the surface at any given time. In agreement with this concept, we have found that P2X 7 levels on the cell surface appear low using other methods, including flow cytometry (data not shown) and EndoH-resistance assays (Figs. 3 and 5). Accordingly, the live cell proteinase K digestion assay described herein appears to be a valuable tool for examining the plasma membrane localization of P2X 7 over one hour or more. This assay has been useful in identifying P2X 7 mutants that exhibit a major defect in trafficking to the cell membrane.
Interestingly, a recent report by Roger et al. has shown that two P2X 7 SNPs located near N187, E186K and L191P, exhibit decreased ion channel and pore activity, providing further evidence this extracellular region of the receptor is critical for function (17). The authors propose the E186K and L191P mutations affect ATP binding because they are in close proximity to the ATP-binding amino acids. As discussed above, our data support two alternative explanations for why P2X 7 exhibits attenuated activity when residues in that region are mutated. We propose that the region of the receptor encompassing residue N187 is important for protein folding and/or trafficking to the cell surface and that any ATP binding issues may be secondary to the trafficking defect. Nonetheless, this report and that of Roger et al. demonstrate that the region surrounding N187 is important for P2X 7 function, and this information should help in advancing our understanding of how naturally-occurring genetic variations in P2X 7 lead to altered physiology.
As discussed previously, we and others have provided evidence that residue N284 may also be an important determinant of P2X 7 function. For example, we have shown herein that mutation of N284 to Ala in the P2X 7 receptor results in a modest decrease in BzATP-induced ERK1/2 activation (Fig. 7). Interestingly, Young et al. have demonstrated that introduction of Asn at the equivalent residue in mouse P2X 7 (D284), which leads to increased receptor glycosylation, results in increased sensitivity to BzATP- and ATP-induced responses, and that mutation of the preceding amino acid in rat P2X 7 (T283) results in attenuated channel and pore activities (27,30). This threonine residue is conserved among the human, mouse and rat sequences (Fig. 2, 27,30). It is possible that we only observed a modest decrease in P2X 7 N284A activity in response to BzATP because this region of the protein may be less critical for BzATP interaction with the receptor when compared to other P2X 7 ligands such as ATP. In support of this idea, introduction of Asn at mouse P2X 7 D284 promoted a larger sensitivity to ATP than BzATP (27).
In summary, this work highlights the importance of N-linked glycosylation in the regulation of the critical immunomodulatory protein P2X 7. We have demonstrated that both mutation of the conserved glycosylation site N187 and treatment of cells with the glycosylation synthesis inhibitor tunicamycin result in attenuated receptor activity, supporting the idea that N-linked glycosylation is essential for P2X 7 function.
Acknowledgments
We thank Dr. Charles Heise (UT Southwestern Medical Center) and Dr. James Keck (University of Wisconsin-Madison) for critical comments about this manuscript, and we thank Dr. Beatriz Quinchia-Rios and Lance Rodenkirch for technical assistance.
This work was supported by National Institutes of Health (NIH) grants 1 U19 AI070503, 2 R01 HL069116, and 1 P01 HL0885940 to PJB, a postdoctoral fellowship to LYL from The Hartwell Foundation, and a Trewartha undergraduate research grant to ZW.
ABBREVIATIONS
ERK1/2
extracellular signal-regulated kinases 1/2
CREB
cyclic-AMP response element-binding protein
p90RSK
p90 ribosomal S6 kinase
MAPK
mitogen activated protein kinase
ATF
activating transcription factor
BzATP
2′(3′)-O-(4-benzoylbenzoyl)-ATP
PVDF
polyvinylidene fluoride
EGFR
epidermal growth factor receptor
ER
endoplasmic reticulum
PDI
protein disulfide isomerase
TM
transmembrane
References
1. Burnstock G. Physiology and pathophysiology of purinergic neurotransmission. Physiol Rev. 2007;87:659–797. doi: 10.1152/physrev.00043.2006.
2. Myrtek D, Idzko M. Chemotactic activity of extracellular nucleotides on human immune cells. Purinergic Signal. 2007;3:5–11. doi: 10.1007/s11302-006-9032-0.
3. Dubyak GR. Signal transduction by P2-purinergic receptors for extracellular ATP. Am J Respir Cell Mol Biol. 1991;4:295–300. doi: 10.1165/ajrcmb/4.4.295.
4. North RA. Molecular physiology of P2X receptors. Physiol Rev. 2002;82:1013–1067. doi: 10.1152/physrev.00015.2002.
5. Romagnoli R, Baraldi PG, Cruz-Lopez O, Lopez-Cara C, Preti D, Borea PA, Gessi S. The P2X7 receptor as a therapeutic target. Expert Opin Ther Targets. 2008;12:647–661. doi: 10.1517/14728222.12.5.647.
6. Ferrari D, Pizzirani C, Adinolfi E, Lemoli RM, Curti A, Idzko M, Panther E, Di Virgilio F. The P2X7 receptor: a key player in IL-1 processing. J Immunol. 2006;176:3877–3883. doi: 10.4049/jimmunol.176.7.3877.
7. Lenertz LY, Gavala ML, Hill LM, Bertics PJ. Cell signaling via the P2X(7) nucleotide receptor: linkage to ROS production, gene transcription, and receptor trafficking. Purinergic Signal. 2009;5:175–87. doi: 10.1007/s11302-009-9133-7.
8. Gavala ML, Pfeiffer ZA, Bertics PJ. The nucleotide receptor P2RX7 mediates ATP-induced CREB activation in human and murine monocytic cells. J Leukoc Biol. 2008;84:1159–1171. doi: 10.1189/jlb.0907612.
9. Bradford MD, Soltoff SP. P2X7 receptors activate protein kinase D and p42/p44 mitogen-activated protein kinase (MAPK) downstream of protein kinase C. Biochem J. 2002;366:745–755. doi: 10.1042/BJ20020358.
10. Noguchi T, Ishii K, Fukutomi H, Naguro I, Matsuzawa A, Takeda K, Ichijo H. Requirement of reactive oxygen species-dependent activation of ASK1-p38 MAPK pathway for extracellular ATP-induced apoptosis in macrophage. J Biol Chem. 2008;283:7657–7665. doi: 10.1074/jbc.M708402200.
11. Potucek YD, Crain JM, Watters JJ. Purinergic receptors modulate MAP kinases and transcription factors that control microglial inflammatory gene expression. Neurochem Int. 2006;49:204–214. doi: 10.1016/j.neuint.2006.04.005.
12. Dell'Antonio G, Quattrini A, Cin ED, Fulgenzi A, Ferrero ME. Relief of inflammatory pain in rats by local use of the selective P2X7 ATP receptor inhibitor, oxidized ATP. Arthritis Rheum. 2002;46:3378–3385. doi: 10.1002/art.10678.
13. Dell'Antonio G, Quattrini A, Dal Cin E, Fulgenzi A, Ferrero ME. Antinociceptive effect of a new P(2Z)/P2X7 antagonist, oxidized ATP, in arthritic rats. Neurosci Lett. 2002;327:87–90. doi: 10.1016/s0304-3940(02)00385-3.
14. Denlinger LC, Shi L, Guadarrama A, Schell K, Green D, Morrin A, Hogan K, Sorkness RL, Busse WW, Gern JE. Attenuated P2X7 pore function as a risk factor for virus-induced loss of asthma control. Am J Respir Crit Care Med. 2009;179:265–270. doi: 10.1164/rccm.200802-293OC.
15. Malhotra JD, Kaufman RJ. Endoplasmic reticulum stress and oxidative stress: a vicious cycle or a double-edged sword? Antioxid Redox Signal. 2007;9:2277–2293. doi: 10.1089/ars.2007.1782.
16. Zhao YY, Takahashi M, Gu JG, Miyoshi E, Matsumoto A, Kitazume S, Taniguchi N. Functional roles of N-glycans in cell signaling and cell adhesion in cancer. Cancer Sci. 2008;99:1304–1310. doi: 10.1111/j.1349-7006.2008.00839.x.
17. Roger S, Mei ZZ, Baldwin JM, Dong L, Bradley H, Baldwin SA, Surprenant A, Jiang LH. Single nucleotide polymorphisms that were identified in affective mood disorders affect ATP-activated P2X(7) receptor functions. J Psychiatr Res. 2009. doi: 10.1016/j.jpsychires.2009.10.005.
18. Pfeiffer ZA, Guerra AN, Hill LM, Gavala ML, Prabhu U, Aga M, Hall DJ, Bertics PJ. Nucleotide receptor signaling in murine macrophages is linked to reactive oxygen species generation. Free Radic Biol Med. 2007;42:1506–1516. doi: 10.1016/j.freeradbiomed.2007.02.010.
19. Guerra AN, Fisette PL, Pfeiffer ZA, Quinchia-Rios BH, Prabhu U, Aga M, Denlinger LC, Guadarrama AG, Abozeid S, Sommer JA, Proctor RA, Bertics PJ. Purinergic receptor regulation of LPS-induced signaling and pathophysiology. J Endotoxin Res. 2003;9:256–263. doi: 10.1179/096805103225001468.
20. Denlinger LC, Coursin DB, Schell K, Angelini G, Green DN, Guadarrama AG, Halsey J, Prabhu U, Hogan KJ, Bertics PJ. Human P2X7 pore function predicts allele linkage disequilibrium. Clin Chem. 2006;52:995–1004. doi: 10.1373/clinchem.2005.065425.
21. Denlinger LC, Angelini G, Schell K, Green DN, Guadarrama AG, Prabhu U, Coursin DB, Bertics PJ, Hogan K. Detection of human P2X7 nucleotide receptor polymorphisms by a novel monocyte pore assay predictive of alterations in lipopolysaccharide-induced cytokine production. J Immunol. 2005;174:4424–4431. doi: 10.4049/jimmunol.174.7.4424.
22. Kornfeld R, Kornfeld S. Assembly of asparagine-linked oligosaccharides. Annu Rev Biochem. 1985;54:631–664. doi: 10.1146/annurev.bi.54.070185.003215.
23. Humphreys BD, Dubyak GR. Modulation of P2X7 nucleotide receptor expression by pro- and anti-inflammatory stimuli in THP-1 monocytes. J Leukoc Biol. 1998;64:265–273. doi: 10.1002/jlb.64.2.265.
24. Maley F, Trimble RB, Tarentino AL, Plummer TH, Jr. Characterization of glycoproteins and their associated oligosaccharides through the use of endoglycosidases. Anal Biochem. 1989;180:195–204. doi: 10.1016/0003-2697(89)90115-2.
25. Gamou S, Shimizu N. Glycosylation of the epidermal growth factor receptor and its relationship to membrane transport and ligand binding. J Biochem. 1988;104:388–396. doi: 10.1093/oxfordjournals.jbchem.a122478.
26. Di Jeso B, Pereira R, Consiglio E, Formisano S, Satrustegui J, Sandoval IV. Demonstration of a Ca2+ requirement for thyroglobulin dimerization and export to the golgi complex. Eur J Biochem. 1998;252:583–590. doi: 10.1046/j.1432-1327.1998.2520583.x.
27. Young MT, Pelegrin P, Surprenant A. Amino acid residues in the P2X7 receptor that mediate differential sensitivity to ATP and BzATP. Mol Pharmacol. 2007;71:92–100. doi: 10.1124/mol.106.030163.
28. Todd DJ, Lee AH, Glimcher LH. The endoplasmic reticulum stress response in immunity and autoimmunity. Nat Rev Immunol. 2008;8:663–674. doi: 10.1038/nri2359.
29. Gu BJ, Zhang WY, Bendall LJ, Chessell IP, Buell GN, Wiley JS. Expression of P2X(7) purinoceptors on human lymphocytes and monocytes: evidence for nonfunctional P2X(7) receptors. Am J Physiol Cell Physiol. 2000;279:C1189–1197. doi: 10.1152/ajpcell.2000.279.4.C1189.
30. Young MT, Pelegrin P, Surprenant A. Identification of Thr283 as a key determinant of P2X7 receptor function. Br J Pharmacol. 2006;149:261–268. doi: 10.1038/sj.bjp.0706880.
31. Rost B, Yachdav G, Liu J. The PredictProtein server. Nucleic Acids Res. 2004;32:W321–326. doi: 10.1093/nar/gkh377.
32. Thompson JD, Higgins DG, Gibson TJ. CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice. Nucleic Acids Res. 1994;22:4673–4680. doi: 10.1093/nar/22.22.4673.
Teen Girl Squad | Tropedia | Fandom
===============
I mean SO GOOD!
Teen girl squaaaaaaaaad! the teenage girls between the ages of 13 and 19!
"Owww! My the fact that I was alive a second ago!"
—The Ugly One, Issue 12
Teen Girl Squad is a spinoff sub-series of Homestar Runner, in which (canonically) Strong Bad draws, in comically bad style, a stick-figure cartoon depicting the misadventures of four "teenage girls between the ages of thirteen and nineteen". The main characters include Cheerleader (the de facto leader of the group, an Alpha Bitch wannabe whose popularity is all in her head), So and So (an overachieving brainiac who lacks common sense), What's Her Face (the cynic of the group, as well as the token poor person, and by far the least popular of the four girls), and The Ugly One (who seems to possess suspect hygiene and a strained relationship with reality among other things, but is more popular than What's Her Face).
There are fifteen toons in the main series, and a number of other appearances elsewhere (most notably, the first three episodes of Strong Bad's Cool Game for Attractive People each feature an exclusive TGS toon, the first two of which are interactive).
Generally speaking, the characters go on various bizarre misadventures, which usually lead to them dying or otherwise being affected in very strange ways (usually announced loudly) in every episode, which include such things as...
being "lathe'd" by a miniature samurai with a naginata;
being run over by a race car that makes the sound "TIIIIIIINES" driven by a fork;
being "arrowed" by a giant man who shoots arrows from his mouth;
dying...somehow.
The series is best known for spawning several minor memes, including "ow! my [something unusual]!", and being pwned by something with the caption "[said something]'D!", "It's over!", and others. It also makes fun of various stereotypical high-school/teen/young-adult activities, such as prom, sweet sixteen birthday parties, and ~~Valentine's~~ vamlumtimes day, with a gradually increasing ratio of directed, ridiculing humor to simple absurd humor.
A (so far oneshot) spinoff of this spinoff was created based on the several characters named "Greg" in the series. 4 Gregs is a misadventure of four such characters who each embody some sort of nerdy/geeky/dorky stereotype.
Teen Girl Squad is the Trope Namer for:
Wave of Babies.
Tropes used in Teen Girl Squad include:
Abhorrent Admirer: All of the girls, but Cheerleader especially.
Adorkable: What's Her Face. "Uh...hi. I like music, and um, cloth."
A Good Name for a Rock Band: Issue 12 gives us the two hipster kids who decide "She Likes Cloth" would make a good band name. Also, when the girls are forming a band, So And So suggests "Smartly Pretty", but is shot down in favour of "Kissyboots".
Against My Religion: "Thank groodness. Sweating is against several of my religions!" -Cheerleader in Issue 14
All Cheering All the Time: Cheerleader Brian
All Girls Like Ponies: In the Decemberween Special, the text on Cheerleader's intro card is "wants a pony".
All Girls Want Bad Boys: So and So has shown an attraction to Tompkins, considering him a "renegade", though he's more or less your typical high school slacker.
All Guys Want Cheerleaders: Subverted. Amongst the group, Cheerleader actually gets the least amount of attention from boys over the course of the series.
Alpha Bitch: Cheerleader.
All Women Are Lustful
"What time is it?"
"It's valentimes!"
"What time is it?"
"It's valentimes!"
"What we gonna get?"
"Several boys!"
"How we gonna get 'em?"
"Erm. Um. Uh..."
Amusing Injuries: Every three seconds.
And There Was Much Rejoicing: In Issue 9, Cheerleader dies, and the rest of them celebrate... shortly before dying.
Animation Bump: Two issues are in color: Issue 10 and Baddest of the Bands' Teen Girl Squad Meets Limozeen!
Arson, Murder, and Jaywalking
Sci-Fi Greg: Tonight, new Earthlings... tonight, we go... TO A VARSITY FOOTBALL GAME!
(everyone else gasps)
D n'D Greg: Into the dragon's lair?
Japanese Culture Greg: Into the robotic dragon's lair?
Open Source Greg: Into the Apple store?
Beautiful All Along: The Ugly One in Issue 10.
Strong Bad: Whoa! Did I draw that new hotness?
Big Game:
Homestar Ruiner's Teen Girl Squad (basketball).
4 Gregs (football).
Blatant Lies: In issue 15, when Manolios Ugly One tells Dn'D Greg to not lay a finger on his daughter (if he does so he'll gut him like a sheep), a random gutted sheep says that it isn't so bad... when it turns out to be bad. Also, in the same issue, Arrow'd Guy claims that What's-Her-Face will come into her own in college... and then immediately afterward shoots her down with a flock of sparrows. "A new twist on an old classic!"
Bottomless Pit: Their school has one (named after its sports team, of course)
Boy of the Week/TemporaryLoveInterest - Most of the Girls' boyfriends, though Sci-Fi Greg and What's Her Face seem to have a more off-and-on relationship.
Camp Straight: Cheerleader Brian
Canon Immigrant: Peacey P managed to enter the normal Homestar Runner universe after debuting in this series.
Catch Phrase - "SO GOOD!"
Celebrity Star: Baddest of the Bands' Teen Girl Squad Meets Limozeen!
The Chew Toy: All of them. It's drawn by Strong Bad, what do you expect?
Chirping Crickets: Used briefly in 4 Gregs after D n'D Greg's explanation of football.
Choose Your Own Adventure:
Homestar Ruiner's Teen Girl Squad.
Strong Badia the Free's Cave Girl Squad.
Cloudcuckoolander: The Ugly One
Continuity Nod: In a rare example, Issue #13 references So And So's incarceration in Issue #11.
Cool Loser: What's Her Face was once the page image for that trope.
Cute Little Fangs: On occasion, one of the girls will be randomly drawn with them, usually So and So.
Dada Comics: This "comic" is Strong Bad's outlet for creativity. Dada humor is to be expected.
Dead Baby Comedy: Obviously the whole series, especially issue 7
Deadpan Snarker: What's Her Face. She was voted Least Likely To Care in high school.
Death by Flashback: The Ugly One dies in her own flashback she's recalling in Issue #12.
Department of Redundancy Department: "Drummers Play A Drum"
Diabolus Ex Machina: Arguably the whole point of the comic. It's drawn by Strong Bad--'nuff said.
Digital Avatar: In issue #15, Open-Source Greg sends his to the prom "in his stead". Don't even try to wrap your head around that.
Ditzy Genius: So and So
Do Wrong Right
Tompkins: "Aw, come on, Prinicpal [sic] Strong Bad! I only stole one Sega tape!"
Prinicpal [sic] Strong Bad: "That's just it, Tompkins. You could have stolen upwards of one Sega tape!"
Everybody Lives: Only issues 2,3, and the Decemberween special.
Everything's Better with Samurai: Issue 10. "Corn is no place for a mighty warrior!"
Everything's Even Worse with Sharks: "My bass feels seaworthy." And then the shark proceeds to eat most of her and take her place in the Kissyboots band.
Extracurricular Enthusiast: So-and-So explains that the "Priggity Prizom" is the name of this year's prom, and adds, with a deranged look, "I was on every committee ever." This fits well with the preppy overachiever stereotype she personifies.
Eyes Always Shut: Japanese Culture Greg. His eyes briefly pop open in issue #15 when he calls his prom date "CHIZUKO!".
Face of the Band: Invoked in Issue 8.
Cheerleader: I'm Kissyboots, and she plays bass.
Family-Unfriendly Violence: Several deaths are shown as graphic as you can get in poorly drawn stick figures.
Like an old guy's remains being exposed as he is eaten by vultures, or the 10thiversary where What's Her Face gets lathed.
Four-Girl Ensemble
The Friend Nobody Likes: What's-Her-Face is explicitly called the "pity friend" and is the one the other three are most likely to ditch or relegate to least-desired roles.
It's implied Cheerleader was this in Issue 9, when the girls celebrate her being run over by Learner's Permit Girl. It doesn't stick, though.
Girlish Pigtails: Cheerleader.
Gretzky Has the Ball: A Scotsman caber-tosses Cheerleader in Issue 10 and is disgusted that his throw only goes 23 meters. Success in the caber toss is measured by straightness, not distance. The creators point out their mistake in the DVD commentary.
Her Codename Was Mary Sue: Creator Strong Bad inserts himself into several of the comics, often as the love interest of one of the girls.
Hollywood Nerd: So and So. The 4 Gregs definitely qualify.
I Fell for Hours: "When you fall in a bottomless pit, you die of starvation."
I Want My Beloved to Be Happy: The Gregs' reaction to seeing What's Her Face date the Wireless Wizard.
Jumping the Shark: Invoked in the "tenthennial extravaganza":
Cheerleader: "It's our tenth issue-versary! Let's do a clip show!"
So And So: "Let's have a wedding!"
The Ugly One: "Let's have a baby!"
What's Her Face: "Let's kill someone off!"
Narrator Strong Bad:(beat) "Okay!" All four die
Kill'Em All: Usually. On rare occasions, one or two will survive. On even rarer occasions (issues 2, 3, and 6), none of them die.
Lighter and Softer: The other three girls attempt to take the group in this direction following Cheerleader's death in Issue 9. It doesn't work.
Makes Just as Much Sense in Context: Pretty much the whole series.
Mind Screw: The ending of "4 Gregs." So basically, Strong Bad was drawing the Teen Girl Squad drawing the four Gregs drawing the girls drawing the Gregs drawing the girls drawing the Gregs drawing the girls drawing the Gregs drawing the girls drawing the Gregs!
And it's a SUPER Mind Screw if you count the Brothers Chapman, so that means the Chaps are drawing Strong Bad drawing the Teen Girl Squad drawing the four Gregs drawing the girls drawing the Gregs drawing the girls drawing the Gregs drawing the girls drawing the Gregs drawing the girls drawing the Gregs!
Negative Continuity
No Celebrities Were Harmed: Issue 13 introduces a rapper named Peacey P, a parody of Snoop Dogg (with visual cues from Ludacris).
No Inner Fourth Wall
"No Respect" Guy: What's Her Face
Occidental Otaku: Japanese Culture Greg
Oh Crap: The ending of episode 8, verbatim. "Everybody died 'cept me!" CHOMP "Aw, crap!"
One Million BC: Strong Badia the Free's Cave Girl Squad.
Only Sane Girl: What's Her Face is probably the only one with any common sense. For example, she is the only one who refuses to jump into a lion's mouth for the secret Santa.
Or So I Heard: In episode 8:
The Ugly One: Worldwide starlets get much boys!
Cheerleader: Or so I have read.
Out-of-Character Moment: While So-and-So and The Ugly One stayed relatively as they always were when Cheerleader died in episode 9, What's Her Face took advantage of not having to be the sensible one anymore and ate a heaping bowl of "Staple Sauce". This, understandably, killed her.
Overprotective Dad: Manolios Ugly One. "You lay one finger on my daughter, I gut you like sheep."
Ow, My Body Part: A frequent running gag. Examples include "Ow! My skin!", "Ow! My hopes of reaching first base!" and "Ow! My entire life!"
Variants are used, too, such as, "My blood hurts."
Perky Goth: So and So adopts this persona when the girls form a band in Issue 8. "I'm going with that gloomy keyboardist look I keep hearing about!"
The Pig Pen: One Spinoff Babies episode of Teen Girl Squad has What's Her Face as a direct parody of the original Pig Pen, as lampshaded in the commentary.
The Ugly One is sometimes suggested to embody this trope in the main series.
Police Are Useless: "Uh, yes, ma'am. We got a report of us wanting to watch the Ola Toya fight. You mind if we investigate?"
He DID bring snack mix. But it killed What's Her Face in Tompkins' Parlor with a pretzel stick!
Rapid-Fire Comedy
Repetitive Name: Brett Bretterson, So-and-So's imaginary boyfriend.
Rescue Romance: Quarterman saves The Ugly One from an upperclassman's prank and then asks her to accompany him to the end credits.
Robe and Wizard Hat: The Wireless Wizard, of course.
Man, sometimes wizards are so awesome, it hurts.
Running Gag
Serious Business: Valentime's Day (issue 12). Apparently, Cheerleader risks losing her "Mindy cred" if she doesn't get enough Valentine's cards from enough boys.
She Cleans Up Nicely: The Ugly One: "Whoa! Did I draw that new hotness?"
And Dn'D Greg looks awful good in a tux.
Shown Their Work: All of Dn'D Greg's references to Dungeons and Dragons are accurate and well-researched.
Averted with the 12-sided die that looks nothing like a dodecahedron. This is likely because of Strong Bad not doing the research.
Show Within a Show
Spear Counterpart: The four Gregs.
Spinoff Babies: Issue 7 has Teen Girl Squad as toddlers--"Teeny Tiny Girl Squad".
Stylistic Suck: The girls are stick figures on a looseleaf backdrop. Strong Bad provides all the voices himself, so they all speak in falsetto.
Summer Campy: Camp Firstbassawassa in Issue 11.
Sure Why Not: What's Her Face suggests killing someone off in the tenth episode. Strong Bad complies (as if he needed What's Her Face's suggestion as an excuse).
Talking to Himself: Matt Chapman aside, Strong Bad voices all of the comics' characters.
They Killed Kenny Again: Some number of the Teen Girl Squad almost always die by the end of almost every episode, the exceptions being episodes 2 and 3 and the Decemberween special.
Title Drop: Done straight by Strong Bad at the start of every toon. Played with in issue 9:
So And So: "You guys, I think this might be the start of a kinder, gentler squad of teen girls."
Uncanny Valley: Invoked In-Universe. One of the Gregs ends up getting killed by his robot girl due to this (don't try to think too hard on how this works).
Unfortunate Names: Since her father's name is Manolios Ugly One, The Ugly One's first name seems to be The. (Granted, this is so common in the Homestar Runner universe that it has its own name amongst the fandom.)
Unpopular Popular Character: Amongst fans and the creators of the show, What's Her Face may be the most popular character. In universe, the opposite is true.
Maybe it's because she wears baggy pants?
Unsound Effect
Vague Age: The Teen Girl Squad are described as "teenage girls between the ages of 13 and 19", with What's Her Face being fifteen at the youngest, since she's had her license for a year (in some states, mostly Midwestern ones, you can get your license at fourteen).
The Ugly One's birthday is her "Sweet Someteenth".
"We're in eighth grade!"
You Killed My Father: An astronaut punches out So-and-So's overbearing manager at Shirt Folding Store because she killed his dog. And then he flies away.
IT'S OVER!
Lecture 5: Monotone Comparative Statics

1 Where we are

• Last time, we saw the setup for the one-dimensional version of Monotone Comparative Statics.
• We start with a parameterized optimization problem over a one-dimensional choice set,
      max_{x∈X} g(x, t),   X ⊆ R,
  and let x∗(t) = arg max_{x∈X} g(x, t) be the set of maximizers.
• We define a partial order on subsets of R, the Strong Set Order: A ≥_SSO B if for any a ∈ A and b ∈ B, max{a, b} ∈ A and min{a, b} ∈ B; this reduces to a ≥ b in the case of singleton sets A = {a}, B = {b}.
• We define the function g as having increasing differences if for any x′ > x, the difference g(x′, t) − g(x, t) is weakly increasing in t, which is equivalent to ∂g/∂x increasing in t, or ∂g/∂t increasing in x, or ∂²g/∂x∂t ≥ 0, whenever these derivatives exist.
• And we gave the "punchline", which I called "Baby Topkis":
  Theorem. Let x∗(t) = arg max_{x∈X} g(x, t), where X ⊆ R. If g has increasing differences, then x∗(t) is increasing in t via the Strong Set Order.
• Today we'll build more intuition with the single-dimensional case, and then move on to the case where X is multi-dimensional.
• But first: any questions?

1.1 How do we use the result?
• Consider a single-output firm with cost function c.
• The firm's problem is max_{q≥0} {pq − c(q)}, so g(q, p) = pq − c(q), where p is the parameter and q the choice variable.
• Note that ∂g/∂p = q is increasing in q, so the objective function has increasing differences.
• So now we're guaranteed that q∗(p) is increasing in p (via the strong set order).
• If the firm has a unique optimal production level q, then q must be weakly increasing in p.
• And we didn't have to make any assumptions about c – it doesn't have to be convex, or differentiable, or anything!
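A quick numerical sanity check may help here (my addition, not part of the original notes). The sketch below grid-searches q∗(p) for a hypothetical, deliberately non-convex cost function and confirms that the selected maximizer never falls as p rises, exactly as the theorem predicts; the functional form, grid, and price range are illustrative assumptions only.

```python
import numpy as np

# Hypothetical cost function that is not convex everywhere; nothing below
# relies on convexity or differentiability of c.
def cost(q):
    return 0.5 * q**2 + 2.0 * np.sin(q)

q_grid = np.linspace(0.0, 10.0, 2001)           # discretized choice set X
prices = np.linspace(0.5, 5.0, 10)              # parameter values t = p

optimal_q = []
for p in prices:
    profit = p * q_grid - cost(q_grid)          # g(q, p) = pq - c(q)
    optimal_q.append(q_grid[np.argmax(profit)]) # one selection from q*(p)

# Increasing differences (here ∂²g/∂q∂p = 1 > 0) predicts a weakly increasing
# selection; verify it on the grid.
assert all(q2 >= q1 for q1, q2 in zip(optimal_q, optimal_q[1:]))
print(list(zip(prices.round(2), np.round(optimal_q, 2))))
```

Because the interaction term is strictly positive here, the monotone selection logic discussed in the next section applies to any selection from q∗(p), not just this particular tie-breaking rule.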
• (To be fair, we already knew this – q∗ increasing in output price is a consequence of the Law of Supply – but it's still nice to see that this works!)
• Another example: a single-input, single-output firm with production function f, solving max_{z≥0} {pf(z) − wz}, where f is increasing.
• Since f is increasing, ∂g/∂p = f(z) is increasing in z, so g has increasing differences in p and z – so when p goes up, input used (and output produced) goes up.
• What about when w changes?
• Well, ∂g/∂w = −z, so ∂²g/∂w∂z = −1, so the problem does not have increasing differences in z and w.
• So what can we do?
• Well, basically, we can flip the sign of w.
• We can introduce a new variable ŵ = −w, and think of the problem in terms of ŵ.
• That is, consider z∗(ŵ) = arg max_{z≥0} {pf(z) + ŵz}.
• This is obviously the same problem; but the new objective function has increasing differences in z and ŵ.
• So z∗ is increasing in ŵ.
• But since ŵ is just −w, we learn z∗ is decreasing in w.
  – In practice, we don't need to formally define a new variable; we can just think of −w instead of w as the parameter of interest, and note that g has increasing differences in z and −w.
• (Note that we can apply this parameter by parameter to any parameter in the problem – we don't need to worry about the relationship between the different parameters; we can just think of holding the others fixed while we change one, so we just worry about the relationship between the choice variable and one parameter at a time.)
• Let's do one more little complication, so at least we're proving something that we didn't already know from the Law of Supply.
• Suppose the firm isn't a price taker in the output market, but faces a downward-sloping demand curve giving inverse demand P(q) at each q.
• The firm is now solving max_z {P(f(z))f(z) − wz}.
• Here's the fun part – the objective function still has increasing differences in z and −w, so without knowing anything about the shape of demand P(·) or the production function f(·), we know that when w goes up, z must go down (at least via the SSO).
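As a small sketch of this sign-flip trick in action (again my addition, with made-up primitives): the firm below is not a price taker, its production function is increasing but not concave, and the inverse demand is an arbitrary downward-sloping curve; a brute-force search still finds z∗ weakly decreasing in w, because the objective has increasing differences in (z, −w).

```python
import numpy as np

# Hypothetical primitives; only monotonicity matters for the MCS argument.
def f(z):
    return np.sqrt(z) + 0.1 * np.sin(z)   # increasing on this grid, not concave

def P(q):
    return 10.0 / (1.0 + q)               # downward-sloping inverse demand

z_grid = np.linspace(0.0, 20.0, 4001)
wages = np.linspace(0.2, 3.0, 15)

z_star = []
for w in wages:
    q = f(z_grid)
    objective = P(q) * q - w * z_grid      # g(z, -w) has increasing differences
    z_star.append(z_grid[np.argmax(objective)])

# Increasing differences in (z, -w) predicts z* weakly decreasing in w.
assert all(z2 <= z1 for z1, z2 in zip(z_star, z_star[1:]))
print(list(zip(wages.round(2), np.round(z_star, 2))))
```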
2 Minor Extensions

• First of all, recall that the Strong Set Order is just regular weak inequality when the sets are singletons, so:
• Corollary. If g has increasing differences and x∗(t) is single-valued, then x∗ is weakly increasing in t in the "usual" sense.
• We can also show a stronger result if we have strictly increasing differences:
• Theorem. Suppose g has strictly increasing differences, that is, g(x′, t) − g(x, t) is strictly increasing in t for any x′ > x. Then for any t′ > t, x ∈ x∗(t), and x′ ∈ x∗(t′), we have x′ ≥ x.
• That is, x∗(t) is increasing in t in the more intuitive sense – every solution at t′ is at least as big as every solution at t < t′.
  – To prove it, suppose t′ > t, x ∈ x∗(t), and x′ ∈ x∗(t′).
  – x ∈ x∗(t) requires g(x, t) − g(x′, t) ≥ 0,
  – and x′ ∈ x∗(t′) requires g(x, t′) − g(x′, t′) ≤ 0.
  – So together, g(x, t′) − g(x′, t′) ≤ g(x, t) − g(x′, t).
  – If g has strictly increasing differences, this is impossible for x > x′, which proves x′ ≥ x.
• This is called the Monotone Selection Theorem: for any selection x and x′ from x∗(t) and x∗(t′), x′ ≥ x.
• Our examples so far have had strictly increasing differences, so we get the stronger result.
• (which we already knew from the Law of Supply)
• Note, though, that even with strictly increasing differences, we're not claiming x∗ is strictly increasing in t – that would require some differentiability assumptions – although with those, we could get a strictly-increasing result.
  – To see why strictly increasing differences does not imply x∗(t) strictly increasing, consider the example
        g(x, t) = tx if x ≤ 0,   g(x, t) = (t − 3)x if x > 0,
    on X = R and T = {1, 2}.
  – g has strictly increasing differences – for any x′ > x, g(x′, t) − g(x, t) is strictly increasing in t –
  – but x = 0 is optimal for both t = 1 and t = 2.
  – What goes wrong here is the "kink" – since the objective function is kinked at the optimum (not differentiable in x), even a strictly positive interaction between x and t does not ensure that x∗(t) moves strictly as t moves.
• One other technical note.
• We made the assumption that g(x′, t) − g(x, t) is increasing in t, but all we actually need is that when this is positive for one value of t, it's also positive for higher values of t.
• That is, we don't really care whether this difference is 5 or 7; we only care about when it's positive and when it's negative, and whether that is increasing in t.
• So we can weaken the "increasing differences" condition to what's called single crossing differences – that
        g(x′, t) − g(x, t) ≥ 0  ⟹  g(x′, t′) − g(x, t′) ≥ 0
  for any x′ > x and t′ > t.
• It's easy to show that the proof we gave of Topkis' Theorem only relies on this, not increasing differences.
• We go with increasing differences because it's typically easier to check. (Whether g has increasing differences depends only on "interaction effects" between x and t; single-crossing differences depends on levels, so adding a function of x that isn't a function of t to g could change whether single crossing differences holds, but doesn't change whether increasing differences holds, so I think it's easier to check increasing differences; but it's good to know the weaker condition still gives the same result.)

3 Motivating the bigger problem

• So we get a nice clean result – when the choice variable is one-dimensional, if the objective function has increasing differences in the choice variable and the parameter, the optimal choice is increasing in the parameter.
• But what if there's more than one choice variable?
• What about a firm with two inputs, capital and labor, say, and a production function f?
• The firm's problem is max_{k,ℓ≥0} {pf(k, ℓ) − wk − rℓ}.
• We already know that if p goes up, output will go up – because we can restate this as a one-dimensional quantity-setting problem with some cost function c(q), or from the Law of Supply.
• But what about inputs?
• If p goes up, will the firm use more capital and more labor?
  Or more capital and less labor? Or more labor and less capital?
• If w goes up, the Law of Supply says k will go down, but what about ℓ, and output?
• What do we need to answer this question?
• To be able to say that x∗is increasing in t, we need to know what it means for x∗(t′) to be greater than x∗(t), when they are both sets of points in Rm • To do that, we introduce a generalization of the Strong Set Order • For two points a, b ∈X ⊂Rm, we’ll define their componentwise maximum a ∨b = “a join b” = (max{a1, b1}, max{a2, b2}, . . . , max{am, bm}) and their componentwise minimum a ∧b = “a meet b” = (min{a1, b1}, min{a2, b2}, . . . , min{am, bm}) • In two dimensions: x y x ∨y x ∧y x = x ∨y y = x ∧y A B A B A B (join) (meet) x y x ∨y x ∧y x = x ∨y y = x ∧y A B A B A B (join) (meet) (Note that if x ≥y, then the join is just x and the meet is just y) • If we consider the partial order on individual points where x ≥y if it’s weakly higher in every dimension, then the join of two points is their least upper bound – the “lowest” point bigger than both; and the meet is the greatest lower bound – the highest point lower than both 85 • With the meet and the join defined, we’ll say that a set A is bigger than a set B, A ≥B, if a ∈A and b ∈B − → a ∨b ∈A and a ∧b ∈B • If X is one-dimensional, this is identical to the Strong Set Order, because if a ∨b = max{a, b} and a ∧b = min{a, b} • If X is multi-dimensional but A and B are singleton sets {a} and {b}, then this requires that a ≥b, that is, the point in A is weakly bigger than the point in B in every dimension • But this also allows a bunch of other configurations: x y x ∨y x ∧y x = x ∨y y = x ∧y A B A B A B (join) (meet) • This is what we’ll mean when we say A ≥B when they’re both subsets of Rm; so our goal will be to show x∗(t′) ≥x∗(t) via this ranking when t′ > t 86 4.3 Second: what conditions do we need for X?
• So, we’re defining a set x∗(t′) to be above another set x∗(t) if for any points in the two sets, x∗(t′) also contains the join, and x∗(t) also contains the meet • Since x∗is a subset of X, this will only make sense if the meet and the join are also in the choice set X • For this reason, we can only apply Monotone Comparative Statics to choice sets X that have a certain shape • Specifically, for any x and y in X, we need X to contain x ∨y and x ∧y as well • This actually rules out a lot of the problems we consider this semester • When we solved the firm’s profit maximization over a production set Y , we were optimizing over some weird shape that would not satisfy this condition • When we thought about cost minimization, we were minimizing over the set of input vectors generating enough output, or the upper contour set of a production function – this would also not satisfy this condition • When we get to consumer theory, we’ll generally be assuming that consumers choose from budget sets, which are triangles, and don’t satisfy this condition x y x ∨y ∉B(p,w) x ∧y x = x ∨y y = x ∧y B(p,w) (join) (meet) Y x y x ∨y ∉Y x y 87 • So, what kind of choice sets do work?
• It suffices for X to be a product set X = X1 × X2 × . . . × Xm where each Xi ⊆R • (X doesn’t need to be a product set – formally, it needs to be a sublattice of Rm, which just means for any two points in X, the meet and join are also in X – but this is a natural assumption, and a sufficient one) • So basically, X is a grid or a rectangle, not a triangle or some other funny-shaped thing • We’re also assume that X is fixed while the parameter changes; this can also be relaxed some, but not completely, and it’s safest to just leave it fixed • (For the firm, we can’t analyze the general maximize-profits-over-Y problem this way, because Y is almost certainly not a product set.
This is another reason the single-output, production-function formulation is useful: if we think about just choosing input combinations, we’re choosing over Rm +, so we can do it this way.) 88 4.4 Third: what do we need for g?
• So now we know when a set A is greater than a set B, so we’ll know how to say that x∗is increasing in t, and we know what type of choice set X we’re able to consider • What we need now is conditions on the objective function g, which will allow us to say that x∗(t) is increasing in t • Basically, we need to extend Increasing Differences to a multi-dimensional environment • For now, fix t, so we can think of g as a function from X to R • Definition. For X a product set in Rm, a function g : X →R is supermodular if g(x ∨y) + g(x ∧y) ≥ g(x) + g(y) for any x, y ∈X.
• This sounds like a tough condition to check, but it turns out to be equivalent to a simpler one: • Equivalent Definition.
A function g : X →R is supermodular if and only if it has increasing differences in xi and xj for every pair (i, j), holding the other variables fixed.
• This is awesome, because we already know that if g is twice differentiable, this just means all its mixed partials ∂2g ∂xi∂xj ≥0, which is easy to check if we know g • (We’ll get intuition for why pairwise increasing differences is good enough, when we talk about the intuition for the upcoming result) • We’ll also say g has increasing differences in (X, T) if it has increasing differences in (xi, tj) for each i and j.
• So basically, the conditions we want come down to pairwise increasing differences – increasing differences between any two of the choice variables, and increasing differences between any choice variable and any parameter we're considering
• This will ensure that all "feedback loops" and indirect effects reinforce the primary effects, which will give us strong results
• So now, we know what it means to say x∗(t′) ≥ x∗(t); we know what type of choice set X we want to allow; and we have a condition on g that we can impose
• And that gives us the result:

Theorem (Topkis). Let X be a product set in R^m, T ⊆ R^n, g : X × T → R, and

  x∗(t) = arg max_{x ∈ X} g(x, t)

If...
1. g is supermodular in X, and
2. g has increasing differences in X and T,
then x∗(t) is increasing in t.
• That is, if x ∈ x∗(t) and x′ ∈ x∗(t′), with t′ > t, then x ∨ x′ ∈ x∗(t′), and x ∧ x′ ∈ x∗(t)
• Corollary. If x∗ is single-valued, this means x∗(t) is weakly increasing in every dimension
  (That is, if t′ > t, then x∗(t′) ≥ x∗(t), meaning x∗_i(t′) ≥ x∗_i(t) for every dimension i)
• Before getting into the proof, an example will help clarify exactly what's going on
• Let's consider the two-input firm I mentioned last time, which uses capital k and labor ℓ as inputs, and solves

  max_{k,ℓ≥0} { pf(k, ℓ) − wℓ − rk }

• For simplicity, let's suppose that f is twice differentiable, and that ∂²f/∂k∂ℓ ≥ 0
• Then g is differentiable, and ∂²g/∂k∂ℓ = p ∂²f/∂k∂ℓ ≥ 0, so g is supermodular in the choice variables X = (k, ℓ)
• What about increasing differences in (X, T)?
• In differentiable cases, I find the easiest way to check is to take first derivatives of g with respect to each choice variable, and check whether they're monotonic in each parameter
• In this case, we're best off thinking of the parameters as T = (p, −w, −r)
• ∂g/∂k = p ∂f/∂k − r is increasing in p and −r, and since it doesn't depend on w, we're free to say it's (weakly) increasing in −w
• And ∂g/∂ℓ = p ∂f/∂ℓ − w is increasing in p and −w, and (weakly) increasing in −r as well
• So g has increasing differences in (X, T), where X = (k, ℓ) and T = (p, −w, −r)
• Since g is supermodular in X and has increasing differences in (X, T), we can apply Topkis' Theorem
• In this case, if we assume that the firm's problem has a unique solution (so we don't have to worry about stating things in terms of sets above other sets), we simply get that (k∗, ℓ∗) is increasing in p and decreasing in w and r
• So if the price of output goes up, the firm demands more labor and more capital, and therefore produces more output (as we already knew);
• and if either w or r goes up, the firm demands less capital and less labor, and therefore produces less output
• Why does this make sense?
• Suppose the price of labor, w, goes up
• The obvious first response is that the firm reduces the labor input ℓ
• But since ∂f/∂k is increasing in ℓ, when ℓ goes down, that reduces the marginal product of capital; so the firm reduces its use of capital k
• But since ∂f/∂ℓ is increasing in k, when the firm reduces its use of capital, that reduces the marginal product of labor, so the firm reduces its labor demand again
• And so on, and so on
• Supermodularity basically ensures that all the feedback loops go in the same direction – every change the firm wants to make reinforces the other changes
• Here, we assumed f was differentiable and the solution was single-valued, but we could easily drop these assumptions; the only really substantive assumption we needed was that f is supermodular, i.e., more capital makes labor more productive and vice versa, i.e., capital and labor are complements in production! (A small numerical illustration follows below.)
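To see Topkis' conclusion numerically, here is an illustrative brute-force sketch (my own; the Cobb-Douglas technology f(k, ℓ) = k^0.3 ℓ^0.5 is an assumed example with a positive cross-partial, and the grid search merely stands in for the firm's true maximization). It should show (k∗, ℓ∗) moving weakly up when p rises and weakly down when w rises.

```
# Illustrative sketch: grid search for the two-input firm, to see the monotone
# comparative statics from Topkis' Theorem in action. The technology
# f(k, l) = k**0.3 * l**0.5 is an assumed example with a positive cross-partial.

def best_inputs(p, w, r, grid_max=5.0, steps=250):
    """Maximize p*f(k,l) - w*l - r*k over a finite grid and return (k*, l*)."""
    best, best_val = None, float("-inf")
    for i in range(steps + 1):
        for j in range(steps + 1):
            k = grid_max * i / steps
            l = grid_max * j / steps
            val = p * (k ** 0.3) * (l ** 0.5) - w * l - r * k
            if val > best_val:
                best_val, best = val, (k, l)
    return best

if __name__ == "__main__":
    base     = best_inputs(p=2.0, w=1.0, r=1.0)
    higher_p = best_inputs(p=3.0, w=1.0, r=1.0)   # expect k*, l* to rise
    higher_w = best_inputs(p=2.0, w=1.5, r=1.0)   # expect k*, l* to fall
    print(base, higher_p, higher_w)
```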
|
85
|
Quotes like 'E venni dal martirio a...
===============
Garfield
'E venni dal martirio a questa pace.'
These words the poet heard in Paradise,
Uttered by one who, bravely dying here,
In the true faith was living in that sphere
Where the celestial cross of sacrifice
Spread its protecting arms athwart the skies;
And set thereon, like jewels crystal clear,
The souls magnanimous, that knew not fear,
Flashed their effulgence on his dazzled eyes.
Ah me! how dark the discipline of pain,
Were not the suffering followed by the sense
Of infinite rest and infinite release!
This is our consolation; and again
A great soul cries to us in our suspense,
'I came from martyrdom unto this peace!'
poem by Henry Wadsworth Longfellow
Added by Poetry Lover
Related quotes
The Believer's Principles : Chap. IV.
Faith and Sense Natural, compared and distinguished.
When Abram's body, Sarah's womb,
Were ripe for nothing but the tomb,
Exceeding old, and wholly dead,
Unlike to bear the promis'd seed:
Faith said, 'I shall an Isaac see;'
'No, no,' said Sense, 'it cannot be;'
Blind Reason, to augment the strife,
Adds, 'How can death engender life?'
My heart is like a rotten tomb,
More dead than ever Sarah's womb;
O! can the promis'd seed of grace
Spring forth from such a barren place?
Sense gazing but on flinty rocks,
My hope and expectation chokes:
But could I, skill'd in Abram's art,
O'erlook my dead and barren heart;
And build my hope on nothing less
That divine pow'r and faithfulness;
Soon would I find him raise up sons
To Abram, out of rocks and stones.
Faith acts as busy boatmen do,
Who backward look and forward row;
It looks intent to things unseen,
Thinks objects visible too mean.
Sense thinks it madness thus to steer,
And only trusts its eye and ear;
Into faith's boat dare thrust its oar,
And put it further from the shore.
Faith does alone the promise eye;
Sense won't believe unless it see;
Nor can it trust the divine guide,
Unless it have both wind and tide.
Faith thinks the promise sure and good;
Sense doth depend on likelihood;
Faith ev'n in storms believes the seers;
Sense calls all men, ev'n prophets, liars.
Faith uses means, but rests on none;
Sense sails when outward means are gone:
[...] Read more
poem by Ralph Erskine
Added by Poetry Lover
The Loves of the Angels
'Twas when the world was in its prime,
When the fresh stars had just begun
Their race of glory and young Time
Told his first birth-days by the sun;
When in the light of Nature's dawn
Rejoicing, men and angels met
On the high hill and sunny lawn,-
Ere sorrow came or Sin had drawn
'Twixt man and heaven her curtain yet!
When earth lay nearer to the skies
Than in these days of crime and woe,
And mortals saw without surprise
In the mid-air angelic eyes
Gazing upon this world below.
Alas! that Passion should profane
Even then the morning of the earth!
That, sadder still, the fatal stain
Should fall on hearts of heavenly birth-
And that from Woman's love should fall
So dark a stain, most sad of all!
One evening, in that primal hour,
On a hill's side where hung the ray
Of sunset brightening rill and bower,
Three noble youths conversing lay;
And, as they lookt from time to time
To the far sky where Daylight furled
His radiant wing, their brows sublime
Bespoke them of that distant world-
Spirits who once in brotherhood
Of faith and bliss near ALLA stood,
And o'er whose cheeks full oft had blown
The wind that breathes from ALLA'S throne,
Creatures of light such as still play,
Like motes in sunshine, round the Lord,
And thro' their infinite array
Transmit each moment, night and day,
The echo of His luminous word!
Of Heaven they spoke and, still more oft,
Of the bright eyes that charmed them thence;
Till yielding gradual to the soft
And balmy evening's influence-
The silent breathing of the flowers-
The melting light that beamed above,
As on their first, fond, erring hours,-
Each told the story of his love,
The history of that hour unblest,
When like a bird from its high nest
[...] Read more
poem by Thomas Moore
Added by Poetry Lover
The Aeneid of Virgil: Book 11
SCARCE had the rosy Morning rais’d her head
Above the waves, and left her wat’ry bed;
The pious chief, whom double cares attend
For his unburied soldiers and his friend,
Yet first to Heav’n perform’d a victor’s vows: 5
He bar’d an ancient oak of all her boughs;
Then on a rising ground the trunk he plac’d,
Which with the spoils of his dead foe he grac’d.
The coat of arms by proud Mezentius worn,
Now on a naked snag in triumph borne, 10
Was hung on high, and glitter’d from afar,
A trophy sacred to the God of War.
Above his arms, fix’d on the leafless wood,
Appear’d his plumy crest, besmear’d with blood:
His brazen buckler on the left was seen; 15
Truncheons of shiver’d lances hung between;
And on the right was placed his corslet, bor’d;
And to the neck was tied his unavailing sword.
A crowd of chiefs inclose the godlike man,
Who thus, conspicuous in the midst, began: 20
“Our toils, my friends, are crown’d with sure success;
The greater part perform’d, achieve the less.
Now follow cheerful to the trembling town;
Press but an entrance, and presume it won.
Fear is no more, for fierce Mezentius lies, 25
As the first fruits of war, a sacrifice.
Turnus shall fall extended on the plain,
And, in this omen, is already slain.
Prepar’d in arms, pursue your happy chance;
That none unwarn’d may plead his ignorance, 30
And I, at Heav’n’s appointed hour, may find
Your warlike ensigns waving in the wind.
Meantime the rites and fun’ral pomps prepare,
Due to your dead companions of the war:
The last respect the living can bestow, 35
To shield their shadows from contempt below.
That conquer’d earth be theirs, for which they fought,
And which for us with their own blood they bought;
But first the corpse of our unhappy friend
To the sad city of Evander send, 40
Who, not inglorious, in his age’s bloom,
Was hurried hence by too severe a doom.”
Thus, weeping while he spoke, he took his way,
Where, new in death, lamented Pallas lay.
Acoetes watch’d the corpse; whose youth deserv’d 45
The father’s trust; and now the son he serv’d
With equal faith, but less auspicious care.
Th’ attendants of the slain his sorrow share.
A troop of Trojans mix’d with these appear,
And mourning matrons with dishevel’d hair. 50
[...] Read more
poem by Publius Vergilius Maro
Added by Poetry Lover
The Sacrifice Of Victor
What is sacrifice?
(we s... we s... we s... we sacrifice)
Npg in mass attack, sonny, please.
(we sacrifice)
Church if u will, please turn 2 the book of victor (we s, we s)
We like 2 start at the top if u dont mind
(we sacrifice)
(dont say it, preacher)
I was born on a blood stained table
Cord wrapped around my neck
Epilectic til the age of 7
I was sure heaven marked the deck
(we sacrifice)
I know joy lives round the corner
{joy for sale down on the corner} (we sacrifice)
One day Ill visit her Im gonna
{out on my block Im just a loner} (we sacrifice)
When she tell me everything {tell me}
Thats when the angels sing {sacrifice}
Thats when the victory is sho nuff {sho nuff down with the sacrifice}
(we sacrifice)
(help me)
(dont say it, preacher)
Mama held up her baby 4 protection
From a man with a strap in his hand
Ask the victor bout pain and rejection
U think he dont when he do understand
(we sacrifice)
I know joy lives round the corner
{joy for sale down on the corner} (we sacrifice)
One day Ill visit her Im gonna
{out on my block Im just a loner} (we sacrifice)
When she tell me everything {tell me}
Thats when the angels sing {sacrifice}
Thats when the victory is sho nuff {sho nuff down with the sacrifice}
(we sacrifice)
(help me)
{s.a.c.r.i.f.i.c.e}
(we-we-we sacrifice)
(dont say it preacher)
(sac-sacrifice)
(we-we-we sacrifice)
(we-we-we sacrifice)
(sacrifice... if u turn the page)
(dont say it, preacher)
1967 in a bus marked public schools
Rode me and a group of unsuspecting political tools
Our parents wondered what it was like 2 have another color near
So they put their babies together 2 eliminate the fear
We sacrifice yes we did
[...] Read more
song performed by Prince
Added by Lucian Velea
The Aeneid of Virgil: Book 12
WHEN Turnus saw the Latins leave the field,
Their armies broken, and their courage quell’d,
Himself become the mark of public spite,
His honor question’d for the promis’d fight;
The more he was with vulgar hate oppress’d, 5
The more his fury boil’d within his breast:
He rous’d his vigor for the last debate,
And rais’d his haughty soul to meet his fate.
As, when the swains the Libyan lion chase,
He makes a sour retreat, nor mends his pace; 10
But, if the pointed jav’lin pierce his side,
The lordly beast returns with double pride:
He wrenches out the steel, he roars for pain;
His sides he lashes, and erects his mane:
So Turnus fares; his eyeballs flash with fire, 15
Thro’ his wide nostrils clouds of smoke expire.
Trembling with rage, around the court he ran,
At length approach’d the king, and thus began:
“No more excuses or delays: I stand
In arms prepar’d to combat, hand to hand, 20
This base deserter of his native land.
The Trojan, by his word, is bound to take
The same conditions which himself did make.
Renew the truce; the solemn rites prepare,
And to my single virtue trust the war. 25
The Latians unconcern’d shall see the fight;
This arm unaided shall assert your right:
Then, if my prostrate body press the plain,
To him the crown and beauteous bride remain.”
To whom the king sedately thus replied: 30
“Brave youth, the more your valor has been tried,
The more becomes it us, with due respect,
To weigh the chance of war, which you neglect.
You want not wealth, or a successive throne,
Or cities which your arms have made your own: 35
My towns and treasures are at your command,
And stor’d with blooming beauties is my land;
Laurentum more than one Lavinia sees,
Unmarried, fair, of noble families.
Now let me speak, and you with patience hear, 40
Things which perhaps may grate a lover’s ear,
But sound advice, proceeding from a heart
Sincerely yours, and free from fraudful art.
The gods, by signs, have manifestly shown,
No prince Italian born should heir my throne: 45
Oft have our augurs, in prediction skill’d,
And oft our priests, a foreign son reveal’d.
Yet, won by worth that cannot be withstood,
Brib’d by my kindness to my kindred blood,
Urg’d by my wife, who would not be denied, 50
[...] Read more
poem by Publius Vergilius Maro
Added by Poetry Lover
Z. Comments
CRYSTAL GLOW
Madhur Veena Comment: Who is she? ? ? ? ? ? ? ? ? ? ? ....You write good!
Margaret Alice Comment: Beautiful, it stikes as heartfelt words and touches the heart, beautiful sentiments, sorry, I repeat myself, but I am delighted. Your poem is like the trinkets I collect to adorn my personal space, pure joy to read, wonderful! Only a beautiful mind can harbour such sentiments, you have a beautiful mind. I am glad you have found someone that inspires you to such heights and that you share it with us, you make the world a mroe wonderful place.
Margaret Alice Comment: Within the context set by the previous poem, “Cosmic Probe”, the description of a lover’s adoration for his beloved becomes a universal ode sung to the abstract values of love, joy and hope personified by light, colours, fragrance and beauty, qualities the poet assigns to his beloved, thus elevating her to the status of an uplifting force because she brings all these qualities to his attention. The poet recognises that these personified values brings him fulfilment and chose the image of a love relationship to illustrate how this comes about; thus a love poem becomes the vehicle to convey spiritual epiphany.
FRAGRANT JASMINE
Margaret Alice Comment: Your words seem to be directed to a divine entity, you seem to be addressing your adoration to a divinity, and it is wonderful to read of such sublime sentiments kindled in a human soul. Mankind is always lifted up by their vision and awareness of divinity, thank you for such pure, clear diction and sharing your awareness of the sublime with us, you have uplifted me so much by this vision you have created!
Margaret Alice Comment: The poet’s words seem to be directed to a divine entity, express adoration to a divinity who is the personification of wonderful qualities which awakens a sense of the sublime in the human soul. An uplifting vision and awareness of uplifting qualities of innocence represented by a beautiful person.
I WENT THERE TO BID HER ADIEU
Kente Lucy Comment: wow great writing, what a way to bid farewell
Margaret Alice Comment: Sensory experience is elevated by its symbolical meaning, your description of the scene shows two souls becoming one and your awareness of the importance of tempory experience as a symbol of the eternal duration of love and companionship - were temporary experience only valid for one moment in time, it would be a sad world, but once it is seen as a symbol of eternal things, it becomes enchanting.
I’M INCOMPLETE WITHOUT YOU
Margaret Alice Comment: You elevate the humnan experience of longing for love to a striving for sublimity in uniting with a beloved person, and this poem is stirring, your style of writing is effective, everything flows together perfectly.
Margaret Alice Comment:
'To a resplendent glow of celestial flow
And two split halves unite never to part.'
Reading your fluent poems is a delight, I have to tear myself away and return to the life of a drudge, but what a treasure trove of jewels you made for the weary soul who needs to contemplate higher ideals from time to time!
IN CELESTIAL WINGS
Margaret Alice Comment: When you describe how you are strengthened by your loved one, it is clear that your inner flame is so strong that you need not fear growing old, your spirit seems to become stronger, you manage to convey this impression by your striking poetry. It is a privilege to read your work.
Obed Dela Cruz Comment: wow.... i remembered will shakespeare.... nice poem!
Margaret Alice Comment: The poet has transcended the barriers of time and space by becoming an image of his beloved and being able to find peace in the joy he confers to his beloved.
'You transcend my limits, transcend my soul, I forget my distress in your thoughts And discover my peace in your joy, For, I’m mere image of you, my beloved.'
Margaret Alice Comment: You are my peace and solace, I know, I am, yours too; A mere flash of your thoughts Enlivens my tired soul And fills me with light, peace and solace, A giant in new world, I become, I rise to divine heights in celestial wings. How I desire to reciprocate To fill you with light and inner strength raise you to divine heights; I must cross over nd hold you in arms, light up your soul, Fill you with strength from my inner core, Wipe away your tears burst out in pure joy How I yearn to instill hope and confidence in you we never part And we shall wait, till time comes right. the flame in my soul always seeks you, you transcend my limits, transcend my soul, I forget my distress in your thoughts And discover my peace in your joy, For, I’m mere image of you, my beloved.
RAGING FIRE
[...] Read more
poem by Praveen Kumar
Added by Poetry Lover
Tannhauser
The Landgrave Hermann held a gathering
Of minstrels, minnesingers, troubadours,
At Wartburg in his palace, and the knight,
Sir Tannhauser of France, the greatest bard,
Inspired with heavenly visions, and endowed
With apprehension and rare utterance
Of noble music, fared in thoughtful wise
Across the Horsel meadows. Full of light,
And large repose, the peaceful valley lay,
In the late splendor of the afternoon,
And level sunbeams lit the serious face
Of the young knight, who journeyed to the west,
Towards the precipitous and rugged cliffs,
Scarred, grim, and torn with savage rifts and chasms,
That in the distance loomed as soft and fair
And purple as their shadows on the grass.
The tinkling chimes ran out athwart the air,
Proclaiming sunset, ushering evening in,
Although the sky yet glowed with yellow light.
The ploughboy, ere he led his cattle home,
In the near meadow, reverently knelt,
And doffed his cap, and duly crossed his breast,
Whispering his 'Ave Mary,' as he heard
The pealing vesper-bell. But still the knight,
Unmindful of the sacred hour announced,
Disdainful or unconscious, held his course.
'Would that I also, like yon stupid wight,
Could kneel and hail the Virgin and believe!'
He murmured bitterly beneath his breath.
'Were I a pagan, riding to contend
For the Olympic wreath, O with what zeal,
What fire of inspiration, would I sing
The praises of the gods! How may my lyre
Glorify these whose very life I doubt?
The world is governed by one cruel God,
Who brings a sword, not peace. A pallid Christ,
Unnatural, perfect, and a virgin cold,
They give us for a heaven of living gods,
Beautiful, loving, whose mere names were song;
A creed of suffering and despair, walled in
On every side by brazen boundaries,
That limit the soul's vision and her hope
To a red hell or and unpeopled heaven.
Yea, I am lost already,-even now
Am doomed to flaming torture for my thoughts.
O gods! O gods! where shall my soul find peace?'
He raised his wan face to the faded skies,
Now shadowing into twilight; no response
Came from their sunless heights; no miracle,
As in the ancient days of answering gods.
[...] Read more
poem by Emma Lazarus
Added by Poetry Lover
XI. Guido
You are the Cardinal Acciaiuoli, and you,
Abate Panciatichi—two good Tuscan names:
Acciaiuoli—ah, your ancestor it was
Built the huge battlemented convent-block
Over the little forky flashing Greve
That takes the quick turn at the foot o' the hill
Just as one first sees Florence: oh those days!
'T is Ema, though, the other rivulet,
The one-arched brown brick bridge yawns over,—yes,
Gallop and go five minutes, and you gain
The Roman Gate from where the Ema's bridged:
Kingfishers fly there: how I see the bend
O'erturreted by Certosa which he built,
That Senescal (we styled him) of your House!
I do adjure you, help me, Sirs! My blood
Comes from as far a source: ought it to end
This way, by leakage through their scaffold-planks
Into Rome's sink where her red refuse runs?
Sirs, I beseech you by blood-sympathy,
If there be any vile experiment
In the air,—if this your visit simply prove,
When all's done, just a well-intentioned trick,
That tries for truth truer than truth itself,
By startling up a man, ere break of day,
To tell him he must die at sunset,—pshaw!
That man's a Franceschini; feel his pulse,
Laugh at your folly, and let's all go sleep!
You have my last word,—innocent am I
As Innocent my Pope and murderer,
Innocent as a babe, as Mary's own,
As Mary's self,—I said, say and repeat,—
And why, then, should I die twelve hours hence? I—
Whom, not twelve hours ago, the gaoler bade
Turn to my straw-truss, settle and sleep sound
That I might wake the sooner, promptlier pay
His due of meat-and-drink-indulgence, cross
His palm with fee of the good-hand, beside,
As gallants use who go at large again!
For why? All honest Rome approved my part;
Whoever owned wife, sister, daughter,—nay,
Mistress,—had any shadow of any right
That looks like right, and, all the more resolved,
Held it with tooth and nail,—these manly men
Approved! I being for Rome, Rome was for me.
Then, there's the point reserved, the subterfuge
My lawyers held by, kept for last resource,
Firm should all else,—the impossible fancy!—fail,
And sneaking burgess-spirit win the day.
The knaves! One plea at least would hold,—they laughed,—
One grappling-iron scratch the bottom-rock
[...] Read more
poem by Robert Browning from The Ring and the Book
Added by Veronica Serbanoiu
Pearl
Pearl of delight that a prince doth please
To grace in gold enclosed so clear,
I vow that from over orient seas
Never proved I any in price her peer.
So round, so radiant ranged by these,
So fine, so smooth did her sides appear
That ever in judging gems that please
Her only alone I deemed as dear.
Alas! I lost her in garden near:
Through grass to the ground from me it shot;
I pine now oppressed by love-wound drear
For that pearl, mine own, without a spot.
2
Since in that spot it sped from me,
I have looked and longed for that precious thing
That me once was wont from woe to free,
To uplift my lot and healing bring,
But my heart doth hurt now cruelly,
My breast with burning torment sting.
Yet in secret hour came soft to me
The sweetest song I e'er heard sing;
Yea, many a thought in mind did spring
To think that her radiance in clay should rot.
O mould! Thou marrest a lovely thing,
My pearl, mine own, without a spot.
3
In that spot must needs be spices spread
Where away such wealth to waste hath run;
Blossoms pale and blue and red
There shimmer shining in the sun;
No flower nor fruit their hue may shed
Where it down into darkling earth was done,
For all grass must grow from grains that are dead,
No wheat would else to barn be won.
From good all good is ever begun,
And fail so fair a seed could not,
So that sprang and sprouted spices none
From that precious pearl without a spot.
4
That spot whereof I speak I found
When I entered in that garden green,
As August's season high came round
When corn is cut with sickles keen.
There, where that pearl rolled down, a mound
With herbs was shadowed fair and sheen,
With gillyflower, ginger, and gromwell crowned,
And peonies powdered all between.
[...] Read more
poem by Anonymous Olde English
Added by Poetry Lover
Paradise Lost: Book X
Thus they in lowliest plight repentant stood
Praying, for from the Mercie-seat above
Prevenient Grace descending had remov'd
The stonie from thir hearts, and made new flesh
Regenerat grow instead, that sighs now breath'd
Unutterable, which the Spirit of prayer
Inspir'd, and wing'd for Heav'n with speedier flight
Then loudest Oratorie: yet thir port
Not of mean suiters, nor important less
Seem'd thir Petition, then when th' ancient Pair
In Fables old, less ancient yet then these,
Deucalion and chaste Pyrrha to restore
The Race of Mankind drownd, before the Shrine
Of Themis stood devout. To Heav'n thir prayers
Flew up, nor missed the way, by envious windes
Blow'n vagabond or frustrate: in they passd
Dimentionless through Heav'nly dores; then clad
With incense, where the Golden Altar fum'd,
By thir great Intercessor, came in sight
Before the Fathers Throne: Them the glad Son
Presenting, thus to intercede began.
See Father, what first fruits on Earth are sprung
From thy implanted Grace in Man, these Sighs
And Prayers, which in this Golden Censer, mixt
With Incense, I thy Priest before thee bring,
Fruits of more pleasing savour from thy seed
Sow'n with contrition in his heart, then those
Which his own hand manuring all the Trees
Of Paradise could have produc't, ere fall'n
From innocence. Now therefore bend thine eare
To supplication, heare his sighs though mute;
Unskilful with what words to pray, let mee
Interpret for him, mee his Advocate
And propitiation, all his works on mee
Good or not good ingraft, my Merit those
Shall perfet, and for these my Death shall pay.
Accept me, and in mee from these receave
The smell of peace toward Mankinde, let him live
Before thee reconcil'd, at least his days
Numberd, though sad, till Death, his doom (which I
To mitigate thus plead, not to reverse)
To better life shall yeeld him, where with mee
All my redeemd may dwell in joy and bliss,
Made one with me as I with thee am one.
To whom the Father, without Cloud, serene.
All thy request for Man, accepted Son,
Obtain, all thy request was my Decree:
But longer in that Paradise to dwell,
The Law I gave to Nature him forbids:
Those pure immortal Elements that know
[...] Read more
poem by John Milton
Added by Poetry Lover
The Aeneid of Virgil: Book 10
THE GATES of heav’n unfold: Jove summons all
The gods to council in the common hall.
Sublimely seated, he surveys from far
The fields, the camp, the fortune of the war,
And all th’ inferior world. From first to last, 5
The sov’reign senate in degrees are plac’d.
Then thus th’ almighty sire began: “Ye gods,
Natives or denizens of blest abodes,
From whence these murmurs, and this change of mind,
This backward fate from what was first design’d? 10
Why this protracted war, when my commands
Pronounc’d a peace, and gave the Latian lands?
What fear or hope on either part divides
Our heav’ns, and arms our powers on diff’rent sides?
A lawful time of war at length will come, 15
(Nor need your haste anticipate the doom),
When Carthage shall contend the world with Rome,
Shall force the rigid rocks and Alpine chains,
And, like a flood, come pouring on the plains.
Then is your time for faction and debate, 20
For partial favor, and permitted hate.
Let now your immature dissension cease;
Sit quiet, and compose your souls to peace.”
Thus Jupiter in few unfolds the charge;
But lovely Venus thus replies at large: 25
“O pow’r immense, eternal energy,
(For to what else protection can we fly?)
Seest thou the proud Rutulians, how they dare
In fields, unpunish’d, and insult my care?
How lofty Turnus vaunts amidst his train, 30
In shining arms, triumphant on the plain?
Ev’n in their lines and trenches they contend,
And scarce their walls the Trojan troops defend:
The town is fill’d with slaughter, and o’erfloats,
With a red deluge, their increasing moats. 35
Æneas, ignorant, and far from thence,
Has left a camp expos’d, without defense.
This endless outrage shall they still sustain?
Shall Troy renew’d be forc’d and fir’d again?
A second siege my banish’d issue fears, 40
And a new Diomede in arms appears.
One more audacious mortal will be found;
And I, thy daughter, wait another wound.
Yet, if with fates averse, without thy leave,
The Latian lands my progeny receive, 45
Bear they the pains of violated law,
And thy protection from their aid withdraw.
But, if the gods their sure success foretell;
If those of heav’n consent with those of hell,
To promise Italy; who dare debate 50
[...] Read more
poem by Publius Vergilius Maro
Added by Poetry Lover
The Aeneid of Virgil: Book 7
AND thou, O matron of immortal fame,
Here dying, to the shore hast left thy name;
Cajeta still the place is call’d from thee,
The nurse of great Æneas’ infancy.
Here rest thy bones in rich Hesperia’s plains; 5
Thy name (’t is all a ghost can have) remains.
Now, when the prince her fun’ral rites had paid,
He plow’d the Tyrrhene seas with sails display’d.
From land a gentle breeze arose by night,
Serenely shone the stars, the moon was bright, 10
And the sea trembled with her silver light.
Now near the shelves of Circe’s shores they run,
(Circe the rich, the daughter of the Sun,)
A dang’rous coast: the goddess wastes her days
In joyous songs; the rocks resound her lays: 15
In spinning, or the loom, she spends the night,
And cedar brands supply her father’s light.
From hence were heard, rebellowing to the main,
The roars of lions that refuse the chain,
The grunts of bristled boars, and groans of bears, 20
And herds of howling wolves that stun the sailors’ ears.
These from their caverns, at the close of night,
Fill the sad isle with horror and affright.
Darkling they mourn their fate, whom Circe’s pow’r,
(That watch’d the moon and planetary hour,) 25
With words and wicked herbs from humankind
Had alter’d, and in brutal shapes confin’d.
Which monsters lest the Trojans’ pious host
Should bear, or touch upon th’ inchanted coast,
Propitious Neptune steer’d their course by night 30
With rising gales that sped their happy flight.
Supplied with these, they skim the sounding shore,
And hear the swelling surges vainly roar.
Now, when the rosy morn began to rise,
And wav’d her saffron streamer thro’ the skies; 35
When Thetis blush’d in purple not her own,
And from her face the breathing winds were blown,
A sudden silence sate upon the sea,
And sweeping oars, with struggling, urge their way.
The Trojan, from the main, beheld a wood, 40
Which thick with shades and a brown horror stood:
Betwixt the trees the Tiber took his course,
With whirlpools dimpled; and with downward force,
That drove the sand along, he took his way,
And roll’d his yellow billows to the sea. 45
About him, and above, and round the wood,
The birds that haunt the borders of his flood,
That bath’d within, or basked upon his side,
To tuneful songs their narrow throats applied.
The captain gives command; the joyful train 50
[...] Read more
poem by Publius Vergilius Maro
Added by Poetry Lover
Alleluja
Sembra che
la terra trema sotto i piedi
non mi credi
guarda i miei piedi
Io sento che
la musica profuma di te
you see, my baby, you see, my baby
Dimmi se
lo senti dimmi che senti che,
ti sembra vera amica nera
Mi sembra che
la terra viva sotto i piedi
Alleluja
in questa notte buia
sono un uomo
che lotta coi suoi guai
in questa notte buia
Io sento che
fai crescere quest'atmosfera
amica nera...amica nera
nuda ormai
la pelle tua un tamburo che fa
un suono pieno, d'arcobaleno
rotoler,
il suono di una ritmica che
come un respiro...come un respiro...
mi sembra che
la terra e' viva sotto i piedi
Alleluja
in questa notte buia
sono un uomo
che lotta coi suoi guai
Alleluja
dolcissima tortura
voglio morire su te
per morire di gioia
in questa notte buia
in questa notte buia
in questa notte buia
in questa notte buia
Alleluja
dolcissima tortura
voglio morire su te
per morir di gioia
in questa notte buia
in questa notte buia
in questa notte buia
in questa notte buia
song performed by Zucchero
Added by Lucian Velea
OBIIT MDCCCXXXIII (Entire)
Strong Son of God, immortal Love,
Whom we, that have not seen thy face,
By faith, and faith alone, embrace,
Believing where we cannot prove;
Thine are these orbs of light and shade;
Thou madest Life in man and brute;
Thou madest Death; and lo, thy foot
Is on the skull which thou hast made.
Thou wilt not leave us in the dust:
Thou madest man, he knows not why,
He thinks he was not made to die;
And thou hast made him: thou art just.
Thou seemest human and divine,
The highest, holiest manhood, thou:
Our wills are ours, we know not how;
Our wills are ours, to make them thine.
Our little systems have their day;
They have their day and cease to be:
They are but broken lights of thee,
And thou, O Lord, art more than they.
We have but faith: we cannot know;
For knowledge is of things we see;
And yet we trust it comes from thee,
A beam in darkness: let it grow.
Let knowledge grow from more to more,
But more of reverence in us dwell;
That mind and soul, according well,
May make one music as before,
But vaster. We are fools and slight;
We mock thee when we do not fear:
But help thy foolish ones to bear;
Help thy vain worlds to bear thy light.
Forgive what seem’d my sin in me;
What seem’d my worth since I began;
For merit lives from man to man,
And not from man, O Lord, to thee.
Forgive my grief for one removed,
Thy creature, whom I found so fair.
I trust he lives in thee, and there
I find him worthier to be loved.
Forgive these wild and wandering cries,
[...] Read more
poem by Alfred Lord Tennyson
Added by Poetry Lover
VI. Giuseppe Caponsacchi
Answer you, Sirs? Do I understand aright?
Have patience! In this sudden smoke from hell,—
So things disguise themselves,—I cannot see
My own hand held thus broad before my face
And know it again. Answer you? Then that means
Tell over twice what I, the first time, told
Six months ago: 't was here, I do believe,
Fronting you same three in this very room,
I stood and told you: yet now no one laughs,
Who then … nay, dear my lords, but laugh you did,
As good as laugh, what in a judge we style
Laughter—no levity, nothing indecorous, lords!
Only,—I think I apprehend the mood:
There was the blameless shrug, permissible smirk,
The pen's pretence at play with the pursed mouth,
The titter stifled in the hollow palm
Which rubbed the eyebrow and caressed the nose,
When I first told my tale: they meant, you know,
"The sly one, all this we are bound believe!
"Well, he can say no other than what he says.
"We have been young, too,—come, there's greater guilt!
"Let him but decently disembroil himself,
"Scramble from out the scrape nor move the mud,—
"We solid ones may risk a finger-stretch!
And now you sit as grave, stare as aghast
As if I were a phantom: now 't is—"Friend,
"Collect yourself!"—no laughing matter more—
"Counsel the Court in this extremity,
"Tell us again!"—tell that, for telling which,
I got the jocular piece of punishment,
Was sent to lounge a little in the place
Whence now of a sudden here you summon me
To take the intelligence from just—your lips!
You, Judge Tommati, who then tittered most,—
That she I helped eight months since to escape
Her husband, was retaken by the same,
Three days ago, if I have seized your sense,—
(I being disallowed to interfere,
Meddle or make in a matter none of mine,
For you and law were guardians quite enough
O' the innocent, without a pert priest's help)—
And that he has butchered her accordingly,
As she foretold and as myself believed,—
And, so foretelling and believing so,
We were punished, both of us, the merry way:
Therefore, tell once again the tale! For what?
Pompilia is only dying while I speak!
Why does the mirth hang fire and miss the smile?
My masters, there's an old book, you should con
For strange adventures, applicable yet,
[...] Read more
poem by Robert Browning from The Ring and the Book
Added by Veronica Serbanoiu
The Aeneid of Virgil: Book 9
WHILE these affairs in distant places pass’d,
The various Iris Juno sends with haste,
To find bold Turnus, who, with anxious thought,
The secret shade of his great grandsire sought.
Retir’d alone she found the daring man, 5
And op’d her rosy lips, and thus began:
“What none of all the gods could grant thy vows,
That, Turnus, this auspicious day bestows.
Æneas, gone to seek th’ Arcadian prince,
Has left the Trojan camp without defense; 10
And, short of succors there, employs his pains
In parts remote to raise the Tuscan swains.
Now snatch an hour that favors thy designs;
Unite thy forces, and attack their lines.”
This said, on equal wings she pois’d her weight, 15
And form’d a radiant rainbow in her flight.
The Daunian hero lifts his hands and eyes,
And thus invokes the goddess as she flies:
“Iris, the grace of heav’n, what pow’r divine
Has sent thee down, thro’ dusky clouds to shine? 20
See, they divide; immortal day appears,
And glitt’ring planets dancing in their spheres!
With joy, these happy omens I obey,
And follow to the war the god that leads the way.”
Thus having said, as by the brook he stood, 25
He scoop’d the water from the crystal flood;
Then with his hands the drops to heav’n he throws,
And loads the pow’rs above with offer’d vows.
Now march the bold confed’rates thro’ the plain,
Well hors’d, well clad; a rich and shining train. 30
Messapus leads the van; and, in the rear,
The sons of Tyrrheus in bright arms appear.
In the main battle, with his flaming crest,
The mighty Turnus tow’rs above the rest.
Silent they move, majestically slow, 35
Like ebbing Nile, or Ganges in his flow.
The Trojans view the dusty cloud from far,
And the dark menace of the distant war.
Caicus from the rampire saw it rise,
Black’ning the fields, and thick’ning thro’ the skies. 40
Then to his fellows thus aloud he calls:
“What rolling clouds, my friends, approach the walls?
Arm! arm! and man the works! prepare your spears
And pointed darts! the Latian host appears.”
Thus warn’d, they shut their gates; with shouts ascend 45
The bulwarks, and, secure, their foes attend:
For their wise gen’ral, with foreseeing care,
Had charg’d them not to tempt the doubtful war,
Nor, tho’ provok’d, in open fields advance,
But close within their lines attend their chance. 50
[...] Read more
poem by Publius Vergilius Maro
Added by Poetry Lover
The Door Of Humility
ENGLAND
We lead the blind by voice and hand,
And not by light they cannot see;
We are not framed to understand
The How and Why of such as He;
But natured only to rejoice
At every sound or sign of hope,
And, guided by the still small voice,
In patience through the darkness grope;
Until our finer sense expands,
And we exchange for holier sight
The earthly help of voice and hands,
And in His light behold the Light.
I
Let there be Light! The self-same Power
That out of formless dark and void
Endued with life's mysterious dower
Planet, and star, and asteroid;
That moved upon the waters' face,
And, breathing on them His intent,
Divided, and assigned their place
To, ocean, air, and firmament;
That bade the land appear, and bring
Forth herb and leaf, both fruit and flower,
Cattle that graze, and birds that sing,
Ordained the sunshine and the shower;
That, moulding man and woman, breathed
In them an active soul at birth
In His own image, and bequeathed
To them dominion over Earth;
That, by whatever is, decreed
His Will and Word shall be obeyed,
From loftiest star to lowliest seed;-
The worm and me He also made.
And when, for nuptials of the Spring
With Summer, on the vestal thorn
The bridal veil hung flowering,
A cry was heard, and I was born.
II
[...] Read more
poem by Alfred Austin
Added by Poetry Lover
Lancelot And Elaine
Elaine the fair, Elaine the loveable,
Elaine, the lily maid of Astolat,
High in her chamber up a tower to the east
Guarded the sacred shield of Lancelot;
Which first she placed where the morning's earliest ray
Might strike it, and awake her with the gleam;
Then fearing rust or soilure fashioned for it
A case of silk, and braided thereupon
All the devices blazoned on the shield
In their own tinct, and added, of her wit,
A border fantasy of branch and flower,
And yellow-throated nestling in the nest.
Nor rested thus content, but day by day,
Leaving her household and good father, climbed
That eastern tower, and entering barred her door,
Stript off the case, and read the naked shield,
Now guessed a hidden meaning in his arms,
Now made a pretty history to herself
Of every dint a sword had beaten in it,
And every scratch a lance had made upon it,
Conjecturing when and where: this cut is fresh;
That ten years back; this dealt him at Caerlyle;
That at Caerleon; this at Camelot:
And ah God's mercy, what a stroke was there!
And here a thrust that might have killed, but God
Broke the strong lance, and rolled his enemy down,
And saved him: so she lived in fantasy.
How came the lily maid by that good shield
Of Lancelot, she that knew not even his name?
He left it with her, when he rode to tilt
For the great diamond in the diamond jousts,
Which Arthur had ordained, and by that name
Had named them, since a diamond was the prize.
For Arthur, long before they crowned him King,
Roving the trackless realms of Lyonnesse,
Had found a glen, gray boulder and black tarn.
A horror lived about the tarn, and clave
Like its own mists to all the mountain side:
For here two brothers, one a king, had met
And fought together; but their names were lost;
And each had slain his brother at a blow;
And down they fell and made the glen abhorred:
And there they lay till all their bones were bleached,
And lichened into colour with the crags:
And he, that once was king, had on a crown
Of diamonds, one in front, and four aside.
And Arthur came, and labouring up the pass,
All in a misty moonshine, unawares
[...] Read more
poem by Alfred Lord Tennyson
Added by Poetry Lover
Orlando Furioso Canto 18
ARGUMENT
Gryphon is venged. Sir Mandricardo goes
In search of Argier's king. Charles wins the fight.
Marphisa Norandino's men o'erthrows.
Due pains Martano's cowardice requite.
A favouring wind Marphisa's gallery blows,
For France with Gryphon bound and many a knight.
The field Medoro and Cloridano tread,
And find their monarch Dardinello dead.
I
High minded lord! your actions evermore
I have with reason lauded, and still laud;
Though I with style inapt, and rustic lore,
You of large portion of your praise defraud:
But, of your many virtues, one before
All others I with heart and tongue applaud,
That, if each man a gracious audience finds,
No easy faith your equal judgment blinds.
II
Often, to shield the absent one from blame,
I hear you this, or other, thing adduce;
Or him you let, at least, an audience claim,
Where still one ear is open to excuse:
And before dooming men to scaith and shame,
To see and hear them ever is your use;
And ere you judge another, many a day,
And month, and year, your sentence to delay.
III
Had Norandine been with your care endued,
What he by Gryphon did, he had not done.
Profit and fame have from your rule accrued:
A stain more black than pitch he cast upon
His name: through him, his people were pursued
And put to death by Olivero's son;
Who at ten cuts or thrusts, in fury made,
Some thirty dead about the waggon laid.
IV
Whither fear drives, in rout, the others all,
Some scattered here, some there, on every side,
Fill road and field; to gain the city-wall
Some strive, and smothered in the mighty tide,
One on another, in the gateway fall.
Gryphon, all thought of pity laid aside,
Threats not nor speaks, but whirls his sword about,
Well venging on the crowd their every flout.
[...] Read more
poem by Ludovico Ariosto
Added by Poetry Lover
David
My thought, on views of admiration hung,
Intently ravish'd and depriv'd of tongue,
Now darts a while on earth, a while in air,
Here mov'd with praise and mov'd with glory there;
The joys entrancing and the mute surprize
Half fix the blood, and dim the moist'ning eyes;
Pleasure and praise on one another break,
And Exclamation longs at heart to speak;
When thus my Genius, on the work design'd
Awaiting closely, guides the wand'ring mind.
If while thy thanks wou'd in thy lays be wrought,
A bright astonishment involve the thought,
If yet thy temper wou'd attempt to sing,
Another's quill shall imp thy feebler wing;
Behold the name of royal David near,
Behold his musick and his measures here,
Whose harp Devotion in a rapture strung,
And left no state of pious souls unsung.
Him to the wond'ring world but newly shewn,
Celestial poetry pronounc'd her own;
A thousand hopes, on clouds adorn'd with rays,
Bent down their little beauteous forms to gaze;
Fair-blooming Innocence with tender years,
And native Sweetness for the ravish'd ears,
Prepar'd to smile within his early song,
And brought their rivers, groves, and plains along;
Majestick Honour at the palace bred,
Enrob'd in white, embroider'd o'er with red,
Reach'd forth the scepter of her royal state,
His forehead touch'd, and bid his lays be great;
Undaunted Courage deck'd with manly charms,
With waving-azure plumes, and gilded arms,
Displaid the glories, and the toils of fight,
Demanded fame, and call'd him forth to write.
To perfect these the sacred spirit came,
By mild infusion of celestial flame,
And mov'd with dove-like candour in his breast,
And breath'd his graces over all the rest.
Ah! where the daring flights of men aspire
To match his numbers with an equal fire;
In vain they strive to make proud Babel rise,
And with an earth-born labour touch the skies.
While I the glitt'ring page resolve to view,
That will the subject of my lines renew;
The Laurel wreath, my fames imagin'd shade,
Around my beating temples fears to fade;
My fainting fancy trembles on the brink,
And David's God must help or else I sink.
[...] Read more
poem by Thomas Parnell
Added by Poetry Lover
|
86
|
putchar() vs printf() - Is there a difference?
Ask Question
Asked May 20, 2014 at 1:48
Modified 4 years, 7 months ago
Viewed 37k times
This question shows research effort; it is useful and clear
19
I am currently in chapter 1.5.1 File copying and made a program like so:
```
#include <stdio.h>

/* copy input to output; 1st version */
main()
{
int c;
c = getchar();
while (c != EOF) {
putchar(c);
c = getchar();
}
}
```
If I ran it like this:
```
PS <..loc..> cc copy-0.c
PS ./a
Black
Black
White
White
Gray
Gray
```
The output is what I input.
And here's a program I made for experimental purposes:
```
#include <stdio.h>

/* copy input to output; 1st version */
main()
{
int c;
c = getchar();
while (c != EOF) {
printf("%c",c);
c = getchar();
}
}
```
It produces the same result but is there a difference between putchar and printf?
Which is better to use between the 2?
c
printf
getchar
putchar
Share
CC BY-SA 3.0
asked May 20, 2014 at 1:48
user3649506
4
5
printf("%c", c); and putchar(c); have identical behaviour in this example.
– M.M
Commented
May 20, 2014 at 2:35
2
printf("%c", c) and putchar(c) function the same other than the return value differs - which is not used in this example. putchar(c) will certainly perform faster than printf("%c", c). The degree of speed difference is highly dependent on many other factors.
– chux
Commented
May 20, 2014 at 18:25
1
@chux Why would putchar be faster, and why is putchar_unlocked faster still?
– Suraj Jain
Commented
Mar 4, 2017 at 9:30
1
@SurajJain An optimizing compiler may emit the same code for printf("%c", c) and putchar(c) and so no performance difference in that case. With a less intelligent compiler, putchar(c), with its simple functionality would certainly be faster than printf("%c", c), although, without testing, the degree of speed difference is unknown and may be marginal. putchar_unlocked() is not a standard C library function - I am unfamiliar with its details.
– chux
Commented
Mar 4, 2017 at 16:18
5 Answers
This answer is useful
37
printf is a generic printing function that works with 100 different format specifiers and prints the proper result string. putchar, well, puts a character to the screen. That also means that it's probably much faster.
Back to the question: use putchar to print a single character. Again, it's probably much faster.
Share
CC BY-SA 3.0
edited Feb 6, 2015 at 1:35
answered May 20, 2014 at 1:53
kirbyfan64sos
10.8k · 6 gold badges · 58 silver badges · 78 bronze badges
2
6
Also putchar() is shorter. It can help if you are golfing.
– aloisdg
Commented
Jul 16, 2016 at 21:23
6
"putchar puts a character to the screen" is at best sloppy and at worst a severe misunderstanding. The buffered output functions know nothing of "screens"; they are completely oblivious of any specific hardware. The putc macro contains code which inserts a character in a stream. That abstraction is the beauty of the nix (inspired) operating systems and runtime libraries and the reason why you can pipe the output of one program to the input of another, or run servers without any physical terminals.
– Peter - Reinstate Monica
Commented
May 24, 2019 at 13:16
This answer is useful
12
I compiled an example using printf("a") with -S and got call putchar in the assembly code.
Looks like when you have only one char in the printf the compiler turns it into a putchar().
I did another example using printf("ab") and got call printf, with the text section in the %edi register.
Share
CC BY-SA 4.0
edited Jan 2, 2021 at 19:18
Roberto Caboni
7,490 · 10 gold badges · 29 silver badges · 42 bronze badges
answered Aug 1, 2015 at 16:43
user5181136
121 · 1 silver badge · 2 bronze badges
2
2
Which platform and compiler were you using?
– Aaron D
Commented
Aug 1, 2015 at 16:54
5
@AaronD This optimization is performed by Clang 3.0 and up, and GCC 4.9 and up.
– kirbyfan64sos
Commented
Oct 18, 2015 at 19:25
This answer is useful
2
The difference is that putchar prints one character whereas printf can print a lot more.
```
printf("%s\n", "this is a lot longer than one character");
```
Generally when you print something to the terminal you want to end it with a newline character, '\n'. At the very least for that reason I would suggest using printf as then you can write
```
printf("%c\n", c);
```
instead of
```
putchar(c);
putchar('\n');
```
Share
CC BY-SA 3.0
edited Mar 28, 2016 at 19:35
Jakob
423 · 6 silver badges · 16 bronze badges
answered May 20, 2014 at 1:52
Tommy Ivarsson
605 · 4 silver badges · 7 bronze badges
1
1
Did not downvote, but that is a bad example. The putchar calls seem better, and not even overly verbose here. Easier to read, too.
– Thilo
Commented
May 20, 2014 at 2:09
This answer is useful
1
putchar: prints only a single character on the screen.
printf: prints a formatted line or word on the screen.
Hence, when you want to display only one character on the screen, use putchar.
To read a string, use the gets function.
To display a string you can use either puts() or printf.
Share
CC BY-SA 3.0
answered May 20, 2014 at 2:52
user3016508
This answer is useful
0
printf lets you format strings in a complicated way, substituting things like integers and floats and other strings.
getchar and putchar get and put single characters.
I can say that printf is more useful in more ways compared to putchar.
Better to look at their respective manual pages (man 3 printf, man 3 putchar) in a terminal.
Share
CC BY-SA 3.0
answered May 20, 2014 at 1:54
ajbee
3,651 · 5 gold badges · 32 silver badges · 58 bronze badges
Linked
1
Is puts or putchar better for printing just a newline?
-2
Printing a character multiple times in a for loop
0
what is difference between printf and putchar?
|
87
|
Local Fields
Tim Browning
Notes by Florian Bouyer
Copyright (C) Bouyer 2013.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license can be found at

Contents
1 Foundations
  1.1 Completion
2 The p-adic
3 Non-archimedean Local Fields
  3.1 Hensel's Lemma
4 Extensions of local fields
  4.1 Normed vector spaces
  4.2 Extension of Absolute Values
  4.3 Ramification
5 Algebraic Closure
6 Algebraic Number Fields
7 Diophantine Equations
  7.1 Quadratic forms
  7.2 Cubic forms

1 Foundations

Absolute Values

Let K be a field.
Denition 1.1. An absolute value on K is a map | · | : K →R>0 1. |x| = 0 ⇐ ⇒x = 0 2. |xy| = |x| · |y|∀x, y ∈K 3. |x + y| ≤|x| + |y| (the △inequality) Denition. An absolute value on K is called non-archimedean if also 1. |x + y| ≤max{|x|, |y|} (the ultrametric inequality) Otherwise we say the absolute value is archimedean Example.
1. K = Q and | · | the usual absolute value, given by the inclusion Q ↪ R. This is an archimedean absolute value.

2. Take |x| = 1 for x ≠ 0 and |0| = 0. This is a non-archimedean absolute value, the trivial absolute value.

3. K = Q and p a prime. For x ∈ Q∗ the p-adic valuation is νp(x) = r if x = p^r · u/v with u, v ∈ Z, r ∈ Z and p ∤ uv. We extend it to all of Q by setting νp(0) = +∞. Check: νp(xy) = νp(x) + νp(y) and νp(x + y) ≥ min{νp(x), νp(y)} (∗). Define the p-adic absolute value | · |p : Q → R≥0 by |x|p = p^(−νp(x)) for x ≠ 0 and |0|p = 0. This satisfies the axioms of a non-archimedean absolute value (using (∗)). Note: |p^n|p = p^(−n), so p^n → 0 as n → ∞. (A small computational sketch follows the example list below.)
4. Let K be any field and put F = K(T) = { P(T)/Q(T) : P, Q ∈ K[T], Q ≠ 0 }. Define the valuation ν∞(P(T)/Q(T)) = deg Q − deg P if P/Q ≠ 0, and ν∞(0) = +∞. Check that this satisfies (∗). If c > 1, then we get a non-archimedean absolute value on F given by |f(T)|∞ := c^(−ν∞(f(T))).

Note. If K = Fq then it is convenient to take c = q.
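As promised above, here is a small sketch (not part of the notes; the function names are ours) of how the p-adic valuation and absolute value of example 3 can be computed for a rational number u/v by counting factors of p:

```c
#include <stdio.h>
#include <math.h>

/* nu_p(n) for a nonzero integer n: the exponent of p dividing n */
static int nu_p_int(long long n, long long p)
{
    int r = 0;
    if (n < 0) n = -n;
    while (n % p == 0) { n /= p; r++; }
    return r;
}

int main(void)
{
    /* x = u/v = 50/3 and p = 5: nu_5(50/3) = nu_5(50) - nu_5(3) = 2 */
    long long p = 5, u = 50, v = 3;
    int nu = nu_p_int(u, p) - nu_p_int(v, p);
    double abs_p = pow((double)p, -(double)nu);   /* |50/3|_5 = 5^(-2) = 0.04 */
    printf("nu_%lld(%lld/%lld) = %d, |x|_p = %g\n", p, u, v, nu, abs_p);
    return 0;
}
```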
Lemma 1.2. Let | · | be an absolute value on a eld K. Then 1. |1| = 1 2. x ∈K such that xn = 1, then |x| = 1.
2 3. x ∈K, then | −x| = |x| 4. K is a nite eld then the absolute value has to be the trivial absolute value Proof.
1. Note that x ̸= 0 ⇒x > 0. We have |1| = |12| = |1| · |1|. So 1. holds.
2. Note that 1 = |1| = |xn| = |x|n ⇒|x| = 1 3. Note that −x = −1 · x 4. Follows from 2. since any non-zero element x of a nite eld satises xn = 1 for some n.
The following result gives a criterion for checking whether an absolute value is non-archimedean.
Lemma 1.3. Let | · | be an absolute value on a eld K. Then | · | is non-archimedean if and only if |e| ≤1 for all e in the additive ring generated by 1.
Proof. ⇒ Since |n| = | −n| we may as well assume that n ≥1. Then |n| = |1 + · · · + 1 | {z } | n times ≤|1| = 1 ⇐ Suppose |e| ≤1 for all elements e in the additive ring generated by 1. Let x, y ∈K, then |x + y|m = m X j=0 m j xjym−j ≤ m X j=0 m j |x|j|y|m−j ≤ m X j=0 |x|j|y|m−j by assumption m j ≤1 ≤ max({|x|, |y|}m Take mth root and let m →∞(since (m + 1)1/m →1 as m →∞) Corollary 1.4. If char(K) ̸= 0 then all absolute values are non-archimedean Proof. The ring in Lemma 1.3 is a nite eld. Then apply Lemma 1.2 part 4.
Corollary 1.5. Suppose F ⊂K is a subeld of K and | · | is an absolute value on K. Then | · | is non-archimedean on K if and only if | · | is non-archimedean on F Topology Let K be a eld with absolute value | · | on K. Then we get a metric on K induced by | · |. Call it d : K × K →R≥0 dened by d(x, y) 7→|x −y|.
Exercise. Check this is a metric.
The notion of distance on elds with non-archimedean values is weird.
3 Lemma 1.6. Let K be a eld with non-archimedean absolute value. If x, y ∈K with |x| ̸= |y|, then |x + y| = max{|x|, |y|} Proof. Without loss of generality assume |x| > |y|. Then |x+y| ≤max{|x|, |y|} = |x| and |x| = |x+y−y| ≤ max{|x + y|, |y|}. Hence |x| ≤|x + y| ≤|x|.
Denition 1.7. Let K be a eld with absolute value | · |. Let a ∈K and r ∈R≥0. The open ball of radius r and centre a is B(a, r) = {x ∈K : |x −a| < r}. The closed ball of radius r and centre a is B(a, r) = {x ∈K : |x −a| ≤r}.
A set U ⊂K is open if and only if ∀x ∈U there exists an open ball around x contained in U. A set is closed if and only if its complement in K is open Lemma 1.8. Let K be a eld with non-archimedean absolute value | · |. Then 1. b ∈B(a, r) ⇒B(a, r) = B(b, r) 2. b ∈B(a, r) ⇒B(a, r) = B(b, r) 3. B(a, r) ∩B(a′, r′) ̸= 0 ⇐ ⇒B(a, r) ⊂B(a′, r′) or B(a, r) ⊃B(a′, r′) 4. B(a, r) ∩B(a′, r′) ̸= 0 ⇐ ⇒B(a, r) ⊂B(a′, r′) or B(a, r) ⊃B(a′, r′) 5. B(a, r) is both open and closed 6. B(a, r) is both open and closed.
Proof. We prove 1. 3. 5. only.
1. b ∈B(a, r) and c ∈B(b, r). |c −a| ≤max{|c −b|, |b −a|} < r, i.e., B(b, r) ⊂B(a, r). Reverse inclusion follows from symmetry since a ∈B(b, r).
3. Follows form 1.
5. b ∈B(a, r) implies B(b, r) ⊂B(a, r), so any open ball is open. To show that it is closed, note that b / ∈B(a, r) ⇒a / ∈B(b, r). So neither ball is contained in the other and they are disjoint. Hence B(b, r) ⊂K \ B(a, r) and the complement of B(a, r) in K is open.
Remark. Recall that a set S is said to be disconnected if there exists open sets U, V such that • U ∩V = ∅, • S ⊂U ∪V • S ∩U ̸= ∅and S ∩V ̸= ∅ Otherwise S is connected. If x ∈K then the connected component of x is the union of all connected sets containing it.
Example. K = R with usual absolute value, then connected component of any x ∈R is R.
Exercise. If | · | is a non-archimedean absolute value on a led K, then the connected component of any x ∈K is {x}, i.e., K is totally disconnected topological space.
4 Equivalence Denition 1.9. Two absolute values | · |1 and | · |2 on a eld K are equivalent if they induce the same topology on K. (i.e., every set which is open with respect to | · |1 is open with respect to | · |2) Given an absolute value |·| on a eld K, a sequence {an}n in K converges to a in the induced topology if and only if ∀ϵ > 0∃N ∈N such that for n > N, |an −a| < ϵ. Equivalently, for all open sets U containing a, there exists N such that an ∈U for n > N Thus the notion of convergence depends on the topology induced by the absolute value.
Lemma 1.10. Let | · |1, | · |2 be absolute values on eld K, with | · |1 non-trivial. Then the following are equivalent 1. | · |1 , | · |2 are equivalent 2. ∀x ∈K, |x|1 < 1 ⇐ ⇒|x|2 < 1 3. ∃α > 0 such that ∀x ∈K, |x|1 = |x|α 2 .
Proof.
3. ⇒1.
Then |x−a|2 < r ⇐ ⇒|x−a|1 < rα. So any open ball with respect to |·|2 is an open ball with respect to | · |1. Hence the topology must be the same and the absolute value are equivalent.
1. ⇒2.
|x|1 < 1 ⇐ ⇒xn →0 as n →∞with respect to | · |1 1.
⇐ ⇒xn →0 as n →∞with respect | · |2 ⇐ ⇒|x|2 < 1 2. ⇒3.
Now |x|1 > 1 ⇐ ⇒|x−1| < 1 ⇐ ⇒|x−1|2 < 1 ⇐ ⇒|x|2 > 1. Also |x|1 = 1 ⇐ ⇒|x|2 = 1.
Now pick (and x) a ∈K∗such that |a|1 < 1 (which is possible since | · |1 is non-trivial). Then also |a|2 < 1. Let α = log |a|1 log |a|2 > 0. Choose b ∈k∗ 1. |b|1 = 1 then |b|2 = 1 and 1 = 1α 2. |b|1 < 1 by assumption |b|2 < 1. Dene βi = log|a|i log|b|i for o = 1, 2. We show that β1 = β2 which implies log|b|1 log|b|2 = log|a|1 log|a|2 = α.
Suppose that β1 > β2, then ∃m n ∈Q such that β2 ≤m n < β1. Set x = anb−m ∈k, then log |x|i = n log |a|i −m log |b|i = n log |b|i | {z } <0 βi −m n | {z } > 0 i = 1 < 0 i = 2 , hence we have a contradiction with |x|1 < 1 and |x|2 > 1. Similarly if β2 > β1. Hence β1 = β2 3. If |b|1 > 1, |b|2 > 1, replace b by b−1 and get b−1 1 < 1 and b−1 2 < 1 How independent inequivalent absolute value are Lemma 1.11. Let | |1 , . . . , | |J be non trivial inequivalent absolute values on K. Then there exists x ∈K such that |x|1 > 1 and |x|j < 1 for 2 ≤j ≤J.
5 Proof. By induction on J.
J = 2 Since | |1 , | | are non-trivial and non-equivalent, by the previous lemma there exists y ∈K such that |y|1 < 1 and |y|2 ≥1, and z ∈K such that |z|1 ≥1 and |z|2 < 1. Let x = zy−1, then |x|1 = |z|1 |y|−1 1 > 1 and |x|2 = |z|2 |y|−1 2 < 1 J > 2 By induction, there exists y, z ∈K such that |y|1 > 1, |y|j < 1 for 2 ≤j < J and |z|1 < 1, |z|j > 1 for 2 ≤j < J. Consider |y|J and we have dierent cases: 1. |y|J < 1 so take x = y 2. |y|J = 1 so take x = ynz for large enough n 3. |y|J > 1, then yn 1+yn j = 1 1+y−n j → n→∞ ( 1 j = 2, . . . , J 0 else . So Let x = yn 1+yn z for large enough n Theorem 1.12 (Weak Approximation). Let | |1 , . . . , | |J be non trivial inequivalent absolute values on K.
Let bj ∈K for j = 1, . . . , J and let ϵ > 0. Then there exists x ∈K such that |x −bj|j < ϵ for all j = 1, . . . , J.
Proof. By Lemma 1.11, there exists xj ∈K such that |xj|j > 1 but |xj|i < 1 for i ̸= j.
Consider xn j 1+xn j j → n→∞ ( 1 i = j 0 else . Take wn = PJ j=1 bj xn j 1+xn j → n→∞bj, so take x = wn for n large enough.
Remark. This is clearly related to the Chinese Remainder Theorem. Let p1, . . . , pj be distinct primes and mj ∈N, bj ∈Z. Then there exists x ∈Z such that x ≡bj mod pmj j . Using the Theorem above, |x −bj|j < p−mj j with pj-adic absolute value 1.1 Completion Denition 1.13.
1. A sequence {xn} is a eld K is called Cauchy if ∀ϵ > 0, ∃N > 0 such that ∀m, n > N, |xm −xn| < ϵ 2. (K, |·|) is complete if every Cauchy sequence is convergent 3. A subset S ⊂K is dense if ∀x ∈K, ∀ϵ > 0, B(x, ϵ)∩S ̸= 0. That is, ∀x ∈K, there exists a sequence {xn} ∈S such that {xn} →x.
4. A eld b K, || || is a completion of (K, | |) if (a) There exists an embedding ι : K →b K which respect absolute values (b) im(K) is dense in b K (c) b K, || || is complete 6 Theorem 1.14. Let (K, | |) be a eld. Then there exists a completion ( b K, || ||) of K and it is unique as any two completions are canonically isomorphic. That is if ( b Kj, || ||j) for j = 1, 2 then there exists a unique isomorphism of b K1 ∼ = b Kj which is the identity of K and preserves || ||1 = || ||2 Proof.
Existence of Completion Let K be the set of all Cauchy Sequences in K. This is a ring as {an} + {bn} = {an + bn}, {an} × {bn} = {anbn} and id = {1}. Dene || || : K →R>0 by {an} →limn→∞|an| (R is complete). Let N ⊂K be the subset of all null sequences (||an|| = 0). Then N is a maximal ideal (Exercise). Hence K/N is a eld b K. We have || || (not an absolute value since ||an|| = 0 for non zero elements) only depends on K/N. We get a well dened functions || || : b K →R>0.
This is an absolute value. Dene ι : K →b K by a 7→{a} mod N. Then ι(K) is dense and ( b K, || ||) is complete.
Uniqueness Suppose (c K′, || ||′) is complete and is a completion, ι′ : K →b K′ satisfy the embedding properties above.
Claim. ι′ extends uniquely to an embedding λ : b K →b K′ such that K ι′ / ι b K′ b K λ O Let x ∈b K and {xn} is a sequence in K such that {ι(xn)} converges to x (dense). Dene λ(x) = limn→∞{ι′(xn)}. Construct λ′ : b K′ →b K in the same way Corollary 1.15. Let K be a eld and | |j (j ≤J) be non-trivial and inequivalent absolute values on K.
Let b Kj be the respective completions, let ∆: K , →Q j b Kj dened by x 7→(ιj(x)). Then ∆(K) is dense, i.e., its closure ∆(K) is Q j Kj.
Remark. We have Q , →R but Q , →R × R is not dense.
Proof. Let αj ∈b Kj, for 1 ≤j ≤J, then ∀ϵ > 0 there exists aj ∈K such that |aj −αj| < ϵ for 1 ≤j ≤J.
By Theorem 1.12 there exists b ∈K such that |b −aj|j < ϵ. Then |b −αj|j < 2ϵ so arbitrary closed to αj, hence dense.
2 The p-adic numbers

Theorem 2.1 (Ostrowski). Every non-trivial absolute value on Q is equivalent to | |v, where v = p is a prime or v = ∞.
Proof. Let | | be an absolute value on Q and a > 1, b > 0 be integers. Let t = max{|0| , |1|, . . . , |a −1|}, b = bmam + · · · + b1a + b0 with bi ∈{0, . . . , a −1}, bm ̸= 0 and m ≤log b log a. Then |b| ≤Pm j=0 bjaj ≤ (m + 1)t max{1, |a|m} ≤(log b/ log a + 1)t max{1, |a|m}. Replace b by bn and take nth root, |b| ≤ n log b log a + 1 1/n | {z } → n→∞1 t1/n max{1, |a|}log b/ log a Take the limit as n →∞, then |b| ≤max{1, |a|}log b/ log a (∗). We have two cases 1. | | is archimedean, then there exists |b| > 1 for some b by Lemma 1.3. So apply (∗), then |a| > 1 for all a > 1, so |b| ≤|a|log b/ log b. Reversing a and b we get |a| ≤|b|log a/ log b . Hence |a|1/ log a = |b|log b ,so log|a| log a = log|b| log b = α > 0, and it is independent of a and b. Hence |a| = aα = |a|α ∞for all a ∈N. But | ± 1| = 1, hence |a| = |a|α ∞for all a ∈Z. Let q = a b , hence true for all q ∈Q 2. | | is non-archimedean. Then there exists a ∈N such that |a| < 1. Let b be the such least integer.
Claim. b = p a prime number We prove this by contradiction. Suppose b is not a prime, b = uv. Now |uv| < 1, but as b is the least such number, we have |u| = |v| = 1, hence |b| = 1 a contradiction.
So b is a prime, let b = p.
Claim. p|a if and only if |a| < 1.
⇒: Let a = up, then |a| = |u||p|, hence |u| < 1 and |p| < 1.
⇐: Suppose that if p ∤a then a = up + r where r < p. By minimality of p, |r| = 1, |up| < 1, hence |a| = max{|up|, |r|} = 1 So let α == log|p| log p , |p| = |p|α p . For all a ∈Z we have a = pra′ where p ∤a′, hence |a′| = |a′|p = 1.
Therefore, |a| = |pra′| = |p|rα p = |a|α p . And |q| = |q|α p for all q ∈Q Denition 2.2.
1. The field of p-adic numbers Qp is the completion of Q with respect to | |p. (Qp, | |p) is a complete non-archimedean field.

2. The ring of p-adic integers Zp is Zp = {x ∈ Qp : |x|p ≤ 1} = B(0, 1), the closed unit ball (check that it is a ring, using the non-archimedean properties).

Lemma 2.3. Z is dense in Zp.

Proof. Q is dense in Qp and Zp is open in Qp, so Q ∩ Zp is dense in Zp. Now Q ∩ Zp = { a/b ∈ Q : p ∤ b }. Let a/b ∈ Q with p ∤ b; for n ≥ 1 pick yn ∈ Z such that b·yn ≡ 1 mod p^n (b is a unit in Zp). Then b·yn → 1 as n → ∞, so a·yn → a/b. Hence Z is dense in Q ∩ Zp, and hence dense in Zp.

What do elements of Qp look like?
Let x ∈Zp, let n ∈N, then by density there exists q = a b ∈Q such that x −a b p ≤p−n.
But then a b p ≤max{|x|p , x −a b p}. Hence p ∤b and there exists b′ ∈Z such that bb′ ≡1 mod pn. But then a b −ab′ p = a b (1 −bb′) p ≤p−n. Hence |x −ab′|p ≤max n x −a b p , a b −ab′ p o ≤p−n. Now let α ∈{0, . . . , pn −1} be the unique integer such that ab′ ≡α mod pn.
Conclusion: ∀x ∈Zp, ∀n ∈N, ∃α ∈{0, . . . , pn −1} such that x ≡α mod pn Lemma 2.4. For all n ∈N there exists an exact sequence of rings 0 →Zp pn →Zp φn →Z/(pnZ) →0 Proof. Note that ker pn = {z ∈Zp|pnz = 0} = {0} (take absolute value on both side). We have that φn is surjective since {0, . . . , pn −1} ⊂Zp.
We show that im(pn) = ker(φn). Suppose that x ∈im(pn), then x = pny for some y ∈Zp, then |pny −0|p ≤p−n. Thence φn(x) = 0 and x ∈ker φn.
Conversely, let x ∈ker(φn). Then |x|p = |x−0|p ≤p−n, hence |p−nx| ≤1 so x = pnp−nx | {z } ∈Zp ∈im(pn) Hence Zp/(pnZp) ∼ = Z/(pnZ).
We will see in a more general context that elements of Qp can be uniquely written as a Laurent series expansion in p. Later we will consider the extensions of Qp.
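As a preview of that expansion in the case of Zp, the base-p digits of a rational a/b with p ∤ b can be generated one at a time. The following C sketch (illustrative only, not from the notes; names are ours) prints the first few digits x0, x1, ... with a/b = x0 + x1·p + x2·p² + ...:

```c
#include <stdio.h>

/* inverse of b modulo a prime p (p does not divide b), via Fermat's little theorem */
static long long inv_mod(long long b, long long p)
{
    long long r = 1, e = p - 2, base = ((b % p) + p) % p;
    while (e > 0) {
        if (e & 1) r = (r * base) % p;
        base = (base * base) % p;
        e >>= 1;
    }
    return r;
}

/* print the first n base-p digits of a/b in Z_p (requires p prime, p not dividing b) */
static void padic_digits(long long a, long long b, long long p, int n)
{
    for (int i = 0; i < n; i++) {
        long long d = (((a % p) * inv_mod(b, p)) % p + p) % p; /* digit in {0,...,p-1} */
        printf("%lld ", d);
        a = (a - d * b) / p;   /* exact division: a - d*b is divisible by p by construction */
    }
    printf("\n");
}

int main(void)
{
    padic_digits(1, 3, 5, 8);   /* 1/3 in Z_5: 2 3 1 3 1 3 1 3 ... */
    padic_digits(-1, 1, 7, 6);  /* -1 in Z_7: 6 6 6 6 6 6          */
    return 0;
}
```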
In a Global setting: [k : Q] < ∞, OK is the integral closure of Z in k. In a local setting: [k : Qp] < ∞, Ok is the integral closure of Zp in k. But in the global setting Ok is not necessarily a Unique Factorisation Domain while in a local setting it always is.
3 Non-archimedean Local Fields

We will examine a general theory of fields which are complete with respect to a non-archimedean absolute value.
Theorem 3.1 (Ostrowski ). Let K be a eld complete with respect to a archimedean absolute value. Then K ∼ = R or K ∼ = C and the absolute value are equivalent to the usual absolute value.
Proof. See Chapter 3 of Cassels Local Fields Basics Let K be a eld with a non-trivial non-archimedean absolute value | |.
• OK = {x ∈K| |x| ≤1}, the ring of integers of K • PK = {x ∈K| |x| < 1} Check that OK is an integral domain and that PK is maximal. (If J ⊋PK is an ideal of OK then there exists x ∈J such that |x| = 1. Then |x−1| = 1 and |1 = xx−1| ∈J) • UK = {x ∈K| |x| = 1}, the group of units in K • kK = OK/PK, the residue eld of K The characteristic of kK is the residual characteristic.
Note. In general charkK ̸= charK.
• ΓK = {|x| |x ∈K∗}, the value group of | | on K.
This is a multiplicative subgroup of R>0.
Denition 3.2. A non-archimedean absolute value is discrete if ΓK is discrete. (i.e., ΓK ∼ = Z) Lemma 3.3. A non-archimedean absolute value is discrete if and only if the maximal ideal is principal.
Proof. By problem A.6 ΓK is discrete if and only if ΓK is cyclic.
⇐: Suppose that PK is principal, say ⟨π⟩.
Let γ = |π| < 1.
Hence for all x ∈PK, we have |x| ≤γ (since x = πy with y ∈OK).
So for all x ∈K, there exists n ∈Z such that γn ≤|x| < γn−1. Dividing through by γn−1 we get that γ ≤ xπ1−n < 1, whence xπ1−n ∈PK.
So γ ≤ xπ1−n ≤γ, thus xπ1−n = γ. So |x| = γn, hence ΓK is cyclic generated by γ.
⇒: Suppose that ΓK is cyclic with generator γ < 1 say. Let π ∈K be such that |π| = γ. Clearly ⟨π⟩⊂PK. Conversely, for x ∈PK, then |x| = γn for some n ≥1 since ΓK is cyclic. So |xπ−1| = γn−1 ≤1, i.e. xπ−1 ∈OK and x ∈⟨π⟩ From now on | | is a discrete non-archimedean absolute value on a eld K. So by the previous lemma PK = ⟨π⟩.We call π the uniformiser for the absolute value. Any x ∈K∗can be written as x = πnϵ with n ∈Z and ϵ ∈UK. We write VK(x) = n ∈Z for the order of x. This gives a valuation VK : K → Z ∪{∞} by setting VK(0) = ∞.
10 Lemma 3.4. Let 0 ̸= I ⊂OK be an integral ideal. Then I = Pn K := {x1 . . . xn|xi ∈PK} for some n ∈N.
Proof. The subset {|x| |x ∈I} ⊂ΓK is bounded and so it attains its maximal at x0 = πnϵ,say (ΓK is discrete). Then I = ⟨x0⟩= Pn K This implies that PK is the unique non-zero prime ideal in OK. Furthermore, OK is a PID and a local ring (with a unique maximal ideal) Let K be the completion of K with respect to the absolute value | |. Let OK, PK be the ring of integers and the maximal ideal of K respectively. then OK = OK ∩K and PK = PK ∩K. There is an inclusion map OK , →OK and so a map OK →kK := OK/PK dened by x 7→x + PK. The kernel of this map is PK, so it induces a natural map kK ∼ →kK.
Claim. This map is an isomorphism. It suces to show that it is surjective Proof. Let x ∈OK. By density of K in K, there exists x ∈K such that |x −x| < 1. Then x −x ∈PK and |x| ≤max ( |x −x| <1 , |x| ≤1 ) ≤1. Thus x ∈K ∩OK = OK Denition 3.5. A non-archimedean local eld is a eld which is complete with respect to a non-trivial discrete non-archimedean absolute value such that the residue class kK is nite.
Example.
K Qp Fq ((T)) Completion of Q Fq(T) OK Zp Fq PK pZp (T) kK ∼ = Fp Fq From now on, K is a non-archimedean local eld.
Say that an innite sum P∞ n=0 xn, xn ∈K, converges to s if s = limN→∞ PN n=0 xn Lemma 3.6. P∞ n=0 xn converges if and only if xn →∞as n →∞ Proof. Exercise Lemma 3.7. Let π be a uniformiser of K and let A ⊂OK be set of representative of OK/PK. Then OK = {P∞ n=0 xnπn : xn ∈A} Proof. By Lemma 3.6 we have P∞ n=0 converges and lies in OK. Conversely, if x ∈OK, then there exists a unique x0 ∈A such that |x −x0| < 1. Hence x = x0 + πy1 for some y1 ∈OK. Continue inductively with y1 etc Suppose x ∈K∗, Then π−Nx ∈OK for some N ∈Z. Apply Lemma 3.7 to get K∗= {P∞ n=N xnπn : xn ∈A, N ∈Z, x Let us return topology. A subset V ⊂K is said to be compact if whenever we have a family Uλ (λ ∈Λ) of open sets of K such that V ⊂∪λ∈ΛUλ, then there exists a nite subset Λ0 ⊂Λ such that V ⊂∪λ∈Λ0Uλ.
We say that K locally compact if every point of K has a compact neighbourhood. (i.e., ∀x ∈K there exists Vx ⊂K which is compact and contains B(x, r) for some r > 0) Lemma 3.8. Let K be a non-archimedean local eld.
Then OK is compact, and hence K is locally compact.
11 Proof. First we prove that OK is compact. Let Uλ (λ ∈Λ) be open sets covering OK. Suppose that there does not exists a nite subcovering. Now OK = ∪x∈A(x + πOK) where A is set of representation for (nite eld) OK/PK. Then there exists x0 ∈A such that x0 + πOK is not covered by nitely many Uλ. Similarly there exists x1 ∈A such that x0 + x1π + π2OK is not nitely covered and so on. Let x = x0 + x1π + x2π2 + · · · ∈OK. There exists λ0 ∈Λ such that x ∈Uλ0. Since Uλ0 is open it follows that x + πnOK ∈Uλ0 for some N, which is a contradiction.
Next we prove that K is locally compact. Put Vx = B(x, 1) = x + OK.
Remark. In fact: F locally compact with respect non-archimedean absolute value ⇐ ⇒F non-archimedean local eld.
3.1 Hensel's Lemma Theorem 3.9. Let K be a non-archimedean local eld and f ∈OK[X].
Suppose x0 ∈OK satises |f(x0)| < |f′(x0)|2. Then there exists a unique x ∈OK such that f(x) = 0, |x −x0| ≤|f(x0)| |f′(x0)|.
Proof. Dene fj ∈OK[X] via f(X + Y ) = f(X) + f1(X)Y + f2(X)Y 2 + . . .
(3.1) In particular f1(X) + f′(X). Dened y0 ∈OK by f(x0) + y0f′(x0) = 0. Then |f(x0 + y0)| ≤ max j≥2 fj(x0)yj 0 By (3.1) ≤ max j≥2 yj 0 ≤ |y0|2 = f(x0) f′(x0) 2 < |f(x0)| Similarly |f1(x0 + y0) −f1(x0)| ≤|y0| < |f1(x0)|. Then |f1(x0 + y0)| = |f1(x0)|. Put x1 = x0 + y0. Then |f(x1)| ≤|f(x0)|2 |f1(x0)|2 , |f1(x1)| = |f1(x0)| and |x1 −x0| = |f(x0)| |f′(x0)|. So repeat the process and obtain a sequence of xn+1 = xn + yn such that |f1(xn)| = |f1(x0)| and |f(xn+1)| ≤ |f(xn)|2 |f1(xn)|2 = |f(xn)|2 |f1(x0)|2 . So f(xn) →0 as n →∞. Finally |xn+1 −xn| = |yn| = |f(xn)| |f1(xn)| →0 as n →∞. So {xn} is Cauchy and it has a limit as required.
Now suppose that we have another solution x + z with z ̸= 0 and |z| ≤ |f(x0)| |f1(x0)| < |f1(x0)| = |f1(x)|.
Then, putting X = x and Y = z in equation (3.1), we get 0 = f(x + z) −f(x) = xf1(x) + z2f2(x) + . . . .
But |zf1(x)| > zj ≥ zjfj(x) for all j ≥2. Which gives a contradiction.
Example.
1. Squares in Qp.
• p ̸= 2. Suppose that y ∈Z∗ p. If there exists x0 ∈Zp such that x2 0 −y < 1 then there exists x ∈Zp such that x2 = y . (Take f(X) = X2 −y, so |f(x0)| < 1 but |f′(x0)| = |2x0| = 1).
12 Theorem. Any z ∈Z with p ∤z, is a square in Zp ⇐ ⇒ z p = +1 Claim. Q∗ p/(Q∗ p)2 has 4 elements represented by 1, c, p, cp where c ∈{1, . . . , p−1} is a quadratic non-residue.
Corollary. It follows that Qp has exactly 3 quadratic extensions.
Proof of the claim. Suppose x ∈Q∗ p. We may assume x = u or pu for u ∈Zp (on multiplying x by a power of p2 ∈Q∗2). Let α ∈{1, . . . , p −1} be such that u ≡α mod pZp (i.e. u = α + v for some v ∈pZp). Then u = α(1 + α−1v) and 1 + α−1v ≡1 mod pZp which is a square. Thus we may assume u = α. But u p = 1 ⇒u ∈Q∗2 p , otherwise uc is in Q∗2 p .
• p = 2. See exercise B.1 2. Since residue eld kKis nite, it follows that k∗ K is cyclic group of order q −1 where q = pr for some prime p. Now show there exists an alternative set of representative for kK = OK/PK, besides {0, . . . , q −1}.
Note p · 1 ∈OK and so q −1 ∈O∗ K. For each α ∈k∗ K, let x0 ∈O∗ K such that x0 ≡α mod p and consider f(x) = xq−1 −1. Then |f(x0)| < 1, |f′(x0)| = |q −1| · |x0|q−2 = 1. Hence by Theorem (3.9), there exists a unique Teichmuller representative b α ∈O∗ K of α such that f(b α) = 0 and b α ≡α mod p. We can take {0} ∪{b α : α ∈k∗ K} as a set of representative for kK.
Dene principal congruence subgroup Un K = {u ∈UK = O∗ K : u −1 ∈Pn K} = 1 + Pn K. Then UK and Un K are open and closed and compact in K∗(with induced topology).
We have isomorphism of topological groups: • K∗/UK →Z dened by xUK 7→VK(x).
• UK/U1 K →k∗ K dened by ξvU1 K 7→gv where ξ is a primitive (q −1)th root of unity in K and g is a generator for k∗ K.
Hence any x ∈K∗can be uniquely written as πuξvϵ for ϵ ∈U1 K, i.e., K∗∼ = Z × Z/(q −1)Z × U1 K.
13 4 Extensions of local elds We consider eld extensions of non-archimedean local elds. We would like to show that these extension are non-archimedean local elds.
4.1 Normed vector spaces Let K be a non-archimedean local eld Denition 4.1. Let V be a vector space over K. A function ∥∥: V →R≥0 is a norm if 1. ∥x∥= 0 if and only if x = 0 2. ∥x + y∥≤∥x∥+ ∥y∥ 3. ∥λx∥= |λ| · ∥x∥for all λ ∈K Note. The norm induces a metric d(x, y) = ∥x −y∥on V , which gives a topology.
Denition 4.2. Two norms ∥∥1 and ∥∥2 one a vector V are equivalent if ∃c1,c2 > 0 such that c1∥x∥2 ≤ ∥x∥1 ≤c2∥x∥2 ∀x ∈V Exercise. Show that ∥∥1, ∥∥2 are equivalent if and only if they induce the same topology on V .
Lemma 4.3. Let V be a nite dimensional vector space over K. Then any 2 norms on V are equivalent.
Moreover, V is complete with respect to the induced metric.
Proof. We proof by induction on n = dimK V n = 1 Trivial n > 1 Let e1, . . . , en be a basis for V over K.
Put a = a1e1 + · · · + anen (aj ∈K) and dene ∥a∥0 := maxj |aj|. Check that ∥∥0 is a norm and that V is complete with respect to it. It will suce to show any norm ∥∥on V is equivalent to ∥∥0. Firstly ∥a∥≤P j |aj| · ∥ej∥≤c2∥a∥0 with c2 = P ∥ej∥.
We now need to show ∃c > 0 such that ∥a∥0 ≤c∥a∥for all a ∈V (∗). If not, ∀ϵ > 0, there exists b = bϵ ∈V such that ∥b∥≤ϵ∥b∥0. Assume without loss of generality that ∥b∥0 = |bn|.
Replacing b by b−1 n b we have b = c + en where c ∈⟨e1, . . . , en−1⟩K.
Summary: (∗) false, implies we can nd a sequence c(m) ∈W = ⟨e1, . . . , en−1⟩K such that ∥c(m) + en∥→0 as m →∞. But then ∥c(m) −c(l)∥→0. So now use induction hypothesis.
Since dim W = n−1, it is complete under ∥∥. Thus there exists c∗∈W such that ∥c(m)−c∥= 0.
Hence ∥c∗+ en∥= limm→∞∥c(m) + en∥= 0. Therefore c∗+ en = 0, which is impossible. Hence (∗) hold and so ∥∥and ∥∥0 are equivalent.
Corollary 4.4. V nite dimensional normed vector space over K. Then V is locally compact. (i.e., v ∈V has a compact neighbourhood) Proof. By Lemma 4.3 we can assume ∥∥is ∥∥0, with respect to some xed basis e1, . . . , en. Now imitate the proof of Lemma 3.8 to show that {v ∈V : ∥v∥0 ≤1} is compact.
14 4.2 Extension of Absolute Values Let K be a non-archimedean local eld and L ⊃K an extension. We say that an absolute value ∥∥L extends to the absolute value | | on K if ∥λ∥= |λ| for all λ ∈K Theorem 4.5. Let L ⊃K be a nite extension. Then there exists a unique extension ∥∥of | | to L.
Moreover, L, ∥∥is a non-archimedean local eld.
Proof.
Uniqueness: Suppose ∥∥1, ∥∥2 extend | | to L. Then, regarding L as a nite dimensional vector space over K, Lemma 4.3implies ∥∥1 and ∥∥2 are equivalent and some dene the same topology on L.
But then they are equivalent as absolute values and so by Lemma 1.10, there exists α such that ∥x∥1 = ∥x∥α 2 ∀x ∈L. But the two absolute values are equal on K, so that α = 1.
Second_part Apply 4.4 and converse of Lemma 3.8 Existence We will show that the extension of ∥∥of | | to L is given by ∥x∥= |NL/K(x)|1/n for x ∈L, where n = [L : K]. Here NL/K : L →K is the norm map. (Recall: Thinking of L as a vector space over K, multiplication by α ∈L gives a linear map mα : K →L, with matrix Aα ∈Mn(K). Put NL/K(α) := det Aα). For x ∈K, ∥x∥= |xn|1/n = |x|. So ∥∥does extend | |.
For x ∈L∗, the linear map mx : L →L is invertible with inverse mx−1. Thus the matrix Ax is invertible, and det Ax ̸= 0. Hence ∥x∦= 0. Multiplicativity follows from the multiplicativity of the norm map.
Remains to prove the ultrametric inequality. Suces to show ∥x∥≤1, then ∥1+x∥≤1. (Then, assuming ∥x∥≤∥y∥then ∥x + y∥= ∥y∥· ∥x y + 1∥≤∥y∥). Suppose ∥x∥≤1. Let χ(x) be the characteristic polynomial of the linear map mx : L →L. Let f(X) = Xr + fr−1Xr−1 + · · · + f0 be the minimal polynomial of x. Here r is the degree of x over K. Then χ(X) = f(X)n/r (where n/r is the degree of L over K(x), a proof of this can be found in Cassels book Lemma B.3). Then |fpower 0 | = |NL/K(x)| ≤1, hence |f0| ≤1. Since f is irreducible and monic it follows from consideration of Newton polygon associated to it that |fi| ≤1 (See Cassels chapter 4).
Hence f ∈OK[X] and also χ ∈OK[X]. Now NL/K(1 + x) = det(In + Ax) = (−1)nχ(−1). So ∥1 + x∥= |χ(−1)|1/n ≤1. This completes the proof Since the absolute value on L is unique, we will usually write it as | | instead of ∥∥.
Corollary 4.6. | |p on Qp extends uniquely to an absolute value on algebraic closure Qp.
Proof. x ∈Qp then x ∈K for some nite extension K/Qp. Let | | = | |K where | | is the unique absolute value on K extending | |p. This is independent of choice of K by Theorem 4.5 4.3 Ramication Suppose L/K is a nite extension of non-archimedean local elds of degree n = [L : K].
Lemma 4.7. There exists a natural injection kK →kL such that kL is an extension of kK of degree f = f(L/K) := [kL : kK] ≤n.
15 Proof. There is certainly an inclusion OK , →OL. But PK = OK ∩PL, this induces the injection kK →kL.
Let α1, . . . , αn+1 ∈kL. Show that there are linearly dependent over kK. Then we will have shown that f = dimkK kL ≤n. Let c α1, . . . , [ αn+1 ∈OL such that αi = b αi + PL for 1 ≤i ≤n + 1. Since dimK L = n, there are linearly dependent over K, i.e., there exists λi ∈K not all zeroes such that Pn+1 i=1 λi b αi = 0.
Without loss of generality, we assume that λn+1 ̸= 0. Let µi ∈kK be the reduction of λiλ−1 n+1 modulo PL.
Then Pn i=1 µiαi + αn+1 = 0, as required.
Denition 4.8. If f = f(L/K) = n, we say L/K is unramied.
If f = f(L/K) = 1, we say L/K is totally ramied.
If f = f(L/K) < n, we say L/K is ramied.
Remark. If K ⊂L ⊂E is a tower of extensions then f(E/K) = f(E/L) · f(L/K).
We shall see that unramied extensions are easy to characterise.
Theorem 4.9. Let α ∈kL = OL/PL. Then there exists b α ∈OL such that b α + PL = α and [K(b α) : K] = [kK(α) : kK].
Furthermore, the eld K(b α) depends on α.
Remark. The extension K(b α)/K is unramied.
Proof. Let φ ∈kx[x] be the minimal polynomial of α. Let Φ ∈K[X] be any lift of φ (i.e., deg φ = deg Φ and φ = Φ, meaning coecients of Φ are reduced modulo PK).
Let c α0 ∈OL be an element of the residue class of α. Then Φ(c α0) = φ(α) = 0 and Φ ′(b α0) = φ′(α) ̸= 0 (since kK is a nite eld so it is perfect). Thus |Φ(c α0)| < 1 and |Φ′(c α0)| = 1. Hence by Hensel's lemma, with K(b α0) as the ground eld, implies there exists b α ∈K(b α0) ⊂L such that Φ(b α) = 0, |b α −c α0| < 1. Hence b α in residue class of α and [K(b α) : K] = [kK(α) : kK] since Φ is irreducible.
Now suppose that b α′ is also in the residue class of α and satises [K(b α′) : K] = [kK(α) : kK]. Then the above argument implies b α ∈K(b α′) and so K(b α) = K(b α′). But we must have equality since the degrees are the same.
Corollary 4.10. There exists a bijection between intermediate elds E (with K ⊂E ⊂L) which are unramied and the elds k with kK ⊂k ⊂kL, given by E →kE = E ∩OL/E ∩PL Proof. The previous theorem gives one direction.
Let k be an intermediate eld kK ⊂k ⊂kL. Let q = #k. Then k = kK(α) for some (q −1)th root of unity α. Then apply Theorem 4.9 Corollary 4.11. Let K be a non-archimedean local eld. For all n ∈N there exists a unique (up to isomorphism) unramied extension of degree n. It is the splitting eld over K of Xq −X where q = qn K, with qK = #kK Proof. Let L/K be unramied extension of degree n. Then kL has q = qn K elements. Then L contains a full set of (q −1)th roots of unity (By example 2 after Hensel's lemma). In particular Xq −X splits in L and so L contains its splitting eld, say F. However qL = qF = q and so by Corollary 4.10 we must have F = L Corollary 4.12. Let f ∈OK[X] be monic of degree n and reduction f mod PK is irreducible. Then 1. if L = K(α) and α has minimal polynomial f, then L/K is unramied 16 2. The splitting eld of f over K is unramied and has degree n.
Proof.
1. Note that kL ⊃kK(α), where α is the reduction of α mod PL. Moreover, kK(α) has degree n over kK. Hence f(L/K) ≥n. But we also have f(L/K) ≤n = [L : K] by Lemma 4.7.
2. Let L be the splitting eld of f over K and let α, β be roots of f in L. Then part 1. implies that K(α) and K(β) are both unramied extensions of degree n. Then Corollary 4.10 implies they are equal, therefore L = K(α).
Summary. Unramied extensions of K are obtained by adjoining a root of unity of order coprime to residual characteristic of K.
Now let us look at ramied extensions.
Suppose L/K is a nite extension of non-archimedean local elds. Consider the relationship between value groups ΓL = {|x| : x ∈L∗} is a discrete (cyclic) subgroup of R>0.
Denition 4.13. The ramication index of L/K is e = e(L/K) = [ΓL : ΓK].
If πL, πK are uniformisers for L and K respectively. Recall |πK| < 1 is a generator for ΓK and similarly πL for ΓL. Then |πK| = |πL|e. This implies e(E/K) = e(E/L)e(L/K) for any tower K ⊂L ⊂E.
Theorem 4.14. L/K be a nite extensions of non-archimedean local elds of degree n. Then n = ef.
It follows from this that L/K is unramied if and only if e(L/K) = 1 L/K is totally ramied if and only if e(L/K) = n It is ramied if and only if e(L/K) > 1 Proof. Let πL be a uniformiser of L and let b α1, . . . , b αf be any lift to OL of a basis for kL/kK. (As in Theorem 4.9) We will prove that B = n b αiπj L : 1 ≤i ≤f, 0 ≤j ≤e −1 o is a basis for L/K. (In fact we will prove that it is an OK basis for OL.) Firs suppose that B is not linearly independent over K. Then there exists aij ∈K, not all zeroes, such that X i,j aij b αiπj L = 0 (∗) Without loss of generality, assume that maxi,j |aij| = 1.
Hence, there exists integers, I, J such that |aij| ≤|πK| for 1 ≤i ≤f, j < J, |aIJ| = 1. If we reduce P i aiJ b αi module PL, then we get a non-zero coecient aIJ. Since b αi mod PL are linearly independent over kK, this reduction is non-zero. Thus |P i aiJ b αi| = 1. We now get X i aij b αiπj L ≤|πK| = |πL|e j < J = |πL|J j = J ≤|πL|J+1 j > J Recalling J ≤e −1, one term in (∗) has to be bigger than all the others. Contradiction Now let x ∈L. We claim that x is in the K-span of B. Multiplying by a suitable power of πK, we reduce to the case x ∈OL. (If πn Kx = P ij aij b αiπj L with aij ∈K, then putting bij = π−n k aij gives x = P bij b αiπj L).
17 Since αi ≡b αi mod PL form a basis for kL/kK, there exists ci0 ∈kK such that x = P i ci0αi. Choose any lifts c ci0 ∈OK, we have x −P i b ci0b αi = πLx1 ∈PL for some x1 ∈OL. Now repeat process on x1, and so on, until we have obtained b cij ∈OK such that x − e−1 X j=0 X i b cij b αiπj L = πe Lxe for some xe ∈OL. Now |πL|e = |πK| and so πe Lxe = πKx(1) for some x(1) ∈OL. Now we start again with x(1) instead of x. Carrying on in this way, we nd a succession of linear combinations cr = e−1 X j=0 X i b c(r) ij b αiπj L of elements of B with coecients in OK such that x −c0 −c1πK −· · · −csπs K ∈πs+1 K OL, ∀s ≥0. Now let s →∞and, using completeness, put aij = ∞ X r=0 b c(r) ij πr K Then x = P i,j aij b αiπj L as required.
A polynomial f(X) = fnXn + fn−1Xn−1 + · · · + f0 ∈OK[X] is said to be Eisenstein if |fn| = 1, |fj| < 1 ∀0 ≤j < n, |f0| = |πK| (†) Aside on irreducibility: Let f = f0 + f1X + · · · + fnXn ∈K[X] with f0 ̸= 0, fn ̸= 0. The Newton polygon of f is the convex hull in R2 of the points p(j) = (j, log |fj|) for fj ̸= 0. It consist of line segments σs for 1 ≤s ≤r, where σs joins P(ms−1) to P(ms) and 0 = m0 < m1 < · · · < mr = n. The slope of σs is γs = log |fms| −log fms−1 /(ms −ms−1). We say f is of type (l1, γ1, . . . , lr, γr) where ls = ms −ms−1.
If r = 1 then f is said to be pure.
Fact. (Cassels Local Field pg 100): f of type (l1, γ1, . . . , lr, γr) then f(X) = g1(X) . . . gr(X) where gs is pure of type (ls, γs).
Totally ramied extensions are quite easy to classify.
Theorem 4.15. Let L/K be a nite extension of non-archimedean local elds.
Then L/K is totally ramied if and only if L = K(β), where β is the root of an Eisenstein polynomial.
Proof.
⇒ L/K totally ramied of degree n, let β = πL be a uniformiser for L. Then 1, πL, . . . , πn−1 L are linearly independent over K (as in the proof of Theorem 4.14). Hence there exists an equation βn + fn−1βn−1 + · · · + f0 = 0 with fj ∈K. Two of the summands must have the same absolute value and this must be the rst and the last. (Suppose |fkβk| = |flβl| for some n ≥k > l ≥0, then there exists ak, al ∈Z such that πk−l L = |β|k−l = |πK|ak−al = |πL|n(ak−al), hence l = 0 and k = n). Therefore |f0| = |πL|n = |πK| and |fj| < 1 for all j. Hence a polynomial in the equation is Eisenstein.
18 ⇐ Suppose fnβn + · · · + f0 = 0, where fj ∈K such that |fn| = 1, |fj| < 1 and |f0| = |πK| .Then |βn| < 1 and so |β| < 1. Hence the last term in the sum is bigger than all the others. except possibly for the rst. Since the sum is zero, they must be equal (in absolute value) and so |β|n = |f0| = |πK|. Hence β = πg Ly for some unit y, then |πL|gn = |πK|, whence e(L/K) = gn ≥n.
But we must have equality by Theorem 4.14, since n = [L : K]. Hence L/K is totally ramied.
Example. Let f(X) = X4 −2X3 + 17X2 + 22X + 66. We are going to look at the splitting eld E over Qp for various prime p. In each case, we want to calculate: • The degree [E : Qp]?
• Residue class degree f(E/Qp)?
• Ramication index e(E/Qp)?
• (If possible) maximal unramied subextension L? i.e., E ⊃L ⊃Qp with L/Qp unramied and hence E/L totally ramied.
p = 2 Newton polygon. Note log |2ab|2 = log 2−1 • 0 • 0 • −log 2 • −log 2 • −log 2 | 0 | 1 | 2 | 3 | 4 l1 = 2 −0 = 2, γ1 = log 2/2, l2 = 4 −2 = 2, γ2 = 0. So the type is (2, 1 2 log 2, 2, 0) and factorises as a product of 2 quadratic. Trial an error over Z gives f(X) = (X2 −2X + 6) | {z } :=g(X) · (X2 −11) | {z } :=h(X) and g, h irreducible over Q2 (Eisenstein criterion for g and 11 ̸≡1 mod 8 so apply Exercise B.1).
Let αbe a root of g and β a root of h in E. Then by Theorem 4.15, we have that Q2(α)/Q2 is totally ramied. Since β −1 satises 0 = h(X + 1) = X2 + 2X −10 which is Eisenstein, we also have Q2(β)/Q2 is totally ramied. Note that [Q2(α) : Q2] = [Q2(β) : Q2] = 2.
Next γ = α −1 satises 0 = g(X + 1) = X2 −7, so γ2 = 7. Let δ = βγ, then δ2 = 7 · 11. We claim that Q2(δ)/Q2 is unramied of degree 2. Then [E : Q2] = 4, e(E/Q2) = f(E/Q2) = 2 and L = Q2(δ). To show that Q2(δ)/Q2 is unramied, we need to show (by Corollary 4.11) that Q2(δ) is obtained from Q2 by adjoining a root of X2 + X + 1 (i.e., a primitive (22 −1)th root of unity). Do this by applying Hensel to (δ −1)/2.
p odd Use the fact that E = Qp(γ, β) where γ and β are as above (γ2 = 7, β2 = 11) p = 3 Since 7 3 = 1, then γ ∈Q3, while 11 3 = −1, so X2 −11 is irreducible in F3[X].
Hence it follows that E = Q3(β) and it is unramied of degree 2 over Q3 (by Corollary 4.12) p = 19 Since 7 19 = 11 19 = 1 so E = Q19 (all primes behave like 3, 5, 11, 13 or 19) 19 5 Algebraic Closure Recall that a eld K is called algebraically closed if every polynomial with coecients in K has a root in K.
Denition 5.1. An extension K ⊃K is the algebraic closure of K if 1. K is algebraically closes 2. Any α ∈K is algebraic over K.
For example, C is the algebraic closure of R, [C : R] = 2. If Qp is the closure of Qp then [Qp : Qp] = ∞.
(Note Qp must contain roots of Xn −p for all n ∈N) Theorem 5.2. Let K be a eld. Then there exists an algebraic closure K of K and it is unique up to isomorphism.
Proof. (Sketch) Let Λ be the set of all irreducible polynomials over K of degree ≥2. Let Ξ = {Xf : f ∈Λ} be a family of indeterminate indexed by Λ. Put R = K[Ξ]. Consider the ideal I = {f(Xf) : f ∈Λ}. This is a proper ideal: if not we would have an equation 1 = u1f1(Xf1) + · · · + unfn(Xfn) for some uj ∈R.
Let E/K formed by adjoining roots α1, . . . , αn of f1, . . . , fn respectively. Then we deduce that 1 = 0 a contradiction.
Since I is proper, it is contained in a maximal ideal m of R. Then K = R/m is a eld and the homomorphism K →K[Ξ] ↠K is an embedding of K →K. We claim K is an algebraic closure of K.
If f is an irreducible polynomial in K[X], then α = Xf + m is an root of f in K (since f(Xf) ∈I ⊂m).
Moreover, each Xf + m is algebraic over K and K is generated by them.
Uniqueness: Essentially follows form uniqueness of splitting elds of polynomials over K.
From now on K is a non-archimedean local eld with algebraic closure K. Recall that the absolute value on K extends uniquely to K.
(∀α ∈K,there exists K ⊂L ⊂K such that L = K(α), then |α| = NL/K(α) 1/[L:K]).
We want to know if it is possible for K to be a non-archimedean local eld: ΓK Let us ask is the value group ΓK is discrete? Suppose ΓK = {|x| : x ∈K} is generated by g < 1.
Suppose r ∈ΓK. Then r = |α| for some α ∈K. Let L/K of degree n such that α ∈L. Then |α| = gm/n for some m ∈Z. Hence ΓK ⊂ gm/n : m n ∈Q . In fact we have equality. Let L ⊂K be an extension obtained by adjoining a root α of the Eisenstein polynomial Xn −πKX −πK.
Then α is the uniformiser for L and L/K is totally ramied of degree n. Hence |α| = g1/n and so |αm| = gm/n. This shows that ΓK is not discrete.
kK Consider the residue eld kK. Let α ∈kK = OK/PK and let b α be a lift of α to OK. Then b α ∈K and so there exists a minimal polynomial Φ ∈OK[X] of b α over K. Let φ ∈kK[X] denote the reduction modulo PK of Φ. Then it follows φ(α) = 0 and so α is algebraic over kK.
Thus kk ⊂kK. In fact we have equality here. Suppose φ ∈kK[X] is irreducible and let Φ be a lift of φ. Then Φ has a root α ∈OK (since K is algebraic closed). Then α = α + PK is a root of φ in kK. Hence kK = kK.
Suppose L/K is Galois.
Exercise. If | | is an absolute value on L which extends the absolute value on K, then so ∥x∥= |σ(x)| for all σ ∈Gal(L/K). By uniqueness, we have |x| = |σ(x)| for all x ∈L∀σ ∈Gal(L/K).
20 Theorem 5.3 (Krasner's Lemma ). K eld of characteristic 0, which is complete with respect to a non-archimedean absolute value | |. Let a, b ∈K and suppose that |b −a| < |a −ai|for all 2 ≤i ≤n where a1 = a, a2, . . . , an are roots of the minimal polynomial of a in K[X]. Then K(a) ⊂K(b).
Proof. Put L = K(b). Suppose for contradiction that a / ∈L. Let f ∈L[X] be minimal polynomial of a over L. Let E be the splitting eld f over L. Then E/L is Galois and since a / ∈L, there exists σ ∈Gal(E/L) which does not x a. Then σ(a) = ai for some i > 1. |a −ai| ≤max{|a −b| , |b −ai| | {z } =|σ(b−a)|=|b−a| } = |a −b|, which is a contradiction.
Incompleteness K = Qp Theorem 5.4. Qp is not complete with respect to | |p Proof. We need to nd a Cauchy sequence {αn} in Qp which does not converge. For i ≥0, let ζi be a root of unity of order p(i+1)! −1. Put Fi = Qp(ζi), then • Fi is the splitting eld of Xp(1+i)! −X over Qp. Thus it is an unramied extension of Qp of degree (i + 1)! and it is Galois (Corollary 4.11) • ζi−1 ⊂Fi since [Fi : Fi−1] = i + 1 and moreover pi! −1|p(i+1)! −1.
Consider the sequence αn = Pn i=1 ζipi. Since |αm −αn|p = 1 p min{m,n} , so this is certainly Cauchy.
We claim it does not have a limit in Qp. Suppose that it does have a limit, α = P∞ n=0 ζip i ∈Qp. Let d be the degree of the minimal polynomial mα of α over Qp. Now Fd/Fd−1 is Galois of degree d + 1.
Hence there exists σ1, . . . , σd+1 ∈Gal(Fd/Fd−1) such that the images of ζd are all distinct. Note that |σi(α −αd)|p = |α −αd|p ≤p−(d+1). Also for i ̸= j, we have σi(αd) −σj(αd) = d−1 X k=0 ζkpk + σi(ζd)pd − d−1 X k=0 ζkpk + σj(ζd)pd !
= pd(σi(ζd) −σj(ζd)).
Hence for i ̸= j, we have |σi(αd) −σj(αd)|p = p−d (since σi(ζd) and σj(ζd) are distinct and (p(d+1)! −1)th root of unity). We conclude that |σi(α) −σj(α)|p = |σi(α −αd) + σi(αd) −σj(αd) −σj(α −αd)|p = p−d This implies that σi(α) ̸= σj(α) for all i ̸= j. But then σ1(α), . . . , σd+1(α) are distinct conjugates of α.
This is a contradiction to the fact that the degree of mα is d.
Note. Our sequence {αn} was actually in Qun p := Qp ∪(n,p)=1µn , which we've shown is not complete.
We let Cp denote the completion of Qp (as in Theorem 1.14) Theorem 5.5. Cp is algebraic closed.
21 Proof. The proof is based on the Lemma. Let char(K) = 0 and K complete with respect to a non-archimedean value. Let f = Xn + an−1Xn−1 + · · · + a0 ∈K[X]. Assume f is irreducible over K. Then there exists δ > 0 such that for all g = Xn + bn−1Xn−1 + · · · + b0 ∈K[X] with |ai −bi| < δ (0 ≤i ≤n −1), g is irreducible.
Proof. Let λ1, . . . , λn be the roots of f in K and similarly let µ1, . . . , µn be the roots of g in K. Put C = max{1, |ai|}. Dene r = mini̸=j |λi −λj|, R(f, g) = Q i,,j(λi −µj) = Q i g(λi) = Q j f(µj)·(−1)n (the resultant).
Step 1 If 0 < δ < C then for all g with |ai −bi| < δ, every root µj over g has |µj| ≤C.
Suppose for contradiction, we have |µ| > C. Then for 0 ≤i ≤n −1, biµi ≤C |µ|i < |µ|i+1 ≤ |µ|n. This is a contradiction.
Step 2 For all ϵ > 0, there exists δ > 0 such that if |ai −bi| < δ for all i then |R(f, g)| < ϵ If |ai −bi| < δ < C for all i then |f(µj)| = |f(µj) −g(µj)| = n−1 X i=0 (ai −bi)µi j ≤ max i |ai −bi| · max{1, |µj|n} < δCn by step 1. Thus for all δ < min{C, ϵ1/nC−n}, we have |R(f, g)| = Q j |f(µj)| < δnCn2 < ϵ.
Step 3 If |R(f, g)| < rn2 then g is irreducible over K.
The condition means at least one of the factors |λI −µJ| < r = mini̸=j |λi −λj|. Then by Krasner's lemma (Theorem 5.3) , we have K(λI) ⊂K(µJ), hence K(µJ) has degree n and so g is irreducible.
We apply the sublemma with K = Cp. Let f ∈Cp[X] be irreducible, which is monic. Let δ > 0 be as in the sublemma. Since Qp is dense in Cp, there exists a monic polynomial g ∈Qp[X] satisfying the hypothesis of the sublemma. Thus g is irreducible of degree n in Cp[X], so also in Qp[X]. But since Qp is algebraic closed, so deg g = 1 6 Algebraic Number Fields Let K/Q be a number eld. A place is an equivalence class of non-trivial absolute values on k, denote the completion of k at P by kp. If P is non-archimedean, then absolute values in Q ⊂K are equivalent to p-adic absolute value | |p, we write p|p. Then kpis an extension of Qp (and so is a non-archimedean local eld). Let qp be the cardinality of residue eld of kp Denition 6.1. The renormalised absolute value | |p on kp is determined by |πp|p = q−1 p where πp is a uniformiser. By problem C.1, we have |α|p = |α|[kp:Qp] p for all α ∈kp 22 If r is an archimedean place, the relevant completion kr is either R (r is a real place) or C (r is a complex place) The renormalised absolute value is | |r = ( | |∞ r real | |2 ∞ r complex An archimedean place r is an extension of an archimedean place ∞on Q, write r|∞ Lemma 6.2. Let α ∈k∗. Then |α|p = 1 for all but nitely many places p Proof. Let f = Xn + an−1Xn−1 + · · · + a0 ∈Q[X] be a minimal polynomial of α over Q. Then aj ∈Zp (0 ≤j ≤n −1) for almost all primes p. Hence |α|p ≤1 for almost all p (not aj ∈Zp implies |α|p ≤1∀p|p) Similarly α−1 p ≤1 for almost all p Theorem 6.3 (Product Formula). Let α ∈k∗. Then Y p archimedean & non−archimedean |α|p = 1 Proof. By standard eld theory we have k ⊗Q Qp = ⊕p|pkp and P p|p[kp : Qp] = [k : Q] Similarly k ⊗Q R = ⊕r|∞kr and P r|∞[kr : R] = [k : Q] Hence for all w ∈{p, ∞} (with Q∞:= R) Y p|w |α|p = Y p|w |α|[kp:Qw] w = Nk/Q(α) w since Nk/Q = Q p|w Nkp/Qw (c.f. Theorem 4.5). This reduces the statement to of the theorem to the case k = Q. Apply Problem A.2 Theorem 6.4 (Strong Approximation). Let S be a nite set of non-archimedean places of a number eld k. Let ϵ > 0. Let αp ∈kp for p ∈S. Then there exists α ∈k such that 1. |α −αp|p < ϵ for all p ∈S 2. |α|p ≤1, p / ∈S, p non-archimedean Note: If αP ∈OP (for p ∈S), then 2. can be replaces by α ∈O.
Proof. Let S0 be the set of rational primes p such that p|p for some p ∈S. Without loss of generality we assume S contains all p extending p ∈S0 (put αp = 0 for p not in original S). By the Weak Approximation Theorem (Theorem 1.12) there exists β ∈k such that |β −αp|p < ϵ (for p ∈S) . Lemma 6.2 implies the set R of non-archimedean places p / ∈S for |β|p > 1 is nite. Let R0 be the set of rational primes p such that p|p for some p ∈R. Then R0 ∩S0 = ∅.
Let η > 0. By the Chinese Remainder Theorem we can nd l ∈Z such that |l −1|p < η for p ∈S0 and |l|p < η for p ∈R0. Check that α = lβ satises the conclusion of the theorem.
23 7 Diaphantine Equations 7.1 Quadratic forms Let K be a eld of characteristic not 2, Q = P aijxixj ∈K[x1, . . . , xn] is a quadratic form of rank n, We say Q is soluble if there exists x ∈Kn \ {0} such that Q(x) = 0 Lemma 7.1. Suppose [K : Qp] < ∞, p ̸= 2. Assume without loss of generality that Q = P aix2 i , then Q is soluble if either 1. n ≥3 and ai ∈O∗ K for all i 2. n ≥5 Proof.
1. Without loss of generality, assume Q = ax2 + by2 −z2 for a, b ∈O∗ K. Let k = kK and assume q = #k. The maps x →ax2 and y →1 −by2 have images of size q+1 2 in k. Thus the images overlap and there exists x, y ∈OK such that ax2 + by2 ≡1 mod πK. By Hensel's lemma, Q is soluble 2. On multiplying by the square of the uniformiser we may assume vK(ai) ∈{0, 1}. As n ≥5, without loss of generality, vK(a1) = vK(a2) = vK(a3). If vK(a1) = vK(a2) = vK(a3) = 0, then apply part 1. .Otherwise if vK(a1) = vK(a2) = vK(a3) = 1, then divide through by uniformiser and apply part 1.
Note. Part 2. is still true when p = 2: quinary quadratic forms are isotropic over any p-adic eld.
On the arXiv, there is a recent paper by Bhargava, Cremona, Fisher which looks at the density of isotropic quadratic forms in 4 variables (roughly 97%).
Theorem 7.2 (Hasse-Minkowski Theorem). Q is a quadratic form over a number eld k. Then Q is soluble over k if and only if Q is soluble over kp for every place p.
Proof. Omitted Remark.
1. Lemma 7.1 implies if n ≥3, then local solubility is automatic for all but nitely many primes.
2. When n = 2 and k = Q this is very easy: a ∈Q∗2 p , if and only if vp(a) is even. a ∈R∗2, if and only if a > 0. Both of these implies a ∈Q∗2.
3. Using Rimenan-Roch one can show that any smooth and projective curve of genus 0 is over a number eld k is k-birationally equivalent to a conic over k. Thus Theorem 7.2 implies that the Hasse principle holds for smooth and projective curves of genus 0.
7.2 Cubic forms Natural question: Is there an analogue of Lemma 7.1 for a cubic forms?
Theorem 7.3 (Demyanov (p ̸= 3), Lewis, 1950's). Suppose [K : Qp] < ∞. Assume F = P i≤j≤k xixjxk ∈ K[x1, . . . , xn]. Then F is soluble if n ≥10.
24 Proof. Treat case k = Qp. Let ∆= ∆(F) be the discriminant of F (this is the resultant of ∂F ∂x1 , . . . ∂F ∂xn ).
Then ∆is a non-zero form of degree n2n−1 in the coecients of F. Moreover if M ∈GLn(Qp) such that x = My, (F(x) = F(My) = F ∗(y)) then F ∗(y) = aF(y), then ∆(F ∗) = an2n−1(det M)3·2n−1∆.
Since ∆is a non-zero form it can not vanish on any neighbourhood of a point. Hence if ∆(F) = 0 then ∀N ∈N there exists c(N) ijk ∈Qp such that ∆(F (N)) ̸= 0 and cijk −c(N) ijk p < 1/N. Suppose a(N) is zero at F (N) in Zn p. By compactness, these points have an accumulation point in Zp and since F is continuous, this point is a zero of F. Hence without loss of generality ∆(F) ̸= 0.
Note if F and F ∗are equivalent over Qp, then ∆(F) = 0 ⇐ ⇒∆(F ∗) = 0. F is equivalent over Qp to a form with coecients in Zp. Then δ(F) = vp(∆(F)) ≥0. We say that F is reduced if it has coecients in Zp and ∆(F) ̸= 0 and δ(F) ≤δ(F ∗) for all F ∗over Zp equivalent to F over Zp.
It suces to work with reduced F. Let r ∈N minimal such that F(x) ≡F1(L!, . . . , Lr) mod p where F1 ∈Zp[y1, . . . , yr] and the Li are linear forms with coecients in Zp, and are linearly independent.
Clearly r ≤n and make unimodular transformation yi = Li for 1 ≤i ≤r, to obtain an equivalent form F ∗, where F ∗(y1, . . . , yn) ≡F1(y1, . . . , yr). If F is reduced then so is F ∗. Let F ′(z1, . . . , zn) = p−1F ∗(pz1, . . . , pzr, zr+1, . . . , zn). Then F ′ has coecients in Zp and δ(F ′) = δ(F ∗)+2n−1(3r −n). Hence r ≥n/3 since F is reduced. Now n ≥10 implies r ≥4, hence there exists (b1, . . . , br) ∈F4 p \ {0} such that F1(b) = 0 (by Chevaley-Warning: Over Fp any form of n variables of degree d is soluble if n > d).
Assume without loss of generality b1 = 1. Then F ∗(z1, b2z1+z2, . . . , brz1+zr, zr+1, . . . , zn) ≡z2 1L+z1Q+C mod p where L, Q, C are forms in z2, . . . , zn. Since r is minimal, L and Q are not both identically zero modulo p.
Case 1.
L not identically zero modulo p: Then (1, 0, . . . , 0) is a solution of F ∗≡0 mod p and some partial derivative of F ∗does not vanish modulo p at (1, 0, . . . , 0) Case 2.
L is identically zero modulo p: There exists d = (d2, . . . , dn) ∈Zn−1 such that p ∤(d2, . . . , dn) and such that Q(d2, . . . , dn) ̸≡0 mod p. Then (−C(d), d2Q(d), . . . , dnQ(d)) is a solution of F ∗≡0 mod p with ∂F ∗ ∂x1 ̸= 0 mod p.
In either case Hensel's lemma yields the result Remark.
1. n ≥10 is best possible in Theorem 7.3. See problem C.3 2. Artin's conjecture: Qp is a C2 eld, i.e, any form over Qp in n variables and degree d is soluble over Qp if n > d2. This is FALSE.
3. What about an analogue of Theorem 7.2? Let k be a number eld and F a cubic form over k. Then F is soluble over k if n Conditions Notes n ≥16 None Pleasants (1975) n ≥10 F non-singular Brawning and Vishe (2013) However the Hasse principle can fail for cubic forms in fewer variables.
For n = 4, the rst example was produced by Swinnerton-Dyer in 1962: Let K = Q(θ) where θ3 −7θ2 + 14θ −7 = 0. Abelian cubic eld of discriminant 49 and OK = Z[θ]. Here (7) = P 3 and vP (θ) = 1. Consider F(x1, . . . , x4) = NK/Q(x1 + θx2 + θ3x3) + x4(x4 + x1)(2x4 + x1) 25 Check: non-singular, soluble over Qp for all p. But it is not soluble over Q!
Proof. Note that if N( ) = 0 then x1 = x2 = x3 = 0, hence x4 = 0. Contradiction as we want a non-zero solution. May assume that x1, x4 are coprime integers and x2, x3 ∈Q. Now 7|N( ) implies P divides N( ), hence 7|x1 and 7 ∤x4. Hence 7 ∤x4(x1 + x4)(2x4 + x1) which is a contradiction.
Hence 7 ∤N( ).
Since x4, x4 + x1 and 2x4 + x1 are all coprime, and their product is a norm in K, each of them must separately be a norm of an ideal. Now p ̸= 7 splits in K if and only p = ±1 mod 7. Hence each of the factors above is congruent to ±1 modulo 7. This contradicts x4 + (x4 + x1) = 2x4 + x1.
c.f. Elsenhans-Jahnel. (Recent paper on the arXiv, they show there is a Zariski dense set of counter examples).
NSHARP Hail and Tornado Reference - OCLO - Virtual Lab
===============
NSHARP Hail and Tornado Parameters
This page is designed to identify where some of the newer hail and tornado parameters and tools in NSHARP are located, and to give a little background on what the parameters are. To see these parameters, you must load an NSHARP sounding using the SPC Wide Screen Configuration (Configure -> Display Pane Configuration -> SPC Wide Screen Configuration).
Hail
Significant Hail Parameter (SHIP)
The SHIP values are visible from the bottom of the main data tables that are cycled through with the PvDt and NxDt buttons in the main NSHARP buttons control.
The Significant Hail Parameter (SHIP) was developed using a large database of surface-modified, observed severe hail proximity soundings. It is based on 5 parameters, and is meant to delineate between SIG (>=2" diameter) and NON-SIG (<2" diameter) hail environments.
SHIP = [ (MUCAPE, J/kg) × (mixing ratio of the MU parcel, g/kg) × (700-500 mb lapse rate, C/km) × (−1 × 500 mb temperature, C) × (0-6 km shear, m/s) ] / 44,000,000
It is important to note that SHIP is NOT a forecast hail size.
Developed in the same vein as the STP and SCP parameters, values of SHIP greater than 1.00 indicate a favorable environment for SIG hail. Values greater than 4 are considered very high. In practice, maximum contour values of 1.5-2.0 or higher will typically be present when SIG hail is going to be reported.
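Written as code, the SHIP formula quoted above looks like the sketch below. The function and argument names are ours, and the operational SPC version may apply additional bounds to individual terms that are not described on this page:

```c
#include <stdio.h>

/* Significant Hail Parameter, following the formula quoted above */
static double ship(double mucape_jkg,    /* MUCAPE, J/kg                  */
                   double mu_mixr_gkg,   /* MU parcel mixing ratio, g/kg  */
                   double lr_700_500,    /* 700-500 mb lapse rate, C/km   */
                   double t500_c,        /* 500 mb temperature, C         */
                   double shear_0_6_ms)  /* 0-6 km shear, m/s             */
{
    return (mucape_jkg * mu_mixr_gkg * lr_700_500 * (-t500_c) * shear_0_6_ms)
           / 44000000.0;
}

int main(void)
{
    /* hypothetical environment: 2500 J/kg, 14 g/kg, 7.5 C/km, -12 C at 500 mb, 25 m/s */
    printf("SHIP = %.2f\n", ship(2500.0, 14.0, 7.5, -12.0, 25.0));
    return 0;
}
```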
Large Hail Parameter (LGHAIL)
The LGHAIL values are visible from the bottom of the main data tables that are cycled through with the PvDt and NxDt buttons in the main NSHARP buttons control.
The LGHAIL parameter includes three thermodynamic components: [MUCAPE, 700-500 mb lapse rates, the depth of the hail growth zone (-10 to -30 C)]. It also includes three vertical shear components [surface to EL bulk shear, the direction difference between the ground-relative winds at the EL and in the 3-6 km layer, and the direction difference between the storm-relative winds in the 3-6 km and 0-1 km layers].
If the 0-6 km BWD < 14 m/s or MUCAPE < 400 J/kg, then LHP = 0. If both the shear and MUCAPE meet or exceed these thresholds (a loose supercell check):
LHP = (TERM A × TERM B) + 5
TERM A = ((MUCAPE − 2000)/1000) + ((3200 − THK HGZ)/500) + ((LR 75 − 6.5)/2), where THK HGZ is the depth of the hail growth zone (the −10 to −30 C layer) and LR 75 is the 700-500 mb temperature lapse rate.
TERM B = ((Shear EL − 25)/5) + ((GRW dirEL + 5)/20) + ((SRW dirMID − 80)/10), where Shear EL is the magnitude of the vector wind difference between the surface wind and the mean wind in the 1.5 km layer immediately below the EL height for the MU parcel, GRW dirEL is the directional difference between the ground-relative mean wind in the 1.5 km layer below the EL and the mean wind in the 3-6 km layer AGL, and SRW dirMID is the directional difference between the mean storm-relative winds in the 3-6 km and 0-1 km layers.
The LGHAIL parameter is meant to discriminate between significant hail (>= 2 inch diameter) and smaller hail.
Below is a box and whiskers plot of LGHAIL and hail size from A.W. Johnson and K.E. Sugden (2014).
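The LHP terms above are easier to follow as a short sketch. It assumes TERM A and TERM B are multiplied (the page's formatting dropped the operator) and that the BWD/MUCAPE thresholds act as a simple gate; the function and variable names are illustrative, not NSHARP identifiers.

def large_hail_parameter(mucape, hgz_depth_m, lr_700_500, shear_el,
                         grw_dir_el, srw_dir_mid, bwd_0_6km):
    """Large Hail Parameter (LGHAIL/LHP) following the terms described above.

    Assumes TERM A and TERM B are multiplied; returns 0 when the loose
    supercell check (0-6 km BWD and MUCAPE thresholds) fails.
    """
    if bwd_0_6km < 14.0 or mucape < 400.0:
        return 0.0
    term_a = ((mucape - 2000.0) / 1000.0
              + (3200.0 - hgz_depth_m) / 500.0   # THK HGZ: depth of the -10 to -30 C layer
              + (lr_700_500 - 6.5) / 2.0)        # LR 75: 700-500 mb lapse rate
    term_b = ((shear_el - 25.0) / 5.0            # surface-to-EL shear term
              + (grw_dir_el + 5.0) / 20.0        # ground-relative direction term
              + (srw_dir_mid - 80.0) / 10.0)     # storm-relative direction term
    return term_a * term_b + 5.0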
Hail Model and the Sounding Analog Retrieval System (SARS)
The HAILCAST hail model output is visible in the bottom-right part of the main display after clicking the HAIL button.
The SARS method returns a maximum expected hail report by matching existing environmental conditions to historic severe hail cases. These forecast maximum sizes are conditional on severe hail of any size occurring.
Tornado
Significant Tornado Parameter (STP)
The STP values are visible from the bottom of the main data tables that are cycled through with the PvDt and NxDt buttons.
STP is a composite index that can include effective bulk wind difference (EBWD), effective storm-relative helicity (ESRH), 100-mb mean parcel CAPE (mlCAPE), 100-mb mean parcel CIN (mlCIN), and 100-mb mean parcel LCL height (mlLCL).
The index is formulated as follows:
STP = (mlCAPE/1500 J kg-1) x ((2000-mlLCL)/1000 m) x (ESRH/150 m2 s-2) x (EBWD/20 m s-1) x ((200+mlCIN)/150 J kg-1)
The mlLCL term is set to 1.0 when mlLCL < 1000 m, and set to 0.0 when mlLCL > 2000 m; the mlCIN term is set to 1.0 when mlCIN > -50 J kg-1, and set to 0.0 when mlCIN < -200; the EBWD term is capped at a value of 1.5 for EBWD > 30 m s-1, and set to 0.0 when EBWD < 12.5 m s-1. Lastly, the entire index is set to 0.0 when the effective inflow base is above the ground.
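The caps described in the previous paragraph are easy to mis-read, so here is a minimal sketch of the effective-layer STP with those caps applied. It implements only what the text above states; the function and argument names are illustrative.

def effective_stp(mlcape, mllcl_m, esrh, ebwd_ms, mlcin,
                  inflow_base_above_ground=False):
    """Effective-layer Significant Tornado Parameter with the stated term caps."""
    if inflow_base_above_ground:             # index zeroed when the inflow base is elevated
        return 0.0
    cape_term = mlcape / 1500.0
    if mllcl_m < 1000.0:                     # LCL term: 1.0 below 1000 m, 0.0 above 2000 m
        lcl_term = 1.0
    elif mllcl_m > 2000.0:
        lcl_term = 0.0
    else:
        lcl_term = (2000.0 - mllcl_m) / 1000.0
    srh_term = esrh / 150.0
    if ebwd_ms > 30.0:                       # shear term capped at 1.5, zeroed below 12.5 m/s
        shear_term = 1.5
    elif ebwd_ms < 12.5:
        shear_term = 0.0
    else:
        shear_term = ebwd_ms / 20.0
    if mlcin > -50.0:                        # CIN term: 1.0 above -50, 0.0 below -200 J/kg
        cin_term = 1.0
    elif mlcin < -200.0:
        cin_term = 0.0
    else:
        cin_term = (200.0 + mlcin) / 150.0
    return cape_term * lcl_term * srh_term * shear_term * cin_term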
There are multiple STP values listed in NSHARP:
STP(CIN) is based on a 100-mb mean parcel, effective SRH, and the effective bulk wind difference in the lower half of the storm. This is the preferred parameter because it incorporates the depth of the unstable inflow into the storm using the effective layer (parcels with CAPE > 100 J/kg and CIN > -250 J/kg).
STP(fixed) is based on surface-based CAPE, and the CIN term is dropped. The shear levels are fixed, using 0-1 km SRH and the 0-6 km bulk wind difference.
STPC(test) is based on CAPE below 500 mb.
A majority of significant tornadoes (F2 or greater damage) have been associated with STP values greater than 1 within an hour of tornado occurrence, while most non-tornadic supercells have been associated with values less than 1 in a large sample of RAP analysis proximity soundings.
Vrot
A rotational velocity (Vrot) calculated from a low-altitude radar velocity couplet can be entered into NSHARP using the Vrot button in the main NSHARP button window. Applying this number displays a vertical line on a graph that provides a probability of tornado for different EF scales.
Vrot is calculated as follows:
Below is a box and whiskers plot of Vrot and EF scale from Smith, B.T., Thompson, R.L., Dean, A.R., and Marsh, P.T. (2015).
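The Vrot formula itself is not reproduced on this page (it appears to have been an image). The conventional definition, which the sketch below assumes, is the average of the absolute maximum inbound and outbound velocities of the couplet; names are illustrative.

def vrot(max_inbound_kt, max_outbound_kt):
    """Rotational velocity (Vrot), assumed here to be the mean of the absolute
    inbound and outbound maxima of the low-altitude velocity couplet."""
    return (abs(max_inbound_kt) + abs(max_outbound_kt)) / 2.0

# Example: 45 kt inbound and 38 kt outbound give Vrot = 41.5 kt
print(vrot(-45.0, 38.0))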
CONDTor
CONDTor is the conditional probability of tornado damage using the effective layer STP value. It is loaded using the CondTOR button in the main NSHARP buttons window.
The CONDTor graph is displayed on the bottom-right part of the NSHARP display, and it illustrates the effective STP [STP(eff)] and its relation to different EF scales. The current STP(eff) value is indicated by a vertical dotted line.
Below is a box and whiskers graph from Smith, B.T., Thompson, R.L., Dean, A.R., and Marsh, P.T. (2015).
BAZLEY v. CURRY, [1999] 2 S.C.R. 534
The Children's Foundation, the Superintendent of Family and Child Services in the Province of British Columbia and Her Majesty The Queen in Right of the Province of British Columbia as represented by the Ministry of Social Services and Housing
Appellants

v.

Patrick Allan Bazley
Respondent

and

Her Majesty The Queen in Right of Alberta as represented by the Minister of Justice and Attorney General of Alberta, the Canadian Conference of Catholic Bishops, the United Church of Canada, the General Synod of the Anglican Church of Canada, Wunnumin Lake First Nation, William Richard Blackwater et al., and Barrie Caldwell, Samuel McNab and Glen Pelletier
Interveners

Indexed as: Bazley v. Curry

File No.: 26013.

1998: October 6; 1999: June 17.

Present: L'Heureux-Dubé, Cory, McLachlin, Iacobucci, Major, Bastarache and Binnie JJ.

ON APPEAL FROM THE COURT OF APPEAL FOR BRITISH COLUMBIA
Torts — Vicarious liability — Intentional torts — Sexual abuse — Child sexually abused while in residential care facility — Whether organization operating facility vicariously liable for employee's sexual assault of child — Whether non-profit employers should be exempted from liability.

Employment law — Liability of employers — Intentional torts of employees — Child sexually abused while in residential care facility — Whether organization operating facility vicariously liable for employee's tortious conduct.
The appellant Foundation, a non-profit organization, operated two residential care facilities for the treatment of emotionally troubled children. As substitute parent, it practised "total intervention" in all aspects of the lives of the children it cared for. The Foundation's employees were to do everything a parent would do, from general supervision to intimate duties like bathing and tucking in at bedtime. The Foundation hired C, a pedophile, to work in one of its homes. The Foundation did not know he was a pedophile. It checked and was told he was a suitable employee. After investigating a complaint about C, and verifying that he had abused a child in one of its homes, the Foundation discharged him. C was convicted of 19 counts of sexual abuse, two of which related to the respondent. The respondent sued the Foundation for compensation for the injury he suffered while in its care. The parties stated a case to determine whether the Foundation was vicariously liable for its employee's tortious conduct. The chambers judge found that it was and the Court of Appeal upheld that decision.
Held: The appeal should be dismissed and the matter remitted to trial.

Pursuant to the Salmond test, employers are vicariously liable for both employee acts authorized by the employer and unauthorized acts so connected with authorized acts that they may be regarded as modes (albeit improper modes) of doing authorized acts. In determining whether an employer is vicariously liable for an employee's unauthorized, intentional wrong in cases where precedent is inconclusive, courts should be guided by the following principles. First, they should openly confront the question of whether liability should lie against the employer, rather than obscuring the decision beneath semantic discussions of "scope of employment" and "mode of conduct". Second, the fundamental question is whether the wrongful act is sufficiently related to conduct authorized by the employer to justify the imposition of vicarious liability. Vicarious liability is generally appropriate where there is a significant connection between the creation or enhancement of a risk and the wrong that accrues therefrom, even if unrelated to the employer's desires. Where this is so, vicarious liability will serve the policy considerations of the provision of an adequate and just remedy and of deterrence. Incidental connections to the employment enterprise, like time and place (without more), will not suffice. Once engaged in a particular business, it is fair that an employer be made to pay the generally foreseeable costs of that business. In contrast, to impose liability for costs unrelated to the risk would effectively make the employer an involuntary insurer. Third, in determining the sufficiency of the connection between the employer's creation or enhancement of the risk and the wrong complained of, subsidiary factors may be considered. These may vary with the nature of the case. When related to intentional torts, the relevant factors may include, but are not limited to, the following: (a) the opportunity that the enterprise afforded the employee to abuse his or her power; (b) the extent to which the wrongful act may have furthered the employer's aims (and hence be more likely to have been committed by the employee); (c) the extent to which the wrongful act was related to friction, confrontation or intimacy inherent in the employer's enterprise; (d) the extent of power conferred on the employee in relation to the victim; and (e) the vulnerability of potential victims to the wrongful exercise of the employee's power.

Applying these general considerations to sexual abuse by employees, the test for vicarious liability for an employee's sexual abuse of a client should focus on whether the employer's enterprise and empowerment of the employee materially increased the risk of the sexual assault and hence the harm. The test must not be applied mechanically, but with a sensitive view to the policy considerations that justify the imposition of vicarious liability — fair and efficient compensation for wrong and deterrence. This requires trial judges to investigate the employee's specific duties and determine whether they gave rise to special opportunities for wrongdoing. Because of the peculiar exercises of power and trust that pervade cases such as child abuse, special attention should be paid to the existence of a power or dependency relationship, which on its own often creates a considerable risk of wrongdoing.

There should not be an exemption for non-profit organizations. While non-profit organizations perform needed services on behalf of the community as a whole, the Foundation's institution, however meritorious, put the respondent in the intimate care of C and in a very real sense enhanced the risk of his being abused. From his perspective, it is fair that as between him and the institution that enhanced the risk, the institution should bear legal responsibility for his abuse and the harm that befell him. It may also deter other incidents of sexual abuse by motivating charitable organizations entrusted with the care of children to take not only such precautions as the law of negligence requires, but all possible precautions to ensure that their children are not sexually abused.

Here the Foundation is vicariously liable for the sexual misconduct of its employee. The opportunity for intimate private control and the parental relationship and power required by the terms of employment created the special environment that nurtured and brought to fruition the sexual abuse. The employer's enterprise created and fostered the risk that led to the ultimate harm. Fairness and the need for deterrence in this critical area of human conduct — the care of vulnerable children — suggest that as between the Foundation that created and managed the risk and the innocent victim, the Foundation should bear the loss.
Cases Cited

Disapproved: S.T. v. North Yorkshire County Council, I.R.L.R. 98; referred to: London Drugs Ltd. v. Kuehne & Nagel International Ltd., 3 S.C.R. 299; Kay v. I.T.W. Ltd., 1 Q.B. 140; Ryan v. Fildes, 3 All E.R. 517; Daniels v. Whetstone Entertainments, Ltd., 2 Lloyd's Rep. 1; Dyer v. Munday, 1 Q.B. 742; Lakatosh v. Ross (1974), 48 D.L.R. (3d) 694; Cole v. California Entertainment Ltd., B.C.J. No. 2162 (QL); Lloyd v. Grace, Smith & Co., A.C. 716; The Queen v. Levy Brothers Co., S.C.R. 189; Boothman v. Canada, 3 F.C. 381; Warren v. Henlys, Ltd., 2 All E.R. 935; G.J. v. Griffiths, B.C.J. No. 2370 (QL); Palsgraf v. Long Island R. Co., 162 N.E. 99 (1928); Plains Engineering Ltd. v. Barnes Security Services Ltd. (1987), 43 C.C.L.T. 129; Poland v. John Parr and Sons, 1 K.B. 236; Morris v. C. W. Martin & Sons Ltd., 1 Q.B. 716.
Authors Cited

Atiyah, P. S. Vicarious Liability in the Law of Torts. London: Butterworths, 1967.
Feldthusen, Bruce. "Vicarious Liability for Sexual Torts". In Nicholas J. Mullany and Allen M. Linden, eds., Torts Tomorrow: A Tribute to John Fleming. Sydney: LBC Information Services, 1998.
Fleming, John G. The Law of Torts, 9th ed. Sydney: LBC Information Services, 1998.
Fridman, G. H. L. The Law of Torts in Canada, vol. 2. Toronto: Carswell, 1990.
Laski, Harold J. "The Basis of Vicarious Liability" (1916), 26 Yale L.J. 105.
Prosser and Keeton on the Law of Torts, 5th ed. W. Page Keeton, General Editor. St. Paul, Minn.: West Publishing Co., 1984.
Salmond and Heuston on the Law of Torts, 19th ed. By R. F. V. Heuston and R. A. Buckley. London: Sweet & Maxwell, 1987.
Sykes, Alan O. "The Boundaries of Vicarious Liability: An Economic Analysis of the Scope of Employment Rule and Related Legal Doctrines" (1988), 101 Harv. L. Rev. 563.
Williams, Glanville. "Vicarious Liability: Tort of the Master or of the Servant?" (1956), 72 L.Q. Rev. 522.

APPEAL from a judgment of the British Columbia Court of Appeal (1997), 30 B.C.L.R. (3d) 1 (sub nom. B. (P.A.) v. Curry), 89 B.C.A.C. 93, 145 W.A.C. 93, 146 D.L.R. (4th) 72, 4 W.W.R. 431, 26 C.C.E.L. (2d) 161, 34 C.C.L.T. (2d) 241, B.C.J. No. 692 (QL) (sub nom. P.A.B. v. Curry), affirming a decision of the British Columbia Supreme Court (1995), 9 B.C.L.R. (3d) 217, 10 W.W.R. 339, 12 C.C.E.L. (2d) 228, 25 C.C.L.T. (2d) 302, B.C.J. No. 1468 (QL), finding the appellant Foundation vicariously liable for its employee's tortious conduct. Appeal dismissed.
William M. Holburn, Q.C., and Dale Stewart, for the appellant Children's Foundation.

Richard J. Meyer and J. Douglas Eastwood, for the appellant Her Majesty the Queen in Right of British Columbia.

D. Brent Adair, for the respondent.

William C. Olthuis and Tim Hurlburt, for the intervener Her Majesty the Queen in Right of Alberta.

William J. Sammon, for the intervener the Canadian Conference of Catholic Bishops.

Christopher E. Hinkson, Q.C., and Elizabeth Campbell, for the intervener the United Church of Canada.

George E. H. Cadman, Q.C., and Heather Craig, for the intervener the General Synod of the Anglican Church of Canada.

Susan M. Vella and Jonathan Eades, for the intervener Wunnumin Lake First Nation.

Peter R. Grant and Diane Soroka, for the interveners William Richard Blackwater et al.

Robert G. Richards and Dana Schindelka, for the interveners Barrie Caldwell, Samuel McNab and Glen Pelletier.

The judgment of the Court was delivered by

McLACHLIN J. —

I. Introduction

1. It is tragic but true that people working with the vulnerable sometimes abuse their positions and commit wrongs against the very people they are engaged to help. The abused person may later seek to recover damages for the wrong. But judgment against the wrongdoer may prove a hollow remedy. This raises the question of whether the organization that employed the offender should be held liable for the wrong. The law refers to such liability as "vicarious" liability. It is also known as "strict" or "no-fault" liability, because it is imposed in the absence of fault of the employer. The issue in this case is whether such liability lies for an employee's sexual abuse of children in his care.
II. Facts II. Les faits The appellant, the Children’s Foundation, is a 2 L’appelante, la Children’s Foundation (la non-profit organization. It operated two residential «Fondation»), est un organisme sans but lucratif. care facilities for the treatment of emotionally Elle exploitait deux ´ etablissements de soins pour troubled children between the ages of six and b´ en´ eficiaires internes ou des enfants de six a douze twelve. As substitute parent, it practised “total ans ´ etaient trait´ es pour des troubles affectifs. En sa intervention” in all aspects of the lives of the chil- qualit´ e de substitut parental, elle pratiquait dren it cared for. The Foundation authorized its l’«intervention totale» dans tous les aspects de la employees to act as parent figures for the children. vie des enfants qui lui ´ etaient confi´ es. La Fonda-It charged them to care for the children physically, tion autorisait ses employ´ es a faire figure de mentally and emotionally. The employees were to parents pour les enfants. Elle leur confiait le soin do everything a parent would do, from general des enfants sur les plans physique, mental et affec-supervision to intimate duties like bathing and tif. Les employ´ es devaient faire tout ce qu’un tucking in at bedtime. parent ferait, de la surveillance g´ en´ eralea des tˆ aches intimes comme donner le bain aux enfants et les border a l’heure du coucher. The Foundation hired Mr. Curry, a pedophile, to 3 La Fondation a embauch´ e M. Curry, un p´ edo-work in its Vancouver home. The Foundation did phile,a son ´etablissement de Vancouver. La not know he was a pedophile. It checked and was Fondation ignorait qu’il ´ etait p´ edophile. Elle a pro-told he was a suitable employee. Into this environ- c´ ed´ e a des v´ erifications et s’est fait dire qu’il ´ etait ment, too, came the child Patrick Bazley, young apte a occuper le poste en question. Patrick Bazley, and emotionally vulnerable. Curry began a seduc- un enfant jeune et vuln´ erable sur le plan affectif, tion. Over the months, step by subtle step, bathing s’est ´ egalement retrouv´ e dans ce milieu. Curry a became sexual exploration; tucking in in a dark- entrepris de le s´ eduire. Au fil des mois, l’heure du ened room became sexual abuse. bain est devenue peu a peu subtilement une explo-ration sexuelle; l’heure du coucher dans une cham-bre sombre s’est transform´ ee en agression sexuelle. Someone complained about Curry. The Founda-4 Quelqu’un s’est plaint de Curry. La Fondation a tion inquired and upon verifying that Curry had enquˆ et´ e sur lui et l’a cong´ edi´ e sur-le-champ apr es abused a child in one of its homes, immediately avoir obtenu confirmation qu’il avait agress´ e un discharged him. In 1992, Curry was convicted of enfant dans l’un de ses ´ etablissements. En 1992, 19 counts of sexual abuse, two of which related to Curry a ´ et´ e d´ eclar´ e coupable relativement a 19 Bazley. Curry has since died. chefs d’accusation d’agression sexuelle, dont deux concernaient Bazley. Curry est maintenant d´ ec´ ed´ e. Bazley sued the Foundation for compensation 5 Bazley a intent´ e une action contre la Fondation for the injury he suffered while in its care. The en vue d’ˆ etre indemnis´ e du pr´ ejudice subi pendant Foundation took the position that since it had com- qu’il avait ´ et´ e confi´ e a ses soins. La Fondation a mitted no fault in hiring or supervising Curry, it adopt´ e le point de vue selon lequel, ´ etant donn´ ewas not legally responsible for what he had done. 
qu’elle n’avait commis aucune faute en embau-The parties stated a case to determine whether chant ou en supervisant Curry, elle n’´ etait aucune-(assuming the appellant was not, in fact, negligent) ment responsable en droit de ce qu’il avait fait. Les the appellant was nonetheless vicariously liable for parties ont pr´ esent´ e un expos´ e de cause pour ´ eta-its employee’s tortious conduct. The chambers blir si (en supposant qu’elle n’avait pas, en r´ ealit´ e, fait preuve de n´ egligence) la responsabilit´ e du fait 2 R.C.S. 541 BAZLEY c. CURRY Le juge McLachlin
judge found that it was and the Court of Appeal d’autrui de l’appelante ´ etait n´ eanmoins engag´ ee en dismissed the appeal. raison de la conduite d´ elictueuse de son employ´ e. Le juge en chambre a conclu qu’elle l’´ etait et la Cour d’appel a rejet´ e l’appel. III. Judgments III. Jugements A. British Columbia Supreme Court (1995), A. Cour supr ˆeme de la Colombie-Britannique
9 B.C.L.R. (3d) 217 (1995), 9 B.C.L.R. (3d) 217 The chambers judge, Lowry J., applied the com- 6Le juge en chambre Lowry a appliqu´ e le critere mon law test known as the Salmond test (from de common law connu sous le nom de crit ere Salmond and Heuston’s treatise on torts: see, e.g., Salmond (tir´ e du trait´ e sur la responsabilit´ e civile
Salmond and Heuston on the Law of Torts (19th d´ elictuelle de Salmond et Heuston: voir, par ed. 1987), at pp. 521-22) to determine the Founda- exemple, Salmond and Heuston on the Law of
tion’s strict liability for its employee’s conduct. Torts (19 e ´ed. 1987), aux pp. 521 et 522), pour ´ eta-Under this test, employers are vicariously liable for blir la responsabilit´ e stricte de la Fondation aemployee torts falling within the “scope of l’´ egard de la conduite de son employ´ e. Selon ce employment” (at p. 220): crit ere, la responsabilit´ e du fait d’autrui de l’em-ployeur est engag´ ee pour le d´ elit que son employ´ ecommet dans l’[ TRADUCTION ] «exercice de ses fonctions» (a la p. 220): An employee’s wrongful conduct is said to fall within [TRADUCTION ] On dit que la conduite fautive d’un the course and scope of his or her employment where it employ´ e entre dans l’exercice de ses fonctions quand consists of either (1) acts authorized by the employer or elle consiste a accomplir (1) des actes autoris´ es par (2) unauthorized acts that are so connected with acts that l’employeur, ou (2) des actes non autoris´ es qui sont si the employer has authorized that they may rightly be ´etroitement li´ es aux actes que l’employeur a autoris´ es regarded as modes — although improper modes — of qu’ils peuvent ` a juste titre ˆ etre consid´ er´ es comme des doing what has been authorized: Canadian Pacific fa¸ cons, quoiqu’incorrectes, d’accomplir ce qui a ´et´ e
Railway Co. v. Lockhart , A.C. 591 at 599 (P.C.). autoris´ e: Canadian Pacific Railway Co. c. Lockhart , A.C. 591, ` a la p. 599 (C.P.).
The Foundation clearly had not authorized Curry’s Il est ´ evident que la Fondation n’avait pas autoris´ esexual abuse. Therefore the only question was Curry a commettre l’agression sexuelle. Par cons´ e-whether the wrong was so connected to an author- quent, il s’agissait uniquement de savoir si la faute ized act that it could be regarded as a mode of ´etait si ´ etroitement li´ ee a un acte autoris´ e qu’elle doing that act. Lowry J. said that it was. The Foun- pouvait ˆ etre consid´ er´ ee comme une fa¸ con de l’ac-dation had authorized Curry to put the child to bed. complir. Le juge Lowry a dit qu’elle l’´ etait. La Curry committed the sexual abuse while putting Fondation avait autoris´ e Curry ` a coucher l’enfant. the child to bed. Therefore the sexual abuse could Curry a commis l’agression sexuelle pendant qu’il be viewed as a mode, however improper, of doing couchait l’enfant. L’agression sexuelle pouvait an authorized act. Lowry J. accordingly ruled that donc ˆ etre consid´ er´ ee comme une fa¸ con, quoiqu’in-the Foundation was vicariously liable for Curry’s correcte, d’accomplir un acte autoris´ e. Le juge sexual torts. Lowry a donc d´ ecid´ e que la responsabilit´ e du fait d’autrui de la Fondation ´ etait engag´ ee en raison des d´ elits sexuels de Curry. 542 2 S.C.R. BAZLEY v. CURRY McLachlin J.
B. British Columbia Court of Appeal (1997), B. Cour d’appel de la Colombie-Britannique
30 B.C.L.R. (3d) 1 (1997), 30 B.C.L.R. (3d) 1 The Court of Appeal affirmed this ruling, but 7 La Cour d’appel a confirm´ e cette d´ ecision, mais did not content itself with merely examining the ne s’est pas content´ ee d’examiner l’expression phrase “unauthorized modes of authorized acts”. [TRADUCTION ] «fa¸ cons non autoris´ ees d’accomplir The four sets of reasons, while divergent in des actes autoris´ es». Bien qu’ils divergent sur le emphasis and detail, reflect general agreement plan des d´ etails et qu’ils mettent l’accent sur des that: (1) it is better to confront the question of ´el´ ements diff´ erents, les quatre s´ eries de motifs whether liability should rest with the employer refletent le consensus g´ en´ eral selon lequel (1) il est directly than to bury it beneath the semantics of pr´ ef´ erable d’aborder directement la question de phrases like “unauthorized modes of authorized savoir si la responsabilit´ e doit ˆ etre imput´ ee a l’em-acts”; (2) a useful focus for inquiry is the closeness ployeur, plutˆ ot que de l’enfouir sous des expres-of the connection between authorized acts and the sions comme «fa¸ cons non autoris´ ees d’accomplir injury suffered; (3) factors relevant to assessing des actes autoris´ es», (2) il est utile de mettre l’ac-this connection in cases like this one include cent sur l’´ etroitesse du lien entre les actes autoris´ es power, trust and the extent to which the employ- et le pr´ ejudice subi, (3) les facteurs pertinents pour ment enabled or cloaked the wrong; (4) policy con- ´evaluer ce lien dans des cas comme la pr´ esente siderations, such as deterrence and which of two affaire comprennent le pouvoir, la confiance et la “innocent” parties should bear the loss, should also mesure dans laquelle l’emploi a permis ou masqu´ ebe taken into account in finding liability; and la faute, (4) des consid´ erations de politique g´ en´ e-(5) there should be no special rule for non-profit rale, comme la dissuasion et la question de savoir employers. laquelle de deux parties «innocentes» devrait assu-mer la perte, devraient ´ egalement ˆ etre prises en compte pour ´ etablir la responsabilit´ e, et (5) il ne devrait pas y avoir de regle particuli ere pour l’em-ployeur qui est un organisme sans but lucratif. The various reasons in the Court of Appeal pre-8 Les divers motifs de la Cour d’appel pr´ esentent sent a sophisticated and nuanced review of this dif- un examen subtil et nuanc´ e de cette difficile ques-ficult issue and the considerations that may prop- tion et des consid´ erations qui peuvent a juste titre erly bear on it. Given that many of the ideas will s’y rapporter. ´ Etant donn´ e qu’un bon nombre de be discussed in the analysis that follows, I content ces id´ ees seront examin´ ees dans l’analyse qui suit, myself with sketching their respective themes. je vais me contenter d’en esquisser les th emes res-Huddart J.A. emphasized the power-trust relation- pectifs. Le juge Huddart a soulign´ e que le rapport ship as key to finding the necessary connection de force et de confiance est l’´ el´ ement cl´ e pour ´ eta-between an authorized act and the wrong. blir le lien n´ ecessaire entre un acte autoris´ e et la Newbury J.A., while relying on the trust inherent faute commise. 
Le juge Newbury, tout en se fon-in Curry’s duties, included in the analysis other dant sur la confiance inh´ erente aux fonctions de factors like spatial, temporal and “formal” connec- Curry, a inclus dans l’analyse d’autres facteurs tions (employer objectives permitting or encourag- comme les liens spatiaux, temporels et «formels» ing the wrong). Hollinrake J.A. (Donald J.A. con- (les objectifs de l’employeur permettant ou encou-curring) agreed with both Huddart J.A. and rageant la faute). Le juge Hollinrake (avec l’appui Newbury J.A., and stressed the need for a suffi- du juge Donald) ´ etait du mˆ eme avis que les juges cient connection between the duties of the Huddart et Newbury et a soulign´ e la n´ ecessit´ e d’un employee and the wrong. Finally, Finch J.A., who lien suffisant entre les fonctions de l’employ´ e et la also agreed with both Huddart J.A. and faute commise. Enfin, le juge Finch, qui ´ etait aussi Newbury J.A. (and, by implication, also d’accord avec les juges Huddart et Newbury 2 R.C.S. 543 BAZLEY c. CURRY Le juge McLachlin
Hollinrake J.A. (Donald J.A. concurring)), took the (et, implicitement, avec le juge Hollinrake (qui view that outcomes in this area of the law rest avait l’appui du juge Donald)), estimait que l’issue more on policy considerations than on coherent dans ce domaine du droit repose davantage sur des legal principle, and advocated a case-by-case, consid´ erations de politique g´ en´ erale que sur des policy-oriented approach. principes juridiques coh´ erents, et a pr´ econis´ e une m´ ethode cas par cas, ax´ ee sur une politique g´ en´ erale. IV. Issues IV. Les questions en litige The issue in this appeal is whether the Founda- 9La question en litige dans le pr´ esent pourvoi est tion is vicariously liable for its employee’s sexual de savoir si la responsabilit´ e du fait d’autrui de la assault of a child in its care. This poses two sub- Fondation est engag´ ee en raison de l’agression issues: sexuelle commise par son employ´ e sur un enfant dont elle avait la garde. Cela souleve deux sous-questions: (1) May employers be held vicariously liable (1) La responsabilit´ e du fait d’autrui de l’em-for their employees’ sexual assaults on cli- ployeur peut-elle ˆ etre engag´ ee en raison de ents or persons within their care? l’agression sexuelle commise par son employ´ e sur un client ou une personne con-fi´ ee a ses soins? (2) If so, should non-profit employers be (2) Dans l’affirmative, l’employeur qui est un exempted from liability? organisme sans but lucratif devrait-il ˆ etre exon´ er´ e de toute responsabilit´ e? V. Analysis V. Analyse A. May Employers Be Held Vicariously Liable for A. La responsabilit ´e du fait d’autrui de l’em-Their Employees ’ Sexual Assaults on Clients or ployeur peut-elle ˆetre engag ´ee en raison de Persons Within Their Care? l’agression sexuelle commise par son employ ´esur un client ou une personne confi ´ee ` a ses soins?
Both parties agree that the answer to this ques- 10 Les deux parties conviennent que la r´ eponse ation is governed by the Salmond test, which posits cette question est r´ egie par le crit ere Salmond , qui that employers are vicariously liable for veut que la responsabilit´ e du fait d’autrui d’un (1) employee acts authorized by the employer; or employeur soit engag´ ee en raison (1) des actes (2) unauthorized acts so connected with authorized d’un employ´ e autoris´ es par cet employeur, ou acts that they may be regarded as modes (albeit (2) des actes non autoris´ es qui sont si ´ etroitement improper modes) of doing an authorized act. Both li´ es aux actes autoris´ es qu’ils peuvent ˆ etre consi-parties also agree that we are here concerned with d´ er´ es comme des fa¸ cons (quoiqu’incorrectes) d’ac-the second branch of the test. They diverge, how- complir un acte autoris´ e. Les deux parties convien-ever, on what the second branch of the test means. nent ´ egalement que c’est le deuxieme volet du The Foundation says that its employee’s sexual crit ere qui est en cause en l’espece. Toutefois, ils assaults of Bazley were not “modes” of doing an ne s’entendent sur le sens a lui donner. La Fonda-authorized act. Bazley, on the other hand, submits tion affirme que les agressions sexuelles que son that the assaults were a mode of performing employ´ e a commises sur Bazley ne sont pas des authorized tasks, and that courts have often found «fa¸ cons» d’accomplir un acte autoris´ e. Par contre, 544 2 S.C.R. BAZLEY v. CURRY McLachlin J.
employers vicariously liable for intentional wrongs Bazley soutient que les agressions ´etaient une of employees comparable to sexual assault. fa¸ con d’accomplir des tˆ aches autoris´ ees et que les tribunaux ont souvent conclu que la responsabilit´ edu fait d’autrui de l’employeur ´ etait engag´ ee en raison de la faute intentionnelle comparable a une agression sexuelle qu’un employ´ e avait commise. The problem is that it is often difficult to distin-11 Le probl eme est qu’il est souvent difficile d’´ eta-guish between an unauthorized “mode” of per- blir une distinction entre une «fa¸ con» non autori-forming an authorized act that attracts liability, s´ ee d’accomplir un acte autoris´ e qui engage la res-and an entirely independent “act” that does not. ponsabilit´ e, et un «acte» tout a fait ind´ ependant qui Unfortunately, the test provides no criterion on ne le fait pas. Malheureusement, le crit ere ne four-which to make this distinction. In many cases, like nit aucun ´ el´ ement qui puisse servir de base a cette the present one, it is possible to characterize the distinction. Dans de nombreux cas, comme la pr´ e-tortious act either as a mode of doing an author- sente affaire, il est possible d’envisager l’acte ized act (as the respondent would have us do), or d´ elictueux soit comme une fa¸ con d’accomplir un as an independent act altogether (as the appellants acte autoris´ e (comme l’intim´ e nous invitea le would suggest). In such cases, how is the judge to faire), soit comme un acte tout a fait ind´ ependant decide between the two alternatives? (comme les appelants le proposent). Dans ces cas, que doit faire le juge pour choisir entre les deux solutions? One answer is to look at decided cases on simi-12 Une possibilit´ e consiste a examiner la jurispru-lar facts. As Salmond and Heuston, supra , put it, dence portant sur des faits similaires. Comme “the principle is easy to state but difficult to apply. Salmond et Heuston, op. cit ., l’affirment, [ TRADUC-All that can be done is to provide illustrations on TION ] «le principe est facile a ´ enoncer, mais diffi-either side of the line” (p. 522). The problem is cile a appliquer. Tout ce qu’on peut faire, c’est that only very close cases may be useful. Fleming donner des exemples de part et d’autre» (p. 522). observes that “[n]o statistical measurement is pos- Le probleme est que seules les affaires tr es sem-sible [of when such torts are properly said to be blables peuvent ˆ etre utiles. Fleming fait remarquer within the “scope of employment”], and prece- que [ TRADUCTION ] «[a]ucune ´ evaluation statistique dents are helpful only when they present a sugges- [des cas ou on peut affirmer, a juste titre, que ces tive uniformity on parallel facts” (J. G. Fleming, d´ elits ont ´ et´ e commis dans l’“exercice des fonc-
The Law of Torts (9th ed. 1998), at p. 421). tions”] n’est possible, et les pr´ ec´ edents ne sont utiles que s’ils pr´ esentent une uniformit´ e ´ evoca-trice d´ ecoulant de faits semblables» (J. G. Fleming,
The Law of Torts (9 e ´ed. 1998), a la p. 421). Where decided cases do not help, Salmond and 13 En ce qui concerne les cas o u la jurisprudence Heuston, supra , at p. 522, suggest the impasse may n’est d’aucune utilit´ e, Salmond et Heuston, op. cit ., be resolved by the devices of a prima facie case a la p. 522, proposent de r´ esoudre l’impasse au and shifting evidentiary burden. If the plaintiff moyen d’une preuve prima facie et du d´ eplace-establishes that the employee’s act was done on the ment de la charge de preuve. Si le demandeur ´ eta-employer’s premises, during working hours, and blit que l’employ´ e a accompli l’acte reproch´ e dans that it bears a close connection with the work that les locaux de l’employeur, pendant les heures de the employee was authorized to do, then the travail, et que cet acte est ´ etroitement li´ e au travail responsibility shifts to the employer to show that que l’employ´ e ´ etait autoris´ e a effectuer, il incombe the act is one for which it was not responsible. But alors ` a l’employeur de d´ emontrer qu’il s’agissait this is not so much a test as a default position, and d’un acte dont il n’´ etait pas responsable. Toutefois, 2 R.C.S. 545 BAZLEY c. CURRY Le juge McLachlin
it remains unclear exactly what the employer ce n’est pas tant un critere qu’une solution par would need to show to escape responsibility. d´ efaut, et on ne sait pas encore tr es bien ce que l’employeur devrait d´ emontrer pour ´echapper atoute responsabilit´ e. Increasingly, courts confronted by issues of 14 Les tribunaux aux prises avec des questions de vicarious liability where no clear precedent exists responsabilit´ e du fait d’autrui, dans des cas o u il are turning to policy for guidance, examining the n’existe pas de pr´ ec´ edent clair, recourent de plus purposes that vicarious liability serves and asking en plus a une politique g´ en´ erale consistant a exa-whether imposition of liability in the new case miner les objectifs de la responsabilit´ e du fait d’au-before them would serve those purposes: see trui et ` a se demander si l’imputation de responsabi-Fleming, supra , at p. 410; London Drugs Ltd. v. lit´ e dans la nouvelle affaire dont ils sont saisis
Kuehne & Nagel International Ltd. , 3 serait conforme ` a ces objectifs: voir Fleming, op.
S.C.R. 299, per La Forest J. cit ., a la p. 410; London Drugs Ltd. c. Kuehne & Nagel International Ltd. , 3 R.C.S. 299, le juge La Forest. This review suggests that the second branch of 15 Cet examen montre qu’il peut ˆ etre utile d’abor-the Salmond test may usefully be approached in der en deux ´ etapes le deuxi eme volet du critere two steps. First, a court should determine whether Salmond . Le tribunal doit d’abord d´ ecider s’il y a there are precedents which unambiguously deter- des pr´ ec´ edents qui ´ etablissent sans ´ equivoque la mine on which side of the line between vicarious responsabilit´ e du fait d’autrui ou encore l’absence liability and no liability the case falls. If prior cases de responsabilit´ e dans l’affaire en cause. Si aucune do not clearly suggest a solution, the next step is to solution ne ressort clairement de la jurisprudence, determine whether vicarious liability should be la prochaine ´ etape consiste a d´ ecider si la respon-imposed in light of the broader policy rationales sabilit´ e du fait d’autrui devrait ˆetre imput´ ee behind strict liability. This Court has an additional compte tenu des raisons de politique g´ en´ erale qui duty: to provide guidance for lower tribunals. sous-tendent la responsabilit´ e stricte. Notre Cour a Accordingly, I will try to proceed from these first en outre le devoir de guider les tribunaux inf´ e-two steps to articulate a rule consistent with both rieurs. Par cons´ equent, je vais essayer de formuler, the existing cases and the policy reasons for vicari- a partir de ces deux premi eres ´ etapes, une regle ous liability. compatible tant avec la jurisprudence existante qu’avec les raisons de politique g´ en´ erale qui sous-tendent la responsabilit´ e du fait d’autrui. 1. Previous Cases 1. La jurisprudence This is one of those difficult cases where there is 16 La pr´ esente affaire est l’un de ces cas difficiles little helpful precedent to guide the Court in deter- o u notre Cour dispose de peu de pr´ ec´ edents pour mining whether the employee’s tortious act should d´ ecider s’il y a lieu de consid´ erer l’acte d´ elictueux be viewed as an unauthorized mode of an author- de l’employ´ e comme une fa¸ con non autoris´ ee d’ac-ized act, or as an altogether independent act. Apart complir un acte autoris´ e ou comme un acte tout afrom one recent case in the United Kingdom, the fait ind´ ependant. Abstraction faite d’une affaire issue before us appears not to have been previously r´ ecente au Royaume-Uni, les tribunaux sup´ erieurs considered in depth by higher tribunals. Neverthe- ne semblent pas avoir d´ ej a analys´ e en profondeur less, it may be useful to review the situations la question dont nous sommes saisis. N´ eanmoins, where courts have held employers vicariously lia- il peut ˆ etre utile d’examiner les situations dans les-ble for the unauthorized torts of employees. At quelles les tribunaux ont conclu que la responsabi-very least, they may suggest recurring concepts lit´ e du fait d’autrui de l’employeur ´ etait engag´ ee en 546 2 S.C.R. BAZLEY v. CURRY McLachlin J.
and policy considerations that shed light on how the issue should be resolved.

17 The relevant cases may usefully be grouped into three general categories: (1) cases based on the rationale of "furtherance of the employer's aims"; (2) cases based on the employer's creation of a situation of friction; and (3) the dishonest employee cases. If we can find a common thread among these three categories of cases, it may suggest how the test should be interpreted.

18 The cases confirming vicarious liability on the basis that the employee was acting in furtherance of the employer's aims rely on the agency rationale implicit in the Salmond test: see, e.g., Kay v. I.T.W. Ltd., 1 Q.B. 140 (C.A.). Because the employee was acting in furtherance of the employer's aims, he or she is said to have "ostensible" or "implied" authority to do the unauthorized act. This rationale works well enough for torts of negligent accident. It does not suffice for intentional torts, however. It is difficult to maintain the fiction that an employee who commits an assault or theft was authorized to do so, even "ostensibly": see H. J. Laski, "The Basis of Vicarious Liability" (1916), 26 Yale L.J. 105. I would put the line of cases addressing the distinction between a "frolic" and a "detour" in this group.

19 The cases based on the employer's creation of a situation of friction rest on the idea that if the employer's aims or enterprise incidentally create a situation of friction that may give rise to employees committing tortious acts, an employee's intentional misconduct can be viewed as falling within the scope of the employment and the employer is vicariously liable for ensuing harm. This rationale was used to extend vicarious liability to intentional torts like a provoked bartender's assault on an obnoxious customer. While it does not rest on ostensible or implied authority, it builds on the logic of risk and accident inherent in the cases imposing vicarious liability on the basis that the employee was acting to further the employer's aims. Intentional torts arising from situations of friction are like accidents in that they stem from a risk attendant on carrying out the employer's aims. Like accidents, they occur in circumstances where such incidents can be expected to arise because of the nature of the business, and hence their ramifications appropriately form part of the cost of doing business. See, e.g., Ryan v. Fildes, 3 All E.R. 517 (K.B.D.) (schoolteachers' discipline); Daniels v. Whetstone Entertainments, Ltd., 2 Lloyd's Rep. 1 (C.A.) (dance hall "bouncer"); Dyer v. Munday, 1 Q.B. 742 (C.A.) (furniture repossessor); Lakatosh v. Ross (1974), 48 D.L.R. (3d) 694 (Man. Q.B.) (bouncer); Cole v. California Entertainment Ltd., B.C.J. No. 2162 (QL) (C.A.) (bouncer).
20 Neither furtherance of the employer's aims nor creation of situations of friction, however, suffice to justify vicarious liability for employee theft or fraud, according to cases like Lloyd v. Grace, Smith & Co., A.C. 716 (H.L.), and The Queen v. Levy Brothers Co., S.C.R. 189. The language of authority, whether actual or ostensible, is inappropriate for intentional, fraudulent conduct like the theft of a client's property. A bank employee stealing a client's money cannot be said to be furthering the bank's aims. Nor does the logic of a situation of friction apply, unless one believes that any money-handling operation generates an inexorable temptation to steal. Nevertheless, courts considering this type of case have increasingly held employers vicariously liable, even when the employee's conduct is antithetical to the employer's business: see, e.g., Boothman v. Canada, 3 F.C. 381 (T.D.) (unauthorized intentional infliction of nervous shock by supervisory employee on his subordinate found to invoke vicarious liability for the employer, albeit based on statutory, as opposed to common law, principles).

21 At the heart of the dishonest employee decisions is consideration of fairness and policy: see Laski, supra, at p. 121. As P. S. Atiyah, Vicarious Liability in the Law of Torts (1967), at p. 263, puts it, "certain types of wilful acts, and in particular frauds and thefts, are only too common, and the fact that liability is generally imposed for torts of this kind shows that the courts are not unmindful of considerations of policy." The same logic dictates that where the employee's wrongdoing was a random act wholly unconnected to the nature of the enterprise and the employee's responsibilities, the employer is not vicariously liable. Thus an employer has been held not liable for a vengeful assault by its store clerk: Warren v. Henlys, Ltd., 2 All E.R. 935 (K.B.D.).

22 Looking at these three general classes of cases in which employers have been held vicariously liable for employees' unauthorized torts, one sees a progression from accidents, to accident-like intentional torts, to torts that bear no relationship to either agency-like conduct or accident. In search of a unifying principle, one asks what the three classes of cases have in common. At first glance, it may seem little. Yet with the benefit of hindsight it is possible to posit one common feature: in each case it can be said that the employer's enterprise had created the risk that produced the tortious act. The language of "furtherance of the employer's aims" and the employer's creation of "a situation of friction" may be seen as limited formulations of the concept of enterprise risk that underlies the dishonest employee cases. The common theme resides in the idea that where the employee's
conduct is closely tied to a risk that the employer's enterprise has placed in the community, the employer may justly be held vicariously liable for the employee's wrong.

23 If employers are vicariously liable for acts like employee theft, why not for sexual abuse? That was the question before the English Court of Appeal in S.T. v. North Yorkshire County Council, I.R.L.R. 98, where the court applied the Salmond test to reverse a finding of vicarious liability against a school council for a teacher who sexually accosted a mentally handicapped student during a school field trip to the continent. It held that the sexual tort was not an unauthorized mode of performing an authorized act; it was an independent act, outside the scope of the teacher's authority. The court recognized the difficulty of saying that some intentional acts, like a store clerk's assault, do not attract vicarious liability, while other intentional acts, like theft, do. In the end, however, it did not confront the underlying policy of vicarious liability, preferring to reason that sexual abuse was closer to the store clerk's assault than to a solicitor's clerk's theft. It interpreted the stolen property cases of Levy Brothers and Lloyd, thought by many to be developing law, as a minor off-shoot of a line of cases concerning entrustment of goods — a departure from the "general" rule.

24 The S.T. decision thus fails to successfully integrate the dishonest employee cases. It also rests on the questionable conclusion that sexual torts by caretakers against children are closer to a shop assault than a bank employee's conversion. (While a molestation is a physical attack, it is equally arguable that the trust-abusing character of child abuse fits more in the dishonesty genre.) Furthermore, the opinion's reasoning depends on the level of generality with which the sexual act is described. Instead of describing the act in terms of the employee's duties of supervising and caring for vulnerable students during a study trip abroad, the Court of Appeal cast it in terms unrelated to those duties. Important legal decisions should not turn on such semantics. As Atiyah points out (supra, at p. 263): "conduct can be correctly described at varying levels of generality, and no one description of the 'act' on which the servant was engaged is necessarily more correct than any other." Finally, the reasoning in S.T. leads to anomalies. Lowry J.'s question in the chambers decision appealed from (at p. 223) remains unanswered: "If a postal clerk's theft and a solicitor's clerk's fraud can be said to have been committed in the course of their employment, I can see no sound basis in principle on which it can be concluded that Curry's criminal conduct should not attract vicarious liability." Or, as Wilkinson J. expressed more bluntly in the companion appeal (G.J. v. Griffiths, B.C.J. No. 2370 (QL) (S.C.), at para. 76), "[s]urely a distinction is not to be drawn attributing a higher standard to the way society looks after its jewellery than its children."

25 To return to the approach suggested earlier, precedent does not resolve the issue before us. We must therefore proceed to the second stage of the inquiry — a consideration of the policy reasons for vicarious liability, in the hope of discerning a principle to guide courts in future cases.
2. Policy Considerations

26 Vicarious liability has always been concerned with policy: Fleming, supra, at pp. 409 et seq. The view of early English law that a master was responsible for all the wrongs of his servants (as well as his wife's and his children's) represented a policy choice, however inarticulate, as to who should bear the loss of wrongdoing and how best to deter it. The narrowing of vicarious responsibility with the expansion of commerce and trade and the rise of industrialism also represented a policy choice. Indeed, it represented a compromise between two policies — the social interest in furnishing an innocent tort victim with recourse against a financially responsible defendant, and a concern not to foist undue burdens on business enterprises: Fleming, ibid. The expansion of vicarious liability in the 20th century from authorization-based liability to broader classes of ascription is doubtless driven by yet other policy concerns. "[V]icarious liability cannot parade as a deduction from legalistic premises, but should be frankly recognised as having its basis in a combination of policy considerations" (Fleming, at p. 410).

27 A focus on policy is not to diminish the importance of legal principle. It is vital that the courts attempt to articulate general legal principles to lend certainty to the law and guide future applications. However, in areas of jurisprudence where changes have been occurring in response to policy considerations, the best route to enduring principle may well lie through policy. The law of vicarious liability is just such a domain.

28 Recognizing the policy-driven perspective of the law of vicarious liability, La Forest J. in London Drugs, supra, opined that vicarious liability was traditionally considered to rest on one of two logical bases: (1) that the employee's acts are regarded in law as being authorized by the employer and hence as being the employer's acts (the "master's tort theory" or "direct liability theory"); or (2) that the employer was the employee's superior in charge or command of the employee (the "servant's tort theory") (at pp. 335-36, citing G. H. L. Fridman, The Law of Torts in Canada (1990), vol. 2, at pp. 314-15; Atiyah, supra, at pp. 6-7; G. Williams, "Vicarious Liability: Tort of the Master or of the Servant?" (1956), 72 L.Q. Rev. 522). La Forest J., quoting Fridman (at p. 315), went on to note, however, that "neither of the logical bases for vicarious liability succeeds completely in explaining the operation of the doctrine . . . 'express[ing] not so much the true rationale of vicarious liability but an attempt by the law to give some formal, technical explanation of why the law imposes vicarious liability'" (p. 336). Faced with the absence in the existing law of a coherent principle to explain vicarious liability, La Forest J. found its basis in policy (at p. 336): "the vicarious liability regime is best seen as a response to a number of policy concerns. In its traditional domain, these are primarily linked to compensation, deterrence and loss internalization."

29 Fleming has identified similar policies lying at the heart of vicarious liability. In his view, two fundamental concerns underlie the imposition of vicarious liability: (1) provision of a just and practical remedy for the harm; and (2) deterrence of future harm. While different formulations of the policy interests at stake may be made (for example, loss internalization is a hybrid of the two), I believe that these two ideas usefully embrace the main policy considerations that have been advanced.

30 First and foremost is the concern to provide a just and practical remedy to people who suffer as a consequence of wrongs perpetrated by an employee. Fleming expresses this succinctly (at p. 410): "a person who employs others to advance his own economic interest should in fairness be placed under a corresponding liability for losses incurred in the course of the enterprise". The idea that the person who introduces a risk incurs a duty to those who may be injured lies at the heart of tort law. As Cardozo C.J. stated in Palsgraf v. Long Island R. Co., 162 N.E. 99 (N.Y. 1928), at p. 100, "[t]he risk reasonably to be perceived defines the duty to be obeyed, and risk imports relation; it is risk to another or to others within the range of apprehension." This principle of fairness applies to the employment enterprise and hence to the issue of vicarious liability. While charitable enterprises may not employ people to advance their economic interests, other factors, discussed below, make it fair that they should bear the burden of providing a just and practical remedy for wrongs perpetrated by their employees. This policy interest embraces a number of subsidiary goals. The first is the goal of effective compensation. "One of the most important social goals served by vicarious liability is victim compensation. Vicarious liability improves the chances that the victim can recover the judgment from a solvent defendant." (B. Feldthusen, "Vicarious Liability for Sexual Torts", in Torts Tomorrow (1998), 221, at p. 224.) Or to quote Fleming, the master is "a more promising source of recompense than his servant who is apt to be a man of straw" (p. 410).
31 However, effective compensation must also be fair, in the sense that it must seem just to place liability for the wrong on the employer. Vicarious liability is arguably fair in this sense. The employer puts in the community an enterprise which carries with it certain risks. When those risks materialize and cause injury to a member of the public despite the employer's reasonable efforts, it is fair that the person or organization that creates the enterprise and hence the risk should bear the loss. This accords with the notion that it is right and just that the person who creates a risk bear the loss when the risk ripens into harm. While the fairness of this proposition is capable of standing alone, it is buttressed by the fact that the employer is often in the best position to spread the losses through mechanisms like insurance and higher prices, thus minimizing the dislocative effect of the tort within society. "Vicarious liability has the broader function of transferring to the enterprise itself the risks created by the activity performed by its agents" (London Drugs, per La Forest J., at p. 339).

32 The second major policy consideration underlying vicarious liability is deterrence of future harm. Fixing the employer with responsibility for the employee's wrongful act, even where the employer is not negligent, may have a deterrent effect. Employers are often in a position to reduce accidents and intentional wrongs by efficient organization and supervision. Failure to take such measures may not suffice to establish a case of tortious negligence directly against the employer. Perhaps the harm cannot be shown to have been foreseeable under negligence law. Perhaps the employer can avail itself of the defence of compliance with the industry standard. Or perhaps the employer, while complying with the standard of reasonable care, was not as scrupulously diligent as it might feasibly have been. As Wilkinson J. explained in the companion appeal's trial judgment (at para. 69):

If the scourge of sexual predation is to be stamped out, or at least controlled, there must be powerful motivation acting upon those who control institutions engaged in the care, protection and nurturing of children. That motivation will not in my view be sufficiently supplied by the likelihood of liability in negligence. In many cases evidence will be lacking or have long since disappeared. The proof of appropriate standards is a difficult and uneven matter.

33 I agree. Beyond the narrow band of employer conduct that attracts direct liability in negligence lies a vast area where imaginative and efficient administration and supervision can reduce the risk that the employer has introduced into the community. Holding the employer vicariously liable for the wrongs of its employee may encourage the employer to take such steps, and hence, reduce the risk of future harm. A related consideration raised by Fleming is that by holding the employer liable, "the law furnishes an incentive to discipline servants guilty of wrongdoing" (p. 410).

34 The policy grounds supporting the imposition of vicarious liability — fair compensation and deterrence — are related. The policy consideration of deterrence is linked to the policy consideration of fair compensation based on the employer's introduction or enhancement of a risk. The introduction of the enterprise into the community with its attendant risk, in turn, implies the possibility of managing the risk to minimize the costs of the harm that may flow from it.
35 Policy considerations relating to the fair allocation of loss to risk-creating enterprises and the deterrence of harms tend to support the imposition of vicarious liability on employers. But, as Fleming notes, there often exists a countervailing concern. At one time the law held masters responsible for all wrongs committed by servants. Later, that policy was abandoned as too harsh in a complex commercial society where masters might not be in a position to supervise their servants closely. Servants may commit acts, even on working premises and during working hours, which are so unconnected with the employment that it would seem unreasonable to fix an employer with responsibility for them. For example, if a man assaults his wife's lover (who coincidentally happens to be a co-worker) in the employees' lounge at work, few would argue that the employer should be held responsible. Similarly, an employer would not be liable for the harm caused by a security guard who decides to commit arson for his or her own amusement: see, e.g., Plains Engineering Ltd. v. Barnes Security Services Ltd. (1987), 43 C.C.L.T. 129 (Alta. Q.B.).

36 On further analysis, however, this apparently negative policy consideration of when liability would be appropriate is revealed as nothing more than the absence of the twin policies of fair compensation and deterrence that justify vicarious liability. A wrong that is only coincidentally linked to the activity of the employer and duties of the employee cannot justify the imposition of vicarious liability on the employer. To impose vicarious liability on the employer for such a wrong does not respond to common sense notions of fairness. Nor does it serve to deter future harms. Because the wrong is essentially independent of the employment situation, there is little the employer could have done to prevent it. Where vicarious liability is not closely and materially related to a risk introduced or enhanced by the employer, it serves no deterrent purpose, and relegates the employer to the status of an involuntary insurer. I conclude that a meaningful articulation of when vicarious liability should follow in new situations ought to be animated by the twin policy goals of fair compensation and deterrence that underlie the doctrine, rather than by artificial or semantic distinctions.
3. From Precedent and Policy to Principle

37 Underlying the cases holding employers vicariously liable for the unauthorized acts of employees is the idea that employers may justly be held liable where the act falls within the ambit of the risk that the employer's enterprise creates or exacerbates. Similarly, the policy purposes underlying the imposition of vicarious liability on employers are served only where the wrong is so connected with the employment that it can be said that the employer has introduced the risk of the wrong (and is thereby fairly and usefully charged with its management and minimization). The question in each case is whether there is a connection or nexus between the employment enterprise and that wrong that justifies imposition of vicarious liability on the employer for the wrong, in terms of fair allocation of the consequences of the risk and/or deterrence.

38 Where the risk is closely associated with the wrong that occurred, it seems just that the entity that engages in the enterprise (and in many cases profits from it) should internalize the full cost of operation, including potential torts. See generally A. O. Sykes, "The Boundaries of Vicarious Liability: An Economic Analysis of the Scope of Employment Rule and Related Legal Doctrines" (1988), 101 Harv. L. Rev. 563. On the other hand, when the wrongful act lacks meaningful connection to the enterprise, liability ceases to flow: Poland v. John Parr and Sons, 1 K.B. 236 (C.A.) (noting that the question is often one of degree). As Prosser and Keeton sum up (Prosser and Keeton on the Law of Torts (5th ed. 1984), at pp. 500-501), when the harm is connected to the employment enterprise:

The losses caused by the torts of employees, which as a practical matter are sure to occur in the conduct of the employer's enterprise, are placed upon that enterprise itself, as a required cost of doing business. They are placed upon the employer because, having engaged in an enterprise, which will on the basis of all past experience involve harm to others through the torts of employees, and sought to profit by it, it is just that he, rather than the innocent injured plaintiff, should bear them; and because he is better able to absorb them, and to distribute them, through prices, rates or liability insurance, to the public, and so to shift them to society, to the community at large.

39 The connection between the tort and the employment is broad. To say the employer's enterprise created or materially enhanced the risk of the tortious act is therefore different from saying that a reasonable employer should have foreseen the harm in the traditional negligence sense, making it liable for its own negligence. As Fleming explains (supra, at p. 422):

Perhaps inevitably, the familiar notion of foreseeability can here be seen once more lurking in the background, as undoubtedly one of the many relevant factors is the question of whether the unauthorised act was a normal or expected incident of the employment. But one must not confuse the relevance of foreseeability in this sense with its usual function on a negligence issue. We are not here concerned with attributing fault to the master for failing to provide against foreseeable harm (for example in consequence of employing an incompetent servant), but with the measure of risks that may fairly be regarded as typical of the enterprise in question. The inquiry is directed not at foreseeability of risks from specific conduct, but at foreseeability of the broad risks incident to a whole enterprise. [Emphasis added.]
40 On the other hand, this analysis's focus on what might be called "general cause", while broader than specific foreseeability, in no way implies a simple "but-for" test: but for the enterprise and employment, this harm would not have happened. This is because, reduced to formalistic premises, any employment can be seen to provide the causation of an employee's tort. Therefore, "mere opportunity" to commit a tort, in the common "but-for" understanding of that phrase, does not suffice: Morris v. C. W. Martin & Sons Ltd., 1 Q.B. 716 (C.A.) (per Diplock L.J.). The enterprise and employment must not only provide the locale or the bare opportunity for the employee to commit his or her wrong, it must materially enhance the risk, in the sense of significantly contributing to it, before it is fair to hold the employer vicariously liable. Of course, opportunity to commit a tort can be "mere" or significant. Consequently, the emphasis must be on the strength of the causal link between the opportunity and the wrongful act, and not blanket catch-phrases. When the opportunity is nothing more than a but-for predicate, it provides no anchor for liability. When it plays a more specific role — for example, as permitting a peculiarly custody-based tort like embezzlement or child abuse — the opportunity provided by the employment situation becomes much more salient.

41 Reviewing the jurisprudence, and considering the policy issues involved, I conclude that in determining whether an employer is vicariously liable for an employee's unauthorized, intentional wrong in cases where precedent is inconclusive, courts should be guided by the following principles:

(1) They should openly confront the question of whether liability should lie against the employer, rather than obscuring the decision beneath semantic discussions of "scope of employment" and "mode of conduct".

(2) The fundamental question is whether the wrongful act is sufficiently related to conduct authorized by the employer to justify the imposition of vicarious liability. Vicarious liability is generally appropriate where there is a significant connection between the creation or enhancement of a risk and the wrong that accrues therefrom, even if unrelated to the employer's desires. Where this is so, vicarious liability will serve the policy considerations of provision of an adequate and just remedy and deterrence. Incidental connections to the employment enterprise, like time and place (without more), will not suffice. Once engaged in a particular business, it is fair that an employer be made to pay the generally foreseeable costs of that business. In contrast, to impose liability for costs unrelated to the risk would effectively make the employer an involuntary insurer.

(3) In determining the sufficiency of the connection between the employer's creation or enhancement of the risk and the wrong complained of, subsidiary factors may be considered. These may vary with the nature of the case. When related to intentional torts, the relevant factors may include, but are not limited to, the following:

(a) the opportunity that the enterprise afforded the employee to abuse his or her power;
(b) the extent to which the wrongful act may have furthered the employer's aims (and hence be more likely to have been committed by the employee);
(c) the extent to which the wrongful act was related to friction, confrontation or intimacy inherent in the employer's enterprise;
(d) the extent of power conferred on the employee in relation to the victim;
(e) the vulnerability of potential victims to wrongful exercise of the employee's power.

42 Applying these general considerations to sexual abuse by employees, there must be a strong connection between what the employer was asking the employee to do (the risk created by the employer's enterprise) and the wrongful act. It must be possible to say that the employer significantly increased the risk of the harm by putting the employee in his or her position and requiring him to perform the assigned tasks. The policy considerations that justify imposition of vicarious liability for an employee's sexual misconduct are unlikely to be satisfied by incidental considerations of time and place. For example, an incidental or random attack by an employee that merely happens to take place on the employer's premises during working hours will scarcely justify holding the employer liable. Such an attack is unlikely to be related to the business the employer is conducting or what the employee was asked to do and, hence, to any risk that was created. Nor is the imposition of liability likely to have a significant deterrent effect; short of closing the premises or discharging all employees, little can be done to avoid the random wrong. Nor is foreseeability of harm used in negligence law the test. What is required is a material increase in the risk as a consequence of the employer's enterprise and the duties he entrusted to the employee, mindful of the policies behind vicarious liability.

43 What factors are relevant to whether an employer's enterprise has introduced or significantly exacerbated a risk of sexual abuse by an employee? (Again, I speak generally, supplementing the factors suggested above.) It is obvious that the risk of an employee sexually abusing a child may be materially enhanced by giving the employee an opportunity to commit the abuse. There are many kinds of opportunity and the nature of the opportunity in a particular case must be carefully evaluated in determining whether it has, in fact, materially increased the risk of the harm that ensued. If an employee is permitted or required to be with children for brief periods of time, there may be a small risk of such harm — perhaps not much greater than if the employee were a stranger. If an employee is permitted or
Si on permet ou required to be alone with a child for extended peri- on demande a un employ´ e de rester seul avec un ods of time, the opportunity for abuse may be enfant pendant de longues p´ eriodes, les chances greater. If in addition to being permitted to be qu’une agression se produise peuvent ˆ etre accrues. alone with a child for extended periods, the Si en plus de l’autoriser a demeurer seul avec un employee is expected to supervise the child in inti- enfant pendant de longues p´ eriodes, on s’attend amate activities like bathing or toiletting, the oppor- ce que l’employ´ e surveille l’enfant lors d’activit´ es tunity for abuse becomes greater still. As the intimes comme le bain ou la toilette, les chances opportunity for abuse becomes greater, so the risk qu’une agression se produise sont encore plus of harm increases. grandes. Plus grandes sont les chances qu’une agression se produise, plus le risque de pr´ ejudice s’accroˆ ıt. 562 2 S.C.R. BAZLEY v. CURRY McLachlin J.
The risk of harm may also be enhanced by the 44 La nature de la relation que l’emploi ´etablit nature of the relationship the employment estab- entre l’employ´ e et l’enfant est ´ egalement suscepti-lishes between the employee and the child. ble d’accroˆ ıtre le risque de pr´ ejudice. L’emploi qui Employment that puts the employee in a position place l’employ´ e dans une situation d’intimit´ e et of intimacy and power over the child (i.e ., a par- d’autorit´ e vis-a-vis de l’enfant (c’est- a-dire une ent-like, role-model relationship) may enhance the relation dans laquelle l’employ´ e agit a la mani ere risk of the employee feeling that he or she is able d’un parent ou fait fonction de modele) est suscep-to take advantage of the child and the child submit- tible d’accroˆ ıtre le risque que l’employ´ e sente qu’il ting without effective complaint. The more the peut profiter de l’enfant et que l’enfant se soumette employer encourages the employee to stand in a sans pouvoir se plaindre de mani ere efficace. Plus position of respect and suggests that the child l’employeur encourage l’employ´ e a imposer le res-should emulate and obey the employee, the more pect autour de lui et propose que l’enfant imite cet the risk may be enhanced. In other words, the more employ´ e et lui ob´ eisse, plus le risque est suscepti-an enterprise requires the exercise of power or ble de croˆ ıtre. Autrement dit, plus une entreprise authority for its successful operation, the more requiert l’exercice de pouvoir ou d’autorit´ e pour la materially likely it is that an abuse of that power r´ eussite de ses activit´ es, plus un abus de ce rapport relationship can be fairly ascribed to the employer. de force pourra ˆ etre attribu´ e, a juste titre, a l’em-See Boothman v. Canada , supra . ployeur. Voir Boothman c. Canada , pr´ ecit´ e. Other factors may be important too, depending 45 D’autres facteurs peuvent ˆ etre ´ egalement impor-on the nature of the case. To require or permit an tants, selon la nature de l’affaire. Demander ou employee to touch the client in intimate body permettrea un employ´ e de toucher les parties zones may enhance the risk of sexual touching, intimes du corps d’un client peut accroˆ ıtre le ris-just as permitting an employee to handle large que d’attouchements sexuels, tout comme permet-sums of money may enhance the risk of embezzle- tre a un employ´ e de manipuler des sommes d’ar-ment or conversion. This is the common sense core gent importantes peut accroˆ ıtre le risque de of the “mode of conduct” argument accepted by d´ etournement de fonds ou d’appropriation illicite. the trial judge in this case. (The same factor might Telle est la logique qui est au cœur de l’argument of course be analyzed in terms of enhanced oppor- du «mode de comportement» que le juge de pre-tunity.) Time and place arguments may also be rel- mi ere instance a retenu dans la pr´ esente affaire. evant in particular cases. The mere fact that the (Le mˆ eme facteur pourrait ´ evidemment ˆ etre ana-wrong occurred during working hours or on the lys´ e sous l’angle de l’occasion plus grande.) Les jobsite may not, standing alone, be of much impor- arguments de la date, de l’heure et du lieu peuvent tance; the assessment of material increase in risk aussi ˆ etre pertinents dans certains cas particuliers. cannot be resolved by the mechanical application Le simple fait que la faute a ´ et´ e commise pendant of spatial and temporal factors. 
This said, spatial les heures de travail ou au travail ne saurait, a lui and temporal factors may tend to negate the sug- seul, ˆ etre d’une grande importance; l’´ evaluation de gestion of materially enhanced risk of harm, inso- l’accroissement sensible du risque ne peut pas se far as they suggest that the conduct was essentially faire par l’application machinale de facteurs spa-unrelated to the employment and any enhanced tiaux et temporels. Cela ´ etant dit, les facteurs spa-risk it may have created (for example, the employ- tiaux et temporels peuvent tendre a annihiler l’id´ ee ee’s tort occurred offsite and after hours). The pol- d’accroissement sensible du risque de pr´ ejudice, icy considerations of fair compensation and deter- dans la mesure ou ils portent a croire que la con-rence upon which vicarious liability is premised duite n’avait essentiellement rien a voir avec l’em-ploi et tout risque qu’il peut avoir cr´ e´ e (par exemple, lorsque le d´ elit de l’employ´ e est survenu en dehors des lieux de travail et apr es les heures de 2 R.C.S. 563 BAZLEY c. CURRY Le juge McLachlin
may be attenuated or completely eliminated in travail). Les consid´ erations de politique g´ en´ erale such circumstances. de la juste indemnisation et de la dissuasion qui sous-tendent la responsabilit´ e du fait d’autrui peu-vent ˆetre att´ enu´ ees ou completement ´elimin´ ees dans de telles circonstances. In summary, the test for vicarious liability for an 46 En r´ esum´ e, le crit ere de la responsabilit´ e du fait employee’s sexual abuse of a client should focus d’autrui d´ ecoulant de l’agression sexuelle d’un on whether the employer’s enterprise and empow- client par un employ´ e devrait ˆ etre ax´ e sur la ques-erment of the employee materially increased the tion de savoir si l’entreprise de l’employeur et risk of the sexual assault and hence the harm. The l’habilitation de l’employ´ e ont accru sensiblement test must not be applied mechanically, but with a le risque d’agression sexuelle et, par cons´ equent, sensitive view to the policy considerations that jus- de pr´ ejudice. L’application du critere ne doit pas tify the imposition of vicarious liability — fair and ˆetre machinale mais doit tenir compte des consid´ e-efficient compensation for wrong and deterrence. rations de politique g´ en´ erale qui justifient l’impu-This requires trial judges to investigate the tation de la responsabilit´ e du fait d’autrui, soit la employee’s specific duties and determine whether dissuasion et l’indemnisation juste et efficace de la they gave rise to special opportunities for wrong- faute. Pour ce faire, les juges de premi ere instance doing. Because of the peculiar exercises of power doivent examiner les tˆ aches particulieres de l’em-and trust that pervade cases such as child abuse, ploy´ e et d´ ecider si elles cr´ eent des occasions sp´ e-special attention should be paid to the existence of ciales de commettre une faute. Compte tenu des a power or dependency relationship, which on its utilisations particuli eres qui sont faites de l’autorit´ eown often creates a considerable risk of et de la confiance dans les cas d’agression d’un wrongdoing. enfant, il faut prˆ eter une attention sp´ eciale ` a l’exis-tence d’un rapport de force ou de d´ ependance, qui cr´ ee souvent en soi un risque consid´ erable de faute. B. Should There Be an Exemption for Non-Profit B. Devrait-il y avoir exon ´eration de responsabilit ´eOrganizations? dans le cas d ’un organisme sans but lucratif?
In the alternative, the Foundation submits that 47 La Fondation soutient subsidiairement que, even if vicarious liability should presumptively mˆ eme s’il y a lieu de pr´ esumer que la responsabi-attach for Curry’s torts, this Court should exempt lit´ e du fait d’autrui est engag´ ee en raison des d´ elits non-profit organizations. None of the judges below commis par Curry, notre Cour devrait exon´ erer de accepted this suggestion. Nor would I. toute responsabilit´ e les organismes sans but lucra-tif. Aucun des juges d’instance inf´ erieure n’a retenu cet argument, et je ne le ferai pas non plus. In support of a charitable or non-profit exemp- 48 Pour ´ etayer l’exon´ eration de responsabilit´ e dans tion from liability, the Foundation argues: (1) that le cas d’un organisme de bienfaisance ou sans but it is unfair to fix liability without fault on non- lucratif, la Fondation soutient (1) qu’il est injuste profit organizations performing needed services on d’imputer la responsabilit´ e sans faute aux organis-behalf of the general public; (2) that non-profit mes sans but lucratif qui fournissent des services organizations are less able to control and supervise n´ ecessaires au nom du grand public, (2) que les the conduct of their agents, many of whom are vol- organismes sans but lucratif sont moins en mesure unteers, which enhances the unfairness of impos- de contrˆ oler et de surveiller la conduite de leurs ing vicarious liability and diminishes its deterrent mandataires, qui comptent de nombreux b´ en´ e-effect; and (3) that the practical effect of making voles, ce qui a pour effet d’accroˆ ıtre l’injustice 564 2 S.C.R. BAZLEY v. CURRY McLachlin J.
non-profit organizations vicariously liable for the d’imputer la responsabilit´ e du fait d’autrui, et d’en misconduct of their agents will be to make it diffi- diminuer l’effet dissuasif, et (3) que, si les organis-cult or impossible for such organizations to carry mes sans but lucratif sont responsables de l’incon-out their important work. The Foundation suggests duite de leurs mandataires, il leur sera difficile ou that the body that should bear the responsibility for impossible en pratique d’accomplir leur important Curry’s sexual abuse of the respondent is the pro- travail. Selon la Fondation, l’organisme qui devrait vincial government, which placed him in the assumer la responsabilit´ e de l’agression sexuelle society’s care. de l’intim´ e par Curry est le gouvernement provin-cial, qui l’a confi´ e aux soins de la soci´ et´ e. The first submission is that it is unfair to fix lia-49 Le premier argument veut qu’il soit injuste bility without fault on non-profit organizations d’imputer la responsabilit´ e sans faute aux organis-performing needed services on behalf of the com- mes sans but lucratif qui fournissent des services munity as a whole. It is difficult not to be sympa- n´ ecessaires au nom de l’ensemble de la collecti-thetic to this plea. Churches and aid societies vit´ e. Il est difficile de ne pas ˆ etre sympathique a ce undertake to care for society’s most needy. They moyen de d´ efense. Les ´eglises et les soci´ et´ es do work few others would, and they do it in a self- d’aide se chargent de prendre soin des laiss´ es-less, generous manner. In the case at bar, the pour-compte de la soci´ et´ e. Elles accomplissent un Children’s Foundation took in the respondent travail qui int´ eresse peu de gens, et elles le font de when no one else seemed ready or able to do so fa¸ con altruiste et g´ en´ ereuse. En l’esp ece, la and undertook the difficult task of providing him Children’s Foundation a recueilli l’intim´ e alors with the love and guidance that other children que personne d’autre ne semblait dispos´ e a le faire, receive from their parents. That non-profit organi- ou en mesure de le faire, et a entrepris la tˆ ache dif-zations do important work is beyond question. ficile de lui donner l’amour et l’encadrement que They are funded by the government and by dona- les autres enfants re¸ coivent de leurs parents. Il est tions from the public. It is unjust, the appellant incontestable que les organismes sans but lucratif argues, that they be made to pay damages when, accomplissent un travail important. Ils sont through no legal fault of their own, an unscrupu- financ´ es par le gouvernement et au moyen des lous employee or volunteer abuses his position dons du public. L’appelante fait valoir qu’il est with one of the wards. injuste de les forcera verser des dommages-int´ erˆ ets dans le cas ou, en l’absence de toute faute de leur part sur le plan juridique, un employ´ e ou un b´ en´ evole sans scrupules abuse de sa situation avec l’un des pupilles. There is, however, another perspective to be 50 Cependant, il faut examiner la situation d’un considered — that of the innocent child who was autre point de vue, celui de l’enfant innocent qui a the victim of the abuse. From his perspective, the ´et´ e victime de l’agression. Du point de vue de ce appellant’s institution, however meritorious, put dernier, l’´ etablissement de l’appelante, si louable him in the intimate care of Mr. Curry and in a very soit-il, l’a confi´ e aux soins personnels de M. 
Curry real sense enhanced the risk of his being abused. et a donc vraiment accru le risque qu’il soit From his perspective, it is fair that as between him agress´ e. De son point de vue, il est juste que, entre and the institution that enhanced the risk, the insti- lui et l’´ etablissement qui a accru le risque, ce soit tution should bear legal responsibility for his abuse ce dernier qui soit responsable, en droit, de son and the harm that befell him. It may also deter agression et du pr´ ejudice qu’il a subi. Cela peut other incidents of sexual abuse by motivating char- ´egalement contribuer a pr´ evenir d’autres ´ episodes itable organizations entrusted with the care of chil- d’agression sexuelle en incitant les organismes de dren to take not only such precautions as the law of bienfaisance a qui des enfants sont confi´ es a pren- 2 R.C.S. 565 BAZLEY c. CURRY Le juge McLachlin
negligence requires, but all possible precautions to dre non seulement les pr´ ecautions requises par le ensure that their children are not sexually abused. droit en matiere de n´ egligence, mais toutes celles possibles pour ´ eviter que les enfants qui leur sont confi´ es soient victimes d’une agression sexuelle. When all perspectives are considered, it is diffi- 51 Tout compte fait, il est difficile de conclure que, cult to conclude that the fact that the appellant parce que l’appelante accomplit un bon travail does good work in the community without expec- dans la collectivit´ e sans s’attendre a r´ ealiser un tation of profit makes it unjust that it should be profit, il est injuste que sa responsabilit´ e du fait held vicariously responsible for the abuse of the d’autrui soit engag´ ee en raison de l’agression dont respondent. These facts, therefore, do not consti- a ´ et´ e victime l’intim´ e. Par cons´ equent, ces faits tute a sound basis by themselves for exempting mˆ emes ne constituent pas une raison valable non-profit organizations from legal liability that d’exon´ erer les organismes sans but lucratif de la would otherwise fall on them. responsabilit´ e en droit qui leur serait par ailleurs imput´ ee. The second argument is that non-profit charita- 52 Le deuxieme argument veut que les organismes ble organizations often work with volunteers and de bienfaisance sans but lucratif recourent souvent are thus less able than commercial enterprises to aux services de b´ en´ evoles et qu’ils soient donc supervise what their agents do. This, it is said, moins en mesure que les entreprises commerciales diminishes the fairness of holding such organiza- de surveiller ce que font leurs mandataires. Cela, tions vicariously liable, and lessens any deterrent dit-on, fait en sorte qu’il est moins juste de tenir effect that liability might bring. This position rests ces organismes responsables du fait d’autrui, et on the premise that an organization’s responsibility att´ enue l’effet dissuasif que cette responsabilit´ e est and control over its operations diminish when it susceptible d’avoir. Ce point de vue repose sur la employs volunteers, a premise I cannot accept. pr´ emisse selon laquelle la responsabilit´ e qu’un Indeed, it is not suggested that non-profit organiza- organisme assume a l’´ egard de ses activit´ es et le tions do not have a duty to screen or supervise contrˆ ole qu’il exerce sur celles-ci diminuent quand those whom they entrust with their important il emploie des b´ en´ evoles, une pr´ emisse que je ne work. Accordingly, the same considerations of puis accepter. En fait, on ne laisse pas entendre fairness and deterrence arise, whether the organi- que les organismes sans but lucratif n’ont pas le zation is non-profit or commercial. devoir de s´ electionner ou de surveiller ceux a qui ils confient l’ex´ ecution de leur travail important. Par cons´ equent, les mˆ emes consid´ erations d’´ equit´ eet de dissuasion s’appliquent, peu importe que l’organisme soit sans but lucratif ou commercial. The third argument, essentially a variation on 53 Selon le troisi eme argument, qui est essentielle-the first, is that vicarious liability will put many ment une variante du premier, si on leur impute la non-profit organizations out of business or make it responsabilit´ e du fait d’autrui, de nombreux orga-difficult for them to carry on their good work. 
It is nismes sans but lucratif devront cesser leurs acti-argued that unlike commercial organizations, non- vit´ es ou pourront difficilement poursuivre leur bon profit organizations have few means of distributing travail. On soutient que, contrairement aux orga-any loss they are made to assume, since they can- nismes commerciaux, les organismes sans but not increase what they charge the public and can- lucratif ont peu de moyens de r´ epartir toute perte not easily obtain insurance for liability arising qu’on leur fait assumer, du fait qu’ils ne peuvent from sexual abuse. While in this case, it may be pas obliger le public ` a payer davantage pour leurs that the loss can be distributed to the public (since services et qu’il leur est difficile d’obtenir une the province pays the Foundation for caring for assurance responsabilit´ e pour agression sexuelle. 566 2 S.C.R. BAZLEY v. CURRY McLachlin J.
children like the respondent), many non-profit Bien que, dans la pr´ esente affaire, il soit possible organizations may have no way to obtain contribu- de r´ epercuter la perte sur le public (vu que la pro-tion from other sources to cover judgments against vince paie la Fondation pour prendre soin d’en-them. In sum, attaching liability to charities like fants comme l’intim´ e), de nombreux organismes the Foundation will, in the long run, disadvantage sans but lucratif n’ont pas les moyens d’obtenir des society. contributions d’autres sources pour payer le mon-tant des jugements prononc´ es contre eux. Somme toute, l’imputation d’une responsabilit´ e a des orga-nismes de bienfaisance comme la Fondation aura pour effet a long terme de d´ esavantager la soci´ et´ e. I cannot accept this contention. It is based on the 54 Je ne puis retenir cet argument. Il repose sur idea that children like the respondent must bear the l’id´ ee que des enfants comme l’intim´ e doivent cost of the harm that has been done to them so that assumer le coˆ ut du pr´ ejudice qu’ils ont subi, afin others in society may benefit from the good work de permettre a d’autres membres de la soci´ et´ e de of non-profit organizations. The suggestion that b´ en´ eficier du bon travail des organismes sans but the victim must remain remediless for the greater lucratif. L’id´ ee que la victime doit demeurer sans good smacks of crass and unsubstantiated utilitari- recours pour le plus grand bien de tous fleure l’uti-anism. Indeed, it is far from clear to me that the litarisme grossier et non fond´ e. En r´ ealit´ e, il est “net” good produced by non-profit institutions jus- loin d’ˆ etre ´ evident pour moi que le bien «net» que tifies the price placed on the individual victim, nor font les organismes sans but lucratif justifie le prix that this is a fair way for society to order its que l’on fait payer a la victime elle-mˆ eme, ou qu’il resources. If, in the final analysis, the choice is s’agit d’un moyen juste pour la soci´ et´ e d’organiser between which of two faultless parties should bear ses ressources. Si, en derniere analyse, il s’agit de the loss — the party that created the risk that mate- d´ ecider laquelle des deux parties qui n’a commis rialized in the wrongdoing or the victim of the aucune faute doit assumer la perte — la partie qui wrongdoing — I do not hesitate in my answer. a cr´ e´ e le risque a l’origine de l’acte fautif ou la vic-Neither alternative is attractive. But given that a time de cet acte fautif — je r´ eponds sans h´ esita-choice must be made, it is fairer to place the loss tion. Aucune de ces solutions n’est int´ eressante. on the party that introduced the risk and had the Cependant, comme un choix doit ˆ etre fait, il est better opportunity to control it. plus juste de faire assumer la perte par la partie qui a cr´ e´ e le risque et qui ´ etait mieux plac´ ee pour le contrˆ oler. Finally, it seems to me artificial to suggest that 55 Enfin, il me semble factice de laisser entendre Bazley could have claimed against the government que Bazley aurait pu intenter une action contre le because, by making the initial placement order, it gouvernement parce que c’est lui qui, en rendant was the cause-in-fact for Curry’s torts. The con- l’ordonnance initiale de placement, a ´ et´ e la cause nection between the original government order and r´ eelle des d´ elits de Curry. Le lien entre l’ordon-the sexual abuse is too remote to support liability. 
nance initiale du gouvernement et l’agression sexuelle est trop t´ enu pour justifier l’imputation de responsabilit´ e. I conclude that the case for exempting non-56 Je conclus qu’il n’a pas ´ et´ e prouv´ e que les orga-profit institutions from vicarious liability otherwise nismes sans but lucratif doivent ˆ etre exon´ er´ es de la properly imposed at law has not been established. responsabilit´ e du fait d’autrui par ailleurs imput´ ee I can see no basis for carving out an exception a juste titre en droit. Je ne vois aucune raison from the common law of vicarious liability for a d’exon´ erer une cat´ egorie particuli ere de d´ efen-particular class of defendants, non-profit organiza- deurs, ` a savoir les organismes sans but lucratif, de 2 R.C.S. 567 BAZLEY c. CURRY Le juge McLachlin
tions. The record before us does not support craft- la responsabilit´ e du fait d’autrui reconnue en com-ing such a status-based exemption from liability, mon law. Le dossier dont nous sommes saisis ne and I am unconvinced that such a course would be justifie pas de cr´ eer une telle exon´ eration de res-appropriate. The Court’s task is to clarify the gen- ponsabilit´ e fond´ ee sur le statut de l’entit´ e en cause eral legal principles that govern vicarious liability. et je ne suis pas convaincue de l’opportunit´ e de le The common law backdrop thus established, it is faire. La Cour a pour tˆ ache de clarifier les prin-for the legislature to consider whether relief should cipes juridiques g´ en´ eraux qui r´ egissent la respon-be granted to limit the legal exposure of non-profit sabilit´ e du fait d’autrui. Dans ce contexte de organizations to prosecution for sexual abuse. common law, il appartient au l´ egislateur d’exami-ner si l’exon´ eration devrait ˆetre accord´ ee pour limiter le risque que des organismes sans but lucra-tif fassent l’objet de poursuites judiciaires pour agression sexuelle. C. Application to the Case at Bar C. Application ` a la pr ´esente affaire
The appropriate inquiry in a case such as this is 57 Dans une affaire comme celle dont nous whether the employee’s wrongful act was so sommes saisis, il convient d’examiner si l’acte fau-closely connected to the employment relationship tif de l’employ´ e ´ etait si ´ etroitement li´ e a la relation that the imposition of vicarious liability is justified employeur-employ´ e que l’imputation de la respon-in policy and principle. From the point of view of sabilit´ e du fait d’autrui est justifi´ ee sur les plans de principle, a prime indicator is whether the la politique g´ en´ erale et des principes. Sur le plan employer, by carrying on its operations, created or des principes, il est primordial de savoir si, en materially enhanced the risk of the wrong that exer¸ cant ses activit´ es, l’employeur a cr´ e´ e ou sensi-occurred, such that the policy considerations of fair blement accru le risque a l’origine de la faute com-recovery and deterrence are engaged. In answering mise, de maniere a faire intervenir les consid´ era-this question, the court must have regard to how tions de politique g´ en´ erale de la juste the employer’s enterprise increased opportunity to indemnisation et de la dissuasion. En r´ epondant acommit the wrong, and how it fostered power- cette question, le tribunal doit se demander com-dependency relationships that materially enhanced ment l’entreprise de l’employeur a accru les the risk of the harm. There is no special rule for chances de commettre la faute, et comment elle a non-profit corporations. favoris´ e le d´ eveloppement des rapports de force et de d´ ependance qui ont accru sensiblement le risque de pr´ ejudice. Aucune r egle particuliere ne s’ap-plique aux organismes sans but lucratif. Applying these considerations to the facts in the 58 Si on applique ces consid´ erations aux faits de la case at bar, the Foundation is vicariously liable for pr´ esente affaire, la responsabilit´ e du fait d’autrui the sexual misconduct of Curry. The opportunity de la Fondation est engag´ ee en raison de l’incon-for intimate private control and the parental rela- duite sexuelle de Curry. L’occasion d’exercer un tionship and power required by the terms of contrˆ ole personnel intime ainsi que l’autorit´ e et la employment created the special environment that relation parentales requises par les conditions de nurtured and brought to fruition Curry’s sexual travail ont engendr´ e le climat propice a la perp´ etra-abuse. The employer’s enterprise created and fos- tion de l’agression sexuelle par Curry. L’entreprise tered the risk that led to the ultimate harm. The de l’employeur a cr´ e´ e et favoris´ e le risque a l’ori-abuse was not a mere accident of time and place, gine du pr´ ejudice caus´ e. L’agression ´ etait non pas but the product of the special relationship of inti- simplement le fruit d’un malheureux concours de macy and respect the employer fostered, as well as circonstances, mais le r´ esultat de la relation parti-the special opportunities for exploitation of that culi ere d’intimit´ e et de respect dont l’employeur a 568 2 S.C.R. BAZLEY v. CURRY McLachlin J.
relationship it furnished. Indeed, it is difficult to favoris´ e le d´ eveloppement, ainsi que des occasions imagine a job with a greater risk for child sexual sp´ eciales d’exploiter cette relation qu’il a fournies. abuse. This is not to suggest that future cases must En r´ ealit´ e, il est difficile d’imaginer un travail qui rise to the same level to impose vicarious liability. comporte un plus grand risque d’agression sexuelle Fairness and the need for deterrence in this critical pour les enfants. Cela ne revient pas a dire que les area of human conduct — the care of vulnerable futures affaires devront se situer au mˆ eme niveau children — suggest that as between the Foundation pour que la responsabilit´ e du fait d’autrui puisse that created and managed the risk and the innocent ˆetre imput´ ee. L’´ equit´ e et le besoin de dissuasion victim, the Foundation should bear the loss. dans ce domaine crucial du comportement humain, qu’est le soin d’enfants vuln´ erables, portentacroire que, entre la victime innocente et la Fonda-tion qui a cr´ e´ e et g´ er´ e le risque, c’est la Fondation qui devrait assumer la perte. VI. Conclusion VI. Conclusion I would dismiss the appeal with costs and remit 59 Je suis d’avis de rejeter le pourvoi avec d´ epens the matter to trial. et de renvoyer l’affaire a proc es.
Appeal dismissed with costs. Pourvoi rejet ´e avec d ´epens. Solicitors for the appellant the Children ’s Foun- Procureurs de l’appelante la Children ’sdation: Alexander, Holburn, Beaudin & Lang, Foundation: Alexander, Holburn, Beaudin &Vancouver. Lang, Vancouver. Solicitor for the appellant Her Majesty the Procureur de l ’appelante Sa Majest ´e la Reine du Queen in Right of British Columbia: The Ministry chef de la Colombie-Britannique: Le ministere du of the Attorney General, Victoria. Procureur g ´en ´eral, Victoria. Solicitor for the respondent: D. Brent Adair, Procureur de l ’intim ´e: D. Brent Adair, Trail Trail, B.C. (C.-B.). Solicitor for the intervener Her Majesty the Procureur de l ’intervenante Sa Majest ´e la Reine Queen in Right of Alberta: Alberta Justice, du chef de l ’Alberta: Alberta Justice, Edmonton. Edmonton. Solicitors for the intervener the Canadian Con- Procureurs de l ’intervenante la Conf ´erence des ference of Catholic Bishops: Barnes, Sammon, ´ev ˆeques catholiques du Canada: Barnes, Sammon, Ottawa. Ottawa. Solicitors for the intervener the United Church Procureurs de l ’intervenante l ’ ´ Eglise unie du of Canada: Harper Grey Easton, Vancouver. Canada: Harper Grey Easton, Vancouver. Solicitors for the intervener the General Synod Procureurs de l ’intervenant le synode g ´en ´eral of the Anglican Church of Canada: Boughton de l’ ´ Eglise anglicane du Canada: Boughton Peterson Yang Anderson, Vancouver. Peterson Yang Anderson, Vancouver. Solicitors for the intervener Wunnumin Lake Procureurs de l ’intervenante la Premi ere Nation First Nation: Goodman & Carr, Toronto. de Lac Wunnumin: Goodman & Carr, Toronto. 2 R.C.S. 569 BAZLEY c. CURRY
Solicitors for the interveners William Richard Procureurs des intervenants William Richard Blackwater et al.: Hutchins, Soroka & Grant, Blackwater et autres: Hutchins, Soroka & Grant, Vancouver. Vancouver. Solicitors for the interveners Barrie Caldwell, Procureurs des intervenants Barrie Caldwell, Samuel McNab and Glen Pelletier: MacPherson Samuel McNab et Glen Pelletier: MacPherson Leslie & Tyerman, Regina. Leslie & Tyerman, Regina.
|
90
|
Rule of thumb for sparse vs dense matrix storage - Computational Science Stack Exchange
===============
Rule of thumb for sparse vs dense matrix storage
Ask Question
Asked 6 years, 11 months ago
Modified 5 years ago
Viewed 7k times
18
Suppose I know the expected sparsity of a matrix (i.e. the number of non-zeros / total possible number of non-zeros). Is there a rule of thumb (perhaps approximate) for deciding whether to use sparse matrix storage (specifically, compressed row storage) vs. storing it as a dense matrix?
Speed is more important in my application than memory. But out of general curiosity, I'm interested in answers from both a speed and memory perspective.
After generating the matrix, I only apply addition and multiplication operations on it.
I have only been able to find qualitative answers, e.g. this question and this question but I'm looking for something like
...if the sparsity is more than approximately x%, then use dense storage.
matrix
sparse-matrix
dense-matrix
edited Sep 14, 2018 at 15:05 by Anton Menshov♦
asked Sep 14, 2018 at 14:47 by josh_eime
1
1 Are the additions structure preserving? When you add a matrix with the same non-zero elements to a sparse matrix, a good implementation should be able to avoid costly operations. When it changes the sparsity structure, the addition can become quite expensive. On the other hand, adding two dense matrices that contain a lot of zeros is inefficient as well and rebuilding a new sparse matrix may be less expensive. –allo Commented Jul 30, 2020 at 12:31
3 Answers
17
All matrix operations are memory bound (and not compute bound) on today's processors. So basically, you have to ask which format stores fewer bytes. This is easy to compute:
For a full matrix, you store 8 bytes (one double) per entry
For a sparse matrix, you store 12 bytes per entry (one double for the value, and one integer for the column index of the entry).
In other words, if your sparsity is below 67% — i.e., for nearly any matrix any reasonable person would call sparse — the sparse matrix format will not only yield better memory use but also better compute time.
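A minimal Python sketch of this arithmetic, assuming the byte counts above (8-byte double values, 4-byte column indices) plus the usually negligible CSR row-pointer array; the function names and test sizes are illustrative, not taken from the answer:

```python
def dense_bytes(n_rows, n_cols):
    # One 8-byte double per entry, zero or not.
    return 8 * n_rows * n_cols

def csr_bytes(n_rows, n_cols, density):
    # Per stored entry: 8-byte value + 4-byte column index,
    # plus (n_rows + 1) 4-byte row pointers.
    nnz = int(density * n_rows * n_cols)
    return 12 * nnz + 4 * (n_rows + 1)

if __name__ == "__main__":
    n = 10_000
    for density in (0.01, 0.1, 0.5, 0.67, 0.8):
        d, s = dense_bytes(n, n), csr_bytes(n, n, density)
        winner = "sparse" if s < d else "dense"
        print(f"density {density:4.0%}: dense {d/1e6:8.1f} MB, "
              f"CSR {s/1e6:8.1f} MB -> {winner}")
```

Ignoring the row pointers, the crossover is exactly where 12·nnz = 8·n², i.e. a fill fraction of 2/3, which is where the 67% figure comes from.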
answered Sep 14, 2018 at 23:14 by Wolfgang Bangerth
10
1 I would like to hear why someone has downvoted this answer. It’s qualitative, quantitative, and gives a good rule of thumb. If I could upvote it twice, I would. –Charles Commented Sep 15, 2018 at 0:19
4 You’ll need slightly more storage than that- you need to keep track of the rows too. One bit per row is sufficient. –Brian Borchers Commented Sep 15, 2018 at 2:38
5 Matrix-matrix multiplication of dense matrices is one place where you get sufficient cache reuse that you can get very close to peak FLOPS. I agree that matrix vector multiplication will be memory bandwidth limited. –Brian Borchers Commented Sep 15, 2018 at 3:10
7 67% is actually very far away from the point where computations would profit from sparsity. Dense matrix-vector multiplication can profit to a significantly greater extent from caching. (You need very irregular memory access for sparse matrix-vector multiplication.) If it is about solving linear systems with a direct solver, people sometimes say that a matrix is sparse if it has less than 0.1% of nonzero values. But in practice, the actual connectivity of the matrix entries is much more important than the number of nonzeros. –Henrik Schumacher Commented Sep 16, 2018 at 17:09
2 @WolfgangBangerth: Your definition of sparse ("Sparse" means that the number of nonzero entries per row is independent of the size for a set of matrices that grow larger and larger.), differs quite a bit from J.H. Wilkinson's (informal working) definition: "any matrix with enough zeros that it pays to take advantage of them", which is often cited in literature. I prefer Wilkinson's definition. –wim Commented Sep 17, 2018 at 14:40
20
For what it is worth, for random sparse matrices of size 10,000 by 10,000 vs. dense matrices of the same size, on my Xeon workstation using MATLAB and Intel MKL as the BLAS, the sparse matrix-vector multiply was faster for densities of 15% or less. At 67% (as proposed by another answer), the dense matrix-vector multiplication was about three times faster.
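The exact crossover is hardware- and library-dependent, so the most reliable approach is to measure it on your own machine. Below is a minimal SciPy/NumPy sketch of such an experiment; the matrix size and densities are illustrative and not the exact setup described in this answer:

```python
import time
import numpy as np
import scipy.sparse as sp

def time_matvec(mat, x, reps=20):
    # Average wall-clock time of reps matrix-vector products.
    t0 = time.perf_counter()
    for _ in range(reps):
        _ = mat @ x
    return (time.perf_counter() - t0) / reps

n = 5_000
x = np.random.rand(n)
for density in (0.05, 0.15, 0.3, 0.67):
    a_sparse = sp.random(n, n, density=density, format="csr")
    a_dense = a_sparse.toarray()
    t_sparse = time_matvec(a_sparse, x)
    t_dense = time_matvec(a_dense, x)
    print(f"density {density:4.0%}: sparse {t_sparse*1e3:7.2f} ms, "
          f"dense {t_dense*1e3:7.2f} ms")
```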
edited Sep 15, 2018 at 3:49 by Anton Menshov♦
answered Sep 15, 2018 at 2:55 by Brian Borchers
4
1 Interesting, thanks. Some of my matrices are up to 30-40% sparse (inconveniently right in between the 15% and 67% estimates), so I should probably conduct tests similar to yours (for the operations I'm interested in) to see if the memory advantages are worth the slow down. –josh_eime Commented Sep 15, 2018 at 3:17
3 A lot will depend on the hardware and software you’re using. My machine has quad channel memory so it has more memory bandwidth than a typical dual channel system. MKL is a very good BLAS and MATLAB’s sparse matrix data structures might not be perfectly optimized for this. –Brian Borchers Commented Sep 15, 2018 at 3:25
1 One problem with compressed row storage (or compressed column storage) is that the entries are usually stored in a different area in memory from the index information. This lack of locality can hurt performance. In comparison, in conventional dense matrix storage (by rows (C) or columns (Fortran)), you can load entries of the matrix consecutively from memory in a more efficient way. –Brian Borchers Commented Sep 15, 2018 at 16:48
2 In recent years there's been research on new storage formats for sparse matrices which enable enhanced performance for sparse matrix-vector multiplication on mutlcore processors, machines with SIMD instructions, and GPU's. See for example:pdfs.semanticscholar.org/041b/… –Brian Borchers Commented Sep 16, 2018 at 1:44
7
Even if a matrix is very sparse, its matrix product with itself can be dense. Take for example a diagonal matrix and fill its first row and column with nonzero entries; its product with itself will be completely dense. Such a matrix can arise, for example, as the graph Laplacian of a graph in which there is a vertex that is connected to all other vertices. In practice, it suffices if there are a few vertices with pretty high connectivity to the rest of the network. For matrix-vector multiplication, this phenomenon is less relevant although it may lead to imbalances when trying to parallelize the matrix-vector multiplication.
What I want to highlight: It really depends on the sparsity pattern and on what you want to do with the matrix. So, the best definition of a sparse matrix that I can come up with (which is pretty useless at the same time) is as follows:
A matrix is sparse if it is advantageous to store only its nonzero values and their positions and to invest the additional overhead that is coming from managing the arising data structure.
The lesson to learn: It really depends on what you want to do with it, which algorithm you use, and (as others have already pointed out) which hard- and software you use whether a given matrix is sparse or not (read as: whether you should use a sparse or dense matrix data structure). There cannot be a purely percentage-based rule if it is not only about storing data or matrix-vector multiplication. The best way to find out if your matrices are sparse is just to try it and compare with dense matrix methods.
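A small SciPy sketch of the "arrowhead" example above (the matrix size is arbitrary): an identity matrix with a dense first row and first column is very sparse, yet its product with itself is essentially full.

```python
import scipy.sparse as sp

n = 1_000
# Identity matrix with dense first row and first column ("arrowhead").
arrow = sp.eye(n, format="lil")
arrow[0, :] = 1.0
arrow[:, 0] = 1.0
a = arrow.tocsr()

prod = a @ a  # sparse matrix-matrix product

density = lambda m: m.nnz / (n * n)
print(f"density of A:   {density(a):.4f}")     # roughly 0.003
print(f"density of A*A: {density(prod):.4f}")  # roughly 1.0
```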
edited Sep 17, 2018 at 5:04
answered Sep 17, 2018 at 4:50 by Henrik Schumacher
2
3 The famous J.H. Wilkinson defined a sparse matrix as: "any matrix with enough zeros that it pays to take advantage of them." Exactly this definition has been cited by others frequently. Nevertheless, your definition is also quite suitable. –wim Commented Sep 17, 2018 at 13:02
1 Nice. That precisely the definition I tried to mimick, but I could not recall the source. –Henrik Schumacher Commented Sep 17, 2018 at 13:43
|
91
|
QR decomposition
===============
Stat Lect
by Marco Taboga, PhD
The QR decomposition (or QR factorization) allows us to express a matrix having linearly independent columns as the product of 1) a matrix Q having orthonormal columns and 2) an upper triangular matrix R.
In order to fully understand how the QR decomposition is obtained, we should be familiar with the Gram-Schmidt process.
Table of contents
Overview of the decomposition
A formal statement
Uniqueness of the decomposition
Pre-multiplication by the Q factor
Square matrices
Application to linear regression
Solved exercises
Exercise 1
Overview of the decomposition
Remember that the Gram-Schmidt process is a procedure used to transform a set of linearly independent vectors into a set of orthonormal vectors (i.e., a set of vectors that have unit norm and are orthogonal to each other).
In the case of a $K \times L$ matrix $A$, denote its columns by $A_1, A_2, \dots, A_L$. If these columns are linearly independent, they can be transformed into a set of orthonormal column vectors $Q_1, Q_2, \dots, Q_L$ by using the Gram-Schmidt process, which alternates normalization steps and projection steps:

we start with the normalization $Q_1 = A_1 / \|A_1\|$, where $\|A_1\|$ denotes the norm of $A_1$;

we project $A_2$ on $Q_1$: $A_2 = \langle A_2, Q_1 \rangle Q_1 + \varepsilon_2$, where $\langle A_2, Q_1 \rangle$ is the inner product between $A_2$ and $Q_1$ and $\varepsilon_2$ is the residual of the projection, orthogonal to $Q_1$;

we normalize the residual: $Q_2 = \varepsilon_2 / \|\varepsilon_2\|$;

we project $A_3$ on $Q_1$ and $Q_2$: $A_3 = \langle A_3, Q_1 \rangle Q_1 + \langle A_3, Q_2 \rangle Q_2 + \varepsilon_3$, where the residual $\varepsilon_3$ is orthogonal to $Q_1$ and $Q_2$;

we keep on alternating normalization steps (where projection residuals are divided by their norms) and projection steps (where $A_j$ is projected on $Q_1, \dots, Q_{j-1}$) until we have produced a set of orthonormal vectors $Q_1, \dots, Q_L$.

Note that the residuals can be expressed in terms of the normalized vectors as $\varepsilon_j = \|\varepsilon_j\| Q_j$ for $j = 1, \dots, L$, where we have defined $\varepsilon_1 = A_1$.

Thus, the projections can be written as

$A_j = \langle A_j, Q_1 \rangle Q_1 + \dots + \langle A_j, Q_{j-1} \rangle Q_{j-1} + \|\varepsilon_j\| Q_j \qquad (1)$

The orthonormal vectors can be adjoined to form a matrix $Q = [\, Q_1 \; Q_2 \; \cdots \; Q_L \,]$ whose columns are orthonormal.

The coefficients of the projections can be collected in an upper triangular matrix

$R = \begin{pmatrix} \|\varepsilon_1\| & \langle A_2, Q_1 \rangle & \cdots & \langle A_L, Q_1 \rangle \\ 0 & \|\varepsilon_2\| & \cdots & \langle A_L, Q_2 \rangle \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \|\varepsilon_L\| \end{pmatrix}$

By computing the matrix product between $Q$ and $R$, we recover the projections in equation (1). As a matter of fact, each column of the product $QR$ is a linear combination of the columns of $Q$ with coefficients taken from the corresponding column of $R$ (see the lecture on matrix products and linear combinations).

Therefore, we have that $A = QR$.
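Before stating the result formally, here is a minimal NumPy sketch of the construction just described (classical Gram-Schmidt). The function name and test matrix are chosen for illustration; in practice one would call a library routine such as numpy.linalg.qr, which uses the numerically more stable Householder approach.

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR factorization of a matrix with linearly independent columns,
    following the normalization/projection steps described above."""
    A = np.asarray(A, dtype=float)
    K, L = A.shape
    Q = np.zeros((K, L))
    R = np.zeros((L, L))
    for j in range(L):
        # Projection step: subtract components along previous Q columns.
        residual = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # inner product <A_j, Q_i>
            residual -= R[i, j] * Q[:, i]
        # Normalization step: the residual's norm is the diagonal entry.
        R[j, j] = np.linalg.norm(residual)
        Q[:, j] = residual / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A))             # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: orthonormal columns
```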
A formal statement
We now provide a formal statement of the QR decomposition.
Proposition Let $A$ be a $K \times L$ matrix. If the columns of $A$ are linearly independent, then $A$ can be factorized as $A = QR$, where $Q$ is a $K \times L$ matrix whose columns form an orthonormal set, and $R$ is an $L \times L$ upper triangular matrix whose diagonal entries are strictly positive.

Proof

In the previous section we have already shown a constructive proof of how the QR decomposition is obtained. The only important detail we have not mentioned is that the linear independence of the columns of $A$ guarantees that the residuals $\varepsilon_j$ of the projections performed in the Gram-Schmidt process are different from zero. As a consequence, the normalized vectors $Q_j = \varepsilon_j / \|\varepsilon_j\|$ are well-defined because the norms $\|\varepsilon_j\|$ are strictly positive. Moreover, the entries on the main diagonal of $R$ are strictly positive.

Note that $R$ is invertible because a triangular matrix is invertible if its diagonal entries are strictly positive.
It is time to make an example.
Example Define the matrixThe norm of the first column isso that the first normalized vector isThe inner product between and isThe projection of the second column on isand the residual of the projection isThe norm of the residual isThus,Let us verify that and are orthogonal:We now have performed all the calculations that lead to the QR factorizationThe matrix with orthonormal columns isand the upper triangular matrix isLet us check that indeed the product of and equals :
Uniqueness of the decomposition
The QR decomposition is unique.
Proposition Under the assumptions of the previous proposition, the QR decomposition is unique, that is, the matrices $Q$ and $R$ satisfying the stated properties are unique.
Proof
Suppose where is a second decomposition into a matrix having orthonormal columns and an upper triangular matrix having strictly positive diagonal elements. Since the columns of are orthonormal, we have thatwhere is the conjugate transpose of , and is the identity matrix (see Non-square matrices with orthonormal columns). By the same token,If we pre-multiply both sides of the equalityby we getorIf we instead pre-multiply the equality by we obtainorBy plugging equation (3) into equation (2), we obtainThe latter equation implies that, for , the -th row of can be written as a linear combination of all the rows of with coefficients taken from the -th row of the matrix (see Matrix multiplication and linear combinations). But is triangular with strictly positive diagonal entries, so its rows are linearly independent and they form a basis for the space of vectors. As a consequence, the only way to represent the -th row of as a linear combination of all the rows of is to put a unitary coefficient on the -th row itself and a zero coefficient on all the other rows (by the uniqueness of the representation in terms of a basis). In other words, the -th row of is the -th vector of the canonical basis. Since this is true for , we have thatorThus, is a unitary matrix (its conjugate transpose is equal to its inverse). Moreover, we have thatSince the inverse of an upper triangular matrix (UT) is UT and the product of two UT matrices is UT, is UT. It is also invertible, which means that its diagonal entries are strictly positive. To sum up, is both unitary and UT with strictly positive diagonal entries. Therefore, by a result on unitary and triangular matrices, we have that and, as a consequence,andThus, the two matrices involved in the QR decomposition are unique.
Pre-multiplication by the Q factor
An important fact that we have discussed in the previous proof but have not separately stated until now is that the matrix Q in the decomposition is such that Q*Q = I, where Q* is the conjugate transpose of Q. As a consequence, R = Q*A.
If Q has only real entries, then the conjugate transpose coincides with the transpose and the two equations above become Q^T Q = I and R = Q^T A.
Square matrices
When the matrix A being decomposed is square, then A = QR, where Q and R are both square matrices.
But a square matrix having orthonormal columns is a unitary matrix.
Therefore, the QR decomposition of a square matrix having linearly independent columns is the product of a unitary matrix and an upper triangular matrix with strictly positive diagonal entries.
Application to linear regression
The QR method is often used to estimate linear regressions.
In a linear regression we have a vector y of outputs and a matrix X of inputs whose columns are assumed to be linearly independent. We need to find the coefficient vector b that minimizes the mean squared errors made by using the fitted values Xb to predict the actual values y.
The well-known solution to this problem is the so-called ordinary least squares (OLS) estimator b = (X^T X)^(-1) X^T y.
We can simplify the formula for the OLS estimator, avoid inverting a matrix and thus reduce the computational burden (and the possible numerical instabilities) by computing the QR decomposition of X: X = QR, where Q has orthonormal columns and R is upper triangular.
Then, the OLS estimator becomes b = R^(-1) Q^T y, or equivalently R b = Q^T y.
The latter way of writing the solution is more convenient: since R is upper triangular, we do not need to invert it, but we can use the back-substitution algorithm to find the solution b.
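As a small illustration of this point (not part of the original lecture; the regressors X and outputs y below are made-up example data), a NumPy/SciPy sketch compares the QR route with the textbook OLS formula:

import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # made-up inputs
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)

# QR route: solve R b = Q^T y by back-substitution, no explicit inversion
Q, R = np.linalg.qr(X)                   # reduced QR
b_qr = solve_triangular(R, Q.T @ y, lower=False)

# textbook OLS: b = (X^T X)^(-1) X^T y
b_ols = np.linalg.solve(X.T @ X, X.T @ y)

print(np.allclose(b_qr, b_ols))          # True up to floating-point error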
Solved exercises
Below you can find some exercises with explained solutions.
Exercise 1
Compute the QR decomposition of
Solution
The norm of the first column of A is
Thus, the first orthonormal vector is
The inner product between the second column of A and the first orthonormal vector is
The projection of the second column on the first orthonormal vector is
and the residual of the projection is
The norm of the residual is
The second orthonormal vector is
Thus, the QR decomposition is given by the matrices Q and R obtained above.
How to cite
Please cite as:
Taboga, Marco (2021). "QR decomposition", Lectures on matrix algebra.
|
92
|
Continued Fraction Expansion of Irrational Square Root/Examples/2
From ProofWiki
Examples of Continued Fraction Expansion of Irrational Square Root
The continued fraction expansion of the square root of 2 is given by:
: √2 = [1, ⟨2⟩]
This sequence is A040000 in the On-Line Encyclopedia of Integer Sequences (N. J. A. Sloane (Ed.), 2008).
Convergents
The sequence of convergents to the continued fraction expansion of the square root of 2 begins:
: 1/1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, 1393/985, 3363/2378, …
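As a quick illustration (not part of the ProofWiki page), these convergents can be generated from the expansion [1, ⟨2⟩] with the standard recurrence p_k = a_k p_{k−1} + p_{k−2}, q_k = a_k q_{k−1} + q_{k−2}:

from fractions import Fraction

def sqrt2_convergents(n):
    # partial quotients of sqrt(2): a_0 = 1, then 2 repeating
    p_prev, p = 1, 1          # p_{-1}, p_0
    q_prev, q = 0, 1          # q_{-1}, q_0
    out = [Fraction(p, q)]
    for _ in range(n - 1):
        a = 2
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        out.append(Fraction(p, q))
    return out

for c in sqrt2_convergents(10):
    print(c, float(c))
# 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, 1393/985, 3363/2378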
Proof
√2 = 1 + (√2 − 1)
   = 1 + (√2 − 1)(√2 + 1)/(√2 + 1)        (multiplying top and bottom by √2 + 1)
   = 1 + ((√2)² − 1²)/(√2 + 1)            (Difference of Two Squares)
   = 1 + 1/(1 + √2)                       (as (√2)² − 1² = 2 − 1 = 1)
Thus it is possible to replace √2 recursively:
√2 = 1 + 1/(1 + √2)
   = 1 + 1/(1 + (1 + 1/(1 + √2)))
   = 1 + 1/(2 + 1/(1 + √2))
   = 1 + 1/(2 + 1/(1 + (1 + 1/(1 + √2))))
   = 1 + 1/(2 + 1/(2 + 1/(1 + √2)))
The pattern repeats indefinitely, producing the continued fraction expansion:
: √2 = [1, 2, 2, 2, …] = [1, ⟨2⟩]
■
Sources
2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.) ... (previous) ... (next): continued fraction
2021: Richard Earl and James Nicholson: The Concise Oxford Dictionary of Mathematics (6th ed.) ... (previous) ... (next): continued fraction
|
93
|
Woman's Wedding Shoe - China - Qing dynasty (1644–1911) - The Metropolitan Museum of Art
===============
Woman's Wedding Shoe
China
late 19th–early 20th century
Not on view
Artwork Details
Title: Woman's Wedding Shoe
Period: Qing dynasty (1644–1911)
Date: late 19th–early 20th century
Culture: China
Medium: Satin, leather, glass, metallic thread
Dimensions: 4 x 9 3/4 in. (10.16 x 24.77 cm)
Classification: Costumes-Embroidered
Credit Line: Gift of Captain and Mrs. James Thach, 1946
Object Number: 46.187.4
Captain and Mrs. James Thach , Old Lyme, CT (until 1946; donated to MMA)
New York. The Metropolitan Museum of Art. "The Manchu Dragon: Costumes of the Ch'ing Dynasty (1644–1912)," December 16, 1980–August 30, 1981.
|
94
|
Converting RDKit to Networkx · GitHub
===============
fangkuoyu/molecule_graph.py
Created September 13, 2021 14:28
Converting RDKit to Networkx
# import libraries -------------------------------------------------------------
from rdkit import Chem
import networkx as nx
import matplotlib.pyplot as plt

# define the SMILES string and convert it into a molecule structure ------------
caffeine_smiles = 'CN1C=NC2=C1C(=O)N(C(=O)N2C)C'
caffeine_mol = Chem.MolFromSmiles(caffeine_smiles)

# define the function for converting an RDKit molecule to a NetworkX graph -----
def mol_to_nx(mol):
    G = nx.Graph()
    # one node per atom, keeping a few atom properties as node attributes
    for atom in mol.GetAtoms():
        G.add_node(atom.GetIdx(),
                   atomic_num=atom.GetAtomicNum(),
                   is_aromatic=atom.GetIsAromatic(),
                   atom_symbol=atom.GetSymbol())
    # one edge per bond, keeping the bond type as an edge attribute
    for bond in mol.GetBonds():
        G.add_edge(bond.GetBeginAtomIdx(),
                   bond.GetEndAtomIdx(),
                   bond_type=bond.GetBondType())
    return G

# convert the RDKit molecule to a NetworkX graph and draw it -------------------
caffeine_nx = mol_to_nx(caffeine_mol)
caffeine_atom = nx.get_node_attributes(caffeine_nx, 'atom_symbol')
color_map = {'C': 'cyan',
             'O': 'orange',
             'N': 'magenta'}
caffeine_colors = []
for idx in caffeine_nx.nodes():
    if caffeine_nx.nodes[idx]['atom_symbol'] in color_map:
        caffeine_colors.append(color_map[caffeine_nx.nodes[idx]['atom_symbol']])
    else:
        caffeine_colors.append('gray')
nx.draw(caffeine_nx,
        labels=caffeine_atom,
        with_labels=True,
        node_color=caffeine_colors,
        node_size=800)
plt.show()

# print out the adjacency matrix ------------------------------------------------
# (newer NetworkX releases replace to_numpy_matrix with to_numpy_array)
matrix = nx.to_numpy_matrix(caffeine_nx)
print(matrix)
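Not part of the original gist, but a quick sanity check of the conversion can be appended: the graph should have one node per atom and one edge per bond (14 and 15, respectively, for caffeine).

# optional sanity check: node and edge counts should match atoms and bonds
assert caffeine_nx.number_of_nodes() == caffeine_mol.GetNumAtoms()
assert caffeine_nx.number_of_edges() == caffeine_mol.GetNumBonds()
print(caffeine_nx.number_of_nodes(), caffeine_nx.number_of_edges())  # 14 15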
|
95
|
abstract algebra - Group presentation of $A_5$ with two generators - Mathematics Stack Exchange
===============
Group presentation of A_5 with two generators
Asked 8 years, 9 months ago
Modified4 years, 10 months ago
Viewed 6k times
In [Huppert, Endliche Gruppen, p. 140] the author shows that the alternating group A_5 is isomorphic to G := ⟨x, y ∣ x^5 = y^2 = (xy)^3 = 1⟩. The proof is elementary but long and complicated. Is there a simple way to prove the assertion by using some theory? Of course essentially we have to show that |G| ≤ 60.
Here is a possible attempt: A_5 is generated by (1,2,3,4,5) and (12)(34), and these elements satisfy the above relations. We can try to give a proof of |A_5| ≤ 60 by using these generators (and the well known subgroup structure of A_5), and then to adapt the same proof for G. This could be done as follows:
Set a := xy and b := (xy)^(x^2) = x^(-1) y x^2. Both elements are of order three. The corresponding permutations are (2,4,5) and (1,2,4), so in principle we should be able to show that U := ⟨a, b⟩ (which is in fact isomorphic to A_4) has at most 12 elements. For doing so we define V := ⟨ab, (ab)^b⟩. V has to be isomorphic to the Klein four group, so we have to show that ab and (ab)^b are commuting involutions (should be possible somehow...), and that b normalizes V (easy). Then it is clear that U = V⟨b⟩ has at most 12 elements. Finally, we have to show that the index |G : U| is at most 5. This is the only part where I have no idea how to proceed.
Any ideas?
abstract-algebra
group-theory
alternative-proof
symmetric-groups
group-presentation
edited Dec 25, 2019 at 20:10
Shaun♦
48k 20 20 gold badges 75 75 silver badges 187 187 bronze badges
asked Nov 4, 2016 at 15:26
DuneDune
7,661 1 1 gold badge 24 24 silver badges 52 52 bronze badges
5
1 The easiest way to show that |G : U| = 5 is to apply the Todd-Coxeter coset enumeration algorithm. –Derek Holt Commented Nov 4, 2016 at 19:34
@DerekHolt Thanks! I was not aware of that algorithm, but luckily I was able to finish my proof without using it. –Dune Commented Nov 4, 2016 at 22:22
I prefer the use of the algorithm because it is equally easy, but also more systematic and of general applicability. Also, it is not difficult to prove the result using coset enumeration and then to convert the calculation into a proof like the one you gave. (I am talking here about the proof that |G : U| = 5, not the proof that |U| ≤ 12.) –Derek Holt Commented Nov 5, 2016 at 10:21
@DerekHolt: You're right, the Todd-Coxeter algorithm yields |G : U| = 5 extremely fast in this case. Thank you for mentioning it! –Dune Commented Nov 6, 2016 at 18:45
Related: math.stackexchange.com/questions/1077664/… –Arnaud D. Commented Mar 2, 2018 at 13:49
1 Answer
Finally, I am able to complete my sketch of the proof. We begin by proving the following:
G := ⟨x, y ∣ x^3 = y^3 = (xy)^2 = 1⟩ is isomorphic to A_4
Proof: A_4 is generated by (123) and (234), and these permutations satisfy the above relations. Hence, A_4 is a homomorphic image of G. We will show henceforth |G| ≤ 12. Let a = xy and b = a^x = yx. We have a^2 = b^2 = 1, and also (ab)^2 = x y^(-1) x^(-1) y^(-1) x = x (xy)^(-2) x^2 = 1. So V := ⟨a, b⟩ is a homomorphic image of C_2 × C_2. Since a^x = b ∈ V and b^x = x^(-1) y x^2 = (yx)^(-1) (xy)^(-1) = ba ∈ V, ⟨x⟩ normalizes V, and G = V⟨x⟩ has at most 12 elements. □
Now we are able to prove the original statement:
G := ⟨x, y ∣ x^5 = y^2 = (xy)^3 = 1⟩ is isomorphic to A_5
Proof: A_5 is generated by (12345) and (12)(34), and these permutations satisfy the above relations. Hence, A_5 is a homomorphic image of G. We will show |G| ≤ 60. Let a = xy and b = a^(x^2) = x^(-1) y x^2. We have a^3 = b^3 = 1. In the following we will frequently need the identity
(∗)  y x^(-1) y = x y x,
which follows directly from (xy)^3 = 1. Using (∗) we compute (ab)^2 = x (y x^(-1) y) x^3 (y x^(-1) y) x^2 = x (xyx) x^3 (xyx) x^2 = 1. Hence, U := ⟨a, b⟩ is a homomorphic image of A_4, and has therefore at most 12 elements. We finish the proof by showing that the complete set of right cosets of U in G is given by Ω = {U, Ux, Ux^2, Ux^3, Ux^4}. Since G acts transitively on its right cosets, this can be done by showing that Ω is invariant under the action of the generators x and y. It is clear that Ωx = Ω. Furthermore, we have
Uy = Uay = Ux
(Ux)y = Ua = U
(Ux^2)y = Ux(xyx)x^(-1) = Ux(y x^(-1) y)x^(-1) = U a b x^2 = Ux^2
(Ux^3)y = U b^(-1) x^4 = Ux^4
(Ux^4)y = U b x^3 = Ux^3
This also shows Ωy = Ω, and hence ΩG = Ω, which completes the proof. □
I am quite satisfied with this proof, since it is very conceptual. But still it is quite long and depends on many calculations which seem a bit random. I would be happy to see shorter proofs which are using more sophisticated concepts.
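(Not part of the original answer.) For readers who want a quick machine check of the easy direction, a short SymPy computation verifies that the two permutations satisfy the relations and generate a group of order 60:

from sympy.combinatorics import Permutation, PermutationGroup

x = Permutation([1, 2, 3, 4, 0])   # the 5-cycle (0 1 2 3 4), i.e. (1,2,3,4,5)
y = Permutation([1, 0, 3, 2, 4])   # the double transposition (0 1)(2 3), i.e. (12)(34)

# the generators satisfy the relations of the presentation
assert (x**5).is_Identity and (y**2).is_Identity and ((x*y)**3).is_Identity

# and they generate a group of order 60 = |A_5|
G = PermutationGroup(x, y)
print(G.order())   # 60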
edited Dec 25, 2019 at 20:17
Shaun♦
48k 20 20 gold badges 75 75 silver badges 187 187 bronze badges
answered Nov 4, 2016 at 22:21
DuneDune
7,661 1 1 gold badge 24 24 silver badges 52 52 bronze badges
|
96
|
License: arXiv.org perpetual non-exclusive license
arXiv:2408.05148v3 [cs.DC] 30 Oct 2024
Impacts of floating-point non-associativity on reproducibility for HPC and deep learning applications
† This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan ( downloads/doe-public-access-plan).
∥ Equal contributions
Sanjif Shanmugavelu∥4∗,
Mathieu Taillefumier∥2∗,
Christopher Culver4,
Oscar Hernandez3,
Mark Coletti3 and
Ada Sedova∥∗3
4Maxeler Technologies, a Groq Company. 3 Hammersmith Grove, W6 0ND, London, UK
2ETH Zurich/Swiss National Supercomputing Centre (CSCS). Via Trevano 131, 6900 Lugano, Switzerland
3Oak Ridge National Laboratory. Oak Ridge, TN, USA
∗Corresponding emails:
[email protected]
[email protected]
[email protected]
Abstract
Run to run variability in parallel programs caused by floating-point non-associativity has been known to significantly affect reproducibility in iterative algorithms, due to accumulating errors. Non-reproducibility can critically affect the efficiency and effectiveness of correctness testing for stochastic programs. Recently, the sensitivity of deep learning training and inference pipelines to floating-point non-associativity has been found to sometimes be extreme. It can prevent certification for commercial applications, accurate assessment of robustness and sensitivity, and bug detection. New approaches in scientific computing applications have coupled deep learning models with high-performance computing, leading to an aggravation of debugging and testing challenges. Here we perform an investigation of the statistical properties of floating-point non-associativity within modern parallel programming models, and analyze performance and productivity impacts of replacing atomic operations with deterministic alternatives on GPUs. We examine the recently-added deterministic options in PyTorch within the context of GPU deployment for deep learning, uncovering and quantifying the impacts of input parameters triggering run to run variability and reporting on the reliability and completeness of the documentation. Finally, we evaluate the strategy of exploiting automatic determinism that could be provided by deterministic hardware, using the Groq accelerator for inference portions of the deep learning pipeline. We demonstrate the benefits that a hardware-based strategy can provide within reproducibility and correctness efforts.
Index Terms:
Reproducibility of results, floating-point arithmetic, parallel programming, high-performance computing, deep learning
I Introduction
Run to run variability of a program, despite identical inputs and software stack, is usually assumed to be negligible, even in heterogeneous parallel programming . However, variability caused by floating-point non-associativity (FPNA) coupled with asynchronous parallel operations such as reductions can be substantial , especially for massively parallel programs using iterative stochastic routines, such as those implementing optimization algorithms like conjugate gradient . It can also mask errors within threshold-based correctness testing schemes , which are often used in scientific computing programs such as molecular simulation [5, 6, 7], making debugging difficult. Deep neural network (DNN) training also involves iterative, stochastic algorithms coupled with non-linear activation functions. This combination has been found to cause extreme sensitivity to bit-level numerical changes, such as those caused by FPNA [8, 9, 10]. DNN inference can also suffer from such sensitivity, due to the influence of non-linear activation functions, although the effects may be reduced due to the absence of the compounding errors caused by the iterative training schemes. Recently, a number of studies have shown that this run to run variability in the full DNN training and inference pipeline can lead to unacceptably large differences in the predictions produced by a model. They have thwarted efforts to release reproducible commercial applications in safety-critical sectors such as medical diagnostics and autonomous driving . In addition, recent reports have found that all of the major deep learning (DL) software frameworks such as TensorFlow and PyTorch contain hundreds of bugs originating from all software levels, including environment misuse, incorrect assignment, and concurrency issues such as data races ; bugs in these frameworks can be disastrous , especially when they are silent , and high runtime variability can make it extremely difficult to detect bugs in these deep, multi-language, multi-level parallel stacks.
Within scientific high-performance computing (HPC), the incorporation of DL into traditional approaches for simulation has become increasingly popular [15, 16, 17]. Molecular simulation, for example, has used DNN models for interatomic potentials, which promise quantum mechanical accuracy at the cost of simpler, empirical models; this translates to speedups of several orders of magnitude and the approach received the ACM Gordon Bell Award in 2020 . Our preliminary tests using identical inputs for training and inference pipelines for these types of models revealed a level of non-reproducibility for prediction of forces that would be unacceptable for traditional quantum mechanical HPC programs . Besides non-deterministic kernels within the deep learning framework itself, any external kernels programmed by the scientific simulation developers which contain non-determinism, that feed into training pipelines , can also introduce large runtime non-reproducibility.
Here we present a systematic analysis of the effects of FPNA within asynchronous parallel kernels on graphics processing unit (GPU) accelerators, together with several metrics that quantify this variability, within parallel programming schemes and in PyTorch functions, as well as within full end-to-end training and inference pipelines. We assess the impact of these effects on the ability to perform correctness testing and to produce reproducible scientific results. We also study solutions, using programming approaches, and via deterministic hardware for which we perform experiments with the Groq deterministic inference chip.
II Metrics for measuring the variability of non-deterministic functions
We defined metrics for quantifying the run to run variability in output values for implementations of functions with scalar and multidimensional (array) inputs and outputs, following a similar approach used in error analysis . These are explained below.
In all cases, the metric is zero if the outputs of two implementations are bitwise identical, nonzero otherwise, and increasing as variability increases.
II-1 Scalar-valued outputs
We quantify the bitwise non-determinism between the outputs f_nd(x) and f_d(x) of two implementations of some function f,
where the subscripts nd and d label the non-deterministic and deterministic implementations respectively.
II-2 Array outputs
To quantify the bitwise variability of a non-deterministic implementation of a function producing a multidimensional array output, we define two different metrics. Let two implementations of a function be and , which produce as outputs the arrays and , respectively, each with dimensions and total elements. The first metric is the elementwise relative mean absolute variation (ERMV),
| | | | |
| --- | --- | --- | --- |
| | | | (1) |
The second metric, the count (C) variability, measures how many elements in the multidimensional array are different,

C(A_1, A_2) = (1/N) Σ_i 𝟙(A_1[i] ≠ A_2[i]),    (2)

where 𝟙 is the indicator function, which is 1 if the condition inside the parentheses is true, and 0 otherwise.
Each of these metrics is zero if and only if the two multidimensional arrays A_1 and A_2 are bitwise identical. The count variability informs us of the percentage of varying array elements between the two output arrays, while the ERMV produces a global metric for the magnitude of the variability of the array outputs.
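A minimal NumPy sketch of these metrics, assuming the normalized forms written above (the exact normalization used in the paper may differ), could look as follows:

import numpy as np

def ermv(a_1, a_2, eps=0.0):
    # elementwise relative mean absolute variation between two output arrays
    a_1, a_2 = np.asarray(a_1), np.asarray(a_2)
    return float(np.mean(np.abs(a_1 - a_2) / (np.abs(a_2) + eps)))

def count_variability(a_1, a_2):
    # fraction of elements whose values differ between the two outputs
    a_1, a_2 = np.asarray(a_1), np.asarray(a_2)
    return float(np.mean(a_1 != a_2))

# both metrics are zero when the two outputs are identical
x = np.random.default_rng(0).normal(size=(4, 5))
print(ermv(x, x), count_variability(x, x))   # 0.0 0.0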
III Programming deterministic parallel sums
In this section we first introduce FPNA impacts for the parallel sum, then demonstrate several deterministic programming solutions, for GPUs using CUDA, and also using OpenMP for CPU or GPU. We then examine the performance impacts of these solutions, as well as programming productivity considerations. Parallel sums, as we will show in the subsequent sections, contribute some of the most substantial sources of variability to PyTorch functions on GPUs, and are thus responsible for concerning non-determinism in real-world DL applications such as graph neural network (GNN) s, which make heavy use of them. A concerning connection to molecular physics and chemistry applications in scientific computing is the increased popularity of GNN s for surrogate models in these applications [21, 22], especially considering the stringent precision requirements for these applications, as discussed below.
Consider the sum S = Σ_{i=1}^{n} x_i, which we evaluate with a deterministic algorithm (the summation order is always the same). The simplest implementation adds the floating-point (FP) numbers serially in the order they are stored. When the sum is parallelized with asynchronous operations and executed in an unspecified order, this is equivalent to applying a random permutation σ to the series before computing the serial sum, S_σ = Σ_{i=1}^{n} x_{σ(i)}.
TABLE I: Effects of permutations on sums of floating-point numbers
| size | | |
| --- | --- | --- |
| 100 | | |
| 1000 | | |
| 1000 | | |
| 10000 | | |
| 10000 | | |
| 100000 | | |
| 100000 | | |
| 1000000 | | |
| 1000000 | | |
To demonstrate the magnitude of the variability, we can generate lists of double precision floating point (FP64) numbers of various lengths using Python, drawn for example from a normal distribution of zero mean, and compute the sum before and after applying a random permutation to the list, repeating ten times; Table I shows the results. The variability can be larger than the tolerance of some of the correctness tests included in computational physics and chemistry programs; the same result obtains with a Boltzmann (exponential) distribution as well, which is the expected distribution for such calculations. For example, the quantum mechanics simulation program CP2K uses a tolerance-based correctness testing scheme, and some of the tests have very tight tolerances for values such as energy. This illustrates the problems that FPNA can introduce in correctness testing schemes for computational tasks with stringent accuracy requirements. In addition, such programs often employ iterative solvers such as conjugate gradient, leading to accumulating FP errors which can approach or exceed 20% after six or seven iterations using double precision, as has been shown for HPC systems such as massively multithreaded machines like the Cray XMT.
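The experiment just described is easy to reproduce; the following sketch (assuming FP64 values from a standard normal distribution and ten random permutations per size, not the authors' exact script) prints the largest deviation observed for each array length:

import numpy as np

rng = np.random.default_rng(42)
for size in (100, 1_000, 10_000, 100_000, 1_000_000):
    x = rng.normal(loc=0.0, scale=1.0, size=size)     # FP64 by default
    s_ref = float(np.add.reduce(x))                   # fixed-order reference sum
    # re-sum after random permutations: the bitwise results generally differ
    devs = [abs(float(np.add.reduce(rng.permutation(x))) - s_ref) for _ in range(10)]
    print(size, max(devs))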
III-A Examples of deterministic parallel sum implementations on GPUs with CUDA/HIP
A difficulty for programming on GPUs is the absence of a global synchronization barrier at the kernel level.
To avoid races one can use (i) a single atomicAdd function, (ii) the implicit global synchronization that happens when multiple kernels are added to the same stream,
(iii) a single GPU thread to calculate the sum, or (iv) the __threadfence instruction.
Option (iii) is deterministic, but slow, as only one GPU thread computes the reduction. A code snippet can be found in the GitHub repository associated with this paper. On AMD GPUs (with HIP), option (i) is only available in an unsafe compiler mode and is therefore not considered for AMD GPUs here. This atomicAdd-only (AO) method is
shown in Listing 1 for the CUDA version. This approach is the easiest to program (only a few lines of code), but is effectively sequential, and the instruction order is runtime dependent, making it non-deterministic.
Options (ii) and (iv) use a pairwise algorithm [20, 23] coupled to data blocking for distributing the data over the different thread blocks and specifying the order of summation.
Here, each thread of a given block adds
consecutive elements in pairs, in parallel on the GPU, and this process is repeated on the partial results until a single value per block remains.
We use the __syncthreads instruction after each step of the partial reduction for thread synchronization within the thread block. Shared memory is also used to improve data locality and performance. Partial results are then stored back in global memory.
We tested three different methods for accumulating these partial results. The simple-pass-with atomicAdd (SPA), uses an atomicAdd instruction to accumulate all partial sums instead of storing the results back in memory. This is again a simple solution from the programmatic perspective, but the implementation is not deterministic. The two-passes-with-final-reduction-on-CPU (TPRC) method uses the property that two kernels launched on the same stream will execute sequentially following the submission order, introducing a synchronization barrier between them, or between a kernel and a data transfer between GPU and CPU.
We choose the data transfer, and compute the final sum on CPU using the sequential recursive method. TPRC is deterministic, but more sensitive to compiler optimizations because of vectorization.
The last approach uses an integer counter to keep track of the completed thread blocks. The sum function from the CUB/HIPCUB library (CU) uses a similar technique.
Each thread block increments this variable atomically before exiting. The last block incrementing the counter is responsible for the remaining reduction, which can be performed in two ways. The single-pass-with-tree-reduction (SPTR) method uses the same block reduction algorithm as for the first stage, while the single-pass-with-final-recursive-sum-on-GPU (SPRG) variant (see Listing 1) uses the recursive sum. SPTR and SPRG are both deterministic by construction. SPRG and SPTR are only possible if we use the __threadfence instruction to avoid data race conditions. This instruction ensures that all writes issued by the calling thread are finished before the calling thread runs the next instruction, and notifies all the other running threads that the memory operation is over. The instruction is not a global synchronization barrier.
TABLE II: Different implementations of the parallel sum in CUDA. Non-deterministic implementations shown in red.
| Method | deterministic | # of kernels | synchronization methods |
| --- | --- | --- | --- |
| CU | Yes | - | __threadfence |
| SPTR | Yes | 1 | __threadfence |
| SPRG | Yes | 1 | __threadfence |
| TPRC | Yes | 2 | stream synchronization |
| SPA | No | 1 | atomicAdd |
| AO | No | 1 | atomicAdd |
A summary of the main properties of each implementation is provided in Table II, while code snippets and full reference codes are provided in the GitHub repository.
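To make the fixed summation order of the blocked pairwise schemes concrete, here is a plain-Python illustration of the two-stage structure described above (an algorithmic sketch, not the CUDA/HIP code from the repository):

import numpy as np

def block_pairwise_sum(x, block_size=256):
    # stage 1: each "thread block" reduces its chunk with a pairwise tree,
    # always combining elements in the same order
    partials = []
    for start in range(0, len(x), block_size):
        block = list(x[start:start + block_size])
        while len(block) > 1:
            if len(block) % 2:                  # pad odd-sized rounds
                block.append(0.0)
            block = [block[i] + block[i + 1] for i in range(0, len(block), 2)]
        partials.append(block[0])
    # stage 2: deterministic serial reduction of the per-block partial sums
    total = 0.0
    for p in partials:
        total += p
    return total

x = np.random.default_rng(0).normal(size=10_000)
print(block_pairwise_sum(x) == block_pairwise_sum(x))   # True: the order is fixed, so the result is reproducible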
III-B Other approaches: Parallel sums with OpenMP
OpenMP, a directive-based API for shared memory and accelerator parallel programming, supports data reduction to perform parallel calculations that are portable across architectures. The OpenMP programming model uses reduction scoping clauses to specify the regions of code where the reduction takes place. These regions can include parallel fork-join, tasks, sections, SIMD, and scope constructs , which can be executed on a host or a device such as a GPU. The OpenMP specification does not specify the location and ordering in which the values are combined, thus,
bitwise determinism is not guaranteed.
As a result, implementing a deterministic parallel reduction in OpenMP will require using constructs that can enforce ordering when reducing private and shared variables. An ordered construct can define a structured block within a loop, SIMD, or loop SIMD region. This construct enforces sequential execution for a specified region of code according to the loop iterations, while enabling parallel execution for code outside the region. When the thread handling the first iteration of the loop reaches the ordered construct, any other thread that encounters the ordered construct in its iteration will wait at the start of the ordered region until all ordered regions from previous iterations have been executed. To order the entire loop, an ordered clause can be used.
Listing 2 shows the use of a block-ordered directive on a reduction. A similar effect can be achieved while using an ordered clause. Listing 3 shows how to use an ordered clause inside a target region that executes on a device or GPU.
When comparing the results of listing 2 with a reduction without the ordered directive, using gcc 12.2.0 on CPU, we get the results shown in Table III; the ordered version produces deterministic results.
TABLE III: Results of the normal and ordered reductions using OpenMP on CPU
| Trial | Normal Reduction | Ordered Reduction |
| --- | --- | --- |
| 1 | 2.3542548638889723e-07 | 2.3542548638889725e-07 |
| 2 | 2.3542548638889731e-07 | 2.3542548638889725e-07 |
| 3 | 2.3542548638889725e-07 | 2.3542548638889725e-07 |
| 4 | 2.3542548638889723e-07 | 2.3542548638889725e-07 |
| 5 | 2.3542548638889717e-07 | 2.3542548638889725e-07 |
| 6 | 2.3542548638889725e-07 | 2.3542548638889725e-07 |
| 7 | 2.3542548638889728e-07 | 2.3542548638889725e-07 |
| 8 | 2.3542548638889728e-07 | 2.3542548638889725e-07 |
| 9 | 2.3542548638889731e-07 | 2.3542548638889725e-07 |
| 10 | 2.3542548638889728e-07 | 2.3542548638889725e-07 |
We achieve this determinism by taking advantage of OpenMP static scheduling and the ordered clause and directive. However, the ordered directive is not available for the teams distribute directive. This limitation is important for reductions that execute on a device, as enforcing an ordering of iterations across teams is not possible; a user-defined or lower-level reduction implementation, as previously described, will be needed.
III-C Statistical properties of the variability of non-deterministic parallel sums using CUDA or HIP on different GPUs
It is often assumed that the variability due to FPNA can be described as Gaussian noise, but we could not find evidence supporting this in the literature. We therefore performed numerical experiments to estimate the probability density function (PDF) of .
We generated a set of 100 arrays of one million FP64 elements each, taken from the uniform distribution, applied SPA 10000 times to each array, and evaluated the variability with respect to SPTR as the deterministic reference, returning one million values of the variability. The deterministic implementation thus refers here to SPTR while the non-deterministic one refers to SPA. We then repeated the experiment, replacing the uniform distribution with a normal distribution of zero mean and standard deviation 1.
The resulting PDFs are shown in Fig. 1 for both distributions for the V100; for the Mi250X and GH200 GPUs the results can be found in the GitHub repository. The
means and standard deviations of the variability are different between the GPU types, while the shapes are similar. Using Kullback–Leibler divergence criterion (KL) analysis, we find that all PDFs for SPA do converge towards a normal distribution whose mean and standard deviation depend on the input distribution and the GPU family. However, when we repeated the experiments replacing SPA with AO on the NVIDIA GPUs, using a sample size of 500000 sums (results shown in Fig. 2 for the V100), the distribution is found not to be normal, showing that the assumption of Gaussian noise is invalid in general. The reasons for this distribution are unclear, as the NVIDIA runtime scheduler details are proprietary.
We also calculated the dependence of the variability on the array size for the same sequences used for Fig. 1. The data can be fit with a power law in the array size, and the fitted exponent differs between the two input distributions, showing that the range of the numbers also plays a role.
III-D Performance comparisons
To illustrate the performance impact of these different strategies, we measured the time needed to compute 100 sums of 4194304 FP64 elements taken from the uniform distribution, using SPTR, TPRC, CU, AO and SPA. The only non-deterministic algorithms in these tests are AO and SPA. We compare the performance of all implementations to the fastest one by computing the relative deviation of each timing from the minimum of all timings. The other parameters controlling the algorithm are the thread block size and the number of thread blocks. Details on the choice of these settings are provided in the associated repository.
The main results are summarized on Tab. IV. We find that AO is 2 orders of magnitude slower than the fastest implementations. The fastest sum implementation depends on the GPU version. The non-deterministic SPA seems to be favored on NVIDIA GPU, followed by SPTR and CU. The difference in timings between SPA and SPTR is less than 0.2% for the V100 GPU but can reach up to 7.8% on the GH200. CU suffers a 4.5% penalty on GH200. Most implementations on V100 are within 0.5% to 1% of each other.
The performance penalty on GH200 is more spread than on V100.
TPRC is the fastest implementation on the Mi250X GPU followed by CU.
TABLE IV: Timing and performance penalty of parallel sum implementations on different GPUs for 100 sums of 4194304 FP64 numbers and varying kernel parameters. Timings averaged over 10 consecutive runs; non-deterministic algorithms indicated in red. Standard deviations in parentheses.
| GPU | implementation | () | time for 100 sums (in ms) | (in %) |
| --- | --- | --- | --- | --- |
| | SPA | | | |
| | SPTR | | | |
| V100 | TPRC | | | |
| | CU | (unknown) | | |
| | AO | (fixed parameters) | | |
| | SPA | | | |
| | CU | (unknown) | | |
| | TPRC | | | |
| GH200 | SPTR | | | |
| | AO | (fixed parameters) | | |
| | TPRC | | | |
| | CU | (unknown) | | |
| Mi250X | SPA | | | |
| | SPTR | | | |
Our results show that the performance of a deterministic implementation can be faster or
only slightly slower than its non-deterministic counterpart, depending on the GPU type. While timings can also change depending on the GPU load,
these results suggest that there is no reason to calculate a parallel sum using nondeterministic atomicAdd operations, as the performance benefit is marginal at best.
IV Non-determinism in PyTorch Functions
As discussed in the introduction, FPNA can lead to large variations in identical training and inference pipelines for deep learning, due to compounding effects within training, along with the impact of nonlinear activation functions. Analogously to the above section, we explore FPNA-induced variability for the PyTorch operations that are documented to have non-deterministic behavior .
PyTorch makes it easy for end users to construct neural networks out of high-level modules and deploy software on devices such as GPUs without the need for direct GPU programming. GPU kernels are provided by vendor libraries such as NVIDIA’s cuDNN and AMD’s MIOpen, and sometimes by PyTorch developers. As demonstrated in the previous section, any kernels PyTorch uses that rely on atomic operations will not be deterministic, leading to output variability. Furthermore, to be hardware agnostic and computationally efficient there are cases where strategies have been developed to determine the optimal computational kernel at runtime, also causing non-determinism.
Apart from the choice of kernel at runtime and atomic operations, there are several other ways in which PyTorch may induce variability. These include having an unset random number generator seed, having unset CUDA environment variables, uninitialized GPU memory and communication between devices. To focus exclusively on variability emergent from the first two sources, we use a single GPU and remove those other sources of variability.
We control whether or not non-deterministic kernels are available through the environment variable that PyTorch provides . We note that this documentation may not necessarily be completely accurate across all systems and software versions, as we received a runtime error when trying to obtain a deterministic result for scatter_reduce, suggesting the potential difficulties high level users may experience when trying to control determinism within this deep software stack.
We performed experiments in two ways, depending on whether or not a deterministic kernel exists. If there is a deterministic option, then the reference output array is fixed by that implementation, and we compare it with the output tensors from repeated runs of the non-deterministic implementation. If there is no deterministic kernel, we choose the first non-deterministic invocation as the reference.
For each experiment we also present results for measurements of kernel runtime for non-deterministic implementations on the GPU, and deterministic implementations on the GPU and LPU architectures, to expose the performance costs of using deterministic operations. Runtime measurements are only for the execution time of the kernel, excluding data transfers to/from the device. For the GPU we make many measurements to report the average and standard deviation of the execution time. On the LPU architecture the runtime for the PyTorch function is reported as a fixed number since the cycle-by-cycle execution is determined ahead of time .
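As an illustration of this kind of kernel-only timing (a sketch, not the authors' measurement harness; the timed operation and tensor shapes are placeholders), CUDA events can be used to exclude host-side overhead and transfers:

import torch

def time_kernel(fn, warmup=10, iters=100):
    # time only the kernel execution on the GPU using CUDA events
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(iters):
        start.record()
        fn()
        end.record()
        torch.cuda.synchronize()
        times.append(start.elapsed_time(end))   # milliseconds
    t = torch.tensor(times)
    return t.mean().item(), t.std().item()

# example: time a placeholder index_add_ call on pre-allocated device tensors
src = torch.randn(1_000_000, device="cuda")
idx = torch.randint(0, 1_000, (1_000_000,), device="cuda")
out = torch.zeros(1_000, device="cuda")
print(time_kernel(lambda: out.index_add_(0, idx, src)))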
We performed a sweep over operation hyperparameters using 10000 runs on an H100 and observed only a handful of the functions specified in the PyTorch documentation to exhibit non-determinism. We list these operations in Table V alongside the minimum and maximum variability observed. We refer the reader to our repository for in-depth details on the full hyperparameter sweep, where, for example, we consider kernel size, stride and padding when testing convolution operations.
Our test results indicated that the main factor in ability to observe output variability in these kernels is the size of the input tensor, which gives more chances to observe the switching of order at runtime. For functions which perform reductions on an input tensor, we found that the ratio of the output dimension to input dimension is another key factor. We explore this further, restricting our analysis to scatter_reduce and index_add, which have such reductions.
TABLE V: Max and min variability for non-deterministic pytorch operations over all hyperparameters tested.
| Operation | min | max |
| --- | --- | --- |
| ConvTranspose1d | 1.52 | 2.36 |
| ConvTranspose2d | 1.29 | 4.52 |
| ConvTranspose3d | 0 | 1.34 |
| cumsum | 0 | 0.52 |
| index_add | 0 | 5.03 |
| index_copy | 0.36 | 2.08 |
| index_put | 0 | 1.68 |
| scatter | 0 | 3.82 |
| scatter_reduce | 0 | 3.35 |
IV-A Case studies of PyTorch kernels
The scatter_reduce function updates an output array dst by applying a reduction operation on values from an input array src, according to indices specified in an index array index; for example, when reducing a one-dimensional array with a summation reduction, dst[index[i]] += src[i]. This can be generalized to arrays with arbitrary dimensions and different reduction functions. The index_add operation updates the output array dst by adding values from an input array src according to the indices specified in an index array index. As an example, for two-dimensional input and output arrays with a summation reduction over the first dimension, the elements of dst are updated as
dst[index[i], j] += src[i, j]. This can similarly be generalized to arbitrary dimensions and different reductions other than addition. Both of these operations reduce the size of an input tensor, so we define the reduction ratio r as the ratio of the output dimension size to the source dimension size. When r = 1, the arrays have the same size, and smaller values of r correspond to larger reductions of the source array, down to at most a single value along the axis of reduction. For the index tensors of both operations, we use random integers drawn from a uniform distribution to choose arbitrary values from the source tensor. The motivation for this is to mimic an arbitrary graph structure, where the reduction is happening over nodes that share an edge, although in this case we do not a priori expect any specific structure or hierarchy to be emergent.
Fig. 3 shows heat maps of as a function of and input dimension for the scatter_reduce and index_add operations with summation reductions. There is a clear trend of increased variability as a function of each of these parameters. In the case of larger input dimension, there are more opportunities for runtime non-determinism to occur. For small reduction ratios there is less variation. We suspect this is due to the fact that our index tensor is random, meaning that elements being reduced will not be located locally in memory. For both operations we observe near one, meaning that most runs have a unique output. This is the worst case for reproducibility and error debugging.
Figures 4 and 5 show the two variability metrics as a function of reduction ratio. For the scatter_reduce operation we use a one-dimensional array with 2,000 elements, and for index_add we use a two-dimensional square array with 100 × 100 elements. These array sizes are selected to lie in an interesting regime of reduction-ratio behavior, as observed from the heat maps. We notice a fairly constant variability between 0.005 and 0.01 for scatter_reduce with both the sum and mean reductions. The variability for this operation at a reduction ratio of 1.0 has a value around 0.10 (not plotted), which is a significant jump; we notice similar behavior for the second metric for this operation. Note that for the case where the output array has only one element, we recover the previous problem statement of computing the sum of an array, covered in Section III. For index_add, the variability increases almost linearly with reduction ratio, but the errors are inconsistently sized across reduction ratios, indicating different behavior at each reduction ratio. We also notice an approximately linear trend between the second metric and the reduction ratio, with a particularly large error bar at one of the reduction ratios. The lack of a trend in the errors on the variability requires further analysis.
TABLE VI: Average kernel runtime for scatter_reduce and index_add kernels on the H100 and Groq using deterministic (D) and non-deterministic (ND) implementations. Standard deviations in parentheses.
| Operation | Implementation | H100 | Groq |
| --- | --- | --- | --- |
| scatter_reduce | D | N/A | |
| (sum) | ND | 30.2(1.4) | N/A |
| scatter_reduce | D | N/A | 28.9 |
| (mean) | ND | 74.9(1.4) | N/A |
| index_add | D | 161(4) | 12.0 |
| | ND | 12.8(26) | N/A |
We now investigate the performance effects of deterministic and non-deterministic operations on the H100 and LPU architectures. Up until now, the LPU architecture has been left out of the discussion of variability because its hardware operates deterministically. In Table VI we report the average kernel runtime for scatter_reduce with input dimension 1,000 and index_add with input dimension 1,000 × 1,000, each at a fixed reduction ratio, on both the H100 and LPU architectures. For index_add we found the non-deterministic implementation on the H100 to be faster than the deterministic one. This trend is not observed across all the operations in the documented list, however. For scatter_reduce and index_add the LPU architecture, which is deterministic by default, is faster than all GPU implementations.
Runtimes for the complete set of functions in the documented list are given in the supplemental repository.
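A minimal sketch of how such kernel timings can be gathered on the GPU is shown below; it uses CUDA events so that host-device transfers are excluded, and toggles torch.use_deterministic_algorithms to switch between implementations. The tensor sizes and iteration counts are illustrative, not the benchmark configuration used for Table VI.

```python
import torch

def time_kernel(fn, n_iters: int = 100, warmup: int = 10) -> float:
    """Average GPU kernel time in milliseconds, excluding host<->device transfers."""
    for _ in range(warmup):
        fn()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(n_iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / n_iters

out = torch.zeros(200, device="cuda")
index = torch.randint(0, 200, (100_000,), device="cuda")
src = torch.randn(100_000, device="cuda")

torch.use_deterministic_algorithms(False)
t_nd = time_kernel(lambda: out.clone().index_add_(0, index, src))
torch.use_deterministic_algorithms(True)
t_d = time_kernel(lambda: out.clone().index_add_(0, index, src))
print(f"non-deterministic: {t_nd:.3f} ms, deterministic: {t_d:.3f} ms")
```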
V Effect of non-determinism on full deep learning workflows
Non-determinism in PyTorch kernels can adversely affect higher-level tasks in model training and inference, as found in real-world applications.
To highlight this, we focus on training and inference for a Graph Sample and Aggregate (GraphSAGE) neural network.
V-A GraphSAGE Convolution Network
Graph Neural Networks (GNNs) are extensively used for analyzing graph-structured data, such as social networks and molecular structures. GNNs operate on the principle of message passing, where each node in the graph aggregates information from its neighbors to update its own representation. The aggregation of information in software is often implemented via scatter and gather operations, which we have shown above to have severe runtime variability. A layer in a GNN is defined by
$$h_v^{(l+1)} = U\left(h_v^{(l)},\, A\left(\{\, h_u^{(l)} : u \in \mathcal{N}(v) \,\}\right)\right),$$
where $h_v^{(l)}$ is the representation of node $v$ at layer $l$, $\mathcal{N}(v)$ denotes the set of neighbors of node $v$, and
U and A are functions defining the update and aggregation steps, respectively. GraphSAGE is a popular graph convolution that uses
functions such as the sum and mean for the aggregation. Implementations often use the scatter_reduce and index_add operations discussed above, for example in the PyTorch Geometric library.
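A minimal two-layer GraphSAGE model of the kind described above can be sketched with the PyTorch Geometric SAGEConv layer as follows; the hidden dimension and dropout rate are illustrative choices, not necessarily those used in our experiments.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class GraphSAGE(torch.nn.Module):
    """Two-layer GraphSAGE network; the neighborhood aggregation inside
    SAGEConv is implemented with scatter/index_add-style reductions."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim, aggr="mean")
        self.conv2 = SAGEConv(hidden_dim, num_classes, aggr="mean")

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)
```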
V-B Results
We trained a GNN with two SAGEConv layers on the Cora dataset. This dataset is widely used for GNN research and benchmarks; it consists of 2,708 scientific publications classified into one of seven classes. The graph structure is created with 5,429 links representing citations between these publications. Each publication is described by a 1,433-dimensional feature vector.
Training of the SAGEConv model is performed on this dataset with a 10-epoch training run. The only source of non-determinism in our implementation of this DNN is the index_add operation.
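The deterministic and non-deterministic variants of such a training run can be selected with a single PyTorch switch. The sketch below reuses the GraphSAGE class sketched earlier and uses illustrative hyperparameters; it is not the exact training script from our repository.

```python
import os
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")  # required by deterministic cuBLAS kernels

import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid

def train_once(deterministic: bool, epochs: int = 10) -> torch.Tensor:
    """Train the two-layer GraphSAGE module sketched above on Cora and
    return the flattened model weights."""
    torch.manual_seed(0)                               # identical initialization for every run
    torch.use_deterministic_algorithms(deterministic)  # switch kernel implementations
    data = Planetoid(root="data", name="Cora")[0].to("cuda")
    model = GraphSAGE(data.num_features, 16, 7).to("cuda")  # model class from the sketch above
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        opt.step()
    return torch.cat([p.detach().flatten() for p in model.parameters()])
```

Calling train_once(False) repeatedly on a GPU generally yields bitwise-different weight vectors despite the fixed seed, whereas train_once(True) yields identical ones.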
We studied the variability of the model weights over the 10 epochs and found that the mean increased from epoch 1 to 10, and the standard deviation also increased: 1.414(0.05) at epoch 1 versus 1.418(0.23) at epoch 10.
This may indicate that, as expected, non-determinism results in a compounding increase in variability with more kernel calls. At the end of the training loop, all 1,000 models had a unique set of model weights. Despite this variability, all models converge to similar loss values. Unlike stochastic training that uses random number generators for initialization, the resulting 1,000 models are completely non-reproducible, even for a single user on a single machine.
TABLE VII: Variability metrics for different combinations of deterministic (D) and non-deterministic (ND) training and inference on GPUs. Standard deviations in parentheses.
| Training | Inference | | |
| --- | --- | --- | --- |
| D | D | 0(0) | 0(0) |
| | ND | | |
| ND | D | | |
| | ND | | |
We also measured the different combinations of training and inference stages cumulatively, by creating 1,000 models under four conditions: deterministic training with deterministic inference, deterministic training with non-deterministic inference, non-deterministic training with deterministic inference, and non-deterministic training with non-deterministic inference. The variability for each of these experiments is presented in Table VII. As expected, non-deterministic training with non-deterministic inference shows the most severe variations, and while training appears to contribute most of the variability, inference contributes a non-negligible amount. The runtime for the GraphSAGE model training for the full 10 epochs is seconds with a deterministic operation and seconds with a non-deterministic one. The inference runtimes for a single input are given in Table VIII; the deterministic implementation is slower than the non-deterministic one on the GPU. Inference on the LPU accelerator is 30 times faster than the fastest PyTorch implementation, consistent with previously published results.
TABLE VIII: H100 and Groq kernel runtime for deterministic and non-deterministic inference of the GraphSAGE model.
| Inference | H100 (ms) | Groq (ms) |
| --- | --- | --- |
| Deterministic | | |
| Non Deterministic | | N/A |
VI Conclusions
The analyses presented here highlight the variability arising from non-deterministic kernels in basic parallel reductions and PyTorch functions, and the resulting adverse effects on a full DL training and inference pipeline. The variability from non-deterministic reductions can approach the tolerance thresholds used in high-accuracy molecular simulation correctness tests, and for DL it will produce a unique, non-reproducible set of model weights with each run on identical inputs. These effects were pronounced for the scatter_reduce and index_add operations, which are used extensively in GNNs; GNNs in turn have become a key tool for DL models used in molecular simulation. In terms of productivity and performance, using deterministic programming patterns may require more effort or experience, especially to maintain performance, and may not be attainable for all algorithms. Within large DL framework stacks, providing deterministic solutions is left to the developers. We found that documentation on which functions in PyTorch are deterministic may be erroneous or incomplete, highlighting the large burden placed on end users in the development of reproducible DL workflows. The benefits of deterministic chip designs, therefore, may extend beyond performance improvements to facilitating and supporting reproducibility, increasing productivity and correctness. The non-determinism studied here results from asynchronous atomic operations on GPUs, but in HPC and distributed settings there will also be inter-chip and inter-node communication, such as with MPI, leading to further runtime variation. On the LPU architecture, inter-chip communication can be software scheduled, removing such communication variations. Future work should explore effects such as these in addition to the ones presented here. In addition, to mitigate the non-reproducibility produced by the training portions of a DL workflow, deterministic hardware for training could be proposed; emerging DL chip designs may support such solutions and can be evaluated.
VII Acknowledgements
This work was supported in part by the ORNL AI LDRD Initiative and in part by the Swiss Platform for Advanced Scientific Computing (PASC), and used resources of the OLCF, a DOE Office of Science User Facility [DE-AC05-00OR22725], and of the Swiss National Supercomputing Centre (CSCS).
References
C. Lin et al., Principles of Parallel Programming. Pearson Education India, 2008.
W. Ahrens, J. Demmel, and H. D. Nguyen, “Algorithms for efficient reproducible
floating point summation,” ACM Trans. Math. Softw., vol. 46, p. 22,
2020.
O. Villa, D. Chavarria-Miranda, V. Gurumoorthi, A. Márquez, and
S. Krishnamoorthy, “Effects of floating-point non-associativity on numerical
computations on massively multithreaded systems,” in Proceedings of
Cray User Group Meeting (CUG), vol. 3, 2009.
M. Thavappiragasam, W. Elwasif, and A. Sedova, “Portability for
GPU-accelerated molecular docking applications for cloud and HPC: can
portable compiler directives provide performance across all platforms?” in
2022 22nd IEEE International Symposium on Cluster, Cloud and Internet
Computing (CCGrid), 2022, pp. 975–984.
T. D. Kühne, M. Iannuzzi, M. Del Ben, V. V. Rybkin, P. Seewald, F. Stein,
T. Laino, R. Z. Khaliullin, O. Schütt, F. Schiffmann, D. Golze, J. Wilhelm,
S. Chulkov, M. H. Bani-Hashemian, V. Weber, U. Borštnik, M. Taillefumier,
A. S. Jakobovits, A. Lazzaro, H. Pabst, T. Müller, R. Schade, M. Guidon,
S. Andermatt, N. Holmberg, G. K. Schenter, A. Hehn, A. Bussy, F. Belleflamme,
G. Tabacchi, A. Glöß, M. Lass, I. Bethune, C. J. Mundy, C. Plessl,
M. Watkins, J. VandeVondele, M. Krack, and J. Hutter, “CP2K: An electronic
structure and molecular dynamics software package - Quickstep: Efficient and
accurate electronic structure calculations,” The Journal of Chemical
Physics, vol. 152, no. 19, p. 194103, 05 2020. [Online]. Available:
R. Salomon-Ferrer, D. A. Case, and R. C. Walker, “An overview of the Amber
biomolecular simulation package,” Wiley Interdisciplinary Reviews:
Computational Molecular Science, vol. 3, no. 2, pp. 198–210, 2013.
S. LeGrand, A. Scheinberg, A. F. Tillack, M. Thavappiragasam, J. V. Vermaas,
R. Agarwal, J. Larkin, D. Poole, D. Santos-Martins, L. Solis-Vasquez
et al., “GPU-accelerated drug discovery with docking on the summit
supercomputer: Porting, optimization, and application to COVID-19
research,” in Proceedings of the 11th ACM international conference on
bioinformatics, computational biology and health informatics, 2020, pp.
1–10.
D. Riach, “Framework reproducibility: Determinism (d9m),”
accessed: 2024-8-07.
P. Nagarajan, G. Warnell, and P. Stone, “The impact of nondeterminism on
reproducibility in deep reinforcement learning,” in 2nd
Reproducibility in Machine Learning Workshop at ICML 2018, 2018, pp. 1–10.
C. Summers and M. J. Dinneen, “Nondeterminism and instability in neural
network optimization,” in International Conference on Machine
Learning. PMLR, 2021, pp. 9913–9922.
L. Heumos, P. Ehmele, L. Kuhn Cuellar, K. Menden, E. Miller, S. Lemke,
G. Gabernet, and S. Nahnsen, “mlf-core: a framework for deterministic
machine learning,” Bioinformatics, vol. 39, no. 4, p. btad164, 2023.
J. Chen, Y. Liang, Q. Shen, J. Jiang, and S. Li, “Toward understanding deep
learning framework bugs,” ACM Transactions on Software Engineering and
Methodology, vol. 32, no. 6, pp. 1–31, 2023.
L. Jia, H. Zhong, X. Wang, L. Huang, and X. Lu, “An empirical study on bugs
inside tensorflow,” in Database Systems for Advanced Applications,
Y. Nah, B. Cui, S.-W. Lee, J. X. Yu, Y.-S. Moon, and S. E. Whang, Eds. Cham: Springer International Publishing,
2020, pp. 604–620.
F. Tambon, A. Nikanjam, L. An, F. Khomh, and G. Antoniol, “Silent bugs in deep
learning frameworks: an empirical study of keras and tensorflow,”
Empirical Software Engineering, vol. 29, no. 1, p. 10, 2024.
S. Partee, M. Ellis, A. Rigazzi, A. E. Shao, S. Bachman, G. Marques, and
B. Robbins, “Using machine learning at scale in numerical simulations with
SmartSim: An application to ocean climate modeling,” Journal of
Computational Science, vol. 62, p. 101707, 2022.
M. Boyer, W. Brewer, D. Jude, and I. Dettwiller, “Scalable integration of
computational physics simulations with machine learning,” in 2022
IEEE/ACM International Workshop on Artificial Intelligence and Machine
Learning for Scientific Applications (AI4S). IEEE, 2022, pp. 44–49.
H. Wang, L. Zhang, J. Han, and E. Weinan, “DeePMD-kit: A deep learning
package for many-body potential energy representation and molecular
dynamics,” Computer Physics Communications, vol. 228, pp. 178–184,
2018.
W. Jia, H. Wang, M. Chen, D. Lu, L. Lin, R. Car, E. Weinan, and L. Zhang,
“Pushing the limit of molecular dynamics with ab initio accuracy to 100
million atoms with machine learning,” in SC20: International
conference for high performance computing, networking, storage and
analysis. IEEE, 2020, pp. 1–14.
A. Sedova, G. Sivaraman, M. Coletti, W. Elwasif, M. Smith, and O. Hernandez,
“Avoiding a reproducibility crisis in deep learning for surrogate
potentials: How massively parallel programming, millions of training steps,
and numerics combine to create non-determinism in models and what this means
for the simulated physics,” in APS March Meeting 2024, ser. Bulletin
of the American Physical Society, vol. March 4–8, 2024; Minneapolis &
Virtual. American Physical Society,
2024.
N. J. Higham, Accuracy and Stability of Numerical Algorithms. SIAM, 2002.
S. Batzner, A. Musaelian, L. Sun, M. Geiger, J. P. Mailoa, M. Kornbluth,
N. Molinari, T. E. Smidt, and B. Kozinsky, “E (3)-equivariant graph neural
networks for data-efficient and accurate interatomic potentials,”
Nature communications, vol. 13, no. 1, p. 2453, 2022.
D. P. Kovács, I. Batatia, E. S. Arany, and G. Csányi, “Evaluation of
the mace force field architecture: From medicinal chemistry to materials
science,” The Journal of Chemical Physics, vol. 159, no. 4, 2023.
P. Blanchard, N. J. Higham, and T. Mary, “A class of fast and accurate
summation algorithms,” SIAM Journal on Scientific Computing, vol. 42,
pp. A1541–A1557, 2020.
S. Shanmugavelu, M. Taillefumier, C. Culver, O. Hernandez, M. Coletti, and
A. Sedova, “Correctness,”
2024.
OpenMP Architecture Review Board, OpenMP Application Programming
Interface Version 5.2, OpenMP Architecture Review Board, Nov. 2021.
[Online]. Available:
PyTorch, “PyTorch documentation for determinism,”
accessed: 2023-5-08.
D. Abts, J. Ross, J. Sparling, M. Wong-VanHaren, M. Baker, T. Hawkins, A. Bell,
J. Thompson, T. Kahsai, G. Kimmell, J. Hwang, R. Leslie-Hurd, M. Bye,
E. Creswick, M. Boyd, M. Venigalla, E. Laforge, J. Purdy, P. Kamath,
D. Maheshwari, M. Beidler, G. Rosseel, O. Ahmad, G. Gagarin, R. Czekalski,
A. Rane, S. Parmar, J. Werner, J. Sproch, A. Macias, and B. Kurtz, “Think
fast: A tensor streaming processor (tsp) for accelerating deep learning
workloads,” in 2020 ACM/IEEE 47th Annual International Symposium on
Computer Architecture (ISCA), 2020, pp. 145–158.
PyTorch, “PyTorch Geometric documentation,”
accessed:
2023-5-08.
R. Hosseini et al., “Exploring the use of dataflow architectures for
graph neural network workloads,” in High Performance Computing. ISC
High Performance 2023, ser. Lecture Notes in Computer Science, A. Bienz,
M. Weiland, M. Baboulin, and C. Kruse, Eds., vol. 13999. Springer, Cham, 2023.
D. Abts, G. Kimmell, A. Ling, J. Kim, M. Boyd, A. Bitar, S. Parmar, I. Ahmed,
R. DiCecco, D. Han, J. Thompson, M. Bye, J. Hwang, J. Fowers, P. Lillian,
A. Murthy, E. Mehtabuddin, C. Tekur, T. Sohmers, K. Kang, S. Maresh, and
J. Ross, “A software-defined tensor streaming multiprocessor for large-scale
machine learning,” in Proceedings of the 49th Annual International
Symposium on Computer Architecture, ser. ISCA ’22. New York, NY, USA: Association for Computing Machinery,
2022, p. 567–580. [Online]. Available:
Artifact Description
All source code and inputs for all tests and programs reported in this paper can be found in our dedicated GitHub repository.
The main directories codes/test_reduce and codes/test-suite contain the codes used in Sections III and IV of the main text. Repository organization, installation, usage, and contributing instructions are documented in the README.
VII-A Experimental Methods
In the following we describe the hardware and systems software used to perform our numerical analyses, as well as the mathematical definitions developed to quantify output variability caused by non-determinism due to FPNA in parallel kernels, studied both as isolated test cases and within high-level operations of the PyTorch deep learning framework.
Tests on the GH200 are run on the Alps supercomputing system at CSCS, running SLE 15 (enterprise). Each node has four GH200 superchips, each with 72 ARM cores, 128 GB of LPDDR5 memory, and one H100 GPU with 96 GB of HBM3 memory.
Tests for the V100 are run on the Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), running Red Hat OS 8. Summit is an IBM system; each IBM Power System AC922 node has two Power9 CPUs with 512 GB of memory and six NVIDIA V100 GPUs with 16 GB of HBM2 memory each.
Tests on the MI250X AMD GPU are obtained on the Frontier supercomputer at OLCF, running SLE 15 (enterprise). Frontier is an HPE Cray EX supercomputer; each Frontier compute node has a 64-core AMD "Optimized 3rd Gen EPYC" CPU with 512 GB of DDR4 memory and four AMD MI250X GPUs, each with two Graphics Compute Dies (GCDs), for a total of 8 GCDs per node.
Tests on the NVIDIA H100 are performed on a Groq host node in Groq ZankerLab running Ubuntu LTS 22.04.06. This machine has two H100 GPUs with 40 GB of HBM3 memory each and an AMD EPYC 7302 16-core host CPU with 2 sockets, 16 cores per socket, and 2 threads per core. Tests on the LPU accelerator are run on the server r01-gn-01 in Groq ZankerLab with SDK 0.11 and Ubuntu LTS 22.04. A GroqNode has 8 Groq LPUs, fully connected with 88 RealScale™ chip-to-chip connectors, and 2 AMD EPYC 7313 processors (3 GHz, 16C/32T, 155 W TDP each).
|
97
|
IEEE Standard 754 Floating Point Numbers - GeeksforGeeks
===============
IEEE Standard 754 Floating Point Numbers
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations of the time that made them difficult to use reliably and reduced their portability. IEEE 754 floating point is the most common representation today for real numbers on computers, including Intel-based PCs, Macs, and most Unix platforms. There are several ways to represent floating-point numbers, but IEEE 754 is the most common. IEEE 754 has 3 basic components:
1. The Sign of the Mantissa - This is as simple as the name: 0 represents a positive number while 1 represents a negative number.
2. The Biased Exponent - The exponent field needs to represent both positive and negative exponents. A bias is added to the actual exponent in order to get the stored exponent.
3. The Normalised Mantissa - The mantissa is the part of a number in scientific notation or a floating-point number consisting of its significant digits. Here we have only two digits, i.e. 0 and 1, so a normalised mantissa is one with exactly one 1 to the left of the binary point.
IEEE 754 numbers are divided into two based on the above three components: single precision and double precision.
| TYPES | SIGN | BIASED EXPONENT | NORMALISED MANTISSA | BIAS |
| --- | --- | --- | --- | --- |
| Single precision | 1(31st bit) | 8(30-23) | 23(22-0) | 127 |
| Double precision | 1(63rd bit) | 11(62-52) | 52(51-0) | 1023 |
Example: 85.125
85 = 1010101
0.125 = 001
85.125 = 1010101.001
=1.010101001 x 2^6
sign = 0
Single precision:
biased exponent 127+6=133
133 = 10000101
Normalised mantissa = 010101001
we will add 0's to complete the 23 bits
The IEEE 754 Single precision is:
= 0 10000101 01010100100000000000000
This can be written in hexadecimal form as 42AA4000.
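This encoding can be checked programmatically; for example, in Python the struct module packs a float into its IEEE 754 byte representation (">f" is big-endian single precision).

```python
import struct

# Big-endian IEEE 754 single-precision encoding of 85.125
print(struct.pack(">f", 85.125).hex())  # 42aa4000
# struct.pack(">d", 85.125) gives the double-precision encoding in the same way
```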
Double precision:
biased exponent 1023+6=1029
1029 = 10000000101
Normalised mantissa = 010101001
we will add 0's to complete the 52 bits
The IEEE 754 Double precision is:
= 0 10000000101 0101010010000000000000000000000000000000000000000000
This can be written in hexadecimal form as 4055480000000000.
Special Values: IEEE has reserved some values that can cause ambiguity.
Zero - Zero is a special value denoted with an exponent and mantissa of 0. -0 and +0 are distinct values, though they both are equal.
Denormalised - If the exponent is all zeros, but the mantissa is not then the value is a denormalized number. This means this number does not have an assumed leading one before the binary point.
Infinity - The values +infinity and -infinity are denoted with an exponent of all ones and a mantissa of all zeros. The sign bit distinguishes between negative infinity and positive infinity. Operations with infinite values are well defined in IEEE.
Not A Number (NaN) - The value NaN is used to represent a value that is an error. It is represented when the exponent field is all ones and the mantissa is not all zeros. This is a special value that might be used to denote a variable that does not yet hold a valid value.
| EXPONENT | MANTISSA | VALUE |
| --- | --- | --- |
| 0 | 0 | exact 0 |
| 255 | 0 | Infinity |
| 0 | not 0 | denormalised |
| 255 | not 0 | Not a number (NAN) |
Similarly for double precision (just replace 255 by 2047). Ranges of floating-point numbers:
| | Denormalized | Normalized | Approximate Decimal |
| --- | --- | --- | --- |
| Single Precision | ± 2^−149 to (1 − 2^−23)×2^−126 | ± 2^−126 to (2 − 2^−23)×2^127 | ± approximately 10^−44.85 to approximately 10^38.53 |
| Double Precision | ± 2^−1074 to (1 − 2^−52)×2^−1022 | ± 2^−1022 to (2 − 2^−52)×2^1023 | ± approximately 10^−323.3 to approximately 10^308.3 |
The range of positive floating-point numbers can be split into normalized numbers and denormalized numbers, which use only a portion of the fraction's precision. Since every floating-point number has a corresponding negated value, the ranges above are symmetric around zero. There are five distinct numerical ranges that single-precision floating-point numbers are not able to represent with the scheme presented so far:
1. Negative numbers less than −(2 − 2^−23) × 2^127 (negative overflow)
2. Negative numbers greater than −2^−149 (negative underflow)
3. Zero
4. Positive numbers less than 2^−149 (positive underflow)
5. Positive numbers greater than (2 − 2^−23) × 2^127 (positive overflow)
Overflow generally means that values have grown too large to be represented. Underflow is a less serious problem because it just denotes a loss of precision, which is guaranteed to be closely approximated by zero. The table of the total effective range of finite IEEE floating-point numbers is shown below:
| | Binary | Decimal |
| --- | --- | --- |
| Single | ± (2 − 2^−23) × 2^127 | approximately ± 10^38.53 |
| Double | ± (2 − 2^−52) × 2^1023 | approximately ± 10^308.25 |
Special Operations -
| Operation | Result |
| --- | --- |
| n ÷ ±Infinity | 0 |
| ±Infinity × ±Infinity | ±Infinity |
| ±nonZero ÷ ±0 | ±Infinity |
| ±finite × ±Infinity | ±Infinity |
| +Infinity + +Infinity, +Infinity − −Infinity | +Infinity |
| −Infinity − +Infinity, −Infinity + −Infinity | −Infinity |
| ±0 ÷ ±0 | NaN |
| ±Infinity ÷ ±Infinity | NaN |
| ±Infinity × 0 | NaN |
| NaN == NaN | False |
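Since Python floats follow IEEE 754 double precision, several of these special-value rules can be observed directly:

```python
inf, nan = float("inf"), float("nan")

print(1.0 / inf)   # 0.0    (n ÷ Infinity)
print(inf + inf)   # inf
print(inf - inf)   # nan    (Infinity − Infinity is undefined)
print(inf * 0.0)   # nan
print(nan == nan)  # False  (NaN never compares equal, even to itself)
```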
|
98
|
WHAT IS. . .
THE SHAPE OF A LATTICE?
ANDREAS WIESER
Abstract. These are the notes for my talk given in the What is..?-seminar in Zurich on 25 October 2018.
In this talk we introduce the shape of a lattice (here a discrete subgroup of Euclidean space), which roughly captures the form of a fundamental parallelotope in it. We will particularly focus on primitive integral lattices and then address old and new questions surrounding such lattices and their shapes. The main (equidistribution) conjecture we discuss answers, amongst other things, the question whether or not the orientations of such lattices yield any information about their shapes and vice versa.
Let me begin by first explaining what a lattice is (for the purposes of this talk).
Definition 1.1. A lattice Λ ⊂Rn is a discrete subgroup.
One can prove that any lattice Λ is of the form Λ = Zv1 + . . . + Zvk for v1, . . . , vk ∈Rn. The minimal such number k is called the rank of Λ.
Figure 1. A lattice Λ of rank k = 2 viewed as a subset of the subspace ΛR spanned by Λ.
The volume of the drawn parallelogram is what one usually calls the covolume of the lattice. Another quantity which one can attach to a lattice is the discriminant, which is the determinant of the Gram matrix
$$\begin{pmatrix} \langle v_1, v_1\rangle & \cdots & \langle v_1, v_k\rangle \\ \vdots & \ddots & \vdots \\ \langle v_k, v_1\rangle & \cdots & \langle v_k, v_k\rangle \end{pmatrix},$$
or in other words just the square of the covolume.
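As a quick numerical illustration (this snippet is not part of the original notes, and the basis vectors are arbitrary), the Gram matrix, discriminant and covolume of a rank-2 lattice in R³ can be computed from a chosen basis:

```python
import numpy as np

# Basis vectors of a rank-2 lattice in R^3 (rows of V)
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 1.0]])

gram = V @ V.T                 # matrix of inner products <v_i, v_j>
disc = np.linalg.det(gram)     # discriminant
covol = np.sqrt(disc)          # covolume = area of a fundamental parallelotope
print(disc, covol)             # 9.0 3.0 (up to rounding)
```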
1.1. Parametrizing lattices of full rank. Let us for a moment now focus on lattices of full rank, i.e. with rank equal to n. By what we had before, any such lattice Λ can be written as Λ = gZn where g ∈GLn(R). Note that | det(g)| = covol(Λ).
We say that Λ is unimodular if it has covolume 1, or equivalently if Λ = gZn for g ∈ SLn(R). The space of unimodular lattices is defined as Xn = {Λ : Λ unimodular} ≃ SLn(R)/SLn(Z).
Dynamics and ergodic theory on this quotient have been very successful in encoding statements from number theory (e.g. from Diophantine approximation). The reason why one is able to prove many things using ergodic theory on this quotient is that SLn(Z) is a lattice in SLn(R), i.e. there is a finite SLn(R)-invariant measure on Xn.
The shape of a lattice of rank k will be an element of 𝒳_k = SO(k)\X_k, i.e. the space of unimodular rank-k lattices up to rotation.¹
To visualize 𝒳_2, note that SL_2(R)/SO(2) can be identified with the hyperbolic plane via Moebius transformations. Thus, 𝒳_2 is the hyperbolic plane folded up under the SL_2(Z)-action by Moebius transformations. A fundamental domain is the familiar one for SL_2(Z) acting on the upper half-plane (the region |Re(z)| ≤ 1/2, |z| ≥ 1).
1.2. Definition of the shape. Let Λ < Rn be a lattice of rank k.
We fix a rotation r ∈SO(n) with the property that r.ΛR = Rk × {0}n−k = Rk, that is, we rotate the k-dimensional subspace in which Λ lies to a fixed reference subspace that we simply call Rk. Since r.Λ < Rk is a lattice of full rank, we may stretch it evenly in all directions to obtain a unimodular lattice in Xk.
1In some cases, the shape will in fact be an element of O(k) \ PGLk(R) / PGLk(Z) but we will ignore this issue here.
Definition 1.2 (Shape). The shape of the lattice Λ is the point [Λ] = SO(k) · covol(Λ)^{−1/k} · rΛ ∈ SO(k)\X_k = 𝒳_k.
Taking the quotient by SO(k) ensures that there is no dependence on the choice of r in the definition of the shape.²
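For rank k = 2 one concrete way to represent the shape is as a point in the upper half-plane: after rotating v_1 to the positive x-axis and rescaling, the second basis vector becomes x + iy with x = ⟨v_1, v_2⟩/⟨v_1, v_1⟩ and y = covol(Λ)/⟨v_1, v_1⟩. A minimal numpy sketch (the helper name is ours and the representative depends on the chosen basis; to land in the usual SL_2(Z) fundamental domain one would first reduce the basis):

```python
import numpy as np

def shape_of_rank2_lattice(v1, v2):
    """Point in the upper half-plane representing the shape of Z v1 + Z v2 (hypothetical helper)."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    gram = np.array([[v1 @ v1, v1 @ v2],
                     [v2 @ v1, v2 @ v2]])
    covol = np.sqrt(np.linalg.det(gram))   # covolume of the lattice
    x = (v1 @ v2) / (v1 @ v1)              # component of v2 along v1, in units of |v1|
    y = covol / (v1 @ v1)                  # orthogonal component, same units
    return complex(x, y)                   # rotation- and scaling-invariant shape datum

# the square lattice and the hexagonal lattice, both sitting inside R^3
print(shape_of_rank2_lattice([1, 0, 0], [0, 1, 0]))               # 1j
print(shape_of_rank2_lattice([1, 0, 0], [0.5, np.sqrt(3)/2, 0]))  # 0.5 + ~0.866j
```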
1.3. Distribution of planes and shapes. Let us now fix a lattice of full rank and consider only rank k sublattices of that lattice. For concreteness we take the integer lattice Zn and call a lattice Λ < Rn integral if Λ ⊂Zn. Note that the discriminant of an integer lattice is always a positive integer.
Such an integral lattice Λ is primitive if it is not contained in any larger sublattice of Zn of the same rank. Equivalently, Λ is primitive if Λ = ΛR ∩Zn.
For any positive integer d we define the finite set R^{k,n}_d = {Λ : Λ a primitive integral lattice of rank k and discriminant d}.
One can now ask various questions of very different flavour for R^{k,n}_d. For instance: When is R^{k,n}_d non-empty?
To the author's knowledge there is no complete answer to this question. There are however some cases in which there is an answer: • R^{1,3}_d is non-empty if and only if d ≢ 0, 4, 7 mod 8. This is in essence Legendre's theorem on sums of three squares, proven in full by Gauss [Gau86] (checked numerically in the sketch after this list).
• R^{2,4}_d is non-empty if and only if d ≢ 0, 7, 12, 15 mod 16 – see for example [AEW19].
• R^{2,n}_d for n ≥ 5 is always non-empty (Mordell [Mor32]).
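The first bullet (the rank-1, n = 3 case) is easy to test by brute force: R^{1,3}_d is non-empty exactly when d has a primitive representation as a sum of three squares. A small, purely illustrative Python check of the congruence condition for the first few discriminants:

```python
from math import gcd, isqrt

def has_primitive_rep(d):
    """Is there a primitive vector (a, b, c) in Z^3 with a^2 + b^2 + c^2 = d?"""
    for a in range(isqrt(d) + 1):
        for b in range(isqrt(d - a * a) + 1):
            c2 = d - a * a - b * b
            c = isqrt(c2)
            if c * c == c2 and gcd(gcd(a, b), c) == 1:
                return True
    return False

for d in range(1, 41):
    predicted = d % 8 not in (0, 4, 7)      # the congruence condition above
    assert has_primitive_rep(d) == predicted
print("congruence condition matches brute force for d = 1, ..., 40")
```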
In general, such a question is strongly connected to Siegel's mass formula, which aims at counting representations of quadratic forms in few variables by forms in many variables. This also yields the question: If non-empty, how large is R^{k,n}_d?
Let us however not dwell on that and ask how these solutions (if you will) are distributed.
Conjecture 1.3 (Equidistribution of planes and shapes). Let n ≥ 3 and k ≤ n with n − k ≥ 2. If k ≥ 2, the set J^{k,n}_d = {(Λ_R, [Λ], [Λ^⊥ ∩ Z^n]) : Λ ∈ R^{k,n}_d} equidistributes, as d → ∞, to the uniform probability measure on Gr_{k,n}(R) × 𝒳_k × 𝒳_{n−k}. If k = 1, the analogous statement holds for the pairs (Λ_R, [Λ^⊥ ∩ Z^n]).
This means, for instance, that if you fix a nice measurable set A of half the volume in Gr_{2,4}(R), then in the limit you will still find all kinds of shapes among the lattices whose subspace lies in A. Conversely, one can fix an approximate shape the lattice should have and also an approximate shape its orthogonal complement should have, and one will always find, for large enough discriminants, a lattice with these given approximate shapes.
To the author's knowledge, the progress towards the conjecture is the following:
² Up to a slight issue with orientation; this is the same problem as the one mentioned in footnote 1.
• Maass [Maa56], [Maa59] in the 50's and W. Schmidt [Sch98] in the 90's: the pairs (Λ_R, [Λ]) equidistribute when Λ varies over the primitive integral lattices of rank k with discriminant ≤ d (!).
• Aka, Einsiedler, Shapira [AES16b], [AES16a]: k = 1 where for n = 3 additional congruence conditions on d need to be assumed.
• Aka, Einsiedler, W. [AEW19]: k = 2 and n = 4 also under additional congruence assumptions. Here, the result is in fact much stronger as it also considers 2 further natural shapes that one can attach to each lattice.
The remaining cases will be treated in an upcoming preprint by Menny Aka and the author (also under additional congruence conditions). It is worthwhile remarking that in all of the above cases the congruence conditions are an artefact of the dynamical proofs.
1.3.1. About the dynamical proofs and the congruence condition. The theorems in [AES16b], [AES16a] and [AEW19] each follow from an equidistribution result for orbits in a locally homogeneous product space³ Y_1 × Y_2 × Y_3 under the stabilizer subgroup H_L = {g ∈ SO_n : g.L = L} of the subspace L = span_R(Λ).
Over the reals, we have an action of a compact group H_L(R) ≃ SO_k(R) × SO_{n−k}(R) (up to finite index), which cannot yield any interesting dynamical behaviour. To avoid this, one considers instead the group of Q_p-points! If Q is a positive-definite rational quadratic form, the group SO_Q(R) is compact, but SO_Q(Q_p) might not be. In either of the works mentioned above, the imposed congruence condition asserts that for a given discriminant D satisfying this congruence condition, the stabilizer subgroup of any plane of this discriminant is isotropic. In order to obtain an action of H_L(Q_p) one passes to an extension.
References [AES16a] M. Aka, M. Einsiedler, and U. Shapira, Integer points on spheres and their orthogonal grids, J London Math Soc 93 (2016), no. 1, 143–158.
[AES16b] , Integer points on spheres and their orthogonal lattices, Invent. Math. 206 (2016), no. 2, 379–396.
[AEW19] M. Aka, M. Einsiedler, and A. Wieser, Planes in four space and four associated CM points, arXiv:1901.05833 (2019).
[Gau86] C. F. Gauss, Disquisitiones arithmeticae, Springer-Verlag, 1986, Translated and with a preface by Arthur A. Clarke, Revised by William C. Waterhouse, Cornelius Greither and A.W.Grootendorst and with a preface by Waterhouse.
[Maa56] H. Maass, Spherical functions and quadratic forms, J. Indian Math. Soc. 20 (1956), 117–162.
[Maa59] , ¨ Uber die Verteilung der zweidimensionalen Untergitter in einem euklidschen Gitter, Mathematische Annalen 137 (1959), 319–327.
[Mor32] L.J. Mordell, On the representations of a binary quadratic form as a sum of squares of linear forms, Mathematische Zeitschrift 35 (1932), no. 1, 1–15.
[Sch98] W. Schmidt, The distribution of sublattices of Zm, Monatsh. Math. 125 (1998), no. 1, 37–81.
³ More precisely, Y_1 is an S-arithmetic extension of SO(n) \ SO_n(Z) and Y_2, Y_3 are S-arithmetic extensions of SL_k(R) \ SL_k(Z) resp. SL_{n−k}(R) \ SL_{n−k}(Z).
MAT 533, SPRING 2021, Stony Brook University
REAL ANALYSIS II
FOLLAND'S REAL ANALYSIS: CHAPTER 9, ELEMENTS OF DISTRIBUTION THEORY
Christopher Bishop
Chapter 9: Elements of Distribution Theory
9.1 Distributions
9.2 Compactly Supported, Tempered and Periodic Distributions
9.3 Sobolev Spaces
Chapter 9.1: Distributions
C∞_c(E) = compactly supported smooth functions with support inside E.
If U is open in Rⁿ we define: (i) A sequence {φ_j} in C∞_c(U) converges in C∞_c to φ if {φ_j} ⊂ C∞_c(K) for some compact set K ⊂ U and φ_j → φ in the topology of C∞_c(K), that is, ∂^α φ_j → ∂^α φ uniformly for all α.
(ii) If X is a locally convex topological vector space and T : C∞ c (U) →X is a linear map, then T is continuous if for each compact K ⊂U, T restricted to C∞ c (K) is continuous, that is Tφj →Tφ whenever φj →φ in C∞ c (K) and K ⊂U is compact.
(iii) A linear map T : C∞ c (U) →C∞ c (U ′) is continuous if for each compact K ⊂U there is a compact K′ ⊂U ′ such that T(C∞ c (K)) ⊂C∞ c (K′) and T is continuous from C∞ c (K) to C∞ c (K′).
(iv) A distribution on U is a continuous linear functional on C∞ c (U). The space of all distributions on U is denoted by D′(U), and we set D′ = D′(Rn).
We impose the weak topology on D′(U), that is, the topology of pointwise convergence on C∞ c (U).
D is Schwartz's notation for C∞_c.
Examples of Distributions: • Every f ∈ L¹_loc, i.e., every function f on U that is integrable on every compact set, defines a distribution by φ → ∫ fφ.
• Every Radon measure µ on U defines a distribution by φ → ∫ φ dµ.
• If x₀ ∈ U and α is a multi-index, the map φ → ∂^α φ(x₀) is a distribution. This does not arise from a function; it arises from a measure precisely when α = 0, in which case it is the point mass at x₀.
Notation: If F ∈ D′(U) and φ ∈ C∞_c(U), the value of F at φ is denoted by ⟨F, φ⟩. This is linear in each variable; this conflicts with our earlier notation for inner products, but will cause no serious confusion.
Sometimes it is convenient to pretend that a distribution is a function when it really is not, and to write R F(x)φ(x)dx instead of ⟨F, φ⟩. This is the case especially when the explicit presence of the variable x is notationally helpful.
Notation: We shall use a tilde to denote the reflection of a function in the origin: φ̃(x) = φ(−x).
Notation: We denote the point mass at the origin by δ: ⟨δ, φ⟩ = φ(0).
The following is an important corollary of Theorem 8.14: Prop 9.1: Suppose that f ∈ L¹(Rⁿ) and ∫ f = a, and for t > 0 let f_t(x) = t^{−n} f(x/t). Then f_t → aδ in D′ as t → 0.
Proof. If φ ∈ C∞_c then by Theorem 8.14 we have ⟨f_t, φ⟩ = ∫ f_t φ = f_t ∗ φ̃(0) → a φ̃(0) = a φ(0) = a ⟨δ, φ⟩. □
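The convergence f_t → aδ can be seen numerically by pairing f_t against a fixed test function. A sketch (assuming scipy; the Gaussian f and the test function φ are our choices, and the substitution x = tu keeps the quadrature stable):

```python
import numpy as np
from scipy.integrate import quad

f   = lambda x: np.exp(-x**2) / np.sqrt(np.pi)   # integrates to 1, so a = 1 here
phi = lambda x: np.cos(x) * np.exp(-x**2)         # smooth, rapidly decaying test function

for t in (1.0, 0.1, 0.01):
    # int f_t(x) phi(x) dx = int f(u) phi(t u) du  (substitute x = t u)
    val, _ = quad(lambda u, t=t: f(u) * phi(t * u), -np.inf, np.inf)
    print(t, val)                                  # tends to a * phi(0) = 1 as t -> 0
```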
It does not make sense to say that two distributions agree at a point.
We say two distributions on U agree on an open subset V ⊂U if they agree on all functions in C∞ c (V ).
Prop 9.2: Let {Vα} be a collection of open subsets of U and let V = ∪Vα. If F, G ∈D′(U) agree on every Vα then they agree on V .
Proof. If φ ∈ C∞_c(V), then it has compact support, so this support is contained in a finite union V_{α_1} ∪ · · · ∪ V_{α_m}. Pick ψ_1, . . . , ψ_m ∈ C∞_c so that Σ ψ_j = 1 on supp(φ). (This is the C∞ analogue of Proposition 4.41.) Then ⟨F, φ⟩ = Σ ⟨F, ψ_j φ⟩ = Σ ⟨G, ψ_j φ⟩ = ⟨G, φ⟩.
□ According to Proposition 9.2, there is a maximal open subset V of U on which F agrees with the zero distribution. The complement in U of V is called the support of F.
There is a general procedure for extending various linear operations from functions to distributions.
Suppose that U and V are open sets in Rn, and T is a linear map from some subspace X ⊂L1 loc(U) into L1 loc(V ).
Suppose that there is another linear map T′ : C∞_c(V) → C∞_c(U) such that ∫ (Tf)φ = ∫ f (T′φ) for f ∈ X, φ ∈ C∞_c(V).
Then T can be extended to a map from D′(U) → D′(V) by ⟨TF, φ⟩ = ⟨F, T′φ⟩, F ∈ D′(U), φ ∈ C∞_c(V).
Examples: i. (Differentiation): Let Tf = ∂^α f, defined on C^{|α|}(U). If φ ∈ C∞_c(U), integration by parts gives ∫ (∂^α f)φ = (−1)^{|α|} ∫ f (∂^α φ).
(There are no boundary terms since φ has compact support.) Hence T′ is the restriction of (−1)^{|α|} T to C∞_c(U). We define the derivative of a distribution F ∈ D′(U) by ⟨∂^α F, φ⟩ = (−1)^{|α|} ⟨F, ∂^α φ⟩. In particular, we can define derivatives of any locally integrable function or any finite measure.
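The defining identity ⟨∂^α F, φ⟩ = (−1)^{|α|} ⟨F, ∂^α φ⟩ is just integration by parts when F is a function. A quick numerical sanity check in one variable (assuming scipy; we use the Schwartz function e^{−x²} in place of a compactly supported test function, which is enough for the boundary terms to vanish here):

```python
import numpy as np
from scipy.integrate import quad

f    = lambda x: x**3                     # plays the role of the locally integrable function
df   = lambda x: 3 * x**2
phi  = lambda x: np.exp(-x**2)            # stands in for a test function
dphi = lambda x: -2 * x * np.exp(-x**2)

lhs, _ = quad(lambda x: df(x) * phi(x), -np.inf, np.inf)    #  int f' phi
rhs, _ = quad(lambda x: -f(x) * dphi(x), -np.inf, np.inf)   # -int f  phi'
print(lhs, rhs)   # both ~ 2.6587 = (3/2) * sqrt(pi)
```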
ii. (Multiplication by smooth functions): Given ψ ∈ C∞(U), define Tf = ψf and let T′ be the restriction of T to C∞_c(U). Then for a distribution F ∈ D′(U) we define ψF by ⟨ψF, φ⟩ = ⟨F, ψφ⟩.
iii. (Translation): Given y ∈ Rⁿ, let V = U + y = {x + y : x ∈ U} and let T = τ_y. Since ∫ f(x − y)φ(x) dx = ∫ f(x)φ(x + y) dx, we have that T′ is the restriction of τ_{−y} to C∞_c(U + y). For a distribution F ∈ D′(U), we define the translated distribution τ_y F by ⟨τ_y F, φ⟩ = ⟨F, τ_{−y} φ⟩.
iv. (Composition with linear maps): Given an invertible linear transformation S on Rⁿ, let V = S^{−1}(U) and let Tf = f ∘ S. Then T′φ = (φ ∘ S^{−1}) / |det(S)|.
For a distribution F ∈ D′(U) we define F ∘ S by ⟨F ∘ S, φ⟩ = ⟨F, φ ∘ S^{−1}⟩ / |det(S)|.
v. (Convolution, First Method): Given ψ ∈C∞ c (U), let V = {x : x −y ∈U for y ∈supp(ψ)}.
(V is open but may be empty.) If f ∈ L¹_loc(U), the integral f ∗ ψ(x) = ∫ f(x − y)ψ(y) dy = ∫ f(y)ψ(x − y) dy = ∫ f · τ_x ψ̃ is well defined for all x ∈ V. The convolution F ∗ ψ is the function defined on V by F ∗ ψ(x) = ⟨F, τ_x ψ̃⟩.
This is a continuous function of x (actually C∞; see below).
If ψ ∈ C∞_c we have δ ∗ ψ(x) = ⟨δ, τ_x ψ̃⟩ = τ_x ψ̃(0) = ψ(x).
Thus δ is the multiplicative identity for convolution.
vi. (Convolution, Second Method): Let ψ, ψ̃, V be as above. If f ∈ L¹_loc(U) and φ ∈ C∞_c(V) then ∫ (f ∗ ψ)φ = ∫∫ f(y)ψ(x − y)φ(x) dy dx = ∫ f · (φ ∗ ψ̃).
Hence convolution with ψ, Tf = f∗ψ maps L1 loc(U) to L1 loc(V ). Let T ′φ = φ∗˜ ψ.
For a distribution F ∈D′(U) we can define convolution with ψ as a distribution in D′(V ) by ⟨F ∗ψ, φ⟩= ⟨F, φ ∗˜ ψ⟩.
We will show the definitions of convolution in (v) and (vi) agree.
Prop 9.3: Suppose U ⊂ Rⁿ is open and ψ ∈ C∞_c. Let V = {x : x − y ∈ U for y ∈ supp(ψ)}.
For F ∈ D′(U) and x ∈ V let F ∗ ψ(x) = ⟨F, τ_x ψ̃⟩.
Then (a) F ∗ψ ∈C∞(V ).
(b) ∂α(F ∗ψ) = (∂αF) ∗ψ = F ∗(∂αψ).
(c) For any φ ∈C∞ c (V ), R (F ∗ψ)φ = ⟨F, φ ∗˜ ψ⟩.
Proof. Let {e_1, . . . , e_n} be the standard basis of Rⁿ. Since V is open, if x ∈ V, then there is t₀ > 0 so that x + te_j ∈ V for |t| < t₀. Then (1/t)(τ_{x+te_j} ψ̃ − τ_x ψ̃) → τ_x (∂_j ψ)~ in C∞_c(U) as t → 0.
It follows that ∂j(F ∗ψ) exists and equals F ∗∂jψ(x). By induction we get F ∗ψ ∈C∞(V ) and ∂α(F ∗ψ) = F ∗(∂αψ). This proves (a).
Moreover, since ∂^α ψ̃ = (−1)^{|α|} (∂^α ψ)~ and ∂^α τ_x = τ_x ∂^α, we have (∂^α F) ∗ ψ(x) = ⟨∂^α F, τ_x ψ̃⟩ = (−1)^{|α|} ⟨F, ∂^α τ_x ψ̃⟩ = ⟨F, τ_x (∂^α ψ)~⟩ = F ∗ (∂^α ψ)(x).
This proves (b).
If φ ∈ C∞_c(V), then φ ∗ ψ̃(x) = ∫ φ(y)ψ(y − x) dy = ∫ φ(y) τ_y ψ̃(x) dy.
The integrand is continuous and supported in a compact subset of U, so it can be approximated by Riemann sums. More precisely, approximate supp(φ) by a union of cubes of side length 2−m centered at points ym 1 , . . . , ym k(m) in supp(φ).
The Riemann sums S_m = 2^{−nm} Σ_j φ(y^m_j) τ_{y^m_j} ψ̃ are supported in a common compact subset of U and converge uniformly to φ ∗ ψ̃ as m → ∞.
Likewise ∂^α S_m = 2^{−nm} Σ_j φ(y^m_j) τ_{y^m_j} ∂^α ψ̃ converges to φ ∗ ∂^α ψ̃ = ∂^α(φ ∗ ψ̃), so S_m → φ ∗ ψ̃ in C∞_c(U). Hence
⟨F, φ ∗ ψ̃⟩ = lim_{m→∞} ⟨F, S_m⟩ = lim_{m→∞} 2^{−nm} Σ_j φ(y^m_j) ⟨F, τ_{y^m_j} ψ̃⟩ = ∫ φ(y) ⟨F, τ_y ψ̃⟩ dy = ∫ φ(y) F ∗ ψ(y) dy. □
Lemma 9.4: Suppose that φ, ψ ∈ C∞_c and ∫ ψ = 1, and let ψ_t(x) = t^{−n} ψ(x/t).
(a) Given any neighborhood U of supp(φ), we have supp(φ ∗ψt) ⊂U for t sufficiently small.
(b) φ ∗ ψ_t → φ in C∞_c as t → 0.
Proof. If supp(ψ) ⊂ B(0, R) then supp(φ ∗ ψ_t) is contained in a tR-neighborhood of supp(φ), which is in U for t small. Moreover, ∂^α(φ ∗ ψ_t) = (∂^α φ) ∗ ψ_t → ∂^α φ uniformly as t → 0, by Theorem 8.14.
□ Prop. 9.5: For any open U ⊂Rn, C∞ c (U) is dense in D′(U).
Proof. Suppose F ∈D′(U).
We shall first approximate F by distributions supported in compact subsets of U, then approximate the latter by functions in C∞ c (U).
Let {V_j} be an increasing sequence of precompact open subsets of U whose union is U, as in Proposition 4.39. For each j, by the C∞ Urysohn lemma we can pick ζ_j ∈ C∞_c(U) such that ζ_j = 1 on V_j. Given φ ∈ C∞_c(U), for j sufficiently large we have supp(φ) ⊂ V_j and hence ⟨F, φ⟩ = ⟨F, ζ_j φ⟩ = ⟨ζ_j F, φ⟩.
Therefore ζjF →F as j →∞.
Since supp(ζ_j) is compact, ζ_j F can be regarded as a distribution on Rⁿ. Let ψ, ψ_t be as in Lemma 9.4. If ψ̃(x) = ψ(−x) then ∫ ψ̃ = 1, so given φ ∈ C∞_c we have φ ∗ ψ̃_t → φ in C∞_c by Lemma 9.4. By Proposition 9.3 we have (ζ_j F) ∗ ψ_t ∈ C∞ and ⟨(ζ_j F) ∗ ψ_t, φ⟩ = ⟨ζ_j F, φ ∗ ψ̃_t⟩ → ⟨ζ_j F, φ⟩, so (ζ_j F) ∗ ψ_t → ζ_j F in D′. In other words, every neighborhood of F in D′(U) contains the C∞ functions (ζ_j F) ∗ ψ_t for j large and t small.
Finally, note that supp(ζ_j) ⊂ V_k for some k. If supp(φ) ∩ V_k = ∅, then for small enough t we have supp(φ ∗ ψ̃_t) ∩ V_k = ∅ (Lemma 9.4 again) and hence ⟨(ζ_j F) ∗ ψ_t, φ⟩ = ⟨F, ζ_j(φ ∗ ψ̃_t)⟩ = 0.
In other words, supp((ζjF) ∗ψt) ⊂Vj ⊂U so we are done.
□ Example: derivatives of step functions.
If H = χ_{(0,∞)}, then ⟨H′, φ⟩ = −⟨H, φ′⟩ = −∫_R H φ′ dx = −∫_0^∞ φ′ dx = φ(0) = ⟨δ, φ⟩, so H′ = δ as distributions.
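This computation is easy to verify numerically: pairing H′ against a test function via the definition gives −∫_0^∞ φ′, which should equal φ(0). A sketch (assuming scipy; the test function is our choice):

```python
import numpy as np
from scipy.integrate import quad

phi  = lambda x: np.exp(-(x - 1.0)**2)             # smooth, rapidly decaying test function
dphi = lambda x: -2.0 * (x - 1.0) * phi(x)

pairing, _ = quad(lambda x: -dphi(x), 0.0, np.inf)  # <H', phi> = -<H, phi'> = -int_0^inf phi'
print(pairing, phi(0.0))                            # both ~ e^{-1} = 0.3679
```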
Example: divergent integral.
Let f(x) = χ_{(0,∞)}(x) · (1/x).
This is locally integrable on U = (0, ∞) so defines a distribution there. It does not define a distribution on R since ∫ fφ diverges unless φ(0) = 0.
However, there is a distribution on R that agrees with f on U.
Note that L(x) = (log x) χ_{(0,∞)}(x) is locally integrable, so it defines a distribution on R, and L′ is a well-defined distribution.
Let L_ϵ = (log x) χ_{(ϵ,∞)}. By the dominated convergence theorem ∫ Lφ = lim_{ϵ→0} ∫ L_ϵ φ for φ ∈ S. Thus L_ϵ → L as distributions and hence L′_ϵ → L′ as distributions.
But ⟨L′_ϵ, φ⟩ = −⟨L_ϵ, φ′⟩ = −∫ L_ϵ φ′ = −∫_ϵ^∞ log(x) φ′(x) dx = ∫_ϵ^∞ (φ(x)/x) dx + φ(ϵ) log(ϵ).
As ϵ →0 this converges to ⟨L′, φ⟩even though the two terms diverge.
Example: This function from calculus has different mixed partials.
f(x, y) = xy(x² − y²)/(x² + y²), ∂_x∂_y f(0, 0) ≠ ∂_y∂_x f(0, 0).
However, the mixed partials define the same function everywhere except at the origin, so the mixed partials in the sense of distributions are the same. This is always true for distributions, since mixed partials of the C∞_c test functions can be taken in either order.
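The classical failure of symmetry at the origin can be checked symbolically. A sympy sketch computing the two iterated limits (sympy assumed available):

```python
import sympy as sp

x, y, h = sp.symbols('x y h', real=True)
f = x * y * (x**2 - y**2) / (x**2 + y**2)

fy_on_axis = sp.limit(f.subs(y, h) / h, h, 0)   # f_y(x, 0) for x != 0  ->  x
fx_on_axis = sp.limit(f.subs(x, h) / h, h, 0)   # f_x(0, y) for y != 0  -> -y

dxdy = sp.limit(fy_on_axis / x, x, 0)   # d/dx of f_y at the origin ->  1
dydx = sp.limit(fx_on_axis / y, y, 0)   # d/dy of f_x at the origin -> -1
print(dxdy, dydx)                        # 1, -1: the pointwise mixed partials disagree
```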
Chapter 9.2: Compactly Supported, Tempered and Periodic Distributions
Distribution = D′ = dual of C∞_c(U).
Compactly supported distribution = E′ = dual of C∞(U).
Tempered distribution = S′ = dual of S, the Schwartz class.
Periodic distributions = D′(Tⁿ) = dual of C∞(Tⁿ).
C∞(U) is a Fréchet space with seminorms ∥f∥_{[m,α]} = sup_{V_m} |∂^α f(x)|, where {V_m} is an increasing sequence of precompact open subsets of U whose union is U.
Prop 9.7: C∞ c (U) is dense in C∞(U).
Proof. For each m use the C∞ Urysohn lemma to pick ψ_m ∈ C∞_c(U) with ψ_m = 1 on V_m. If φ ∈ C∞(U) then ∥φ − ψ_k φ∥_{[m,α]} = 0 for k ≥ m, so ψ_k φ → φ in C∞(U). □
Theorem 9.8: E′(U) is the dual space of C∞(U). More precisely: If F ∈ E′(U), then F extends uniquely to a continuous linear functional on C∞(U), and if G is a continuous linear functional on C∞(U), then G restricted to C∞_c(U) is in E′(U).
Proof. If F ∈E′(U), choose ψ ∈C∞ c (U) with ψ = 1 on supp(F), and define the linear functional G on C∞(U) by ⟨Gφ⟩= ⟨F, ψφ⟩.
Since F is continuous on C∞ c (supp(ψ)), and the topology of the latter is defined by the norms φ →∥∂αφ∥u, by Proposition 5.15 there are N ∈N and 0 < C < ∞so that |⟨Gφ⟩| ≤C X |α|≤N sup x∈supp(ψ )|∂αφ(x)| ≤C′ X |α|≤N ∥φ∥[m,α].
Thus G is continuous on C∞(U). That G is the unique extension follows from the previous lemma (C∞ c (U) is dense in C∞(U)).
On the other hand, if G is a continuous linear functional on C∞(U), then by Proposition 5.15 there exist C, m, N such that |⟨G, φ⟩| ≤ C Σ_{|α|≤N} ∥φ∥_{[m,α]}.
Since ∥φ∥_{[m,α]} ≤ ∥∂^α φ∥_u, this implies G is continuous on C∞_c(K) for each compact K in U. Thus G restricted to C∞_c(U) is continuous and hence in D′(U).
Moreover, if supp(φ) is disjoint from Vm then ⟨G, φ⟩= 0 so supp(G) is in Vm and so G is compactly supported.
□ The operations of differentiation, multiplication by C∞ functions, translation, and composition by linear maps discussed in 9.1 all preserve the class E′.
Convolution is slightly more complicated.
First, if F ∈E′ and φ ∈C∞ c then F ∗φ ∈C∞ c since Proposition 8.6d remains valid.
Second, if F ∈ E′ and ψ ∈ C∞, then F ∗ ψ can be defined as a C∞ function or as a distribution just as before: F ∗ ψ(x) = ⟨F, τ_x ψ̃⟩, and ⟨F ∗ ψ, φ⟩ = ⟨F, φ ∗ ψ̃⟩ for φ ∈ C∞_c.
Finally, a further dualization allows us to define convolutions of arbitrary distributions with compactly supported distributions. If F ∈ D′ and G ∈ E′, we can define F ∗ G ∈ D′ and G ∗ F ∈ D′ as follows: ⟨F ∗ G, φ⟩ = ⟨F, G̃ ∗ φ⟩ and ⟨G ∗ F, φ⟩ = ⟨G, F̃ ∗ φ⟩. The proofs that F ∗ G and G ∗ F are indeed distributions and that F ∗ G = G ∗ F are Exercises 20 and 21.
We have not yet extended the Fourier transform to distributions.
Fact: If φ ∈C∞ c is not zero then ˆ φ ̸∈C∞ c , in fact, its support is the entire space. Stronger versions of this are called the uncertainty principle.
Proof. Suppose φ̂ is zero on a neighborhood of ξ₀. By replacing φ by φ e^{−2πiξ₀·x} we may assume ξ₀ = 0. Since φ has compact support we may expand e^{−2πiξ·x} in a power series and integrate term by term:
φ̂(ξ) = Σ_{k=0}^∞ (1/k!) ∫ (−2πiξ·x)^k φ(x) dx = Σ_α (1/α!) ξ^α ∫ (−2πix)^α φ(x) dx.
This uses the multinomial theorem (Exercise 8.2.a): (x₁ + · · · + x_n)^k = Σ_{|α|=k} (k!/α!) x^α.
But ∫ (−2πix)^α φ(x) dx = ∂^α φ̂(0) = 0 since φ̂ vanishes on a neighborhood of zero. Hence φ̂ is everywhere zero and so φ is too.
□ There are many more quantitative versions.
The Uncertainty Principle: A Mathematical Survey, by Gerald B. Folland and Alladi Sitaram. Theorem: if f ∈ S and x₀, ξ₀ ∈ Rⁿ, then ∥f∥₂² ≤ 4π ∥(x − x₀)f(x)∥₂ · ∥(ξ − ξ₀)f̂(ξ)∥₂.
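For the Gaussian e^{−πx²}, whose Fourier transform is the same Gaussian, the two sides of this inequality (with x₀ = ξ₀ = 0, n = 1) actually agree, which is the extremal case. A numerical check using that known transform pair (scipy assumed; this is only a sanity check, not part of the notes):

```python
import numpy as np
from scipy.integrate import quad

f  = lambda x: np.exp(-np.pi * x**2)        # f(x) = e^{-pi x^2}
fh = lambda xi: np.exp(-np.pi * xi**2)      # its Fourier transform (same Gaussian)

norm_f_sq = quad(lambda x: f(x)**2, -np.inf, np.inf)[0]
norm_xf   = np.sqrt(quad(lambda x: (x * f(x))**2, -np.inf, np.inf)[0])
norm_xifh = np.sqrt(quad(lambda xi: (xi * fh(xi))**2, -np.inf, np.inf)[0])

print(norm_f_sq, 4 * np.pi * norm_xf * norm_xifh)   # both ~ 0.7071: equality for the Gaussian
```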
Theorem (Amrein and Berthier): if f ∈ S and E, F ⊂ Rⁿ have finite measure then ∥f∥_{L²(Rⁿ)} ≤ C(E, F) ( ∥f∥_{L²(Rⁿ\E)} + ∥f̂∥_{L²(Rⁿ\F)} ). Conclusion (Benedicks): f and f̂ can't both be supported on finite measure sets.
Hardy's Uncertainty Principle: Suppose f is a measurable function on R such that |f(x)| ≤ A e^{−απx²} and |f̂(x)| ≤ B e^{−βπx²}. If αβ > 1 then f is the zero function.
Theorem (Benedicks): If f ∈ L¹(Rⁿ), let A = {x : |f(x)| > 0} and B = {ξ : |f̂(ξ)| > 0}.
If m(A) < ∞and m(B) < ∞then f = 0 a.e., Proof. By dilating we may assume m(A) < (2π)n and m(B) < ∞. Let φ = χB and ˜ φ = X k∈Zn τkφ.
This is positive measurable and 1-periodic. Let K = [0, 1]n. Then Z K ˜ φ = Z Rn φ = m(B) < ∞ so for almost every ξ, we have ξ + k ∈B for only finitely many k ∈Zn.
Fix ξ0 ∈Rn and define ˜ fξ0(x) = X k∈Zn e−2πiξ0·(x−k)f(x −k).
Then (i) ˜ fξ0 ∈L1(Tn) (ii) ˜ fξ0 has Fourier coefficients (fξ0)∧(k) = (2π)−n ˆ f(ξ0 + k), (iii) m({x : ˜ fξ0(x) > 0}) < (2π)n By (ii) and our earlier remarks, ˜ fξ0 is a trigonometric polynomial for a.e. ξ0.
But by (iii) this trigonometric polynomial vanishes on a set of positive measure, so it is the zero function. Thus for a.e. ξ₀, f̂(ξ₀ + k) = 0 for all k. This implies f̂ = 0 a.e.
and hence f = 0 a.e..
□ Michael Benedicks, 1949–present. Even though the Fourier transform does not map C∞_c into itself, it does map S into itself.
Prop 9.9: Suppose ψ ∈ C∞_c, ψ(0) = 1, and let ψ_ϵ(x) = ψ(ϵx). Then for any φ ∈ S, ψ_ϵ φ → φ in S as ϵ → 0. In particular, C∞_c is dense in S.
Proof. First consider the semi-norms with no derivatives. Given any N and η > 0 we can choose a compact K so that (1 + |x|)N|φ(x)| < η offK. Since ψ(ϵx) →1 uniformly on K we get sup K (1 + |x|)N|φ(x) −ψϵ(x)φ(x)| →0 so ∥φ −ψϵφ∥N →0.
For semi-norms involving derivatives, use the product rule: (1 + |x|)N∂α(ψϵφ −φ) = (1 + |x|)N(ψϵ∂αφ −∂αφ) + Eϵ(x), where Eϵ is a sum of terms involving derivatives of ψϵ. Since ∂βψϵ(x)| = ϵ|β||∂βψ(ϵx)| ≤Cβ · ϵ|β|, we have ∥Eϵ∥u ≤Cϵ →0. Thus ∥ψϵφ −φ∥(N,α) →0.
□ Defn: A tempered distribution is a continuous linear functional on S.
These are denoted S′.
If F ∈S′ then it defines a distribution on C∞ c since convergence in C∞ c implies convergence in S. Thus tempered distributions are a subset of distributions: the ones that extend continuously from C∞ c to S.
Roughly speaking a distribution is tempered if it does not grow too quickly at ∞.
Compactly supported distributions are tempered.
If f ∈L1 loc and R (1 + |x|)N|f(x)|dx < ∞for some N > −∞, then f is tempered.
Example 1: eiax is bounded, hence tempered by (ii).
Example 2: e^{bx} (b > 0) is not tempered: choose ψ ∈ C∞_c so that ∫ ψ = 1. Then ψ_j = e^{−bx} ψ(x − j) → 0 in S, but ∫ ψ_j e^{bx} dx = ∫ ψ dx = 1 ̸→ 0.
Example 3: f(x) = e^x cos(e^x) is tempered, because it is the derivative of the bounded function sin(e^x). Indeed, |∫ fφ| = |∫ φ′(x) sin(e^x) dx| ≤ C ∥φ∥_{(2,1)}.
[Plot of f(x) = e^x cos(e^x) on [0, 5].] Differentiation, translation and composition with linear transformations all work the same for tempered distributions as for distributions.
Multiplication by smooth functions is slightly different.
For F →ψF to map a tempered distribution to a tempered distribution, we need ψ and its derivatives to have at most polynomial growth |∂αψ(x)| ≤Cα(1 + |x|)N(α) Such functions are called slowly increasing.
Polynomials are examples. So is (1 + |x|2)s, s ∈R.
Prop. 9.10: If F ∈S′ and ψ ∈S then F ∗ψ is a slowly increasing C∞ function, and for any φ ∈S, Z (F ∗ψ)φ = ⟨F, φ ∗˜ ψ⟩.
Proof. That F ∗ ψ ∈ C∞ is proven as in Proposition 9.3.
By Proposition 5.15, the continuity of F implies that there exist m, N, C such that |⟨F, φ⟩| ≤C X |α|≤N ∥φ∥(,α), and hence by (8.12), |F ∗ψ| ≤C X |α|≤N (1 + |y|)m|∂αψ(x −y)| ≤C(1 + |x|)m X |α|≤N sup y (1 + |x −y|)m|∂αψ(x −y)| ≤C(1 + |x|)m X |α|≤N ∥ψ∥(mα).
The same reasoning applies with ψ replaced by ∂βψ, so F ∗ψ is slowly increasing.
Next, by Proposition 9.3 we know that the equation ∫ (F ∗ ψ)φ = ⟨F, φ ∗ ψ̃⟩ holds when φ, ψ ∈ C∞_c.
By Proposition 9.9, if φ, ψ ∈ S, we can find sequences {φ_j} and {ψ_j} in C∞_c that converge to φ, ψ in S. Then φ_j ∗ ψ̃_j → φ ∗ ψ̃ in S by the proof of Prop 8.11, so ⟨F, φ_j ∗ ψ̃_j⟩ → ⟨F, φ ∗ ψ̃⟩.
On the other hand, the preceding estimates show that |F ∗ψj(x)| ≤C(1 + |x|)m, with C and m independent of j, and likewise |φj(x)| ≤C(1 + |x|)−m−m−1.
Thus Z (F ∗ψj)φj → Z (F ∗ψ)φ, by the dominated convergence theorem.
□ The main reason for considering tempered distributions, is the Fourier transform.
Recall the Fourier transform maps S continuously into itself, and that for f, g ∈ S ⊂L1,Z ˆ f(y)g(y)dy = ZZ f(x)g(y)e−2πix·ydxdy = Z f(x)ˆ g(x)dx.
We can extend the Fourier transform to a continuous linear map from S′ to itself by defining ⟨ˆ F, φ⟩= ⟨F, ˆ φ⟩.
This definition agrees with the one in Chapter 8 when f ∈l1 + L2. The basic properties of the Fourier transform in Theorem 8.22 continue to hold in S′: (τyf)∧= e−2πξ·y ˆ F, τη ˆ F = (e2πiη·xF)∧, ∂α ˆ F = [(2πix)αF]∧ (∂αF)∧= (2πiξ)α ˆ F (f ◦T)∧= | det T|−1 ˆ f ◦(T ∗)−1, T ∈GL(n, R), (F ∗ψ)∧= ˆ ψ · ˆ F, Verifications are left to the reader.
Inverse transform given by ⟨F ∨, φ⟩= ⟨F, φ∨⟩.
Fourier inversion: ⟨( ˆ F)∨, φ⟩= ⟨ˆ F, (φ∨)∧⟩= ⟨ˆ F, φ⟩.
Thus the Fourier transform is an isomorphism on S′.
There is an alternative way to define Fourier transform of a compactly sup-ported distribution F. Since φ(x) = exp(2πiξ · x) is C∞, ⟨F, φ⟩should also be ˆ F. The two possibilities agree: Prop 9.11: If F ∈E′, then ˆ F is a slowly increasing C∞function, and it is given by ˆ F(ξ) = ⟨F, E−ξ⟩, where Eξ = exp(2πiξ · x).
Proof. Let g(ξ) = ⟨F, E−ξ⟩. Consideration of difference quotients of g, as in the proof of Proposition 9.3, shows that g ∈C∞with derivatives given ∂αg = ⟨F, ∂α ξ E−ξ⟩= (−2πi)|α|⟨F, xαE−ξ⟩.
Moreover, by Theorem 9.8 and Proposition 5.15, there exist m, N, C such that ∂αg(ξ)| ≤C X |β|≤N sup |x|≤m |∂β[xαE−ξ(x)]| ≤C′(1 + m)α|(1 + |ξ|)N, so g is slowly increasing.
It remains to show that g = F̂, and by Proposition 9.9 it suffices to show that ∫ gφ = ⟨F, φ̂⟩ for any φ ∈ C∞_c. In this case gφ ∈ C∞_c, so ∫ gφ can be approximated by Riemann sums as in the proof of Proposition 9.3, say ∫ gφ ≈ Σ_j g(ξ_j) φ(ξ_j) Δξ_j.
The corresponding sums Σ_j φ(ξ_j) e^{−2πiξ_j·x} Δξ_j and their derivatives in x converge uniformly, for x in any compact set, to φ̂(x) and its derivatives. Therefore, since F is a continuous functional on C∞,
∫ gφ = lim Σ_j ⟨F, E_{−ξ_j}⟩ φ(ξ_j) Δξ_j = lim ⟨F, Σ_j φ(ξ_j) E_{−ξ_j} Δξ_j⟩ = ⟨F, φ̂⟩.
□ The Fourier transform of the point mass at 0 is the constant function 1: δ̂(ξ) = ⟨δ, E_{−ξ}⟩ = E_{−ξ}(0) = 1.
More generally, Prop. 9.12: The Fourier transforms of linear combinations of δ and its derivatives are precisely the polynomials.
(x^α)^∧ = [(−x)^α]^∨ = (2πi)^{−|α|} ∂^α δ, Ê_y = (E_{−y})^∨ = τ_y δ.
Formally, ∫ 1 · e^{2πiξ·x} dξ = δ(x).
Every distribution is a linear combination of derivatives of continuous functions.
Prop 9.14: a. If F ∈ E′, there exist N ∈ N, constants c_α for |α| ≤ N, and f ∈ C_0(Rⁿ) so that F = Σ_{|α|≤N} c_α ∂^α f.
b. If F ∈ D′(U) and V is a precompact open set in U, then there are N, c_α, f as above so that F = Σ_{|α|≤N} c_α ∂^α f in V.
Proof. By Proposition 9.11, if F ∈E′ then ˆ F is slowly increasing, so g(ξ) = (1 + |ξ|2)−M ˆ F(ξ), is in l1 if M is large enough. Let f = ˆ g. Then f ∈C0 and ˆ F = (1 + |ξ|2)M ˆ f.
Thus F = (I − (1/4π²) Σ_{j=1}^n ∂_j²)^M f.
This proves (a). For (b), choose ψ ∈C∞ c (U) such that ψ = 1 on V and apply (a) to ψF.
□ I am omitting Folland's remarks on periodic distributions. Read about this in the textbook.
Chapter 9.3: Sobolev Spaces
Sobolev spaces measure smoothness properties of functions and distributions in terms of L² norms.
• L² is a Hilbert space;
• the Fourier transform converts differentiation into multiplication by the coordinate functions and is an isometry on L².
Let H^k be the space of all functions f ∈ L²(Rⁿ) whose distributional derivatives ∂^α f are L² functions for |α| ≤ k. Make this into a Hilbert space with the inner product ⟨f, g⟩ = Σ_{|α|≤k} ∫ (∂^α f) \overline{(∂^α g)}.
However, it is more convenient to use an equivalent inner product defined in terms of the Fourier transform. Theorem 8.22e and the Plancherel theorem imply that f ∈ H^k iff ξ^α f̂ ∈ L² for |α| ≤ k.
A simple modification of the argument in the proof of Proposition 8.3 shows that there exist C₁, C₂ > 0 such that C₁(1 + |ξ|²)^k ≤ Σ_{|α|≤k} |ξ^α|² ≤ C₂(1 + |ξ|²)^k.
It follows that f ∈ H^k iff (1 + |ξ|²)^{k/2} f̂ ∈ L². The norms (Σ_{|α|≤k} ∥∂^α f∥₂²)^{1/2} and ∥(1 + |ξ|²)^{k/2} f̂∥₂ are equivalent.
The second norm makes sense for any k ∈R, and we can use it to extend the definition of Hs to all real s.
For any s ∈ R, ξ → (1 + |ξ|²)^{s/2} is C∞ and slowly increasing (Exercise 30), so Λ_s f = [(1 + |ξ|²)^{s/2} f̂]^∨ is a continuous linear operator on S′. In fact, it is an isomorphism since Λ_s^{−1} = Λ_{−s}.
Defn: If s ∈ R, define the Sobolev space H_s to be H_s = {f ∈ S′ : Λ_s f ∈ L²}, with the inner product and norm
⟨f, g⟩_(s) = ∫ (Λ_s f) \overline{(Λ_s g)} = ∫ f̂(ξ) (1 + |ξ|²)^s \overline{ĝ(ξ)} dξ,
∥f∥_(s) = ∥Λ_s f∥₂ = ( ∫ |f̂(ξ)|² (1 + |ξ|²)^s dξ )^{1/2}.
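When f̂ is known explicitly, ∥f∥_(s) can be computed directly from this formula. A sketch in one dimension (scipy assumed; we use the standard transform pair e^{−πx²} ↔ e^{−πξ²}, and also the constant function 1, which is the transform of the point mass and reappears below):

```python
import numpy as np
from scipy.integrate import quad

def hs_norm_from_fhat(fhat, s, R=np.inf):
    # ||f||_(s) = ( int |fhat(xi)|^2 (1 + |xi|^2)^s dxi )^{1/2}, here n = 1
    val, _ = quad(lambda xi: abs(fhat(xi))**2 * (1 + xi**2)**s, -R, R)
    return np.sqrt(val)

gauss_hat = lambda xi: np.exp(-np.pi * xi**2)   # transform of e^{-pi x^2}
print(hs_norm_from_fhat(gauss_hat, s=2.0))      # finite for every s: the Gaussian lies in all H_s

delta_hat = lambda xi: 1.0                      # transform of the point mass
for R in (10, 100, 1000):
    print(R, hs_norm_from_fhat(delta_hat, -1.0, R), hs_norm_from_fhat(delta_hat, -0.4, R))
    # the s = -1 column stabilises near sqrt(pi) ~ 1.772, while the s = -0.4 column keeps
    # growing with R, consistent with delta in H_s iff s < -n/2 = -1/2
```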
(i) The Fourier transform is a unitary isomorphism from H_s to L²(Rⁿ, µ_s) where dµ_s = (1 + |ξ|²)^s dξ. So H_s is a Hilbert space.
(ii) S is dense in Hs for all s ∈R.
(iii) If t < s, H_s is dense in H_t in the topology of H_t and ∥·∥_(t) ≤ ∥·∥_(s).
(iv) Λt is a unitary isomorphism from Hs to Hs−t for all s, t ∈R.
(v) H0 = L2 and ∥· ∥(0) ≤∥· ∥L2.
(vi) ∂α is a bounded linear map from Hs to Hs−|α| for all s, α because |ξα| ≤ (1 + |ξ|2)|α|/2.
For s ≥ 0 elements of H_s are functions. This need not be true for s < 0; for example the point mass δ ∈ H_s(Rⁿ) iff s < −n/2. (Recall δ̂ is the constant function 1.)
Prop 9.16: If s ∈ R, the duality between S and S′ induces a unitary isomorphism from H_{−s} to (H_s)*. More precisely, if f ∈ H_{−s}, the functional φ → ⟨f, φ⟩ on S extends to a continuous linear functional on H_s with operator norm equal to ∥f∥_{(−s)}, and every element of (H_s)* arises in this way.
Proof. If f ∈H−s and φ ∈S, ⟨f, φ⟩= ⟨f ∨, ˆ φ⟩= Z f ∨(ξ)ˆ φ(ξ)dξ, since f ∨(ξ) = ˆ f(−ξ) is a tempered function. By the Schwarz inequality, |⟨f, φ⟩| ≤ Z |f ∨(ξ)|2(1 + |ξ|2)−sdξ 1/2 Z |ˆ φ(ξ)|2(1 + |ξ|2)sdξ 1/2 = ∥f∥(−s) · ∥φ∥(s).
Thus the functional φ →⟨f, φ⟩extends continuously to Hs with norm at most ∥f∥(−s).
In fact, the norm is equal to this since if g ∈S′ is the distribution whose Fourier transform equals ˆ g(ξ) = (1 + |ξ|2)−s ˆ f(ξ), then g ∈Hs and ⟨f, g⟩= Z | ˆ f(ξ)|2(1 + |ξ|2)sdξ = ∥f∥2 (−s) = ∥f∥(−s)∥f∥(s).
Finally, if G ∈ (H_s)* then G ∘ F^{−1} is a bounded linear functional on L²(µ_s) where dµ_s = (1 + |ξ|²)^s dξ, so there is a g ∈ L²(µ_s) so that G(φ) = ∫ φ̂(ξ) g(ξ) (1 + |ξ|²)^s dξ.
But then G(φ) = ⟨f, φ⟩ where f^∨(ξ) = (1 + |ξ|²)^s g(ξ), and f ∈ H_{−s} since ∥f∥²_{(−s)} = ∫ |f̂(ξ)|² (1 + |ξ|²)^{−s} dξ = ∫ |g(ξ)|² (1 + |ξ|²)^s dξ.
□ Elements of H_s that are functions are defined a.e.
To ask if such a function is Ck means that it agrees a.e. with a Ck function.
Define C^k_0 = {f ∈ C^k(Rⁿ) : ∂^α f ∈ C_0 for |α| ≤ k}.
This is a Banach space with the norm Σ_{|α|≤k} ∥∂^α f∥_u.
The Sobolev Embedding Theorem: Suppose s > k + n/2.
(a) If f ∈ H_s, then (∂^α f)^∧ ∈ L¹ for |α| ≤ k and ∥(∂^α f)^∧∥₁ ≤ C ∥f∥_(s), where C depends only on n and k − s.
(b) Hs ⊂Ck 0 and the inclusion map is continuous.
Proof. By the Schwarz inequality, for |α| ≤ k,
∫ |(∂^α f)^∧| dξ = (2π)^{|α|} ∫ |ξ^α f̂(ξ)| dξ ≤ (2π)^{|α|} ∫ (1 + |ξ|²)^{k/2} |f̂(ξ)| dξ ≤ (2π)^{|α|} ( ∫ (1 + |ξ|²)^s |f̂(ξ)|² dξ )^{1/2} ( ∫ (1 + |ξ|²)^{k−s} dξ )^{1/2} ≤ C ∥f∥_(s),
where the last integral is finite because k − s < −n/2.
This proves (a). Part (b) follows from the Fourier inversion formula and the Riemann-Lebesgue lemma.
□ Cor 9.18: If f ∈Hs for all s, then f ∈C∞.
Example: Let f_λ(x) = |x|^λ φ(x) where φ ∈ C∞_c and φ equals 1 on a neighborhood of 0. Then ∂^α f_λ is C∞ except at 0 and |∂^α f_λ| ≤ C_{α,λ} |x|^{λ−|α|}.
This is in L¹ iff λ − |α| > −n, in which case the pointwise derivative ∂^α f_λ is also the distributional derivative.
Moreover, ∂^α f_λ ∈ L² iff λ − |α| > −n/2, so f_λ ∈ H^k iff λ > k − n/2, whereas f_λ ∈ C^k_0 iff λ > k.
Lemma 9.19: For all ξ, η ∈ Rⁿ and s ∈ R, (1 + |ξ|²)^s (1 + |η|²)^{−s} ≤ 2^{|s|} (1 + |ξ − η|²)^{|s|}.
Proof. Since |ξ| ≤|ξ −η| + |η|, we have |ξ|2 ≤2(|ξ −η|2 + |η|2), and 1 + |ξ|2 ≤2(1 + |ξ −η|2)(1 + |η|2).
If s ≥ 0, raise both sides to the power s. If s < 0, interchange η and ξ and replace s by −s to get (1 + |η|²)^{−s} ≤ 2^{−s} (1 + |ξ|²)^{−s} (1 + |ξ − η|²)^{−s}.
□ Theorem 9.20: Suppose that φ ∈ C_0(Rⁿ) and that φ̂ is a function that satisfies ∫ (1 + |ξ|²)^{a/2} |φ̂(ξ)| dξ = C < ∞ for some a > 0. Then the map M_φ(f) = φf is a bounded operator on H_s for |s| ≤ a.
Proof. Since Λs is a unitary map from Hs to H0 = L2, it suffices to show that ΛsMφΛ−s is a bounded operator on L2. But (ΛsMφΛ−sf)∧(ξ) = (1 + |ξ|2)s/2ˆ φ ∗(Λ−sf)∧ = Z K(ξ, η) ˆ f(η)dη, where K(ξ, η) = 1 + |ξ|2)s/2(1 + |η|2)−s/2 ˆ φ(ξ −η).
By Lemma 9.19 |K(ξ, η)| ≤2|s|/2(1 + |ξ −η|2)|s|/2|ˆ φ(ξ −η)| so if |s| ≤a then R |K(ξ, η|dξ and R |K(ξ, η|dξ are bounded by 2a/2C. Bound-edness of ΛsMφΛ−s follows from the Plancherel theorem and Theorem 6.18.
□ Theorem 6.18: Suppose Z K(x, y)dµ(x) ≤C, Z K(y, x)dν(x) ≤C for a.e. x and y. If 1 ≤p ≤∞and f ∈Lp(dν) then Tf(x) = Z K(x, y)f(y)dν(y) converges absolutely for a.e. x and Tf ∈Lp(dµ) with ∥Tf∥p ≤C∥f∥p.
Cor 9.21: If φ ∈ S then M_φ is a bounded operator on every H_s, s ∈ R.
Rellich’s Theorem: Suppose that {fk} is a sequence of distributions in Hs, that are all supported in a fixed compact set K and satisfy supk ∥fk∥(s) < ∞.
Then there is a subsequence {fkj} that converges in Ht, for all t < s.
Proof. First we observe that by Proposition 9.11, ˆ fk is a slowly increasing C∞ function.
Pick φ ∈C∞ c such that φ = 1 on a neighborhood of K.
Then fk = φfk, so ˆ fk = ˆ φ ∗ˆ fk where the convolution is defined pointwise by an absolutely convergent integral. By Lemma 9.19 and the Schwarz inequality, (1 + |ξ|2)s/2| ˆ fk(ξ)| ≤2|s|/2 Z |ˆ φ(ξ −η)|(1 + |ξ −η|2)|s|/2| ˆ fk(η)|(1 + |η|2)s/2dη ≤2|s|/2∥φ∥(s)∥fk∥(s) ≤C < ∞ Likewise, since ∂j(ˆ φ ∗ˆ fk) = (∂j ˆ φ) ∗ˆ fk we see that (1 + |ξ|2)s/2|∂j ˆ fk(ξ)| is bounded by a constant independent of ξ, j and k. In particular, the fk’s and their first derivatives are uniformly bounded on compact sets, so by the mean value theorem and the Arzel´ a-Ascoli theorem there is a subsequence { ˆ fkj} that converges uniformly on compact sets.
We claim that this subsequence is Cauchy in H_t for t < s. For any R > 0 we can write ∥f_{k_i} − f_{k_j}∥²_(t) = ∫ (1 + |ξ|²)^t |f̂_{k_i} − f̂_{k_j}|² dξ as the sum of integrals over the regions {|ξ| ≤ R} and {|ξ| > R}. On the first region we have (1 + |ξ|²)^t ≤ (1 + R²)^{max(t,0)}, and for |ξ| > R we use (1 + |ξ|²)^t ≤ (1 + R²)^{t−s} (1 + |ξ|²)^s.
These give ∥f_{k_i} − f_{k_j}∥²_(t) ≤ c_n R^n (1 + R²)^{max(t,0)} sup_{|ξ|≤R} |f̂_{k_i} − f̂_{k_j}|²(ξ) + (1 + R²)^{t−s} ∥f_{k_i} − f_{k_j}∥²_(s). For any ϵ > 0 the second term will be less than ϵ/2 if R is large enough, since t − s < 0. Fixing such an R, we then take i, j sufficiently large to make the first term < ϵ/2.
□ Sobolev spaces can also be defined on proper open subsets U ⊂Rn.
The localized Sobolev space Hloc s (U) is the set of all distributions f ∈D′(U) such that for every precompact open set V ⊂U there exists g ∈Hs so that f = g on V .
Prop 9.23: A distribution f ∈D′(U) is in Hloc s (U) iffφf ∈Hs, for every φ ∈C∞ c (U).
Proof. If f ∈Hloc s (U) and φ ∈C∞ c (U), then f agrees with some g ∈Hs on a neighborhood of supp(φ); hence φf = φg ∈Hs by Corollary 9.21.
For the converse, given a precompact open V ⊂ U, we can choose φ ∈ C∞_c(U) with φ = 1 on a neighborhood of V by the C∞ Urysohn lemma. Then φf ∈ H_s and φf = f on V.
□ Application of Sobolev spaces to PDE: Consider a constant-coefficient differential operator P(D) = Σ_{|α|≤m} c_α D^α.
Assume cα ̸= 0 for some |α| = m.
The principal symbol P_m is the polynomial corresponding to the degree m terms of P(D): P_m(ξ) = Σ_{|α|=m} c_α ξ^α.
P(D) is called elliptic if P_m(ξ) ≠ 0 for all non-zero ξ ∈ Rⁿ. The Laplacian Δ is elliptic on Rⁿ but the heat operator ∂_t − Δ and the wave operator ∂_t² − Δ are not.
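Ellipticity is a statement about the principal symbol only, which is why the heat and wave operators fail: a lower-order time derivative cannot rescue a vanishing top-order part. A small sympy sketch exhibiting nonzero real zeros of the heat and wave symbols (the sample points are ours; sampling can only exhibit non-ellipticity, not prove ellipticity):

```python
import sympy as sp

tau, xi1, xi2 = sp.symbols('tau xi1 xi2', real=True)
freq = (tau, xi1, xi2)

# Principal symbols (highest-order terms only), up to constant factors.
principal = {
    "Laplacian": tau**2 + xi1**2 + xi2**2,   # order 2, all of it is top order
    "heat":      xi1**2 + xi2**2,            # d_t - Delta: the d_t term is order 1 and drops out
    "wave":      tau**2 - xi1**2 - xi2**2,   # d_t^2 - Delta
}

witnesses = [(1, 0, 0), (1, 1, 0), (0, 1, 1), (2, 1, 1)]   # nonzero real frequencies
for name, P in principal.items():
    zeros = [w for w in witnesses if P.subs(dict(zip(freq, w))) == 0]
    print(name, "vanishes at", zeros if zeros else "none of the sample points")
# heat vanishes at (1, 0, 0) and wave at (1, 1, 0): neither is elliptic;
# the Laplace symbol is a sum of squares and only vanishes at the origin.
```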
Lemma 9.24: Suppose P(D) is of order m. Then P(D) is elliptic iff there are 0 < C, R < ∞ so that |P(ξ)| ≥ C|ξ|^m when |ξ| ≥ R.
Proof. If P(D) is elliptic, let C₁ > 0 be the minimum value of |P_m| on the unit sphere |ξ| = 1. Since P_m is homogeneous of degree m this implies |P_m(ξ)| ≥ C₁|ξ|^m for all ξ.
Moreover, P − P_m is of order m − 1, so |P(ξ) − P_m(ξ)| ≤ C₂|ξ|^{m−1} for some C₂ < ∞ and |ξ| ≥ 1. Thus |P(ξ)| ≥ |P_m(ξ)| − |P(ξ) − P_m(ξ)| ≥ ½ C₁|ξ|^m for |ξ| ≥ 2C₂/C₁.
Conversely, if P(D) is not elliptic, say P_m(ξ₀) = 0 for some ξ₀ ≠ 0, then |P(ξ)| ≤ C|ξ|^{m−1} for all scalar multiples ξ of ξ₀.
□ Lemma 9.25: If P(D) is elliptic of order m, u ∈ H_s and P(D)u ∈ H_s, then u ∈ H_{s+m}.
Proof. By hypothesis (1 + |ξ|2)s/2ˆ u ∈L2, (1 + |ξ|2)s/2P ˆ u ∈L2.
By the previous lemma (Lemma 9.24) for some R ≥1, (1 + |ξ|2)m/2 ≤2m|ξ|m ≤c−12m|P(ξ)| for |ξ| ≥R and (1 + |ξ|2)m/2 ≤(1 + R2)m/2 for |ξ| ≤R. Thus (1 + |ξ|2)(s+m)/2|ˆ u| ≤C′(1 + |ξ|2)s/2(|P ˆ u| + |ˆ u|) ∈L2.
Hence u ∈Hs+m.
□ 9.26 The Elliptic Regularity Theorem: Suppose L is a constant-coefficient elliptic differential operator of order m, Ω ⊂ Rⁿ is open, and u ∈ D′(Ω).
If Lu ∈ H_s^{loc}(Ω) for some s ∈ R, then u ∈ H_{s+m}^{loc}(Ω). If Lu is C∞ on Ω, so is u.
Proof. We only need to prove the first claim.
By Prop 9.23 it suffices to show that uφ ∈Hs+m for every φ ∈C∞ c (Ω). Let V ⊂U be a precompact open set containing supp(φ) and choose ψ ∈C∞ c (Ω) with ψ = 1 on V .
Then ψu ∈E′ so by Prop 9.11 it follows that ψu ∈Hσ for some σ.
By decreasing σ we may assume s + m −σ is a positive integer.
Set ψ₀ = ψ, let k = s + m − σ, and set ψ_k = φ; recursively choose ψ₁, . . . , ψ_{k−1} ∈ C∞_c so that ψ_j = 1 on a neighborhood of supp(φ) and so that supp(ψ_j) is contained in the set where ψ_{j−1} = 1.
We claim that ψ_j u ∈ H_{σ+j}. When j = k this gives φu = ψ_k u ∈ H_{σ+k} = H_{s+m}.
Thus it suffices to prove the claim by induction on j.
Note that for any ζ ∈C∞ c , the operator (a commutator) [L, ζ]f = L(ζf) −ζLf, is a differential operator of order m −1; by the product rule, the order m derivatives of f cancel out. The coefficients of [L, ζ] are linear combinations of derivatives of ζ and hence they vanish where ζ is constant.
Thus if f ∈Ht and |α| ≤m −1, we have ∂αf ∈Ht−(m−1) and thus [L, ζ]f ∈ Ht−(m−1) by Theorem 9.20 (multiplication by functions in C∞ c is a bounded operator).
To begin the induction, note that for j = 0 we have ψ0u ∈Hσ by our choice of σ.
In general, assume ψ_j u ∈ H_{σ+j}. Then since ψ_{j+1}u = ψ_{j+1}ψ_j u and s = σ + k − m ≥ σ + (j + 1) − m, we have L(ψ_{j+1}u) = ψ_{j+1}Lu + [L, ψ_{j+1}]u = ψ_{j+1}Lu + [L, ψ_{j+1}]ψ_j u ∈ H_s + H_{σ+j−(m−1)} ⊂ H_{σ+j+1−m}.
Lemma 9.25 with P(D) = L implies ψj+1u ∈Hσ+j+1.
□ Example (Weyl's lemma): Every distributional solution of Δu = 0 is C∞. Thus harmonic functions are C∞. So are solutions of Δu = φ where φ is C∞.
Example (Cauchy–Riemann): If L = ∂₁ + i∂₂ on R² then, up to a constant factor, P(ξ) = ξ₁ + iξ₂ and |P(ξ)| = (ξ₁² + ξ₂²)^{1/2} = |ξ|. Thus every distributional solution of Lu = 0 is C∞; in particular, holomorphic functions are C∞.