Regular n-gon calculator and theory | calcresource

By Dr. Minas E. Lemonis, PhD - Updated: March 8, 2024

This tool calculates the basic geometric properties of a regular n-gon, that is, a polygon having N sides and N vertices. Regular polygons are equilateral (all sides equal) and also equiangular (all interior angles equal). The tool can calculate the properties of any regular n-gon, given either the edge length, the inradius, the circumradius, the area, the height or the width. Enter the shape dimensions; the calculated results will have the same units as your input, so please use consistent units for all input.

Theoretical background

Definitions

An n-gon is a polygon with N sides and N vertices. An n-gon can be either convex or concave, as illustrated in the next figure. A convex polygon has none of its interior angles greater than 180°. On the contrary, a concave polygon has one or more of its interior angles greater than 180°.

A polygon is called regular when its sides are equal and its interior angles are equal. Having only the sides equal is not adequate to guarantee that the interior angles are also equal. As demonstrated in the figure below, many possible polygons can be defined with equal sides but unequal interior angles. These polygons are called equilateral. A special case of equilateral polygons is the so-called star polygons. Any n-gon that is not regular is called irregular.

The simplest non-degenerate n-gon is the triangle, with N=3. N-gons with N<3 are degenerate. The digon, with N=2, has two vertices and two edges. The two edges coincide in the plane, resulting in a shape that looks like a linear segment between the two vertices. The monogon, with N=1, has one vertex and one side that connects the vertex with itself.
The following table presents the established naming convention for polygons, depending on the number of edges N:

| N | Name | Comments |
| --- | --- | --- |
| 1 | monogon | degenerate |
| 2 | digon | degenerate |
| 3 | trigon | commonly called triangle |
| 4 | tetragon | commonly called quadrilateral |
| 5 | pentagon | |
| 6 | hexagon | |
| 7 | heptagon | or septagon |
| 8 | octagon | |
| 9 | nonagon | or enneagon |
| 10 | decagon | |
| 11 | hendecagon | or undecagon |
| 12 | dodecagon | |

Any n-gon with N>3 can be constructed as an assembly of triangles. In fact, the number of triangles required to construct an n-gon is always the same and equal to N−2. Since the sum of the interior angles in a triangle is constant and equal to 180° (or π), the sum of the interior angles in an n-gon, either convex or concave, is also constant and equal to (N−2)·π.

A polygon with N vertices can be constructed by a minimum of N−2 triangles.

Properties of regular n-gons

The focus comes to the regular polygons, hereafter. A regular polygon has all its edges equal and all its interior angles equal.

Symmetry

A regular n-gon features N axes of symmetry. All these axes meet at a common point, the center of the n-gon. If N is an even number, half of the axes pass through diagonally opposite vertices and the remaining ones pass through the midpoints of opposite edges. On the other hand, if N is odd, all the axes of symmetry pass through a vertex and the midpoint of its opposite edge.

Axes of symmetry in a regular n-gon.

Interior angle and central angle

A regular n-gon has equal interior angles by definition. Since there are N of them, and given that their total sum is (N−2)·π, as explained earlier, it can be concluded that each interior angle should be equal to:

φ = (N−2)·π / N = π − 2π/N

From the last expression, it is seen that the interior angle φ should be lower than π radians (or 180°). In other words, the regular n-gon cannot be concave. The value of φ approaches π asymptotically as N increases.

The central angle θ is the angle inside the triangle that is highlighted in the figure below. Specifically, it is the angle between two triangle edges connecting the center of the n-gon with two successive vertices. The third edge is an edge of the n-gon. There are N central angles in total around the common center, therefore each one should be:

θ = 2π / N

The remaining two angles in the highlighted triangle are equal to φ/2 (a line passing through the n-gon center and a vertex is a symmetry axis). It is worth mentioning that the interior and central angles are supplementary, since their sum is π:

φ + θ = (π − 2π/N) + 2π/N = π

Circumcircle and incircle

In any regular n-gon, a circle can be drawn that passes through all vertices. This is the circumscribed circle or circumcircle. The center of this circle is the center of the n-gon. The radius of the circumcircle is usually called the circumradius. Another circle can also be drawn that passes through the midpoints of the n-gon edges. This circle is called the inscribed circle or incircle. The radius of the incircle is usually called the inradius. The incircle is tangent to all N edges and its center is the same as that of the circumcircle.

Circumscribed and inscribed circles of regular n-gon.

The radii of the circumcircle, Rc, and the incircle, Ri, are related to the length of the edges, α. These relationships can be discovered using the right triangle with sides: the circumradius, the inradius and half the n-gon edge, which is shown in the next figure. Using basic trigonometry we may find:

Rc = α / (2·sin(θ/2))
Ri = α / (2·tan(θ/2))

where θ is the central angle and α the side length.
Substituting the equation for the central angle, θ = 2π/N, the last expressions become:

Rc = α / (2·sin(π/N))
Ri = α / (2·tan(π/N))
Ri = Rc·cos(π/N)

The above formulas reveal that for increasing values of N, the inradius asymptotically tends to the circumradius (because π/N approaches zero, and the cosine in the third equation approaches unity). It can be visualized that for large N's, the polygon looks very circular in shape, and therefore its circumcircle and incircle match more perfectly with the n-gon shape and, as a result, with each other.

Area

The total area of a regular n-gon can be divided into N identical isosceles triangles, as indicated in the figure below. The height of any of these triangles, perpendicular to the n-gon edge α, is indeed a radius of the incircle, therefore its length is equal to Ri. Therefore, the area of each triangle is α·Ri/2, and as a result, the total area of the N triangles becomes:

A = N·α·Ri / 2 = N·α² / (4·tan(π/N))

Alternatively, using the relationship α = 2·Rc·sin(π/N), the n-gon area can be expressed in terms of the circumradius, Rc, this way:

A = (N·Rc² / 2)·sin(2π/N) = (N·Rc² / 2)·sin(θ)

To reach the last formula, the trigonometric identity sin(2x) = 2·sin(x)·cos(x) was utilized, as well as the relationship between Ri and Rc, which, as seen before, is Ri = Rc·cos(π/N) or, in inverted form, Rc = Ri / cos(π/N).

Perimeter

The perimeter of any N-sided regular polygon is simply the sum of the lengths of all edges:

P = N·α

Bounding box

The bounding box of a planar shape is the smallest rectangle that encloses the shape completely. The bounding box is defined by its height h and width w. For a regular n-gon, the width is parallel to an edge while the height is perpendicular to it. Universal formulas for these dimensions are not possible though. A methodology can be improvised, however, based on the value of N.

Height

Depending on N, the height h can be equal to:

- the distance between two opposite edge midpoints, if N is even
- the distance between a vertex and the midpoint of an opposite edge, if N is odd.

In either case the height crosses the n-gon center. Therefore:

h = 2·Ri, if N is even
h = Ri + Rc, if N is odd

Width

Depending on N, the width w can be equal to either:

- the distance between two opposite edge midpoints, or
- the distance between two opposite vertices.

If N is an even number, the width passes through the center of the polygon. This does not happen, though, if N is an odd number. Therefore, two cases should be examined:

N is even

In this case, the width passes through the center of the polygon. If N/2 is even too, then the left and right sides of the bounding box touch two opposite edges, and the width is the distance between their midpoints. If N/2 is odd, on the other hand, the width connects two opposite vertices. In other words:

w = 2·Ri, if N/2 is even
w = 2·Rc, if N/2 is odd

N is odd

In this case, the width connects two opposite vertices; in other words, it is a diagonal of the n-gon. Since N is odd, N−1 should be even, by default. Depending on the parity of (N−1)/2 (odd or even) though, the diagonal that defines the width can be either above the center of the n-gon or below it. Specifically:

- if (N−1)/2 is even, the diagonal of the width lies above the center
- if (N−1)/2 is odd, the diagonal of the width lies below the center.

The next figure illustrates these two sub-cases. The calculation of w also becomes straightforward if the angle β, in the highlighted triangle, is found. This can be done by counting the number of central angles until we reach the vertex of the required diagonal, starting from the top. It is:

β = ((N−1)/4)·θ, if (N−1)/2 is even
β = ((N+1)/4)·θ, if (N−1)/2 is odd

Substituting θ = 2π/N, we get the same result for both sub-cases:

sin(β) = sin(π/2 ∓ π/(2N)) = cos(π/(2N))

The hypotenuse of the highlighted triangle is equal to the circumradius Rc, while the edge opposite to the angle β is half the wanted width.
Therefore, if the number of the regular n-gon edges is odd, the width can be calculated with the formula:

w = 2·Rc·cos(π/(2N))

Examples

Example 1

Determine the circumradius, the inradius and the area of a regular 17-gon, with edge length equal to α.

The circumradius and the inradius, in terms of the edge length α, for the regular n-gon, have been derived in the previous sections. These are:

Rc = α / (2·sin(π/N))
Ri = α / (2·tan(π/N))

Therefore, we simply have to substitute N = 17 and the given edge length α. The angle π/17 is approximately equal to:

π/17 ≈ 0.1848 rad

The circumradius is then found:

Rc = α / (2·sin(0.1848)) ≈ 2.721·α

Similarly, the inradius is calculated this way:

Ri = α / (2·tan(0.1848)) ≈ 2.675·α

Finally, the area of a regular n-gon is given by the following equation:

A = N·α² / (4·tan(π/N))

Substituting N = 17 we get:

A = 17·α² / (4·tan(0.1848)) ≈ 22.7·α²

Example 2

Calculate the edge length of the following regular n-gons:

1. a 9-gon (aka nonagon or enneagon), with given area A
2. a 12-gon (aka dodecagon), with given height h
3. a 4-gon (aka square), with given circumradius Rc.

1. Regular 9-gon with given area

The area of a regular n-gon, A, in terms of the side length α, is given by the equation:

A = N·α² / (4·tan(π/N))

Since we are looking for α, we have to rearrange the formula:

α = sqrt(4·A·tan(π/N) / N)

From the last equation we can calculate the required edge length α, if we substitute N = 9 and the given area A.

2. Regular 12-gon with given height

Because N = 12, which is an even number, the height h of the 12-gon is equal to:

h = 2·Ri

The inradius Ri is given by the formula:

Ri = α / (2·tan(π/N))

Therefore:

α = 2·Ri·tan(π/12) = h·tan(π/12)

3. Regular 4-gon with given circumradius

The circumradius Rc of a regular n-gon is related to its edge length α with the following formula:

Rc = α / (2·sin(π/N))

Therefore:

α = 2·Rc·sin(π/N)

From the last equation we can calculate the wanted edge length α, if we substitute N = 4 and the given circumradius, for the given 4-gon:

α = 2·Rc·sin(π/4) = Rc·√2

Example 3

Find the edge of a regular hexagon having the same area as a regular 15-gon, with given edge length α15.

The area of a regular n-gon is given by the formula:

A = N·α² / (4·tan(π/N))

Applying this formula specifically for the hexagon and the 15-gon we get:

A6 = 6·α6² / (4·tan(π/6))
A15 = 15·α15² / (4·tan(π/15))

It is A6 = A15, which means:

6·α6² / (4·tan(π/6)) = 15·α15² / (4·tan(π/15))

Rearranging:

α6 = α15·sqrt( (15·tan(π/6)) / (6·tan(π/15)) ) ≈ 2.61·α15

Substituting the given edge length α15 into the last equation, we get the wanted edge length of the regular hexagon.

Regular n-gon cheat-sheet

In the following table a concise list of the main formulas, related to the regular n-gon, is included.

| Regular n-gon formulas (by calcresource.com) | |
| --- | --- |
| Circumradius: | Rc = α / (2·sin(π/N)) |
| Inradius: | Ri = α / (2·tan(π/N)) |
| Height: | h = 2·Ri (N even), h = Ri + Rc (N odd) |
| Width: | w = 2·Ri (N even, N/2 even), w = 2·Rc (N even, N/2 odd), w = 2·Rc·cos(π/(2N)) (N odd) |
| Area: | A = N·α² / (4·tan(π/N)) |
| Interior angle: | φ = (N−2)·π / N |
| Central angle: | θ = 2π / N |

Table of regular n-gon properties

In the following table, some key properties of various n-gons are presented. Included are the interior angle φ, the central angle θ and the ratios Rc/α, Ri/α, A/α² and A/Rc², where:

- Rc is the circumradius,
- Ri is the inradius,
- α is the edge length,
- A is the area enclosed by the n-gon.

In other words, the presented ratio values in the following table represent the respective properties (circumradius, inradius and area) for an n-gon with α = 1, or with Rc = 1 for the last ratio.
| N | φ (°) | θ (°) | Rc/α | Ri/α | A/α² | A/Rc² |
| --- | --- | --- | --- | --- | --- | --- |
| 3 | 60 | 120 | 0.5774 | 0.2887 | 0.433 | 1.299 |
| 4 | 90 | 90 | 0.7071 | 0.5 | 1 | 2 |
| 5 | 108 | 72 | 0.8507 | 0.6882 | 1.72 | 2.378 |
| 6 | 120 | 60 | 1 | 0.866 | 2.598 | 2.598 |
| 7 | 128.6 | 51.43 | 1.152 | 1.038 | 3.634 | 2.736 |
| 8 | 135 | 45 | 1.307 | 1.207 | 4.828 | 2.828 |
| 9 | 140 | 40 | 1.462 | 1.374 | 6.182 | 2.893 |
| 10 | 144 | 36 | 1.618 | 1.539 | 7.694 | 2.939 |
| 11 | 147.3 | 32.73 | 1.775 | 1.703 | 9.366 | 2.974 |
| 12 | 150 | 30 | 1.932 | 1.866 | 11.2 | 3 |
| 13 | 152.3 | 27.69 | 2.089 | 2.029 | 13.19 | 3.021 |
| 14 | 154.3 | 25.71 | 2.247 | 2.191 | 15.33 | 3.037 |
| 15 | 156 | 24 | 2.405 | 2.352 | 17.64 | 3.051 |
| 16 | 157.5 | 22.5 | 2.563 | 2.514 | 20.11 | 3.061 |
| 20 | 162 | 18 | 3.196 | 3.157 | 31.57 | 3.09 |
| 24 | 165 | 15 | 3.831 | 3.798 | 45.57 | 3.106 |
| 30 | 168 | 12 | 4.783 | 4.757 | 71.36 | 3.119 |
| 40 | 171 | 9 | 6.373 | 6.353 | 127.1 | 3.129 |
| 50 | 172.8 | 7.2 | 7.963 | 7.947 | 198.7 | 3.133 |
| 60 | 174 | 6 | 9.554 | 9.541 | 286.2 | 3.136 |
| 100 | 176.4 | 3.6 | 15.92 | 15.91 | 795.5 | 3.14 |
| 200 | 178.2 | 1.8 | 31.83 | 31.83 | 3183 | 3.141 |
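To make the cheat-sheet formulas and the table above easy to check, here is a minimal Python sketch (not part of the original calculator; the function name ngon_properties and its interface are illustrative assumptions) that evaluates the formulas for a given N and edge length and reproduces a few rows of the table:

```python
import math

def ngon_properties(N, a=1.0):
    """Basic properties of a regular N-gon with edge length a, using the
    cheat-sheet formulas: Rc = a/(2 sin(pi/N)), Ri = a/(2 tan(pi/N)),
    A = N a^2/(4 tan(pi/N)), phi = (N-2)*180/N, theta = 360/N, plus the
    even/odd bounding-box cases described in the text."""
    if N < 3:
        raise ValueError("N must be at least 3 for a non-degenerate polygon")
    Rc = a / (2 * math.sin(math.pi / N))
    Ri = a / (2 * math.tan(math.pi / N))
    A = N * a ** 2 / (4 * math.tan(math.pi / N))
    phi = (N - 2) * 180.0 / N            # interior angle, degrees
    theta = 360.0 / N                    # central angle, degrees
    if N % 2 == 0:                       # bounding box, even N
        h = 2 * Ri
        w = 2 * Ri if (N // 2) % 2 == 0 else 2 * Rc
    else:                                # bounding box, odd N
        h = Ri + Rc
        w = 2 * Rc * math.cos(math.pi / (2 * N))
    return {"Rc": Rc, "Ri": Ri, "A": A, "P": N * a,
            "h": h, "w": w, "phi": phi, "theta": theta}

# Reproduce a few rows of the table above (edge length a = 1):
for N in (3, 4, 5, 6, 12):
    p = ngon_properties(N)
    print(f"N={N:2d}  phi={p['phi']:6.1f}  theta={p['theta']:6.2f}  "
          f"Rc/a={p['Rc']:.4f}  Ri/a={p['Ri']:.4f}  "
          f"A/a^2={p['A']:.3f}  A/Rc^2={p['A'] / p['Rc'] ** 2:.3f}")
```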
KB Journal 16.1 (Winter 2023)

KB Journal Editor: David Blakesley
Associate Editor: Rochelle Gregory

Kenneth Burke's Theory of Attention: Homo Symbolicus' Experiential Poetics by David Landes
A Flash of Light to Blurred Vision: Theorizing Generating Principles for Nuclear Policy from The Day After Trinity to the Year 2021 by Cody Hunter
Kenneth Burke's Late Theory of History: The Personalistic and Instrumentalist Principles by Michael Feehan
Kenneth Burke and the Gargoyles of Language: Perspective by Incongruity and the Transvaluation of Values in Counter-Statement and Permanence and Change by Jeremy Cox
A Survey of the Diverse Historical Uses of the Circumstantial Terms from Homer to Kenneth Burke and Beyond by Lawrence J. Prelli and Floyd D. Anderson
A Technological Psychosis: The Problem with "Overfishing" in the Magnuson-Stevens Act by Karen Gulbrandsen
The Morality Martyr Homology by Lisa Glebatic Perks
Slaying the Vile Beasts Within: Theorizing a Mortification Mechanism by Floyd D. Anderson and Kevin R. McClure
Review: Philosophical Turns: Epistemological, Linguistic, and Metaphysical by Robert Wess. Reviewed by Greig Henderson

The cover image is a screenshot from David Lynch's Twin Peaks, "The Return, Part 8" or "The Last Evening."

Kenneth Burke's Theory of Attention: Homo Symbolicus' Experiential Poetics

David Landes
11 November 2023

In light of cross-disciplinary interest in rethinking the conceptions of attention and attention economy, this paper conducts an archeology of Kenneth Burke's concepts in order to construct a theory of attention implicit in his work. First, I overview key parts of rhetorical studies calling for rethinking the idea of attention. Then, I read Burke's concepts for their implicit attentional aspects and implications. These findings are collected, listed into a glossary, and extrapolated into an account of Burkean attention, which I call "symbol-formed attention" to complement the reigning empirical theories of attention problematically borrowed from the sciences. I conclude by suggesting how Burke provides a rhetorical idea of "attention" as a terministic screen adaptively reconfigurable to situation and strategy.

A Flash of Light to Blurred Vision: Theorizing Generating Principles for Nuclear Policy from The Day After Trinity to the Year 2021

Cody Hunter

This essay examines contemporary arguments for nuclear weapons rearmament and disarmament by theorizing generating and generative principles in terms of principles of use and principles of existence through Kenneth Burke's temporizing of essence. The essay concludes with an audio/visual experiment that invites audiences to reconsider the generating principles implicit in their nuclear terms.

Kenneth Burke's Late Theory of History: The Personalistic and Instrumentalist Principles

Michael Feehan

In his last published article, "In Haste," Kenneth Burke outlined a new theory of history, a dialectical approach based on the two principles he had developed in the "Afterwords" to the third editions of Permanence and Change [PC] and Attitudes Toward History [ATH]: the personalistic principle and the instrumentalist principle.
These two new principles were developed through the four loci of motives that Burke had created in the two "Afterwords" and which he sloganized as "Bodies That Learn Language." The two principles differ from other similar principles dealing with intersecting developments between persons and technologies in that Burke's principles arise through his theory of symbolic action, depending on his unique distinction between (non-symbolic) motion and (symbolic) action.

Kenneth Burke and the Gargoyles of Language: Perspective by Incongruity and the Transvaluation of Values in Counter-Statement and Permanence and Change

Jeremy Cox

Ideas of transgression and transvaluation were central to Kenneth Burke's early writing and the development of his critical method of "perspective by incongruity." During the 1930s, Burke was concerned with the impact that art and criticism could have on the tumultuous Depression-era politics in which he was living. For him, language in general—and literature more specifically—can provide a vital corrective for a society trapped within its own misapplied terminologies. While Permanence and Change is typically considered to mark a shift in Kenneth Burke's interest from the socio-aesthetics of Counter-Statement to the critical inquiry of language itself, this paper argues that Burke's method of perspective by incongruity links the two works together as parts of a common project. Reading these works alongside archival material from the intervening period between their publications shows that Burke's initial concern with the radical potential of poetic invention evolved into a more general means of effecting social change.

A Survey of the Diverse Historical Uses of the Circumstantial Terms from Homer to Kenneth Burke and Beyond

Lawrence J. Prelli and Floyd D. Anderson

In this essay, we survey the diverse historical uses and functions of the circumstantial terms during more than three millennia of western thought and culture. In so doing, we reveal the originality and innovativeness of Kenneth Burke's use of the terms. Our survey also supports Burke's contention that the terms are "transcendental" because they represent "the basic forms of thought."

A Technological Psychosis: The Problem with "Overfishing" in the Magnuson-Stevens Act

Karen Gulbrandsen

A group of scientists publicly advocated to remove the word "overfishing" from the Magnuson-Stevens Act, calling its use metaphorical. I draw on Burke's terministic screens and technological psychosis to trace the implications embedded in the term and show how a terminological screen can become entrenched in dialectics that substantiate technology and innovation. This case raises questions about how to counter-balance a technological rationality that continues to dominate our perspective on many public issues.

The Morality Martyr Homology

Lisa Glebatic Perks

This article explicates a "morality martyr" homology with three characteristics: amoral actions against "good" characters, introspection, and a fatalistic final act. Formal morality martyr patterns are analyzed in two characters from The Walking Dead. Exposing the morality martyr's thinly-veiled suicide endorsement is an initial step in undercutting the deadly terministic cycle. Through comparison of the two characters, a merciful stretching of the formal pattern emerges, offering a set of values that preserve life through forgiveness.

Slaying the Vile Beasts Within: Theorizing a Mortification Mechanism

Floyd D. Anderson and Kevin R. McClure
We develop a mortification mechanism that complements Kenneth Burke's scapegoat mechanism. Employing Edward M. Kennedy's redemptive 1980 presidential primary campaign as our representative anecdote, we chart the stages of his mortification. Our findings show that self-victimage is more complex than scapegoating, has more ingredients, and possesses paradoxical qualities.

Review: Philosophical Turns: Epistemological, Linguistic, and Metaphysical by Robert V. Wess

Robert Wess, Philosophical Turns: Epistemological, Linguistic, and Metaphysical, Parlor Press, 2023. 288 pp. $34.99 (paperback); $69.99 (hardcover); $29.99 (PDF and EPUB)

Reviewed by Greig Henderson

Philosophical Turns is a tour de force, a sophisticated and erudite book that not only captures the cognitive and emotive rhythms of the contemporary philosophical conversation surrounding speculative realism but also becomes a distinctive voice within that conversation. A reviewer can but adumbrate and applaud the density, complexity, and richness of the arguments Wess prosecutes with rigor and artfulness, paying homage to him just as he paid homage to Richard McKeon, his mentor.
Editor

David Blakesley is the Campbell Chair in Technical Communication and Professor of Rhetorics, Communication, and Information Design at Clemson University. His books about or drawing from Burke include The Elements of Dramatism, Late Poems 1968–1993 by Kenneth Burke (with Julie Whitaker), and The Terministic Screen: Rhetorical Perspectives on Film (SIU Press, 2007). He received the Distinguished Service Award from the KB Society in 2005.

KB Journal's mission is to explore what it means to be "Burkean." To this end, KB Journal publishes original scholarship that addresses, applies, extends, repurposes, or challenges the writings of Kenneth Burke, which include but are not limited to the major books and hundreds of articles by Burke, as well as the growing corpus of research material about Burke.
It provides an outlet for integrating and critiquing the gamut of Burkean studies in communication, composition, English, gender, literature, philosophy, psychology, sociology, and technical writing. In light of this, Kenneth Burke need not be the sole focus of a submission, but Burke should be integral to the structure of the argument.
SIGNAL TIMING ON A SHOESTRING

III. Signal Timing Tool Box

U.S. Department of Transportation, Federal Highway Administration

When most Traffic Engineers consider signal timing, the first thought invariably involves the computerized optimization models. Issues like which model is best, and what are the minimum data required to use the model, are typical topics. Over the years, much research effort has been invested in developing these models, and of all of the steps in the signal timing process, the evolution of the signal timing optimization models is the most highly developed. When one mentions the word "model," most automatically think of a computer model. But it is important to recognize that a model can also be a manual model. The following sections provide a description of various manual and automated techniques that can be used to develop timing plans. These techniques can be used to estimate parameters directly, or to estimate various inputs to signal timing optimization computer programs that will be used to generate timing plans. The basic concept underlying the approach to minimizing timing plan development cost is to identify those parts of the process where resources should be directed to achieve the best benefit, and conversely, to identify areas where parameters can be approximated.

Data Collection Tools

Regardless of what computer model or manual process the Engineer chooses to use to develop the timing plans, all require network descriptive information and turning movement data. All signal optimization and simulation models, even manual signal timing procedures, require a physical description of the network. This description includes the distance between intersections (link length); the number and type of lanes; lane width, length, and grade; permitted traffic movements from each lane; and the traffic signal phase that services each flow. Building a network from scratch is a significant undertaking. But once the network is defined, in general, only traffic demand and signal timing parameters have to be updated to test a new scenario. The tools related to data collection are provided below.

Intersection Categorization

The intersection may be categorized as either Primary or Secondary. The primary intersections are the ones that have the highest demand-to-capacity ratio and will, therefore, require the longest cycles. These intersections are usually well known to the Traffic Engineer. They are the intersections of two arterials, the intersections with the worst accident experience, the intersections that service the major shopping centers, and the intersections that generate the most complaints. The secondary intersections are the ones that generally serve the adjacent residential areas and local commercial areas. They are usually characterized by heavy demand on the two major approaches and much less demand on the cross-street approaches. The purpose of assigning intersections to one of these two categories is to reduce the locations where traffic counts are required. The primary intersections require turning movement traffic counts—there is simply no other way to measure demand. However, the secondary intersections usually have side street demand that can be met with phase minimum green times (usually between 8 and 15 seconds, with lower values if presence detection is provided near the stop bar).
The strategy, therefore, is to concentrate the counting resources at the locations where there is no substitute, and to use minimum green times for the minor phases at secondary intersections. This categorization is important because more, and costlier, data are needed for the Primary intersections than for the Secondary intersections. Many of the timing parameters for the Secondary intersections will be estimated rather than calculated, and therefore, are subject to larger errors. This characterization is very subjective, and to a great extent, the categorization depends on the budget available for signal timing. If the budget is small, fewer intersections would be considered Primary; if the budget is moderate, more intersections on the cusp would be considered Primary.

Short-Count Method

Regardless of whether manual or computerized signal timing models are planned to be used, there is a need for turning movement count input to the process. The turning movement count is the single most costly element in the signal timing process, and therefore, is generally the most significant impediment to overcome. One way to reduce the expense of data collection is to reduce the time required to collect the data. Many traffic engineers use "short counts" to meet this objective. Short counts are normal turning movement counts that are conducted over periods that are shorter than normal. The basic concept of the short count is to take a sample of the turning movements during the period of interest and to expand the short period to reflect an estimate of the demand during the entire period. Fifteen-minute samples are typical, and they are expanded to hourly flow rates for use in the various signal timing procedures. One method of developing these counts, the Maximum Likelihood model, was defined by Maher in 1984.3 If the agency does not have a procedure in place for conducting short counts, the following is suggested:

- Determine the beginning and ending time of the period that the count is intended to represent.
- Within this time window, start a stop watch when the yellow ends for the through movement on the approach being observed.
- Record the number of vehicles turning left, through, and right during the cycle, measured from the end of yellow to the end of yellow, during each cycle.
- Continue recording the counts at the end of each cycle until at least 15 minutes have elapsed and at least eight cycles are recorded.
- For the last cycle, add the number of vehicles in queue (if any) to the count for the last cycle.
- Record the time on the stop watch (10 minutes or more).
- Convert the counts to an hourly flow rate for each movement.

Estimated Turning Movements

When turning movement counts are not available, it is sometimes possible to estimate the turning movements when approach and departure volumes are known and some information is available concerning the intersection flows. The National Cooperative Highway Research Program (NCHRP) developed techniques for estimating traffic demand and turning movements. These techniques are described in NCHRP 255, "Highway Traffic Data for Urbanized Area Project Planning and Design." One of the procedures described in this document derives turning movements using an iterative approach, which alternately balances the inflows and outflows until the results converge (up to a user-specified maximum number of row and column iterations).
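The iterative balancing idea behind that procedure can be illustrated with a short computational sketch. The code below is a generic row/column balancing loop, not the actual NCHRP 255 worksheet or the TurnsW program described next; the function name, seed matrix, and example volumes are illustrative assumptions, and it presumes the total approach and departure volumes are consistent.

```python
import numpy as np

def balance_turning_movements(seed, approach_in, departure_out,
                              max_iter=50, tol=1e-4):
    """Iteratively scale a seed turning-movement matrix so that its row sums
    match the approach (inflow) volumes and its column sums match the
    departure (outflow) volumes.  seed[i, j] is an initial guess for the
    volume entering on approach i and leaving on departure leg j."""
    m = seed.astype(float).copy()
    for _ in range(max_iter):
        row = m.sum(axis=1)  # balance rows against the approach inflows
        m *= np.divide(approach_in, row,
                       out=np.zeros_like(row), where=row > 0)[:, None]
        col = m.sum(axis=0)  # balance columns against the departure outflows
        m *= np.divide(departure_out, col,
                       out=np.zeros_like(col), where=col > 0)[None, :]
        if np.allclose(m.sum(axis=1), approach_in, rtol=tol):
            break            # row sums still match after the column pass
    return m

# Example: four-leg intersection, equal seed for all movements except U-turns
approach_in = np.array([800.0, 300.0, 750.0, 280.0])    # inflows N, E, S, W
departure_out = np.array([780.0, 310.0, 740.0, 300.0])  # outflows N, E, S, W
seed = np.ones((4, 4)) - np.eye(4)                      # zero seed for U-turns
print(balance_turning_movements(seed, approach_in, departure_out).round(0))
```

Locked-in movements, such as those offered by the TurnsW program described below, could be handled by fixing those cells and excluding them from the scaling; that refinement is omitted here for brevity.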
Dowling Associates, Inc., a traffic engineering and transportation planning consulting firm based in Oakland, California, developed a program, TurnsW, that can be used to estimate turning volumes given approach and departure volumes. This program is available from (under downloads). The user may "lock in" pre-determined volumes for one or more of the estimated turning movements. The program will then compute the remaining turning volumes based upon these restrictions.

Signal Grouping

To state the obvious, all signals that are synchronized together must operate on the same cycle length or a multiple of that cycle length. Since it is unlikely that all primary intersections will have the same cycle length requirements, some method must be used to arrive at a common cycle length. Engineering judgment usually prevails in this area. For example, if there are three intersections requiring 75-, 80-, and 110-second cycles, the 110-second cycle must be used. However, if the results were 80, 80, and 85, then an 80-second cycle may be appropriate. In general, the longest cycle length would be used. Another important point to make regarding the grouping of intersections is that the need to group the intersections is based on traffic demand. Since it is likely that traffic demand is different during different times of the day, it is reasonable to expect that different groupings of intersections may be appropriate during different times of the day. In practice, this may mean, for example, that an intersection is associated with a group and operates with the common group cycle length during a peak period, but operates as an isolated intersection during other time periods. It is important to recognize that intersection groupings are a function of traffic demand, and signal groupings are not a static condition.

Coupling Index

The Coupling Index is a simple methodology to determine the potential benefit of coordinating the operation of two signalized intersections. The theory is based on Newton's law of gravitation, which states that the attraction between two bodies is proportional to the size of the two bodies (traffic volume) and inversely proportional to the distance squared. In equation form, the Coupling Index is:

CI = V / D²

Where:
CI = Coupling Index
V = two-way total peak-hour traffic volume (thousands of vehicles per hour)
D = distance between the signals (miles)

There are several variations of this approach. The "Linking Factor" as used by Computran in Winston Salem, NC, and the "Offset Benefit" as described in NCHRP Report 3-18 (3) are two examples of similar techniques that have been used to determine signal group boundaries. A recent review and analysis of these grouping methods by Hook and Albers concluded that there is no absolute best method to use for determining where system breaks should occur.4 The authors further concluded that each method gives about the same result and the simpler methods are just as valid as the complicated methods. In general, they suggested that the following criteria be used (a short computational sketch of these screening criteria appears at the end of this signal-grouping discussion, just before Cycle Length Issues):

- Group all intersections that are within 2,500 feet of one another.
- Use all links that are 5,000 feet or more in length as boundary links.
- Calculate the Coupling Index for all links between 2,500 feet and 5,000 feet in length; link all intersections that have a value greater than 50, consider linking intersections that have a value of 1 to 50, and do not link intersections that have a value of less than 1.

The following process is suggested for use with any of the Index procedures.
The first step is to determine which sections of roadway are to be analyzed. These links are then drawn on a map, which may be distorted to provide space to display information related to each link. Various traffic data can be superimposed over the roadway network to determine applicable traffic volumes for the particular segment being registered. Some links may not have any corresponding traffic data; in that case, the segment is still registered, but with a zero value given for the traffic volume, which in turn results in a Coupling Index of zero. The next step is to calculate the indices for all of the registered links. The final step is to identify signal groups by linking together intersections with high index values and identifying group boundaries using links with low index values.

Major Traffic Flows

Another factor that should be considered when forming intersection groupings is traffic-flow demand paths. With an arterial, this issue is moot, but with a grid network, it can be crucial. With the grid pattern shown in the top chart in Figure 3, the horizontal dashed line shows a likely group boundary. However, when a major traffic-flow pattern does a dogleg, as shown in the bottom chart of Figure 3, then a different group boundary may be appropriate. This characteristic will probably manifest itself in the index, but when the signal engineer must make decisions based on sparse data, knowledge of traffic-flow patterns can be a useful discriminator to identify group boundaries.

Coordinatability Factor

There is one additional technique that can be employed by those that use the computer program Synchro. Synchro has an internal methodology to calculate a "coordinatability factor." This factor considers travel time, volume, distance, vehicle platoons, vehicle queuing, and natural cycle lengths. The coordinatability factor is similar to the "strength of attraction," but also considers the natural cycle length and vehicle queuing. The natural cycle length is defined as the cycle at which the intersection would run in an isolated mode, or the minimum delay cycle length. The potential for vehicle queues exceeding the available storage is also considered in determining the desirability of coordination.

Number of Timing Plans

The "rule of thumb" for the number of signal timing plans is that each group requires a minimum of four plans: morning peak plan, average day plan, afternoon peak plan, and evening plan. But each signal group is unique, and each group has unique demands. For example, an arterial that provides access to a regional shopping center may experience major demands on Saturday. Other examples abound of locations that require different timing plans to meet demands by other major traffic generators, such as amusement parks, recreational demands, and other non-work-related trips. One analytical method that can be used to estimate the need for a special timing plan is to plot the arterial traffic by direction and by time of day. The plot of the sum of both directions provides an indication of when cycle length changes may be required. Longer cycles are typically required to service heavier volumes. The ratio of one direction to the total traffic by time of day provides a good indication of when offset changes may be required. The number of plans required and the time during which they will be used is needed to schedule the site surveys described in the next section. Analysis of the traffic demands at individual intersections will indicate when split changes are required.
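As promised above, here is a minimal sketch of how the Coupling Index formula and the Hook and Albers screening criteria could be applied to a single link. The function names and the example volume are illustrative assumptions only.

```python
def coupling_index(two_way_volume_vph, distance_miles):
    """CI = V / D^2, with V the two-way peak-hour volume in thousands of
    vehicles per hour and D the signal spacing in miles."""
    return (two_way_volume_vph / 1000.0) / distance_miles ** 2

def grouping_decision(two_way_volume_vph, distance_feet):
    """Apply the distance and Coupling Index screening criteria summarized
    above to a single link between two signals."""
    if distance_feet <= 2500:
        return "group (short link)"
    if distance_feet >= 5000:
        return "boundary link (do not group)"
    ci = coupling_index(two_way_volume_vph, distance_feet / 5280.0)
    if ci > 50:
        return f"group (CI = {ci:.1f})"
    if ci >= 1:
        return f"consider grouping (CI = {ci:.1f})"
    return f"do not group (CI = {ci:.1f})"

# Example: 1,800 vph two-way on a 3,200-foot link
print(grouping_decision(1800, 3200))   # CI of about 4.9 -> "consider grouping"
```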
Cycle Length Issues

As noted above, having a common cycle length is fundamental to coordinated signal operation. The cycle length must be evaluated from two different perspectives: the individual intersection and the group cycle length. For the individual intersection, the recommended approach is to focus on the one or two major intersections in the group (the intersections with the highest demand), because these are the ones that will set the minimum cycle length limits. When evaluating cycle lengths, it is important to verify that the pedestrian timing is sufficient to allow pedestrians to cross the street. When the pedestrian timing is known, say 7 seconds for Walk, 10 seconds for Pedestrian Clearance, and 3 seconds for yellow change, and the vehicle phase is to be allocated at least 25 percent of the cycle, then the minimum cycle length that can meet both constraints is 20 seconds divided by 25 percent, or 80 seconds (assuming two critical phases). In general, the intersection in the group that requires the longest cycle length will set the group cycle length. The cycle length and splits can be determined by using either Webster's equation or the Greenshields-Poisson Method. Both of these methods are explained below. In general, for a given demand condition, there is a cycle length that will provide the optimum two-way progression. This cycle length is a function of the speed of the traffic on the links between intersections and the link distance between intersections. This cycle length is called the "Resonant Cycle," and is explained further below.

Webster's Equation

One approach to determining cycle lengths for an isolated pre-timed location is based on Webster's equation for minimum delay cycle lengths. The equation is as follows:

Cycle Length = (1.5 L + 5) / (1.0 − Y)

Where:
L = the lost time per cycle in seconds
Y = the sum of the degrees of saturation for all critical phases5

This method was developed by F.V. Webster of England's Road Research Laboratory in the 1960s. The research supporting this equation is based on measuring delay at a large number of intersections with different geometric designs and cycle lengths. These observations yielded the equation that is used today. It is important to recognize that this work assumed random arrivals and fixed-time operation—two conditions that can rarely be met in the United States. Notice that the equation becomes unstable at high levels of saturation and should not be used at locations where demand approaches capacity. Nevertheless, this technique provides a starting point when developing signal timings. To use this equation:

- Estimate the lost time per cycle by multiplying the number of critical phases per cycle (2, 3, or 4) by 5 seconds (estimated yellow change plus red clearance time) to determine the "L" factor. L will have a value of 10, 15, or 20, and the numerator will equate to 20, 27.5, or 35 seconds.
- Estimate the degree of saturation for each critical phase by dividing the demand by the saturation flow (normally 1,900 vehicles per hour per lane).
- Sum the degrees of saturation for all critical phases and subtract the sum from 1.0. This is the denominator.
- To obtain the cycle length, round the division to the next highest 5 seconds.

Greenshields-Poisson Method

This approach to signal timing is statistically based and makes several assumptions about the behavior of traffic. It uses the Poisson distribution to describe the arrival patterns of vehicles at an intersection. This distribution assumes that the vehicles travel randomly.
This assumption is frequently a problem in urban areas, but like other methods, it can provide a good starting point to develop signal settings. While the Poisson distribution is used to estimate the arrivals, the time required for the approach discharge is based on work done by B. D. Greenshields in 1947. Surprisingly, this work has held up well during the intervening 50 years. Like Webster, Greenshields based his work on many observations of traffic performance. The results of these studies are summarized in the equation:

Phase Time = 3.8 + 2.1 n

Where:
Phase Time is the required duration to service the queue
n is the number of vehicles in queue in the critical lane6

The basic procedure is iterative and uses the following steps:

1. Assume a cycle length. For two critical phases, we suggest 60 seconds; for three critical phases, we suggest 75 seconds; and for four critical phases, we suggest 100 seconds.
2. Calculate the number of cycles per hour by dividing 3,600 (seconds per hour) by the assumed cycle length.
3. For each critical phase, divide the demand volume by the number of lanes and by the number of cycles per hour to determine the mean arrival rate per lane.
4. Use the Poisson distribution (Table 1) to convert the Mean Arrival Rate to the Maximum Expected Arrivals at the 95th percentile level.
5. Convert this Maximum Expected Arrivals to the time required using Greenshields' equation.
6. Add the time required for each critical phase plus the clearance and change time required (nominally 5 seconds) for each critical phase.
7. If the sum is more than 5 seconds less than the assumed cycle, repeat the steps starting with the new (shorter) cycle length. If the sum is greater than the assumed cycle length by more than 5 seconds, repeat the steps but use the 90th or 85th percentile Maximum Expected Arrivals.
8. If the calculations using the 85th percentile arrivals indicate a cycle length greater than 80 seconds for two-phase operation, 100 seconds for three critical phases, or 120 seconds for four critical phases, then the volumes may be too high to use this method.

Table 1. Poisson Distribution.

| Mean Arrival Rate | 85th Percentile | 90th Percentile | 95th Percentile |
| --- | --- | --- | --- |
| 1 | 3 | 3 | 3 |
| 2 | 4 | 4 | 5 |
| 3 | 5 | 6 | 7 |
| 4 | 7 | 7 | 8 |
| 5 | 8 | 8 | 9 |
| 6 | 9 | 10 | 11 |
| 7 | 10 | 11 | 12 |
| 8 | 11 | 12 | 13 |
| 9 | 13 | 13 | 15 |
| 10 | 14 | 15 | 16 |
| 11 | 16 | 16 | 17 |
| 12 | 16 | 17 | 18 |
| 13 | 17 | 18 | 20 |
| 14 | 18 | 19 | 21 |
| 15 | 19 | 20 | 22 |
| 16 | 21 | 22 | 23 |
| 17 | 22 | 23 | 24 |
| 18 | 23 | 24 | 26 |
| 19 | 24 | 25 | 27 |
| 20 | 25 | 26 | 28 |

The Greenshields-Poisson Method is best suited to lower volume intersections. When the critical lane volume exceeds 400 vph, the basic assumption of random arrivals (no vehicle interactions) is probably not valid. Even within this range, care must be exercised. The method is designed to accommodate more vehicles than are expected on average; but some percentage of the time, 5 to 15 percent, the demand will exceed the time allocated and not all arrivals will be served. Care should be used to not apply this method at congested locations, as the process will suggest unrealistically long cycle lengths, which will result in high delay and long queues.
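Because the Greenshields-Poisson procedure is iterative, a short computational sketch may help. The version below is a simplified sketch, not the exact worksheet: it computes the percentile arrivals directly from the Poisson distribution (which can differ by a vehicle from the rounded values in Table 1) and simply re-iterates on the computed cycle rather than stepping down to the 90th or 85th percentile; the function names and example volumes are illustrative assumptions.

```python
import math

def poisson_percentile(mean, pct=0.95):
    """Smallest k such that P(X <= k) >= pct for Poisson arrivals with the
    given mean number of vehicles per cycle per lane."""
    k, term = 0, math.exp(-mean)
    cum = term
    while cum < pct:
        k += 1
        term *= mean / k
        cum += term
    return k

def greenshields_poisson_cycle(critical_volumes_vphpl, start_cycle=60.0,
                               pct=0.95, change_clearance=5.0):
    """Iterate: assume a cycle, convert each critical-lane volume (vph per
    lane) to mean arrivals per cycle, take the chosen Poisson percentile,
    apply Phase Time = 3.8 + 2.1 n, add change/clearance time, and repeat
    until the computed cycle is within 5 s of the assumed cycle."""
    cycle = float(start_cycle)
    for _ in range(20):
        cycles_per_hour = 3600.0 / cycle
        total = 0.0
        for v in critical_volumes_vphpl:
            n = poisson_percentile(v / cycles_per_hour, pct)
            total += 3.8 + 2.1 * n + change_clearance
        if abs(total - cycle) <= 5.0:
            return cycle, total          # assumed cycle and computed sum
        cycle = total                    # re-iterate on the computed value
    return cycle, total

# Example: two critical phases with 300 and 250 vph in the critical lanes
print(greenshields_poisson_cycle([300, 250], start_cycle=60))
```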
Cycle Length

When the traffic demand is balanced in both directions on the arterial, and when the distance between the intersections is approximately equal, then it is possible to obtain good progression in both directions by adjusting the cycle length using the following formulas:

(1) Cycle = 2 × Distance / Speed
(2) Cycle = 4 × Distance / Speed
(3) Cycle = 6 × Distance / Speed

Where:
Cycle is the cycle length in seconds
Distance is the link length in feet
Speed is the average link speed in feet per second.

These equations define resonant cycle lengths for this signal group.7 Notice that the only real-time variable in the equations is traffic speed, which is actually used to estimate link travel time. This implies that different cycle lengths would be appropriate when there is a significant change in the link speed. It is typical for link speeds to be slower during the peak periods. This implies that it may be appropriate to use a longer cycle length during peak periods. Once an appropriate cycle length is selected using one of the three formulas noted above, the offsets can be identified as follows:

Formula (1)—The offset of an intersection at one end of the arterial is set to an arbitrary value—many engineers use 0 seconds. The offset at the next intersection is set to the sum of the value of the offset at the first intersection plus 50 percent of the cycle. For example, if the offset of the first intersection is 0 and the cycle length is 100 seconds, then the offset of the second intersection is 50 seconds. The offset of the third intersection and all other odd-numbered intersections is the same as the offset at the first intersection, 0 seconds in the example. The offset at the fourth intersection and all other even-numbered intersections is the same as the offset at the second intersection, 50 seconds in the example. This method of setting signal timing is called a Single Alternate, and it is the most desirable because it provides the maximum bandwidth in both directions.

Formula (2)—The offsets of two intersections at one end of the arterial are set to an arbitrary value—0 seconds, for example. The offsets at the next two intersections are set to the sum of the value of the offset at the first intersection plus 50 percent of the cycle. For example, if the offset of the first and second intersections is 0 and the cycle length is 100 seconds, then the offset of the third and fourth intersections is 50 seconds. The offset of the fifth and sixth intersections is the same as the offset at the first and second intersections, 0 seconds in the example. The offset at the seventh and eighth intersections is the same as the offset at the third and fourth intersections, 50 seconds in the example. The offsets at additional intersections are set in a similar manner. This method of setting signal timing is called a Double Alternate, and it is useful when the intersections are spaced more closely. It provides bandwidths half that provided by the Single Alternate solution.

Formula (3)—The offsets at three intersections at one end of the arterial are set to an arbitrary value—for example, 0 seconds. The offsets at the next three intersections are set to the sum of the value of the offset at the first intersection plus 50 percent of the cycle. For example, if the offset of the first, second, and third intersections is 0 and the cycle length is 100 seconds, then the offset of the fourth, fifth, and sixth intersections is 50 seconds.
The offset of the seventh, eighth, and ninth intersections is the same as the offset at the first, second, and third intersections, 0 seconds in the example. The offsets at additional intersections are set in a similar manner in groups of three. This method of setting signal timing is called a Triple Alternate. This is appropriate for closely spaced intersections and provides a bandwidth one-third that of the Single Alternate.

The important point to recognize when testing various Resonant Cycle lengths is that the speed of traffic is set based on what the average driver considers reasonable, not on an arbitrary speed that provides the maximum bandwidth. It is a common error to put a timing plan in the field that looks great on paper but does not work because the vehicles are traveling faster (or slower) than the assumptions. Another related issue is that the average speed is not necessarily consistent throughout the day. It may be lower during the peak periods or at night, for example. Small errors in speed estimates can result in very poor signal timing (large offset errors), especially on suburban arterials where the distances between intersections are large. For example, estimating a speed of 30 MPH when in fact the true speed is 35 MPH will result in an offset error of 13 seconds on a 4,000-foot link.

Offset Issues

The offset is the heart of coordinated signal timing. It is the difference in time from a reference point in the cycle at the upstream intersection to the same point in the cycle at the downstream intersection. This reference point is usually taken to be the beginning of the main street green. The simplest offset to consider is the one-way offset. When the light turns green at the upstream intersection and the platoon travels down the link, it is desirable for the downstream controller to change to green when the platoon approaches. This offset is appropriate for one-way streets and for situations when heavy demand in one direction justifies ignoring counter-flowing traffic. Notice that this explanation deals with one link between intersections. Except at the ends of an arterial, the intersections on an arterial have one intersection upstream and another intersection downstream. It is important to recognize that changing the offset timing at one intersection affects the relative offset on four links. This is illustrated in Figure 4.

Figure 4. Offset Change Impacts.

In this example, the offset of the middle intersection is adjusted downward (earlier). Notice that this impacts the right-bound traffic flowing to the right intersection, as well as the two links of left-bound traffic. There is always a temptation to adjust the offset at one intersection to accommodate demand in one direction on one link without taking into account the effects of this change on the other three links. One way to manually analyze the offset impacts is to use the Kell Method described below.

One-Way Offset

For the predominant one-way flow situation, the expedient approach requires only an estimate of the median travel time between intersections. The offset, expressed in seconds, is set at the intersection farthest upstream to an arbitrary value—many engineers use 0 seconds. The offset at the nearest downstream intersection is determined by adding the travel time to the offset of the adjacent upstream intersection. The travel time is estimated by dividing the distance between the intersections by the average speed on the link.
This process continues until the offsets of all intersections in the group have been determined. Notice that this method of determining offsets is independent of the splits at each intersection and the cycle length. As a further refinement, many traffic engineers will provide sufficient time for any standing queue to discharge before the arriving platoon. To do this, estimate the total number of vehicles in queue (vehicles that arrive during the red and do not turn right). Divide this number by the number of lanes, and multiply the result by 2.5 seconds. Subtract this total from the offset that was determined by the link travel time. Adjusting for the average standing queue is referred to as the "Smooth Flow Offset." As with the basic offset calculation method, when adjusting for the standing queues, start at the upstream intersection and work downstream, calculating each offset by adding the travel time to, and subtracting the queue discharge time from, the upstream offset at each intersection.

Two-Way Offsets (Kell Method)

The Kell Method is a technique that can be used to manually construct a Time-Space Diagram that results in balanced offsets in both directions. This technique is named after its developer, Mr. James H. Kell, who was an instructor at the University of California, Berkeley. The process is straightforward and requires a minimum of input information. An estimate of the percent green for the main street at each intersection, the average speed on the arterial, and the distance between intersections are the only inputs required to use the technique. The products of the method are the cycle length for the arterial and the offset for each intersection that provides equal bandwidth in each direction. The process is as follows:

- Prepare a scale drawing laying out the intersections along the bottom of the page.
- Draw a vertical line at the first intersection on the left.
- Draw several cycles on the vertical line using a closed rectangle to represent the percent of time that the signal is NOT green.
- Draw a horizontal working line through the middle of either a green or a not-green.
- Draw a line that slopes upward and to the right at the beginning of green at the left-most intersection. The sketch would look something like that shown in Figure 5.

Figure 5. Kell Method (Beginning).

- Plot the cycle of the next intersection such that either the green or the not-green (whichever causes the beginning of green to come closest to the sloped line) is centered on the Working Line, as shown in Figure 6.

Figure 6. Kell Method (Continued).

- Continue plotting the cycle for the remaining intersections by centering either the green or the red. A completed diagram is shown in Figure 7.

Figure 7. Kell Method (Completed Diagram).

Notice that this technique forces a symmetrical solution that provides two-way progression with approximately equal bandwidths in each direction. The final step in the process is to determine the cycle length. In general, traffic will move on an arterial at a speed that the drivers consider reasonable for the prevailing conditions. The Engineer must estimate this speed and use it to determine the cycle length. Notice that the diagram shows that a vehicle requires approximately 1½ cycles to travel from intersection "A" to "D" in either direction. If the prevailing speed on the arterial were 35 MPH, then an appropriate cycle length would be 75 seconds.
This is determined by noting that it requires 1 ½ cycles to travel 5,800 feet which is equivalent to 3,867 feet per cycle. The cycle length is determined by dividing the distance (3,867 feet) by the speed, 51.33 feet per second (35 MPH), and the cycle is 75 seconds. Split Issues The split is the amount of time allocated to each phase in a cycle at each intersection. The toolbox offers two ways to calculate splits manually, the Greenshields-Poisson method previously described and the Critical Movement method which is described below. Critical Movement Method This method uses techniques that were employed in the 1984 Highway Capacity Manual as a “Planning Analysis” (Figure 8) to estimate intersection capacity. We have adapted elements of this analysis to use to develop traffic signal timing parameters. To use this method, intersection turning movements, the signal phasing, and the cycle length for the intersection must be known. The designer must determine the effective demand for each phase by applying various adjustment factors to reduce the demand to passenger car equivalents per lane. For the purposes of preparing traffic signal timing plans, a high level of precision is not needed. Traffic demand can vary plus or minus 20 percent in just a few minutes at a given location. Also a variation of 20 percent from day to day is not unusual. Our objective, therefore, is to develop timing plans that are robust and that will perform well through a wide range of demand conditions. The following steps are suggested: If the left turn movement is not protected, multiply the left turn demand by 1.6. If the number of trucks is known, multiply the trucks by 1.5. Divide the traffic demand on the four major and left-turn approaches by the number of lanes for each movement. Determine the critical movements for both the east-west street and the north-south street. Determine the intersection critical movement by adding these two together. If this sum is less than 1,500, then continue. If it is over 1,500 then the intersection is probably over-saturated and the method may not be applicable. Determine the number of critical movements in each cycle. With no left-turn protection, there would be two; with left turn protection on one street there would be three; and with left turn protection on both streets there would be four. Multiply the number of critical movements by five (the lost time), and subtract this number from the cycle length. This result is an estimate of the total available seconds of green per cycle that can be used for traffic movements. Figure 8. Critical Lane Analysis Example. The final step is to multiply the total available green time by the ratio of the critical lane volume for the movement to the total intersection critical lane volume. For pre-timed operation, this is the phase green time in seconds. For coordinated operation with actuated controllers, this phase time is used to set the Force-off for the phase. For all actuated phases, the calculated time is the average green time for the phase. The phase maximum should be set at 25- to 50-percent greater than this value. 3 Maher, M.J. “Estimating the Turning Flows at a Junction: A Comparison of Three Models,” Traffic Engineering and Control 25 (11), pages 19-22. 4 “Comparison of Alternative Methodologies to Determine Breakpoints in Signal Progression,” TRB Paper by David Hook and Allen Albers, 2002. 5 The critical phases are the ones that require the most green time. 
The flow ratio is calculated by dividing the volume by the saturation flow rate for that movement. 6 The critical lane or movement for each phase is the lane that requires the most green time. 7 “Resonant Cycles in Traffic Signal Control,” Shelby, S.G., Darcy Bullock, and Douglas Gettman, TRB Meeting, January 2005.
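The Critical Movement Method described above reduces to a short calculation. The sketch below is a minimal, hypothetical rendering of those steps: the demand figures, lane counts, and cycle length are invented for illustration, while the 1.6 unprotected-left factor, the 1.5 truck factor, the 1,500 threshold, and the 5-second lost time per critical movement follow the text.

```python
# Minimal sketch of the Critical Movement split calculation described above.
# All volumes are hypothetical; the adjustment factors follow the text.

LOST_TIME_S = 5.0   # lost time per critical movement

def adjusted_per_lane(volume, lanes, unprotected_left=False, trucks=0):
    """One movement's demand in passenger-car equivalents per lane.
    Assumes trucks are already counted in `volume`, so each adds 0.5 more."""
    v = volume + 0.5 * trucks
    if unprotected_left:
        v *= 1.6
    return v / lanes

def critical_splits(street_criticals, cycle_s):
    """Green per critical movement, proportional to its critical lane volume."""
    total = sum(street_criticals)
    if total > 1500:
        raise ValueError("Sum > 1,500: intersection probably over-saturated.")
    available = cycle_s - LOST_TIME_S * len(street_criticals)
    return [available * v / total for v in street_criticals]

if __name__ == "__main__":
    # No left-turn protection -> two critical movements (one per street).
    ew = max(adjusted_per_lane(780, 2), adjusted_per_lane(120, 1, unprotected_left=True))
    ns = max(adjusted_per_lane(430, 1), adjusted_per_lane(90, 1, unprotected_left=True))
    greens = critical_splits([ew, ns], cycle_s=90)
    print([round(g, 1) for g in greens])   # pre-timed greens, or Force-offs
```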
Concordia: GAME RULES

GAME OVERVIEW

Game setup: The setup of CONCORDIA is described in detail in the separate quick intro. Cards for sale that are not needed are taken out of the game (depending on the player count). After randomly determining the start player, the last player (the player to the right of the start player) receives the Praefectus Magnus.

Game flow: Players execute their turns in clockwise order. A player's hand consists of his unplayed personality cards. A player's turn consists of playing 1 card from his hand and executing the related actions. All played cards form a personal discard pile showing only the last card played. With the Tribune, a player takes back all cards previously played.

Game end: The game ends either after a player purchases the last card from the display on the board, or after the first player builds his 15th house. In either case, this player is awarded the CONCORDIA card. Now all other players execute their final turn, and then all players tally their final victory points.

Scoring victory points: Each personality card is related to an ancient god. These gods individually reward certain achievements (for instance the number of populated provinces, the number of colonists, etc.). If Concordia is played for the first time, it is recommended to conduct an intermediate scoring. The final (and intermediate) scorings are described in detail on the last page.

GAME MATERIAL

Game board: Imperium (3-5 players) / Italia (2-4 players); 5 storehouses; 30 city tokens; 24 bonus markers; coins (1, 2, 5 and 10 sestertii); 1 game rules; 1 quick intro; 1 historical information booklet; 110 wooden pieces in the player colors red, green, yellow, blue, and black, per player: 3 sea colonists, 3 land colonists, 1 scoring marker, 15 houses; 80 wooden units of goods: brick, food, tool, wine, cloth; 72 cards: 65 personality cards (35 starting cards, 7 per player, and 30 cards for sale, decks I-V), the Concordia card, the Praefectus Magnus card, and 5 player aids.

PERSONALITY CARDS

[Card illustrations: Architect, Prefect, Diplomat, and Tribune card faces.]

Tribune

1. Recover cards: The player recovers all of his previously played cards back into his hand. If the player takes back more than 3 cards (including the Tribune in the count), he receives 1 sestertius per card past the 3rd from the bank.

2. One new colonist: In addition, the player may optionally purchase 1 new colonist by paying 1 food and 1 tool to the bank and placing either a new land or sea colonist from his storehouse into “Roma”.

Example: A player who until now has played 4 cards now plays his TRIBUNE card. Therefore he takes a total of 5 cards back into his hand and receives 2 sestertii from the bank. In addition he decides to build a new colonist. He pays 1 food and 1 tool to the bank and places the new colonist inside ROMA on the game board. Placing the colonist frees one additional storage space for goods inside his storehouse.

We apologize for a misprint on the Diplomat card in deck IV.
The cost is tools, as depicted by the symbol. Architect 1. Move colonists The number of colonists a player has on the board determines the number of possible movement steps that a player can freely allocate to his own colonists. Land colonists are moved only along the brown lines and sea colonists only along the blue lines. A colonist’s first movement step is out of his starting city onto an adjacent line. Any further steps will move the colonist through a city and onto the next adjacent line to that city. At the end of his movement, a colonist cannot be placed on a line that is already occupied by another colonist. However, a colonist is allowed to move through occupied lines, adding the occupied sections passed through into his movement count. 2. Build houses (after all movements) The player may build houses in cities adjacent to any of his own colonists. Each new house built in a city is paid with goods and coins to the bank: • Goods: 1 food in a brick city, or 1 brick plus the good of that city type in every other city. • Coins: 1 sestertius in a brick city, 2 sestertii in a food city, 3 in a tool city, 4 in a wine city, and 5 in a cloth city. If a new house is built in a city where there are already other houses, the cost in coins is multiplied by the number of houses that will be in the city after this build (i.e., to build the fourth house in a city the cost in coins is multiplied by 4). The cost in goods remains the same. Players may not build more than 1 own house in a single city and never in “Roma”. Prefect The player chooses between two alternatives: a) The player chooses a province where the houses pro-duce goods. He can only choose an active province whose bonus marker (province tile) still shows the goods symbol. It is not necessary that the player (or any other player) owns a house in the chosen province. He flips the bonus marker of the province to its coin side and receives 1 unit of the goods type depicted on the bonus marker out of the bank. In addition all houses inside the province, regardless of their owner, each produce one unit of the goods produced in that city. or b) Instead of producing the player may choose to collect the cash bonus. For every visible coin on the bonus markers he receives 1 sestertius from the bank. After-wards all bonus markers are flipped back to the side showing their good’s symbol. Colonist The player chooses between two alternatives: a) The player may place new colonists on the game board each to be paid for with 1 food and 1 tool. New colonists can be placed inside “Roma” or inside any other city where the player owns a house. or b) The player receives 5 sestertii plus 1 sestertius for each of their own colonists on the game board. Red has 3 colonists, 1 land colonist is located between “Colonia A.” and “Novaria”, and the other 2 are still in “Roma”. Therefore he has 3 movement steps available. The black arrows show how he allocates his movements to his colonists. The sea colonist from “Roma” moves onto the sea line to “Massilia” (1 step), and the land colonist makes 2 steps onto the line between “Aquileia” and “Vindobona”. After moving his colonists he may build houses. All in all there are 5 cities adjacent to his colonists. But as he already owns a house in “Colonia A.” only 4 cities remain where he could build. He has enough goods and cash to build 3 new houses. He builds a house in “Massilia” (5 sestertii, 1 cloth, and 1 brick), in “Novaria” (4 sestertii, 1 wine, and 1 brick) and in “Aquileia” (6 sestertii, 1 food, and 1 brick). 
A house in a food city basically costs only 1 brick, 1 food, and 2 sestertii, but as “Aquileia” already has 2 houses the cash price is tripled. He pays the goods and sestertii to the bank and puts 3 new houses into the cities. Syria is able to produce because its bonus marker still shows the good’s symbol. Red plays the Prefect card, chooses Syria, and receives a bonus of 1 cloth as shown on the bonus marker. The Syrian bonus marker is now flipped over to its coin side and all houses in Syria produce: Red and Blue receive 1 food each, and Yellow receives 1 cloth. In the situation depicted down left a player who chooses the cash bonus would get 6 sestertii out of the bank, because there are 6 coins visible on the bonus markers (given that no other bonus markers show coins) and the bonus markers with coins would be turned over to show their units of goods again. Red has 2 food and 3 tools in his storehouse and plays the COLONIST card. Paying 2 food and 2 tools he places 2 colonists. He decides to place a new sea colonist in “Roma” and a new land colonist in “Massilia”. He could also have chosen to place one colonist in “Aquileia” instead, but not in “Novaria” because he has no house there (obviously, only a land colonist would be reasonable in “Novaria”). Or he could even place 2 colonists in the same city. PERSONALITY CARDS Mercator This turn is executed in 2 steps: 1. The player receives 3 sestertii out of the bank (or 5 sestertii with a purchased Mercator). 2. He may then trade in two types of goods with the bank. This means he may sell two types, buy two types, or sell one type and buy another. The number of units he may sell and/or buy is only limited by the free space inside his storehouse, where every single unit occupies one storage space. The trade is done at fixed prices, which are shown on the roof of the storehouses. Diplomat The player executes an action from a personality card that is on top of another player’s discard pile and thus is displayed face up in front of them. The action is executed the same as if the player had played that card himself. Actions of players who recently used a Diplomat card or took back their cards into their hand with their Tribune card cannot be copied. Senator The player may purchase up to two personality cards from the display on the game board and take them into his hand. The price of a card is the sum of: • The goods depicted inside the red field of the card • plus the goods depicted beneath the card’s position on the game board, where a question mark stands for a good of the player’s choice. After the purchase(s), all remaining personality cards inside the display move to the left if their left position is empty, and the display is replenished to the new total of 7 cards (as long as there are fresh cards inside the stack) Consul The player may purchase one personality card from the display on the game board and take it into the hand. The price consists only of the goods depicted inside the red field of the personality card. Any goods depicted beneath the card’s position on the game board are ignored. As with a SENATOR, the remaining cards inside the display move to the left if their left position is empty, and the display is replenished from the stack (as long as the stack exists). Specialists (Mason, Farmer, Smith, Vintner, Weaver) All the player’s houses of the related type of goods produce one unit each. Green has 2 sestertii cash and her storehouse is as depicted. 
She plays the Mercator card and receives 3 sestertii (it is not a purchased Mercator). She sells 3 units of wine for 3 x 6 = 18 sestertii to the bank, so that her total cash is now 23 sestertii. For her second type of goods she wants to buy bricks. She would be able to pay for up to 7 bricks, but there are only 5 storage spaces available in the storehouse, so she cannot buy more than 5 units. She decides to buy 4 units of bricks, paying 4 x 3 = 12 sestertii. She would also have loved to buy a unit of food, but that is not allowed, as she has already traded in two different types of goods.

The picture shows the personality cards recently played by the other 4 players [card illustrations: an Architect, a Diplomat, a Prefect, and a Senator]. The 5th player plays a Diplomat card. He now may execute the action of either the Senator, the Architect, or the Prefect.

The picture shows the cheapest 4 personality cards on sale [card illustrations: Mercator, Prefect, Farmer, and Architect with their purchase costs]. The player wants to purchase the Mercator and the Architect card. The Mercator card costs 1 unit of wine, and the Architect card costs 1 tool and 1 brick, all paid to the bank. Instead of a brick he could have paid with any other type of goods, as the question mark allows a free choice of goods for payment. He takes the Mercator and the Architect card into his hand. Now the Prefect card moves by 1 and all other remaining cards by 2 positions to the left. Finally, the 2 free spots on the display are replenished with 2 new cards from the stack. (The Farmer would have cost 1 brick, 1 food, and 1 cloth.)

The player wants to purchase the Colonist card, which is located in 6th position inside the card display. He pays only 1 unit of food, because the goods depicted on the game board (1 unit of free choice plus 1 cloth) are ignored. But he cannot purchase more than 1 card. He takes the Colonist into his hand and the Prefect card moves one position to the left. The former position of the Prefect is replenished with a new card from the stack. [Card illustrations: Colonist and Prefect card faces.]
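To make the Senator and Consul pricing rule concrete, here is a minimal sketch; the card cost and display-slot goods in the demo are hypothetical placeholders rather than an official card list.

```python
# Minimal sketch of card pricing: a Senator pays the card's printed cost plus
# the goods under its display slot; a Consul pays only the printed cost.

def purchase_cost(card_cost, slot_goods, via_consul=False):
    """Return the list of goods to pay for one card from the display."""
    return list(card_cost) if via_consul else list(card_cost) + list(slot_goods)

if __name__ == "__main__":
    architect_cost = ["tool"]          # printed cost (illustrative)
    slot = ["?", "brick"]              # "?" = any good of the buyer's choice
    print(purchase_cost(architect_cost, slot))                   # Senator
    print(purchase_cost(architect_cost, slot, via_consul=True))  # Consul
```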
The player has a total of 4 houses inside wine cities and plays the Vintner. She receives 4 units of wine and puts them on 4 empty storage spaces inside her storehouse. The other players do not receive any goods.

FINAL SCORING

Each personality card is related to an ancient god who rewards its owner with victory points. First, players gather all their cards, including the ones from their discard pile, and arrange them according to the different ancient gods. The back of the player aid shows a summary of the gods and the order in which they are scored. The victory points assigned to each card marked with the respective god are described in the following text. All victory points (VP) are tallied with the player's scoring marker on the VP-track. It is recommended to score VESTA first for all players, then JUPITER, and so on.

VESTA: The value of all goods in the storehouse (at the usual prices as depicted) is added to the cash money. The player then receives 1 VP per full 10 sestertii; any fraction is ignored.

JUPITER: For each house inside a non-brick city the player receives 1 VP. (max. 15 VP)

SATURNUS: For each province containing at least one of his houses the player receives 1 VP. (Imperium max. 12 VP, Italia max. 11 VP)

MERCURIUS: For each type of goods that the player produces with his houses, he receives 2 VP. (max. 10 VP)

MARS: For each of his colonists on the game board the player receives 2 VP. (max. 12 VP)

MINERVA: For each city of the related city type the player receives a certain number of VP, as depicted on the specialist's card.

Example: The player owns a total of 12 houses on the board, of which 3 produce bricks, 4 produce food, 3 produce tools, and 2 produce cloth. They are distributed over 7 provinces. Furthermore the player owns 5 colonists on the game board, has a total of 13 sestertii, and owns the Concordia card because he purchased the last card from the display area on the game board. The storehouse contains 1 cloth, 3 tools, and 1 brick. After arranging the cards as shown in the illustration, the final scoring gives the following result:

VESTA: His goods are worth 7 (1 cloth) + 15 (3 tools) + 3 (1 brick) sestertii. Added to his cash on hand (13 sestertii) he has 38 sestertii, which are worth 3 VP.

JUPITER: As 3 of his 12 houses are inside brick cities, he has 9 houses that count for this god. With 2 cards assigned to Jupiter he receives 2 x 9 = 18 VP.

SATURNUS: As the player has houses in 7 provinces and owns 4 cards assigned to Saturnus, he receives 4 x 7 = 28 VP.

MERCURIUS: Unfortunately the player has not built inside a wine city, but he does produce the other 4 types of goods. For his 2 cards assigned to Mercurius he receives 2 x 8 = 16 VP.

MARS: For his 5 colonists on the game board he receives 10 VP per card assigned to Mars, hence 3 x 10 = 30 VP.

MINERVA: The player owns a Farmer, which rewards him with 3 VP for each house inside a food city. With 4 such houses this results in 12 VP.

Together with 7 points from the Concordia card the player therefore achieves 114 victory points.

[Illustration: the player's personality cards arranged by god for the final scoring.]
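As a cross-check of the worked example above, here is a minimal scoring sketch; the board state and card counts are taken from the example, while the function and dictionary names are ad hoc and not part of any official material.

```python
# Minimal sketch reproducing the final-scoring example above (114 VP).

GOOD_VALUE = {"brick": 3, "food": 4, "tool": 5, "wine": 6, "cloth": 7}  # sale prices

def score(cards, houses_by_good, provinces, colonists, cash, storehouse,
          has_concordia=False, farmer_vp_per_food_house=3):
    total_houses = sum(houses_by_good.values())
    vp = 0
    # VESTA: goods value plus cash, 1 VP per full 10 sestertii.
    wealth = cash + sum(GOOD_VALUE[g] * n for g, n in storehouse.items())
    vp += wealth // 10
    # JUPITER: 1 VP per house in a non-brick city, per Jupiter card.
    vp += cards.get("JUPITER", 0) * (total_houses - houses_by_good.get("brick", 0))
    # SATURNUS: 1 VP per populated province, per Saturnus card.
    vp += cards.get("SATURNUS", 0) * provinces
    # MERCURIUS: 2 VP per produced goods type, per Mercurius card.
    produced_types = sum(1 for n in houses_by_good.values() if n > 0)
    vp += cards.get("MERCURIUS", 0) * 2 * produced_types
    # MARS: 2 VP per colonist on the board, per Mars card.
    vp += cards.get("MARS", 0) * 2 * colonists
    # MINERVA (Farmer specialist): 3 VP per food-city house.
    vp += cards.get("FARMER", 0) * farmer_vp_per_food_house * houses_by_good.get("food", 0)
    # Concordia card: 7 VP.
    vp += 7 if has_concordia else 0
    return vp

print(score(
    cards={"JUPITER": 2, "SATURNUS": 4, "MERCURIUS": 2, "MARS": 3, "FARMER": 1},
    houses_by_good={"brick": 3, "food": 4, "tool": 3, "cloth": 2},
    provinces=7, colonists=5, cash=13,
    storehouse={"cloth": 1, "tool": 3, "brick": 1},
    has_concordia=True,
))  # -> 114
```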
GAME END

If a player purchases the last personality card and thus empties the display on the game board, or if he builds his 15th house, he receives the Concordia card, which is worth 7 additional VP. Every other player now executes his last turn before the final scoring is carried out. The player with the most VP wins the game. A tie is won by the player owning the PRÆFECTUS MAGNUS, or by the tied player who would receive him next in the course of the game.

INTERMEDIATE SCORING

If a player plays his Tribune card for the first time in the game in order to take his cards back into his hand, he immediately performs a personal intermediate scoring. He scores all his cards in the same way as for the final scoring and tallies his VP on the VP-track. After all players have played their Tribune card for the first time and have received the intermediate scoring, these scores are compared and the player with the highest score receives 2 sestertii. Second place receives 1 sestertius. If players share the same position, they all receive the same amount (all in 1st place receive 2 sestertii and all in 2nd place receive 1 sestertius). After that, all scoring markers move back to the zero position on the VP-track. We do not recommend performing the intermediate scoring if all players know the game well.

FURTHER RULES

PRÆFECTUS MAGNUS: If a player who currently owns the PRÆFECTUS MAGNUS plays a Prefect card (or uses one with the Diplomat) in order to let a province produce, he receives a double bonus (2 units instead of 1). Production inside the cities is not affected. After his turn he hands the PRÆFECTUS MAGNUS to the player sitting to his right. A player must use the PRÆFECTUS MAGNUS when able and may not choose to forego its benefit to keep it for later. But if a player plays a Prefect card in order to receive the cash bonus, the PRÆFECTUS MAGNUS is not activated and remains with the player: it is not allowed to double the cash bonus.

STOREHOUSE: Each player has a storehouse with 12 storage spaces. Each space may house either one colonist or one unit of goods. At the beginning of the game, 4 storage spaces are occupied by colonists and therefore are not available for housing goods. However, if a player places new colonists on the game board, additional storage spaces become available. If all spaces are occupied in one way or another, no more units of goods can be taken in. It is not allowed to discard goods in order to make room for other ones.
If a player receives more goods than he has empty storage spaces, he may choose which specific units of goods to take, but he may not leave any storage space empty.

TRADE AND STOCKPILE: Players are not allowed to trade goods with each other. Goods and coins are considered to be unlimited. The number of colonists is restricted to 6 per player.
Testing Bell's Theorem with Circular Polarization

Richard A. Hutchin, Optical Physics Company, Simi Valley, CA, USA
Optics and Photonics Journal, Vol. 6 No. 11, November 2016. DOI: 10.4236/opj.2016.611029

Abstract

Bell tests with entangled light have been performed many times in many ways using linear polarizers, but the same tests have never been done with a circular polarizer. Until recently there has never been a true circular polarization beamsplitter, an optical component that separates light directly into left and right handed polarizations. Using a true circular polarization beamsplitter based on birefringent gratings, entangled light has been analyzed with unexpected results.

Keywords: Entangled Photons, Bell's Theorem, Circular Polarization, Tests of Quantum Mechanics

Cite as: Hutchin, R. (2016) Testing Bell's Theorem with Circular Polarization. Optics and Photonics Journal, 6, 289-297. doi: 10.4236/opj.2016.611029.

1. Introduction

Ever since Bell published his article proving what is now called Bell's Theorem, there has been a flurry of experiments done to verify various aspects, and entangled light has matured so much that it has even been transitioned into encrypted communications. In all this work, circular polarization beamsplitters were not used, for two reasons. 1) There were no real circular polarizers available. The best you could do was put a quarter-wave plate in front of a linear polarizer. What comes out is not circularly polarized, but it is theoretically supposed to match the transmission you would obtain if you actually had a real circular polarizer. 2) Unlike linear polarizers, which can be rotated over 180 degrees to give curves from which metrics can be extracted, there are only two circular polarizations, left and right, with nothing in between. Circular polarizer tests are therefore much less interesting than linear polarizer tests and do not help resolve continuing theoretical questions about Bell tests.

A true circular polarizer was invented about 10 years ago by Prof. Michael Escuti of the University of North Carolina, based on a birefringent grating. Unfortunately, this new component has been mostly unknown to optical researchers, since it was developed for display purposes and was never called a circular polarizer. The author has been using those gratings to create interferometric trackers for six years, and thus was familiar with their qualities and was able to apply a true circular polarizer to analyze entangled light.

2. Experiment Design

The basic construction of the circular polarization beamsplitter used here is a birefringent grating of 20 um period, which diffracts right circular polarization in one direction and left circular polarization in the opposite direction, with the usual diffraction angle equal to the wavelength divided by the 20 um period. In our case, using 806 nm entangled light, this angle is ±0.0403 radians = ±2.3 deg. The quality we especially value in these gratings is the polarization purity of the transmitted beams. According to the reference, the right and left circular diffraction orders are >99.8% pure right and left circular polarization, and the very weak zero order has the same polarization state as the input beam. Transmission is typically >95%.
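As a quick check of the geometry quoted above, here is a minimal sketch; the only inputs taken from the text are the 20 um grating period and the 806 nm wavelength, and the small-angle relation angle = wavelength / period is assumed.

```python
import math

# Quick check of the quoted diffraction angle for the 20 um polarization
# grating at the 806 nm entangled-photon wavelength.
wavelength_m = 806e-9
period_m = 20e-6

theta_rad = wavelength_m / period_m          # small-angle approximation
print(f"{theta_rad:.4f} rad = {math.degrees(theta_rad):.2f} deg")
# -> 0.0403 rad = 2.31 deg, i.e. the +/- 2.3 deg quoted in the text
```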
This true circular polarization beamsplitter is extremely useful for this QM experiment. The first test configuration is shown in Figure 1. It was used to verify the two entangled states desired for this experiment. or, which can be selected using a quarter-wave retarder or not. The entangled source was created using the BBO entangled light module from Newlight Photonics. This is a very standard entangled light generator to create a Bell test with linear polarizers. The various hardware elements for this experiment are listed below and photographs of the setup are included in the Appendix. 1) Laser source: Begin with a 50 milliwatt Radius Diode Laser at 403 nm (as calibrated by the manufacturer). Polarization > 100:1 TEM 0.0. Divergence 0.2 × 0.3 millirad. A good commercial laser. 2) BBO down-converter: Standard entangled light module from Newlight Photonics. We added a laser isolator to prevent back reflections, a half-wave plate to rotate the input polarization, a quarter-wave plate to convert linear polarization to circular, and a paired BBO crystal designed to generate down converted photon pairs at +/− 2.8 deg with respect to the incident laser beam. 3) Laser dump: We added a custom deflector to remove the unused 403 nanometer laser beam so as not to interfere with the entangled measurements. 4) Detectors: The detectors are not shown in Figure 1, but they are the very familiar Tau-SPAD detectors with a Hydra-HARP 300 pulse coincidence detector used by many labs to detect coincident photons. They have about 30% QE, and are sometimes used with linear polarizers in front to verify standard entanglement and sometimes with no polarizers when the birefringent grating polarizer above is moved into place. Figure 1. General layout of the standard test to verify proper photon entanglement. 5) 403 nanometer blocking: Since the laser pump is much brighter than the down- converted light, we used an absorbing plate to absorb any residue 403 nm light prior to reaching the Tau-SPAD detectors while transmitting the 806 nm entangled light. Also we used a narrow band filter of 96% transmission about 4 nm half-width tuned to 806 nm in front of each detector. These additional components reduced the transmission to the detectors to about 90%. However by adding them, any leaked laser light would not show significant counts. 3. Verification of the Entanglement The first test was to verify a high degree of entanglement with max-to-min = 139:1. We set both polarizers 1 and 2 to nominal vertical and scanned polarizer 2 in 10 degree steps. Our photon count rate was about 10,000 counts per second and coincident counts peaked at 2100 counts per second. In 200 seconds we accumulated up to 400,000 coincident counts per data point. The resulting data is shown in Figure 2. This data showed a total transmission times QE of slightly over 20%. Given our detector QE quoted at 30%, this was reasonable. The theoretical sinusoidal fit matched the data to 1.5× the shot noise limit-an rms noise of 3.2 × 10−4 on the coincident fraction. 4. Determining the Entangled State To characterize the relationship between the X 1 X 2 and Y 1 Y 2 states we set linear polarizer 1 to negative 45 deg from nominal vertical (counterclockwise from vertical) and scanned linear polarizer 2 as before from 0-180 (clockwise from vertical). If we were in the state, then we would expect this response curve to match Figure 3 shifted 45 deg to the left―which would begin with a huge dip over half the cycle (magenta curve in Figure 4). 
If the Y 1 Y 2 state is not perfectly phased to the X 1 X 2 state, then the modulation would drop a bit, but the general shape of the curve would remain the same. We plotted this curve dropped in modulation to best match the observed data. Clearly it is 180 deg out of phase. Note: Since the vertical axis is the symmetry axis for this experiment, it got labeled as X. the horizontal axis is labeled as Y. Conversely, if we were in the state, then we would expect this response curve to match Figure 3 extended 45 deg to the left―which would begin with a huge rise over half the cycle. Again if the Y 1 Y 2 state is not perfectly phased to the X 1 X 2 Figure 2. Typical entangled photon results from the standard Bell test using linear polarizers in our laboratory. Polarizer 1 was kept stationary and Polarizer 2 was rotated over 360 degrees. Contrast of 139:1 indicated a high level of entanglement for the vertical polarization. Figure 3. The data where polarizer 1 is rotated 45 deg counterclockwise, match the XX-YY entangled mode and wildly disagree with the XX + YY entangled mode. Figure 4. The entangled state deduced from the coincidence counts versus angle in Figure 2 and Figure 3 was applied to give an excellent match to the experimental data in Figure 2, where Polarizer 1 and 2 were set to nominal vertical and then Polarizer 2 was scanned over 180 degrees. state, then the modulation would drop a bit, but the general shape of the curve would remain the same. We plotted this curve dropped in modulation to best match the observed data. Clearly the state matches the data quite well, while the state (magenta curve) curves oppositely. We conclude that we are close to the state with a phase shift in the Y 1 Y 2 term to make the modulation drop. Entangled State Estimation Given this combined data from Figure 3 and Figure 4 (the blue dots), we then did a precise match of the phase state for both sets of data together and got the following entangled state. With these values we matched the observed data quite well as shown in Figure 4 and Figure 5. 5. QM Predictions for Circular Polarization Given this measured entangled state, we can rewrite it in terms of right and left circular polarization states, R and L. The quantum state of each of two Type 1 entangled photons is usually written as a superposition of both horizontal (X) and vertical (Y) quantum states as shown in Equation (1), where the subscript 1 or 2 applies to the photon in the first or second path. This is a mathematical way of saying that neither the photon nor nature knows what polarization applies to each photon but whatever they turn out Figure 5. The entangled state deduced from the coincidence counts versus angle in Figure 2 and Figure 3 was applied to give an excellent match to the experimental data in Figure 3, where Polarizer 1 was set to −45 deg and Polarizer 2 was scanned over 180 degrees. to be, they are the same. (1) One can also decompose these linear polarization states into circular polarization states R and L (for right and left circular) using the canonical QM transformation in Equations 2(a) and 2(b) from circular polarization states L and R to linear polarization states X and Y. (2a) (2b) Substituting these Equations (2a) and (2b) into Equation (1), we get Equation 3(a), which reorders into Equation (3b). (3a) (3b) This quantum state predicts that we should observe the two entangled photons with the same handedness 84.8% of the time and with the opposite handedness 15.2% of the time. 6. 
The Experimental Results The experiment for circular polarization measurements begins with a verified source of entangled photon pairs, which will then be passed through a circular polarizer to create left and right circular polarized photons. Figure 6 shows a diagram of the experimental setup, and Figure 7 shows a picture. When we add the circular polarizing birefringent grating in front of the two entangled beams coming from the BBO crystal, we got 4 different beams as shown in Figure 6. Each photon is labeled by its circular polarization (Left or Right) and its entangled beam (1 or 2). These four beams are identified in the drawing. We also remove the rotatable linear polarizers in front of the detectors because the beams now have a fixed and known circular polarization for each beam position. The table layout is shown in Figure 7. What we found experimentally (Table 1) is that one entangled photon transmits into the left circular polarization path and the other one into the right circular polarization path about 86.3% of the time (opposite handedness). The mean probability of the two photons having the same circular polarization is 13.7%. This is the opposite of QM predictions as shown in Table 1. Due to the high count rates, these data favor the opposite handedness hypothesis by over 300 sigmas compared to the QM predictions. Example: QM Prediction that R1R2 coincidences are more than L1R2 coincidences is contradicted by (177262 − 24511) = 152751 counts against that hypothesis with a Figure 6. A 403 nm UV laser is sent through a standard BBO Type 1 pair generator from Newlight Photonics. The two entangled beams of 806 nm light were passed through a circularly polarizing grating of 20 um period from Imagin Optix. The two weak zero orders pass through undeflected, while left circular light is deflected left and right circular light is deflected right. Standard Tau-SPAD detectors connected to a Pico-HARP 300 are used to find coincident counts. Pol 1 Pol 2 Counts 1 Counts 2 Counts Cts Coincident %Exp %QM % L1 R2 1,376,815 1,866,549 177,262 12.87%46.02%7.60% R1 L1 1,234,288 2,095,498 139,163 11.27%40.30%7.60% R1 R2 1,208,436 1,873,030 24,511 2.03%7.25%42.40% L1 L2 1,319,158 1,812,161 23,733 1.80%6.43%42.40% Table 1. Data taken with the circular polarization experiment for entangled photons showing that coincidences happen in the reverse handedness predicted by quantum mechanics. Figure 7. Bell test experiment for circular polarization. The entangled photon generator is behind the black wall on the right. standard deviation = sqrt(177262 + 24511) = 449 counts. The result is 340 sigmas against the QM prediction. There are four possible tests such as this one―all showing strong statistics against the QM predictions. 7. Conclusion An experiment was setup using circular polarization to test QM predictions for entangled photons. While the entangled setup performed normally using linear polarizers, it performed opposite to QM predictions with circular polarization with over 300 sigmas of statistical significance. Since circular polarization tests have never been reported for Bell tests before, these results suggest that other entangled facilities should repeat these tests to see if they find the same discrepancy. If other tests confirm these results, then more experiments can be done to understand and model these effects better. Acknowledgements The author would like to acknowledge many useful discussions with Dr. 
Reinhard Erdman, who is a strong supporter of quantum mechanics and skillfully debated the standard QM theory. Also, the setup and the data acquisition of this experiment were handled excellently and patiently by Mr. Chris Warren.

Conflicts of Interest

The author declares no conflicts of interest.

References

Bell, J.S. (1964) On the Einstein Podolsky Rosen Paradox. Physics, 1, 195-200.
Brunner, N., et al. (2014) Bell Nonlocality. Reviews of Modern Physics, 86, 419.
Komanduri, R.K., Jones, W.M., Oh, C. and Escuti, M.J. (2007) Polarization-Independent Modulation for Projection Displays Using Small-Period LC Polarization Gratings. Journal of the Society for Information Display, 15, 589-594.
Oh, C. and Escuti, M.J. (2008) Achromatic Diffraction from Polarization Gratings with High Efficiency. Optics Letters, 33, 2287-2289.
Hutchin, R.A. (2016) Two Axis Interferometric Tracking Device and Method. US Patent 9297880.
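As a numerical companion to Sections 5 and 6, the sketch below is a minimal illustration, not the author's analysis code: the state amplitudes and phase are free parameters chosen only to reproduce the quoted 84.8%/15.2% split (the paper's fitted state is not reproduced here), the sign conventions for the circular basis are one common choice, and the coincidence counts are the L1R2 and R1R2 values from Table 1.

```python
import math

# 1) Rewrite |psi> = a|X1 X2> + b e^{i phi}|Y1 Y2> in the circular basis
#    |R> = (|X> + i|Y>)/sqrt(2), |L> = (|X> - i|Y>)/sqrt(2) (one convention).
#    The same/opposite-handedness probabilities follow directly; (a, phi)
#    below are illustrative values only, not the fitted entangled state.
def handedness_probabilities(a: float, phi: float):
    b = math.sqrt(max(0.0, 1.0 - a * a))
    c = complex(b * math.cos(phi), b * math.sin(phi))
    p_same = abs(a - c) ** 2 / 2.0          # weight of |RR> and |LL> together
    p_opposite = abs(a + c) ** 2 / 2.0      # weight of |RL> and |LR> together
    return p_same, p_opposite

# Equal amplitudes and a phase near 134 deg reproduce the quoted 84.8%/15.2%.
phi = 2.0 * math.asin(math.sqrt(0.848))
print(handedness_probabilities(a=1 / math.sqrt(2), phi=phi))   # ~(0.848, 0.152)

# 2) Significance of the L1R2-versus-R1R2 comparison from Table 1.
n_l1r2, n_r1r2 = 177_262, 24_511
sigma = (n_l1r2 - n_r1r2) / math.sqrt(n_l1r2 + n_r1r2)
print(round(sigma))                                             # ~340
```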
ASIAN J. MATH. c ⃝2010 International Press Vol. 14, No. 1, pp. 073–108, March 2010 005 EMBEDDED CONSTANT MEAN CURVATURE HYPERSURFACES ON SPHERES∗ OSCAR M. PERDOMO† Abstract. Let m ≥2 and n ≥2 be any pair of integers. In this paper we prove that if H lies between cot( π m ) and bm,n = (m2−2)√n−1 n√ m2−1 , there exists a non isoparametric, compact embedded hypersurface in Sn+1 with constant mean curvature H that admits O(n) × Zm in its group of isometries. These hypersurfaces therefore have exactly 2 principal curvatures. When m = 2 and H is close to the boundary value 0 = cot( π 2 ), such a hypersurface looks like two very close n-dimensional spheres with two catenoid necks attached, similar to constructions made by Kapouleas. When m > 2 and H is close to cot( π m), it looks like a necklace made out of m spheres with m + 1 catenoid necks attached, similar to constructions made by Butscher and Pacard. In general, when H is close to bm,n the hypersurface is close to an isoparametric hypersurface with the same mean curvature. For hyperbolic spaces we prove that every H ≥0 can be realized as the mean curvature of an embedded CMC hypersurface in Hn+1. Moreover we prove that when H > 1 this hypersurface admits O(n)×Z in its group of isometries. As a corollary of the properties we prove for these hypersurfaces, we construct, for any n ≥6, non-isoparametric compact minimal hypersurfaces in Sn+1 whose cones in Rn+2 are stable. Also, we prove that the stability index of every non-isoparametric minimal hypersurface with two principal curvatures in Sn+1 exceeds n + 3. Key words. Constant mean curvature, embedded, principal curvatures. AMS subject classifications. 53C42, 53A10 1. Introduction. Minimal hypersurfaces of spheres that have exactly two prin-cipal curvatures at each point were initially studied by Otsuki in . He reduced the problem of classifying them, to that of solving an ODE, and the problem of de-ciding about their compactness, to the problem of studying an integral that relates periods of two functions involved in the immersions that he found. For surfaces in R3, Delaunay in 1841 showed that if one rolls a conic section on a line in a plane and then rotates about that line the trace of a focus, one obtains a CMC surface of revolution. CMC stands for constant mean curvature. This rolling construction was generalized for the case of CMC hypersurfaces in Rn+1 by Hsiang and Yu in the early eighties , and for CMC hypersurfaces in the hyperbolic space and the sphere by Sterling in 1987 . After Oksuki’s paper in 1970, several properties for a CMC hypersurface M ⊂ Sn+1 with exactly two principal curvatures were proved in , , , , , , , , , , and among others. For the case n = 2, we give explicit trigonometric formulas for immersions of CMC hypersurfaces in S3. A gallery of pictures of the stereographic projection of some of these surfaces, made by Schmitt, can be found in the GANG (Geometry Analysis Numerics Graphics, University of Massachusetts) web page. These surfaces are called unduloidal tori in S3 with m-lobes because all of them have Zm, for some m, in their group of symmetries. In this paper we will prove that this symmetry property holds in every dimension and we will also prove that for every positive integer m, if H lies between ∗Received May 30, 2009; accepted for publication January 20, 2010. †Department of Mathematics, Central Connecticut State University, New Britain, CT 06050, USA ([email protected]). 73 74 O. M. 
PERDOMO an,m = cot π m and bn,m = (m2 −2) √n −1 n √ m2 −1 then, there exists an embedded non-isoparametric unduloidal n-dimensional hyper-surface in Sn+1 with m-lobes and constant mean curvature H. Some previous results on the problem of determining which values of H can be realized as mean curvatures of CMC embedded hypersurfaces on n dimensional spheres were found by Otsuki and Furuya . They showed that H = 0 can not be the mean curvature of a non isoparametric minimal embedded hypersurface in Sn+1 with two principal curvatures. Later on, Leite and Brito in , showed that small positive values of H can be real-ized as non isoparametric values for CMC hypersurfaces in Sn+1 with two principal curvatures. This result by Leiti and Brito can be considered as the first step toward the solution of the problem considered in our paper: Given m ≥2, say exactly which values of H allow to embed a compact hypersurface with mean curvature H, and O(n −2) × Zm symmetry into Sn. I would like to point out that a big part of our paper is the understanding of a formula given by an integral. In the particular case when H = 0, this integral was studied by Furuya in 1971 and by Otsuki in 1972 and, in the general case, this integral was studied by Brito and Leite in 1990. Lemma (4.1) and its corollary (4.2) play an important role in the main result and they are responsible for the explicit bounds an,m and bn,m for H given above. The Lemma and techniques developed in this paper can be used to obtain similar results for hypersurfaces with two principal curvatures with generalized mean curvature Hk constant were Hk is the Hth symmetric function of the principal curvatures. A few weeks after the results of this paper were posted on the ArXiv, Cheng, Li and Wei obtained similar result for hypersurfaces with constant fourth mean curvature H4 on spheres . Similar results were also obtained by the author for hypersurfaces on space forms and . Since the formulas obtained for the CMC immersions of the sphere are very ex-plicit, it is not difficult to generalize them to obtain similar results in Euclidean spaces and hyperbolic spaces. See sections (6.2) and (6.1). As a consequence of the symmetries proven for all compact constant mean cur-vatures in Sn+1 with two principal curvatures everywhere, we proved that all such examples with H = 0 have stability index greater than n + 3. There is a conjecture stating that the only minimal hypersurfaces in Sn+1 with stability index n + 3 are the isoparametric ones with two principal curvatures. Some partial results for this conjecture were proven in . Also, since it is not difficult to prove that the square of the norm of the second fundamental form of these examples can be chosen to be as close as we want from those of the isoparametric examples, we point out that some of Otsuki’s minimal hypersurfaces produce examples of non isoparametric compact stable minimal truncated cones in Rn+2 for n ≥6. Recall that these examples are not embedded. Stable embedded minimal cones in Rn+2 for some values n were constructed by Hsiang and Sterling in . The author would like to express his gratitude to Professor Bruce Solomon for discussing the hypersurfaces with him and pointing out the similarity to Delaunay’s surfaces and to the referees for many valuable suggestions. This work was partially supported by a CCSU research grant. 2. Preliminaries. Let M be an n-dimensional hypersurface of the (n + 1)-dimensional unit sphere Sn+1 ⊂Rn+2. 
Let ν : M →Sn+1 be a Gauss map and EMBEDDED CMC HYPERSURFACES ON SPHERES 75 Ap : TpM →TpM the shape operator. Notice that Ap(v) = −¯ ∇vν for all v ∈TpM where ¯ ∇is the Euclidean connection in Rn+2. We will denote by ||A||2 the square of the norm of the shape operator. If X, Y and Z are vector fields on M, ∇XY represents the Levi-Civita connec-tion on M with respect to the metric induced by Sn+1 and [X, Y ] = ∇XY −∇Y X represents the Lie bracket, then the curvature tensor on M is defined by R(X, Y )Z = ∇Y ∇XZ −∇X∇Y Z + ∇[X,Y ]Z (2.1) and the covariant derivative of A is defined by DA(X, Y, Z) = Z⟨A(X), Y ⟩−⟨A(∇ZX), Y ⟩−⟨A(X), ∇ZY ⟩ (2.2) the Gauss equation is given by R(X, Y )Z = ⟨X, Z⟩Y −⟨Y, Z⟩X + ⟨A(X), Z⟩A(Y ) −⟨A(Y ), Z⟩A(X) (2.3) and the Codazzi equations are given by DA(X, Y, Z) = DA(Z, Y, X). (2.4) Let us denote by κ1, . . . , κn the principal curvatures of M and, by H = κ1 + · · · + κn n the mean curvature of M. We will assume that M has exactly two principal curvatures everywhere and that H is a constant function on M. Since it is known that M has to be isoparametric in the case that the multiplicities of both principal curvatures are greater than 1, , we will assume that κ1 = · · · = κn−1 = λ, κn = µ and (n −1)λ + µ = nH. By changing ν by −ν if necessary we can assume without loss of generality that λ−µ > 0. Recall that this hypersurface does not have umbilical points because we are assuming it has exactly two principal curvatures everywhere. Let {e1, . . . , en} denote a locally defined orthonormal frame such that A(ei) = λei for i = 1, . . . , n −1 and A(en) = µen. (2.5) The next Theorem is well known . For completeness sake and partly to prepare for the deduction of other formulas, we give a proof here. Theorem 2.1. If M ⊂Sn+1 is a CMC hypersurface with two principal curvatures and dimension greater than 2, and {e1, . . . , en} is a locally defined orthonormal frame such that (2.5) holds, then 76 O. M. PERDOMO v(λ) = 0 for any v ∈Span{e1, . . . , en−1} ∇ven = en(λ) µ −λ v for any v ∈Span{e1, . . . , en−1} ∇enen = 0 1 + λµ = en(en(λ) λ −µ) −(en(λ) λ −µ)2 [ei, ej] ∈Span{e1, . . . , en−1} for any i, j ∈{1, . . . , n −1}. Proof. For any i, j ∈{1, . . ., n −1} with i ̸= j (here we are using the fact that the dimension of M is greater than 2) and any k ∈{1, . . . , n}, we have that, DA(ei, ej, ek) = ek⟨A(ei), ej ⟩−⟨A(∇ekei), ej ⟩−⟨A(ei), ∇ekej ⟩ = ek(λ⟨ei, ej ⟩) −⟨∇ekei, A(ej) ⟩−λ⟨ei, ∇ekej ⟩ = ek(0) −λ⟨∇ekei, ej ⟩−λ⟨ei, ∇ekej ⟩ = 0 −λek(⟨ei, ej ⟩) = 0. On the other hand, DA(ei, ei, ej) = ej⟨A(ei), ei ⟩−⟨A(∇ejei), ei ⟩−⟨A(ei), ∇ejei ⟩ = ej(λ) −λej(⟨ei, ei ⟩) = ej(λ). By the Codazzi equation (2.4), we now get ej(λ) = 0, for all j ∈{1, . . . , n −1}, and therefore v(λ) = 0 for any v ∈Span{e1, . . . , en−1}. Now, DA(ei, en, ej) = ej⟨A(ei), en ⟩−⟨A(∇ejei), en ⟩−⟨A(ei), ∇ejen ⟩ = ej(λ⟨ei, en ⟩) −⟨∇ejei, A(en) ⟩−λ⟨ei, ∇ejen ⟩ = ej(0) −µ⟨∇ejei, en ⟩−λ⟨ei, ∇ejen ⟩+ (λ⟨∇ejei, en ⟩−λ⟨∇ejei, en ⟩) = (λ −µ)⟨∇ejei, en ⟩−λ ej(⟨ei, en ⟩) = (µ −λ)⟨ei, ∇ejen ⟩. Since µ −λ > 0, using the Codazzi equations we get ⟨ei, ∇ejen ⟩= 0 for any i, j ∈{1, . . . , n −1} with i ̸= j. (2.6) Now, for any i ∈{1, . . . , n−1}, using computations like those above we can prove DA(ei, ei, en) = en(λ) = DA(ei, en, ei) = (µ −λ)⟨ei, ∇eien ⟩ and EMBEDDED CMC HYPERSURFACES ON SPHERES 77 DA(en, en, ei) = ei(µ) = 0 = DA(ei, en, en) = (µ −λ)⟨ei, ∇enen ⟩. Therefore, ⟨ei, ∇eien ⟩= en(λ) µ −λ and ⟨ei, ∇enen ⟩= 0 for any i ∈{1, . . . , n −1}. (2.7) Since en is a unit vector field, we have that ⟨∇eken, en ⟩= 0 for any k. 
From the equations (2.6 ) and (2.7 ) we conclude that ∇ven = en(λ) µ −λ v for any v ∈Span{e1, . . . , en−1} and ∇enen = 0. Noticing that for any i, j ∈{1, . . . , n −1} with i ̸= j, using equation (2.6), we see that ⟨[ei, ej], en ⟩= ⟨∇eiej −∇ejei, en ⟩= ⟨ei, ∇ejen ⟩−⟨ej, ∇eien ⟩= 0. Therefore [ei, ej] ∈Span{e1, . . . , en−1}. Finally we will use Gauss equation to prove the differential equation on λ. First we point out that, using equation (2.7), we can prove ⟨[en, e1], en ⟩= 0 and therefore [en, e1] ∈Span{e1, . . . , en−1}. By the Gauss equation we then get, 1 + λµ = ⟨R(en, e1)en, e1 ⟩ = ⟨∇e1∇enen −∇en∇e1en + ∇[en,e1]en, e1 ⟩ = ⟨0 −∇en(en(λ) µ −λ e1) + en(λ) µ −λ [en, e1] , e1⟩ = −en(en(λ) µ −λ) + en(λ) µ −λ⟨∇ene1 −∇e1en, e1 ⟩ = −en(en(λ) µ −λ) −(en(λ) µ −λ)2 = en(en(λ) λ −µ) −(en(λ) λ −µ)2. 3. Construction of the examples. Maintaining the notation of the previous section, we now prove a series of identities and results that make it easier to state and prove the theorem that defines the examples at the end of this section. 3.1. The function w and its solution along a line of curvature. Since (n −1)λ + µ = nH, we have λ −µ = λ −(nH −(n −1)λ) = n(λ −H) = nw−n where w = (λ −H)−1 n . (3.1) Recall that we are assuming that λ −µ is always positive, so w is a smooth differentiable function. By the definition of w in (3.1) we have 78 O. M. PERDOMO en(w) = −1 n(λ −H)−n+1 n en(λ) = −1 nwn+1 en(λ) = −w en(λ) λ −µ. (3.2) Using w, the second order differential equation in Theorem (2.1) can be written as en en(w) w  + en(w) w 2 + 1 + λµ = 0 (3.3) and if we write λ and µ in terms of w we get en en(w) w  + en(w) w 2 −(n −1) w2n −(n −2)H wn + H2 + 1 = 0. (3.4) Deriving the previous equation, we have used the following identities, λ = w−n + H and µ = H −(n −1)w−n. (3.5) From Equation (3.2) we now get en(λ) = −(λ −µ) en(w) w . (3.6) This allows us to write one of the equations in Theorem (2.1) as ¯ ∇ven = en(w) w v for any v ∈Span{e1, . . . , en−1}. (3.7) Notice that equation (3.4) reduces to en(en(w)) w −(n −1) w2n −(n −2)H wn + H2 + 1 = 0 (3.8) and therefore multiplying by 2wen(w) we see that there exists a constant C such that, (en(w))2 + w2−2n + (1 + H2)w2 + 2Hw2−n = C. (3.9) The equation above plays an important role in the constructions of immersions with CMC in Sn and it was also proven by Wei in . Let x : M →Rn+2 denote the position vector, viewed as a map, and by ¯ ∇the Euclidean connection on Rn+1. Using the equations in Theorem (2.1) and the fact that ¯ ∇vx = v, ⟨x, ν(x) ⟩= 0 and ⟨ν(x), ν(x) ⟩= 1, we get that ¯ ∇enen = −x + µν (3.10) ¯ ∇enν = −µen (3.11) ¯ ∇enx = en. (3.12) Fix a point p0 ∈M, and let us denote by γ(u) the only geodesic in M such that γ(0) = p0 and γ′(0) = en(p0). Since ∇enen vanishes, then γ(u) = en(γ(u)). Notice that γ(u) is also a line of curvature. Let g(u) = w(γ(u)). Equation (3.9) implies that EMBEDDED CMC HYPERSURFACES ON SPHERES 79 (g′)2 + g2−2n + (1 + H2)g2 + 2Hg2−n = C (3.13) or equivalently, gn−1 g′ p Cg2n−2 −1 −(1 + H2)g2n −2Hgn = ±1. (3.14) It is clear that the constant C must be positive and moreover, in order to solve this equation we need to consider a constant C such that the polynomial ξ(s) = Cs2n−2 −1 −(1 + H2)s2n −2Hsn (3.15) is positive on a interval (t1, t2) with 0 < t1 < t2 and ξ(t1) = 0 = ξ(t2). Notice that for every H we may pick a C such that ξ is positive on an interval because ξ is a polynomial of even degree with negative leading coefficient, ξ(0) = −1, and if C is big enough, this polynomial takes positive values for positive values of s. 
Let us assume that t1 and t2 are as above and also that ξ′(t1) and ξ′(t2) are not zero, so that the following formula for G is well defined on [t1, t2]: G(s) = Z s t1 tn−1 p Ct2n−2 −1 −(1 + H2)t2n −2Htn dt for t1 ≤s ≤t2. Let T = 2G(t2). Since G′(s) > 0 for s ∈(t1, t2), G has an inverse. Denoting it by F : [0, T 2 ] →[t1, t2], a direct verification shows that the T -periodic function given by g(u) = F(u) for 0 ≤u ≤T 2 and g(u) = F(T −u) for T 2 ≤u ≤T solves equation (3.14). 3.2. The vector field η . Now define the following vector field along M η = −en(w) w en + λ ν −x. It has the following properties 1. ⟨η, η⟩= (en(w) w )2 + λ2 + 1 = C w2 , which follows from Equation (3.9) and the definition of λ in terms of w, (3.5). 2. ¯ ∇enη = −en(w) w η. This crucial fact makes all the constructions work in this section. The equation follows from Equations (3.10), (3.11) and (3.12 ) and the first and second differential equations for the function w, especially, Equa-tion (3.3) and Equation (3.6). 3. For any i ∈{1, . . . , n −1}, ¯ ∇ei(x + w2 C η) vanishes. The proof of this identity is similar, and additionally, uses the Equation (3.7). 4. ⟨x + w2 C η, x + w2 C η⟩= 1 −w2 C . 80 O. M. PERDOMO 3.3. Vector fields that lie on a plane . Now that we have computed g(u) = w(γ(u)), we can better understand the geodesic γ. The equations (3.10), (3.11) and (3.12 ) imply that X(u) = en(γ(u)), Y (u) = ν(γ(u)) and Z(u) = γ(u) satisfy an ordinary linear differential equation in the variable u with periodic coeffi-cients (notice that µ(γ(u)) is a function of g(u)). By the existence and uniqueness theorem of ordinary differential equations, the solutions X(u), Y (u) and Z(u) must lie in the three dimensional space Γp0 = Span{en(p0), ν(p0), p0}. (3.16) For the sake of simplicity, we will consider the T -periodic function r : R →R defined by r(u) = g(u) √ C . It is not difficult to check that r satisfies the equations r′′ r + 1 + λµ = 0, (r′)2 + r2 (1 + λ2) = 1, λ′ = −(λ −µ)r′ r . (3.17) In the previous equations we are abusing notation with the name of the functions λ and µ. Here and whenever the context dictates it, they will also denote the functions λ(γ(u)) and µ(γ(u)) respectively. To construct our examples, the function 1−r2 needs to be positive. We can achieve this by assuming H ≥0 because that will imply λ > 0, and therefore r < 1. Define the following vector fields along γ B1(u) = η(γ(u)) = −r′ r X + λ Y −Z B2(u) = − rr′ √ 1 −r2 X + r2λ √ 1 −r2 Y + p 1 −r2Z B3(u) = rλ √ 1 −r2 X + r′ √ 1 −r2 Y. Using the equations in section (3.2), Equations (3.10), (3.11) and (3.12 ) giving the derivative of the vector fields X, Y and Z, and Equation (3.17), we can check the following properties. 1. B1(u), B2(u) and B3(u) lie on the three dimensional subspace Γp0. 2. B′ 1 = −r′ r B1. 3. ⟨B1, B2⟩= 0, ⟨B1, B3⟩= 0 and ⟨B2, B3⟩= 0. EMBEDDED CMC HYPERSURFACES ON SPHERES 81 4. ⟨B2, B2⟩= 1, ⟨B3, B3⟩= 1 and ⟨B1, B1⟩= 1 r2 . 5. From the previous items we get that B′ 2 = hB3 and B′ 3 = −hB2 for some function h : R →R. These equations hold because ⟨B′ 2, B1⟩= −⟨B′ 1, B2⟩= r′ r ⟨B1, B2⟩= 0 likewise ⟨B′ 3, B1⟩= 0. 6. From the previous item we get that the vectors B2 and B3 lie in a two dimensional subspace. 7. We have ⟨B′ 3, Z⟩= − rλ √ 1 −r2 and ⟨B2, Z⟩= p 1 −r2. Therefore the function h in the previous item is given by rλ 1−r2 . It follows that, B′ 2 = rλ 1 −r2 B3 and B′ 3 = − rλ 1 −r2 B2. The fact that h does not change sign when λ > 0, in particular when H ≥0, will help us prove that for some choices of C the hypersurface M is embedded. 
8. If we assume without loss of generality that 1 |B1(0)| B1(0) = (0, . . . , 1, 0, 0), B2(0) = (0, . . . , 0, 1, 0) and B3(0) = (0, . . . , 0, 0, 1) then, B1(u) = 1 r (0, . . . 0, 1, 0, 0) B2(u) = sin(θ(u))(0, . . . 0, 0, 1) + cos(θ(u))(0, . . . , 0, 1, 0) B3(u) = cos(θ(u))(0, . . . 0, 0, 1) −sin(θ(u))(0, . . . , 0, 1, 0) where θ : R →R is given by θ(u) = Z u 0 r(s)λ(s) 1 −r2(s)ds. 9. If K = K(H, n, C) = θ(T ) = Z T 0 r(s)λ(s) 1 −r2(s)ds = 2 Z T 2 0 r(s)λ(s) 1 −r2(s)ds then, for any positive integer m and any u ∈[mT, (m + 1)T ] we have θ(u) = mK + θ(u −mT ). This property is a consequence of the existence and uniqueness theorem for differential equation and will be used to prove the invariance of M under some rotations. 82 O. M. PERDOMO 10. If q(u) = γ(u) + r2(u)η(γ(u)), then ⟨q, q⟩= 1 −r2 and B2 = q |q| i.e q = p 1 −r2 B2. 3.4. A classification of constant mean curvature hypersurfaces in spheres with two principal curvatures . We are ready to define the examples of constant mean curvature hypersurfaces on Sn+1 when n ≥2. Here is the theorem: Theorem 3.1. Let n be an integer greater than 1 and let H be a non-negative real number. 1. Let gC : R →R be a T -periodic solution of the equation (3.13) associated with this H and a positive constant C. If λ, r, θ : R →R are defined by r = gC √ C , λ = H + g−n C and θ(u) = Z u 0 r(s)λ(s) 1 −r2(s)ds then, the map φ : Sn−1 × R →Sn+1 given by φ(y, u) = ( r(u) y, p 1 −r(u)2 cos(θ(u)), p 1 −r(u)2 sin(θ(u)) ) (3.18) is an immersion with constant mean curvature H. 2. If K(H, n, C) = 2 R T 2 0 r(u)λ(u) 1−r2(u) du = 2π k for some positive integer k, then, the image of the immersion φ is an embedded compact hypersurface in Sn+1. More generally, if K(H, n, C) = 2kπ m for a pair (k, m) of integers, then, the image of φ is a compact hypersurface in Sn+1. 3. Let n be an integer greater than 2, and let M ⊂Sn+1 be a connected compact hypersurface with two principal curvatures λ with multiplicity n−1, and µ with multiplicity 1. If λ−µ is positive and the mean curvature H = (n−1)λ+µ is a non-negative constant, then, up to a rigid motion of the sphere, M can be written as an immersion of the form (3.18). Moreover, M contains O(n)×Zm in its isometry group, where m is the positive integer such that K(H, n, C) = 2kπ m , with k and m relatively prime. Proof. Defining B1 and B2 as before we have that φ(y, u) = r(u)(y, 0, 0) + p 1 −r(u)2B2(u). A direct verification shows ∂φ ∂u = r′ (y, 0, 0) − r r′ √ 1 −r2 B2 + λ r √ 1 −r2 B3. We have ⟨∂φ ∂u, ∂φ ∂u⟩= 1 and that the tangent space of the immersion at (y, u) is given by Tφ(y,u) = {(v, 0, 0) + s ∂φ ∂u : ⟨v, y⟩= 0 and s ∈R}. EMBEDDED CMC HYPERSURFACES ON SPHERES 83 A direct verification shows that the map ν(y, u) = −r(u)λ(u) (y, 0, 0) + r2(u) λ(u) p 1 −r2(u) B2(u) + r′(u) p 1 −r2(u) B3(u) satisfies ⟨ν, ν⟩= 1, ⟨ν, ∂φ ∂u⟩= 0, and for any v ∈Rn with ⟨v, y⟩= 0 we have ⟨ν, (v, 0, 0)⟩= 0. It then follows that ν is a Gauss map of the immersion φ. The fact that φ has constant mean curvature H follows because for any unit vector v in Rn perpendicular to y, we have β(t) = (r cos(t)y + r sin(t)v, 0, 0) + p 1 −r2 B2 = φ(cos(t)y + r sin(t)v, u) satisfies that β(0) = φ(y, u), β′(0) = rv and dν(β(t)) dt t=0 = dν(rv) = −rλ v. Therefore, the tangent vectors of the form (v, 0, 0) are principal directions with principal curvature λ and multiplicity n −1. Now, since ⟨∂φ ∂u, (v, 0, 0)⟩= 0, we have that ∂φ ∂u defines a principal direction, i.e. we must have that ∂ν ∂u is a multiple of ∂φ ∂u. 
A direct verification shows that if we define µ : R →R by µ(u) = nH −(n −1)λ(u), then, ⟨∂ν ∂u, y⟩= −λ′ r −λr′ = (λ −µ)r′ −λ r′ = −µ r′ = −(nH −(n −1)λ)r′. We also have that ⟨∂φ ∂u, y⟩= r′, therefore, ∂ν ∂u = dν(∂φ ∂u) = −µ ∂φ ∂u = −(nH −(n −1)λ)∂φ ∂u. It follows that the other principal curvature is nH −(n−1)λ. Therefore φ defines an immersion with constant mean curvature H, which proves the first item in the Theorem. In order to prove the second item, we notice that if K(H, n, C) = 2π k for some positive k, θ(kT ) = 2π, which makes the image of φ compact. It is also embedded because φ is one-to-one for values of u between 0 and kT as we can easily check using the fact that whenever H ≥0, the function θ is strictly increasing. Recall that under these circumstances θ(0) = 0 and θ(kT ) = 2π. The proof of the other statement in this item is similar. Let us prove the next item. For n > 2, consider a minimal hypersurface M with the properties of the statement. We will use the notation we used in the preliminaries, in particular the function w : M →R is defined by the relation (λ −µ) = nwn. We will also assume that B1(0), B2(0) and B3(0) are chosen as before. By Theorem (2.1) we get that the distribution Span{e1, . . . , en−1} is completely integrable. Let us fix a point p0 in M and define the geodesic γ : R →M, and the functions r : R →R as before and let us denote by Mu ⊂M the (n −1)-dimensional integral submanifold of M of this distribution that passes through γ(u). We define the vector field η on M as before. Recall that B1(u) = η(γ(u)). Fixing a value u, let us define the maps 84 O. M. PERDOMO ρu, ζu : Mu →Rn+2 by ρu(x) = x + w2(x) C η and ζu(x) = ν(x) + λ(x) x. Using the equations in section (3.2) we find that the maps ρu and ζu are constant. Therefore, ρu(x) = x + w2(x) C η = γ(u) + r2(u)B1 = p 1 −r2B2. Notice that for every x ∈Mu, we have |x −ρu(x)|2 = |Z(u) − p 1 −r2B2(u)|2 = r2(u). Therefore Mu is contained in a sphere with center in √ 1 −r2B2 and radius r. We have that the vectors e1, . . . , en−1 are perpendicular to the vectors ρu(x) = p 1 −r2(u)B2(u) and ζu(x) = Y (u) + λ(u)Z(u). Since ⟨Y (u) + λ(u)Z(u), B1(u)⟩= 0, ⟨Y (u) + λ(u)Z(u), B2(u)⟩= λr2 √ 1−r2 and ⟨Y (u) + λ(u)Z(u), B3(u)⟩= r′ √ 1−r2 , we get that ζu(x) = λr2 √ 1 −r2 B2 + r′ √ 1 −r2 B3. It follows that, anytime r′(u) ̸= 0, all tangent vectors of Mu must lie in the n-dimensional space perpendicular to the two dimensional space spanned by B1(u) and B2(u). Since this two dimensional space is independent of u, we conclude that every point x ∈Mu, satisfies that x −ρu(x) = r(u)(y, 0, 0) where |y|2 = 1 or equivalently, x = r(u) (y, 0, 0) + ρu(x) = r(u)(y, 0, 0) + p 1 −r(u)2B2(u). Since the set of points where r′ is discrete, we conclude that the expression for the points x ∈Mu holds true for all u. The theorem then follows because the manifold M is connected. The property on the group of isometries of the manifold follows because we can write M as the image of the map φ(y, u) = ( r(u) y, p 1 −r(u)2 cos(θ(u)), p 1 −r(u)2 sin(θ(u)) ). (3.19) The group O(n) acts isometrically on M because any isometry in Rn+2 that fixes the origin and the last two entries of Rn+2 leaves our manifold M invariant. The EMBEDDED CMC HYPERSURFACES ON SPHERES 85 group Zm includes in the isometry group because the closed curve given by the last two entries is built by joining m pieces of the the curve α(u) = ( p 1 −r(u)2 cos(θ(u)), p 1 −r(u)2 sin(θ(u))) 0 ≤u ≤K(H, n, C) = 2kπ m . 
This last statement is true by the the following observation already pointed out in the previous section. For any positive integer j and u ∈[jT, (j+1)T ] we have that θ(u) = jK+θ(u−jT ). Corollary 3.2. If M is one of the compact examples in the previous theorem with H = 0, then, the stability index, i.e, the number of negative eigenvalues of the operator J(f) = −∆f −nf −||A||2 f is greater than n + 3. Proof. Theorem (3.1.1) in states that if M ⊂Sn+1 is a compact minimal hypersurface different from a Clifford torus with the property that for any non-zero vector v ∈Rn+2 there exists an (n + 2) × (n + 2) orthogonal matrix B such that B(M) = M and B(v) ̸= v, then the stability index of M is greater than n + 3. Let M be one of the examples from the previous theorem. Since M is compact, M is left invariant by the orthogonal matrices in the group O(n) × Zm where m satisfies that K(0, n, C) = 2kπ m , with k and m relatively prime. Independently, Otsuki in and Furuya in showed that M can not be embedded by showing that π < K(0, n, C) < 2π, these inequalities also implies that m ≥2. Since for any non-zero vector v ∈Rn+2, there exists a matrix B ∈O(n)×Zm such that B(v) ̸= v, then, the stability index of M must be greater than n + 3. 4. Embedded hypersurface with CMC in Sn+1. In this section we will study the existence of compact examples in Sn+1 by studying the values K(H, n, C). The key for this is the following. Lemma 4.1. Let f : (−δ, δ) →R be a smooth function such that f(0) = f ′(0) = 0 and f ′′(0) = −2a < 0. For positive values of c close to 0, let t(c) be the first positive root of the function f(t) + c. Then lim c→0+ Z t(c) 0 dt p f(t) + c = π 2√a. Proof. For any b > a let us define the function h(t) = f ′(t) + 2bt. Since h′(0) = 2(b −a) > 0 there exists a positive ǫ such that h′(t) > 0 for all t ∈[0, ǫ]. Now for any c such that t(c) < ǫ, the function g(t) = f(t) + c −(bt(c)2 −bt2) satisfies that g(t(c)) = 0 and g′(t) = h(t) > 0. Therefore, g(t) < 0 for any t ∈[0, t(c)]. By the definition of g(t) we get that 0 < f(t) + c < bt(c)2 −bt2 for all t ∈[0, t(c)) and therefore 86 O. M. PERDOMO π 2 √ b = Z t(c) 0 dt p bt(c)2 −bt2 < Z t(c) 0 dt p f(t) + c . Likewise, for any b < a, the same argument shows Z t(c) 0 dt p f(t) + c < Z t(c) 0 dt p bt(c)2 −bt2 = π 2 √ b . Since b ̸= a can be chosen arbitrarily close to a, we obtain the lemma. Corollary 4.2. Let ǫ and δ be positive real numbers and let f : (t0 −ǫ, t0 +ǫ) → R and g : (−δ, δ)×(t0−ǫ, t0+ǫ) →R be smooth functions such that f(t0) = f ′(t0) = 0 and f ′′(t0) = −2a < 0. If for any small c > 0, t1(c) < t0 < t2(c) are such that f(t1(c)) + c = 0 = f(t2(c))) + c, then lim c→0+ Z t2(c) t1(c) g(c, t) dt p f(t) + c = g(0, t0) π √a . This lemma allows us to prove our main theorem: Theorem 4.3. For any n ≥2 and any H ∈(0, 2√n−1 n √ 3 ) there exists a non-isoparametric compact embedded hypersurface in Sn+1 with constant mean curvature H. More generally, for any integer m > 1 and H between cot π m and (m2 −2) p (n −1) n √ m2 −1 there exists a non isoparametric compact embedded hypersurface in Sn+1 with constant mean curvature H whose isometry group contains O(n) × Zm. Proof. We will consider only positive values for H. Here we will use the explicit solution for the ODE (3.13) given in section (3.1). Let us rewrite that ODE as (g′)2 = q(g) where q(v) = C −v2−2n −(1 + H2)v2 −2Hv2−n. We already pointed out in section (3.1) that for some values of C, the function q has positive values between two positive roots of q, denoted by t1 and t2. 
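Purely as a numerical illustration of this claim (the following snippet is not part of the paper), one can locate the two positive roots of q and evaluate the period integral for sample values. The sketch below uses Python with NumPy/SciPy; the sample values n = 2, H = 0.1, C = 41.288 are taken from the n = 2 examples in section 6 (Fig. 6.1), and the variable names and bracketing intervals are my own.

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq
from scipy.integrate import quad

n, H, C = 2, 0.1, 41.288   # sample values; C must be large enough for q to have two positive roots

def q(v):
    # q(v) = C - v^(2-2n) - (1 + H^2) v^2 - 2 H v^(2-n), so that the ODE reads (g')^2 = q(g)
    return C - v**(2 - 2*n) - (1 + H**2) * v**2 - 2*H * v**(2 - n)

# q -> -infinity as v -> 0+ and as v -> +infinity, so bracket its interior maximum first
vmax = minimize_scalar(lambda v: -q(v), bounds=(1e-3, 10.0), method="bounded").x
t1 = brentq(q, 1e-3, vmax)   # smaller positive root
t2 = brentq(q, vmax, 50.0)   # larger positive root

# Half-period of the profile function g:  T/2 = integral_{t1}^{t2} dt / sqrt(q(t)).
# Trimming a tiny amount off each end sidesteps the integrable 1/sqrt endpoint singularities.
half_T, _ = quad(lambda t: 1.0 / np.sqrt(q(t)), t1 + 1e-9, t2 - 1e-9)
print(t1, t2, 2 * half_T)
```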
Let us be more precise and give an expression for how big C needs to be. A direct verification shows that q′(v) = −2(1 + H2)v −(2 −2n)v1−2n −2H(2 −n)v1−n and that the only positive root of q′ is v0 = ( p H2n2 + 4(n −1) + (n −2)H 2 + 2H2 ) 1 n . (4.1) EMBEDDED CMC HYPERSURFACES ON SPHERES 87 Therefore, for positive values of v, the function q increases from 0 to v0 and decreases for values greater than v0. A direct computation shows that q(v0) = C −c0 where, c0 = n (2 + 2H2) n−2 n 2 + nH2 + H p H2n2 + 4(n −1) (n −2)H + p H2n2 + 4(n −1)  2n−2 n . (4.2) Therefore, whenever C > c0 we will have exactly two positive roots of the function q(v) that we will denote by t1(C) and t2(C) to emphasize its dependence on C. A direct computation shows that q′′(v0) = −2a where a = 2n(1 + H2) 4(n −1) + H2 n2 + H (n −2) p 4(n −1) + H2n2 ( H(n −2) + p 4(n −1) + H2n2 )2 . Using the notation and results of section (3.3), we get K(H, n, C) = 2 Z T 2 0 r(s)λ(s) 1 −r2(s) ds (4.3) Since r(s) = g(s) √ C and λ(s) = H + g(s)−n we have K(H, n, C) = 2 Z T 2 0 √ Cg(s)(H + g−n(s)) c −g2(s) ds. Since g(0) = t1(C) and g( T 2 ) = t2(C), by making the substitution t = g(s) we get K(H, n, C) = 2 Z t2(c) t1(c) √ Ct(H + t−n) c −t2 1 p q(t) dt. Since a > 0 we can apply Corollary (4.2) to the get that lim C→c+ 0 K(H, n, C) = π s 2 − 2nH p 4(n −1) + H2n2 . It can be verified that this bound is the same bound we found for the case n = 2. In order to analyze the limit of the function K(H, n, C) when C →∞we return to the expression (4.3) and we make the substitution t = r(s) to obtain K(H, n, C) = 2 Z t2(C) √ C t1(C) √ C t(H + C−n 2 t−n) (1 −t2) p 1 −t2(1 + (H + C−n 2 t−n)2) dt. In this case we have used (3.17) to change the ds to dt. Notice that the limit values t1(C) √ C and t2(C) √ C can also be characterized as the only positive roots of the function ˜ q = 1 −t2(1 + (H + C−n 2 t−n)2) = 1 −(1 + H2)t2 −C−nt2−2n −2HC−n 2 t2−n 88 O. M. PERDOMO because of the relation q(v) = C˜ q( v √ C ). Since for every positive C we have that limt→0+ ˜ q(t) = −∞, ˜ q( 1 √ 1+H2 ) < 0 and for every positive ǫ < 1 √ 1+H2 we have lim C→∞˜ q(ǫ) > 0 and lim c→∞˜ q( 1 √ 1 + H2 −ǫ) > 0, we conclude that the only two positive roots of ˜ q converge to 0 and to 1 √ 1+H2 when C →∞. Therefore, lim C→∞K(H, n, C) = 2 Z 1 √ 1+H2 0 Ht (1 −t2) p 1 −(1 + H2)t2 dt = 2arccot(H). Therefore, for any fixed H > 0, the function K(H, n, C) takes all the values between a1(H) = 2arccot(H) and a2,n(H) = π s 2 − 2nH p 4(n −1) + H2n2 . The functions a1(H) and a2,n(H) are decreasing. Moreover, we have that for any y < √ 2 a2,n(2 (2 −y2) √n −1 n y p 4 −y2 ) = π y. Therefore, replacing y by 2 m in the expression above, we obtain that for values of H between cot π m and (m2 −2) p (n −1) n √ m2 −1 the number 2π m lies between a1(H) and a2,n(H), and therefore, for some constant C, we will have that K(H, n, C) = 2π m . Applying Theorem 3.1 concludes the proof. Notice that when m = 2 these two bounds are 0 and 2 √n−1 n √ 3 . Let us finish this section with a remark already pointed out by Otsuki in (). Lemma 4.4. For any integer n ≥2 and any ǫ > 0 there exist compact non-isoparametric minimal hypersurfaces in Sn+1 such that n −ǫ ≤||A||2(p) ≤n + ǫ for all p ∈M. Proof. This is a consequence of the fact that the expression for v0 in Equation (4.1) reduces to (n −1) 1 2n when H = 0 and the fact that by picking C close to c0, the roots t1(C) and t2(C) of the function q are as close as v0 as we want. 
Since the range of the function g move from t1(C) to t2(C), we can make the values of g to move as close of (n −1) 1 2n as we want. When H = 0, we have that λ = g−n µ = −(n−1)g−n and ||A||2 = (n−1)g−2n+(n−1)2g−2n = n(n−1)g−2n. EMBEDDED CMC HYPERSURFACES ON SPHERES 89 Therefore, we can make ||A||2 as close n as we want. By density of the rational numbers and the continuity of the function K(H, n, C), we can choose C so that K(H, n, C) is of the form 2kπ m for some pair of integers m and k. This last condition guarantees the compactness of the profile curve and therefore the compactness of the hypersurface. 5. Non isoparametric stable cones in Sn+1. For any compact minimal hy-persurface M ⊂Sn+1, let us define the operator L1 and the number λ1 as follows, L1(f) = −∆f −||A||2f and λ1 = first eigenvalue of L1. Moreover, let us denote by CM = {tm : t ∈[0, 1], m ∈M } the cone over M. We will say that CM is stable if every variation of CM, which holds M fixed, increases area. In (, Lemma 6.1.6) Simons proved that if λ1 + ( n−1 2 )2 > 0 then CM is stable. We will prove that for any n ≥6, the cone over some non isoparametric examples studied in this paper for H = 0, i.e, the cone over some of the Otsuki’s examples, are stable. More precisely we have, Theorem 5.1. For any n ≥6, there are non-isoparametric compact hypersurfaces in Sn+1 bounding stable minimal cones. Proof. A direct verification shows that (n −1 2 )2 ≥n + 1 4 for all n ≥6. Using Lemma (4.4), let us consider a non isoparametric compact minimal hy-persurface M such that ||A||2 ≤n + 1 8. We have that the first eigenvalue λ1 of the operator L1 is greater than −n −1 8 because λ1 = inf { R M(−∆f −||A||2 f)f R M f 2 : f is smooth and Z M f 2 ̸= 0} and we have that, R M(−∆f −||A||2 f)f R M f 2 = R M |∇f|2 R M f 2 − R M ||A||2 f 2 R M f 2 ≥−(n + 1 8). Therefore, we get that λ1 + (n −1 2 )2 ≥−(n + 1 8) + n + 1 4 = 1 8 > 0 which implies, by Simons’ result, that the cone over M is stable. 6. Some explicit solutions. In this section we will pick some arbitrary values of H to explicitly show the embedding, the graph of the profile curves, and the stereographic projections of some examples of surfaces with CMC in S3. A direct computation shows that the solution of the equation (3.13) when n = 2 is given by 90 O. M. PERDOMO g(t) = s (C −2H) + √ −4 + C2 −4CH sin(2 √ 1 + H2 t ) 2(1 + H2)) . From the expression for g we get that its period T is π √ 1+H2 . Since n = 2, the condition on C to get solutions of the ODE (3.13) reduces to C > 2(H + √ 1 + H2). We can get surfaces associated with m = 2 if we take H between 0 and 1 √ 3 ≃ 0.57735 and we can surfaces associated with m = 3 if we take H between 1 √ 3 and 7 4 √ 2 ≃1.23744. Once we have picked the value for H in the right range, in order to get the embedded surface, we need to solve the equation K(H, 2, C) = Z π √ 1+H2 0 √ C g(t)(H + g(t)−2) C −g(t)2 dt = 2π m . Finally, when we have the H and the C, the profile curve is given by ( r 1 −g2(t) C cos(θ(t)), r 1 −g2(t) C sin(θ(t)) ) where θ(t) = Z t 0 √ C g(τ)(H + g−2(τ)) C −g2(τ) dτ and the embedding is given by ( g(t) √ C cos(u), g(t) √ C sin(u), r 1 −g2(t) C cos(θ(t)), r 1 −g2(t) C sin(θ(t)) ) 0 ≤u < 2π 0 ≤t < m π √ 1 + H2 Here are some graphics, Fig. 6.1. Profile curve for m = 2, H = 0.1, in this case C = 41.28796038772471 EMBEDDED CMC HYPERSURFACES ON SPHERES 91 Fig. 6.2. Profile curve for m = 2, H = 0.3, in this case C = 9.129645968138256 Fig. 6.3. Profile curve for m = 2, H = 0.57, in this case C = 3.5313222039296357 92 O. M. PERDOMO Fig. 6.4. 
Profile curve for m = 2, H = 0.001, H = 0.1, H = 0.3, H = 0.57. Fig. 6.5. Stereographic projection for the surface with CMC H = 0.1 EMBEDDED CMC HYPERSURFACES ON SPHERES 93 Fig. 6.6. Stereographic projection of half the surface with CMC H = 0.1 Fig. 6.7. Stereographic projection one of the two catenoid necks of the surface with CMC H = 0.1 94 O. M. PERDOMO Fig. 6.8. Stereographic projection of the surface with CMC H = 0.3 and m = 2 Fig. 6.9. Stereographic projection of the surface with CMC H = 0.57 and m = 2 EMBEDDED CMC HYPERSURFACES ON SPHERES 95 Fig. 6.10. Profile curve for m = 3 and H = 0.5774, in this case C = 346879.6632142387 Fig. 6.11. Profile curve for m = 3 and H = 0.6, in this case C = 365.3705636110441 96 O. M. PERDOMO Fig. 6.12. Profile curve for m = 3 and H = 0.8, in this case C = 22.320379289179478 Fig. 6.13. Profile curve for m = 3 and H = 1.0, in this case C = 9.908469426660892 EMBEDDED CMC HYPERSURFACES ON SPHERES 97 Fig. 6.14. Profile curve for m = 3 and H = 1.2, in this case C = 6.084010495710457 Fig. 6.15. Profile curve for m = 3 and H = 1.237, in this case C = 5.6615177218839605 98 O. M. PERDOMO Fig. 6.16. Profile curve for m = 3, H = 0.5774, H = 0.6, H = 0.7, H = 0.8, H = 1.0 H = 1.1 H = 1.2, H = 1.22 H = 1.237. EMBEDDED CMC HYPERSURFACES ON SPHERES 99 Fig. 6.17. Stereographic projection of a surface with CMC H = 0.5774 and m = 3 100 O. M. PERDOMO Fig. 6.18. Stereographic projection of a surface with CMC H = 0.8 and m = 3 EMBEDDED CMC HYPERSURFACES ON SPHERES 101 Fig. 6.19. Stereographic projection of a surface with CMC H = 1.2 and m = 3 102 O. M. PERDOMO Fig. 6.20. Stereographic projection of a surface with CMC H = 1.2 and m = 4 EMBEDDED CMC HYPERSURFACES ON SPHERES 103 6.1. Embedded solutions in hyperbolic spaces. Here we show that the theorem above can be adapted to hyperbolic spaces. In this case we get the embedded hypersurfaces without much effort since Hyperbolic space is not compact. We will use the following model for hyperbolic space: Hn+1 = { x ∈Rn+2 : x2 1 + · · · + x2 n+1 −x2 n+2 = −1 }. The following notation will only be used in this subsection. For any pair of vectors v = (v1, . . . , vn+2) and w = (w1, . . . , wn+2), ⟨v, w⟩= v1w1 + vn+1wn+1 −vn+2wn+2. Theorem 6.1. Let gC,H : R →R be a positive solution of the equation (g′)2 + g2−2n + (H2 −1)g2 + 2Hg2−n = C (6.1) associated with a non negative H and a positive constant C. If µ, λ, r, θ : R →R and are defined by r = gC,H √ C , λ = H + g−n C,H, µ = nH −(n −1)λ = H −(n −1)g−n C,H and θ(u) = Z u 0 r(s)λ(s) 1 + r2(s)ds then, the map φ : Sn−1 × R →Hn+1 given by φ(y, u) = ( r(u) y, p 1 + r(u)2 sinh(θ(u)), p 1 + r(u)2 cosh(θ(u)) ) (6.2) defines an embedded hypersurface in Hn+1 with constant mean curvature H. More-over, if H2 > 1, the embedded manifold defined by (6.2) admits O(n) × Z in its group of isometries, where Z is the group of integers. Remark. Arguments similar to those in section (3.1) show that it is not difficult to find positive values C that lead to positive solutions of the equation (6.1) in terms of the inverse of a function defined by an integral. Proof. A direct computation shows the following identities, (r′)2 + λ2 r2 = 1 + r2, and λr′ + rλ′ = µr′. Let us define B2(u) = (0, . . . , 0, sinh(θ(u)), cosh(θ(u))) and B3(u) = (0, . . . , 0, cosh(θ(u)), sinh(θ(u))). Notice that ⟨B2, B2⟩= −1, ⟨B3, B3⟩= 1, ⟨B2, B3⟩= 0, B′ 2 = rλ 1+r2 B3 and B′ 3 = rλ 1+r2 B2, moreover, the map φ can be written as φ = r(y, 0, 0) + p 1 + r2 B2. A direct verification shows that ⟨φ, φ⟩= −1 and that 104 O. M. 
PERDOMO ∂φ ∂u = r′ (y, 0, 0) + rr′ √ 1 + r2 B2 + rλ √ 1 + r2 B3 is a unit vector, i.e, ⟨∂φ ∂u, ∂φ ∂u⟩= 1. The tangent space of the immersion at (y, u) is given by Tφ(y,u) = {(v, 0, 0) + s ∂φ ∂u : ⟨v, y⟩= 0 and s ∈R}. A direct verification shows that the map ν = −rλ (y, 0, 0) − r2 λ √ 1 + r2 B2 + r′ √ 1 + r2 B3 satisfies that ⟨ν, ν⟩= 1, ⟨ν, ∂φ ∂u⟩= 0 and for any v ∈Rn with ⟨v, y⟩= 0 we have that ⟨ν, (v, 0, 0)⟩= 0. It then follows that ν is a Gauss map of the immersion φ. The fact that the immersion φ has constant mean curvature H follows because for any unit vector v in Rn perpendicular to y, we have that β(t) = (r cos(t)y + r sin(t)v, 0, 0) + p 1 + r2 B2 = φ(cos(t)y + r sin(t)v, u) satisfies that β(0) = φ(y, u), β′(0) = rv and dν(β(t)) dt t=0 = dν(rv) = −rλ v. Therefore, the tangent vectors of the form (v, 0, 0) are principal directions with principal curvature λ and multiplicity n −1. Now, since ⟨∂ν ∂u, (v, 0, 0)⟩= 0, we have that ∂φ ∂u defines a principal direction, i.e. we must have that ∂ν ∂u is a multiple of ∂φ ∂u. A direct verification shows that, ⟨∂ν ∂u, y⟩= −λ′ r −λr′ = −µ r′ = −(nH −(n −1)λ)r′. We also have that ⟨∂φ ∂u, y⟩= r′, therefore, ∂ν ∂u = dν(∂φ ∂u) = −µ ∂φ ∂u = −(nH −(n −1)λ)∂φ ∂u. It follows that the other principal curvature is nH −(n −1)λ. Therefore φ defines an immersion with constant mean curvature H, this proves the first item in the Theorem. This immersion is embedded because the immersion φ is one to one as we can easily check using the fact that whenever H ≥0, the function θ is strictly increasing. In order to prove the condition on the isometries of the immersion when H > 1 we notice first that the ODE (6.1) can be written as (g′)2 = g2−2n q(g) where q(v) = Cv2n−2 −(H2 −1)v2n −2Hvn −1. EMBEDDED CMC HYPERSURFACES ON SPHERES 105 Since q(0) = −1 and the leading coefficient of q is negative under the assumption that H > 1, then by the arguments used in section (3.1) we conclude that a positive solution g of (6.1) must be periodic, moreover the values of g must move from two positive roots t1 and t2. Now if T is the period of g and we define K = Z T 0 r(u)λ(u) 1 + r2(u)du. We then have: For any integer j and u ∈[jT, (j + 1)T ] we have that θ(u) = jK + θ(u −jT ). Using the equation above we get that the immersion φ is invariant under the group generated by hyperbolic rotations of the angle K in the xn+1-xn+2 plane. This concludes the theorem. 6.2. Solutions in Euclidean spaces. In this section we point out that the same kind of theorem can be adapted to Euclidean spaces. In this case we get the Delaunay hypersurfaces. Theorem 6.2. Let gC,H : R →R be a positive solution of the equation (g′)2 + g2−2n + H2 g2 + 2Hg2−n = C (6.3) associated with a real number H and a positive constant C. If µ, λ, r, R : R →R and are defined by r = gC,H √ C , λ = H + g−n C,H, µ = nH −(n −1)λ = H −(n −1)g−n C,H and R(u) = Z u 0 r(s)λ(s)ds then, the map φ : Sn−1 × R →Rn+1 given by φ(y, u) = ( r(u) y, R(u)) (6.4) defines an immersed hypersurface in Rn+1 with constant mean curvature H. More-over, if H ≥0, the manifold defined by (6.4) is embedded. We also have that when n > 2, up to rigid motions they are the only non isoparametric CMC hypersurfaces with exactly two principal curvatures. Proof. A direct computation shows the following identities, (r′)2 + λ2 r2 = 1, and λr′ + rλ′ = µr′. In this case we have that the map ν(y, u) = (−r(u)λ(u) y, r′(u)) 106 O. M. PERDOMO is a Gauss map of the immersion. A direct computation shows that indeed this immer-sion has constant mean curvature H. 
The fact that the immersion is an embedding when H ≥0 follows from the fact that λ > 0 in this case and therefore the function R is strictly increasing. For the last part of the theorem we will use the same notation used in the previous sections, and in particular we define the functions w, λ on the whole manifold as before, and we extend the function r to the manifold by defining it as r = w √c, we will also assume that γ will denote a geodesic defined by the vector field en. We have that, 1. the vector λ ren +en(r) ν is a unit constant vector on the whole manifold, we can assume that this vector is the vector (0, . . . , 0, 1) 2. The vector η = −en(r) en + λ r ν is constant along a geodesic γ, i.e, we can prove that ¯ ∇enη vanishes. We also have that η is a unit vector perpendicular to the vector defined in the previous item. 3. From the last items we can solve for en in terms of the vectors (0, . . . , 0, 1) and η along a geodesic γ, and then, integrate in order to get the one of this geodesics. Using the differential equation for r at the beginning of the proof, we get that en = λ r(0, . . . , 0, 1) −en(r)η 4. Similar to the case of Sn, we can show that ∇ven = en(r) r en for every v ∈ Span{e1, . . . , en−1}. Therefore, the vector field x + rη is independent of the integral submanifolds of the distribution Span{e1, . . . , en−1}. 5. The previous considerations and the fact that the vectors e1, . . . en−1 are perpendicular to the vector η and (0, . . . , 0, 1) imply that the integral sub-manifolds of the distribution Span{e1, . . . , en−1} are spheres with center at x + rη and radius r. Notice that ||x −(x + rη)|| = r. 6. If we fix a point p0 and we define the geodesic γ(u) as before, then, without loss of generality we may assume that η(p0) = (0, . . . , 0, 1, 0) = η(u) and therefore, we can also assume by doing a translation, if necessary, that γ(u) = Z u 0 en(u) = Z u 0 (0, . . . , 0, −r′(u), λ(u) r(u)) = (0, . . . , −r(u), R(u)) Where R(u) = R u 0 λ(t) r(t)dt. The theorem follows by noticing that the center of the integral submanifolds take the form γ(u) + r(u)η(γ(u)) = (0, . . . , 0, R(u)) In the case n = 2 we can find explicit solutions. For any positive C > 4H, they look like, φ(u, v) = ( r(u) cos(v), r(u) sin(v), R(u)) where, R(u) = Z u 0 C + p C(C −4H) cos(2Hy) √ 2C q C −2H + p C(C −4H) cos(2Hy) dy and r(u) = q C −2H + p C(C −4H) cos(2Hu) √ 2 √ CH . Here there is the graph of a non embedded Delaunay surface, EMBEDDED CMC HYPERSURFACES ON SPHERES 107 Fig. 6.21. Half rotation of a non embedded Delaunay surface with CMC H = −1, here C = 2 REFERENCES L. de Almeida Alias and A. S. Brasil, Hypersurfaces with constant mean curvature and two principal curvatures in Sn+1, An. Acad. Brasil. Cienc., 76:3 (2004), pp. 489–497. Brasil de Almeida, Hypersurfaces with two principal curvatures in Sn+1, Mat. Contemp., 28 (2005), pp. 51–66. F. Brito and M. Leite, A remark on rotational hypersurfaces of Sn, Bull. Soc. Math. Belg. Ser. B, 42:3 (1990), pp. 303–318. M.-C. Cheng, Rigidity of Clifford torus S1( 1 n ) × Sn−1( n−1 n ), Comment. Math. Helv., 71 (1996), pp. 60–69. Q.-M. Cheng, H. Li and G. Wei, Embedded hypersurfaces with constant mth mean curvature in a unit sphere, arxiv:0904.0299. C. Delaunay, Sur la surface de revolution dont la courbure mayenne est constante, J. Math. Pures Appl., Ser. 1, 6 (1841), pp. 309–320. M. Do Carmo and M. Dajczer, Rotational hypersurfaces in spaces of constant curvature, Trans. Amer. Math. Soc., 277 (1983), pp. 685–709. S. 
Furuya, On the periods of periodic solutions of a certain non-linear differential equation, Japan–U.S. Seminar on Ordinary Differential and Functional Equations, Springer-Verlag, Berlin and New York (1971), pp. 320–323.
T. Hasanis, Savas-Halilaj and T. Vlachos, Complete minimal hypersurfaces in a sphere, Monatsh. Math., 145 (2004), pp. 301–305.
W.-Y. Hsiang and W. C. Yu, A generalization of a theorem of Delaunay, J. Differential Geom., 16:2 (1981), pp. 161–177.
W.-Y. Hsiang, On a generalization of theorems of A. D. Alexandrov and C. Delaunay on hypersurfaces of constant mean curvature, Duke Math. J., 49:3 (1982), pp. 485–496.
W.-Y. Hsiang and I. Sterling, Minimal cones and the spherical Bernstein problem. III, Invent. Math., 85:2 (1986), pp. 223–247.
H. Li and G. Wei, Compact embedded rotation hypersurfaces of S^{n+1}, Bull. Braz. Math. Soc. (N.S.), 38:1 (2007), pp. 81–99.
T. Otsuki, Minimal hypersurfaces in a Riemannian manifold of constant curvature, Amer. J. Math., 92 (1970), pp. 145–173.
T. Otsuki, On integral inequalities related with a certain non-linear differential equation, Proc. Japan Acad., 48 (1972), pp. 9–12.
O. Perdomo, Low index minimal hypersurfaces of spheres, Asian J. Math., 5 (2001), pp. 741–749.
O. Perdomo, Rigidity of minimal hypersurfaces of spheres with two principal curvatures, Arch. Math. (Basel), 82:2 (2004), pp. 180–184.
O. Perdomo, CMC hypersurfaces on Riemannian and semi-Riemannian manifolds, arXiv:0904.1984.
O. Perdomo, Embedded CMC hypersurfaces on hyperbolic spaces, arXiv:0903.4934.
J. Simons, Minimal varieties in Riemannian manifolds, Ann. of Math. (2), 88 (1968), pp. 62–105.
I. Sterling, A generalization of a theorem of Delaunay to rotational W-hypersurfaces of σ_l-type in H^{n+1} and S^{n+1}, Pacific J. Math., 127:1 (1987), pp. 187–197.
Q. Wang, Rigidity of Clifford minimal hypersurfaces, Monatsh. Math., 140:2 (2003), pp. 163–167.
G. Wei, Rigidity theorem for hypersurfaces in a unit sphere, Monatsh. Math., 149:4 (2006), pp. 343–350.
G. Wei, Complete hypersurfaces with constant mean curvature in a unit sphere, Monatsh. Math., 149:3 (2006), pp. 251–258.
G. Wei, Rotational hypersurfaces of the sphere, Proceedings of the Eleventh International Workshop on Differential Geometry, Kyungpook Nat. Univ., Taegu (2007), pp. 225–232.
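As a purely numerical footnote to the explicit n = 2 solutions of section 6 (none of this appears in the paper), the constants C quoted in the figure captions can in principle be recovered by solving K(H, 2, C) = 2π/m using the closed-form g above. The sketch below does this in Python with SciPy; the quadrature setup, the root-finding bracket, and the implicit assumption that K crosses 2π/m exactly once on that bracket are my own.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

H, m = 0.1, 2                       # sample values matching the m = 2 figures
T = np.pi / np.sqrt(1 + H**2)       # period of g when n = 2

def g(t, C):
    # closed-form solution of (3.13) for n = 2, valid when C > 2(H + sqrt(1 + H^2))
    s = (C - 2*H) + np.sqrt(C**2 - 4 - 4*C*H) * np.sin(2 * np.sqrt(1 + H**2) * t)
    return np.sqrt(s / (2 * (1 + H**2)))

def K(C):
    # K(H, 2, C) = integral_0^T  sqrt(C) g (H + g^{-2}) / (C - g^2) dt
    integrand = lambda t: np.sqrt(C) * g(t, C) * (H + g(t, C)**-2) / (C - g(t, C)**2)
    value, _ = quad(integrand, 0.0, T)
    return value

# Solve K(C) = 2*pi/m for C; the bracket below is a guess informed by the limits in Theorem 4.3
C_star = brentq(lambda C: K(C) - 2*np.pi/m, 3.0, 1000.0)
print(C_star)
```

For H = 0.1 and m = 2 this should land close to the C ≈ 41.29 quoted in Fig. 6.1; other choices of H and m within the ranges of Theorem 4.3 should similarly reproduce the remaining captions.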
Number of $n^2\times n^2$ permutation matrices with a 1 in each $n\times n$ subgrid - Mathematics Stack Exchange

Asked Dec 3, 2015 · Viewed 1k times · 9 votes

I found the following question in a paper I was trying to solve:

The following figure shows a $3^2\times 3^2$ grid divided into $3^2$ subgrids of size $3\times 3$. This grid has $81$ cells, $9$ in each subgrid. Now consider an $n^2\times n^2$ grid divided into $n^2$ subgrids of size $n\times n$. Find the number of ways in which you can select $n^2$ cells from this grid such that there is exactly one cell coming from each subgrid, one from each row and one from each column.

My try: Since we have $n^2$ rows, $n^2$ columns and $n^2$ subgrids in total, we have to choose one and only one cell from each of them. Let's choose them one at a time. We can choose the first cell in $n^4$ ways. Then, when choosing the second cell, we have to avoid the subgrid, the column and the row that the first one came from, so we have $n^4 - n^2 - 2n(n-1)$ choices. We can continue this to get the total number of possible ways. But I think there's a hole: say we've chosen the first cell from the subgrid in the top-left corner and the second from the subgrid just right of it, without violating any rules. Then, when counting the ways to choose the third cell, we would have subtracted some of the cells twice. I think you get it. Please, if anyone can help me solve this problem, it'd be greatly appreciated.
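As a quick sanity check (this snippet is an illustration added here, not part of the original thread), a brute-force count of the small cases shows what number any formula has to reproduce. It is plain Python, and `count_selections` is simply a name chosen for this sketch.

```python
from itertools import permutations

def count_selections(n):
    """Brute-force count of the selections described in the question.

    A selection is encoded as a permutation p of {0, ..., n^2 - 1}: p[row] is the
    column of the chosen cell in that row.  Rows and columns are then automatically
    distinct, so we only check that every n x n subgrid is used exactly once.
    """
    size = n * n
    return sum(
        1
        for p in permutations(range(size))
        if len({(row // n, col // n) for row, col in enumerate(p)}) == size
    )

print(count_selections(2))   # 16
print(count_selections(3))   # 46656  (loops over 9! = 362880 permutations, still quick)
```

These are exactly the values $(n!)^{2n}$ that the answers below arrive at.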
Tags: combinatorics, discrete-mathematics

asked Dec 3, 2015 by Sntn; edited by Marc van Leeuwen

Comments:

- Why not consider restricting your first choice to a particular $n\times n$ subgrid? Then consider each next choice going along the rows and the columns in a systematic way. I don't think this affects the final outcome: you should still be counting all the possibilities, and other counting processes probably boil down to rotating the board or permuting the subgrids. It might help to consider the $n=2$ and $n=3$ cases first. – Will R
- @WillR Well... tell me if I'm wrong... What you're trying to say is that I should select the first cell from, say, the top-left subgrid, then carry along the row and repeat the same process for all the columns. What confuses me is how I'm supposed to make sure that neither a row nor a column is repeated. – Sntn
- You can factor that into your calculation. Take $n=3$, for simplicity, since you already have the diagram. If your first choice is the little square in the very top-left, and you know you're going to make your next choice in the middle-top subgrid, then how many choices do you have? You have to avoid the top row, but you can choose any other little square, so you have $9-3=6$ choices. Same goes for the left-middle subgrid. Now for the top-right subgrid, you have to avoid the top and middle rows, leaving you $3$ choices; and so on. In all, $9\cdot 6\cdot 6\cdot 3\cdot 3\cdot 4\cdot 2\cdot 2\cdot 1$? – Will R
- I'd like to clarify before this gets too serious: I don't have an awful lot of experience with combinatorial problems like this, so please consider what I'm saying with a very skeptical eye. – Will R
- @WillR No, I think it'd work. I'll try and finish the problem. I believe it'll work. – Sntn

3 Answers

Answer (4 votes):

Well, in a $2^2\times 2^2$ grid there are 4 choices for the top-left subgrid; then, as the representative for the top-right subgrid can't be in the same row, there are 2 choices. As the bottom-left representative can't be in the same column as the top one, there are 2 choices, and there is only one choice left for the bottom-right subgrid. In total there are $4\cdot 2\cdot 2\cdot 1 = 16$ options, or $[2\cdot 2][(2-1)\cdot 2]\times[2\cdot(2-1)][(2-1)(2-1)]$, or in general $\prod_{i=1}^{n}\prod_{k=1}^{n} ik$.

The same argument works row by row: in the top row of subgrids there are $n^2$ choices for the first subgrid, $(n-1)n$ for the second, and so on; this is $\prod_{k=1}^{n} nk$. For the second-to-top row of subgrids ($n-1$ counting from the bottom) there are $n(n-1)$ choices for the first subgrid, $(n-1)(n-1)$ for the second, and so on; this is $\prod_{k=1}^{n}(n-1)k$. In general, for the row of subgrids with $l$ rows of subgrids above it (so $l=0$ for the top row), there are $n(n-l)$ choices for the first subgrid, $(n-1)(n-l)$ for the second, and so on; this is $\prod_{k=1}^{n}(n-l)k$. The total product over all rows of subgrids is $\prod_{i=1}^{n}\prod_{k=1}^{n} ik$.
====

Hmm, when I first posted I really should have continued:
$$\prod_{i=1}^{n}\Big(\prod_{k=1}^{n} ik\Big) = \prod_{i=1}^{n}\big(i^{n}\, n!\big) = (n!)^{n}\,(n!)^{n} = (n!)^{2n}.$$
So for a $2^2\times 2^2$ grid it is $(2!)^{2\cdot 2} = 2^4 = 16$. For the $3^2\times 3^2$ grid it is $(3!)^{2\cdot 3} = 6^6 = 46{,}656$. (Which I figure should be $(9\cdot 6\cdot 3)(6\cdot 4\cdot 2)(3\cdot 2\cdot 1) = (3^4\cdot 2)(3\cdot 2^4)(3\cdot 2) = 3^6\cdot 2^6 = 6^6$. Yep, seems to fit.) I imagine $16\times 16$ will be a monster: $(4!)^{2\cdot 4} = 24^8 = 110{,}075{,}314{,}176$. Wow! By hand it is $(16\cdot 12\cdot 8\cdot 4)(12\cdot 9\cdot 6\cdot 3)(8\cdot 6\cdot 4\cdot 2)(4\cdot 3\cdot 2\cdot 1) = (4^4\cdot 4!)(3^4\cdot 4!)(2^4\cdot 4!)(4!) = (4!)^4(4!)^4 = (4!)^{2\cdot 4}$, which... yeah...

answered Dec 3, 2015 by fleablood (edited Dec 3, 2015)

Comments:

- I believe it coincides with the answer I came up with myself. – Sntn
- I think so too. – fleablood
- I just came up with a similar result, generalising the product I got stuck on. – Cloverr
- I think continuing this would give us the possible number of Sudoku puzzles on the board, if we continue this for all the numbers (let's just assume we have a base-$(n^2+1)$ number system). – Sntn
- For Sudoku we'd need a way of anticipating and ruling out impossible contradictions. And apparently many people have worked on this math. – fleablood

Answer (3 votes):

Using the counting system outlined in the comments, we can associate to each subgrid a positive integer: the number of choices in that subgrid. It's fairly clear that this number is independent of the exact method of getting to that subgrid: however you do it, subgrid $(i,j)$ (in the usual matrix suffix notation) has associated with it $(n+1-i)(n+1-j)$ choices. Now the matrices go as follows:
$$n=1 \mapsto \begin{pmatrix}1\end{pmatrix},\qquad n=2 \mapsto \begin{pmatrix}4&2\\2&1\end{pmatrix},\qquad n=3 \mapsto \begin{pmatrix}9&6&3\\6&4&2\\3&2&1\end{pmatrix},$$
$$n=k \mapsto \begin{pmatrix} k^2 & k(k-1) & k(k-2) & \dots & 2k & k\\ k(k-1) & (k-1)^2 & (k-1)(k-2) & \dots & 2(k-1) & k-1\\ k(k-2) & (k-1)(k-2) & (k-2)^2 & \dots & 2(k-2) & k-2\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 2k & 2(k-1) & 2(k-2) & \dots & 4 & 2\\ k & k-1 & k-2 & \dots & 2 & 1 \end{pmatrix}.$$
For each $n$, the answer to the problem is the product of all the entries in the corresponding matrix; denote each of these numbers by $P_n$. Since each matrix contains the preceding matrix as a "submatrix", it is clear that we can find a recursive formula: in particular, some thought gives
$$P_n = P_{n-1}\, n^2 \prod_{i=1}^{n-1}(in)^2 = P_{n-1}\, n^{2n}\,[(n-1)!]^2,$$
with the initial condition $P_1 = 1$. Now for each $n\in\mathbb{N}$ let $f(n) = (n!)^{2n}$. Clearly $P_1 = f(1)$. Further, suppose that, for some $n\in\mathbb{N}$, we know that $P_{n-1} = f(n-1)$; then we have
$$P_n = P_{n-1}\, n^{2n}\,[(n-1)!]^2 = [(n-1)!]^{2(n-1)}\, n^{2n}\,[(n-1)!]^2 = (n!)^{2n} = f(n).$$
By induction, $P_n = (n!)^{2n}$ for all $n\in\mathbb{N}$.
answered Dec 3, 2015 by Will R (edited Dec 3, 2015)

Comments:

- Whoa! This is some serious stuff! – Sntn
- I decided to post this as a separate answer because, for me, matrices are almost inherently visual, so their implementation allows us some clearer insights. For example, without writing out the matrix, it's not immediately obvious that the recursive relationship holds between each of the special cases; once you write out the matrix, you can literally just write down the relationship, no further fancy tricks required. – Will R
- Yeah... and I believe it's a much more general approach... – Sntn
- I believe the first step would be divided by $n^2$. – Sntn
- @SayantanSantra: Ah, I see what you mean; I'll edit that. The error obviously hasn't carried through at all. – Will R

Answer (2 votes):

I believe I've come upon a solution. Just as @WillR suggested, who showed me the right approach to it, I'm posting this as an answer. Please feel free to notify me about flaws of logic if there are any. So, here it goes:

Let's start by choosing a cell from the top-left subgrid. As we have to select cells from each subgrid, I don't think this approach affects the final outcome. We have $n^2$ choices for the first cell. Now we move to the subgrid to its right. This time, we have to keep in mind that one of the rows is used, so we have $n(n-1)$ choices left. Continuing this way, the total number of ways in which we can select $n$ cells from the topmost row of subgrids is $n^2\cdot n(n-1)\cdots n = n^n\cdot n!$.

Now we move to the second row of subgrids from the top. Here we don't have to worry about the used rows anymore, only the used columns. So, when choosing the first cell from this row, we have $n(n-1)$ choices. When we move to the second subgrid, we also have to avoid a row, but that reduces the number of choices by $(n-1)$, not $n$, because the row and the column we have to avoid share a common cell. Continuing this way, the second row of subgrids contributes a total of $(n-1)^n\cdot n!$ choices.

In general, the number of choices for the $r$-th row of subgrids is $(n-r+1)^n\cdot n!$. As the choices are independent, we can multiply them to get the total number of choices:
$$\prod_{r=1}^{n} n!\,(n-r+1)^{n} = (n!)^{n}\prod_{r=1}^{n} r^{n} = (n!)^{n}(n!)^{n} = (n!)^{2n}.$$

answered Dec 3, 2015 by Sntn (edited Mar 31, 2019)

Comments:

- I disagree with your calculation on the top row: it shouldn't be $n\cdot n!$. In particular, take $n=3$: by counting, we get $9\cdot 6\cdot 3 = 162 \neq 18 = 3\cdot 3!$. – Will R
- @WillR Sorry... my bad... I'll just rectify the answer. – Sntn
- No need to apologize. It might be worthwhile to consider the following: you have a grid of $n\times n$ subgrids. In each subgrid you get a certain number of choices. Consider the choices as entries in an $n\times n$ matrix; for example, when $n=3$, this gives $\begin{pmatrix}9&6&3\\6&4&2\\3&2&1\end{pmatrix}$.
Now the total number of choices for that value of $n$ is the product of all the entries in the matrix. Write out the matrices for $n=1,2,3,4$; do you notice a pattern? – Will R
- Yes, I agree with your answer now, and I think I have a fairly autonomous argument using a recursive formula (derived from the matrices: let $P_n$ be the product of the entries of the $n\times n$ matrix; then $P_n = P_{n-1}\cdot n^{2n}\cdot[(n-1)!]^2$), and your answer satisfies the formula and has the correct initial condition. – Will R
- Happy to help! Always remember to check with small numbers: it's frequently easy, and it's even more frequently helpful. – Will R
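Closing the loop on the thread (this check is an illustration added here, not part of the original page): the matrix product, the recursion $P_n = P_{n-1}\cdot n^{2n}\cdot[(n-1)!]^2$, and the closed form $(n!)^{2n}$ can be verified against one another mechanically. The short Python sketch below does so for the first few $n$.

```python
from math import factorial, prod

def matrix_product(n):
    # product of all entries (n+1-i)(n+1-j) of the n x n matrix of per-subgrid choice counts
    return prod((n + 1 - i) * (n + 1 - j) for i in range(1, n + 1) for j in range(1, n + 1))

P = 1                                                # P_1 = 1
for n in range(1, 7):
    if n > 1:
        P *= n ** (2 * n) * factorial(n - 1) ** 2    # recursion P_n = P_{n-1} * n^{2n} * ((n-1)!)^2
    closed_form = factorial(n) ** (2 * n)
    assert P == matrix_product(n) == closed_form
    print(n, closed_form)
```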
Math 13 — An Introduction to Abstract Mathematics
Neil Donaldson & Alessandra Pantano
December 2, 2015

Contents
1 Introduction 3
2 Logic and the Language of Proofs 9
2.1 Propositions 9
2.2 Methods of Proof 20
2.3 Quantifiers 33
3 Divisibility and the Euclidean Algorithm 41
3.1 Remainders and Congruence 41
3.2 Greatest Common Divisors and the Euclidean Algorithm 47
4 Sets and Functions 52
4.1 Set Notation and Describing a Set 52
4.2 Subsets 58
4.3 Unions, Intersections, and Complements 61
4.4 Introduction to Functions 66
5 Mathematical Induction and Well-ordering 75
5.1 Investigating Recursive Processes 75
5.2 Proof by Induction 79
5.3 Well-ordering and the Principle of Mathematical Induction 84
5.4 Strong Induction 92
6 Set Theory, Part II 97
6.1 Cartesian Products 97
6.2 Power Sets 101
6.3 Indexed Collections of Sets 106
7 Relations and Partitions 116
7.1 Relations 116
7.2 Functions revisited 120
7.3 Equivalence Relations 125
7.4 Partitions 131
7.5 Well-definition, Rings and Congruence 138
7.6 Functions and Partitions 141
8 Cardinalities of Infinite Sets 146
8.1 Cantor's Notion of Cardinality 146
8.2 Uncountable Sets 153

Useful Texts
• Book of Proof, Richard Hammack, 2nd ed 2013. Available free online! Very good on the basics: if you're having trouble with reading set notation or how to construct a proof, this book's for you! These notes are deliberately pitched at a high level relative to this textbook to provide contrast.
• Mathematical Reasoning, Ted Sundstrom, 2nd ed 2014. Available free online! Excellent resource. If you would like to buy the actual book, you can purchase it on Amazon at a really cheap price.
• Mathematical Proofs: A Transition to Advanced Mathematics, Chartrand/Polimeni/Zhang, 3rd Ed 2013, Pearson. The most recent course text.
Has many, many exercises; the first half is fairly straightforward while the second half is much more complex and dauntingly detailed. • The Elements of Advanced Mathematics, Steven G. Krantz, 2nd ed 2002, Chapman & Hall and Foundations of Higher Mathematics, Peter Fletcher and C. Wayne Patty, 3th ed 2000, Brooks–Cole are old course textbooks for Math 13. Both are readable and concise with good exercises. Learning Outcomes 1. Developing the skills necessary to read and practice abstract mathematics. 2. Understanding the concept of proof, and becoming acquainted with several proof techniques. 3. Learning what sort of questions mathematicians ask, what excites them, and what they are looking for. 4. Introducing upper-division mathematics by giving a taste of what is covered in several areas of the subject. Along the way you will learn new techniques and concepts. For example: Number Theory Five people each take the same number of candies from a jar. Then a group of seven does the same. The, now empty, jar originally contained 239 candies. Can you decide how much candy each person took? Geometry and Topology How can we visualize and compute with objects like the Mobius strip? Fractals How to use sequences of sets to produce objects that appear the same at all scales. To Infinity and Beyond! Why are some infinities greater than others? 1 Introduction What is Mathematics? For many students this course is a game-changer. A crucial part of the course is the acceptance that upper-division mathematics is very different from what is presented at grade-school and in the cal-culus sequence. Some students will resist this fact and spend much of the term progressing through the various stages of grief (denial, anger, bargaining, depression, acceptance) as they discover that what they thought they excelled at isn’t really what the subject is about. Thus we should start at the beginning, with an attempt to place the mathematics you’ve learned within the greater context of the subject. The original Greek meaning of the word mathemata is the supremely unhelpful, “That which is to be known/learned.” There is no perfect answer to our question, but a simplistic starting point might be to think of your mathematics education as a progression. Arithmetic College Calculus Abstract Mathematics In elementary school you largely learn arithmetic and the basic notions of shape. This is the mathe-matics all of us need in order to function in the real world. If you don’t know the difference between 15,000 and 150,000, you probably shouldn’t try to buy a new car! For the vast majority of people, arithmetic is the only mathematics they’ll ever need. Learn to count, add, and work with percent-ages and you are thoroughly equipped for most things life will throw at you. Calculus discusses the relationship between a quantity and its rate of change, the applications of which are manifold: distance/velocity, charge/current, population/birth-rate, etc. Elementary calculus is all about solving problems: What is the area under the curve? How far has the projec-tile traveled? How much charge is in the capacitor? By now you will likely have computed many integrals and derivatives, but perhaps you have not looked beyond such computations. A mathe-matician explores the theory behind the calculations. From an abstract standpoint, calculus is the beautiful structure of the Riemann integral and the Fundamental Theorem, understanding why we can use anti-derivatives to compute area. 
To an engineer, the fact that integrals can be used to model the bending of steel beams is crucial, while this might be of only incidental interest to a mathemati-cian. Perhaps the essential difference between college calculus and abstract mathematics is that the former is primarily interested in the utility of a technique, while the latter focuses on structure, ve-racity and the underlying beauty. In this sense, abstract mathematics is much more of an art than a science. No-one measures the quality of a painting or sculpture by how useful it is, instead it is the structure, the artist’s technique and the quality of execution that are praised. Research mathemati-cians, both pure and applied, view mathematics the same way. In areas of mathematics other than Calculus, the link to applications is often more tenuous. The structure and distribution of prime numbers were studied for over 2000 years before, arguably, any serious applications were discovered. Sometimes a real-world problem motivates generalizations that have no obvious application, and may never do so. For example, the geometry of projecting 3D objects onto a 2D screen has obvious applications (TV, computer graphics/design). Why would anyone want to consider projections from 4D? Or from 17 dimensions? Sometimes an application will appear later, sometimes not.1 The reason the mathematician studies such things is because the structure appears beautiful to them and they want to appreciate it more deeply. Just like a painting. 1There are very useful applications of high-dimensional projections, not least to economics and the understanding of sound and light waves. 3 The mathematics you have learned so far has consisted almost entirely of computations, with the theoretical aspects swept under the rug. At upper-division level, the majority of mathematics is presented in an abstract way. This course will train you in understanding and creating abstract mathematics, and it is our hope that you will develop an appreciation for it. Proof The essential concept in higher-level mathematics is that of proof. A basic dictionary entry for the word would cover two meanings: 1. An argument that establishes the truth of a fact. 2. A test or trial of an assertion.2 In mathematics we always mean the former, while in much of science and wider culture the second meaning predominates. Indeed mathematics is one of the very few disciplines in which one can categorically say that something is true or false. In reality we can rarely be so certain. A greasy sales-man in a TV advert may claim that to have proved that a certain cream makes you look younger; a defendant may be proved guilty in court; the gravitational constant is 9.81ms−2. Ask yourself what these statements mean. The advert is just trying to sell you something, but push harder and they might provide some justification: e.g. 100 people used the product for a month and 75 of them claim to look younger. This is a test, a proof in the second sense of the definition. Is a defendant really guilty of a crime just because a court has found them so; have there never been any miscarriages of justice? Is the gravitational constant precisely 9.81ms−2, or is this merely a good approximation? This kind of pedantry may seem over the top in everyday life: indeed most of us would agree that if 75% of people think a cream helps, then it probably is doing something beneficial. In mathematics and philosophy, we think very differently: the concepts of true and false and of proof are very precise. 
So how do mathematicians reach this blissful state where everything is either right or wrong and, once proved, is forever and unalterably certain? The answer, rather disappointingly, is by cheating. Nothing in mathematics is true except with reference to some assumption. For example, consider the following theorem: Theorem 1.1. The sum of any two even integers is even. We all believe that this is true, but can we prove it? In the sense of the second definition of proof, it might seem like all we need to do is to test the assertion: for example 4 + 6 = 10 is even. In the first sense, the mathematical sense, of proof, this is nowhere near enough. What we need is a definition of even.3 Definition 1.2. An integer is even if it may be written in the form 2n where n is an integer. The proof of the theorem now flows straight from the definition. 2It is this notion that makes sense of the seemingly oxymoronic phrase The exception proves the rule. It is the exception that tests the validity of the rule. 3And more fundamentally of sum and integer. 4 Proof. Let x and y be any two even integers. We want to show that x + y is an even integer. By definition, an integer is even if it can be written in the form 2k for some integer k. Thus there exist integers n, m such that x = 2m and y = 2n. We compute: x + y = 2m + 2n = 2(m + n). (∗) Because m + n is an integer, this shows that x + y is an even integer. There are several important observations: • ‘Any’ in the statement of the theorem means the proof must work regardless of what even in-tegers you choose. It is not good enough to simply select, for example, 4 and 16, then write 4 + 16 = 20. This is an example, or test, of the theorem, not a mathematical proof. • According to the definition, 2m and 2n together represent all possible pairs of even numbers. • The proof makes direct reference to the definition. The vast majority of the proofs in this course are of this type. If you know the definition of every word in the statement of a theorem, you will often discover a proof simply by writing down the definitions. • The theorem itself did not mention any variables. The proof required a calculation for which these were essential. In this case the variables m and n come for free once you write the definition of evenness! A great mistake is to think that the proof is nothing more than the calculation (∗). This is the easy bit, and it means nothing without the surrounding sentences. The important link between theorems and definitions is much of what learning higher-level math-ematics is about. We prove theorems (and solve homework problems) because they make us use and understand the subtleties of definitions. One does not know mathematics, one does it. Mathematics is a practice; an art as much as it is a science. Conjectures In this course, you will discover that one of the most creative and fun aspects of mathematics is the art of formulating, proving and disproving conjectures. To get a taste, consider the following: Conjecture 1.3. If n is any odd integer, then n2 −1 is a multiple of 8. Conjecture 1.4. For every positive integer n, the integer n2 + n + 41 is prime. A conjecture is the mathematician’s equivalent of the experimental scientist’s hypothesis: a state-ment that one would like to be true. The difference lies in what comes next. 
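A quick computational aside before we continue: tables of small cases, like the ones worked out shortly, are easy to generate by machine. A minimal sketch, with Python assumed purely as a convenient calculator:

```python
# Tabulate small cases of Conjecture 1.3 (n odd) and Conjecture 1.4 (n positive).

def is_prime(m):
    """Trial division; fine for numbers this small."""
    return m > 1 and all(m % d != 0 for d in range(2, int(m**0.5) + 1))

for n in range(1, 14, 2):                     # odd n = 1, 3, ..., 13
    print(n, n**2 - 1, (n**2 - 1) % 8 == 0)   # is n^2 - 1 a multiple of 8?

for n in range(1, 8):                         # n = 1, ..., 7
    print(n, n**2 + n + 41, is_prime(n**2 + n + 41))
```

Extending the second loop far enough would already reveal trouble for Conjecture 1.4, as we are about to see; no amount of computation, however, can prove Conjecture 1.3 for every odd n.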
The mathematician will try to prove that a conjecture is undeniably true by relying on logic, while the scientist will apply the scientific method, conducting experiments that attempt, and hopefully fail, to show that a hypothesis is incorrect. Once a mathematician proves the validity of a conjecture it becomes a theorem. The job of a mathematics researcher is thus to formulate conjectures, prove them, and publish the resulting theorems. The creativity lies as much in the formulation as in the proof. As you go through the class, try to formulate conjectures. Like as not, many of your conjectures will be false, but you'll gain a lot from trying to form them.

Let us return to our conjectures: are they true or false? How can we decide? As a first attempt, we may try to test the conjectures by computing with some small integers n. In practice this would be done before stating the conjectures.

n        1   3   5    7    9    11    13
n² − 1   0   8   24   48   80   120   168

n             1    2    3    4    5    6    7
n² + n + 41   43   47   53   61   71   83   97

Because 0, 8, 24, 48, 80, 120 and 168 are all multiples of 8, and 43, 47, 53, 61, 71, 83 and 97 are all prime, both conjectures appear to be true. Would you bet $100 that this is indeed the case? Is n² − 1 a multiple of 8 for every odd integer n? Is n² + n + 41 prime for every positive integer n? The only way to establish whether a conjecture is true or false is by doing one of the following:

Prove it by showing it must be true in all cases, or,
Disprove it by finding at least one instance in which the conjecture is false.

Let us work with Conjecture 1.3. If n is an odd integer, then, by definition, we can write it as n = 2k + 1 for some integer k. Then

n² − 1 = (2k + 1)² − 1 = (4k² + 1 + 4k) − 1 = 4k² + 4k.

We need to investigate whether this is always a multiple of 8. Since 4k² + 4k = 4(k² + k) is already a multiple of 4, it all comes down to deciding whether or not k² + k contains a factor 2 for all possible choices of k; i.e. is k² + k even? Do we believe this? We can return to trying out some small values of k:

k        −2   −1   0   1   2   3    4
k² + k   2    0    0   2   6   12   20

Once again, the claim seems to be true for small values of k, but how do we know it is true for all k? Again, the only way is to prove it or disprove it. How to proceed? The question here is whether or not k² + k is always even. Factoring out k, we get: k² + k = k(k + 1). We have therefore expressed k² + k as a product of two consecutive integers. This is great, because for any two consecutive integers, one is even and the other is odd, and so their product must be even. We have now proved that the conjecture is true. Conjecture 1.3 is indeed a theorem!

Everything we've done so far has been investigative, and is laid out in an untidy way. We don't want the reader to have to wade through all of our scratch work, so we formalize the above argument. This is the final result of our deliberations; investigate, spot a pattern, conjecture, prove, and finally present your work in as clean and convincing a manner as you can.

Theorem 1.5. If n is any odd integer, then n² − 1 is a multiple of 8.

Proof. Let n be any odd integer; we want to show that n² − 1 is a multiple of 8. By the definition of odd integer, we may write n = 2k + 1 for some integer k. Then

n² − 1 = (2k + 1)² − 1 = (4k² + 1 + 4k) − 1 = 4k² + 4k = 4k(k + 1).

We distinguish two cases. If k is even, then k(k + 1) is even and so 4k(k + 1) is divisible by 8. If k is odd, then k + 1 is even. Therefore k(k + 1) is again even and 4k(k + 1) is divisible by 8. In both cases n² − 1 = 4k(k + 1) is divisible by 8.
This concludes the proof.

It is now time to explore Conjecture 1.4. The question here is whether or not n² + n + 41 is a prime integer for every positive integer n. We know that when n = 1, 2, 3, 4, 5, 6 or 7 the answer is yes, but examples do not make a proof. At this point, we do not know whether the conjecture is true or false. Let us investigate the question further. Suppose that n is any positive integer; we must ask whether it is possible to factor n² + n + 41 as a product of two positive integers, neither of which is one.⁴ When n = 41 such a factorization certainly exists, since we can write

41² + 41 + 41 = 41(41 + 1 + 1) = 41 · 43.

Our counterexample shows that there exists at least one value of n for which n² + n + 41 is not prime. We have therefore disproved the conjecture that 'for all positive integers n, n² + n + 41 is prime,' and so Conjecture 1.4 is false! The moral of the story is this: to show that a conjecture is true you must prove that it holds for all the cases in consideration, but to show that it is false a single counterexample suffices.

⁴Once again we rely on a definition: a positive integer is prime if it cannot be written as a product of two integers, both greater than one.

Conjectures: True or False? Do your best to prove or disprove the following conjectures. Then revisit these problems at the end of the course to realize how much your proof skills have improved.

1. The sum of any three consecutive integers is even.
2. There exist integers m and n such that 7m + 5n = 4.
3. Every common multiple of 6 and 10 is divisible by 60.
4. There exist integers x and y such that 6x + 9y = 10.
5. For every positive real number x, x + 1/x is greater than or equal to 2.
6. If x is any real number, then x² ≥ x.
7. If n is any integer, n² + 5n must be even.
8. If x is any real number, then |x| ≥ −x.
9. Consider the set R of all real numbers. For all x in R, there exists y in R such that x < y.
10. Consider the set R of all real numbers. There exists x in R such that, for all y in R, x < y.
11. The sets A = {n ∈ N : n² < 25} and B = {n² : n ∈ N and n < 5} are equal. Here N denotes the set of natural numbers.

Now we know a little of what mathematics is about, it is time to practice some of it!

2 Logic and the Language of Proofs

In order to read and construct proofs, we need to start with the language in which they are written: logic. Logic is to mathematics what grammar is to English. Section 2.1 will not look particularly mathematical, but we'll quickly get to work in Section 2.2 using logic in a mathematical context.

2.1 Propositions

Definition 2.1. A proposition or statement is a sentence that is either true or false.

Examples.
1. 17 − 24 = 7.
2. 392 is an odd integer.
3. The moon is made of cheese.
4. Every cloud has a silver lining.
5. God exists.

In order to make sense, these propositions require a clear definition of every concept they contain. There are many concepts of God in many cultures, but once it is decided which we are talking about, it is clear that They either exist or do not. This example illustrates that a question need not be indisputably answerable (by us) in order to qualify as a proposition. Indeed mostly when people argue over propositions and statements, what they are really arguing over are the definitions! Anything that is not true or false is not a proposition. January 1st is not a proposition, neither is Green.
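For readers who like to experiment, concrete propositions with known truth values can be modelled directly as booleans; a sentence with no truth value simply has nothing to be assigned. A minimal sketch, with Python assumed only as scratch paper:

```python
# A proposition is just a named truth value.
p1 = (17 - 24 == 7)      # False: 17 - 24 is -7, not 7
p2 = (392 % 2 == 1)      # False: 392 is even
p3 = False               # "The moon is made of cheese": we assert its truth value

print(p1, p2, p3)        # False False False
# "January 1st" is neither true nor false, so there is no boolean to assign:
# it is not a proposition.
```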
Truth Tables

Often one has to deal with abstract propositions; those where you do not know the truth or falsity, or indeed when you don't explicitly know the proposition! In such cases it can be convenient to represent the combinations of propositions in a tabular format. For instance, if we have two propositions (P and Q), or even three (P, Q and R) then all possible combinations of truth T and falsehood F are represented in the following tables:

P  Q
T  T
T  F
F  T
F  F

P  Q  R
T  T  T
T  T  F
T  F  T
T  F  F
F  T  T
F  T  F
F  F  T
F  F  F

The mathematician in you should be looking for patterns and asking: how many rows would a truth table corresponding to n propositions have, and can I prove my assertion? Right now it is hard to prove that the answer is 2ⁿ: induction (Chapter 5) makes this very easy.

Connecting Propositions: Conjunction, Disjunction and Negation

We now define how to combine propositions in natural ways, modeled on the words and, or and not.

Definition 2.2. Let P and Q be propositions. The conjunction (AND, ∧) of P and Q, the disjunction (OR, ∨) of P and Q, and the negation or denial (NOT, ¬, ∼) of P are defined by the truth tables,

P  Q  P ∧ Q
T  T    T
T  F    F
F  T    F
F  F    F

P  Q  P ∨ Q
T  T    T
T  F    T
F  T    T
F  F    F

P  ¬P
T   F
F   T

It is best to use and, or and not when speaking about these concepts: conjunction, disjunction and negation may make you sound educated, but at the serious risk of not being understood!

Example. Let P, Q & R be the following propositions:

P. Irvine is a city in California.
Q. Irvine is a town in Ayrshire, Scotland.
R. Irvine has seven letters.

Clearly P is true while R is false. If you happen to know someone from Scotland, you might know that Q is true.⁵ We can now compute the following (increasingly grotesque) combinations...

P ∧ Q   P ∨ Q   P ∧ R   ¬R   (¬R) ∧ P   ¬(R ∨ P)   (¬P) ∨ [((¬R) ∨ P) ∧ Q]
  T       T       F      T       T          F                  T

How did we establish these facts? Some are quick, and can be done in your head. Consider, for instance, the statement (¬R) ∧ P. Because R is false, ¬R is true. Thus (¬R) ∧ P is the conjunction of two true statements, hence it is true. Similarly, we can argue that R ∨ P is true (because R is false and P is true), so the negation ¬(R ∨ P) is false. Establishing the truth value of the final proposition (¬P) ∨ [((¬R) ∨ P) ∧ Q] requires more work. You may want to set up a truth table with several auxiliary columns to help you compute:

P  Q  R  ¬P  ¬R  (¬R) ∨ P  ((¬R) ∨ P) ∧ Q  (¬P) ∨ [((¬R) ∨ P) ∧ Q]
T  T  F   F   T      T             T                   T

The importance of parentheses in logical expressions cannot be stressed enough. For example, try building the truth table for the propositions P ∨ (Q ∧ R) and (P ∨ Q) ∧ R. Are they the same?

⁵The second syllable is pronounced like the i in bin or win. Indeed the first Californian antecedent of the Irvine family which gave its name to UCI was an Ulster-Scotsman named James Irvine (1827–1886). Probably the family name was originally pronounced in the Scottish manner.

Conditional and Biconditional Connectives

In order to logically set up proofs, we need to see how propositions can lead one to another.

Definition 2.3. The conditional (=⇒) and biconditional (⇐⇒) connectives have the truth tables

P  Q  P =⇒ Q
T  T     T
T  F     F
F  T     T
F  F     T

P  Q  P ⇐⇒ Q
T  T     T
T  F     F
F  T     F
F  F     T

For the proposition P =⇒ Q, we call P the hypothesis and Q the conclusion. Observe that the expressions P =⇒ Q and P ⇐⇒ Q are themselves propositions. They are, after all, sentences which are either true or false!
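Every table in this section can also be generated mechanically: loop over all assignments of T and F to the variables and evaluate the expression. A minimal sketch, assuming Python and using the equivalence of P =⇒ Q with (¬P) ∨ Q (see Exercise 2.1.6):

```python
from itertools import product

def truth_table(names, expr):
    """Print one row per assignment of True/False to the named propositions."""
    print(*names, "result")
    for values in product([True, False], repeat=len(names)):
        env = dict(zip(names, values))
        print(*("T" if v else "F" for v in values),
              "T" if expr(**env) else "F")

# The parentheses question above: P or (Q and R) versus (P or Q) and R.
truth_table("PQR", lambda P, Q, R: P or (Q and R))
truth_table("PQR", lambda P, Q, R: (P or Q) and R)

# The conditional, written via its equivalent form (not P) or Q.
truth_table("PQ", lambda P, Q: (not P) or Q)
```

Comparing the first two tables settles the question just posed: the expressions disagree (for instance at P = T, Q = F, R = F), so the parentheses genuinely matter.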
Synonyms = ⇒and ⇐ ⇒can be read in many different ways: P = ⇒Q P ⇐ ⇒Q P implies Q P if and only if Q Q if P P iff Q P only if Q P and Q are (logically) equivalent P is sufficient for Q P is necessary and sufficient for Q Q is necessary for P For instance, the following propositions all mean exactly the same thing: • If you are born in Rome, then you are Italian. • You are Italian if you are born in Rome. • You are born in Rome only if you are Italian. • Being born in Rome is sufficient to be Italian. • Being Italian is necessary for being born in Rome. Are you comfortable with what P and Q are here? The biconditional connective should be easy to remember: P ⇐ ⇒ Q is true precisely when P and Q have identical truth states. It is harder to make sense of the conditional connective. One way of thinking about it is to consider what it means for an implication to be false. If P = ⇒Q is false, it is impossible to create a logical argument which assumes P and concludes Q. The second row of P = ⇒Q encapsulates the fact that it should be impossible for truth ever to logcially imply falsehood. 11 Aside: Why is F = ⇒T considered true? This is the most immediately confusing part of the truth table for the conditional connective. Here is a mathematical example, written with an English translation at the side. 7 = 3 = ⇒0 · 7 = 0 · 3 (If 7 = 3, then 0 times 7 equals 0 times 3) = ⇒0 = 0 (then 0 equals 0) Thus 7 = 3 = ⇒0 = 0. Logically speaking this is a perfectly correct argument, thus the implication is true. The argument makes us uncomfortable because 7 = 3 is clearly false. If you want to understand connectives more deeply than this, then take a logic or philosophy course! For example, although neither statement makes the least bit of sense in English; 17 is odd = ⇒Mexico is in China is false, whilst 17 is even = ⇒Mexico is in China is true. Such bizarre constructions are happily beyond the consideration of this course! Theorems and Proofs Truth tables and connectives are very abstract. To apply them to mathematics we need the following basic notions of theorem and proof. Definition 2.4. A theorem is a justified assertion that some statement of the form P = ⇒Q is true. A proof is an argument that justifies the truth of a theorem. Think back to the truth table for P = ⇒Q in Definition 2.3. Suppose that the hypothesis P is true and that P = ⇒Q is true: that is, P = ⇒Q is a theorem. We must be in the first row of the truth table, and so the conclusion Q is also true. This is how we think about proving basic theorems. In a direct proof we start by assuming the hypothesis (P) is true and make a logical argument (P = ⇒Q) which asserts that the conclusion (Q) is true. As such, it often convenient to rewrite the statement of a theorem as an implication of the form P = ⇒Q. Here is an example of a direct proof. Theorem 2.5. The product of two odd integers is odd. We can write the theorem in terms of propositions and connectives: • P is ‘x and y are odd integers.’ This is our assumption, the hypothesis. • Q is ‘The product of x and y is odd.’ This is what we want to show, the conclusion. • Showing that P = ⇒ Q is true, that (the truth of) P implies (the truth of) Q requires an argument. This is the proof. 12 Proof. Let x and y be any two odd integers. We want to show that product x · y is an odd integer. By definition, an integer is odd if it can be written in the form 2k + 1 for some integer k. Thus there must be integers n, m such that x = 2n + 1 and y = 2m + 1. 
We compute: x · y = (2n + 1)(2m + 1) = 4mn + 2n + 2m + 1 = 2(2mn + n + m) + 1. Because 2mn + n + m is an integer, this shows that x · y is an odd integer. The Converse and Contrapositive The following constructions are used continually in mathematics: it is vitally important to know the difference between them. Definition 2.6. The converse of an implication P = ⇒Q is the reversed implication Q = ⇒P. The contrapositive of P = ⇒Q is ¬Q = ⇒¬P. In general, we can’t say anything about the truth value of the converse of a true statement. The contrapositive of a true statement is, however, always true. Theorem 2.7. The contrapositive of an implication is logically equivalent the original implication. Proof. Simply use our definitions of negation and implication to compute the truth table: P Q P = ⇒Q ¬Q ¬P ¬Q = ⇒¬P T T T F F T T F F T F F F T T F T T F F T T T T Since the truth states in the third and sixth columns are identical, we see that P = ⇒ Q and its contrapositive ¬Q = ⇒¬P are logically equivalent. Example. Let P and Q be the following statements: P. Claudia is holding a peach. Q. Claudia is holding a piece of fruit. The implication P = ⇒Q is true, since all peaches are fruit. As a sentence, we have: If Claudia is holding a peach, then Claudia is holding a piece of fruit. The converse of P = ⇒Q is the sentence: If Claudia is holding a piece of fruit, then Claudia is holding a peach. This is palpably false: Claudia could be holding an apple! 13 The contrapositive of P = ⇒Q is the following sentence: If Claudia is not holding any fruit, then she is not holding a peach. This is clearly true. The fact that P = ⇒Q and ¬Q = ⇒¬P are logically equivalent allows us, when convenient, to prove P = ⇒Q by instead proving its contrapositive... Proof by Contrapositive Here is another basic theorem. Theorem 2.8. Let x and y be two integers. If x + y is odd, then exactly one of x or y is odd. The statement of the theorem is an implication of the form P = ⇒Q . Here we have P. The sum x + y of integers x and y is odd. Q. Exactly one of x or y is odd. A direct proof would require that we assume P is true and logically deduce the truth of Q. The problem is that it is hard to work with these propositions, especially Q. The negation of Q is, however, much easier: ¬Q. x and y are both even or both odd (they have the same parity). ¬P. The sum x + y of integers x and y is even. Since P = ⇒Q is logically equivalent to the simpler-seeming contrapositive (¬Q) = ⇒(¬P), we choose to prove the latter. This is, after all, equivalent to proving the original implication. Proof. There are two cases: x and y are both even, or both odd. Case 1: Let x = 2m and y = 2n be even. Then x + y = 2(m + n) is even. Case 2: Let x = 2m + 1 and y = 2n + 1 be odd. Then x + y = 2(m + n + 1) is even. The above is an example of a proof by contrapositive. De Morgan’s Laws Two of the most famous results in logic are attributable to Augustus De Morgan, a very famous 19th century logician. Theorem 2.9 (De Morgan’s laws). Let P and Q be any propositions. Then: 1. ¬(P ∧Q) ⇐ ⇒¬P ∨¬Q. 2. ¬(P ∨Q) ⇐ ⇒¬P ∧¬Q. The first law says that the negation of P ∧Q is logically equivalent to ¬P ∨¬Q: the two expres-sions have the same truth table. Here is a proof of the first law. Try the second on your own. 14 Proof. P Q P ∧Q ¬(P ∧Q) ¬P ¬Q ¬P ∨¬Q T T T F F F F T F F T F T T F T F T T F T F F F T T T T Simply observe that the fourth and seventh columns are identical. It is worth pausing to notice how similar the two laws are, and how concise. 
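Both laws can also be confirmed by brute force over the four truth assignments; a short sanity check (Python assumed, as before) for anyone who distrusts hand-built tables:

```python
from itertools import product

# De Morgan's laws: not(P and Q) <=> (not P) or (not Q)
#                   not(P or Q)  <=> (not P) and (not Q)
for P, Q in product([True, False], repeat=2):
    assert (not (P and Q)) == ((not P) or (not Q))
    assert (not (P or Q)) == ((not P) and (not Q))
print("Both laws hold for every truth assignment of P and Q.")
```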
There is some beauty here. With a written example the laws are much easier to comprehend. Example. (Of the first law) Suppose that of a morning you can choose (or not) to ride the subway to work, and you can choose (or not) to have a cup of coffee. Consider the following sentence: I rode the subway and I had coffee. What is its negation (opposite)? Clearly it is: I didn’t ride the subway or I didn’t have coffee. Note that the mathematical use of or includes the possibility that you neither rode the subway nor had coffee. You will see these laws again when we think about sets. Aside: Think about the meaning! In the previous example we saw how negation switches and to or. This is true only when and denotes a conjunction between two propositions. Before applying De Morgan’s laws, think about the meaning of the sentence. For example, the negation of Mark and Mary have the same height. is the proposition: Mark and Mary do not have the same height. If you blindly appeal to De Morgan’s laws you might end up with the following piece of nonsense: Mark or Mary do not have the same height. Logical rules are wonderfully concise, but very easy to misuse. Always think about the meaning of a sentence and you shouldn’t go wrong. Negating Conditionals As our discussion of contrapositives makes clear, you will often want to understand the negation of a statement. In particular, it is important to understand the negation of a conditional P = ⇒Q. Is it enough to say ‘P doesn’t imply Q’? And what could this mean? To answer the question you can use truth tables, or just think. 15 Here is the truth table for P = ⇒Q and its negation: recall that negation simply swaps T and F. P Q P = ⇒Q ¬(P = ⇒Q) T T T F T F F T F T T F F F T F The only time there is a T in the final column is when both P is true and Q is false. We have therefore proved the following: Theorem 2.10. ¬(P = ⇒Q) is logically equivalent to P ∧¬Q (read ‘P and not Q’). Now think rather than calculate. What is the opposite of the following implication? It’s the morning therefore I’ll have coffee. Hopefully it is clear that the negation is: It’s the morning and I won’t have coffee. The implication ‘therefore’ has disappeared and a conjuction ‘and’ is in its place. Warning! The negation of P = ⇒Q is not a conditional. In particular it is neither of the following: The converse, Q = ⇒P. The contrapositive of the converse, ¬P = ⇒¬Q. If you are unsure about this, write down the truth tables and compare. Example. Let x be an integer. What is the negation of the following sentence? If x is even then x2 is even. Written in terms of propositions, we wish to negate P = ⇒Q , where P and Q are: P. x is even. Q. x2 is even. Hence the negation is P ∧¬Q, which is: x is even and x2 is odd. This is very different to ¬P = ⇒¬Q (if x is odd then x2 is odd). Keep yourself straight by thinking about the meaning of the sentences. It should be obvious that ‘x even = ⇒x2 even’ is true. It negation should therefore be false. Even reading the negation should make you feel a little uncomfortable. 16 Tautologies and Contradictions There are two final related concepts that are helpful for understanding proofs. Definition 2.11. A tautology is a logical expression that is always true, regardless of what the compo-nent statments might be. A contradiction is a logical expression that is always false. The easiest way to detect these is simply to construct a truth table. Examples. 1. 
P ∧(¬P) is a very simple contradiction: P ¬P P ∧(¬P) T F F F T F Whatever the proposition P is, it cannot be true at the same time as its negation. 2. (P ∧(P = ⇒Q)) = ⇒Q is a tautology. P Q P = ⇒Q P ∧(P = ⇒Q) (P ∧(P = ⇒Q)) = ⇒Q T T T T T T F F F T F T T F T F F T F T Aside: Algebraic Logic One can study logic in a more algebraic manner. De Morgan’s Laws are algebraic. Here are a few of the other basic laws of logic: P ∧Q ⇐ ⇒Q ∧P P ∨Q ⇐ ⇒Q ∨P (P ∧Q) ∧R ⇐ ⇒P ∧(Q ∧R), (P ∨Q) ∨R ⇐ ⇒P ∨(Q ∨R), (P ∧Q) ∨R ⇐ ⇒(P ∨R) ∧(Q ∨R), (P ∨Q) ∧R ⇐ ⇒(P ∧R) ∨(Q ∧R). The three pairs are, respectively, the commutative, associative, and distributive laws of logic, and you can check them all with truth tables. Using these rules, one can answer questions, such as deciding when an expression is a tautology, without laboriously creating truth tables. It is even fun! Such an approach is appropriate when you are considering abstract propositions, say in a formal logic course. In this text our primary interest with logic lies in using it to prove theorems. When one has an explicit theorem it is important to keep the meanings of all propositions clear. By relying too much on abstract laws like the above, it is easy to lose the meaning and write nonsense! 17 Exercises 2.1.1 Express each of the following statements in the “If . . . , then . . . ” form. (a) You must eat your dinner if you want to grow. (b) Being a multiple of 12 is a sufficient condition for a number to be even. (c) It is necessary for you to pass your exams in order for you to obtain a degree. (d) A triangle is equilateral only if all its sides have the same length. 2.1.2 Suppose that “Girls smell of roses” and “Boys have dirty hands” are true statements and that “The Teacher is always right” is a false statment. Which of the following are true? Hint: Label each of the given statements, and think about each of the following using connectives. (a) If girls smell of roses, then the Teacher is always right. (b) If the Teacher is always right, then boys have dirty hands. (c) If the Teacher is always right or girls smell of roses, then boys have dirty hands. (d) If boys have dirty hands and girls smell of roses, then the Teacher is always right. 2.1.3 Write the negation (in words) of the following claim: If Jack and Jill climb up the hill, then they fall down and like pails of water. 2.1.4 Orange County has two competing transport plans under consideration: widening the 405 freeway and constructing light rail down its median. A local politician is asked, “Would you like to see the 405 widened or would you like to see light rail constructed?” The politician wants to sound positive, but to avoid being tied to one project. What is their response? Think about how the word ‘OR’ is used in logic... 2.1.5 Construct the truth tables for the propositions P ∨(Q ∧R) and (P ∨Q) ∧R. Are they the same? 2.1.6 Use De Morgan’s laws to prove that P = ⇒Q is logically equivalent to ¬P ∨Q. 2.1.7 Prove that the expressions (P = ⇒Q) ∧(Q = ⇒P) and P ⇐ ⇒Q are logically equivalent (have the same truth table). Why does this make sense? 2.1.8 Prove that ((P ∨Q) ∧¬P) ∧¬Q is a contradiction. 2.1.9 Prove that (¬P ∧Q) ∨(P ∧¬Q) ⇐ ⇒¬(P ⇐ ⇒Q) is a tautology: 2.1.10 Suppose that “If Colin was early, then no-one was playing pool” is a true statement. (a) What is its contrapositive of this statement? Is it true? (b) What is the converse? Is it true? (c) What can we conclude (if anything?) if we discover each of the following? Treat the two scenarios separately. (i) Someone was playing pool. 
(ii) Colin was late. 2.1.11 Suppose that “Ford is tired and Zaphod has two heads” is a false statement. What can we conclude if we discover each of the following? Treat the two scenarios separately. (a) Ford is tired. 18 (b) Ford is tired if and only if Zaphod has two heads. 2.1.12 (a) Do there exists propositions P, Q such that both P = ⇒Q and its converse are true? (b) Do there exist propositions P, Q such that both P = ⇒Q and its converse are false? Justify your answers by giving an example or a proof that no such examples exist. 2.1.13 Let R be the proposition “The summit of Mount Everest is underwater”. Suppose that S is a proposition such that (R ∨S) ⇐ ⇒(R ∧S) is false. (a) What can you say about S? (b) What if, instead, (R ∨S) ⇐ ⇒(R ∧S) is true? 2.1.14 (Hard) Suppose that P, Q are propositions. Argue that any of the 16 possible truth tables P Q ? T T T/F T F T/F F T T/F F F T/F represents an expression ? created using only P and Q and the operations ∧, ∨, ¬. Can you extend your argument to show that any truth table with any number of inputs represents some logical expression? 19 2.2 Methods of Proof There are four standard methods for proving P = ⇒Q. In practice, long proofs will use several of these. Direct Assume P and logically deduce Q. Contrapositive Assume ¬Q and deduce ¬P. This is enough since the contrapositive ¬Q = ⇒¬P is logically equavalent to P = ⇒Q. Contradiction Assume that P and ¬Q are true and deduce a contradiction. Since P ∧¬Q implies a contradiction, this shows that P ∧¬Q must be false. Because P ∧¬Q is equivalent to ¬(P = ⇒ Q), this is enough to conclude that P = ⇒Q is true (Theorem 2.10). Induction This has a completely different flavor: we will consider it in Chapter 5. The direct method has the advantage of being easy to follow logically. The contrapositive method has its advantage when it is difficult to work directly with the propositions P, Q, especially if one or both involve the non-existence of something. Working with their negations might give you the exis-tence of ingredients with which you can calculate. Proof by contradiction has a similar advantage: assuming both P and ¬Q gives you two pieces of information with which you can calculate. Logically speaking there is no difference between the three methods, beyond how you visualize the argument. To illustrate the difference between direct proof, proof by contrapositive, and proof by contradic-tion, we prove the same simple theorem in three different ways. Theorem 2.12. Suppose that x is an integer. If 3x + 5 is even, then 3x is odd. Direct Proof. We show that if 3x + 5 is even then 3x is odd. Assume that 3x + 5 is even, then 3x + 5 = 2n for some integer n. Hence 3x = 2n −5 = 2(n −3) + 1. This is clearly odd, because it is of the form ‘an even integer plus one.’ Contrapositive Proof. We show that if 3x is even then 3x + 5 is odd. Assume that 3x is even, and write 3x = 2n for some integer n. Then 3x + 5 = 2n + 5 = 2(n + 2) + 1. This is odd, because n + 2 is an integer. 20 Contradiction Proof. We assume that 3x + 5 and 3x are both even, and we deduce a contradiction. Write 3x + 5 = 2n and 3x = 2k for some integers n and k. Then 5 = (3x + 5) −3x = 2n −2k = 2(n −k). But this says that 5 is even: a contradiction. Some simple proofs We now give several examples of simple proofs. The only notation needed to speed things along is that of some basic sets of numbers: N for the positive integers, Z for the integers, R for the real numbers, and ∈for ‘is a member of the set’. 
Thus 2 ∈Z is read as ‘2 is a member of the set of integers’, or more concisely, ‘2 is an integer’. Theorem 2.13. Let m, n ∈Z. Both m and n are odd if and only if the product mn is odd. There are two theorems here: (⇒) If m and n are both odd, then the product mn is odd. (⇐) If the product mn is odd, then both integers m and n are odd. Most often when there are two directions you’ll have to prove them separately. Here we give a direct proof for (⇒) and a contapositive proof for (⇐). Proof. (⇒) Let m and n be odd. Then m = 2k + 1 and n = 2l + 1 for some k, l ∈Z. Then mn = (2k + 1)(2l + 1) = 4kl + 2k + 2l + 1 = 2(2kl + k + l) + 1. This is odd, because 2kl + k + l ∈Z. (⇐) Suppose that the integers m and n are not both odd. That is, assume that at least one of m and n is even. We show that the product mn is even. Without loss of generality,a we may assume that n is even, from which n = 2k for some integer k. Then, mn = m(2k) = 2(mk) is even. aSee ‘Potential Mistakes’ below for what this means. In the second part of the proof, we did not need to consider whether m was even or odd: if n was even, the product mn would be even regardless. The second part would have been very difficult to prove directly: Assume mn is odd, then mn = 2k + 1, so...We are stuck! Theorem 2.14. If 3x + 5 is even, then x is odd. We can prove this directly, by the contrapositive method, or by contradiction. We’ll do all of them, so you can appreciate the difference. 21 Direct Proof. Simply quote the two previous theorems. Because 3x + 5 is even, 3x must be odd by Theorem 2.12. Now, since 3x is odd, both 3 and x are odd by Theorem 2.13. Contrapositive Proof. Suppose that x is even. Then x = 2m for some integer m and we get 3x + 5 = 6m + 5 = 2(3m + 2) + 1. Because 3m + 2 ∈Z, we have 3x + 5 odd. Contradiction Proof. Suppose that both 3x + 5 and x are even. We can write 3x + 5 = 2m and x = 2k for some integers m and k. Then 5 = (3x + 5) −3x = 2m −6k = 2(m −3k) is even. Contradiction. Selecting a method of proof is often a matter of taste. You should be able to see the advantages and disadvantages of the various approaches. The direct proof is more logically straightforward, but it depends on two previous results. The contrapositive and the contradiction arguments are quicker and more self-contained, but they require a deeper familiarity with logic.6 Potential Mistakes: Generality and ‘Without Loss of Generality’ There are many common mistakes that you should be careful to avoid. Here are two incorrect ‘proofs’ of the = ⇒direction of Theorem 2.13. Fake Proof 1. m = 3 and n = 5 are both odd, and so mn = 15 is odd. This is an example of the theorem, not a proof. Examples are critical to helping you understand and believe what a theorem says, but they are no substitute for a proof! Recall the discussion in the Introduction on the usage of the word proof in English. Fake Proof 2. Let m = 2k + 1 and n = 2k + 1 be odd. Then, mn = (2k + 1)(2k + 1) = 2(2k2 + 2k) + 1 is odd. The problem with this second ‘proof’ is that it is not sufficiently general. m and n are supposed to be any odd integers, but by setting both of them equal to 2k + 1, we’ve chosen m and n to be the 6For even more variety, here is a direct proof of Theorem 2.14 that does not use any previous theorem. Suppose 3x + 5 is even, so 3x + 5 = 2n for some integer n. Then x = (3x + 5) −2x −5 = 2n −2x −5 = 2(x −n −3) + 1 is odd. You will often have a variety of possible approaches: this just makes proving theorems even more fun! 22 same! 
Notice how the correct proof uses m = 2k + 1 and n = 2l + 1, where we place no restriction on the integers k and l. By generality we mean that we must make sure to consider all possibilities encompassed by the hypothesis. The phrase Without Loss of Generality, often shorted to WLOG, is used when a choice is made which might at first appear to restrict things but, in fact, does not. Think back to how this was used in the the proof of Theorem 2.13. If at least one of integers m, n is even, then we lose nothing by assuming that it is the second integer n. The labels m, n are arbitrary: if n happened not to be even, we could simply relabel the integers, changing their order so that the second is now even. The phrase WLOG is used to pre-empt a challenge to a proof in the sense of Fake Proof 2, as if to say to the reader: ‘You might be tempted to object that my argument is not general enough. However, I’ve thought about it, and there is no problem.’ Here is a palpably ludicrous ‘theorem’ which illustrates another potential mistake. Theorem (Fake Theorem). The only number is zero. Fake Proof. Let x be any number and let y = x, then x = y = ⇒x2 = xy (Multiply both sides by x) = ⇒x2 −y2 = xy −y2 (Subtract y2 from both sides) = ⇒(x −y)(x + y) = (x −y)y (Factorize) = ⇒x + y = y (Divide both sides by x −y) = ⇒x = 0 Everything is fine up to the third line, but then we divide by x −y, which is zero! Don’t let yourself become so enamoured of logical manipulations that you forget to check the basics. More simple proofs Theorem 2.15. Suppose x ∈R. Then x3 + 2x2 −3x −10 = 0 = ⇒x = 2. We can prove this theorem using any of the three methods. All rely on your ability to factorize the polynomial: x3 + 2x2 −3x −10 = (x −2)(x2 + 4x + 5) = (x −2)[(x + 2)2 + 1], and partly on your knowledge that ab = 0 ⇐ ⇒a = 0 or b = 0 (proof in the exercises). 23 Direct Proof. If x3 + 2x2 −3x −10 = 0, then (x −2)[(x + 2)2 + 1] = 0. Hence at least one of the factors x −2 or (x + 2)2 + 1 is zero. In the first case we conclude that x = 2. The second case is impossible, since (x + 2)2 ≥0 = ⇒(x + 2)2 + 1 > 0. Therefore x = 2 is the only solution. Contrapositive Proof. Suppose x ̸= 2. Then x3 + 2x2 −3x −10 = (x −2)[(x + 2)2 + 1] ̸= 0 since neither of the factors is zero. Contradiction Proof. Suppose that x3 + 2x2 −3x −10 = 0 and x ̸= 2. Then 0 = x3 + 2x2 −3x −10 = (x −2)[(x + 2)2 + 1]. Since x ̸= 2, we have x −2 ̸= 0. It follows that (x + 2)2 + 1 must be zero. However, (x + 2)2 + 1 ≥1 for all real numbers x, so we have a contradiction. On balance the contrapositive proof is probably the nicest, but you may decide for yourself. Aside: Being Excessively Logical The statement of Theorem 2.15 is an implication P = ⇒Q where P and Q are: P. x3 + 2x2 −3x −10 = 0, Q. x = 2. You can make life very hard for yourself by being overly logical. For instance, you may wish take a third proposition R. x ∈R, and state the theorem as R = ⇒(P = ⇒ Q). This is the way of pain! It’s easier to assume that you’re always dealing with real numbers as a universal constraint, and ignore it entirely in the logic. One can always append a third proposition to the front of any theorem, namely, “all math I al-ready know.” Try to resist the temptation to be so logical that your arguments become unreadable! Theorem 2.16. If n ∈Z is divisible by p ∈N, then n2 is divisible by p2. Before trying to prove this, recall what ‘n is divisible by p’ means: that n = pk for some integer k. With the correct definition, the proof is immediate. Proof. We prove directly. 
Let n be divisible by p. Then n = pk for some k ∈Z. Then n2 = p2k2, and so n2 is divisible by p2. Remember: state the definition of everything important in the theorem and often the proof will be staring you in the face. 24 Proof by Cases The next proof involves breaking things into cases. The relevant definition here is that of remainder. An integer n is said to have remainder r = 0, 1, or 2 upon division by 3 if we can write n = 3k + r for some integer k. With a little thought, it should be clear that every integer is of the form 3k, 3k + 1, or 3k + 2. This is analogous to how all integers are either even (2k) or odd (2k + 1). We will consider remainders more carefully in Chapter 3. Theorem 2.17. If n is an integer, then n2 has remainder 0 or 1 upon dividing by 3. Proof. We again prove directly. There are three cases: n has remainder 0, 1 or 2 upon dividing by 3. (a) If n has remainder 0, then n = 3m for some m ∈Z and so n2 = 9m2 has remainder 0. (b) If n has remainder 1, then n = 3m + 1 for some m ∈Z and so n2 = 9m2 + 6m + 1 = 3(3m2 + 2m) + 1 has remainder 1. (c) If n has remainder 2, then n = 3m + 2 for some m ∈Z and so n2 = 9m2 + 12m + 4 = 3(3m2 + 4m + 1) + 1 has remainder 1. Thus n2 has remainder 0 or 1 and cannot have remainder 2. Non-existence Proofs When a Theorem claims that something does not exist, it is generally a good time for a contrapositive or contradiction proof. This is since ‘does not exist’ is already a negative condition. A contradiction or contrapositive proof of a theorem P = ⇒ Q already involve the negated statement ¬Q. If Q states that something does not exist, then ¬Q states that it does! To see this in action, consider the following result. Theorem 2.18. x17 + 12x3 + 13x + 3 = 0 has no positive (real number) solutions. First we interpret the theorem as an implication: throughout we assume that x is a real number. If x is a solution to the equation x17 + 12x3 + 13x + 3 = 0, then x ≤0. The theorem is of the form P = ⇒Q, with: P. x17 + 12x3 + 13x + 3 = 0, Q. x ≤0. The negation of Q is simply ‘x > 0.’ To prove the theorem by contradiction, we assume P and not Q, and deduce a contradiction. 25 Proof. Assume that x satisfies x17 + 12x3 + 13x + 3 = 0, and that x > 0. Because all terms on the left hand side are positive, we have x17 + 12x3 + 13x + 3 > 0. A contradiction. Note how quickly the proof is written: it assumes that you, and any reader, are familiar with the underlying logic of a contradiction proof without it needing to be spelled out. The discussion we undertook before writing the proof would be considered scratch work: you shouldn’t include it a final write-up. If you recall the Intermediate and Mean Value Theorems from Calculus, you should be able to prove that there is exactly one (necessarily negative!) solution to the above polynomial equation. The AM-GM inequality Next we give several proofs of a famous inequality relating the arithmetic and geometric means of two or more numbers. Theorem 2.19. If x, y are positive real numbers, then x+y 2 ≥√xy with equality if and only if x = y. First a direct proof: note how the implication signs are stacked to make the argument easy to read. Direct Proof. Clearly (x −y)2 ≥0 with equality ⇐ ⇒x = y. Now multiply out: x2 −2xy + y2 ≥0 ⇐ ⇒(x2 + 2xy + y2) −4xy ≥0 ⇐ ⇒x2 + 2xy + y2 ≥4xy ⇐ ⇒(x + y)2 ≥4xy ⇐ ⇒x + y ≥2√xy (∗) ⇐ ⇒x + y 2 ≥√xy. 
The square-root in (∗) is well-defined because x + y is positive.ᵃ Moreover, it is clear that the final inequality is an equality if and only if all of them are, which is if and only if x = y.

ᵃWe are using the fact that the function f(t) = t² is increasing for t positive.

The argument for 'with equality if and only if x = y' depended on the fact that all of the implications in the proof are biconditionals. The following contradiction proof incorporates exactly the same calculation, but is laid out in a different order. This is not always possible, and you have to take great care when trying it. You will likely agree that the direct proof is easier to follow.

Contradiction Proof. Suppose that (x + y)/2 < √(xy). Since x + y ≥ 0, this is if and only if (x + y)² < 4xy. Now multiply out and rearrange:

(x + y)² < 4xy ⇐⇒ x² + 2xy + y² < 4xy ⇐⇒ x² − 2xy + y² < 0 ⇐⇒ (x − y)² < 0.

Since squares of real numbers are non-negative, this is a contradiction. Thus (x + y)/2 ≥ √(xy). Now suppose that (x + y)/2 = √(xy). Following the biconditionals through the proof, we see that this is if and only if (x − y)² = 0, from which we recover x = y. Hence result.

Aside: The general AM-GM inequality

Both the statement and the proof of the general inequality are more difficult. You might be surprised that an argument involving 'raising to the nth power' doesn't work. Try it and see why... The general proof is harder and we present it at a higher level, leaving out some of the more obvious details. This helps us view the proof as a whole, and makes the logical flow clearer. The only prerequisite is a little calculus, namely the First Derivative Test at the end of the first paragraph.

Theorem 2.20. If x₁, . . . , xₙ > 0 then (x₁ + · · · + xₙ)/n ≥ ⁿ√(x₁ · · · xₙ), with equality if and only if x₁ = · · · = xₙ.

Proof. Consider the function f(x) = e^(x−1) − x. Its derivative is f′(x) = e^(x−1) − 1, which is zero if and only if x = 1. The sign of the derivative changes from negative to positive at x = 1, whence this is a local minimum. f has no other critical points and its domain is the whole real line, whence x = 1 is the location of the global minimum of f. Since f(1) = 0, we have e^(x−1) ≥ x with equality if and only if x = 1.

Now consider the average µ = (x₁ + x₂ + · · · + xₙ)/n. Applying our inequality to x = xᵢ/µ, we have

xᵢ/µ ≤ exp(xᵢ/µ − 1), for each i = 1, 2, . . . , n. (∗)

Since all xᵢ are positive, we may multiply the expressions (∗) while preserving the inequality:

(x₁/µ) · · · (xₙ/µ) ≤ exp((x₁/µ − 1) + · · · + (xₙ/µ − 1)) = exp(n − n) = 1. (†)

Thus µⁿ ≥ x₁ · · · xₙ, from which the result, µ ≥ ⁿ√(x₁ · · · xₙ), follows. Equality is if and only if all the inequalities (∗) are equalities, which is if and only if xᵢ = µ for all i = 1, . . . , n. That is, all the xᵢ are equal.

Given the theorem and proof are both more difficult, there are a few things you should do to help convince yourself of their legitimacy.

1. Write down some examples. E.g. if x₁ = 20, x₂ = 27, x₃ = 50, the inequality reads 97/3 ≥ ³√(20 · 27 · 50) = 30.

2. Observe that Theorem 2.19 is a special case.

3. Work through the proof, inserting comments and extra calculations until you are convinced that the argument is correct. For example, the calculation (x₁ + · · · + xₙ)/µ = nµ/µ = n was omitted from (†): anyone with the prerequisite knowledge to read the rest of the proof should easily be able to supply this.

It is perfectly reasonable to ask how you would know to try such a proof. The answer is that you wouldn't.
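Item 1 of this checklist is also easy to automate: random spot-checks of the inequality cost nothing and occasionally catch a misremembered statement. A minimal sketch (Python assumed; the small tolerance only absorbs floating-point error):

```python
import math, random

# Spot-check Theorem 2.20 on random lists of positive reals.
for _ in range(10_000):
    xs = [random.uniform(0.1, 100.0) for _ in range(random.randint(2, 6))]
    am = sum(xs) / len(xs)                 # arithmetic mean
    gm = math.prod(xs) ** (1 / len(xs))    # geometric mean
    assert am >= gm - 1e-9
print("No counterexample found (and none exists: that is what the proof shows).")
```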
You should appreciate that a proof like this is a distillation of thousands of attempts and improvements, perhaps over many years. No-one came up with this argument as a first attempt! Combining and Subdividing Theorems Sometimes it is useful to break a proof into pieces, akin to viewing a computer program as a collection of subroutines that you combine for the finale. Usually the purpose is to make the proof of a difficult result more readable, but it can be done to emphasize the importance of certain aspects of your work. Mathematics does this by using lemmas and corollaries. Lemma: a theorem whose importance you want to downplay. Often the result is individually unimportant, but becomes more useful when incorporated as part of a larger theorem. Corollary: a theorem which follows quickly from a larger result. Corollaries can be used to draw attention to a particular aspect or a special case of a theorem. In many mathematical papers the word theorem is reserved only for the most important results, everything else being presented as a lemma or corollary. The choice of what to call a result is entirely one of presentation. If you want your paper to be more readable, or to highlight the what you think is important, then lemmas and corollaries are your friends! Here is a famous example of a lemma at work. Lemma 2.21. Suppose that n ∈Z. Then n2 is even ⇐ ⇒n is even. Prove this yourself: the (⇒) direction is easiest using the contrapositive method, while the (⇐) direction works well directly. Theorem 2.22. √ 2 is irrational. This is tricky for a few reasons. The theorem does not appear to be of the form P = ⇒Q, but in fact it is. Consider: Q. √ 2 is irrational. 28 P. Everything you already know in mathematics! Of course P is entirely unhelpful; How would we start a direct proof when we don’t know what to choose from the whole universe of mathematics? A contrapositive proof might also be difficult: ¬Q straightforwardly states that √ 2 is rational, but ¬P is the cryptic statement, ‘something we know happens to be false.’ But what is the something? Instead we use a proof by contradiction. Proof. Suppose that √ 2 = m n for some m, n ∈N, where m, n have no common factors. Then m2 = 2n2 which says that m2 is even. By Lemma 2.21 we have that m is even. Thus m = 2k for some k ∈Z. But now, n2 = 2k2, from which (Lemma 2.21 again) we see that n is even. Thus m and n have a common factor of 2. This is a contradiction. First observe how Lemma 2.21 was used to make the proof easier to read. Now try to make sense of the proof. The main challenge comes in the first line. Once we assume that √ 2 = m n , we can immediately insist that m, n have no common factors. It is important to realize that this is not the assumption being contradicted. Indeed it is no real restriction once we assume that √ 2 is rational. If you find this approach difficult, you may prefer the alternative proof given in the exercises. Here is another famous result involving prime numbers. Definition 2.23. A positive integer p ≥2 is prime if its only positive divisors are itself and 1. The first few primes are 2, 3, 5, 7, 11, 13, 17, 19, . . .. It follows7 from the definition that all positive integers ≥2 are either primes or composites (products of primes). Theorem 2.24. There are infinitely many prime numbers. To break down the proof we first prove a lemma: the symbol := is read ‘defined to be equal to.’ Lemma 2.25. Suppose that p1, . . . , pn are integers ≥2. Then Π := p1p2 · · · pn + 1 is not divisible by pi for any i. Proof. 
Suppose that Π is divisible by pi. Observe that Π −p1 · · · pn = 1. Since p1 · · · pn is divisible by pi, the left hand side of this equation is divisible by pi. But then 1 must be disvisible by pi. Since pi ≥2, this is a contradiction. 7This is not obvious: we will prove it much later in Theorem 5.16. 29 Proof of theorem. Again we prove by contradiction. Assume that there are exactly n prime numbers p1, . . . , pn and consider Π := p1 · · · pn + 1. By the lemma, Π is not divisible by any of the primes p1, . . . , pn. There are two cases: (a) Π is prime, in which case it is a larger prime than anything in our list p1, . . . , pn. (b) Π is composite, in which case it is divisible by a prime. But this prime cannot be in our list p1, . . . , pn. In either case we’ve shown that there is another prime not in the list p1, . . . , pn, and we’ve contra-dicted our assumption that we had all the primes. The lemma approach was almost essential for this example, since both the lemma and the theorem were proved by contradiction. Nesting one contradiction argument within another is a recipe for serious confusion! Exercises 2.2.1 Show that for any given integers a, b, c, if a is even and b is odd, then 7a −ab + 12c + b2 + 4 is odd. 2.2.2 Prove or disprove the following conjectures. (a) There is an even integer which can be expressed as the sum of three even integers. (b) Every even integer can be expressed as the sum of three even integers. (c) There is an odd integer which can be expressed as the sum of two odd integers. (d) Every odd integer can be expressed as the sum of three odd integers. To get a feel about whether a claim is true or false, try out some examples. If you believe a claim is false, provide a specific counterexample. If you believe a claim is true, give a (formal) proof. 2.2.3 Prove or disprove the following conjectures: (a) The sum of any 3 consecutive integers is divisible by 3. (b) The sum of any 4 consecutive integers is divisible by 4. (c) The product of any 3 consecutive integers is divisible by 6. 2.2.4 Augustus De Morgan satisfied his own problem: I turn(ed) x years of age in the year x2. (a) Given that de Morgan died in 1871, and that he wasn’t the beneficiary of some miraculous anti-aging treatment, find the year in which he was born. (b) Suppose you have an acquaintance who satisfies the same problem. How old will they turn in 2014? Give a formal argument which justifies that you are correct. 2.2.5 Prove that if n is a natural number greater than 1, then n! + 2 is even. Here n! denotes the factorial of the integer n. Look up the definition if you forgot about it. 30 2.2.6 Let x, y ∈Z. Prove that if xy is odd, then x and y are odd. 2.2.7 (a) Let x ∈Z. Prove that 5x + 3 is even if and only if 7x −2 is odd. (b) Can you conclude anything about 7x −2 if 5x + 3 is odd? 2.2.8 Below is the proof of a result. What result is being proved? Proof. Assume that x is odd. Then x = 2k + 1 for some integer k. Then 2x2 −3x −4 = 2(2k + 1)2 −3(2k + 1) −4 = 8k2 + 2k −5 = 2(4k2 + k −3) + 1. Since 4k2 + k −3 is an integer, 2x2 −3x −4 is odd. 2.2.9 Given below is the proof of a result. What is the result? Proof. Assume, without loss of generality, that x and y are even. Then x = 2a and y = 2b for some integers a, b. Therefore, xy + xz + yz = (2a)(2b) + (2a)z + (2b)z = 2(2ab + az + bz). Since 2ab + az + bz is an integer, xy + xz + yz is even. 2.2.10 Suppose that x, and y are real numbers. Prove that if 3x + 5y is irrational, then at least one of x and y is irrational. 
Recall that x is irrational if it cannot be written as a ratio of integers. 2.2.11 Let x and y be integers. Prove: For x2 + y2 to be even, it is necessary that x and y have the same parity (i.e. both even or both odd). 2.2.12 Prove that if x and y are positive real numbers, then √x + y ̸= √x + √y. Argue by contradiction. 2.2.13 Prove that ab = 0 ⇐ ⇒a = 0 or b = 0. 2.2.14 You meet three old men, Alain, Boris, and C´ esar, each of whom is a Truthteller or a Liar. Truthtellers speak only the truth; Liars speak only lies. You ask Alain whether he is a Truthteller or a Liar. Alain answers with his back turned, so you cannot hear what he says. “What did he say?” you ask Boris. Boris says: “Alain says he is a Truthteller.” C´ esar says: “Boris is lying.” Is C´ esar a Truthteller or a Liar? Explain your answer. 2.2.15 (Snake-like integers) Let’s say that an integer y is Snake-like if and only if there is some integer k such that y = (6k)2 + 9. (a) Give three examples and three non-examples of Snake-like integers. (b) Given y ∈Z, compute the negation of the statement, ‘y is Snake-like.’ (c) Show that every Snake-like integer is a multiple of 9. 31 (d) Show that the statements, ‘n is Snake-like,’ and, ‘n is a multiple of nine,’ are not equivalent. 2.2.16 Assume that Ben’s father lives in Peru. Consider the following implication β: If Ben’s father is an artist and does not have any friends in Asia, then Ben plays tennis or ping-pong, or he appeared in at least one picture of the May 1992 Time magazine. (a) Find the contrapositive of β. (b) Find the converse of β. (c) Find the negation of β. (d) Imagine you are a detective and want to find the truth value of β. Describe your action-strategy in full detail. 2.2.17 Here is an alternative argument that √ 2 is irrational. Suppose that √ 2 = m n where m, n ∈N. This time we don’t assume that m, n have no common factors. (a) m, n satisfy the equation m2 = 2n2. Prove that there exist positive integers m1, n1 which satisfy the following three conditions: m2 1 = 2n2 1, m1 < m, n1 < n. (b) Show that there exist two sequences of decreasing positive integers m > m1 > m2 > · · · and n > n1 > n2 > · · · which satisfy m2 i = 2n2 i for all i ∈N. (c) Is it possible to have an infinite sequence of decreasing positive integers? Why not? Show that we obtain a contradiction and thus conclude that √ 2 ̸∈Q. This is an example of the method of infinite descent, which is very important in number theory. 2.2.18 You are given the following facts. (a) All polynomials are continuous. (b) (Intermediate Value Theorem) If f is continuous on [a, b] and L lies between f (a) and f (b), then f (x) = L for some x ∈(a, b). (c) If f ′(x) > 0 on an interval, then f is an increasing function. Use these facts to give a formal proof that x17 + 12x3 + 13x + 3 = 0 has exactly one solution x, and that x lies in the interval (−1, 0). 32 2.3 Quantifiers The proofs we’ve dealt with thusfar have been fairly straightforward. In higher mathematics, how-ever, there are often definitions and theorems that involve many pieces, and it becomes unwieldy to write everything out in full sentences. Two space-saving devices called quantifiers are often used to contract sentences and make the larger structure of a statement clearer.8 Their use in formal logic is more complex, but for most of mathematics (and certainly this text) all you need is to be able to recognize, understand, and negate them. This last is most important for attempting contrapositive or contradiction proofs. Definition 2.26. 
The universal quantifier ∀is read ‘for all’. The existential quantifier ∃is read ‘there exists.’ Many sentences in English can be restated using quantifiers: Examples. 1. Every cloud has a silver lining: ∀clouds, ∃a silver lining. 2. All humans have a brain: ∀humans, ∃a brain. 3. There is an integer smaller than π: ∃n ∈Z such that n < π. 4. π cannot be written as a ratio of integers: ∀integers m, n, we have m n ̸= π. Propositional Functions and Quantified Propositions Definition 2.27. A propositional function is an expression P(x) which depends on a variable x. The collection of allowed variables x is the domain of P. For each x, the expression P(x) is a proposition in the usual sense. The quantified proposition ∀x, P(x) is an assertion that P(x) is true for all values of x. Similarly ∃x, P(x) asserts that P(x) is true for at least one value of x. Example. Suppose that x is allowed to be any real number. We could define the propositional func-tion P(x) by P(x). x2 > 4. For this example, P(5) is true, whilst P(−1) is false. More generally, P(x) is true for some values of x (namely x > 2 or x < −2) and false for others (−2 ≤x ≤2). In this case the quantified proposition ∀x ∈R, P(x) is false, while ∃x ∈R, P(x) is true. Aside: Clarity versus Concision As we’ve observed, mathematics is something of an art form and, like with all art, different prac-titioners have different tastes. Some mathematicians write very concisely, keeping words to a mini-mum. Some write almost entirely in English. Most use a hybrid of quantifiers and English, aiming for a balance between brevity and clarity. For example, consider the famous sum of four squares theorem: 8At least that’s the idea: very often they are over-used and achieve the opposite effect! 33 English Every positive integer may be written as the sum of four squares Full Logic (∀n ∈N)(∃a, b, c, d ∈Z)(n = a2 + b2 + c2 + d2) Hybrid ∀n ∈N, ∃a, b, c, d ∈Z such that n = a2 + b2 + c2 + d2 You will probably agree that the English version is easiest to follow, and the Full Logic the most ab-stract. However, the English version is less precise: ‘sum of four squares’ has to be interpreted. The Full Logic expression avoids this by introducing variables and a formula. The Hybrid expression aims for a balance between these extremes. The insertion of a single comma and the phrase ‘such that’ increases readabilty, while retaining the benefit of precision. Remember that the purpose of writing mathematics is so that someone else can read and understand what you’ve written without you being there to explain it to them. Your presentation style has an enormous effect on whether you are successful! Similarly, in our previous example, the sentence ‘∃x ∈R such that x2 > 4’ is more understand-able than our original formulation ‘∃x ∈R, x2 > 4.’ Counterexamples and Negating Quantified Propositions Besides the concision afforded by quantifiers, one of their benefits is a rule that allows for easy nega-tion. Theorem 2.28. For any propositional function P(x), we have: 1. ¬(∀x, P(x)) is equivalent to ∃x, ¬P(x). 2. ¬(∃x, P(x)) is equivalent to ∀x, ¬P(x). Like with all theorems, to understand it you should unpack it, write it in English, and come up with an example: 1. The negation of ‘For all x, P(x) is true’ is There exists an x such that P(x) is false.’ 2. The negation of ‘There exists an x such that P(x) is true’ is For all x, P(x) is false. Definition 2.29. A counterexample to ∀x, P(x) is a single element t in the domain of P such that P(t) is false. 
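For a concrete (if unofficial) illustration of Theorem 2.28 and Definition 2.29, here is a short Python sketch; it is not part of the text, it uses a small finite stand-in for the domain (a quantifier over all of R cannot be checked by exhaustion), and the names domain and P are ours.

```python
# Negating a 'for all' by exhibiting a witness, on a finite stand-in domain.
domain = range(-3, 4)            # a finite substitute for part of R
def P(x):
    return x**2 > 4              # the propositional function P(x): x^2 > 4

forall_P = all(P(x) for x in domain)                 # plays the role of ∀x, P(x)
counterexamples = [x for x in domain if not P(x)]    # witnesses for ∃x, ¬P(x)

print(forall_P)          # False
print(counterexamples)   # [-2, -1, 0, 1, 2]
print((not forall_P) == any(not P(x) for x in domain))   # True: Theorem 2.28, part 1
```

Any single element of the second list is a counterexample in the sense of Definition 2.29.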
Clearly x = 1 is a suitable counterexample to ∀x ∈R, x2 > 4. Examples. Here are two examples, numbered corresponding to the parts of Theorem 2.28. 1. The negation of the statement, ‘Everyone owns a bicycle’ is: 34 Somebody does not own a bicycle. It certainly looks pedantic, but symbolically we might write: ¬ h ∀people x, x owns a bicycle i ⇐ ⇒∃a person x such that x does not own a bicycle. 2. Suppose that x is a real number and consider the quantified proposition: ∃x ∈R such that sin x = 4. This has the form ∃x, P(x), and therefore has negation ∀x, ¬P(x). Explicitly: ∀x ∈R we have sin x ̸= 4. Note how we introduced the words we have to make the sentence read more clearly. Advice when Negating: Hidden and Excess Quantifiers Theorem 2.28 seems very simple, but in practice it can be very easy to misuse. Here are some points to consider when negating quantifiers. 1. Don’t forget the meaning of the sentence. Use the logical rules in Theorem 2.28 but also think it out in words. You should get the same result. Think about the finished sentence and read it aloud: if it sounds like the opposite of what you started with then it probably is! 2. The symbol ∄for ‘does not exist’ is much abused. Very occasionally its use is appropriate, but it too often demonstrates laziness or a lack of understanding. Avoid using it unless absolutely necessary. 3. Only switch the symbols ∀and ∃if they preceed a proposition and are truly used as logical quantifiers. In the following example, ‘silver lining’ is not a proposition. ∀clouds, ∃a silver lining. When negating, we don’t switch ∃to ∀. Indeed its negation is ∃a cloud without a silver lining. 4. Beware of hidden quantifiers! Sometimes a quantifier is implied but not explicitly stated. This is very common when a statement contains an implication. Consider the following very easy theorem. If n is an odd integer, then n2 is odd. (∗) This is really a statement about all integers. There is a hidden quantifier that’s been suppressed in the interest of readability. Instead, the theorem could have been written ∀n ∈Z, n is odd = ⇒n2 is odd. 35 In this form we can negate by combining the rules in Theorems 2.10 and 2.28. The pattern is ¬ [∀n, P(n) = ⇒Q(n)] ⇐ ⇒∃n, P(n) and ¬Q(n). The negation of (∗) is therefore, ∃n ∈Z such that n is odd and n2 is even. The negation of (∗) is, of course, false! Here is a harder example of a hidden quantifier, this time from Linear Algebra. Definition 2.30. Vectors x, y, z are linearly independent if ax + by + cz = 0 = ⇒a = b = c = 0. The implication is a statement about all real numbers a, b, c. We could instead have written ∀a, b, c ∈R we have ax + by + cz = 0 = ⇒a = b = c = 0. To negate the definition, we must also negate the hidden quantifier: Vectors x, y, z are linearly dependent if ∃a, b, c not all zero such that ax + by + cz = 0. The final challenge is recalling how to negate an implication: recall Theorem 2.10, and note that the negation of a = b = c = 0 is that at least one of a, b, c is non-zero. Multiple quantifiers Once you’re comfortable negating simple propositions and quantifiers, negating multiple quantifiers is easy. Just follow the rules, think, and take your time. Example. Show that the following statement is false. ∀x ∈R, ∃y ∈R such that xy = 3. The negation of this expression follows the rules for switching quantifiers and negating the final statement: ∃x ∈R such that ∀y ∈R we have xy ̸= 3. It is easy to see that the negated statement is true: Proof. Let x = 0, then, regardless of y, we have xy = 0 ̸= 3. 
Because the negation is true, the original statement is false. Putting it all together: Continuity The definition of continuity from calculus combines multiple quantifiers, a hidden quantifier and an implication. The purpose of this text isn’t to teach you the subtleties of what the following def-inition means, that’s for a later Analysis class. We simply want to be able to read and negate such expressions. 36 Definition 2.31. Suppose that f is a function whose domain and codomain are sets of real numbers. We say that f is continuous at x = a if, ∀ε > 0, ∃δ > 0 such that |x −a| < δ = ⇒| f (x) −f (a)| < ε. (∗) The implication is a statement about all real numbers x which satisfy some property, so we once again have a hidden quantifier: ∀ε > 0, ∃δ > 0 such that ∀x ∈R, |x −a| < δ = ⇒| f (x) −f (a)| < ε. We can now use our rules to state what it means for f to be discontinuous at x = a: ∃ε > 0 such that ∀δ > 0, ∃x ∈R such that |x −a| < δ and | f (x) −f (a)| ≥ε. Warning! The negation of ∀ε > 0 is not ∃ε ≤0. Only the ultimate proposition9 is negated! For an example of this definition in use, see the exercises. The Order of Quantifiers Matters! We conclude this section with an important observation: the order of quantifiers matters critically! Consider, for example, the following two propositions: 1. For every person x, there exists a person y such that y is a friend of x. 2. There exists a person y such that, for every person x, y is a friend of x. Assuming x and y always represent people, we can rewrite the sentences as follows: 1. ∀x, ∃y such that y is a friend of x. 2. ∃y such that, ∀x, y is a friend of x. All we have done is to switch the order of the two quantifiers! How does this affect the meaning? Written entirely in English, the statments become: 1. Everyone has a friend. 2. There exists somebody who is friend with everybody. Quite different! Play around with the pairs of examples below. What are the meanings? Which ones are true? • ∀days x, ∃a person y such that y was born on day x. • ∃a person y such that, ∀days x, y was born on day x. • ∀circles x, ∃a point y such that y is the center of x. • ∃a point y such that, ∀circles x, y is the center of x. • ∀x ∈Z, ∃y ∈Z such that y < x. • ∃y ∈Z such that, ∀x ∈N, y < x. 9In this case |x −a| < δ = ⇒| f (x) −f (a)| < ε. 37 Exercises 2.3.1 For each of the following sentences, rewrite the sentence using quantifiers. Then write the negation (using both words and quantifiers) (a) All mathematics exams are hard. (b) No football players are from San Diego. (c) There is a odd number that is a perfect square. 2.3.2 Let P be the proposition: ‘Every positive integer is divisible by thirteen.’ (a) Write P using quantifiers. (b) What is the negation of P? (c) Is P true or false? Prove your assertion. 2.3.3 Prove or disprove: There exist integers m and n such that 2m −3n = 15. 2.3.4 Prove or disprove: There exist integers m and n such that 6m −3n = 11. Hint: The left-hand side is always divisible by... 2.3.5 Prove that between any two distinct rational numbers there exists another rational number. 2.3.6 Let p be an odd integer. Prove that x2 −x −p = 0 has no integer solutions. 2.3.7 Prove: For every positive integer n, n2 + n + 3 is an odd integer greater than or equal to 5. There are two claims here: n2 + n + 3 is odd, and n2 + n + 3 ≥5. 2.3.8 Consider the propositional function P(x, y, z) : (x −3)2 + (y −2)2 + (z −7)2 > 0 where the domain of each of the variables x, y and z is R. 
(a) Express the quantified statement ∀x ∈R, ∀y ∈R, ∀z ∈R, P(x, y, z) in words. (b) Is the quantified statement in (a) true or false? Explain. (c) Express the negation of the quantified statement in (a) in symbols. (d) Express the negation of the quantified statement in (a) in words. (e) Is the negation of the quantified statement in (a) true or false? Explain. 2.3.9 The following statements are about positive real numbers. Which one is true? Explain your answer. (a) ∀x, ∃y such that xy < y2. (b) ∃x such that ∀y, xy < y2. 2.3.10 Which of the following statements are true? Explain. (a) ∃a married person x such that ∀married people y, x is married to y. (b) ∀married people x, ∃a married person y such that x is married to y. 2.3.11 Here are four propositions. Which are true and which false? Justify your answers. (a) ∀x ∈R, ∃y ∈R such that y4 = 4x. 38 (b) ∃y ∈R such that ∀x ∈R we have y4 = 4x. (c) ∀y ∈R, ∃x ∈R such that y4 = 4x. (d) ∃x ∈R such that ∀y ∈R we have y4 = 4x. 2.3.12 A function f is said to be decreasing if: x ≤y = ⇒f (x) ≥f (y). (a) There is a hidden quantifier in the definition: what is it? (b) State what it means for f not to be decreasing. (c) Give an example to demonstrate the fact that not decreasing and increasing do not mean the same thing! 2.3.13 Prove or disprove each of the following statements. (a) For every two points A and B in the plane, there exists a circle on which both A and B lie. (b) There exists a circle in the plane on which lie any two points A and B. 2.3.14 You are given the following definition (you do not have to know what is meant by a field). Let x be an element of a field F. An inverse of x is an element y in F such that xy = 1. Consider the following proposition: All non-zero elements in a field have an inverse. (a) Restate the proposition using both of the quantifiers ∀and ∃. (b) Find the negation of the proposition, again using quantifiers. 2.3.15 Recall from calculus the definitions of the limit of a sequence (xn) = (x1, x2, x3, . . .). ‘xn diverges to ∞’ means: ∀M > 0, ∃N ∈N such that n > N = ⇒xn > M. ‘xn converges to L’ means: ∀ε > 0, ∃N ∈N such that n > N = ⇒|xn −L| < ε. Here we assume that all elements of (xn) are real numbers. (a) State what it means for a sequence xn not to diverge to ∞. Beware of the hidden quantifier! (b) State what it means for a sequence xn not to converge to L. (c) State what it means for a sequence xn not to converge at all. (d) Prove, using the definition, that xn = n diverges to ∞. (e) Prove that xn = 1 n converges to zero. 2.3.16 This question uses Definition 2.31. You will likely find this difficult. (a) Prove, directly from the definition, that f (x) = x2 is continuous at x = 0. If you are given ϵ > 0, what should δ be? 39 (b) Prove that g(x) = ( 1 + x if x ≥0, x if x < 0, is discontinuous at x = 0. (c) (Very hard) Let h(x) = ( x if x is rational, 0 if x is irrational. Prove that f is continuous only at x = 0. 2.3.17 In this question we prove Rolle’s Theorem from calculus: If f is continuous on [a, b], differentiable on (a, b), and f (a) = f (b) = 0, then ∃c ∈(a, b) such that f ′(c) = 0. As you work through the question, think about where the hypotheses are used and why we need them. (a) Recall the Extreme Value Theorem. The function f is continuous on [a, b], so f is bounded and attains its bounds. Otherwise said, ∃m, M ∈[a, b] such that ∀x ∈[a, b] we have f (m) ≤f (x) ≤f (M). Suppose that f (m) = f (M). Why is the conclusion of Rolle’s Theorem obvious in this case? (b) Now suppose that f (m) ̸= f (M). 
Argue that at least one of the following cases holds: f (M) > 0 or f (m) < 0. (c) Without loss of generality, we may assume that f (M) > 0. By considering the function −f, explain why. (d) Assume f (M) > 0. Then M ̸= a and M ̸= b. Consider the difference quotient, f (M + h) −f (M) h . Show that if 0 < |h| < min{M −a, b −M} then the difference quotient is well-defined (exists and makes sense). (e) Suppose that 0 < h < b −M. Show that f (M + h) −f (M) h ≤0. How do we know that L+ := lim h→0+ f (M+h)−f (M) h exists? What can you conclude about L+? (f) Repeat part (d) for L−:= lim h→0− f (M+h)−f (M) h . (g) Conclude that L+ = L−= 0. Why have we completed the proof? 40 3 Divisibility and the Euclidean Algorithm In this section we introduce the notion of congruence: a generalization of the idea of separating all integers into ‘even’ and ‘odd.’ At its most basic it involves going back to elementary school when you first learned division and would write something similar to 33 ÷ 5 = 6 r 3 ‘6 remainder 3.’ The study of congruence is of fundamental importance to Number Theory, and provides some of the most straightforward examples of Groups and Rings. We will cover the basics in this section— enough to compute with—then return later for more formal observations. 3.1 Remainders and Congruence Definition 3.1. Let m and n be integers. We say that n divides m and write n|m if m is divisible by n: that is if there exists some integer k such that m = kn. Equivalently, we say that n is a divisor of m, or that m is a multiple of n. For example: 4|20 and 17|51, but 12∤8. When one integer does not divide another, there is a remainder left over. Theorem 3.2 (The Division Algorithm). Let m be an integer and n a positive integer. Then there exist unique integers q (the quotient) and r (the remainder) which satisfy the following conditions: 1. 0 ≤r < n. 2. m = qn + r. For example: If m = 23 and n = 7, then q = 3 and r = 2 because ‘23 ÷ 7 = 3 remainder 2.’ More formally, 23 = 3 · 7 + 2, with 0 ≤2 < 7. Similarly, if m = −11 and n = 3, then q = −4 and r = 1 because −11 = (−4) · 3 + 1, with 0 ≤1 < 3. For practice, find a formula for all the integers that have remainder 4 after division by 6. The proof of the Division Algorithm relies on the development of induction, to which we will return in Chapter 5. The theorem should be read as saying that n goes q times into m with r left over. The fact that the remainder is nicely defined allows us to construct an alternative form of arithmetic. Definition 3.3. Let a, b be integers, and n a positive integer. We say that a is congruent to b modulo n and write a ≡b (mod n) if a and b have the same remainder upon dividing by n. When the modulus n is clear, it tends to be dropped, and we just write a ≡b. For example: 7 ≡10 (mod 3), since both have the same remainder (1) on dividing by 3. Can you find a formula for all the integers that are congruent to 10 modulo 3? Let a be an integer. Consider the following conjectures. Are they true or false? 41 Conjecture 3.4. a ≡8 (mod 6) = ⇒a ≡2 (mod 3). Conjecture 3.5. a ≡2 (mod 3) = ⇒a ≡8 (mod 6). The first conjecture is true. Indeed, if a ≡8 (mod 6), we can write a = 6k + 8 for some integer k. Then a = 6k + 8 = 6k + 6 + 2 = 3(2k + 2) + 2 so a has remainder 2 upon division by 3, showing that a is congruent to 2 modulo 3. On the other hand, the second conjecture is false. All we need is a counterexample. 
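Before writing one down, a quick brute-force search is a useful sanity check. The following Python sketch (ours, not part of the text) looks for a small integer a with a ≡ 2 (mod 3) but a ≢ 8 (mod 6):

```python
# Search small integers for a counterexample to Conjecture 3.5.
for a in range(50):
    if a % 3 == 2 and a % 6 != 8 % 6:   # a ≡ 2 (mod 3) but a ≢ 8 (mod 6)
        print(a)                        # prints 5, the first counterexample found
        break
```

The first value it reports is a = 5, which we now check by hand.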
Consider a = 5: clearly a is congruent to 2 modulo 3, but a is not congruent to 8 modulo 6 (because it has remainder 5, not 2, upon division by 6).

The following theorem is crucial, and provides an equivalent definition of congruence.

Theorem 3.6. a ≡ b (mod n) ⇐⇒ n | (b − a).

Proof. There are two separate theorems here, although both rely on the Division Algorithm (Theorem 3.2) to divide both a and b by n. Given a, b, n, the Division Algorithm shows that there exist unique quotients q1, q2 and remainders r1, r2 which satisfy

a = q1n + r1,  b = q2n + r2,  0 ≤ r1, r2 < n.  (∗)

Now we perform both directions of the proof.

(⇒) Suppose that a ≡ b (mod n). By definition, this means that a and b have the same remainder when divided by n. That is, r1 = r2. Now subtracting a from b gives us b − a = (q2 − q1)n + (r2 − r1) = (q2 − q1)n, which is divisible by n. Therefore n | (b − a).

(⇐) This direction is a little more subtle. We assume that b − a is divisible by n. Thus b − a = kn for some integer k. According to (∗), this implies that r2 − r1 = (b − a) − (q2 − q1)n = (k − q2 + q1)n is also a multiple of n. Now consider the condition on the remainders in (∗): since 0 ≤ r1, r2 < n, we quickly see that

0 ≤ r2 < n and −n < −r1 ≤ 0  =⇒  −n < r2 − r1 < n.

This says that r2 − r1 is a multiple of n lying strictly between ±n. The only possibility is that r2 − r1 = 0. Otherwise said, r2 = r1, whence a and b have the same remainder, and so a ≡ b (mod n).

If you are having some trouble with the final step, think about an example. Suppose that n = 26 and that r2 − r1 is an integer satisfying the inequalities −26 < r2 − r1 < 26. It should be obvious that r2 − r1 = 0.

To gain some familiarity with congruence, use Theorem 3.6 to show that a ≡ b (mod n) ⇐⇒ b ≡ a (mod n). Note that both this expression and the theorem contain a hidden quantifier, as discussed in Section 2.3. Moreover, combining the theorem with Definition 3.1 leads to the observation that

a ≡ b (mod n) ⇐⇒ ∃k ∈ Z such that b − a = kn, that is, b = a + kn.

Congruence and Divisibility

The previous two theorems may appear a little abstract, so it's a good idea to recap the relationship between congruence and divisibility. The following observations should be immediate to you! Let a be any integer and let n be a positive integer. Then

• a is congruent to either 0, 1, 2, . . . , or n − 1 modulo n.
• a is divisible by n if and only if a ≡ 0 (mod n).
• a is not divisible by n if and only if a ≡ 1, 2, 3, . . . , or n − 1 (mod n).

To test your level of comfort with the definition of congruence, and review some proof techniques, prove the following theorem.

Theorem 3.7. Suppose that n is an integer. Then n² ≢ n (mod 3) ⇐⇒ n ≡ 2 (mod 3).

If you don't know how to start, try completing the following table:

n | n² | n² ≡ n (mod 3)
0 | 0  | T
1 |    |
2 |    |

Now try to write a formal proof.

That the congruence sign ≡ appears similar to the equals sign = is no accident. In many ways it behaves exactly the same. In Chapter 7.3 we shall see that congruence is an important example of an equivalence relation.
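Before moving on to the arithmetic of congruences, here is a quick machine check of the table for Theorem 3.7 (a sketch of ours, not part of the text). Checking the three residues 0, 1, 2 is enough, since n² − n modulo 3 depends only on n modulo 3.

```python
# Fill in the table for Theorem 3.7: for each residue class modulo 3,
# compare n^2 with n modulo 3.
for n in range(3):
    print(n, n**2 % 3, (n**2 - n) % 3 == 0)
# Output:
# 0 0 True
# 1 1 True
# 2 1 False    <- n^2 ≢ n (mod 3) precisely when n ≡ 2 (mod 3)
```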
Modular Arithmetic

The arithmetic of remainders is almost exactly the same as the more familiar arithmetic of real numbers, but comes with all manner of fun additional applications, most importantly cryptography and data security: your cell-phone and computer perform millions of these calculations every day! Here we spell out the basic rules of congruence arithmetic.10

10 The usual associative, commutative and distributive laws of arithmetic, a + (b + c) ≡ (a + b) + c, a(bc) ≡ (ab)c, a + b ≡ b + a, ab ≡ ba, a(b + c) ≡ ab + ac, all follow because x = y =⇒ x ≡ y (mod n), regardless of n: equal numbers have the same remainder after all!

Theorem 3.8. Suppose throughout that a, b, c, d are integers, and that all congruences are modulo the same integer n.
1. a ≡ b and c ≡ d =⇒ ac ≡ bd
2. a ≡ b and c ≡ d =⇒ a ± c ≡ b ± d

What the theorem says is that the operations of 'take the remainder' and 'add' (or 'multiply') can be performed in either order; the result will be the same. For example, consider a = 29, b = 14 and n = 6. We can add a and b then take the remainder when dividing by n: 29 + 14 = 43 = 6 · 7 + 1. Instead we could first take the remainders of a and b modulo 6 and then add these: 5 + 2 = 7, which has the same remainder 1. Either way, we may write the result as a congruence, 29 + 14 ≡ 1 (mod 6).

Proof of Theorem 3.8. Suppose that a ≡ b and c ≡ d. By Theorem 3.6 we have a − b = kn and c − d = ln for some integers k, l. Thus

ac = (b + kn)(d + ln) = bd + n(bl + kd + kln)  =⇒  ac − bd = n(bl + kd + kln)

is divisible by n. Hence ac ≡ bd. Try the second argument yourself.

The ability to take remainders before adding and multiplying is remarkably powerful, and allows us to perform some surprising calculations.

Examples. 1. What is the remainder when 39^23 is divided by 10? At the outset this appears impossible. Ask your calculator and it will tell you that 39^23 ≈ 3.93 × 10^36, which is of no help at all! Instead think about the rules of arithmetic modulo 10. Since 39 ≡ 9 ≡ −1 (mod 10), we quickly notice that 39 · 39 ≡ (−1) · (−1) ≡ 1 (mod 10), whence 39² ≡ 1 (mod 10). Since positive integer exponents signify repeated multiplication, we can repeat the exercise to obtain 39^23 ≡ (−1)^23 ≡ −1 ≡ 9 (mod 10). Therefore 39^23 has remainder 9 when divided by 10. Otherwise said, the last digit of 39^23 is a 9. If you ask a computer for all the digits you can check this yourself.

2. Now that we understand powers, more complex examples become easy. Here we compute modulo n = 6. 7^9 + 14^3 ≡ 1^9 + 2^3 ≡ 1 + 8 ≡ 9 ≡ 3 (mod 6). Hence 7^9 + 14^3 = 40356351 has remainder 3 when divided by 6.

3. Find the remainder when 124^12 · 65^49 is divided by 11. This time we'll need to perform multiple calculations to keep reducing the base to something manageable. Since 124 = 11² + 3 and 65 = 11 · 6 − 1, we write

124^12 · 65^49 ≡ 3^12 · (−1)^49 ≡ 27^4 · (−1) ≡ 5^4 · (−1) ≡ −(25²) ≡ −(3²) ≡ 2 (mod 11).

The remainder is therefore 2. There is no way to do this on a pocket calculator, since the original number 124^12 · 65^49 ≈ 9 × 10^113 is far too large to work with!

The primary difference between modular and normal arithmetic is, perhaps unsurprisingly, with regard to division.

Theorem 3.9. If ka ≡ kb (mod kn) then a ≡ b (mod n).

The modulus is divided by k as well as the terms, so the meaning of ≡ changes. In Exercise 3.1.6 you will prove this theorem, and observe that, in general, we do not expect a ≡ b (mod kn).
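These hand reductions are easy to confirm numerically. Python's built-in three-argument pow performs exactly this kind of modular exponentiation without ever writing down the huge numbers; the snippet below (ours, not part of the text) checks Examples 1, 2 and 3 above.

```python
print(pow(39, 23, 10))                          # 9  (Example 1)
print((pow(7, 9, 6) + pow(14, 3, 6)) % 6)       # 3  (Example 2)
print(pow(124, 12, 11) * pow(65, 49, 11) % 11)  # 2  (Example 3)
```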
Exercises

3.1.1 Find the remainder when 17^251 · 23^12 − 19^41 is divided by 5. Hint: 17 ≡ 2 and 2² ≡ −1 (mod 5).

3.1.2 Is the statement n² ≡ n (mod 3) ⇐⇒ n ≡ 0 (mod 3) or n ≡ 1 (mod 3), identical to Theorem 3.7? Why/why not?

3.1.3 Prove that if a ≡ b (mod n) and c ≡ d (mod n) then 3a − c² ≡ 3b − d² (mod n).

3.1.4 Find a natural number n and integers a, b such that a² ≡ b² (mod n) but a ≢ b (mod n).

3.1.5 Let p be a prime number greater than or equal to 3. Show that if p ≡ 1 (mod 3), then p ≡ 1 (mod 6). Hint: p is odd.

3.1.6 Suppose that 7x ≡ 28 (mod 42). By Theorem 3.9, it follows that x ≡ 4 (mod 6).
(a) Check this explicitly using Theorem 3.6.
(b) If 7x ≡ 28 (mod 42), is it possible that x ≡ 4 (mod 42)?
(c) Is it always the case that 7x ≡ 28 (mod 42) =⇒ x ≡ 4 (mod 42)? Why/why not?
(d) Prove Theorem 3.9.

3.1.7 If a|b and b|c, prove that a|c.

3.1.8 Let a, b be positive integers. Prove that a = b ⇐⇒ a|b and b|a.

3.1.9 Here are two conjectures:
Conjecture 1 a|b and a|c =⇒ a|bc.
Conjecture 2 a|c and b|c =⇒ ab|c.
Decide whether each conjecture is true or false and prove/disprove your assertions.

3.1.10 Fermat's Little Theorem (to distinguish it from his 'Last') states that if p is prime and a ≢ 0 (mod p), then a^(p−1) ≡ 1 (mod p).
(a) Use Fermat's Little Theorem to prove that b^p ≡ b (mod p) for any integer b.
(b) Prove that if p is prime then p | (2^p − 2).
(c) Prove that the converse is not true, that 2^n − 2 being divisible by n does not imply that n is prime (take n = 341...).

3.2 Greatest Common Divisors and the Euclidean Algorithm

At its most basic, Number Theory involves finding integer solutions to equations. Here are two simple-sounding questions:
1. The equation 9x − 21y = 6 represents a straight line. Are there any integer points on this line? That is, can you find integers x, y satisfying 9x − 21y = 6?
2. What about on the line 4x + 6y = 1?
Before you do anything else, try sketching both lines (lined graph paper will help) and try to decide if there are any integer points. If there are any, how many are there? Can you find them all?

In this section we will see how to answer these questions in general: for which lines ax + by = c, with a, b, c ∈ Z, are there integer solutions, and how can we find them all? The method introduces the appropriately named Euclidean algorithm, a famous procedure dating at least as far back as Euclid's Elements (c. 300 BC).

Definition 3.10. Let m, n be integers, not both zero. Their greatest common divisor gcd(m, n) is the largest (positive) divisor of both m and n. We say that m, n are relatively prime if gcd(m, n) = 1.

Example. Let m = 60 and n = 90. The positive divisors of the two integers are listed in the table:

m | 1 2 3 4 5 6 10 12 15 20 30 60
n | 1 2 3 5 6 9 10 15 18 30 45 90

The greatest common divisor is the largest number common to both rows: clearly gcd(60, 90) = 30.

Finding the greatest common divisor by listing all the positive divisors of a number is extremely tedious. This is where Euclid rides to the rescue.

Euclidean Algorithm. To find gcd(m, n) for two positive integers m > n:
(i) Use the division algorithm (Theorem 3.2) to write m = q1n + r1 with 0 ≤ r1 < n.
(ii) If r1 = 0, set gcd(m, n) = n. Otherwise r1 > 0: apply the division algorithm again, dividing n by r1 to obtain n = q2r1 + r2 with 0 ≤ r2 < r1.
(iii) If r2 = 0, set gcd(m, n) = r1. Otherwise r2 > 0: apply the division algorithm again, dividing r1 by r2 to obtain r1 = q3r2 + r3 with 0 ≤ r3 < r2.
(iv) If r3 = 0, set gcd(m, n) = r2. Otherwise repeat the process, obtaining a decreasing sequence of positive integers r1 > r2 > r3 > · · · > 0.

Theorem 3.11. The Algorithm eventually produces a remainder of zero: ∃rp+1 = 0. The greatest common divisor of m, n is the last non-zero remainder: gcd(m, n) = rp.

The proof is in the exercises. If m, n are not both positive, take absolute values first and apply the algorithm. For instance gcd(−6, 45) = 3.
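The algorithm is only a few lines of code. Here is a minimal Python sketch (the function name euclid_gcd is ours; Python's standard math.gcd does the same job):

```python
def euclid_gcd(m, n):
    """Repeatedly replace (m, n) by (n, r), where r is the remainder of m on division by n."""
    m, n = abs(m), abs(n)
    while n != 0:
        m, n = n, m % n   # the last non-zero remainder is the gcd (Theorem 3.11)
    return m

print(euclid_gcd(60, 90))     # 30
print(euclid_gcd(-6, 45))     # 3
print(euclid_gcd(1260, 750))  # 30  (the example worked out next)
```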
Example. Compute gcd(1260, 750) using the Euclidean algorithm: the steps are labeled as in the original algorithm. You might instead find it easier to create a table with columns m, n, q, r and observe each remainder moving diagonally down and left at each successive step.

(i) 1260 = 1 × 750 + 510
(ii) 750 = 1 × 510 + 240
(iii) 510 = 2 × 240 + 30
(iv) 240 = 8 × 30 + 0

m    | n   | q | r
1260 | 750 | 1 | 510
750  | 510 | 1 | 240
510  | 240 | 2 | 30
240  | 30  | 8 | 0

Theorem 3.11 says that gcd(1260, 750) = 30, the last non-zero remainder. As you can see, the Euclidean algorithm is very efficient.

Reversing the Algorithm: Integer Points on Lines

To apply the Euclidean algorithm to finding integer points on lines, we must turn it on its head. By starting with the second last line of the algorithm and substituting the previous lines one at a time, we can find integers x, y such that gcd(m, n) = mx + ny. This is easiest to demonstrate by continuing our previous example:

Example (continued). Find integers x, y such that 1260x + 750y = 30. Solve for 30 (the gcd of 1260 and 750) using step (iii), to get 30 = 510 − 2 × 240. Now use the equation in step (ii) to solve for 240 and substitute: 30 = 510 − 2 × (750 − 510) = 3 × 510 − 2 × 750. Finally, substitute for 510 using equation (i): 30 = 3 × (1260 − 750) − 2 × 750 = 3 × 1260 − 5 × 750. We have expressed 30 as a linear combination of 1260 and 750, as desired. Reading off the coefficients of the combination, we see that x = 3 and y = −5 satisfy 1260x + 750y = 30.

Note how the process to find x and y is twofold: first we find gcd(m, n) using the Euclidean Algorithm, then we do a series of back substitutions to recover x and y. More generally, we have the following corollary.

Corollary 3.12. Given any integers m, n there exist integers x, y such that gcd(m, n) = mx + ny.

We are now in a position to solve our motivating problem: finding all integer points on the line ax + by = c where a, b, c are integers.

Theorem 3.13. Let a, b, c be integers and d = gcd(a, b). Then the equation ax + by = c has an integer solution (x, y) iff d|c. In such a case, all integer solutions are given by

x = x0 + (b/d)t,  y = y0 − (a/d)t,  (∗)

where (x0, y0) is any fixed integer solution, and t takes any integer value.

One uses the Euclidean Algorithm to find the initial solution (x0, y0), then applies (∗) to obtain all of them.11 The proof is again in the exercises.

Examples. 1. Find all integer solutions to the equation 1260x + 750y = 90. We calculated earlier that gcd(1260, 750) = 30. Thus d = 30. Since d | c (that is, 30 | 90), we know that there are integer solutions. We also calculated that 1260 × 3 + 750 × (−5) = 30. Since we want 90, we simply multiply our pair (3, −5) by three: 1260 × 9 + 750 × (−15) = 90, whence (x0, y0) = (9, −15) is an integer solution to the equation. The general solution is therefore (x, y) = (9 + (750/30)t, −15 − (1260/30)t) = (9 + 25t, −15 − 42t), where t ∈ Z.

2. Now consider the line 570x + 123y = 7. We calculate the greatest common divisor using the Euclidean algorithm:

570 = 4 × 123 + 78
123 = 1 × 78 + 45
78 = 1 × 45 + 33
45 = 1 × 33 + 12
33 = 2 × 12 + 9
12 = 1 × 9 + 3
9 = 3 × 3 + 0

=⇒ gcd(570, 123) = 3.

Since 3 ∤ 7, we conclude that the line 570x + 123y = 7 has no integer points.

3. Repeat the above calculations for our motivating problems: what does the theorem say?

11 The astute observer should recognize the similarity between this and the complementary function/particular integral method for linear differential equations: (x0, y0) is a 'particular solution' to the full equation ax + by = c, while ((b/d)t, −(a/d)t) comprises all solutions to the 'homogeneous equation' ax + by = 0.

Exercises

3.2.1 Use the Euclidean Algorithm to compute the greatest common divisors indicated.
(a) gcd(20, 12) (b) gcd(100, 36) (c) gcd(207, 496) 3.2.2 For each part of Question 3.2.1, find integers x, y for which gcd(m, n) = mx + ny. 3.2.3 (a) Answer our motivating problems using the above process. (i) Find all integer points on the line 9x −21y = 6. (ii) Show that there are no integer points on the line 4x + 6y = 1. (b) Can you give an elementary proof as to why there are no integer points on the line 4x + 6y = 1? 3.2.4 Find all the integer points on the following lines, or show that none exist. (a) 16x −33y = 2. (b) 122x + 36y = 3. (c) 324x −204y = −12. 3.2.5 Find all possible solutions to the motivating problem at the start of the notes: Five people each take the same number of candies from a jar. Then a group of seven does the same. The, now empty, jar originally contained 239 candies. How much candy did each person take? 3.2.6 Show that there exists no integer x such that 3x ≡5 (mod 6). 3.2.7 In Theorem 3.11 we claim that the Euclidean algorithm terminates with rp+1 = 0. Why? Show that the number of steps p is no more than n. The algorithm is much faster than this in practice! 3.2.8 Let m = qn + r be the result of the division algorithm for integers m, n. (a) Let d be a common positive divisor of m, n. Prove that d|r. (b) Now suppose that c is a common divisor of n and r. Prove that c|m. (c) Explain why parts (a) and (b) prove that gcd(m, n) = gcd(n, r). (d) Conclude that the final remainder rp in the Euclidean algorithm really is gcd(m, n). 3.2.9 Prove the following: gcd(m, n) = 1 ⇐ ⇒∃x, y ∈Z such that mx + ny = 1. One direction can be done by applying Corollary 3.12, but the other direction requires an argument. 3.2.10 In this question we prove the Theorem on integer solutions to linear equations. Let a, b, c ∈Z. Suppose that (x0, y0) and (x1, y1) are two integer solutions to the linear Diophantine equation ax + by = c. (a) Show that (x0 −x1, y0 −y1) satisfies the equation ax + by = 0. (b) Suppose that gcd(a, b) = d. Prove that gcd( a d, b d) = 1. (Use Question 3.2.9) 50 (c) Find all integer solutions (x, y) to ax + by = 0 (Don’t use the Theorem, it’s what you’re trying to prove! Think about part (b) and divide through by d first.). (d) Use (a) and (b) to conclude that (x, y) is an integer solution to ax + by = c if and only if x = x0 + b dt y = y0 −a dt, where t ∈Z. 3.2.11 Show that gcd(5n + 2, 12n + 5) = 1 for every integer n. There are two ways to approach this: you can try to use the Euclidean algorithm abstractly, or you can use the result of Exercise 3.2.9. 3.2.12 The set of remainders Zn = {0, 1, 2, . . . , n −1} is called a ring when equipped with addition and multiplication modulo n. For example 5 + 6 ≡3 (mod 8). We say that b ∈Zn is an inverse of a ∈Zn if ab ≡1 (mod n). (a) Show that 2 has no inverse modulo 6. (b) Show that if n = n1n2 is composite (∃integers n1, n2 ≥2) then there exist elements of the ring Zn which have no inverses. (c) Prove that a has an inverse modulo n if and only if gcd(a, n) = 1. Conclude that the only sets Zn for which all non-zero elements have inverses are those for which n is prime. You will find Exercise 3.2.9 helpful. 51 4 Sets and Functions Sets are the fundamental building blocks of mathematics. In the sub-discipline of Set Theory, mathe-maticians define all basic notions, including number, addition, function, etc., purely in terms of sets. In such a system it can take over 100 pages of discussion to prove that 1 + 1 = 2! We will not be any-thing like so rigorous. 
Indeed, before one can accept that such formality has its place in mathematics, a level of familiarity with sets and their basic operations is necessary. 4.1 Set Notation and Describing a Set We start with a very na¨ ıve notion: a set is a collection of objects.12 Definition 4.1. If x is an object in a set A, we write x ∈A and say that x is an element or member of A. On the other hand, if x is a member of some other set B, but not of A, we write x / ∈A. Two sets are described as equal if they have exactly the same elements. When thinking abstractly about sets, you may find Venn diagrams useful. A set is visualized as a region in the plane and, if necessary, members of the set can be thought of as dots in this region. This is most useful when one has to think about multiple, possibly over-lapping, sets. The graphic here represents a set A with at least three elements a1, a2, a3. A a1 a2 a3 Notation and Conventions Use capital letters for sets, e.g. A, B, C, S, and lower-case letters for elements. It is conventional, though not required, to denote an abstract element of a set by the corresponding lower-case letter: thus a ∈A, b ∈B, etc. Curly brackets { , } are used to bookend the elements of a set: for instance, if we wrote S = {3, 5, f, α, β} then we’d say, ‘S is the set whose elements are 3, 5, f, α and β.’ The order in which we list the elements in a set is irrelevant, thus S = {β, f, 5, α, 3} = { f, α, 3, β, 5}. Listing the elements in a set in this way is often known as roster notation. By contrast, set-builder notation describes the elements of a set by starting with a larger set and restricting to those elements which satisfy some property. The symbols | or : are used as a short-hand for ‘such that.’ Which symbol you use depends partly on taste, although the context may make one clearer to read.13 For example, if S = {3, 5, f, α, β} is the set defined above, we could write, {s ∈S : s is a Greek letter} = {α, β} 12Much thinking was required before mathematicians realized that this is indeed na¨ ıve. It eventually became clear that some collections of objects cannot be considered sets, and the search for a completely rigorous definition began. Thus was Axiomatic Set Theory born. For the present, our notion is enough. 13See Choice of Notation, below. 52 or {s ∈S | s is a Greek letter} = {α, β}. We would read: ‘The set of elements s in the set S such that s is a Greek letter is {α, β}.’ Example. Let A = {2, 4, 6} and B = {1, 2, 5, 6}. There are many options for how to write A and B in set-builder notation. For example, we could write A = {2n : n = 1, 2 or 3} and B = {n ∈Z | 1 ≤n ≤6 and n ̸= 3, 4}. We now practice the opposite skill by converting five sets from set-builder to roster notation. S1 = {a ∈A : a is divisible by 4} = {4} S2 = {b ∈B : b is odd} = {1, 5} S3 = {a ∈A | a ∈B} = {2, 6} S4 = {a ∈A : a ̸∈B} = {4} S5 = {b ∈B | b is odd and b −1 ∈A} = {5} Take your time getting used to this notation. Can you find an alternative description in set-builder notation of the sets S1, . . . , S5 above? It is crucial that you can translate between set notations and English, or you will be incapable of understanding most higher-level mathematics. Sets of Numbers Common sets of numbers are written in the BLACKBOARD BOLD typeface. N = Z+ = natural numbers = {1, 2, 3, 4, . . .} N0 = W = Z+ 0 = whole numbers = {0, 1, 2, 3, 4, . . .} Z = integers = {. . . , −3, −2, −1, 0, 1, 2, 3, . . 
.} Q = rational numbers = { m n : m ∈Z and n ∈N} = { a b : a, b ∈Z and b ̸= 0} R = real numbers R \ Q = irrational numbers (read ‘R minus Q’) C = complex numbers = {x + iy : x, y ∈R, where i = √−1} Z≥n = Integers ≥n = {n, n + 1, n + 2, n + 3, . . .} nZ = multiples of n = {. . . , −3n, −2n, −n, 0, n, 2n, 3n, . . .} Where there are multiple choices of notation, we will tend to use the first in the list: for example N0 = Z≥0. The use of a subscript 0 to include zero and superscript ± to restrict to positive or negative numbers is standard. Examples. 7 ∈Z, π ∈R, π ̸∈Q, √−5 ∈C, −e2 ∈R−. 53 There are often many different ways to represent the same set in set-builder notation. For exam-ple, the set of even numbers may be written in multiple ways: 2Z = {2n : n ∈Z} (The set of numbers of the form 2n such that n is an integer) = {n ∈Z : ∃k ∈Z, n = 2k} (The set of integers which are a multiple of 2) = {n ∈Z : n ≡0 (mod 2)} (The set of integers congruent to 0 modulo 2) = {n ∈Z : 2|n} (The set of integers which are divisible by 2) Here we use both congruence and divisor notation to obtain suitable descriptions. Can you find any other ways to describe the even numbers using basic set notation? The notation nZ is most commonly used when n is a natural number, but it can also be used for other n. For example 1 2Z =  1 2x : x ∈Z =  m, m + 1 2 : m ∈Z is the set of multiples of 1 2 (comprising the integers and half-integers). The notation can also be extended: for example 2Z + 1 would denote the odd integers. Aside: Choice of Notation The two notations for ‘such that’ ( | and :) are to give you leeway in case of potential confusion. For example, the final expression (above) for the even numbers 2Z = {n ∈Z : 2|n} is much cleaner than the alternative 2Z = {n ∈Z | 2 | n}. In other situations the opposite is true. In Section 4.4 we shall consider functions. If you recall the concept of an odd function from calculus, we could denote the set of such with domain the real numbers as { f : R →R : ∀x, f (x) = f (−x)} or { f : R →R | ∀x, f (x) = f (−x)}. In this case the latter notation is superior. You may use whichever notation you prefer, provided the outcome is unambiguous. Examples. 1. List the elements of the set A = {x ∈R : x2 + 3x + 2 = 0}. We are looking for the set of all real number solutions to the quadratic equation x2 + 3x + 2 = 0. A simple factorization tells us that x2 + 3x + 2 = (x + 1)(x + 2), whence A = {−1, −2}. 2. Use the set B = {0, 1, 2, 3, . . . , 24} to describe C = {n ∈Z : n2 −3 ∈B} in roster notation. We see that n2 −3 ∈B ⇐ ⇒n2 ∈{3, 4, 5, . . . , 25, 26, 27} Since n must be an integer, it follows that C = {±2, ±3, ±4, ±5}. 54 3. It is often harder to convert from roster to set-builder notation, as you might be required to spot a pattern, and many choice could be available. For example, if D = 1 6, 1 20, 1 42, 1 72, 1 110, 1 156, . . .  , you might consider it reasonable to write D =  1 2n(2n + 1) : n ∈N  . Of course the ellipses (. . . ) might not indicate that the elements of the set continue in the way you expect. For larger sets, the concision and clarity of set-builder notation makes it much preferred! 4. Are the following sets equal? E = {n2 + 2 : n is an odd integer}, F = {n ∈Z : n2 + 2 is an odd integer}. It will help to first construct a table to list some of the values of n2 + 2: n n2 n2 + 2 ±1 1 3 ±3 9 11 ±5 25 27 ±7 49 51 ±9 81 83 . . . . . . . . . The set E consists of those integers of the form n2 + 2 where n is an odd integer. By the table, E = {3, 11, 27, 51, 83, . . .}. 
On the other hand, F includes all those integers n such that n2 + 2 is odd. It is easy to see that n2 + 2 is odd ⇐ ⇒n2 is odd ⇐ ⇒n is odd. Thus F is simply the set of all odd integers: F = {±1, ±3, ±5, ±7, . . .} = 2Z + 1. Plainly the two sets are not equal. Intervals Interval notation is useful when discussing collections of real numbers. For example, (0, 1) = {x ∈R : 0 < x < 1}, [0, 1] = {x ∈R : 0 ≤x ≤1}, (0, 1] = {x ∈R : 0 < x ≤1}. 55 When writing intervals with ±∞use an open bracket at the infinite end(s): [1, ∞) = {x ∈R : x ≥1}. This is since the symbols ±∞do not represent real numbers and so are not members of any interval. Example. Recall some basic trigonometry: the so-lutions of the equation cos x = −1 2 on the interval [0, 4π] can be written in set-builder and roster nota-tion as  x ∈[0, 4π] : cos x = −1 2  = 2π 3 , 4π 3 , 8π 3 , 10π 3  −1 0 1 y x π 2π 3π 4π 2π 3 4π 3 8π 3 10π 3 −1 2 Cardinality and the Empty Set Definition 4.2. A set A is finite if it contains a finite number of elements: this number is the set’s cardinality, written |A|. A is said to be infinite otherwise. Cardinality is a very simple concept for finite sets. For infinite sets, such as the natural numbers N, the concept of cardinality is much more subtle. We cannot honestly talk about N having an ‘infinite number’ of elements, since infinity is not a number! In Chapter 8 we will consider what cardinality means for infinite sets and meet several bizarre and fun consequences. For the present, cardinality only has meaning for finite sets. Examples. 1. Let A = {a, b, α, γ, √ 2}, then |A| = 5. 2. Let B = n 4, {1, 2}, {3} o . It is important to note that the elements/members of B are 4, {1, 2} and {3}, two of which are themselves sets. Therefore |B| = 3. The set {1, 2} is an object in its own right, and can therefore be placed in a set along with other objects.14 To round things off we need a symbol to denote a set that contains nothing at all! Axiom. There exists a set ∅with no elements (cardinality zero: |∅| = 0). We call ∅the empty set. There are many representations of the empty set. For example {x ∈N : x2 + 3x + 2 = 0} and {n ∈N : n < 0} are both empty. Despite this, we will see in Theorem 4.4 that there is only one set with no elements, so that all such representations actually denote the same set ∅. Note also that |A| ∈N for any finite non-empty set A. Aside: Axioms An axiom is a basic assumption; something that we need in order to do mathematics, but cannot prove. This is the cheat by which mathematicians can be 100% sure that something is true: a result is proved based on the assumption of several axioms. With regard to the empty set axiom, it probably seems bizarre that we can assume the existence of some set that has nothing in it. Regardless, mathe-maticians have universally agreed that we need the empty set in order to do the rest of mathematics. 14The fact that a set (containing objects) is also an object might seem confusing, but you should be familiar with the same problem in English. Consider the following sentences: ‘UCI are constructing a laboratory’ and ‘UCI is constructing a laboratory.’ In the first case we are thinking of UCI as a collection of individuals, in the latter case UCI is a single object. Opinions differ in various modes of English as to which is grammatically correct. 56 Exercises 4.1.1 Describe the following sets in roster notation, that is, list their elements. (a) {x ∈N : x2 ≤3x}. (b) {x2 ∈R : x2 −3x + 2 = 0}. (c)  n + 2 ∈{0, 1, 2, 3, . . . 
, 19} : n + 3 ≡5 (mod 4) (d)  n ∈{−2, −1, 0, 1, . . . , 23} : 4|n2 (does : or | denote the condition?) (e) {x ∈1 2Z : 0 ≤x ≤4 and 4x2 ∈2Z + 1} 4.1.2 Describe the following sets in set-builder notation (look for a pattern). (a) {. . . , −3, 0, 3, 6, 9, . . .} (b) {−3, 1, 5, 9, 13, . . .} (c) {1, 1 3, 1 7, 1 15, 1 31, . . .} 4.1.3 Each of the following sets of real numbers is a single interval. Determine the interval. (a) {x ∈R : x > 3 and x ≤17} (b) {x ∈R : x ≰3 or x ≤17} (c) {x2 ∈R : x ̸= 0} (d) {x ∈R−: x2 ≥16 and x3 ≤27} 4.1.4 Can you describe the set {x ∈Z : −1 ≤x < 43} in interval notation? Why/why not? 4.1.5 Compare the sets A = {3x : x ∈2Z} and B = {x ∈Z : x ≡12 (mod 6)}. Are they equal? 4.1.6 What is the cardinality of the following set? What are the elements? n ∅,  ∅ ,  ∅, {∅} o . 4.1.7 Let A = {orange, banana, apple, mango}, and let B be the set B = {x, y} : x, y ∈A . (a) Describe B in roster notation. (b) Now compute the cardinality of the sets C = (x, y) : x, y ∈A and D = n{x, {y} : x, y ∈A} o . Compare them to |B|. 57 4.2 Subsets In this section we consider the most basic manner in which two sets can be related. Definition 4.3. If A and B are sets such that every element of A is also an element of B, then we say that A is a subset of B and write A ⊆B. Sets A, B are equal, written A = B, if they have exactly the same elements. Equivalently A = B ⇐ ⇒A ⊆B and B ⊆A. (∗) A is a proper subset of B if it is a subset which is not equal. This can be written A ⊊B.a aWe will religiously stick to this notation. When reading other texts, note that some authors prefer A ⊂B for proper subset. Others use ⊂for any subset, whether proper or not. The characterization (∗) of equality is very important. In order to prove that two sets are equal you will often have to show double-inclusion. Venn diagrams are particularly useful for depicting subset re-lations. The graphic on the right depicts three sets A, B, C: it should be clear that the only valid subset relation between the three is A ⊆B. A B C Set-builder notation implicitly uses the concept of subset: the notation X = {y ∈Y : . . .} de-scribes a set X as a subset of some larger set Y. The previous section contained many examples that were subsets of the set of real numbers R. Here are some other examples of subsets. Examples. 1. N = {n ∈Z : n > 0}. This is clearly a subset of Z. 2. {x ∈R : x2 −1 = 0} ⊆{y ∈R : y2 ∈N}. To make sense of this relationship, convert to roster notation: we obtain {−1, 1} ⊆{± √ 1, ± √ 2, ± √ 3, ± √ 4, . . .}. 3. mZ ⊆nZ ⇐ ⇒n | m. Make sure you’re comfortable with this! For example, 4Z ⊆2Z since every multiple of 4 is also a multiple of 2. Here we collect several results relating to subsets. Theorem 4.4. 1. If |A| = 0, then A = ∅ (Uniqueness of the empty set) 2. For any set A, we have ∅⊆A and A ⊆A (Trivial and non-proper subsets) 3. If A ⊆B and B ⊆C, then A ⊆C (Transititvity of subsets) Proof. 1. Let A be a set with cardinality zero, i.e., with no elements. ∅has no members, therefore ∅⊆A is trivial: there is nothing to check to see that all elements of ∅are also elements of A! The argument for A ⊆∅is identical. 58 2. Let A be any set. ∅⊆A follows by the argument in 1. To prove that A ⊆A we must show that all elements of A are also elements of A. But this is completely obvious! 3. Assume that A is a subset of B and that B is a subset of C. We must show that all elements of A are also elements of C. Let a ∈A. Since A ⊆B we know that a ∈B. Since B ⊆C and a ∈B, we conclude that a ∈C. 
This shows that every element of A belongs to C. Hence A ⊆C. As a final observation, to which we will return in Theorem 4.12 and in Chapter 8, your intutition should tell you that, for finite sets, subsets have smaller cardinatlity: A ⊆B = ⇒|A| ≤|B| . More generally, consider replacing the terms in Theorem 4.4 according to the following table: ⊆ ≤ ∅ 0 sets A, B, C non-negative integers cardinality absolute value The results should seem completely natural! Recognizing the similarities between a new concept and a familiar one, essentially spotting patterns, is perhaps the most necessary skill in mathematics. Exercises 4.2.1 Let A, B, C, D be the following sets. A = {−4, 1, 2, 4, 10} B = {m ∈Z : |m| ≤12} C = {n ∈Z : n2 ≡1 (mod 3)} D = {t ∈Z : t2 + 3 ∈[4, 20)} Of the 12 possible subset relations A ⊆B, A ⊆C, . . . D ⊆C, which are true and which false? 4.2.2 Let A = {x ∈R : x3 + x2 −x −1 = 0} and B = {x ∈R : x4 −5x2 + 4 = 0}. Are either of the relations A ⊆B or B ⊆A true? Explain. 4.2.3 For which values of x > 0 is the following claim true? [0, x] ⊆[0, x2] Prove your assertion. 4.2.4 Given A ⊆Z and x ∈Z, we say that x is A-mirrored if and only if −x ∈A. We also define: MA := {x ∈Z: x is A-mirrored}. (a) What is the negation of ‘x is A-mirrored.’ (b) Find MB for B = {0, 1, −6, −7, 7, 100}. 59 (c) Assume that A ⊆Z is closed under addition (i.e., x + y ∈A, for all x, y ∈A). Show that MA is closed under addition. (d) In your own words, under which conditions is A = MA? 4.2.5 Define the set by: = {x ∈Z: x ≡1 (mod 5)}. (a) Describe the set in roster notation. (b) Compute the set M, as defined in Exercise 4.2.4 (c) Are the sets and M equal? Prove/Disprove. (d) Now consider the set = {x ∈Z: x ≡10 (mod 5)}. Are the sets and M equal? Prove/Disprove. 4.2.6 (a) Give a formal proof of the fact that A ⊆B = ⇒ |A| ≤|B| for finite sets. Resist the temptation to look at Theorem 4.12: it is far more technical than you need for this! (b) Explain why |A| ≤|B| ̸ = ⇒A ⊆B. 60 4.3 Unions, Intersections, and Complements In the last section we compared nested sets. In this section we constuct new sets from old, modeled precisely on the logical concepts of and, or, and not. For the duration of this section, suppose that U is some universal set, of which every set mentioned subsequently is a subset.15 First we consider the set contruction modeled on not. Definition 4.5. Let A ⊆U be a set. The complement of A is the set AC = {x ∈U : x / ∈A}. This can also be written U \ A, U −A, A′, or A. The Venn diagram is drawn on the right: A is represented by a circular region, while the rectangle represents the universal set U. The complement AC is the blue shaded region. If B ⊆U is some other set, then the complement of A relative B is B \ A = {x ∈B : x / ∈A}. The set B \ A is also called B minus A. For its Venn diagram, we represent A and B as overlapping circular regions. The comple-ment B \ A is the green shaded region. Note that AC = U \ A, so that the two definitions correspond. A AC AC: everything not in A B \ A A B U B \ A: everything in B but not in A Example. Let U = {1, 2, 3, 4, 5}, A = {1, 2, 3}, and B = {2, 3, 4}. Then AC = {4, 5}, BC = {1, 5}, B \ A = {4}, A \ B = {1}. Now we construct sets based on or and and. Definition 4.6. The union of A and B is the set A ∪B = {x ∈U : x ∈A or x ∈B}. The intersection of A and B is the set A ∩B = {x ∈U : x ∈A and x ∈B}. We say that A and B are disjoint if A ∩B = ∅. A \ B B \ A A B A ∩B | {z } A ∪B U In the Venn diagram, the sets A and B are again depicted as overlapping circles. 
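Python's built-in set type mirrors all of these operations, so the earlier example (U = {1, 2, 3, 4, 5}, A = {1, 2, 3}, B = {2, 3, 4}) can be checked directly. This sketch is ours and is no substitute for the definitions, but it is a handy way to experiment; the last line tests, for this example, the decomposition of A noted next.

```python
U = {1, 2, 3, 4, 5}
A = {1, 2, 3}
B = {2, 3, 4}

print(U - A)   # {4, 5}   the complement A^C relative to U
print(U - B)   # {1, 5}   B^C
print(B - A)   # {4}
print(A - B)   # {1}
print(A | B)   # {1, 2, 3, 4}   union
print(A & B)   # {2, 3}         intersection
print(A == ((A - B) | (A & B)))   # True
```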
Although it doesn’t constitute a proof, the diagram makes it clear that A = (A \ B) ∪(A ∩B) and B = (B \ A) ∪(A ∩B). 15This is necessary so that the definitions in this section are legitimate. 61 ‘Or’ is used in the logical sense: A ∪B is the collection of all elements that lie in A, in B, or in both. Now observe the notational pattern: ∪looks very similar to the logic symbol ∨from Chapter 2. The symbols ∩and ∧are also similar. Examples. 1. Let U = {fish, dog, cat, hamster}, A = {fish, cat}, and B = {dog, cat}. Then, A ∪B = {fish, dog, cat}, A ∩B = {cat}. 2. Using interval notation, let U = [−4, 5], A = [−3, 2], and B = [−4, 1). Then AC = [−4, −3) ∪(2, 5], BC = [1, 5], B \ A = [−4, −3), A \ B = [1, 2]. −4 −3 −2 −1 0 1 2 3 4 5 U [ ] A [ ] B [ ) AC [ ) ( ] BC [ ] B \ A [ ) A \ B [ ] 3. Let A = (−∞, 3) and B = [−2, ∞) in interval notation. Then A ∪B = R and A ∩B = [−2, 3). In the final example it seems reasonable to assume that U = R. The universal set is rarely made explicit in practice, and is often assumed to be the smallest suitable uncomplicated set. When dealing with sets of real numbers this typically means U = R. In other situations U = Z or U = {0, 1, 2, 3, . . . , n −1} might be more appropriate. The next theorem comprises the basic rules of set algebra. Theorem 4.7. Let A, B, C be sets. Then: 1. ∅∪A = A and ∅∩A = ∅. 2. A ∩B ⊆A ⊆A ∪B. 3. A ∪B = B ∪A and A ∩B = B ∩A. 4. A ∪(B ∪C) = (A ∪B) ∪C and A ∩(B ∩C) = (A ∩B) ∩C. 5. A ∪A = A ∩A = A. 6. A ⊆B = ⇒A ∪C ⊆B ∪C and A ∩C ⊆B ∩C. You should be able to prove each of these properties directly from Definitions 4.3 and 4.6. Don’t memorize the proofs: with a little practice working with sets, each of these results should feel com-pletely obvious. It is more important that you are able to vizualize the laws using Venn diagrams. A Venn diagram does not constitute a formal proof, though it is extremely helpful for clarification. Here we prove only second result: think about how the Venn diagram in Defintion 4.6 illustrates the result. Some of the other proofs are in the Exercises. 62 Proof of 2. There are two results here: A ∩B ⊆A and A ⊆A ∪B. We show each separately, along with some thinking. Suppose that x ∈A ∩B. (Must show x ∈A ∩B ⇒x ∈A) Then x ∈A and x ∈B. (Definition of intersection) But then x ∈A, whence A ∩B ⊆A (Definition of subset) Now let y ∈A. (Must show y ∈A ⇒y ∈A ∪B) Then ‘y ∈A or y ∈B’ is true, from which we conclude that y ∈A ∪B. Thus A ⊆A ∪B. Once you get comfortable, you can strip away all the comments and write the proof more quickly. The following theorem describes how complements interact with other set operations. Theorem 4.8. Let A, B be sets. Then: 1. (A ∩B)C = AC ∪BC. 2. (A ∪B)C = AC ∩BC. 3. (AC)C = A. 4. A \ B = A ∩BC. 5. A ⊆B ⇐ ⇒BC ⊆AC. A B (A ∩B)C = AC ∪BC Again: don’t memorize these laws! Draw Venn diagrams to help with visualization. Proof of 1. We start by trying to show that the left hand side is a subset of the right hand side. x ∈(A ∩B)C = ⇒x / ∈A ∩B = ⇒x not a member of both A and B = ⇒x not in at least one of A and B = ⇒x / ∈A or x / ∈B = ⇒x ∈AC or x ∈BC = ⇒x ∈AC ∪BC With a little thinking, we realize that all of the = ⇒arrows may be replaced with if and only if arrows ⇐ ⇒without compromising the argument. We’ve therefore shown that the sets (A ∩B)C and AC ∪BC have the same elements, and are thus equal. In the proof we were lucky. Showing that both sides are subsets of each other would have been tedious, but we found a quicker proof by carefully laying out one direction. 
This happens more often than you might think. Just be careful: you can’t always make conditional connectives biconditional. 63 Parts 1. and 2. of the theorem are known as De Morgan’s laws, just as the equivalent statements in logic: Theorem 2.9. Indeed, we could rephrase our proof in that language. Alternative Proof of 1. x ∈(A ∩B)C ⇐ ⇒¬[x ∈A ∩B] ⇐ ⇒¬[x ∈A and x ∈B] ⇐ ⇒¬[x ∈A] or ¬[x ∈B] (De Morgan’s first law) ⇐ ⇒x ∈AC or x ∈BC ⇐ ⇒x ∈AC ∪BC Theorem 4.9 (Distributive laws). For any sets A, B, C: 1. A ∩(B ∪C) = (A ∩B) ∪(A ∩C) 2. A ∪(B ∩C) = (A ∪B) ∩(A ∪C) We prove only the second result. The method is the standard ap-proach: show that each side is a subset of the other. We do both directions this time, though with a little work and the cost of some clarity, you might be able to slim down the proof. The Venn di-agram on the right illustrates the second result: simply add the colored regions. A B C Proof. (⊆) Let x ∈A ∪(B ∩C). Then x ∈A or x ∈B ∩C. There are two cases: (a) If x ∈A, then x ∈A ∪B and x ∈A ∪C by Theorem 4.7, part 2. (b) If x ∈B ∩C, then x ∈B and x ∈C. It follows that x ∈A ∪B and x ∈A ∪C, again by Theorem 4.7. In both cases x ∈(A ∪B) ∩(A ∪C). (⊇) Let y ∈(A ∪B) ∩(A ∪C). Then y ∈A ∪B and y ∈A ∪C. There are again two cases: (a) If y ∈A, then we are done, for then y ∈A ∪(B ∩C). (b) If y / ∈A, then y ∈B and y ∈C. Hence y ∈B ∩C. In particular y ∈A ∪(B ∩C). In both cases y ∈A ∪(B ∩C). Exercises 4.3.1 Describe each of the following sets in as simple a manner as you can: e.g., {x ∈R : (x2 > 4 and x3 < 27) or x2 = 15} = (−∞, −2) ∪(2, 3) ∪{ √ 15, − √ 15}. 64 (a) {x ∈R : x2 ̸= x} (b) {x ∈R : x3 −2x2 −3x ≤0 or x2 = 4} (c) {x2 ∈R : x ̸= 1} (d) {z ∈Z : z2 is even and z3 is odd} (e) {y ∈3Z + 2 : y2 ≡1 (mod 3)} 4.3.2 Let A = {1, 3, 5, 7, 9, 11} and B = {1, 4, 7, 10, 13}. What are the following sets? (a) A ∩B (b) A ∪B (c) A \ B (d) (A ∪B) \ (A ∩B) 4.3.3 Let A ⊆R, and let x ∈R. We say that the point x is far away from the set A if and only if: ∃d > 0: No element of A belongs to the set [x −d, x]. Equivalently, A ∩[x −d, x] = ∅. If this does not happen, we say that x is close to A. (a) Draw a picture of a set A and an element x such that is far away from A. (b) Draw a picture of a set A and an element x such that x is close to A. (c) Compute the definition of “x is close to A”. [So negate “x is far away from A”.] (d) Let A = {1, 2, 3}. Show that x = 4 is far away from A, by using definitions. (e) Let A = {1, 2, 3}. Show that x = 1 is close to A, by using definitions. (f) Show that if x ∈A, then x is close to A. (g) Let A be the open interval (a, b). Is the end-point a far away from A? What about the end-point b? 4.3.4 Consider Theorems 4.7 and 4.9. In all seven results, replace the symbols in the first row of the following table with those in the second. Which of the results seem familar? Which are false? ∅ A, B, C sets ∪ ∩ ⊆ 0 A, B, C ∈N0 + · ≤ 4.3.5 Prove that B \ A = B ⇐ ⇒A ∩B = ∅. 4.3.6 Practice your proof skills by giving formal proofs of the following results from Theorems 4.7 and 4.8. With practice you should be able to prove all of parts of these theorems (and of Theorem 4.9) these without looking at the arguments in the notes! (a) ∅∩A = ∅. (b) A ∩(B ∩C) = (A ∩B) ∩C. (c) (AC)C = A. (d) A ⊆B ⇐ ⇒BC ⊆AC. 65 4.4 Introduction to Functions You have been using functions for a long time. A formal definition in terms of relations will be given in Section 7.2. For the present, we will just use the following. Definition 4.10. Let A and B be sets. 
A function from A to B is a rule f that assigns one (and only one) element of B to each element of A. The domain of f, written dom( f ), is the set A. The codomain of f is the set B. The range of f, written range( f ) or Im( f ), is the subset of B consisting of all the elements assigned by f. You can think of the domain of f as the set of all inputs for the function, and the range of f as the set of all outputs. The codomain is the set of all potential values the function may take (of course, only the values in the range are actually achieved). Notation If f is a function from A to B we write f : A →B. If a ∈A, we write b = f (a) for the the element of B assigned to a by the function f. We can also write f : a 7→b, which is read ‘f maps a to b.’ If U is a subset of A then the image of U is the following subset of B, f (U) = { f (u) ∈B : u ∈U}. The image of A is precisely the range of f, hence the notaion Im( f ), f (A) = range( f ) = Im( f ) = { f (a) : a ∈A}. f a1 f (a1) a2 f (a2) a3 f (a3) = f (a4) a4 A B                f (A) Examples. 1. Let f : [−3, 2) →R be the square function f : x 7→x2. We have dom( f ) = [−3, 2), and range( f ) = [0, 9], as shown in the picture. We could also calculate other im-ages, for example, f [−1, 2)  = [0, 4). 3 6 9 y −3 −2 −1 0 1 2 x range domain 66 2. Define f : Z →{0, 1, 2} by f : n 7→n2 (mod 3), where we take the remainder of n2 modulo 3. Clearly dom( f ) = Z, but what is the range? Trying a few examples, we see the following: n 0 1 2 3 4 5 6 7 8 9 10 f (n) 0 1 1 0 1 1 0 1 1 0 1 It looks like the range is simply {0, 1}. We have already proved this fact in Theorem 2.17, although a faster proof can now be given by appealing to modular arithmetic (Section 3.1). If n ≡0, then n2 ≡0 (mod 3). If n ≡1, then n2 ≡1 (mod 3). If n ≡2, then n2 ≡4 ≡1 (mod 3). Thus n2 ≡0, 1 (mod 3), and range( f ) = {0, 1}. 3. Let A = {0, 1, 2, . . . , 9} be the set of remainders modulo 10 and define f : A →A by f : n 7→3n (mod 10). To help understand this function, list the elements: the domain only has 10 elements after all. n 0 1 2 3 4 5 6 7 8 9 f (n) 0 3 6 9 2 5 8 1 4 7 It should be obvious that range( f ) = A. 4. With the same notation as the previous example, let g : A →A : n 7→4n (mod 10). Now we have the following table: n 0 1 2 3 4 5 6 7 8 9 g(n) 0 4 8 2 6 0 4 8 2 6 with range(g) = {0, 2, 4, 6, 8}. Injections, surjections and bijections Definition 4.11. A function f : A →B is 1–1 (one-to-one), injective, or an injection if it never takes the same value twice. Equivalently,a ∀a1, a2 ∈A, f (a1) = f (a2) = ⇒a1 = a2. f : A →B is onto, surjective, or a surjection if it takes every value in the codomain: i.e., B = range( f ). Equivalently, ∀b ∈B, ∃a ∈A such that f (a) = b. f : A →B is invertible, bijective, or a bijection if it is both injective and surjective. aThis is the contrapositive: if f never takes the same value twice, then ∀a1, a2 ∈A we have a1 ̸= a2 = ⇒f (a1) ̸= f (a2). 67 Remark: Since the definitions of injective and surjective are both ‘forall’ statements, to show that a function is not injective or not surjective you will need counterexamples. First we consider our examples above. The details are provided for 1 and 2. For the remaining examples, make sure you understand why the answer is correct. 1. f : [−3, 2) →R : x 7→x2 is neither injective nor surjective. Indeed we have the following counterexamples: • f (−1) = f (1). If f were injective, the values at 1 and −1 would have to be different. • 81 ∈R, yet there is no x ∈[−3, 2) such that f (x) = 81. 
Thus f is not surjective. 2. f : Z →{0, 1, 2} : n 7→n2 (mod 3) is neither injective nor surjective. • If f were injective, then we could not have f (1) = f (2). • 2 is in the codomain {0, 1, 2} of f, yet 2 / ∈range( f ), so f is not surjective. 3. A bijection: this is an example of a permutation, a bijection from a set onto itself. 4. Neither injective, nor surjective. Here is a more complicated example. Example. Prove that f : R \ {1} →R \ {2} defined by f (x) = 2 + 1 1−x is bijective. (Injectivity) Suppose that x1 and x2 are in R \ {1}, and f (x1) = f (x2). Then 2 + 1 1 −x1 = 2 + 1 1 −x2 . A little elementary algebra shows that x1 = x2, whence f is injective. (Surjectivity) Let y ∈R \ {2} and define x = 1 − 1 y−2. This makes sense since y ̸= 2. Then f (x) = 2 + 1 1 −(1 − 1 y−2) = y whence f is surjective. y x −2 −1 −1 1 1 2 2 3 3 4 4 5 The graphic is colored so that you can see how the different parts of the range and domain corre-spond bijectively. The argument for surjectivity is sneaky: how did we know to choose x = 1 − 1 y−2? The answer is scratch work: just solve y = 2 + 1 1−x for x. Essentially we’ve shown that f has the inverse function f −1(x) = 1 − 1 x−2. 68 Aside: Inverse Functions The word invertible is a synonym for bijective because bijective functions really have inverses! Indeed, suppose that f : A →B is bijective. Since f is surjective, we know that B = range( f ) and so every element of B has the form f (a) for some a ∈A. Moreover, since f is injective, the a in question is unique. The upshot is that, when f is bijective, we can construct a new function f −1 : B →A : f (a) 7→a. This may appear difficult at the moment but we will return to it in Chapter 7. Instead, recall that in Calculus we saw that any injective function has an inverse. How does this fit with our definition? Consider, for example, f : [0, 3] →R : x 7→x2. This is injective but not surjective. To fix this, simply define a new function with the same formula but with codomain equal to the range of f. We obtain the bijective function g : [0, 3] →[0, 9] : x 7→x2, with inverse g−1 : [0, 9] →[0, 3] : x 7→√ x. In Calculus we didn’t nitpick like this and would simply go straight to f −1(x) = √x. In general, if f : A →B is any injective function, then g : A →f (A) : x 7→f (x) is automatically bijective, since we are forcing the codomain of g to match its range. Functions and Cardinality Injective and surjective functions are intimately tied to the notion of cardinality. Indeed, in Chapter 8, we will use such functions to give a definition of cardinality for infinite sets. For the present we stick to finite sets. Theorem 4.12. Let A and B be finite sets. The following are equivalent: 1. |A| ≤|B|. 2. ∃f : A →B injective. 3. ∃g : B →A surjective. Read the theorem carefully. It is simply saying that, of the three statements, if any one is true then all are true. Similarly, if one is false then so are the others. It might appear that we require six arguments! Instead we illustrate an important technique: when showing that multiple statements are equivalent, it is enough to prove in a circle. E.g., if we prove the three implications indicated in the picture, then 1 ⃝⇒3 ⃝will be true because both 1 ⃝⇒2 ⃝and 2 ⃝⇒3 ⃝are true. 1 ⃝ 2 ⃝ 3 ⃝ = ⇒ = ⇒ = ⇒ More generally, to show that n statements are equivalent, only n arguments are required. The proof may appear very abstract, but it is motivated by two straightforward pictures. 
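The theorem itself can also be tested very concretely: for tiny finite sets one can simply search through every function from A to B by brute force. The following sketch does exactly that; the helper names and the example sets are our own, and the search is only feasible because the number of candidate functions is small here.

```python
from itertools import product

def exists_injection(A, B):
    """Brute force over all functions A -> B: is at least one injective?"""
    A, B = list(A), list(B)
    return any(len(set(values)) == len(A)       # no value repeated
               for values in product(B, repeat=len(A)))

def exists_surjection(A, B):
    """Brute force over all functions A -> B: is at least one surjective?"""
    A, B = list(A), list(B)
    return any(set(values) == set(B)            # every element of B is hit
               for values in product(B, repeat=len(A)))

A = {1, 2, 3}
B = {"w", "x", "y", "z"}

# Theorem 4.12 on this example: |A| <= |B|, so an injection A -> B and a
# surjection B -> A should exist, while the reverse directions should fail.
assert exists_injection(A, B) and exists_surjection(B, A)
assert not exists_injection(B, A) and not exists_surjection(A, B)
```

For sets of any serious size this exhaustive search is hopeless, which is why the counting argument in the proof matters.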
Don’t be afraid to use pictures to illustrate your proofs if it’s going to make them easier to follow! If |A| = m and |B| = n, then the two functions can be displayed pictorially. Refer back to these pictures as you read through the proof. 69 A = {a1, a2, a3, · · · , am} 7→ 7→ 7→ 7→ B = {b1, b2, b3, · · · , bm, · · · , bn} A = {a1, a2, a3, · · · , am} 7→ 7→ 7→ 7→ B = {b1, b2, b3, · · · , bm, z }| { bm+1, · · · , bn } The function f The function g Proof. The proof relies crucially on the fact that A, B are finite. Suppose that |A| = m and |B| = n throughout and list the elements of A and B as, A = {a1, a2, . . . , am}, B = {b1, b2, . . . , bn}. 1 ⃝⇒2 ⃝  Assume that m ≤n. Define f : A →B by f (ak) = bk. This is injective since the elements b1, . . . , bm are distinct. 2 ⃝⇒3 ⃝  Suppose that f : A →B is injective. Without loss of generality we may assume that the elements of A and B are labeled such that f (ak) = bk. Now define g : B →A by g(bk) = ( ak if k ≤m, a1 if k > m. Then g is surjective since every element ak is in the image of g. 3 ⃝⇒1 ⃝  Finally suppose that g : B →A is surjective. Without loss of generality we may assume that ak = g(bk) for 1 ≤k ≤m. Thus n ≥m. If you read the proof carefully, it should be clear that when m = n, the function f is actually a bijection (with inverse f −1 = g). Corollary 4.13. If A, B are finite sets, then |A| = |B| ⇐ ⇒∃f : A →B bijective. Proof. Suppose that m = n. The argument 1 ⃝⇒2 ⃝creates an injective function f : A →B. However every element bk ∈B is in the image of f, so this function is also surjective. Hence f is a bijection. Conversely, if f : A →B is a bijection, then it is injective, whence m ≤n. It is also surjective, from which n ≤m. Therefore m = n. Composition of functions Definition 4.14. Suppose that f : A →B and g : B →C are functions. The composition g ◦f : A →C is the function defined by (g ◦f )(a) = g( f (a)). Note the order: to compute (g ◦f )(x), you apply f first, then g. 70 f g g ◦f a f (a) g( f (a)) A B C Example. If f (x) = x2 and g(x) = 1 x−1, then (g ◦f )(x) = 1 x2 −1, and ( f ◦g)(x) = 1 (x −1)2 . You should be extra careful of ranges and domains when composing functions. The domain and range are not always explicitly mentioned, and at times some restriction of the domain is implied. In this example, you might assume that dom( f ) = R and dom(g) = R \ {1}. This is perfectly good if we are considering f and g separately. However, it should be clear from the formulæ that the implied domains of the compositions are, dom(g ◦f ) = R \ {±1}, and dom( f ◦g) = R \ {1}. Finally we consider how injectivity and surjectivity interact with composition. Theorem 4.15. Let f : A →B and g : B →C be functions. Then: 1. If f and g are injective, then g ◦f is injective. 2. If f and g are surjective, then g ◦f is surjective. It follows that the composition of bijective functions is also bijective. Proof. 1. Suppose that f and g are injective and let a1, a2 ∈A satisfy (g ◦f )(a1) = (g ◦f )(a2). We are required to show that a1 = a2. However, (g ◦f )(a1) = (g ◦f )(a2) = ⇒g f (a1)  = g f (a2)  = ⇒f (a1) = f (a2) (since g is injective) = ⇒a1 = a2 (since f is injective) Part 2 is in the Exercises. It is an interesting to observe that the converse of this theorem is false. Assuming that a composition is injective or surjective only requires that one of the component func-tions be so. 71 Theorem 4.16. Suppose that f : A →B and g : B →C are functions. 1. If g ◦f is injective, then f is injective. 2. If g ◦f is surjective, then g is surjective. 
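Theorems 4.15 and 4.16 are easy to experiment with when the sets are small and a function is stored as a lookup table. The sketch below is our own toy illustration, with functions as Python dicts; it composes two injective maps and confirms the first part of Theorem 4.15, and you can edit the tables to build examples for Theorem 4.16 as well.

```python
# Functions between small finite sets stored as dicts; composition is lookup.
f = {1: "a", 2: "b", 3: "c"}                  # injective: no value repeated
g = {"a": 10, "b": 20, "c": 30, "d": 40}      # also injective

def compose(g, f):
    """Table of g ∘ f: apply f first, then g."""
    return {x: g[f[x]] for x in f}

def is_injective(h):
    """A finite map is injective exactly when no value is repeated."""
    return len(set(h.values())) == len(h)

gf = compose(g, f)                            # {1: 10, 2: 20, 3: 30}
assert is_injective(f) and is_injective(g)
assert is_injective(gf)                       # Theorem 4.15, part 1
```

Storing functions as tables also makes the order of composition hard to get wrong: compose(g, f) applies f first, exactly as in Definition 4.14.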
Before showing the proof, consider the following picture of two functions f and g which simulta-neously illustrate both parts of the theorem. It should be clear that g ◦f is bijective, f is only injective, and g is only surjective. f g a2 a1 b2 b1 b3 c2 c1 A B C Here is a formulaic example of the same thing. Make sure you’re comfortable with the definitions and draw pictures or graphs to help make sense of what’s going on. f : [0, 2] →[−4, 4] : x 7→x2 (injective only) g : [−4, 4] →[0, 16] : x 7→x2 (surjective only) g ◦f : [0, 2] →[0, 16] : x 7→x4 (bijective!) Proof. 2. Let c ∈C and assume that g ◦f is surjective. We wish to prove that ∃b ∈B such that g(b) = c. Since g ◦f is surjective, ∃a ∈A such that (g ◦f )(a) = c. But this says that g( f (a)) = c. Hence b = f (a) is an element of B for which g(b) = c. Thus g is surjective. We leave part 1 for the Exercises. Exercises 4.4.1 For each of the following functions f : A →B determine whether f is injective, surjective or bijective. Prove your assertions. (a) f : [0, 3] →R where f (x) = 2x. (b) f : [3, 12) →[0, 3) where f (x) = √ x −3. (c) f : (−4, 1] →(−5, −3] where f (x) = − √ x2 + 9. 72 4.4.2 Suppose that f : [−3, ∞) →[−8, ∞) and g : R →R are defined by f (x) = x2 + 6x + 1, g(x) = 2x + 3. Compute g ◦f and show that g ◦f is injective. 4.4.3 (If you did Exercise 2.3.12 you should find this easy) Let X be a subset of R. A function f : X →R is strictly increasing if ∀a, b ∈X, a < b = ⇒f (a) < f (b). For example, the function f : [0, ∞) →R, x 7→x2 is increasing because ∀a, b ∈[0, ∞), a < b = ⇒f (a) = a2 < b2 = f (b). (a) Give another example of a function that is increasing. Draw its graph, and prove that the function is increasing. (b) By negating the above definition, state what it means for a function not to be strictly increas-ing. (c) Give an example of a function that is not strictly increasing. Draw its graph, and prove that the function is not stictly increasing. (d) Let f, g : R →R be strictly increasing. Prove or disprove: The function h = f + g is strictly increasing. Note that the formula for h is h(x) = f (x) + g(x). 4.4.4 Find: (a) A set A so that the function f : A →R : x 7→sin x is injective. (b) A set B so that the function f : R →B : x 7→sin x is surjective. 4.4.5 A function f : R →R is even if ∀x ∈R, f (−x) = f (x). For example, the function f : R →R, x 7→x2 is even because ∀x ∈R, f (−x) = (−x)2 = x2 = f (x). Note that f is even if and only if the graph of f is symmetric with respect to the y axis. (a) Give an example of a function that is even. Draw its graph, and prove that the function is even. (b) Define what it means for a function not to be even, by negating the definition above. (c) Give an example of a function that is not even. Draw its graph, and prove that the function is not even. (d) Prove or disprove: for every f, g: R →R even, the composition h = f ◦g is even. Here h is the function mapping x to f (g(x)). 73 4.4.6 Define f : (−∞, 0] →R and g : [0, ∞) →R by f (x) = x2, g(x) = ( x 1−x x < 1, 1 −x x ≥1. Does g ◦f map (−∞, 0] onto R? Justify your answer. 4.4.7 Negate Definition 4.11 to find what it means for a function to be (a) Not injective. (b) Not surjective. 4.4.8 Prove that the composition of two surjective functions is surjective. 4.4.9 Suppose that g ◦f is injective. Prove that f is injective. 4.4.10 In the proof of Theorem 4.12 we twice invoked without loss of generality. In both cases explain why the phrase applies. 4.4.11 Recall Examples 3 and 4 on page 67. 
(a) Consider the nine functions fk : A →A : x 7→kx (mod 10), where k = 1, 2, . . . , 9. Find the range of fk for each k. Can you find a relationship between the cardinality of range( fk) and k? (b) More generally, let A = {0, 1, 2 . . . , n −1} be the set of remainders modulo n. If fk : A → A : x 7→kx (mod n), conjecture a relationship between |range( fk)|, k and n. You don’t need to prove your assertions. 74 5 Mathematical Induction and Well-ordering In Section 2.2 we discussed three methods of proof: direct, contrapositive, and contradiction. The fourth standard method of proof, induction, has a very different flavor. In practice it formalizes the idea of spotting a pattern. Before we give the formal definition of induction, we consider where induction fits into the investigative process. 5.1 Investigating Recursive Processes In applications of mathematics, one often has a simple recurrence relation but no general formula. For instance, a process might be described by an expression of the form xn+1 = f (xn), where some initial value x1 is given. While investigating such recurrences, you might hypothesize a general formula xn = g(n). Induction is a method of proof that allows us to prove the correctness of such general formulæ. Here is a simple example of the process. Stacking Paper Consider the operation whereby you take a stack of paper, cut all sheets in half, then stack both halves together. Cut and stack If a single sheet of paper has thickness 0.1 mm, how many times would you have to repeat the pro-cess until the stack of paper reached to the sun? (≈150 million kilometers). The example is describing a recurrence relation. If hn is the height of the stack after n operations, then we have a sequence (hn)∞ n=0 satisfying ( hn+1 = 2hn h0 = 0.1 mm. It is easy to compute the first few terms of the sequence: n 0 1 2 3 4 5 6 7 8 · · · hn (mm) 0.1 0.2 0.4 0.8 1.6 3.2 6.4 12.8 25.6 · · · It is not hard to hypothesize that, after n such operations, the stack of paper will have height hn = 2n × 0.1 mm. 75 All we have done is to spot a pattern. We can reassure ourselves by checking that the first few terms of the sequence satisfy the formula: certainly h0 = 20 × 0.1 mm and h1 = 21 × 0.1 mm, etc. Unfor-tunately the sequence has infinitely many terms, so we need a trick which confirms all of them at once. Unless we can prove that our formula is correct for all n ∈N0 it will remain just a guess. This is where induction steps in. The trick is called the induction step. We assume that we have already confirmed the formula for some fixed, but unspecified, value of n and then use what we know (the recurrence relation hn+1 = 2hn) to confirm the formula for the next value n + 1. Here it goes: Induction Step Suppose that hn = 2n × 0.1 mm, for some fixed n ∈N0. Then hn+1 = 2hn = 2(2n × 0.1) = 2n+1 × 0.1 mm. This is exactly the expression we hoped to find for the (n + 1)th term of the sequence. Think about what the induction step is doing. By leaving n unspecified, we have proved an infinite collection of implications at once! Each implication has the form hn = 2n × 0.1 = ⇒hn+1 = 2n+1 × 0.1. Since the implications have been proved for all n ∈N0, we can string them together: h0 = 20 × 0.1 = ⇒h1 = 21 × 0.1 = ⇒h2 = 22 × 0.1 = ⇒h3 = 23 × 0.1 = ⇒· · · We have already checked that the first formula h0 = 20 × 0.1 in the implication chain is true. By the induction step, the entire infinite collection of formulæ must be true. We have therefore proved that hn = 2n × 0.1 mm = 2n × 10−4 m, ∀n ≥0. 
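If you have a computer to hand, a closed form like this can also be sanity-checked against the recurrence before, or after, writing up the induction. Here is a minimal sketch; to avoid floating-point issues it measures heights in tenths of a millimetre, so the claim h_n = 2^n × 0.1 mm becomes simply h_n = 2^n in those units.

```python
# Sanity check of the closed form h_n = 2^n * 0.1 mm against the recurrence
# h_{n+1} = 2 h_n.  Heights are measured in tenths of a millimetre, so h_0 = 1
# exactly and no floating-point arithmetic is needed.
h = 1                        # h_0 = 0.1 mm
for n in range(200):
    assert h == 2 ** n       # the closed form proved by induction
    h = 2 * h                # one cut-and-stack operation
print("closed form matches the recurrence for n = 0, ..., 199")
```

A check like this is not a proof, since only finitely many cases are tested, but it is a cheap way to catch an algebra slip before writing out the induction step.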
Now that we’ve proved the formula for every hn, finishing the original problem is easy: we need to find n ∈N0 such that hn = 2n × 10−4 ≥150 × 109 m ⇐ ⇒2n ≥15 × 1014. Since logorithms are increasing functions, they preserve inequalities and we may easily solve to see that n ≥log2(15 × 1014) = log2 15 + 14 log2 10 ≈50.4. Thus 51 iterations of the cut-and-stack process are sufficient for the pile of paper to reach the sun! We will formalize the discussion of induction in the next section so that you will never have to write as much as we’ve just done. However, it is important to remember how induction fits into a practical investigation. It is the missing piece of logic that turns a guess into a justified formula. Before we do so, here is a famous and slightly more complicated problem. 76 The Tower of Hanoi The Tower of Hanoi is a game involving circular disks of decreasing radii stacked on three pegs. A ‘move’ consists of transferring the top disk in any stack onto a larger disk or an empty peg. If we start with n disks on the first peg, how many moves are required to transfer all the disks to one of the other pegs? The challenge here is that we have no formula to play with, only the variable n for the number of disks. The first thing to do is to play the game. If the variable rn represents the number of moves required when there are n disks, then it should be immediately clear that r1 = 1: one disk only requires one move! The picture below shows that r2 = 3. With more disks you can keep experimenting and find that r3 = 7, etc. At this point you may be ready to hypothesize a general formula. Conjecture 5.1. The Tower of Hanoi with n disks requires rn = 2n −1 moves. Certainly the conjecture is true for n = 1, 2 and 3. To see that it is true in general, we need to think about how to move a stack of n + 1 disks. Since the largest disk can only be moved onto an empty peg, it follows that the n smaller disks must already be stacked on a single peg before the (n + 1)th disk can move. From the starting position this requires rn moves.        n disks rn moves 1 move rn moves The largest disk can now be moved to the final peg, before the original n disks are moved on top of it. In total this requires rn + 1 + rn moves, as illustrated in the picture. We therefore have a recurrence relation for rn: ( rn+1 = 2rn + 1 r1 = 1. We are now in a position to prove our conjecture. Again we know that the conjecture is true for n = 1 and we assume that the formula rn = 2n −1 is true for some fixed but unspecified n. Now we 77 use the recurrence relation to prove that rn+1 = 2n+1 −1. Induction Step Suppose that rn = 2n −1 for some fixed n ∈N. Then rn+1 = 2rn + 1 = 2(2n −1) + 1 = 2n+1 −2 + 1 = 2n+1 −1. Exactly as in the paper-stacking example, we have simultaneously proved an infinite collection of implications: r1 = 21 −1 = ⇒r2 = 22 −1 = ⇒r3 = 23 −1 = ⇒r4 = 24 −1 = ⇒· · · Since the first of these statements is true, it follows that all of the others are true. Hence Conjecture 5.1 is true, and becomes a theorem. As an illustration of how ridiculously time-consuming the Tower becomes, the following table gives the time taken to complete the Tower if you were able to move one disk per second. Disks Time 5 31sec 10 17min 3sec 15 9hr 6min 7sec 20 12days 3hrs 16min 15sec 25 ∼1yr 23days 30 ∼34yrs 9days Animation of five disks (click) Exercises 5.1.1 A room contains n people. Everybody wants to shake everyone else’s hand (but not their own). (a) Suppose that n people require hn handshakes. 
If an (n + 1)th person enters the room, how many additional handshakes are required? Obtain a recurrence relation for hn+1 in terms of hn. (b) Hypothesize a general formula for hn, and prove it using the method in this section. 5.1.2 Skippy the Kangaroo is playing jump rope, but he tires as the day goes on. The heights hn (inches) of successive jumps are related by the recurrence hn+1 = 8 9hn + 1. (a) Suppose that Skippy’s initial jump has height h1 = 100 in. Show that Skippy fails to jump above 10in for the first time on the 40th jump. (b) Find the total height jumped by Skippy in the first n jumps. You may find it useful to define Hn = hn −9 and think about the recurrence for Hn. Now guess and prove a general formula for Hn. Finally, remind yourself about geometric series.) 78 5.2 Proof by Induction The previous section motivated the need for induction and helped us see where induction fits into a logical investigation. In this section we formally lay out several induction proofs. Induction is the mathematical equivalent of a domino rally; toppling the nth domino causes the (n + 1)th domino to fall, hence to knock all the dominos over it is enough merely to topple the first. Instead of dominoes, in mathematics we consider a sequence of propositions: P(1), P(2), P(3), etc. Induction demonstrates the truth of every proposition P(n) by doing two things: 1. Proving that P(1) is true (Base Case) 2. Proving that ∀n ∈N, P(n) = ⇒P(n + 1) (Induction Step) You could think of the base case as knocking over the first domino, and the induction step as the nth domino knocking over the (n + 1)th, for all n. Both of the examples in the previous section followed this pattern. Unpacking the induction step gives an infinite chain of implications: P(1) = ⇒P(2) = ⇒P(3) = ⇒P(4) = ⇒P(5) = ⇒· · · . The base case says that P(1) is true, and so all of the remaining propositions P(2), P(3), P(4), P(5), . . . are also true. All induction proofs have the same formal structure: (Set-up) Define P(n), set-up notation and orient the reader as to what you are about to prove. (Base Case) Prove P(1). (Induction Step) Let n ∈N be fixed and assume that P(n) is true. This assumption is the induction hypothesis. Perform calculations or other reasonings to conclude that P(n + 1) is true. (Conclusion) Remind the reader what it is you have proved. As you read more mathematics, you will find that the induction step is often the most involved part of the proof. The set-up stage is often no more than a sentence: ‘We prove by induction,’ and the explicit definition of P(n) is commonly omitted. These are the only shortcuts that it is sensible to take until you are extremely comfortable with induction. Practice making it completely clear what you are doing at each juncture. Here is a straightforward theorem, where we write the proof in the above language. Theorem 5.2. The sum of the first n positive integers is given by the formula n ∑ i=1 i = 1 2n(n + 1). 79 Proof. (Set-up) We prove by induction. For each n ∈N, let P(n) be the proposition n ∑ i=1 i = 1 2n(n + 1). (Base Case) Clearly 1 ∑ i=1 i = 1 = 1 21(1 + 1), and so P(1) is true. (Induction Step) Assume that P(n) is true for some fixed n ≥1. We compute the sum of the first n + 1 positive integers using our induction hypothesis P(n) to simplify: n+1 ∑ i=1 i = (n + 1) + n ∑ i=1 i = (n + 1) + 1 2n(n + 1) (by assumption of P(n)) =  1 + 1 2n  (n + 1) = 1 2(n + 2)(n + 1) = 1 2(n + 1) (n + 1) + 1 . This last says that P(n + 1) is true. 
(Conclusion) By mathematical induction, we conclude that P(n) is true for all n ∈N. That is ∀n ∈N, n ∑ i=1 i = 1 2n(n + 1). Note how we grouped 1 2(n + 1) (n + 1) + 1 so that it is obviously the right hand side of P(n + 1). Here is another example in the same vein, but done a little faster.16 Theorem 5.3. Prove that n(n + 1)(2n + 1) is divisible by 6 for all natural numbers n. Proof. We prove by induction. For each n ∈N, let P(n) be the proposition n(n + 1)(2n + 1) is divisible by 6. (Base Case) Clearly 1 · (1 + 1) · (2 · 1 + 1) = 6 is divisible by 6, hence P(1) is true. (Induction Step) Assume that P(n) is true for some fixed n ∈N. Then (n + 1)(n + 2) 2(n + 1) + 1 −n(n + 1)(2n + 1) = (n + 1) (n + 2)(2n + 3) −n(2n + 1) = (n + 1)(2n2 + 7n + 6 −2n2 −n) = 6(n + 1)2. This is divisible by 6. Since, by the induction hypothesis, n(n + 1)(2n + 1) is also divisible by 6, it 16The most common question after reading this proof is, ‘How would I know to do that calculation?’ It is better to think on how much scratch work was done before the originator stumbled on exactly this argument. Read more proofs and practice writing them, and you’ll soon find that strategies like these will suggest themselves! 80 follows that (n + 1)(n + 2) 2(n + 1) + 1 = n(n + 1)(2n + 1) + 6(n + 1)2 is divisible by 6, as required. Thus P(n + 1) is true. By mathematical induction, P(n) is true for all n ≥1. Theorem 5.3 is also true for n = 0, and indeed for all integers n. As we shall see in the next section, induction works perfectly well with any base case (say n = 0): you are not tied to n = 1. We can even modify induction to prove the result for the negative integers! Here is another example, written in a more advanced style: we don’t explicitly name P(n), and the reader is expected to be familiar enough with induction to realize when we are covering the base case and induction step. If you find this proof a challenge, you should rewrite it in the same style as we used previously. Some assistance in this is given below. Theorem 5.4. For all n ∈N, 2 + 5 + 8 + · · · + (3n −1) = 1 2n(3n + 1). Proof. For n = 1 we have 2 = 2, hence the proposition holds. Now suppose the proposition holds for some fixed n ∈N. Then 2 + 5 + · · · + [3(n + 1) −1] = [2 + 5 + · · · + (3n −1)] + 3n + 2 = 1 2n(3n + 1) + 3n + 2 = 1 2(3n2 + 7n + 4) = 1 2(n + 1)(3n + 4) = 1 2(n + 1) 3(n + 1) + 1 . This says that the proposition holds for n + 1. By mathematical induction the proposition holds for all n ∈N. Scratch work is your friend! Once you are comfortable with the structure of an induction proof, the challenge is often in finding a clear argument for the induction step. Don’t dive straight into the proof! First try some scratch calculations. Be creative, since the same approach will not work for all proofs. One of the benefits of explicitly stating P(n) is that it helps you to isolate what you know and to identify your goal. When stuck, write down both expressions P(n) and P(n + 1) and you will often see how to proceed. Consider, for example, the proof of Theorem 5.4. We have: P(n) : 2 + 5 + 8 + · · · + (3n −1) = 1 2n(3n + 1). P(n + 1) : 2 + 5 + 8 + · · · + [3(n + 1) −1] = 1 2(n + 1) 3(n + 1) + 1 Simply by writing these down, we know that our goal is to somehow convert the left hand side of P(n + 1) into the right hand side, using P(n). 81 As a final comment on scratch work, remember that it is very unlikely to constitute a proof. Here is a typical attempt at a proof of Theorem 5.4 by someone who is new to induction. False Proof. 
P(n + 1) : 2 + 5 + · · · + (3n −1) | {z } = 1 2 n(3n+1) by P(n) +[3(n + 1) −1] = 1 2(n + 1) 3(n + 1) + 1 = 1 2(n + 1)(3n + 4) = ⇒ 3 2n2 + 1 2n + 3n + 3 −1 = 1 2(3n2 + 7n + 4) = ⇒ 3 2n2 + 7 2n + 2 = 3 2n2 + 7 2n + 2 Such an approach is likely to score zero in an exam! Here are some of the reasons why. • P(n + 1) is the goal, the conclusion of the induction step. You cannot prove P(n) = ⇒P(n + 1) by starting with P(n + 1)! • More logically: the false proof says that something we don’t know (P(n) ∧P(n + 1)) implies something true (the trivial final line). Since the implications T = ⇒T and F = ⇒T are both true, this tells us nothing about whether P(n + 1) is true. • Reversing the arrows and turning the false proof upside down would be a start. However there is no explanation as to why the calculation is being done. The induction step is only part of an induction proof and it need to be placed and explained in context. More concretely: – There is no set-up. P(n) has not been defined, neither indeed has n. You cannot use symbols in a proof unless they have been properly defined. – The base case is missing. – There is no conclusion. Indeed the word induction isn’t mentioned: is the reader supposed to guess that we’re doing induction?! For all this negativity, there are some good things here. If you remove the = ⇒symbols, you are left with an excellent piece of scratch work. By simplifying both sides of your goal you can more easily see how to calculate. For example, the expression 1 2(n + 1)(3n + 4) is an easier target to aim for when manipulating the left hand side of P(n + 1). Your scratch work may make perfect sense to you, but if a reader cannot follow it without your assistance then it isn’t a proof. The moral of the story is to do your scratch work for the induction step then lay out the structure of the proof (set-up, base case, etc.) before incorporating your calculation into a coherent and convincing argument. Exercises 5.2.1 (a) Complete Gauss’ direct proof of Theorem 5.2. (b) Give a direct proof of Theorem 5.3. (c) In Theorem 5.3, what is the proposition P(n + 1)? (d) In the Induction Step of Theorem 5.3, explain why it would be incorrect to write P(n + 1) −P(n) = (n + 1) (n + 2)(2n + 3) −n(2n + 1) 82 = (n + 1)(2n2 + 7n + 6 −2n2 −n) = 6(n + 1)2. 5.2.2 Prove by induction that for each natural number n, we have n ∑ j=0 2j = 2n+1 −1. 5.2.3 Consider the following Theorem: If n is a natural number, then n ∑ k=1 k3 = 1 4n2(n + 1)2. (a) What explicitly is the meaning of 4 ∑ k=1 k3? (b) What would be meant by the expression n ∑ k=1 n3, and why is it different to n ∑ k=1 k3? (c) If the Theorem is written in the form ∀n ∈N, P(n), what is the proposition P(n)? (d) Give as many reasons as you can as to why the following ‘proof’ of the induction step is incorrect. P(n + 1) = n+1 ∑ k=1 k3 = 1 4(n + 1)2((n + 1) + 1)2 = n ∑ k=1 k3 + (n + 1)3 = 1 4(n + 1)2(n + 2)2 = 1 4n2(n + 1)2 + (n + 1)3 = 1 4(n + 1)2(n + 2)2 = 1 4(n + 1)2 n2 + 4(n + 1) = 1 4(n + 1)2(n + 2)2 = 1 4(n + 1)2(n + 2)2 = 1 4(n + 1)2(n + 2)2 (e) Give a correct proof of the Theorem by induction. 5.2.4 (a) Prove by induction that ∀n ∈N we have 3|(2n + 2n+1). (b) Give a direct proof that 3|(2n + 2n+1) for all integers n ≥1 and for n = 0. (c) Look carefully at your proof for part (a). If you had started with the base case n = 0 instead of n = 1, would your proof still be valid? 5.2.5 Show by induction, that for every n ∈N we have: n ≡5 (mod 3) or n ≡6 (mod 3) or n ≡7 (mod 3). 
5.2.6 Show, by induction, that for all n ∈N, 4 divides the integer 11n −7n. 5.2.7 (a) Find a formula for the sum of the first n odd natural numbers. Prove your assertion by induction. (b) Give an alternative direct proof of your formula from part (a). You may use results such as n ∑ i=1 i = 1 2n(n + 1). 83 5.3 Well-ordering and the Principle of Mathematical Induction Before seeing more examples of induction, it is worth thinking more carefully about the logic behind induction. The fact that induction really proves statements of the form ∀n ∈N, P(n) depends on a fundamental property of the natural numbers. Definition 5.5. A set of real numbers A is well-ordered if every non-empty subset of A has a minimum element. The definition is delicate: to test if a set A is well-ordered, we need to check all of its non-empty subsets. The definition could be written as follows: ∀B ⊆A such that B ̸= ∅, we have that min(B) exists. Consequently, to show that a set A is not well-ordered, we need only exhibit a non-empty subset B which has no minimum. Examples. 1. A = {4, −7, π, 19, ln 2} is a well-ordered set. There are 31 non-empty subsets of A, each of which has a minimum element. Can you justify this fact without listing the subsets? 2. The interval [3, 10) is not well-ordered. Indeed (3, 4) is a non-empty subset which has no mini-mum element. 3. The integers Z are not well-ordered, since there is no minimum integer. More generally, every finite set of numbers is well-ordered, and intervals are not. Are there any infinite sets which are well-ordered? The answer is yes. Indeed it is part of the standard definition (Peano’s Axioms) of the natural numbers that N is such a set. Axiom. N is well-ordered. Armed with this axiom, we can justify the method of proof by induction. Theorem 5.6 (Principle of Mathematical Induction). Let P(n) be a proposition for each n ∈N. Suppose: (a) P(1) is true. (b) ∀n ∈N, P(n) = ⇒P(n + 1). Then P(n) is true for all n ∈N. Proof. We argue by contradiction. Assume that conditions (a) and (b) hold and that ∃n ∈N such that P(n) is false. Then the set S = {k ∈N : P(k) is false} is a non-empty subset of the well-ordered set N. It follows that S has a minimum element m = min(S). Note that P(m) is false. Clearly m ̸= 1, since P(1) is true (condition (a)). Therefore m ≥2 and so m −1 ∈N. Since m = min(S) it follows that m −1 ̸∈S and so P(m −1) must be true. 84 Now condition (b) forces P(m) to be true. A contradiction. We conclude that P(n) is true for all n ∈N. Different Base Cases An induction argument need not begin with the case n = 1. By proving Theorem 5.6 it should be clear where we used the well-ordering of N. Now fix an integer m (positive, negative or zero) and consider the set Z≥m = {n ∈Z : n ≥m} = {m, m + 1, m + 2, m + 3, . . .}. This set is well-ordered, whence the following modification of the induction principle is immediate. Corollary 5.7. Fix m ∈Z. Let P(n) be a proposition for each integer n ≥m. Suppose: (a) P(m) is true. (b) ∀n ≥m, P(n) = ⇒P(n + 1). Then P(n) is true for all n ≥m. We are simply changing the base case. The induction concept is exactly the same as before: P(m) = ⇒P(m + 1) = ⇒P(m + 2) = ⇒P(m + 3) = ⇒· · · As long as you explicitly prove the first claim in the sequence, and you show the induction step, then all the propositions are true. Here is an example where we begin with n = 4. Theorem 5.8. For all integers n ≥4, we have 3n > n3. Proof. We prove by induction. The first case of interest is n = 4, so we choose this to be our base case. 
(Base Case) If n = 4 we have 3n = 81 > 64 = n3. The proposition is therefore true for n = 4. (Induction Step) Fix n ∈Z≥4 and suppose that 3n > n3. Then 3n+1 = 3 · 3n > 3n3. To finish the proof, we want to see that this right hand side is at least (n + 1)3. Now 3n3 ≥(n + 1)3 ⇐ ⇒3 ≥  1 + 1 n 3 This is true for n = 3 and, since the right hand side is decreasing as n increases, it is certainly true when n ≥4. We therefore conclude that 3n > n3 = ⇒3n+1 > (n + 1)3 85 which is the induction step. By induction, we have shown that 3n > n3 whenever n ∈Z≥4. Our next example is reminiscent of sequences and series from elementary calculus. If you follow the derivation of such a formula given in an elementary calculus text, you’ll probably see liberal use of ellipsis dots (. . .). When you see ellipses in a proof, it is often because the author is hiding an induction argument. Theorem 5.9. For all integers n ≥3, we have n ∑ i=3 1 i(i −2) = 3 4 − 2n −1 2n(n −1). (∗) Proof. We prove by induction. (Base Case) When n = 3, (∗) reads 3 ∑ i=3 1 i(i−2) = 3 4 −5 12. Both sides are equal to 1 3, and so (∗) is true. (Induction Step) Assume that (∗) is true for some fixed n ≥3. Then n+1 ∑ i=3 1 i(i −2) = n ∑ i=3 1 i(i −2) + 1 (n + 1)(n −1) = 3 4 − 2n −1 2n(n −1) + 1 (n + 1)(n −1) (by the induction hypothesis) = 3 4 − (2n −1)(n + 1) −2n 2(n + 1)n(n −1)  = 3 4 −  1 + n −2n2 2(n + 1)n(n −1)  = 3 4 + (2n + 1)(1 −n) 2(n + 1)n(n −1) = 3 4 − 2n + 1 2(n + 1)n which is exactly (∗) when n is replaced by n + 1. By induction (∗) holds for all integers n ≥3. Our final example involves a little abstraction. Theorem 5.10. The interior angles of an n-gon (n-sided polygon) sum to (n −2)π radians. The challenge here is to set up the induction step properly. We will take the initial case (n = 3) that the angles of a triangle sum to π radians as given,17 and merely prove the induction step. The main logical difficulty comes from the fact that we must consider all n-gons simultaneously. If we were to write the induction step in the form ∀n ∈Z≥3, P(n) = ⇒P(n + 1), 17Can you supply a direct proof of this fact? 86 then the proposition P(n) would be P(n) : ∀n-gons Pn, the sum of the interior angles of Pn is (n −2)π radians. To prove our induction step for a fixed integer n, we must show that all (n + 1)-gons have the correct sum of interior angles. We therefore assume that we are given some (n + 1)-gon Pn+1 and proceed to compute its interior angles in terms of a related n-gon. Proof. Fix an integer n ≥3, and suppose that all n-gons have interior angles summing to (n −2)π radians. Suppose we are given an (n + 1)-gon Pn+1. Select a vertex A, and label the adjacent vertices B and C. Delete A, and join B and C with a straight edge. The result is an n-gon Pn. There are two cases to consider.18 Case 1: The deleted point A is outside Pn. The sum of the inte-rior angles of Pn+1 exceeds those of Pn by the α + β + γ = π radians of the triangle △ABC. Therefore Pn+1 has interior angles summing to (n −2)π + π = [(n + 1) −2]π radians. Case 2: The deleted point A is inside Pn. To obtain the sum of the interior angles of Pn+1, we take the sum of the interior angles of Pn and do three things: • Subtract β • Subtract γ • Add the reflex angle 2π −α at A We are therefore adding an additional A B C Pn γ β α Case 1: A outside Pn A B C Pn γ β α Case 2: A inside Pn −β −γ + (2π −α) = 2π −(α + β + γ) = 2π −π = π radians. Pn+1 again has interior angles summing to [(n + 1) −2]π radians. 
Note that if A was on the edge of Pn, then our original polygon Pn+1 would have had only n sides. 18We are obscuring two subtleties here. It is a fact, though not an obvious one, that it is always possible to choose a vertex A so that the new polygon Pn doesn’t cross itself. Read about ‘ears’ and ‘mouths’ of polygons and triangulation if you’re interested. There are also two other, less likely, cases, where deleting a point from an (n + 1)-gon leaves you with an (n −1)-gon, or even an (n −2)-gon. To think it out, try drawing a 12-gon in the shape of a Star of David. Deleting one of the outer corners creates a 9-gon! Dealing with these cases strictly requires strong induction, so we return to them later. Aside: Well-ordering more generally Well-ordering is a fundamental concept whose implications are far beyond what we’re discussing here. Informally speaking, well-ordering a set A involves listing the elements of A in some order so that every non-empty subset of A has a first element with respect to that order. Consider, for example, the set of negative integers Z−. For the purposes of these notes we will always consider the standard ordering: · · · < −4 < −3 < −2 < −1. 87 Written in the standard order, Z−= {. . . , −4, −3, −2, −1} is not a well-ordered set. In more ad-vanced logic course one could consider alternative orderings, and the definition of well-ordered would change accordingly. If we choose the alternative ordering Z−= {−1, −2, −3, −4, · · · }, (∗) then Z−would be well-ordered: if B ⊆Z−is non-empty and has its elements listed in the same order as (∗), then B has a first element. Since the principle of mathematical induction depends only on us having a well-ordered set, we are now permitted to prove theorems of the form ∀n ∈Z−, P(n), by induction. The base case is n = −1 and the induction step justifies the chain P(−1) = ⇒P(−2) = ⇒P(−3) = ⇒· · · An extremely important theorem in advanced set theory states that it is possible to well-order every set. With a slight modification of the process, this massively increases the applicability of induction. In these notes we keep things simple: well-ordering is always in the sense of Definition 5.5, where we list the elements of a set in the usual increasing order. Exercises 5.3.1 Consider the following Theorem. For every natural number n ≥2,  1 −1 4   1 −1 9   1 −1 16  · · ·  1 −1 n2  = n + 1 2n (a) If the Theorem is written in the form ∀n ∈N≥2, P(n), what is P(n)? (b) Π-notation is used for products in the same way as Σ-notation for sums: for example 5 ∏ k=1 (k + 1)k = 21 · 32 · 43 · 54 · 65 Rewrite the statement of the Theorem using Π-notation. (c) Prove the Theorem by induction (you may use whatever notation you wish). 5.3.2 Recall the geometric series formula from calculus: if r ̸= 1 is constant, and n ∈N0, then n ∑ k=0 rk = 1 −rn+1 1 −r (∗) (a) Here is an incorrect proof by induction. Explain why it is incorrect. Proof. Let P(n) = n ∑ k=0 rk = 1−rn+1 1−r . (Base Case n = 0) P(0) = 0 ∑ k=0 rk = r0 = 1 = 1−r0+1 1−r is true. (Induction Step) Fix n ∈N0 and assume that P(n) is true. Then P(n + 1) = n+1 ∑ k=0 rk = n ∑ k=0 rk + rn+1 = 1 −rn+1 1 −r + rn+1 88 = 1 −rn+1 1 −r + rn+1 −rn+2 1 −r = 1 −rn+2 1 −r , is true. By induction, (∗) is true for all n ∈N0. (b) Give a correct proof of (∗). 5.3.3 Here is an argument attempting to justify n ∑ i=1 i = 1 2n(n + 1) + 7. What is wrong with it? Assume that the statement is true for some fixed n. 
Then n+1 ∑ i=1 i = n ∑ i=1 i + (n + 1) = 1 2n(n + 1) + 7 + (n + 1) = 1 2(n + 1)[(n + 1) + 1] + 7, hence the statement is true for n + 1 and, by induction, for all n ∈N. 5.3.4 Consider the following ‘proof’ that all human beings have the same age. Where is the flaw in the argument. Proof. (Base case n = 1) Clearly, in a set with only 1 person, all the people in the set have the same age. (Inductive hypothesis) Suppose that for some integer n ≥1 and for all sets with n people, it is true that all of the people in the set have the same age. (Inductive step) Let A be a set with n + 1 people, say A = {a1, . . . , an, an+1}, and let A′ = {a1, . . . , an} and A′′ = {a2, . . . , an+1}. The inductive hypothesis tells us that all the people in A′ have the same age and all the people in A′′ have the same age. Since a2 belongs to both sets, then all the people in A have the same age as a2. We conclude that all the people in A have the same age. (Conclusion) By induction, the claim holds for all n ≥1. 5.3.5 Let P(n) and Q(n) be propositions for each n ∈N. (a) Assume that m is the smallest natural number such that P(m) is false. Let A = {n ∈N : n < m}. What can you say about the elements in the set A, with respect to the property P? (b) Assume that a is the smallest natural number such that P(a) ∨Q(a) is false. Let B = {n ∈N : n < a}. What can you say about the elements in the set B, with respect to the properties P and Q? (c) Assume that u is the smallest natural number such that P(u) ∧Q(u) is false. Let C = {n ∈N : n < u}. What can you say about the elements in the set C, with respect to the properties P and Q? 89 (d) Assume that P(1) is true, but that ‘∀n ∈N, P(n)’ is false. Show that there exists a natural number k such that the implication P(k) = ⇒P(k + 1) is false. 5.3.6 Prove that if A ⊆R is a finite set, then A is well-ordered. 5.3.7 In this question we use the fact that N0 is well-ordered to prove the Division Algorithm (Theo-rem 3.2). If m ∈Z and n ∈N, then ∃unique q, r ∈Z such that m = qn + r and 0 ≤r < n. Let m ∈Z and n ∈N be given, and define S = {k ∈N0 : k = m −qn for some q ∈Z}. (a) Show that S is a non-empty subset of N0. (b) N0 is well-ordered. By part (a), S has a minimal element r. Prove that 0 ≤r < n. (c) Suppose that there are two pairs of integers (q1, r1) and (q2, r2) which satisfy m = qin + ri. Prove that r1 = r2 and, consequently, that the division algorithm is true. 5.3.8 In this question we consider Peano’s Axioms for the natural numbers: Initial element: 1 ∈N Successor elements: There is a successor function f : N →N. For each n ∈N, the successor f (n) is also a natural number. No predecessor of 1 ∀n ∈N, f (n) = 1 is false. Unique predecessor: f is injective: f (n) = f (m) = ⇒m = n. Induction: If A ⊆N has the following properties: • 1 ∈A, • ∀a ∈A, f (a) ∈A, then A = N. The successor function f is simply ‘plus one’ in disguise: f (n) = n + 1. (a) Suppose you replace N with Z in each of the above axioms. Which axioms are still true and which are false? (b) Here we use the notation (m, n) to represent a pair of natural numbers. Let T be the set of all pairs T = {(m, n) : m, n ∈N}. Let f : T →T be the function f (m, n) = (m + 1, n). Letting the pair (1, 1) play the role of ‘1’ in Peano’s axioms, and f be the successor function, decide which of the above axioms are satisfied by T. (c) (Hard!) With the same set T as in part (b), take the successor function f : T →T to be f (m, n) = ( (m −1, n + 1) if m ≥2, (m + n, 1) if m = 1. Which of the above axioms are satisfied by T and f? 
90 5.3.9 (Ignore this question if you haven’t studied matrices) Suppose that A = 7 12 −2 −3  . We prove that ∀n ∈Z, An = −2 −6 1 3  + 3n  3 6 −1 −2  . (†) Here A−n = (An)−1 is the inverse of An, and we follow the convention that A0 = 1 0 0 1  is the identity matrix. (a) Prove by induction that (†) holds ∀n ∈N0. (b) Modify your argument in part (a) to prove that (†) holds ∀n ∈Z− 0 . (Use the fact that, when written in reverse order, Z− 0 = {0, −1, −2, −3, −4, . . .} is a well-ordered set.) (c) Using what you know about matrix inverses, give a direct proof that (†) holds ∀n ∈Z− 0 . (If C and D are 2 × 2 matrices such that CD = 1 0 0 1  , then D = C−1.) (d) Diagonalize the matrix A and thereby give a direct proof of (†) for all integers n. 91 5.4 Strong Induction The principle of mathematical induction as represented in Theorem 5.6 is sometimes known as weak induction. In weak induction, the induction step requires only that one proposition P(n) is true to demonstrate the truth of P(n + 1). By contrast, the induction step in strong induction additionally requires that some, perhaps all, of the propositions coming before P(n) are also true. Theorem 5.11 (Principle of Strong Induction). Let m be an integer and suppose that P(n) is a proposition for each n ∈Z≥m. Also fix an integer l > m. Suppose: (a) P(m), P(m + 1), . . . , P(l) are true. (b) ∀n ≥l, (P(m) ∧P(m + 1) ∧· · · ∧P(n)) = ⇒P(n + 1). Then P(n) is true for all n ∈Z≥m. The statement is a little complicated: what matters is that Z≥m is a well-ordered set. In the simplest examples, we have m = 1 and Z≥1 = N. The challenge in strong induction is identifying how many base cases l −m + 1 are needed. To see this in action, consider the Fibonacci numbers: an excellent source of strong induction examples. Definition 5.12. The Fibonacci numbers are the sequence ( fn)∞ n=1 defined by the recurrence relation ( fn+1 = fn + fn−1 if n ≥2, f1 = f2 = 1 (∗) Theorem 5.13. ∀n ∈N, fn < 2n. Proof. For each natural number n, let P(n) be the proposition fn < 2n. (Base cases n = 1, 2) f1 = 1 < 21 and f2 = 1 < 22, whence P(1) and P(2) are true. (Induction step) Fix n ≥2 and suppose that P(1), . . . , P(n) are true. Then fn+1 = fn + fn−1 < 2n + 2n−1 < 2n + 2n = 2n+1 which says that P(n + 1) is true. By strong induction P(n) is true for all n ∈N, and so fn < 2n. In terms of Theorem 5.11, we have m = 1 and l = 2 with m −l + 1 = 2 base cases. Te reason we need m = 1 is because the first claim in the Theorem is about the integer 1, namely f1 < 21. We need two base cases because the recurrence relation (∗) defining the Fibonacci numbers requires the previous two terms of the sequence to construct the next. 92 To help understand strong induction, it is instructive to see why a proof by weak induction would fail in this setting. Wrong Proof A. We show, by weak induction, that ∀n ∈N, fn < 2n. (Base Case n = 1) By definition, f1 = 1 < 21, whence the claim is true for n = 1. (Induction Step) Fix n ∈N and assume that fn < 2n. We want to show that fn+1 < 2n+1. By the recurrence relation, we can write fn+1 = fn + fn−1. (∗) The inductive hypothesis tells us that fn < 2n, but what can we say about fn−1? Absolutely nothing! We are stuck: weak induction fails to prove the theorem. The incorrect proof tells us why we need strong induction: the recurrence relation defines each Fibonacci number (except f1 and f2) in terms of the previous two. To make use of the recurrence, our induction hypothesis must assume something about at least fn and fn−1. 
Assuming something about only fn is not enough. From Wrong Proof A we learned that we needed to prove by strong induction. Now suppose that we try the following, which looks almost identical to the correct proof. Wrong Proof B. For each n ∈N, let P(n) be the proposition fn < 2n. We prove that P(n) is true for all n ∈N by strong induction. (Base Case n = 1) By definition, f1 = 1 < 21, whence P(1) is true. (Induction Step) Fix n ∈N and assume that P(1), . . . , P(n) are all true. We want to show that fn+1 < 2n+1. By the recurrence relation, we can write fn+1 = fn + fn−1 < 2n + 2n−1 < 2 · 2n = 2n+1. (†) Hence P(n) is true for all n ≥1. Where is the problem with this second incorrect proof? The recursive formula fn+1 = fn + fn−1 only applies if n ≥2. If we take n = 1, then it reads f2 = f1 + f0, but f0 is not defined! In the induction step of Wrong Proof B, we are letting n be any integer ≥1. When n = 1 the step (†) is not justified, and so the proof fails. For (†) to be legitimate, we must have n ≥2. This is why, in our correct proof, we had to prove P(1) and P(2) separately. The moral here is to try the induction step as scratch work. Your attempt will tell you if you need strong induction and, if you do, how many base cases are required. 93 Strong Induction on Well-ordered Sets In the next example the first term is suffixed by n = 0. In the language of Theorem 5.11, we have m = 0 and l = 1 with m −l + 1 = 2 base cases. Just like the Fibonacci example, two base cases are required because the defining recurrence relation constructs the next term in the sequence from the two previous terms. Theorem 5.14. A sequence of integers (an)∞ n=0 is defined by ( an = 5an−1 −6an−2, n ≥2, a0 = 0, a1 = 1. Then an = 3n −2n for all n ∈N0. Proof. We prove by strong induction. (Base cases n = 0, 1) The formula is true in both cases: a0 = 0 = 30 −20 and a1 = 1 = 31 −21. (Induction step) Fix an integer n ≥1 and suppose that ak = 3k −2k for all k ≤n. Then an+1 = 5an −6an−1 = 5(3n −2n) −6(3n−1 −2n−1) = (15 −6)3n−1 + (10 −6)2n−1 = 3n+1 −2n+1. By strong induction an = 3n −2n is true for all n ∈N0. Think about why we wrote an+1 = 5an −6an−1 in the induction step, whereas the statement in the Theorem reads an = 5an−1 −6an−2. Does it matter? What does it mean to say that n is a ‘dummy variable’? In the two previous examples, it might seem that strong induction is something of a logical overkill. In the induction step we are assuming far more than we need. In both examples, estab-lishing the truth of P(n + 1) required only the truth of P(n) and P(n −1). We assumed that the earlier propositions were also true, but we never used them. Depending on the proof, you might need two, three or even all of the propositions prior to P(n + 1) to complete the induction step. Once you are used to strong induction you may feel comfortable slimming a proof down so that you only mention precisely what you need. For the present, the way we’ve stated the principle is maximally safe! For some practice with this, see Exercise 5.4.3 where three base cases are needed, and the induc-tion step requires the three previous propositions P(n), P(n −1), P(n −2) to P(n + 1). In order to see strong induction in all its glory, where the induction step requires all of the previous propositions, we prove part of the famous Fundamental Theorem of Arithmetic which states that all natural numbers may be factored into a product of primes: for example 3564 = 22 × 34 × 11. Definition 5.15. p ∈N≥2 is prime if its only positive divisors are itself and 1. 
If q ∈N≥2 is not prime, then it is composite: ∃a, b ∈N≥2 such that q = ab. 94 As you read the proof, think carefully about why only one base case is required. Theorem 5.16. Every natural number n ≥2 is either prime, or a product of primes. Proof. We prove by strong induction. (Base case n = 2) The only positive divisors of 2 are itself and 1, hence 2 is prime. (Induction step) Fix n ∈N≥2 and assume that every natural number k satisfying 2 ≤k ≤n is either prime or a product of primes. There are two possibilities: • n + 1 is prime. In this case we are done. • n + 1 is composite. Thus n + 1 = ab for some natural numbers a, b ≥2. Clearly a, b ≤n, and so, by the induction hypothesis, both are prime or the product of primes. Therefore n + 1 is also the product of primes. By strong induction we see that all natural numbers n ≥2 are either prime, or a product of primes. Exercises 5.4.1 Define a sequence (bn)∞ n=1 as follows: ( bn = bn−1 + bn−2, n ≥3, b1 = 3, b2 = 6. Prove: ∀n ∈N, bn is divisible by 3. 5.4.2 Consider the proof of Theorem 5.16. (a) If the Theorem is written in the form ∀n ∈N≥2, P(n), what is the proposition P(n)? (b) Explicitly carry out the induction step for the three situations n + 1 = 9, n + 1 = 106 and n + 1 = 45. How many different ways can you perform the calculation for n + 1 = 45? Explain why it is only necessary in the induction step to assume that all integers k satisfying 2 ≤k ≤n+1 2 are prime or products of primes. (c) Rewrite the proof in the style of Theorem 5.13, explicitly mentioning the propositions P(n), and thus making the logical flow of strong induction absolutely clear. 5.4.3 Define a sequence (cn)∞ n=0 as follows: ( cn+1 = 49 8 cn −225 8 cn−2, n ≥2, c0 = 0, c1 = 2, c2 = 16. Prove that cn = 5n −3n for all n ∈N0. Hint: you need three base cases! 5.4.4 Prove that the nth Fibonacci number fn is given by the formula fn = φn −ˆ φn √ 5 , where φ = 1 + √ 5 2 and ˆ φ = 1 − √ 5 2 . φ is the famous Golden ratio. φ and ˆ φ are the two solutions to the equation φ = 1 + φ−1. 95 5.4.5 In this question we use an alternative definition of prime.19 Definition. p ∈N≥2 is prime if ∀a, b ∈N, p| ab = ⇒p| a or p|b. Let p be prime, let n ∈N, and let a1, . . . , an be natural numbers such that p divides the product a1a2 · · · an. Prove by induction that, ∃i ∈{1, 2, . . . , n} such that p| ai. Hint: you need to cover two base cases. Why? Think about the induction step first and it will help you decide how many base cases you need. 5.4.6 Show that for every positive integer n, (3 + √ 5)n + (3 − √ 5)n is an even integer. Hints: Prove simultaneously that (3 + √ 5)n −(3 − √ 5)n is an even multiple of √ 5. Subtract the nth expression from the (n + 1)th in both cases... 5.4.7 (Hard!) Return to the proof of Theorem 5.10. Can you make a watertight argument using strong induction that also covers the two missing cases? Draw a picture to illustrate each case. 19Strictly this is what it means for p to be irreducible. In the ring of integers, prime and irreducible are synonymous. For the details, take a Number Theory course. 96 6 Set Theory, Part II In this chapter we return to set theory, where we consider more-advanced constructions. 6.1 Cartesian Products You have been working with Cartesian products for years, referring to a point in the plane R2 by its Cartesian co-ordinates (x, y). The basic idea is that each of the co-ordinates x and y is a member of the set R. Definition 6.1. Let A and B be sets. The Cartesian product of A and B is the set A × B = {(a, b) : a ∈A and b ∈B}. 
A × B is exactly the set of ordered pairs (a, b). Examples. 1. The Cartesian product of the real line R with itself is the xy-plane: rather than writing R × R which is unwieldy, we write R2. R2 = R × R = {(x, y) : x, y ∈R}. More generally, Rn = R × R × · · · R | {z } n times is the set of n-tuples of real numbers: Rn = {(x1, x2, . . . , xn) : x1, x2, . . . , xn ∈R}. 2. Suppose you go to a restaurant where you have a choice of one main course and one side. The menu might be summarized set-theoretically: consider the sets Mains = {fish, steak, eggplant, pasta} Sides = {asparagus, salad, potatoes} The Cartesian product Mains×Sides is the set of all possible meals made up of one main and one side. It should be obvious that there are 4 × 3 = 12 possible meal choices. This last example illustrates the following theorem. Indeed it partly explains the use of the word product in the definition. Theorem 6.2. If A and B are finite sets, then |A × B| = |A| · |B|. 97 Proof. Label the elements of each set and list the elements of A × B lexicographically. If |A| = m and |B| = n, then we have: (a1, b1) (a1, b2) (a1, b3) · · · (a1, bn) (a2, b1) (a2, b2) (a2, b3) · · · (a2, bn) . . . . . . . . . . . . (am, b1) (am, b2) (am, b3) · · · (am, bn) It should be clear that every element of A × B is listed exactly once. There are m rows and n columns, thus |A × B| = mn. Before we go any further, consider the complement of a Cartesian product A × B. If you had to guess an expression for (A × B)C, you might well try AC × BC. Let us think more carefully. (x, y) ∈(A × B)C ⇐ ⇒(x, y) ̸∈A × B ⇐ ⇒¬((x, y) ∈A × B) ⇐ ⇒¬(x ∈A and y ∈B) ⇐ ⇒x ̸∈A or y ̸∈B Since the definition of Cartesian product involves and, its negation, by De Morgan’s laws, involves or. It follows that the complement of a Cartesian product is not a Cartesian product! As an example of a basic set relationship involving Cartesian products, we prove a theorem. Theorem 6.3. Let A, B, C, D be sets. Then (A × B) ∪(C × D) ⊆(A ∪C) × (B ∪D). Proof. Since we are dealing with Cartesian products, the general element has the form (x, y). Let (x, y) ∈(A × B) ∪(C × D). Then (x, y) ∈A × B or (x, y) ∈C × D. But then (x ∈A and y ∈B) or (x ∈C and y ∈D). Clearly x ∈A or x ∈C, so x ∈A ∪C. Similarly y ∈B or y ∈D, so y ∈B ∪D. Therefore (x, y) ∈(A ∪C) × (B ∪D), as required. A C B D The picture is an imagining of the theorem, where we assume that the sets A, B, C and D are all intervals of real numbers. (A × B) ∪(C × D) is the yellow shaded region, while (A ∪C) × (B ∪D) 98 is the larger dashed square. Be careful with pictures! The theorem is a statement about any sets, whereas the picture implicitly assumes that these sets are intervals. While helpful, the picture is not a proof! Either by carefully reading the proof or by thinking about the picture, you should be convinced that the two sets in the theorem are not equal (in general): if x ∈(A \ C) and y ∈D, then (x, y) is an element of the right hand side, but not the left. Is it clear where the point (x, y) lives in the picture? Exercises 6.1.1 Consider the following subintervals of the real line: A = [2, 5], B = (0, 4). (a) Express the set (A \ B)C in interval notation, as a disjoint union of intervals. (b) Draw a picture of the set (A \ B)C × (B \ A). 6.1.2 Rewrite the condition (x, y) ∈(AC ∪B) × (C \ D) in terms of (some of) the following propositions: x ∈A, x ̸∈A, x ∈B, x ̸∈B, y ∈C, y ̸∈C, y ∈D, y ̸∈D. 6.1.3 Let A = [1, 3], B = [2, 4] and C = [2, 3]. Prove or disprove that (A × B) ∩(B × A) = C × C. 
Hint: Draw the sets A × B, B × A and C × C in the Cartesian plane. The picture will give you a hint on whether or not the statement is true, but it does not constitute a proof. 6.1.4 A straight line subset of the plane R2 is a subset of the form Aa,b,c = {(x, y) : ax + by = c}, for some constants a, b, c, with ab ̸= 0. (a) Draw the set A1,2,3. Is it a Cartesian product? (b) Which straight line subsets in the plane R2 are Cartesian products? Otherwise said, find a condition on the constants a, b, c for which the set Aa,b,c is a Cartesian product. 6.1.5 Draw a picture, similar to that in Theorem 6.3, which illustrates the fact that (A × B)C ̸= AC × BC. Using your picture, write the set (A × B)C in the form (C1 × D1) ∪(C2 × D2) ∪· · · where each of the unions are disjoint: that is i ̸= j = ⇒(Ci × Di) ∩(Cj × Dj) = ∅. You don’t have to prove your assertion. 6.1.6 Let E ⊆N × N be the smallest subset which satisfies the following conditions: • Base case: (1, 1) ∈E 99 • Generating Rule I: If (a, b) ∈E then (a, a + b) ∈E • Generating Rule II: If (a, b) ∈E then (b, a) ∈E (a) Show in detail that (4, 3) ∈E. (b) Show by induction that for every n ∈N, (1, n) ∈E. (c) (Very hard!!!) Show that E = {(a, b) ∈N × N : gcd(a, b) = 1}. Think carefully about how the Euclidean algorithm works, and what the generating rules might have to do with it... 6.1.7 A strict set-theoretic definition requires you to build the ordered pair (a, b) as a set: typically (a, b) = {a, {a, b}}. One then proves that (a, b) = (c, d) ⇐ ⇒a = c and b = d. (a) One of the axioms of set theory (regularity) says that there is no set a for which a ∈a. Use this to prove that the cardinality of (a, b) = {a, {a, b}} is two. (b) Prove that (a, b) = (c, d) = ⇒      a = c and b = d, or a = {c, d} and c = {a, b}. (c) In the second case, prove that there exists a set S such that a ∈S ∈a. The axiom of regularity also says that this is illegal. Conclude that (a, b) = (c, d) ⇐ ⇒a = c and b = d. 100 6.2 Power Sets Given a set A, it is often useful to consider the collection of all of the subsets of A. Indeed, we want to call this collection a set. Definition 6.4. The power set of A is the set P(A) of all subsets of A. That is, P(A) = {B : B ⊆A}. Otherwise said: B ∈P(A) ⇐ ⇒B ⊆A. Examples. 1. Let A = {1, 3, 7}. Then A has the following subsets, listed by how many elements are in each subset. 0-elements: ∅ 1-element: {1}, {3}, {7} 2-elements: {1, 3}, {1, 7}, {3, 7} 3-elements: {1, 3, 7} Gathering these together, we have the power set: P(A) = n ∅, {1}, {3}, {7}, {1, 3}, {1, 7}, {3, 7}, {1, 3, 7} o . 2. Consider B = n 1, {2}, 3 o . It is essential that you use different size set brackets to prevent confusion. B has only two elements, namely 1 and {2}, 3 . We can gather the subsets of B in a table. 0-elements: ∅ 1-element: {1}, n{2}, 3 o 2-elements: n 1, {2}, 3 o In the second line, remember that to make a subset out of a single element you must surround the element with set brackets. Thus 1 ∈B = ⇒{1} ⊆B and {2}, 3 ∈B = ⇒ n{2}, 3 o ⊆B. The power set of B is therefore P(B) =  ∅, {1}, n{2}, 3 o , n 1, {2}, 3 o . Notation Be absolutely certain that you understand the difference between ∈and ⊆. It is easy to become confused when considering power sets. In the context of the previous example, here are eight propo-sitions. Which are true and which are false?20 (a) 1 ∈A (b) 1 ∈P(A) (c) {1} ∈A (d) {1} ∈P(A) (e) 1 ⊆A (f) 1 ⊆P(A) (g) {1} ⊆A (h) {1} ⊆P(A) 20Only (a), (d), and (g) are true. Make sure you understand why! 
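The eight propositions can also be tested mechanically. The Python sketch below is an illustration only: the helper power_set and the use of frozenset (so that sets may themselves be elements of sets) are our own choices. In Python, the operator in plays the role of ∈ and <= the role of ⊆.

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, each returned as a frozenset."""
    items = list(s)
    subsets = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))
    return {frozenset(c) for c in subsets}

A = frozenset({1, 3, 7})
PA = power_set(A)

print(len(PA))                  # 8, matching the list of subsets above
print(1 in A)                   # (a) True:  1 is an element of A
print(1 in PA)                  # (b) False: the elements of P(A) are sets, not numbers
print(frozenset({1}) in PA)     # (d) True:  {1} is a subset of A, hence an element of P(A)
print(frozenset({1}) <= A)      # (g) True:  {1} ⊆ A
print(frozenset({1}) <= PA)     # (h) False: the element of P(A) is {1}, not 1
```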
101 As a further exercise in being careful with notation, consider the following theorem. Theorem 6.5. If A ⊆B, then P(A) ⊆P(B). Proof. Suppose that A ⊆B and let C ∈P(A). We must show that C ∈P(B). By definition, C ∈P(A) = ⇒C ⊆A. Since subset inclusion is transitive (Theorem 4.4), we have C ⊆A ⊆B = ⇒C ⊆B. This says that C ∈P(B). Therefore P(A) ⊆P(B). It is very easy to get confused by this theorem. Exercises 6.2.4 and 6.2.5 discuss things further. Cardinality and Power Sets Let’s investigate how the cardinality of a set and its power set are related. Consider a few basic examples where we list all of the subsets, grouped by cardinality. Set A 0-elements 1-element 2-elements 3-elements |P(A)| ∅ ∅ 1 {a} ∅ {a} 1 + 1 = 2 {a, b} ∅ {a}, {b} {a, b} 1 + 2 + 1 = 4 {a, b, c} ∅ {a}, {b}, {c} {a, b}, {a, c}, {b, c} {a, b, c} 1 + 3 + 3 + 1 = 8 You should have seen this pattern before: we are looking at the first few lines of Pascal’s Triangle. It should be no surprise that if |A| = 4, then |P(A)| = 1 + 4 + 6 + 4 + 1 = 16. The progression 1, 2, 4, 8, 16, . . . in the final column immediately suggests the following theorem. Theorem 6.6. Suppose that A is a finite set. Then |P(A)| = 2|A|. How are we supposed to prove such a theorem for all sets at once? The trick is to think about all n-element sets simultanously, and prove by induction on the cardinality of A. The basic idea is that every set with n + 1 elements is the disjoint union of a set with n elements and a single-element set. The induction step is essentially the observation that an n + 1-element set B has twice the number of subsets of some n-element set A. It is instructive to see an example of this before writing the proof. Example. Let B = {1, 2, 3}. Now choose the element 3 ∈B and delete it to create the smaller set A = {1, 2} = B \ {3}. We can split the subsets of B into two groups: those which contain 3 and those which do not. In the following table we list all of the subsets of B. In the first column are those subsets X which do not 102 contain 3. These are exactly the subsets of A. In the second column are the subsets Y = X ∪{3} of B which do contain 3. X X ∪{3} ∅ {3} {1} {1, 3} {2} {2, 3} {1, 2} {1, 2, 3} It is clear that B has twice the number of subsets of A. This method of pairing is exactly mirrored in the proof. Proof. We prove by induction. For each n ∈N0, let Q(n) be the proposition |A| = n = ⇒|P(A)| = 2n. (Base Case) If n = 0, then A = ∅(Theorem 4.4). But then P(A) = {∅}, whence |P(A)| = 1 = 20. Therefore Q(0) is true. (Induction Step) Fix n ∈N0 and assume that Q(n) is true. That is, assume that any set with n elements has 2n subsets. Now let B be any set with n + 1 elements. Choose one of the elements b ∈B and define A = B \ {b}. Subsets of B are of two types: 1. Subsets X ⊆B which do not contain b. 2. Subsets Y ⊆B which contain b. In the first case, X is really a subset of A. Since |A| = n, the induction hypothesis Q(n) tells us that there are 2n subsets X of this type. In the second case, we can write Y = X ∪{b}, where X is again a subset of A. Since there are 2n subsets X, it follows that there are 2n subsets Y ⊆B of this form. Therefore |P(B)| = 2n + 2n = 2n+1. By induction, Q(n) is true for all n ∈N0. Once you understand the proof, you should compare it to the proof of Theorem 5.10 on the in-terior angles of a polygon. The idea is very similar. Exercise 6.2.8 gives an alternative proof of this result. As a final example, we consider the interaction of power sets and Cartesian products. Suppose that A = {a} and B = {b, c}. 
Then A × B = {(a, b), (a, c)}. The power set P(A × B) therefore contains 22 = 4 elements: indeed P(A × B) = n ∅, {(a, b)}, {(a, c)}, {(a, b), (a, c)} o . 103 The power sets of A and B have 2 and 4 elements respectively: P(A) = {∅, {a}}, P(B) = {∅, {b}, {c}, {b, c}}. The Cartesian product of the power sets therefore has 2 × 4 = 8 elements: P(A) × P(B) = n∅, ∅  , ∅, {b}  , ∅, {c}  , ∅, {b, c}  , {a}, ∅  , {a}, {b}  , {a}, {c}  , {a}, {b, c} o . It should be clear from this example not only that P(A × B) ̸= P(A) × P(B), but that the elements of the two sets are completely different. The elements of P(A × B) are sets of ordered pairs, while the elements of P(A) × P(B) are ordered pairs of sets. Exercises 6.2.1 Find P(A) and |P(A)| for the following: (a) A = {1, 2}. (d) A = {∅, 1, {a}}. (b) A = {1, 2, 3}. (e) A = n 1, 2 , 3,  4, {5} o . (c) A = (1, 2), (2, 3) . (f) A = n (1, 2), 3, 4, {5} o . 6.2.2 Let A = {1, 3} and B = {2, 4}. (a) Draw a picture of the set A × B. (b) Compute P(A × B). (c) What is the cardinality of P(A) × P(B)? Don’t compute the set! 6.2.3 Determine whether the following statements are true or false (in (b), the symbol ⊊means ‘proper subset’). Justify your answers. (a) If {7} ∈P(A), then 7 ∈A and {7} / ∈A. (b) Suppose that A, B and C are sets such that A ⊊P(B) ⊊C and |A| = 2. Then |C| can be 5, but |C| cannot be 4. (c) If a set B has one more element than a set A, then P(B) has at least two more elements than P(A). (d) Suppose that the sets A, B, C and D are all subsets of {1, 2, 3} with cardinality two. Then at least two of these sets are equal. 6.2.4 Here are three incorrect proofs of Theorem 6.5. Explain why each fails. (a) Let x ∈P(A). Then x ∈A. Since A ⊆B, we have x ∈B. Therefore x ∈P(B), and so P(A) ⊆P(B). (b) Let A = {1, 2} and B = {1, 2, 3}. Then P(A) = {∅, {1}, {2}, A}, and P(B) = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, B}. Thus P(A) ⊆P(B). 104 (c) Let x ∈A. Since A ⊆B, we have x ∈B. Since x ∈A and x ∈B, we have {x} ∈P(A), and {x} ∈P(B). 6.2.5 Consider the converse of Theorem 6.5. Is it true or false? Prove or disprove your conjecture. 6.2.6 (a) Prove that P(A) ∪P(B) ⊆P(A ∪B). Provide a counter-example to show that we do not expect equality. (b) Does anything change if you replace ∪with ∩in part (a)? Justify your answer. 6.2.7 Consider the proof of Theorem 6.6. Let B be a set with n + 1 elements, let b ∈B and let A = B \ {b}. Prove that the function f : P(A) × {1, 2} →P(B) defined by f (X, 1) = X, f (X, 2) = X ∪{b} is a bijection, and that consequently, by Theorem 4.12, |P(A) × {1, 2}| = |P(B)|. 6.2.8 We use the following notation for the binomial coefficient:21 (n r) = n! r!(n−r)!. This symbol denotes the number of distinct ways one can choose r objects from a set of n objects. (a) Prove directly, use the definition of the binomial coefficient, that If 1 ≤r ≤n, then n + 1 r  = n r  +  n r −1  . (b) Prove by induction that ∀n ∈N, n ∑ r=0 (n r) = 2n. You will need part (a) in the induction step. (c) Explain why part (b) provides an alternative proof of Theorem 6.6. If you found this easy, try proving the binomial theorem: ∀n ∈N, (x + y)n = n ∑ r=0 (n r)xryn−r. 21You may have seen this written nCr, or nCr, where the C stands for combination. 105 6.3 Indexed Collections of Sets An indexed family of sets is a collection of sets An, one for each n in some indexing set I. It is very often the case that I = N or Z. If I is some other set, for example the real numbers R, the label for the index may be chosen accordingly: e.g. Ax ⊆R. Definition 6.7. 
Given a family of indexed sets A = {An : n ∈I}, we may form the union and intersection of the collection: ∪A = [ n∈I An = {x : x ∈An for some n ∈I}, ∩A = \ n∈I An = {x : x ∈An for all n ∈I}. Otherwise said, x ∈ [ n∈I An ⇐ ⇒∃n ∈I such that x ∈An x ∈ \ n∈I An ⇐ ⇒∀n ∈I we have x ∈An A collection A = {An : n ∈I} is pairwise disjoint if Am ∩An = ∅whenever m ̸= n. When the indexing set is N or Z, it is also common to write, for example, S n∈N An as ∞ S n=1 An. The following Theorem is almost immediate given the definitions of union and intersection: can you supply a formal proof? Theorem 6.8. Let A = {An : n ∈I} and let m ∈I. Then Am ⊆ [ n∈I An and \ n∈I An ⊆Am. Examples. 1. For each n ∈N, let An = [−n, n]. Each of the sets An is a closed interval. E.g., A1 = [−1, 1], A2 = [−2, 2], A3 = [−3, 3]. It should be clear that n ≤m = ⇒An ⊆Am. We therefore have a nested sequence of sets: A1 ⊆A2 ⊆A3 ⊆· · · It follows immediately that T n∈N An = A1 = [−1, 1]. The union is a little harder. With a little thinking you might hypothesize S n∈N An = R. This is indeed the case, but to prove it we need to return to the definition. Since every interval An is a subset of R, we automatically have S n∈N An ⊆R. All that remains is to see that R ⊆ S n∈N An. Let x ∈R. We must show that ∃n ∈N such that x ∈An. We construct n explicitly using the 106 ceiling function.22 If x ≥0, then x ≤⌈x⌉, whence x ∈A⌈x⌉. Similarly, if x < 0, then x ∈A⌈−x⌉. For example, −3.124 ∈A⌈3.124⌉= A4. It follows that all real numbers x are in at least one of the sets An, and so S n∈N An = R. 2. Let An = (n, n + 1] ⊆R, for each n ∈Z. For example, A3 = (3, 4], and A−17 = (−17, −16]. In this case the sets An are pairwise disjoint, and we have [ n∈Z An = R, and \ n∈Z An = ∅. 3. For each n ∈N, let An = {x ∈R : x2 −1 < 1 n}. Before computing the union and intersection of these sets, it is helpful to write each set as a pair of intervals. Note that x2 −1 < 1 n ⇐ ⇒−1 n < x2 −1 < 1 n ⇐ ⇒ r 1 −1 n < |x| < r 1 + 1 n. Therefore An =  − q 1 + 1 n, − q 1 −1 n  ∪ q 1 −1 n, q 1 + 1 n  . As the picture suggests, the sets An are nested: A1 ⊇A2 ⊇A3 ⊇· · · ) ( ) ( √ 2 − √ 2 0 A1 ) ( ) ( q 3 2 − q 3 2 q 1 2 − q 1 2 A2 ) ( ) ( q 4 3 − q 4 3 q 2 3 − q 2 3 A3 ) ( ) ( A4 ) ( ) ( A5 Since A1 is the largest of the nested sets, we see that S n∈N An = A1 = (− √ 2, 0) ∪(0, √ 2). For the intersection, note that ∀n ∈N, x ∈An ⇐ ⇒∀n ∈N, x2 −1 < 1 n ⇐ ⇒x2 −1 = 0. It follows that T n∈N An = {1, −1}. 22The ceiling ⌈x⌉is the smallest integer greater than or equal to x. For example ⌈3.124⌉= 4. The ceiling function is simply the concept of ‘rounding up’ written in mathematical language. The corresponding function for ‘rounding down’ is the floor: ⌊x⌋is the greatest integer less than or equal to x. 107 Don’t take Limits! Here we dissect an extremely important example. For each n ∈N, define the interval An = h 0 , 1 n  . Let us analyze the collection {An : n ∈N}. First observe that m ≤n = ⇒ 1 n ≤1 m = ⇒An ⊆Am, so that the sets are nested: A1 ⊇A2 ⊇A3 ⊇· · · The union is therefore the largest interval A1, ∞ [ n=1 An = A1 = [0, 1). Before considering the full intersection, we first compute a finite intersection. Since the sets An are nested, it follows that any finite intersection is simply the smallest of the listed sets: i.e., for any constant m ∈N we have m \ n=1 An = Am = h 0 , 1 m  . Observe that this is non-empty for every m. What about the infinite intersection? 
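Before reasoning this out, it is worth probing the question numerically. The Python sketch below is purely illustrative (the sample points and the cutoff are arbitrary choices): it tests whether a few values of x lie in An = [0, 1/n) for every n up to a large cutoff. Notice how misleading such a test can be: 10^-9 survives every check up to n = 10^6, yet it fails once n reaches 10^9. A finite computation can suggest an answer, but it cannot settle an infinite intersection.

```python
def in_A(x, n):
    """Membership test for A_n = [0, 1/n)."""
    return 0 <= x < 1 / n

cutoff = 10**6
for x in [0, 1e-9, 0.01, 2/9, 0.5]:
    survives = all(in_A(x, n) for n in range(1, cutoff + 1))
    print(f"x = {x}: in A_n for all n <= {cutoff}? {survives}")
# Only x = 0 and x = 1e-9 pass this finite test, and 1e-9 is a false positive:
# it leaves the intersection as soon as 1/n drops to 1e-9.
```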
You might be tempted to take a limit and make an argument such as ∞ \ n=1 An = lim m→∞ m \ n=1 An = lim m→∞ h 0 , 1 m  = [0, 0). Quite apart from the issue that [0, 0) is ugly and could only mean the empty set, we should worry about whether this is a legitimate use of limits. It isn’t! Moreover, the attempt to use limits produces an incorrect conclusion: the intersection is in fact non-empty, and we claim the following. Theorem 6.9. ∞ T n=1 An = {0}. Before we give a formal proof, it is instructive to see a calculation. Let us show, for example, that 2 9 ̸∈ ∞ T n=1 An. To prove that 2 9 is not in the intersection of all the An, it is enough to exhibit a single integer m such that 2 9 ̸∈Am. The picture shows that we can choose m = 10: since 1 10 < 2 9, we have 2 9 ̸∈[0, 1 10] = A10. Since 2 9 ̸∈A10, we conclude that 2 9 ̸∈ ∞ T n=1 An. ) [ | 0 1 10 2 9 A10 108 Proof. We will prove that x ∈ ∞ T n=1 An = ⇒x = 0. Suppose that x ∈ ∞ T n=1 An. Then x ∈ 0 , 1 n  for all n. Otherwise said, ∀n ∈N, we have 0 ≤x < 1 n. Certainly x = 0 satisfies these inequalities. Suppose, for a contradiction, that x > 0. Since lim n→∞ 1 n = 0, we can certainly choosea N large enough so that 1 N ≤x. A contradiction. Thus the intersection contains no positive elements, and we conclude that ∞ \ n=1 An = {0}. aExplicitly, you may choose choose N = ⌈1 x ⌉, or anything larger. The outcome of this discussion depends crucially on whether the ends of the intervals An are open or closed. Consider each of the following modifications in turn. How would the argument for computing each intersection differ from what we did above? • If Bn =  0 , 1 n  , then ∞ T n=1 Bn = ∅. • If Cn =  0 , 1 n i , then ∞ T n=1 Cn = ∅. • If Dn = h 0 , 1 n i , then ∞ T n=1 Dn = {0}. The moral of these examples is that you cannot na¨ ıvely apply limits to sets. Be very careful with infinite unions and intersections, for your intuition can easily lead you astray. Finite Decimals Here is another example where ‘taking the limit’ is the incorrect thing to do. This time it is the union that forces us to be careful. For each n ∈N, let An = {decimals 0.a1a2 . . . an of length n}, where each ai ∈{0, 1, 2, . . . , 9}. For example 0.134 ∈A3. Since 0.134 = 0.1340, we also have 0.134 ∈A4. Once again we have nested intervals m ≤n = ⇒Am ⊆An, whence the infinite intersection is simply \ n∈N An = A1 = {0, 0.1, . . . , 0.9}. 109 Consider first a finite union: if m ∈N, then m [ n=1 An = Am = {x ∈[0, 1) : x has a decimal representation of length ≤m}. If one were to take the limit as m →∞of the property ‘length m decimal,’ it seems like the infinite union should be the whole23 interval [0, 1]. This is another incorrect application of limits: one cannot take the limit of a property! Instead we use the definiton: x ∈ [ n∈N An ⇐ ⇒∃n ∈N such that x ∈An ⇐ ⇒∃n ∈N such that x is a decimal of length n. It follows that [ n∈N An =  x ∈[0, 1) : x has a finite decimal representation Not only does this mean that there are no irrational numbers in S n∈N An, but many rational numbers are also excluded. For example 1 3 = 0.3333 · · · is not in any set An and is therefore not in the union. Indexed Unions: Don’t Confuse Sets and Elements It is easy to confuse, but important to distinguish between the sets A = {An : n ∈I} and ∪A = [ n∈I An. A is a set whose elements are themselves sets. The second is the collection of all elements in any set An. Consider the following examples. Examples. 1. For each n ∈{1, 2, 3}, let An be the plane {(x, y, z) : x + ny + n2z = 1} ⊆R3. 
A = {A1, A2, A3} has three elements: each of the planes A1, A2, A3 is an object in its own right. The union ∪A = A1 ∪A2 ∪A3 is an infinite set consisting of all the points on the three planes. For the intersection, a little work with simultaneous equations should convince you that (x, y, z) ∈ \ n∈{1,2,3} An ⇐ ⇒      x + y + z = 1 x + 2y + 4z = 1 x + 3y + 9z = 1 ⇐ ⇒(x, y, z) = (1, 0, 0). The planes are drawn below. 2. For each m ∈R ∪{∞}, let Am be the line24 through the origin in R2 with gradient m. Each element of A is a line: there is one for each possible direction through the origin. ∪A is all of the points that lie on any line through the origin. Since every point can be joined to the origin with a straight line, the set ∪A = R2 consists of all points in the plane. 23We would include 1 = 0.9999 · · · 24We include the vertical line A∞. 110 It should be clear that all the lines intersect at the origin, and so ∩A = {(0, 0)}. The collection of lines A = {Am : m ∈R ∪{∞}} is the famous projective space P(R2); this is a very different set from R2! This example also shows that indexing sets don’t have to be simple sets of integers. It is also possible to index the same set using I = [0, π). If we define Bθ to be the line through the origin making an angle θ with the positive x-axis, we would then have Bθ = Atan θ. Example 1: Three elements, or an infinite number? A1 A2 A4 A0.5 A−3 A−0.75 A−0.2 A0 A∞ Example 2: Elements in P(R2) Aside: The Cantor Set For a bit of fun, we can use infinite intersections to create self-similar sets, or fractals. Here is a famous example: the Cantor middle-third set. Construct a sequence of sets Cn for n ∈N0 by repeatedly removing the middle third of each of the intervals at each step, starting with [0, 1]. C0 = [0, 1], C1 = [0, 1 3] ∪[ 2 3, 1], C2 = [0, 1 9] ∪[ 2 9, 1 3] ∪[ 2 3, 7 9] ∪[ 8 9, 1], etc. 0 1 3 2 3 1 The sequence is drawn up to C9, with an animation below. To see the detail for the last few sets, try zooming in as far as you can. Define the Cantor set C to be the infinte intersection C = ∞ T n=0 Cn. This set has several interesting properties. Zero Measure (length) Intuitively, the length of a set of real numbers is the sum of the lengths of all the intervals contained in the set. Since we start with the interval [0, 1] and remove a third of the set each time, it should be clear that length(C0) = 1, length(C1) = 2 3, length(C2) = 2 3 2 , etc. Induction then gives us length(Cn) = 2 3 n . 111 0 1 3 2 3 1 C0 As n →∞this goes to zero, so the Cantor set contains no intervals: it is purely made up of individual points. This at least seems reasonable from the picture. Non-emptiness The Cantor set C contains the endpoints of every interval removed at any stage of its construction. In particular, 1 3n ∈C for all n ∈N0, and so C is an infinite set. Indeed it is more than merely infinite, it is uncountably so, as we shall see in Chapter 8. Self-similarity If C 3 means ‘take all the numbers in the set C and divide them by three,’ and C 3 + 2 3 means ‘take all the numbers in C 3 and add 2 3 to them,’ then C = C 3 ∪ C 3 + 2 3  . (∗) Otherwise said, C is made up of two shrunken copies of itself, a classic property of fractals. If you were to zoom into the Cantor set far enough that you couldn’t see the whole set, you would not know what the scale was. In the following animation we are repeatedly zooming in on the second (of four) groups of points. To get further with the Cantor set, it is necessary to understand exactly what the elements of the set are. 
This can be accomplished using the ternary representation. It can be shown that every number x ∈[0, 1] may be written in the form25 x = [0.a1a2a3 · · · ]3 = ∞ ∑ n=1 an · 3−n = a1 3 + a2 32 + a3 33 + · · · where each an ∈{0, 1, 2}. For example: [0.12]3 = 1 3 + 2 32 = 5 9, 64 243 = 2 32 + 1 33 + 1 35 = [0.02101]3, 1 = [0.22222 · · · ]3. For this last, use the formula for the sum of a geometric series to calculate ∞ ∑ n=1 2 1 3 n = 2 · 1/3 1−1/3 = 1. The only possibility whereby x can have two ternary expansions is if one of them terminates. The other will eventually become a sequence of repeating 2’s. For example:26 [0.0222222 · · · ]3 = [0.1]3 = 1 3 and [0.10122222 · · · ]3 = [0.102]3 = 1 3 + 2 27 = 11 27. Theorem 6.10. Cn is the set of all numbers x ∈[0, 1] with a ternary expansion whose first n digits are only 0 or 2. It follows that C is exactly the set of x ∈[0, 1] with a ternary expansion containing only 0 and 2. 25Analogous to a decimal representation x = ∞ ∑ n=1 an · 10−n = a1 10 + a2 102 + a3 103 + · · · where an ∈{0, 1, 2, . . . , 9}. 26This is ticklish to prove, as is the corresponding result for decimals: consider 1 = 0.99999999 · · · 112 Proof. We prove by induction. (Base Case) The proposition is clearly true for C0 = [0, 1], as there is nothing to check. (Induction Step) Assume that the proposition is true for some fixed n ∈N0. Analogously to (∗) above, observe that Cn+1 is built from two shrunken copies of Cn: Cn+1 = 1 3Cn ∪ 1 3Cn + 2 3  . Multiplication by 1 3 shifts a ternary representation one position to the right.a Addition of 2 3 adds [0.2]3 to the representation, inserting 2 in the (now empty) first ternary place. Thus if Cn contains only 0’s and 2’s in its first n entries, Cn+1 contains only 0’s and 2’s in its first n + 1 entries. By induction the proposition is true for all n ∈N. aCompare to multiplication of a decimal by 1 10. As the Theorem shows, the Cantor set contains a lot of elements. For example: [0.020202020 · · · ]3 = 2 ∞ ∑ n=1 3−2n = 2/9 1 −1/9 = 1 4 ∈C. What is strange is that 1 4 is not the endpoint of any of the open intervals deleted during the construc-tion of C, and yet we’ve already established that C contains no intervals! Cantor introduced his set precisely because it was so challenging to the traditional concept of size: C seems to simultaneously have very few elements and enormously many. Generalizations and related concepts include Cantor dust C × C, the Sierpi´ nski carpet and gasket, and the von Koch snowflake. Exercises 6.3.1 For each integer n, consider the set Bn = {n} × R. (a) Draw a picture of 4 S n=2 Bn (in the Cartesian plane). Hint: 4 S n=2 Bn = B2 ∪B3 ∪B4. (b) Draw a picture of the set C = [1, 5] × {−2, 2}. Careful! [1, 5] is an interval, while {−2, 2} is a set containing two points. (c) Compute  4 S n=2 Bn  ∩C. (d) Compute 4 S n=2 (Bn ∩C). (e) Compare  4 S n=2 Bn  ∩C and 4 S n=2 (Bn ∩C). What do you notice? 113 6.3.2 For each real number r, define the interval Sr = [r −1, r + 3]. Let I = {1, 3, 4}. Determine S r∈I Sr and T r∈I Sr. 6.3.3 Give an example of four different subsets A, B, C and D of {1, 2, 3, 4} such that all intersections of two subsets are different. 6.3.4 For each of the following collections of intervals, define an interval An for each n ∈N such that indexed collection {An}n∈N is the given collection of sets. Then find both the union and intersection of the indexed collections of sets. (a) {[1, 2 + 1), [1, 2 + 1 2), [1, 2 + 1 3), . . .} (b) {(−1, 2), (−3 2, 4), (−5 3, 6), (−7 4, 8), . . 
.} (c) {( 1 4, 1), ( 1 8, 1 2), ( 1 16, 1 4), ( 1 32, 1 8), ( 1 64, 1 16), . . .} 6.3.5 For each real number x, let Ax = {3, −2} ∪{y ∈R : y > x}. Find S x∈R Ax and T x∈R Ax. 6.3.6 In Example 2 on page 107, give a formal proof using the ceiling function that S n∈Z An = R. 6.3.7 Use Definition 6.7 to prove the following results about nested sets. (a) A1 ⊇A2 ⊇A3 ⊇· · · = ⇒ S n∈N An = A1. (b) A1 ⊆A2 ⊆A3 ⊆· · · = ⇒ T n∈N An = A1. 6.3.8 Let C0(R) denote the set of continuous functions f : R →R which satisfy f (0) = 0. Let A f = {x ∈[0, 1] : f (x) = 0} (so, for example, if f : R →R, x 7→x(2x −1), then A f = {0, 1 2}). Prove that [ f ∈C0(R) A f = [0, 1] and \ f ∈C0(R) A f = {0}. 6.3.9 Let An be the set of decimals of length n, as described on page 109. (a) Prove directly that the cardinality of An is 10n. (b) Prove by induction that |An| = 10n. (c) Prove that ∞ S n=1 An ⊆Q. (d) Prove by contradiction that 1 3 ̸∈ ∞ S n=1 An. 6.3.10 Suppose that the following are true: • ∀n ∈N, An ̸= ∅. • m ≥n = ⇒Am ⊆An. 114 Prove or disprove the following conjectures: (a) 293 S n=1 An ̸= ∅ (b) 293 T n=1 An ̸= ∅ (c) S n∈N An ̸= ∅ (d) T n∈N An ̸= ∅ 6.3.11 (Hard) Let An = { m n ∈Q : 0 < m < n, m ∈N}, for each n ∈N. (a) Write down A1, A2, A3, A4 explicitly. (b) Prove that Am ⊆Apm for any p ∈N. (c) Argue that S n∈N An = Q ∩(0, 1). (d) Argue that further S n∈N A2n = Q ∩(0, 1). (e) Extend your proof to show that, for any fixed p ∈N, S n∈N Apn = Q ∩(0, 1). 115 7 Relations and Partitions The mathematics of sets is rather basic, at least until one has a notion of how to relate elements of sets with each other. We are already familiar with examples of this: 1. The usual order of the natural numbers (e.g. 3 < 7) is a way of relating/comparing two elements of N. Recall that, as sets, order doesn’t matter: {3, 7} = {7, 3}. As ordered pairs however, (3, 7) ̸= (7, 3). 2. A function f : A →B relates elements in the set A with those in B. It turns out that the concept of ordered pair is essential to relating elements. 7.1 Relations Definition 7.1. Let A and B be sets. A (binary) relation R from A to B is a set of ordered pairs R ⊆A × B. A relation on A is a relation from A to itself. If (x, y) ∈R we can also write x R y, and say ‘x is related to y.’ Similarly x ̸R y means (x, y) ̸∈R. Examples. 1. R = {(1, 3), (2, 2), (2, 3), (3, 2), (4, 1), (5, 2)} is a relation from N to N. It is also a relation from {1, 2, 3, 4, 5} to {1, 2, 3}. 2. R = [1, 3) × (3, 4]  ∪ (2t + 1, t2) : t ∈[ 1 2, 2] is a relation from R to R. Be careful: it is easy to confuse interval notation with the notation for ordered pair! 3. The diagonal R = {(a, a) : a ∈A} is a relation on A, indeed (x, y) ∈R ⇐ ⇒x = y defines a relation on any set A. This example is where the term equivalence relation comes from. x R y ⇐ ⇒x = y simply says that R is ‘equals.’ 4. If A = {all humans}, we may define R ⊆A × A by (a1, a2) ∈R ⇐ ⇒a1, a2 have a parent-child, or a sibling relationship. In this example, the mathematical use of the word relation is identical to that in English. For example, I am related to my sister, and my mother is related to me. 5. If A is a set, then ⊆is a relation on the power set P(A). When R is a relation between sets of numbers, we can often graph the relation. Examples 1 and 2 above would be graphed as follows: 116 1 2 3 4 5 N 1 2 3 4 5 N Example 1. 0 1 2 3 4 5 R 0 1 2 3 4 5 R Example 2. Not all relations between sets of numbers can be graphed: for example, graphing the relation Q × Q is impossible! Definition 7.2. 
If R ⊆A × B is a relation, then its inverse R−1 ⊆B × A is the set R−1 = {(y, x) ∈B × A : (x, y) ∈R}. To find the elements of R−1, you simply switch the components of each ordered pair in R. Suppose A = B. We say that R is symmetric if R = R−1. The following results should seem natural, even if some of the proofs may not be obvious. Theorem 7.3. Given any relations R, S ⊆A × A: 1. (R−1)−1 = R 2. R ⊆S ⇐ ⇒R−1 ⊆S−1 3. (R ∪S)−1 = R−1 ∪S−1 4. (R ∩S)−1 = R−1 ∩S−1 5. R ∪R−1 is symmetric 6. R ∩R−1 is symmetric Proof. Here are two of the arguments. Try the others yourself. 2. Assume that R ⊆S, and suppose that (x, y) ∈R−1. We must prove that (x, y) ∈S−1. By the definition of inverse, (x, y) ∈R−1 = ⇒(y, x) ∈R = ⇒(y, x) ∈S = ⇒(x, y) ∈S−1. Therefore R−1 ⊆S−1. For the converse, suppose that R−1 ⊆S−1. Then, by an argument similar 117 to the above, we see that (R−1)−1 ⊆(S−1)−1. Now use 1. to see that R−1 ⊆S−1 = ⇒R ⊆S. 5. By 3, (R ∪R−1)−1 = R−1 ∪(R−1)−1 = R−1 ∪R = R ∪R−1, and so R ∪R−1 is symmetric. Be careful! Several parts of Theorem 7.3 look suspiciously similar to earlier results and it is easy to get confused. For example, 3. and 4. look almost like De Morgan’s laws, except that ∪and ∩do not switch over. This is why it is important to be able to prove and come up with examples of such statements. Suppose that you forget which result is correct: you might expect that (R ∪S)−1 =      R−1 ∪S−1 or R−1 ∩S−1. Now that you have two sensible guesses, you should be able to decide the correct one by thinking about examples and, if necessary, proving it! Example. Consider the example R = {(1, 3), (2, 2), (2, 3), (3, 2), (4, 1), (5, 2)} from earlier. This is clearly not symmetric since (1, 3) ∈R but (3, 1) / ∈R. We compute R−1 = {(3, 1), (2, 2), (3, 2), (2, 3), (1, 4), (2, 5)}, and observe that R ∩R−1 = {(2, 2), (2, 3), (3, 2)} and R ∪R−1 = {(1, 3), (3, 1), (2, 2), (2, 3), (3, 2), (4, 1), (1, 4), (5, 2), (2, 5)} are both symmetric. 1 2 3 4 5 N 1 2 3 4 5 N The relation R ∩R−1 1 2 3 4 5 N 1 2 3 4 5 N The relation R ∪R−1 The above pictures should confirm something intuitive: if you are able to graph a symmetric relation, then the graph will have symmetry about the line y = x. 118 Exercises 7.1.1 Draw pictures of the following relations on R. (a) R = {(x, y) : y ≤x and y ≤2 and y ≤2 −x}. (b) S = {(x, y) : (x −4)2 + (y −1)2 ≤9}. Also draw the inverse of each relation. 7.1.2 A relation is defined on N by a R b ⇐ ⇒ a b ∈N. Let c, d ∈N. Under what conditions is it permissable to write c R−1 d? 7.1.3 Let R ⊆{1, 2, 3, 4} × {1, 2, 3, 4} be the relation R = {(1, 3), (1, 4), (2, 2), (2, 4), (3, 1), (3, 2), (4, 4)}. (a) Compute R−1. (b) Compute the relations R ∪R−1 and R ∩R−1, and check that they are symmetric. 7.1.4 For the relation R = {(x, y) : x ≤y} defined on N, what is R−1? 7.1.5 Let A be a set with |A| = 4. What is the maximum number of elements that a relation R on A can contain such that R ∩R−1 = ∅? 7.1.6 Give formal proofs of the remaining cases (1, 3, 4 & 6) of Theorem 7.3. 119 7.2 Functions revisited Now that we have the language of relations, we can properly define functions. Recall that a function f : A →B is a rule that assigns one, and only one, element of B to each element of A. We may therefore view f as a collection of ordered pairs in A × B: {(a, f (a)) : a ∈A}. This set is nothing more than the graph of the function, and, being a set of ordered pairs, it is a relation. Definition 7.4. Let R ⊆A × B be a relation from A to B. 
The domain and range of R are the sets dom(R) = {a ∈A : (a, b) ∈R for some b ∈B}, range(R) = {b ∈B : (a, b) ∈R for some a ∈A}. A function from A to B is a relation f ⊆A × B satisfying the following conditions: 1. dom( f ) = A, 2. (a, b1), (a, b2) ∈f = ⇒b1 = b2. The two conditions can be thought of as saying: 1. Every element of A is related to at least one element of B. 2. Every element of A is related to at most one element of B. Putting these together, we see that a relation R ⊆A × B is a function if every a ∈A is the first entry of one (and only one) ordered pair (a, b) ∈R. The second condition is the vertical line test, familiar from calculus. B A a f (a) b1 = b2 = f (a): a function B A a b1 b2 b1 ̸= b2: not a function We can also think about injectivity and surjectivity (recall Definition 4.11) in this context. A func-tion f ⊆A × B is: • Injective if no two pairs in f share the same second entry. • Surjective if every b ∈B appears as the second entry of at least one pair in f. • Bijective if every b ∈B appears as the second entry of one (and only one) ordered pair (a, b) ∈f. 120 Definition 7.5. The inverse of a function f ⊆A × B is the inverse relation f −1 ⊆B × A. Since to compute the inverse relation we simply switch the components of each ordered pair, it should be clear that dom( f −1) = range( f ) and range( f −1) = dom( f ). In general, you should expect the inverse of a function to be merely a relation and not a function in its own right. Theorem 7.6 will discuss when the inverse relation is a function. The inverse of a function is usually written in set notation. If V ⊆B, then we defined the inverse image of V (or pull-back of V) by f −1(V) = {a ∈A : f (a) ∈V}. In particular, if b ∈B, then f −1({b}) = {a ∈A : f (a) = b}. Both are subsets of A. When f −1 is a function, each set f −1({b}) consists of a single point of A (one for each b ∈B). Only in this case are we entitled to write f −1(b) = a. Examples. 1. Let A = B = {1, 2, 3} and f = {(1, 3), (2, 1), (3, 3)}. Note that dom( f ) = {1, 2, 3} = A, and that each element of A appears exactly once as the first element in a pair (a, b) ∈f. This relation therefore satisfies both conditions necessary to be a function. In more elementary language we would write f (1) = 3, f (2) = 1, and f (3) = 3. f is not injective, since 3 appears twice as a second entry of an ordered pair in f. f is not surjective, since 2 never appears as the second entry of an ordered pair in f. The inverse relation f −1 = {(3, 1), (1, 2), (3, 3)} ⊆B × A is not a function by dint of failing both conditions in Definition 7.4. • dom( f −1) = {1, 3} is not the whole of B. • (3, 1) ∈f −1 and (3, 3) ∈f −1, but 1 ̸= 3. The graphs of f and f −1 are shown below. 1 2 3 N 1 2 3 N f : A →B 1 2 3 N 1 2 3 N f −1 ⊆B × A: not a function 121 2. Let A = B = R and f = {(x, x2) : x ∈R}. This is just the function f (x) = x2. The inverse is not a function: f −1 = (x2, x) : x ∈R = (y, ±√y) : y ≥0 , since, for example, f −1({4}) = {−2, 2} is not a single-element set. 1 2 3 4 R −2 −1 0 1 2 R f : A →B −2 −1 0 1 2 R 1 2 3 4 R f −1 ⊆B × A: not a function 3. Let A = B = R and f = {(x, x3) : x ∈R}. This is the function f (x) = x3. This time, the inverse is also a function, f −1(y) = 3 √y: f −1 = (x3, x) : x ∈R = (y, 3 √y) : y ∈R . −8 −4 4 8 R −2 −1 1 2 R f : A →B −2 −1 1 2 R −8 −4 4 8 R f −1 : B →A is a function 4. Let A = R, B = Q and f = (x, x) : x ∈Q ∪ (x, 0) : x ̸∈Q . Then f is a function f (x) = ( x if x ∈Q, 0 if x ̸∈Q. 
This is a surjective function since every element of B = Q appears as the second entry in an ordered pair (a, b) ∈f. It is not injective since zero appears more than once as the second entry of an ordered pair. For example, ( √ 2, 0), ( √ 3, 0) ∈f. Intuitively this is simply f ( √ 3) = f ( √ 2). The inverse relation f −1 is not a function; for example f −1({0}) is the set {0} ∪(R \ Q), not a single value. 122 These examples help to illustrate the following important theorem. Theorem 7.6. A relation f −1 ⊆B × A is a function ⇐ ⇒f is bijective (both injective and surjective). Proof. Recalling Definition 7.4, we see that f −1 is a function ⇐ ⇒      dom( f −1) = B, and (b, a1), (b, a2) ∈f −1 = ⇒a1 = a2. The first of these is equivalent to range( f ) = B, and says that f is surjective. The second is equivalent to (a1, b), (a2, b) ∈f = ⇒a1 = a2, which says that f is injective. Equality of functions There are two competing notions of equality of functions, dependent on what definition you take as fundamental. Same domain, same graph, same codomain f = g means that f and g are the same subset of the same A × B. This notion is preferred by set theorists because it sticks rigidly to the idea that a function is a relation, and it requires both the domain A and codomain B to be explicit. Same domain, same graph f = g means that f ⊆A × B, g ⊆A × C, and (a, b) ∈f ⇐ ⇒(a, b) ∈g. This notion considers fundamental the notion of what a function does, rather than its strict status as a relation; if two functions do the same thing to elements of the same domain then they are the same. This looser notion of equality is used more often. Unfortunately the second notion, while intuitive, has a problem. For example, let f : R →R, and g : R →[−1, 1] satisfy f (x) = g(x) = sin x. Although f and g have the same graph, the different codomains of f and g mean that these are differ-ent functions with respect to the first notion. Under the second notion, they are the same. However, g is surjective while f is not, so don’t we want f and g to be different?! The same problem does not arise when considering domains. For example, in calculus you might have compared functions such as f (x) = x2 + 2, and g(x) = (x2 + 2)(x −1) x −1 . The implied domains of these functions are dom( f ) = R and dom(g) = R \ {1}. Even though these functions have the same graph whenever both are defined, regardless of which notion you choose we have f ̸= g, since the functions have different domains. 123 Exercises 7.2.1 Suppose that f ⊆{1, 2, 3, 4} × {1, 2, 3, 4, 5, 6, 7} is the relation f = {(1, 1), (2, 3), (3, 5), (4, 7)}. (a) Show that f is a function f : {1, 2, 3, 4} →{1, 2, 3, 4, 5, 6, 7}. Can you find a concise formula f (x) to describe f? (b) Is f injective? Justify your answer. (c) Suppose that g ⊆{1, 2, 3, 4} × B is another relation so that the graphs of f and g are iden-tical: i.e. (a, f (a)) : a ∈{1, 2, 3, 4} = (a, g(a)) : a ∈{1, 2, 3, 4} . as sets. If g is a bijective function, what is B? 7.2.2 Decide whether each of the following relations are functions. For those which are, decide whether the function is injective and/or surjective. (a) R = {(x, y) ∈[−1, 1] × [−1, 1] : x2 + y2 = 1} (b) S = {(x, y) ∈[−1, 1] × [0, 1] : x2 + y2 = 1} (c) T = {(x, y) ∈[0, 1] × [−1, 1] : x2 + y2 = 1} (d) U = {(x, y) ∈[0, 1] × [0, 1] : x2 + y2 = 1} 7.2.3 In Example 2 on page 122, explain why the functon f is neither injective nor surjective in the same manner as we did for Example 1. 7.2.4 (a) Express the function f : R →R : x 7→x4 + 3 as a relation. 
(b) What is the inverse relation f −1? (c) Use Definition 7.4 to prove that the relation f −1 is not a function. (d) Prove directly from Definition 4.11 that f is not injective and not surjective. Compare your arguments with your answer to part (c). 124 7.3 Equivalence Relations In mathematics, the notion of equality is not as simple as one might think. The idea of two numbers being equal is straightforward, but suppose we want to consider two paths between given points as ‘equal’ if and only if they have the same length? Since two ‘equal’ paths might look very different, is this a good notion of equality? Mathematicians often want to gather together objects that have a common property and then treat them as if they were a single object. This is done using equivalence relations and equivalence classes. First recall the alternative notation for a relation on a set A: if R ⊆A × A is a relation on A, then x R y has the same meaning as (x, y) ∈R. We might read x R y as ‘x is R-related to y.’ Definition 7.7. A relation R on a set A may be described as reflexive, symmetric or transitive if it satisfies the following properties: Reflexivity ∀x ∈A, x R x (every element of A is related to itself) Symmetry ∀x, y ∈A, x R y = ⇒y R x (if x is related to y, then y is related to x) Transitivity ∀x, y, z ∈A, x R y and y R z = ⇒x R z (if x is related to y, and y is related to z, then x is related to z) Symmetry is exactly the same notion as in Definition 7.2. Examples. 1. Let A = R and let R be ≤. Thus 2 ≤3, but 7 ≰4. We check whether R satisfies the above properties. Reflexivity True. ∀x ∈R, x ≤x. Symmetry False. For example, 2 ≤3 but 3 ≰2. Transitivity True. ∀x, y, z ∈R, if x ≤y and y ≤z, then x ≤z. 2. Let A be the set of lines in the plane and define ℓ1 R ℓ2 ⇐ ⇒ℓ1 and ℓ2 intersect. Reflexivity True. Every line intersects itself, so ℓR ℓfor all ℓ∈A. Symmetry True. For all lines ℓ1, ℓ2 ∈A, if ℓ1 intersects ℓ2, then ℓ2 intersects ℓ1.. Transitivity False. As the picture illustrates, we may let ℓ1 and ℓ3 be parallel lines, and ℓ2 cross both of these. Then ℓ1 R ℓ2 and ℓ2 R ℓ3, but ℓ1 ̸R ℓ3. ℓ1 ℓ3 ℓ2 Definition 7.8. An equivalence relation is a relation ∼which is reflexive, symmetric and transitive. The symbol ∼is almost universally used for an abstract equivalence relation. It can be read as ‘related to,’ ‘tilde,’ or ‘twiddles.’ The two examples above are not equivalence relations because they fail one of the three conditions. Here is the simplest equivalence relation. Example. Equals ‘=’ is an equivalence relation on any set, hence the name! 125 Read the definitions of reflexive, symmetric and transitive until you are certain of this fact. There are countless other equivalence relations: here are a few. Examples. 1. For all x, y ∈Z, let x ∼y ⇐ ⇒x −y is even. We claim that ∼is an equivalence relation on Z. Reflexivity ∀x ∈Z, x −x = 0 is even, hence x ∼x. Symmetry ∀x, y ∈Z, x ∼y = ⇒x −y is even = ⇒y −x is even = ⇒y ∼x. Transitivity ∀x, y, z ∈Z, if x ∼y and y ∼z, then x −y and y −z are even. But the sum of two even numbers is even, hence x −z = (x −y) + (y −z) is even, and so x ∼z. 2. Let A = {all students taking this course}. For all x, y ∈A, let x ∼y ⇐ ⇒x achieves the same letter-grade as y. Then ∼is an equivalence relation on A. Reflexivity ∀x ∈A, x ∼x since everyone scores the same as themself! 
Symmetry ∀x, y ∈A, x ∼y = ⇒x achieves the same letter-grade as y = ⇒y achieves the same letter-grade as x = ⇒y ∼x Transitivity ∀x, y, z ∈A, if x ∼y and y ∼z, then x achieves the same as y who achieves the same as z, whence x achieves the same as z. Thus x ∼z. 3. For all x, y ∈Z, let x ∼y ⇐ ⇒x2 ≡y2 (mod 5). Then ∼is an equivalence relation on Z. Reflexivity ∀x ∈Z, x ∼x since x2 is always congruent to itself! Symmetry ∀x, y ∈Z, x ∼y = ⇒x2 ≡y2 (mod 5) = ⇒y2 ≡x2 (mod 5) = ⇒y ∼x Transitivity ∀x, y, z ∈Z, if x ∼y and y ∼z, then x2 ≡y2 and y2 ≡z2 (mod 5). But then x2 ≡z2 (mod 5) and so x ∼z. The most important thing to observe with each of these examples is that an equivalence relation separates elements of a set into subsets where elements share a common property (even/oddness, letter-grade, etc.). The next definition formalizes this idea. Definition 7.9. Let ∼be an equivalence relation on X. The equivalence class of x is the set [x] = {y ∈X : y ∼x}. X ∼is the set of all equivalence classes: the quotient of X by ∼, or ‘X mod ∼.’ Let us think about the definition in the context of our examples. Examples. 1. = {y ∈Z : y ∼0} = {y ∈Z : y is even} is the set of even numbers. Note that is also equal to , , , etc. The other equivalence class is = {y ∈Z : y −1 is even}, which is the set of odd numbers. The quotient set is Z ∼= , = {even numbers}, {odd numbers} . 126 2. There is one equivalence class for each letter grade awarded. Each equivalence class con-tains all the students who obtain a particular letter-grade. If we call the equivalence classes A+, A, A−, B+, . . . , F, where, say, B = {students obtaining a B-grade}, then {Students} ∼= {A+, A, A−, B+, . . . , F}. 3. The equivalence classes for this example are a little tricky. First observe that x ≡y (mod 5) = ⇒x2 ≡y2 (mod 5), so that there are at most five equivalence classes; those of 0, 1, 2, 3 and 4. Are they distinct? If we square each of these modulo 5, we obtain x (mod 5) 0 1 2 3 4 x2 (mod 5) 0 1 4 4 1 Notice that 1 ∼4, so they share an equivalence class. Similarly 2 ∼3. Indeed the distinct equivalence classes are = {x ∈Z : x ≡0 (mod 5)} = {x ∈Z : x ≡1, 4 (mod 5)} = {x ∈Z : x ≡2, 3 (mod 5)} In this case the quotient is the set Z ∼= n , , o . Here is one further example of an equivalence relation, this time on R2. Be careful with the notation: R2 = R × R is already a Cartesian product, so a relation on R2 is a subset of R2 × R2! Example. Let ∼be the relation on R2 defined by (x, y) ∼(v, w) ⇐ ⇒x2 + y2 = v2 + w2. We claim that this is an equivalence relation. Reflexivity ∀(x, y) ∈R2, x2 + y2 = x2 + y2. Symmetry ∀(x, y), (v, w) ∈R2, (x, y) ∼(v, w) = ⇒x2 + y2 = v2 + w2 = ⇒v2 + w2 = x2 + y2 = ⇒(v, w) ∼(x, y) Transitivity ∀(x, y), (v, w), (p, q) ∈R2, if (x, y) ∼(v, w) and (v, w) ∼(p, q), then x2 + y2 = v2 + w2 and v2 + w2 = p2 + q2. But then x2 + y2 = p2 + q2 and so (x, y) ∼(p, q). ∼is therefore an equivalence relation. But what are the equivalence classes? By definition, [(x, y)] = (v, w) ∈R2 : v2 + w2 = x2 + y2 . 127 This isn’t particularly helpful. Indeed it is easier to think of each of these sets as n (v, w) ∈R2 : v2 + w2 is constant o . Each equivalence class is therefore a circle centered at the ori-gin! Some of the equivalence classes are drawn in the pic-ture: the class [(1, 0)] is highlighted. Moreover, the quotient set is R2 ∼= {circles centered at the origin}. −1 1 w −1 1 v Exercises 7.3.1 A relation R is antisymmetric if ((x, y) ∈R) ∧((y, x) ∈R) = ⇒ x = y. Give examples of relations R on A = {1, 2, 3} having the stated property. 
(a) R is both symmetric and antisymmetric. (b) R is neither symmetric nor antisymmetric. (c) R is transitive but R ∪R−1 is not transitive. 7.3.2 Let S = {(x, y) ∈R2 : sin2 x + cos2 y = 1}. (a) Give an example of two real numbers x, y such that x ∼y. (b) Is S reflexive? Symmetric? Transitive? Justify your answers. 7.3.3 Each of the following relations ∼is an equivalence relation on R2. Identify the equivalence classes and draw several of them. (a) (a, b) ∼(c, d) ⇐ ⇒ab = cd. (b) (v, w) ∼(x, y) ⇐ ⇒v2w = x2y. 7.3.4 (a) Let ∼be the relation defined on Z by a ∼b ⇐ ⇒ a + b is even. Show that ∼is an equivalence relation and determine the distinct equivalence classes. (b) Suppose that ‘even’ is replaced by ‘odd’ in part (a). Which of the properties reflexive, symmetric, transitive does ∼possess? 7.3.5 For each of the following relations R on Z, decide whether R is reflexive, symmetric, or transi-tive, and whether R is an equivalence relation. (a) a R b ⇐ ⇒a ≡b (mod 3) or a ≡b (mod 4). (b) a R b ⇐ ⇒a ≡b (mod 3) and a ≡b (mod 4). 7.3.6 We call a real number x small if |x| ≤1. Let R be the relation on the set of real numbers defined by x R y ⇐ ⇒x −y is small. Prove or disprove: R is an equivalence relation on R. 128 7.3.7 Let A = {1, 2, 3, 4, 5, 6}. The distinct equivalence classes resulting from an equivalence relation R on A are {1, 4, 5}, {2, 6}, and {3}. What is R? Give your answer as a subset of A × A. 7.3.8 ⊆is a relation on any set of sets. Is ⊆reflexive, symmetric, transitive? Prove your assertions. 7.3.9 Let S be the set of all polynomials of degree at most 3. An element s ∈S can then be expressed as s(x) = ax3 + bx2 + cx + d, where a, b, c, d ∈R. A relation R on S is defined by p R q ⇐ ⇒p and q have a common root. For example p(x) = (x −1)2 and q(x) = x2 −1 have the root 1 in common so that p R q. Determine which of the properties reflexive, symmetric and transitive are possessed by R. 7.3.10 Let A = {2m : m ∈Z}. A relation ∼is defined on the set Q+ of positive rational numbers by a ∼b ⇐ ⇒a b ∈A (a) Show that ∼is an equivalence relation. (b) Describe the elements in the equivalence class . 7.3.11 A relation is defined on the set A = {a + b √ 2 : a, b ∈Q, a + b √ 2 ̸= 0} by x ∼y ⇐ ⇒ x y ∈Q. Show that ∼is an equivalence relation and determine the distinct equivalence classes. 7.3.12 The reflexive, symmetric and transitive closures of a relation R are defined respectively as the smallest relations containing R which also exhibit the given property. Find each of the three closures of R = {(1, 2), (2, 3), (3, 3)} ⊆Z × Z. 7.3.13 Recall the description of the real projective line (page 110): if Am is the line through the origin with gradient m, then P(R2) = {Am : m ∈R ∪{∞}}. Define a relation on R2 ∗= R2 \ {(0, 0)} by (a, b) ∼(c, d) ⇐ ⇒ad = bc. (a) Prove that ∼is an equivalence relation. (b) Find the equivalence classes of ∼. How do the equivalence classes differ from the lines Am? 7.3.14 Suppose that R, S are relations on some set X. Define the composition R ◦S to be the relation (a, c) ∈R ◦S ⇐ ⇒∃b ∈X such that (a, b) ∈R and (b, c) ∈S. (a) If R = {(1, 1), (1, 2), (2, 3), (3, 1), (3, 3)} and S = {(1, 2), (1, 3), (2, 1), (3, 3)}, find R ◦S. (b) Suppose that R and S are reflexive. Prove that R ◦S is reflexive. (c) Suppose that P and Q are symmetric. Prove that (x, y) ∈P ◦Q ⇐ ⇒(y, x) ∈Q ◦P. 129 (d) Give an example of symmetric relations P, Q such that P ◦Q is not symmetric. Conclude that if P, Q are equivalence relations, then P ◦Q need not be an equivalence relation. 
7.3.15 (Only for those who have studied Linear Algebra) Let ∼be the relation on the set of 2 × 2 real matrices given by A ∼B ⇐ ⇒∃M such that B = MAM−1. (a) Prove that ∼is an equivalence relation. (b) What is the equivalence class of the identity matrix? (c) Show that −11 15 −5 9  ∼ 4 10 0 −6  (Hint: think about diagonalizing) (d) (Hard) Suppose that L : R2 →R2 is a linear map and U, V are bases of R2. Suppose that A = [L]U and B = [L]V are the matrix representations of L with respect to the two bases. Prove that A ∼B. (e) (Hard) Suppose that A, B have the same, but distinct, eigenvalues λ1 ̸= λ2. Prove that A ∼B. Again use diagonalization, the challenge here is to make your proof work even when the eigenvalues are complex numbers. 130 7.4 Partitions Recall the important observation about our equivalence relation examples: every element of the orig-inal set of objects ends up in exactly one equivalence class. For instance, every integer is either even or odd but not both. The equivalence classes partition the original set in the same way that cutting a cake partitions the crumbs: each crumb ends up in exactly one slice. We shall prove in a moment that equivalence relations always do this. Before doing so we reverse the discussion. Definition 7.10. Let X be a set and A = {An : n ∈I} be a collection of non-empty subsets An ⊆X. We say that X is partitioned by A if 1. X = S n∈I An. (the An together make up X) 2. If Am ̸= An, then Am ∩An = ∅. (distinct An are pairwise disjointa) We describe the collection A as a partition of X. aRecall that two sets A, B are disjoint if A ∩B = ∅: see Definition 4.6. In this definition we don’t require the sets An to all be different, some could be identical to each other. Example. Partition the set X = {1, 2, 3, 4, 5} into subsets A1 = {1, 3}, A2 = {2, 4} and A3 = {5}. Now consider the relation R on X, defined by R = {(1, 1), (1, 3), (3, 1), (3, 3), (2, 2), (2, 4), (4, 2), (4, 4), (5, 5)}. What does R have to do with the partition? R was constructed by insisting that x R y ⇐ ⇒x and y are in the same subset An. Run through your mental checklist: reflexive? symmetric? transitive? Indeed R is an equivalence relation! Moreover, the equivalence classes of R are exactly the sets A1, A2, A3. For example, because 1 belongs to A1, the element 1 should be related to every other element in A1. Therefore, the pairs (1, 1) and (1, 3) should be in R. The example suggests that partitioning a set actually defines an equivalence relation. Combining this with our previous observation you should be starting to belive that partitions and equivalence relations are essentially the same thing. Examples. 1. The integers can be partitioned according to their remainder modulo 3: define Am = {z ∈Z : z ≡m (mod 3)}, then Z = A0 ∪A1 ∪A2. This is certainly a partition: • Every integer z has remainder of 0, 1 or 2 after division by 3, and so every integer is in some set Am. • No integer has two distinct remainders modulo 3, so the sets A0, A1, A2 are disjoint. 2. More generally, if n ∈N, then the set of integers Z is partitioned into n sets A0, . . . , An−1 where Am = {z ∈Z : z ≡m (mod n)} is the set of integers with remainder m upon dividing by n. 3. R is partitioned by the sets of rational and irrational numbers: R = Q ∪(R \ Q). 131 Finally, here is an example of a relation which doesn’t produce a partition. Example. Let R = {(1, 3), (1, 4), (2, 2), (2, 3), (3, 1), (3, 2), (4, 3), (4, 4)} be a relation on X = {1, 2, 3, 4} and define the sets An = {x ∈X : (n, x) ∈R}. 
Thus An is the set of all elements of X which are related to n. We quickly see that A1 = {3, 4}, A2 = {2, 3}, A3 = {1, 2}, A4 = {3, 4}. The collection of sets An is as follows: {An}n∈X =  A1, A2, A3, A4 = {3, 4}, {2, 3}, {1, 2} , where we only have three sets in the collection since A4 = A1. This collection is not a partition because, for instance, 2 ∈{2, 3} ∩{1, 2}. In the language of the definition, {2, 3} ̸= {1, 2} but {2, 3} ∩{1, 2} ̸= ∅. More importantly, you should convince yourself that R is not an equivalence relation. Before we present the fundamental result of the chapter, we prove a lemma. Lemma 7.11. Suppose that ∼is an equivalence relation. Then x ∼y ⇐ ⇒[x] = [y]. Proof. (⇐) If [x] = [y], then x ∈[y], whence x ∼y. (⇒) Suppose that x ∼y. We begin by showing the inclusion [x] ⊆[y]. Let z ∈[x], then z ∼x and x ∼y = ⇒z ∼y = ⇒z ∈[y]. (Transitivity) Therefore [x] ⊆[y]. The argument is symmetric in x and y, so we also have [y] ⊆[x], and thus [x] = [y]. Theorem 7.12. Let X be a set. 1. If ∼is an equivalence relation on X, then X is partitioned by the equivalence classes of ∼. 2. If {An}n∈I is a partition of X, then the relation ∼on X defined by x ∼y ⇐ ⇒∃n ∈I such that x ∈An and y ∈An is an equivalence relation. 132 X a a b b c c partition A1 A2 A3 A4 A5 Each element of X ends up in exactly one subset. In the language of the Theorem, we have A1 = [a], A2 = [b] = [c], b ∼c, a ≁b, a ≁c. Some things to consider while reading the proof: • Keep your eyes on the picture: it’s where your intuition comes from, and it’s how you should remember the result. The algebra merely confirms that the picture is telling a legitimate story. • In part 1. of the proof, look for where the reflexive, symmetric and transitive assumptions about ∼are used. Why do we need ∼to be an equivalence relation? • Similarly, in part 2., look for where we use both parts of the defintion of partition. Why are both assumptions required? Proof. 1. Assume that ∼is an equivalence relation on X. To prove that the equivalence classes of ∼partition X, we must show two things: (a) That every element of X is in some equivalence class. (b) That the distinct equivalence classes are pairwise disjoint: if [x] ̸= [y], then [x] ∩[y] = ∅. For (a), we only need reflexivity: ∀x ∈X we have x ∼x. Otherwise said, x ∈[x], whence every element of X is in the equivalence class defined by itself. For (b), we prove by the contrapositive method and show that [x] ∩[y] ̸= ∅= ⇒[x] = [y]. Assume that [x] ∩[y] ̸= ∅. Then ∃z ∈[x] ∩[y]. This gives z ∼x and z ∼y = ⇒x ∼z and z ∼y (Symmetry) = ⇒x ∼y (Transitivity) = ⇒[x] = [y] (Lemma 7.11) We have proved (b) and therefore part 1. of the theorem. 2. Now suppose that {An}n∈I is a partition of X and define ∼by x ∼y ⇐ ⇒∃n ∈I such that x ∈An and y ∈An. We must prove the reflexivity, symmetry and transitivity of ∼. 133 Reflexivity Every x ∈X is in some An. Thus x ∼x for all x ∈X. Symmetry If x ∼y, then ∃n ∈I such that x, y ∈An. But then y, x ∈An and so y ∼x. Transitivity Let x ∼y and y ∼z. Then ∃p, q ∈I such that x, y ∈Ap and y, z ∈Aq. Since {An}n∈I is a partition and y ∈Ap ∩Aq, we necessarily have p = q. Thus x, z ∈Ap and so x ∼z. Thus ∼is an equivalence relation. Reading the proof carefully, you should see that reflexivity comes from the fact that X = S n∈I An, while transitivity is due to the pairwise disjointness of the parts of the partition. Symmetry is essen-tially free because the definition of ∼is symmetric in x and y. Examples of partitions are especially easy to see with curves in the plane. 
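Before turning to those geometric examples, both directions of Theorem 7.12 can be checked by brute force on the finite example which opened this section: X = {1, 2, 3, 4, 5} partitioned into {1, 3}, {2, 4} and {5}. The Python sketch below is an illustration only (the variable names are our own), and an exhaustive check on one finite set is no substitute for the proof above.

```python
from itertools import product

X = {1, 2, 3, 4, 5}

# Direction 2: start from a partition of X and build the relation of Theorem 7.12.
partition = [{1, 3}, {2, 4}, {5}]
R = {(x, y) for block in partition for x, y in product(block, repeat=2)}

reflexive  = all((x, x) in R for x in X)
symmetric  = all((y, x) in R for (x, y) in R)
transitive = all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2)
print(reflexive, symmetric, transitive)        # True True True

# Direction 1: recover the equivalence classes [x] = {y : y ~ x} and check that
# they reproduce the partition we started with.
classes = {frozenset(y for y in X if (y, x) in R) for x in X}
print(classes == {frozenset(block) for block in partition})   # True
```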
Here we return to the example on page 127 and describe things in our new language. Example. For each real number r ≥0, define the set Ar = (x, y) ∈R2 : x2 + y2 = r2 . This is simply the circle of radius r centered at the origin. We check that {Ar}r∈R+ 0 is a partition of R2. • Every point of the plane lies on some circle. Precisely, (x, y) ∈A√ x2+y2 since p x2 + y2 is the distance of (x, y) from the origin. Thus R2 = S r∈R+ 0 Ar. • If r1 ̸= r2, then the concentric circles Ar1 and Ar2 do not intersect. Thus Ar1 ∩Ar2 = ∅. −1 1 y −1 1 x Now define a relation ∼on R2 via (x, y) ∼(v, w) ⇐ ⇒∃r ≥0 such that (x, y), (v, w) both lie on the circle Ar. By Theorem 7.12 this is an equivalence relation. We can also check explicitly: dropping any mention of the radius r, we see that (x, y) ∼(v, w) ⇐ ⇒x2 + y2 = v2 + w2. This is exactly the equivalence relation described on page 127. The equivalence classes are precisely the sets Ar. Indeed [(v, w)] = {(x, y) ∈R2 : x2 + y2 = v2 + w2} = A√ v2+w2 is just the circle of radius √ v2 + w2. 134 Geometric Examples The language of equivalence relations and partitions is used heavily in geometry and topology to describe complex shapes. Here are a couple of examples. The M¨ obius Strip Take a rectangle X = [0, 6] × [0, 1] and partition into the following subsets. • If a point does not lie on the left or right edge of the rectangle, place it in a subset by itself: {(x, y)} for x ̸= 0, 6, • If a point does lie on the left or right edge of the rectangle, place it in a subset with one point from the other edge: {(0, y), (6, 1 −y)} for any y. The rectangle is drawn below, where the points on the left and right edges are colored red. The arrows indicate how the edges are paired up. For example the point (0, 0.8) (high on the left near the tip of the arrow) is paired with (6, 0.2) (low on the right edge of the rectangle). These subsets clearly partition the rectangle X. The partitions define an equivalence relation ∼ on X in accordance with Theorem 7.12. Note that there are infinitely many equivalence classes. How can we interpret the quotient set X ∼? This is easier to visualize than you might think. Since each point on the left edge of the rectangle is in an equivalence class with a point on the right edge, we imagine gluing the two edges together in such a way that the correspoinding points are touching. In the picture, we imagine holding X like a strip of paper, giving one side a twist, and then gluing the edges together. This is the classic construction of a M¨ obius strip. Rectangle Half twist Glue arrows to get M¨ obius strip The Cylinder One could construct a cylinder similarly to the M¨ obius strip, by identifying edges of the rectangle but without applying the half-twist. Instead we do something a little different. Let X = R2 with equivalence relation ∼defined by (a, b) ∼(c, d) ⇐ ⇒a −c ∈Z and b = d. The equivalence classes are horizontal strings of points with the same y co-ordinate. If we imagine wrapping R2 repeatedly around a cylinder of circumference 1, all of the points in a given equivalence class will now line up. The set of equivalence classes R2 ∼can therefore be vizualized as the cylinder. 135 Alternatively, you may imagine piercing a roll of toilet paper and unrolling it. The single puncture now becomes a row of (almost!27) equally spaced holes. In the picture, the left hand side is (part of) the plane R2, displayed so that points in each equivalence class have the same color. The three horizontal dots are all in the same equivalence class. 
We roll up the plane into a cylinder so that all the points with the same color end up at the same place.

[Figure: the plane wrapped around a cylinder of circumference 1.]

More complex shapes can be created by other partitions/relations. If you want a challenge in visualization, consider why the equivalence relation

(a, b) ∼ (c, d) ⇐⇒ a − c ∈ Z and b − d ∈ Z

on R² defines a torus (the surface of a ring-doughnut).

Exercises

7.4.1 For each of the collections {An}n∈R, determine whether the collection partitions R². Justify your answers, and sketch several of the sets An.
(a) An = {(x, y) ∈ R² : y = 2x + n}.
(b) An = {(x, y) ∈ R² : y = (x − n)²}.
(c) An = {(x, y) ∈ R² : xy = n}.
(d) An = {(x, y) ∈ R² : y⁴ − y² = x − n}.

7.4.2 Let X be the set of all humans. If x ∈ X, we define the set Ax = {people who had the same breakfast or lunch as x}.
(a) Does the collection {Ax}x∈X partition X? Explain.
(b) Is your answer different if the or in the definition of Ax is changed to and? (If Jane and Tom both had the same breakfast and lunch, then AJane = ATom, so there are likely many fewer distinct sets Ax than there are humans!)

7.4.3 Let X = {1, 2, 3}. Define the relation R = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (3, 1), (3, 3)} on X.
(a) Which of the properties reflexive, symmetric, transitive does R satisfy?
(b) Compute the sets A1, A2, A3, where An = {x ∈ X : x R n}. Show that {A1, A2, A3} do not form a partition of X.
(c) Repeat parts (a) and (b) for the relations S and T on X, where
S = {(1, 1), (1, 3), (3, 1), (3, 3)},
T = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 3)}.
Some of the sets A1, A2, A3 might be the same in each of your examples. If, for example, A1 = A3, then the collection {A1, A2, A3} only contains two sets: {A1, A2}. Is this a partition? Compare with the example on page 132.

7.4.4 Using the equivalence relation description of the Möbius strip, prove that you may cut a Möbius strip round the middle and yet still end up with a single loop. Where would you cut the defining rectangle, and how can you tell that you still have one piece?

7.4.5 (Hard!) A Klein bottle can be visualized as follows. Define an equivalence relation ∼ on the unit square X = [0, 1] × [0, 1] so that:
• (0, y) ∼ (1, y) for 0 ≤ y ≤ 1.
• (x, 0) ∼ (1 − x, 1) for 0 ≤ x ≤ 1.
The result is the picture: the blue edges are identified in the same direction and the red in the opposite. Attempting to visualize this in 3D requires a willingness to stretch and distort the square, but results in the green bottle. The original red and blue arrows have become curves on the bottle. If you are using Acrobat Reader, click on the bottle and move it around.
(a) Suppose you cut the Klein bottle along the horizontal dashed line of the defining square. What is the resulting object?
(b) Now cut the bottle along the vertical dashed line. What do you get this time? Can you visualize where the two dashed lines are on the green bottle?

7.5 Well-definition, Rings and Congruence

We return to our discussion of congruence (recall Section 3.1) in the context of equivalence relations and partitions. The important observation is that congruence modulo n is an equivalence relation on Z, each equivalence class being the set of all integers sharing a remainder modulo n.

Theorem 7.13. For each n ∈ N, define x ∼n y ⇐⇒ x ≡ y (mod n). Then ∼n is an equivalence relation on Z.

The theorem is a restatement of Example 2 on page 131, in conjunction with Theorem 7.12. You should prove this yourself, as practice in using the definition of equivalence relation.
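As an informal check (not a substitute for the proof you are asked to write), the sketch below tests reflexivity, symmetry and transitivity of ∼n on a finite window of integers. The modulus n = 4 and the window are arbitrary illustrative choices.

```python
# Informal illustration of Theorem 7.13 (not a proof): check the three
# equivalence-relation axioms for congruence mod n on a finite window of Z.
# The window and the modulus n = 4 are arbitrary illustrative choices.

n = 4
window = range(-20, 21)

def cong(x, y):
    return (x - y) % n == 0

assert all(cong(x, x) for x in window)                                  # reflexive
assert all(cong(y, x) for x in window for y in window if cong(x, y))    # symmetric
assert all(cong(x, z) for x in window for y in window for z in window
           if cong(x, y) and cong(y, z))                                # transitive

# A few elements of the equivalence class [1] = {x : x ≡ 1 (mod 4)}
print([x for x in window if cong(x, 1)])   # ..., -7, -3, 1, 5, 9, ...
```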
The equivalence classes are precisely the sets of integers which are congruent modulo n: the integers which share the same remainder.

[a] = {x ∈ Z : x ≡ a (mod n)}
    = {x ∈ Z : x has the same remainder as a when divided by n}
    = {x ∈ Z : x − a is divisible by n}

In this language, we may restate what it means for two equivalence classes to be identical.

Theorem 7.14. [a] = [b] ⇐⇒ a ≡ b (mod n) ⇐⇒ ∃k ∈ Z such that b = a + kn.

If the meaning of any of the above is unclear, re-read the previous two sections: they are critically important!

The equivalence classes of ∼n partition the integers Z. According to Theorem 7.14, there are exactly n equivalence classes, whence we may describe the quotient set as

Z/∼n = {[0], [1], . . . , [n − 1]}.

We use this set to define an extremely important object.

Definition 7.15. Define two operations +n and ·n on the set Z/∼n as follows:

[x] +n [y] := [x + y],    [x] ·n [y] := [x · y].

The ring Zn is the set Z/∼n together with the operations +n and ·n.

The operation +n (similarly ·n) is telling us how to add equivalence classes, that is, how to produce a new equivalence class from two old ones. +n is not the same operation as +: we are defining +n using +. The former combines equivalence classes, while the latter sums integers. The challenge here is that you have to think of each equivalence class as a single object. When we write

[3] +8 [6] = [3 + 6] = [9] = [1],

we are thinking about the sets [3] and [6] as individual objects rather than as collections of elements: remember that [3] = {. . . , −5, 3, 11, 19, . . .} is an infinite set!

There is, moreover, a matter of choice: since, for example, [3] = [11] and [6] = [22], we should be able to observe that [3] +8 [6] = [11] +8 [22]. Is this true? If not, then the operation +8 would not be particularly useful. Thankfully this is not a problem: according to the definition of +8, we have

[11] +8 [22] = [11 + 22] = [33] = [1],

exactly as we would wish.

Let us think a little more abstractly. Suppose we are given equivalence classes X and Y; how do we compute X +n Y? Here is the process.
1. Choose elements x ∈ X and y ∈ Y.
2. Add x and y to get a new element x + y ∈ Z.
3. Then X +n Y is the equivalence class [x + y].

The issue is that there are infinitely many choices for the elements x ∈ X and y ∈ Y. If +n is to make sense, we must obtain the same equivalence class [x + y] regardless of our choices of x ∈ X and y ∈ Y.

Definition 7.16. A concept is well-defined if it is independent of all choices used in the definition.

Theorem 7.17. The operations +n and ·n are well-defined.

The choices made in the definitions of +n and ·n were of representative elements x and y of the equivalence classes [x] and [y]. All representatives of these classes have the form x + kn ∈ [x] and y + ln ∈ [y] for some integers k, l. It therefore suffices to prove that ∀k, l ∈ Z,

[x + kn] +n [y + ln] = [x] +n [y]  and  [x + kn] ·n [y + ln] = [x] ·n [y].

Proof. We prove that +n is well-defined.

[x + kn] +n [y + ln] = [(x + kn) + (y + ln)]   (by definition of +n)
                     = [x + y + (k + l)n]
                     = [x + y]                 (by Theorem 7.14)
                     = [x] +n [y]              (by definition of +n)

The argument for ·n is similar.

You should now re-read Theorem 3.8 until you are comfortable that we are doing the same thing!

Aside: Ugly notation. Given the usefulness of Zn, and the cumbersome nature of the above notation, it is customary to drop the square brackets and subscripts and simply write

Zn = {0, 1, 2, . . . , n − 1},   x + y := x + y (mod n),   x · y := xy (mod n).
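The abbreviated notation is exactly what one computes with in practice. The sketch below, assuming the representative-based definitions above, spot-checks Theorem 7.17 numerically: replacing the representatives 3 and 6 by 3 + 8k and 6 + 8l never changes the resulting class in Z8. The ranges for k and l are arbitrary.

```python
# A minimal sketch of Z_n arithmetic on representatives, and a spot check of
# Theorem 7.17: the class [x] +_n [y] does not depend on which representatives
# x + kn and y + ln we pick.  n = 8 echoes the example above.

n = 8

def add(x, y):
    return (x + y) % n     # [x] +_n [y], computed on representatives

def mul(x, y):
    return (x * y) % n     # [x] ._n [y], computed on representatives

x, y = 3, 6
for k in range(-3, 4):
    for l in range(-3, 4):
        assert add(x + k * n, y + l * n) == add(x, y)   # well-defined addition
        assert mul(x + k * n, y + l * n) == mul(x, y)   # well-defined multiplication

print(add(3, 6), add(11, 22))   # 1 1  -- [3] +_8 [6] = [11] +_8 [22] = [1]
```

Changing n or the chosen representatives should never trip the assertions; that is precisely what Theorem 7.17 guarantees.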
When using this description of Zn, you should realize that we are working with equivalence classes, not numbers. In this context, −3 ∈ Z8 makes perfect sense, for it really means [−3] ∈ Z8. This is perfectly fine, since [−3] = [5] as equivalence classes, and so it is legitimate to write −3 = 5 in Z8. Until you are 100% sure that you know when 3 represents an equivalence class and when it represents a number, you should keep the brackets in place!

Exercises

7.5.1 Give an explicit proof of Theorem 7.13.

7.5.2 (a) Prove the second half of Theorem 7.17, that ·n is well-defined.
(b) Prove by induction that the operation of raising to the power m ∈ N is well-defined in Zn. I.e., prove that ∀m ∈ N, ∀[x] ∈ Z/∼n we have [x^m] = [x]^m. Be careful! n is fixed; your induction variable is m. What base case(s) do you need?

7.5.3 Consider the relation ∼ defined on Z × N = {(x, y) : x ∈ Z and y ∈ N} by (a, b) ∼ (c, d) ⇐⇒ ad = bc.
(a) Prove that ∼ is an equivalence relation.
(b) List several elements of the equivalence class of (2, 3). Repeat for the equivalence class of (−3, 7). What do the equivalence classes have to do with the set of rational numbers Q?
(c) Define operations ⊕ and ⊗ on (Z × N)/∼ by
[(a, b)] ⊕ [(c, d)] = [(ad + bc, bd)],   [(a, b)] ⊗ [(c, d)] = [(ac, bd)].
Prove that ⊕ and ⊗ are well-defined. Try to do this question without using division! We will return to this example in the next section.

7.6 Functions and Partitions

To complete our discussion of partitions and equivalence relations, we consider how to define functions whose domain is a set of equivalence classes. Take congruence as our motivating example. Suppose we want to define a function f : Z4 → Z6, say f(x) = 3x (mod 6). This certainly looks like a function, but is it? Remember that 'x' and '3x' are really equivalence classes, so we should say[28]

f([x]4) = [3x]6,   where [x]4 ∈ Z4 and [3x]6 ∈ Z6.

Is this a function? To make sure, we need to check that any representative a ∈ [x]4 gives the same result. That is, we need to prove that

a ≡ b (mod 4) =⇒ 3a ≡ 3b (mod 6).

This is not so hard:

a ≡ b (mod 4) =⇒ ∃n ∈ Z such that a = b + 4n =⇒ 3a = 3b + 12n =⇒ 3a ≡ 3b (mod 6).

It might look like a small difference, but attempting to define g : Z4 → Z6 by g(x) = 2x (mod 6) does not result in a function. If it were, then we should have

a ≡ b (mod 4) =⇒ 2a ≡ 2b (mod 6).

But this is simply not true: for example 4 ≡ 0 (mod 4), but 8 ≢ 0 (mod 6). It might look like g is a function, but it is not well-defined, because [4] = [0] in Z4 and g([4]) ≠ g([0]) in Z6.

Just as in Definition 7.16, the process of verifying that a rule really is a function is called checking well-definition. In general, if we are defining a function

f : X/∼ → A   (∗)

whose domain is a quotient set, then it is usually necessary to construct f by saying what happens to a representative x of an equivalence class [x]: f([x]) = 'do something to x'. We need to make sure that the 'something' is independent of the choice of element x.

Definition 7.18. Suppose that f : X/∼ → A is a rule of the form (∗). We say that f is a well-defined function if [x] = [y] =⇒ f([x]) = f([y]).

If you think carefully, this is nothing more than condition 2 of Definition 7.4.

[28] The notation [x]4 is helpful for reminding us which equivalence relation is being applied. When dealing with functions between different quotient sets, it is easy to become confused.
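A quick numerical check of the Z4 → Z6 discussion above can be reassuring. The helper name well_defined below is ad hoc (it is not notation from the text), and testing on a finite window of integers is only evidence, not a proof.

```python
# Spot check of the Z_4 -> Z_6 discussion above: the rule x |-> kx (mod 6) is a
# genuine function on equivalence classes only if congruent inputs mod 4 give
# congruent outputs mod 6.  The helper name is ad hoc, not notation from the text.

def well_defined(k, m=4, n=6, window=range(-50, 50)):
    """True if a ≡ b (mod m) implies k*a ≡ k*b (mod n) on the test window."""
    return all((k * a - k * b) % n == 0
               for a in window for b in window if (a - b) % m == 0)

print(well_defined(3))   # True  -- f(x) = 3x gives a function Z_4 -> Z_6
print(well_defined(2))   # False -- g(x) = 2x does not: 4 ≡ 0 (mod 4) but 8 ≢ 0 (mod 6)
```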
Examples. 1. Show that f : Zn → Zn defined by f(x) = x² + 4 (mod n) is well-defined. We must check that x ≡ y (mod n) =⇒ x² + 4 ≡ y² + 4 (mod n). But this is trivial!

2. For which integers k is the rule f : Z4 → Z6 defined by f(x) = kx (mod 6) a well-defined function? We require x ≡ y (mod 4) =⇒ kx ≡ ky (mod 6). Now

x ≡ y (mod 4) =⇒ ∃n ∈ Z such that x − y = 4n =⇒ kx − ky = 4kn.

For f to be well-defined, we need kx − ky = 4kn to be a multiple of 6 independently of x and y. Thus f is well-defined if and only if 6 | 4kn for all n ∈ Z. This can only be the case if 6 | 4k. Otherwise said,

f is well-defined ⇐⇒ 6 | 4k ⇐⇒ 3 | 2k ⇐⇒ 3 | k.

Given that kx ∈ Z6, we need only consider k ∈ {0, 1, 2, 3, 4, 5}: equivalent values of k modulo 6 won't change the definition of f. It follows that there are only two well-defined functions f : Z4 → Z6 : x ↦ kx, namely f0(x) = 0 and f3(x) = 3x. Here they are in tabular form.

x     | 0 1 2 3        x     | 0 1 2 3
f0(x) | 0 0 0 0        f3(x) | 0 3 0 3

It is instructive to play with another value of k, say k = 5, and attempt to construct a table:

x     | 0 1 2 3 4 5 ···
f5(x) | 0 5 4 3 2 1 ···

The problem is that 4 ≡ 0 (mod 4), yet f5(4) ≢ f5(0) (mod 6). In order to be a function, the second row must repeat with period four. You should compare this with the examples on page 67 and with Exercise 4.4.11.

Functions on the Cylinder and Torus

Recall our construction on page 135, where we viewed the cylinder as the set R²/∼ with respect to the equivalence relation (a, b) ∼ (c, d) ⇐⇒ a − c ∈ Z and b = d. We wish to define a function f : R²/∼ → A whose domain is the cylinder.[29] Well-definition requires that f satisfy

(a, b) ∼ (c, d) =⇒ f([(a, b)]) = f([(c, d)]).

Since (a, b) ∼ (a + 1, b), we require f([(a, b)]) = f([(a + 1, b)]) for all a, b ∈ R. Otherwise said, f([(x, y)]) must be periodic in x with period 1. It is easy to see that

f([(x, y)]) = y² sin(2πx)

is a suitable choice of function f : R²/∼ → R.

More generally, to define a function whose domain is the torus T² = R²/∼, where (a, b) ∼ (c, d) ⇐⇒ a − c ∈ Z and b − d ∈ Z, requires a function which has period 1 in both x and y. The function f([(x, y)]) = sin(2πx) cos(2πy) is plotted below, with the color on the torus indicating the value of f. It is easier for us to simply consider the function F : R² → R : (x, y) ↦ sin(2πx) cos(2πy). This is also plotted, with the same color for each value.

[Figures: the function f on its domain T², and the function F(x, y) = sin(2πx) cos(2πy) restricted to [0, 1) × [0, 1); the arrows in the two pictures correspond.]

[29] A is any target set you like. We will choose an example with A = R in a moment.

Aside: The Canonical Map

To do this justice, and to give you a taste for the details which are necessary in pure mathematics, here is the important definition.

Definition 7.19. Suppose that ∼ is an equivalence relation on a set X. The function γ : X → X/∼ defined by γ(x) = [x] is the canonical map. (Canonical, in mathematics, just means natural or obvious.)

The canonical map has only one purpose: to allow us to construct functions f : X/∼ → A.

Theorem 7.20. Suppose that ∼ is an equivalence relation on X.
1. If f : X/∼ → A is a function, then F : X → A defined by F = f ∘ γ satisfies x ∼ y =⇒ F(x) = F(y).
2. If F : X → A satisfies x ∼ y =⇒ F(x) = F(y), then there is a unique function f : X/∼ → A satisfying F = f ∘ γ.

Proof. 1. This is trivial:

x ∼ y =⇒ [x] = [y] =⇒ γ(x) = γ(y) =⇒ f(γ(x)) = f(γ(y)) =⇒ F(x) = F(y).

2. f : X/∼ → A can only be the function defined by f([x]) = F(x). We show that this is well-defined:

[x] = [y] =⇒ x ∼ y =⇒ F(x) = F(y) =⇒ f([x]) = f([y]).

The proof, like much of mathematics, is a masterpiece in concision that seems to be doing nothing at all.
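Here is a toy illustration of part 2 of Theorem 7.20, with ad hoc names rather than notation from the text: a map F on Z that respects congruence mod 4 induces a map f on the four classes of Z4 via f([x]) = F(x), and F then factors as f ∘ γ. The particular F used is essentially the f3 example above.

```python
# Toy illustration of Theorem 7.20, part 2 (names here are ad hoc): a map F on Z
# that respects congruence mod 4 induces a unique map f on the quotient Z_4 via
# f([x]) = F(x), and F factors as f ∘ γ.

n = 4

def gamma(x):
    return x % n               # canonical map γ: Z -> Z_4, class named by its remainder

def F(x):
    return (3 * x) % 6         # F: Z -> Z_6; x ≡ y (mod 4) implies F(x) = F(y)

f = {gamma(x): F(x) for x in range(n)}                     # induced map on the 4 classes
assert all(F(x) == f[gamma(x)] for x in range(-30, 30))    # F = f ∘ γ on a window

print(f)   # {0: 0, 1: 3, 2: 0, 3: 3} -- the table of f_3 given above
```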
The point is that functions of the form f : X/∼ → A are difficult to work with. The Theorem says that we never need to explicitly use such functions, and can instead work with simpler functions of the form F : X → A. The only condition is that x ∼ y =⇒ F(x) = F(y). Essentially, F is f in disguise!

[Diagram: the commutative triangle of maps F : X → A, γ : X → X/∼ and f : X/∼ → A, with F = f ∘ γ.]

This result will be resurrected when you study Groups, Rings & Fields as part of the famous First Isomorphism Theorem.

Exercises

7.6.1 Prove or disprove: f : Z3 → Z5 : x ↦ x³ (mod 5) is well-defined.

7.6.2 (a) Compute (x + 4n)².
(b) Suppose that ∀n ∈ Z, we have (x + 4n)² ≡ x² (mod m). Find all the integers m for which this is a true statement.
(c) For what m ∈ N≥2 is the function f : Z4 → Zm : x ↦ x² (mod m) well-defined?

7.6.3 A rule f : X/∼ → A is well-defined if [x] = [y] =⇒ f([x]) = f([y]).
(a) State what it means for f : X/∼ → A to be injective. What do you observe?
(b) Prove that f : Z7 → Z35 : x ↦ 15x is a well-defined, injective function.
(c) Repeat part (b) for the function f : Z100 → Z300 : x ↦ 9x. Compare your arguments for well-definition and injectivity. This forces you to write your argument abstractly, rather than using a table! You may find it useful that 9 · (−11) ≡ 1 (mod 100).

7.6.4 Define a partition of the sphere S² = {(x, y, z) : x² + y² + z² = 1} into subsets of the form {(x, y, z), (−x, −y, −z)}. Each subset consists of two points directly opposite each other on the sphere (antipodal points). Let ∼ be the equivalence relation whose equivalence classes are the above subsets.
(a) f : S²/∼ → R : [(x, y, z)] ↦ xyz is not well-defined. Explain why.
(b) Prove that f : S²/∼ → R³ : [(x, y, z)] ↦ (yz, xz, xy) is a well-defined function. The image of this function is Steiner's famous Roman Surface, another example, like the Klein Bottle, of a generalization of the Möbius Strip.

7.6.5 Recall Exercise 7.5.3, where we defined an equivalence relation ∼ on Z × N.
(a) Prove that the function f : (Z × N)/∼ → Q defined by f([(x, y)]) = x/y is a well-defined bijection.
(b) Prove that f transforms the operations ⊕ and ⊗ into the usual addition and multiplication of rational numbers. That is:
f([(a, b)] ⊕ [(c, d)]) = f([(a, b)]) + f([(c, d)]),
f([(a, b)] ⊗ [(c, d)]) = f([(a, b)]) · f([(c, d)]).
The technical term for this is that f : ((Z × N)/∼, ⊕, ⊗) → (Q, +, ·) is an isomorphism of rings.

8 Cardinalities of Infinite Sets

8.1 Cantor's Notion of Cardinality

During the late 1800s a German mathematician named Georg Cantor almost single-handedly overturned the foundations of mathematics. Prior to Cantor, mathematicians had understood a set to be nothing more than a collection of objects. Via the consideration of certain infinite sets,[30] Cantor demonstrated that this naïve idea is woefully inadequate. Cantor met great resistance from many famous mathematicians, philosophers, and even religious scholars, who felt his ideas were unnatural and risked undermining the divine. Despite strong initial antipathy, Cantor's notion of cardinality is now universally accepted by mathematicians. More importantly, it led to the creation of axiomatic set theory and the, still somewhat controversial, modern conception of set. Cantor's legacy is arguably the modern axiomatic nature of pure mathematics, where rigor dominates and mathematicians are obliged to follow logic wherever it might lead, regardless of the bizarre paradoxes which might appear. In this chapter we consider the basics of Cantor's contribution, essentially his extension of the concept of cardinality to infinite sets.

[30] In particular his middle-third set.
Recall that if A is a finite set, then |A|, the cardinality of A, is simply the number of elements in A. This definition obviously does not extend to infinite sets. However, we can provide an alternative interpretation of cardinality as a tool to compare the sizes of sets, and this interpretation turns out to apply to infinite sets.

For example, suppose that A = {fish, dog} and B = {α, β, γ}. Even though the elements of the sets A and B are completely different, we may use cardinality to compare the sizes of A and B: since |A| = 2 and |B| = 3, we may write |A| ≤ |B| to indicate that B has at least as many elements as A. By Theorem 4.12, this condition is equivalent to the existence of an injective (one-to-one) map from A to B. For instance, we can choose the function f : A → B defined by fish ↦ α, dog ↦ β.

In a sense, Theorem 4.12 tells us how to compare cardinalities of finite sets without counting elements. Cantor's seemingly innocuous idea was to turn this theorem for finite sets into a definition of cardinality for infinite sets.

Definition 8.1. The cardinalities of two sets A, B are denoted |A| and |B|. We compare cardinalities as follows:
• |A| ≤ |B| ⇐⇒ ∃f : A → B injective.
• |A| = |B| ⇐⇒ ∃f : A → B bijective.
We write |A| < |B| ⇐⇒ |A| ≤ |B| and |A| ≠ |B|. That is, ∃f : A → B injective but ∄g : A → B bijective.

Cardinality is defined as an abstract property whereby two sets can be compared. To define the cardinality |A| as an object, we need the following theorem.

Theorem 8.2. On any collection of sets, the relation A ∼ B ⇐⇒ |A| = |B| is an equivalence relation. The cardinality of a set A is precisely the equivalence class of A with respect to this relation: |A| := [A].

It is now clear that cardinality partitions any collection of sets: every set has a cardinality, and no set has more than one cardinality. To get further it is useful to introduce a symbol for the cardinality of the simplest infinite set.

Countably Infinite Sets

Definition 8.3. The cardinality of the set of natural numbers N is denoted ℵ0, read aleph-nought or aleph-null. We say that a set A is countably infinite, or denumerable, if |A| = ℵ0. (Sometimes this is shortened to countable, although some authors use countable to mean 'finite or denumerable,' i.e. any A for which |A| ≤ ℵ0. Use countably infinite or denumerable to avoid confusion.)

ℵ is the first letter of the Hebrew alphabet. We will discuss in a moment why we need a new symbol; why ∞ doesn't suffice. First we consider an example of Definition 8.1 at work.

Example. Let 2N = {2, 4, 6, 8, 10, . . .} be the set of positive even integers. The function f : N → 2N : n ↦ 2n is a bijection. It follows that |2N| = |N| = ℵ0 and we would say that 2N is denumerable.

This example shows one of the first strange properties of infinite sets: 2N is a proper subset of N, and yet the two sets are in bijective correspondence with one another! You should feel like you want to say two contradictory things simultaneously:
• N has the same 'number of elements' as 2N.
• N has twice the 'number of elements' of 2N.
If this doesn't make you feel uncomfortable, then read it again! The remedy to your discomfort is to appreciate that cardinality and number of elements are different concepts. Replacing 'number of elements' with 'cardinality' in the two statements makes both true! Indeed it is completely legitimate to write 2 · ℵ0 = ℵ0.
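To make the example concrete, the sketch below tabulates the bijection n ↦ 2n on a finite prefix of N. A finite check of course proves nothing about the infinite sets themselves; it is only meant to make the correspondence visible. The prefix length is an arbitrary choice.

```python
# The bijection f: N -> 2N, n |-> 2n, behind |2N| = aleph_0, viewed on a finite
# prefix.  This is an illustration of the correspondence, not a proof.

def f(n):
    return 2 * n

N_prefix = range(1, 11)

print([f(n) for n in N_prefix])                     # [2, 4, 6, ..., 20]
# distinct inputs give distinct outputs on the prefix (injectivity, sampled)
assert len({f(n) for n in N_prefix}) == len(N_prefix)
# every even number in {2,...,20} is hit (surjectivity onto 2N, sampled)
assert {f(n) for n in N_prefix} == set(range(2, 21, 2))
```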
Here is another example of the same phenomenon: N has one more element than N≥2, and yet the two sets have the same cardinality: ℵ0 + 1 = ℵ0.

Example. The function g : N → N≥2 : n ↦ n + 1 is a bijection, whence N≥2 = {2, 3, 4, 5, . . .} is denumerable.

As practice in using the definition of cardinality, we prove the following.

Theorem 8.4. Suppose that A is a finite set. Then |A| < ℵ0.

Proof. The n = 0 case is left to the Exercises. Suppose that |A| = n ≥ 1, so that we may list the elements of A as {a1, . . . , an}. We must prove two things:
1. |A| ≤ ℵ0. That is, ∃f : A → N which is injective.
2. |A| ≠ ℵ0. That is, ∄g : A → N which is bijective. By symmetry this is equivalent to showing that there is no bijective function h : N → A. (If g : A → N is a bijection, then g⁻¹ : N → A is also a bijection.)

For part 1, simply define f by f(ak) = k for each k ∈ {1, 2, 3, . . . , n}. This is injective since the distinct elements ak of A map to distinct integers.

For part 2, suppose that h : N → A is bijective. Consider the set

h({1, . . . , n + 1}) = {h(1), . . . , h(n + 1)} ⊆ A.

Since A has n elements, by Dirichlet's box principle, at least two of the values h(1), . . . , h(n + 1) must be equal. Therefore h is not injective and consequently not bijective. A contradiction.

Aside: ℵ0 versus ∞: what's the difference?

It can be difficult to grasp why ℵ0 and ∞ are not the same thing. The problem is compounded by references to an 'infinite number' of objects any time that the cardinality of a set is not finite. This loose phrase is commonly used, but risks conflating the concepts of 'infinite set' and 'infinity.' So what is the difference between ℵ0 and ∞? If there aren't an 'infinite number' of natural numbers, how many are there? Theorem 8.4 says that ℵ0 is 'larger than any natural number.' Is this not what we mean by infinity?

The reason we need a new symbol ℵ0, and why it and ∞ are different, is twofold:
1. As we shall see shortly, there are infinite sets with greater cardinality than ℵ0: in a naïve sense, there are multiple infinities. The single symbol ∞ is insufficient to distinguish sets with different cardinalities.
2. More philosophically, ℵ0 is an object in its own right; an object to which the cardinality of some set may be equal. Indeed, by Theorem 8.2, ℵ0 is an equivalence class. By contrast, ∞ is not an object. Think back to where you've seen ∞ before. It is mostly used in interval notation (e.g., [1, ∞)) and when talking about limits: for example, lim_{x→3} 1/(x − 3)² = ∞ is shorthand for the notion that the function f(x) = 1/(x − 3)² gets unboundedly larger as x approaches 3. The danger with this notation is that you mistakenly think of ∞ as a number: it isn't! An elementary calculus student might be tempted to write f(3) = 1/(3 − 3)² = ∞, but this makes absolutely no sense. Similarly, it is easy to mistake the appearance of ∞ in interval notation for a number: e.g. (2, ∞) merely means 'all numbers greater than 2.' To say 'greater than 2 and less than infinity' would be an error.

The challenge of Cantor's notion of cardinality is to appreciate that the question, 'How many natural numbers are there?,' is meaningless!

We conclude this section with two important examples of denumerable sets.

Theorem 8.5. The integers Z are denumerable.

Proof. We must construct a bijective function f : N → Z.
By experimenting, you may feel it is enough simply to write down the first few terms of a suitable function:

n    : 1  2  3  4  5  6  7  8  9  10 ···
f(n) : 0  1 −1  2 −2  3 −3  4 −4   5 ···

With a bit of thinking, it should be obvious what the function is doing, and that it is bijective. For a bit more formality, we can write

f(n) = n/2 if n is even,   f(n) = −(n − 1)/2 if n is odd.

Now we check that this is bijective.

(Injectivity) Let m, n ∈ N, and suppose that f(m) = f(n). Without loss of generality, there are three cases to consider.
(m, n both even) f(m) = f(n) =⇒ m/2 = n/2 =⇒ m = n.
(m, n both odd) f(m) = f(n) =⇒ −(m − 1)/2 = −(n − 1)/2 =⇒ m = n.
(m even, n odd) f(m) = f(n) =⇒ m/2 = −(n − 1)/2 =⇒ m + n = 1. But m, n ∈ N, so m + n ≥ 2, which is a contradiction.
Therefore f is injective.

(Surjectivity) With a little calculation, you should be able to see that, for any z ∈ Z, there exists a positive integer n such that f(n) = z, namely z = f(2z) if z > 0 and z = f(1 − 2z) if z ≤ 0. Hence f is surjective.

As you build up examples, you no longer have to compare denumerable sets directly with N. A set A is denumerable if and only if ∃f : A → B bijective, where B is any other denumerable set. This holds because the composition of bijective functions is also bijective (Theorem 4.15).

Theorem 8.6. The rational numbers Q are denumerable.

Proof. We do this in stages. First we construct a bijection between the positive rational numbers Q⁺ and the natural numbers N. For each a, b ∈ N, place the fraction a/b in the ath row and bth column of an infinite square array. Now list the elements by tracing the diagonals, deleting any number that has already appeared in the list (2/2 = 1/1, 6/4 = 3/2, etc.).

[Figure: the infinite square array of fractions, with its diagonals traced and repeats crossed out.]

We obtain the ordered set

A = {a1, a2, a3, a4, . . .} = {1/1, 2/1, 1/2, 1/3, 3/1, 4/1, 3/2, 2/3, 1/4, 1/5, . . .}.

Now define the function f : N → Q⁺ by f(n) = an. We claim that this is a bijection.

(Injectivity) Let m, n ∈ N, and suppose that f(m) = f(n). Then am = an. But in the construction of A we deleted any number which had already appeared in the list. Thus am can only equal an if m = n.

(Surjectivity) A positive rational number a/b appears in the ath row and bth column of the square (and in many other places). When constructing A, note that a/b will not be deleted unless it has already appeared elsewhere in A. Therefore every positive fraction a/b is in the set A.

To finish things off, extend the function to all rational numbers by

g : Z → Q : n ↦ f(n) if n > 0,   0 if n = 0,   −f(−n) if n < 0.

Now g : Z → Q is a bijection, from which we deduce that |Q| = |Z| = ℵ0.

This result should surprise you! Any sensible person should feel that there are far, far more rational numbers than integers, and yet the two sets have the same cardinality. Bizarre.
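The explicit formula from the proof of Theorem 8.5 is easy to play with. The following sketch implements it and checks, on a finite range, that the values 0, 1, −1, 2, −2, . . . each appear exactly once; the ranges are arbitrary and the check is of course not a proof.

```python
# The explicit bijection f: N -> Z from the proof of Theorem 8.5, with a finite
# check that the values 0, 1, -1, 2, -2, ... appear exactly once each.

def f(n: int) -> int:
    return n // 2 if n % 2 == 0 else -(n - 1) // 2

values = [f(n) for n in range(1, 12)]
print(values)                              # [0, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5]
assert len(set(values)) == len(values)     # no repeats, consistent with injectivity
# the first 201 inputs hit every integer in [-100, 100], consistent with surjectivity
assert set(f(n) for n in range(1, 202)) == set(range(-100, 101))
```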
There are other denumerable sets that appear to be even larger. For example, we can show that N × N is denumerable (using almost the same proof as for Q⁺, except that there are no repeats to delete). For a much larger-seeming denumerable set, consider the set of algebraic numbers:

{x ∈ R : there exists a polynomial p with integer coefficients such that p(x) = 0}.

Algebraic numbers are the zeros of polynomials with integer coefficients. Clearly every rational number a/b is algebraic, since it satisfies p(x) = 0 for p(x) = bx − a. There are many more algebraic numbers than rational numbers: e.g. 2^(1/5) − 3 is algebraic since it is a root of the polynomial p(x) = (x + 3)^5 − 2. Not all real numbers are algebraic, however: those which aren't, such as π and e, are termed transcendental.

Exercises

8.1.1 Refresh your proof skills by proving that the following functions are bijections:
(a) f : N → 2N : n ↦ 2n.
(b) g : N → N≥2 : n ↦ n + 1.

8.1.2 Construct a function f : N → Z≥−3 = {−3, −2, −1, 0, 1, 2, 3, 4, . . .} which proves that the latter set is denumerable: you must show that your function is a bijection.

8.1.3 Prove that the set 3Z + 2 = {3n + 2 : n ∈ Z} is denumerable.

8.1.4 Show that the set of all triples of the form (n², 5, n + 2) with n ∈ 3Z is denumerable by explicitly providing a bijection with a denumerable set A. (You must check that the set A is denumerable, and that your map is indeed a bijection.)

8.1.5 Imagine a hotel with infinitely many rooms: Room 1, Room 2, Room 3, Room 4, etc. Show that, even if the hotel is full, the guests may be re-accommodated so that there is always a room free for one additional guest. Hint: consider the function f : N → N : n ↦ n + 1.

8.1.6 Prove that A ⊆ B =⇒ |A| ≤ |B|. (You need an injective function f : A → B.)

8.1.7 Prove Theorem 8.2. (You need little more than Theorem 4.15 on the composition of bijective functions.)

8.1.8 Prove that the set N × N is denumerable. You should base your proof on Theorem 8.6.

8.1.9 We know that Q is denumerable, and we saw (Theorem 8.6) that there must exist a bijective function f : N → Q. Show that g : N × N → Q × Q defined by g(m, n) = (f(m), f(n)) is a bijection. Appeal to the previous question to show that Q × Q is denumerable.

8.1.10 Here we consider the n = 0 case of Theorem 8.4. Recall the definition of function in Section 7.2.
(a) If |A| = 0, then A = ∅. Suppose that f : ∅ → N is a function. Use Definition 7.4 to prove that f = ∅.
(b) State what it means, in the language of Definition 7.4, for a function f : A → N to be injective. Show that f = ∅ is an injective function.
(c) Suppose that B is a set with |B| ≥ 1. Prove by contradiction that there are no functions h : B → ∅. Conclude that 0 < ℵ0.

8.1.11 Suppose that the set An is denumerable for each n ∈ N. We may then list the elements of each set: An = {an1, an2, an3, an4, . . .}. Now list the elements of the sets A1, A2, A3, . . . as follows:
A1 = {a11, a12, a13, a14, . . .}
A2 = {a21, a22, a23, a24, . . .}
A3 = {a31, a32, a33, a34, . . .}
. . .
Use this construction to prove that ⋃n∈N An is a denumerable set. This result is often stated: 'A countable union of countable sets is countable.'

8.1.12 (Hard!) In this question we prove the converse of Theorem 8.4: if |A| < ℵ0, then A is a finite set. Otherwise said, ℵ0 is the smallest infinite cardinal. We prove by contradiction. Suppose that A is an infinite set such that |A| < ℵ0. Then there exists an injective function f : A → N. List the elements of the image of f in increasing order: Im f = {n1, n2, n3, . . .}.
(a) Prove that Im f is an infinite set.
(b) Show that for all k ∈ N, there exists a unique ak ∈ A satisfying f(ak) = nk.
(c) Define g : N → A by g(k) = ak. Prove that g is a bijection.
(d) Why do we obtain a contradiction?

8.2 Uncountable Sets

You might think, since Q seems so large, that there can't be any sets with strictly larger cardinality. But we haven't yet thought about the set of real numbers.

Definition 8.7. A set A is uncountable if |A| > ℵ0, that is, if there exists an injection f : N → A but no bijection g : N → A.

Theorem 8.8. The interval [0, 1] of real numbers is uncountable.

We denote the cardinality of the interval [0, 1] by the symbol c, for continuum. The theorem may therefore be written c > ℵ0.

Proof. First we require an injective function f : N → [0, 1]. The function defined by f(n) = 1/n clearly fits the bill, for f(n) = f(m) =⇒ 1/n = 1/m =⇒ n = m. Therefore ℵ0 ≤ c.

Next, we prove that there exists no bijection from N to [0, 1], arguing by contradiction. Suppose that g : N → [0, 1] is a bijection and consider the sequence of values g(1), g(2), g(3), . . . These are real numbers between 0 and 1, hence they may all be expressed as decimals of the form 0.a1a2a3a4a5···, where each ai ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}.[a] We can write:

g(1) = 0.b11 b12 b13 b14 b15 b16 ···
g(2) = 0.b21 b22 b23 b24 b25 b26 ···
g(3) = 0.b31 b32 b33 b34 b35 b36 ···
g(4) = 0.b41 b42 b43 b44 b45 b46 ···
g(5) = 0.b51 b52 b53 b54 b55 b56 ···
. . .

By assumption, g is bijective, so it is certainly surjective. It follows that all of the numbers in [0, 1] appear in the above list of decimals. Since g is injective, there are no repeats in the list. Now define a new decimal c = 0.c1c2c3c4c5···, where

cn = 1 if bnn ≠ 1,   cn = 2 if bnn = 1.

c is a non-terminating decimal whose digits are only 1's and 2's: it therefore has no other decimal representation. Since c disagrees with g(n) at the nth decimal place, we have c ≠ g(n) for all n ≥ 1. Hence c is not in the above list. However c ∈ [0, 1] and g is surjective with Im g = [0, 1], so we have a contradiction. We conclude that c ≠ ℵ0. Putting this together with the first part of the proof, we see that c > ℵ0.

[a] Certain numbers, like 0.121212···, have a unique decimal representation. Others, like 0.317 = 0.316999···, have both a finite decimal representation and an infinite representation that ultimately becomes an infinite sequence of 9's. For the purposes of this proof it does not matter which representation is chosen when there is a choice. We are forced, however, to take 1 = 0.999999···, due to our insistence that all elements are written with zero units.

The interval [0, 1] has a strictly larger cardinality than the set of integers. Since [0, 1] ⊆ R, it follows immediately that the real numbers are also uncountable. Indeed we shall see in a moment that the real numbers have cardinality c, as does any interval (of positive width). More amazingly, the Cantor middle-third set (page 111) also has cardinality c, despite seeming vanishingly small.
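The heart of the proof is the diagonal construction of c, and that mechanism can be played with on finite data: given any finite list of digit strings, the construction produces a string that differs from the ith entry in its ith digit, so it cannot occur in the list. The sample digit strings below are arbitrary; the theorem itself, of course, concerns infinite lists.

```python
# The mechanism of the diagonal argument on finite data: build a digit string
# that differs from the i-th entry of a list in its i-th digit, so it cannot
# appear in the list.  (The theorem is about infinite lists; this only
# illustrates the construction of c.)

def diagonal(rows):
    # digit rule from the proof: use 1 unless the diagonal digit is 1, then use 2
    return "".join("2" if row[i] == "1" else "1" for i, row in enumerate(rows))

rows = ["1415926535", "7182818284", "4142135623", "5772156649"]
c = diagonal(rows)
print(c)                                             # "2211"
assert all(c[i] != rows[i][i] for i in range(len(rows)))
```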
More advanced ideas

Our countable and uncountable examples are merely scratching the foothills of a truly weird subject. Here are a couple more ideas. The following theorem is very useful for being able to compare cardinalities. It allows us to prove that two sets have the same cardinality without explicitly constructing bijective functions. Injective functions are usually much easier to build.

Theorem 8.9 (Cantor–Schröder–Bernstein). If |A| ≤ |B| and |B| ≤ |A|, then |A| = |B|.

The theorem seems like it should be obvious, but pause for a moment: it is not a result about numbers! A and B are sets, and so the theorem must be understood in the context of Definition 8.1. In this language the theorem becomes: Suppose that there exist injective functions f : A → B and g : B → A. Then there exists a bijective function h : A → B.

The proof is beautiful, though a little long to reproduce here. If you are interested, it can be found in any text on set theory. The applications of the theorem are more important for our purposes.

Theorem 8.10. The interval (0, 1) has cardinality c.

It is possible to define a bijection h : (0, 1) → [0, 1], though it is extremely messy. Instead we construct two injections.

Proof. f : (0, 1) → [0, 1] : x ↦ x is clearly an injection, whence |(0, 1)| ≤ |[0, 1]| = c. Now define g : [0, 1] → (0, 1) : x ↦ x/2 + 1/4. g is certainly injective (g isn't surjective, since Im(g) = [1/4, 3/4] ≠ (0, 1)), and so c ≤ |(0, 1)|. By the Cantor–Schröder–Bernstein Theorem, the sets (0, 1) and [0, 1] have the same cardinality c.

By a similar trick, covered in the Exercises, one can see that R also has cardinality c.

For a final idea, we prove Cantor's Theorem, which says that the power set of a set always has a strictly larger cardinality than the original set. In Theorem 6.6 we saw that if A is finite, then |P(A)| = 2^|A|, so we already believe that Cantor's Theorem is true for finite sets. The proof we shall give also works for infinite sets. The main implication of this is that there is no largest set! We can always make a larger set simply by taking the power set of what we already have: now rinse and repeat! For example, P(R) has larger cardinality than R. If you want a set with larger cardinality, why not take P(P(R))? Or P(P(P(R)))? There is no limit to the cardinality of sets.

Theorem 8.11 (Cantor). If A is any set, then |A| ⪇ |P(A)|.

Proof. We must show two things:
• ∃f : A → P(A) which is injective.
• ∄g : A → P(A) which is bijective.

For the first, note that f : a ↦ {a} is a suitable injective function. (This even works if A = ∅, for then f is itself the 'empty' function! If this sort of thinking disturbs you, don't worry: we have already proved Cantor's Theorem for all finite sets, so we only need the proof to work for infinite sets.)

Now suppose for a contradiction that ∃g : A → P(A) which is bijective. For every a ∈ A, g(a) is a subset of A. Consider the set

X = {a ∈ A : a ∉ g(a)}.

This is a difficult set to think about. Before proceeding, let us consider an example. Suppose that g : {1, 2} → P({1, 2}) is defined by g(1) = {1, 2}, g(2) = {1}. Then 1 ∈ g(1) and 2 ∉ g(2), whence the above set is X = {2}. Since we are trying to show that a bijection g as in the proof does not exist, it is important to note that the function g in our example is not bijective!

Proof continued. By assumption, g is bijective, hence it is certainly surjective. Because Im g = P(A), the set X is in the image of g. Otherwise said, there exists â ∈ A such that g(â) = X. We ask whether â is an element of X. Think carefully about the definition of X, and observe that

â ∈ X ⇐⇒ â ∉ g(â)   (by the definition of X)
      ⇐⇒ â ∉ X      (since X = g(â))

Look at what we have: â ∈ X ⇐⇒ â ∉ X. This is clearly a contradiction! We conclude that no bijection g : A → P(A) exists, and so |A| ⪇ |P(A)|.

Cantor's Theorem played a large part in pushing set theory towards axiomatization.
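For a tiny set you can even check Cantor's Theorem exhaustively. The sketch below enumerates every function g : A → P(A) for A = {1, 2} and verifies that the diagonal set X = {a ∈ A : a ∉ g(a)} is never in the image of g, so no such g is surjective. The enumeration is only feasible because A is finite; the point of the proof is that the same diagonal trick works for any set.

```python
# Cantor's diagonal set on a tiny example: for A = {1, 2}, enumerate every
# function g: A -> P(A) and check that X = {a in A : a not in g(a)} is never in
# the image of g -- so no g can be surjective, let alone bijective.

from itertools import product

A = [1, 2]
subsets = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]   # P(A)

for images in product(subsets, repeat=len(A)):    # one choice of g(a) per a in A
    g = dict(zip(A, images))
    X = frozenset(a for a in A if a not in g[a])
    assert X not in g.values()                    # the diagonal set is always missed

print("checked", len(subsets) ** len(A), "functions g: A -> P(A)")   # 16
```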
Here is a conundrum motivated by the theorem. If a 'set' is just a collection of objects, then we may consider the 'set of all sets.' Call this A. Now consider the power set of A. Since P(A) is a set of sets, it must be a subset of A, whence |P(A)| ≤ |A|. However, by Cantor's Theorem, we have |A| ⪇ |P(A)|. The conclusion is the palpable contradiction |P(A)| ⪇ |P(A)|!

The remedy is a thorough definition of 'set' which prevents the collection of all sets from being a set. This is where axiomatic set theory, and a completely new approach, begins.

Exercises

8.2.1 You may assume that [0, 1] has cardinality c.
(a) Construct an explicit bijection f : [0, 1] → [3, 8] which proves that the interval [3, 8] also has cardinality c. Try a linear function mapping the endpoints of [0, 1] to the endpoints of [3, 8].
(b) Let a, b ∈ R with a < b. Generalizing the previous example, construct a bijection which proves that the closed interval [a, b] has cardinality c.

8.2.2 (a) Suppose that g : {1, 2, 3, 4} → P({1, 2, 3, 4}) is defined by
g(1) = {1, 2, 3}, g(2) = {1, 4}, g(3) = ∅, g(4) = {2, 4}.
Compute the set X = {a ∈ {1, 2, 3, 4} : a ∉ g(a)}.
(b) Repeat part (a) for g : N → P(N) : n ↦ {x ∈ 2N : x ≤ n}.

8.2.3 The proof of Cantor's Theorem makes use of a construction similar to Russell's Paradox. Let X be the set of all sets which are not members of themselves: explicitly, X = {A : A ∉ A}.
(a) Assume that X is a set, and use it to deduce a contradiction: ask yourself if X is a member of itself.
(b) Russell's paradox (and indeed the proof of Cantor's Theorem) is one avatar of an ancient logical paradox which appears in many guises. For example, suppose that a town has one hairdresser, and suppose that the hairdresser is the person who cuts the hair of all the people, and only those people, who do not cut their own hair. Who cuts the hairdresser's hair? Can you explain the connection with Russell's paradox/Cantor's Theorem?
The point of Russell's paradox is that we need a definition of 'set' which prevents objects like X from being sets.

8.2.4 Recall the Cantor set as described in the notes, where we proved that C is the set of all numbers in [0, 1] possessing a ternary expansion consisting only of zeros and twos. Modeling your answer on the proof that the interval [0, 1] is uncountable, prove that C is uncountable.

8.2.5 (a) Show that |(0, 1)| ≤ |R \ N| ≤ |R|.
(b) Construct a bijection f : (0, 1) → (−π/2, π/2). (Try a linear function.)
(c) Show that g : (−π/2, π/2) → R : x ↦ tan x is a bijection.
(d) Use the Cantor–Schröder–Bernstein Theorem to conclude that |R \ N| = |R| = c.
Classical Theory Fields by Landau Lifshitz, Used - AbeBooks
===============

Classical Theory Fields by Landau Lifshitz, Used (21 results). Author: landau lifshitz. Title: classical theory fields. Selected listings:

- The Classical Theory of Fields: Volume 2. Landau, L. D., and Lifshitz, E. M. Butterworth-Heinemann, 1980. ISBN 10: 0750627689 / ISBN 13: 9780750627689. Seller: Lost Books, Austin, TX, U.S.A. Used trade paperback, very good; 4th revised ed., 444 p., Course of Theoretical Physics Series, 2. US$ 65.00.
- The Classical Theory of Fields. Landau, L. D. and Lifshitz, E. M.; Hamermesh, Morton (translator). Addison-Wesley, Reading/Menlo Park/London, 1971. Seller: Second Story Books, Rockville, MD, U.S.A. Used hardcover, third revised English edition; very good minus, publisher's yellow cloth with gilt spine lettering, slight wear, previous owner's information on the front free endpaper. US$ 37.50.
- The Classical Theory of Fields (Fourth Revised English Edition, with 1987 corrections). Landau, L. D., and Lifshitz, E. M. Pergamon Press / Addison-Wesley, Oxford/Reading, 1987. Seller: Arroyo Seco Books, Pasadena, CA, U.S.A. Softcover, near fine; fourth edition, revised, sixth printing, first printing with these further 1987 corrections; "the chapters concerning the theory of the gravitational field have been revised and expanded" since the third edition; small ownership name. US$ 75.00.
- The Classical Theory of Fields. Revised Second Edition (Course of Theoretical Physics, Vol. 2). L. D. Landau; E. M. Lifshitz. Addison-Wesley. Seller: ThriftBooks-Dallas, Dallas, TX, U.S.A. Hardcover, good, no jacket; pages may have notes/highlighting. US$ 53.98, free shipping.
- The Classical Theory of Fields. 4th Revised English Edition. Landau, Lev Davidovich; Lifshitz, E. M. Butterworth-Heinemann, 1983. ISBN 10: 0080250726 / ISBN 13: 9780080250724. Seller: Rob the Book Man, Vancouver, WA, U.S.A. Trade paperback, very good. US$ 85.00.
- The Classical Theory of Fields. Third Revised English Edition. Landau, L. D.; Lifshitz, E. M.; Hamermesh, Morton (translator). Addison-Wesley, 1971. Seller: Kuenzig Books (ABAA/ILAB), Topsfield, MA, U.S.A. Cloth, very good; xii, 374 pages, publisher's yellow cloth, previous owner name stamped on front flyleaf; Volume 2 in the Course of Theoretical Physics. US$ 55.00.
- The Classical Theory of Fields: Course of Theoretical Physics, Volume 2 of 9. L. D. Landau & E. M. Lifshitz. Pergamon Press, New York, 1975. ISBN 10: 0080181767 / ISBN 13: 9780080181769. Seller: Black Cat Hill Books, Oregon City, OR, U.S.A. Hardcover, fourth revised edition, good; mild rubbing, worn corner tips, one dog-eared page. US$ 120.00.
- The Classical Theory of Fields: Course of Theoretical Physics Vol. 2. L. D. Landau, E. M. Lifshitz. Butterworth-Heinemann. ISBN 10: 7506242567 / ISBN 13: 9787506242561. Seller: BookHolders, Towson, MD, U.S.A. Paperback, fourth edition, good; no underlining or writing; 406 pages. US$ 47.55.
- The Classical Theory of Fields: Course of Theoretical Physics. L. D. Landau, E. M. Lifshitz. Pergamon Press, 1959. Seller: Burnt Biscuit Books, Newnan, GA, U.S.A. Hardcover, good; 2nd printing 1959, Addison-Wesley, no dust jacket, some underlining, slight smoke smell. US$ 64.99.
- The Classical Theory of Fields. With a preface to this second English edition. Landau, L. D. and E. M. Lifshitz. Pergamon Press, Oxford, 1962. Seller: Chiemgauer Internet Antiquariat GbR, Altenmarkt, Germany. Used, fine; ix, 404 pages with index; only the binding shows slight signs of wear and the inside cover has traces of a removed bookplate, otherwise a very good copy from the library of an important German physicist. US$ 47.52.
- The Classical Theory of Fields. Landau, L. D. and Lifshitz, E. M. Pergamon Press, 1962. Seller: Imaginal Books, Sardent, France. Hardcover, 2nd edition, good, no jacket. US$ 36.08.
- The Classical Theory of Fields, Volume 2 of Course of Theoretical Physics. Landau, L. D. / Lifshitz, E. M. Pergamon Press/Addison-Wesley, 1962. Seller: Virtual Books, Vancouver, WA, U.S.A. Hardcover, 2nd edition, very good in a fair dust jacket; pages clean and unmarked. US$ 249.00.
- The Classical Theory of Fields, 3rd Revised Edition. Landau, L. D. & E. M. Lifshitz. Pergamon Press, 1971. Seller: Treehorn Books, Santa Rosa, CA, U.S.A. Hardcover, very good, no dust jacket; 374 pages. US$ 400.00.
- Classical Theory of Fields: Physics Selection, Revised New Edition [Japanese Edition]. Landau, Lifshitz; translated by Toru Hiroshige and Toshihiko Tsuneto. Seller: Librairie Chat, Beijing, China. Used, fine; 428 p., 22 cm. US$ 60.00.
111
Published Time: 2003-01-18T23:35:39Z Shock absorber - Wikipedia
From Wikipedia, the free encyclopedia

[Figure: Miniature oil-filled coilover shock components for scale cars]

A shock absorber or damper is a mechanical or hydraulic device designed to absorb and damp shock impulses. It does this by converting the kinetic energy of the shock into another form of energy (typically heat) which is then dissipated. Most shock absorbers are a form of dashpot (a damper which resists motion via viscous friction).

Description

Pneumatic and hydraulic shock absorbers are used in conjunction with cushions and springs. An automobile shock absorber contains spring-loaded check valves and orifices to control the flow of oil through an internal piston (see below). One design consideration, when designing or choosing a shock absorber, is where that energy will go. In most shock absorbers, energy is converted to heat inside the viscous fluid. In hydraulic cylinders, the hydraulic fluid heats up, while in air cylinders, the hot air is usually exhausted to the atmosphere. In other types of shock absorbers, such as electromagnetic types, the dissipated energy can be stored and used later.
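The energy bookkeeping described above can be made concrete with a toy model (not taken from the article): treat the sprung mass as a mass on a spring with a linear dashpot whose force is proportional to velocity, and track how much energy the dashpot turns into heat. All parameter values below are invented for illustration.

```python
# Minimal damped mass-spring sketch: one sprung mass, a linear spring and
# a linear dashpot (force = -c * v).  Parameters are illustrative only.
m = 300.0    # sprung mass, kg (assumed)
k = 20000.0  # spring rate, N/m (assumed)
c = 1500.0   # damping coefficient, N*s/m (assumed)

x, v = 0.05, 0.0        # start 5 cm from equilibrium, as if after a bump
dt, heat = 0.001, 0.0   # time step (s) and energy dissipated so far (J)

for _ in range(5000):   # about 5 s of motion, explicit Euler integration
    a = (-k * x - c * v) / m
    heat += c * v * v * dt   # dashpot power c*v^2 is what becomes heat
    v += a * dt
    x += v * dt

# With c > 0 the displacement decays and `heat` approaches the initial
# spring energy 0.5*k*0.05**2 = 25 J; with c = 0 it would oscillate forever.
print(f"x = {x:.4f} m, dissipated = {heat:.1f} J")
```

The c*v**2 term is exactly the kinetic energy of the shock being converted to heat, as described above.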
In general terms, shock absorbers help cushion vehicles on uneven roads and keep wheels in contact with the ground.

Vehicle suspension

Main article: Suspension (vehicle)

In a vehicle, shock absorbers reduce the effect of traveling over rough ground, leading to improved ride quality and vehicle handling. While shock absorbers serve the purpose of limiting excessive suspension movement, their intended main purpose is to damp spring oscillations. Shock absorbers use valving of oil and gases to absorb excess energy from the springs. Spring rates are chosen by the manufacturer based on the weight of the vehicle, loaded and unloaded. Some people use shocks to modify spring rates, but this is not their correct use. Along with hysteresis in the tire itself, they damp the energy stored in the motion of the unsprung weight up and down. Effective wheel bounce damping may require tuning shocks to an optimal resistance. Spring-based shock absorbers commonly use coil springs or leaf springs, though torsion bars are used in torsional shocks as well. Ideal springs alone, however, are not shock absorbers, as springs only store and do not dissipate or absorb energy. Vehicles typically employ both hydraulic shock absorbers and springs or torsion bars. In this combination, "shock absorber" refers specifically to the hydraulic piston that absorbs and dissipates vibration. Composite suspension systems are now used mainly in two-wheelers, and leaf springs made of composite material are used in four-wheelers.

Construction

Shock absorbers are an important part of car suspension, designed to increase comfort, stability and overall safety. The shock absorber, produced with precision and engineering skill, has many important features. The most common type is the hydraulic shock absorber, which usually includes a piston, a cylinder, and an oil-filled chamber. The piston is connected to the piston rod, which extends into the cylinder and divides the cylinder into two parts. One chamber is filled with hydraulic oil, while the other contains compressed gas or air. When there is an impact or vibration in the vehicle, the piston moves into the cylinder, forcing the hydraulic fluid through small holes, creating resistance and dissipating energy in the form of heat. This damps oscillations, reducing further bouncing or wobble of the car. Shock construction requires a balance of features such as piston design, fluid viscosity, and overall size of the unit to ensure performance. As technology developed, other types of shock absorbers emerged, including gas and electric shock absorbers, which provide improved control and flexibility. The design and manufacture of shock absorbers are constantly evolving, driven by the continuous improvement of vehicle dynamics and passenger comfort.

Early history

In common with carriages and railway locomotives, most early motor vehicles used leaf springs. One of the features of these springs was that the friction between the leaves offered a degree of damping, and in a 1912 review of vehicle suspension, the lack of this characteristic in helical springs was the reason it was "impossible" to use them as main springs. However, the amount of damping provided by leaf spring friction was limited and variable according to the condition of the springs, and whether wet or dry. It also operated in both directions.
Motorcycle front suspension adopted coil sprung Druid forks from about 1906, and similar designs later added Friction disk shock absorber rotary friction dampers, which damped both ways - but they were adjustable (e.g. 1924 Webb forks). These friction disk shock absorber s was also fitted to many cars. One of the problems with motor cars was the large variation in sprung weight between lightly loaded and fully loaded, especially for the rear springs. When heavily loaded the springs could bottom out, and apart from fitting rubber 'bump stops', there were attempts to use heavy main springs with auxiliary springs to smooth the ride when lightly loaded, which were often called 'shock absorbers'. Realizing that the spring and vehicle combination bounced with a characteristic frequency, these auxiliary springs were designed with a different period, but were not a solution to the problem that the spring rebound after striking a bump could throw you out of your seat. What was called for was damping that operated on the rebound. Although C.L. Horock came up with a design in 1901 that had hydraulic damping, it worked in one direction only. It does not seem to have gone into production right away, whereas mechanical dampers such as the Gabriel Snubber started being fitted in the late 1900s (also the similar Stromberg Anti-Shox). These used a belt coiled inside a device such that it freely wound in under the action of a coiled spring but met friction when drawn out. Gabriel Snubbers were fitted to an 11.9HP Arrol-Johnston car which broke the 6 hour Class B record at Brooklands in late 1912, and the Automator journal noted that this snubber might have a great future for racing due to its light weight and easy fitment. French engineers Gaston Dumond and Ernest Mathis patented two different hydraulic shock absorbers with rectilinear motion in 1906–1907, but those were not commercially successful. One of the earliest hydraulic dampers to go into production was the Telesco Shock Absorber, exhibited at the 1912 Olympia Motor Show and marketed by Polyrhoe Carburettors Ltd. This contained a spring inside the telescopic unit like the pure spring type 'shock absorbers' mentioned above, but also oil and an internal valve so that the oil damped in the rebound direction. The Telesco unit was fitted at the rear end of the leaf spring, in place of the rear spring to chassis mount, so that it formed part of the springing system, albeit a hydraulically damped part. This layout was presumably selected as it was easy to apply to existing vehicles, but it meant the hydraulic damping was not applied to the action of the main leaf spring, but only to the action of the auxiliary spring in the unit itself. The first production hydraulic dampers to act on the main leaf spring movement were probably those based on an original concept by Maurice Houdaille patented in 1908 and 1909. These used a lever arm which moved hydraulically damped vanes inside the unit. The main advantage over the friction disk dampers was that it would resist sudden movement but allow slow movement, whereas the rotary friction dampers tended to stick and then offer the same resistance regardless of speed of movement. There appears to have been little progress on commercialising the lever arm shock absorbers until after World War I, after which they came into widespread use, for example as standard equipment on the 1927 Ford Model A and manufactured by Houde Engineering Corporation of Buffalo, NY. 
Types of vehicle shock absorbers [edit] Diagram of the main components of a twin-tube and mono-tube shock absorber Most vehicular shock absorbers are either twin-tube or mono-tube types with some variations on these themes. Twin-tube [edit] Basic twin-tube [edit] Also known as a "two-tube" shock absorber, this device consists of two nested cylindrical tubes, an inner tube that is called the "working tube" or the "pressure tube", and an outer tube called the "reserve tube". At the bottom of the device on the inside is a compression valve or base valve. When the piston is forced up or down by bumps in the road, hydraulic fluid moves between different chambers via small holes or "orifices" in the piston and via the valve, converting the "shock" energy into heat which must then be dissipated. Twin-tube gas charged [edit] Variously known as a "gas cell two-tube" or similarly named design, this variation represented a significant advancement over the basic twin-tube form. Its overall structure is very similar to the twin-tube, but a low-pressure charge of nitrogen gas is added to the reserve tube. The result of this alteration is a dramatic reduction in "foaming" or "aeration", the undesirable outcome of a twin-tube overheating and failing which presents as foaming hydraulic fluid dripping out of the assembly. Twin-tube gas charged shock absorbers represent the vast majority of original modern vehicle suspension installations. Position sensitive damping [edit] Often abbreviated simply as "PSD", this design is another evolution of the twin-tube shock. In a PSD shock absorber, which still consists of two nested tubes and still contains nitrogen gas, a set of grooves has been added to the pressure tube. These grooves allow the piston to move relatively freely in the middle range of travel (i.e., the most common street or highway use, called by engineers the "comfort zone") and to move with significantly less freedom in response to shifts to more irregular surfaces when upward and downward movement of the piston starts to occur with greater intensity (i.e., on bumpy sections of roads— the stiffening gives the driver greater control of movement over the vehicle so its range on either side of the comfort zone is called the "control zone"). This advance allowed car designers to make a shock absorber tailored to specific makes and models of vehicles and to take into account a given vehicle's size and weight, its maneuverability, its horsepower, etc. in creating a correspondingly effective shock. Acceleration sensitive damping [edit] The next phase in shock absorber evolution was the development of a shock absorber that could sense and respond to not just situational changes from "bumpy" to "smooth" but to individual bumps in the road in a near instantaneous reaction. This was achieved through a change in the design of the compression valve, and has been termed "acceleration sensitive damping" or "ASD". Not only does this result in a complete disappearance of the "comfort vs. control" tradeoff, it also reduced pitch during vehicle braking and roll during turns. However, ASD shocks are usually only available as aftermarket changes to a vehicle and are only available from a limited number of manufacturers. Coilover [edit] Main article: Coilover Coilover shock absorbers are usually a kind of twin-tube gas charged shock absorber inside the helical road spring. They are common on motorcycles and scooter rear suspensions, and widely used on front and rear suspensions in cars. 
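As a rough way to picture the position sensitive damping described above: the grooves make the effective damping coefficient depend on where the piston sits in its stroke, soft in the central comfort zone and firmer in the outer control zone. The sketch below is a generic toy model of that idea, not any manufacturer's actual valve law; the zone width and the coefficient values are invented.

```python
# Toy position-sensitive damping law: soft in the grooved mid-stroke
# "comfort zone", firmer in the outer "control zone".  Values invented.
def psd_damper_force(position_m: float, velocity_ms: float) -> float:
    comfort_half_width = 0.03  # +/- 3 cm of grooved travel (assumed)
    c_comfort = 1200.0         # N*s/m while the grooves bypass oil (assumed)
    c_control = 2600.0         # N*s/m once the piston leaves the grooves
    c = c_comfort if abs(position_m) <= comfort_half_width else c_control
    return -c * velocity_ms    # damper force always opposes piston motion

print(psd_damper_force(0.01, 0.3))   # small highway ripple: soft response
print(psd_damper_force(0.06, 0.3))   # large bump, end of travel: firmer
```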
Mono-tube

[Figure: Hydraulic mono-tube shock absorber in different operational situations: 1) driving slowly, or adjustments open; 2) as in 1, but extension immediately after compression; 3) driving fast, or adjustments closed; bubbles of depression appear, which can lead to cavitation; 4) as in 3, but extension immediately after compression. The volume change caused by the stem is taken into account.]

[Figure: Absorber with a rigidly connected remote reservoir, compared to most shock absorbers. It uses a diaphragm instead of a membrane, and does not contain a control valve for expansion of the pneumatic chamber. Parts: 1) sheath and gas tank; 2) stem; 3) snap rings; 4) spring bearing plate; 5) spring; 6) end cap and preload adjustment; 7) gas cap, present in versions with or without gas valve (inverted profile); 8) mobile diaphragm; 9) pad switch (compression); 10) wiper; 11) oil seal assembly and shock seal; 12) negative buffer pad or limit switch (extension); 13) piston with sliding blades and seal.]

The principal design alternative to the twin-tube form has been the mono-tube shock absorber, which was considered a revolutionary advancement when it appeared in the 1950s. As its name implies, the mono-tube shock, which is also a gas-pressurized shock and also comes in a coilover format, consists of only one tube, the pressure tube, though it has two pistons. These pistons are called the working piston and the dividing or floating piston, and they move in relative synchrony inside the pressure tube in response to changes in road smoothness. The two pistons also completely separate the shock's fluid and gas components. The mono-tube shock absorber is consistently a much longer overall design than the twin-tubes, making it difficult to mount in passenger cars designed for twin-tube shocks. However, unlike the twin-tubes, the mono-tube shock can be mounted either way; it does not have any directionality. It also does not have a compression valve, whose role has been taken up by the dividing piston, and although it contains nitrogen gas, the gas in a mono-tube shock is under high pressure (260-360 psi or so), which can actually help it to support some of the vehicle's weight, something no other shock absorber is designed to do. Mercedes became the first auto manufacturer to install mono-tube shocks as standard equipment on some of their cars starting in 1958. They were manufactured by Bilstein, which patented the design; the first units appeared in 1954. Because the design was patented, no other manufacturer could use it until 1971, when the patent expired.

Spool valve

Spool valve dampers are characterized by the use of hollow cylindrical sleeves with machined-in oil passages, as opposed to conventional flexible discs or shims. Spool valving can be applied with mono-tube, twin-tube, or position-sensitive packaging, and is compatible with electronic control. Primary among the benefits cited in Multimatic's 2010 patent filing is the elimination of the performance ambiguity associated with flexible shims, resulting in mathematically predictable, repeatable, and robust pressure-flow characteristics.

Remote reservoir/piggy-back

An extra tube or container of oil connected to the oil compartment of the main shock via a flexible pipe (remote reservoir) or an inflexible pipe (piggy-back shock). It increases the amount of oil a shock can carry without increasing its length or thickness.
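To put a rough number on the claim that the high-pressure gas charge can help support some of the vehicle's weight: in a mono-tube unit the static extension force on the rod is approximately the gas pressure acting on the rod's cross-sectional area. The rod diameter below is an assumed, illustrative size, not a figure from the article.

```python
import math

# Rough static gas-preload estimate for a mono-tube damper (illustrative).
pressure_psi = 300.0        # inside the 260-360 psi range quoted above
rod_diameter_m = 0.0127     # a 12.7 mm rod, an assumed size

pressure_pa = pressure_psi * 6894.76                 # psi -> Pa
rod_area_m2 = math.pi * (rod_diameter_m / 2.0) ** 2
preload_n = pressure_pa * rod_area_m2                # net force pushing the rod out

print(f"gas preload ~ {preload_n:.0f} N (~{preload_n / 9.81:.0f} kgf)")
# Roughly 260 N, i.e. a few tens of kilograms-force: a modest but real
# contribution toward holding up that corner of the car.
```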
Bypass shock [edit] Allows each section of suspension travel to have an independent suspension tune. Bypass shock, double bypass shock, triple bypass shock etc. Triple bypass would have a separate set of suspension tuning controls for each of its three sections of suspension travel: initial travel, mid-travel, full-travel. Theoretical approaches [edit] There are several commonly used principles behind shock absorption: Hysteresis of structural material, for example the compression of rubber disks, stretching of rubber bands and cords, bending of steelsprings, or twisting of torsion bars. Hysteresis is the tendency for otherwise elastic materials to rebound with less force than was required to deform them. Simple vehicles with no separate shock absorbers are damped, to some extent, by the hysteresis of their springs and frames. Dry friction as used in wheel brakes, by using disks (classically made of leather) at the pivot of a lever, with friction forced by springs. Used in early automobiles such as the Ford Model T, up through some British cars of the 1940s and on the French Citroën 2CV in the 1950s. Although now considered obsolete, an advantage of this system is its mechanical simplicity; the degree of damping can be easily adjusted by tightening or loosening the screw clamping the disks, and it can be easily rebuilt with simple hand tools. A disadvantage is that the damping force tends not to increase with the speed of the vertical motion. Further information: Friction disk shock absorber Solid state, tapered chain shock absorbers, using one or more tapered, axial alignment(s) of granular spheres, typically made of metals such as nitinol, in a casing. , Fluid friction, for example the flow of fluid through a narrow orifice (hydraulics), constitutes the vast majority of automotive shock absorbers. This design first appeared on Morsracing cars in 1902. One advantage of this type is, by using special internal valving, the absorber may be made relatively soft to compression (allowing a soft response to a bump) and relatively stiff to extension, controlling "rebound", which is the vehicle response to energy stored in the springs; similarly, a series of valves controlled by springs can change the degree of stiffness according to the velocity of the impact or rebound. Specialized shock absorbers for racing purposes may allow the front end of a dragster to rise with minimal resistance under acceleration, then strongly resist letting it settle, thereby maintaining a desirable rearward weight distribution for enhanced traction. Further information: Lever arm shock absorber Compression of a gas, for example pneumatic shock absorbers, which can act like springs as the air pressure is building to resist the force on it. Enclosed gas is compressible, so equipment is less subject to shock damage. This concept was first applied in series production on Citroën cars in 1954. Today, many shock absorbers are pressurized with compressed nitrogen, to reduce the tendency for the oil to cavitate under heavy use. This causes foaming which temporarily reduces the damping ability of the unit. In very heavy duty units used for racing or off-road use, there may even be a secondary cylinder connected to the shock absorber to act as a reservoir for the oil and pressurized gas. In aircraft landing gear, air shock absorbers may be combined with hydraulic damping to reduce bounce. Such struts are called oleo struts (combining oil and air) . 
Inertial resistance to acceleration, the Citroën 2CV had shock absorbers that damp wheel bounce with no external moving parts. These consisted of a spring-mounted 3.5 kg (7.75 lb) iron weight inside a vertical cylinder and are similar to, yet much smaller than versions of the tuned mass dampers used on tall buildings. Composite hydropneumatic suspension combines many suspension elements in a single device: spring action, shock absorption, ride-height control, and self leveling suspension. This combines the advantages of gas compressibility and the ability of hydraulic machinery to apply force multiplication. Conventional shock absorbers can be combined with air suspension springs - an alternate way to achieve ride-height control, and self leveling suspension. In an electrorheological fluid damper, an electric field changes the viscosity of the oil. This principle allows semi-active damper applications in automotive and various industries. Magnetic field variation: a magnetorheological damper changes its fluid characteristics through an electromagnet. The effect of a shock absorber at high (sound) frequencies is usually limited by using a compressible gas as the working fluid or mounting it with rubber bushings. Special features [edit] Some shock absorbers allow tuning of the ride via control of the valve by a manual adjustment provided at the shock absorber. In more expensive vehicles the valves may be remotely adjustable, offering the driver control of the ride at will while the vehicle is operated. Additional control can be provided by dynamic valve control via computer in response to sensors, giving both a smooth ride and a firm suspension when needed, allowing ride height adjustment or even ride height control. Ride height control is especially desirable in highway vehicles intended for occasional rough road use, as a means of improving handling and reducing aerodynamic drag by lowering the vehicle when operating on improved high speed roads. Heatsinks, fans, or liquid cooling to prevent or delay shock fade and failure (oil leak) due to overheating Shock absorber and strut comparison [edit] A strut is a structural component that combines the shock absorber with other suspension parts like the coil spring and steering knuckle into one compact unit Unlike a shock absorber, a strut has a reinforced body and stem. Struts are subjected to multidirectional loads, while a shock absorber only damps vibration, only receiving a load along its axis. Struts and shock absorbers have a different way of attachment. Shock absorbers are mounted through rubber or urethane bushings to the frame and suspension. A strut is hard mounted to the suspension and is mounted to the frame through a rotating plate providing the upper pivot point of the steering. 
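The "dynamic valve control via computer in response to sensors" mentioned under Special features is often described in the suspension literature with a skyhook-style control law: command firm damping only when the damper force would actually slow the car body down. The sketch below is a generic textbook version of that idea, not a description of any particular product; the sensor inputs and coefficient values are assumptions.

```python
# Skyhook-style semi-active damping sketch (generic, illustrative only).
def commanded_damping(body_velocity: float, damper_velocity: float,
                      c_soft: float = 800.0, c_firm: float = 3000.0) -> float:
    """Pick the damping coefficient (N*s/m) the adjustable valve should emulate.

    body_velocity:   vertical velocity of the sprung mass, from a sensor (m/s)
    damper_velocity: relative velocity across the damper, from a sensor (m/s)
    """
    if body_velocity * damper_velocity > 0:
        # Damper force (-c * damper_velocity) opposes the body's motion,
        # so a firm setting helps settle the body.
        return c_firm
    # Otherwise firm damping would push the body further; stay soft.
    return c_soft

print(commanded_damping(0.2, 0.5))    # -> 3000.0 (firm)
print(commanded_damping(0.2, -0.5))   # -> 800.0  (soft)
```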
See also [edit] Base isolation Betagel, uses gel and silicone to absorb violent shocks Buffer (disambiguation) Buffer (rail transport) Buffer stop Chapman strut Cushioning Damped wave Damper (disambiguation) Damping ratio Dashpot Hydropneumatic suspension Impact force Lever arm shock absorber List of auto parts MacPherson strut Oleo strut Packaging and labeling Ralph Peo Shock (mechanics) Shock mount Shock response spectrum Strut bar Strut Vibration Vibration isolation References [edit] ^, Horst Bauer (ed)., Automotive Handbook 4th Edition, Robert Bosch GmbH, 1996, ISBN0-8376-0333-1, page 584 ^"Springs - A simple study of car suspension", The Automotor Journal, August 10th, 1912, pp936-937 ^ ab"Some accessories to see at Olympia", The Automator Journal, Nov 2nd , 1912, p1284 ^Simionescu, P. A.; Norton, Robert L. (2024). Okada, Masafumi (ed.). "On the History of Early Automobile Suspension Systems". Advances in Mechanism and Machine Science. Cham: Springer Nature Switzerland: 1012–1022. doi:10.1007/978-3-031-45709-8_99. ISBN978-3-031-45709-8. ^"What a Chauffeur Expects to see at Olympia", The Automator Journal, Nov 9th 1912, p1313 ^"Nitrogen - Element information, properties and uses | Periodic Table". www.rsc.org. Retrieved 2024-11-18. ^"thyssenkrupp Bilstein - Entwicklung / Produkte - Konventionelle Dämpfer - 1-Rohr-Dämpfer (deCarbon-Prinzip)". www.thyssenkrupp-bilstein.de. Retrieved 2017-07-13. ^ abCarley, Larry (February 2008), "Monotube shocks-- don't absorb shocks, but..."(PDF), Brake and front end magazine, archived from the original(PDF) on 2014-01-02, retrieved 1 January 2014 ^Shelton, p.24 and p.26 caption. ^"From F1 to Baja: Multimatic's Clever Spool-Valve Dampers Explained". Retrieved 2017-07-19. ^"Damper and Awe: 6 Types of Automotive Dampers Explained - Feature". Retrieved 2017-07-19. ^US 8800732 B2, Holt, Laurence J.; O'Flynn, Damian & Tomlin, Andrew, "Hydraulic damper spool valve", published 2014-08-12 ^"Byass Shocks Part 1 – AccuTune Off-Road". Retrieved 2024-06-14. ^Setright, L. J. K. "Dampers: Smoothing Out the Bumps", in Northey, Tom, ed. World of Automobiles (London: Orbis, 1974), Volume 5, p.490. and 1903-03-07 - Le Génie civil: "Amortisseur Mors". ^"Understanding Car Shock Absorbers | PartsHawk". partshawk.com. Retrieved 2023-05-21. Sources [edit] Shelton, Chris. "Then, Now, and Forever" in Hot Rod, March 2017, pp.16–29. Bibliography [edit] Kinra, Vikram K.; Wolfenden, Alan (1992), M3D: mechanics and mechanisms of material damping, ASTM special technical publication number 1169, Philadelphia, Pennsylvania, USA: ASTM International, ISBN978-0-8031-1495-1 Holland, Max (1989), When the Machine Stopped: A Cautionary Tale from Industrial America, Boston: Harvard Business School Press, ISBN978-0-87584-208-0, OCLC246343673. External links [edit] Wikimedia Commons has media related to Shock absorbers. This Motor-Truck Hasn't Any Springs, Popular Science, February 1919 MIT Undergrads Create Shock Absorber That Generates Energy. 
Leveling Out The Rough Spots, 1943 article
Damping rate calculations (archived 2021-06-09 at the Wayback Machine), seminar notes by Kaz Technologies
Shock absorber measurement (archived 2017-11-20 at the Wayback Machine)
This page was last edited on 24 May 2025, at 18:54 (UTC). Text is available under the Creative Commons Attribution-ShareAlike 4.0 License; additional terms may apply.
112
INVARIANCE PROPERTIES OF ALMOST DISJOINT FAMILIES M. ARCIGA-ALEJANDRE, M. HRUˇ S´ AK, C. MARTINEZ-RANERO Abstract. We answer a question of Gracia-Ferreira and Hruˇ s´ ak by constructing consistently a MAD family maximal in the Katˇ etov order. We also answer several questions of Garcia-Ferreira. 1. Introduction We consider two kinds of closely related mathematical structures in this paper: almost disjoint families and cofinitary groups. An infinite family A ⊆P(ω) is almost disjoint (AD) if the intersection of any two distinct elements of A is finite. It is maximal almost disjoint (MAD) if it is not properly included in any larger AD family or, equivalently, if given an infinite set X ⊆ω there is an A ∈A such that |A ∩X| = ω. An attempt to classify MAD families via Katˇ etov order was initiated by the second author in and continued in . This analysis is analo-gous to the study of ultrafilters via the Rudin-Keisler order. The following theorem can be considered the main result of the paper, it answers one of the basic question about this ordering. Theorem 1.1. (t = c) There exists a MAD family maximal in the Katˇ etov order. It is worth mentioning that a MAD family maximal in the Katˇ etov order is the analogue of a selective ultrafilter in this context. This will be explained in detail in Section 3. Cofinitary groups are subgroups of the symmetric group on ω, and there-fore they have a natural action on ω. The structure of (maximal) cofinitary groups has received a lot of attention (recently see e.g., , ). For a nice survey of algebraic aspects of cofinitary groups consult Cameron’s . Definition 1.2. (i) For any set A we denote by Sym(A) the group of permutations from A onto A, with the group operation given by 0The second author gratefully acknowledges support from PAPIIT grant 102311. 0The third author gratefully acknowledges support from Conacyt grant 99047. 1991 Mathematics Subject Classification. Primary Set Theory; Secondary Logic. Key words and phrases. MAD family, group of permutations, Katetov, ideals, cofini-tary group. 1 2 M. ARCIGA-ALEJANDRE, M. HRUˇ S´ AK, C. MARTINEZ-RANERO composition. We write IdA for the identity or just Id in case A is clear from the context. (ii) We say that a subgroup G ≤Sym(A)1is cofinitary if any g ∈ G \ {Id} has finitely many fixed points. i.e., the set Fix(g) = {x ∈ A : g(x) = x} is finite. Some of the interest in cofinitary groups derives from the fact that they are groups in which the graphs of all members are almost disjoint. We can associate to each AD family A the subgroup Inv(A) of Sym(ω) which consists of the permutations that preserve A, i.e., f[A] ∈A for all A ∈A. Also, we shall consider its module finite version Inv∗(A) = {f ∈ Sym(ω) : ∀A ∈A ∃A′ ∈A, A′ =∗f[A]}. We consider Sym(ω) as a topo-logical group with the subspace topology of the product ωω. Sym(ω) is a polish group since Sym(ω) is a Gδ subspace of ωω. Garcia-Ferreira in asked several questions concerning the existence of invariant subgroups of Sym(ω) with certain topological properties. In Section 2 we answer these questions and in the process we also construct a cofinitary group with special topological properties which is of independent interest. Theorem 1.3. There exists a countable dense cofinitary group. For convenience of the reader we state the questions of . Question 1.4. For any countable F ⊆Sym(ω) is there a MAD family A so that F ⊆Inv(A)? Question 1.5. Is there a MAD family A so that Inv(A) is a closed sub-space? Question 1.6. 
Is there a MAD family A such that Inv(A) is a dense subspace? We answer the first question in the negative and the other two questions in the affirmative. 2. Cofinitary groups The following Proposition gives a negative answer to question 1.4. Proposition 2.1. There is a countable subset F of Sym(ω) such that F ⊈ Inv(A) for any MAD family A. Proof. We shall show that the set F consisting of functions which are almost equal to the identity is as required. 1Here ≤denotes the subgroup relation. INVARIANCE PROPERTIES OF ALMOST DISJOINT FAMILIES 3 For each MAD family A, choose A ∈A, n ∈A and m ∈ω \ A. Define f ∈Sym(ω) as follows: f(k) =    n if k = m m if k = n k if k / ∈{n, m} Then f ∈F but f / ∈Inv(A) since f[A] = (A ∪{m}) \ {n} / ∈A. □ We shall need the following simple facts. Fact 2.2. If A and B are MAD families such that, for any A ∈A there is a B ∈B so that A =∗B, then Inv∗(A) = Inv∗(B). Fact 2.3. Let A be a MAD family. For any g ∈Inv∗(A) and B ⊂A with |B| < |A|, there are X, Y ∈A \ B such that Y =∗g[X]. We are now in position to provide an answer to Question 1.5. Proposition 2.4. There is a MAD family A so that Inv(A) = {Id}. Proof. Let C be a MAD family of cardinality c and let {fα : α < κ} be an enumeration of the set Inv∗(C{Id}). We will construct recursively a family {Bi β : i < 2, β < κ} ⊆C satisfying: (1) {B0 α, B1 α} ∩{Bi β : i < 2, β < α} = ∅for any α < κ and (2) B1 α =∗fα[B0 α] for any α < κ. Suppose that we have constructed B = {Bi β : i < 2, β < α} satisfying (1) and (2) for some α. Using Fact 2.3, we can find A, B ∈C \B so that B =∗fα[A], we set B0 α = A and B1 α = B. This finishes the recursive construction. For each α < κ, we choose nα, mα ∈ω such that nα ̸= mα and fα(mα) = nα. We now set A0 α = B0 α∪{mα} and A1 α = B1 α{nα}. Observe that A1 α ̸= fα[A0 α]. We define A = (C \ {Bi α : i ∈2, α < κ}) ∪{Ai α : i ∈2, α < κ}. It is easy to see that A is a MAD family and moreover, by Fact 2.2, Inv∗(A) = Inv∗(C). Suppose that there is fα ∈Inv(A) \ {Id} ⊆Inv∗(C) \ {Id}, then, fα[A0 α] = fα[B0 α ∪{mα}] =∗B1 α =∗A1 α and also fα[A0 α] ̸= A1 α, which is a contradiction since both belong to the same MAD family A. □ The following lemma give us a useful combinatorial characterization of cofinitary groups. Lemma 2.5. If G < Sym(ω) is a countable group, then the following are equivalent: 4 M. ARCIGA-ALEJANDRE, M. HRUˇ S´ AK, C. MARTINEZ-RANERO (i) For any A ∈[ω]ω there is B ∈[A]ω such that the family {f[B] : f ∈ G} is almost disjoint, (ii) G is cofinitary. Proof. Let us first show that (i) implies (ii). Suppose that this is not the case, then there is f ∈G \ {Id} so that B ∈[Fix(f)]ω. It follows that Id[B] ∩f[B] = B, which is a contradiction. For the reverse implication. Let {fk : k ∈ω} be an enumeration of G with f0 = Id and let A ∈[ω]ω be given. We shall construct recursively a family B = {Bn : n < ω} such that: (1) B0 = A, (2) Bn+1 ⊊Bn, (3) |Bn+1| = ω and (4) the family {fi[Bn] : i ≤n} is disjoint. Suppose we have constructed {Bi : i ≤k}, since fk+1 ∈G \ {Id} has finitely many fixed points we can find C0 ∈[Bk]ω such that fk+1[C0]∩C0 = ∅. Moreover f −1 j ◦fk+1 ∈G \ {Id} for 0 < j < k + 1, so there exists Cj ∈[Cj−1]ω such that (f −1 j ◦fk+1)[Cj] ∩Cj = ∅. As each fj is a bijection, we can infer from the last equation that fk+1[C1] ∩f1[C1] = fk+1[C2] ∩f2[C2] = ... = fk+1[Ck] ∩fk[Ck] = ∅ (∗) Fix b ∈Bk and set Bk+1 = Ck \ {b}. It should be clear that Bk+1 ∈[Bk]ω. We are left to show that the family {fi[Bk+1] : i ≤k + 1} is disjoint. Let i, j ≤k + 1, i ̸= j be given. 
If i < k + 1 and j < k + 1, then fi[Bk+1] ∩ fj[Bk+1] ⊆fi[Bk] ∩fj[Bk] = ∅. On the other hand, if we have i = k + 1 and j < k + 1, then, since Bk+1 ⊆Ck ⊆· · · ⊆C0 ⊆Bk and by (∗) we have fi[Bk+1] ∩fj[Bk+1] = fk+1[Bk+1] ∩fj[Bk+1] ⊆fk+1[Cj] ∩fj[Cj] = ∅. This finish the recursive construction. Choose b0 ∈B0 and for each n > 0 we choose bn ∈Bn \ Bn−1. Let B = {bn : n ∈ω}. Note that B ⊆∗Bn for any n ∈ω and moreover the family {f[B] : f ∈G} = {fi[B] : i ∈ω} is almost disjoint. □ The following is the well-known result of Cayley that any group can be represented as a group of permutations. Theorem 2.6 (Cayley). For any group G there is a subgroup H < Sym(G) such that (i) G ∼ = H and (ii) ∀π ∈H \ {Id}, Fix(π) = ∅. Condition (ii) follows from Caley’s proof since the left action does not have fixed points. INVARIANCE PROPERTIES OF ALMOST DISJOINT FAMILIES 5 Definition 2.7. Let X and Y be given such that X ⊆Y and G < Sym(X), H < Sym(Y ). We say H is final extension of G if there is an isomorphism ψ : G →H such that ψ(g) ↾X = g for any g ∈G. We are now in position to prove the main theorem of the section. For more on constructions of cofinitary groups see e.g. [?, K] Theorem 2.8. There is a countable dense cofinitary group G < Sym(ω). Proof. Choose an enumeration {πi : i ∈ω} of S i∈ω{0} Sym(i) with π0 ∈ Sym(1). We will construct recursively a family of groups {Gi n : n ≤i < ω} and at the same time a strictly increasing sequence of natural numbers {ni : i ∈ω} such that n0 = 1, G0 0 = {Id} and (1) ∀n ≤i < ω Gi n < Sym(ni), (2) ∀n ≤j < i < ω Gi n is a final extension of Gj n, (3) ∀n < ω ∃g ∈Gn n such that πn ⊆g and (4) ∀j ≤i < ω∀f ∈Gj i, Fix(f) ⊆nj. Suppose that {Gi n : n ≤i ≤k} and {ni ∈ω : i ≤k} have been already constructed for some k. Let t be minimal so that nk+t|Gk k| ≥dom(πk+1) and let nk+1 = nk+t|Gk k|. Claim: There is Gk+1 k < Sym(nk+1) which is a final extension of Gk k such that ∀f ∈(Gk+1 k \ {Id}), Fix(f) ⊆nk. Proof of Claim: Apply Cayley’s Theorem successively t times starting with H0 = Gk k to obtain a sequence Hi (i < t) so that Hi+1 < Sym(Hi) and Hi ∼ = H0 for all i < t. Let φi denote the isomorphism between H0 and Hi given by composition of Cayley’s ones. Let X = nk ∪S i<t Hi. Observe that |X| = nk+1. For each h ∈H0, we de-fine a permutation φh : X →X given by φh(x) = φi(h)(x) where i is the unique integer so that x ∈Hi−1. Fix a bijection ψ : X →nk+1 and define Gk+1 k = {ψ ◦φh ◦ψ−1 : h ∈H0}. It is easy to prove, by using the fact that Cayley representation does not have fixed points, that Gk+1 k is as required. Let F be an isomorphism witnessing that Gk+1 k is a final extension of Gk k. We know that Gk 0 ≤Gk 1 ≤· · · ≤Gk k−1 ≤Gk k. For each j < k, set Gk+1 j = F[Gk j], since F is an isomorphism, Gk+1 0 ≤Gk+1 1 ≤· · · ≤Gk+1 k−1 and moreover, Gk+1 j is a final extension of Gk j for each j < k. In order to define Gk+1 k+1, consider the function π : nk+1 →nk+1 defined as π(x) =  πk+1(x) if x ∈dom(πk+1) x otherwise. 6 M. ARCIGA-ALEJANDRE, M. HRUˇ S´ AK, C. MARTINEZ-RANERO Now we set Gk+1 k+1 to be the subgroup generated by Gk+1 k and π. It is clear, due to the construction, that nk+1, Gk+1 0 , ..., Gk+1 k+1 satisfy con-ditions (1)-(4), It follows from condition (2) that for fix i the sequence Gj i (i ≤j) is a chain of a final extensions. Thus, there exists a group Gω i < Sym(ω) which is a final extension of Gj i for all j ≥i (the group is constructed by gluing to-gether the all the groups in the obvious way). We now define G = S i∈ω Gω i . 
Note that G is a subgroup since for each i, Gi m ≤Gi n whenever m ≤n ≤i. Therefore Gω m ≤Gω n whenever m ≤n. It is easy to see that G is the desired group. □ We are ready to provide an answer to Question 1.6. Theorem 2.9. There is a MAD family A such that Inv(A) is dense in Sym(ω). Proof. Let G < Sym(ω) be like in Theorem 2.8 and let Σ = {A : A is an AD family and A ∈A iff{f[A] : f ∈G} ⊆A}. Note that by Lemma 2.5 Σ ̸= ∅. Also (Σ, ⊆) is a partial order in which every chain has an upper bound. By an application of Zorn’s Lemma there is A0 maximal in (Σ, ⊆). Note that A0 is dense since G ⊆Inv(A0). So it suffices to show that A0 is a MAD family. Suppose this is not the case, then there is X ∈[ω]ω almost disjoint from A0. We infer from lemma 2.5 that there exists an infinite subset Y ⊆X so that {f[Y ] : f ∈G} is almost disjoint. It follows that B = A0 ∪{f[Y ] : f ∈G} is almost disjoint and B ∈Σ which contradicts the maximality of A0. □ 3. A Katˇ etov maximal MAD family If A is a MAD family then J (A) denotes the ideal of all subsets of ω which can be almost covered by finitely many elements of A, J +(A) = P(ω)\J (A) denotes the family of sets of positive measure. We also need the set J++(A) consisting of all X ∈P(ω) so that there exists ⟨An : n ∈ω⟩⊆A such that |X ∩An| = ω for all n ∈ω. Note that for any MAD family A, J+(A) = J++(A). In the case A is just an AD family the set J++(A) consist of the sets that remain positive for any AD family extending A. Recall the definition of Katˇ etov order. Definition 3.1. Let I, J be ideals on ω. We say that I ≤K J if there is a function f : ω →ω such that f −1(I) ∈J for all I ∈I. If A and B are MAD families then we write A ≤K B for J (A) ≤K J (B). We refer to ≤K as the Katˇ etov ordering. INVARIANCE PROPERTIES OF ALMOST DISJOINT FAMILIES 7 For h ∈ωω, a function φ : ω →[ω]<ω with |φ(n)| ≤h(n) for all n is called an h-slalom. A function π : [ω]<ω →ω is said to be a predictor. If h : ω<ω →ω, a function π : ω<ω →[ω]<ω with |π(s)| ≤h(s) for all s is called an h-slalom predictor. The following theorem give us a several characterizations of non(M) in terms of families of functions. Theorem 3.2. The following are equivalent for any cardinal κ. (i) non(M) > κ, (ii) for all F ⊆ωω of size ≤κ there is g ∈ωω such that for all f ∈ F, f(n) ̸= g(n) holds for almost all n, (iii) for all families Π of predictors of size ≤κ there is g ∈ωω such that for all π ∈Π, g(n) ̸= π(g ↾n) holds for almost all n, (iv) any of (ii) through (iii) with the additional stipulation that g be in-jective. (v) any of (ii) through (iii) with the additional assumptions that the families consists of partial functions. Moreover, for every X ∈[ω]ω we can find g so that the range of g is contained in X. Proof. (i) to (iii) is the well-known Bartoszynski-Miller characterization of non(M) (see ). Details for showing that (iv) is equivalent to (ii) can be found in . Since (v) is a strengthening of the preceding ones, it suffices to prove that (ii) implies (v). Let F be a family of ≤κ partial functions by extending every function arbitrarily we may assume that the domain of each function is all ω. Now, let F′ = {f ↾f−1(X): f ∈F} applying (iii) to the space Xω and the family F′ we obtain the desired conclusion. □ In order to prove Theorem 1.1 we shall need a slight generalization of the concept of cofinitary group. Definition 3.3. Let G be a subset of injective partial functions from ω into ω closed under compositions and inverses. 
We say that G is a partial cofinitary semigroup if for every f ∈G either f is a partial identity or f has finitely many fix points. The following lemma will play a key role in the construction of a MAD family maximal in the Katˇ etov order. Lemma 3.4. Let G be a partial cofinitary semigroup of cardinality < non(M) and X ∈[ω]ω then there exists f : ω →X such that G∗f is a partial cofini-tary semigroup. Proof. Define an operation F : ω≤ω →ωω recursively as follows: let n ∈ ω, f ∈ω≤ω and assume F(f)(k) and F(f)−1(k)have been defined for k < n. If F(f)−1(k) = n for some k < n, then clearly F(f)(n) = k. If not, then let 8 M. ARCIGA-ALEJANDRE, M. HRUˇ S´ AK, C. MARTINEZ-RANERO F(f)(n) = f(2n). If F(f)(k) = n for some k < n, then clearly F(f)−1(n) = k. If n ∈X, then let F(f)−1(n) = f(2n + 1). If n / ∈X, then F(f)−1 is not defined at n. If H is a partial cofinitary semigroup, a word w(x) in variable x from H is an expression of the form g0 · xm0 · ... · gl−1 · xml−1 · gl such that gi ∈H, gi ̸= Id for 1 ≤i ≤l −1, and mi ∈Z \ {0} for all i. The length of such a w(x) is lg(w(x)) = |{i ≤l : gi ̸= Id}+P i<l |mi|. For a word w(x), an injective finite partial function (not necessarily in ω<ω), we form the (possible empty) injective partial function w(t) in the usual manner. Also, if g is an injective partial function, we define w(g) as usual. Given a word w(x), define a predictor πw(x)(s) by w(F(s))(n) where 2n + e = |s| (e ∈{0, 1}) for s ∈S (S denotes the set of injective finite functions from ω into ω). Now let H be a partial cofinitary semigroup of size < non(M). We have to show that H is not maximal. By the injective version of (v) in Theorem 3.3, there is f : ω →X injective such that for all πw(x) with w(x) being a word from H, πw(x)(f ↾n) ̸= f(n) holds for almost all n. We claim that G = H ∗F(f) is a partial cofinitary semigroup. Since all elements of G are of the form w(F(f)), where w(x) is a word from H, it suffices to show that that for all such words w(x) ̸= Id. This is done by induction on lg(w(x)). Basic Step. lg(w(x)) = 1. Then either w(x) = g0 for g0 ∈H \ {Id} in which case there is nothing to prove, or w(x) = x or w(x) = x−1. Since π1(f ↾n) ̸= f(n) for almost all n (where π1 is the predictor associated with the word representing the identity), it follows that F(f)(k) = f(2k) ̸= k for almost all k. Induction Step. Assume w(x) = g0 ·xm0 ·...·gl−1 ·xml−1 ·gl is a word of length at least two and the claim has been proved for all shorter words. For k < P i<l |mi| we define the chopped word wk(x) and the inverse chopped word w−1 k (x) basically by removing the occurrence of x, as follows. First let j < k be such that P i<j |mi| ≤k < P i<j+1 |mi| and assume k = P i<j |mi| + k′ with 0 ≤k′ < |mj|. Then wk(x) is the reduced word obtained from the word xsgn(mj)(|mj|−k′−1) · gj+1 · xmj+1 · ... · xmi−1 · gl · g0 · xm0 · ... · gj · xsgn(mj)k′, and w−1 k is simply its inverse. Now let n∗be large enough so that for all n ≥n∗the following hold: (i) the values n, (F(f)sgn(ml−1) · gl)(n), (F(f)sgn(ml−2)·2 · gl)(n), ..., (F(f)ml−1 · gl)(n), ..., (F(f)m0−sgn(m0) · g1 · ... · gl−1 · F(f)ml−1 · gl)(n), INVARIANCE PROPERTIES OF ALMOST DISJOINT FAMILIES 9 and in case gl ̸= Id also gl(n), and in case g0 ̸= Id also (F(f)m0 · g1 · ... · gl−1 · F(f)ml−1 · gl)(n), are all distinct as well as (ii) for each k < P i<l |mi| with k = P i<j |mi| + k′, if n′ = (F(f)−sgn(mj)·k′ · g−1 j · ... · F(f)−m0 · g−1 0 )(n), then f(2n′) ̸= πw−1 k (x)(f ↾2n′). 
By induction hypothesis, and since there are only finitely many k and for each k only finitely many n′ for which (ii) can fail, it is clear that there is such an n∗. We claim that w(f)(n) ̸= n for each n ≥n∗. Assume this were not the case and fix n ≥n∗with w(F(f))(n) = n. For each k < P i<l |mi| with k = P i<j |mi| + k′, let nk = min{(f sgn(mj)(|mj|−k′−1) · .... · f ml−1 · gl)(n), (f sgn(mj)(|mj|−k′) · ... · f ml−1 · gl)(n)}. Now note that by (i), there can be at most two values k0 and k1 for k such that nk is maximal; and if there are two they must be adjacent; i.e., k1 = k0+1 without loss. Let j < l be such that this (these) maximal value(s) nk occur(s) at k = P i 0, and either there are k1 = k0 + 1 such that nk0 = nk1 is maximal in which case we let k = k1, or there is a unique k such that nk is maximal and one has nk = (f sgn(mj)(|mj|−k′) · .... · f ml−1 · gl)(n). Note that in the former case nk must necessarily have the value (f sgn(mj)(|mj|−k′) · ... · f ml−1 · gl)(n). Also note that since we assume w(f)(n) = n we additionally have nk = (f −sgn(mj)k′ · ... · f −m0 · g−1 0 )(n). Now, πwk(x)(f ↾nk+1) = wk(f ↾nk+1)(nk) because the right-hand side is indeed defined by maximality of nk. w(f)(n) = n clearly entails wk(f ↾n+1)(nk) = f −1(nk). However, by (ii), we get πwk(x)(f ↾nk+1) ̸= f(nk), a contradiction. Case 2. mj < 0, and either there are k1 = k0 + 1 such that nk0 = nk1 is maximal in which case we let k = k0, or there is a unique k such that nk is maximal and one has nk = (f sgn(mj)(|mj|−k′−1) · .... · f ml−1 · gl)(n). In this case use πw−1 k (x)(f ↾nk+1) to derive a contradiction. Case 3. mj > 0 and there is a unique k such that nk is maximal and one has nk = (f sgn(mj)(|mj|−k′−1) · ... · f ml−1 · gl)(n). Use πw−1 k (x)(f ↾nk). Case 4. mj < 0 and there is a unique k such that nk is maximal and one 10 M. ARCIGA-ALEJANDRE, M. HRUˇ S´ AK, C. MARTINEZ-RANERO has nk = (f sgn(mj)(|mj|−k′) · ... · f ml−1 · gl)(n). Use πwk(x)(f ↾nk+1). These contradictions complete the proof of the theorem. □ We recall the following definitions from . Definition 3.5. We say that a MAD family A is K-uniform if A ≤K A ↾X for every X ∈J+(A). Definition 3.6. We say that a MAD family A is tight (weakly tight) if for every ⟨Xn : n ∈ω⟩⊆J+(A) there is A ∈A so that ∀n (∃∞n), |A∩Xn| = ω. The following proposition from shows that (weakly) tight MAD fam-ilies are almost maximal in the Katˇ etov order. Proposition 3.7. Let A be a weakly tight MAD family and let B be a MAD family. If A ≤K B then there exists an X ∈J+(A) such that B ≤K ↾X. Recently Raghavan and Steprans , using a novel technique of Shelah, showed that assuming s ≤s there is a weakly tight MAD family. We are now in position to prove the main theorem of the paper. Theorem 3.8. Assuming t = c. There exists a MAD family maximal in the Katˇ etov order. Proof. By propotion 3.7, it suffices to construct a tight K-uniform MAD family. In order to do this, enumerate ([ω]ω)ω as { ⃗ Xα : α < c} in such a way that each sequence appears cofinally many times. 
We shall construct recursively an increasing sequence Aα, α < c of almost disjoint families and a sequence {α α < c of injective partial functions from ω into ω so that A0 is a partition of ω into infinitely many infinite pieces and f0 = Id for every α < c: (1) |Aα| < c, (2) the set Fα consisting of elements of the form w(fξ1, ..., fξn) is a par-tial cofinitary semigroup where w(x1, .., xn) is a reduced word in n variables and ξ1, ..., ξn < α, (3) Fα is a strictly increasing sequence of partial cofinitary semigroups of cardinality < c, (4) Fα respects Aα, i.e., f −1(A) ∈Aα for all A ∈Aα and all f ∈Fα, (5) if ⃗ Xα ⊆J (Aα)++ then there exists A ∈Aα+1 such that A ∩⃗ Xα(n) is infinite for all n ∈ω, (6) if ⃗ Xα(0) ∈J (Aα)++ then there exists f : ω →⃗ Xα(0) with f ∈Fα+1. For α limit let Fα = S{Fβ : β < α} and Aα = S{Aβ : β < α}. For α = β +1 consider Aβ and Fβ. If ⃗ Xα(0) ∈J (Aα)++ then, using Lemma 3.3, we can find a bijection f : ω →X between ω and a subset X almost disjoint from every element of Aβ so that Fβ ∗f is a partial cofinitary INVARIANCE PROPERTIES OF ALMOST DISJOINT FAMILIES 11 semigroup, we set fα = f. It is easy to verify that (1), (2) and (4) holds. In order to construct Aα, enumerate Fα as {fγ : γ < κ}, and assume that ⃗ Xα ⊆J (Aα)++. We may assume that ⃗ X is a partition of ω. For each n, recursively choose a ⊆∗-decreasing sequence T n γ (γ < κ) of infinite subsets of ⃗ Xα(n) so that: (i) T n 0 ⊆⃗ Xα(n) is almost disjoint from all elements of Aα, (ii) for γ < κ, f −1 γ (T n α ) is almost disjoint from every element of Aα, (iii) for every ξ, η ≤γ < κ,and for every n, m < ω f −1 ξ (T m γ ) ∩f −1 η (T n γ ) is finite. Note that (ii) follows directly from (i) and the fact that Fα respects Aβ. Assume that T n ξ , ξ < γ has been successfully constructed. Choose Sn ∈ [ ⃗ Xα(n)]ω such that Sn ⊆∗T n ξ for ξ < γ. Since Fα is a partial cofinitary semigroup there exists Sn 0 ∈[Sn]ω so that f −1 α (Sn 0 ) is almost disjoint from Aβ. Note that if T n α is a subset of Sn 0 then (i) and (ii) are satisfied. In order to find T n α so that (iii) holds enumerate all pairs ξ, η, ξ, η ≤α as {(ξζ, ηζ) : ζ < λ. Note that λ < t. Construct another decreasing sequence {Sn ζ : ζ < λ} (Sn 0 has already been chosen) so that for all n.m < ω f −1 ξζ (Sn ζ+1) ∩f −1 ηζ (Sm ζ+1) =∗∅. Now that is easy to do as Fα is a partial cofinitary semigroup we can always find an infinite subset of Sn ζ and Sm ζ so that their pre images are almost disjoint. Finally choose T n α ∈[Sn 0 ] so that T n α ⊆∗Sn ζ for all ζ < λ. This finishes the construction. Let {T n γ : γ < κ, n < ω} be the sequence satisfying the above requirements (i)-(iii). As κ < t we can find a pseudo-intersection T n of the family {T n γ : γ < κ} for all n ∈ω. Let T = S Tn. Fix an enumeration {fγ : γ < κ} of Fα+1 and let {(γξ, δ+ξ) : ξ < κ} be an enumeration of all ordered pairs (γ, δ) ∈κ × κ. For each ξ < κ and n < ω, let f n ξ be the function from ω into ω defined as follows: f n ξ (k) = max(fγξ([T n] ∩fδξ[T k]. Since κ < b we can find h : ω →ω so that f n ξ ≤∗h for all ξ < κ and all n < ω. Let A = S n∈ω(T n \ h(n)). Set Aα+1 = Aα∪{w(fβ1, ..., fβn)[A] : w(x1, ..., xn) is a reduced word in n variables and fβ1, ..., fβn ∈{fg : γ ≤α + 1}}. It is easy to see that Aα+1 is an AD family and satisfies the required prop-erties. This finishes the proof of the Theorem. □ We will finish with some open questions. Question 3.9. Does there exists a MAD family maximal in the Katˇ etov order which is weakly tight but not tight? 12 M. ARCIGA-ALEJANDRE, M. HRUˇ S´ AK, C. 
MARTINEZ-RANERO

Question 3.10. Is every MAD family maximal in the Katětov order weakly tight?

Question 3.11. Is it consistent with ZFC that there are no Katětov maximal MAD families?

References

T. Bartoszynski and H. Judah, Set Theory: On the Structure of the Real Line, A K Peters, Wellesley, MA, 1995.
J. Brendle, O. Spinas, Y. Zhang, Uniformity of the meager ideal and maximal cofinitary groups, Journal of Algebra 232 (2000), 209-225.
J. Brendle and S. Yatabe, Forcing indestructibility of MAD families, Ann. Pure Appl. Logic 132 (2005), no. 2-3, 271-312.
P. Cameron, Cofinitary permutation groups, Bull. London Math. Soc. 28 (1996), no. 2, 113-140.
S. Garcia-Ferreira, Continuous functions between Isbell-Mrówka spaces, Comment. Math. Univ. Carolinae 21 (1980), 742-769.
S. Garcia-Ferreira and M. Hrušák, Ordering MAD families a la Katětov, J. Symbolic Logic 68 (2003), no. 4, 1337-1353.
S. Garcia-Ferreira and P. Szeptycki, MAD families and P-points, Comment. Math. Univ. Carolin. 48 (2007), no. 4, 699-705.
M. Hrušák, J. Steprans and Y. Zhang, Cofinitary groups, almost disjoint and dominating families, J. Symbolic Logic 66 (2001), no. 3, 1259-1276.
M. Hrušák and J. Zapletal, Forcing with quotients, Arch. Math. Logic 47 (2008), no. 7-8, 719-739.
B. Kastermans, J. Steprans and Y. Zhang, Analytic and coanalytic families of almost disjoint functions, J. Symb. Logic 73 (2008), no. 4, 1158-1172.
D. Raghavan and J. Steprans, On weakly tight families, preprint, 2012.

Centro de Investigacion en Matematicas, A.C., Jalisco S/N, Col. Valenciana, CP 36240 Guanajuato, Gto., Mexico. E-mail address: [email protected]
Centro de Ciencias Matematicas, Universidad Nacional Autonoma de Mexico, A. P. 61-3 Xangari, C. P. 58089 Morelia, Michoacan, Mexico. E-mail address: [email protected]
Centro de Ciencias Matematicas, Universidad Nacional Autonoma de Mexico, A. P. 61-3 Xangari, C. P. 58089 Morelia, Michoacan, Mexico. E-mail address: [email protected]
113
CS276 Cryptography, Spring 2004. Lecture 4.14.04. Lecturer: David Wagner. Scribe: Boriska Toth.
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications. They may be distributed outside this class only with the permission of the Instructor.

0.1 Introduction

Today's lecture concerns secret sharing. Consider two officers and a president, who want a protocol for launching a nuke such that if all three submit their share of some secret the nuke is launched, but if only two submit their shares the nuke isn't launched. Thus, we want to share a secret x such that each person i gets x_i. We denote this

x ⟹ (x_1, x_2, x_3).

In fact, we want the stronger condition that given only two shares of the secret, not even partial information can be gained about the secret that would facilitate launching the nuke.

It turns out this simple, concrete scenario has a simple solution. Pick x_1, x_2 randomly, and set x_3 = x ⊕ x_1 ⊕ x_2. It is trivial to verify that this protocol works. Given all three shares, take the XOR of all three to get x. Given fewer than three shares, in the case that x_1, x_2, x_3 are of the same length n, x remains uniformly distributed among all n-bit strings. This situation is called "3-out-of-3 sharing".

Now we wish to add a protocol to enable any one of the three participants to disarm the nuke. That is, a secret y should be distributed among the three participants as shares y_1, y_2, y_3 such that given any single y_i, the nuke can be disabled. This is called a "1-out-of-3 scheme". It turns out to be trivial also: set y = y_1 = y_2 = y_3.

In the rest of the lecture we consider the general problem of t-out-of-n secret sharing. We have n parties, and we want to share a secret

x ⟹ (x_1, ..., x_n).

The two properties we need to guarantee are:
1) Recoverability: given any t shares, we can recover x.
2) Secrecy: given any < t shares, absolutely nothing is learned about x. In other words, the conditional distribution of x given the known shares should be the a priori distribution of x, so Pr(x | shares) = Pr(x).

We have already seen schemes for 1-out-of-n and n-out-of-n secret sharing. Now we want to deal with the remaining cases.
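As a quick illustration of the n-out-of-n construction just described, here is a minimal Python sketch. It is not part of the original notes, and the helper names (share_n_of_n, recover_n_of_n) are made up for the example.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share_n_of_n(secret: bytes, n: int = 3) -> list[bytes]:
    # Pick n-1 shares uniformly at random; the last share is chosen so that
    # the XOR of all n shares equals the secret.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def recover_n_of_n(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

if __name__ == "__main__":
    x = b"launch code"
    shares = share_n_of_n(x, 3)
    assert recover_n_of_n(shares) == x   # all three shares recover x
    # Any proper subset of the shares is uniformly random and reveals nothing about x.
```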
0.2 The Shamir Secret Sharing Scheme

One idea for 2-out-of-n sharing is that your secret is the slope of a line. Pick two random points on the line as shares x_1, x_2. Any two people can find the slope, and thus the secret, but secrecy is also preserved, since knowing one point on a line tells you nothing about its slope. In fact, we can generalize this idea to a quadratic function for 3-out-of-n sharing, and keep going with higher-degree polynomials.

Fact: Let F be a field (typically finite). Then d + 1 pairs (a_i, b_i), with the a_i distinct, uniquely determine a polynomial f(z) of degree ≤ d such that f(a_i) = b_i. We are assuming d < |F|, so that the a_i's can be distinct. In the fact above, f(z) has degree ≤ d and not = d because of degeneracy issues.

So here is Shamir's secret sharing scheme, for t-out-of-n secret sharing. First, choose a large prime p, and let F = Z/pZ. To share a secret x as x ⟹ (x_1, ..., x_n), do the following:

1. Choose coefficients f_1, ..., f_{t-1} ∈ Z/pZ (uniformly at random); these are to be the coefficients of a degree-(t-1) polynomial f.
2. Let f(z) = f_0 + f_1 z + ... + f_{t-1} z^{t-1}, where f_0 = x.
3. Give f(i) to party i, for i = 1, ..., n.

Now we need a recovery procedure, and we need to prove the secrecy condition, to show that this is a secret sharing scheme.

Recovery is straightforward. When t parties pool their shares, we have t points on the curve of a degree ≤ (t-1) polynomial, so by the fact above, we get unique coefficients of a degree ≤ (t-1) polynomial. The secret is the coefficient f_0. Formalizing, we use Lagrange interpolation over a finite field. Given (i, x_i) for i ∈ G,

f(z) = Σ_{i∈G} x_i · Π_{j∈G, j≠i} (z - j)/(i - j).

This is a linear system in t unknowns, the coefficients f_k, with t equations. The existence of a unique solution is guaranteed by the fact stated above, so Gaussian elimination can be used to solve it. Then

x = f_0 = f(0) = Σ_{i∈G} x_i · Π_{j∈G, j≠i} (0 - j)/(i - j).

Letting c_i = Π_{j∈G, j≠i} (0 - j)/(i - j) = Π_{j∈G, j≠i} j/(j - i), we have x = Σ_{i∈G} c_i x_i. Note that c_i is a constant independent of the x_i's. Thus we can compute the c_i, i ∈ G, ahead of time without knowing the x_i's, and then find x, the secret, in linear time once we have the x_i's. So recoverability is quite efficient in this case.

Now we need to verify the secrecy of this scheme. Suppose we have only t - 1 parties contributing shares. This corresponds to knowing t - 1 points of a degree ≤ (t-1) polynomial. Can we find the coefficient f_0, or even gain partial information? It turns out we cannot. Stating this formally: given t - 1 shares (i, f(i)) and a hypothetical value x for the secret, for the secret to be x we need x = f(0), or in other words, that the point (0, x) also lies on the curve. If we only know t - 1 points, none of which has input value 0, then the conditional distribution of the value at 0 is still uniform. (Why this holds was not proved in class.) Thus all values x for the secret are equally likely, and secrecy holds.
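Here is a minimal Python sketch of the scheme just described: sharing over Z/pZ and recovering with the Lagrange coefficients c_i. This is an illustration added here, not part of the notes; the particular prime is arbitrary.

```python
import secrets

P = 2**127 - 1   # a Mersenne prime, large enough for illustration

def shamir_share(x: int, t: int, n: int) -> list[tuple[int, int]]:
    # f(z) = x + f1*z + ... + f_{t-1}*z^(t-1) with random coefficients; share i is (i, f(i)).
    coeffs = [x % P] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(z: int) -> int:
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * z + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def shamir_recover(shares: list[tuple[int, int]]) -> int:
    # x = f(0) = sum_i c_i * x_i with c_i = prod_{j != i} j / (j - i)  (mod P)
    x = 0
    for i, xi in shares:
        c = 1
        for j, _ in shares:
            if j != i:
                c = c * j % P * pow(j - i, -1, P) % P
        x = (x + c * xi) % P
    return x

if __name__ == "__main__":
    shares = shamir_share(1234567, t=3, n=5)
    assert shamir_recover(shares[:3]) == 1234567    # any 3 of the 5 shares suffice
    assert shamir_recover(shares[1:4]) == 1234567
```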
So it seems we have a solution for secret sharing: an efficient procedure to share a secret such that secrecy and recoverability both hold. Looks like we're done, right? Actually, we shouldn't be satisfied with Shamir's scheme. Here are four problems with it that we can immediately see:

1. If participants cheat in the recovery phase, the secret cannot be recovered. The other participants don't even have a way of knowing whether someone cheated: in the Shamir scheme there is no way to recognize whether an alleged share is valid. Signature schemes or coding theory can be used to address this issue.

2. There is total trust in the dealer, and thus in a single point of failure. If the application is firing the nuke, or voting, Shamir's scheme is obviously useless, as there is no single party that all parties will agree to trust. The dealer might hand out bogus or inconsistent shares, and the other parties don't even know what went wrong, namely that the problem lies with the dealer and not with too few participants being willing to contribute their shares. Quite surprisingly, there is actually a fix to having to trust a single dealer! We can come up with a scheme in which the parties can check that the dealer has shared a secret properly and computes correctly on the shares.

3. The scheme is one-time.

4. The scheme only allows revealing a secret, not computing with it. An example of the relevance of this problem is the following. There is a PGP private-key scheme in which a private key is shared across three machines, so that three machines need to be hacked into for security to be compromised. However, to use the private key for computation, no one machine should ever, even temporarily, hold the entire key and represent a single point of failure. If we used the Shamir scheme to share the private key as a secret among three machines, then to decrypt a message we would, undesirably, need a single machine to know the entire private key.

We will fix many of these problems with a new protocol, Verifiable Secret Sharing (VSS). We will fix the first two problems directly. The version of VSS given in this lecture is one-time. The fourth problem is a little harder to fix; a whole field of cryptography that studies "threshold cryptosystems" deals with it.

0.3 Verifiable Secret Sharing

Shamir's scheme has a nice property that we will exploit with some modifications. As Shamir's scheme is linear, in a way clarified below, it has a nice homomorphism property. Given secrets x, y that are shared

x ⟹_f (x_1, ..., x_n),   y ⟹_g (y_1, ..., y_n),

the key property is that given valid shares of x and y, parties can compute valid shares of

x + y ⟹_{f+g} (x_1 + y_1, ..., x_n + y_n).

Any party with x_i, y_i has the points f(i), g(i), so he can compute the share {x+y}_i = (i, (f+g)(i)) = (i, f(i) + g(i)). The participants can thus compute their shares of x + y on their own, given their shares of x and y. So a limited type of computation (addition) can be done in a distributed fashion even in the Shamir scheme.

Now we want a scheme that works much like the Shamir scheme when the dealer is honest, but unlike the Shamir scheme, does not assume an honest dealer. Again, if ≥ t parties are honest, the secret should be recoverable. Ideally, the secret should not be recoverable otherwise. However, if the dealer can be malicious, we cannot always guard the secrecy of the scheme: the dealer could simply reveal the secret, for instance. So our goals are now:

- all honest parties know or detect that the dealer is malicious, OR
- there is a consistent sharing of the secret with a way to recover it.

Before stating the VSS scheme, we start with some preliminaries. We must make an intractability assumption, which will be the assumed hardness of the discrete log problem. The problem is as follows: given g ∈ G, a generator of a group, and given h ∈ G, the goal is to find x ∈ Z (Z denotes the integers) such that g^x = h, with operations defined over the group. This problem is widely believed to be hard and is used as a cryptographic intractability assumption, although its hardness is unproven. Note that exponentiation has a nice one-way structure: given g, it is easy to get h from x but hard to get x from h.

We now go over some notation. Let p, q be primes with p = 2q + 1. The nonzero residues of Z/pZ form a group under multiplication. We now restrict our attention to the quadratic residues in this group: these are the elements a of the form a = w^2 for some w in the group. Thus they are the "squares" in the group, such as the element 1. They form a cyclic subgroup of Z/pZ of prime order q; we call this subgroup QR. We then pick elements g, h ∈ QR randomly.

We define a two-tuple notation. Let

G = (g, h),   A = (a, b),   C = (c, d).

Then we can define addition, scalar multiplication, and exponentiation operations as

A + C = (a + c, b + d),   n·A = (n·a, n·b),   G^A = g^a h^b.

Furthermore, if f(z) = f_0 + ... + f_d z^d and f'(z) = f'_0 + ... + f'_d z^d are polynomials, define

F = (f, f'),   F(a) = (f(a), f'(a)).

Finally, define commitments as

commit(A) = G^A = g^a h^b.

The above represents a commitment to the value a; g, h are given constants, and b is a random independent value.
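A toy numerical illustration of this commitment, and of the homomorphism property derived just below, is sketched here. The group p = 2q + 1 = 23 and the elements g = 4, h = 9 (both squares mod 23) are deliberately tiny and completely insecure; they are chosen only so the arithmetic is easy to follow, and are not from the notes.

```python
import secrets

p, q = 23, 11          # p = 2q + 1, both prime
g, h = 4, 9            # both are squares mod 23, so they lie in the order-q subgroup QR

def commit(a: int, b: int) -> int:
    # commit(A) = G^A = g^a * h^b mod p, where A = (a, b) and b is a fresh random value
    return pow(g, a, p) * pow(h, b, p) % p

a1, b1 = 5, secrets.randbelow(q)
a2, b2 = 7, secrets.randbelow(q)

lhs = commit(a1 + a2, b1 + b2)             # commit(A + C) with tuple addition
rhs = commit(a1, b1) * commit(a2, b2) % p  # commit(A) * commit(C)
assert lhs == rhs                          # the homomorphism property
```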
Now we can use the homomorphism property to observe that

commit(A + B) = G^{A+B} = G^{(a,b)+(a',b')} = G^{(a+a', b+b')} = g^{a+a'} h^{b+b'} = (g^a h^b)(g^{a'} h^{b'}) = commit(A)·commit(B).

Thus, we have a homomorphic commitment scheme. It is unusual for a commitment scheme to have this property.

We also need to argue that the commit function above indeed gives a commitment scheme. Two properties must hold: a scheme must be binding, and it must be hiding.

commit is hiding. This means that given the value commit(A), which represents a commitment to a, one has no idea about a. So we would like that for any value of commit(A), any value a' could have been the value being committed. Formally, we need: given A = (a, b), for each a' there is a b' such that

g^a h^b = g^{a'} h^{b'}.

That is, given any a, b, a', we need a b' to fit the equation above. Writing d = log_g h and working in the exponent (mod q), the equation becomes

a + b·d = a' + b'·d.

In a group of prime order, this equation has a unique solution for b', given a, b, a'. So the commitment scheme is hiding.

commit is binding. The whole point of using a commitment scheme is that after a commitment value commit(A) has been advertised, participants must be sure that A is the value being used later in the scheme. So it should be hard to find other values A' with commit(A) = commit(A'). In other words, given a, b, it should be hard to find a', b' such that

g^a h^b = g^{a'} h^{b'},

or equivalently, hard to find a', b' with

a + b·log_g h = a' + b'·log_g h.

If we could solve the above equation for a pair a', b', then we could essentially solve the discrete log problem. So we will assume our commit scheme is binding.

Finally, here is the VSS scheme, to share a secret x as x ⟹_F (x_1, ..., x_n):

1. The dealer chooses F = (f, f') randomly, where f, f' are degree-(t-1) polynomials, such that f(0) = x; thus F(0) = (x, random).

2. The dealer computes A_i = commit(F_i) for i = 0, ..., t-1, and thus gets a commitment to every coefficient. He broadcasts all t commitments A_i to all n participants.

3. The dealer computes X_i = F(i) and sends this value X_i to participant i, for each 1 ≤ i ≤ n. The dealer also signs each X_i value and sends the signature sig_D(X_i) to person i.

4. Each person P_i verifies the following:

G^{X_i} = G^{F(i)} = G^{F_0 + F_1·i + ... + F_{t-1}·i^{t-1}} = G^{F_0} (G^{F_1})^i ··· (G^{F_{t-1}})^{i^{t-1}} = A_0 · A_1^i · A_2^{i^2} ··· A_{t-1}^{i^{t-1}}.

That is, he checks whether the left-hand side equals the right-hand side, which should hold by the homomorphism property of the commitment scheme.

5. If this check fails for party P_i, then he broadcasts an accusation to all participants. The accusation includes his share X_i and the signature from the dealer, sig_D(X_i). So far, the other participants only see that something isn't kosher, but they don't know whether the dealer or P_i is at fault: it could be that the dealer never handed P_i a valid share, or that P_i is lying. So the dealer, to prove things are kosher from his end, broadcasts X_i to all participants, so that they can check that the share the dealer now claims to have sent is a valid share. Now each person P_i aborts if he sees at least t such accusations, or if his own share doesn't pass the check in the previous step. Otherwise he accepts this as a successful sharing.

Note that we have been assuming that all channels are secure, private, authenticated, signed channels.
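Continuing the same insecure toy parameters, the following sketch (an illustration written for these notes, not part of them; no networking or signatures) has a dealer commit to the coefficient pairs and has every participant run the check from step 4.

```python
import secrets

p, q, g, h = 23, 11, 4, 9     # toy group: p = 2q + 1, g and h are squares mod p
t, n = 3, 5

def commit(a: int, b: int) -> int:
    return pow(g, a, p) * pow(h, b, p) % p

# Dealer: F = (f, f') with f(0) = x the secret and f'(0) random.
x = 6
f  = [x] + [secrets.randbelow(q) for _ in range(t - 1)]
fp = [secrets.randbelow(q) for _ in range(t)]
A = [commit(f[j], fp[j]) for j in range(t)]   # broadcast commitments A_0 .. A_{t-1}

def evaluate(coeffs: list[int], z: int) -> int:
    # polynomial evaluation in the exponent group Z/qZ
    return sum(c * pow(z, j, q) for j, c in enumerate(coeffs)) % q

# Share X_i = F(i) = (f(i), f'(i)), sent privately to participant i.
shares = {i: (evaluate(f, i), evaluate(fp, i)) for i in range(1, n + 1)}

# Participant i's check (step 4): G^{X_i} == A_0 * A_1^i * ... * A_{t-1}^{i^{t-1}} mod p
for i, (fi, fpi) in shares.items():
    rhs = 1
    for j in range(t):
        rhs = rhs * pow(A[j], pow(i, j), p) % p
    assert commit(fi, fpi) == rhs
```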
The point of having the dealer send X_i after an accusation was broadcast by person P_i is not just so the dealer can show he is handing out valid shares, but also because then, if an honest participant didn't get a valid share, the other t - 1 honest participants can use the valid share broadcast by the dealer to recover the secret.

We have removed a lot of the trust in the dealer already, but there are still ways the dealer can upset this scheme. For instance, instead of choosing prime numbers to construct a quadratic residue subgroup, he might pick his phone number. How do we totally remove trust in the dealer? Here is a simple idea. Suppose we have two instances of the VSS scheme running with the same participants:

x ⟹_F (..., X_i, ...), with commitments [A_0, ..., A_{t-1}], and
y ⟹_E (..., Y_i, ...), with commitments [B_0, ..., B_{t-1}].

We can now combine the two procedures as follows. Think of the polynomial being used as E + F; then the secret is F_0 + E_0 = x + y. Each person now, instead of holding X_i = F(i) or Y_i = E(i), holds the sum X_i + Y_i = F(i) + E(i). Finally, by the homomorphism property of the commitment scheme, the commitments now look like commit(F_i + E_i) = G^{F_i + E_i} = commit(F_i)·commit(E_i). So we have a sharing procedure with the characteristics

x + y ⟹_{E+F} (..., X_i + Y_i, ...), with commitments [A_0·B_0, ..., A_{t-1}·B_{t-1}].

Finally, assume each participant acts as a dealer and picks a function F[i], 1 ≤ i ≤ n, and the principle of adding functions to get a secret that is the sum of secrets, shown above, is applied n times:

x[1] + x[2] + ... + x[n] ⟹_{F[1]+...+F[n]} (..., X[1]_i + X[2]_i + ... + X[n]_i, ...), with commitments [A[1]_0·A[2]_0···A[n]_0, ..., A[1]_{t-1}·A[2]_{t-1}···A[n]_{t-1}].

If any one of the participants plays the role of the dealer in an honest, unbiased way, the above scheme, with all players acting as dealers in parallel, will be uniform and unbiasable by the other players. The one issue with this scheme, where all players pick a function F[i], is that all players must broadcast their commitments to the coefficients F[i]_j, 1 ≤ i ≤ n, 0 ≤ j ≤ t-1, as soon as they pick a function and before any players are given their shares, because otherwise the players can bias the scheme by choosing their functions based on other players' functions.
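A last toy sketch, reusing the same insecure parameters, illustrates the two-instance combination: each participant adds its two shares, the commitment vectors are multiplied coordinatewise, and the step-4 check still passes against the combined commitments. Again, this is an added illustration, not part of the notes.

```python
import secrets

p, q, g, h, t, n = 23, 11, 4, 9, 3, 5

def commit(a: int, b: int) -> int:
    return pow(g, a, p) * pow(h, b, p) % p

def rand_poly(c0: int) -> list[int]:
    return [c0 % q] + [secrets.randbelow(q) for _ in range(t - 1)]

def ev(coeffs: list[int], z: int) -> int:
    return sum(c * z**j for j, c in enumerate(coeffs)) % q

x, y = 3, 4
F, Fp = rand_poly(x), rand_poly(secrets.randbelow(q))   # first dealer shares x via (f, f')
E, Ep = rand_poly(y), rand_poly(secrets.randbelow(q))   # second dealer shares y via (e, e')

A = [commit(F[j], Fp[j]) for j in range(t)]             # commitments for x
B = [commit(E[j], Ep[j]) for j in range(t)]             # commitments for y
C = [A[j] * B[j] % p for j in range(t)]                 # combined commitments for x + y

# Each participant i adds its two shares; the step-4 check still passes against C.
for i in range(1, n + 1):
    si, spi = (ev(F, i) + ev(E, i)) % q, (ev(Fp, i) + ev(Ep, i)) % q
    rhs = 1
    for j in range(t):
        rhs = rhs * pow(C[j], i**j, p) % p
    assert commit(si, spi) == rhs
```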
114
Basquin's and Coffin's Laws, Fatigue (PepperEng Tutor, posted 9 Jul 2020)

Description: The high-cycle fatigue data is as follows [data shown on screen; from the worked solution it consists of stress amplitudes of 192 MPa at 10^6 cycles and 167 MPa at 6 x 10^6 cycles]. Using the data above, estimate the maximum stress amplitude to ensure 10^8 cycles.

Transcript:

[Intro] Hey guys, today I'm going to be going over Basquin's and Coffin's laws with respect to fatigue, so without further ado let's get into it. The first thing you need to know is really this little chart here that I've outlined, which talks about high-cycle and low-cycle fatigue; it will help us when thinking about Basquin's and Coffin's laws. High-cycle fatigue is when you're subjecting a test specimen, or just a regular material, to more than 10^4 cycles, and low-cycle is less than 10^4. High cycle is when Basquin's law applies, and low cycle is when Coffin's law applies. Elastic deformation occurs in high cycle and plastic deformation in low cycle, so the first is non-permanent and the second is permanent deformation. If you want more info you can watch my other video about fatigue, which will be on the screen now.

[Definitions] Okay, so we know that, but I should probably write out Basquin's and Coffin's laws for you, because that's what you're really here for. Basquin's law is written as: the change in stress (the stress range) times the number of cycles to failure raised to a constant b equals another constant C, so delta-sigma * N_f^b = C. Coffin's law is written as: the change in plastic strain equals a constant divided by the number of cycles to failure raised to another constant, delta-epsilon_p = C / N_f^c, and note that these constants are different from the ones in Basquin's law.

[Problem] Now let's look at a problem. It says: the high-cycle fatigue loading data is as follows; using the data above, estimate the maximum stress amplitude to ensure 10^8 cycles. The first thing we need to notice is "high-cycle fatigue", so that side of the table applies and we're going to use Basquin's law, delta-sigma * N_f^b = C. Usually the way these problems go, we're trying to find a maximum stress at a number of cycles that isn't in the data. If we were given 10^8 cycles and a corresponding stress amplitude we could just get the stress range by multiplying by two, but instead we're going to interpolate between the two data points, in a way. That may seem kind of abstract, but I'll just show you the math and I hope it will make sense.

Here's what we're going to do. With this equation we need to isolate b and C: we have two data points and two unknowns, so we can solve for both. But b sits in the exponent, so we need to get rid of that, and we use logs to get rid of exponents. Taking the log of both sides: log(delta-sigma) + b*log(N_f) = log(C). This all comes from our log rules: an exponent inside a log can be brought down to the front, and the log of a product is the sum of the logs.

We know two data points we can use here. Remember that delta-sigma is the stress range, and what we're given is the stress amplitude, so we need to multiply by two. Plugging in: log(2 x 192) + b*log(10^6) = log(C), and for the second data point, log(2 x 167) + b*log(6 x 10^6) = log(C). The constant is the same in both equations because it's the same material and it's only subjected to high-cycle fatigue. So we have a system of two equations with two unknowns. If we subtract them, the twos cancel and subtracting logs turns into dividing inside the log, so we get log(192/167) + b*log(10^6 / (6 x 10^6)) = 0, that is log(192/167) + b*log(1/6) = 0, because log(C) minus log(C) cancels just like five minus five. Isolating b (and watching the sign on log(1/6) as it moves across), b = log(192/167) / log(6), which gives b = 0.077. You could of course carry more decimals, but we're not trying to get a theoretically exact value, we're just trying to get a good estimate, as the question specified. That's something to keep in mind when solving these: you're just looking for an estimate if the question says so.

[Solution] So here's our b value, but we still need C, and we can get it by plugging b back into the original Basquin's law equation: the stress range times the number of cycles to failure raised to b equals C. We know the stress range, the number of cycles to failure and b, so we can find C. Using the 192 MPa data point: the stress range is 192 x 2 (because we're given the stress amplitude), and C = (2 x 192) * (10^6)^0.077, which gives C = 1112.57.

So now we have our b and C values, and now we need to actually make the estimate for 10^8 cycles. The way we do that is go back to our good old log form, log(delta-sigma) + b*log(N_f) = log(C), and solve for the stress range at that number of cycles; once we isolate it and divide by two, that gives the stress amplitude. Plugging in the numbers: log(delta-sigma) + 0.077*log(10^8) = log(1112.57). Moving the middle term over, subtracting it from the right-hand side and raising 10 to the result (this is log base 10, so that cancels the log), we get the stress range: delta-sigma = 269.35 MPa (mega just being 10^6). But we need the stress amplitude, so dividing the stress range by two leaves us with a stress amplitude of 134.67 MPa. So that is our estimate for the maximum stress amplitude to ensure the material does not fail before 10^8 cycles. That's all I have for today; if you were confused about any of this I'd recommend checking out my other fatigue video, which will be up on the screen again now. Have a great day.
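The arithmetic in the worked example is easy to reproduce. Here is a short Python sketch (mine, not from the video) that follows the same steps: fit b and C from the two data points, then evaluate the stress amplitude at 10^8 cycles. Small differences from the video's numbers come from its intermediate rounding of b to 0.077.

```python
import math

# Data from the example: stress amplitude sigma_a [MPa] at N_f cycles.
data = [(192.0, 1e6), (167.0, 6e6)]

# Basquin's law: delta_sigma * N_f**b = C, with delta_sigma = 2 * sigma_a (stress range).
(s1, n1), (s2, n2) = data
b = math.log10((2 * s1) / (2 * s2)) / math.log10(n2 / n1)   # ~0.078 (video rounds to 0.077)
C = (2 * s1) * n1 ** b                                       # ~1126 MPa (video: ~1113)

# Stress amplitude that should survive 1e8 cycles.
delta_sigma = C / (1e8) ** b
sigma_a = delta_sigma / 2                                    # ~134 MPa (video: 134.67 MPa)
print(f"b = {b:.4f}, C = {C:.1f} MPa, sigma_a(1e8 cycles) = {sigma_a:.1f} MPa")
```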
115
Vinogradov's Mean Value Theorem
Ciprian Demeter (Indiana University), joint with Jean Bourgain (IAS) and Larry Guth (MIT). Madison, May 2016.

Notation. We write $A \lesssim B$ if there is an implicit constant $C$, depending only on fixed parameters such as $n$ (the dimension of $\mathbb{R}^n$) and $p$ (the index of $L^p$), such that $A \le CB$. $C$ will never depend on the scales $\delta$, $N$. We often write $A \lesssim_\epsilon N^\epsilon$ to denote the fact that the implicit constant depends on $\epsilon$; for example $\log N \lesssim_\epsilon N^\epsilon$ for each $\epsilon > 0$. We will use the notation $e(z) = e^{2\pi i z}$, $z \in \mathbb{R}$.

For integers $s \ge 1$ and $n, N \ge 2$ denote by $J_{s,n}(N)$ the number of integral solutions of the system
$$X_1^i + \dots + X_s^i = Y_1^i + \dots + Y_s^i, \qquad 1 \le i \le n,$$
with $1 \le X_1, \dots, X_s, Y_1, \dots, Y_s \le N$. Example, $n = 2$:
$$X_1 + \dots + X_s = Y_1 + \dots + Y_s, \qquad X_1^2 + \dots + X_s^2 = Y_1^2 + \dots + Y_s^2.$$

Theorem (Vinogradov's Mean Value "Theorem"). For each $s \ge 1$, $\epsilon > 0$ and $n, N \ge 2$ we have the upper bound
$$J_{s,n}(N) \lesssim_\epsilon N^{s+\epsilon} + N^{2s - \frac{n(n+1)}{2} + \epsilon}.$$

The number $J_{s,n}(N)$ has the following analytic representation:
$$J_{s,n}(N) = \int_{[0,1]^n} \Big| \sum_{j=1}^N e(x_1 j + x_2 j^2 + \dots + x_n j^n) \Big|^{2s} dx_1 \dots dx_n.$$

Theorem (Vinogradov's Mean Value "Theorem" (VMVT)). For each $p \ge 2$, $\epsilon > 0$ and $n, N \ge 2$ we have the upper bound
$$\Big( \int_{[0,1]^n} \Big| \sum_{j=1}^N e(x_1 j + x_2 j^2 + \dots + x_n j^n) \Big|^{p} dx_1 \dots dx_n \Big)^{1/p} \lesssim_\epsilon \begin{cases} N^{\frac12 + \epsilon}, & 2 \le p \le n(n+1), \\ N^{1 - \frac{n(n+1)}{2p} + \epsilon}, & p \ge n(n+1). \end{cases}$$
When $p = 2, \infty$ we have the sharp estimates
$$\Big\| \sum_{j=1}^N e(x_1 j + x_2 j^2 + \dots + x_n j^n) \Big\|_{L^p(\mathbb{T}^n)} = \begin{cases} N^{1/2}, & p = 2, \\ N, & p = \infty. \end{cases}$$
Given $n$, the full range of estimates in VMVT will follow if we prove the case $p = n(n+1)$ (the critical exponent).

• $n = 2$ is easy and has been known (folklore?). It has critical exponent $p = 2(2+1) = 6$. One needs to check that the system
$$X_1 + X_2 + X_3 = Y_1 + Y_2 + Y_3, \qquad X_1^2 + X_2^2 + X_3^2 = Y_1^2 + Y_2^2 + Y_3^2$$
has $O(N^{3+\epsilon})$ integral solutions in the interval $[1, N]$. Note that $(X_1, X_2, X_3, X_1, X_2, X_3)$ is always a (trivial) solution, so we have at least $N^3$ solutions. The required estimate says that fixing $X_1, X_2, X_3$ determines $Y_1, Y_2, Y_3$ within $O(N^\epsilon)$ choices. Using easy algebraic manipulations this boils down to the fact that a circle of radius $N$ contains at most $O(N^\epsilon)$ lattice points.

• $n \ge 3$: only partial results were known until about 2012.

Theorem (Vinogradov (1935), Karatsuba, Stechkin). VMVT holds for $p \ge n^2(4 \log n + 2 \log\log n + 10)$, and in fact one has a sharp asymptotic formula
$$\Big\| \sum_{j=1}^N e(x_1 j + x_2 j^2 + \dots + x_n j^n) \Big\|_{L^p(\mathbb{T}^n)} \sim C(p,n)\, N^{1 - \frac{n(n+1)}{2p}}.$$

Wooley developed the efficient congruencing method, which led to the following progress.

Theorem (Wooley, 2012 and later). VMVT holds for
• $n = 3$ and all values of $p$;
• $p \le n(n+1) - \frac{2n}{3} + O(n^{2/3})$;
• $p \ge 2n(n-1)$, for all $n \ge 3$.

Theorem (Bourgain, D., Guth 2015). VMVT holds for all $n \ge 2$ and all $p$. Moreover, when this is combined with known sharp estimates on major arcs, there are no losses in the supercritical regime $p > n(n+1)$:
$$\Big\| \sum_{j=1}^N e(x_1 j + x_2 j^2 + \dots + x_n j^n) \Big\|_{L^p(\mathbb{T}^n)} \le C(p,n)\, N^{1 - \frac{n(n+1)}{2p}}.$$

Our method does not seem to say anything meaningful about the implicit constant $C(p,n)$, so we cannot say anything new about the zero-free regions of the Riemann zeta function. But there are at least two other classical applications.

Weyl sums. For $x = (x_1, \dots, x_n)$ let $f_n(x, N) = \sum_{j=1}^N e(x_1 j + x_2 j^2 + \dots + x_n j^n)$.

Theorem (H. Weyl). Assume $|x_n - \frac{a}{q}| \le \frac{1}{q^2}$, $(a,q) = 1$. Then
$$|f_n(x,N)| \lesssim N^{1+\epsilon} (q^{-1} + N^{-1} + q N^{-n})^{2^{1-n}}.$$
As a consequence of VMVT we can now replace $2^{1-n}$ with $\sigma(n) = \frac{1}{n(n-1)}$ (the best known bounds for large $n$).

The asymptotic formula in Waring's problem. Let $R_{s,k}(n)$ be the number of representations of the integer $n$ as a sum of $s$ $k$-th powers. Based on circle method heuristics, the following asymptotic formula is conjectured:
$$R_{s,k}(n) = \frac{\Gamma(1 + \frac1k)^s}{\Gamma(\frac{s}{k})}\, G_{s,k}(n)\, n^{\frac{s}{k} - 1} + o\big(n^{\frac{s}{k} - 1}\big), \qquad n \to \infty,$$
for $s \ge k+1$, $k \ge 3$. Let $\tilde G(k)$ (the Waring number) be the smallest $s$ for which the formula holds.

Wooley showed that VMVT would imply, for all $k \ge 3$,
$$\tilde G(k) \le k^2 + 1 - \max_{\substack{1 \le j \le k-1 \\ 2^j \le k^2}} \Big[ \frac{kj - 2^j}{k + 1 - j} \Big].$$
In particular, we get
$$\tilde G(k) \le k^2 + 1 - \Big[ \frac{\log k}{\log 2} \Big].$$
This improves all previous bounds on $\tilde G(k)$, except for Vaughan's $\tilde G(3) \le 8$ (1986). Further improvements are possible. Our VMVT leads (rather immediately) to progress on Hua's lemma, which leads (Bourgain 2016) to a further improvement $\tilde G(k) \le k^2 - k + O(\sqrt{k})$.

Hua's lemma. Let $f(x) = \sum_{j \sim N} e(j^n x)$. Conjecture: $\int_0^1 |f(x)|^p\, dx \lesssim N^{p - n + \epsilon}$ for $p \ge 2n$.

Lemma (Hua). For $l \le n$, $\int_0^1 |f(x)|^{2^l} dx \lesssim N^{2^l - l + \epsilon}$, sharp when $l = n$.

Theorem (Bourgain, 2016). For $s \le n$, $\int_0^1 |f(x)|^{s(s+1)} dx \lesssim N^{s^2 + \epsilon}$, sharp when $s = n$.

Motivated in part by investigations of T. Wolff from the late 1990s, Bourgain and I have developed a decoupling theory for $L^p$ spaces. In a nutshell, our theorems go as follows.

Theorem (Abstract decoupling theorem). Let $f : M \to \mathbb{C}$ be a function on some compact manifold $M$ in $\mathbb{R}^n$, with natural measure $\sigma$. Partition the manifold into caps $\tau$ of size $\delta$ (with some variations forced by curvature) and let $f_\tau = f 1_\tau$ be the restriction of $f$ to $\tau$. Then there is a critical index $p_c > 2$ and some $q \ge 2$ (both depending on the manifold) so that
$$\big\| \widehat{f d\sigma} \big\|_{L^p(B_{\delta^{-q}})} \lesssim_\epsilon \delta^{-\epsilon} \Big( \sum_{\tau:\ \delta\text{-cap}} \big\| \widehat{f_\tau d\sigma} \big\|_{L^p(B_{\delta^{-q}})}^2 \Big)^{1/2}$$
for each ball $B_{\delta^{-q}}$ in $\mathbb{R}^n$ with radius $\delta^{-q}$ and each $2 \le p \le p_c$.

For a "non-degenerate" $d$-dimensional smooth, compact graph manifold in $\mathbb{R}^n$,
$$M = \{ (t_1, \dots, t_d, \varphi_1(t_1, \dots, t_d), \dots, \varphi_{n-d}(t_1, \dots, t_d)) \},$$
it seems reasonable to expect (at least for $l^p$ decouplings):
(1) $p_c = \frac{4n}{d} - 2$ and $q = 2$, if $d > \frac{n}{3}$. This should be achieved with purely quadratic $\varphi_i$. When $d = n-1$, $p_c = \frac{2(n+1)}{n-1}$.
(2) $p_c = 3 \cdot 4$ and $q = 3$, if $\frac{n}{4} < d \le \frac{n}{3}$. The cubic terms become relevant. Examples include $(t, t^2, t^3)$ in $\mathbb{R}^3$ and $(t_1, t_1^2, t_1^3, t_2, t_2^2, t_2^3, 0)$ in $\mathbb{R}^7$.
(3) $p_c = 4 \cdot 5$ and $q = 4$, if $\frac{n}{5} < d \le \frac{n}{4}$. The quartic terms become relevant. One example is $(t, t^2, t^3, t^4)$ in $\mathbb{R}^4$. It is clear how to continue.

Bourgain's observation (2011): to get from the abstract decoupling theorem above to the exponential sum estimate below, simply use (a smooth approximation of) $f = \sum_\tau a_\tau \delta_{\xi_\tau}$.

Theorem (Abstract exponential sum estimate). For each cap $\tau$ let $\xi_\tau \in \tau$ and $a_\tau \in \mathbb{C}$. Then
$$|B_{\delta^{-q}}|^{-1/p} \Big\| \sum_\tau a_\tau e(\xi_\tau \cdot x) \Big\|_{L^p(B_{\delta^{-q}})} \lesssim_\epsilon \delta^{-\epsilon} \Big( \sum_\tau |a_\tau|^2 \Big)^{1/2}$$
for each ball $B_{\delta^{-q}}$ in $\mathbb{R}^n$ with radius $\delta^{-q}$ and each $2 \le p \le p_c$.

We have so far established the optimal decoupling theory for the following manifolds $M$, with the following applications.
• Hypersurfaces in $\mathbb{R}^n$ with nonzero Gaussian curvature ($p_c = \frac{2(n+1)}{n-1}$). Many applications: optimal Strichartz estimates for the Schrödinger equation on both rational and irrational tori in all dimensions, improved $L^p$ estimates for the eigenfunctions of the Laplacian on the torus, etc.
• The cone (zero Gaussian curvature) in $\mathbb{R}^n$ ($p_c = \frac{2n}{n-2}$). Many applications: progress on Sogge's "local smoothing conjecture" for the wave equation, etc.
• (Bourgain) Two-dimensional surfaces in $\mathbb{R}^4$ ($p_c = 6$). Application: Bourgain used this to improve the estimate in the Lindelöf hypothesis for the growth of the Riemann zeta function.
• (with Larry, too) Curves with torsion in $\mathbb{R}^n$ ($p_c = n(n+1)$). Application: Vinogradov's Mean Value Theorem.

Here is some insight on why we need to work on "big" balls $B_{\delta^{-q}}$. Typically, working with $q = 1$ does not produce interesting results; decoupling only works at this scale for $p = 2$. The very standard ($L^2$ almost orthogonality) estimate is that, for any $\delta$-separated points $\xi$ in $\mathbb{R}^n$,
$$\Big( \frac{1}{|B_{\delta^{-1}}|} \int_{B_{\delta^{-1}}} \Big| \sum_\xi a_\xi e(\xi \cdot x) \Big|^2 dx \Big)^{1/2} \lesssim \| a_\xi \|_{l^2}.$$
One cannot replace the $L^2$ average with an $L^p$ ($p > 2$) average if no additional restrictions are imposed. Even under the curvature assumption $\Lambda \subset S^{n-1}$, when $p = \frac{2(n+1)}{n-1}$ the expected estimate is (an equivalent form of Stein-Tomas)
$$\Big( \frac{1}{|B_{\delta^{-1}}|} \int_{B_{\delta^{-1}}} \Big| \sum_{\xi \in \Lambda} a_\xi e(\xi \cdot x) \Big|^p dx \Big)^{1/p} \lesssim \delta^{\frac{n}{p} - \frac{n-1}{2}}\, \| a_\xi \|_{l^2}.$$
Note that the exponent $\frac{n}{p} - \frac{n-1}{2}$ is negative. However, by averaging the same exponential sum over the larger ball $B_{\delta^{-2}}$ (this allows more room for the oscillations to annihilate each other), we get a stronger estimate (reverse Hölder)
$$\Big( \frac{1}{|B_{\delta^{-2}}|} \int_{B_{\delta^{-2}}} \Big| \sum_{\xi \in \Lambda} a_\xi e(\xi \cdot x) \Big|^p dx \Big)^{1/p} \lesssim \delta^{-\epsilon}\, \| a_\xi \|_{l^2}.$$
This perhaps explains why early attempts to prove optimal Strichartz estimates on $\mathbb{T}^n$ using the Stein-Tomas approach failed. Recap: decouplings need separation, curvature and large enough spatial balls.

Theorem (Bourgain, D., Guth, 2015). Let $\bar\xi = (\xi, \dots, \xi^n)$ run over $\delta$-separated points on the curve $\{(t, t^2, \dots, t^n) : 0 \le t \le 1\}$. Then for each $2 \le p \le n(n+1)$,
$$\Big( \frac{1}{|B_{\delta^{-n}}|} \int_{B_{\delta^{-n}}} \Big| \sum_{\bar\xi} a_{\bar\xi}\, e(\xi x_1 + \xi^2 x_2 + \dots + \xi^n x_n) \Big|^p dx \Big)^{1/p} \lesssim_\epsilon \delta^{-\epsilon}\, \| a_{\bar\xi} \|_{l^2}.$$
Apply this with $\xi = \frac{j}{N}$, $1 \le j \le N$, and change variables $\frac{x_1}{N} = y_1, \dots, \frac{x_n}{N^n} = y_n$. Then we get (with $\delta = \frac1N$)
$$\Big( \frac{1}{|C|} \int_C \Big| \sum_{j=1}^N a_j\, e(j y_1 + j^2 y_2 + \dots + j^n y_n) \Big|^p dy \Big)^{1/p} \lesssim_\epsilon N^\epsilon\, \| a_j \|_{l^2},$$
$$C = [-N^{n-1}, N^{n-1}] \times [-N^{n-2}, N^{n-2}] \times \dots \times [-1, 1].$$
Next cover $C$ with translates of $[0,1]^n$ and use periodicity to get
$$\Big( \int_{\mathbb{T}^n} \Big| \sum_{j=1}^N a_j\, e(j y_1 + j^2 y_2 + \dots + j^n y_n) \Big|^p dy \Big)^{1/p} \lesssim_\epsilon N^\epsilon\, \| a_j \|_{l^2}.$$
Conclusions. 1. Periodicity is the only fact that we exploit about the integers $j$; we have no other number theory in our argument. In fact, the integers can be replaced with well separated real numbers. 2. We recover a more general theorem, with coefficients $a_j$.

The proof of our decoupling theorem ($n = 3$). Let $M = \{(t, t^2, t^3) : 0 \le t \le 1\}$.

Theorem. Let $f : M \to \mathbb{C}$. Partition $M$ into caps $\tau$ of size $\delta$. Then
$$\big\| \widehat{f d\sigma} \big\|_{L^{12}(B_{\delta^{-3}})} \lesssim_\epsilon \delta^{-\epsilon} \Big( \sum_\tau \big\| \widehat{f_\tau d\sigma} \big\|_{L^{12}(B_{\delta^{-3}})}^2 \Big)^{1/2}$$
for each ball $B_{\delta^{-3}}$ in $\mathbb{R}^3$ with radius $\delta^{-3}$.

The proof goes via gradually decreasing the size of the caps $\tau$ and at the same time increasing the radius of the balls. This is done using the following tools.

• $L^2$ decoupling: this is a form of $L^2$ orthogonality,
$$\big\| \widehat{f d\sigma} \big\|_{L^2(B_{\delta^{-1}})} \lesssim \Big( \sum_\tau \big\| \widehat{f_\tau d\sigma} \big\|_{L^2(B_{\delta^{-1}})}^2 \Big)^{1/2}.$$
It only works for $L^2$, but it decouples efficiently, into caps of very small size, equal to 1/(radius of the ball).

• Lower dimensional decoupling: we use induction on the dimension. We assume and use the $n = 2$ decoupling result at $L^6$. The weakness of this is that the critical exponent $p_c = 6$ for $n = 2$ is small compared to $12$ ($n = 3$). The strength is that it decouples into small intervals, of length $R^{-1/2}$ as opposed to $R^{-1/3}$ ($R$ is the radius of the spatial ball). At the right spatial scale, arcs of the twisted cubic look planar, and one can treat them with $L^6$ decoupling. For example, the $\sim \delta^{-3}$ neighborhood of $\{(t, t^2, t^3) : 0 \le t \le \delta\}$ is essentially the same as the $\sim \delta^{-3}$ neighborhood of the arc of parabola $\{(t, t^2, 0) : 0 \le t \le \delta\}$, so there is an $L^6$ decoupling of this into arcs of length $\delta^{3/2}$ on $B_{\delta^{-3}}$.

• Multilinear Kakeya type inequalities: do a wave packet decomposition of $\widehat{f d\sigma}$ using plates. There is a hierarchy of incidence geometry inequalities about how these plates intersect, ranging from easy to hard. These inequalities have only been clarified in the last two years.

Theorem (Multilinear Kakeya in disguise). Fix $1 \le k \le n-1$, $p \ge 2n$ and $n!$ separated intervals $I_i \subset [0,1]$. Let $B$ be an arbitrary ball in $\mathbb{R}^n$ with radius $\delta^{-(k+1)}$, and let $\mathcal{B}$ be a finitely overlapping cover of $B$ with balls $\Delta$ of radius $\delta^{-k}$. Then (the subscript $\sharp$ denotes an averaged norm)
$$\frac{1}{|\mathcal{B}|} \sum_{\Delta \in \mathcal{B}} \Bigg[ \prod_{i=1}^{n!} \Big( \sum_{\substack{J_i \subset I_i \\ |J_i| = \delta}} \big\| \widehat{g_{J_i} d\sigma} \big\|_{L^{\frac{pk}{n}}_\sharp(\Delta)}^2 \Big)^{1/2} \Bigg]^{p/n!} \lesssim \delta^{-\epsilon} \Bigg[ \prod_{i=1}^{n!} \Big( \sum_{\substack{J_i \subset I_i \\ |J_i| = \delta}} \big\| \widehat{g_{J_i} d\sigma} \big\|_{L^{\frac{pk}{n}}_\sharp(B)}^2 \Big)^{1/2} \Bigg]^{p/n!}.$$
Our first attempt (Jean and I) to prove VMVT only used the $k = 1$ result and resulted in the poor range $2 \le p \le 4n - 2$.

• Parabolic rescaling: each arc on $(t, t^2, \dots, t^n)$ can be mapped via an affine transformation to the full arc $0 \le t \le 1$.

• Lots of induction on scales: let $C_\delta$ be the best constant in some decoupling inequality at scale $\delta$. How does $C_\delta$ relate to $C_{\delta^{1/2}}$?

• Lots of Hölder's inequality and ball inflations.
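The counting problem in the first slides is easy to experiment with numerically. The following is a small brute-force Python sketch (written here as an illustration, not from the slides) that computes $J_{s,n}(N)$ directly from its definition for very small parameters; it only sanity-checks the system of equations and says nothing about the asymptotics.

```python
from itertools import product

def J(s: int, n: int, N: int) -> int:
    """Count integral solutions of X_1^i + ... + X_s^i = Y_1^i + ... + Y_s^i
    for 1 <= i <= n, with all variables in 1..N (brute force, O(N^s) time)."""
    counts = {}
    for xs in product(range(1, N + 1), repeat=s):
        key = tuple(sum(x ** i for x in xs) for i in range(1, n + 1))
        counts[key] = counts.get(key, 0) + 1
    # A value attained by m different X-tuples contributes m^2 pairs (X, Y).
    return sum(m * m for m in counts.values())

# For s = 1 the only solutions are X_1 = Y_1, so J_{1,n}(N) = N.
assert J(1, 2, 5) == 5
# n = 2, s = 3 is the critical case discussed above: J_{3,2}(N) = O(N^{3+eps}).
print(J(3, 2, 8))
```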
116
Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference | IEEE Transactions on Information Theory
===============
Research article. Author: G. Forney, Jr.
Published in: IEEE Transactions on Information Theory, Volume 18, Issue 3 (May 1972), pages 363-378. ISSN 0018-9448. Publisher: IEEE Press.

Abstract: A maximum-likelihood sequence estimator for a digital pulse-amplitude-modulated sequence in the presence of finite intersymbol interference and white Gaussian noise is developed. The structure comprises a sampled linear filter, called a whitened matched filter, and a recursive nonlinear processor, called the Viterbi algorithm. The outputs of the whitened matched filter, sampled once for each input symbol, are shown to form a set of sufficient statistics for estimation of the input sequence, a fact that makes obvious some earlier results on optimum linear processors. The Viterbi algorithm is easier to implement than earlier optimum nonlinear processors and its performance can be straightforwardly and accurately estimated. It is shown that performance (by whatever criterion) is effectively as good as could be attained by any receiver structure and in many cases is as good as if intersymbol interference were absent. Finally, a simplified but effectively optimum algorithm suitable for the most popular partial-response schemes is described.

Index terms: Mathematics of computing; Information theory.
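For a concrete picture of the nonlinear processor the abstract refers to, here is a rough, self-contained Python sketch of Viterbi maximum-likelihood sequence estimation for binary PAM over a known short FIR channel. It is an illustration written for this page, not Forney's formulation: the whitened-matched-filter front end is assumed to have already been applied, and the channel taps h are assumed known.

```python
import numpy as np
from itertools import product

def viterbi_mlse(r, h, symbols=(-1.0, 1.0)):
    """MLSE of a PAM sequence a[0..K-1] observed as r[k] = sum_j h[j]*a[k-j] + noise,
    for a known FIR channel h. The trellis state is the last L transmitted symbols."""
    h = np.asarray(h, dtype=float)
    L = len(h) - 1                                   # channel memory
    states = list(product(symbols, repeat=L))        # all possible (a[k-1], ..., a[k-L])
    index = {s: i for i, s in enumerate(states)}
    INF = float("inf")
    cost = [0.0] * len(states)                       # uniform prior over initial states
    back = []                                        # back-pointers per time step
    for rk in r:
        new_cost = [INF] * len(states)
        step = [None] * len(states)
        for si, s in enumerate(states):
            if cost[si] == INF:
                continue
            for a in symbols:
                # predicted noiseless output if symbol a follows state s
                pred = h[0] * a + sum(h[j] * s[j - 1] for j in range(1, L + 1))
                c = cost[si] + (rk - pred) ** 2      # squared-error branch metric
                ns = ((a,) + s)[:L]                  # new state after transmitting a
                ni = index[ns]
                if c < new_cost[ni]:
                    new_cost[ni], step[ni] = c, (si, a)
        back.append(step)
        cost = new_cost
    # trace back the minimum-cost path
    si = int(np.argmin(cost))
    est = []
    for step in reversed(back):
        si, a = step[si]
        est.append(a)
    return est[::-1]

# Quick check: a short sequence through h = [1.0, 0.5], noise-free for determinism.
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=20)
h = [1.0, 0.5]
r = np.convolve(a, h)[: len(a)]
assert viterbi_mlse(r, h) == list(a)
```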
117
Algebra i Analiz, Vol. 16 (2004), No. 6

DIFFERENTIATION IN METRIC SPACES

© Alexander Lytchak

We discuss differentiation of Lipschitz maps between abstract metric spaces and study such issues as differentiability of isometries, the first variation formula and theorems of Rademacher type.

§1. Introduction

1.1. The Aim. This paper is devoted to the study of the first order geometry of metric spaces. Our study was mainly motivated by the observation that, whereas the advanced features of the theories of Alexandrov spaces with upper and lower curvature bounds are quite different, the beginnings are almost identical, at least as far as only first order derivatives are concerned (for example, tangent spaces and the first variation formula). One is naturally led to the question on which spaces this first order geometry can be established. As it turns out, the same first order geometry exists in many other spaces, which we call geometric. The class of geometric spaces contains all Hölder continuous Riemannian manifolds, sufficiently convex and smooth Finsler manifolds ([LY]), a big class of subsets of Riemannian manifolds (for example, sets of positive reach, see [Fed59] and [Lyta]), surfaces with an integral curvature bound ([Res93]) and extremal subsets of Alexandrov spaces with lower curvature bound ([PP94a]). The last case was discussed in [Pet94], and the proof of the first variation formula was a major step towards proving the deep gluing theorem ([Pet94]). Moreover, the class of geometric spaces is stable under metric operations, even under such a difficult one as taking quotients. Finally, the existence of the first order geometry is a good assumption for studying features of higher order, such as gradient flows of semi-concave functions ([PP94b] and [Lytc]). One of the main issues of this paper is the establishing of natural, easily verifiable axioms that describe this first order geometry, and the study of their consequences.

Keywords and phrases: Alexandrov spaces, Rademacher theorem, variation formulas, tangent cones.

Another, more direct motivation comes from the question whether a submetry (or, more specially, an isometry) between metric spaces must be differentiable in some suitable sense. This question was answered affirmatively for smooth Riemannian manifolds in [BG00]. On the other hand, in [CH70] an example of a non-differentiable isometry between Riemannian manifolds with continuous Riemannian metrics is constructed. Even for asking this question a language is needed that allows us to speak about differentiability of Lipschitz mappings. Establishing such a language is another main issue of this paper.

Remark 1.1. For (special) doubling metric measure spaces Cheeger has developed in [Che99] a deep theory giving a Rademacher type theorem for such spaces. However, this approach does not allow one to speak about differentiability at a given (singular) point. Moreover, it is essentially restricted to differentiation of functions and does not apply to maps into another singular space. Kirchheim developed in [Kir94] a very interesting theory of metric differentiation of Lipschitz mappings of the Euclidean space into arbitrary metric spaces. The disadvantage of this theory is that it completely neglects the (possibly existing) tangent structure of the image space. Whereas Kirchheim's definition has a clear interpretation in our language, the connections to the theory of Cheeger are much less clear and will not be discussed here. In this paper we discuss the basics of the theory.
In [Lytc] we study con-nections between properties of the differentials and the map itself, in [Lytb] we apply these ideas to differentiability in Carnot–Caratheodory spaces. 1.2. The Problem. The notion of a tangent cone at a point of a proper met-ric space was defined by Gromov using Gromov–Hausdorff convergence of rescaled spaces, through the requirement that an infinitesimal portion of the space at x does not depend on the infinitesimal scale. In many situations this concept has been used to study the properties of the original space (for ex-ample [BGP92, Pet94, Mit85] and many others). Unfortunately this definition being perfect for the study of the infinitesimal portions of a given space is not very suitable for the study of differentials. The problem is that Gromov– Hausdorff convergence of abstract metric spaces is defined only on the set of isometry classes. For example the question about differentiability of isometries does not make any sense in this context. Dealing with differentials one would prefer to know what happens in a fixed direction in the tangent space. Mostow and Margulis encountered this problem as they were dealing with differentials between Carnot–Caratheodory spaces ([MM00]). 1.3. The Method. To circumvent this problem we give a slightly different definition of the tangent cone, working with ultra-convergence instead of the DIFFERENTIATION IN METRIC SPACES 3 Gromov–Hausdorff convergence, a notion widely used in the theory of non-positively curved spaces. Namely for each zero sequence (o) = (ϵi) which we call an (infinitesimal) scale one can consider the blow up X(o) x of the pointed space (X, x) at the scale (o), given as the ultralimit X(o) x = limω( 1 ϵi X, x). Now we say that a tangent cone TxX of X at the point x is a metric cone (T, 0) together with a fixed choice of pointed isometries i(o) : T →X(o) x for each scale (o), such that certain natural commutation relations (Definition 6.1) are satisfied. If the tangent space exists in the sense of Gromov, our definition just makes the additional requirement of fixing a special choice of a metric space in the isometry class of the tangent space in the sense of Gromov (Remark 6.2). With this definition of the tangent space, the differential of a Lipschitz map is the blow up at the given point, if this blow up is unique. If the tangent spaces in X and in Y exist, then they exist in a natural way in the product X × Y and in the Euclidean cone CX. Moreover there is a natural choice (up to the tangent cones in X) of the tangent cones to subsets of X. In general no tangent space in our sense may exist or there may be no natural choice (we assumed in the definition, that the isometries i(ϵi) are given somehow). However, tangent spaces exist in lots of important singular metric spaces. This existence is given by a (not necessarily continuous) map e from a small portion of a metric cone T to a small neighborhood of the given point x, that is an infinitesimal isometry at x (thus being a very singular equivalent of the exponential map, see Subsection 3.5 and Subsection 6.1 for the precise definition). All examples of tangent cones known to the author arise in this way. One problem closely related to the question whether the tangent cone is defined in a natural way is that the identification of the tangent space at x with the ultraproduct T ω (this ultraproduct is equal to T if T is proper) via this map e depends not only on e and the metric of T but also on the particular metric cone structure on T, i.e. 
a particular choice of the dilations (see Section 4). This is the reason for the pathological example of [CH70], see Example 7.6. Even though we use a choice of an ultrafilter ω in our definitions, the notions of differentiability and differential do not depend on ω if the tangent cones are given by a map e as above (see Subsection 7.2). For example, for Lipschitz mappings between Banach spaces we get the usual definition of directional differentiability.

1.4. Geometric conditions. In order to obtain the tangent cones (the isometries i(ϵi)) in a natural way, we observe that each metric space defines in a natural way a cone Cx at each point x, being the set of germs of unparameterized geodesics starting at x. Moreover, this cone Cx comes along with a natural family of 1-Lipschitz exponential mappings exp_x^(ϵi) : Cx → X_x^(ϵi) to the different blow ups of X at x. We now define a generalized angle condition (A), which is satisfied by spaces with a one-sided curvature bound, by strongly convex Banach spaces and many others (see below). It generalizes the usual condition of the equality of the upper and the lower angles (Example 5.3). We say that X has the property (A) at x if lim_{t→0} d(γ1(t), γ2(st))/t exists for all s ∈ R+ and all geodesics γ1, γ2 starting at x. However, even in proper geodesic spaces geodesics may see only a small part of the blow ups, as the example of Carnot–Caratheodory spaces shows. To guarantee the surjectivity of the exponential maps, we impose a uniformity condition (U). We say that a locally geodesic space X with the property (A) at x has the property (U) at x if the geodesic cone Cx is proper and d(γ1(t), γ2(t)) ⩽ O(t, d(γ1+, γ2+))·t holds, where d(γ1+, γ2+) is the distance between the starting directions γi+ of γi in Cx and O is some function going to 0 if both arguments go to 0. Given this condition one can define a natural (however not continuous) exponential map e : Cx → X identifying Cx with the tangent cone TxX. Hence in spaces with the property (U) the tangent space exists in a natural way. We call a locally geodesic space X infinitesimally cone-like if it has the property (U) at each point and each tangent cone TxX = Cx is a Euclidean cone (Definition 6.3). In [Lytc] we prove that gradient flows of semi-concave functions exist in such spaces, generalizing the corresponding result of [PP94b]. Finally, to be able to deal with distance functions, we need a further condition. We say that geodesics vary smoothly at x if small, long and thin quadrangles with a vertex at x essentially look like quadrangles in Cx (see Definition 9.2). This expresses the fact that geodesics converging pointwise to a given geodesic also converge in some better sense. For example, it is true for a continuous Riemannian metric if all geodesics are uniformly C^{1,α} for some α > 0; hence the name.

Remark 1.2. This (local) condition is almost equivalent to the global statement that the first variation formula is valid in X; see Section 9 for details.

We call a proper geodesic space geometric if it has the property (U) at each point, each tangent cone TxX = Cx is a uniformly convex and smooth cone (for example, a Euclidean cone or a Banach space with a strongly convex and smooth norm, see Definition 4.3 and Definition 4.4) and if geodesics vary smoothly at each point (Definition 10.1).

1.5. Results. As was already mentioned in the beginning, the class of geometric spaces is very big.
Alexandrov spaces (see Definition 2.2), surfaces with an integral curvature bound, manifolds with only Hölder continuous Riemannian metrics, sets of positive reach and some more general subsets of Riemannian manifolds are geometric and infinitesimally cone-like. A finite dimensional Banach space is geometric iff its norm is strongly convex and smooth. Finsler manifolds with Hölder continuous, pointwise smooth and sufficiently convex norms are geometric. Products of, convex subsets of, and Euclidean cones over (infinitesimally cone-like) geometric spaces are (infinitesimally cone-like) geometric. Each open subset of an infinitesimally cone-like space is infinitesimally cone-like.

We can now state our results. A map f : X → Z of a space X with property (U) at x to another metric space Z is differentiable at x iff it is directionally differentiable at x, i.e. if f ◦ γ : [0, ϵ) → Z is differentiable at 0 for all geodesics γ starting at x. This implies:

Proposition 1.1. Let f : X → Z be an isometric embedding. If X and Z are infinitesimally cone-like (or, more generally, just have the property (U)), then f is differentiable at all points.

In geometric spaces the first variation formula holds, i.e. the distance functions dS to subsets S and the metric d : X × X → R itself are differentiable, and the differential of dS at x depends only on the set of directions in Cx of minimal geodesics between x and S; see Proposition 9.3 for the precise formulation, where the usual angles are replaced by the corresponding Busemann functions. If the tangent spaces are Euclidean cones, one gets the usual first variation formula:

Theorem 1.2. Let X be an infinitesimally cone-like space, x ≠ z ∈ X. Let γ be a geodesic between x and z with starting resp. ending directions γ+ ∈ TxX and γ− ∈ TzX. Then the differential of the distance d : X × X → R can be estimated by D(x,z)d(v, w) ⩽ −⟨γ+, v⟩ − ⟨γ−, w⟩. If X is in addition geometric, then D(x,z)d(v, w) exists and is equal to the above sum for some geodesic γ between x and z.

Using the uniform convexity of the tangent spaces we see that the distance functions to points in a geometric space play the role of the coordinate functions in the Euclidean space, i.e. a Lipschitz map f : X → Z of a space X to a geometric space Z is differentiable at x if for a dense sequence of points zn in a punctured neighborhood of f(x) the composition functions dzn ◦ f are differentiable at x. Now the first statement of the next theorem is an easy application, whereas the second one requires some work. It shows that our notion of geometric spaces is stable enough to survive such a difficult operation as taking quotients.

Theorem 1.3. Let f : X → Y be a submetry. If X and Y are geometric, then f is differentiable at each point. Moreover, the assumption that X is geometric already implies that Y is geometric.

Moreover, it is possible to describe precisely the differential structure of a submetry, getting the usual vertical (tangent space to the fiber) and horizontal (tangent space to the union of horizontal geodesics) subspaces of the tangent space. For maps into a geometric space the theorem of Rademacher is equivalent to the theorem of Rademacher for functions:

Proposition 1.4. Let Z be a metric space with a Borel measure μ and tangent spaces at almost each point, such that each Lipschitz function f : Z → R is differentiable μ-almost everywhere. Then for each geometric space X, each Lipschitz map f : Z → X is differentiable almost everywhere.

Corollary 1.5.
If Z is a measurable subset of the Euclidean space Rn and f : Z →X is a locally Lipschitz map to a geometric space X, then for almost all z ∈Z the differential Dzf exists, the image Dzf(Rn) ⊂Tf(z)X is a Banach space and the restriction Dzf : Rn →Dzf(Rn) is linear. A final issue that we address in this paper is differentiability of maps into arbitrary spaces with a one-sided curvature bound. In this situation the tangent space in our sense may not exist, however one can use the same ideas and work with the geodesic cone Cx instead of the tangent cone. For semi-concave functions this is used in [Lytc], here we prove: Theorem 1.6. Let Z be either CAT(κ) space or a space with curvature ⩾κ. Let S ⊂Rn be a measurable subset, f : S →Z a locally Lipschitz map. Then f has at almost each point a differential Dxf : TxS →Cf(x)Z. Remark 1.3. If Z is an Alexandrov space in the sense of Definition 2.2 then Theorem 1.6 is a special case of Proposition 1.4. 1.6. The Plan. After the preliminaries we recall some basic notions concern-ing ultra-convergence of spaces and maps, a major tool for this paper. In Section 4 we discuss basic issues about general metric cones. In Section 5 we start with differential issues and discuss geodesic cones and the exponential mappings. In Section 6 and Section 7 we give the definition of tangent cones differentials, give the main examples and discuss the condition (U) and some DIFFERENTIATION IN METRIC SPACES 7 other related topics. In Section 8 we recall Kirchheim’s notion of metric dif-ferentiability. In Section 9 we discuss the first variation formula. In Section 10 and Section 11 geometric spaces are studied. Finally in Section 12 we prove Theorem 1.6. 1.7. Acknowledgments. I would like to thank Werner Ballmann for encour-agement, support, helpful discussions and comments. I am very grateful to Sergei Buyalo for many remarks and corrections. I am thankful to Juan Souto for useful comments and suggestions. §2. Preliminaries and Notations 2.1. Notations. By R+ resp. Rn we will denote the positive real numbers resp. the Euclidean space. We shall denote by d the distance in metric spaces. For a subset A of a metric space X we denote by dA the distance function to the set A. For a positive number r we denote by rX the set X with the metric scaled by r. By Br(x) we denote the closed ball of radius r around x. A pseudo metric d on a space X is a metric for which the distance between different points may be 0. Identifying in X points x, z with d(x, z) = 0 we get the corresponding metric space. A map f : X →Y between metric spaces is called L-Lipschitz if for all x, z ∈X one has d(f(x), f(z)) ⩽Ld(x, z). Example 2.1. Each distance function dA is 1-Lipschitz, whereas the metric d : X × X →R is a √ 2-Lipschitz function. An s-dilation is a bijective map f : X →Y between metric spaces with d(f(x), f(¯ x)) = sd(x, ¯ x) for all x, ¯ x ∈X. An isometry is a 1-dilation. Definition 2.1. By a scale we will denote a sequence (o) = (ϵi) of positive real numbers converging to 0. 2.2. Geodesics. For a curve γ in X we will denote its length by L(γ). A geodesic resp. ray resp. line in X is an isometric embedding of an interval resp. half-line resp. the whole real line into X. For disjoint subsets S, T ⊂X we denote by ΓS,T the set of all geodesics of length d(S, T) starting in S and ending in T. The space X is called geodesic if for all points x ̸= z in X the set Γx,z is not empty. Finally we will denote by Γx the set of all geodesics starting at x. 
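In symbols, the two conventions just introduced read (restating the definitions above, with L(γ) the length of a curve as in Subsection 2.2):
\[
\Gamma_{S,T}=\bigl\{\gamma \ \text{geodesic in }X \ :\ L(\gamma)=d(S,T),\ \gamma \text{ starts in } S \text{ and ends in } T\bigr\},
\qquad
\Gamma_x=\bigl\{\gamma \ \text{geodesic in }X \ :\ \gamma(0)=x\bigr\}.
\]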
A metric space X is called proper if its closed bounded subsets are compact. In a proper geodesic space X the set ΓS,T is compact and not empty if S is compact and T is closed. 8 ALEXANDER LYTCHAK 2.3. Busemann functions. For a ray h : [0, ∞) →X in space X we denote by bh its Busemann function bh(x) = limt→∞(d(x, h(t)) −t). This limit always exists and bh is a 1-Lipschitz function. Example 2.2. If f : X →R is a 1-Lipschitz map with f(h(t)) = −t, then f ⩽bh holds. Especially for rays γj converging to a ray γ the inequality lim inf bhj ⩽bh holds. Example 2.3. Let γ be a line defining two rays γ+ and γ−. Then −bγ−is a 1-Lipschitz function satisfying −bγ−= bγ+ on γ. Therefore we get bγ−+bγ+ ⩾0 on X. We will call γ straight if bγ−+ bγ+ = 0 in X. Example 2.4. For i = 1, 2 let hi be a ray in the space Xi. Then h(t) = (h1( t √ 2), h2( t √ 2)) is a ray in X1 × X2. Let f : X1 × X2 be a √ 2-Lipschitz function satisfying f(h1(t), h2(t)) = −2t. Then for (v, w) ∈X1 × X2 we get the inequality f((v, w)) ⩽ √ 2bh((v, w)) = bh1(v) + bh2(w). 2.4. Alexandrov spaces. We refer to [BBI01, BGP92, BH99] for the theory of spaces with one-sided curvature bound. A space X is called a CAT(κ) space resp. a space with curvature ⩾κ if it is complete and geodesic and triangles in X are not thicker resp. not thinner than triangles in the two dimensional simply connected manifold M2 κ of constant curvature κ. Definition 2.2. We will call a space X an Alexandrov space, if X is a proper space, that either has curvature ⩾κ and finite Hausdorff dimension or that is geodesically complete (i.e. each geodesic is part of an infinite locally geodesic) and contains a CAT(κ) neighborhood of each of its points, for some κ ∈R. §3. Ultralimits 3.1. Ultraconvergence of spaces. A reader not used to ultrafilters and ultra-limits should consult [BH99] or [KL97] for excellent accounts. Let ω denote an arbitrary non-principal ultrafilter on the set of natural numbers. It allows to choose for each sequence (xi) in a compact Hausdorff space X a point limω(xi) among the limit points of the sequence. It also allows us to construct a limit space of a sequence of spaces and limits of Lipschitz maps between them in the following manner. For a sequence (Xi, xi) of pointed metric spaces their ultralimit (X, x) =: limω(Xi, xi) is defined to be the set of all sequences (zi) of points zi ∈ Xi with sup{d(zi, xi)} < ∞. On this set one considers the pseudo metric d((zi), (yi)) := limω(d(zi, yi)). The ultralimit (X, x) is the metric space arising from this pseudo metric. DIFFERENTIATION IN METRIC SPACES 9 Example 3.1. Let (Xi, xi) be a constant sequence (X, x). We call then limω(Xi, xi) the ultraproduct of (X, x) and denote it by Xω. This space con-tains (X, x) in a natural way (z →(z, z, z . . .)) and does not depend on the base point x. It coincides with (X, x) iff X is a proper space. 3.2. Relation to the usual convergence. The following lemma allows us to replace ultralimits by limits, if the statement concerns all sequences: Lemma 3.1. Let (xj) be a sequence in a complete metric space (X, x) with uniformly bounded distances to x. If for each subsequence (xki) of this se-quence the point z = (xki) in the ultraproduct Xω = limω(X, x) does not depend on the subsequence, then z is in X and the sequence (xj) converges to z. Proof. Assume that xi is not a Cauchy sequence. Then replacing (xi) by a subsequence, we may assume d(xi, xi+1) > ϵ for all i. Consider the subse-quence yi of xi given by yi = xi+1. 
Then the points (yi) and (xi) have in Xω distance at least ϵ from each other. Contradiction. • Gromov–Hausdorff topology on the set of isometry classes of pointed proper metric spaces is closely related to ultralimits. If a sequence (Xi, xi) of proper metric spaces converges to a proper space (X, x) in the Gromov-Hausdorff topology, then limω(Xi, xi) is in the isometry class of (X, x) (see [KL97, p. 132]). 3.3. Ultralimits of maps. Each sequence of L-Lipschitz maps fj : (Xj, xj) → (Yj, yj) induces in a natural way an ultralimit f = limω fj that is an L-Lipschitz map between the ultralimits (X, x) resp. (Y, y) of the sequences (Xj, xj) resp. (Yj, yj), defined by f((zj)) := (fj(zj)). These ultralimits of maps commute with compositions. Example 3.2. If γj are L-Lipschitz curves in Xj starting at xj, then γ = limω γj is an L-Lipschitz curve in (X, x) = limω(Xj, xj) starting at x. If all curves γj are geodesics, then so is γ. In particular if all the spaces Xj are geodesic, then so is X. Actually X is geodesic if Xj are only length metric spaces. Example 3.3. The ultralimit of products of spaces is the product of the cor-responding ultralimits. If (Sj, xj) are subsets of (Xj, xj) then the ultralimit limω(Sj, xj) is embedded into limω(Xj, xj) in a natural way. Example 3.4. Let Xj be a CAT(κj) space resp. a space with curvature ⩾κj, with κj →κ. Then limω(Xj, xj) is a CAT(κ) space resp. a space with curvature ⩾κ. For spaces with upper curvature bound this is proved in 10 ALEXANDER LYTCHAK [KL97]. For lower curvature bound the statement is not completely trivial, but it follows directly from [PP94b], Subsection 1.6. Remark 3.5. The ultralimits of sequences of spaces and maps usually depend on the choice of the ultrafilter ω. In fact if for a sequence (Xi, xi) of proper metric spaces the isometry class of (X, x) = limω(Xi, xi) does not depend on the ultrafilter ω and if this space X is proper, then the sequence of the isometry classes of (Xi, xi) is a convergent sequence with respect to the Gromov–Hausdorff topology. 3.4. Blow up. Let X be a metric space, x ∈X. For each scale (o) = (ϵi) we get a blow up X(o) x = limω( 1 ϵi X, x) at the scale (o). It is a space with a distinguished point 0 = (x, x, . . .). If f : (X, x) →(Y, y) is a locally Lipschitz map, we get a blown up map: f(o) x : X(o) x →Y (o) y . For a subspace S of X containing x we get a subspace S(o) x of X(o) x . In particular a geodesic γ starting at x defines a ray γ(o) x starting at 0. Remark 3.6. If X is a doubling metric space near x, i.e. if for some C > 0, each r ⩽1 C and each point z ∈B 1 C (x) the ball Br(z) can be covered by C balls of radius r 2, then each blow up X(o) x is a proper metric space. For example this is the case if X is a doubling measure space (see [Che99]). Example 3.7. If X is a Banach space, resp. has lower resp. upper curvature bound, then for each scale (o) the blow up X(o) x is a Banach resp. a non-negatively curved resp. a CAT(0) space. Example 3.8. Let (o) = (ti) and (˜ o) = (ri) be different scales. In general there is no possibility to compare the blow ups X(o) x and X(˜ o) x . However if the scales are comparable, i.e. if 0 < limω( ti ri ) := s < ∞holds, then the identity id : ( 1 tiX, x) →( 1 ri X, x), being an ti ri -dilation induces a natural s-dilation id(o) (˜ o) : (X(o) x , 0) →(X(˜ o) x , 0). 3.5. Infinitesimal isometries. The following definition is the metric analog of the notion of a Lebesgue point. Definition 3.1. Let (S, x) be a subset of (X, x). 
We will say that S is in-finitesimally dense at x if for each scale (o) the canonical isometric embedding i(o) : S(o) x →X(o) x is onto (i.e. an isometry). The above definition just says, that for each ϵ > 0 and all sufficiently small δ the ball Bδ(x) ⊂S is ϵδ-dense in the ball Bδ(x) ⊂X. DIFFERENTIATION IN METRIC SPACES 11 Example 3.9. If S is dense in a neighborhood of x in X, then S is infinites-imally dense at x. If X is a doubling metric measure space ([Che99]) and S a measurable subset, then S is infinitesimally dense at each of its Lebesgue points. Example 3.10. Let X be complete and geodesic. If a closed subset S of X is infinitesimally dense at each point x ∈S, then S = X ([Lytc]). Definition 3.2. Let e : (X, x) →(Y, y) be a not necessarily continuous map. We will call e an infinitesimal isometric embedding (at x) if |d(e(x1), e(x2)) − d(x1, x2)| ⩽o(d(x1, x) + d(x2, x)), for all x1, x2 ∈X and some function o : R+ →R+ with limt→0 o(t) t = 0. We will say that e is an infinitesimal isometry (at x) if in addition the image e(Bδ(x)) of each ball Bδ(x) around x is infinitesimally dense at y in Y . Example 3.11. A Lipschitz map e : (X, x) →(Y, y) is an infinitesimal isometry iff for each scale (o) the blow up e(o) x : X(o) x →Y (o) y is an isometry. A composition of infinitesimal isometries is again an infinitesimal isometry. The importance of this notion is due to the following easy observation: Lemma 3.2. Let e : (X, x) →(Y, y) be an infinitesimal isometry. Then for each scale (o) the map e(o) x : X(o) x →Y (o) y given by e(o) x ((xi)) = (e(xi)) is well defined. Moreover it is an isometry. Example 3.12. Let (S, x) be a subset of (X, x). Define a map e : (X, x) → (S, x) by setting e(z) = ¯ z, where ¯ z is an arbitrary point in S with d(z, ¯ z) ⩽ 2d(z, S). Then e is an infinitesimal isometry iff S is infinitesimally dense at x. In this case e(o) x : X(o) x →S(o) x is the canonical identification. These (generalized) blow ups are again compatible with compositions (of infinitesimal isometries). From Example 3.12 one deduces, that for each in-finitesimal isometry e : (X, x) →(Y, y) there is an infinitesimal isometry ¯ e : (Y, y) →(X, x) such that for each scale (o) one has e(o) ◦¯ e(o) = id and ¯ e(o) ◦e(o) = id. §4. Metric cones 4.1. Group of dilations. Let (X, x) be a pointed metric space. Consider the group Dilx(X) of all dilations of X leaving the point x invariant, equipped with the topology of pointwise convergence. The natural map P : Dilx(X) →R+ sending an s-dilation to the number s is a continuous homomorphism. The kernel of P is the group Ix of isometries of X fixing the point x. 12 ALEXANDER LYTCHAK Definition 4.1. A metric cone structure on the space (X, x) is a continuous section of the homomorphism P above, i.e. a continuous homomorphism ρ : R+ →Dilx(X) that sends s to some s-dilation ρs. A metric cone is a space with a metric cone structure. We call x the origin of the metric cone X and denote it by 0. A map f : X →Y between metric cones is called homogeneous if it commutes with all dilations ρt. A metric space (X, x) can admit several families of dilations making it to a metric cone. If X is a proper metric space, then the pointwise topology on Dilx(X) coincides with the compact-open topology and the group Dilx(X) resp. Ix is locally compact resp. compact. If a metric cone structure on (X, x) exists, the projection P : Dilx(X) →R+ is surjective. 
On the other hand if the map P is surjective and X is proper it is easy to see, that the group Dilx(X) splits as a direct product Ix×R+, such that P becomes the projection onto the second factor (First reduce to the connected component of Dilx(X). Then use the fact the group of outer automorphisms of Ix is totally disconnected, see [HM98, p. 512]). In particular in this case a metric cone structure on X exists, such that all dilations ρs are in the center of Dilx. Moreover different metric cone structures are in one-to-one correspondance with different continuous homomorphisms p : R+ →Ix. 4.2. Cones. The products and ultralimits of metric cones are metric cones with naturally defined dilations ρs. For a metric cone (X, 0) the metric d : X × X →R is a homogeneous function. By the norm | · | we will denote the homogeneous function d0. A ray γ : [0, ∞) →X starting at the origin of the cone X is called radial, if it is stable under the dilations, i.e. if it is a homogeneous map. If a ray γ is radial, then its Busemann function bγ : X →R is homogeneous. Example 4.1. A Banach space B is a cone with dilations ρt(v) = tv. Radial rays are precisely the linear ones γ(t) = tv. The Busemann function bγ of such a ray γ is linear iff v is a smooth point of the unit sphere (see [JL01, p. 30] for the definition), i.e. iff the affine line in the direction of v is straight in the sense of Example 2.3. Example 4.2. The Euclidean cone CY over a metric space Y ([BBI01, p. 91]) is a metric cone. Each ray starting at 0 is radial and has the form γ(t) = tv with v ∈Y . Its Busemann function is given for w ∈Y by bγ(sw) = −⟨v, sw⟩:= −s cos(dY (v, w)). 4.3. Special metric cones. Cones can be arbitrary wild in general. We will use the following particularly nice classes of metric cones. DIFFERENTIATION IN METRIC SPACES 13 Definition 4.2. We will call a metric cone X radial, if for each x ∈X with |x| = 1 the map t →ρt(x) is a ray. A cone is radial iff it is the union of its radial rays. Consider the unit sphere S in a radial cone X, i.e. the set of all points v ∈X with |v| = 1. Then it is easy to check using only the triangle inequality, that the natural homogeneous map CS →X of the Euclidean cone over S to X that sends the point tv ∈CS to ρt(v) is biLipschitz. Definition 4.3. We call a radial cone X uniformly convex if for each ϵ > 0 there is some δ > 0, such that for each radial ray γ(t) = ρt(v0) (|v0| = 1) and each v ∈X with |v| = 1 and d(v, v0) ⩾ϵ one has bγ(v) ⩾−1 + δ. Definition 4.4. We call a metric cone X smooth if for each sequence of radial rays γj converging to a radial ray γ the Busemann functions bγj converge pointwise to the Busemann functions bγ. A direct product or a subcone of radial resp. uniformly convex resp. smooth cones is radial resp. uniformly convex resp. smooth. A completion or an ul-traproduct of radial resp. of uniformly convex cones is radial resp. uniformly convex. Euclidean cones are uniformly convex and smooth. Banach spaces are radial cones and the notion of uniform convexity resp. of smoothness is the usual uniform convexity resp. smoothness of the norm. Example 4.3. Carnot groups are not radial, but it is possible to prove that they are smooth cones. §5. Geodesic cones 5.1. Germs of geodesics. Let x be a point in a space X. Consider the set Γx of all geodesics starting at x and the direct product Γx × [0, ∞). Define a pseudo metric ˜ d on Γx × [0, ∞) by setting ˜ d((γ1, s1), (γ2, s2)) = lim supt→0 d(γ1(s1t),γ2(s2t)) t . 
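In display form, the pseudo-metric just defined reads (restating the formula above):
\[
\tilde d\bigl((\gamma_1,s_1),(\gamma_2,s_2)\bigr)\;=\;\limsup_{t\to 0}\,\frac{d\bigl(\gamma_1(s_1t),\,\gamma_2(s_2t)\bigr)}{t}.
\]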
Denote by Cx the metric space corresponding to the pseudo metric space (Γx × [0, ∞), ˜ d) and by Cx its metric completion. The set Γx × {0} is identified to a point 0 in Cx. The dilations of [0, ∞) define on the spaces ( Cx, 0) and (Cx, 0) structures of metric cones. Each geodesic γ ∈Γx defines a radial ray ¯ γ(t) = (γ, t) in Cx ⊂Cx. Hence Cx is always a radial metric cone. We will denote by γ+ the ray (γ, t) in Cx as well as the point γ+(1) = (γ, 1). By Sx we denote the unit sphere in Cx and call it the link at x. One can think of Cx as the space of unparameterized geodesic germs at x, however one should be cautious: Example 5.1. Let X be a Banach space and x = 0 its origin. If the norm of X is strongly convex, then all geodesics are straight lines and Cx is naturally 14 ALEXANDER LYTCHAK isometric to X. But if the norm is not strongly convex, the space Cx is much bigger than X, for instance Cx is never locally compact in this case. 5.2. Exponential mappings. For each scale (o) = (ti) there is a natural map exp(o) x : Γx × [0, ∞) →X(o) x defined by exp(o) x ((γ, t)) = (γ(tti)) ∈X(ti) x . The exponential map exp(o) x goes down and defines a 1-Lipschitz map exp(o) x : Cx → X(o) x , due to the limit superior in the definition of the distance in Cx. Also by exp(o) x we will denote the unique 1-Lipschitz extension exp(o) x : Cx →X(o) x . Example 5.2. Let (o) = (ti), (˜ o) = (ri) be two scales with 0 < limω( ti ri ) := s < ∞. For the canonical s-dilation id(o) (˜ o) : (X(o) x , 0) →(X(˜ o) x , 0) we get: id(o) (˜ o) ◦exp(o) x = exp(˜ o) x ◦ρs, where ρs is the natural s-dilation on Cx. For each γ ∈Γx the radial ray γ+ ⊂Cx is mapped by expx : Cx →X(o) x isometrically onto the ray γ(o) x . The exponential mappings exp(o) x : Cx →X(o) x are isometric embeddings for all scales (o) = (ti) if and only if the limit superior in the definition of the distance in Cx is always a limit. This property being quite fundamental justify the following Definition 5.1. We say that the space X has the property (A) at x if the limit superior in the definition of the distance in Cx is always a limit. Example 5.3. The upper angle coincides with the lower angle between ar-bitrary geodesics starting at x (see [BBI01, p. 98] for the definition) iff the condition (A) holds at x and the geodesic cone is a Euclidean cone Cx = CSx. Remark 5.4. Even if X is a geodesic space and the property (A) holds at x, and so the mappings exp(o) x define isometric embeddings of Cx into geodesic spaces X(o) x , the geodesic cone Cx need not be a geodesic space. For example it is not the case in Carnot–Caratheodory spaces or in general spaces with lower curvature bound ([Hal00]). Remark 5.5. If X has the property (A) at x and some blow up X(o) x is a proper space (compare Remark 3.6), then Cx is a proper space too. 5.3. Geodesic cones under one-sided curvature bounds. In a general space X with one-sided curvature bound the lower and the upper angle between arbitrary geodesics coincide (see [BBI01]). Therefore the geodesic cone Cx at each point x ∈X is a Euclidean cone and for each scale (o) the exponential map exp(o) x : Cx →X(o) x is an isometric embedding. If X is an Alexandrov space, we will see below, that exp(o) x is also onto. However in general spaces with one-sided curvature bound it is almost never the case. DIFFERENTIATION IN METRIC SPACES 15 If X has an upper curvature bound, it was proved in [Nik95] that Cx is a geodesic space, hence it is a totally convex subset of each blow up X(o) x . 
The example of [Hal00] shows that Cx is not necessarily a geodesic space if X has a lower curvature bound. However, since Cx is an isometrically embedded Euclidean cone in X(o) x , the rigidity theorem of Toponogov gives us that for all radial rays η1, η2 in Cx and an arbitrary geodesic η in X(o) x connecting arbitrary points on η1 and η2, the triangle η1ηη2 is Euclidean. Remark 5.6. Let X be a space with lower curvature bound. It is not difficult to prove, that if a unit vector v ∈Cx has an antipode w ∈X(o) x , i.e. a point w with d(w, 0) = 1 and d(w, v) = 2, then w is contained in exp(o) x (Cx) and the line defined by v and w is a Euclidean factor not only of X(o) x but also of Cx, i.e. v is connected with each other point ¯ v ∈Cx by a geodesic in Cx. §6. Tangent cones 6.1. Definition and the main example. The following main definition of this paper is motivated by Example 5.2 Definition 6.1. Let (X, x) be a metric space. We say that the tangent space of X at x exists, if for some metric cone (T, ρs) and each scale (o) = (ti) an isometry i(o) : (T, 0) →(X(o) x , 0) is chosen, such that for different scales (o) = (ti) and (˜ o) = (ri) with 0 < limω( ti ri ) := s < ∞the map i(˜ o)◦ρs◦(i(o))−1 : X(o) x →X(˜ o) x is the natural s-dilation from Subsection 3.4. This metric cone T together with the fixed isometries i(o) will be called the tangent space at x and denoted by TxX. Example 6.1. If (T, 0) is a metric cone then the tangent space at 0 is naturally isometric to the ultraproduct T ω. The isometries i(ti) : T ω →T (ti) 0 are given by i(ti)((xi)) = (ρti(xi)). If the tangent cone TxX exists, then the isometries i(o) allow us to identify in a unique way points from the blow up X(o) x with points in TxX. We will always use this particular identification below. The following remark refers our definition to that of Gromov: Remark 6.2. Let (X, x) be a proper space. Assume that for t →0 the set of all (isometry classes of) spaces ( 1 t X, x) is relatively compact in the G.-H.-topology (compare Remark 3.6). If TxX exist in the sense of Definition 6.1 then for t →0 the isometry classes of ( 1 t X, x) converge to the isometry class of TxX (Subsection 3.2). On the other hand if such a convergence takes place, we know that for each scale (o) = (ti) there is an isometry i(o) : TxX →X(o) x . 16 ALEXANDER LYTCHAK The natural s-dilations between the blow ups define s-dilations on TxX. It is possible to choose a metric cone structure on TxX (see Subsection 4.1) and to change the isometries i(o), such that the commutation relations of Definition 6.1 are satisfied. The most tangent spaces arise from Example 6.1 and the following: Example 6.3. Let e : (X, x) →(Y, y) be a (not necessarily continuous) in-finitesimal isometry. If TxX exists, then TyY exists too and is naturally iso-metric to TyY . Namely the isometries i(o) : TxX →X(o) x define isometries ˜ i(o) : TxX →Y (o) y by ˜ i(o) = e(o) x ◦i(o), where the isometries = e(o) x : X(o) x →Y (o) y are given by Lemma 3.2. Combining Example 6.1, Example 6.3 and Example 3.12 we obtain: Lemma 6.1. Let (T, 0) be a metric cone, (D, 0) a subset of T infinitesimally dense at 0 and let e : (D, 0) →(X, x) be an infinitesimal isometry. Then TxX exists and is naturally isometric to the ultraproduct T ω. Remark 6.4. The identification of TxX with T ω as above depends on the metric cone structure of T in an essential way (Example 6.1). Thus changing the cone structure on T (and letting e and the metric on T fixed) we get a different tangent cone structure at x. Remark 6.5. 
If the cone T in Lemma 6.1 is proper then the tangent cone TxX is isometric to T, in particulary it does not depend on the choice of the ultrafilter ω. Example 6.6. Let (M, | · |) be a smooth manifold with a continuous Finsler metric. For x ∈M let (TxM, | · |x) be the usual tangent space at x. Then each chart e : TxM →M with e(0) = x whose differential at 0 is the identity, satisfies the assumptions of Lemma 6.1. Therefore the tangent space in the sense of Definition 6.1 coincides with the Banach space (TxM, | · |x). Remark that the identification (i.e. the maps i(o)) does not depend on the choice of the chart e. Moreover the topology, the metric cone structure and the identification map e only depend on the manifold structure of X. Only the metric (norm) on TxM depends on the Finsler metric | · |. Example 6.7. Belaiche constructed in [Bel96] for each Carnot–Caratheodory space M an almost isometry e : Gx →M from the Nilpotenization Gx of M at x to M, identifying TxM with Gx. 6.2. Metric operation. Let X, Y be metric spaces. If TxX and TyY exist, then T(x,y)X × Y exists and is naturally isometric to TxX × TyY . The tangent space at x to the rescaled space tX is naturally isometric to tTxX = TxX. If DIFFERENTIATION IN METRIC SPACES 17 X and Y are geodesic spaces and f : X →R+ a continuous function, then the warped product X ×f Y (compare [BBI01, p. 95]) has at (x, y) the tangent cone TxX × TyY (Use Example 6.3 for the identity map between X ×f Y and X × ˜ f Y for the constant function ˜ f = f(x)). In particular if TxX exists then for each t ∈R+ the tangent space TtxCX to the Euclidean cone exists and is naturally isometric to TxX × R. Let S be a subset of X, x ∈S and let TxX exist. If we say that S has a tangent cone at x it means, that the subset S(o) x ⊂X(o) x = TxX does not depend on the scale (o). If S1, S2 are subsets of X both containing x with tangent cones TxS1, TxS2 ⊂TxX then the union S1 ∪S2 has the tangent cone TxS1 ∪TxS2 at x. Example 6.8. If the subset (S, x) of (X, x) is infinitesimally dense at x, then TxS exists and is equal to TxX. In particular if X is a doubling metric measure space such that TxX exists for almost all x ∈X, then for each measurable subset S ⊂X and almost each point (with respect to the induced measure) x ∈S the tangent space TxS ⊂TxX exists and coincides with TxX. Example 6.9. Let X be an Alexandrov space with curvature ⩾k, S an extremal subset of X ([PP94a]). Then at each point x ∈S the tangent space TxS ⊂TxX exists and it is an extremal subset of TxX. 6.3. Property (U). The following condition seems to be very natural. It is a very rough generalization of the lower curvature bound condition: Definition 6.2. Let X be a space, x ∈X. Assume that the union of all geodesics starting in x contains a neighborhood of x, that the property (A) holds at x and that the geodesic cone Cx is proper. We say that X has the property (U) at x if for each ϵ > 0 there is some ρ > 0, such that d(γ(t), η(t)) ⩽ϵt for all t < ρ and all γ, η ∈Γx with d(γ+, η+) < ρ. Example 6.10. If X has the property (U) at x, then so does each subset S of X that is a union of geodesics starting at x. Example 6.11. A complete metric cone T has the property (U) at the origin iff it is proper and one can change the metric cone structure such that T becomes radial and the only geodesics starting at 0 are parts of radial rays. 
The if direction is clear and the only if implication follows from the fact (see below for a proof) that under the condition (U) the geodesic cone C0 is isometric to the ultraproduct T ω. In particular each proper Euclidean cone and each proper, uniformly convex Banach space have the property (U). The property (U) allows us to compare distances in Cx and in X. 18 ALEXANDER LYTCHAK Proposition 6.2. Let X be a space with the property (U) at x. Then for each ϵ > 0 there is some ρ > 0, such that for all r ⩽t ⩽ρ and all γ, η ∈Γx the inequality |d(γ(r), η(t)) −d((γ, r), (η, t))| ⩽ϵt holds. Proof. Assume that there are sequences γi, ηi ∈Γx and zero sequences ri ⩽ ti →0 violating the above inequality. Choosing a subsequence we may assume that γ+ i and η+ i are Cauchy sequences and ri ti converge to a number s with 0 ⩽s ⩽1. Moreover we may assume ri = sti and that the sequence ti is non-increasing. For arbitrary small ρ > 0 we can choose i big enough such that for all j ⩾i we get d(η+ i , η+ j ) + d(γ+ i , γ+ j ) < ρ < ϵ 5. Using the property (U), increasing i if necessary and having chosen ρ small enough we get d(γi(t), γj(t)) + d(ηi(t), ηj(t)) ⩽ϵt 5 for all t ⩽ti. Hence we get |d(γj(stj), ηj(tj)) −d(γi(stj), ηi(tj))| + |d((γj, stj), (ηj, tj)) −d((γi, stj), (ηi, tj))| ⩽4ϵ 5 . Therefore we obtain the inequality |d(γi(stj), ηi(tj)) −d((γi, stj), (ηi, tj))| ⩾ϵtj 5 for all j ⩾i. This is a contradiction to the property (A). • Corollary 6.3. Let X have the property (U) at x and let γi be a sequence in Γx converging pointwise to a geodesic γ of positive length. Then γ+ i converge to γ+ in Cx. For each ϵ > 0 there is some ρ > 0, such that for each z with d(z, x) < ρ the inequality d(γ+, η+) < ϵ holds for all geodesics γ, η ∈Γx,z. Now we can use Proposition 6.2 and Lemma 6.1 to identify Cx with TxX. Namely we consider the logarithmic map h : X →Cx, that sends a point z ∈X to some pair (γ, t) ∈Cx with t = d(x, z) and γ ∈Γx,z ⊂Γx. By assumption h is defined on the neighborhood ∪γ∈Γx of x. From Proposition 6.2 we conclude, that h is an infinitesimal isometry, hence each map e : h(X) ⊂ Cx →X satisfying e ◦h = id has the properties used in Lemma 6.1 to identify Cx with TxX. The identification between X(o) x and Cx given by Lemma 6.1 is exactly the exponential map exp(o) x . Remark 6.12. The last construction is well known in many cases. In the case of Riemannian manifolds the logarithmic map h above is just the inversion of the usual exponential map. If X is an Alexandrov space with curvature ⩽0, then h is uniquely defined, surjective and 1-Lipschitz. If X is an Alexandrov space with curvature ⩾0, then one can define the almost inversion e : Cx →X to be surjective and 1-Lipschitz ([PP94b]). Definition 6.3. A space X will be called infinitesimally cone-like, if it is locally geodesic, at each point x ∈X the property (U) holds and each tangent cone TxX = Cx is a Euclidean cone. DIFFERENTIATION IN METRIC SPACES 19 §7. Differentials 7.1. Generalities. Let f : (X, x) →(Y, y) be a locally Lipschitz map and assume that TxX and TyY exist. For each scale (o) the blow up f(o) x : X(o) x → Y (o) y gives us a map between the tangent spaces. Definition 7.1. Let f : X →Y be as above. We say that f is differentiable at x if the blow up f(o) x : TxX →TyY does not depend on the scale (o). In this case we denote this uniquely defined map by Dxf. Example 7.1. If f : (X, 0) →(Y, 0) is a homogeneous map between cones, then f is differentiable at 0 and the differential D0f is the ultraproduct fω = limω f : Xω →Y ω of f. Example 7.2. 
Let f : X →Y be an isometry. If TxX does not admit a non-trivial isometry fixing the origin 0, then f is differentiable at x, since for each scale (o) the map f(o) x : TxX →TyY is an origin preserving isometry. Example 7.3. Let S be a subset of X, x ∈S. If TxX and TxS ⊂TxX exist as in Subsection 6.2 then the inclusion I : S →X is differentiable at x and the differential is the natural embedding Ix : TxS →TxX. Example 7.4. Let f : X →Y be a biLipschitz embedding. If f is differentiable at x, then f(X) has a tangent cone at f(x) given by Tf(x)f(X) = Dxf(TxX) ⊂ Tf(x)Y . On the other hand if f : X →Y is a differentiable C-open map (see [Lytc]) and S a subset of Y that has a tangent cone at f(x), then f−1(S) has a tangent cone at x given by Txf−1(S) = (Dxf)−1(Tf(x)S) ⊂TxX. Example 7.5. If TxX and TyY exist, then the projection p : X × Y →X is differentiable and the differential is just the projection. If TxX exist, then the metric d : X ×X →R is differentiable at each point (x, x) on the diagonal and the differential is just the metric on TxX. The distance function dx : X →R is differentiable at x with differential Dxdx(v) = |v|. The differentiability of the metric at points outside the diagonal will be discussed in Section 9. Example 7.6. Let (T, 0) be a proper metric cone with dilations lying in the center of Dil0. Let ( T, 0) be the same space with a different metric cone structure given by a continuous homomorphism p : R+ →I0 (Subsection 4.1). Let f : (T, 0) →( T, 0) be the identity. Then f(ti) 0 is exactly the isometry limω(p(ti)). Hence f is differentiable at 0 iff limt→0 p(t) exists. However this can only happen if p is the trivial map. This suggests, that there is at most one natural tangent cone structure. Considering T = R2 we get essentially the counterexample of [CH70]. 20 ALEXANDER LYTCHAK Since ultralimits commute with compositions we immediately see: Lemma 7.1. Let f : X →Y and g : Y →Z be Lipschitz maps, f(x) = y, g(y) = z. If f is differentiable at x and g differentiable at y then g ◦f is differentiable at x with differential Dx(g ◦f) = Dyg ◦Dxf. Example 7.7. If f : X →Y is differentiable at x and S a subset of X such that TxS ⊂TxX exists, then f : S →Y is differentiable in x and the differential Dxf : TxS →Tf(x)Y is the restriction of Dxf : TxX →Tf(x)Y . If on the other hand TxS = TxX and the restriction f : S →X is differentiable at x, then f : X →Y is also differentiable at x. 7.2. Comparing with the usual differentiability. If the tangent spaces are given by Lemma 6.1, we get the usual definition of differentiability. Let namely f : (X, x) →(Y, y) be a Lipschitz map, (T1, 0) resp. (T2, 0) metric cones and e1 : T1 →X resp. e2 : T2 →Y be maps as in Subsection 6.1 (If ei are defined only on infinitesimally dense subsets (Di, 0) ⊂(Ti, 0) we may extend them by Example 3.12). If A : T1 →T2 is a homogeneous Lipschitz map, such that for v ∈T1 one has lim|v|→0 d(f(e1(v)),e2(A(v))) |v| = 0 then the differential of f at x exists and is equal to the ultraproduct Aω : T ω 1 →T ω 2 . On the other hand we can use Lemma 3.1 and see, that if the differential Dxf exists, then the image of Dxf(T1) ⊂T ω 2 must be contained in T2 ⊂T ω 2 , therefore the existence of a map A as above is also necessary in this case. In particular the differentiability does not depend on the ultrafilter ω! More-over if T1 and T2 are proper, then the differential does not depend on the ultrafilter too. 7.3. Separating maps. 
Let (Y, y) be a metric space, {fj : Y →Yj} a set of Lipschitz maps differentiable at y and separating the points in TyY , i.e. for v1 ̸= v2 ∈TyY there is some j, such that Dyfj(v1) ̸= Dyfj(v2). Since ultralimits of maps commute with compositions we obtain, that a map g : (X, x) →(Y, y) is differentiable at x iff for each j the map fj◦g is differentiable at x. For example a biLipischitz map f0 : Y →Y0 differentiable at y satisfies the above conditions. In particular its inverse must be differentiable at f0(y). 7.4. Differentiating curves. Let γ : [0, a] →X be a Lipschitz curve, with γ(0) = x. If γ is differentiable at 0, then the differential is a homogeneous map h of the half-line [0, ∞) to TxX. Since this map is uniquely determined by h(1), we will call the point h(1) the right hand side differential of γ at 0 and denote it by γ+. In the same way one defines γ−if γ is differentiable at a. The differential exists at an inner point t ∈(0, a), iff γ+ and γ−exist in t. DIFFERENTIATION IN METRIC SPACES 21 7.5. Differentiating geodesics. The most natural and basic maps into a met-ric space are geodesics. One can only hope to get a rich theory of differ-entiation if many geodesics are differentiable. A geodesic γ : [0, a] →X starting at x is differentiable at 0 iff the ray γ(o) x ⊂X(o) x = TxX does not depend on the scale (o). In this case we get a unique radial ray γx ⊂TxX. We see that all geodesics are differentiable at x iff the exponential mappings exp(o) x : Cx →X(o) x = TxX do not depend on the scale (o), that means iff Cx is naturally embedded in TxX via the exponential mappings. In this case X has the property (A) at x. For example this is always true if X has the property (U) at x. In general however it does not need to be true even in quite tame spaces, see [CH70] or Example 7.6. 7.6. Directional derivatives. Let f : (X, x) →(Z, z) be a locally Lipschitz map and assume that TzZ exists. We say that f has directional derivatives at x if the restriction f ◦γ to each geodesic γ ∈Γx is differentiable at 0. In this case we obtain a well defined homogeneous map Dxf : Cx →TzZ of the geodesic cone Cx into the tangent cone TzZ. For each scale (o) we have f(o) x ◦exp(o) x = Dxf, in particular Dxf inherits the Lipschitz constant of f. Example 7.8. Each locally Lipschitz semi-concave function f : X →R has directional derivatives at all points (see [Lytc] for more on this). If X has a tangent cone at x and the geodesics are differentiable at x, then each Lipschitz map f : (X, x) →(Z, z) differentiable at x is also directionally differentiable and Dxf : Cx →TzZ is just the restriction of Dxf : TxX →TzZ to Cx. On the other hand if all geodesics are differentiable at x and the map f : X →Z is directionally differentiable at x then the restriction of f(o) x : TxX →TzZ to the subset Cx ⊂TxX is independent of the scale (o). This implies Proposition 1.1. We can actually deduce a bit more smoothness of isometries: Corollary 7.2. Let X have the property (U) at x. Let fi : (X, x) →(X, x) be isometries fixing x and converging pointwise to an isometry f. Then the isometries Dxfi of TxX converge to the isometry Dxf. Proof. Composing the isometries fi with f−1 we may assume f = Id. Then for each geodesic γ the geodesics γi = fi(γ) converge to γ. By Corollary 6.3 for the starting direction γ+ of γ the directions Dxfi(γ+) converge to γ+. • 7.7. Strong differentiability. Let f : (X, x) →(Z, z) be a locally Lipschitz map and assume that TxX exists and Z has the property (A) at z. 
We will say that a strong differential Dxf : TxX →Cz exists, if for each scale (o) one has f(o) x = exp(o) z ◦Dxf. If TzZ exists and geodesics are differentiable at 22 ALEXANDER LYTCHAK z, then a map f is strongly differentiable at x iff it is differentiable and the differential Dxf : TxX →TzZ satisfies Dxf(TxX) ⊂Cz. Remark that the strong differential (if it exists) is a homogeneous Lipschitz map. §8. Metric Differentials Example 7.5 gives rise to the following definition ([Kir94]): Definition 8.1. Let f : X →Y be a Lipschitz map and let TxX exist. We say that f has a metric differential at x if the composition d◦(f ×f) : X ×X →R is differentiable at (x, x). In this case the differential is a homogeneous pseudo metric on TxX and we denote it by mDxf. We will say that f has a weak metric differential at x if the map df(x) ◦f : X →R is differentiable at x. Again by mDxf we denote the differential of this map. If f is differentiable at x then it also has a metric differential given by mDxf(v, w) = d(Dxf(v), Dxf(w)). If f is metrically differentiable at x then it is also weakly metrically differentiable with weak metric differential mDx(v) = mDx(0, v). Let on the other hand f : X →Y be a biLipschitz map, f(x) = y. If f is metrically differentiable at x then one can uniquely define a tangent space TyY , such that f becomes differentiable at x. An isometric embedding I : X →Z is metrically differentiable at each point x where TxX exists and the metric differential is just the metric mDxI = d : TxX × TxX →R. Example 8.1. The space X has the property (A) at the point x, iff for each pair of geodesics γ1, γ2 ∈Γx the map γ : (−ϵ, ϵ) →X given by γ(t) = γ1(t) for t ⩽0 and γ(t) = γ2(t) for t ⩾0 is metrically differentiable at 0. Using this example we immediately obtain: Lemma 8.1. Let f : (X, x) →(Z, z) be a Lipschitz map that is an infini-tesimal isometric embbeding at x and assume that TzZ exists. If the image f ◦γ of each geodesic γ ∈Γx is differentiable at 0, then X has the property (A) at x, the map f is directionally differentiable at x and the differential Dxf : Cx →TzZ is an isometric embedding. Example 8.2. Let M be a Finsler manifold. If each geodesic γ ∈Γx is differentiable at 0, then M has the property (A) at x. The following deep theorem was proved in [Kir94]: Theorem 8.2. Let K be a measurable subset of Rn, f : K →Y a Lipschitz map. Then f has a metric differential at almost each point, this metric differential is almost everywhere a semi-norm, and the map x →mDx = | · |x is measurable. DIFFERENTIATION IN METRIC SPACES 23 Example 8.3. Let γ : [p, q) →Y be a Lipschitz map. If the weak metric differential of γ at p exists, we will denote by mD+ p the number mDpγ(1), i.e. mD+ p = limt→0 d(γ(p+t),γ(p)) t . The fact that the metric differential exists and is a semi-norm amounts to the much stronger statement lim t→0 d(γ(p + s1t), γ(p + s2t)) t = |s2 −s1|mD+ p for all s1, s2 > 0. §9. Differentiation of distance functions 9.1. Generalities. We start with the following paradigmatic example: Example 9.1. Let T be a proper metric cone, h a radial ray, x = h(1). Then the distance function dx is differentiable at the origin 0 and the differen-tial is given by D0dx(v) = bh(v), since D0dx(v) = limt→0 d(x,ρt(v))−d(x,0) t = limt→∞(d(ρt(x), v) −t) = bh(v). To state our results we need the following extension of Definition 6.1. Let f : X →Y be a Lipschitz map and let TxX and TyY exist. We say that Dxf has some property (even if it does not exist) if each blow up f(o) x : TxX →TyY has this property. 
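Written out in display form, the computation in Example 9.1 above reads (a restatement, with ρt the dilations of T and bh the Busemann function of the radial ray h):
\[
D_0 d_x(v)\;=\;\lim_{t\to 0}\frac{d\bigl(x,\rho_t(v)\bigr)-d(x,0)}{t}
\;=\;\lim_{t\to\infty}\bigl(d\bigl(\rho_t(x),v\bigr)-t\bigr)\;=\;b_h(v).
\]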
Let now X be a space, x ≠ z points in X such that TxX and TzX exist. Assume that each geodesic γ ∈ Γx,z is differentiable at x and at z, thus defining radial rays γ+ ⊂ TxX and γ− ⊂ TzX. Then we show:

Lemma 9.1. In the above notations the differential D(x,z)d : TxX × TzX → R of the metric d : X × X → R+ can be estimated by D(x,z)d(v, w) ≤ inf_{γ∈Γx,z} (bγ+(v) + bγ−(w)).

Proof. Choose some geodesic γ : [a1, a2] → X connecting x and z. Then d(γ(a1+s), γ(a2−s)) = d(x, z) − 2s. Hence each blow up d_{(x,z)}^{(o)} : TxX × TzX → R satisfies d_{(x,z)}^{(o)}(γ+(s), γ−(s)) = −2s. We are done by Example 2.4. •

In the same way, using Example 2.2 instead of Example 2.4, we see:

Lemma 9.2. Let X be a space, S a closed subset of X, x ∈ X \ S. Assume that TxX exists and each geodesic γ ∈ Γx,S connecting x and S is differentiable at 0. Then the differential of the distance function dS is bounded from above by DxdS(v) ≤ inf_{γ∈Γx,S} bγ+(v).

Remark 9.2. Even if TxX and TzX do not exist, one can work with the directional differentials D(x,z) : Cx × Cz → R and get (for the same reason) the same estimations as in Lemma 9.1.

9.2. First variation formula. As in Riemannian geometry one would like to have equalities in the last two lemmas.

Definition 9.1. We say that the first variation formula holds for S ⊂ X and x ∈ X \ S if in the statement of Lemma 9.2 equality holds.

Example 9.3. Let γ ∈ Γz,S be a geodesic, x = γ(t) an inner point of γ. If γ is differentiable at x, it defines a homogeneous line γ in TxX. Moreover Dt(dS ∘ γ) = −id. If γ is straight in the sense of Example 2.3, then the first variation formula holds for S and x.

The validity of the first variation formula is closely related to the question whether geodesics vary smoothly in X.

Definition 9.2. Let X be a space with the property (U) at x. We say that geodesics vary smoothly at x if for all geodesics γ and η in Γx and each sequence of geodesics γn with γn(0) = η(tn) converging to γ, the following condition holds. For each ϵ > 0 there are some n and ρn > 0 such that ρn − d(x, γn(ρn)) ≥ (bγ+(v) − ϵ)tn, where v = η+(1) is the starting direction of η. If this condition holds for all x and all γ ∈ Γx, we will say that geodesics vary smoothly in X.

Remark 9.4. First of all we see that, since we require the above condition to hold for all convergent sequences of geodesics, the number ρn as above exists for all sufficiently large n. Moreover the numbers ρn can be chosen such that ρn → 0. Finally, if the inequality in Definition 9.2 holds for some ρn, it also holds for all ρ ≥ ρn. Hence we may also choose all ρn to be equal to a small constant ρ depending on γ and ϵ.

The connection between Definition 9.2 and the first variation formula is provided by the next three results.

Proposition 9.3. Let X be a proper geodesic space, x, z ∈ X points at which X has the property (U). Assume that geodesics vary smoothly at x and at z. Then in Lemma 9.1 equality holds.

Proof. Since X × X has the property (U) at (x, z), it is enough to prove that the equality holds for the starting direction of each geodesic η̃ in X × X starting at (x, z). Hence it is enough to prove that for arbitrary geodesics η1 ∈ Γx and η2 ∈ Γz with starting directions v ∈ TxX resp. w ∈ TzX one has

lim inf_{t→0} (d(η1(t), η2(st)) − d(x, z))/t ≥ inf_{γ∈Γx,z} (bγ+(v) + bγ−(sw)) for all s ≥ 0.

Assume the contrary and choose a zero sequence (tn) violating the above inequality. Choose a geodesic γn from η1(tn) to η2(stn). Going to a subsequence we may assume that γn converge to a geodesic γ ∈ Γx,z.
For given ϵ > 0 and all n big enough we can find numbers ρ+ and ρ− such that d(x, γn(ρ+)) ≤ ρ+ − (bγ+(v) − ϵ)tn and d(z, γn(L(γn) − ρ−)) ≤ ρ− − (bγ−(sw) − ϵ)tn. But since γ is a geodesic we get L(γ) ≤ d(x, γn(ρ+)) + (L(γn) − ρ+ − ρ−) + d(z, γn(L(γn) − ρ−)). Hence L(γ) − L(γn) ≤ (2ϵ − bγ+(v) − bγ−(sw))tn. Since L(γn) = d(η1(tn), η2(stn)) and L(γ) ≥ d(x, z), this contradicts the choice of the sequence (tn) for ϵ small enough. This proves the result. •

In the same way we see:

Proposition 9.4. Let X be a proper geodesic space with property (U) at x, S ⊂ X a closed subset not containing x. Assume that geodesics vary smoothly at x. Then the first variation formula holds for S and x.

Example 9.5. Under the assumptions of Proposition 9.3 assume in addition that the tangent cones TxX and TzX are smooth (Definition 4.4). For v ∈ TxX, w ∈ TzX choose a sequence γi ∈ Γx,z such that D(x,z)d(v, w) = lim_i (bγi+(v) + bγi−(w)). Let γ ∈ Γx,z be a pointwise limit of the γi. Then γi+ resp. γi− converge to γ+ resp. γ− by Corollary 6.3, and from the smoothness of the tangent cones we obtain D(x,z)d(v, w) = bγ+(v) + bγ−(w). This finishes the proof of Theorem 1.2.

Lemma 9.5. Let X be a proper geodesic space with the property (U) at x. Assume that the first variation formula is valid at x for each closed subset S not containing x. If the tangent cone TxX is smooth, then geodesics vary smoothly at x.

Proof. Let γ, η, γn → γ be as in Definition 9.2 and set v = η+. For sufficiently small δ and each geodesic γ̃ ∈ Γx,γ(δ) we obtain from Corollary 6.3 that d(γ+, γ̃+) ≤ ϵ1, with ϵ1 → 0 as δ → 0. By the smoothness of the tangent cone we get |bγ+(v) − bγ̃+(v)| ≤ ϵ, with ϵ → 0 as δ → 0. Let now ϵ be given. Choose δ small enough and consider δn such that d(γn(δn), x) = δ + tn². Let S be the closed subset of X that consists of the sequence γn(δn) and the point z = γ(δ). The first variation formula gives us DxdS(v) = Dxdz(v) = inf_{γ̃∈Γx,z} bγ̃+(v). This and the above estimate of bγ̃+(v) directly imply the inequality of Definition 9.2. •

At the beginning of this section we have seen that in Banach spaces the first variation formula always holds for distance functions to points. The situation for the distance functions to subsets is more complicated, namely:

Lemma 9.6. Let B be a finite dimensional uniformly convex Banach space. Then the first variation formula holds in B for all distance functions iff the norm of B is smooth.

Proof. Assume that the norm is smooth. Let γn be a sequence of geodesics converging to a geodesic γ. We may assume that γ(s) = sh, γn(s) = tnv + shn with tn → 0 and some unit vectors v, hn and h, where hn converge to h. Fix some positive ϵ. By the smoothness of the norm, for large positive C we get |Ch + v| + |Ch − v| − 2C < ϵ. But d(0, γn(Ctn)) = |tnv + Ctnhn| = tn|v + Chn|. Choosing n such that Chn is very close to Ch, we get Ctn − d(0, γn(Ctn)) ≥ tn(C − |v + Chn|) ≥ tn(|Ch − v| − C − ϵ/2) ≥ tn(bh(v) − ϵ). Therefore geodesics vary smoothly in B and we are done by Proposition 9.4.

If the norm is not smooth, one can consider a non-smooth point x of the unit sphere in B. Let H be a supporting hyperplane at x. It is easy to see that for the distance function dH the first variation formula does not hold at the origin. We leave the details to the reader. •

§10. The class of geometric spaces

We recall from the introduction:

Definition 10.1. A proper geodesic space X is called geometric if it has property (U) at each point, each tangent space TxX = Cx is uniformly convex and smooth, and geodesics vary smoothly in X.

We are going to show now that many important spaces are geometric.

10.1. Alexandrov spaces.
Let X be an Alexandrov space. The upper angle coincides with the lower angle for each pair of geodesics starting at the same point. Hence X has the property (A) at each point and each geodesic cone is a Euclidean cone. If X has a lower curvature bound, then the geodesic cone Cx is proper by [BGP92] and the property (U) holds by the very definition of lower curvature bound. For Alexandrov spaces with an upper curvature bound the property (U) easily follows from geodesic completeness (see [OT]). Hence Alexandrov spaces are infinitesimally cone-like.

In order to prove that they are geometric, consider geodesics γ and η starting at x at the angle α and a sequence of geodesics γi converging to γ with γi(0) = η(ti). If X has a lower curvature bound, then by the semi-continuity of angles the angle between γi and η+ is ≥ α − ϵ for arbitrarily small ϵ and sufficiently big i. Hence the angle between γi and η− is at most π − α + ϵ. Now using the comparison triangle for x η(ti) γi(ρ) we get the needed upper bound for d(x, γi(ρ)). If X has an upper curvature bound, then the angle between η and the geodesic connecting x with γi(ρ) is at least α − ϵ. Again the comparison triangle to x η(ti) γi(ρ) gives us the needed upper bound for d(x, γi(ρ)).

10.2. Extremal subsets. Petrunin has proved in [Pet94] that an extremal subset of an Alexandrov space with a lower curvature bound is infinitesimally cone-like and geometric with respect to the inner metric.

10.3. Surfaces with an integral curvature bound. We will assume that the reader is familiar with the notion of a two-dimensional surface with an integral curvature bound; see [Res93] for the definition and an excellent survey. Let M be a surface with an integral curvature bound. By Theorem 8.2.3 of [Res93] the upper and lower angle between each pair of geodesics coincide, hence M has at each point the property (A) and the geodesic cone is a Euclidean cone. We will denote by Ω+ resp. Ω− the Borel measures that describe the positive resp. the negative part of the curvature. We will use Theorem 8.2.2 of [Res93], which says the following: let T be a triangle in M such that the concatenation of its sides is a simple closed curve and its inner part T⁰ is homeomorphic to a ball. Let α be the angle between two sides of T and let α̃ be the corresponding angle in the comparison triangle in the Euclidean plane. Then α − α̃ ≤ Ω+(T⁰).

Now let x ∈ M be an arbitrary point. Since the intersection of the punctured balls B⁰_r(x) := Br(x) \ {x} is empty, for each ϵ > 0 we can find an r > 0 such that Ω+(B⁰_r(x)) + Ω−(B⁰_r(x)) ≤ ϵ. Hence for each triangle T as above with a vertex in x and side lengths ≤ r, we obtain that each angle of T differs from the corresponding angle of the comparison triangle by at most 3ϵ. Consider now two geodesics γ1, γ2 of length t ≤ r starting at x at an angle ≤ ϵ. In order to verify the property (U) we have to estimate d(γ1(t), γ2(t))/t from above. If γ1 and γ2 intersect at γ1(t0) = γ2(t0), then the angle between γ1+ and γ2+ at γ1(t0) is at most ϵ. Hence we may assume that γ1 and γ2 do not intersect. Now it is easy to see that each geodesic η between γ1(t) and γ2(t) does not intersect γ1[0, t) ∪ γ2[0, t). Hence we may apply the above remark to the triangle γ1ηγ2 and get the needed estimate for the length of η. Thus M is infinitesimally cone-like.

In order to prove that geodesics vary smoothly at x, consider two geodesics γ, η ∈ Γx enclosing a positive angle α at x.
Let γn be a sequence of geodesics converging to γ with γn(0) = η(tn). Consider a geodesic νn between x and γn(r). Applying the above consideration we see that the angle between νn and γ is at most 2ϵ for big n. Hence the angle between η and νn is at least α − 2ϵ. Using the triangle ηνnγn we get the needed upper bound for the length of νn.

10.4. Metric operations. If X and Y are geometric, then so is the product X × Y. If X is geometric and C is a closed convex subset of X, then C is geometric. Moreover the Euclidean cone CX is geometric. The proofs are straightforward and left to the reader.

10.5. A class of interesting subsets of manifolds. Let M be a smooth manifold with a continuous Finsler metric. Let K ⊂ M be a closed subset such that the inner metric on K is biLipschitz equivalent to the induced one, i.e. each two points x, z ∈ K are connected in K by a curve of length at most Ld(x, z). Assume further that all geodesics in K with respect to the inner metric have uniformly bounded C^{1,α} norms for a fixed 0 < α ≤ 1.

Remark 10.1. In [Lyta] it is shown that the above conditions are satisfied by sets of positive reach (α = 1) and similar big classes of subsets in smooth Riemannian manifolds. Moreover they are satisfied if K = M and the Finsler metric on M is Hölder continuous and sufficiently convex ([LY]).

We are going to prove now that K with its inner metric has the property (U) at each point and that it has continuously varying geodesics if all norms | · |x are strongly convex and smooth. We will denote by dK resp. by d the inner resp. the induced metric on K. The question is local, so we may assume that M is a chart U ⊂ Rn and the Finsler structure is uniformly continuous. We denote by || · || the Euclidean norm on Rn and by | · |x the norm defined by the Finsler structure at x. For each K-geodesic γ in U we have ||γ′(t) − γ′(0)|| ≤ Lt^α for some fixed constant L. Moreover | |γ(0) − γ(t)|_{γ(0)} − t | ≤ o(t), where the function o(t) depends only on U and satisfies lim_{t→0} o(t)/t = 0. This implies the inequality dK(x, z) ≤ d(x, z) + o(d(x, z)) (compare [LY]). By Lemma 8.1 the space K has the property (A) at each point. If γ1, γ2 are two geodesics starting at x, then |dK(γ1(t), γ2(t)) − |γ1(t) − γ2(t)|x| ≤ o(t). Since |γi(t) − tγi+(0)|x ≤ o(t), we conclude that K has the property (U) at x.

Let finally γ and η be geodesics starting at x and let γn be a sequence of geodesics converging to γ with γn(0) = xn = η(tn). Let v be the starting direction of η and let h resp. hn be the starting directions of γ resp. of γn. From the uniform C^{1,α} bound on the γn we see that hn converge to h. Fix some ϵ > 0 and choose a sufficiently big C = C(ϵ) > 0. Consider the triangle x γn(0) γn(Ctn). We have d(x, γn(Ctn)) ≤ |γn(Ctn) − x|x + o(tn). On the other hand we have |γn(Ctn) − x|x ≤ tn|v + Chn|x + o(tn). Hence geodesics in K vary continuously at x if this is true in the Banach space TxM, i.e. if the norm of TxM is smooth and uniformly convex (Lemma 9.6). Finally remark that if each norm | · |x is a Euclidean norm, then K is infinitesimally cone-like.

§11. Differentiating in geometric spaces

11.1. Basics. Let X be a geometric space, F a closed subset of X and x ∈ X \ F. The uniform convexity of TxX and the first variation formula show that DxdF(v) ≤ −1 + δ for a vector v ∈ Sx ⊂ Cx implies d(v, γ+) ≤ ϵ for some γ ∈ Γx,F and ϵ = ϵ(δ) = ϵ(x, δ) with lim_{δ→0} ϵ(δ) = 0. In particular DxdF(v) = −1 iff v is the starting direction γ+ of some γ ∈ Γx,F.
Choose now a dense countable subset S of a punctured neighborhood of x. For each z ∈ S the function dz is differentiable at x with differential given by the first variation formula. For each unit vector v ∈ TxX and each ϵ > 0 we can find a point z ∈ S such that Dxdz = inf_{γ∈Γx,z} bγ+, where γ+ runs over some radial rays h with d(v, h(1)) < ϵ, i.e. z lies almost in the direction v from x. The uniform convexity of TxX shows:

Lemma 11.1. The differentials {Dxdz | z ∈ S} of the distance functions dz separate the points in TxX, i.e. the functions dz satisfy the conditions of Subsection 7.3.

Now Subsection 7.3 gives us:

Corollary 11.2. Let f : Z → X be a Lipschitz map. Assume that TzZ exists and that X is geometric. The map f is differentiable at z iff the compositions dxn ∘ f : Z → R are differentiable at z for all points xn in a dense countable subset D of X.

This implies Proposition 1.4, and from Theorem 8.2 we deduce Corollary 1.5.

11.2. Differentiating submetries. We recall some facts about submetries, a notion invented in [Ber87]; see also [BG00].

Definition 11.1. A map f : X → Y is a submetry if f(Br(x)) = Br(f(x)) holds for all x ∈ X and r ∈ R+.

If f : X → Y is a submetry, and X is proper resp. geodesic, then so is Y. For each closed subset A ⊂ Y we have dA ∘ f = d_{f−1(A)}. Two points x, x̄ in X are called near with respect to f if d(x, x̄) = d(f(x), f(x̄)) holds. By Nx we denote the set of all points near to x. The restriction f : Nx → Y is a surjective map. Each geodesic γ between near points (called a horizontal geodesic) is mapped isometrically onto its image, which is itself a geodesic. If X is geodesic, then the set Nx is the union of horizontal geodesics starting at x, and each geodesic in Γf(x) has a horizontal lift in Γx.

Proposition 11.3. Let f : X → Y be a submetry between geometric spaces. Then f is differentiable at each point and the differential Dxf : TxX → Tf(x)Y is a homogeneous submetry.

Proof. Consider a point x ∈ X and y = f(x). For each ȳ ≠ y the function dȳ ∘ f is the distance function dFȳ to the fiber Fȳ = f−1(ȳ) and therefore differentiable at x. By Corollary 11.2 the map f is differentiable at x. Being an ultralimit of submetries, the differential Dxf is a submetry. •

Under a convergence of submetries fibers converge to fibers, hence the tangent space to each fiber exists and is given by Tx(f−1(f(x))) = (Dxf)−1(0) := Vx (compare Example 7.4). The subset Nx of all points near to x is the union of all horizontal geodesics starting at x. Therefore we know by Example 6.10 that the space Nx has the property (U) at the point x. Hence the tangent space to Nx at x exists and is given by the closure of the union of radial rays in the tangent cone TxX corresponding to horizontal geodesics. In particular TxNx is contained in the horizontal subcone Hx = {h ∈ TxX : |h| = |Dxf(h)|}.

Take now an arbitrary unit direction h ∈ Hx and consider w = Dxf(h). Choose a sequence yj converging to y from the direction w. Then Dydyj(w) goes to −1. Therefore DxdFj(h) goes to −1 too, where Fj is the fiber f−1(yj). Thus the vector h is the limit of initial directions hj corresponding to some geodesics γj ∈ Γx,Fj. But each geodesic γ in Γx,Fj is horizontal. Thus we have proved TxNx = Hx = {h ∈ TxX : |h| = |Dxf(h)|}. Moreover the proof shows that for each h ∈ Hx and each geodesic γ in Y starting at y in the direction Dxf(h) there is a horizontal lift γ̄ of γ starting at x in the direction h.

11.3. More on submetries.
The aim of this subsection is to sketch the proof of the following

Proposition 11.4. Let X be a geometric space, f : X → Y a submetry. Then Y is geometric.

Proof. Choose x ∈ X and set y = f(x). The set Nx of points near to x still has the property (U). Denote by Hx = Cx(Nx) = TxNx ⊂ Cx = TxX the tangent space of Nx. Each geodesic in Nx starting at x is mapped isometrically onto a geodesic in Y. Hence we get a natural surjective map Dxf : Hx → Cy that is 1-Lipschitz and maps radial rays isometrically. In particular the geodesic cone Cy must be proper. For each radial ray h and each point v in Hx we get the following inequality for the Busemann functions: b_{Dxf(h)}(Dxf(v)) ≤ bh(v) (Example 2.2).

In order to prove the property (A) at y, consider two geodesics γ1 and γ2 starting at y and let γ̄1, γ̄2 ∈ Γx be their horizontal lifts. Denote by Fr resp. Gr the fiber f−1(γ1(r)) through γ̄1(r) resp. the fiber f−1(γ2(r)) through γ̄2(r). We get lim_ω d(γ1(ti), γ2(sti))/ti = lim_ω d(Fti, Gsti)/ti. Hence it is enough to prove that the equidistant decomposition of TxX defined by the submetry f_x^{(ti)} : X_x^{(ti)} → Y_y^{(ti)} is independent of the scale (ti). However, using the first variation formula and the uniform convexity of TxX, it is possible to show that v, w ∈ TxX are in the same fiber of f_x^{(ti)} iff DxdF(v) = DxdF(w) holds for each fiber F = f−1(ȳ) with ȳ ≠ y. In fact this shows that f is metrically differentiable at x.

Let now γ ∈ Γy be a geodesic and γ̄ ∈ Γx a horizontal lift of γ. Let ȳ ≠ y be an arbitrary point and set F = f−1(ȳ). Then dȳ ∘ γ = dF ∘ γ̄. Therefore the differentials of these two maps at 0 coincide. If we denote by v the unit vector γ̄+ ∈ Hx, we get by the first variation formula D0(dF ∘ γ̄) = bη+(v) for some geodesic η ∈ Γx,F. Geodesics from Γx,F are mapped by f isometrically onto geodesics in Γy,ȳ. Set w = Dxf(v). Then, as in Lemma 9.2, we get D0(dȳ ∘ γ) ≤ b_{Dxf(η+)}(w). But b_{Dxf(η+)} ∘ Dxf ≤ bη+. Thus we obtain b_{Dxf(η+)}(w) = bη+(v).

Using once again the uniform convexity of Cx, the above equality and the property (U) in X, we get the following: if d(γ1+, γ2+) < ρ for some γ1, γ2 ∈ Γy, then d(γ̄1(t), γ̄2(t)) < ϵt for all t < ρ, each horizontal lift γ̄1 of γ1 and some horizontal lift γ̄2 of γ2 starting at x. This verifies the property (U) at y.

Moreover the above equality for Busemann functions implies that for each v ∈ Hx and each radial ray η ⊂ Cy there is at least one radial ray η̄ ⊂ Hx with Dxf(η̄) = η and bη̄(v) = bη(Dxf(v)). If both Hx and Cy were Euclidean cones, this would give us that the differential Dxf : Hx → Cy is a submetry. In general we do not know if this must be true. However the uniform convexity and the smoothness of Hx imply that the cone Cy is uniformly convex and smooth. Finally, the above equality of Busemann functions in Cx and in Cy shows that the first variation formula is valid at y. From Lemma 9.5 we deduce that Y is geometric. •

§12. Theorem of Rademacher

Now we are going to prove Theorem 1.6.

Proof. The question is local and we may assume that S is compact. Assume that we already know the result in the case n = 1. Then we can deduce it for arbitrary n by standard reasoning ([Kir94] and [MM00]). Namely, let v be a unit vector in Rn. For each x ∈ Rn consider the line γx through x in the direction v. The restriction of f to γx ∩ S is Lipschitz and by assumption this restriction is differentiable a.e. on γx ∩ S. Denote by Gv the set of all x ∈ S such that the restriction of f to γx ∩ S is differentiable at x.
Then Gv has full measure in S, by the theorem of Fubini ([MM00]). Put G = ∩_{v∈D} Gv, where v runs over a countable dense subset D of the unit sphere. The set G also has full measure in S, and f is differentiable at each point of G.

So let S ⊂ I be a compact subset of an interval and f : S → Z a Lipschitz map. Since Z is geodesic, we may extend f to a Lipschitz curve γ : I → Z. Reparametrizing γ we may assume that it is parameterized by arclength. Then γ is 1-Lipschitz, and by Theorem 8.2 there is a subset G̃ ⊂ I of full measure in I such that for all s ∈ G̃ the metric differential mDsγ exists and is the canonical metric d on R = TsI. Set xt = γ(t) and let ht : I → R be the non-negative 1-Lipschitz function ht(s) = d_{xt}(γ(s)) = d(xt, xs). Let T be a dense countable subset of I. By the usual theorem of Rademacher the set G of all points s ∈ G̃, where h′t(s) exists and is linear for all t ∈ T, has full measure in I.

Denote by N_ϵ^+ resp. N_ϵ^− the set of all points s ∈ G such that h′t(s) < 1 − ϵ for all t ∈ T with t < s, resp. h′r(s) > −1 + ϵ for all r ∈ T with r > s. The set N_ϵ^+ is measurable. Assume that it has positive measure and take a Lebesgue point s of N_ϵ^+. Choose ρ such that for all t with s − ρ < t < s the inequalities d(xt, xs) ≥ (1 − ϵ²)|t − s| and μ(N_ϵ^+ ∩ [t, s]) ≥ (1 − ϵ²)|s − t| hold. But T is dense in I by assumption. Hence we can choose some t ∈ T with s − ρ < t < s and get ht(s) − ht(t) ≥ (s − t)(1 − ϵ²). On the other hand the differential of the 1-Lipschitz function ht on the subset N_ϵ^+ ∩ [t, s] is bounded above by 1 − ϵ. Since this subset has measure at least (s − t)(1 − ϵ²), we see ht(s) − ht(t) ≤ (s − t)ϵ² + (s − t)(1 − ϵ²)(1 − ϵ) = (s − t)(1 − ϵ + ϵ³). For small ϵ we get a contradiction to ht(s) − ht(t) ≥ |s − t|(1 − ϵ²). In the same way we see that N_ϵ^− has measure 0 in G. Hence also Nϵ = N_ϵ^+ ∪ N_ϵ^− and N = ∪_{ϵ>0} Nϵ have measure 0 in I. Thus the complement G0 = G \ N has full measure in I.

Until now we have not used the curvature assumptions; they will imply the result now. Let s ∈ G0 be arbitrary and set z = γ(s). Choose sequences tn and rn with tn < s < rn such that h′_{tn}(s) → 1 and h′_{rn}(s) → −1. Let vn resp. wn be the starting vectors of some geodesics from z to γ(tn) resp. from z to γ(rn). We are going to prove that vn and wn converge in Cz to γ− resp. to γ+. In order to prove this, consider an arbitrary sequence ϵj → 0 and the point w = (γ(s + ϵj)) ∈ Z_z^{(ϵj)}. Assume first that Z is a CAT(κ) space. Then from the comparison triangle to x_{rn} x_s x_{s+ϵj} with ϵj ≪ rn − s we see that h′_{rn}(s) → −1 implies that the distance between wn and w goes to 0 as n → ∞. This finishes the proof in the case of an upper curvature bound. Let now Z be a space with curvature ≥ κ. Then the comparison triangle to x_{tn} x_s x_{s+ϵj} with ϵj ≪ s − tn shows that the distance between vn and w goes to 2 as n goes to ∞. But Z_z^{(ϵj)} is a non-negatively curved space. This shows that vn converge to a unique point v̄ ∈ Cz ⊂ Z_z^{(ϵj)} with |v̄| = 1 and d(v̄, w) = 2. But since the metric differential at s of γ is the usual metric on R = TsI, we see that the point v = (γ(s − ϵj)) ∈ Z_z^{(ϵj)} also satisfies d(v, 0) = 1 and d(v, w) = 2. In the non-negatively curved space Z_z^{(ϵj)} geodesics cannot branch, hence v and v̄ coincide. This finishes the proof in the case of a lower curvature bound. •

References

[BBI01] Burago D., Burago Yu., Ivanov S., A course in metric geometry, Grad. Stud. Math., vol. 33, Amer. Math. Soc., Providence, RI, 2001.
[Bel96] Bellaiche A., The tangent space in sub-Riemannian geometry, Sub-Riemannian Geometry, Progr. Math., vol. 144, Birkhäuser, Basel, 1996, pp. 1–78.
[Ber87] Berestovskii V. N., Submetries of space forms of nonnegative curvature, Sibirsk. Mat. Zh. 28 (1987), no. 4, 44–56 (in Russian).
[BG00] Berestovskii V., Guijarro L., A metric characterization of Riemannian submersions, Ann. Global Anal. Geom. 18 (2000), no. 6, 577–588.
[BGP92] Burago Yu., Gromov M., Perelman G., A. D. Alexandrov spaces with curvatures bounded below, Uspekhi Mat. Nauk 47 (1992), no. 2, 3–51 (in Russian).
[BH99] Bridson M., Haefliger A., Metric spaces of non-positive curvature, Grundlehren Math. Wiss., vol. 319, Springer-Verlag, Berlin, 1999.
[CH70] Calabi E., Hartman Ph., On the smoothness of isometries, Duke Math. J. 37 (1970), 741–750.
[Che99] Cheeger J., Differentiability of Lipschitz functions on metric measure spaces, Geom. Funct. Anal. 9 (1999), 428–517.
[Fed59] Federer H., Curvature measures, Trans. Amer. Math. Soc. 93 (1959), 418–491.
[Hal00] Halbeisen S., On tangent cones of Alexandrov spaces with curvature bounded below, Manuscripta Math. 103 (2000), no. 2, 169–182.
[HM98] Hoffmann K., Morris S., The structure of compact groups, Walter de Gruyter and Co., Berlin, 1998.
[JL01] Johnson W., Lindenstrauss J., Basic concepts in the geometry of Banach spaces, Handbook of the Geometry of Banach Spaces, Vol. 1, North-Holland, Amsterdam, 2001, pp. 1–84.
[Kir94] Kirchheim B., Rectifiable metric spaces: local structure and regularity of the Hausdorff measure, Proc. Amer. Math. Soc. 121 (1994), 113–123.
[KL97] Kleiner B., Leeb B., Rigidity of quasi-isometries for symmetric spaces and Euclidean buildings, Inst. Hautes Études Sci. Publ. Math. No. 86 (1997), 115–197 (1998).
[Lyta] Lytchak A., Almost convex subsets (in preparation).
[Lytb] Lytchak A., Differentiation in Carnot-Caratheodory spaces (in preparation).
[Lytc] Lytchak A., Open map theorem in metric spaces, Preprint.
[LY] Lytchak A., Yaman A., On Hölder continuous Riemannian and Finsler manifolds, Preprint.
[Mit85] Mitchell J., On Carnot-Caratheodory metrics, J. Differential Geom. 21 (1985), no. 1, 35–45.
[MM00] Margulis G. A., Mostow G. D., Some remarks on the definition of tangent cones in a Carnot-Caratheodory space, J. Anal. Math. 80 (2000), 299–317.
[Nik95] Nikolaev I., The tangent cone of an Aleksandrov space of curvature ≤ K, Manuscripta Math. 86 (1995), 137–147.
[OT] Otsu Y., Tanoue H., The Riemannian structure of Alexandrov spaces with curvature bounded above, Preprint.
[Pet94] Petrunin A., Applications of quasigeodesics and gradient curves, Comparison Geometry (Berkeley, CA, 1993–94), Math. Sci. Res. Inst. Publ., vol. 30, Cambridge Univ. Press, Cambridge, 1997, pp. 203–219.
[PP94a] Perelman G. Ya., Petrunin A. M., Extremal subsets in Alexandrov spaces and the generalized Liberman theorem, Algebra i Analiz 5 (1993), no. 1, 242–256 (in Russian).
[PP94b] Perelman G., Petrunin A., Quasigeodesics and gradient curves in Alexandrov spaces, Preprint, 1994.
[Res93] Reshetnyak Yu. G., Two-dimensional manifolds of bounded curvature, Geometry IV, Itogi Nauki i Tekhniki, vol. 70, VINITI, Moscow, 1989, pp. 7–189 (in Russian).

Mathematisches Institut, Universität Bonn, Beringstr. 1, 53115 Bonn, Germany. Received May 12, 2004. E-mail: [email protected]
c# - Efficient algorithm to get primes between two large numbers - Stack Overflow
===============
Efficient algorithm to get primes between two large numbers

Asked Jul 10, 2010 by rafael · Viewed 19k times · Score 12 · Tags: c#, algorithm, primes

I'm a beginner in C#, and I'm trying to write an application to get the primes between two numbers entered by the user.
The problem is: at large numbers (valid numbers are in the range from 1 to 1000000000) getting the primes takes a long time, and according to the problem I'm solving, the whole operation must be carried out in a small time interval. This is the problem link for more explanation: SPOJ-Prime. Here's the part of my code that's responsible for getting primes:

```csharp
public void GetPrime()
{
    int L1 = int.Parse(Limits);
    int L2 = int.Parse(Limits);
    if (L1 == 1)
    {
        L1++;
    }
    for (int i = L1; i <= L2; i++)
    {
        for (int k = L1; k <= L2; k++)
        {
            if (i == k)
            {
                continue;
            }
            else if (i % k == 0)
            {
                flag = false;
                break;
            }
            else
            {
                flag = true;
            }
        }
        if (flag)
        {
            Console.WriteLine(i);
        }
    }
}
```

Is there any faster algorithm? Thanks in advance.

Comments:
- Possible duplicate: stackoverflow.com/questions/453793/… – György Andrasek
- It's not really the same. That question asks what the fastest is, this question asks what's fast enough to get accepted on SPOJ. – IVlad
- Large is relative. For some reason I would call every prime that fits in an Int32 small. – Dykam

10 Answers

Answer (24 votes, IVlad):

I remember solving the problem like this:

1. Use the sieve of Eratosthenes to generate all primes below sqrt(1000000000) = ~32 000 in an array `primes`.
2. For each number x between m and n, only test if it's prime by testing for divisibility against numbers <= sqrt(x) from the array `primes`. So for x = 29 you will only test if it's divisible by 2, 3 and 5.

There's no point in checking for divisibility against non-primes, since if x is divisible by a non-prime y, then there exists a prime p < y such that x is divisible by p, since we can write y as a product of primes. For example, 12 is divisible by 6, but 6 = 2 * 3, which means that 12 is also divisible by 2 or 3.

By generating all the needed primes in advance (there are very few in this case), you significantly reduce the time needed for the actual primality testing. This will get accepted and doesn't require any optimization or modification to the sieve, and it's a pretty clean implementation.

You can do it faster by generalising the sieve to generate primes in an interval [left, right], not [2, right] like it's usually presented in tutorials and textbooks. This can get pretty ugly however, and it's not needed. But if anyone is interested, see this linked answer.

Comments:
- With generalising the sieve, do you mean the segmented version of the sieve of Eratosthenes? – Christian Ammer
- "This can get pretty ugly however" - actually it's very easy, you just have two sieves, one for the range 2..sqrt(n) and one for m..n. Example in C++: pastie.org/9199654, and you get a nice runtime of O((n-m) log log (n-m) + sqrt(n) log log n). The segmented sieve is not a lot more complicated if you don't interleave the prime-finding and sieving phases. – Niklas B.
- Yes, you do need the sieve of Eratosthenes to find primes in a range between two big numbers, and no, it isn't ugly. – Will Ness
- In fact there are so few primes below 32K that you probably could find them with trial division too, without any noticeable slowdown. I tried it: the trial-division C code was accepted at 2.65 s (my old sieve-of-Eratosthenes C code there was 0.07 s). – Will Ness
- How about we use the sieve two times? The first one gets us the array of primes, and we use the second sieve as usual to strike off all the numbers divisible by them. – Arthas
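To make the recipe in this answer concrete, here is a hedged C# sketch of the approach (my illustration, not code from the answer; the names `SmallPrimes` and `PrimesInRange` and the example range are mine):

```csharp
using System;
using System.Collections.Generic;

static class RangePrimes
{
    // Sieve of Eratosthenes up to 'limit' (inclusive); only needed up to sqrt(n).
    static List<int> SmallPrimes(int limit)
    {
        var composite = new bool[limit + 1];
        var primes = new List<int>();
        for (int i = 2; i <= limit; i++)
        {
            if (composite[i]) continue;
            primes.Add(i);
            for (long j = (long)i * i; j <= limit; j += i)
                composite[j] = true;
        }
        return primes;
    }

    // Primes in [m, n] by trial division against the precomputed small primes.
    static IEnumerable<long> PrimesInRange(long m, long n)
    {
        var primes = SmallPrimes((int)Math.Sqrt(n) + 1);
        for (long x = Math.Max(m, 2); x <= n; x++)
        {
            bool isPrime = true;
            foreach (int p in primes)
            {
                if ((long)p * p > x) break;       // only divisors <= sqrt(x) matter
                if (x % p == 0) { isPrime = false; break; }
            }
            if (isPrime) yield return x;
        }
    }

    static void Main()
    {
        foreach (long p in PrimesInRange(999999900, 1000000000))
            Console.WriteLine(p);
    }
}
```

On SPOJ-style input you would of course read m and n per test case instead of hard-coding them; the sketch only shows the precompute-then-trial-divide structure described above.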
Answer (6 votes, Michael):

You are doing a lot of extra divisions that are not needed - if you know a number is not divisible by 3, there is no point in checking if it is divisible by 9, 27, etc. You should try to divide only by the potential prime factors of the number. Cache the set of primes you are generating and only check division by the previous primes. Note that you do need to generate the initial set of primes below L1.

Remember that no number will have a prime factor that's greater than its own square root, so you can stop your divisions at that point. For instance, you can stop checking potential factors of the number 29 after 5.

You also can increment by 2 so you can disregard checking if an even number is prime altogether (special-casing the number 2, of course).

I used to ask this question during interviews - as a test I compared an implementation similar to yours with the algorithm I described. With the optimized algorithm, I could generate hundreds of thousands of primes very fast - I never bothered waiting around for the slow, straightforward implementation.

Answer (3 votes, Brian S):

You could try the Sieve of Eratosthenes. The basic difference would be that you start at L1 instead of starting at 2.

Comments:
- One thing I like about this algorithm is you could potentially do it in parallel to leverage multiple cores - alas, I've never gotten around to actually trying it. – Michael
- Uh, no, you'd still have to start at 2. – BlueRaja - Danny Pflughoeft

Answer (1 vote, Greg Kuperberg):

Let's change the question a bit: how quickly can you generate the primes between m and n and simply write them to memory? (Or, possibly, to a RAM disk.) On the other hand, remember the range of parameters as described on the problem page: m and n can be as high as a billion, while n-m is at most a million.

IVlad and Brian have given most of a competitive solution, even if it is true that a slower solution could be good enough.
First generate or even precompute the prime numbers less than sqrt(billion); there aren't very many of them. Then do a truncated Sieve of Eratosthenes: make an array of length n-m+1 to keep track of the status of every number in the range [m,n], with initially every such number marked as prime (1). Then for each precomputed prime p, do a loop that looks like this: `for (k = ceil(m/p) * p; k <= n; k += p) status[k - m] = 0;`. This loop marks all of the numbers in the range m <= x <= n as composite (0) if they are a multiple of p.

If this is what IVlad meant by "pretty ugly", I don't agree; I don't think that it's so bad.

In fact, almost 40% of this work is just for the primes 2, 3, and 5. There is a trick to combine the sieve for a few primes with the initialization of the status array. Namely, the pattern of divisibility by 2, 3, and 5 repeats mod 30. Instead of initializing the array to all 1s, you can initialize it to a repeating pattern of 010000010001010001010001000001. If you want to be even more cutting edge, you can advance k by 30p instead of by p, and only mark off the multiples in the same pattern.

After this, realistic performance gains would involve steps like using a bit vector rather than a char array to keep the sieve data in on-chip cache, and initializing the bit vector word by word rather than bit by bit. This does get messy, and also hypothetical, since you can get to the point of generating primes faster than you can use them. The basic sieve is already very fast and not very complicated.
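A runnable version of this truncated (segmented) sieve, as a hedged sketch (my code, not the answerer's; the mod-30 wheel trick is deliberately left out for clarity):

```csharp
using System;
using System.Collections.Generic;

static class SegmentedSieve
{
    // Primes in [m, n]: sieve the small primes up to sqrt(n), then cross off
    // their multiples inside a status array of length n - m + 1.
    static List<long> PrimesInRange(long m, long n)
    {
        int limit = (int)Math.Sqrt(n) + 1;
        var smallComposite = new bool[limit + 1];
        var smallPrimes = new List<int>();
        for (int i = 2; i <= limit; i++)
        {
            if (smallComposite[i]) continue;
            smallPrimes.Add(i);
            for (long j = (long)i * i; j <= limit; j += i)
                smallComposite[j] = true;
        }

        var isPrime = new bool[n - m + 1];               // status[k - m] in the notation above
        for (long x = m; x <= n; x++) isPrime[x - m] = x >= 2;

        foreach (int p in smallPrimes)
        {
            // first multiple of p inside [m, n], but never p itself
            long start = Math.Max((long)p * p, ((m + p - 1) / p) * (long)p);
            for (long k = start; k <= n; k += p)
                isPrime[k - m] = false;
        }

        var result = new List<long>();
        for (long x = m; x <= n; x++)
            if (isPrime[x - m]) result.Add(x);
        return result;
    }

    static void Main()
    {
        foreach (long p in PrimesInRange(999999000, 1000000000))
            Console.WriteLine(p);
    }
}
```

For the full speed described in the answer you would swap the bool[] for a bit vector and pre-stamp the mod-30 pattern; the plain version above already has the runtime quoted in the comments, roughly O((n-m) log log (n-m) + sqrt(n) log log n).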
Answer (1 vote, BlueRaja - Danny Pflughoeft):

One thing no one's mentioned is that it's rather quick to test a single number for primality. Thus, if the range involved is small but the numbers are large (ex. generate all primes between 1,000,000,000 and 1,000,100,000), it would be faster to just check every number for primality individually.

Comment:
- I was going to make that point, until I realized that it isn't relevant for the parameters of the problem. Even for the example range that you give, the range is big enough that a sieve is faster. If you wanted all primes, say, between 1,000,000,000 and 1,000,001,000, then a partial sieve combined with Miller-Rabin would be the best approach. – Greg Kuperberg

Answer (1 vote, Arseni Mourzenko):

There are many algorithms for finding prime numbers. Some are faster, others are easier. You can start by making some of the easiest optimizations. For example, why are you checking whether every number is prime? In other words, are you sure that, given a range of 411 to 418, there is a need to check whether the numbers 412, 414, 416 and 418 are prime?

- Numbers which divide by 2 and 3 can be skipped with very simple code modifications.
- ~~If the number is not 5, but ends with the digit '5' (1405, 335), it is not prime~~ - bad idea: it will make the search slower.
- What about caching the results? You can then divide by primes rather than by every number. Moreover, only primes less than the square root of the number you search are concerned.

If you need something really fast and optimized, taking an existing algorithm instead of reinventing the wheel can be an alternative. You can also try to find some scientific papers explaining how to do it fast, but they can be difficult to understand and to translate to code.

Comment:
- Converting the number to base-10 to check if the last digit is 5 is significantly slower than just checking num % 5 == 0. – BlueRaja - Danny Pflughoeft

Answer (1 vote, Enow B. Mbi):

```csharp
int ceilingNumber = 1000000;
int myPrimes = 0;
BitArray myNumbers = new BitArray(ceilingNumber, true);

for (int x = 2; x < ceilingNumber; x++)
    if (myNumbers[x])
    {
        for (int y = x * 2; y < ceilingNumber; y += x)
            myNumbers[y] = false;
    }

for (int x = 2; x < ceilingNumber; x++)
    if (myNumbers[x])
    {
        myPrimes++;
        Console.Out.WriteLine(x);
    }

Console.Out.WriteLine("======================================================");
Console.Out.WriteLine("There is/are {0} primes between 0 and {1} ", myPrimes, ceilingNumber);
Console.In.ReadLine();
```

Answer (0 votes, Tranquocbinh333):

I think I have a very fast and efficient algorithm for getting prime numbers (even when using type BigInteger); it is much faster and simpler than any other one, and I use it to solve almost every problem related to prime numbers in Project Euler with just a few seconds for a complete solution (brute force). Here is the Java code:

```java
public boolean checkprime(int value) {
    int n, limit;
    boolean isprime;
    isprime = true;
    limit = value / 2;
    if (value == 1) isprime = false;
    /*
    if (value > 100)     limit = value / 10;   // if a number is not prime it will have
    if (value > 10000)   limit = value / 100;  // at least 2 factors (not 1 or itself):
    if (value > 90000)   limit = value / 300;  // 1 greater than average, 1 lower than average
    if (value > 1000000) limit = value / 1000; // ex: 9997 = 13 * 769 (average ~ sqrt(9997) is 100)
    if (value > 4000000) limit = value / 2000; // so we just want to check divisors up to 100
    if (value > 9000000) limit = value / 3000; // for primes ~10000
    */
    limit = (int)Math.sqrt(value); // general case
    for (n = 2; n <= limit; n++) {
        if (value % n == 0 && value != 2) {
            isprime = false;
            break;
        }
    }
    return isprime;
}
```

Comments:
- Any reason for not just returning isprime at the end? – DarthJDG
- Yes, it's true, just returning isprime is enough. – Tranquocbinh333
- The execution time between 1M numbers and 10M numbers is pretty significant (like .5 seconds vs. 8 seconds). You could probably halve your tested set by incrementing your loop by 2 instead of 1. – Sinaesthetic
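Pulling together the suggestions from the comments above (return the flag directly, stop at the square root, skip even divisors), a minimal single-number trial-division check might look like this - my sketch, not any answerer's exact code:

```csharp
using System;

static class Primality
{
    // Simple deterministic trial division: handle 2, then test odd divisors up to sqrt(value).
    static bool IsPrime(long value)
    {
        if (value < 2) return false;
        if (value % 2 == 0) return value == 2;
        for (long d = 3; d * d <= value; d += 2)
            if (value % d == 0) return false;
        return true;
    }

    static void Main()
    {
        Console.WriteLine(IsPrime(1000000007)); // True
        Console.WriteLine(IsPrime(1000000009)); // True
        Console.WriteLine(IsPrime(1000000011)); // False (3 * 333333337)
    }
}
```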
Answer (0 votes, Anish Kumar):

```java
import java.io.*;
import java.util.Scanner;

class Test {
    public static void main(String args[]) {
        Test tt = new Test();
        Scanner obj = new Scanner(System.in);
        int m, n;
        m = obj.nextInt();
        n = obj.nextInt();
        tt.IsPrime(n, m);
    }

    public void IsPrime(int num, int k) {
        boolean[] isPrime = new boolean[num + 1];
        // initially assume all integers are prime
        for (int i = 2; i <= num; i++) {
            isPrime[i] = true;
        }
        // mark non-primes <= N using Sieve of Eratosthenes
        for (int i = 2; i * i <= num; i++) {
            // if i is prime, then mark multiples of i as nonprime;
            // suffices to consider multiples i, i+1, ..., N/i
            if (isPrime[i]) {
                for (int j = i; i * j <= num; j++) {
                    isPrime[i * j] = false;
                }
            }
        }
        for (int i = k; i <= num; i++) {
            if (isPrime[i]) {
                System.out.println(i);
            }
        }
    }
}
```

Comment:
- You should use addition, not multiplication, in your inner loop: `[i*j]; j++` is the same as `[j]; j += i`. – Will Ness

Answer (-1 votes, Rana Tomar):

```csharp
List<int> prime(int x, int y)
{
    List<int> a = new List<int>();
    int b = 0;
    for (int m = x; m < y; m++)
    {
        for (int i = 2; i <= m / 2; i++)
        {
            b = 0;
            if (m % i == 0)
            {
                b = 1;
                break;
            }
        }
        if (b == 0)
            a.Add(m);
    }
    return a;
}
```
Measurement and Interpretation of the Ankle-Brachial Index | Circulation
===============

Research Article | Originally published 16 November 2012 | Free Access

Measurement and Interpretation of the Ankle-Brachial Index: A Scientific Statement From the American Heart Association

Victor Aboyans, MD, PhD, FAHA, Chair; Michael H. Criqui, MD, MPH, FAHA, Co-Chair; Pierre Abraham, MD, PhD; Matthew A. Allison, MD, MPH, FAHA; Mark A. Creager, MD, FAHA; Curt Diehm, MD, PhD; F. Gerry R. Fowkes, MBChB, PhD, FAHA; …; William R. Hiatt, MD, FAHA; Björn Jönsson, MD, PhD; Philippe Lacroix, MD; Benoît Marin, MD; Mary M. McDermott, MD, FAHA; Lars Norgren, MD, PhD; Reena L. Pande, MD, MSc; Pierre-Marie Preux, MD, PhD; H.E.
(Jelle) Stoffers, MD, PhD; and Diane Treat-Jacobson, PhD, RN, FAHA; on behalf of the American Heart Association Council on Peripheral Vascular Disease, Council on Epidemiology and Prevention, Council on Clinical Cardiology, Council on Cardiovascular Nursing, Council on Cardiovascular Radiology and Intervention, and Council on Cardiovascular Surgery and Anesthesia

Circulation, Volume 126, Number 24

Contents: Introduction | Rationale for Standardization of the ABI | Aims and Scope | ABI Terminology | Physiology of the ABI | ABI in Clinical Practice | Training for the Use of the ABI | Standards to Report ABI in Scientific Papers | Unmet Needs: Fields of Research for the Future | Supplemental Material | References

This article has been corrected.

Introduction

The ankle-brachial index (ABI) is the ratio of the systolic blood pressure (SBP) measured at the ankle to that measured at the brachial artery. Originally described by Winsor1 in 1950, this index was initially proposed for the noninvasive diagnosis of lower-extremity peripheral artery disease (PAD).2,3 Later, it was shown that the ABI is an indicator of atherosclerosis at other vascular sites and can serve as a prognostic marker for cardiovascular events and functional impairment, even in the absence of symptoms of PAD.4–6

Rationale for Standardization of the ABI

The current lack of standards for measurement and calculation of the ABI leads to discrepant results with significant impact from clinical, public health, and economic standpoints. Indeed, the estimated prevalence of PAD may vary substantially according to the mode of ABI calculation.7–9 In a review of 100 randomly selected reports using the ABI, multiple variations in technique were identified, including the position of the patient during measurement, the sizes of the arm and leg cuffs, the location of the cuff on the extremity, the method of pulse detection over the brachial artery and at the ankles, whether the arm and ankle pressures were measured bilaterally, which ankle pulses were used, and whether a single or replicate measures were obtained.10 There is controversy about what ABI threshold should be used to diagnose PAD. The ABI threshold most commonly used is ≤0.90 based on studies reporting >90% sensitivity and specificity to detect PAD compared with angiography.2,3 These studies were limited in that they included mostly older white men with PAD or who were at high risk for PAD and compared them with a younger healthy group. A recent meta-analysis of 8 studies of diverse populations, including diabetic patients, confirmed a high specificity but lower sensitivity (at best <80%) than that reported in earlier studies.11 Similar to other vascular markers such as carotid intima-media thickness12 or coronary artery calcium score,13 standardization of the techniques used to measure the ABI and the calculation and interpretation of its values is necessary.
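To make the calculation issue concrete, the following is a small illustrative sketch with made-up pressures (the method name `ComputeAbi`, the enum, and the numbers are not from the statement): the same leg can land on either side of the commonly used 0.90 threshold depending on whether the higher or the lower of the two ankle pressures is taken as the numerator.

```csharp
using System;

enum AnkleRule { HigherOfPtAndDp, LowerOfPtAndDp }

static class AbiExample
{
    // ABI for one leg = chosen ankle systolic pressure / higher of the two arm pressures.
    static double ComputeAbi(double pt, double dp, double rightArm, double leftArm, AnkleRule rule)
    {
        double ankle = rule == AnkleRule.HigherOfPtAndDp ? Math.Max(pt, dp) : Math.Min(pt, dp);
        double arm = Math.Max(rightArm, leftArm);
        return ankle / arm;
    }

    static void Main()
    {
        // Hypothetical pressures (mm Hg): posterior tibial 128, dorsalis pedis 112, arms 140 and 136.
        double high = ComputeAbi(128, 112, 140, 136, AnkleRule.HigherOfPtAndDp); // 128/140 ≈ 0.91
        double low  = ComputeAbi(128, 112, 140, 136, AnkleRule.LowerOfPtAndDp);  // 112/140 = 0.80
        Console.WriteLine($"Higher-ankle rule: {high:F2}, lower-ankle rule: {low:F2}");
    }
}
```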
Aims and Scope The goals for this document are to provide a comprehensive review of the relevant literature on the measurement of the ABI, to provide recommendations for a standardized method to determine the ABI, to provide guidance on the interpretation of the ABI in the clinical setting, to propose standards for reporting ABI data in the scientific literature, and to delineate methodological issues requiring further research. ABI Terminology The ABI has also been called the ankle-arm index, the ankle-brachial blood pressure index, the ankle-arm ratio, or the Winsor Index. The term ABI was recommended by the recent American Heart Association Proceeding on Atherosclerotic Peripheral Vascular Disease14 on the basis of its current widespread use in contemporary literature and accordingly is used throughout this document. Physiology of the ABI Why Is SBP Higher in the Ankles Than in the Arms? The blood pressure waveform amplifies as it travels distally from the heart, resulting in a progressive increase in SBP and a decrease in diastolic blood pressure. The most widely accepted model used to explain the SBP amplification relies on retrograde wave reflection from resistant distal arterioles, which is additive to the antegrade wave.15 Several lines of evidence indicate that reflected waves occur at various sites in the vascular bed,16,17 with some attenuation along the arterial system.18,19 However, the reflected wave is not the sole explanation for the changes in pressure wave morphology.18 In the legs, remodeling of vessel structure occurs, resulting from increased intraluminal pressure, characterized by increased wall thickening and unchanged inner radius.20,21 The changes in wall thickness resulting from increased hydrostatic pressure in the lower extremities with walking (vertical position) occur during the second year of life and plausibly explain why the ABI is <1.00 in the newborn and increases to adult values at 2 to 3 years of age.22 Therefore, both reflected waves and changes in vessel wall thickness and consequently stiffness contribute to SBP amplification. Physiological Conditions Affecting the ABI at Rest Age, height, ethnicity, and even the order of measurement can affect the ABI. In 2 population studies, the ABI of the right leg was on average 0.03 higher than that of the left leg.23,24 This observation may be due to the order of measurements (usually the right leg first) and the resulting temporal reduction in systemic pressure over time (white coat attenuation effect). An increased ABI may be expected with aging as a result of arterial stiffening. Cross-sectional and longitudinal population studies indicate that the ABI decreases with age, probably because of the increased prevalence and progression of PAD.23,25 It might be expected that taller people would have higher ABIs than shorter people as a consequence of the progressive SBP increase with greater distance from the heart. Indeed, in populations without clinical cardiovascular disease (CVD), there is a direct correlation between height and ABI.24,26 In the Multi-Ethnic Study of Atherosclerosis (MESA), however, the adjusted contribution of height to ABI was negligible, <0.01 higher for every 20-cm height increase, after accounting for sex, ethnicity, and risk factors.27 Sex differences in ABI have been reported in many population studies.23,26–29 Among participants without traditional CVD risk factors in the San Luis Valley Diabetes Study,24 the average ABI was 0.07 less in women than in men. 
Adjustment for height reduces but does not eliminate observed differences.24,27,30 After multivariate adjustments, ABI was 0.02 lower in women than men in a subset of MESA participants free of PAD and traditional risk factors for atherosclerosis.27 Black PAD-free participants in MESA had an ABI 0.02 unit lower than non-Hispanic white counterparts after multivariate adjustment,27 consistent with a previous observation from the Atherosclerosis Risk in Communities Study (ARIC).30 Ethnic differences are likely to result from genetic influences. Carmelli et al31 measured the ABI of monozygotic and dizygotic pairs of elderly, white, male twins and estimated that 48% of the variability in ABI values could be attributed to genetic factors. European ancestry was associated with lower odds for PAD (ABI ≤0.90) than among Hispanic and black participants in MESA.32 An inverse relationship between the ABI and heart rate has been reported in subjects without heart disease33,34 and in subjects referred to a vascular laboratory.35 In 1 study,34 an increased difference between peripheral and central SBP was observed during cardiac pacing as heart rate increased from 60 to 110 bpm. With increasing heart rate, the ratio of brachial to central pressure rose by 0.012 unit for every 10 bpm, whereas the amplification index (the difference between the first and second peaks of the central arterial waveform) decreased. This was attributed to the ejection duration reduction, which causes a shift of the reflected wave into diastole associated with an increasing heart rate. In MESA, a population-based study, heart rate did not correlate with the ABI.27 Because the ABI is a ratio, it is in theory not affected by factors that raise or lower blood pressure. For example, changes in blood volume after hemodialysis do not alter the ABI, despite significant removal of fluid and reduction in blood pressure.36 Overall, all these factors that affect the ABI at an individual level are minor but may be relevant in large population studies, especially when the epidemiology of PAD is being studied. ABI in Clinical Practice Background ABI: A Diagnostic Method for Lower-Extremity PAD ABI Versus Angiography and Other Imaging Methods Compared with a variety of imaging methods to determine the presence of PAD, the diagnostic performance of the ABI varies according to the population studied, the cutoff threshold, and the technique used to detect flow in the ankle arteries. Table I in the online-only Data Supplement summarizes these disparities and provides diagnostic performances.2,3,28,37–55 The sensitivity and specificity of the ABI with the Doppler technique range from 0.17 to 1.0 and from 0.80 to 1.0, respectively. Lower sensitivities (0.53–0.70) are reported in diabetic patients.43,47,48 The sensitivities and specificities of the ABI measured with oscillometric methods vary from 0.29 to 0.93 and from 0.96 to 0.98, respectively. The overall diagnostic ability may be provided by the receiver-operating characteristic (ROC) curves. The reported areas under the ROC curve are higher for ABI measured by Doppler (0.87–0.95) than that measured with the oscillometric method (0.80–0.93; Table 1).38,42,48,50 Studies used to determine the accuracy of the ABI generally included severe cases of PAD in which arterial imaging was performed after initial ABI measurements were found to be abnormal. 
To avoid verification bias, Lijmer et al38 estimated the corrected area under the curve of the Doppler ABI to diagnose >50% angiographic stenosis as very satisfactory (0.95±0.02). Diagnostic performance was higher for detecting proximal compared with distal lesions. Using the plethysmographic method to detect flow, 1 study49 reported a specificity of 0.99 but a sensitivity of 0.39, and only about half the participants in that study had isolated occlusive disease of the posterior tibial (PT) artery. Open in Viewer Table 1. The Diagnostic Performances of the Ankle-Brachial Index Versus Other Methods: Receiver-Operating Characteristic Curve Analysis | Authors, Year | Population Study | Gold Standard | Method for ABI Measurement | Area Under the Curve | | --- | --- | --- | --- | --- | | Lijmer et al,38 1996 | 441 Patients (PAD suspicion) | Angiography limited to 53 patients | Doppler | Entire limb ≥50% stenosis: 0.95 (0.02) | | | | Criteria: ≥50% or occlusion | (Higher ankle artery pressure/higher brachial pressure) | Occlusion: 0.80 (0.05) Aortoiliac ≥50% stenosis: 0.69 (0.05) | | | | | | Occlusion: 0.83 (0.05) | | | | | | Femoral-popliteal ≥50% stenosis and occlusion: 0.77 (0.04) | | | | | | Infrapopliteal ≥50% stenosis: 0.59 (0.06) | | | | | | Occlusion: 0.57 (0.07) | | Parameswaran et al,42 2005 | 57 Type 2 diabetics with no clinical evidence of PAD | Doppler waveform analysis | Doppler (PT or DP if PT absent/high) | 0.88 (0.80–0.96) | | Guo et al,50 2008 | 298 Patients (cardiology), PAD in 7% | Angiography: 50% stenosis | Oscillometry | 0.93 (0.87–0.96) | | Clairotte et al,48 2009 | 146 Patients (296 limbs), vascular laboratory (diabetes group, 83) | Color duplex | Doppler and oscillometry | Doppler: 0.87Oscillometric: 0.81 (P=0.006) | Expand Table ABI indicates ankle-brachial index; PAD, peripheral artery disease; PT, posterior tibial; and DP, dorsalis pedis. Data on the optimal ABI threshold for the diagnosis of PAD are scarce, with different criteria having been used to determine the optimal ABI cutoff value (Table 2).28,38,40,45,48,50,56,57 In older studies, the lower limit of the 95% confidence interval (CI) ranged from 0.85 to 0.97. Subsequent studies using the ROC curve recommended a threshold value of either 0.97 or 0.92.41,45,56 Clairotte et al48 reported a cutoff value between 1.00 and 1.04 for people with and without diabetes mellitus, with slightly higher values recommended for the oscillometric method than the Doppler technique. Serial ABI measurements can influence the optimal threshold value for detecting PAD. In a study based on ROC curve analysis, Stoffers et al28 proposed a cutoff value of 0.97 for a single measurement and of 0.92 for 3 measurements. They argued that the optimal cutoff might be influenced by population characteristics and disease prevalence.28 From a bayesian perspective, the optimal cutoff for identifying PAD patients depends on the pretest probability of PAD. The pretest probability is based on multiple clinical parameters, including the presence, characteristics, and intensity of symptoms; the presence of CVD risk factors; and other information derived from the medical history and physical examination. Although an ABI ≤0.90 remains the most common and consensual threshold, this value should not be considered a binary marker for the diagnosis of PAD. 
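The bayesian point can be illustrated with a short sketch (the helper name is hypothetical, and the sensitivity and specificity values are placeholders chosen within the ranges quoted above, not figures endorsed by the statement): the same abnormal ABI moves a low and a high pretest probability to very different posttest probabilities.

```csharp
using System;

static class PretestPosttest
{
    // Posttest probability after a positive test, from pretest probability,
    // sensitivity, and specificity, via the positive likelihood ratio.
    static double PosttestProbability(double pretest, double sensitivity, double specificity)
    {
        double lrPlus = sensitivity / (1.0 - specificity);
        double pretestOdds = pretest / (1.0 - pretest);
        double posttestOdds = pretestOdds * lrPlus;
        return posttestOdds / (1.0 + posttestOdds);
    }

    static void Main()
    {
        // Placeholder test characteristics for an ABI <= 0.90 (illustrative only).
        double sens = 0.75, spec = 0.95;
        Console.WriteLine(PosttestProbability(0.05, sens, spec)); // low pretest (~5%)  -> ~0.44
        Console.WriteLine(PosttestProbability(0.60, sens, spec)); // high pretest (~60%) -> ~0.96
    }
}
```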
Eight studies assessed the diagnostic performances of an ABI ≤0.90 (Doppler method) to detect >50% stenosis identified by imaging methods, including color duplex ultrasound,37,43,44,46 magnetic resonance angiography,34 or angiography (Table I in the online-only Data Supplement).38,39,50 All these studies found reasonably high specificity (83%–99%) but lower sensitivity (69%–79%, except 1 outlier51 reporting 20% sensitivity). With an ABI ≤1.0 used as a threshold for detecting PAD, sensitivities as high as 100% have been reported.2,52 Yet, ABI should be interpreted according to the a priori probability of PAD, and values between 0.91 and 1.00 should be considered borderline. For example, for a 47-year-old woman with atypical calf pain, no history of CVD or risk factors, and an ABI of 0.91, the probability of PAD is low; however, the probability of PAD is high for a man with classic intermittent claudication who smokes and whose ABI is 0.96. Thus, clinical judgment is important when interpreting the ABI results. The sensitivity of the ABI can be significantly increased when it is measured immediately after treadmill exercise. Open in Viewer Table 2. Studies Assessing Optimal Ankle-Brachial Index Cutoff for the Diagnosis of Peripheral Artery Disease | Authors, Year | Study Population | Method for Determination of Optimal ABI | Optimal ABI Cutoff Proposed | | --- | --- | --- | --- | | Carter,56 1969 | Inpatients: 202 diseased limbs, 86 control subjects | 95% Confidence limit for limbs without PAD | 0.97 | | Sumner and Strandness,45 1979 | 48 Control subjects | Normal minus 2 SD (1.08±0.08) | 0.92 | | Bernstein et al,57 1982 | Patients with angiographically significant PAD | 95% Confidence limit for limbs without PAD | 0.85 | | Ouriel et al,40 1982 | 218 PAD patients (56 limbs not tested, 247 limbs with claudication, 58 with rest pain, ulcers, or gangrene), 25 control subjects (<30 y old, no RF, triphasic Doppler waveforms) | ROC curve analysis | 0.97 | | Stoffers et al,28 1996 | Community and vascular laboratory | ROC curve analysis | 0.97 (If pretest probability 33%) 0.92 (If pretest probability 50%) | | Lijmer et al,38 1996 | 441 Inpatients (PAD suspicion) | ROC curve analysis | 0.98 (Corrected) | | Guo et al,50 2008 | 298 Inpatients, cardiology PAD prevalence (angiography): 7% | ROC curve analysis | 0.95 | | Clairotte et al,48 2009 | 146 Patients (296 limbs) undergoing color duplex (diabetes group, 83), PAD prevalence: 33% non–diabetes mellitus, 27% diabetes mellitus | ROC curve analysis | 1.00 (1.04 in the absence of diabetes mellitus) | Expand Table ABI indicates ankle-brachial index; PAD, peripheral artery disease; RF, radiofrequency; and ROC, receiver-operating characteristic. Postexercise ABI With leg exercise, systolic pressure increases in the central circulation, as measured in the arms, concordant with an increase in left ventricular systolic pressure. Peripheral vasoconstriction occurs in nonexercising limbs and other organs, whereas it decreases at the ankle owing to vasodilation in exercising muscle. 
This leads to a mild decrease in the ABI in healthy patients when measured immediately after exercise cessation.41,58 The ankle pressure then increases rapidly and reaches the pre-exercise values within 1 to 2 minutes.58,59 In the case of even moderate occlusive PAD (typically in the proximal vessels), the ankle pressure decreases more during treadmill exercise compared with healthy patients, and the recovery time to the pre-exercise value after exercise cessation is prolonged, proportional to the severity of PAD.40,58–60 The ABI recovery time also is affected by the duration of exercise.61 Ouriel et al40 reported an average ABI decrease of 5% from resting to postexercise values after treadmill exercise in healthy people compared with 20% in patients with PAD. A recovery of at least 90% of the ABI to baseline value within the first 3 minutes after exercise was found to have a specificity of 94% to rule out PAD. Compared with angiography, the ROC curves of ABI at rest and after exercise were comparable for the detection of PAD.40 Augmentation of the ankle-brachial pressure gradient after exercise improves the sensitivity of the ABI to detect PAD, especially for borderline ABI values (0.91–1.00). Laing and Greenhalgh60 proposed an absolute decline of 30-mm Hg ankle pressure for the diagnosis of PAD according to the 95% CI of the change in ankle pressure change after 1 minute of treadmill exercise in a study of healthy subjects. Others62 reported 33% sensitivity and 85% specificity for a postexercise ABI <0.90- and/or >30-mm Hg drop in ankle pressure after exercise. Diagnostic criteria for postexercise ABI should also take into account the reproducibility of this measurement (see below). A challenge for establishing diagnostic criteria for the postexercise ABI is the heterogeneity of exercise protocols. Although treadmill testing requires specific equipment, an alternative method, the active pedal plantar flexion technique, has been proposed for an office-based assessment of postexercise ABI.63,64 This technique consists of repetitive active plantar flexion (heel raising) while standing, with an excellent correlation between ABI obtained after this method compared with treadmill exercise in claudicants.63,64 Abnormally High ABI In some cases, the ankle artery is incompressible and the systolic pressure at that location cannot be measured despite cuff inflation >250 mm Hg. In other cases, the ankle artery systolic pressure is measurable but is much higher than the brachial artery systolic pressure, leading to an ABI that exceeds the normal range. These situations are related to calcification of the arterial wall and may occur in patients with medial calcinosis, diabetes mellitus, or end-stage renal disease. Vascular calcification does not imply that occlusive lesions are present, although these 2 conditions frequently coexist. When vascular calcification is present, however, stenotic disease cannot be detected by the ABI.65,66 Other noninvasive tests such as measurement of the toe-brachial index or analysis of the Doppler waveform enable detection of occlusive disease despite a falsely high ABI. Measurement of the toe-brachial index is useful in such circumstances because the digital vessels rarely develop calcification and can provide an accurate determination of vascular disease in this setting. 
With these alternative tests, the rates of coexistent peripheral artery occlusive disease in patients with high ABIs range from 60% to 80%.65,66 ABI and Monitoring Patients With PAD ABI as a Marker of PAD Progression. The natural history of PAD includes a decrease in the ABI over time. In a series of patients assessed in a vascular laboratory,67 the ABI decreased by a mean of 0.06 over 4.6 years. A smaller ABI change (0.025 decrease over 5 years) was reported in the general population.23 Nicoloff et al68 defined PAD progression as a decrease in ABI of >0.15, a condition observed at 3 and 5 years in 19% and 37% of their vascular laboratory patients, respectively. Among patients with intermittent claudication followed up for a mean period of 2.5 years, Cronenwett et al69 found no correlation between baseline ABI and clinical outcome of the limb, whereas an ABI decrease of at least 0.15 was associated with an increased risk for bypass interventions (2.5-fold) and symptom progression (1.8-fold). In the absence of revascularization, an ABI decrease is correlated with clinical deterioration. Clinical improvement in terms of an increased walking distance, however, is not correlated with an ABI increase.70 The level of ABI (and the corresponding ankle pressure) is useful to predict limb outcomes. An ankle pressure <50 mm Hg is associated with higher risk for amputation.71 An increased risk of amputation has been reported when the ABI is <0.50 in nonrevascularized patients with leg ulcers.72 An ABI ≤0.90 is strongly associated (odds ratio: 8.2) with a 7-year risk of amputation in people with diabetes mellitus.73 Several studies reported greater accuracy of the ankle pressure per se, rather than the ABI, to predict the clinical prognosis of the limb.41,74–76 From a clinical perspective, PAD may not progress in a parallel manner in both limbs, so it is necessary to assess the ABI in both limbs during follow-up. ABI and Monitoring Patients After Revascularization. The ABI change correlates poorly with improvement in symptoms or functional performance. After angioplasty, an ABI increase of 0.10 and 0.15 in the revascularized limb predicted no residual stenosis >50% with sensitivities of 79% and 67% and specificities of 92% and 100%, respectively.77 The ABI may continue to improve from that measured in the immediate postoperative period for several weeks or months after revascularization.3,78–80 The accuracy of the ABI in predicting revascularization failure is poor, as shown in Table II in the online-only Data Supplement,77,81–87 because the ABI is a global estimator of whole-limb perfusion and cannot distinguish between graft failure and progression of PAD in native arteries. The ABI is not site specific and may reflect changes elsewhere in the arterial tree. Considering its low sensitivity for predicting graft failure, the measurement of the ABI alone is not a reliable method of surveillance after revascularization. The ABI and Functional Impairment and Decline Compared with individuals without PAD, those with PAD have poorer walking endurance, slower walking velocity, and lower physical activity levels.88–91 A thorough medical history is an important means for assessing the degree of functional impairment in men and women with PAD. 
However, some PAD patients restrict their physical activity to avoid exertional leg symptoms88; therefore, patient report of symptoms cannot be construed as a reliable measure of the degree of functional limitation.92 Several studies have demonstrated that in cohorts including men and women with and without PAD, lower ABI values are associated with greater functional impairment or faster functional decline compared with higher ABI values.5,89,90,92 The Walking and Leg Circulation Study (WALCS) cohort further demonstrated that even individuals with borderline baseline ABI values (0.91–0.99) and those with low-normal ABI values (1.00–1.09) had significantly higher rates of mobility loss than participants with a baseline ABI of 1.10 to 1.30.5 The association of lower ABI values with greater functional impairment in cohorts restricted to men and women with PAD is less consistent. Several studies that included only PAD participants reported that lower ABI values are not associated with greater functional limitations.93–95 These prior studies were limited by small sample sizes, by exclusion of functional measures other than treadmill walking performance, and by exclusion of participants without classic symptoms of intermittent claudication.93–95 In other studies of patients with PAD, both with and without intermittent claudication symptoms, strong and independent associations of lower ABI values were observed with poorer 6-minute walk performance, slower walking velocity at usual and fastest pace, greater limitation in maximum treadmill walking performance, and lower Walking Impairment Questionnaire distance score.90,96,97 No prospective studies in cohorts restricted to patients with PAD have demonstrated that lower ABI values are associated with a faster decline in functioning. However, it is important to point out that characteristics contributing to functional impairment and decline in people with PAD are multifactorial and include muscle size and composition, inflammation, lower-extremity strength, mitochondrial function, and behavioral factors.98–102 Therefore, the ABI is just one of many characteristics associated with functional impairment and decline in patients with PAD. ABI: A Marker for CVD Risk and Events ABI: A Marker of Cardiovascular Risk and Atherosclerosis Association of Low ABI With Cardiovascular Risk Factors and Prevalent Disease. The ABI serves as a measure of systemic atherosclerosis and thus is associated with both atherosclerotic risk factors and prevalent CVD in other vascular beds. A low ABI is associated with many cardiovascular risk factors, including hypertension, diabetes mellitus, dyslipidemia, smoking history, and several novel cardiovascular risk factors (eg, C-reactive protein, interleukin-6, homocysteine, and chronic kidney disease).30,103–105 The majority of studies use an ABI of 0.90 as a threshold to define PAD and use Doppler for ABI measurement. Therefore, it is not known whether the strength of the associations between low ABI and cardiovascular risk factors differs with alternative measurement methods and thresholds of ABI. 
Some studies have shown a graded inverse association of CVD risk factors across ABI thresholds.103,106 A strong and consistent relationship between low ABI and prevalent coronary artery disease and cerebrovascular disease has been demonstrated in several population-based cohort studies that included individuals with existing CVD.29,103,104,107,108 The strength of the relationship between low ABI and coronary artery disease varies, depending on the underlying risk of the population studied. In most studies, odds ratios range from 1.4 to 3.0, with 1 study reporting the association to be as high as 9.3 in individuals with type 1 diabetes mellitus.103,109–111 The prevalence of coronary artery disease among PAD patients ranges from 10.5% to 71% compared with 5.3% to 45.4% among subjects without PAD. Low ABI is also associated with prevalent cerebrovascular disease, with odds ratios in the range of 1.3 to 4.2 among 9 studies.29,104,111–113 The majority of these studies use Doppler to measure ABI and 0.90 as a threshold for defining PAD. Whether the association of low ABI with prevalent CVD would differ with alternative measurement methods or definitions is unknown. There is little information to determine whether the associations of abnormal ABI and CVD differ by sex. In the ARIC study,29 the association of low ABI and coronary artery disease was strong in both men and women, but there was no association of low ABI with stroke in women despite a strong association reported in men. In a Spanish study, low ABI was associated with coronary artery disease in both men (odds ratio, 2.1) and women (odds ratio, 3.3).114 Association of High ABI With Cardiovascular Risk Factors and Prevalent Disease. Few studies have evaluated the association of an abnormally high ABI, indicative of vascular calcification, with cardiovascular risk factors or with prevalent CVD. High ABI is associated directly with male sex, diabetes mellitus, and hypertension but is inversely associated with smoking and hyperlipidemia.66,115 Allison et al115 demonstrated an ABI >1.40 to be associated with stroke and congestive heart failure but not with myocardial infarction or angina. In MESA, high ABI was associated with incident CVD.116 Other studies have reported inconsistent results.117–119 ABI and Risk of Future Cardiovascular Events The ABI is a measure of the severity of atherosclerosis in the legs but is also an independent indicator of the risk of subsequent atherothrombotic events elsewhere in the vascular system. The ABI may be used as a risk marker both in the general population free of clinical CVD and in patients with established CVD. In the general population, cardiovascular risk equations incorporating traditional risk factors such as age, sex, cigarette smoking, hypercholesterolemia, hypertension, and diabetes mellitus have been used to predict future risk of events.120 These predictive scores, however, have limited accuracy,121 leading to the evaluation of other risk predictors such as C-reactive protein122 or measures of subclinical atherosclerosis such as coronary artery calcium,123 used alone or in combination with traditional risk factors. More precise identification of high-risk individuals may permit appropriate targeting of aggressive risk reduction therapies, although this strategy has not been properly evaluated. 
The ABI has been investigated as a risk predictor in several population-based cohort studies, mostly in Europe124–127 and North America.106,107,128–130 These studies have consistently found that a low ABI is associated with an increased risk of myocardial infarction, stroke, and both total and cardiovascular-related mortality. Furthermore, the increased risks are independent of established CVD and risk factors at baseline, suggesting that the ABI, as an indicator of atherosclerosis, might enhance the accuracy of risk prediction with established scoring systems.6 The ABI Collaboration performed an individual-based meta-analysis of 16 population cohorts to investigate in a large data set whether the ABI provided information on the risk of cardiovascular events and mortality independent of the Framingham Risk Score (FRS) and might improve risk prediction when combined with the FRS.6 An ABI ≤0.90 was associated with approximately twice the age-adjusted 10-year total mortality, cardiovascular mortality, and major coronary event rate compared with the overall rate in each FRS category. Use of the ABI resulted in reclassification of the risk category in both men and women.6 In men, the greatest incremental benefit of ABI for predicting risk was in those with an FRS >20%; a normal ABI, found in 43% of cases, reclassified them to the intermediate-risk category. Conversely, 9% of women at low (<10%) or intermediate (10%–19%) risk estimated by the FRS presented abnormal ABI (<0.90 or >1.40) and were reclassified as high risk. Since this meta-analysis, a recent report from MESA presented consistent data in different ethnic groups in the United States.116 Thus, a low or high ABI is associated with increased cardiovascular risk, and the risk prediction extends beyond that of the FRS alone.6,116 Further work is warranted to refine these results and to establish whether the ABI is of more value in certain subgroups in the population. Additional analyses are encouraged to use several recent metrics assessing the improvement of CVD risk prediction with the ABI. Specifically, criteria such as discrimination, calibration, and net reclassification improvement are awaited. Although an ABI cut point of 0.90 is used in many studies to identify high-risk individuals, the ABI Collaboration confirmed that the risk increases as the ABI decreases below a threshold of 1.10 (Figure 1).6 Clinical risk prediction could conceivably benefit from using ABI categories rather than 1 cut point for high risk. Individuals with a high ABI >1.40 are also at increased risk. Thus, the graph of mortality or other cardiovascular outcome by ABI level is a reverse J-shaped curve in which the lowest level of risk (normal) is from 1.11 to 1.40 (Figure 1).6 One explanation for an increased risk associated with a high ABI is that a high ABI caused by calcified arteries is associated frequently with occlusive PAD.131 Open in Viewer Figure 1. Hazard ratios for total mortality in men and women by ankle-brachial index at baseline for all studies combined in the ABI Collaboration. Reproduced from Fowkes et al6 with permission from the publisher. Copyright © 2008, American Medical Association. 
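As a rough illustration of the reverse J-shaped relationship described above, the sketch below maps an ABI value onto the risk bands discussed around Figure 1 (the function and the wording of the labels are illustrative conveniences, not clinical diagnostic categories from the statement).

```csharp
using System;

static class AbiRiskBands
{
    // Bands as described around Figure 1: risk rises below about 1.10 and again above 1.40,
    // with the lowest (reference) risk between 1.11 and 1.40.
    static string RiskBand(double abi)
    {
        if (abi <= 0.90) return "low ABI - increased cardiovascular risk";
        if (abi <= 1.10) return "borderline / low-normal - risk already rising";
        if (abi <= 1.40) return "reference range - lowest risk";
        return "high ABI (calcified, poorly compressible arteries) - increased risk";
    }

    static void Main()
    {
        foreach (var abi in new[] { 0.72, 0.95, 1.25, 1.55 })
            Console.WriteLine($"{abi:F2}: {RiskBand(abi)}");
    }
}
```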
Patients with established CVD who also have a low ABI are at higher risk compared with patients with CVD who have a normal ABI.132–134 This is consistent with the observation that in patients with evidence of disease in >1 vascular bed, the 3-year vascular event rate is >60% higher than in those with disease in only 1 vascular territory.135 The magnitude of the increased risk associated with a low ABI would appear to be slightly less for those with known CVD than the 2- to 3-fold increased relative risk in healthy individuals. In the Heart Outcomes Prevention Evaluation (HOPE) study of patients with coronary heart disease, stroke, or diabetes mellitus, ABIs in the range of 0.60 to 0.90 were associated with a risk ratio for future nonfatal myocardial infarction of 1.4, nonfatal stroke of 1.2, and cardiovascular mortality of 1.6 compared with higher ABIs.135 In patients with prior CVD, the Cardiovascular Health Study found that those with a low ABI of ≤0.90 had an increased risk of congestive heart failure (risk ratio, 1.3) and cardiovascular mortality (risk ratio, 1.5).107 These increased risks were independent of established cardiovascular risk factors. Furthermore, in patients with PAD, not only is a low ABI associated independently with an increased risk of cardiovascular morbidity and mortality, but a decrease in ABI of >0.15 over time is associated with a 2-fold increase in mortality independently of the absolute ABI level.136 Thus, risk of vascular events in cardiovascular patients with a low or declining ABI is higher than in those with a normal ABI. The postexercise ABI is also predictive of risk. In the case of a normal ABI at rest, the presence of an abnormal ABI after exercise is associated with increased mortality.137 The Use of ABI in Primary Care As one of the least expensive and most available markers of atherosclerosis, the ABI is a highly appropriate measurement for CVD risk assessment in primary care. In the PAD Awareness, Risk, and Treatment: New Resources for Survival (PARTNERS) study, several barriers to the use of the ABI in the primary care, including time constraints, reimbursement, staff availability, and staff training, were identified.138 Yet, in this study, the time needed for ABI measurement was <15 minutes.138 In a Dutch study, which included 955 general practices, the time needed for an ABI measurement varied between 12 and 20 minutes (average, 17 minutes).139 The lack of reimbursement for ABI measurement is a hurdle for its broader use in general practice. The standardized ABI measurement proposed in this document has very good test characteristics for the diagnosis of PAD and should be considered for appropriate reimbursement. Conditions for the Measurement of the ABI The Patient Body position and knee or hip flexion influence the ABI.140 Gornik et al141 showed that arm pressure is not different in the sitting and supine positions when the arm is kept at heart level. These positions affect ankle pressure because the ankle is lower than the heart in the seated but not in the supine position, and consequently, the pressure is higher. The ABI averages 0.35 higher in the seated than in the supine position. Therefore, patients should be lying flat for an accurate ABI measurement, with the head and heels fully supported, ie, not hanging over the end of the examination table. Gornik et al141 recommended a formula to correct the seated ABI (under standardized conditions) in patients who cannot lie down. However, no external validation of this formula is available. 
The effect of the duration of the rest period on the reliability of ABI measurement is unknown. The length of the rest period before performing the ABI measurement has varied among studies,10 with most studies using a 5- to 10-minute period. Longer delays are impractical in the clinical setting. Even after a resting period, the first limb measurement tends to provide higher systolic pressures during a sequential (limb by limb) measurement. Smoking cigarettes also may affect the ABI. Smoking 10 minutes before the measurement significantly decreases the ABI (−0.09) compared with the ABI measured after 12 hours of smoking abstinence.142 The effect on the ABI was specifically related to a decrease in ankle pressures without a corresponding change in brachial artery pressure.142 The Cuff Studies of brachial blood pressure measurement highlight the importance of an appropriate cuff size to avoid inaccurate measurements.143,144 Comparable information is not available on the size of the ankle cuff. If the same concept of cuff size used for the arm is applied to that of the ankle, the width of the cuff should be at least 40% of the limb circumference.144 The cuff should always be clean and dry. The cuff wrapping method (spiral or parallel) affects the ankle SBP, with lower values occurring with the spiral cuff wrapping method.145 In a comparative study, similar intraobserver reproducibility was observed between both wrapping methods when an automated cuff was used, but a slightly better intraobserver reproducibility was observed for the spiral wrap when a manual cuff inflation was used with the Doppler technique.145 Takahashi et al146 found good correlation of parallel and spiral wrapping with intra-arterial pressure, similar intraobserver variability with both wrapping methods, but better interobserver variability with parallel wrapping. Given these data and the fact that the straight method is used to assess arm blood pressure, parallel wrapping is also preferred for the ankles. Although the measurement of the ABI by a pressure cuff is noninvasive, safe, and well tolerated in most circumstances, cuff inflation should be interrupted if it is painful. Caution is advised in 2 clinical situations. Direct apposition of the ankle cuff over open wounds and ulcers should be avoided or prevented by an impermeable dressing. In addition, cuff inflation should be avoided over a recently placed bypass graft because of the potential risk of causing graft thrombosis. The Measurement of the ABI Methods of Pressure Measurement Several noninvasive techniques are used to detect limb flow or pulse volume for measuring the ABI, primarily Doppler ultrasound and oscillometric methods. The former uses a continuous-wave Doppler probe for detection of arterial flow (Figure 2). The SBP is determined with a pneumatic cuff, which is first inflated until flow ceases and then deflated slowly until there is reappearance of the flow signal. The corresponding cuff pressure is the SBP. The oscillometric technique is based on the assumptions that the maximum oscillations appearing during cuff deflation correspond to the mean arterial pressure and that SBP and diastolic blood pressure can be calculated from this pressure with mathematical algorithms. These algorithms, based on empirical data from healthy subjects, were originally developed to measure arm blood pressure. The validation studies for oscillometric methods48,145,147–174 are summarized in Table III in the online-only Data Supplement. 
Some studies, but not others, have questioned the validity of the oscillometric method for the detection of PAD.145,155,175–177 The correlation between Doppler-derived and oscillometry-determined ankle pressures and ABIs in healthy subjects or subjects with mild PAD has been acceptable in most studies151,152,155,156,162,178 with 1 exception.164 However, when the ABI determined by the Doppler method is in the low range, the oscillometric method results in an overestimation of the actual pressure value,148,155,156,160,161,165,179 as illustrated in Figure 3 .156 In addition, most oscillometric blood pressure devices are unable to detect low pressures, eg, <50 mm Hg148 or even 80 mm Hg,178 and as a consequence, recording failures are frequent (from 11%161 to 44%178) in patients with advanced PAD.153,158,160,161,178,179 The sensitivity (67%–97%) and specificity (62%–96%) of the ABI measured with oscillometry compared with the Doppler method have been reported in multiple studies (Table III in the online-only Data Supplement).48,145,147–174 Bland-Altman plots were used in several studies to assess the agreement between the Doppler and oscillometric techniques.48,147,149,152–155,162,164,176,178 The limits of agreement (±2 SD) for the ABI were 0.25149 and 0.23158 in 2 studies in which it was calculated. In a third study, the limit of agreement of the ankle pressure in non-PAD subjects was ±20 mm Hg but more than ±70 mm Hg in patients with PAD.155 The 95% CI of the difference between the 2 methods in 2 additional studies varied from −0.19 to 0.14164 and −0.18 to 0.35,176 respectively. Open in Viewer Figure 2. Ankle pressure measurement with a Doppler probe: posterior tibial (A) and dorsalis pedis (B) arteries. Open in Viewer Figure 3. Difference between ankle pressures measured with an oscillometric device (CASMED 740) and Doppler (y axis) according to the ankle pressure bands obtained with Doppler (x axis). In the box plot, the line indicates median percentiles and outer markers indicate 5% and 95% percentiles. Reprinted from Korno et al156 with permission from the publisher. © Copyright 2009, Elsevier. Other methods used to measure ABI include plethysmography,180 photoplethysmography,169,173,174 auscultation,146 and pulse palpation.147,171 Strain-gauge plethysmography is not suitable for use in most settings other than a vascular laboratory. The photoplethysmography method, in which a sensor is placed on the great toe to detect flow after cuff deflation, correlated well with Doppler in several series of patients with PAD.169,173,174 However, the reproducibility of this method has not been reported. In 1 series, the limits of agreement (±2 SD) for the differences compared with the Doppler method ranged from −0.23 to 0.24.169 In addition, photoplethysmography of the toe is affected by temperature. A cool environment causes digital vasoconstriction. A laser Doppler probe placed on the dorsum of the foot to detect flow was used for ABI measurements in 1 study.170 The mean difference compared with Doppler was negligible, but agreement and reproducibility were not reported. Measurement of ABI using auscultation with a stethoscope was assessed in a Japanese study.146 Korotkoff sounds, however, are not always audible in the ankles (inaudible in ≈40% of cases), and there is an unacceptable difference in ankle pressures determined by this method compared with Doppler (−15.2 mm Hg). 
Compared with Doppler, pulse palpation to measure the ABI has a sensitivity of 88% and a specificity ranging from 75% to 82%.147,171 The palpation method underestimates (−0.14) the ABI compared with the Doppler method.147 Are the Different Methods of ABI Measurement Similarly Reproducible? Several studies assessed the intraobserver and interobserver reproducibilities of the ABI, with mixed findings (Tables IV and V in the online-only Data Supplement).40,147,156,162,165,181–196 Direct comparisons of studies are difficult because different statistical approaches were used or because of methodological limitations (eg, small samples of observers or patients, selective inclusion of symptomatic PAD patients). The intraobserver coefficient of variation (CoV) of the ABI with the Doppler method varies widely in the literature, from 4.7%189 to 13.0%28 (on average, ≈10%). Overall, these results are superior to those obtained with an automated oscillometric method, which has a CoV ranging from 5.1%156 to 20.2%.185 This general observation is confirmed by 2 comparative studies147,184 but has been challenged recently by Richart et al,165 who used a 4-cuff oscillometric device. The palpation method has poor reproducibility (CoV, 23%).147 Similarly, the intraobserver and interobserver reproducibilities are poorer for the auscultation than for the Doppler method.197 No reproducibility data are available for the plethysmographic method. The interobserver variability has been studied extensively for the Doppler method, but there are few data for other methods.147,156,177,181,182,184,188,190–196,198 The interobserver variability of the oscillometric method has been assessed only in the ARIC study, showing a CoV of 11%.184 All other studies (Table V in the online-only Data Supplement)147,156,177,181,182,184,188,190–196 used the Doppler method, with CoVs varying from 5.4% to 24% (mean, 13%). The ABI measured by Doppler in all limbs showed significantly better reproducibility than the 2 alternative methods of using a stethoscope or an oscillometry for the arms.194,195 Considering the evidence, Doppler appears to be the most reliable method to determine the ABI. The Examiner's Experience Several studies reported higher ABI reproducibility when measured by skilled examiners.183,199 Endres et al200 found no systematic bias between examiners from 3 distinct occupational groups with diverse training backgrounds, but all the examiners were well trained to measure the ABI. In patients with critical limb ischemia, comparison of ABIs obtained by inexperienced physicians and skilled vascular technicians revealed a higher interobserver difference for the former, especially when the dorsalis pedis (DP) artery was used.75 The ABI is more reproducible in “nonexpert” hands for healthy people compared with patients with PAD.188 Overall Reliability and Reproducibility of ABI Measurement The confidence of any particular point estimate of the “true” ABI depends on the number of measurements. Theoretically, the 95% CI is reduced by the square root of the number of measurements. As an illustration, in the ARIC ABI reliability study, the actual ABI value after 1 measurement could be the point estimate ±0.21.190 Considering this, the CI for an ABI based on the average of 2 visits would be ±0.15; it would be ±0.12 if based on 3 measures. 
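Stated as a worked equation, the square-root scaling behind those figures is simply

$$
\mathrm{CI}_n \approx \frac{\mathrm{CI}_1}{\sqrt{n}}, \qquad
\frac{\pm 0.21}{\sqrt{2}} \approx \pm 0.15, \qquad
\frac{\pm 0.21}{\sqrt{3}} \approx \pm 0.12,
$$

so averaging repeated measurements narrows the confidence interval quickly for the first two or three repeats and only slowly thereafter.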
For a given method of ABI measurement and calculation, Fowkes et al175 reported several factors that contribute to the within-subject ABI variability, including the interactions among the subject, the subject's leg (right versus left), the observer, and the delay between measurements. However, the variability resulting from these interactions is considered trivial compared with the greater ABI variability between different subjects. The variability of ankle pressures was found to be similar to that of arm pressures in 3 reports,2,181,189 whereas in 5 other studies,157,166,167,196,201 a better reproducibility of the arm pressures was reported. Overall, data demonstrate that the ABI is a valid biological parameter.181 Nevertheless, establishing the ABI method with the best reproducibility is warranted to keep the single measurement error to a minimum and to improve the ability of repeated ABI measurements over time to detect an actual change in PAD severity. In addition to methodological aspects and variability of the measurements in different laboratories,202 the CoV depends on the average ABI of the population studied (Figure I in the online-only Data Supplement), with a better ABI reproducibility in healthy people than in those with PAD. At an individual level, the size and direction of change between 2 ABI measurements do not vary with the average ABI.190,193 However, Osmundson et al194 and Fowkes et al175 reported lower variability in healthy subjects compared with PAD patients. Additionally, in patients with critical limb ischemia, significantly higher interobserver variability occurs in those with an ABI <0.50 than in those with an ABI >0.50.75 Data on postexercise ABI variability are scarce. In 20 patients with intermittent claudication, the interobserver variability for the ABI at rest and after exercise was 10% and 21%, respectively.196 Similarly, the intraobserver variability was higher for the ABI measured after exercise than for that measured at rest.40 The specific steps for an adequate measurement of the ABI are summarized in Table 3. Open in Viewer Table 3. Limb Pressure Measurement Protocol for the Determination of the Ankle-Brachial Index With the Doppler Method The patient should be at rest 5 to 10 min in the supine position, relaxed, head and heels supported, in a room with comfortable temperature (19°C–22°C/66°F–72°F). The patient should not smoke at least 2 hours before the ABI measurement. The cuff should be chosen adequately according to the limb size. The width should contour at least 40% of the limb circumference. The cuff should not be applied over a distal bypass (risk of thrombosis) or over ulcers. Any open lesion posing potential contamination should be covered with an impermeable dressing. The patient should stay still during the pressure measurement. If the patient is unable to not move his/her limbs (eg, tremor), other methods should be considered. Similar to the brachial blood pressure measurement, the cuff should be placed around the ankle using the straight wrapping method. The lower edge of the cuff should be 2 cm above the superior aspect of the medial malleolus (Figure 2). An 8- to 10-MHz Doppler probe should be used. Doppler gel should be applied over the sensor. After the Doppler device is turned on, the probe should be placed in the area of the pulse at a 45° to 60° angle to the surface of the skin. The probe should be moved around until the clearest signal is heard. 
The cuff should be inflated progressively up to 20 mm Hg above the level of flow signal disappearance and then deflated slowly to detect the pressure level of flow signal reappearance. The maximum inflation is 300 mm Hg; if the flow is still detected, the cuff should be deflated rapidly to avoid pain. The detection of the brachial blood flow during the arm pressure measurement should also be done by Doppler. The same sequence of limb pressure measurements should be used. The sequence should be the same for clinicians within a same center. During the sequence of measurement, the first measurement should be repeated at the end of the sequence and both results averaged to temper the white coat effect of the first measurement, except if the difference between the 2 measurements of the first arm exceeds 10 mm Hg. In that case, the first measurement should be disregarded and only the second measurement should be considered. For example, when the counterclockwise sequence—right arm, right PT, right DP, left PT, left DP, left arm—is used, the measurement of the right arm should be repeated at the end of the sequence and both results obtained at the right arm should be averaged unless the difference between the 2 measurements of the right arm exceeds 10 mm Hg. In this case, only the second measurement of right arm pressure should be considered. In case of repeat measurement of the 4 limb pressures (see indications in the text), the measurements should be repeated in the reverse order of the first series (eg, in the case of the initial counterclockwise sequence [right arm, right PT, right DP, left PT, left DP, left arm, right arm], the clockwise sequence should be used, starting and ending with the left arm). Expand Table ABI indicates ankle-brachial index; PT, posterior tibial; and DP, dorsalis pedis. Recommendations for the Measurement of the ABI 1. The Doppler method should be used to measure the SBP in each arm and each ankle for the determination of the ABI (Class I; Level of Evidence A).38,42,48,50,147,156,165,181–189 2. The cuff size should be appropriate with a width at least 40% of the limb circumference (Class I; Level of Evidence B).143,144 3. The ankle cuff should be placed just above the malleoli with the straight wrapping method (Class I; Level of Evidence B).146 4. Any open lesion with the potential for contamination should be covered with an impermeable dressing (Class I; Level of Evidence C). 5. The use of the cuff over a distal bypass should be avoided (risk of bypass thrombosis) (Class III harm; Level of Evidence C). Standard Calculation of the ABI The Denominator (Arm) The highest SBP of that measured in each arm is used most often as the denominator, although some studies report the average SBP of both arms, except in cases of interarm blood pressure differences. Differences in SBP between arms may occur in the case of subclavian artery stenosis. Osborn et al201 reported 100% sensitivity and specificity to detect >50% subclavian stenosis when the interarm blood pressure difference exceeded 15 mm Hg. Thus, subclavian artery stenosis should be suspected when the SBP difference between both arms is ≥15 mm Hg. 
In an analysis of 3 cohorts derived from the general population or from patients visiting a vascular laboratory, the presence of subclavian artery stenosis was associated with an increased risk of mortality,203 and several studies found a significant association between high interarm blood pressure difference and other cardiovascular conditions, including PAD.179,204–207 Apparent differences also may be observed in an anxious patient (white coat effect) when the first measurement (usually the right arm) is higher than the last one (left arm). This issue justifies a second measurement of the SBP in the first arm measured. To minimize the risk of ABI overestimation by a falsely lower denominator, the higher SBP between both arms should be used systematically for the ABI denominator. The Numerator (Ankle) The numerator for the calculation of the ABI incorporates the SBP of the PT and/or the DP artery separately or the average of both. The intraobserver variability of the ABI is the lowest when the average pressures of PT and DP artery are used for the numerator, although the differences with other methods that take either the highest or the lowest pressure are trivial in direct comparisons.178,183 No significant difference in interobserver variability was reported between the ABI obtained by the PT versus the DP artery.75,195 The ABI reproducibility is affected more by the technique used to record pressure at the ankle than by which artery is used.183,181,190,202 The Effect of the Mode of Determination of the Ankle Pressure on the Ability of the ABI to Diagnose PAD. Two studies39,44 assessed the performance of the ABI with 2 methods for determining the numerator, comparing the higher with the lower pressure between the PT and DP arteries at each ankle. In both studies, the higher brachial pressure was selected as the denominator, and the ABI cutoff value was 0.90. One study compared Doppler ABI <0.90 with the presence of ≥70% stenosis detected by color duplex ultrasound.44 The other study compared Doppler ABI ≤0.90 with angiographic stenosis ≥50% of any lower-limb artery. Choosing the lower compared with the higher ankle pressure as the ABI numerator was associated with better sensitivity (0.89 versus 0.66 in the former and 0.83 versus 0.79 in the latter study).39,44 Using the higher ankle pressure, however, resulted in higher specificity (0.99 versus 0.93 in the former and 0.93 versus 0.83 in the latter study, respectively).39,44 Neither of these studies assessed the average of both pressures as the numerator; however, the average of the PT and DP would likely not change overall accuracy and would result in intermediate values for sensitivity and specificity. Of note, if arterial flow in the ankle is not detected, the reason is seldom arterial agenesis but is most likely related to arterial occlusion or technical difficulties in localizing the artery. When an ankle artery signal is absent and the ABI based on the other ankle artery is within the normal range, it is reasonable to perform other vascular tests (eg, duplex ultrasound) to determine whether PAD is present. In calculations of the ABI to confirm a suspected diagnosis of PAD, use of the higher pressure at the ankle (high specificity) is preferred to minimize overdiagnosis in healthy subjects and thus to avoid further unnecessary tests and treatment. 
The Effect of the Mode of Determination of the Ankle Pressure on the Association of PAD With Cardiovascular Risk Factors and Localization of Atherosclerosis

In MESA,9 the association of PAD (ABI ≤0.90) with CVD risk factors was assessed with 3 alternative numerators: the higher, the average, and the lower of the PT and DP pressures. The use of the lower of the PT and DP pressures for the calculation led to the weakest association between PAD and cardiovascular risk factors and subclinical atherosclerosis in the coronary or carotid arteries. This is plausibly related to the inclusion in the PAD group of participants with a lesser burden of disease (perhaps affecting only 1 ankle artery).

The Effect of the Mode of Determination of the Ankle Pressure on the Ability of the ABI to Predict Cardiovascular Events

In the population cohort studies that participated in the ABI Collaboration, the associations of the ABI with total mortality, cardiovascular mortality, and major coronary events were consistent between studies despite some differences in ABI protocols.6 For an ABI ≤0.90 compared with a reference ABI range of 1.11 to 1.40, the pooled hazard ratio for cardiovascular mortality was 4.2 (95% CI, 3.3–5.4) in men and 3.5 (95% CI, 2.4–5.1) in women. In approximately half of the studies, the ABI was determined with only 1 arm, only the PT artery, and the lower ABI of the 2 legs. Direct comparisons of methods of measuring the ABI for the prediction of events are limited.208,209 In 1 study, the ABI was measured in >800 patients undergoing coronary angiography who were then followed up for 6 years to detect myocardial infarction, stroke, and CVD death.208 The prevalence of patients with an ABI <0.90 in either leg was 25% with the use of the higher of the PT and DP pressures compared with 36% with the use of the lower pressure. The cardiovascular event rate in subjects with an ABI <0.90 was almost identical with each mode of ABI calculation (28.1% and 27.4%, respectively). Thus, the lower of the PT and DP pressures identified more patients at risk. A secondary analysis in the Cardiovascular Health Study assessed the prognostic value of the ABI to predict cardiovascular events.209 Using the lower ABI of the 2 legs identified more individuals with an ABI below the traditional high-risk cut point of 0.90. There were, however, no significant differences in the relative risks of a cardiovascular event based on calculations using the lower or higher ABI. Thus, taking the lower ABI of both legs will identify more individuals at risk of cardiovascular events. This conclusion is not surprising given that PAD may be unilateral or more severe in 1 leg than in the other. When the higher ABI of the 2 legs is used, individuals with significant disease who are at high risk of cardiovascular events may be missed.

Recommendations for the Measurement of the Systolic Pressures of the 4 Limbs

1. Each clinician should adopt the following sequence of limb pressure measurement for the ABI at rest: first arm, first PT artery, first DP artery, other PT artery, other DP artery, and other arm (Class I; Level of Evidence C).
2. After the measurement of the systolic pressures of the 4 limbs, if the SBP of the first arm exceeds the SBP of the other arm by ≥10 mm Hg, the blood pressure measurement of the first arm should be repeated, and the first measurement of the first arm should be disregarded (Class I; Level of Evidence C).

In clinical practice, one should consider that reproducibility is crucial only when the ABI obtained after the first set of measurements is close to the threshold values. Taking into consideration the threshold ABI value of 0.90 for the diagnosis of PAD, with the 95% CI of differences between 2 measurements reported as ±0.10, an ABI <0.80 is sufficient to detect PAD and an ABI >1.00 is high enough to rule it out, whereas repeat measurements are needed within the interval of 0.80 to 1.00 for a definitive diagnosis. Thus, repeated measurements are indicated if the initial ABI is between 0.80 and 1.00; a single ABI result <0.80 has a 95% positive predictive value for the diagnosis of PAD; and a single ABI >1.00 has a 99% negative predictive value for PAD.28

The Public Health Consequences of the Mode of Calculation of the ABI

ABI Mode of Calculation and the Epidemiology of PAD

Several studies have demonstrated that the mode of calculation of the ABI affects the estimation of PAD prevalence within a population.7–9 In MESA, in which the lower pressure of the PT and DP was used instead of the higher one for the ABI numerator, the prevalence of PAD was 3.95 times higher in women (14.6% instead of 3.7%) and 2.74 times higher in men (9.3% instead of 3.4%).9

The ABI Mode of Calculation and the Prevention of CVDs

The ABI can be used to stratify the risk of individuals initially classified as intermediate risk on the basis of cardiovascular risk scores (eg, the FRS). Subjects with an ABI ≤0.90 are considered at high risk of CVD events, primarily on the basis of studies using the higher of the PT and DP pressures as the numerator or exclusively the PT artery (Table 4).4,24,89,104,107,109,124–130,190,210,212–215 Less is known about the prognostic value of the ABI in the general population when it is calculated with the lower of the PT and DP pressures. Although this mode of calculation may slightly increase the sensitivity for the identification of high-risk patients, the overall level of risk of those with an ABI ≤0.90 would be lower because of lower specificity and the inclusion of numerous cases with early disease. The use of the lower of the PT and DP pressures may lead to the overdiagnosis of PAD, with important consequences in terms of resource use and cost.
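A toy calculation (entirely synthetic pressures, not study data) shows the mechanism behind these prevalence differences: because the lower of the PT and DP pressures can only decrease the computed ABI, more legs fall at or below the 0.90 threshold.

```python
# Toy illustration with made-up pressures: the lower-pressure numerator
# classifies more limbs as PAD (ABI <= 0.90) than the higher-pressure numerator.

def abi(ankle_pressures, arm_sbp, mode):
    pick = max if mode == "higher" else min
    return pick(ankle_pressures) / arm_sbp

# (PT, DP) pressures for one leg of 6 hypothetical subjects; arm SBP fixed at 140 mm Hg.
legs = [(150, 145), (130, 120), (128, 110), (135, 122), (140, 118), (100, 90)]
for mode in ("higher", "lower"):
    n_pad = sum(1 for leg in legs if abi(leg, 140, mode) <= 0.90)
    print(f"{mode} numerator: {n_pad}/{len(legs)} legs classified as PAD")
```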
Table 4. Ankle-Brachial Index Modes of Calculation in the 16 Population Studies Included in the ABI Collaboration Study210

| Study | Measurement Method | Arm | Ankle Artery | Repeat Measures |
| --- | --- | --- | --- | --- |
| Atherosclerosis Risk in Communities Study184 | Oscillometry | 1 measured | 1 measured | Average |
| Belgian Men study128 | Doppler | 1 measured | 1 measured | |
| Cardiovascular Health Study104,107 | Doppler | 1 measured | 1 measured | Average |
| Edinburgh artery study124 | Doppler | 1 measured | 1 measured | |
| Framingham Offspring Study109 | Doppler | Higher L+R | 1 measured | Average |
| Health in Men study212 | Doppler | 1 measured | Higher PT+DP | |
| Honolulu study129 | Doppler | 1 measured | 1 measured | Average |
| Hoorn study213 | Doppler | Not available | Not available | Not available |
| InCHIANTI214 | Doppler | 1 measured | 1 measured | Higher |
| Limburg study125 | Doppler | Higher L+R | 1 measured | |
| Men Born in 1914126 | Plethysmography | Higher L+R | 1 measured | |
| Rotterdam Study127 | Doppler | 1 measured | 1 measured | Average |
| San Diego study4 | Plethysmography | Average L+R† | | |
| San Luis Valley study24 | Doppler | Average L+R† | Other | |
| Strong Heart Study130 | Doppler | 1 measured | 1 measured | Average |
| Women's Health and Ageing89 | Doppler | 1 measured | 1 measured | Higher |

Average done only for arms. †Except for large interarm difference (highest pressure taken in this case).

The appropriate management of patients with an asymptomatic low ABI is still unclear. The Aspirin for Asymptomatic Atherosclerosis trial failed to show any benefit of aspirin in patients with an ABI <0.95, with no trend toward benefit when the ABI was <0.90, although the ABI was calculated from the lowest of the 4 ankle artery pressures.210 Using a technique that reduces specificity for PAD in a clinical trial may limit the ability to show efficacy of therapeutic interventions.

Recommendations for the Calculation of the ABI

1. The ABI of each leg should be calculated by dividing the higher of the PT or DP pressure by the higher of the right or left arm SBP (Class I; Level of Evidence A).39,44,189
2. When the ABI is used as a diagnostic tool to assess patients with symptoms of PAD, the ABI should be reported separately for each leg (Class I; Level of Evidence C).
3. When the ABI is used as a prognostic marker, the lower of the left and right leg ABIs should be used as the marker of cardiovascular events and mortality. The exception to this recommendation is the case of noncompressible arteries (Class I; Level of Evidence C).
4. For any situation, when the ABI is initially determined to be between 0.80 and 1.00, it is reasonable to repeat the measurement (Class IIa; Level of Evidence B).28
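A minimal sketch of the calculation recommended above (illustrative only; the function and field names are ours): each leg's ABI uses the higher of its PT and DP pressures over the higher arm SBP, the lower of the 2 leg ABIs is kept as the prognostic marker, and values between 0.80 and 1.00 are flagged for repeat measurement.

```python
# Illustrative sketch of the recommended calculation, not the writing group's software.

def standard_abi(arm_sbps, right_ankle, left_ankle):
    """arm_sbps: (right, left) SBP; *_ankle: (PT, DP) systolic pressures, all in mm Hg."""
    denominator = max(arm_sbps)  # higher of the right or left arm SBP
    result = {}
    for leg, (pt, dp) in (("right", right_ankle), ("left", left_ankle)):
        value = round(max(pt, dp) / denominator, 2)  # higher of PT or DP for each leg
        result[leg] = {
            "abi": value,
            "repeat_recommended": 0.80 <= value <= 1.00,  # Class IIa advice above
        }
    # Lower of the 2 leg ABIs is used as the prognostic marker (noncompressible arteries excepted).
    result["prognostic_abi"] = min(result["right"]["abi"], result["left"]["abi"])
    return result

if __name__ == "__main__":
    print(standard_abi(arm_sbps=(138, 142), right_ankle=(118, 126), left_ankle=(96, 104)))
```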
Recommendations for the Use and Interpretation of the ABI in Case of Clinical Presentation of Lower-Extremity PAD

1. In the case of clinical suspicion based on symptoms and clinical findings, the ABI should be used as the first-line noninvasive test for the diagnosis of PAD (Class I; Level of Evidence A).11,38,41,50,56
2. An ABI ≤0.90 should be considered the threshold for confirming the diagnosis of lower-extremity PAD (Class I; Level of Evidence A).11,37–39,42–44,46,50,51
3. When the ABI is >0.90 but there is clinical suspicion of PAD, postexercise ABI or other noninvasive tests, which may include imaging, should be used (Class I; Level of Evidence A).40,58,60,212
4. It is reasonable to consider a postexercise ankle pressure decrease of >30 mm Hg or a postexercise ABI decrease of >20% as a diagnostic criterion for PAD (Class IIa; Level of Evidence A).40,60,62
5. When the ABI is >1.40 but there is clinical suspicion of PAD, a toe-brachial index or other noninvasive tests, which may include imaging, should be used (Class I; Level of Evidence A).65,66

Recommendations for the Interpretation of the ABI During Follow-Up

1. An ABI decrease of >0.15 over time can be effective to detect significant PAD progression (Class IIa; Level of Evidence B).68,69
2. The ABI should not be used alone to follow revascularized patients (Class III no benefit; Level of Evidence C).

Recommendations for the Interpretation of the ABI as a Marker of Subclinical CVD and Risk in Asymptomatic Individuals

1. The ABI can be used to provide incremental information beyond standard risk scores in predicting future cardiovascular events (Class IIa; Level of Evidence A).6,116
2. Individuals with an ABI ≤0.90 or ≥1.40 should be considered at increased risk of cardiovascular events and mortality independently of the presence of symptoms of PAD and other cardiovascular risk factors (Class I; Level of Evidence A).6,116
3. Subjects with an ABI between 0.91 and 1.00 are considered “borderline” in terms of cardiovascular risk. Further evaluation is appropriate (Class IIa; Level of Evidence A).6
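The interpretation thresholds in the recommendations above can be summarized as simple decision rules. The following sketch (illustrative only, not a clinical tool) covers the resting classification, the postexercise criterion, and the follow-up progression criterion.

```python
# Illustrative decision rules based on the recommendations above; not a clinical tool.

def interpret_resting_abi(abi: float) -> str:
    if abi <= 0.90:
        return "PAD confirmed; also a marker of increased cardiovascular risk"
    if abi > 1.40:
        # An ABI >=1.40 also marks increased risk; with clinical suspicion of PAD,
        # a toe-brachial index or imaging is recommended.
        return "likely noncompressible arteries; use toe-brachial index or other tests"
    if abi <= 1.00:
        return "borderline cardiovascular risk (0.91-1.00); further evaluation appropriate"
    return "normal"

def postexercise_pad(rest_ankle_sbp: float, post_ankle_sbp: float,
                     rest_abi: float, post_abi: float) -> bool:
    """Postexercise criterion: ankle pressure drop >30 mm Hg or ABI drop >20%."""
    return (rest_ankle_sbp - post_ankle_sbp > 30) or (post_abi < 0.80 * rest_abi)

def significant_progression(previous_abi: float, current_abi: float) -> bool:
    """Follow-up criterion: an ABI decrease of >0.15 over time."""
    return (previous_abi - current_abi) > 0.15

if __name__ == "__main__":
    print(interpret_resting_abi(0.95))
    print(postexercise_pad(rest_ankle_sbp=128, post_ankle_sbp=90, rest_abi=0.95, post_abi=0.67))
    print(significant_progression(previous_abi=1.02, current_abi=0.84))
```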
Training for the Use of the ABI

The ABI should be performed by qualified individuals, including physicians, nurses, vascular technicians, and other allied health professionals. The amount of education and training required depends on prior knowledge and experience. Training should consist of both didactic and experiential learning. The individual performing the ABI should have basic knowledge of vascular anatomy, physiology, and the clinical presentation of PAD, as well as a basic understanding of how a Doppler device functions. Training should include a demonstration of the performance of an ABI with clear delineation of each step and emphasis on correct technique. To become proficient in the performance of the ABI, it is necessary to practice the ABI measurement over time to ensure comfort and competence with the equipment and the procedures. The trainee should be asked to correctly demonstrate the independent performance of each step of the ABI in both healthy individuals and those with PAD. Trainees should also be able to demonstrate reproducible results, as well as correct calculation of the ABI and interpretation of results with a clear understanding of normal and abnormal values.

Recommendations for ABI Measurement Training

1. The measurement and interpretation of the ABI should be part of the standard curriculum for medical and nursing students (Class I; Level of Evidence C).
2. All health professionals who perform the ABI should have didactic and experiential learning under the supervision of a qualified and experienced health professional (Class I; Level of Evidence C).
3. Professionals using the ABI should be proficient in performing the technique as determined by quality control measures (Class I; Level of Evidence C).

Standards to Report ABI in Scientific Papers

One of the aims of this scientific statement is to recommend uniform methods of ABI measurement in research. Conflicting results reported in the literature are related in part to discrepancies in ABI methods (see "ABI Mode of Calculation and the Epidemiology of PAD"). The results of studies using the ABI need to be translated into clinical practice. Consequently, most of the recommendations on the clinical use of the ABI also apply to research protocols. However, the time constraints that limit a comprehensive ABI in clinical practice should not apply to research protocols. A comprehensive ABI calculation for research protocols includes measurement of the SBP in all 4 limbs, including both the PT and DP arteries at each ankle. Given that the reproducibility and accuracy of ABI values are improved with repeat measurements, it is reasonable to systematically require at least 2 sets of ABI measurements, with averaging of the measurements, in research studies. This is especially true when the ABI is used as the sole method to determine PAD (as in most epidemiological studies) or when repeated measurements are planned over time. In these situations, duplicate ABI measurements provide increased accuracy and limit measurement bias. In addition, the reduced CI enables the detection of individual ABI changes of a smaller magnitude. It is suggested that ABI results in research reports include the intraobserver and interobserver variation measured in a subset of the study population or in a population similar to the one assessed in the study. The prevalence of incompressible arteries or absent flow signals also should be reported. Finally, to allow more appropriate comparison of the populations in different reports, it is also suitable to report the population's absolute pressure values in the arms and legs.

Recommendations for the Use of the ABI in Scientific Reports

1. The ABI intraobserver and interobserver variability of the research team should be reported (Class I; Level of Evidence C).
2. To improve the precision of the test, it is reasonable to measure each limb pressure twice and to average the results of each artery to calculate the ABI (Class IIa; Level of Evidence C).
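For research protocols, the duplicate-measurement approach suggested above can be sketched as follows (illustrative only; the data layout is an assumption): each limb pressure is measured twice, the 2 readings of each artery are averaged, and the ABI is then calculated from the averaged pressures.

```python
# Illustrative sketch of duplicate-measurement averaging for research protocols.

from statistics import mean

def research_abi(arm_pairs, ankle_pairs):
    """arm_pairs: {'right': (m1, m2), 'left': (m1, m2)};
    ankle_pairs: {'right': {'PT': (m1, m2), 'DP': (m1, m2)}, 'left': {...}} in mm Hg."""
    # Average the 2 arm readings per side, then take the higher side as denominator.
    denominator = max(mean(pair) for pair in arm_pairs.values())
    abis = {}
    for leg, arteries in ankle_pairs.items():
        # Average the 2 readings per artery, then take the higher averaged artery pressure.
        numerator = max(mean(pair) for pair in arteries.values())
        abis[leg] = round(numerator / denominator, 2)
    return abis

if __name__ == "__main__":
    arms = {"right": (136, 140), "left": (132, 134)}
    ankles = {"right": {"PT": (120, 124), "DP": (126, 128)},
              "left": {"PT": (98, 102), "DP": (104, 100)}}
    print(research_abi(arms, ankles))
```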
Unmet Needs: Fields of Research for the Future

The following issues have been identified as gaps in the evidence on the use and interpretation of the ABI:

• Although several studies report differences in the normal values of the ABI according to sex and ethnicity, it is still unclear whether specific thresholds should be used for different sex and ethnic groups in population studies, clinical practice, and research.
• Further research should explore potentially easier and faster alternative methods for ABI measurement that would likely be implemented more broadly in primary care.
• Standards of accreditation are necessary for ABI measurement devices using methods other than Doppler devices (eg, oscillometric methods).
• Further research to identify the optimal method of ABI calculation for predicting cardiovascular events and mobility loss is encouraged.

A major aim of this document is to provide evidence-based recommendations for ABI measurement. However, separate but related ABI issues need to be addressed in future research. Two examples are in whom the ABI should be measured and how often the ABI should be measured. The current recommendations for the target population for ABI screening in American Heart Association/American College of Cardiology guidelines215 reflect the criteria used by investigators in the PARTNERS108 and the German Epidemiological Trial on Ankle-Brachial Index (getABI)216 studies, and the American Diabetes Association has suggested minor modifications of these criteria for diabetic patients.217 However, these recommendations are based on observational epidemiology. Ideally, the criteria would be established by a randomized clinical trial, but such a trial seems unlikely in the near future. An attractive alternative is a cost-effectiveness analysis in different population subgroups; several such analyses are currently under way. How often the ABI should be repeated is also unknown. On average, the ABI decreases with age as PAD incidence increases. Some evidence exists on the rates of ABI progression in clinical populations25,67,68 and in the general population.23,218 However, there is little evidence on the cost-effectiveness of repeat measurement of the ABI in different patient groups, and with the increasing use of the ABI, this will become an important question.

Supplemental Material
aboyans_data_supplement_revised.pdf

References

1. Winsor T. Influence of arterial disease on the systolic blood pressure gradients of the extremity. Am J Med Sci. 1950;220:117–126.
2. Carter SA. Indirect systolic pressures and pulse waves in arterial occlusive diseases of the lower extremities. Circulation. 1968;37:624–637.
3. Yao ST, Hobbs JT, Irvine WT. Ankle systolic pressure measurements in arterial disease affecting the lower extremities. Br J Surg. 1969;56:676–679.
4. Criqui MH, Langer RD, Fronek A, Feigelson HS, Klauber MR, McCann TJ, Browner D. Mortality over a period of 10 years in patients with peripheral arterial disease. N Engl J Med. 1992;326:381–386.
5. McDermott MM, Guralnik JM, Tian L, Liu K, Ferrucci L, Liao Y, Sharma L, Criqui MH. Associations of borderline and low normal ankle-brachial index values with functional decline at 5-year follow-up: the WALCS (Walking and Leg Circulation Study). J Am Coll Cardiol. 2009;53:1056–1062.
6. Ankle Brachial Index Collaboration; Fowkes FG, Murray GD, Butcher I, Heald CL, Lee RJ, Chambless LE, Folsom AR, Hirsch AT, Dramaix M, deBacker G, Wautrecht JC, Kornitzer M, Newman AB, Cushman M, Sutton-Tyrrell K, Lee AJ, Price JF, d'Agostino RB, Murabito JM, Norman PE, Jamrozik K, Curb JD, Masaki KH, Rodriguez BL, Dekker JM, Bouter LM, Heine RJ, Nijpels G, Stehouwer CD, Ferrucci L, McDermott MM, Stoffers HE, Hooi JD, Knottnerus JA, Ogren M, Hedblad B, Witteman JC, Breteler MM, Hunink MG, Hofman A, Criqui MH, Langer RD, Fronek A, Hiatt WR, Hamman R, Resnick HE, Guralnik J. Ankle brachial index combined with Framingham risk score to predict cardiovascular events and mortality: a meta-analysis. JAMA. 2008;300:197–208.
7. Lange SF, Trampisch HJ, Pittrow D, Darius H, Mahn M, Allenberg JR, Tepohl G, Haberl RL, Diehm C; getABI Study Group. Profound influence of different methods for determination of the ankle brachial index on the prevalence estimate of peripheral arterial disease. BMC Public Health. 2007:147.
8. Aboyans V, Lacroix P, Preux PM, Vergnenegre A, Ferrieres J, Laskar M. Variability of ankle-arm index in general population according to its mode of calculation. Int Angiol. 2002;21:237–243.
9. Allison MA, Aboyans V, Granston T, McDermott MM, Kamineni A, Ni H, Criqui MH. The relevance of different methods of calculating the ankle-brachial index: the Multi-Ethnic Study of Atherosclerosis. Am J Epidemiol. 2010;171:368–376.
10. Klein S, Hage JJ. Measurement, calculation, and normal range of the ankle-arm index: a bibliometric analysis and recommendation for standardization. Ann Vasc Surg. 2006;20:282–292.
11. Dachun X, Jue L, Liling Z, Yawei X, Dayi H, Pagoto SL, Yunsheng M. Sensitivity and specificity of the ankle-brachial index to diagnose peripheral artery disease: a structured review. Vasc Med. 2010;15:361–369.
12. Stein JH, Korcarz CE, Hurst RT, Lonn E, Kendall CB, Mohler ER, Najjar SS, Rembold CM, Post WS; American Society of Echocardiography Carotid Intima-Media Thickness Task Force. Use of carotid ultrasound to identify subclinical vascular disease and evaluate cardiovascular disease risk: a consensus statement from the American Society of Echocardiography Carotid Intima-Media Thickness Task Force: endorsed by the Society for Vascular Medicine. J Am Soc Echocardiogr. 2008;21:93–111; quiz 189–190.
13. Greenland P, Bonow RO, Brundage BH, Budoff MJ, Eisenberg MJ, Grundy SM, Lauer MS, Post WS, Raggi P, Redberg RF, Rodgers GP, Shaw LJ, Taylor AJ, Weintraub WS. ACCF/AHA 2007 clinical expert consensus document on coronary artery calcium scoring by computed tomography in global cardiovascular risk assessment and in evaluation of patients with chest pain: a report of the American College of Cardiology Foundation Clinical Expert Consensus Task Force (ACCF/AHA Writing Committee to Update the 2000 Expert Consensus Document on Electron Beam Computed Tomography) developed in collaboration with the Society of Atherosclerosis Imaging and Prevention and the Society of Cardiovascular Computed Tomography. Circulation. 2007;115:402–426.
14. Hiatt WR, Goldstone J, Smith SC, McDermott M, Moneta G, Oka R, Newman AB, Pearce WH; American Heart Association Writing Group 1. Atherosclerotic peripheral vascular disease symposium II: nomenclature for vascular diseases. Circulation. 2008;118:2826–2829.
15. Safar ME, Protogerou AD, Blacher J. Statins, central blood pressure, and blood pressure amplification. Circulation. 2009;119:9–12.
16. Murgo JP, Westerhof N, Giolma JP, Altobelli SA. Aortic input impedance in normal man: relationship to pressure wave forms. Circulation. 1980;62:105–116.
17. Latham RD, Westerhof N, Sipkema P, Rubal BJ, Reuderink P, Murgo JP. Regional wave travel and reflections along the human aorta: a study with six simultaneous micromanometric pressures. Circulation. 1985;72:1257–1269.
18. Hope SA, Tay DB, Meredith IT, Cameron JD. Waveform dispersion, not reflection, may be the major determinant of aortic pressure wave morphology. Am J Physiol Heart Circ Physiol. 2005;289:H2497–H2502.
19. Wang JJ, Parker KH. Wave propagation in a model of the arterial circulation. J Biomech. 2004;37:457–470.
20. Tsamis A, Stergiopulos N. Arterial remodeling in response to hypertension using a constituent-based model. Am J Physiol Heart Circ Physiol. 2007;293:H3130–H3139.
21. Humphrey JD. Mechanisms of arterial remodeling in hypertension: coupled roles of wall shear and intramural stress. Hypertension. 2008;52:195–200.
22. Katz S, Globerman A, Avitzour M, Dolfin T. The ankle-brachial index in normal neonates and infants is significantly lower than in older children and adults. J Pediatr Surg. 1997;32:269–271.
23. Smith FB, Lee AJ, Price JF, van Wijk MC, Fowkes FG. Changes in ankle brachial index in symptomatic and asymptomatic subjects in the general population. J Vasc Surg. 2003;38:1323–1330.
24. Hiatt WR, Hoag S, Hamman RF. Effect of diagnostic criteria on the prevalence of peripheral arterial disease: the San Luis Valley Diabetes Study. Circulation. 1995;91:1472–1479.
25. Bird CE, Criqui MH, Fronek A, Denenberg JO, Klauber MR, Langer RD. Quantitative and qualitative progression of peripheral arterial disease by non-invasive testing. Vasc Med. 1999;4:15–21.
26. London GM, Guerin AP, Pannier B, Marchais SJ, Stimpel M. Influence of sex on arterial hemodynamics and blood pressure: role of body height. Hypertension. 1995;26:514–519.
27. Aboyans V, Criqui MH, McClelland RL, Allison MA, McDermott MM, Goff DC, Manolio TA. Intrinsic contribution of gender and ethnicity to normal ankle-brachial index values: the Multi-Ethnic Study of Atherosclerosis (MESA). J Vasc Surg. 2007;45:319–327.
28. Stoffers HE, Kester AD, Kaiser V, Rinkens PE, Kitslaar PJ, Knottnerus JA. The diagnostic value of the measurement of the ankle-brachial systolic pressure index in primary health care. J Clin Epidemiol. 1996;49:1401–1405.
29. Zheng ZJ, Sharrett AR, Chambless LE, Rosamond WD, Nieto FJ, Sheps DS, Dobs A, Evans GW, Heiss G. Associations of ankle-brachial index with clinical coronary heart disease, stroke and preclinical carotid and popliteal atherosclerosis: the Atherosclerosis Risk in Communities (ARIC) Study. Atherosclerosis. 1997;131:115–125.
30. Zheng ZJ, Rosamond WD, Chambless LE, Nieto FJ, Barnes RW, Hutchinson RG, Tyroler HA, Heiss G; ARIC Investigators. Lower extremity arterial disease assessed by ankle-brachial index in a middle-aged population of African Americans and whites: the Atherosclerosis Risk in Communities (ARIC) Study. Am J Prev Med. 2005;29(suppl 1):42–49.
31. Carmelli D, Fabsitz RR, Swan GE, Reed T, Miller B, Wolf PA. Contribution of genetic and environmental influences to ankle-brachial blood pressure index in the NHLBI Twin Study: National Heart, Lung, and Blood Institute. Am J Epidemiol. 2000;151:452–458.
32. Allison MA, Peralta CA, Wassel CL, Aboyans V, Arnett DK, Cushman M, Eng J, Ix J, Rich SS, Criqui MH. Genetic ancestry and lower extremity peripheral artery disease in the Multi-Ethnic Study of Atherosclerosis. Vasc Med. 2010;15:351–359.
33. Su HM, Lee KT, Chu CS, Lee MY, Lin TH, Voon WC, Sheu SH, Lai WT. Effects of heart rate on brachial-ankle pulse wave velocity and ankle-brachial pressure index in patients without significant organic heart disease. Angiology. 2007;58:67–74.
34. Wilkinson IB, MacCallum H, Flint L, Cockcroft JR, Newby DE, Webb DJ. The influence of heart rate on augmentation index and central arterial pressure in humans. J Physiol. 2000;525(pt 1):263–270.
35. Abraham P, Desvaux B, Colin D, Leftheriotis G, Saumet JL. Heart rate-corrected ankle-to-arm index in the diagnosis of moderate lower extremity arterial disease. Angiology. 1995;46:673–677.
36. Su HM, Chang JM, Lin FH, Chen SC, Voon WC, Cheng KH, Wang CS, Lin TH, Lai WT, Sheu SH. Influence of different measurement time points on brachial-ankle pulse wave velocity and ankle-brachial index in hemodialysis patients. Hypertens Res. 2007;30:965–970.
37. Allen J, Oates CP, Henderson J, Jago J, Whittingham TA, Chamberlain J, Jones NA, Murray A. Comparison of lower limb arterial assessments using color-duplex ultrasound and ankle/brachial pressure index measurements. Angiology. 1996;47:225–232.
38. Lijmer JG, Hunink MG, van den Dungen JJ, Loonstra J, Smit AJ. ROC analysis of noninvasive tests for peripheral arterial disease. Ultrasound Med Biol. 1996;22:391–398.
39. Niazi K, Khan TH, Easley KA. Diagnostic utility of the two methods of ankle brachial index in the detection of peripheral arterial disease of lower extremities. Catheter Cardiovasc Interv. 2006;68:788–792.
40. Ouriel K, McDonnell AE, Metz CE, Zarins CK. Critical evaluation of stress testing in the diagnosis of peripheral vascular disease. Surgery. 1982;91:686–693.
41. Ouriel K, Zarins CK. Doppler ankle pressure: an evaluation of three methods of expression. Arch Surg. 1982;117:1297–1300.
42. Parameswaran GI, Brand K, Dolan J. Pulse oximetry as a potential screening tool for lower extremity arterial disease in asymptomatic patients with diabetes mellitus. Arch Intern Med. 2005;165:442–446.
43. Premalatha G, Ravikumar R, Sanjay R, Deepa R, Mohan V. Comparison of colour duplex ultrasound and ankle-brachial pressure index measurements in peripheral vascular disease in type 2 diabetic patients with foot infections. J Assoc Physicians India. 2002;50:1240–1244.
44. Schroder F, Diehm N, Kareem S, Ames M, Pira A, Zwettler U, Lawall H, Diehm C. A modified calculation of ankle-brachial pressure index is far more sensitive in the detection of peripheral arterial disease. J Vasc Surg. 2006;44:531–536.
45. Sumner DS, Strandness DE. The relationship between calf blood flow and ankle blood pressure in patients with intermittent claudication. Surgery. 1969;65:763–771.
46. Williams DT, Harding KG, Price P. An evaluation of the efficacy of methods used in screening for lower-limb arterial disease in diabetes. Diabetes Care. 2005;28:2206–2210.
47. Alnaeb ME, Crabtree VP, Boutin A, Mikhailidis DP, Seifalian AM, Hamilton G. Prospective assessment of lower-extremity peripheral arterial disease in diabetic patients using a novel automated optical device. Angiology. 2007;58:579–585.
48. Clairotte C, Retout S, Potier L, Roussel R, Escoubet B. Automated ankle-brachial pressure index measurement by clinical staff for peripheral arterial disease diagnosis in nondiabetic and diabetic patients. Diabetes Care. 2009;32:1231–1236.
49. Feigelson HS, Criqui MH, Fronek A, Langer RD, Molgaard CA. Screening for peripheral arterial disease: the sensitivity, specificity, and predictive value of noninvasive tests in a defined population. Am J Epidemiol. 1994;140:526–534.
50. Guo X, Li J, Pang W, Zhao M, Luo Y, Sun Y, Hu D. Sensitivity and specificity of ankle-brachial index for detecting angiographic stenosis of peripheral arteries. Circ J. 2008;72:605–610.
51. Wikstrom J, Hansen T, Johansson L, Lind L, Ahlstrom H. Ankle brachial index <0.9 underestimates the prevalence of peripheral artery occlusive disease assessed with whole-body magnetic resonance angiography in the elderly. Acta Radiol. 2008;49:143–149.
52. Baxter GM, Polak JF. Lower limb colour flow imaging: a comparison with ankle:brachial measurements and angiography. Clin Radiol. 1993;47:91–95.
53. de Groote P, Millaire A, Deklunder G, Marache P, Decoulx E, Ducloux G. Comparative diagnostic value of ankle-to-brachial index and transcutaneous oxygen tension at rest and after exercise in patients with intermittent claudication. Angiology. 1995;46:115–122.
54. Flanigan DP, Ballard JL, Robinson D, Galliano M, Blecker G, Harward TR. Duplex ultrasound of the superficial femoral artery is a better screening tool than ankle-brachial index to identify at risk patients with lower extremity atherosclerosis. J Vasc Surg. 2008;47:789–792.
55. Alnaeb ME, Boutin A, Crabtree VP, Mikhailidis DP, Seifalian AM, Hamilton G. Assessment of lower extremity peripheral arterial disease using a novel automated optical device. Vasc Endovascular Surg. 2007;41:522–527.
56. Carter SA. Clinical measurement of systolic pressures in limbs with arterial occlusive disease. JAMA. 1969;207:1869–1874.
57. Bernstein EF, Fronek A. Current status of noninvasive tests in the diagnosis of peripheral arterial disease. Surg Clin North Am. 1982;62:473–487.
58. Carter SA. Response of ankle systolic pressure to leg exercise in mild or questionable arterial disease. N Engl J Med. 1972;287:578–582.
59. Winsor T. Conditioned vasoconstrictive responses of digital vessels. AMA Arch Surg. 1958;76:193–199.
60. Laing S, Greenhalgh RM. The detection and progression of asymptomatic peripheral arterial disease. Br J Surg. 1983;70:628–630.
61. Sakurai T, Matsushita M, Nishikimi N, Nimura Y. Effect of walking distance on the change in ankle-brachial pressure index in patients with intermittent claudication. Eur J Vasc Endovasc Surg. 1997;13:486–490.
62. Hoogeveen EK, Mackaay AJ, Beks PJ, Kostense PJ, Dekker JM, Heine RJ, Nijpels G, Rauwerda JA, Stehouwer CD. Evaluation of the one-minute exercise test to detect peripheral arterial disease. Eur J Clin Invest. 2008;38:290–295.
63. McPhail IR, Spittel PC, Weston SA, Bailey KR. Intermittent claudication: an objective office-based assessment. J Am Coll Cardiol. 2001;37:1381–1385.
64. Amirhamzeh MM, Chant HJ, Rees JL, Powel RJ, Campbell WB. A comparative study of treadmill tests and heel raising exercise for peripheral arterial disease. Eur J Vasc Endovasc Surg. 1997;13:301–305.
65. Suominen V, Rantanen T, Venermo M, Saarinen J, Salenius J. Prevalence and risk factors of PAD among patients with elevated ABI. Eur J Vasc Endovasc Surg. 2008;35:709–714.
66. Aboyans V, Ho E, Denenberg JO, Ho LA, Natarajan L, Criqui MH. The association between elevated ankle systolic pressures and peripheral occlusive arterial disease in diabetic and nondiabetic subjects. J Vasc Surg. 2008;48:1197–1203.
67. Aboyans V, Criqui MH, Denenberg JO, Knoke JD, Ridker PM, Fronek A. Risk factors for progression of peripheral arterial disease in large and small vessels. Circulation. 2006;113:2623–2629.
68. Nicoloff AD, Taylor LM, Sexton GJ, Schuff RA, Edwards JM, Yeager RA, Landry GJ, Moneta GL, Porter JM; Homocysteine and Progression of Atherosclerosis Study Investigators. Relationship between site of initial symptoms and subsequent progression of disease in a prospective study of atherosclerosis progression in patients receiving long-term treatment for symptomatic peripheral arterial disease. J Vasc Surg. 2002;35:38–46.
69. Cronenwett JL, Warner KG, Zelenock GB, Whitehouse WM, Graham LM, Lindenauer M, Stanley JC. Intermittent claudication: current results of nonoperative management. Arch Surg. 1984;119:430–436.
70. Amighi J, Sabeti S, Schlager O, Francesconi M, Ahmadi R, Minar E, Schillinger M. Outcome of conservative therapy of patients with severe intermittent claudication. Eur J Vasc Endovasc Surg. 2004;27:254–258.
71. Norgren L, Hiatt WR, Dormandy JA, Nehler MR, Harris KA, Fowkes FG; TASC II Working Group. Inter-society consensus for the management of peripheral arterial disease (TASC II). J Vasc Surg. 2007;45(suppl S):S5–S67.
72. Marston WA, Davies SW, Armstrong B, Farber MA, Mendes RC, Fulton JJ, Keagy BA. Natural history of limbs with arterial insufficiency and chronic ulceration treated without revascularization. J Vasc Surg. 2006;44:108–114.
73. Hamalainen H, Ronnemaa T, Halonen JP, Toikka T. Factors predicting lower extremity amputations in patients with type 1 or type 2 diabetes mellitus: a population-based 7-year follow-up study. J Intern Med. 1999;246:97–103.
74. Brothers TE, Esteban R, Robison JG, Elliott BM. Symptoms of chronic arterial insufficiency correlate with absolute ankle pressure better than with ankle:brachial index. Minerva Cardioangiol. 2000;48:103–109.
75. Matzke S, Ollgren J, Lepantalo M. Predictive value of distal pressure measurements in critical leg ischaemia. Ann Chir Gynaecol. 1996;85:316–321.
76. Fowl RJ, Gewirtz RJ, Love MC, Kempczinski RF. Natural history of claudicants with critical hemodynamic indices. Ann Vasc Surg. 1992;6:31–33.
77. Decrinis M, Doder S, Stark G, Pilger E. A prospective evaluation of sensitivity and specificity of the ankle/brachial index in the follow-up of superficial femoral artery occlusions treated by angioplasty. Clin Investig. 1994;72:592–597.
78. Motukuru V, Suresh KR, Vivekanand V, Raj S, Girija KR. Therapeutic angiogenesis in Buerger's disease (thromboangiitis obliterans) patients with critical limb ischemia by autologous transplantation of bone marrow mononuclear cells. J Vasc Surg. 2008;48(suppl):53S–60S.
79. Allouche-Cometto L, Leger P, Rousseau H, Lefebvre D, Bendayan P, Elefterion P, Boccalon H. Comparative of blood flow to the ankle-brachial index after iliac angioplasty. Int Angiol. 1999;18:154–157.
80. Matoba S, Tatsumi T, Murohara T, Imaizumi T, Katsuda Y, Ito M, Saito Y, Uemura S, Suzuki H, Fukumoto S, Yamamoto Y, Onodera R, Teramukai S, Fukushima M, Matsubara H; TACT Follow-Up Study Investigators. Long-term clinical outcome after intramuscular implantation of bone marrow mononuclear cells (Therapeutic Angiogenesis by Cell Transplantation [TACT] trial) in patients with chronic limb ischemia. Am Heart J. 2008;156:1010–1018.
81. Barnes RW, Thompson BW, MacDonald CM, Nix ML, Lambeth A, Nix AD, Johnson DW, Wallace BH. Serial noninvasive studies do not herald postoperative failure of femoropopliteal or femorotibial bypass grafts. Ann Surg. 1989;210:486–493.
82. Stierli P, Aeberhard P, Livers M. The role of colour flow duplex screening in infra-inguinal vein grafts. Eur J Vasc Surg. 1992;6:293–298.
83. Laborde AL, Synn AY, Worsey MJ, Bower TR, Hoballah JJ, Sharp WJ, Kresowik TF, Corson JD. A prospective comparison of ankle/brachial indices and color duplex imaging in surveillance of the in situ saphenous vein bypass. J Cardiovasc Surg (Torino). 1992;33:420–425.
84. Idu MM, Blankenstein JD, de Gier P, Truyen E, Buth J. Impact of a color-flow duplex surveillance program on infrainguinal vein graft patency: a five-year experience. J Vasc Surg. 1993;17:42–52.
85. Dalsing MC, Cikrit DF, Lalka SG, Sawchuk AP, Schulz C. Femorodistal vein grafts: the utility of graft surveillance criteria. J Vasc Surg. 1995;21:127–134.
86. Lundell A, Lindblad B, Bergqvist D, Hansen F. Femoropopliteal-crural graft patency is improved by an intensive surveillance program: a prospective randomized study. J Vasc Surg. 1995;21:26–33.
87. Radak D, Labs KH, Jager KA, Bojic M, Popovic AD. Doppler-based diagnosis of restenosis after femoropopliteal percutaneous transluminal angioplasty: sensitivity and specificity of the ankle/brachial pressure index versus changes in absolute pressure values. Angiology. 1999;50:111–122.
88. McDermott MM, Greenland P, Liu K, Guralnik JM, Criqui MH, Dolan NC, Chan C, Celic L, Pearce WH, Schneider JR, Sharma L, Clark E, Gibson D, Martin GJ. Leg symptoms in peripheral arterial disease: associated clinical characteristics and functional impairment. JAMA. 2001;286:1599–1606.
89. McDermott MM, Fried L, Simonsick E, Ling S, Guralnik JM. Asymptomatic peripheral arterial disease is independently associated with impaired lower extremity functioning: the Women's Health and Aging Study. Circulation. 2000;101:1007–1012.
90. McDermott MM, Greenland P, Liu K, Guralnik JM, Celic L, Criqui MH, Chan C, Martin GJ, Schneider J, Pearce WH, Taylor LM, Clark E. The ankle brachial index is associated with leg function and physical activity: the Walking and Leg Circulation Study. Ann Intern Med. 2002;136:873–883.
91. McDermott MM, Ohlmiller SM, Liu K, Guralnik JM, Martin GJ, Pearce WH, Greenland P. Gait alterations associated with walking impairment in people with peripheral arterial disease with and without intermittent claudication. J Am Geriatr Soc. 2001;49:747–754.
92. McDermott MM, Liu K, Greenland P, Guralnik JM, Criqui MH, Chan C, Pearce WH, Schneider JR, Ferrucci L, Celic L, Taylor LM, Vonesh E, Martin GJ, Clark E. Functional decline in peripheral arterial disease: associations with the ankle brachial index and leg symptoms. JAMA. 2004;292:453–461.
93. Szuba A, Oka RK, Harada R, Cooke JP. Limb hemodynamics are not predictive of functional capacity in patients with PAD. Vasc Med. 2006;11:155–163.
94. Gardner AW, Skinner JS, Cantwell BW, Smith LK. Prediction of claudication pain from clinical measurements obtained at rest. Med Sci Sports Exerc. 1992;24:163–170.
95. Parr B, Noakes TD, Derman EW. Factors predicting walking intolerance in patients with peripheral arterial disease and intermittent claudication. S Afr Med J. 2008;98:958–962.
96. McDermott MM, Criqui MH, Liu K, Guralnik JM, Greenland P, Martin GJ, Pearce W. Lower ankle/brachial index, as calculated by averaging the dorsalis pedis and posterior tibial arterial pressures, and association with leg functioning in peripheral arterial disease. J Vasc Surg. 2000;32:1164–1171.
97. McDermott MM, Ferrucci L, Guralnik JM, Dyer AR, Liu K, Pearce WH, Clark E, Liao Y, Criqui MH. The ankle-brachial index is associated with the magnitude of impaired walking endurance among men and women with peripheral arterial disease. Vasc Med. 2010;15:251–257.
98. McDermott MM, Liu K, Ferrucci L, Tian L, Guralnik JM, Green D, Tan J, Liao Y, Pearce WH, Schneider JR, McCue K, Ridker P, Rifai N, Criqui MH. Circulating blood markers and functional impairment in peripheral arterial disease. J Am Geriatr Soc. 2008;56:1504–1510.
99. Herman SD, Liu K, Tian L, Guralnik JM, Ferrucci L, Criqui MH, Liao Y, McDermott MM. Baseline lower extremity strength and subsequent decline in functional performance at 6-year follow-up in persons with lower extremity peripheral arterial disease. J Am Geriatr Soc. 2009;57:2246–2252.
100. Anderson JD, Epstein FH, Meyer CH, Hagspiel KD, Wang H, Berr SS, Harthun NL, Weltman A, Dimaria JM, West AM, Kramer CM. Multifactorial determinants of functional capacity in peripheral arterial disease: uncoupling of calf muscle perfusion and metabolism. J Am Coll Cardiol. 2009;54:628–635.
101. McDermott MM, Liu K, Ferrucci L, Tian L, Guralnik JM, Liao Y, Criqui MH. Greater sedentary hours and slower walking speed outside the home predict faster declines in functioning and adverse calf muscle changes in peripheral arterial disease. J Am Coll Cardiol. 2011;57:2356–2364.
102. McDermott MM, Liu K, Ferrucci L, Criqui MH, Greenland P, Guralnik JM, Tian L, Schneider JR, Pearce WH, Tan J, Martin GJ. Physical performance in peripheral arterial disease: a slower rate of decline in patients who walk more. Ann Intern Med. 2006;144:10–20.
103. Selvin E, Erlinger TP. Prevalence of and risk factors for peripheral arterial disease in the United States: results from the National Health and Nutrition Examination Survey, 1999–2000. Circulation. 2004;110:738–743.
104. Newman AB, Siscovick DS, Manolio TA, Polak J, Fried LP, Borhani NO, Wolfson SK. Ankle-arm index as a marker of atherosclerosis in the Cardiovascular Health Study: Cardiovascular Heart Study (CHS) Collaborative Research Group. Circulation. 1993;88:837–845.
105. Allison MA, Criqui MH, McClelland RL, Scott JM, McDermott MM, Liu K, Folsom AR, Bertoni AG, Sharrett AR, Homma S, Kori S. The effect of novel cardiovascular risk factors on the ethnic-specific odds for peripheral arterial disease in the Multi-Ethnic Study of Atherosclerosis (MESA). J Am Coll Cardiol. 2006;48:1190–1197.
106. Weatherley BD, Nelson JJ, Heiss G, Chambless LE, Sharrett AR, Nieto FJ, Folsom AR, Rosamond WD. The association of the ankle-brachial index with incident coronary heart disease: the Atherosclerosis Risk in Communities (ARIC) study, 1987–2001. BMC Cardiovasc Disord. 2007:3.
107. Newman AB, Shemanski L, Manolio TA, Cushman M, Mittelmark M, Polak JF, Powe NR, Siscovick D. Ankle-arm index as a predictor of cardiovascular disease and mortality in the Cardiovascular Health Study: the Cardiovascular Health Study Group. Arterioscler Thromb Vasc Biol. 1999;19:538–545.
108. Hirsch AT, Criqui MH, Treat-Jacobson D, Regensteiner JG, Creager MA, Olin JW, Krook SH, Hunninghake DB, Comerota AJ, Walsh ME, McDermott MM, Hiatt WR. Peripheral arterial disease detection, awareness, and treatment in primary care. JAMA. 2001;286:1317–1324.
109. Murabito JM, Evans JC, Nieto K, Larson MG, Levy D, Wilson PW. Prevalence and clinical correlates of peripheral arterial disease in the Framingham Offspring Study. Am Heart J. 2002;143:961–965.
110. Zander E, Heinke P, Reindel J, Kohnert KD, Kairies U, Braun J, Eckel L, Kerner W. Peripheral arterial disease in diabetes mellitus type 1 and type 2: are there different risk factors? Vasa. 2002;31:249–254.
111. Hayashi C, Ogawa O, Kubo S, Mitsuhashi N, Onuma T, Kawamori R. Ankle brachial pressure index and carotid intima-media thickness as atherosclerosis markers in Japanese diabetics. Diabetes Res Clin Pract. 2004;66:269–275.
112. Yang X, Sun K, Zhang W, Wu H, Zhang H, Hui R. Prevalence of and risk factors for peripheral arterial disease in the patients with hypertension among Han Chinese. J Vasc Surg. 2007;46:296–302.
113. Ovbiagele B. Association of ankle-brachial index level with stroke. J Neurol Sci. 2009;276:14–17.
114. Ramos R, Quesada M, Solanas P, Subirana I, Sala J, Vila J, Masia R, Cerezo C, Elosua R, Grau M, Cordon F, Juvinya D, Fito M, Isabel Covas M, Clara A, Angel Munoz M, Marrugat J; REGICOR Investigators. Prevalence of symptomatic and asymptomatic peripheral arterial disease and the value of the ankle-brachial index to stratify cardiovascular risk. Eur J Vasc Endovasc Surg. 2009;38:305–311.
115. Allison MA, Hiatt WR, Hirsch AT, Coll JR, Criqui MH. A high ankle-brachial index is associated with increased cardiovascular disease morbidity and lower quality of life. J Am Coll Cardiol. 2008;51:1292–1298.
The ankle-brachial index and incident cardiovascular events in the MESA (Multi-Ethnic Study of Atherosclerosis). J Am Coll Cardiol. 2010;56:1506–1512. Crossref PubMed Google Scholar a [...] high ABI was associated with incident CVD. b [...] ethnic groups in the United States. c [...] extends beyond that of the FRS alone. d [...] Class IIA; Level of Evidence A). e [...] Class I; Level of Evidence A). 117. Sutton-Tyrrell K, Venkitachalam L, Kanaya AM, Boudreau R, Harris T, Thompson T, Mackey RH, Visser M, Vaidean GD, Newman AB. Relationship of ankle blood pressures to cardiovascular events in older adults. Stroke. 2008;39:863–869. Go to Citation Crossref PubMed Google Scholar 118. Wattanakit K, Folsom AR, Duprez DA, Weatherley BD, Hirsch AT. Clinical significance of a high ankle-brachial index: insights from the Atherosclerosis Risk in Communities (ARIC) Study. Atherosclerosis. 2007;190:459–464. Go to Citation Crossref PubMed Google Scholar 119. Resnick HE, Foster GL. Prevalence of elevated ankle-brachial index in the United States 1999 to 2002. Am J Med. 2005;118:676–679. Go to Citation Crossref PubMed Google Scholar 120. Greenland P, Smith SC, Grundy SM. Improving coronary heart disease risk assessment in asymptomatic people: role of traditional risk factors and noninvasive cardiovascular tests. Circulation. 2001;104:1863–1867. Go to Citation Crossref PubMed Google Scholar 121. Brindle P, Beswick A, Fahey T, Ebrahim S. Accuracy and impact of risk assessment in the primary prevention of cardiovascular disease: a systematic review. Heart. 2006;92:1752–1759. Go to Citation Crossref PubMed Google Scholar 122. Tsimikas S, Willerson JT, Ridker PM. C-reactive protein and other emerging blood biomarkers to optimize risk stratification of vulnerable patients. J Am Coll Cardiol. 2006; 47 (suppl): C19– C31. Go to Citation Crossref PubMed Google Scholar 123. Greenland P, LaBree L, Azen SP, Doherty TM, Detrano RC. Coronary artery calcium score combined with Framingham score for risk prediction in asymptomatic individuals. JAMA. 2004;291:210–215. Go to Citation Crossref PubMed Google Scholar 124. Leng GC, Fowkes FG, Lee AJ, Dunbar J, Housley E, Ruckley CV. Use of ankle brachial pressure index to predict cardiovascular events and death: a cohort study. BMJ. 1996;313:1440–1444. Crossref PubMed Google Scholar a [...] cohort studies, mostly in Europe b [...] Table 4). c [...] Edinburgh artery study 125. Hooi JD, Kester AD, Stoffers HE, Rinkens PE, Knottnerus JA, van Ree JW. Asymptomatic peripheral arterial occlusive disease predicted cardiovascular morbidity and mortality in a 7-year follow-up study. J Clin Epidemiol. 2004;57:294–300. Crossref PubMed Google Scholar a [...] cohort studies, mostly in Europe b [...] Table 4). c [...] Limburg study 126. Ogren M, Hedblad B, Isacsson SO, Janzon L, Jungquist G, Lindell SE. Non-invasively detected carotid stenosis and ischaemic heart disease in men with leg arteriosclerosis. Lancet. 1993;342:1138–1141. Crossref PubMed Google Scholar a [...] cohort studies, mostly in Europe b [...] Table 4). c [...] Men Born in 1914 127. van der Meer IM, Bots ML, Hofman A, del Sol AI, van der Kuip DA, Witteman JC. Predictive value of noninvasive measures of atherosclerosis for incident myocardial infarction: the Rotterdam Study. Circulation. 2004;109:1089–1094. Crossref PubMed Google Scholar a [...] cohort studies, mostly in Europe b [...] Table 4). c [...] Rotterdam Study 128. Kornitzer M, Dramaix M, Sobolski J, Degre S, De Backer G. 
Ankle/arm pressure index in asymptomatic middle-aged males: an independent predictor of ten-year coronary heart disease mortality. Angiology. 1995;46:211–219. Crossref PubMed Google Scholar a [...] and North America. b [...] Table 4). c [...] Belgian Men study 129. Abbott RD, Petrovitch H, Rodriguez BL, Yano K, Schatz IJ, Popper JS, Masaki KH, Ross GW, Curb JD. Ankle/brachial blood pressure in men >70 years of age and the risk of coronary heart disease. Am J Cardiol. 2000;86:280–284. Crossref PubMed Google Scholar a [...] and North America. b [...] Table 4). c [...] Honolulu study 130. Resnick HE, Lindsay RS, McDermott MM, Devereux RB, Jones KL, Fabsitz RR, Howard BV. Relationship of high and low ankle brachial index to all-cause and cardiovascular disease mortality: the Strong Heart Study. Circulation. 2004;109:733–739. Crossref PubMed Google Scholar a [...] and North America. b [...] Table 4). c [...] Strong Heart Study 131. Aboyans V, Lacroix P, Tran MH, Salamagne C, Galinat S, Archambeaud F, Criqui MH, Laskar M. The prognosis of diabetic patients with high ankle-brachial index depends on the coexistence of occlusive peripheral artery disease. J Vasc Surg. 2011;53:984–991. Go to Citation Crossref PubMed Google Scholar 132. Aboyans V, Lacroix P, Postil A, Guilloux J, Rolle F, Cornu E, Laskar M. Subclinical peripheral arterial disease and incompressible ankle arteries are both long-term prognostic factors in patients undergoing coronary artery bypass grafting. J Am Coll Cardiol. 2005;46:815–820. Go to Citation Crossref PubMed Google Scholar 133. Agnelli G, Cimminiello C, Meneghetti G, Urbinati S; Polyvascular Atherothrombosis Observational Survey (PATHOS) Investigators. Low ankle-brachial index predicts an adverse 1-year outcome after acute coronary and cerebrovascular events. J Thromb Haemost. 2006;4:2599–2606. Go to Citation Crossref PubMed Google Scholar 134. Purroy F, Coll B, Oro M, Seto E, Pinol-Ripoll G, Plana A, Quilez A, Sanahuja J, Brieva L, Vega L, Fernandez E. Predictive value of ankle brachial index in patients with acute ischaemic stroke. Eur J Neurol. 2010;17:602–606. Go to Citation Crossref PubMed Google Scholar 135. Alberts MJ, Bhatt DL, Mas JL, Ohman EM, Hirsch AT, Rother J, Salette G, Goto S, Smith SC, Liau CS, Wilson PW, Steg PG; Reduction of Atherothrombosis for Continued Health Registry Investigators. Three-year follow-up and event rates in the international Reduction of Atherothrombosis for Continued Health Registry. Eur Heart J. 2009;30:2318–2326. Crossref PubMed Google Scholar a [...] with disease in only 1 vascular territory. b [...] mortality of 1.6 compared with higher ABIs. 136. Criqui MH, Ninomiya JK, Wingard DL, Ji M, Fronek A. Progression of peripheral arterial disease predicts cardiovascular disease morbidity and mortality. J Am Coll Cardiol. 2008;52:1736–1742. Go to Citation Crossref PubMed Google Scholar 137. Sheikh MA, Bhatt DL, Li J, Lin S, Bartholomew JR. Usefulness of postexercise ankle-brachial index to predict all-cause mortality. Am J Cardiol. 2011;107:778–782. Go to Citation Crossref PubMed Google Scholar 138. Mohler ER, Treat-Jacobson D, Reilly MP, Cunningham KE, Miani M, Criqui MH, Hiatt WR, Hirsch AT. Utility and barriers to performance of the ankle-brachial index in primary care practice. Vasc Med. 2004;9:253–260. Crossref PubMed Google Scholar a [...] and staff training, were identified. b [...] needed for ABI measurement was <15 minutes. 139. Bendermacher BLW, Teijink JAW, Willigendael EM. 
Applicability of the ankle brachial index measurement as screening device in general practice for high cardiovascular risk. In:, Bendermacher B Peripheral Arterial Disease. Screening, Diagnosis and Conservative Treatment [dissertation]. Maastricht, Netherlands: Maastricht University; 2007. Go to Citation Crossref Google Scholar 140. Pollak EW, Chavis P, Wolfman EF. The effect of postural changes upon the ankle arterial perfusion pressure. Vasc Surg. 1976;10:219–222. Go to Citation Crossref PubMed Google Scholar 141. Gornik HL, Garcia B, Wolski K, Jones DC, Macdonald KA, Fronek A. Validation of a method for determination of the ankle-brachial index in the seated position. J Vasc Surg. 2008;48:1204–1210. Crossref PubMed Google Scholar a [...] Gornik et al b [...] end of the examination table. Gornik et al 142. Yataco AR, Gardner AW. Acute reduction in ankle/brachial index following smoking in chronic smokers with peripheral arterial occlusive disease. Angiology. 1999;50:355–360. Crossref PubMed Google Scholar a [...] after 12 hours of smoking abstinence. b [...] change in brachial artery pressure. 143. Manning DM, Kuchirka C, Kaminski J. Miscuffing: inappropriate blood pressure cuff application. Circulation. 1983;68:763–766. Crossref PubMed Google Scholar a [...] cuff size to avoid inaccurate measurements. b [...] Class I; Level of Evidence B). 144. Pickering TG, Hall JE, Appel LJ, Falkner BE, Graves J, Hill MN, Jones DW, Kurtz T, Sheps SG, Roccella EJ. Recommendations for blood pressure measurement in humans and experimental animals, part 1: blood pressure measurement in humans: a statement for professionals from the Subcommittee of Professional and Public Education of the American Heart Association Council on High Blood Pressure Research. Circulation. 2005;111:697–716. Crossref PubMed Google Scholar a [...] cuff size to avoid inaccurate measurements. b [...] be at least 40% of the limb circumference. c [...] Class I; Level of Evidence B). 145. Mundt KA, Chambless LE, Burnham CB, Heiss G. Measuring ankle systolic blood pressure: validation of the Dinamap 1846 SX. Angiology. 1992;43:555–566. Crossref PubMed Google Scholar a [...] with the spiral cuff wrapping method. b [...] was used with the Doppler technique. c [...] studies for oscillometric methods d [...] method for the detection of PAD. e [...] Data Supplement). 146. Takahashi O, Shimbo T, Rahman M, Musa R, Kurokawa W, Yoshinaka T, Fukui T. Validation of the auscultatory method for diagnosing peripheral arterial disease. Fam Pract. 2006;23:10–14. Crossref PubMed Google Scholar a [...] Takahashi et al b [...] auscultation, c [...] was assessed in a Japanese study. d [...] Class I; Level of Evidence B). 147. Aboyans V, Lacroix P, Doucet S, Preux PM, Criqui MH, Laskar M. Diagnosis of peripheral arterial disease in general practice: can the ankle-brachial index be measured either by pulse palpation or an automatic blood pressure device? Int J Clin Pract. 2008;62:1001–1007. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] the Doppler and oscillometric techniques. d [...] and pulse palpation. e [...] and a specificity ranging from 75% to 82%. f [...] the ABI compared with the Doppler method. g [...] Data Supplement). h [...] is confirmed by 2 comparative studies i [...] method has poor reproducibility (CoV, 23%). j [...] but there are few data for other methods. k [...] Data Supplement) l [...] Class I; Level of Evidence A). 148. Adiseshiah M, Cross FW, Belsham PA. 
Ankle blood pressure measured by automatic oscillotonometry: a comparison with Doppler pressure measurements. Ann R Coll Surg Engl. 1987;69:271–273. PubMed Google Scholar a [...] studies for oscillometric methods b [...] of the actual pressure value, c [...] to detect low pressures, eg, <50 mm Hg d [...] Data Supplement). 149. Beckman JA, Higgins CO, Gerhard-Herman M. Automated oscillometric determination of the ankle-brachial index provides accuracy necessary for office practice. Hypertension. 2006;47:35–38. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] the Doppler and oscillometric techniques. d [...] of agreement (±2 SD) for the ABI were 0.25 150. Benchimol A, Bernard V, Pillois X, Hong NT, Benchimol D, Bonnet J. Validation of a new method of detecting peripheral artery disease by determination of ankle-brachial index using an automatic blood pressure device. Angiology. 2004;55:127–134. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). 151. Blebea J, Ali MK, Love M, Bodenham R, Bacik B. Automatic postoperative monitoring of infrainguinal bypass procedures. Arch Surg. 1997;132:286–291. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] PAD has been acceptable in most studies c [...] Data Supplement). 152. Cortez-Cooper MY, Supak JA, Tanaka H. A new device for automatic measurements of arterial stiffness and ankle-brachial index. Am J Cardiol. 2003;91:1519–1522, A9. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] PAD has been acceptable in most studies c [...] Data Supplement). d [...] the Doppler and oscillometric techniques. 153. Diehm N, Dick F, Czuprin C, Lawall H, Baumgartner I, Diehm C. Oscillometric measurement of ankle-brachial index in patients with suspected peripheral disease: comparison with Doppler method. Swiss Med Wkly. 2009;139:357–363. PubMed Google Scholar a [...] studies for oscillometric methods b [...] ) in patients with advanced PAD. c [...] Data Supplement). d [...] the Doppler and oscillometric techniques. 154. Ena J, Lozano T, Verdú G, Argente CR, González VL. Accuracy of ankle-brachial index obtained by automated blood pressure measuring devices in patients with diabetes mellitus. Diabetes Res Clin Pract. 2011;92:329–336. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] the Doppler and oscillometric techniques. 155. Jonsson B, Lindberg LG, Skau T, Thulesius O. Is oscillometric ankle pressure reliable in leg vascular disease? Clin Physiol. 2001;21:155–163. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] method for the detection of PAD. c [...] PAD has been acceptable in most studies d [...] of the actual pressure value, e [...] Data Supplement). f [...] the Doppler and oscillometric techniques. g [...] more than ±70 mm Hg in patients with PAD. 156. Korno M, Eldrup N, Sillesen H. Comparison of ankle-brachial index measured by an automated oscillometric apparatus with that by standard Doppler technique in vascular patients. Eur J Vasc Endovasc Surg. 2009;38:610–615. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] PAD has been acceptable in most studies c [...] of the actual pressure value, d [...] Figure 3 . e [...] Data Supplement). f [...] 95% percentiles. Reprinted from Korno et al g [...] Data Supplement). h [...] method, which has a CoV ranging from 5.1% i [...] 
but there are few data for other methods. j [...] Data Supplement) k [...] Class I; Level of Evidence A). 157. Lee BY, Campbell JS, Berkowitz P. The correlation of ankle oscillometric blood pressures and segmental pulse volumes to Doppler systolic pressures in arterial occlusive disease. J Vasc Surg. 1996;23:116–122. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] whereas in 5 other studies, 158. MacDonald E, Froggatt P, Lawrence G, Blair S. Are automated blood pressure monitors accurate enough to calculate the ankle brachial pressure index? J Clin Monit Comput. 2008;22:381–384. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] ) in patients with advanced PAD. c [...] Data Supplement). d [...] and 0.23 159. MacDougall AM, Tandon V, Wilson MP, Wilson TW. Oscillometric measurement of ankle-brachial index. Can J Cardiol. 2008;24:49–51. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). 160. Mehlsen J, Wiinberg N, Bruce C. Oscillometric blood pressure measurement: a simple method in screening for peripheral arterial disease. Clin Physiol Funct Imaging. 2008;28:426–429. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] of the actual pressure value, c [...] ) in patients with advanced PAD. d [...] Data Supplement). 161. Nukumizu Y, Matsushita M, Sakurai T, Kobayashi M, Nishikimi N, Komori K. Comparison of Doppler and oscillometric ankle blood pressure measurement in patients with angiographically documented lower extremity arterial occlusive disease. Angiology. 2007;58:303–308. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] of the actual pressure value, c [...] recording failures are frequent (from 11% d [...] ) in patients with advanced PAD. e [...] Data Supplement). 162. Pan CR, Staessen JA, Li Y, Wang JG. Comparison of three measures of the ankle-brachial blood pressure index in a general population. Hypertens Res. 2007;30:555–561. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] PAD has been acceptable in most studies c [...] Data Supplement). d [...] the Doppler and oscillometric techniques. e [...] Data Supplement). 163. Raines JK, Farrar J, Noicely K, Pena J, Davis WW, Willens HJ, Wallace DD. Ankle/brachial index in the primary care setting. Vasc Endovascular Surg. 2004;38:131–136. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). 164. Ramanathan A, Conaghan PJ, Jenkinson AD, Bishop CR. Comparison of ankle-brachial pressure index measurements using an automated oscillometric device with the standard Doppler ultrasound technique. ANZ J Surg. 2003;73:105–108. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] with 1 exception. c [...] Data Supplement). d [...] the Doppler and oscillometric techniques. e [...] studies varied from −0.19 to 0.14 165. Richart T, Kuznetsova T, Wizner B, Struijker-Boudier HA, Staessen JA. Validation of automated oscillometric versus manual measurement of the ankle-brachial index. Hypertens Res. 2009;32:884–888. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] of the actual pressure value, c [...] Data Supplement). d [...] Data Supplement). e [...] been challenged recently by Richart et al, f [...] Class I; Level of Evidence A). 166. Salles-Cunha SX, Vincent DG, Towne JB, Bernhard VM. 
Noninvasive ankle pressure measurements by oscillometry. Tex Heart Inst J. 1982;9:349–357. PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] whereas in 5 other studies, 167. Bonham PA, Cappuccio M, Hulsey T, Michel Y, Kelechi T, Jenkins C, Robison J. Are ankle and toe brachial indices (ABI-TBI) obtained by a pocket Doppler interchangeable with those obtained by standard laboratory equipment? J Wound Ostomy Continence Nurs. 2007;34:35–44. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] whereas in 5 other studies, 168. Carmo GA, Mandil A, Nascimento BR, Arantes BD, Bittencourt JC, Falqueto EB, Ribeiro AL. Can we measure the ankle-brachial index using only a stethoscope? A pilot study.Fam Pract. 2009;26:22–26. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). 169. Khandanpour N, Armon MP, Jennings B, Clark A, Meyer FJ. Photoplethysmography, an easy and accurate method for measuring ankle brachial pressure index: can photoplethysmography replace Doppler? Vasc Endovascular Surg. 2009;43:578–582. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] photoplethysmography, d [...] in several series of patients with PAD. e [...] Doppler method ranged from −0.23 to 0.24. 170. Ludyga T, Kuczmik WB, Kazibudzki M, Nowakowski P, Orawczyk T, Glanowski M, Kucharzewski M, Ziaja D, Szaniewski K, Ziaja K. Ankle-brachial pressure index estimated by laser Doppler in patients suffering from peripheral arterial obstructive disease. Ann Vasc Surg. 2007;21:452–457. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] was used for ABI measurements in 1 study. 171. Migliacci R, Nasorri R, Ricciarini P, Gresele P. Ankle-brachial index measured by palpation for the diagnosis of peripheral arterial disease. Fam Pract. 2008;25:228–232. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] and pulse palpation. d [...] and a specificity ranging from 75% to 82%. 172. Nicolai SP, Kruidenier LM, Rouwet EV, Wetzels-Gulpers L, Rozeman CA, Prins MH, Teijink JA Pocket Doppler and vascular laboratory equipment yield comparable results for ankle brachial index measurement. BMC Cardiovasc Disord. 2008: 26. Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). 173. Sadiq S, Chithriki M. Arterial pressure measurements using infrared photosensors: comparison with CW Doppler. Clin Physiol. 2001;21:129–132. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] photoplethysmography, d [...] in several series of patients with PAD. 174. Whiteley MS, Fox AD, Horrocks M. Photoplethysmography can replace hand-held Doppler in the measurement of ankle/brachial indices. Ann R Coll Surg Engl. 1998;80:96–98. PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] photoplethysmography, d [...] in several series of patients with PAD. 175. Fowkes FG, Housley E, Macintyre CC, Prescott RJ, Ruckley CV. Variability of ankle and brachial systolic pressures in the measurement of atherosclerotic peripheral arterial disease. J Epidemiol Community Health. 1988;42:128–133. Crossref PubMed Google Scholar a [...] method for the detection of PAD. b [...] measurement and calculation, Fowkes et al c [...] and Fowkes et al 176. 
Stoffers J, Kaiser V, Kester A, Schouten H, Knottnerus A. Peripheral arterial occlusive disease in general practice: the reproducibility of the ankle-arm systolic pressure ratio. Scand J Prim Health Care. 1991;9:109–114. Crossref PubMed Google Scholar a [...] method for the detection of PAD. b [...] the Doppler and oscillometric techniques. c [...] and −0.18 to 0.35, 177. Kaiser V, Kester AD, Stoffers HE, Kitslaar PJ, Knottnerus JA. The influence of experience on the reproducibility of the ankle-brachial systolic pressure ratio in peripheral arterial occlusive disease. Eur J Vasc Endovasc Surg. 1999;18:25–29. Crossref PubMed Google Scholar a [...] method for the detection of PAD. b [...] but there are few data for other methods. c [...] Data Supplement) 178. Aboyans V, Lacroix P, Lebourdon A, Preux PM, Ferrieres J, Laskar M. The intra- and interobserver variability of ankle-arm blood pressure index according to its mode of calculation. J Clin Epidemiol. 2003;56:215–220. Crossref PubMed Google Scholar a [...] PAD has been acceptable in most studies b [...] or even 80 mm Hg, c [...] 161 to 44% d [...] ) in patients with advanced PAD. e [...] the Doppler and oscillometric techniques. f [...] pressure are trivial in direct comparisons. 179. Baker JD, Dix DE. Variability of Doppler ankle pressures with arterial occlusive disease: an evaluation of ankle index and brachial-ankle pressure gradient. Surgery. 1981;89:134–137. PubMed Google Scholar a [...] of the actual pressure value, b [...] ) in patients with advanced PAD. c [...] cardiovascular conditions, including PAD. 180. Johnston KW, Hosang MY, Andrews DF. Reproducibility of noninvasive vascular laboratory measurements of the peripheral circulation. J Vasc Surg. 1987;6:147–151. Go to Citation Crossref PubMed Google Scholar 181. de Graaff JC, Ubbink DT, Legemate DA, de Haan RJ, Jacobs MJ. Interobserver and intraobserver reproducibility of peripheral blood and oxygen pressure measurements in the assessment of lower extremity arterial disease. J Vasc Surg. 2001;33:1033–1040. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] Data Supplement) d [...] to that of arm pressures in 3 reports, e [...] the ABI is a valid biological parameter. f [...] Class I; Level of Evidence A). g [...] at the ankle than by which artery is used. 182. Holland-Letz T, Endres HG, Biedermann S, Mahn M, Kunert J, Groh S, Pittrow D, von Bilderling P, Sternitzky R, Diehm C. Reproducibility and reliability of the ankle-brachial index as assessed by vascular experts, family physicians and nurses. Vasc Med. 2007;12:105–112. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] Data Supplement) d [...] Class I; Level of Evidence A). 183. Espeland MA, Regensteiner JG, Jaramillo SA, Gregg E, Knowler WC, Wagenknecht LE, Bahnson J, Haffner S, Hill J, Hiatt WR; Look AHEAD Study Group. Measurement characteristics of the ankle-brachial index: results from the Action for Health in Diabetes study. Vasc Med. 2008;13:225–233. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] when measured by skilled examiners. c [...] Class I; Level of Evidence A). d [...] pressure are trivial in direct comparisons. e [...] at the ankle than by which artery is used. 184. Weatherley BD, Chambless LE, Heiss G, Catellier DJ, Ellison CR. 
The reliability of the ankle-brachial index in the Atherosclerosis Risk in Communities (ARIC) study and the NHLBI Family Heart Study (FHS). BMC Cardiovasc Disord. 2006: 7. Google Scholar a [...] Data Supplement). b [...] is confirmed by 2 comparative studies c [...] but there are few data for other methods. d [...] in the ARIC study, showing a CoV of 11%. e [...] Data Supplement) f [...] Class I; Level of Evidence A). g [...] Atherosclerosis Risk in Communities Study 185. Forster FK, Turney D. Oscillometric determination of diastolic, mean and systolic blood pressure: a numerical model. J Biomech Eng. 1986;108:359–364. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] to 20.2%. c [...] Class I; Level of Evidence A). 186. Hamel JF, Foucaud D, Fanello S. Comparison of the automated oscillometric method with the gold standard Doppler ultrasound method to access the ankle-brachial pressure index. Angiology. 2010;61:487–491. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] Class I; Level of Evidence A). 187. Ursino M, Cristalli C. A mathematical study of some biomechanical factors affecting the oscillometric blood pressure measurement. IEEE Trans Biomed Eng. 1996;43:761–778. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] Class I; Level of Evidence A). 188. van Montfrans GA. Oscillometric blood pressure measurement: progress and problems. Blood Press Monit. 2001;6:287–290. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] Data Supplement) d [...] people compared with patients with PAD. e [...] Class I; Level of Evidence A). 189. Benchimol D, Pillois X, Benchimol A, Houitte A, Sagardiluz P, Tortelier L, Bonnet J. Accuracy of ankle-brachial index using an automatic blood pressure device to detect peripheral artery disease in preventive medicine. Arch Cardiovasc Dis. 2009;102:519–524. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] varies widely in the literature, from 4.7% c [...] to that of arm pressures in 3 reports, d [...] Class I; Level of Evidence A). e [...] Class I; Level of Evidence A). 190. Kawamura T. Assessing ankle-brachial index (ABI) by using automated oscillometric devices. Arq Bras Cardiol. 2008;90:294–298. PubMed Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] Data Supplement) d [...] could be the point estimate ±0.21. e [...] do not vary with the average ABI. f [...] at the ankle than by which artery is used. g [...] Table 4). 191. Arveschoug AK, Revsbech P, Brochner-Mortensen J. Sources of variation in the determination of distal blood pressure measured using the strain gauge technique. Clin Physiol. 1998;18:361–368. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] Data Supplement) 192. Brown J, Asongwed E, Chesbro S, John E. Inter-rater and intra-rater reliability of ankle brachial index (ABI) measurements using a stethoscope and Doppler. Paper presented at: American Physical Therapy Association Meeting; 2008. Accessed February 10, 2011. Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] Data Supplement) 193. Clyne CA, Jones T, Moss S, Ensell J. The use of radioactive oxygen to study muscle function in peripheral vascular disease. Surg Gynecol Obstet. 1979;149:225–228. PubMed Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] 
Data Supplement) d [...] do not vary with the average ABI. 194. Osmundson PJ, O'Fallon WM, Clements IP, Kazmier FJ, Zimmerman BR, Palumbo PJ. Reproducibility of noninvasive tests of peripheral occlusive arterial disease. J Vasc Surg. 1985;2:678–683. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] Data Supplement) d [...] or an oscillometry for the arms. e [...] However, Osmundson et al 195. Fisher CM, Burnett A, Makeham V, Kidd J, Glasson M, Harris JP. Variation in measurement of ankle-brachial pressure index in routine clinical practice. J Vasc Surg. 1996;24:871–875. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] Data Supplement) d [...] or an oscillometry for the arms. e [...] obtained by the PT versus the DP artery. 196. Jeelani NU, Braithwaite BD, Tomlin C, MacSweeney ST. Variation of method for measurement of brachial artery pressure significantly affects ankle-brachial pressure index values. Eur J Vasc Endovasc Surg. 2000;20:25–28. Crossref PubMed Google Scholar a [...] Data Supplement). b [...] but there are few data for other methods. c [...] Data Supplement) d [...] whereas in 5 other studies, e [...] exercise was 10% and 21%, respectively. 197. Atsma F, Bartelink ML, Grobbee DE, van der Schouw YT. Best reproducibility of the ankle-arm index was calculated using Doppler and dividing highest ankle pressure by highest arm pressure. J Clin Epidemiol. 2005;58:1282–1288. Go to Citation Crossref PubMed Google Scholar 198. van Langen H, van Gurp J, Rubbens L. Interobserver variability of ankle-brachial index measurements at rest and post exercise in patients with intermittent claudication. Vasc Med. 2009;14:221–226. Go to Citation Crossref PubMed Google Scholar 199. Ray SA, Srodon PD, Taylor RS, Dormandy JA. Reliability of ankle:brachial pressure index measurement by junior doctors. Br J Surg. 1994;81:188–190. Go to Citation Crossref PubMed Google Scholar 200. Endres HG, Hucke C, Holland-Letz T, Trampisch HJ. A new efficient trial design for assessing reliability of ankle-brachial index measures by three different observer groups. BMC Cardiovasc Disord. 2006: 33. Go to Citation Google Scholar 201. Osborn LA, Vernon SM, Reynolds B, Timm TC, Allen K. Screening for subclavian artery stenosis in patients who are candidates for coronary bypass surgery. Catheter Cardiovasc Interv. 2002;56:162–165. Crossref PubMed Google Scholar a [...] whereas in 5 other studies, b [...] of subclavian artery stenosis. Osborn et al 202. Vierron E, Halimi JM, Tichet J, Balkau B, Cogneau J, Giraudeau B; DESIR Study Group. Center effect on ankle-brachial index measurement when using the reference method (Doppler and manometer): results from a large cohort study. Am J Hypertens. 2009;22:718–722. Crossref PubMed Google Scholar a [...] the measurements in different laboratories, b [...] at the ankle than by which artery is used. 203. Aboyans V, Criqui MH, McDermott MM, Allison MA, Denenberg JO, Shadman R, Fronek A. The vital prognosis of subclavian stenosis. J Am Coll Cardiol. 2007;49:1540–1545. Go to Citation Crossref PubMed Google Scholar 204. Shadman R, Criqui MH, Bundens WP, Fronek A, Denenberg JO, Gamst AC, McDermott MM. Subclavian artery stenosis: prevalence, risk factors, and association with cardiovascular diseases. J Am Coll Cardiol. 2004;44:618–623. Go to Citation Crossref PubMed Google Scholar 205. Clark CE, Campbell JL, Powell RJ. 
The interarm blood pressure difference as predictor of cardiovascular events in patients with hypertension in primary care: cohort study. J Hum Hypertens. 2007;21:633–638. Go to Citation Crossref PubMed Google Scholar 206. Orme S, Ralph SG, Birchall A, Lawson-Matthew P, McLean K, Channer KS. The normal range for inter-arm differences in blood pressure. Age Ageing. 1999;28:537–542. Go to Citation Crossref PubMed Google Scholar 207. Aboyans V, Kamineni A, Allison MA, McDermott MM, Crouse JR, Ni H, Szklo M, Criqui MH. The epidemiology of subclavian stenosis and its association with markers of subclinical atherosclerosis: the Multi-Ethnic Study of Atherosclerosis (MESA). Atherosclerosis. 2010;211:266–270. Go to Citation Crossref PubMed Google Scholar 208. Espinola-Klein C, Rupprecht HJ, Bickel C, Lackner K, Savvidis S, Messow CM, Munzel T, Blankenberg S; AtheroGene Investigators. Different calculations of ankle-brachial index and their impact on cardiovascular risk prediction. Circulation. 2008;118:961–967. Crossref PubMed Google Scholar a [...] ABI for prediction of events are limited. b [...] infarction, stroke, and CVD death. 209. O'Hare AM, Katz R, Shlipak MG, Cushman M, Newman AB. Mortality and cardiovascular risk across the ankle-arm index spectrum: results from the Cardiovascular Health Study. Circulation. 2006;113:388–393. Crossref PubMed Google Scholar a [...] ABI for prediction of events are limited. b [...] the ABI to predict cardiovascular events. 210. Fowkes FG, Price JF, Stewart MC, Butcher I, Leng GC, Pell AC, Sandercock PA, Fox KA, Lowe GD, Murray GD; Aspirin for Asymptomatic Atherosclerosis Trialists. Aspirin for prevention of cardiovascular events in a general population screened for a low ankle brachial index: a randomized controlled trial. JAMA. 2010;303:841–848. Crossref PubMed Google Scholar a [...] Table 4). b [...] Included in the ABI Collaboration Study c [...] from the lowest of the 4 ankle arteries. 211. Hoogeveen EK, Kostense PJ, Beks PJ, Mackaay AJ, Jakobs C, Bouter LM, Heine RJ, Stehouwer CD. Hyperhomocysteinemia is associated with an increased risk of cardiovascular disease, especially in non-insulin-dependent diabetes mellitus: a population-based study. Arterioscler Thromb Vasc Biol. 1998;18:133–138. Crossref PubMed Google Scholar 212. Fowler B, Jamrozik K, Norman P, Allen Y. Prevalence of peripheral arterial disease: persistence of excess risk in former smokers. Aust N Z J Public Health. 2002;26:219–224. Crossref PubMed Google Scholar a [...] Table 4). b [...] Health in Men study c [...] Class I; Level of Evidence A). 213. Jager A, Kostense PJ, Ruhe HG, Heine RJ, Nijpels G, Dekker JM, Bouter LM, Stehouwer CD. Microalbuminuria and peripheral arterial disease are independent predictors of cardiovascular and all-cause mortality, especially among hypertensive subjects: five-year follow-up of the Hoorn Study. Arterioscler Thromb Vasc Biol. 1999;19:617–624. Crossref PubMed Google Scholar a [...] Table 4). b [...] Hoorn study 214. McDermott MM, Guralnik JM, Albay M, Bandinelli S, Miniati B, Ferrucci L. Impairments of muscles and nerves associated with peripheral arterial disease and their relationship with lower extremity functioning: the InCHIANTI Study. J Am Geriatr Soc. 2004;52:405–410. Crossref PubMed Google Scholar a [...] Table 4). b [...] InCHIANTI 215. Rooke TW, Hirsch AT, Misra S, Sidawy AN, Beckman JA, Findeiss LK, Golzarian J, Gornik HL, Halperin JL, Jaff MR, Moneta GL, Olin JW, Stanley JC, White CJ, White JV, Zierler RE. 
ACCF/AHA focused update of the guideline for the management of patients with peripheral artery disease (updating the 2005 guideline): a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. Circulation. 2011;124:2020–2045.
216. Diehm C, Allenberg JR, Pittrow D, Mahn M, Tepohl G, Haberl RL, Darius H, Burghaus I, Trampisch HJ; German Epidemiological Trial on Ankle Brachial Index Study Group. Mortality and vascular morbidity in older adults with asymptomatic versus symptomatic peripheral artery disease. Circulation. 2009;120:2053–2061.
217. American Diabetes Association. Peripheral arterial disease in people with diabetes. Diabetes Care. 2003;26:3333–3341.
218. Kennedy M, Solomon C, Manolio TA, Criqui MH, Newman AB, Polak JF, Burke GL, Enright P, Cushman M. Risk factors for declining ankle-brachial index in men and women 65 years or older: the Cardiovascular Health Study. Arch Intern Med. 2005;165:1896–1902.
Information & Authors
Published in: Circulation, Volume 126, Number 24, 11 December 2012, pages 2890–2909. PubMed: 23159553. Copyright © 2012 American Heart Association, Inc.
History: Published online 16 November 2012; published in print 11 December 2012.
Keywords: AHA Scientific Statements; ankle brachial index
Subjects: Statements and Guidelines
Authors: Victor Aboyans, MD, PhD, FAHA, Chair; Michael H. Criqui, MD, MPH, FAHA, Co-Chair; Pierre Abraham, MD, PhD; Matthew A. Allison, MD, MPH, FAHA; Mark A. Creager, MD, FAHA; Curt Diehm, MD, PhD; F. Gerry R. Fowkes, MBChB, PhD, FAHA; William R. Hiatt, MD, FAHA; Björn Jönsson, MD, PhD; Philippe Lacroix, MD; Benoît Marin, MD; Mary M. McDermott, MD, FAHA; Lars Norgren, MD, PhD; Reena L. Pande, MD, MSc; Pierre-Marie Preux, MD, PhD;
H.E. (Jelle) Stoffers, MD, PhD; and Diane Treat-Jacobson, PhD, RN, FAHA; on behalf of the American Heart Association Council on Peripheral Vascular Disease, Council on Epidemiology and Prevention, Council on Clinical Cardiology, Council on Cardiovascular Nursing, Council on Cardiovascular Radiology and Intervention, and Council on Cardiovascular Surgery and Anesthesia.
Notes
The American Heart Association makes every effort to avoid any actual or potential conflicts of interest that may arise as a result of an outside relationship or a personal, professional, or business interest of a member of the writing panel. Specifically, all members of the writing group are required to complete and submit a Disclosure Questionnaire showing all such relationships that might be perceived as real or potential conflicts of interest.
This statement was approved by the American Heart Association Science Advisory and Coordinating Committee on September 10, 2012. A copy of the document is available at by selecting either the “By Topic” link or the “By Publication Date” link. To purchase additional reprints, call 843-216-2533 or e-mail [email protected].
The online-only Data Supplement is available with this article at
The American Heart Association requests that this document be cited as follows: Aboyans V, Criqui MH, Abraham P, Allison MA, Creager MA, Diehm C, Fowkes FGR, Hiatt WR, Jönsson B, Lacroix P, Marin B, McDermott MM, Norgren L, Pande RL, Preux P-M, Stoffers HE, Treat-Jacobson D; on behalf of the American Heart Association Council on Peripheral Vascular Disease, Council on Epidemiology and Prevention, Council on Clinical Cardiology, Council on Cardiovascular Nursing, Council on Cardiovascular Radiology and Intervention, and Council on Cardiovascular Surgery and Anesthesia. Measurement and interpretation of the ankle-brachial index: a scientific statement from the American Heart Association. Circulation. 2012;126:2890–2909.
Expert peer review of AHA Scientific Statements is conducted by the AHA Office of Science Operations. For more on AHA statements and guidelines development, visit and select the “Policies and Development” link.
Permissions: Multiple copies, modification, alteration, enhancement, and/or distribution of this document are not permitted without the express permission of the American Heart Association. Instructions for obtaining permission are located at A link to the “Copyright Permissions Request Form” appears on the right side of the page.
Disclosures
Writing Group Disclosures
| Writing Group Member | Employment | Research Grant | Other Research Support | Speakers' Bureau/Honoraria | Expert Witness | Ownership Interest | Consultant/Advisory Board | Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Victor Aboyans | Dupuytren University Hospital | None | None | None | None | None | None | None |
| Michael H. Criqui | University of California, San Diego | None | None | None | None | None | None | None |
| Pierre Abraham | University Hospital of Angers | None | None | None | None | None | None | None |
| Matthew A. Allison | University of California, San Diego | None | None | None | None | None | None | None |
| Mark A. Creager | Brigham and Women's Hospital | Merck†; NIH† | None | None | None | None | Genzyme; Merck; NormOxys | None |
| Curt Diehm | Karlsbad Clinic/University of Heidelberg | None | None | None | None | None | None | None |
| F. Gerry R. Fowkes | University of Edinburgh | None | None | None | None | None | None | None |
| William R. Hiatt | University of Colorado and CPC Clinical Research | AstraZeneca; Cytokinetics†; Diffusion Pharma†; GlaxoSmithKline†; Kowa†; Otsuka Japan†; Sanofi-Aventis† | None | None | None | None | None | None |
| Björn Jönsson | Linköping University Hospital | None | None | None | None | LB Index AB (Sweden)† (50% ownership in company developing medical equipment) | None | None |
| Philippe Lacroix | Limoges University | AstraZeneca-France†; Bayer-France†; LeoPharma-France; Novartis-France; Sanofi-Aventis France†; Schering Plough-France; Servier-France | None | None | None | None | None | None |
| Benoît Marin | Limoges Teaching Hospital | None | None | None | None | None | None | None |
| Mary M. McDermott | Northwestern University | NHLBI† | None | None | None | None | Foundation for Informed Medical Decision Making | JAMA Consulting Editor† |
| Lars Norgren | University Hospital, Orebro, Sweden | None | None | None | None | None | None | None |
| Reena L. Pande | Brigham and Women's Hospital | None | None | None | None | None | None | None |
| Pierre-Marie Preux | University of Limoges | None | None | None | None | None | None | None |
| H.E. (Jelle) Stoffers | Maastricht University | None | None | None | None | None | None | None |
| Diane Treat-Jacobson | University of Minnesota | NHLBI† | None | None | None | None | None | None |
This table represents the relationships of writing group members that may be perceived as actual or reasonably perceived conflicts of interest as reported on the Disclosure Questionnaire, which all members of the writing group are required to complete and submit. A relationship is considered to be “significant” if (a) the person receives $10 000 or more during any 12-month period, or 5% or more of the person's gross income; or (b) the person owns 5% or more of the voting stock or share of the entity, or owns $10 000 or more of the fair market value of the entity. A relationship is considered to be “modest” if it is less than “significant” under the preceding definition. Modest. †Significant.
Reviewer Disclosures
| Reviewer | Employment | Research Grant | Other Research Support | Speakers' Bureau/Honoraria | Expert Witness | Ownership Interest | Consultant/Advisory Board | Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Joshua Beckman | Brigham and Women's Hospital | None | None | None | None | None | Novartis | None |
| Joanne Murabito | Boston University Medical Center and Framingham Heart Study | NIH/NHLBI† | None | None | None | None | None | None |
| Roberta Oka | VA Palo Alto HCS | None | None | None | None | None | None | None |
This table represents the relationships of reviewers that may be perceived as actual or reasonably perceived conflicts of interest as reported on the Disclosure Questionnaire, which all reviewers are required to complete and submit. A relationship is considered to be “significant” if (a) the person receives $10 000 or more during any 12-month period, or 5% or more of the person's gross income; or (b) the person owns 5% or more of the voting stock or share of the entity, or owns $10 000 or more of the fair market value of the entity. A relationship is considered to be “modest” if it is less than “significant” under the preceding definition. Modest. †Significant.
Figures
Figure 1. Hazard ratios for total mortality in men and women by ankle-brachial index at baseline for all studies combined in the ABI Collaboration. Reproduced from Fowkes et al6 with permission from the publisher. Copyright © 2008, American Medical Association.
Figure 2. Ankle pressure measurement with a Doppler probe: posterior tibial (A) and dorsalis pedis (B) arteries.
Figure 3. Difference between ankle pressures measured with an oscillometric device (CASMED 740) and Doppler (y axis) according to the ankle pressure bands obtained with Doppler (x axis). In the box plot, the line indicates median percentiles and outer markers indicate 5% and 95% percentiles. Reprinted from Korno et al156 with permission from the publisher. © Copyright 2009, Elsevier.
Tables
Table 1. The Diagnostic Performances of the Ankle-Brachial Index Versus Other Methods: Receiver-Operating Characteristic Curve Analysis
Table 2. Studies Assessing Optimal Ankle-Brachial Index Cutoff for the Diagnosis of Peripheral Artery Disease
Table 3. Limb Pressure Measurement Protocol for the Determination of the Ankle-Brachial Index With the Doppler Method
Table 4. Ankle-Brachial Index Modes of Calculation in the 16 Population Studies Included in the ABI Collaboration Study210
References
1. Winsor T. Influence of arterial disease on the systolic blood pressure gradients of the extremity. Am J Med Sci. 1950;220:117–126.
2. Carter SA. Indirect systolic pressures and pulse waves in arterial occlusive diseases of the lower extremities. Circulation. 1968;37:624–637.
3. Yao ST, Hobbs JT, Irvine WT. Ankle systolic pressure measurements in arterial disease affecting the lower extremities. Br J Surg. 1969;56:676–679.
4. Criqui MH, Langer RD, Fronek A, Feigelson HS, Klauber MR, McCann TJ, Browner D. Mortality over a period of 10 years in patients with peripheral arterial disease. N Engl J Med. 1992;326:381–386.
5. McDermott MM, Guralnik JM, Tian L, Liu K, Ferrucci L, Liao Y, Sharma L, Criqui MH. Associations of borderline and low normal ankle-brachial index values with functional decline at 5-year follow-up: the WALCS (Walking and Leg Circulation Study). J Am Coll Cardiol. 2009;53:1056–1062.
6. Ankle Brachial Index Collaboration; Fowkes FG, Murray GD, Butcher I, Heald CL, Lee RJ, Chambless LE, Folsom AR, Hirsch AT, Dramaix M, deBacker G, Wautrecht JC, Kornitzer M, Newman AB, Cushman M, Sutton-Tyrrell K, Lee AJ, Price JF, d'Agostino RB, Murabito JM, Norman PE, Jamrozik K, Curb JD, Masaki KH, Rodriguez BL, Dekker JM, Bouter LM, Heine RJ, Nijpels G, Stehouwer CD, Ferrucci L, McDermott MM, Stoffers HE, Hooi JD, Knottnerus JA, Ogren M, Hedblad B, Witteman JC, Breteler MM, Hunink MG, Hofman A, Criqui MH, Langer RD, Fronek A, Hiatt WR, Hamman R, Resnick HE, Guralnik J. Ankle brachial index combined with Framingham risk score to predict cardiovascular events and mortality: a meta-analysis. JAMA. 2008;300:197–208.
7. Lange SF, Trampisch HJ, Pittrow D, Darius H, Mahn M, Allenberg JR, Tepohl G, Haberl RL, Diehm C; getABI Study Group. Profound influence of different methods for determination of the ankle brachial index on the prevalence estimate of peripheral arterial disease. BMC Public Health. 2007:147.
8. Aboyans V, Lacroix P, Preux PM, Vergnenegre A, Ferrieres J, Laskar M. Variability of ankle-arm index in general population according to its mode of calculation. Int Angiol. 2002;21:237–243.
9. Allison MA, Aboyans V, Granston T, McDermott MM, Kamineni A, Ni H, Criqui MH. The relevance of different methods of calculating the ankle-brachial index: the Multi-Ethnic Study of Atherosclerosis. Am J Epidemiol. 2010;171:368–376.
10. Klein S, Hage JJ. Measurement, calculation, and normal range of the ankle-arm index: a bibliometric analysis and recommendation for standardization. Ann Vasc Surg. 2006;20:282–292.
11. Dachun X, Jue L, Liling Z, Yawei X, Dayi H, Pagoto SL, Yunsheng M. Sensitivity and specificity of the ankle-brachial index to diagnose peripheral artery disease: a structured review. Vasc Med. 2010;15:361–369.
12. Stein JH, Korcarz CE, Hurst RT, Lonn E, Kendall CB, Mohler ER, Najjar SS, Rembold CM, Post WS; American Society of Echocardiography Carotid Intima-Media Thickness Task Force. Use of carotid ultrasound to identify subclinical vascular disease and evaluate cardiovascular disease risk: a consensus statement from the American Society of Echocardiography Carotid Intima-Media Thickness Task Force: endorsed by the Society for Vascular Medicine. J Am Soc Echocardiogr. 2008;21:93–111; quiz 189–190.
13. Greenland P, Bonow RO, Brundage BH, Budoff MJ, Eisenberg MJ, Grundy SM, Lauer MS, Post WS, Raggi P, Redberg RF, Rodgers GP, Shaw LJ, Taylor AJ, Weintraub WS. ACCF/AHA 2007 clinical expert consensus document on coronary artery calcium scoring by computed tomography in global cardiovascular risk assessment and in evaluation of patients with chest pain: a report of the American College of Cardiology Foundation Clinical Expert Consensus Task Force (ACCF/AHA Writing Committee to Update the 2000 Expert Consensus Document on Electron Beam Computed Tomography) developed in collaboration with the Society of Atherosclerosis Imaging and Prevention and the Society of Cardiovascular Computed Tomography. Circulation. 2007;115:402–426.
14. Hiatt WR, Goldstone J, Smith SC, McDermott M, Moneta G, Oka R, Newman AB, Pearce WH; American Heart Association Writing Group 1. Atherosclerotic peripheral vascular disease symposium II: nomenclature for vascular diseases. Circulation. 2008;118:2826–2829.
15. Safar ME, Protogerou AD, Blacher J. Statins, central blood pressure, and blood pressure amplification. Circulation. 2009;119:9–12.
16. Murgo JP, Westerhof N, Giolma JP, Altobelli SA. Aortic input impedance in normal man: relationship to pressure wave forms. Circulation. 1980;62:105–116.
17. Latham RD, Westerhof N, Sipkema P, Rubal BJ, Reuderink P, Murgo JP. Regional wave travel and reflections along the human aorta: a study with six simultaneous micromanometric pressures. Circulation. 1985;72:1257–1269.
18. Hope SA, Tay DB, Meredith IT, Cameron JD. Waveform dispersion, not reflection, may be the major determinant of aortic pressure wave morphology. Am J Physiol Heart Circ Physiol. 2005;289:H2497–H2502.
19. Wang JJ, Parker KH. Wave propagation in a model of the arterial circulation. J Biomech. 2004;37:457–470.
20. Tsamis A, Stergiopulos N. Arterial remodeling in response to hypertension using a constituent-based model. Am J Physiol Heart Circ Physiol. 2007;293:H3130–H3139.
21. Humphrey JD. Mechanisms of arterial remodeling in hypertension: coupled roles of wall shear and intramural stress. Hypertension. 2008;52:195–200.
22. Katz S, Globerman A, Avitzour M, Dolfin T. The ankle-brachial index in normal neonates and infants is significantly lower than in older children and adults. J Pediatr Surg. 1997;32:269–271.
Go to Citation Crossref PubMed Google Scholar 23. Smith FB, Lee AJ, Price JF, van Wijk MC, Fowkes FG. Changes in ankle brachial index in symptomatic and asymptomatic subjects in the general population. J Vasc Surg. 2003;38:1323–1330. Crossref PubMed Google Scholar a [...] 0.03 higher than that of the left leg. b [...] prevalence and progression of PAD. c [...] been reported in many population studies. d [...] was reported in the general population. e [...] and in the general population. 24. Hiatt WR, Hoag S, Hamman RF. Effect of diagnostic criteria on the prevalence of peripheral arterial disease: the San Luis Valley Diabetes Study. Circulation. 1995;91:1472–1479. Crossref PubMed Google Scholar a [...] 0.03 higher than that of the left leg. b [...] direct correlation between height and ABI. c [...] in the San Luis Valley Diabetes Study, d [...] does not eliminate observed differences. e [...] Table 4). f [...] San Luis Valley study 25. Bird CE, Criqui MH, Fronek A, Denenberg JO, Klauber MR, Langer RD. Quantitative and qualitative progression of peripheral arterial disease by non-invasive testing. Vasc Med. 1999;4:15–21. Crossref PubMed Google Scholar a [...] prevalence and progression of PAD. b [...] of ABI progression in clinical populations 26. London GM, Guerin AP, Pannier B, Marchais SJ, Stimpel M. Influence of sex on arterial hemodynamics and blood pressure: role of body height. Hypertension. 1995;26:514–519. Crossref PubMed Google Scholar a [...] direct correlation between height and ABI. b [...] been reported in many population studies. 27. Aboyans V, Criqui MH, McClelland RL, Allison MA, McDermott MM, Goff DC, Manolio TA. Intrinsic contribution of gender and ethnicity to normal ankle-brachial index values: the Multi-Ethnic Study of Atherosclerosis (MESA). J Vasc Surg. 2007;45:319–327. Crossref PubMed Google Scholar a [...] for sex, ethnicity, and risk factors. b [...] been reported in many population studies. c [...] does not eliminate observed differences. d [...] risk factors for atherosclerosis. e [...] counterparts after multivariate adjustment, f [...] heart rate did not correlate with the ABI. 28. Stoffers HE, Kester AD, Kaiser V, Rinkens PE, Kitslaar PJ, Knottnerus JA. The diagnostic value of the measurement of the ankle-brachial systolic pressure index in primary health care. J Clin Epidemiol. 1996;49:1401–1405. Crossref PubMed Google Scholar a [...] been reported in many population studies. b [...] and provides diagnostic performances. c [...] Table 2). d [...] based on ROC curve analysis, Stoffers et al e [...] characteristics and disease prevalence. f [...] Stoffers et al, g [...] to 13.0% h [...] a 99% negative predictive value for PAD. i [...] Class IIa; Level of Evidence B). 29. Zheng ZJ, Sharrett AR, Chambless LE, Rosamond WD, Nieto FJ, Sheps DS, Dobs A, Evans GW, Heiss G. Associations of ankle-brachial index with clinical coronary heart disease, stroke and preclinical carotid and popliteal atherosclerosis: the Atherosclerosis Risk in Communities (ARIC) Study. Atherosclerosis. 1997;131:115–125. Crossref PubMed Google Scholar a [...] been reported in many population studies. b [...] included individuals with existing CVD. c [...] in the range of 1.3 to 4.2 among 9 studies. d [...] and CVD differ by sex. In the ARIC study, 30. Zheng ZJ, Rosamond WD, Chambless LE, Nieto FJ, Barnes RW, Hutchinson RG, Tyroler HA, Heiss G; ARIC Investigators. 
Lower extremity arterial disease assessed by ankle-brachial index in a middle-aged population of African Americans and whites: the Atherosclerosis Risk in Communities (ARIC) Study. Am J Prev Med. 2005; 29 (suppl 1): 42– 49. Crossref PubMed Google Scholar a [...] does not eliminate observed differences. b [...] Risk in Communities Study (ARIC). c [...] homocysteine, and chronic kidney disease). 31. Carmelli D, Fabsitz RR, Swan GE, Reed T, Miller B, Wolf PA. Contribution of genetic and environmental influences to ankle-brachial blood pressure index in the NHLBI Twin Study: National Heart, Lung, and Blood Institute. Am J Epidemiol. 2000;151:452–458. Go to Citation Crossref PubMed Google Scholar 32. Allison MA, Peralta CA, Wassel CL, Aboyans V, Arnett DK, Cushman M, Eng J, Ix J, Rich SS, Criqui MH. Genetic ancestry and lower extremity peripheral artery disease in the Multi-Ethnic Study of Atherosclerosis. Vasc Med. 2010;15:351–359. Go to Citation Crossref PubMed Google Scholar 33. Su HM, Lee KT, Chu CS, Lee MY, Lin TH, Voon WC, Sheu SH, Lai WT. Effects of heart rate on brachial-ankle pulse wave velocity and ankle-brachial pressure index in patients without significant organic heart disease. Angiology. 2007;58:67–74. Go to Citation Crossref PubMed Google Scholar 34. Wilkinson IB, MacCallum H, Flint L, Cockcroft JR, Newby DE, Webb DJ. The influence of heart rate on augmentation index and central arterial pressure in humans. J Physiol. 2000; 525 (pt 1): 263– 270. Crossref PubMed Google Scholar a [...] reported in subjects without heart disease b [...] In 1 study, c [...] magnetic resonance angiography, 35. Abraham P, Desvaux B, Colin D, Leftheriotis G, Saumet JL. Heart rate-corrected ankle-to-arm index in the diagnosis of moderate lower extremity arterial disease. Angiology. 1995;46:673–677. Go to Citation Crossref PubMed Google Scholar 36. Su HM, Chang JM, Lin FH, Chen SC, Voon WC, Cheng KH, Wang CS, Lin TH, Lai WT, Sheu SH. Influence of different measurement time points on brachial-ankle pulse wave velocity and ankle-brachial index in hemodialysis patients. Hypertens Res. 2007;30:965–970. Go to Citation Crossref PubMed Google Scholar 37. Allen J, Oates CP, Henderson J, Jago J, Whittingham TA, Chamberlain J, Jones NA, Murray A. Comparison of lower limb arterial assessments using color-duplex ultrasound and ankle/brachial pressure index measurements. Angiology. 1996;47:225–232. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] methods, including color duplex ultrasound, c [...] Class I; Level of Evidence A). 38. Lijmer JG, Hunink MG, van den Dungen JJ, Loonstra J, Smit AJ. ROC analysis of noninvasive tests for peripheral arterial disease. Ultrasound Med Biol. 1996;22:391–398. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] Table 1). c [...] To avoid verification bias, Lijmer et al d [...] Lijmer et al, e [...] Table 2). f [...] Data Supplement). g [...] Lijmer et al, h [...] Class I; Level of Evidence A). i [...] Class I; Level of Evidence A). j [...] Class I; Level of Evidence A). 39. Niazi K, Khan TH, Easley KA. Diagnostic utility of the two methods of ankle brachial index in the detection of peripheral arterial disease of lower extremities. Catheter Cardiovasc Interv. 2006;68:788–792. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] Data Supplement). c [...] Two studies d [...] and 0.83 versus 0.79 in the latter study). e [...] 0.83 in the latter study, respectively). f [...] 
Class I; Level of Evidence A). g [...] Class I; Level of Evidence A). 40. Ouriel K, McDonnell AE, Metz CE, Zarins CK. Critical evaluation of stress testing in the diagnosis of peripheral vascular disease. Surgery. 1982;91:686–693. PubMed Google Scholar a [...] and provides diagnostic performances. b [...] Table 2). c [...] Ouriel et al, d [...] proportional to the severity of PAD. e [...] Ouriel et al f [...] were comparable for the detection of PAD. g [...] Data Supplement). h [...] exercise than for that measured at rest. i [...] Class I; Level of Evidence A). j [...] Class IIa; Level of Evidence A). 41. Ouriel K, Zarins CK. Doppler ankle pressure: an evaluation of three methods of expression. Arch Surg. 1982;117:1297–1300. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] a threshold value of either 0.97 or 0.92. c [...] immediately after exercise cessation. d [...] predict the clinical prognosis of the limb. e [...] Class I; Level of Evidence A). 42. Parameswaran GI, Brand K, Dolan J. Pulse oximetry as a potential screening tool for lower extremity arterial disease in asymptomatic patients with diabetes mellitus. Arch Intern Med. 2005;165:442–446. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] Table 1). c [...] Parameswaran et al, d [...] Class I; Level of Evidence A). e [...] Class I; Level of Evidence A). 43. Premalatha G, Ravikumar R, Sanjay R, Deepa R, Mohan V. Comparison of colour duplex ultrasound and ankle-brachial pressure index measurements in peripheral vascular disease in type 2 diabetic patients with foot infections. J Assoc Physicians India. 2002;50:1240–1244. PubMed Google Scholar a [...] and provides diagnostic performances. b [...] are reported in diabetic patients. c [...] methods, including color duplex ultrasound, d [...] Class I; Level of Evidence A). 44. Schroder F, Diehm N, Kareem S, Ames M, Pira A, Zwettler U, Lawall H, Diehm C. A modified calculation of ankle-brachial pressure index is far more sensitive in the detection of peripheral arterial disease. J Vasc Surg. 2006;44:531–536. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] methods, including color duplex ultrasound, c [...] Two studies d [...] detected by color duplex ultrasound. e [...] and 0.83 versus 0.79 in the latter study). f [...] 0.83 in the latter study, respectively). g [...] Class I; Level of Evidence A). h [...] Class I; Level of Evidence A). 45. Sumner DS, Strandness DE. The relationship between calf blood flow and ankle blood pressure in patients with intermittent claudication.Surgery. 1969;65:763–771. PubMed Google Scholar a [...] and provides diagnostic performances. b [...] Table 2). c [...] a threshold value of either 0.97 or 0.92. d [...] Sumner and Strandness, 46. Williams DT, Harding KG, Price P. An evaluation of the efficacy of methods used in screening for lower-limb arterial disease in diabetes. Diabetes Care. 2005;28:2206–2210. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] methods, including color duplex ultrasound, c [...] Class I; Level of Evidence A). 47. Alnaeb ME, Crabtree VP, Boutin A, Mikhailidis DP, Seifalian AM, Hamilton G. Prospective assessment of lower-extremity peripheral arterial disease in diabetic patients using a novel automated optical device. Angiology. 2007;58:579–585. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] are reported in diabetic patients. 48. 
Clairotte C, Retout S, Potier L, Roussel R, Escoubet B. Automated ankle-brachial pressure index measurement by clinical staff for peripheral arterial disease diagnosis in nondiabetic and diabetic patients. Diabetes Care. 2009;32:1231–1236. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] are reported in diabetic patients. c [...] Table 1). d [...] Clairotte et al, e [...] Table 2). f [...] Clairotte et al g [...] Clairotte et al, h [...] studies for oscillometric methods i [...] Data Supplement). j [...] the Doppler and oscillometric techniques. k [...] Class I; Level of Evidence A). 49. Feigelson HS, Criqui MH, Fronek A, Langer RD, Molgaard CA. Screening for peripheral arterial disease: the sensitivity, specificity, and predictive value of noninvasive tests in a defined population. Am J Epidemiol. 1994;140:526–534. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] method to detect flow, 1 study 50. Guo X, Li J, Pang W, Zhao M, Luo Y, Sun Y, Hu D. Sensitivity and specificity of ankle-brachial index for detecting angiographic stenosis of peripheral arteries. Circ J. 2008;72:605–610. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] Table 1). c [...] Guo et al, d [...] Table 2). e [...] Data Supplement). f [...] Guo et al, g [...] Class I; Level of Evidence A). h [...] Class I; Level of Evidence A). i [...] Class I; Level of Evidence A). 51. Wikstrom J, Hansen T, Johansson L, Lind L, Ahlstrom H. Ankle brachial index <0.9 underestimates the prevalence of peripheral artery occlusive disease assessed with whole-body magnetic resonance angiography in the elderly. Acta Radiol. 2008;49:143–149. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] sensitivity (69%–79%, except 1 outlier c [...] Class I; Level of Evidence A). 52. Baxter GM, Polak JF. Lower limb colour flow imaging: a comparison with ankle:brachial measurements and angiography. Clin Radiol. 1993;47:91–95. Crossref PubMed Google Scholar a [...] and provides diagnostic performances. b [...] as high as 100% have been reported. 53. de Groote P, Millaire A, Deklunder G, Marache P, Decoulx E, Ducloux G. Comparative diagnostic value of ankle-to-brachial index and transcutaneous oxygen tension at rest and after exercise in patients with intermittent claudication. Angiology. 1995;46:115–122. Go to Citation Crossref PubMed Google Scholar 54. Flanigan DP, Ballard JL, Robinson D, Galliano M, Blecker G, Harward TR. Duplex ultrasound of the superficial femoral artery is a better screening tool than ankle-brachial index to identify at risk patients with lower extremity atherosclerosis. J Vasc Surg. 2008;47:789–792. Go to Citation Crossref PubMed Google Scholar 55. Alnaeb ME, Boutin A, Crabtree VP, Mikhailidis DP, Seifalian AM, Hamilton G. Assessment of lower extremity peripheral arterial disease using a novel automated optical device. Vasc Endovascular Surg. 2007;41:522–527. Go to Citation Crossref PubMed Google Scholar 56. Carter SA. Clinical measurement of systolic pressures in limbs with arterial occlusive disease. JAMA. 1969;207:1869–1874. Crossref PubMed Google Scholar a [...] Table 2). b [...] a threshold value of either 0.97 or 0.92. c [...] Carter, d [...] Class I; Level of Evidence A). 57. Bernstein EF, Fronek A. Current status of noninvasive tests in the diagnosis of peripheral arterial disease. Surg Clin North Am. 1982;62:473–487. Crossref PubMed Google Scholar a [...] Table 2). b [...] 
Bernstein et al, 58. Carter SA. Response of ankle systolic pressure to leg exercise in mild or questionable arterial disease. N Engl J Med. 1972;287:578–582. Crossref PubMed Google Scholar a [...] immediately after exercise cessation. b [...] pre-exercise values within 1 to 2 minutes. c [...] proportional to the severity of PAD. d [...] Class I; Level of Evidence A). 59. Winsor T. Conditioned vasoconstrictive responses of digital vessels. AMA Arch Surg. 1958;76:193–199. Crossref PubMed Google Scholar a [...] pre-exercise values within 1 to 2 minutes. b [...] proportional to the severity of PAD. 60. Laing S, Greenhalgh RM. The detection and progression of asymptomatic peripheral arterial disease. Br J Surg. 1983;70:628–630. Crossref PubMed Google Scholar a [...] proportional to the severity of PAD. b [...] values (0.91–1.00). Laing and Greenhalgh c [...] Class I; Level of Evidence A). d [...] Class IIa; Level of Evidence A). 61. Sakurai T, Matsushita M, Nishikimi N, Nimura Y. Effect of walking distance on the change in ankle-brachial pressure index in patients with intermittent claudication. Eur J Vasc Endovasc Surg. 1997;13:486–490. Go to Citation Crossref PubMed Google Scholar 62. Hoogeveen EK, Mackaay AJ, Beks PJ, Kostense PJ, Dekker JM, Heine RJ, Nijpels G, Rauwerda JA, Stehouwer CD. Evaluation of the one-minute exercise test to detect peripheral arterial disease. Eur J Clin Invest. 2008;38:290–295. Crossref PubMed Google Scholar a [...] in a study of healthy subjects. Others b [...] Class IIa; Level of Evidence A). 63. McPhail IR, Spittel PC, Weston SA, Bailey KR. Intermittent claudication: an objective office-based assessment. J Am Coll Cardiol. 2001;37:1381–1385. Crossref PubMed Google Scholar a [...] assessment of postexercise ABI. b [...] with treadmill exercise in claudicants. 64. Amirhamzeh MM, Chant HJ, Rees JL, Powel RJ, Campbell WB. A comparative study of treadmill tests and heel raising exercise for peripheral arterial disease. Eur J Vasc Endovasc Surg. 1997;13:301–305. Crossref PubMed Google Scholar a [...] assessment of postexercise ABI. b [...] with treadmill exercise in claudicants. 65. Suominen V, Rantanen T, Venermo M, Saarinen J, Salenius J. Prevalence and risk factors of PAD among patients with elevated ABI. Eur J Vasc Endovasc Surg. 2008;35:709–714. Crossref PubMed Google Scholar a [...] disease cannot be detected by the ABI. b [...] with high ABIs range from 60% to 80%. c [...] Class I; Level of Evidence A). 66. Aboyans V, Ho E, Denenberg JO, Ho LA, Natarajan L, Criqui MH. The association between elevated ankle systolic pressures and peripheral occlusive arterial disease in diabetic and nondiabetic subjects. J Vasc Surg. 2008;48:1197–1203. Crossref PubMed Google Scholar a [...] disease cannot be detected by the ABI. b [...] with high ABIs range from 60% to 80%. c [...] associated with smoking and hyperlipidemia. d [...] Class I; Level of Evidence A). 67. Aboyans V, Criqui MH, Denenberg JO, Knoke JD, Ridker PM, Fronek A. Risk factors for progression of peripheral arterial disease in large and small vessels. Circulation. 2006;113:2623–2629. Crossref PubMed Google Scholar a [...] patients assessed in a vascular laboratory, b [...] of ABI progression in clinical populations 68. Nicoloff AD, Taylor LM, Sexton GJ, Schuff RA, Edwards JM, Yeager RA, Landry GJ, Moneta GL, Porter JM; Homocysteine and Progression of Atherosclerosis Study Investigators. 
Relationship between site of initial symptoms and subsequent progression of disease in a prospective study of atherosclerosis progression in patients receiving long-term treatment for symptomatic peripheral arterial disease. J Vasc Surg. 2002;35:38–46. Crossref PubMed Google Scholar a [...] Nicoloff et al b [...] Class IIa; Level of Evidence B). c [...] of ABI progression in clinical populations 69. Cronenwett JL, Warner KG, Zelenock GB, Whitehouse WM, Graham LM, Lindenauer M, Stanley JC. Intermittent claudication: current results of nonoperative management. Arch Surg. 1984;119:430–436. Crossref PubMed Google Scholar a [...] mean period of 2.5 years, Cronenwett et al b [...] Class IIa; Level of Evidence B). 70. Amighi J, Sabeti S, Schlager O, Francesconi M, Ahmadi R, Minar E, Schillinger M. Outcome of conservative therapy of patients with severe intermittent claudication. Eur J Vasc Endovasc Surg. 2004;27:254–258. Go to Citation Crossref PubMed Google Scholar 71. Norgren L, Hiatt WR, Dormandy JA, Nehler MR, Harris KA, Fowkes FG; TASC II Working Group. Inter-society consensus for the management of peripheral arterial disease (TASC II). J Vasc Surg. 2007; 45 (suppl S): S5– S67. Go to Citation Crossref PubMed Google Scholar 72. Marston WA, Davies SW, Armstrong B, Farber MA, Mendes RC, Fulton JJ, Keagy BA. Natural history of limbs with arterial insufficiency and chronic ulceration treated without revascularization. J Vasc Surg. 2006;44:108–114. Go to Citation Crossref PubMed Google Scholar 73. Hamalainen H, Ronnemaa T, Halonen JP, Toikka T. Factors predicting lower extremity amputations in patients with type 1 or type 2 diabetes mellitus: a population-based 7-year follow-up study. J Intern Med. 1999;246:97–103. Go to Citation Crossref PubMed Google Scholar 74. Brothers TE, Esteban R, Robison JG, Elliott BM. Symptoms of chronic arterial insufficiency correlate with absolute ankle pressure better than with ankle:brachial index. Minerva Cardioangiol. 2000;48:103–109. Go to Citation PubMed Google Scholar 75. Matzke S, Ollgren J, Lepantalo M. Predictive value of distal pressure measurements in critical leg ischaemia. Ann Chir Gynaecol. 1996;85:316–321. PubMed Google Scholar a [...] predict the clinical prognosis of the limb. b [...] the dorsalis pedis (DP) artery was used. c [...] ABI <0.50 than in those with an ABI >0.50. d [...] obtained by the PT versus the DP artery. 76. Fowl RJ, Gewirtz RJ, Love MC, Kempczinski RF. Natural history of claudicants with critical hemodynamic indices. Ann Vasc Surg. 1992;6:31–33. Go to Citation Crossref PubMed Google Scholar 77. Decrinis M, Doder S, Stark G, Pilger E. A prospective evaluation of sensitivity and specificity of the ankle/brachial index in the follow-up of superficial femoral artery occlusions treated by angioplasty. Clin Investig. 1994;72:592–597. Crossref PubMed Google Scholar a [...] of 92% and 100%, respectively. b [...] Data Supplement, 78. Motukuru V, Suresh KR, Vivekanand V, Raj S, Girija KR. Therapeutic angiogenesis in Buerger's disease (thromboangiitis obliterans) patients with critical limb ischemia by autologous transplantation of bone marrow mononuclear cells. J Vasc Surg. 2008; 48 (suppl): 53S– 60S. Go to Citation Crossref PubMed Google Scholar 79. Allouche-Cometto L, Leger P, Rousseau H, Lefebvre D, Bendayan P, Elefterion P, Boccalon H. Comparative of blood flow to the ankle-brachial index after iliac angioplasty. Int Angiol. 1999;18:154–157. Go to Citation PubMed Google Scholar 80. 
Matoba S, Tatsumi T, Murohara T, Imaizumi T, Katsuda Y, Ito M, Saito Y, Uemura S, Suzuki H, Fukumoto S, Yamamoto Y, Onodera R, Teramukai S, Fukushima M, Matsubara H; TACT Follow-Up Study Investigators. Long-term clinical outcome after intramuscular implantation of bone marrow mononuclear cells (Therapeutic Angiogenesis by Cell Transplantation [TACT] trial) in patients with chronic limb ischemia. Am Heart J. 2008;156:1010–1018. Go to Citation Crossref PubMed Google Scholar 81. Barnes RW, Thompson BW, MacDonald CM, Nix ML, Lambeth A, Nix AD, Johnson DW, Wallace BH. Serial noninvasive studies do not herald postoperative failure of femoropopliteal or femorotibial bypass grafts. Ann Surg. 1989;210:486–493. Go to Citation Crossref PubMed Google Scholar 82. Stierli P, Aeberhard P, Livers M. The role of colour flow duplex screening in infra-inguinal vein grafts. Eur J Vasc Surg. 1992;6:293–298. Go to Citation Crossref PubMed Google Scholar 83. Laborde AL, Synn AY, Worsey MJ, Bower TR, Hoballah JJ, Sharp WJ, Kresowik TF, Corson JD. A prospective comparison of ankle/brachial indices and color duplex imaging in surveillance of the in situ saphenous vein bypass. J Cardiovasc Surg (Torino). 1992;33:420–425. Go to Citation PubMed Google Scholar 84. Idu MM, Blankenstein JD, de Gier P, Truyen E, Buth J. Impact of a color-flow duplex surveillance program on infrainguinal vein graft patency: a five-year experience. J Vasc Surg. 1993;17:42–52. Go to Citation Crossref PubMed Google Scholar 85. Dalsing MC, Cikrit DF, Lalka SG, Sawchuk AP, Schulz C. Femorodistal vein grafts: the utility of graft surveillance criteria. J Vasc Surg. 1995;21:127–134. Go to Citation Crossref PubMed Google Scholar 86. Lundell A, Lindblad B, Bergqvist D, Hansen F. Femoropopliteal-crural graft patency is improved by an intensive surveillance program: a prospective randomized study. J Vasc Surg. 1995;21:26–33. Go to Citation Crossref PubMed Google Scholar 87. Radak D, Labs KH, Jager KA, Bojic M, Popovic AD. Doppler-based diagnosis of restenosis after femoropopliteal percutaneous transluminal angioplasty: sensitivity and specificity of the ankle/brachial pressure index versus changes in absolute pressure values. Angiology. 1999;50:111–122. Go to Citation Crossref PubMed Google Scholar 88. McDermott MM, Greenland P, Liu K, Guralnik JM, Criqui MH, Dolan NC, Chan C, Celic L, Pearce WH, Schneider JR, Sharma L, Clark E, Gibson D, Martin GJ. Leg symptoms in peripheral arterial disease: associated clinical characteristics and functional impairment. JAMA. 2001;286:1599–1606. Crossref PubMed Google Scholar a [...] and lower physical activity levels. b [...] activity to avoid exertional leg symptoms 89. McDermott MM, Fried L, Simonsick E, Ling S, Guralnik JM. Asymptomatic peripheral arterial disease is independently associated with impaired lower extremity functioning: the Women's Health and Aging Study. Circulation. 2000;101:1007–1012. Crossref PubMed Google Scholar a [...] and lower physical activity levels. b [...] decline compared with higher ABI values. c [...] Table 4). d [...] Women's Health and Ageing 90. McDermott MM, Greenland P, Liu K, Guralnik JM, Celic L, Criqui MH, Chan C, Martin GJ, Schneider J, Pearce WH, Taylor LM, Clark E. The ankle brachial index is associated with leg function and physical activity: the Walking and Leg Circulation Study. Ann Intern Med. 2002;136:873–883. Crossref PubMed Google Scholar a [...] and lower physical activity levels. b [...] decline compared with higher ABI values. c [...] 
Impairment Questionnaire distance score. 91. McDermott MM, Ohlmiller SM, Liu K, Guralnik JM, Martin GJ, Pearce WH, Greenland P. Gait alterations associated with walking impairment in people with peripheral arterial disease with and without intermittent claudication. J Am Geriatr Soc. 2001;49:747–754. Go to Citation Crossref PubMed Google Scholar 92. McDermott MM, Liu K, Greenland P, Guralnik JM, Criqui MH, Chan C, Pearce WH, Schneider JR, Ferrucci L, Celic L, Taylor LM, Vonesh E, Martin GJ, Clark E. Functional decline in peripheral arterial disease: associations with the ankle brachial index and leg symptoms. JAMA. 2004;292:453–461. Crossref PubMed Google Scholar a [...] of the degree of functional limitation. b [...] decline compared with higher ABI values. 93. Szuba A, Oka RK, Harada R, Cooke JP. Limb hemodynamics are not predictive of functional capacity in patients with PAD. Vasc Med. 2006;11:155–163. Crossref PubMed Google Scholar a [...] with greater functional limitations. b [...] symptoms of intermittent claudication. 94. Gardner AW, Skinner JS, Cantwell BW, Smith LK. Prediction of claudication pain from clinical measurements obtained at rest. Med Sci Sports Exerc. 1992;24:163–170. Crossref PubMed Google Scholar a [...] with greater functional limitations. b [...] symptoms of intermittent claudication. 95. Parr B, Noakes TD, Derman EW. Factors predicting walking intolerance in patients with peripheral arterial disease and intermittent claudication. S Afr Med J. 2008;98:958–962. PubMed Google Scholar a [...] with greater functional limitations. b [...] symptoms of intermittent claudication. 96. McDermott MM, Criqui MH, Liu K, Guralnik JM, Greenland P, Martin GJ, Pearce W. Lower ankle/brachial index, as calculated by averaging the dorsalis pedis and posterior tibial arterial pressures, and association with leg functioning in peripheral arterial disease. J Vasc Surg. 2000;32:1164–1171. Go to Citation Crossref PubMed Google Scholar 97. McDermott MM, Ferrucci L, Guralnik JM, Dyer AR, Liu K, Pearce WH, Clark E, Liao Y, Criqui MH. The ankle-brachial index is associated with the magnitude of impaired walking endurance among men and women with peripheral arterial disease. Vasc Med. 2010;15:251–257. Go to Citation Crossref PubMed Google Scholar 98. McDermott MM, Liu K, Ferrucci L, Tian L, Guralnik JM, Green D, Tan J, Liao Y, Pearce WH, Schneider JR, McCue K, Ridker P, Rifai N, Criqui MH. Circulating blood markers and functional impairment in peripheral arterial disease. J Am Geriatr Soc. 2008;56:1504–1510. Go to Citation Crossref PubMed Google Scholar 99. Herman SD, Liu K, Tian L, Guralnik JM, Ferrucci L, Criqui MH, Liao Y, McDermott MM. Baseline lower extremity strength and subsequent decline in functional performance at 6-year follow-up in persons with lower extremity peripheral arterial disease. J Am Geriatr Soc. 2009;57:2246–2252. Go to Citation Crossref PubMed Google Scholar 100. Anderson JD, Epstein FH, Meyer CH, Hagspiel KD, Wang H, Berr SS, Harthun NL, Weltman A, Dimaria JM, West AM, Kramer CM. Multifactorial determinants of functional capacity in peripheral arterial disease: uncoupling of calf muscle perfusion and metabolism. J Am Coll Cardiol. 2009;54:628–635. Go to Citation Crossref PubMed Google Scholar 101. McDermott MM, Liu K, Ferrucci L, Tian L, Guralnik JM, Liao Y, Criqui MH. Greater sedentary hours and slower walking speed outside the home predict faster declines in functioning and adverse calf muscle changes in peripheral arterial disease. J Am Coll Cardiol. 
2011;57:2356–2364. Go to Citation Crossref PubMed Google Scholar 102. McDermott MM, Liu K, Ferrucci L, Criqui MH, Greenland P, Guralnik JM, Tian L, Schneider JR, Pearce WH, Tan J, Martin GJ. Physical performance in peripheral arterial disease: a slower rate of decline in patients who walk more. Ann Intern Med. 2006;144:10–20. Go to Citation Crossref PubMed Google Scholar 103. Selvin E, Erlinger TP. Prevalence of and risk factors for peripheral arterial disease in the United States: results from the National Health and Nutrition Examination Survey, 1999–2000. Circulation. 2004;110:738–743. Crossref PubMed Google Scholar a [...] homocysteine, and chronic kidney disease). b [...] of CVD risk factors across ABI thresholds. c [...] included individuals with existing CVD. d [...] individuals with type 1 diabetes mellitus. 104. Newman AB, Siscovick DS, Manolio TA, Polak J, Fried LP, Borhani NO, Wolfson SK. Ankle-arm index as a marker of atherosclerosis in the Cardiovascular Health Study: Cardiovascular Heart Study (CHS) Collaborative Research Group. Circulation. 1993;88:837–845. Crossref PubMed Google Scholar a [...] homocysteine, and chronic kidney disease). b [...] included individuals with existing CVD. c [...] in the range of 1.3 to 4.2 among 9 studies. d [...] Table 4). e [...] Cardiovascular Health Study 105. Allison MA, Criqui MH, McClelland RL, Scott JM, McDermott MM, Liu K, Folsom AR, Bertoni AG, Sharrett AR, Homma S, Kori S. The effect of novel cardiovascular risk factors on the ethnic-specific odds for peripheral arterial disease in the Multi-Ethnic Study of Atherosclerosis (MESA). J Am Coll Cardiol. 2006;48:1190–1197. Go to Citation Crossref PubMed Google Scholar 106. Weatherley BD, Nelson JJ, Heiss G, Chambless LE, Sharrett AR, Nieto FJ, Folsom AR, Rosamond WD. The association of the ankle-brachial index with incident coronary heart disease: the Atherosclerosis Risk in Communities (ARIC) study, 1987–2001. BMC Cardiovasc Disord. 2007: 3. Crossref PubMed Google Scholar a [...] of CVD risk factors across ABI thresholds. b [...] and North America. 107. Newman AB, Shemanski L, Manolio TA, Cushman M, Mittelmark M, Polak JF, Powe NR, Siscovick D. Ankle-arm index as a predictor of cardiovascular disease and mortality in the Cardiovascular Health Study: the Cardiovascular Health Study Group. Arterioscler Thromb Vasc Biol. 1999;19:538–545. Crossref PubMed Google Scholar a [...] included individuals with existing CVD. b [...] and North America. c [...] cardiovascular mortality (risk ratio, 1.5). d [...] Table 4). e [...] Cardiovascular Health Study 108. Hirsch AT, Criqui MH, Treat-Jacobson D, Regensteiner JG, Creager MA, Olin JW, Krook SH, Hunninghake DB, Comerota AJ, Walsh ME, McDermott MM, Hiatt WR. Peripheral arterial disease detection, awareness, and treatment in primary care. JAMA. 2001;286:1317–1324. Crossref PubMed Google Scholar a [...] included individuals with existing CVD. b [...] used by investigators in the PARTNERS 109. Murabito JM, Evans JC, Nieto K, Larson MG, Levy D, Wilson PW. Prevalence and clinical correlates of peripheral arterial disease in the Framingham Offspring Study. Am Heart J. 2002;143:961–965. Crossref PubMed Google Scholar a [...] individuals with type 1 diabetes mellitus. b [...] Table 4). c [...] Framingham Offspring Study 110. Zander E, Heinke P, Reindel J, Kohnert KD, Kairies U, Braun J, Eckel L, Kerner W. Peripheral arterial disease in diabetes mellitus type 1 and type 2: are there different risk factors? Vasa. 2002;31:249–254. 
Go to Citation Crossref PubMed Google Scholar 111. Hayashi C, Ogawa O, Kubo S, Mitsuhashi N, Onuma T, Kawamori R. Ankle brachial pressure index and carotid intima-media thickness as atherosclerosis markers in Japanese diabetics. Diabetes Res Clin Pract. 2004;66:269–275. Crossref PubMed Google Scholar a [...] individuals with type 1 diabetes mellitus. b [...] in the range of 1.3 to 4.2 among 9 studies. 112. Yang X, Sun K, Zhang W, Wu H, Zhang H, Hui R. Prevalence of and risk factors for peripheral arterial disease in the patients with hypertension among Han Chinese. J Vasc Surg. 2007;46:296–302. Go to Citation Crossref PubMed Google Scholar 113. Ovbiagele B. Association of ankle-brachial index level with stroke. J Neurol Sci. 2009;276:14–17. Go to Citation Crossref PubMed Google Scholar 114. Ramos R, Quesada M, Solanas P, Subirana I, Sala J, Vila J, Masia R, Cerezo C, Elosua R, Grau M, Cordon F, Juvinya D, Fito M, Isabel Covas M, Clara A, Angel Munoz M, Marrugat J; REGICOR Investigators. Prevalence of symptomatic and asymptomatic peripheral arterial disease and the value of the ankle-brachial index to stratify cardiovascular risk. Eur J Vasc Endovasc Surg. 2009;38:305–311. Go to Citation Crossref PubMed Google Scholar 115. Allison MA, Hiatt WR, Hirsch AT, Coll JR, Criqui MH. A high ankle-brachial index is associated with increased cardiovascular disease morbidity and lower quality of life. J Am Coll Cardiol. 2008;51:1292–1298. Crossref PubMed Google Scholar a [...] associated with smoking and hyperlipidemia. b [...] Allison et al 116. Criqui MH, McClelland RL, McDermott MM, Allison MA, Blumenthal RS, Aboyans V, Ix JH, Burke GL, Liu K, Shea S. The ankle-brachial index and incident cardiovascular events in the MESA (Multi-Ethnic Study of Atherosclerosis). J Am Coll Cardiol. 2010;56:1506–1512. Crossref PubMed Google Scholar a [...] high ABI was associated with incident CVD. b [...] ethnic groups in the United States. c [...] extends beyond that of the FRS alone. d [...] Class IIA; Level of Evidence A). e [...] Class I; Level of Evidence A). 117. Sutton-Tyrrell K, Venkitachalam L, Kanaya AM, Boudreau R, Harris T, Thompson T, Mackey RH, Visser M, Vaidean GD, Newman AB. Relationship of ankle blood pressures to cardiovascular events in older adults. Stroke. 2008;39:863–869. Go to Citation Crossref PubMed Google Scholar 118. Wattanakit K, Folsom AR, Duprez DA, Weatherley BD, Hirsch AT. Clinical significance of a high ankle-brachial index: insights from the Atherosclerosis Risk in Communities (ARIC) Study. Atherosclerosis. 2007;190:459–464. Go to Citation Crossref PubMed Google Scholar 119. Resnick HE, Foster GL. Prevalence of elevated ankle-brachial index in the United States 1999 to 2002. Am J Med. 2005;118:676–679. Go to Citation Crossref PubMed Google Scholar 120. Greenland P, Smith SC, Grundy SM. Improving coronary heart disease risk assessment in asymptomatic people: role of traditional risk factors and noninvasive cardiovascular tests. Circulation. 2001;104:1863–1867. Go to Citation Crossref PubMed Google Scholar 121. Brindle P, Beswick A, Fahey T, Ebrahim S. Accuracy and impact of risk assessment in the primary prevention of cardiovascular disease: a systematic review. Heart. 2006;92:1752–1759. Go to Citation Crossref PubMed Google Scholar 122. Tsimikas S, Willerson JT, Ridker PM. C-reactive protein and other emerging blood biomarkers to optimize risk stratification of vulnerable patients. J Am Coll Cardiol. 2006; 47 (suppl): C19– C31. Go to Citation Crossref PubMed Google Scholar 123. 
Greenland P, LaBree L, Azen SP, Doherty TM, Detrano RC. Coronary artery calcium score combined with Framingham score for risk prediction in asymptomatic individuals. JAMA. 2004;291:210–215. Go to Citation Crossref PubMed Google Scholar 124. Leng GC, Fowkes FG, Lee AJ, Dunbar J, Housley E, Ruckley CV. Use of ankle brachial pressure index to predict cardiovascular events and death: a cohort study. BMJ. 1996;313:1440–1444. Crossref PubMed Google Scholar a [...] cohort studies, mostly in Europe b [...] Table 4). c [...] Edinburgh artery study 125. Hooi JD, Kester AD, Stoffers HE, Rinkens PE, Knottnerus JA, van Ree JW. Asymptomatic peripheral arterial occlusive disease predicted cardiovascular morbidity and mortality in a 7-year follow-up study. J Clin Epidemiol. 2004;57:294–300. Crossref PubMed Google Scholar a [...] cohort studies, mostly in Europe b [...] Table 4). c [...] Limburg study 126. Ogren M, Hedblad B, Isacsson SO, Janzon L, Jungquist G, Lindell SE. Non-invasively detected carotid stenosis and ischaemic heart disease in men with leg arteriosclerosis. Lancet. 1993;342:1138–1141. Crossref PubMed Google Scholar a [...] cohort studies, mostly in Europe b [...] Table 4). c [...] Men Born in 1914 127. van der Meer IM, Bots ML, Hofman A, del Sol AI, van der Kuip DA, Witteman JC. Predictive value of noninvasive measures of atherosclerosis for incident myocardial infarction: the Rotterdam Study. Circulation. 2004;109:1089–1094. Crossref PubMed Google Scholar a [...] cohort studies, mostly in Europe b [...] Table 4). c [...] Rotterdam Study 128. Kornitzer M, Dramaix M, Sobolski J, Degre S, De Backer G. Ankle/arm pressure index in asymptomatic middle-aged males: an independent predictor of ten-year coronary heart disease mortality. Angiology. 1995;46:211–219. Crossref PubMed Google Scholar a [...] and North America. b [...] Table 4). c [...] Belgian Men study 129. Abbott RD, Petrovitch H, Rodriguez BL, Yano K, Schatz IJ, Popper JS, Masaki KH, Ross GW, Curb JD. Ankle/brachial blood pressure in men >70 years of age and the risk of coronary heart disease. Am J Cardiol. 2000;86:280–284. Crossref PubMed Google Scholar a [...] and North America. b [...] Table 4). c [...] Honolulu study 130. Resnick HE, Lindsay RS, McDermott MM, Devereux RB, Jones KL, Fabsitz RR, Howard BV. Relationship of high and low ankle brachial index to all-cause and cardiovascular disease mortality: the Strong Heart Study. Circulation. 2004;109:733–739. Crossref PubMed Google Scholar a [...] and North America. b [...] Table 4). c [...] Strong Heart Study 131. Aboyans V, Lacroix P, Tran MH, Salamagne C, Galinat S, Archambeaud F, Criqui MH, Laskar M. The prognosis of diabetic patients with high ankle-brachial index depends on the coexistence of occlusive peripheral artery disease. J Vasc Surg. 2011;53:984–991. Go to Citation Crossref PubMed Google Scholar 132. Aboyans V, Lacroix P, Postil A, Guilloux J, Rolle F, Cornu E, Laskar M. Subclinical peripheral arterial disease and incompressible ankle arteries are both long-term prognostic factors in patients undergoing coronary artery bypass grafting. J Am Coll Cardiol. 2005;46:815–820. Go to Citation Crossref PubMed Google Scholar 133. Agnelli G, Cimminiello C, Meneghetti G, Urbinati S; Polyvascular Atherothrombosis Observational Survey (PATHOS) Investigators. Low ankle-brachial index predicts an adverse 1-year outcome after acute coronary and cerebrovascular events. J Thromb Haemost. 2006;4:2599–2606. Go to Citation Crossref PubMed Google Scholar 134. 
Purroy F, Coll B, Oro M, Seto E, Pinol-Ripoll G, Plana A, Quilez A, Sanahuja J, Brieva L, Vega L, Fernandez E. Predictive value of ankle brachial index in patients with acute ischaemic stroke. Eur J Neurol. 2010;17:602–606. Go to Citation Crossref PubMed Google Scholar 135. Alberts MJ, Bhatt DL, Mas JL, Ohman EM, Hirsch AT, Rother J, Salette G, Goto S, Smith SC, Liau CS, Wilson PW, Steg PG; Reduction of Atherothrombosis for Continued Health Registry Investigators. Three-year follow-up and event rates in the international Reduction of Atherothrombosis for Continued Health Registry. Eur Heart J. 2009;30:2318–2326. Crossref PubMed Google Scholar a [...] with disease in only 1 vascular territory. b [...] mortality of 1.6 compared with higher ABIs. 136. Criqui MH, Ninomiya JK, Wingard DL, Ji M, Fronek A. Progression of peripheral arterial disease predicts cardiovascular disease morbidity and mortality. J Am Coll Cardiol. 2008;52:1736–1742. Go to Citation Crossref PubMed Google Scholar 137. Sheikh MA, Bhatt DL, Li J, Lin S, Bartholomew JR. Usefulness of postexercise ankle-brachial index to predict all-cause mortality. Am J Cardiol. 2011;107:778–782. Go to Citation Crossref PubMed Google Scholar 138. Mohler ER, Treat-Jacobson D, Reilly MP, Cunningham KE, Miani M, Criqui MH, Hiatt WR, Hirsch AT. Utility and barriers to performance of the ankle-brachial index in primary care practice. Vasc Med. 2004;9:253–260. Crossref PubMed Google Scholar a [...] and staff training, were identified. b [...] needed for ABI measurement was <15 minutes. 139. Bendermacher BLW, Teijink JAW, Willigendael EM. Applicability of the ankle brachial index measurement as screening device in general practice for high cardiovascular risk. In:, Bendermacher B Peripheral Arterial Disease. Screening, Diagnosis and Conservative Treatment [dissertation]. Maastricht, Netherlands: Maastricht University; 2007. Go to Citation Crossref Google Scholar 140. Pollak EW, Chavis P, Wolfman EF. The effect of postural changes upon the ankle arterial perfusion pressure. Vasc Surg. 1976;10:219–222. Go to Citation Crossref PubMed Google Scholar 141. Gornik HL, Garcia B, Wolski K, Jones DC, Macdonald KA, Fronek A. Validation of a method for determination of the ankle-brachial index in the seated position. J Vasc Surg. 2008;48:1204–1210. Crossref PubMed Google Scholar a [...] Gornik et al b [...] end of the examination table. Gornik et al 142. Yataco AR, Gardner AW. Acute reduction in ankle/brachial index following smoking in chronic smokers with peripheral arterial occlusive disease. Angiology. 1999;50:355–360. Crossref PubMed Google Scholar a [...] after 12 hours of smoking abstinence. b [...] change in brachial artery pressure. 143. Manning DM, Kuchirka C, Kaminski J. Miscuffing: inappropriate blood pressure cuff application. Circulation. 1983;68:763–766. Crossref PubMed Google Scholar a [...] cuff size to avoid inaccurate measurements. b [...] Class I; Level of Evidence B). 144. Pickering TG, Hall JE, Appel LJ, Falkner BE, Graves J, Hill MN, Jones DW, Kurtz T, Sheps SG, Roccella EJ. Recommendations for blood pressure measurement in humans and experimental animals, part 1: blood pressure measurement in humans: a statement for professionals from the Subcommittee of Professional and Public Education of the American Heart Association Council on High Blood Pressure Research. Circulation. 2005;111:697–716. Crossref PubMed Google Scholar a [...] cuff size to avoid inaccurate measurements. b [...] be at least 40% of the limb circumference. c [...] 
Class I; Level of Evidence B). 145. Mundt KA, Chambless LE, Burnham CB, Heiss G. Measuring ankle systolic blood pressure: validation of the Dinamap 1846 SX. Angiology. 1992;43:555–566. Crossref PubMed Google Scholar a [...] with the spiral cuff wrapping method. b [...] was used with the Doppler technique. c [...] studies for oscillometric methods d [...] method for the detection of PAD. e [...] Data Supplement). 146. Takahashi O, Shimbo T, Rahman M, Musa R, Kurokawa W, Yoshinaka T, Fukui T. Validation of the auscultatory method for diagnosing peripheral arterial disease. Fam Pract. 2006;23:10–14. Crossref PubMed Google Scholar a [...] Takahashi et al b [...] auscultation, c [...] was assessed in a Japanese study. d [...] Class I; Level of Evidence B). 147. Aboyans V, Lacroix P, Doucet S, Preux PM, Criqui MH, Laskar M. Diagnosis of peripheral arterial disease in general practice: can the ankle-brachial index be measured either by pulse palpation or an automatic blood pressure device? Int J Clin Pract. 2008;62:1001–1007. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] the Doppler and oscillometric techniques. d [...] and pulse palpation. e [...] and a specificity ranging from 75% to 82%. f [...] the ABI compared with the Doppler method. g [...] Data Supplement). h [...] is confirmed by 2 comparative studies i [...] method has poor reproducibility (CoV, 23%). j [...] but there are few data for other methods. k [...] Data Supplement) l [...] Class I; Level of Evidence A). 148. Adiseshiah M, Cross FW, Belsham PA. Ankle blood pressure measured by automatic oscillotonometry: a comparison with Doppler pressure measurements. Ann R Coll Surg Engl. 1987;69:271–273. PubMed Google Scholar a [...] studies for oscillometric methods b [...] of the actual pressure value, c [...] to detect low pressures, eg, <50 mm Hg d [...] Data Supplement). 149. Beckman JA, Higgins CO, Gerhard-Herman M. Automated oscillometric determination of the ankle-brachial index provides accuracy necessary for office practice. Hypertension. 2006;47:35–38. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] the Doppler and oscillometric techniques. d [...] of agreement (±2 SD) for the ABI were 0.25 150. Benchimol A, Bernard V, Pillois X, Hong NT, Benchimol D, Bonnet J. Validation of a new method of detecting peripheral artery disease by determination of ankle-brachial index using an automatic blood pressure device. Angiology. 2004;55:127–134. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). 151. Blebea J, Ali MK, Love M, Bodenham R, Bacik B. Automatic postoperative monitoring of infrainguinal bypass procedures. Arch Surg. 1997;132:286–291. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] PAD has been acceptable in most studies c [...] Data Supplement). 152. Cortez-Cooper MY, Supak JA, Tanaka H. A new device for automatic measurements of arterial stiffness and ankle-brachial index. Am J Cardiol. 2003;91:1519–1522, A9. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] PAD has been acceptable in most studies c [...] Data Supplement). d [...] the Doppler and oscillometric techniques. 153. Diehm N, Dick F, Czuprin C, Lawall H, Baumgartner I, Diehm C. Oscillometric measurement of ankle-brachial index in patients with suspected peripheral disease: comparison with Doppler method. Swiss Med Wkly. 2009;139:357–363. 
PubMed Google Scholar a [...] studies for oscillometric methods b [...] ) in patients with advanced PAD. c [...] Data Supplement). d [...] the Doppler and oscillometric techniques. 154. Ena J, Lozano T, Verdú G, Argente CR, González VL. Accuracy of ankle-brachial index obtained by automated blood pressure measuring devices in patients with diabetes mellitus. Diabetes Res Clin Pract. 2011;92:329–336. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] the Doppler and oscillometric techniques. 155. Jonsson B, Lindberg LG, Skau T, Thulesius O. Is oscillometric ankle pressure reliable in leg vascular disease? Clin Physiol. 2001;21:155–163. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] method for the detection of PAD. c [...] PAD has been acceptable in most studies d [...] of the actual pressure value, e [...] Data Supplement). f [...] the Doppler and oscillometric techniques. g [...] more than ±70 mm Hg in patients with PAD. 156. Korno M, Eldrup N, Sillesen H. Comparison of ankle-brachial index measured by an automated oscillometric apparatus with that by standard Doppler technique in vascular patients. Eur J Vasc Endovasc Surg. 2009;38:610–615. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] PAD has been acceptable in most studies c [...] of the actual pressure value, d [...] Figure 3 . e [...] Data Supplement). f [...] 95% percentiles. Reprinted from Korno et al g [...] Data Supplement). h [...] method, which has a CoV ranging from 5.1% i [...] but there are few data for other methods. j [...] Data Supplement) k [...] Class I; Level of Evidence A). 157. Lee BY, Campbell JS, Berkowitz P. The correlation of ankle oscillometric blood pressures and segmental pulse volumes to Doppler systolic pressures in arterial occlusive disease. J Vasc Surg. 1996;23:116–122. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] whereas in 5 other studies, 158. MacDonald E, Froggatt P, Lawrence G, Blair S. Are automated blood pressure monitors accurate enough to calculate the ankle brachial pressure index? J Clin Monit Comput. 2008;22:381–384. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] ) in patients with advanced PAD. c [...] Data Supplement). d [...] and 0.23 159. MacDougall AM, Tandon V, Wilson MP, Wilson TW. Oscillometric measurement of ankle-brachial index. Can J Cardiol. 2008;24:49–51. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). 160. Mehlsen J, Wiinberg N, Bruce C. Oscillometric blood pressure measurement: a simple method in screening for peripheral arterial disease. Clin Physiol Funct Imaging. 2008;28:426–429. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] of the actual pressure value, c [...] ) in patients with advanced PAD. d [...] Data Supplement). 161. Nukumizu Y, Matsushita M, Sakurai T, Kobayashi M, Nishikimi N, Komori K. Comparison of Doppler and oscillometric ankle blood pressure measurement in patients with angiographically documented lower extremity arterial occlusive disease. Angiology. 2007;58:303–308. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] of the actual pressure value, c [...] recording failures are frequent (from 11% d [...] ) in patients with advanced PAD. e [...] Data Supplement). 162. Pan CR, Staessen JA, Li Y, Wang JG. 
Comparison of three measures of the ankle-brachial blood pressure index in a general population. Hypertens Res. 2007;30:555–561. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] PAD has been acceptable in most studies c [...] Data Supplement). d [...] the Doppler and oscillometric techniques. e [...] Data Supplement). 163. Raines JK, Farrar J, Noicely K, Pena J, Davis WW, Willens HJ, Wallace DD. Ankle/brachial index in the primary care setting. Vasc Endovascular Surg. 2004;38:131–136. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). 164. Ramanathan A, Conaghan PJ, Jenkinson AD, Bishop CR. Comparison of ankle-brachial pressure index measurements using an automated oscillometric device with the standard Doppler ultrasound technique. ANZ J Surg. 2003;73:105–108. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] with 1 exception. c [...] Data Supplement). d [...] the Doppler and oscillometric techniques. e [...] studies varied from −0.19 to 0.14 165. Richart T, Kuznetsova T, Wizner B, Struijker-Boudier HA, Staessen JA. Validation of automated oscillometric versus manual measurement of the ankle-brachial index. Hypertens Res. 2009;32:884–888. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] of the actual pressure value, c [...] Data Supplement). d [...] Data Supplement). e [...] been challenged recently by Richart et al, f [...] Class I; Level of Evidence A). 166. Salles-Cunha SX, Vincent DG, Towne JB, Bernhard VM. Noninvasive ankle pressure measurements by oscillometry. Tex Heart Inst J. 1982;9:349–357. PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] whereas in 5 other studies, 167. Bonham PA, Cappuccio M, Hulsey T, Michel Y, Kelechi T, Jenkins C, Robison J. Are ankle and toe brachial indices (ABI-TBI) obtained by a pocket Doppler interchangeable with those obtained by standard laboratory equipment? J Wound Ostomy Continence Nurs. 2007;34:35–44. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] whereas in 5 other studies, 168. Carmo GA, Mandil A, Nascimento BR, Arantes BD, Bittencourt JC, Falqueto EB, Ribeiro AL. Can we measure the ankle-brachial index using only a stethoscope? A pilot study.Fam Pract. 2009;26:22–26. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). 169. Khandanpour N, Armon MP, Jennings B, Clark A, Meyer FJ. Photoplethysmography, an easy and accurate method for measuring ankle brachial pressure index: can photoplethysmography replace Doppler? Vasc Endovascular Surg. 2009;43:578–582. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] photoplethysmography, d [...] in several series of patients with PAD. e [...] Doppler method ranged from −0.23 to 0.24. 170. Ludyga T, Kuczmik WB, Kazibudzki M, Nowakowski P, Orawczyk T, Glanowski M, Kucharzewski M, Ziaja D, Szaniewski K, Ziaja K. Ankle-brachial pressure index estimated by laser Doppler in patients suffering from peripheral arterial obstructive disease. Ann Vasc Surg. 2007;21:452–457. Crossref PubMed Google Scholar a [...] studies for oscillometric methods b [...] Data Supplement). c [...] was used for ABI measurements in 1 study. 171. Migliacci R, Nasorri R, Ricciarini P, Gresele P. Ankle-brachial index measured by palpation for the diagnosis of peripheral arterial disease. 
numerical methods - Explanation of Lagrange Interpolating Polynomial - Mathematics Stack Exchange
===============

Explanation of Lagrange Interpolating Polynomial

Asked Oct 12, 2013. Viewed 23k times. Tagged: numerical-methods.

Can anybody explain to me what the Lagrange interpolating polynomial is, with examples? I know the formula, but it doesn't seem intuitive to me.

Comment (vadim123): The formula is constructed so that the polynomial necessarily goes through the points specified.

Answer (Spencer, 35 votes):

The Lagrange interpolating polynomial is a tool which helps us construct a polynomial that goes through any desired set of points.

Let's say we want a polynomial that goes through the points $(1,3)$, $(3,4)$, $(5,6)$ and $(7,-10)$. First we define the polynomial
$$P(x)=(x-1)(x-3)(x-5)(x-7).$$
This has roots at the x-coordinates of each of the points we want to interpolate. Then we construct the following polynomials from it:
$$f_1(x)=P(x)/(x-1),\quad f_2(x)=P(x)/(x-3),\quad f_3(x)=P(x)/(x-5),\quad f_4(x)=P(x)/(x-7).$$
Notice that in particular $f_1(x)=(x-3)(x-5)(x-7)$. This function has the following property: it is zero at $x=3,5,7$ and nonzero at $x=1$. This means that it is "on" when we are at the first x-coordinate and "off" at the others. Each of the $f_i$ is designed to work this way.
Now consider the following expression:
$$L(x)=3\,\frac{f_1(x)}{f_1(1)}+4\,\frac{f_2(x)}{f_2(3)}+6\,\frac{f_3(x)}{f_3(5)}-10\,\frac{f_4(x)}{f_4(7)}.$$
Notice that this function goes through all four designated points. When we plug in one of the chosen values of $x$, only one of the four functions $f_j$ is turned on and the others are zero. The coefficients are designed to force the expression to equal the corresponding $y$-coordinates. In particular, consider $L(5)$:
$$L(5)=3\,\frac{f_1(5)}{f_1(1)}+4\,\frac{f_2(5)}{f_2(3)}+6\,\frac{f_3(5)}{f_3(5)}-10\,\frac{f_4(5)}{f_4(7)}=0+0+6\cdot 1-0=6,$$
so we recover the desired point $(5,6)$. Try explicitly writing out the polynomial and plugging in the other points to really see it work.

Comments: "Wow! Thanks a lot for the explanation!" (asker). "My pleasure, I remember these being really confusing when I was first introduced to them." (Spencer). "This explanation makes a lot of sense. It's really unfortunate that it's the other answer here that both Wolfram Alpha and Wikipedia have." (Omnifarious)

Answer (Vitality, 0 votes):

Linear interpolation consists of approximating a function $f(x)$ as
$$f(x)=\sum_{i=1}^{N}a_i\,\phi_i(x)\qquad(1)$$
where the $a_i$ are the interpolation coefficients and the $\phi_i$ are prefixed interpolation functions. Lagrange interpolation, which is one of the simplest and most widely used interpolation methods, consists of finding the interpolation coefficients as the solution of the linear system
$$f(x_j)=\sum_{i=1}^{N}a_i\,\phi_i(x_j),\qquad j=1,\dots,N\qquad(2)$$
where the $x_j$ are the interpolation points. A common case is when the interpolation functions are polynomials, say
$$f(x_j)=\sum_{i=1}^{N}\alpha_i\,x_j^{\,i-1},\qquad j=1,\dots,N.\qquad(3)$$
The determinant of such a system is a Vandermonde determinant, which is always non-vanishing provided that the interpolation points are all different; therefore the system always admits a unique solution. Accordingly, polynomial Lagrange interpolation is always unique.

The polynomial interpolation functions in eq. (1) can be found by requiring that $\phi_i(x)$ be a polynomial of degree $N-1$ with $\phi_i(x_j)=0$ for $j=1,\dots,N$, $j\neq i$, and $\phi_i(x_i)=1$. Accordingly,
$$\phi_i(x)=\prod_{j\neq i}\frac{x-x_j}{x_i-x_j}\qquad(4)$$
and
$$f(x)=\sum_{i=1}^{N}f(x_i)\prod_{j\neq i}\frac{x-x_j}{x_i-x_j}.\qquad(5)$$
Reference: N.S. Bakhvalov, Numerical Methods, Mir Publishers, Moscow, 1981.
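To make both answers concrete, here is a minimal numerical sketch (assuming NumPy is available; the function name lagrange_eval and the variable names are mine, not from the answers). It builds the interpolant through the four example points via the Lagrange basis of eq. (4) and, for comparison, by solving the Vandermonde system of eq. (3).

```python
# Sketch: Lagrange interpolation through (1,3), (3,4), (5,6), (7,-10),
# first via the Lagrange basis phi_i of eq. (4), then via the Vandermonde
# system of eq. (3). Names are illustrative only.
import numpy as np

xs = np.array([1.0, 3.0, 5.0, 7.0])
ys = np.array([3.0, 4.0, 6.0, -10.0])

def lagrange_eval(x, xs, ys):
    """Evaluate the Lagrange interpolant at x using the basis phi_i."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        others = np.delete(xs, i)
        phi_i = np.prod((x - others) / (xi - others))  # phi_i(x_i)=1, phi_i(x_j)=0
        total += yi * phi_i
    return total

# The interpolant reproduces every data point.
for xi, yi in zip(xs, ys):
    assert abs(lagrange_eval(xi, xs, ys) - yi) < 1e-12

# Same polynomial via the Vandermonde system f(x_j) = sum_i alpha_i x_j^(i-1).
V = np.vander(xs, increasing=True)      # columns 1, x, x^2, x^3
alpha = np.linalg.solve(V, ys)          # monomial coefficients
print(lagrange_eval(5.0, xs, ys))       # 6.0
print(np.polyval(alpha[::-1], 5.0))     # 6.0 (same polynomial)
```

Both evaluations at x = 5 return 6, matching the computation of L(5) above.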
Published Time: Tue, 15 Jul 2025 18:03:43 GMT Chebyshev polynomials - Wikipedia
===============

From Wikipedia, the free encyclopedia. Not to be confused with discrete Chebyshev polynomials.

[Figure: plots of the first five Chebyshev polynomials of the first kind $T_n$ and of the second kind $U_n$.]

The Chebyshev polynomials are two sequences of orthogonal polynomials related to the cosine and sine functions, notated as $T_n(x)$ and $U_n(x)$. They can be defined in several equivalent ways, one of which starts with trigonometric functions:

The Chebyshev polynomials of the first kind $T_n$ are defined by
$$T_n(\cos\theta)=\cos(n\theta).$$
Similarly, the Chebyshev polynomials of the second kind $U_n$ are defined by
$$U_n(\cos\theta)\,\sin\theta=\sin\big((n+1)\theta\big).$$
That these expressions define polynomials in $\cos\theta$ is not obvious at first sight but can be shown using de Moivre's formula (see below).

The Chebyshev polynomials $T_n$ are the polynomials with the largest possible leading coefficient whose absolute value on the interval $[-1,1]$ is bounded by 1.
They are also the "extremal" polynomials for many other properties. In 1952, Cornelius Lanczos showed that the Chebyshev polynomials are important in approximation theory for the solution of linear systems; the roots of $T_n(x)$, which are also called Chebyshev nodes, are used as matching points for optimizing polynomial interpolation. The resulting interpolation polynomial minimizes the problem of Runge's phenomenon and provides an approximation that is close to the best polynomial approximation to a continuous function under the maximum norm, also called the "minimax" criterion. This approximation leads directly to the method of Clenshaw–Curtis quadrature.

These polynomials were named after Pafnuty Chebyshev. The letter T is used because of the alternative transliterations of the name Chebyshev as Tchebycheff, Tchebyshev (French) or Tschebyschow (German).

Definitions

Recurrence definition

The Chebyshev polynomials of the first kind can be defined by the recurrence relation
$$T_0(x)=1,\qquad T_1(x)=x,\qquad T_{n+1}(x)=2x\,T_n(x)-T_{n-1}(x).$$
The Chebyshev polynomials of the second kind can be defined by the recurrence relation
$$U_0(x)=1,\qquad U_1(x)=2x,\qquad U_{n+1}(x)=2x\,U_n(x)-U_{n-1}(x),$$
which differs from the above only by the rule for $n=1$.

Trigonometric definition

The Chebyshev polynomials of the first and second kind can be defined as the unique polynomials satisfying
$$T_n(\cos\theta)=\cos(n\theta)$$
and
$$U_n(\cos\theta)=\frac{\sin\big((n+1)\theta\big)}{\sin\theta},$$
for $n=0,1,2,3,\dots$

An equivalent way to state this is via exponentiation of a complex number: given a complex number $z=a+bi$ with absolute value one,
$$z^n=T_n(a)+i\,b\,U_{n-1}(a).$$
Chebyshev polynomials can be defined in this form when studying trigonometric polynomials.

That $\cos(nx)$ is an $n$th-degree polynomial in $\cos(x)$ can be seen by observing that $\cos(nx)$ is the real part of one side of de Moivre's formula:
$$\cos(n\theta)+i\sin(n\theta)=(\cos\theta+i\sin\theta)^n.$$
The real part of the other side is a polynomial in $\cos(x)$ and $\sin(x)$, in which all powers of $\sin(x)$ are even and thus replaceable through the identity $\cos^2(x)+\sin^2(x)=1$. By the same reasoning, $\sin(nx)$ is the imaginary part of the polynomial, in which all powers of $\sin(x)$ are odd; if one factor of $\sin(x)$ is factored out, the remaining factors can be replaced to create an $(n-1)$st-degree polynomial in $\cos(x)$.
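As a quick sanity check of these two definitions, the following short sketch (my own illustration, not part of the article; it uses only the standard library) evaluates both recurrences and compares them with the trigonometric identities at a few sample angles.

```python
# Check that the recurrence definition reproduces the trigonometric one:
# T_n(cos t) = cos(n t) and U_n(cos t) * sin t = sin((n+1) t).
import math

def chebyshev_T(n, x):
    """T_n(x) via the recurrence T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def chebyshev_U(n, x):
    """U_n(x) via the recurrence U_{n+1} = 2x U_n - U_{n-1}."""
    u_prev, u = 1.0, 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

for n in range(8):
    for t in (0.1, 0.7, 2.3):
        assert abs(chebyshev_T(n, math.cos(t)) - math.cos(n * t)) < 1e-9
        assert abs(chebyshev_U(n, math.cos(t)) * math.sin(t) - math.sin((n + 1) * t)) < 1e-9
print("recurrence and trigonometric definitions agree")
```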
For $x$ outside the interval $[-1,1]$, the above definition implies
$$T_n(x)=\begin{cases}\cos(n\arccos x) & \text{if } |x|\le 1,\\ \cosh(n\operatorname{arcosh}x) & \text{if } x\ge 1,\\ (-1)^n\cosh\big(n\operatorname{arcosh}(-x)\big) & \text{if } x\le -1.\end{cases}$$

Commuting polynomials definition

Chebyshev polynomials can also be characterized by the following theorem: if $F_n(x)$ is a family of monic polynomials with coefficients in a field of characteristic $0$ such that $\deg F_n(x)=n$ and $F_m(F_n(x))=F_n(F_m(x))$ for all $m$ and $n$, then, up to a simple change of variables, either $F_n(x)=x^n$ for all $n$ or $F_n(x)=2\,T_n(x/2)$ for all $n$.

Pell equation definition

The Chebyshev polynomials can also be defined as the solutions to the Pell equation
$$T_n(x)^2-(x^2-1)\,U_{n-1}(x)^2=1$$
in a ring $R[x]$. Thus, they can be generated by the standard technique for Pell equations of taking powers of a fundamental solution:
$$T_n(x)+U_{n-1}(x)\sqrt{x^2-1}=\left(x+\sqrt{x^2-1}\right)^n.$$

Generating functions

The ordinary generating function for $T_n$ is
$$\sum_{n=0}^{\infty}T_n(x)\,t^n=\frac{1-tx}{1-2tx+t^2}.$$
There are several other generating functions for the Chebyshev polynomials; the exponential generating function is
$$\sum_{n=0}^{\infty}T_n(x)\,\frac{t^n}{n!}=\tfrac12\left(e^{t\left(x-\sqrt{x^2-1}\right)}+e^{t\left(x+\sqrt{x^2-1}\right)}\right)=e^{tx}\cosh\!\left(t\sqrt{x^2-1}\right).$$
The generating function relevant for 2-dimensional potential theory and multipole expansion is
$$\sum_{n=1}^{\infty}T_n(x)\,\frac{t^n}{n}=\ln\frac{1}{\sqrt{1-2tx+t^2}}.$$
The ordinary generating function for $U_n$ is
$$\sum_{n=0}^{\infty}U_n(x)\,t^n=\frac{1}{1-2tx+t^2},$$
and the exponential generating function is
$$\sum_{n=0}^{\infty}U_n(x)\,\frac{t^n}{n!}=e^{tx}\left(\cosh\!\left(t\sqrt{x^2-1}\right)+\frac{x}{\sqrt{x^2-1}}\sinh\!\left(t\sqrt{x^2-1}\right)\right).$$

Relations between the two kinds of Chebyshev polynomials

The Chebyshev polynomials of the first and second kinds correspond to a complementary pair of Lucas sequences $\tilde V_n(P,Q)$ and $\tilde U_n(P,Q)$ with parameters $P=2x$ and $Q=1$:
$$\tilde U_n(2x,1)=U_{n-1}(x),\qquad \tilde V_n(2x,1)=2\,T_n(x).$$
It follows that they also satisfy a pair of mutual recurrence equations:
$$T_{n+1}(x)=x\,T_n(x)-(1-x^2)\,U_{n-1}(x),\qquad U_{n+1}(x)=x\,U_n(x)+T_{n+1}(x).$$
The second of these may be rearranged using the recurrence definition for the Chebyshev polynomials of the second kind to give
$$T_n(x)=\tfrac12\big(U_n(x)-U_{n-2}(x)\big).$$
Using this formula iteratively gives the sum formula
$$U_n(x)=\begin{cases}2\displaystyle\sum_{\substack{j\ \mathrm{odd}\\ 0<j\le n}}T_j(x) & \text{for odd } n,\\[1ex] 2\displaystyle\sum_{\substack{j\ \mathrm{even}\\ 0\le j\le n}}T_j(x)-1 & \text{for even } n,\end{cases}$$
while replacing $U_n(x)$ and $U_{n-2}(x)$ using the derivative formula for $T_n(x)$ gives the recurrence relationship for the derivative of $T_n$:
$$2\,T_n(x)=\frac{1}{n+1}\frac{\mathrm d}{\mathrm dx}T_{n+1}(x)-\frac{1}{n-1}\frac{\mathrm d}{\mathrm dx}T_{n-1}(x),\qquad n=2,3,\dots$$
This relationship is used in the Chebyshev spectral method of solving differential equations.

Turán's inequalities for the Chebyshev polynomials are
$$T_n(x)^2-T_{n-1}(x)\,T_{n+1}(x)=1-x^2>0\quad\text{for } -1<x<1,$$
$$U_n(x)^2-U_{n-1}(x)\,U_{n+1}(x)=1>0.$$

The integral relations are
$$\int_{-1}^{1}\frac{T_n(y)}{y-x}\,\frac{\mathrm dy}{\sqrt{1-y^2}}=\pi\,U_{n-1}(x),\qquad \int_{-1}^{1}\frac{U_{n-1}(y)}{y-x}\,\sqrt{1-y^2}\,\mathrm dy=-\pi\,T_n(x),$$
where the integrals are taken as principal values.

Explicit expressions

Using the complex number exponentiation definition of the Chebyshev polynomial, one can derive the following expressions, valid for any real $x$:
$$T_n(x)=\tfrac12\Big(\big(x-\sqrt{x^2-1}\big)^n+\big(x+\sqrt{x^2-1}\big)^n\Big)=\tfrac12\Big(\big(x-\sqrt{x^2-1}\big)^n+\big(x-\sqrt{x^2-1}\big)^{-n}\Big).$$
The two are equivalent because $\big(x+\sqrt{x^2-1}\big)\big(x-\sqrt{x^2-1}\big)=1$.

An explicit form of the Chebyshev polynomial in terms of monomials $x^k$ follows from de Moivre's formula:
$$T_n(\cos\theta)=\operatorname{Re}\big(\cos n\theta+i\sin n\theta\big)=\operatorname{Re}\big((\cos\theta+i\sin\theta)^n\big),$$
where $\operatorname{Re}$ denotes the real part of a complex number. Expanding the formula, one gets
$$(\cos\theta+i\sin\theta)^n=\sum_{j=0}^{n}\binom{n}{j}\,i^j\sin^j\theta\,\cos^{n-j}\theta.$$
The real part of the expression is obtained from summands corresponding to even indices.
Noting $i^{2j}=(-1)^j$ and $\sin^{2j}\theta=(1-\cos^2\theta)^j$, one gets the explicit formula
$$\cos n\theta=\sum_{j=0}^{\lfloor n/2\rfloor}\binom{n}{2j}(\cos^2\theta-1)^j\cos^{n-2j}\theta,$$
which in turn means that
$$T_n(x)=\sum_{j=0}^{\lfloor n/2\rfloor}\binom{n}{2j}(x^2-1)^j x^{n-2j}.$$
This can be written as a ${}_2F_1$ hypergeometric function:
$$\begin{aligned}
T_n(x)&=\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}\left(x^2-1\right)^k x^{n-2k}
= x^n\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}\left(1-x^{-2}\right)^k\\
&=\frac{n}{2}\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^k\,\frac{(n-k-1)!}{k!\,(n-2k)!}\,(2x)^{n-2k} &&\text{for } n>0\\
&=n\sum_{k=0}^{n}(-2)^k\,\frac{(n+k-1)!}{(n-k)!\,(2k)!}\,(1-x)^k &&\text{for } n>0\\
&={}_2F_1\!\left(-n,\,n;\,\tfrac12;\,\tfrac12(1-x)\right),
\end{aligned}$$
with inverse
$$x^n=2^{1-n}\mathop{{\sum}'}_{\substack{j=0\\ j\equiv n\ (\mathrm{mod}\ 2)}}^{n}\binom{n}{\frac{n-j}{2}}\,T_j(x),$$
where the prime at the summation symbol indicates that the contribution of $j=0$ needs to be halved if it appears.

A related expression for $T_n$ as a sum of monomials with binomial coefficients and powers of two is
$$T_n(x)=\sum_{m=0}^{\lfloor n/2\rfloor}(-1)^m\left(\binom{n-m}{m}+\binom{n-m-1}{n-2m}\right)2^{\,n-2m-1}\,x^{n-2m}.$$

Similarly, $U_n$ can be expressed in terms of hypergeometric functions:
$$\begin{aligned}
U_n(x)&=\frac{\left(x+\sqrt{x^2-1}\right)^{n+1}-\left(x-\sqrt{x^2-1}\right)^{n+1}}{2\sqrt{x^2-1}}\\
&=\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n+1}{2k+1}\left(x^2-1\right)^k x^{n-2k}
= x^n\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n+1}{2k+1}\left(1-x^{-2}\right)^k\\
&=\sum_{k=0}^{\lfloor n/2\rfloor}\binom{2k-(n+1)}{k}\,(2x)^{n-2k}
=\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^k\binom{n-k}{k}\,(2x)^{n-2k} &&\text{for } n>0\\
&=\sum_{k=0}^{n}(-2)^k\,\frac{(n+k+1)!}{(n-k)!\,(2k+1)!}\,(1-x)^k &&\text{for } n>0\\
&=(n+1)\,{}_2F_1\!\left(-n,\,n+2;\,\tfrac32;\,\tfrac12(1-x)\right).
\end{aligned}$$

Properties

Symmetry

$$T_n(-x)=(-1)^n\,T_n(x),\qquad U_n(-x)=(-1)^n\,U_n(x).$$
That is, Chebyshev polynomials of even order have even symmetry and therefore contain only even powers of $x$. Chebyshev polynomials of odd order have odd symmetry and therefore contain only odd powers of $x$.
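The explicit monomial expansion is easy to test against the recurrence. The sketch below (my own illustration, using only math.comb from the standard library) checks the first form of the sum above for several degrees and also exercises the symmetry relation.

```python
# Check the explicit expansion T_n(x) = sum_j C(n, 2j) (x^2 - 1)^j x^(n - 2j)
# against the three-term recurrence, and the symmetry T_n(-x) = (-1)^n T_n(x).
from math import comb

def T_recurrence(n, x):
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def T_explicit(n, x):
    return sum(comb(n, 2 * j) * (x * x - 1.0) ** j * x ** (n - 2 * j)
               for j in range(n // 2 + 1))

for n in range(9):
    for x in (-0.9, -0.3, 0.4, 1.7):
        assert abs(T_explicit(n, x) - T_recurrence(n, x)) < 1e-9
        assert abs(T_recurrence(n, -x) - (-1) ** n * T_recurrence(n, x)) < 1e-9
print("explicit expansion and symmetry verified")
```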
Roots and extrema

A Chebyshev polynomial of either kind with degree $n$ has $n$ different simple roots, called Chebyshev roots, in the interval $[-1,1]$. The roots of the Chebyshev polynomial of the first kind are sometimes called Chebyshev nodes because they are used as nodes in polynomial interpolation. Using the trigonometric definition and the fact that
$$\cos\left((2k+1)\frac{\pi}{2}\right)=0,$$
one can show that the roots of $T_n$ are
$$x_k=\cos\left(\frac{\pi(k+1/2)}{n}\right),\qquad k=0,\dots,n-1.$$
Similarly, the roots of $U_n$ are
$$x_k=\cos\left(\frac{k}{n+1}\,\pi\right),\qquad k=1,\dots,n.$$
The extrema of $T_n$ on the interval $-1\le x\le 1$ are located at
$$x_k=\cos\left(\frac{k}{n}\,\pi\right),\qquad k=0,\dots,n.$$

One unique property of the Chebyshev polynomials of the first kind is that on the interval $-1\le x\le 1$ all of the extrema have values that are either $-1$ or $1$. Thus these polynomials have only two finite critical values, the defining property of Shabat polynomials. Both the first and second kinds of Chebyshev polynomial have extrema at the endpoints, given by
$$T_n(1)=1,\qquad T_n(-1)=(-1)^n,\qquad U_n(1)=n+1,\qquad U_n(-1)=(-1)^n(n+1).$$

The extrema of $T_n(x)$ on the interval $-1\le x\le 1$, where $n>0$, are located at $n+1$ values of $x$. They are $\pm 1$, or $\cos\left(\frac{2\pi k}{d}\right)$ where $d>2$, $d\mid 2n$, $0<k<d/2$ and $(k,d)=1$, i.e. $k$ and $d$ are relatively prime. Specifically (see Minimal polynomial of 2cos(2pi/n)), when $n$ is even:

- $T_n(x)=1$ if $x=\pm 1$, or $d>2$ and $2n/d$ is even. There are $n/2+1$ such values of $x$.
- $T_n(x)=-1$ if $d>2$ and $2n/d$ is odd. There are $n/2$ such values of $x$.

When $n$ is odd:

- $T_n(x)=1$ if $x=1$, or $d>2$ and $2n/d$ is even. There are $(n+1)/2$ such values of $x$.
- $T_n(x)=-1$ if $x=-1$, or $d>2$ and $2n/d$ is odd. There are $(n+1)/2$ such values of $x$.

Differentiation and integration

The derivatives of the polynomials can be less than straightforward.
By differentiating the polynomials in their trigonometric forms, it can be shown that:
$$\frac{\mathrm d T_n}{\mathrm dx}=n\,U_{n-1},\qquad
\frac{\mathrm d U_n}{\mathrm dx}=\frac{(n+1)\,T_{n+1}-x\,U_n}{x^2-1},\qquad
\frac{\mathrm d^2 T_n}{\mathrm dx^2}=n\,\frac{n\,T_n-x\,U_{n-1}}{x^2-1}=n\,\frac{(n+1)\,T_n-U_n}{x^2-1}.$$
The last two formulas can be numerically troublesome due to the division by zero (a $0/0$ indeterminate form, specifically) at $x=1$ and $x=-1$. By L'Hôpital's rule:
$$\left.\frac{\mathrm d^2 T_n}{\mathrm dx^2}\right|_{x=1}=\frac{n^4-n^2}{3},\qquad
\left.\frac{\mathrm d^2 T_n}{\mathrm dx^2}\right|_{x=-1}=(-1)^n\,\frac{n^4-n^2}{3}.$$
More generally,
$$\left.\frac{\mathrm d^p T_n}{\mathrm dx^p}\right|_{x=\pm 1}=(\pm 1)^{n+p}\prod_{k=0}^{p-1}\frac{n^2-k^2}{2k+1},$$
which is of great use in the numerical solution of eigenvalue problems. Also,
$$\frac{\mathrm d^p}{\mathrm dx^p}T_n(x)=2^p\,n\mathop{{\sum}'}_{\substack{0\le k\le n-p\\ k\equiv n-p\ (\mathrm{mod}\ 2)}}\binom{\frac{n+p-k}{2}-1}{\frac{n-p-k}{2}}\frac{\left(\frac{n+p+k}{2}-1\right)!}{\left(\frac{n-p+k}{2}\right)!}\,T_k(x),\qquad p\ge 1,$$
where the prime at the summation symbol means that the term contributed by $k=0$ is to be halved, if it appears.
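A small numerical experiment can confirm the first of these derivative identities. The sketch below (my own, not from the article) compares a central finite difference of $T_n$ with $n\,U_{n-1}$ at a few interior points.

```python
# Check dT_n/dx = n * U_{n-1} numerically using the recurrences and a
# central finite difference for the derivative.
def cheb_T(n, x):
    a, b = 1.0, x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2.0 * x * b - a
    return b

def cheb_U(n, x):
    a, b = 1.0, 2.0 * x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2.0 * x * b - a
    return b

h = 1e-6
for n in range(1, 8):
    for x in (-0.8, -0.2, 0.3, 0.9):
        dT = (cheb_T(n, x + h) - cheb_T(n, x - h)) / (2.0 * h)  # central difference
        assert abs(dT - n * cheb_U(n - 1, x)) < 1e-5
print("dT_n/dx = n U_{n-1} holds numerically")
```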
Concerning integration, the first derivative of the $T_n$ implies that
$$\int U_n\,\mathrm dx=\frac{T_{n+1}}{n+1},$$
and the recurrence relation for the first kind polynomials involving derivatives establishes that, for $n\ge 2$,
$$\int T_n\,\mathrm dx=\frac12\left(\frac{T_{n+1}}{n+1}-\frac{T_{n-1}}{n-1}\right)=\frac{n\,T_{n+1}}{n^2-1}-\frac{x\,T_n}{n-1}.$$
The last formula can be further manipulated to express the integral of $T_n$ as a function of Chebyshev polynomials of the first kind only:
$$\int T_n\,\mathrm dx=\frac{n}{n^2-1}T_{n+1}-\frac{1}{n-1}T_1 T_n
=\frac{n}{n^2-1}T_{n+1}-\frac{1}{2(n-1)}\big(T_{n+1}+T_{n-1}\big)
=\frac{1}{2(n+1)}T_{n+1}-\frac{1}{2(n-1)}T_{n-1}.$$
Furthermore,
$$\int_{-1}^{1}T_n(x)\,\mathrm dx=\begin{cases}\dfrac{(-1)^n+1}{1-n^2} & \text{if } n\neq 1,\\[1ex] 0 & \text{if } n=1.\end{cases}$$

Products of Chebyshev polynomials

The Chebyshev polynomials of the first kind satisfy the relation
$$T_m(x)\,T_n(x)=\tfrac12\big(T_{m+n}(x)+T_{|m-n|}(x)\big),\qquad \forall\,m,n\ge 0,$$
which is easily proved from the product-to-sum formula for the cosine:
$$2\cos\alpha\,\cos\beta=\cos(\alpha+\beta)+\cos(\alpha-\beta).$$
For $n=1$ this results in the already known recurrence formula, just arranged differently, and with $n=2$ it forms the recurrence relation for all even or all odd indexed Chebyshev polynomials (depending on the parity of the lowest $m$), which implies the evenness or oddness of these polynomials. Three more useful formulas for evaluating Chebyshev polynomials can be concluded from this product expansion:
$$T_{2n}(x)=2\,T_n^2(x)-T_0(x)=2\,T_n^2(x)-1,$$
$$T_{2n+1}(x)=2\,T_{n+1}(x)\,T_n(x)-T_1(x)=2\,T_{n+1}(x)\,T_n(x)-x,$$
$$T_{2n-1}(x)=2\,T_{n-1}(x)\,T_n(x)-T_1(x)=2\,T_{n-1}(x)\,T_n(x)-x.$$
The polynomials of the second kind satisfy the similar relation
$$T_m(x)\,U_n(x)=\begin{cases}\tfrac12\big(U_{m+n}(x)+U_{n-m}(x)\big) & \text{if } n\ge m-1,\\ \tfrac12\big(U_{m+n}(x)-U_{m-n-2}(x)\big) & \text{if } n\le m-2,\end{cases}$$
with the convention $U_{-1}\equiv 0$. They also satisfy
$$U_m(x)\,U_n(x)=\sum_{k=0}^{n}U_{m-n+2k}(x)=\sum_{\substack{p=m-n\\ \text{step } 2}}^{m+n}U_p(x)$$
for $m\ge n$.
For $n=2$ this recurrence reduces to
$$U_{m+2}(x)=U_2(x)\,U_m(x)-U_m(x)-U_{m-2}(x)=U_m(x)\big(U_2(x)-1\big)-U_{m-2}(x),$$
which establishes the evenness or oddness of the even or odd indexed Chebyshev polynomials of the second kind, depending on whether $m$ starts with 2 or 3.

Composition and divisibility properties

The trigonometric definitions of $T_n$ and $U_n$ imply the composition or nesting properties
$$T_{mn}(x)=T_m(T_n(x)),\qquad U_{mn-1}(x)=U_{m-1}(T_n(x))\,U_{n-1}(x).$$
For $T_{mn}$ the order of composition may be reversed, making the family of polynomial functions $T_n$ a commutative semigroup under composition.

Since $T_m(x)$ is divisible by $x$ if $m$ is odd, it follows that $T_{mn}(x)$ is divisible by $T_n(x)$ if $m$ is odd. Furthermore, $U_{mn-1}(x)$ is divisible by $U_{n-1}(x)$, and in the case that $m$ is even, divisible by $T_n(x)\,U_{n-1}(x)$.

Orthogonality

Both $T_n$ and $U_n$ form a sequence of orthogonal polynomials. The polynomials of the first kind $T_n$ are orthogonal with respect to the weight
$$\frac{1}{\sqrt{1-x^2}}$$
on the interval $[-1,1]$, i.e. we have
$$\int_{-1}^{1}T_n(x)\,T_m(x)\,\frac{\mathrm dx}{\sqrt{1-x^2}}=\begin{cases}0 & \text{if } n\neq m,\\ \pi & \text{if } n=m=0,\\ \dfrac{\pi}{2} & \text{if } n=m\neq 0.\end{cases}$$
This can be proven by letting $x=\cos\theta$ and using the defining identity $T_n(\cos\theta)=\cos(n\theta)$.

Similarly, the polynomials of the second kind $U_n$ are orthogonal with respect to the weight
$$\sqrt{1-x^2}$$
on the interval $[-1,1]$, i.e. we have
$$\int_{-1}^{1}U_n(x)\,U_m(x)\,\sqrt{1-x^2}\,\mathrm dx=\begin{cases}0 & \text{if } n\neq m,\\ \dfrac{\pi}{2} & \text{if } n=m.\end{cases}$$
(The measure $\sqrt{1-x^2}\,\mathrm dx$ is, to within a normalizing constant, the Wigner semicircle distribution.)

These orthogonality properties follow from the fact that the Chebyshev polynomials solve the Chebyshev differential equations
$$(1-x^2)\,T_n''-x\,T_n'+n^2\,T_n=0,\qquad (1-x^2)\,U_n''-3x\,U_n'+n(n+2)\,U_n=0,$$
which are Sturm–Liouville differential equations. It is a general feature of such differential equations that there is a distinguished orthonormal set of solutions. (Another way to define the Chebyshev polynomials is as the solutions to those equations.)
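The continuous orthogonality relation for the $T_n$ can be verified numerically with Gauss-Chebyshev quadrature, which integrates a polynomial against the weight $1/\sqrt{1-x^2}$ exactly when enough nodes are used. The sketch below is my own illustration and assumes NumPy is available.

```python
# Verify the weighted orthogonality of T_n using Gauss-Chebyshev quadrature,
# which integrates p(x)/sqrt(1 - x^2) exactly for polynomials p of degree
# up to 2*N - 1.
import numpy as np
from numpy.polynomial import chebyshev as C

N = 32
x, w = C.chebgauss(N)   # nodes and weights for the weight 1/sqrt(1 - x^2)

def T(n, x):
    """Evaluate T_n at x via a Chebyshev-basis coefficient vector."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return C.chebval(x, coeffs)

for n in range(6):
    for m in range(6):
        integral = np.sum(w * T(n, x) * T(m, x))
        if n != m:
            expected = 0.0
        elif n == 0:
            expected = np.pi
        else:
            expected = np.pi / 2
        assert abs(integral - expected) < 1e-12
print("continuous orthogonality of T_n verified by quadrature")
```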
The $T_n$ also satisfy a discrete orthogonality condition:
$$\sum_{k=0}^{N-1} T_i(x_k)\,T_j(x_k) = \begin{cases} 0 & \text{if } i \neq j, \\ N & \text{if } i = j = 0, \\ \frac{N}{2} & \text{if } i = j \neq 0, \end{cases}$$
where $N$ is any integer greater than $\max(i,j)$, and the $x_k$ are the $N$ Chebyshev nodes (see above) of $T_N(x)$:
$$x_k = \cos\left(\pi\,\frac{2k+1}{2N}\right) \quad \text{for } k = 0, 1, \dots, N-1.$$
For the polynomials of the second kind and any integer $N > i+j$ with the same Chebyshev nodes $x_k$, there are similar sums:
$$\sum_{k=0}^{N-1} U_i(x_k)\,U_j(x_k)\left(1 - x_k^2\right) = \begin{cases} 0 & \text{if } i \neq j, \\ \frac{N}{2} & \text{if } i = j, \end{cases}$$
and without the weight function:
$$\sum_{k=0}^{N-1} U_i(x_k)\,U_j(x_k) = \begin{cases} 0 & \text{if } i \not\equiv j \pmod{2}, \\ N\cdot\left(1 + \min\{i,j\}\right) & \text{if } i \equiv j \pmod{2}. \end{cases}$$
For any integer $N > i+j$, based on the $N$ zeros of $U_N(x)$:
$$y_k = \cos\left(\pi\,\frac{k+1}{N+1}\right) \quad \text{for } k = 0, 1, \dots, N-1,$$
one can get the sum:
$$\sum_{k=0}^{N-1} U_i(y_k)\,U_j(y_k)\left(1 - y_k^2\right) = \begin{cases} 0 & \text{if } i \neq j, \\ \frac{N+1}{2} & \text{if } i = j, \end{cases}$$
and again without the weight function:
$$\sum_{k=0}^{N-1} U_i(y_k)\,U_j(y_k) = \begin{cases} 0 & \text{if } i \not\equiv j \pmod{2}, \\ \left(\min\{i,j\}+1\right)\left(N-\max\{i,j\}\right) & \text{if } i \equiv j \pmod{2}. \end{cases}$$

Minimal ∞-norm [edit]

For any given $n \ge 1$, among the polynomials of degree $n$ with leading coefficient 1 (monic polynomials):
$$f(x) = \frac{1}{2^{n-1}}\,T_n(x)$$
is the one of which the maximal absolute value on the interval [−1, 1] is minimal. This maximal absolute value is:
$$\frac{1}{2^{n-1}},$$
and $|f(x)|$ reaches this maximum exactly $n+1$ times at:
$$x = \cos\frac{k\pi}{n} \quad \text{for } 0 \le k \le n.$$

Proof

Let's assume that $w_n(x)$ is a polynomial of degree $n$ with leading coefficient 1 with maximal absolute value on the interval [−1, 1] less than $1/2^{n-1}$.
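A quick numerical illustration of the discrete orthogonality condition for the first kind (a sketch added here, not from the article; the helper name discrete_T_inner and the choice N = 16 are arbitrary):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Check the discrete orthogonality of T_i and T_j at the N Chebyshev nodes of T_N.
def discrete_T_inner(i, j, N):
    k = np.arange(N)
    xk = np.cos(np.pi * (2 * k + 1) / (2 * N))    # Chebyshev nodes (zeros of T_N)
    return np.sum(C.Chebyshev.basis(i)(xk) * C.Chebyshev.basis(j)(xk))

N = 16
print(discrete_T_inner(3, 5, N))   # ~0     (i != j)
print(discrete_T_inner(0, 0, N))   # ~N     (i = j = 0)
print(discrete_T_inner(4, 4, N))   # ~N/2   (i = j != 0)
```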
Define
$$f_n(x) = \frac{1}{2^{n-1}}\,T_n(x) - w_n(x).$$
Because at the extreme points of $T_n$ we have $|w_n(x)| < \left|\frac{1}{2^{n-1}}T_n(x)\right|$, it follows that
$$\begin{aligned} f_n(x) &> 0 \quad \text{for } x = \cos\frac{2k\pi}{n}, & 0 \le 2k \le n, \\ f_n(x) &< 0 \quad \text{for } x = \cos\frac{(2k+1)\pi}{n}, & 0 \le 2k+1 \le n. \end{aligned}$$
From the intermediate value theorem, $f_n(x)$ has at least $n$ roots. However, this is impossible, as $f_n(x)$ is a polynomial of degree $n-1$, so the fundamental theorem of algebra implies it has at most $n-1$ roots.

Remark [edit]

By the equioscillation theorem, among all the polynomials of degree ≤ $n$, the polynomial $f$ minimizes $\|f\|_\infty$ on [−1, 1] if and only if there are $n+2$ points $-1 \le x_0 < x_1 < \cdots < x_{n+1} \le 1$ such that $|f(x_i)| = \|f\|_\infty$.

Of course, the null polynomial on the interval [−1, 1] can be approximated by itself and minimizes the ∞-norm. Above, however, $|f|$ reaches its maximum only $n+1$ times because we are searching for the best polynomial of degree $n \ge 1$ (therefore the theorem evoked previously cannot be used).

Chebyshev polynomials as special cases of more general polynomial families [edit]

The Chebyshev polynomials are a special case of the ultraspherical or Gegenbauer polynomials $C_n^{(\lambda)}(x)$, which themselves are a special case of the Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$:
$$\begin{aligned} T_n(x) &= \frac{n}{2}\lim_{q\to 0}\frac{1}{q}\,C_n^{(q)}(x) \quad \text{if } n \ge 1, \\ &= \frac{1}{\binom{n-\frac{1}{2}}{n}}\,P_n^{\left(-\frac{1}{2},-\frac{1}{2}\right)}(x) = \frac{2^{2n}}{\binom{2n}{n}}\,P_n^{\left(-\frac{1}{2},-\frac{1}{2}\right)}(x), \\ U_n(x) &= C_n^{(1)}(x) = \frac{n+1}{\binom{n+\frac{1}{2}}{n}}\,P_n^{\left(\frac{1}{2},\frac{1}{2}\right)}(x) = \frac{2^{2n+1}}{\binom{2n+2}{n+1}}\,P_n^{\left(\frac{1}{2},\frac{1}{2}\right)}(x). \end{aligned}$$
Chebyshev polynomials are also a special case of Dickson polynomials:
$$D_n(2x\alpha, \alpha^2) = 2\alpha^n T_n(x), \qquad E_n(2x\alpha, \alpha^2) = \alpha^n U_n(x).$$
In particular, when $\alpha = \tfrac{1}{2}$, they are related by $D_n\left(x, \tfrac{1}{4}\right) = 2^{1-n}T_n(x)$ and $E_n\left(x, \tfrac{1}{4}\right) = 2^{-n}U_n(x)$.

Other properties [edit]

The curves given by $y = T_n(x)$, or equivalently, by the parametric equations $y = T_n(\cos\theta) = \cos n\theta$, $x = \cos\theta$, are a special case of Lissajous curves with frequency ratio equal to $n$.
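The identification of $U_n$ with the Gegenbauer polynomial $C_n^{(1)}$ quoted above can be checked directly with SciPy (a small illustrative check added here, not part of the article):

```python
import numpy as np
from scipy.special import eval_chebyu, eval_gegenbauer

# Check U_n(x) = C_n^{(1)}(x), the Gegenbauer special case quoted above.
x = np.linspace(-1.0, 1.0, 7)
for n in range(6):
    print(n, np.max(np.abs(eval_chebyu(n, x) - eval_gegenbauer(n, 1.0, x))))  # ~0
```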
Similar to the formula:
$$T_n(\cos\theta) = \cos(n\theta),$$
we have the analogous formula:
$$T_{2n+1}(\sin\theta) = (-1)^n \sin\left((2n+1)\theta\right).$$
For $x \neq 0$:
$$T_n\!\left(\frac{x + x^{-1}}{2}\right) = \frac{x^n + x^{-n}}{2}$$
and:
$$x^n = T_n\!\left(\frac{x + x^{-1}}{2}\right) + \frac{x - x^{-1}}{2}\,U_{n-1}\!\left(\frac{x + x^{-1}}{2}\right),$$
which follows from the fact that this holds by definition for $x = e^{i\theta}$.

There are relations between Legendre polynomials and Chebyshev polynomials:
$$\sum_{k=0}^{n} P_k(x)\,T_{n-k}(x) = (n+1)\,P_n(x),$$
$$\sum_{k=0}^{n} P_k(x)\,P_{n-k}(x) = U_n(x).$$
These identities can be proven using generating functions and discrete convolution.

Chebyshev polynomials as determinants [edit]

From their definition by recurrence it follows that the Chebyshev polynomials can be obtained as determinants of special tridiagonal matrices of size $k \times k$:
$$T_k(x) = \det\begin{bmatrix} x & 1 & 0 & \cdots & 0 \\ 1 & 2x & 1 & \ddots & \vdots \\ 0 & 1 & 2x & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & 1 \\ 0 & \cdots & 0 & 1 & 2x \end{bmatrix},$$
and similarly for $U_k$.

Examples [edit]

First kind [edit]

[Figure: the first few Chebyshev polynomials of the first kind in the domain −1 < x < 1: the flat $T_0$, and $T_1$, $T_2$, $T_3$, $T_4$ and $T_5$.]

The first few Chebyshev polynomials of the first kind are OEIS: A028297
$$\begin{aligned} T_0(x) &= 1 \\ T_1(x) &= x \\ T_2(x) &= 2x^2 - 1 \\ T_3(x) &= 4x^3 - 3x \\ T_4(x) &= 8x^4 - 8x^2 + 1 \\ T_5(x) &= 16x^5 - 20x^3 + 5x \\ T_6(x) &= 32x^6 - 48x^4 + 18x^2 - 1 \\ T_7(x) &= 64x^7 - 112x^5 + 56x^3 - 7x \\ T_8(x) &= 128x^8 - 256x^6 + 160x^4 - 32x^2 + 1 \\ T_9(x) &= 256x^9 - 576x^7 + 432x^5 - 120x^3 + 9x \\ T_{10}(x) &= 512x^{10} - 1280x^8 + 1120x^6 - 400x^4 + 50x^2 - 1 \end{aligned}$$

Second kind [edit]

[Figure: the first few Chebyshev polynomials of the second kind in the domain −1 < x < 1: the flat $U_0$, and $U_1$, $U_2$, $U_3$, $U_4$ and $U_5$. Although not visible in the image, $U_n(1) = n+1$ and $U_n(-1) = (n+1)(-1)^n$.]
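The tridiagonal-determinant formula can likewise be verified numerically. The sketch below is illustrative only (the helper name T_via_determinant and the sample point x = 0.3 are ad hoc choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Check the tridiagonal-determinant formula for T_k at a sample point x.
def T_via_determinant(k, x):
    M = np.zeros((k, k))
    np.fill_diagonal(M, 2 * x)
    M[0, 0] = x                        # top-left entry is x, not 2x
    idx = np.arange(k - 1)
    M[idx, idx + 1] = 1.0              # superdiagonal
    M[idx + 1, idx] = 1.0              # subdiagonal
    return np.linalg.det(M)

x = 0.3
for k in range(1, 8):
    print(k, T_via_determinant(k, x), C.Chebyshev.basis(k)(x))  # the two columns agree
```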
The first few Chebyshev polynomials of the second kind are OEIS: A053117
$$\begin{aligned} U_0(x) &= 1 \\ U_1(x) &= 2x \\ U_2(x) &= 4x^2 - 1 \\ U_3(x) &= 8x^3 - 4x \\ U_4(x) &= 16x^4 - 12x^2 + 1 \\ U_5(x) &= 32x^5 - 32x^3 + 6x \\ U_6(x) &= 64x^6 - 80x^4 + 24x^2 - 1 \\ U_7(x) &= 128x^7 - 192x^5 + 80x^3 - 8x \\ U_8(x) &= 256x^8 - 448x^6 + 240x^4 - 40x^2 + 1 \\ U_9(x) &= 512x^9 - 1024x^7 + 672x^5 - 160x^3 + 10x \\ U_{10}(x) &= 1024x^{10} - 2304x^8 + 1792x^6 - 560x^4 + 60x^2 - 1 \end{aligned}$$

As a basis set [edit]

[Figure: the non-smooth function (top) $y = -x^3 H(-x)$, where $H$ is the Heaviside step function, and (bottom) the 5th partial sum of its Chebyshev expansion. The 7th sum is indistinguishable from the original function at the resolution of the graph.]

In the appropriate Sobolev space, the set of Chebyshev polynomials forms an orthonormal basis, so that a function in the same space can, on −1 ≤ x ≤ 1, be expressed via the expansion:
$$f(x) = \sum_{n=0}^{\infty} a_n T_n(x).$$
Furthermore, as mentioned previously, the Chebyshev polynomials form an orthogonal basis which (among other things) implies that the coefficients $a_n$ can be determined easily through the application of an inner product. This sum is called a Chebyshev series or a Chebyshev expansion.

Since a Chebyshev series is related to a Fourier cosine series through a change of variables, all of the theorems, identities, etc. that apply to Fourier series have a Chebyshev counterpart. These attributes include:

- The Chebyshev polynomials form a complete orthogonal system.
- The Chebyshev series converges to f(x) if the function is piecewise smooth and continuous. The smoothness requirement can be relaxed in most cases, as long as there are a finite number of discontinuities in f(x) and its derivatives.
- At a discontinuity, the series will converge to the average of the right and left limits.

The abundance of theorems and identities inherited from Fourier series makes the Chebyshev polynomials important tools in numerical analysis; for example, they are the most popular general-purpose basis functions used in the spectral method, often in favor of trigonometric series due to generally faster convergence for continuous functions (Gibbs' phenomenon is still a problem). The Chebfun software package supports function manipulation based on expansion in the Chebyshev basis.

Example 1 [edit]

Consider the Chebyshev expansion of log(1 + x). One can express:
$$\log(1+x) = \sum_{n=0}^{\infty} a_n T_n(x).$$
One can find the coefficients $a_n$ either through the application of an inner product or by the discrete orthogonality condition.
For the inner product:
$$\int_{-1}^{+1} \frac{T_m(x)\,\log(1+x)}{\sqrt{1-x^2}}\,\mathrm{d}x = \sum_{n=0}^{\infty} a_n \int_{-1}^{+1} \frac{T_m(x)\,T_n(x)}{\sqrt{1-x^2}}\,\mathrm{d}x,$$
which gives:
$$a_n = \begin{cases} -\log 2 & \text{for } n = 0, \\ \dfrac{-2(-1)^n}{n} & \text{for } n > 0. \end{cases}$$
Alternatively, when the inner product of the function being approximated cannot be evaluated, the discrete orthogonality condition gives an often useful result for approximate coefficients:
$$a_n \approx \frac{2 - \delta_{0n}}{N}\,\sum_{k=0}^{N-1} T_n(x_k)\,\log(1+x_k),$$
where $\delta_{ij}$ is the Kronecker delta function and the $x_k$ are the $N$ Gauss–Chebyshev zeros of $T_N(x)$:
$$x_k = \cos\left(\frac{\pi\left(k + \tfrac{1}{2}\right)}{N}\right).$$
For any $N$, these approximate coefficients provide an exact approximation to the function at $x_k$ with a controlled error between those points. The exact coefficients are obtained with $N = \infty$, thus representing the function exactly at all points in [−1, 1]. The rate of convergence depends on the function and its smoothness.

This allows us to compute the approximate coefficients $a_n$ very efficiently through the discrete cosine transform:
$$a_n \approx \frac{2 - \delta_{0n}}{N}\,\sum_{k=0}^{N-1} \cos\left(\frac{n\pi\left(k + \tfrac{1}{2}\right)}{N}\right)\log(1+x_k).$$

Example 2 [edit]

To provide another example:
$$\begin{aligned} \left(1 - x^2\right)^{\alpha} &= -\frac{1}{\sqrt{\pi}}\,\frac{\Gamma\left(\tfrac{1}{2}+\alpha\right)}{\Gamma(\alpha+1)} + 2^{1-2\alpha}\,\sum_{n=0} (-1)^n\,\binom{2\alpha}{\alpha-n}\,T_{2n}(x) \\ &= 2^{-2\alpha}\,\sum_{n=0} (-1)^n\,\binom{2\alpha+1}{\alpha-n}\,U_{2n}(x). \end{aligned}$$

Partial sums [edit]

The partial sums of:
$$f(x) = \sum_{n=0}^{\infty} a_n T_n(x)$$
are very useful in the approximation of various functions and in the solution of differential equations (see spectral method). Two common methods for determining the coefficients $a_n$ are through the use of the inner product, as in Galerkin's method, and through the use of collocation, which is related to interpolation.

As an interpolant, the $N$ coefficients of the $(N-1)$st partial sum are usually obtained on the Chebyshev–Gauss–Lobatto points (or Lobatto grid), which results in minimum error and avoids Runge's phenomenon associated with a uniform grid. This collection of points corresponds to the extrema of the highest-order polynomial in the sum, plus the endpoints, and is given by:
$$x_k = -\cos\left(\frac{k\pi}{N-1}\right); \qquad k = 0, 1, \dots, N-1.$$

Polynomial in Chebyshev form [edit]

An arbitrary polynomial of degree $N$ can be written in terms of the Chebyshev polynomials of the first kind. Such a polynomial $p(x)$ is of the form:
$$p(x) = \sum_{n=0}^{N} a_n T_n(x).$$
Polynomials in Chebyshev form can be evaluated using the Clenshaw algorithm.
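As an illustration of the discrete (DCT-style) formula above, the following sketch (added here, with ad hoc variable names and N = 64) approximates the first few coefficients of log(1 + x) and compares them with the exact values $a_0 = -\log 2$, $a_n = -2(-1)^n/n$; for this modest N the low-order coefficients agree to a couple of decimal places:

```python
import numpy as np

# Approximate the Chebyshev coefficients of log(1 + x) from the discrete
# orthogonality condition at the N Gauss-Chebyshev nodes, then compare with
# the exact values a_0 = -log 2 and a_n = -2(-1)^n / n.
N = 64
k = np.arange(N)
xk = np.cos(np.pi * (k + 0.5) / N)           # zeros of T_N
fk = np.log(1.0 + xk)

for n in range(5):
    delta = 1.0 if n == 0 else 0.0
    a_n = (2.0 - delta) / N * np.sum(np.cos(n * np.pi * (k + 0.5) / N) * fk)
    exact = -np.log(2.0) if n == 0 else -2.0 * (-1) ** n / n
    print(n, a_n, exact)
```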
Families of polynomials related to Chebyshev polynomials [edit]

Polynomials denoted $C_n(x)$ and $S_n(x)$ closely related to Chebyshev polynomials are sometimes used. They are defined by:
$$C_n(x) = 2T_n\left(\frac{x}{2}\right), \qquad S_n(x) = U_n\left(\frac{x}{2}\right)$$
and satisfy:
$$C_n(x) = S_n(x) - S_{n-2}(x).$$
A. F. Horadam called the polynomials $C_n(x)$ Vieta–Lucas polynomials and denoted them $v_n(x)$. He called the polynomials $S_n(x)$ Vieta–Fibonacci polynomials and denoted them $V_n(x)$. Lists of both sets of polynomials are given in Viète's Opera Mathematica, Chapter IX, Theorems VI and VII. The Vieta–Lucas and Vieta–Fibonacci polynomials of real argument are, up to a power of $i$ and a shift of index in the case of the latter, equal to Lucas and Fibonacci polynomials $L_n$ and $F_n$ of imaginary argument.

Shifted Chebyshev polynomials of the first and second kinds are related to the Chebyshev polynomials by:
$$T_n^*(x) = T_n(2x-1), \qquad U_n^*(x) = U_n(2x-1).$$
When the argument of the Chebyshev polynomial satisfies $2x-1 \in [-1, 1]$, the argument of the shifted Chebyshev polynomial satisfies $x \in [0, 1]$. Similarly, one can define shifted polynomials for generic intervals [a, b].

Around 1990 the terms "third-kind" and "fourth-kind" came into use in connection with Chebyshev polynomials, although the polynomials denoted by these terms had an earlier development under the name airfoil polynomials. According to J. C. Mason and G. H. Elliott, the terminology "third-kind" and "fourth-kind" is due to Walter Gautschi, "in consultation with colleagues in the field of orthogonal polynomials." The Chebyshev polynomials of the third kind are defined as:
$$V_n(x) = \frac{\cos\left(\left(n+\frac{1}{2}\right)\theta\right)}{\cos\left(\frac{\theta}{2}\right)} = \sqrt{\frac{2}{1+x}}\,T_{2n+1}\left(\sqrt{\frac{x+1}{2}}\right)$$
and the Chebyshev polynomials of the fourth kind are defined as:
$$W_n(x) = \frac{\sin\left(\left(n+\frac{1}{2}\right)\theta\right)}{\sin\left(\frac{\theta}{2}\right)} = U_{2n}\left(\sqrt{\frac{x+1}{2}}\right),$$
where $\theta = \arccos x$. They coincide with the Dirichlet kernel. In the airfoil literature $V_n(x)$ and $W_n(x)$ are denoted $t_n(x)$ and $u_n(x)$.
The polynomial families $T_n(x)$, $U_n(x)$, $V_n(x)$, and $W_n(x)$ are orthogonal with respect to the weights:
$$\left(1-x^2\right)^{-1/2}, \quad \left(1-x^2\right)^{1/2}, \quad (1-x)^{-1/2}(1+x)^{1/2}, \quad (1+x)^{-1/2}(1-x)^{1/2}$$
and are proportional to Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$ with:
$$(\alpha,\beta) = \left(-\tfrac{1}{2}, -\tfrac{1}{2}\right), \quad \left(\tfrac{1}{2}, \tfrac{1}{2}\right), \quad \left(-\tfrac{1}{2}, \tfrac{1}{2}\right), \quad \left(\tfrac{1}{2}, -\tfrac{1}{2}\right),$$
respectively. All four families satisfy the recurrence $p_n(x) = 2x\,p_{n-1}(x) - p_{n-2}(x)$ with $p_0(x) = 1$, where $p_n = T_n$, $U_n$, $V_n$, or $W_n$, but they differ according to whether $p_1(x)$ equals $x$, $2x$, $2x-1$, or $2x+1$.

Even order modified Chebyshev polynomials [edit]

Some applications rely on Chebyshev polynomials but may be unable to accommodate the lack of a root at zero, which rules out the use of standard Chebyshev polynomials for these kinds of applications. Even order Chebyshev filter designs using equally terminated passive networks are an example of this. However, even order Chebyshev polynomials may be modified to move the lowest roots down to zero while still maintaining the desirable Chebyshev equi-ripple effect. Such modified polynomials contain two roots at zero, and may be referred to as even order modified Chebyshev polynomials.

Even order modified Chebyshev polynomials may be created from the Chebyshev nodes in the same manner as standard Chebyshev polynomials:
$$P_N = \prod_{i=1}^{N}(x - C_i),$$
where $P_N$ is an N-th order Chebyshev polynomial and $C_i$ is the i-th Chebyshev node.

In the case of even order modified Chebyshev polynomials, the even order modified Chebyshev nodes are used to construct the even order modified Chebyshev polynomials:
$$Pe_N = \prod_{i=1}^{N}(x - Ce_i),$$
where $Pe_N$ is an N-th order even order modified Chebyshev polynomial and $Ce_i$ is the i-th even order modified Chebyshev node.

For example, the 4th order Chebyshev polynomial from the example above is $X^4 - X^2 + 0.125$, which by inspection contains no roots of zero. Creating the polynomial from the even order modified Chebyshev nodes creates a 4th order even order modified Chebyshev polynomial of $X^4 - 0.828427X^2$, which by inspection contains two roots at zero, and may be used in applications requiring roots at zero.

See also [edit]

- Mathematics portal
- Chebyshev rational functions
- Function approximation
- Discrete Chebyshev transform
- Markov brothers' inequality

References [edit]

^ Rivlin, Theodore J. (1974). "Chapter 2, Extremal properties". The Chebyshev Polynomials. Pure and Applied Mathematics (1st ed.). New York–London–Sydney: Wiley-Interscience [John Wiley & Sons]. pp. 56–123. ISBN 978-047172470-4. ^ Lanczos, C. (1952).
"Solution of systems of linear equations by minimized iterations". Journal of Research of the National Bureau of Standards. 49 (1): 33. doi:10.6028/jres.049.006. ^Chebyshev first presented his eponymous polynomials in a paper read before the St. Petersburg Academy in 1853: Chebyshev, P. L. (1854). "Théorie des mécanismes connus sous le nom de parallélogrammes". Mémoires des Savants étrangers présentés à l'Académie de Saint-Pétersbourg (in French). 7: 539–586. Also published separately as Chebyshev, P. L. (1853). Théorie des mécanismes connus sous le nom de parallélogrammes. St. Petersburg: Imprimerie de l'Académie Impériale des Sciences. doi:10.3931/E-RARA-120037. ^Schaeffer, A. C. (1941). "Inequalities of A. Markoff and S. Bernstein for polynomials and related functions". Bulletin of the American Mathematical Society. 47 (8): 565–579. doi:10.1090/S0002-9904-1941-07510-5. ISSN0002-9904. ^Ritt, J. F. (1922). "Prime and Composite Polynomials". Trans. Amer. Math. Soc. 23: 51–66. doi:10.1090/S0002-9947-1922-1501189-9. ^Demeyer, Jeroen (2007). Diophantine Sets over Polynomial Rings and Hilbert's Tenth Problem for Function Fields(PDF) (Ph.D. thesis). p.70. Archived from the original(PDF) on 2 July 2007. ^Bateman & Bateman Manuscript Project 1953, p.184, eqs. 3–4. ^Beckenbach, E. F.; Seidel, W.; Szász, Otto (1951), "Recurrent determinants of Legendre and of ultraspherical polynomials", Duke Math. J., 18: 1–10, doi:10.1215/S0012-7094-51-01801-7, MR0040487 ^Bateman & Bateman Manuscript Project 1953, p. 187, eqs. 47–48. ^ abcMason & Handscomb 2002. ^Cody, W. J. (1970). "A survey of practical rational and polynomial approximation of functions". SIAM Review. 12 (3): 400–423. doi:10.1137/1012082. ^Mathar, Richard J. (2006). "Chebyshev series expansion of inverse polynomials". Journal of Computational and Applied Mathematics. 196 (2): 596–607. arXiv:math/0403344. doi:10.1016/j.cam.2005.10.013. ^Gürtaş, Y. Z. (2017). "Chebyshev Polynomials and the minimal polynomial of cos⁡(2 π/n){\displaystyle \cos(2\pi /n)}". American Mathematical Monthly. 124 (1): 74–78. doi:10.4169/amer.math.monthly.124.1.74. S2CID125797961. ^Wolfram, D. A. (2022). "Factoring Chebyshev polynomials of the first and second kinds with minimal polynomials of cos⁡(2 π/d){\displaystyle \cos(2\pi /d)}". American Mathematical Monthly. 129 (2): 172–176. doi:10.1080/00029890.2022.2005391. S2CID245808448. ^Rayes, M. O.; Trevisan, V.; Wang, P. S. (2005), "Factorization properties of chebyshev polynomials", Computers & Mathematics with Applications, 50 (8–9): 1231–1240, doi:10.1016/j.camwa.2005.07.003 ^ abcBoyd, John P. (2001). Chebyshev and Fourier Spectral Methods(PDF) (second ed.). Dover. ISBN0-486-41183-4. Archived from the original(PDF) on 31 March 2010. Retrieved 19 March 2009. ^"Chebyshev Interpolation: An Interactive Tour". Archived from the original on 18 March 2017. Retrieved 2 June 2016. ^ abHochstrasser 1972, p.778. ^Horadam, A. F. (2002), "Vieta polynomials"(PDF), Fibonacci Quarterly, 40 (3): 223–232 ^Viète, François (1646). Francisci Vietae Opera mathematica: in unum volumen congesta ac recognita / opera atque studio Francisci a Schooten(PDF). Bibliothèque nationale de France. ^ abcMason, J. C.; Elliott, G. H. (1993), "Near-minimax complex approximation by four kinds of Chebyshev polynomial expansion", J. Comput. Appl. Math., 46 (1–2): 291–300, doi:10.1016/0377-0427(93)90303-S ^ abDesmarais, Robert N.; Bland, Samuel R. 
(1995), "Tables of properties of airfoil polynomials", NASA Reference Publication 1343, National Aeronautics and Space Administration ^Saal, Rudolf (January 1979). Handbook of Filter Design (in English and German) (1st ed.). Munich, Germany: Allgemeine Elektricitais-Gesellschaft. pp.25, 26, 56–61, 116, 117. ISBN3-87087-070-2. Sources [edit] Hochstrasser, Urs W. (1972) . "Orthogonal Polynomials". In Abramowitz, Milton; Stegun, Irene (eds.). Handbook of Mathematical Functions (10th printing, with corrections; first ed.). Washington D.C.: National Bureau of Standards. Ch. 22, pp.771–792. LCCN64-60036. MR0167642. Reprint: 1983. New York: Dover. ISBN978-0-486-61272-0. Bateman, Harry; Bateman Manuscript Project (1953). "Tchebichef polynomials". In Erdélyi, Arthur (ed.). Higher Transcendental Functions. Vol.2. Research associates: W. Magnus, F. Oberhettinger[de], F. Tricomi (1st ed.). New York: McGraw-Hill. § 10.11, pp.183–187. LCCN53-5555. Caltech eprint 43491. Reprint: 1981. Melbourne, FL: Krieger. ISBN0-89874-069-X. Mason, J. C.; Handscomb, D.C. (2002). Chebyshev Polynomials. Chapman and Hall/CRC. doi:10.1201/9781420036114. ISBN978-1-4200-3611-4. Further reading [edit] Dette, Holger (1995). "A note on some peculiar nonlinear extremal phenomena of the Chebyshev polynomials". Proceedings of the Edinburgh Mathematical Society. 38 (2): 343–355. arXiv:math/9406222. doi:10.1017/S001309150001912X. Elliott, David (1964). "The evaluation and estimation of the coefficients in the Chebyshev Series expansion of a function". Math. Comp. 18 (86): 274–284. doi:10.1090/S0025-5718-1964-0166903-7. MR0166903. Eremenko, A.; Lempert, L. (1994). "An Extremal Problem For Polynomials"(PDF). Proceedings of the American Mathematical Society. 122 (1): 191–193. doi:10.1090/S0002-9939-1994-1207536-1. MR1207536. Hernandez, M. A. (2001). "Chebyshev's approximation algorithms and applications". Computers & Mathematics with Applications. 41 (3–4): 433–445. doi:10.1016/s0898-1221(00)00286-8. Mason, J. C. (1984). "Some properties and applications of Chebyshev polynomial and rational approximation". Rational Approximation and Interpolation. Lecture Notes in Mathematics. Vol.1105. pp.27–48. doi:10.1007/BFb0072398. ISBN978-3-540-13899-0. Koornwinder, Tom H.; Wong, Roderick S. C.; Koekoek, Roelof; Swarttouw, René F. (2010), "Orthogonal Polynomials", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN978-0-521-19225-5, MR2723248. Remes, Eugene. "On an Extremal Property of Chebyshev Polynomials"(PDF). Salzer, Herbert E. (1976). "Converting interpolation series into Chebyshev series by recurrence formulas". Mathematics of Computation. 30 (134): 295–302. doi:10.1090/S0025-5718-1976-0395159-3. MR0395159. Scraton, R.E. (1969). "The Solution of integral equations in Chebyshev series". Mathematics of Computation. 23 (108): 837–844. doi:10.1090/S0025-5718-1969-0260224-4. MR0260224. Smith, Lyle B. (1966). "Computation of Chebyshev series coefficients". Comm. ACM. 9 (2): 86–87. doi:10.1145/365170.365195. S2CID8876563. Algorithm 277. Suetin, P. K. (2001) , "Chebyshev polynomials", Encyclopedia of Mathematics, EMS Press External links [edit] Media related to Chebyshev polynomials at Wikimedia Commons Weisstein, Eric W."Chebyshev polynomial[s] of the first kind". MathWorld. Mathews, John H. (2003). "Module for Chebyshev polynomials". Department of Mathematics. 
Course notes for Math 340 Numerical Analysis & Math 440 Advanced Numerical Analysis. Fullerton, CA: California State University. Archived from the original on 29 May 2007. Retrieved 17 August 2020. "Numerical computing with functions". The Chebfun Project. "Is there an intuitive explanation for an extremal property of Chebyshev polynomials?". Math Overflow. Question 25534. "Chebyshev polynomial evaluation and the Chebyshev transform". Boost.Math.
Empress (chess)

From Wikipedia, the free encyclopedia

Fairy chess piece

The empress is a fairy chess piece that can move like a rook or a knight. It cannot jump over other pieces when moving as a rook but may do so when moving as a knight. The piece has acquired many names[a] and is frequently called a chancellor, a marshal or a knook. Chess moves in this article use C as notation for the empress. This article uses algebraic notation to describe chess moves.

Movement [edit]

The empress can move as a rook or a knight.

[Diagram: the empress can move, but not jump, to squares with crosses; jump to squares with dots; or capture the pawn on e7.]

[Diagram: maximum range of an empress on an empty board.]

History and nomenclature [edit]

The empress is one of the most simply described fairy chess pieces and as such has a long history and has gone by many names. It was first used in Turkish Great Chess, a large medieval variant of chess, where it was called the war machine (dabbabah; not to be confused with the piece more commonly referred to as the dabbaba today, which is the (2,0) leaper). It was introduced in the West with Carrera's chess from 1617, where it was called a champion,[b] and has been used in many chess variants since then. The name chancellor was introduced by Ben Foster in his large variant Chancellor Chess (chess on a 9×9 board, with a chancellor on the opposite side of the king as the queen), and the name marshal was introduced by L. Tressan in his large variant The Sultan's Game. José Raúl Capablanca used both in his large variant Capablanca Chess: he originally called this piece the marshal, but later changed it to chancellor, which was his original name for the archbishop.

Both chancellor and marshal are popular names for the rook+knight compound, although a case could be made for marshal, as the word is related to mare (female horse) and thus fits better for a piece that can move like a knight than chancellor, which has no connection to horses. Also, there are many commonly used chess pieces that, like chancellor, begin with C (e.g. the cannon in xiangqi, the camel in Tamerlane Chess, the champion in Omega Chess, and the cardinal or princess), and using the name marshal for the rook+knight compound would reduce this difficulty.

The name empress is the most widely used name among problemists. By analogy with the queen, which is a rook+bishop compound, it was suggested that the three basic combinations of the three simple chess pieces (rook, knight, and bishop) should all be named after female royalty. Since the rook+knight compound seemed to be obviously stronger than the bishop+knight compound (as the rook is stronger than the bishop), the name empress was used for the rook+knight compound, and the bishop+knight compound was called the princess. However, the word empress suggests a piece stronger than the queen, while this piece is at best equal to and perhaps weaker than the queen, especially in the endgame.
Value [edit]

[Diagram: White to move can mate in one with 1.Ch4#.]

[Diagram: the empress can deliver mate on its own with 1.Cf8#.]

Ralph Betza (inventor of chess with different armies, in which the empress was used in one of the armies) rated the empress as about nine points, equivalent to a queen, as the knight and bishop were about equal and the empress and queen were simply the knight and bishop with the power of a rook added to both. He noted that the queen may be slightly stronger than the empress in the endgame, but that the empress, on the other hand, has a greater ability to give perpetual checks and salvage a draw in an otherwise lost game. Unlike the queen, which can move in 8 different directions, the empress can move in 12.

In the endgame of king and amazon (queen+knight compound) versus king and empress, the amazon usually wins, but in a few positions, the weaker side may force a draw by setting up a fortress. These fortresses force the side with the amazon to give perpetual check, as otherwise the side with the empress can force a simplification or give its own perpetual check.

King and empress versus king is a forced win for the side with the empress; checkmate can be forced within 11 moves. In comparison, the queen requires 10 moves, and the rook requires 16. The drawing positions in the queen versus pawn endgame do not exist in the empress versus pawn endgame.
Examples [edit]

[Diagram: Capablanca chess starting position. Chancellors are on d1 and d8; archbishops are on g1 and g8.]

[Diagram: Almost chess starting position. Chancellors replace queens.]

Many chess variants use a rook+knight compound, but due to its powerful ability, it is uncommon for variants to use more than one per colour on a normal 8×8 board. Seirawan chess uses one such piece (called an elephant) per colour. Capablanca chess uses one chancellor per colour on a 10×8 board. Almost chess replaces queens with chancellors; these pieces are approximately equal in value.

Symbol [edit]

Both white and black symbols for the empress were added to version 12 of the Unicode standard in March 2019, in the Chess Symbols block:

🩏 U+1FA4F WHITE CHESS KNIGHT-ROOK
🩒 U+1FA52 BLACK CHESS KNIGHT-ROOK

See also [edit]

Amazon – the queen+knight compound
Princess – the bishop+knight compound
Queen – the rook+bishop compound

Notes [edit]

^ a Less common names the piece has acquired include admiral, cannon, champion, colonel, concubine, count, dabbaba, duke, elephant, guard, knook, lambeth, lord chancellor, marshall, princess, samurai, superrook, tank, visier, and wolf.
^ b The game seems to have been an afterthought to his chess treatise and it is mysterious to what extent, if any, he might have used it in practice while he lived, viz. Capablanca Chess.

References [edit]

^ Pritchard, D. B. (1994), "Pieces", The Encyclopedia of Chess Variants, Games & Puzzles Publications, p.
227, ISBN 0-9524142-0-1
^ Rosen, Eric (2022), New Chess Pieces!, Eric Rosen, event occurs at 00:00:15

Piececlopedia: Rook–Knight Compound by Fergus Duniho and David Howe, The Chess Variant Pages
Endgame statistics with fantasy pieces by Dave McCooey, The Chess Variant Pages
The Remarkable Rookies (includes a section on the empress, where it appears in this army under the name chancellor) by Ralph Betza
BuyPoint Chess by Ralph Betza; contains a list of pieces with approximate values
Chigorin Chess at The Chess Variant Pages (Betza comments that the draws in the queen vs. pawn endgame are wins in the empress vs. pawn endgame)
Great Chess – Indian / Turkish variant by Hans Bodlaender, The Chess Variant Pages
Chapter 4 The Statistical Physics of non-Isolated systems: The Canonical Ensemble

4.1 The Boltzmann distribution
4.2 The independent-particle approximation: one-body partition function
4.3 Examples of partition function calculations
4.4 Energy, entropy, Helmholtz free energy and the partition function
4.5 Energy fluctuations
4.6 Example: The ideal spin-1/2 paramagnet
4.7 Adiabatic demagnetization and the 3rd law of thermodynamics
4.8 Example: The classical ideal gas
4.9 Vibrational and rotational energy of diatomic molecules
4.10 Translational energy of diatomic molecules: quantum treatment
4.11 The equipartition theorem
4.12 The Maxwell-Boltzmann velocity distribution
4.13 What is next?

4 The Statistical Physics of non-Isolated systems: The Canonical Ensemble

In principle the tools of Chap. 3 suffice to tackle all problems in statistical physics. In practice the microcanonical ensemble considered there for isolated systems ($E, V, N$ fixed) is often complicated to use, since it is usually (i.e., except for ideal, non-interacting systems) very difficult to calculate all possible ways the energy can be split between all the components (atoms). However, we may also consider non-isolated systems, and in this chapter we consider systems in contact with a heat reservoir, where the temperature $T$ is fixed rather than $E$. This leads us to the canonical ensemble. In Chap. 3 we introduced the canonical ensemble as many copies of a thermodynamic system, all in thermal contact with one another so that energy is exchanged to keep the temperature constant throughout the ensemble. In this chapter we will introduce the Boltzmann distribution function by focusing on one copy and treating the remaining copies as a giant heat reservoir:

Canonical Ensemble = System + Reservoir.

The important point to note is that for a macroscopic system the two approaches are essentially identical. Thus, if $T$ is held fixed the energy will statistically fluctuate, but, as we have seen, the fractional size of the fluctuations is $\propto 1/\sqrt{N}$ (we will verify this explicitly later). Thus, from a macroscopic viewpoint, the energy is constant to all intents and purposes, and it makes no real difference whether the heat reservoir is present or not, i.e., whether we use the microcanonical ensemble (with $E, V, N$ fixed) or the canonical ensemble (with $T, V, N$ fixed). The choice is ours to make, for convenience or ease of calculation. We will see that the canonical ensemble is much more convenient. As we see below, the canonical ensemble leads to the introduction of something called the partition function, $Z$, from which all thermodynamic quantities ($P, E, F, S, \dots$) can be found. At the heart of the partition function lies the Boltzmann distribution, which gives the probability that a system in contact with a heat reservoir at a given temperature will have a given energy.

4.1 The Boltzmann distribution

[Figure 1]

Consider a system S in contact with a heat reservoir R at temperature $T$ as shown in Figure 1. The whole, (R+S), forms an isolated system with fixed energy $E_0$. Heat can be exchanged between S and R, but R is so large that its temperature remains $T$ if heat is exchanged. We now ask: what is the probability $p_i$ that the system S is in a particular microstate with energy $E_i$?

We assume that S and R are independent of each other, so the total number of microstates factorizes, $\Omega = \Omega_R \times \Omega_S$. Now, if we specify the microstate of S to be the $i$th microstate, $\Omega_S = 1$, and we have $\Omega = \Omega(E_0, E_i) = \Omega_R(E_0 - E_i) \times 1$.
Thus, the probability $p_i$ of S being in a state with energy $E_i$ depends on the number of microstates of R with energy $E_0 - E_i$,
$$p_i = p_i(E_i) = \frac{\Omega_R(E_0 - E_i)}{\Omega(E_0)} = \frac{\text{number of microstates of (S+R) with S in state } i}{\text{total number of microstates of (S+R)}}.$$
Now, use the Boltzmann relation $S = k_B\ln\Omega$ from Eq. (1) of Chap. 3:
$$\Omega_R(E_0 - E_i) = \exp\left[\frac{1}{k_B}S_R(E_0 - E_i)\right].$$
If R is a good reservoir it must be much bigger than S. So, let's Taylor expand around $E_0$:
$$S_R(E_0 - E_i) = S_R(E_0) - E_i\left(\frac{\partial S_R}{\partial E}\right)_{V,N}\bigg|_{E=E_0} + \frac{1}{2!}E_i^2\left(\frac{\partial^2 S_R}{\partial E^2}\right)_{V,N}\bigg|_{E=E_0} + \cdots.$$
But, from the thermodynamic relations involving partial derivatives of $S$,
$$\left(\frac{\partial S_R}{\partial E}\right)_{V,N} = \frac{1}{T}, \qquad \left(\frac{\partial^2 S_R}{\partial E^2}\right)_{V,N} = \left[\frac{\partial(1/T)}{\partial E}\right]_{V,N} = -\frac{1}{T^2}\left(\frac{\partial T}{\partial E}\right)_{V,N} = -\frac{1}{T^2 C_V}.$$
Thus,
$$S_R(E_0 - E_i) = S_R(E_0) - \frac{E_i}{T} - \frac{E_i^2}{2T^2 C_V^{(R)}} + O(E_i^3).$$
If R is large enough, $C_V^{(R)}T \gg E_i$ and only the first two terms in the expansion are nonzero,
$$p_i \propto \Omega_R(E_0 - E_i) = \exp\left[\frac{1}{k_B}S_R(E_0 - E_i)\right] = \exp\left[\frac{S_R(E_0)}{k_B} - \frac{E_i}{k_B T}\right] = \text{const.}\times e^{-E_i/k_B T},$$
since $S_R(E_0)$ is a constant, independent of the microstate index $i$. Calling this constant of proportionality $1/Z$, we have
$$p_i = \frac{1}{Z}e^{-E_i/k_B T}, \qquad (1)$$
where $Z$ is determined from the normalization condition. So if we sum over all microstates, $\sum_i p_i = 1$, we have
$$Z = \sum_i e^{-E_i/k_B T}, \qquad (2)$$
where the sum on $i$ runs over all distinct microstates. $p_i$ of Eq. (1) is the Boltzmann distribution function and $Z$ is called the partition function of the system S. As we will see later, the partition function $Z$ is very useful because all other thermodynamic quantities can be calculated through it. The internal energy can be calculated by the average
$$\langle E\rangle = \sum_i E_i p_i = \frac{1}{Z}\sum_i E_i e^{-E_i/k_B T}. \qquad (3)$$
We will discuss the calculation of other thermodynamic quantities later. We want to emphasize also that the index $i$ labels the microstates of the $N$ particles and $E_i$ is the total energy. For example, a microstate for the case of a spin-1/2 paramagnet of $N$ independent particles is a configuration of $N$ spins (up or down): $i = (\uparrow,\uparrow,\downarrow,\dots,\downarrow)$. For a gas of $N$ molecules, however, $i$ represents a set of values of positions and momenta, $i = (\mathbf{r}_1,\dots,\mathbf{r}_N;\mathbf{p}_1,\dots,\mathbf{p}_N)$, as discussed in Chap. 3. [Ref.: (1) Mandl 2.5; (2) Bowley and Sánchez 5.1-5.2]

4.2 The independent-particle approximation: one-body partition function

If we ignore interactions between particles, we can represent a microstate of an $N$-particle system by a configuration specifying each particle's occupation of the one-body states,
$$i = (k_1, k_2, \dots, k_N), \qquad (4)$$
meaning particle 1 is in single-particle state $k_1$, particle 2 in state $k_2$, etc. (e.g., the spin configurations for a paramagnet). The total energy in the microstate of the $N$ particles is then simply the sum of the energies of each particle,
$$E_i = \epsilon_{k_1} + \epsilon_{k_2} + \dots + \epsilon_{k_N},$$
where $\epsilon_{k_1}$ is the energy of particle 1 in state $k_1$, etc. The partition function of the $N$-particle system of Eq. (2) is then given by
$$Z = Z_N = \sum_i e^{-E_i/k_B T} = \sum_{k_1,k_2,\dots,k_N}\exp\left[-\frac{1}{k_B T}\left(\epsilon_{k_1} + \epsilon_{k_2} + \dots + \epsilon_{k_N}\right)\right].$$
If we further assume that the $N$ particles are distinguishable, the summations over the $k$'s are independent of one another and can be carried out separately as
$$Z_N = \sum_{k_1,k_2,\dots,k_N} e^{-\epsilon_{k_1}/k_B T}e^{-\epsilon_{k_2}/k_B T}\cdots e^{-\epsilon_{k_N}/k_B T} = \left(\sum_{k_1}e^{-\epsilon_{k_1}/k_B T}\right)\left(\sum_{k_2}e^{-\epsilon_{k_2}/k_B T}\right)\cdots\left(\sum_{k_N}e^{-\epsilon_{k_N}/k_B T}\right). \qquad (5)$$
We notice that in the last equation, the summation in each factor runs over the same complete set of single-particle states. Therefore, they are all equal,
$$\sum_{k_1}e^{-\epsilon_{k_1}/k_B T} = \sum_{k_2}e^{-\epsilon_{k_2}/k_B T} = \cdots = \sum_{k_N}e^{-\epsilon_{k_N}/k_B T}.$$
Hence, the $N$-particle partition function in the independent-particle approximation is
$$Z_N = (Z_1)^N, \qquad \text{where } Z_1 = \sum_{k_1}e^{-\epsilon_{k_1}/k_B T}$$
is the one-body partition function. We notice that the index $k_1$ in the above equation labels a single-particle state and $\epsilon_{k_1}$ is the corresponding energy of the single particle, in contrast to the index $i$ used earlier in Eqs. (1) and (2), where $i$ labels a microstate of the total $N$-particle system and $E_i$ is the corresponding total energy of the system.

The above analysis is valid for models of solids and paramagnets where particles are localized and hence distinguishable. However, particles of a gas are identical and are moving around the whole volume; they are indistinguishable. The case of $N$ indistinguishable particles is more complicated. The fact that permutation of any two particles in a configuration $(k_1, k_2, \dots, k_N)$ of Eq. (4) does not produce a new microstate imposes restrictions on the sum $\sum_i = \sum_{k_1,k_2,\dots,k_N}$; the number of microstates is hence much reduced and the sums over the $k$'s are no longer independent of each other. The simple separation method of Eq. (5) is invalid. For a classical ideal gas, if we assume the $N$ particles are in different single-particle states (imagine $N$ molecules in $N$ different cubicles of size $h^3$ in the phase space $(\mathbf{r},\mathbf{p})$), the overcounting factor is clearly $N!$, as there are $N!$ permutations for the same microstate $(k_1, k_2, \dots, k_N)$. We hence approximate the partition function of $N$ classical particles as
$$Z_N \approx \frac{1}{N!}(Z_1)^N.$$

Summary of the partition function in the independent-particle approximation

• $N$ distinguishable particles (models of solids and paramagnets):
$$Z_N = (Z_1)^N; \qquad (6)$$
• $N$ indistinguishable classical particles (classical ideal gas):
$$Z_N \approx \frac{1}{N!}(Z_1)^N; \qquad (7)$$
• In both Eqs. (6) and (7),
$$Z_1 = \sum_{k_1}e^{-\epsilon_{k_1}/k_B T} \qquad (8)$$
is the one-body partition function, with $\epsilon_{k_1}$ the single-particle energy.

Example. Consider a system of two free (independent) particles. Assuming that there are only two single-particle energy levels $\epsilon_1$, $\epsilon_2$, by enumerating all possible two-body microstates, determine the partition function $Z_2$ if these two particles are (a) distinguishable and (b) indistinguishable.

Solution: (a) We list all four possible microstates of two distinguishable particles in the following occupation diagram. Notice that the 2nd and 3rd states are different states, as the two particles are distinguishable. By definition, the partition function of the two-particle system is given by
$$Z_2 = \sum_i e^{-E_i/k_B T} = e^{-2\epsilon_1/k_B T} + 2e^{-(\epsilon_1+\epsilon_2)/k_B T} + e^{-2\epsilon_2/k_B T} = \left(e^{-\epsilon_1/k_B T} + e^{-\epsilon_2/k_B T}\right)^2 = Z_1^2,$$
in agreement with the general formula $Z_N = (Z_1)^N$ of Eq. (6). The average energy of the two-particle system is given, according to Eq. (3), by
$$\langle E\rangle = \frac{1}{Z}\sum_i E_i e^{-E_i/k_B T} = \frac{1}{Z_2}\left[(2\epsilon_1)e^{-2\epsilon_1/k_B T} + 2(\epsilon_1+\epsilon_2)e^{-(\epsilon_1+\epsilon_2)/k_B T} + (2\epsilon_2)e^{-2\epsilon_2/k_B T}\right].$$
(b) For two identical particles, there are only three microstates, as shown in the following occupation-number diagram. The corresponding partition function is then given by
$$Z_2 = \sum_i e^{-E_i/k_B T} = e^{-2\epsilon_1/k_B T} + e^{-(\epsilon_1+\epsilon_2)/k_B T} + e^{-2\epsilon_2/k_B T}.$$
Notice that this partition function of two identical particles $Z_2 \neq \frac{1}{2!}Z_1^2$ as given by Eq. (7). Only the middle term has the same weight as in $\frac{1}{2!}Z_1^2$. The average energy of the two-particle system is
$$\langle E\rangle = \frac{1}{Z}\sum_i E_i e^{-E_i/k_B T} = \frac{1}{Z_2}\left[(2\epsilon_1)e^{-2\epsilon_1/k_B T} + (\epsilon_1+\epsilon_2)e^{-(\epsilon_1+\epsilon_2)/k_B T} + (2\epsilon_2)e^{-2\epsilon_2/k_B T}\right].$$
For the case of a two-particle system with three states, see Q1 of Example Sheet 9.

Note: (a) It is important to note that the sum in Eq. (8) runs over all single-particle states $k_1$, and not over all different energies.
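A direct enumeration of the two-level example is easy to script. The following sketch is illustrative only (the choices $\epsilon_1 = 1$, $\epsilon_2 = 2$ and $k_B T = 1$ are arbitrary); it reproduces $Z_2 = Z_1^2$ for distinguishable particles and shows that the identical-particle $Z_2$ differs from $Z_1^2/2!$:

```python
import numpy as np
from itertools import product, combinations_with_replacement

# Enumerate two-particle microstates for two single-particle levels e1, e2 and
# compare the distinguishable and indistinguishable partition functions Z2.
e = np.array([1.0, 2.0])     # single-particle energies (arbitrary units, k_B T = 1)
beta = 1.0

Z1 = np.sum(np.exp(-beta * e))

# (a) distinguishable: each particle independently occupies either level -> 4 states
Z2_dist = sum(np.exp(-beta * (e[k1] + e[k2]))
              for k1, k2 in product(range(2), repeat=2))

# (b) indistinguishable: unordered occupations only -> 3 states
Z2_indist = sum(np.exp(-beta * (e[k1] + e[k2]))
                for k1, k2 in combinations_with_replacement(range(2), 2))

print(Z2_dist, Z1 ** 2)            # equal: Z2 = Z1^2 for distinguishable particles
print(Z2_indist, Z1 ** 2 / 2)      # not equal: only the mixed term carries the 1/2! weight
```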
A given energy eigenvalue $\epsilon_{k_1}$ may be degenerate, i.e., belong to more than one (different) state. We can also express Eq. (8) alternatively as a sum over distinct energy levels, as
$$Z_1 = \sum_{k_1}e^{-\epsilon_{k_1}/k_B T} = \sum_{\epsilon_{k_1}}g(\epsilon_{k_1})\,e^{-\epsilon_{k_1}/k_B T}, \qquad (9)$$
where $g(\epsilon_{k_1})$ is the degeneracy factor at energy level $\epsilon_{k_1}$.

(b) The one-body partition function $Z_1$ is a useful quantity for determining the $N$-particle partition function in the independent-particle approximation. $Z_1$ itself has no physical meaning, as temperature is undefined for a single-particle system.

(c) Even if we ignore interactions completely (i.e., in the independent-particle approximation) and restrict to classical mechanics, many-body effects still appear for $N$ identical particles, as demonstrated by the $1/N!$ factor.

(d) Equation (6) is invalid in the low-temperature limit where quantum effects dominate (e.g., a significant portion of particles of a quantum gas are in the zero-momentum state: the Bose-Einstein condensation). A proper way to tackle the problems of identical particles is to introduce occupation-number configurations and to employ the grand canonical ensemble. A third-year course (Fermions and Bosons) will discuss this subject in detail. [Ref.: (1) Mandl 7.1]

4.3 Examples of partition function calculations

We will see later that all thermodynamic quantities ($E, S, F, P$, etc.) can be determined via the partition function $Z$, so it is important to learn how to calculate the partition function. In general, calculation of the partition function of a thermodynamic system is complicated due to the interactions between particles. In this section, we show a few examples in the independent-particle approximation, in which interactions are ignored, using Eqs. (6)-(8) of the last section.

Example 1. The ideal spin-1/2 paramagnet. There are only 2 energy states for each spin, $k_1 = \uparrow, \downarrow$, with energies
$$\epsilon_\uparrow = -\mu B, \qquad \epsilon_\downarrow = +\mu B,$$
where $\mu$ is the magnetic moment of one spin particle and $B$ is the magnetic field. The one-body partition function is therefore
$$Z_1 = \sum_{k_1}e^{-\epsilon_{k_1}/k_B T} = e^{\mu B/k_B T} + e^{-\mu B/k_B T} = 2\cosh(\mu B/k_B T).$$
The partition function for the $N$ spins (distinguishable particles) is
$$Z_N = \left[2\cosh(\mu B/k_B T)\right]^N. \qquad (10)$$

Example 2. A simple model for a one-dimensional solid consists of $M$ independent oscillators, each with energy
$$\epsilon(x,p) = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 x^2,$$
where $\omega$ is the angular frequency. The state of a classical particle is specified by $k = (x,p)$ and the sum becomes an integral,
$$\sum_k = \frac{1}{h}\int\mathrm{d}x\,\mathrm{d}p,$$
as discussed in Chap. 3.5. The one-body partition function is therefore given by
$$Z_1 = \sum_k e^{-\epsilon_k/k_B T} = \frac{1}{h}\int\mathrm{d}x\,\mathrm{d}p\;e^{-\epsilon(x,p)/k_B T} = \frac{1}{h}\int_{-\infty}^{\infty}\mathrm{d}p\,e^{-p^2/2mk_B T}\int_{-\infty}^{\infty}\mathrm{d}x\,e^{-m\omega^2 x^2/2k_B T} = \frac{1}{h}\sqrt{2\pi m k_B T}\sqrt{\frac{2\pi k_B T}{m\omega^2}} = \frac{2\pi k_B T}{h\omega},$$
where we have used the Gaussian integral
$$\int_{-\infty}^{\infty}e^{-ax^2}\,\mathrm{d}x = \sqrt{\frac{\pi}{a}}, \qquad a > 0.$$
The partition function of $M$ oscillators (distinguishable) is
$$Z_M = (Z_1)^M = \left(\frac{2\pi k_B T}{h\omega}\right)^M. \qquad (11)$$

Example 3. The classical ideal gas of $N$ particles in a volume $V$. In this case, the single-particle energy is
$$\epsilon(\mathbf{r},\mathbf{p}) = \epsilon(\mathbf{p}) = \frac{p^2}{2m} = \frac{1}{2m}\left(p_x^2 + p_y^2 + p_z^2\right).$$
The one-body partition function is
$$Z_1 = \frac{1}{h^3}\int\mathrm{d}^3r\,\mathrm{d}^3p\;e^{-\epsilon(\mathbf{p})/k_B T}.$$
As the energy $\epsilon(\mathbf{p})$ is independent of $\mathbf{r}$, the integral over real space produces a factor of the volume $V$, and the integral over momentum is
$$\int\mathrm{d}^3p\;e^{-p^2/2mk_B T} = \left(\int_{-\infty}^{\infty}\mathrm{d}p_x\,e^{-p_x^2/2mk_B T}\right)\left(\int_{-\infty}^{\infty}\mathrm{d}p_y\,e^{-p_y^2/2mk_B T}\right)\left(\int_{-\infty}^{\infty}\mathrm{d}p_z\,e^{-p_z^2/2mk_B T}\right) = (2\pi m k_B T)^{3/2},$$
where we have again used the Gaussian integral formula given above. The one-body partition function is
$$Z_1 = V\left(\frac{2\pi m k_B T}{h^2}\right)^{3/2} \qquad (12)$$
and the partition function for a classical ideal gas of $N$ identical molecules in a volume $V$ is
$$Z_N = \frac{1}{N!}Z_1^N = \frac{V^N}{N!}\left(\frac{2\pi m k_B T}{h^2}\right)^{3N/2}. \qquad (13)$$
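As a sanity check on Example 2, one can evaluate the phase-space integral numerically. The sketch below is illustrative only (reduced units $m = k_B = h = 1$, an arbitrary choice not from the notes); it recovers $Z_1 = 2\pi k_B T/(h\omega)$:

```python
import numpy as np
from scipy import integrate

# Numerically evaluate Z1 = (1/h) * double integral of exp(-eps(x,p)/kT) dx dp
# for the classical harmonic oscillator and compare with 2*pi*kT/(h*omega).
m, omega, h, kT = 1.0, 1.0, 1.0, 2.0

def boltzmann_weight(p, x):
    energy = p**2 / (2 * m) + 0.5 * m * omega**2 * x**2
    return np.exp(-energy / kT)

Z1_numeric, _ = integrate.dblquad(boltzmann_weight, -np.inf, np.inf,
                                  lambda x: -np.inf, lambda x: np.inf)
Z1_numeric /= h
print(Z1_numeric, 2 * np.pi * kT / (h * omega))   # the two values agree
```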
Later, we will see the importance of the factor $1/N!$ when we calculate thermodynamic quantities such as energy, entropy, etc.

Example 4. The Einstein model of a one-dimensional solid. Revisit Example 2 above, but now consider the oscillators to be quantum mechanical. A single quantum oscillator has energies
$$\epsilon_n = \hbar\omega\left(n + \frac{1}{2}\right), \qquad n = 0, 1, 2, 3, \dots.$$
The one-particle partition function is
$$Z_1 = \sum_{n=0,1,2,\dots}e^{-\epsilon_n/k_B T} = e^{-\hbar\omega/2k_B T}\sum_{n=0}^{\infty}e^{-\hbar\omega n/k_B T} = e^{-\hbar\omega/2k_B T}\,\frac{1}{1 - e^{-\hbar\omega/k_B T}} = \frac{1}{2\sinh(\hbar\omega/2k_B T)},$$
where in the third equation we have used the formula
$$\sum_{n=0}^{\infty}x^n = \frac{1}{1-x}.$$
The partition function of $M$ quantum oscillators is
$$Z_M = (Z_1)^M = \frac{1}{\left[2\sinh(\hbar\omega/2k_B T)\right]^M}.$$
We will see later that the thermodynamics of quantum oscillators reduces to the classical one in the high-temperature limit but is completely different in the low-temperature limit. In the next sections, we will discuss how to calculate energy, entropy and other thermodynamic quantities from partition functions. [Ref.: (1) Mandl 2.5; (2) Bowley and Sánchez 5.1-5.2]

4.4 The partition function and other state functions

Although the partition function, $Z = \sum_i e^{-E_i/k_B T}$, has appeared just as a normalization constant, its usefulness is much deeper than that. Loosely, whereas for an isolated system (at fixed $E, N, V$) all the thermodynamic properties $S, T, P, \dots$ could be derived from $\Omega(E, N, V)$, for a system in thermal equilibrium at temperature $T$ the same role is played by $Z = Z(T, N, V)$ for a hydrostatic system (or $Z = Z(T, N, B)$ for a magnetic system, etc.). In the last section we calculated $Z$ of several systems in the independent-particle approximation. Here we discuss in general how to calculate other properties from $Z$.

First, we consider the energy $E$. The average energy $\langle E\rangle$ is calculated for the canonical ensemble (with $\nu$ copies of the system and $\nu_i$ of these copies in the $i$th microstate, recall Chap. 3.2) as
$$\langle E\rangle = \frac{1}{\nu}\sum_{\lambda=1}^{\nu}E_\lambda = \frac{1}{\nu}\sum_i\nu_i E_i = \sum_i p_i E_i,$$
or
$$\langle E\rangle = \frac{1}{Z}\sum_i E_i e^{-E_i/k_B T} = \frac{\sum_i E_i e^{-E_i/k_B T}}{\sum_i e^{-E_i/k_B T}},$$
as given by Eq. (3) in Sec. 4.1. Now, in this expression, the numerator can be obtained from the denominator by differentiating the denominator with respect to $(-1/k_B T)$. That is a bit awkward, so let's define
$$\beta \equiv \frac{1}{k_B T}. \qquad (14)$$
Hence
$$\langle E\rangle = \frac{\sum_i E_i e^{-\beta E_i}}{\sum_i e^{-\beta E_i}} = -\frac{1}{Z}\left(\frac{\partial Z}{\partial\beta}\right)_{N,V},$$
or, more formally,
$$\langle E\rangle = -\left(\frac{\partial\ln Z}{\partial\beta}\right)_{N,V} = k_B T^2\left(\frac{\partial\ln Z}{\partial T}\right)_{N,V}. \qquad (15)$$
Next, we consider entropy. Clearly, if the system is in a given (fixed) microstate it has no entropy. Instead, we talk now about the entropy of the ensemble, since the many copies can be in many different microstates. So, let the ensemble have $\nu$ copies of the system and the ensemble entropy $S_\nu = \nu\langle S\rangle$, where $\langle S\rangle$ is the average system entropy. Let the ensemble have $\nu_i$ copies in the $i$th microstate, so the total number of ways of arranging this is
$$\Omega_\nu = \frac{\nu!}{\nu_1!\,\nu_2!\,\nu_3!\cdots}.$$
Using the Stirling formula, we have
$$\ln\Omega_\nu = \nu\ln\nu - \nu - \sum_i\left(\nu_i\ln\nu_i - \nu_i\right) = \sum_i\nu_i\left(\ln\nu - \ln\nu_i\right) = -\sum_i\nu_i\ln\frac{\nu_i}{\nu},$$
but $p_i = \nu_i/\nu$, so
$$\ln\Omega_\nu = -\nu\sum_i p_i\ln p_i.$$
So, from Boltzmann's formula $S_\nu = k_B\ln\Omega_\nu$ and $\langle S\rangle = S_\nu/\nu$, we have the system entropy
$$\langle S\rangle = -k_B\sum_i p_i\ln p_i. \qquad (16)$$
Let us now apply the general Eq. (16) to the case of a system in thermal equilibrium at a temperature $T$, where $p_i$ is given by the Boltzmann distribution of Eq. (1):
$$\langle S\rangle = -k_B\sum_i p_i\ln\frac{e^{-\beta E_i}}{Z} = -k_B\sum_i p_i\left(-\beta E_i - \ln Z\right) = k_B\beta\sum_i p_i E_i + k_B\ln Z\sum_i p_i = k_B\beta\langle E\rangle + k_B\ln Z,$$
where we have used the definition $\langle E\rangle = \sum_i p_i E_i$ and the normalization condition $\sum_i p_i = 1$.
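The relation $\langle E\rangle = -\partial\ln Z/\partial\beta$ of Eq. (15) can be checked numerically for the quantum oscillator of Example 4. The sketch below is illustrative only (reduced units $\hbar = \omega = k_B = 1$, with the level sum truncated at an arbitrary nmax); it compares a finite-difference derivative with the direct thermal average and the closed form $\tfrac{1}{2}\coth(\beta/2)$:

```python
import numpy as np

# Check <E> = -d(ln Z1)/d(beta) for a single quantum harmonic oscillator.
def lnZ1(beta, nmax=2000):
    n = np.arange(nmax)
    return np.log(np.sum(np.exp(-beta * (n + 0.5))))

beta, h = 0.7, 1e-5
E_from_derivative = -(lnZ1(beta + h) - lnZ1(beta - h)) / (2 * h)

n = np.arange(2000)
levels = n + 0.5
p = np.exp(-beta * levels) / np.sum(np.exp(-beta * levels))
E_direct = np.sum(p * levels)

print(E_from_derivative, E_direct, 0.5 / np.tanh(beta / 2))   # all three agree
```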
Rearranging the above equation,

$$k_B T\ln Z = -(\langle E\rangle - T\langle S\rangle) = -\langle F\rangle,$$

where $F \equiv E - TS$ is the Helmholtz free energy. Hence we write

$$\langle F\rangle = -k_B T\ln Z. \qquad (17)$$

The other thermodynamic quantities can then be calculated from partial derivatives of $F$, as given in Chap. 2.

Summary of basic formulas for the canonical ensemble

• First calculate the partition function

$$Z_N = \sum_i e^{-E_i/k_B T}. \qquad (18)$$

• Then the Helmholtz free energy

$$F = -k_B T\ln Z_N. \qquad (19)$$

• The entropy and the equation of state are obtained from

$$S = -\left(\frac{\partial F}{\partial T}\right)_{V,N}; \qquad P = -\left(\frac{\partial F}{\partial V}\right)_{T,N}. \qquad (20)$$

• The internal energy can be calculated using

$$E = -\left(\frac{\partial \ln Z_N}{\partial\beta}\right)_{N,V} = k_B T^2\left(\frac{\partial \ln Z_N}{\partial T}\right)_{N,V}, \qquad (21)$$

or simply from

$$E = F + TS. \qquad (22)$$

In the above formulas we have dropped the average notation $\langle\,\rangle$ for $F$, $E$ and $S$. This is because in the large-$N$ limit the fluctuations around the average value are very small, typically proportional to $1/\sqrt{N}$. In the next section we will discuss these fluctuations for the energy $\langle E\rangle$.

Note:
(a) For magnetic systems the term $-P\,dV$ is replaced by $-m\,dB$, and hence we have $m = -(\partial F/\partial B)_{T,N}$ instead.
(b) Equations (19)-(20) are very reminiscent of those we met in the case of an isolated system in Chap. 3 (Eqs. (1)-(3)). Whereas the entropy $S$ played a central role for isolated systems, that role is now played by $F$ for a system in contact with a heat bath. It is no real surprise that $F$ is now the key state function for a system at fixed $T$, since that is just how it was introduced in thermodynamics.
(c) In the independent-particle approximation discussed in Sec. 4.2, the partition function can be written as

$$Z_N = \begin{cases} (Z_1)^N, & \text{distinguishable particles;} \\ \approx \dfrac{1}{N!}(Z_1)^N, & \text{indistinguishable particles,} \end{cases}$$

where $Z_1$ is the one-body partition function. Taking the logarithm,

$$\ln Z_N = \begin{cases} N\ln Z_1, & \text{distinguishable particles;} \\ \approx N\ln Z_1 - \ln N!, & \text{indistinguishable particles,} \end{cases}$$

so we have, for both distinguishable and indistinguishable particles,

$$E = E_N = -\left(\frac{\partial \ln Z_N}{\partial\beta}\right)_{N,V} = -N\left(\frac{\partial \ln Z_1}{\partial\beta}\right)_{N,V} = N E_1,$$

where $E_1 = -\partial\ln Z_1/\partial\beta$ is the average energy of a single particle. Namely, in the independent-particle approximation, the total internal energy of $N$ particles (distinguishable or indistinguishable) is equal to $N$ times the average energy of a single particle.

We have calculated the partition functions $Z_N$ for a number of systems in Section 4.3. Using Eqs. (19)-(21), it is straightforward to calculate the other thermodynamic quantities. In the rest of the chapter we will do just that and also discuss the physical implications of our results.

[Refs.: (1) Mandl 2.5; (2) Bowley and Sánchez 5.3-5.6.]

4.5 The energy fluctuations

In this section we focus on the energy fluctuations and show that they are small in the large-$N$ limit. From $\langle E\rangle$ we can calculate the heat capacity

$$\langle C_V\rangle = \left(\frac{\partial\langle E\rangle}{\partial T}\right)_{N,V} = k_B\beta^2\left(\frac{\partial^2\ln Z}{\partial\beta^2}\right)_{N,V}. \qquad (23)$$

$\langle E\rangle$ in the canonical ensemble is only known as an average; the energy will also fluctuate statistically. We can examine these fluctuations and see how big they are. We define

$$(\Delta E)^2 \equiv \langle E^2\rangle - \langle E\rangle^2.$$

Clearly

$$\langle E^2\rangle = \frac{\sum_i E_i^2 e^{-\beta E_i}}{\sum_i e^{-\beta E_i}} = \frac{1}{Z}\left(\frac{\partial^2 Z}{\partial\beta^2}\right)_{N,V}.$$

Hence (all derivatives in the following are at constant $N, V$),

$$(\Delta E)^2 = \frac{1}{Z}\frac{\partial^2 Z}{\partial\beta^2} - \left(\frac{1}{Z}\frac{\partial Z}{\partial\beta}\right)^2 = \frac{\partial}{\partial\beta}\left(\frac{1}{Z}\frac{\partial Z}{\partial\beta}\right) = \left(\frac{\partial^2\ln Z}{\partial\beta^2}\right)_{N,V} = -\left(\frac{\partial\langle E\rangle}{\partial\beta}\right)_{N,V} = -\left(\frac{\partial\langle E\rangle}{\partial T}\right)_{N,V}\frac{dT}{d\beta} = k_B T^2\langle C_V\rangle,$$

or

$$(\Delta E)^2 = \left(\frac{\partial^2\ln Z}{\partial\beta^2}\right)_{N,V} = k_B T^2 C_V. \qquad (24)$$

Note: For a normal macroscopic system $\langle E\rangle \propto N k_B T$ and $C_V \propto N k_B$, hence

$$\frac{\Delta E}{\langle E\rangle} \propto \frac{\sqrt{N}\,k_B T}{N k_B T} = \frac{1}{\sqrt{N}}.$$

So, if $N \approx 10^{24}$, $\Delta E/\langle E\rangle \approx 10^{-12}$, an unobservably tiny number!
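The relations (15)-(17) and the fluctuation formula (24) are easy to verify numerically on a toy spectrum. The sketch below (my own illustration, not from the notes; the four-level spectrum and temperature are arbitrary) builds $Z(\beta)$ for a handful of energy levels and checks that $-\partial\ln Z/\partial\beta$ matches $\sum_i p_i E_i$, that $-k_B\sum_i p_i\ln p_i$ matches $(E-F)/T$ with $F = -k_B T\ln Z$, and that $\partial^2\ln Z/\partial\beta^2$ matches $\langle E^2\rangle - \langle E\rangle^2$.

```python
import numpy as np

kB = 1.0                                  # work in units with k_B = 1
E = np.array([0.0, 0.4, 1.1, 2.5])        # arbitrary toy spectrum
T = 0.8
beta = 1.0 / (kB * T)

def lnZ(b):
    return np.log(np.exp(-b * E).sum())

p = np.exp(-beta * E - lnZ(beta))         # Boltzmann probabilities

# Eq. (15): <E> = -d lnZ / d beta  (central finite difference)
db = 1e-5
E_avg = (p * E).sum()
E_from_lnZ = -(lnZ(beta + db) - lnZ(beta - db)) / (2 * db)

# Eqs. (16)-(17): S = -kB sum p ln p  should equal (E - F)/T with F = -kB T lnZ
S_gibbs = -kB * (p * np.log(p)).sum()
F = -kB * T * lnZ(beta)
S_thermo = (E_avg - F) / T

# Eq. (24): <E^2> - <E>^2 = d^2 lnZ / d beta^2
var_E = (p * E**2).sum() - E_avg**2
d2lnZ = (lnZ(beta + db) - 2 * lnZ(beta) + lnZ(beta - db)) / db**2

print(E_avg, E_from_lnZ)    # agree to high precision
print(S_gibbs, S_thermo)    # agree up to rounding
print(var_E, d2lnZ)         # agree to ~1e-6
```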
So, for most normal macroscopic systems the fluctuations are totally negligible; we can drop the notation $\langle\,\rangle$ and write $\langle E\rangle \to E$, $\langle C_V\rangle \to C_V$, etc., and there is no real difference between an isolated system of fixed energy $E$ and one in contact with a heat bath at the same temperature $T = (\partial E/\partial S)_{N,V}$.

[Note: A notable exception occurs near critical points, where the distinction between the two phases disappears. Near critical points $\langle C_V\rangle$ can be very large and the fluctuations may not be negligible. This can sometimes be observed as "critical opalescence", where the meniscus between the liquid and gas phases disappears, and the mixture becomes milky-looking and opaque as it scatters light.]

[Ref.: (1) Mandl 2.5]

4.6 Example: The ideal spin-1/2 paramagnet

Now we revisit the problem of the ideal spin-1/2 paramagnet at fixed temperature. We consider $N$ spins in a magnetic field $B$. Each spin has only two states, either up with energy $\epsilon_\uparrow = -\mu B$ or down with energy $\epsilon_\downarrow = +\mu B$. From Sec. 4.3, the partition function of the paramagnet is

$$Z_N = (Z_1)^N = [2\cosh(\beta\mu B)]^N, \qquad \ln Z_N = N\ln[2\cosh(\beta\mu B)],$$

where $\beta = 1/k_B T$. We can now calculate the total average energy easily using Eq. (21),

$$E = -\frac{\partial\ln Z_N}{\partial\beta} = -\frac{N}{\cosh(\beta\mu B)}\cdot\sinh(\beta\mu B)\cdot(\mu B),$$

hence

$$E = -N\mu B\tanh\frac{\mu B}{k_B T}. \qquad (25)$$

The heat capacity at constant magnetic field is

$$C_B = \left(\frac{\partial E}{\partial T}\right)_B = -N\mu B\,\mathrm{sech}^2\!\left(\frac{\mu B}{k_B T}\right)\cdot\left(-\frac{\mu B}{k_B T^2}\right) = N\frac{\mu^2 B^2}{k_B T^2}\,\mathrm{sech}^2\!\left(\frac{\mu B}{k_B T}\right), \qquad (26)$$

where we have used $\frac{d}{dx}\tanh x = \mathrm{sech}^2 x$ and $\mathrm{sech}\,x \equiv 1/\cosh x$.

We can plot $E$ and $C_B$ as functions of $T$ using the facts that, as $x\to 0$, $\sinh x\to x$ and $\cosh x\to 1$; and as $x\to\infty$, $\sinh x\to e^x/2$ and $\cosh x\to e^x/2$. Hence, from Eq. (25),

$$E\to -N\mu B, \qquad T\to 0,$$

just as expected, since all spins will be in the lower-energy spin-up state. On the other hand, $\tanh x\to x$ as $x\to 0$, hence

$$E\to -\frac{N\mu^2 B^2}{k_B T}, \qquad T\to\infty,$$

again as expected, since as $T\to\infty$ the numbers of up and down spins become nearly equal and their energies cancel each other out. These behaviors are shown in Figure 2.

[Figure 2: E versus T for the ideal spin-1/2 paramagnet.]

We can similarly plot $C_B$. From Eq. (26), in the limit $T\to 0$ ($\beta\to\infty$),

$$C_B \to N k_B(\mu B\beta)^2\,4e^{-2\mu B\beta} = N k_B\left(\frac{2\mu B}{k_B}\right)^2\frac{1}{T^2}e^{-2\mu B/k_B T},$$

or, using the fact that the exponential goes to 0 faster than $1/T^2$ diverges,

$$C_B\to 0, \qquad T\to 0.$$

This behavior, which is quite general, is also easy to understand: at low $T$, thermal fluctuations that flip a spin are rare, so it is very difficult for the system to absorb heat. Quantization of the energy levels means there is always a minimum excitation energy for any system, and hence, if $T$ is low enough, the system cannot absorb heat.

[Figure 3: occupation of the two spin levels in the low-T and high-T limits.]

In the opposite limit,

$$C_B\to \frac{N\mu^2 B^2}{k_B}\frac{1}{T^2}, \qquad T\to\infty.$$

The high-$T$ behavior arises because $n_\downarrow$ is always smaller than $n_\uparrow$. As $T\to\infty$, $n_\downarrow$ approaches $n_\uparrow$, and raising $T$ even higher makes no difference, i.e., the system has no further capacity to absorb heat. However, this behavior is not universal, since most systems have an infinite number of energy levels of higher and higher energies; in general there is no maximum energy and the heat capacity will not fall to zero in the high-$T$ limit. For our two-level system the situation is shown pictorially in Figure 3. We sketch the behavior of $C_B$ as a function of $T$ in Figure 4.

[Figure 4: C_B versus T for the ideal spin-1/2 paramagnet.]

The Helmholtz free energy is calculated using Eq. (19),

$$F = -k_B T\ln Z_N = -N k_B T\ln\left[2\cosh\frac{\mu B}{k_B T}\right].$$

The entropy and magnetization are calculated from Eqs. (20),

$$S = -\left(\frac{\partial F}{\partial T}\right)_{B,N}, \qquad m = -\left(\frac{\partial F}{\partial B}\right)_{T,N}.$$

Hence, for the entropy,

$$S = N k_B\{\ln[2\cosh(\beta\mu B)] - \beta\mu B\tanh(\beta\mu B)\}. \qquad (27)$$
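A quick numerical check of Eqs. (25)-(27) before examining the limits analytically below: the sketch here (my own, not part of the notes; $\mu$, $B$ and the temperatures are arbitrary illustrative values in units with $k_B = 1$) differentiates $F = -N k_B T\ln[2\cosh(\mu B/k_B T)]$ numerically and compares with the closed forms.

```python
import numpy as np

kB, N, mu, B = 1.0, 1.0, 1.0, 1.0        # illustrative units: kB = mu = B = 1, one spin

def F(T, B=B):
    """Helmholtz free energy of N independent spins, F = -N kB T ln[2 cosh(mu B / kB T)]."""
    return -N * kB * T * np.log(2.0 * np.cosh(mu * B / (kB * T)))

def ddx(f, x, h=1e-6):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

for T in (0.2, 1.0, 5.0):
    x = mu * B / (kB * T)
    E_closed  = -N * mu * B * np.tanh(x)                              # Eq. (25)
    CB_closed = N * mu**2 * B**2 / (kB * T**2) / np.cosh(x)**2        # Eq. (26)
    S_closed  = N * kB * (np.log(2 * np.cosh(x)) - x * np.tanh(x))    # Eq. (27)
    m_closed  = N * mu * np.tanh(x)                                   # derived just below

    S_num  = -ddx(F, T)                                   # S = -(dF/dT)_B
    E_num  = F(T) + T * S_num                             # E = F + T S
    CB_num = ddx(lambda t: -N * mu * B * np.tanh(mu * B / (kB * t)), T)  # dE/dT
    m_num  = -ddx(lambda b: F(T, b), B)                   # m = -(dF/dB)_T

    print(f"T={T}: E {E_closed:.6f}/{E_num:.6f}  CB {CB_closed:.6f}/{CB_num:.6f}  "
          f"S {S_closed:.6f}/{S_num:.6f}  m {m_closed:.6f}/{m_num:.6f}")
```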
Consider the limits:
(i) $T\to 0$ (or $\beta\to\infty$):

$$S\to N k_B\left\{\ln\left[2\times\tfrac{1}{2}e^{\beta\mu B}\right] - \beta\mu B\cdot 1\right\}\to 0, \qquad T\to 0,$$

which is as expected, since as $T\to 0$ all spins are up, i.e. no disorder!
(ii) $T\to\infty$ (or $\beta\to 0$):

$$S\to N k_B\left\{\ln 2 - \tfrac{1}{2}(\beta\mu B)^2\right\}\to N k_B\ln 2,$$

again as expected, since as $T\to\infty$ the spins are equally likely to be up or down, so the entropy per spin is $k_B\ln 2$, as we have seen in Chap. 3.

The net magnetic moment is given by

$$m = -\left(\frac{\partial F}{\partial B}\right)_{T,N} = N\mu\tanh(\beta\mu B) = -\frac{E}{B},$$

as expected, since $E = -mB$ is the equation of state for the ideal paramagnet. The limits:
(i) $T\to 0$ (or $\beta\to\infty$): $m\to N\mu$, since all spins are up; and
(ii) $T\to\infty$ (or $\beta\to 0$):

$$m\to\frac{N\mu^2 B}{k_B T}\to 0 \qquad \text{(Curie's law)},$$

again as expected, since there are nearly equal numbers of up and down spins. We plot $S$ and $m$ versus $T$ for several different external fields in Figure 5.

[Figure 5: S and m versus T for several values of the external field B.]

Note: As we have seen above, the entropy $S\to 0$ as $T\to 0$ as all the spins align. This is generally true, namely, a system has no disorder in the limit $T\to 0$. This is the third law of thermodynamics: the entropy of any system $S\to 0$ as $T\to 0$. In the next section we discuss a way to reach the low-temperature limit using paramagnets.

[Refs.: (1) Mandl 3; (2) Bowley and Sánchez 5.7.]

4.7 Adiabatic demagnetization and the third law of thermodynamics

By magnetizing and demagnetizing a paramagnet sample, while controlling the heat flow, we can lower its temperature. Refer to the S vs. T curves for the ideal paramagnet shown in Figure 6.

[Figure 6: S versus T at fields B1 and B2, showing the two steps of the cooling cycle.]

Start with the sample in magnetic field $B_1$ at an (already fairly low) temperature $T_1$.
• Step 1: isothermal magnetization. Increase the field from $B_1\to B_2$ at constant $T$ (i.e., in contact with a heat bath). The entropy $S$ decreases as the spins align in the stronger field (i.e., become more ordered).
• Step 2: adiabatic demagnetization. Now isolate the system and demagnetize (i.e., reduce $B$ from $B_2\to B_1$). $\Delta Q = 0$, and if the process is quasistatic and reversible, $\Delta S = 0$. From the plot we see that $T$ is reduced from $T_1\to T_2$; or, from Eq. (27), $S$ is a function of $B/T$ only, hence at constant $S$, as $B$ is reduced, $T$ must be reduced by the same factor.

The figures below show what happens to the spins:

[(a) Start, (b) Step 1, (c) Step 2: level spacings and populations of the two spin states.]

In Step 1 we increase the level spacing but keep $T$ constant; the population of the upper level falls. In Step 2 we reduce the level spacing again, but as the process is now adiabatic (spins isolated) there is no change in the level occupations, so the temperature is lowered.

This is actually a practical way to reach quite low temperatures, down to small fractions of 1 K. If we start with a large sample, we could repeat the process with a small sub-sample, with the rest acting as a heat bath. However, at each repeat of Steps 1 and 2 we would reduce the temperature by less and less, as the curves come together as $T\to 0$. Thus it is impossible to reach $T\to 0$ in a finite number of steps in this way. This is just one example of the third law of thermodynamics, namely, either
(a) absolute zero is unattainable (in a finite number of steps), or, more precisely,
(b) the entropy of any aspect of any system, $S\to 0$ as $T\to 0$.

Note: Statement (b) implies that the ground state is non-degenerate, so that all particles fall into the same state as $T\to 0$.

[Ref.: (1) Mandl 5.6.]

Thus, we can colloquially state:

The laws of thermodynamics
1. You can't win, you can only break even at best.
2. You can only break even at T = 0.
3. You can't attain T = 0.

Even more snappily, and slightly more cryptically:

The laws of thermodynamics (as played in Monte Carlo)
1. You can't win!
2. You can't even break even!
3. You can't leave the game!
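Returning to the two-step cooling cycle above: since $S$ of Eq. (27) depends on $B$ and $T$ only through the ratio $B/T$, holding $S$ fixed while the field drops from $B_2$ to $B_1$ forces $T_2 = T_1\,(B_1/B_2)$. The sketch below (my own illustration, with arbitrary field values) solves $S(B_1, T_2) = S(B_2, T_1)$ with a standard root finder and confirms this scaling.

```python
import numpy as np
from scipy.optimize import brentq

kB, N, mu = 1.0, 1.0, 1.0

def S(B, T):
    """Entropy per Eq. (27): S = N kB {ln[2 cosh x] - x tanh x}, x = mu B / kB T.
    np.logaddexp(x, -x) = ln(e^x + e^-x) = ln(2 cosh x), written this way to
    avoid overflow of cosh at very low T."""
    x = mu * B / (kB * T)
    return N * kB * (np.logaddexp(x, -x) - x * np.tanh(x))

B1, B2, T1 = 0.5, 2.0, 1.0          # starting field, high field, starting temperature
S_fixed = S(B2, T1)                 # entropy after the isothermal magnetization step

# Step 2: demagnetize at constant S; find T2 such that S(B1, T2) = S_fixed
T2 = brentq(lambda T: S(B1, T) - S_fixed, 1e-6, T1)
print(T2, T1 * B1 / B2)             # both give 0.25: T scales with B at fixed S
```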
4.8 Example: The classical ideal gas

Now it is time to revisit the ideal gas we discussed often in thermodynamics. We hinted before that it would be a challenging problem with the microcanonical-ensemble approach. We will show that this is not the case with the canonical-ensemble approach. We have calculated the partition function of a classical ideal gas of $N$ identical molecules at fixed temperature $T$ in a volume $V$ as, Eqs. (12)-(13),

$$Z_N = \frac{1}{N!}Z_1^N = \frac{V^N}{N!}\left(\frac{2\pi m k_B T}{h^2}\right)^{3N/2},$$

hence, using the Stirling approximation $N!\approx(N/e)^N$,

$$\ln Z_N = N\ln\left[\frac{eV}{N}\left(\frac{2\pi m k_B T}{h^2}\right)^{3/2}\right]. \qquad (28)$$

Now we follow the standard calculations of the canonical ensemble to obtain the other thermodynamic quantities. The Helmholtz free energy is obtained from Eq. (19),

$$F = -k_B T\ln Z_N = -N k_B T\left[\frac{3}{2}\ln\left(\frac{2\pi m k_B T}{h^2}\right) + \ln\frac{V}{N} + 1\right]. \qquad (29)$$

Note: If we did not include the $1/N!$ factor in $Z_N$, the second term in Eq. (29) would be $\ln V$ instead of the intensive quantity $\ln(V/N)$, and $F$ would not be extensive as required.

The entropy is calculated, according to Eq. (20), as

$$S = -\left(\frac{\partial F}{\partial T}\right)_{V,N} = N k_B\left[\frac{3}{2}\ln\frac{2\pi m k_B T}{h^2} + \ln\frac{V}{N} + \frac{5}{2}\right], \qquad (30)$$

which can be compared with Q2(a) of Example Sheet 5,

$$S = N k_B\left[\frac{3}{2}\ln\frac{T}{T_0} + \ln\frac{V}{V_0}\right] + \text{const.}$$

Eq. (30) is referred to as the Sackur-Tetrode equation. It gives the absolute value of the entropy of a gas at a given temperature $T$. (See Q2 of Example Sheet 11 for more details.) Apart from the factor of the atomic mass $m$, it is the same for every substance. At high enough temperatures and low enough densities all substances behave as ideal gases, so the Sackur-Tetrode formula can be checked experimentally. Good agreement is found.

The equation of state is obtained from Eq. (20) for the pressure,

$$P = -\left(\frac{\partial F}{\partial V}\right)_{T,N} = k_B T N\cdot\frac{1}{V},$$

or, the familiar formula,

$$PV = N k_B T. \qquad (31)$$

The internal energy of an ideal gas can be calculated from the partial derivative of Eq. (21), or simply from $F = E - TS$ of Eq. (22),

$$E = F + TS = \frac{3}{2}N k_B T, \qquad (32)$$

which is independent of the volume $V$, as expected. The heat capacity at constant volume is

$$C_V = \left(\frac{\partial E}{\partial T}\right)_V = \frac{3}{2}N k_B.$$

Note: The entropy $S$ of Eq. (30) has the wrong low-$T$ behavior, as $S\to-\infty$ in the limit $T\to 0$, in conflict with the 3rd law, which states that $S\to 0$ in the limit $T\to 0$. There are two reasons for this problem:
(a) We have ignored interactions between particles when calculating the partition function $Z_N$; these interactions are responsible for the particles condensing into liquids or forming solids at low temperature.
(b) We have also ignored quantum effects (significant at low temperature) when we treated the indistinguishable particles by assuming the particles are all in different single-particle states (hence the over-counting factor is simply $N!$). The quantum effect of many particles occupying the zero-momentum state is responsible for the Bose-Einstein condensation.
Inclusion of either of the above two effects gives the correct low-temperature behavior of the entropy. A more detailed discussion of the validity of the classical $Z_N$ above is given in Mandl 7.3.
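As a concrete check of Eq. (30), the sketch below (my own, not from the notes) evaluates the Sackur-Tetrode entropy per particle in units of $k_B$ for a monatomic ideal gas, taking helium-like parameters at room temperature and atmospheric pressure as an illustrative input (the atomic mass is an approximate assumed value); it also verifies that $E = F + TS$ reproduces $\frac{3}{2}N k_B T$.

```python
import numpy as np

kB = 1.380649e-23      # J/K
h  = 6.62607015e-34    # J s

def sackur_tetrode_per_particle(m, T, v):
    """S/(N kB) from Eq. (30): (3/2) ln(2 pi m kB T / h^2) + ln(v) + 5/2, v = V/N."""
    return 1.5 * np.log(2.0 * np.pi * m * kB * T / h**2) + np.log(v) + 2.5

m_He = 6.65e-27        # kg, approximate mass of a helium atom (assumed value)
T, P = 300.0, 101325.0 # room temperature, atmospheric pressure
v = kB * T / P         # volume per particle from the ideal-gas law, Eq. (31)

s = sackur_tetrode_per_particle(m_He, T, v)
print("S/(N kB) =", s)                # roughly 15 under these conditions

# Consistency check: E = F + TS should give (3/2) kB T per particle
lnZN_over_N = np.log(v * (2 * np.pi * m_He * kB * T / h**2) ** 1.5) + 1.0   # Eq. (28)
F_per_N = -kB * T * lnZN_over_N
E_per_N = F_per_N + T * (kB * s)
print(E_per_N, 1.5 * kB * T)          # should agree
```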
4.9 Vibrational and rotational energy of diatomic molecules

In the last section we considered the classical ideal gas of $N$ particles. If these particles are diatomic molecules, then in addition to the translational motion of the center of mass of a molecule, there are also vibrational and rotational motions. We consider these three motions to be independent of one another, hence we write the partition function of $N$ diatomic molecules as

$$Z_N = \frac{1}{N!}(Z_1)^N, \qquad Z_1 = Z_1^t Z_1^v Z_1^r, \qquad (33)$$

where $Z_1^t$ is the one-body partition function of the translational motion, given by Eq. (12), $Z_1^v$ is that of the vibrational motion, and $Z_1^r$ is that of the rotational motion. Here we treat $Z_1^v$ and $Z_1^r$ quantum mechanically.

Vibrational energy contribution. The energy levels of a quantum simple harmonic oscillator of angular frequency $\omega$ are

$$\epsilon_n = \left(n + \frac{1}{2}\right)\hbar\omega, \qquad n = 0, 1, 2, \ldots.$$

Hence the one-body partition function is the same as calculated in Example 4 of Sec. 4.3; written in terms of $\beta$ of Eq. (14),

$$Z_1 = \sum_{n=0}^{\infty} e^{-(n+1/2)\hbar\omega\beta} = \frac{1}{2\sinh(\hbar\omega\beta/2)}.$$

Hence the vibrational energy per molecule at temperature $T$ is

$$\frac{E_v}{N} = -\left(\frac{\partial\ln Z_1}{\partial\beta}\right) = \frac{1}{2\sinh(\hbar\omega\beta/2)}\cdot 2\cosh(\hbar\omega\beta/2)\cdot\frac{\hbar\omega}{2},$$

or

$$\frac{E_v}{N} = \frac{1}{2}\hbar\omega\coth(\hbar\omega\beta/2). \qquad (34)$$

The two temperature limits:
(i) $T\to 0$ ($\beta\to\infty$): $\coth(\hbar\omega\beta/2)\to 1$, so

$$\frac{E_v}{N}\to\frac{1}{2}\hbar\omega,$$

just the zero-point energy;
(ii) $T\to\infty$ ($\beta\to 0$): $\coth(\hbar\omega\beta/2)\to\dfrac{2}{\hbar\omega\beta}$, so

$$\frac{E_v}{N}\to\frac{1}{\beta} = k_B T.$$

Note: For most diatomic molecules, the high-$T$ limit is reached only for $T\gtrsim 1000$ K.

Rotational energy contribution. In classical mechanics the energy of a rigid rotor with moment of inertia $I$, rotating with angular velocity $\omega$ (or angular momentum $L = I\omega$), is $\epsilon = I\omega^2/2 = L^2/2I$. In quantum mechanics the angular momentum is quantized,

$$L^2\to l(l+1)\hbar^2, \qquad l = 0, 1, 2, \ldots,$$

and hence the energy levels are

$$\epsilon_l = \frac{l(l+1)\hbar^2}{2I}, \qquad l = 0, 1, 2, \ldots.$$

Also, for each eigenvalue $l$ there are $g(\epsilon_l) = 2l+1$ degenerate magnetic sublevels, specified by the quantum number $m_l = -l, -l+1, \ldots, l-1, l$, all with the same energy $\epsilon_l$. Hence the one-body partition function is

$$Z_1^r = \sum_{l=0}^{\infty}(2l+1)\,e^{-l(l+1)\hbar^2\beta/2I}. \qquad (35)$$

For general $\beta$ we cannot evaluate this $Z_1^r$ in closed form. However, we can look at the low- and high-$T$ limits as follows:
(a) $T\to 0$ ($\beta\to\infty$). To a good approximation, keeping only the first two terms,

$$Z_1^r\to 1 + 3e^{-\hbar^2\beta/I},$$

and the rotational energy per molecule is $E_r/N\to 0$.
(b) $T\to\infty$ ($\beta\to 0$). In this limit ($k_B T\gg\hbar^2/2I$) there are many thermally accessible energy levels, and the discrete series can be well approximated by a continuum, i.e.,

$$Z_1^r\to\int_0^\infty dl\,(2l+1)\,e^{-l(l+1)\hbar^2\beta/2I},$$

and luckily this integral can be evaluated exactly by making the substitution $x = l(l+1)$, $dx = (2l+1)\,dl$. We obtain

$$Z_1^r\to\frac{2I}{\hbar^2\beta} = \frac{2I k_B T}{\hbar^2}, \qquad (36)$$

and the rotational energy per molecule, using $E_r/N = -\frac{\partial}{\partial\beta}\ln Z_1^r$, is

$$\frac{E_r}{N}\to\frac{1}{\beta} = k_B T, \qquad T\to\infty. \qquad (37)$$

For details, see Example Sheet 10. Note: For typical diatomic molecules $\hbar^2/2I\approx 10^{-3}$ eV, and so the high-$T$ limit is reached well below room temperature.

Translational energy contribution. From Eq. (32) of Sec. 4.8, the translational energy per molecule is

$$\frac{E_t}{N} = \frac{3}{2}k_B T. \qquad (38)$$

We obtained this result using a classical-mechanics approach; in the next section we obtain the same result using a quantum treatment. We will see later, in Sec. 4.11, that it is no accident that there is such a simple relation between the energies per molecule in the high-$T$ limit for these three motions: they are examples of the more general equipartition theorem.

[Refs.: (1) Bowley and Sánchez 5.11, 5.12.]
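The limiting behaviors (34)-(37) can be seen directly by summing the level series numerically. The sketch below (my own illustration; $\hbar\omega$ and $\hbar^2/2I$ are both set to 1 in arbitrary units) evaluates $E_v/N$ and $E_r/N$ from truncated sums at a few temperatures and compares them with the zero-point energy and the equipartition value $k_B T$.

```python
import numpy as np

kB = 1.0
hw = 1.0          # vibrational quantum  hbar*omega   (arbitrary units)
B_rot = 1.0       # rotational constant  hbar^2 / 2I  (arbitrary units)

def E_vib(T, nmax=2000):
    """Vibrational energy per molecule from a truncated sum over n."""
    n = np.arange(nmax)
    eps = (n + 0.5) * hw
    w = np.exp(-eps / (kB * T))
    return (eps * w).sum() / w.sum()

def E_rot(T, lmax=2000):
    """Rotational energy per molecule from a truncated sum over l with (2l+1) degeneracy."""
    l = np.arange(lmax)
    eps = B_rot * l * (l + 1)
    w = (2 * l + 1) * np.exp(-eps / (kB * T))
    return (eps * w).sum() / w.sum()

for T in (0.05, 0.5, 5.0, 50.0):
    print(f"T={T:6.2f}  Ev={E_vib(T):8.4f} (coth form {0.5*hw/np.tanh(hw/(2*kB*T)):8.4f})"
          f"  Er={E_rot(T):8.4f} (kBT={kB*T:8.4f})")
# Low T: Ev -> hbar*omega/2 (zero-point), Er -> 0.  High T: both approach kB*T.
```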
4.10 Translational energy of molecules: Quantum treatment

We have calculated the one-body partition function and the energy for the translational motion of $N$ particles using a classical-mechanics approach. Here we repeat the calculation using quantum mechanics. We will see that $Z_1$ is the same as the classical result.

Consider a single free particle (acted on by no forces, potential $V = 0$), contained in a box of sides $L_x$, $L_y$, $L_z$ parallel, respectively, to the $x$, $y$, $z$ axes. Its wavefunction $\psi = \psi(x,y,z)$ satisfies the free Schrödinger equation inside the box,

$$-\frac{\hbar^2}{2m}\nabla^2\psi(x,y,z) = E\,\psi(x,y,z).$$

We assume the box is impenetrable, so that $\psi$ vanishes everywhere on the boundaries of the box and outside it. The Schrödinger equation with this boundary condition is easily seen to be satisfied by the solution

$$\psi(x,y,z) = \begin{cases} A\sin\left(\dfrac{n_x\pi x}{L_x}\right)\sin\left(\dfrac{n_y\pi y}{L_y}\right)\sin\left(\dfrac{n_z\pi z}{L_z}\right), & \text{inside the box;} \\ 0, & \text{outside the box,} \end{cases}$$

where $n_x, n_y, n_z = 1, 2, \ldots$ and $A$ is a normalization constant. The corresponding energy eigenvalues are $E = \epsilon_{n_x,n_y,n_z}$,

$$\epsilon_{n_x,n_y,n_z} = \left[\left(\frac{n_x\pi}{L_x}\right)^2 + \left(\frac{n_y\pi}{L_y}\right)^2 + \left(\frac{n_z\pi}{L_z}\right)^2\right]\frac{\hbar^2}{2m} \equiv \frac{\hbar^2 k^2}{2m},$$

where $k^2 = k_x^2 + k_y^2 + k_z^2$ and $k_x = n_x\pi/L_x$, $k_y = n_y\pi/L_y$, $k_z = n_z\pi/L_z$. Hence the one-particle partition function for this free translational motion is

$$Z_1^t = \sum_{n_x=1}^{\infty}\sum_{n_y=1}^{\infty}\sum_{n_z=1}^{\infty} e^{-\beta\epsilon_{n_x,n_y,n_z}}.$$

This sum can be evaluated further only in the limit $k_B T\gg\hbar^2\pi^2/2mL^2$, the energy-level spacing. Even for $L = 1$ cm and $m = m_H$ (the hydrogen mass), $\hbar^2\pi^2/2mL^2\approx 2\times 10^{-18}$ eV, a truly tiny energy, so for all attainable temperatures the condition $k_B T\gg\hbar^2\pi^2/2mL^2$ is satisfied. Thus, for all macroscopic boxes, and even at the lowest temperatures ever reached, we can replace the sums by integrals. Putting $n_x = k_x L_x/\pi$, etc., we replace

$$\sum_{n_x=1}^{\infty}\cdots\;\to\;\int_0^\infty\frac{L_x}{\pi}\,dk_x\cdots, \qquad \text{etc.}$$

We rewrite $Z_1^t$ as

$$Z_1^t = \frac{L_x}{\pi}\frac{L_y}{\pi}\frac{L_z}{\pi}\int_0^\infty dk_x\int_0^\infty dk_y\int_0^\infty dk_z\, e^{-\beta\epsilon(k)} = \frac{V}{8\pi^3}\int d^3k\, e^{-\beta\epsilon(k)},$$

where $V = L_x L_y L_z$ and $\epsilon(k)\equiv\hbar^2 k^2/2m$. We rewrite the above equation as

$$Z_1^t = \frac{V}{(2\pi)^3}\int d^3k\, e^{-\beta\epsilon(k)}, \qquad \epsilon(k)\equiv\frac{\hbar^2 k^2}{2m}. \qquad (39)$$

Furthermore, in spherical coordinates $d^3k = k^2\,dk\,\sin\theta\,d\theta\,d\phi$, so after the angular integration yields $4\pi$, Eq. (39) becomes

$$Z_1^t = \int_0^\infty dk\, D(k)\, e^{-\beta\epsilon(k)}, \qquad D(k) = \frac{V k^2}{2\pi^2}, \qquad (40)$$

where $D(k)$ is usually referred to as the density of states in $k$-space, i.e., $D(k)\,dk$ is the number of states within the spherical shell from $k$ to $k + dk$. Finally, we can insert $\epsilon(k) = \hbar^2 k^2/2m$ and evaluate the integral of Eq. (40). Substituting $k = \sqrt{2m/\beta\hbar^2}\,x$,

$$Z_1^t = \frac{V}{2\pi^2}\left(\frac{2m}{\beta\hbar^2}\right)^{3/2}\int_0^\infty dx\, x^2 e^{-x^2} = V\left(\frac{m}{2\pi\beta\hbar^2}\right)^{3/2}, \qquad (41)$$

where we have used the Gaussian integral

$$\int_0^\infty x^2 e^{-x^2}\,dx = \frac{\sqrt{\pi}}{4}.$$

From $Z_1^t$ we can calculate the average energy per molecule,

$$\frac{E_t}{N} = -\frac{\partial\ln Z_1^t}{\partial\beta} = \frac{3}{2}\frac{1}{\beta} = \frac{3}{2}k_B T,$$

the same as Eq. (32) obtained by the classical approach. This is not surprising, as we have taken the continuum limit (converting the summations into integrals). The discrete nature of the energy levels will show up only at temperatures $T < \hbar^2/(k_B m V^{2/3})\approx 10^{-14}$ K.

Note:
(a) In Eq. (40), $D(k)$ acts for the continuous $k$-variable like the degeneracy factor $g(\epsilon_k)$ in the discrete sum for $Z_1$ of Eq. (9).
(b) We want to emphasize that, although the quantum-mechanical $Z_1$ obtained here is the same as the classical result shown earlier, the formula for the total partition function $Z_N = \frac{1}{N!}Z_1^N$ is a classical approximation; we have ignored the quantum effects of many-body systems.

[Refs.: (1) Mandl 7.1-7.3; (2) Bowley and Sánchez 5.9, 7.2.]

4.11 The equipartition theorem

The last three results (for vibrational, rotational and translational motion) provide examples of the equipartition theorem: for each degree of freedom of a system whose energy is quadratic in either the coordinate or the momentum, the average energy is $k_B T/2$ and its contribution to the heat capacity is $k_B/2$, at high enough temperatures. Here are the examples we have discussed:

• vibrations: $E_{\rm vib} = \frac{1}{2}m\dot{x}^2 + \frac{1}{2}kx^2$, 2 quadratic degrees of freedom, so $E\to k_B T$ as $T\to\infty$;
• rotations: there are 2 perpendicular axes about which the molecule can rotate, $E_{\rm rot} = \frac{1}{2}I_1\dot{\theta}_1^2 + \frac{1}{2}I_2\dot{\theta}_2^2$, 2 quadratic degrees of freedom, hence $E\to k_B T$ as $T\to\infty$;
• translations: $E_{\rm tr} = \frac{1}{2}m(\dot{x}^2 + \dot{y}^2 + \dot{z}^2)$, 3 quadratic degrees of freedom, hence $E\to 3k_B T/2$ as $T\to\infty$.

The equipartition theorem is a classical theorem.
From our present statistical-mechanics treatment we see that it breaks down when the separation between energy levels is not small compared with $k_B T$. When this happens the heat capacity of that degree of freedom is reduced, dropping to zero at low temperatures. The corresponding degree of freedom is then said to be frozen out; this is typically the situation for the vibrational degrees of freedom at room temperature. More specifically, equipartition holds
• for vibrations, when $T\gg\hbar\omega/k_B\approx 10^3$ K;
• for rotations, when $T\gg\hbar^2/I k_B\approx 10$-$100$ K;
• for translations, when $T\gg\hbar^2/(m V^{2/3} k_B)\approx 10^{-14}$ K.

Thus, at room temperature, only the rotational and translational degrees of freedom can be treated classically, giving a molar heat capacity $C_V = 3R/2$ for monatomic gases and $C_V = 5R/2$ for diatomic gases. The following diagram is an example for a diatomic gas (e.g., H2).

[Figure 8: molar heat capacity of a diatomic gas versus temperature, showing the translational, rotational and vibrational plateaus.]

We can predict the heat capacities of other substances using equipartition, simply by counting the quadratic degrees of freedom. An example is a solid, for which we expect the molar heat capacity to be $3R$, since each atom is free to vibrate in 3 directions. This is the Dulong-Petit law, which works well for many solids at room temperature.

Note: Equipartition only holds for quadratic degrees of freedom. A counterexample is the ultra-relativistic gas, for which $\epsilon = cp = c\hbar k$ is linear (instead of quadratic) in the momentum.

Two important points:
(a) As $T\to 0$, quantum effects dominate. In fact, for all quantum systems, including quantum gases, $C_V\to 0$ as $T\to 0$, consistent with the 3rd law of thermodynamics.
(b) We have only discussed the energy per molecule and the specific heat here using the equipartition theorem, and avoided discussing other properties such as the entropy or the equation of state, which require the full $N$-body partition function $Z_N$ (where the identical-particle property matters).

[Refs.: (1) Mandl 7.9; (2) Bowley and Sánchez 5.14.]

4.12 The Maxwell-Boltzmann velocity distribution

In this section we derive the Maxwell-Boltzmann velocity distribution for an ideal classical gas, which you met in your year-one module. Consider a gas of $N$ molecules in a volume $V$, in thermal equilibrium at a temperature $T$. From the Boltzmann distribution, the probability of finding an average molecule in the state $(\mathbf{r},\mathbf{p})$ is

$$p(\mathbf{r},\mathbf{p}) = \frac{1}{Z_1}e^{-\epsilon(\mathbf{r},\mathbf{p})/k_B T},$$

where $\epsilon(\mathbf{r},\mathbf{p})$ is the energy of a single molecule in the state $(\mathbf{r},\mathbf{p})$. The average number of molecules in the state $(\mathbf{r},\mathbf{p})$ inside the phase-space volume $d^3r\,d^3p/h^3$ is then

$$N\,p(\mathbf{r},\mathbf{p})\,\frac{d^3r\,d^3p}{h^3}.$$

The Maxwell-Boltzmann velocity distribution function $f(v)$ is obtained by setting the translational energy $\epsilon(\mathbf{r},\mathbf{p}) = p^2/2m$ with $\mathbf{p} = m\mathbf{v}$, and integrating over space $d^3r$ and over the solid angle $\sin\theta\,d\theta\,d\phi$ of the momentum,

$$f(v)\,dv = \frac{N m^3}{Z_1 h^3}\int d^3r\int\sin\theta\,d\theta\,d\phi\; e^{-mv^2/2k_B T}\,v^2\,dv = \frac{4\pi N m^3 V}{Z_1 h^3}\,e^{-mv^2/2k_B T}\,v^2\,dv,$$

hence

$$f(v) = \frac{4\pi N m^3 V}{Z_1 h^3}\,v^2 e^{-mv^2/2k_B T}.$$

Using the result

$$Z_1 = V\left(\frac{m k_B T}{2\pi\hbar^2}\right)^{3/2},$$

we have

$$f(v) = N\sqrt{\frac{2}{\pi}}\left(\frac{m}{k_B T}\right)^{3/2}v^2\, e^{-mv^2/2k_B T}. \qquad (42)$$

Notice the normalization $\int_0^\infty f(v)\,dv = N$. We can also define the distribution function for an average particle as $P(v) = f(v)/N$, with the normalization $\int_0^\infty P(v)\,dv = 1$. This is the well-known Maxwell-Boltzmann velocity distribution. We plot the distribution $P(v)$ of Eq. (42) in Figure 9.

[Figure 9: the Maxwell-Boltzmann speed distribution P(v), with the speeds vp, ⟨v⟩ and vrms marked.]
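Equation (42) is straightforward to check numerically. The sketch below (my own, not part of the notes; a nitrogen-like molecular mass at room temperature is an arbitrary illustrative choice) integrates $P(v)$ on a grid to confirm the normalization and the equipartition average $\langle\frac{1}{2}mv^2\rangle = \frac{3}{2}k_B T$, and locates the characteristic speeds worked out next.

```python
import numpy as np

kB = 1.380649e-23
m, T = 4.65e-26, 300.0       # roughly the mass of an N2 molecule (assumed), room T

v = np.linspace(0.0, 6000.0, 200001)     # speed grid in m/s, wide enough at 300 K
P = np.sqrt(2/np.pi) * (m/(kB*T))**1.5 * v**2 * np.exp(-m*v**2/(2*kB*T))   # Eq. (42)/N
dv = v[1] - v[0]

print("normalization:", (P * dv).sum())                           # ~ 1
print("<m v^2/2> / kB T:", (0.5*m*v**2 * P * dv).sum() / (kB*T))  # ~ 1.5

v_p   = v[np.argmax(P)]                            # most probable speed
v_bar = (v * P * dv).sum()                         # mean speed
v_rms = np.sqrt((v**2 * P * dv).sum())             # root-mean-square speed
print(v_p,  np.sqrt(2*kB*T/m))                     # Eq. (43)
print(v_bar, np.sqrt(8*kB*T/(np.pi*m)))            # Eq. (44)
print(v_rms, np.sqrt(3*kB*T/m))                    # Eq. (45)
```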
A few physical quantities are calculated as follows.

• Most probable speed: let $v_p$ be the maximum of $P(v)$, i.e., $dP/dv = 0$:

$$\frac{d}{dv}\left(v^2 e^{-mv^2\beta/2}\right) = 0 \;\Rightarrow\; 2v - m\beta v^3 = 0,$$

so

$$v_p = \sqrt{\frac{2k_B T}{m}}\approx 1.41\sqrt{\frac{k_B T}{m}}. \qquad (43)$$

• Mean speed:

$$\langle v\rangle = \int_0^\infty v\,P(v)\,dv = \sqrt{\frac{2}{\pi}}\left(\frac{m}{k_B T}\right)^{3/2}\int_0^\infty v^3 e^{-mv^2/2k_B T}\,dv = \sqrt{\frac{8k_B T}{\pi m}}\approx 1.60\sqrt{\frac{k_B T}{m}}. \qquad (44)$$

• rms speed:

$$\langle v^2\rangle \equiv v_{\rm rms}^2 = \int_0^\infty v^2 P(v)\,dv = \sqrt{\frac{2}{\pi}}\left(\frac{m}{k_B T}\right)^{3/2}\int_0^\infty v^4 e^{-mv^2/2k_B T}\,dv = \frac{3k_B T}{m},$$

or

$$v_{\rm rms} = \sqrt{\frac{3k_B T}{m}}\approx 1.73\sqrt{\frac{k_B T}{m}}. \qquad (45)$$

These three speeds are marked in Figure 9. From Eq. (45) we have

$$E_1 = \frac{1}{2}\langle mv^2\rangle = \frac{1}{2}m v_{\rm rms}^2 = \frac{m}{2}\cdot\frac{3k_B T}{m} = \frac{3}{2}k_B T,$$

consistent with the equipartition theorem. Note also that $\hbar$ has disappeared from the Maxwell-Boltzmann distribution of Eq. (42), which is why it can also be found from classical kinetic theory, as was done originally by Maxwell.

Note: In the above integrals we have used the general Gaussian integral

$$\int_0^\infty x^{2n}e^{-ax^2}\,dx = \frac{1\cdot 3\cdot 5\cdots(2n-1)}{2^{n+1}a^n}\sqrt{\frac{\pi}{a}}.$$

[Refs.: (1) Mandl 7.7; (2) Bowley and Sánchez 7.4.]

4.13 What is next?

So far, we have completely ignored interactions between the constituent particles in all of our examples, from the ideal spin-1/2 paramagnets to the classical ideal gases. How do we go on from here, and what is next in the physics of statistical mechanics? Clearly, investigating the effects due to, for example, the interactions between the molecules of a gas is the next main task. In fact, the most interesting physics emerges from such interactions; examples are phase transitions from gases to liquids or solids as the temperature is lowered, and even to superfluids or superconductors at extremely low temperatures where quantum physics dominates.

Another major neglect is the quantum effect of many particles (bosons) occupying the same single-particle state (particularly the zero-momentum state), which we ignored when we discussed the independent-particle approximation for identical particles. Such effects are important in the low-temperature limit, and their inclusion leads to the Bose-Einstein condensation. These effects are much easier to handle and are covered in a third-year course, Bosons and Fermions.

The difficult problem is the inclusion of interactions between particles. Amazingly, we already have most of the fundamental formulas needed for all such further investigations, although some special techniques will be required. Let us do a little demonstration to complete our Thermal and Statistical Physics.

We consider a gas of $N$ identical, classical molecules. These molecules interact with one another, and the interaction is described by a pairwise interaction potential $V(r)$, where $r$ is the separation between the interacting pair. A typical $V(r)$ is drawn in Fig. 10. Qualitatively, we see that the interaction potential consists of a hard core (molecules repel each other strongly when they are very close) and an attractive tail, which is responsible for condensation into liquids and the formation of solids at low temperature. In Chap. 1.3, we qualitatively discussed the effect of this interaction on the equation of state, the so-called van der Waals equation,

$$\left(P + \frac{\alpha N^2}{V^2}\right)(V - N\beta) = N k_B T,$$

where $\alpha$ and $\beta$ are coefficients depending on the interaction potential (this $\beta$ is not to be confused with $1/k_B T$). This empirical equation of van der Waals in fact provides a good description of a dense gas (recall that an ideal gas corresponds to a dilute gas where interactions can be ignored), and it also predicts a phase transition from the gas to the liquid phase.

[Fig. 10: A schematic diagram of the interaction potential between two molecules.]

One of the tasks of statistical mechanics is to derive this van der Waals equation from, say, the canonical-ensemble approach.
In the canonical-ensemble approach, as discussed earlier, we first need to calculate the partition function of the $N$ molecules,

$$Z_N = \sum_i e^{-\epsilon_i/k_B T},$$

where, as mentioned before, the summation over the microstate index $i$ for $N$ classical molecules corresponds to an integral over the $6N$-dimensional phase space,

$$\sum_i = \frac{1}{N!}\frac{1}{h^{3N}}\int d^3r_1\,d^3p_1\int d^3r_2\,d^3p_2\cdots\int d^3r_N\,d^3p_N,$$

with the factor $1/N!$ due to the identical-particle property. The energy $\epsilon_i$ is then the classical Hamiltonian (total energy) of the $N$ interacting molecules,

$$H = H(\mathbf{r}_1,\mathbf{r}_2,\ldots;\mathbf{p}_1,\mathbf{p}_2,\ldots) = K + U,$$

$$K = \frac{p_1^2}{2m} + \frac{p_2^2}{2m} + \cdots + \frac{p_N^2}{2m} = \sum_{k=1}^{N}\frac{p_k^2}{2m}, \qquad U = V(|\mathbf{r}_1-\mathbf{r}_2|) + V(|\mathbf{r}_1-\mathbf{r}_3|) + \cdots = \sum_{k<l}V(|\mathbf{r}_l-\mathbf{r}_k|),$$

where $K$ is the total kinetic energy and $U$ the total potential energy. Hence the partition function of the gas is written as

$$Z_N = \frac{1}{N!}\frac{1}{h^{3N}}\int d^3r_1\,d^3p_1\int d^3r_2\,d^3p_2\cdots\int d^3r_N\,d^3p_N\; e^{-(K+U)/k_B T}.$$

We notice that, in contrast to the case of the ideal gas, the above multi-dimensional integral is NOT separable, due to the coupling terms in the potential $U$ between the molecular coordinates $\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N$. (The kinetic-energy term $K$ is still separable, hence the integrals over the momenta $\mathbf{p}_1,\mathbf{p}_2,\ldots,\mathbf{p}_N$ are still separable. Can you prove this?) Special techniques have been developed to evaluate this multi-dimensional integral. One such technique is the so-called cluster expansion of the factor $e^{-U/k_B T}$. Corrections to the ideal-gas equation of state can then be evaluated. We will stop here. For those who are interested, a good but very advanced reference (a postgraduate textbook) is K. Huang, Statistical Mechanics.

Acknowledgment

The main parts of these lecture notes (including figures, diagrams, and example sheets) have been passed down to me from Prof. R.F. Bishop, and to him from Dr. J.A. McGovern. If these materials have been useful to you, the credit should go to these earlier lecturers. I have made some significant changes, including re-organizing the materials. If you find typos, mistakes, or Chinglish, they are likely mine.

Lecture Cancellation

The lecture on Thursday, 2:00 pm, 01 May 2008 is canceled. Please study the last section, Sec. 4.10 (available online). The last lecture, on Thursday, 08 May, will focus on revision.
“Whose” vs. “Who’s”: What’s the Difference? The words whose and who’s may sound identical, but their meanings and usage are completely different. Here, we’ll explain the distinction between these homophones to help you use them correctly in your writing. Work smarter with Grammarly The AI writing assistant for anyone with work to do Get Grammarly Table of contents What’s the difference between whose and who’s? What’s the meaning of whose? What’s the meaning of who’s? Who vs. whom Examples of whose vs. who’s Whose vs. who’s FAQs What’s the difference between whose and who’s? Whose is the possessive form of the pronoun who, whereas who’s is a contraction linking the words who is or who has. Whose and who’s are homophones, meaning they sound the same but have different meanings and are sometimes spelled differently. Here’s another way to remember the difference: Who’s shoes are these? (Translation: “Who is shoes are these?”) Whose shoes are these? (Translation: “To whom do these shoes belong?”) Now, let’s use them both in one sentence: Who’s going to tell me whose party we’re going to? Because possessive nouns often use -’s, it’s tempting to think who’s (not whose) is the possessive form of who. But apostrophes are also used in contractions; that’s what the apostrophe indicates in who’s. Unlike possessive nouns, which often use apostrophes, possessive pronouns—such as his, hers, theirs, and its—never do. That’s why the possessive form of who is whose. What’s the meaning of whose? Whose is the possessive form of the pronoun who. Whose means “belonging to whom” or, occasionally, “of which.” Use it when you’re asking or declaring to whom something belongs. In other words, whose is about possession. She is a writer whose books have inspired many people. The teacher praised the student whose project was the most creative. The relative pronoun whose is used the same as other possessive pronouns like their when you don’t know the owner of something. Whose sandwich is this? Whose backpack is on the table? What’s the meaning of who’s? Who’s means who is or who has. Who’s is a contraction, meaning it’s two words stuck together with some of the letters left out, and those letters are replaced with an apostrophe. This can make pronunciation easier and quicker. The formula for who’s is: who + is or who + has = who’s. Who is hungry? Who’s hungry? Who has got the keys to the office? Who’s got the keys to the office? It helps to remember that who is a pronoun used to refer to a person or people. She’s the one who’s always early to meetings. I have a friend who’s really good at playing the guitar. Here’s a tip: Want to make sure you’re using whose and who’s correctly in your writing? Grammarly can check your spelling and save you from grammar and punctuation mistakes. It even proofreads your text, so your work is extra polished wherever you write. Who vs. whom Both who’s and whose are derived from the pronoun who, but they serve different grammatical roles. Who is used to refer to the subject of a sentence or clause (the person performing an action). Whom is used to refer to the object of a verb or preposition (the person being affected by an action). Who and whom are both pronouns. Who is a subject pronoun (like I, he, she, we, and they), whereas whom is an object pronoun (like me, him, her, us, and them). Try this simple trick when in doubt: If you can replace the word with he or she, use who. If you can replace the word with him or her, use whom. Who is in charge here? Who asked you to go to the dance? 
Who is that? Like other pronouns, who changes form depending on its role in a sentence. Whom is the objective form—if you can replace it with him, her, me, or them, then whom is correct. Whom are you referencing? Whom did you ask to the dance? To whom are you speaking? Examples of whose vs. who’s Here are some more examples of how to use whose vs. who’s. Who’s excited for the weekend? I found a dog whose collar was missing. Who’s finished their homework? “The People Behind the Tusks: A Who’s Who of the Cast of Warcraft” —MoviePilot.com “Consequently, their roles had to be filled by CIA officers whose identities had not been revealed to the Russians.”—Tom Clancy, Commander in Chief “Bessie carried a lantern, whose light glanced on wet steps and gravel road sodden by a recent thaw.” —Charlotte Brontë, Jane Eyre It’s worth noting that who typically refers to people, but in possessive form (whose), it can sometimes apply to inanimate objects (like Bessie’s lantern) when there’s no better alternative. The alternative is “Bessie carried a lantern, the light of which glanced on wet steps.” Here’s another example: The house whose roof was damaged in the storm needs repairs. And finally, here’s a who’ve for good measure, which means who have: “[They’re] kids from wealthier districts, where winning is a huge honor, who’ve been trained their whole lives for this.” —Suzanne Collins, The Hunger Games Whose vs. who’s FAQs Is it whose or who’s? Whose is the possessive form of who, while who’s is a contraction for who is or who has. Who’s jacket is this? Whose jacket is this? Whose coming to the meeting? Who’s coming to the meeting? How do you remember the difference between whose and who’s? Try replacing who’s with who is or who has. If the sentence still makes sense, use who’s. Otherwise, use whose. Why does whose not have an apostrophe? Possessive pronouns (like his, hers, and its) never use apostrophes, and whose follows this rule. Can whose be used for objects as well as people? While whose is traditionally used to indicate possession in relation to people, it is also accepted for objects when no better alternative exists. He admired the painting whose colors blended beautifully. She read a book whose plot was full of twists. Some argue that whose should only be used for people since it originates from who. In highly formal writing, an alternative phrasing such as “the colors of which” or “the plot of which” may be preferred. What are some common mistakes with whose and who’s? © 2025 Grammarly, Inc.
Published Time: 2023-11-21T23:01:30-0500

Is Red Clover Edible? How to identify, harvest, and eat Red Clover — Creek Stewart

I'm excited to share some insights about a fascinating wild edible plant with you – Red Clover (Trifolium pratense).

Mini Identification Guide for Red Clover

Red Clover can be found in fields, roadsides, forest edges, and open sunny areas. Here are some key features to help you identify it:
- Trifoliate leaves with three leaflets originating from the same point.
- Light-green to white "V" shaped watermark on each leaflet.
- 1-inch diameter round-ish pink to purple flowerheads with up to 100 slender flowers.
- Leaves grow alternately along the stem.

Edible Uses of Red Clover

While the leaves are edible, the flowers steal the show with their mildly sweet flavor. Here are some unique ways to use Red Clover in your culinary adventures:
- Use the flowers and flower heads to thicken soups and stews.
- Add individual flowers to cookie, muffin, pancake, waffle, and bread mixes.
- Enhance sweet rice and oatmeal with these tasty blooms.
- Enjoy a mild cup of Red Clover Tea, a personal favorite.
- Elevate salads and omelets by tossing in the purple flowers.

Harvest and Preparation

The best edible parts are the young leaves and flowerheads. You can dry the flowerheads for teas and other culinary uses.

Warnings

Red Clover contains coumarin and coumarin-like compounds, which can reportedly inhibit blood coagulation, especially for those who may already be on blood thinning medication. Please consult with a holistic-minded physician before attempting to use Red Clover for medicinal purposes.

Recipe: Wild Clover Flower Tea

Ingredients: Dried or fresh Red Clover flowers (washed) and sweetener (honey or maple syrup).
Directions: Steep 2 teaspoons of dried or fresh flowers (or 4 whole flowerheads) in one cup of hot (not boiling) water for 5–8 minutes. Sweeten to taste with honey or maple syrup.

During the blooming months, consider drying Red Clover flowerheads in your outdoor solar dehydrator for year-round enjoyment.

It's not IF but WHEN!

Did you know I teach how to identify, harvest, prepare, and eat a new Wild Edible Plant every month through Wild Edible Plant of the Month Club?
Join myself and 100s of other wild foragers at Creek Stewart
November 21, 2023
PACIFIC JOURNAL OF MATHEMATICS Vol. 47, No. 2, 1973 COMPOSANTS OF HAUSDORFF INDECOMPOSABLE CONTINUA; A MAPPING APPROACH DAVID P. BELLAMY "Continuum" denotes a compact connected Hausdorff space. The principal result is that every indecomposable continuum can be mapped onto Knaster's example D of a chainable indecomposable continuum with one endpoint. This result is then used to conclude that those indecomposable continua each of whose proper subcontinua is decomposable, those which are homeomorphic with each of their nondegenerate sub-continua, and those such that each two points in the same composant can be joined by a continuum which cannot be mapped onto D, have at least c composants. It is also shown that generalized arcwise connected continua are decomposable. The author and , among others, has raised the question of how many composants an indecomposable continuum must have. The technique applied to prove that metric indecomposable continua have uncountably many depends upon the second countability of the comple-ment of a point. (See, for example, [5, p. 140].) Other arguments can generalize this; for example, H. Cook has pointed out in conversation that if an indecomposable continuum has two composants, and is first countable at a point of each, then it has uncountably many composants. This can be generalized to include the case of a continuum with two com-posants, each of which contains a compact Gδ subset. S. Mazurkiewicz has shown that a metric indecomposable continuum has c composants, sharpening slightly the result that it has uncountably many. M. E. Rudin has shown that if the continuum hypothesis holds, then it is not true that every indecomposable continuum has c composants. J. W. Rogers, Jr., has shown that every metric indecomposable continuum can be mapped onto the continuum D mentioned above. (See [5, p. 332] or [6, p. 206] for a picture.) We follow Rogers here in a representing D as an inverse limit of arcs [0,1] = I, indexed by the positive integers, where the bonding map between successive terms is always h, where h(t) = 2ί for t ^ 1/2 and h(t) = 2 - 2ί for t ^ 1/2. Throughout what follows, let I denote [0,1]; h, this function; and D, the inverse limit of this system. This work extends Rogers' result to the nonmetric case; also, the argument here is simpler than Rogers'. This result is then applied to obtain a partial answer to the composant question in certain cases. This work also generalizes work of G. R. Gordh, Jr., presented at the University of Oklahoma Conference on General Topology in March, 1972, , and answers in the negative 303 304 DAVID P. BELLAMY the question of L. E. Ward, Jr., , concerning whether there are indecomposable continua each two points of which can be connected by a generalized arc. (A generalized arc is a continuum with exactly two noncutpoints.) Principal Result. We first establish the following: LEMMA. If X is an indecomposable continuum and f:X-^Iisa continuous function such that /"^(O) and f~ ι(V) both have nonempty interior, then there exists a continuous function g: X—+ I such that fiΓ^O) and g" 1^) both have nonempty interior, and such that f = hog. Proof. Suppose / is given. Let M\JN be a separation of X-Int f~ ι{l) such that both M Π Int f~\ϋ) and N Π Int f~ ι{0) are nonvoid. Such a separation exists since X-Int f~ ι(ϊ) is compact and no component of it has interior; in particular, Int / -1(0) must meet at least two components and hence two quasi-components of it. Now define g: X—• I by \f(x) if x e M = 1 - —f{x) if xeN = i- if «Γ(1). 
It is readily verified that g is well-defined and continuous. Then, Int (Γ^l) - NΠ Int /^(O) Φ φ Int g~ ι{0) - M Π Int f~\Q) Φ φ . The reader can easily verify that f = hog. COROLLARY TO PROOF. If X is an indecomposable continuum and f:X-+Iis a continuous function such that /"^(O) and /"^(l) both have nonempty interior, and if p, q e X are points such that p e Int /"^(O); f(q) φ 0,1 and q lies in a different component of X-Int /"^(l) from p, then there exists a continuous function g: X— I such that f = hog; g^φ) and ^ ( l ) both have nonempty interior, p e Int (p^O), and 1 > Q(q) ^ 1/2. Proof. Choose M and N as in the proof of the lemma, so that p e M. If q e N, set M r = M; N' — N. If q e M, then since p and q lie in different components of M and M is compact, there is separation M r U A of M with q e A and p e JkP. Then, let iV' = A U N. In either case, proceed as in the proof of the lemma, replacing M and N by M! COMPOSANTS OF HAUSDORFF INDECOMPOSABLE CONTINUA 305 and N' respectively. It is then readily verified that 1 > g(q) 2 > 1/2 and pelntg^iO). We are now in a position to prove: THEOREM. Let X be a nondegenerate indecomposable continuum. Then X can be mapped continuously onto D. Proof. Let 0 be a nonempty open subset of X such that Cl (0) Φ X. Since Cl (0) is a proper closed subset of an indecomposable con-tinuum, with nonvoid interior, it is not connected. Let A (j B be a separation of Cl (0) and observe that both A and B have interior. Let fi.X- +I be a Urysohn function such that fλ(x) — 0 for xeA and f,{x) = 1 for x e B. Now, Int fτ\ϋ) Φ Φ Φ Int /TW We proceed inductively. Suppose continuous functions f^.X—! have been chosen for 1 < ^ i < w, such that for each i > 1, fto/^ = /ί_1 and such that for each i, /^(O) and fϊ ι(ϊ) both have interior. Applying the lemma, we obtain a function fn:X—I such that /^(Q) and /^(l) both have interior and hofn — /n_1# Then the sequence of functions (fi}Z=i induces a map f:X—>D which is onto since X is compact and each f{ is onto, and the proof is complete. Before looking at the composant question, we sharpen this result somewhat with three corollaries to the proof, and to the theorem. COROLLARY 1. If p and q are distinct points of the indecomposable continuum X, there is a continuous surjection f:X-+D such that f(p) Proof. Choose 0 so that p e 0 and q $ Cl (0). Choose A, B so that p e A. Apply Tietze's extension theorem to obtain a function fι such that f,{x) = 0 for x e A, f,(x) - 1 for xeB, and f,(q) = 1/2. Then proceed as in the proof of the theorem. f(p) φ f(q) since fλ{p) — 0 while fM) = 1/2. COROLLARY 2. A nondegenerate indecomposable continuum X can be embedded into a product of copies of D such that every projection carries the image of X onto D. Proof. Let F — {/: / is continuous mapping of Xonto D}« Define k: X—>ΠfeFD by kf = /. By Corollary 1, k is 1 — 1 and by com-pactness it is an embedding. The /-projection of k(X) into D yields f(X), which is all of D. COROLLARY 3. If p and q belong to different composants of the 306 DAVID P. BELLAMY indecomposable continuum X, there is a continuous mapping f: X—>D such that f(p) and f(q) lie in different composants of D. Proof. Obtain fx as in the proof of Corollary 1. Then suppose fi has been chosen for 1 ^ i < n such that p e Int fτ ι(θ) and such that 1 > fi{q) ^ 1/2 for each i, in addition to the other properties assumed in the proof of the theorem. Then, since p and q lie in different composants of X, they must lie in different components of X-Int /~-i(l). 
The corollary to the proof of the Lemma enables us to choose fn so that p 6 Int f~ ι(0) while 1 > fn(q) ^ 1/2. Then with the map /: X-+ D so obtained, suppose W^D were a proper subcontinuum with f(p) e W and f(q) e W. Let Wt S I be the ith projection of W. Since f(p) is the point each of whose coordinates is zero, 0 e Wι for each i. Since W'ΦD, there is a j such that W, Φ I. Then 1 £ Wά. Since Λ(l/2) = 1,1/2 ί Wj+1. Thus W, +1 S [0,1/2) while fj+1(q) ^ 1/2, a contradiction, since f3 +1(q) e Wj+1. Hence, f(p) and f(q) belong to different com-posants of D. The Composant Problem. The theorem of the preceding section now allows us to make some observations about composants and other internal structures of indecomposable continua. DEFINITION 1. If X and Y are continua and /: X-^Y is a con-tinuous surjection, / maps X irreducibly onto Y iff f(W) is a proper subcontinuum of Y whenever W is a proper subcontinuum of X. PROPOSITION. If X and Y are indecomposable continua and f:X—> Y irreducibly onto, then X has at least as many composants as Y. Proof. If p, q lie in the same composant of X, there is a proper subcontinuum W of X containing p and q. Hence, f(W) is a proper subcontinuum of Y containing both f(p) and f(q), which thus lie in the same composant of Y. Thus, if K is a composant of Y, f~ ι{K) is a union of composants of X. Applying the axiom of choice, we can define a 1-1 function g from the set of composants of Y into the set of composants of X by choosing g(K) to be some composant of X contained in f~\K) for each composant K of Y. COROLLARY 4. If an indecomposable continuum X can be mapped irreducibly onto D, X has at least c composants. COROLLARY 5. If X is a nondegenerate indecomposable continuum, Xcontains an indecomposable subcontinuum Mwith at least c composants. COMPOSANTS OF HAUSDORFF INDECOMPOSABLE CONTINUA 307 Proof. Let f: X—+D be onto. Consider {W £ Ξ X: W is a continuum and /(W) = D} . By compactness and Zorn's lemma, this set contains a minimal element M; M is necessarily indecomposable, and f\M:M—+D maps M irredu-cibly onto Zλ We are done by Corollary 4. DEFINITION 2. An indecomposable continuum is irreducibly inde-composable iff each of its nondegenerate proper subcontinua is decom-posable. DEFINITION 3. A continuum is hereditarily equivalent iff it is homeomorphic with each of its nondegenerate subcontinua. (See and .) COROLLARY 6. An irreducibly indecomposable continuum X which is nondegenerate has at least c composants. Proof. The M in Corollary 5 must in this case be X. COROLLARY 7. A nondegenerate hereditarily equivalent indecom-posable continuum X has at least c composants. Proof. The M in Corollary 5 is in this case homeomorphic with X and so has the same number of composants as X. We also obtain the following somewhat more technical results. COROLLARY 8. If X is a nondegenerate indecomposable continuum such that whenever p, q belong to the same composant of X, p and q lie together in a continuum W(p, q) which cannot be mapped onto D, then X has at least c composants. Proof. Let /: X— D be onto and let M g X be a continuum such that f(M) = D. Suppose M Φ X. Then M lies in some composant K of X. Let p,qe M such that f(p) and f(q) belong to different composants of D. There is a continuum W(p, q) which cannot be mapped onto D, while p, q e W(p, q). Then f(W(p, q)) is a proper subcontinuum of D meeting two composants of ΰ , a contradiction. Thus, M = X and by Proposition 1, the proof is done. 
In particular, then, if each two points of each composant of an indecomposable continuum lie in a continuum which is locally connected; is a union of fewer than c locally connected continua; or is hereditarily decomposable, then the continuum has at least c composants. (The hereditarily decomposable case was pointed out by L. E. Rogers and 308 DAVID P. BELLAMY G R. Gordh, Jr , in conversation with the author,) COROLLARY 9. A continuum X, each two points of which can be joined by a continuum which cannot be mapped onto D, is decom-posable. Proof. Such a continuum cannot be mapped onto D, for if p, q e X and /: X-+D is onto; and f(p) and f(q) lie in different composants of D, then each continuum containing both p and q is mapped onto D b y / . COROLLARY 10. A continuum each two points of which can be joined by a generalized arc is decomposable. The following Corollaries resolve the question in , mentioned earlier. We close with them. COROLLARY 11. A hereditarily unicoherent generalized arcwise connected continuum is hereditarily decomposable. Proof. If X is such a continuum and W QX is a subcontinuum, then for any p, q in W there is a generalized arc A in X from p to q. By irreducibility of A and hereditary unicoherence, A ϋ W. Thus W is generalized arcwise connected and hence decomposable. COROLLARY 12. Each generalized arcwise connected hereditarily unicoherent continuum has the fixed point property for continuous multi-valued functions. REMARK. This is a restatement of Theorem 2 of [11, p. 926] in light of Corollary 11. REFERENCES 1. D. P. Bellamy, A non-metric indecomposable continuum, Duke Math. J., 38, No. 1 (1971), 15-20. 2. , Topological Properties of Compactifications of Half-Open Interval., Ph. D. Thesis, Michigan State University, (1968). 3. G. R. Gordh, Jr., Indecomposable Hausdorff Continua and Mappings of Connected Linearly Ordered Spaces, presented to the University of Oklahoma Conference on General Topology, March, 1972. 4. G. W. Henderson, Proof that every compact decomposable continuum which is topologically equivalent to each of its nondegenerate subcontinua is an arc, Annals of Math., 72, No. 3 (1960), 421-428. 5. J. G. Hocking and G. S. Young, Topology, Addison-Wesley, Reading, Mass., (1969). 6. K. Kuratowski, Topology, Vol. II, Academic Press, New York and London, (1968). 7. S. Mazurkiewicz, Sur les Continus Indecomposables, Fund. Math., 10(1927), 305-310. COMPOSANTS OF HAUSDORFF INDECOMPOSABLE CONTINUA 309 8. E. E. Moise, An indecomposable plane continuum which is homeomorphic to each of its nondegenerate subcontinua, Trans. Amer. Math. Soc, 63 (1948), 581-594. 9. J. W. Rogers, Jr., On mapping indecomposable continua onto certain chainable indecomposable continua, Proc. Amer. Math. Soc, 25. No. 2 (1970), 449-456. 10. M. E. Rudin, Composants and βN, Proceedings of the Washington State University Conference on General Topology, (1970), 117-119. 11. L. E. Ward, Jr., A fixed point theorem for multi-valued functions, Pacific J. Math., 8 (1958), 921-927. Received June 21, 1972 and in revised form August 30, 1972. UNIVERSITY OF DELAWARE
Published Time: Mon, 23 Jan 2023 03:57:13 GMT 1 How incomputable is Kolmogorov complexity? Paul M.B. Vit´ anyi Abstract Kolmogorov complexity is the length of the ultimately compressed version of a file (that is, anything which can be put in a computer). Formally, it is the length of a shortest program from which the file can be reconstructed. We discuss the incomputability of Kolmogorov complexity, which formal loopholes this leaves us, recent approaches to compute or approximate Kolmogorov complexity, which approaches are problematic and which approaches are viable. Index Terms — Kolmogorov complexity, incomputability, feasibility I. I NTRODUCTION Recently there have been several proposals how to compute or approximate in some fashion the Kolmogorov complexity function. There is a proposal that is popular as a reference in papers that do not care about theoretical niceties, and a couple of proposals that do make sense but are not readily applicable. Therefore it is timely to survey the field and show what is and what is not proven. The plain Kolmogorov complexity was defined in and denoted by C in the text and its earlier editions. It deals with finite binary strings, strings for short. Other finite objects can be encoded into single strings in natural ways. The following notions and notation may not be familiar to the reader so we briefly discuss them. The length of a string x is denoted by l(x).The empty string of 0 bits is denoted by . Thus l() = 0 . Let x be a natural number or finite binary string according to the correspondence (, 0) , (0 , 1) , (1 , 2) , (00 , 3) , (01 , 4) , (10 , 5) , (11 , 6) , . . . . Then l(x) = blog( x + 1) c. The Kolmogorov complexity C(x) of x is the length of a shortest string x∗ such that x can be computed from x∗ by a fixed universal Turing machine (of a special Paul Vit´ anyi is with the national research center for mathematics and computer science in the Netherlands (CWI), and the University of Amsterdam. Address: CWI, Science Park 123, 1098XG Amsterdam, The Netherlands. Email: [email protected] . March 24, 2020 DRAFT arXiv:2002.07674v2 [cs.IT] 22 Mar 2020 2 type called “optimal” to exclude undesirable such machines). In this way C(x) is a definite natural number associated with x and a lower bound on the length of a compressed version of it by any known or as yet unknown compression algorithm. We also use the conditional version C(x|y).The papers by R.J. Solomonoff published in 1964, referenced as , contain informal suggestions about the incomputability of Kolmogorov complexity. Says Kolmogorov, “I came to similar conclusions [as Solomonoff], before becoming aware of Solomonoff’s work, in 1963– 1964.” In his 1965 paper Kolmogorov mentioned the incomputability of C(x) without giving a proof: “[ . . . ] the function Cφ(x|y) cannot be effectively calculated (generally computable) even if it is known to be finite for all x and y.” We give the formal proof of incomputability and discuss recent attempts to compute the Kolmogorov complexity partially, a popular but problematic proposal and some serious options. The problems of the popular proposal are discussed at length while the serious options are primarily restricted to brief citations explaining the methods gleaned from the introductions to the articles involved. II. I NCOMPUTABILITY To find the shortest program (or rather its length) for a string x we can run all programs to see which one halts with output x and select the shortest. 
II. INCOMPUTABILITY

To find the shortest program (or rather its length) for a string x we can run all programs to see which one halts with output x and select the shortest. We need to consider only programs of length at most that of x plus a fixed constant. The problem with this process is known as the halting problem: some programs do not halt, and it is undecidable which ones they are. A further complication is that we have to show there are infinitely many such strings x for which C(x) is incomputable. The first written proof of the incomputability of Kolmogorov complexity was perhaps in and we reproduce it here following in order to show what is and what is not proved.

THEOREM 1. The function C(x) is not computable. Moreover, no partial computable function φ(x) defined on an infinite set of points can coincide with C(x) over the whole of its domain of definition.

Proof. We prove that there is no partial computable φ as in the statement of the theorem. Every infinite computably enumerable set contains an infinite computable subset, see e.g. . Select an infinite computable subset A in the domain of definition of φ. The function ψ(m) = min{x : C(x) ≥ m, x ∈ A} is (total) computable (since C(x) = φ(x) on A), and takes arbitrarily large values, since it can obviously not be bounded for infinitely many x. Also, by definition of ψ, we have C(ψ(m)) ≥ m. On the other hand, C(ψ(m)) ≤ C_ψ(ψ(m)) + c_ψ by definition of C, and obviously C_ψ(ψ(m)) ≤ l(m). Hence, m ≤ log m up to a constant independent of m, which is false from some m onward.

That was the bad news; the good news is that we can approximate C(x).

THEOREM 2. There is a total computable function φ(t, x), monotonic nonincreasing in t, such that lim_{t→∞} φ(t, x) = C(x).

Proof. We define φ(t, x) as follows. For each x, we know that the shortest program for x has length at most l(x) + c, with c a constant independent of x. Run the reference Turing machine U (an optimal universal one) for t steps on each program p of length at most l(x) + c. If for any such input p the computation halts with output x, then define the value of φ(t, x) as the length of the shortest such p, and otherwise as l(x) + c. Clearly, φ(t, x) is computable, total, and monotonically nonincreasing with t (for all x, φ(t′, x) ≤ φ(t, x) if t′ > t). The limit exists, since for each x there exists a t such that U halts with output x after computing t steps starting with input p with l(p) = C(x).

One cannot decide, given x and t, whether φ(t, x) = C(x). Since φ(t, x) is nonincreasing and goes to the limit C(x) for t → ∞, if there were a decision procedure to test φ(t, x) = C(x), given x and t, then we could compute C(x). But above we showed that C is not computable.

However, this computable approximation has no convergence guarantees, as we show now. Let g_1, g_2, . . . be a sequence of functions. We call f the limit of this sequence if f(x) = lim_{t→∞} g_t(x) for all x. The limit is computably uniform if for every rational ε > 0 there exists a t(ε), where t is a total computable function, such that |f(x) − g_{t(ε)}(x)| ≤ ε for all x. Let the sequence of one-argument functions ψ_1, ψ_2, . . . be defined by ψ_t(x) = φ(t, x), for each t and all x. Clearly, C is the limit of the sequence of ψ's. However, by Theorem 1, the limit is not computably uniform. In fact, by the well-known halting problem, for each ε > 0 and t > 0 there exist infinitely many x such that |C(x) − ψ_t(x)| > ε. This means that for each ε > 0, for each t there are many x's such that our estimate φ(t, x) overestimates C(x) by an error of at least ε.
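The construction in Theorem 2 cannot be run literally, since it refers to an optimal universal machine U, but its shape can be illustrated on a fully computable stand-in. The sketch below is ours: the toy machine, its program encoding, and its step-count model are invented for the illustration, so the quantity it bounds is a toy complexity rather than C(x); the point is only that the estimate is total, computable, and nonincreasing in t.

```python
# Toy illustration of the approximation in Theorem 2. A tiny computable
# "machine" stands in for the optimal universal machine U. Programs have the
# form (unary repeat count)0(pattern); the cost model is our invention.
from itertools import product

def toy_machine(program: str, steps: int):
    """Return the toy machine's output on `program` if it 'halts' within
    `steps` simulated steps, else None."""
    if "0" not in program:
        return None                          # treated as a non-halting program
    k = program.index("0")
    count, pattern = k + 1, program[k + 1:]  # leading 1s encode a repeat count
    if count * max(len(pattern), 1) > steps: # crude running-time model
        return None
    return pattern * count

def phi(t: int, x: str) -> int:
    """Time-bounded upper bound on the toy complexity of x (cf. Theorem 2)."""
    bound = len(x) + 2                       # analogue of l(x) + c
    best = bound
    for length in range(1, bound + 1):       # scan programs by increasing length
        for bits in product("01", repeat=length):
            if toy_machine("".join(bits), t) == x:
                best = length
                break
        if best < bound:
            break                            # shortest halting program found
    return best

x = "101101101101"
print([phi(t, x) for t in (1, 4, 16, 64)])   # nonincreasing: [14, 14, 7, 7]
```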
III. COMPUTING THE KOLMOGOROV COMPLEXITY

The incomputability of C(x) does not mean that we cannot compute C(x) for some x's. For example, if for an individual string x we have C(C(x)|x) = c for some constant c, then this means that there is an algorithm of c bits which computes C(x) from x. We can express the incomputability of C(x) in terms of C(C(x)|x), which measures what we may call the "complexity of the complexity function." Let l(x) = n. It is easy to prove the upper bound C(C(x)|x) ≤ log n + O(1). But it is quite difficult to prove the lower bound : for each length n there are strings x of length n such that C(C(x)|x) ≥ log n − log log n − O(1), or its improvement by a game-based proof in : for each length n there are strings x of length n such that C(C(x)|x) ≥ log n − O(1). This means that x only marginally helps to compute C(x); most information in C(x) is extra information related to the halting problem.

One way to go about computing the Kolmogorov complexity for a few small values is as follows. Let T_1, T_2, . . . be an acceptable enumeration of Turing machines. Such an acceptable enumeration is a formal concept [12, Exercise 1.7.6]. Suppose we have a fixed reference optimal universal Turing machine U in this enumeration. Let U(i, p) simulate T_i(p) for all indexes i and (binary) programs p. Run T_i(p) for all i and p in the following manner. As long as i is sufficiently small it is likely that T_i(p) < ∞ for all p (the machine T_i halts for every p). The Busy Beaver function BB(n): ℕ → ℕ was introduced in and has as value the maximal running time of n-state Turing machines in quadruple format (see or for details). This function is incomputable and rises faster than any computable function of n. Reference supplies the maximal running time of the halting machines for all i < 5, and for those i it is decidable which machines halt. For i ≥ 5 but still small there are heuristics , , , . A gigantic lower bound for all i is given in .

Using Turing machines and programs with outcome the target string x, we can determine an upper bound on C(x) for the reference machine U (by encoding, for each T_i, the index i in self-delimiting format). Note that there exists no computable lower bound function approximating C(x): since C is incomputable and upper semicomputable, it cannot be lower semicomputable.

For an approximation using small Turing machines we do not have to consider all programs. If I is the set of indexes of the Turing machines and P is the set of halting (or what we consider halting) programs, then

{(i, p)}_x = {(i, p) : T_i(p) = x} \ {(i′, p′) : T_{i′}(p′) = x ∧ ∃(i, p) (T_i(p) = x ∧ |(i, p)| ≤ min{|i′|, |p′|})},

with i, i′ ∈ I and p, p′ ∈ P. Here we can use the computably invertible Cantor pairing function f : ℕ × ℕ → ℕ defined by f(a, b) = (a + b)(a + b + 1)/2 + b, so that each pair of natural numbers (a, b) is mapped to a natural number f(a, b) and vice versa. Since the Cantor pairing function is invertible, it is one-to-one and onto: |(a, b)| = |a| + |b|. Here {(i, p)}_x is the desired set of applicable halting programs computing x. That is, if either |i′| or |p′| is greater than some |(i, p)| with (i, p) ∈ {(i, p)}_x while T_{i′}(p′) = x, then we can discard the pair concerned from {(i, p)}_x.
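The Cantor pairing function mentioned above is easy to implement and invert; the following sketch (ours; the function names are arbitrary) follows the formula f(a, b) = (a + b)(a + b + 1)/2 + b given in the text.

```python
from math import isqrt

def cantor_pair(a: int, b: int) -> int:
    """f(a, b) = (a + b)(a + b + 1)/2 + b, the Cantor pairing function."""
    return (a + b) * (a + b + 1) // 2 + b

def cantor_unpair(n: int) -> tuple[int, int]:
    """Invert the pairing: recover (a, b) from n = f(a, b)."""
    w = (isqrt(8 * n + 1) - 1) // 2   # largest w with w(w+1)/2 <= n, i.e. w = a + b
    t = w * (w + 1) // 2
    b = n - t
    a = w - b
    return a, b

# Round-trip check: the map is one-to-one and onto the naturals.
for a in range(20):
    for b in range(20):
        assert cantor_unpair(cantor_pair(a, b)) == (a, b)
print(cantor_pair(3, 5))   # 41
```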
IV. PROBLEMATIC USE OF THE CODING THEOREM

Fix an optimal universal prefix Turing machine U. The Universal distribution (with respect to U) is m(x) = ∑ 2^{−l(p)}, where the sum is over the programs p (without input) for U that halt with output x. The prefix complexity K(x) is with respect to the same machine U. The complexity K(x) is similar to C(x), but such that the set of programs on which the Turing machine concerned halts is prefix-free (no program is a proper prefix of any other program). This leads to a slightly larger complexity: K(x) ≥ C(x). The Coding theorem states K(x) = −log m(x) + O(1). Since −log m(x) < K(x) (the term 2^{−K(x)} contributes to the sum, and a program of length l(x) + O(log x) for x contributes as well), we know that the O(1) term is greater than 0.

In it was proposed to compute the Kolmogorov complexity by experimentally approximating the Universal distribution and using the Coding theorem. This idea was used in several articles and applications. One of the last is . It contains errors or inaccuracies, for example: "the shortest program" instead of "a shortest program," "universal Turing machine" instead of "optimal universal Turing machine," and so on. Explanation: there can be more than one shortest program, and Turing machines can be universal in many ways. For instance, if U(p) = x for a universal Turing machine U, then the Turing machine U′ such that U′(qq) = U(q) for every q, and U′(r) = 0 for every string r not of the form qq for some string q, is also universal. Yet if U serves to define the Kolmogorov complexity C(x), then U′ defines a complexity of x equal to 2C(x), which means that the invariance theorem does not hold for universal Turing machines that are not optimal.

Let us assume that the computer used in the experiments fills the rôle of the required optimal universal Turing machine for the desired Kolmogorov complexity, the target string, and the universal distribution involved. However, the O(1) term in the Coding theorem is mentioned but otherwise ignored in the experiments and conclusions about the value of the Kolmogorov complexity as reported in , . Yet the experiments only concern small values of the Kolmogorov complexity, say smaller than 20, so they are likely swamped by the constant hidden in the O(1) term. Let us expand on this issue briefly. In the proof of the Coding theorem, see e.g. , a Turing machine T is used to decode a complicated code. The machine T is one of an acceptable enumeration T_1, T_2, . . . of all Turing machines. The target Kolmogorov complexity K is shown to be smaller than the complexity K_T associated with T plus a constant c representing the number of bits to represent T and other items: K(x) ≤ K_T(x) + c. Since T is complex, as it serves to decode this code, the constant c is huge, that is, much larger than, say, 100 bits. The values of x for which K(x) is approximated by , are of at most 5 bits, that is, there are at most 32 of them. Unless there arises a way to prove the Coding theorem without the large constant c, this method does not seem to work.

Other problems: the distribution m(x) is apparently used as m(x) = ∑_{i∈ℕ, T_i(ε)=x} 2^{−l(ε)}/i, see [16, equation (6)], using a (noncomputable) enumeration of Turing machines T_1, T_2, . . . that halt on empty input ε. Therefore ∑_{x∈ℕ} m(x) = ∑_{i∈ℕ, T_i(ε)<∞} 2^{−l(ε)}/i, and with l(ε) = 0 we have ∑_{x∈ℕ} m(x) = ∞, since ∑_i 1/i = ∞. By definition, however, ∑_{x∈ℕ} m(x) ≤ 1: contradiction. It should be m(x) = ∑_{i∈ℕ, T_i(p)=x} α(i) 2^{−l(p)} with ∑_{i∈ℕ} α(i) ≤ 1, as shown in [12, pp. 270–271].

V. NATURAL DATA

The Kolmogorov complexity of a file is a lower bound on the length of the ultimate compressed version of that file. We can approximate the Kolmogorov complexities involved by a real-world compressor. Since the Kolmogorov complexity is incomputable, in the approximation we never know how close we are to it.
However, we assume in that the natural data we are dealing with contain no complicated mathematical constructs like π = 3.1415 . . . or universal Turing machines, see . In fact, we assume that the natural data we are dealing with contain primarily effective regularities that a good compressor finds. Under those assumptions the Kolmogorov complexity of the object is not much smaller than the length of the compressed version of the object.

VI. SAFE COMPUTATIONS

A formal analysis of the intuitive idea in Section V was subsequently and independently given in . From the abstract of : "Kolmogorov complexity is an incomputable function. . . . By restricting the source of the data to a specific model class, we can construct a computable function to approximate it in a probabilistic sense: the probability that the error is greater than k decays exponentially with k." This analysis is carried out, but its application yielding concrete model classes is not.

VII. SHORT LISTS

Quoting from : "Given that the Kolmogorov complexity is not computable, it is natural to ask if given a string x it is possible to construct a short list containing a minimal (plus possibly a small overhead) description of x. Bauwens, Makhlin, Vereshchagin and Zimand and Teutsch show that, surprisingly, the answer is YES. Even more, in fact the short list can be computed in polynomial time. More precisely, the first reference showed that one can effectively compute lists of quadratic size guaranteed to contain a description of x whose size is additively O(1) from a minimal one (it is also shown that it is impossible to have such lists shorter than quadratic), and that one can compute in polynomial time lists guaranteed to contain a description that is additively O(log n) from minimal. Finally, improved the latter result by reducing O(log n) to O(1)." See also .

VIII. CONCLUSION

The review shows that the Kolmogorov complexity of a string is incomputable in general, but maybe computable for some arguments. To compute or approximate the Kolmogorov complexity, several approaches have recently been proposed. The most popular of these is inspired by L.A. Levin's Coding theorem and consists in taking the negative logarithm of the so-called universal probability of the string to obtain the Kolmogorov complexity of very short strings (this is not excluded by incomputability, as we saw). This probability is approximated by the frequency distributions obtained from small Turing machines. As currently stated the approach is problematic in the sense that it is only suggestive and cannot be proved correct. Nonetheless, some applications make use of it. Proper approaches either restrict the domain of strings of which the Kolmogorov complexity is desired (so that the incomputability turns into computability) or manage to restrict the Kolmogorov complexity of a string to an item in a small list of options (so that the Kolmogorov complexity has a certain finite probability).
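As a concrete companion to Section V, the following sketch (ours, not from the paper) uses off-the-shelf compressors to produce a computable upper bound on compressed length for a few kinds of data; under the assumptions of Section V this is the sense in which a good compressor approaches the Kolmogorov complexity of natural data. The sample inputs are arbitrary.

```python
# Computable upper bounds in the spirit of Section V: the length of a
# real-world compressor's output upper-bounds the Kolmogorov complexity
# up to the usual additive constants. Illustration only.
import bz2
import os
import zlib

def compressed_size_bits(data: bytes) -> int:
    """Best of two off-the-shelf compressors, in bits."""
    return 8 * min(len(zlib.compress(data, 9)), len(bz2.compress(data, 9)))

samples = {
    "repetitive": b"ab" * 5000,
    "english-ish": b"the quick brown fox jumps over the lazy dog " * 250,
    "random": os.urandom(10000),
}
for name, data in samples.items():
    print(f"{name:12s} raw = {8 * len(data):6d} bits, "
          f"upper bound = {compressed_size_bits(data):6d} bits")
# Repetitive and natural-language data compress far below their raw length;
# random data does not, as expected for strings of near-maximal complexity.
```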
REFERENCES

[1] B. Bauwens, A. Makhlin, N. Vereshchagin, and M. Zimand, Short lists with short programs in short time, Proc. 28th IEEE Conf. Comput. Complexity, 2013.
[2] B. Bauwens and A.K. Shen, Complexity of complexity and strings with maximal plain and prefix Kolmogorov complexity, J. Symbol. Logic, 79:2 (2014), 620–632.
[3] P. Bloem, F. Mota, S. de Rooij, L. Antunes, and P. Adriaans, A safe approximation for Kolmogorov complexity, ALT 2014, pp. 336–350.
[4] A.H. Brady, The determination of the value of Rado's noncomputable function Σ(k) for four-state Turing machines, Mathematics of Computation, 40:162, 647–665.
[5] R.L. Cilibrasi and P.M.B. Vitányi, Clustering by compression, IEEE Trans. Information Theory, 51:4 (2005), 1523–1545.
[6] P. Gács, On the symmetry of algorithmic information, Soviet Math. Dokl., 15 (1974), 1477–1480. Correction: ibid., 15 (1974), 1480.
[7] M.W. Green, A lower bound on Rado's sigma function for binary Turing machines, Proc. 5th Symp. Switching Circuit Theory and Logical Design, 1964, 91–94.
[8] J. Harland, Busy beaver machines and the observant otter heuristic (or how to tame dreadful dragons), Theor. Comput. Sci., 646 (2016), 61–85.
[9] O. Kellett, A multi-faceted attack on the busy beaver problem, Master's Thesis, Rensselaer Polytechnic Institute, 2005.
[10] A.N. Kolmogorov, Three approaches to the quantitative definition of information, Problems Inform. Transmission, 1:1 (1965), 1–7.
[11] L.A. Levin, Laws of information conservation (non-growth) and aspects of the foundation of probability theory, Problems Inform. Transmission, 10 (1974), 206–210.
[12] M. Li and P.M.B. Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Springer-Verlag, New York, 2008.
[13] H. Marxen and J. Buntrock, Attacking the Busy Beaver 5, Bulletin of the EATCS, 40 (1990), 247–251.
[14] P. Michel, Small Turing machines and generalized busy beaver competition, Theor. Comput. Sci., 325:1–3 (2004), 45–56.
[15] T. Rado, On non-computable functions, Bell System Tech. J., 44:3 (1962), 877–884.
[16] F. Soler-Toscano and H. Zenil, A computable measure of algorithmic probability by finite approximations with an application to integer sequences, arXiv:1504.06240 [cs.IT], 2017.
[17] F. Soler-Toscano, H. Zenil, J.-P. Delahaye, and N. Gauvrit, Calculating Kolmogorov complexity from the output frequency distributions of small Turing machines, PLoS ONE, 9 (2014), e96223.
[18] R.J. Solomonoff, A formal theory of inductive inference, part 1 and part 2, Inform. Contr., 7 (1964), 1–22, 224–254.
[19] P.M.B. Vitányi, Similarity and denoising, Phil. Trans. Royal Soc. A, 371 (2013), 20120091.
[20] Cantor's pairing function, Wikipedia.
[21] J. Teutsch, Short lists for shortest programs in short time, Computational Complexity, 23:4 (2014), 565–583.
[22] J. Teutsch and M. Zimand, A brief on short descriptions, ACM SIGACT News, 2016.
[23] A.M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proc. London Math. Soc., 42:2, 230–265; Correction, ibid., 43:2 (1937), 544–546.
[24] H. Zenil, Une approche expérimentale à la théorie algorithmique de la complexité, Ph.D. Thesis, Lab. d'Informatique Fondamentale de Lille, Université des Sciences et Technologie de Lille - Lille I, 2011.
[25] M. Zimand, Short Lists with Short Programs in Short Time — A Short Proof, Proc. 10th Conf. Computability in Europe, CiE 2014, 2014, 403–408.
[26] A.K. Zvonkin and L.A. Levin, The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms, Russian Math. Surveys, 25:6 (1970), 83–124.
Chapter 9 Sources of Magnetic Fields

9.1 Biot-Savart Law (9-3)
    Interactive Simulation 9.1: Magnetic Field of a Current Element (9-4)
    Example 9.1: Magnetic Field due to a Finite Straight Wire (9-4)
    Example 9.2: Magnetic Field due to a Circular Current Loop (9-7)
    9.1.1 Magnetic Field of a Moving Point Charge (9-10)
    Animation 9.1: Magnetic Field of a Moving Charge (9-11)
    Animation 9.2: Magnetic Field of Several Charges Moving in a Circle (9-12)
    Interactive Simulation 9.2: Magnetic Field of a Ring of Moving Charges (9-12)
9.2 Force Between Two Parallel Wires (9-13)
    Animation 9.3: Forces Between Current-Carrying Parallel Wires (9-14)
9.3 Ampere's Law (9-14)
    Example 9.3: Field Inside and Outside a Current-Carrying Wire (9-17)
    Example 9.4: Magnetic Field Due to an Infinite Current Sheet (9-18)
9.4 Solenoid (9-20)
    Example 9.5: Toroid (9-23)
9.5 Magnetic Field of a Dipole (9-24)
    9.5.1 Earth's Magnetic Field at MIT (9-25)
    Animation 9.4: A Bar Magnet in the Earth's Magnetic Field (9-27)
9.6 Magnetic Materials (9-28)
    9.6.1 Magnetization (9-28)
    9.6.2 Paramagnetism (9-31)
    9.6.3 Diamagnetism (9-32)
    9.6.4 Ferromagnetism (9-32)
9.7 Summary (9-33)
9.8 Appendix 1: Magnetic Field off the Symmetry Axis of a Current Loop (9-35)
9.9 Appendix 2: Helmholtz Coils (9-39)
    Animation 9.5: Magnetic Field of the Helmholtz Coils (9-41)
    Animation 9.6: Magnetic Field of Two Coils Carrying Opposite Currents (9-43)
    Animation 9.7: Forces Between Coaxial Current-Carrying Wires (9-44)
    Animation 9.8: Magnet Oscillating Between Two Coils (9-44)
    Animation 9.9: Magnet Suspended Between Two Coils (9-45)
9.10 Problem-Solving Strategies (9-46)
    9.10.1 Biot-Savart Law (9-46)
    9.10.2 Ampere's Law (9-48)
9.11 Solved Problems (9-49)
    9.11.1 Magnetic Field of a Straight Wire (9-49)
    9.11.2 Current-Carrying Arc (9-51)
    9.11.3 Rectangular Current Loop (9-52)
    9.11.4 Hairpin-Shaped Current-Carrying Wire (9-54)
    9.11.5 Two Infinitely Long Wires (9-55)
    9.11.6 Non-Uniform Current Density (9-57)
    9.11.7 Thin Strip of Metal (9-59)
    9.11.8 Two Semi-Infinite Wires (9-60)
9.12 Conceptual Questions (9-62)
9.13 Additional Problems (9-62)
    9.13.1 Application of Ampere's Law (9-62)
    9.13.2 Magnetic Field of a Current Distribution from Ampere's Law (9-63)
    9.13.3 Cylinder with a Hole (9-64)
    9.13.4 The Magnetic Field Through a Solenoid (9-64)
    9.13.5 Rotating Disk (9-65)
    9.13.6 Four Long Conducting Wires (9-65)
    9.13.7 Magnetic Force on a Current Loop (9-65)
    9.13.8 Magnetic Moment of an Orbital Electron (9-66)
    9.13.9 Ferromagnetism and Permanent Magnets (9-67)
    9.13.10 Charge in a Magnetic Field (9-67)
    9.13.11 Permanent Magnets (9-68)
    9.13.12 Magnetic Field of a Solenoid (9-68)
    9.13.13 Effect of Paramagnetism (9-69)

9.1 Biot-Savart Law

Currents which arise due to the motion of charges are the source of magnetic fields. When charges move in a conducting wire and produce a current I, the magnetic field at any point P due to the current can be calculated by adding up the magnetic field contributions dB from small segments of the wire ds (Figure 9.1.1).

Figure 9.1.1 Magnetic field dB at point P due to a current-carrying element I ds.

These segments can be thought of as vector quantities having a magnitude equal to the length of the segment and pointing in the direction of the current flow. The infinitesimal current source can then be written as I ds. Let r denote the distance from the current source to the field point P, and r̂ the corresponding unit vector.
The Biot-Savart law gives an expression for the magnetic field contribution, , from the current source, ˆ r dB G Id s G , 0 2 ˆ 4 I d d r µ π × = s r B G G (9.1.1) where 0 µ is a constant called the permeability of free space: (9.1.2) 7 0 4 10 T m/A µ π − = × ⋅ Notice that the expression is remarkably similar to the Coulomb’s law for the electric field due to a charge element dq: 2 0 1 ˆ 4 dq d r πε = E r G (9.1.3) Adding up these contributions to find the magnetic field at the point P requires integrating over the current source, 9-3 0 2 wire wire ˆ 4 I d d r µ π × = = ∫ ∫ s r B B G G G (9.1.4) The integral is a vector integral, which means that the expression for B is really three integrals, one for each component of B G G . The vector nature of this integral appears in the cross product . Understanding how to evaluate this cross product and then perform the integral will be the key to learning how to use the Biot-Savart law. ˆ I d × s r G Interactive Simulation 9.1: Magnetic Field of a Current Element Figure 9.1.2 is an interactive ShockWave display that shows the magnetic field of a current element from Eq. (9.1.1). This interactive display allows you to move the position of the observer about the source current element to see how moving that position changes the value of the magnetic field at the position of the observer. Figure 9.1.2 Magnetic field of a current element. Example 9.1: Magnetic Field due to a Finite Straight Wire A thin, straight wire carrying a current I is placed along the x-axis, as shown in Figure 9.1.3. Evaluate the magnetic field at point P. Note that we have assumed that the leads to the ends of the wire make canceling contributions to the net magnetic field at the point . P Figure 9.1.3 A thin straight wire carrying a current I. 9-4 Solution: This is a typical example involving the use of the Biot-Savart law. We solve the problem using the methodology summarized in Section 9.10. (1) Source point (coordinates denoted with a prime) Consider a differential element ˆ ' d dx = s i G carrying current I in the x-direction. The location of this source is represented by ˆ ' ' x = r i G . (2) Field point (coordinates denoted with a subscript “P”) Since the field point P is located at ( , ) (0, ) x y a = , the position vector describing P is ˆ P a = r j G . (3) Relative position vector The vector is a “relative” position vector which points from the source point to the field point. In this case, ' P = − r r r G G G ˆ ' a x = − r ˆ j i G , and the magnitude 2 | | ' r a = = + r 2 x G is the distance from between the source and P. The corresponding unit vector is given by 2 2 ˆ ˆ ' ˆ ˆ ˆ sin cos ' a x r a x θ θ − = = = − + r j i r j i G (4) The cross product ˆ d × s r G The cross product is given by ˆ ˆ ˆ ˆ ˆ ( ' ) ( cos sin ) ( 'sin ) d dx dx θ θ θ × = × − + = s r i i j k G (5) Write down the contribution to the magnetic field due to Id s G The expression is 0 0 2 2 ˆ sin ˆ 4 4 I I d dx d r r µ µ θ π π × = = s r B k G G which shows that the magnetic field at P will point in the ˆ +k direction, or out of the page. (6) Simplify and carry out the integration 9-5 The variables θ, x and r are not independent of each other. In order to complete the integration, let us rewrite the variables x and r in terms of θ. 
From Figure 9.1.3, we have 2 /sin csc cot csc r a a x a dx a θ θ d θ θ θ = = ⎧ ⎪ ⎨ = ⇒ = − ⎪ ⎩ Upon substituting the above expressions, the differential contribution to the magnetic field is obtained as 2 0 0 2 ( csc )sin sin 4 ( csc ) 4 I I a d dB d a a µ µ θ θ θ θ θ π θ π − = = − Integrating over all angles subtended from 1 θ − to 2 θ (a negative sign is needed for 1 θ in order to take into consideration the portion of the length extended in the negative x axis from the origin), we obtain 2 1 0 0 2 sin (cos cos ) 4 4 I I B d a a θ θ 1 µ µ θ θ θ π π − = − = + ∫ θ (9.1.5) The first term involving 2 θ accounts for the contribution from the portion along the +x axis, while the second term involving 1 θ contains the contribution from the portion along the x − axis. The two terms add! Let’s examine the following cases: (i) In the symmetric case where 2 1 θ θ = − , the field point P is located along the perpendicular bisector. If the length of the rod is 2 , then L 2 2 1 cos / L L a θ = + and the magnetic field is 0 0 1 2 2 cos 2 2 I I L B a a L a µ µ θ π π = = + (9.1.6) (ii) The infinite length limit L →∞ This limit is obtained by choosing 1 2 ( , ) (0,0) θ θ = . The magnetic field at a distance a away becomes 0 2 I B a µ π = (9.1.7) 9-6 Note that in this limit, the system possesses cylindrical symmetry, and the magnetic field lines are circular, as shown in Figure 9.1.4. Figure 9.1.4 Magnetic field lines due to an infinite wire carrying current I. In fact, the direction of the magnetic field due to a long straight wire can be determined by the right-hand rule (Figure 9.1.5). Figure 9.1.5 Direction of the magnetic field due to an infinite straight wire If you direct your right thumb along the direction of the current in the wire, then the fingers of your right hand curl in the direction of the magnetic field. In cylindrical coordinates ( , , ) r z ϕ where the unit vectors are related by ˆ ˆ ˆ × = r φ z , if the current flows in the +z-direction, then, using the Biot-Savart law, the magnetic field must point in the ϕ -direction. Example 9.2: Magnetic Field due to a Circular Current Loop A circular loop of radius R in the xy plane carries a steady current I, as shown in Figure 9.1.6. (a) What is the magnetic field at a point P on the axis of the loop, at a distance z from the center? (b) If we place a magnetic dipole ˆ z µ = µ k G at P, find the magnetic force experienced by the dipole. Is the force attractive or repulsive? What happens if the direction of the dipole is reversed, i.e., ˆ z µ = − µ k G 9-7 Figure 9.1.6 Magnetic field due to a circular loop carrying a steady current. Solution: (a) This is another example that involves the application of the Biot-Savart law. Again let’s find the magnetic field by applying the same methodology used in Example 9.1. (1) Source point In Cartesian coordinates, the differential current element located at ˆ ' (cos ' sin ' R ˆ) φ φ = + r i j G can be written as ˆ ˆ ( '/ ') ' '( sin ' cos ' ) Id I d d d IRd φ φ φ φ φ = = − + s r i j G G . (2) Field point Since the field point P is on the axis of the loop at a distance z from the center, its position vector is given by . ˆ P z = r k G (3) Relative position vector ' P = − r r r G G G The relative position vector is given by ˆ ˆ ˆ ' cos ' sin ' P R R φ φ − = − − + r = r r i z j k G G G (9.1.8) and its magnitude ( ) 2 2 2 ( cos ') sin ' r R R z R φ φ = = − + − + = + r G 2 2 z (9.1.9) is the distance between the differential current element and P. 
Thus, the corresponding unit vector from Id s G to P can be written as ' ˆ | ' P P r | − = = − r r r r r r G G G G G 9-8 (4) Simplifying the cross product The cross product can be simplified as ( ') P d × − s r r G G G ( ) ˆ ˆ ˆ ˆ ˆ ( ') ' sin ' cos ' [ cos ' sin ' ] ˆ ˆ ˆ '[ cos ' sin ' ] P d R d R R z R d z z R φ φ φ φ φ φ φ φ × − = − + × − − + = + + s r r i j i j k i j k G G G (9.1.10) (5) Writing down dB G Using the Biot-Savart law, the contribution of the current element to the magnetic field at P is 0 0 0 2 3 0 2 2 3/ 2 ˆ ( ' 4 4 4 | ˆ ˆ ˆ cos ' sin ' ' 4 ( ) P P I I I d d d d r r IR z z R d R z 3 ) '| µ µ µ π π π µ φ φ φ π × − × × = = = − + + = + s r r s r s r B r r i j k G G G G G G G G G (9.1.11) (6) Carrying out the integration Using the result obtained above, the magnetic field at P is 2 0 2 2 3/ 2 0 ˆ ˆ ˆ cos ' sin ' ' 4 ( ) IR z z R d R z π µ φ φ φ π + + = + ∫ i j k B G (9.1.12) The x and the y components of B can be readily shown to be zero: G 2 0 0 2 2 3/ 2 2 2 3/2 0 2 cos ' ' sin ' 0 0 4 ( ) 4 ( ) x IRz IRz B d R z R z π π µ µ φ φ φ π π = = + + ∫ = (9.1.13) 2 0 0 2 2 3/ 2 2 2 3/ 2 0 2 sin ' ' cos ' 0 0 4 ( ) 4 ( ) y IRz IRz B d R z R z π π µ µ φ φ φ π π = =− + + ∫ = (9.1.14) On the other hand, the z component is 2 2 2 2 0 0 2 2 3/2 2 2 3/ 2 2 2 3/ 2 0 2 ' 4 ( ) 4 ( ) 2( ) z IR IR IR B d R z R z R z π µ µ π φ π π = = = + + ∫ 0 µ + (9.1.15) Thus, we see that along the symmetric axis, z B is the only non-vanishing component of the magnetic field. The conclusion can also be reached by using the symmetry arguments. 9-9 The behavior of 0 / z B B where 0 0 / 2 B I R µ = is the magnetic field strength at , as a function of is shown in Figure 9.1.7: 0 z = / z R Figure 9.1.7 The ratio of the magnetic field, 0 / z B B , as a function of / z R (b) If we place a magnetic dipole ˆ z µ = µ k G at the point P, as discussed in Chapter 8, due to the non-uniformity of the magnetic field, the dipole will experience a force given by ˆ ( ) ( ) z B z z z dB B dz µ µ ⎛ ⎞ = ∇ ⋅ = ∇ = ⎜ ⎟ ⎝ ⎠ F µ B G G k G (9.1.16) Upon differentiating Eq. (9.1.15) and substituting into Eq. (9.1.16), we obtain 2 0 2 2 5/ 2 3 ˆ 2( ) z B IR z R z µ µ = − + F k G (9.1.17) Thus, the dipole is attracted toward the current-carrying ring. On the other hand, if the direction of the dipole is reversed, ˆ z µ = − µ k G , the resulting force will be repulsive. 9.1.1 Magnetic Field of a Moving Point Charge Suppose we have an infinitesimal current element in the form of a cylinder of cross-sectional area A and length ds consisting of n charge carriers per unit volume, all moving at a common velocity along the axis of the cylinder. Let I be the current in the element, which we define as the amount of charge passing through any cross-section of the cylinder per unit time. From Chapter 6, we see that the current I can be written as v G n Aq I = v G (9.1.18) The total number of charge carriers in the current element is simply , so that using Eq. (9.1.1), the magnetic field dN n Ads = dB G due to the dN charge carriers is given by 9-10 0 0 0 2 2 ˆ ˆ ( | |) ( ) ( ) 4 4 4 nAq d n A ds q dN q d r r 2 ˆ r µ µ µ π π π × × = = = v s r v r v r B × G G G G G (9.1.19) where r is the distance between the charge and the field point P at which the field is being measured, the unit vector points from the source of the field (the charge) to P. The differential length vector is defined to be parallel to ˆ / r = r r G d s G v G . 
In case of a single charge, , the above equation becomes 1 dN = 0 2 ˆ 4 q r µ π × = v r B G G (9.1.20) Note, however, that since a point charge does not constitute a steady current, the above equation strictly speaking only holds in the non-relativistic limit where v , the speed of light, so that the effect of “retardation” can be ignored. c  The result may be readily extended to a collection of N point charges, each moving with a different velocity. Let the ith charge be located at ( i q , , ) i i i x y z and moving with velocity . Using the superposition principle, the magnetic field at P can be obtained as: i v G 0 3/ 2 2 2 2 1 ˆ ˆ ˆ ( ) ( ) ( ) 4 ( ) ( ) ( ) N i i i i i i i i i x x y y z z q x x y y z z µ π = ⎡ ⎤ − + − + − ⎢ ⎥ = × ⎢ ⎥ ⎡ ⎤ − + − + − ⎣ ⎦ ⎣ ⎦ ∑ i j k B v G G (9.1.21) Animation 9.1: Magnetic Field of a Moving Charge Figure 9.1.8 shows one frame of the animations of the magnetic field of a moving positive and negative point charge, assuming the speed of the charge is small compared to the speed of light. Figure 9.1.8 The magnetic field of (a) a moving positive charge, and (b) a moving negative charge, when the speed of the charge is small compared to the speed of light. 9-11 Animation 9.2: Magnetic Field of Several Charges Moving in a Circle Suppose we want to calculate the magnetic fields of a number of charges moving on the circumference of a circle with equal spacing between the charges. To calculate this field we have to add up vectorially the magnetic fields of each of charges using Eq. (9.1.19). Figure 9.1.9 The magnetic field of four charges moving in a circle. We show the magnetic field vector directions in only one plane. The bullet-like icons indicate the direction of the magnetic field at that point in the array spanning the plane. Figure 9.1.9 shows one frame of the animation when the number of moving charges is four. Other animations show the same situation for N =1, 2, and 8. When we get to eight charges, a characteristic pattern emerges--the magnetic dipole pattern. Far from the ring, the shape of the field lines is the same as the shape of the field lines for an electric dipole. Interactive Simulation 9.2: Magnetic Field of a Ring of Moving Charges Figure 9.1.10 shows a ShockWave display of the vectoral addition process for the case where we have 30 charges moving on a circle. The display in Figure 9.1.10 shows an observation point fixed on the axis of the ring. As the addition proceeds, we also show the resultant up to that point (large arrow in the display). Figure 9.1.10 A ShockWave simulation of the use of the principle of superposition to find the magnetic field due to 30 moving charges moving in a circle at an observation point on the axis of the circle. 9-12 Figure 9.1.11 The magnetic field due to 30 charges moving in a circle at a given observation point. The position of the observation point can be varied to see how the magnetic field of the individual charges adds up to give the total field. In Figure 9.1.11, we show an interactive ShockWave display that is similar to that in Figure 9.1.10, but now we can interact with the display to move the position of the observer about in space. To get a feel for the total magnetic field, we also show a “iron filings” representation of the magnetic field due to these charges. We can move the observation point about in space to see how the total field at various points arises from the individual contributions of the magnetic field of to each moving charge. 
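As a numerical cross-check on the Biot-Savart results of this section, the sketch below (ours, not part of the chapter) sums the contributions of Eq. (9.1.1) around a circular current loop for a field point on the axis and compares the total with the closed form of Eq. (9.1.15), B_z = μ0 I R² / [2(R² + z²)^{3/2}]; the loop radius, current, and number of segments are arbitrary choices.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space, T*m/A

def loop_field_on_axis_numeric(I, R, z, n=3600):
    """Biot-Savart sum for a circular loop in the xy-plane, evaluated at the
    on-axis point (0, 0, z): dB = mu0 I/(4 pi) dl x r_vec / |r_vec|^3."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dphi = 2.0 * np.pi / n
    src = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros_like(phi)], axis=1)
    dl = np.stack([-R * np.sin(phi), R * np.cos(phi), np.zeros_like(phi)], axis=1) * dphi
    rvec = np.array([0.0, 0.0, z]) - src           # source element -> field point
    rmag = np.linalg.norm(rvec, axis=1, keepdims=True)
    dB = MU0 * I / (4.0 * np.pi) * np.cross(dl, rvec) / rmag**3
    return dB.sum(axis=0)

I, R, z = 1.0, 0.05, 0.02                           # 1 A, 5 cm loop, 2 cm up the axis
Bz_closed = MU0 * I * R**2 / (2.0 * (R**2 + z**2) ** 1.5)
print(loop_field_on_axis_numeric(I, R, z))          # x, y components ~ 0
print(Bz_closed)                                    # matches the z component
```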
9.2 Force Between Two Parallel Wires We have already seen that a current-carrying wire produces a magnetic field. In addition, when placed in a magnetic field, a wire carrying a current will experience a net force. Thus, we expect two current-carrying wires to exert force on each other. Consider two parallel wires separated by a distance a and carrying currents I1 and I2 in the +x-direction, as shown in Figure 9.2.1. Figure 9.2.1 Force between two parallel wires The magnetic force, , exerted on wire 1 by wire 2 may be computed as follows: Using the result from the previous example, the magnetic field lines due to I 12 F G 2 going in the +x-direction are circles concentric with wire 2, with the field 2 B G pointing in the tangential 9-13 direction. Thus, at an arbitrary point P on wire 1, we have 2 0 2 ˆ ( / 2 ) I a µ π = − B j G , which points in the direction perpendicular to wire 1, as depicted in Figure 9.2.1. Therefore, ( ) 0 2 0 1 2 12 1 2 1 ˆ ˆ ˆ 2 2 I I I l I I l a a µ µ π π ⎛ ⎞ = × = × − = − ⎜ ⎟ ⎝ ⎠ F B i j k G G G l (9.2.1) Clearly points toward wire 2. The conclusion we can draw from this simple calculation is that two parallel wires carrying currents in the same direction will attract each other. On the other hand, if the currents flow in opposite directions, the resultant force will be repulsive. 12 F G Animation 9.3: Forces Between Current-Carrying Parallel Wires Figures 9.2.2 shows parallel wires carrying current in the same and in opposite directions. In the first case, the magnetic field configuration is such as to produce an attraction between the wires. In the second case the magnetic field configuration is such as to produce a repulsion between the wires. (a) (b) Figure 9.2.2 (a) The attraction between two wires carrying current in the same direction. The direction of current flow is represented by the motion of the orange spheres in the visualization. (b) The repulsion of two wires carrying current in opposite directions. 9.3 Ampere’s Law We have seen that moving charges or currents are the source of magnetism. This can be readily demonstrated by placing compass needles near a wire. As shown in Figure 9.3.1a, all compass needles point in the same direction in the absence of current. However, when , the needles will be deflected along the tangential direction of the circular path (Figure 9.3.1b). 0 I ≠ 9-14 Figure 9.3.1 Deflection of compass needles near a current-carrying wire Let us now divide a circular path of radius r into a large number of small length vectors , that point along the tangential direction with magnitude ˆ s ∆ ∆ s = φ G s ∆ (Figure 9.3.2). Figure 9.3.2 Amperian loop In the limit , we obtain 0 ∆→ s G G ( ) 0 0 2 2 I d B ds r r µ I π µ π ⎛ ⎞ ⋅ = = = ⎜ ⎟ ⎝ ⎠ ∫ ∫ B s G G v v (9.3.1) The result above is obtained by choosing a closed path, or an “Amperian loop” that follows one particular magnetic field line. Let’s consider a slightly more complicated Amperian loop, as that shown in Figure 9.3.3 Figure 9.3.3 An Amperian loop involving two field lines 9-15 The line integral of the magnetic field around the contour abcda is (9.3.2) 2 2 1 1 0 ( ) 0 [ (2 )] abcda ab bc cd cd d d d d B r B r θ π θ ⋅ = ⋅ + ⋅ + ⋅ + ⋅ = + + + − ∫ ∫ ∫ ∫ ∫ B s B s B s B s B s G G G G G G G G G v d G where the length of arc bc is 2 r θ , and 1(2 ) r π θ − for arc da. The first and the third integrals vanish since the magnetic field is perpendicular to the paths of integration. 
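Taking the magnitude of Eq. (9.2.1) gives the force per unit length between the wires, F/l = μ0 I1 I2 / (2πa). A minimal numerical sketch (ours; the sample currents and spacing are arbitrary):

```python
import math

MU0 = 4e-7 * math.pi  # T*m/A

def force_per_length(I1, I2, a):
    """Magnitude of the force per unit length between two long parallel wires,
    F/l = mu0 I1 I2 / (2 pi a); attractive for parallel currents."""
    return MU0 * I1 * I2 / (2.0 * math.pi * a)

# Two long parallel wires carrying 1 A each, separated by 1 m:
print(force_per_length(1.0, 1.0, 1.0))   # 2e-07 N/m
# Reversing one current leaves the magnitude unchanged but makes the force repulsive.
```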
With 1 0 / 2 1 B I r µ π = and 2 0 / 2 2 B I r µ π = , the above expression becomes 0 0 0 0 2 1 2 1 ( ) [ (2 )] (2 ) 2 2 2 2 abcda I I I I d r r r r 0I µ µ µ µ θ π θ θ π θ µ π π π π ⋅ = + − = + − = ∫B s G G v (9.3.3) We see that the same result is obtained whether the closed path involves one or two magnetic field lines. As shown in Example 9.1, in cylindrical coordinates ( , , ) r z ϕ with current flowing in the +z-axis, the magnetic field is given by 0 ˆ ( / 2 ) I r µ π = B φ G . An arbitrary length element in the cylindrical coordinates can be written as ˆ ˆ ˆ d dr r d dz ϕ = + + s r φ z G (9.3.4) which implies 0 0 0 0 closed path closed path closed path (2 ) 2 2 2 I I I d r d d r µ µ µ I ϕ ϕ π π π π ⎛ ⎞ ⋅ = = = = ⎜ ⎟ ⎝ ⎠ ∫ ∫ ∫ B s µ G G v v v (9.3.5) In other words, the line integral of d ⋅ ∫B s G G v around any closed Amperian loop is proportional to enc I , the current encircled by the loop. Figure 9.3.4 An Amperian loop of arbitrary shape. 9-16 The generalization to any closed loop of arbitrary shape (see for example, Figure 9.3.4) that involves many magnetic field lines is known as Ampere’s law: 0 enc d I µ ⋅ ∫B s = G G v (9.3.6) Ampere’s law in magnetism is analogous to Gauss’s law in electrostatics. In order to apply them, the system must possess certain symmetry. In the case of an infinite wire, the system possesses cylindrical symmetry and Ampere’s law can be readily applied. However, when the length of the wire is finite, Biot-Savart law must be used instead. Biot-Savart Law 0 2 ˆ 4 I d r µ π × = ∫ s r B G G general current source ex: finite wire Ampere’s law 0 enc d I µ ⋅ ∫B s = G G v current source has certain symmetry ex: infinite wire (cylindrical) Ampere’s law is applicable to the following current configurations: 1. Infinitely long straight wires carrying a steady current I (Example 9.3) 2. Infinitely large sheet of thickness b with a current density J (Example 9.4). 3. Infinite solenoid (Section 9.4). 4. Toroid (Example 9.5). We shall examine all four configurations in detail. Example 9.3: Field Inside and Outside a Current-Carrying Wire Consider a long straight wire of radius R carrying a current I of uniform current density, as shown in Figure 9.3.5. Find the magnetic field everywhere. Figure 9.3.5 Amperian loops for calculating the B G field of a conducting wire of radius R. 9-17 Solution: (i) Outside the wire where r , the Amperian loop (circle 1) completely encircles the current, i.e., R ≥ enc I I = . Applying Ampere’s law yields ( ) 0 2 d B ds B r I π µ ⋅ = = = ∫ ∫ B s G G v v which implies 0 2 I B r µ π = (ii) Inside the wire where r , the amount of current encircled by the Amperian loop (circle 2) is proportional to the area enclosed, i.e., R < 2 enc 2 r I I R π π ⎛ ⎞ = ⎜ ⎟ ⎝ ⎠ Thus, we have ( ) 2 0 0 2 2 2 2 Ir r d B r I B R R µ π π µ π π ⎛ ⎞ ⋅ = = ⇒ = ⎜ ⎟ ⎝ ⎠ ∫B s G G v We see that the magnetic field is zero at the center of the wire and increases linearly with r until r=R. Outside the wire, the field falls off as 1/r. The qualitative behavior of the field is depicted in Figure 9.3.6 below: Figure 9.3.6 Magnetic field of a conducting wire of radius R carrying a steady current I . Example 9.4: Magnetic Field Due to an Infinite Current Sheet Consider an infinitely large sheet of thickness b lying in the xy plane with a uniform current density . Find the magnetic field everywhere. 0ˆ J = J G i 9-18 Figure 9.3.7 An infinite sheet with current density . 
0ˆ J = J i G Solution: We may think of the current sheet as a set of parallel wires carrying currents in the +x-direction. From Figure 9.3.8, we see that magnetic field at a point P above the plane points in the −y-direction. The z-component vanishes after adding up the contributions from all wires. Similarly, we may show that the magnetic field at a point below the plane points in the +y-direction. Figure 9.3.8 Magnetic field of a current sheet We may now apply Ampere’s law to find the magnetic field due to the current sheet. The Amperian loops are shown in Figure 9.3.9. Figure 9.3.9 Amperian loops for the current sheets For the field outside, we integrate along path . The amount of current enclosed by is 1 C 1 C 9-19 enc 0( ) I d J b = ⋅ = ∫∫J A G G A (9.3.7) Applying Ampere’s law leads to 0 enc 0 0 (2 ) ( ) d B I J b µ µ ⋅ = = = ∫B s G G v A A (9.3.8) or 0 0 / 2 B J b µ = . Note that the magnetic field outside the sheet is constant, independent of the distance from the sheet. Next we find the magnetic field inside the sheet. The amount of current enclosed by path is 2 C enc 0(2 | | ) I d J z = ⋅ = ∫∫J A G G A (9.3.9) Applying Ampere’s law, we obtain 0 enc 0 0 (2 ) (2 | | ) d B I J z µ µ ⋅ = = = ∫B s G G v A A (9.3.10) or 0 0 | | B J z µ = . At , the magnetic field vanishes, as required by symmetry. The results can be summarized using the unit-vector notation as 0 z = 0 0 0 0 0 0 ˆ, / 2 2 ˆ, / 2 / 2 ˆ , / 2 2 J b z b J z b z b J b z b µ µ µ ⎧− > ⎪ ⎪ ⎪ = − − < < ⎨ ⎪ ⎪ < − ⎪ ⎩ j B j j G (9.3.11) Let’s now consider the limit where the sheet is infinitesimally thin, with . In this case, instead of current density , we have surface current 0 b → 0ˆ J = J G i ˆ K = K i G , where 0 K J b = . Note that the dimension of K is current/length. In this limit, the magnetic field becomes 0 0 ˆ, 0 2 ˆ , 0 2 K z K z µ µ ⎧− > ⎪ ⎪ = ⎨ ⎪ < ⎪ ⎩ j B j G (9.3.12) 9.4 Solenoid A solenoid is a long coil of wire tightly wound in the helical form. Figure 9.4.1 shows the magnetic field lines of a solenoid carrying a steady current I. We see that if the turns are closely spaced, the resulting magnetic field inside the solenoid becomes fairly uniform, 9-20 provided that the length of the solenoid is much greater than its diameter. For an “ideal” solenoid, which is infinitely long with turns tightly packed, the magnetic field inside the solenoid is uniform and parallel to the axis, and vanishes outside the solenoid. Figure 9.4.1 Magnetic field lines of a solenoid We can use Ampere’s law to calculate the magnetic field strength inside an ideal solenoid. The cross-sectional view of an ideal solenoid is shown in Figure 9.4.2. To compute B G , we consider a rectangular path of length l and width w and traverse the path in a counterclockwise manner. The line integral of B G along this loop is 0 0 0 d d d d Bl ⋅ ⋅ + ⋅ + ⋅ + ⋅ = + + + ∫ ∫ ∫ ∫ ∫ 1 2 3 4 B s = B s B s B s B d s G G G G G G G G G G v (9.4.1) Figure 9.4.2 Amperian loop for calculating the magnetic field of an ideal solenoid. In the above, the contributions along sides 2 and 4 are zero because B G is perpendicular to . In addition, along side 1 because the magnetic field is non-zero only inside the solenoid. On the other hand, the total current enclosed by the Amperian loop is d s G = B 0 G G enc I NI = , where N is the total number of turns. 
Applying Ampere’s law yields 0 d Bl N µ ⋅ = = I ∫B s G G v (9.4.2) or 9-21 0 0 NI B nI l µ µ = = (9.4.3) where represents the number of turns per unit length., In terms of the surface current, or current per unit length / n N l = K nI = , the magnetic field can also be written as, 0 B K µ = (9.4.4) What happens if the length of the solenoid is finite? To find the magnetic field due to a finite solenoid, we shall approximate the solenoid as consisting of a large number of circular loops stacking together. Using the result obtained in Example 9.2, the magnetic field at a point P on the z axis may be calculated as follows: Take a cross section of tightly packed loops located at z’ with a thickness ', as shown in Figure 9.4.3 dz The amount of current flowing through is proportional to the thickness of the cross section and is given by , where ( ') ( / ) ' dI I ndz I N l dz = = / n N l = is the number of turns per unit length. Figure 9.4.3 Finite Solenoid The contribution to the magnetic field at P due to this subset of loops is 2 2 0 0 2 2 3/ 2 2 2 3/ 2 ( 2[( ') ] 2[( ') ] z R R dB dI nIdz z z R z z R µ µ = = − + − + ') (9.4.5) Integrating over the entire length of the solenoid, we obtain 2 2 /2 0 0 2 2 3/ 2 2 2 2 /2 0 2 2 2 2 / 2 / 2 ' ' 2 [( ') ] 2 ( ') ( / 2) ( / 2) 2 ( / 2) ( / 2) l z l l l nIR nIR dz z z B z z R R z z R nI l z l z z l R z l R µ µ µ − − − = = − + − + ⎡ ⎤ − + = + ⎢ ⎥ ⎢ − + + + ⎥ ⎣ ⎦ ∫ (9.4.6) 9-22 A plot of 0 / z B B , where 0 0 B nI µ = is the magnetic field of an infinite solenoid, as a function of is shown in Figure 9.4.4 for / z R 10 l R = and 20 l R = . Figure 9.4.4 Magnetic field of a finite solenoid for (a) 10 l R = , and (b) . 20 l R = Notice that the value of the magnetic field in the region| | / 2 z l < is nearly uniform and approximately equal to 0 B . Examaple 9.5: Toroid Consider a toroid which consists of N turns, as shown in Figure 9.4.5. Find the magnetic field everywhere. Figure 9.4.5 A toroid with N turns Solutions: One can think of a toroid as a solenoid wrapped around with its ends connected. Thus, the magnetic field is completely confined inside the toroid and the field points in the azimuthal direction (clockwise due to the way the current flows, as shown in Figure 9.4.5.) Applying Ampere’s law, we obtain 9-23 0 (2 ) d Bds B ds B r N π µ ⋅ = = = = ∫ ∫ ∫ B s I G G v v v (9.4.7) or 0 2 NI B r µ π = (9.4.8) where r is the distance measured from the center of the toroid.. Unlike the magnetic field of a solenoid, the magnetic field inside the toroid is non-uniform and decreases as1/ . r 9.5 Magnetic Field of a Dipole Let a magnetic dipole moment vector ˆ µ = − µ k G be placed at the origin (e.g., center of the Earth) in the plane. What is the magnetic field at a point (e.g., MIT) a distance r away from the origin? yz Figure 9.5.1 Earth’s magnetic field components In Figure 9.5.1 we show the magnetic field at MIT due to the dipole. The y- and z- components of the magnetic field are given by 2 0 0 3 3 3 sin cos , (3cos 1) 4 4 y z B B r r µ µ µ µ θ θ θ π π = − = − − (9.5.1) Readers are referred to Section 9.8 for the detail of the derivation. In spherical coordinates (r,θ,φ), the radial and the polar components of the magnetic field can be written as 0 3 2 sin cos cos 4 r y z B B B r µ µ θ θ π = + = − θ (9.5.2) 9-24 and 0 3 cos sin sin 4 y z B B B r θ µ µ θ θ π = − = − θ (9.5.3) respectively. 
Thus, the magnetic field at MIT due to the dipole becomes 0 3 ˆ ˆ ˆ (sin 2cos ) 4 r B B r θ ˆ µ µ θ θ π = + = − + B θ r θ r G (9.5.4) Notice the similarity between the above expression and the electric field due to an electric dipole p (see Solved Problem 2.13.6): G 3 0 1 ˆ ˆ (sin 2cos ) 4 p r θ θ πε = + E θ r G The negative sign in Eq. (9.5.4) is due to the fact that the magnetic dipole points in the −z-direction. In general, the magnetic field due to a dipole moment µ G can be written as 0 3 ˆ ˆ 3( ) 4 r µ π ⋅ − = µ r r µ B G G G (9.5.5) The ratio of the radial and the polar components is given by 0 3 0 3 2 cos 4 2cot sin 4 r B r B r θ µ µ θ π θ µ µ θ π − = = − (9.5.6) 9.5.1 Earth’s Magnetic Field at MIT The Earth’s field behaves as if there were a bar magnet in it. In Figure 9.5.2 an imaginary magnet is drawn inside the Earth oriented to produce a magnetic field like that of the Earth’s magnetic field. Note the South pole of such a magnet in the northern hemisphere in order to attract the North pole of a compass. It is most natural to represent the location of a point P on the surface of the Earth using the spherical coordinates ( , , ) r θ φ , where r is the distance from the center of the Earth, θ is the polar angle from the z-axis, with 0 θ π ≤ ≤ , and φ is the azimuthal angle in the xy plane, measured from the x-axis, with 0 2 φ π ≤ ≤ (See Figure 9.5.3.) With the distance fixed at , the radius of the Earth, the point P is parameterized by the two angles E r r = θ and φ . 9-25 Figure 9.5.2 Magnetic field of the Earth In practice, a location on Earth is described by two numbers – latitude and longitude. How are they related to θ and φ ? The latitude of a point, denoted as δ , is a measure of the elevation from the plane of the equator. Thus, it is related to θ (commonly referred to as the colatitude) by 90 δ θ = °− . Using this definition, the equator has latitude 0 , and the north and the south poles have latitude ° 90 ± ° , respectively. The longitude of a location is simply represented by the azimuthal angle φ in the spherical coordinates. Lines of constant longitude are generally referred to as meridians. The value of longitude depends on where the counting begins. For historical reasons, the meridian passing through the Royal Astronomical Observatory in Greenwich, UK, is chosen as the “prime meridian” with zero longitude. Figure 9.5.3 Locating a point P on the surface of the Earth using spherical coordinates. Let the z-axis be the Earth’s rotation axis, and the x-axis passes through the prime meridian. The corresponding magnetic dipole moment of the Earth can be written as 0 0 0 0 0 ˆ ˆ ˆ (sin cos sin sin cos ) ˆ ˆ ˆ ( 0.062 0.18 0.98 ) E E E µ θ φ θ φ θ µ = + + = − + − µ i j k i j k G (9.5.7) 9-26 where , and we have used 22 2 7.79 10 A m E µ = × ⋅ 0 0 ( , ) (169 ,109 ) θ φ = ° ° . The expression shows that E µ G has non-vanishing components in all three directions in the Cartesian coordinates. On the other hand, the location of MIT is for the latitude and 71 for the longitude ( north of the equator, and 71 west of the prime meridian), which means that 42 N ° W ° 42° ° 90 42 48 m θ = °− ° = °, and 360 71 289 m φ = °− ° = ° . 
Thus, the position of MIT can be described by the vector MIT ˆ ˆ ˆ (sin cos sin sin cos ) ˆ ˆ ˆ (0.24 0.70 0.67 ) E m m m m m E r r θ φ θ φ θ = + + = − + r i j k i j k G (9.5.8) The angle between E −µ G and is given by MIT r G 1 1 MIT MIT cos cos (0.80) 37 | || | E ME E θ − − ⎛ ⎞ − ⋅ = = ⎜ ⎟ − ⎝ ⎠ r µ r µ = ° G G G G (9.5.9) Note that the polar angle θ is defined as 1 ˆ ˆ cos ( ) θ − = ⋅ r k , the inverse of cosine of the dot product between a unit vector for the position, and a unit vector ˆ r ˆ +k in the positive z-direction, as indicated in Figure 9.6.1. Thus, if we measure the ratio of the radial to the polar component of the Earth’s magnetic field at MIT, the result would be 2cot37 2.65 r B Bθ = ° ≈ (9.5.10) Note that the positive radial (vertical) direction is chosen to point outward and the positive polar (horizontal) direction points towards the equator. Animation 9.4: Bar Magnet in the Earth’s Magnetic Field Figure 9.5.4 shows a bar magnet and compass placed on a table. The interaction between the magnetic field of the bar magnet and the magnetic field of the earth is illustrated by the field lines that extend out from the bar magnet. Field lines that emerge towards the edges of the magnet generally reconnect to the magnet near the opposite pole. However, field lines that emerge near the poles tend to wander off and reconnect to the magnetic field of the earth, which, in this case, is approximately a constant field coming at 60 degrees from the horizontal. Looking at the compass, one can see that a compass needle will always align itself in the direction of the local field. In this case, the local field is dominated by the bar magnet. Click and drag the mouse to rotate the scene. Control-click and drag to zoom in and out. 9-27 Figure 9.5.4 A bar magnet in Earth’s magnetic field 9.6 Magnetic Materials The introduction of material media into the study of magnetism has very different consequences as compared to the introduction of material media into the study of electrostatics. When we dealt with dielectric materials in electrostatics, their effect was always to reduce E G below what it would otherwise be, for a given amount of “free” electric charge. In contrast, when we deal with magnetic materials, their effect can be one of the following: (i) reduce B below what it would otherwise be, for the same amount of "free" electric current (diamagnetic materials); G (ii) increase B a little above what it would otherwise be (paramagnetic materials); G (iii) increase B a lot above what it would otherwise be (ferromagnetic materials). G Below we discuss how these effects arise. 9.6.1 Magnetization Magnetic materials consist of many permanent or induced magnetic dipoles. One of the concepts crucial to the understanding of magnetic materials is the average magnetic field produced by many magnetic dipoles which are all aligned. Suppose we have a piece of material in the form of a long cylinder with area and height L, and that it consists of N magnetic dipoles, each with magnetic dipole moment A µ G , spread uniformly throughout the volume of the cylinder, as shown in Figure 9.6.1. 9-28 Figure 9.6.1 A cylinder with N magnetic dipole moments We also assume that all of the magnetic dipole moments µ G are aligned with the axis of the cylinder. In the absence of any external magnetic field, what is the average magnetic field due to these dipoles alone? To answer this question, we note that each magnetic dipole has its own magnetic field associated with it. 
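The angle of Eq. (9.5.9) and the component ratio of Eq. (9.5.10) can be reproduced directly from the unit vectors given above; the short sketch below (ours) does the arithmetic.

```python
import numpy as np

# Unit vectors from Eqs. (9.5.7) and (9.5.8): the Earth's dipole direction
# (theta0, phi0) = (169 deg, 109 deg) and the position of MIT
# (colatitude 48 deg, longitude 289 deg).
theta0, phi0 = np.radians(169.0), np.radians(109.0)
mu_hat = np.array([np.sin(theta0) * np.cos(phi0),
                   np.sin(theta0) * np.sin(phi0),
                   np.cos(theta0)])
theta_m, phi_m = np.radians(48.0), np.radians(289.0)
r_hat = np.array([np.sin(theta_m) * np.cos(phi_m),
                  np.sin(theta_m) * np.sin(phi_m),
                  np.cos(theta_m)])

theta_ME = np.degrees(np.arccos(np.dot(-mu_hat, r_hat)))
print(f"angle between -mu_E and r_MIT: {theta_ME:.1f} deg")          # ~37, Eq. (9.5.9)
print(f"B_r / B_theta = {2.0 / np.tan(np.radians(theta_ME)):.2f}")   # ~2.65, Eq. (9.5.10)
```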
Animation 9.4: Bar Magnet in the Earth's Magnetic Field

Figure 9.5.4 shows a bar magnet and compass placed on a table. The interaction between the magnetic field of the bar magnet and the magnetic field of the earth is illustrated by the field lines that extend out from the bar magnet. Field lines that emerge towards the edges of the magnet generally reconnect to the magnet near the opposite pole. However, field lines that emerge near the poles tend to wander off and reconnect to the magnetic field of the earth, which, in this case, is approximately a constant field coming in at 60 degrees from the horizontal. Looking at the compass, one can see that a compass needle will always align itself in the direction of the local field. In this case, the local field is dominated by the bar magnet. Click and drag the mouse to rotate the scene. Control-click and drag to zoom in and out.

Figure 9.5.4 A bar magnet in Earth's magnetic field

9.6 Magnetic Materials

The introduction of material media into the study of magnetism has very different consequences as compared to the introduction of material media into the study of electrostatics. When we dealt with dielectric materials in electrostatics, their effect was always to reduce $\vec{E}$ below what it would otherwise be, for a given amount of "free" electric charge. In contrast, when we deal with magnetic materials, their effect can be any one of the following:

(i) reduce $\vec{B}$ below what it would otherwise be, for the same amount of "free" electric current (diamagnetic materials);

(ii) increase $\vec{B}$ a little above what it would otherwise be (paramagnetic materials);

(iii) increase $\vec{B}$ a lot above what it would otherwise be (ferromagnetic materials).

Below we discuss how these effects arise.

9.6.1 Magnetization

Magnetic materials consist of many permanent or induced magnetic dipoles. One of the concepts crucial to the understanding of magnetic materials is the average magnetic field produced by many magnetic dipoles which are all aligned. Suppose we have a piece of material in the form of a long cylinder with cross-sectional area A and height L, and that it consists of N magnetic dipoles, each with magnetic dipole moment $\vec{\mu}$, spread uniformly throughout the volume of the cylinder, as shown in Figure 9.6.1.

Figure 9.6.1 A cylinder with N magnetic dipole moments

We also assume that all of the magnetic dipole moments $\vec{\mu}$ are aligned with the axis of the cylinder. In the absence of any external magnetic field, what is the average magnetic field due to these dipoles alone? To answer this question, we note that each magnetic dipole has its own magnetic field associated with it.

Let's define the magnetization vector $\vec{M}$ to be the net magnetic dipole moment per unit volume:

$\vec{M} = \frac{1}{V}\sum_i \vec{\mu}_i$  (9.6.1)

where V is the volume. In the case of our cylinder, where all the dipoles are aligned, the magnitude of $\vec{M}$ is simply $M = N\mu/AL$.

Now, what is the average magnetic field produced by all the dipoles in the cylinder?

Figure 9.6.2 (a) Top view of the cylinder containing magnetic dipole moments. (b) The equivalent current.

Figure 9.6.2(a) depicts the small current loops associated with the dipole moments and the direction of the currents, as seen from above. We see that in the interior, currents flowing in a given direction are cancelled out by currents flowing in the opposite direction in neighboring loops. The only place where cancellation does not take place is near the edge of the cylinder, where there are no adjacent loops further out. Thus, the average current in the interior of the cylinder vanishes, whereas the sides of the cylinder appear to carry a net current. The equivalent situation is shown in Figure 9.6.2(b), where there is an equivalent current $I_{\rm eq}$ on the sides.

The functional form of $I_{\rm eq}$ may be deduced by requiring that the magnetic dipole moment produced by $I_{\rm eq}$ be the same as the total magnetic dipole moment of the system. This condition gives

$I_{\rm eq}\,A = N\mu$  (9.6.2)

or

$I_{\rm eq} = \frac{N\mu}{A}$  (9.6.3)

Next, let's calculate the magnetic field produced by $I_{\rm eq}$. With $I_{\rm eq}$ running on the sides, the equivalent configuration is identical to a solenoid carrying a surface current (current per unit length) K. The two quantities are related by

$K = \frac{I_{\rm eq}}{L} = \frac{N\mu}{AL} = M$  (9.6.4)

Thus, we see that the surface current K is equal to the magnetization M, the average magnetic dipole moment per unit volume. The average magnetic field produced by the equivalent current system is given by (see Section 9.4)

$B_M = \mu_0 K = \mu_0 M$  (9.6.5)

Since the direction of this magnetic field is the same as that of $\vec{M}$, the above expression may be written in vector notation as

$\vec{B}_M = \mu_0\vec{M}$  (9.6.6)

This is exactly opposite from the situation with electric dipoles, in which the average electric field is anti-parallel to the direction of the electric dipoles themselves. The reason is that in the region interior to the current loop of the dipole, the magnetic field is in the same direction as the magnetic dipole vector. Therefore, it is not surprising that after a large-scale averaging, the average magnetic field also turns out to be parallel to the average magnetic dipole moment per unit volume.

Notice that the magnetic field in Eq. (9.6.6) is the average field due to all the dipoles. A very different field is observed if we go close to any one of these little dipoles.
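As a quick numerical illustration of Eqs. (9.6.1)–(9.6.6), the sketch below uses made-up values for N, μ, A and L (they are not taken from the text) and simply confirms that the surface current per unit length equals the magnetization and that the average field is $\mu_0 M$.

```python
import math

# Hypothetical sample values (illustrative only): a cylinder of volume A*L
# containing N aligned dipoles, each of moment mu.
N = 1.0e23          # number of dipoles
mu = 2.0e-23        # dipole moment of each, A m^2
A = 1.0e-4          # cross-sectional area, m^2
L = 0.10            # height, m
mu0 = 4.0e-7 * math.pi

M = N * mu / (A * L)      # magnetization, M = N mu / (A L), Eq. (9.6.1)
I_eq = N * mu / A         # equivalent edge current, Eq. (9.6.3)
K = I_eq / L              # surface current per unit length, Eq. (9.6.4); equals M
B_M = mu0 * M             # average field of the dipoles, Eq. (9.6.5)

print(M, K, B_M)          # K == M, and B_M = mu0 * M
```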
Let's now examine the properties of the different classes of magnetic materials.

9.6.2 Paramagnetism

The atoms or molecules comprising paramagnetic materials have a permanent magnetic dipole moment. Left to themselves, the permanent magnetic dipoles in a paramagnetic material never line up spontaneously; in the absence of any applied external magnetic field they are randomly aligned. Thus $\vec{M} = \vec{0}$, and the average magnetic field $\vec{B}_M$ is also zero. However, when we place a paramagnetic material in an external field $\vec{B}_0$, the dipoles experience a torque $\vec{\tau} = \vec{\mu}\times\vec{B}_0$ that tends to align $\vec{\mu}$ with $\vec{B}_0$, thereby producing a net magnetization $\vec{M}$ parallel to $\vec{B}_0$. Since $\vec{B}_M$ is parallel to $\vec{B}_0$, it will tend to enhance it. The total magnetic field $\vec{B}$ is the sum of these two fields:

$\vec{B} = \vec{B}_0 + \vec{B}_M = \vec{B}_0 + \mu_0\vec{M}$  (9.6.7)

Note how different this is from the case of dielectric materials. In both cases, the torque on the dipoles causes alignment of the dipole vector parallel to the external field. However, in the paramagnetic case that alignment enhances the external magnetic field, whereas in the dielectric case it reduces the external electric field.

In most paramagnetic substances, the magnetization $\vec{M}$ is not only in the same direction as $\vec{B}_0$, but also linearly proportional to it. This is plausible because without the external field there would be no alignment of dipoles and hence no magnetization. The linear relation between $\vec{M}$ and $\vec{B}_0$ is expressed as

$\vec{M} = \chi_m\frac{\vec{B}_0}{\mu_0}$  (9.6.8)

where $\chi_m$ is a dimensionless quantity called the magnetic susceptibility. Equation (9.6.7) can then be written as

$\vec{B} = (1 + \chi_m)\vec{B}_0 = \kappa_m\vec{B}_0$  (9.6.9)

where

$\kappa_m = 1 + \chi_m$  (9.6.10)

is called the relative permeability of the material. For paramagnetic substances $\kappa_m > 1$, or equivalently $\chi_m > 0$, although $\chi_m$ is usually on the order of $10^{-6}$ to $10^{-3}$. The magnetic permeability $\mu_m$ of a material may also be defined as

$\mu_m = (1 + \chi_m)\mu_0 = \kappa_m\mu_0$  (9.6.11)

Paramagnetic materials have $\mu_m > \mu_0$.

9.6.3 Diamagnetism

In the case of magnetic materials where there are no permanent magnetic dipoles, the presence of an external field $\vec{B}_0$ will induce magnetic dipole moments in the atoms or molecules. However, these induced magnetic dipoles are anti-parallel to $\vec{B}_0$, leading to a magnetization $\vec{M}$ and average field $\vec{B}_M$ anti-parallel to $\vec{B}_0$, and therefore a reduction in the total magnetic field strength. For diamagnetic materials we can still define the magnetic permeability as in Eq. (9.6.11), although now $\kappa_m < 1$, or $\chi_m < 0$, with $\chi_m$ typically on the order of $-10^{-5}$ to $-10^{-9}$. Diamagnetic materials have $\mu_m < \mu_0$.

9.6.4 Ferromagnetism

In ferromagnetic materials there is a strong interaction between neighboring atomic dipole moments. Ferromagnetic materials are made up of small patches called domains, as illustrated in Figure 9.6.3(a). An externally applied field $\vec{B}_0$ will tend to line up those magnetic dipoles parallel to the external field, as shown in Figure 9.6.3(b). The strong interaction between neighboring atomic dipole moments causes a much stronger alignment of the magnetic dipoles than in paramagnetic materials.

Figure 9.6.3 (a) Ferromagnetic domains. (b) Alignment of magnetic moments in the direction of the external field $\vec{B}_0$.

The enhancement of the applied external field can be considerable, with the total magnetic field inside a ferromagnet $10^3$ or $10^4$ times greater than the applied field. The permeability $\kappa_m$ of a ferromagnetic material is not a constant, since neither the total field $\vec{B}$ nor the magnetization $\vec{M}$ increases linearly with $\vec{B}_0$. In fact, the relationship between $\vec{M}$ and $\vec{B}_0$ is not unique, but depends on the previous history of the material. This phenomenon is known as hysteresis. The variation of $\vec{M}$ as a function of the externally applied field $\vec{B}_0$ is shown in Figure 9.6.4. The loop abcdef is a hysteresis curve.

Figure 9.6.4 A hysteresis curve.

Moreover, in ferromagnets the strong interaction between neighboring atomic dipole moments can keep those dipole moments aligned even when the external magnetic field is reduced to zero.
These aligned dipoles can thus produce a strong magnetic field all by themselves, without the need for an external magnetic field. This is the origin of permanent magnets. To see how strong such magnets can be, consider the fact that magnetic dipole moments of atoms typically have magnitudes of the order of $10^{-23}\ \text{A}\cdot\text{m}^2$. Typical atomic densities are $10^{29}$ atoms/m³. If all these dipole moments are aligned, then we would get a magnetization of order

$M \sim (10^{-23}\ \text{A}\cdot\text{m}^2)(10^{29}\ \text{atoms/m}^3) \sim 10^{6}\ \text{A/m}$  (9.6.12)

This magnetization corresponds to values of $\vec{B}_M = \mu_0\vec{M}$ of order 1 tesla, or 10,000 gauss, just due to the atomic currents alone. This is how we get permanent magnets with fields of order 2200 gauss.

9.7 Summary

• The Biot-Savart law states that the magnetic field $d\vec{B}$ at a point due to a length element $d\vec{s}$ carrying a steady current I and located at $\vec{r}$ away is given by

$d\vec{B} = \frac{\mu_0 I}{4\pi}\frac{d\vec{s}\times\hat{r}}{r^2}$

where $r = |\vec{r}|$ and $\mu_0 = 4\pi\times 10^{-7}\ \text{T}\cdot\text{m/A}$ is the permeability of free space.

• The magnitude of the magnetic field at a distance r away from an infinitely long straight wire carrying a current I is

$B = \frac{\mu_0 I}{2\pi r}$

• The magnitude of the magnetic force $F_B$ between two straight wires of length $\ell$ carrying steady currents $I_1$ and $I_2$ and separated by a distance r is

$F_B = \frac{\mu_0 I_1 I_2 \ell}{2\pi r}$

• Ampere's law states that the line integral of $\vec{B}\cdot d\vec{s}$ around any closed loop is proportional to the total steady current passing through any surface that is bounded by the closed loop:

$\oint\vec{B}\cdot d\vec{s} = \mu_0 I_{\rm enc}$

• The magnetic field inside a toroid which has N closely spaced turns of wire carrying a current I is given by

$B = \frac{\mu_0 N I}{2\pi r}$

where r is the distance from the center of the toroid.

• The magnetic field inside a solenoid which has N closely spaced turns of wire carrying current I in a length l is given by

$B = \mu_0\frac{N}{l}I = \mu_0 n I$

where n is the number of turns per unit length.

• The properties of magnetic materials are as follows:

Diamagnetic: magnetic susceptibility $\chi_m \sim -10^{-5}$ to $-10^{-9}$; relative permeability $\kappa_m = 1 + \chi_m < 1$; magnetic permeability $\mu_m = \kappa_m\mu_0 < \mu_0$.
Paramagnetic: $\chi_m \sim 10^{-5}$ to $10^{-3}$; $\kappa_m > 1$; $\mu_m > \mu_0$.
Ferromagnetic: $\chi_m \gg 1$; $\kappa_m \gg 1$; $\mu_m \gg \mu_0$.
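The closed-form results collected in this summary translate directly into one-line helper functions. The Python sketch below is only an illustration; the function names and sample numbers are ours, chosen for convenience rather than taken from the text.

```python
import math

mu0 = 4.0e-7 * math.pi   # permeability of free space, T m / A

def B_wire(I, r):
    """Field of an infinitely long straight wire at distance r."""
    return mu0 * I / (2.0 * math.pi * r)

def F_two_wires(I1, I2, ell, r):
    """Magnitude of the force between two parallel wires of length ell separated by r."""
    return mu0 * I1 * I2 * ell / (2.0 * math.pi * r)

def B_solenoid(N, I, length):
    """Field inside an ideal solenoid with N turns over the given length."""
    return mu0 * N * I / length

def B_toroid(N, I, r):
    """Field inside a toroid with N turns, at distance r from its center."""
    return mu0 * N * I / (2.0 * math.pi * r)

# Illustrative numbers only:
print(B_wire(10.0, 0.01))                # 10 A wire, 1 cm away
print(F_two_wires(1.0, 1.0, 1.0, 1.0))   # ~2e-7 N, two 1 A wires 1 m apart and 1 m long
print(B_solenoid(1000, 2.0, 0.5))
print(B_toroid(500, 2.0, 0.15))
```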
9.8 Appendix 1: Magnetic Field off the Symmetry Axis of a Current Loop

In Example 9.2 we calculated the magnetic field due to a circular loop of radius R lying in the xy plane and carrying a steady current I, at a point P along the axis of symmetry. Let's see how the same technique can be extended to calculating the field at a point off the axis of symmetry in the yz plane.

Figure 9.8.1 Calculating the magnetic field off the symmetry axis of a current loop.

Again, as shown in Example 9.1, the differential current element is

$I\,d\vec{s} = IR\,d\phi'\left(-\sin\phi'\,\hat{i} + \cos\phi'\,\hat{j}\right)$

and its position is described by $\vec{r}' = R(\cos\phi'\,\hat{i} + \sin\phi'\,\hat{j})$. On the other hand, the field point P now lies in the yz plane with $\vec{r}_P = y\,\hat{j} + z\,\hat{k}$, as shown in Figure 9.8.1. The corresponding relative position vector is

$\vec{r} = \vec{r}_P - \vec{r}' = -R\cos\phi'\,\hat{i} + (y - R\sin\phi')\,\hat{j} + z\,\hat{k}$  (9.8.1)

with a magnitude

$r = |\vec{r}| = \sqrt{(-R\cos\phi')^2 + (y - R\sin\phi')^2 + z^2} = \sqrt{R^2 + y^2 + z^2 - 2yR\sin\phi'}$  (9.8.2)

and the unit vector $\hat{r} = \vec{r}/r = (\vec{r}_P - \vec{r}')/|\vec{r}_P - \vec{r}'|$ pointing from $I\,d\vec{s}$ to P.

The cross product $d\vec{s}\times\vec{r}$ can be simplified as

$d\vec{s}\times\vec{r} = R\,d\phi'\left(-\sin\phi'\,\hat{i} + \cos\phi'\,\hat{j}\right)\times\left[-R\cos\phi'\,\hat{i} + (y - R\sin\phi')\,\hat{j} + z\,\hat{k}\right] = R\,d\phi'\left[z\cos\phi'\,\hat{i} + z\sin\phi'\,\hat{j} + (R - y\sin\phi')\,\hat{k}\right]$  (9.8.3)

Using the Biot-Savart law, the contribution of this current element to the magnetic field at P is

$d\vec{B} = \frac{\mu_0 I}{4\pi}\frac{d\vec{s}\times\hat{r}}{r^2} = \frac{\mu_0 I}{4\pi}\frac{d\vec{s}\times\vec{r}}{r^3} = \frac{\mu_0 I R}{4\pi}\frac{\left[z\cos\phi'\,\hat{i} + z\sin\phi'\,\hat{j} + (R - y\sin\phi')\,\hat{k}\right]d\phi'}{\left(R^2 + y^2 + z^2 - 2yR\sin\phi'\right)^{3/2}}$  (9.8.4)

Thus, the magnetic field at P is

$\vec{B}(0,y,z) = \frac{\mu_0 I R}{4\pi}\int_0^{2\pi}\frac{z\cos\phi'\,\hat{i} + z\sin\phi'\,\hat{j} + (R - y\sin\phi')\,\hat{k}}{\left(R^2 + y^2 + z^2 - 2yR\sin\phi'\right)^{3/2}}\,d\phi'$  (9.8.5)

The x-component of $\vec{B}$ can readily be shown to vanish,

$B_x = \frac{\mu_0 I R z}{4\pi}\int_0^{2\pi}\frac{\cos\phi'\,d\phi'}{\left(R^2 + y^2 + z^2 - 2yR\sin\phi'\right)^{3/2}} = 0$  (9.8.6)

by making the change of variable $w = R^2 + y^2 + z^2 - 2yR\sin\phi'$, followed by a straightforward integration. One may also invoke symmetry arguments to verify that $B_x$ must vanish; namely, the contribution at $\phi'$ is cancelled by the contribution at $\pi - \phi'$. On the other hand, the y and z components of $\vec{B}$,

$B_y = \frac{\mu_0 I R z}{4\pi}\int_0^{2\pi}\frac{\sin\phi'\,d\phi'}{\left(R^2 + y^2 + z^2 - 2yR\sin\phi'\right)^{3/2}}$  (9.8.7)

and

$B_z = \frac{\mu_0 I R}{4\pi}\int_0^{2\pi}\frac{(R - y\sin\phi')\,d\phi'}{\left(R^2 + y^2 + z^2 - 2yR\sin\phi'\right)^{3/2}}$  (9.8.8)

involve elliptic integrals which can be evaluated numerically.

In the limit $y = 0$, the field point P is located along the z-axis, and we recover the results obtained in Example 9.2:

$B_y = \frac{\mu_0 I R z}{4\pi\left(R^2 + z^2\right)^{3/2}}\int_0^{2\pi}\sin\phi'\,d\phi' = -\frac{\mu_0 I R z}{4\pi\left(R^2 + z^2\right)^{3/2}}\cos\phi'\Big|_0^{2\pi} = 0$  (9.8.9)

and

$B_z = \frac{\mu_0 I R^2}{4\pi\left(R^2 + z^2\right)^{3/2}}\int_0^{2\pi}d\phi' = \frac{2\pi\mu_0 I R^2}{4\pi\left(R^2 + z^2\right)^{3/2}} = \frac{\mu_0 I R^2}{2\left(R^2 + z^2\right)^{3/2}}$  (9.8.10)

Now let's consider the "point-dipole" limit where $R \ll (y^2 + z^2)^{1/2} = r$, i.e., the characteristic dimension of the current source is much smaller than the distance at which the magnetic field is to be measured. In this limit, the denominator in the integrand can be expanded as

$\left(R^2 + y^2 + z^2 - 2yR\sin\phi'\right)^{-3/2} = \frac{1}{r^3}\left[1 + \frac{R^2 - 2yR\sin\phi'}{r^2}\right]^{-3/2} = \frac{1}{r^3}\left[1 - \frac{3}{2}\left(\frac{R^2 - 2yR\sin\phi'}{r^2}\right) + \cdots\right]$  (9.8.11)

This leads to

$B_y \approx \frac{\mu_0 I R z}{4\pi r^3}\int_0^{2\pi}\left[1 - \frac{3}{2}\left(\frac{R^2 - 2yR\sin\phi'}{r^2}\right)\right]\sin\phi'\,d\phi' = \frac{3\mu_0 I R^2 y z}{4\pi r^5}\int_0^{2\pi}\sin^2\phi'\,d\phi' = \frac{3\mu_0 I \pi R^2 y z}{4\pi r^5}$  (9.8.12)

and

$B_z \approx \frac{\mu_0 I R}{4\pi r^3}\int_0^{2\pi}\left[1 - \frac{3}{2}\left(\frac{R^2 - 2yR\sin\phi'}{r^2}\right)\right](R - y\sin\phi')\,d\phi' = \frac{\mu_0 I R}{4\pi r^3}\int_0^{2\pi}\left[R - \frac{3R^3}{2r^2} + \frac{9R^2 y}{2r^2}\sin\phi' - y\sin\phi' - \frac{3Ry^2}{r^2}\sin^2\phi'\right]d\phi' = \frac{\mu_0 I \pi R^2}{4\pi r^3}\left(2 - \frac{3R^2}{r^2} - \frac{3y^2}{r^2}\right) \approx \frac{\mu_0 I \pi R^2}{4\pi}\frac{1}{r^3}\left(2 - \frac{3y^2}{r^2}\right) + \text{higher order terms}$  (9.8.13)

The quantity $I(\pi R^2)$ may be identified as the magnetic dipole moment $\mu = IA$, where $A = \pi R^2$ is the area of the loop. Using spherical coordinates where $y = r\sin\theta$ and $z = r\cos\theta$, the above expressions may be rewritten as

$B_y = \frac{\mu_0 (I\pi R^2)}{4\pi}\frac{3(r\sin\theta)(r\cos\theta)}{r^5} = \frac{3\mu_0\mu}{4\pi}\frac{\sin\theta\cos\theta}{r^3}$  (9.8.14)

and

$B_z = \frac{\mu_0 (I\pi R^2)}{4\pi r^3}\left(2 - 3\sin^2\theta\right) = \frac{\mu_0\mu}{4\pi r^3}\left(2 - 3\sin^2\theta\right) = \frac{\mu_0\mu}{4\pi r^3}\left(3\cos^2\theta - 1\right)$  (9.8.15)

Thus, we see that the magnetic field at a point $\vec{r}$ due to a current ring of radius R may be approximated by that of a small magnetic dipole moment placed at the origin (Figure 9.8.2).
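Equations (9.8.7)–(9.8.8) are easy to evaluate numerically, which gives a direct check of the point-dipole approximation (9.8.14)–(9.8.15). The sketch below assumes NumPy and SciPy are available; the loop parameters and field point are arbitrary illustrative values, not taken from the text.

```python
import numpy as np
from scipy.integrate import quad

mu0 = 4e-7 * np.pi
I, R = 1.0, 0.05             # hypothetical loop: 1 A, 5 cm radius
y, z = 0.4, 0.6              # field point in the yz plane, far from the loop (r >> R)
r = np.hypot(y, z)

def denom(phi):
    return (R**2 + y**2 + z**2 - 2*y*R*np.sin(phi))**1.5

# "Exact" components, Eqs. (9.8.7) and (9.8.8), by numerical quadrature
By = mu0*I*R*z/(4*np.pi) * quad(lambda p: np.sin(p)/denom(p), 0, 2*np.pi)[0]
Bz = mu0*I*R  /(4*np.pi) * quad(lambda p: (R - y*np.sin(p))/denom(p), 0, 2*np.pi)[0]

# Point-dipole approximation, Eqs. (9.8.14)-(9.8.15), with mu = I*pi*R^2,
# sin(theta) = y/r and cos(theta) = z/r
mu = I * np.pi * R**2
By_dip = mu0*mu/(4*np.pi) * 3*(y/r)*(z/r) / r**3
Bz_dip = mu0*mu/(4*np.pi) * (3*(z/r)**2 - 1) / r**3

print(By, By_dip)   # agree to within roughly (R/r)^2
print(Bz, Bz_dip)
```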
Figure 9.8.2 Magnetic dipole moment $\vec{\mu} = \mu\,\hat{k}$

The magnetic field lines due to a current loop and a dipole moment (small bar magnet) are depicted in Figure 9.8.3.

Figure 9.8.3 Magnetic field lines due to (a) a current loop, and (b) a small bar magnet.

The magnetic field at P can also be written in spherical coordinates,

$\vec{B} = B_r\,\hat{r} + B_\theta\,\hat{\theta}$  (9.8.16)

The spherical components $B_r$ and $B_\theta$ are related to the Cartesian components $B_y$ and $B_z$ by

$B_r = B_y\sin\theta + B_z\cos\theta, \qquad B_\theta = B_y\cos\theta - B_z\sin\theta$  (9.8.17)

In addition, we have, for the unit vectors,

$\hat{r} = \sin\theta\,\hat{j} + \cos\theta\,\hat{k}, \qquad \hat{\theta} = \cos\theta\,\hat{j} - \sin\theta\,\hat{k}$  (9.8.18)

Using the above relations, the spherical components may be written as

$B_r = \frac{\mu_0 I R^2\cos\theta}{4\pi}\int_0^{2\pi}\frac{d\phi'}{\left(R^2 + r^2 - 2rR\sin\theta\sin\phi'\right)^{3/2}}$  (9.8.19)

and

$B_\theta(r,\theta) = \frac{\mu_0 I R}{4\pi}\int_0^{2\pi}\frac{(r\sin\phi' - R\sin\theta)\,d\phi'}{\left(R^2 + r^2 - 2rR\sin\theta\sin\phi'\right)^{3/2}}$  (9.8.20)

In the limit where $R \ll r$, we obtain

$B_r \approx \frac{\mu_0 I R^2\cos\theta}{4\pi r^3}\int_0^{2\pi}d\phi' = \frac{2\pi\mu_0 I R^2\cos\theta}{4\pi r^3} = \frac{\mu_0}{4\pi}\frac{2\mu\cos\theta}{r^3}$  (9.8.21)

and

$B_\theta \approx \frac{\mu_0 I R}{4\pi r^3}\int_0^{2\pi}\left[1 - \frac{3}{2}\left(\frac{R^2 - 2rR\sin\theta\sin\phi'}{r^2}\right)\right](r\sin\phi' - R\sin\theta)\,d\phi' \approx \frac{\mu_0 I R}{4\pi r^3}\left(-2\pi R\sin\theta + 3\pi R\sin\theta\right) = \frac{\mu_0 (I\pi R^2)\sin\theta}{4\pi r^3} = \frac{\mu_0\mu\sin\theta}{4\pi r^3}$  (9.8.22)

9.9 Appendix 2: Helmholtz Coils

Consider two N-turn circular coils of radius R, each perpendicular to the axis of symmetry, with their centers located at $z = \pm l/2$. There is a steady current I flowing in the same direction around each coil, as shown in Figure 9.9.1. Let's find the magnetic field $\vec{B}$ on the axis at a distance z from the center of one coil.

Figure 9.9.1 Helmholtz coils

Using the result shown in Example 9.2 for a single coil and applying the superposition principle, the magnetic field at $P(z,0)$ (a point at a distance $z - l/2$ from one center and $z + l/2$ from the other) due to the two coils can be obtained as

$B_z = B_{\rm top} + B_{\rm bottom} = \frac{\mu_0 N I R^2}{2}\left[\frac{1}{\left[(z - l/2)^2 + R^2\right]^{3/2}} + \frac{1}{\left[(z + l/2)^2 + R^2\right]^{3/2}}\right]$  (9.9.1)

A plot of $B_z/B_0$, with $B_0 = \frac{\mu_0 N I}{(5/4)^{3/2} R}$ being the field strength at $z = 0$ when $l = R$, is depicted in Figure 9.9.2.

Figure 9.9.2 Magnetic field as a function of z/R.

Let's analyze the properties of $B_z$ in more detail. Differentiating $B_z$ with respect to z, we obtain

$B_z'(z) = \frac{dB_z}{dz} = -\frac{3\mu_0 N I R^2}{2}\left[\frac{z - l/2}{\left[(z - l/2)^2 + R^2\right]^{5/2}} + \frac{z + l/2}{\left[(z + l/2)^2 + R^2\right]^{5/2}}\right]$  (9.9.2)

One may readily show that at the midpoint, $z = 0$, the derivative vanishes:

$\left.\frac{dB_z}{dz}\right|_{z=0} = 0$  (9.9.3)

Straightforward differentiation yields

$B_z''(z) = \frac{d^2 B_z}{dz^2} = \frac{\mu_0 N I R^2}{2}\left\{-\frac{3}{\left[(z - l/2)^2 + R^2\right]^{5/2}} + \frac{15(z - l/2)^2}{\left[(z - l/2)^2 + R^2\right]^{7/2}} - \frac{3}{\left[(z + l/2)^2 + R^2\right]^{5/2}} + \frac{15(z + l/2)^2}{\left[(z + l/2)^2 + R^2\right]^{7/2}}\right\}$  (9.9.4)

At the midpoint $z = 0$, the above expression simplifies to

$B_z''(0) = \frac{\mu_0 N I R^2}{2}\left\{-\frac{6}{\left[(l/2)^2 + R^2\right]^{5/2}} + \frac{15 l^2}{2\left[(l/2)^2 + R^2\right]^{7/2}}\right\} = -\frac{6\mu_0 N I R^2\left(R^2 - l^2\right)}{2\left[(l/2)^2 + R^2\right]^{7/2}}$  (9.9.5)

Thus, the condition that the second derivative of $B_z$ vanishes at $z = 0$ is $l = R$. That is, the distance of separation between the two coils is equal to the radius of the coils. A configuration with $l = R$ is known as Helmholtz coils.
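The uniformity of the Helmholtz configuration is easy to confirm numerically. The sketch below evaluates Eq. (9.9.1) for a hypothetical pair of coils (the values of N, I and R are ours, for illustration only) and checks that the first and second derivatives essentially vanish at the midpoint when l = R.

```python
import numpy as np

mu0 = 4e-7 * np.pi
N, I, R = 100, 1.0, 0.10     # hypothetical coils: 100 turns, 1 A, radius 10 cm
l = R                        # Helmholtz condition l = R

def Bz(z):
    """On-axis field of the two coils, Eq. (9.9.1)."""
    return mu0*N*I*R**2/2 * (((z - l/2)**2 + R**2)**-1.5 +
                             ((z + l/2)**2 + R**2)**-1.5)

B0 = mu0 * N * I / ((5.0/4.0)**1.5 * R)      # field at z = 0 for l = R
print(Bz(0.0), B0)                           # identical

# Numerical first and second derivatives at z = 0 (both nearly vanish for l = R)
h = 1e-3
d1 = (Bz(h) - Bz(-h)) / (2*h)
d2 = (Bz(h) - 2*Bz(0.0) + Bz(-h)) / h**2
print(d1, d2)                                # both very small compared to B0/R and B0/R**2

# The field stays uniform to better than about 1% out to |z| of roughly 0.3 R
print(Bz(0.3*R) / Bz(0.0))
```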
For small z, we may make a Taylor-series expansion of $B_z(z)$ about $z = 0$:

$B_z(z) = B_z(0) + B_z'(0)\,z + \frac{1}{2!}B_z''(0)\,z^2 + \cdots$  (9.9.6)

The fact that the first two derivatives vanish at $z = 0$ indicates that the magnetic field is fairly uniform in the small-z region. One may even show that the third derivative $B_z'''(0)$ vanishes at $z = 0$ as well.

Recall that the force experienced by a dipole in a magnetic field is $\vec{F}_B = \nabla(\vec{\mu}\cdot\vec{B})$. If we place a magnetic dipole $\vec{\mu} = \mu_z\,\hat{k}$ at $z = 0$, the magnetic force acting on the dipole is

$\vec{F}_B = \nabla(\mu_z B_z) = \mu_z\left(\frac{dB_z}{dz}\right)\hat{k}$  (9.9.7)

which is expected to be very small since the magnetic field is nearly uniform there.

Animation 9.5: Magnetic Field of the Helmholtz Coils

The animation in Figure 9.9.3(a) shows the magnetic field of the Helmholtz coils. In this configuration the currents in the top and bottom coils flow in the same direction, with their dipole moments aligned. The magnetic fields from the two coils add up to create a net field that is nearly uniform at the center of the coils. Since the distance between the coils is equal to the radius of the coils and remains unchanged, the force of attraction between them creates a tension, and is illustrated by field lines stretching out to enclose both coils. When the distance between the coils is not fixed, as in the animation depicted in Figure 9.9.3(b), the two coils move toward each other due to their force of attraction. In this animation, the top loop has only half the current of the bottom loop. The field configuration is shown using the "iron filings" representation.

Figure 9.9.3 (a) Magnetic field of the Helmholtz coils, where the distance between the coils is equal to the radius of the coil. (b) Two co-axial wire loops carrying current in the same sense are attracted to each other.

Next, let's consider the case where the currents in the loops flow in opposite directions, as shown in Figure 9.9.4.

Figure 9.9.4 Two circular loops carrying currents in the opposite directions.

Again, by the superposition principle, the magnetic field at a point $P(0,0,z)$ with $z > 0$ is

$B_z = B_{1z} + B_{2z} = \frac{\mu_0 N I R^2}{2}\left[\frac{1}{\left[(z - l/2)^2 + R^2\right]^{3/2}} - \frac{1}{\left[(z + l/2)^2 + R^2\right]^{3/2}}\right]$  (9.9.8)

A plot of $B_z/B_0$, with $B_0 = \mu_0 N I/2R$ and $l = R$, is depicted in Figure 9.9.5.

Figure 9.9.5 Magnetic field as a function of z/R.

Differentiating $B_z$ with respect to z, we obtain

$B_z'(z) = \frac{dB_z}{dz} = -\frac{3\mu_0 N I R^2}{2}\left[\frac{z - l/2}{\left[(z - l/2)^2 + R^2\right]^{5/2}} - \frac{z + l/2}{\left[(z + l/2)^2 + R^2\right]^{5/2}}\right]$  (9.9.9)

At the midpoint, $z = 0$, we have

$B_z'(0) = \left.\frac{dB_z}{dz}\right|_{z=0} = \frac{3\mu_0 N I R^2\, l}{2\left[(l/2)^2 + R^2\right]^{5/2}} \neq 0$  (9.9.10)

Thus, a magnetic dipole $\vec{\mu} = \mu_z\,\hat{k}$ placed at $z = 0$ will experience a net force:

$\vec{F}_B = \nabla(\vec{\mu}\cdot\vec{B}) = \nabla(\mu_z B_z) = \mu_z\frac{dB_z(0)}{dz}\,\hat{k} = \frac{3\mu_z\mu_0 N I R^2\, l}{2\left[(l/2)^2 + R^2\right]^{5/2}}\,\hat{k}$  (9.9.11)

For $l = R$, the above expression simplifies to

$\vec{F}_B = \frac{3\mu_z\mu_0 N I}{2(5/4)^{5/2} R^2}\,\hat{k}$  (9.9.12)
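For the opposite-current configuration, Eq. (9.9.10) and the force in Eqs. (9.9.11)–(9.9.12) can be cross-checked numerically. The sketch below reuses the same hypothetical coil parameters as in the previous sketch; the dipole moment `mu_z` is likewise an arbitrary illustrative value.

```python
import numpy as np

mu0 = 4e-7 * np.pi
N, I, R = 100, 1.0, 0.10      # same hypothetical coils as in the previous sketch
l = R
mu_z = 0.05                   # hypothetical dipole moment at the midpoint, A m^2

def Bz_opposite(z):
    """On-axis field when the coil currents are opposite, Eq. (9.9.8)."""
    return mu0*N*I*R**2/2 * (((z - l/2)**2 + R**2)**-1.5 -
                             ((z + l/2)**2 + R**2)**-1.5)

# dBz/dz at z = 0: analytic value from Eq. (9.9.10) vs. a numerical derivative
dBdz_analytic = 1.5 * mu0*N*I*R**2 * l / ((l/2)**2 + R**2)**2.5
h = 1e-5
dBdz_numeric = (Bz_opposite(h) - Bz_opposite(-h)) / (2*h)
print(dBdz_analytic, dBdz_numeric)           # agree

# Force on the dipole, Eqs. (9.9.11)-(9.9.12); for l = R both forms give the same number
F = mu_z * dBdz_analytic
F_lR = 3*mu_z*mu0*N*I / (2*(5.0/4.0)**2.5 * R**2)
print(F, F_lR)
```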
Animation 9.6: Magnetic Field of Two Coils Carrying Opposite Currents

The animation depicted in Figure 9.9.6 shows the magnetic field of two coils like the Helmholtz coils, but with the currents in the top and bottom coils flowing in opposite directions. In this configuration, the magnetic dipole moments associated with the two coils are anti-parallel.

Figure 9.9.6 (a) Magnetic field due to coils carrying currents in the opposite directions. (b) Two co-axial wire loops carrying current in the opposite sense repel each other. The field configurations here are shown using the "iron filings" representation. The bottom wire loop carries twice the amount of current of the top wire loop.

At the center of the coils along the axis of symmetry, the magnetic field is zero. With the distance between the two coils fixed, the repulsive force results in a pressure between them. This is illustrated by field lines that are compressed along the central horizontal axis between the coils.

Animation 9.7: Forces Between Coaxial Current-Carrying Wires

Figure 9.9.7 A magnet in the TeachSpin™ Magnetic Force apparatus when the current in the top coil is counterclockwise as seen from the top.

Figure 9.9.7 shows the force of repulsion between the magnetic field of a permanent magnet and the field of a current-carrying ring in the TeachSpin™ Magnetic Force apparatus. The magnet is forced to have its north magnetic pole pointing downward, and the current in the top coil of the Magnetic Force apparatus is moving clockwise as seen from above. The net result is a repulsion of the magnet when the current in this direction is increased. The visualization shows the stresses transmitted by the fields to the magnet when the current in the upper coil is increased.

Animation 9.8: Magnet Oscillating Between Two Coils

Figure 9.9.8 illustrates an animation showing the magnetic field of a permanent magnet suspended by a spring in the TeachSpin™ apparatus (see TeachSpin visualization), together with the magnetic field due to the current in the two coils (here we see a "cutaway" cross-section of the apparatus).

Figure 9.9.8 Magnet oscillating between two coils

The magnet is fixed so that its north pole points upward, and the current in the two coils is sinusoidal and 180 degrees out of phase. When the effective dipole moment of the top coil points upwards, the dipole moment of the bottom coil points downwards. Thus, the magnet is attracted to the upper coil and repelled by the lower coil, causing it to move upwards. When the conditions are reversed during the second half of the cycle, the magnet moves downwards.

This process can also be described in terms of tension along, and pressure perpendicular to, the field lines of the resulting field. When the dipole moment of one of the coils is aligned with that of the magnet, there is a tension along the field lines as they attempt to "connect" the coil and magnet. Conversely, when their moments are anti-aligned, there is a pressure perpendicular to the field lines as they try to keep the coil and magnet apart.

Animation 9.9: Magnet Suspended Between Two Coils

Figure 9.9.9 illustrates an animation showing the magnetic field of a permanent magnet suspended by a spring in the TeachSpin™ apparatus (see TeachSpin visualization), together with the magnetic field due to the current in the two coils (here we see a "cutaway" cross-section of the apparatus). The magnet is fixed so that its north pole points upward, and the current in the two coils is sinusoidal and in phase. When the effective dipole moment of the top coil points upwards, the dipole moment of the bottom coil points upwards as well. Thus, the magnet is attracted to both coils and, as a result, feels no net force (although it does feel a torque, not shown here since the direction of the magnet is fixed to point upwards). When the dipole moments are reversed during the second half of the cycle, the magnet is repelled by both coils, again resulting in no net force.

This process can also be described in terms of tension along, and pressure perpendicular to, the field lines of the resulting field.
When the dipole moments of the coils are aligned with that of the magnet, there is a tension along the field lines as they are "pulled" from both sides. Conversely, when their moments are anti-aligned, there is a pressure perpendicular to the field lines as they are "squeezed" from both sides.

Figure 9.9.9 Magnet suspended between two coils

9.10 Problem-Solving Strategies

In this chapter, we have seen how the Biot-Savart law and Ampere's law can be used to calculate the magnetic field due to a current source.

9.10.1 Biot-Savart Law:

The law states that the magnetic field at a point P due to a length element $d\vec{s}$ carrying a steady current I and located at $\vec{r}$ away is given by

$d\vec{B} = \frac{\mu_0 I}{4\pi}\frac{d\vec{s}\times\hat{r}}{r^2} = \frac{\mu_0 I}{4\pi}\frac{d\vec{s}\times\vec{r}}{r^3}$

The calculation of the magnetic field may be carried out as follows:

(1) Source point: Choose an appropriate coordinate system and write down an expression for the differential current element $I\,d\vec{s}$, and the vector $\vec{r}'$ describing the position of $I\,d\vec{s}$. The magnitude $r' = |\vec{r}'|$ is the distance between $I\,d\vec{s}$ and the origin. Variables with a "prime" are used for the source point.

(2) Field point: The field point P is the point in space where the magnetic field due to the current distribution is to be calculated. Using the same coordinate system, write down the position vector $\vec{r}_P$ for the field point P. The quantity $r_P = |\vec{r}_P|$ is the distance between the origin and P.

(3) Relative position vector: The relative position between the source point and the field point is characterized by the relative position vector $\vec{r} = \vec{r}_P - \vec{r}'$. The corresponding unit vector is

$\hat{r} = \frac{\vec{r}}{r} = \frac{\vec{r}_P - \vec{r}'}{|\vec{r}_P - \vec{r}'|}$

where $r = |\vec{r}| = |\vec{r}_P - \vec{r}'|$ is the distance between the source and the field point P.

(4) Calculate the cross product $d\vec{s}\times\hat{r}$ or $d\vec{s}\times\vec{r}$. The resultant vector gives the direction of the magnetic field $\vec{B}$, according to the Biot-Savart law.

(5) Substitute the expressions obtained into $d\vec{B}$ and simplify as much as possible.

(6) Complete the integration to obtain $\vec{B}$ if possible. The size or the geometry of the system is reflected in the integration limits. A change of variables sometimes may help to complete the integration.

Below we illustrate how these steps are executed for a current-carrying wire of length L and a loop of radius R.
Current distribution: (A) finite wire of length L, (B) circular loop of radius R.

(1) Source point:
(A) $\vec{r}' = x'\,\hat{i}$, $\ d\vec{s} = (d\vec{r}'/dx')\,dx' = dx'\,\hat{i}$
(B) $\vec{r}' = R(\cos\phi'\,\hat{i} + \sin\phi'\,\hat{j})$, $\ d\vec{s} = (d\vec{r}'/d\phi')\,d\phi' = R\,d\phi'\,(-\sin\phi'\,\hat{i} + \cos\phi'\,\hat{j})$

(2) Field point P:
(A) $\vec{r}_P = y\,\hat{j}$
(B) $\vec{r}_P = z\,\hat{k}$

(3) Relative position vector $\vec{r} = \vec{r}_P - \vec{r}'$:
(A) $\vec{r} = y\,\hat{j} - x'\,\hat{i}$, $\ r = |\vec{r}| = \sqrt{x'^2 + y^2}$, $\ \hat{r} = \dfrac{y\,\hat{j} - x'\,\hat{i}}{\sqrt{x'^2 + y^2}}$
(B) $\vec{r} = -R\cos\phi'\,\hat{i} - R\sin\phi'\,\hat{j} + z\,\hat{k}$, $\ r = |\vec{r}| = \sqrt{R^2 + z^2}$, $\ \hat{r} = \dfrac{-R\cos\phi'\,\hat{i} - R\sin\phi'\,\hat{j} + z\,\hat{k}}{\sqrt{R^2 + z^2}}$

(4) The cross product $d\vec{s}\times\hat{r}$:
(A) $d\vec{s}\times\hat{r} = \dfrac{y\,dx'\,\hat{k}}{\sqrt{x'^2 + y^2}}$
(B) $d\vec{s}\times\hat{r} = \dfrac{R\,d\phi'\left(z\cos\phi'\,\hat{i} + z\sin\phi'\,\hat{j} + R\,\hat{k}\right)}{\sqrt{R^2 + z^2}}$

(5) Rewrite $d\vec{B}$:
(A) $d\vec{B} = \dfrac{\mu_0 I}{4\pi}\dfrac{y\,dx'\,\hat{k}}{(x'^2 + y^2)^{3/2}}$
(B) $d\vec{B} = \dfrac{\mu_0 I}{4\pi}\dfrac{R\,d\phi'\left(z\cos\phi'\,\hat{i} + z\sin\phi'\,\hat{j} + R\,\hat{k}\right)}{(R^2 + z^2)^{3/2}}$

(6) Integrate to get $\vec{B}$:
(A) $B_x = 0$, $\ B_y = 0$, $\ B_z = \dfrac{\mu_0 I y}{4\pi}\displaystyle\int_{-L/2}^{L/2}\dfrac{dx'}{(x'^2 + y^2)^{3/2}} = \dfrac{\mu_0 I}{4\pi}\dfrac{L}{y\sqrt{y^2 + (L/2)^2}}$
(B) $B_x = \dfrac{\mu_0 I R z}{4\pi(R^2 + z^2)^{3/2}}\displaystyle\int_0^{2\pi}\cos\phi'\,d\phi' = 0$, $\ B_y = \dfrac{\mu_0 I R z}{4\pi(R^2 + z^2)^{3/2}}\displaystyle\int_0^{2\pi}\sin\phi'\,d\phi' = 0$, $\ B_z = \dfrac{\mu_0 I R^2}{4\pi(R^2 + z^2)^{3/2}}\displaystyle\int_0^{2\pi}d\phi' = \dfrac{\mu_0 I R^2}{2(R^2 + z^2)^{3/2}}$

9.10.2 Ampere's law:

Ampere's law states that the line integral of $\vec{B}\cdot d\vec{s}$ around any closed loop is proportional to the total current passing through any surface that is bounded by the closed loop:

$\oint\vec{B}\cdot d\vec{s} = \mu_0 I_{\rm enc}$

To apply Ampere's law to calculate the magnetic field, we use the following procedure:

(1) Draw an Amperian loop using symmetry arguments.

(2) Find the current enclosed by the Amperian loop.

(3) Calculate the line integral $\oint\vec{B}\cdot d\vec{s}$ around the closed loop.

(4) Equate $\oint\vec{B}\cdot d\vec{s}$ with $\mu_0 I_{\rm enc}$ and solve for $\vec{B}$.

Below we summarize how the methodology can be applied to calculate the magnetic field for an infinite wire, an ideal solenoid and a toroid.

System: infinite wire / ideal solenoid / toroid.

(1) Draw the Amperian loop (a circle around the wire; a rectangle straddling the solenoid windings; a circle inside the toroid).

(2) Current enclosed by the Amperian loop: $I_{\rm enc} = I$ (infinite wire); $I_{\rm enc} = NI$ (ideal solenoid); $I_{\rm enc} = NI$ (toroid).

(3) Line integral $\oint\vec{B}\cdot d\vec{s}$ along the loop: $B(2\pi r)$ (infinite wire); $Bl$ (ideal solenoid); $B(2\pi r)$ (toroid).

(4) Equating $\mu_0 I_{\rm enc}$ with $\oint\vec{B}\cdot d\vec{s}$ gives: $B = \dfrac{\mu_0 I}{2\pi r}$ (infinite wire); $B = \dfrac{\mu_0 N I}{l} = \mu_0 n I$ (ideal solenoid); $B = \dfrac{\mu_0 N I}{2\pi r}$ (toroid).
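The closed-form entries in step (6) of the Biot-Savart comparison above are easy to cross-check by direct numerical integration. The sketch below (illustrative numbers only, assuming NumPy and SciPy are available) does this for both geometries; the x and y components of the loop field vanish because the integrals of $\cos\phi'$ and $\sin\phi'$ over a full turn are zero.

```python
import numpy as np
from scipy.integrate import quad

mu0 = 4e-7 * np.pi
I = 2.0                        # hypothetical current

# (A) Finite wire of length L along x, field point at (0, y, 0)
L, y = 1.0, 0.25
Bz_wire = mu0*I*y/(4*np.pi) * quad(lambda x: (x**2 + y**2)**-1.5, -L/2, L/2)[0]
Bz_wire_closed = mu0*I/(4*np.pi*y) * L/np.sqrt(y**2 + (L/2)**2)
print(Bz_wire, Bz_wire_closed)          # agree

# (B) Circular loop of radius R in the xy plane, field point at (0, 0, z)
R, z = 0.1, 0.2
Bz_loop = mu0*I*R**2/(4*np.pi) * quad(lambda p: (R**2 + z**2)**-1.5, 0, 2*np.pi)[0]
Bz_loop_closed = mu0*I*R**2 / (2*(R**2 + z**2)**1.5)
print(Bz_loop, Bz_loop_closed)          # agree
```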
9.11 Solved Problems

9.11.1 Magnetic Field of a Straight Wire

Consider a straight wire of length L carrying a current I along the +x-direction, as shown in Figure 9.11.1 (ignore the return path of the current or the source for the current). What is the magnetic field at an arbitrary point P on the xy-plane?

Figure 9.11.1 A finite straight wire carrying a current I.

Solution:

The problem is very similar to Example 9.1. However, now the field point is an arbitrary point in the xy-plane. Once again we solve the problem using the methodology outlined in Section 9.10.

(1) Source point: From Figure 9.11.1, we see that the infinitesimal length dx' described by the position vector $\vec{r}' = x'\,\hat{i}$ constitutes a current source $I\,d\vec{s} = (I\,dx')\,\hat{i}$.

(2) Field point: As can be seen from Figure 9.11.1, the position vector for the field point P is $\vec{r}_P = x\,\hat{i} + y\,\hat{j}$.

(3) Relative position vector: The relative position vector from the source to P is $\vec{r} = \vec{r}_P - \vec{r}' = (x - x')\,\hat{i} + y\,\hat{j}$, with $r = |\vec{r}| = |\vec{r}_P - \vec{r}'| = \left[(x - x')^2 + y^2\right]^{1/2}$ being the distance. The corresponding unit vector is

$\hat{r} = \frac{\vec{r}}{r} = \frac{(x - x')\,\hat{i} + y\,\hat{j}}{\left[(x - x')^2 + y^2\right]^{1/2}}$

(4) Simplifying the cross product: The cross product $d\vec{s}\times\vec{r}$ can be simplified as

$(dx'\,\hat{i})\times\left[(x - x')\,\hat{i} + y\,\hat{j}\right] = y\,dx'\,\hat{k}$

where we have used $\hat{i}\times\hat{i} = \vec{0}$ and $\hat{i}\times\hat{j} = \hat{k}$.

(5) Writing down $d\vec{B}$: Using the Biot-Savart law, the infinitesimal contribution due to $I\,d\vec{s}$ is

$d\vec{B} = \frac{\mu_0 I}{4\pi}\frac{d\vec{s}\times\hat{r}}{r^2} = \frac{\mu_0 I}{4\pi}\frac{d\vec{s}\times\vec{r}}{r^3} = \frac{\mu_0 I}{4\pi}\frac{y\,dx'}{\left[(x - x')^2 + y^2\right]^{3/2}}\,\hat{k}$  (9.11.1)

Thus, we see that the direction of the magnetic field is in the $+\hat{k}$ direction.

(6) Carrying out the integration to obtain $\vec{B}$: The total magnetic field at P can then be obtained by integrating over the entire length of the wire:

$\vec{B} = \int_{\rm wire}d\vec{B} = \frac{\mu_0 I y}{4\pi}\int_{-L/2}^{L/2}\frac{dx'}{\left[(x - x')^2 + y^2\right]^{3/2}}\,\hat{k} = \frac{\mu_0 I}{4\pi y}\left[\frac{x + L/2}{\sqrt{(x + L/2)^2 + y^2}} - \frac{x - L/2}{\sqrt{(x - L/2)^2 + y^2}}\right]\hat{k}$  (9.11.2)

Let's consider the following limits:

(i) $x = 0$. In this case, the field point P is at $(x,y) = (0, y)$ on the y-axis. The magnetic field becomes

$\vec{B} = \frac{\mu_0 I}{4\pi y}\left[\frac{L/2}{\sqrt{(L/2)^2 + y^2}} + \frac{L/2}{\sqrt{(L/2)^2 + y^2}}\right]\hat{k} = \frac{\mu_0 I}{2\pi y}\frac{L/2}{\sqrt{(L/2)^2 + y^2}}\,\hat{k} = \frac{\mu_0 I}{2\pi y}\cos\theta\,\hat{k}$  (9.11.3)

in agreement with Eq. (9.1.6).

(ii) Infinite-length limit. Consider the limit where $L \gg x, y$. This gives back the expected infinite-length result:

$\vec{B} = \frac{\mu_0 I}{4\pi y}\left[\frac{L/2}{L/2} + \frac{L/2}{L/2}\right]\hat{k} = \frac{\mu_0 I}{2\pi y}\,\hat{k}$  (9.11.4)

If we use cylindrical coordinates with the wire pointing along the +z-axis, then the magnetic field is given by the expression

$\vec{B} = \frac{\mu_0 I}{2\pi r}\,\hat{\phi}$  (9.11.5)

where $\hat{\phi}$ is the tangential unit vector and the field point P is a distance r away from the wire.
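A quick way to gain confidence in Eq. (9.11.2) and its limits is to evaluate it numerically and compare with a direct Biot-Savart quadrature of Eq. (9.11.1). The Python sketch below does this for illustrative values of I, L and the field point (none of the numbers come from the text).

```python
import numpy as np
from scipy.integrate import quad

mu0 = 4e-7 * np.pi
I = 5.0                        # hypothetical current

def Bz_closed(x, y, L):
    """Eq. (9.11.2): z-field of a wire of length L on the x axis, at point (x, y)."""
    return mu0*I/(4*np.pi*y) * ((x + L/2)/np.hypot(x + L/2, y)
                                - (x - L/2)/np.hypot(x - L/2, y))

def Bz_quad(x, y, L):
    """Direct numerical integration of Eq. (9.11.1) as a cross-check."""
    return mu0*I*y/(4*np.pi) * quad(lambda xp: ((x - xp)**2 + y**2)**-1.5, -L/2, L/2)[0]

print(Bz_closed(0.3, 0.2, 2.0), Bz_quad(0.3, 0.2, 2.0))     # agree

# Limit (i): x = 0 reduces to Eq. (9.11.3), (mu0 I / 2 pi y) cos(theta)
L, y = 2.0, 0.2
cos_theta = (L/2) / np.hypot(L/2, y)
print(Bz_closed(0.0, y, L), mu0*I/(2*np.pi*y) * cos_theta)

# Limit (ii): L >> x, y approaches the infinite-wire field, Eq. (9.11.4)
print(Bz_closed(0.3, 0.2, 1e4), mu0*I/(2*np.pi*0.2))
```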
9.11.2 Current-Carrying Arc

Consider the current-carrying loop formed of radial lines and segments of circles whose centers are at point P, as shown below. Find the magnetic field $\vec{B}$ at P.

Figure 9.11.2 Current-carrying arc

Solution:

According to the Biot-Savart law, the magnitude of the magnetic field due to a differential current-carrying element $I\,d\vec{s}$ is given by

$dB = \frac{\mu_0 I}{4\pi}\frac{|d\vec{s}\times\hat{r}|}{r^2} = \frac{\mu_0 I}{4\pi}\frac{r\,d\theta'}{r^2} = \frac{\mu_0 I}{4\pi r}\,d\theta'$  (9.11.6)

For the outer arc, we have

$B_{\rm outer} = \frac{\mu_0 I}{4\pi b}\int_0^{\theta}d\theta' = \frac{\mu_0 I\theta}{4\pi b}$  (9.11.7)

The direction of $\vec{B}_{\rm outer}$ is determined by the cross product $d\vec{s}\times\hat{r}$, which points out of the page. Similarly, for the inner arc we have

$B_{\rm inner} = \frac{\mu_0 I}{4\pi a}\int_0^{\theta}d\theta' = \frac{\mu_0 I\theta}{4\pi a}$  (9.11.8)

For $\vec{B}_{\rm inner}$, $d\vec{s}\times\hat{r}$ points into the page. (The radial segments contribute nothing, since along them $d\vec{s}$ is parallel to $\hat{r}$.) Thus, the total magnitude of the magnetic field is

$\vec{B} = \vec{B}_{\rm inner} + \vec{B}_{\rm outer} = \frac{\mu_0 I\theta}{4\pi}\left(\frac{1}{a} - \frac{1}{b}\right)\ \text{(into the page)}$  (9.11.9)

9.11.3 Rectangular Current Loop

Determine the magnetic field (in terms of I, a and b) at the origin O due to the current loop shown in Figure 9.11.3.

Figure 9.11.3 Rectangular current loop

Solution:

For a finite wire carrying a current I, the contribution to the magnetic field at a point P is given by Eq. (9.1.5):

$B = \frac{\mu_0 I}{4\pi r}\left(\cos\theta_1 + \cos\theta_2\right)$

where $\theta_1$ and $\theta_2$ are the angles which parameterize the length of the wire. To obtain the magnetic field at O, we make use of the above formula. The contributions can be divided into three parts:

(i) Consider the left segment of the wire, which extends from $(x,y) = (-a, +\infty)$ to $(-a, +b)$. The angles which parameterize this segment give $\cos\theta_1 = 1$ ($\theta_1 = 0$) and $\cos\theta_2 = -b/\sqrt{b^2 + a^2}$. Therefore,

$B_1 = \frac{\mu_0 I}{4\pi a}\left(\cos\theta_1 + \cos\theta_2\right) = \frac{\mu_0 I}{4\pi a}\left(1 - \frac{b}{\sqrt{a^2 + b^2}}\right)$  (9.11.10)

The direction of $\vec{B}_1$ is out of the page, or $+\hat{k}$.

(ii) Next, we consider the segment which extends from $(x,y) = (-a, +b)$ to $(+a, +b)$. Again, the (cosines of the) angles are given by

$\cos\theta_1 = \frac{a}{\sqrt{a^2 + b^2}}$  (9.11.11)

$\cos\theta_2 = \cos\theta_1 = \frac{a}{\sqrt{a^2 + b^2}}$  (9.11.12)

This leads to

$B_2 = \frac{\mu_0 I}{4\pi b}\left(\frac{a}{\sqrt{a^2 + b^2}} + \frac{a}{\sqrt{a^2 + b^2}}\right) = \frac{\mu_0 I a}{2\pi b\sqrt{a^2 + b^2}}$  (9.11.13)

The direction of $\vec{B}_2$ is into the page, or $-\hat{k}$.

(iii) The third segment of the wire runs from $(x,y) = (+a, +b)$ to $(+a, +\infty)$. One may readily show that it gives the same contribution as the first one:

$B_3 = B_1$  (9.11.14)

The direction of $\vec{B}_3$ is again out of the page, or $+\hat{k}$. The total magnetic field is

$\vec{B} = \vec{B}_1 + \vec{B}_2 + \vec{B}_3 = 2\vec{B}_1 + \vec{B}_2 = \left[\frac{\mu_0 I}{2\pi a}\left(1 - \frac{b}{\sqrt{a^2 + b^2}}\right) - \frac{\mu_0 I a}{2\pi b\sqrt{a^2 + b^2}}\right]\hat{k} = \frac{\mu_0 I}{2\pi a b\sqrt{a^2 + b^2}}\left(b\sqrt{a^2 + b^2} - b^2 - a^2\right)\hat{k}$  (9.11.15)

Note that in the limit $a \to 0$, the horizontal segment is absent, and the two semi-infinite wires carrying currents in opposite directions overlap each other, so that their contributions completely cancel. Thus, the magnetic field vanishes in this limit.

9.11.4 Hairpin-Shaped Current-Carrying Wire

An infinitely long current-carrying wire is bent into the hairpin-like shape shown in Figure 9.11.4. Find the magnetic field at the point P which lies at the center of the half-circle.

Figure 9.11.4 Hairpin-shaped current-carrying wire

Solution:

Again we break the wire into three parts: two semi-infinite segments plus a semi-circular segment.

(i) Let P be located at the origin in the xy plane. The first semi-infinite segment then extends from $(x,y) = (-\infty, -r)$ to $(0, -r)$. The two angles which parameterize this segment are characterized by $\cos\theta_1 = 1$ ($\theta_1 = 0$) and $\cos\theta_2 = 0$ ($\theta_2 = \pi/2$). Therefore, its contribution to the magnetic field at P is

$B_1 = \frac{\mu_0 I}{4\pi r}\left(\cos\theta_1 + \cos\theta_2\right) = \frac{\mu_0 I}{4\pi r}(1 + 0) = \frac{\mu_0 I}{4\pi r}$  (9.11.16)

The direction of $\vec{B}_1$ is out of the page, or $+\hat{k}$.

(ii) For the semi-circular arc of radius r, we make use of the Biot-Savart law:

$\vec{B} = \frac{\mu_0 I}{4\pi}\int\frac{d\vec{s}\times\hat{r}}{r^2}$  (9.11.17)

and obtain

$B_2 = \frac{\mu_0 I}{4\pi r^2}\int_0^{\pi}r\,d\theta = \frac{\mu_0 I}{4r}$  (9.11.18)

The direction of $\vec{B}_2$ is out of the page, or $+\hat{k}$.

(iii) The third segment of the wire runs from $(x,y) = (0, +r)$ to $(-\infty, +r)$. One may readily show that it gives the same contribution as the first one:

$B_3 = B_1 = \frac{\mu_0 I}{4\pi r}$  (9.11.19)

The direction of $\vec{B}_3$ is again out of the page, or $+\hat{k}$. The total magnitude of the magnetic field is

$\vec{B} = \vec{B}_1 + \vec{B}_2 + \vec{B}_3 = 2\vec{B}_1 + \vec{B}_2 = \frac{2\mu_0 I}{4\pi r}\,\hat{k} + \frac{\mu_0 I}{4r}\,\hat{k} = \frac{\mu_0 I}{4\pi r}(2 + \pi)\,\hat{k}$  (9.11.20)

Notice that the contribution from the two semi-infinite wires is equal to that due to an infinite wire:

$\vec{B}_1 + \vec{B}_3 = 2\vec{B}_1 = \frac{\mu_0 I}{2\pi r}\,\hat{k}$  (9.11.21)
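Because the hairpin result (9.11.20) is just a sum of pieces already derived, it is easy to sanity-check with a few lines of arithmetic. The sketch below uses arbitrary illustrative values of I and r.

```python
import math

mu0 = 4e-7 * math.pi
I, r = 2.0, 0.03              # hypothetical current and half-circle radius

B_semi_infinite = mu0*I/(4*math.pi*r)     # each straight piece, Eq. (9.11.16)
B_arc = mu0*I/(4*r)                       # the half circle, Eq. (9.11.18)
B_total = 2*B_semi_infinite + B_arc       # Eq. (9.11.20)

print(B_total, mu0*I*(2 + math.pi)/(4*math.pi*r))   # same number
print(2*B_semi_infinite, mu0*I/(2*math.pi*r))       # the two straight pieces together
                                                    # equal one infinite wire, Eq. (9.11.21)
```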
9.11.5 Two Infinitely Long Wires

Consider two infinitely long wires carrying currents in the −x-direction.

Figure 9.11.5 Two infinitely long wires

(a) Plot the magnetic field pattern in the yz-plane.

(b) Find the distance d along the z-axis where the magnetic field is a maximum.

Solutions:

(a) The magnetic field lines are shown in Figure 9.11.6. Notice that the directions of both currents are into the page.

Figure 9.11.6 Magnetic field lines of two wires carrying current in the same direction.

(b) The magnetic field at (0, 0, z) due to wire 1 on the left is, using Ampere's law,

$B_1 = \frac{\mu_0 I}{2\pi r} = \frac{\mu_0 I}{2\pi\sqrt{a^2 + z^2}}$  (9.11.22)

Since the current is flowing in the −x-direction, the magnetic field points in the direction of the cross product

$(-\hat{i})\times\hat{r}_1 = (-\hat{i})\times\left(\cos\theta\,\hat{j} + \sin\theta\,\hat{k}\right) = \sin\theta\,\hat{j} - \cos\theta\,\hat{k}$  (9.11.23)

Thus, we have

$\vec{B}_1 = \frac{\mu_0 I}{2\pi\sqrt{a^2 + z^2}}\left(\sin\theta\,\hat{j} - \cos\theta\,\hat{k}\right)$  (9.11.24)

For wire 2 on the right, the magnetic field strength is the same as that of the left one: $B_1 = B_2$. However, its direction is given by

$(-\hat{i})\times\hat{r}_2 = (-\hat{i})\times\left(-\cos\theta\,\hat{j} + \sin\theta\,\hat{k}\right) = \sin\theta\,\hat{j} + \cos\theta\,\hat{k}$  (9.11.25)

Adding up the contributions from both wires, the z-components cancel (as required by symmetry), and we arrive at

$\vec{B} = \vec{B}_1 + \vec{B}_2 = \frac{2\mu_0 I\sin\theta}{2\pi\sqrt{a^2 + z^2}}\,\hat{j} = \frac{\mu_0 I z}{\pi(a^2 + z^2)}\,\hat{j}$  (9.11.26)

Figure 9.11.7 Superposition of magnetic fields due to two current sources

To locate the maximum of B, we set $dB/dz = 0$ and find

$\frac{dB}{dz} = \frac{\mu_0 I}{\pi}\left(\frac{1}{a^2 + z^2} - \frac{2z^2}{(a^2 + z^2)^2}\right) = \frac{\mu_0 I}{\pi}\frac{a^2 - z^2}{(a^2 + z^2)^2} = 0$  (9.11.27)

which gives

$z = a$  (9.11.28)

Thus, at z = a, the magnetic field strength is a maximum, with a magnitude

$B_{\max} = \frac{\mu_0 I}{2\pi a}$  (9.11.29)

9.11.6 Non-Uniform Current Density

Consider an infinitely long cylindrical conductor of radius R carrying a current I with a non-uniform current density

$J = \alpha r$  (9.11.30)

where α is a constant. Find the magnetic field everywhere.

Figure 9.11.8 Non-uniform current density

Solution:

The problem can be solved by using Ampere's law:

$\oint\vec{B}\cdot d\vec{s} = \mu_0 I_{\rm enc}$  (9.11.31)

where the enclosed current $I_{\rm enc}$ is given by

$I_{\rm enc} = \int\vec{J}\cdot d\vec{A} = \int(\alpha r')(2\pi r'\,dr')$  (9.11.32)

(a) For $r < R$, the enclosed current is

$I_{\rm enc} = \int_0^r 2\pi\alpha r'^2\,dr' = \frac{2\pi\alpha r^3}{3}$  (9.11.33)

Applying Ampere's law, the magnetic field at P₁ is given by

$B_1(2\pi r) = \mu_0\frac{2\pi\alpha r^3}{3}$  (9.11.34)

or

$B_1 = \frac{\alpha\mu_0}{3}r^2$  (9.11.35)

The direction of the magnetic field $\vec{B}_1$ is tangential to the Amperian loop which encloses the current.

(b) For $r > R$, the enclosed current is

$I_{\rm enc} = \int_0^R 2\pi\alpha r'^2\,dr' = \frac{2\pi\alpha R^3}{3}$  (9.11.36)

which yields

$B_2(2\pi r) = \mu_0\frac{2\pi\alpha R^3}{3}$  (9.11.37)

Thus, the magnetic field at a point P₂ outside the conductor is

$B_2 = \frac{\alpha\mu_0 R^3}{3r}$  (9.11.38)

A plot of B as a function of r is shown in Figure 9.11.9:

Figure 9.11.9 The magnetic field as a function of distance away from the conductor
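The two branches (9.11.35) and (9.11.38) can be checked for consistency with a few lines of Python. The conductor radius and total current below are hypothetical; the relation $\alpha = 3I/(2\pi R^3)$ simply follows from requiring that Eq. (9.11.36) equal the total current I.

```python
import numpy as np

mu0 = 4e-7 * np.pi
R = 0.02                      # hypothetical conductor radius, m
I = 5.0                       # hypothetical total current, A
alpha = 3*I / (2*np.pi*R**3)  # from I = 2 pi alpha R^3 / 3, Eq. (9.11.36)

def B(r):
    """Piecewise field of Eqs. (9.11.35) and (9.11.38)."""
    return np.where(r < R, alpha*mu0*r**2/3, alpha*mu0*R**3/(3*r))

# The two branches agree at the surface, and far away the field is that of a thin wire
print(B(0.999999*R), B(1.000001*R), alpha*mu0*R**2/3)
print(B(10*R), mu0*I/(2*np.pi*10*R))
```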
9.11.7 Thin Strip of Metal

Consider an infinitely long, thin strip of metal of width w lying in the xy plane. The strip carries a current I along the +x-direction, as shown in Figure 9.11.10. Find the magnetic field at a point P which is in the plane of the strip and at a distance s away from it.

Figure 9.11.10 Thin strip of metal

Solution:

Consider a thin strip of width dr parallel to the direction of the current and at a distance r away from P, as shown in Figure 9.11.11. The amount of current carried by this differential element is

$dI = I\left(\frac{dr}{w}\right)$  (9.11.39)

Using Ampere's law, we see that the strip's contribution to the magnetic field at P is given by

$dB(2\pi r) = \mu_0 I_{\rm enc} = \mu_0\,dI$  (9.11.40)

or

$dB = \frac{\mu_0\,dI}{2\pi r} = \frac{\mu_0}{2\pi r}\left(\frac{I\,dr}{w}\right)$  (9.11.41)

Figure 9.11.11 A thin strip of thickness dr carrying a steady current I.

Integrating this expression, we obtain

$B = \int_s^{s+w}\frac{\mu_0 I}{2\pi w}\frac{dr}{r} = \frac{\mu_0 I}{2\pi w}\ln\left(\frac{s + w}{s}\right)$  (9.11.42)

Using the right-hand rule, the direction of the magnetic field can be shown to point in the +z-direction, or

$\vec{B} = \frac{\mu_0 I}{2\pi w}\ln\left(1 + \frac{w}{s}\right)\hat{k}$  (9.11.43)

Notice that in the limit of vanishing width, $w \ll s$, $\ln(1 + w/s) \approx w/s$, and the above expression becomes

$\vec{B} = \frac{\mu_0 I}{2\pi s}\,\hat{k}$  (9.11.44)

which is the magnetic field due to an infinitely long thin straight wire.

9.11.8 Two Semi-Infinite Wires

A wire carrying current I runs down the y axis to the origin, thence out to infinity along the positive x axis. Show that the magnetic field in the quadrant with $x, y > 0$ of the xy plane is given by

$B_z = \frac{\mu_0 I}{4\pi}\left(\frac{1}{x} + \frac{1}{y} + \frac{x}{y\sqrt{x^2 + y^2}} + \frac{y}{x\sqrt{x^2 + y^2}}\right)$  (9.11.45)

Solution:

Let $P(x,y)$ be a point in the first quadrant at a distance $r_1$ from a point $(0, y')$ on the y-axis and a distance $r_2$ from $(x', 0)$ on the x-axis.

Figure 9.11.12 Two semi-infinite wires

Using the Biot-Savart law, the magnetic field at P is given by

$\vec{B} = \frac{\mu_0 I}{4\pi}\int\frac{d\vec{s}\times\hat{r}}{r^2} = \vec{B}_1 + \vec{B}_2 = \frac{\mu_0 I}{4\pi}\int_{\rm wire\,y}\frac{d\vec{s}_1\times\hat{r}_1}{r_1^2} + \frac{\mu_0 I}{4\pi}\int_{\rm wire\,x}\frac{d\vec{s}_2\times\hat{r}_2}{r_2^2}$  (9.11.46)

Let's analyze each segment separately.

(i) Along the y-axis, consider a differential element $d\vec{s}_1 = -dy'\,\hat{j}$ which is located at $\vec{r}_1 = x\,\hat{i} + (y - y')\,\hat{j}$ from P. This yields

$d\vec{s}_1\times\vec{r}_1 = (-dy'\,\hat{j})\times\left[x\,\hat{i} + (y - y')\,\hat{j}\right] = x\,dy'\,\hat{k}$  (9.11.47)

(ii) Similarly, along the x-axis we have $d\vec{s}_2 = dx'\,\hat{i}$ and $\vec{r}_2 = (x - x')\,\hat{i} + y\,\hat{j}$, which gives

$d\vec{s}_2\times\vec{r}_2 = y\,dx'\,\hat{k}$  (9.11.48)

Thus, we see that the magnetic field at P points in the +z-direction. Using the above results and $r_1 = \left[x^2 + (y - y')^2\right]^{1/2}$ and $r_2 = \left[(x - x')^2 + y^2\right]^{1/2}$, we obtain

$B_z = \frac{\mu_0 I}{4\pi}\int_0^{\infty}\frac{x\,dy'}{\left[x^2 + (y - y')^2\right]^{3/2}} + \frac{\mu_0 I}{4\pi}\int_0^{\infty}\frac{y\,dx'}{\left[y^2 + (x - x')^2\right]^{3/2}}$  (9.11.49)

The integrals can be readily evaluated using

$\int_0^{\infty}\frac{b\,ds}{\left[b^2 + (a - s)^2\right]^{3/2}} = \frac{1}{b}\left(1 + \frac{a}{\sqrt{a^2 + b^2}}\right)$  (9.11.50)

The final expression for the magnetic field is given by

$\vec{B} = \frac{\mu_0 I}{4\pi}\left[\frac{1}{x} + \frac{y}{x\sqrt{x^2 + y^2}} + \frac{1}{y} + \frac{x}{y\sqrt{x^2 + y^2}}\right]\hat{k}$  (9.11.51)

We may show that the result is consistent with Eq. (9.1.5).
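Equation (9.11.51) can be verified by evaluating the two integrals of Eq. (9.11.49) numerically. The field point and current below are arbitrary illustrative choices, and the sketch assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.integrate import quad

mu0 = 4e-7 * np.pi
I = 1.0
x, y = 0.3, 0.2               # hypothetical field point in the first quadrant

# Closed form, Eq. (9.11.45)/(9.11.51)
s = np.hypot(x, y)
B_closed = mu0*I/(4*np.pi) * (1/x + 1/y + x/(y*s) + y/(x*s))

# Direct evaluation of the two integrals in Eq. (9.11.49)
B_y_wire = mu0*I/(4*np.pi) * quad(lambda yp: x/(x**2 + (y - yp)**2)**1.5, 0, np.inf)[0]
B_x_wire = mu0*I/(4*np.pi) * quad(lambda xp: y/(y**2 + (x - xp)**2)**1.5, 0, np.inf)[0]

print(B_closed, B_y_wire + B_x_wire)     # agree
```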
9.12 Conceptual Questions

1. Compare and contrast the Biot-Savart law in magnetostatics with Coulomb's law in electrostatics.

2. If a current is passed through a spring, does the spring stretch or compress? Explain.

3. How is the path of integration of $\oint\vec{B}\cdot d\vec{s}$ chosen when applying Ampere's law?

4. Two concentric, coplanar circular loops of different diameters carry steady currents in the same direction. Do the loops attract or repel each other? Explain.

5. Suppose three infinitely long parallel wires are arranged in such a way that when looking at the cross section, they are at the corners of an equilateral triangle. Can the currents be arranged (some combination of flowing into or out of the page) so that all three wires (a) attract, and (b) repel each other? Explain.

9.13 Additional Problems

9.13.1 Application of Ampere's Law

The simplest possible application of Ampere's law allows us to calculate the magnetic field in the vicinity of a single infinitely long wire. Adding more wires with differing currents will check your understanding of Ampere's law.

(a) Calculate with Ampere's law the magnetic field $|\vec{B}| = B(r)$ as a function of distance r from the wire, in the vicinity of an infinitely long straight wire that carries current I. Show with a sketch the integration path you choose and state explicitly how you use symmetry. What is the field at a distance of 10 mm from the wire if the current is 10 A?

(b) Eight parallel wires cut the page perpendicularly at the points shown. A wire labeled with the integer k (k = 1, 2, ..., 8) bears the current $I_k = 2k\,I_0$ (i.e., 2k times $I_0$). For those with k = 1 to 4, the current flows up out of the page; for the rest, the current flows down into the page. Evaluate $\oint\vec{B}\cdot d\vec{s}$ along the closed path (see figure) in the direction indicated by the arrowhead. (Watch your signs!)

Figure 9.13.1 Amperian loop

(c) Can you use a single application of Ampere's law to find the field at a point in the vicinity of the 8 wires? Why? How would you proceed to find the field at an arbitrary point P?

9.13.2 Magnetic Field of a Current Distribution from Ampere's Law

Consider the cylindrical conductor with a hollow center and copper walls of thickness $b - a$, as shown in Figure 9.13.2. The radii of the inner and outer walls are a and b respectively, and the current I is uniformly spread over the cross section of the copper.

(a) Calculate the magnitude of the magnetic field in the region outside the conductor, $r > b$. (Hint: consider the entire conductor to be a single thin wire, construct an Amperian loop, and apply Ampere's law.) What is the direction of $\vec{B}$?

Figure 9.13.2 Hollow cylinder carrying a steady current I.

(b) Calculate the magnetic field inside the inner radius, r < a. What is the direction of $\vec{B}$?

(c) Calculate the magnetic field within the conductor, a < r < b. What is the direction of $\vec{B}$?

(d) Plot the behavior of the magnitude of the magnetic field B(r) from r = 0 to $r = 4b$. Is B(r) continuous at r = a and r = b? What about its slope?

(e) Now suppose that a very thin wire running down the center of the conductor carries the same current I in the opposite direction. Can you plot, roughly, the variation of B(r) without another detailed calculation? (Hint: remember that the vectors $d\vec{B}$ from different current elements can be added to obtain the total vector magnetic field.)

9.13.3 Cylinder with a Hole

A long copper rod of radius a has an off-center cylindrical hole through its entire length, as shown in Figure 9.13.3. The conductor carries a current I which is directed out of the page and is uniformly distributed throughout the cross section. Find the magnitude and direction of the magnetic field at the point P.

Figure 9.13.3 A cylindrical conductor with a hole.

9.13.4 The Magnetic Field Through a Solenoid

A solenoid has 200 closely spaced turns so that, for most of its length, it may be considered to be an ideal solenoid. It has a length of 0.25 m, a diameter of 0.1 m, and carries a current of 0.30 A.

(a) Sketch the solenoid, showing clearly the rotation direction of the windings, the current direction, and the magnetic field lines (inside and outside) with arrows to show their direction. What is the dominant direction of the magnetic field inside the solenoid?

(b) Find the magnitude of the magnetic field inside the solenoid by constructing an Amperian loop and applying Ampere's law.

(c) Does the magnetic field have a component in the direction of the wire in the loops making up the solenoid? If so, calculate its magnitude both inside and outside the solenoid, at radii 30 mm and 60 mm respectively, and show the directions on your sketch.
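For the purely numerical parts of Problems 9.13.1(a) and 9.13.4(b), the arithmetic reduces to the Ampere's-law formulas summarized earlier in the chapter. The sketch below simply evaluates those formulas; it is not a substitute for the requested sketches and symmetry arguments.

```python
import math

mu0 = 4e-7 * math.pi

# Problem 9.13.1(a): infinite wire with I = 10 A, at r = 10 mm
print(mu0 * 10.0 / (2*math.pi*0.010))    # 2.0e-4 T (2 gauss)

# Problem 9.13.4(b): ideal solenoid, 200 turns over 0.25 m carrying 0.30 A
print(mu0 * 200 * 0.30 / 0.25)           # about 3.0e-4 T
```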
9.13.5 Rotating Disk

A circular disk of radius R with uniform charge density σ rotates with an angular speed ω. Show that the magnetic field at the center of the disk is

$B = \frac{1}{2}\mu_0\sigma\omega R$

Hint: Consider a circular ring of radius r and thickness dr. Show that the current in this element is $dI = (\omega/2\pi)\,dq = \omega\sigma r\,dr$.

9.13.6 Four Long Conducting Wires

Four infinitely long parallel wires carrying equal current I are arranged in such a way that when looking at the cross section, they are at the corners of a square, as shown in Figure 9.13.5. Currents in A and D point out of the page, and into the page at B and C. What is the magnetic field at the center of the square?

Figure 9.13.5 Four parallel conducting wires

9.13.7 Magnetic Force on a Current Loop

A rectangular loop of length l and width w carries a steady current $I_1$. The loop is then placed near an infinitely long wire carrying a current $I_2$, as shown in Figure 9.13.6. What is the magnetic force experienced by the loop due to the magnetic field of the wire?

Figure 9.13.6 Magnetic force on a current loop.

9.13.8 Magnetic Moment of an Orbital Electron

We want to estimate the magnetic dipole moment associated with the motion of an electron as it orbits a proton. We use a "semi-classical" model to do this. Assume that the electron has speed v and orbits a proton (assumed to be very massive) located at the origin. The electron is moving in a right-handed sense with respect to the z-axis in a circle of radius r = 0.53 Å, as shown in Figure 9.13.7. Note that 1 Å = $10^{-10}$ m.

Figure 9.13.7

(a) The inward force $m_e v^2/r$ required to make the electron move in this circle is provided by the Coulomb attractive force between the electron and the proton ($m_e$ is the mass of the electron). Using this fact, and the value of r given above, find the speed of the electron in our "semi-classical" model. [Ans: $2.18\times 10^6$ m/s.]

(b) Given this speed, what is the orbital period T of the electron? [Ans: $1.52\times 10^{-16}$ s.]

(c) What current is associated with this motion? Think of the electron as stretched out uniformly around the circumference of the circle. In a time T, the total amount of charge q that passes an observer at a point on the circle is just e. [Ans: 1.05 mA. Big!]

(d) What is the magnetic dipole moment associated with this orbital motion? Give the magnitude and direction. The magnitude of this dipole moment is one Bohr magneton, $\mu_B$. [Ans: $9.27\times 10^{-24}\ \text{A}\cdot\text{m}^2$, along the −z axis.]

(e) One of the reasons this model is "semi-classical" is because classically there is no reason for the radius of the orbit to assume the specific value we have given. The value of r is determined from quantum mechanical considerations, to wit that the orbital angular momentum of the electron can only assume integral multiples of h/2π, where $h = 6.63\times 10^{-34}\ \text{J}\cdot\text{s}$ is the Planck constant. What is the orbital angular momentum of the electron here, in units of h/2π?
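The bracketed answers to Problem 9.13.8 follow from elementary formulas, so they are easy to reproduce. The sketch below uses standard values for the constants (our choice of rounding) and simply chains parts (a)–(e) together.

```python
import math

# Physical constants (rounded)
e = 1.602e-19        # elementary charge, C
m_e = 9.11e-31       # electron mass, kg
k_e = 8.99e9         # Coulomb constant, N m^2 / C^2
h = 6.63e-34         # Planck constant, J s
r = 0.53e-10         # orbit radius, m

# (a) Coulomb force supplies the centripetal force: k e^2 / r^2 = m v^2 / r
v = math.sqrt(k_e * e**2 / (m_e * r))        # ~ 2.18e6 m/s
# (b) orbital period
T = 2*math.pi*r / v                          # ~ 1.52e-16 s
# (c) equivalent current
I = e / T                                    # ~ 1.05e-3 A
# (d) magnetic dipole moment (one Bohr magneton)
mu = I * math.pi * r**2                      # ~ 9.27e-24 A m^2
# (e) orbital angular momentum in units of h / (2 pi)
L = m_e * v * r
print(v, T, I, mu, L / (h/(2*math.pi)))      # the last value is ~ 1
```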
9.13.9 Ferromagnetism and Permanent Magnets

A disk of iron has a height h = 1.00 mm and a radius r = 1.00 cm. The magnetic dipole moment of an atom of iron is $\mu = 1.8\times 10^{-23}\ \text{A}\cdot\text{m}^2$. The molar mass of iron is 55.85 g, and its density is 7.9 g/cm³. Assume that all the iron atoms in the disk have their dipole moments aligned with the axis of the disk.

(a) What is the number density of the iron atoms? How many atoms are in this disk? [Ans: $8.5\times 10^{28}$ atoms/m³; $2.7\times 10^{22}$ atoms.]

(b) What is the magnetization $\vec{M}$ in this disk? [Ans: $1.53\times 10^{6}$ A/m, parallel to the axis.]

(c) What is the magnetic dipole moment of the disk? [Ans: $0.48\ \text{A}\cdot\text{m}^2$.]

(d) If we were to wrap one loop of wire around a circle of the same radius r, how much current would the wire have to carry to get the dipole moment in (c)? This is the "equivalent" surface current due to the atomic currents in the interior of the magnet. [Ans: 1525 A.]

9.13.10 Charge in a Magnetic Field

A coil of radius R with its symmetry axis along the +x-direction carries a steady current I. A positive charge q moves with a velocity $\vec{v} = v\,\hat{j}$ when it crosses the axis at a distance x from the center of the coil, as shown in Figure 9.13.8.

Figure 9.13.8

Describe the subsequent motion of the charge. What is the instantaneous radius of curvature?

9.13.11 Permanent Magnets

A magnet in the shape of a cylindrical rod has a length of 4.8 cm and a diameter of 1.1 cm. It has a uniform magnetization M of 5300 A/m, directed parallel to its axis.

(a) Calculate the magnetic dipole moment of this magnet.

(b) What is the axial field a distance of 1 meter from the center of this magnet, along its axis? [Ans: (a) $2.42\times 10^{-2}\ \text{A}\cdot\text{m}^2$; (b) $4.8\times 10^{-9}$ T, or $4.8\times 10^{-5}$ gauss.]

9.13.12 Magnetic Field of a Solenoid

(a) A 3000-turn solenoid has a length of 60 cm and a diameter of 8 cm. If this solenoid carries a current of 5.0 A, find the magnitude of the magnetic field inside the solenoid by constructing an Amperian loop and applying Ampere's law. How does this compare to the magnetic field of the earth (0.5 gauss)? [Ans: 0.0314 T, or 314 gauss, or about 600 times the magnetic field of the earth.]

We make a magnetic field in the following way: We have a long cylindrical shell of non-conducting material which carries a surface charge fixed in place (glued down) of $\sigma\ \text{C/m}^2$, as shown in Figure 9.13.9. The cylinder is suspended in a manner such that it is free to revolve about its axis, without friction. Initially it is at rest. We come along and spin it up until the speed of the surface of the cylinder is $v_0$.

Figure 9.13.9

(b) What is the surface current K on the walls of the cylinder, in A/m? [Ans: $K = \sigma v_0$.]

(c) What is the magnetic field inside the cylinder? [Ans: $B = \mu_0 K = \mu_0\sigma v_0$, oriented along the axis, right-handed with respect to the spin.]

(d) What is the magnetic field outside of the cylinder? Assume that the cylinder is infinitely long. [Ans: 0.]

9.13.13 Effect of Paramagnetism

A solenoid with 16 turns/cm carries a current of 1.3 A.

(a) By how much does the magnetic field inside the solenoid increase when a close-fitting chromium rod is inserted? [Note: Chromium is a paramagnetic material with magnetic susceptibility $\chi = 2.7\times 10^{-4}$.]

(b) Find the magnitude of the magnetization $\vec{M}$ of the rod. [Ans: (a) 0.86 µT; (b) 0.68 A/m.]
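Several of the bracketed answers above can be reproduced with a few lines of arithmetic. The sketch below works through Problem 9.13.9 and Problem 9.13.12(a); the constants (Avogadro's number, rounding of the density and molar mass) are our own choices, so the results agree with the quoted answers only to within rounding.

```python
import math

mu0 = 4e-7 * math.pi
N_A = 6.022e23

# Problem 9.13.9: iron disk, h = 1.00 mm, r = 1.00 cm
mu_atom = 1.8e-23            # A m^2 per atom
molar_mass = 55.85e-3        # kg/mol
density = 7.9e3              # kg/m^3

n = density / molar_mass * N_A            # (a) number density, ~ 8.5e28 atoms/m^3
V = math.pi * 0.01**2 * 1.0e-3            # disk volume, m^3
N_atoms = n * V                           #     ~ 2.7e22 atoms
M = n * mu_atom                           # (b) magnetization, ~ 1.5e6 A/m
mu_disk = N_atoms * mu_atom               # (c) dipole moment, ~ 0.48 A m^2
I_eq = mu_disk / (math.pi * 0.01**2)      # (d) equivalent loop current, ~ 1.5e3 A
print(n, N_atoms, M, mu_disk, I_eq)

# Problem 9.13.12(a): 3000 turns over 0.60 m carrying 5.0 A
B_solenoid = mu0 * 3000 * 5.0 / 0.60
print(B_solenoid, B_solenoid / 0.5e-4)    # ~ 0.0314 T, about 600 Earth fields
```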
从花树冠到凤冠—隋唐至明代后妃命妇冠饰源流考_首饰_礼服_公夫人 =============== X 新闻 体育 汽车 房产 旅游 教育 时尚 科技 财经 娱乐 更多母婴健康历史军事美食文化星座专题游戏搞笑动漫宠物 无障碍关怀版 登录 从花树冠到凤冠—隋唐至明代后妃命妇冠饰源流考 2022-07-10 19:37 来源: 博物馆世界 链接复制成功 发布于:北京市 摘要:对于今天的中国人来说,凤冠已经成为了传统女性礼服的标志性象征。但实际上在相当长的时间中,模拟自然的“花树”才是中国女性礼服冠中最核心的组成部分,“凤冠”则源于常服冠。 本文以新近修复成功的隋炀帝萧皇后冠饰为例,考证中古时期后妃花树冠花树、钿、博鬓组合模式的真正形态,及其形成与演变过程,并探讨花树冠与凤冠的不同概念与使用。 ▲ 扬州博物馆展出的萧后冠复原件 2013年,隋炀帝杨广和萧皇后墓在扬州被发现,成为当年最重要的考古发现之一。萧后墓中最吸引人的,便是一具腐蚀严重但保存完整的冠饰,被搬回实验室由陕西文物保护研究院开始进行清理修复。经过两年的工作,2016年9月正式召开新闻发布会,公开修复成果,并在扬州展示萧后的“凤冠”。 隋炀帝皇后萧氏出身于梁朝皇室,炀帝遇害后,流落叛军、东突厥,唐贞观四年(630年)归长安,历经四朝,贞观二十一年(647年)去世后被唐太宗以皇后礼与隋炀帝合葬扬州。墓中此冠应是初唐贞观所制,是极其难得的唐代后妃礼服冠实物。 ▲ 初唐皇后礼服首饰组合示意图 展开全文 若仔细观察,易发现一件蹊跷的事,这顶冠上完全不见“凤”的踪影?的确,在很长一段时间里,中国古代后妃居最高地位的礼服首饰中罕有凤鸟存在。 唐以前凤尚未完全成为高贵女性身份的象征,而对自然环境元素的直接模拟,便成为了礼服冠的主要装饰构成手法,头上往往是一派花草树木、鸟语花香、飞禽走兽场景,其中最真正的核心组件就是由步摇发展而来的“花树”。在花树的基础上,历代添加元素,发展成为极盛大隆重的礼服冠。 ▲ 萧后冠饰原件 汉代皇后首饰采用假结(髻)、步摇、簪珥模式,魏晋南北朝陆续增加钿、博鬓,并将步摇改称花树;隋唐在汉晋南北朝以来各朝制度的基础上,确立了花树、钿、钗、博鬓的组合模式,并且以花树、钿的数目区分等级;宋明继续添饰龙凤、仙人、鸟雀,但依然保存了花树、钿、博鬓的基本元素。 而后世的凤冠,起先并非用于礼服,而源自于隋唐时期的另一种常服首饰。两者并行不悖,演着两条路线各自演变了上千年。 ▲ 唐,鎏金菊花纹银钗一对,陕西历史博物馆藏 一、从花树冠到凤冠 隋文帝即位(581年)后,在北齐、北周制度基础上,参照损益南朝制度,初步颁布了新的服令。定皇后服为袆衣、鞠衣、青服、朱服四等,其中用于祭祀、朝会、亲蚕等大礼的袆衣、鞠衣,首饰由花树、两博鬓组成,以花树数目不同区分等级,皇后花十二树,对应皇帝衮冕十二旒,以下依等级分别为九、八、七、六、五、三树;用于礼见皇帝、宴见宾客的次等礼服青服、朱服,则“去花”不使用花树。 摘录《隋书·卷十二志第七·礼仪七》首饰制度如下:皇后首饰,花十二树。……青衣,青罗为之,去花。朱衣,绯罗为之,制如青衣。皇太子妃,公主,王妃,三师、三公及公夫人,一品命妇,并九树。侯夫人,二品命妇,并八树。 伯夫人,三品命妇,并七树。子夫人,世妇及皇太子昭训,四品已上官命妇,并六树。男夫人,五品命妇,五树。女御及皇太子良娣,三树。(自皇后已下,小花并如大花之数,并两博鬓也。) ▲ 隋代开皇、大业后妃命妇礼服首饰制度等级对比 隋炀帝即位后,于大业元年(605年)诏吏部尚书牛弘等更定服制。由于后宫内命妇等级制度发生变动,也对嫔妃首饰制度进行微调。 皇后礼服首饰维持了北朝花树、花钿、博鬓组合,内外命妇首饰则参照南朝制度为花钿、博鬓组合,其数目与品级对应也略做调整,原视为一品九树的公夫人改为二品八钿,原二品八树的侯夫人改为三品七钿。另外后妃内命妇、皇太子妃首饰均有二博鬓,外命妇则未说明。 《隋书·卷十二志第七·礼仪七》首饰制度如下:皇后服……袆衣,首饰花十二钿,小花毦十二树,并两博鬓。祭及朝会,凡大事皆服之。鞠衣,小花十二树。余准袆衣,亲蚕服也。贵妃、德妃、淑妃,是为三妃。 首饰花九钿,并二博鬓。顺仪、顺容、顺华、修仪、修容、修华、充仪、充容、充华,是为九嫔。首饰花八钿,并二博鬓。婕妤,首饰花七钿。美人、才人,首饰花六钿,并二博鬓。宝林,首饰花五钿,并二博鬓。皇太子妃,首饰花九钿,并二博鬓。 诸王太妃、妃、长公主、公主、三公夫人、一品命妇,首饰花九钿,公夫人,县主、二品命妇,首饰八钿。侯、伯夫人、三品命妇,首饰七钿。子夫人、四品命妇,首饰六钿。男夫人、五品命妇,首饰五钿。 ▲ 隋代开皇、大业后妃命妇礼服首饰制度等级对比 唐代建立之后,高祖武德七年(624年)颁布了《武德令》,以国家令文的形式第一次规定唐代礼服制度,其中便有涉及后妃命妇首饰的相关条文;开元二十年(732年)年颁布的《大唐开元礼·序列》中也记录了“皇后王妃内外命妇服及首饰制度”;开元二十六年(738年)《唐六典》中的《内官、宫官、内侍省·尚服局》以及《尚书礼部》中也分别详细记录了后妃与内外命妇的礼服制度。 ▲ 初唐皇后礼服首饰穿戴及全套礼服示意图 以上三种属性的令、礼、行政法典中关于礼服首饰的记载基本相同,摘录比对后可得唐代后妃命妇首饰制度如下。 皇后服:袆衣,首饰花十二树(小花如大花之数,并两博鬓),受册、助祭、朝会诸大事,则服之。鞠衣,首饰与褘衣同,亲蚕则服之。钿钗礼衣,十二钿,宴见宾客,则服之。 皇太子妃服:褕翟,首饰花九树(小花如大花之数,并两博鬓),受册、助祭、朝会诸大事,则服之。鞠衣,首饰与褘衣同,从蚕则服之。钿钗礼衣,九钿。宴见宾客,则服之。 内外命妇服:翟衣,花钗(施两博鬓,宝钿饰)。第一品花钗九树(宝钿准花数,以下准此);第二品花钗八树,第三品花钗七树,第四品花钗六树,第五品花钗五树,内命妇受册、从蚕、朝会,则服之。 其外命妇嫁及受册、从蚕、大朝会,亦准此。钿钗礼衣,第一品九钿,第二品八钿,第三品七钿,第四品六钿,第五品五钿。 内命妇寻常参见、外命妇朝参、辞见及礼会,则服之。六尚、宝林、御女、采女官等服礼衣,无首饰佩绶。 凡婚嫁花钗礼衣,六品已下妻及女嫁则服之;(其钗覆笄而已。其两博鬓任以金、银、杂宝为饰。)其次花钗礼衣,庶人女嫁则服之。(钗以金、银涂,琉璃等饰。) ▲ 唐代博鬓实例 以上制度原文虽繁,但归纳后可以了解,隋唐后妃命妇礼服首饰可分为完整版和简省版两类,分别用于头等礼服和次等礼服,基本构件包括博鬓和数目不等的花树、钿、钗。 头等礼服,即皇后袆衣、鞠衣,皇太子妃褕翟、鞠衣,和内外命妇翟衣。适用于受册、助祭、朝会、亲蚕(从蚕)等最重要的礼仪场合。 ▲ 唐末,头插花钗的女供养人,甘肃敦煌莫高窟9窟 其首饰由完整版的花树(花钗)、宝钿、博鬓组成。(单从令文看,按身份细分有又两种模式,皇后与皇太子妃为大小花树、左右两博鬓模式,内外命妇则为花钗、宝钿、左右两博鬓模式。 花树或花钗、宝钿的数目自皇后而下依品级递减,分别为十二、九、八、七、六、五,配置隆重而华丽,是后世后妃礼服冠的雏形。 次等礼服,为钿钗礼衣,即隋代的青服、朱服。适用于皇后、皇太子妃宴见宾客,内命妇寻常参见,外命妇朝参、辞见、礼会等相对次要性礼仪场合。其首饰也与隋代相似,仅保留数目不等的钿,去除了花树或花钗、博鬓,是相对简省的首饰模式。 ▲ 唐五代贵妇常服、盛装首饰中的凤鸟和口衔珠结的凤簪 二、何为花树、钿和博鬓 那么文献里屡被提及的花树、钿、博鬓到底是什么样的?这个问题是中国古代首饰史中长期未明的难题之一。 以往由于没有任何宋以前后妃礼服画像存留,壁画、陶俑也极少涉及礼仪场合后妃形象,出土首饰实物基本为零碎残件残片,少有属于可以与礼服配套的部分,所以对于中古后妃首饰的研究长期只能停留在文献层面。至于花树、钿的对应,在资料不足的情况下,一直有着各种讹误已久的推测。 近年来,随着陆续几批唐代礼服首饰的完整出土,隋唐礼服首饰构件和组合的实际形态逐渐明朗,并可以初步复原。其中经过科学发掘出土者,包括前文所提隋炀帝皇后萧氏首饰一具,以及二品蜀国公夫人贺若氏首饰一具、五品县君裴氏首饰一具。 另外还有欧洲私人所藏唐七钿七花树冠一具,保利拍卖北周至唐七钿冠一具,香港关善明博士藏唐宝钿花树残件。尤其萧后冠的出土,为大量不明首饰提供了依据。下面就依次看看三者的形态。 ▲ 蜀国公夫人贺若氏墓出土首饰零件,与萧后花朵类似 
(1) The flower tree. The flower tree is the most important element. In jewelry studies it has long been identified with a type of flower hairpin that is extremely common on donor figures in late-Tang and Five Dynasties murals at Dunhuang and is frequently excavated, usually in mirrored pairs with openwork sheet-metal heads. On closer examination this identification cannot stand. First, such hairpins were fashionable only in the mid-to-late Tang: the earliest examples come from mid-Tang tombs around Xi'an and Luoyang, and in murals they appear only on mid- and late-Tang donors at Dunhuang — a short-lived fashion, not a long-standing institution. Second, they are flat metal sheets, which neither matches the texts' mention of glass ornament nor looks anything like a "tree." Third, and most important, in the murals these hairpins always appear with festive, non-ceremonial dress, worn casually, and ordinary donors sometimes wear more of them than the number of flower trees allotted to empresses and consorts. The flower tree was the most solemn element of the Sui-Tang grand ceremonial headdress and cannot be conflated with an ordinary flower hairpin.

▲ A flower tree seen from the side, showing the spiral stems, the short wooden cylinder and the pin protruding below

Now consider the newly discovered crown of Empress Xiao. According to the restoration materials published by the Shaanxi institute, thirteen clusters of floral ornament are mounted on the crown's frame. The base of each cluster is wrapped around a short wooden cylinder about 3 cm in diameter with a copper tube as its core, from which spring twelve spiral, spring-like stems. Each stem ends in a flower of gilt copper foil, ornamented with glass pistils, tiny stone figurines and fine leaves, and at the center sits one larger jeweled flower — thirteen small flowers in all, as the restoration photographs show. The pin of the central flower passes through the wooden base so the whole cluster can be fixed to the frame. Careful comparison with the texts yields several new conclusions.

▲ Schematic of the flower tree, the dian and the bobin

First, this kind of component — a bundle of spiral stems that sways with every step — is the long-obscure Sui-Tang "flower tree," and it must derive from the buyao of Han empresses' headdress. The Han-era Shiming explains: "the buyao has hanging pearls above, which sway as one walks." It was an ornament of gold and silver, pearl and jade flowers and leaves set on upright metal branches. The buyao probably originated in Central or Western Asia, reached the Central Plains around the Han, spread to Northeast Asia and Japan, circulated across Eurasia and evolved into various royal crowns, as scholars have long discussed in detail.

▲ The buyao in the Admonitions of the Court Instructress scroll

Under the Han the buyao became the highest ceremonial hair ornament of empresses and senior princesses. The "Treatise on Carriages and Robes" of the Book of the Later Han describes the empress's temple-visit headdress: "a buyao with a gold mountain-shaped base, strung with white pearls, with intertwining cassia branches, one bird and nine flowers, and six beasts — bear, tiger, red bear, celestial deer, bixie, and the great bull of the Southern Mountains" — a rich configuration of flowers, birds and beasts on a golden base, but with no graded numbers tied to rank. The Wei, Jin and Northern and Southern Dynasties largely continued the buyao, "commonly called the pearl pine." The Northern Zhou first introduced the term "flower tree" and a clear numerical hierarchy — the empress's twelve flower trees matching the emperor's twelve crown pendants, with the numbers decreasing below her. As the Book of Sui records, under the Northern Zhou "the empress's flowers all have twelve trees; the wives of the feudal lords likewise take their number from their husbands' rank."

▲ Jin-dynasty tree-shaped buyao, Liaoning Provincial Museum

The Sui and Tang inherited the name "flower tree" and refined the gradations. The new find shows, however, that the Sui-Tang flower tree had begun to differ from the Han-Jin buyao: instead of dangling pearls or leaves from branches, the flowers themselves sit on springy spiral stems — still "swaying with every step," and genuinely tree-like. This also makes sense of the "unidentified floral ornaments" from a number of Tang tombs of titled ladies: the tomb of Yan Wan, consort of Prince Pu, at Yunxian in Hubei; the tomb of Lady Heruo, Duchess of Shu, at Xianyang; the tomb of Lady Pei, wife of Yan Shiwei, at Xi'an; and the tomb of the District Mistress of Jinxiang at Xi'an each yielded hundreds of flowers, pistils, petals, leaves and jewel fragments, evidently flowers scattered from flower trees whose bases had decayed. The crowns of Lady Pei and the District Mistress of Jinxiang also had small human figures and birds set among the flowers, a device that would flourish in the Song. Most precious of all, Yan Wan's tomb preserved one flower tree still with its base, very close in construction to Empress Xiao's, its flowers and pistils each of different form.

▲ The thirteen small flowers come in several varieties; some have tiny figurines standing at their centers

Second, the clause "the small flowers equal the large flowers in number" has usually been read as "the number of small flower trees equals that of the large flower trees," i.e. the empress wore twenty-four trees in all — a reading later adopted by the Song and Ming rules, which explicitly note "twenty-four large and small flower sprays" or "twelve each front and back." But Empress Xiao's headdress suggests that, at least from the Sui to the early Tang, the phrase more likely meant "within each large flower tree, the small flowers equal the total number of large trees": with twelve trees, each tree carries twelve small flowers. That said, Empress Xiao's crown has thirteen clusters of thirteen flowers each — one more than the contemporary empress's quota. The reason is unclear; it may reflect Emperor Taizong's decision to honor the former dynasty's empress one grade above the rule.

▲ Flower tree and assorted flowers excavated from the tomb of Yan Wan, consort of Prince Pu of Tang

(2) The dian. Besides the flower trees, twelve drop-shaped plaques were found on Empress Xiao's crown, each with a floral design inlaid in glass or in jade, stone and shell, bordered with pearls, with a socket soldered to the center of the back; they were mounted on the frame in three rows. These plaques must be the "dian" of the texts. The "baodian" of Tang usage normally denotes a sumptuous ornament made by carving gems and shell into small floral plaques and setting them, within outlines of gold wire, onto a metal mount — the technique described in the Famen Temple inventory of offerings as "decorated with gold cells, baodian and pearls" on the casket holding the Buddha's relic, which matches the surviving object. The dian goes back at least to the Wei-Jin period, which added to the Han combination of false chignon, buyao and hairpins the concepts of the chignon cover and of a graded number of dian of gold and jade: under the Jin the empress had the great chignon, buyao and twelve dian, the crown princess nine, the guiren, guipin and furen seven, the nine concubines, princesses and noble ladies five, and the shifu three. This system was continued through the Southern and Northern Dynasties into the Sui, with the gradations further refined so that all titled ladies of the fifth rank and above were distinguished by their number of dian.

▲ The varied flowers on the flower trees of Empress Xiao's crown

(3) The bobin. Finally, the bobin. Its position is clear and its identification has never been disputed: the curved plaques hanging at the two sides of the head. Sui-Tang bobin are usually long S-curved strips, pointed at the outer end and narrowing inward, decorated like the baodian with inlaid jewels — hence the statutes' phrase "two bobin, ornamented with baodian" — sometimes with rows of small flowers along the upper edge. Empress Xiao's crown offers a new clue to the bobin's origin. Unlike Ming bobin, which hang at the left and right behind the head, hers are inserted at the two sides of the ring-shaped base near the temples; their original function may be connected with the jeweled ribbons that tied on a crown and hung down over the temples, a very common element of bodhisattva crowns in Northern Dynasties sculpture. Translated into jewelry, they became metal-and-gem pendants still hanging beside the temples at the base of the crown.

▲ Several excavated Tang dian

3. Song and Ming ceremonial crowns, with dragons and phoenixes added

Chinese ritual dress has enormous historical inertia. Once a basic rule was written into the dynastic rituals and statutes it became part of the civilization's fundamental law and could be carried along for a thousand years; later ages mostly argued over interpretation and operational detail, or adjusted and supplemented without violating the principle. The flower-tree crown, the highest form of women's headdress, was no exception, though new elements kept accreting over the centuries.

The Tang statutes' basic scheme for the ceremonial crown was flower trees plus bobin, and the Kaibao Li issued at the start of the Northern Song simply repeated it. But the Song introduced one major change: dragons and phoenixes were added to the crown. The Zhenghe Wuli Xinyi, after repeating the Tang wording for the empress ("headdress of twelve flower sprays, the small flowers equal in number, with two bobin"), adds the sentence "the crown is ornamented with nine dragons and four phoenixes"; for the consorts the dragons become hui pheasants, "nine hui and four phoenixes." Such crowns are sometimes called outright the "dragon-and-phoenix flower-hairpin crown," the "nine-dragon four-phoenix crown," or the "nine-dragon twelve-flower-spray crown." Practice, moreover, developed further embellishments never written into the ritual codes.

▲ Empress Xiao's bobin, original and reproduction: the long plaques hanging at the left and right of the ring-mouth

Looking closely at the surviving portraits of Song empresses, one sees that besides the large and small flower sprays covering the whole crown, the bobin have multiplied to three blades on each side, decorated with pearl-and-kingfisher dragons and hung with strings of pearls; the nine dragons added at the top comprise eight small dragons flanking one large central dragon holding a beaded tassel-ball in its mouth; the four phoenixes sometimes carry immortal maidens on their backs, and sometimes grow to nine; and the occasional little figures and birds among Tang flower trees have expanded into a grand "procession of the Queen Mother's immortals," with cranes, xichi ducks, egrets and peacocks — an ever larger and more literal scene.

After the fall of Kaifeng the Northern Song emperors, empresses and their full regalia were carried off to the Jin, whose institutions largely copied the Song; ironically, the Da Jin Jili preserves an extremely detailed description of the Song-style empress's crown, which matches the late Northern Song portraits almost exactly: "The empress's crown: a flower-spray crown, with one base, faced with blue gauze and lined with blue silk and gold-red gauze; with nine dragons and four phoenixes, the large dragon in front holding a tassel-ball in its mouth; twelve flower sprays in front and twelve behind; with xichi ducks, peacocks, cloud-cranes, the procession of the Queen Mother's immortals and trembling inserted petals; behind, a nayan; above, gold cicadas and two gold-chased bobin; all worked in kingfisher inlay, powder drops, gold tracery and pearl knotting; below, a gold ring-mouth set with seven-jewel dian cells, two gold dian cells behind, threaded with a red gauze band spread with gold."

▲ Left: jeweled ribbons tied and hanging before the temples on a Northern Qi bodhisattva image; right: a bobin excavated from the Northern Qi tomb of Lou Rui, with a floral knot visible at the top

The early Ming ceremonial crowns of empresses and consorts essentially continued the Song system: nine dragons and four phoenixes for the empress, nine hui and four phoenixes for consorts. The rule of the first year of Hongwu reads: "the empress's headdress: a crown with a round frame covered in kingfisher feathers, ornamented above with nine dragons and four phoenixes, twelve large flower sprays and as many small, two bobin, and twelve dian." The extra-statutory Song additions — the immortal procession, the cloud-cranes — were no longer added, though in practice, as in the Song, the numbers of dragons and phoenixes often grew, and some conventions went unrecorded.

▲ Half-length portrait of the empress of Emperor Gaozong of Song

4. The phoenix crown, promoted from everyday headdress

The traditional Chinese women's ceremonial crown came to a definitive end with the fall of the Ming. Although phoenixes had by then appeared on it, the phoenix crown as we now imagine it actually has little direct connection with the ceremonial crowns described above. Here one must grasp a basic distinction in the development of traditional women's dress: the two great systems of ceremonial dress (lifu) and everyday dress (changfu). Everything discussed so far belongs to the ceremonial system, whose grand robes follow the archaic model — gown, knee cover, pendants, ribbons and many other traditional components — topped by the traditional flower-tree crown.

▲ Reproduction of Empress Xiao's crown; when exhibited, the decorated face was placed toward the back
From the Jin and Tang onward, however, women's daily wear was a different kind of "fashion": the robe (shan), skirt and stole (pizi), with hair ornaments worn at will. For occasions that fell outside the prescriptions of the ritual codes yet were grander than everyday life, a relatively sumptuous festive dress gradually took shape on this basis, elaborate in craft and pattern, and a phoenix ornament would sometimes be worn at the center of the head. The phoenix gradually became an emblem of noble womanhood and appeared ever more often on jewelry; it is very common on noblewomen and donors in murals and line engravings from the High Tang onward.

▲ Ming first-rank titled lady in a five-pheasant crown, wearing the red great robe and crane xiapei

Sometimes horizontal phoenix-head pins with hanging strings of pearls were also inserted at the left and right, a practice traceable as far back as the Han empress dowager's headdress: "one horizontal pin on each side, with a tortoiseshell prong a foot long, ending in a floral boss surmounted by a phoenix-bird, its plumage of kingfisher feather, with white pearls hanging below and gold clasps." Such ornaments served occasions that were not ritual yet relatively formal — akin to the later category of "auspicious dress" — and for a long time they had no place in the ritual codes. The images also show that, however splendid, this jewelry was worn with skirt, robe and stole, not with ceremonial dress.

▲ Schematic of the ornaments added to the Northern Song empress's ceremonial crown (side view; only five dragons and two phoenixes visible)

The headdress of the imperial clanswoman Li Chui, excavated in Xi'an in 2001, belongs to this class of festive jewelry: among its components are two phoenix wings and two upswept tails, a central floral ornament, and several long pins tipped with small phoenixes. Because the original arrangement had been crushed and distorted by silt, the long pins were mounted crosswise in the published reconstruction, but the original wearing was more probably the horizontal insertion seen in murals. Among the richly dressed donor ladies of late-Tang and Five Dynasties Dunhuang, this scheme — a large central phoenix with flowers and leaves, plus horizontal pins at the sides trailing pearl strings — gradually became fixed.

▲ Phoenix-shaped crown ornament excavated from the High Tang tomb of the imperial clanswoman Li Chui

The Tang everyday ensemble of robe, skirt and stole developed in the Five Dynasties and Song into the large-sleeved robe, xiapei and long skirt, and in the Northern Song it entered the statutes as the "everyday dress" of empresses and consorts. The early Ming, building on this, fixed the great robe and xiapei as the consorts' everyday dress, also called the "dress for repose" (yanju fu); the "crown for repose" worn with it continued the Tang festive scheme, its core components being phoenixes of various kinds together with phoenix pins at the left and right trailing long pearl strings.

▲ Left: the empress's nine-dragon four-phoenix crown in the Da Ming Jili; right: the crown princess's nine-hui four-phoenix crown in the Ming Gong Guanfu Yizhang Tu

The early Hongwu everyday crowns distinguished rank by the kinds of birds: paired phoenixes flanking a dragon for the empress, luan-phoenixes for consorts, and for the ranks below graded numbers of di pheasants, peacocks, mandarin ducks and lianque magpies. Before long, however, Zhu Yuanzhang judged the ritual apparatus too elaborate, abolished the mianfu of officials below the emperor, and correspondingly abolished the traditional ceremonial dress of titled ladies below the empress and crown princess. In the twenty-fourth year of Hongwu the great robe and xiapei, originally everyday dress, were promoted to the ceremonial dress of titled ladies, and the crown was further simplified into a uniform "pheasant crown" (zhai guan) whose rank was marked by the number of di pheasants — a bird very close to the phoenix in form. Thus the pattern was set — phoenix crowns for empresses and imperial consorts, pheasant crowns for titled ladies — and it lasted to the end of the Ming.

▲ Nine-dragon nine-phoenix crown of Empress Xiaoduanxian, excavated from the Ming Dingling mausoleum

The empress's "double-phoenix-flanking-dragon crown," for instance, is specified as: "ornamented above with one gold dragon and two pearl-and-kingfisher phoenixes, all holding pearl drops in their mouths; in front and behind, two pearl peony blossoms with eight buds and thirty-six kingfisher leaves; two pearl-and-kingfisher temple sprays; twenty-one pearl-and-kingfisher cloud plaques; one kingfisher ring-mouth; nine gold baodian flowers, each set with nine pearls; a pair of gold phoenixes holding pearl strings in their beaks; three bobin, ornamented with luan-phoenixes and twenty-four gold baodian, edged with pearl drops; a pair of gold pins; and a set of coral phoenix-beak ornaments." Its defining features — the two pearl-kingfisher phoenixes beside the central peony, the gold dragon at the top, and the gold phoenixes inserted at the sides with pearl strings in their beaks — remain close to the festive headdress from High Tang tombs six or seven centuries earlier, while masses of pearl-and-kingfisher clouds, flowers and leaves serve as subsidiary ornament, and even the dian and bobin of the ceremonial crown are incorporated.

▲ The Ming empress's everyday and ceremonial dress in different periods, compared

The Ming nevertheless kept the two types distinct: the four "phoenix crowns" of the two empresses excavated from Dingling in fact comprise two ceremonial crowns and two crowns for repose. Late-Ming portraits do occasionally mix them — Empress Xiaoyizhuang, née Li, consort of Muzong, is portrayed in a yellow great robe with a ceremonial crown — perhaps reflecting the confusion at the time between the dress regulations and actual practice.

▲ Portrait of Empress Xiaoduanxian of the Ming

Compare the first-rank lady's "five-pheasant crown": "first rank: a crown with gold fittings, five pearl pheasants, two full-blown pearl peonies and three half-open ones, twenty-four kingfisher clouds, eighteen kingfisher peony leaves, one kingfisher ring-mouth bearing eight gold baodian flowers, and two gold pheasants holding two pearl strings in their beaks." Relative to the empress's crown, the titled lady's replaces the gold phoenixes with gold pheasants and the pearl phoenixes with pearl pheasants, their number varying with rank. Because the pheasant looks so much like the phoenix, popular speech came to call the pheasant crown, too, a phoenix crown.

▲ Qing portrait of a Han Chinese titled lady in phoenix crown and xiapei

In the Qing all the traditional court dress of empresses and consorts, ceremonial and everyday alike, disappeared, but Han Chinese women continued the Ming pheasant-crown tradition at weddings and other great rites, now — in form and in name — simply the phoenix crown: "phoenix crown and xiapei" became the byword for Han women's wedding dress. As the Qingbai Leichao puts it, "under the present dynasty the Han still use it; whether among official families or commoners, when their sons marry, the bride must wear the phoenix crown and xiapei." Grand though it became as ceremonial wear, its source lies in the everyday dress of Tang women.

Source: Dazhong Kaogu (Popular Archaeology); text by Chen Shiyu
131
Published Time: 2018-05-06T16:22:36Z
4.2: Multiplicative Number Theoretic Functions - Mathematics LibreTexts
===============
4.2: Multiplicative Number Theoretic Functions
Last updated Jul 7, 2021 · Page ID 8839 · Wissam Raji, American University of Beirut

Table of contents: 1. The Euler ϕ-Function · 2. The Sum-of-Divisors Function · 3. The Number-of-Divisors Function · 4. Contributors and Attributions

We now present several multiplicative number theoretic functions which will play a crucial role in many number theoretic results. We start by discussing the Euler phi-function, which was defined in an earlier chapter. We then define the sum-of-divisors function and the number-of-divisors function, along with their properties.

The Euler ϕ-Function

As defined earlier, the Euler $\phi$-function counts the number of integers smaller than and relatively prime to a given integer. We first calculate the value of the $\phi$-function at primes and prime powers.

If $p$ is prime, then $\phi(p)=p-1$. Conversely, if $p$ is an integer such that $\phi(p)=p-1$, then $p$ is prime.

The first part is obvious, since every positive integer less than $p$ is relatively prime to $p$. Conversely, suppose that $p$ is not prime. Then $p=1$ or $p$ is a composite number. If $p=1$, then $\phi(p)\neq p-1$. If $p$ is composite, then $p$ has a divisor $d$ with $1<d<p$, so not every positive integer less than $p$ is relatively prime to $p$, and thus $\phi(p)\neq p-1$. We have a contradiction and thus $p$ is prime.

We now find the value of $\phi$ at prime powers.

Let $p$ be a prime and $m$ a positive integer; then $\phi(p^m)=p^m-p^{m-1}$.

The integers less than or equal to $p^m$ that are not relatively prime to $p^m$ are exactly the multiples of $p$, namely $p, 2p, 3p, \ldots, p^{m-1}p$. There are $p^{m-1}$ of them, and hence
\[\phi(p^m)=p^m-p^{m-1}. \tag{4.2.1}\]

Example: $\phi(7^3)=7^3-7^2=343-49=294$. Also $\phi(2^{10})=2^{10}-2^9=512$.

We now prove that $\phi$ is a multiplicative function.

Let $m$ and $n$ be two relatively prime positive integers. Then $\phi(mn)=\phi(m)\phi(n)$.

Denote $\phi(m)$ by $s$ and let $k_1,k_2,\ldots,k_s$ be a reduced residue system modulo $m$. Similarly, denote $\phi(n)$ by $t$ and let $k'_1,k'_2,\ldots,k'_t$ be a reduced residue system modulo $n$. Notice that if $x$ belongs to a reduced residue system modulo $mn$, then
\[(x,m)=(x,n)=1. \tag{4.2.2}\]
Thus
\[x\equiv k_i \pmod m \quad\text{and}\quad x\equiv k'_j \pmod n \tag{4.2.3}\]
for some $i,j$. Conversely, if
\[x\equiv k_i \pmod m \quad\text{and}\quad x\equiv k'_j \pmod n \tag{4.2.4}\]
for some $i,j$, then $(x,mn)=1$ and thus $x$ belongs to a reduced residue system modulo $mn$.
Thus a reduced residue system modulo $mn$ can be obtained by determining all $x$ that are congruent to $k_i$ and $k'_j$ modulo $m$ and $n$ respectively. By the Chinese remainder theorem, the system of equations
\[x\equiv k_i \pmod m \quad\text{and}\quad x\equiv k'_j \pmod n \tag{4.2.5}\]
has a unique solution modulo $mn$. Thus different pairs $i,j$ yield different solutions, and hence $\phi(mn)=st$.

We now derive a formula for $\phi(n)$. Let $n=p_1^{a_1}p_2^{a_2}\cdots p_s^{a_s}$ be the prime factorization of $n$. Then
\[\phi(n)=n\left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)\cdots\left(1-\frac{1}{p_s}\right). \tag{4.2.6}\]
By Theorem 37, we can see that for all $1\leq i\leq s$,
\[\phi(p_i^{a_i})=p_i^{a_i}-p_i^{a_i-1}=p_i^{a_i}\left(1-\frac{1}{p_i}\right). \tag{4.2.7}\]
Thus by Theorem 38,
\begin{align*}
\phi(n)&=\phi(p_1^{a_1}p_2^{a_2}\cdots p_s^{a_s})\\
&=\phi(p_1^{a_1})\phi(p_2^{a_2})\cdots\phi(p_s^{a_s})\\
&=p_1^{a_1}\left(1-\frac{1}{p_1}\right)p_2^{a_2}\left(1-\frac{1}{p_2}\right)\cdots p_s^{a_s}\left(1-\frac{1}{p_s}\right)\\
&=p_1^{a_1}p_2^{a_2}\cdots p_s^{a_s}\left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)\cdots\left(1-\frac{1}{p_s}\right)\\
&=n\left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)\cdots\left(1-\frac{1}{p_s}\right).
\end{align*}

Example:
\[\phi(200)=\phi(2^3 5^2)=200\left(1-\frac{1}{2}\right)\left(1-\frac{1}{5}\right)=80. \tag{4.2.8}\]

Let $n$ be a positive integer greater than 2. Then $\phi(n)$ is even.

Let $n=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$. Since $\phi$ is multiplicative,
\[\phi(n)=\prod_{j=1}^{k}\phi(p_j^{a_j}). \tag{4.2.9}\]
By Theorem 39, we have
\[\phi(p_j^{a_j})=p_j^{a_j-1}(p_j-1). \tag{4.2.10}\]
If some $p_j$ is an odd prime, then $p_j-1$ is even and hence $\phi(p_j^{a_j})$ is even. Otherwise $n=2^{a_1}$ with $a_1\geq 2$ (since $n>2$), and $\phi(n)=2^{a_1-1}$ is even. Hence $\phi(n)$ is even.

Let $n$ be a positive integer. Then
\[\sum_{d\mid n}\phi(d)=n. \tag{4.2.11}\]

Split the integers from 1 to $n$ into classes: put an integer $m$ in the class $C_d$ if the greatest common divisor of $m$ and $n$ is $d$. The number of integers in the class $C_d$ is the number of positive integers not exceeding $n/d$ that are relatively prime to $n/d$, so there are $\phi(n/d)$ integers in $C_d$. Thus we see that
\[n=\sum_{d\mid n}\phi(n/d). \tag{4.2.12}\]
As $d$ runs over all divisors of $n$, so does $n/d$. Hence
\[n=\sum_{d\mid n}\phi(n/d)=\sum_{d\mid n}\phi(d). \tag{4.2.13}\]

The Sum-of-Divisors Function

The sum-of-divisors function, denoted by $\sigma(n)$, is the sum of all positive divisors of $n$.

Example: $\sigma(12)=1+2+3+4+6+12=28$.

Note that we can express $\sigma(n)$ as $\sigma(n)=\sum_{d\mid n}d$. We now prove that $\sigma(n)$ is a multiplicative function.

The sum-of-divisors function $\sigma(n)$ is multiplicative.

We have proved in Theorem 35 that the summatory function of a multiplicative function $f$ is itself multiplicative. Thus let $f(n)=n$ and notice that $f(n)$ is multiplicative. As a result, $\sigma(n)$ is multiplicative.

Knowing that $\sigma(n)$ is multiplicative, it remains to evaluate $\sigma(n)$ at powers of primes, from which we can derive a formula for its value at any positive integer.

Let $p$ be a prime and let $n=p_1^{a_1}p_2^{a_2}\cdots p_t^{a_t}$ be a positive integer. Then
\[\sigma(p^a)=\frac{p^{a+1}-1}{p-1}, \tag{4.2.14}\]
and as a result,
\[\sigma(n)=\prod_{j=1}^{t}\frac{p_j^{a_j+1}-1}{p_j-1}. \tag{4.2.15}\]

Notice that the divisors of $p^a$ are $1,p,p^2,\ldots,p^a$.
Thus
\[\sigma(p^a)=1+p+p^2+\cdots+p^a=\frac{p^{a+1}-1}{p-1}, \tag{4.2.16}\]
where the above sum is the sum of the terms of a geometric progression. Now since $\sigma(n)$ is multiplicative, we have
\begin{align*}
\sigma(n)&=\sigma(p_1^{a_1})\sigma(p_2^{a_2})\cdots\sigma(p_t^{a_t})\\
&=\frac{p_1^{a_1+1}-1}{p_1-1}\cdot\frac{p_2^{a_2+1}-1}{p_2-1}\cdots\frac{p_t^{a_t+1}-1}{p_t-1}\\
&=\prod_{j=1}^{t}\frac{p_j^{a_j+1}-1}{p_j-1}.
\end{align*}

Example:
\[\sigma(200)=\sigma(2^3 5^2)=\frac{2^4-1}{2-1}\cdot\frac{5^3-1}{5-1}=15\cdot 31=465.\]

The Number-of-Divisors Function

The number-of-divisors function, denoted by $\tau(n)$, is the number of positive divisors of $n$.

Example: $\tau(8)=4$.

We can also express $\tau(n)$ as $\tau(n)=\sum_{d\mid n}1$. We can also prove that $\tau(n)$ is a multiplicative function.

The number-of-divisors function $\tau(n)$ is multiplicative.

By Theorem 36, with $f(n)=1$, $\tau(n)$ is multiplicative.

We also find a formula that evaluates $\tau(n)$ for any integer $n$.

Let $p$ be a prime and let $n=p_1^{a_1}p_2^{a_2}\cdots p_t^{a_t}$ be a positive integer. Then
\[\tau(p^a)=a+1, \tag{4.2.17}\]
and as a result,
\[\tau(n)=\prod_{j=1}^{t}(a_j+1). \tag{4.2.18}\]

The divisors of $p^a$, as mentioned before, are $1,p,p^2,\ldots,p^a$. Thus
\[\tau(p^a)=a+1. \tag{4.2.19}\]
Now since $\tau(n)$ is multiplicative, we have
\begin{align*}
\tau(n)&=\tau(p_1^{a_1})\tau(p_2^{a_2})\cdots\tau(p_t^{a_t})\\
&=(a_1+1)(a_2+1)\cdots(a_t+1)\\
&=\prod_{j=1}^{t}(a_j+1).
\end{align*}

Example: $\tau(200)=\tau(2^3 5^2)=(3+1)(2+1)=12$.

Exercises

1. Find $\phi(256)$ and $\phi(2\cdot 3\cdot 5\cdot 7\cdot 11)$.
2. Show that $\phi(5186)=\phi(5187)$.
3. Find all positive integers $n$ such that $\phi(n)=6$.
4. Show that if $n$ is a positive odd integer, then $\phi(2n)=\phi(n)$.
5. Show that if $n$ is a positive even integer, then $\phi(2n)=2\phi(n)$.
6. Show that if $n$ is an odd integer, then $\phi(4n)=2\phi(n)$.
7. Find the sum of positive integer divisors and the number of positive integer divisors of 35.
8. Find the sum of positive integer divisors and the number of positive integer divisors of $2^5 3^4 5^3 7^3 13$.
9. Which positive integers have an odd number of positive divisors?
10. Which positive integers have exactly two positive divisors?

Contributors and Attributions

Dr. Wissam Raji, Ph.D., of the American University in Beirut. His work was selected by the Saylor Foundation's Open Textbook Challenge for public release under a Creative Commons Attribution (CC BY) license.

This page titled 4.2: Multiplicative Number Theoretic Functions is shared under a CC BY license and was authored, remixed, and/or curated by Wissam Raji.
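The three formulas above translate directly into code. The following sketch (ours, not part of the LibreTexts page) factors n by trial division and evaluates ϕ, σ, and τ from the prime factorization, checking the worked examples ϕ(200) = 80, σ(200) = 465, and τ(200) = 12, as well as the divisor-sum identity for ϕ.

```python
from functools import reduce

def prime_factorization(n):
    """Return the prime factorization of n as a list of (prime, exponent) pairs."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def euler_phi(n):
    """phi(n) = n * prod(1 - 1/p) over the distinct primes p dividing n."""
    for p, _ in prime_factorization(n):
        n = n // p * (p - 1)
    return n

def sigma(n):
    """sigma(n) = prod((p^(a+1) - 1) / (p - 1)) over the prime powers p^a in n."""
    return reduce(lambda acc, pe: acc * (pe[0] ** (pe[1] + 1) - 1) // (pe[0] - 1),
                  prime_factorization(n), 1)

def tau(n):
    """tau(n) = prod(a + 1) over the exponents a in the factorization of n."""
    return reduce(lambda acc, pe: acc * (pe[1] + 1), prime_factorization(n), 1)

assert euler_phi(200) == 80 and euler_phi(7**3) == 294
assert sigma(200) == 465 and sigma(12) == 28
assert tau(200) == 12 and tau(8) == 4
assert sum(euler_phi(d) for d in range(1, 201) if 200 % d == 0) == 200  # sum over d | 200 of phi(d)
```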
132
Tutorial 2: Non-uniform Computation
===============

\[
\newcommand{\bit}{\{0,1\}}
\newcommand{\Exp}{\mathbb{E}}
\newcommand{\xor}{\oplus}
\]

CSC 422 Fall 2013 Tutorial Notes — 23 Sep 2013

Today's tutorial concerns non-uniform models of computation.

Context within the course, definitional motivation

The ultimate goal of the course is to devise cryptosystems with the property that no computer program can compromise their privacy or integrity. Formalizing statements of this kind involves formalizing a model of computation. And to have confidence in the guarantees of these theorems, it's important that we take the most expansive and powerful notion of computation into consideration. The first and most mathematically natural formalization of computation - that of Turing machines - is not the strongest formalization of computing we can devise. In this note we define and prove some properties about non-uniform models of computation: non-uniform Turing machines and circuits.

A 'vanilla' Turing machine computes in a manner that is uniform with respect to its input lengths: it follows an identical set of instructions no matter what length of input it receives. Were you writing a computer program in practice, you would probably be focusing on inputs of a particular size, and your program would likely take into account some tacit knowledge about these inputs; maybe statistics about the inputs in question, maybe just some information you're encoding on a hunch.

A non-uniform Turing machine is a TM that gets an advice string for inputs of various categories (which you should imagine as categories of your choice as algorithm designer). In our definition, we'll group the inputs by input length (because we need a definition applicable to all problems). The next question is how much advice is reasonable. Since there are $2^n$ inputs of length $n$, an advice string of exponential size could encode the answers to all computations and shouldn't be seen as advice. So we'll consider advice strings of polynomial size (i.e. $O(n^c)$ for some $c$). (We could allow the strings to be anything sub-exponential, e.g. $O(2^{\sqrt{n}})$, but we won't see examples where this quantitative difference is important, so we just stick with polynomial advice.)

Definition (non-uniform TMs). A polynomial-time Turing machine with polynomial-sized advice is a polynomial-time Turing machine $M$ (i.e. a machine such that $t_M(n) = O(n^c)$ for some $c$) together with an infinite collection of advice strings $\{a_n\}_{n \in \mathbb{N}}$ of polynomial size (i.e. $|a_n| = O(n^d)$ for some $d$). $(M, \{a_n\})$ decides a language $L \subseteq \bit^*$ if

\[ \forall x \in \bit^* \quad M(x, a_{|x|}) = \chi_L(x) \]

(where $\chi_L$ is the indicator function for $L$, i.e. $\chi_L(x) = 1$ if and only if $x \in L$). The set of languages decided by polynomial-time Turing machines with polynomial-sized advice is denoted $P/poly$; we'll refer to $(M, \{a_n\})$ as a $P/poly$ machine.

Non-uniformity and randomness

Implicitly, the definition given above does not give the TM the benefit of random bits. Given our motivation above, it should be clear that we do want to equip our model of computation with whatever might be helpful; however, the power of non-uniformity supersedes the power of being able to toss random coins. Let's formalize that now. First we must recall our notion of randomized computation.
Definition. A language $L$ is said to be in $BPP$ (bounded-error probabilistic polynomial time) if there is a polynomial-time Turing machine $M$ with a random tape such that
\begin{equation} \label{eq:bpp_acceptance} \forall x \in \bit^* \quad \Pr_r[M(x;r) = \chi_L(x)] \geq 2/3 \end{equation}
The probability above is over the random coins of the algorithm. No matter the input, the machine is more likely to output the correct answer than the incorrect answer.

The following theorem formally states that non-uniformity (i.e. advice) is more powerful than randomness.

Theorem.
\[ BPP \subseteq P/poly \]

The proof of this theorem involves choosing the advice to be a specific setting of the random tape (i.e. the best possible). We could already, for each input length, choose an $r^*_n$ so that $M'(x) = M(x;r^*_n)$ is correct on $2/3$ of all $x \in \bit^n$. Of course, we need to be correct on each $x \in \bit^n$. However, if we first amplify the correctness of our initial algorithm (done easily by taking the majority result of independent trials - a technique sometimes referred to as parallel repetition), there will be a random string that works for all $x$'s.

Proof. Let $L \in BPP$ as decided by $M$ (i.e. $M$ and $L$ satisfy \eqref{eq:bpp_acceptance}).

Claim 1. There exists $M'$, also a $BPP$ machine, such that
\begin{equation} \label{eq:bpp_amplified} \forall x \in \bit^* \quad \Pr_r[M'(x;r) = \chi_L(x)] > 1 - 2^{-n} \end{equation}

Claim 2. For every $n$ there exists $r^*_n$ such that
\begin{equation} \forall x \in \bit^n \quad M'(x;r^*_n) = \chi_L(x) \end{equation}

The $P/poly$ machine for $L$ is $(M', \{ r^*_n \}_{n \in \mathbb{N}})$. That $M'$ consumes at most polynomial time is immediate from the definition of $BPP$; that the $r^*_n$ are also poly-sized is implied by $M'$ being a $BPP$ machine, since a polytime-bounded machine can consume at most polynomially many random bits.

Proof of Claim 1. Let $M'(x; r_1,\ldots,r_m) = Majority\{ M(x;r_1), \ldots, M(x;r_m) \}$, where $r_1,\ldots,r_m$ are independent random tapes. $M'$ errs only if at most $m/2$ of the $m$ trials are correct, and each trial is independently correct with probability at least $2/3$, so
\begin{align}
\Pr[M'(x; r_1, \ldots, r_m) \neq \chi_L(x) ] &\leq \Pr[ Bin(m, 2/3) \leq m/2 ] \notag \\
&\leq \Pr[ |Bin(m,2/3) - 2m/3| \geq m/6 ]. \label{eq:binomial_tail}
\end{align}
By a standard tail bound on the binomial distribution (e.g. the Chernoff bound), \eqref{eq:binomial_tail} is $2^{-\Omega(m)}$, so taking $m = O(n)$ (and certainly $m = O(n^2)$) makes it smaller than $2^{-n}$.

Proof of Claim 2. Fix $n$. Suppose the total number of random bits consumed by $M'$ is $\ell(n) = \ell$. Let $T$ be a $2^n$ by $2^\ell$ table, with rows labelled by elements of $\bit^n$ and columns labelled by elements of $\bit^\ell$, whose $(x_i, r_j)$'th entry is $1$ if $M'(x_i;r_j) = \chi_L(x_i)$ and $0$ otherwise. By \eqref{eq:bpp_amplified}, each row contains fewer than $2^{-n} \cdot 2^\ell$ zero entries. Since there are only $2^n$ rows, the table contains fewer than $2^n \cdot 2^{\ell - n} = 2^\ell$ zeros in total - fewer zeros than there are columns - so at least one column consists entirely of $1$'s. Letting $r^*_n$ be the label of any such column completes the proof.

Wesley George
PhD Student, Theory of Computation, Computer Science, University of Toronto
[email protected]
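To make the amplification step in Claim 1 concrete, here is a small simulation sketch (ours, not part of the tutorial): it models a base decider that is correct with probability 2/3 on every input and estimates how often a majority vote over m independent trials errs. The printed error rates are empirical estimates, not the Chernoff bound itself.

```python
import random

def base_decider(correct_prob=2/3):
    """One run of a BPP-style decider: returns the correct answer with probability 2/3."""
    return random.random() < correct_prob

def majority_amplified(m, correct_prob=2/3):
    """Majority vote over m independent runs, as in the proof of Claim 1."""
    correct_votes = sum(base_decider(correct_prob) for _ in range(m))
    return correct_votes > m / 2

def empirical_error(m, trials=20_000):
    """Estimate the probability that the amplified decider is wrong."""
    wrong = sum(not majority_amplified(m) for _ in range(trials))
    return wrong / trials

if __name__ == "__main__":
    for m in (1, 11, 51, 101):
        # The true error probability drops exponentially in m (2^{-Omega(m)}).
        print(f"m = {m:3d}: empirical error ≈ {empirical_error(m):.4f}")
```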
133
On Performance Stability in LSM-based Storage Systems

Chen Luo, University of California, Irvine, [email protected]
Michael J. Carey, University of California, Irvine, [email protected]

ABSTRACT

The Log-Structured Merge-Tree (LSM-tree) has been widely adopted for use in modern NoSQL systems for its superior write performance. Despite the popularity of LSM-trees, they have been criticized for suffering from write stalls and large performance variances due to the inherent mismatch between their fast in-memory writes and slow background I/O operations. In this paper, we use a simple yet effective two-phase experimental approach to evaluate write stalls for various LSM-tree designs. We further identify and explore the design choices of LSM merge schedulers to minimize write stalls given an I/O bandwidth budget. We have conducted extensive experiments in the context of the Apache AsterixDB system and we present the results here.

PVLDB Reference Format: Chen Luo, Michael J. Carey. On Performance Stability in LSM-based Storage Systems. PVLDB, 13(4): 449-462, 2019. DOI:

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment. Proceedings of the VLDB Endowment, Vol. 13, No. 4 ISSN 2150-8097. DOI:

1. INTRODUCTION

In recent years, the Log-Structured Merge-Tree (LSM-tree) [45, 41] has been widely used in modern key-value stores and NoSQL systems [2, 4, 5, 7, 14]. Different from traditional index structures, such as B+-trees, which apply updates in-place, an LSM-tree always buffers writes into memory. When memory is full, writes are flushed to disk and subsequently merged using sequential I/Os. To improve efficiency and minimize blocking, flushes and merges are often performed asynchronously in the background.

Despite their popularity, LSM-trees have been criticized for suffering from write stalls and large performance variances [3, 51, 57]. To illustrate this problem, we conducted a micro-experiment on RocksDB, a state-of-the-art LSM-based key-value store, to evaluate its write throughput on SSDs using the YCSB benchmark. The instantaneous write throughput over time is depicted in Figure 1. As one can see, the write throughput of RocksDB periodically slows down after the first 300 seconds, which is when the system has to wait for background merges to catch up. Write stalls can significantly impact percentile write latencies and must be minimized to improve the end-user experience or to meet strict service-level agreements.

[Figure 1: Instantaneous write throughput of RocksDB: writes are periodically stalled to wait for lagging merges]

In this paper, we study the impact of write stalls and how to minimize write stalls for various LSM-tree designs. It should first be noted that some write stalls are inevitable. Due to the inherent mismatch between fast in-memory writes and slower background I/O operations, in-memory writes must be slowed down or stopped if background flushes or merges cannot catch up. Without such a flow control mechanism, the system will eventually run out of memory (due to slow flushes) or disk space (due to slow merges).
Thus, it is not a surprise that an LSM-tree can exhibit large write stalls if one measures its maximum write throughput by writing data as quickly as possible, as we did in Figure 1. This inevitability of write stalls does not necessarily limit the applicability of LSM-trees, since in practice writes do not arrive as quickly as possible, but rather are controlled by the expected data arrival rate. The data arrival rate directly impacts the write stall behavior and resulting write latencies of an LSM-tree. If the data arrival rate is relatively low, then write stalls are unlikely to happen. However, it is also desirable to maximize the supported data arrival rate so that the system's resources can be fully utilized. Moreover, the expected data arrival rate is subject to an important constraint - it must be smaller than the processing capacity of the target LSM-tree. Otherwise, the LSM-tree will never be able to process writes as they arrive, causing infinite write latencies. Thus, to evaluate the write stalls of an LSM-tree, the first step is to choose a proper data arrival rate.

As the first contribution, we propose a simple yet effective approach to evaluate the write stalls of various LSM-tree designs by answering the following question: If we set the data arrival rate close to (e.g., 95% of) the maximum write throughput of an LSM-tree, will that cause write stalls? In other words, can a given LSM-tree design provide both a high write throughput and a low write latency? Briefly, the proposed approach consists of two phases: a testing phase and a running phase. During the testing phase, we experimentally measure the maximum write throughput of an LSM-tree by simply writing as much data as possible. During the running phase, we then set the data arrival rate close to the measured maximum write throughput and evaluate the resulting write stall behavior based on write latencies. If write stalls happen, the measured write throughput is not sustainable, since it cannot be used in the long term due to the large latencies. However, if write stalls do not happen, then write stalls are no longer a problem, since the given LSM-tree can provide a high write throughput with small performance variance.

Although this approach seems to be straightforward at first glance, there exist two challenges that must be addressed. First, how can we accurately measure the maximum sustainable write rate of an LSM-tree experimentally? Second, how can we best schedule LSM I/O operations so as to minimize write stalls at runtime? In the remainder of this paper, we will see that the merge scheduler of an LSM-tree can have a large impact on write stalls. As the second contribution, we identify and explore the design choices for LSM merge schedulers and present a new merge scheduler to address these two challenges.

As the paper's final contribution, we have implemented the proposed techniques and various LSM-tree designs inside Apache AsterixDB. This enabled us to carry out extensive experiments to evaluate the write stalls of LSM-trees and the effectiveness of the proposed techniques using our two-phase evaluation approach. We argue that with proper tuning and configuration, LSM-trees can achieve both a high write throughput and small performance variance.

The remainder of this paper is organized as follows: Section 2 provides background on LSM-trees and discusses related work. Section 3 describes the general experimental setup used throughout this paper.
Section 4 identifies the scheduling choices for LSM-trees and experimentally evaluates bLSM's merge scheduler. Sections 5 and 6 present our techniques for minimizing write stalls for various LSM-tree designs. Section 7 summarizes the important lessons and insights from our evaluation. Finally, Section 8 concludes the paper. An extended version of this paper further extends our evaluation to the size-tiered merge policy used in practical systems and to LSM-based secondary indexes.

2. BACKGROUND

2.1 Log-Structured Merge Trees

The LSM-tree is a persistent index structure optimized for write-intensive workloads. In an LSM-tree, writes are first buffered into a memory component. An insert or update simply adds a new entry with the same key, while a delete adds an anti-matter entry indicating that a key has been deleted. When the memory component is full, it is flushed to disk to form a new disk component, within which entries are ordered by keys. Once flushed, LSM disk components are immutable.

A query over an LSM-tree has to reconcile the entries with identical keys from multiple components, as entries from newer components override those from older components. A point lookup query simply searches all components from newest to oldest until the first match is found. A range query searches all components simultaneously using a priority queue to perform reconciliation. To speed up point lookups, a common optimization is to build Bloom filters over the sets of keys stored in disk components. If a Bloom filter reports that a key does not exist, then that disk component can be excluded from searching. As disk components accumulate, query performance tends to degrade since more components must be examined. To counter this, smaller disk components are gradually merged into larger ones. This is implemented by scanning old disk components to create a new disk component with unique entries. The decision of what disk components to merge is made by a pre-defined merge policy, which is discussed below.

[Figure 2: LSM-tree Merge Policies - (a) Leveling Merge Policy: one component per level, each level T times larger; (b) Tiering Merge Policy: T components per level]

Merge Policy. Two types of LSM merge policies are commonly used in practice, both of which organize components into "levels". The leveling merge policy (Figure 2a) maintains one component per level, and a component at Level i + 1 will be T times larger than that of Level i. As a result, the component at Level i will be merged multiple times with the component from Level i - 1 until it fills up and is then merged into Level i + 1. In contrast, the tiering merge policy (Figure 2b) maintains multiple components per level. When a Level i becomes full with T components, these T components are merged together into a new component at Level i + 1. In both merge policies, T is called the size ratio, as it controls the maximum capacity of each level. We will refer to both of these merge policies as full merges since components are merged entirely.

In general, the leveling merge policy optimizes for query performance by minimizing the number of components, but at the cost of write performance. This design also maximizes space efficiency, which measures the amount of space used for storing obsolete entries, by having most of the entries at the largest level. In contrast, the tiering merge policy is more write-optimized by reducing the merge frequency, but this leads to lower query performance and space utilization.
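As a rough illustration of the two policies just described (this sketch is ours, not from the paper; component sizes are abstracted to entry counts and all I/O is ignored), the following shows how a tiering policy triggers a merge once a level accumulates T components, versus a leveling policy that always merges the incoming data into the single resident component of a level and pushes it down once the level fills up.

```python
# Illustrative-only model of full merges under the two policies described above.
# A "component" is just its size in entries; T is the size ratio.

T = 3

def tiering_flush(levels, new_component):
    """Tiering: keep up to T components per level; when full, merge them into the next level."""
    levels[0].append(new_component)
    i = 0
    while i < len(levels) and len(levels[i]) >= T:
        merged = sum(levels[i])          # merging T components produces one larger component
        levels[i] = []
        if i + 1 == len(levels):
            levels.append([])
        levels[i + 1].append(merged)
        i += 1
    return levels

def leveling_flush(levels, new_component):
    """Leveling: one component per level; merge into it until the level exceeds its capacity."""
    carry, i = new_component, 0
    while True:
        if i == len(levels):
            levels.append(0)
        levels[i] += carry               # merge the incoming component with the resident one
        if levels[i] < T ** (i + 1):     # level not yet full: stop
            return levels
        carry, levels[i] = levels[i], 0  # level full: push its contents down to the next level
        i += 1

tier, level = [[]], [0]
for _ in range(9):                        # nine flushes of one memory component each
    tier = tiering_flush(tier, 1)
    level = leveling_flush(level, 1)
print("tiering levels :", tier)           # [[], [], [9]] after 9 flushes with T = 3
print("leveling levels:", level)          # [0, 0, 9]
```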
Partitioning. Partitioning is a commonly used optimization in modern LSM-based key-value stores that is often implemented together with the leveling merge policy, as pioneered by LevelDB. In this optimization, a large LSM disk component is range-partitioned into multiple (often fixed-size) files. This bounds the processing time and the temporary space of each merge. An example of a partitioned LSM-tree with the leveling merge policy is shown in Figure 3, where each file is labeled with its key range. Note that partitioning starts from Level 1, as components in Level 0 are directly flushed from memory. To merge a file from Level i to Level i + 1, all of its overlapping files at Level i + 1 are selected and these files are merged to form new files at Level i + 1. For example, in Figure 3 the file labeled 0-50 at Level 1 will be merged with the files labeled 0-20 and 22-52 at Level 2, which produce new files labeled 0-15, 17-30, and 32-52 at Level 2. To select which file to merge next, LevelDB uses a round-robin policy.

[Figure 3: Partitioned LSM-tree with Leveling Merge Policy - before and after merging the Level 1 file labeled 0-50 into Level 2]

Both full merges and partitioned merges are widely used in existing systems. Full merges are used in AsterixDB, Cassandra, HBase, ScyllaDB, Tarantool, and WiredTiger (MongoDB). Partitioned merges are used in LevelDB, RocksDB, and X-Engine.

Write Stalls in LSM-trees. Since in-memory writes are inherently faster than background I/O operations, writing to memory components sometimes must be stalled to ensure the stability of an LSM-tree, which, however, will negatively impact write latencies. This is often referred to as the write stall problem. If the incoming write speed is faster than the flush speed, writes will be stalled when all memory components are full. Similarly, if there are too many disk components, new writes should be stalled as well. In general, merges are the major source of write stalls since writes are flushed once but merged multiple times. Moreover, flush stalls can be avoided by giving higher I/O priority to flushes. In this paper, we thus focus on write stalls caused by merges.

2.2 Apache AsterixDB

Apache AsterixDB [1, 14, 22] is a parallel, semi-structured Big Data Management System (BDMS) that aims to manage large amounts of data efficiently. It supports a feed-based framework for efficient data ingestion [31, 55]. The records of a dataset in AsterixDB are hash-partitioned based on their primary keys across multiple nodes of a shared-nothing cluster; thus, a range query has to search all nodes. Each partition of a dataset uses a primary LSM-based B+-tree index to store the data records, while local secondary indexes, including LSM-based B+-trees, R-trees, and inverted indexes, can be built to expedite query processing. AsterixDB internally uses a variation of the tiering merge policy to manage disk components, similar to the one used in existing systems [4, 7]. Instead of organizing components into levels explicitly as in Figure 2b, AsterixDB's variation simply schedules merges based on the sizes of disk components. In this work, we do not focus on the LSM-tree implementation in AsterixDB but instead use AsterixDB as a common testbed to evaluate various LSM-tree designs.

2.3 Related Work

LSM-trees. Recently, a large number of improvements of the original LSM-tree have been proposed.
A recent survey covers these improvements, ranging from improving write performance [18, 27, 28, 37, 39, 44, 47, 56], optimizing memory management [13, 17, 52, 58], supporting automatic tuning of LSM-trees [25, 26, 38], and optimizing LSM-based secondary indexes [40, 46], to extending the applicability of LSM-trees [43, 49]. However, all of these efforts have largely ignored performance variances and write stalls of LSM-trees.

[Figure 4: bLSM's Spring-and-Gear Merge Scheduler - memory component C0 and disk components C1/C'1 and C2, connected by writes (in0), merge (out0/in1), and merge (out1)]

Several LSM-tree implementations seek to bound the write processing latency to alleviate the negative impact of write stalls [5, 7, 36, 57]. bLSM proposes a spring-and-gear merge scheduler to avoid write stalls. As shown in Figure 4, bLSM has one memory component, C0, and two disk components, C1 and C2. The memory component C0 is continuously flushed and merged with C1. When C1 becomes full, a new C1 component is created while the old C1, which now becomes C'1, will be merged with C2. bLSM ensures that for each Level i, the progress of merging C'i into Ci+1 (denoted as "outi") will be roughly identical to the progress of the formation of a new Ci (denoted as "ini"). This eventually limits the write rate for the memory component (in0) and avoids blocking writes. However, we will see later that simply bounding the maximum write processing latency alone is insufficient, because a large variance in the processing rate can still cause large queuing delays for subsequent writes.

Performance Stability. Performance stability has long been recognized as a critical performance metric. The TPC-C benchmark measures not only absolute throughput, but also specifies acceptable upper bounds for the percentile latencies. Huang et al. applied VProfiler to identify major sources of variance in database transactions. Various techniques have been proposed to optimize the variance of query processing [15, 16, 20, 23, 48, 54]. Cao et al. found that variance is common in storage stacks and heavily depends on configurations and workloads. Dean and Barroso discussed several engineering techniques to reduce performance variances at Google. Different from these efforts, in this work we focus on the performance variances of LSM-trees due to their inherent out-of-place update design.

3. EXPERIMENTAL METHODOLOGY

For ease of presentation, we will mix our techniques with a detailed performance analysis for each LSM-tree design. We now describe the general experimental setup and methodology for all experiments to follow.

3.1 Experimental Setup

All experiments were run on a single node with an 8-core Intel i7-7567U 3.5GHz CPU, 16 GB of memory, a 500GB SSD, and a 1TB 7200 rpm hard disk. We used the SSD for LSM storage and configured the hard disk for transaction logging due to its sufficiently high sequential throughput. We allocated 10GB of memory for the AsterixDB instance. Within that allocation, the buffer cache size was set at 2GB. Each LSM memory component had a 128MB budget, and each LSM-tree had two memory components to minimize stalls during flushes. Each disk component had a Bloom filter with a false positive rate setting of 1%. The data page size was set at 4KB to align with the SSD page size.

It is important to note that not all sources of performance variance can be eliminated. For example, writing a key-value pair with a 1MB value inherently requires more work than writing one that only has 1KB.
Moreover, short time periods with quickly occurring writes (workload bursts) will be much more likely to cause write stalls than a long period of slow writes, even though their long-term write rate may be the same. In this paper, we will focus on the avoidable variance caused by the internal implementation of LSM-trees instead of variances in the workloads.

To evaluate the internal variances of LSM-trees, we adopt YCSB as the basis for our experimental workload. Instead of using the pre-defined YCSB workloads, we designed our own workloads to better study the performance stability of LSM-trees. Each experiment first loads an LSM-tree with 100 million records, in random key order, where each record has size 1KB. It then runs for 2 hours to update the previously loaded LSM-tree. This ensures that the measured write throughput of an LSM-tree is stable over time. Unless otherwise noted, we used one writer thread for writing data to the LSM memory components. We evaluated two update workloads, where the updated keys follow either a uniform or Zipf distribution. The specific workload setups will be discussed in the subsequent sections.

We used two common I/O optimizations when implementing LSM-trees, namely I/O throttling and periodic disk forces. In all experiments, we throttled the SSD write speed of all LSM flush and merge operations to 100MB/s. This was implemented by using a rate limiter to inject artificial sleeps into SSD writes. This mechanism bounds the negative impact of the SSD writes on query performance and allows us to more fairly compare the performance differences of various LSM merge schedulers. We further had each flush or merge operation force its SSD writes after each 16MB of data. This helps to limit the OS I/O queue length, reducing the negative impact of SSD writes on queries. We have verified that disabling this optimization would not impact the performance trends of writes; however, large forces at the end of each flush and merge operation, which are required for durability, can significantly interfere with queries.

3.2 Performance Metrics

To quantify the impact of write stalls, we will not only present the write throughput of LSM-trees but also their write latencies. However, there are different models for measuring write latencies. Throughout the paper, we will use arrival rate to denote the rate at which writes are submitted by clients, processing rate to denote the rate at which writes can be processed by an LSM-tree, and write throughput to denote the number of writes processed by an LSM-tree per unit of time. The difference between the write throughput and the arrival/processing rates is discussed further below.

[Figure 5: Models for Measuring Write Latency - (a) Closed System: clients wait for each write, so the arrival rate equals the processing rate and the write throughput; (b) Open System: writes arrive at an independent arrival rate and queue before being processed]

The bLSM paper, as well as most of the existing LSM research, used the experimental setup depicted in Figure 5a to write as much data as possible and measure the latency of each write. In this closed system setup, the processing rate essentially controls the arrival rate, which in turn equals the write throughput. Although this model is sufficient for measuring the maximum write throughput of LSM-trees, it is not suitable for characterizing their write latencies, for several reasons.
First, writing to memory is inherently faster than background I/Os, so an LSM-tree will always have to stall writes in order to wait for lagging flushes and merges. Moreover, under this model, a client cannot submit its next write until its current write is completed. Thus, when the LSM-tree is stalled, only a small number of ongoing writes will actually experience a large latency, since the remaining writes have not been submitted yet. (The original release of the YCSB benchmark mistakenly used this model; this was corrected later in 2015.)

In practice, a DBMS generally cannot control how quickly writes are submitted by external clients, nor will their writes always arrive as fast as possible. Instead, the arrival rate is usually independent of the processing rate, and when the system is not able to process writes as fast as they arrive, the newly arriving writes must be temporarily queued. In such an open system (Figure 5b), the measured write latency includes both the queuing latency and the processing latency. Moreover, an important constraint is that the arrival rate must be smaller than the processing rate, since otherwise the queue length will be unbounded. Thus, the (overall) write throughput is actually determined by the arrival rate.

A simple example will illustrate the important difference between these two models. Suppose that 5 clients are used to generate an intended arrival rate of 1000 writes/s and that the LSM-tree stalls for 1 second. Under the closed system model (Figure 5a), only 5 delayed writes will experience a write latency of 1s, since the remaining (intended) 995 writes simply will not occur. However, under the open system model (Figure 5b), all 1000 writes will be queued and their average latency will be at least 0.5s.

To evaluate write latencies in an open system, one must first set the arrival rate properly, since the write latency heavily depends on the arrival rate. It is also important to maximize the arrival rate to maximize the system's utilization. For these reasons, we propose a two-phase evaluation approach with a testing phase and a running phase. During the testing phase, we use the closed system model (Figure 5a) to measure the maximum write throughput of an LSM-tree, which is also its processing rate. When measuring the maximum write throughput, we excluded the initial 20-minute period (out of 2 hours) of the testing phase since the initially loaded LSM-tree has a relatively small number of disk components at first. During the running phase, we use the open system model (Figure 5b) to evaluate the write latencies under a constant arrival rate set at 95% of the measured maximum write throughput. Based on queuing theory, the queuing time approaches infinity when the utilization, which is the ratio between the arrival rate and the processing rate, approaches 100%. We thus empirically chose a high-utilization load (95%) while leaving some room for the system to absorb variance. If the running phase then reports large write latencies, the maximum write throughput as determined in the testing phase is not sustainable; we must improve the implementation of the LSM-tree or reduce the expected arrival rate to reduce the latencies. In contrast, if the measured write latency is small, then the given LSM-tree can provide a high write throughput with a small performance variance.
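The 5-client example above can be checked with a tiny back-of-the-envelope script (ours, purely illustrative): it replays one second of stalled service under both models and reports how many writes see the stall and what their average delay is, assuming writes arrive uniformly during the stall in the open-system case.

```python
# Illustrative check of the closed-system vs. open-system example in Section 3.2.
# Assumptions (ours): a 1-second stall, an intended arrival rate of 1000 writes/s,
# 5 closed-system clients, and uniformly spaced arrivals in the open-system case.

stall_seconds = 1.0
arrival_rate = 1000          # intended writes per second
closed_clients = 5

# Closed system: each client has exactly one outstanding write, so only
# `closed_clients` writes exist during the stall and each waits the full stall.
closed_delayed = closed_clients
closed_avg_latency = stall_seconds

# Open system: all writes that arrive during the stall are queued. A write arriving
# at time t (0 <= t < 1) waits roughly (stall_seconds - t) before service resumes.
n = int(arrival_rate * stall_seconds)
open_delays = [stall_seconds - (i / n) * stall_seconds for i in range(n)]
open_avg_latency = sum(open_delays) / n

print(f"closed system: {closed_delayed} delayed writes, avg latency {closed_avg_latency:.2f}s")
print(f"open system  : {n} queued writes, avg queuing delay ≈ {open_avg_latency:.2f}s")  # ≈ 0.5s
```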
4. LSM MERGE SCHEDULER

Different from a merge policy, which decides which components to merge, a merge scheduler is responsible for executing the merge operations created by the merge policy. In this section, we discuss the design choices for a merge scheduler and evaluate bLSM's spring-and-gear merge scheduler.

4.1 Scheduling Choices

The write cost of an LSM-tree, which is the number of I/Os per write, is determined by the LSM-tree design itself and the workload characteristics, but not by how merges are executed. Thus, a merge scheduler will have little impact on the overall write throughput of an LSM-tree as long as the allocated I/O bandwidth budget can be fully utilized. However, different scheduling choices can significantly impact the write stalls of an LSM-tree, and merge schedulers must be carefully designed to minimize write stalls. We have identified the following design choices for a merge scheduler.

Component Constraint: A merge scheduler usually specifies an upper-bound constraint on the total number of components allowed to accumulate before incoming writes to the LSM memory components should be stalled. We call this the component constraint. For example, bLSM allows at most two disk components per level, while other systems like HBase or Cassandra specify the total number of disk components across all levels.

Interaction with Writes: There exist different strategies to enforce a given component constraint. One strategy is to simply stop processing writes once the component constraint is violated. Alternatively, the processing of writes can be degraded gracefully based on the merge pressure.

Degree of Concurrency: In general, an LSM-tree can often create multiple merge operations in the same time period. A merge scheduler should decide how these merge operations should be scheduled. Allowing concurrent merges will enable merges at multiple levels to proceed concurrently, but they will also compete for CPU and I/O resources, which can negatively impact query performance. As two examples, bLSM allows one merge operation per level, while LevelDB uses a single background thread to execute all merges one by one.

I/O Bandwidth Allocation: Given multiple concurrent merge operations, the merge scheduler should further decide how to allocate the available I/O bandwidth among these merge operations. A commonly used heuristic is to allocate I/O bandwidth "fairly" (evenly) to all active merge operations. Alternatively, bLSM allocates I/O bandwidth based on the relative progress of the merge operations to ensure that merges at each level all make steady progress.

4.2 Evaluation of bLSM

Due to the implementation complexity of bLSM and its dependency on a particular storage system, Stasis, we chose to directly evaluate the released version of bLSM. bLSM uses the leveling merge policy with two on-disk levels. We set its memory component size to 1GB and size ratio to 10 so that the experimental dataset with 100 million records can fit into the last level. We used 8 write threads to maximize the write throughput of bLSM.

Testing Phase. During the testing phase, we measured the maximum write throughput of bLSM by writing as much data as possible using both the uniform and Zipf update workloads. The instantaneous write throughput of bLSM under these two workloads is shown in Figure 6a. For readability, the write throughput is averaged over 30-second windows. (Unless otherwise noted, the same aggregation applies to all later experiments as well.)
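As a side note on how instantaneous-throughput curves such as Figure 6a are produced, the following small helper is our own sketch (not code from bLSM or from our test harness) that turns raw per-operation completion timestamps into a windowed throughput series.

```python
def windowed_throughput(completion_times, window=30.0):
    """completion_times: sorted timestamps (seconds) of completed writes.
    Returns a list of (window_start, ops_per_second) pairs."""
    if not completion_times:
        return []
    series = []
    end = completion_times[-1]
    start = 0.0
    i = 0
    while start < end:
        count = 0
        while i < len(completion_times) and completion_times[i] < start + window:
            count += 1
            i += 1
        series.append((start, count / window))
        start += window
    return series
```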
Even though bLSM's merge scheduler prevents writes from being stalled, the instantaneous write throughput still exhibits a large variance with regular temporary peaks. Recall that bLSM uses the merge progress at each level to control its in-memory write speed. After the component C_1 is full and becomes C'_1, the original C_1 will be empty and will have much shorter merge times. This temporarily increases the in-memory write speed of bLSM, which then quickly drops as C_1 grows larger and larger. Moreover, the Zipf update workload increases the write throughput only because updated entries can be reclaimed earlier; the overall variance trends are still the same.

Running Phase. Based on the maximum write throughput measured in the testing phase, we then used a constant data arrival process (95% of the maximum) in the running phase to evaluate bLSM's behavior. Figure 6b shows the instantaneous write throughput of bLSM under the uniform and Zipf update workloads. bLSM maintains a sustained write throughput during the initial period of the experiment, but later has to slow down its in-memory write rate periodically due to background merge pressure. Figure 6c further shows the resulting percentile write and processing latencies. The processing latency measures only the time for the LSM-tree to process a write, while the write latency includes both the write's queuing time and processing time. By slowing down the in-memory write rate, bLSM indeed bounds the processing latency. However, the write latency is much larger because writes must be queued when they cannot be processed immediately. This suggests that simply bounding the maximum processing latency is far from sufficient; it is important to minimize the variance in an LSM-tree's processing rate in order to minimize write latencies.

[Figure 6: Two-Phase Evaluation of bLSM — (a) Testing Phase: Instantaneous Write Throughput (Maximum); (b) Running Phase: Instantaneous Write Throughput (95% Load); (c) Running Phase: Percentile Write Latencies (95% Load).]

5. FULL MERGES

In this section, we explore the scheduling choices of LSM-trees with full merges and then evaluate the impact of merge scheduling on write stalls using our two-phase approach.

5.1 Merge Scheduling for Full Merges

We first introduce some useful notation for use throughout our analysis in Table 1. To simplify the analysis, we will ignore the I/O cost of flushes since merges consume most of the I/O bandwidth.

Table 1: List of notation used in this paper

| Term | Definition | Unit |
| --- | --- | --- |
| T | size ratio of the merge policy | |
| L | number of levels in an LSM-tree | |
| M | memory component size | entries |
| B | I/O bandwidth | entries/s |
| µ | write arrival rate | entries/s |
| W | write throughput of an LSM-tree | entries/s |

5.1.1 Component Constraint

To provide acceptable query performance and space utilization, the total number of disk components of an LSM-tree must be bounded. We call this upper bound the component constraint, and it can be enforced either locally or globally. A local constraint specifies the maximum number of disk components per level. For example, bLSM uses a local constraint to allow at most two components per level.
A global constraint instead specifies the maximum number of disk components across all levels. Here we argue that global component constraints will better minimize write stalls. In addition to external factors, such as deletes or shifts in write patterns, the merge time at each level inherently varies for leveling since the size of the component at Level i varies from 0 to (T−1)·M·T^(i−1). Because of this, bLSM cannot provide a high yet stable write throughput over time. Global component constraints will better absorb this variance and minimize the write stalls.

It remains a question how to determine the maximum number of disk components for the component constraint. In general, tolerating more disk components will increase the LSM-tree's ability to reduce write stalls and absorb write bursts, but it will decrease query performance and space utilization. Given the negative impact of stalls on write latencies, one solution is to tolerate a number of disk components sufficient to avoid write stalls while the worst-case query performance and space utilization are still bounded. For example, one conservative constraint would be to tolerate twice the expected number of disk components, i.e., 2·L components for leveling and 2·T·L components for tiering.
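A minimal helper, using our own naming rather than any particular system's API, makes this conservative choice concrete; the example values below match the leveling and tiering configurations used later in Section 5.2.

```python
def global_component_constraint(num_levels, size_ratio, policy):
    """Twice the expected number of disk components: L for leveling
    (one component per level), T * L for tiering (up to T per level)."""
    if policy == "leveling":
        expected = num_levels
    elif policy == "tiering":
        expected = size_ratio * num_levels
    else:
        raise ValueError("unknown merge policy: %s" % policy)
    return 2 * expected

print(global_component_constraint(num_levels=3, size_ratio=10, policy="leveling"))  # -> 6
print(global_component_constraint(num_levels=8, size_ratio=3, policy="tiering"))    # -> 48
```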
5.1.2 Interaction with Writes

When the component constraint is violated, the processing of writes by an LSM-tree has to be slowed down or stopped. Existing LSM-tree implementations [5, 7, 51] prefer to gracefully slow down the in-memory write rate by adding delays to some writes. This approach reduces the maximum processing latency, as large pauses are broken down into many smaller ones, but the overall processing rate of an LSM-tree, which depends on the I/O cost of each write, is not affected. Moreover, this approach will result in an even larger queuing latency. There may be additional considerations for gracefully slowing down writes, but we argue that processing writes as quickly as possible minimizes the overall write latency, as stated by the following theorem. (Detailed proofs of all theorems can be found in the extended version of this paper.)

Theorem 1. Given any data arrival process and any LSM-tree, processing writes as quickly as possible minimizes the latency of each write.

Proof Sketch. Consider two merge schedulers S and S′ which differ only in that S may add arbitrary delays to writes while S′ processes writes as quickly as possible. Each write request r must be completed by S′ no later than by S, because the LSM-tree has the same processing rate but S adds some delays to writes.

It should be noted that Theorem 1 only considers write latencies. By processing writes as quickly as possible, disk components can stack up more quickly (up to the component constraint), which may negatively impact query performance. Thus, a better approach may be to increase the write processing rate, e.g., by changing the structure of the LSM-tree. We leave the exploration of this direction as future work.

5.1.3 Degree of Concurrency

A merge policy can often create multiple merge operations simultaneously. For full merges, we can show that a single-threaded scheduler that executes one merge at a time is not sufficient for minimizing write stalls. Consider a merge operation at Level i. For leveling, the merge time varies from 0 to M·T^i/B because the size of the component at Level i varies from 0 to (T−1)·M·T^(i−1). For tiering, each component has size M·T^(i−1) and merging T components thus takes time M·T^i/B. Suppose the arrival rate is µ. Without concurrent merges, there would be (µ/M)·(M·T^i/B) = µ·T^i/B newly flushed components added while this merge operation is being executed, assuming that flushes can still proceed.

Our two-phase evaluation approach chooses the maximum write throughput of an LSM-tree as the arrival rate µ. For leveling, the maximum write throughput is approximately W_level = 2·B/(T·L), as each entry is merged about T/2 times per level. For tiering, the maximum write throughput is approximately W_tier = B/L, as each entry is merged only once per level. By substituting W_level and W_tier for µ, one needs to tolerate at least 2·T^(i−1)/L flushed components for leveling and T^i/L flushed components for tiering to avoid write stalls. Since the term T^i grows exponentially, a large number of flushed components will have to be tolerated when a large disk component is being merged. Consider the leveling merge policy with a size ratio of 10. To merge a disk component at Level 5, approximately 2·10^4/5 = 4000 flushed components would need to be tolerated, which is highly unacceptable.

Clearly, concurrent merges must be performed to minimize write stalls. When a large merge is being processed, smaller merges can still be completed to reduce the number of components. By the definition of the tiering and leveling merge policies, there can be at most one active merge operation per level. Thus, given an LSM-tree with L levels, at most L merge operations can be scheduled concurrently.
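As a quick sanity check on this analysis, the following few lines (ours, for illustration only) reproduce the Level-5 example above.

```python
# Flushed components that accumulate while a Level-i merge runs under a
# single-threaded scheduler, when the arrival rate equals the maximum
# leveling write throughput W_level = 2*B/(T*L): roughly 2 * T**(i-1) / L.
def flushed_during_merge_leveling(T, L, i):
    return 2 * T ** (i - 1) / L

print(flushed_during_merge_leveling(T=10, L=5, i=5))  # -> 4000.0
```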
5.1.4 I/O Bandwidth Allocation

Given multiple active merge operations, the merge scheduler must further decide how to allocate I/O bandwidth to these operations. A heuristic used by existing systems [2, 4, 7] is to allocate I/O bandwidth fairly (evenly) to all ongoing merges. We call this the fair scheduler. The fair scheduler ensures that merges at all levels can proceed, thus eliminating potential starvation. Recall that write stalls occur when an LSM-tree has too many disk components, thus violating the component constraint. It is unclear whether or not the fair scheduler can minimize write stalls by minimizing the number of disk components over time.

Recall that both the leveling and tiering merge policies always merge the same number of disk components at once. We propose a novel greedy scheduler that always allocates the full I/O bandwidth to the merge operation with the smallest number of remaining bytes. The greedy scheduler has the useful property that it minimizes the number of disk components over time for a given set of merge operations.

Theorem 2. Given any set of merge operations that process the same number of disk components and any I/O bandwidth budget, the greedy scheduler minimizes the number of disk components at any time instant.

Proof Sketch. Consider an arbitrary scheduler S and the greedy scheduler S′. Given N merge operations, we can show that S′ always completes the i-th (1 ≤ i ≤ N) merge operation no later than S. This can be done by noting that S′ always processes the smallest merge operation first.

Theorem 2 only considers a set of statically created merge operations. This conclusion may not hold in general because sometimes completing a large merge may enable the merge policy to create smaller merges, which can then reduce the number of disk components more quickly. Because of this, there actually exists no merge scheduler that can always minimize the number of disk components over time, as stated by the following theorem. However, as we will see in our later evaluation, the greedy scheduler is still a very effective heuristic for minimizing write stalls.

Theorem 3. Given any I/O bandwidth budget, no merge scheduler can minimize the number of disk components at any time instant for any data arrival process and any LSM-tree for a deterministic merge policy where all merge operations process the same number of disk components.

Proof Sketch. Consider an LSM-tree that has created a small merge M_S and a large merge M_L. Completing M_L allows the LSM-tree to create a new small merge M′_S that is smaller than M_S. Consider two merge schedulers S1 and S2, where S1 first processes M_S and then M_L, and S2 first processes M_L and then M′_S. It can be shown that S1 has the earliest completion time for the first merge and S2 has the earliest completion time for the second merge, but no merge scheduler can outperform both S1 and S2.

5.1.5 Putting Everything Together

Based on the discussion of each scheduling choice, we now summarize the proposed greedy scheduler. The greedy scheduler enforces a global component constraint with a sufficient number of disk components, e.g., twice the expected number of components of an LSM-tree, to minimize write stalls while ensuring the stability of the LSM-tree. It processes writes as quickly as possible and only stops the processing of writes when the component constraint is violated. The greedy scheduler performs concurrent merges but allocates the full I/O bandwidth to the merge operation with the smallest number of remaining bytes. Whenever a merge operation is created or completed, the greedy scheduler is notified so that it can find the smallest merge to execute next. Thus, a large merge can be interrupted by a newly created smaller merge. In general, however, one cannot know exactly which merge operation requires the least amount of remaining I/O until the new component is fully produced. To handle this, the smallest merge operation can be approximated using the number of remaining pages of the merging components.

Under the greedy scheduler, larger merges may be starved at times since they receive lower priority. This has a few implications. First, during normal user workloads, such starvation can only occur if the arrival rate is temporarily faster than the processing rate of an LSM-tree. Given the negative impact of write stalls on write latencies, it can actually be beneficial to temporarily delay large merges so that the system can better absorb write bursts. Second, the greedy scheduler should not be used in the testing phase because it would report a higher but unsustainable write throughput due to such starved large merges.

Finally, our discussions of the greedy scheduler as well as the single-threaded scheduler are based on an important assumption that a single merge operation is able to fully utilize the available I/O bandwidth budget. Otherwise, multiple merges must be executed at the same time. It is straightforward to extend the greedy scheduler to execute the smallest k merge operations, where k is the degree of concurrency needed to fully utilize the I/O bandwidth budget.
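The following sketch, a simplified illustration under our own naming rather than AsterixDB's actual scheduler, captures the core of this design: pending merges wait in a priority queue keyed by their estimated remaining bytes, and the merge thread re-consults the queue between I/O chunks, so a newly created smaller merge takes over at the next chunk boundary.

```python
import heapq
import itertools

class GreedyMergeScheduler:
    """Pending merges sit in a min-heap keyed by estimated remaining bytes
    (approximated, e.g., by the remaining pages of the merging components);
    the full I/O budget goes to the smallest one."""

    def __init__(self):
        self._heap = []                  # entries: (remaining_bytes, seq, op)
        self._seq = itertools.count()    # tie-breaker for equal estimates

    def submit(self, op, remaining_bytes):
        """Called when the merge policy creates a merge, or when a partially
        executed merge is handed back after being preempted."""
        heapq.heappush(self._heap, (remaining_bytes, next(self._seq), op))

    def pop_smallest(self):
        """Called whenever a merge is created or completed, to pick the next
        merge to work on."""
        return heapq.heappop(self._heap)[2] if self._heap else None

# Schematic merge-thread loop: process the chosen merge one I/O chunk at a
# time and re-consult the scheduler between chunks. (op.process_one_chunk,
# op.done, and op.remaining_bytes are hypothetical methods of a merge task.)
#
#   while True:
#       op = scheduler.pop_smallest()
#       if op is None:
#           wait_for_new_merge(); continue
#       op.process_one_chunk()
#       if not op.done():
#           scheduler.submit(op, op.remaining_bytes())
```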
5.2 Experimental Evaluation

We now experimentally evaluate the write stalls of LSM-trees using our two-phase approach. We discuss the specific experimental setup followed by the detailed evaluation, including the impact of merge schedulers on write stalls, the benefit of enforcing the component constraint globally and of processing writes as quickly as possible, and the impact of merge scheduling on query performance.

5.2.1 Experimental Setup

All experiments in this section were performed using AsterixDB with the general setup described in Section 3. Unless otherwise noted, the size ratio of leveling was set at 10, which is a commonly used configuration in practice [5, 7]. For the experimental dataset with 100 million unique records, this results in a three-level LSM-tree, where the last level is nearly full. For tiering, the size ratio was set at 3, which leads to better write performance than leveling without sacrificing too much query performance. This ratio results in an eight-level LSM-tree.

We evaluated the single-threaded scheduler (Section 5.1.3), the fair scheduler (Section 5.1.4), and the proposed greedy scheduler (Section 5.1.5). The single-threaded scheduler only executes one merge at a time using a single thread. Both the fair and greedy schedulers are concurrent schedulers that execute each merge using a separate thread. The difference is that the fair scheduler allocates the I/O bandwidth evenly to all ongoing merges, while the greedy scheduler always allocates the full I/O bandwidth to the smallest merge. To minimize flush stalls, a flush operation is always executed in a separate thread and receives higher I/O priority. Unless otherwise noted, all three schedulers enforce global component constraints and process writes as quickly as possible. The maximum number of disk components is set at twice the expected number of disk components for each merge policy. Each experiment was performed under both the uniform and Zipf update workloads. Since the Zipf update workload had little impact on the overall performance trends, except that it led to higher write throughput, its results are omitted here for brevity.

5.2.2 Testing Phase

During the testing phase, we measured the maximum write throughput of an LSM-tree by writing as much data as possible. In general, alternative merge schedulers have little impact on the maximum write throughput since the I/O bandwidth budget is fixed, but their measured write throughput may differ due to the finite experimental period.

Figures 7a and 7b show the instantaneous write throughput of LSM-trees using different merge schedulers for tiering and leveling.

[Figure 7: Testing Phase: Instantaneous Write Throughput — (a) Tiering Merge Policy; (b) Leveling Merge Policy.]

Under both merge policies, the single-threaded scheduler regularly exhibits long pauses, making its write throughput vary over time. The fair scheduler exhibits a relatively stable write throughput over time since merges at all levels can proceed at the same rate. With leveling, its write throughput still varies slightly over time since the component size at each level varies. The greedy scheduler appears to achieve a higher write throughput than the fair scheduler by starving large merges. However, this higher write throughput eventually drops when no small merges can be scheduled.
For example, the write throughput with tiering drops slightly at 1100s and 4000s, and there is a long pause from 6000s to 7000s with leveling. This result confirms that the fair scheduler is more suitable for testing the maximum write throughput of an LSM-tree, as merges at all levels can proceed at the same rate. In contrast, the single-threaded scheduler incurs many long pauses, causing a large variance in the measured write throughput. The greedy scheduler provides a higher write throughput by starving large merges, which would be undesirable at runtime.

5.2.3 Running Phase

Turning to the running phase, we used a constant data arrival process, configured at 95% of the maximum write throughput measured with the fair scheduler, to evaluate the write stalls of LSM-trees.

LSM-trees can provide a stable write throughput. We first evaluated whether LSM-trees with different merge schedulers can support a high write throughput with low write latencies. For each experiment, we measured the instantaneous write throughput and the number of disk components over time as well as percentile write latencies. The results for tiering are shown in Figure 8. Both the fair and greedy schedulers are able to provide stable write throughput, and the total number of disk components never reaches the configured threshold. The greedy scheduler also minimizes the number of disk components over time. The single-threaded scheduler, however, causes a large number of write stalls due to the blocking of large merges, which confirms our previous analysis. Because of this, the single-threaded scheduler incurs large percentile write latencies. In contrast, both the fair and greedy schedulers provide small write latencies because of their stable write throughput.

Figure 9 shows the corresponding results for leveling. The single-threaded scheduler again performs poorly, causing many stalls and thus large write latencies. Due to the inherent variance of merge times, the fair scheduler alone cannot provide a stable write throughput; this results in relatively large write latencies. In contrast, the greedy scheduler avoids write stalls by always minimizing the number of components, which results in small write latencies. This experiment confirms that LSM-trees can achieve a stable write throughput with a relatively small performance variance. Moreover, the write stalls of an LSM-tree heavily depend on the design of the merge scheduler.

Impact of Size Ratio. To verify our findings on LSM-trees with different shapes, we further carried out a set of experiments varying the size ratio from 2 to 10 for both tiering and leveling. For leveling, we applied the dynamic level size optimization, slightly modifying the size ratio between Levels 0 and 1 so that the largest level remains almost full. This optimization maximizes space utilization without impacting write or query performance. During the testing phase, we measured the maximum write throughput for each LSM-tree configuration using the fair scheduler, which is shown in Figure 10a. In general, a larger size ratio increases write throughput for tiering but decreases write throughput for leveling, because it decreases the merge frequency of tiering but increases that of leveling. During the running phase, we evaluated the 99th percentile write latency for each LSM-tree configuration using constant data arrivals, which is shown in Figure 10b.
With tiering, both the fair and greedy schedulers are able to provide a stable write throughput with small write latencies. With leveling, the fair scheduler causes large write latencies when the size ratio becomes larger, as we have seen before. In contrast, the greedy scheduler is always able to provide a stable write throughput along with small write latencies. This again confirms that LSM-trees, regardless of their size ratios, can provide a high write throughput with a small variance given an appropriately chosen merge scheduler.

[Figure 8: Running Phase of Tiering Merge Policy (95% Load) — (a) Instantaneous Write Throughput; (b) Number of Disk Components; (c) Percentile Write Latencies.]

[Figure 9: Running Phase of Leveling Merge Policy (95% Load) — (a) Instantaneous Write Throughput; (b) Number of Disk Components; (c) Percentile Write Latencies.]

[Figure 10: Impact of Size Ratio on Write Stalls — (a) Testing Phase: Maximum Write Throughput; (b) Running Phase: 99th Percentile Write Latency.]

[Figure 11: Impact of Enforcing Component Constraints on Percentile Write Latencies — (a) Tiering Merge Policy; (b) Leveling Merge Policy.]

Benefit of Global Component Constraints. We next evaluated the benefit of global component constraints in terms of minimizing write stalls. We additionally included a variation of the fair and greedy schedulers that enforces local component constraints, that is, 2 components per level for leveling and 2·T components per level for tiering. The resulting write latencies are shown in Figure 11. In general, local component constraints have little impact on tiering since its merge time per level is relatively stable. However, the resulting write latencies for leveling become much larger due to the inherent variance of its merge times. Moreover, local component constraints have a larger negative impact on the greedy scheduler. The greedy scheduler prefers small merges, which may not be able to complete due to possible violations of the constraint at the next level. This in turn causes longer stalls and thus larger percentile write latencies.
In contrast, global component constraints better absorb these variances, reducing the write latencies.

Benefits of Processing Writes As Quickly As Possible. We further evaluated the benefit of processing writes as quickly as possible. We used the leveling merge policy with a bursty data arrival process that alternates between a normal arrival rate of 2000 records/s for 25 minutes and a high arrival rate of 8000 records/s for 5 minutes. We evaluated two variations of the greedy scheduler. The first variation processes writes as quickly as possible (denoted as "No Limit"), as we did before. The second variation enforces a maximum in-memory write rate of 4000 records/s (denoted as "Limit") to avoid write stalls.

The instantaneous write throughput and the percentile write latencies of the two variations are shown in Figures 12a and 12b, respectively.

[Figure 12: Running Phase with Burst Data Arrivals — (a) Instantaneous Write Throughput; (b) Percentile Write Latencies.]

As Figure 12a shows, delaying writes avoids write stalls and the resulting write throughput is more stable over time. However, this causes larger write latencies (Figure 12b) since delayed writes must be queued. On the contrary, writing as quickly as possible causes occasional write stalls but still minimizes the overall write latencies. This confirms our previous analysis that processing writes as quickly as possible minimizes write latencies.
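For reference, the bursty arrival process used here can be described in a few lines; this is our own sketch of the workload generator's rate schedule, not the actual harness code.

```python
def arrival_rate_at(t_seconds):
    """Bursty arrival process: 2,000 records/s for 25 minutes, then
    8,000 records/s for 5 minutes, repeating."""
    phase = t_seconds % (30 * 60)
    return 2000 if phase < 25 * 60 else 8000
```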
Impact on Query Performance. Finally, since the point of having data is to query it, we evaluated the impact of the fair and greedy schedulers on concurrent query performance. We evaluated three types of queries, namely point lookups, short scans, and long scans. A point lookup accesses 1 record given a primary key; a short scan query accesses 100 records; and a long scan query accesses 1 million records. In each experiment, we executed one type of query concurrently with updates arriving at the same constant rates as before. To maximize query performance while ensuring that LSM flush and merge operations receive enough I/O bandwidth, we used 8 query threads for point lookups and short scans and 4 query threads for long scans.

The instantaneous query throughput under tiering and leveling is depicted in Figures 13 and 14, respectively. Due to space limitations, we omit the average query latency, which can be computed by dividing the number of query threads by the query throughput.

[Figure 13: Instantaneous Query Throughput of Tiering Merge Policy — (a) Point Lookup; (b) Short Range Query; (c) Long Range Query.]

[Figure 14: Instantaneous Query Throughput of Leveling Merge Policy — (a) Point Lookup; (b) Short Range Query; (c) Long Range Query.]

As the results show, leveling has similar point lookup throughput to tiering because Bloom filters are able to filter out most unnecessary I/Os, but it has much better range query throughput than tiering. The greedy scheduler always improves query performance by minimizing the number of components. Among the three types of queries, point lookups and short scans benefit more from the greedy scheduler since these two types of queries are more sensitive to the number of disk components. In contrast, long scans incur most of their I/O cost at the largest level. Moreover, tiering benefits more from the greedy scheduler than leveling because tiering has more disk components. Note that with leveling, there is a drop in query throughput under the fair scheduler at around 5400s, even though there is little difference in the number of disk components between the fair and greedy schedulers. This drop is caused by write stalls during that period, as was seen in the instantaneous write throughput of Figure 9a. After the LSM-tree recovers from write stalls, it attempts to write as much data as possible to catch up, which negatively impacts query performance.

In the extended version of this paper, we additionally evaluated the impact of regularly forcing SSD writes on query performance. Even though this optimization has a small negative impact on query throughput, it significantly reduces the percentile latencies of small queries, e.g., point lookups and small scans, by 5x to 10x, because the large forces at the end of merges are instead broken down into many smaller ones.

6. PARTITIONED MERGES

We now examine the write stall behavior of partitioned LSM-trees using our two-phase approach. In a partitioned LSM-tree, a large disk component is range-partitioned into multiple small files, and each merge operation only processes a small number of files with overlapping ranges. Since merges always happen immediately once a level is full, a single-threaded scheduler could be sufficient to minimize write stalls. In the remainder of this section, we will evaluate LevelDB's single-threaded scheduler.

6.1 LevelDB's Merge Scheduler

LevelDB's merge scheduler is single-threaded. It computes a score for each level and selects the level with the largest score to merge. Specifically, the score for Level 0 is computed as the total number of flushed components divided by the minimum number of flushed components to merge. For a partitioned level (1 and above), the score is defined as the total size of all files at this level divided by the configured maximum size. A merge operation is scheduled if the largest score is at least 1, which means that the selected level is full. If a partitioned level is chosen to merge, LevelDB selects the next file to merge in a round-robin way. LevelDB only restricts the number of flushed components at Level 0. By default, the minimum number of flushed components to merge is 4, and the processing of writes is slowed down or stopped if the number of flushed components reaches 8 or 12, respectively. Since we have already shown in Section 5.1.2 that processing writes as quickly as possible reduces write latencies, we will only use the stop threshold (12) in our evaluation.
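A compact sketch of this scoring rule, written as a simplification in our own code rather than LevelDB's actual implementation, looks as follows.

```python
def pick_level_to_merge(level0_count, level_sizes, max_level_sizes,
                        level0_trigger=4):
    """level0_count: number of flushed components at Level 0.
    level_sizes / max_level_sizes: total and configured maximum bytes for
    each partitioned level (1 and above). Returns the level to merge, or
    None if no level is full."""
    scores = {0: level0_count / level0_trigger}
    for level, size in level_sizes.items():
        scores[level] = size / max_level_sizes[level]
    best_level = max(scores, key=scores.get)
    return best_level if scores[best_level] >= 1.0 else None

# Example: 5 flushed components at Level 0, Level 1 at 80% of its budget.
print(pick_level_to_merge(5, {1: 1024, 2: 6000}, {1: 1280, 2: 12800}))  # -> 0
```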
Experimental Evaluation. We have implemented LevelDB's partitioned leveling merge policy and its merge scheduler inside AsterixDB for evaluation. Similar to LevelDB, the minimum number of flushed components to merge was set at 4 and the stop threshold was set at 12 components. Unless otherwise noted, the maximum size of each file was set at 64MB. The memory component size was set at 128MB and the base size of Level 1 was set at 1280MB. The size ratio was set at 10. For the experimental dataset with 100 million records, this results in a 4-level LSM-tree where the largest level is nearly full. To minimize write stalls caused by flushes, we used two memory components and a separate flush thread. We further evaluated the impact of two widely used merge selection strategies on write stalls. The round-robin strategy chooses the next file to merge in a round-robin way. The choose-best strategy chooses the file with the fewest overlapping files at the next level.

We used our two-phase approach to evaluate this partitioned LSM-tree design. The instantaneous write throughput during the testing phase is shown in Figure 15a, where the write throughput of both strategies decreases over time due to more frequent stalls. Moreover, under the uniform update workload, the alternative selection strategies have little impact on the overall write throughput, as previously reported. During the running phase, we used a constant arrival process to evaluate write stalls. The instantaneous write throughput of both strategies is shown in Figure 15b. As the result shows, in both cases write stalls start to occur after time 6000s. This suggests that the measured write throughput during the testing phase is not sustainable.

[Figure 15: Instantaneous Write Throughput under Two-Phase Evaluation of Partitioned LSM-tree — (a) Testing Phase; (b) Running Phase.]

6.2 Measuring Sustainable Write Throughput

One problem with LevelDB's score-based merge scheduler is that it merges as many components at Level 0 as possible at once. To see this, suppose that the minimum number of mergeable components at Level 0 is T_0 and that the maximum number of components at Level 0 is T'_0. During the testing phase, where writes pile up as quickly as possible, the merge scheduler tends to merge the maximum possible number of components T'_0 instead of just T_0 at once. Because of this, the LSM-tree will eventually transition from the expected shape (Figure 16a) to the actual shape (Figure 16b), where T is the size ratio of the partitioned levels. Note that the largest level is not affected since its size is determined by the number of unique entries, which is relatively stable.

[Figure 16: Problem of Score-Based Merge Scheduler — (a) Expected LSM-tree, with each partitioned level T times larger than the previous one; (b) Actual LSM-tree, with the intermediate levels inflated by a factor of T'_0/T_0.]

Even though this elastic design dynamically increases the processing rate as needed, it has the following problems.

Unsustainable Write Throughput. The measured maximum write throughput is based on merging T'_0 flushed components at Level 0 at once. However, this is likely to cause write stalls during the running phase since flushes cannot proceed any further.

Suboptimal Trade-Offs. The LSM-tree structure in Figure 16b no longer makes optimal performance trade-offs since the size ratios between its adjacent levels are no longer all the same. By adjusting the sizes of intermediate levels so that adjacent levels have the same size ratio, one can improve both write throughput and space utilization without affecting query performance.

Low Space Utilization. One motivation for industrial systems to adopt partitioned LSM-trees is their higher space utilization. However, the LSM-tree depicted in Figure 16b violates this performance guarantee because the ratio of wasted space increases from 1/T to (T'_0/T_0)·(1/T).
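To put a number on the space-utilization point, the arithmetic below uses T = 10 and T_0 = 4 from the configuration above, and takes the stop threshold of 12 as T'_0 (an assumption on our part).

```python
# Wasted-space ratio: 1/T in the expected shape versus (T0' / T0) * (1/T)
# in the actual shape.
T, T0, T0_prime = 10, 4, 12
print(1 / T)                      # expected shape -> 0.1
print((T0_prime / T0) * (1 / T))  # actual shape   -> 0.3
```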
Because of these problems, the measured maximum write throughput cannot be used in the long term. We propose a simple solution to address these problems. During the testing phase, we always merge exactly T_0 components at Level 0. This ensures that merge preferences will be given equally to all levels so that the LSM-tree will stay in the expected shape (Figure 16a). Then, during the running phase, the LSM-tree can elastically merge more components at Level 0 as needed to absorb write bursts.

To verify the effectiveness of the proposed solution, we repeated the previous experiments on the partitioned LSM-tree. During the testing phase, the LSM-tree always merged 4 components at Level 0 at once. The measured instantaneous write throughput is shown in Figure 17a; it is about 30% lower than that of the previous experiment. During the running phase, we used a constant arrival process based on this lower write throughput. The resulting instantaneous write throughput is shown in Figure 17b, where the LSM-tree successfully maintains a sustainable write throughput without any write stalls, which in turn results in low write latencies (not shown in the figure). This confirms that LevelDB's single-threaded scheduler is sufficient to minimize write stalls, given that a single merge thread can fully utilize the I/O bandwidth budget.

[Figure 17: Instantaneous Write Throughput under Two-Phase Evaluation of Partitioned LSM-tree with the Proposed Solution — (a) Testing Phase; (b) Running Phase.]

After fixing the unsustainable write throughput problem of LevelDB, we further evaluated the impact of partition size on the write stalls of partitioned LSM-trees. In this experiment, we varied the size of each partitioned file from 8MB to 32GB, so that partitioned merges effectively turn into full merges at the largest setting. The maximum write throughput during the testing phase and the 99th percentile write latencies during the running phase are shown in Figures 18a and 18b, respectively.

[Figure 18: Impact of Partition Size on Write Stalls — (a) Testing Phase: Maximum Write Throughput; (b) Running Phase: 99th Percentile Write Latency.]

Even though the partition size has little impact on the overall write throughput, a large partition size can cause large write latencies, since we have shown in Section 5 that a single-threaded scheduler is insufficient to minimize write stalls for full merges. Most implementations of partitioned LSM-trees today already choose a small partition size to bound the temporary space occupied by merges. We see here that one more reason to do so is to minimize write stalls under a single-threaded scheduler.
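As a concrete illustration of the Level-0 selection fix proposed in Section 6.2, the following sketch (our own simplification, not LevelDB's or AsterixDB's actual code) caps the number of flushed components picked for a Level-0 merge during the testing phase while leaving it elastic at runtime.

```python
def level0_components_to_merge(flushed_count, t0=4, elastic=False):
    """Testing phase (elastic=False): merge exactly t0 flushed components so
    the tree keeps its expected shape. Running phase (elastic=True): merge
    everything that has accumulated in order to absorb write bursts."""
    if flushed_count < t0:
        return 0                      # Level 0 is not full yet
    return flushed_count if elastic else t0
```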
7. LESSONS AND INSIGHTS

Having studied and evaluated the write stall problem for various LSM-tree designs, here we summarize the lessons and insights observed from our evaluation.

The LSM-tree's write latency must be measured properly. The out-of-place update nature of LSM-trees has introduced the write stall problem. Throughout our evaluation, we have seen cases where one can obtain a higher but unsustainable write throughput. For example, the greedy scheduler would report a higher write throughput by starving large merges, and LevelDB's merge scheduler would report a higher but unsustainable write throughput by dynamically adjusting the shape of the LSM-tree. Based on our findings, we argue that in addition to the testing phase used by existing LSM research, an extra running phase must be performed to evaluate the usability of the measured maximum write throughput. Moreover, the write latency must be measured properly due to queuing. One solution is to use the proposed two-phase evaluation approach to evaluate the resulting write latencies under high utilization, where the arrival rate is close to the processing rate.

Merge scheduling is critical to minimizing write stalls. Throughout our evaluation of various LSM-tree designs, including bLSM, full merges, and partitioned merges, we have seen that merge scheduling has a critical impact on write stalls. Comparing these LSM-tree designs in general depends on many factors and is beyond the scope of this paper; here we have focused on how to minimize write stalls for each LSM-tree design.

bLSM, an instance of full merges, introduces a sophisticated spring-and-gear merge scheduler to bound the processing latency of LSM-trees. However, we found that bLSM still has large variances in its processing rate, leading to large write latencies under high arrival rates. Among the three evaluated schedulers, namely single-threaded, fair, and greedy, the single-threaded scheduler should not be used in practical systems due to the long stalls caused by large merges. The fair scheduler should be used when measuring the maximum throughput because it provides fairness to all merges. The greedy scheduler should be used at runtime since it better minimizes the number of disk components, both reducing write stalls and improving query performance. Moreover, as an important design choice, global component constraints better minimize write stalls.

Partitioned merges simplify merge scheduling by breaking large merges into many smaller ones. However, we found a new problem: the measured maximum write throughput of LevelDB is unsustainable because it dynamically adjusts the size ratios under write-intensive workloads. After fixing this problem, a single-threaded scheduler with a small partition size, as used by LevelDB, is sufficient for delivering low write latencies under high utilization. However, fixing this problem reduced the maximum write throughput of LevelDB by roughly one-third in our evaluation.

For both full and partitioned merges, processing writes as quickly as possible better minimizes write latencies. Finally, with proper merge scheduling, all LSM-tree designs can indeed minimize write stalls by delivering low write latencies under high utilization.

8. CONCLUSION

In this paper, we have studied and evaluated the write stall problem for various LSM-tree designs. We first proposed a two-phase approach for evaluating the impact of write stalls on percentile write latencies using a combination of closed and open system testing models. We then identified and explored the design choices for LSM merge schedulers. For full merges, we proposed a greedy scheduler that minimizes write stalls. For partitioned merges, we found that a single-threaded scheduler is sufficient to provide a stable write throughput but that the maximum write throughput must be measured properly.
Based on these findings, we have shown that performance variance must be considered together with write throughput to ensure the actual usability of the measured throughput.

Acknowledgments. We thank Neal Young, Dongxu Zhao, and the anonymous reviewers for their helpful feedback on the theorems in this paper. This work has been supported by NSF awards CNS-1305430, IIS-1447720, IIS-1838248, and CNS-1925610, along with industrial support from Amazon, Google, and Microsoft and support from the Donald Bren Foundation (via a Bren Chair).

9. REFERENCES

AsterixDB.
Cassandra.
Compaction stalls: something to make better in RocksDB. compaction-stalls-something-to-make.html.
HBase.
LevelDB.
Read- and latency-optimized log structured merge tree.
RocksDB.
ScyllaDB.
Tarantool.
TPC-C.
WiredTiger.
YCSB change log. master/core/CHANGES.md.
M. Y. Ahmad and B. Kemme. Compaction management in distributed key-value datastores. PVLDB, 8(8):850–861, 2015.
S. Alsubaiee et al. AsterixDB: A scalable, open source BDMS. PVLDB, 7(14):1905–1916, 2014.
M. Armbrust et al. PIQL: Success-tolerant query processing in the cloud. PVLDB, 5(3):181–192, 2011.
M. Armbrust et al. Generalized scale independence through incremental precomputation. In ACM SIGMOD, pages 625–636, 2013.
O. Balmau et al. FloDB: Unlocking memory in persistent key-value stores. In European Conference on Computer Systems (EuroSys), pages 80–94, 2017.
O. Balmau et al. TRIAD: Creating synergies between memory, disk and log in log structured key-value stores. In USENIX Annual Technical Conference (ATC), pages 363–375, 2017.
B. H. Bloom. Space/time trade-offs in hash coding with allowable errors. CACM, 13(7):422–426, July 1970.
G. Candea et al. A scalable, predictable join operator for highly concurrent data warehouses. PVLDB, 2(1):277–288, 2009.
Z. Cao et al. On the performance variation in modern storage stacks. In USENIX Conference on File and Storage Technologies (FAST), pages 329–343, 2017.
M. J. Carey. AsterixDB mid-flight: A case study in building systems in academia. In ICDE, pages 1–12, 2019.
S. Chaudhuri et al. Variance aware optimization of parameterized queries. In ACM SIGMOD, pages 531–542, 2010.
B. F. Cooper et al. Benchmarking cloud serving systems with YCSB. In ACM SoCC, pages 143–154, 2010.
N. Dayan et al. Monkey: Optimal navigable key-value store. In ACM SIGMOD, pages 79–94, 2017.
N. Dayan et al. Optimal Bloom filters and adaptive merging for LSM-trees. ACM TODS, 43(4):16:1–16:48, Dec. 2018.
N. Dayan and S. Idreos. Dostoevsky: Better space-time trade-offs for LSM-tree based key-value stores via adaptive removal of superfluous merging. In ACM SIGMOD, pages 505–520, 2018.
N. Dayan and S. Idreos. The log-structured merge-bush & the wacky continuum. In ACM SIGMOD, pages 449–466, 2019.
J. Dean and L. A. Barroso. The tail at scale. CACM, 56:74–80, 2013.
S. Dong et al. Optimizing space amplification in RocksDB. In CIDR, volume 3, page 3, 2017.
R. Grover and M. J. Carey. Data ingestion in AsterixDB. In EDBT, pages 605–616, 2015.
M. Harchol-Balter. Performance Modeling and Design of Computer Systems: Queueing Theory in Action. Cambridge University Press, 2013.
G. Huang et al. X-Engine: An optimized storage engine for large-scale E-commerce transaction processing. In ACM SIGMOD, pages 651–665, 2019.
J. Huang et al. Statistical analysis of latency through semantic profiling. In European Conference on Computer Systems (EuroSys), pages 64–79, 2017.
J. Huang et al. A top-down approach to achieving performance predictability in database systems. In ACM SIGMOD, pages 745–758, 2017.
Y. Li et al. Tree indexing on solid state drives. PVLDB, 3(1-2):1195–1206, 2010.
Y. Li et al. Enabling efficient updates in KV storage via hashing: Design and performance evaluation. ACM Transactions on Storage (TOS), 15(3):20, 2019.
H. Lim et al. Towards accurate and fast evaluation of multi-stage log-structured designs. In USENIX Conference on File and Storage Technologies (FAST), pages 149–166, 2016.
L. Lu et al. WiscKey: Separating keys from values in SSD-conscious storage. In USENIX Conference on File and Storage Technologies (FAST), pages 133–148, 2016.
C. Luo and M. J. Carey. Efficient data ingestion and query processing for LSM-based storage systems. PVLDB, 12(5):531–543, 2019.
C. Luo and M. J. Carey. LSM-based storage techniques: a survey. The VLDB Journal, 2019.
C. Luo and M. J. Carey. On performance stability in LSM-based storage systems (extended version). CoRR, abs/1906.09667, 2019.
C. Luo et al. Umzi: Unified multi-zone indexing for large-scale HTAP. In EDBT, pages 1–12, 2019.
F. Mei et al. SifrDB: A unified solution for write-optimized key-value stores in large datacenter. In ACM SoCC, pages 477–489, 2018.
P. O'Neil et al. The log-structured merge-tree (LSM-tree). Acta Inf., 33(4):351–385, 1996.
M. A. Qader et al. A comparative study of secondary indexing techniques in LSM-based NoSQL databases. In ACM SIGMOD, pages 551–566, 2018.
P. Raju et al. PebblesDB: Building key-value stores using fragmented log-structured merge trees. In ACM SOSP, pages 497–514, 2017.
V. Raman et al. Constant-time query processing. In ICDE, pages 60–69, 2008.
K. Ren et al. SlimDB: A space-efficient key-value storage engine for semi-sorted data. PVLDB, 10(13):2037–2048, 2017.
R. Sears and E. Brewer. Stasis: Flexible transactional storage. In Symposium on Operating Systems Design and Implementation (OSDI), pages 29–44, 2006.
R. Sears and R. Ramakrishnan. bLSM: A general purpose log structured merge tree. In ACM SIGMOD, pages 217–228, 2012.
D. Teng et al. LSbM-tree: Re-enabling buffer caching in data management for mixed reads and writes. In IEEE International Conference on Distributed Computing Systems (ICDCS), pages 68–79, 2017.
R. Thonangi and J. Yang. On log-structured merge for solid-state drives. In ICDE, pages 683–694, 2017.
P. Unterbrunner et al. Predictable performance for unpredictable workloads. PVLDB, 2(1):706–717, 2009.
X. Wang and M. J. Carey. An IDEA: An ingestion framework for data enrichment in AsterixDB. PVLDB, 12(11):1485–1498, 2019.
T. Yao et al. A light-weight compaction tree to reduce I/O amplification toward efficient key-value stores. In International Conference on Massive Storage Systems and Technology (MSST), 2017.
C. Yunpeng et al. LDC: a lower-level driven compaction method to optimize SSD-oriented key-value stores. In ICDE, pages 722–733, 2019.
Y. Zhang et al. ElasticBF: Fine-grained and elastic bloom filter towards efficient read for LSM-tree-based KV stores. In USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage), 2018.
Mechanism of olefin polymerization by a soluble zirconium catalyst

A. R. Siedle, W. M. Lamanna, R. A. Newmark, J. N. Schroepfer

Journal of Molecular Catalysis A: Chemical, Volume 128, Issues 1–3, 27 February 1998, Pages 257–271

Abstract

A mechanistic study has been carried out on the homogeneous olefin polymerization/oligomerization catalyst formed from Cp2ZrMe2 and methylaluminoxane, (MeAlO)x, in toluene. Formal transfer of CH3 from Zr to Al yields low concentrations of Cp2ZrMe+ solvated by [(Me2AlO)y(MeAlO)x−y]y. The cationic Zr species initiates ethylene oligomerization by olefin coordination followed by insertion into the Zr–CH3 bond. Chain transfer occurs by one of two competing pathways. The predominant one involves exchange of Cp2Zr–P+ (P = growing ethylene oligomer) with Al–CH3 to produce another Cp2ZrMe+ initiator plus an Al-bound oligomer. Terminal Al–C bonds in the latter are ultimately cleaved on hydrolytic workup to produce materials with saturated end groups. Concomitant chain transfer occurs by sigma bond metathesis of Cp2Zr–P+ with ethylene. Metathesis results in cleavage of the Zr–C bond of the growing oligomer to produce materials also having saturated end groups, and a new initiating species, Cp2Zr–CH=CH2+. The two chain transfer pathways afford structurally different oligomers distinguishable by carbon number and end group structure. Oligomers derived from the Cp2ZrMe+ channel are Cn (n = odd) alkanes; those derived from Cp2Zr–CH=CH2+ are terminally mono-unsaturated Cn (n = even) alkenes. Chain transfer by beta hydride elimination is detectable but relatively insignificant under the conditions employed. Propylene and 1-hexene react similarly, but beta hydride elimination is the predominant chain transfer step. The initial Zr-alkyl species produces a Cp2ZrH+ complex that is the principal chain initiator. Chain transfer is fast relative to propagation, and the products are low molecular weight oligomers.

Introduction

In 1980, Kaminsky [1–12] reported the synthesis from dimethylzirconocene, Cp2ZrMe2 (Cp = η5-C5H5), and methylaluminoxane, (MeAlO)x, of a soluble olefin polymerization catalyst. This long-lived, high-activity catalyst is capable of producing polyethylene having narrow polydispersity. That discovery has stimulated intense interest in Group IV olefin polymerization chemistry. The initiating site is thought, but not proven, to be a cationic zirconium species, and much effort has been put into the synthesis and characterization of cationic organometallic model compounds containing Zr [13–20] (titanium-based model systems have also been studied), Th and U [22–25], Co [26–28], Cr and Ti [30–33], as well as neutral materials containing Sc [34, 35], Lu [36–38], and the lanthanides La, Nd and Sm [39–42].
That work has led to new synthetic routes to cationic organometallic compounds based on silver [17, 18] and ferrocenium tetraarylborates and carborane ligands B9C2H11 and (B9C2H11)2M (M = Fe, Co, Ni) [39–42], and these in turn have yielded new advances in C–H and CC bond activation chemistry. Much effort too has been devoted to understanding the original Kaminsky catalyst [43–46], but there remain some unanswered fundamental questions. We have studied this catalyst system in detail and have addressed the following issues: (1) what is methylaluminoxane?; (2) how does it interact with Cp2ZrMe2, and what initiating species is produced?; and (3) what are the mechanism(s) of chain initiation and transfer? Our results go beyond the particulars of the Kaminsky catalyst and bear more generally on the area of olefin activation by soluble metal catalysts.

Section snippets

The nature of methylaluminoxane

Controlled hydrolysis of trimethylaluminum can be achieved in a two-phase system (Me6Al2 and a hydrocarbon solvent) containing an additional, insoluble reagent, such as MgCl2·6H2O, that slowly releases water. After filtration and evaporation of solvent and unreacted Me6Al2, there remains (MeAlO)x. Even today, methylaluminoxane is an enigma. It is not yet established whether this noncrystalline material is a single species (one suspects that it is not), or whether it is cyclic and contains only…

Experimental

Toluene and 1-hexene were dried by distillation from Na–K alloy. Ethylene and propylene were used as received from Matheson or, for 13C isotopomers, from Merck. Methylaluminoxane was prepared as previously described [47, 48]. A 17O-enriched sample was similarly synthesized using MgCl2 that had been rehydrated with 20 atom% H2 17O. A solution of (MeAlO)x–Me3Al in toluene was obtained from the Ethyl Corp.; it is stated to be ca. 0.9 M each in (MeAlO)x and Me3Al. 13C, 17O, 27Al and 91Zr NMR spectra…

Acknowledgements

The atomic emission detection separation experiment was both suggested and conducted by Joel Miller, 3M Analytical and Properties Research Laboratory. We thank Gaddam Babu, M. Brookhart, R. F. Jordan, Don Hagen, P. A. Lyon and G. V. D. Tiers for helpful discussions.
W. Kaminsky, H. Sinn (Eds.), Transition Metals and Organometallics as Catalysts for Olefin Polymerization, ...
R.F. Jordan, J. Chem. Ed. (1988)
R.F. Jordan et al., J. Am. Chem. Soc. (1985)
R.F. Jordan et al., J. Am. Chem. Soc. (1987)
R.F. Jordan et al., Inorg. Chem. (1987)
R.F. Jordan et al., J. Am. Chem. Soc. (1986)
R.F. Jordan et al., Organometallics (1987)
R.F. Jordan et al., Organometallics (1989)
R.F. Jordan et al., J. Am. Chem. Soc. (1990)
J.J. Eisch et al., J. Am. Chem. Soc. (1985)
K.-H. Dahmen et al., Langmuir (1988)
D. Hedeen et al., J. Am. Chem. Soc. (1988)

Cited by (57)

Metallocene and related catalysts for olefin, alkyne and silane dimerization and oligomerization. Coordination Chemistry Reviews, 2006.

Citation excerpt: Jacobs et al. described the immobilization of (C5H5)2ZrMe2 on zeolite MCM-41 whose silanol groups were pre-reacted with B(C6F5)3/PhNMe2, resulting in the heterogeneous catalyst [{MCM-41-O}-B(C6F5)3]−[(C5H5)2ZrMe]+. Activities of this heterogeneous propene oligomerization catalyst were comparable to homogeneous (C5H5)2ZrCl2/{AlMe3, MAO, B(C6F5)4− or B(C6F5)3} systems (∼10^5 g molZr−1 h−1), with over 90% 1-alkenes and a Schulz–Flory carbon-number distribution. Van Looveren et al. similarly anchored MAO by the in situ hydrolysis of AlMe3 on the internal pore walls of a mesoporous MCM-41 support to generate a highly active host for (C5H5)2ZrMe2 in the oligomerization of propene, with a typical Schulz–Flory distribution of the propene oligomers [57,58].

Abstract: This review summarizes the use of metallocene complexes and related compounds as catalysts in the dimerization or oligomerization of olefins (alkenes) or terminal acetylenes (alkynes) and in the dehydrocoupling/dehydrooligomerization of silanes. Metallocene complexes of group-III metals (scandocenes, yttrocenes, lanthanocenes), lanthanoids (neodymocenes) and group-IV metals (titanocenes, zirconocenes, hafnocenes) have been utilized in the selective (co-/hydro-)oligomerization of ethene; of the α-olefins propene, 1-butene, 1-pentene, 1-hexene, 1-heptene, and 1-octene; of branched olefins, e.g. methyl-butenes, methyl-pentenes and styrene; of cycloolefins, e.g. cyclopentene and norbornene; and of α,ω-dienes, e.g. 1,5-hexadiene and 1,7-octadiene. Group-III metallocenes are often active in the C–C coupling without a cocatalyst; group-IV metallocenes require the help of a cocatalyst, such as methylalumoxane (MAO), aluminum alkyls, e.g. AliBu3, or perfluorinated boranes, e.g. B(C6F5)3. The actinoid metallocenes (C5Me5)2AnMe2 with An = thorium, uranium allow for the dimerization and oligomerization of terminal acetylenes. The dehydrooligomerization of (hydro)silanes is typically achieved by group-IV metallocene chlorides together with n-butyl lithium. Also included in this review are related sandwich and half-sandwich (mono-cyclopentadienyl) complexes used for olefin oligomerization. The related sandwich complexes feature phospholyl, boratabenzene or carboranate ligands. Methods of oligo-olefin analysis by gel permeation chromatography (GPC), 1H NMR spectroscopy, gas chromatography (GC) or viscosity measurements for molecular-weight determination, and by 1H and 13C NMR spectroscopy or MALDI–TOF mass spectrometry for end-group structure determination, are summarized. Possible applications of olefin oligomers, in particular oligopropenes, are presented.
The functionality of a double bond at the end of each chain (for further modifications), together with the product homogeneity, are the advantages of oligomers from metallocene catalysis. In addition, olefin oligomerization is used to study mechanistic aspects and to obtain better insight into the reaction mechanism of metallocene polymerization catalysis, because of the homogeneity of the reaction mixture and because certain mechanistic aspects are easier to investigate in oligomeric products than in high-molar-mass polymers.

Theoretical studies of the structure and function of MAO (methylaluminoxane). Progress in Polymer Science, 2004.

Abstract: Single-site homogeneous catalysts need to be activated by a co-catalyst or counterion. The high activity imparted by methylaluminoxane (MAO) has made it one of the most important co-catalysts. In fact, it can be argued that the success of the metallocenes is largely due to the discovery of MAO. However, despite intensive studies, MAO has remained a 'black box'. The presence of multiple equilibria between different (AlOMe)n oligomers, coupled with the interaction between MAO and TMA, has hindered experimental structural assignment of MAO. This has made it nearly impossible to characterize the dormant and active species present in olefin polymerization and therefore to theoretically investigate the mechanism of this process. Moreover, the binding of MAO with porous inorganic oxides such as silica, alumina and MgCl2 is currently not understood. Perhaps even more puzzling is the fact that a large excess of MAO is necessary in order for polymerization to occur (Al:catalyst ratios of ∼10,000:1), whereas in the case of supported MAO this ratio is greatly decreased (ratios of ∼100:1–500:1). Despite the fact that the co-catalytic ability of MAO was discovered nearly 25 years ago, its exact structure and function are still unknown. In recent years, theoretical studies of MAO aiming to give further insight into the aforementioned issues have emerged. In this article, we give a quick overview of experimental studies of MAO and an in-depth review of recent theoretical investigations.

Online mechanistic investigations of catalyzed reactions by electrospray ionization mass spectrometry: A tool to intercept transient species in solution. European Journal of Organic Chemistry, 2008.

Iron catalyzed polyethylene chain growth on zinc: A study of the factors delineating chain transfer versus catalyzed chain growth in zinc and related metal alkyl systems. Journal of the American Chemical Society, 2004.

Oligomerisation of ethylene by bis(imino)pyridyliron and -cobalt complexes. Chemistry: A European Journal, 2000.

Cocatalysts for metal-catalyzed olefin polymerization: Activators, activation processes, and structure–activity relationships. Chemical Reviews, 2000.

1 Dedicated to Prof. Roy M. Adams, Geneva College, Beaver Falls, PA, USA, on the occasion of his 71st birthday.
135
Published Time: Mon, 23 Jan 2023 03:35:45 GMT

An explicit upper bound for Siegel zeros of imaginary quadratic fields

D. Ralaivaosaona, F. B. Razakarinoro

Abstract. For any integer d ≥ 3 such that −d is a fundamental discriminant, we show that the Dirichlet L-function associated with the real primitive character χ(·) = (−d/·) does not vanish on the positive part of the interval [1 − 6.5/√d, 1].

Keywords: Siegel zero, imaginary quadratic fields, class number, L-function
Subject Classification Codes: 11M20

1 Introduction

For a fundamental discriminant D, the arithmetic function defined by the Kronecker symbol χ(n) = (D/n) is a real primitive Dirichlet character, and its associated L-function is defined by the series
L(s, \chi) := \sum_{n=1}^{\infty} \frac{\chi(n)}{n^s}.
The series on the right-hand side only makes sense when Re(s) > 1, but it is well known that the function L(s, χ) has an analytic continuation to the whole complex plane. The locations of the zeros of L(s, χ) are particularly important in number theory. One of the most important open problems in mathematics, the Generalized Riemann Hypothesis (GRH), asserts that all zeros with positive real part lie precisely on the vertical line Re(s) = 1/2. Siegel zeros, sometimes called Landau–Siegel zeros, are hypothetical real zeros of these L-functions that lie very close to 1. The existence of such zeros has not yet been ruled out, but it is known that L(s, χ) has at most one simple zero in an interval of the form (1 − c/log|D|, 1), see Page. Morrill and Trudgian recently gave an explicit version of the latter statement with c = 1.011, using Pintz's refinement of Page's theorem.

The largest positive zero of L(s, χ), if it exists, will be denoted by β throughout this paper. We are interested in an upper bound on β, or equivalently a lower bound on the distance from β to 1, in the case D = −d with d ≥ 3. It is well known that there exists an absolute constant c > 0 such that 1 − β > c/√d; see Haneke, Goldfeld and Schinzel, and Pintz. In particular, it is shown in the Goldfeld–Schinzel paper that
1 − \beta > \Big( \frac{6}{\pi} + o(1) \Big) \frac{1}{\sqrt{d}} \quad \text{as } d \to \infty.  (1)
Pintz achieved a similar result with a different method. He improved the constant 6/π to 12/π, and then further to 16/π following Schinzel's remark; see the footnote on page 277 of Pintz's paper. We are unaware of any result of the form 1 − β > c/√d with an explicit constant c > 0 prior to this work. Known explicit results have an additional (log d)^2 term in the denominator, see [7, Lemma 3] and others. Most of these papers made use of explicit upper bounds for L′(1, χ). We do not follow this route; instead, we use the method of Goldfeld and Schinzel. It is worth noting that L(s, χ) does not have positive real zeros for at least a positive proportion of fundamental discriminants −d. Moreover, Watkins' computational results show that the same holds for all L(s, χ) with fundamental discriminants −d such that d ≤ 300000000. The following theorem is our main result.

Theorem 1. Let d > 300000000 be such that −d is a fundamental discriminant, and let L(s, χ) be the Dirichlet L-function associated with the primitive character χ(n) = (−d/n). If there exists β > 0 such that L(β, χ) = 0, then
1 − \beta > \frac{6.5}{\sqrt{d}}.  (2)

Another paper of Watkins provides a classification of all imaginary quadratic fields with class number less than or equal to 100.
The combination of these results guarantees that we need only consider the case where the class number h(−d) of the corresponding imaginary quadratic field Q(√−d) is at least 101. We will see that a higher class number gives a better constant in (2). In fact, we have the following asymptotic result in terms of the class number.

Theorem 2. Let d and β be as in Theorem 1 and let h(−d) be the class number of the quadratic field Q(√−d). Then we have
1 − \beta > (2\pi + o(1)) \frac{h(-d)}{(\log h(-d))^2 \sqrt{d}} \quad \text{as } h(-d) \to \infty.  (3)

It is also possible to obtain an explicit bound for the o(1) term in (3) in terms of h(−d). But this will not be very useful unless we have an explicit lower bound for the class number h(−d), which is a much harder problem.

This paper is organized as follows: in Section 2 we prove two preliminary results, one on the sum of reciprocal prime powers and the other on the sum of reciprocal ideal norms. The proofs of Theorem 1 and Theorem 2 are given in Section 3 and Section 4, respectively. We conclude with a short discussion of possible improvements of Theorem 1 in Section 5.

2 Preliminary results

2.1 Sum of reciprocal prime powers

We will need explicit estimates for the sum \sum_{p^\alpha \le x} p^{-\alpha}, taken over the prime powers p^\alpha not exceeding x. Clearly \sum_{p^\alpha \le x} p^{-\alpha} is greater than the sum of reciprocal primes \sum_{p \le x} p^{-1}, but the two are asymptotically equal as x → ∞. It is well known that for x ≥ 3 we have
\sum_{p \le x} p^{-1} = \log\log x + B_1 + o(1),
where B_1 is known as the Mertens constant (Sequence A077761 in the OEIS). Dusart recently provided an explicit bound for the error term in the above estimate. It is shown, see [6, Theorem 5.6], that for every x ≥ 2278383 we have
\Big| \sum_{p \le x} p^{-1} - \log\log x - B_1 \Big| \le \frac{0.2}{(\log x)^3}.  (4)
We use this result to obtain an explicit estimate for the sum of reciprocal prime powers.

Proposition 1. For every x ≥ 2, we have
-\frac{1.75}{(\log x)^2} \le \sum_{p^\alpha \le x} p^{-\alpha} - \log\log x - B_2 \le \min\Big\{ \frac{0.2}{(\log x)^3},\, 10^{-4} \Big\},  (5)
where B_2 = B_1 + \sum_{\alpha \ge 2} \sum_p p^{-\alpha} = 1.03465\ldots

The constant B_1 is sometimes referred to as the prime reciprocal constant, so we could analogously call B_2 the prime-power reciprocal constant. B_2 also appears in the OEIS as Sequence A083342.

Remark 1. The lower bound in (5) could be made of the form c/(\log x)^3 as in Dusart's result above, but we chose the asymptotically weaker bound in the proposition because it gives a slightly better approximation for small values of x; see the comparison with the exact error in Figure 1.

Figure 1: Lower bound in (5) and exact error.

Proof. For x ≥ 2, we have
0 \le \sum_{p^\alpha \le x} p^{-\alpha} - \sum_{p \le x} p^{-1} \le \sum_{\alpha \ge 2} \sum_p p^{-\alpha}.
Let us denote the double sum on the right-hand side by C. It is easy to check that it converges; in fact,
C = \lim_{N\to\infty} \sum_{\alpha \ge 2} \sum_{p \le N} p^{-\alpha} = \lim_{N\to\infty} \sum_{p \le N} \sum_{\alpha \ge 2} p^{-\alpha} = \sum_p \frac{1}{p^2 - p}.
This implies that
\sum_{p^\alpha \le x} p^{-\alpha} \le \sum_{p \le x} p^{-1} + C.  (6)
Now, for the lower bound, we have
\sum_{p^\alpha \le x} p^{-\alpha} - \sum_{p \le x} p^{-1} = C - \sum_{\alpha \ge 2} \sum_{p^\alpha > x} p^{-\alpha},
and
\sum_{\alpha \ge 2} \sum_{p^\alpha > x} p^{-\alpha} \le \sum_{p > \sqrt{x}} \frac{1}{p^2 - p} \le \sum_{n > \sqrt{x}} \frac{1}{n^2 - n} = \frac{1}{\lceil \sqrt{x} \rceil - 1}.
Thus
\sum_{p^\alpha \le x} p^{-\alpha} \ge \sum_{p \le x} p^{-1} + C - \frac{1}{\lceil \sqrt{x} \rceil - 1}.  (7)
Combining (6) and (7) with Dusart's bound (4), we obtain the following: for x ≥ 2278383,
-\frac{0.2}{(\log x)^3} - \frac{1}{\lceil \sqrt{x} \rceil - 1} \le \sum_{p^\alpha \le x} p^{-\alpha} - \log\log x - B_1 - C \le \frac{0.2}{(\log x)^3}.  (8)
It is easy to check that for x ≥ 2278383 these bounds imply the estimates (5) in the statement of the proposition (if x ≥ 2278383, then 0.2/(\log x)^3 < 10^{-4}). It now remains to check that (5) also holds for all x < 2278383.
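A minimal version of such a check can be scripted; the paragraphs that follow describe the paper's more careful treatment, which also handles values of x between consecutive prime powers. The sketch below (not the authors' program) assumes Python with sympy and evaluates ε(x) = Σ_{p^α ≤ x} p^{−α} − log log x − B2 at prime powers up to a small bound, confirming that the bounds in (5) hold there.

```python
# Numerical spot-check of Proposition 1 -- a sketch, not the authors' program.
# Assumes sympy for prime generation; B2 is the constant quoted in the text
# (OEIS A083342), truncated to a few digits, so results are approximate.
import math
from sympy import primerange

B2 = 1.03465388

def prime_powers(limit):
    """Return all prime powers p**a <= limit, sorted by value."""
    pps = []
    for p in primerange(2, limit + 1):
        q = p
        while q <= limit:
            pps.append(q)
            q *= p
    return sorted(pps)

def check(limit=10**5):
    # epsilon(x) = sum_{p^a <= x} p^{-a} - log log x - B2, evaluated at prime powers,
    # which are the critical points for the upper bound in (5) (epsilon jumps up at a
    # prime power and decreases in between).
    running, max_eps = 0.0, -float("inf")
    for q in prime_powers(limit):
        running += 1.0 / q
        eps = running - math.log(math.log(q)) - B2
        assert -1.75 / math.log(q) ** 2 <= eps <= min(0.2 / math.log(q) ** 3, 1e-4), q
        max_eps = max(max_eps, eps)
    print(f"largest epsilon(p^a) for p^a <= {limit}: {max_eps:.6f} (negative, as in the verification described below)")

if __name__ == "__main__":
    check()
```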
We can use a computer check for this, but we need to be cautious because x can take any real value. First, for x ≥ 2, we let ε(x) := ∑ pα≤x p−α − log log x − B2. Then, we can easily show from this definition that if pα is the greatest prime power ≤ x then ε(x) ≤ ε(pα). We verified numerically with a computer program that ε(pα) < 0 for all prime powers pα in the interval [2 , 2278383], which proves the upper bound in (5). Similarly, if pα is the least prime power ≥ x , then we have ε(x) + 1.75 (log x)2 ≥ ε(pα) + 1.75 (log pα)2 −  p−α if x < p α 0 if x = pα. Again, we checked with a computer program that ε(pα) + 1.75 (log pα)2 − pα > 0 for all prime powers pα in the interval [2 , 2278421] (the number 2278421 is the smallest prime power greater than 2278383). Therefore, we deduce that ε(x) + 1.75 (log x)2 > 0 for all x ∈ [2 , 2278383], which completes the proof of the proposition. 2.2 Exploiting the class number The approach of Goldfeld and Schinzel involves reciprocal sums of norms of ideals of the form ∑ N(a)≤x 1 N (a) , (9) where x ≥ 1 and a runs over all nonzero ideals of the ring of integers OQ(√−d). In order to understand such sums, let us recall some useful results from the classical theory of imaginary quadratic fields. For each positive integer a let ν(a) denote the number of representations of a as a norm of an ideal of OQ(√−d) that is not divisible by any rational integer > 1. Such an ideal can be written uniquely in the form [ a, b+√−d 2 ] := { an + b+√−d 2 m : n, m ∈ Z } , where a ≥ 1, −a < b ≤ a, and b2 ≡ − d (mod 4 a). Moreover, every other ideal can be written in the form u[a, b+√−d 2 ], where u is a positive integer, and the norm of such an ideal is u2a. So we can rewrite the sum in (9) as follows: ∑ N(a)≤x 1 N (a) = ∑ u2a≤x ν(a) u2a . 4We have the following important lemma concerning the arithmetic function ν(·). It was given without proof in , so we will provide a quick proof here. Lemma 1. The function ν(·) is multiplicative with ν(pα) = { 1 + χ(p) if p - d or α = 1 0 otherwise . Proof. The multiplicativity of ν(·) follows easily from the unique factorization property of the ideals of OQ(√−d). As for the formula for ν(pα), we use the charicterization of prime ideals in OQ(√−d): • If χ(p) = 0 and α = 1, then the only ideal with norm p is the ideal p with (p) = p2. • If χ(p) = 1, then we have the factorization ( p) = p1p2 with N (p1) = N (p2) = p.Hence, we have ( pα) = pα 1 pα 2 . Thus, the only ideals with norm pα that are not divisible by rational integers are pα 1 and pα 2 , since any other choice will have to be divisible by both p1 and p2, i.e., divisible by ( p). • If χ(p) = −1, then ( p) is a prime ideal with norm p2. Hence, there are no ideals with norm pα if α is odd. But if α is even, then any ideal with norm pα will be divisible by ( p). The only remaining case is when χ(p) = 0 and α ≥ 2. However, since −d is a fundamental discriminant, the only possibility for this to happen is for d to be divisible by 4, p = 2, and α = 2 or 3. But again, in this case, the only ideal with norm 2 α is the ideal pα, where p2 = (2) . Since α ≥ 2, such an ideal is divisible by (2) . When studying sums over norms of ideals like (9), it is often useful to consider the Dedekind zeta function for Q(√−d). Let ζ−d(s) := ∑ a 1 N (a)s , where a runs over all nonzero ideals of OQ(√−d) and Re( s) > 1. Lemma 1 implies that ζ−d(s) = ζ(s)L(s, χ ), (10) which immediately provides an analytic continuation for ζ−d(s). Equation (10) is well known, but also follows easily from Lemma 1. 
Indeed, for Re(s) > 1, we have
\sum_{\mathfrak a} \frac{1}{N(\mathfrak a)^s} = \sum_{u,\,a} \frac{\nu(a)}{u^{2s} a^s} = \sum_{u} \frac{1}{u^{2s}} \sum_{a} \frac{\nu(a)}{a^s}
= \prod_p (1 - p^{-2s})^{-1} \prod_{p,\;\chi(p)=0} (1 + p^{-s}) \prod_{p,\;\chi(p)=1} \Big( 1 + \frac{2 p^{-s}}{1 - p^{-s}} \Big)
= \prod_p (1 - p^{-s})^{-1} \prod_{p,\;\chi(p)=-1} (1 + p^{-s})^{-1} \prod_{p,\;\chi(p)=1} (1 - p^{-s})^{-1} = \zeta(s) L(s, \chi).

Every ideal [a, (b+√−d)/2] corresponds to a binary quadratic form ax^2 + bxy + cy^2, where d = 4ac − b^2. Such a form is called reduced if −a < b ≤ a < c or 0 ≤ b ≤ a = c. The number of reduced forms is known as the class number of Q(√−d), and we denote it by h(−d). Watkins gives all negative fundamental discriminants with class number less than or equal to 100. The largest absolute value of such a discriminant is 2383747 (whose class number is 98). Moreover, it is shown in another paper of Watkins that for d ≤ 300000000, the function L(s, χ) does not have positive real zeros. Hence, we can assume from now on that
d > 300000000, \quad \text{and so} \quad h(-d) \ge 101.  (11)

Lemma 2. Let h(−d) be the class number of a quadratic field of discriminant −d with d > 300000000. Then we have
\sum_{a \le \frac{1}{2}\sqrt{d}} \frac{\nu(a)}{a} \le \frac{h(-d)}{11}.  (12)

Proof. Notice first that for an ideal [a, (b+√−d)/2] with norm a ≤ (1/2)√d, the corresponding quadratic form ax^2 + bxy + cy^2 is reduced. To see this, note that for d > 4, equality cannot hold in a ≤ (1/2)√d since, otherwise, d/4 would not be squarefree. So 4a^2 < d = 4ac − b^2 ≤ 4ac, which yields a < c. This observation implies that each ideal class of Q(√−d) contains at most one ideal of the form [a, (b+√−d)/2] with norm a ≤ (1/2)√d, and in particular we have
\sum_{a \le \frac{1}{2}\sqrt{d}} \nu(a) \le h(-d).
On the other hand, using Lemma 1, we can show that ν(a) ≤ 2^{w(a)}, where w(n) denotes the number of distinct prime divisors of n, with w(1) = 0. Hence, we have
\sum_{a \le \frac{1}{2}\sqrt{d}} \frac{\nu(a)}{a} \le \sum_{a \le \frac{1}{2}\sqrt{d}} \frac{2^{w(a)}}{a}.
One can verify with a calculator that \sum_{n=1}^{34} 2^{w(n)} = 101. This implies that there can be at most 101 ideals of the form [a, (b+√−d)/2] with norm a less than or equal to 34. But since in our case the class number h(−d) is at least 101, we may write
\sum_{a \le \frac{1}{2}\sqrt{d}} \frac{\nu(a)}{a} \le \sum_{n=1}^{34} \frac{2^{w(n)}}{n} + \frac{h(-d) - 101}{35}.
This is because the sum is larger if more small numbers a are represented as norms of ideals; so in the above, 101 ideals have norms from 1 to 34 and the norms of the rest must be at least 35. Hence, by evaluating the sum on the right, we obtain
\sum_{a \le \frac{1}{2}\sqrt{d}} \frac{\nu(a)}{a} \le 9.161 + \frac{h(-d) - 101}{35} \le \frac{h(-d)}{11}
for h(−d) ≥ 101.

3 Proof of Theorem 1

The proof relies on estimates of sums of the form (9) when x is slightly larger than (1/2)√d. To make this precise, we consider an auxiliary function f(d) ≥ 1, to be specified later. We set x = (1/2)√d·f(d). From now on, we may assume that there exists β > 0 such that L(β, χ) = 0 and that
1 - \beta \le \frac{6.5}{\sqrt{d}},  (13)
for otherwise there is nothing to prove. Then we define the integral
I := \frac{1}{2\pi i} \int_{2-i\infty}^{2+i\infty} \zeta_{-d}(s + \beta) \frac{x^s}{s(s + 2)(s + 3)}\, ds.  (14)
As we can see in the next lemma, this integral allows us to estimate the sum of reciprocal norms of ideals mentioned in the previous section.

Lemma 3. We have
I \le \frac{1}{6}\, x^{1-\beta} \sum_{N(\mathfrak a) \le x} \frac{1}{N(\mathfrak a)}.

Before proving this lemma, let us first recall Perron's formula, see [1, p. 243] for example: if y is any positive real number and c > 0, then we have
\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{y^s}{s}\, ds = \begin{cases} 1 & \text{if } y > 1, \\ 1/2 & \text{if } y = 1, \\ 0 & \text{if } 0 < y < 1, \end{cases}  (15)
where by \int_{c-i\infty}^{c+i\infty} we mean \lim_{T\to\infty} \int_{c-iT}^{c+iT}.

Proof of Lemma 3. We begin with the following partial fraction decomposition:
\frac{1}{s(s + 2)(s + 3)} = \frac{1}{6s} - \frac{1}{2(s + 2)} + \frac{1}{3(s + 3)}.
(16) Hence, by (15) we obtain 12πi ∫ 2+ i∞ 2−i∞ ys s(s + 2)( s + 3) = {0 if 0 < y < 1, 16 − y−2 2 y−3 3 if y ≥ 1. (17) Since ζ−d(s) = ∑ a 1 N (a)s = ∑ u2a ν(a) u2sas converges absolutely for Re(s) > 1, we have I = 12πi ∫ 2+ i∞ 2−i∞ ∑ a 1 N (a)s+β xs s(s + 2)( s + 3) ds = 12πi ∫ 2+ i∞ 2−i∞ ∑ a 1 N (a)β ( xN (a) )s 1 s(s + 2)( s + 3) ds. Swapping summation and integration and using (17) (setting y = xN (a) ) yield I = ∑ N(a)≤x 1 N (a)β [ 16 − N (a)2 2x2 + N (a)3 3x3 ] ≤ 16 ∑ N(a)≤x 1 N (a)β ( since 16 − 12y2 + 13y3 ≤ 16 for any y ≥ 1 ) ≤ x1−β 6 ∑ N(a)≤x 1 N (a) , which complete the proof of the lemma. 73.1 Lower bound on I By shifting the path of integration of the integral I to Re( s) = −β, Equation (14) can now be written as I = L(1 , χ )x1−β (1 − β)(3 − β)(4 − β) + 12πi ∫ −β+i∞−β−i∞ ζ(s + β)L(s + β, χ ) xs s(s + 2)( s + 3) ds, (18) where the first term on right-hand side comes from the simple pole of the integrand at s = 1 − β. Note that s = 0 is also a singularity but it is removable since we assumed that L(β, χ ) = 0 . Let us denote the integral on the right-hand side of (18) by J, i.e., J := 12πi ∫ −β+i∞−β−i∞ ζ(s + β)L(s + β, χ ) xs s(s + 2)( s + 3) ds. Then, one has |J| ≤ x−β 2π ∫ ∞−∞ |ζ(it )|| L(it, χ )| √(β2 + t2)((2 − β)2 + t2)((3 − β)2 + t2) dt. On the other hand, using our assumptions (13) and d > 300000000, we deduce that β2 > ( 1 − 6.5/√300000000 )2 0.9996 2 > 0.999 . Therefore, |J| < x−β 2π ∫ ∞−∞ |ζ(it )|| L(it, χ )| √(0 .999 + t2)(1 + t2)(4 + t2) dt. (19) In order to find an upper bound for the above integral we need to obtain explicit bounds for |ζ(it )| and |L(it, χ )|. The following explicit result can be found in : for |t| ≥ 3, |ζ(1 + it )| ≤ 34 log |t|. (20) Similarly, for |L(it, χ )|, Dudek in obtained |L(1 + it, χ )| ≤ log d + log( e(|t| + 14 /5)) . (21) Using (20), (21), and the functional equations for the respective functions, we obtain the following lemma. Lemma 4. For any real number t such that |t| ≥ 3, we have |ζ(it )| ≤ 3 √32 π √|t| log |t|, and |L(it, χ )| ≤ 0.4 √d|t| (log d + log( e(|t| + 14 /5))) . Proof. The functional equation of the Riemann zeta function gives |ζ(it )| = π−1 sinh( π|t|/2) |Γ(1 − it )|| ζ(1 − it )|. Since sinh( π|t|/2) |Γ(1 − it )| = √ π 2 |t| tanh( π|t|/2) = √ π 2 |t| ( 1 − 2 eπ|t| + 1 ) ≤ √ π 2 |t|, 8we deduce from (20) that for |t| ≥ 3 we have |ζ(it )| ≤ 3 √32 π √|t| log |t|. Similarly the functional equation for L(s, χ ) is as follows: if Λ( s, χ ) = ( πd )−(s+1) /2 Γ ( s + 1 2 ) L(s, χ ), then Λ(1 − s, χ ) = ik 1/2 τ (χ) Λ( s, χ ), where τ (χ) = ∑dk=1 χ(k) exp(2 πik/d ) (here χ is real and χ(−1) = −1). Using the fact that |τ (χ)| = d1/2 and replacing s by it yield ( πd )−1 ∣∣∣∣Γ ( 2 − it 2 )∣ ∣∣∣ |L(1 − it, χ )| = ( πd )−1/2 ∣∣∣∣Γ ( 1 + it 2 )∣ ∣∣∣ |L(it, χ )|. Hence |L(it, χ )| = ( dπ )1/2 ∣∣∣∣Γ ( 2 − it 2 )∣ ∣∣∣∣∣∣∣Γ ( 1 + it 2 )∣ ∣∣∣ −1 |L(1 − it, χ )|. Moreover, we have ∣∣∣∣Γ ( 2 − it 2 )∣ ∣∣∣ = √ π|t| 2 sinh( π|t|/2) and ∣∣∣∣Γ ( 1 + it 2 )∣ ∣∣∣ = √ π cosh( π|t|/2) . Thus, ∣∣∣∣Γ ( 2 − it 2 )∣ ∣∣∣∣∣∣∣Γ ( 1 + it 2 )∣ ∣∣∣ −1 = √ |t| 2 coth( π|t|/2) . Therefore, we deduce that |L(it, χ )| = ( dπ )1/2 √ 12 |t| coth( π|t|/2) |L(1 − it, χ )|. (22) If |t| ≥ 3, then eπ|t| ≥ e3π > 12391, so √ 12 |t| coth( π|t|/2) = √ |t| 2 ( 1 + 2 eπ|t|−1 ) < √ 12 |t| (1 + 212390 ) < 0.708 √|t|. Thus for |t| ≥ 3, we have |L(it, χ )| < 0.708 √π √d|t| | L(1 − it, χ )| < 0.4√d|t| | L(1 − it, χ )|. The proof of the lemma is complete by using (21) to estimate the right-hand side. 
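The first bound in Lemma 4 is easy to sanity-check numerically. The following is a minimal sketch (not from the paper), assuming mpmath, and reading the bound as |ζ(it)| ≤ (3/√(32π))·√|t|·log|t| for |t| ≥ 3.

```python
# Quick numerical sanity check of the zeta bound in Lemma 4 (not from the paper).
# Reads the bound as |zeta(it)| <= 3/sqrt(32*pi) * sqrt(t) * log(t) for t >= 3.
from mpmath import mp, mpf, zeta, sqrt, log, pi

mp.dps = 20
C = 3 / sqrt(32 * pi)
for t in [3, 5, 10, 50, 100, 1000]:
    lhs = abs(zeta(1j * mpf(t)))          # |zeta(it)|
    rhs = C * sqrt(t) * log(t)            # claimed upper bound
    print(f"t={t:>5}: |zeta(it)| = {float(lhs):.4f} <= {float(rhs):.4f}")
```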
Another consequence of the calculations in the proof above is that we also have |ζ(it )L(it, χ )| = √d 2π |tζ (1 − it )L(1 − it, χ )| for t ∈ R \ { 0}. (23) 9In view of (19), we consider the following integrals: J1 := 12π ∫ 3 −3 |tζ (1 − it )| √(0 .999 + t2)((1 + t2)(4 + t2) dt, J2 := 12π ∫ 3 −3 |tζ (1 − it )| log( e(|t| + 14 /5)) √(0 .999 + t2)((1 + t2)(4 + t2) dt, J3 := 0.6 √2π ∫ ∞ 3 t log t √(0 .999 + t2)((1 + t2)(4 + t2) dt, J4 := 0.6 √2π ∫ ∞ 3 t log t log( e(t + 14 /5)) √(0 .999 + t2)((1 + t2)(4 + t2) dt. By (19), (23) and Lemma 4, we have |J| ≤ x−β 2π √d ( (J1 + J3) log d + J2 + J4 ) . (24) On the other hand, we can use a computer algebra system such as SageMath or Mathematica to calculate the Ji’s numerically. We obtained the following numerical values (with high accuracy) J1 = 0 .19692 . . . , J2 = 0 .45203 . . . , J3 = 0 .15661 . . . , J4 = 0 .61360 . . . Rounding this values up at the 3rd digit, and using (24), we have |J| ≤ x−β 2π √d ( 0.354 log d + 1 .067 ) < x−β 2π ( 0.354 + 1.067 log d )√d log d. Thus using d ≥ 300000000 to estimate the term in brackets, we deduce that |J| < 0.066 x−β √d log d. (25) Returning to the integral I. Recall from Equation (18) that we have I = L(1 , χ )x1−β (1 − β)(3 − β)(4 − β) + J. Hence, using the estimate (25) for J that we just achieved, we get I ≥ x1−β (1 − β) ( L(1 , χ )(3 − β)(4 − β) − 0.066 (1 − β) √d log dx ) . Since x = 12 √df (d), we deduce that I ≥ x1−β (1 − β) ( L(1 , χ )(3 − β)(4 − β) − 0.132 (1 − β) log df (d) ) . In addition, by the class number formula for d > 4, we have L(1 , χ ) = πh (−d) √d . So we finally get a lower estimate of II ≥ x1−β (1 − β)√d ( πh (−d)(3 − β)(4 − β) − 0.132 (1 − β)√d log df (d) ) . (26) 10 3.2 Upper bound on I We recall the bound from Lemma 3, I ≤ 16 x1−β ∑ N(a)≤x N (a)−1. where x = 12 √df (d) and f (d) ≥ 1 a function of d to be chosen later. Here we aim to estimate the sum on the right-hand side. For this, we choose another auxiliary function (d) that satisfies f (d) ≤(d). We have ∑ N(a)≤x N (a)−1 = ∑ u2a≤x ν(a) u2a ≤ π2 6 ∑ a≤x ν(a) a . (27) We split the last sum into two parts ∑ a≤x ν(a) a = ∑ a≤12 √d ν(a) a + ∑ 12 √d<a ≤x ν(a) a =: S0 + S1. Furthermore, we also split S1, S1 = ∑′ ν(a) a + ∑′′ ν(a) a , where ∑′ ν(a) a denotes the sum over all a with 12 √d < a ≤ x such that a has a prime divisor pα > ` (d). Hence, Lemma 1 yields ∑′ ν(a) a = ∑′ ν(pαb) pαb ≤ ∑ b<x/` (d) ν(b) b ∑ ∆( b)<p α≤x/b (2 p−α), (28) where ∆( b) := max { `(d), 12b √d } . Recalling our notation from Section 2 that ∑ pα≤y p−α = log log y + B2 + ε(y). So ∑ ∆( b)<p α≤x/b (2 p−α) = 2 log ( log( x/b )log ∆( b) ) 2 ( ε(x/b ) − ε(∆( b)) ) = 2 log ( log( f (d)) + log( 12b √d)log ∆( b) ) 2 ( ε(x/b ) − ε(∆( b)) ) ≤ 2 log ( 1 + log f (d)log ∆( b) ) 2 ( max y≥`(d) ε(y) − min y≥`(d) ε(y) ) ≤ 2 log ( 1 + log f (d)log `(d) ) 2 ( max y≥`(d) ε(y) − min y≥`(d) ε(y) ) . Moreover, since f (d) ≤ (d), we have x/ (d) ≤ 12 √d. Thus ∑ b<x/` (d) ν(b) b ≤ S0. Therefore, we deduce from (28), that ∑′ ν(a) a ≤ S0 ( 2 log ( 1 + log f (d)log `(d) ) 2 ( max y≥`(d) ε(y) − min y≥`(d) ε(y) )) . (29) 11 As for the sum ∑′′ ν(a) a , each positive integer a contributing to this sum has no prime power divisor > ` (d). So the number of distinct prime divisors of such an a is at least k0 := ⌈ log( 12 √d)log `(d) ⌉ . The latter and the multiplicative property of the function ν(a) imply that ∑′′ ν(a) a ≤ ∑ k≥k0 1 k!  ∑ pα≤`(d) ν(pα) pα  k ≤ σk0 k0! ( 1 + σ (k0 + 1) + σ2 (k0 + 1)( k0 + 2) + · · · ) , where σ := ∑ pα≤`(d) 2 pα . 
If we choose `(d) in such a way k0 + 1 > σ , then 1 + σ (k0 + 1) + σ2 (k0 + 1)( k0 + 2) + · · · ≤ 1 + σ (k0 + 1) + σ2 (k0 + 1) 2 + · · · = 1 + k0 1 + k0 − σ . Hence, we obtain ∑′′ ν(a) a ≤ (1 + k0) σk0 (1 + k0 − σ) k0! . (30) Putting everything together, and using the result in Lemma 2 that S0 ≤ h(−d)11 , we arrive at the following estimate ∑ N(a)≤x N (a)−1 ≤ π2 66 h(−d) ( 1 + 2 log ( 1 + log f (d)log `(d) ) Er( d, ` (d)) ) , (31) where Er( d, ` (d)) := 2 ( max y≥`(d) ε(y) − min y≥`(d) ε(y) ) 11 (1 + k0) σk0 (1 + k0 − σ)h(−d) k0! . This expression is not easy work with so let us derive a simpler bound for it. From Proposition 1, we know that max y≥`(d) ε(y) − min y≥`(d) ε(y) ≤ 1.75 (log `(d)) 2 + min { 0.2(log `(d)) 3 , 10 −4 } . If 0.2(log (d)) 3 < 10 −4, then log(d) > 12, so 1.75 (log (d)) 2 + 0.2(log(d)) 3 ≤ 1(log `(d)) 2 ( 1.75 + 0.2log `(d) ) < 1.8(log `(d)) 2 . On the other hand if 0.2(log (d)) 3 ≥ 10 −4, then log(d) < 13, so 1.75 (log (d)) 2 + 10 −4 ≤ 1(log(d)) 2 ( 1.75 + (log `(d)) 2 10000 ) < 1.8(log `(d)) 2 . Therefore, we get 2 ( max y≥`(d) ε(y) − min y≥`(d) ε(y) ) < 3.6(log `(d)) 2 . (32) 12 On the other hand, by Stirling’s formula, we have k0! ≥ √2πk 0 ( k0 e )k0 . Which implies that σk0 k0! ≤ 1 √2πk 0 ( eσ k0 )k0 . All these together with h(−d) ≥ 101, we obtain E(d, (d)) ≤ 3.6(log(d)) 2 + 11 (1 + k0)101(1 + k0 − σ)1 √2πk 0 ( eσ k0 )k0 . (33) 3.3 Final steps The purpose here is to choose suitable values for f (d) and (d), but before we do that, let us first list all the constraints (on f (d) and(d)) that we assumed earlier. For d > 300000000, we require • f (d) ≥ 1, • f (d) ≤ `(d), and • k0 + 1 > σ (both sides of the inequality depend on `(d)). Case log( d) ≤ 42 We choose f (d) = 1 ( `(d) will not be needed here), then (27), Lemma 2 and Lemma 3 yield I ≤ 16 x1−β ∑ N(a)≤12 √d N (a)−1 ≤ π2 396 x1−β h(−d). This and the lower bound of I in (26) imply 1(1 − β)√d ( π (3 − β)(4 − β) − 0.132 (1 − β)√d log dh(−d) ) ≤ π2 396 . Recall our assumption in (13) that 1 −β ≤ 6.5√d . Since d > 300000000, the latter implies that β > 0.999 , so π (3 − β)(4 − β) > 0.523 . Using h(−d) ≥ 101, and the assumption (13) again (to estimate the term (1 − β)√d inside the brackets above), we obtain (1 − β)√d > 396 π2 ( 0.523 − 0.132 6.5 log d 101 ) 20 .984 − 0.341 log d. The latter is greater than 6 .6 (if log( d) ≤ 42), contradicting (13). Case 42 < log( d) ≤ 100 Here, we choose f (d) = `(d) = 16 . The combination (26), (31) and Lemma 3 gives 1(1 − β)√d ( π (3 − β)(4 − β) − 0.132 (1 − β)√d log d 101 f (d) ) ≤ π2 396 ( 1 + 2 log ( 1 + log f (d)log `(d) ) Er( d, ` (d)) ) . 13 This implies that (1 − β)√d > 20 .984 − 0.341 log df (d) 1 + 2 log ( 1 + log f (d)log `(d) ) Er( d, ` (d)) . The numerator is obtained in the same way as in the previous case. Since `(d) = 16, we have σ = 2 ∑ pα≤16 p−α < 3.786 . Moreover, from (33), we get E(d, ` (d)) ≤ 3.6(log 16) 2 + 11 (1 + k0)101(1 + k0 − 3.786) 1 √2πk 0 ( 10 .3 k0 )k0 < 0.469 + 0 .044 (1 + k0)(1 + k0 − 3.786) 1 √k0 ( 10 .3 k0 )k0 . Similarly, since f (d) = `(d) = 16, 20 .984 − 0.341 log df (d) ≥ 20 .984 − 0.022 log d, and 1 + 2 log ( 1 + log f (d)log `(d) ) = 1 + 2 log(2) < 2.387 . Thus, we finally obtain (1 − β)√d > 20 .984 − 0.022 log d 2.856 + 0 .044 (1+ k0)(1+ k0−3.786) 1√k0 ( 10 .3 k0 )k0 (34) where, here k0 = ⌈ log d − log 4 2 log 16 ⌉ . The right-hand side of (34) is still difficult to estimate manually, so we did this nu-merically and the result is shown in Figure 2. 
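The numerical scan behind Figure 2 is easy to reproduce. The following is a sketch, not the authors' code; it reads the right-hand side of (34) as (20.984 − 0.022·log d) divided by (2.856 + 0.044·(1 + k0)/(1 + k0 − 3.786)·(1/√k0)·(10.3/k0)^k0), with k0 = ⌈(log d − log 4)/(2 log 16)⌉, and scans 42 < log d ≤ 100.

```python
# Numerically scan the lower bound (34) over 42 < log d <= 100 (a sketch).
# The formula is read off the case "42 < log(d) <= 100"; k0 depends only on log d.
import math

def bound(logd):
    k0 = math.ceil((logd - math.log(4)) / (2 * math.log(16)))
    tail = 0.044 * (1 + k0) / (1 + k0 - 3.786) / math.sqrt(k0) * (10.3 / k0) ** k0
    return (20.984 - 0.022 * logd) / (2.856 + tail)

lo = min(bound(42 + 58 * i / 200000) for i in range(1, 200001))
print(f"minimum of the bound on (42, 100]: {lo:.3f}")   # about 6.53, matching the text
```

The minimum sits at the corner where k0 jumps from 8 to 9, i.e. log d ≈ 45.75, consistent with the discussion that follows.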
The corners in the graph correspond to the points where the of value k0 changes from an integer to the next. The mini-mum occurs in the first corner where k0 changes from 8 to 9 i.e. when log d is close to 16 log 16 − log 4 ≈ 45 .747. At this point we still have (1 − β)√d > 6.53 (when k0 ≥ 11 the corners become less apparent because the second term in the denominator contributes very little). So, it is clear that we also obtain (1 − β)√d > 6.5 for all d such that 42 < log( d) ≤ 100, which contradicts (13). Case log d > 100 Just as in the previous case, we also have the bound (1 − β)√d > 20 .984 − 0.341 log df (d) 1 + 2 log ( 1 + log f (d)log `(d) ) Er( d, ` (d)) , (35) but we choose f (d) = `(d) = 0 .5 log( 12 √d), which we simply abbreviate as t to make the reading easier. So from here, we will write everything in terms of t. The condition log d > 100 implies that t > 24 .65 . Let us begin by estimating the terms in the denominator of (35). We have k0 = ⌈ 2t log t ⌉ ≥ 2t log t , 14 Figure 2: Numerical plot for the case 42 < log( d) ≤ 100 and since the right-hand side is at least 15 .3, we may assume that k0 ≥ 16 . By Proposition 1, we have σ = 2 ∑ pα≤t p−α ≤ 2 log log t + 2 .07 . Thus, we deduce that eσ k0 ≤ e log t (2 log log t + 2 .07) 2t It is easy to verify that the term on the right-hand side is a decreasing function of t when t > 24 .65 . Hence, we get eσ k0 < e log(24 .65) (2 log log(24 .65) + 2 .07) 49 .3 < 0.778 . In particular, we verified that 1 + k0 > σ. We also have 1 + k0 1 + k0 − σ = 11 − σ 1+ k0 < 11 − 0.778 e < 1.401 . Using these numerical estimates and the fact that k0 ≥ 16, we obtain 11 (1 + k0)101(1 + k0 − σ)1 √2πk 0 ( eσ k0 )k0 < 0.016 (0 .778) 16 < 0.0003 . (36) We can see that the contribution from this term is very small. Let us now look the remaining terms in the denominator of the right-hand side of (35). Since we have chosen f (d) = `(d) = t, we have 1 + 2 log ( 1 + log f (d)log `(d) ) 3.6(log `(d)) 2 = 1 + 2 log 2 + 3.6(log t)2 < 2.737 . (37) The numerical value is obtain by rounding up the value at t = 24 .65 . Combining (36) and (37), we finally get (1 − β)√d > 20 .984 − 0.341 log dt 2.738 > 7.663 − 0.125 4t + log 4 t > 7for t > 24 .65 . Once again, we obtain a constant strictly greater than 6 .5 which bounds (1 − β)√d from below. Since we have shown that this is the case for all possible values of log d ≥ log(300000000) the proof of Theorem 1 is complete. 15 4 Proof of Theorem 2 We begin by the following consequence Theorem 1 in Goldfeld-Schinzel : if β exists, then 1 − β ≥ ( 6 π2 + o(1) ) L(1 , χ ) ∑ a≤14 √dν(a) a as d → ∞ . (38) For now, we need know how to estimate sums of the form ∑ a≤xν(a) a . We start with the following observation which we already used in the proof Lemma 2: for any 1 ≤ x ≤ 12 √d, we have ∑ a≤x ν(a) a ≤ ∑ a≤y 2w(a) a whenever ∑ a≤y 2w(a) ≥ h(−d). (39) The next lemma gives asymptotic estimates of the sums involving 2 w(a). Lemma 5. As y → ∞ , we have ∑ n≤y 2w(n) = 6 π2 y log y + O(y) and ∑ n≤y 2w(n) n = 3 π2 (log y)2 + O(log y). Proof. For each n ≥ 1, the number 2 w(n) is equal to the number of squarefree divisors of n, i.e., 2w(n) = ∑ d|n |μ(d)|. Therefore, ∑ n≤y 2w(n) = ∑ n≤y ∑ d|n |μ(d)| = ∑ d≤y |μ(d)| ∑ q≤yd 1= y ∑ d≤y |μ(d)| d + O(y). (40) To estimate the sum in the last line, we use a well known estimate for the counting function of squarefree integers ∑ n≤y |μ(n)| = 6 π2 y + O(√y). (41) This is not too difficult to prove, we can even find a proof with an explicit error term in . 
Hence, applying Abel’s identity, we have ∑ n≤y |μ(n)| n = 1 y ∑ n≤y |μ(n)| + ∫ y 1 1 t2 ∑ n≤t |μ(n)|  dt. (42) The first term is obviously bounded, and the second can be estimated using (41). Thus ∑ n≤y |μ(n)| n = 6 π2 log y + O(1) . (43) 16 The estimate of ∑ n≤y 2w(n) in the lemma now follows from (40). For the second estimate in the lemma, we use Abel’s identity again ∑ n≤y 2w(n) n = 1 y ∑ n≤y 2w(n) + ∫ y 1 1 t2 ∑ n≤t 2w(n)  dt. Then, we use the first estimate in the lemma, that we just proved, to estimate both terms on the right-hand side, and we obtain ∑ n≤y 2w(n) n = 3 π2 (log y)2 + O(log y), which completes the proof of the lemma. One can find explicit upper bounds of the sums in Lemma 5 in [15, Lemma 12]. Lower bounds can also be achieved using the same proof provided in that paper. We are now ready to prove the asymptotic formula in Theorem 2. Proof of Theorem 2. We choose a positive number y = y(d) in such a way that ∑ a≤y−1 2w(a) < h (−d) ≤ ∑ a≤y 2w(a), then, by the first estimate in Lemma 5, we have h(−d) = 6 π2 y(log y) + O(y). Thus, writing y in terms of h(−d), we obtain y = ( π2 6 + O ( 1log h(−d) )) h(−d)log h(−d) . (44) Similarly, using the second estimate in Lemma 5, we have ∑ a≤14 √d ν(a) a ≤ ∑ a≤y 2w(a) a = 3 π2 (log y)2 + O(log y). Then, once again, expressing the right-hand side in terms of h(−d) using (44) yields ∑ a≤14 √d ν(a) a ≤ ( 3 π2 + O ( log log h(−d)log h(−d) )) (log h(−d)) 2. Plugging this into (38) completes the proof. 5 Concluding remarks About further improvements of Theorem 1, one might be able to push the constant 6 .5to about 7 by carefully choosing the values of f (d) and `(d). Another idea is to replace the term s(s + 2)( s + 3) in the definition of the integral I in (14) with s(s + a)( s + b), then choose a and b that give the best result. We have tried this and found out that s(s + 2)( s + 3) is already very close to optimal. Replacing it will either make an insignificant improvement on the final result or worsen it. What could really make a difference is any improvement of the bound in Lemma 2, with h(−d) ≥ 101 we could only get the factor 11. We do not know if one could do significantly better than that. 17 References T. Apostol. Introduction to Analytic Number Theory. Undergraduate Texts in Mathematics . Springer-Verlag, New York - Heidelberg, 1976. M. Bordignon. Explicit bounds on exceptional zeroes of Dirichlet L-function II. arXiv:1907.08327, 2019. H. Cohen, F. Dress, and M. E. Marraki. Explicit estimates for summatory func-tions linked to the M¨ obius μ-function. Functiones et Approximatio XXXVII.1, p. 51-63 , 2007. J. B. Conrey and K. Soundararajan. Real zeros of quadratic Dirichlet L-functions. Invent. Math. , 150(1):1–44, 2002. A. W. Dudek. An explicit result for |L(1 + it, χ )|. Funct. Approx. Comment. Math. 53, p. 23-29 , 2015. P. Dusart. Explicit estimates of some functions over primes. Ramanujan J., p. 1-25 , 2016. K. Ford, F. Luca, and P. Moree. Values of the Euler φ-function not divisible by a given odd prime, and the distribution of Euler-Kronecker constants for cyclotomic fields. Math. Comp. , 83(287):1447–1476, 2014. M. B. G. Martin, K. O’Bryant and A. Rechnitzer. Explicit bounds for primes in arithmetic progressions. arXiv:1802.00085, 2018. D. Goldfeld and A. Schinzel. On Siegel’s zero. Annali Della Scuola Normale Superiore Di Pisa, 4 e Serie, p. 571-583 , 1975. W. Haneke. ¨Uber die reellen nullstellen der dirichletschen l-reihen. Acta Arith-metica , 22(4):391–421, 1973. T. Morrill and T. Trudgian. 
An elementary bound on Siegel zeroes. arXiv:1811.12521, 2018. A. Page. On the Number of Primes in an Arithmetic Progression. Proc. London Math. Soc. (2) , 39(2):116–141, 1935. J. Pintz. Elementary methods in the theory of L-functions. II. On the greatest real zero of a real L-function. Acta Arith. , 31(3):273–289, 1976. T. Trudgian. A new upper bound for |ζ(1 + it )|. Bull. Aust. Math. Soc. 89, p. 259-264 , 2014. T. Trudgian. Bounds on the number of diophantine quintuples. Journal of Number Theory , 157:233 – 249, 2015. M. Watkins. Class numbers of imaginary quadratic fields. Mathematics of Com-putation. Volume 73, Number 246, p. 907-938 , 2004. M. Watkins. Real zeros of real odd Dirichlet L-functions. Mathematics of Com-putation. Volume 73, Number 245, p. 415-423 , 2004. D. Ralaivaosaona, Department of Mathematical Sciences, Stellenbosch Univer-sity, South Africa E-mail address , D. Ralaivaosaona: [email protected] F. B. Razakarinoro, Department of Mathematical Sciences, Stellenbosch Univer-sity, South Africa E-mail address , F. B. Razakarinoro: [email protected] 18
136
Operations Research

Job shop problem using google or-tools

Asked Feb 5, modified 5 months ago, viewed 278 times. Score: -1.

We're looking for a replacement for our custom-written scheduling algorithm with its endless loops and ifs. Recently we discovered that or-tools exists with the CP-SAT solver, and the problem we're solving is called "job shop", with a bunch of custom requirements. The examples look promising, but I have several questions about the possibility of implementing some things we need to achieve. Here are some of the requirements:

- There's a bunch of machines that can perform one task at a time (I've seen no-overlap in the docs, so no problem there)
- Each machine has a state with 3 parameters at once: tooling, material and color
- Each task requires a machine to be in a certain state to start; any machine can be changed to that state
- A change of machine state requires scheduling a transition task; that task is not in the initial job list and should only be scheduled if no appropriate machine was found
- The number of transitions should be minimized; that's a high priority of the scheduling
- There should be an ability to schedule a contiguous job that can last for multiple shifts. Each job can also be interrupted by breaks (e.g. lunch) and should be "splittable" instead of just "ignore this period"

So, the questions are:

1. Will it be possible to implement those requirements with the solver?
2. What is the complexity of such a solution?
3. Any good resources to read aside from the or-tools GitHub?

Tags: scheduling, or-tools, cp-sat, job-shop-scheduling, c#
Asked Feb 5 at 14:13 by SelfishCrawler; edited Feb 15 at 13:10 by Laurent Perron.

2 Answers

Answer 1 (score 4)

Will it be possible to implement those requirements with the solver?

Your use case with state-dependent machines and task processing can be modeled as sequence-dependent setup times; see this example, which considers a (flexible) job shop with sequence-dependent setup times. Depending on which two tasks are scheduled, the machine may incur a setup time. This implicitly takes into account the state of the machine. The example above also minimizes the number of transitions.

Scheduling jobs with interruptions is harder. In the literature, it's called scheduling with preemption or with calendars. One way to go about this is to split jobs into smaller jobs and to constrain that they have to be scheduled contiguously. But the devil is in the details.

What is the complexity of such a solution?

I understand this question as "can OR-Tools solve this?" Whether all of this is possible depends highly on the number of jobs and machines. How large are the instances that you are trying to solve?

Any good resources to read aside from the or-tools GitHub?

You can read the CP-SAT primer for a brief introduction to OR-Tools' CP-SAT solver. OR-Tools itself doesn't have a comprehensive user guide, unfortunately. They do have a lot of resources, but they are spread all over (lots of examples, pages like these, the discussion board). I learned OR-Tools mostly by reimplementing the examples.
answered Feb 5 at 16:41 (edited Feb 5 at 16:49) by Leon Lan

Comments:

That's a nice answer. We're planning to schedule around a hundred or two machines at most, I think. – SelfishCrawler, Feb 6 at 7:12

OR-Tools should be able to find good solutions to instances of that size. You could try to use PyJobShop, which is a Python package to solve scheduling problems with constraint programming (OR-Tools and CP Optimizer). It doesn't yet support preemptive scheduling or minimizing the setup costs, but it is possible to model the job shop problem plus setup times to minimize makespan. (Full disclosure: I built this package.) – Leon Lan, Feb 6 at 7:57

Answer 2 (score 2)

Regarding questions 2 and 3: job shop problems are known to be NP-hard in general. There are tutorials on how to work with Google's OR solver, the prerequisites for which can be taught through a linear programming course if you wish to dive deeper into how to customize models. Some knowledge of how to model scheduling problems would be a nice start.

answered Feb 5 at 15:27 by applethal

Comment:

Hi, thanks for your answer! What do you think about machines and their states? Will it be possible to implement? That is the question of whether it's worth investing time to learn the stuff or not. – SelfishCrawler, Feb 5 at 16:15
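To make the suggestions above concrete, here is a small CP-SAT sketch in Python. It is only a starting point, not a full solution to the requirements in the question: tasks are assigned to machines with optional intervals, overlap is forbidden per machine, and a crude count of initial-state mismatches stands in for transition tasks; sequence-dependent setup times and calendars, as discussed in the first answer, would still have to be added. All task data, machine states and the objective are illustrative assumptions.

```python
from ortools.sat.python import cp_model

# (name, duration, required_state); states stand in for (tooling, material, color).
tasks = [("T1", 4, "A"), ("T2", 3, "B"), ("T3", 2, "A"), ("T4", 5, "B")]
machines = {"M1": "A", "M2": "B"}          # assumed initial state of each machine
horizon = sum(d for _, d, _ in tasks)

model = cp_model.CpModel()
per_machine = {m: [] for m in machines}     # intervals scheduled on each machine
mismatches = []                             # Bools: task placed on a machine in the wrong state

for name, dur, req in tasks:
    choices = []
    for m, init_state in machines.items():
        present = model.NewBoolVar(f"{name}_on_{m}")
        start = model.NewIntVar(0, horizon, f"start_{name}_{m}")
        end = model.NewIntVar(0, horizon, f"end_{name}_{m}")
        per_machine[m].append(
            model.NewOptionalIntervalVar(start, dur, end, present, f"iv_{name}_{m}"))
        choices.append(present)
        if init_state != req:               # placing it here would require a state change
            mismatches.append(present)
    model.AddExactlyOne(choices)            # each task runs on exactly one machine

for m in machines:
    model.AddNoOverlap(per_machine[m])      # one task at a time per machine

# High-priority objective from the question: keep the number of state changes small.
model.Minimize(sum(mismatches))

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("state changes needed:", solver.ObjectiveValue())
```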
137
ANNALS OF MATHEMATICS anmaah SECOND SERIES, VOL. 172, NO. 3 November, 2010 The density of discriminants of quintic rings and fields By Manjul Bhargava Annals of Mathematics, 172 (2010), 1559–1591 The density of discriminants of quintic rings and fields By MANJUL BHARGAVA 1. Introduction Let Nn.X/ denote the number of isomorphism classes of number fields of degree n having absolute discriminant at most X. Then it is an old folk conjecture that the limit (1) cn D lim X!1 Nn.X/ X exists and is positive for n > 1. The conjecture is trivial for n  2, while for n D 3 and n D 4 it is a theorem of Davenport and Heilbronn and of the author , respectively. In degrees n  5, where number fields tend to be predominantly non-solvable, the conjecture has not previously been known to be true for any value of n. The primary purpose of this article is to prove the above conjecture for n D 5. In particular, we are able to determine the constant c5 explicitly. More precisely, we prove: THEOREM 1. Let N .i/ 5 .; / denote the number of quintic fields K, up to iso-morphism, having 5 2i real embeddings and satisfying  < Disc.K/ < . Then .a/ lim X!1 N .0/ 5 .0; X/ X D 1 240 Y p .1 C p2 p4 p5/I .b/ lim X!1 N .1/ 5 .X; 0/ X D 1 24 Y p .1 C p2 p4 p5/I .c/ lim X!1 N .2/ 5 .0; X/ X D 1 16 Y p .1 C p2 p4 p5/: The constants appearing in Theorem 1, and thus their sum c5 D 13 120 Y p .1 C p2 p4 p5/; turn out to have very natural interpretations. Indeed, the constant c5 takes the form of an Euler product, where the Euler factor at a place  “counts” the total number of local ´ etale quintic extensions of Q, where each isomorphism class of local 1559 1560 MANJUL BHARGAVA extension K is counted with a certain natural weight to reflect the probability that a quintic number field K has localization K ˝ Q isomorphic to K at . More precisely, let (2) ˇ1 D 1 2 X ŒK1WRD5 étale 1 jAutR.K1/j; where the sum is over all isomorphism classes K1 of ´ etale extensions of R of degree 5. Since AutR.R5/ D 120, AutR.R3 ˚C/ D 12, and AutR.R˚C2/ D 8, we have ˇ1 D 1 240 C 1 24 C 1 16 D 13 120. Similarly, for each prime p, let (3) ˇp D p 1 p X ŒKpWQpD5 étale 1 jAutQp.Kp/j  1 Discp.Kp/; where the sum is over all isomorphism classes Kp of ´ etale extensions of Qp of degree 5, and Discp.Kp/ denotes the discriminant of Kp viewed as a power of p. Then (4) c5 D ˇ1  Y p ˇp; since we will show that (5) ˇp D 1 C p2 p4 p5: Thus we obtain a natural interpretation of c5 as a product of counts of local field extensions. For more details on the evaluation of local sums of the form (3), and for global heuristics on the expected values of the asymptotic constants associated to general degree n Sn-number fields, see . We obtain several additional results as by-products. First, our methods enable us to count analogously all orders in quintic fields: THEOREM 2. Let M .i/ 5 .; / denote the number of isomorphism classes of or-ders O in quintic fields having 52i real embeddings and satisfying  <Disc.O/<. Then there exists a positive constant ˛ such that .a/ lim X!1 M .0/ 5 .0; X/ X D ˛ 240I .b/ lim X!1 M .1/ 5 .X; 0/ X D ˛ 24I .c/ lim X!1 M .2/ 5 .0; X/ X D ˛ 16: The constant ˛ in Theorem 2 has an analogous interpretation. Let ˛p denote the analogue of the sum (3) for orders, i.e., (6) ˛p D p1 p X ŒRpWZpD5 1 jAutZp.Rp/j  1 Discp.Rp/; where the sum is over all isomorphism classes of Zp-algebras Rp of rank 5 over Zp with nonzero discriminant. 
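As a quick numerical aside, the Euler product in (4)–(5) converges rapidly, so c5 is easy to approximate. The following is a minimal sketch, assuming sympy for prime generation and reading (5) as beta_p = 1 + p^-2 - p^-4 - p^-5; the truncation point is arbitrary.

```python
# Approximate c5 = (13/120) * prod_p (1 + p^-2 - p^-4 - p^-5) by truncating the
# Euler product (a sketch; the cutoff 10**6 is an arbitrary choice).
from sympy import primerange

prod = 1.0
for p in primerange(2, 10**6):
    prod *= 1 + p**-2 - p**-4 - p**-5
c5 = 13 / 120 * prod
print(f"c5 ~ {c5:.6f}")   # roughly 0.15; primes beyond the cutoff change little
```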
We will then show that the constant $\alpha$ appearing in Theorem 2 is given by

(7)  $\alpha = \prod_p \alpha_p,$

thus expressing $\alpha$ as a product of counts of local ring extensions. It is an interesting combinatorial problem to evaluate $\alpha_p$ explicitly in "closed form", analogous to the formula (5) that we obtain for $\beta_p$; see the references for further discussion on the evaluation of such sums.

Second, we note that the proof of Theorem 1 contains a determination of the densities of the various splitting types of primes in $S_5$-quintic fields. If $K$ is an $S_5$-quintic field and $K_{120}$ denotes the Galois closure of $K$, then the Artin symbol $(K_{120}/p)$ is defined as a conjugacy class in $S_5$, its values being $\langle e\rangle$, $\langle(12)\rangle$, $\langle(123)\rangle$, $\langle(1234)\rangle$, $\langle(12345)\rangle$, $\langle(12)(34)\rangle$, or $\langle(12)(345)\rangle$, where $\langle x\rangle$ denotes the conjugacy class of $x$ in $S_5$. It follows from the Cebotarev density theorem that, for fixed $K$ and varying $p$ (unramified in $K$), these values occur with relative frequency $1 : 10 : 20 : 30 : 24 : 15 : 20$ (i.e., proportional to the size of the respective conjugacy class). We prove the following complement to Cebotarev density:

THEOREM 3. Let $p$ be a fixed prime, and let $K$ run through all $S_5$-quintic fields in which $p$ does not ramify, the fields being ordered by the size of the discriminants. Then the Artin symbol $(K_{120}/p)$ takes the values $\langle e\rangle$, $\langle(12)\rangle$, $\langle(123)\rangle$, $\langle(1234)\rangle$, $\langle(12345)\rangle$, $\langle(12)(34)\rangle$, or $\langle(12)(345)\rangle$ with relative frequency $1 : 10 : 20 : 30 : 24 : 15 : 20$.

Actually, we do a little more: we determine for each prime $p$ the density of $S_5$-quintic fields $K$ in which $p$ has the various possible ramification types. For example, it follows from our methods that a proportion of precisely

$\dfrac{(p+1)(p^2+p+1)}{p^4+p^3+2p^2+2p+1}$

of $S_5$-quintic fields are ramified at $p$.

Lastly, our proof of Theorem 1 implies that nearly all quintic fields, i.e., a density of 100% of them, have full Galois group $S_5$. This is in stark contrast to the quartic case [3, Th. 3], where we showed that only about 91% of quartic fields have associated Galois group $S_4$:

THEOREM 4. When ordered by absolute discriminant, a density of 100% of quintic fields have associated Galois group $S_5$.

In particular, it follows that 100% of quintic fields are nonsolvable.

Note that, rather than counting quintic fields and orders up to isomorphism, we could instead count these objects within a fixed algebraic closure of $\mathbb{Q}$. This would simply multiply all constants appearing in Theorems 1 and 2 by five. Meanwhile, Theorems 3 and 4 of course remain true regardless of whether one counts quintic extensions up to isomorphism or within an algebraic closure of $\mathbb{Q}$.

The key ingredient that allows us to prove the above results for quintic (and thus predominantly nonsolvable) fields is a parametrization of isomorphism classes of quintic orders by means of four integral alternating bilinear forms in five variables, up to the action of $\mathrm{GL}_4(\mathbb{Z})\times\mathrm{SL}_5(\mathbb{Z})$, which we established in earlier work. The proofs of Theorems 1–4 can then be reduced to counting appropriate integer points in certain fundamental regions, as in the earlier cases. However, the current case is considerably more involved than the quartic case, since the relevant space is now 40-dimensional rather than 12-dimensional! The primary difficulty lies in counting points in the rather complicated cusps of these 40-dimensional fundamental regions (see Lemmas 8–11).
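The numerology in Theorem 3 and the ramification proportion above are easy to reproduce. The sketch below (Python, purely illustrative and not taken from the paper) tallies the conjugacy classes of $S_5$ by cycle type, recovering the ratio $1 : 10 : 20 : 30 : 24 : 15 : 20$, and evaluates $(p+1)(p^2+p+1)/(p^4+p^3+2p^2+2p+1)$ for a few small primes.

```python
from collections import Counter
from fractions import Fraction
from itertools import permutations

def cycle_type(perm):
    """Cycle type of a permutation given as a tuple (perm[i] is the image of i)."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        x, length = start, 0
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

# Conjugacy classes of S_5, keyed by cycle type; the class sizes reproduce the
# relative frequencies 1 : 10 : 20 : 30 : 24 : 15 : 20 appearing in Theorem 3.
class_sizes = Counter(cycle_type(p) for p in permutations(range(5)))
print(sorted(class_sizes.items()))

def ramified_proportion(p):
    """Density of S_5-quintic fields ramified at p, as stated above."""
    return Fraction((p + 1) * (p**2 + p + 1), p**4 + p**3 + 2 * p**2 + 2 * p + 1)

for p in (2, 3, 5, 7):
    print(p, ramified_proportion(p), float(ramified_proportion(p)))
```

For large $p$ the ramified proportion behaves like $1/p$, while for $p = 2$ it is $21/37$, so a bit over half of all $S_5$-quintic fields are ramified at 2.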
The necessary point-counting is accomplished in Section 2, by carefully dis-secting the “irreducible” portions of the fundamental regions into 152 pieces, and then applying a new adaptation of the averaging methods of to each piece (see Lemma 11). The resulting counting theorem (see Theorem 6), in conjunction with the results of , then yields the asymptotic density of discriminants of pairs .R; R0/, where R is an order in a quintic field and R0 is a sextic resolvent ring of R. Obtaining Theorems 1–4 from this general density result then requires a sieve, which in turn uses certain counting results on resolvent rings and subrings obtained in and in the recent work of Brakenhoff , respectively. This sieve is carried out in the final Section 3. We note that the space of binary cubic forms that was used in the work of Davenport-Heilbronn to count cubic fields, the space of pairs of ternary quadratic forms that we used in to count quartic fields, and the space of quadruples of alternating 2-forms in five variables that we use in this article, are all examples of what are known as prehomogeneous vector spaces. A prehomogeneous vector space is a pair .G; V /, where G is a reductive group and V is a linear representation of G such that GC has a Zariski open orbit on VC. The concept was introduced by Sato in the 1960’s and a classification of all irreducible prehomogeneous vector spaces was given in the work of Sato-Kimura , while Sato-Shintani and Shintani developed a theory of zeta functions associated to these spaces. The connection between prehomogeneous vector spaces and field extensions was first studied systematically in the beautiful 1992 paper of Wright-Yukie . In this work, Wright and Yukie determined the rational orbits and stabilizers in a number of prehomogeneous vector spaces, and showed that these orbits correspond to field extensions of degree 2, 3, 4, or 5. In their paper, they laid out a program to determine the density of discriminants of number fields of degree up to five, by considering adelic versions of Sato-Shintani’s zeta functions as developed by Datskovsky and Wright in their extensive work on cubic extensions. THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1563 However, despite looking very promising, the program via adelic Shintani zeta functions encountered some difficulties and has not succeeded to date be-yond the cubic case. The primary difficulties have been: (a) establishing cancella-tions among various divergent zeta integrals, in order to establish a “principal part formula” for the associated adelic Shintani zeta function; and (b) “filtering” out the correct count of extensions from the overcount of extensions that is inherent in the definition of the zeta function. In the quartic case, difficulty (a) was overcome in the impressive 1995 treatise of Yukie , while (b) remained an obstacle. In the quintic case, both (a) and (b) have remained impediments to obtaining a correct count of quintic field extensions by discriminant. (For more on the Shintani adelic zeta function approach and these related difficulties, see [3, 1] and .) In and in the current article, we overcome the problems (a) and (b) above, for quartic and quintic fields respectively, by introducing a different counting method that relies more on geometry-of-numbers arguments. 
Thus, although our methods are different, this article may be viewed as completing the program first laid out by Wright and Yukie to count field extensions in degrees up to 5 via the use of appropriate prehomogeneous vector spaces. We now describe in more detail the methods of this paper, and give a com-parison with previous methods. At least initially, our approach to counting quintic extensions using the prehomogeneous vector space C4 ˝ ^2C5 is quite similar in spirit to Davenport-Heilbronn’s original method in the cubic case and its refinements developed in the quartic case . Namely, we begin by giving an alge-braic interpretation of the integer orbits on the associated prehomogeneous vector space which, in the quintic case, are the orbits of the group GZ D GL4.Z/SL5.Z/ on the 40-dimensional lattice VZ D Z4 ˝ ^2Z5. As we showed in , these integer orbits have an extremely rich algebraic interpretation and structure (see Theorem 5 for a precise statement), enabling us to consider not only quintic fields, but also more refined data such as all orders in quintic fields, the local behaviors of these orders, and their sextic resolvent rings. This interpretation of the integer orbits then allows us to reduce our problem of counting orders and fields to that of enumerating appropriate lattice points in a fundamental domain for the action of the discrete group GZ on the real vector space VR D VZ ˝ R. Just as in and , the main difficulty in counting lattice points in such a fundamental region is that this region is not compact, but instead has cusps (or “tentacles”) going off to infinity. To make matters even more interesting, unlike the case of binary cubic forms in Davenport-Heilbronn’s work—where there is one relatively simple cusp defined by small degree inequalities in four variables— in the case of quadruples of quinary alternating 2-forms, the cusps are numerous in number and are defined by polynomial inequalities of extremely high degree in 40 variables! These difficulties are further exacerbated by the fact that—contrary to the cubic case—in the quartic and quintic cases the number of nondegenerate lattice points in the cuspidal regions is of strictly greater order than the number 1564 MANJUL BHARGAVA of points in the noncuspidal part (“main body”) of the corresponding fundamental domains. The latter issue is indeed what lies behind the problems (a) and (b) above in the adelic zeta function method. Following our work in the quartic case , we overcome these problems that arise from the cuspidal regions by counting lattice points not in a single funda-mental domain, but over a continuous, compact set of fundamental domains. This allows one to “thicken” the cusps, thereby gaining a good deal of control on the integer points in these cuspidal regions. A basic version of this “averaging” method was introduced and used in in the quartic case to handle points in these cusps, and thus enumerate quartic extensions by discriminant (see [3, 1] for more details). However, since the number, complexity and dimensions of the cuspidal regions are so much greater in the quintic case than in the quartic case, a number of new ideas and modifications are needed to successfully carry out the same averaging method in the quintic case. 
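The imbalance described above, where the cuspidal part of a fundamental region contains more lattice points than its volume would suggest, can be illustrated in a toy two-dimensional setting. The sketch below is purely an analogy made here; it has nothing to do with the actual 40-dimensional fundamental domains. It compares the area of a hyperbola-shaped cusp with the number of integer points it contains: the area grows like $2\log Y$ while the point count grows like $Y$, with all of the excess sitting on the degenerate line $x = 0$.

```python
import math

def toy_cusp_stats(Y):
    """Toy cusp R_Y = {(x, y) : 1 <= y <= Y, |x| <= 1/y}.

    Its area is 2*log(Y); its integer points are (0, y) for every y,
    plus (-1, 1) and (1, 1), so the count grows linearly in Y.
    """
    area = 2 * math.log(Y)
    points = sum(
        1
        for y in range(1, Y + 1)
        for x in (-1, 0, 1)          # |x| <= 1/y already forces |x| <= 1
        if abs(x) * y <= 1
    )
    return area, points

for Y in (10, 100, 1000, 10_000):
    area, points = toy_cusp_stats(Y)
    print(f"Y={Y:>6}  area={area:8.2f}  lattice points={points}")
```

In the toy picture the points deep in the cusp all lie on a degenerate locus; loosely speaking, the analogous points in the paper's cuspidal regions either correspond to reducible quadruples or are negligible in number (Lemmas 10 and 11 below), which is what the averaging and dissection strategy exploits.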
The primary technical contribution of this article is the introduction of a method that allows one to systematically and canonically dissect the cuspidal re-gions into certain “nice” subregions on which a slightly refined averaging technique (see 2.1–2.2) can then be applied in a uniform manner. Using this method, we di-vide the fundamental region into 159 pieces. The first piece is the main body of the region, where we show using geometry-of-numbers arguments that the number of lattice points in the region is essentially its volume. For each of the remaining 158 cuspidal pieces, we show, by a uniform argument, that either the number of lattice points in that region is negligible (see Table 1, Lemma 11), or that the lattice points in that cuspidal piece are all reducible, i.e., they correspond to quintic rings that are not integral domains (see Lemma 10). An asymptotic formula for the number of irreducible integer points in the entire fundamental domain is then attained. The interesting interaction between the algebraic properties of the lattice points (via the correspondence in ) and their geometric locations within the fundamental domain is therefore what allows us to overcome the problems (a) and (b) arising in the adelic Shintani zeta function method. As explained earlier, a sieving method can then be used to prove Theorems 1–4. Our counting method in this article is quite robust and systematic, and should be applicable in many other situations. First, it can be used to reprove the density of discriminants of cubic and quartic fields, with much stronger error terms than have previously been known (in fact, in the cubic case it can be used, in conjunction with a sieve, to obtain an exact second order term; see ). Second, the method can be suitably adapted to count cubic, quartic, and quintic field extensions of any base number field (see ). Third, the method can be used on prehomogeneous vector spaces having infinite stabilizer groups, which would also have a number of interesting applications (see, e.g., ). Finally, we expect that the methods THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1565 should also be adaptable to representations of algebraic groups that are not neces-sarily prehomogeneous. We hope that these directions will be pursued further in future work. 2. On the class numbers of quadruples of 5 5 skew-symmetric matrices Let V D VR denote the space of quadruples of 5 5 skew-symmetric matri-ces over the real numbers. We write an element of VR as an ordered quadruple .A; B; C; D/, where the 5 5 matrices A, B, C, D have entries aij , bij , cij , dij respectively. Such a quadruple .A; B; C; D/ is said to be integral if all entries of the matrices A, B, C, D are integral. The group GZ D GL4.Z/ SL5.Z/ acts naturally on the space VR. Namely, an element g4 2 GL4.Z/ acts by changing the basis of the Z-module of matrices spanned by A; B; C; D; in terms of matrix multiplication, we have .A B C D/t 7! g4 .A B C D/t. Similarly, an element g5 2 SL5.Z/ changes the basis of the five-dimensional space on which the skew-symmetric forms A; B; C; D take values, i.e., g5 .A; B; C; D/ D .g5Agt 5; g5Bgt 5; g5Cgt 5; g5Dgt 5/. It is clear that the actions of g4 and g5 commute, and that this action of GZ preserves the lattice VZ consisting of the integral elements of VR. The action of GZ on VR (or VZ) has a unique polynomial invariant, which we call the discriminant. It is a degree 40 polynomial in 40 variables, and is much too large to write down. 
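The action of $G_{\mathbb{Z}} = \mathrm{GL}_4(\mathbb{Z})\times\mathrm{SL}_5(\mathbb{Z})$ on quadruples just described is simple to implement. The following sketch (illustrative; the particular unimodular matrices and the random quadruple are choices made here, not data from the paper) applies both factors to an integral quadruple $(A,B,C,D)$ of $5\times 5$ skew-symmetric matrices and checks that the two actions commute and preserve skew-symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_skew(n=5):
    """A random integral skew-symmetric n x n matrix."""
    M = rng.integers(-3, 4, size=(n, n))
    return M - M.T

# A point of V_Z: an ordered quadruple (A, B, C, D), stored with shape (4, 5, 5).
quad = np.stack([random_skew() for _ in range(4)])

def act_gl4(g4, quad):
    """g4 in GL_4(Z) replaces (A, B, C, D) by integral linear combinations."""
    return np.einsum('ij,jkl->ikl', g4, quad)

def act_sl5(g5, quad):
    """g5 in SL_5(Z) acts on each matrix M of the quadruple by M -> g5 M g5^T."""
    return np.einsum('ik,nkl,jl->nij', g5, quad, g5)

g4 = np.array([[1, 1, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 1, 2],
               [0, 0, 0, 1]])                 # an example element of GL_4(Z)
g5 = np.eye(5, dtype=int)
g5[0, 1] = 3                                  # an elementary matrix in SL_5(Z)

out1 = act_sl5(g5, act_gl4(g4, quad))
out2 = act_gl4(g4, act_sl5(g5, quad))
assert np.array_equal(out1, out2)                          # the two actions commute
assert all(np.array_equal(M, -M.T) for M in out1)          # skew-symmetry is preserved
print(out1.shape)                                          # (4, 5, 5), still integral
```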
An easy method to compute it for any given element in V was described in . The integer orbits of GZ on VZ have an important arithmetic significance. Re-call that a quintic ring is any ring with unit that is isomorphic to Z5 as a Z-module; for example, an order in a quintic number field is a quintic ring. In we showed how quintic rings may be parametrized in terms of the GZ-orbits on VZ: THEOREM 5. There is a canonical bijection between the set of GZ-equivalence classes of elements .A; B; C; D/ 2 VZ, and the set of isomorphism classes of pairs .R; R0/, where R is a quintic ring and R0 is a sextic resolvent ring of R. Under this bijection, we have Disc.A; B; C; D/ D Disc.R/ D 1 16  Disc.R0/1=3. A sextic resolvent ring of a quintic ring R is a sextic ring R0 equipped with a certain resolvent mapping R ! ^2R0 whose precise definition will not be needed here (see for details). In view of Theorem 5, we wish to try to understand the number of GZ-orbits on VZ having absolute discriminant at most X, as X ! 1. The number of integral orbits on VZ having a fixed discriminant  is called a “class number”, and we wish to understand the behavior of this class number on average. From the point of view of Theorem 5, we would like to restrict the elements of VZ under consideration to those that are “irreducible” in an appropriate sense. More precisely, we call an element .A; B; C; D/ 2 VZ irreducible if, in the corresponding pair of rings .R; R0/ in Theorem 5, the ring R is an integral domain. The quotient field of R is thus a quintic field in that case. We say that .A; B; C; D/ is reducible otherwise. 1566 MANJUL BHARGAVA One may also describe reducibility and irreducibility in more geometric terms. If .A; B; C; D/2VZ, then one may consider the 44 sub-Pfaffians Q1.t1; t2; t3; t4/; : : : ; Q5.t1; t2; t3; t4/ of the single 5 5 skew-symmetric matrix t1A C t2B C t3C C t4D whose entries are linear forms in t1; t2; t3; t4. In other words, Qi D Qi.w; x; y; z/ is defined as a canonical squareroot of the determinant of the 4 4 matrix obtained from t1ACt2B Ct3C Ct4D by removing its ith row and column. Thus these 44 Pfaffians Q1; : : : ; Q5 are quaternary quadratic forms and so define five quadrics in P3. If the element .A; B; C; D/ 2 VZ has nonzero discriminant, then it is known that these five quadrics intersect in exactly five points in P3 (counting multiplicities); see e.g., , . We refer to these five points as the zeroes of .A; B; C; D/ in P3. In we showed that if .A; B; C; D/ corresponds to .R; R0/, where R is isomorphic to an order in a quintic field K, then there exists a zero of .A; B; C; D/ in P3 whose field of definition is K. (The other zeroes of .A; B; C; D/ 2 VZ are thus defined over the conjugates of K.) Therefore, geometrically, we may say that .A; B; C; D/ is irreducible if and only if it possesses a zero in P3 having field of definition K, where K is a quintic field extension of Q. On the other hand, .A; B; C; D/ is reducible if and only if .A; B; C; D/ possesses a zero in P3 defined over a number field of degree smaller than five. The main result of this section is the following theorem: THEOREM 6. Let N.V .i/ Z I X/ denote the number of GZ-equivalence classes of irreducible elements .A; B; C; D/ 2 VZ having 5 2i real zeroes in P3 and satisfying jDisc.A; B; C; D/j < X. Then .a/ lim X!1 N.V .0/ Z I X/ X D .2/2.3/2.4/2.5/ 240 I .b/ lim X!1 N.V .1/ Z I X/ X D .2/2.3/2.4/2.5/ 24 I .c/ lim X!1 N.V .2/ Z I X/ X D .2/2.3/2.4/2.5/ 16 : Theorem 6 is proved in several steps. 
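Before turning to the proof, it may help to make the five sub-Pfaffians defined above concrete. The following sketch (sympy; the sample quadruple is made up for illustration, and the sign convention for each $Q_i$ may differ from the one intended in the text) forms $t_1A+t_2B+t_3C+t_4D$, deletes the $i$-th row and column, and takes the Pfaffian of the resulting $4\times 4$ skew-symmetric matrix, checking $\mathrm{Pf}^2 = \det$ along the way; the outputs are five quaternary quadratic forms in $t_1,\dots,t_4$.

```python
import sympy as sp

t1, t2, t3, t4 = sp.symbols('t1 t2 t3 t4')

def pfaffian4(M):
    """Pfaffian of a 4 x 4 skew-symmetric matrix (a square root of its determinant)."""
    return M[0, 1] * M[2, 3] - M[0, 2] * M[1, 3] + M[0, 3] * M[1, 2]

def skew5(upper):
    """Build a 5 x 5 skew-symmetric matrix from its 10 upper-triangular entries."""
    M = sp.zeros(5, 5)
    k = 0
    for i in range(5):
        for j in range(i + 1, 5):
            M[i, j], M[j, i] = upper[k], -upper[k]
            k += 1
    return M

def sub_pfaffians(A, B, C, D):
    """The five 4 x 4 sub-Pfaffians (up to sign) of t1*A + t2*B + t3*C + t4*D."""
    M = t1 * A + t2 * B + t3 * C + t4 * D
    quadrics = []
    for i in range(5):
        keep = [r for r in range(5) if r != i]
        sub = M.extract(keep, keep)               # delete the i-th row and column
        Q = sp.expand(pfaffian4(sub))
        assert sp.expand(Q**2 - sub.det()) == 0   # Pf(sub)^2 == det(sub)
        quadrics.append(Q)
    return quadrics

# A made-up integral example (not taken from the paper).
A = skew5([1, 0, 0, 0, 1, 0, 0, 0, 0, 0])
B = skew5([0, 1, 0, 0, 0, 1, 0, 0, 0, 0])
C = skew5([0, 0, 1, 0, 0, 0, 1, 0, 0, 1])
D = skew5([0, 0, 0, 1, 0, 0, 0, 1, 0, 0])
for i, Q in enumerate(sub_pfaffians(A, B, C, D), start=1):
    print(f"Q_{i} =", Q)    # five quaternary quadratic forms in t1, ..., t4
```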
In Section 2.1, we outline the necessary reduction theory needed to establish some particularly useful fundamental domains for the action of GZ on VR. In Sections 2.2 and 2.3, we describe a refinement of the “averaging” method from that allows us to efficiently count integer points in various components of these fundamental domains in terms of their volumes. In Sections 2.4 and 2.5, we investigate the distribution of reducible and irreducible integral points within these fundamental domains. The volumes of the resulting “irreducible” components of these fundamental domains are then computed in Sec-tion 2.6, proving Theorem 6. A version of Theorem 6 for elements in VZ satisfying any specified set of congruence conditions is then obtained in Section 2.7. In Section 3, we will show how these counting methods—together with a sieving argument—can be used to prove Theorems 1–4. THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1567 2.1. Reduction theory. The action of GR D GL4.R/ SL5.R/ on VR has three nondegenerate orbits V .0/ R ; V .1/ R ; V .2/ R , where V .i/ R consists of those elements .A; B; C; D/ in VR having nonzero discriminant and 5 2i real zeroes in P3. We wish to understand the number of irreducible GZ-orbits on V .i/ Z D V .i/ R \VZ having absolute discriminant at most X (i D 0; 1; 2). We accomplish this by counting the number of integer points of absolute discriminant at most X in suitable fundamental domains for the action of GZ on VR. These fundamental regions are constructed as follows. First, let F denote a fundamental domain in GR for GZnGR. We may assume that F is contained in a standard Siegel set, i.e., we may assume F is of the form F D fnak W n 2 N 0.a/; a 2 A0; k 2 K;  2 ƒg; where K D fspecial orthogonal transformations in GRgI A0 D fa.s1; s2; : : : ; s7/ W s1; s2; : : : ; s7  cg; where a.s/ D 0 B B B B @ 0 B B @ s3 1 s1 2 s1 3 s1s1 2 s1 3 s1s2s1 3 s1s2s3 3 1 C C A ; 0 B B B B @ s4 4 s3 5 s2 6 s1 7 s4s3 5 s2 6 s1 7 s4s2 5s2 6 s1 7 s4s2 5s3 6s1 7 s4s2 5s3 6s4 7 1 C C C C A 1 C C C C A I x N 0 D fn.u1; u2; : : : ; u16/ W u D .u1; u2; : : : ; u16/ 2 .a/g; where n.u/ D 0 B B @ 0 B @ 1 u1 1 u2 u3 1 u4 u5 u6 1 1 C A ; 0 B B @ 1 u7 1 u8 u9 1 u10 u11 u12 1 u13 u14 u15 u16 1 1 C C A 1 C C AI ƒ D ff W  > 0g; where  acts by 0 B B @ 0 B @     1 C A ; 0 B B @ 1 1 1 1 1 1 C C A 1 C C AI here c > 0 is an absolute constant and .a/ is an absolutely bounded measurable subset of R16 dependent only on the value of a 2 A0. For i D 0; 1; 2, let ni denote the cardinality of the stabilizer in GR of any element v 2 V .i/ R (it follows from Proposition 15 below that n1 D 120, n2 D 12, and n3 D 8). Then for any v 2 V .i/ R , Fv will be the union of ni fundamental domains for the action of GZ on V .i/ R . Since this union is not necessarily disjoint, Fv is best viewed as a multiset, where the multiplicity of a point x in Fv is given by the cardinality of the set fg 2 F j gv D xg. Evidently, this multiplicity is a number between 1 and ni. 1568 MANJUL BHARGAVA Even though the multiset Fv is the union of ni fundamental domains for the action of GZ on V .i/ R , not all elements in GZnVZ will be represented in Fv exactly ni times. In general, the number of times the GZ-equivalence class of an element x 2 VZ will occur in Fv is given by ni=m.x/, where m.x/ denotes the size of the stabilizer of x in GZ. 
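As a small sanity check on the Siegel-set coordinates above, the diagonal torus element $a(s)$ can be recorded by the exponents of $s_1,\dots,s_7$ appearing in each diagonal entry of its two blocks. The exponent tables below are a reading of the display above made here for illustration (with the signs of the exponents pinned down by requiring each block to have determinant 1), not a verbatim quotation of the paper's formulas.

```python
from math import prod

# Exponents of (s1, s2, s3) on the diagonal of the GL_4 block of a(s),
# and of (s4, ..., s7) on the diagonal of the SL_5 block, as read off above.
GL4_DIAG = [(-3, -1, -1), (1, -1, -1), (1, 1, -1), (1, 1, 3)]
SL5_DIAG = [(-4, -3, -2, -1), (1, -3, -2, -1), (1, 2, -2, -1),
            (1, 2, 3, -1), (1, 2, 3, 4)]

# Determinant 1 in each block <=> each s_i has exponents summing to zero.
assert all(sum(column) == 0 for column in zip(*GL4_DIAG))
assert all(sum(column) == 0 for column in zip(*SL5_DIAG))

# Numerical spot check at an arbitrary point s = (s1, ..., s7).
s123, s4567 = (1.3, 0.7, 2.0), (1.1, 0.9, 1.5, 0.8)
det4 = prod(prod(b**e for b, e in zip(s123, row)) for row in GL4_DIAG)
det5 = prod(prod(b**e for b, e in zip(s4567, row)) for row in SL5_DIAG)
print(round(det4, 10), round(det5, 10))    # both 1.0 up to floating-point rounding
```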
We define N.V .i/ Z I X/ to be the (weighted) number of irreducible GZ-orbits on V .i/ Z having absolute discriminant at most X, where each orbit is counted by a weight of 1=m.x/ for any point x in that orbit. Thus ni  N.V .i/ Z I X/ is the (weighted) number of points in Fv having absolute discriminant at most X, where each point x in the multiset Fv is counted with a weight of 1=m.x/. We note that the GZ-orbits in VZ corresponding to orders in non-Galois quintic fields will then each be counted simply with a weight of 1, since such orders can have no automorphisms. We will show (see Lemma 14) that irreducible orbits having weight < 1 are negligible in number in comparison to those having weight 1, and so points of weight < 1 will not be important as they will not affect the main term of the asymptotics of N.V .i/ Z I X/ as X ! 1. Now the number of integer points can be difficult to count in a single funda-mental region Fv. The main technical obstacle is that the fundamental region Fv is not compact, but rather has a system of cusps going off to infinity which in fact contains infinitely many points, including many irreducible points. We simplify the counting of such points by “thickening” the cusp; more precisely, we compute the number of points in the fundamental region Fv by averaging over lots of such fundamental domains, i.e., by averaging over a continuous range of points v lying in a certain special compact subset H of V . 2.2. Averaging over fundamental domains. Let HDH.J /Dfw 2V WkwkJ , jDisc.w/j  1g, where kwk denotes a Euclidean norm on V fixed under the action of K, and J is sufficiently large so that H is nonempty and of nonzero volume. We write V .i/ WD V .i/ R . Then we have (8) N.V .i/ Z I X/ D R v2H\V .i/ #fx 2 Fv \ V irr Z W jDisc.x/j < Xg jDisc.v/j1dv ni  R v2H\V .i/ jDisc.v/j1dv ; where V irr Z  VZ denotes the subset of irreducible points in VZ. The denominator of the latter expression is, by construction, a finite absolute constant Mi D Mi.J / greater than zero. We have chosen the measure jDisc.v/j1 dv because it is a GR-invariant measure. More generally, for any GZ-invariant subset S  V .i/ Z , let N.SI X/ denote the number of irreducible GZ-orbits on S having discriminant less than X. Then N.SI X/ can be expressed as (9) N.SI X/ D R v2H\V .i/ #fx 2 Fv \ Sirr W jDisc.x/j < Xg jDisc.v/j1dv ni  R v2H\V .i/ jDisc.v/j1dv ; THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1569 where Sirr  S denotes the subset of irreducible points in S. We shall use this definition of N.SI X/ for any S  VZ, even if S is not GZ-invariant. Note that for disjoint S1; S2  VZ, we have N.S1 [ S2/ D N.S1/ C N.S2/. Now since jDisc.v/j1 dv is a GR-invariant measure, we have for any function f 2 C0.V .i//, and any v; x 2 V .i/ R and g 2 GR satisfying v D gx, that f .v/jDisc.v/j1dv D ri f .gx/ dg for some constant ri dependent only on whether i D 0, 1 or 2; here dg denotes a left-invariant Haar measure on GR. We may thus express the above formula for N.SI X/ as an integral over F  GR: N.SI X/ D ri Mi Z g2F #fx 2 Sirr \ gH W jDisc.x/j < Xg dg (10) D ri Mi Z g2N 0.a/A0ƒK #fx 2 S \ N n.u/a.s/kH W jDisc.x/j < Xg dg : Let us write H.u; s; ; X/ D N n.u/a.s/H \ fv 2 V .i/ W jDisc.v/j < Xg. 
Noting that KH D H, R K dk D 1 (by convention), and dg D s12 1 s8 2 s12 3 s20 4 s30 5 s30 6 s20 7 du d s d  dk (up to scaling), we have (11) N.SI X/ D ri Mi Z g2N 0.a/A0ƒ #fx 2 Sirr \ H.u; s; ; X/g  s12 1 s8 2 s12 3 s20 4 s30 5 s30 6 s20 7 du d s d  : We note that the same counting method may be used even if we are interested in counting both reducible and irreducible orbits in VZ. For any set S  V .i/ Z , let N .SI X/ be defined by (9), but where the superscript “irr” is removed. Thus for a GZ-invariant set S  V .i/ Z , ni  N .SI X/ counts the total (weighted) number of GZ-orbits in S having absolute discriminant nonzero and less than X (not just the irreducible ones). By the same reasoning, we have (12) N .SI X/ D ri Mi Z g2N 0.a/A0ƒ #fx 2 S \ H.u; s; ; X/g  s12 1 s8 2 s12 3 s20 4 s30 5 s30 6 s20 7 du d s d  : The expression (11) for N.SI X/, and its analogue (12) for N .S; X/, will be useful in the sections that follow. 2.3. A lemma from geometry of numbers. To estimate the number of lattice points in H.u; s; ; X/, we have the following elementary proposition from the geometry-of-numbers, which is essentially due to Davenport . To state the proposition, we require the following simple definitions. A multiset R  Rn is 1570 MANJUL BHARGAVA said to be measurable if Rk is measurable for all k, where Rk denotes the set of those points in R having a fixed multiplicity k. Given a measurable multiset R  Rn, we define its volume in the natural way, that is, Vol.R/ D P k k Vol.Rk/, where Vol.Rk/ denotes the usual Euclidean volume of Rk. LEMMA 7. Let R be a bounded, semi-algebraic multiset in Rn having maxi-mum multiplicity m, where R is defined by at most k polynomial inequalities each having degree at most . Let R0 denote the image of R under any .upper or lower/ triangular, unipotent transformation of Rn. Then the number of integer lattice points .counted with multiplicity/ contained in the region R0 is Vol.R/ C O.maxfVol.x R/; 1g/; where Vol.x R/ denotes the greatest d-dimensional volume of any projection of R onto a coordinate subspace obtained by equating n d coordinates to zero, where d takes all values from 1 to n 1. The implied constant in the second summand depends only on n, m, k, and. Although Davenport states the above lemma only for compact semi-algebraic sets R  Rn, his proof adapts without essential change to the more general case of a bounded semi-algebraic multiset R  Rn, with the same estimate applying also to any image R0 of R under a unipotent triangular transformation. 2.4. Estimates on reducible quadruples .A; B; C; D/. In this subsection we describe the relative frequencies with which reducible and irreducible elements sit inside various parts of the fundamental domain Fv, as v varies over the compact region H. We begin by describing some sufficient conditions that guarantee that a point in VZ is reducible. LEMMA 8. Let .A; B; C; D/ 2 VZ be an element such that some non-trivial Q-linear combination of A; B; C; D has rank  2. Then .A; B; C; D/ is reducible. Proof. Suppose E D rA C sB C tC C uD, where r; s; t; u 2 Q are not all zero. Let Q1; : : : ; Q5 denote the five 4 4 sub-Pfaffians of .A; B; C; D/. Then we have proven in that if .A; B; C; D/ 2 VZ is irreducible, then the quadrics Q1 D 0; : : : ; Q5 D 0 intersect in five points in P3.x Q/, and moreover, these five points are defined over conjugate quintic extensions of Q. 
However, if rank.E/  2, then Œr; s; t; u 2 P3.Q/ is a common zero of Q1; : : : ; Q5 and it is defined over Q, contradicting the irreducibility of .A; B; C; D/. LEMMA 9. Let .A; B; C; D/ 2 VZ be an element such that some non-trivial Q-linear combination of Q1; : : : ; Q5 factors over Q into two linear factors, where Q1; : : : ; Q5 denote the five 4 4 sub-Pfaffians of .A; B; C; D/. Then .A; B; C; D/ is reducible. THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1571 Proof. As noted in the proof of Lemma 8, the five associated quadratic forms Q1; : : : ; Q5 of an irreducible element .A; B; C; D/ 2 VZ possess five common zeroes that are defined over conjugate quintic fields, and these zeroes are conju-gate to each other over Q. It follows that each of the 5 3  D 10 planes, going through subsets of three of those five points, cannot be defined over Q, as these planes will each be part of a Gal.x Q=Q/-orbit of size at least 5. However, if some rational quaternary quadratic form Q factors over Q into linear factors, then (by the pigeonhole principle) at least one of these two rational factors must vanish at three of the five common points of intersection, a contradiction. LEMMA 10. Let .A; B; C; D/ 2 VZ be an element such that all the variables in at least one of the following sets vanish: (i) fa12; a13; a14; a15; a23; a24; a25g (ii) fa12; a13; a14; a23; a24; a34g (iii) fa12; a13; a14; a15g [ fb12; b13; b14; b15g (iv) fa12; a13; a14; a23; a24g [ fb12; b13; b14; b23; b24g (v) fa12; a13; a14g [ fb12; b13; b14g [ fc12; c13; c14g (vi) fa12; a13; a23g [ fb12; b13; b23g [ fc12; c13; c23g (vii) fa12; a13g [ fb12; b13g [ fc12; c13g [ fd12; d13g Then .A; B; C; D/ is reducible. Proof. In cases (i)–(ii) (resp. (iv)), one sees that A (resp. a rational linear combination of A and B) has rank  2, and thus .A; B; C; D/ is reducible by Lemma 8. In cases (v)–(vii) (resp. (iii)), one finds that Q5 (resp. a rational linear combination of Q2; : : : ; Q5) factors into rational linear factors, and so the result in these cases follows from Lemma 9. We are now ready to give an estimate on the number of irreducible elements in Fv, on average, satisfying a12 D 0: LEMMA 11. Let v take a random value in H uniformly with respect to the measure jDisc.v/j1 dv. Then the expected number of irreducible elements .A; B; C; D/ 2 Fv such that jDisc.A; B; C; D/j < X and a12 D 0 is O.X39=40/. Proof. As in , we divide the set of all .A; B; C; D/ 2 VZ into a number of cases depending on which initial coordinates are zero and which are nonzero. These cases are described in the second column of Table 1. The vanishing condi-tions in the various subcases of Case nC1 are obtained by setting equal to 0—one at a time—each variable that was assumed to be nonzero in Case n. If such a resulting 1572 MANJUL BHARGAVA subcase satisfies the reducibility conditions of Lemma 10, it is not listed. In this way, it becomes clear that any irreducible element in VZ must satisfy precisely one of the conditions enumerated in the second column of Table 1. In particular, there is no Case 14, because the assumption that any nonzero variable in Case 13 is zero immediately results in reducibility by Lemma 10. Let T denote the set of all forty variables aij ; bij ; cij ; dij . For a subcase C of Table 1, we use T0 D T0.C/ to denote the set of variables in T assumed to be 0 in Subcase C, and T1 to denote the set of variables in T assumed to be nonzero. Each variable t 2 T has a weight, which is defined as follows. 
The action of a.s1; s2; : : : ; s7/   on .A; B; C; D/ 2 V causes each variable t to multiply by a certain weight which we denote by w.t/. These weights w.t/ are evidently rational functions in ; s1; : : : ; s7. Let V.C/ denote the set of .A; B; C; D/ 2 VR such that .A; B; C; D/ satisfies the vanishing and nonvanishing conditions of Subcase C. For example, in Sub-case 2a we have T0.2a/ D fa12; a13g and T1.2a/ D fa14; a23; b12g; thus V.2a/ de-notes the set of all .A; B; C; D/ 2 VZ such that a12 D a13 D 0 but a14; a23; b12 ¤ 0. For each subcase C of Case n (n > 0), we wish to show that N.V.C/I X/, as defined by (9), is O.X39=40/. Since N 0.a/ is absolutely bounded, the equality (12) implies that (13) N .V.C/I X/ Z X1=40 Dc0 Z 1 s1;s2;:::;s7Dc .V.C// s12 1 s8 2 s12 3 s20 4 s30 5 s30 6 s20 7 d s d ; where .V.C// denotes the number of integer points in the region H.u; s; ; X/ that also satisfy the conditions (14) t D 0 for t 2 T0 and jtj  1 for t 2 T1: Now for an element .A; B; C; D/ 2 H.u; s; ; X/, we evidently have (15) jtj  J w.t/ and therefore the number of integer points in H.u; s; ; X/ satisfying (14) will be nonzero only if (16) J w.t/  1 for all weights w.t/ such that t 2 T1. Now the sets T1 in each subcase of Table 1 have been chosen to be precisely the set of variables having the minimal weights w.t/ among the variables t 2 T nT0 (by “minimal weight” in T nT0, we mean there is no other variable t 2 T n T0 with weight having equal or smaller exponents for all parameters ; s1; s2; : : : ; s7). Thus if the condition (16) holds for all weights w.t/ corresponding to t 2 T1, then—by the very choice of T1—we will also have J w.t/ 1 for all weights w.t/ such that t 2 T n T0. THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1573 Therefore, if the region H D f.A; B; C; D/ 2 H.u; s; ; X/ W t D 0 8t 2 T0I jtj  1 8t 2 T1g contains an integer point, then (16) and Lemma 7 together imply that the number of integer points in H is O.Vol.H//, since the volumes of all the projections of u1H will in that case also be O.Vol.H//. Now clearly Vol.H/ D O  J 40jT0j Y t2T nT0 w.t/  ; and so we obtain (17) N.V.C/I X/ Z X1=40 Dc0 Z 1 s1;s2;:::;s7Dc Y t2T nT0 w.t/ s12 1 s8 2 s12 3 s20 4 s30 5 s30 6 s20 7 d s d : The latter integral can be explicitly carried out for each of the subcases in Table 1. It will suffice, however, to have a simple estimate of the form O.Xr/, with r < 1, for the integral corresponding to each subcase. For example, if the total exponent of si in (17) is negative for all i in f1; : : : ; 7g, then it is clear that the resulting integral will be at most O.X.40jT0j/=40/ in value. This condition holds for many of the subcases in Table 1 (indicated in the fourth column by “-”), immediately yielding the estimates given in the third column. For cases where this negative exponent condition does not hold, the estimate given in the third column can be obtained as follows. The factor  given in the fourth column is a product of variables in T1, and so it is at least one in absolute value. The integrand in (17) may thus be multiplied by  without harm, and the estimate (17) will remain true; we may then apply the inequalities (15) to each of the variables in , yielding (18) N.V.C/I X/ Z X1=40 Dc0 Z 1 s1;s2;:::;s7Dc Y t2T nT0 w.t/ w./ s12 1 s8 2 s12 3 s20 4 s30 5 s30 6 s20 7 d s d ; where we extend the notation w multiplicatively, i.e., w.ab/ D w.a/w.b/. In each subcase of Table 1, we have chosen the factor  so that the total exponent of each si in (18) is negative. 
Thus we obtain from (18) that N.V.C/I X/ D O X.40#T0.C/C#/=40 ; where # denotes the total number of variables of T appearing in  (counted with multiplicity), and this is precisely the estimate given in the third column of Table 1. In every subcase, aside from Case 0, we see that 40#T0 C# < 40, as desired. Therefore, for the purposes of proving Theorem 6, we may assume that a12 ¤ 0. 1574 MANJUL BHARGAVA Case The set S  VZ defined by N.SI X/ Use factor 0. a12 ¤ 0 X40=40 -1. a12 D 0 I X39=40 -a13; b12 ¤ 0 2a. a12; a13 D 0 I X38=40 -a14; a23; b12 ¤ 0 2b. a12; b12 D 0 I X38=40 -a13; c12 ¤ 0 3a. a12; a13; a14 D 0 I X37=40 -a15; a23; b12 ¤ 0 3b. a12; a13; a23 D 0 I X37=40 -a14; b12 ¤ 0 3c. a12; a13; b12 D 0 I X37=40 -a14; a23; b13; c12 ¤ 0 3d. a12; b12; c12 D 0 I X37=40 -a13; d12 ¤ 0 4a. a12; a13; a14; a15 D 0 I X37=40 a23 a23; b12 ¤ 0 4b. a12; a13; a14; a23 D 0 I X37=40 a24 a15; a24; b12 ¤ 0 4c. a12; a13; a14; b12 D 0 I X36=40 -a15; a23; b13; c12 ¤ 0 4d. a12; a13; a23; b12 D 0 I X36=40 -a14; b13; c12 ¤ 0 4e. a12; a13; b12; b13 D 0 I X36=40 -a14; a23; c12 ¤ 0 4f. a12; a13; b12; c12 D 0 I X36=40 -a14; a23; b13; d12 ¤ 0 4g. a12; b12; c12; d12 D 0 I X36=40 -a13 ¤ 0 5a. a12; a13; a14; a15; a23 D 0 I X37=40 a2 24 a24; b12 ¤ 0 5b. a12; a13; a14; a15; b12 D 0 I X35=40 -a23; b13; c12 ¤ 0 5c. a12; a13; a14; a23; a24 D 0 I X37=40 a2 34 a15; a34; b12 ¤ 0 5d. a12; a13; a14; a23; b12 D 0 I X35=40 -a15; a24; b13; c12 ¤ 0 Table 1. Subcases 0–5d. THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1575 Case The set S  VZ defined by N.SI X/ Use factor 5e. a12; a13; a14; b12; b13 D 0 I X35=40 -a15; a23; b14; c12 ¤ 0 5f. a12; a13; a14; b12; c12 D 0 I X35=40 -a15; a23; b13; d12 ¤ 0 5g. a12; a13; a23; b12; b13 D 0 I X35=40 -a14; b23; c12 ¤ 0 5h. a12; a13; a23; b12; c12 D 0 I X35=40 -a14; b13; d12 ¤ 0 5i. a12; a13; b12; b13; c12 D 0 I X35=40 -a14; a23; c13; d12 ¤ 0 5j. a12; a13; b12; c12; d12 D 0 I X35=40 -a14; a23; b13 ¤ 0 6a. a12; a13; a14; a15; a23; a24 D 0 I X37=40 a3 34 a25; a34; b12 ¤ 0 6b. a12; a13; a14; a15; a23; b12 D 0 I X35=40 a24 a24; b13; c12 ¤ 0 6c. a12; a13; a14; a15; b12; b13 D 0 I X34=40 -a23; b14; c12 ¤ 0 6d. a12; a13; a14; a15; b12; c12 D 0 I X34=40 -a23; b13; d12 ¤ 0 6e. a12; a13; a14; a23; a24; b12 D 0 I X35=40 a34 a15; a34; b13; c12 ¤ 0 6f. a12; a13; a14; a23; b12; b13 D 0 I X34=40 -a15; a24; b14; b23; c12 ¤ 0 6g. a12; a13; a14; a23; b12; c12 D 0 I X34=40 -a15; a24; b13; d12 ¤ 0 6h. a12; a13; a14; b12; b13; b14 D 0 I X34=40 -a15; a23; c12 ¤ 0 6i. a12; a13; a14; b12; b13; c12 D 0 I X34=40 -a15; a23; b14; c13; d12 ¤ 0 6j. a12; a13; a14; b12; c12; d12 D 0 I X34=40 -a15; a23; b13 ¤ 0 6k. a12; a13; a23; b12; b13; b23 D 0 I X34=40 -a14; c12 ¤ 0 6l. a12; a13; a23; b12; b13; c12 D 0 I X34=40 -a14; b23; c13; d12 ¤ 0 6m. a12; a13; a23; b12; c12; d12 D 0 I X34=40 -a14; b13 ¤ 0 Table 1. Subcases 5e–6m. 1576 MANJUL BHARGAVA Case The set S  VZ defined by N.SI X/ Use factor 6n. a12; a13; b12; b13; c12; c13 D 0 I X34=40 -a14; a23; d12 ¤ 0 6o. a12; a13; b12; b13; c12; d12 D 0 I X34=40 -a14; a23; c13 ¤ 0 7a. a12; a13; a14; a15; a23; a24; b12 D 0 I X35=40 a2 34 a25; a34; b13; c12 ¤ 0 7b. a12; a13; a14; a15; a23; b12; b13 D 0 I X34=40 a24 a24; b14; b23; c12 ¤ 0 7c. a12; a13; a14; a15; a23; b12; c12 D 0 I X34=40 a24 a24; b13; d12 ¤ 0 7d. a12; a13; a14; a15; b12; b13; b14 D 0 I X34=40 b15 a23; b15; c12 ¤ 0 7e. a12; a13; a14; a15; b12; b13; c12 D 0 I X34=40 d12 a23; b14; c13; d12 ¤ 0 7f. a12; a13; a14; a15; b12; c12; d12 D 0 I X34=40 b13 a23; b13 ¤ 0 7g. 
a12; a13; a14; a23; a24; b12; b13 D 0 I X34=40 a34 a15; a34; b14; b23; c12 ¤ 0 7h. a12; a13; a14; a23; a24; b12; c12 D 0 I X34=40 a34 a15; a34; b13; d12 ¤ 0 7i. a12; a13; a14; a23; b12; b13; b14 D 0 I X33=40 -a15; a24; b23; c12 ¤ 0 7j. a12; a13; a14; a23; b12; b13; b23 D 0 I X33=40 -a15; a24; b14; c12 ¤ 0 7k. a12; a13; a14; a23; b12; b13; c12 D 0 I X33=40 -a15; a24; b14; b23; c13; d12 ¤ 0 7l. a12; a13; a14; a23; b12; c12; d12 D 0 I X33=40 -a15; a24; b13 ¤ 0 7m. a12; a13; a14; b12; b13; b14; c12 D 0 I X34=40 d12 a15; a23; c13; d12 ¤ 0 7n. a12; a13; a14; b12; b13; c12; c13 D 0 I X34=40 d12 a15; a23; b14; d12 ¤ 0 7o. a12; a13; a14; b12; b13; c12; d12 D 0 I X34=40 c13 a15; a23; b14; c13 ¤ 0 7p. a12; a13; a23; b12; b13; b23; c12 D 0 I X33=40 -a14; c13; d12 ¤ 0 7q. a12; a13; a23; b12; b13; c12; c13 D 0 I X33=40 -a14; b23; d12 ¤ 0 Table 1. Subcases 6n–7q. THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1577 Case The set S VZ defined by N.SIX/ Use factor 7r. a12; a13; a23; b12; b13; c12; d12 D 0 I X33=40 -a14; b23; c13 ¤ 0 7s. a12; a13; b12; b13; c12; c13; d12 D 0 I X34=40 d13 a14; a23; d13 ¤ 0 8a. a12; a13; a14; a15; a23; a24; b12; b13 D 0 I X34=40 a2 34 a25; a34; b14; b23; c12 ¤ 0 8b. a12; a13; a14; a15; a23; a24; b12; c12 D 0 I X34=40 a25a34 a25; a34; b13; d12 ¤ 0 8c. a12; a13; a14; a15; a23; b12; b13; b14 D 0 I X34=40 a24b15 a24; b15; b23; c12 ¤ 0 8d. a12; a13; a14; a15; a23; b12; b13; b23 D 0 I X33=40 a24 a24; b14; c12 ¤ 0 8e. a12; a13; a14; a15; a23; b12; b13; c12 D 0 I X34=40 a24d12 a24; b14; b23; c13; d12 ¤ 0 8f. a12; a13; a14; a15; a23; b12; c12; d12 D 0 I X34=40 a24b13 a24; b13 ¤ 0 8g. a12; a13; a14; a15; b12; b13; b14; c12 D 0 I X34=40 b15d12 a23; b15; c13; d12 ¤ 0 8h. a12; a13; a14; a15; b12; b13; c12; c13 D 0 I X34=40 b14d12 a23; b14; d12 ¤ 0 8i. a12; a13; a14; a15; b12; b13; c12; d12 D 0 I X34=40 c2 13 a23; b14; c13 ¤ 0 8j. a12; a13; a14; a23; a24; b12; b13; b14 D 0 I X33=40 a34 a15; a34; b23; c12 ¤ 0 8k. a12; a13; a14; a23; a24; b12; b13; b23 D 0 I X33=40 a34 a15; a34; b14; c12 ¤ 0 8l. a12; a13; a14; a23; a24; b12; b13; c12 D 0 I X33=40 a34 a15; a34; b14; b23; c13; d12 ¤ 0 8m. a12; a13; a14; a23; a24; b12; c12; d12 D 0 I X33=40 a15 a15; a34; b13 ¤ 0 8n. a12; a13; a14; a23; b12; b13; b14; b23 D 0 I X33=40 a24 a15; a24; c12 ¤ 0 8o. a12; a13; a14; a23; b12; b13; b14; c12 D 0 I X32=40 -a15; a24; b23; c13; d12 ¤ 0 8p. a12; a13; a14; a23; b12; b13; b23; c12 D 0 I X32=40 -a15; a24; b14; c13; d12 ¤ 0 8q. a12; a13; a14; a23; b12; b13; c12; c13 D 0 I X32=40 -a15; a24; b14; b23; d12 ¤ 0 Table 1. Subcases 7r–8q. 1578 MANJUL BHARGAVA Case The set S VZ defined by N.SIX/ Use factor 8r. a12; a13; a14; a23; b12; b13; c12; d12 D 0 I X32=40 -a15; a24; b14; b23; c13 ¤ 0 8s. a12; a13; a14; b12; b13; b14; c12; c13 D 0 I X34=40 c14d12 a15; a23; c14; d12 ¤ 0 8t. a12; a13; a14; b12; b13; b14; c12; d12 D 0 I X34=40 c2 13 a15; a23; c13 ¤ 0 8u. a12; a13; a14; b12; b13; c12; c13; d12 D 0 I X34=40 d 2 13 a15; a23; b14; d13 ¤ 0 8v. a12; a13; a23; b12; b13; b23; c12; c13 D 0 I X33=40 d12 a14; c23; d12 ¤ 0 8w. a12; a13; a23; b12; b13; b23; c12; d12 D 0 I X33=40 c13 a14; c13 ¤ 0 8x. a12; a13; a23; b12; b13; c12; c13; d12 D 0 I X33=40 d13 a14; b23; d13 ¤ 0 9a. a12; a13; a14; a15; a23; a24; b12; b13; b14 D 0 I X34=40 a2 34b15 a25; a34; b15; b23; c12 ¤ 0 9b. a12; a13; a14; a15; a23; a24; b12; b13; b23 D 0 I X33=40 a2 34 a25; a34; b14; c12 ¤ 0 9c. a12; a13; a14; a15; a23; a24; b12; b13; c12 D 0 I X34=40 a2 34d12 a25; a34; b14; b23; c13; d12 ¤ 0 9d. 
a12; a13; a14; a15; a23; a24; b12; c12; d12 D 0 I X34=40 a2 25b13 a25; a34; b13 ¤ 0 9e. a12; a13; a14; a15; a23; b12; b13; b14; b23 D 0 I X33=40 a24b15 a24; b15; c12 ¤ 0 9f. a12; a13; a14; a15; a23; b12; b13; b14; c12 D 0 I X34=40 a24b15d12 a24; b15; b23; c13; d12 ¤ 0 9g. a12; a13; a14; a15; a23; b12; b13; b23; c12 D 0 I X31=40 -a24; b14; c13; d12 ¤ 0 9h. a12; a13; a14; a15; a23; b12; b13; c12; c13 D 0 I X34=40 a24b14d12 a24; b14; b23; d12 ¤ 0 9i. a12; a13; a14; a15; a23; b12; b13; c12; d12 D 0 I X34=40 a24c2 13 a24; b14; b23; c13 ¤ 0 9j. a12; a13; a14; a15; b12; b13; b14; c12; c13 D 0 I X34=40 b15c14d12 a23; b15; c14; d12 ¤ 0 9k. a12; a13; a14; a15; b12; b13; b14; c12; d12 D 0 I X34=40 b15c2 13 a23; b15; c13 ¤ 0 9l. a12; a13; a14; a15; b12; b13; c12; c13; d12 D 0 I X34=40 b14d 2 13 a23; b14; d13 ¤ 0 Table 1. Subcases 8r–9l. THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1579 Case The set S  VZ defined by N.SI X/ Use factor 9m. a12; a13; a14; a23; a24; b12; b13; b14; b23 D 0 I X33=40 a34b24 a15; a34; b24; c12 ¤ 0 9n. a12; a13; a14; a23; a24; b12; b13; b14; c12 D 0 I X31=40 -a15; a34; b23; c13; d12 ¤ 0 9o. a12; a13; a14; a23; a24; b12; b13; b23; c12 D 0 I X31=40 -a15; a34; b14; c13; d12 ¤ 0 9p. a12; a13; a14; a23; a24; b12; b13; c12; c13 D 0 I X31=40 -a15; a34; b14; b23; d12 ¤ 0 9q. a12; a13; a14; a23; a24; b12; b13; c12; d12 D 0 I X31=40 -a15; a34; b14; b23; c13 ¤ 0 9r. a12; a13; a14; a23; b12; b13; b14; b23; c12 D 0 I X31=40 -a15; a24; c13; d12 ¤ 0 9s. a12; a13; a14; a23; b12; b13; b14; c12; c13 D 0 I X32=40 c14 a15; a24; b23; c14; d12 ¤ 0 9t. a12; a13; a14; a23; b12; b13; b14; c12; d12 D 0 I X32=40 c13 a15; a24; b23; c13 ¤ 0 9u. a12; a13; a14; a23; b12; b13; b23; c12; c13 D 0 I X32=40 c23 a15; a24; b14; c23; d12 ¤ 0 9v. a12; a13; a14; a23; b12; b13; b23; c12; d12 D 0 I X32=40 c13 a15; a24; b14; c13 ¤ 0 9w. a12; a13; a14; a23; b12; b13; c12; c13; d12 D 0 I X32=40 d13 a15; a24; b14; b23; d13 ¤ 0 9x. a12; a13; a14; b12; b13; b14; c12; c13; d12 D 0 I X34=40 c14d 2 13 a15; a23; c14; d13 ¤ 0 9y. a12; a13; a23; b12; b13; b23; c12; c13; d12 D 0 I X33=40 d 2 13 a14; c23; d13 ¤ 0 10a. a12; a13; a14; a15; a23; a24; b12; b13; b14; b23 D 0 I X33=40 a2 34b15 a25; a34; b15; b24; c12 ¤ 0 10b. a12; a13; a14; a15; a23; a24; b12; b13; b14; c12 D 0 I X34=40 a2 34b15d12 a25; a34; b15; b23; c13; d12 ¤ 0 10c. a12; a13; a14; a15; a23; a24; b12; b13; b23; c12 D 0 I X31=40 a34 a25; a34; b14; c13; d12 ¤ 0 10d. a12; a13; a14; a15; a23; a24; b12; b13; c12; c13 D 0 I X34=40 a2 34b14d12 a25; a34; b14; b23; d12 ¤ 0 10e. a12; a13; a14; a15; a23; a24; b12; b13; c12; d12 D 0 I X34=40 a2 25c2 13 a25; a34; b14; b23; c13 ¤ 0 10f. a12; a13; a14; a15; a23; b12; b13; b14; b23; c12 D 0 I X31=40 b15 a24; b15; c13; d12 ¤ 0 Table 1. Subcases 9m–10f. 1580 MANJUL BHARGAVA Case The set S  VZ defined by N.SI X/ Use factor 10g. a12; a13; a14; a15; a23; b12; b13; b14; c12; c13 D 0 I X34=40 a24b15c14d12 a24; b15; b23; c14; d12 ¤ 0 10h. a12; a13; a14; a15; a23; b12; b13; b14; c12; d12 D 0 I X34=40 a24b15c2 13 a24; b15; b23; c13 ¤ 0 10i. a12; a13; a14; a15; a23; b12; b13; b23; c12; c13 D 0 I X33=40 a24b14d12 a24; b14; c23; d12 ¤ 0 10j. a12; a13; a14; a15; a23; b12; b13; b23; c12; d12 D 0 I X31=40 c13 a24; b14; c13 ¤ 0 10k. a12; a13; a14; a15; a23; b12; b13; c12; c13; d12 D 0 I X34=40 a24b14d 2 13 a24; b14; b23; d13 ¤ 0 10l. a12; a13; a14; a15; b12; b13; b14; c12; c13; d12 D 0 I X34=40 b15c14d 2 13 a23; b15; c14; d13 ¤ 0 10m. a12; a13; a14; a23; a24; b12; b13; b14; b23; c12 D 0 I X31=40 b24 a15; a34; b24; c13; d12 ¤ 0 10n. 
a12; a13; a14; a23; a24; b12; b13; b14; c12; c13 D 0 I X31=40 c14 a15; a34; b23; c14; d12 ¤ 0 10o. a12; a13; a14; a23; a24; b12; b13; b14; c12; d12 D 0 I X31=40 c13 a15; a34; b23; c13 ¤ 0 10p. a12; a13; a14; a23; a24; b12; b13; b23; c12; c13 D 0 I X31=40 c23 a15; a34; b14; c23; d12 ¤ 0 10q. a12; a13; a14; a23; a24; b12; b13; b23; c12; d12 D 0 I X31=40 c13 a15; a34; b14; c13 ¤ 0 10r. a12; a13; a14; a23; a24; b12; b13; c12; c13; d12 D 0 I X31=40 d13 a15; a34; b14; b23; d13 ¤ 0 10s. a12; a13; a14; a23; b12; b13; b14; b23; c12; c13 D 0 I X33=40 a24c14d12 a15; a24; c14; c23; d12 ¤ 0 10t. a12; a13; a14; a23; b12; b13; b14; b23; c12; d12 D 0 I X31=40 c13 a15; a24; c13 ¤ 0 10u. a12; a13; a14; a23; b12; b13; b14; c12; c13; d12 D 0 I X32=40 c14d13 a15; a24; b23; c14; d13 ¤ 0 10v. a12; a13; a14; a23; b12; b13; b23; c12; c13; d12 D 0 I X32=40 c23d13 a15; a24; b14; c23; d13 ¤ 0 11a. a12; a13; a14; a15; a23; a24; b12; b13; b14; b23; X31=40 a34b15 c12 D 0I a25; a34; b15; b24; c13; d12 ¤ 0 11b. a12; a13; a14; a15; a23; a24; b12; b13; b14; c12; X34=40 a2 34b15c14d12 c13 D 0I a25; a34; b15; b23; c14; d12 ¤ 0 11c. a12; a13; a14; a15; a23; a24; b12; b13; b14; c12; X36=40 a2 25a34b15c3 13 d12 D 0I a25; a34; b15; b23; c13 ¤ 0 Table 1. Subcases 10g–11c. THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1581 Case The set S  VZ defined by N.SI X/ Use factor 11d. a12; a13; a14; a15; a23; a24; b12; b13; b23; c12; X33=40 a2 34b14d12 c13 D 0I a25; a34; b14; c23; d12 ¤ 0 11e. a12; a13; a14; a15; a23; a24; b12; b13; b23; c12; X31=40 a25c13 d12 D 0I a25; a34; b14; c13 ¤ 0 11f. a12; a13; a14; a15; a23; a24; b12; b13; c12; c13; X34=40 a2 25b14d 2 13 d12 D 0I a25; a34; b14; b23; d13 ¤ 0 11g. a12; a13; a14; a15; a23; b12; b13; b14; b23; c12; X33=40 a24b15c14d12 c13 D 0I a24; b15; c14; c23; d12 ¤ 0 11h. a12; a13; a14; a15; a23; b12; b13; b14; b23; c12; X31=40 b15c13 d12 D 0I a24; b15; c13 ¤ 0 11i. a12; a13; a14; a15; a23; b12; b13; b14; c12; c13; X34=40 a24b15c14d 2 13 d12 D 0I a24; b15; b23; c14; d13 ¤ 0 11j. a12; a13; a14; a15; a23; b12; b13; b23; c12; c13; X33=40 a24b14d 2 13 d12 D 0I a24; b14; c23; d13 ¤ 0 11k. a12; a13; a14; a23; a24; b12; b13; b14; b23; c12; X33=40 a34b24c14d12 c13 D 0I a15; a34; b24; c14; c23; d12 ¤ 0 11l. a12; a13; a14; a23; a24; b12; b13; b14; b23; c12; X31=40 b24c13 d12 D 0I a15; a34; b24; c13 ¤ 0 11m. a12; a13; a14; a23; a24; b12; b13; b14; c12; c13; X31=40 c14d13 d12 D 0I a15; a34; b23; c14; d13 ¤ 0 11n. a12; a13; a14; a23; a24; b12; b13; b23; c12; c13; X31=40 c23d13 d12 D 0I a15; a34; b14; c23; d13 ¤ 0 11o. a12; a13; a14; a23; b12; b13; b14; b23; c12; c13; X33=40 a24c14d 2 13 d12 D 0I a15; a24; c14; c23; d13 ¤ 0 12a. a12; a13; a14; a15; a23; a24; b12; b13; b14; b23; c12; X33=40 a2 34b15c14d12 c13 D 0I a25; a34; b15; b24; c14; c23; d12 ¤ 0 12b. a12; a13; a14; a15; a23; a24; b12; b13; b14; b23; c12; X36=40 a2 25a34b15b24c3 13 d12 D 0I a25; a34; b15; b24; c13 ¤ 0 12c. a12; a13; a14; a15; a23; a24; b12; b13; b14; c12; c13; X36=40 a2 25a34b15c2 14d 2 13 d12 D 0I a25; a34; b15; b23; c14; d13 ¤ 0 12d. a12; a13; a14; a15; a23; a24; b12; b13; b23; c12; c13; X33=40 a2 25b14d 2 13 d12 D 0I a25; a34; b14; c23; d13 ¤ 0 12e. a12; a13; a14; a15; a23; b12; b13; b14; b23; c12; c13; X33=40 a24b15c14d 2 13 d12 D 0I a24; b15; c14; c23; d13 ¤ 0 12f. a12; a13; a14; a23; a24; b12; b13; b14; b23; c12; c13; X33=40 a15b24c23d 2 13 d12 D 0I a15; a34; b24; c14; c23; d13 ¤ 0 13. 
a12; a13; a14; a15; a23; a24; b12; b13; b14; b23; X37=40 a2 25a34b2 24c2 14d 3 13 c12; c13; d12 D 0I a25; a34; b15; b24; c14; c23; d13 ¤ 0 Table 1. Subcases 11d–13. 1582 MANJUL BHARGAVA 2.5. The main term. Let RX.v/ denote the multiset fx 2 Fv W jDisc.x/j < Xg. Then we have the following result counting the number of integral points in RX.v/, on average, satisfying a12 ¤ 0: PROPOSITION 12. Let v take a random value in H \ V .i/ uniformly with respect to the measure jDisc.v/j1 dv. Then the expected number of integral elements .A; B; C; D/ 2 Fv such that jDisc.A; B; C; D/j < X and a12 ¤ 0 is Vol.RX.vi// C O.X39=40/, where vi is any vector in V .i/. Proof. Following the proof of Lemma 11, let V .i/.0/ denote the subset of VR such that a12 ¤ 0. We wish to show that (19) N .V .i/.0/I X/ D 1 ni  Vol.RX.vi// C O.X39=40/: Now (20) N .V .i/.0/I X/ D ri Mi Z X1=40 Dc0 Z 1 s1;s2;:::;s7Dc Z u2N 0.a.s// .V.0// s12 1 s8 2 s12 3 s20 4 s30 5 s30 6 s20 7 du d s d ; where .V.0// denotes the number of integer points in the region H.u; s; ; X/ satisfying ja12j  1. Evidently, the number of integer points in H.u; s; ; X/ with ja12j  1 can be nonzero only if we have (21) J w.a12/ D J   s3 1s2s3s3 4s6 5s4 6s2 7  1: Therefore, if the region H D f.A; B; C; D/ 2 H.u; s; ; X/ W ja12j  1g contains an integer point, then (21) and Lemma 7 imply that the number of integer points in H is Vol.H/ C O.J 1Vol.H/=w.a12//, since all smaller-dimensional projections of u1H are clearly bounded by a constant times the projection of H onto the hyperplane a12 D 0 (since a12 has minimal weight). Therefore, since H D H.u; s; ; X/ H.u; s; ; X/ H  , we may write (22) N .V .i/.0/I X/ D ri Mi Z X1=40 Dc0 Z 1 s1;:::;s7Dc Z u2N 0.a.s//  Vol H.u; s; ; X/  Vol H.u; s; ; X/ H  C O.maxfJ 3939s3 1s2s3s3 4s6 5s4 6s2 7; 1g/   s12 1 s8 2 s12 3 s20 4 s30 5 s30 6 s20 7 du d s d : The integral of the first term in (22) is .1=ri/ R v2H\V .i/ Vol.RX.v//jDisc.v/j1dv. Since Vol.RX.v// does not depend on the choice of v 2 V .i/ (see 2.6), the latter integral is simply ŒMi=.ni ri/  Vol.RX.v//. To estimate the integral of the second term in (22), let H0 D H.u; s; t; X/H, and for each ja12j  1, let H0.a12/ be the subset of all elements .A; B; C; D/ 2 H0 THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1583 with the given value of a12. Then the 39-dimensional volume of H0.a12/ is at most O  J 39 Q t2T nfa12g w.t/  , and so we have the estimate Vol.H0/ Z 1 1 J 39 Y t2T nfa12g w.t/ da12 D O  J 39 Y t2T nfa12g w.t/  : The second term of the integrand in (22) can thus be absorbed into the third term. Fi-nally, one easily computes the integral of the third term in (22) to be O.J 39X39=40/. We thus obtain, for any v 2 V .i/, that (23) N .V .i/I X/ D 1 ni  Vol.RX.v// C O.J 39X39=40=Mi.J //: Note that the above proposition counts all integer points in RX.v/ satisfying a12 ¤ 0, not just the irreducible ones. However, in this regard we have the following lemma: LEMMA 13. Let v 2 H \ V .i/. Then the number of .A; B; C; D/ 2 Fv such that a12 ¤ 0, jDisc.A; B; C; D/j < X, and .A; B; C; D/ is not irreducible is o.X/. Lemma 13 will in fact follow from a stronger lemma. We say that an element .A; B; C; D/ 2 VZ is absolutely irreducible if it is irreducible and the fraction field of its associated quintic ring is an S5-quintic field (equivalently, if the fields of definition of its common zeroes in P3 are S5-quintic fields). Then we have the following lemma, whose proof is postponed to Section 3: LEMMA 14. Let v 2 H \ V .i/. 
Then the number of .A; B; C; D/ 2 Fv such that a12 ¤ 0, jDisc.A; B; C; D/j < X, and .A; B; C; D/ is not absolutely irre-ducible is o.X/. Therefore, to prove Theorem 6, it remains only to compute the fundamental volume Vol.RX.v// for v 2 V .i/. This is handled in the next subsection. 2.6. Computation of the fundamental volume. In this subsection, we compute Vol.RX.v//, where RX.v/ is defined as in Section 2.5. We will see that this volume depends only on whether v lies in V .0/, V .1/, or V .2/; here V .i/ again denotes the GR-orbit in VR consisting of those elements .A; B; C; D/ having nonzero discrim-inant and possessing 5 2i real zeros in P3. Before performing this computation, we first state two propositions regarding the group G D GL4 SL5 and its 40-dimensional representation V . PROPOSITION 15. The group GR acts transitively on V .i/, and the isotropy groups for v 2 V .i/ are given as follows: .i/ S5, if v 2 V .0/; .ii/ S3 C2, if v 2 V .1/; and .iii/ D4, if v 2 V .2/. 1584 MANJUL BHARGAVA In view of Proposition 15, it will be convenient to use the notation ni to denote the order of the stabilizer of any vector v 2 V .i/. Proposition 15 implies that we have n0 D 120, n1 D 12, and n2 D 8. Now define the usual subgroups N , x N , A, and ƒ of GR as follows: N D fn.x1; x2; : : : ; x16/ W xi 2 Rg; where n.x/ D 0 B B @ 0 B @ 1 x1 x2 x3 1 x4 x5 1 x6 1 1 C A ; 0 B B @ 1 x7 x8 x9 x10 1 x11 x12 x13 1 x14 x15 1 x16 1 1 C C A 1 C C AI x N D fN n.u1; u2; : : : ; u16/ W ui 2 Rg; where N n.u/ D 0 B B @ 0 B @ 1 u1 1 u2 u3 1 u4 u5 u6 1 1 C A ; 0 B B @ 1 u7 1 u8 u9 1 u10 u11 u12 1 u13 u14 u15 u16 1 1 C C A 1 C C AI A D fa.t1; t2; : : : ; t7/ W t1; t2; : : : ; t7 2 RCg; where a.; t/ D 0 B B B B @ 0 B B @ t1 t2=t1 t3=t2 1=t3 1 C C A ; 0 B B B B @ t4 t5=t4 t6=t5 t7=t6 1=t7 1 C C C C A 1 C C C C A I ƒ D ff W  > 0g; where  acts by 0 B B @ 0 B @     1 C A ; 0 B B @ 1 1 1 1 1 1 C C A 1 C C A: We define an invariant measure dg on GR by (24) Z G f .g/dg D Z R C Z R7 C Z R4 Z R4 f .n.x/N n.u/a.t// dx du d t d : With this choice of Haar measure on GR, it is known that Z GZnG˙1 R dg D Œ.2/.3/.4/  Œ.2/.3/.4/.5/; where G˙1 R  GR denotes the subgroup f.g4; g5/ 2 GR W det.g4/ D ˙1g (see, e.g., ). Now let dy D dy1 dy2    dy40 be the standard Euclidean measure on VR. Then we have: THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1585 PROPOSITION 16. For i D 0, 1, or 2, let f 2 C0.V .i//, and let y denote any element of V .i/. Then (25) Z g2GR f .g  y/dg D ni 20  Z v2V .i/ jDisc.v/j1f .v/ dv: Proof. Put .z1; : : : ; z40/ D n.x/N n.u/a.t/  y: Then the form Disc.z/1dz1 ^    ^ dz12 is a GR-invariant measure, and so we must have Disc.z/1dz1 ^    ^ dz40 D c dx ^ du ^ d t ^ d  for some constant factor c. An explicit Jacobian calculation shows that c D 20. (To make the calculation easier, we note that it suffices to check this on any fixed representative y in V .0/, V .1/, or V .2/.) By Proposition 15, the group GR is an ni-fold covering of V .i/ via the map g ! g  y. Hence Z GR f .g  y/dg D ni 20  Z V .i/ jDisc.v/j1f .v/dv: as desired. Finally, for any vector y 2 V .i/ of absolute discriminant 1, we obtain using Proposition 16 that 1 ni  Vol.RX.y// D 20 ni Z X1=40 1 40d  Z GZnG˙1 R dg D .2/2.3/2.4/2.5/ 2ni X; proving Theorem 6. 2.7. Congruence conditions. We may prove a version of Theorem 6 for a set in V .i/ defined by a finite number of congruence conditions: THEOREM 17. Suppose S is a subset of V .i/ Z defined by finitely many congru-ence conditions. 
Then (26) lim X!1 N.S \ V .i/I X/ X D .2/2.3/2.4/2.5/ 2ni Y p p.S/; where p.S/ denotes the p-adic density of S in VZ, and ni D 120, 12, or 8 for i D 0, 1, or 2 respectively. To prove Theorem 17, suppose S is defined by congruence conditions mod-ulo some integer m. Then S may be viewed as the union of (say) k translates L1; : : : ; Lk of the lattice m  VZ. For each such lattice translate Lj , we may use formula (11) and the discussion following that formula to compute N.SI X/, but where each d-dimensional volume is scaled by a factor of 1=md to reflect the fact 1586 MANJUL BHARGAVA that our new lattice has been scaled by a factor of m. For a fixed value of m, we thus obtain (27) N.Lj I X/ D m40 Vol.RX.v// C O.m39J 39X39=40=Mi.J // for v 2 V .i/, where the implied constant is also independent of m provided m D O.X1=40/. Summing (27) over j , and noting that km40 D Q p p.S/, yields (26). 3. Quadruples of 5 5 skew-symmetric matrices and Theorems 1–4 Theorems 5 and 6 of the previous section now immediately imply the follow-ing. THEOREM 18. Let M .i/ 5 .; / denote the number of isomorphism classes of pairs .R; R0/ such that R is an order in an S5-quintic field with 5 2i real embeddings, R0 is a sextic resolvent ring of R, and  < Disc.R/ < . Then .a/ lim X!1 M .0/ 5 .0; X/ X D .2/2.3/2.4/2.5/ 240 I .b/ lim X!1 M .1/ 5 .X; 0/ X D .2/2.3/2.4/2.5/ 24 I .c/ lim X!1 M .2/ 5 .0; X/ X D .2/2.3/2.4/2.5/ 16 : To obtain finer asymptotic information on the distribution of quintic rings (in particular, without the weighting by the number of sextic resolvents), we need to be able to count irreducible equivalence classes in VZ lying in certain subsets S  VZ. If S is defined, say, by finitely many congruence conditions, then Theorem 17 applies in that case. However, the set S of elements .A; B; C; D/ 2 VZ corresponding to maximal quintic orders is defined by infinitely many congruence conditions (see [5, 12]). To prove that (26) still holds for such a set, we require a uniform estimate on the error term when only finitely many factors are taken in (26). This estimate is pro-vided in Section 3.1. In Section 3.2, we prove Lemma 14. Finally, in Section 3.3, we complete the proofs of Theorems 1–4. 3.1. A uniformity estimate. As in , for a prime number p we denote by Up the set of all .A; B; C; D/ 2 VZ corresponding to quintic orders R that are maximal at p. Let Wp D VZ Up. In order to apply a sieve to obtain Theorems 1–4, we require the following proposition, analogous to Proposition 1 in and Proposi-tion 23 in . PROPOSITION 19. N.WpI X/ D O.X=p2/, where the implied constant is independent of p. Proof. We begin with the following lemma. THE DENSITY OF DISCRIMINANTS OF QUINTIC RINGS AND FIELDS 1587 LEMMA 20. The number of maximal orders in quintic fields, up to isomor-phism, having absolute discriminant less than X is O.X/. Lemma 20 follows immediately from Theorem 18, since we have shown that every quintic ring has a sextic resolvent ring ([5, Cor. 4]). To estimate N.WpI X/ using Lemma 20, we only need to know that (a) the number of subrings of index pk (k  1) in a maximal quintic ring R does not grow too rapidly with k; and (b) the number of sextic resolvents that such a subring possesses is also not too large relative to pk. For (a), an even stronger result than we need here has recently been proven in the Ph.D. 
thesis of Jos Brakenhoff, who shows that the number of orders having index $p^k$ in a maximal quintic ring $R$ is at most $O\bigl(p^{\min\{2k-2,\,\frac{20}{11}k\}}\bigr)$ for $k\geq 1$, where the implied constant is independent of $p$, $k$, and $R$. Any such order will of course have discriminant $p^{2k}\,\mathrm{Disc}(R)$.

As for (b), it follows from [5, Proof of Cor. 4] that the number of sextic resolvents of a quintic ring having content $n$ is $O(n^6)$; moreover, the number of sextic resolvents of a maximal quintic ring is 1. (Recall that the content of a quintic ring $R$ is the largest integer $n$ such that $R=\mathbb Z+nR'$ for some quintic ring $R'$.) Since every content $n$ quintic ring $R$ arises as $\mathbb Z+nR'$ for a unique content 1 quintic ring $R'$, and $\mathrm{Disc}(R)=n^8\,\mathrm{Disc}(R')$, we have
$$N(W_p;X)=\sum_{n=1}^\infty\frac{O(n^6)}{n^8}\sum_{k=1}^\infty\frac{O\bigl(p^{\min\{2k-2,\,\frac{20}{11}k\}}\bigr)}{p^{2k}}\,O(X)=O(X/p^2),$$
as desired.

3.2. Proof of Lemma 14. We say a quintic ring is an $S_5$-quintic ring if it is an order in an $S_5$-quintic field. To prove Lemma 14, we wish to show that the expected number of integral elements $(A,B,C,D)\in F_v$ ($v\in V^{(i)}$) that correspond to quintic rings that are not $S_5$-quintic rings, and such that $|\mathrm{Disc}(A,B,C,D)|<X$ and $a_{12}\neq 0$, is $o(X)$.

Now if a quintic ring $R=R(A,B,C,D)$ is not an $S_5$-quintic ring, then we claim that either the splitting type (1112) or (5) does not occur in $R$. Indeed, if both of these splitting types occur in $R$, then $R$ is clearly a domain (since $R/pR\cong\mathbb F_{p^5}$ for some prime $p$) and the Galois group associated with the quotient field of $R$ then must contain a 5-cycle and a transposition, implying that the Galois group is in fact $S_5$.

Therefore, to obtain an upper bound on the expected number of integral elements $(A,B,C,D)\in F_v$ such that $R(A,B,C,D)$ is not an $S_5$-quintic ring, $|\mathrm{Disc}(A,B,C,D)|<X$, and $a_{12}\neq 0$, we may simply count those quintic rings in which $p$ does not split as (1112) in $R$ for any prime $p<N$ and those quintic rings for which $p$ does not have splitting type (5) for any prime $p<N$ (for some sufficiently large $N$). Now the $p$-adic density $\mu_p(T_p(1112))$ in $V_{\mathbb Z}$ of the set of those $(A,B,C,D)\in T_p(1112)$ approaches $1/12$ as $p\to\infty$, while the $p$-adic density $\mu_p(T_p(5))$ of those $(A,B,C,D)\in T_p(5)$ approaches $1/5$ as $p\to\infty$ (by [5, Lemma 20]). We conclude from (26) that the total number of such $(A,B,C,D)\in F_v$ that do not lie in $T_p(1112)$ for any $p<N$ or do not lie in $T_p(5)$ for any $p<N$, and satisfy $|\mathrm{Disc}(A,B,C,D)|<X$ for sufficiently large $X=X(N)$, is at most
$$\frac{\zeta(2)^2\zeta(3)^2\zeta(4)^2\zeta(5)}{2n_i}\cdot\Bigl(\prod_{p<N}\bigl(1-\mu_p(T_p(1112))\bigr)+\prod_{p<N}\bigl(1-\mu_p(T_p(5))\bigr)\Bigr)X+o(X).$$
Letting $N\to\infty$, we see that asymptotically the above count of $(A,B,C,D)$ is less than $cX$ for any fixed positive constant $c$, and this completes the proof.

3.3. Proofs of Theorems 1–4.

Proof of Theorem 1. Again, let $U_p$ denote the set of all $(A,B,C,D)\in V_{\mathbb Z}$ that correspond to pairs $(R,R')$ where $R$ is maximal at $p$, and let $U=\cap_p U_p$. Then $U$ is the set of $(A,B,C,D)\in V_{\mathbb Z}$ corresponding to maximal quintic rings $R$. In [5, Th. 21], we determined the $p$-adic density $\mu(U_p)$ of $U_p$:
(28) $$\mu(U_p)=\frac{(p-1)^8\,p^{12}\,(p+1)^4(p^2+1)^2(p^2+p+1)^2(p^4+p^3+p^2+p+1)(p^4+p^3+2p^2+2p+1)}{p^{40}}.$$
Suppose $Y$ is any positive integer.
It follows from (26) and (28) that
$$\lim_{X\to\infty}\frac{N(\cap_{p<Y}U_p\cap V^{(i)};X)}{X}=\frac{\zeta(2)^2\zeta(3)^2\zeta(4)^2\zeta(5)}{2n_i}\prod_{p<Y}\bigl[p^{-28}(p^2-1)^2(p^3-1)^2(p^4-1)^2(p^5-1)(p^5+p^3-p-1)\bigr].$$
Letting $Y$ tend to $\infty$, we obtain immediately that
$$\limsup_{X\to\infty}\frac{N(U\cap V^{(i)};X)}{X}\leq\frac{\zeta(2)^2\zeta(3)^2\zeta(4)^2\zeta(5)}{2n_i}\prod_p\bigl[p^{-28}(p^2-1)^2(p^3-1)^2(p^4-1)^2(p^5-1)(p^5+p^3-p-1)\bigr]$$
$$=\frac{\zeta(2)^2\zeta(3)^2\zeta(4)^2\zeta(5)}{2n_i}\prod_p\bigl[(1-p^{-2})^2(1-p^{-3})^2(1-p^{-4})^2(1-p^{-5})(1+p^{-2}-p^{-4}-p^{-5})\bigr]=\frac{1}{2n_i}\prod_p\bigl(1+p^{-2}-p^{-4}-p^{-5}\bigr).$$

To obtain a lower bound for $N(U\cap V^{(i)};X)$, we note that
$$\bigcap_{p<Y}U_p\subseteq U\cup\Bigl(\bigcup_{p\geq Y}W_p\Bigr).$$
Hence by Proposition 19,
$$\lim_{X\to\infty}\frac{N(U\cap V^{(i)};X)}{X}\geq\frac{\zeta(2)^2\zeta(3)^2\zeta(4)^2\zeta(5)}{2n_i}\prod_{p<Y}\bigl[p^{-28}(p^2-1)^2(p^3-1)^2(p^4-1)^2(p^5-1)(p^5+p^3-p-1)\bigr]-O\Bigl(\sum_{p\geq Y}p^{-2}\Bigr).$$
Letting $Y$ tend to infinity completes the proof of Theorem 1.

Proof of Theorem 2. For each (isomorphism class of) quintic ring $R$, we make a choice of sextic resolvent ring $R'$, and let $S\subseteq V_{\mathbb Z}$ denote the set of all elements in $V_{\mathbb Z}$ that yield the pair $(R,R')$ (under the bijection of Theorem 5) for some $R$. Then we wish to determine $N(S\cap V^{(i)};X)$ for $i=0,1,2$; by equation (26), this amounts to determining the $p$-adic density $\mu_p(S)$ of $S$ for each prime $p$ for our choice of $S$. In this regard we have the following formula, which follows easily from the arguments in [5, Proof of Lemma 20]:
(29) $$\mu_p(S)=\frac{|G(\mathbb F_p)|}{\mathrm{Disc}_p(R)\cdot|\mathrm{Aut}_{\mathbb Z_p}(R)|}.$$
Combining (26) and (29) together with the fact that
$$|G(\mathbb F_p)|=(p-1)^8\,p^{16}\,(p+1)^4(p^2+1)^2(p^2+p+1)^2(p^4+p^3+p^2+p+1),$$
and proceeding as in Theorem 1, now yields Theorem 2.

Proof of Theorem 3. Let $K_5$ be an $S_5$-quintic field, and $K_{120}$ its Galois closure. It is known that the Artin symbol $(K_{120}/p)$ equals $\langle e\rangle$, $\langle(12)\rangle$, $\langle(123)\rangle$, $\langle(1234)\rangle$, $\langle(12345)\rangle$, $\langle(12)(34)\rangle$, or $\langle(12)(345)\rangle$ precisely when the splitting type of $p$ in $R$ is (11111), (1112), (113), (14), (5), (122), or (23) respectively, where $R$ denotes the ring of integers in $K_5$. As in , let $U_p(\sigma)$ denote the set of all $(A,B,C,D)\in V_{\mathbb Z}$ that correspond to maximal quintic rings $R$ having a specified splitting type $\sigma$ at $p$. Then by the same argument as in the proof of Theorem 1, we have
$$\lim_{X\to\infty}\frac{N(U_p(\sigma)\cap V^{(i)};X)}{X}=\frac{\zeta(2)^2\zeta(3)^2\zeta(4)^2\zeta(5)}{2n_i}\,\mu_p(U_p(\sigma))\prod_{q\neq p}\mu_q(U_q).$$
On the other hand, Lemma 20 of [5] gives the $p$-adic densities of $U_p(\sigma)$ for all splitting and ramification types $\sigma$; in particular, the values of $\mu_p(U_p(\sigma))$ for $\sigma=(11111)$, (1112), (113), (14), (5), (122), or (23) are seen to occur in the ratio 1:10:20:30:24:15:20 for any value of $p$; this is the desired result.

Proof of Theorem 4. This follows immediately from Theorem 1, Lemma 11, and Lemma 14.

Acknowledgments. I am very grateful to B. Gross, H. W. Lenstra, P. Sarnak, A. Shankar, A. Wiles, and M. Wood for many helpful discussions during this work. I am also very thankful to the Packard Foundation for their kind support of this project.

References

M. BHARGAVA, Higher Composition Laws, Ph.D. thesis, Princeton University, 2001.
M. BHARGAVA, Higher composition laws III: The parametrization of quartic rings, Ann. of Math. 159 (2004), 1329–1360. MR 2005k:11214
M. BHARGAVA, The density of discriminants of quartic rings and fields, Ann. of Math. 162 (2005), 1031–1063. MR 2006m:11163 Zbl 1159.11045
M. BHARGAVA, Mass formulae for extensions of local fields, and conjectures on the density of number field discriminants, Int. Math. Res. Not. 2007 (2007), Art. ID rnm052, 20. MR 2009e:11220 Zbl 1145.11080
M. BHARGAVA, Higher composition laws IV: The parametrization of quintic rings, Ann. of Math. 167 (2008), 53–94. MR 2009c:11057 Zbl 173.11058
M. BHARGAVA, On mass formulae for algebras over Fp and Zp, in progress.
M. BHARGAVA, On Gauss-Siegel class number-regulator summation formulae for cubic fields, in progress.
M. BHARGAVA and A. SHANKAR, The density of discriminants of cubic, quartic, and quintic extensions of a number field, in progress.
M. BHARGAVA, A. SHANKAR, and J. TSIMERMAN, On the Davenport-Heilbronn theorems, and second order terms, preprint.
J. BRAKENHOFF, Counting problems for number rings, Ph.D. thesis, Leiden University, 2009.
B. DATSKOVSKY and D. J. WRIGHT, The adelic zeta function associated to the space of binary cubic forms. II. Local theory, J. Reine Angew. Math. 367 (1986), 27–75. MR 87m:11034 Zbl 0575.10016
H. DAVENPORT, On a principle of Lipshitz, J. London Math. Soc. 26 (1951), 179–183; Corrigendum: "On a principle of Lipschitz", J. London Math. Soc. 39 (1964), 580. MR 29 #3433 Zbl 0125.02703
H. DAVENPORT, On the class-number of binary cubic forms I and II, J. London Math. Soc. 26 (1951), 183–198. MR 13,323e Zbl 0044.27002
H. DAVENPORT and H. HEILBRONN, On the density of discriminants of cubic fields. II, Proc. Roy. Soc. London Ser. A 322 (1971), 405–420. MR 58 #10816 Zbl 0212.08101
R. P. LANGLANDS, The volume of the fundamental domain for some arithmetical subgroups of Chevalley groups, in Algebraic Groups and Discontinuous Subgroups (Proc. Sympos. Pure Math., Boulder, Colo., 1965), Amer. Math. Soc., Providence, R.I., 1966, pp. 143–148. MR 35 #4226 Zbl 0218.20041
M. SATO and T. KIMURA, A classification of irreducible prehomogeneous vector spaces and their relative invariants, Nagoya Math. J. 65 (1977), 1–155. MR 55 #3341 Zbl 0321.14030
M. SATO and T. SHINTANI, On zeta functions associated with prehomogeneous vector spaces, Ann. of Math. 100 (1974), 131–170. MR 49 #8969 Zbl 0309.10014
T. SHINTANI, On Dirichlet series whose coefficients are class numbers of integral binary cubic forms, J. Math. Soc. Japan 24 (1972), 132–188. MR 44 #6619 Zbl 0227.10031
D. J. WRIGHT and A. YUKIE, Prehomogeneous vector spaces and field extensions, Invent. Math. 110 (1992), 283–314. MR 93j:12004 Zbl 0803.12004
A. YUKIE, Shintani Zeta Functions, London Math. Soc. Lect. Note Ser. 183, Cambridge Univ. Press, Cambridge, 1993. MR 95h:11037 Zbl 0801.11021

(Received September 29, 2004)

E-mail address: [email protected]
DEPARTMENT OF MATHEMATICS, PRINCETON UNIVERSITY, FINE HALL, WASHINGTON RD, PRINCETON NJ 08544, UNITED STATES
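As a purely editorial numerical illustration (not part of the article above), the constants appearing in Theorem 18 and in the proof of Theorem 1 can be evaluated directly: the zeta factor $\zeta(2)^2\zeta(3)^2\zeta(4)^2\zeta(5)$ divided by $2n_i$, and the Euler product $\frac{1}{2n_i}\prod_p(1+p^{-2}-p^{-4}-p^{-5})$ truncated over a finite list of primes. The helper names, the zeta partial-sum length, and the prime cutoff in the sketch below are ad hoc choices, so the printed values are approximations only.

```python
# Editorial illustration: evaluates zeta(2)^2 zeta(3)^2 zeta(4)^2 zeta(5) / (2 n_i)
# from Theorem 18 and the truncated Euler product (1/(2 n_i)) prod_p (1 + p^-2 - p^-4 - p^-5)
# from the proof of Theorem 1.  Cutoffs are arbitrary; values are approximate.

def zeta(s, terms=200000):
    # crude partial sum of the Riemann zeta function; adequate for s >= 2 here
    return sum(n ** -s for n in range(1, terms))

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

zeta_factor = zeta(2) ** 2 * zeta(3) ** 2 * zeta(4) ** 2 * zeta(5)

euler = 1.0
for p in primes_up_to(10 ** 6):
    euler *= 1 + p ** -2 - p ** -4 - p ** -5

for i, n_i in enumerate((120, 12, 8)):
    print(f"i={i}: zeta constant = {zeta_factor / (2 * n_i):.6f}, "
          f"Euler-product constant ~ {euler / (2 * n_i):.6f}")
```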
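Two further editorial sanity checks, again not from the article: (i) for small primes, formula (28) for the maximality density $\mu(U_p)$ agrees with the factored form $p^{-28}(p^2-1)^2(p^3-1)^2(p^4-1)^2(p^5-1)(p^5+p^3-p-1)$ used in the proof of Theorem 1; (ii) the conjugacy classes of $S_5$ with cycle types $e$, (2), (3), (4), (5), (2,2), (2,3) have sizes in the ratio 1:10:20:30:24:15:20 quoted in the proof of Theorem 3. The function names and the small set of test primes below are arbitrary.

```python
# Editorial sanity check: (i) identity between (28) and its factored form for a few
# primes; (ii) S_5 conjugacy-class sizes reproduce the splitting-type ratio
# 1:10:20:30:24:15:20 from the proof of Theorem 3.
from fractions import Fraction
from itertools import permutations
from collections import Counter

def mu_Up(p):
    num = ((p - 1) ** 8 * p ** 12 * (p + 1) ** 4 * (p ** 2 + 1) ** 2
           * (p ** 2 + p + 1) ** 2 * (p ** 4 + p ** 3 + p ** 2 + p + 1)
           * (p ** 4 + p ** 3 + 2 * p ** 2 + 2 * p + 1))
    return Fraction(num, p ** 40)

def factored_form(p):
    num = ((p ** 2 - 1) ** 2 * (p ** 3 - 1) ** 2 * (p ** 4 - 1) ** 2
           * (p ** 5 - 1) * (p ** 5 + p ** 3 - p - 1))
    return Fraction(num, p ** 28)

assert all(mu_Up(p) == factored_form(p) for p in (2, 3, 5, 7, 11))

def cycle_type(perm):
    # lengths (> 1) of the cycles of a permutation of {0,...,4}, sorted
    seen, lengths = set(), []
    for start in range(5):
        if start not in seen:
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = perm[x]
                length += 1
            if length > 1:
                lengths.append(length)
    return tuple(sorted(lengths))

sizes = Counter(cycle_type(p) for p in permutations(range(5)))
# cycle types matching splitting types (11111),(1112),(113),(14),(5),(122),(23)
order = [(), (2,), (3,), (4,), (5,), (2, 2), (2, 3)]
print([sizes[t] for t in order])   # -> [1, 10, 20, 30, 24, 15, 20]
```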
O. Reg. 237/09: GENERAL
Under: Credit Unions and Caisses Populaires Act, 1994, S.O. 1994, c. 11

Credit Unions and Caisses Populaires Act, 1994

ONTARIO REGULATION 237/09

GENERAL

Historical version for the period July 1, 2010 to December 16, 2010. No amendments.

Not-yet-in-force provisions appear in consolidated law as text with a grey background and are accompanied by related editorial notes.

This is the English version of a bilingual regulation.
CONTENTS

PART I INTERPRETATION
1. Definitions
2. Class 2 credit unions
3. Widely distributed security
PART II ESTABLISHING A CREDIT UNION
4. Articles of incorporation
5. Name
PART III MEMBERSHIP
6. Trusts for named beneficiaries
7. Payments re deceased members
PART IV CAPITAL STRUCTURE
8. Number of shares
9. Disclosure re insurance of membership shares
10. Membership share certificate
11. Offering statement
12. Notice of offering
13. Statement of material change
14. Transfer of securities issued after receipt for an offering statement
PART V CAPITAL AND LIQUIDITY
15. Adequate capital
16. Total assets
17. Regulatory capital
18. Risk weighted assets of a credit union
19. Forming of groups relating to capital requirements
20. Adequate liquidity for class 1 credit unions
21. Adequate liquidity for class 2 credit unions
22. Encumbered asset
23. Failure to meet requirements for adequate liquidity
24. Provision for doubtful loans and required reserves
PART VI GOVERNING THE CREDIT UNION
25. Mandatory by-laws
26. Frequency of board meetings
27. Duties of audit committee
28. Remuneration reported in financial statements
29. Bond for persons handling money
30. Bond
PART VII RESTRICTIONS ON BUSINESS POWERS
Ancillary Businesses
31. Ancillary businesses
Financial Services
32. Prohibition re financial services
33. Financial lease agreements and conditional sales agreements
Networking
34. Networking
Authorized Types of Insurance
35. Authorized types of insurance
36. Group insurance policy
37. Advice about insurance
Restrictions on Insurance
38. Restriction on insurance
39. Restriction on agency and office space
40. Separate and distinct premises
41. Telecommunications device
42. Promotion of insurer
43. Sharing of information with insurer
Fiduciary Activities
44. Fiduciary activities
Guarantees
45. Guarantees
46. Limit on amount of guarantee
PART VIII INVESTMENT AND LENDING
Interpretation
47. Interpretation
Security Interests in Credit Union Property
48. Security interests in credit union property
Classes of Loans
49. Classes of loans
50. Agricultural loan
51. Bridge loan
52. Commercial loan
53. Institutional loan
54. Personal loan
55. Residential mortgage loan
56. Syndicated loan
57. Loan to an unincorporated association
Lending Limits
58. Lending limits to a person or connected persons
59. Limits on loans of same class to a person
Eligible Investments
60. Eligible investments for class 1 credit unions
61. Eligible investments for class 2 credit unions
62. Prescribed conditions re improved real estate
63. Definition
64. Prescribed conditions re body corporate
Restriction on Single Investments
65. Restriction re single investments
66. Exception to restriction re single investments
Connected Persons
67. Connected persons
Investment in Subsidiaries
68. Investment in subsidiaries
69. Restriction on investment in subsidiaries
PART IX INTEREST RATE RISK MANAGEMENT
70. Interpretation
71. Policies and procedures
72. Interest rate risk that exceeds limits
73. Interest rate risk report
PART X RESTRICTED PARTY TRANSACTIONS
Interpretation
74. Application
75. Definition of “restricted party”
76. Definition of “transaction”
Permitted Transactions
77. Transactions of nominal value or not material
78. Issue of shares
79. Permitted transactions
Restricted Party Transaction Procedures
80. Restricted party transaction procedures
PART XI MEETINGS
First Meeting
81. First Meeting
82. Quorum
83. Business to be dealt with
Financial Statements
84. Financial statements
PART XII RETURNS, EXAMINATIONS AND RECORDS
85. Document retention
86. Maximum fee for by-laws
PART XIII LEAGUES
Application
87. Application
Capital Structure
88. Capital structure
Adequate Capital
89. Adequate capital
Business Powers
90. Business powers
91. Permitted activities
92. Group insurance
93. Trustee
Investment and Lending
94. Investment and lending
95. Exception to restriction re single investments
96. Connected persons
Subsidiaries
97. Subsidiaries
98. Restriction on investment in subsidiaries
Exemptions from the Act
99. Exemptions from the Act
PART XIV DEPOSIT INSURANCE CORPORATION OF ONTARIO
Definition
100. Definition
Investment of Funds
101. Investment of funds
102. Restriction on investments
Deposit Insurance Limit
103. Deposit insurance limit
Amalgamations
104. Amalgamations
Annual Premium
105. Annual premium
106. Payment of annual premium
107. Audited statement of deposits
PART XV CONTINUING AS OR CEASING TO BE AN ONTARIO CREDIT UNION
Continuing as an Ontario Credit Union
108. Articles of continuance
109. Conditions for issue of certificate of continuance
110. Limits on transition period
Transfer to Another Jurisdiction
111. Conditions for issue of certificate of continuance
Continuation under Another Ontario Act
112. Conditions for issue of certificate of continuance
PART XVI CONSUMER PROTECTION
Disclosure Re Interest Rates, etc.
113. Disclosure re interest rates, etc.
114. Disclosure upon renewal
115. Disclosure in advertising
Consumer Complaints by Members and Depositors
116. Consumer complaints by members and depositors
117. Inquiry by Superintendent
PART XVII ADMINISTRATIVE PENALTIES
118. Administrative penalties

PART I
INTERPRETATION

Definitions

1. (1) In this Regulation,
“agricultural loan” means an agricultural loan described in section 50; (“prêt agricole”)
“authorized types of insurance” means the types of insurance listed in subsection 35 (1); (“types d’assurance autorisés”)
“bridge loan” means a bridge loan described in section 51; (“prêt-relais”)
“Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires” means the publication with that title that is published in The Ontario Gazette by the Corporation, as the publication may be amended from time to time; (“Lignes directrices relatives à la suffisance du capital des caisses populaires et credit unions de l’Ontario”)
“class 1 credit union” means a credit union that is not a class 2 credit union; (“caisse de catégorie 1”)
“class 2 credit union” means a credit union that, according to section 2, is a class 2 credit union; (“caisse de catégorie 2”)
“commercial loan” means a commercial loan described in section 52; (“prêt commercial”)
“guarantee” includes the issuance of a letter of credit; (“garantie”)
“institutional loan” means an institutional loan described in section 53; (“prêt institutionnel”)
“insurer” means an insurer licensed under the Insurance Act; (“assureur”)
“participating share” means a share of a body corporate that carries the right to participate in the earnings of the body corporate to an unlimited degree and to participate in a distribution of the remaining property of the body corporate on dissolution; (“action participative”)
“personal loan” means a personal loan described in section 54; (“prêt personnel”)
“regulatory capital” means regulatory capital as determined under section 17; (“capital réglementaire”)
“residential mortgage loan” means a residential mortgage loan described in section 55; (“prêt hypothécaire résidentiel”)
“residential property” means an individual condominium residential unit or a building with one to four units where at least one half of the floor area of the building is utilized as one or more private residential dwellings; (“bien
résidentiel”) “risk weighted assets” means the amount of the risk weighted assets as determined under section 18; (“actif pondéré en fonction des risques”) “total assets” means total assets as determined under section 16. (“actif total”) O.Reg. 237/09, s.1 (1). (2) For the purposes of this Regulation, a lodgement of title is not a mortgage. O.Reg. 237/09, s.1 (2). (3) For the purposes of this Regulation, two or more persons are connected persons if they satisfy the conditions prescribed in section 67. O.Reg. 237/09, s.1 (3). Class 2 credit unions 2.(1) A credit union is a class 2 credit union if either of the following circumstances exist at any time after January 31, 2007: The total assets of the credit union as set out in the audited financial statements of the credit union that were placed before its members at the most recent annual meeting are greater than or equal to $50 million. The credit union makes one or more commercial loans. O.Reg. 237/09, s.2 (1). (2) A credit union becomes a class 2 credit union under subsection (1) on the first day on which either of the circumstances described in subsection (1) exists. O.Reg. 237/09, s.2 (2). (3) A credit union that changes the terms and conditions of a commercial loan made on or before January 31, 2007 or refinances such a loan in any other way shall be deemed, for the purposes of paragraph 2 of subsection (1), to have made a commercial loan on the date of the change or refinancing. O.Reg. 237/09, s.2 (3). (4) A credit union also becomes a class 2 credit union if, upon application by the credit union to the Corporation, the Corporation is satisfied that, (a) the credit union has established the policies required by section 189 of the Act with respect to investment and lending; (b) those policies are appropriate for the size and complexity of the credit union; (c) the credit union is in compliance with the Corporation’s by-laws, including the by-law prescribing standards of sound business and financial practices; and (d) the credit union is in compliance with the minimum capital requirements that would apply under this Regulation if the credit union were a class 2 credit union. O.Reg. 237/09, s.2 (4). (5) Once a credit union becomes a class 2 credit union, it remains a class 2 credit union in perpetuity. O.Reg. 237/09, s.2 (5). Widely distributed security 3.(1) A security is widely distributed, (a) if it is listed or posted for trading on a recognized stock exchange; or (b) if a prospectus relating to the issuance of the security is filed under the laws of a province or a jurisdiction outside Canada. O.Reg. 237/09, s.3 (1). (2) A debt obligation is widely distributed if no prospectus is required in respect of its distribution under the laws of a province or a jurisdiction outside Canada and, (a) at least 90 per cent of the maximum authorized principal of the debt obligation is held by one or more persons other than the credit union making the loan and its subsidiaries and, (i) the debt obligation is issued to at least 25 persons other than the credit union and its subsidiaries within six months after the day on which the first of the debt obligations is issued, or (ii) the debt obligations are issued on a continuous basis and there are, on average, at least 25 holders other than the credit union and its subsidiaries; or (b) when the debt obligation is issued, it meets at least three of the following criteria: Its initial term is one year or less. It is rated by a rating agency. 
It is distributed through a person authorized to trade in securities. It is distributed in accordance with an offering circular or memorandum or a similar document relating to the distribution of securities. O.Reg. 237/09, s.3 (2). PART II ESTABLISHING A CREDIT UNION Articles of incorporation 4.(1) The following information must be set out in the articles of incorporation of a credit union: Its name. The address of its head office and the name of the municipality or township in Ontario where its principal place of business is located. The minimum and maximum number of directors. The full name, date of birth, citizenship or landed immigrant status and residential address of each director. The classes and maximum number, if any, of shares other than membership shares that the credit union is authorized to issue. The rights, privileges, restrictions and conditions, if any, attaching to each class of shares. The board’s authority with respect to any class of shares that may be issued in series. O.Reg. 237/09, s.4 (1). (2) Articles filed when a credit union is first incorporated must also set out the full name, date of birth and residential address of each incorporator. O.Reg. 237/09, s.4 (2). (3) Articles approved by the Minister before March 1, 1995 shall be deemed to comply with subsections (1) and (2). O.Reg. 237/09, s.4 (3). Name 5.Credit Union Central of Canada and Central 1 Credit Union are prescribed persons for the purposes of section 20 of the Act. O.Reg. 237/09, s.5. PART III MEMBERSHIP Trusts for named beneficiaries 6.For the purposes of clause 39 (1) (d) of the Act, deposits made in accordance with the following provisions are prescribed: Subsection 13 (7) of the Bailiffs Act. Subsections 188 (6), (7) and 227 (1) of the Business Corporations Act. Section 39 of the Cemeteries Act (Revised). Clause 12 (1) (m) of the Charitable Institutions Act. Clause 30 (e) of the Collection Agencies Act. Subsection 81 (4) of the Condominium Act, 1998. Subsection 143 (5) of the Corporations Act. Subsections 52 (1), 53 (3) and 55 (2) of the Funeral, Burial and Cremation Services Act, 2002. Note: Paragraph 8 comes into force on the day that section 52 of theFuneral, Burial and Cremation Services Act, 2002comes into force. See: O.Reg. 237/09, s.120 (2). Subsection 34 (1) and paragraph 1 of subsection 46 (1) of the Funeral Directors and Establishments Act. Subsections 27 (3) and (4) of the Gaming Control Act, 1992. Subsection 191.0.1 (3) of the Highway Traffic Act. Paragraph 16 of subsection 31 (1) of the Homes for the Aged and Rest Homes Act. Clause 183 (2) (o) of the Long-Term Care Homes Act, 2007. Subsection 51 (1) of Ontario Regulation 188/08 (Mortgage Brokerages: Standards of Practice) and subsection 35 (1) of Ontario Regulation 189/08 (Mortgage Administrators: Standards of Practice), both made under the Mortgage Brokerages, Lenders and Administrators Act, 2006. Clause 24 (g) of the Motor Vehicle Dealers Act. Section 25 of the Motor Vehicle Dealers Act, 2002. Clause 6 (4) (b) of Ontario Regulation 415/06 (General) made under the Private Career Colleges Act, 2005. Subsection 27 (1) of the Real Estate and Business Brokers Act, 2002. Clause 35 (n) of the Registered Insurance Brokers Act. Rule 3.3.2 of the Mutual Fund Dealers Association of Canada Rules as governed by section 21.1 of the Securities Act. Rule 1200.3 of the Investment Industry Regulatory Organization of Canada Dealer Member Rules as governed by section 21.1 of the Securities Act. 
Paragraph 11 of subsection 43 (1) of the Travel Industry Act, 2002. Any provision of an Act enacted by the Government of Canada that requires or governs deposits into trust accounts. O.Reg. 237/09, s.6. Payments re deceased members 7.(1) For the purposes of paragraph 1 of subsection 42 (2) of the Act, the prescribed amount is $50,000. O.Reg. 237/09, s.7 (1). (2) For the purposes of paragraph 2 of subsection 42 (2) of the Act, the prescribed amount is $50,000. O.Reg. 237/09, s.7 (2). PART IV CAPITAL STRUCTURE Number of shares 8.For the purposes of subsection 52 (2) of the Act and despite any limit set out in the by-laws of a credit union, the prescribed limit on the number of membership shares that may be issued to a member of the credit union is the sum of, (a) the minimum number of membership shares required under the by-laws of the credit union; and (b) the number of membership shares that would be issued by the credit union for an additional consideration of $1,000, as determined at the time the membership shares are issued. O.Reg. 237/09, s.8. Disclosure re insurance of membership shares 9.Prior to issuing any membership share, a credit union shall disclose to the member that membership shares are not insured by the Corporation. O.Reg. 237/09, s.9. Membership share certificate 10.For the purposes of subsection 52 (6) of the Act, a membership share certificate must include the following information and statements on its face: The name of the credit union as it appears in the articles. The name of each person to whom the certificate is issued. A statement indicating that the credit union is governed by the Credit Unions and Caisses Populaires Act, 1994. A statement indicating that the certificate represents membership shares in the credit union and indicating the number of shares. A statement indicating that there may be a lien on the shares in favour of the credit union for indebtedness to it. A statement indicating that the shares are not guaranteed or insured by the Corporation or another public agency. A statement indicating that the certificate is not transferable. O.Reg. 237/09, s.10. Offering statement 11.(1) For the purposes of subsection 77 (2) of the Act, the following information is prescribed as information that an offering statement must contain: The name of the credit union. The credit union’s date of incorporation as set out in the articles or, in the case of an amalgamated credit union, its date of amalgamation as set out in its certificate of amalgamation. The address of the credit union’s head office. The name of each of the credit union’s directors and officers, the municipality in which each resides, the principal occupation of each of them and the title of each officer. A description of the business carried on by the credit union and its subsidiaries, if any, and the business each of them intends to carry on. The details of the capital structure of the credit union. A description of the material characteristics of the securities being offered. The details of the use to which the proceeds from the sale of the securities will be put. If the offering is being made in connection with a plan of reorganization, a purchase and sale or an amalgamation, a description of the general effect of these proposed changes and when they will be made. The details of the method of selling the securities and of any commission payable or discount allowable on the sale. 
If the securities are being sold through an underwriter, include the underwriter’s name and the details of the underwriter’s obligation to take up and pay for the securities. If the securities are being sold by another method, include separate descriptions of the method of distribution of securities underwritten, securities under option and securities being sold on a best efforts basis and also include the amount of any minimum subscription. A description of the market on which the securities may be sold. If there is no market, a description of how the securities will be redeemed. The name of each transfer agent and registrar and the location of each register of transfer. The details of any securities or other obligations ranking ahead of the securities being offered. A description of any material legal proceeding to which the credit union or its subsidiary is a party. A description of any material interest of a director, officer or employee of the credit union or its subsidiary in the operations of the credit union generally or in the securities being offered, including the following: i. Particulars of any options to purchase shares of the credit union that are held by a director or officer and the name of any director or officer who holds such options. ii. Particulars of any options to purchase shares of the credit union that are held by other employees, without naming the employees. A description of every material contract entered into within two years before the date of the offering statement and a description of any contract entered into at any time, if the contract has a bearing on the securities issue. A description of the risk factors of the credit union and the risks associated with the securities being offered. A description, to the extent reasonably practicable, of any substantial variations in the operating results of the credit union during the three years before the date of the offering statement and the financial statements that show the variations. The amount of any dividends, patronage returns, allocations or other distributions paid, declared or accumulated but unpaid by the credit union during the five years before the date of the offering statement. The name and address of the credit union’s auditor. A description of any other material facts. If there are no other material facts, the offering statement must contain the following statement: “There are no other material facts relating to this issue of securities”. Such other information as is required by the Offering Statement Guideline for Credit Unions and Caisses Populaires published in The Ontario Gazette by the Superintendent, as it may be amended from time to time. O.Reg. 237/09, s.11 (1). (2) The offering statement must include the following documents: The audited financial statements of the credit union that were placed before the members at the most recent annual meeting and signed by the chair of the board and the chief executive officer of the credit union. Interim unaudited financial statements, reviewed by a person licensed under the Public Accounting Act, 2004, for the period ending not more than 90 days before the date on the offering statement, if the audited financial statements required under paragraph 1 are in respect of a period ending more than 90 days before the date on the offering statement. If a report, opinion or statement prepared by a person is used in the offering statement, a document signed by the person indicating that the person consents to the use of the report, opinion or statement. 
A copy of the board resolution approving the offering, certified by the corporate secretary to be a true copy. O.Reg. 237/09, s.11 (2). (3) If the credit union was incorporated within 90 days before the date on the offering statement, the offering statement must include pro forma financial statements, including projected balance sheets and income statements of the credit union for at least the first three fiscal years of the credit union instead of the financial statements required under paragraphs 1 and 2 of subsection (2). O.Reg. 237/09, s.11 (3). (4) If the credit union was amalgamated within 90 days before the date on the offering statement, the offering statement must include, instead of the financial statements required under paragraphs 1 and 2 of subsection (2), (a) the audited financial statements of each predecessor credit union that were placed before its members at the most recent annual meeting of the predecessor credit union; (b) a statement of the assets and liabilities of the amalgamated credit union as of the date of the certificate of amalgamation; and (c) pro forma financial statements, including projected balance sheets and income statements of the amalgamated credit union for at least the first three fiscal years after the amalgamation. O.Reg. 237/09, s.11 (4). (5) The offering statement must include the following statements in conspicuous, bold type on the front cover, in the same language as is used in the statement: No official of the Government of the Province of Ontario has considered the merits of the matters addressed in the offering statement. The securities being offered are not guaranteed by the Deposit Insurance Corporation of Ontario or any similar public agency. O.Reg. 237/09, s.11 (5). (6) If there is no market on which the securities may be sold, the offering statement must include a statement to that effect in bold type on the front cover. O.Reg. 237/09, s.11 (6). Notice of offering 12.(1) A credit union may give any person or entity a notice respecting an offering after the offering statement is filed and before the Superintendent issues a receipt. O.Reg. 237/09, s.12 (1). (2) The notice must contain the following information: A detailed description of the security that the credit union proposes to issue. The price of the security, if the price has been determined. The name and address of a person from whom the securities may be purchased. O.Reg. 237/09, s.12 (2). (3) The notice must include the following statements in conspicuous, bold type on the front cover, in the same language as is used in the offering statement: This is not an offer to sell the securities described in this document. The securities described in this document cannot be sold until after the Superintendent of Financial Services issues a receipt for an offering statement. You are advised to read the offering statement approved by the Superintendent, because the terms and conditions may be changed significantly. The Superintendent may refuse to issue a receipt, in which case the securities described in this document will not be offered for sale. O.Reg. 237/09, s.12 (3). Statement of material change 13.The following information must be set out in a statement of material change respecting an offering statement by a credit union: The name of the credit union. The date on which the receipt for the offering statement was issued. The date on which the material change occurred. A description of the material change. O.Reg. 237/09, s.13. 
Transfer of securities issued after receipt for an offering statement 14.For the purposes of subsection 74.1 (1) of the Act, the Corporation and a league are prescribed as persons to whom a security issued under circumstances described in clause 75 (1) (a) of the Act may be transferred. O.Reg. 237/09, s.14. PART V CAPITAL AND LIQUIDITY Adequate capital 15.(1) This section sets out the criteria for determining if a credit union is maintaining adequate capital as required by section 84 of the Act. O.Reg. 237/09, s.15 (1). (2) A class 1 credit union has adequate capital if its regulatory capital is at least 5 per cent of its total assets. O.Reg. 237/09, s.15 (2). (3) A class 2 credit union has adequate capital for a financial year if the following conditions are satisfied: Its regulatory capital expressed as a percentage of its total assets is at least 4 per cent for a financial year ending on or after January 1, 2009. Its regulatory capital, expressed as a percentage of its risk weighted assets, is at least 8 per cent. O.Reg. 237/09, s.15 (3). Total assets 16.(1) The total assets of a credit union is the amount calculated using the formula, A – B in which, “A” is the amount of all the credit union’s assets, and “B” is the sum of the following amounts as they would appear in the financial statements of the credit union prepared as of the date of the calculation: Goodwill. Identified intangible assets other than goodwill that have been purchased directly or acquired in conjunction with or arising from the acquisition of a business, including, but not limited to, trademarks, core deposit intangibles, mortgage servicing rights and purchased credit card relationships. Investments in subsidiaries that are financial institutions. Any other amounts set out in the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires. O.Reg. 237/09, s.16 (1). (2) For the purposes of subsection (1), the following rules apply: The amount of an asset is its value as it would appear in the financial statements of the credit union prepared as of the date of the calculation. Provisions or allowances for losses of a general nature must be deducted from the most closely applicable class of assets. An investment in a subsidiary must be calculated using the equity method of accounting described in the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires. Cash deposits in a financial institution must be offset against overdrafts with the same financial institution. O.Reg. 237/09, s.16 (2). Regulatory capital 17.(1) The regulatory capital of a credit union is the amount calculated using the formula, C + D in which, “C” is the amount of the credit union’s Tier 1 Capital as determined under subsection (2), and “D” is the amount of the credit union’s Tier 2 Capital as determined under subsection (3). O.Reg. 237/09, s.17 (1). (2) The Tier 1 capital of a credit union is the amount calculated using the formula, E – B in which, “E” is the sum of the following amounts as they would appear in the financial statements of the credit union prepared as of the date of the calculation: Membership shares. Retained earnings. Contributed surplus. Patronage shares, other than patronage shares that are redeemable within the following 12-month period. Qualifying shares described in subsection (4), other than qualifying shares that are redeemable within the following 12-month period. 
Accumulated net after tax unrealized loss on available-for-sale equity securities reported in Other Comprehensive Income. “B” has the same meaning as in subsection 16 (1). O.Reg. 237/09, s.17 (2). (3) The Tier 2 Capital of a credit union is the lesser of the Tier 1 Capital amount determined under subsection (2) and the sum of the following amounts as they would appear in the financial statements of the credit union prepared as of the date of the calculation: Patronage shares that are redeemable within the following 12-month period. Qualifying shares described in subsection (4) that are redeemable within the following 12-month period. Subordinated indebtedness that, i. cannot be redeemed or purchased for cancellation in the first five years after it is issued, and ii. is not convertible into or exchangeable for a security other than a qualifying share. The amount of any general loan loss allowance, not including any specific loan loss allowance, up to a maximum of 0.75 per cent of total assets for a class 1 credit union and 1.25 per cent of risk weighted assets for a class 2 credit union. Accumulated net after tax unrealized gain on available-for-sale equity securities reported in Other Comprehensive Income. Any other amount set out in the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires. O.Reg. 237/09, s.17 (3). (4) For the purposes of this section, qualifying shares are fully paid shares other than membership shares and patronage shares issued by the credit union, but only if the following conditions are met: Any rights or special rights as to the payment of dividends to the holders of the shares are non-cumulative. Any rights or special rights, including the right to redeem the shares or call on the credit union to purchase or otherwise acquire the shares, are restricted so that the credit union is not required to redeem, purchase or otherwise acquire the shares of that class at a rate of more than 10 per cent of the outstanding shares during any one-year period. Shares issued after this paragraph comes into force cannot be redeemed or purchased for cancellation in the first five years after their issue, except upon the death or expulsion from the credit union of the holder. The shares do not give their holders the right to convert the shares into, or exchange the shares for, shares of any class of shares other than a class of shares described in paragraph 1, 2 or 3 that are issued to raise capital. O.Reg. 237/09, s.17 (4). Risk weighted assets of a credit union 18.(1) The amount of a credit union’s risk weighted assets is the amount calculated using the formula, A + B + C in which, “A” is the sum of all amounts each of which is calculated by multiplying the value of an asset of the credit union by the percentage described in subsection (2), (3), (4), (5), (6), (7) or (8), as the case may be, that applies to that asset, “B” is the amount of the credit union’s applicable operational risk as determined under subsection (9), and “C” is the amount of the credit union’s applicable interest rate risk as determined under subsection (11). O.Reg. 237/09, s.18 (1). (2) The percentage is zero per cent for the following types of assets: Cash. Claims against, or guaranteed by, the Government of Canada or an agency of the Government. Claims against, or guaranteed by, the government of a province or territory of Canada. 
Claims fully secured by collateral consisting of cash or securities issued by the Government of Canada or the government of a province or territory of Canada. Residential mortgage loans described in paragraph 2 of section 55. The portion of a residential mortgage loan described in paragraph 3 of section 55, to the extent that the benefits payable under the policy insuring the loan have a backstop guarantee provided by the Government of Canada. Mortgage-backed securities that are guaranteed by the Canada Mortgage and Housing Corporation and secured against residential mortgages. Investments in bodies corporate that are accounted for in the credit union’s financial statements using the equity method. Any deductions from regulatory capital, including goodwill. Deposits in a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec. Interest rate contracts with a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec, a financial institution or another equivalent entity approved in writing by the Corporation. O.Reg. 237/09, s.18 (2). (3) The percentage is 20 per cent for the following types of assets: Cheques and other items in transit. Claims against or guaranteed by a municipality in Canada. Claims against or guaranteed by a school board, college, university, hospital or social service provider in Canada that receives, as its primary source of funding, regular government financial support. Deposits in a bank or authorized foreign bank within the meaning of section 2 of the Bank Act(Canada), a corporation registered under the Loan and Trust Corporations Act or a corporation to which the Trust and Loan Companies Act (Canada) or similar legislation of another province or territory of Canada applies. Commercial paper, bankers’ acceptances, bankers’ demand notes and similar instruments guaranteed by a bank or authorized foreign bank within the meaning of section 2 of the Bank Act(Canada), a corporation registered under the Loan and Trust Corporations Act or a corporation to which the Trust and Loan Companies Act (Canada) or similar legislation of another province or territory of Canada applies. The value attributed to any off balance sheet exposure relating to assets of the credit union listed in paragraphs 1 to 5, as calculated in accordance with the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires. O.Reg. 237/09, s.18 (3). (4) The percentage is 35 per cent for the following types of assets: Residential mortgage loans described in paragraph 1 of section 55 that are not 90 days or more past due. Mortgage-backed securities that are fully and specifically secured by residential mortgage loans, other than mortgage-backed securities described in paragraph 7 of subsection (2). The value attributed to any off balance sheet exposure relating to assets of the credit union listed in paragraphs 1 and 2, as calculated in accordance with the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires. O.Reg. 237/09, s.18 (4). (5) The percentage is 75 per cent for the following types of assets: Personal loans. Agricultural loans. Commercial loans made to a person where the sum of all commercial loans made to that person and to any connected persons does not exceed the lesser of 0.035 per cent of the credit union’s total assets and $1.25 million. 
4. The value attributed to any off balance sheet exposure relating to assets of the credit union listed in paragraphs 1 to 3, as calculated in accordance with the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires. O.Reg. 237/09, s.18 (5).
(6) The percentage is 100 per cent for the following types of assets:
1. Commercial loans, other than commercial loans described in paragraph 3 of subsection (5).
2. All assets not described in subsection (2), (3), (4) or (5).
3. Residential mortgage loans described in paragraph 1 of section 55 that are 90 days or more past due.
4. The portion of a residential mortgage loan described in paragraph 3 of section 55 that does not have a backstop guarantee provided by the Government of Canada, if the insurer does not have a credit rating described in the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires.
5. The value attributed to any off balance sheet exposure relating to assets of the credit union listed in paragraphs 1, 2, 3 and 4, as calculated in accordance with the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires. O.Reg. 237/09, s.18 (6).
(7) If a person to whom a commercial loan described in paragraph 1 of subsection (6) is made has a credit rating described in the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires, the percentage determined in accordance with that Guideline applies, instead of the percentage specified in subsection (6), in respect of the commercial loan. O.Reg. 237/09, s.18 (7).
(8) If an insurer who insures a residential mortgage loan described in paragraph 3 of section 55 has a credit rating described in the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires, the percentage determined in accordance with that Guideline applies, instead of the percentage specified in subsection (6), in respect of the portion of the loan that does not have a backstop guarantee by the Government of Canada. O.Reg. 237/09, s.18 (8).
(9) Unless another amount is approved by the Corporation, a credit union’s applicable operational risk is the amount calculated using the formula,
D/0.08
in which,
“D” is the amount of the credit union’s capital charge for operational risk as determined under subsection (10). O.Reg. 237/09, s.18 (9).
(10) A credit union’s capital charge for operational risk is the amount calculated using the formula,
[(E + F + G) × 0.15]/H
in which,
“E” is the greater of,
(a) the amount of the credit union’s interest income less its interest expenses for its most recently ended financial year plus all of its other non-interest income for its most recently ended financial year, and
(b) zero,
“F” is the amount that would be determined under the definition of “E” if that definition applied to the credit union’s second most recently ended financial year,
“G” is the amount that would be determined under the definition of “E” if that definition applied to the credit union’s third most recently ended financial year, and
“H” is the greater of,
(a) the number of years in which the amounts determined under the definitions of “E”, “F” and “G” exceed zero, and
(b) one. O.Reg. 237/09, s.18 (10).
(11) Unless another amount is approved by the Corporation, a credit union’s applicable interest rate risk is the amount calculated using the formula,
J/0.08
in which,
“J” is the amount of the credit union’s capital charge for interest rate risk as determined under subsection (12). O.Reg. 237/09, s.18 (11).
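To make the arithmetic in section 18 easier to follow, the sketch below works through the A + B + C formula in subsection 18 (1) in Python. It is an illustration only: the asset category names, function names and example figures are invented for the sketch, the risk weights are a small sample of those set out in subsections 18 (2) to (6), and the operational and interest rate risk add-ons follow subsections 18 (9) to (12), with J determined as K × 0.15 under subsection 18 (12).

```python
# Illustrative sketch of the risk weighted assets formula A + B + C in s. 18 (1).
# Category names, function names and figures are invented for this example.

RISK_WEIGHTS = {
    # A small sample of the percentages in subsections 18 (2) to (6); actual classification
    # also depends on subsections 18 (7) and (8) and the Capital Adequacy Guideline.
    "cash": 0.00,                           # s. 18 (2)
    "cheques_in_transit": 0.20,             # s. 18 (3)
    "residential_mortgage_current": 0.35,   # s. 18 (4)
    "personal_loan": 0.75,                  # s. 18 (5)
    "commercial_loan": 1.00,                # s. 18 (6)
}

def operational_risk(gross_income_last_3_years):
    """Capital charge per s. 18 (10): [(E + F + G) x 0.15] / H, converted to a
    risk weighted amount per s. 18 (9) as D / 0.08."""
    e, f, g = [max(x, 0.0) for x in gross_income_last_3_years]
    h = max(sum(1 for x in (e, f, g) if x > 0), 1)
    capital_charge = (e + f + g) * 0.15 / h
    return capital_charge / 0.08

def interest_rate_risk(exposure_k):
    """Capital charge per s. 18 (12): K x 0.15, converted per s. 18 (11) as J / 0.08."""
    return (exposure_k * 0.15) / 0.08

def risk_weighted_assets(assets, gross_income_last_3_years, exposure_k):
    """A + B + C per s. 18 (1). `assets` maps an illustrative category to its value."""
    a = sum(value * RISK_WEIGHTS[category] for category, value in assets.items())
    b = operational_risk(gross_income_last_3_years)
    c = interest_rate_risk(exposure_k)
    return a + b + c

# Example: a small book of assets, three years of gross income and an interest rate exposure.
rwa = risk_weighted_assets(
    {"cash": 1_000_000, "residential_mortgage_current": 20_000_000,
     "personal_loan": 5_000_000, "commercial_loan": 4_000_000},
    gross_income_last_3_years=[900_000, 850_000, 780_000],
    exposure_k=200_000,
)
print(f"Risk weighted assets: {rwa:,.0f}")
```

In practice the percentage applied to each asset, and any substitution under subsections 18 (7) and (8), is governed by the Capital Adequacy Guideline rather than a fixed lookup table.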
(12) A credit union’s capital charge for interest rate risk is the amount calculated using the formula, K × 0.15 in which, “K” is the amount of the credit union’s exposure, determined in accordance with the techniques referred to in paragraph 2 of subsection 71 (1), to interest rate risk. O.Reg. 237/09, s.18 (12). Forming of groups relating to capital requirements 19.(1) The following are requirements for an agreement under subsection 84 (3) of the Act for credit unions and a league to form a group for the purposes of assisting the credit unions in satisfying the requirements of section 84 of the Act relating to capital: The agreement must provide that if an order is issued under clause 86 (1) (a) of the Act against a credit union that is in the group, the league will, within 45 days after the order is issued, invest sufficient monies in the credit union, by purchasing preferred shares or subordinated debt of the credit union, so that the credit union satisfies the requirements of section 84 of the Act relating to capital. The agreement must provide that the credit unions that are in the group agree to jointly and severally indemnify the league for the amount invested under paragraph 1. The agreement must provide that a credit union can withdraw from the group only on 18 months notice to the league and the other credit unions in the group and only if all the credit unions in the group have satisfied the requirements of section 84 of the Act relating to capital throughout the 12-month period preceding the withdrawal. O.Reg. 237/09, s.19 (1). (2) The following are prescribed grounds for the Corporation to revoke its approval under subsection 84 (4) of the Act: The league that is in the group fails to comply with the obligation set out in paragraph 1 of subsection (1). The league that is in the group fails to comply with an order under subsection 85 (4), 86 (1), 187 (1), 189 (4), 191 (2), 197.0.1 (1), 200 (1), (2), (3), (4) or (5), 201.1 (2), 202.1 (1), 204 (7), 231 (2), 234 (1), 235 (1) or 240 (1) of the Act. The league that is in the group is subject to an order under subsection 279 (1) or 294 (1) of the Act. O.Reg. 237/09, s.19 (2). Adequate liquidity for class 1 credit unions 20.(1) This section sets out the requirements for adequate liquidity for class 1 credit unions under section 84 of the Act. O.Reg. 237/09, s.20 (1). (2) A class 1 credit union shall maintain eligible assets for adequate liquidity whose value is at least 7 per cent of the total deposits and borrowings of the credit union, except as provided under subsection (3). O.Reg. 237/09, s.20 (2). (3) The percentage specified in subsection (2) shall be 5 per cent instead of 7 per cent if the credit union has a line of credit that satisfies the following: The line of credit is with a financial institution, Credit Union Central of Canada, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec. The line of credit is for an amount that is equal to or more than 2 per cent of the credit union’s deposits. The line of credit is revocable only after at least 30 days notice to the credit union. The terms of the line of credit are set out in writing. O.Reg. 237/09, s.20 (3). (4) For the purposes of subsection (2), the following are eligible assets for adequate liquidity: Cash. A deposit that matures in 100 days or less that is with, i. a bank or authorized foreign bank within the meaning of section 2 of the Bank Act (Canada), ii. 
a corporation registered under the Loan and Trust Corporations Act, iii. a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec, or iv. Credit Union Central of Canada. A treasury bill or other debt obligation issued by the Government of Canada or a province that matures in 100 days or less. A banker’s acceptance or discounted note issued by a bank or authorized foreign bank within the meaning of section 2 of the Bank Act (Canada), a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada, if, i. the acceptance or note matures in one year or less, and ii. the acceptance or note has a rating of at least A (low), as classified by the Dominion Bond Rating Service or an equivalent rating as set out in the Capital Adequacy Guideline for Ontario’s Credit Unions and Caisses Populaires. A debt obligation of a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada that matures in 100 days or less. A debt obligation of the Corporation. O.Reg. 237/09, s.20 (4). (5) If an employer has deducted an amount from the pay of a member to be remitted to the credit union and the credit union has credited the amount to the member’s account but has not yet received the amount from the employer, an amount equal to the amount that is in the process of being remitted to the credit union shall be deemed to be an eligible asset for adequate liquidity for the purposes of subsection (2). O.Reg. 237/09, s.20 (5). Adequate liquidity for class 2 credit unions 21.(1) This section sets out the requirements for adequate liquidity for class 2 credit unions under section 84 of the Act. O.Reg. 237/09, s.21 (1). (2) A class 2 credit union shall establish and maintain prudent levels and forms of liquidity that are sufficient to meet its cash flow needs, including depositor withdrawals and all other obligations as they come due. O.Reg. 237/09, s.21 (2). (3) An asset shall not be used to satisfy the requirements for adequate liquidity for a class 2 credit union unless the asset is authorized for that purpose under the capital and liquidity policies of the credit union established under section 85 of the Act. O.Reg. 237/09, s.21 (3). Encumbered asset 22.An encumbered asset shall not be used to satisfy the requirements for adequate liquidity unless the asset is encumbered only by a security interest in favour of the Corporation. O.Reg. 237/09, s.22. Failure to meet requirements for adequate liquidity 23.(1) The following apply if, for a period of five consecutive days (excluding Saturdays, Sundays and holidays), a credit union does not meet the requirements for adequate liquidity under section 84 of the Act: The credit union shall not make a loan or an investment until the credit union again meets the requirements for adequate liquidity. The credit union shall immediately submit to the Superintendent and to the Corporation a report addressing the following matters: i. the circumstances that led to the failure to meet the requirements for adequate liquidity, ii. the steps the credit union has taken to meet the requirements for adequate liquidity, and iii. when the credit union will again meet the requirements for adequate liquidity. O.Reg. 237/09, s.23 (1). 
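As an illustration of the class 1 liquidity test in section 20 above, the short sketch below checks the 7 per cent requirement in subsection 20 (2) and the reduced 5 per cent rate in subsection 20 (3). It tests only the 2 per cent size condition for the line of credit; the remaining conditions in subsection 20 (3) (who provides the line, the 30-day revocation notice and the written terms) are assumed to be satisfied, and all names and figures are illustrative.

```python
# Illustrative sketch of the class 1 adequate liquidity test in section 20.

def required_liquidity(total_deposits_and_borrowings, deposits,
                       qualifying_line_of_credit=0.0):
    """Minimum value of eligible assets under s. 20 (2) and (3): 7% of total deposits
    and borrowings, or 5% where a qualifying line of credit of at least 2% of deposits
    is in place (the other s. 20 (3) conditions are assumed to hold)."""
    rate = 0.05 if qualifying_line_of_credit >= 0.02 * deposits else 0.07
    return rate * total_deposits_and_borrowings

def meets_liquidity_requirement(eligible_assets, total_deposits_and_borrowings,
                                deposits, qualifying_line_of_credit=0.0):
    return eligible_assets >= required_liquidity(
        total_deposits_and_borrowings, deposits, qualifying_line_of_credit)

# Example: $40M of deposits and borrowings, $38M of deposits, a $1M qualifying line of credit.
print(required_liquidity(40_000_000, 38_000_000, 1_000_000))                     # 2,000,000
print(meets_liquidity_requirement(2_500_000, 40_000_000, 38_000_000, 1_000_000)) # True
```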
(2) For the purposes of paragraph 1 of subsection (1), changing the terms and conditions of a loan or refinancing a loan in any other way shall be deemed to be making a loan. O.Reg. 237/09, s.23 (2). Provision for doubtful loans and required reserves 24.(1) For the purposes of section 90 of the Act, the prescribed monthly provision for doubtful loans is the provision required by the Corporation in its by-laws. O.Reg. 237/09, s.24 (1). (2) For the purposes of section 90 of the Act, the prescribed reserves are those required by By-law No. 6 of the Corporation. O.Reg. 237/09, s.24 (2). PART VI GOVERNING THE CREDIT UNION Mandatory by-laws 25.The following are prescribed for the purposes of subsection 105 (2) of the Act as matters required to be governed by the by-laws of every credit union, to the extent the matters are not provided for by the Act or the regulations or set out in the articles of the credit union: Admission to membership in the credit union and any fees for admission. Withdrawal, suspension or expulsion from membership in the credit union. The allotment of shares, including the maximum number that may be allotted to a member, the payment for shares, the redemption or transfer of shares and the recording of information about these matters. The procedure for deciding how to distribute the profits of the credit union. If the credit union is a member of a league and assesses its own members to pay for the cost of membership in the league, the procedure for assessing credit union members’ annual assessment to be paid to the league. The language or languages in which the credit union will carry on business. Mandatory procedures governing the operation of the credit union. The types of loans that the credit union is authorized to make. The time, place and notice to be given for a members’ meeting, the record date for determining who is entitled to vote at such a meeting, and the quorum for such a meeting. The time, place and notice to be given for a board meeting. The time for, and manner of, electing directors and committee members. The term of office of directors and of committee members, and the procedure for setting their remuneration. The appointment and removal of officers and employees of the credit union, any security that they are required to give the credit union and the procedures for establishing their remuneration. O.Reg. 237/09, s.25. Frequency of board meetings 26.The board of a credit union shall meet at least quarterly during each financial year of the credit union. O.Reg. 237/09, s.26. Duties of audit committee 27.(1) The following are prescribed for the purposes of section 126 of the Act as duties of the audit committee of a credit union: Review and make recommendations to the board about the terms of the engagement letter and the remuneration of the auditor. Review with the auditor the scope and plan of an audit. Discuss with the auditor the audit findings, any restrictions on the scope of the auditor’s work and any problems that the auditor experienced in performing the audit. Review and make recommendations to the board about any management letters, recommendations and reports by the auditor about the business or financial statements of the credit union and any response to them by management of the credit union. Report to the board on any conflict between the auditor and management that the committee is unable to resolve within a reasonable time. 
Review the annual audited financial statements and make such recommendations to the board as the committee considers appropriate. Review the audited financial statements of each subsidiary of the credit union. Review the effectiveness of the credit union’s internal audit practices and make recommendations to the board to address any deficiencies. Review the organization and assess the degree of independence of the credit union’s internal auditors, if any, including their mandate, work plans and any problems that they experience or issues they raise relating to the performance of audits. Review findings and recommendations of the internal auditors concerning the accounting practices and internal control practices and review the responses by the management of the credit union to any significant or material deficiencies. Report to the board any significant changes in the accounting principles and practices followed by the credit union. Recommend to the board arrangements to safeguard the credit union’s assets, to ensure the timeliness, accuracy and reliability of accounting data, to maintain adherence to the lending and investment policies and procedures and to provide for other matters concerning the financial policies of the credit union. Review any report about the affairs of the credit union made by the Superintendent or the Corporation, monitor the implementation of any significant recommendations and report to the board on the progress of the implementation. Review the credit union’s policies and procedures governing the way in which it meets the requirements under the Act and any other applicable legislation. Review material legal proceedings to which the credit union is a party. Assess whether the staff of the credit union is adequate to fulfil the credit union’s accounting and financial responsibilities. Monitor the adherence of the credit union’s directors, officers and employees to the credit union’s standards of business conduct and ethical behaviour. Review the credit union’s disaster recovery and business continuity plans. Review, at least annually, the effectiveness of the committee in carrying out its duties. O.Reg. 237/09, s.27 (1). (2) The report of the audit committee required under subsection 125 (9) of the Act must contain the following information for the year to which the report relates: The number of meetings held by the committee during the year. A summary of the significant activities undertaken by the committee during the year and a description of the actual and expected results. Confirmation that the committee is conducting its affairs in accordance with the Act and the regulations. Information on any failure of the credit union to implement or complete the implementation of any significant recommendation previously made by the audit committee. Details of any other matter that is required to be disclosed pursuant to the Act or the regulations. O.Reg. 237/09, s.27 (2). (3) The audit committee may, in its annual report, report on such other matters as the committee considers appropriate. O.Reg. 237/09, s.27 (3). 
Remuneration reported in financial statements 28.(1) For the purposes of subsection 140 (5) of the Act, the prescribed information about the remuneration paid during a year to the officers and employees of a credit union that must be disclosed in the credit union’s annual audited financial statements is the following information with respect to each officer and employee of the credit union whose total remuneration for the year was over $150,000: The name of the officer or employee. The title of the officer or position of the employee. The total amount of salary received. The total amount of bonuses received. The monetary value of benefits received. O.Reg. 237/09, s.28 (1). (2) Despite subsection (1), if there are more than five officers and employees of a credit union whose total remuneration for the year was over $150,000, subsection (1) only applies in respect of the five officers and employees with the highest total remuneration for the year. O.Reg. 237/09, s.28 (2). (3) In this section, “total remuneration” means, in respect of an officer or employee for a year, the total of the amounts described in paragraphs 3, 4 and 5 of subsection (1) for the year. O.Reg. 237/09, s.28 (3). Bond for persons handling money 29.(1) For the purposes of subsection 151 (2) of the Act, from the day this section comes into force, the minimum amount of the bond is the lesser of $1 million and the amount of the credit union’s total assets as shown on the audited financial statements of the credit union placed before the members at the most recent annual meeting. O.Reg. 237/09, s.29 (1). (2) After December 31, 2010, the minimum amount of the bond is the lesser of $5 million and the amount of the credit union’s total assets as shown on the audited financial statements of the credit union placed before the members at the most recent annual meeting. O.Reg. 237/09, s.29 (2). Bond 30.For the purposes of subsection 151 (2) of the Act, after December 31, 2010, the bond shall satisfy all of the following conditions: The bond shall be issued by an insurer licensed under the Insurance Act to write surety and fidelity insurance to indemnify the credit union for any loss in respect of assets owned or held by the credit union arising out of a dishonest, fraudulent or criminal act of a director, officer or employee of the credit union. The bond shall provide that the bond shall not be cancelled or terminated by the insurer or the insured until at least 30 days after the receipt by the Superintendent and the Corporation of a written notice from the insurer or the insured, as the case may be, of its intention to cancel or terminate the bond. O.Reg. 237/09, s.30. PART VII RESTRICTIONS ON BUSINESS POWERS Ancillary Businesses Ancillary businesses 31.For the purposes of subsection 174 (1) of the Act, a credit union may engage in the following trades or businesses: Operating a post office. Operating a motor vehicle licence bureau. Acting as an agent to receive payments for utility bills, realty tax, personal income tax and for similar transactions. Providing facsimile transmission facilities. Promoting merchandise and services to its members or the holder of any payment, credit or charge card issued by the credit union, its subsidiaries or affiliates. Engaging in the sale of, i. tickets, including lottery tickets, on a non-profit, public service basis in connection with special, temporary and infrequent non-commercial celebrations or projects that are of local, municipal, provincial or national interest, ii. transit fares, and iii. 
tickets in respect of a lottery sponsored by the federal government or a provincial or municipal government or an agency of any such government. O.Reg. 237/09, s.31. Financial Services Prohibition re financial services 32.For the purposes of subsection 174 (3) of the Act, a credit union shall not directly provide the following financial services: Services provided by a factoring corporation described in subsection 68 (2). Services provided by an investment counselling and portfolio management corporation described in subsection 68 (5). Services provided by a mutual fund corporation described in subsection 68 (6). Services provided by a mutual fund distribution corporation described in subsection 68 (7). Services provided by a securities dealer described in subsection 68 (10). O.Reg. 237/09, s.32. Financial lease agreements and conditional sales agreements 33.(1) For the purposes of subsection 174 (3) of the Act, a credit union or subsidiary must not enter into a financial lease agreement or a conditional sales agreement unless the agreement meets the following requirements: The agreement concerns personal property, i. selected by the lessee or purchaser and acquired by the credit union or subsidiary at the request of the lessee or purchaser, or ii. previously acquired by the credit union or subsidiary under another financial lease agreement or conditional sales agreement. The primary purpose of the agreement is to extend credit to the lessee or purchaser. The agreement is for a fixed term. O.Reg. 237/09, s.33 (1). (2) A credit union or subsidiary must not direct a customer or prospective customer to particular dealers for the sale of personal property under a conditional sales agreement. O.Reg. 237/09, s.33 (2). (3) A financial lease agreement or conditional sales agreement must yield, (a) a reasonable rate of return; and (b) a return that at least equals the investment by the subsidiary in the property that is the subject of the agreement, taking into account in the case of a financial lease agreement, (i) rental charges payable or paid by the lessee, (ii) tax benefits to the credit union or subsidiary, and (iii) the guaranteed purchase or resale price, if any, for the property at the expiry of the agreement or the lesser of the estimated residual value of the property and 25 per cent of the original acquisition cost to the credit union or subsidiary. O.Reg. 237/09, s.33 (3). (4) The financial lease agreement or conditional sales agreement must set out the responsibilities of the credit union or its subsidiary respecting the benefit of the warranties, guarantees and undertakings made by the manufacturer or supplier of the property. O.Reg. 237/09, s.33 (4). (5) The aggregate estimated residual value of all property held by a credit union and its subsidiaries under financial lease agreements must not exceed 10 per cent of the aggregate original acquisition cost. O.Reg. 237/09, s.33 (5). (6) This section does not apply with respect to agreements in which the credit union or its subsidiary is the lessee or conditional purchaser. O.Reg. 237/09, s.33 (6). Networking Networking 34.(1) Subject to sections 35 to 43, for the purposes of subsection 174 (4) of the Act, the following are the prescribed persons or entities in respect of which a credit union may act as agent: A financial institution. The Corporation. Credit Union Central of Canada. Central 1 Credit Union. La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec. 
A financial leasing corporation described in subsection 68 (3), whether or not it is a subsidiary of the credit union. A mutual fund corporation described in subsection 68 (6), whether or not it is a subsidiary of the credit union. A mutual fund distribution corporation described in subsection 68 (7), whether or not it is a subsidiary of the credit union. O.Reg. 237/09, s.34 (1). (2) A credit union may act as agent for the Corporation only with respect to the administration of deposits under a deposit administration agreement. O.Reg. 237/09, s.34 (2). (3) For the purposes of subsection 174 (4) of the Act, a credit union may refer its members to a person or entity listed in paragraphs 1 to 8 of subsection (1), a syndicating credit union or a syndicating league for the purpose of obtaining a syndicated loan referred to in section 56. O.Reg. 237/09, s.34 (3). Authorized Types of Insurance Authorized types of insurance 35.(1) For the purpose of subsection 176 (1) of the Act, a credit union may administer any of the following types of insurance policies offered by insurers that are licensed to carry on business offering that type of insurance policy: Insurance related to a credit card or charge card issued by the credit union. Creditors’ disability insurance. Creditors’ life insurance. Creditors’ insurance for loss of employment. Creditors’ vehicle inventory insurance. Export credit insurance. Group accident and sickness insurance. Group life insurance. Mortgage insurance. Travel insurance. O.Reg. 237/09, s.35 (1). (2) A credit union that, on March 1, 1995, administers an insurance policy other than one authorized under subsection (1) may continue to administer the policy with respect to a person to whom coverage is provided on that date. O.Reg. 237/09, s.35 (2). (3) For the purposes of subsection (1), “insurance related to a credit card or charge card” refers to a policy of an insurer that provides the types of insurance described in this subsection to the holder of a credit card or charge card as a feature of the card without request and without an individual assessment of risk. The policy may provide insurance against the loss of, or damage to, goods purchased with the card. The policy may also provide insurance against any loss arising from a contractual liability assumed by the holder when renting a vehicle, if the rental is paid for with the card. The policy may also provide for the extension of a warranty provided by the manufacturer of the goods purchased with the card. O.Reg. 237/09, s.35 (3). (4) For the purposes of subsection (1), “creditors’ disability insurance” refers to a group insurance policy that will pay to the credit union all or part of the amount of a debt owed to the credit union by a debtor. Payment will be made only in the event of bodily injury to or the illness or disability of, (a) the debtor or his or her spouse, if the debtor is an individual; (b) an individual who is a guarantor of all or part of the debt; (c) a director or officer of a debtor that is a body corporate; or (d) an individual who is essential to the ability of a debtor that is an entity to meet the debtor’s financial obligations to the credit union. O.Reg. 237/09, s.35 (4). 
(5) For the purposes of subsection (1), “creditors’ life insurance” refers to a group insurance policy that will pay to the credit union all or part of the amount of a debt owed to the credit union by a debtor or all or part of the amount of the credit limit under a line of credit for a debt relating to a small business, a farm, a fishery or a ranch. Payment will be made only in the event of the death of, (a) the debtor or his or her spouse, if the debtor is an individual; (b) an individual who is a guarantor of all or part of the debt; (c) a director or officer of a debtor that is a body corporate; or (d) an individual who is essential to the ability of a debtor that is an entity to meet the debtor’s financial obligations to the credit union. The small business must be a business that is or, if it were incorporated, would be a small business corporation within the meaning of subsection 248 (1) of the Income Tax Act (Canada). The line of credit must be a commitment to lend amounts up to a predetermined limit that does not involve a predetermined repayment schedule. The credit limit must not exceed the reasonable credit needs of the debtor or the lending limits of the credit union. O.Reg. 237/09, s.35 (5). (6) For the purposes of subsection (1), “creditors’ insurance for loss of employment” refers to a policy of an insurer that will pay to the credit union all or part of the amount of a debt owed to the credit union. The insurance policy will be made without an individual assessment of risk. Payment will be made only in the event that, (a) the debtor becomes involuntarily unemployed, if the debtor is an individual; or (b) an individual who is a guarantor of any portion of the debt becomes involuntarily unemployed. O.Reg. 237/09, s.35 (6). (7) For the purposes of subsection (1), “creditors’ vehicle inventory insurance” refers to a policy of an insurer that provides insurance against direct and accidental loss or damage to vehicles held in stock for display and sale purposes by a debtor of the credit union. Some or all of the vehicles must have been financed by the credit union. O.Reg. 237/09, s.35 (7). (8) For the purposes of subsection (1), “export credit insurance” refers to a policy of an insurer that provides insurance to an exporter of goods or services against a loss incurred by the exporter because goods or services are not paid for. O.Reg. 237/09, s.35 (8). (9) For the purposes of subsection (1), “group accident and sickness insurance” refers to a group insurance policy between an insurer and the credit union. The policy provides accident and sickness insurance severally for persons who individually hold certificates of insurance. The insurance must be restricted to the credit union’s employees, its members and the employees of its subsidiaries. O.Reg. 237/09, s.35 (9). (10) For the purposes of subsection (1), “group life insurance” refers to a group insurance policy between an insurer and the credit union. The policy provides life insurance severally for persons who individually hold certificates of insurance. The insurance must be restricted to the credit union’s employees, its members and the employees of its subsidiaries. O.Reg. 237/09, s.35 (10). (11) For the purposes of subsection (1), “mortgage insurance” refers to a policy of an insurer that provides insurance to the credit union against a loss caused by a default under a loan by the credit union secured by a mortgage on real estate or an interest in real estate. The debtor must be an individual. O.Reg. 237/09, s.35 (11). 
(12) For the purposes of subsection (1), “travel insurance” refers to either of the following: A policy of an insurer that provides the types of insurance described in this paragraph to an individual in respect of a trip by him or her away from the place where he or she ordinarily resides. The insurance is provided without an individual assessment of risk. The policy may provide insurance against a loss that results from the cancellation or interruption of the trip. It may provide insurance against the loss of or damage to personal property that occurs while the individual is on the trip. It may provide insurance against a loss caused by the delayed arrival of personal baggage while the individual is on the trip. A group insurance policy that provides the types of insurance described in this paragraph to an individual in respect of a trip by him or her away from the province in which he or she ordinarily resides. The policy may provide insurance against expenses incurred during the trip that result from the individual’s illness or disability that occurs during the trip. It may provide insurance against expenses incurred during the trip that result from bodily injury to or the death of the individual caused by an accident during the trip. It may provide insurance against expenses incurred by the individual for dental care required as a result of an accident during the trip. It may provide insurance in the event that the individual dies during the trip, against expenses incurred for the return of his or her remains to the place where he or she ordinarily resided before death, or for travel expenses incurred by a relative who must travel to identify the remains. The policy may provide that the insurer undertakes to pay money in the event of the individual’s illness or disability that occurs during the trip or bodily injury to or the death of the individual caused by an accident during the trip. O.Reg. 237/09, s.35 (12). Group insurance policy 36.(1) A credit union may administer a group insurance policy described in section 35 only for its members, its employees or the employees of its subsidiaries. O.Reg. 237/09, s.36 (1). (2) A group insurance policy is a contract of insurance between an insurer and the credit union that provides insurance severally for a group of identifiable persons who individually hold certificates of insurance. O.Reg. 237/09, s.36 (2). Advice about insurance 37.(1) A credit union may provide advice about an authorized type of insurance. O.Reg. 237/09, s.37 (1). (2) A credit union may provide advice in respect of another type of insurance only if, (a) the advice is general in nature; and (b) the advice is not about a specific risk, a particular proposal respecting life insurance or a particular insurance policy, insurer, agent, broker or service. O.Reg. 237/09, s.37 (2). (3) A credit union may provide services in respect of an authorized type of insurance. O.Reg. 237/09, s.37 (3). (4) A credit union may provide services in respect of another type of insurance only if the credit union does not refer a person to a particular insurer, agent or broker. O.Reg. 237/09, s.37 (4). Restrictions on Insurance Restriction on insurance 38.A credit union shall not underwrite insurance. O.Reg. 237/09, s.38. Restriction on agency and office space 39.(1) A credit union shall not act as an agent for any person in the placing of insurance. O.Reg. 237/09, s.39 (1). 
(2) A credit union shall not lease or provide space in its head office or any other of its offices to a person placing insurance. O.Reg. 237/09, s.39 (2). Separate and distinct premises 40.(1) A credit union that carries on business in premises adjacent to an office of an insurer, agent or broker shall clearly indicate to its customers that the credit union’s premises are separate and distinct from the premises of the insurer, agent or broker. O.Reg. 237/09, s.40 (1). (2) The premises of the credit union must be separate and distinct from the premises of the insurer, agent or broker. O.Reg. 237/09, s.40 (2). Telecommunications device 41.A credit union shall not provide a telecommunications device that is primarily for the use of its customers to link a customer with an insurer, agent or broker. O.Reg. 237/09, s.41. Promotion of insurer 42.(1) A credit union shall not promote an insurer, agent or broker unless, (a) the insurer, agent or broker deals only in authorized types of insurance; or (b) the promotion takes place outside the head office and any other office of the credit union, and is directed to, (i) all of the holders of credit cards or charge cards issued by the credit union to whom statements of account are sent regularly, (ii) all of the credit union members who are individuals and to whom statements of account are sent regularly, or (iii) the general public. O.Reg. 237/09, s.42 (1). (2) A credit union shall not promote an insurance policy of an insurer, agent or broker, or a service provided in respect of such a policy, unless, (a) the policy is of an authorized type of insurance or the service is in respect of such a policy; (b) the policy is to be provided by a corporation without share capital (other than a mutual insurer or a fraternal benefit society) that carries on business without pecuniary gain to its members and the policy provides insurance to an individual in respect of the risks covered by travel insurance; (c) the service is in respect of a policy described in clause (b); or (d) the promotion takes place outside the head office of the credit union and any other office of the credit union, and is directed to, (i) all of the holders of credit cards or charge cards issued by the credit union to whom statements of account are sent regularly, (ii) all of the credit union members who are individuals and to whom statements of account are sent regularly, or (iii) the general public. O.Reg. 237/09, s.42 (2). (3) A credit union may exclude the following persons from a promotion described in clause (1)(b) or (2)(d): Persons in respect of whom the promotion would contravene an Act of Parliament or of the legislature of a province. Persons who have notified the credit union in writing that they do not wish to receive promotional material from the credit union. Persons who hold a credit card or charge card issued by the credit union in respect of which the account is not in good standing. O.Reg. 237/09, s.42 (3). Sharing of information with insurer 43.(1) Except as permitted by this section, a credit union shall not directly or indirectly give an insurer, agent or broker information about, (a) a member of the credit union; (b) an employee of the member; (c) if the member is an entity with its own members, a member of the entity; or (d) if the member has partners, a partner of the member. O.Reg. 237/09, s.43 (1). 
(2) A credit union shall not permit its subsidiary to give directly or indirectly to an insurer, agent or broker information that the subsidiary receives from the credit union. O.Reg. 237/09, s.43 (2). (3) A credit union shall not permit a subsidiary that is a loan or trust corporation to give directly or indirectly to an insurer, agent or broker information about, (a) a customer of the subsidiary; (b) an employee of the customer; (c) if the customer is an entity with members, a member of the customer; or (d) if the customer has partners, a partner of the customer. O.Reg. 237/09, s.43 (3). (4) A credit union or a subsidiary that is a loan or trust corporation may give information to an insurer, agent or broker if, (a) the credit union or subsidiary has established procedures to ensure that the insurer, agent or broker does not use the information to promote himself, herself or itself or an insurance policy or services respecting an insurance policy; and (b) the insurer, agent or broker has given an undertaking to the credit union or subsidiary, in a form acceptable to the Superintendent, that he, she or it will not use the information for such a purpose. O.Reg. 237/09, s.43 (4). (5) In this section, “loan or trust corporation” means a loan or trust corporation incorporated under an Act of the legislature of a province. O.Reg. 237/09, s.43 (5). Fiduciary Activities Fiduciary activities 44.For the purposes of section 177 of the Act, the only fiduciary activity a credit union may undertake is acting as a trustee with respect to, (a) deposits under registered retirement savings plans, registered retirement income funds, registered education savings plans, registered disability savings plans and tax-free savings accounts under the Income Tax Act (Canada); (b) trust funds established under the Cemeteries Act (Revised) or any other funds in respect of which a credit union is expressly permitted or required, under an Act or regulation, to act as a trustee; and (c) loan proceeds and security under loan participation agreements and syndication agreements. O.Reg. 237/09, s.44. Guarantees Guarantees 45.For the purposes of subsection 178 (3) of the Act, the following are the prescribed conditions and restrictions on a guarantee: The guarantee must have a fixed term. The credit union shall not guarantee an obligation, other than its own obligation or an obligation of one of its subsidiaries, unless the credit union has received security at least equal to the amount of the obligation guaranteed. O.Reg. 237/09, s.45. Limit on amount of guarantee 46.For the purposes of subsection 178 (4) of the Act, the prescribed percentage is 10 per cent. O.Reg. 237/09, s.46. PART VIII INVESTMENT AND LENDING Interpretation Interpretation 47.For the purposes of this Part, regulatory capital shall be determined under section 17 using the audited financial statements of the credit union that were placed before its members at the most recent annual meeting. O.Reg. 237/09, s.47. Security Interests in Credit Union Property Security interests in credit union property 48.(1) This section sets out, for the purposes of section 184 of the Act, the circumstances in which a credit union may create a security interest in property of the credit union. O.Reg. 237/09, s.48 (1). 
(2) A credit union may create a security interest in personal property of the credit union if the property, together with any other property of the credit union subject to a security interest under this subsection, has an aggregate value of less than the greater of, (a) $25,000; and (b) one per cent of the credit union’s total assets, as set out in the audited financial statements of the credit union that were placed before its members at the most recent annual meeting. O.Reg. 237/09, s.48 (2). (3) A credit union may create a security interest in property of the credit union if the following conditions are satisfied: The security interest is granted to secure a debt, including any obligation of the credit union to an entity listed in paragraph 2 that is a member of the Canadian Payments Association to settle for payment items of the credit union in accordance with the by-laws and rules of the Canadian Payments Association, which together with other debts for which the credit union has granted a security interest does not exceed 15 per cent of the credit union’s total assets, as set out in the audited financial statements of the credit union that were placed before its members at the most recent annual meeting. The debt is owed to, i. a bank or authorized foreign bank within the meaning of section 2 of the Bank Act (Canada), ii. a corporation registered under the Loan and Trust Corporations Act, iii. a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec, or iv. Credit Union Central of Canada. The security agreement under which the security interest is granted provides that the security interest is granted over specifically identified assets and does not create a general charge against the business and undertaking of the credit union. The security interest is limited by its terms to property with a value, together with the total value of all property subject to a security interest under this subsection, that does not exceed 25 per cent of the value of the credit union’s total assets as set out in the audited financial statements of the credit union that were placed before its members at the most recent annual meeting. The security agreement under which the security interest is granted provides that if the value of the property subject to a security interest under this subsection exceeds at any time the limit established in paragraph 4, the security interest does not apply to the portion of the property, or to the portion of the proceeds from the sale of the property, that exceed the limit, regardless of whether the debt in respect of which the security was granted has been repaid in full at that time. O.Reg. 237/09, s.48 (3). (4) A credit union may create a general security interest in property of the credit union, except property required to satisfy the requirements of adequate liquidity under section 84 of the Act, if the following conditions are satisfied: The debt is owed to, i. a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec, or ii. Credit Union Central of Canada. 
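The quantitative limits in subsections 48 (2) and (3) can be summarized in a short sketch. The checks below cover only the dollar and percentage thresholds; the other conditions in those subsections (who holds the debt, the terms of the security agreement and so on) are assumed to be satisfied, and the names and figures are illustrative.

```python
# Illustrative sketch of the quantitative limits on security interests in s. 48 (2) and (3).

def within_s48_2_limit(pledged_personal_property, total_assets):
    """s. 48 (2): pledged personal property must stay below the greater of $25,000
    and 1% of total assets."""
    return pledged_personal_property < max(25_000, 0.01 * total_assets)

def within_s48_3_limits(secured_debt, pledged_property, total_assets):
    """s. 48 (3): secured debt capped at 15% of total assets and pledged property
    capped at 25% of total assets (other conditions assumed to hold)."""
    return (secured_debt <= 0.15 * total_assets
            and pledged_property <= 0.25 * total_assets)

# Example: a credit union with $60M in total assets.
print(within_s48_2_limit(500_000, 60_000_000))                  # limit is $600,000 -> True
print(within_s48_3_limits(8_000_000, 14_000_000, 60_000_000))   # $9M and $15M caps -> True
```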
The security agreement under which the security interest is granted provides that if the Corporation orders the credit union to be subject to administration under section 294 of the Act or the Corporation is appointed as liquidator of the assets of the credit union, the Corporation may require that the security agreement be assigned to the Corporation, if the Corporation delivers one of the following to the secured party: i. Payment in full of the outstanding balance, as of the close of business on the day of the assignment, of the indebtedness of the credit union secured by the agreement. ii. A guarantee of payment for the outstanding balance, as of the close of business on the day of the assignment, of the indebtedness of the credit union secured by the agreement. iii. Partial payment of the outstanding balance of the indebtedness of the credit union secured by the agreement and a guarantee of payment for the portion of the outstanding balance not paid as of the close of business on the day of the assignment. The security agreement under which the security interest is granted provides that despite paragraph 2, if, i. the security interest granted by the credit union forms part of the collateral security granted or assigned by Central 1 Credit Union or Credit Union Central of Canada to the Bank of Canada as security for an emergency liquidity assistance facility from the Bank of Canada, and ii. the Corporation orders the credit union to be subject to administration under section 294 of the Act or the Corporation is appointed liquidator of the assets of the credit union, the Corporation may require the security agreement be assigned to the Corporation only if the Corporation delivers to the Bank of Canada payment in full of the outstanding balance of the indebtedness of the credit union secured by the agreement. O.Reg. 237/09, s.48 (4). (5) A guarantee of payment made under subparagraph ii or iii of paragraph 2 of subsection (4) must provide the following: The Corporation shall pay the outstanding balance of the indebtedness, including interest at the interest rate provided for in the debt instrument that forms a part of the security agreement prior to any default under that instrument, by the fifth anniversary of the guarantee, or such earlier date as the Corporation may designate. The secured party is not required to exhaust its right to recourse against the credit union or any other person before being entitled to payment or performance by the Corporation under the guarantee. The obligations of the Corporation under the guarantee are continuing, unconditional and absolute, and will not be released, discharged, diminished, limited or otherwise affected by a change affecting the credit union. O.Reg. 237/09, s.48 (5). (6) A credit union may create a security interest in property of the credit union in favour of the Corporation without satisfying the requirements of subsection (2), (3) or (4). O.Reg. 237/09, s.48 (6). (7) If, on the day this section comes into force, a credit union has indebtedness that is subject to a security interest that, if created after this section comes into force, would not comply with this section, the credit union shall, (a) pay the outstanding balance of the indebtedness and discharge the security interest within 90 days or such longer period as the Corporation considers appropriate; or (b) amend the terms of the security agreement so as to comply with this section within 90 days or such longer period as the Corporation considers appropriate. O.Reg. 
237/09, s.48 (7). Classes of Loans Classes of loans 49.The following are prescribed as classes of loans: Agricultural loans. Bridge loans. Commercial loans. Institutional loans. Personal loans. Residential mortgage loans. Syndicated loans. Loans to unincorporated associations. O.Reg. 237/09, s.49. Agricultural loan 50.An agricultural loan is a loan that is made for the purposes of financing, (a) the production of cultivated or uncultivated field-grown crops; (b) the production of horticultural crops; (c) the raising of livestock, fish, poultry or fur-bearing animals; or (d) the production of eggs, milk, honey, maple syrup, tobacco, wood from woodlots or fibre or fodder crops. O.Reg. 237/09, s.50. Bridge loan 51.A bridge loan is a loan to an individual made under the following circumstances: The loan is for the purchase of residential property in which the purchaser will reside. The term of the loan is not greater than 120 days. The funds from the sale of another residential property owned by the individual will be used to repay the loan. The credit union must receive a copy of the executed purchase and sale agreement for both properties before the loan is made. The conditions of each of the purchase and sale agreements must be satisfied before the loan is made. The loan is fully secured by a mortgage on the residential property being sold or, before the loan is made, the borrower’s solicitor has given the credit union an irrevocable letter of direction from the borrower stating that the funds from the sale of the residential property being sold will be remitted to the credit union. O.Reg. 237/09, s.51. Commercial loan 52.(1) A commercial loan is a loan, other than any of the following types of loans, that is made for any purpose: An agricultural loan, a bridge loan, an institutional loan, a personal loan, a residential mortgage loan. A loan to an unincorporated association. A loan that consists of deposits made by the credit union with a financial institution, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada. A loan that is fully secured by a deposit with, i. a financial institution, including the credit union making the loan, ii. Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec, or iii. Credit Union Central of Canada. A loan that is fully secured by debt obligations that are guaranteed by, i. a financial institution other than the credit union making the loan, ii. Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec, or iii. Credit Union Central of Canada. A loan that is fully secured by a guarantee of, i. a financial institution other than the credit union making the loan, ii. Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec, or iii. Credit Union Central of Canada. An investment in a debt obligation that is, i. fully guaranteed by a financial institution other than the credit union making the loan, ii. fully secured by deposits with a financial institution, including the credit union making the loan, or iii. fully secured by debt obligations that are fully guaranteed by a financial institution other than the credit union making the loan. 
An investment in a debt obligation issued by the Government of Canada, the government of a province or territory of Canada or a municipality or by an agency of such a government or municipality. An investment in a debt obligation guaranteed by, or fully secured by securities issued by, the Government of Canada, the government of a province or territory of Canada or a municipality or by an agency of such a government or municipality. An investment in a debt obligation issued by a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec. An investment in a debt obligation that is widely distributed. An investment in shares or ownership interests that are widely distributed. An investment in a participating share. An investment in shares of a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec. O.Reg. 237/09, s.52 (1). (2) A commercial loan includes the supply of funds for use in automated bank machines that are not owned and operated by the credit union. O.Reg. 237/09, s.52 (2). Institutional loan 53.An institutional loan is a loan given to, (a) the Government of Canada; (b) the government of a province or territory of Canada; (c) an agency of the Government of Canada; (d) an agency of the government of a province or territory of Canada; (e) a school board or college funded primarily by the Government of Canada or by the government of a province or territory of Canada; (f) any other entity funded primarily by the Government of Canada, the government of a province or territory of Canada or a municipality; or (g) a municipality or an agency of one. O.Reg. 237/09, s.53. Personal loan 54.A personal loan is a loan given to, (a) an individual for personal, family or household use; or (b) an individual or an entity for any other use if the loan does not exceed $25,000 and if the total outstanding amount of such loans to him, her or it and to connected persons does not exceed $25,000. O.Reg. 237/09, s.54. Residential mortgage loan 55.A residential mortgage loan is a loan that is secured by a mortgage on residential property that is occupied by the borrower and to which any of the following apply: The amount of the loan, together with the amount then outstanding of any mortgage having an equal or prior claim against the residential property, does not exceed 80 per cent of the value of the property when the loan is made. The loan is insured under the National Housing Act (Canada), or guaranteed or insured by a government agency. The loan is insured by an insurer licensed to undertake mortgage insurance. O.Reg. 237/09, s.55. Syndicated loan 56.A syndicated loan is a loan including any related credit facilities made under a syndicated loan agreement by a credit union, a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada acting as the syndicating credit union where: The parties to the syndicated loan agreement are the borrower, the syndicating credit union and one or more of the following: i. Another credit union or its subsidiary or affiliate. ii. A league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada. iii. A financial institution other than a securities dealer. 
Each of the parties to the syndicated loan agreement, other than the borrower, agrees to contribute a specified portion of the loan and to be bound by the terms and conditions of the syndicated loan agreement. The syndicating credit union contributes at least 10 per cent of the loans, including any related credit facilities, and underwrites, disburses and administers them on behalf of the parties to the syndicated loan agreement. O.Reg. 237/09, s.56. Loan to an unincorporated association 57.A loan to an unincorporated association is a loan to an unincorporated association or organization, (a) that is not a partnership registered under the Business Names Act; and (b) that is operated on a non-profit basis for educational, benevolent, fraternal, charitable, religious or recreational purposes. O.Reg. 237/09, s.57. Lending Limits Lending limits to a person or connected persons 58.(1) A class 1 credit union whose total assets, as set out in the audited financial statements of the credit union that were placed before its members at the most recent annual meeting, are described in a row in Column 1 of the Table to this section shall not make a loan to a person if, as a result of making the loan, the total amount of all outstanding loans made by the credit union to the person and any connected persons would exceed the amount of the total lending limit set out in the same row of Column 2 of the Table. O.Reg. 237/09, s.58 (1). (2) Subject to subsections (3) and (4), a class 2 credit union shall not make a loan to a person if, as a result of making the loan, the total amount of all outstanding loans made to the person and any connected persons would exceed 25 per cent of the credit union’s regulatory capital. O.Reg. 237/09, s.58 (2). (3) If the person to whom the loan is to be made is listed in clause 53 (c), (d) or (e), the class 2 credit union shall not make the loan if, as a result of making the loan, the total amount of all outstanding loans made to the person and any connected persons would exceed 50 per cent of the credit union’s regulatory capital. O.Reg. 237/09, s.58 (3). (4) If the person to whom the loan is made is listed in clause 53 (a) or (b), the lending limit set out in subsection (2) does not apply. O.Reg. 237/09, s.58 (4). (5) For the purposes of this section, the total amount of all outstanding loans made by a credit union to a person and any connected persons excludes the portion, if any, of a loan that, (a) is insured under the National Housing Act (Canada) or by an insurer licensed to undertake mortgage insurance; (b) is guaranteed by, (i) a federal, provincial or territorial government of Canada, (ii) an agent of a government described in subclause (i), or (iii) the Corporation; or (c) is secured by deposits of the borrower with the credit union. O.Reg. 237/09, s.58 (5). (6) For the purposes of this section, changing the terms and conditions of a loan or refinancing a loan in any other way shall be deemed to be making a loan. O.Reg. 237/09, s.58 (6). 
TABLE
LENDING LIMITS TO A PERSON OR CONNECTED PERSONS — CLASS 1 CREDIT UNIONS
| Column 1: Total assets of credit union | Column 2: Total lending limit to a person or connected persons |
| --- | --- |
| Less than $500,000 | Greater of 100% of regulatory capital and $60,000 |
| $500,000 or more but less than $1 million | Greater of 100% of regulatory capital and $100,000 |
| $1 million or more but less than $2 million | Greater of 80% of regulatory capital and $125,000 |
| $2 million or more but less than $3 million | Greater of 80% of regulatory capital and $155,000 |
| $3 million or more but less than $5 million | Greater of 70% of regulatory capital and $185,000 |
| $5 million or more but less than $10 million | Greater of 60% of regulatory capital and $235,000 |
| $10 million or more but less than $20 million | Greater of 50% of regulatory capital and $295,000 |
| $20 million or more but less than $50 million | Greater of 30% of regulatory capital and $400,000 |
O.Reg. 237/09, s.58, Table.
Limits on loans of same class to a person 59.(1) A class 1 credit union shall not make a loan to a person if, as a result of making the loan, the total amount of all outstanding loans of the same class, as set out in Column 1 of the Table to this section, made by the credit union to the same person and any connected persons would exceed the amount calculated by multiplying the percentage set out in the same row of Column 2 of the Table by the credit union’s total lending limit as determined under section 58. O.Reg. 237/09, s.59 (1). (2) A class 2 credit union shall establish prudent lending limits for each class of loans that it is authorized by its by-laws to make. O.Reg. 237/09, s.59 (2). (3) For the purposes of this section and for the purposes of the lending limits established by a class 2 credit union, (a) a loan in an amount that exceeds the lending value of any property that is given as security for the loan, as determined in accordance with the credit union’s lending policies, is an under-secured loan; (b) a loan in an amount that does not exceed the lending value of the property that is given as security for the loan, as determined in accordance with the credit union’s lending policies, is a fully secured loan; and (c) a loan to a person includes a loan to two or more persons for which they are jointly and severally liable. O.Reg. 237/09, s.59 (3). (4) For the purposes of this section, the total amount of all outstanding loans made by a credit union to a person and any connected persons excludes the portion, if any, of a loan that, (a) is insured under the National Housing Act (Canada) or by an insurer licensed to undertake mortgage insurance; (b) is guaranteed by, (i) a federal, provincial or territorial government of Canada, (ii) an agent of a government described in subclause (i), or (iii) the Corporation; or (c) is secured by deposits of the borrower with the credit union. O.Reg. 237/09, s.59 (4). (5) For the purposes of this section, changing the terms and conditions of a loan or refinancing a loan in any other way shall be deemed to be making a loan. O.Reg. 237/09, s.59 (5).
TABLE
LIMITS ON LOANS OF SAME CLASS — CLASS 1 CREDIT UNIONS
| Column 1: Class of loan | Column 2: Percentage of total lending limit |
| --- | --- |
| Agricultural loan | 0% |
| Bridge loan | 100% |
| Institutional loan | 50% |
| Loan to unincorporated association or organization | 5% |
| Personal loan, fully secured | 20% |
| Personal loan, unsecured or under-secured | 6% |
| Residential mortgage loan | 100% |
| Loan under a syndicated loan agreement | 0% |
O.Reg. 237/09, s.59, Table.
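For readers working through the arithmetic, the two Tables above combine as follows: the section 58 Table gives a class 1 credit union's total lending limit (the greater of a percentage of regulatory capital and a dollar floor, selected by the asset band), and subsection 59 (1) applies a class-of-loan percentage to that total limit. The sketch below simply restates those Tables; the function and variable names are illustrative assumptions, it is not part of the Regulation, and the Regulation text governs.

```python
# Illustrative sketch only: restates the s.58 and s.59 Tables for class 1
# credit unions. Not part of the Regulation and not legal advice.

# (total assets upper bound (exclusive), fraction of regulatory capital, dollar floor)
S58_BANDS = [
    (500_000,    1.00,  60_000),
    (1_000_000,  1.00, 100_000),
    (2_000_000,  0.80, 125_000),
    (3_000_000,  0.80, 155_000),
    (5_000_000,  0.70, 185_000),
    (10_000_000, 0.60, 235_000),
    (20_000_000, 0.50, 295_000),
    (50_000_000, 0.30, 400_000),
]

# s.59 Table: fraction of the s.58 total lending limit, by class of loan
S59_CLASS_FRACTION = {
    "agricultural": 0.00,
    "bridge": 1.00,
    "institutional": 0.50,
    "unincorporated association": 0.05,
    "personal, fully secured": 0.20,
    "personal, unsecured or under-secured": 0.06,
    "residential mortgage": 1.00,
    "syndicated": 0.00,
}

def total_lending_limit(total_assets: float, regulatory_capital: float) -> float:
    """s.58 Table: greater of a percentage of regulatory capital and a dollar floor."""
    for upper_bound, fraction, floor in S58_BANDS:
        if total_assets < upper_bound:
            return max(fraction * regulatory_capital, floor)
    raise ValueError("The Table only covers credit unions with assets under $50 million")

def class_limit(loan_class: str, total_assets: float, regulatory_capital: float) -> float:
    """s.59 (1): class limit = class percentage x the s.58 total lending limit."""
    return S59_CLASS_FRACTION[loan_class] * total_lending_limit(total_assets, regulatory_capital)

# Example: $8 million in assets and $700,000 of regulatory capital gives a
# total lending limit of max(0.60 * 700,000, 235,000) = $420,000, so the
# fully secured personal loan class limit is 0.20 * 420,000 = $84,000.
```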
Eligible Investments Eligible investments for class 1 credit unions 60.(1) The following types of securities and property are prescribed, for the purposes of section 198 of the Act, as the types of securities and property that a class 1 credit union may invest in or hold, subject to the conditions indicated: Debt obligations that are fully guaranteed by, i. a financial institution other than the credit union, ii. Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec, or iii. Credit Union Central of Canada. Debt obligations that are fully secured by deposits with, i. a financial institution other than the credit union, ii. Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec, or iii. Credit Union Central of Canada. Debt obligations that are fully secured by other debt obligations that are fully guaranteed by, i. a financial institution other than the credit union, ii. Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec, or iii. Credit Union Central of Canada. Debt obligations issued by the Government of Canada, the government of a province or territory of Canada, an agency of the Government of Canada or the government of a province or territory, a municipality or a local board or agent of a municipality. Debt obligations that are guaranteed by, or fully secured by securities issued by, the Government of Canada, the government of a province or territory of Canada, an agency of the Government of Canada or the government of a province or territory, a municipality or a local board or agent of a municipality. Debt obligations issued by a school board or by a municipality for the purposes of a school board. Derivative instruments that the credit union purchases to manage interest rate risk. Debt obligations that are widely distributed. Debt obligations of a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada. Mortgages upon improved real estate in Canada. Improved real estate in Canada, but only if the credit union occupies or intends to occupy the real estate for its own use. Improved real estate in Canada acquired, i. to protect the credit union’s investment in a mortgage on the real estate, or ii. in satisfaction of debts contracted in the course of the credit union’s business. Securities that are secured by mortgages. Shares of a body corporate or ownership interests in an unincorporated association that are widely distributed. Participating shares of a body corporate. Shares of a league, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada. Fully paid shares or units of a mutual fund or corporation incorporated to offer participation in an investment portfolio. Loans that consist of deposits made by the credit union with a financial institution, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada. Loans that are fully secured by a deposit with a financial institution, Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada. Loans that are fully secured by debt obligations that are guaranteed by, i. 
a financial institution other than a credit union making the loan, or ii. Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada. Loans that are fully secured by a guarantee of, i. a financial institution other than a credit union making the loan, or ii. Central 1 Credit Union, La Fédération des caisses Desjardins du Québec, La Caisse centrale Desjardins du Québec or Credit Union Central of Canada. Investments not authorized under paragraphs 1 to 21 and not prohibited under any other provision of the Act or the regulations, so long as the total book value of those investments does not exceed 25 per cent of the credit union’s regulatory capital. O.Reg. 237/09, s.60 (1). (2) Paragraph 22 of subsection (1) does not apply so as to, (a) enlarge the authority conferred under the Act or the regulations to invest in mortgages or to lend on the security of real estate; or (b) affect the limits established under the Act or the regulations on investments in real estate. O.Reg. 237/09, s.60 (2). (3) A class 1 credit union shall not make a direct investment in, or purchase of, any commodity, including metals, food and grain, that trades on a commodity exchange. O.Reg. 237/09, s.60 (3). (4) The total book value of all investments by a class 1 credit union and its subsidiaries in improved real estate in Canada must not exceed 100 per cent of the credit union’s regulatory capital. O.Reg. 237/09, s.60 (4). (5) For the purposes of subsection (4), the total book value does not include the book value of real estate acquired by the credit union and its subsidiaries, (a) to protect its investment in a mortgage on the real estate; or (b) in satisfaction of debts previously contracted in the course of the credit union’s business. O.Reg. 237/09, s.60 (5). (6) The total book value of all investments by the credit union in shares referred to in paragraphs 14 and 15 of subsection (1), other than shares in its subsidiaries, must not exceed 25 per cent of the credit union’s regulatory capital. O.Reg. 237/09, s.60 (6). Eligible investments for class 2 credit unions 61.(1) A class 2 credit union may hold as an investment any asset authorized by its investment policies, other than a prohibited investment, subject to the conditions set out in the Act and this Regulation. O.Reg. 237/09, s.61 (1). (2) A class 2 credit union shall not invest in a derivative instrument unless it is purchased for the purposes of managing interest rate risk. O.Reg. 237/09, s.61 (2). (3) A class 2 credit union shall not make a direct investment in, or purchase of, any commodity, including metals, food and grain, that trades on a commodity exchange. O.Reg. 237/09, s.61 (3). (4) The total book value of all investments by a class 2 credit union in the following types of shares, other than shares in its subsidiaries, must not exceed 70 per cent of the credit union’s regulatory capital: Shares of a body corporate or ownership interests in an unincorporated association that are widely distributed. Participating shares of a body corporate. O.Reg. 237/09, s.61 (4). (5) The total book value of all investments by a class 2 credit union and its subsidiaries in improved real estate in Canada must not exceed 100 per cent of the credit union’s regulatory capital. O.Reg. 237/09, s.61 (5). 
(6) For the purposes of subsection (5), the total book value does not include the book value of real estate acquired by the credit union and its subsidiaries, (a) to protect its investment in a mortgage on the real estate; or (b) in satisfaction of debts previously contracted in the course of the credit union’s business. O.Reg. 237/09, s.61 (6). Prescribed conditions re improved real estate 62.(1) For the purposes of section 198 of the Act, the following are prescribed conditions that must be satisfied if a credit union invests in improved real estate, either by purchasing it or by way of a loan secured by a mortgage on it: The amount advanced on a mortgage plus all outstanding mortgages with an equal or prior claim against the real estate must not exceed the lending value of the real estate. Despite paragraph 1, the amount may exceed the lending value of the real estate if the loan secured by the mortgage is approved or insured under the National Housing Act (Canada). Despite paragraph 1, the amount may exceed the lending value of the real estate, i. if the excess amount is guaranteed or insured through an agency of the Government of Canada or of the government of a province or territory of Canada, or ii. if the excess amount is insured by an insurer licensed to undertake mortgage insurance. O.Reg. 237/09, s.62 (1). (2) If a credit union or a subsidiary acquires or has the right to possess or sell real estate for either of the following purposes and then sells it and takes back a mortgage on the sale, the investment in the mortgage need not meet the requirements of subsection (1): To protect its investment in a mortgage on the real estate. In satisfaction of debts previously contracted in the course of the credit union’s business. O.Reg. 237/09, s.62 (2). (3) Subsection (1) does not apply with respect to a mortgage taken back by the credit union on the sale of property held by the credit union for its own use. O.Reg. 237/09, s.62 (3). (4) A credit union shall not retain real estate acquired under circumstances described in subsection (2) for more than two years without obtaining the approval of the Corporation. O.Reg. 237/09, s.62 (4). (5) For the purposes of subsection (1), “lending value” means, in respect of real estate, 80 per cent of the market value of the real estate, but if the credit union considers a lesser percentage appropriate in the circumstances under its investment and lending policies, the lending value is that lesser percentage of the market value of the real estate. O.Reg. 237/09, s.62 (5). Definition 63.For the purposes of sections 60, 61 and 62, “improved real estate” means real estate, (a) on which there exists a building capable of being used for residential, financial, commercial, industrial, educational, professional, institutional, religious, charitable or recreational purposes, (b) on which such a building is being built or is about to be built, (c) on which farming operations are being conducted, or (d) that is vacant land restricted by law to being used for commercial, industrial or residential purposes. O.Reg. 237/09, s.63. 
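As a worked illustration of the lending value rule in section 62 only: under subsection 62 (5) the lending value is 80 per cent of the market value of the real estate (or a lesser percentage set by the credit union's investment and lending policies), and under paragraph 1 of subsection 62 (1) the amount advanced plus any mortgages with an equal or prior claim must not exceed that lending value unless the excess is approved, insured or guaranteed as described in paragraphs 2 and 3. The sketch below restates those conditions; the function names, the policy_percentage parameter and the example figures are assumptions for illustration and are not part of the Regulation.

```python
# Illustrative sketch of the s.62 "lending value" condition. Names and
# example figures are assumptions for illustration only.

def lending_value(market_value: float, policy_percentage: float = 0.80) -> float:
    """s.62 (5): 80% of market value, or a lesser percentage if the credit
    union's investment and lending policies set one."""
    return min(policy_percentage, 0.80) * market_value

def mortgage_advance_permitted(advance: float,
                               prior_or_equal_mortgages: float,
                               market_value: float,
                               insured_or_guaranteed_excess: bool = False,
                               policy_percentage: float = 0.80) -> bool:
    """s.62 (1) para 1: advance + outstanding equal or prior mortgages must not
    exceed the lending value, unless the excess is insured or guaranteed
    (paras 2 and 3)."""
    within_value = (advance + prior_or_equal_mortgages
                    <= lending_value(market_value, policy_percentage))
    return within_value or insured_or_guaranteed_excess

# Example: a $400,000 property with a $50,000 prior mortgage has a lending
# value of $320,000, so an uninsured advance of up to $270,000 satisfies the
# condition in paragraph 1 of subsection 62 (1).
```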
Prescribed conditions re body corporate 64.(1) For the purposes of section 198 of the Act, it is a prescribed condition that a credit union not directly or indirectly invest in the shares of a body corporate if, as a result of the investment, (a) the voting rights attached to the aggregate of any voting shares of the body corporate beneficially owned by the credit union and by any entities it controls would exceed 30 per cent of the voting rights attached to all of the outstanding voting shares of the body corporate; or (b) the aggregate of any shares of the body corporate beneficially owned by the credit union and by any entities it controls would represent ownership of more than 30 per cent of the shareholders’ equity of the body corporate. O.Reg. 237/09, s.64 (1). (2) Subsection (1) does not apply to a credit union in respect of an investment in the shares of a body corporate described in paragraphs 1 to 15 of subsection 68 (1), (a) if, after the investment is made, all the voting rights attached to the voting shares of the body corporate would be owned by credit unions; or (b) if the Corporation approves the credit union’s investment before the investment is made. O.Reg. 237/09, s.64 (2). (3) For the purposes of section 198 of the Act, it is a prescribed condition that a credit union not directly or indirectly invest in ownership interests in an unincorporated entity if, as a result of the investment, the aggregate of any ownership interests, however designated, into which the entity is divided that would be beneficially owned by the credit union and by entities it controls would exceed 30 per cent of all the ownership interests into which the unincorporated entity is divided. O.Reg. 237/09, s.64 (3). Restriction on Single Investments Restriction re single investments 65.For the purposes of section 198 of the Act, a credit union shall not directly or indirectly invest, by way of purchases from or loans to one person or more than one person that, to its knowledge, are connected persons, more than 25 per cent of the credit union’s regulatory capital. O.Reg. 237/09, s.65. Exception to restriction re single investments 66.For the purposes of subclause 199 (1) (a) (iii) of the Act, the following are prescribed persons and entities for class 1 and class 2 credit unions: Credit Union Central of Canada. Central 1 Credit Union, La Fédération des caisses Desjardins du Québec or La Caisse centrale Desjardins du Québec. O.Reg. 237/09, s.66. Connected Persons Connected persons 67. The following conditions are prescribed as conditions that, if satisfied, result in persons being connected persons for the purposes of the Act: In relation to a person or entity, if another person or entity is one of the following: i. a body corporate in which the person or entity holds or beneficially owns, directly or indirectly, at least 35 per cent of the voting securities, ii. an affiliate of a body corporate described in subparagraph i, iii. a person or entity that has a 50 per cent interest in a partnership in which the person or entity also has a 50 per cent interest, iv. a partnership in which the person or entity is a partner, v. a trust or estate in which the person or entity has a substantial beneficial interest, vi. a trust or estate in respect of which the person or entity serves as a trustee or in a similar capacity, vii. a person or entity on whose financial resources the person or entity depends to repay a loan to the credit union, viii. 
a person or entity who provides security to the credit union for a loan to the person or entity. In relation to an individual, if another individual is one of the following: i. a spouse of the individual who is financially dependent on the individual, ii. a relative of the individual or of the individual’s spouse who lives in the same home as the individual and who is financially dependant on the individual or spouse. O.Reg. 237/09, s.67. Investment in Subsidiaries Investment in subsidiaries 68.(1) For the purposes of subsection 200 (1) of the Act, the following are the prescribed subsidiaries: A financial institution. A factoring corporation. A financial leasing corporation. An information services corporation. An investment counselling and portfolio management corporation. A mutual fund corporation. A mutual fund distribution corporation. A real property brokerage corporation. A real property corporation. A service corporation. A body corporate engaging in the activities of a securities dealer. A corporation licensed as a mortgage brokerage under the Mortgage Brokerages, Lenders and Administrators Act, 2006. A body corporate that engages in two or more of the businesses or activities carried on by corporations referred to in this subsection. An entity that is limited to businesses and activities in which the credit union is permitted to engage. A body corporate whose sole purpose is to hold all of the credit union’s shares in one or more of the subsidiaries described in paragraphs 1 to 14. O.Reg. 237/09, s.68 (1). (2) A factoring corporation is a body corporate that is restricted to acting as a factor in respect of accounts receivable, raising money for the purpose of acting as a factor and lending money while acting as a factor. O.Reg. 237/09, s.68 (2). (3) A financial leasing corporation is a body corporate that is restricted to, (a) engaging in financial leasing of personal property; (b) entering into and accepting assignments of conditional sales agreements in respect of personal property; (c) administering financial lease agreements and conditional sales agreements on behalf of a person; and (d) raising money for the purpose of financing its activities and investing the money until it is used for those activities. O.Reg. 237/09, s.68 (3). (4) An information services corporation is a body corporate that is primarily engaged in, (a) collecting, manipulating and transmitting information that is primarily financial or economic in nature or that relates to the business of an entity referred to in subsection (1); (b) providing advisory and other services in the design, development and implementation of information management systems; or (c) designing, developing and marketing computer software. Its ancillary activities may include the design, development, manufacture or sale of computer equipment that is not generally available and that is integral to the provision of financial services or information services related to the business of financial institutions. O.Reg. 237/09, s.68 (4). (5) An investment counselling and portfolio management corporation is a body corporate whose principal activities are either of the following: Offering advice or advising about investments. Investing or controlling money, property, deposits or securities that it does not own and that are not deposited with it in the ordinary course of business. This must involve the exercise of discretion and judgment. O.Reg. 237/09, s.68 (5). 
(6) A mutual fund corporation is a body corporate restricted to investing its funds. It may also be a body corporate that issues securities entitling the holder to receive, on demand or within a specified period, an amount computed by reference to the value of a proportionate interest in all or part of its net assets (including a separate fund or a trust account). O.Reg. 237/09, s.68 (6). (7) A mutual fund distribution corporation is a body corporate whose principal activities are acting as an agent selling and collecting payment for interests in a mutual fund. Purchasers must be told about the existence of any sales commission or service fee before buying an interest in the mutual fund. The sales proceeds, less sales commissions and service fees, must be paid to the fund. O.Reg. 237/09, s.68 (7). (8) A real property brokerage corporation is a body corporate that is primarily engaged in, (a) acting as an agent for vendors, purchasers, mortgagors, mortgagees, lessors or lessees of real estate; and (b) providing consulting or appraisal services with respect to real estate. O.Reg. 237/09, s.68 (8). (9) A real property corporation is a body corporate that is primarily engaged in holding, managing or otherwise dealing with, (a) real estate; or (b) shares of another body corporate or ownership interests in an unincorporated entity, limited partnership or trust that is primarily engaged in holding, managing or otherwise dealing with real estate. O.Reg. 237/09, s.68 (9). (10) A securities dealer is a body corporate that trades in securities in the capacity of principal or agent. “Trade” has the same meaning as in the Securities Act. O.Reg. 237/09, s.68 (10). (11) A service corporation is a body corporate that provides services exclusively to one or more of the following: The credit union. Subsidiaries of the credit union. Financial institutions affiliated with the credit union. O.Reg. 237/09, s.68 (11). Restriction on investment in subsidiaries 69.For the purpose of subsection 200 (7) of the Act, the prescribed percentage of the credit union’s regulatory capital is 100 per cent. O.Reg. 237/09, s.69. PART IX INTEREST RATE RISK MANAGEMENT Interpretation 70.A credit union’s exposure to interest rate risk refers to the potential negative impact, expressed in dollars, of changes in interest rates on a credit union’s earnings and net asset values when the dates of its payments of principal and interest and its receipts of principal and interest are not matched. O.Reg. 237/09, s.70. Policies and procedures 71.(1) Every credit union shall establish, for the purposes of managing its exposure to interest rate risk, policies and procedures that address the following matters: The limits on the credit union’s exposure to interest rate risk and on the impact of this exposure on its net interest income and surplus. The limits must be clear and prudent. The techniques to be used to calculate the amount of the credit union’s exposure to interest rate risk. The internal controls to be implemented to ensure compliance with the policies and procedures. The corrective action to be taken if the limits on the credit union’s exposure to interest rate risk are exceeded. The content and frequency of reports to be made to the board of directors by the management of the credit union about the management of the credit union’s exposure to interest rate risk. O.Reg. 237/09, s.71 (1). (2) The limits must take into account fluctuations in interest rates that might reasonably be expected to occur. O.Reg. 237/09, s.71 (2). 
(3) For a class 1 credit union, the limits must limit changes in net income to changes that do not exceed 0.15 per cent of the credit union’s total assets. O.Reg. 237/09, s.71 (3). (4) The policies and procedures must require the management of the credit union to submit a report to the board of directors and the Corporation if the credit union’s exposure to interest rate risk exceeds the limits established in the policies and procedures, and the report must be submitted within 21 days after the credit union’s exposure to interest rate risk exceeds the limits established in its policies and procedures. O.Reg. 237/09, s.71 (4). (5) A report required by subsection (4) must, (a) describe the circumstances that led to the credit union’s exposure to interest rate risk exceeding the limits; (b) describe the effect that this exposure has had, and may have, on net income; (c) describe the steps taken to bring this exposure within the limits; and (d) include a schedule indicating when the credit union will comply with its policies and procedures. O.Reg. 237/09, s.71 (5). (6) The policies must be approved by the board of directors of the credit union. O.Reg. 237/09, s.71 (6). Interest rate risk that exceeds limits 72.(1) If a credit union’s exposure to interest rate risk exceeds the limits established in its policies and procedures, the credit union shall immediately take steps to bring its exposure within those limits. O.Reg. 237/09, s.72 (1). (2) If a credit union’s exposure to interest rate risk exceeds the limits established in its policies and procedures for two consecutive quarters, the credit union shall promptly submit to the Corporation a plan approved by the board of directors that describes the steps the credit union intends to take to bring its exposure to interest rate risk within those limits. O.Reg. 237/09, s.72 (2). Interest rate risk report 73.(1) A credit union shall prepare a report at the end of each quarter of its fiscal year on its management of the credit union’s exposure to interest rate risk. O.Reg. 237/09, s.73 (1). (2) The report must include all information about the management of interest rate risk that the credit union has filed with the Corporation. O.Reg. 237/09, s.73 (2). (3) The report must be presented at the next board meeting immediately after it is prepared and the board shall review it. O.Reg. 237/09, s.73 (3). PART X RESTRICTED PARTY TRANSACTIONS Interpretation Application 74.This Part applies with respect to transactions entered into, renewed, extended or modified after March 1, 1995. O.Reg. 237/09, s.74. Definition of “restricted party” 75.(1) For the purposes of the Act, “restricted party” means, in relation to a credit union, a person who is or has been in the preceding 12 months, (a) a director or officer of the credit union, (b) a spouse of a director or officer of the credit union, (c) a relative of a person described in clause (a) or (b), if the relative lives in the home of a person described in clause (a) and is financially dependent on a person described in clause (a) or (b), (d) the auditor of the credit union, if the auditor is an individual, (e) a corporation in which a director or officer of the credit union beneficially owns, directly or indirectly, more than 10 per cent of the voting shares, (f) a corporation controlled by a person described in clause (a), (b), (c) or (d), or (g) an affiliate of the credit union, other than a subsidiary. O.Reg. 237/09, s.75 (1). 
(2) For the purposes of subsection (1), “officer” includes a person who has not yet assumed the office. O.Reg. 237/09, s.75 (2). Definition of “transaction” 76.(1) For the purposes of the Act, “transaction”, as between a credit union and a restricted party, includes, (a) a guarantee given by the credit union on behalf of the restricted party, (b) an investment by the credit union in securities issued by the restricted party, (c) a loan from the credit union to the restricted party, (d) an assignment taken or acquisition made by the credit union of a loan made by a third party to the restricted party, and (e) a security interest taken by the credit union in securities issued by the restricted party. O.Reg. 237/09, s.76 (1). (2) The performance of a condition of a transaction forms a part of the transaction and does not constitute a separate transaction. O.Reg. 237/09, s.76 (2). (3) The payment of dividends to a restricted party does not constitute a transaction between a credit union and the restricted party. O.Reg. 237/09, s.76 (3). Permitted Transactions Transactions of nominal value or not material 77.A credit union may enter into a transaction with a restricted party if the value of the transaction is nominal or if the transaction is not material when measured by criteria established by the board. O.Reg. 237/09, s.77. Issue of shares 78.(1) A credit union may issue to a restricted party shares that are fully paid for with money or that are issued, (a) upon the conversion of other issued and outstanding securities of the credit union; (b) as a share dividend; (c) as a patronage return; (d) in accordance with an amalgamation agreement; (e) in exchange for shares of another body corporate; or (f) in exchange for other property. O.Reg. 237/09, s.78 (1). (2) A credit union may issue shares under clause (1) (e) or (f) only with the prior written approval of the Superintendent. O.Reg. 237/09, s.78 (2). Permitted transactions 79.(1) A credit union or its subsidiary may enter into any of the following transactions with a restricted party if the transaction is authorized in advance by at least two-thirds of the members of the board of the credit union: A written contract for the purchase of goods or services, other than management services, required by the credit union or the subsidiary to carry on business. The term of the contract and of each potential renewal must not exceed five years. The contract must state the consideration to be paid. A written contract for the provision of management services to or by the credit union or subsidiary. It must be reasonable that the credit union or subsidiary supply the services. The amount to be paid must not exceed fair market value. A written lease of personal property for the credit union or subsidiary to use in carrying on business. The term of the lease and of each potential renewal must not exceed five years. The amount to be paid must not exceed fair market value. A written lease of real property for the credit union or subsidiary to use in carrying on business. The term of the lease and of each potential renewal must not exceed 10 years. The amount to be paid must not exceed fair market value. A contract of employment with an officer of the credit union or a subsidiary. A written contract for employment benefit plans and pension plans and for other reasonable commitments incidental to the credit union or subsidiary employing individuals. A loan. The credit union or subsidiary must be otherwise authorized under the Act to make the loan. 
The terms of the loan must be no more favourable than those offered in the ordinary course of business by the credit union to its members. O.Reg. 237/09, s.79 (1). (2) A credit union or a subsidiary may enter into any of the following transactions with a restricted party: A contract of employment with an individual who is not a director or officer of the credit union or subsidiary. A deposit made by the credit union for clearing purposes with a financial institution that is a direct clearer or a group clearer member under the by-laws of the Canadian Payments Association. A contract to borrow money from the restricted party. The receipt of deposits from the restricted party. The issuance of debt obligations to the restricted party. O.Reg. 237/09, s.79 (2). (3) The by-laws of the credit union may require the transactions described in subsection (2) to be authorized by a process specified in the by-laws. O.Reg. 237/09, s.79 (3). (4) A credit union may make residential mortgage loans or personal loans to directors or officers of the credit union on terms more favourable than those offered in the ordinary course of business by the credit union to its members if two-thirds of the members of the board have approved the policies and procedures governing the making of such loans. O.Reg. 237/09, s.79 (4). Restricted Party Transaction Procedures Restricted party transaction procedures 80.(1) A credit union shall establish procedures to ensure that it complies with the restrictions governing restricted party transactions. O.Reg. 237/09, s.80 (1). (2) The procedures form part of the investment and lending policies and procedures of the credit union for the purposes of section 189 of the Act. O.Reg. 237/09, s.80 (2). (3) The procedures must include review and approval procedures to be followed by directors, officers and employees. O.Reg. 237/09, s.80 (3). (4) The procedures must require that a restricted party disclose to the credit union, in writing, the party’s interest in a transaction or a proposed transaction with the credit union or its subsidiary. O.Reg. 237/09, s.80 (4). (5) The disclosure to be made by a director or officer must be made in the manner set out in sections 146 and 147 of the Act, with necessary modifications. O.Reg. 237/09, s.80 (5). PART XI MEETINGS First Meeting First Meeting 81.(1) The first meeting of a credit union must be convened by a majority of the incorporators. O.Reg. 237/09, s.81 (1). (2) Written notice of the meeting must be mailed or sent by electronic means to each incorporator at least seven days before the date of the meeting. O.Reg. 237/09, s.81 (2). (3) The notice must state the date, time, place and purpose of the meeting. O.Reg. 237/09, s.81 (3). Quorum 82.At the first meeting of a credit union, a majority of the incorporators constitutes a quorum. O.Reg. 237/09, s.82. Business to be dealt with 83.The following business must be transacted at the first meeting of a credit union: The directors must be elected. The mandatory by-laws required under subsection 105 (2) of the Act must be enacted. The auditor must be appointed. O.Reg. 237/09, s.83. Financial Statements Financial statements 84.(1) For the purposes of subsection 213 (1) of the Act, the prescribed matters to be shown on the financial statements of a credit union are: The amount and composition of Tier 1 and Tier 2 capital and the percentage of regulatory capital held for determining compliance to the capital adequacy requirements of section 15. 
The amount of each type of asset held for liquidity purposes as determined under section 20 or 21. The amount of outstanding loans in each of the loan classes described in section 49. The amount of impaired loans, the allowance for impairment and the charge for impairment. The value of investments in marketable securities that are held to maturity, available for sale and designated as held for trading. O.Reg. 237/09, s.84 (1). (2) The following time periods are prescribed, for the purposes of subsection 213 (1) of the Act, as the time periods to which the prescribed matters must relate: The most recently completed financial year. The financial year immediately before the most recently completed financial year. O.Reg. 237/09, s.84 (2). PART XII RETURNS, EXAMINATIONS AND RECORDS Document retention 85.(1) A credit union shall keep and maintain the following in accordance with section 231 of the Act: A copy of its articles of incorporation and any amendments to them or, if applicable, its other incorporating document and any amendments to it. A copy of its articles of continuance, if applicable. The by-laws and resolutions, including special resolutions, of the credit union. The register of members, shareholders and security holders required by section 230 of the Act to be kept by the credit union. A register of the directors, members of the audit committee and any other committees established by the board and all officers of the credit union, setting out their names, residential addresses, including the street and number, if any, their occupations and the several dates on which they have become or ceased to be a member of the board or committee. A register of all securities held by the credit union. Books of account and accounting records of the credit union. The minutes of all proceedings at meetings of members, shareholders, directors and committees. The audited financial statements of the credit union placed before the members at the most recent annual meeting. O.Reg. 237/09, s.85 (1). (2) Despite paragraph 8 of subsection (1), a credit union may dispose of minutes of committee proceedings that were held more than six years before the disposition. O.Reg. 237/09, s.85 (2). Maximum fee for by-laws 86.For the purpose of subsection 233 (2) of the Act, the prescribed amount is $25. O.Reg. 237/09, s.86. PART XIII LEAGUES Application Application 87.This Regulation applies with respect to a league as if it were a credit union, except to the extent modified by this Part. O.Reg. 237/09, s.87. Capital Structure Capital structure 88.For the purposes of subsection 74.1 (1) of the Act, the following are prescribed persons to whom a security of a league issued under circumstances described in clause 75 (1) (a) of the Act may be transferred: A member of the league issuing the securities. A member of a credit union that is a member of the league issuing the securities. The Corporation. O.Reg. 237/09, s.88. Adequate Capital Adequate capital 89.(1) A league has adequate capital if its regulatory capital at least equals 5 per cent of its total assets. O.Reg. 237/09, s.89 (1). (2) Section 15 does not apply with respect to a league. O.Reg. 237/09, s.89 (2). Business Powers Business powers 90.For the purposes of subsection 241 (3.1) of the Act, a league may engage in or carry on the following business activities and provide the following services: Accepting deposits and making loans. Guaranteeing loans. 
Providing administrative, advisory, educational, managerial, promotional and technical services to credit unions. Arranging for one or more pension plans for the directors, officers, employees and members of credit unions, their subsidiaries and subsidiaries of the league. Arranging for group bonding for directors, officers and employees of a credit union, its subsidiaries and subsidiaries of the league. Providing credit counselling to members of credit unions who are repaying loans made by the credit unions. O.Reg. 237/09, s.90. Permitted activities 91.For the purposes of section 173 of the Act, a league may provide investment counselling and portfolio management services to its members, depositors, subsidiaries and affiliates. O.Reg. 237/09, s.91. Group insurance 92.(1) A league may administer a group insurance policy for its employees, its members, the employees of its members or subsidiaries and credit unions that are not members and their employees. O.Reg. 237/09, s.92 (1). (2) Group accident and sickness insurance and group life insurance administered by a league must be restricted to the league’s employees, its members, the employees of its members or subsidiaries and credit unions that are not members and their employees. O.Reg. 237/09, s.92 (2). Trustee 93.For the purposes of section 177 of the Act, a league is authorized to act as trustee with respect to an escrow agreement relating to share offerings by a credit union. O.Reg. 237/09, s.93. Investment and Lending Investment and lending 94.Section 59 does not apply with respect to a loan made by a league to a credit union or to a subsidiary of the league. O.Reg. 237/09, s.94. Exception to restriction re single investments 95.(1) For the purposes of subsection 199 (1) of the Act, the prescribed amount is 10 per cent of a league’s deposits and regulatory capital. O.Reg. 237/09, s.95 (1). (2) Despite subsection (1), La Fédération des caisses populaires de l’Ontario may invest 25 per cent of its deposits and regulatory capital in La Fédération des caisses Desjardins du Québec. O.Reg. 237/09, s.95 (2). Connected persons 96.The following conditions are prescribed as conditions that, if satisfied in relation to a member or a customer of a league, result in persons being connected for the purposes of section 199 of the Act: Another person or entity is one of the following: i. a body corporate in which the member or customer holds or beneficially owns, directly or indirectly, at least 20 per cent of the voting securities, ii. an affiliate of a body corporate described in subparagraph i, iii. a person or entity that has a 50 per cent interest in a partnership in which the member or customer also has a 50 per cent interest, iv. a partnership in which the member or customer is a partner, v. a trust or estate in which the member or customer has a substantial beneficial interest, vi. a trust or estate in respect of which the member or customer serves as trustee or in a similar capacity, vii. a person or entity on whose financial resources the member or customer depends to repay a loan to a league, viii. a person who provides security to a league for a loan to the member or customer. Another individual is one of the following: i. a spouse who is financially dependent on the member or customer, ii. a relative of the member or customer or of the member’s or customer’s spouse who lives in the same home as the member or customer, who is financially dependent on the member, customer or spouse. O.Reg. 237/09, s.96. 
Subsidiaries Subsidiaries 97.For the purposes of subsection 241 (5) of the Act, leagues may carry on business through the following types of subsidiaries: A subsidiary in which a credit union may invest under the Act. A corporation established to maintain a stabilization fund for the benefit of the credit unions that are members of the league. A corporation established to administer development funds for the creation of new credit unions. A corporation established to administer development funds for investments in, and loans to, small businesses. A corporation that issues payment cards, credit cards or charge cards and operates a payment or charge card plan. O.Reg. 237/09, s.97. Restriction on investment in subsidiaries 98.For the purpose of subsection 200 (7) of the Act, the prescribed amount is 20 per cent of the league’s regulatory capital and deposits. O.Reg. 237/09, s.98. Exemptions from the Act Exemptions from the Act 99.Leagues are exempted under subsection 243 (2) of the Act from the following provisions of the Act: Section 31 (admissions outside bonds of association). Section 46 (withdrawal of members). Section 47 (expulsion of members). Section 201.1 (investment in another credit union). Section 217 (requisitions for meetings). O.Reg. 237/09, s.99. PART XIV DEPOSIT INSURANCE CORPORATION OF ONTARIO Definition Definition 100.In this Part, “deposit”, for the purpose of deposit insurance, has the meaning set out in the by-laws of the Corporation. O.Reg. 237/09, s.100. Investment of Funds Investment of funds 101.(1) For the purposes of section 269 of the Act, the Corporation may invest any funds not required in carrying out its objectives in securities in which a class 2 credit union may invest its funds. O.Reg. 237/09, s.101 (1). (2) The board of directors of the Corporation shall establish prudent investment policies and procedures for the purpose of carrying out its object of managing the Deposit Insurance Reserve Fund. O.Reg. 237/09, s.101 (2). (3) The board of directors of the Corporation shall review its investment policies and procedures at least once a year and shall make such revisions as may be necessary to ensure that the investment policies and procedures satisfy the requirements of subsection (2). O.Reg. 237/09, s.101 (3). Restriction on investments 102.For the purposes of section 269 of the Act, the Corporation’s investments are subject to the same restrictions that apply with respect to investments made by class 2 credit unions. O.Reg. 237/09, s.102. Deposit Insurance Limit Deposit insurance limit 103.(1) For the purposes of paragraph 2 of subsection 270 (2) of the Act, the Corporation shall not insure the amount of any one deposit that exceeds $100,000. O.Reg. 237/09, s.103 (1). (2) Despite subsection (1), the Corporation shall insure the amount of any deposit made to any of the following under the Income Tax Act(Canada): A registered retirement savings plan. A registered retirement income fund. A registered education savings plan. A registered disability savings plan. A tax-free savings account. O.Reg. 237/09, s.103 (2). Amalgamations Amalgamations 104.For the purposes of subsection 271 (3) of the Act, the prescribed amount is $100,000. O.Reg. 237/09, s.104. Annual Premium Annual premium 105.(1) For the purposes of paragraph 1 of subsection 276.1 (1) of the Act, the Corporation shall determine the credit union’s annual premium in accordance with this section. O.Reg. 237/09, s.105 (1). 
(2) The Corporation shall determine the risk rating of each credit union and league in accordance with this section and with the rules set out in DICO Risk Classification System, dated November 10, 2000, as amended from time to time, and published by the Corporation in The Ontario Gazette on November 25, 2000. O.Reg. 237/09, s.105 (2). (3) The risk rating of a credit union or league at a particular time is determined with reference to the following components: Capital: the level of regulatory capital of the credit union or league. Asset quality: the loan loss experience of the credit union or league. Management: the effectiveness of the risk management practices of the credit union or league, as determined with reference to the Act and By-law No. 5 of the Corporation (“Standards of Sound Business and Financial Practices”). Earnings: the average return on assets of the credit union or league. Asset and liability management: the level of interest rate risk of the credit union or league. O.Reg. 237/09, s.105 (3). (4) The annual premium payable by a credit union or league is calculated at the rate set out in Column 3 of the Table to this subsection opposite the category of risk rating set out in Column 2 within which the credit union’s or league’s risk rating falls.
TABLE
| Column 1: Premium Class | Column 2: Risk Rating | Column 3: Premium Rate |
| --- | --- | --- |
| 1 | 85 points or more | $0.90 per $1,000 of the funds described in subsection (5) for a credit union and in subsection (6) for a league |
| 2 | At least 70 points and less than 85 points | $1.00 per $1,000 of those funds |
| 3 | At least 55 points and less than 70 points | $1.15 per $1,000 of those funds |
| 4 | At least 40 points and less than 55 points | $1.40 per $1,000 of those funds |
| 5 | Less than 40 points | $2.10 per $1,000 of those funds |
O.Reg. 237/09, s.105 (4). (5) The calculation of the annual premium for a credit union is based only on Canadian funds on deposit with the credit union, and no premium is payable with respect to that portion of a deposit that is uninsured by virtue of section 270 of the Act. O.Reg. 237/09, s.105 (5). (6) The calculation of the annual premium for a league is based on Canadian funds on deposit with the league for a person that is not a credit union, and no premium is payable with respect to that portion of a deposit that is uninsured by virtue of section 270 of the Act. O.Reg. 237/09, s.105 (6). (7) The Corporation may estimate the amount of funds on deposit with the credit union or league using the quarterly financial return of the credit union or league and may adjust the premium upon receiving the audited financial statements. O.Reg. 237/09, s.105 (7). (8) The annual premium payable by a credit union or league that carries on business for less than one year shall be reduced by an amount proportionate to the period during which it did not carry on business. O.Reg. 237/09, s.105 (8). (9) Despite subsections (4) and (8), the minimum annual premium payable by a credit union or league is $250. O.Reg. 237/09, s.105 (9). (10) The Corporation may use approximate figures in determining or calculating an amount under this section. O.Reg. 237/09, s.105 (10). Payment of annual premium 106.A credit union or league shall pay its annual premium within 30 days after the date of the invoice for the premium. O.Reg. 237/09, s.106. Audited statement of deposits 107.A credit union or league shall file an audited statement of its deposits with the Corporation at such time as the Corporation directs and respecting such period as the Corporation directs. O.Reg. 237/09, s.107.
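To illustrate how section 105 fits together: the risk rating selects a rate per $1,000 from the Table in subsection (4), subsections (5) and (6) fix the deposit base, subsection (8) prorates the premium for a partial year and subsection (9) sets a $250 minimum. The sketch below restates that calculation; the function names, the months-based proration and the single "insured Canadian deposits" figure are simplifying assumptions for illustration only and are not part of the Regulation or of the DICO Risk Classification System.

```python
# Illustrative sketch of the s.105 annual premium calculation. Names and the
# proration approach are assumptions for illustration only.

def premium_rate_per_1000(risk_rating_points: float) -> float:
    """s.105 (4) Table: premium rate per $1,000 of the funds described in
    s.105 (5) and (6)."""
    if risk_rating_points >= 85:
        return 0.90
    if risk_rating_points >= 70:
        return 1.00
    if risk_rating_points >= 55:
        return 1.15
    if risk_rating_points >= 40:
        return 1.40
    return 2.10

def annual_premium(risk_rating_points: float,
                   insured_canadian_deposits: float,
                   months_in_business: int = 12) -> float:
    """Applies the rate per $1,000 (s.105 (4) to (6)), prorates for a partial
    year (s.105 (8)) and enforces the $250 minimum (s.105 (9))."""
    premium = premium_rate_per_1000(risk_rating_points) * insured_canadian_deposits / 1000
    premium *= min(months_in_business, 12) / 12   # proration, s.105 (8)
    return max(premium, 250.0)                    # minimum, s.105 (9)

# Example: a risk rating of 72 points on $40 million of insured Canadian
# deposits for a full year gives $1.00 per $1,000, i.e. a $40,000 premium.
```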
PART XV CONTINUING AS OR CEASING TO BE AN ONTARIO CREDIT UNION Continuing as an Ontario Credit Union Articles of continuance 108.The following are prescribed, for the purposes of subsection 316 (3) of the Act, as documents that must accompany the articles of continuance: A copy of the incorporating document of the body corporate, together with all amendments to the document, certified by the officer of the incorporating jurisdiction who is authorized to so certify. A letter of satisfaction, certificate of continuance or other document issued by the proper officer of the incorporating jurisdiction that indicates that the body corporate is authorized under the laws of the jurisdiction in which it was incorporated or continued to apply for articles of continuance. O.Reg. 237/09, s.108. Conditions for issue of certificate of continuance 109.The following are prescribed as conditions for the purposes of subsection 316 (5) of the Act: The Superintendent shall not issue a certificate of continuance unless the body corporate satisfies the Superintendent that the matters set out in paragraphs 1 to 5 of subsection 16 (2) of the Act are satisfied. The Superintendent shall not issue a certificate of continuance unless the body corporate satisfies the Superintendent that the body corporate would meet all the requirements of the Act if it were continued as a credit union. O.Reg. 237/09, s.109. Limits on transition period 110.(1) The prescribed maximum period for the purposes of paragraph 1 of subsection 316 (12) of the Act is two years beginning on the date the articles of continuance became effective. O.Reg. 237/09, s.110 (1). (2) The prescribed maximum extension period for the purposes of paragraph 2 of subsection 316 (12) of the Act is seven years beginning on the date the articles of continuance became effective. O.Reg. 237/09, s.110 (2). Transfer to Another Jurisdiction Conditions for issue of certificate of continuance 111.The following are prescribed as conditions for the purposes of subsection 316.1 (5) of the Act: The Superintendent shall not issue a certificate of approval of continuance unless the credit union satisfies the Superintendent as to the following: i. the shareholders or members who voted against the special resolution to apply for the certificate of continuance will be entitled to be paid the value of their membership, patronage and other shares, calculated in accordance with subsection 62 (2) of the Act, ii. the credit union will proceed with the continuation before the certificate of approval of continuation expires, unless the directors, with the authorization of the shareholders or members, abandon the application. The Superintendent shall not issue a certificate of approval of continuance unless the credit union satisfies the Superintendent that after the credit union is continued under the laws of the other jurisdiction, the laws of that jurisdiction provide in effect that, i. the continued body corporate will possess all the property, rights, privileges and franchises and be subject to all the liabilities, including civil, criminal and quasi-criminal, and all contracts, disabilities and debts of the credit union, ii. a conviction against, or ruling, order or judgment in favour of or against, the credit union may be enforced by or against the continued body corporate, and iii. the continued body corporate will continue as a party in any civil action commenced by or against the credit union. 
The Superintendent shall include, in each certificate of approval of continuance, a condition that the certificate expires if the credit union has not been continued within six months after the certificate was issued. O.Reg. 237/09, s.111. Continuation under Another Ontario Act Conditions for issue of certificate of continuance 112.The following are prescribed as conditions for the purposes of subsection 316.2 (5) of the Act: The Superintendent shall not issue a certificate of approval of continuance unless the credit union satisfies the Superintendent that the shareholders or members who voted against the special resolution to apply for the certificate of continuance will be entitled to be paid the value of their membership, patronage and other shares, calculated in accordance with subsection 62 (2) of the Act. The Superintendent shall not issue a certificate of approval of continuance unless the credit union satisfies the Superintendent that, after the credit union is continued, i. the continued body corporate will possess all the property, rights, privileges and franchises and be subject to all the liabilities, including civil, criminal and quasi-criminal, and all contracts, disabilities and debts of the credit union, ii. a conviction against, or ruling, order or judgment in favour of or against, the credit union may be enforced by or against the continued body corporate, and iii. the continued body corporate will continue as a party in any civil action commenced by or against the credit union. The Superintendent shall include, in each certificate of approval of continuance, a condition that the certificate expires if the credit union has not been continued within six months after the certificate was issued. O.Reg. 237/09, s.112. PART XVI CONSUMER PROTECTION Disclosure Re Interest Rates, etc. Disclosure re interest rates, etc. 113.(1) A credit union shall disclose to a prospective depositor the applicable rate of interest for the person’s account and the manner of calculating the interest payable. O.Reg. 237/09, s.113 (1). (2) Whenever there is a change in the rate of interest or in the manner of calculating the amount of interest that applies to a deposit account, the credit union shall disclose the change by means of, (a) delivering a written statement to a person in whose name the account is maintained; (b) displaying and making available copies of a written statement at each branch of the credit union where the accounts are held; or (c) displaying a general notice at each branch of the credit union where the accounts are kept. O.Reg. 237/09, s.113 (2). Disclosure upon renewal 114.If a credit union renews a term deposit account, the credit union shall disclose to the depositor the rate of interest for the account and the manner of calculating the interest payable. O.Reg. 237/09, s.114. Disclosure in advertising 115.(1) In an advertisement about an interest-bearing deposit or a debt obligation, a credit union shall disclose how the interest is to be calculated and any circumstances that will affect the rate of interest. O.Reg. 237/09, s.115 (1). (2) An advertisement about an interest-bearing deposit must state how the balance of a deposit account will affect the rate of interest. O.Reg. 237/09, s.115 (2). Consumer Complaints by Members and Depositors Consumer complaints by members and depositors 116.(1) A credit union shall designate an officer or employee of the credit union to receive and attempt to resolve complaints made by members and depositors. O.Reg. 237/09, s.116 (1). 
(2) A credit union shall advise its members and depositors, in a manner that it considers appropriate, of the name and contact information of the officer or employee designated under subsection (1). O.Reg. 237/09, s.116 (2). (3) If a person makes a written complaint to the credit union about the business activities of the credit union, the credit union shall give the person a written response to the complaint setting out the credit union’s proposed resolution of the complaint. O.Reg. 237/09, s.116 (3). (4) A credit union shall also inform the person who made the complaint that, if the person is not satisfied with the proposed solution and if the person believes that the complaint relates to a contravention of the Act or a regulation made under the Act, the person may refer the complaint to the Superintendent. O.Reg. 237/09, s.116 (4). (5) A credit union shall keep a copy of every complaint it receives, every response it issues and any other document that relates to a complaint for six years from the date of the complaint and shall make them available if requested to do so by the Superintendent. O.Reg. 237/09, s.116 (5). (6) The officer or employee designated under subsection (1) shall report at least once annually to the board about the complaints received and how they were disposed of in a form that is satisfactory to the board. O.Reg. 237/09, s.116 (6). Inquiry by Superintendent 117.(1) If, as a result of receiving a complaint, the Superintendent addresses an inquiry to a credit union or an officer about the conduct of the credit union’s business, the credit union or officer shall promptly reply in writing to the inquiry. O.Reg. 237/09, s.117 (1). (2) If requested to do so by the Superintendent, the credit union shall give a copy of the Superintendent’s inquiry and the reply to each director of the credit union and the inquiry and reply shall form part of the minutes of the next board meeting. O.Reg. 237/09, s.117 (2). PART XVII ADMINISTRATIVE PENALTIES Administrative penalties 118.(1) For the purposes of subsections 331.2 (1) and 331.3 (1) of the Act, the amount of the administrative penalty for a contravention is, for each day on which the contravention occurs or continues, $100 for a class 1 credit union and $250 for a class 2 credit union. O.Reg. 237/09, s.118 (1). (2) If the contravention is a failure to file a document or to provide information in accordance with subsection 331.2 (2) or 331.3 (2) of the Act, the contravention occurs on the day following the day on which the document was required to be filed or the information was required to be provided and continues until it is filed or provided, as the case may be, or until the credit union is notified by the Superintendent or the Corporation that the document or the information is no longer required. O.Reg. 237/09, s.118 (2). (3) Despite subsection (2), where a person or entity has filed a document or provided information in the appropriate form but the document or information is incomplete or inaccurate, the contravention is deemed to have occurred on the day on which the person or entity is given written notice that the document or information is incomplete or inaccurate. O.Reg. 237/09, s.118 (3). 
(4) If the contravention is a failure to hold a meeting in accordance with subsection 331.2 (2) or 331.3 (2) of the Act, the contravention is deemed to occur on the third day following the day on which the meeting was required to be held and continues until the meeting is held or until the credit union is notified by the Superintendent or the Corporation that the meeting is no longer required. O.Reg. 237/09, s.118 (4).

(5) In determining whether to impose an administrative penalty on a person or entity under section 331.2 or 331.3 of the Act for a purpose set out in subsection 331.1 (1) of the Act, the Superintendent or the Corporation, whichever is authorized to impose the penalty, shall consider only the following:

1. Whether the contravention was caused by an event outside the person or entity’s control.

2. Whether the person or entity could have taken steps to prevent the contravention.

3. With respect to incomplete or inaccurate documents or information, whether due diligence was exercised in filing the documents or preparing the information.

4. Whether the person or entity derived or reasonably might have been expected to derive, directly or indirectly, any economic benefit from the contravention or failure. O.Reg. 237/09, s.118 (5).

(6) A person or entity on whom an administrative penalty has been imposed must pay the penalty,

(a) if the order is not appealed, within 30 days from the date of the order of the Superintendent or the Corporation imposing the penalty or such longer time as may be specified in the order; or

(b) if the order is appealed under subsection 331.2 (5) or 331.3 (5) of the Act, within 30 days from the date the Tribunal confirms or varies the order or such longer time as may be specified in the order. O.Reg. 237/09, s.118 (6).

(7) Administrative penalties shall be paid into the Consolidated Revenue Fund. O.Reg. 237/09, s.118 (7).

119. Omitted (revokes other Regulations). O.Reg. 237/09, s.119.

120. Omitted (provides for coming into force of provisions of this Regulation). O.Reg. 237/09, s.120.
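As a rough illustration of how subsections 118 (1) and (2) combine (a hypothetical worked example, not part of the Regulation): suppose a class 2 credit union was required to file a document by March 31 and files it on April 10. The contravention occurs on April 1 and continues until the document is filed, roughly 10 days, so the administrative penalty would be on the order of 10 × $250 = $2,500; for a class 1 credit union the same delay would come to roughly 10 × $100 = $1,000.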
139
Plünnecke inequalities for measure graphs with applications | Ergodic Theory and Dynamical Systems | Cambridge Core

Ergodic Theory and Dynamical Systems, Volume 37, Issue 2, April 2017, pp. 418–439. Published online by Cambridge University Press: 06 October 2015. Copyright © Cambridge University Press, 2015. Research Article.

Authors: KAMIL BULINSKI and ALEXANDER FISH, School of Mathematics and Statistics, University of Sydney, Australia.

Abstract

We generalize Petridis’s new proof of Plünnecke’s graph inequality to graphs whose vertex set is a measure space. Consequently, by a recent work of Björklund and Fish, this gives new Plünnecke inequalities for measure-preserving actions which enable us to deduce, via a Furstenberg correspondence principle, Banach density estimates in countable abelian groups that extend those given by Jin.

References

Beiglböck, M., Bergelson, V. and Fish, A. Sumset phenomenon in countable amenable groups. Adv. Math. 223(2) (2010), 416–432.

Björklund, M. and Fish, A. Plünnecke inequalities for countable abelian groups. J. Reine Angew. Math., to appear. Preprint, 2013, arXiv:1311.5372v2.

Björklund, M. and Fish, A. Product set phenomena for countable groups. Adv. Math. 275 (2015), 47–113.

Furstenberg, H. Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions. J. Anal. Math. 31 (1977), 204–256.

Jin, R. Plünnecke’s theorem for asymptotic densities. Trans. Amer. Math. Soc. 363(10) (2011), 5059–5070.

Petridis, G. Plünnecke’s inequality. Combin. Probab. Comput. 20(6) (2011), 921–938.

Plünnecke, H. Eine zahlentheoretische Anwendung der Graphentheorie. J. Reine Angew. Math. 243 (1970), 171–183.

Ruzsa, I. Z. Sumsets and structure. Combinatorial Number Theory and Additive Group Theory (Advanced Courses in Mathematics CRM Barcelona). Birkhäuser, Basel, 2009, pp. 87–210.

Tao, T. and Vu, V. H. Additive Combinatorics (Cambridge Studies in Advanced Mathematics, 105). Cambridge University Press, Cambridge, 2010, paperback edition.
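For context (a recollection of standard material, not quoted from the Cambridge Core record): the classical Plünnecke inequality that the abstract refers to, in the form reproved by Petridis, states that if $A, B$ are finite non-empty subsets of an abelian group with $|A+B| \le K|A|$, then there is a non-empty $X \subseteq A$ such that
$$|X + \underbrace{B + \cdots + B}_{k}| \le K^k |X| \quad \text{for every } k \ge 1,$$
and in particular $|B + \cdots + B| \le K^k |A|$. The article extends inequalities of this type to a measure-space setting.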
140
UNIVERSITY OF CALIFORNIA
Los Angeles

Decoupling for the parabola and connections to efficient congruencing

A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Mathematics

by

Zane Kun Li

2019

© Copyright by Zane Kun Li 2019

ABSTRACT OF THE DISSERTATION

Decoupling for the parabola and connections to efficient congruencing

by Zane Kun Li
Doctor of Philosophy in Mathematics
University of California, Los Angeles, 2019
Professor Terence Chi-Shen Tao, Chair

This thesis presents effective quantitative bounds for $l^2$ decoupling for the parabola. We first make effective the argument of Bourgain and Demeter in [BD17] for the case of the parabola. This allows us to improve upon the bound of $O_\varepsilon(\delta^{-\varepsilon})$ on the decoupling constant. Next, we give a new proof of $l^2$ decoupling for the parabola inspired from efficient congruencing. We also mention how efficient congruencing relates to decoupling for the cubic moment curve. This chapter contains the first known translation of an efficient congruencing argument into decoupling language. Finally, we discuss equivalences and monotonicity of various parabola decoupling constants and a "small ball" $l^2$ decoupling problem.

The dissertation of Zane Kun Li is approved.

John B. Garnett
Rowan Brett Killip
Monica Visan
Terence Chi-Shen Tao, Committee Chair

University of California, Los Angeles
2019

TABLE OF CONTENTS

1 Introduction
1.1 What is decoupling?
1.1.1 Decoupling for the paraboloid and moment curve
1.1.2 The extension operator formulation
1.2 Vinogradov's mean value theorem
1.3 Summary of the results
2 Effective $l^2$ decoupling for the parabola
2.1 Introduction
2.2 Weight functions and consequences
2.2.1 The weights $w_B$ and $\widetilde{w}_B$
2.2.2 Explicit Schwartz functions
2.2.3 Immediate applications
2.2.4 Bernstein's inequality
2.2.5 $l^2 L^2$ decoupling
2.3 Equivalence of local decoupling constants
2.3.1 Proof of Lemma 2.3.10
2.3.2 Proof of (2.56)
2.4 Parabolic rescaling: an application
2.5 Bilinear equivalence
2.6 Ball inflation
2.7 The iteration: preliminaries
2.8 Control of the bilinear decoupling constant
2.8.1 Case $2 \le p \le 4$
2.8.2 Case $4 < p < 6$
2.9 Decoupling at lacunary scales
2.9.1 Case $4 < p < 6$
2.9.2 Case $p = 6$
2.10 Decoupling at all scales
2.11 Proof of Theorem 2.1.1
3 An $l^2$ decoupling interpretation of efficient congruencing in 2D
3.1 Introduction
3.1.1 More notation
3.1.2 Outline of proof of Theorem 3.1.1
3.1.3 Comparison with 2D efficient congruencing as in [Pie19, Section 4]
3.1.4 Comparison with 2D $l^2$ decoupling as in [BD17]
3.1.5 Comparison of the iteration in Section 3.2 and 3.4
3.1.6 Overview of chapter
3.2 Proof of Theorem 3.1.1
3.2.1 Parabolic rescaling and consequences
3.2.2 A Fefferman-Cordoba argument
3.2.3 The $O_\varepsilon(\delta^{-\varepsilon})$ bound
3.2.4 An explicit bound
3.3 An uncertainty principle interpretation of Lemma 3.2.8
3.4 An alternate proof of $D(\delta) \lesssim_\varepsilon \delta^{-\varepsilon}$
3.4.1 Some basic properties
3.4.2 Ball inflation
3.4.3 The $O_\varepsilon(\delta^{-\varepsilon})$ bound
3.5 Unifying the two styles of proof
3.6 An efficient congruencing style proof of $l^2 L^4$ decoupling for the parabola
3.6.1 Setup and some standard lemmas
3.6.2 The key technical lemma
3.6.3 The iteration and endgame
3.7 A decoupling interpretation of efficient congruencing for the cubic moment curve
4 More properties of the parabola decoupling constant
4.1 Equivalence of some more parabola decoupling constants
4.1.1 Basic tools and definitions
4.1.2 Equivalence of parallelogram decoupling constants
4.1.3 Equivalence of decoupling constants
4.2 Monotonicity of the parabola decoupling constant
4.3 An elementary proof of $l^2 L^4$ decoupling for the parabola
4.4 Small ball $l^2$ decoupling for the paraboloid
4.4.1 The lower bound
4.4.2 The upper bound
References

ACKNOWLEDGMENTS

I would first like to thank my advisor Terence Tao for his constant encouragement, advice, patience, and support. I am very grateful to Ciprian Demeter and Larry Guth for the many discussions about decoupling we had at various conferences and during visits. I am greatly indebted to Elias Stein who taught my first course in Fourier analysis at Princeton and gave me advice and support through my undergraduate and graduate careers. I would also like to thank Kevin Hughes and Trevor Wooley for discussions on efficient congruencing that helped me gain a better understanding of the area. The work in Section 3.7 was based off discussions with Shaoming Guo and Po-Lam Yung at the Chinese University of Hong Kong and I thank Po-Lam for being such a wonderful host and providing me with support to visit him. I also thank Kirsti Biggs and Sarah Peluse for preliminary discussions on this topic. Finally, I thank Hong Wang for allowing me to use her example in Section 4.4. This thesis was completed with support in part by an NSF Graduate Research Fellowship (DGE-1144087), NSF grant DMS-1266164, and a Girsky Fellowship award.

VITA

2013 A.B. (Mathematics), Princeton University
2013 National Science Foundation Graduate Research Fellowship
2015 M.A. (Mathematics), University of California, Los Angeles
2018 Girsky Fellowship, Department of Mathematics, University of California, Los Angeles

PUBLICATIONS

Zane Kun Li, An $l^2$ decoupling interpretation of efficient congruencing in 2D, Preprint: arXiv:1805.10551.
Zane Kun Li, Effective $l^2$ decoupling for the parabola, Preprint: arXiv:1711.01202.
Zane Kun Li, Quadratic twists of elliptic curves with 3-Selmer rank 1, International Journal of Number Theory 10 (2014), no. 5, 1191–1217.
David Corwin, Tony Feng, Zane Kun Li, and Sarah Trebat-Leder, Elliptic curves with full 2-torsion and maximal adelic Galois representations, Mathematics of Computation 83 (2014), 2925–2951.
Zane Kun Li and Alexander W. Walker, Arithmetic properties of Picard-Fuchs equations and holonomic recurrences, Journal of Number Theory 133 (2013), 2770–2793.
Zane Kun Li, A normal form for cubic surfaces, International Journal of Algebra 4 (2010), no. 5, 233–239.
Zane Kun Li, On a special case of the intersection of quadric and cubic surfaces, Journal of Pure and Applied Algebra 214 (2010), no. 11, 2078–2086.
Stephen P. Humphries and Zane Kun Li, Counting powers of words in monoids, European Journal of Combinatorics 30 (2009), no. 5, 1297–1308.

CHAPTER 1

Introduction

1.1 What is decoupling?

Consider a region $\Omega \subset \mathbb{R}^d$ and a partition $\{\theta\}$ of $\Omega$. Let $f_\theta$ be defined on the Fourier side by $\widehat{f_\theta} = \widehat{f}\,1_\theta$. Then
$$f = \sum_{\theta} f_\theta.$$
Furthermore, since the $\{\theta\}$ are a partition of $\Omega$, Plancherel's theorem gives that
$$\|f\|_2 = \Big(\sum_{\theta} \|f_\theta\|_2^2\Big)^{1/2}$$
and hence to study $\|f\|_2$, it suffices to study $\|f_\theta\|_2$ for each $\theta$. In this sense $f$ has "decoupled" into the individual $f_\theta$ pieces.

We now ask what happens when, instead of an $L^2$ norm, we use an $L^p$ norm. That is, let $D_p(\Omega = \bigcup\theta)$ be the best constant such that
$$\|f\|_p \le D_p(\Omega = \bigcup\theta)\Big(\sum_{\theta}\|f_\theta\|_p^2\Big)^{1/2} \qquad (1.1)$$
for all $f$ with Fourier transform supported in $\Omega$. What is the best estimate we can have for $D_p(\Omega = \bigcup\theta)$? From the triangle inequality and Cauchy-Schwarz, $D_p(\Omega = \bigcup\theta) \le (\#\theta)^{1/2}$; however, we seek the optimal bound for $D_p(\Omega = \bigcup\theta)$.
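The trivial bound just mentioned is worth spelling out, since the same two steps recur throughout the thesis; the following one-line derivation, in the notation of (1.1), is a sketch rather than a quotation:
$$\|f\|_p = \Big\|\sum_{\theta} f_\theta\Big\|_p \le \sum_{\theta}\|f_\theta\|_p \le (\#\theta)^{1/2}\Big(\sum_{\theta}\|f_\theta\|_p^2\Big)^{1/2},$$
where the first inequality is the triangle inequality and the second is Cauchy-Schwarz, so indeed $D_p(\Omega = \bigcup\theta) \le (\#\theta)^{1/2}$.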
In (1.1), we defined an $l^2 L^p$ decoupling for $\Omega = \bigcup\theta$; however, we could just as well have defined an $l^q L^p$ decoupling here, where the $l^2$ sum is replaced by an $l^q$ one. For brevity, we will often just use the phrase "$l^2$ decoupling" rather than "$l^2 L^p$ decoupling."

Decoupling-type inequalities were first studied by Wolff in [Wol00], who proved a sharp $l^p L^p$ decoupling theorem for the cone in $2+1$ dimensions for $p > 74$ and applied it to derive new local smoothing estimates. Wolff's work was further extended and generalized in [LW02, LP06, GS09, GS10]. Bourgain in [Bou13] was able to use induction on scales from [BG11] and multilinear restriction from [BCT06] to partially resolve $l^2 L^p$ decoupling for smooth compact hypersurfaces in $\mathbb{R}^n$ in the range $2 \le p \le \frac{2n}{n-1}$. Following the proof of $l^2 L^p$ decoupling for smooth compact hypersurfaces in $\mathbb{R}^n$ by Bourgain and Demeter in [BD15] for the full range $2 \le p \le \frac{2(n+1)}{n-1}$, decoupling inequalities for various curves and surfaces have found many applications to PDE ([Lee16, DGG17, DGL17, BBG18, BHS18, DGL18, FSW18, DZ19]), geometric measure theory ([DGO18, GIO18]), and analytic number theory ([BD16, BDG16, Bou17a, Bou17b, BDG17, Guo17, Hea17, BW18, GZ18a, GZ18b]). This list is by no means exhaustive; for a more complete list see [Pie19].

1.1.1 Decoupling for the paraboloid and moment curve

We now restrict attention to $l^2$ decoupling for the paraboloid [BD15] and moment curve [BDG16]. In the case of decoupling for the paraboloid, let $\Omega = \{(s, |s|^2 + t) : s \in [0,1]^{n-1}, |t| \le \delta^2\}$ and partition $\Omega$ into $\theta$ of the form $\{(s, |s|^2 + t) : s \in Q, |t| \le \delta^2\}$ for frequency cubes $Q \subset [0,1]^{n-1}$ of side length $\delta$. Then in [BD15], it was shown that $D_p(\Omega = \bigcup\theta) \lesssim_\varepsilon \delta^{-\varepsilon}$ for all $2 \le p \le \frac{2(n+1)}{n-1}$. Note that having a $\delta^2$ neighborhood is natural here since at this scale, the $\theta$ look like $\delta \times \cdots \times \delta \times \delta^2$ rectangular boxes.

For decoupling for the moment curve $t \mapsto (t, t^2, t^3, \ldots, t^n)$, let $\Omega$ be the $\delta^n$-neighborhood of $\{(t, t^2, \ldots, t^n) : t \in [0,1]\}$ and the $\{\theta\}$ be the $\delta^n$-neighborhoods of $\{(t, t^2, \ldots, t^n) : t \in J\}$ where $J$ runs through a partition of $[0,1]$ into intervals of length $\delta$. Then in [BDG16], it was shown that $D_p(\Omega = \bigcup\theta) \lesssim_\varepsilon \delta^{-\varepsilon}$ for all $2 \le p \le n(n+1)$. Similarly to the previous paragraph, a $\delta^n$ neighborhood is natural here since at this scale, the $\theta$ look like $\delta \times \delta^2 \times \cdots \times \delta^n$ rectangular boxes. Applying this decoupling theorem to a particular $f$ then yields Vinogradov's mean value theorem.

We note that the ranges $2 \le p \le \frac{2(n+1)}{n-1}$ and $2 \le p \le n(n+1)$ in decoupling for the paraboloid and moment curve, respectively, are sharp up to $\varepsilon$-losses. That is, to have $D_p(\Omega = \bigcup\theta) \lesssim_\varepsilon \delta^{-\varepsilon}$ in the cases mentioned above, we need $2 \le p \le \frac{2(n+1)}{n-1}$ for the paraboloid and $2 \le p \le n(n+1)$ for the moment curve. To see the necessity of the upper bounds $p \le \frac{2(n+1)}{n-1}$ and $p \le n(n+1)$, we can consider the example where $\widehat{f}(\xi)$ is a Schwartz function version of $\frac{1}{|\Omega|}1_{\Omega}(\xi)$. Finally, to see the necessity of the lower bound $p \ge 2$ in both cases, we can consider the example where $\widehat{f}(\xi)$ is a Schwartz function version of $\sum_{\theta}\frac{1}{|\theta|}1_{\theta}(\xi)\,e^{2\pi i c_{\theta}\cdot\xi}$, where $\{c_{\theta}\}$ is a collection of very far spaced points in $\mathbb{R}^n$.

1.1.2 The extension operator formulation

Instead of using the Fourier localized version of decoupling, we will instead use the extension operator formulation of decoupling. Both versions of decoupling are equivalent (see Sections 2.3 and 4.1 and Remark 5.2 of [BD15]); however, the latter formulation makes it easier to see how decoupling estimates imply exponential sum estimates.
We define the extension operator formulation of decoupling for the paraboloid and moment curve. We note that we will use various different formulations in each of the chapters later, so the following two definitions are just for the reader to get a flavor of what definitions are ahead.

Let $\mathcal{P}_\delta(Q)$ be the partition of $Q \subset \mathbb{R}^n$ into cubes of side length $\delta$. For a cube $B \subset \mathbb{R}^n$ centered at $c_B$ of side length $R$, let
$$w_B(x) := \Big(1 + \frac{|x - c_B|}{R}\Big)^{-100n}.$$
For the paraboloid, given a cube $Q \subset [0,1]^{n-1}$, let
$$(E_Q g)(x) = \int_Q g(\xi)\, e(\xi\cdot\bar{x} + |\xi|^2 x_n)\, d\xi$$
where $e(z) := e^{2\pi i z}$ and $\bar{x} = (x_1, \ldots, x_{n-1})$. Let $D^{\mathrm{parab}}_p(\delta)$ be the best constant such that
$$\|E_{[0,1]^{n-1}} g\|_{L^p(B)} \le D^{\mathrm{parab}}_p(\delta)\Big(\sum_{Q \in \mathcal{P}_\delta([0,1]^{n-1})} \|E_Q g\|_{L^p(w_B)}^2\Big)^{1/2} \qquad (1.2)$$
for all functions $g : [0,1]^{n-1} \to \mathbb{C}$ and cubes $B \subset \mathbb{R}^n$ of side length $\delta^{-2}$. Then [BD15] showed that $D^{\mathrm{parab}}_p(\delta) \lesssim_\varepsilon \delta^{-\varepsilon}$ for $2 \le p \le \frac{2(n+1)}{n-1}$.

Now we define the extension operator formulation of decoupling for the moment curve. For $J \subset [0,1]$, let
$$(E_J g)(x) = \int_J g(\xi)\, e(\xi x_1 + \xi^2 x_2 + \cdots + \xi^n x_n)\, d\xi.$$
Let $D^{\mathrm{moment}}_p(\delta)$ be the best constant such that
$$\|E_{[0,1]} g\|_{L^p(B)} \le D^{\mathrm{moment}}_p(\delta)\Big(\sum_{J \in \mathcal{P}_\delta([0,1])} \|E_J g\|_{L^p(w_B)}^2\Big)^{1/2} \qquad (1.3)$$
for all functions $g : [0,1] \to \mathbb{C}$ and cubes $B \subset \mathbb{R}^n$ of side length $\delta^{-n}$. Then [BDG16] showed that $D^{\mathrm{moment}}_p(\delta) \lesssim_\varepsilon \delta^{-\varepsilon}$ for $2 \le p \le n(n+1)$.

In all sections except Sections 3.7 and 4.4, we will be considering decoupling for the parabola. Note that the parabola is the moment curve in $\mathbb{R}^2$.

1.2 Vinogradov's mean value theorem

For integers $s, k \ge 1$, let $J_{s,k}(N)$ be the number of $2s$-tuples $(x_1, \ldots, x_s, y_1, \ldots, y_s) \in [1,N]^{2s}$ such that
$$x_1 + x_2 + \cdots + x_s = y_1 + y_2 + \cdots + y_s$$
$$x_1^2 + x_2^2 + \cdots + x_s^2 = y_1^2 + y_2^2 + \cdots + y_s^2$$
$$\vdots$$
$$x_1^k + x_2^k + \cdots + x_s^k = y_1^k + y_2^k + \cdots + y_s^k.$$
Since $1_{n = 0} = \int_0^1 e(n\alpha)\, d\alpha$, we have
$$J_{s,k}(N) = \int_{[0,1]^k}\Big|\sum_{n=1}^{N} e(\alpha_1 n + \alpha_2 n^2 + \cdots + \alpha_k n^k)\Big|^{2s}\, d\alpha. \qquad (1.4)$$
If we set $x_i = y_i$ for $i = 1, 2, \ldots, s$, then $J_{s,k}(N) \ge N^s$. If we view the $x_j$ and $y_j$ as uniformly distributed in $[1,N]$, the $i$th power equation heuristically has a $1/N^i$ chance of being true and so this gives another $N^{2s}/\prod_{i=1}^{k} N^i = N^{2s - k(k+1)/2}$ many solutions. This heuristic can be made rigorous as follows. Observe that for $1 \le i \le k$,
$$|x_1^i + x_2^i + \cdots + x_s^i - y_1^i - y_2^i - \cdots - y_s^i| \le 2sN^i.$$
Then
$$N^{2s} \lesssim \sum_{|h_1| \le 2sN} \cdots \sum_{|h_k| \le 2sN^k} \int_{[0,1]^k}\Big|\sum_{n=1}^{N} e(\alpha_1 n + \alpha_2 n^2 + \cdots + \alpha_k n^k)\Big|^{2s} e(\alpha_1 h_1 + \alpha_2 h_2 + \cdots + \alpha_k h_k)\, d\alpha.$$
Applying the triangle inequality then shows that $J_{s,k}(N) \gtrsim_{s,k} N^{2s - \frac{k(k+1)}{2}}$. Thus we have obtained as a lower bound that
$$J_{s,k}(N) \gtrsim_{s,k} N^s + N^{2s - \frac{k(k+1)}{2}}.$$
In 1935, Vinogradov [Vin35] was motivated by applications to Waring's problem and the Riemann zeta function to study the mean value (1.4). The main conjecture in Vinogradov's mean value methods was that the lower bound on $J_{s,k}(N)$ is essentially an upper bound. That is,
$$J_{s,k}(N) \lesssim_{s,k,\varepsilon} N^{\varepsilon}\big(N^s + N^{2s - \frac{k(k+1)}{2}}\big) \qquad (1.5)$$
or equivalently
$$\int_{[0,1]^k}\Big|\sum_{n=1}^{N} e(\alpha_1 n + \alpha_2 n^2 + \cdots + \alpha_k n^k)\Big|^{2s}\, d\alpha \lesssim_{s,k,\varepsilon} N^{\varepsilon}\big(N^s + N^{2s - \frac{k(k+1)}{2}}\big). \qquad (1.6)$$
From Hölder's inequality it suffices to just consider the critical case when $2s = k(k+1)$, in which case (1.6) reduces to showing
$$\int_{[0,1]^k}\Big|\sum_{n=1}^{N} e(\alpha_1 n + \alpha_2 n^2 + \cdots + \alpha_k n^k)\Big|^{k(k+1)}\, d\alpha \lesssim_{k,\varepsilon} N^{\frac{k(k+1)}{2} + \varepsilon}.$$
A change of variables and using periodicity shows that this is equivalent to showing that
$$\int_{[0,N^k]^k}\Big|\sum_{n=1}^{N} e\Big(\alpha_1 \frac{n}{N} + \alpha_2 \Big(\frac{n}{N}\Big)^2 + \cdots + \alpha_k \Big(\frac{n}{N}\Big)^k\Big)\Big|^{k(k+1)} \lesssim_{k,\varepsilon} N^{k^2 + \frac{k(k+1)}{2} + \varepsilon}.$$
But this follows from $l^2$ decoupling for the moment curve (1.3) with the choice: $g(\xi) = \sum_{j=1}^{N} 1_{[(j-1)/N,\, j/N]}(\xi)$, $p = k(k+1)$, and $\delta = 1/N$. The critical case when $k = 2$ is classical.
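To make the critical case concrete in its smallest instance (a worked check added here, not taken from the thesis): for $k = 2$ the critical exponent is $s = \frac{k(k+1)}{2} = 3$, and
$$J_{3,2}(N) = \#\big\{(x,y) \in [1,N]^6 : x_1 + x_2 + x_3 = y_1 + y_2 + y_3,\ x_1^2 + x_2^2 + x_3^2 = y_1^2 + y_2^2 + y_3^2\big\}.$$
The lower bound above reads $J_{3,2}(N) \gtrsim N^3 + N^{2\cdot 3 - 3} = 2N^3$, and the main conjecture (1.5) predicts the matching upper bound $J_{3,2}(N) \lesssim_\varepsilon N^{3+\varepsilon}$, which is the classical estimate referred to in the last sentence.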
Wooley developed over a series of papers [Woo12, Woo13, Woo15, Woo17] the theory of efficient congruencing for Vinogradov's mean value theorem, eventually proving in [Woo17] that (1.5) is true for all $1 \le s \le \frac{1}{2}k(k+1) - \frac{1}{3}k + o(k)$. Additionally, in 2014 he was able to prove the critical $k = 3$ case ([Woo16], with a simplified approach by Heath-Brown in [Hea15]). In 2015, Bourgain-Demeter-Guth [BDG16] proved the sharp $l^2$ decoupling of the moment curve which then resolved Vinogradov's mean value conjecture for all $k \ge 2$. In 2017, Wooley [Woo19] then modified his efficient congruencing approach to also work for all $k \ge 2$. We refer the reader to [Pie19] for a more detailed summary of the history, background, and motivation of both efficient congruencing and decoupling methods.

Determining the dependence on $\varepsilon$ of the implied constant in $J_{k(k+1)/2,\, k}(N) \lesssim_\varepsilon N^{k(k+1)/2 + \varepsilon}$ is essential to applications of Vinogradov's mean value theorem to number theoretic results such as the growth of the zeta function in the critical strip, the zero free region, and zero density estimates [For02, Hea17]. See also [Hea17] and the MathOverflow question [Lew15] for applications of an effective Bourgain-Demeter-Guth result.

One key point is that it is important to work out the dependence on the dimension $n$. The proof of decoupling for the moment curve in $n$ dimensions relies on decoupling for the moment curve in $n-1$ dimensions. We then need to first study decoupling for $\xi \mapsto (\xi, \xi^2)$, in other words (2.4) with $n = 2$. This motivates why we study decoupling for the parabola in detail in this thesis.

Similarities between the efficient congruencing [Woo19] and decoupling [BDG16] methods, such as the reliance on translation-dilation invariance for efficient congruencing and parabolic rescaling for decoupling, have been observed (see Section 8.5 of [Pie19]). However, no precise dictionary between the two methods has been written down. Chapter 3 is the first to write down an efficient congruencing argument in decoupling language and makes precise how these two methods compare in the special case of a parabola. There is ongoing work joint with Shaoming Guo and Po-Lam Yung dealing with interpreting more complicated efficient congruencing arguments such as those found in [Hea15] and [Woo19].

1.3 Summary of the results

We now summarize all results in this thesis. We will let $D_p(\delta)$ be as in (1.2) with $n = 2$ (that is, the decoupling constant for the parabola).

Chapter 2 deals with obtaining explicit estimates on the decoupling constant for the parabola. By following the argument of [BD17], in Theorem 2.1.1 we show that
$$D_p(\delta) \lesssim \begin{cases} \exp\big(O_p\big((\log\tfrac{1}{\delta})^{c_p}\big)\big) & \text{if } 2 \le p < 6,\\[2pt] \exp\big(O\big(\tfrac{\log\frac{1}{\delta}}{\log\log\frac{1}{\delta}}\,\log\log\log\tfrac{1}{\delta}\big)\big) & \text{if } p = 6,\end{cases}$$
where $c_p < 1$ is a constant increasing to $1$ as $p$ increases to $6$. We make all implied constants explicit and we carefully deal with various smoothed versions of $1_B$ that show up in the argument.

Chapter 3 was inspired from reading [Pie19, Section 4.3] and is the first concrete interpretation of an efficient congruencing proof into a decoupling language. The proof of $l^2$ decoupling for the parabola is boiled down to four basic steps: parabolic rescaling, bilinearization, ball inflation, and Hölder. Using our explicit estimates from Chapter 2, the argument we give in this chapter obtains that
$$D_6(\delta) \lesssim \exp\Big(O\Big(\frac{\log\frac{1}{\delta}}{\log\log\frac{1}{\delta}}\Big)\Big).$$
This reproves
$$\Big\|\sum_{|n| \le N} a_n e^{2\pi i (nx + n^2 t)}\Big\|_{L^6_{x,t}(\mathbb{T}^2)} \lesssim \exp\Big(O\Big(\frac{\log N}{\log\log N}\Big)\Big)\Big(\sum_{|n| \le N} |a_n|^2\Big)^{1/2} \qquad (1.7)$$
without using any number theory.
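For orientation on the strength of these bounds (a comparison added here, not from the thesis): a bound of the form $\exp\big((\log\frac{1}{\delta})^{c}\big)$ with $c < 1$, or $\exp\big(\frac{\log\frac{1}{\delta}}{\log\log\frac{1}{\delta}}\big)$, really is stronger than $O_\varepsilon(\delta^{-\varepsilon})$, since $\delta^{-\varepsilon} = \exp\big(\varepsilon\log\frac{1}{\delta}\big)$ and
$$\exp\Big(\big(\log\tfrac{1}{\delta}\big)^{c}\Big) \le \exp\Big(\varepsilon\log\tfrac{1}{\delta}\Big) \quad\text{as soon as}\quad \big(\log\tfrac{1}{\delta}\big)^{c-1} \le \varepsilon,$$
that is, for all $\delta$ sufficiently small depending on $\varepsilon$; the same comparison applies to $\exp\big(\frac{\log\frac{1}{\delta}}{\log\log\frac{1}{\delta}}\big)$ because $\frac{1}{\log\log\frac{1}{\delta}} \to 0$ as $\delta \to 0$.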
Bourgain showed (1.7) in Proposition 2.36 of [Bou93] using the divisor bound. It is unknown whether the exp pOp log N log log N qq can be improved. We also give three proofs of D6pq À " ", one that looks like an efficient congruencing proof (Section 3.2), a proof using language more familiar to decoupling (Sections 3.3 and 3.4) that includes a simpli ed ball in ation lemma, and nally a proof that looks more similar to that done by Bourgain-Demeter in [BD15, BD17] (Section 3.5). Finally, in Section 3.7, we outline work in progress with Shaoming Guo and Po-Lam Yung dealing with interpreting efficient congruencing as in [Hea15] into the decoupling language. 7In our nal chapter, we tie up some loose ends about the equivalence of various parabola decoupling constants (Section 4.1). Various equivalences of parabola decoupling constants were rst dealt with in Section 2.3 to deal with issues arising from parabolic rescaling (Section 2.4). However all the decoupling constants in Section 2.3 were spatially localized (that is, have a LppBq or LppwB q) while in Section 4.1, we introduce some decoupling constants that are not spatially localized. This section complements the remark made in [BD15, Remark 5.2]. In Section 4.2, we give an immediate application of this equivalence and show that all eight parabola decoupling constants we de ne throughout this thesis (listed on Page 143) are equivalent and almost monotonic. Next we then given an elementary direct proof of l2L4 decoupling for the parabola in Section 4.3. Finally in Section 4.4, we discuss a \small ball" l2 decoupling problem whose solution was rst communicated to the author by Hong Wang. 8CHAPTER 2 Effective l2 decoupling for the parabola 2.1 Introduction In [BD15] and later with a more streamlined proof [BD17], Bourgain and Demeter prove that the decoupling constant associated to the paraboloid tp 1;  2; : : : ;  n1;  21    2 n1 q : i P r 0; 1su is On;" p"q for 2 ď p ď 2pn1q n1 . In [BDG16], Bourgain, Demeter, and Guth prove that the decoupling constant associated to the moment curve tp ;  2; : : : ;  nq :  P r 0; 1su is On;" p"q for 2 ď p ď npn 1q which resolved Vinogradov's mean value conjecture. Both the moment curve and the paraboloid are the same when n  2. It is this case we study and make effective. For each interval J Ă r 0; 1s and g : r0; 1s Ñ C, let pEJ gqp xq : ż J gpqepx 1 2x2q d where here epzq  e2iz . Note that Er0;1sg is the extension operator for the parabola tp ;  2q :  P r 0; 1su . For an integer E ě 1 and a square B  BpcB ; R q Ă R2 centered at cB  p cB1; c B2q of side length R, let wB;E pxq : p 1 |x  cB | R qE : If I is an interval in r0; 1s and  P p 0; 1q, let PpIq be the partition of I into |I|{  many intervals of length . Note that when writing PpIq, we assume |I|{  P N. For  P N2,2 ď p ă 8 , and E ě 1, let Dp;E pq be the smallest constant such that }Er0;1sg}LppBq ď Dp;E pqp ¸ JPP1{2pr 0;1sq }EJ g}2 LppwB;E q q1{2 (2.1) for all (axis-parallel) squares B Ă R2 of side length 1 and all functions g : r0; 1s Ñ C.Since 1 B ď 2E wB;E , the trivial bound for Dp;E pq is 2 E{p1{4 which follows from the triangle 9inequality and Cauchy-Schwarz. We will call Dp;E pq a (local) decoupling constant associated to the parabola tp ;  2q :  P r 0; 1su . Note that Dp;E pq is essentially the same size as Dec 2p; p; E q in [BD17] (a consequence of Proposition 2.2.11). By making effective the arguments in [BD17], we have the following improvement over Dp;E pq À " ". Theorem 2.1.1. Let E ě 100 and 0 ă  ă 264 E15 E with  P N2. 
piq If 2 ď p ď 4, then Dp;E pq ď exp pE6E plog 1  q2{3q: pii q If 4 ă p ă 6, then Dp;E pq ď exp pE6E plog 1  q23 13 log 2p p´22 qq: piii q If p  6, then D6;E pq ď exp pE6E log 1  log log 1  log log log 1  q: Using the trivial bound for  ą 264 E15 E , one can obtain an upper bound on Dp;E pq that is valid for all  P N2.In the proof of decoupling for the paraboloid or the moment curve in n dimensions, one crucial input is a decoupling in pn1q dimensions. This is most easily seen by the reliance on a Bourgain-Guth iteration to show the equivalence between linear and multilinear decoupling constants. In the case of the moment curve, this also makes an additional appearance in a step called lower dimensional decoupling (Lemma 8.2 of [BDG16]) since various sections of the moment curve look lower dimensional at certain scales. Thus ultimately we are reduced to rst studying explicit decoupling in n  2 dimensions. Because of this reduction of dimension argument, the arguments of [BD17, BDG16] should give an upper bound on the decoupling constant that is worse than those stated in Theorem 2.1.1. While the argument in this chapter is similar to [BD17], we highlight some key features. One major feature is that we carefully work with the various weight functions that show up in the argument and obtain estimates with explicit constants. Section 2.2 develops all 10 the estimates needed about the weight function wB;E . The most crucial observation is that wBp0;R q;E  wBp0;R 1q;E ÀE R12wBp0;R q;E for 0 ă R1 ď R (Lemma 2.2.1). The calculations in Section 2.2 can be easily generalized to n dimensions. A careful study of the weight wB;E reveals that the decoupling constant with weight wB;E does not behave too well under parabolic rescaling, see Lemma 2.2.18, Remark 2.2.19, and the proof of Proposition 2.4.1. Essentially this is because wB;E weights all directions evenly and so it is well-adapted for squares and circles but not rectangles and ellipses. To accommodate this, we introduce a second weight rwB;E pxq : wB;E pxqp 1 |x2  cB2| R qE (2.2) and let rDp;E pq be de ned similarly as in (2.1) but with wB;E replaced with rwB;E . We will then need that Dp;E pq  E rDp;E pq which is the topic of Section 2.3. Once we have this, we then recover almost multiplicativity of Dp;E pq in Section 2.4 and other applications of parabolic rescaling. This also introduces some slight changes compared to [BD17], namely our multilinear decoupling constant in Section 2.5 is de ned with weight rwB;E rather than wB;E and in our iteration, Ap uses weight rwB;E rather than wB;E . The ball in ation inequality of [BD17] is made effective in Section 2.6. We have chosen to keep track of the dependence on E since estimates for the decoupling constant in higher dimensions for a speci c E may depend on an estimate for the decoupling constant at a lower dimension with a different E (see for example, Theorems 5.1 and 8.4 of [BD17]). Another key feature is that we do not ignore integrality constraints about partitioning intervals into an integer number of smaller intervals. Tracing all the integrality constraints on the parameters in the argument, the iteration in Sections 2.7 and 2.8 gives a good upper bound for the linear decoupling constant along a lacunary sequence of scales (Section 2.9). Using almost multiplicativity of the linear decoupling constant (Proposition 2.4.1) and the trivial bound, we can upgrade this to be a good upper bound on all scales. This is done in Section 2.10. 
Finally optimizing in Section 2.11 completes the proof of Theorem 2.1.1. 11 2.2 Weight functions and consequences 2.2.1 The weights wB and rwB As de ned in Section 2.1, we recall that wB pxq : p 1 |x  cB | R qE and rwB pxq : wB pxqp 1 |x2  cB2| R qE : If w is a weight function for B, let }f }Lp pwq : p 1 |B| ż R2 |f pxq| pwpxq dx q1{p: We will make use of the following two inequalities that are immediate applications of H older's inequality: If 1 {p  1{q 1{r, then }f g }LppwB;E q ď } f }Lq pwB;E q}g}Lr pwB;E q and if q ą p, }f }Lp pwB;E q ď } f }Lq pwB;E q : (2.3) The above two inequalities also hold with wB;E replaced with rwB;E . When B is a square centered at the origin, wB and rwB obey the following two important self-convolution esti-mates. Lemma 2.2.1. Let E ě 10 . For 0 ă R1 ď R, wBp0;R q;E  wBp0;R 1q;E ď 4E R12wBp0;R q;E : (2.4) We also have R2wBp0;R q;E ď 3E 1Bp0;R q  wBp0;R q;E : (2.5) The same inequalities with the same constants hold true when wBp0;R q;E is replaced with rwBp0;R q;E . 12 Proof. We rst prove (2.4). We would like to give an upper bound for the expression 1 R12 ż R2 p1 |x  y| R qE p1 |y| R1 qE p1 |x| R qE dy depending only on E. A change of variables in y and rescaling x shows that it suffices to give an upper bound for ż R2 p1 | x  R1 R y|q E p1 | y|q E p1 | x|q E dy (2.6) depending only on E. If |x| ď 1, then (2.6) is ď 2E ż R2 p1 | y|q E dy ď 2E : If |x| ą 1, then we split (2.6) into p ż |xR1 Ry|ď |x| 2 ż |xR1 Ry|ą |x| 2 qp 1 | x  R1 R y|q E p1 | y|q E p1 | x|q E dy: (2.7) In the case of the rst integral in (2.7), pR1{Rq| y| ě | x|  | x  p R1{Rqy| ě | x|{ 2 and hence ż |xR1 Ry|ď |x| 2 p1 | x  R1 R y|q E p1 | y|q E p1 | x|q E dy ď p p1 | x|q E p1 p R{R1q| x|{ 2qE ż R2 p1 | x  R1 R y|q E dy ď p 4R1{RqE pR{R1q2 ď 4E : In the case of the second integral in (2.7), ż |xR1 Ry|ą |x| 2 p1 | x  R1 R y|q E p1 | y|q E p1 | x|q E dy ď p 1 | x| 1 | x|{ 2qE ż R2 p1 | y|q E dy ď 2E : This then proves (2.4). To prove (2.5) it suffices to give a lower bound for 1 R2 ż Bp0;R q p1 |x  y| R qE p1 |x| R qE dy which depends only on E. As before, rescaling x and a change of variables in y gives that it suffices to give a lower bound independent of x for ż Bp0;1q p 1 | x| 1 | x  y|qE dy ě p 1 | x| 2 | x|qE ě 2E : 13 Thus we have shown that 1 R2 p1Bp0;R q  wBp0;R q;E q ě 2E wBp0;R q;E which shows (2.5). We now prove the analogues for rwBp0;R q;E . We rst prove the analogue of (2.4). We would like to give an upper bound for the expression 1 R12 ż R2 p1 |x  y| R qE p1 |x2  y2| R qE p1 |y| R1 qE p 1 |y2| R1 qE p1 |x| R qE p1 |x2| R qE dy: A change of variables in y and rescaling x shows it suffices to bound ż R2 p1 | x  R1 R y|q E p1| y|q E p1 | x|q E p 1 | x2  R1 R y2|q E p1 | y2|q E p1 | x2|q E dy: (2.8) By the triangle inequality, p1 | x2  R1 R y2|q E p1 | y2|q E p1 | x2|q E ď 1 p R1{Rq| y2| 1 | y2| E ď 1: The upper bound for (2.8) then reduces to nding an upper bound for (2.6). To prove the analogue of (2.5) for rwBp0;R q;E , it suffices to give a lower bound for 1 R2 ż Bp0;R q p1 |x  y| R qE p1 |x2  y2| R qE p1 |x| R qE p1 |x2| R qE dy which depends only on E. 
Once again, a change of variables in y and a rescaling in x show that it suffices to give a lower bound for ż Bp0;1q p1 | x  y|q E p1 | x|q E p1 | x2  y2|q E p1 | x2|q E dy: (2.9) Since y P Bp0; 1q, the triangle inequality gives 1 | x2| 1 | x2  y2| ě 1 | x2| 3{2 | x2| ě 23: Therefore (2.9) is bounded below by p2{3qE ż Bp0;1q p 1 | x| 1 | x  y|qE dy ě p 2{3qE p1 | x| 2 | x|qE ě 3E : This then proves the analogue of (2.5) for rwBp0;R q;E . This completes the proof of Lemma 2.2.1. 14 Remark 2.2.2 . As a corollary of Lemma 2.2.1 and the observation that 1 B ÀE wB;E , we have wBp0;R q;E  wBp0;R q;E E R2wBp0;R q;E . This is also true for rwBp0;R q;E . Remark 2.2.3 . Let I  r R{2; R {2s and I1  r R1{2; R 1{2s with 0 ă R1 ď R. For x P R, let wI;E pxq : p 1 |x| R qE and similarly de ne wI1;E . The same proof as (2.4) gives that wI;E  wI1;E ď 4E R1wI;E : This estimate will be used extensively in the proof of Lemma 2.3.17. Lemma 2.2.1 has an immediate corollary which serves as the continuous analogue of the localization lemma given in Lemma 4.1 of [BD17]. This will allow us to upgrade from unweighted to weighted estimates, see later in Proposition 2.2.11. The inequality below is from the proof of Theorem 5.1 in [BD17]. Corollary 2.2.4. For 1 ď p ă 8 and E ě 10 , }f }pLppwBp0;R q;E q ď 3E ż R2 }f }pLp pBpy;R qq wBp0;R q;E pyq dy: This corollary is also true with wBp0;R q;E replaced with rwBp0;R q;E .Proof. Lemma 2.2.1 implies that ż R2 }f }pLp pBpy;R qq wBp0;R q;E pyq dy  ż R2 |f pxq| pp 1 R2 1Bp0;R q  wBp0;R q;E qp xq dx ě 3E }f }pLppwBp0;R q;E q which completes the proof of Corollary 2.2.4. We close this section by proving two lemmas about the interaction between rwB and rotations which will be used in the proof of Theorem 2.6.1. Lemma 2.2.5. Let cJ P r {2; 1  {2s, RJ  1 a1 4c2 J  1 2|cJ | 2|cJ | 1 ; 15 and J be such that cos J  1{a1 4c2 J and sin J  2|cJ |{ a1 4c2 J . Suppose |a| ď 21,then rwBpRJ pa; 0qT ; ´1qpsq ď 16 E rwBp0; ´1qpsq: Proof. We want to give an upper bound for p 1 | s| 1 | s  p cos J ; sin J qa|qE p 1 | s2| 1 | s2  p sin J qa|qE (2.10) that only depends on E. We rst consider the rst expression in (2.10). If |s| ă 31, then 1 | s| 1 | s  p cos J ; sin J qa| ď 4: If |s| ě 31, then 1 | s| 1 | s  p cos J ; sin J qa|  p 1 |s| 1qp 1 |s| |s  p cos J ; sin J qa||s| q1: (2.11) Since |s| ě 31 and |a| ď 21, |s  p cos J ; sin J qa||s| ě 1  |a||s| ě 13: Therefore (2.11) is ď 4 and so the rst expression in (2.10) is ď 4E . We next consider the second expression in (2.10). The proof is almost exactly the same. If |s2| ď 31, 1 | s2| 1 | s2  p sin J qa| ď 4: For |s2| ą 31, 1 | s2| 1 | s2  p sin J qa|  p 1 |s2| 1qp 1 |s2| |s2  p sin J qa||s2| q1: (2.12) Since |s2| ą 31 and |a| ď 21, |s2  p sin J qa||s2| ě 1  |a||s2| ě 13: Therefore (2.12) is ď 4 and so the second expression in (2.10) is ď 4E . This completes the proof of Lemma 2.2.5. 16 Lemma 2.2.6. Let RJ be as in Lemma 2.2.5. Then p1 |p R1 J xq1| 1 q2E p1 |p R1 J xq2| 1 q2E ď rwBp0; ´1q;E : (2.13) Proof. Since p1 |x|q ď p 1 |x1|qp 1 |x2|q , the left hand side of (2.13) is ď p 1 |R1 J x| 1 q2E  p 1 |x| 1 q2E ď rwBp0; ´1q;E where the equality is because RJ is a rotation. This completes the proof of Lemma 2.2.6. 2.2.2 Explicit Schwartz functions In addition to our polynomial decaying weights wB and rwB , we will also need to construct an explicit Schwartz function weight. More speci cally, in Corollary 2.2.9, we construct a nonnegative  in R2 such that 1 Bp0;1qpxq ď pxq and supp ppq Ă Bp0; 1q. 
Such an  will be used in the proof of reverse H older (Lemma 2.2.20), l2L2 decoupling (Lemma 2.2.21), and will also allow us to reset the \ E parameter" when we prove the equivalence of local decoupling constants in Section 2.3 (in particular, Lemma 2.3.8 and Proposition 2.3.11). We also construct an explicit smoothed indicator function which is equal to 1 on r 1; 1s and vanishes outside r 3; 3s. This will be used in the proof of ball in ation (Theorem 2.6.1) and the equivalence of local decoupling constants (Lemma 2.3.10). Existence of such Schwartz functions is easy to justify, however our goal is to obtain explicit bounds and so not only will we need to construct such functions but also need to construct them in such a way as to make it easy to compute with. Both Schwartz functions rely on the following lemma which is a small modi cation of Theorem 1.3.5 of [Hor90]. Lemma 2.2.7. Let a0 ě a1 ě    be a positive sequence such that a : ř iě0 ai ă 8 . For i ě 0, let Hipxq : 1 ai 1r ai{2;a i{2spxq and let ukpxq : p H0      Hkqp xq: 17 Then for k ě 2, uk P Ck1 c pRq is supported in r a{2; a {2s and converges (uniformly) to a function u P C8 c pRq as k Ñ 8 which is also supported in r a{2; a {2s. Furthermore, |upjqpxq| ď 2j a0a1    aj for j ě 0 and pupq  8 ź i0 sinc paiq where sinc pxq  p sin x q{p x q.Proof. The proof is the same as that in Theorem 1.3.5 of [Hor90] except in this case we have upjq k  r j1 ź i0 1 ai pai{2  ai{2qsp Hj      Hkq for j ď k  1 where paf qp xq  f px  aq and the product is a composition of operators. For the claim about pu, note that xHipq  sinc paiq which implies pukpq  śki0 sinc paiq: Since uk Ñ u uniformly as k Ñ 8 and since uk and u are both supported on r a{2; a {2s, puk Ñ pu uniformly as k Ñ 8 . This completes the proof of Lemma 2.2.7. We use Lemma 2.2.7 to construct a function on R such that ě 1r 1{2;1{2s and supp p pq Ă r 1{2; 1{2s. Lemma 2.2.8. For x P R, let pxq : 4psinc px 6 q 8 ź i1 sinc p x 6i2 qq 2: Then ě 1r 1{2;1{2s, supp p pq Ă r 1{2; 1{2s, and for all x P R and E ě 100 , | pxq| ď E6E p1 | x|q 2E : Proof. Let u be as in Lemma 2.2.7 with a0  1 and ai  1{i2. Then pupxq  sinc pxq 8 ź i1 sinc px{i2q and u is supported in r 3{2; 3{2s.18 Observe that pxq  F pxq2 with F pxq  2pupx{6q. Since F is even, for x P r 1{2; 1{2s, F pxq ě F p1{2q ě 1. As ě 0 for all x P R, ě 1r 1{2;1{2s. From the support of u, the Fourier transform of F is supported in r 1{4; 1{4s. Since p  pF  pF , p is supported in r 1{2; 1{2s.By the construction of u, |upjqpxq| ď 2jjź k0 a1 k  2jjź k1 k2 ď 2j j2j : The support of u and integration by parts gives that for any j ě 0 and x  0, |pupxq| ď 1 p2|x|q j }upjq}L1pRq ď 3j2j j |x|j : Applying the above bound to j  E shows that for x  0, |pupxq| ď E2E |x|E : Then for |x| ě 1, | pxq|  4|pupx{6q| 2 ď E5E |x|2E Thus if |x| ě 1, p1 | x|q 2E | pxq| ď E6E . If |x| ď 1, then explicit computation gives that p1 | x|q 2E | pxq| ď 4E1: This completes the proof of Lemma 2.2.8. Since Bp0; 1q  r 1{2; 1{2s2 and p1 | x|qp 1 | x2|q ď p 1 | x1|qp 1 | x2|q 2, we immediately have the following corollary. Corollary 2.2.9. Let be as in Lemma 2.2.8. For x P R2, let pxq  px1q px2q: Then  ě 1Bp0;1q, supp ppq Ă Bp0; 1q, and for all x P R2 and E ě 100 , |pxq| ď E12 E p1 | x1|q 2E p1 | x2|q 2E : For B  BpcB ; R q, de ne B pxq : px  cB R q: Then for all x P R2 and arbitrary E ě 100 , B pxq ď E12 E rwB;E pxq ď E12 E wB;E pxq: 19 We now construct our smoothed indicator function and estimate the size of the Fourier transform of its moments. Lemma 2.2.10. 
Let u be as in Lemma 2.2.7 with a0 : 1{3 and ai : 1{p 3i2q. Then pxq : p u  1r 2;2sqp xq is a C8 c pRq function which is equal to 1 on r 1; 1s and vanishes outside r 3; 3s. For k ě 0, x P R, and E ě 100 we have | ż R tk ptqe2itx dt | ď 6kE5E p1 | x|q 2E : (2.14) Proof. From Lemma 2.2.7, u is supported in r 1; 1s. Since u ě 0, }u}L1  pup0q  1. Then pxq  ż rx2;x 2sXr 1;1s upsq ds  $'&'% 1 if x P r 1; 1s 0 if x R r 3; 3s: To prove (2.14), we rst prove that for k ě 0, |B 2E pxk pxqq| ď 62EkE4E (2.15) where BE  dE {dx E . From Lemma 2.2.7, for j ě 0, |upjqpxq| ď 3p2j q śji1 3i2  3p6j qp j!q2.Thus for j ě 0, | pjqpxq|  |p upjq  1r 2;2sqp xq| ď 12 p6j qp j!q2: First suppose 2 E ď k. Then since is supported on r 3; 3s, |B 2E pxk pxqq|  | 2E ¸ j0 2Ej Bj pxkq p2Ejqpxq| ď 2E ¸ j0 2Ej k! pk  jq!3kj 12 p62Ej qp 2E  jq!2 ď 12 p62E 3kqp 2E!q22E¸ j0 kj ď 12 p62Ekqp 2E!q2: Next suppose k ă 2E. Then similarly, |B 2E pxk pxqq| ď k ¸ j0 2Ej k! pk  jq!3kj 12 p62Ej qp 2E  jq!2 ď 12 p62Ekqp 2E!q2: 20 Since E ě 100, 12 p2E!q2 ď E4E , and so when combined with the above implies |B 2E pxk pxqq| ď 62EkE4E which proves (2.15). We now prove (2.14). Integration by parts and (2.15) give that for x  0, | ż R tk ptqe2itx dt | ď 6 p2|x|q 2E }B 2E ptk ptqq} L8 ď 6kE4E |x|2E : Thus for |x| ě 1, p1 | x|q 2E | ż R tk ptqe2itx dt | ď 22E 6kE4E ď 6kE5E : Observe that ż R |tk ptq| dt ď 3k} }L1  4p3kq where the last equality we have used that u ě 0 and }u}L1  1. Then for |x| ă 1, p1 | x|q 2E | ż R tk ptqe2itx dt | ď 4E13k: This completes the proof of Lemma 2.2.10. 2.2.3 Immediate applications Corollary 2.2.4 allows us to upgrade from estimates in LppBq and LppB q to estimates in LppwB q and Lpp rwB q. We have the following proposition which contains all three different scenarios we will need to upgrade from an unweighted estimate to a weighted estimate. Proposition 2.2.11. Let I Ă r 0; 1s and P be a disjoint partition of I. paq Suppose for some 2 ď p ă 8 , we have }EI g}LppBq ď Cp ¸ JPP }EJ g}2 LppwB;E q q1{2 for all g : r0; 1s Ñ C and all squares B of side length R. Then for each E ě 10 , we have }EI g}LppwB;E q ď 12 E{pCp ¸ JPP }EJ g}2 LppwB;E q q1{2 (2.16) for all g : r0; 1s Ñ C and all squares B of side length R. 21 pbq Suppose we have }EI g}L2pBq ď Cp ¸ JPP }EJ g}2 L2p2 Bq q1{2 for all g : r0; 1s Ñ C and all squares B of side length R. Then for each E ě 100 , we have }EI g}L2pwB;E q ď 12 E{2E12 E Cp ¸ JPP }EJ g}2 L2pwB;E q q1{2 (2.17) for all g : r0; 1s Ñ C and all squares B of side length R. pcq Suppose for some 1 ď p ă q ă 8 , we have }EI g}Lq pBq ď C}EI g}Lp ppBq for all g : r0; 1s Ñ C and all squares B of side length R. Then for each E ě 100 , we have }EI g}Lq pwB;E q ď 12 E{qE12 E C}EI g}Lp pwB;Ep {qq (2.18) for all g : r0; 1s Ñ C and all squares B of side length R.The same results are also true with wB;E replaced with rwB;E .Proof. We rst prove paq. Since for a P R2, pEJ gqp xaq  p EJ hqp xq where hpq  gpqepa1 a22q, a change of variables shows that it suffices to prove (2.16) in the case when B is centered at the origin. 
Corollary 2.2.4 implies that }EI g}pLppwB;E q ď 3E ż R2 }EI g}pLp pBpy;R qq wB;E pyq dy ď 3E R2Cp ż R2 p ¸ JPP }EJ g}2 LppwBpy;R q;E q qp{2wB;E pyq dy  3E R2Cp}} EJ g}LppwBpy;R q;E q}pLpy pwB;E ql2 J : Since p ě 2, we can interchange the LpypwB;E q and l2 J norms and the above is ď 3E R2Cp}} EJ g}LppwBpy;R q;E q}pl2 JLpypwB;E q  3E R2Cp ¸ JPP p ż R2 }EJ g}pLppwBpy;R q;E qwB;E pyq dy q2{p p{2 : (2.19) 22 Since B is assumed to be centered at the origin, ż R2 }EJ g}pLppwBpy;R q;E qwB;E pyq dy  } EJ g}pLppwB;E wB;E q ď 4E R2}EJ g}pLppwB;E q where the inequality is an application of Lemma 2.2.1. Inserting this into (2.19) gives that }EI g}pLppwB;E q ď 12 E Cpp ¸ JPP }EJ g}2 LppwB;E q qp{2: Taking 1 {p powers of both sides completes the proof of (2.16). We next prove pbq. Once again it suffices to prove (2.17) in the case when B is centered at the origin. Corollary 2.2.4 implies that }EI g}2 L2pwBq ď 3E ż R2 }EI g}2 L2#pBpy;R qq wB pyq dy  3E R2C2 ¸ JPP ż R2 }EJ g}2 L2p2 Bpy;R qq wB pyq dy  3E R2C2 ¸ JPP }EJ g}2 L2p2 BwBq (2.20) By Corollary 2.2.9 and Lemma 2.2.1, 2 B  wB ď E24 E wB; 2E  wB;E ď E24 E 4E R2wB;E and hence (2.20) is ď E24 E 12 E C2 ¸ J1PP1{RpJq }EJ1 g}2 L2pwBq : Taking 1 {2 powers of both sides completes the proof of (2.17). We nally prove pcq. Again it suffices to prove (2.18) in the case when B is centered at the origin. Corollary 2.2.4 implies that }EI g}qLq pwB;E q ď 3E ż R2 }EI g}qLq pBpy;R qq wB;E pyq dy ď 3E CqR2q{p ż R2 }EI g}qLpppBpy;R qqwB;E pyq dy  3E CqR2q{p}| EI gpsq| Bpy;R qpsq} qLqy pwB;E qLps : 23 Since q ą p, we can interchange the norms and the above is ď 3E CqR2q{p}| EI g|Bpy;R q}qLps Lqy pwB;E q  3E CqR2q{pp ż R2 |EI gpsq| ppqB  wB;E qp sqp{q ds qq{p (2.21) Corollary 2.2.9 and Lemma 2.2.1 give that qB  wB;E ď E12 Eq pwB;Eq  wB;E q ď E12 Eq 4E R2wB;E : Inserting this into (2.21) shows that }EI g}qLq pwB;E q ď 12 E E12 Eq CqR22q{p}EI g}qLppwB;Ep {q q Changing Lq and Lp into Lq and Lp , respectively, removes the factor of R22q{p. Taking 1 {q powers of both sides then completes the proof of (2.18). Since the same estimates hold for rwB;E in Lemma 2.2.1, Corollary 2.2.4, and Corollary 2.2.9, the above proof also shows that the proposition also holds with every instance of wB;E replaced with rwB;E . This completes the proof of Proposition 2.2.11. Remark 2.2.12 . Note that a change of variables as in the beginning of the proof of Proposition 2.2.11 shows that knowing }EI g}LppBp0;R qq ď Cp ¸ JPP }EJ g}2 LppwBp0;R q;E q q1{2 (2.22) for all g : r0; 1s Ñ C implies that }EI g}LppBq ď Cp ¸ JPP }EJ g}2 LppwB;E q q1{2 for all g : r0; 1s Ñ C and all squares B of side length R. Therefore often to check the hypotheses of Proposition 2.2.11 we will just prove (2.22) instead. Remark 2.2.13 . Corollary 2.2.4 is not the only way to convert unweighted estimates to weighted estimates. Another approach is to prove an unweighted estimate where B is re-placed by 2 nB for all n ě 0 and then use that wB;E  ř ně0 2nE 12nB to conclude the weighted estimate. 24 Proposition 2.2.14. Let B be a square of side length R and let B be a disjoint partition of B into squares ∆ with side length R1 ă R. Then for E ě 10 , ¸ ∆PB w∆;E ď 48 E wB;E : (2.23) This inequality remains true with w∆;E and wB;E replaced with rw∆;E and rwB;E .Proof. It suffices to prove the case when B is centered at the origin. 
Since B is a disjoint partition of B, ¸ ∆PB 1∆ ď 1B : Therefore ¸ ∆PB 1∆  wBp0;R 1q;E ď 1B  wBp0;R 1q;E : Lemma 2.2.1 gives that 3E R12 ¸ ∆PB w∆;E ď ¸ ∆PB 1∆  wBp0;R 1q;E and 1B  wBp0;R 1q;E ď 8E R12wB;E where here we have also used 1 B ď 2E wB;E . Rearranging then proves (2.23). Since 1 B ď 4E rwB;E , the same proof then proves (2.23) with w∆;E and wB;E replaced with rw∆;E and rwB;E ,respectively. This completes the proof of Proposition 2.2.14. Remark 2.2.15 . The only property we really need in Proposition 2.2.14 is that ř ∆PB 1∆ ď C1B for some absolute constant C. In particular, the same proof will work with nitely overlapping covers and when R{R1 R N.We illustrate two lemmas regarding how the weights wB and rwB and shear matrices interact. Both lemmas are similar to Proposition 2.2.14 except now there is an additional shear matrix. Lemma 2.2.16 is used in the proof of Lemma 2.3.10. This lemma is a warmup to the proof of Lemma 2.2.18. Lemma 2.2.18 is the key lemma for the application of parabolic rescaling in Propositions 2.4.1 and 2.5.2 and is why we have two separate weights wB and rwB .25 Lemma 2.2.16. Let E ě 10 and S  p 1 a 0 1 q where |a| ď 2. Then wBp0;R q;E pSx q ď 90 E wBp0;R q;E pxq: Proof. Since our weights are centered at the origin, rescaling x, it suffices to prove the case when R  1. Since |a| ď 2, S1Bp0; 1q Ă Bp0; 3q and so 1 Bp0;1qpSx q ď 1Bp0;3qpxq for all x P R2. Therefore 1Bp0;1qpxq ď 1Bp0;3qpS1xq for all x P R2. Convolving both sides by wBp0;1q;E and applying Lemma 2.2.1 gives that 3E wBp0;1q;E ď p 1Bp0;3q  S1q  wBp0;1q;E : Thus it remains to prove that p1Bp0;3q  S1q  wBp0;1q;E ď 30 E wBp0;1q;E  S1: This is the same as showing that ż R2 1Bp0;3qpS1yqp 1 | x  y|q E dy ď 30 E p1 | S1x|q E : (2.24) If x P 25SpBp0; 1qq , then |S1x| ď 24?2 and so ż R2 1Bp0;3qpS1yqp 1 | x  y|q E dy ď 1 ď 24 E p1 | S1x|q E which proves (2.24) in this case. Next let x P 2n1SpBp0; 1qqz 2nSpBp0; 1qq for some n ě 5. Then p1 | S1x|q E ě p 1 ?2  2nqE ě p 2  2nqE : Thus in this case, to prove (2.24) it suffices to show that ż R2 1Bp0;3qpS1yqp 1 | x  y|q E dy ď 15 E 2nE : (2.25) We have ż R2 1Bp0;3qpS1yqp 1 | x  y|q E dy  ż SpBp0;3qq 1 p1 | x  y|q E dy  ż xSpBp0;3qq 1 p1 | y|q E dy: (2.26) 26 For y P x  SpBp0; 3qq , write y  Sa  Sb where a P Bp0; 2n1qz Bp0; 2nq and b P Bp0; 3q.Since }S1} ď 2}S1}max ď 4, |y|  | Spa  bq| ě } S1}1|a  b| ě 14p2n1  32 ?2q ě 110 2n: Therefore the right hand side of (2.26) is bounded above by 9 p10 E q2nE which proves (2.25) and hence (2.24). This completes the proof of Lemma 2.2.16. Remark 2.2.17 . The same proof also shows that wBp0;R q;E pStxq ď 90 E wBp0;R q;E since the only two properties of S we used were S1Bp0; 1q Ă Bp0; 3q and }S1} ď 4. These properties are satis ed if we replace S with St. Lemma 2.2.18. For 0 ă  ď  ă 1 with 1{2 P N, let T  1{2 2a 1{2 0  with 0 ď a ď 1  1{2 and B  Bp0;  1q. Then T pBq is contained in a 31{21  1 rectangle centered at the origin. Let B denote the partition of this rectangle into 31{2 many squares with side length  1. Then for E ě 100 , ¸ ∆PB rw∆;E ď 720 E wB;E  T 1: (2.27) Proof. The proof is similar to what we did in Proposition 2.2.14 and Lemma 2.2.16. Since B is axis-parallel and centered at the origin, T pBq is a parallelogram centered at the origin with a base parallel to the x-axis and height  1. 
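As an aside, the conclusion of Lemma 2.2.16 just proved is easy to spot-check numerically. Modelling the weight in the normalized case R = 1 by the kernel (1 + |x|)^{-E} that appears in (2.24), the claim w_{B(0,1),E}(Sx) <= 90^E w_{B(0,1),E}(x) amounts to the elementary inequality 1 + |x| <= 90 (1 + |Sx|); the sketch below (sample ranges arbitrary) confirms it with a large margin.

    import numpy as np

    rng = np.random.default_rng(1)
    worst = 0.0
    for _ in range(100000):
        a = rng.uniform(-2.0, 2.0)                # Lemma 2.2.16 allows any |a| <= 2
        S = np.array([[1.0, a], [0.0, 1.0]])      # the shear matrix S
        x = rng.uniform(-1e3, 1e3, size=2)
        worst = max(worst, (1.0 + np.linalg.norm(x)) / (1.0 + np.linalg.norm(S @ x)))
    print(worst)   # stays below 3, hence far below 90, since |x| <= ||S^{-1}|| |Sx| <= 3 |Sx| for |a| <= 2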
The corners of B are given by p 1{2; 1{2q and hence the corners of T pBq are given by p121{2p1 2aq1; 12 1qp121{2p1  2aq1; 12 1qp 121{2p1 2aq1; 12 1qp 121{2p1  2aq1; 12 1q: Then T pBq is contained in a 3 1{21  1 rectangle centered at the origin. Note that T pBq Ă Ť ∆PB ∆ Ă 10 T pBq (we actually have Ť ∆PB ∆ Ă p 3 2aqT pBq, but this is not needed) and so ¸ ∆PB 1Bpc∆; ´1q ď 1Bp0;10 ´1q  T 1: 27 Convolution with rwBp0; ´1q;E gives that p 1q2 ¸ ∆PB rw∆;E ď 3E p1Bp0;10 ´1q  T 1q  rwBp0; ´1q;E : Thus it suffices to show that p 1q2p1Bp0;10 ´1q  T 1q  rwBp0; ´1q;E ď 240 E wBp0; ´1q;E  T 1: That is, p 1q2 ż R2 1Bp0;10 ´1qpT 1yqp 1 |x  y|  1 qE p1 |x2  y2|  1 qE dy ď 240 E p1 |T 1x| 1 qE : Rescaling x and y (by setting X  x{p  1q and Y  y{p  1q) shows it suffices to prove that ż R2 1Bp0;10 qpS1yqp 1 | x  y|q E p1 | x2  y2|q E dy ď 240 E p1 | S1x|q E (2.28) for all x P R2 where S  1T  p ´1{2 2a ´1{2 01 q: Suppose x P 26SpBq. Then |S1x| ď 32 ?2and so ż R2 1Bp0;10 qpS1yqp 1 | x  y|q E p1 | x2  y2|q E dy ď 1 ď 50 E p1 | S1x|q E : It then remains to prove (2.28) for x P 2n1SpBqz 2nSpBq for all n ě 6. Fix an n ě 6. For x P 2n1SpBqz 2nSpBq, |S1x| ď 2n1{2 and so p2n1qE ď p 1 |S1x|q E : Therefore to prove (2.28) it is enough to prove ż 10 SpBp0;1qq 1 |x  y|E p1 | x2  y2|q E dy ď 120 E 2nE for all x P 2n1SpBqz 2nSpBq. A change of variables shows that we need to prove ż x10 SpBp0;1qq 1 |y|E p1 | y2|q E dy ď 120 E 2nE (2.29) for all x P 2n1SpBp0; 1qqz 2nSpBp0; 1qq .28 Fix an x P 2n1SpBp0; 1qqz 2nSpBp0; 1qq . First suppose |x2| ě 22n{E . If y P x  10 SpBp0; 1qq , then y  Sa  Sb for some a P Bp0; 2n1qz Bp0; 2nq and b P Bp0; 10 q. Since }S1} ď 2}S1}max ď 4, we rst have |y|  | Spa  bq| ě } S1}1|a  b| ě 14|a  b| ě 14p2n1  5?2q ě 120 2n: Next, y2  x2  p Sb q2  x2  b2 and b2 P r 5; 5s and so 1 | x2| 1 | y2|  1 | x2| 1 | x2  b2| ď 1 | b2| ď 6: Therefore ż x10 SpBp0;1qq 1 |y|E p1 | y2|q E dy ď p 61 | x2|qE ż |y|ě 2n{20 1 |y|E dy ď 120 E 22n p1 | x2|q E 2nE and since |x2| ě 22n{E , we have proven (2.29) in this case. Next, suppose |x2| ă 22n{E . In this case, we claim that y P x  10 SpBp0; 1qq satis es |y| Á 2n1{2 and so we can bound the integral trivially. By assumption, |p S1xq2|  | x2| ă 22n{E : Since S1x P 2n1Bp0; 1qz 2nBp0; 1q, |S1x| ě 2n1. Thus |p S1xq1| ě 2n1  22n{E : Since pS1xq1  1{2x1  2ax 2, it follows that |x1| ě 1{2p2n1  3  22n{E q: As in the previous paragraph, write y  x  Sb for some b P Bp0; 10 q. Then |y| ě | y1|  | x1|  1{2|b1 2ab 2| ě 1{2p2n1  3  22n{E  15 q ě 151{22n where the last inequality we have used that n ě 6 and E ě 100. Thus in the case when |x2| ă 22n{E , ż x10 SpBp0;1qq 1 |y|E p1 | y2|q E dy ď p 100 1{2q5E E{22nE ď 6E 2nE which proves (2.29) in this case. This completes the proof of Lemma 2.2.18. 29 Remark 2.2.19 . The rw∆;E on the left hand side of (2.27) was needed to make sure the E on both sides stays the same which is needed when we iterate later (for example in Lemma 2.5.2). If the rw∆;E is replaced with w∆;E , then by the same method as the proof above, one can obtain ř ∆PB w∆;E ÀE wB;E 2  T 1. In this case, some loss in E must occur since we can consider the analogue of (2.28) and (2.29) and let a  0 and x  p 0; 2n1q. 2.2.4 Bernstein's inequality Another immediate application of Proposition 2.2.11 is Bernstein's inequality (also called reverse H older in [BD17]). This should be compared with (2.3) at the beginning of Section 2.2. Our proof of Lemma 2.2.20 is the same as that of Corollary 4.3 of [BD17] except we make effective all the implicit constants. Lemma 2.2.20. 
Let 1 ď p ă q ď 8 , E ě 100 , J Ă r 0; 1s with ℓpJq  1{R and B Ă R2 asquare with side length R ě 1. If q ă 8 , then }EJ g}Lq prwB;E q ď E23 E }EJ g}Lp prwB;Ep {qq : (2.30) If q  8 , then sup xPB |p EJ gqp xq| ď E23 E }EJ g}Lp prwB;E q : (2.31) Proof. Let  be as in Corollary 2.2.9. Since B ě 1B , }EJ g}Lq pBq ď } B EJ g}Lq pR2q: Let pxq  p2x1q p2x2q where is de ned as in Lemma 2.2.10. Then   1 on Bp0; 1q and vanishes outside Bp0; 3q. Since xB is supported on Bp0; 1{Rq, the Fourier transform of B EJ g is supported in some square S with side length 10 {R. Then we have the following self-replicating formula B EJ g  p B EJ gq  qS : Young's inequality then gives }B EJ g}Lq pR2q ď } B EJ g}LppR2q} qS }Lr pR2q  } qS }Lr pR2q}EJ g}LpppB q 30 where 1 {q  1{p 1{r  1 (since q ą p, we have r ą 1 and qS P Lr). Since qpq  p1{4q q p1{2q q p2{2q, }q}Lr pR2q  41{r1} q }2 LrpRq , applying Lemma 2.2.10 gives that } qS }Lr pR2q  p 10 {Rq22{r}q}Lr pR2q  25 1{r1 R2{r1 } q }2 LrpRq ď 25 1{r1 R2{r1 E10 E : Therefore }EJ g}Lq pBq ď 25 1{r1 E10 E R2{r1 }EJ g}LpppB q (2.32) for all squares B Ă R2 with side length R. If q ă 8 , applying Proposition 2.2.11 and then using that q ą p ě 1 and E ě 100 proves (2.30). If q  8 , then (2.32) and Corollary 2.2.9 implies that sup xPB |p EJ gqp xq| ď 25 1{pE22 E R2{p}EJ g}Lpp rwB;E q: Since E ě 100, (2.31) then follows. This completes the proof of Lemma 2.2.20. 2.2.5 l2L2 decoupling We now prove l2L2 decoupling which will follow from almost orthogonality. This proof is the same as that of Proposition 6.1 of [BD17] except we once again make explicit all implicit constants. Lemma 2.2.21. Let J Ă r 0; 1s be an interval of length ě 1{R such that |J|R P N. Then for E ě 100 and each square B Ă R2 with side length R, }EJ g}2 L2prwB;E q ď E13 E ¸ J1PP1{RpJq }EJ1 g}2 L2prwB;E q : Proof. Let  be as in Corollary 2.2.9. Since 2 B ě 1B , }EJ g}2 L2pBq ď } EJ g}2 L2p2 Bq  } B EJ g}2 L2pR2q  } ¸ J1PP1{RpJq B EJ1 g}2 L2pR2q : Note that the Fourier transform of B EJ1 g is supported in the 1 {R-neighborhood of the piece of parabola above J1. Therefore B EJ1 g and B EJ2 g have disjoint Fourier support if J1 and 31 J2 are separated by ě 2 intervals. Applying this and Plancherel gives } ¸ J1PP1{RpJq B EJ1 g}2 L2pR2q ď ¸ J1 1PP1{RpJq ¸ J1 2PP1{RpJq dpcJ1 1;c J1 2qď 2{R }B EJ1 1 g}L2 }B EJ1 2 g}L2 ď p ¸ J1 1PP1{RpJq }B EJ1 1 g}2 L2 q1{2p ¸ J1 1PP1{RpJq p ¸ J1 2PP1{RpJq dpcJ1 1;c J1 2qď 2{R }B EJ1 2 g}L2 q2q1{2 ď ?5p ¸ J1 1PP1{RpJq }B EJ1 1 g}2 L2 q1{2p ¸ J1 1PP1{RpJq ¸ J1 2PP1{RpJq dpcJ1 1;c J1 2qď 2{R }B EJ1 2 g}2 L2 q1{2 ď 5 ¸ J1PP1{RpJq }EJ1 g}2 L2p2 Bq : Thus we have shown that }EJ g}L2pBq ď ?5p ¸ J1PP1{RpJq }EJ1 g}2 L2p2 Bq q1{2 for all squares B Ă R2 with side length R. Applying Proposition 2.2.11 then completes the proof of Lemma 2.2.21. Remark 2.2.22 . To modify the weights wB and rwB , the main properties the weights need to satisfy are Lemma 2.2.1 and Lemma 2.2.18. The other lemmas such as Lemmas 2.2.5, 2.2.6, and 2.2.16 are also desired, but these should be easy to satisfy. 2.3 Equivalence of local decoupling constants Recall that rDp;E pq is de ned similarly as Dp;E pq except instead of wB;E we use rwB;E . The main goal of this section is to prove that Dp;E pq  E rDp;E pq (2.33) for 2 ď p ď 6, E ě 100, and  P N2. This is proven in Proposition 2.3.11. This equivalence is a consequence of a larger equivalence of a collection of local decoupling constants. 
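Returning briefly to Lemma 2.2.21, the exact orthogonality that drives it can be illustrated numerically in the idealized situation where the Fourier supports are literally disjoint: by Plancherel, the squared L^2 norm of the sum is then exactly the sum of the squared L^2 norms. The discrete signal length and the splitting into 16 frequency blocks below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(2)
    N = 1024
    bands = np.array_split(np.arange(N), 16)     # 16 disjoint frequency blocks, standing in for the intervals J'

    pieces = []
    for band in bands:
        spec = np.zeros(N, dtype=complex)
        spec[band] = rng.normal(size=band.size) + 1j * rng.normal(size=band.size)
        pieces.append(np.fft.ifft(spec))         # a signal whose Fourier support is exactly this block

    total = sum(pieces)
    lhs = np.sum(np.abs(total) ** 2)
    rhs = sum(np.sum(np.abs(f) ** 2) for f in pieces)
    print(abs(lhs - rhs) / rhs)                  # ≈ 1e-16: exact l^2 additivity for disjoint supports

In Lemma 2.2.21 the Fourier supports of the localized pieces E_{J'} g (after multiplication by the Schwartz cutoff from Corollary 2.2.9) are only disjoint once the intervals are separated by at least two others, and handling the neighbouring pairs by Cauchy-Schwarz is what produces the factor of 5 there.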
This 32 section is similar to Remark 5.2 of [BD15] and may be of independent interest since it shows that an array of slightly different local decoupling constants are essentially the same size. The restriction p ď 6 is very mild and can be removed with a bit more care (at the cost of introducing an implied constant that depends on p). However since 2 ď p ď 6 is precisely the range we need, we restrict to this range. The appearance of the weight rwB in parabolic rescaling (arising from Lemma 2.2.18) means that (2.33) will play an essential part of the argument (for example in Proposition 2.4.1, Lemma 2.5.2, and Lemma 2.8.11). Let fR denote the Fourier restriction of f to R. For each J  r nJ 1{2; pnJ 1q1{2s P P1{2 pr 0; 1sq , let J : tp s; L J psq tq : nJ 1{2 ď s ď p nJ 1q1{2; 5 ď t ď 5u where LJ psq : p 2nJ 1q1{2s  nJ pnJ 1q and 0 ď nJ ď 1{2  1. Here J is a parallelogram that has height 10  and has base parallel to the straight line connecting pnJ 1{2; n 2 J q and pp nJ 1q1{2; pnJ 1q2q. We note that for  P J , |2  LJ p1q| ď 5 (2.34) and |LJ p1q  21 | ď {4: (2.35) Boundedness of the Hilbert transform implies that Fourier restriction to J is a bounded operator from Lp Ñ Lp with operator norm bounded independent of J, we make this explicit with the following lemma. Lemma 2.3.1. For each J P P1{2 pr 0; 1sq and 2 ď p ă 8 , }fJ }p ď Cp}f }p with Cp :p12 12 cot p  2p qq 4: Proof. Fix J P P1{2 pr 0; 1sq . Let R denote the operator de ned by xRf  pf 1J . Let S denote the operator de ned by xSf  pf 1r0;8q : Each J is the intersection of four half planes in R2.33 Since multiplier norms are unchanged after rotation and translation, }R}pÑp ď } S}4 pÑp : (2.36) Note that here we have also used that the operator norm of Fourier restriction to a half plane is bounded above by }S}pÑp which follows from Fubini's Theorem. If H denotes the Hilbert transform, observe that pf pq iyHf pq  2 pf pq1r0;8q pq almost everywhere. Since 2 ď p ă 8 , }H}pÑp ď cot p  2p q: Therefore }S}pÑp ď 12 12 cot p  2pq: Inserting this into (2.36) then completes the proof of Lemma 2.3.1. Remark 2.3.2 . One can think of J as a polygonal approximation of the set tp s; s 2 tq : s P J; |t| ď u. The reason why we use J instead is because Fourier restriction to the aforementioned set is not bounded in Lp for p  2. To prove (2.33), we introduce two more local decoupling constants and show that all four decoupling constants are equivalent. De nition 2.3.3. Let  P N2, 2 ď p ă 8 and E ě 1. Let  be as in Corollary 2.2.9. Let Dppq be the smallest constant such that }Er0;1sg}LppBq ď Dppqp ¸ JPP1{2pr 0;1sq }EJ g}2 LppBq q1{2 for all g : r0; 1s Ñ C and all squares B with side length 1. Let pDp;E pq be the smallest constant such that }f }LppBq ď pDp;E pqp ¸ JPP1{2pr 0;1sq }fJ }2 LppwB;E q q1{2 for all f Fourier supported in  Ť JPP1{2pr 0;1sq J and all squares B with side length 1. From our de nitions of wB , rwB , and B , observe that 1B ď 2E wB;E ; 1B ď 4E rwB;E ; 1B ď B : 34 Furthermore, note that by the triangle inequality followed by Cauchy-Schwarz, all four local decoupling constants we have de ned are ÀE;p 1{4. Taking a speci c g : r0; 1s Ñ C or a speci c f with Fourier support in and using Proposition 2.2.11 shows that Dp;E pq; rDp;E pq,and pDp;E pq are ÁE;p 1. We make this precise with pDp;E which is the only decoupling constant we need an explicit lower bound. Remark 2.3.4 . Another consequence of the equivalence of the four local decoupling constants is that Dppq Á E;p 1 but this is not immediate from the de nition. Lemma 2.3.5. 
For p ě 2 and E ě 10 , pDp;E pq ě 12 E{p.Proof. Let pD1 p;E pq be the smallest constant such that }f }LppwB;E q ď pD1 p;E pqp ¸ JPP1{2pr 0;1sq }fJ }2 LppwB;E q q1{2 for all f Fourier supported in and all squares B with side length 1. Proposition 2.2.11 implies that pD1 p;E pq ď 12 E{p pDp;E pq. From the de nition, pD1 p;E pq  sup f;B }f }LppwB;E q př JPP1{2pr 0;1sq }fJ }2 LppwB;E q q1{2 (2.37) where the sup is taken over the f and B as mentioned at the beginning of this proof. Taking an f with Fourier support on r0; 1{2s shows that pD1 p;E pq ě 1. Here note that we needed the numerator of the right hand side of (2.37) to be LppwB;E q rather than LppBq. Therefore pDp;E pq ě 12 E{p which completes the proof of Lemma 2.3.5. Remark 2.3.6 . The decoupling constants Dp;E pq and rDp;E pq are useful because wB  wB E R2wB and similarly for rwB . This allows us to use Proposition 2.2.11 to upgrade from un-weighted to weighted estimates which is an important part of the argument. The same cannot be said with the Schwartz weight decoupling constant Dppq since we do not nec-essarily have B  B  R2B . This useful convolution property of the wB and rwB makes Dp;E pq and rDp;E pq ideal for iterative parts of the argument. On the other hand, the decoupling constants Dppq and pDp;E pq are more useful for Fourier type arguments since the Fourier transform of wB and rwB are of sinc type and so 35 do not work well with Fourier arguments. One corollary of the results proven in this section is that all four local decoupling constants are essentially equivalent so we can easily swap between them. To prove (2.33) we will prove the chain of inequalities Dp;E pq ď rDp;E pq À E Dppq À E pDp;G pq À E Dp;E pq (2.38) for 2 ď p ď 6 and some G ă E we will make explicit in our proof. The rst two inequalities follow from that B ÀE wB À rwB . The third inequality follows from boundedness of the Hilbert transform (Lemma 2.3.1) and the last inequality will follow from adapting the proof of Theorem 5.1 in [BD17] to our case and is the most technical. Lemma 2.3.7. For E ě 100 and 2 ď p ă 8 , Dp;E pq ď rDp;E pq ď E12 E{pDppq: Proof. The rst inequality follows from the observation that rwB ď wB . The second inequality follows from Corollary 2.2.9, in particular, B ď E12 E rwB;E : This completes the proof of Lemma 2.3.7. As mentioned above, the third inequality in (2.38) comes from boundedness of the Hilbert transform. In particular, we need the following lemma. Because Dp does not depend on E,this lemma allows us to \reset" the E parameter in Dp;E . This is useful because going up in the E parameter of Dp;E is easy but going down is much harder. Lemma 2.3.8. For  P N2, E ě 1, and 2 ď p ă 8 , we have Dppq ď p 3Cp 5  12 E{pq pDp;E pq where Cp is as de ned in Lemma 2.3.1. Proof. We rst assume that  P N2 and  ď 1{36. Fix arbitrary g : r0; 1s Ñ C and square B with side length 1. We can write g  g1r0; 1{2qYp 11{2;1s g1r1{2;11{2s : g1 g2: 36 Then }Er0;1sg}LppBq ď } Er0;1sg1}LppBq } Er0;1sg2}LppBq: Using the support of g1, the triangle inequality, 1 B ď B , and Lemma 2.3.5, we have }Er0;1sg1}LppBq ď } Er0; 1{2sg}LppBq } Er11{2;1sg}LppBq ď 2  12 E{p pDp;E pqp ¸ JPP1{2 pr 0;1sq }EJ g}2 LppB qq1{2: (2.39) Since g2 is supported in r1{2; 11{2s, the Fourier transform of B Er0;1sg2  B Er1{2;11{2sg is supported in a -neighborhood of this interval which is contained in . 
Therefore }B Er0;1sg2}LppBq ď pDp;E pqp ¸ JPP1{2 pr 0;1sq }p B Er0;1sg2qJ }2 LppwB;E qq1{2: (2.40) Note that since g2  g1r1{2;11{2s, pB Er0;1sg2qJ  p B Er1{2;11{2sgqJ  $''''''''''''&''''''''''''% pB EJr gqJ if J  r 0;  1{2spB EJ g B EJr gqJ if J  r 1{2; 21{2spB EJℓ g B EJ g B EJr gqJ if J P P1{2 pr 21{2; 1  21{2sq pB EJℓ g B EJ gqJ if J  r 1  21{2; 1  1{2spB EJℓ gqJ if J  r 1  1{2; 1s: where Jℓ and Jr denote the intervals to the left and right of J. Lemma 2.3.1 gives that for J P P1{2 pr 21{2; 1  21{2sq , }p B Er0;1sg2qJ }LppwB;E q ď ¸ J1Pt Jℓ;J;J r u }p B EJ1 gqJ }p ď Cp ¸ J1Pt Jℓ;J;J r u }EJ1 g}LppB q: Similarly we have }p B Er0;1sg2qr0; 1{2s }LppwB;E q ď Cp}Er1{2;21{2sg}LppB q }p B Er0;1sg2qr1´1{2;1s }LppwB;E q ď Cp}Er121{2;11{2sg}LppB q }p B Er0;1sg2qr1{2;21{2s }LppwB;E q ď Cpp} Er1{2;21{2sg}LppB q } Er21{2;31{2sg}LppB qq 37 and }p B Er0;1sg2qr1´21{2;1´1{2s }LppwB;E q ď Cpp} Er131{2;121{2sg}LppB q } Er121{2;11{2sg}LppB qq where here we have used that  ď 1{36. Applying Cauchy-Schwarz and using the above four inequalities gives that ¸ JPP1{2pr 0;1sq }p B Er0;1sg2qJ }2 LppwB;E q ď 9C2 p ¸ JPP1{2pr 0;1sq }EJ g}2 LppBq Combining this with (2.40) and 1 B ď B gives }Er0;1sg2}LppBq ď 3Cp pDp;E pqp ¸ JPP1{2pr 0;1sq }EJ g}2 LppBq q1{2: (2.41) Combining (2.39) and (2.41) proves that Dppq ď p 3Cp 2  12 E{pq pDp;E pq (2.42) for all  P N2 and  ď 1{36. For   1; 1{4; 1{9; 1{16, and 1 {25, we resort to the trivial bound. Proceeding as in the proof of (2.39) shows that for each such   1{i2, i  1; 2; : : : ; 5, we have Dppq ď 5  12 E{p pDp;E pq: Combining this with (2.42) then completes the proof of Lemma 2.3.8. Remark 2.3.9 . The reason why we split g up into g1 and g2 in proof above is because B Er0;1sg is Fourier supported in a set that is slightly bigger than . The last inequality in (2.38) is the most technical of the four inequalities. The proof is similar to that of Theorem 5.1 in [BD17] however our proof is more complicated since our de nition of pDp;E pq uses Fourier restriction to the parallelogram J (to take advantage of Lp boundedness) rather than Fourier restriction to a -tube of a piece of parabola. We also want explicit constants and so we will need to spend some time to extract explicit constants from taking a large number of derivatives. We state our lemma below but due to the length of its proof, we defer the proof to the end of this section. 38 To simplify some constants, we also restrict to the range when 2 ď p ď 6 since this is the range we care about. The restriction that p ď 6 is only used once in the proof of Lemma 2.3.10 (in particular at the end of the proof of Lemma 2.3.16) and is a very mild assumption which can be removed with a bit more care. Lemma 2.3.10. For E ě 10 and 2 ď p ď 6, pDp;E pq ď E60 E Dp; 2E7pq: Since wB;E 2 ď wB;E 1 for E1 ď E2, Dp;E 1 pq ď Dp;E 2 pq and so we can increase the E parameter at no cost. Combining Lemmas 2.3.7-2.3.10 proves the following result which shows (2.38) and hence (2.33). Proposition 2.3.11. For  P N2, E ě 100 , and 2 ď p ď 6, we have Dp;E pq ď rDp;E pq ď E6E Dppq ď E7E pDp;G pq ď E70 E Dp;E pq where G  tpE  7q{ 2u.Proof. Fix arbitrary integer E ě 100. Using Lemma 2.3.7 and that 2 ď p ď 6, we have Dp;E pq ď rDp;E pq ď E6E Dppq: Now we use Lemma 2.3.8 to reset our E. Since E ě 100, G ą 10. From Lemmas 2.3.8 and 2.3.10, E6E Dppq ď E7E pDp;G pq ď E7E G60 GDp; 2G7pq where in the rst inequality we have used that Cp ď 32 for 2 ď p ď 6. Increasing 2 G 7 to E bounds the above by E70 E Dp;E pq. This completes the proof of Proposition 2.3.11. 
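For concreteness, the constant C_p from Lemma 2.3.1 can be tabulated numerically; the short computation below confirms the bound C_p <= 32 on 2 <= p <= 6 that was used in the proof of Proposition 2.3.11 just completed.

    import numpy as np

    def C(p):
        # C_p = (1/2 + (1/2) cot(pi/(2p)))^4, as defined in Lemma 2.3.1
        return (0.5 + 0.5 / np.tan(np.pi / (2 * p))) ** 4

    for p in [2, 3, 4, 5, 6]:
        print(p, C(p))
    # C_2 = 1 and C_p increases with p up to C_6 ≈ 31.3, so C_p <= 32 throughout 2 <= p <= 6

Since cot(π/(2p)) is increasing in p, the single value p = 6 controls the whole range.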
2.3.1 Proof of Lemma 2.3.10 This proof is similar to the proof of Theorem 5.1 in [BD17]. Our goal is to show that if f is Fourier supported on  Ť JPP1{2pr 0;1sq J , then }f }LppBq ÀE Dp; 2E7pqp ¸ JPP1{2pr 0;1sq }fJ }2 LppwB;E q q1{2 39 for all squares B with side length 1 and some implied constant that will be made explicit in our proof. It suffices to show that this is true in the case when B is centered at the origin. Since f is Fourier supported on , for x P B, f pxq  ¸ JPP1{2pr 0;1sq ż J pf pqep  xq d  ¸ JPP1{2pr 0;1sq ż Jr 5; 5s pf ps; L J psq tqepsx 1 s2x2qepp LJ psq  s2qx2qeptx 2q ds dt: Note that here both t and LJ psq  s2 are of size Opq and x2 is of size Op1q, so the contribution from epp LJ psq  s2qx2q and eptx 2q should be negligible. We make this rigorous. Since eptx 2q  ¸ jě0 p2qj j! p2ix 2 1 qj p1t 2 qj and epp LJ psq  s2qx2q  ¸ kě0 p2qk k! p2ix 2 1 qkp1pLJ psq  s2q 2 qk; it follows that for x P B, |f pxq| ď ¸ j;k ě0 p2qkp2qj k!j! | ¸ JPP1{2pr 0;1sq pEJ gj;k qp xq| where gj;k : r0; 1s Ñ C is de ned pointwise almost everywhere piecewise on each J P P1{2 pr 0; 1sq by gj;k psq  p 1pLJ psq  s2q 2 qk ż 5 5 pf ps; L J psq tqp 1t 2 qj dt for s P J. Let F : 2E 7. We then have }f }LppBq ď Dp;F pq ¸ j;k ě0 p2qkp2qj k!j! p ¸ JPP1{2pr 0;1sq }EJ gj;k }2 LppwB;F q q1{2: (2.43) It then remains to prove that }EJ gj;k }LppwB;F q ÀE exp pOpjq Opkqq} fJ }LppwB;E q (2.44) for some implied constants that will be made explicit in our proof. We rst claim it suffices to only prove (2.44) when J  r 0;  1{2s.40 Lemma 2.3.12. Suppose we knew that }Er0; 1{2sp1p1{2s  s2q 2 qk ż 5 5 pf ps;  1{2s tqp 1t 2 qj dt }LppwB;F q ď C}fr0; 1{2s }LppwB;E q (2.45) for some constant C. Then }ErnJ 1{2;pnJ 1q1{2sp1pLJ psq  s2q 2 qk ż 5 5 pf ps; L J psq tqp 1t 2 qj dt }LppwB;F q ď 90 pEF q{ pC}frnJ 1{2;pnJ `1q1{2s }LppwB;E q: (2.46) Remark 2.3.13 . Here s is a dummy variable, so EJ gpsq means the extension operator applied to the function gpsq creating the function pEJ gqp xq. Proof. This proof is essentially a change of variables. The idea is to translate rnJ ;pnJ 1q1{2s to the origin and apply a shear matrix to turn it into r0; 1{2s. Then apply (2.45) and nally undo the shear transformation. The weights wB are preserved from (2.45) because of Lemma 2.2.16. We have ErnJ 1{2;pnJ 1q1{2sp1pLJ psq  s2q 2 qk ż 5 5 pf ps; L J psq tqp 1t 2 qj dt pxq ż rnJ1{2;pnJ1q1{2s p1pLJ psq  s2q 2 qk ż 5 5 pf ps; L J psq tqp 1t 2 qj dt e psx 1 s2x2q ds: The change of variables u  s  nJ 1{2 and the observation that LJ pu nJ 1{2q  p u nJ 1{2q2  1{2u  u2 gives that the above is equal in absolute value to ż r0; 1{2s p1p1{2u  u2q 2 qk ż 5 5 pf pu nJ 1{2;L J pu nJ 1{2q tq p 1t 2 qj epupx1 2nJ 1{2x2q u2x2q du: Since |2nJ 1{2| ď 2, after a change of variables and an application of Lemma 2.2.16, the right hand side of (2.46) is bounded above by 90 F {p}Er0; 1{2sp1p1{2s  s2q 2 qk ż 5 5 pf ps nJ 1{2; L J ps nJ 1{2q tqp 1t 2 qj dt }LppwB;F q (2.47) 41 Observe that LJ ps nJ 1{2q  n2 J  p 2nJ 1q1{2s: Let gJ pxq : f pxqe2ix p nJ 1{2;n 2 Jq : Then pf ps nJ 1{2; L J ps nJ 1{2q tq  pgJ ps; p2nJ 1q1{2s tq: This implies that Er0; 1{2sp1p1{2s  s2q 2 qk ż 5 5 pf ps nJ 1{2; L J ps nJ 1{2q tqp 1t 2 qj dt  ż 1{2 0 ż 5 5 p1p1{2s  s2q 2 qk pgJ ps; p2nJ 1q1{2s tqp 1t 2 qj epsx 1 s2x2q dt ds which is equal to ż Jp nJ1{2;n 2 Jq p1p1{21  21 q 2 qk pgJ pqp 1p2  p 2nJ 1q1{21q 2 qj ep1x1 21 x2q d: (2.48) Let TJ   1 0 2nJ 1{2 1 : Notice that TJ sends J  p nJ 1{2; n 2 J q to r0; 1{2s. 
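The assertion in the last sentence can be checked directly: translating Ω_J by (n_J δ^{1/2}, n_J^2 δ) and then applying the shear T_J lands exactly in Ω_{[0,δ^{1/2}]}. The sketch below samples points of Ω_J, using the formulas for L_J and T_J written out in this section (the values of δ and n_J are arbitrary), and verifies that every image point has the form (u, δ^{1/2} u + t) with u in [0, δ^{1/2}] and |t| <= 5δ.

    import numpy as np

    delta = 1.0 / 64                 # arbitrary, with delta^{-1/2} an integer
    nJ = 5                           # J = [nJ delta^{1/2}, (nJ + 1) delta^{1/2}]
    r = np.sqrt(delta)

    def L(s):
        # the chord L_J(s) = (2 nJ + 1) delta^{1/2} s - nJ (nJ + 1) delta from the definition of Ω_J
        return (2 * nJ + 1) * r * s - nJ * (nJ + 1) * delta

    T = np.array([[1.0, 0.0], [-2 * nJ * r, 1.0]])   # the shear T_J above
    shift = np.array([nJ * r, nJ ** 2 * delta])      # the translation (nJ delta^{1/2}, nJ^2 delta)

    rng = np.random.default_rng(3)
    err = 0.0
    for _ in range(10000):
        s = rng.uniform(nJ * r, (nJ + 1) * r)        # horizontal coordinate running over J
        t = rng.uniform(-5 * delta, 5 * delta)       # thickness parameter of Ω_J
        u, v = T @ (np.array([s, L(s) + t]) - shift)
        assert -1e-12 <= u <= r + 1e-12              # first coordinate lies in [0, delta^{1/2}]
        err = max(err, abs(v - (r * u + t)))         # second coordinate equals delta^{1/2} u + t
    print(err)   # ≈ 0 up to rounding: the sampled points land in Ω_{[0, delta^{1/2}]}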
Letting   TJ  gives that (2.48) is equal to ż r0; 1{2s p1p1{21  21q 2 qk pgJ pT 1 J qp 1p2  1{21q 2 qj ep1x1 21x2q d  ż r0; 1{2s p1p1{21  21q 2 qk {gJ  T tJ pqp 1p2  1{21q 2 qj ep1x1 21x2q d  ż 1{2 0 ż 5 5 p1p1{2s  s2q 2 qk {gJ  T tJ ps;  1{2s tqp 1t 2 qj dt e psx 1 s2x2q ds: Inserting the above into (2.47) and applying (2.45) shows that the left hand side of (2.46) is bounded by 90 F {pC}p gJ  T tJ qr0; 1{2s }LppwB;E q: (2.49) 42 By Lemma 2.2.16 and the de nitions of TJ and gJ , we have }p gJ T tJ qr0; 1{2s }pLppwB;E q  ż R2 ż R2 pgJ pT 1 J q1r0; 1{2s pqe2ix  d  p wB;E pxq dx  ż R2 ż R2 pgJ pq1r0; 1{2s pTJ qe2ix  d  p wB;E pT tJ xq dx  ż R2 ż R2 pf p p nJ 1{2; n 2 J qq 1J p p nJ 1{2; n 2 J qq e2ix  d  p wB;E pT tJ xq dx ď 90 E }fJ }pLppwB;E q: Inserting this into (2.49) completes the proof of Lemma 2.3.12. We now prove (2.44) when J  r 0;  1{2s, in other words we will prove (2.45). Corollary 2.2.4 implies that it is enough to show that ż R2 }Er0; 1{2sgj;k }pLp pBpy; ´1qq wB;F pyq dy ÀE exp pppOpjq Opkqqq} fr0; 1{2s }pLppwB;E q: (2.50) We have pEr0; 1{2sqgj;k pxq ż r0; 1{2s pf pqp 1p1{21  21 q 2 qkp1p2  1{21q 2 qj epp 21  2qx2qep  xq d: For x P Bpy;  1q, since epp 21  2qx2q  epp 21  2qy2qepp 21  2qp x2  y2qq ; a Taylor expansion of epp 21  2qp x2  y2qq gives that for x P Bpy;  1q, |p Er0; 1{2sgj;k qp xq| ď ¸ ℓě0 p2qℓ ℓ! ż r0; 1{2s pf pqCj;k;ℓ pqepp 21  2qy2qep  xq d  (2.51) where Cj;k;ℓ pq : p 1p1{21  21 q 2 qkp1p2  1{21q 2 qj p1p21  2q 2 qℓ: Let be as in Lemma 2.2.10 and so P C8 c pRq,  1 on r 1; 1s and vanishes outside r 3; 3s. For positive integer k and  ą 0, let Mk; pxq : xk px{q: 43 Because the integral on the right hand side of (2.51) is restricted to r0; 1{2s, we can insert some Schwartz cutoffs into Cj;k;ℓ . From (2.34) and (2.35), for  P r0; 1{2s, 1 2 |1{21  21 | ď 18; 1 2 |2  1{21| ď 52; 1 2 |21  2| ď 21 8 : Furthermore, for  P r0; 1{2s, |1| ď 1{2 and |2| ď 6: Let F pq : p1{21q p12 6 q;M1p1q : Mk; 1{8p1p1{21  21 q 2 q;M2pq : Mj; 5{2p1p2  1{21q 2 q;M3pq : Mℓ; 21 {8p1p21  2q 2 q; (2.52) and rCj;k;ℓ pq : F pqM1p1qM2pqM3pq: Thus we can replace the Cj;k;ℓ on the right hand side of (2.51) with rCj;k;ℓ . It then remains to prove that ż R2 ż r0; 1{2s pf pq rCj;k;ℓ pqepp 21  2qy2qep  xq d  pLp pBpy; ´1qq wB;F pyq dy ÀE exp pppOpjq Opkq Opℓqqq} fr0; 1{2s }pLppwB;E q: (2.53) For each xed j; k; ℓ; y , let mpq : ep21 y2q rCj;k;ℓ pq  ep21 y2qM1p1qM2pqM3pqF pq: (2.54) Fix arbitrary y P R2. Therefore ż r0; 1{2s pf pq rCj;k;ℓ pqepp 21  2qy2qep  xq d  ż R2 {fr0; 1{2s pqmpqep1x1 2px2  y2qq d  p fr0; 1{2s  qmqp x1; x 2  y2q: This implies ż r0; 1{2s pf pq rCj;k;ℓ pqepp 21  2qy2qep  xq d  pLp pBpy; ´1qq  2 ż R2 |fr0; 1{2s  qm|ppxq1B px1  y1; x 2q dx: 44 H older's inequality implies that |fr0; 1{2s  qm|p ď p| fr0; 1{2s |p  | qm|q} qm}p1 L1 : Note that the L1 norm on the right hand side depends on y since qm depends on y. To show (2.53), it is enough to show that for all z P R2, 2 ż R2 ż R2 | qm|p x  zq1B px1  y1; x 2q} qm}p1 L1 wB;F pyq dx dy ÀE exp pppOpjq Opkq Opℓqqq wB;E pzq: (2.55) We claim that for integers a; b ě 0, }B a1 Bb2 m}L8 ď Cpa; b qp 1{2 1{2|y2|q ab (2.56) where Cpa; b q  12 540 a3b15 j 3k16 ℓa7ab2bpa bq!4pa 1q5pb 1q3: The proof of (2.56) is deferred to the end of this section. The calculation is straightforward but rather tedious. With (2.56), integration by parts gives the following lemma. Lemma 2.3.14. For a; b ě 0, we have | qmpxq| ď 2a2b216 Cpa; b qp 1{2p1 |x1| 1{2 1{2|y2|qaqp p1 |x2| 1 qbq: Proof. Note that for |x| ď 1, 1 ď 2{p 1 | x|q and for |x| ě 1, 1 {| x| ď 2{p 1 | x|q . 
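Lemma 2.3.14 is an instance of the general principle, used repeatedly in this chapter, that L^∞ bounds on many derivatives of a compactly supported multiplier convert, through integration by parts, into polynomial decay of its inverse Fourier transform. The comparison below illustrates the phenomenon with a generic smooth stand-in bump (it is not the specific bump constructed in Lemma 2.2.10, and the frequencies and grid size are arbitrary): the smooth cutoff's transform falls off far faster than that of the sharp cutoff 1_{[-1,1]}, which only decays at the rate 1/|x|.

    import numpy as np

    def fourier_abs(f, x, n=400001):
        # |∫_{-1}^{1} f(t) e^{-2πi x t} dt|, approximated by a midpoint Riemann sum
        t = -1.0 + 2.0 * (np.arange(n) + 0.5) / n
        return abs(np.sum(f(t) * np.exp(-2j * np.pi * x * t)) * 2.0 / n)

    bump = lambda t: np.exp(-1.0 / (1.0 - t ** 2))   # stand-in smooth bump supported in [-1, 1]
    sharp = lambda t: np.ones_like(t)                # the sharp cutoff 1_{[-1,1]} on the same grid

    for x in [5.3, 10.3, 20.3, 40.3]:
        print(x, fourier_abs(bump, x), fourier_abs(sharp, x))
    # the middle column (smooth bump) is well below the right column already at the first
    # frequency and keeps dropping rapidly, while the sharp cutoff only decays like 1/x;
    # the derivative bounds (2.56) are what drive this mechanism for the multiplier m.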
There are four regions to consider. First consider the case when |x1| ą 1{2 1{2|y2| and |x2| ą 1. Since m is supported in a 6 1{2 36  rectangle centered at the origin, integration by parts gives that ż R2 mpqe2i px11x22q d   ż R2 mpq 1 p2ix 1qap2ix 2qb Ba1 Bb2 e2i px11x22q d  ď 216 p2|x1|q ap2|x2|q b Cpa; b qp 1{2 1{2|y2|q ab3{2 ď 216 Cpa; b qp2qap2qb p1{2p |x1| 1{2 1{2|y2|qaqp p|x2| 1 qbqď 216 Cpa; b q ab p1{2p1 |x1| 1{2 1{2|y2|qaqp p1 |x2| 1 qbq: 45 Next consider the case when |x1| ď 1{2 1{2|y2| and |x2| ď 1. Then we just use the trivial bound in this case. We have ż R2 mpqe2i px11x22q d  ď 216 Cp0; 0q3{2 ď 2a2b216 Cp0; 0qp 1{2p1 |x1| 1{2 1{2|y2|qaqp p1 |x2| 1 qbq: For the case when |x1| ď 1{2 1{2|y2| and |x2| ą 1 we integrate by parts in 2 but use trivial bounds in 1. Thus ż R2 mpqe2i px11x22q d  ď 216 p2|x2|q b Cp0; b qb3{2 ď 2a216 Cp0; b q b p1{2p1 |x1| 1{2 1{2|y2|qaqp p1 |x2| 1 qbq: Similarly, when |x1| ą 1{2 1{2|y2| and |x2| ď 1 we obtain ż R2 mpqe2i px11x22q d  ď 2b216 Cpa; 0q a p1{2p1 |x1| 1{2 1{2|y2|qaqp p1 |x2| 1 qbq: Combining the estimates in the above four cases completes the proof of Lemma 2.3.14. In particular, taking a; b  E ě 10 in Lemma 2.3.14 gives the following corollary. Corollary 2.3.15. For E ě 10 , let ϕ1px1q : 1{2p1 |x1| 1{2 1{2|y2|qE ; ϕ2px2q : p1 |x2| 1 qE : Then | qmpxq| ď 15 j 3k16 ℓE30 E ϕ1px1qϕ2px2q: We now prove (2.55). The following lemma is the only place where p ď 6 is used. Lemma 2.3.16. For 2 ď p ď 6, } qm}p1 L1 ď 15 jpp1q3kpp1q16 ℓpp1qE30 Epp1qp1 |y2|q 5: 46 Proof. From Corollary 2.3.15, } qm}L1 ď 15 j 3k16 ℓE30 E ż R ϕ1px1q dx 1 ż R ϕ2px2q dx 2: A change of variables gives that ż R ϕ1px1q dx 1  1{2p1{2 1{2|y2|q ż R p1 | x1|q E dx 1 ď 1 |y2| and ż R ϕ2px2q dx 2  ż R p1 | x2|q E dx 2 ď 1: Therefore } qm}L1 ď 15 j 3k16 ℓE30 E p1 |y2|q : Raising both sides to the pp  1q-power and then using that p ď 6 completes the proof of the lemma. A change of variables gives 2 ż R2 | qm|p x  zq1B px1  y1; x 2q dx  p| qm|  21B qp y1  z1; z2q and so combining this with Lemma 2.3.16 shows that the left hand side of (2.55) is bounded above by 15 jpp1q3kpp1q16 ℓpp1qE30 Epp1q ż R2 p| qm|  21B qp y1  z1; z2qp 1 |y2|q 5wB;F pyq dy: (2.57) Corollary 2.3.15 gives that p| qm|  21B qp xq ď 15 j 3k16 ℓE30 E pϕ1  1r ´1{2; ´1{2sqp x1qp ϕ2  1r ´1{2; ´1{2sqp x2q: Since 1 r ´1{2; ´1{2s ď 2E wr ´1{2; ´1{2s;E , Remark 2.2.3 shows pϕ2  1r ´1{2; ´1{2sqp x2q ď 8E p1 | x2|{ 1qE : Therefore p| qm|  21B qp y1  z1; z2q ď 15 j 16 ℓE30 E 8E p1 |z2| 1 qE pϕ1  1r ´1{2; ´1{2sqp y1  z1q: 47 Thus (2.57) is bounded above by 15 jp 3kp 16 ℓp E30 Ep 8E p1 |z2| 1 qE ż R2 pϕ1  1r ´1{2; ´1{2sqp y1  z1qp 1 |y2| 1 q5wB;F pyq dy: (2.58) The following lemma will complete the proof of (2.55). Lemma 2.3.17. Let E ě 10 and F  2E 7, then ż R2 pϕ1  1r ´1{2; ´1{2sqp y1  z1qp 1 |y2| 1 q5wB;F pyq dy ď 9  128 E 1p1 |z1| 1 qE : (2.59) Proof. We break the left hand side of (2.59) into the sum of integrals over the regions (recall that  P N2) I : t y : |y2| ď 1u II : ď 1ďkă´1{2 ty : k 1 ă | y2| ď p k 1q1u III : ď kě0 ty : 2 k3{2 ă | y2| ď 2k13{2u: We also note that for a ě 1, p1 |x| a qE ď aE p1 | x|q E : (2.60) We rst consider the integral over region I. 
When |y2| ď 1, ϕ1px1q  1{2p1 |x1| 1{2 1{2|y2|qE ď 1{2p1 |x1| 21{2 qE ď 2E 1{2p1 |x1| 1{2 qE : Therefore by Remark 2.2.3, pϕ1  1r ´1{2; ´1{2sqp y1  z1q ď 16 E p1 |y1  z1| 1 qE and so ż I pϕ11r ´1{2; ´1{2sqp y1  z1qp 1 |y2| 1 q5wB;F pyq dy ď 16 E  ż R2 p1 |y1  z1| 1 qE p1 |y2| 1 q5p1 |y1| 1 qE p1 |y2| 1 qE7 dy: 48 Applying Remark 2.2.3 in the y1 variable bounds this by 64 E p1 |z1| 1 qE ż R p1 |y2| 1 qE2 dy 2 ď 64 E 1p1 |z1| 1 qE : (2.61) We next consider the integral over region II. For each 1 ď k ă 1{2 and y such that k 1 ă | y2| ď p k 1q1, we have ϕ1px1q  1{2p1 |x1| 1{2 1{2|y2|qE ď 1{2p1 |x1| 3k 1{2 qE ď 3E 1{2p1 |x1| k 1{2 qE : Therefore by Remark 2.2.3, pϕ1  1r ´1{2; ´1{2sqp y1  z1q ď 24 E k p1 |y1  z1| 1 qE and so ż II pϕ1  1r ´1{2; ´1{2sqp y1  z1qp 1 |y2| 1 q5wB;F pyq dy  ¸ 1ďkă´1{2 ż k ´1ă| y2|ďp k1q´1 pϕ1  1r ´1{2; ´1{2sqp y1  z1qp 1 |y2| 1 q5wB;F pyq dy ď 96 E ¸ 1ďkă´1{2 kp1 |z1| 1 qE ż k ´1ă| y2|ďp k1q´1 p1 |y2| 1 qE2 dy 2 ď 96 E ¸ 1ďkă´1{2 kp1 |z1| 1 qE 21kE2 ď 4  96 E 1p1 |z1| 1 qE (2.62) where in the last inequality we have used that E ě 10. Finally we consider the integral over region III. For each k ě 0 and y such that 2 k3{2 ă|y2| ď 2k13{2, we have ϕ1px1q  1{2p1 |x1| 1{2 1{2|y2|qE ď 1{2p1 |x1| 4  2k1 qE ď 4E 1{2p1 |x1| 2k1 qE : Therefore by Remark 2.2.3, pϕ1  1r ´1{2; ´1{2sqp y1  z1q ď 32 E 1{2p1 |y1  z1| 2k1 qE 49 and so ż III pϕ1  1r ´1{2; ´1{2sqp y1  z1qp 1 |y2| 1 q5wB;F pyq dy  ¸ kě0 ż 2k´3{2ă| y2|ď 2k`1´3{2 pϕ1  1r ´1{2; ´1{2sqp y1  z1qp 1 |y2| 1 q5wB;F pyq dy ď 32 E 1{2 ¸ kě0 ż R p1 |y1  z1| 2k1 qE p1 |y1| 1 qE dy 1 ż 2k´3{2ă| y2|ď 2k`1´3{2 p1 |y2| 1 qE2 dy 2 ď 128 E ¸ kě0 1{2p1 |z1| 2k1 qE 2k13{2p2k1{2qE2  128 E ¸ kě0 2p E2q{ 22k1kpE2qp1 |z1| 2k1 qE ď 4  128 E p1 |z1| 1 qE where in the third inequality we have used (2.60). Summing this with (2.61) and (2.62) shows that the left hand side of (2.59) is bounded above by 9  128 E p1 | z1|{ 1qE which completes the proof of Lemma 2.3.17. Thus Lemma 2.3.17 shows that (2.58) is bounded above by 9  15 jp 3kp 16 ℓp E30 Ep 210 E p1 |z1| 1 qE p1 |z2| 1 qE ď 15 jp 3kp 16 ℓp E40 Ep wB;E pzq: (2.63) We now trace back all the implied constants to nish the proof of Lemma 2.3.10. From (2.63), the implied constants in (2.55) and (2.53) are both 15 jp 3kp 16 ℓp E40 Ep . By (2.51) and (2.53), the left hand side of (2.50) is }Er0; 1{2sgj;k }Lp pBpy; ´1qq  pLpypwB;F q ď ¸ ℓě0 p2qℓ ℓ! } ż r0; 1{2s pf pq rCj;k;ℓ pqepp 21  2qy2qep  xq d }Lp pBpy; ´1qq  pLpypwB;F q ď ¸ ℓě0 p2qℓ ℓ! } ż r0; 1{2s pf pq rCj;k;ℓ pqepp 21  2qy2qep  xq d }Lp pBpy; ´1qq Lpy pwB;F q p ď 15 jp 3kp e32 p E40 Ep }fr0; 1{2s }pLppwB;E q which gives the implied constant in (2.50). Using this, Lemma 2.3.12, and Lemma 2.2.4, we 50 have }EJ gj;k }LppwB;F q ď } fJ }LppwB;E q $'&'% 3E{p15 j 3kE40 E e32  if J  r 0;  1{2s 90 pEF q{ p3E{p15 j 3kE40 E e32  if J  r 0;  1{2sď 15 j 3kE54 E }fJ }LppwB;E q where in the last inequality we have used that E ě 10, 2 ď p ď 6, and F  2E 7. Inserting this estimate into (2.43) gives that }f }LppBq ď Dec 1p; p; F qE54 E e32 p ¸ JPP1{2pr 0;1sq }fJ }2 LppwB;E q q1{2: Since E ě 10, e36  ď 10 50 ď E5E and this completes the proof of Lemma 2.3.10. 2.3.2 Proof of (2.56) Let F; M 1; M 2; M 3, and m be as in (2.52) and (2.54). We will prove (2.56). Lemma 2.3.18. Let  ą 0 and let Mk; pxq : xk px{q where is as de ned in Lemma 2.2.10. Then for integer a ě 0, }B aMk; }L8 ď 12  6a3kp1 qkpa!q2: (2.64) If  ě 1, this bound can be replaced with 12 p6akkqp a!q2.Proof. This proof is essentially the same as that of the beginning of the proof of Lemma 2.2.10. 
From the proof of Lemma 2.2.10, we have that | pjqpxq| ď 12 p6j qp j!q2 for all j ě 0. Since is supported in r 3; 3s, px{q is supported in r 3; 3s.If a  0, then }Mk; }L8 ď 12 p3qk which proves (2.64) in this case. Now consider when 51 a ě 1. First suppose that a ď k, then |B apMk; pxqq|  | a ¸ j0 aj Bj pxkq pajqpxq| ď a ¸ j0 aj k! pk  jq!p3qkj 12 p6aj qp a  jq!2 ď 12 p6a3kqp a!q2 a ¸ j0 kj kj ď 12  6a3kp1 qkpa!q2: Next suppose that k ă a, then |B apMk; pxqq| ď k ¸ j0 aj k! pk  jq!p3qkj 12 p6aj qp a  jq!2 ď 12  6a3kp1 qkpa!q2: This completes the proof of Lemma 2.3.18. Our goal is to obtain an estimate on }B a1 Bb2 m}L8 depending only on a, b,  and y2 and where m is as de ned in (2.54) and (2.52). Since we want exact constants, we will need to differentiate exactly each of the ve functions that make up mpq. Note that since is supported in r 3; 3s, m is supported in a 6 1{2 36  rectangle centered at the origin. In particular, for all  P supp pmq, 31{2 ď 1 ď 31{2: (2.65) The bounds in Lemmas 2.3.20 and 2.3.21 are valid when we take no derivatives (either a  0or b  0) provided we use the convention that 0 0  1. To compute Ba1 Bb2 m, we will need to take arbitrarily many derivatives of a composition of functions. We will use the Faa di Bruno formula. We brie y recall all needed formulas (see [Joh02] for a reference, note that Johnson de ned Bm; 0  0 for m ą 0 since the sum conditions would be vacuous). For m; k ě 1, de ne the Bell polynomials Bm;k px1; x 2; : : : ; x mk1q  1 k! ¸ j1 jkmjiě1 mj1; : : : ; j k xj1    xjk : Let Ympx1; : : : ; x mq : m ¸ k1 Bm;k px1; : : : ; x mk1q: (2.66) 52 The Faa di Bruno formula states that dm dt m gpf ptqq  m ¸ k1 gpkqpf ptqq Bm;k pf 1ptq; f 2ptq; : : : ; f pmk1qptqq : Finally we will abuse notation slightly by writing Ympx; y; 0; : : : ; 0q as Ympx; y q. Lemma 2.3.19. Let m ě 1 and x; y  0 such that |x| ď C|y|1{2 with C ě 1. Then |Ympx; y q| ď Cmmm|y|m{2: Proof. From [Joh02, p. 220], Ympx; y q is equal to the determinant of the m m matrix  x pm  1qy 0    0 0 1 x pm  2qy    0 00 1 x    0 0... ... ... ... ...0 0 0    x y 0 0 0    1 x  : Cofactor expansion gives that Ympx; y q obeys the recurrence Ym  xY m1 p m1qyY m2p1; 1q with Y1  x, Y2  x2 y. Therefore Ymp1; 1q obeys the recurrence Ymp1; 1q  Ym1p1; 1q pm  1qYm2 and so Ymp1; 1q ď m! ď mm. Each Ympx; y q  xm tm{2u ¸ j1 cj xm2j yj  ym{2p xm ym{2 tm{2u ¸ j1 cj xm2j ym{2j q (2.67) and Ymp1; 1q  1 ř j cj ď mm. Thus Ympx; 0q  xm and |Ympx; y q| ď | y|m{2pCm tm{2u ¸ j1 cj Cm2j q ď Cmmm|y|m{2: This completes the proof of Lemma 2.3.19. Lemma 2.3.20. For a ě 0 and  P supp pmq, }B a1 e2iy 221 }L8 ď p 12 qaaa $'&'% a{2 if |y2| ď 1 a{2|y2|a if |y2| ą 1: In particular, }B a1 e2iy 221 }L8 ď p 12 qaaap1{2 1{2|y2|q a: 53 Proof. If a  0, then L8 norm is equal to 1 and the above formula still holds true. Now suppose a ě 1. From Faa di Bruno's formula, Ba1 e2iy 221  a ¸ k1 p2i qke2iy 221 Ba;k p21y2; 2y2; 0; : : : ; 0q and so, }B a1 e2iy 221 }L8 ď p 2qaYap2|1|| y2|; 2|y2|q : (2.68) Suppose |y2| ď 1, then 1{2|y2| ď | y2|1{2 and so from (2.65), 2|1|| y2| ď 6|y2|1{2: Therefore Lemma 2.3.19 gives that Yap2|1|| y2|; 2|y2|q ď 6aaa|y2|a{2 ď 6aaaa{2: Inserting this into (2.68) then nishes this case. If |y2| ą 1, then from (2.67), Yap2|1|| y2|; 2|y2|q ď Yap61{2|y2|; 2|y2|q  6aa{2|y2|ap1 ta{2u ¸ j1 18 j cj p|y2|q j q: Since |y2| ą 1 and 1 ř j cj ď aa, the above is bounded by 6 aaaa{2|y2|a which completes the proof of Lemma 2.3.20. Lemma 2.3.21. 
For integers a; b ě 0 and  P supp pmq, }B a1 M1}L8 ď 12 p21 aa3a3kqa{2 (2.69) }B a1 Bb2 M2}L8 ď 12 p6a3b15 j qp a bq!2ba{2 (2.70) }B a1 Bb2 M3}L8 ď 12 p18 a3b16 ℓqaapa bq!2ba{2 (2.71) }B a1 Bb2 F }L8 ď 12 26apa!q2pb!q2ba{2: (2.72) Proof. We rst prove (2.69). If a  0, then from Lemma 2.3.18, }M1}L8  } Mk; 1{8}L8 ď 12  3k 54 which proves (2.69) in this case. Next suppose a ě 1. We compute that Ba1 M1  a ¸ s1 M psq k; 1{8 p1p1{21  21 q 2 qBa;s p121{2  11; 1; 0; : : : ; 0q and so applying Lemma 2.3.18 and (2.66) gives that }B a1 M1}L8 ď 12 p3k6aqp a!q2Yap1{2|12  1{21|;  1q: (2.73) Since 1{2|12  1{21| ď 72p1q1{2; Lemma 2.3.19 implies that Yap1{2|12  1{21|;  1q ď p 7{2qaaaa{2: Inserting this into (2.73) completes the proof of (2.69). We now prove (2.70). We compute Ba1 Bb2 M2  p 1 2 qbBa1 M pbq j; 5{2 p1p2  1{21q 2 q p 1 2 qbp 1{2 2 qaM pabq j; 5{2 p1p2  1{21q 2 q: Applying Lemma 2.3.18 gives }B a1 Bb2 M2}L8 ď 12 p6a3b15 j qp a bq!2ba{2 which proves (2.70). Next we prove (2.71). If a  0, then Bb2 M3  p 1 2 qbM pbq ℓ; 21 {8 p1p21  2q 2 q and so }B b2 M3}L8 ď 12 p3b16 ℓqp b!q2b which proves (2.71) in this case. Now suppose a ě 1. Faa di Bruno's formula gives that Ba1 Bb2 M3  p 1 2 qba¸ s1 M psbq ℓ; 21 {8 p1p21  2q 2 qBa;s p11;  1; 0; : : : ; 0q: 55 Applying Lemma 2.3.18 and (2.66) gives that }B a1 Bb2 M3}L8 ď 12 p6a3b16 ℓqp a bq!2bYap1|1|;  1q (2.74) Since 1|1| ď 3p1q1{2, it follows that Yap1|1|;  1q ď 3aaaa{2: Inserting this into (2.74) completes the proof of (2.71). Finally we prove (2.72). We compute Ba1 Bb2 F  a{2p1 6 qb paqp1{21q pbqp12 6 q: Lemma 2.2.10 then implies that }B a1 Bb2 F }L8 ď 12 26apa!q2pb!q2ba{2 which proves (2.72). This completes the proof of Lemma 2.3.21. We are now ready to prove (2.56). Lemma 2.3.22. For a; b ě 0, }B a1 Bb2 m}L8 ď 12 540 a3b15 j 3k16 ℓa7ab2bpa bq!4pa 1q5pb 1q3p1{2 1{2|y2|q ab: Proof. We compute Ba1 Bb2 m  ¸ s1s2s3bt1t2t3t4t5asi;t iě0 b! s1!s2!s3! a! t1!t2!t3!t4!t5! pB t1 1 ep21 y2qqpB t2 1 M1qpB t3 1 Bs1 2 M2qpB t4 1 Bs2 2 M3qpB t5 1 Bs3 2 F q: Applying crude bounds and Lemmas 2.3.20 and 2.3.21 gives that }B a1 Bb2 m}L8 ď 12 540 a3b15 j 3k16 ℓa!b!ba{2p1 |y2|q a ¸ s1s2s3bt1t2t3t4t5asi;t iě0 tt1 1 t3t2 2 tt4 4 pt3 s1q!2pt4 s2q!2t5!s3! ď 12 540 a3b15 j 3k16 ℓa7ab2bpa bq!4pa 1q5pb 1q3ba{2p1 |y2|q a 56 where in the rst inequality we have used that p1{2 1{2|y2|q t1  t1{2p1 |y2|q t1 ď t1{2p1 |y2|q a and we have removed a t5! and s3! using the multinomial coefficient. This completes the proof of Lemma 2.3.22 and the proof of (2.56). 2.4 Parabolic rescaling: an application As an application of Lemma 2.2.18 and Proposition 2.3.11, we will prove that the decoupling constant is essentially multiplicative. This will play an important role in Section 2.10 when we upgrade knowledge about decoupling at a lacunary sequence of scales to knowledge about decoupling on all possible scales in N2. The restriction that p ď 6 is once again an artifact that only arises from our application of Proposition 2.3.11. Proposition 2.4.1. Let E ě 100 and 2 ď p ď 6. For 0 ă  ă  ă 1 with ; ;  { P N2,we have Dp;E pq ď E100 E Dp;E pqDp;E p{q: Proof. Fix an arbitrary E ě 100 and 2 ď p ď 6. We need to show that for all g : r0; 1s Ñ C and all squares B of side length 1, we have }Er0;1sg}LppBq ď E100 E Dp;E pqDp;E p{qp ¸ JPP1{2pr 0;1sq }EJ g}2 LppwB;E q q1{2: It suffices to assume that B is centered at the origin. 
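The engine of the proof below is parabolic rescaling: an affine change of variables in frequency carries the arc of the parabola over an interval J = [a, a + σ^{1/2}] onto the arc over all of [0, 1], which is what allows the decoupling constant at the coarser scale δ/σ to enter. The sketch below verifies this at the level of the curve itself; the particular a and σ are arbitrary, and the matrices actually used in the proof are the ones from Lemma 2.2.18.

    import numpy as np

    sigma = 1.0 / 16
    a = 0.4375                          # arbitrary, chosen so that [a, a + sigma^{1/2}] is inside [0, 1]
    r = np.sqrt(sigma)

    s = np.linspace(a, a + r, 2001)     # the arc {(s, s^2)} over J = [a, a + sigma^{1/2}]
    xi1, xi2 = s, s ** 2

    # parabolic rescaling: u = (xi1 - a)/sigma^{1/2}, v = (xi2 - 2 a xi1 + a^2)/sigma
    u = (xi1 - a) / r
    v = (xi2 - 2 * a * xi1 + a ** 2) / sigma

    print(np.max(np.abs(v - u ** 2)), u.min(), u.max())
    # v = u^2 to machine precision and u runs over [0, 1]: the cap over J becomes the full
    # parabola over [0, 1], so a decoupling estimate at scale delta/sigma applies after rescaling.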
Since { P N2, we can partition B into a collection of squares tu of side length 1.Then }Er0;1sg}Lppq ď Dp;E pqp ¸ JPP1{2pr 0;1sq }EJ g}2 Lppw;E q q1{2: Raising both sides to the pth power and summing over all , then using Minkowski's in-equality (since p ě 2), and nally applying Proposition 2.2.14 gives that }Er0;1sg}LppBq ď 48 E{pDp;E pqp ¸ JPP1{2pr 0;1sq }EJ g}2 LppwB;E q q1{2: (2.75) 57 For each J  r a; a 1{2s, we will rst show that }EJ g}LppBq ÀE Dp;E p{qp ¸ J1PP1{2pJq }EJ1 g}2 LppwB;E q q1{2: (2.76) Afterwards we will apply Proposition 2.2.11 to (2.76) and then insert the result into (2.75) to nish. Let T be as in Lemma 2.2.18, Lpq  p   aq{ 1{2, gL  g  L1. Then a change of variables gives that }EJ g}LppBq   12  32p }Er0;1sgL}LppT pBqq : Let B be as in Lemma 2.2.18. Thus we cover T pBq by a collection of squares B  t ∆u of side length {, use decoupling constant rDp;E at scale { and undo change of variables. This gives  p 232 }Er0;1sgL}pLppT pBqq ď  p 232 ¸ ∆PB }Er0;1sgL}pLpp∆q ď rDp;E p{qp p 232 ¸ ∆PB p ¸ J2PPp{q1{2pr 0;1sq }EJ2 gL}2 Lpprw∆;E q qp{2 ď rDp;E p{qpp ¸ J1PP1{2pJq }EJ1 g}2 Lppř ∆rw∆;E Tq qp{2 ď rDp;E p{qp720 E p ¸ J1PP1{2pJq }EJ1 g}2 LppwB;E q qp{2 where the third inequality we have used Minkowski's inequality and p ě 2 and the last inequality we have used Lemma 2.2.18. Combining this with Proposition 2.3.11 gives that }EJ g}LppBq ď E70 E 720 E{pDp;E p{qp ¸ J1PP1{2pJq }EJ2 g}2 LppwB;E q q1{2: Applying Proposition 2.2.11 gives that }EJ g}LppwB;E q ď E80 E Dp;E p{qp ¸ J1PP1{2pJq }EJ2 g}2 LppwB;E q q1{2: Inserting this into (2.75) then completes the proof of Proposition 2.4.1. Remark 2.4.2 . Combining Propositions 2.3.11 and 2.4.1, we see that all four decoupling constants Dp;E , rDp;E , Dp, and pDp;E obey a similar multiplicative property. 58 2.5 Bilinear equivalence We now de ne the bilinear decoupling constant and show that it is essentially the same size as the linear decoupling constant. In [BD17], Bourgain and Demeter use a Bourgain-Guth type argument to do this. However in two dimensions, there is a simpler proof using H older's inequality and parabolic rescaling by Tao in [Tao15]. It is this version we follow. For each m P N, E ě 100, let  : 216 2mE10 E : For  P p 0; 1q such that  1{2 P N, let Dp;E p; m q be the best constant such that } geom |EIi g|} LppBq ď Dp;E p; m q geom p ¸ JPP1{2pIiq }EJ g}2 LpprwB;E q q1{2 for all pairs of intervals I1; I 2 P P pr 0; 1sq which are at least -separated, functions g : r0; 1s Ñ C, and squares B of side length 1. Note that the right hand side uses the weight function rwB;E rather than wB;E .We rst give the trivial bound for the bilinear decoupling constant which is a useful bound at large scales. Lemma 2.5.1. Let m; E;  be de ned as above. If  1{2 P N, then Dp;E p; m q ď 4E 1{21{4.Proof. H older's inequality gives that } geom |EIi g|} LppBq ď geom }EIi g}LppBq: The triangle inequality, Cauchy-Schwarz, and that 1 B ď 4E rwB;E gives }EIi g}LppBq  } ¸ JPP1{2pIiq EJ g}LppBq ď 4E 1{21{4p ¸ JPP1{2pIiq }EJ g}2 LpprwB;E q q1{2 which completes the proof of Lemma 2.5.1. Lemma 2.5.2. Let E ě 100 and 2 ď p ď 6. If 1{2 P 2N and 1{21 P 2N, then Dp;E pq ď E100 E pDp;E p 2 q 1  Dp;E p; m qq : 59 Proof. This proof is essentially an application of parabolic rescaling. The restriction 2 ď p ď 6 comes only from the application of Proposition 2.3.11. Fix an arbitrary square B of side length 1 and function g : r0; 1s Ñ C. It suffices to assume B is centered at the origin. 
Partition r0; 1s into 1 { many intervals I1; : : : ; I 1{ of length  (here we have used that  P 2N). Then }Er0;1sg}LppBq  } ¸ 1ďiď1{ EIi g}LppBq ď } ¸ 1ďi;j ď1{ |EIi g|| EIj g|} 1{2 Lp{2pBq ď } ¸ 1ďi;j ď1{ |ij|ď 1 |EIi g|| EIj g|} Lp{2pBq } ¸ 1ďi;j ď1{ |ij|ą 1 |EIi g|| EIj g|} Lp{2pBq 1{2 ď ?2 } ¸ 1ďi;j ď1{ |ij|ď 1 |EIi g|| EIj g|} 1{2 Lp{2pBq } ¸ 1ďi;j ď1{ |ij|ą 1 |EIi g|| EIj g|} 1{2 Lp{2pBq : We rst consider the off-diagonal terms. This will be controlled by the bilinear decoupling constant. H older's inequality gives that p ¸ 1ďi;j ď1{ |ij|ą 1 |EIi g|| EIj g|q p{2 ď p p2q ¸ 1ďi;j ď1{ |ij|ą 1 p| EIi g|| EIj g|q p{2 and hence ż B p ¸ 1ďi;j ď1{ |ij|ą 1 |EIi g|| EIj g|q p{2 dx ď p p2q ¸ 1ďi;j ď1{ |ij|ą 1 ż B p| EIi g|| EIj g|q p{2 dx: By bilinear decoupling, the above is bounded above by p p2qDp;E p; m qp ¸ 1ďi;j ď1{ |ij|ą 1 p ¸ JPP1{2pIiq }EJ g}2 LpprwB;E q qp{4p ¸ JPP1{2pIjq }EJ g}2 LpprwB;E q qp{4: Note that here we have used that {1{2 P 2N. Since 1{2 is dyadic and Ii and Ij are dyadic intervals, this is bounded above by pDp;E p; m qpp ¸ JPP1{2pr 0;1sq }EJ g}2 LpprwB;E q qp{2: Now we consider the diagonal contribution. The triangle inequality followed by Cauchy-Schwarz gives that } ¸ 1ďi;j ď1{ |ij|ď 1 |EIi g|| EIj g|} Lp{2pBq ď ¸ 1ďi;j ď1{ |ij|ď 1 }EIi g}LppBq}EIj g}LppBq (2.77) 60 Let I  a r 0;  s be an interval of length . Let Lpq  p   aq{ , gL : g  L1,and T  p  2a 02 q. A change of variables then gives that |p EI gqp xq|  |p Er0;1sgLqp T x q| and therefore }EI g}LppBq  13{p}Er0;1sgL}LppT pBqq : (2.78) Note that T pBq is a parallelogram contained in a 3  1 21 rectangle. Covering T pBq by squares B  t ∆u of side length 21 gives that 13{p}Er0;1sgL}LppT pBqq ď 13{pp ¸ ∆PB }Er0;1sgL}pLpp∆qq1{p: (2.79) Applying the de nition of the decoupling constant (and using that  1{2 P 2N), gives that for each square ∆, }Er0;1sgL}pLpp∆q ď rDp;E p{2qpp ¸ JPP1{2{pr 0;1sq }EJ gL}2 Lpprw∆;E q qp{2: Inserting this into (2.79) bounds the left hand side of (2.79) by rDp;E p{2qp ¸ ∆PB p ¸ JPP1{2{pr 0;1sq p13{p}EJ gL}Lpp rw∆;E qq2qp{2q1{p: Applying the same change of variables as in (2.78) followed by Minkowski's inequality (using that p ě 2) gives that the above is bounded by rDp;E p{2qp ¸ JPP1{2pIq }EJ g}2 Lppř ∆PBrw∆;E Tq q1{2 ď 720 E{p rDp;E p{2qp ¸ JPP1{2pIq }EJ g}2 LppwB;E q q1{2: By Proposition 2.3.11, rDp;E pq ď E70 E Dp;E pq and so the above gives that }EI g}LppBq ď E75 E Dp;E p{2qp ¸ JPP1{2pIq }EJ g}2 LppwB;E q q1{2 for each interval I of length .Using this for each interval that shows up on the right hand side of (2.77) gives an upper bound of E150 E Dp;E p{2q2 ¸ 1ďi;j ď1{ |ij|ď 1 p ¸ JPP1{2pIiq }EJ g}2 LppwB;E q q1{2p ¸ J1PP1{2pIjq }EJ1 g}2 LppwB;E q q1{2: 61 Using that 2 ab ď a2 b2, the above is bounded by E150 E Dp;E p{2q2  12 ¸ 1ďi;j ď1{ |ij|ď 1 ¸ JPP1{2pIiq }EJ g}2 LppwB;E q ¸ J1PP1{2pIjq }EJ1 g}2 LppwB;E q ď 2  E150 E Dp;E p{2q2 ¸ JPP1{2pr 0;1sq }EJ g}2 LppwB;E q : Therefore if 1{2 P 2N and 1{21 P 2N, we have Dp;E pq ď 2  E75 E Dp;E p 2 q ?2  Dp;E p; m q which completes the proof of Lemma 2.5.2. Proposition 2.5.3. Let E ě 100 and 2 ď p ď 6. Fix an arbitrary integer m ě 1. Let 1{2 P 2N and K be the largest positive integer such that 1{2K P 2N. Then Dp;E pq ď 100 E log  E 1 max p1; max i0;1;:::;K 1 Dp;E p 2i; m qq : Proof. 
Note that 1{2 P 2N and 1{2K P 2N imply that for i  0; 1; : : : ; K , 1{2i P 2N.In particular for each i  1; 2; : : : ; K , both 1{2i1 and 1{2i are in 2 N and hence Dp;E p 2i2q ď E100 E pDp;E p 2iq 1  Dp;E p 2i2; m qq : Combining these K inequalities then gives that Dp;E pq ď E100 EK pDp;E p 2K q 21 max i0;1;:::;K 1 Dp;E p 2i; m qq : (2.80) To control Dpp 2K q, we use the de nition of K. In particular, since 1{2 P 2N, 1{2p K1q is dyadic but ě 1. Therefore 1{2K1 ě 1 and so 1{2K ě . The trivial bound then gives that Dp;E p 2K q ď 2E{pp 2K q1{4 ď 2E 1{2: Since 1{2K ď 1, K ď log ´1 1{2 and hence E100 EK ď 50 E log  E : Inserting the above two centered equations into (2.80) then completes the proof of Proposi-tion 2.5.3. 62 2.6 Ball in ation We rst discuss some basic geometry. Let P : tp ;  2q :  P r 0; 1su and  : P Ñ r 0; 1s be the projection map which sends p;  2q Þ Ñ . Since I1; I 2 are d-separated, for any P P I1; Q P I2,we have |P  Q| ě d. Observe that np1pP qq  p 2P; 1q?1 4P 2 and similarly for Q (where here np1pP qq refers to the normal vector to the parabola at the point 1pP q). Let  be the angle between np1pP qq and np1pQqq . Then since |P  Q| ě d,sin   2|P  Q| ap1 4P 2qp 1 4Q2q ě 25d: In the terminology of [BD17], I1 and I2 are 2 d{5-transverse. We will now prove the following effective ball in ation inequality. Theorem 2.6.1. Let p ě 4, 0 ă  ă 1{10 , E ě 100 , and 0 ă d ă 1{2. Let I1; I 2 Ă r 0; 1s be two d-separated intervals of length ě  such that |Ii|{  P N. Let B be an arbitrary square in R2 with side length 2 and let B be the unique partition of B into squares ∆ of side length 1. Then for all g : r0; 1s Ñ C, we have 1 |B| ¸ ∆PB geom p ¸ JPPpIiq }EJ g}2 Lp{2#prw∆;E q qp{2 ď E50 Ep d1plog 1  qp{2 geom p ¸ JPPpIiq }EJ g}2 Lp{2#prwB;E q qp{2: (2.81) Furthermore, for p  4, the estimate is true without the logarithm. This inequality allows us to keep the frequency scale the same while increasing (in ating) the spatial scale and is a key step in the iteration. We will rst prove a version of Theorem 2.6.1 where we additionally assume that all the }EJ g} are of comparable size (for each Ii). Then we remove this assumption by dyadic pigeonholing to obtain (2.81). Lemma 2.6.2. Let p ą 4 and everything else be as de ned in Theorem 2.6.1. Furthermore, let F1 be a collection of intervals in PpI1q such that for each pair of intervals J; J 1 P F1, we 63 have 12 ă}EJ g}Lp{2# p rwB;E q }EJ1 g}Lp{2# p rwB;E q ď 2: (2.82) Similarly de ne F2. Then for all g : r0; 1s Ñ C we have 1 |B| ¸ ∆PB geom p ¸ JPFi }EJ g}2 Lp{2#prw∆;E q qp{2 ď E30 Ep d1 geom p ¸ JPFi }EJ g}2 Lp{2#prwB;E q qp{2: (2.83) Proof. For each J P PpIiq centered at cJ , cover B by a set TJ of mutually parallel nonover-lapping boxes PJ with dimension 1 2 with longer side pointing in the direction of the normal vector to P at 1pcJ q. Note that any 1 2 box outside 4 B cannot cover B itself. Thus we may assume that all the boxes in TJ are contained in 4 B. Finally, let PJ pxq denote the box in TJ containing x and let 2 PJ be the 2 1 22 box having the same center and orientation as PJ .Since p ą 4, H older's inequality yields that p ¸ JPFi }EJ g}2 Lp{2#prw∆;E q qp{2 ď p ¸ JPFi }EJ g}p{2 Lp{2#prw∆;E q q2|Fi|p{22: Thus the left hand side of (2.83) is bounded above by p 2 ź i1 |Fi|p{41q 1 |B| ¸ ∆PB 2 ź i1 p ¸ JPFi }EJ g}p{2 Lp{2#prw∆;E q q: (2.84) For x P 4B, de ne HJ pxq : $''&''% sup yP2PJ pxq }EJ g}p{2 Lp{2#prwBpy; ´1q;E q if x P Ť PJPTJ PJ 0 if x P 4Bz Ť PJPTJ PJ : (2.85) For each x P ∆, observe that ∆ Ă 2PJ pxq. 
Therefore for each x P ∆, c∆ P 2PJ pxq and hence }EJ g}p{2 Lp{2#prw∆;E q ď HJ pxq (2.86) 64 for all x P ∆. Thus 1 |B| ¸ ∆PB 2 ź i1 p ¸ JPFi }EJ g}p{2 Lp{2#prw∆;E q q ¸ J1PF1 J2PF2 1 |B| ¸ ∆PB }EJ1 g}p{2 Lp{2#prw∆;E q }EJ2 g}p{2 Lp{2#prw∆;E q 1 |∆| ż ∆ dx ď ¸ J1PF1 J2PF2 1 |B| ż B HJ1 pxqHJ2 pxq dx (2.87) where the last inequality we have used (2.86). By how HJ is de ned, HJ is constant on each PJ P TJ . That is, for each x P Ť PJPTJ PJ , HJ pxq  ¸ PJPTJ cPJ 1PJ pxq for some constants cPJ ě 0. Then 1 |B| ż B HJ1 pxqHJ2 pxq dx  1 |B| ¸ PJ1PTJ1 PJ2PTJ2 cPJ1 cPJ2 |p PJ1 X PJ2 q X B|ď 1 |B| ¸ PJ1PTJ1 PJ2PTJ2 cPJ1 cPJ2 |PJ1 X PJ2 | where the last inequality is because cPJ ě 0 for all PJ . Since |PJ |  3 we also have 1 |B| ż 4B HJ pxq dx  1 |B| ż Ť PJPTJPJ ¸ PJPTJ cPJ 1PJ pxq dx   ¸ PJPTJ cPJ : Recall that J1 P F1 Ă PpI1q and J2 P F2 Ă PpI2q. Since I1 and I2 are d-separated, so are J1 and J2. Let =J1;J 2 be the angle between the directions of J1 and J2. By geometry discussion at the beginning of this section, sin p=J1;J 2 q ě 2d{5. Therefore |PJ1 X PJ2 | ď 2 sin p=J1;J 2 q ď 2 2d{5: Applying this gives 1 |B| ¸ PJ1PTJ1 PJ2PTJ2 cPJ1 cPJ2 |PJ1 X PJ2 |ď 32d1 |B| 2 ź i1 p1 |B| ż 4B HJi pxq dx q  3d1 |B|22ź i1 ż 4B HJi pxq dx: 65 Therefore (2.87) is bounded above by 3d12ź i1 p ¸ JPFi 1 |B| ż 4B HJ pxq dx q  768 d12ź i1 p ¸ JPFi 1 |4B| ż 4B HJ pxq dx q: (2.88) We now apply Lemma 2.6.3, proven later, to (2.88). This gives that an upper bound of E20 Ep d12ź i1 p ¸ JPFi }EJ g}p{2 Lp{2#prwB;E q q where here we have also used that E ě 100 and p ě 2. Thus (2.84) is bounded above by E20 Ep d1p 2 ź i1 |Fi|p{41q 2 ź i1 p ¸ JPFi }EJ g}p{2 Lp{2#prwB;E q q: (2.89) To obtain the right hand side of (2.83) we now use that intervals in Fi satisfy (2.82). We have p 2 ź i1 |Fi|p{41q 2 ź i1 p ¸ JPFi }EJ g}p{2 Lp{2#prwB;E q q ď 2 ź i1 |Fi|p{412ź i1 p| Fi| max J1PFi }EJ1 g}p{2 Lp{2#prwB;E q q 2ź i1 p| Fi| max J1PFi }EJ1 g}2 Lp{2#prwB;E q q1{2 p{2 ď 2ź i1 p ¸ JPFi 4}EJ g}2 Lp{2#prwB;E q q1{2 p{2  2p geom p ¸ JPFi }EJ g}2 Lp{2#prwB;E q qp{2 where the second inequality is due to (2.82). Inserting this into (2.89) then completes the proof of Lemma 2.6.2. Lemma 2.6.3. Let HJ be as de ned in (2.85) . Then 1 |4B| ż 4B HJ pxq dx ď E8Ep }EJ g}p{2 Lp{2#prwB;E q : Proof. This is the inequality proven in (29) of [BD17] without explicit constants. We follow their proof, this time paying attention to the implied constants. Fix arbitrary J Ă r 0; 1s of length  and center cJ . For x P Ť PJPTJ PJ  supp HJ Ă 4B,x arbitrary y P 2PJ pxq. Note that 2 PJ pxq points is a rectangle of dimension 2 1 22 with the longer side pointing in the direction of p 2cJ ; 1q.66 Let RJ and J be as in Lemma 2.2.5. Since cJ P r {2; 1  {2s, both cos J and sin J are nonzero. Note that RJ is the rotation matrix such that R1 J applied to 2 PJ pxq gives an axis parallel rectangle of dimension 2 1 22 with the longer side pointing in the vertical direction. Since y P 2PJ pxq, we can write R1 J y  R1 J x y where |y1| ď 21 and |y2| ď 22. 
We then have }EJ g}p{2 Lp{2prwBpy; ´1q;E q  ż R2 |p EJ gqp sq| p{2 rwBpxRJ y; ´1q;E psq ds Writing y  p y1; 0qT p 0; y 2qT and a change of variables gives that the above is equal to ż R2 |p EJ gqp s x RJ p0; y 2qT q| p{2 rwBpRJ py1;0qT ; ´1q;E psq ds: (2.90) Inserting Lemma 2.2.5 into (2.90) gives that }EJ g}p{2 Lp{2prwBpy; ´1q;E q ď 16 E ż R2 |p EJ gqp s x RJ p0; y 2qT q| p{2 rwBp0; ´1q;E psq ds: (2.91) Observe that |p EJ gqp s x RJ p0; y 2qT q|  | ż R2 yEJ gpqep  p s xqq ep  RJ p0; y 2qT q d |: Since RJ is a rotation matrix, a change of variables gives that the above is equal to | ż R2 yEJ gpRJ qep  R1 J ps xqq ep  p 0; y 2qT q d | (2.92) Writing ep  p 0; y 2qT q  epp 2  c2 J qy2qepc2 J y2q  epc2 J y2q 8 ¸ k0 p2i qkyk 2 k! p2  c2 J qk and using that |y2| ď 22 shows that (2.92) is ď 8 ¸ k0 p4qk k! | ż R2 yEJ gpRJ qep  R1 J ps xqqp 2  c2 J 2 qk d | Applying the change of variables     1pcJ q gives that the above is ď 8 ¸ k0 30 k k! | ż R2 yEJ gpRJ p 1pcJ qqq ep  R1 J ps xqqp 2 22 qk d |: (2.93) 67 Note that yEJ gpRJ p 1pcJ qqq is supported in a 4  42 box centered at the origin pointing in the horizontal direction. Thus we may insert the cutoff from Lemma 2.2.10 in (2.93). Then (2.93) becomes 8 ¸ k0 30 k k! | ż R2 yEJ gpRJ p 1pcJ qqq ep  R1 J ps xqqp 2 2 qk p 1 2 q p 2 2 q d |: Note that we are a bit wasteful since p1{p 2qq p2{p 2qq is equal to 1 on r 2; 2s2 rather than r 2; 2s r 22; 22s, but this will turn out to not matter. Let  kptq : tk ptq and let pMkf qp xq  ż R2 pf pRJ p 1pcJ qqq ep  xq p 1 2 qkp 2 2 q d: Thus we have shown that |p EJ gqp s x RJ p0; y 2qT q| ď 8 ¸ k0 30 k k! |p MkEJ gqp R1 J ps xqq| and combining this with (2.91) gives that for x P Ť PJPTJ PJ and y P 2PJ pxq, }EJ g}p{2 Lp{2#prwBpy; ´1q;E q ď 16 E 2 ż R2 p 8 ¸ k0 30 k k! |p MkEJ gqp R1 J ps xqq|q p{2 rwBp0; ´1q;E psq ds: Thus 1 |4B| ż 4B HJ pxq dx ď 16 E16 ż 4B ż R2 p 8 ¸ k0 30 k k! |p MkEJ gqp R1 J ps xqq|q p{2 rwBp0; ´1q;E psq ds dx  16 E16 ż R2 p 8 ¸ k0 30 k k! |p MkEJ gqp uq|q p{2p ż 4B rwBpx; ´1q;E pRJ uq dx q du: (2.94) As 1 4B ď 4E rw4B;E ď 64 E rwB;E and since B is centered at the origin, ż 4B rwBpx; ´1q;E pRJ uq dx  p 14B  rwBp0; ´1q;E qp RJ uqď 64 E p rwB;E  rwBp0; ´1q;E qp RJ uq ď 256 E 2 rwB;E pRJ uq: Thus it follows that (2.94) is bounded by 212 E 4p 8 ¸ k0 30 k k! }MkEJ g  R1 J }Lp{2p rwB;E qqp{2: (2.95) 68 Inserting an extra epRJ 1pcJ q  sq and applying a change of variables gives |p MkEJ gqp R1 J sq|  | ż R2 yEJ gpRJ p 1pcJ qqq epRJ   sq p 1 2 qkp 2 2 q d | | ż R2 yEJ gp qep  sq xmkp q d | where xmkp q  p 1 cos J 2 sin J  cJ 2 qkp 2 cos J  1 sin J  c2 J 2 q: Then |MkEJ g  R1 J |  | EJ g  mk| ď | EJ g|  | mk| and H older's inequality implies p| EJ g|  | mk|q p{2 ď p| EJ g|p{2  | mk|q} mk}p{21 L1 : Therefore }MkEJ g  R1 J }Lp{2p rwB;E q ď } mk}12{pL1pR2q}EJ g}Lp{2p rwB;E | mk |pqq (2.96) where here |mk|pq is the function |mk|p xq. Since  and are both Schwartz functions, our goal will be to use the rapid decay to show that |mk| À E rwB;E . 
A change of variables gives |mkpxq|  | ż R2 xmkp qe2ix  d | 42| ż R pw1qe2i pR´1 Jxq1p2w 1q dw 1 ż R kpw2qe2i pR´1 Jxq2p2w 2q dw 2|: Since  0, by Lemma 2.2.10, | ż R pw1qe2i pR´1 Jxq1p2w 1q dw 1| ď E5E p1 2|p R1 J xq1|q 2E and | ż R kpw2qe2i pR´1 Jxq2p2w 2q dw 2| ď 6kE5E p1 2|p R1 J xq2|q 2E : Therefore |mkpxq| ď 426kE10 E p1 |p R1 J xq1| 1 q2E p1 |p R1 J xq2| 1 q2E : (2.97) 69 Thus we have }mk}12{pL1pR2q ď p 6kE11 E q12{p: (2.98) Applying Lemma 2.2.6 to (2.97) shows |mkpxq| ď 4p6kE10 E q2 rwBp0; ´1q;E pxq: Note that this inequality does not change if we replace x with x on the left hand side since the right hand side is radial. Lemma 2.2.1 then implies that rwB;E  | mk|pq ď 6kE11 E rwB;E and hence }EJ g}Lp{2p rwB;E | mk |pqq ď p 6kE11 E q2{p}EJ g}Lp{2p rwB;E q: Combining this with (2.95), (2.96), and (2.98) shows that 1 |4B| ż 4B HJ pxq dx ď 212 E E11 Ep {24p 8 ¸ k0 180 k k! }EJ g}Lp{2p rwB;E qqp{2 ď E8Ep }EJ g}p{2 Lp{2#prwB;E q where in the last inequality we have used that E ě 100 and p ě 2. This completes the proof of Lemma 2.6.3. Proof of Theorem 2.6.1. If p  4, the proof of Lemma 2.6.2 (in particular (2.89)) implies that we can just take Fi  PpIiq and discard the requirement in (2.82) since the only reason we dyadically decomposed and restricted to p ą 4 was to match the Lp{2# with the ℓ2 sum over ř JPFi in (2.83). From now on we assume p ą 4. For i  1; 2, let Mi : max JPPpIiq }EJ g}Lp{2# p rwB;E q: For each i  1; 2, let Fi; 0 denote the set of intervals J1 P PpIiq such that }EJ1 g}Lp{2# p rwB;E q ď 3Mi 70 and partition the remaining intervals in PpIiq into rlog 2p3qs many classes Fi;k (with k  1; 2; : : : ; rlog 2p3qs) such that 2k13Mi ă } EJ1 g}Lp{2# p rwB;E q ď 2k3Mi for all J1 P Fi;k . Note that Fi;k satis es the hypothesis (2.82) given in Lemma 2.6.2. For 1 ď k; l ď rlog 2p3qs, let F∆pk; l q : p ¸ JPF1;k }EJ g}2 Lp{2#prw∆;E q qp{4p ¸ JPF2;l }EJ g}2 Lp{2#prw∆;E q qp{4: Note that F∆pa; b q  F∆pb; a q.The left hand side of (2.81) is equal to 1 |B| ¸ ∆PB p ¸ 0ďk;l ďrlog 2p´3qs ¸ JPF1;k J1PF2;l }EJ g}2 Lp{2#prw∆;E q }EJ1 g}2 Lp{2#prw∆;E q qp{4 ď p rlog 2p3qs 1qp 22 1 |B| ¸ ∆PB ¸ 0ďk;l ďrlog 2p´3qs F∆pk; l q: (2.99) We then have 1 |B| ¸ ∆PB rlog 2p´3qs ¸ k;l 0 F∆pk; l q 1 |B| ¸ ∆PB F∆p0; 0q 2 rlog 2p´3qs ¸ k1 1 |B| ¸ ∆PB F∆p0; k q rlog 2p´3qs ¸ k;l 1 1 |B| ¸ ∆PB F∆pk; l q: (2.100) We rst consider the third sum on the right hand side of (2.100). In this case, both families of intervals satisfy (2.82) in Lemma 2.6.2. Thus applying Lemma 2.6.2 gives that rlog 2p´3qs ¸ k;l 1 1 |B| ¸ ∆PB F∆pk; l q ď rlog 2p3qs2E30 Ep d1 geom p ¸ JPPpIiq }EJ g}2 Lp{2#prwB;E q qp{2: (2.101) The rst two sums on the right hand side of (2.100) are taken care of by trivial estimates. We consider the rst sum. From Proposition 2.2.14, rw∆;E ď 48 E rwB;E (we can obtain a better constant using Lemma 2.2.1 and 1 ∆ ď 1B but this is not needed). Therefore for J1 P Fi; 0,max ∆PB }EJ1 g}Lp{2# p rw∆;E q ď 4{p48 2E{p}EJ1 g}Lp{2# p rwB;E q ď 34{p48 2E{pMi: (2.102) 71 Since |Fi; 0| ď | PpIiq| ď 1,1 |B| ¸ ∆PB F∆p0; 0q ď p| F1;0|| F2;0|12 16 {p48 8E{pM 21 M 22 qp{4 ď 5p{2448 2E geom pM 2 i qp{2: (2.103) Since p ą 4, 5 p{2  4 ą 6 and so the union bound implies that (2.103) is bounded by 48 2E geom p ¸ JPPpIiq }EJ g}2 Lp{2#prwB;E q qp{2: (2.104) Finally we consider the second sum on the right hand side of (2.100). 
From the same proof as (2.102), for J1 P F2;k with k  0 we have max ∆PB }EJ1 g}Lp{2# p rw∆;E q ď 4{p48 2E{pM2: Therefore by the same reasoning as in the previous paragraph we have 1 |B| ¸ ∆PB F∆p0; k q ď p| F1;0|| F2;k |p 34{p48 2E{pM1q2p4{p48 2E{pM2q2qp{4 ď p448 2E geom pM 2 i qp{2: Since p ą 4, we can discard the power of  and hence 2 rlog 2p´3qs ¸ k1 1 |B| ¸ ∆PB F∆p0; k q ď 2rlog 2p3qs48 2E geom p ¸ JPPpIiq }EJ g}2 Lp{2#prwB;E q qp{2: Combining this with (2.100), (2.101), and (2.104) shows that (2.99) (and hence the left hand side of (2.81)) is bounded above by p   q geom p ¸ JPPpIiq }EJ g}2 Lp{2#prwB;E q qp{2 where p   q is equal to prlog 2p3qs 1qp 22 rlog 2p3qs2E30 Ep d1 2rlog 2p3qs248 2E 48 2E : Since  ă 1{10 and E ě 100, this is bounded above by E50 Ep d1plog 1  qp{2 which completes the proof of Theorem 2.6.1. 72 2.7 The iteration: preliminaries We now setup the iteration scheme as in [BD17] except this time we pay attention to various integrality constraints from previous sections. Let g : r0; 1s Ñ C, t ě 1, q ď r, and I1; I 2 two intervals in r0; 1s. Let Br be a square in R2 with side length r. De ne Gtpq; r q : geom p ¸ JPPqpIiq }EJ g}2 Lt prwBr ;E q q1{2 and Appq; r q  p Avg BqPP´qpBrq G2pq; q qpq1{p : 1 |P´q pBrq| ¸ BqPP´qpBrq G2pq; q qp 1{p : Strictly speaking we should be writing Gtpq; B rq instead of Gtpq; r q since this expression is different for different Br, however all that matters is keeping track of what our frequency and spatial scales are so for simplicity we will write r instead of Br. Remark 2.7.1 . Note that for Gtpq; r q and Appq; r q to be de ned, we need |Ii|q P N and rq P N.For a square Bq, note that Appq; q q  G2pq; q q for all p. In Appq; r q, increasing q represents smaller frequency scales and increasing r represents larger spatial scales. We note that Gt and Ap here are essentially the same as Dp and Ap, respectively in [BD17]. The only difference is that here we use the weight rwB instead of wB . This is because our bilinear decoupling constant is de ned with weight rwB rather than wB .Observe that Gt and Ap obey the following two basic properties. First the t parameter in Gt obeys H older's inequality. Lemma 2.7.2 (H older's inequality for Gt). For each square Br Ă R2, if p1  q{ p1 {p2  1{t, then Gtpq; r q ď Gp1 pq; r q1 Gp2 pq; r q : Proof. The factor 1 {| Br| in the de nition of Gt balances out by how is de ned and hence we may replace Lt , Lp1 , and Lp2 with Lt, Lp1 , and Lp2 , respectively. Next, it suffices to 73 prove that ¸ JPPqpIiq }EJ g}2 LtprwBrq ď ¸ JPPqpIiq }EJ g}2 Lp1prwBrq 1 ¸ JPPqpIiq }EJ g}2 Lp2prwBrq : Applying H older's inequality gives that }EJ g}2 l2 JLt ď }EJ g}1 Lp1 }EJ g}Lp2  2 l2 J  }EJ g}2p1 q Lp1 }EJ g}2 Lp2 l1 J ď } EJ g}2p1 q l2 JLp1 }EJ g}2 l2 JLp2 where here by Lp we mean Lpp rwBr q. This completes the proof Lemma 2.7.2. Second, the averaging in the r parameter in Ap allows us to increase it. Lemma 2.7.3. Fix arbitrary positive integers r ď s ď t and suppose  is such that |Ii|r P N, sr P N, and ts P N. Then for each square Bt Ă R2, Avg BsPP´spBtq Appr; s qp  Appr; t qp: Proof. Fix arbitrary square Bt Ă R2. Expanding the left hand side, we have Avg BsPP´spBtq Appr; s qp  Avg BsPP´spBtq Avg BrPP´rpBsq G2pr; r qp  Avg BrPP´rpBtq G2pr; r qp  Appr; t qp: This completes the proof of Lemma 2.7.3. Finally, we end this section with an outline of our strategy. As in Section 2.5, let m ě 1, E ě 100, 2 ď p ď 6, and  : 216 2mE10 E : Let I1; I 2 be two arbitrary intervals in P pr 0; 1sq which are at least -separated. Lemma 2.7.4. 
Suppose  was such that 1{2m P 2N and  1{2m P N. Then for each square B1 of side length 1, we have } geom |EIi g|} Lp pB1q ď E100 E 1{21{2m`1 App 12m ; 1q: 74 Proof. Note that since 1{2m P 2N, 11{2m P N since m ě 1. This proof is just an application of H older, Minkowski, and Bernstein inequalities. We have } geom |EIi g|} pLp pB1q  1 |B1| ż B1 geom |EIi g|p  1 |B1| ż B1 geom | ¸ JPP1{2m pIiq EJ g|p ď p 1{21{2m`1 qp 1 |B1| ż B1 geom p ¸ JPP1{2m pIiq |EJ g|2qp{2  p 1{21{2m`1 qp Avg B1{2m PP´1{2m pB1q } geom p ¸ JPP1{2m pIiq |EJ g|2q1{2}pLp pB1{2m q: Note that } geom p ¸ JPP1{2m pIiq |EJ g|2q1{2}pLppB1{2m q ď geom }p ¸ JPP1{2m pIiq |EJ g|2q1{2}pLppB1{2m q: Since p ě 2, }p ¸ JPP1{2m pIiq |EJ g|2q1{2}pLppB1{2m q ď p ¸ JPP1{2m pIiq }EJ g}2 LppB1{2m qqp{2: Combining the above three centered equations gives that } geom |EIi g|} Lp pB1q ď 1{21{2m`1 p Avg B1{2m PP´1{2m pB1q geom p ¸ JPP1{2m pIiq }EJ g}2 Lp pB1{2m qqp{2q1{p: Bernstein's inequality (Lemma 2.2.20) and that p ď 6, E ě 100 gives that }EJ g}Lp pB1{2m q ď 4pE {2ppE {2q23 pE {2}EJ g}L2#p rwB1{2m ;E q ď E100 E }EJ g}L2#p rwB1{2m ;E q: Inserting this above gives that } geom |EIi g|} Lp pB1q ď E100 E 1{21{2m`1 App 12m ; 1q which completes the proof of Lemma 2.7.4. Our target will be to prove an estimate of the form App2m; 1q À ;;E;m Gpp12; 1q (2.105) 75 because then combining this with Lemma 2.7.4 gives an upper bound on the bilinear decou-pling constant. Proposition 2.5.3 then allows us to control the linear decoupling constant. To prove (2.105), we will use ball in ation, l2L2 decoupling to prove an estimate of the form App2ℓ; 2ℓ1q À ;E App2ℓ1; 2ℓ1q for each ℓ  2; 3; : : : ; m . Then Lemma 2.7.3 allows us to patch all the estimates together. The iteration is easier in the 2 ď p ď 4 regime and so we will rst do that case, then we will move on to the case when 4 ă p ă 6. Finally, to control the decoupling constant at p  6, we will apply Bernstein's inequality and use the decoupling constant at p1 for some p1 suitably close to 6. 2.8 Control of the bilinear decoupling constant We now iterate to control the bilinear decoupling constant. We have two separate but similar cases. Our goal is to prove the following result. Proposition 2.8.1. Fix integers m ě 3 and E ě 100 . Let  : 216 2mE10 E and suppose  is such that 1{2m P 2N and  1{2m P N. paq If 2 ď p ď 4, then Dp;E p; m q ď 1{2pE300 E 1{4qm 12m`1 : pbq If 4 ă p ă 6, let a  p4 p2 , then Dp;E p; m q ď 1{2pE300 E 1{4plog 1  q1{2qm 12m`1 Dp;E pq1p 1aqm´1 : 2.8.1 Case 2 ď p ď 4 Lemma 2.8.2. Fix an integer 2 ď ℓ ď m. Suppose 1{2ℓ P 2N and  1{2ℓ P N. Then for each square B2{2ℓ Ă R2, we have A4p 12ℓ ; 22ℓ q ď E100 E 1{4A4p 22ℓ ; 22ℓ q: 76 Proof. Fix an arbitrary square B2{2ℓ of side length 2{2ℓ . Note that our restrictions on  and  also imply that  2{2ℓ P N. We have A4p 12ℓ ; 22ℓ q4  Avg B1{2ℓPP´1{2ℓpB2{2ℓq G2p 12ℓ ; 12ℓ q4 ď E200 E 1G2p 12ℓ ; 22ℓ q4 (2.106) where the inequality is by an application of Theorem 2.6.1. By l2L2 decoupling (Lemma 2.2.21), for each interval J P P1{2ℓ pIiq, we have }EJ g}2 L2#prwB2{2ℓ ;E q ď E13 E ¸ J1PP2{2ℓpJq }EJ1 g}2 L2#prwB2{2ℓ ;E q : Therefore ¸ JPP1{2ℓpIiq }EJ g}2 L2#prwB2{2ℓ ;E q ď E13 E ¸ JPP1{2ℓpIiq ¸ J1PP2{2ℓpJq }EJ1 g}2 L2#prwB2{2ℓ ;E q : Since Ii, J and J1 are all dyadic intervals, the above is equal to E13 E ¸ J1PP2{2ℓpIiq }EJ1 g}2 L2#prwB2{2ℓ ;E q : Therefore G2p 12ℓ ; 22ℓ q ď E13 E{2G2p 22ℓ ; 22ℓ q  E13 E{2A4p 22ℓ ; 22ℓ q: Combining this with (2.106) completes the proof of Lemma 2.8.2. 
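It may help to record why $l^2L^2$ decoupling is available at exactly this pair of scales; the following is only a heuristic (the rigorous statement used above is Lemma 2.2.21, and the weights are written schematically). Multiplying by a weight adapted to a square $B'$ of side length $\delta^{-2/2^{\ell}}$ blurs Fourier supports by $O(\delta^{2/2^{\ell}})$. The caps $\{(\xi_1,\xi_1^{2}):\xi_1\in J'\}$ with $J'\in P_{\delta^{2/2^{\ell}}}(J)$ are $\delta^{2/2^{\ell}}$-separated in $\xi_1$, so the functions $E_{J'}g\cdot\widetilde w_{B'}$ still have boundedly overlapping Fourier supports, and Plancherel gives, essentially,
\[
\|E_Jg\|_{L^2(\widetilde w_{B'})}^{2}\lesssim\sum_{J'\in P_{\delta^{2/2^{\ell}}}(J)}\|E_{J'}g\|_{L^2(\widetilde w_{B'})}^{2}.
\]
In other words, on a spatial square of side length $R$ one can decouple in $l^2L^2$ down to frequency intervals of length $1/R$, but no further; this is why, at spatial scale $\delta^{-2/2^{\ell}}$, the frequency scale drops from $\delta^{1/2^{\ell}}$ to $\delta^{2/2^{\ell}}$ in one step.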
H older's inequality allows us to change from A4 to Ap for 2 ď p ď 4 at no cost. Corollary 2.8.3. Fix an integer 2 ď ℓ ď m. Suppose 1{2ℓ P 2N and  1{2ℓ P N. Then for each square B2{2ℓ Ă R2, we have App 12ℓ ; 22ℓ q ď E100 E 1{4App 22ℓ ; 22ℓ q: Proof. Applying H older's inequality to the de nition of Ap shows that for 2 ď p ď 4, Appq; r q ď A4pq; r q. Lemma 2.8.2 and that A4p 22ℓ ; 22ℓ q  G2p 22ℓ ; 22ℓ q  App 22ℓ ; 22ℓ q then completes the proof of Corollary 2.8.3. 77 Now for each square B1 with side length 1, we partition into squares of side length 2{2ℓ and sum the previous corollary over all such squares. This yields the following result. Lemma 2.8.4. Fix an integer 2 ď ℓ ď m. Suppose 1{2ℓ P 2N and  1{2ℓ P N. Then for each square B1 Ă R2, we have App 12ℓ ; 1q ď E100 E 1{4App 12ℓ1 ; 1q: Proof. Fix an arbitrary square B1 of side length 1. Since 1{2ℓ P 2N, we can dyadically partition B1 into squares of side length 1{2ℓ . Lemma 2.7.3 and Corollary 2.8.3 then give that App 12ℓ ; 1qp  Avg B2{2ℓPP´2{2ℓpB1q App 12ℓ ; 22ℓ qp ď E100 Ep p{4 Avg B2{2ℓPP´2{2ℓpB1q App 22ℓ ; 22ℓ qp  E100 Ep p{4App 22ℓ ; 1qp: This completes the proof of Lemma 2.8.4. Now we combine the m  1 inequalities together to obtain the following result. Lemma 2.8.5. Suppose 1{2m P 2N and  1{2m P N, then for each square B1 Ă R2, we have App 12m ; 1q ď p E100 E 1{4qm1App12; 1q: Proof. Since 1{2m P 2N, 1{2ℓ P 2N for ℓ  1; 2; : : : ; m . Since 1{2m P 2N and  1{2m P N,it follows that  1{2m´1 P N. Since 1{2m´1 P 2N, we have that  1{2m´2 P N. Continuing this shows that  1{2ℓ P N for ℓ  1; 2; : : : ; m . Iterating Lemma 2.8.4 a total of m  1 times then completes the proof of Lemma 2.8.4. We now nally relate App1{2; 1q to Gpp1{2; 1q which will prove (2.105) in the case when 2 ď p ď 4. Lemma 2.8.6. If 1{2;  1{2 P N, then App12; 1q ď 48 E{pGpp12; 1q: 78 Proof. H older's inequality (2.3) implies that G2p12; 12q ď geom p ¸ JPP1{2pIiq }EJ g}2 Lp prwB1{2;E q q1{2: Since } geom fi}p ď geom }fi}p and so App12; 1q ď 1 |P´1{2 pB1q| 1{p geom p ¸ B1{2PP´1{2pB1q p ¸ JPP1{2pIiq }EJ g}2 Lp prwB1{2;E q qp{2q1{p: Changing the Lp to Lp, interchanging the l2 and lp norms, and then applying Proposition 2.2.14 shows that this is ď 48 E{pGpp1{2; 1q which completes the proof of Lemma 2.8.6. Combining Lemmas 2.8.4 and 2.8.6 then proves (2.105) in the case when 2 ď p ď 4. Lemma 2.8.7. Suppose 1{2m P 2N and  1{2m P N, then for each square B1 Ă R2, we have App 12m ; 1q ď p E200 E 1{4qm1Gpp12; 1q: Combining Lemma 2.8.7 with Lemma 2.7.4 and applying the de nition of the bilinear decoupling constant gives Proposition 2.8.1 in the case when 2 ď p ď 4. 2.8.2 Case 4 ă p ă 6We now implement the iteration in the case when 4 ă p ă 6. This case is similar to the case when 2 ď p ď 4. For 4 ă p ă 6, a  p4 p2 satis es 1 p{2  ap 1  a 2 : Note that 2 p1  aq decreases monotonically to 1 as p increase to 6. The analogue of Lemma 2.8.2 and Corollary 2.8.3 is as follows. Lemma 2.8.8. Fix an integer 2 ď ℓ ď m. Suppose 1{2ℓ P 2N and  1{2ℓ P N. Then for each square B2{2ℓ Ă R2, we have App 12ℓ ; 22ℓ q ď E60 E 1{4plog 1  q1{2App 22ℓ ; 22ℓ q1aGpp 12ℓ ; 22ℓ qa: 79 Proof. The proof is similar to that of Lemma 2.8.2. Since p ě 4, in the de nition of Ap,we can increase the L2#p rwB1{2ℓ ;E q to Lp{2# p rwB1{2ℓ ;E q using H older's inequality. 
Combining this with Theorem 2.6.1 gives that App 12ℓ ; 22ℓ q ď E50 E 1{4plog 1  q1{2Gp{2p 12ℓ ; 22ℓ q: H older's inequality for Gt (Lemma 2.7.2) then shows that Gp{2p 12ℓ ; 22ℓ q ď Gpp 12ℓ ; 22ℓ qaG2p 12ℓ ; 22ℓ q1a: Proceeding as at the end of the proof of Lemma 2.8.2 gives that G2p 12ℓ ; 22ℓ q ď E13 E{2App 22ℓ ; 22ℓ q Putting the above three centered equations together then completes the proof of Lemma 2.8.8. The analogue of Lemma 2.8.4 is as follows. The strategy of proof is essentially the same as that in Lemma 2.8.4 except this time we also need to deal with the Gpp2ℓ; 2ℓ1qa term from Lemma 2.8.8. Lemma 2.8.9. Fix an integer 2 ď ℓ ď m. Suppose 1{2ℓ P 2N and  1{2ℓ P N. Then for each square B1 Ă R2, we have App 12ℓ ; 1q ď E100 E 1{4plog 1  q1{2App 12ℓ1 ; 1q1aGpp 12ℓ ; 1qa: Proof. Fix an arbitrary square B1 of side length 1. Since 1{2ℓ P 2N, we can dyadi-cally partition B1 into squares of side length 1{2ℓ . Lemmas 2.7.3 and 2.8.8 and H older's inequality gives that App 12ℓ ; 1qp  Avg B2{2ℓPP´2{2ℓpB1q App 12ℓ ; 22ℓ qp ď E60 Ep  p 4 plog 1  qp{2 Avg B2{2ℓPP´2{2ℓpB1q App 22ℓ ; 22ℓ qp 1a Avg B2{2ℓPP´2{2ℓpB1q Gpp 12ℓ ; 22ℓ qp a : 80 Lemma 2.7.3 gives that the rst parenthetical term is equal to App 22ℓ ; 1qpp1aq. Thus the lemma is complete if we can show that Avg B2{2ℓPP´2{2ℓpB1q Gpp 12ℓ ; 22ℓ qp ď E40 Ep Gpp 12ℓ ; 1qp: (2.107) Expanding de nitions and interchanging geometric mean and the sum over B2{2ℓ gives that Avg B2{2ℓPP´2{2ℓpB1q Gpp 12ℓ ; 22ℓ qp ď 1 |B1| geom p ¸ B2{2ℓPP´2{2ℓpB1q p ¸ JPP1{2ℓpIiq }EJ g}2 LpprwB2{2ℓ ;E q qp{2q: Since p ě 2, we can switch the l2 and lp norms inside the geometric mean. Finally, apply Proposition 2.2.14 then proves that the above is ď 48 E Gpp 12ℓ ; 1qp which proves (2.107). This completes the proof of Lemma 2.8.9. Combining the above m  1 inequalities in Lemma 2.8.9 gives the following result. Lemma 2.8.10. Suppose 1{2m P 2N and  1{2m P N, then for each square B1 Ă R2, we have App 12m ; 1q ď p E100 E 1{4plog 1  q1{2qm1App12; 1qp1aqm´1 m ź ℓ2 Gpp 12ℓ ; 1qap1aqm´ℓ : Proof. The proof is the same as that of Lemma 2.8.5. To control App12 ; 1q, we use Lemma 2.8.6. However, now we also need to control Gpp 12ℓ ; 1q which we achieve by the following trivial bound. Lemma 2.8.11. Fix an integer 2 ď ℓ ď m. Suppose 1{2ℓ P 2N and  1{2ℓ P N. Then Gpp 12ℓ ; 1q ď E100 E Dp;E pqGpp12; 1q: Proof. For each J P P1{2ℓ pIiq, we have }EJ g}LppB1q  } Er0;1spg1J q} LppB1q ď rDp;E pqp ¸ J1PP1{2pr 0;1sq }EJ1 pg1J q} 2 LpprwB1;E q q1{2  rDp;E pqp ¸ J1PP1{2pJq }EJ1 g}2 LpprwB1;E q q1{2 81 where the last equality is because both 1{2ℓ and 1{2 are dyadic. Applying Propositions 2.2.11 and 2.3.11 then shows that }EJ g}Lpp rwB1;E q ď 12 E{pE70 E Dp;E pqp ¸ J1PP1{2pJq }EJ1 g}2 LpprwB1;E q q1{2: Combining this with the de nition of Gpp1{2ℓ; 1q then completes the proof of Lemma 2.8.11. Combining Lemmas 2.8.6, 2.8.10, and 2.8.11 gives the following result. Lemma 2.8.12. Suppose 1{2m P 2N and  1{2m P N, then for each square B1 Ă R2, we have App 12m ; 1q ď p E100 E 1{4plog 1  q1{2qmDp;E pq1p 1aqm´1 Gpp12; 1q This with Lemma 2.7.4 then proves Proposition 2.8.1 when 4 ă p ă 6. Note that in this case we obtain a small improvement over the trivial bound of Dp;E p; m q À p;E Dp;E pq which is the key to obtaining control of the linear decoupling constant when 4 ă p ă 6. 2.9 Decoupling at lacunary scales Using Propositions 2.5.3 and 2.8.1 we bound the linear decoupling constant at a sequence of lacunary scales. The lacunary scales are because of the integrality conditions in Proposition 2.8.1. 
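For concreteness, and anticipating the choice made in Proposition 2.9.1 below, here is why a lacunary family suffices; this is only a verification, under the convention (used throughout the argument below) that $\nu$ is a dyadic number, i.e. $\nu\in 2^{-\mathbb N}$. If $\delta=\nu^{2^{m}n}$ for some $n\in\mathbb N$, then
\[
\delta^{1/2^{m}}=\nu^{n}\in 2^{-\mathbb N}\qquad\text{and}\qquad \nu\,\delta^{-1/2^{m}}=\nu^{-(n-1)}\in\mathbb N,
\]
so the integrality hypotheses of Proposition 2.8.1 hold for every $\delta$ in the lacunary sequence $\{\nu^{2^{m}n}\}_{n\ge 1}$, whereas a general $\delta\in\mathbb N^{-2}$ need not satisfy them. Section 2.10 then removes this restriction using almost multiplicativity.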
Our goal will be to prove the following result. Proposition 2.9.1. Let E ě 100 , m ě 3,  : 216 2mE10 E , and  P t 2mnu8 n1 . paq If 2 ď p ď 4, then Dp;E pq ď 2m2 E400 Em 2m  12m : pbq If 4 ă p ă 6, then Dp;E pq ď p 2m2 E400 Em 2m q 1 r2{p p´2qs m´1  12r4{p p´2qs m´1 : 82 pcq If p  6, then for p1 P p 4; 6q, we have D6;E pq ď E50 E p2m2 E400 Em 2m q 1 p2{p p1´2qq m´1  12r4{p p1´2qs m´1 2p 1 p116q : The proof of Proposition 2.9.1 actually shows that Dp;E pq ď E400 Em 2m 1{2m for 2 ď p ď 4, but the extra 2 m2 is harmless and will allow us to treat all three cases in essentially the same manner. Note that in Propositions 2.8.1 and 2.9.1, the bound when 2 ď p ď 4 is same as the bound for 4 ă p ă 6 except with p  4 (and so a  0) and no plog 1  q1{2. When we prove Proposition 2.9.1, we will only consider the more complicated case when 4 ă p ă 6 and p  6. 2.9.1 Case 4 ă p ă 6We rst prove the following lemma. Lemma 2.9.2. Let   216 2mE10 E , 1{2 P 2N, and a  p4 p2 . Let K be the largest integer such that 1{2K P 2N. Suppose p 2iq1{2m P 2N for all i  0; 1; : : : ; K  1. Then Dp;E pq ď 2m2 E400 Em 2m  12m max i0;1;:::;K 2m´11 Dp;E p 2iq1p 1aqm´1 : Proof. Observe that p 2iq1{2m  p  2pi2m´1qq1{2m and so for i  0; 1; : : : ; K  2m1  1, we have that p 2iq1{2m P N.For i  0; 1; : : : ; K  2m1  1, we may apply Proposition 2.8.1 which gives that for such i, Dp;E p 2i; m q ď p E300 E 1{4plog 1  q1{2qm 12m`1 Dp;E p 2iq1p 1aqm´1 : For i  K  2m1; : : : ; K  1, the trivial bound (Lemma 2.5.1) gives that Dp;E p 2i; m q ď 4E 1{2p 2iq1{4 ď 4E p1{2K q1{2 12 p2m´11q: (2.108) By how K is de ned, 1{2K1 R 2N. Since 1{2 and  are dyadic numbers, we must then have 1{2K1 P 2Z and hence 1{2K1 ě 1 which implies that 1{2K ď 1. Inserting 83 this into (2.108) gives that for such i, Dp;E p 2i; m q ď 4E 2m{4: Therefore Proposition 2.5.3 gives that Dp;E pqď 100 E log  E 1 max p1; 4E 2m{4; max i0;1;:::;K 2m´11 Dp;E p 2i; m qq ď 100 E log  E 1 max 4E 2m{4; pE300 E 1{4plog 1  q1{2qm 12m`1 max i0;1;:::;K 2m´11 Dp;E p 2iq1p 1aqm´1 ď E300 Em 2m plog 1  qm{2 12m`1 100 E log  E max i0;1;:::;K 2m´11 Dp;E p 2iq1p 1aqm´1 where in the last inequality we have used that Dp;E pq ě 12 E{p for all  which follows from the same proof as Lemma 2.3.5. Observe that log 1  ď 1 ae a for a ą 0, and hence plog 1  qm{2 ď 2m2 E4Em  52m¨E8E : Furthermore, from our de nition of , 100 E log  E ď  10 2mE8E . Inserting this into the above completes the proof of Lemma 2.9.2. Because of the generality of the statement of the previous lemma, we can upgrade the above result so that the same maximum appears on both left and right hand sides. Lemma 2.9.3. Suppose ;  , K, and a are as in Lemma 2.9.2. The left hand side of the inequality in Lemma 2.9.2 can be replaced with max i0;1;:::;K 2m´11 Dp;E p 2iq.Proof. Fix a j  0; 1; : : : ; K  2m1  1. Let Kpjq : K  j. Since K is the largest integer such that 1{2K P 2N, it follows that Kpjq is the largest integer such that p 2j q1{2Kpjq  1{2p Kpjq jq P 2N: We similarly also have p 2pijqq1{2m P 2N for i  0; 1; : : : ; K pjq  1. Therefore Lemma 2.9.2 gives that Dp;E p 2j q ď 2m2 E400 Em 2m  12m max ℓ0;1;:::;K 2m´11j Dp;E p 2pjℓqq1p 1aqm´1 ď 2m2 E400 Em 2m  12m max ℓ0;1;:::;K 2m´11 Dp;E p 2ℓq1p 1aqm´1 : 84 Since j on the left hand side of the above inequality is arbitrary and the right hand side is independent of j, the above inequality is still true if we take the maximum over all j on the left hand side. This completes the proof of Lemma 2.9.3. This gives the following corollary. Corollary 2.9.4. 
Suppose ;  , K, and a are as in Lemma 2.9.2. Then max ℓ0;1;:::;K 2m´11 Dp;E p 2ℓq ď p 2m2 E400 Em 2m  12m q 1 p1´aqm´1 Taking ℓ  0 in Corollary 2.9.4 and observing that the choice of  P t 2mnu8 n1 satis es the hypothesis of Lemma 2.9.2 completes the proof of Proposition 2.9.1 when 4 ă p ă 6. Indeed, with this choice of , K  2m1n  1 and so observe that p 2iq1{2m  p 1qn2i{2m and for i  0; 1; : : : ; K  1, we have n  2i{2m ě 0. 2.9.2 Case p  6At p  6 the argument no longer gives a better than trivial estimate since here 2 p1  aq  1. The advantage we have however is that we know a good bound on Dp1;E pq for all p1 arbitrary close to 6. This combined with reverse H older and H older is enough to give a better than trivial bound at p  6. Let 4 ă p1 ă 6 to be chosen later. The proof of Lemma 2.2.20 along with Corollary 2.2.9 and Proposition 2.2.11 imply that }Er0;1sg}L6pBq ď 25 p1{p11{6qE22 E }Er0;1sg}Lp1 pwB;E q ď E23 E Dp1;E pqp ¸ JPP1{2pr 0;1sq }EJ g}2 Lp1pwB;E q q1{2: H older's inequality to increase Lp1 to L6 then implies that D6;E pq ď E50 E p2q1{p11{6Dp1;E pq: 85 Combining this with Proposition 2.9.1 for 4 ă p1 ă 6 shows that under the hypothesis of Proposition 2.9.1 and arbitrary 4 ă p1 ă 6, we have D6;E pq ď E50 E p2m2 E400 Em 2m q 1 p2{p p1´2qq m´1  12r4{p p1´2qs m´1 2p 1 p116q : Thus if we choose p1 so that 1 {p1  1{6 is sufficiently small and then choose m sufficiently large, we once again can do better than the trivial bound of OE;p p1{4q. This completes the proof of Proposition 2.9.1 when p  6. 2.10 Decoupling at all scales While Proposition 2.9.1 is for a lacunary sequence of scales, recall that the decoupling con-stant de ned in (2.1) is for  P N2. To upgrade Proposition 2.9.1 to all scales  P N2 we use lacunarity and Proposition 2.4.1. Lemma 2.10.1. Suppose  P r 1;  2s X N2 and 2{1  c. Then Dp;E pq ď E100 E 2E{pc1{4Dp;E p2q: Proof. Using Proposition 2.4.1 and the trivial bound on decoupling we have Dp;E pq ď E100 E Dp;E p2qDp;E p 2 qď E100 E 2E{pp2  q1{4Dp;E p2q ď E100 E 2E{pc1{4Dp;E p2q which completes the proof of Lemma 2.10.1. Combining this lemma with Proposition 2.9.1 gives the following result. Proposition 2.10.2. Let E ě 100 , m ě 3, and suppose  P N2. paq If 2 ď p ď 4, then Dp;E pq ď 24mE15 E  12m : pbq If 4 ă p ă 6, then Dp;E pq ď p 24mE15 E  12m q 1 r2{p p´2qs m´1 : 86 pcq If p  6, then for p1 P p 4; 6q we have Dp;E pq ď p 24mE15 E  12m q 1 r2{p p1´2qs m´1 2p 1 p116q : Proof. Recall that   216 2mE10 E . The proof of all three parts is essentially the same, so we only concentrate on the 2 ď p ď 4 case. If  P r 2m ; 1s X N2, the trivial bound gives that Dp;E pq ď 2E{p2m{4  2E{p44mE10 E : (2.109) From Lemma 2.10.1, if  P r 2mpn1q;  2mns X N2 for some n ě 1, then Dp;E pq ď E100 E 2E{p2m{4Dp;E p2mnq: Inserting the bound from Proposition 2.9.1 gives that the above is bounded by E100 E 2E{p2m{42m2 E400 Em 2m  12m ď 2m2 E500 Em  54 2m  12m : Using that E ě 100 and the de nition of , we have 2m2 E500 Em  54 2m ď 2100 4mE10 E ď 24mE15 E : This then shows Dp;E pq ď 24mE15 E  12m for all  P r 2mpn1q;  2mns, n ě 1. Combining with (2.109) completes the proof of Proposition 2.10.2 when 2 ď p ď 4. When 4 ă p ă 6, 12{p p2q ą 1 and so we can repeat the same proof as above in the remaining two cases of the proposition. This completes the proof of Proposition 2.10.2. 2.11 Proof of Theorem 2.1.1 Since Proposition 2.10.2 is true for all m ě 3 and  P N2, we now optimize the bound on Dp;E pq in m. This will give the proof of Theorem 2.1.1. Proof of Theorem 2.1.1. 
We combine the cases of 2 ď p ď 4 and 4 ă p ă 6. Fix arbitrary  P N2 and E ě 100. Let m be the largest integer such that 2m ď E5E plog 2 1q1{3 ă 2m1: (2.110) 87 Since  ă 264 E15 E , m ě 3. Then 24mE15 E  12m ď exp p5plog 2 q1{3E5E plog 1  q2{3q ď exp p5  E5E plog 1  q2{3q (2.111) which nishes the case of Theorem 2.1.1 when 2 ď p ď 4. For 4 ă p ă 6, observe that p 2 p  2qp m1q  exp pp m  1q log 2 p  2q ď 2plog 1  q 13 log 2p 2 p´2q : (2.112) Combining (2.111) and (2.112) then proves Theorem 2.1.1 in the case when 4 ă p ă 6. For the case when p  6, choose m as in (2.110). Then for 4 ă p1 ă 6, D6;E pq ď exp p10  E5E plog 1  q23  13 log 2p 2 p1´2q q2p 1 p116q ď exp pE6E plog 1  qrp log 1  q 13 log 2p 4 p1´2q p 1 p1  16qsq : (2.113) It thus remains to optimize plog 1  q 13 log 2p 4 p1´2q p 1 p1  16q for 4 ă p1 ă 6. Let  : 1 p1  16 and suppose we choose p1 sufficiently close to 6 such that  ă 1{4. Then 4 p12  16 13 and log 4 p1  2 ě 8: Thus plog 1  q 13 log 2 4 p1´2 p 1 p1  16q ď p log 1  q3 : Setting   log p3 log log 1  q 3 log log 1  gives that plog 1  q3   1 log 3 log log log 1  3 log log 1  ď log log log 1  log log 1  (2.114) where we have used that 1 log 3 ď log log log 1  for our range of . Note that for our range of ,  ă 1{4 since this is equivalent to 3 log log 1  ă p log 1  q3{4 which is certainly satis ed if 1 ą 10 8. Inserting (2.114) into (2.113) then completes the proof of Theorem 2.1.1. 88 CHAPTER 3 An l2 decoupling interpretation of efficient congruencing in 2D 3.1 Introduction Since we will once again be studying l2 decoupling for the parabola, we adopt essentially the same notation as in Chapter 2 with a few small differences (namely  in Chapter 2 is 2 in this chapter and we just set E  100). For an interval J Ă r 0; 1s and g : r0; 1s Ñ C, we de ne pEJ gqp xq : ż J gpqepx 1 2x2q d where epaq : e2ia . For an interval I, let PℓpIq be the partition of I into intervals of length ℓ. By writing PℓpIq, we are assuming that |I|{ ℓ P N. We will also similarly de ne PℓpBq for squares B in R2. Next if B  Bpc; R q is a square in R2 centered at c of side length R, let wB pxq : p 1 |x  c| R q100 : We will always assume that our squares have sides parallel to the x and y-axis. We observe that 1 B ď 2100 wB . For a function w, we de ne }f }Lppwq : p ż R2 |f pxq| pwpxq dx q1{p: For  P N1, let Dpq be the best constant such that }Er0;1sg}L6pBq ď Dpqp ¸ JPPpr 0;1sq }EJ g}2 L6pwBq q1{2 (3.1) 89 for all g : r0; 1s Ñ C and all squares B in R2 of side length 2. Let Dppq be the decoupling constant where the L6 in (3.1) is replaced with Lp. Since 1 B À wB , the triangle inequality combined with Cauchy-Schwarz shows that Dppq À p 1{2. The l2 decoupling theorem for the paraboloid proven by Bourgain and Demeter in [BD15] implies that for the parabola we have Dppq À " " for 2 ď p ď 6 and this range of p is sharp. This chapter attempts to probe the connections between efficient congruencing and l2 decoupling in the simplest case of the parabola. Our proof of l2 decoupling for the parabola is inspired by the exposition of efficient congruencing in Pierce's Bourbaki seminar exposition [Pie19]. This proof will give the following result. Theorem 3.1.1. For  P N1 such that 0 ă  ă e200 200 , we have Dpq ď exp p30 log 1  log log 1  q: This improves upon a previous result of Theorem 2.1.1 in Chapter 2. 
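One way to read the bound in Theorem 3.1.1 is as follows (a remark, not needed elsewhere): it has the form $\delta^{-\varepsilon(\delta)}$ with $\varepsilon(\delta)\to 0$, so it recovers $D(\delta)\lesssim_{\varepsilon}\delta^{-\varepsilon}$ with an explicit, very slowly growing constant. Indeed,
\[
\exp\Big(30\,\frac{\log\frac1\delta}{\log\log\frac1\delta}\Big)=\Big(\frac1\delta\Big)^{\varepsilon(\delta)},\qquad \varepsilon(\delta)=\frac{30}{\log\log\frac1\delta},
\]
so for any fixed $\varepsilon>0$ the bound is at most $\delta^{-\varepsilon}$ as soon as $\log\log\frac1\delta\ge 30/\varepsilon$, that is, as soon as $\delta\le\exp(-e^{30/\varepsilon})$.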
In the context of discrete Fourier restriction, Theorem 3.1.1 implies that for all $N$ sufficiently large and an arbitrary sequence $\{a_n\}\in l^2$, we have
\[
\Big\|\sum_{|n|\le N}a_n e^{2\pi i(nx+n^{2}t)}\Big\|_{L^{6}(\mathbb T^{2})}\lesssim \exp\Big(O\Big(\frac{\log N}{\log\log N}\Big)\Big)\Big(\sum_{|n|\le N}|a_n|^{2}\Big)^{1/2},
\]
which rederives (up to constants) the upper bound obtained by Bourgain in [Bou93, Proposition 2.36], but without resorting to a divisor bound. It is an open problem whether the $\exp(O(\frac{\log N}{\log\log N}))$ can be improved.

3.1.1 More notation

Once again we let $\eta$ be a Schwartz function such that $\eta\ge 1_{B(0,1)}$ and $\operatorname{supp}\widehat\eta\subset B(0,1)$. For $B=B(c,R)$ we also define $\eta_B(x):=\eta(\frac{x-c}{R})$. Since we care about explicit constants in Section 3.2, we will use the explicit $\eta$ constructed in Corollary 2.2.9; in particular, for this $\eta$, $\eta_B\le 10^{2400}w_B$. For the remaining sections in this chapter we will ignore this constant. We refer the reader to [BD17, Section 4] and Chapter 2, Section 2.2 for some useful properties of the weights $w_B$ and $\eta_B$.

Finally we define
\[
\|f\|_{L^{p}_{\#}(B)}:=\Big(\frac{1}{|B|}\int_B|f(x)|^{p}\,dx\Big)^{1/p}
\]
and, given a collection $\mathcal C$ of squares, we let
\[
\operatorname{Avg}_{\Delta\in\mathcal C}f(\Delta):=\frac{1}{\#\mathcal C}\sum_{\Delta\in\mathcal C}f(\Delta).
\]

3.1.2 Outline of proof of Theorem 3.1.1

Our argument is inspired by the discussion of efficient congruencing in [Pie19, Section 4], which in turn is based on Heath-Brown's simplification [Hea15] of Wooley's proof of the cubic case of Vinogradov's mean value theorem [Woo16]. Our first step, much like the first step in both 2D efficient congruencing and decoupling, is to bilinearize the problem. Throughout we will assume $\delta^{-1}\in\mathbb N$ and $\nu\in\mathbb N^{-1}\cap(0,1/100)$.

Fix arbitrary integers $a,b\ge 1$. Suppose $\delta$ and $\nu$ are such that $\nu^{a}\delta^{-1},\nu^{b}\delta^{-1}\in\mathbb N$. This implies that $\delta\le\min(\nu^{a},\nu^{b})$, and the requirement that $\nu^{\max(a,b)}\delta^{-1}\in\mathbb N$ is equivalent to having $\nu^{a}\delta^{-1},\nu^{b}\delta^{-1}\in\mathbb N$. For this $\delta$ and $\nu$, let $M_{a,b}(\delta,\nu)$ be the best constant such that
\[
\int_B|E_Ig|^{2}|E_{I'}g|^{4}\le M_{a,b}(\delta,\nu)^{6}\Big(\sum_{J\in P_{\delta}(I)}\|E_Jg\|^{2}_{L^{6}(w_B)}\Big)\Big(\sum_{J'\in P_{\delta}(I')}\|E_{J'}g\|^{2}_{L^{6}(w_B)}\Big)^{2}\tag{3.2}
\]
for all squares $B$ of side length $\delta^{-2}$, all $g:[0,1]\to\mathbb C$, and all intervals $I\in P_{\nu^{a}}([0,1])$, $I'\in P_{\nu^{b}}([0,1])$ with $d(I,I')\ge 3\nu$. We will say that such $I$ and $I'$ are $3\nu$-separated. Applying Hölder followed by the triangle inequality and Cauchy-Schwarz shows that $M_{a,b}(\delta,\nu)$ is finite. This is not the only bilinear decoupling constant we could use (see (3.27) and (3.31) in Sections 3.4 and 3.5, respectively), but in this outline we will use (3.2) because it is closest to the one used in [Pie19] and it is the one we will use in Section 3.2.

Our proof of Theorem 3.1.1 is broken into the following four lemmas. We state them below, ignoring explicit constants for now.

Lemma 3.1.2 (Parabolic rescaling). Let $0<\delta<\sigma<1$ be such that $\delta,\sigma,\delta/\sigma\in\mathbb N^{-1}$. Let $I$ be an arbitrary interval in $[0,1]$ of length $\sigma$. Then
\[
\|E_Ig\|_{L^{6}(B)}\lesssim D\big(\tfrac{\delta}{\sigma}\big)\Big(\sum_{J\in P_{\delta}(I)}\|E_Jg\|^{2}_{L^{6}(w_B)}\Big)^{1/2}
\]
for every $g:[0,1]\to\mathbb C$ and every square $B$ of side length $\delta^{-2}$.

Lemma 3.1.3 (Bilinear reduction). Suppose $\delta$ and $\nu$ are such that $\nu\delta^{-1}\in\mathbb N$. Then
\[
D(\delta)\lesssim D\big(\tfrac{\delta}{\nu}\big)+\nu^{-1}M_{1,1}(\delta,\nu).
\]

Lemma 3.1.4. Let $a$ and $b$ be integers such that $1\le a\le 2b$. Suppose $\delta$ and $\nu$ are such that $\nu^{2b}\delta^{-1}\in\mathbb N$. Then
\[
M_{a,b}(\delta,\nu)\lesssim \nu^{-1/6}M_{2b,b}(\delta,\nu).
\]

Lemma 3.1.5. Suppose $b$ is an integer and $\delta$ and $\nu$ are such that $\nu^{2b}\delta^{-1}\in\mathbb N$. Then
\[
M_{2b,b}(\delta,\nu)\lesssim M_{b,2b}(\delta,\nu)^{1/2}D\big(\tfrac{\delta}{\nu^{b}}\big)^{1/2}.
\]

Applying Lemma 3.1.4, we can move from $M_{1,1}$ to $M_{2,1}$, and then Lemma 3.1.5 allows us to move from $M_{2,1}$ to $M_{1,2}$ at the cost of a square root of $D(\delta/\nu)$. Applying Lemma 3.1.4 again moves us to $M_{4,2}$, and Lemma 3.1.5 to $M_{2,4}$. Repeating this we can eventually reach $M_{2^{N-1},2^{N}}$, paying some $O(1)$ power of $\nu^{-1}$ and the values of the linear decoupling constant at various scales. This, combined with Lemma 3.1.3 and the choice $\nu=\delta^{1/2^{N}}$, leads to the following result.

Lemma 3.1.6. Let $N\in\mathbb N$ and suppose $\delta$ is such that $\delta^{-1/2^{N}}\in\mathbb N$ and $0<\delta<100^{-2^{N}}$. Then
\[
D(\delta)\lesssim D(\delta^{1-\frac{1}{2^{N}}})+\delta^{-\frac{4}{3\cdot 2^{N}}}D(\delta^{1/2})^{\frac{1}{3\cdot 2^{N}}}\prod_{j=0}^{N-1}D(\delta^{1-\frac{1}{2^{N-j}}})^{\frac{1}{2^{j+1}}}.
\]

This then gives a recursion which shows that $D(\delta)\lesssim_{\varepsilon}\delta^{-\varepsilon}$ (see Section 3.2.3 for more details).

The proof of Lemma 3.1.2 is essentially a change of variables together with the definition of the linear decoupling constant (some technical issues arise because of the weight $w_B$; see Chapter 2, Section 2.4). The idea is that a cap on the paraboloid can be stretched to the whole paraboloid without changing any geometric properties. The bilinear reduction Lemma 3.1.3 follows from Hölder's inequality; the argument we use is from Tao's exposition of the Bourgain-Demeter-Guth proof of Vinogradov's mean value theorem [Tao15]. In general dimension, the multilinear reduction follows from a Bourgain-Guth argument (see [BG11] and [BD17, Section 8]). We note that if $a$ and $b$ are so large that $\nu^{a}=\nu^{b}=\delta$, then $M_{a,b}=O(1)$, and so the goal of the iteration is to efficiently move from small $a$ and $b$ to very large $a$ and $b$.

Lemma 3.1.4 is the most technical of the four lemmas and is where we use a Fefferman-Cordoba argument in Section 3.2. It turns out we can still close the iteration with Lemma 3.1.4 replaced by $M_{a,b}\lesssim M_{b,b}$ for $1\le a<b$ and $M_{b,b}\lesssim\nu^{-1/6}M_{2b,b}$. Both of these estimates come from the same proof as Lemma 3.1.4, and this is how we approach the iteration in Sections 3.3 and 3.4 (see Lemmas 3.3.3 and 3.3.5 and their rigorous counterparts Lemmas 3.4.7 and 3.4.8). The proof of these lemmas is a consequence of $l^{2}L^{2}$ decoupling and bilinear Kakeya. As $a$ and $b$ get larger, the estimate in Lemma 3.1.4 becomes better and better than the trivial bound of $M_{a,b}\lesssim\nu^{-(2b-a)/6}M_{2b,b}$. The $\nu^{-1/6}$ comes from the $\nu$-transversality of $I_1$ and $I_2$ in the definition of $M_{a,b}$. In particular, $\nu^{-1/6}$ should be viewed as $(\nu^{-(2-1)})^{1/6}$, where the $1/6$ comes from the fact that we are working in $L^{6}$ and the $(2-1)$ comes from $(d-1)$ with $d=2$, which is the power of $\nu^{-1}$ arising from multilinear Kakeya. Finally, Lemma 3.1.5 is an application of Hölder and parabolic rescaling.

3.1.3 Comparison with 2D efficient congruencing as in [Pie19, Section 4]

The main object of iteration in [Pie19, Section 4] is the following bilinear object:
\[
I_1(X;a,b):=\max_{\xi\not\equiv\eta\ (\mathrm{mod}\ p)}\int_{(0,1]^{2}}\Big|\sum_{\substack{1\le x\le X\\ x\equiv\xi\ (\mathrm{mod}\ p^{a})}}e(\alpha_1x+\alpha_2x^{2})\Big|^{2}\Big|\sum_{\substack{1\le y\le X\\ y\equiv\eta\ (\mathrm{mod}\ p^{b})}}e(\alpha_1y+\alpha_2y^{2})\Big|^{4}\,d\alpha.
\]
Lemmas 3.1.2-3.1.5 correspond directly to Lemmas 4.2-4.5 of [Pie19, Section 4]. That Lemmas 4.2 and 4.3 of [Pie19] correspond to parabolic rescaling and the bilinear reduction, respectively, was already observed by Pierce in [Pie19, Section 8]. We can think of $p$ as $\nu^{-1}$, $J(X)/X^{3}$ as $D(\delta)^{6}$, and $p^{a+2b}I_1(X;a,b)/X^{3}$ as $M_{a,b}(\delta,\nu)^{6}$. In the definition of $I_1$, the maximum over $\xi\not\equiv\eta\ (\mathrm{mod}\ p)$ can be thought of as corresponding to the transversality condition that $I_1$ and $I_2$ are $\nu$-transverse (or, since we are in 2D, $\nu$-separated) intervals of length $\nu$. The integral over $(0,1]^{2}$ corresponds to an integral over $B$. Finally, the expression
\[
\Big|\sum_{\substack{1\le x\le X\\ x\equiv\xi\ (\mathrm{mod}\ p^{a})}}e(\alpha_1x+\alpha_2x^{2})\Big|
\]
can be thought of as corresponding to $|E_Ig|$ for $I$ an interval of length $\nu^{a}$, and so the whole of $I_1(X;a,b)$ can be thought of as $\int_B|E_{I_1}g|^{2}|E_{I_2}g|^{4}$ where $\ell(I_1)=\nu^{a}$ and $\ell(I_2)=\nu^{b}$ with $I_1$ and $I_2$ being $O(\nu)$-separated. This will be our interpretation in Section 3.2.
Interpreting the proof of Lemma 3.1.4 through the uncertainty principle, we reinterpret $I_1(X;a,b)$ as (ignoring weight functions)
\[
\operatorname{Avg}_{\Delta\in P_{\nu^{-\max(a,b)}}(B)}\ \|E_Ig\|^{2}_{L^{2}_{\#}(\Delta)}\|E_{I'}g\|^{4}_{L^{4}_{\#}(\Delta)}\tag{3.3}
\]
where $I$ and $I'$ have lengths $\nu^{a}$ and $\nu^{b}$, respectively, and are $\nu$-separated. The uncertainty principle says that (3.3) is essentially equal to $\frac{1}{|B|}\int_B|E_Ig|^{2}|E_{I'}g|^{4}$. Finally, in Section 3.5 we replace (3.3) with
\[
\operatorname{Avg}_{\Delta\in P_{\nu^{-b}}(B)}\ \Big(\sum_{J\in P_{\nu^{b}}(I)}\|E_Jg\|^{2}_{L^{2}_{\#}(\Delta)}\Big)\Big(\sum_{J'\in P_{\nu^{b}}(I')}\|E_{J'}g\|^{2}_{L^{2}_{\#}(\Delta)}\Big)^{2}
\]
where $I$ and $I'$ have length $\nu$ and are $\nu$-separated. Note that when $b=1$ this is exactly equal to $\frac{1}{|B|}\int_B|E_Ig|^{2}|E_{I'}g|^{4}$. The interpretation given above is now similar to the $A_p$ object studied by Bourgain and Demeter in [BD17].

3.1.4 Comparison with 2D $l^2$ decoupling as in [BD17]

Let $M^{(2,4)}_{a,b}(\delta,\nu)$ be the bilinear constant defined in (3.2). Let $M^{(3,3)}_{1,1}(\delta,\nu)$ be the bilinear constant defined as in (3.2) with $a=b=1$, except that we use the true geometric mean. This latter bilinear decoupling constant is the one used by Bourgain and Demeter in [BD17]. The largest difference between our proof and the Bourgain-Demeter proof is how we iterate. Both proofs obtain that
\[
D(\delta)\lesssim D\big(\tfrac{\delta}{\nu}\big)+\nu^{-1}M^{(s,6-s)}_{1,1}(\delta,\nu)\tag{3.4}
\]
where $s=3$ corresponds to the Bourgain-Demeter proof while $s=2$ corresponds to our proof. However, we proceed to analyze the iteration slightly differently. Bourgain-Demeter apply (3.4) to $D(\delta/\nu)$ and $D(\delta/\nu^{2})$ to obtain
\[
D(\delta)\lesssim D\big(\tfrac{\delta}{\nu^{2}}\big)+\nu^{-1}\big(M^{(3,3)}_{1,1}(\tfrac{\delta}{\nu},\nu)+M^{(3,3)}_{1,1}(\delta,\nu)\big)\lesssim D\big(\tfrac{\delta}{\nu^{3}}\big)+\nu^{-1}\big(M^{(3,3)}_{1,1}(\tfrac{\delta}{\nu^{2}},\nu)+M^{(3,3)}_{1,1}(\tfrac{\delta}{\nu},\nu)+M^{(3,3)}_{1,1}(\delta,\nu)\big)
\]
and continue to iterate until $\delta/\nu^{n}$ is of size 1. It then remains to analyze $M^{(3,3)}_{1,1}(\sigma,\nu)$ for the various scales $\sigma$, which is done via the $A_p$ expressions used in [BD17]. For our proof, in two steps (applying Lemmas 3.1.4 and 3.1.5) we obtain
\[
D(\delta)\lesssim D\big(\tfrac{\delta}{\nu}\big)+\nu^{-7/6}M^{(2,4)}_{1,2}(\delta,\nu)^{1/2}D\big(\tfrac{\delta}{\nu}\big)^{1/2}\lesssim D\big(\tfrac{\delta}{\nu}\big)+\nu^{-5/4}M^{(2,4)}_{2,4}(\delta,\nu)^{1/4}D\big(\tfrac{\delta}{\nu^{2}}\big)^{1/4}D\big(\tfrac{\delta}{\nu}\big)^{1/2}
\]
and continue to iterate until $\delta/\nu^{2^{n}}$ is of size 1. Note that while the iteration here is able to tackle the endpoint $L^{6}$ estimate directly and, as written, [BD17] could not do so, the iteration in [BD17] can be slightly modified so that it handles the endpoint estimate directly (thanks to Pavel Zorin-Kranich for pointing this out).

3.1.5 Comparison of the iterations in Sections 3.2 and 3.4

The way we iterate in Section 3.2 will be slightly different from how we iterate in Section 3.4. In Section 3.2, we first apply the trivial bound $M_{1,1}\lesssim\nu^{-1/6}M_{1,2}$. Then Lemmas 3.1.4 and 3.1.5 imply that for every integer $b\ge 2$,
\[
M_{b/2,b}(\delta,\nu)\lesssim\nu^{-1/6}M_{b,2b}(\delta,\nu)^{1/2}D\big(\tfrac{\delta}{\nu^{b}}\big)^{1/2}.
\]
Thus from this we can access $M_{2^{N-1},2^{N}}$ for arbitrarily large $N$ while losing only $\nu^{-O(1)}$. In contrast, in Section 3.4 we use that $M_{a,b}\lesssim M_{b,b}$ for $1\le a<b$ (from $l^{2}L^{2}$ decoupling) and $M_{b,b}\lesssim\nu^{-1/6}M_{2b,b}$ (from bilinear Kakeya). Combining these two inequalities with Lemma 3.1.5 gives that for every integer $b\ge 1$,
\[
M_{b,b}(\delta,\nu)\lesssim\nu^{-1/6}M_{2b,2b}(\delta,\nu)^{1/2}D\big(\tfrac{\delta}{\nu^{b}}\big)^{1/2}.
\]
Now we can access the constant $M_{2^{N},2^{N}}$ for arbitrarily large $N$, again losing only $\nu^{-O(1)}$. Both iterations give similar quantitative estimates.

3.1.6 Overview of chapter

Theorem 3.1.1 will be proved in Section 3.2 via a Fefferman-Cordoba argument. This argument does not generalize to proving that $D_p(\delta)\lesssim_{\varepsilon}\delta^{-\varepsilon}$ except for $p=4,6$. However, in Section 3.3, using the uncertainty principle we reinterpret a key lemma from Section 3.2 (Lemma 3.2.8), which allows us to generalize the argument of Section 3.2 so that it works for all $2\le p\le 6$. We make this completely rigorous in Section 3.4 by defining a slightly different (but morally equivalent) bilinear decoupling constant.
This rigorous version will make use of $l^{2}L^{2}$ decoupling, Bernstein's inequality, and bilinear Kakeya. A basic version of the ball inflation inequality, similar to that used in [BD17, Theorem 9.2] and [BDG16, Theorem 6.6], makes an appearance. Finally, in Section 3.5 we reinterpret the argument of Section 3.4 and write an argument that is closer to the one given in [BD17]. We create a one-parameter family of bilinear constants which in some sense "interpolates" between the Bourgain-Demeter argument and our argument here.

The three arguments in Sections 3.2-3.5 are similar but use slightly different bilinear decoupling constants. We will only mention explicit constants in Section 3.2. In Sections 3.4 and 3.5, for simplicity, we will only prove that $D(\delta)\lesssim_{\varepsilon}\delta^{-\varepsilon}$. The estimates in those sections can be made explicit by using the explicit constants obtained in Chapter 2. Because the structure of the iteration in Sections 3.4 and 3.5 is the same as that in Section 3.2, making the bounds in Sections 3.4 and 3.5 explicit yields essentially the same quantitative bounds as in Theorem 3.1.1.

In Section 3.6 we modify the argument of the previous sections to illustrate how to tackle $l^{2}L^{p}$ decoupling for the parabola for $2<p<6$, taking $p=4$ as an example. Finally, in Section 3.7 we address ongoing work with Shaoming Guo and Po-Lam Yung about efficient congruencing as in [Hea15] and sketch how we give a new (bilinear) proof of sharp $l^{4}L^{12}$ decoupling for the moment curve $t\mapsto(t,t^{2},t^{3})$.

3.2 Proof of Theorem 3.1.1

We recall the definition of the bilinear decoupling constant $M_{a,b}$ from (3.2). The arguments in this section rely strongly on the fact that the exponents in the definition of $M_{a,b}$ are 2 and 4, though we only essentially use this in Lemma 3.2.8. Given two expressions $x_1$ and $x_2$, let
\[
\operatorname{geom}_{2,4}x_i:=x_1^{2/6}x_2^{4/6}.
\]
Hölder gives $\|\operatorname{geom}_{2,4}x_i\|_{p}\le\operatorname{geom}_{2,4}\|x_i\|_{p}$.

3.2.1 Parabolic rescaling and consequences

The linear decoupling constant $D(\delta)$ obeys the following important property.

Lemma 3.2.1 (Parabolic rescaling). Let $0<\delta<\sigma<1$ be such that $\delta,\sigma,\delta/\sigma\in\mathbb N^{-1}$. Let $I$ be an arbitrary interval in $[0,1]$ of length $\sigma$. Then
\[
\|E_Ig\|_{L^{6}(B)}\le 10^{20000}D\big(\tfrac{\delta}{\sigma}\big)\Big(\sum_{J\in P_{\delta}(I)}\|E_Jg\|^{2}_{L^{6}(w_B)}\Big)^{1/2}
\]
for every $g:[0,1]\to\mathbb C$ and every square $B$ of side length $\delta^{-2}$.

Proof. See [BD17, Proposition 7.1] for the proof without explicit constants and Section 2.4 with $E=100$ for a proof with explicit constants (and a clarification of parabolic rescaling with the weight $w_B$).

As an immediate application of parabolic rescaling, we have almost multiplicativity of the decoupling constant.

Lemma 3.2.2 (Almost multiplicativity). Let $0<\delta<\sigma<1$ be such that $\delta,\sigma,\delta/\sigma\in\mathbb N^{-1}$. Then
\[
D(\delta)\le 10^{20000}D(\sigma)D(\delta/\sigma).
\]

Proof. See Proposition 2.4.1 with $E=100$.

The trivial bound of $O(\nu^{(a+2b)/6}\delta^{-1/2})$ for $M_{a,b}(\delta,\nu)$ is too weak for applications. We instead give another trivial bound, which follows from parabolic rescaling.

Lemma 3.2.3. If $\delta$ and $\nu$ are such that $\nu^{a}\delta^{-1},\nu^{b}\delta^{-1}\in\mathbb N$, then
\[
M_{a,b}(\delta,\nu)\le 10^{20000}D\big(\tfrac{\delta}{\nu^{a}}\big)^{1/3}D\big(\tfrac{\delta}{\nu^{b}}\big)^{2/3}.
\]

Proof. Fix arbitrary $I_1\in P_{\nu^{a}}([0,1])$ and $I_2\in P_{\nu^{b}}([0,1])$ which are $3\nu$-separated. Hölder's inequality gives that
\[
\|\operatorname{geom}_{2,4}|E_{I_i}g|\|^{6}_{L^{6}(B)}\le\|E_{I_1}g\|^{2}_{L^{6}(B)}\|E_{I_2}g\|^{4}_{L^{6}(B)}.
\]
Parabolic rescaling bounds this by
\[
10^{120000}D\big(\tfrac{\delta}{\nu^{a}}\big)^{2}D\big(\tfrac{\delta}{\nu^{b}}\big)^{4}\Big(\sum_{J\in P_{\delta}(I_1)}\|E_Jg\|^{2}_{L^{6}(w_B)}\Big)\Big(\sum_{J'\in P_{\delta}(I_2)}\|E_{J'}g\|^{2}_{L^{6}(w_B)}\Big)^{2}.
\]
Taking sixth roots then completes the proof of Lemma 3.2.3.

Hölder and parabolic rescaling allow us to interchange $a$ and $b$ in $M_{a,b}$.

Lemma 3.2.4. Suppose $b\ge 1$ and $\delta$ and $\nu$ are such that $\nu^{2b}\delta^{-1}\in\mathbb N$. Then
\[
M_{2b,b}(\delta,\nu)\le 10^{10000}M_{b,2b}(\delta,\nu)^{1/2}D\big(\tfrac{\delta}{\nu^{b}}\big)^{1/2}.
\]

Proof.
Fix arbitrary I1 and I2 intervals of length 2b and b, respectively which are -separated. H older's inequality then gives }| EI1 g|1{3|EI2 g|2{3}6 L6pBq ď p ż B |EI1 g|4|EI2 g|2q1{2p ż B |EI2 g|6q1{2: Applying the de nition of Mb; 2b and parabolic rescaling bounds the above by p10 20000 q3Mb; 2bp;  q3Dp b q3p ¸ JPPpI1q }EJ g}2 L6pwBq qp ¸ J1PPpI2q }EJ1 g}2 L6pwBq q2 which completes the proof of Lemma 3.2.4. Lemma 3.2.5 (Bilinear reduction) . Suppose  and  were such that  1 P N. Then Dpq ď 10 30000 pDp  q 1M1;1p;  qq : 98 Proof. Let tIiu´1 i1  P pr 0; 1sq . We have }Er0;1sg}L6pBq  } ¸ 1ďiď´1 EIi g}L6pBq ď } ¸ 1ďi;j ď´1 |EIi g|| EIj g|} 1{2 L3pBq ď ?2 } ¸ 1ďi;j ď´1 |ij|ď 3 |EIi g|| EIj g|} 1{2 L3pBq } ¸ 1ďi;j ď´1 |ij|ą 3 |EIi g|| EIj g|} 1{2 L3pBq : (3.5) We rst consider the diagonal terms. The triangle inequality followed by Cauchy-Schwarz gives that } ¸ 1ďi;j ď´1 |ij|ď 3 |EIi g|| EIj g|} L3pBq ď ¸ 1ďi;j ď´1 |ij|ď 3 }EIi g}L6pBq}EIj g}L6pBq: Parabolic rescaling bounds this by 10 40000 Dp  q2 ¸ 1ďi;j ď´1 |ij|ď 3 p ¸ JPPpIiq }EJ g}2 L6pwBq q1{2p ¸ JPPpIjq }EJ g}2 L6pwBq q1{2 ď 10 40000 2 Dp  q2 ¸ 1ďi;j ď´1 |ij|ď 3 ¸ JPPpIiq }EJ g}2 L6pwBq ¸ JPPpIjq }EJ g}2 L6pwBq ď 10 40010 Dp  q2 ¸ JPPpr 0;1sq }EJ g}2 L6pwBq : Therefore the rst term in (3.5) is bounded above by 10 30000 Dp  qp ¸ JPPpr 0;1sq }EJ g}2 L6pwBq q1{2: (3.6) Next we consider the off-diagonal terms. We have } ¸ 1ďi;j ď´1 |ij|ą 3 |EIi g|| EIj g|} 1{2 L3pBq ď 1 max 1ďi;j ď´1 |ij|ą 3 }| EIi g|| EIj g|} 1{2 L3pBq H older's inequality gives that }| EIi g|| EIj g|} 1{2 L3pBq ď }| EIi g|1{3|EIj g|2{3}1{2 L6pBq }| EIi g|2{3|EIj g|1{3}1{2 L6pBq (3.7) and therefore from (3.2) (and using that  1 P N), the second term in (3.5) is bounded by ?21M1;1p;  qp ¸ JPPpr 0;1sq }EJ g}2 L6pwBq q1{2: Combining this with (3.6) and applying the de nition of Dpq then completes the proof of Lemma 3.2.5. 99 3.2.2 A Fefferman-Cordoba argument In the proof of Lemma 3.2.8 we need a version of Ma;b with both sides being L6pwB q. The following lemma shows that these two constants are equivalent. Lemma 3.2.6. Suppose  and  were such that a1, b1 P N. Let M 1 a;b p;  q be the best constant such that ż |EI g|2|EI1 g|4wB ď M 1 a;b p;  q6p ¸ JPPpIq }EJ g}2 L6pwBq qp ¸ J1PPpI1q }EJ1 g}2 L6pwBq q2 for all squares B of side length 2, g : r0; 1s Ñ C, and all 3-separated intervals I P Pa pr 0; 1sq and I1 P Pb pr 0; 1sq . Then M 1 a;b p;  q ď 12 100 {6Ma;b p;  q: Remark 3.2.7 . Since 1 B À wB , Ma;b p;  q À M 1 a;b p;  q and hence Lemma 3.2.6 implies Ma;b  M 1 a;b . Proof. Fix arbitrary 3 -separated intervals I1 P Pa pr 0; 1sq and I2 P Pb pr 0; 1sq . It suffices to assume that B is centered at the origin. Corollary 2.2.4 gives } geom 2;4 |EIi g|} 6 L6pwBq ď 3100 ż R2 } geom 2;4 |EIi g|} 6 L6#pBpy; ´2qq wB pyq dy: Applying the de nition of Ma;b gives that the above is ď 3100 4Ma;b p;  q6 ż R2 geom 2;4p ¸ JPPpIiq }EJ g}2 L6pwBpy; ´2qq q3wB pyq dy ď 3100 4Ma;b p;  q6 geom 2;4 ż R2 p ¸ JPPpIiq }EJ g}2 L6pwBpy; ´2qq q12 6wB pyq dy ď 3100 4Ma;b p;  q6 geom 2;4p ¸ JPPpIiq p ż R2 }EJ g}6 L6pwBpy; ´2qq wB pyq dy q1{3q3 where the second inequality is by H older and the third inequality is by Minkowski. Since B is centered at the origin, wB  wB ď 4100 4wB (Lemma 2.2.1) and hence 4 ż R2 }EJ g}6 L6pwBpy; ´2qq wB pyq dy ď 4100 }EJ g}6 L6pwBq : 100 This then immediately implies that M 1 a;b p;  q ď 12 100 {6Ma;b p;  q which completes the proof of Lemma 3.2.6. We have the following key technical lemma of this paper. We encourage the reader to compare the argument with that of [Pie19, Lemma 4.4]. 
This lemma is a large improvement over the trivial bound of Ma;b À p 2baq{ 6M2b;b especially at very small scales (large a; b ). Lemma 3.2.8. Let a and b be integers such that 1 ď a ď 2b. Suppose  and  was such that 2b1 P N. Then Ma;b p;  q ď 10 1000 1{6M2b;b p;  q: Proof. It suffices to assume that B is centered at the origin with side length 2. The integrality conditions on  and  imply that  ď 2b and a1;  b1 P N. Fix arbitrary intervals I1  r ; as P Pa pr 0; 1sq and I2  r ; bs P Pb pr 0; 1sq which are 3 -separated. Let g pxq : gpx q, T  p 1 2 01 q, and d :  . Shifting I2 to r0;  bs gives that ż B |p EI1 gqp xq| 2|p EI2 gqp xq| 4 dx  ż B |p Erd;d asg qp T xq| 2|p Er0; bsg qp T xq| 4 dx  ż TpBq |p Erd;d asg qp xq| 2|p Er0; bsg qp xq| 4 dx: (3.8) Note that d can be negative, however since g : r0; 1s Ñ C and d   , Erd;d asg is de ned. Since | | ď 1, T pBq Ă 100 B. Combining this with 1 100 B ď 100 B gives that (3.8) is ď ż R2 |p Erd;d asg qp xq| 2|p Er0; bsg qp xq| 4100 B pxq dx  ¸ J1;J 2PP2bpr d;d asq ż R2 pEJ1 g qp xqp EJ2 g qp xq|p Er0; bsg qp xq| 4100 B pxq dx: (3.9) We claim that if dpJ1; J 2q ą 10 2b1, the integral in (3.9) is equal to 0. Suppose J1; J 2 P P2b pr d; d asq such that dpJ1; J 2q ą 10 2b1. Expanding the integral in (3.9) for this pair of J1; J 2 gives that it is equal to ż R2 ż J1r 0; bs2J2r 0; bs2 3 ź i1 g piqg pi3qep   q 6 ź i1 d i 100 B pxq dx (3.10) where the expression inside the ep   q is pp 1  4qx1 p 21  24 qx2q pp 2 3  5  6qx1 p 22 23  25  26 qx2q: 101 Interchanging the integrals in  and x shows that the integral in x is equal to the Fourier inverse of 100 B evaluated at p 3 ¸ i1 pi  i3q; 3 ¸ i1 p2 i  2 i3 qq : Since the Fourier inverse of 100 B is supported in Bp0;  2{100 q, (3.10) is equal to 0 unless | 3 ¸ i1 pi  i3q| ď 2{200 | 3 ¸ i1 p2 i  2 i3 q| ď 2{200 : (3.11) Since  ď 2b and i P r 0;  bs for i  2; 3; 5; 6, (3.11) implies |1  4|| 1 4|  | 21  24 | ď 52b: (3.12) Since I1; I 2 are 3 -separated, |d| ě 3. Recall that 1 P J1, 4 P J2 and J1; J 2 are subsets of rd; d as. Write 1  d r and 4  d s with r; s P r 0;  as. Then |1 4|  | 2d p r sq| ě 6  | r s| ě 6  2a ě 4: (3.13) Since dpJ1; J 2q ą 10 2b1, |1  4| ą 10 2b1. Therefore the left hand side of (3.12) is ą 40 2b, a contradiction. Thus the integral in (3.9) is equal to 0 when dpJ1; J 2q ą 10 2b1.The above analysis implies that (3.9) is ď ¸ J1;J 2PP2bpr d;d asq dpJ1;J 2qď 10 2b´1 ż R2 |p EJ1 g qp xq||p EJ2 g qp xq||p Er0; bsg qp xq| 4100 B pxq dx: Undoing the change of variables as in (3.8) gives that the above is equal to ¸ J1;J 2PP2bpI1q dpJ1;J 2qď 10 2b´1 ż R2 |p EJ1 gqp xq||p EJ2 gqp xq||p EI2 gqp xq| 4100 B pT xq dx: (3.14) Observe that 100 B pT xq ď 10 2400 w100 B pT xq ď 10 2600 w100 B pxq ď 10 2800 wB pxq 102 where the second inequality is an application of Lemma 2.2.16 and the last inequality is because wB pxq1w100 B pxq ď 10 200 . An application of Cauchy-Schwarz shows that (3.14) is ď 10 2800 ¸ J1;J 2PP2bpI1q dpJ1;J 2qď 10 2b´1 p ż R2 |EJ1 g|2|EI2 g|4wB q1{2p ż R2 |EJ2 g|2|EI2 g|4wB q1{2: Note that for each J1 P P2b pI1q, there are ď 10000 1 intervals J2 P P2b pI1q such that dpJ1; J 2q ď 10 2b1. 
Thus two applications of Cauchy-Schwarz bounds the above by 10 2802 1{2p ¸ J1PP2bpI1q ż R2 |EJ1 g|2|EI2 g|4wB q1{2p ¸ J1PP2bpI1q ¸ J2PP2bpI2q dpJ1;J 2qď 10 2b´1 ż R2 |EJ1 g|2|EI2 g|4wB q1{2: Since there are ď 10000 1 relevant J2 for each J1, the above is ď 10 3000 1 ¸ JPP2bpI1q ż R2 |EJ g|2|EI2 g|4wB ď 10 3000 12 100 M2b;b p;  q6p ¸ JPPpI1q }EJ g}2 L6pwBq qp ¸ J1PPpI2q }EJ1 g}2 L6pwBq q2 where the last inequality is an application of Lemma 3.2.6. This completes the proof of Lemma 3.2.8. Iterating Lemmas 3.2.4 and 3.2.8 repeatedly gives the following estimate. Lemma 3.2.9. Let N P N and suppose  and  were such that 2N 1 P N. Then M1;1p;  q ď 10 60000 1{3Dp 2N ´1 q 13¨2N Dp 2N q 23¨2N N1 ź j0 Dp 2j q1{2j`1 : Proof. Lemmas 3.2.4 and 3.2.8 imply that if 1 ď a ď 2b and  and  were such that 2b1 P N, then Ma;b p;  q ď 10 20000 1{6Mb; 2bp;  q1{2Dp b q1{2: (3.15) Since 2N 1 P N, i1 P N for i  0; 1; 2; : : : ; 2N . Applying (3.15) repeatedly gives M1;1p;  q ď 10 40000 1{3M2N ´1;2N p;  q 12N N1 ź j0 Dp 2j q1{2j`1 : Bounding M2N ´1;2N using Lemma 3.2.3 then completes the proof of Lemma 3.2.9. 103 Remark 3.2.10 . A similar analysis as in (3.11)-(3.13) shows that if 1 ď a ă b and  and  were such that b1 P N, then Ma;b p;  q À Mb;b p;  q. Though we do not iterate this way in this section, it is enough to close the iteration with Ma;b À Mb;b for 1 ď a ă b, and Mb;b À 1{6M2b;b , and Lemma 3.2.4. This gives Mb;b À 1{6M 1{22b; 2bDp{bq1{2 which is much better than the trivial bound. We interpret the iteration and in particular Lemma 3.2.8 this way in Sections 3.3-3.5. 3.2.3 The O"p"q bound Combining Lemma 3.2.9 with Lemma 3.2.5 gives the following. Corollary 3.2.11. Let N P N and suppose  and  were such that 2N 1 P N. Then Dpq ď 10 10 5 Dp  q 4{3Dp 2N ´1 q 13¨2N Dp 2N q 23¨2N N1 ź j0 Dp 2j q1{2j`1 Choosing   1{2N in Corollary 3.2.11 and requiring that   1{2N P N1 X p 0; 1{100 q gives the following result. Corollary 3.2.12. Let N P N and suppose  was such that 1{2N P N and  ă 100 2N .Then Dpq ď 10 10 5 Dp1 12N q  43¨2N Dp1{2q 13¨2N N1 ź j0 Dp1 12N ´j q 12j`1 : Corollary 3.2.12 allows us to conclude that Dpq À " ". To see this, the trivial bounds for Dpq are 1 À Dpq À 1{2 for all  P N1. Let  be the smallest real number such that Dpq À " " for all  P N1. From the trivial bounds,  P r 0; 1{2s. We claim that   0. Suppose  ą 0. Choose N to be an integer such that 56 N 2  43 ě 1: (3.16) Then by Corollary 3.2.12, for 1{2N P N with  ă 100 2N , Dpq À " p1 12N q "  43¨2N   6¨2NřN´1 j“0p112N´jq 2j`1" À" p1 12N q " p1p 56 N 243q12Nq " À" p1 12N q " 104 where in the last inequality we have used (3.16). Applying almost multiplicativity of the linear decoupling constant (similar to Section 2.10 or the proof of Lemma 3.2.14 later) then shows that for all  P N1, Dpq À N;" p1 12N q ": This then contradicts minimality of . Therefore   0 and thus we have shown that Dpq À " " for all  P N1. 3.2.4 An explicit bound Having shown that Dpq À " ", we now make this dependence on " explicit. Fix arbitrary 0 ă " ă 1{100. Then Dpq ď C"" for all  P N1. Lemma 3.2.13. Fix arbitrary 0 ă " ă 1{100 and suppose Dpq ď C"" for all  P N1.Let integer N ě 1 be such that 56 N 2  43" ą 0: Then for  such that 1{2N P N and  ă 100 2N , we have Dpq ď 2  10 10 5 C1 " 2N " ": Proof. Inserting Dpq ď C"" into Corollary 3.2.12 gives that for all integers N ě 1 and  such that 1{2N P N,  ă 100 2N , we have Dpq ď 10 10 5 pC" " 2N C1 23¨2N "  " 2Np56N 243"q q": Thus by our choice of N , Dpq ď 10 10 5 pC" " 2N C1 23¨2N " q": (3.17) There are two possibilities. 
If  ă C1 " , then since 0 ă " ă 1{100, (3.17) becomes Dpq ď 10 10 5 pC1 " 2N " C1 23¨2N " q" ď 2  10 10 5 C1 " 2N " ": (3.18) On the other hand if  ě C1 " , the trivial bound gives Dpq ď 2100 {61{2 ď 2100 {6C1{2 " 105 which is bounded above by the right hand side of (3.18). This completes the proof of Lemma 3.2.13. Note that Lemma 3.2.13 is only true for  satisfying 1{2N P N and  ă 100 2N . We now use almost multiplicativity to upgrade the result of Lemma 3.2.13 to all  P N1. Lemma 3.2.14. Fix arbitrary 0 ă " ă 1{100 and suppose Dpq ď C"" for all  P N1.Then Dpq ď 10 10 6 2481{" C1 " 81{" " " for all  P N1.Proof. Choose N : r 83"  53s (3.19) and  P t 22N nu8 n7  t nu8 n7 . Then for these , 1{2N P N and  ă 100 2N . If  Pp7; 1s X N1, then Dpq ď 2100 {61{2 ď 2100 {622N ´17: If  P p n1;  ns for some n ě 7, then almost multiplicativity and Lemma 3.2.13 gives that Dpq ď 10 20000 DpnqDp n qď 10 20000 p2  10 10 5 C1 " 2N " "n qp 2100 {6pn  q1{2qď 10 10 6 22N ´1 C1 " 2N " " where N is as in (3.19) and the second inequality we have used the trivial bound for Dp{nq.Combining both cases above then shows that if N is chosen as in (3.19), then Dpq ď 10 10 6 272N ´1 C1 " 2N " " for all  P N1. Since we are no longer constrained by having N P N, we can increase N to be 3 {" and so we have that Dpq ď 10 10 6 2481{" C1 " 81{" " " for all  P N1. This completes the proof of Lemma 3.2.14. 106 Lemma 3.2.15. For all 0 ă " ă 1{100 and all  P N1, we have Dpq ď 2200 1{" ": Proof. Let P pC;  q be the statement that Dpq ď C  for all  P N1. Lemma 3.2.14 implies that for " P p 0; 1{100 q, P pC"; " q ùñ P p10 10 6 2481{" C1 " 81{" " ; " q: Iterating this M times gives that P pC"; " q ùñ P pr 10 10 6 2481{" s řM´1 j“0p1" 81{"qj Cp1 " 81{"qM " ; " q: Letting M Ñ 8 thus gives that for all 0 ă " ă 1{100, Dpq ď p 10 10 6 2481{" q81{"{"" ď 2100 1{"{"" ď 2200 1{" " for all  P N1. This completes the proof of Lemma 3.2.15. Optimizing in " then gives the proof of our main result. Proof of Theorem 3.1.1. Note that if   log A  log log A, then  exp pq  Ap1  log log A log A q ď A. Choose " such that A  p log 2 200 qp log 1  q,   1 " log 200, and   log A  log log A. Then 200 1{" log 2 ď " log 1  and hence 2200 1{" " ď exp p2" log 1  q: (3.20) Since   log A  log log A, we need to ensure that our choice of " is such that 0 ă " ă 1{100. Thus we need "  log 200 log pp log 2 200 qp log 1  qq  log log pp log 2 200 qp log 1  qq ă 1100 : 107 Note that for all x ą 0, log log x ă p log xq1{2 and hence for all 0 ă  ă e 4log 2 200 ,log pp log 2 200 qp log 1  qq  log log pp log 2 200 qp log 1  qq ě log pp log 2 200 qp log 1  qq  r log pp log 2 200 qp log 1  qqs 1{2 ě 12 log pp log 2 200 qp log 1  qq ě 12 log log 1  : (3.21) Thus we need 0 ă  ă e 4log 2 200 to also be such that 2 log 200 log log 1  ă 1100 and hence  ă e200 200 . Therefore using (3.20) and (3.21), we have that for  P p 0; e 200 200 q X N1, Dpq ď exp p30 log 1  log log 1  q: This completes the proof of Theorem 3.1.1. 3.3 An uncertainty principle interpretation of Lemma 3.2.8 The main point was of Lemma 3.2.8 was to show that if 1 ď a ď 2b,  and  such that 2b1 P N, then ż B |EI1 g|2|EI2 g|4 À 1 ¸ JPP2bpI1q ż B |EJ g|2|EI2 g|4 (3.22) for arbitrary I1 P Pa pr 0; 1sq and I2 P Pb pr 0; 1sq such that dpI1; I 2q Á . From Lemma 3.2.9, we only need (3.22) to be true for 1 ď a ď b. 
Our goal in this section is to prove (heuristically, under the uncertainty principle) the following two statements:

(I) For $1\le a<b$, $M_{a,b}(\delta,\nu)\lesssim M_{b,b}(\delta,\nu)$; in other words,
\[
\int_B|E_{I_1}g|^{2}|E_{I_2}g|^{4}\lesssim\sum_{J\in P_{\nu^{b}}(I_1)}\int_B|E_Jg|^{2}|E_{I_2}g|^{4}\tag{3.23}
\]
for arbitrary $I_1\in P_{\nu^{a}}([0,1])$ and $I_2\in P_{\nu^{b}}([0,1])$ such that $d(I_1,I_2)\gtrsim\nu$.

(II) $M_{b,b}(\delta,\nu)\lesssim\nu^{-1/6}M_{2b,b}(\delta,\nu)$; in other words,
\[
\int_B|E_{I_1}g|^{2}|E_{I_2}g|^{4}\lesssim\nu^{-1}\sum_{J\in P_{\nu^{2b}}(I_1)}\int_B|E_Jg|^{2}|E_{I_2}g|^{4}\tag{3.24}
\]
for arbitrary $I_1,I_2\in P_{\nu^{b}}([0,1])$ such that $d(I_1,I_2)\gtrsim\nu$.

(The loss of $\nu^{-1}$ at the level of the integrals in (3.24) corresponds to the loss of $\nu^{-1/6}$ at the level of the constants because the defining inequality (3.2) for $M_{a,b}$ involves sixth powers.) Replacing the exponent 4 with $p-2$ then allows us to generalize to $2\le p<6$ (in Section 3.6 we illustrate this in the case $p=4$). Note that all results in this section are only heuristically true: in this section we pretend that all weight functions are just indicator functions, and we make these heuristics rigorous in the next section.

The particular instance of the uncertainty principle we will use is the following. Let $I$ be an interval of length $1/R$ with center $c$. Fix an arbitrary $R\times R^{2}$ rectangle $T$ oriented in the direction $(-2c,1)$. Heuristically, for $x\in T$, $(E_Ig)(x)$ behaves like $a_{T,I}\,e^{2\pi i\omega_{T,I}\cdot x}\,1_T(x)$. Here the amplitude $a_T$ depends on $g$, $T$, and $I$, and the phase $\omega_T$ depends on $T$ and $I$. In particular, $|(E_Ig)(x)|$ is essentially constant on every $R\times R^{2}$ rectangle oriented in the direction $(-2c,1)$. This also implies that if $\Delta$ is a square of side length $R$, then $|(E_Ig)(x)|$ is essentially constant on $\Delta$ (with constant depending on $\Delta$) and $\|E_Ig\|_{L^{p}_{\#}(\Delta)}$ is essentially this same constant, independently of $p$.

We introduce two standard tools from [BD17, BDG16].

Lemma 3.3.1 (Bernstein's inequality). Let $I$ be an interval of length $1/R$ and $\Delta$ a square of side length $R$. If $1\le p\le q<\infty$, then
\[
\|E_Ig\|_{L^{q}_{\#}(\Delta)}\lesssim\|E_Ig\|_{L^{p}_{\#}(\Delta)}.
\]
We also have $\|E_Ig\|_{L^{\infty}(\Delta)}\lesssim\|E_Ig\|_{L^{p}_{\#}(\Delta)}$.

Proof. See [BD17, Corollary 4.3] or Lemma 2.2.20 for a rigorous proof.

The reverse inequality in the above lemma is just an application of Hölder.

Lemma 3.3.2 ($l^{2}L^{2}$ decoupling). Let $I$ be an interval of length $\ge 1/R$ such that $R|I|\in\mathbb N$, and let $\Delta$ be a square of side length $R$. Then
\[
\|E_Ig\|_{L^{2}(\Delta)}\lesssim\Big(\sum_{J\in P_{1/R}(I)}\|E_Jg\|^{2}_{L^{2}(\Delta)}\Big)^{1/2}.
\]

Proof. See [BD17, Proposition 6.1] or Lemma 2.2.21 for a rigorous proof.

The first inequality (3.23) is an immediate application of the uncertainty principle and $l^{2}L^{2}$ decoupling.

Lemma 3.3.3. Suppose $1\le a<b$ and $\delta$ and $\nu$ are such that $\nu^{b}\delta^{-1}\in\mathbb N$. Then
\[
\int_B|E_{I_1}g|^{2}|E_{I_2}g|^{4}\lesssim\sum_{J\in P_{\nu^{b}}(I_1)}\int_B|E_Jg|^{2}|E_{I_2}g|^{4}
\]
for arbitrary $I_1\in P_{\nu^{a}}([0,1])$ and $I_2\in P_{\nu^{b}}([0,1])$ such that $d(I_1,I_2)\gtrsim\nu$. In other words, $M_{a,b}(\delta,\nu)\lesssim M_{b,b}(\delta,\nu)$.

Proof. It suffices to show that for each $\Delta'\in P_{\nu^{-b}}(B)$ we have
\[
\int_{\Delta'}|E_{I_1}g|^{2}|E_{I_2}g|^{4}\lesssim\sum_{J\in P_{\nu^{b}}(I_1)}\int_{\Delta'}|E_Jg|^{2}|E_{I_2}g|^{4}.
\]
Since $I_2$ is an interval of length $\nu^{b}$, $|E_{I_2}g|$ is essentially constant on $\Delta'$. Therefore the above reduces to showing
\[
\int_{\Delta'}|E_{I_1}g|^{2}\lesssim\sum_{J\in P_{\nu^{b}}(I_1)}\int_{\Delta'}|E_Jg|^{2},
\]
which, since $a<b$ and $I_1$ is of length $\nu^{a}$, is just an application of $l^{2}L^{2}$ decoupling. This completes the proof of Lemma 3.3.3.

Inequality (3.24) is a consequence of the following ball inflation lemma, which is reminiscent of the ball inflation in the Bourgain-Demeter-Guth proof of Vinogradov's mean value theorem. The main point of this lemma is to increase the spatial scale so that we can apply $l^{2}L^{2}$ decoupling while keeping the frequency scale fixed.

Lemma 3.3.4 (Ball inflation). Let $b\ge 1$ be a positive integer. Suppose $I_1$ and $I_2$ are $\nu$-separated intervals of length $\nu^{b}$. Then for any square $\Delta'$ of side length $\nu^{-2b}$, we have
\[
\operatorname{Avg}_{\Delta\in P_{\nu^{-b}}(\Delta')}\|E_{I_1}g\|^{2}_{L^{2}_{\#}(\Delta)}\|E_{I_2}g\|^{4}_{L^{4}_{\#}(\Delta)}\lesssim\nu^{-1}\|E_{I_1}g\|^{2}_{L^{2}_{\#}(\Delta')}\|E_{I_2}g\|^{4}_{L^{4}_{\#}(\Delta')}.
\]

Proof.
The uncertainty principle implies that |EI1 g| and |EI2 g| are essentially constant on ∆. Therefore we essentially have Avg ∆PP´bp∆1q }EI1 g}2 L2#p∆q }EI2 g}4 L4#p∆q  1 |P´b p∆1q| ¸ ∆PP´bp∆1q 1 |∆| ż ∆ |EI1 g|2|EI2 g|4  1 |∆1| ż ∆1 |EI1 g|2|EI2 g|4: On ∆ 1, note that |EI1 g|  ř T1 |cT1 |1T1 and similarly for I2 where tTiu are the b 2b rectangles covering ∆ 1 and pointing in the normal direction of the cap on the parabola living above Ii. Since I1 and I2 are -separated, for any two tubes T1; T 2 corresponding to I1; I 2,we have |T1 X T2| À 12b. Therefore 1 |∆1| ż ∆1 |EI1 g|2|EI2 g|4  1 2b |∆1| ¸ T1;T 2 |cT1 |2|cT2 |4: Since }EI1 g}2 L2#p∆1q }EI2 g}4 L4#p∆1q  6b |∆1|2 ¸ T1;T 2 |cT1 |2|cT2 |4 and |∆1|  4b, this completes the proof of Lemma 3.3.4. We now prove inequality (3.24). Lemma 3.3.5. Suppose  and  were such that 2b1 P N. Then ż B |EI1 g|2|EI2 g|4 À 1 ¸ JPP2bpI1q ż B |EJ g|2|EI2 g|4 for arbitrary I1 P Pb pr 0; 1sq and I2 P Pb pr 0; 1sq such that dpI1; I 2q Á . In other words, Mb;b p;  q À 1{6M2b;b p;  q: 111 Proof. This is an application of ball in ation, l2L2 decoupling, Bernstein, and the uncertainty principle. Since 2b1 P N, b1 P N and  ď 2b. Fix arbitrary I1; I 2 P Pb pr 0; 1sq . We have 1 |B| ż B |EI1 g|2|EI2 g|4  1 |B| ¸ ∆PP´bpBq ż ∆ |EI1 g|2|EI2 g|4 ď 1 |B| ¸ ∆PP´bpBq p ż ∆ |EI1 g|2q} EI2 g}4 L8p∆q À 1 |P´b pBq| ¸ ∆PP´bpBq p 1 |∆| ż ∆ |EI1 g|2q} EI2 g}4 L4#p∆q  Avg ∆PP´bpBq }EI1 g}2 L2#p∆q }EI2 g}4 L4#p∆q (3.25) where the second inequality is because of Bernstein. From ball in ation we know that for each ∆ 1 P P´2b pBq,Avg ∆PP´2bp∆1q }EI1 g}2 L2#p∆q }EI2 g}4 L4#p∆q À 1}EI1 g}2 L2#p∆1q }EI2 g}4 L4#p∆1q : Averaging the above over all ∆ 1 P P´2b pBq shows that (3.25) is À 1 Avg ∆1PP´2bpBq }EI1 g}2 L2#p∆1q }EI2 g}4 L4#p∆1q : Since I1 is of length b, l2L2 decoupling gives that the above is À 1 ¸ JPP2bpI1q Avg ∆1PP´2bpBq }EJ g}2 L2#p∆1q }EI2 g}4 L4#p∆1q  1 1 |B| ¸ JPP2bpI1q ¸ ∆1PP´2bpBq }EI2 g}4 L4p∆1q }EJ g}2 L2#p∆1q  1 1 |B| ¸ JPP2bpI1q ¸ ∆1PP´2bpBq p ż ∆1 |EI2 g|4q} EJ g}2 L2#p∆1q : Since |EJ g| is essentially constant on ∆ 1, the uncertainty principle gives that essentially we have p ż ∆1 |EI2 g|4q} EJ g}2 L2#p∆1q  ż ∆1 |EJ g|2|EI2 g|4: Combining the above two centered equations then completes the proof of Lemma 3.3.5. 112 Remark 3.3.6 . The proof of Lemma 3.3.5 is reminiscent of our proof of Lemma 3.2.8. The }EI2 g}L8p∆q can be thought as using the trivial bound for i, i  2; 3; 5; 6 to obtain (3.12). Then we apply some data about separation, much like in ball in ation here to get large amounts of cancelation. 3.4 An alternate proof of Dpq À ε ´ε The ball in ation lemma and our proof of Lemma 3.3.5 inspire us to de ne a new bilinear decoupling constant that can make our uncertainty principle heuristics from the previous section rigorous. The left hand side of the de nition of Dpq is unweighted, however recall that Proposition 2.2.11 implies that }Er0;1sg}L6pwB q À Dpqp ¸ JPPpr 0;1sq }EJ g}2 L6pwBq q1{2: (3.26) for all g : r0; 1s Ñ C and squares B of side length 2.We will assume that 1 P N and  P N1 Xp 0; 1{100 q. Let Ma;b p;  q be the best constant such that Avg ∆PP´max pa;b qpBq }EI g}2 L2#pw∆q }EI1 g}4 L4#pw∆q ď Ma;b p;  q6p ¸ JPPpIq }EJ g}2 L6#pwBq qp ¸ JPPpI1q }EJ1 g}2 L6#pwBq q2 (3.27) for all squares B of side length 2, g : r0; 1s Ñ C and all intervals I P Pa pr 0; 1sq , I1 P Pb pr 0; 1sq with dpI; I 1q ě .Suppose a ą b (the proof when a ď b is similar). 
The uncertainty principle implies that Avg ∆PP´apBq }EI1 g}2 L2#p∆q }EI2 g}4 L4#p∆q  1 |P´a pBq| ¸ ∆PP´apBq p 1 |∆| ż ∆ |EI2 g|4q} EI1 g}2 L2#p∆q  1 |B| ż B |EI1 g|2|EI2 g|4 where the last  is because |EI1 g| is essentially constant on ∆. Therefore our bilinear constant Ma;b is essentially the same as the bilinear constant Ma;b we de ned in (3.2). 113 3.4.1 Some basic properties Lemma 3.4.1 (Bernstein) . Let I be an interval of length 1{R and ∆ a square of side length R. Then }EI g}L8p∆q À } EI g}Lp pw∆q : Proof. See [BD17, Corollary 4.3] for a proof without explicit constants or Lemma 2.2.20 for a version with explicit constants. Lemma 3.4.2 (l2L2 decoupling) . Let I be an interval of length ě 1{R such that R|I| P N and ∆ a square of side length R. Then }EI g}L2pw∆q À p ¸ JPP1{RpIq }EJ g}2 L2pw∆q q1{2: Proof. See [BD17, Proposition 6.1] for a proof without explicit constants or Lemma 2.2.21 for a version with explicit constants. We now run through the substitutes of Lemmas 3.2.3-3.2.5. Lemma 3.4.3. Suppose  and  were such that a1, b1 P N. Then Ma;b p;  q À Dp a q1{3Dp b q2{3: Proof. Let I1 P Pa pr 0; 1sq and I2 P Pb pr 0; 1sq . H older's inequality gives that Avg ∆PP´max pa;b qpBq }EI1 g}2 L2#pw∆q }EI2 g}4 L4#pw∆q ď Avg ∆PP´max pa;b qpBq }EI1 g}2 L6#pw∆q }EI2 g}4 L6#pw∆q ď p Avg ∆PP´max pa;b qpBq }EI1 g}6 L6#pw∆q q1{3p Avg ∆PP´max pa;b qpBq }EI2 g}6 L6#pw∆q q2{3 À } EI1 g}2 L6#pwBq }EI2 g}4 L6#pwBq where the last inequality we have used that ř ∆ w∆ Àn wB (see Proposition 2.2.14). Finally applying (3.26) with parabolic rescaling then completes the proof of Lemma 3.4.3. 114 Lemma 3.4.4. Suppose a1;  b1 P N. Then Ma;b p;  q À Mb;a p;  q1{2Dp b q1{2: Proof. Let I1 P Pa pr 0; 1sq and I2 P Pb pr 0; 1sq . We have Avg ∆PP´max pa;b qpBq }EI1 g}2 L2#pw∆q }EI2 g}4 L4#pw∆q ď Avg ∆PP´max pa;b qpBq }EI1 g}2 L2#pw∆q }EI2 g}L2#pw∆q}EI2 g}3 L6#pw∆q ď p Avg ∆PP´max pa;b qpBq }EI1 g}4 L2#pw∆q }EI2 g}2 L2#pw∆q q1{2p Avg ∆PP´max pa;b qpBq }EI2 g}6 L6#pw∆q q1{2 À p Avg ∆PP´max pa;b qpBq }EI1 g}4 L4#pw∆q }EI2 g}2 L2#pw∆q q1{2}EI2 g}3 L6#pwBq where the rst and second inequalities are because of H older and the third inequality is an application of H older and the estimate ř ∆ w∆ À wB . Applying parabolic rescaling and the de nition of Mb;a then completes the proof of Lemma 3.4.4. Lemma 3.4.5 (Bilinear reduction) . Suppose  and  were such that  1 P N. Then Dpq À n Dp  q 1M1;1p;  q: Proof. The proof is essentially the same as that of Lemma 3.2.5 except when analyzing (3.7) in the off-diagonal terms we use }| EIi g|1{3|EIj g|2{3}6 L6#pBq  Avg ∆PP´1pBq 1 |∆| ż ∆ |EIi g|2|EIj g|4 ď Avg ∆PP´1pBq }EIi g}2 L2#p∆q }EIj g}4 L8p∆q À Avg ∆PP´1pBq }EIi g}2 L2#pw∆q }EIj g}4 L4#pw∆q where the second inequality we have used Bernstein. 3.4.2 Ball in ation We now prove rigorously the ball in ation lemma we mentioned in the previous section. 115 Lemma 3.4.6 (Ball in ation) . Let b ě 1 be a positive integer. Suppose I1 and I2 are -separated intervals of length b. Then for any square ∆1 of side length 2b, we have Avg ∆PP´bp∆1q }EI1 g}2 L2#pw∆q }EI2 g}4 L4#pw∆q À 1}EI1 g}2 L2#pw∆1q }EI2 g}4 L4#pw∆1q : (3.28) Proof. Without loss of generality we may assume that ∆ 1 is centered at the origin. Fix intervals I1 and I2 intervals of length b which are -separated with centers c1 and c2,respectively. Cover ∆ 1 by a set T1 of mutually parallel nonoverlapping rectangles T1 of dimensions b 2b with longer side pointing in the direction of p 2c1; 1q (the normal direction of the piece of parabola above I1). 
Note that any b 2b rectangle outside 4∆ 1 cannot cover ∆ 1 itself. Thus we may assume that all rectangles in T1 are contained in 4∆ 1. Finally let T1pxq be the rectangle in T1 containing x. Similarly de ne T2 except this time we use I2.For x P 4∆ 1, de ne F1pxq : $'&'% sup yP2T1pxq }EI1 g}L2#pwBpy; ´bqq if x P Ť T1PT1 T1 0 if x P 4∆ 1z Ť T1PT1 T1 and F2pxq : $'&'% sup yP2T2pxq }EI2 g}L4#pwBpy; ´bqq if x P Ť T2PT2 T2 0 if x P 4∆ 1z Ť T2PT2 T2: Given a ∆ P P´b p∆1q, if x P ∆, then ∆ Ă 2Tipxq. This implies that the center of ∆, c∆ P 2Tipxq for x P ∆ and hence for all x P ∆, }EI1 g}L2#pw∆q ď F1pxq and }EI2 g}L4#pw∆q ď F2pxq: Therefore }EI1 g}2 L2#pw∆q }EI2 g}4 L4#pw∆q ď 1 |∆| ż ∆ F1pxq2F2pxq4 dx: (3.29) 116 By how Fi is de ned, Fi is constant on each Ti P Ti. That is, for each x P Ť TiPTi Ti, Fipxq  ¸ TiPTi cTi 1Ti pxq for some constants cTi ě 0. Thus using (3.29) and that the Ti are disjoint, the left hand side of (3.28) is bounded above by 1 |∆1| ż ∆1 F1pxq2F2pxq4 dx  1 |∆1| ¸ T1;T 2 c2 T1 c4 T2 |T1 X T2| À 1 2b |∆1| ¸ T1;T 2 c2 T1 c4 T2 (3.30) where the last inequality we have used that since I1 and I2 are -separated, sine of the angle between T1 and T2 is Á  and hence |T1 X T2| À 12b. Note that }F1}2 L2#p4∆ 1q  3b |4∆ 1| ¸ T1 c2 T1 and }F2}4 L4#p4∆ 1q  3b |4∆ 1| ¸ T2 c4 T2 : Therefore (3.30) is À 1}F1}2 L2#p4∆ 1q }F2}4 L4#p4∆ 1q : Thus we are done if we can prove that }F1}2 L2#p4∆ 1q À } EI1 g}2 L2#pw∆1q and }F2}4 L4#p4∆ 1q À } EI2 g}4 L4#pw∆1q but this was exactly what was shown in [BD17, Eq. (29)] (and Lemma 2.6.3 for the same inequality but with explicit constants). Our choice of bilinear constant (3.27) makes the rigorous proofs of Lemmas 3.3.3 and 3.3.5 immediate consequences of ball in ation and l2L2 decoupling. 117 Lemma 3.4.7. Suppose 1 ď a ă b and  and  were such that b1 P N. Then Ma;b p;  q À Mb;b p;  q: Proof. For arbitrary I1 P Pa pr 0; 1sq and I2 P Pb pr 0; 1sq which are -separated, it suffices to show that Avg ∆PP´bpBq }EI1 g}2 L2#pw∆q }EI2 g}4 L4#pw∆q À ¸ JPPbpI1q Avg ∆PP´bpBq }EJ g}2 L2#pw∆q }EI2 g}4 L4#pw∆q : But this is immediate from l2L2 decoupling which completes the proof of Lemma 3.4.7. Lemma 3.4.8. Let b ě 1 and suppose  and  were such that 2b1 P N. Then Mb;b p;  q À 1{6M2b;b p;  q: Proof. For arbitrary I1 P Pa pr 0; 1sq and I2 P Pb pr 0; 1sq which are -separated, it suffices to prove that Avg ∆PP´bpBq }EI1 g}2 L2#pw∆q }EI2 g}4 L4#pw∆q À 1 ¸ JPP2bpI1q Avg ∆1PP´2bpBq }EJ g}2 L2#pw∆1q }EI2 g}4 L4#pw∆1q : But this is immediate from ball in ation followed by l2L2 decoupling which completes the proof of Lemma 3.4.8. Combining Lemmas 3.4.4, 3.4.7, and 3.4.8 gives the following corollary. Corollary 3.4.9. Suppose  and  were such that 2b1 P N. Then Mb;b p;  q À 1{6M2b; 2bp;  q1{2Dp b q1{2: This corollary should be compared to the trivial estimate obtained from Lemma 3.4.3 which implies Mb;b p;  q À Dp{bq. 3.4.3 The O"p"q bound We now prove that Dpq À " ". The structure of the argument is essentially the same as that in Section 3.2.3. Repeatedly iterating Corollary 3.4.9 gives the following result. 118 Lemma 3.4.10. Let N be an integer chosen sufficiently large later and let  be such that 1{2N P N and 0 ă  ă 100 2N . Then Dpq À Dp1 12N q  43¨2N N1 ź j0 Dp1 12N ´j q 12j`1 : Proof. 
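Written out in the notation above (with $\nu = \delta^{1/2^{N}}$), the claim of Lemma 3.4.10 is that
\[
D(\delta) \;\lesssim\; D\big(\delta^{1 - \frac{1}{2^{N}}}\big) \;+\; \delta^{-\frac{4}{3 \cdot 2^{N}}} \prod_{j=0}^{N-1} D\big(\delta^{1 - \frac{1}{2^{N-j}}}\big)^{\frac{1}{2^{j+1}}}
\]
whenever $\delta^{1/2^{N}} \in \mathbb{N}^{-1}$ and $0 < \delta < 100^{-2^{N}}$.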
Iterating Corollary 3.4.9 N times gives that if  and  were such that 2N 1 P N,then M1;1p;  q À 1{3M2N ;2N p;  q1{2N : N1 ź j0 Dp 2j q 12j`1 Applying the trivial bound for the bilinear constant bounds gives that the above is À 1{3Dp 2N q1{2N N1 ź j0 Dp 2j q 12j`1 Choosing   1{2N shows that if 1{2N P N and 0 ă  ă 100 2N , then M1;1p;  1{2N q À  13¨2N N1 ź j0 Dp1 12N ´j q 12j`1 : By the bilinear reduction, if  was such that 1{2N P N and 0 ă  ă 100 2N , then Dpq À Dp1 12N q  43¨2N N1 ź j0 Dp1 12N ´j q 12j`1 : This completes the proof of Lemma 3.4.10. Trivial bounds for Dpq show that 1 À Dpq À 1{2 for all  P N1. Let  be the smallest real number such that Dpq À " " for all  P N1. From the trivial bounds  P r 0; 1{2s.We claim   0. Suppose  ą 0. Let N be a sufficiently large integer ě 83 . This implies 1 N 2  43 ě 1: Lemma 3.4.10 then implies that for  such that 1{2N P N and 0 ă  ă 100 2N , we have Dpq À " p1 12N q " p1 12N p1 N 243qq " À" p1 12N q " where the last inequality we have applied our choice of N . By almost multiplicity we then have the same estimate for all  P N1 (with a potentially larger constant depending on N ). But this then contradicts minimality of . Therefore   0. 119 3.5 Unifying the two styles of proof We now attempt to unify the Bourgain-Demeter style of decoupling and the style of decou-pling mentioned in the previous section. In view of Corollary 3.4.9, instead of having two integer parameters a and b we just have one integer parameter. Let b be an integer ě 1 and choose s P r 2; 3s any real number. Suppose  P N1 and  P N1 X p 0; 1{100 q were such that b1 P N. Let Mpsq b p;  q be the best constant such that Avg ∆PP´bpBq p ¸ JPPbpIq }EJ g}2 L2#pw∆q q s 2 p ¸ J1PPbpI1q }EJ1 g}2 L2#pw∆q q6´s 2 ď Mpsq b p;  q6p ¸ JPPpIq }EJ g}2 L2#pwBq q s 2 p ¸ J1PPpI1q }EJ1 g}2 L2#pwBq q6´s 2 (3.31) for all squares B of side length 2, g : r0; 1s Ñ C, and all intervals I; I 1 P P pr 0; 1sq which are -separated. Note that left hand side of the de nition of Mp3q b p;  q is the same as A6pq; B r; q q6 de ned in [BD17] and from the uncertainty principle, Mp2q 1 p;  q is morally the same as M1;1p;  q de ned in (3.2) and M1;1p;  q de ned in (3.27). The l2 piece in the de nition of Mpsq b p;  q is so that we can make the most out of applying l2L2 decoupling. We will use Mpsq b as our bilinear constant in this section to show that Dpq À " ". The bilinear constant Mpsq b obeys much the same lemmas as in the previous sections. Lemma 3.5.1 (cf. Lemmas 3.2.3 and 3.4.3) . If  and  were such that b1 P N, then Mpsq b p;  q À Dp b q: Proof. Fix arbitrary I1; I 2 P P pr 0; 1sq which are -separated. Moving up from L2# to L6# followed by H older in the average over ∆ bounds the left hand side of (3.31) p Avg ∆PP´bpBq p ¸ JPPbpI1q }EJ g}2 L6#pw∆q q62 qsp Avg ∆PP´bpBq p ¸ J1PPbpI2q }EJ g}2 L6#pw∆q q62 q6s: Using Minkowski to switch the l2 and l6 sum followed by ř ∆ w∆ À wB shows that this is À p ¸ JPPbpI1q }EJ g}2 L6#pwBq q s 2 p ¸ J1PPbpI2q }EJ1 g}2 L6#pwBq q6´s 2 : Parabolic rescaling then completes the proof of Lemma 3.5.1. 120 Lemma 3.5.2 (Bilinear reduction, cf. Lemmas 3.2.5 and 3.4.5) . Suppose  and  were such that  1 P N. Then Dpq À Dp  q 1Mpsq 1 p;  q: Proof. 
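That is, whenever $\delta/\nu \in \mathbb{N}^{-1}$, the claim is the bilinear reduction
\[
D(\delta) \;\lesssim\; D\big(\tfrac{\delta}{\nu}\big) \;+\; \nu^{-1}\, M^{(s)}_{1}(\nu, \delta).
\]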
Note that the left hand side of the de nition of Mpsq 1 p;  q is Avg ∆PP´1pBq }EI1 g}sL2#pw∆q}EI2 g}6sL2#pw∆q: Proceeding as in the proof of Lemmas 3.2.5 and 3.4.5, for Ii; I j P P pr 0; 1sq which are -separated, we have }| EIi g|| EIj g|} 1{2 L3#pBq ď }| EIi g| s 6 |EIj g|1 s 6 }1{2 L6#pBq }| EIi g|1 s 6 |EIj g| s 6 }1{2 L6#pBq : (3.32) We have }| EIi g| s 6 |EIj g|1 s 6 }6 L6#pBq  Avg ∆PP´1pBq 1 |∆| ż ∆ |EIi g|s|EIj g|6s ď Avg ∆PP´1pBq }EIi g}sLs p∆q }EIj g}6sL8p∆q À Avg ∆PP´1pBq }EIi g}sL2#pw∆q}EIj g}6sL2#pw∆q where the last inequality we have used Bernstein. Inserting this into (3.32) and applying the de nition of Mpsq 1 p;  q then completes the proof of Lemma 3.5.2. Lemma 3.5.3 (Ball in ation, cf. Lemma 3.4.6) . Let b ě 1 be a positive integer. Suppose I1 and I2 are -separated intervals of length . Then for any square ∆1 of side length 2b and any " ą 0, we have Avg ∆PP´bp∆1q p ¸ JPPbpI1q }EJ g}2 Ls pw∆q q s 2 p ¸ J1PPbpI2q }EJ1 g}2 L6´s pw∆q q6´s 2 À" 1b" p ¸ JPPbpI1q }EJ g}2 Ls pw∆1q q s 2 p ¸ J1PPbpI2q }EJ1 g}2 L6´s pw∆1q q6´s 2 Proof. The s  2 case be proven directly using Lemma 3.4.6 without any loss in b" . The proof for s P p 2; 3s proceeds as in the proof of ball in ation in [BD17, Section 9.2] (see also Section 2.6 for more details and explicit constants). 121 From dyadic pigeonholing, since we can lose a b" , it suffices to restrict the sum over J and J1 to families F1 and F2 such that for all J P F1, }EJ g}Ls pw∆1q are comparable up to a factor of 2 and similarly for all J1 P F2. H older gives Avg ∆PP´bp∆1q p ¸ JPF1 }EJ g}2 Ls pw∆q q s 2 p ¸ J1PF2 }EJ1 g}2 L6´s pw∆q q6´s 2 ď p #F1q s 21 p#F2q6´s 21 Avg ∆PP´bp∆1q p ¸ JPF1 }EJ g}sLs pw∆q qp ¸ J1PF2 }EJ1 g}6sL6´s pw∆q q: The proof of Lemma 3.4.6 shows that this is À 1p#F1q s 21 p#F2q6´s 21 p ¸ JPF1 }EJ g}sLs pw∆1q qp ¸ J1PF2 }EJ1 g}6sL6´s pw∆1q q: Since for J P F1 the values of }EJ g}Ls pw∆1q are comparable and similarly for J1 P F2, the above is À 1p ¸ JPF1 }EJ g}2 Ls pw∆1q q s 2 p ¸ J1PF2 }EJ1 g}2 L6´s pw∆1q q6´s 2 : This completes the proof of Lemma 3.5.3. Lemma 3.5.4 (cf. Corollary 3.4.9) . Suppose  and  were such that 2b1 P N. Then for every " ą 0, Mpsq b p;  q À "  16 p1b" qMpsq 2b p;  q1{2Dp b q1{2: Proof. Let  and φ be such that  2 1 6  1 s and φ 2 1φ 6  16s . Then H older gives }f }Ls ď } f }L2 }f }1L6 and }f }L6´s ď } f }φL2 }f }1φL6 .Fix arbitrary I1; I 2 P P pr 0; 1sq which are -separated. We have Avg ∆PP´bpBq p ¸ JPPbpI1q }EJ g}2 L2#pw∆q q s 2 p ¸ J1PPbpI2q }EJ1 g}2 L2#pw∆q q6´s 2 ď Avg ∆1PP´2bpBq Avg ∆PP´bp∆1q p ¸ JPPbpI1q }EJ g}2 Ls pw∆q q s 2 p ¸ J1PPbpI2q }EJ1 g}2 L6´s pw∆q q6´s 2 À" 1b" Avg ∆1PP´2bpBq p ¸ JPPbpI1q }EJ g}2 Ls pw∆1q q s 2 p ¸ J1PPbpI2q }EJ1 g}2 L6´s pw∆1q q6´s 2 where the rst inequality is from H older and the second inequality is from ball in ation. We now use how  and φ are de ned to return to a piece which we control by l2L2 decoupling 122 and a piece which we can control by parabolic rescaling. 
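For reference in the computation that follows, $\alpha$ and $\phi$ are determined by
\[
\frac{\alpha}{2} + \frac{1-\alpha}{6} = \frac{1}{s}, \qquad \frac{\phi}{2} + \frac{1-\phi}{6} = \frac{1}{6-s},
\]
which gives $\alpha s = 3 - \tfrac{s}{2}$ and $\phi(6-s) = \tfrac{s}{2}$; these are the identities used below.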
H older (as in the de nition of  and φ) gives that the average above is bounded by Avg ∆1PP´2b pBq p ¸ JPPb pI1q }EJ g}2L2#pw∆1 q}EJ g}2p1q L6#pw∆1 qq s 2 p ¸ J1PPb pI2q }EJ1 g}2φL2#pw∆1 q}EJ1 g}2p1φq L6#pw∆1 qq6´s 2 : H older in the sum over J and J1 shows that this is ď Avg ∆1PP´2b pBq p ¸ JPPb pI1q }EJ g}2 L2#pw∆1 qqp ¸ JPPb pI1q }EJ g}2 L6#pw∆1 qq1 s 2 p ¸ J1PPb pI2q }EJ1 g}2 L2#pw∆1 qqφp ¸ J1PPb pI2q }EJ1 g}2 L6#pw∆1 qq1φ 6´s 2 : Since s  3  s 2 and φp6  sq  s 2 , rearranging the above gives Avg ∆1PP´2b pBq p ¸ JPPb pI1q }EJ g}2 L2#pw∆1 qq12 p3 s 2 qp ¸ J1PPb pI2q }EJ1 g}2 L2#pw∆1 qq12  s 2 p ¸ JPPb pI1q }EJ g}2 L6#pw∆1 qq12 3p s 2 1qp ¸ J1PPb pI2q }EJ1 g}2 L6#pw∆1 qq12 3p2 s 2 q : Cauchy-Schwarz in the average over ∆ 1 then bounds the above by Avg ∆1PP´2b pBq p ¸ JPPb pI1q }EJ g}2 L2#pw∆1 qq6´s 2 p ¸ J1PPb pI2q }EJ1 g}2 L2#pw∆1 qq s 2 12 Avg ∆1PP´2b pBq p ¸ JPPb pI1q }EJ g}2 L6#pw∆1 qq3ps´2q 2 p ¸ J1PPb pI2q }EJ1 g}2 L6#pw∆1 qq3p4´sq 2 12 : (3.33) After l2L2 decoupling, the rst term in (3.33) is À Mpsq 2b p;  q3p ¸ JPP pI1q }EJ g}2 L2#pwB qq12  6´s 2 p ¸ J1PP pI2q }EJ1 g}2 L2#pwB qq12  s 2 : (3.34) H older in the average over ∆ 1 bounds the second term in (3.33) by p Avg ∆1PP´2b pBq p ¸ JPPb pI1q }EJ g}2 L6#pw∆1 qq62 qs´24 p Avg ∆1PP´2b pBq p ¸ JPPb pI1q }EJ g}2 L6#pw∆1 qq62 q4´s 4 : Applying Minkowski to interchange the l2 and l6 norms shows that this is À p ¸ JPPb pI1q }EJ g}2 L6#pwB qq3ps´2q 4 p ¸ J1PPb pI2q }EJ1 g}2 L6#pwB qq3p4´sq 4 : 123 Parabolic rescaling bounds this by Dp b q3p ¸ JPPpI1q }EJ g}2 L6#pwBq q12  3ps´2q 2 p ¸ J1PPpI2q }EJ1 g}2 L6#pwBq q12  3p4´sq 2 : (3.35) Combining (3.34) and (3.35) then completes the proof of Lemma 3.5.4. With Lemma 3.5.4, the same proof as Lemma 3.4.10 gives the following. Lemma 3.5.5 (cf. Corollary 3.2.12 and Lemma 3.4.10) . Let N be an integer chosen sufficient large later and let  be such that 1{2N P N and 0 ă  ă 100 2N . Then Dpq À " Dp1 12N q  43¨2N  N " 6¨2N N1 ź j0 Dp1 12N ´j q 12j`1 : Proof. This follows from the proof of Lemma 3.4.10 and the observation that Mpsq 1 p;  q À "  13  16 N " Mpsq 2N p;  q 12N N1 ź j0 Dp 2j q 12j`1 : along with Lemmas 3.5.1 and 3.5.2. To nish, we proceed as at the end of the previous section. Let  P r 0; 1{2s be the smallest real such that Dpq À " ". Suppose  ą 0. Choose N such that 1 N 2  43 ě 1: Then for  such that 1{2N P N and 0 ă  ă 100 2N , Lemma 3.5.5 gives Dpq À " p1 12N q " p1 12N p1 N 243qq "p112Nq N " 2¨2NN " 6¨2N À" p1 12N q ": Almost multiplicativity gives that Dpq À N;" p1 12N q " for all  P N1, contradicting the minimality of . 3.6 An efficient congruencing style proof of l2L4 decoupling for the parabola 3.6.1 Setup and some standard lemmas Having compared the iteration from Bourgain-Demeter with an efficient congruencing style decoupling proof at L6, we compare the two arguments for some 2 ă p ă 6. We using 124 techniques from the previous sections to prove an explicit upper bound for the l2L4 decoupling constant for the parabola. We will make use of the uncertainty principle at times, however the rigorous argument can easily be made in a similar manner as how we transitioned from Section 3.3 to Section 3.4. Aside from the notation for the linear and bilinear decoupling constants, we adopt all notation from the previous sections. For simplicity, in this section we write Dpq to be the l2L4 decoupling constant for the parabola. 
That is, for  P N1, let Dpq be the best constant such that }Er0;1sg}L4pBq ď Dpqp ¸ JPPpr 0;1sq }EJ g}2 L4pwBq q1{2 for all g : r0; 1s Ñ C and all squares B of side length 2.Let geom be the standard geometric mean. We will assume that 1 P N and  P N1 X p 0; 1{10000 q. Fix arbitrary integer a ě 1, Suppose  and  was such that a1 P N.For this  and , let Map;  q be the best constant such that } geom |EIi g|} L4pBq ď Map;  q geom p ¸ JPPpIiq }EJ g}2 L4pwBq q1{2 for all squares B of side length 2, g : r0; 1s Ñ C, and all intervals I1; I 2 P Pa pr 0; 1sq with dpI1; I 2q ě 3.In Chapter 2 we showed that Dpq À exp pOpp log 1  q2{3qq . In this section we will show that the methods from the previous section give Dpq À exp pOpp log 1  q3{4qq (3.36) which is qualitatively the same as the bound we obtained in Chapter 2. Remark 3.6.1 . Since 4  2 2, it turns out that we only need to have one frequency scale in Map;  q. One could also de ne an alternative bilinear decoupling constant with two frequency scales Ma;b p;  q analogously as in (3.2). In this case, the key properties are Ma;b p;  q  Mb;a p;  q and Ma;b p;  q À 1{4Mb; 2bp;  q. In both de nitions we obtain essentially the same iteration and that Dpq À exp pOpp log 1  q3{4qq .125 We have the following standard lemmas which we will state without proof. Lemma 3.6.2 (Parabolic rescaling) . Let 0 ă  ă  ă 1 be such that ; ;  { P N1. Let I be an arbitrary interval in r0; 1s of length . Then }EI g}L4pBq À Dp  qp ¸ JPPpIq }EJ g}2 L4pwBq q1{2 for every g : r0; 1s Ñ C and every square B of side length 2. Lemma 3.6.3 (Almost multiplicativity) . Let 0 ă  ă  ă 1 be such that ; ;  { P N1,then Dpq À DpqDp{q: Lemma 3.6.4 (Bilinear reduction) . Suppose  and  were such that  1 P N. Then Dpq À Dp  q 1M1p;  q: Lemma 3.6.5. If  and  are such that a1 P N, then Map;  q À Dp a q: 3.6.2 The key technical lemma Much like how Lemma 3.2.8 was the key step in the previous section, the following key technical lemma drives our iteration. Lemma 3.6.6. Let a and b be integers such that 1 ď a ă b. Suppose  and  are such that b1 P N. Then Map;  q À Mbp;  q: Proof. It suffices to assume that B is centered at the origin with side length 2. Note that the integrality conditions imply that  ď b and since 1 P N, a1;  b1 P N.Fix arbitrary intervals I1  r ; as and I2  r ; as both in Pa pr 0; 1sq and are 3-separated. Observe that } geom |EIi g|} 4 L4pBq  ż B |EI1 g|2|EI2 g|2: 126 Let g pxq : gpx q, T  p 1 2 01 q, and d :  . Then shifting I2 to r0;  as gives that ż B |EI1 g|2|EI2 g|2  ż B |p Erd;d asg qp T xq| 2|Er0; asg qp T xq| 2 dx  ż TpBq |p Erd;d asg qp xq| 2|p Er0; asg qp xq| 2 dx: (3.37) Note that d can be negative, however since g : r0; 1s Ñ C and d   , Erd;d asg is de ned. Since | | ď 1{2, T pBq Ă 10 B. Combining this with 1 10 B ď 10 B gives that the above is ď ż R2 |p Erd;d asg qp xq| 2|p Er0; asg qp xq| 210 B pxq dx  ¸ J1;J 2PPbpr d;d asq K1;K 2PPbpr 0; asq ż R2 EJ1 g EJ2 g EK1 g EK2 g 10 B dx: (3.38) We will show that the integral above is zero unless dpJ1; J 2q ď b and dpK1; K 2q ď b. 
If we can show this, then we can add these two conditions into the sum in (3.38) and hence Cauchy-Schwarz bounds (3.38) by ¸ JPPbpr d;d asq KPPbpr 0; asq ż R2 |EJ g |2|EK g |210 B dx: Undoing the change of variables as in (3.37) gives that the above is equal to ¸ JPPbpI1q KPPbpI2q ż R2 |EJ g|2|EK g|210 B pT xq dx: The de nition of Mb and the observation that 10 B pT xq À wB pxq gives that the above is bounded above by (here we will need a version of Mb with the left hand side with weight wB , but such a constant is equivalent to Mb) Mbp;  q4 ¸ JPPbpI1q KPPbpI2q p ¸ J1PPpJq }EJ1 g}2 L4pwBq qp ¸ K1PPpKq }EK1 g}2 L4pwBq qď Mbp;  q4p ¸ JPPpI1q }EJ g}2 L4pwBq qp ¸ KPPpI2q }EK g}2 L4pwBq q: This then proves Lemma 3.6.6 provided we can add in the conditions dpJ1; J 2q ď b and dpK1; K 2q ď b into (3.38). 127 Fix J1; J 2 P Pb pr d; d asq and K1; K 2 P Pb pr 0;  asq . Suppose dpJ1; J 2q ą b. We claim that ż R2 EJ1 g EJ2 g EK1 g EK2 g 10 B dx  0 (3.39) in this case. The case when dpK1; K 2q ą b is similar. The left hand side is equal to ż J1J2K1K2 g p1qg p2qg p3qg p4q ż R2 ep   q 10 B pxq dx d where the expression in the ep   q is pp 1  2  3 4qx1 p 21  22  23 24 qx2q: Therefore by the Fourier support of 10 B , (3.39) is equal to 0 unless |1  2  3 4| ď 2 10 |21  22  23 24 | ď 2 10 : Since dpJ1; J 2q ą b, |1  2| ą b and since I1 and I2 are 3 -separated, |2  4| ą 3. Note that |i| ď 1 and 21  22  23 24  p 1  2 3  4qp 2  4q p 1  2  3 4qp 1 3q: Therefore |1  2 3  4| ď 110 21 ď 110 2b1: We claim that the above inequalities are inconsistent. Since we are not given the relative positions of the i, we have the following two cases. piq 1 ą 2 and 4 ą 3 OR 2 ą 1 and 3 ą 4: We have 2 10 ě | 1  2  3 4|  | 1  2| | 4  3| ě | 1  2| ą b: Since  ď b, we then have b ď 2b{10, a contradiction. 128 pii q 1 ą 2 and 3 ą 4 OR 2 ą 1 and 4 ą 3: We have 110 2b1 ě | 1  2 3  4|  | 1  2| | 3  4| ě | 1  2| ą b; a contradiction since b ą 1 and  is sufficiently small. Therefore in all cases (3.39) is equal to 0 when dpJ1; J 2q ą b. This completes the proof of Lemma 3.6.6. The following alternate to Lemma 3.6.6 can also be used and is reminiscent of the proofs of Lemmas 3.3.4 and 3.3.5. Lemma 3.6.7. Let a be a positive integer. Suppose  and  are such that 2a1 P N. Then Map;  q À 1{4M2ap;  q: Proof. We will make use of the uncertainty principle in this proof, but this can be made rigorous through the same methods we used to make Section 3.3 rigorous. It suffices to prove that ż B |EI g|2|EI1 g|2 À 1 ¸ JPP2apIq J1PP2apI1q ż B |EJ g|2|EJ1 g|2 (3.40) for I; I 1 P Pa pr 0; 1sq with dpI; I 1q Á .Fix I; I 1 P Pa pr 0; 1sq with dpI; I 1q Á . To show (3.40), it suffices to show that 1 |∆| ż ∆ |EI g|2|EI1 g|2 À 1 ¸ JPP2apIq J1PP2apI1q 1 |∆| ż ∆ |EJ g|2|EJ1 g|2 (3.41) for each ∆ P P´2a pBq.Since the uncertainty principle implies that |EJ g| and |EJ1 g| are essentially constant on ∆, combining this with l2L2 decoupling shows (3.41) reduces to showing that 1 |∆| ż ∆ |EI g|2|EI1 g|2 À 1p 1 |∆| ż ∆ |EI g|2qp 1 |∆| ż ∆ |EI1 g|2q: (3.42) 129 Now as in the proof of Lemma 3.3.4, the uncertainty principle says that on ∆, |EI g|  ř T |cT |1T and |EI1 g|  ř T1 |cT 1 |1T 1 where tT u and tT 1u are a 2a rectangles covering ∆1 and pointing in the normal direction of the cap on the parabola living above I and I1,respectively. Thus we would have (3.42) if we could show that for each pair of tubes T; T 1 associated to I; I 1, we have p 1 |∆| ż ∆ 1T 1T 1 q À 1p 1 |∆| ż ∆ 1T qp 1 |∆| ż ∆ 1T 1 q (3.43) for some absolute constant C. 
But since dpI; I 1q Á , the left hand side is equal to 1p2aq while the right hand side is 1paq2 which proves (3.43) and hence proves (3.40) which completes the proof of Lemma 3.6.7. 3.6.3 The iteration and endgame First applying Lemma 3.6.4 followed by Lemma 3.6.6 and then Lemma 3.6.5 then gives the following lemma. Lemma 3.6.8. Let m ą 10 . Suppose  and  were such that m1 P N. Then Dpq À Dp  q 1Dp m q: Choosing   1{m (and recalling that we also require  P N1 X p 0; 1{100 q) gives the following result. Lemma 3.6.9. Let m ą 10 . Suppose  was such that 1{m P N and  ă 100 m. Then Dpq À Dp11{mq 1{m where the implied constant is independent of m. We now give a proof that Dpq À " " for all " ą 0. Proposition 3.6.10. For all  P N1, Dpq À " ". 130 Proof. The trivial bounds for Dpq are 1 À Dpq À 1{2 for all  P N1. Let  be the smallest real number such that Dpq À " " for all  P N1. From the trivial bounds,  P r 0; 1{2s. We claim that   0. Suppose  ą 0. Since  ď 1{2, choose m to be an integer such that 1 m ă 1  1 m Then by Lemma 3.6.9, for 1{m P N with  ă 100 m, Dpq À " p1 1 mq " p 1 m q À" p1 1 mq " : Applying almost multiplicativity then shows that for all  P N1, Dpq À m;" p1 1 mq " ; contradicting minimality of . Therefore   0. This completes the proof of Proposition 3.6.10. Having shown that Dpq À " ", we now make this bound explicit. Fix arbitrary 0 ă " ă 1{100. Then Dpq ď C"" for all  P N1. Lemma 3.6.11. Fix arbitrary 0 ă " ă 1{100 . Let m ą 10 be such that 1 m" ă 1  1 m and  such that 1{m P N and  ă 100 m. Then Dpq À C1"{m" " where the implied is absolute. Proof. Increasing C", we may assume that C" ą 1. Inserting Dpq ď C"" into Lemma 3.6.9 gives that for all integers m ą 1 and  such that 1{m P N and  ă 100 m, we have Dpq À p C" "m  1 m" q": (3.44) 131 If additionally  ă C1 " , then (3.44) becomes Dpq À C1 "m " ": (3.45) On the other hand if  ą C1 " , we can just apply the trivial bound Dpq À 1{2 À C1{2 " which is bounded above by the right hand side of (3.45). This completes the proof of Lemma 3.6.11. Using almost multiplicativity to get rid of the integrality conditions, we have the following lemma. Lemma 3.6.12. Fix arbitrary 0 ă " ă 1{100 . For all  P N1, Dpq À exp pOp1 " qq C1"2{2 " ": Thus if P pC;  q is the statement that Dpq ď C  for all  P N1, Lemma 3.6.12 implies that P pC"; " q ùñ P pC exp pOp1 " qq C1"2{2 " ; " q for an absolute constant C. Iterating this repeatedly then gives the following result. Lemma 3.6.13. Fix arbitrary 0 ă " ă 1{100 . For all  P N1, Dpq ď exp pOp 1 "3 qq ": Optimizing in " then proves (3.36). 3.7 A decoupling interpretation of efficient congruencing for the cubic moment curve Having interpreted efficient congruencing for the quadratic Vinogradov conjecture in terms of l2 decoupling, one immediate question is whether other works of efficient congruencing such as [Hea15] or [Woo19] can give a new and different proof of decoupling for the moment curve. 132 We sketch an argument that is ongoing work with Shaoming Guo and Po-Lam Yung in this direction. We reinterpret the iteration given in [Hea15] into decoupling language. To rigorously use the uncertainty principle, we use a slightly different formulation than what is below, however, the formulation below makes the connection to [Hea15] clearer. We are able to give a new proof of l4L12 decoupling for the moment curve t Þ Ñ p t; t 2; t 3q that is different from that given by Bourgain-Demeter-Guth in [BDG16] (who actually prove an l2L12 decoupling theorem). 
In particular, we use a bilinear argument while [BDG16] uses a trilinear argument. For the purposes of number theory, any lpL12 decoupling theorem is sufficient. However our argument is only able to prove an lpL12 decoupling theorem for the cubic moment curve for p ě 4. Let pEI gqp xq : ż I gpqepx 1 2x2 3x3q d: We let Dpq be the best constant such that }Er0;1sg}L12 pBq ď Dpqp ¸ JPPpr 0;1sq }EJ g}4 L12 pBq q1{4 for all functions g : r0; 1s Ñ C and all squares B of side length 3. We prove that Dpq À " 1{4" which is the sharp l4L12 decoupling theorem for the moment curve t Þ Ñ p t; t 2; t 3q.Suppose  P 22N X p 0; 1{1000 q. We de ne two bilinear decoupling constants M1;a;b p;  q and M2;a;b p;  q. Suppose a and b are integers and  and  are such that a1;  b1 P N.Let M1;a;b p;  q be the best constant such that ż B |EI g|2|EI1 g|10 ď M1;a;b p;  q12 p ¸ JPPpIq }EJ g}4 L12 pBq q1{2p ¸ J1PPpI1q }EJ1 g}4 L12 pBq q5{2 for all functions g : r0; 1s Ñ C, cubes B Ă R3 of side length 3 and all pairs of intervals I P Pa pr 0; 1sq , I1 P Pb pr 0; 1sq with dpI; I 1q Á . Similarly, let M2;a;b p;  q be the best constant such that ż B |EI g|4|EI1 g|8 ď M2;a;b p;  q12 p ¸ JPPpIq }EJ g}4 L12 pBq qp ¸ J1PPpI1q }EJ1 g}4 L12 pBq q2 133 for all functions g : r0; 1s Ñ C, cubes B Ă R3 of side length 3 and all pairs of intervals I P Pa pr 0; 1sq , I1 P Pb pr 0; 1sq with dpI; I 1q Á . In addition to parabolic rescaling, our l4L12 decoupling theorem is a consequence of the following ve additional lemmas. Lemma 3.7.1 (Bilinearization) . If  and  were such that  1 P N, then Dpq À 1{4Dp  q 1M2;1;1p;  q: Lemma 3.7.2. If a and b are positive integers and  and  were such that a1;  b1 P N,then M2;a;b p;  q À M2;b;a p;  q1{3M1;a;b p;  q2{3: Lemma 3.7.3. If a and b are positive integers and  and  were such that a1;  b1 P N,then M1;a;b p;  q À M2;b;a p;  q1{4Dp b q3{4: Lemma 3.7.4. Let a and b be integers such that 1 ď a ď 3b. Suppose  and  were such that 3b1 P N. Then M1;a;b p;  q À a;b  124 p3baq C0 M1;3b;b p;  q for some large absolute constant C0. Lemma 3.7.5. Let a and b be integers such that 1 ď a ď b. Suppose  and  were such that 2ba1 P N. Then for every " ą 0, M2;a;b p;  q À a;b;"  16 p1"qp baqM2;2ba;b p;  q for some large absolute constant C0. The proof of Lemma 3.7.1 is similar to that of Lemma 3.2.5. The proof of Lemmas 3.7.2 and 3.7.3 essentially follow from the observations that ż f 4g8 ď p ż f 8g4q1{3p ż f 2g10 q2{3 134 and ż f 2g10 ď p ż f 8g4q1{4p ż g12 q3{4: The proof of Lemma 3.7.4 relies on l2L2 decoupling and two ball in ation lemmas similar to that in Lemma 3.3.4. Bourgain-Demeter-Guth's proof of l2L12 decoupling for the cubic moment curve will make use of l2L6 decoupling of the parabola as a lower dimensional input. It turns out that Lemma 3.7.5 will make use of the following lower dimensional decoupling theorem. Lemma 3.7.6. Let pE2DI gqp xq : ş I gpqepx 1 2x2q d . Then for every " ą 0, }E2D r0;1s g}L4pBq À" 1{4"p ¸ JPPpr 0;1sq }E2DJ g}4 L4pwBq q1{4 for all functions g : r0; 1s Ñ C and squares B Ă R2 of side length 1. The loss of 1{4 in Lemma 3.7.6 is sharp (up to " losses) which can be seen by taking g  1r0;1s. Furthermore, the use of Lemma 3.7.6 is precisely why we were only able to prove an l4L12 decoupling theorem rather than an l2L12 decoupling theorem. 135 CHAPTER 4 More properties of the parabola decoupling constant In this chapter, we collection some short stories about the parabola decoupling constant. 
First we prove some more equivalences of the parabola decoupling constant and show that these parabola decoupling constants are all monotonic. Among these parabola decoupling constants is the global decoupling constant that is used in [BD15]. Next, after having given iterative proofs of l2L4 decoupling for the parabola in Chapter 2 and Section 3.6, we give an elementary proof which shows that in the case of l2L4 decoupling for the parabola, the associated decoupling constant is Op1q. Finally in Section 4.4, we address a \small ball" l2 decoupling theorem for the paraboloid that the author rst learned from Hong Wang in January 2018. 4.1 Equivalence of some more parabola decoupling constants In Section 2.3 (in particular (2.38)), we showed many spatially localized decoupling constants were all equivalent. Now we de ne a few more decoupling constants and show that they are equivalent. The decoupling constants we introduce are all of the type that involve an f with Fourier support in a 2 neighborhood of the parabola above r0; 1s. We then relate this to pDp;E pq from De nition 2.3.3 thus proving that a slew of local and global decoupling constants are equivalent. Here by local we mean spatially localized while by global we mean nonspatially localized. This section and Section 2.3 combined provide similar results that were stated (though not explicitly proven) in Remark 5.2 of [BD15]. As we stated in Remark 2.3.6, equivalence of various parabola decoupling constants is an extremely useful result. Because of the shear matrix, parabolic rescaling is easier using 136 the global formulation rather than the local formulation. Thus by also showing that certain global decoupling constants are equivalent to some local decoupling constants we can apply parabolic rescaling using the global decoupling formulation and then pass this result to the local decoupling formulation. Also the result of this section shows that various local decoupling constants involving a function Fourier supported in some Op2q neighborhood of the parabola are equivalent to each other regardless of decay E in the weight wB;E or thickness C of the C 2 neighborhood of the parabola. The results in this section can be generalized to an arbitrary h P C2 satisfying: hp0q  h1p0q  0, 0 ă h1ptq ď 1 for t P p 0; 1s,and 1 {2 ď h2ptq ď 2 for t P r 0; 1s but we do not pursue that here. 4.1.1 Basic tools and de nitions We rst de ne two local and global decoupling constants. We show that these decoupling constants are equivalent by linearly approximating the regions where f has Fourier support and using that Fourier restriction to polygons are bounded in Lp.For a square B centered at c with side length R, let wB;E pxq : p 1 |xc| R qE . Let  be a Schwartz function such that  ě 1Bp0;1q and supp ppq Ă Bp0; 1q. For a square B centered at c of side length R, we let B pxq : pxcR q.If J P Ppr 0; 1sq and n P N, let J;n : tp s; s 2 tq : s P J; |t| ď n 2 2u (4.1) and n : Ť JPPpr 0;1{2sq J;n . We now de ne the following two decoupling constants. De nition 4.1.1. Let DLp;n;E pq be the best constant such that }f }LppBq ď DLp;n;E pqp ¸ JPPpr 0;1sq }fJ;n }2 LppwB;E q q1{2 for all f with Fourier support in n and squares B of side length 2.Let DGp;n pq be the best constant such that }f }p ď DGp;n pqp ¸ JPPpr 0;1sq }fJ;n }2 p q1{2 for all f with Fourier support in n. 
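Spelled out, writing (say) $\mathcal{N}_{n}$ for the union of the tubes $\theta_{J,n}$ introduced above and $f_{\theta}$ for the Fourier restriction of $f$ to $\theta$, these two quantities are the best constants in
\[
\|f\|_{L^{p}(B)} \le D^{L}_{p,n,E}(\delta) \Big( \sum_{J \in P_{\delta}([0,1])} \|f_{\theta_{J,n}}\|_{L^{p}(w_{B,E})}^{2} \Big)^{1/2}
\quad\text{and}\quad
\|f\|_{L^{p}(\mathbb{R}^{2})} \le D^{G}_{p,n}(\delta) \Big( \sum_{J \in P_{\delta}([0,1])} \|f_{\theta_{J,n}}\|_{L^{p}(\mathbb{R}^{2})}^{2} \Big)^{1/2},
\]
the first required for all $f$ with Fourier support in $\mathcal{N}_{n}$ and all squares $B$ of side length $\delta^{-2}$, the second for all such $f$.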
137 We reintroduce the parallelograms from the discussion above Lemma 2.3.1 though this time instead of a 10  neighborhood we use an n 2 neighborhood (we also have switched notation slightly so that 1{2 and  in Chapter 2 have become  and 2, but this does not change any of our results). If J  r nJ ; pnJ 1qs P Ppr 0; 1sq , let LJ be the line connecting the point pnJ ; n 2 J 2q and pp nJ 1q; pnJ 1q22q. Explicitly we have LJ pxq : p2nJ 1qp x  nJ q n2 J 2: For J P Ppr 0; 1sq and n P N, let 1 J;n : tp s; L J psq tq : s P J; |t| ď n 2 2u: Pictorially, 1 J;n is a parallelogram with sides parallel to LJ of height n 2. Finally, we let 1 n : Ť JPPpr 0;1sq 1 J;n .We now de ne two more decoupling constants we will consider which are the parallelo-gram versions of De nition 4.1.1. De nition 4.1.2. Let Dpar;L p;n;E pq be the best constant such that }f }LppBq ď Dpar;L p;n;E pqp ¸ JPPpr 0;1sq }f1 J;n }2 LppwB;E q q1{2 for all f with Fourier support in 1 n and squares B of side length 2.Let Dpar;G p;n pq be the best constant such that }f }p ď Dpar;G p;n pqp ¸ JPPpr 0;1sq }f1 J;n }2 p q1{2 for all f with Fourier support in 1 n . In Lemmas 4.1.5-4.1.6 we show that no matter how we modify the n and E parameter, the local and global decoupling constants de ned in De nition 4.1.2 are equivalent. The proof will make use that 1 J;n is a parallelogram, in particular, we will often make use that Fourier restriction to a parallelogram is bounded as an operator on Lp. We also have the following reverse triangle inequality which will prove to be useful. 138 Lemma 4.1.3 (Reverse triangle inequality) . Let  and 1 be two parallelograms with disjoint interior. Then for 1 ă p ă 8 , }f}p } f1 }p p }fY1 }p: Proof. Since  and 1 are disjoint, fY1  f f1 and hence }fY1 }p ď } f}p } f1 }p from the triangle inequality. We observe that f  p fY1 q and f1  p fY1 q1 and so since Fourier restriction to a parallelogram is bounded in Lp for 1 ă p ă 8 , }f}p } f1 }p  }p fY1 q}p }p fY1 q1 }p À } fY1 }p: This completes the proof of Lemma 4.1.3. 4.1.2 Equivalence of parallelogram decoupling constants We rst show that we have many equivalences for the parallelogram decoupling constants. The restriction to 2 ď p ď 6 is not important and is just there to get rid of the dependence on p. Lemma 4.1.4 (Global equivalence for n  m). For 2 ď p ď 6 and n  m, Dpar;G p;n pq  n;m Dpar;G p;m pq: Proof. It suffices to show the case when n  1. Since m ą 1, 1 1 Ă 1 m and hence if f is Fourier supported in 1 1 phq, we then have }f }p ď Dpar;G p;m pqp ¸ JPPpr 0;1sq }f1 J;m }2 p q1{2: However since f is Fourier supported in 1 1 , f1 J;m  f1 J; 1 and hence Dpar;G p; 1 pq ď Dpar;G p;m pq.The reverse inequality will make use of Lemma 4.1.3. The idea is to partition 1 m into m translates of 1 1 , apply Dpar;G p; 1 pq to each of these translates, and then sum them together using Lemma 4.1.3 (losing a constant depending on m). Let f be Fourier supported in 1 m . For each J P Ppr 0; 1sq , we can write 1 J;m  m ď i1 1 J; 1 p 0; c iq 139 for some ci and the union is a disjoint union (except at the boundary). Explicitly if m is odd, then we can take tciu  t k 2 2 : k even ; |k| ď m  1u and if m is even, then we can take tciu  t k 2 2 : k odd ; |k| ď m  1u.Next Lemma 4.1.3 implies that }f1 J;m }p mm¸ i1 }f1 J; 1p 0;c iq }p where here we have removed the dependence on p because 2 ď p ď 6. 
Therefore m ¸ i1 }f1 J; 1p 0;c iq }2 p À p m ¸ i1 }f1 J; 1p 0;c iq }pq2 Àm }f1 J;m }2 p : (4.2) With this, we write f  řmi1 f1 1p 0;c iq and estimate }f }p Àm p m ¸ i1 }f1 1p 0;c iq }2 p q1{2 À Dpar;G p; 1 pqp m ¸ i1 ¸ JPPpr 0;1sq }f1 J; 1p 0;c iq }2 p q1{2: Interchanging sums and then applying (4.2) then shows Dpar;G p;m pq À m Dpar;G p; 1 pq. This com-pletes the proof of Lemma 4.1.4. Lemma 4.1.5 (Local-global equivalence for the same n). For 2 ď p ď 6, Dpar;L p;n;E pq  n;E Dpar;G p;n pq: Proof. We rst show that Dpar;G p;n pq À n;E Dpar;L p;n;E pq. Let B be a partition of R2 into squares of side length 2. Since ř BPB 1B  1, convolving both sides with wBp0; ´2q;E and using convolution properties of wB;E (Lemma 2.2.1) shows that ř BPB wB;E ÀE 1. Let f be Fourier supported in 1 n . Then }f }pp  ¸ BPB }f }pLppBq ď Dpar;L p;n;E pqp ¸ BPB p ¸ JPPpr 0;1sq }f1 J;n }2 LppwB;E q qp{2; Using Minkowski (and that p ě 2) to interchange the l2 J and lpB bounds this by p ¸ JPPpr 0;1sq }f1 J;n }2 Lppř BPBwB;E q qp{2: Finally using that ř BPB wB;E ÀE 1 then shows that Dpar;G p;n pq À E Dpar;L p;n;E pq where here we have used that p ď 6 to remove the dependence on p.140 From Lemma 4.1.4, to show the reverse inequality, it suffices to show Dpar;L p;n;E p; h q À n;E Dpar;G p; 10 n p; h q: Let f be Fourier supported in 1 n . We have }f }2 LppBq ÀE }f1r0; s;n }2 LppwB;E q } B f1r; 1´s;n }2 p } f1r1´; 1s;n }2 LppwB;E q : Since n{2 1 ď 10 n, the Fourier transform of B f1r; 1´s;n is supported in 1 10 n . Observe that for J P Ppr 0; 1sq , pB f1r; 1´s;n q1 J; 10 n  $''''''''''''&''''''''''''% pB f1 Jr ;n q1 J; 10 n if J  r 0;  spB f1 J;n q1 J; 10 n p B f1 Jr ;n q1 J; 10 n if J  r ; 2s ř IPt Jℓ;J;J ru pB f1 I;n q1 J; 10 n if J P Ppr 2; 1  2sq pB f1 Jℓ;n q1 J; 10 n p B f1 J;n q1 J; 10 n if J  r 1  2; 1  spB f1 Jℓ;n q1 J; 10 n if J  r 1  ; 1s (4.3) where Jℓ and Jr are the intervals to the left and right of J, respectively. Applying the de nition of Dpar;G p; 10 n pq gives }B f1r; 1´s;n }2 p ď Dpar;G p; 10 n pq2 ¸ JPPpr 0;1sq }p B f1r; 1´s;n q1 J; 10 n }2 p : Using (4.3) and the observations that 1 J; 10 n phq is a parallelogram and Fourier restriction to a parallelogram is bounded in Lp, the above is À Dpar;G p; 10 n pq2 ¸ JPPpr 0;1sq }f1 J;n }2 LppBq where we have removed the dependence on p because p ď 6. Since B ÀE wB;E , it then follows that Dpar;L p;n;E pq À E Dpar;G p; 10 n pq. This completes the proof of Lemma 4.1.5. Corollary 4.1.6 (Local equivalence for n  m, xed E). For 2 ď p ď 6 and n  m, Dpar;L p;n;E pq  n;m;E Dpar;L p;m;E pq: Proof. From Lemma 4.1.5, Dpar;L p;n;E pq  n;E Dpar;G p;n pq. From Lemma 4.1.4, Dpar;G p;n pq  n;m Dpar;G p;m pq. Applying Lemma 4.1.5 again then completes the proof of Corollary 4.1.6. 141 Corollary 4.1.7 (Local equivalence for n  m, E1  E2). For 2 ď p ď 6, n  m, E1  E2, Dpar;L p;n;E 1 pq  n;m;E 1;E 2 Dpar;L p;m;E 2 pq: Proof. Corollary 4.1.6 and Lemma 4.1.5 gives that Dpar;L p;n;E 1 pq  n;m;E 1 Dpar;L p;m;E 1 pq  m;E 1 Dpar;G p;m pq  m;E 2 Dpar;L p;m;E 2 pq which completes the proof of Corollary 4.1.7. 4.1.3 Equivalence of decoupling constants We have the following lemma which will help us relate the parallelogram decoupling constants from De nition 4.1.2 to the decoupling constants we have de ned in De nition 4.1.1. Lemma 4.1.8. For n ě 2, we have 1 J; 1 Ă J;n Ă 1 J; 2n : Proof. For s P J, recall from (2.35) that |s2  LJ psq| ď 2{4: Since n ě 2, for s P J, LJ psq 2 2 ď s2 n 2 2 ď LJ psq n 2 which completes the proof of Lemma 4.1.8. 
Like the parallelogram decoupling constant equivalence, we have the following three equivalences. The purpose of introducing the parallelogram decoupling constants was be-cause Fourier restriction to J;n is not a bounded operator on Lp, however, Fourier restriction to 1 J;n is a bounded operator on Lp. Lemma 4.1.9 (Local-global equivalence for the same n). For 2 ď p ď 6 and n ě 2, DLp;n;E pq  n;E DGp;n pq: 142 Proof. Since n ě 2, Lemma 4.1.8 implies 1 1 Ă n Ă 1 2n and hence Dpar;L p; 1;E pq ď DLp;n;E pq ď Dpar;L p; 2n;E pq À n;E Dpar;L p; 1;E pq (4.4) where the last inequality we have used Corollary 4.1.6. Using similar reasoning and Lemma 4.1.4 gives Dpar;G p; 1 pq ď DGp;n pq ď Dpar;G p; 2n pq À n;E Dpar;G p; 1 pq: (4.5) Finally combining these two estimates and Lemma 4.1.5 imply DLp;n;E pq  n;E DGp;n pq which completes the proof of Lemma 4.1.9. Corollary 4.1.10 (Global equivalence for n  m). For 2 ď p ď 6 and n  m with n; m ě 2, DGp;n pq  n;m DGp;m pq: Proof. It suffices to show that for each n ě 2, DGp;n pq  n Dpar;G p; 1 pq. But this exactly was shown in (4.5). Corollary 4.1.11 (Local equivalence for n  m, xed E). For 2 ď p ď 6 and n  m with n; m ě 2, DLp;n;E pq  n;m;E DLp;m;E pq: Proof. For each n ě 2, it is enough to show that DLp;n;E pq  n Dpar;L p; 1;E pq but this is what was shown in (4.4). Corollary 4.1.12 (Local equivalence for n  m, E1  E2). For 2 ď p ď 6 and n  m with n; m ě 2, DLp;n;E 1 pq  n;m;E 1;E 2 DLp;m;E 2 pq: Proof. From Corollary 4.1.11, it is enough to show that DLp;m;E 1 pq  m;E 1;E 2 DLp;m;E 2 pq. But this follows immediately from Lemma 4.1.9. Note that pDp;E pq de ned in De nition 2.3.3 is the same as Dpar;L p; 10 ;E pq in this section. Therefore we have shown that for 2 ď p ď 6, all the following constants are equivalent (up to constants that depend on all parameters of the constants involved except for p and ): 143 (a) Extension operator based, spatially localized: Dp;E pq, de ned in (2.1), used in [BD17] rDp;E pq, de ned in (2.2) Dppq, de ned in De nition 2.3.3 (b) Fourier based, spatially localized: pDp;E pq, de ned in De nition 2.3.3, equal to Dpar;L p; 10 ;E pq DLp;n;E pq, de ned in De nition 4.1.1 Dpar;L p;n;E pq, de ned in De nition 4.1.2 (c) Fourier based, global: DGp;n pq, de ned in De nition 4.1.1, used in [BD15] Dpar;G p;n pq, de ned in De nition 4.1.2 That is, take any number of the eight above decoupling constants, for example, Dp;E 1 pq, Dpar;G p;n pq, Dppq, and DLp;m;E 2 pq (also assume n; m ě 2). Then our results show that for 2 ď p ď 6, Dp;E 1 pq  n;E 1 Dpar;G p;n pq  n Dppq  m;E 2 DLp;m;E 2 pq: 4.2 Monotonicity of the parabola decoupling constant One immediate application of the results in Section 4.1, is that we can show that the de-coupling constant, however de ned in the list above is essentially a decreasing function of .The way we show Corollary 4.2.2 is not the most efficient way to show this for a particular decoupling constant. If one is willing to work with weight functions wB;E , rwB;E , B directly one can show the applicable monotonicity result using a calculation that is similar to the proof of parabolic rescaling (Section 2.4). However, having done the heavy lifting in Section 4.1 in showing many decoupling constants are equivalent we present a nice application of our work. This application of the equivalence of decoupling constants shows the power of such an 144 equivalence since often certain calculations are easier with some decoupling constants while others are much more tedious. 
The main proposition we claim is the following:

Proposition 4.2.1. For $N \in \mathbb{N}$ and $2 \le p \le 6$, we have
\[
D^{G}_{p,2}\big(\tfrac{1}{N}\big) \;\le\; D^{G}_{p,2}\big(\tfrac{1}{N+1}\big),
\]
where $D^{G}_{p,n}(\delta)$ is as in Definition 4.1.1.

Proof. This proof is a change of variables. To emphasize the interval and the scale $\sigma$, instead of using the notation $\theta_{J,2}$ from (4.1), we will let $T(\sigma, I)$ be the piece of the $\sigma^{2}$-tube living above $I \subset [0,1]$. That is,
\[
T(\sigma, I) := \{ (s, s^{2} + t) : s \in I,\ |t| \le \sigma^{2} \}.
\]
Suppose $f$ is Fourier supported in the $1/N^{2}$-tube $T(1/N, [0,1])$ of the parabola living above $[0,1]$. We have
\[
f(x) = \int_{T(\frac{1}{N}, [0,1])} \widehat{f}(\xi)\, e(x \cdot \xi)\, d\xi
= \Big(\tfrac{N+1}{N}\Big)^{3} \int_{T(\frac{1}{N+1}, [0, \frac{N}{N+1}])} \widehat{f}\Big(\tfrac{N+1}{N}\eta_{1},\, \tfrac{(N+1)^{2}}{N^{2}}\eta_{2}\Big)\, e\Big(x_{1}\tfrac{N+1}{N}\eta_{1} + x_{2}\tfrac{(N+1)^{2}}{N^{2}}\eta_{2}\Big)\, d\eta .
\]
Therefore
\[
\|f\|_{p} = \Big(\tfrac{N+1}{N}\Big)^{3 - 3/p} \|g_{N}\|_{p}
\tag{4.6}
\]
with
\[
g_{N}(x) := \int_{\mathbb{R}^{2}} \widehat{f}\Big(\tfrac{N+1}{N}\eta_{1},\, \tfrac{(N+1)^{2}}{N^{2}}\eta_{2}\Big)\, 1_{T(\frac{1}{N+1}, [0, \frac{N}{N+1}])}(\eta)\, e(\eta \cdot x)\, d\eta .
\]
Note that $g_{N}$ is Fourier supported in a $1/(N+1)^{2}$-tube of the parabola living above $[0,1]$. Then
\[
\Big(\tfrac{N+1}{N}\Big)^{3 - 3/p} \|g_{N}\|_{p}
\le \Big(\tfrac{N+1}{N}\Big)^{3 - 3/p} D^{G}_{p,2}\big(\tfrac{1}{N+1}\big) \Big( \sum_{0 \le i \le N} \big\| (g_{N})_{T(\frac{1}{N+1}, [\frac{i}{N+1}, \frac{i+1}{N+1}])} \big\|_{p}^{2} \Big)^{1/2}.
\tag{4.7}
\]
For $i < N$, writing $\theta = T(\frac{1}{N+1}, [\frac{i}{N+1}, \frac{i+1}{N+1}])$, we have $\widehat{(g_{N})_{\theta}}(\eta) = \widehat{f}\big(\tfrac{N+1}{N}\eta_{1}, \tfrac{(N+1)^{2}}{N^{2}}\eta_{2}\big)\, 1_{\theta}(\eta)$, and when $i = N$, $(g_{N})_{\theta} = 0$. Undoing the change of variables used to obtain (4.6) gives that (4.7) is equal to
\[
D^{G}_{p,2}\big(\tfrac{1}{N+1}\big) \Big( \sum_{0 \le i \le N-1} \big\| f_{T(\frac{1}{N}, [\frac{i}{N}, \frac{i+1}{N}])} \big\|_{p}^{2} \Big)^{1/2}.
\]
Applying the definition of $D^{G}_{p,2}(1/N)$ then completes the proof of Proposition 4.2.1.
We have }Er0;1sg}4 L4pBq  } Er0;1sg  Er0;1sg}2 L2pBq : 147 Then }Er0;1sg}4 L4pBq À } ¸ 1ďi;j ďN |ij|ď 1 EIi gEIj g}2 L2pBq } ¸ 1ďi;j ďN |ij|ą 1 EIi gEIj g}2 L2pBq : (4.9) We analyze the rst expression in (4.6). We have } ¸ 1ďi;j ďN |ij|ď 1 EIi gEIj g}2 L2pBq ď p ¸ 1ďi;j ďN |ij|ď 1 }EIi g}L4pBq}EIj g}L4pBqq2 À p ¸ IPI }EI g}2 L4pwBq q2 (4.10) where the last inequality is by Cauchy-Schwarz. We now analyze the second term in (4.9). Since 1 B ď 110 B ď 10 B , it suffices to analyze } ¸ 1ďi;j ďN |ij|ą 1 EIi gEIj g}2 L2p10 Bq  ¸ 1ďi;i 1;j;j 1ďN |ij|ą 1;|i1j1|ą 1 ż IiIjIi1Ij1 gp1qgp2qgp3qgp4q ż R2 ep   q 10 B pxq dx d (4.11) where the expression in ep   q is p1  2  3 4qx1 p 21  22  23 24 qx2: We claim the integral in  above is equal to 0 if |i  i1| ą 1 or |j  j1| ą 1 and so we can add the conditions that |i  i1| ď 1 and |j  j1| ď 1 to the sum in (4.11). We only show that case when |i  i1| ą 1, the case when |j  j1| ą 1 is similar. Since 10 B has Fourier support on Bp0; 1{p 10 Rqq , for the integral in (4.11) to not be 0, it is necessary that |1  2  3 4| ď 110 R |21  22  23 24 | ď 110 R (4.12) for all 1 P Ii, 2 P Ij , 3 P Ii1 , and 4 P Ij1 and therefore we can insert this condition into the integral in the -variables. Since |i  j| ą 1, |i1  j1| ą 1, and |i  i1| ą 1, we have |1  2| ą R1{2, |3  4| ą R1{2, and |1  3| ą R1{2, respectively. We claim that these inequalities are incompatible with (4.12). 148 Lemma 4.3.3. Suppose 0 ď 1;  2;  3;  4 ď 1. The system |1  2  3 4| ď 110 R (4.13) |21  22  23 24 | ď 110 R (4.14) |3  4| ą 1 R1{2 (4.15) |1  3| ą 1 R1{2 (4.16) has no solution. Proof. Suppose there was a solution to the above system of inequalities. Note that 21  22  23 24  p 1  2  3 4qp 1 2q p 3  4qp 1 2  3  4q and so combining this with (4.13), (4.14), (4.15), the triangle inequality, and that i P r 0; 1s gives 1 R1{2 |1 2  3  4| ď 110 R 2|1  2  3 4| ď 310 R : Therefore |1 2  3  4| ď 310 R1{2 : (4.17) Since we are not given the relative positions of the i, we have the following four cases. piq 3 ą 1 and 2 ą 4: Using (4.13), positivity of 3  1 and 2  4, and (4.16) gives 110 R ě | 3  1 4  2|  | 3  1| | 4  2| ě | 3  1| ą 1 R1{2 which is impossible. pii q 1 ą 3 and 4 ą 2: Using (4.13), positivity of 1  3 and 4  2, and (4.16) gives 110 R ě | 1  2  3 4|  | 1  3| | 4  2| ě | 1  3| ą 1 R1{2 which is impossible. 149 piii q 3 ą 1 and 4 ą 2: Using (4.17), positivity of 3  1 and 4  2, and (4.16) gives 310 R1{2 ě | 3  1 4  2|  | 3  1| | 4  2| ě | 3  1| ą 1 R1{2 which is impossible. piv q 1 ą 3 and 2 ą 4: Using (4.17), positivity of 1  3 and 2  4, and (4.16) gives 310 R1{2 ě | 1  3 2  4|  | 1  3| | 2  4| ě | 1  3| ą 1 R1{2 which is impossible. Thus we have shown the inequalities (4.13)-(4.16) to be incompatible. This completes the proof of Lemma 4.3.3. Therefore Lemma 4.3.3 implies (4.11) is ď ¸ 1ďi;i 1;j;j 1ďN |ij|ą 1;|i1j1|ą 1 |ii1|ď 1;|jj1|ď 1 ż R2 |EIi gEIj gEIi1 gEIj1 g|10 B dx À ¸ 1ďi;j ďN ż R2 |EIi g|2|EIj g|2wB dx ď p ¸ IPI }EI g}2 L4pwBq q2 (4.18) where the second inequality is by Cauchy-Schwarz and that 10 B À w10 B À wB and the last inequality is by H older's inequality. Combining (4.9), (4.10), and (4.18) then proves (4.8). This completes the proof of Proposition 4.3.1. 4.4 Small ball l2 decoupling for the paraboloid Decoupling for the paraboloid as stated in 1.2 has an LppBq where B is a cube in Rn of side length 2. 
This is a natural scale since we are decoupling into frequency cubes in r0; 1sn1 of side length  and hence the wavepackets that arise are of size 1    1 2.One can ask perhaps what happens in l2 decoupling for the paraboloid when we consider B to be a ball of radius r with 1 ď r ă 2. The following result was communicated to the author by Hong Wang in January 2018. This a purely expository chapter and the author 150 claims no originality in the argument below. All errors are my own misunderstanding of her argument. For Q Ă r 0; 1sn1 and g : r0; 1sn1 Ñ C, de ne the extension operator pEQgqp xq : ż Q gpqep1x1    n1xn1 | |2xnq d  ż Q gpqep  x | |2xnq d: Also de ne Eg : Er0;1sn´1 g. We will ignore any weight functions or integrality issues that may arise in this analysis and freely make use of the uncertainty principle. Given a cube Q,let PpQq be the partition of Q into cubes of side length .Fix 1 ď r ă 2 and 2 ď p ď 2pn1q n1 , let Dpp; r q be the best constant such that }Eg}LppBr q ď Dpp; r qp ¸ QPPpr 0;1sn´1q }EQg}2 LppBrq q1{2 (4.19) for all g : r0; 1sn1 Ñ C and all cubes Br Ă Rn of side length r. Note that the standard Bourgain-Demeter decoupling for the paraboloid [BD15] gives that 1 À Dpp; 2q À " ". We claim the following result. Proposition 4.4.1. For 1 ď r ă 2 and 2 ď p ď 2pn1q n1 , p 1 r 2qp 121 pqp n1q À Dpp; r q À " p 1 r 2qp 121 pqp n1q " : In particular, Proposition 4.4.1 implies that at spatial scales smaller than 2, to decouple we must lose some negative power of . For the lower bound, we exhibit a speci c g (in particular g  1r0; r{2sn´1 ) and compute both sides of (4.19). For the upper bound, we reduce the problem using the uncertainty principle to be a problem about the Fourier transform. 4.4.1 The lower bound Without loss of generality we may assume that Br  r 0;  rsn. Let g : 1r0; r{2sn´1 (if Br is a different cube in Rn of side length r, then we can multiply g by an appropriate phase). We then have pE1r0; r{2sn´1 qp xq  ż r0; r{2sn´1 ep  x | |2xnq d  pn1qr{2 ż r0;1sn´1 ep  r{2x r||2xnq d: 151 Another change of variables then gives }E1r0; r{2sn´1 }LppBr q   r 2 pn1 n`1 p q}E1r0;1sn´1 }Lppr 0; ´r{2sn´1r 0;1sq : (4.20) Since |E1r0;1sn´1 | is essentially constant on 1 1    1 boxes, for x P r 0;  r{2sn1 r 0; 1s we can replace |p E1r0;1sn´1 qp xq| by |p E1r0;1sn´1 qp x; 0q| and hence (4.20) is essentially the same as  r 2 pn1 n`1 p q}q1r0;1sn´1 }Lppr 0; ´r{2sn´1q   r 2 pn1 n`1 p q}q1r0;1s}n1 Lppr 0; ´r{2sq : The same computations give that the right hand side of (4.19) is p ¸ QPP pr 0;1sn´1q }EQ1r0; r{2sn´1 }2 LppBr qq1{2  p ¸ QPP pr 0; r{2sn´1q }E1Q}2 LppBr qq1{2   r 2 pn1 n`1 p qp ¸ QPP1´r{2 pr 0;1sn´1q }E1Q}2 Lppr 0; ´r{2sn´1r 0;1sq q1{2: Note that here we have implicitly used that r ă 2 since this implies 1r{2 ă 1. From the uncertainty principle, this is once again essentially  r 2 pn1 n`1 p qp ¸ QPP1´r{2 pr 0;1sn´1q }q1Q}2 Lppr 0; ´r{2sn´1qq1{2   r 2 pn1 n`1 p qp 1 r 2 q n´12 }q1r0; 1´r{2sn´1 }Lppr 0; ´r{2sn´1q   r 2 pn1 n`1 p qp 1 r 2 q n´12 }q1r0; 1´r{2s}n1 Lppr 0; ´r{2sq   r 2 pn1 n`1 p qp 1 r 2 q n´12 p 1 r 2 qp 1 1 p qp n1q}q1r0;1s}n1 Lppr 0; 1´r sq : Therefore sup g;B r }Eg}LppBr q př QPP pr 0;1sn´1q }EQg}2 LppBr qq1{2 ě p 1 r 2 qp 12  1 p qp n1qp}q1r0;1s}Lppr 0; ´r{2sq }q1r0;1s}Lppr 0; 1´r sq qn1: Since r{2 ą r  1, the ratio of Lp norms is ě 1 which then proves the lower bound of Proposition 4.4.1. 
4.4.2 The upper bound As in the lower bound we will apply a (slightly different) change of variables and the uncer-tainty principle to transform the problem into a problem about the Fourier transform. We 152 want to show that }Eg}LppBr q À" p 1 r 2qp 121 pqp n1q " p ¸ QPPpr 0;1sn´1q }EQg}2 LppBrq q1{2 for all g : r0; 1sn1 Ñ C and all cubes Br Ă Rn of side length 2. Since 2 ď p ď 2pn1q n1 ,decoupling for the paraboloid gives that }Eg}LppBr q À" "p ¸ Q1PPr{2pr 0;1sn´1q }EQ1 g}2 LppBrq q1{2: Therefore it remains to show that for each Q1 P Pr{2 pr 0; 1sn1q, }EQ1 g}LppBr q À p 1 r 2qp 121 pqp n1q p ¸ QPPpQ1q }EQg}2 LppBrq q1{2: (4.21) Without loss of generality (in particular ignoring issues with weights), we may assume that Q1  r 0;  r{2sn1. Let gpxq : gpx q. A change of variables gives that pEr0; r{2sn´1 gqp xq  ż r0; r{2sn´1 gpqep  x | |2xnq d  n1 ż r0; ´1`r{2sn´1 gpqep  x | |22xnq d and hence }Er0; r{2sn´1 g}LppBr q  pn1q n`1 p }Er0; ´1r{2sn´1 g}Lppr 0; ´r1sn´1r 0; ´r2sq : (4.22) From the uncertainty principle, |p Er0; ´1r{2sn´1 gqp xq| is essentially constant on 1r{2    1r{2 2r boxes. Therefore for x P r 0;  r1sn1 r 0;  r2s, |p Er0; ´1r{2sn´1 gqp xq| is essentially equal to |p Er0; ´1r{2sn´1 gqp x; 0q| and hence (4.22) becomes essentially equal to  2´rp pn1q n`1 p } ż r0; ´1`r{2sn´1 gpqep  yq d }Lpy pr 0; ´r`1sn´1q: The same reasoning then shows that p ¸ QPPpr 0; r{2sn´1q }EQg}2 LppBrq q1{2   2´rp pn1q n`1 p p ¸ QPP1pr 0; ´1`r{2sn´1q } ż Q gpqep  yq d }2 Lpypr 0; ´r`1sn´1q q1{2: Therefore since r  1 ě 0, (4.21) then follows from the following lemma and parallel decou-pling. The argument below basically is from Lecture 2 of Larry Guth's lectures notes on decoupling [Gut18]. 153 Lemma 4.4.2. Suppose pf is supported on r0; N sd. Then }f }Lppr 0;1sdq À N dp 12  1 pq p ¸ QPP1pr 0;N sdq }fQ}2 Lppr 0;1sdq q1{2 where here xfQ  pf 1Q. To prove Lemma 4.4.2, we rst recall Bernstein's inequality (and we ignore weight func-tions). Lemma 4.4.3. Suppose pf is supported on a cube of side length 1. Then for any cube B of side length 1, }f }L8pBq À } f }L1pBq.Proof of Lemma 4.4.2. Since f  ř QPP1pr 0;N sdq fQ, almost orthogonality and ignoring weights gives that essentially }f }2 L2pr 0;1sdq À ¸ QPP1pr 0;N sdq }fQ}2 L2pr 0;1sdq : Observe that ż r0;1sd |f |p ď } f }p2 L8pr 0;1sdq ż r0;1sd |f |2 À p ¸ QPP1pr 0;N sdq }fQ}L8pr 0;1sdqqp2 ¸ QPP1pr 0;N sdq }fQ}2 L2pr 0;1sdq : H older and Bernstein then bound the above by N d pp´2q 2 p ¸ QPP1pr 0;N sdq }fQ}2 L8pr 0;1sdq qp´22 p ¸ QPP1pr 0;N sdq }fQ}2 L2pr 0;1sdq qÀ N d pp´2q 2 p ¸ QPP1pr 0;N sdq }fQ}2 Lppr 0;1sdq qp{2: Taking 1 {p powers then completes the proof of Lemma 4.4.2. 154 REFERENCES [BBG18] Jonathan Bennett, Neal Bez, Susana Guti errez, and Sanghyuk Lee. \Estimates for the kinetic transport equation in hyperbolic Sobolev spaces." J. Math. Pures Appl. (9) , 114 :1{28, 2018. [BCT06] Jonathan Bennett, Anthony Carbery, and Terence Tao. \On the multilinear re-striction and Kakeya conjectures." Acta Math. , 196 (2):261{302, 2006. [BD15] Jean Bourgain and Ciprian Demeter. \The proof of the l2 decoupling conjecture." Ann. of Math. (2) , 182 (1):351{389, 2015. [BD16] Jean Bourgain and Ciprian Demeter. \Mean value estimates for Weyl sums in two dimensions." J. Lond. Math. Soc. (2) , 94 (3):814{838, 2016. [BD17] Jean Bourgain and Ciprian Demeter. \A study guide for the l2 decoupling theo-rem." Chin. Ann. Math. Ser. B , 38 (1):173{200, 2017. [BDG16] Jean Bourgain, Ciprian Demeter, and Larry Guth. 
"Proof of the main conjecture in Vinogradov's mean value theorem for degrees higher than three." Ann. of Math. (2), 184(2):633–682, 2016.
[BDG17] Jean Bourgain, Ciprian Demeter, and Shaoming Guo. "Sharp bounds for the cubic Parsell-Vinogradov system in two dimensions." Adv. Math., 320:827–875, 2017.
[BG11] Jean Bourgain and Larry Guth. "Bounds on oscillatory integral operators based on multilinear estimates." Geom. Funct. Anal., 21(6):1239–1295, 2011.
[BHS18] David Beltran, Jonathan Hickman, and Christopher D. Sogge. "Variable coefficient Wolff-type inequalities and sharp local smoothing estimates for wave equations on manifolds." arXiv:1801.06910, 2018.
[Bou93] Jean Bourgain. "Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations. I. Schrödinger equations." Geom. Funct. Anal., 3(2):107–156, 1993.
[Bou13] Jean Bourgain. "Moment inequalities for trigonometric polynomials with spectrum in curved hypersurfaces." Israel J. Math., 193(1):441–458, 2013.
[Bou17a] Jean Bourgain. "Decoupling, exponential sums and the Riemann zeta function." J. Amer. Math. Soc., 30(1):205–224, 2017.
[Bou17b] Jean Bourgain. "Decoupling inequalities and some mean-value theorems." J. Anal. Math., 133:313–334, 2017.
[BW18] Jean Bourgain and Nigel Watt. "Decoupling for perturbed cones and mean square of |ζ(1/2 + it)|." International Mathematics Research Notices, 2018(17):5219–5296, 2018.
[DGG17] Yu Deng, Pierre Germain, and Larry Guth. "Strichartz estimates for the Schrödinger equation on irrational tori." J. Funct. Anal., 273(9):2846–2869, 2017.
[DGL17] Xiumin Du, Larry Guth, and Xiaochun Li. "A sharp Schrödinger maximal estimate in R^2." Ann. of Math. (2), 186(2):607–640, 2017.
[DGL18] Xiumin Du, Larry Guth, Xiaochun Li, and Ruixiang Zhang. "Pointwise convergence of Schrödinger solutions and multilinear refined Strichartz estimates." Forum Math. Sigma, 6:e14, 18, 2018.
[DGO18] Xiumin Du, Larry Guth, Yumeng Ou, Hong Wang, Bobby Wilson, and Ruixiang Zhang. "Weighted restriction estimates and application to Falconer distance set problem." arXiv:1802.10186, 2018.
[DZ19] Xiumin Du and Ruixiang Zhang. "Sharp L2 estimates of the Schrödinger maximal function in higher dimensions." Annals of Mathematics, 189(3):837–861, 2019.
[For02] Kevin Ford. "Vinogradov's integral and bounds for the Riemann zeta function." Proc. London Math. Soc. (3), 85(3):565–633, 2002.
[FSW18] Chenjie Fan, Gigliola Staffilani, Hong Wang, and Bobby Wilson. "On a bilinear Strichartz estimate on irrational tori." Anal. PDE, 11(4):919–944, 2018.
[GIO18] Larry Guth, Alex Iosevich, Yumeng Ou, and Hong Wang. "On Falconer's distance set problem in the plane." arXiv:1808.09346, 2018.
[GS09] Gustavo Garrigós and Andreas Seeger. "On plate decompositions of cone multipliers." Proc. Edinb. Math. Soc. (2), 52(3):631–651, 2009.
[GS10] Gustavo Garrigós and Andreas Seeger. "A mixed norm variant of Wolff's inequality for paraboloids." In Harmonic analysis and partial differential equations, volume 505 of Contemp. Math., pp. 179–197. Amer. Math. Soc., Providence, RI, 2010.
[Guo17] Shaoming Guo. "On a binary system of Prendiville: the cubic case." arXiv:1701.06732, 2017.
[Gut18] Larry Guth. "Lecture Notes for Math 118, Topics in Analysis: Decoupling." 2018.
[GZ18a] Shaoming Guo and Ruixiang Zhang. "On integer solutions of Parsell-Vinogradov systems." arXiv:1804.02488, to appear in Inventiones Mathematicae, 2018.
[GZ18b] Shaoming Guo and Pavel Zorin-Kranich.
"Decoupling for moment manifolds associated to Arkhipov–Chubarikov–Karatsuba systems." arXiv:1811.02207, 2018.
[Hea15] D. R. Heath-Brown. "The Cubic Case of Vinogradov's Mean Value Theorem: A Simplified Approach to Wooley's 'Efficient Congruencing'." arXiv:1512.03272, 2015.
[Hea17] D. R. Heath-Brown. "A new kth derivative estimate for exponential sums via Vinogradov's mean value." Tr. Mat. Inst. Steklova, 296 (Analiticheskaya i Kombinatornaya Teoriya Chisel):95–110, 2017.
[Hor90] Lars Hörmander. The analysis of linear partial differential operators. I, volume 256 of Fundamental Principles of Mathematical Sciences. Springer-Verlag, Berlin, second edition, 1990. Distribution theory and Fourier analysis.
[Joh02] Warren P. Johnson. "The curious history of Faà di Bruno's formula." Amer. Math. Monthly, 109(3):217–234, 2002.
[Lee16] Jungjin Lee. "A trilinear approach to square function and local smoothing estimates for the wave operator." arXiv:1607.08426, 2016.
[Lew15] Mark Lewko. "The Bourgain-Demeter-Guth breakthrough and the Riemann zeta function?" 2015.
[LP06] Izabella Łaba and Malabika Pramanik. "Wolff's inequality for hypersurfaces." Collect. Math., (Vol. Extra):293–326, 2006.
[LW02] Izabella Łaba and Thomas Wolff. "A local smoothing estimate in higher dimensions." J. Anal. Math., 88:149–171, 2002. Dedicated to the memory of Tom Wolff.
[Pie19] Lillian B. Pierce. "The Vinogradov Mean Value Theorem [after Wooley, and Bourgain, Demeter and Guth]." Astérisque, Exposés Bourbaki, 407:479–564, 2019.
[Tao15] Terence Tao. "The two-dimensional case of the Bourgain-Demeter-Guth proof of the Vinogradov main conjecture." terrytao.wordpress.com, 2015.
[Vin35] Ivan M. Vinogradov. "New estimates for Weyl sums." Dokl. Akad. Nauk SSSR, 8:195–198, 1935.
[Wol00] Thomas Wolff. "Local smoothing type estimates on Lp for large p." Geom. Funct. Anal., 10(5):1237–1288, 2000.
[Woo12] Trevor D. Wooley. "Vinogradov's mean value theorem via efficient congruencing." Ann. of Math. (2), 175(3):1575–1627, 2012.
[Woo13] Trevor D. Wooley. "Vinogradov's mean value theorem via efficient congruencing, II." Duke Math. J., 162(4):673–730, 2013.
[Woo15] Trevor D. Wooley. "Multigrade efficient congruencing and Vinogradov's mean value theorem." Proc. Lond. Math. Soc. (3), 111(3):519–560, 2015.
[Woo16] Trevor D. Wooley. "The cubic case of the main conjecture in Vinogradov's mean value theorem." Adv. Math., 294:532–561, 2016.
[Woo17] Trevor D. Wooley. "Approximating the main conjecture in Vinogradov's mean value theorem." Mathematika, 63(1):292–350, 2017.
[Woo19] Trevor D. Wooley. "Nested efficient congruencing and relatives of Vinogradov's mean value theorem." Proceedings of the London Mathematical Society, 118(4):942–1016, 2019.
T. M. O'Donovan, "Direct solutions of M/G/1 priority queueing models." Revue française d'automatique, d'informatique et de recherche opérationnelle. Recherche opérationnelle, tome 10, n° 2 (février 1976), p. 107–111. © AFCET, 1976, tous droits réservés. (Digitized article; any copy must retain this copyright notice.)

DIRECT SOLUTIONS OF M/G/1 PRIORITY QUEUEING MODELS

by T. M. O'Donovan (Department of Statistics, University College, Cork, Ireland; received April 1975)

Abstract. This paper presents a general method for deriving expected conditional response times in priority queueing models. The method consists of applying Kleinrock's conservation law to subsystems of jobs with priority over all other jobs. The method is illustrated for the following queue disciplines: preemptive resume shortest processing time, non-preemptive shortest processing time, and shortest remaining processing time.

Three well-known queueing models are considered in which priority is assigned to jobs on the basis of their processing times. It is shown that the average waiting times in these models are easily evaluated by applying a conservation law to a subsystem of jobs.

Mathematical models of priority queues have been widely studied (see Jaiswal [3]). This paper is concerned with priority queues in which priority is assigned to jobs on the basis of their processing time requirements. Of these systems, the Non-preemptive Shortest Processing Time system is most widely used. In this system jobs are served to completion; when a job is to be selected from among those waiting, the one with the shortest processing time is chosen.

In the Preemptive Resume Shortest Processing Time system, an arriving job will preempt the job in service if and only if the processing time of the new arrival is less than the total processing time of the job then in service. Partially completed jobs can be removed from the processor and returned at a later time without waste of time or of work already done.

In the Shortest Remaining Processing Time system, an arriving job will preempt the job in service if and only if the processing time of the new arrival is less than the remaining processing time of the job then in service. When a job is to be selected from among those waiting, the one with the lowest remaining processing time is selected.

The expected conditional waiting times in M/G/1 models under these queue disciplines were derived by Phipps [6], Cohen [1], and Conway, Maxwell and Miller [2], respectively, by first evaluating this characteristic in models with a finite number of priority levels and then letting the number of levels become infinite. In the case of the last system, Schrage and Miller [7] have given a direct derivation of this characteristic using a complicated busy period argument.
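Before moving on to the derivations, the difference between the two preemptive rules just described can be made concrete with a small sketch. This is purely illustrative and not part of the original paper; the function names and the example numbers are my own.

# Illustrative sketch (not from the paper): the two preemption rules described above.
# 'new_total' is the processing time of the arriving job; 'in_service_total' and
# 'in_service_remaining' describe the job currently on the processor.

preempts_prspt <- function(new_total, in_service_total) {
  # Preemptive Resume SPT: compare against the TOTAL processing time of the job in service
  new_total < in_service_total
}

preempts_srpt <- function(new_total, in_service_remaining) {
  # SRPT: compare against the REMAINING processing time of the job in service
  new_total < in_service_remaining
}

# A job of total size 5 that has already received 4 units of service:
preempts_prspt(new_total = 3, in_service_total = 5)      # TRUE  (3 < 5)
preempts_srpt(new_total = 3, in_service_remaining = 1)   # FALSE (3 > 1)

The same arrival can therefore preempt under preemptive resume SPT but not under SRPT, which is the distinction the two disciplines turn on.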
Here it is shown that, for each of these models, this characteristic is easily obtained directly by applying Kleinrock's Conservation Law to a subsystem of jobs.

We consider M/G/1 queueing systems in which jobs arrive at rate λ and the processing times are independently sampled from a distribution having distribution function F(·). At each epoch, a job in the system is either waiting for service, being served, or (under queue disciplines which permit interrupting a job in service before it is completed) in limbo (see Wolff [8]). The waiting time of a job is the time from the epoch the job arrives until the epoch its service begins. Let W(t) be the expected waiting time of a job whose processing time requirement is t units. Let 1/μ and m₂ be the first and second moments of the processing time distribution. Define

ρ = λ/μ,  V = ½ λ m₂,  λ(t) = λ F(t),  m(t) = (1/F(t)) ∫₀ᵗ x dF(x),  ρ(t) = λ(t) m(t).

A CONSERVATION LAW

Kleinrock [4] has proved a Conservation Law for queueing systems subject to the following restrictions:
1. All jobs remain in the system until completely serviced.
2. The single service facility is always busy if there are any jobs in the system.
3. Preemption, if it occurs, is of the preemptive-resume type.

Consider the load on such a system at a given time point, i.e. the total processing time yet to be allocated to all the jobs in the system. It is obvious that this load is independent of queue discipline. Thus L, the expected load on the system at a random time point, is also independent of queue discipline. The expected load on the system at a random time point due to the job, if any, in service is well known to be independent of queue discipline and to have the value

V = ½ λ m₂,  (1)

(see Wolff [8]). This holds not only for Poisson arrivals but also for general independent arrivals. Thus the expected load on the system due to jobs waiting or in limbo is also independent of queue discipline. We will evaluate it in a system with a First Come First Served queue discipline. If W is the expected waiting time in such a system, it follows from Little's relation [5] that the expected number of jobs waiting for service at a random time point is λW, and so the expected load on the system due to these jobs is λW/μ. Assuming that the arrivals of jobs to the waiting line form a Poisson process, we have W = L. Thus

L = ρ L + V.  (2)

This is Kleinrock's Conservation Law.

In this paper, we consider M/G/1 queueing systems under the following queue disciplines:
1. Preemptive Resume Shortest Processing Time.
2. Non-preemptive Shortest Processing Time.
3. Shortest Remaining Processing Time.

Let W_i(t) (1 ≤ i ≤ 3) be the value of the expected waiting time W(t) under the corresponding queue discipline. We evaluate W_i(t) as follows. In each system, we define a different subsystem of jobs S_i and let L_i be the expected load on the subsystem. It is immediately obvious that in each system W_i(t) has two components:
a) the expected load L_i;
b) the delay caused by subsequent arrivals, whose processing times are less than t, while this load is being cleared. Such jobs arrive in a Poisson process with rate λ(t) and their expected processing time is m(t). By delay cycle analysis, it follows that

W_i(t) = L_i / (1 − ρ(t)).  (3)

In each system, L_i is evaluated by applying the Conservation Law (2) to the subsystem of jobs S_i.
This is possible because the jobs in the subsystem S_i have priority over all other jobs, and so condition 2 for the Conservation Law is satisfied. Let L_i^{W+L} be the expected load on the subsystem S_i at a random time point due to jobs waiting or in limbo, and let L_{S_i} be the corresponding expected load due to jobs in service. Then

L_i = L_i^{W+L} + L_{S_i}.  (4)

THE PREEMPTIVE RESUME SHORTEST PROCESSING TIME SYSTEM

Let S₁ be the subsystem of jobs whose processing times are at most t. The arrival rate of such jobs is λF(t) and the second moment of their processing time distribution is ∫₀ᵗ x² dF(x)/F(t). Thus by (1), we have

L_{S₁} = ½ λ ∫₀ᵗ x² dF(x).

By the argument used in the derivation of the Conservation Law it follows that L₁^{W+L} = ρ(t) L₁. Thus from (4), L₁ = L_{S₁}/(1 − ρ(t)), and so from (3),

W₁(t) = ½ λ ∫₀ᵗ x² dF(x) / (1 − ρ(t))²

(see Cohen [1]).

THE NON-PREEMPTIVE SHORTEST PROCESSING TIME SYSTEM

Let S₂ be the subsystem of jobs whose processing times are at most t, plus the job, if any, in service. Jobs whose processing times are at most t enter this subsystem on arrival and join the waiting line, if any. A job whose processing time exceeds t can only enter the subsystem if there are no jobs whose processing times are at most t in the subsystem. Such a job begins service as soon as it enters the subsystem and thus never joins the waiting line. From (1), it follows that

L_{S₂} = V.

As before, the contribution to L₂^{W+L} of jobs whose processing times are at most t is ρ(t) L₂. Jobs whose processing times exceed t never join the waiting line, and so their contribution to L₂^{W+L} is zero. Thus from (4), L₂ = V/(1 − ρ(t)), and so from (3),

W₂(t) = V / (1 − ρ(t))²

(see Phipps [6] and Cohen [1]).

THE SHORTEST REMAINING PROCESSING TIME SYSTEM

Let S₃ be the subsystem of jobs whose remaining processing times are at most t. Jobs whose processing times are at most t enter this subsystem on arrival and join the waiting line, if any. A job whose processing time exceeds t will only begin to be served when there are no jobs in the subsystem whose remaining processing times are at most t. When its remaining processing time equals t, it then enters the subsystem and continues in service unless preempted by a subsequent arrival. Thus such a job never joins the waiting line. Since all jobs in the original system eventually join the subsystem, and the time spent in service by a job in S₃ is distributed as a processing time truncated at t, we have from (1) that

L_{S₃} = ½ λ ( ∫₀ᵗ x² dF(x) + t² (1 − F(t)) ).

As before, the contribution to L₃^{W+L} of jobs whose processing times are at most t is ρ(t) L₃. As shown above, jobs whose processing times exceed t never join the waiting line. Since L₃^{W+L} is independent of queue discipline, and when jobs in S₃ are served First Come First Served, jobs whose processing times exceed t will never enter limbo while in S₃, the contribution of such jobs to L₃^{W+L} is zero. Thus from (4), L₃ = ρ(t) L₃ + L_{S₃}, and so from (3),

W₃(t) = ½ λ ( ∫₀ᵗ x² dF(x) + t² (1 − F(t)) ) / (1 − ρ(t))²

(see Schrage and Miller [7]).
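As a small numerical aside before the references (my own illustrative sketch, not part of the paper): the Conservation Law (2) gives L = V/(1 − ρ), and for FCFS with Poisson arrivals W = L, so the mean FCFS waiting time should be close to λ m₂ / (2(1 − ρ)). The sketch below checks this by simulating FCFS waiting times with the Lindley recursion; the Gamma service distribution and all parameter values are arbitrary choices made only for the illustration.

# Illustrative R sketch (not from the paper): Monte Carlo check of W = L = V/(1 - rho)
# for an M/G/1 FCFS queue, with V = lambda * m2 / 2.
set.seed(1)

lambda <- 0.8                        # arrival rate
shape  <- 2; rate <- 2.5             # Gamma service times: mean 0.8, so rho = 0.64
n_jobs <- 2e5

inter_arrivals <- rexp(n_jobs, rate = lambda)
services       <- rgamma(n_jobs, shape = shape, rate = rate)

# Lindley recursion for FCFS waiting times: W_k = max(W_{k-1} + S_{k-1} - A_k, 0)
W <- numeric(n_jobs)
for (k in 2:n_jobs) {
  W[k] <- max(W[k - 1] + services[k - 1] - inter_arrivals[k], 0)
}

m1  <- shape / rate                  # E[S]
m2  <- shape * (shape + 1) / rate^2  # E[S^2]
rho <- lambda * m1
V   <- lambda * m2 / 2

cat("Simulated mean wait :", mean(W[-(1:1000)]), "\n")  # drop a warm-up period
cat("Predicted V/(1-rho) :", V / (1 - rho), "\n")

The two printed values should agree up to Monte Carlo error, which is the content of the Conservation Law specialised to the FCFS case.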
REFERENCES

1. J. Cohen, The Single Server Queue, North-Holland Publishing Company, 1969.
2. R. W. Conway, W. L. Maxwell and L. W. Miller, Theory of Scheduling, Addison-Wesley, 1967.
3. N. K. Jaiswal, Priority Queues, Academic Press, 1968.
4. L. Kleinrock, A Conservation Law for a Wide Class of Queueing Disciplines, Nav. Res. Log. Quart., Vol. 12, 1965, pp. 181-192.
5. J. D. C. Little, A Proof for the Queueing Formula L = λW, Opns. Res., Vol. 9, 1961, pp. 383-387.
6. T. Phipps, Machine Repair as a Waiting Line Problem, Opns. Res., Vol. 4, 1956, pp. 76-86.
7. L. Schrage and L. W. Miller, The Queue M/G/1 with the Shortest Remaining Processing Time Discipline, Opns. Res., Vol. 14, 1966, pp. 670-684.
8. R. W. Wolff, Work-Conserving Priorities, J. Appl. Prob., Vol. 7, 1970, pp. 327-337.
What Is the Spooky Science of Quantum Entanglement?

Learn how particles engage in a mind-boggling phenomenon that forms the backbone of quantum mechanics.

There's a lot we don't know about our universe; in fact, 95% of it remains a mystery to us. That's why scientists continue to probe our understanding of quantum physics. There are many facets to quantum science, but let's zoom in on something that Albert Einstein called "spooky action at a distance": quantum entanglement.

What is Quantum Entanglement?

Quantum science explores and helps explain some of the strangest phenomena in the universe, even shedding light on the mystery of dark matter and dark energy. Quantum is the study of atoms and subatomic particles, and how they interact with each other. It examines the very stuff we, and everything around us, are made of. One of the most far-out phenomena of quantum theory is quantum entanglement, the idea that particles of the same origin, which were once connected, always stay connected.
Even if they separate and move far apart in time and space, they continue to share something beyond a mere bond: they shed their original quantum states and take on a new, united quantum state which they maintain forever. This means if something happens to one particle, it affects all the others with which it's entangled.

A "Spooky" Science

In 1935, Albert Einstein and colleagues first pointed out the "spooky" action of quantum entanglement. Quantum entanglement, however, appeared to conflict with Einstein's theory of special relativity, which postulates that nothing can travel faster than the speed of light and is demonstrated mathematically by the well-known equation E = mc². The ability to instantaneously measure the quantum state of one particle by measuring that of its entangled partner somewhere else in the universe means that that information would have to be delivered faster than light speed. This contradicts Einstein's theory of special relativity. What also remains a mystery is how exactly these particles can interact from such a far distance to share information. Three decades would pass until another scientist, John Stewart Bell, would develop a method to test the phenomenon, which ultimately enabled later scientists to confirm quantum entanglement.

Classical vs. Quantum Physics

If classical physics is life as we know it, the quantum world is like an alternate universe. Classical physics is the force governing an extremely predictable world, where an apple set on a table stays there until something causes it to move again. In the quantum world, not only can the apple end up on Mars, but, hypothetically, it could exist both on the table and on Mars at the same time. It could even be inextricably tied to another apple in some other part of the universe through entanglement. Thus, "reality" as we know it is much more uncertain, with the possibility for many solutions or outcomes to exist, rather than just one.

Quantum entanglement remains a spooky part of our world. Check out the resources below to learn more about how NASA scientists are working to unravel the mysteries of our quantum universe.

Related Resources:
- SEAQUE (Space Entanglement and Annealing Quantum Experiment)
- How Atoms Are Defying Gravity in NASA's Cold Atom Lab
- NASA Demonstrates 'Ultra-Cool' Quantum Sensor for First Time in Space

NASA's Biological and Physical Sciences Division pioneers scientific discovery and enables exploration by using space environments to conduct investigations not possible on Earth. Studying biological and physical phenomena under extreme conditions allows researchers to advance the fundamental scientific knowledge required to go farther and stay longer in space, while also benefitting life on Earth.
Where to go in Minsk on the weekend of June 27–29

This weekend Minsk will be loud: several big parties are taking place across various venues. CityDog.io also looked at what other events are planned for the weekend.

Get to know contemporary art. During this lecture, art historian Maria Karpenkova will explain how contemporary artists use steam and smoke to create installations, and how artists team up with perfumers and engineers to produce multidisciplinary works. (Hosted by the SFERA art space.)

Make your own collage. Artist Olga Strabovskaya will show how to turn magazine clippings into a genuine work of art. A cocktail is included in the price of the masterclass. (Hosted by the DK Gallery.)

Get nostalgic with DJ sets from the 2000s. Minsk residents are invited to a party where the music will be handled by a legend of the local party scene and one of the founders of Belarusian club culture, DJ Paliony. Alexey started his career back in the 1990s and has since opened his own DJ school. The organisers promise a great night with music that is no longer being made, but is still much loved.

Remember a classic of Belarusian literature. On the 83rd anniversary of Yanka Kupala's death, admirers of the poet's work will gather at the Military Cemetery to honour his memory. Kupala's poems will be read at the gathering. (Organised by the Yanka Kupala Museum.)

Admire textile art. Elizaveta Filimonova, a master's student in art history, will lead a tour of the "Textile Bouquet" exhibition at the Centre of Contemporary Arts. The show features both traditional textile techniques and innovative directions of this art form.

Head to a hard-hitting party. A club is throwing a heavy party with loud beats and fast rhythms, with two stages: one inside the club and one outdoors. The line-up features DJs Widik, Di.A and Lizik Kissik.

Support Belarusian metal bands. The weekend ends loudly with a concert of Belarusian metal bands: on Sunday, Dendrobium, Myrein and Porphyria play at TNT Rock Club.

Photo: Unsplash.com. Reprinting of CityDog.io materials is permitted only with the written permission of the editorial team.
MATH337: Changepoint Detection Gaetano Romano 2024-11-01 Table of contents Preface 4 Source files, and attributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1 An Introduction to Changepoint Detection 6 1.1 Introduction to Time Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.1.1 What is a time series? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.1.2 Properties of time series . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.2 Introduction to changepoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.2.1 Types of Changes in Time Series . . . . . . . . . . . . . . . . . . . . . . 14 1.2.2 The biggest data challenge in changepoint detection . . . . . . . . . . . 16 1.3 Detecting one change in mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 1.3.1 The CUSUM statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 1.3.2 Searching for all 𝜏s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 1.3.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 1.3.4 Algorithmic Formulation of the CUSUM Statistic . . . . . . . . . . . . . 20 1.3.5 Example: a large sequence . . . . . . . . . . . . . . . . . . . . . . . . . . 21 1.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 1.4.1 Code the CUSUM algorithm for a unknown change location, based on the pseudocode above. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 1.4.2 Lab 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 2 Controlling the CUSUM and Other Models 25 2.1 The asymptotic distribution of the CUSUM statistics . . . . . . . . . . . . . . . 25 2.1.1 Controlling the max of our cusums . . . . . . . . . . . . . . . . . . . . . 26 2.2 The Likelihood Ratio Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.2.1 Example: Gaussian change-in-mean . . . . . . . . . . . . . . . . . . . . 29 2.3 Towards More General Models . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 2.3.1 Change-in-variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 2.3.2 Change-in-slope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 2.3.3 Revisiting our Simpsons data (again!) . . . . . . . . . . . . . . . . . . . 35 2.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 2.4.1 Workshop 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 2.4.2 Lab 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 2 3 Multiple changepoints 38 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 3.1.1 Real Example: Genomic Data and Neuroblastoma . . . . . . . . . . . . 38 3.1.2 Towards multiple changes . . . . . . . . . . . . . . . . . . . . . . . . . . 39 3.1.3 The cost of a segmentation . . . . . . . . . . . . . . . . . . . . . . . . . 41 3.1.4 The “best” segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.2 Binary Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 3.2.1 Binary Segmentation in action . . . . . . . . . . . . . . . . . . . . . . . 47 3.3 Optimal Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 3.3.1 Optimal partitinioning in action . . . . . . . . . . . . . . . . . . . . . . 52 3.3.2 Neuroblastoma example . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 3.4 Exercises . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 3.4.1 Workshop 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 3.4.2 Lab 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4 PELT, WBS and Penalty choices 59 4.1 Drawbacks of OP and BS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 4.1.1 Quality of the Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . 59 4.1.2 Computational Complexity . . . . . . . . . . . . . . . . . . . . . . . . . 61 4.2 PELT and WBS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 4.2.1 PELT: an efficient solution to OP . . . . . . . . . . . . . . . . . . . . . . 62 4.2.2 WBS: Improving on Binary Segmentation . . . . . . . . . . . . . . . . . 65 4.3 Penalty Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 4.3.1 Example in R: Comparing Penalties with PELT . . . . . . . . . . . . . 67 4.3.2 CROPS: running with multiple penalties . . . . . . . . . . . . . . . . . . 69 4.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 4.4.1 Workshop 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 4.4.2 Lab 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 5 Working with Real Data 77 5.1 Assessing the model fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 5.1.1 Assessing Residuals from a Changepoint Model . . . . . . . . . . . . . . 77 5.1.2 Example: violating heteroschedasticity: . . . . . . . . . . . . . . . . . . 81 5.2 Estimating Other Known Parameters . . . . . . . . . . . . . . . . . . . . . . . . 84 5.2.1 Neuroblastoma Example: The Impact of Mis-specified Variance . . . . . 85 5.2.2 Addressing Mis-specified Variance with Robust Estimators . . . . . . . 86 5.3 Non-Parametric Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 5.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 5.4.1 Workshop exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 References 92 3 Preface These are the notes for MATH337 Changepoint Detection. They were written by Gaetano Romano. The module will introduce you to changepoint detection, detailing some algorithms, developing the basics theoretical foundations, and practicing few real-world scenarios. Across five weeks we will cover the following topics: 1. An introduction to changepoint detection and the CUSUM statistics 2. Controlling the CUSUM and some additional models 3. Dealing with multiple changes 4. PELT, WBS and Penalty selection 5. Working with Real World data. We will be using R as the programming language for this module. If you’re unfamiliar with it, make sure you cover the first three weeks of MATH245. Every week, you are expected to follow two lectures, one workshop, and one computer aided lab. Over the lecture, we will cover the basics concepts of changepoint detection. At the end of each chapter, you will find exercises that will be carried in the workshop and the lab. During the workshop, you will be dealing with computations and details about the methodologies, and, finally, during the lab sessions, you’ll give a go at programming the various algorithms and running real-world examples. You will find the solutions to the exercises on the Moodle page, released weekly. If you cannot access the Moodle page, and you still would like to have these solutions, please get in touch with me. 
4 Source files, and attributions The notes are released as open-source on GitHub under the CC BY-NC 4.0 License. You can access the repository at the following link: point_detection. The materials in this course are based on and share elements with the following resources: • Fearnhead, P., & Fryzlewicz, P. (2022). Detecting a single change-point. arXiv preprint arXiv:2210.07066. • Rebecca Killick’s Introduction to Changepoint Detection - a half-day introductory course on changepoint detection. • Rebecca Killick’s Further Changepoint Topics - an extended course on changepoint de-tection. • Toby Hocking’s Course on Unsupervised Learning, which includes changepoint detection. I would like to express my gratitude to the authors of these resources. In addition, materials were sourced from various academic papers, which are referenced throughout the body of these notes. 5 1 An Introduction to Changepoint Detection 1.1 Introduction to Time Series In this module, we will be dealing with time series. A time series is a sequence of observations recorded over time (or space), where the order of the data points is crucial. 1.1.1 What is a time series? In previous modules, such as Likelihood Inference, we typically dealt with data that was not ordered in a particular way. For example, we might have worked with a sample of independent Gaussian observations, where each observation is drawn randomly from the same distribution. This sample might look like the following: 𝑦𝑖∼𝒩(0, 1), 𝑖= 1, … , 100 Here, 𝑦𝑖represents the 𝑖-th observation, and the assumption is that all observations are inde-pendent and identically distributed (i.i.d.) with a mean of 0 and variance of 1. 6 0 5 10 15 20 −3 −2 −1 0 1 2 Values Frequency Histogram of Random Normal Values In this case, the observations do not have any particular order, and our primary interest may be in estimating parameters such as the mean, variance, or mode of the distribution. This is typical for traditional inference, where the order of observations is not of concern. However, a time series involves a specific order to the data—usually indexed by time, al-though it could also be by space or another sequential dimension. For example, we could assume that the Gaussian sample above is a sequential process, ordered by the time we drew an observation. Each observation corresponds to a specific time point 𝑡. 7 −2 −1 0 1 2 0 25 50 75 100 Time Value Time Series 0 5 10 15 20 count Distribution Formal Notation. In time series analysis, use an index 𝑡to represent time or order on a given set of observations. The time series vector is written as: 𝑦1∶𝑛= (𝑦1, 𝑦2, … , 𝑦𝑛). Here, 𝑛is the total length of the sequence, and 𝑦𝑡represents the observed value at time 𝑡, for 𝑡= 1, 2, … , 𝑛. In our previous example, for instance, 𝑛= 100. Often, we are also interested in subsets of a time series, especially when investigating specific “windows” or “chunks” of the data. A subset of a time series, starting from time 𝑙to time 𝑢, with 𝑠≤𝑢, will be denoted by the following: 𝑦𝑙∶𝑢= (𝑦𝑙, 𝑦𝑙+1, … , 𝑦𝑢), Where if 𝑙= 𝑢, 𝑦𝑙∶𝑙= (𝑦𝑙). 1.1.2 Properties of time series Time series can have various statistical properties that explain how they behave over time, and they can be characterized based on those. Let us look at three examples of time series: 8 A: stationary w.r.t the mean B: Linearly increasing trendC: Piecewise stationary in the mean 0 25 50 75 100 0 25 50 75 100 0 25 50 75 100 0.0 2.5 5.0 7.5 0 4 8 12 −2 −1 0 1 2 Time Value Comparison of Time Series A. 
The leftmost time series, was generated by sampling random normal variables 𝑦𝑡= 𝜖𝑡, 𝜖𝑡∼𝒩(0, 1). In this case: 𝔼(𝑦𝑡) = 𝔼(𝜖𝑡) = 0, Var(𝑦𝑡) = Var(𝜖𝑡) = 1, ∀𝑡∈{1, ..., 100}. Say we generate more observations under the same random process, this will give us still a value that will be centered on 0, with variance 1, e.g. 𝔼(𝑦150) = 0, Var(𝑦150) = 1. B. In the centre time series, the series is generated as: 𝑦𝑡= 𝜖𝑡+ 0.1 ⋅𝑡, 𝜖𝑡∼𝒩(0, 1). This creates a time series with a linear upward trend. Similarly to what done before: 𝔼(𝑦𝑡) = 𝔼(𝜖𝑡) + 𝔼(0.1 ⋅𝑡) = 0.1 ⋅𝑡. Again, saying that we wish to predict the behaviour of the time series at time 150, we know this will be centered on 𝔼(𝑦150) = 1.5 (and with which variance?). C. In the rightmost example, the time series was generated for the first half of the observa-tions as in A., however after 𝑡= 50, a sudden shift occurs. Mathematically: 𝑦𝑡= {𝜖𝑡 for 𝑡≤50 𝜖𝑡+ 5 for 𝑡> 50 , 𝜖𝑡∼𝒩(0, 1) 9 This abrupt change at 𝑡= 50 introduces a piecewise structure to the data, where the data is seen following a distribution prior to the change, 𝑦𝑡∼𝑁(0, 1) up to a certain time point 𝑡= 50, and 𝑦𝑡∼𝑁(5, 1) after. In many examples of this module, we will be studying processes that are piecewise stationary in the mean and variance, as in this example. Stationarity in the mean and variance. A time series is said to be stationary in mean and variance, if its mean and variance are constant over time. That is, for a time series 𝑦1∶𝑛: 𝔼(𝑦𝑡) = 𝜇 and Var(𝑦𝑡) = 𝜎2 ∀∈{1, ..., 𝑛} Similarly, a time series is non-stationary in the mean and variance if those change over time. Piecewise stationary in the mean and variance. A piecewise stationary time series is a special case of a non-stationary time series. We will say that a time series is piecewise stationary in mean and variance if it is stationary within certain segments but has changes in the mean or variance at certain points, known as changepoints. After each changepoint, the series may have a different mean, variance, or both. Back to our example. • In A., we can see, very simply how, in this case 𝔼(𝑦𝑡) = 𝔼(𝜖𝑡) = 0, ∀𝑡∈{1, ..., 100}, therefore our series is stationary in the mean and variance. • In B, we notice that: ∀𝑡1, 𝑡2 ∈{1, ..., 100}, 𝑡1 ≠𝑡2 →𝔼(𝑦𝑡1) ≠𝔼(𝑦𝑡2). We can therefore say that the series is non-stationary in the mean. • In C, 𝐸[𝑦𝑡] = 𝐸[𝜖𝑡] = 0 for 𝑡≤50, and 𝐸[𝑦𝑡] = 𝐸[𝜖𝑡] + 𝐸 = 5 for 𝑡> 50. The series is therefore piecewise stationary in the mean. 1.2 Introduction to changepoints Changepoints are sudden, and often unexpected, shifts in the behavior of a process. They are also known as breakpoints, structural breaks, or regime switches. The detection of change-points is crucial in understanding and responding to changes in various types of time series data. The primary objectives in detecting changepoints include: 10 • Has a change occurred?: Identifying if there is a shift in the data. • If yes, where is the change?: Locating the precise point where the change happened. • What is the difference between the pre and post-change data? This may reveal the type of change, and it could indicate differences in parameter values before and after the change. • How certain are we of the changepoint location?: Assessing the confidence in the detected changepoint. • How many changes have occurred?: Identifying multiple changepoints and analyz-ing each one for similar characteristics. 
Changepoints can be found in a wide range of time series, not limited to physical, biological, industrial, or financial processes, and which objectives to follow depends on the type of the analysis we are carrying. In changepoint detection, there are two main approaches: online and offline analysis. In applications that require online analysis, the data is processed as it arrives, or in small batches. The primary goal of online changepoint detection is to identify changes as quickly as possible, making it crucial in contexts such as process control or intrusion detection, where immediate action is necessary. On the other hand, offline analysis processes all the data at once, typically after it has been fully collected. The aim here is to provide an accurate detection of changepoints, rather than a rapid one. This approach is common in fields like genome analysis or audiology, where the focus is on understanding the structure of the data post-collection. To give few examples: 1. Spectroscopy data. Changepoint detection is useful in spectroscopy data to segment time series of electron emissions into regions of approximately constant intensity, ac-counting for large-scale fluctuations in laser power and beam pointing. Figure 1.1: Electron emission spectroscopy data, Frick, K., Munk, A., & Sieling, H. (2014). 2. ECG: Detecting changes or abnormalities in electrocardiogram (ECG) data can help in diagnosing heart conditions. 11 Figure 1.2: Electrocardiograms (heart monitoring), Fotoohinasab et al, Asilomar conference 2020. 3. Cancer Diagnosis: Identifying breakpoints in DNA copy number data is important for diagnosing some types of cancer, such as neuroblastoma. This is a typical example of an offline analysis. Figure 1.3: DNA copy number data, breakpoints associated with aggressive cancer, Hocking et al, Bioinformatics 2014. 4. Engineering Monitoring: Detecting changes in CPU monitoring data in servers can help in identifying potential issues or failures: this is often analysed in real-time on with online methods, with the aim of detecting an issue as quickly as possible. 12 Figure 1.4: Temperature data from a CPU of an AWS server. Source Romano et al., (2023) 5. Gamma Ray-Burst detection. Efficient online changepoint detection algorithms can detect gamma-ray bursts from gamma-ray counts on satellites in space. These bursts events happen in just a fraction of a second, and are related to supernova implosions. In this module, we will focus exclusively on offline changepoint detection, where we assume that all the data is available for analysis from the start. 13 1.2.1 Types of Changes in Time Series Depending on the model, we could seek for different types of changes in the structure of a time series. Some of the most common types of changes include shifts in mean, variance, and trends in regression. For example, the CPU example above exihibited, in addition to some extreme observations, both changes in mean and variance. • A change in mean occurs when the average level of an otherwise stationary time series shifts from one point to another. −1 0 1 2 3 0 100 200 300 400 500 Time Value Change in Mean In the plot above, the red lines indicate the true mean values of the different segments. • A change in variance refers to a shift in the variability of the time series data, even when the mean remains constant. 14 −2 0 2 4 0 100 200 300 400 500 Time Value Change in Variance 1.2.1.1 3. 
Change in Regression (Slope) A change in regression or slope occurs when the underlying relationship between time, and/or other auxiliary variables, and the values of the time series changes. −2 −1 0 1 2 0 100 200 300 400 500 Time Value Change in Regression (Slope) 15 1.2.2 The biggest data challenge in changepoint detection One of the most widely debated and difficult data challenges in changepoint detection may not be in the field of finance, genetics, or climate science—but rather in television history. Specifically, the question that has plagued critics and fans alike for years is: At which episode did “The Simpsons” start to decline? It’s almost common knowledge that “The Simpsons,” the longest-running and most beloved animated sitcom, experienced a significant drop in quality over time. But pinpointing exactly when this drop occurred is the real challenge. Fortunately, there’s a branch of statistics that was practically built to answer questions like these! I have downloaded a dataset (Bown 2023) containing ratings for every episode of “The Simp-sons” up to season 34. We will analyze this data to determine if and when a significant shift occurred in the ratings, which might reflect the decline in quality that so many have observed. 4 5 6 7 8 0 200 400 600 Episode Number Rating Season 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 TMDB Ratings of The Simpsons Episodes In this plot, each episode of “The Simpsons” is represented by its TMBD rating, and episodes are colored by season. By visually inspecting the graph, we may already start to see some potential points where the ratings decline. However, the goal of our changepoint analysis is to move beyond visual inspection and rigorously detect the exact moment where a significant shift in the data occurs. Jokes apart, this is a challenging time series! First of all, there’s not a clear single change, but rather an increase, followed by a decline. After which, the sequence seems rather stationary. For this reason, throughout the module, we will use this data as a running example to develop 16 our understanding of various methods, hopefully trying to obtain a definitive answer towards the final chapters. But let’s proceed with order… 1.3 Detecting one change in mean In this section, we will start by exploring the simplest case of a changepoint detection problem: detecting a change in the mean of a time series. We assume that the data is generated according to the following model: 𝑦𝑡= 𝜇𝑡+ 𝜖𝑡, 𝑡= 1, … , 𝑛, where 𝜖𝑡∼𝒩(0, 𝜎2) represents Gaussian noise with mean 0 and known variance 𝜎2, and 𝜇𝑡∈ℝ is the signal at time 𝑡, with 𝔼(𝑦𝑡) = 𝜇𝑡. The vector of noise terms 𝜖1∶𝑛is often referred to as Gaussian noise, and hence, this model is known as the signal plus noise model, where the signal is given by 𝜇1∶𝑛and the noise by 𝜖1∶𝑛. In the single change-in-mean problem, our goal is to determine whether the signal remains constant throughout the entire sequence, or if there exists a point 𝜏, where the mean shifts. In other words, we are testing whether 𝜇1 = 𝜇2 = ⋯= 𝜇𝑛 (no changepoint), or if there exists a time 𝜏such that 𝜇1 = 𝜇2 = ⋯= 𝜇𝜏≠𝜇𝜏+1 = ⋯= 𝜇𝑛 (changepoint at 𝜏). Note. The point 𝜏is our changepoint, e.g. 
the first point after which our mean changes, however there’s a lot of inconsistencies on the literature: sometimes you will find that people refer to 𝜏+1 as the changepoint, and 𝜏as the last pre-change point (as a matter of fact, please let me know if you spot this inconsistency anywhere in these notes!). To address this problem, one of the most widely used methods is the CUSUM (Cumulative Sum) statistic. The basic idea behind the CUSUM statistic is to systematically compare the mean of the data to the left and right of each possible changepoint 𝜏. By doing so, we can assess whether there is evidence of a significant change in the mean at a given point. 17 1.3.1 The CUSUM statistics The CUSUM statistic compares, for a fixed 𝜏∈{1, … , 𝑛−1} , the empirical mean (average) of the data to the left (before 𝜏) with the empirical mean of the data to the right (after 𝜏): 𝐶𝜏= √𝜏(𝑛−𝜏) 𝑛 ∣̄ 𝑦1∶𝜏−̄ 𝑦(𝜏+1)∶𝑛∣, Our ̄ 𝑦1∶𝜏and ̄ 𝑦(𝜏+1)∶𝑛are just the empirical means of each segment, simply computed with: ̄ 𝑦𝑙∶𝑢= 1 𝑢−𝑙+ 1 𝑢 ∑ 𝑡=𝑙 𝑦𝑡. The term on the left of the difference, is there to re-scale it so that our statistics is the absolute value of normal random variable that has variance 𝜎2. If there is no change at 𝜏, this difference is going to be distributed as a standard normal. This approach is intuitive because if the mean 𝜇is the same across the entire sequence, the values of the averages on both sides of any point 𝜏should be similar. However, if there is a large-enough change in the mean, the means will differ significantly, highlighting the changepoint. More formally, we declare a change at 𝜏if: 𝐶2 𝜏 𝜎2 > 𝑐, where the 𝑐∈ℝ+ is a suitable chosen threshold value (in fact it is often chosen as in hypothesis testing). 1.3.2 Searching for all 𝜏s In practice, however, we do not know the changepoint location in advance. Our goal is to detect whether a changepoint exists and, if so, estimate its location. To achieve this, we need to consider all possible changepoint locations and choose the one that maximizes our test statistic. The natural extension of the CUSUM to this situation is to use as a test statistic the maximum of 𝐶𝜏as we vary 𝜏: 𝐶2 𝑚𝑎𝑥= max 𝜏∈{1,…,𝑛−1} 𝐶2 𝜏/𝜎2. 18 And detect a changepoint if 𝐶2 𝑚𝑎𝑥> 𝑐for some suitably chosen threshold 𝑐. The choice of 𝑐will determine the significance level of the test (we’ll discuss this in more detail later). Graphically, the test will look as follows: 0 5 10 15 −10 −5 0 5 Cusum over 15 points t If we detect a changepoint (i.e., if 𝐶2 𝑚𝑎𝑥> 𝑐), we can estimate its location by: ̂ 𝜏= arg max 𝜏∈{1,…,𝑛−1} 𝐶2 𝜏. In other words, ̂ 𝜏is the value of 𝜏that maximizes the CUSUM statistic. A simple estimate of the size of the change is then given by: Δ ̂ 𝜇= ̄ 𝑦( ̂ 𝜏+1)∶𝑛−̄ 𝑦1∶̂ 𝜏. This estimate represents the difference between the mean of the data after the estimated changepoint and the mean of the data before the estimated changepoint. 1.3.3 Example Let us compute the cusum for the vector 𝑦1∶4 = (0.5, −0.1, 12.1, 12.4). We know that 𝑛= 4 (the total number of observations), therefore possible changepoints are: 𝜏= 1, 2, 3. Compute empirical means for each segment We first need to calculate the segment means, ̄ 𝑦1∶𝜏and ̄ 𝑦(𝜏+1)∶𝑛, for each 𝜏. 19 • For 𝜏= 1, the left segment is: 𝑦1∶1 = (0.5), and ̄ 𝑦1∶1 = 0.5. The right segment: 𝑦2∶4 = (−0.1, 12.1, 12.4) gives ̄ 𝑦2∶4 = −0.1+12.1+12.4 3 = 24.4 3 = 8.13. 
• For 𝜏= 2, we have, in a similar fashion, ̄ 𝑦1∶2 = 0.5−0.1 2 = 0.2, ̄ 𝑦3∶4 = 12.1+12.4 2 = 12.25, • Lastly, for 𝜏= 3, we have ̄ 𝑦1∶3 = 0.5−0.1+12.1 3 = 12.5 3 = 4.16 and ̄ 𝑦4∶4 = 12.4. Compute the CUSUM statistics Now that we have the empirical means for each segment, we have all the ingredients for computing our CUSUM: 𝐶𝜏= √𝜏(𝑛−𝜏) 𝑛 ∣̄ 𝑦1∶𝜏−̄ 𝑦(𝜏+1)∶𝑛∣. • For 𝜏= 1: 𝐶1 = √1(4 −1) 4 ∣0.5 −8.133∣= 0.866 × 7.633 = 6.61. • For 𝜏= 2: 𝐶2 = √2(4 −2) 4 |0.2 −12.25| = 1 × 12.05 = 12.05. • For 𝜏= 3: 𝐶3 = √3(4 −3) 4 ∣4.166 −12.4∣= 0.866 × 8.233 = 7.13. Thus, the maximum of the CUSUM statistic occurs at 𝜏= 2, with 𝐶𝑚𝑎𝑥= 12.05. To detect a changepoint, we would compare 𝐶𝑚𝑎𝑥to a threshold value 𝑐. If 𝐶𝑚𝑎𝑥> 𝑐, we conclude that there is a changepoint at ̂ 𝜏= 2. 1.3.4 Algorithmic Formulation of the CUSUM Statistic This process seems rather long, as for every step, we need to precompute the means… A naive implementation of the cusum, in fact, takes 𝒪(𝑛2) computations. However, there’s an algorithmic trick: by sequentially computing partial sums, e.g. 𝑆𝑛= ∑ 𝑛 𝑖=1 𝑦𝑖, we can shorten out our computations significantly. In this way we can compute the value of the means directly as we iterate in the for cycle. 20 INPUT: Time series 𝑦= (𝑦1, ..., 𝑦𝑛), threshold 𝑐, variance 𝜎2. OUTPUT: Changepoint estimate ̂ 𝜏, maximum CUSUM statistic 𝐶𝑚𝑎𝑥 𝑛←length of 𝑦 𝐶𝑚𝑎𝑥←0 ̂ 𝜏←0 𝑆𝑛←∑ 𝑛 𝑖=1 𝑦𝑖// Compute total sum of y 𝑆←0 FOR 𝑡= 1, … , 𝑛−1 𝑆←𝑆+ 𝑦𝑡 ̄ 𝑦1∶𝑡←𝑆/𝑡 ̄ 𝑦(𝑡+1)∶𝑛←(𝑆𝑛−𝑆)/(𝑛−𝑡) // Can you figure out why? 𝐶2 𝑡←𝑡(𝑛−𝑡) 𝑛 ( ̄ 𝑦1∶𝑡−̄ 𝑦(𝑡+1)∶𝑛)2 IF 𝐶2 𝑡> 𝐶𝑚𝑎𝑥 𝐶𝑚𝑎𝑥←𝐶2 𝑡 ̂ 𝜏←𝑡 IF 𝐶𝑚𝑎𝑥/𝜎2 > 𝑐 RETURN ̂ 𝜏, 𝐶𝑚𝑎𝑥// Changepoint detected ELSE RETURN NULL, 𝐶𝑚𝑎𝑥// No changepoint detected For this reason, the time complexity of the CUSUM algorithm is 𝑂(𝑛), where 𝑛is the length of the time series. 1.3.5 Example: a large sequence We can see how the value 𝐶2 𝑡in the algorithm above behaves across different values of 𝑡= 1, … , 𝑛−1 in the example below: 21 0.0 2.5 5.0 7.5 0 25 50 75 100 x y 0 200 400 600 0 25 50 75 100 x CUSUM Running the CUSUM test, and maximising on our Simpsons episode, results in: 4 5 6 7 8 0 200 400 600 Episode Rating The Simpsons Ratings Over Time 0 100 200 0 200 400 600 Episode CUSUM Statistic CUSUM Statistics over time This results in episode Thirty Minutes over Tokyo being the last “good” Simpsons episode, with Beyond Blunderdome being the start of the decline, according to the Gaussian change-in-mean model! 22 1.4 Exercises 1.4.1 Code the CUSUM algorithm for a unknown change location, based on the pseudocode above. Workshop 1 1. Determine if the following processes are stationary, piecewise stationary, or non-stationary: a. 𝑦𝑡= 𝑦𝑡−1 + 𝜖𝑡, 𝑡= 2, … , 𝑛, 𝑦1 = 0, 𝜖𝑡∼𝑁(0, 1). This is a random walk model. Let’s start by computing the expected value and variance of 𝑦𝑡across all 𝑡. TIP: Start by expanding 𝑦𝑡in terms of the noise components… b. 𝑦𝑡= 𝑡𝜖𝑡+ 3𝟙(𝑡> 50), 𝑡= 1, … , 100, 𝜖𝑡∼𝑁(0, 1) c. 𝑦𝑡= 0.05 ⋅𝑡+ 𝜖𝑡, 𝑡= 1, … , 100, 𝜖𝑡∼𝑁(0, 1) 2. In this exercise we will show that: 1 𝜎 √𝜏(𝑛−𝜏) 𝑛 ( ̄ 𝑦1∶𝜏−̄ 𝑦(𝜏+1)∶𝑛) in case of no change, e.g. for 𝜇1 = 𝜇2 = ⋯= 𝜇𝑛= 𝜇, follows a standard normal distribution. Hint: a. Compute the expected value and variance of the difference ̄ 𝑦1∶𝜏−̄ 𝑦(𝜏+1)∶𝑛 b. Conclude that if you standardise the sum, this follows a standard normal distribu-tion. 1.4.2 Lab 1 1. Code the CUSUM algorithm for a unknown change location, based on the pseudocode of Section Section 1.3.4. 2. Modify your function above to output the CUSUM statistics over all ranges of tau. 3. 
Recreate the “CUSUM Statistics over time” plot for the Simpsons data above. a. You’ll be able to load the dataset via: 23 library(tidyverse) simpsons_episodes <- read_csv(" simpsons_ratings <- simpsons_episodes |> mutate(Episode = id + 1, Season = as.factor(season), Rating = tmdb_rating) simpsons_ratings <- simpsons_ratings[-nrow(simpsons_ratings), ] # run your CUSUM algorithm on the Rating variable! b. To run it on the whole sequence, you’ll have to set the threshold 𝑐= ∞. c. Assume 𝜎2 = 1 24 2 Controlling the CUSUM and Other Models In this chapter, we explore the properties of the CUSUM test for detecting a change in mean, and this will allow us how to determine appropriate thresholds, and explore its properties when a changepoint is present. We will employ some concepts from asymptotic theory: in time series analysis, an asymptotic distribution refers to the distribution that our test statistic approaches as the length of the time series 𝑛becomes very large. 2.1 The asymptotic distribution of the CUSUM statistics If 𝑧1, ⋯, 𝑧𝑘are independent, standard Normal random variables, then: 𝑘 ∑ 𝑖=1 𝑧2 𝑖∼𝜒2 𝑘, where 𝜒2 𝑘is a chi-squared distribution with 𝑘degrees of freedom. The chi-squared distribu-tion is a continuous probability distribution that models the sum of squares of k independent standard normal random variables: we have met the chi-squared distribution already in hy-pothesis testing and constructing confidence intervals. The shape of the distribution depends on its degrees of freedom. For 𝑘= 1, it’s highly skewed, but as 𝑘increases, it becomes more symmetric and approaches a normal distribution. Last week, we found out that, under the null hypothesis of no change: 1 𝜎 √𝜏(𝑛−𝜏) 𝑛 ( ̄ 𝑦1∶𝜏−̄ 𝑦(𝜏+1)∶𝑛) ∼𝑁(0, 1). Therefore, our test statistics for a fixed 𝜏: 𝐶2 𝜏 𝜎2 ∼𝜒2 1. If we take the example of last week, and remove the changepoint, we can observe that the cusum statistics stays constant, and relatively small: 25 −2 −1 0 1 2 0 25 50 75 100 x y 0 1 2 0 25 50 75 100 x CUSUM However, as the change is unknown, our actual test statistic for detecting a change is max𝜏𝐶2 𝜏/𝜎2. For this reason, calculating the distribution of this maximum ends up being a bit more chal-lenging… 1. So far, we only studied the behaviour of the statistics for one fixed 𝜏, however, when comparing the maximums, the values of 𝐶𝜏are in fact not independent across different 𝜏s. 2. As we will learn later, the CUSUM is a special case of a LR test, as setting the size of the actual change in mean to 0 effectively removes the changepoint parameter from the model. For this reason, the usual regularity conditions for likelihood-ratio test statistics don’t apply here. 2.1.1 Controlling the max of our cusums Fortunately, for controlling our CUSUM test, we can use the fact that (𝐶1, ..., 𝐶𝑛−1)/𝜎are the absolute values of a Gaussian process with mean 0 and known covariance, and there are well known statistical results that can help us in our problem. Yao and Davis (1986), in fact, show that the maximum of a set of Gaussian random variables is known to converge to a Gumbel distribution, described by the following equation: lim 𝑛→∞Pr{𝑎−1 𝑛(max 𝜏 𝐶𝜏/𝜎−𝑏𝑛) ≤𝑢𝛼} = exp{−(2𝜋)−1/2 exp(−𝑢𝛼)}, (2.1) 26 where 𝑎𝑛= (2 log log 𝑛)−1/2 and 𝑏𝑛= 𝑎−1 𝑛+ 0.5𝑎𝑛log log log 𝑛are a scaling and a centering constant. The right side of this equation is the CDF of a Gumbell distribution. As we learned from likelihood inference, to find the threshold 𝑐𝛼for a given false probability rate, we first set the right-hand side equal to 1 −𝛼, and solve for 𝑢𝛼. 
This gives:

$$u_\alpha = -\log\left(\frac{-\log(1-\alpha)}{(2\pi)^{-1/2}}\right).$$

Then, reading off the left-hand side of the equation, the critical value for $\max_\tau C_\tau/\sigma$ is:

$$\tilde{c} = a_n u_\alpha + b_n.$$

Since we declare a change when $\max_\tau C^2_\tau/\sigma^2 > c$, we just have to square the value above, i.e. $c_\alpha = \tilde{c}^2$.

This asymptotic result suggests that the threshold $c_\alpha$ for $C^2_\tau/\sigma^2$ should increase with $n$ at a rate of approximately $2\log\log n$. Given that convergence to this limit is fairly slow, the threshold suggested by the asymptotic distribution can be conservative in practice, potentially leading us to detect fewer changepoints than actually exist.

In practice, it is often simplest and most effective to use Monte Carlo methods to approximate the null distribution of the test statistic. This can be done via the following process:

1. Simulate many time series under the null hypothesis (no changepoint).
2. Calculate the test statistic $\max_\tau C^2_\tau/\sigma^2$ for each replicate.
3. Set the threshold to be the $(1-\alpha)$ quantile of the test statistics from the simulated data.

This typically leads to less conservative thresholds.

Theoretical vs Empirical Thresholds

The figure below shows, for the levels $\alpha = 0.01, 0.05, 0.1$, the thresholds $c_\alpha$ computed from the theoretical distribution of Equation 2.1 against the Monte Carlo thresholds obtained from empirical simulations under the null.

We will see how to compute the theoretical and empirical thresholds in practice in the Lab!

2.2 The Likelihood Ratio Test

The CUSUM can be viewed as a special case of a more general framework based on the Likelihood Ratio Test (LRT). This allows us to test for changes in more general settings, beyond simply detecting changes in the mean.

In general, the Likelihood Ratio Test is a method for comparing two nested models: one under the null hypothesis, which assumes no changepoint, and one under the alternative hypothesis, which assumes a changepoint exists at some unknown position $\tau$.

Suppose we have a set of observations $y_1, y_2, \dots, y_n$. Under the null hypothesis $H_0$, we assume that all the data is generated by the same model without a changepoint. Under the alternative hypothesis $H_1$, there is a single changepoint at $\tau$, such that the model for the data changes after $\tau$. The LRT statistic is given by:

$$LR_\tau = -2\log\left\{\frac{\max_{\theta}\prod_{t=1}^{n} f(y_t \mid \theta)}{\max_{\theta_1,\theta_2}\left[\left(\prod_{t=1}^{\tau} f(y_t \mid \theta_1)\right)\left(\prod_{t=\tau+1}^{n} f(y_t \mid \theta_2)\right)\right]}\right\} \quad (2.2)$$

The LRT compares the likelihood of the data under the two models to determine which one is more plausible: the numerator is the likelihood under the null hypothesis of no changepoint, while the denominator is the likelihood of the data under the alternative hypothesis, where we optimise two different parameters, one before and one after the changepoint at $\tau$.

2.2.1 Example: Gaussian change-in-mean

As a first example, we show how the CUSUM statistic is nothing but a specific case of the GLR above. To see this, we start from our piecewise constant signal plus noise, $y_t = f_t + \epsilon_t$, $t = 1, \dots, n$. Under this model our data, being Gaussian noise shifted by the signal, is distributed as:

$$y_t \sim N(\mu_t, \sigma^2), \quad t = 1, \dots, n.$$

Our p.d.f. will be:

$$f(y_t \mid \theta) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{1}{2\sigma^2}(y_t - \mu)^2\right\}.$$

Therefore, to obtain the likelihood ratio test statistic, we plug our Gaussian p.d.f. into the LR above and take the logarithm:

$$LR_\tau = -2\left[\max_\mu\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i-\mu)^2\right) - \max_{\mu_1,\mu_2}\left(-\frac{1}{2\sigma^2}\left(\sum_{i=1}^{\tau}(y_i-\mu_1)^2 + \sum_{i=\tau+1}^{n}(y_i-\mu_2)^2\right)\right)\right] + \quad (2.3)$$

$$+\ \tau\log(2\pi\sigma^2) + (n-\tau)\log(2\pi\sigma^2) - n\log(2\pi\sigma^2). \quad (2.4)$$

As the log terms cancel, this simplifies to:

$$LR_\tau = \frac{1}{\sigma^2}\left[\min_\mu \sum_{i=1}^{n}(y_i-\mu)^2 - \min_{\mu_1,\mu_2}\left(\sum_{i=1}^{\tau}(y_i-\mu_1)^2 + \sum_{i=\tau+1}^{n}(y_i-\mu_2)^2\right)\right].$$
To solve the minimization over 𝜇1 and 𝜇2, we plug-in values ̂ 𝜇= ̄ 𝑦1∶𝑛on the first term, and ̂ 𝜇1 = ̄ 𝑦1∶𝜏, ̂ 𝜇2 = ̄ 𝑦(𝜏+1)∶𝑛for the second term: 𝐿𝑅𝜏= 1 𝜎2 [ 𝑛 ∑ 𝑖=1 (𝑦𝑖−̄ 𝑦1∶𝑛)2 − 𝜏 ∑ 𝑖=1 (𝑦𝑖−̄ 𝑦1∶𝜏)2 − 𝑛 ∑ 𝑖=𝜏+1 (𝑦𝑖−̄ 𝑦(𝜏+1)∶𝑛)2] . 29 This is the likelihood ratio test statistic for a change in mean in a Gaussian model, which is essentially the CUSUM statistics squared, rescaled by the known variance: 𝐿𝑅𝜏= 𝐶2 𝜏 𝜎2 . It is possible to prove this directly with some tedious computations. Proof. We start by writing down 𝜎2𝐿𝑅. This will be: 𝜎2𝐿𝑅𝜏= 𝑛 ∑ 𝑖=1 (𝑦𝑖−̄ 𝑦1∶𝑛)2 − 𝜏 ∑ 𝑖=1 (𝑦𝑖−̄ 𝑦1∶𝜏)2 − 𝑛 ∑ 𝑖=𝜏+1 (𝑦𝑖−̄ 𝑦𝜏+1∶𝑛)2. Now we need to expand each term. Starting with the first: 𝑛 ∑ 𝑖=1 (𝑦𝑖−̄ 𝑦1∶𝑛)2 = 𝑛 ∑ 𝑖=1 𝑦2 𝑖−2 ̄ 𝑦1∶𝑛 𝑛 ∑ 𝑖=1 𝑦𝑖+ 𝑛̄ 𝑦2 1∶𝑛. As ∑ 𝑛 𝑖=1 𝑦𝑖= 𝑛̄ 𝑦1∶𝑛, we notice that we can simplify the last two terms. We are left with: 𝑛 ∑ 𝑖=1 (𝑦𝑖−̄ 𝑦1∶𝑛)2 = 𝑛 ∑ 𝑖=1 𝑦2 𝑖−𝑛̄ 𝑦2 1∶𝜏. We proceed similarly for the other two terms: 𝜏 ∑ 𝑖=1 (𝑦𝑖−̄ 𝑦1∶𝜏)2 = 𝜏 ∑ 𝑖=1 𝑦2 𝑖−𝜏̄ 𝑦2 1∶𝜏, 𝑛 ∑ 𝑖=𝜏+1 (𝑦𝑖−̄ 𝑦𝜏+1∶𝑛)2 = 𝑛 ∑ 𝑖=𝜏+1 𝑦2 𝑖−(𝑛−𝜏) ̄ 𝑦2 𝜏+1∶𝑛. Putting all together, and getting rid of the partial sums, we are left with: 𝜎2𝐿𝑅𝜏= −𝑛̄ 𝑦2 1∶𝑛+ 𝜏̄ 𝑦2 1∶𝜏+ (𝑛−𝜏) ̄ 𝑦2 𝜏+1∶𝑛. Now, recall that ̄ 𝑦1∶𝑛= 1 𝑛[𝜏̄ 𝑦1∶𝜏+ (𝑛−𝜏) ̄ 𝑦𝜏+1∶𝑛], and: ̄ 𝑦2 1∶𝑛= 1 𝑛2 [𝜏2 ̄ 𝑦2 1∶𝜏+ 2𝜏(𝑛−𝜏) ̄ 𝑦1∶𝜏̄ 𝑦𝜏+1∶𝑛+ (𝑛−𝜏)2 ̄ 𝑦2 𝜏+1∶𝑛] . Plugging in this into our LR: 30 𝜎2𝐿𝑅𝜏= −𝜏2 𝑛 ̄ 𝑦2 1∶𝜏−2𝜏(𝑛−𝜏) 𝑛 ̄ 𝑦1∶𝜏̄ 𝑦𝜏+1∶𝑛−(𝑛−𝜏)2 𝑛 ̄ 𝑦2 𝜏+1∶𝑛−𝜏̄ 𝑦2 1∶𝜏−(𝑛−𝜏) ̄ 𝑦2 𝜏+1∶𝑛= (2.5) = 𝜏(𝑛−𝜏) 𝑛 ̄ 𝑦2 1∶𝜏−2𝜏(𝑛−𝜏) 𝑛 ̄ 𝑦1∶𝜏̄ 𝑦𝜏+1∶𝑛+ 𝜏(𝑛−𝜏) 𝑛 ̄ 𝑦2 𝜏+1∶𝑛= (2.6) = 𝜏(𝑛−𝜏) 𝑛 ( ̄ 𝑦2 1∶𝜏−2 ̄ 𝑦1∶𝜏̄ 𝑦𝜏+1∶𝑛+ ̄ 𝑦2 𝜏+1∶𝑛) = (2.7) = 𝜏(𝑛−𝜏) 𝑛 ( ̄ 𝑦1∶𝜏−̄ 𝑦𝜏+1∶𝑛)2 = (2.8) = 𝐶2 𝜏. (2.9) This gives us 𝐿𝑅𝜏= 𝐶2 𝜏 𝜎2 . 2.3 Towards More General Models The great thing of the LR test is that it’s extremely flexible, allowing us to detect other changes then the simple change-in-mean case. As before, the procedure is to compute the LR test conditional on a fixed location of a changepoint, e.g. 𝐿𝑅𝜏, and range across all possible values for 𝜏to find the test statistics for our change. 2.3.1 Change-in-variance To this end we will demonstrate how to construct a test for Gaussian change-in-variance, for mean known. For simplicity, we will call our variance 𝜎2 = 𝜃, our parameter of interest, and without loss of generality, we can center our data on zero (e.g. if 𝑥𝑡∼𝑁(𝜇, 𝜃), then 𝑥𝑡−𝜇= 𝑦𝑡∼𝑁(0, 𝜃)). Then, our p.d.f for one observation will be given by: 𝑓(𝑦𝑡|𝜃) = 1 √ 2𝜋𝜃 exp{−𝑦2 𝑡 2𝜃}. Plugging in the main LR test formula, we find: 𝐿𝑅𝜏= −2 log ⎧ { ⎨ { ⎩ max𝜃∏ 𝑛 𝑡=1 1 √ 2𝜋𝜃exp{−𝑦2 𝑡 2𝜃} max𝜃1,𝜃2[(∏ 𝜏 𝑡=1 1 √2𝜋𝜃1 exp{−𝑦2 𝑡 2𝜃1 })(∏ 𝑛 𝑡=𝜏+1 1 √2𝜋𝜃2 exp{−𝑦2 𝑡 2𝜃2 }] ⎫ } ⎬ } ⎭ And taking the log, and simplifying over the constant gives us: 31 𝐿𝑅𝜏= −max 𝜃 𝑛 ∑ 𝑡=1 (−log(𝜃) −𝑦2 𝜃) + max 𝜃1,𝜃2 [ 𝜏 ∑ 𝑡=1 (−log(𝜃1) −𝑦2 𝜃1 ) + 𝑛 ∑ 𝑡=𝜏+1 (−log(𝜃2) −𝑦2 𝜃2 )] = (2.10) = min 𝜃 𝑛 ∑ 𝑡=1 (log(𝜃) + 𝑦2 𝜃) −min 𝜃1,𝜃2 [ 𝜏 ∑ 𝑡=1 (log(𝜃1) + 𝑦2 𝜃1 ) + 𝑛 ∑ 𝑡=𝜏+1 (log(𝜃2) + 𝑦2 𝜃2 )] (2.11) Now to solve the minimisation, we focus on the first term: 𝑓(𝑦1∶𝑛, 𝜃) = 𝑛 ∑ 𝑡=1 (log(𝜃) + 𝑦2 𝜃) = (𝑛log(𝜃) + ∑ 𝑛 𝑡=1 𝑦2 𝜃 ) . Taking the derivative with respect to 𝜃, gives: 𝑑 𝑑𝜃𝑓(𝑦1∶𝑛, 𝜃) = 𝑛 𝜃−∑ 𝑛 𝑡=1 𝑦2 𝜃2 . Setting equal to zero and solving for 𝜃: 𝑛𝜃− 𝑛 ∑ 𝑡=1 𝑦2 = 0 Which gives us: ̂ 𝜃= ∑𝑛 𝑡=1 𝑦2 𝑛 = ̄ 𝑆1∶𝑛the sample variance. Solving the optimization for 𝜃1 and 𝜃2 similarly, gives us the values ̂ 𝜃1 = ̄ 𝑆1∶𝜏, ̂ 𝜃2 = ̄ 𝑆(𝜏+1)∶𝑛. Now, as 𝑓(𝑦1∶𝑛, ̂ 𝜃) = 𝑛log( ̄ 𝑆1∶𝑛) + 𝑛(why?) 
the final LR test simplifies to: 𝐿𝑅𝜏= [𝑛log( ̄ 𝑆1∶𝑛) −𝜏log( ̄ 𝑆1∶𝜏) −(𝑛−𝜏) log( ̄ 𝑆(𝜏+1)∶𝑛)] 32 −10 −5 0 5 10 0 50 100 150 200 x y 0 50 100 150 0 50 100 150 200 x CUSUM 2.3.2 Change-in-slope Another important example, and an alternative to detecting a change-in-mean, is detecting a change in slope. In this section, we assume the data is still modeled as a signal plus noise, but the signal itself is a linear function of time (e.g. non-stationary, with a change!). Graphically: 33 −120 −80 −40 0 0 50 100 150 200 250 x y More formally, let our data be modeled as: 𝑦𝑡= 𝑓𝑡+ 𝜖𝑡, 𝜖𝑡∼𝑁(0, 1) 𝑡= 1, … , 𝑛. In this scenario, for simplicity, we assume a known constant variance, which without loss of generality, we take to be 1. Under the null hypothesis 𝐻0, we assume that the signal is linear with a constant slope over the entire sequence, i.e., 𝑓𝑡= 𝛼1 + 𝑡𝜃1, 𝑡= 1, … , 𝑛, where 𝛼1 is the intercept, and 𝜃1 is the slope. However, under the alternative hypothesis 𝐻1, we assume there is a changepoint at 𝜏after which the slope changes. Thus, the signal becomes: 𝑓𝑡= 𝛼1 + 𝑡𝜃1, 𝑡= 1, … , 𝜏; 𝑓𝑡= 𝛼2 + 𝑡𝜃2, 𝑡= 𝜏+ 1, … , 𝑛, where 𝛼2 is the new intercept, and 𝜃2 is the new slope after the changepoint. In other words, the model is showing a piecewise linear mean. For this model, the log-likelihood ratio test statistic can be written as the square of a projection of the data onto a vector 𝑣𝜏, i.e., 34 𝐿𝑅𝜏= (𝑣⊤ 𝜏𝑦1∶𝑛) 2 , where 𝑣𝜏is a contrast vector that is piecewise linear with a change in slope at 𝜏. This vector is constructed such that, under the null hypothesis, the vector 𝑣⊤ 𝜏𝑦1∶𝑛has variance 1, and 𝑣𝜏⊤𝑦1∶𝑛is invariant to adding a linear function to the data. These properties uniquely define the contrast vector 𝑣𝜏, up to an arbitrary sign. Computations on how to obtain this likelihood ration test, and how to construct this vector are beyond the scope of this module, but should you be curious those are detailed in Baranowski, Chen, and Fryzlewicz (2019). 2.3.3 Revisiting our Simpsons data (again!) So, going back to the Simpsons example… We mentioned how the belowed show rose rapidly to success, and at one point, started to decline… A much better model would therefore be our change-in-slope model! To run the model, we can take advantage of the changepoint package, which by default is a multiple changepoint package (we will see these in the next week), but whose simplest case implements exactly our change-in-slope LR test. Before we proceed, we need to load, clean and standardize our data: # Load Simpsons ratings data simpsons_episodes <- read.csv("extra/simpsons_episodes.csv") simpsons_episodes <- simpsons_episodes |> mutate(Episode = id + 1, Season = as.factor(season), Rating = tmdb_rating) simpsons_episodes <- simpsons_episodes[-nrow(simpsons_episodes), ] y <- simpsons_episodes$Rating We can then run our model with: library(changepoint) data <- cbind(y, 1, 1:length(y)) out <- cpt.reg(data, method="AMOC") # AMOC is short for "At Most One Change" print(paste0("Our changepoint estimate (chagepoints): ", cpts(out))) "Our changepoint estimate (chagepoints): 176" 35 plot(out) 0 200 400 600 4 5 6 7 8 Index data.set(x)[, 1] We can see that we now find a significant changepoint prior to episode The Simpsons Spin-Off Showcase, which is anthology episode well over into season 8, which, according to our method, is the beginning of the decline! However, some among you, might have noticed that there are more then one changes in this dataset… We will see, in fact, how we can improve on our estimation in the following weeks! 
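Before moving on to the exercises, it can help to see the change-in-variance test of Section 2.3.1 in code. The sketch below is not part of any package: the function name, the seed and the simulated data are purely illustrative, and it assumes the mean is known and equal to zero (i.e. the data have already been centred). It computes $LR_\tau$ for every $\tau$ using partial sums of squares:

lr_variance <- function(y) {
  n <- length(y)
  S <- cumsum(y^2)                        # partial sums of squares
  tau <- 1:(n - 1)
  S_left <- S[tau] / tau                  # sample variance of y[1:tau]
  S_right <- (S[n] - S[tau]) / (n - tau)  # sample variance of y[(tau+1):n]
  S_all <- S[n] / n                       # sample variance of the whole sequence
  LR <- n * log(S_all) - tau * log(S_left) - (n - tau) * log(S_right)
  # in practice one may want to restrict tau away from the very first and last points
  list(tau_hat = which.max(LR), LR = LR)
}

set.seed(1)
y <- c(rnorm(100, sd = 1), rnorm(100, sd = 3))  # variance change at t = 100
lr_variance(y)$tau_hat

As with the CUSUM, a threshold for declaring a change can then be chosen either asymptotically or by Monte Carlo simulation under the null.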
2.4 Exercises

2.4.1 Workshop 2

1. Compute the likelihood ratio test statistic to detect a change in the success probability of a Bernoulli random variable.
a. Start by writing down the distribution of the model under the null, and find the MLE. Extend this to the alternative.
b. Compose the log-likelihood ratio, according to Equation 2.2 introduced above.

2.4.2 Lab 2

1. Write a function that, taking as input $n$ and a desired false positive rate $\alpha$, returns the threshold for the CUSUM statistic, according to Section 2.1.1.
2. Construct a function that, taking as input $n$, a desired $\alpha$, and a replicates parameter, runs a Monte Carlo simulation to tune an empirical penalty for the CUSUM change-in-mean on a simple Gaussian signal. Tip: You can reuse the function for computing the CUSUM statistic that you built last week.
3. Compare, for a range of increasing values of $n$, e.g. $n = 100, 500, 1000, 10000$, and for a few levels of $\alpha$, the Monte Carlo threshold with the theoretically justified threshold. Plot the results, to recreate the plot above.
4. Using the Simpsons dataset and the Monte Carlo threshold, find a critical level for your CUSUM statistic, and test for a change with the change-in-mean model.

3 Multiple changepoints

3.1 Introduction

In real-world data, it is common to encounter situations where more than one change occurs. When applying the CUSUM statistic in such cases, the question arises: how does CUSUM behave, and how can we detect these multiple changes effectively?

3.1.1 Real Example: Genomic Data and Neuroblastoma

To motivate this discussion, we return to the example from week 1: detecting active genomic regions using ChIP-seq data. Our goal here is to identify copy number variations (CNVs), structural changes in the genome where DNA sections are duplicated or deleted. These variations can impact gene expression and are linked to diseases like cancer, including neuroblastoma. The dataset we'll examine consists of logratios of genomic probe intensities, which help us detect changes in the underlying DNA structure. Statistically, our objective is to segment this logratio sequence into regions with different means, corresponding to different genomic states:

[Figure: logratio (noisy copy number measurement) against position/1e+06.]

As seen from the plot, the data is noisy, but there are visible shifts in the logratio values, suggesting multiple changes in the underlying copy number. By the end of this chapter, we will segment this sequence!

3.1.2 Towards multiple changes

Under this framework, the observed sequence $y_t$ can be modelled as a piecewise constant signal whose parameter changes at each changepoint $\tau_1, \dots, \tau_k, \dots, \tau_K$, where $\tau_k < \tau_{k+1}$, with $K$ being the total number of changes in the sequence, corresponding to $K+1$ segments. Also, for the sake of notation, in the rest of this module we will set $\tau_0 = 0$ and $\tau_{K+1} = n$.

Multiple Changes-in-Mean. A plausible model for the change-in-mean signal is given by

$$y_t = \mu_k + \epsilon_t, \quad \text{for } \tau_k + 1 \le t \le \tau_{k+1}, \quad k = 0, 1, \dots, K,$$

where $\mu_k$ is the mean of the $k$-th segment, and $\epsilon_t \sim \mathcal{N}(0, \sigma^2)$ are independent Gaussian noise terms with mean 0 and (known) variance $\sigma^2$.

As a starting example, we can generate a sequence of $n = 400$ observations with 4 segments, with changepoints $\tau_1 = 100$, $\tau_2 = 200$, $\tau_3 = 300$ and means $\mu_1 = 2$, $\mu_2 = 0$, $\mu_3 = -1$ and $\mu_4 = 2$.
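A minimal R sketch for generating such a sequence (the seed and the object names are purely illustrative, and we take $\sigma^2 = 1$):

set.seed(42)
mu <- rep(c(2, 0, -1, 2), each = 100)  # piecewise constant signal: changes at 100, 200, 300
y <- mu + rnorm(length(mu))            # add standard Gaussian noise
cps <- which(diff(mu) != 0)            # true changepoint locations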
Running the CUSUM statistic in this scenario with multiple changes, leads to the following 𝐶2 𝜏trace: 39 −2 0 2 4 0 100 200 300 400 x y 0 50 100 150 200 0 100 200 300 400 x CUSUM One thing we could do, would be to set a threshold, say 50, record the windows over which the CUSUM is over the threshold, and pick the argmax in each of our windows as the candidate changepoints. 0 50 100 150 200 0 100 200 300 400 x CUSUM However, from this simple example we notice that we miss already one changepoint, that of time 200… In fact, in some scenarios, such as this one, detection power of the CUSUM statistic lost when there is more then one change in our test. To make it even worse, if we compare the values of the CUSUM statistic ran on the whole dataset (as above), against with the values of the CUSUM, ran on a subset limited to only only the first two segments 𝑦1∶200: Warning: Removed 199 rows containing missing values or values outside the scale range (geom_line()). 40 −2 0 2 4 0 50 100 150 200 x y 0 100 200 300 400 0 50 100 150 200 x CUSUM We can see that max of the CUSUM across the entire dataset (the line in grey, that we computed before) is much lower than the one where we isolate the sequence on one single change! So there is an effective loss of power in this scenario when analyzing all changes together, as some segments means are masking the effects of others with the CUSUM… This gives us motivation to move towards some strategy that tries to estimate all changes locations jointly, rather then looking for one! 3.1.3 The cost of a segmentation Well, so far we only worked with one scheme that tried to split a sequence in a hald But how can we work in case we have more than one change? Well, we need to introduce the cost of a segment. 3.1.3.1 The cost of a segment If we assume the data is independent and identically distributed within each segment, for segment parameter 𝜃, then this cost can be obtained through: ℒ(𝑦𝑠+1∶𝑡) = min 𝜃 𝑡 ∑ 𝑖=𝑠+1 −2 log(𝑓(𝑦𝑖, 𝜃)) (3.1) 41 with 𝑓(𝑦, 𝜃) being the likelihood for data point 𝑦if the segment parameter is 𝜃. Note, as the parameter of interest is 𝜃, we can remove all constant terms with respect to 𝜃, as those will not affect our optimization. Example. Now, for example, in the Gaussian case, recall our p.d.f. is given by: 𝑓(𝑦𝑖, 𝜃) = 1 √ 2𝜋𝜎2 exp{−1 2𝜎2 (𝑦𝑖−𝜇)2}. Taking the log and summing across all data points in the segment: 𝑡 ∑ 𝑖=𝑠+1 −2 log 𝑓(𝑦𝑖|𝜃) = −2 [−𝑡−𝑠 2 log(2𝜋𝜎2) − 1 2𝜎2 𝑡 ∑ 𝑖=𝑠+1 (𝑦𝑡−𝜇)2] . This is minimized for ̄ 𝑦𝑠+1∶𝑡= 1 𝑡−𝑠∑ 𝑡 𝑖=𝑠+1 𝑦𝑡. Therefore, plugging this into our equation Equation 3.1, the cost of a segment will be given by: ℒ(𝑦𝑠+1∶𝑡) = (𝑡−𝑠) log(2𝜋𝜎2) + 1 𝜎2 𝑡 ∑ 𝑖=𝑠+1 (𝑦𝑖−̄ 𝑦𝑠+1∶𝑡) 2 . Remember, we can get rid of all constants terms as those do not contribute to our optimization. Doing so, our cost will be simply: ℒ(𝑦𝑠+1∶𝑡) = 1 𝜎2 𝑡 ∑ 𝑖=𝑠+1 (𝑦𝑖−̄ 𝑦𝑠+1∶𝑡) 2 . 3.1.3.2 Obtaining the cost of the full segmentation The cost for the full segmentation will be given by the sum across all segments: 𝐾 ∑ 𝑘=0 ℒ(𝑦𝜏𝑘+1∶𝜏𝑘+1) Interestingly, the cost of a full segmentation is closely related to the LR test. Consider, a single Gaussian change-in-mean, e.g. 𝐾= 1 at time 𝜏1 = 𝜏, splitting the data into two segments: 𝑦1∶𝜏and 𝑦𝜏+1∶𝑛. The cost of this segmentation is: ℒ(𝑦1∶𝜏) + ℒ(𝑦𝜏+1∶𝑛) = 1 𝜎2 [ 𝜏 ∑ 𝑖=1 (𝑦𝑖−̄ 𝑦1∶𝜏)2 + 𝑛 ∑ 𝑖=𝜏+1 (𝑦𝑖−̄ 𝑦(𝜏+1)∶𝑛)2] 42 Which is essentially minus the LR test as we saw last week, without the null component. 
Specifically, for one change, minimizing the segmentation cost over all possible changepoints locations 𝜏is equivalent to maximizing the CUSUM statistic. 3.1.4 The “best” segmentation We now have a way of evaluating how “good” a segmentation is, so it’s only natural to ask the question: what would be the best one? Well, one way would be to, say, finding the the best set of 𝜏= 𝜏0, … , 𝜏𝐾+1 changepoints that minimise the cost: min 𝐾∈ℕ 𝜏1,…,𝜏𝐾 𝐾 ∑ 𝑘=0 ℒ(𝑦𝜏𝑘+1∶𝜏𝑘+1). Which one would this be? Say that for instance we range the 𝐾= 1, … , 𝑛, and at each step we find the best possible segmentation. Graphically, we would be observing the following: −2 0 2 4 0 100 200 300 400 t y Segments: 1 Seg. Cost: 1042 −2 0 2 4 0 100 200 300 400 t y Segments: 2 Seg. Cost: 821 43 −2 0 2 4 0 100 200 300 400 t y Segments: 3 Seg. Cost: 441 −2 0 2 4 0 100 200 300 400 t y Segments: 4 Seg. Cost: 400 Well, arguably we would like to stop at 4, which we know is the real number of segments, but the cost keep going down… −2 0 2 4 0 100 200 300 400 t y Segments: 10 Seg. Cost: 369 44 −2 0 2 4 0 100 200 300 400 t y Segments: 100 Seg. Cost: 133 And finally: −2 0 2 4 0 100 200 300 400 t y Segments: 400 Seg. Cost: 0 45 Well, it turns out, that according to the minimization above, the optimal segmentation across all would be the one that puts each point into its own segment! Well, there are different solutions to this problem. The first one we will see, is a divide-and-conquer greedy approach, called Binary Segmentation, and the second one will aim a generating a different optimization to the one below that will find the optimal segmentation up to a constant to avoid over-fitting! 3.2 Binary Segmentation Binary Segmentation (BS) is a procedure from Scott and Knott (1974) and Sen and Srivastava (1975). Binary segmentation works like this: 1. Start with a test for a change 𝜏that splits a sequence into two segments and to check if the cost over those two segments, plus a penalty 𝛽∈ℝ, is smaller then the cost computed on the whole sequence: ℒ(𝑦1∶𝜏) + ℒ(𝑦𝜏+1∶𝑛) + 𝛽< ℒ(𝑦1∶𝑛) (3.2) where the segment cost ℒ(⋅), is as in Equation 3.1. 2. If the condition in Equation 3.2 is true for at least one 𝜏∈1, … , 𝑛, then the 𝜏that mini-mizes ℒ(𝑦1∶𝜏) + ℒ(𝑦𝜏+1∶𝑛) is picked as a first changepoint and the test is then performed on the two newly generated splits. This step is repeated until no further changepoints are detected on all resulting segments. 3. If there are no more resulting valid splits, then the procedure ends. Some of you might have noted how the condition in Equation 3.2 is closely related to the LR test in Equation 2.2. In fact, rearranging equation above, gives us: −ℒ(𝑦1∶𝑛) + ℒ(𝑦1∶𝜏) + ℒ(𝑦𝜏+1∶𝑛) = −𝐿𝑅𝜏 2 < −𝛽. The −𝛽acts exactly as the constant 𝑐for declaring a change, and it adds a natural stopping condition, solving the issue of overfitting that we mentioned in the previous section! Binary Segmentation, in fact, does nothing more then iteratively running a LR test, until no changes are found anymore! This gives us a strategy to essentially apply a test that is locally optimal for one change, such as the Likelihood Ratio test, to solve a multiple changepoint segmentation. For this reason, BS is often employed to extend single changepoint procedures to multiple changes procedures, and hence it is one of the most prominent methods in the literature. 46 3.2.1 Binary Segmentation in action Having introduced the main idea, we show now how binary segmentation works in action with an example above. Say that we set a 𝛽= 2 log(400) = 11.98. 
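Throughout this walkthrough, the segment cost $\mathcal{L}(\cdot)$ is the Gaussian change-in-mean cost from Section 3.1.3.1, with $\sigma^2 = 1$. A minimal R helper that may be useful for reproducing the steps below by hand (it mirrors the costMean skeleton used later in the lab; the names are illustrative):

costMean <- function(y, s, t) {
  # Gaussian change-in-mean cost of the segment y[s:t], with sigma^2 = 1
  seg <- y[s:t]
  sum((seg - mean(seg))^2)
}
beta <- 2 * log(400)  # the penalty used in this example, approximately 11.98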
Step 1: We start by computing, for every candidate $\tau$, the quantity $\mathcal{L}(y_{1:\tau}) + \mathcal{L}(y_{\tau+1:n}) - \mathcal{L}(y_{1:n}) + \beta$ from Equation 3.2; if this falls below zero for at least one $\tau$, the $\tau$ achieving the smallest value is our first changepoint estimate, and the first point of split. In the plots below, the blue horizontal line is the mean signal estimated for a given split, while in the lower panel the pink points are the values of the statistic ($-LR_\tau/2$) that fall below the threshold line at $-\beta$, and the red vertical line shows the minimiser of the test statistic. Whenever the statistic dips below the beta line, its minimiser is our changepoint estimate. In our case, the minimum is achieved at $\hat{\tau} = 100$, and since it falls below the threshold, this is our first estimated changepoint!

[Figure: data with fitted segment mean (top) and LR statistic with threshold (bottom) for the full sequence; minimum at τ = 100.]

Step 2: From the first step, we now have to check two splits:

• The first left split, 1-LEFT in the plot below, covers the data $y_{1:100}$. Here the minimum of the statistic does not fall below the threshold, therefore we do not declare any further change in this subset.
• The first right split, 1-RIGHT, covers the data $y_{101:400}$. Here the minimum of the statistic does fall below the threshold, and therefore we identify a second change at $\hat{\tau} = 297$. This is not exactly 300, so we do not have a perfect estimate. While not ideal, this is the best point we have found, and therefore we have to continue!

[Figure: data and LR statistic for splits 1-LEFT ($y_{1:100}$) and 1-RIGHT ($y_{101:400}$).]

Step 3: In step 3, we again have to check two splits:

• The second left split, 2-LEFT in the plot below, covers the data $y_{101:297}$. It is in this split that the statistic goes below the threshold: the third estimated change is at $\hat{\tau} = 203$, again slightly off the real one at 200. We continue investigating this split…
• The second right split, 2-RIGHT, covers the data $y_{298:400}$. In this last split the minimum does not fall below the threshold, therefore we stop the search here.

[Figure: data and LR statistic for splits 2-LEFT ($y_{101:297}$) and 2-RIGHT ($y_{298:400}$).]

Step 4: In step 4, we check:

• The third left split, 3-LEFT in the plot below, covers the data $y_{101:203}$. The minimum here does not fall below the threshold.
• The third right split, 3-RIGHT, covers the data $y_{204:297}$. Similarly, the minimum does not fall below the threshold.

[Figure: data and LR statistic for splits 3-LEFT ($y_{101:203}$) and 3-RIGHT ($y_{204:297}$).]

The algorithm therefore terminates!

With this graphical description in mind, we can formally describe the Binary Segmentation algorithm as a recursive procedure, where the first call is simply BinSeg($y_{1:n}$, $\beta$).

BinSeg($y_{s:t}$, $\beta$)
INPUT: Subseries $y_{s:t} = \{y_s, \dots, y_t\}$ of length $t-s+1$, penalty $\beta$
OUTPUT: Set of detected changepoints $cp$

IF $t - s \le 1$
  RETURN $\{\}$ // No changepoint in segments of length 1 or less
COMPUTE $\mathcal{Q} \leftarrow \min_{\tau \in \{s,\dots,t-1\}} \left[\mathcal{L}(y_{s:\tau}) + \mathcal{L}(y_{\tau+1:t}) - \mathcal{L}(y_{s:t}) + \beta\right]$
IF $\mathcal{Q} < 0$
  $\hat{\tau} \leftarrow \arg\min_{\tau \in \{s,\dots,t-1\}} \left[\mathcal{L}(y_{s:\tau}) + \mathcal{L}(y_{\tau+1:t}) - \mathcal{L}(y_{s:t})\right]$
  $cp \leftarrow \{\hat{\tau},\ \text{BinSeg}(y_{s:\hat{\tau}}, \beta),\ \text{BinSeg}(y_{\hat{\tau}+1:t}, \beta)\}$
  RETURN $cp$
RETURN $\{\}$ // No changepoint if $-LR/2$ is above penalty $-\beta$

3.3 Optimal Partitioning

Another solution to the over-fitting problem of Equation 3.1 lies in introducing a penalty term that discourages adding too many changepoints. This is known as the penalised approach.
To achieve this, we want to minimize the following cost function: 𝑄𝑛,𝛽= min 𝐾∈ℕ[ min 𝜏1,…,𝜏𝐾 𝐾 ∑ 𝑘=0 ℒ(𝑦𝜏𝑘+1∶𝜏𝑘+1) + 𝛽𝐾] , (3.3) where 𝑄𝑛,𝛽represents the optimal cost for segmenting the data up to time 𝑛with a penalty 𝛽 that increases with each additional changepoint 𝐾. With the 𝛽term, for every new changepoint added, the cost of the full segmentation increases, discouraging therefore models with too many changepoints. Unlike Binary Segmentation, which works iteratively and makes local decisions about poten-tial changepoints, and as we have seen it is prone to errors, solving 𝑄𝑛,𝛽ensures that the segmentation is globally optimal, as in the location of the changes are the best possible to minimise our cost. Now, directly solving this problem using a brute-force search is computationally prohibitive, as it would require checking every possible combination of changepoints across the sequence: the number of possible segmentations grows exponentially as 𝑛increases… Fortunately, this problem can be solved efficiently using a sequential, dynamic programming algorithm: Optimal Partitioning (OP), from Jackson et al. (2005). OP solves Equation 3.3 exactly through the following recursion. We start with 𝒬0,𝛽= −𝛽, and then, for each 𝑡= 1, … , 𝑛, we compute: 𝒬𝑡,𝛽= min 0≤𝜏<𝑡[𝒬𝜏,𝛽+ ℒ(𝑦𝜏+1∶𝑡) + 𝛽] . (3.4) 51 Here, 𝒬𝑡,𝛽represents the optimal cost of segmenting the data up to time 𝑡. The al-gorithm builds this solution sequentially by considering each possible segmentation 𝒬0,𝛽, ⋯, 𝒬𝑡−2,𝛽, 𝒬𝑡−1,𝛽before the current time 𝑡, plus the segment cost up to current time 𝑡, ℒ(𝑦𝜏+1∶𝑡). 3.3.1 Optimal partitinioning in action This recursion can be quite hard to digest, and is, as usual, best described graphically. Step 1 Say we are at 𝑡= 1. In this case, according to equation above, the optimal cost up to time one will be given by (remember that the 𝛽cancels out with 𝑄0,𝛽!): 𝒬1,𝛽= [−𝛽+ ℒ(𝑦1∶1) + 𝛽] = ℒ(𝑦1∶1) 0 2 4 6 8 10 t L(y1:1) Step 2. Now, at the second step, we have to minimise between two segmentations: • One with the whole sequence in a second segment alone (again, 𝛽cancels out with 𝑄0,𝛽= −𝛽), and this will be given by ℒ(𝑦1∶2) (dotted line) • One with the optimal segmentation from step 1 𝒬1,𝛽(whose cost considered only the first point in its own segment!), to which we have to sum the cost relative to a second segment ℒ(𝑦2∶2) that puts the second point alone, and the penalty 𝛽as we have added a new segment! We minimise across the two, and this gives us 𝑄2,𝛽. 52 0 2 4 6 8 t L(y1:2) Q1β + L(y2:2) + β Step 3: Similarly, at 𝑡= 3 we have now three segmentations to choose from: • The one that puts the first three observations in the same segment, whose cost will be given simply by ℒ(𝑦1∶2), • The one considering the optimal segmentation from time 1, plus the cost of adding an extra segment with observation 2 and 3 together • Finally the optimal from segmentation 2, 𝒬2,𝛽, plus the segment cost of fitting an extra segment with point 3 alone. Note that 𝒬2,𝛽will come from the step before: if we would have been beneficial to add a change, at the previous step, this information is carried over! Again, we pick the minimum across these three to get 𝒬3,𝛽, and proceed. 0 2 4 6 8 t L(y1:3) Q1β + L(y2:3) + β Q2β + L(y3:3) + β Step 𝑛Until the last step! 
Which would look something like this: 53 0 2 4 6 8 t L(y1:6) Q1β + L(y2:6) + β Q2β + L(y3:6) + β Q3β + L(y4:6) + β Q4β + L(y5:6) + β Q5β + L(y6:6) + β A formal description of the algorithm can be found below: INPUT: Time series 𝑦= (𝑦1, ..., 𝑦𝑛), penalty 𝛽 OUTPUT: Optimal changepoint vector 𝑐𝑝𝑛 Initialize 𝒬0 ←−𝛽 Initialize 𝑐𝑝0 ←{} // a set of vectors ordered by time FOR 𝑡= 1, … , 𝑛 𝒬𝑡←min0≤𝜏<𝑡[𝒬𝜏+ ℒ(𝑦𝜏+1∶𝑡) + 𝛽] ̂ 𝜏←arg min0≤𝜏<𝑡[𝒬𝜏+ ℒ(𝑦𝜏+1∶𝑡) + 𝛽] 𝑐𝑝𝑡←(𝑐𝑝̂ 𝜏, ̂ 𝜏) // Append the changepoint to the list at the last optimal point RETURN 𝑐𝑝𝑛 Note. To implement the line 𝒬𝑡←min0≤𝜏<𝑡[𝒬𝜏+ ℒ(𝑦𝜏+1∶𝑡) + 𝛽], we could either use an inner for cycle and iteratively compute 𝒬𝑡for each 𝜏, or we could use a vectorized approach. If we created a vectorized version of our function, it would look something like this: costs = map_dbl(1:t, (tau) Q[tau] + L(y[(tau):t]) + beta) Q[t + 1] = min(costs) tau_hat = which.min(costs) 54 We range across 1:t as R index starts from 1, so everything is shifted by 1 (that’s why in the cost we have just 𝜏instead of 𝜏+ 1. Remember that the map_dbl() function is used to apply a function to each element of a vector and return a new vector. This would be more efficient and faster. We will code the Optimal Partitioning algorithm in the lab. Running the Optimal Partitioning method on our example scenario, with the same penalty 𝛽= 2 log(400) = 11.98 as above, gives changepoint locations 𝜏1∶4 = {100, 203, 301}. Loading required package: zoo Attaching package: 'zoo' The following objects are masked from 'package:base': as.Date, as.Date.numeric Successfully loaded changepoint package version 2.3 WARNING: From v.2.3 the default method in cpt. functions has changed from AMOC to PELT. See NEWS for details of all changes. Time y 0 100 200 300 400 −2 0 2 4 So we can see how on this dataset in particular, OP performs slightly better then Binary Segmentation on the last change, getting closer to the real changepoint of 300! 55 3.3.2 Neuroblastoma example Returning to the original example at the start of the module, the neuroblastoma dataset, we run both Binary Segmentation, and Optimal Partitioning. We report results in the plot below (blue for BS, green for OP). In this case, the algorithms return the same four changepoints: 41 113 157 41 113 157 −6 −4 −2 0 2 4 0 50 100 150 200 Time y Some of you might come up with two (very interesting) questions that hopefully we will answer next week… • If the methods perform roughly the same, which one do I choose? • Why is the data on a different scale then that presented at the start of the chapter? 3.4 Exercises 3.4.1 Workshop 3 1. For the vector 𝑦1∶4 = (0.5, −0.1, 12.1, 12.4), and a penalty 𝛽= 5 calculate, pen on paper (and calculator), all the Optimal Partitioning and Binary Segmentation steps for the Gaussian change-in-mean case. TIP: To speed up computations, you want to pre-compute all segment costs ℒ(𝑦𝑙∶𝑢). I have pre-computed some of these costs in the table below: 56 𝑙\𝑢 1 2 3 4 1 ℒ(𝑦1∶1) 0.18 94.59 145.43 2 0.00 ℒ(𝑦2∶3) 101.73 3 0.00 ℒ(𝑦3∶4) 4 0.00 3.4.2 Lab 3 1. Code the Optimal Partitioning algorithm for the Gaussian change-in-mean case. Your function should take as input three things: • A vector y, our observations • A double penalty, corresponding to the 𝛽penalty of our penalised cost • A function COST. This function should take as input arguments y, s, t, and act as ℒ(𝑦𝑠∶𝑡). You fill find a skeleton below. Note: with adequate indexing, you can pass to COST just the data y! 
OP <- function (y, penalty, COST) { ### pre-compute all the costs here ### your initialization here for (t in 1:n) { ### your recursion here } return(changepoints) } For this exercise, we are implementing the Gaussian change-in-mean case. Therefore, a skeleton for our cost function will be: 57 costMean = function(y, s, t) { # code for computing the Gaussian change-in-mean cost here return(your_cost) } You should be then able to call your OP function as: set.seed(123) y <- c(rnorm(100), rnorm(100, 5), rnorm(100, -1)) OP(y, 15, costMean) This should return 100, 200. Tips: a. Again, you can pre-compute all the possible $\mathcal{L}(y_{l:u})$, for $u \geq l$ to sav b. Be very careful with indexing... R starts indexing at 1, however, in the pseudocode, you 58 4 PELT, WBS and Penalty choices 4.1 Drawbacks of OP and BS When deciding which segmentation approach to use, Binary Segmentation (BS) and Optimal Partitioning (OP) each offer different strengths. The choice largely depends on the character-istics of the data and the goal of the analysis. 4.1.1 Quality of the Segmentation Generally, Optimal Partitioning (OP) provides the most accurate segmentation, especially when we have a well-defined model and expect precise changepoint detection. OP ensures that the solution is optimal by globally minimizing the cost function across all possible segmenta-tions. This is ideal for datasets with clear changes, even if noise is present. Let’s consider a case with true changepoints at 𝜏= 100, 200, 300, and segment means 𝜇1∶4 = 2, 1, −1, 1.5: −2 0 2 4 0 100 200 300 400 x y 59 While the underlying signal follows these clear shifts, noise complicates segmentation. Binary Segmentation uses a greedy process where each iteration looks for the largest changepoint. Although fast, this local search can make mistakes if the signal isn’t perfectly clear, particularly in the early stages of the algorithm. For example, running BS on this dataset introduces a mistake at 𝜏= 136, as shown in the plot below: −2 0 2 4 0 100 200 300 400 x y −75 −50 −25 0 0 100 200 300 400 x LR This error is carried in the subsequent steps, and the full binary segmentation algorithm will output an additional change at 𝜏= 136… Optimal Partitioning (OP), on the other hand, evaluates all possible segmentations considers the overall fit across the entire sequence. It is therefore less susceptible to adding “ghost” changepoints, as rather than focusing on the largest change at each step. To illustrate, we compare the segmentations generated by both approaches: 60 102 136 200 302 102 200 302 −2 0 2 4 0 100 200 300 400 Time y 4.1.2 Computational Complexity Well, you may ask why not using OP all the time, then? Well, in changepoint detection, in which is the most appropiate method, we often have to keep track of the computational performance too, and Binary Segmentation is faster on average. For this reason, for large datasets where approximate solutions are acceptable, it might be the best option. Specifically: • Binary Segmentation starts by dividing the entire sequence into two parts, iteratively applying changepoint detection to each segment. In the average case, it runs in 𝒪(𝑛log 𝑛) because it avoids searching every possible split point. However, in the worst case (if all data points are changepoints), the complexity can degrade to 𝒪(𝑛2), as each step can require recalculating test statistics for a growing number of segments. 
• Optimal Partitioning, on the other hand, solves the changepoint problem by recursively considering every possible split point up to time $t$. The result is an optimal segmentation, but at the cost of $\mathcal{O}(n^2)$ computations. This holds true for both the average and worst cases, as it always requires a full exploration of all potential changepoints.

4.2 PELT and WBS

The good news is that, despite both algorithms having drawbacks, recent developments have addressed them. In the next sections, we will introduce two newer algorithms, PELT and WBS.

4.2.1 PELT: an efficient solution to OP

In OP, we can reduce the number of checks to be performed at each iteration, reducing the complexity. This operation is called pruning. Specifically, given a cost function $\mathcal{L}(\cdot)$, suppose there exists a constant $\kappa$ such that for every $l < \tau < u$:

$$\mathcal{L}(y_{l+1:\tau}) + \mathcal{L}(y_{\tau+1:u}) + \kappa \le \mathcal{L}(y_{l+1:u}) \quad (4.1)$$

Under this condition it is possible to prune without resorting to an approximation. For many cost functions, such as the Gaussian cost, such a constant $\kappa$ exists. Then, for any $\tau < t$, if

$$\mathcal{Q}_{\tau,\beta} + \mathcal{L}(y_{\tau+1:t}) + \kappa \ge \mathcal{Q}_{t,\beta} \quad (4.2)$$

holds, then for any $T > t$, $\tau$ can never be the optimal last change location prior to time $T$.

Proof. Assume that Equation 4.2 is true. Then, for $\tau < t < T$:

$$\mathcal{Q}_{\tau,\beta} + \mathcal{L}(y_{\tau+1:t}) + \beta + \kappa \ge \mathcal{Q}_{t,\beta} + \beta.$$

Adding $\mathcal{L}(y_{t+1:T})$ on both sides:

$$\mathcal{Q}_{\tau,\beta} + \mathcal{L}(y_{\tau+1:t}) + \beta + \kappa + \mathcal{L}(y_{t+1:T}) \ge \mathcal{Q}_{t,\beta} + \beta + \mathcal{L}(y_{t+1:T}),$$
$$\implies \mathcal{Q}_{\tau,\beta} + \mathcal{L}(y_{\tau+1:T}) + \beta \ge \mathcal{Q}_{t,\beta} + \mathcal{L}(y_{t+1:T}) + \beta,$$

by Equation 4.1. Hence, it follows that $\tau$ can never be the minimiser of $\mathcal{Q}_{\tau',\beta} + \mathcal{L}(y_{\tau'+1:T}) + \beta$ over the candidate set $R_T = \{0, 1, \dots, T-1\}$, for any $T > t$, and can therefore be removed from the set of splits to check at every future step!

Using the condition in Equation 4.2, the PELT algorithm (Pruned Exact Linear Time; Killick, Fearnhead, and Eckley (2012)) solves the penalised minimization of Equation 3.4 exactly, with an expected computational cost that can be linear in $n$, while still retaining $\mathcal{O}(n^2)$ computational complexity in the worst case. This is achieved by reducing the number of segment costs to evaluate at each iteration via an additional pruning step based on Condition 4.2. That is, setting $\kappa = 0$, if

$$\mathcal{Q}_{\tau,\beta} + \mathcal{L}(y_{\tau+1:t}) \ge \mathcal{Q}_{t,\beta}$$

then we can safely prune the segment cost related to $\tau$, as $\tau$ will never be the optimal changepoint location up to any time $T > t$ in the future.

When will this condition be met? Plugging the recursion from the previous week into the right-hand side:

$$\mathcal{Q}_{\tau,\beta} + \mathcal{L}(y_{\tau+1:t}) \ge \min_{0 \le \tau' < t}\left[\mathcal{Q}_{\tau',\beta} + \mathcal{L}(y_{\tau'+1:t}) + \beta\right],$$

we see how $\beta$ again plays a central role, as it is absent from the left-hand side. The intuition is that the condition tends to be triggered once a new changepoint enters the optimal segmentation, at which point many older candidates get pruned. This is why colloquially we say that "PELT prunes at the changes". And if the number of changes increases linearly with the length of the data, the algorithm achieves an expected computational cost that is linear in $n$, without any drawbacks!
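In code, this pruning amounts to one extra filtering step at the end of each OP iteration. A minimal sketch (the function and argument names are illustrative; $\kappa$ is taken to be 0, as for the Gaussian cost, and Q is assumed to be stored so that Q[j + 1] holds $\mathcal{Q}_{j,\beta}$, since R vectors start at 1):

prune_candidates <- function(R, Q, y, t, cost) {
  # R: current candidate set, i.e. the values tau in {0, ..., t-1} that survived so far
  # cost: a segment cost function such as costMean(y, s, t)
  keep <- sapply(R, function(tau) Q[tau + 1] + cost(y, tau + 1, t) < Q[t + 1])
  c(R[keep], t)  # prune, then add t as a candidate for the next iteration
}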
63 To reduce computational complexity, we can slightly modify the OP algorithm, to add the pruning condition above: PELT INPUT: Time series 𝑦= (𝑦1, ..., 𝑦𝑛), penalty 𝛽 OUTPUT: Optimal changepoint vector 𝑐𝑝𝑛 Initialize 𝒬0 ←−𝛽 Initialize 𝑐𝑝0 ←{} // a set of vectors ordered by time Initialise 𝑅1 = (0) // a vector of candidate change locations FOR 𝑡= 1, … , 𝑛 𝒬𝑡←min𝜏∈𝑅𝑡[𝒬𝜏+ ℒ(𝑦𝜏+1∶𝑡) + 𝛽] ̂ 𝜏←arg min𝜏∈𝑅𝑡[𝒬𝜏+ ℒ(𝑦𝜏+1∶𝑡) + 𝛽] 𝑐𝑝𝑡←(𝑐𝑝̂ 𝜏, ̂ 𝜏) // Append the changepoint to the list at the last optimal point 𝑅𝑡+1 ←{𝜏∈𝑅𝑡∶𝒬𝜏+ ℒ(𝑦𝜏+1∶𝑡) < 𝒬𝑡} // select only the change locations that are still optimal 𝑅𝑡+1 ←(𝑅𝑡+1, 𝑡) // add t to the points to check at the next iteration RETURN 𝑐𝑝𝑛 64 As the segmentation retained is effectively the same, there are literally no disadvantages in using PELT over OP, if the cost function allows to do so. However, PELT still has some disadvantages: • PELT pruning works only over some cost functions, those for which the condition above is true. For example, in a special case of change-in-slope, as we will see in the workshop, we have that the cost from the next change depends on the location of the previous one, making it impossible for PELT to prune without loosing optimality. • We mentioned above how PELT over iterations at which a change is detected. For signals where changes are not frequent, PELT does not benefits from. A more sophisticated approach is that of FPOP, that prunes at every iteration. FPOP employs a different type of pruning, called functional pruning, that at every iteration only check costs that are likely associated to a change. However, despite the pruning is stronger FPOP works only over few selected models. 4.2.2 WBS: Improving on Binary Segmentation In BS, one of the issues that may arise, is an incorrect segmentation. WBS, Fryzlewicz (2014), is a multiple changepoints procedures that improve on the BS changepoint estimation via computing the initial segmentation cost of BS multiple times over 𝑀+ 1 random subsets of the sequence, 𝑦𝑠1∶𝑡1, … , 𝑦𝑠𝑀∶𝑡𝑀, 𝑦1∶𝑛, picking the best subset according to what achieves the smallest segmentation cost and reiterating the procedure over that sample accordingly. The idea behind WBS lies in the fact that a favorable subset of the data 𝑦𝑠𝑚∶𝑡𝑚could be drawn which contains a true change sufficiently separated from both sides 𝑠𝑚, 𝑡𝑚of the sequence. By the inclusion of the 𝑦1∶𝑛entire sequence among the subsets, it is guaranteed that WBS will do no worse than the simple BS algorithm. We can formally provide a description of WBS as a recursive procedure. We first start by drawing the set of intervals ℱ= {[𝑠1, 𝑡1], … , [𝑠𝑀, 𝑡𝑀]} where: 𝑠𝑚∼U(1, 𝑛), 𝑡𝑚∼U(𝑠𝑚+ 1, 𝑛) Then, WBS will have just a couple of alterations to the original Binary Segmentation: WBS(𝑦𝑠∶𝑡, 𝛽) 65 INPUT: Subseries 𝑦𝑠∶𝑡= {𝑦𝑠, … , 𝑦𝑡} of length 𝑡−𝑠+ 1, penalty 𝛽 OUTPUT: Set of detected changepoints 𝑐𝑝 IF 𝑡−𝑠≤1 RETURN {} // No changepoint in segments of length 1 or less ℳ𝑠,𝑒= Set of those indices m for which [𝑠𝑚, 𝑡𝑚] ∈ℱis such that [𝑠𝑚, 𝑡𝑚] ⊂[𝑠, 𝑡]. ℳ←ℳ∪{[1, 𝑛]} COMPUTE 𝒬← min [𝑠𝑚,𝑡𝑚]∈ℳ 𝜏∈{𝑠𝑚,…,𝑡𝑚} [ℒ(𝑦𝑠∶𝜏) + ℒ(𝑦𝜏+1∶𝑡) −ℒ(𝑦𝑠∶𝑡) + 𝛽] IF 𝒬< 0 ̂ 𝜏← arg min [𝑠𝑚,𝑡𝑚]∈ℳ 𝜏∈{𝑠𝑚,…,𝑡𝑚} [ℒ(𝑦𝑠∶𝜏) + ℒ(𝑦𝜏+1∶𝑡) −ℒ(𝑦𝑠∶𝑡)] 𝑐𝑝←{ ̂ 𝜏, WBS(𝑦𝑠∶̂ 𝜏, 𝛽), WBS(𝑦̂ 𝜏+1∶𝑡, 𝛽)} RETURN 𝑐𝑝 RETURN {} // No changepoint if −𝐿𝑅/2 is above penalty −𝛽 One of the major drawbacks of WBS is that in scenarios where we find frequent changepoints, in order to retain a close-to-optimal estimation, one should draw a higher number of 𝑀intervals (usually of the order of thousands of intervals). 
This can be problematic given that WBS has computational complexity that grows linearly in the total length of the observations of the subsets. 4.3 Penalty Selection In previous sections, we applied the changepoint detection algorithms using a penalty term of 2 log(𝑛). As we’ll see, this is the BIC penalty (Bayes Information Criterion), a widely used penalty in changepoint detection. However, it is important to note that BIC is just one of several penalty types that can be applied… As in the single change, some penalty may be more conservative then others! Choosing the correct penalty is key to obtaining a sensible segmentation of the data. The penalty term plays a significant role in balancing the goodness-of-fit of the model with its complexity: 66 • A lower penalty may lead to an over-segmentation, where too many changepoints are detected • A higher penalty could under-segment the data, missing important changepoints. The three most common penalties, are: • AIC (Akaike Information Criterion): The AIC penalty takes value of 2𝑝, where 𝑝is the number of parameters that one adds to the model. In multiple changes scenario, every new change, we add a new parameter to the model (as we estimate the signal). This, in OP and BS approaches, where the penalty is added at different iterations, shouls we fit a change, this translates in 𝛽= 2×2 = 4 as our 𝛽. While simple to apply, AIC is known to be asymptotically inconsistent: it tends to overestimate the number of changepoints as the sample size increases. Intuitively, this is because AIC is designed to minimize the prediction error rather than to identify the true model structure. It favors models that fit the data well, often leading to the inclusion of more changepoints than necessary. • BIC (Bayesian Information Criterion): The BIC penalty is given by 𝑝log(𝑛). In our approaches, this translates to: 𝛽= 2 log(𝑛), that we add for each additional changepoint. BIC is generally more conservative than AIC and is consistent, meaning it will not overestimate the number of changepoints as the sample size grows. • MBIC (Modified BIC): The MBIC penalty, from Zhang and Siegmund (2007), is an extension of the BIC that includes an extra term to account for the spacing of the changepoints. We can approximate it, in practice, by using a value of 𝛽= 3 log(𝑛) as our penalty. In practice, it is even more conservative then the BIC penalty. 4.3.1 Example in R: Comparing Penalties with PELT Let’s now examine how different penalties impact the results of changepoint detection using the changepoint package in R. We’ll focus on the PELT method and compare the outcomes when using AIC, BIC, and MBIC penalties. As a data sequence, we will pick a different chromosome in our Neuroblastoma dataset. Can you tell, by eye, how many changes are in this sequence? 
67 −2 0 2 4 0 50 100 150 position/1e+06 logratio (noisy copy number measurement) Chromosome 5 We can compare the three penalties using the changepoint library, as below: data <- one.dt$logratio n <- length(data) # Apply PELT with AIC, BIC, and MBIC penalties cp_aic <- cpt.mean(data, method = "PELT", penalty = "AIC") cp_bic <- cpt.mean(data, method = "PELT", penalty = "BIC") cp_mbic <- cpt.mean(data, method = "PELT", penalty = "MBIC") # Extract changepoint locations for each penalty cp_aic_points <- cpts(cp_aic) cp_bic_points <- cpts(cp_bic) cp_mbic_points <- cpts(cp_mbic) # Create a data frame for plotting with ggplot2 plot_data <- data.frame( index = 1:n, data = data) # Create data frames for changepoints with corresponding method labels cp_df <- bind_rows( data.frame(index = cp_aic_points, method = "AIC"), data.frame(index = cp_bic_points, method = "BIC"), 68 data.frame(index = cp_mbic_points, method = "MBIC") ) ggplot(plot_data, aes(x = index, y = data)) + geom_point() + # Plot the data line first geom_vline(data = cp_df, aes(xintercept = index, color = method)) + facet_grid(method ~ .) + labs(title = "PELT with Different Penalties: AIC, BIC, MBIC", x = "Index", y = "Data") + theme_minimal() + theme(legend.position = "none") AIC BIC MBIC 0 50 100 150 −2 0 2 4 −2 0 2 4 −2 0 2 4 Index Data PELT with Different Penalties: AIC, BIC, MBIC We can see how from this example, the AIC likely overestimated the number of changepoints, while BIC and MBIC provided more conservative and reasonable segmentations. By eye, the MBIC seems to have done the better job! 4.3.2 CROPS: running with multiple penalties Hopefully, the example above should have highlighted that finding the right penalty can be tricky. One solution, would be to run our algorithm for a range of penalties, and then choose a posteriori what the best segmentation is. The CROPS algorithm, from Haynes, Eckley, and Fearnhead (2017), is based on this idea. CROPS works alongside an existing penalised changepoint detection algorithm, like PELT or WBS: as long as the changepoint method can map a penalty value to a (decreasing) segmentation cost, CROPS could be applied. 69 CROPS takes as input a range of penalties [𝛽min, 𝛽max], and explores all possible segmentations within those two penalties in a clever way, to fit the changepoint model as least as we can. As CROPS calculates changepoints for a particular penalty, it keeps track of the range of penalty values where that specific set of changepoints is valid. This works because, for certain ranges of penalties, the set of changepoints stays the same. E.g. for penalties between 𝛽1 and 𝛽2, the changepoints might remain the same, so CROPS only needs to run the changepoint detection once for that range. We won’t introduce the method formally, but in an intuitive way, CROPS works in this way: 1. It starts calculates changepoints at two extreme penalties: 𝛽min and 𝛽max. If those are the same, it quits. 2. Alternatively, as a binary search, CROPS selects a mid-point penalty 𝛽int based on whether the segmentation change, and runs the changepoint detection again on [𝛽min, 𝛽int], and [𝛽int, 𝛽max], refining its search for the next penalty. 3. It repeats 2 iteratively until no further segmentations are found. We can use CROPS to generate an elbow plot for selecting the appropriate penalty value in changepoint detection. 
In Data Science and Machine Learning, elbow plots are graphs that helps us choosing the appropiate value of a parameter, balancing between model complexity (in our case number of changepoints) and goodness of fit (how tightly our model fits the data). In case of CROPS, we can plot the number of changepoints against the penalty value from our range. The curve typically shows a steep drop at first, as many changepoints are detected with low penalties, then flattens as the penalty increases and fewer changepoints are added. The elbow (hence its name) is the point where the rate of change in the number of changepoints significantly slows down: 0 20 40 60 80 5 10 15 20 Number of Changepoints Penalty Value X 70 The elbow is a point of balance between model fit and complexity. As a rule of thumb, a good choices of a penalty reside in picking either the penalty that generates the segmentation at the elbow, or the one at the point immediately prior. Going back to our neuroblastoma example above. We run CROPS for penalties [2, 40], and we then generate the elbow plot: out <- cpt.mean(data, method = "PELT", penalty = "CROPS", pen.value = c(2, 40)) plot(out,diagnostic=TRUE) abline(v=3) 5 10 15 20 10 20 30 40 Number of Changepoints Penalty Value We can see that the elbow is at 4 changepoints, therefore this could suggest that a segmentation with 4 changes might be the best! This gives us: cpts(out, 3) 3 17 52 plot(out, ncpts= 3, type="p", pch=16) 71 Time data.set.ts(x) 0 50 100 150 −2 0 2 4 4.4 Exercises 4.4.1 Workshop 4 1. Looking at last week workshop exercise solution, which points in the OP recursion would have been pruned by PELT? Check that the PELT pruning condition is true. 2. The model (not the cost!) for a single segment of a continuous change-in-slope is given by: 𝑦𝑡= 𝜏𝑖𝜃𝜏𝑖+ 𝜃𝜏𝑖+1(𝑡−𝜏𝑖) + 𝜖𝑡, for 𝑡= 𝜏𝑖+ 1, … , 𝜏𝑖+1, 𝜖𝑡∼𝑁(0, 1) (4.3) where 𝜃𝜏𝑖represents the value of the slope at changepoint 𝜏𝑖and 𝜙𝜏𝑖+1 is the value at the next changepoint 𝜏𝑖+1. Note, in this example, for simplicity, we assume the intercept is set equal to 0. This model is a variation from the one we had last week as it enforces continuity, e.g. the value at the end of one segment, needs to be the at the next: 72 continuous discontinuous 0 50 100 150 200 250 0 50 100 150 200 250 −40 −20 0 t signal a. Can you identify the elements where there is dependency across segments? Once you’ve done that, rewrite the model for one change (model a). Then, still for one change, rewrite the model setting 𝜏𝑖= 0 (model b). Would you be able to use PELT pruning with this one? b. Write down the continuous model from equation Equation 4.3, and the one from the previous point for a case where you have two segments, 𝜃1, 𝜃2. Then have a look at the model from week 2 (model c). What are the differences across the three models? 3. The PELT algorithm will only be able to deal with the discontinuous model. We will now revisit the Simpsons dataset, fitting this multiple changes model. This can be achieved via: library(changepoint) data <- cbind(y, 1, 1:length(y)) out <- cpt.reg(data, method="PELT") cat("Our changepoint estimates:", cpts(out)) Our changepoint estimates: 176 359 363 585 589 708 plot(out, ylab="y", xlab="t", pch=16) abline(v = cpts(out), col = "red") 73 0 200 400 600 4 5 6 7 8 t y Comment this segmentation. In which way we improved from the segmentation in week 2? What would you change? 4.4.2 Lab 4 In this lab we will test changepoint algorithms over some artificial data. 
Each sequence will have one of the following Gaussian change-in-mean patterns: 74 The code below will generate 400 sequences, which will be stored in a list called full_seqs. Every 100 sequences you will have a different change-pattern, across the four different change patterns. library(tidyverse) generate_signal <- function(n, pattern = c("none", "up", "updown", "rand1"), nbSeg = 8, jumpS type <- match.arg(pattern) if (type == "rand1") { set.seed(42) rand1CP <- rpois(nbSeg, lambda = 10) r1 <- pmax(round(rand1CP n / sum(rand1CP)), 1) s <- sum(r1) # Adjust r1 to match sum to n r1 <- if (s > n) { while (sum(r1) > n) r1[which(r1 > 1)[sample(1:length(which(r1 > 1)), 1)]] <- r1[which(r r1 } else { sample(rep(seq_along(r1), n - s)) %>% table() %>% as.numeric() %>% +(r1) } set.seed(43) rand1Jump <- runif(nbSeg, min = 0.5, max = 1) sample(c(-1, 1), nbSeg, replace = TRUE) } # Generate scenarios switch( type, none = rep(0, n), up = rep(seq(0, nbSeg - 1) jumpSize, each = n / nbSeg), updown = rep((seq(0, nbSeg - 1) %% 2) jumpSize, each = n / nbSeg), rand1 = map2(rand1Jump, r1, ~rep(.x jumpSize, .y)) %>% unlist() ) } sims <- expand_grid(pattern = c("none", "up", "updown", "rand1"), rep = 1:100) full_seqs <- pmap(sims, (pattern, rep) { mu <- generate_signal(1e4, pattern) set.seed(rep) y <- mu + rnorm(length(mu)) 75 cps <- which(diff(mu) != 0) return(list(y = y, mu = mu, cps = cps, pattern = pattern)) }) # each component of the list describes a sequence: summary(full_seqs) Length Class Mode y 10000 -none- numeric mu 10000 -none- numeric cps 0 -none- numeric pattern 1 -none- character 1. Plot four sample sequences, each with a different change pattern, with superimposed signals. You should replicate the plot above. 2. Install the changepoint package. By researching ?cpt.mean, learn about the change in mean function. Run the PELT algorithm for change in mean on the four sequences you picked above, with MBIC penalty. 3. Compare, in a simulation study, across the four different scenarios, performances of: a. Binary Segmentation, with AIC and BIC penalty b. PELT, with AIC and BIC penalty You will need to compare performances in term of Mean Square Error of the fitted signal MSE = ||𝜇1∶𝑛− ̂ 𝜇1∶𝑛||2 2. A function has been already coded for you below: mse_loss <- function(mu_true, mu_hat) { return(sum((mu_true - mu_hat) ^ 2)) } Report results by scenario and algorithm. NOTE: You will be able to access parameters estimates via the function param.est(). To get ̂ 𝜇1∶𝑛, necessary for the MSE computation above, we can use: results <- # cpt.mean output here rep(param.est(result)$mean, times = diff(c(0, cp_est, length(y)))) 76 5 Working with Real Data In practice, working with real-world data presents various challenges that can complicate our analyses. Unlike idealised examples, real data often contain noise, outliers, and other irregularities that can impair the accuracy of the segmentations we aim to generate. The assumptions we make in our models may not hold up well, and this can lead to poor estimates of changepoints. To tackle these issues, it is important to either use robust methods, or consider carefully how we handle the estimation of key parameters within our changepoint detection models. 5.1 Assessing the model fit 5.1.1 Assessing Residuals from a Changepoint Model When dealing with real-world data and changepoint, it’s essential to evaluate how well the model fits the data. 
5 Working with Real Data

In practice, working with real-world data presents various challenges that can complicate our analyses. Unlike idealised examples, real data often contain noise, outliers, and other irregularities that can impair the accuracy of the segmentations we aim to generate. The assumptions we make in our models may not hold up well, and this can lead to poor estimates of changepoints. To tackle these issues, it is important to either use robust methods, or consider carefully how we handle the estimation of key parameters within our changepoint detection models.

5.1 Assessing the model fit

5.1.1 Assessing Residuals from a Changepoint Model

When dealing with real-world data and changepoints, it is essential to evaluate how well the model fits the data. Apart from the elbow plot and visual inspection of a segmentation, this evaluation is often done by examining the residuals, the differences between the observed data points and the values predicted by the model. If the residuals exhibit certain patterns, it may indicate that the model is not capturing all the underlying structure of the data, or that assumptions about the error distribution are violated.

This section introduces three key diagnostic tools for assessing the residuals from a changepoint model: the histogram of residuals, the normal Q-Q plot, and the residuals vs. fitted values plot. These tools help assess whether the assumptions of the model (such as normality and homoscedasticity) hold. As an example, we will run diagnostics on the following sequence:

[Figure: the example sequence]

On which PELT returns the segmentation:

[Figure: the example sequence with the PELT segmentation superimposed]

5.1.1.1 1. Histogram of the Residuals

The histogram of the residuals is a simple but effective tool for visualizing the distribution of residuals. The histogram checks whether the residuals are approximately normally distributed. You should see a bell-shaped histogram centered around zero, indicating that the residuals are symmetrically distributed around the mean with no significant skewness or heavy tails.

hist(residuals(out), main = "Histogram of the residuals")

[Figure: histogram of the residuals]

• What to Watch for:
– Skewness: If the histogram is not symmetric, it could indicate skewness in the residuals, suggesting that the model may not fully capture the data's structure.
– Heavy or light tails: Large bars far from zero indicate extreme residuals, which could be outliers. If the histogram shows heavier tails, this may suggest that the data contain more extreme values than expected under the assumption of normally distributed errors. But note that if we see lighter tails, this might also suggest that we are overfitting those outlier points!

5.1.1.2 2. Normal Q-Q Plot

The normal quantile-quantile (Q-Q) plot compares the distribution of the residuals to a theoretical normal distribution. The idea is to see if the residuals deviate significantly from the straight line that would indicate normality. Points should fall along a straight diagonal line, indicating that the residuals closely follow a normal distribution.

qqnorm(residuals(out), main = "Normal Q-Q Plot of the Residuals")
qqline(residuals(out), col = "red")

[Figure: normal Q-Q plot of the residuals]

• What to Watch for:
– Deviations from the line: Systematic deviations suggest non-normality. For instance, if points deviate upwards or downwards at the tails of the plot, it might indicate heavy tails (more extreme values than a normal distribution would predict) or light tails (fewer extreme values). As above, both cases might suggest that we have outliers, as we might be in a scenario where we are overfitting our sequence.

5.1.1.3 3. Residuals vs. Fitted Values Plot

Finally, this plot shows the residuals on the y-axis against the fitted values from the model on the x-axis. It is particularly useful for checking if there are patterns in the residuals that suggest issues with the model fit. We would like to see clusters consisting of a random scatter of points around zero, with no discernible pattern, and roughly equal spread across all fitted values and clusters.

plot(fitted(out), residuals(out), xlab = "fitted", ylab = "residuals")

[Figure: residuals against fitted values]

• What to Watch for:
– Observing clusters of points is normal, as the fitted values are the estimate of our piecewise signal. Maybe counter-intuitively, we need to look out for single observations sitting alone! These could be segments with only one or a few observations in them, which could be a sign of overfitting. Note that these will not show up in the two plots above! (A small complementary check is sketched below.)
– Heteroscedasticity (non-constant variance): If the residuals' spread increases or decreases as the fitted values increase, it may indicate heteroscedasticity, which violates one of the key assumptions of many models (constant variance of residuals). If you observe heteroscedasticity only in one of the clusters, it might mean that we are underestimating the number of changes!
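As a quick complement to the plot above (this check is not from the notes; it assumes the fitted object out and the cpts() and data.set() accessors from the changepoint package), tabulating the segment lengths implied by the estimated changepoints makes one- or two-point segments stand out immediately:

```r
# Segment lengths implied by the estimated changepoints: very short segments are the ones
# that tend to signal overfitting, but they stay invisible in the histogram and Q-Q plot.
cp_est      <- cpts(out)
seg_lengths <- diff(c(0, cp_est, length(data.set(out))))
sort(seg_lengths)     # one- or two-point segments will appear at the front
```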
5.1.2 Example: violating homoscedasticity

Let's take the data from before and increase the variance in one of the segments:

[Figure: the example sequence with inflated variance in one segment]

A simple PELT change-in-mean fit gives us the segmentation:

[Figure: the heteroscedastic sequence with the fitted segmentation superimposed]

With the diagnostics:

[Figure: histogram of the residuals, normal Q-Q plot, and residuals against fitted values for this fit]

We can clearly see from the vertical lines that we have an oversegmentation in the third segment. The histogram and Q-Q plot both show some signs of deviation from a strict normal distribution, especially in the tails, which are lighter. This means that there might be some evidence of overfitting. The residuals vs. fitted values plot gives us the clearer picture and provides the strongest evidence of overfitting: we can see many small clusters consisting of just a few points... Those with two points are almost symmetrically placed around the 0 line. This is a clear sign of heteroscedasticity, where we have overfitted across the part of the sequence with larger variance.

5.2 Estimating Other Known Parameters

Let's revisit the classic problem of detecting a change in mean. One of the key assumptions we've relied on so far is that the variance, $\sigma^2$, is fixed and known. Specifically, we used the following cost function in our models:

$$\mathcal{L}(y_{s:t}) = \frac{1}{2\sigma^2}\sum_{i=s}^{t}\left(y_i - \bar y_{s:t}\right)^2$$

In our examples, we've typically set $\sigma^2 = 1$. However, this assumption is often unrealistic when working with real data. When the true value of $\sigma^2$ is unknown or incorrectly specified, the results of changepoint detection can be significantly affected.

• If we underestimate the variance, choosing a value for $\sigma^2$ that is too small, every segment cost is inflated relative to the penalty, and the algorithm may identify noise as changepoints, detecting too many changes.
• Conversely, if we overestimate the variance with a value that is too high, the costs shrink relative to the penalty, and the algorithm may overlook real changes in the data, resulting in fewer detected changepoints (see the toy illustration below, and the example that follows).
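As a toy illustration of this point (not from the notes), the Gaussian change-in-mean cost scales as $1/\sigma^2$, so mis-specifying $\sigma^2$ acts exactly like rescaling the penalty:

```r
# Cost of fitting a single mean to a segment x, for an assumed variance sigma2.
gauss_cost <- function(x, sigma2 = 1) sum((x - mean(x))^2) / (2 * sigma2)

set.seed(1)
x <- c(rnorm(50, mean = 0), rnorm(50, mean = 1))   # one true change in mean

# Cost reduction ("saving") obtained by splitting at the true change, to be compared with the penalty.
saving <- function(sigma2) {
  gauss_cost(x, sigma2) - (gauss_cost(x[1:50], sigma2) + gauss_cost(x[51:100], sigma2))
}

saving(1)      # correctly specified variance
saving(0.25)   # underestimated variance: the saving (like every cost) is 4 times larger, so more changes are accepted
saving(4)      # overestimated variance: the saving is 4 times smaller, so real changes can be missed
```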
5.2.1 Neuroblastoma Example: The Impact of Mis-specified Variance

Consider the neuroblastoma dataset as an example. If we run a changepoint detection method like PELT or BS on this data without any pre-processing, we might observe that the algorithm does not detect any changes at all:

[Figure: the neuroblastoma sequence, with no changepoints detected]

summary(out_op)

Created Using changepoint version 2.3
Changepoint type       : Change in mean
Method of analysis     : PELT
Test Statistic         : Normal
Type of penalty        : MBIC with value, 16.36596
Minimum Segment Length : 1
Maximum no. of cpts    : Inf
Changepoint Locations  :

In this example, PELT fails to detect any changes because the scale of the data implies a much lower variance than the one we assumed, affecting the algorithm's sensitivity to changes.

5.2.2 Addressing Mis-specified Variance with Robust Estimators

One problem with estimating the variance in the change-in-mean scenario is that, depending on the size of the changes, these can skew your estimate... One way around this, under the assumption that the data are i.i.d. Gaussian, is to look at the lag-1 differences $z_t = y_t - y_{t-1}$, for $t = 2, \ldots, n$:

qplot(x = 1:(n-1), y = diff(y)) + theme_minimal()

Warning: qplot() was deprecated in ggplot2 3.4.0.

[Figure: lag-1 differences of the neuroblastoma sequence]

and compute the sample variance across all these differences as an estimator of our $\sigma^2$: $\hat\sigma^2 = \bar S(z_{2:n})$.

However, we have not fixed our problem... yet! What happens exactly at $t = \tau + 1$? Well, across these observations, our $z_{\tau+1}$ appears as an outlier (why?). This can still skew our estimate of the variance. A solution is to use robust estimators of the variance. A common choice is the Median Absolute Deviation (MAD), which is less sensitive to outliers and can provide a more reliable estimate of $\bar S$ in our case. The formula for MAD is given by:

$$\text{MAD} = \text{median}\left(\left|z_i - \text{median}(z_{2:n})\right|\right)$$

This estimator computes the median of the absolute deviations from the median of the data. However, for asymptotic consistency, to fully convert the MAD into a robust variance estimate, we can use:

$$\hat\sigma_{\text{MAD}} = 1.4826 \times \text{MAD}$$

This scaling factor ensures that $\hat\sigma_{\text{MAD}}$ provides an approximately unbiased estimate of the standard deviation under the assumption of normally distributed data. We can then divide our observations by this value to obtain ready-to-analyse observations. Go back and check the scale of the data in the segmentations in week 3!

While this trick provides a solution for handling variance estimation in the change-in-mean problem, more sophisticated models may require the estimation of additional parameters, and more advanced techniques are needed to ensure that all relevant parameters are accurately estimated (this is very much an open area of research)!
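A minimal sketch of this rescaling in R, assuming the raw sequence is stored in y (note that base R's mad() already applies the 1.4826 consistency factor by default):

```r
z         <- diff(y)                 # lag-1 differences remove the piecewise-constant mean
sigma_mad <- mad(z)                  # equals 1.4826 * median(|z - median(z)|)
y_scaled  <- y / sigma_mad           # ready-to-analyse observations

out <- cpt.mean(y_scaled, method = "PELT")   # re-run the change-in-mean analysis on the rescaled data
cpts(out)
```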
5.3 Non-Parametric Models

An alternative approach for detecting changes in real data, especially when we don't want to make specific parametric assumptions, is to use a non-parametric cost function. This method allows us to detect general changes in the distribution of the data, not just changes in the mean or variance. One such approach is the Non-Parametric PELT (NP-PELT) method, which focuses on detecting any changes in the underlying distribution of the data.

For example, let us have a look at one of the sequences from the Yahoo! Webscope dataset ydata-labeled-time-series-anomalies-v1_0:

A1 <- read_csv("extra/A1_yahoo_bench.csv")

Rows: 1427 Columns: 3
-- Column specification --------------------------
Delimiter: ","
dbl (3): timestamp, value, is_anomaly
i Use spec() to retrieve the full column specification for this data.
i Specify the column types or set show_col_types = FALSE to quiet this message.

ggplot(A1, aes(x = timestamp, y = value)) +
  geom_vline(xintercept = which(A1$is_anomaly == 1), alpha = .3, col = "red") +
  geom_point() +
  theme_minimal()

[Figure: the Yahoo! benchmark sequence A1, with the labelled anomalies marked by vertical lines]

Following Haynes, Fearnhead, and Eckley (2017), we introduce the NP-PELT approach. Let $F_{1:n}(q)$ denote the unknown cumulative distribution function (CDF) for the segment $y_{1:n}$, where $n$ indexes the data points. Similarly, let $\hat F_{1:n}(q)$ be the empirical CDF, which provides an estimate of the true distribution over the segment. The empirical CDF is given by:

$$\hat F_{1:n}(q) = \frac{1}{n}\sum_{j=1}^{n}\left\{\mathbb{I}(y_j < q) + 0.5 \times \mathbb{I}(y_j = q)\right\}.$$

Here, $\mathbb{I}(y_j < q)$ is an indicator function that equals 1 if $y_j < q$ and 0 otherwise, and the term $0.5 \times \mathbb{I}(y_j = q)$ handles cases where $y_j$ equals $q$. Under the assumption that the data are independent, the empirical CDF $\hat F_{1:n}(q)$ follows a Binomial distribution. Specifically, for any quantile $q$, we can write:

$$n\,\hat F_{1:n}(q) \sim \text{Binom}\big(n,\, F_{1:n}(q)\big).$$

This means that the number of observations $y_j$ less than or equal to $q$ follows a Binomial distribution, with $n$ trials and success probability equal to the true CDF value $F_{1:n}(q)$ at $q$.

Using this Binomial approximation, we can derive the log-likelihood of a segment of data $y_{\tau_1+1:\tau_2}$, where $\tau_1$ and $\tau_2$ are the changepoints marking the beginning and end of the segment, respectively. The log-likelihood is expressed as:

$$\mathcal{L}(y_{\tau_1+1:\tau_2};\, q) = (\tau_2 - \tau_1)\left[\hat F_{\tau_1+1:\tau_2}(q)\log\big(\hat F_{\tau_1+1:\tau_2}(q)\big) + \big(1 - \hat F_{\tau_1+1:\tau_2}(q)\big)\log\big(1 - \hat F_{\tau_1+1:\tau_2}(q)\big)\right].$$

This cost function compares the empirical CDF to the left and to the right of a candidate changepoint, for all the points:

[Figure: empirical CDFs of the pre-change and post-change parts of the data]

In practice, NP-PELT on the previous sequence gives the following:

library(changepoint.np)
y <- A1$value
cpt.np(y, penalty = "Manual", pen.value = 25 * log(length(y))) |> plot(ylab = "y")

[Figure: the NP-PELT segmentation of the Yahoo! benchmark sequence]
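To make the cost above a little more tangible, here is a minimal sketch with assumed helper names (Fhat, seg_loglik); unlike changepoint.np, which aggregates the cost over a grid of quantiles, it evaluates a single quantile q only.

```r
# Empirical CDF at q with the 0.5 correction for ties, as defined above.
Fhat <- function(x, q) mean((x < q) + 0.5 * (x == q))

# Log-likelihood contribution of the segment y[(tau1 + 1):tau2] at a single quantile q.
seg_loglik <- function(y, tau1, tau2, q) {
  Fq <- Fhat(y[(tau1 + 1):tau2], q)
  if (Fq <= 0 || Fq >= 1) return(0)      # the x * log(x) terms vanish at the boundaries
  (tau2 - tau1) * (Fq * log(Fq) + (1 - Fq) * log(1 - Fq))
}

# For instance, compare not splitting the Yahoo! sequence with splitting it at an arbitrary point,
# evaluating the likelihood at the sample median as the quantile.
q <- median(y)
seg_loglik(y, 0, length(y), q)
seg_loglik(y, 0, 700, q) + seg_loglik(y, 700, length(y), q)
```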
5.4 Exercises

5.4.1 Workshop exercises

Provide an interpretation of the residuals diagnostics from the Simpsons dataset:

[Figure: histogram of the residuals, normal Q-Q plot, and residuals against fitted values for the Simpsons fit]

References

Baranowski, Rafal, Yining Chen, and Piotr Fryzlewicz. 2019. "Narrowest-over-Threshold Detection of Multiple Change Points and Change-Point-Like Features." Journal of the Royal Statistical Society Series B: Statistical Methodology 81 (3): 649–72.
Bown, Jonathan. 2023. "Simpsons Episodes & Ratings (1989-Present)." .com/datasets/jonbown/simpsons-episodes-2016?resource=download.
Fryzlewicz, Piotr. 2014. "Wild Binary Segmentation for Multiple Change-Point Detection." Annals of Statistics 42: 2243–81.
Haynes, Kaylea, Idris A. Eckley, and Paul Fearnhead. 2017. "Computationally Efficient Changepoint Detection for a Range of Penalties." Journal of Computational and Graphical Statistics 26 (1): 134–43.
Haynes, Kaylea, Paul Fearnhead, and Idris A. Eckley. 2017. "A Computationally Efficient Nonparametric Approach for Changepoint Detection." Statistics and Computing 27: 1293–1305.
Jackson, Brad, Jeffrey Scargle, D. Barnes, S. Arabhi, A. Alt, P. Gioumousis, E. Gwin, P. Sangtrakulcharoen, L. Tan, and Tun Tsai. 2005. "An Algorithm for Optimal Partitioning of Data on an Interval." IEEE Signal Processing Letters 12: 105–8. https://doi.org/10.1109/LSP.2001.838216.
Killick, R., P. Fearnhead, and I. A. Eckley. 2012. "Optimal Detection of Changepoints with a Linear Computational Cost." Journal of the American Statistical Association 107 (500): 1590–98.
Scott, Andrew John, and Martin Knott. 1974. "A Cluster Analysis Method for Grouping Means in the Analysis of Variance." Biometrics, 507–12.
Sen, Ashish, and Muni S. Srivastava. 1975. "On Tests for Detecting Change in Mean." The Annals of Statistics, 98–108.
Yao, Yi-Ching, and Richard A. Davis. 1986. "The Asymptotic Behavior of the Likelihood Ratio Statistic for Testing a Shift in Mean in a Sequence of Independent Normal Variates." Sankhyā: The Indian Journal of Statistics, Series A, 339–53.
Zhang, Nancy R., and David O. Siegmund. 2007. "A Modified Bayes Information Criterion with Applications to the Analysis of Comparative Genomic Hybridization Data." Biometrics 63 (1): 22–32.
Atomic and Optical Physics II | Physics | MIT OpenCourseWare

8.422 | Spring 2013 | Graduate
Instructor: Prof. Wolfgang Ketterle
Department: Physics
Topics: Science; Physics; Atomic, Molecular, Optical Physics
Learning Resource Types: Lecture Videos; Problem Sets; Instructor Insights

Course Description
This is the second of a two-semester subject sequence beginning with Atomic and Optical Physics I (8.421) that provides the foundations for contemporary research in selected areas of atomic and optical physics. Topics covered include non-classical states of light–squeezed states; multi-photon processes, Raman scattering; coherence–level crossings, quantum beats, double resonance, superradiance; trapping and cooling–light forces, laser cooling, atom optics, spectroscopy of trapped atoms and ions; atomic interactions–classical collisions, quantum scattering theory, ultracold collisions; and experimental methods.

Diagram for light scattering by an atom with ground state g and excited state e. (Image Courtesy of Prof. Wolfgang Ketterle.)
What Clocks have to do with Quantum Computation
Quantum Frontiers, a blog by the Institute for Quantum Information and Matter @ Caltech
Posted on September 7, 2017 by Johannes Bausch

Have you ever played the game "telephone"? You might remember it from your nursery days, blissfully oblivious to the fact that quantum mechanics governs your existence, and not yet wondering why Fox canceled Firefly. For everyone who forgot, here is the gist of the game: sit in a circle with your friends. Now you think of a story (prompt: a spherical weapon that can destroy planets). Once you have the story laid out in your head, tell it to your neighbor on your left. She takes the story and tells it to her friend on her left. It is important to master the art of whispering for this game: you don't want to be overheard when the story is passed on. After one round, the friend on your right tells you what he heard from his friend on his right. Does the story match your masterpiece?

If your story is generic, it probably survived without alterations. Tolstoy's War and Peace, on the other hand, might turn into a version of Game of Thrones. Passing along complex stories seems to be more difficult than passing on easy ones, and it also becomes more prone to errors the more friends join your circle—which makes intuitive sense. So what does this have to do with physics or quantum computation?

Let's add maths to this game, because why not. Take a difficult calculation that follows a certain procedure, such as long division of two integer numbers. Now you perform one step of the division and pass the piece of paper on to your left. Your friend there is honest and trusts you: she doesn't check what you did, but happily performs the next step in the division. Once she's done, she passes the piece of paper on to her left, and so on. By the time the paper reaches you again, you hopefully have the result of the calculation, given you have enough friends to divide your favorite numbers, and given that everyone performed their steps accurately.

I'm not sure if Feynman thought about telephone when he, in 1986, proposed a method of embedding computation into eigenstates (e.g. the ground state) of a Hamiltonian, but the fact remains that the similarity is striking. Remember that writing down a Hamiltonian is a way of describing a quantum-mechanical system, for instance how the constituents of a multi-body system are coupled with each other. The ground state of such a Hamiltonian describes the lowest energy state that a system assumes when it is cooled down as far as possible.

Before we dive into how the Hamiltonian looks, let's try to understand how, in Feynman's construction, a game of telephone can be represented as a quantum state of a physical system. In this picture, each summand represents a snapshot of the story or calculation at time t—in the division example, this would be the current divisor and remainder terms; so e.g. the first snapshot represents the initial dividend and divisor, and the person next to you is thinking of the next one, one step into the calculation. The label in front of the tensor sign is like a tag that you put on files on your computer, and uniquely associates the snapshot with the t-th time step.
We say that the story snapshot is entangled with its label. This is also an example of quantum superposition: all the are distinct states (the time labels, if not the story snapshots, are all unique), and by adding these states up we put them into superposition. So if we were to measure the time label, we would obtain one of the snapshots uniformly at random—it’s as if you had a cloth bag full of cards, and you blindly pick one. One side of the card will have the time label on it, while the other side contains the story snapshot. But don’t be fooled—you cannot access all story snapshots by successive measurements! Quantum states collapse; whatever measurement outcome you have dictates what the quantum state will look like after the measurement. In our example, this means that we burn the cloth bag after you pick your card; in this sense, the quantum state behaves differently than a simple juxtaposition of scraps of paper. Nonetheless, this is the reason why we call such a quantum state a history state: it preserves the history of the computation, where every step that is performed is appropriately tagged. If we manage to compare all pairs of successively-labeled snapshots (without measuring them!), one can verify that the end result does, in fact, stem from a valid computation—and not just a random guess. In the division example, this would correspond to checking that each of your friends performs a correct division step. So history states are clearly useful. But how do you design a Hamiltonian with a history state as the ground state? Is it even possible? The answer is yes, and it all boils down to verifying that two successive snapshots and are related to each other in the correct manner, e.g. that your friend on seat t+1 performs a valid division step from the snapshot prepared by the person on seat t. In fancy physics speak (aka Bra-Ket notation), we can for example write The actual Hamiltonian will then be a sum of such terms, and one can verify that its ground state is indeed the one representing the history state we introduced above. I’m glossing over a few details here: there is a minus sign in front of this term, and we have to add its Hermitian conjugate (flip the labels and snapshots around). But this is not essential for the argument, so let’s not go there for now. However, you’re totally right with one thing: it wouldn’t make sense to write down all snapshots themselves into the Hamiltonian! After all, if we had to calculate every snapshot transition like in advance, there would be no use to this construction. So instead, we can write Perfect. We now have a Hamiltonian which, in its ground state, can encode the history of a computation, and if we replace the transition operator with another desired transition operator (a unitary matrix), we can perform any computation we want (more precisely, any computation that can be written as a unitary matrix; this includes anything your laptop can do). However, this is only half of the story, since we need to have a way of reading out the final answer. So let’s step back for a moment, and go back to the telephone game. Can you motivate your friends to cheat? Your friends playing telephone make mistakes. Ok, let’s assume we give them a little incentive: offer $1 to the person on your right in case the result is an even number. Will he cheat? With so much at stake? 
In fact, maybe your friend is not only greedy but also dishonest: he wants to hide the fact that he miscalculates on purpose, and sometimes tells his friend on his right to make a mistake instead (maybe giving him a share of the money). So for a few of your friends close to the person at the end of the chain, there is a real incentive to cheat! Can we motivate spins to cheat? We already discussed how to write down a Hamiltonian that verifies valid computational steps. But can we do the same thing as bribing your friends to procure a certain outcome? Can we give an energy bonus to certain outcomes of the computation? In fact, we can. Alexei Kitaev proposed adding a term to Feynman’s Hamiltonian which raises the energy of an unwanted outcome, relative to a desirable outcome. How? Again in fancy physics language, What this term does is that it takes the history state and yields a negative energy contribution (signaled by the minus sign in front) if the last snapshot is an even number. If it isn’t, no bonus is felt; this would correspond to you keeping the dollar you promised to your friend. This simply means that in case the computation has a desirable outcome—i.e. an even number—the Hamiltonian allows a lower energy ground state than for any other output. Et voilà, we can distinguish between different outputs of the computation. The true picture is, of course, a tad more complicated; generally, we give penalty terms to unwanted states instead of bonus terms to desirable ones. The reason for this is somewhat subtle, but can potentially be explained with an analogy: humans fear loss much more than they value gains of the same magnitude. Quantum systems behave in a completely opposite manner: the promise of a bonus at the end of the computation is such a great incentive that most of the weight of the history state will flock to the bonus term (for the physicists: the system now has a bound state, meaning that the wave function is localized around a specific site, and drops off exponentially quickly away from it). This makes it difficult to verify the computation far away from the bonus term. So the Feynman-Kitaev Hamiltonian consists of three parts: one which checks each step of the computation, one which penalizes invalid outcomes—and obviously we also need to make sure the input of the computation is valid. Why? Well, are you saying you are more honest than your friends? Physical Implications of History State Hamiltonians If there is one thing I’ve learned throughout my PhD it is that we should always ask what use a theory is. So what can we learn from this construction? Almost 20 years ago, Alexei Kitaev used Feynman’s idea to prove that estimating the ground state energy of a physical system with local interactions is hard, even on a quantum computer (for the experts: QMA-hard under the assumption of a promise gap splitting the embedded YES and NO instances). Why is estimating the ground state energy hard? The energy shift induced by the output penalty depends on the outcome of the computation that we embed (e.g. even or odd outcome). And as fun as long division is, there are much more difficult tasks we can write down as a history state Hamiltonian—in fact, it is this very freedom which makes estimating the ground state energy difficult: if we can embed any computation we want, estimating the induced energy shift should be at least as hard as actually performing the computation on a quantum computer. 
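For reference, a standard textbook form of these three ingredients (this is a sketch of the usual Feynman–Kitaev construction, not necessarily the exact expressions that appeared as images in this post) is:

$$H = H_{\text{in}} + H_{\text{prop}} + H_{\text{out}},$$

$$H_{\text{prop}} = \frac{1}{2}\sum_{t=1}^{T}\Big(|t\rangle\langle t|\otimes \mathbb{1} + |t-1\rangle\langle t-1|\otimes \mathbb{1} - |t\rangle\langle t-1|\otimes U_t - |t-1\rangle\langle t|\otimes U_t^{\dagger}\Big),$$

$$H_{\text{in}} = |0\rangle\langle 0|\otimes \Pi_{\text{bad input}}, \qquad H_{\text{out}} = |T\rangle\langle T|\otimes \Pi_{\text{bad output}},$$

where $U_t$ is the unitary applied at step $t$ of the computation, the first register is the clock (the "label"), the second register holds the snapshots, and the $\Pi$'s are projectors onto invalid input and output states. The history state $\tfrac{1}{\sqrt{T+1}}\sum_{t=0}^{T}|t\rangle\otimes U_t\cdots U_1|\psi_{\text{in}}\rangle$ is a zero-energy ground state of $H_{\text{prop}}$ (and of $H_{\text{in}}$ for a valid input), while $H_{\text{out}}$ raises the energy whenever the final snapshot fails the output check.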
This has one curious implication: if we don't expect that we can estimate the ground state energy efficiently, the physical system will take a long time to actually assume its ground state when cooled down, and potentially behave like a spin glass!

Feynman's history state construction and the QMA-hardness proof of Kitaev were a big part of the research I did for my PhD. I formalized the case where the message is not passed on along a unique path from neighbor to neighbor, but can take an arbitrary path between beginning and end in a more complicated graph; in this way, computation can in some sense be parallelized. Well, to be honest, the last statement is not entirely true: while there can be parallel tracks of computation from A to B, these tracks have to perform the same computation (albeit in potentially different steps); otherwise the system becomes much more complicated to analyze. The reason why this admittedly quite restricted form of branching might still be an advantage is somewhat subtle: if your computation has a lot of classical if-else cases, but you don't have enough space on your piece of paper to store all the variables to check the conditions, it might be worth just taking a gamble: pass your message down one branch, in the hope that the condition is met. The only thing that you have to be careful about is that in case the condition isn't met, you don't produce invalid results.

What use is that in physics? If you don't have to store a lot of information locally, it means you can get away with using a much lower local spin dimension for the system you describe. Such small and physically realistic models have as of late been proposed as actual computational devices (called Hamiltonian quantum computers), where a prepared initial state is evolved under such a history state Hamiltonian for a specific time, in contrast to the static property of a history ground state we discussed above. Yet whether or not this is something one could actually build in a lab remains an open question.

Last year, Thomas Vidick invited me to visit Caltech, and I worked with IQIM postdoc Elizabeth Crosson to improve the analysis of the energy penalty that is assigned to any history state that cheats the constraints in the Feynman-Kitaev Hamiltonian. We identified some open problems and also proved limitations on the extent of the energetic penalty that these kinds of Hamiltonians can have. This summer I went back to Caltech to further develop these ideas and make progress towards a complete understanding of such "clock" Hamiltonians, which Elizabeth and I are putting together in a follow-up work that should appear soon.

It is striking how such a simple idea can have so profound an implication across fields, and remain relevant even 30 years after its first proposal. Feynman concludes his 1986 Foundations of Physics paper with the following words:

"At any rate, it seems that the laws of physics present no barrier to reducing the size of computers until bits are the size of atoms, and quantum behavior holds dominant sway."

For my part, I hope that he was right and that history state constructions will play a part in this future.

This entry was posted in Theoretical highlights by Johannes Bausch.

About Johannes Bausch: Quantum complexity theorist and outdoor fanatic. I'm currently at Cambridge University, researching the interplay between many-body quantum systems and universal computation.
The genetic code & codon table (article) | Khan Academy
The genetic code
AP.BIO: IST‑1 (EU), IST‑1.N (LO), IST‑1.N.1 (EK), IST‑1.N.2 (EK)

The genetic code links groups of nucleotides in an mRNA to amino acids in a protein. Start codons, stop codons, reading frame.

Introduction
Have you ever written a secret message to one of your friends? If so, you may have used a code to keep the message hidden. For instance, you may have replaced the letters of the word with numbers or symbols, following a particular set of rules. In order for your friend to understand the message, they would need to know the code and apply the same set of rules, in reverse, to decode it.
Decoding messages is also a key step in gene expression, in which information from a gene is read out to build a protein. In this article, we'll take a closer look at the genetic code, which allows DNA and RNA sequences to be "decoded" into the amino acids of a protein.

Background: Making a protein
Genes that provide instructions for proteins are expressed in a two-step process.
In transcription, the DNA sequence of a gene is "rewritten" in RNA. In eukaryotes, the RNA must go through additional processing steps to become a messenger RNA, or mRNA.
In translation, the sequence of nucleotides in the mRNA is "translated" into a sequence of amino acids in a polypeptide (protein chain).
If this is a new concept for you, you may want to learn more by watching Sal's video on transcription and translation.

Codons
Cells decode mRNAs by reading their nucleotides in groups of three, called codons. Here are some features of codons:
Most codons specify an amino acid
Three "stop" codons mark the end of a protein
One "start" codon, AUG, marks the beginning of a protein and also encodes the amino acid methionine
Codons in an mRNA are read during translation, beginning with a start codon and continuing until a stop codon is reached. mRNA codons are read from 5' to 3', and they specify the order of amino acids in a protein from N-terminus (methionine) to C-terminus.

Image 3: The mRNA sequence is: 5'-AUGAUCUCGUAA-3'
Translation involves reading the mRNA nucleotides in groups of three; each group specifies an amino acid (or provides a stop signal indicating that translation is finished).
5'-AUG AUC UCG UAA-3'
AUG → Methionine (Start)
AUC → Isoleucine
UCG → Serine
UAA → "Stop"
Polypeptide sequence: (N-terminus) Methionine-Isoleucine-Serine (C-terminus)

What do 5' and 3' mean?
The two ends of a strand of DNA or RNA are different from each other. That is, a DNA or RNA molecule has directionality.
At the 5' end of the chain, the phosphate group of the first nucleotide in the chain sticks out. The phosphate group is attached to the 5' carbon of the sugar ring, which is why this is called the 5' end.
At the other end, called the 3' end, the hydroxyl of the last nucleotide added to the chain is exposed. The hydroxyl group is attached to the 3' carbon of the sugar ring, which is why this is called the 3' end.
Many processes, such as DNA replication and transcription, can only take place in one particular direction relative to the directionality of a DNA or RNA strand. You can learn more in the article on nucleic acids.
What are the N- and C-terminus? Polypeptides (chains of linked amino acids) have two distinct ends: An N-terminus with an amino group exposed A C-terminus with a carboxyl group exposed During translation, polypeptide is built from N- to C-terminus. You can learn more about N- and C-termini in the article on proteins and amino acids. The genetic code table The full set of relationships between codons and amino acids (or stop signals) is called the genetic code. The genetic code is often summarized in a table. How do you read the codon table? The codon table may look kind of intimidating at first. Fortunately, it's organized in a logical way, and it's not too hard to use once you understand this organization. To see how the codon table works, let's walk through an example. Suppose that we are interested in the codon CAG and want to know which amino acid it specifies. First, we look at the left side of the table. The axis on the left side refers to the first letter of the codon, so we find C along the left axis. This tells us the (broad) row of the table in which our codon will be found. Next, we look at the top of the table. The upper axis refers to the second letter of the codon, so we find A along the upper axis. This tells us the column of the table in which our codon will be found. The row and column from steps 1 and 2 intersect in a single box in the codon table, one containing four codons. It's often easiest to simply look at these four codons and see which one is the one you're looking for. If you want to use the structure of the table to the maximum, however, you can use the third axis (on the right side of the table) corresponding to the intersect box. By finding the third nucleotide of the codon on this axis, you can identify the exact row within the box where your codon is found. For instance, if we look for G on this axis in our example above, we find that CAG encodes the amino acid glutamine (Gln). Image credit: "The genetic code," by OpenStax College, Biology (CC BY 3.0). Notice that many amino acids are represented in the table by more than one codon. For instance, there are six different ways to "write" leucine in the language of mRNA (see if you can find all six). An important point about the genetic code is that it's universal. That is, with minor exceptions, virtually all species (from bacteria to you!) use the genetic code shown above for protein synthesis. Reading frame To reliably get from an mRNA to a protein, we need one more concept: that of reading frame. Reading frame determines how the mRNA sequence is divided up into codons during translation. That's a pretty abstract concept, so let's look at an example to understand it better. The mRNA below can encode three totally different proteins, depending on the frame in which it's read: Image 5: mRNA sequence: 5'-UCAUGAUCUCGUAAGA-3' Read in Frame 1: 5'-UCA UGA UCU CGU AAG A-3' Ser-STOP-Ser-Arg-Lys Read in Frame 2: 5'-U CAU GAU CUC GUA AGA-3' His-Asp-Leu-Val-Arg Read in Frame 3: 5'-UC AUG AUC UCG UAA GA-3' Met(Start)-Ile-Ser-STOP The start codon's position ensures that Frame 3 is chosen for translation of the mRNA. So, how does a cell know which of these protein to make? The start codon is the key signal. Because translation begins at the start codon and continues in successive groups of three, the position of the start codon ensures that the mRNA is read in the correct frame (in the example above, in Frame 3). 
Mutations (changes in DNA) that insert or delete one or two nucleotides can change the reading frame, causing an incorrect protein to be produced "downstream" of the mutation site.
Image credit: "The genetic code: Figure 3," by OpenStax College, Biology, CC BY 4.0.

How was the genetic code discovered?
The story of how the genetic code was discovered is a pretty cool and epic one. We've stashed our version in the pop-up below, so as not to distract you if you're in a hurry. However, if you have some time, it's definitely interesting reading.

Discovery of the code
To crack the genetic code, researchers needed to figure out how sequences of nucleotides in a DNA or RNA molecule could encode the sequence of amino acids in a polypeptide. Why was this a tricky problem?
Let's imagine a very simple code to get the idea. In this code, each nucleotide in a DNA or RNA molecule might code for one amino acid in a protein. But this code can't actually work, because there are 20 amino acids commonly found in proteins and just 4 nucleotide bases in DNA or RNA. So, the code had to involve something more complex than a one-to-one matching of nucleotides and amino acids. But what?

The triplet hypothesis
In the mid-1950s, physicist George Gamow extended this line of thinking to predict that the genetic code was likely composed of triplets of nucleotides 1. That is, he proposed that a group of 3 nucleotides in a gene might code for one amino acid in a protein. Gamow's reasoning was that even a doublet code (2 nucleotides per amino acid) would not work, as it would allow for only 16 ordered groups of nucleotides (4²), too few to account for the 20 standard amino acids used to build proteins. A code based on nucleotide triplets, however, seemed promising: it would provide 64 unique sequences of nucleotides (4³), more than enough to cover the 20 amino acids.
Gamow had some other not-so-correct ideas about how the code was read (for example, he thought that the triplets overlapped, which we now know is not the case)1. However, his core insight – that a triplet code was the "minimum" that could cover all the amino acids – proved to be correct.

More about the math
There are 16 unique groups of nucleotides if a doublet code is used, and 64 unique groups if a triplet code is used. Why is this the case? Let's take a closer look at the math behind these statements.

Doublet code
Let's look at the doublet code first. In a doublet code, an ordered group of two nucleotides codes for one amino acid. How many such groups of two nucleotides can we make? We know that there are 4 different possibilities for each of the 2 nucleotides in the doublet (A, T, C, and G, if we use DNA bases). If we put an A in the first position, then any of the four nucleotides can occupy the second position, resulting in four combinations (AA, AT, AG, AC) that begin with an A. We can repeat this for T (TT, TA, TC, TG), C (CC, CT, CA, CG), and G (GG, GC, GT, GA). If we count all of these possibilities, we'll find that there are 16 of them in total. You may find it faster and more foolproof to use a mathematical shortcut to quickly answer this type of question.
Because we know there are 4 possible nucleotides for each position in the doublet, and because the order of the two slots matters, we can use the rules of permutations to calculate the number of possible groups as follows: (4‍ possibilities for the first slot) ⋅‍ (4‍ possibilities for the second slot) =‍ 4⋅4=16‍ possible ordered groups Triplet code What about the triplet code? In this case, we can use the same mathematical reasoning, but must add an additional slot to our setup. There are now 3‍ positions to fill, and each can be occupied by any of the four bases (A, T, C, or G). Since there are 4‍ possible choices for each position, we can multiply as follows: (4‍ possibilities for the first slot) ⋅‍ (4‍ possibilities for the second slot) ⋅‍ (4‍ possibilities for the third slot) =‍ 4⋅4⋅4=64‍ possible ordered groups Matching codons to amino acids Gamow’s triplet hypothesis seemed logical and was widely accepted. However, it had not been experimentally proven, and researchers still did not know which triplets of nucleotides corresponded to which amino acids. The cracking of the genetic code began in 1961, with work from the American biochemist Marshall Nirenberg. For the first time, Nirenberg and his colleagues were able to identify specific nucleotide triplets that corresponded to particular amino acids. Their success relied on two experimental innovations: A way to make artificial mRNA molecules with specific, known sequences. A system to translate mRNAs into polypeptides outside of a cell (a "cell-free" system). Nirenberg's system consisted of cytoplasm from burst E. coli cells, which contains all of the materials needed for translation. First, Nirenberg synthesized an mRNA molecule consisting only of the nucleotide uracil (called poly-U). When he added poly-U mRNA to the cell-free system, he found that the polypeptides made consisted exclusively of the amino acid phenylalanine. Because the only triplet in poly-U mRNA is UUU, Nirenberg concluded that UUU might code for phenylalanine 2‍. Using the same approach, he was able to show that poly-C mRNA was translated into polypeptides made exclusively of the amino acid proline, suggesting that the triplet CCC might code for proline 2‍. Image 7: mRNA sequence: 5'-...UUUUUUUUUUUU...-3' (poly-U mRNA) UUU $\rightarrow$ phenylalanine (Phe) Polypeptide sequence: (N terminus)...Phe-Phe-Phe-Phe...(C terminus) Other researchers, such as the biochemist Har Gobind Khorana at University of Wisconsin, extended Nirenberg's experiment by synthesizing artificial mRNAs with more complex sequences. For instance, in one experiment, Khorana generated a poly-UC (UCUCUCUCUC…) mRNA and added it to a cell-free system similar to Nirenberg's 3,4‍. The poly-UC mRNA that it was translated into polypeptides with an alternating pattern of serine and leucine amino acids. These and other results confirmed that the genetic code was based on triplets, or codons. Today, we know that serine is encoded by the codon UCU, while leucine is encoded by CUC. Image 8: mRNA sequence: 5'-...UCUCUCUCUCUC...-3' (poly-UC mRNA) UCU $\rightarrow$ serine (Ser) CUC $\rightarrow$ leucine (Leu) Polypeptide sequence: (N terminus)...Ser-Leu-Ser-Leu...(C terminus) By 1965, using the cell-free system and other techniques, Nirenberg, Khorana, and their colleagues had deciphered the entire genetic code. That is, they had identified the amino acid or "stop" signal corresponding to each one of the 64‍ nucleotide codons. 
For their contributions, Nirenberg and Khorana (along with another genetic code researcher, Robert Holley) received the Nobel Prize in 1968. Left: Image modified from "Marshall Nirenberg and Heinrich Matthaei," by N. MacVicar (public domain). Right: "Har Gobind Khorana" (public domain). I always like to imagine how cool it would have been to be one of the people who discovered the basic molecular code of life. Although we now know the code, there are many other biological mysteries still waiting to be solved (perhaps by you!). Attribution and references Attribution: This article is a modified derivative of "The genetic code," by OpenStax College, Biology, CC BY 4.0. Download the original article for free at The modified article is licensed under a CC BY-NC-SA 4.0 license. Works cited: Lorch, M. (2012, August 16). The most beautiful wrong ideas in science. In Chemistry blog. Retrieved from Nirenberg, M. (2004). Historical review: Deciphering the genetic code – a personal account. TRENDS in Biochemical Sciences, 29(1), 46-54. Gellene, Denise. (2011, November 14). H. Gobind Khorana, 89, Nobel-winning scientist, dies. The New York Times. Reterieved from H. Gobind Khorana – Nobel Lecture. NobelPrize.org. Nobel Media AB 2019. Mon. 6 May 2019. References: Arnaud, M.B., Inglis, D.O., Skrzypek, M.S., Binkley, J., Shah, P., Wymore, F., Binkley, G., Miyasato, S.R., Simison, M., and Sherlock, G. (2013). CGD help: Non-standard genetic codes. In Candida genome database. Retrieved from Codon. (2014). In Scitable. Retrieved from Gellene, Denise. (2011, November 14). H. Gobind Khorana, 89, Nobel-winning scientist, dies. The New York Times. Reterieved from Guevara Vasquez, F. (2013). Cracking the genetic code. In ACCESS - cryptography 2013. Retrieved from Nirenberg/Khorana: Breaking the genetic code. (n.d.). Retrieved from Nirenberg, M. (2004). Historical review: Deciphering the genetic code – a personal account. TRENDS in Biochemical Sciences, 29(1), 46-54. 0. Nirenberg, M. and Leder, P. (1964). RNA codewords and protein synthesis. Science, 145(3639), 1399-1407. Nirenberg, M. W. and Matthaei, J. H. (1961). The dependence of cell-free protein synthesis in E. coli upon naturally occurring or synthetic polyribonucleotides. PNAS, 47(10), 1588-1602. Office of NIH History. (n.d.). The poly-U experiment. In Deciphering the genetic code: Marshall Nirenberg. Retrieved from Openstax College, Biology. (2015, September 29). The genetic code. In OpenStax CNX. Retrieved from Purves, W. K., Sadava, D. E., Orians, G. H., and Heller, H.C. (2004). The genetic code. In Life: The science of biology (7th ed., pp. 239-241). Sunderland, MA: Sinauer Associates. Raven, P. H., Johnson, G. B., Mason, K. A., Losos, J. B., and Singer, S. R. (2014). The genetic code. In Biology (10th ed., AP ed., pp. 282-284). New York, NY: McGraw-Hill. Reece, J. B., Urry, L. A., Cain, M. L., Wasserman, S. A., Minorsky, P. V., and Jackson, R. B. (2011). The genetic code. In Campbell biology (10th ed., pp. 337-340). San Francisco, CA: Pearson. Söll, D., Ohtsuka, E., Jones, D. S., Lohrmann, R., Hayatsu, H., Nishimura, S., and Khorana, H. G. (1965). Studies on polynucleotides, XLIX. Stimulation of the binding of aminoacyl-sRNA's to ribosomes by ribotrinucleotides and a survey of codon assignments for 20 amino acids. PNAS, 54(5), 1378-1385. Retrieved from Skip to end of discussions Questions Tips & Thanks Want to join the conversation? Log in Sort by: Top Voted Andres Cantu 9 years ago Posted 9 years ago. 
Direct link to Andres Cantu's post “Are Glutamate (Glu) and G...” more Are Glutamate (Glu) and Glutamine (Gln) interchangeable? or there is something wrong with the example on reading the codon table, because CAG codes for Gln, not Glu. Answer Button navigates to signup page •Comment Button navigates to signup page (11 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Answer Show preview Show formatting options Post answer Emily 9 years ago Posted 9 years ago. Direct link to Emily's post “They are 2 different amin...” more They are 2 different amino acids, so no they cannot be use interchangeably. Comment Button navigates to signup page (6 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Show more... SeekerAtFarnham 6 years ago Posted 6 years ago. Direct link to SeekerAtFarnham's post “When does the tRNA know w...” more When does the tRNA know when to use AUG as a start codon and when to code Methionine? Are there other influencers Answer Button navigates to signup page •Comment Button navigates to signup page (7 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Answer Show preview Show formatting options Post answer SeekerAtFarnham 6 years ago Posted 6 years ago. Direct link to SeekerAtFarnham's post “Thank you - I looked at t...” more Thank you - I looked at the video and that was a great help! Comment Button navigates to signup page (0 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Arki🖤 8 years ago Posted 8 years ago. Direct link to Arki🖤's post “Why is AUG a start codo...” more Why is AUG a start codon and UAA , UGA and UAG stop codons? Answer Button navigates to signup page •Comment Button navigates to signup page (6 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Answer Show preview Show formatting options Post answer KEVIN 2 years ago Posted 2 years ago. Direct link to KEVIN's post “Why does leucine happen t...” more Why does leucine happen to have 6 ways to code while many other amino acids only have 2? Does it have to do with how essential it is? Answer Button navigates to signup page •Comment Button navigates to signup page (4 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Answer Show preview Show formatting options Post answer Priyanka 7 years ago Posted 7 years ago. Direct link to Priyanka's post “In the section, Reading F...” more In the section, Reading Frame, frameshift mutations are mentioned. Point mutations will shift the frame of reference. The insertion or deletion of three(or it's multiple )bases would insert or delete one or more codons or amino acids, without shifting the reading frame. But addition or subtraction of amino acids from a polypeptide would transform it..... How is this dealt with? Answer Button navigates to signup page •Comment Button navigates to signup page (3 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Answer Show preview Show formatting options Post answer Bala sankar 2 years ago Posted 2 years ago. 
Direct link to Bala sankar's post “if there are 999 bases in...” more if there are 999 bases in an rna that codes for a protein with 333 amino acids and the base at position 901 is deleted such that the length of the rna becomes 998 bases, how many codons will be altered ? Answer Button navigates to signup page •Comment Button navigates to signup page (2 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Answer Show preview Show formatting options Post answer Charles LaCour 2 years ago Posted 2 years ago. Direct link to Charles LaCour's post “If you are doing transcri...” more If you are doing transcription forward from base 1 to 999 then base 901 is the first base in in codon 301 so there will be a shift in 33 codons. Since there are multiple codons for a specific amino acid there may or may not be "errors" in each of the amino acid choices. Comment Button navigates to signup page (3 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Mars 4 years ago Posted 4 years ago. Direct link to Mars's post “If the mRNA is coded from...” more If the mRNA is coded from the 5' end to the 3', how is the code similar to that of the coding strand of the DNA? Since it reads from the other end, it should be the reverse of it right? Answer Button navigates to signup page •Comment Button navigates to signup page (2 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Answer Show preview Show formatting options Post answer Malavika B 4 years ago Posted 4 years ago. Direct link to Malavika B's post “Actually, the mRNA strand...” more Actually, the mRNA strand is coded from the template strand of the DNA which runs from 3' to 5' end. The coding strand is the other strand of DNA helix other than the template strand that runs from 5' to 3' end and is parallel to the mRNA strand. Comment Button navigates to signup page (2 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Javacrafter 4 months ago Posted 4 months ago. Direct link to Javacrafter's post “What did Robert Holley do...” more What did Robert Holley do to earn a Nobel prize? Answer Button navigates to signup page •Comment Button navigates to signup page (2 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more Answer Show preview Show formatting options Post answer Sillus 12 days ago Posted 12 days ago. Direct link to Sillus's post “Robert W. Holley shared t...” more Robert W. Holley shared the 1968 Nobel Prize in Physiology or Medicine for his work on the structure of transfer RNA (tRNA). He was part of a team that determined the complete nucleotide sequence of alanine transfer RNA (tRNA), which helped explain how genetic information in RNA controls protein synthesis. Comment Button navigates to signup page (2 votes) Upvote Button navigates to signup page Downvote Button navigates to signup page Flag Button navigates to signup page more mikaelag14 3 years ago Posted 3 years ago. Direct link to mikaelag14's post “-How is the DNA's informa...” more -How is the DNA's information get out of the nucleus and turn into mRNA? -What is the role of enzymes and ribosomes in this process? 
sneha: What should be the nature of the genetic code if there were 65 amino acids?
Evolutionary advantage of anti-parallel strand orientation of duplex DNA

Hemachander Subramanian & Robert A. Gatenby

Scientific Reports, volume 10, Article number: 9883 (2020). Open access.

Abstract

DNA in all living systems shares common properties that are remarkably well suited to its function, suggesting refinement by evolution. However, DNA also has some counter-intuitive properties which confer no obvious benefit, such as strand directionality and anti-parallel strand orientation, which together result in the complicated lagging strand replication. The evolutionary dynamics that led to these properties of DNA remain unknown, but their universality suggests that they confer an as yet unknown selective advantage to DNA. In this article, we identify an evolutionary advantage of anti-parallel strand orientation of duplex DNA, within a given set of plausible premises. The advantage stems from the increased rate of replication, achieved by dividing the DNA into predictable, independently and simultaneously replicating segments, as opposed to sequentially replicating the entire DNA, thereby parallelizing the replication process. We show that anti-parallel strand orientation is essential for such a replicative organization of DNA, given our premises, the most important of which is the assumption of the presence of sequence-dependent asymmetric cooperativity in DNA.

Introduction

Living systems, uniquely in nature, acquire, store and use information autonomously. The molecular carriers of information, DNA and RNA, exhibit a number of distinctive physico-chemical properties that are optimal for the storage and transfer of biological information1,2,3. This suggests that significant prebiotic evolutionary optimization4 preceded and resulted in RNA and DNA, and that the fundamental properties of nucleotides and DNA are not simply the outcomes of frozen accidents or of chemical inevitabilities. The evolutionary pressures that resulted in the adaptation of the specific physico-chemical properties of DNA have yet to be clearly elucidated, however. Such an evolution-based inquiry can be a useful alternative to the traditional biochemical approaches to unraveling the functional significance of the structure and sequence of DNA. In this article, we identify an evolutionary advantage for the anti-parallel orientation of the two strands of the DNA duplex. The importance of such an evolution-based explanation for anti-parallel strand orientation5 stems from the fact that the latter is directly responsible for the biochemically cumbersome and complicated lagging strand replication mechanism of DNA, the existence of which militates against the well-established notion that DNA is a product of prebiotic evolutionary optimization. Evolution could have utilized parallel-stranded DNA, which has been shown to form under physiological conditions6,7,8,9,10 and which would have obviated the need for lagging strand replication and the attendant biochemical complexities.
The superior thermodynamic stability of anti-parallel DNA double strands over parallel double strands cannot be the reason, since, in the primordial scenario, where the need for preservation of information was secondary or non-existent, such stability could actually have hindered self-replication by inhibiting the separation of the daughter strand from the template11. Thus, the evolutionary choice of anti-parallel DNA as the genetic material requires explanation, given that parallel DNA double strands are proven to form within the physiological range of parameters, and given the possible simplicity of self-replicative processes with parallel-stranded DNA. Within the picture we develop below, the evolutionary advantage of anti-parallel strand orientation of DNA arises from its ability to temporally parallelize the replication process, by dividing DNA into predictable, independent, simultaneously replicating segments, thereby speeding up the replication process considerably. In our picture, "Asymmetric Cooperativity", a new property we introduced earlier and which we assume to be present in DNA, underpins the ability of anti-parallel strands to temporally parallelize DNA replication.

On the Organization of the Article

The central concept of this article is asymmetric cooperativity, a new property of self-replicating heteropolymers that we introduced in our earlier article12. In that article, we quantitatively evaluated, using a Markov chain model, the self-replicative potential of heteropolymers with asymmetric and symmetric cooperativities. We demonstrated there that heteropolymers with asymmetric cooperativity are evolutionarily superior to symmetrically cooperative or non-cooperative heteropolymers. The current article examines the evolutionary consequences of asymmetric cooperativity for the replicative organization of DNA. We begin below by recapitulating, from our earlier article12, what asymmetric cooperativity is and why it is useful for self-replication. In the next "Model and its Premises" section, we decompose asymmetric cooperativity into two parts, namely, sequence-independent and sequence-dependent asymmetric cooperativities, and elaborate on and illustrate them with a number of diagrams. We also explain the necessity of heteromolecular base-pairing, between purines and pyrimidines, to incorporate sequence-dependent asymmetric cooperativity, using a purely symmetry-based analysis. Literature-based experimental support for the assumptions of asymmetrically cooperative bonding made in the "Model and its Premises" section is provided in the "Experimental support for the model" section further below. We chose to sequester the experimental support in a separate section in order to keep our model introduction as compact and comprehensible as possible, and to separate what is new in this article from what is already known. After the introduction of the model and its premises, in the next section, we logically demonstrate the evolutionary advantage of anti-parallel strand orientation, assuming the presence of asymmetric cooperativity in DNA. We also explore the possible emergence of a primitive kind of information storage in non-enzymatically self-replicating heteropolymers in the primordial regime, where information pertaining to the construction of enzymes was irrelevant.
The sections following the "Experimental support" section are the "Falsification approaches" section, crucial for any testable scientific model, and the "Discussion" section, where a summary of our arguments is provided and the limitations of the model are underscored.

Asymmetric Cooperativity

In an earlier article12, we showed that maximization of the replicative potential of a generic primordial self-replicating polymer leads to the property of asymmetric cooperativity. We recapitulate the same here for completeness. Asymmetric cooperativity is said to be present when the kinetic influence of a pre-existing hydrogen bond, between a monomer and the template strand of the polymer, on the formation/dissociation of the two neighboring inter-strand bonds between other monomers and the template, to the left and right, is unequal (please see Fig. 1). We theoretically showed that asymmetrically cooperative circular self-replicating polymer strands in the primordial oceans succeeded in the evolutionary competition with symmetrically cooperative self-replicating polymers for common substrates of their respective monomers and energetic sources. The advantage accruing to a generic circular self-replicating polymer from having asymmetric cooperativity is illustrated in Fig. 1. This replicative advantage of asymmetric cooperativity arises from the latter simultaneously satisfying two competing requirements for successful replication: a low kinetic barrier for a monomer to be easily inducted from the primordial soup to form an inter-strand hydrogen bond, and a high kinetic barrier for the monomer to be retained on the template strand to facilitate intra-strand covalent bond formation in order to extend the replica strand. By lowering the kinetic barrier of its right (left) neighbor and raising the barrier of its left (right) neighbor, asymmetrically cooperative inter-strand bonds satisfy both these requirements, and result in a zipper-like functionality of the polymer, with unidirectional (un)zipping of inter-strand bonds. It is obvious that there are two entirely equivalent modes of asymmetric cooperativity: left asymmetric cooperativity, where the kinetic barrier of the left neighboring inter-strand bond is lowered, and right asymmetric cooperativity, where the right neighbor's barrier is lowered. Within the premise that DNA is a product of molecular evolution, it would be natural to expect that asymmetric cooperativity is present in DNA as well. In our previous publication12, we suggested an experiment to verify the existence of asymmetric cooperativity in DNA, and cited numerous experiments suggesting its presence.

The Model and its Premises

In this article, our central premise is the presence of asymmetric cooperativity in DNA. In order to simplify our arguments below, we factorize asymmetric cooperativity in DNA into two parts: a strong sequence-independent part, in which the mode of asymmetric cooperativity (left or right) is dictated entirely by the orientation of the DNA single strand, and a comparatively weaker sequence-dependent part, where the mode is dictated by the "orientation" of the base-pair in the DNA double strand. The orientation of the base-pair specifies which nucleotide of the base-pair is on the 3′–5′ strand and which is on the 5′–3′ strand, thus differentiating, for example, the base-pair 5′–G–3′/3′–C–5′ from its 180°-rotated counterpart, 5′–C–3′/3′–G–5′.
The kinetic effects on the left and right neighbors of a base-pair in these two orientations would be different, because of the base-pair's left-right asymmetry. Below, we explain these two types of cooperativity in more detail.

Sequence-independent asymmetric cooperativity

The sequence-independent asymmetric cooperativity mode is dictated by the orientation of the DNA single strand template: an interstrand hydrogen bond between a 3′–5′-oriented template strand and a lone 5′–3′-oriented nucleotide which is not yet incorporated into the growing daughter strand would catalyze the formation of its right neighboring hydrogen bond and inhibit its left neighbor (right asymmetric cooperativity mode). Reversing the template strand orientation from 3′–5′ to 5′–3′ would reverse the catalytic and inhibitory directions. Our theoretical separation of asymmetric cooperativity into a sequence-independent part and a sequence-dependent part implies that, in the case of the former, the asymmetric cooperativity mode is not influenced by the types of nucleotides composing the base-pair. Figure 2 illustrates the above point. The figure shows that, for a 3′–5′-oriented template strand, irrespective of the types of nucleotides composing the hydrogen bond, the kinetic barrier for the formation of a hydrogen bond neighbor to the right is always reduced, whereas the barrier for the formation of the left neighbor is always higher. The asymmetric cooperativity mode is the same in both cases (a) and (b) in the figure, since the mode is dictated primarily by the directionality of the single template strand, denoted by the thick black arrows below the strands in the figure. Our assumption about the strength of sequence-independent cooperativity, in comparison with the weaker sequence-dependent cooperativity, leads to the former dominating the latter and dictating the asymmetric cooperativity mode in single template strands. Our above choice of the dependence of the asymmetric cooperativity mode on the directionality of the template strand ensures that DNA daughter strand construction beginning at its 5′ end and moving towards its 3′ end (towards the right in Fig. 2(a,b)) is kinetically favored, while construction beginning from the 3′ end of the daughter strand is disfavored. This premise is borne out by the observation that DNA daughter strand construction is unidirectional and proceeds from its 5′ end.

Sequence-dependent asymmetric cooperativity

The sequence-dependent part of asymmetric cooperativity arises from the dependence of asymmetric cooperativity modes on the orientation of the base-pair. We assume that this sequence-dependent part is considerably smaller in magnitude than the sequence-independent part, in order to align our picture with the experimentally established behavior of DNA replication. The sequence-dependent asymmetric cooperativity is operative only in DNA double strands, due to the mutual cancellation of the opposing sequence-independent asymmetric cooperativity modes of the two anti-parallel strands of the DNA double strand. Figure 3(a,b) illustrates the impact of the sequence-dependent part of asymmetric cooperativity on the hydrogen bond kinetic barriers. The thick black arrows in Fig. 3 denote the direction of the two sequence-independent asymmetric cooperativity modes (left or right), which align with the 3′–5′ direction of the strands, whereas the thinner arrows attached to the hydrogen bonds denote the direction of the two modes of sequence-dependent asymmetric cooperativity.
The base-pair 5′–C–3′/3′–G–5′ is assumed to be left-asymmetrically cooperative, as shown in the last three bonds of Fig. 3(b), catalyzing its left and inhibiting its right neighboring hydrogen bond, whereas the 180°-rotated 5′–G–3′/3′–C–5′ would obviously be right-asymmetrically cooperative, catalyzing its right and inhibiting its left neighbor. As can easily be seen from Fig. 3, the kinetic barriers of the different hydrogen bonds in parts (a) and (b) are very different, due to the difference in the sequences in the two subfigures. We will argue below that this sequence dependence of the kinetics of unzipping is evolutionarily useful for the DNA, for it provides the DNA with additional degrees of freedom to modify its kinetics of unzipping (and hence self-replication) by modifying its sequence characteristics. In Fig. 3(a), unzipping is kinetically favorable if it begins at the rightmost end, whereas in Fig. 3(b), the unzipping would begin at the center of the strand and proceed bidirectionally to the left and right. Experimental support for our above choice of assigning the right asymmetry mode to 5′–G–3′/3′–C–5′ comes partly from13, where the kinetic influence on the nonenzymatic incorporation of neighboring nucleotides has been measured; the relevant data are reproduced with permission and elaborated on below as Fig. 10. It has to be re-emphasized that, in our picture, while the asymmetric cooperativity mode of a hydrogen bond between a lone nucleotide and the template strand is dictated primarily by the 3′–5′ or 5′–3′ orientation of the template strand, as illustrated in Fig. 2, the asymmetric cooperativity mode of a hydrogen bond in a fully-formed duplex DNA is dictated by the orientation of the base-pair, as illustrated in Fig. 3. This is because, in the fully-formed duplex DNA, the opposite orientations of the two single strands result in cancellation of the sequence-independent asymmetric cooperativity, due to their opposing modes, leaving the sequence-dependent asymmetric cooperativity of the base-pairs to dictate the kinetics of hydrogen bond dissociation of their neighboring base-pairs.

Importance of heteromolecular base-pairing

It is important to note that, if not for the complementarity of the sequences of the two strands, left-right symmetry would prohibit the incorporation of asymmetric cooperativity in homomolecular base-pairs. This inability of homomolecular base-pairs to incorporate asymmetric cooperativity is illustrated in Fig. 4. Base-pairs such as 5′–C–3′/3′–C–5′, as shown in the bottom-left strand diagram of Fig. 4, are evidently left-right symmetric, cannot distinguish between left and right directions, and hence cannot instantiate asymmetric cooperativity. This can be verified by comparing the above base-pair structure with its self-similar 180°-rotated 5′–C–3′/3′–C–5′ structure, shown in the bottom-right strand diagram of Fig. 4. This is the reason no asymmetric cooperativity arrows are shown attached to the hydrogen bonds in the bottom-left and bottom-right strand diagrams of Fig. 4. Thus, in the fully formed anti-parallel DNA double strand, the complementarity of the sequences of the two strands alone enables the incorporation of asymmetric cooperativity, necessitating heteromolecular base-pairing and rendering the asymmetric cooperativity mode sequence-dependent. This ability to switch the mode of asymmetric cooperativity by rotating the base-pair is illustrated in the top-left and top-right strand diagrams in Fig. 4.
If the DNA base-pairs are homomolecular, as illustrated in the bottom-left and bottom-right strand diagrams in Fig. 4, the left-right symmetry of the duplex DNA base-pairs will disallow the instantiation of sequence-dependent asymmetric cooperativity, while the sequence-independent asymmetric cooperativity would stand canceled due to the anti-parallel strand orientation of the daughter and template strands. Our premise statements above are distilled from a number of experiments to help parsimoniously explain, using evolutionary reasoning, the counterintuitive replicative properties of DNA, such as its unidirectional daughter strand construction and the lagging strand replication mechanism, which are consequences of DNA's anti-parallel strand orientation. We show below that these premises also help make sense of a few other disparate experimental observations, such as the presence of asymmetric nucleotide composition or GC skew observed in nearly all genomes, and palindromic instabilities, apart from the anti-parallel strand orientation of DNA. These premise statements about asymmetric cooperativity can be thought of as axioms or postulates, from which the replicative properties of DNA will be shown to follow logically. As postulates, these premise statements do not require biophysical justifications beyond the cited experimental literature that supports their plausibility, in the "Experimental support" section below, and hence we defer an inquiry into the biophysical origins of these premises to a later date. Finally, we assume that the evolutionary pressure for faster construction of the replica strand, which was operative during the early stages of self-replicating polymer evolution, remained operative until more recently in guiding the evolution of various properties of DNA. Even though RNA is a more appropriate candidate for examining the consequences of asymmetric cooperativity, because it is widely believed to be evolutionarily more ancient than DNA, we decided to concentrate on DNA, due to the comparative lack of experimental information on the thermodynamics and kinetics of double-strand formation and unzipping of RNA, and due to the central importance of DNA in understanding the functioning of extant biological systems. Moreover, long RNA molecules are unstable in the extant biophysical environment, which renders the replicative organization of possible remnants of the "RNA world", RNA viruses, uninformative for our purposes. Due to this instability of long RNA molecules, RNA viruses divide their genetic information across multiple, unconnected, short RNA molecules, called "segments"14,15, which also results in temporal parallelization of replication. The search for primordial biophysical environments that possibly enhanced the thermodynamic stability of long RNA molecules is ongoing16,17. Continuing the reductionistic spirit of our earlier paper12, our intention here is to investigate the evolution of the structural properties of DNA in isolation, without taking into account the effects of its interactions with numerous enzymes, such as polymerases. The rationale behind this assumption is that the fundamental properties of DNA, such as its anti-parallel strand orientation, were evolutionarily more ancient than the evolution of enzymes, and were already set by the evolutionary dynamics of the DNA's progenitors before enzymatic assistance for replication evolved. The fact that such an inquiry throws much light on some of the counterintuitive properties of DNA justifies our approach a posteriori.
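The postulates above can also be stated in executable form. The short Python sketch below is a toy illustration only (the function names, the unit-free "modes" and the restriction to G/C are choices made here for clarity, not part of the model's biophysics): each base-pair of an anti-parallel duplex is assigned a sequence-dependent cooperativity mode from its top-strand (5′–3′) base, G being right-cooperative and C left-cooperative as assumed above, with A/T left unassigned since the premises do not fix their mode; interfaces where two assigned modes point at each other are flagged as candidate low-barrier sites, and interfaces where they point away from each other as candidate high-barrier sites.

# Toy encoding of the sequence-dependent asymmetric cooperativity postulates.
# 'R' = right-cooperative base-pair (5'-G-3'/3'-C-5'), 'L' = left-cooperative
# (5'-C-3'/3'-G-5'); A/T base-pairs are left unassigned (None) here, which is
# an assumption of this sketch, since their mode is not fixed by the premises.

def cooperativity_modes(top_strand):
    """Mode of each base-pair, read off the top (5'->3') strand."""
    mode_map = {"G": "R", "C": "L", "A": None, "T": None}
    return [mode_map[base] for base in top_strand.upper()]

def classify_interfaces(modes):
    """Pairs of neighboring assigned modes that point at each other
    (candidate low-barrier sites) or away from each other (high-barrier sites)."""
    toward, away = [], []
    assigned = [(i, m) for i, m in enumerate(modes) if m is not None]
    for (i, left), (j, right) in zip(assigned, assigned[1:]):
        if left == "R" and right == "L":
            toward.append((i, j))   # catalytic arrows point at each other
        elif left == "L" and right == "R":
            away.append((i, j))     # catalytic arrows point away from each other
    return toward, away

if __name__ == "__main__":
    top = "GGGGCCCCGGGGCCCC"        # hypothetical test sequence
    modes = cooperativity_modes(top)
    low_barrier, high_barrier = classify_interfaces(modes)
    print("low-barrier interfaces:", low_barrier)
    print("high-barrier interfaces:", high_barrier)

For the test sequence above, the two G-to-C boundaries come out as low-barrier interfaces and the single C-to-G boundary as a high-barrier one, anticipating the origin/terminus picture developed in the following sections.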
Replicative advantages of anti-parallel DNA strands

The replicative advantage of the anti-parallel DNA double strand arises simply from its ability to locally switch the modes of sequence-dependent asymmetric cooperativity from left to right or vice versa, since the stronger sequence-independent asymmetric cooperativities of the two anti-parallel individual strands cancel each other out. This switching of modes between left and right asymmetric cooperativity is achieved by altering the orientation of a hydrogen-bonded base-pair, by rotating it, as illustrated in the top-left and top-right strand diagrams in Fig. 4. For example, the 5′–G–3′/3′–C–5′ base-pair orientation reduces the kinetic barrier of the hydrogen bonds to the base-pair's right, thereby instantiating the right asymmetric cooperativity mode, whereas the 180°-rotated 5′–C–3′/3′–G–5′ instantiates left asymmetric cooperativity, as shown in Fig. 3(b). As we show below, this sequence dependence of asymmetric cooperativity opens up the possibility of replicating a long DNA double strand by dividing it into multiple disjoint segments that are capable of replicating independently, simultaneously and predictably. These disjoint, independently replicating segments of DNA are called "replichores" in the biology literature. This temporal parallelization of the replication process by dividing the DNA into multiple segments would have enhanced the replicative potential of the anti-parallel DNA double strand by significantly decreasing its replication time, compared to its biochemically distinct parallel-strand self-replicating competitors5,8,9, during its early evolution. The asymmetric cooperativity modes of the hypothetical parallel-stranded DNA-like molecule cannot be similarly altered locally, due to the predominance of the stronger sequence-independent asymmetric cooperativity over its sequence-dependent counterpart, arising from the directionally additive influence of the two parallel strands. This distinction can be understood by comparing the sequence-dependence of the kinetic barriers of the hydrogen bonds of anti-parallel strands in Fig. 3 and the relative sequence-independence of the kinetic barriers of parallel strands in Fig. 5. In Fig. 3, the heights of the kinetic barriers of anti-parallel double strands are strongly dependent on the sequence, through the dependence of asymmetric cooperativity on the base-pair orientation. In Fig. 5, on the other hand, the kinetic barrier heights of the hydrogen bonds of parallel double strands are relatively insensitive to the sequence, and are dictated primarily by the common orientation of the two parallel strands. This sequence-dependence of kinetic barriers arises in anti-parallel strands because the anti-parallel orientation of the two strands cancels the sequence-independent asymmetric cooperativity.

Parallelization of the replication process

The ability to switch the modes of asymmetric cooperativity between left and right by altering the sequences of the DNA in an anti-parallel DNA double strand makes it possible for independent segments of DNA to have different asymmetric cooperativity modes. This can be seen in Fig. 3(b), where the three left hydrogen bonds (left replichore) have the right asymmetric cooperativity mode and the next three bonds (right replichore) are left asymmetrically cooperative. When the DNA begins to replicate, the earliest hydrogen bonds to break would be the ones with the lowest kinetic barrier, i.e., the third and the fourth bonds in Fig. 3(b), where the asymmetric cooperativity mode changes from right to left.
This local unzipping process is illustrated in Fig. 6(a). The next two bonds to break would be the second and the fifth bonds, as shown in Fig. 6(b), whose barriers are lowered due to the absence of stabilization from the third and the fourth bonds, which were just broken. Thus the unzipping of the DNA double strand would proceed bidirectionally from the mode-switching location, as observed during DNA bubble formation before replication initiation in extant organisms18,19. This bidirectional unzipping from multiple such mode-flipping locations on the DNA would make available multiple segments of DNA for simultaneous replication, unlike the hypothetical parallel DNA, where the unzipping would start at one end of the DNA (the rightmost end in Fig. 5) and would have to proceed sequentially along the entire length of the DNA towards the other end to be kinetically favorable. This reduction in the replication time of anti-parallel strands with appropriately chosen sequence is illustrated in Fig. 7. Figure 7(a) illustrates the sequential nature of unzipping and daughter strand growth in a hypothetical parallel-strand DNA incorporating asymmetric cooperativity, through a schematic diagram that shows the time at which each location on the double strand is replicated. It shows that the locations of DNA that are farther from the origin of replication (denoted by a red dot) are replicated later, and there is a one-to-one correspondence between different locations on the DNA and their time of replication. Figure 7(b) illustrates the parallel nature of replication in anti-parallel DNA strands with appropriately chosen sequence. Daughter strand construction radiating from multiple origins of replication (denoted by red dots), a consequence of sequence-dependent asymmetric cooperativity in anti-parallel DNA strands, creates disjoint segments that are replicated simultaneously, thereby reducing replication time. This reduction in replication time is robust even when the rate of daughter strand construction in the anti-parallel strand is lower than that of the parallel strand due to the smaller magnitude of asymmetric cooperativity, as illustrated by the higher slope of the lines in Fig. 7(b). This robustness arises from the possibility of increasing the number of origins of replication, and hence the number of segments, by appropriately choosing the sequences, thereby reducing the segment lengths and hence their replication time. Once the DNA is locally unzipped bidirectionally, construction of daughter strands can begin anywhere on the two single-strand templates and proceed from the 3′-end of the template towards the 5′-end. But due to the sequence-independent asymmetric cooperativity of the single-strand templates, the kinetically favorable replication initialization happens when the first hydrogen bond between the template and an incoming nucleotide is formed at the farthest of the unzipped 3′-ends of the two template strands, as shown in Fig. 8. In Fig. 8, the lightly shaded G nucleotide denotes the location of the kinetically stable first bond formation on both strands, beyond which the DNA double strand has not yet unzipped. As can be seen from this figure, the daughter strand construction can happen continuously on the template made available through unzipping only when the unzipping direction and the direction of the daughter strand construction are the same.
This happens on the parts of the two template strands labeled "leading strand templates" in Fig. 8. When the direction of unzipping is opposite to that of the daughter strand construction, on the parts labeled "lagging strand templates" in Fig. 8, daughter strand construction should begin at the farthest 3′-end made available by unzipping and proceed towards the 5′-end, to be kinetically favorable. When another burst of unzipping happens beyond the initial bubble, the lagging strand construction should again begin at the farthest 3′-end of the recently unzipped template segment and proceed towards the 5′-end. In extant organisms, the ingenious replisome design ensures that the RNA primers are attached to the lagging strand end closest to the helicase unzipping the DNA, and that the lagging strand is replicated from those ends discontinuously20. In the primordial settings that we are interested in, the Y-shaped fork itself might have catalyzed the initiation of daughter strand construction at the 3′-end of the lagging strand template. The picture we have developed thus far utilizes sequence-independent and sequence-dependent asymmetric cooperativities to argue that the experimentally observed DNA replication mechanism is kinetically the most favorable one. Furthermore, the above picture also suggests that the structural aspects of DNA, such as strand directionality and anti-parallel strand orientation, evolved to minimize the replication time and increase replicative potential.

Information storage and sequence-dependent replication kinetics

We have argued above that the sequence characteristics of a primordial ancestor of DNA dictated its unzipping and replicative kinetics, through sequence-dependent asymmetric cooperativity, instantiated by anti-parallel strand orientation and heteromolecular base-pairing. Sequences that support temporal parallelization of replication, through multiple alterations of the mode of asymmetric cooperativity between left and right across the length of the polymer, such as 5′–(G)m(C)m(G)m(C)m–3′, for an arbitrary m, can successfully compete for monomers against a similar-length sequence such as 5′–(G)4m–3′, whose unzipping kinetics favor replication in a single file from right to left. The latter would take longer to replicate compared to the former (see Fig. 7). Thus, our hypothesis of sequence-dependent asymmetric cooperativity makes concrete the connection between a specific sequence and its self-replicative potential in the primordial oceans. The competition for resources such as monomers between different sequences will result in certain sequences dominating over others in replicative potential, thereby giving rise to persistence of sequence properties, or information, across many cycles of replication of heteropolymers. Environmental conditions, such as the abundance of monomers, temperature, pH and so on, would influence the rate of replication, and hence would also influence the type of sequences that would be successful in a given environment. For example, when monomers are highly abundant, sequences such as 5′–(G)m(C)m(G)m(C)m(G)m(C)m–3′ would replicate faster than a sequence of the same length, 5′–(G)n(C)n(G)n(C)n–3′, with n > m, due to the presence of more independently replicating subunits in the former.
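The timing comparison in the preceding example can be made concrete with a toy calculation. In the sketch below (an illustration under assumed, uniform kinetics, not a result of the model itself), every G-to-C boundary on the 5′–3′ strand is treated as an origin, unzipping together with daughter strand growth spreads from each origin at one base per unit time in both directions, and the replication time of the duplex is the time at which the last base is reached; a single-file copy of the same length stands in for the parallel-stranded competitor.

# Toy timing model: origins at G->C boundaries on the 5'->3' strand,
# bidirectional growth from every origin at one base per unit time.
# The rates and the origin rule are assumptions of this illustration.

def origin_positions(top_strand):
    """Midpoints of G-to-C boundaries on the top (5'->3') strand."""
    s = top_strand.upper()
    return [i + 0.5 for i in range(len(s) - 1) if s[i] == "G" and s[i + 1] == "C"]

def parallelized_time(top_strand):
    """Time to reach every base when all origins fire simultaneously."""
    origins = origin_positions(top_strand)
    if not origins:                 # no origin: fall back to an end-to-end copy
        return float(len(top_strand))
    return max(min(abs(i - o) for o in origins) for i in range(len(top_strand)))

def single_file_time(top_strand):
    """Time for a sequential, end-to-end copy (the parallel-stranded competitor)."""
    return float(len(top_strand))

if __name__ == "__main__":
    three_origins = "GGCCGGCCGGCC"   # 5'-(G)2(C)2(G)2(C)2(G)2(C)2-3', m = 2
    two_origins = "GGGCCCGGGCCC"     # 5'-(G)3(C)3(G)3(C)3-3', n = 3
    for seq in (three_origins, two_origins):
        print(seq, "origins:", len(origin_positions(seq)),
              "parallel time:", parallelized_time(seq),
              "single-file time:", single_file_time(seq))

With equal lengths, the sequence with three mode-switching boundaries finishes sooner than the one with two, and both finish far sooner than the end-to-end copy, which is the qualitative point made above.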
Conversely, when the monomer supply is scarce, sequences that kinetically promote the retention of monomers bound to the template, and that avoid multiple origins of replication (which require multiple, simultaneous initiations of daughter strand construction), such as the latter, will be more successful in replication. Thus the environment would influence the type of sequences that will be successful in it, leaving a crude imprint of itself in the sequences. The origin of information storage and processing in living systems is usually argued to be the point when an RNA or its ancestral self-replicator began forming a sequence-dependent three-dimensional folded structure that catalyzed the self-replication of itself and of its hypercyclic partners21. Here, we argue for the possibility of the existence of heteropolymers whose replicative success in a given environment depends on their sequences, through sequence-dependent unzipping kinetics, leading to a more primitive form of information storage in the sequences that reflects the kind of environment in which they would succeed.

Experimental support for the model

Multiple, independent lines of experimental observations in the literature, when reinterpreted, support the central thesis developed above, that the kinetics of unzipping during the replication/transcription of DNA depends on the sequence through sequence-dependent asymmetric cooperativity. Observations such as the pervasive presence of asymmetric base composition or GC skew in nearly all genomes studied, which has resisted a simple explanation thus far, find a surprisingly simple explanation within the model developed above. Furthermore, the observations of polar inhibition of replication forks, palindromic instability and primer extension kinetics lend support to the existence of sequence-dependent asymmetric cooperativity. Below, we list these various experimental observations and elaborate on how they support our thesis.

The presence of asymmetric nucleotide composition or GC skew

Asymmetric base composition or GC skew, defined as a local excess of G over C or vice versa in one of the strands of the duplex DNA, has been observed in nearly all genomes studied, both prokaryotic and eukaryotic22,23,24,25,26,27. This strand asymmetry, calculated as (C − G)/(C + G), expressed as a percentage, in running windows along genomic sequences, can be positive or negative at different locations, and its magnitude averages to about 4% in the human genome28 and is more than 12% in some bacteria29. The characteristic signature of the presence of (GC) skew is a "V"-shaped cumulative skew diagram, as illustrated in Fig. 9. GC skew is traditionally used in genome analysis software to find origins of replication in prokaryotic genomic sequences, by identifying locations on the 5′–3′ strand where the skew switches from (G)-dominant to (C)-dominant. Various reasons have been provided for the presence of (GC) skew in genomes, with the most prominent one attributing it to the asymmetric mutational pressures due to the differences in leading and lagging strand replicative and transcriptional mechanisms30,31,32, while the relative magnitudes of the mutational pressures due to replication and transcription remain contentious33,34. Again, this reasoning does not provide the evolutionary significance of (GC) skew, but only the mechanistic reason for its emergence. The question of the evolutionary advantage of (GC) skew is important because the higher the (GC) skew, the smaller the space available for coding amino acids.
For example, if there are very few or no G's available on a part of the transcribed DNA strand, due to very high (GC) skew, then the DNA codons that have G in them, such as 5′–CTG–3′, cannot be used to code for the amino acid leucine, forcing the organism to code for the amino acid using other synonymous triplets, such as CTA. Thus (GC) skew places restrictions on the redundancy of the genetic code, and hence is possibly detrimental, making its evolutionary significance much more intriguing. The model we described above provides both the mechanistic and evolutionary underpinnings of (GC) skew. The significance of (GC) skew is apparent from Fig. 9. The figure clearly illustrates our idea that the skew is the cause of the direction of unzipping during DNA replication. The duplex strand shown in Fig. 9 shows three replichores, which are the independently replicating segments of DNA, oriented in such a way that the first segment is left asymmetrically cooperative, the second, right, and the third, left asymmetrically cooperative again. Since left asymmetric cooperativity is instantiated by 5′–C–3′/3′–G–5′, as shown in Fig. 3(b), the first segment to the left is composed of a 5′–3′ top strand that is C-dominant and a 3′–5′ bottom strand that is G-dominant. Similarly, for the right asymmetrically cooperative duplex segment, the 5′–3′ top strand is (G)-dominant, and the 3′–5′ strand, (C)-dominant. On a side note, an objection may be raised that the experimentally observed excess of G- or C-dominance is only of the order of a few percent. This objection can be addressed by relaxing the assumption in our model that the kinetic effects of asymmetric cooperativity apply only to the nearest neighbors, by including hydrogen bonds that are farther away. The asymmetric kinetic effect of the orientation of a given base-pair may extend well beyond the nearest neighbors. Observations that support the relaxation of our nearest-neighbor assumption include experiments where pairs of base-pairs in duplex DNA have been shown to interact across a distance of the order of a few nanometers (the electronic coherence length), about an order of magnitude larger than the distance between two neighboring base-pairs35,36. When the kinetic interaction extends beyond the nearest neighbors, it becomes possible for a (GC) skew of only a few percent to set the unzipping orientation during DNA replication. As shown in Fig. 9, there are two types of interfaces between two replichores: (a) as we move from the 5′-end of a strand towards its 3′-end, a (G)-dominant replichore changes to a (C)-dominant one at the interface (bottom strand, right interface), or (b) a (C)-dominant replichore changes to a (G)-dominant one at the interface (bottom strand, left interface). The kinetics of bonding/dissociation of base-pairs at these two types of interfaces are entirely different. This difference has to do with the direction of the catalytic arrows of the base-pairs on either side of the interface. The arrows in the middle of the two strands in Fig. 9 show the direction of the catalysis, which is determined by the sign of the (GC) skew. For the type of interface mentioned in (a) above, the asymmetric cooperativity changes from left mode to right mode as we move towards the left, and the catalytic arrows point at each other, as in the first interface from the right of the strand in Fig. 9, denoted with a red dot.
The hydrogen bonds of base-pairs at the interface will have their barriers lowered due to catalytic influence from the neighboring base-pairs in both the left and right directions, and are prone to dissociate easily. This explains why replichore interfaces of type (a) function as origins of replication. On the other hand, in type (b), the catalytic arrows point away from each other, as in the first replichore interface from the left of the strands in Fig. 9. This raises the kinetic barriers of the hydrogen bonds of base-pairs at the interface, and thus makes such interfaces function as replication termini. It is easy to see that the higher the (GC) skew, the stronger the sequence-dependent asymmetric cooperativity, and consequently, the higher the rate of unzipping and hence of replication. It is interesting to note that such a correlation between the magnitude of skew in a genome and its replicative speed has already been observed37. The other pair of nucleotides, (A) and (T), are also observed to be asymmetrically distributed across the two strands of DNA in various genomes, and the switch in this asymmetry is correlated with replication origins24. But the (AT) base-pair orientation does not correlate with the direction of replication as consistently across genomes of different organisms37 as that of the (GC) base-pair. For example, (T) is enriched on the leading strand in the human genome, whereas (A) is enriched on the leading strand in the B. subtilis genome. It is possible that different environmental factors dictate the asymmetric cooperativity mode of the base-pair. We would like to emphasize that, while the directionality of the unzipping machinery is determined by the GC skew within this picture, the direction of new strand synthesis would still be dictated by the 3′–5′ directionality of the template strand, due to our assumption of weaker sequence-dependent asymmetric cooperativity compared to strand directionality-dependent asymmetric cooperativity.

Asymmetric primer extension kinetics

An important experimental source of support for the connection we established above between the asymmetric cooperativity mode and the orientation of the base-pairs, i.e., 5′–G–3′/3′–C–5′ versus 5′–C–3′/3′–G–5′, is provided in13, where the kinetics of non-enzymatic primer extension (which includes both hydrogen and covalent bonding) is measured as a function of various sequence neighborhoods. The asymmetric influence of a hydrogen bond on the incorporation kinetics of a monomer nearby is illustrated in Fig. S6 of13, and reproduced with permission here in Fig. 10. First, the rate of incorporation of a nucleotide is shown to be dependent on the type of nucleotide present on the 3′ and the 5′ neighboring ends of the incorporated nucleotide (Table 1 of13). Second, the rate of incorporation depends on the orientation of the neighboring base-pairs, i.e., 5′–G–3′/3′–C–5′ versus 5′–C–3′/3′–G–5′. For example, 5′–C–3′/3′–G–5′ supports a higher rate of nucleotide incorporation to its left compared to 5′–G–3′/3′–C–5′, whereas 5′–G–3′/3′–C–5′ supports a higher incorporation rate to its right compared to 5′–C–3′/3′–G–5′ (please see Fig. 10). Third, the direction of asymmetric enhancement of the incorporation rate (5′–C–3′/3′–G–5′ catalyzing the left neighbor) agrees with the direction of catalysis that we arrived at from the well-established relationship between the direction of unzipping during replication and (GC) skew.
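Before turning to palindromes, the (GC)-skew bookkeeping used in this section can be written out explicitly. The sketch below is a standard calculation and not taken from the cited experiments; the example "genome" is synthetic. It evaluates the skew (C − G)/(C + G) in fixed windows along the 5′–3′ strand and the cumulative skew curve, whose minimum marks the switch from (G)-dominance to (C)-dominance, i.e. the candidate origin of replication underlying the V-shaped diagrams mentioned above.

# Minimal (GC)-skew analysis: per-window skew and the cumulative skew curve.
# The window size and the synthetic example sequence are arbitrary choices.

def window_gc_skew(sequence, window):
    """(C - G)/(C + G) in consecutive non-overlapping windows."""
    skews = []
    for start in range(0, len(sequence) - window + 1, window):
        chunk = sequence[start:start + window].upper()
        c, g = chunk.count("C"), chunk.count("G")
        skews.append((c - g) / (c + g) if (c + g) else 0.0)
    return skews

def cumulative_skew(sequence):
    """Running sum of +1 per C and -1 per G: the familiar V-shaped curve."""
    total, curve = 0, []
    for base in sequence.upper():
        total += {"C": 1, "G": -1}.get(base, 0)
        curve.append(total)
    return curve

def candidate_origin(sequence):
    """Index of the cumulative-skew minimum, where G-dominance gives way to C-dominance."""
    curve = cumulative_skew(sequence)
    return curve.index(min(curve))

if __name__ == "__main__":
    genome = "GGAGTGGAGG" * 30 + "CCACTCCACC" * 30   # G-rich replichore, then C-rich
    print("windowed skew:", window_gc_skew(genome, 100))
    print("candidate origin near index:", candidate_origin(genome))

Running this on the synthetic two-replichore sequence places the candidate origin at the boundary between the G-rich and C-rich halves, as expected from the discussion above.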
Palindrome and inverted repeat instability

Special sequences whose bottom strand sequence is the reverse of the top strand sequence, exhibiting a special kind of symmetry called "dyad symmetry", are called palindromes. An example is the sequence 5′–CTAG–3′/3′–GATC–5′, which has been shown to be extremely rare in bacterial genomes38. Perfect palindromes are generally under-represented in most genomes39, and have been shown to be fragile40. Inverted repeats are sequences with an intervening sequence between the two symmetric "arms" of a palindromic sequence. As with the larger-scale approximate dyadic symmetry of the (GC)-skew-switching locations leading to origins of replication, these smaller-scale dyadic symmetry elements too serve as origins of replication and transcription41, and function as targets for restriction enzymes42. Within our model, these properties follow from the increased symmetry of palindromic and inverted repeat sequences. The dyadic symmetry of palindromic sequences, illustrated in Fig. 11, causes the asymmetric cooperativity modes of the two arms of the palindrome to point in opposite directions. This results in two possibilities: (a) the two asymmetric cooperativity arrows of the two arms point away from each other, or (b) the two arrows point at each other. The former case, shown in Fig. 11(a), makes the center of the palindrome behave like a replication terminus (see also Fig. 9), but at one of the ends of the palindrome the two arrows point at each other, rendering that location unstable. This location is denoted by a red ellipse in Fig. 11(a). In the second case, the two arrows point at each other in the middle of the palindrome, resulting in instability at the center of the palindrome. This instability can lead to local unzipping at those locations, and in case (b) may allow for the formation of secondary structures such as cruciform extrusion. Inverted repeats, which have an intervening sequence between the two arms of a palindrome, will also lead to local instability, due to the (GC) skew of the intervening sequence being different in direction from the skew of one of the arms of the palindromic sequence that contains it. This clear separation of palindromes into two different types, (a) and (b), provides a possibility to experimentally verify our hypothesis of sequence-dependent asymmetric cooperativity. Since the fragile locations, where the double strand is unstable with respect to thermal fluctuations, are different in the two types, a bioinformatic/experimental search for fragile locations in these two types can provide clear evidence for or against our hypothesis.

Polar inhibition of replication forks

Another source of experimental support is the documented asymmetric (polar) and sequence-dependent rate of movement of the "unzipping machinery" (the replication fork) as it traverses the genome during replication. During DNA replication, the replication fork moves unidirectionally from the origin of replication, with the direction correlated with the sign of the GC skew. Thus, stretches of the genome with G-enrichment on one strand should allow the fork to proceed in one direction, while inhibiting its movement in the opposite direction. Such polar inhibition of replication forks through G-enriched sections has been experimentally observed43,44,45, and is usually explained as being due to triple-helix formation, although there has been no direct experimental evidence for triple-helix formation in vivo.
This sequence-dependent unidirectional movement of the replication fork arises, within our model, from the asymmetric kinetics of (un)zipping of the asymmetrically cooperative DNA. It has to be noted that the permissive and blocking directions set by the (GC) skew are consistent for the movement of both the DNA unzipping machinery and the replicative and transcriptional machinery through (G)-enriched sections of different genomes. Thermodynamic parameters of DNA unzipping alone cannot capture such direction-dependent rates of movement of the replication fork. Further support for sequence-dependent unidirectional movement of the replication fork comes from a) the direction-dependent slowdown of the replication fork at transcription start and stop elements46, b) the direction-dependent pause or termination of replication at the ter elements of E. coli47, with the choice between pause and termination determined by the speed of the replisome48, and c) genetically-determined replication slow zones in the budding yeast49 and D. melanogaster50 genomes. At the single-molecule level, the orientation of the terminal base-pair of DNA hairpin molecules has been discerned using the kinetics of unzipping through a nanopore51. More recently, the differences in the lifetimes of stacking interactions between swapped-sequence pairs such as 5′–CG–3′/3′–GC–5′ and 5′–GC–3′/3′–CG–5′ have been shown to span several orders of magnitude52, further supporting our hypothesis of a connection between base-pair orientation and kinetics.

Asynchronous replication of mammalian mitochondria

Mammalian mitochondrial DNA replicates slowly compared to the rates of replication of prokaryotes such as E. coli, and appears to be under minimal evolutionary pressure for rapid replication53. In the absence of such pressure, the mitochondrial genome is not constrained to simultaneously replicate independent segments, and has been shown to undergo a different mode of replication (the strand displacement model), where the two strands replicate independently, successively, and asynchronously53. This mode of replication avoids employing lagging strand synthesis to replicate major sections of the genome and thus forgoes the complications associated with it. The (GC) skews of these mammalian mitochondrial genomes are larger in magnitude and never cross zero54, implying, within our picture, that the asymmetric cooperativity mode remains the same for a major portion of such genomes. This suggests that, under minimal evolutionary pressure for faster replication, mammalian mitochondria have dispensed with the lagging strand synthesis approach, and adopted a (GC) skew profile that supports the continuous replication of both strands.

Falsification approaches

The model above and its central premise, the presence of sequence-dependent asymmetric cooperativity in DNA, can be experimentally verified or falsified with currently available technologies. The relationship between (GC) skew and asymmetric kinetic barriers on the two sides of a double-stranded DNA can be tested thoroughly by unzipping a single dsDNA molecule with an atomic force microscope from both ends and documenting the force signatures, as has been done in55, taking care to perform the experiment near equilibrium conditions. According to our model, it should be easier to unzip the sequence 5′–(C)n–3′/3′–(G)n–5′ from the left end and 5′–(G)n–3′/3′–(C)n–5′ from the right end, in an environment resembling the in vivo conditions of prokaryotic genomes.
Sequence-dependent asymmetric cooperativity can be quantified by varying the sequence and measuring the difference in the forces required to unzip the dsDNA molecules from the left and right ends. The model's assumption that only nearest-neighbor base-pairs affect the kinetics of unzipping can also be tested and modified as necessary. The connection between origins of replication and asymmetric cooperativity can be tested by working with sequences whose (GC) skew switches between negative and positive values and measuring the lifetimes of the hydrogen bonds of base-pairs at the switching location, through NMR experiments, taking care to include the helicity and the topology of the strands as influencing variables. The hydrogen bond lifetimes at the switching location should be lower when the skew switches from (G)-dominant to (C)-dominant, and should be higher when the switch is the other way around, when the environmental variables are kept at values similar to those observed in prokaryotic genomes. Another falsification approach, using either bioinformatics or experiments, is to verify the presence of the two types of palindromic sequences, type (a) and type (b), as explained above. The fragile locations on these two types of sequences would be different, according to the model. Type (a) palindromic sequences would have fragile locations at one of their ends, whereas type (b) sequences would have fragile locations at the center of the palindrome.

Discussion

We have shown that some fundamental structural and functional elements of DNA can be connected to the presence of asymmetric cooperativity in DNA. Asymmetric cooperativity, defined as an unequal and non-reciprocal kinetic influence between two interstrand hydrogen bonds, necessitates the breaking of the left-right symmetry of monomers, resulting in directional monomers and strands, denoted in the biological literature as 3′–5′ directionality. In this article, we factorized asymmetric cooperativity into sequence-independent and sequence-dependent parts, operative in single and anti-parallel double strands respectively, for ease of analysis. We have argued that the anti-parallel strand orientation of DNA enables independent unzipping and replication of multiple segments of DNA simultaneously, from predictable origins of replication (for prokaryotes), through sequence-dependent asymmetric cooperativity, since the stronger sequence-independent part is cancelled due to the anti-parallel orientation of the two strands of the duplex. Such a replicative organization would result in a substantially shorter replication time for self-replicating heteropolymers with anti-parallel strands, when compared to heteropolymers with parallel strands. The latter's unzipping direction would be set by the parallel strands themselves through sequence-independent asymmetric cooperativity; it is therefore frozen along the entire length of the strands and cannot be altered to achieve simultaneous replication of independent segments, within our model. Parallel-stranded DNA has been shown to form readily, given appropriate sequences, under physiological conditions in vitro7,8,9,10. There is also evidence of the formation of parallel-stranded RNA sequences in vivo in gene-silencing experiments6. Thus, the biochemical implausibility of forming parallel DNA strands cannot be the reason for the choice of anti-parallel strands.
Experiments comparing the thermodynamic stabilities of anti-parallel and parallel-stranded DNA have shown that the former are more stable and have higher melting temperatures7,8. This stability is essential for DNA to preserve information across multiple generations, which is achieved by raising the thermodynamic barrier for the double-strand to single-strand (helix-coil) transition, thereby reducing the time spent by DNA in the mutationally more susceptible single-stranded state. However, in the primordial scenario we are interested in, such high thermodynamic barriers are counterproductive, since they would prevent the separation of the daughter strand from the template in time to start the next round of replication11, making anti-parallel strands a replicatively less favorable choice. Evolution appears to have overcome these competing requirements of high and low thermodynamic stabilities of double-stranded anti-parallel DNA by utilizing the sequence-dependence of the thermodynamic and kinetic barriers for the helix-coil transition. This sequence-dependence enables predictable sections of DNA with low barriers to function as origins of replication, which in turn provide access to thermodynamically more stable sections of DNA through cooperative unzipping. We showed that sequence-dependent asymmetric cooperativity cannot be instantiated in anti-parallel strands with homomolecular inter-strand bonds, due to the absence of left-right asymmetry of the homomolecular base-pair. This necessitates the introduction of heteromolecular inter-strand bonds, which possibly led to G/C and A/T heteromolecular inter-strand bonding. We argue that unzipping directionality during replication is set by asymmetric nucleotide composition or (GC) skew, the excess of one nucleotide over another over the entire segment of DNA over which the unzipping machinery moves in the same direction. This provides an evolution-based rationale for the existence of asymmetric nucleotide composition in genomes, otherwise detrimental due to the consequent reduction of protein-coding space. Our identification of (GC) skew as the cause of unzipping and replication directionality, rather than an effect of the latter, through sequence-dependent asymmetric cooperativity, also helps us make sense of the nature of sequences at replication origins. These sequences at replication origins usually exhibit an approximate dyadic symmetry, a prominent example being palindromic sequences. We have shown that, due to the switching of asymmetric cooperativity modes from right to left, the hydrogen bonds at these locations have lowered kinetic barriers, and hence can break easily during thermal fluctuations, enabling them to function as origins. Similar arguments apply to sequences at the replication termini, where the kinetic barrier is raised due to inhibitory kinetic influence from either side of the (GC) skew-switching location. We speculate that the kinetics of unzipping underlie the information-encoding mechanism in genomes56, with thermodynamics playing a more subdued role. We have referred to multiple experiments and observations that point to the existence of asymmetric cooperativity in DNA. We have also included possible experimental tests to validate the proposed connections, where appropriate.
Importantly, our theoretical picture might make it possible to decipher the connection between DNA sequence and its propensity and rate of unzipping under various cellular environments, by going beyond thermodynamic analyses alone, thereby throwing a clearer light on the mechanisms governing the specific genomic response to these cellular environments. These connections thus also provide possible means of manipulating the genomic responses through rational alteration of local sequences, informed by the inclusion of sequence-dependent asymmetric cooperativity. Crucially, by linking together DNA sequence and its rate of replication, asymmetric cooperativity might have made prebiotic evolution possible in the first place. In conclusion, asymmetric cooperativity, if experimentally verified to be present in DNA, can provide a unifying theoretical picture within which the evolutionary rationale for the existence of some fundamental properties of DNA can be understood. A reasonable counter-argument against the foregoing is the absence of any evidence of temporal parallelization of replication in the possibly more primordial RNA-based life forms, such as dsRNA viruses, as a reviewer has pointed out. The genomic organization of RNA-based genomic systems of viruses appear to be dictated by the thermodynamic instability of long RNA molecules14, and less by the evolutionary pressure towards high rate of replication. The manufacture of the capsid proteins of RNA viruses inside their hosts has been shown to be the rate-limiting step during the viral replication57, which reduces the evolutionary pressure on the RNA genomes to replicate faster. RNA viruses increase the information content of their genomes, subject to the constraint on the length of RNA molecules, by dividing their genomes into multiple, small, unconnected RNA strands, called segments, that replicate unidirectionally, asynchronously and independently of each other14,15. The absence of evidence for RNA-based genomes with replichore-based genomic organization similar to that of DNA is also possibly due to the current environmental conditions on Earth being different from the ones prevailing during the “RNA-world” scenario which possibly supported longer RNA molecules16,17. Limitations of the model As with nearly all biophysical models, the model constructed above is very much an abstraction of the real processes inside DNA, which leaves out a vast majority of other interactions. A more realistic model, while including all interactions, say between DNA and the replisome proteins, would be hopelessly complicated to be amenable to such simple theoretical arguments. In isolating one particular interaction to study in detail, namely, the influence of neighborhood on the kinetics of hydrogen bonding, we have ignored the influence of other related degrees of freedom of DNA, such as its helicity or topology, on our subsystem of study. The interactions between these other degrees of freedom and asymmetric cooperativity would be crucial to understand higher order functions, such as the influence of negative supercoiling or superhelicity on replication and transcription origins, for instance. Another technical limitation is our assumption that only nearest neighbors influence the kinetics of hydrogen bonds, which can be safely relaxed without jeopardizing our conclusions. 
Although we have justified our exclusion of interactions of DNA with other cellular components by situating our study at the time of the evolutionary progenitors of DNA which were not encumbered with such interactions, quantitative analyses of extant systems that go beyond mere understanding require the inclusion of such interactions, for which the above model will merely serve as a simple starting point.

References

1. Engelhart, A. E. & Hud, N. V. Primitive genetic polymers. Cold Spring Harb Perspect Biol, p. 21 (2010).
2. Hud, N. V., Cafferty, B. J., Krishnamurthy, R. & Williams, L. D. The origin of RNA and my grandfather’s axe. Chemistry & Biology 20(4), 466–474 (2013).
3. Orgel, L. E. Prebiotic chemistry and the origin of the RNA world. Critical Reviews in Biochemistry and Molecular Biology 39(2), 99–123 (2004).
4. Joyce, G. F., Schwartz, A. W., Miller, S. L. & Orgel, L. E. The case for an ancestral genetic system involving simple analogues of the nucleotides. Proceedings of the National Academy of Sciences 84(13), 4398–4402 (1987).
5. Veitia, R. & Ottolenghi, C. Placing parallel stranded DNA in an evolutionary context. Journal of Theoretical Biology 206(2), 317–322 (2000).
6. Tchurikov, N. A. et al. Gene-specific silencing by expression of parallel complementary RNA in Escherichia coli. Journal of Biological Chemistry 275(34), 26523–26529 (2000).
7. Ramsing, N. B., Rippe, K. & Jovin, T. M. Helix-coil transition of parallel-stranded DNA. Thermodynamics of hairpin and linear duplex oligonucleotides. Biochemistry 28(24), 9528–9535 (1989).
8. Germann, M. W., Kalisch, B. W. & van de Sande, J. H. Relative stability of parallel- and anti-parallel-stranded duplex DNA. Biochemistry 27(22), 8302–8306 (1988).
9. Szabat, M. & Kierzek, R. Parallel-stranded DNA and RNA duplexes – structural features and potential applications. The FEBS Journal 284(23), 3986–3998 (2017).
10. Shchyolkina, A. K. et al. Parallel-stranded DNA with natural base sequences. Molecular Biology 37(2), 223–231 (2003).
11. Szostak, J. W. The eightfold path to non-enzymatic RNA replication. Journal of Systems Chemistry 3(1), 2 (2012).
12. Subramanian, H. & Gatenby, R. A. Evolutionary advantage of directional symmetry breaking in self-replicating polymers. Journal of Theoretical Biology 446, 128–136 (2018).
13. Kervio, E., Hochgesand, A., Steiner, U. E. & Richert, C. Templating efficiency of naked DNA. Proceedings of the National Academy of Sciences 107(27), 12074–12079 (2010).
14. Holmes, E. C. The Evolution and Emergence of RNA Viruses (Oxford University Press, 2009).
15. Ojosnegros, S. et al. Viral genome segmentation can result from a trade-off between genetic content and particle stability. PLoS Genetics 7(3) (2011).
16. Vlassov, A. V. et al. The RNA world on ice: a new scenario for the emergence of RNA information. Journal of Molecular Evolution 61(2), 264–273 (2005).
17. Attwater, J. et al. Ice as a protocellular medium for RNA replication. Nature Communications 1(1), 1–9 (2010).
18. Altan-Bonnet, G., Libchaber, A. & Krichevsky, O.
Bubble dynamics in double-stranded DNA. Physical Review Letters 90(13), 138101 (2003).
19. Kalosakas, G., Rasmussen, K. Ø., Bishop, A. R., Choi, C. H. & Usheva, A. Sequence-specific thermal fluctuations identify start sites for DNA transcription. Europhysics Letters 68(1), 127 (2004).
20. Pomerantz, R. T. & O’Donnell, M. Replisome mechanics: insights into a twin DNA polymerase machine. Trends in Microbiology 15(4), 156–164 (2007).
21. Gesteland, R. F., Cech, T. R. & Atkins, J. F. (eds) The RNA World (Cold Spring Harbor Laboratory Press, Cold Spring Harbor, 1999).
22. Rocha, E. P. The replication-related organization of bacterial genomes. Microbiology 150(6), 1609–1627 (2004).
23. Tillier, E. R. & Collins, R. A. The contributions of replication orientation, gene direction, and signal sequences to base-composition asymmetries in bacterial genomes. Journal of Molecular Evolution 50(3), 249–257 (2000).
24. Dai, J., Chuang, R.-Y. & Kelly, T. J. DNA replication origins in the Schizosaccharomyces pombe genome. Proceedings of the National Academy of Sciences of the United States of America 102(2), 337–342 (2005).
25. Marsolier-Kergoat, M.-C. Asymmetry indices for analysis and prediction of replication origins in eukaryotic genomes. PLoS One 7(9), e45050 (2012).
26. Niu, D. K., Lin, K. & Zhang, D.-Y. Strand compositional asymmetries of nuclear DNA in eukaryotes. Journal of Molecular Evolution 57(3), 325–334 (2003).
27. Bartholdy, B., Mukhopadhyay, R., Lajugie, J., Aladjem, M. I. & Bouhassira, E. E. Allele-specific analysis of DNA replication origins in mammalian cells. Nature Communications 6 (2015).
28. Touchon, M. et al. Replication-associated strand asymmetries in mammalian genomes: toward detection of replication origins. Proceedings of the National Academy of Sciences of the United States of America 102(28), 9836–9841 (2005).
29. Lobry, J. Asymmetric substitution patterns in the two DNA strands of bacteria. Molecular Biology and Evolution 13(5), 660–665 (1996).
30. Rocha, E. P. The organization of the bacterial genome. Annual Review of Genetics 42, 211–233 (2008).
31. Frank, A. & Lobry, J. Asymmetric substitution patterns: a review of possible underlying mutational or selective mechanisms. Gene 238(1), 65–77 (1999).
32. Polak, P. & Arndt, P. F. Transcription induces strand-specific mutations at the 5’ end of human genes. Genome Research 18(8), 1216–1223 (2008).
33. Green, P., Ewing, B., Miller, W., Thomas, P. J. & Green, E. D. Transcription-associated mutational asymmetry in mammalian evolution. Nature Genetics 33, 514–517 (2003).
34. Kono, N., Tomita, M. & Arakawa, K. Accelerated laboratory evolution reveals the influence of replication on the GC skew in Escherichia coli. Genome Biology and Evolution 10(11), 3110–3117 (2018).
35. Artés, J. M., Li, Y., Qi, J., Anantram, M. & Hihath, J. Conformational gating of DNA conductance. Nature Communications 6, 8870 (2015).
36. Beratan, D. N., Naaman, R. & Waldeck, D. H. Charge and spin transport through nucleic acids. Current Opinion in Electrochemistry (2017).
37. Worning, P., Jensen, L. J., Hallin, P. F., Stærfeldt, H.-H. & Ussery, D. W. Origin of replication in circular prokaryotic chromosomes. Environmental Microbiology 8(2), 353–361 (2006).
38. Burge, C., Campbell, A. M. & Karlin, S. Over- and under-representation of short oligonucleotides in DNA sequences. Proceedings of the National Academy of Sciences 89(4), 1358–1362 (1992).
39. Leach, D. R. Long DNA palindromes, cruciform structures, genetic instability and secondary structure repair. BioEssays 16(12), 893–900 (1994).
40. Voineagu, I., Narayanan, V., Lobachev, K. S. & Mirkin, S. M. Replication stalling at unstable inverted repeats: interplay between DNA hairpins and fork stabilizing proteins. Proceedings of the National Academy of Sciences 105(29), 9936–9941 (2008).
41. Pearson, C. E., Zorbas, H., Price, G. B. & Zannis-Hadjopoulos, M. Inverted repeats, stem-loops, and cruciforms: significance for initiation of DNA replication. Journal of Cellular Biochemistry 63(1), 1–22 (1996).
42. Pingoud, A. & Jeltsch, A. Recognition and cleavage of DNA by type-II restriction endonucleases. European Journal of Biochemistry/FEBS 246(1), 1 (1997).
43. Brinton, B., Caddle, M. S. & Heintz, N. Position and orientation-dependent effects of a eukaryotic Z-triplex DNA motif on episomal DNA replication in COS-7 cells. Journal of Biological Chemistry 266(8), 5153–5161 (1991).
44. Belotserkovskii, B. P. et al. Transcription blockage by homopurine DNA sequences: role of sequence composition and single-strand breaks. Nucleic Acids Research, p. 1333 (2012).
45. Krasilnikova, M. M., Samadashwily, G. M., Krasilnikov, A. S. & Mirkin, S. M. Transcription through a simple DNA repeat blocks replication elongation. The EMBO Journal 17(17), 5095–5102 (1998).
46. Mirkin, E. V., Roa, D. C., Nudler, E. & Mirkin, S. M. Transcription regulatory elements are punctuation marks for DNA replication. Proceedings of the National Academy of Sciences 103(19), 7276–7281 (2006).
47. Lee, E. H., Kornberg, A., Hidaka, M., Kobayashi, T. & Horiuchi, T. Escherichia coli replication termination protein impedes the action of helicases. Proceedings of the National Academy of Sciences 86(23), 9104–9108 (1989).
48. Elshenawy, M. M. et al. Replisome speed determines the efficiency of the Tus-Ter replication termination barrier. Nature (2015).
49. Cha, R. S. & Kleckner, N. ATR homolog Mec1 promotes fork progression, thus averting breaks in replication slow zones. Science 297(5581), 602–606 (2002).
50. Jøers, P. & Jacobs, H. T. Analysis of replication intermediates indicates that Drosophila melanogaster mitochondrial DNA replicates by a strand-coupled theta mechanism. PLoS One 8(1), e53249 (2013).
51. Vercoutere, W. A. et al. Discrimination among individual Watson–Crick base pairs at the termini of single DNA hairpin molecules. Nucleic Acids Research 31(4), 1311–1318 (2003).
52. Kilchherr, F. et al.
Single-molecule dissection of stacking forces in DNA. Science 353(6304), 5508 (2016).
53. Clayton, D. A. Transcription and replication of mitochondrial DNA. Human Reproduction 15(Suppl 2), 11 (2000).
54. Xia, X. DNA replication and strand asymmetry in prokaryotic and mitochondrial genomes. Current Genomics 13(1), 16–27 (2012).
55. Bockelmann, U., Essevaz-Roulet, B. & Heslot, F. Molecular stick-slip motion revealed by opening DNA with piconewton forces. Physical Review Letters 79(22), 4489 (1997).
56. Pross, A. The driving force for life’s emergence: kinetic and thermodynamic considerations. Journal of Theoretical Biology 220(3), 393–406 (2003).
57. Birch, E. W., Ruggero, N. A. & Covert, M. W. Determining host metabolic limitations on viral replication via integrated modeling and experimental perturbation. PLoS Computational Biology 8(10) (2012).

Acknowledgements
We thank Addy Pross, John Cleveland, Joel Brown and Robert Gillies for useful comments. HS thanks Artem Kaznatcheev, IMO faculty and post-doctoral associates for helpful discussions. Support for this work was provided by the Moffitt Physical Science and Oncology Network (PS-ON) NIH grant, U54CA193489.

Author information
Hemachander Subramanian, Department of Physics, National Institute of Technology, Durgapur, West Bengal, India. Robert A. Gatenby, Integrated Mathematical Oncology Department, Cancer Biology and Evolution Program, H. Lee Moffitt Cancer Center and Research Institute, 12902 USF Magnolia Dr, Tampa, Florida, USA.

Contributions
R.G. and H.S. conceptualized the problem. H.S. analyzed and arrived at the explanations with the help of R.G. R.G. and H.S. co-wrote the paper. Corresponding author: Hemachander Subramanian.

Competing interests
The authors declare no competing interests.

Cite this article
Subramanian, H. & Gatenby, R. A. Evolutionary advantage of anti-parallel strand orientation of duplex DNA. Sci Rep 10, 9883 (2020).
149
AFWAL-TR-80-3102

USE OF STRAINED COORDINATE PERTURBATION METHOD IN TRANSONIC AEROELASTIC COMPUTATIONS

P. GURUSWAMY
PURDUE UNIVERSITY
WEST LAFAYETTE, INDIANA 47907

OCTOBER 1980

TECHNICAL REPORT AFWAL-TR-80-3102
Final Report for period November 1979 - June 1980

Approved for public release; distribution unlimited.

FLIGHT DYNAMICS LABORATORY
AIR FORCE WRIGHT AERONAUTICAL LABORATORIES
AIR FORCE SYSTEMS COMMAND
WRIGHT-PATTERSON AIR FORCE BASE, OHIO 45433

NOTICE
When Government drawings, specifications, or other data are used for any purpose other than in connection with a definitely related Government procurement operation, the United States Government thereby incurs no responsibility nor any obligation whatsoever; and the fact that the government may have formulated, furnished, or in any way supplied the said drawings, specifications, or other data, is not to be regarded by implication or otherwise as in any manner licensing the holder or any other person or corporation, or conveying any rights or permission to manufacture, use, or sell any patented invention that may in any way be related thereto.

This report has been reviewed by the Office of Public Affairs (PA) and is releasable to the general public, including foreign nations. This technical report has been reviewed and is approved for publication.

LAWRENCE J. HUTTSELL, Aerospace Engineer, Aeroelastic Group
FREDERICK A. PICCHIONI, Lt Col, USAF, Ch, Analysis and Optimization Branch
FOR THE COMMANDER: RALPH L. KUSTER, Jr, Col, USAF, Chief, Structures and Dynamics Division

Copies of this report should not be returned unless return is required by security considerations, contractual obligations, or notice on a specific document.

REPORT DOCUMENTATION PAGE
Report No.: AFWAL-TR-80-3102. Title: Use of Strained Coordinate Perturbation Method in Transonic Aeroelastic Computations. Author: P. Guruswamy. Performing organization: School of Aeronautics and Astronautics, Purdue University, West Lafayette, Indiana 47907. Grant: AFOSR-78-3523. Report period: November 1979 - June 1980.
Abstract: Use of the strained coordinate perturbation method in transonic aeroelastic computations is investigated. The main objective is to reduce the computational time required for transonic aeroelastic calculations. Based on the strained coordinate perturbation equations, a procedure is presented to generate steady state initial conditions for the LTRAN2 transonic code. The effect of the strained coordinate perturbation on transonic computations is studied. Results are illustrated for Mach number, angle of attack and thickness variations. The computer time required by the present procedure and the direct LTRAN2 computations are compared. There is a considerable saving in the computer time by the present procedure. The application of the strained coordinate perturbation method in computing transonic divergence speeds of a slender straight wing is presented.
Results are obtained for a 10% thick parabolic arc at varying Mach number. Transonic divergence speeds obtained by the present method are compared with those given by subsonic theory. A computer program for creating the steady state initial conditions based on the strained coordinate perturbation method is presented. This program is compatible with LTRAN2.

FOREWORD
This report was prepared by P. Guruswamy of the School of Aeronautics and Astronautics of Purdue University under AFOSR Grant 78-3523A, "Application of Time-Accurate Transonic Aerodynamics to Aeroelastic Problems". The report covers work conducted from November 1979 to June 1980. The research was administered by Lawrence J. Huttsell (AFWAL/FIBRC) of the Structures and Dynamics Division, Flight Dynamics Laboratory, Wright-Patterson Air Force Base, Ohio. The advice of Professor T. Y. Yang of Purdue University and Dr. J. Olsen of the Flight Dynamics Laboratory is also appreciated.

TABLE OF CONTENTS
I    INTRODUCTION ... 1
II   STRAINED COORDINATE PERTURBATION EQUATIONS ... 8
III  APPLICATION TO STEADY AND UNSTEADY COMPUTATIONS ... 12
     (a) Mach Number Variation ... 12
     (b) Angle of Attack Variation ... 16
     (c) Thickness Variation ... 18
IV   APPLICATION TO THE TRANSONIC DIVERGENCE PROBLEM ... 22
     (a) Formulation of Divergence Equations ... 22
     (b) Results ... 25
V    CONCLUDING REMARKS ... 32
     REFERENCES ... 33
     APPENDIX ... 35

LIST OF ILLUSTRATIONS
1  Effect of Mach Number Perturbation for 6% Thick Parabolic Arc on Steady Pressure Curves at M = 0.86 ... 14
2  Effect of Mach Number Perturbation for 6% Thick Parabolic Arc on Unsteady Pressure Curves at M = 0.86 ... 15
3  Effect of Angle of Attack Perturbation for 6% Thick Parabolic Arc on Steady Pressure Curves at α = 0.8° ... 17
4  Effect of Thickness Perturbation for 6.5% Thick Parabolic Arc on Steady Pressure Curves at M = 0.85 and α = 0.0° ... 19
5  Effect of Thickness Perturbation for 6.5% Thick Parabolic Arc on Unsteady Pressure Curves at M = 0.85 and α = 0.0° ... 21
6  Variation of Lift Coefficient with Angle of Attack for 10% Thick Parabolic Arc at M = 0.80 ... 26
7  Variation of Moment (About Leading Edge) Coefficient with Angle of Attack for 10% Thick Parabolic Arc at M = 0.80 ... 27
8  Effect of Mach Number on Static Divergence Dynamic Pressure for a Straight Slender Wing at Various Positions of Elastic Axis ... 30

LIST OF TABLES
1  Aerodynamic Coefficients for 10% Thick Parabolic Arc at Various Mach Numbers ... 29
NOMENCLATURE
a_h    distance between midchord and elastic axis in semichords, positive toward the trailing edge
c      full chord length of airfoil
C_l    lift coefficient
C_m    moment coefficient about leading edge
C_lα   slope of the lift coefficient curve
C_mα   slope of the moment coefficient curve
GJ     torsional rigidity of wing section
k_c    reduced frequency defined as ωc/U
k      (γ+1)M∞, transonic flow parameter
M∞     free stream Mach number
p      ratio of the semispan to chord
q      dynamic pressure defined as 1/2 ρU²
U      free stream velocity
α      angle of attack
β      1/(1 − M∞²)^(1/2), Prandtl-Glauert number
γ      ratio of specific heats
δx_s   shock movement in chords per unit perturbation
ε      perturbation parameter
ρ      free stream air density
τ      ratio of the maximum airfoil thickness to chord
φ      disturbance velocity potential
ω      circular frequency of oscillation
( )₀   denotes parameters corresponding to base flow
( )₁   denotes parameters corresponding to calibration flow

SECTION I
INTRODUCTION

In recent years there is an increasing trend that aircraft be operated at a speed in high subsonic or transonic regimes. In transonic flight, small-amplitude oscillations of a wing can produce large variations in the aerodynamic forces and moments acting on that wing. Moreover, phase differences between the motion and the resulting forces and moments can be large. These characteristics tend to increase the probability of encountering aeroelastic instabilities. Thus, the transonic regime has become a sensitive one for aeroelastic analysis. Studies in transonic aeroelasticity have begun recently. In References 1 to 5, Yang et al. have conducted various aeroelastic studies of airfoils in two dimensional transonic flows. In Reference 6, Eastep and Olsen have conducted the flutter analysis of a rectangular wing in three dimensional transonic flows. A state-of-the-art review on the aeroelastic applications of transonic aerodynamics is given by Ashley in Reference 7.

In transonic aeroelastic studies the computation of aerodynamic forces is a major task. Previous transonic aeroelastic studies have shown that the computer time required to compute the aerodynamic data is quite high. It is due to the fact that the transonic aerodynamics depends on flow parameters such as Mach number, angle of attack, airfoil configuration, reduced frequency, etc. in a nonlinear fashion. Because of this, the aerodynamic computation has to be repeated whenever any one of the above flow parameters is varied. Also computations become more complicated when shocks are present.

The linear subsonic and supersonic aerodynamic equations yield closed form relations between aerodynamic forces and flow parameters. On the other hand, nonlinear transonic aerodynamic equations do not directly yield any such relations. Lack of such closed form relations has put restriction on certain transonic aeroelastic studies. For example, in order to compute the transonic divergence speed of a wing, it is required to know the aerodynamic forces as a function of angle of attack. Recently some techniques based on perturbation method have been developed to avoid the repetition of the aerodynamic computation when flow parameters are varied. Such techniques have also led to simple relations between aerodynamic forces and flow parameters. One such technique is to use the concept of the strained coordinate perturbation procedure in obtaining the transonic aerodynamic loads.
The basic concept of the method of strained coordinate perturbation is to minimize the actual number of separate calculations required in a particular application by extending, over some parametric range, the usefulness of each individual solution determined by some computationally-expensive procedure. Coordinate straining introduced into transonic aerodynamics as a means of accounting properly for the movement of discontinuities (shock waves) due to changes in some geometric or flow parameter is shown to result in an accurate perturbation predictions in the vicinity of the discontinuity. Detailed studies of using this method as an effective tool for reducing the computational requirements in transonic aerodynamics has begun. 2 The basic concepts of coordinate straining is given, for example, in References 8 and 9. An evaluation of the strained coordinate perturbation procedure as applied to nonlinear subsonic and transonic flows is given by Stahara et al-in Reference 10. In Reference 10. a procedure of defining a unit perturbation by employing two nonlinear solutions which differ from one another by a nominal change in some geometric or flow parameter, and then using that unit perturbation to predict a family of related nonlinear solutions over a range of parameter variation is discussed. Coordinate straining is used in determining the unit perturbation to account for movement of shocks due to the perturbation. Based on full potential solutions, perturbation results are presented for flows past both isolated airfoils and compressor cascades involving a variety of flow and geometry parameter changes in transonic regime. Comparisons with corresponding "exact" nonlinear solutions indicate a good accuracy and range of validity of such a method. In Reference 11, Nixon illustrated a procedure of perturbing a transonic flow with shock wave. The method is based on the use of a distorted airfoil as the initial case rather than the real physical airfoil. The distortion is chosen such that the shock location is unchanged by the perturbation. The distorted airfoil is obtained by the use of a strained coordinate system. The procedure yielded an algebraic similarity relation between related airfoils with shock waves at different locations. Results are illustrated for a NACA 0012 airfoil and 10% thick parabolic arc at transonic M ach numbers. The pressure distributions around the perturbed airfoils computed by using both the extended integral equation method and the perturbation method compare well. 3 In Reference 12, Nixon extended the fundamental concept of Reference 11 to two-dimensional lifting flows and three-dimensional lifting flows with multiple, intersecting shock waves. The application of strained coordinate perturbation method in computing the steady transonic aerodynamic loads when Mach number and angle-of-attack are varied is illustrated. From this method, transonic steady-state aerodynamic solution at any given value of a parameter can be expressed as a function of the solutions obtained at two different values of the same parameter. The solutions required are termed as base and calibration solutions, respectively. The procedure is based on the assumptions that shocks are neither created nor destroyed within the range of variation of the parameter (Mach number or angle-of-attack) and the perturbation of the parameter is small. Using the method, simple algebraic expressions were derived for lift, pitching moment and drag coefficients in transonic regime. 
The method was illustrated for a 10% thick parabolic arc and a NACA 640410 airfoil at transonic Mach numbers. In Reference 13, Nixon compared the strained coordinate interpolation method used for transonic flow in Reference 12 with normal interpolation/extrapolation procedures. It was found that both methods are essentially equivalent in smooth regions of the solution. However, the normal linear extrapolation may not be applicable in the region just behind the shock wave. The strained coordinate method does move the shock and its associated shock foot singularity to the correct location and scales the strength of the singularity according to linear interpolation. & 44 Based on the strained coordinate perturbation procedure the computational time required for transonic aeroelastic studies can considerably be reduced when parameters such as Mach number, angle-of-attack, thickness, etc. are varied. Using the developments, for example, in Reference 12, the repetition of steady state aerodynamic computations can be avoided when flow parameters are varied. Thus, there will be considerable reduction in the computational time for aeroelastic studies using the steady state computations. For example, in computing static divergence speed of a wing only steady state aerodynamic data is required. Transonic unsteady computations based on time integration, indicial and harmonic methods require fairly accurate steady state initial conditions. It is noted that in both indicial and harmonic methodsthe unsteady solution is treated as a small linear perturbation about a nonlinear steady state solution. When flow parameters are varied in unsteady computationsthe required steady state initial conditions can be economically obtained by using the strained coordinate perturbation method. Certain aeroelastic computations require the aerodynamic forces as a function of flow parameters. For example, in computing static divergence speed of a wing, it is necessary to know the aerodynamic forces as a function of angle-of-attack. Such functions for transonic regime can be obtained by the strained coordinate perturbation method. In this study, some preliminary applications of the strained coordinate perturbation procedure in transonic aeroelasticity are investigated. The main emphasis is on reducing the computational time in transonic aeroelastic 5 studies. The developments in Reference 12 are used for this purpose. First, the use of the strained coordinate perturbation method in computing the steady and unsteady aerodynamic pressure distributions for varying flow parameters were studied. As a computational example, a parabolic arc was selected. Variations of angle-of-attack, Mach number and thickness were considered. The aerodynamic solutions at base and calibration values of the flow parameter were computed by using the transonic code LTRAN2 developed by Ballhaus and Goorjian (Reference 14). For a 6% parabolic arc at Mach number 0.8, the steady and unsteady aerodynamic pressure coefficients were computed at angle-of-attack 0.80 by strained coordinate perturbation method. This was based on the base and calibration angles-of-attack equal to 0.4' and 0.60, respectively. The aero-dynamic results obtained at angle-of-attack 0.80 by strained coordinate perturbation method were compared with those directly obtained by LTRAN2. The comparison is good. For the 6% parabolic arc at zero angle-of-attack, the steady and unsteady aerodynamic results were computed at Mach number 0.86. 
This was based on the base and calibration Mach numbers equal to 0.854 and 0.856, respectively. Present results are compared with those directly obtained by LTRAN2. The comparison is good. For a 6.5% thick parabolic arc at Mach number 0.85, aerodynamic pressure coefficients were computed by the strained coordinate method. The base and calibration thicknesses were 6% and 7%, respectively. Present results compare well with those obtained directly by LTRAN2.

Finally, the use of the strained coordinate perturbation method in computing the transonic divergence dynamic pressure of a typical slender straight wing with conventional airfoil is illustrated. Since two-dimensional aerodynamics were used in obtaining the aerodynamic loads, the aeroelastic equations were derived by using strip theory. The solution for divergence dynamic pressure was obtained by Rayleigh's method (Reference 15). As a computational example, a 10% thick parabolic arc airfoil section was selected. By using the aerodynamic force coefficients computed by the strained coordinate perturbation method, the effect of Mach number on transonic divergence dynamic pressure was studied. These results were compared with those obtained by linear subsonic aerodynamic theory.

Based on the present procedure a computer program to post-process LTRAN2 aerodynamic data for the strained coordinate perturbation method was written. The listing with user's manual is given. This program creates new steady state initial conditions for LTRAN2 by using the base and calibration steady state solutions obtained by LTRAN2 for varying the flow parameters.

SECTION II
STRAINED COORDINATE PERTURBATION EQUATIONS

In this section, the strained coordinate perturbation equations are presented. The equations are based on the assumptions that shocks are neither created nor destroyed during perturbation and the order of the perturbation is small. The main objective is to obtain from two or more solutions an algebraic relation that connects the flow variables for a range of one or more parameters, thus leading to a rapid computation of these related flows. The effect of shock movement during perturbation is accounted for by using the procedure given in Reference 12. Equations are presented for two-dimensional transonic inviscid flow governed by small-disturbance conditions. The basic steady-state equation in a scaled form is given by

    φ_xx + φ_z̄z̄ = k φ_x φ_xx     (1)

where (x, z̄) is a Cartesian coordinate system, with x aligned with the airfoil chord and related to the physical coordinate system (x, z) by the transformation

    x = x,   z = β z̄     (2)

where, if M∞ is the freestream Mach number, then

    β = 1.0/(1 − M∞²)^(1/2)     (3)

The potential φ(x, z̄) is expanded as a series in the small parameter ε such as

    φ(x, z̄) = φ₀(x′, z̄) + ε φ₁(x′, z̄) + ....     (4)

where x′ is the strained coordinate. The shock is assumed to be normal to the freestream, thus only x-coordinate straining is required. Following the discussion given in Reference 12 the strained coordinate system is defined by

    x = x′ + ε δx_s s(x′)     (5)

where, if x′_s is the location of the shock in the (x′, z̄) coordinates, then

    s(x′) = x′(1 − x′) / [x′_s(1 − x′_s)],   0 ≤ x′ ≤ 1     (6a)
    s(x′) = 0,   x′ > 1 or x′ < 0     (6b)

Figure 3. Effect of Angle of Attack Perturbation for 6% Thick Parabolic Arc on Steady Pressure Curves at α = 0.8°.

From unsteady computations it was found that the present method gives results identical to those obtained by LTRAN2 directly. This is because of the excellent comparison obtained between the two corresponding steady state curves.
It was noticed that to obtain unsteady results for 3 cycles, the present method required about 40% of the computer time required for direct LTRAN2 computations. Based on the same base and calibration angles-of-attack, steady and unsteady results can be obtained for angles-of-attack, say, between 0.0° and 1.0° from the present procedure. Thus, there can be considerable saving in the computer time if aeroelastic computations have to be conducted in this range.

c. Thickness Variation

A parabolic arc airfoil at Mach number 0.85 with zero angle-of-attack was considered. Based on base and calibration maximum thickness to chord ratios equal to 0.06 and 0.07, respectively, results for maximum thickness to chord ratio equal to 0.065 were obtained by the strained coordinate perturbation method. They are compared with those obtained directly from LTRAN2.

Figure 4 shows the steady state pressure coefficient curve obtained by the strained coordinate perturbation method for τ = 0.065 by using Equation 7. The values for ε and ε₀ were obtained from Equations 10a and 10b as 0.005 and 0.01, respectively. The amount of shock movement ε₀δx_s between the base and calibration thickness ratios was equal to 0.08 chord. The computational time required to obtain this curve was about 5 seconds on the CYBER 74 computer.

Figure 4. Effect of Thickness Perturbation for 6.5% Thick Parabolic Arc on Steady Pressure Curves at M = 0.85 and α = 0.0° (finite difference vs. present method, τ₀ = 6%, τ₁ = 7%; C_p vs. x/c).

In Figure 4, the steady state pressure curve obtained directly from LTRAN2 is also shown. The agreement between the two curves is good. There is some discrepancy near the shock. It may be due to lack of a fine grid near the shock. Based on the base and calibration steady state solutions obtained at thickness ratios 0.06 and 0.07, respectively, initial conditions for thickness ratio equal to 0.065 were computed by the strained coordinate perturbation Equation 7. This was carried out by using a computer program given in the Appendix. Using this steady state initial condition, unsteady results were computed by the time integration procedure of LTRAN2. The reduced frequency k_c was assumed as 0.1. Figure 5 shows two sets of unsteady pressure curves obtained by using present initial conditions and initial conditions directly obtained by LTRAN2. The curves were plotted at non-dimensional time ωt equal to 18.06 radians. In Figure 5, it can be observed that results obtained by the two methods agree well. Slight discrepancies in the initial steady state conditions obtained by the present method do not have any influence on the unsteady pressure curve. Similarly, the unsteady lift and moment coefficients compared well between the two methods. It was noticed that to obtain unsteady results for 3 cycles, the present method required about 60% of the computer time required for the direct LTRAN2 computations. Based on the same base and calibration thickness ratios, steady and unsteady results can be obtained for thickness ratios, say, between 0.05 and 0.08 from the present procedure. Thus, there can be considerable saving in the computer time if additional aeroelastic computations have to be conducted in this range.

Figure 5. Effect of Thickness Perturbation for 6.5% Thick Parabolic Arc on Unsteady Pressure Curves at M = 0.85 and α = 0.0° (pitching about midchord, k_c = 0.1; exact method vs. present method, τ₀ = 0.06, τ₁ = 0.07; upper and lower surface C_p vs. x/c).
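The straining of Equation (5) and the unit-perturbation idea can be summarized in a few lines of code. The sketch below (Python rather than the report's FORTRAN) rebuilds a perturbed steady solution from a base and a calibration solution; it follows the update rule that the appendix listing appears to encode, but the function and variable names are mine, and Equation 7 itself falls on a page not reproduced here, so this is an illustrative reconstruction rather than the report's exact formula.

```python
import numpy as np

def straining(x, xs0):
    """Straining function s(x) of Eq. (6): x(1-x)/(xs0(1-xs0)) on the chord, zero elsewhere."""
    s = np.zeros_like(x, dtype=float)
    on_chord = (x >= 0.0) & (x <= 1.0)
    s[on_chord] = x[on_chord] * (1.0 - x[on_chord]) / (xs0 * (1.0 - xs0))
    return s

def perturbed_solution(x, phi0, phi1, xs0, xs1, eps, eps0):
    """Estimate the solution at perturbation level eps from the base solution phi0
    (parameter value 0) and the calibration solution phi1 (parameter value eps0),
    moving the shock from xs0 toward xs1 in proportion to eps/eps0."""
    dxs = xs1 - xs0              # shock displacement between base and calibration
    ra = eps / eps0              # fraction of the unit perturbation to apply
    s = straining(x, xs0)
    # base solution re-expressed in the strained frame, plus the scaled unit perturbation
    return phi0 * (1.0 - ra * dxs * s) + ra * (phi1 - phi0 * (1.0 - dxs * s))
```

With eps = eps0 the expression returns the calibration solution unchanged, and with eps = 0 it returns the base solution, which is the sanity check implied by the comparisons against direct LTRAN2 runs.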
SECTION IV
APPLICATION TO THE TRANSONIC DIVERGENCE PROBLEM

Based on the strained coordinate perturbation method, the computational time required for transonic aeroelastic studies can considerably be reduced. One such study where this method can be used is in computing the transonic static divergence speeds of slender straight wings. In this study, the transonic divergence dynamic pressure of a typical slender straight wing with conventional airfoil is computed by using the method of strained coordinates. The required base and calibration aerodynamic solutions were obtained by LTRAN2. Since two-dimensional aerodynamics was used in obtaining the aerodynamic loads, the aeroelastic equations were derived by using strip theory. The solution for divergence dynamic pressure was obtained by Rayleigh's method (Reference 15). As a computational example, a 10% thick parabolic arc airfoil section was selected. First, the use of the method of strained coordinates was illustrated at a transonic Mach number of 0.8 for varying angle-of-attack. Then the effect of Mach number on transonic divergence dynamic pressure was studied for various positions of the elastic axis. These results were compared with those obtained by linear subsonic theory.

a. Formulation of Divergence Equations

It is assumed that the wing is slender and straight so that the three dimensional effects of the aerodynamics can be neglected and that strip theory can be used in deriving an expression for the static divergence dynamic pressure. Assuming that the wing torsional deformation pattern is invariable with respect to the load distribution and using Rayleigh's method, an expression for the static divergence dynamic pressure can be written as

    q_D = GJ ∫₀^p (df/dξ)² dξ / { c⁴ ∫₀^p [C_lα (1 + a_h)/2 + C_mα] f² dξ }     (12)

where q_D = 1/2 ρU² is the divergence dynamic pressure; ρ, density of the air; U, flight speed; GJ, torsional rigidity of the wing section assumed to be constant along the span; p, ratio of the span to chord; f, divergence mode shape expressed as a function of ξ; ξ, the ratio of the distance measured in full chords from the root to a span station; c, full chord length of the wing assumed to be constant along the span; C_lα, slope of the aerodynamic lifting force with respect to the angle-of-attack; C_mα, slope of the aerodynamic pitching moment (measured about the leading edge) with respect to the angle-of-attack; and a_h, position of the elastic axis measured in semichords from the midchord (positive towards the trailing edge).

For the subsonic and supersonic flows the aerodynamic equations are linear and the aerodynamic force coefficients, C_l and C_m, depend on the angle-of-attack linearly. Hence, Equation 12 can be integrated once an expression for f(ξ) is assumed. On the other hand, for transonic flows, the aerodynamic equations are non-linear and the aerodynamic force coefficients, C_l and C_m, depend on the angle-of-attack in a non-linear fashion. Hence, it is required to express C_l and C_m as a function of angle-of-attack in order to integrate Equation 12. Such an expression can be derived for transonic flows
Assuming that perturbation in the angle-of-attack is small and the shock is neither created nor destroyed within the range of the perturbation, Cz and Cm can be expressed as (See Equations Ila and lNb) C9 = C + (E/c ° ) (CZ Ci ) (13) C o=C C M = C m + (E/C0) (Cm C m ) (14) where Ck and Cko are calibration and base lift coefficients, respectively; Cm, and Cm ° are calibration and base moment coefficients (measured about the leading edge), respectively; e - a-o; co 01 -ao; 011 is the calibration angle-of-attack; and ao is the base angle-of-attack. Equations 13 and 14 lead to simple expression for Cz. and Cm as C- = (Ckl C 0 )/(a, - ao) (15) Cm" = (Cm, Cm 0 )/(a - ao) (16) Since CiO and Cm given by Equations 15 and 16 are independent of a, Equation 12 can be integrated for a known function of f(E). This function should approximately represent the divergent mode that satisfies the boundary conditions, namely, f(O) 0 and f'(p) 0. One such function is given by f(E) = 2C/p -(C/p) 2 (17) 24 Substituting Equations 15, 16 and 17, into 12 and integrating yields qD = 15/6A (18) where qD = qD p2 c/GJ, non-dimensional static divergence dynamic pressure and A = (C, -CRO) (1 + ah)/2(a1 - ao + (Cm -Cm0)/(a - OL C ) (19) b. Results The Mach numbers considered in this study are in transonic range for the l0 thick parabolic arc. First, a case with angle-of-attack varying from 00 to 1.00 at Mach number 0.80 was considered to illustrate the use of the method of strained coordinates. Lift and moment coefficients were computed by both the finite difference and the strained coordinate methods and they are compared. Figure 6 shows the plots of lift coefficient Cz versus angle-of-attack obtained by both finite difference and strained coordinate methods. The curve for strained coordinate method was based on base and calibration angles-of-attack equal to 0.2' and 0.8', respectively. The corresponding results for the pitching moment coefficient (about the leading edge) are shown in Figure 7. In both Figures 6 and 7, the aerodynamic computations at base and calibration angles-of-attack were also made by the successive line over relaxation method. Results in Figures 6 and 7 show that the method of strained coordinates agree well with the finite difference method. The level of agreement is better for lift coefficients when compared to that for moment coefficients. The total computational time required by the method of strained coordinates was about 1/5 of that required for the finite difference method. Also the method of strained coordinates assumes simple relations between the aerodynamic force 25 04 PARABOLIC ARC (10%) , M=O8 FINITE DIFFERENCE METHOD - --STRAINED COORDINATE METHOD z O. 3-zOJ CALIBRATION o -02 - /" oJ / I . '--e A S E I- I I I 0 0 0.20 040 06 080 1.00 ANGLE OF ATTACK Figure 6. Variation of Lift Coefficient With Angle of Attack for 101 Thick Parabolic Arc at M - 0.80. 26 PARABOLIC ARC (10%), M = 0.8 -12 - FINITE DIFFERENCE METHOD --- STRAINED COORDINATE METHOD w 8 CALIBRATION , -4-0. 0 00 o / 0// ANGL O -/ -BASE . . O2" 0.4 0"6 080 1.00 ANGLE OF ATTACK Figure 7. Variation of Moment (About Leading Edge) Coefficient With Angle of Attack for 10% Thick Parabolic Arc at M = 0.80. 27 coefficients and the angle-of-attack. The application of the method of strained coordinates in computing the transonic divergence characteristics was considered next. The characteristics of the static divergence dynamic pressures for varying Mach number at various values of the position of elastic axis were obtained. 
The base and the calibration angles were assumed as 0.2° and 0.6°, respectively. The Mach numbers considered were 0.76, 0.78, 0.79, 0.80, 0.805, and 0.81. Table 1 shows the lifting force and pitching moment (about the leading edge) coefficients obtained at the base and calibration angles-of-attack for the six Mach numbers. It is observed in the table that both coefficients increase non-linearly with increase in Mach number. In the same table the corresponding coefficients obtained by linear aerodynamic theory are also given for reference. The differences in the values are mainly due to the presence of shocks. Based on the aerodynamic coefficients given in Table 1 and using Equation 18, static divergence dynamic pressures were computed. The values for the position of the elastic axis were assumed as 0.0, -0.1, and -0.2. Figure 8 shows the plots of the divergence dynamic pressure parameter q̄_D versus Mach number. In the same figure the corresponding results obtained by the subsonic theory are also shown. In Figure 8, it is observed that the static divergence dynamic pressure obtained by transonic aerodynamics increased with the increase of Mach number. The increase is more rapid at higher Mach numbers. Also the static divergence dynamic pressure increases as the elastic axis moves towards the leading edge from the midchord. The curves shift to the left as the elastic axis moves toward the leading edge from the midchord.

TABLE 1. AERODYNAMIC COEFFICIENTS FOR 10% THICK PARABOLIC ARC AT VARIOUS MACH NUMBERS

Mach number   Case   C_l at 0.2°   C_l at 0.6°   C_m at 0.2°   C_m at 0.6°
0.760         1      0.03893       0.11710       -0.01139      -0.03426
0.760         2      0.01074       0.03222       -0.00268      -0.00806
0.780         1      0.04308       0.13033       -0.01314      -0.03994
0.780         2      0.01116       0.03347       -0.00280      -0.00837
0.790         1      0.04699       0.14380       -0.01503      -0.04681
0.790         2      0.01139       0.03416       -0.00285      -0.00854
0.800         1      —             —             -0.01945      -0.06275
0.800         2      0.01164       0.03491       -0.00291      -0.00873
0.805         1      0.06222       0.19965       -0.02382      -0.07989
0.805         2      0.01177       0.03530       -0.00294      -0.00882
0.810         1      0.07322       0.25171       -0.03050      -0.11251
0.810         2      0.01190       0.0357        -0.00298      —
Case 1: transonic method. Case 2: subsonic method.

Figure 8. Effect of Mach Number on Static Divergence Dynamic Pressure for a Straight Slender Wing at Various Positions of Elastic Axis (10% parabolic arc; base α = 0.2°, calibration α = 0.6°; transonic theory vs. subsonic theory; q̄_D versus Mach number).

Figure 8 shows that the results obtained by subsonic theory do not agree with those obtained by the present transonic theory, especially at higher Mach numbers. Also the agreement becomes worse as the elastic axis moves towards the leading edge. The discrepancies are mainly due to the presence of shocks, which are incorporated only in the transonic theory. It is noted that the rapid changes in the divergence dynamic pressures at the higher Mach numbers are due to the movement of the shock towards the trailing edge. The increase in the divergence dynamic pressures with the increase of Mach number can further be explained as follows. The center of pressure (CP) moves from the quarter chord towards the midchord with increasing Mach number. Thus the CP moves towards the elastic axis, which is located near the midchord for this study. As a result, the divergence dynamic pressure increases (see Equation 8-40 of Reference 15).
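Equations 15 through 19 reduce the divergence estimate to a little arithmetic on the base and calibration force coefficients, so Table 1 can be turned into Figure 8-style numbers in a few lines. The Python sketch below uses the equations as reconstructed above; the function names are mine, and the report does not state whether the slopes behind Figure 8 were taken per degree or per radian, so radians are assumed here.

```python
import math

def divergence_parameter(cl0, cl1, cm0, cm1, alpha0, alpha1, ah):
    """A of Equation (19): C_l_alpha*(1 + a_h)/2 + C_m_alpha, with the slopes of
    Equations (15) and (16) formed from base (0) and calibration (1) solutions."""
    cl_alpha = (cl1 - cl0) / (alpha1 - alpha0)   # Eq. (15)
    cm_alpha = (cm1 - cm0) / (alpha1 - alpha0)   # Eq. (16)
    return cl_alpha * (1.0 + ah) / 2.0 + cm_alpha

def nondimensional_divergence_pressure(A):
    """q_D_bar = 15/(6A), Equation (18), for the assumed mode f = 2(xi/p) - (xi/p)^2."""
    return 15.0 / (6.0 * A)

# Example with the transonic (Case 1) coefficients of Table 1 at M = 0.76,
# elastic axis at the midchord (a_h = 0), angles converted from degrees to radians:
deg = math.pi / 180.0
A = divergence_parameter(cl0=0.03893, cl1=0.11710, cm0=-0.01139, cm1=-0.03426,
                         alpha0=0.2 * deg, alpha1=0.6 * deg, ah=0.0)
q_bar = nondimensional_divergence_pressure(A)
```

Repeating this over the six Mach numbers and the three elastic-axis positions reproduces the qualitative trend of Figure 8: with the Table 1 values, the leading-edge moment slope grows faster with Mach number than the lift term, so A shrinks and q̄_D rises.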
SECTION V
CONCLUDING REMARKS

Based on the present study the following concluding remarks may be made.
(1) The steady state pressure curves obtained by using the strained coordinate perturbation method compare well with those directly obtained by LTRAN2. Some small discrepancies obtained near the shock are due to the lack of a fine enough grid.
(2) The computational time required to compute the steady state pressure curve by the strained coordinate perturbation method for a known base and calibration solution is about 5 seconds on the CYBER 74 computer.
(3) Unsteady results based on the initial conditions obtained by the present method compare well with those obtained directly by LTRAN2.
(4) The present procedure shows about 40 to 50% saving in the computer time for typical unsteady computations required in aeroelastic analysis.
(5) Based on aerodynamic forces computed by the strained coordinate perturbation equations, transonic divergence speeds of slender straight wings can be computed.
(6) Based on the present computations it is found that the divergence speed of a slender straight wing with 10% thick parabolic arc section increases with increase in Mach number. On the other hand, linear theory predicts a different behavior.
(7) The present procedure can be extended to three dimensional steady and unsteady transonic computations by using the corresponding developments in strained coordinate methods.

REFERENCES
1. Yang, T.Y., Striz, A.G., and Guruswamy, P., "Flutter Analysis of Two-Dimensional and Two Degree of Freedom Airfoils in Small Disturbance Unsteady Transonic Flow", AFFDL-TR-78-202, December 1978.
2. Yang, T.Y., Guruswamy, P., and Striz, A.G., "Aeroelastic Response Analysis of Two-Dimensional Single and Two-Degree-of-Freedom Airfoils in Low Frequency, Small Disturbance Unsteady Transonic Flow", AFFDL-TR-79-3077, June 1979.
3. Yang, T.Y., Guruswamy, P., Striz, A.G., and Olsen, J.J., "Flutter Analysis of a NACA 64A006 Airfoil in Small Disturbance Transonic Flow", Journal of Aircraft, Vol. 17, No. 4, April 1980.
4. Yang, T.Y., Guruswamy, P., and Striz, A.G., "Flutter Analysis of a Two-Dimensional and Two-Degree-of-Freedom Supercritical Airfoil in Small Disturbance Unsteady Transonic Flow", AFWAL-TR-80-3010, March 1980.
5. Yang, T.Y., Striz, A.G., and Guruswamy, P., "Flutter Analysis of a Two-Dimensional and Two-Degree-of-Freedom MBB A-3 Supercritical Airfoil in Two-Dimensional Transonic Flow", AIAA Paper No. 80-0736, May 1980.
6. Eastep, F.E., and Olsen, J.J., "Transonic Flutter Analysis of a Rectangular Wing with Conventional Airfoil Section", AIAA Paper No. 79-1632, August 1979.
7. Ashley, H., "On the Role of Shocks in the Sub-Transonic Flutter Phenomenon", AIAA Paper 79-0765, April 1979.
8. Lighthill, M.J., "A Technique for Rendering Approximate Solutions to Physical Problems Uniformly Valid", Philos. Mag., Vol. 40, 1949, pp. 1179-1201.
9. Van Dyke, M., Perturbation Methods in Fluid Mechanics, The Parabolic Press, California, 1975.
10. Stahara, S.S., Crisalli, A.J., and Spreiter, J.R., "Evaluation of a Strained Coordinate Perturbation Procedure: Nonlinear Subsonic and Transonic Flows", AIAA Paper 80-0339, January 1980.
11. Nixon, D., "Perturbation of a Discontinuous Transonic Flow", AIAA Journal, Vol. 16, January 1978, pp. 47-52.
12. Nixon, D., "Perturbations in Two- and Three-Dimensional Transonic Flows", AIAA Journal, Vol. 16, July 1978, pp. 699-709.
13. Nixon, D., "Observations on the Strained Coordinate Method for Transonic Flows", AIAA Journal, Vol. 18, March 1980, pp. 341-342.
14.
Ballhaus, W.F., and Goorjian, P.M., "Implicit Finite Difference Computations of Unsteady Transonic Flows About Airfoils", AIM Journal, Vol. 15, December 1977, pp. 1728-1735. 15. Bisplinghoff, R.L., Ashley, H., and Halfman, R.L., Aeroelasticity, Addison Wesley Publishing Company, Reading Mass., 1955, Chapter 8. 34 APPENDIX A computer program to create the steady state initial conditions by using the strained coordinate perturbation method is presented. This program is compatible for LTRAN2. This program can create a new steady state initial condition based on base and calibration steady state initial conditions obtained by LTRAN2. a. Description of the INPUT (1) Initial DATA Card (415, 4FI0.4) one card to define following parameters. Columns Description Variable 1-5 Number of mesh points in LMAX vertical direction 6-10 Number of mesh points in JMAX horizontal direction 11-15 Mesh point corresponding to JLE leading edge 16-20 Mesh point corresponding to JTE trailing edge 21-30 Distance of the shock from XSO leading edge measured in chords for base flow 31-40 Distance of the shock from XS1 leading edge measured in chords for calibration flow 41-50 Value of the perturbation EP parameter c (See Section II) 51-60 Value of the perturbation EPO parameter c o (See Section II) (2) Mesh Card (8EI0.4) Cards to define JMAX values of the X mesh points. 35 (b) Description of Logical Files Used TAPE 1 Contains steady state initial conditions from LTRAN2 for base flow on INPUT TAPE 2 Contains steady state initial conditions from LTRAN2 for calibration flow on INPUT TAPE 3 Contains the steady state initial conditions to LTRAN2 for the current flow on OUTPUT 36 PROGRAM MAIN(INPUT, OUTPUT. TAPE5=INPUT, TAPES=OUTPUT. TAPE1, 1 TAPEe. TAPE3, TAPE4) DIMENSION PU(119).PL(119),P(79,119),DUMYI(11)DUIY2(119). 1 X(1lS)PTX(119) C C PROGRAM TO CREATE STEADY STATE INTIAL CONDITIONS FOR LTRAN2 C BY STRAINED COORDINATE PERTURBATION METHOD C JMAX=NUMBER OF HORIZONTAL GRID POINTS USED LTRAN2 C LMAX=NUMBER OF VERTICAL GRID POINTS USED IN LTRAN2 C JLE=HORIZONTAL GRID POINT OF LEADING EDGE C JTE=HORIZONTAL GRID POINT OF TRAILING EDGE C XSO=SHOCK POSITION IN BASE SOLUTION C X51=SHOCK POSITION IN CALIBRATION SOLUTION C EP=UALUE OF PERTURBATION PARAMETER FOR CURRENT FLOW C EPO=UALUE OF PERTURBATION PARAMETER FOR CALIBRATION FLOW C X=X COORDINATES OF HORZONTAL GRID POINTS C NOTE -TAPEl SHOULD CONTAIN STEADY STATE INTIAL CONDITIONS C OF BASE FLOW FROM LTRAN2 C TAPE2 SHOULD CONTAIN STEADY STATE INTIAL CONDITIONS C OF CALIBRATION FLOW FROM LTRAN2 C ON OUTPUT TAPE3 WILL CONTAIN STEADY STATE INTIAL CONDITIONS C OF THE CURRENT FLOW FOR LTRAN2 C INTIAL CONDITIONS FROM LTRAN2 ARE C T=TIME C GLIFT=LIFT C PU=DISTURBANCE VELOCITY POTENTIALS OF UPPER SURFACE C PL=DISTURBANCE VELOCITY POTENTIALS OF LOWER SURFACE C P=DISTURBANCE VELOCITY POTENTIALS OF ALL GRID POINTS C READ(5, 1)LMAX, JMAX, JLE, JTE, XSO, XS1. EP, EPO 1 FORMAT(415,6F10.4) DXS=XSI-XSO URITECS. 6)LMAX, JMAX. JLE, JTE, XSO. XS1,DXS, EP, EPO 6 FORMAT(/5XPNO OF UER MESH POINTS=,I5, NO OF NOR MESH PTS=, IS, 1 LE MESH PT=,IS5,TE MESH PT=tI5 /5XPBASE SHOCK POSITION= 2,F10.4#CAL SHOCK POSITION=,FIO.4, SHOCK DISPLACEMENT=,F10.4/5XV 3, EP=,FlO.4, EPO=.FIO.4) READ(6,7) (X(I), I=1,JMAX) 7 FORMAT(8E10.4) WRITECS. 11) 11 FORfIAT(/5X. X CO-ORDINATES) WRITE(6. 18)(X(I), I=1,JMAX) 18 FORMAT('5X. 
      CONS=XSO*(1.0-XSO)
      DO 20 I=1,JMAX
   20 TX(I)=0.0
      DO 30 I=JLE,JTE
   30 TX(I)=(X(I)*(1.0-X(I)))/CONS
      WRITE(6,31)
   31 FORMAT(/5X,'DISTORTION COEF=')
      WRITE(6,36)(TX(I),I=1,JMAX)
   36 FORMAT(/5X,10F12.6)
      REWIND 1
      READ(1)T1,LMAX1,JMAX1,GLIFT1,(PU(J),PL(J),(P(L,J),L=1,LMAX),
     1 J=1,JMAX)
      WRITE(6,2)
    2 FORMAT(/5X,' DATA FROM TAPE1')
      WRITE(6,41)T1,LMAX1,JMAX1,GLIFT1
   41 FORMAT(/5X,F10.4,2I5,F10.4)
      DO 10 I=1,JMAX
      WRITE(6,16)I,PU(I),PL(I)
   16 FORMAT(/5X,'FOR COLUMN=',I5,' PU=',E15.6,' PL=',E15.6)
      WRITE(6,17)(P(J,I),J=1,10)
   17 FORMAT(/5X,10E12.4)
   10 CONTINUE
      REWIND 3
      WRITE(3)(PU(I),I=1,JMAX)
      WRITE(3)(PL(I),I=1,JMAX)
      DO 40 I=1,LMAX
   40 WRITE(3)(P(I,J),J=1,JMAX)
      REWIND 2
      READ(2)T2,LMAX2,JMAX2,GLIFT2,(PU(J),PL(J),(P(L,J),L=1,LMAX),
     1 J=1,JMAX)
      WRITE(6,21)
   21 FORMAT(/5X,' DATA FROM TAPE 2')
      WRITE(6,41)T2,LMAX2,JMAX2,GLIFT2
      DO 100 I=1,JMAX
  100 WRITE(6,17)(P(J,I),J=1,10)
      RA=EP/EPO
      REWIND 4
      REWIND 3
      C1=RA*DXS
C     BLEND BASE AND CALIBRATION POTENTIALS IN THE STRAINED COORDINATE
      READ(3)(DUMY1(I),I=1,JMAX)
      DO 200 I=1,JMAX
  200 DUMY2(I)=DUMY1(I)*(1.0-C1*TX(I))+RA*(PU(I)-DUMY1(I)*
     1 (1.0-DXS*TX(I)))
      WRITE(4)(DUMY2(I),I=1,JMAX)
      READ(3)(DUMY1(I),I=1,JMAX)
      DO 300 I=1,JMAX
  300 DUMY2(I)=DUMY1(I)*(1.0-C1*TX(I))+RA*(PL(I)-DUMY1(I)*
     1 (1.0-DXS*TX(I)))
      WRITE(4)(DUMY2(I),I=1,JMAX)
      DO 500 I=1,LMAX
      READ(3)(DUMY1(K),K=1,JMAX)
      DO 400 J=1,JMAX
  400 DUMY2(J)=DUMY1(J)*(1.0-C1*TX(J))+RA*(P(I,J)-DUMY1(J)*
     1 (1.0-DXS*TX(J)))
  500 WRITE(4)(DUMY2(K),K=1,JMAX)
      T3=T1+RA*(T2-T1)
      GLIFT3=GLIFT1+RA*(GLIFT2-GLIFT1)
      REWIND 4
      READ(4)(PU(I),I=1,JMAX)
      READ(4)(PL(I),I=1,JMAX)
      DO 600 I=1,LMAX
  600 READ(4)(P(I,J),J=1,JMAX)
      REWIND 3
      WRITE(3)T3,LMAX1,JMAX1,GLIFT3,(PU(J),PL(J),(P(L,J),L=1,LMAX),
     1 J=1,JMAX)
      REWIND 3
      READ(3)T3,LMAX1,JMAX1,GLIFT3,(PU(J),PL(J),(P(L,J),L=1,LMAX),
     1 J=1,JMAX)
      WRITE(6,601)
  601 FORMAT(/5X,' DATA TO BE WRITTEN ON TAPE 3')
      WRITE(6,41)T3,LMAX1,JMAX1,GLIFT3
      DO 700 I=1,JMAX
      WRITE(6,16)I,PU(I),PL(I)
  700 WRITE(6,17)(P(J,I),J=1,10)
      STOP
      END
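For readers who want to experiment with the update rule without a CDC batch environment, the core blending step applied by the DO 200/300/400 loops above can be sketched in a few lines of Python. This is a simplified, array-based transcription (NumPy and the function name are assumptions, not part of the original program), and it omits all of the tape-based I/O:

```python
import numpy as np

def strained_coordinate_update(phi0, phi1, x, xs0, xs1, ep, ep0, jle, jte):
    """Blend base (phi0) and calibration (phi1) disturbance potentials for a new
    perturbation parameter ep, straining the coordinate so the shock moves
    smoothly between xs0 and xs1 (mirrors the DO 200/300/400 loops)."""
    ra = ep / ep0                        # RA = EP/EPO
    dxs = xs1 - xs0                      # shock displacement
    tx = np.zeros_like(x)                # distortion coefficient, zero off the airfoil
    cons = xs0 * (1.0 - xs0)
    tx[jle:jte + 1] = x[jle:jte + 1] * (1.0 - x[jle:jte + 1]) / cons
    c1 = ra * dxs
    return phi0 * (1.0 - c1 * tx) + ra * (phi1 - phi0 * (1.0 - dxs * tx))

# Tiny synthetic check: when ep == ep0 the update reproduces the calibration solution.
x = np.linspace(0.0, 1.0, 11)
phi0 = np.sin(np.pi * x)
phi1 = 1.1 * phi0
out = strained_coordinate_update(phi0, phi1, x, xs0=0.5, xs1=0.55,
                                 ep=0.1, ep0=0.1, jle=0, jte=10)
assert np.allclose(out, phi1)
```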
150
Semperoper Ballett: All Forsythe review – a triple blast from the master

Sadler's Wells, London
The German company perform a trio of suspenseful, deadpan and always sparkling works by William Forsythe

It is not uncommon to see a William Forsythe piece on a ballet programme but it's rare to get a trio. Hats off, then, to Aaron Watkin, director of Semperoper Ballett in Dresden and a former Forsythe dancer, for offering this fascinating opportunity to see three different faces of Forsythe in one go.

In the Middle, Somewhat Elevated is one of his best-known works. With its sparse, industrial feel, the way it flips between nonchalance and virtuosity, and its thwacking synthetic soundscape, the piece has an immediate impact. But it also holds up to repeated viewing, and despite its uncluttered appearance, there is always a lot going on. Take the opening: a gaggle of dancers in informal poses get pulled one by one into sharp sequences of darting jumps and rapid swerves that multiply and fracture until the stage feels filled by a cut-glass choreographic kaleidoscope. Composition is one kind of complexity; another is the layering of physical presence, the dancers switching between full-out performance mode, nonchalance and what look like run-throughs. Sometimes they seem to be playing to the backcloth rather than the auditorium, and Forsythe also scatters our attention, cheekily undercutting high tension, split-kicky numbers at centre stage with deadpan or low-key action in the margins. It's quite a blast.

Neue Suite is a more conventional work, composed of eight duets adapted from Forsythe's back catalogue. Here, he takes the framing for granted – all are traditional balletic male-female pairings, staged conventionally and set to classical music – in order to focus on the content. Bookending the suite are a duet in ballet's lyrical mode, all curving grace and floating lifts, and one in its sparkling mode, with fleet footwork and sharp lines. In between come encounters pegged closely to the mood and phrasing of their music (Handel, Berio, Bach), but with a density of physical and dynamic detail that is all Forsythe's own. One, following the pursuit implicit in the musical canon, is full of feints, catches and slip-ups; another, like open violin strings, seems to be all about harmonics and overtones, the couple's arms echoing and extending each other's lines into the air around them; a third is all dissonance, crankily built from jolts and blocks. Each is fascinating in itself, while the format makes them into a series of studies on a theme.

The best comes last. If In the Middle plays with its own framing, Enemy in the Figure breaks it up altogether. A high wooden wall cuts right across the middle of the stage, and a lot of choreography seems to happen behind it. But then, a lot of action is obscured upfront too, with lights that dim, flare and sweep such that the shadows feel as active and as integral to the performance as any dancer. Forsythe is too much the showman to let that be simply a clever device. Instead, he makes it thrilling. We don't know half of what's happening, but we glimpse leggy figures spidering away in the shadows, there are chases, fights and escapes, sudden surges and stillnesses, a restlessly thrumming score – all the dynamics of a suspense movie with none of the plot. It's a piece that hooks you as much by what it withholds as what it shows, and the dancers seemed to love it quite as much as the audience.

All Forsythe is at Sadler's Wells, London, until 23 June.
Box office: 020-7863 8000.

The caption to the second picture was amended on 25 June 2018 because an earlier version misidentified the dancer in the image as Skyler Maxey-Wert. This has been corrected to say Michael Tucker.
151
Models and pre-trained weights — Torchvision 0.22 documentation

Models and pre-trained weights
The torchvision.models subpackage contains definitions of models for addressing different tasks, including: image classification, pixelwise semantic segmentation, object detection, instance segmentation, person keypoint detection, video classification, and optical flow.

General information on pre-trained weights
TorchVision offers pre-trained weights for every provided architecture, using the PyTorch torch.hub. Instancing a pre-trained model will download its weights to a cache directory. This directory can be set using the TORCH_HOME environment variable. See torch.hub.load_state_dict_from_url() for details.

Note: The pre-trained models provided in this library may have their own licenses or terms and conditions derived from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.

Note: Backward compatibility is guaranteed for loading a serialized state_dict to the model created using old PyTorch version. On the contrary, loading entire saved models or serialized ScriptModules (serialized using older versions of PyTorch) may not preserve the historic behaviour.
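As a small, hedged illustration of the cache behaviour described above: the snippet below points the cache at a custom directory before instantiating a model. The directory path is only an example; everything else uses the torch/torchvision API mentioned in this section.

```python
import os

# Example path only: torch.hub reads TORCH_HOME when it resolves the cache location,
# and checkpoints typically land under $TORCH_HOME/hub/checkpoints.
os.environ["TORCH_HOME"] = os.path.expanduser("~/torchvision-cache")

from torchvision.models import resnet50, ResNet50_Weights

# Downloads the weights into the cache on first use, reuses them afterwards.
model = resnet50(weights=ResNet50_Weights.DEFAULT)
model.eval()
```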
Initializing pre-trained models
As of v0.13, TorchVision offers a new Multi-weight support API for loading different weights to the existing model builder methods:

    from torchvision.models import resnet50, ResNet50_Weights

    # Old weights with accuracy 76.130%
    resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)

    # New weights with accuracy 80.858%
    resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)

    # Best available weights (currently alias for IMAGENET1K_V2)
    # Note that these weights may change across versions
    resnet50(weights=ResNet50_Weights.DEFAULT)

    # Strings are also supported
    resnet50(weights="IMAGENET1K_V2")

    # No weights - random initialization
    resnet50(weights=None)

Migrating to the new API is very straightforward. The following method calls between the 2 APIs are all equivalent:

    from torchvision.models import resnet50, ResNet50_Weights

    # Using pretrained weights:
    resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
    resnet50(weights="IMAGENET1K_V1")
    resnet50(pretrained=True)  # deprecated
    resnet50(True)  # deprecated

    # Using no weights:
    resnet50(weights=None)
    resnet50()
    resnet50(pretrained=False)  # deprecated
    resnet50(False)  # deprecated

Note that the pretrained parameter is now deprecated; using it will emit warnings and it will be removed in v0.15.

Using the pre-trained models
Before using the pre-trained models, one must preprocess the image (resize with the right resolution/interpolation, apply inference transforms, rescale the values, etc.). There is no standard way to do this as it depends on how a given model was trained. It can vary across model families, variants or even weight versions. Using the correct preprocessing method is critical and failing to do so may lead to decreased accuracy or incorrect outputs.

All the necessary information for the inference transforms of each pre-trained model is provided on its weights documentation. To simplify inference, TorchVision bundles the necessary preprocessing transforms into each model weight. These are accessible via the weight.transforms attribute:

    # Initialize the Weight Transforms
    weights = ResNet50_Weights.DEFAULT
    preprocess = weights.transforms()

    # Apply it to the input image
    img_transformed = preprocess(img)

Some models use modules which have different training and evaluation behavior, such as batch normalization. To switch between these modes, use model.train() or model.eval() as appropriate. See train() or eval() for details.

    # Initialize model
    weights = ResNet50_Weights.DEFAULT
    model = resnet50(weights=weights)

    # Set model to eval mode
    model.eval()

Listing and retrieving available models
As of v0.14, TorchVision offers a new mechanism which allows listing and retrieving models and weights by their names.
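To complement the walkthrough above, here is a small, hedged sketch that enumerates the weight variants registered for one builder and peeks at the metadata bundled with each. It relies only on the weight enums being ordinary Python enums and on the .meta / .transforms() accessors described in this section.

```python
from torchvision.models import resnet50, ResNet50_Weights

# Weight enums are standard Python enums, so they can be iterated.
for w in ResNet50_Weights:
    print(w.name, "->", len(w.meta["categories"]), "classes")

# Pick one variant, inspect its bundled preprocessing, and build the model.
weights = ResNet50_Weights.IMAGENET1K_V2
print(weights.transforms())           # prints the inference transforms
model = resnet50(weights=weights)     # the builder accepts the enum directly
model.eval()
```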
Here are a few examples on how to use them:

    from torchvision.models import get_model, get_model_weights, get_weight, list_models
    from torchvision.models.quantization import MobileNet_V3_Large_QuantizedWeights
    import torchvision

    # List available models
    all_models = list_models()
    classification_models = list_models(module=torchvision.models)

    # Initialize models
    m1 = get_model("mobilenet_v3_large", weights=None)
    m2 = get_model("quantized_mobilenet_v3_large", weights="DEFAULT")

    # Fetch weights
    weights = get_weight("MobileNet_V3_Large_QuantizedWeights.DEFAULT")
    assert weights == MobileNet_V3_Large_QuantizedWeights.DEFAULT

    weights_enum = get_model_weights("quantized_mobilenet_v3_large")
    assert weights_enum == MobileNet_V3_Large_QuantizedWeights

    weights_enum2 = get_model_weights(torchvision.models.quantization.mobilenet_v3_large)
    assert weights_enum == weights_enum2

Here are the available public functions to retrieve models and their corresponding weights:

    get_model(name, **config) - Gets the model name and configuration and returns an instantiated model.
    get_model_weights(name) - Returns the weights enum class associated to the given model.
    get_weight(name) - Gets the weights enum value by its full name.
    list_models([module, include, exclude]) - Returns a list with the names of registered models.

Using models from Hub
Most pre-trained models can be accessed directly via PyTorch Hub without having TorchVision installed:

    import torch

    # Option 1: passing weights param as string
    model = torch.hub.load("pytorch/vision", "resnet50", weights="IMAGENET1K_V2")

    # Option 2: passing weights param as enum
    weights = torch.hub.load("pytorch/vision", "get_weight",
                             weights="ResNet50_Weights.IMAGENET1K_V2")
    model = torch.hub.load("pytorch/vision", "resnet50", weights=weights)

You can also retrieve all the available weights of a specific model via PyTorch Hub by doing:

    import torch

    weight_enum = torch.hub.load("pytorch/vision", "get_model_weights", name="resnet50")
    print([weight for weight in weight_enum])

The only exception to the above are the detection models included on torchvision.models.detection. These models require TorchVision to be installed because they depend on custom C++ operators.

Classification
The following classification models are available, with or without pre-trained weights: AlexNet, ConvNeXt, DenseNet, EfficientNet, EfficientNetV2, GoogLeNet, Inception V3, MaxVit, MNASNet, MobileNet V2, MobileNet V3, RegNet, ResNet, ResNeXt, ShuffleNet V2, SqueezeNet, SwinTransformer, VGG, VisionTransformer, Wide ResNet.

Here is an example of how to use the pre-trained image classification models:

    from torchvision.io import decode_image
    from torchvision.models import resnet50, ResNet50_Weights

    img = decode_image("test/assets/encode_jpeg/grace_hopper_517x606.jpg")

    # Step 1: Initialize model with the best available weights
    weights = ResNet50_Weights.DEFAULT
    model = resnet50(weights=weights)
    model.eval()

    # Step 2: Initialize the inference transforms
    preprocess = weights.transforms()

    # Step 3: Apply inference preprocessing transforms
    batch = preprocess(img).unsqueeze(0)

    # Step 4: Use the model and print the predicted category
    prediction = model(batch).squeeze(0).softmax(0)
    class_id = prediction.argmax().item()
    score = prediction[class_id].item()
    category_name = weights.meta["categories"][class_id]
    print(f"{category_name}: {100 * score:.1f}%")

The classes of the pre-trained model outputs can be found at weights.meta["categories"].
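Building on the classification example above, the following hedged sketch prints the five most likely categories instead of only the top one. It reuses `model`, `preprocess`, `batch`, and `weights` from that example; torch.topk is the only addition.

```python
import torch

# Reuses `model`, `batch`, and `weights` from the classification example above.
with torch.no_grad():
    prediction = model(batch).squeeze(0).softmax(0)

top5 = torch.topk(prediction, k=5)
for score, class_id in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{weights.meta['categories'][class_id]}: {100 * score:.1f}%")
```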
Table of all available classification weights
Accuracies are reported on ImageNet-1K using single crops:

| Weight | Acc@1 | Acc@5 | Params | GFLOPS | Recipe |
| --- | --- | --- | --- | --- | --- |
| AlexNet_Weights.IMAGENET1K_V1 | 56.522 | 79.066 | 61.1M | 0.71 | link |
| ConvNeXt_Base_Weights.IMAGENET1K_V1 | 84.062 | 96.87 | 88.6M | 15.36 | link |
| ConvNeXt_Large_Weights.IMAGENET1K_V1 | 84.414 | 96.976 | 197.8M | 34.36 | link |
| ConvNeXt_Small_Weights.IMAGENET1K_V1 | 83.616 | 96.65 | 50.2M | 8.68 | link |
| ConvNeXt_Tiny_Weights.IMAGENET1K_V1 | 82.52 | 96.146 | 28.6M | 4.46 | link |
| DenseNet121_Weights.IMAGENET1K_V1 | 74.434 | 91.972 | 8.0M | 2.83 | link |
| DenseNet161_Weights.IMAGENET1K_V1 | 77.138 | 93.56 | 28.7M | 7.73 | link |
| DenseNet169_Weights.IMAGENET1K_V1 | 75.6 | 92.806 | 14.1M | 3.36 | link |
| DenseNet201_Weights.IMAGENET1K_V1 | 76.896 | 93.37 | 20.0M | 4.29 | link |
| EfficientNet_B0_Weights.IMAGENET1K_V1 | 77.692 | 93.532 | 5.3M | 0.39 | link |
| EfficientNet_B1_Weights.IMAGENET1K_V1 | 78.642 | 94.186 | 7.8M | 0.69 | link |
| EfficientNet_B1_Weights.IMAGENET1K_V2 | 79.838 | 94.934 | 7.8M | 0.69 | link |
| EfficientNet_B2_Weights.IMAGENET1K_V1 | 80.608 | 95.31 | 9.1M | 1.09 | link |
| EfficientNet_B3_Weights.IMAGENET1K_V1 | 82.008 | 96.054 | 12.2M | 1.83 | link |
| EfficientNet_B4_Weights.IMAGENET1K_V1 | 83.384 | 96.594 | 19.3M | 4.39 | link |
| EfficientNet_B5_Weights.IMAGENET1K_V1 | 83.444 | 96.628 | 30.4M | 10.27 | link |
| EfficientNet_B6_Weights.IMAGENET1K_V1 | 84.008 | 96.916 | 43.0M | 19.07 | link |
| EfficientNet_B7_Weights.IMAGENET1K_V1 | 84.122 | 96.908 | 66.3M | 37.75 | link |
| EfficientNet_V2_L_Weights.IMAGENET1K_V1 | 85.808 | 97.788 | 118.5M | 56.08 | link |
| EfficientNet_V2_M_Weights.IMAGENET1K_V1 | 85.112 | 97.156 | 54.1M | 24.58 | link |
| EfficientNet_V2_S_Weights.IMAGENET1K_V1 | 84.228 | 96.878 | 21.5M | 8.37 | link |
| GoogLeNet_Weights.IMAGENET1K_V1 | 69.778 | 89.53 | 6.6M | 1.5 | link |
| Inception_V3_Weights.IMAGENET1K_V1 | 77.294 | 93.45 | 27.2M | 5.71 | link |
| MNASNet0_5_Weights.IMAGENET1K_V1 | 67.734 | 87.49 | 2.2M | 0.1 | link |
| MNASNet0_75_Weights.IMAGENET1K_V1 | 71.18 | 90.496 | 3.2M | 0.21 | link |
| MNASNet1_0_Weights.IMAGENET1K_V1 | 73.456 | 91.51 | 4.4M | 0.31 | link |
| MNASNet1_3_Weights.IMAGENET1K_V1 | 76.506 | 93.522 | 6.3M | 0.53 | link |
| MaxVit_T_Weights.IMAGENET1K_V1 | 83.7 | 96.722 | 30.9M | 5.56 | link |
| MobileNet_V2_Weights.IMAGENET1K_V1 | 71.878 | 90.286 | 3.5M | 0.3 | link |
| MobileNet_V2_Weights.IMAGENET1K_V2 | 72.154 | 90.822 | 3.5M | 0.3 | link |
| MobileNet_V3_Large_Weights.IMAGENET1K_V1 | 74.042 | 91.34 | 5.5M | 0.22 | link |
| MobileNet_V3_Large_Weights.IMAGENET1K_V2 | 75.274 | 92.566 | 5.5M | 0.22 | link |
| MobileNet_V3_Small_Weights.IMAGENET1K_V1 | 67.668 | 87.402 | 2.5M | 0.06 | link |
| RegNet_X_16GF_Weights.IMAGENET1K_V1 | 80.058 | 94.944 | 54.3M | 15.94 | link |
| RegNet_X_16GF_Weights.IMAGENET1K_V2 | 82.716 | 96.196 | 54.3M | 15.94 | link |
| RegNet_X_1_6GF_Weights.IMAGENET1K_V1 | 77.04 | 93.44 | 9.2M | 1.6 | link |
| RegNet_X_1_6GF_Weights.IMAGENET1K_V2 | 79.668 | 94.922 | 9.2M | 1.6 | link |
| RegNet_X_32GF_Weights.IMAGENET1K_V1 | 80.622 | 95.248 | 107.8M | 31.74 | link |
| RegNet_X_32GF_Weights.IMAGENET1K_V2 | 83.014 | 96.288 | 107.8M | 31.74 | link |
| RegNet_X_3_2GF_Weights.IMAGENET1K_V1 | 78.364 | 93.992 | 15.3M | 3.18 | link |
| RegNet_X_3_2GF_Weights.IMAGENET1K_V2 | 81.196 | 95.43 | 15.3M | 3.18 | link |
| RegNet_X_400MF_Weights.IMAGENET1K_V1 | 72.834 | 90.95 | 5.5M | 0.41 | link |
| RegNet_X_400MF_Weights.IMAGENET1K_V2 | 74.864 | 92.322 | 5.5M | 0.41 | link |
| RegNet_X_800MF_Weights.IMAGENET1K_V1 | 75.212 | 92.348 | 7.3M | 0.8 | link |
| RegNet_X_800MF_Weights.IMAGENET1K_V2 | 77.522 | 93.826 | 7.3M | 0.8 | link |
| RegNet_X_8GF_Weights.IMAGENET1K_V1 | 79.344 | 94.686 | 39.6M | 8 | link |
| RegNet_X_8GF_Weights.IMAGENET1K_V2 | 81.682 | 95.678 | 39.6M | 8 | link |
| RegNet_Y_128GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.228 | 98.682 | 644.8M | 374.57 | link |
| RegNet_Y_128GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 86.068 | 97.844 | 644.8M | 127.52 | link |
| RegNet_Y_16GF_Weights.IMAGENET1K_V1 | 80.424 | 95.24 | 83.6M | 15.91 | link |
| RegNet_Y_16GF_Weights.IMAGENET1K_V2 | 82.886 | 96.328 | 83.6M | 15.91 | link |
| RegNet_Y_16GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 86.012 | 98.054 | 83.6M | 46.73 | link |
| RegNet_Y_16GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 83.976 | 97.244 | 83.6M | 15.91 | link |
| RegNet_Y_1_6GF_Weights.IMAGENET1K_V1 | 77.95 | 93.966 | 11.2M | 1.61 | link |
| RegNet_Y_1_6GF_Weights.IMAGENET1K_V2 | 80.876 | 95.444 | 11.2M | 1.61 | link |
| RegNet_Y_32GF_Weights.IMAGENET1K_V1 | 80.878 | 95.34 | 145.0M | 32.28 | link |
| RegNet_Y_32GF_Weights.IMAGENET1K_V2 | 83.368 | 96.498 | 145.0M | 32.28 | link |
| RegNet_Y_32GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 86.838 | 98.362 | 145.0M | 94.83 | link |
| RegNet_Y_32GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 84.622 | 97.48 | 145.0M | 32.28 | link |
| RegNet_Y_3_2GF_Weights.IMAGENET1K_V1 | 78.948 | 94.576 | 19.4M | 3.18 | link |
| RegNet_Y_3_2GF_Weights.IMAGENET1K_V2 | 81.982 | 95.972 | 19.4M | 3.18 | link |
| RegNet_Y_400MF_Weights.IMAGENET1K_V1 | 74.046 | 91.716 | 4.3M | 0.4 | link |
| RegNet_Y_400MF_Weights.IMAGENET1K_V2 | 75.804 | 92.742 | 4.3M | 0.4 | link |
| RegNet_Y_800MF_Weights.IMAGENET1K_V1 | 76.42 | 93.136 | 6.4M | 0.83 | link |
| RegNet_Y_800MF_Weights.IMAGENET1K_V2 | 78.828 | 94.502 | 6.4M | 0.83 | link |
| RegNet_Y_8GF_Weights.IMAGENET1K_V1 | 80.032 | 95.048 | 39.4M | 8.47 | link |
| RegNet_Y_8GF_Weights.IMAGENET1K_V2 | 82.828 | 96.33 | 39.4M | 8.47 | link |
| ResNeXt101_32X8D_Weights.IMAGENET1K_V1 | 79.312 | 94.526 | 88.8M | 16.41 | link |
| ResNeXt101_32X8D_Weights.IMAGENET1K_V2 | 82.834 | 96.228 | 88.8M | 16.41 | link |
| ResNeXt101_64X4D_Weights.IMAGENET1K_V1 | 83.246 | 96.454 | 83.5M | 15.46 | link |
| ResNeXt50_32X4D_Weights.IMAGENET1K_V1 | 77.618 | 93.698 | 25.0M | 4.23 | link |
| ResNeXt50_32X4D_Weights.IMAGENET1K_V2 | 81.198 | 95.34 | 25.0M | 4.23 | link |
| ResNet101_Weights.IMAGENET1K_V1 | 77.374 | 93.546 | 44.5M | 7.8 | link |
| ResNet101_Weights.IMAGENET1K_V2 | 81.886 | 95.78 | 44.5M | 7.8 | link |
| ResNet152_Weights.IMAGENET1K_V1 | 78.312 | 94.046 | 60.2M | 11.51 | link |
| ResNet152_Weights.IMAGENET1K_V2 | 82.284 | 96.002 | 60.2M | 11.51 | link |
| ResNet18_Weights.IMAGENET1K_V1 | 69.758 | 89.078 | 11.7M | 1.81 | link |
| ResNet34_Weights.IMAGENET1K_V1 | 73.314 | 91.42 | 21.8M | 3.66 | link |
| ResNet50_Weights.IMAGENET1K_V1 | 76.13 | 92.862 | 25.6M | 4.09 | link |
| ResNet50_Weights.IMAGENET1K_V2 | 80.858 | 95.434 | 25.6M | 4.09 | link |
| ShuffleNet_V2_X0_5_Weights.IMAGENET1K_V1 | 60.552 | 81.746 | 1.4M | 0.04 | link |
| ShuffleNet_V2_X1_0_Weights.IMAGENET1K_V1 | 69.362 | 88.316 | 2.3M | 0.14 | link |
| ShuffleNet_V2_X1_5_Weights.IMAGENET1K_V1 | 72.996 | 91.086 | 3.5M | 0.3 | link |
| ShuffleNet_V2_X2_0_Weights.IMAGENET1K_V1 | 76.23 | 93.006 | 7.4M | 0.58 | link |
| SqueezeNet1_0_Weights.IMAGENET1K_V1 | 58.092 | 80.42 | 1.2M | 0.82 | link |
| SqueezeNet1_1_Weights.IMAGENET1K_V1 | 58.178 | 80.624 | 1.2M | 0.35 | link |
| Swin_B_Weights.IMAGENET1K_V1 | 83.582 | 96.64 | 87.8M | 15.43 | link |
| Swin_S_Weights.IMAGENET1K_V1 | 83.196 | 96.36 | 49.6M | 8.74 | link |
| Swin_T_Weights.IMAGENET1K_V1 | 81.474 | 95.776 | 28.3M | 4.49 | link |
| Swin_V2_B_Weights.IMAGENET1K_V1 | 84.112 | 96.864 | 87.9M | 20.32 | link |
| Swin_V2_S_Weights.IMAGENET1K_V1 | 83.712 | 96.816 | 49.7M | 11.55 | link |
| Swin_V2_T_Weights.IMAGENET1K_V1 | 82.072 | 96.132 | 28.4M | 5.94 | link |
| VGG11_BN_Weights.IMAGENET1K_V1 | 70.37 | 89.81 | 132.9M | 7.61 | link |
| VGG11_Weights.IMAGENET1K_V1 | 69.02 | 88.628 | 132.9M | 7.61 | link |
| VGG13_BN_Weights.IMAGENET1K_V1 | 71.586 | 90.374 | 133.1M | 11.31 | link |
| VGG13_Weights.IMAGENET1K_V1 | 69.928 | 89.246 | 133.0M | 11.31 | link |
| VGG16_BN_Weights.IMAGENET1K_V1 | 73.36 | 91.516 | 138.4M | 15.47 | link |
| VGG16_Weights.IMAGENET1K_V1 | 71.592 | 90.382 | 138.4M | 15.47 | link |
| VGG16_Weights.IMAGENET1K_FEATURES | nan | nan | 138.4M | 15.47 | link |
| VGG19_BN_Weights.IMAGENET1K_V1 | 74.218 | 91.842 | 143.7M | 19.63 | link |
| VGG19_Weights.IMAGENET1K_V1 | 72.376 | 90.876 | 143.7M | 19.63 | link |
| ViT_B_16_Weights.IMAGENET1K_V1 | 81.072 | 95.318 | 86.6M | 17.56 | link |
| ViT_B_16_Weights.IMAGENET1K_SWAG_E2E_V1 | 85.304 | 97.65 | 86.9M | 55.48 | link |
| ViT_B_16_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 81.886 | 96.18 | 86.6M | 17.56 | link |
| ViT_B_32_Weights.IMAGENET1K_V1 | 75.912 | 92.466 | 88.2M | 4.41 | link |
| ViT_H_14_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.552 | 98.694 | 633.5M | 1016.72 | link |
| ViT_H_14_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 85.708 | 97.73 | 632.0M | 167.29 | link |
| ViT_L_16_Weights.IMAGENET1K_V1 | 79.662 | 94.638 | 304.3M | 61.55 | link |
| ViT_L_16_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.064 | 98.512 | 305.2M | 361.99 | link |
| ViT_L_16_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 85.146 | 97.422 | 304.3M | 61.55 | link |
| ViT_L_32_Weights.IMAGENET1K_V1 | 76.972 | 93.07 | 306.5M | 15.38 | link |
| Wide_ResNet101_2_Weights.IMAGENET1K_V1 | 78.848 | 94.284 | 126.9M | 22.75 | link |
| Wide_ResNet101_2_Weights.IMAGENET1K_V2 | 82.51 | 96.02 | 126.9M | 22.75 | link |
| Wide_ResNet50_2_Weights.IMAGENET1K_V1 | 78.468 | 94.086 | 68.9M | 11.4 | link |
| Wide_ResNet50_2_Weights.IMAGENET1K_V2 | 81.602 | 95.758 | 68.9M | 11.4 | link |

Quantized models
The following architectures provide support for INT8 quantized models, with or without pre-trained weights: Quantized GoogLeNet, Quantized InceptionV3, Quantized MobileNet V2, Quantized MobileNet V3, Quantized ResNet, Quantized ResNeXt, Quantized ShuffleNet V2.

Here is an example of how to use the pre-trained quantized image classification models:

    from torchvision.io import decode_image
    from torchvision.models.quantization import resnet50, ResNet50_QuantizedWeights

    img = decode_image("test/assets/encode_jpeg/grace_hopper_517x606.jpg")

    # Step 1: Initialize model with the best available weights
    weights = ResNet50_QuantizedWeights.DEFAULT
    model = resnet50(weights=weights, quantize=True)
    model.eval()

    # Step 2: Initialize the inference transforms
    preprocess = weights.transforms()

    # Step 3: Apply inference preprocessing transforms
    batch = preprocess(img).unsqueeze(0)

    # Step 4: Use the model and print the predicted category
    prediction = model(batch).squeeze(0).softmax(0)
    class_id = prediction.argmax().item()
    score = prediction[class_id].item()
    category_name = weights.meta["categories"][class_id]
    print(f"{category_name}: {100 * score}%")

The classes of the pre-trained model outputs can be found at weights.meta["categories"].
Table of all available quantized classification weights
Accuracies are reported on ImageNet-1K using single crops:

| Weight | Acc@1 | Acc@5 | Params | GIPS | Recipe |
| --- | --- | --- | --- | --- | --- |
| GoogLeNet_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 69.826 | 89.404 | 6.6M | 1.5 | link |
| Inception_V3_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 77.176 | 93.354 | 27.2M | 5.71 | link |
| MobileNet_V2_QuantizedWeights.IMAGENET1K_QNNPACK_V1 | 71.658 | 90.15 | 3.5M | 0.3 | link |
| MobileNet_V3_Large_QuantizedWeights.IMAGENET1K_QNNPACK_V1 | 73.004 | 90.858 | 5.5M | 0.22 | link |
| ResNeXt101_32X8D_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 78.986 | 94.48 | 88.8M | 16.41 | link |
| ResNeXt101_32X8D_QuantizedWeights.IMAGENET1K_FBGEMM_V2 | 82.574 | 96.132 | 88.8M | 16.41 | link |
| ResNeXt101_64X4D_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 82.898 | 96.326 | 83.5M | 15.46 | link |
| ResNet18_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 69.494 | 88.882 | 11.7M | 1.81 | link |
| ResNet50_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 75.92 | 92.814 | 25.6M | 4.09 | link |
| ResNet50_QuantizedWeights.IMAGENET1K_FBGEMM_V2 | 80.282 | 94.976 | 25.6M | 4.09 | link |
| ShuffleNet_V2_X0_5_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 57.972 | 79.78 | 1.4M | 0.04 | link |
| ShuffleNet_V2_X1_0_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 68.36 | 87.582 | 2.3M | 0.14 | link |
| ShuffleNet_V2_X1_5_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 72.052 | 90.7 | 3.5M | 0.3 | link |
| ShuffleNet_V2_X2_0_QuantizedWeights.IMAGENET1K_FBGEMM_V1 | 75.354 | 92.488 | 7.4M | 0.58 | link |

Semantic Segmentation

Warning: The segmentation module is in Beta stage, and backward compatibility is not guaranteed.

The following semantic segmentation models are available, with or without pre-trained weights: DeepLabV3, FCN, LRASPP.

Here is an example of how to use the pre-trained semantic segmentation models:

    from torchvision.io.image import decode_image
    from torchvision.models.segmentation import fcn_resnet50, FCN_ResNet50_Weights
    from torchvision.transforms.functional import to_pil_image

    img = decode_image("gallery/assets/dog1.jpg")

    # Step 1: Initialize model with the best available weights
    weights = FCN_ResNet50_Weights.DEFAULT
    model = fcn_resnet50(weights=weights)
    model.eval()

    # Step 2: Initialize the inference transforms
    preprocess = weights.transforms()

    # Step 3: Apply inference preprocessing transforms
    batch = preprocess(img).unsqueeze(0)

    # Step 4: Use the model and visualize the prediction
    prediction = model(batch)["out"]
    normalized_masks = prediction.softmax(dim=1)
    class_to_idx = {cls: idx for (idx, cls) in enumerate(weights.meta["categories"])}
    mask = normalized_masks[0, class_to_idx["dog"]]
    to_pil_image(mask).show()

The classes of the pre-trained model outputs can be found at weights.meta["categories"]. The output format of the models is illustrated in Semantic segmentation models.
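As a small, hedged follow-up to the segmentation example above, the sketch below collapses the per-class probabilities into a single class-index map and reports how many pixels each detected class occupies. It reuses `normalized_masks` and `weights` from that example and only uses basic tensor operations.

```python
# Reuses `normalized_masks` and `weights` from the segmentation example above.
class_map = normalized_masks.argmax(dim=1)[0]   # shape (H, W); each value is a class index
categories = weights.meta["categories"]

for idx in class_map.unique().tolist():
    pixels = int((class_map == idx).sum())
    print(f"{categories[idx]}: {pixels} pixels")
```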
Table of all available semantic segmentation weights
All models are evaluated on a subset of COCO val2017, on the 20 categories that are present in the Pascal VOC dataset:

| Weight | Mean IoU | pixelwise Acc | Params | GFLOPS | Recipe |
| --- | --- | --- | --- | --- | --- |
| DeepLabV3_MobileNet_V3_Large_Weights.COCO_WITH_VOC_LABELS_V1 | 60.3 | 91.2 | 11.0M | 10.45 | link |
| DeepLabV3_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1 | 67.4 | 92.4 | 61.0M | 258.74 | link |
| DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1 | 66.4 | 92.4 | 42.0M | 178.72 | link |
| FCN_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1 | 63.7 | 91.9 | 54.3M | 232.74 | link |
| FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1 | 60.5 | 91.4 | 35.3M | 152.72 | link |
| LRASPP_MobileNet_V3_Large_Weights.COCO_WITH_VOC_LABELS_V1 | 57.9 | 91.2 | 3.2M | 2.09 | link |

Object Detection, Instance Segmentation and Person Keypoint Detection

The pre-trained models for detection, instance segmentation and keypoint detection are initialized with the classification models in torchvision. The models expect a list of Tensor[C, H, W]. Check the constructor of the models for more information.

Warning: The detection module is in Beta stage, and backward compatibility is not guaranteed.

Object Detection

The following object detection models are available, with or without pre-trained weights: Faster R-CNN, FCOS, RetinaNet, SSD, SSDlite.

Here is an example of how to use the pre-trained object detection models:

    from torchvision.io.image import decode_image
    from torchvision.models.detection import fasterrcnn_resnet50_fpn_v2, FasterRCNN_ResNet50_FPN_V2_Weights
    from torchvision.utils import draw_bounding_boxes
    from torchvision.transforms.functional import to_pil_image

    img = decode_image("test/assets/encode_jpeg/grace_hopper_517x606.jpg")

    # Step 1: Initialize model with the best available weights
    weights = FasterRCNN_ResNet50_FPN_V2_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn_v2(weights=weights, box_score_thresh=0.9)
    model.eval()

    # Step 2: Initialize the inference transforms
    preprocess = weights.transforms()

    # Step 3: Apply inference preprocessing transforms
    batch = [preprocess(img)]

    # Step 4: Use the model and visualize the prediction
    prediction = model(batch)[0]
    labels = [weights.meta["categories"][i] for i in prediction["labels"]]
    box = draw_bounding_boxes(img, boxes=prediction["boxes"],
                              labels=labels,
                              colors="red",
                              width=4, font_size=30)
    im = to_pil_image(box.detach())
    im.show()

The classes of the pre-trained model outputs can be found at weights.meta["categories"]. For details on how to plot the bounding boxes of the models, you may refer to Instance segmentation models.
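If you prefer a textual summary instead of drawing the boxes, the hedged sketch below iterates over the detection output of the example above. It reuses `prediction` and `weights` from that example and relies only on the "boxes", "labels", and "scores" keys of the detection output dictionary.

```python
# Reuses `prediction` and `weights` from the object detection example above.
for label_id, score, box in zip(prediction["labels"].tolist(),
                                prediction["scores"].tolist(),
                                prediction["boxes"].tolist()):
    name = weights.meta["categories"][label_id]
    x1, y1, x2, y2 = box
    print(f"{name}: {score:.2f} at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```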
Table of all available Object detection weights
Box MAPs are reported on COCO val2017:

| Weight | Box MAP | Params | GFLOPS | Recipe |
| --- | --- | --- | --- | --- |
| FCOS_ResNet50_FPN_Weights.COCO_V1 | 39.2 | 32.3M | 128.21 | link |
| FasterRCNN_MobileNet_V3_Large_320_FPN_Weights.COCO_V1 | 22.8 | 19.4M | 0.72 | link |
| FasterRCNN_MobileNet_V3_Large_FPN_Weights.COCO_V1 | 32.8 | 19.4M | 4.49 | link |
| FasterRCNN_ResNet50_FPN_V2_Weights.COCO_V1 | 46.7 | 43.7M | 280.37 | link |
| FasterRCNN_ResNet50_FPN_Weights.COCO_V1 | 37 | 41.8M | 134.38 | link |
| RetinaNet_ResNet50_FPN_V2_Weights.COCO_V1 | 41.5 | 38.2M | 152.24 | link |
| RetinaNet_ResNet50_FPN_Weights.COCO_V1 | 36.4 | 34.0M | 151.54 | link |
| SSD300_VGG16_Weights.COCO_V1 | 25.1 | 35.6M | 34.86 | link |
| SSDLite320_MobileNet_V3_Large_Weights.COCO_V1 | 21.3 | 3.4M | 0.58 | link |

Instance Segmentation

The following instance segmentation models are available, with or without pre-trained weights: Mask R-CNN.

For details on how to plot the masks of the models, you may refer to Instance segmentation models.

Table of all available Instance segmentation weights
Box and Mask MAPs are reported on COCO val2017:

| Weight | Box MAP | Mask MAP | Params | GFLOPS | Recipe |
| --- | --- | --- | --- | --- | --- |
| MaskRCNN_ResNet50_FPN_V2_Weights.COCO_V1 | 47.4 | 41.8 | 46.4M | 333.58 | link |
| MaskRCNN_ResNet50_FPN_Weights.COCO_V1 | 37.9 | 34.6 | 44.4M | 134.38 | link |

Keypoint Detection

The following person keypoint detection models are available, with or without pre-trained weights: Keypoint R-CNN.

The classes of the pre-trained model outputs can be found at weights.meta["keypoint_names"]. For details on how to plot the bounding boxes of the models, you may refer to Visualizing keypoints.

Table of all available Keypoint detection weights
Box and Keypoint MAPs are reported on COCO val2017:

| Weight | Box MAP | Keypoint MAP | Params | GFLOPS | Recipe |
| --- | --- | --- | --- | --- | --- |
| KeypointRCNN_ResNet50_FPN_Weights.COCO_LEGACY | 50.6 | 61.1 | 59.1M | 133.92 | link |
| KeypointRCNN_ResNet50_FPN_Weights.COCO_V1 | 54.6 | 65 | 59.1M | 137.42 | link |

Video Classification

Warning: The video module is in Beta stage, and backward compatibility is not guaranteed.

The following video classification models are available, with or without pre-trained weights: Video MViT, Video ResNet, Video S3D, Video SwinTransformer.

Here is an example of how to use the pre-trained video classification models:

    from torchvision.io.video import read_video
    from torchvision.models.video import r3d_18, R3D_18_Weights

    vid, _, _ = read_video("test/assets/videos/v_SoccerJuggling_g23_c01.avi", output_format="TCHW")
    vid = vid[:32]  # optionally shorten duration

    # Step 1: Initialize model with the best available weights
    weights = R3D_18_Weights.DEFAULT
    model = r3d_18(weights=weights)
    model.eval()

    # Step 2: Initialize the inference transforms
    preprocess = weights.transforms()

    # Step 3: Apply inference preprocessing transforms
    batch = preprocess(vid).unsqueeze(0)

    # Step 4: Use the model and print the predicted category
    prediction = model(batch).squeeze(0).softmax(0)
    label = prediction.argmax().item()
    score = prediction[label].item()
    category_name = weights.meta["categories"][label]
    print(f"{category_name}: {100 * score}%")

The classes of the pre-trained model outputs can be found at weights.meta["categories"].
Table of all available video classification weights
Accuracies are reported on Kinetics-400 using single crops for clip length 16:

| Weight | Acc@1 | Acc@5 | Params | GFLOPS | Recipe |
| --- | --- | --- | --- | --- | --- |
| MC3_18_Weights.KINETICS400_V1 | 63.96 | 84.13 | 11.7M | 43.34 | link |
| MViT_V1_B_Weights.KINETICS400_V1 | 78.477 | 93.582 | 36.6M | 70.6 | link |
| MViT_V2_S_Weights.KINETICS400_V1 | 80.757 | 94.665 | 34.5M | 64.22 | link |
| R2Plus1D_18_Weights.KINETICS400_V1 | 67.463 | 86.175 | 31.5M | 40.52 | link |
| R3D_18_Weights.KINETICS400_V1 | 63.2 | 83.479 | 33.4M | 40.7 | link |
| S3D_Weights.KINETICS400_V1 | 68.368 | 88.05 | 8.3M | 17.98 | link |
| Swin3D_B_Weights.KINETICS400_V1 | 79.427 | 94.386 | 88.0M | 140.67 | link |
| Swin3D_B_Weights.KINETICS400_IMAGENET22K_V1 | 81.643 | 95.574 | 88.0M | 140.67 | link |
| Swin3D_S_Weights.KINETICS400_V1 | 79.521 | 94.158 | 49.8M | 82.84 | link |
| Swin3D_T_Weights.KINETICS400_V1 | 77.715 | 93.519 | 28.2M | 43.88 | link |

Optical Flow

The following optical flow models are available, with or without pre-trained weights: RAFT.
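The optical flow entry above is only a pointer to RAFT, so here is a hedged sketch of what inference typically looks like, following the same weights/transforms pattern as the other sections. The builder and weight names (raft_large, Raft_Large_Weights) exist in torchvision.models.optical_flow, but the exact preprocessing contract should be checked against the optical flow weights documentation; the random frame1/frame2 tensors below merely stand in for two consecutive video frames.

```python
import torch
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights)
model.eval()

preprocess = weights.transforms()

# frame1 / frame2: uint8 image batches (N, 3, H, W) from two consecutive frames;
# H and W should be divisible by 8 for RAFT.
frame1 = torch.randint(0, 256, (1, 3, 520, 960), dtype=torch.uint8)
frame2 = torch.randint(0, 256, (1, 3, 520, 960), dtype=torch.uint8)
frame1, frame2 = preprocess(frame1, frame2)

with torch.no_grad():
    # RAFT returns a list of flow estimates; the last one is the most refined.
    flows = model(frame1, frame2)
predicted_flow = flows[-1]   # shape (N, 2, H, W)
print(predicted_flow.shape)
```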
152
Risk of Gastrointestinal Bleeding with Concurrent Use of NSAID and SSRI: A Systematic Review and Network Meta-Analysis - PubMed

Dig Dis Sci. 2023 May;68(5):1975-1982. doi: 10.1007/s10620-022-07788-y. Epub 2022 Dec 16.

Hossein Haghbin (1), Nuruddinkhodja Zakirkhodjaev (2), Faiza Fatima Husain (3), Wade Lee-Smith (4), Muhammad Aziz (5)

Affiliations:
1 Division of Gastroenterology, Ascension Providence Hospital, Southfield, MI, USA. [email protected].
2 Department of Surgery, Stony Brook Medicine, Stony Brook, NY, USA.
3 Michigan Medicine, Ann Arbor, MI, USA.
4 University of Toledo Libraries, University of Toledo, Toledo, OH, USA.
5 Division of Gastroenterology and Hepatology, University of Toledo, Toledo, OH, USA.
PMID: 36526813. DOI: 10.1007/s10620-022-07788-y

Abstract

Introduction: Nonsteroidal anti-inflammatory drugs (NSAIDs) are commonly used over-the-counter medications that can increase the risk of gastrointestinal (GI) bleeding through antiplatelet effects and loss of GI protection. Selective serotonin reuptake inhibitors (SSRIs), commonly used for mental and behavioral health, are another group of medications that can cause platelet dysfunction. Previous literature has shown a possible increased risk of GI bleeding with concurrent use of SSRIs and NSAIDs. We performed a network meta-analysis comparing NSAIDs, SSRIs, and combined SSRI/NSAIDs to assess the risk of GI bleeding.

Methods: The following databases were searched: MEDLINE, Embase, Web of Science Core Collection, SciELO, KCI, and the Cochrane database. All comparative studies, i.e., case-control, cohort, and randomized controlled trials, were included. Direct and network meta-analysis was conducted using the DerSimonian-Laird approach with a random-effects model. For binary outcomes, the odds ratio (OR) with 95% confidence interval (CI) and p value were calculated.

Results: After a comprehensive search through November 10th, 2021, 15 studies with 82,605 patients were identified. Eleven studies reported higher rates of GI bleeds in SSRI/NSAID than in SSRI users (36.9% vs 22.8%, OR 2.14, 95% CI 1.52-3.02, p < 0.001, I² = 86.1%). Ten studies compared SSRI/NSAID to NSAID users, with higher rates of bleeds in the SSRI/NSAID group (40.9% vs 34.2%, OR 1.49, 95% CI 1.20-1.84, p < 0.001, I² = 68.8%). The results were consistent using network meta-analysis as well.

Conclusion: Given the higher risk of bleeding with concurrent NSAIDs and SSRIs, prescribers should exercise caution when administering NSAIDs and SSRIs concurrently, especially in patients with higher risk of GI bleeding.

Keywords: Cyclic oxygenase inhibitor; Gastrointestinal bleeding; Network meta-analysis; Nonsteroidal anti-inflammatory drugs; Selective serotonin reuptake inhibitor.

© 2022. The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
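As a hedged illustration of the effect measure used in the Results above, the snippet below computes an odds ratio and a Wald-type 95% confidence interval from a single hypothetical 2x2 table. The counts are invented for demonstration and are not taken from the meta-analysis; a real DerSimonian-Laird pooling across studies involves additional per-study weighting.

```python
import math

# Hypothetical counts: bleeds / no-bleeds in exposed (SSRI+NSAID) vs comparator (SSRI only).
a, b = 40, 60   # exposed: events, non-events
c, d = 20, 80   # comparator: events, non-events

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # standard error of log(OR)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {low:.2f}-{high:.2f}")
```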
Similar articles

- Anglin R, Yuan Y, Moayyedi P, Tse F, Armstrong D, Leontiadis GI. Risk of upper gastrointestinal bleeding with selective serotonin reuptake inhibitors with or without concurrent nonsteroidal anti-inflammatory use: a systematic review and meta-analysis. Am J Gastroenterol. 2014 Jun;109(6):811-9. doi: 10.1038/ajg.2014.82. Epub 2014 Apr 29. PMID: 24777151.
- Brown TJ, Hooper L, Elliott RA, Payne K, Webb R, Roberts C, Rostom A, Symmons D. A comparison of the cost-effectiveness of five strategies for the prevention of non-steroidal anti-inflammatory drug-induced gastrointestinal toxicity: a systematic review with economic modelling. Health Technol Assess. 2006 Oct;10(38):iii-iv, xi-xiii, 1-183. doi: 10.3310/hta10380. PMID: 17018227.
- Juthani VV, Clearfield E, Chuck RS. Non-steroidal anti-inflammatory drugs versus corticosteroids for controlling inflammation after uncomplicated cataract surgery. Cochrane Database Syst Rev. 2017 Jul 3;7(7):CD010516. doi: 10.1002/14651858.CD010516.pub2. PMID: 28670710. Free PMC article.
- Chen YF, Jobanputra P, Barton P, Bryan S, Fry-Smith A, Harris G, Taylor RS. Cyclooxygenase-2 selective non-steroidal anti-inflammatory drugs (etodolac, meloxicam, celecoxib, rofecoxib, etoricoxib, valdecoxib and lumiracoxib) for osteoarthritis and rheumatoid arthritis: a systematic review and economic evaluation. Health Technol Assess. 2008 Apr;12(11):1-278, iii. doi: 10.3310/hta12110. PMID: 18405470.
- Jiang HY, Chen HZ, Hu XJ, Yu ZH, Yang W, Deng M, Zhang YH, Ruan B. Use of selective serotonin reuptake inhibitors and risk of upper gastrointestinal bleeding: a systematic review and meta-analysis. Clin Gastroenterol Hepatol. 2015 Jan;13(1):42-50.e3. doi: 10.1016/j.cgh.2014.06.021. Epub 2014 Jun 30. PMID: 24993365.

Cited by

- Iqbal F, Narayan A, Chatrath M, Iqbal M. Examining the Association Between Serotonergic Antidepressants and Blood Transfusion Requirements in Orthopaedic Surgery: A Comprehensive Analysis. Cureus. 2023 Sep 26;15(9):e45988. doi: 10.7759/cureus.45988. PMID: 37900430. Free PMC article.
- Vocca C, Siniscalchi A, Rania V, Galati C, Marcianò G, Palleria C, Catarisano L, Gareri I, Leuzzi M, Muraca L, Citraro R, Nanci G, Scuteri A, Bianco RC, Fera I, Greco A, Leuzzi G, De Sarro G, D'Agostino B, Gallelli L. The Risk of Drug Interactions in Older Primary Care Patients after Hospital Discharge: The Role of Drug Reconciliation. Geriatrics (Basel). 2023 Dec 16;8(6):122. doi: 10.3390/geriatrics8060122. PMID: 38132493. Free PMC article.
- Calagua-Bedoya EA, Rajasekaran V, De Witte L, Perez-Rodriguez MM. The Role of Inflammation in Depression and Beyond: A Primer for Clinicians. Curr Psychiatry Rep. 2024 Oct;26(10):514-529. doi: 10.1007/s11920-024-01526-z. PMID: 39187612. Review.
- John N, Ferri FA, Brito EM, Devineni MN, Newman MI. Selective Serotonin Reuptake Inhibitors (SSRIs) and Surgical Bleeding in Plastic Surgery: A Systematic Review. Cureus. 2025 Feb 25;17(2):e79639. doi: 10.7759/cureus.79639. PMID: 40151730. Free PMC article. Review.
- Lee MY, Heo KN, Shin J, Lee JY. A Novel Approach to Gastrointestinal Bleeding Risk Stratification and Proton Pump Inhibitor Effectiveness in Patients with Acute Coronary Syndrome on Dual Antiplatelet Therapy: A Nationwide Retrospective Cohort Study. Cardiovasc Drugs Ther. 2025 Apr 26. doi: 10.1007/s10557-025-07702-4. Online ahead of print. PMID: 40285928.
Publication types: Systematic Review; Network Meta-Analysis.

MeSH terms: Anti-Inflammatory Agents, Non-Steroidal / adverse effects; Gastrointestinal Hemorrhage / chemically induced; Gastrointestinal Hemorrhage / epidemiology; Humans; Selective Serotonin Reuptake Inhibitors / adverse effects.

Substances: Selective Serotonin Reuptake Inhibitors; Anti-Inflammatory Agents, Non-Steroidal.
153
Published Time: 2008-12-21T21:42:21Z
Vertex cycle cover - Wikipedia

From Wikipedia, the free encyclopedia

[Figure caption] The top graph is vertex covered by 3 cycles, and the cycles share both vertices (vertex 3) and edges (edge between 4 and 6). The middle graph is covered by 2 cycles, and while there is a vertex overlap (vertex 3), no edges are used twice, making the covering edge-disjoint. The bottom graph has a covering where no vertex or edge is shared between the cycles, making the covering both edge-disjoint and vertex-disjoint.

In mathematics, a vertex cycle cover (commonly called simply cycle cover) of a graph G is a set of cycles which are subgraphs of G and contain all vertices of G.

If the cycles of the cover have no vertices in common, the cover is called vertex-disjoint or sometimes simply a disjoint cycle cover. This is sometimes known as an exact vertex cycle cover. In this case the set of the cycles constitutes a spanning subgraph of G. A disjoint cycle cover of an undirected graph (if it exists) can be found in polynomial time by transforming the problem into a problem of finding a perfect matching in a larger graph.

If the cycles of the cover have no edges in common, the cover is called edge-disjoint or simply a disjoint cycle cover.

Similar definitions exist for digraphs, in terms of directed cycles. Finding a vertex-disjoint cycle cover of a directed graph can also be performed in polynomial time by a similar reduction to perfect matching. However, adding the condition that each cycle should have length at least 3 makes the problem NP-hard.

Properties and applications

Permanent
The permanent of a (0,1)-matrix is equal to the number of vertex-disjoint cycle covers of a directed graph with this adjacency matrix. This fact is used in a simplified proof showing that computing the permanent is #P-complete.

Minimal disjoint cycle covers
The problems of finding a vertex-disjoint and an edge-disjoint cycle cover with a minimal number of cycles are NP-complete. The problems are not in the complexity class APX. The variants for digraphs are not in APX either.

See also
Edge cycle cover, a collection of cycles covering all edges of G
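To make the permanent connection above concrete, here is a small illustrative sketch (not part of the article) that counts the vertex-disjoint cycle covers of a tiny digraph by brute force and compares the count with the permanent of its adjacency matrix, computed straight from the definition. Both routines are exponential-time and only meant for very small graphs.

```python
from itertools import permutations

def permanent(m):
    """Permanent of a square matrix, straight from the definition (O(n! * n))."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= m[i][p[i]]
        total += prod
    return total

def count_cycle_covers(adj):
    """Count vertex-disjoint cycle covers of a digraph given as a 0/1 adjacency matrix.

    A cover corresponds to choosing a successor p(i) for every vertex i such that the
    edge i -> p(i) exists and p is a permutation; the cycles of p form the cover."""
    n = len(adj)
    return sum(
        all(adj[i][p[i]] == 1 for i in range(n))
        for p in permutations(range(n))
    )

# Small example digraph (edges: 0->1, 0->2, 1->2, 2->0, 2->3, 3->0).
adj = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
]
assert permanent(adj) == count_cycle_covers(adj)
print(permanent(adj))  # number of vertex-disjoint cycle covers
```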
"Partition a graph into node-disjoint cycles". ^Tutte, W. T. (1954), "A short proof of the factor theorem for finite graphs"(PDF), Canadian Journal of Mathematics, 6: 347–352, doi:10.4153/CJM-1954-033-3, MR0063008, S2CID123221074. ^ (problem 1) ^Garey and Johnson, Computers and intractability, GT13 ^Ben-Dor, Amir and Halevi, Shai. (1993). "Zero-one permanent is #P-complete, a simpler proof". Proceedings of the 2nd Israel Symposium on the Theory and Computing Systems, 108-117. ^Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties (1999) ISBN3-540-65431-3p.378, 379, citing Sahni, Sartaj; Gonzalez, Teofilo (1976), "P-complete approximation problems"(PDF), Journal of the ACM, 23 (3): 555–565, doi:10.1145/321958.321975, MR0408313, S2CID207548581. Retrieved from " Categories: NP-complete problems Computational problems in graph theory This page was last edited on 9 February 2025, at 01:09(UTC). Text is available under the Creative Commons Attribution-ShareAlike 4.0 License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. Privacy policy About Wikipedia Disclaimers Contact Wikipedia Code of Conduct Developers Statistics Cookie statement Mobile view Edit preview settings Search Search [x] Toggle the table of contents Vertex cycle cover 2 languagesAdd topic
154
Introduction to Perturbation Methods (Texts in Applied Mathematics, 20): Holmes, Mark H.: 9781461454762: Amazon.com: Books
Introduction to Perturbation Methods (Texts in Applied Mathematics, 20), 2nd ed. 2013 Edition
by Mark H. Holmes (Author)
4.5 out of 5 stars, 11 ratings
Part of: Texts in Applied Mathematics (54 books)
This introductory graduate text is based on a graduate course the author has taught repeatedly over the last ten years to students in applied mathematics, engineering sciences, and physics. Each chapter begins with an introductory development involving ordinary differential equations, and goes on to cover such traditional topics as boundary layers and multiple scales. However, it also contains material arising from current research interest, including homogenisation, slender body theory, symbolic computing, and discrete equations. Many of the excellent exercises are derived from problems of up-to-date research and are drawn from a wide range of application areas. One hundred new pages have been added, including new material on transcendentally small terms, Kummer's function, weakly coupled oscillators and wave interactions.
Editorial Reviews

Review

From the reviews of the second edition:

"The book is composed of 6 chapters with the topics of Introduction to Asymptotic Approximations, Matched Asymptotic Expansions ... Second-Order Difference Equations, and Delay Equations. ... enjoyed reading this book that has a refreshing flavor to perturbation methods. ... The book can be used for both undergraduate and graduate courses in mathematics and physics and also in aerospace, electrical and mechanical engineering areas. Those working in industry will find this book useful in addressing some of the nonlinear problems in real-world situations." (D. Subbaram Naidu, Amazon.com, March, 2013)

"This introduction to perturbation methods is a rich, well-written ... textbook. ... Students and their instructors will benefit greatly from this author's evident broad understanding of applied mathematics and mechanics and his uncommon pedagogical abilities and scholarship. ... Holmes's text will be tough to beat for the ambitious and talented." (Robert E. O'Malley, Jr., SIAM Review, Vol. 55 (3), 2013)

"This is the second edition of the well-known book widely used by researchers in applied mathematics and physics, engineers, graduate and postgraduate students. Its distinctive feature is that it includes a variety of substantive, physically motivated examples on various kinds of functional equations, and also exercises both in and at the end of every chapter." (Boris V. Loginov, zbMATH, Vol. 1270, 2013)

From the Back Cover

This introductory graduate text is based on a graduate course the author has taught repeatedly over the last twenty or so years to students in applied mathematics, engineering sciences, and physics. Each chapter begins with an introductory development involving ordinary differential equations, and goes on to cover more advanced topics such as systems and partial differential equations. Moreover, it also contains material arising from current research interest, including homogenisation, slender body theory, symbolic computing, and discrete equations. Many of the excellent exercises are derived from problems of up-to-date research and are drawn from a wide range of application areas. For this new edition every section has been updated throughout, many only in minor ways, while others have been completely rewritten. New material has also been added. This includes approximations for weakly coupled oscillators, analysis of problems that involve transcendentally small terms, an expanded discussion of Kummer functions, and metastability.
Two appendices have been added, one on solving difference equations and another on delay equations. Additional exercises have been included throughout.

Review of the first edition: "Those familiar with earlier expositions of singular perturbations for ordinary and partial differential equations will find many traditional gems freshly presented, as well as many new topics. Much of the excitement lies in the examples and the more than 250 exercises, which are guaranteed to provoke and challenge readers and learners with various backgrounds and levels of expertise." (SIAM Review, 1996)

About the Author

Mark Holmes has written a number of successful textbooks and is Professor at Rensselaer Polytechnic Institute.

Product details

Publisher: Springer
Publication date: December 5, 2012
Edition: 2nd ed. 2013
Language: English
Print length: 456 pages
ISBN-10: 146145476X
ISBN-13: 978-1461454762
Item weight: 1.8 pounds
Dimensions: 6.25 x 1.05 x 9.25 inches
Part of series: Texts in Applied Mathematics
Customer reviews

4.5 out of 5 stars, 11 global ratings (5 star: 69%, 4 star: 25%, 3 star: 0%, 2 star: 0%, 1 star: 6%)

Top reviews from the United States

Dario Bojanjac, 4.0 out of 5 stars, "Book for every engineer and applied mathematician" (April 27, 2015, Verified Purchase): This book introduces the ideas behind perturbation methods in a very natural form. There are a lot of good examples from which you can learn important techniques and theory. The chapter about homogenization techniques is very well written and useful. If you study engineering of any kind, physics or applied math, you need this book.

D, 4.0 out of 5 stars, "Ok" (September 23, 2014, Verified Purchase): Useful as a companion text but hard to read on its own. But this is typical of books in this Springer series of texts in applied mathematics.

Frank Lin, 1.0 out of 5 stars, "One Star" (March 7, 2017): Exercises are almost impossible to solve, even for the professor who assigns this book as the textbook.

Francy_B, 4.0 out of 5 stars, "Good book to read and own" (February 5, 2014, Verified Purchase): Well written and organized clearly. Definitely a good starting point for an applied mathematics course. Sometimes the examples skip important steps. (One person found this helpful.)
D. Subbaram Naidu, 4.0 out of 5 stars, "Online Book Review" (March 27, 2013): Mark Holmes: Introduction to Perturbation Methods, Second Edition, Springer, New York, NY, 2013. Reviewed by D. Subbaram Naidu, Idaho State University. The book is composed of 6 chapters with the topics of Introduction to Asymptotic Approximations, Matched Asymptotic Expansions, Multiple Scales, The WKB and Related Methods, The Method of Homogenization, and Introduction to Bifurcation and Stability, and appendices on Taylor Series, Solution and Properties of Transition Layer Equations, Asymptotic Approximations of Integrals, Second-Order Difference Equations, and Delay Equations. This reviewer, with a background in singular perturbations and time scales in automatic control theory and applications, enjoyed reading this book, which has a refreshing flavor to perturbation methods. Some of the interesting and useful features of this book are the coverage of both ordinary and partial differential equations related to applications to nonlinear wave and diffusion problems and others; a generous sprinkling of examples and both section-wise and chapter-wise exercises throughout the book, paving the best way of "learning mathematics"; and solutions to some of the exercises, as well as the MATLAB files that were used to generate the figures, available from the author's web page. The book can be used for both undergraduate and graduate courses in mathematics and physics and also in aerospace, electrical and mechanical engineering areas. Those working in industry will find this book useful in addressing some of the nonlinear problems in real-world situations. (3 people found this helpful.)

Top reviews from other countries

Pedro Dav, 5.0 out of 5 stars, "honestly, excellent" (United Kingdom, October 10, 2019, Verified Purchase): Presented with high quality.
thermodynamics - How does the Stirling approximation give an exact formula for entropy? - Physics Stack Exchange

Asked 2 months ago, modified 2 months ago, viewed 2k times. Score: 14.

I'm watching Leonard Susskind's Statistical Mechanics lecture (lecture 3 on YouTube). We have an exact formula for the number of microstates $\Omega$:

$$\Omega = \binom{N}{n_1}\binom{N-n_1}{n_2}\binom{N-n_1-n_2}{n_3}\cdots = \frac{N!}{\prod_i n_i!}$$

We are interested in the log:

$$\ln\Omega = \ln N! - \sum_i \ln n_i!$$

We use Stirling's approximation $\ln N! \approx N\ln N - N + 1$, further shortened to $\ln N! \approx N\ln N - N$. And, after some algebra (I can put it below if desired), we end up with exactly

$$\ln\Omega \approx -N\sum_i p_i \ln p_i = N S,$$

where $p_i = n_i/N$ is the probability of state $i$. This is the formula for entropy $S$, scaled by the size of the system $N$.

So we start with an exact formula for $\Omega$, make an approximation, however good for large $N$, and end up with an exact formula for entropy. How is this possible? All I can think of is that either:

1. As $N \to \infty$ the Stirling approximation becomes arbitrarily accurate, so we end up with an exact formula only in the limiting case of large $N$. This is all well and good for the idea of a probability $p_i$; however, my understanding of every presentation I've seen of the formula $S = -\sum_i p_i \ln p_i$ is that it is exact even for not-so-large $N$. This would imply that the formula is incorrect for small $N$. Furthermore, I believe (correct me) that although the percentage error of the Stirling approximation decreases with $N$, the absolute error still grows.
2. One of these formulae is wrong.
I don't believe that either is the case, as they are both derived from first principles.

Tags: thermodynamics, statistical-mechanics, entropy

Asked May 31 at 19:40 by Furrier Transform; edited Jun 1 at 0:40 by Ján Lalinský.

Comments:

- Hagen von Eitzen (Jun 1 at 18:34): Simply put, statistics make little sense with tiny populations. What is the temperature of a single particle?
- Andrew Steane (Jun 3 at 9:43): @HagenvonEitzen I disagree. A single particle may not have a precise temperature, but it can have an energy and a volume within which it is free to move. There is also a precise way to assign an entropy, this entropy being a statistical property, for any system size, which reproduces the thermodynamic entropy in the thermodynamic limit.
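As a quick numerical check of the error claim above, here is a minimal Python sketch, assuming the two-term form $\ln N! \approx N\ln N - N$ from the lecture (the particular values of $N$ are arbitrary). It shows the absolute error growing, roughly like $\tfrac{1}{2}\ln(2\pi N)$, while the relative error shrinks.

```python
# Compare the exact ln N! with the two-term Stirling form N*ln N - N.
import math

for N in (10, 100, 1_000, 10_000, 1_000_000):
    exact = math.lgamma(N + 1)          # ln N!, exact to float precision
    stirling = N * math.log(N) - N      # the truncated form used in the lecture
    abs_err = exact - stirling          # grows roughly like 0.5*ln(2*pi*N)
    rel_err = abs_err / exact           # shrinks toward zero
    print(f"N={N:>9}  ln N!={exact:14.4f}  abs err={abs_err:8.4f}  rel err={rel_err:.2e}")
```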
6 Answers

Answer by Ján Lalinský (score 15, answered Jun 1 at 3:09, edited Jun 7 at 22:25):

This is really about the difference between the Boltzmann entropy and the discrete Gibbs-Shannon entropy (I will write just Gibbs entropy from now on), which is not only conceptual, but also numerical. We can show why the Gibbs entropy is given by what looks like an approximate formula for the Boltzmann entropy. Notice there are four concepts of entropy at play, all meaning different things.

The general Boltzmann entropy formula for a situation where there is a discrete number $\Omega$ of equally weighted microstates (definition by Planck):
$$S_B = k_B \ln\Omega. \tag{1}$$

The value of $S_B$ for a system in which there are $N$ distinguishable entities in $K$ departments, where we know the occupation numbers $n_i$ of every department (an array which we call the "distribution" $\{n_i\}$), and where all possible microstates that achieve such a distribution are considered equally weighted. Under these assumptions, it can be shown that
$$S_B^{(N\ \text{in}\ K)} = k_B \ln\frac{N!}{n_1!\,n_2!\cdots n_K!}. \tag{2}$$
This is a functional of the distribution $\{n_i\}$.

An approximate value of (2) (the higher the $N$, the better the approximation):
$$S_{B,\mathrm{approx}}^{(N\ \text{in}\ K)} = -k_B \sum_{i=1}^K n_i \ln\frac{n_i}{N}. \tag{3.1}$$

The same approximate value, but using the occupation fractions $w_i = n_i/N$:
$$S_{B,\mathrm{approx}}^{(N\ \text{in}\ K)} = -N k_B \sum_{i=1}^K w_i \ln w_i. \tag{3.2}$$

At this point, entropy (2) is just a mathematical concept relating to the distribution $\{n_i\}$ of $N$ entities in $K$ departments; no physical interpretation of the entities has been assumed, and no connection to any physical system or to its thermodynamic entropy has been shown. We want to make a connection to the Gibbs entropy formula (below). However, notice the factor $N$, which does not appear in the usual formula for the Gibbs entropy. It appears here because $i$ does not run over all microstates of the system of $N$ entities, but only over microstates of a single entity. Thus although we are really calculating the Boltzmann entropy of an "ensemble" of $N$ entities, we are using "inappropriate" states $i$ of a single entity, instead of the states of the ensemble (thus $i$ runs from 1 to $K$, instead of 1 to $K^N$). Hence the factor $N$ in the formula above.

The Gibbs entropy of a probability distribution $p_i$; notice the lack of the factor $N$:
$$S_G = -k_B \sum_{i=1}^K p_i \ln p_i. \tag{4}$$
This is better called information entropy or the Shannon entropy, but in physics the term Gibbs entropy sticks and is overloaded. Sometimes "Gibbs entropy" is used to refer to the value of this information entropy for the "correct" probability distribution $p_i^*$. Again, this is just a mathematical concept until we say which system is being described and what the $p_i$'s are probabilities of.

The Gibbs entropy, when taken as a functional of the probabilities of microstates of a single physical system, is an abstract quantity characterizing that probability distribution; its purpose in physics is that the probability distribution $p_i^*$ which maximizes it (while obeying the constraints implied by the macrostate $X$) is the correct distribution for that macrostate. The constraint can be, e.g., equal energy of all microstates, and then the result is equal probabilities (the microcanonical approach); or the constraint is fixed average energy and the result is probabilities decaying exponentially with energy (the canonical approach); there are other approaches. Also, it turns out the value of the Gibbs entropy, for the correct probabilities of microstates implied by the macrostate, is the statistical-physics estimate of the thermodynamic entropy of the single system in that macrostate (up to an additive constant).

Thus the approximate result (3.2) of the OP calculation based on Boltzmann's entropy formula does not really give the correct Gibbs entropy of a single system described by probabilities $p_i$; it gives the correct Gibbs entropy of a super-system that consists of $N$ independent such systems. However, this calculation can be interpreted in a more abstract way that makes it useful for motivating the Gibbs entropy: those $N$ entities in (2) are really $N$ imaginary copies of a physical system in the same macrostate, but in possibly different microstates, which we are considering in order to find the "correct" probability distribution for the single system. The point of the calculation is to use an ensemble of very many copies ($N$) of the system, apply the statistical argument to it, and derive the rule that the correct occupation fractions maximize (3.2). This result then motivates the definition of the Gibbs entropy functional (4) and the maximum information entropy principle.

Here is a short attempt at such an argument. Consider many copies ($N$) of the macroscopic system, all in the same macrostate $X$, but possibly in different microstates $i$, all of which are considered compatible with $X$ and equally weighted. If $N$ were very small, our ensemble would not sample all the possible microstates well and could not tell us which are most likely. Thus we should consider $N$ high enough that each microstate is occupied by very many individual systems. We are interested in the "correct" occupation fractions $w_i^*$ for all microstates of the single system. In other words, we seek the correct distribution of the $N$ systems among the microstates. The statistical argument is that in the limit $N\to\infty$, "correct" can be given the meaning "the most probable", or "realizable by the greatest number of ways, that is, microstates of the whole ensemble". Thus we are after the distribution $w_i^*$ implying the highest value of the Boltzmann entropy of the ensemble (2).
Maximizing the formula (2), and even (3.2), exactly for finite $N$ is hard, since the $n_i$'s and $w_i$'s cannot assume all values in between integers or rationals; the allowed values and the maximizing distribution depend somewhat, in a not easily expressible way, on $N$. However, the higher the $N$, the smaller this dependence, and we expect that in the limit $N\to\infty$ both the exact and the approximate formula have the same maximizing occupation fractions $w_i^*$. The limiting maximizing distribution thus does not depend on the value of $N$, and so to find it, we can instead maximize directly the modified expression obtained from (3.2) by dividing by $N$ and replacing the $w_i$'s by $p_i$'s. Thus we can instead maximize the Gibbs formula (4) to find the limiting maximizing probabilities $p_i$.

The expression (3.2) gives a value that is $N$ times higher than the Gibbs entropy of the system. This is logical, as in (2) we really have the Boltzmann entropy of $N$ copies of the actual system. So, contrary to what you may have thought, the calculation did not derive the exact formula for the Gibbs entropy (4), but only the formula (3.2) for the approximate value of the Boltzmann entropy of $N$ copies. In this view, the calculation resulting in (3.2) does not derive the Gibbs entropy formula. However, (3.2) and its maximization are a motivation to define the Gibbs entropy functional exactly by (4).

The Boltzmann entropy (1) and the Gibbs entropy (4) are different mathematical concepts: one is related to the finite multiplicity of a distribution of a finite number of things, the other is a functional of real-valued probabilities. The Gibbs entropy can be regarded as something like a "continuous extrapolation" of the concept of Boltzmann entropy (2) per single entity: we divide by the number of entities $N$ to keep the quantity finite and free of the auxiliary parameter $N$, and take the limit $N\to\infty$. Also, we replace the ratios $w_i$ by real-valued probabilities $p_i$, which can then assume values from the continuous interval $\langle 0,1\rangle$.

"we start with an exact formula for $\Omega$, make an approximation, however good for large $N$, and end up with an exact formula for entropy. How is this possible?"

We rather ended up with an approximate formula for the Boltzmann entropy of $N$ copies of the system, which turns out to be very similar to the Gibbs entropy of those $N$ copies. The Boltzmann entropy per one entity ((2) divided by $N$) is given approximately by the Gibbs entropy of a single entity (4). These two numbers agree very well for very high $N$, but of course we should not expect that when $N$ is low. They are exactly the same only in the limit $N\to\infty$ and $w_i\to p_i$.
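A minimal numerical sketch of the gap this answer describes, assuming $k_B = 1$ and an arbitrary three-state set of occupation fractions: the Boltzmann entropy (2) per copy approaches the Gibbs entropy (4) only as $N$ grows.

```python
# Boltzmann entropy (2) per copy, (1/N) * ln( N! / prod_i n_i! ),
# versus the Gibbs entropy (4), -sum_i w_i ln w_i, for fixed fractions w_i.
import math

w = (0.5, 0.25, 0.25)                    # occupation fractions w_i (assumed example)
gibbs = -sum(x * math.log(x) for x in w) # equation (4) with p_i = w_i

for N in (8, 80, 800, 8_000, 80_000):
    n = [round(N * x) for x in w]        # integer occupation numbers n_i = N*w_i
    ln_multiplicity = math.lgamma(N + 1) - sum(math.lgamma(k + 1) for k in n)
    per_copy = ln_multiplicity / N       # equation (2) divided by N (k_B = 1)
    print(f"N={N:>6}  Boltzmann/N={per_copy:.6f}  Gibbs={gibbs:.6f}")
```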
Answer by hft (score 5, answered May 31 at 19:43, edited May 31 at 19:50):

How does the Stirling approximation give an exact formula for entropy? It doesn't. As discussed in many, many posts on this website, the word "entropy" has different meanings in different contexts. (Some unfortunately only slightly different.)

Comments:

- naturallyInconsistent: +1. But I'd emphasise that we kinda get to choose which entropy function we wish to apply, and experiment will then tell us if our choice is bad or tolerable.
- Furrier Transform: So I'm aware of the thermodynamic "Gibbs" definition $\mathrm{d}S = \delta Q/T$ and the related definition for temperature $T = \partial E/\partial S$, but I am interested in the statistical definition which he introduced in terms of countable microstates. I view Shannon and Boltzmann entropies as equivalent, up to a log base, and the units of $k_B$ can be rationalised away. Using the sum or integral $-k_B\sum p_i\ln p_i$ is just the more general form of $k_B\ln\Omega$.
- Furrier Transform: At any rate, I believe that the Gibbs and Boltzmann entropies can be shown to be equivalent, as mentioned in that first linked post. So for this microstate definition, without worrying about units of $k_B$, we have $\Omega = N!/\prod n_i!$ as exact. But I also followed the derivation of $-\sum p_i\ln p_i$ from first principles, consistent with the definition of $\Omega$. I want to know whether using the Stirling approximation still results in our pre-derived answer.
- hft: "I view Shannon and Boltzmann entropies as equivalent..." But they are not equivalent. One can be viewed as a special case of the other. Did you look at the accepted answer to the first question I linked to: physics.stackexchange.com/questions/709644/… ?
- hft: If you are going to define "entropy" to be the log of the number of states then you cannot also define "entropy" as $-\sum_i p_i\log p_i$. (Unless you want to be confused/confusing.)

(3 more comments not shown.)

Answer by Sebastian Riese (score 3, answered Jun 1 at 12:22):

One way to interpret this is to say that the formula $S = \log\Omega$ only holds for the microcanonical ensemble. There we prepare the system by taking all states in some intensive energy interval and assigning them the same a priori probability. If the distribution is uniform, then
$$-\sum p_i\log p_i = -\sum \frac{1}{\Omega}\log\frac{1}{\Omega} = \log\Omega$$
exactly, even for small $N$.

The connection to classical thermodynamic entropy only works for either a microcanonical ensemble of large systems, or for a system coupled to a heat bath, which again can be seen as a larger "total system" made up of the bath and the system of interest.

When explicitly doing the analysis of increasing system size for a toy system in the microcanonical ensemble (such as a "lattice gas") to reach the thermodynamic limit, you will note that the entropy is not even exactly extensive. That is, if you double the system size, the entropy doesn't double. The leading term $S_{\text{ext}}$ is extensive, but there are sub-leading corrections $S_{\text{sub}}$. However, for large systems those corrections are such that $\lim S_{\text{sub}}/S_{\text{ext}} = 0$ as you go to the limit. And this is the power of thermodynamics: the simple results obtained as you reach the thermodynamic limit, and simple results for the extensive part of the thermodynamic variables. The exact results, on the other hand, are not very useful for large systems.
(Of course, ensembles of small systems coupled to heat baths can be handled precisely, and there the non-leading corrections are often relevant, but you need to do a precise statistical analysis and clearly state how the ensemble is prepared.)

Comments:

- Ján Lalinský: By entropy $S$, and in "entropy is not even exactly extensive", do you mean the Gibbs entropy? Also, what do you mean by "intensive energy interval"?
- Sebastian Riese: Intensive means a quantity that stays constant when increasing the system size. This is necessary for setting up the microcanonical ensemble properly (if you just say $\Omega$ is the degree of degeneracy when prepared at some energy level $E$, you run into the problem of systems where that degeneracy is lifted by a perturbation, and then suddenly $\Omega = 1$).
- Sebastian Riese: That it is not exactly extensive holds for the Boltzmann entropy as well as the Gibbs entropy in the microcanonical ensemble.
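A short sketch of the non-extensivity described in this answer, for an assumed toy system with $\Omega(N) = \binom{N}{N/2}$ (a two-level "lattice gas" at half filling, $k_B = 1$): the mismatch $S(2N) - 2S(N)$ keeps growing slowly, but becomes negligible relative to $S$ itself.

```python
# Toy microcanonical system: N two-level sites at half filling,
# Omega(N) = C(N, N/2), S(N) = ln Omega(N).  Doubling the system does not
# exactly double S; the mismatch is sub-leading and vanishes relative to S.
import math

def S(N):
    # ln( N! / ((N/2)!)^2 ), the Boltzmann entropy of the toy system (k_B = 1)
    return math.lgamma(N + 1) - 2 * math.lgamma(N // 2 + 1)

for N in (4, 40, 400, 4_000, 40_000):
    mismatch = S(2 * N) - 2 * S(N)   # would be 0 if S were exactly extensive
    print(f"N={N:>6}  S(2N)-2S(N)={mismatch:8.4f}  relative={mismatch / S(2 * N):.2e}")
```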
The normal "Boltzmann entropy" S=log Ω S=log⁡Ω is a special case that applies in the situation where each state has uniform probability. Then p i=1/k p i=1/k, and S=−∑i 1 k log 1 k=log k S=−∑i 1 k log⁡1 k=log⁡k where in this case k k is the number of microstates. Exact calculation of entropy Now we start to make our way back to your problem, but I'm going to approach it a different way, first. We will build to an exact calculation of the entropy in steps. First, note that a set of occupation numbers {n 1,n 2,⋯,n k}{n 1,n 2,⋯,n k}, with n 1+n 2+⋯+n k=N n 1+n 2+⋯+n k=N, can be thought of as a random draw from a probability distribution over k k states. In other words, we can assign probabilities {p 1,p 2,⋯,p k}{p 1,p 2,⋯,p k} (with ∑k p k=1∑k p k=1) to the k k states. Then if we randomly assign N N particles to the k k states, we will get a histogram {n 1,n 2,⋯,n k}{n 1,n 2,⋯,n k}. It is not the case in general that n i/N=p i n i/N=p i. For example, if k=2 k=2, then you can imagine you assigned each particle to one of the two states by flipping a coin. If N=100 N=100, you don't expect to get exactly 50 heads and 50 tails, you might get 54 heads and 46 tails for instance. But as N N gets larger then the ratio n i/N n i/N will approach p i p i by the law of large numbers. Where we are headed, intuitively, is that when you used Stirling's approximation you implicitly used the law of large numbers in this sense. Second, in thermodynamics we are interested in the probability distribution that maximizes the entropy. Here I'm going to introduce the special case k=2 k=2, so that we can do a few analytical calculations easily. Now let's start with N=1 N=1 particle. Suppose the probability of state 1 1 is p p, so the probability of state 2 2 is 1−p 1−p. Then the entropy is S(p)=−p log p−(1−p)log(1−p)S(p)=−p log⁡p−(1−p)log⁡(1−p) You can find the maximum of S(p)S(p) by solving d S/d p=0 d S/d p=0; it is an easy calculation and is solved for p=1/2 p=1/2, meaning the two microstates are equally likely. (With k k states we would find p=1/k p=1/k). Plugging p=1/2 p=1/2 into S(p)S(p), we find the entropy is S=log 2 S=log⁡2, or S=log k S=log⁡k for k k states, consistent with the Boltzmann entropy. So we see the Boltzmann entropy applies after we have maximized the entropy over a set of probability distributions. Third, you might have noticed there's a factor of N N between S=log k S=log⁡k and S=N log k S=N log⁡k that you wrote in your answer. That's because so far we have only looked at 1 1 particle. Generalizing to N N non-interacting distinguishable particles (for k=2 k=2 states) is straightforward. Particle 1 1 has a probability p 1 p 1 to be in state 1 1 and 1−p 1 1−p 1 to be in state 2 2; particle 2 2 has probability p 2 p 2 to be in state 1 1 and 1−p 2 1−p 2 to be in state 2 2, etc. Since the particles are independent, the N N particle probability distribution is just the product of the N N 1 1 particle distributions. So the probability that particle 1 1 is in state i 1 i 1 (which could be 0 0 or 1 1), particle 2 2 is in state i 2 i 2, etc, is p(i 1,i 2,⋯,i N)=(p i 1 δ i 1,1+(1−p 1)δ i 1,2)(p i 2 δ i 2,1+(1−p 2)δ i 2,2)⋯(p i N δ i N,1+(1−p N)δ i N,2)p(i 1,i 2,⋯,i N)=(p i 1 δ i 1,1+(1−p 1)δ i 1,2)(p i 2 δ i 2,1+(1−p 2)δ i 2,2)⋯(p i N δ i N,1+(1−p N)δ i N,2) where δ a,b=1 δ a,b=1 if a=b a=b and 0 0 otherwise. 
So the overall entropy as a function of $p_1, p_2, \cdots, p_N$ is
$$S(p_1, p_2, \cdots, p_N) = \sum_{i=1}^N \big(-p_i\log p_i - (1-p_i)\log(1-p_i)\big)$$
You can again maximize this as a function of $p_1, \cdots, p_N$, and find that the maximum entropy distribution has $p_1 = p_2 = \cdots = p_N = 1/2$. In other words, each particle has a $1/2$ probability to be in either state 1 or state 2. However, the configuration of the whole system involves specifying the state of each particle, which means there are $2^N$ total states. Since each of these states is equally likely (you can either say that's an assumption of thermodynamics, or you can say that we argued this is the maximum entropy distribution), the probability of each state is $2^{-N}$. That means the overall entropy of the maximum entropy distribution is
$$S = N\log 2$$
You can calculate this a number of ways. First, you can take the log of the number of microstates, $2^N$. Second, you can evaluate $-\sum_i p_i\log p_i$ with $2^N$ states each with probability $2^{-N}$ (i.e., evaluating $S(1/2, 1/2, \cdots, 1/2)$). Third, you can use the fact that entropy is additive, meaning that if $p(x,y) = p(x)p(y)$, then $S[p(x,y)] = S[p(x)] + S[p(y)]$, so the entropy for $N$ non-interacting particles is just $N$ times the entropy of 1 particle, or $N\log 2$.

The main reason I went through this in such explicit detail is to point out that there are states in the above distribution that don't have $n_i/N = p_i = 1/2$ (where $n_i$ is the occupation number; remember that since $k=2$ we have $n_1$ and $n_2$ with $n_1 + n_2 = N$). For example, there is a non-zero probability that all $N$ particles are in state 1 and zero are in state 2. (In the notation above, this corresponds to $i_1 = i_2 = \cdots = i_N = 1$.) The probability is very small for large $N$, specifically $2^{-N}$. But nevertheless, those states exist in the ensemble. In the next section, we'll show that the core of your question is that the approximation you made was to ignore those states, and that at large $N$ those states can be ignored.

Approximate calculation of entropy at large $N$

So, now, finally, let's come back to the calculation you did for $\Omega$. Let's again fix $k=2$ so we can simplify the algebra, and suppose that we are looking at the maximum entropy distribution, with $p = 1/2$. You argued that
$$\Omega = \frac{N!}{n_1!(N-n_1)!} = \binom{N}{n_1}$$
is the number of states. In fact, this is only the number of states for one particular way of arranging the particles. In general, the real value of $\Omega$ involves summing over all the ways we can arrange the particles:
$$\Omega = \sum_{n_1=0}^{N}\binom{N}{n_1} = 2^N$$
where the second equality follows from the sum of binomial coefficients. Physically it's easy to understand this result: given that we have $N$ distinguishable particles and 2 states, we can decide the configuration of each independently, so we get $2^N$ possibilities. Since we assume each configuration is equally likely, the entropy reduces to the Boltzmann entropy, and $S = \log\Omega = N\log 2$, as we expect (and as we calculated multiple ways in the previous section).

What happens at large $N$ is that the occupation fractions $n_i/N$ become sharply peaked around the probabilities $p = 1/k$. So the sum over $n_1$ in $\Omega$ can be approximated by just the term with $n_1/N = n_2/N = 1/2$.
So
$$S = \log\Omega = \log\sum_{n_1}\frac{N!}{n_1!(N-n_1)!} \approx \log\frac{N!}{\big((N/2)!\big)^2} = \log N! - 2\log\!\left(\frac{N}{2}\right)!$$
Then we can use Stirling's approximation $\log N! \approx N\log N - N$ to evaluate the factorials at large $N$:
$$S \approx N\log N - N - N\log\frac{N}{2} + N = N\log 2$$
Since all these approximations become better and better at large $N$, the exact calculation of the entropy of the maximum entropy distribution done above agrees, in the large-$N$ limit, with the approximate calculation using Stirling's approximation plus the limiting value $n_i/N = p$.

Appendix: a different (but related) problem

As a technical aside not directly related to your question, one point I found surprising while writing this up is that there is a subtle distinction between the entropy of the binomial distribution and the entropy of the microcanonical ensemble for $k=2$ we looked at above. The ultimate reason is that the binomial distribution is appropriate for identical bosons at zero temperature with a degenerate ground state, whereas above we were looking at distinguishable particles with a fixed energy for the total system.

The binomial distribution tells us the probability of getting $n_1$ successes, given $N$ trials, when the probability of success is $p$. The distribution is
$$p(n_1) = \binom{N}{n_1} p^{n_1}(1-p)^{N-n_1}$$
and, using Stirling's approximation, one can show that for large $N$ the entropy is approximately
$$S \approx \frac{1}{2}\log\big(2\pi e N p(1-p)\big)$$
The maximum entropy distribution has $p = 1/2$, in which case
$$p(n_1) = \binom{N}{n_1}\left(\frac{1}{2}\right)^{n_1 + N - n_1} = \binom{N}{n_1} 2^{-N}$$
and, using Stirling's approximation, the entropy is approximately
$$S = \frac{1}{2}\log\!\left(\frac{\pi e N}{2}\right)$$
This grows as $\log N$, not like $N$ as we saw above. The issue is that the binomial distribution only counts the number of successes and doesn't distinguish between trials. In other words, the binomial distribution tells you the probability of getting 1 head out of 100 trials, not the probability of the first trial being heads. In the microcanonical ensemble used in the main question, each of the $\Omega = 2^N$ states has probability $2^{-N}$. We have $2^N$ states because we can assign each particle to be in state 1 or 2 independently. This counting is not the same for identical bosons.
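A compact check of the two counting statements in this answer (the values of $N$ below are arbitrary): the configurations really do sum to $2^N$, and the log of the single central term approaches $\ln 2^N = N\ln 2$.

```python
# k = 2 counting: the sum over configurations equals 2**N exactly, and the
# single central term C(N, N/2) captures most of ln(Omega) at large N.
import math

# Exact check of the sum over configurations for a small N:
N_small = 12
assert sum(math.comb(N_small, n1) for n1 in range(N_small + 1)) == 2 ** N_small

# How well the central term alone approximates ln(Omega) as N grows:
for N in (10, 100, 1_000, 100_000):
    ln_omega = N * math.log(2)                                     # ln(2**N)
    ln_central = math.lgamma(N + 1) - 2 * math.lgamma(N // 2 + 1)  # ln C(N, N/2)
    print(f"N={N:>6}  ln C(N,N/2) / ln 2^N = {ln_central / ln_omega:.6f}")
```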
Answer by Andrew Steane (score 3, answered Jun 2 at 17:36, edited Jun 3 at 8:48):

Existing answers include some good ones, but they are a bit long. Sometimes short is better.

Answer: In the microcanonical ensemble, the two formulae agree exactly:
$$-\sum_i p_i\ln p_i = -\ln p_1 = \ln\Omega$$
since $p_1 = p_i = 1/\Omega$.

In the canonical ensemble, the quantity $\Omega$ (for the system, as opposed to system plus thermal reservoir) is not a single number but is distributed over a range of values, because the system can exchange heat with the reservoir. It is not true, for example, that $\Omega$ is simply $N!/\prod n_i!$, but the log of that value is a good approximation to the log of $\Omega$ at large $N$.

My own notes on this can be found here (under the link "Introduction to statistical thermal physics; includes careful definition of entropy and derivation of basic results"); it is section 6 of the latter that handles the question asked here. This reference might be regarded as self-advertising, but it is an attempt to be helpful. The most commonly used undergraduate textbooks either handle this issue incorrectly or are rather vague about what is being assumed.

Answer by Claudio Saspinski (score 2, answered May 31 at 23:19):

What is important in my opinion is the justification for using the logarithm of the formula that was derived. After all, the method of Lagrange multipliers should be used on $\Omega$. And if the Stirling approximation is used directly there, it is
$$n! = \sqrt{2\pi n}\; n^n \exp(-n).$$
The square root term is necessary for the correct approximation as $n$ grows. But taking $\log(n!) = \log\sqrt{2\pi n} + n\log n - n$, the ratio of using only the last two terms of the right-hand side (neglecting the square root term) over the full right-hand side tends to unity as $n$ grows.

But why is it valid to apply the Lagrange multipliers to $\log\Omega$? Because the results will be the same as for $\Omega$. It can be shown by comparing the maximization of $f(x,y) = xy$ (both positive) with the restriction $y = 1 - x$, and of $\log(xy)$ under the same restriction. We get $x = y = \tfrac{1}{2}$ both ways. Of course it is still an approximation, but when $n > 10^6$ or bigger it is fairly accurate.

Comments:

- Furrier Transform: I follow why using Lagrange multipliers to maximise $\Omega$ is valid, but I'm interested in the fact that even before maximising, or worrying about what that set of $n_i$'s might be, we get that $\ln\Omega = -N\sum p_i\ln p_i = NS$. So I'm curious: is $S = -\sum p_i\ln p_i$ an approximation, somehow, despite the fact that we derived it from first principles? Assuming that this is the type of entropy we are interested in, does $S = -\sum p_i\ln p_i$ only work for large $N$? Why would it be wrong for small $N$? What would the error be?
- Claudio Saspinski: If you test for $n = 10$, $n = 100$, or $n = 1000$, you can see the magnitude of the approximation.
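A short numerical sketch of the ratio described in this answer (the values of $n$ are arbitrary): the two-term $n\ln n - n$ over the full expression including the $\sqrt{2\pi n}$ term tends to unity.

```python
# Ratio of the two-term log-Stirling to the full form including sqrt(2*pi*n).
import math

for n in (10, 1_000, 100_000, 10_000_000):
    full = 0.5 * math.log(2 * math.pi * n) + n * math.log(n) - n   # with the sqrt term
    truncated = n * math.log(n) - n                                # without it
    print(f"n={n:>9}  truncated/full = {truncated / full:.8f}")
```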
algebraic topology - Examples of 4-manifolds with nontrivial third Stiefel-Whitney class $w_3$. - Mathematics Stack Exchange ===============

Examples of 4-manifolds with nontrivial third Stiefel-Whitney class $w_3$.

Asked 6 years, 8 months ago. Modified 6 years, 8 months ago. Viewed 553 times. Score 11.

What are some examples of $4$-manifolds $M$ for which the class $w_3(TM) \in H^3(M;\mathbb{Z}/2)$ is nontrivial? Is there a mapping torus with this property?

Motivation: I am wondering whether any such $4$-manifolds can be "built out of" a $3$-manifold by the mapping torus construction, despite the fact that $w_3$ vanishes on $3$-manifolds. In asking myself this, I realized that my go-to examples of $4$-manifolds -- $\mathbb{RP}^4$, $\mathbb{CP}^2$, and $K3$ -- all have trivial $w_3$.

Tags: algebraic-topology, characteristic-classes, 4-manifolds. (asked Dec 4, 2018 at 6:16 by aaa, edited Dec 4, 2018 at 18:32)

Comments:

$\mathbb{RP}^2 \times \mathbb{RP}^2$ is a non-orientable $4$-manifold with non-vanishing $w_3$. I have to think about the mapping cylinder. –ThorbenK Commented Dec 4, 2018 at 12:29

Presumably you mean mapping torus instead of cylinder: the latter is not a very interesting manifold. For the former, you should be able to explicitly calculate the SW classes of $f: M \to M$ in terms of $w_i(M)$ and the map $f^*$. –user98602 Commented Dec 4, 2018 at 16:39

@ThorbenK Right, by Whitney product. I see. Thanks! –aaa Commented Dec 4, 2018 at 18:22

@MikeMiller Right, I meant mapping torus. –aaa Commented Dec 4, 2018 at 18:22

@ThorbenK Was this question a related idea?
I tried to do a calculation of $w_3$ (defined as the Poincaré dual to the locus where two generic sections become linearly dependent) using the obvious vector field as well as a homotopy between a nonvanishing tangent field $X$ on $M$ and $f_* X$. I didn't get anywhere, which is one of the reasons I started thinking the answer was "yes, such a mapping torus exists". –user98602 Commented Dec 5, 2018 at 21:36

2 Answers

Answer (score 5):

This is a partial answer, showing what one should demand if they were to try to find an example with $w_3(T_f) \neq 0$: If $M$ is an orientable 4-manifold then $w_3(M) = 0$. So $f$ would have to be either an orientation-reversing diffeomorphism or a diffeomorphism of a non-orientable manifold.

To prove this, use Wu classes. These are the classes $\nu_i \in H^i(M;\mathbb{Z}/2)$ for which the two maps $H^{n-i}(X;\mathbb{Z}/2) \to \mathbb{Z}/2$ given by $\nu_i \cdot x$ and $Sq^i x$ agree. Wu's theorem is that we have the property
$$w_k = \sum_{i+j=k} Sq^i \nu_j.$$
We see from the definition that $\nu_3 = 0$ because $Sq^3$ vanishes on classes of degree less than $3$, and we see from orientability that $\nu_1 = 0$, and hence $\nu_2 = w_2$. Therefore we have $w_3 = Sq^1 w_2$. (In fact, this is true for an arbitrary 4-manifold; one needs to argue that $(Sq^1)^2 w_1$, which in principle contributes, is always zero.)

The operation $Sq^1$ is sometimes better known as the Bockstein map. This map factors as the composite of the integral Bockstein $\beta_{\mathbb{Z}}: H^2(M;\mathbb{Z}/2) \to H^3(M;\mathbb{Z})$ and reduction of coefficients modulo 2, so it suffices to show that if $M$ is an oriented closed 4-manifold, we have $\beta_{\mathbb{Z}} w_2(M) = 0$.

A really elegant proof of this fact is given in the main proposition of this short note. I will not reproduce it. The essential point is that the Bockstein long exact sequence shows that $\beta_{\mathbb{Z}} w_2(M) = 0$ if and only if $w_2(M)$ lifts to an integral class, and that note explains how to show that $w_2(M)$ lifts to an integral class.

(answered Dec 4, 2018 at 20:39 by user98602, edited Dec 5, 2018 at 0:01)

Comments:

Here's another proof that $(Sq^1)^2 w_1 = 0$, or more generally, that $(Sq^1)^2: H^1(X;\mathbb{Z}/2\mathbb{Z}) \to H^3(X;\mathbb{Z}/2\mathbb{Z})$ is always the zero map. Namely, since $\mathbb{RP}^\infty$ represents $H^1(X;\mathbb{Z}/2\mathbb{Z})$, it's enough to verify it in just that case. Writing $H^*(\mathbb{RP}^\infty;\mathbb{Z}/2\mathbb{Z}) \cong \mathbb{Z}/2\mathbb{Z}[x]$, we have $Sq^1(Sq^1 x) = Sq^1(x^2) = (Sq^1 x)x + x(Sq^1 x) = 0$. –Jason DeVito - on hiatus Commented Dec 6, 2018 at 1:34

@JasonDeVito That makes one proof, since I didn't give one! –user98602 Commented Dec 6, 2018 at 1:57

Ha! I need more caffeine. I seem to recall that $(Sq^1)^2 = 0$ (not just on $H^1$ but on any $H^*$). I'll try to remember the reference or proof... –Jason DeVito - on hiatus Commented Dec 6, 2018 at 2:32

@JasonDeVito I'm sure Hatcher proves it somewhere. (As usual, I'm too lazy to find the actual reference: I have said the previous sentence too many times.)
–user98602 Commented Dec 6, 2018 at 2:42

@JasonDeVito Since $Sq^1 = \rho_2 \beta_{\mathbb{Z}}$, where $\rho_2$ is mod 2 reduction, for any class $\alpha$ the class $Sq^1\alpha$ has an integral lift (namely the integral Bockstein of $\alpha$). Since the integral Bockstein vanishes on classes with integral lifts, we have $Sq^1 Sq^1 \alpha = 0$. Symbolically, $Sq^1 Sq^1 = (\rho_2 \circ \beta_{\mathbb{Z}}) \circ (\rho_2 \circ \beta_{\mathbb{Z}}) = \rho_2 \circ (\beta_{\mathbb{Z}} \circ \rho_2) \circ \beta_{\mathbb{Z}} = 0$. –Aleksandar Milivojević Commented Dec 6, 2018 at 23:52

Answer (score 2):

Here is an idea I came up with, but I was only able to essentially reduce it to this question. I had hoped to prove that $w_3$ of the mapping torus of a diffeomorphism of an orientable $3$-manifold is zero, using the fact that orientable $3$-manifolds have trivial tangent bundles and the definition of Stiefel-Whitney classes as obstruction classes. In the end this gave a good method to produce counterexamples, I think.

Let $M$ denote an orientable $3$-manifold and $f$ an orientation preserving diffeomorphism. I will denote the mapping torus by $T_f$ and the inclusion of some fiber by $\iota: M \to T_f$. Fix a trivialization of $T\iota(M)$, which is possible by the aforementioned fact. Since we know that $\iota^*(w_i(T_f)) = w_i(\iota^* TT_f) = w_i(TM \oplus \mathbb{R}) = 0$, we conclude that $w_i(T_f)$ comes from some class in $H^i(T_f, \iota(M); \mathbb{Z}/2\mathbb{Z})$. Using excision, the inclusion of the pair $(M \times I, M \times \partial I) \to (T_f, \iota(M))$ induces an isomorphism on cohomology. Therefore we have to understand how the Stiefel-Whitney classes of $(M \times I, M \times \partial I)$ behave. Since $T(M \times I) \cong \pi^* TM \oplus \mathbb{R}$, where $\pi$ denotes the projection $M \times I \to M$, and this splitting respects the fixed framing at $M \times \partial I$, we have to understand $w_3(\pi^* TM, \pi^* TM|_{M \times \partial I})$. Note that this is the mod $2$ reduction of the relative Euler class. Furthermore, note that if we fix some non-vanishing section $\phi$ of $\pi^* TM|_{M \times \{0\}}$, then the section at $\pi^* TM|_{M \times \{1\}}$ is given by $f_*\phi((x,1)) = Df_{f^{-1}(x)}\bigl(\phi(f^{-1}(x))\bigr)$. All in all this should imply that $w_3$ is the mod $2$ reduction of the obstruction class for a homotopy between $\phi$ and $f_*(\phi)$. Therefore we are left with the question of how homotopy classes of non-vanishing vector fields on orientable $3$-manifolds behave under diffeomorphisms of said manifold, which is exactly the aforementioned question. Nevertheless, note that $[M, S^2]$, which is the set of homotopy classes of vector fields, surjects quite naturally onto $H^2(M;\mathbb{Z})$. So maybe it is possible to deduce the existence of a vector field $\phi$ and a diffeomorphism $f$ such that the obstruction class for a homotopy between $f_*\phi$ and $\phi$ is non-zero mod $2$ using the action of $f$ on $H^2(M)$, but I'm tired right now, so I will think about this last part tomorrow.

(answered Dec 5, 2018 at 22:28 by ThorbenK, edited Dec 5, 2018 at 23:00)

Comments:

The natural surjection I see is to $H^1(M;\mathbb{Z})$, taking the preimage of a generic point. I definitely think this should work - I was trying to show $w_3$ is zero this way and when I couldn't, I decided it probably wasn't. I think a nice manifold with a good vector field is what we need.
–user98602 Commented Dec 5, 2018 at 22:45

Yes, you are right (as I said, I'm tired). There is a natural surjection to $H^2(M;\mathbb{Z})$ since $S^2$ is the $3$-skeleton of $\mathbb{CP}^\infty$. Like I said, maybe it is enough to consider the action of $f$ on $H^2(M)$, but I'm completely unable to check this right now. I will attempt this tomorrow. –ThorbenK Commented Dec 5, 2018 at 22:50

It's a very good idea to my eye (which I similarly am too tired and busy to carry out). –user98602 Commented Dec 5, 2018 at 22:50
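The example in the very first comment can be verified directly: $w(\mathbb{RP}^2) = (1+x)^3$ in $\mathbb{Z}/2[x]/(x^3)$, and by the Whitney product formula $w(\mathbb{RP}^2 \times \mathbb{RP}^2) = (1+x)^3(1+y)^3$, whose degree-3 part is $x^2 y + x y^2 \neq 0$. Here is a minimal Python sketch of that computation (not from the thread; polynomials over $\mathbb{Z}/2$ are stored as plain dictionaries).

```python
# Check w_3(RP^2 x RP^2) != 0 using w(RP^2) = (1+x)^3 in Z/2[x]/(x^3) and the
# Whitney product formula.  Polynomials in x, y are stored as {(i, j): coefficient mod 2}.

def mult(p, q):
    """Multiply two polynomials over Z/2, truncating x^3 = y^3 = 0."""
    out = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            i, j = i1 + i2, j1 + j2
            if i < 3 and j < 3:                      # kill x^3 and y^3
                out[(i, j)] = (out.get((i, j), 0) + c1 * c2) % 2
    return {k: c for k, c in out.items() if c}

one = {(0, 0): 1}
wx = {(0, 0): 1, (1, 0): 1}                          # 1 + x
wy = {(0, 0): 1, (0, 1): 1}                          # 1 + y

# w(RP^2 x RP^2) = (1+x)^3 (1+y)^3 in Z/2[x, y]/(x^3, y^3)
total = one
for factor in (wx, wx, wx, wy, wy, wy):
    total = mult(total, factor)

w3 = {k: c for k, c in total.items() if sum(k) == 3}  # degree-3 part
print("w_3 =", w3)   # expect {(2, 1): 1, (1, 2): 1}, i.e. x^2 y + x y^2 != 0
```

This is just the "by Whitney product" remark from the comments spelled out degree by degree.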
Published Time: Mon, 28 Jul 2025 17:15:15 GMT The Feynman Lectures on Physics Vol. III Ch. 18: Angular Momentum ===============

(The recording of this lecture is missing from the Caltech Archives.)

18 Angular Momentum

18–1 Electric dipole radiation

In the last chapter we developed the idea of the conservation of angular momentum in quantum mechanics, and showed how it might be used to predict the angular distribution of the proton from the disintegration of the Λ-particle. We want now to give you a number of other, similar, illustrations of the consequences of momentum conservation in atomic systems. Our first example is the radiation of light from an atom. The conservation of angular momentum (among other things) will determine the polarization and angular distribution of the emitted photons.

Suppose we have an atom which is in an excited state of definite angular momentum—say with a spin of one—and it makes a transition to a state of angular momentum zero at a lower energy, emitting a photon. The problem is to figure out the angular distribution and polarization of the photons. (This problem is almost exactly the same as the Λ⁰ disintegration, except that we have spin-one instead of spin one-half particles.) Since the upper state of the atom is spin one, there are three possibilities for its z-component of angular momentum. The value of m could be +1, or 0, or −1. We will take m=+1 for our example. Once you see how it goes, you can work out the other cases.
We suppose that the atom is sitting with its angular momentum along the +z-axis—as in Fig.18–1(a)—and ask with what amplitude it will emit right circularly polarized light upward along the z-axis, so that the atom ends up with zero angular momentum—as shown in part(b) of the figure. Well, we don’t know the answer to that. But we do know that right circularly polarized light has one unit of angular momentum about its direction of propagation. So after the photon is emitted, the situation would have to be as shown in Fig.18–1(b)—the atom is left with zero angular momentum about the z-axis, since we have assumed an atom whose lower state is spin zero. We will let a stand for the amplitude for such an event. More precisely, we let a be the amplitude to emit a photon into a certain small solid angle Δ Ω, centered on the z-axis, during a time d t. Notice that the amplitude to emit a LHC photon in the same direction is zero. The net angular momentum about the z-axis would be−1 for such a photon and zero for the atom for a total of−1, which would not conserve angular momentum. Fig. 18–1. An atom with m=+1 emits a RHC photon along the +z-axis. Similarly, if the spin of the atom is initially “down” (−1 along the z-axis), it can emit only a LHC polarized photon in the direction of the +z-axis, as shown in Fig.18–2. We will let b stand for the amplitude for this event—meaning again the amplitude that the photon goes into a certain solid angle Δ Ω. On the other hand, if the atom is in the m=0 state, it cannot emit a photon in the +z-direction at all, because a photon can have only the angular momentum +1 or−1 along its direction of motion. Fig. 18–2. An atom with m=−1 emits a LHC photon along the +z-axis. Next, we can show that b is related to a. Suppose we perform an inversion of the situation in Fig.18–1, which means that we should imagine what the system would look like if we were to move each part of the system to an equivalent point on the opposite side of the origin. This does not mean that we should reflect the angular momentum vectors, because they are artificial. We should, rather, invert the actual character of the motion that would correspond to such an angular momentum. In Fig.18–3(a) and(b) we show what the process of Fig.18–1 looks like before and after an inversion with respect to the center of the atom. Notice that the sense of rotation of the atom is unchanged.1 In the inverted system of Fig.18–3(b) we have an atom with m=+1 emitting a LHC photon downward. Fig. 18–3. If the process of(a) is transformed by an inversion through the center of the atom, it appears as in(b). If we now rotate the system of Fig.18–3(b) by 180∘ about the x- or y-axis, it becomes identical to Fig.18–2. The combination of the inversion and rotation turns the second process into the first. Using Table17–2, we see that a rotation of 180∘ about the y-axis just throws an m=−1 state into an m=+1 state, so the amplitude b must be equal to the amplitude a except for a possible sign change due to the inversion. The sign change in the inversion will depend on the parities of the initial and final state of the atom. In atomic processes, parity is conserved, so the parity of the whole system must be the same before and after the photon emission. What happens will depend on whether the parities of the initial and final states of the atom are even or odd—the angular distribution of the radiation will be different for different cases. 
We will take the common case of odd parity for the initial state and even parity for the final state; it will give what is called "electric dipole radiation." (If the initial and final states have the same parity we say there is "magnetic dipole radiation," which has the character of the radiation from an oscillating current in a loop.) If the parity of the initial state is odd, its amplitude reverses its sign in the inversion which takes the system from (a) to (b) of Fig. 18–3. The final state of the atom has even parity, so its amplitude doesn't change sign. If the reaction is going to conserve parity, the amplitude b must be equal to a in magnitude but of the opposite sign.

We conclude that if the amplitude is a that an m=+1 state will emit a photon upward, then for the assumed parities of the initial and final states the amplitude that an m=−1 state will emit a LHC photon upward is −a.

We have all we need to know to find the amplitude for a photon to be emitted at any angle θ with respect to the z-axis. Suppose we have an atom originally polarized with m=+1. We can resolve this state into +1, 0, and −1 states with respect to a new z′-axis in the direction of the photon emission. The amplitudes for these three states are just the ones given in the lower half of Table 17–2. The amplitude that a RHC photon is emitted in the direction θ is then a times the amplitude to have m=+1 in that direction, namely,
$$a\,\langle +|R_y(\theta)|+\rangle = \frac{a}{2}\,(1+\cos\theta).$$
The amplitude that a LHC photon is emitted in the same direction is −a times the amplitude to have m=−1 in the new direction. Using Table 17–2, it is
$$-a\,\langle -|R_y(\theta)|+\rangle = -\frac{a}{2}\,(1-\cos\theta).$$
If you are interested in other polarizations you can find out the amplitude for them from the superposition of these two amplitudes. To get the intensity of any component as a function of angle, you must, of course, take the absolute square of the amplitudes.

18–2 Light scattering

Fig. 18–4. The scattering of light by an atom seen as a two-step process.

Let's use these results to solve a somewhat more complicated problem—but also one which is somewhat more real. We suppose that the same atoms are sitting in their ground state (j=0), and scatter an incoming beam of light. Let's say that the light is going initially in the +z-direction, so that we have photons coming up to the atom from the −z-direction, as shown in Fig. 18–4(a). We can consider the scattering of light as a two-step process: The photon is absorbed, and then is re-emitted. If we start with a RHC photon as in Fig. 18–4(a), and angular momentum is conserved, the atom will be in an m=+1 state after the absorption—as shown in Fig. 18–4(b). We call the amplitude for this process c. The atom can then emit a RHC photon in the direction θ—as in Fig. 18–4(c). The total amplitude that a RHC photon is scattered in the direction θ is just c times (18.1). Let's call this scattering amplitude ⟨R′|S|R⟩; we have
$$\langle R'|S|R\rangle = \frac{ac}{2}\,(1+\cos\theta).$$
There is also an amplitude that a RHC photon will be absorbed and that a LHC photon will be emitted. The product of the two amplitudes is the amplitude ⟨L′|S|R⟩ that a RHC photon is scattered as a LHC photon. Using (18.2), we have
$$\langle L'|S|R\rangle = -\frac{ac}{2}\,(1-\cos\theta).$$
Now let's ask about what happens if a LHC photon comes in. When it is absorbed, the atom will go into an m=−1 state. By the same kind of arguments we used in the preceding section, we can show that this amplitude must be −c. The amplitude that an atom in the m=−1 state will emit a RHC photon at the angle θ is a times the amplitude ⟨+|R_y(θ)|−⟩, which is $\tfrac12(1-\cos\theta)$.
So we have
$$\langle R'|S|L\rangle = -\frac{ac}{2}\,(1-\cos\theta).$$
Finally, the amplitude for a LHC photon to be scattered as a LHC photon is
$$\langle L'|S|L\rangle = \frac{ac}{2}\,(1+\cos\theta).$$
(There are two minus signs which cancel.) If we make a measurement of the scattered intensity for any given combination of circular polarizations it will be proportional to the square of one of our four amplitudes. For instance, with an incoming beam of RHC light the intensity of the RHC light in the scattered radiation will vary as $(1+\cos\theta)^2$.

That's all very well, but suppose we start out with linearly polarized light. What then? If we have x-polarized light, it can be represented as a superposition of RHC and LHC light. We write (see Section 11-4)
$$|x\rangle = \frac{1}{\sqrt{2}}\,(|R\rangle + |L\rangle).$$
Or, if we have y-polarized light, we would have
$$|y\rangle = -\frac{i}{\sqrt{2}}\,(|R\rangle - |L\rangle).$$
Now what do you want to know? Do you want the amplitude that an x-polarized photon will scatter into a RHC photon at the angle θ? You can get it by the usual rule for combining amplitudes. First, multiply (18.7) by ⟨R′|S to get
$$\langle R'|S|x\rangle = \frac{1}{\sqrt{2}}\,\bigl(\langle R'|S|R\rangle + \langle R'|S|L\rangle\bigr),$$
and then use (18.3) and (18.5) for the two amplitudes. You get
$$\langle R'|S|x\rangle = \frac{ac}{\sqrt{2}}\,\cos\theta.$$
If you wanted the amplitude that an x-photon would scatter into a LHC photon, you would get
$$\langle L'|S|x\rangle = \frac{ac}{\sqrt{2}}\,\cos\theta.$$
Finally, suppose you wanted to know the amplitude that an x-polarized photon will scatter while keeping its x-polarization. What you want is ⟨x′|S|x⟩. This can be written as
$$\langle x'|S|x\rangle = \langle x'|R'\rangle\langle R'|S|x\rangle + \langle x'|L'\rangle\langle L'|S|x\rangle.$$
If you then use the relations
$$|R'\rangle = \frac{1}{\sqrt{2}}\,(|x'\rangle + i|y'\rangle),\qquad |L'\rangle = \frac{1}{\sqrt{2}}\,(|x'\rangle - i|y'\rangle),$$
it follows that
$$\langle x'|R'\rangle = \frac{1}{\sqrt{2}},\qquad \langle x'|L'\rangle = \frac{1}{\sqrt{2}}.$$
So you get that
$$\langle x'|S|x\rangle = ac\,\cos\theta.$$
The answer is that a beam of x-polarized light will be scattered at the direction θ (in the xz-plane) with an intensity proportional to $\cos^2\theta$. If you ask about y-polarized light, you find that
$$\langle y'|S|x\rangle = 0.$$
So the scattered light is completely polarized in the x-direction.

Now we notice something interesting. The results (18.17) and (18.18) correspond exactly to the classical theory of light scattering we gave in Vol. I, Section 32-5, where we imagined that the electron was bound to the atom by a linear restoring force—so that it acted like a classical oscillator. Perhaps you are thinking: "It's so much easier in the classical theory; if it gives the right answer why bother with the quantum theory?" For one thing, we have considered so far only the special—though common—case of an atom with a j=1 excited state and a j=0 ground state. If the excited state had spin two, you would get a different result. Also, there is no reason why the model of an electron attached to a spring and driven by an oscillating electric field should work for a single photon. But we have found that it does in fact work, and that the polarization and intensities come out right. So in a certain sense we are bringing the whole course around to the real truth. Whereas we have, in Vol. I, done the theory of the index of refraction, and of light scattering, by the classical theory, we have now shown that the quantum theory gives the same result for the most common case. In effect we have now done the polarization of sky light, for instance, by quantum mechanical arguments, which is the only truly legitimate way. It should be, of course, that all the classical theories which work are supported ultimately by legitimate quantum arguments. Naturally, those things which we have spent a great deal of time in explaining to you were selected from just those parts of classical physics which still maintain validity in quantum mechanics.
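As a quick numerical cross-check of the amplitude algebra above (a sketch, not part of the lecture; the overall factor ac is set to 1), combining the circular-polarization amplitudes exactly as described reproduces $\langle x'|S|x\rangle = ac\cos\theta$ and $\langle y'|S|x\rangle = 0$:

```python
import math

# Scattering amplitudes between circular polarizations, with the overall factor a*c set to 1.
def RR(t): return +0.5 * (1 + math.cos(t))   # <R'|S|R>
def LR(t): return -0.5 * (1 - math.cos(t))   # <L'|S|R>
def RL(t): return -0.5 * (1 - math.cos(t))   # <R'|S|L>
def LL(t): return +0.5 * (1 + math.cos(t))   # <L'|S|L>

for deg in (0, 30, 60, 90):
    t = math.radians(deg)
    # |x> = (|R> + |L>)/sqrt(2), so by linearity:
    Rx = (RR(t) + RL(t)) / math.sqrt(2)      # <R'|S|x>
    Lx = (LR(t) + LL(t)) / math.sqrt(2)      # <L'|S|x>
    # <x'|R'> = <x'|L'> = 1/sqrt(2);  <y'|R'> = +i/sqrt(2), <y'|L'> = -i/sqrt(2).
    xx = (Rx + Lx) / math.sqrt(2)            # <x'|S|x>, should equal cos(t)
    yx = (1j * Rx - 1j * Lx) / math.sqrt(2)  # <y'|S|x>, should vanish
    print(f"theta={deg:3d}  <x'|S|x>={xx:+.4f}  cos(theta)={math.cos(t):+.4f}  |<y'|S|x>|={abs(yx):.1e}")
```

The $\cos^2\theta$ intensity and the complete x-polarization of the scattered light then follow by squaring, just as stated above.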
You’ll notice that we did not discuss in great detail any model of the atom which has electrons going around in orbits. That’s because such a model doesn’t give results which agree with the quantum mechanics. But the electron on a spring—which is not, in a sense, at all the way an atom “looks”—does work, and so we used that model for the theory of the index of refraction. 18–3 The annihilation of positronium We would like next to take an example which is very pretty. It is quite interesting and, although somewhat complicated, we hope not too much so. Our example is the system called positronium, which is an “atom” made up of an electron and a positron—a bound state of an e+ and an e−. It is like a hydrogen atom, except that a positron replaces the proton. This object has—like the hydrogen atom—many states. Also like the hydrogen, the ground state is split into a “hyperfine structure” by the interaction of the magnetic moments. The spins of the electron and positron are each one-half, and they can be either parallel or antiparallel to any given axis. (In the ground state there is no other angular momentum due to orbital motion.) So there are four states: three are the substates of a spin-one system, all with the same energy; and one is a state of spin zero with a different energy. The energy splitting is, however, much larger than the 1420 megacycles of hydrogen because the positron magnetic moment is so much stronger—1000 times stronger—than the proton moment. The most important difference, however, is that positronium cannot last forever. The positron is the antiparticle of the electron; they can annihilate each other. The two particles disappear completely—converting their rest energy into radiation, which appears as γ-rays (photons). In the disintegration, two particles with a finite rest mass go into two or more objects which have zero rest mass.3 We begin by analyzing the disintegration of the spin-zero state of the positronium. It disintegrates into two γ-rays with a lifetime of about 10−10 second. Initially, we have a positron and an electron close together and with spins antiparallel, making the positronium system. After the disintegration there are two photons going out with equal and opposite momenta (Fig.18–5). The momenta must be equal and opposite, because the total momentum after the disintegration must be zero, as it was before, if we are taking the case of annihilation at rest. If the positronium is not at rest, we can ride with it, solve the problem, and then transform everything back to the lab system. (See, we can do anything now; we have all the tools.) Fig. 18–5. The two-photon annihilation of positronium. First, we note that the angular distribution is not very interesting. Since the initial state has spin zero, it has no special axis—it is symmetric under all rotations. The final state must then also be symmetric under all rotations. That means that all angles for the disintegration are equally likely—the amplitude is the same for a photon to go in any direction. Of course, once we find one of the photons in some direction the other must be opposite. The only remaining question, which we now want to look at, is about the polarization of the photons. Let’s call the directions of motion of the two photons the plus and minus z-axes. 
We can use any representations we want for the polarization states of the photons; we will choose for our description right and left circular polarization—always with respect to the directions of motion.4 Right away, we can see that if the photon going upward is RHC, then angular momentum will be conserved if the downward going photon is also RHC. Each will carry +1 unit of angular momentum with respect to its momentum direction, which means plus and minus one unit about the z-axis. The total will be zero, and the angular momentum after the disintegration will be the same as before. See Fig.18–6. Fig. 18–6. One possibility for positronium annihilation along the z-axis. The same arguments show that if the upward going photon is RHC, the downward cannot be LHC. Then the final state would have two units of angular momentum. This is not permitted if the initial state has spin zero. Note that such a final state is also not possible for the other positronium ground state of spin one, because it can have a maximum of one unit of angular momentum in any direction. Now we want to show that two-photon annihilation is not possible at all from the spin-one state. You might think that if we took the j=1, m=0 state—which has zero angular momentum about the z-axis—it should be like the spin-zero state, and could disintegrate into two RHC photons. Certainly, the disintegration sketched in Fig.18–7(a) conserves angular momentum about the z-axis. But now look what happens if we rotate this system around the y-axis by 180∘; we get the picture shown in Fig.18–7(b). It is exactly the same as in part(a) of the figure. All we have done is interchange the two photons. Now photons are Bose particles; if we interchange them, the amplitude has the same sign, so the amplitude for the disintegration in part(b) must be the same as in part(a). But we have assumed that the initial object is spin one. And when we rotate a spin-one object in a state with m=0 by 180∘ about the y-axis, its amplitudes change sign (see Table17–2 for θ=π). So the amplitudes for (a) and(b) in Fig.18–7 should have opposite signs; the spin-one state cannot disintegrate into two photons. Fig. 18–7. For the j=1 state of positronium, the process(a) and its 180∘rotation about y(b) are exactly the same. When positronium is formed you would expect it to end up in the spin-zero state 1/4 of the time and in the spin-one state (with m=−1,0, or+1) 3/4 of the time. So 1/4 of the time you would get two-photon annihilations. The other 3/4 of the time there can be no two-photon annihilations. There is still an annihilation, but it has to go with three photons. It is harder for it to do that and the lifetime is 1000 times longer—about 10−7 second. This is what is observed experimentally. We will not go into any more of the details of the spin-one annihilation. So far we have that if we only worry about angular momentum, the spin-zero state of the positronium can go into two RHC photons. There is also another possibility: it can go into two LHC photons as shown in Fig.18–8. The next question is, what is the relation between the amplitudes for these two possible decay modes? We can find out from the conservation of parity. Fig. 18–8. Another possible process for positronium annihilation. To do that, however, we need to know the parity of the positronium. 
Now theoretical physicists have shown in a way that is not easy to explain that the parity of the electron and the positron—its antiparticle—must be opposite, so that the spin-zero ground state of positronium must be odd. We will just assume that it is odd, and since we will get agreement with experiment, we can take that as sufficient proof. Let's see then what happens if we make an inversion of the process in Fig. 18–6. When we do that, the two photons reverse directions and polarizations. The inverted picture looks just like Fig. 18–8. Assuming that the parity of the positronium is odd, the amplitudes for the two processes in Figs. 18–6 and 18–8 must have the opposite sign. Let's let $|R_1 R_2\rangle$ stand for the final state of Fig. 18–6 in which both photons are RHC, and let $|L_1 L_2\rangle$ stand for the final state of Fig. 18–8, in which both photons are LHC. The true final state—let's call it $|F\rangle$—must be
$$|F\rangle = |R_1 R_2\rangle - |L_1 L_2\rangle.$$
Then an inversion changes the R's into L's and gives the state
$$P\,|F\rangle = |L_1 L_2\rangle - |R_1 R_2\rangle = -|F\rangle,$$
which is the negative of (18.19). So the final state $|F\rangle$ has negative parity, which is the same as the initial spin-zero state of the positronium. This is the only final state that conserves both angular momentum and parity. There is some amplitude that the disintegration into this state will occur, which we don't need to worry about now, however, since we are only interested in questions about the polarization.

What does the final state of (18.19) mean physically? One thing it means is the following: If we observe the two photons in two detectors which can be set to count separately the RHC or LHC photons, we will always see two RHC photons together, or two LHC photons together. That is, if you stand on one side of the positronium and someone else stands on the opposite side, you can measure the polarization and tell the other guy what polarization he will get. You have a 50-50 chance of catching a RHC photon or a LHC photon; whichever one you get, you can predict that he will get the same.

Since there is a 50-50 chance for RHC or LHC polarization, it sounds as though it might be like linear polarization. Let's ask what happens if we observe the photon in counters that accept only linearly polarized light. For γ-rays it is not as easy to measure the polarization as it is for light; there is no polarizer which works well for such short wavelengths. But let's imagine that there is, to make the discussion easier. Suppose that you have a counter that only accepts light with x-polarization, and that there is a guy on the other side that also looks for linear polarized light with, say, y-polarization. What is the chance you will pick up the two photons from an annihilation? What we need to ask is the amplitude that $|F\rangle$ will be in the state $|x_1 y_2\rangle$. In other words, we want the amplitude
$$\langle x_1 y_2|F\rangle,$$
which is, of course, just
$$\langle x_1 y_2|R_1 R_2\rangle - \langle x_1 y_2|L_1 L_2\rangle.$$
Now although we are working with two-particle amplitudes for the two photons, we can handle them just as we did the single particle amplitudes, since each particle acts independently of the other. That means that the amplitude $\langle x_1 y_2|R_1 R_2\rangle$ is just the product of the two independent amplitudes $\langle x_1|R_1\rangle$ and $\langle y_2|R_2\rangle$. Using Table 17–3, these two amplitudes are $1/\sqrt{2}$ and $i/\sqrt{2}$—so
$$\langle x_1 y_2|R_1 R_2\rangle = +\frac{i}{2}.$$
Similarly, we find that
$$\langle x_1 y_2|L_1 L_2\rangle = -\frac{i}{2}.$$
Subtracting these two amplitudes according to (18.21), we get that
$$\langle x_1 y_2|F\rangle = +i.$$
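These two-photon amplitudes are easy to reproduce numerically (a small numpy sketch, not part of the lecture), writing each photon in its $\{|x\rangle, |y\rangle\}$ basis so that $|R\rangle = (|x\rangle + i|y\rangle)/\sqrt{2}$ and $|L\rangle = (|x\rangle - i|y\rangle)/\sqrt{2}$, as in (18.13):

```python
import numpy as np

# Single-photon linear-polarization basis vectors and the circular states built from them.
x = np.array([1.0, 0.0], dtype=complex)
y = np.array([0.0, 1.0], dtype=complex)
R = (x + 1j * y) / np.sqrt(2)
L = (x - 1j * y) / np.sqrt(2)

# Two-photon final state |F> = |R1 R2> - |L1 L2> as a tensor product.
F = np.kron(R, R) - np.kron(L, L)

def amp(bra1, bra2, state):
    """Two-photon amplitude <bra1, bra2 | state>."""
    return np.vdot(np.kron(bra1, bra2), state)

print("<x1 y2|F> =", amp(x, y, F))   # expect +1j
print("<y1 x2|F> =", amp(y, x, F))   # expect +1j
print("<x1 x2|F> =", amp(x, x, F))   # expect  0
print("<y1 y2|F> =", amp(y, y, F))   # expect  0
```

Running it gives coincident counts only for crossed linear polarizers, which is exactly the pattern of correlations described next.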
So there is a unit probability5 that if you get a photon in your x-polarized detector, the other guy will get a photon in his y-polarized detector. Now suppose that the other guy sets his counter for x-polarization the same as yours. He would never get a count when you got one. If you work it through, you will find that ⟨x 1 x 2|F⟩=0. It will, naturally, also work out that if you set your counter for y-polarization he will get coincident counts only if he is set for x-polarization. Now this all leads to an interesting situation. Suppose you were to set up something like a piece of calcite which separated the photons into x-polarized and y-polarized beams, and put a counter in each beam. Let’s call one the x-counter and the other the y-counter. If the guy on the other side does the same thing, you can always tell him which beam his photon is going to go into. Whenever you and he get simultaneous counts, you can see which of your detectors caught the photon and then tell him which of his counters had a photon. Let’s say that in a certain disintegration you find that a photon went into your x-counter; you can tell him that he must have had a count in his y-counter. Now many people who learn quantum mechanics in the usual (old-fashioned) way find this disturbing. They would like to think that once the photons are emitted it goes along as a wave with a definite character. They would think that since “any given photon” has some “amplitude” to be x-polarized or to be y-polarized, there should be some chance of picking it up in either the x- or y-counter and that this chance shouldn’t depend on what some other person finds out about a completely different photon. They argue that “someone else making a measurement shouldn’t be able to change the probability that I will find something.” Our quantum mechanics says, however, that by making a measurement on photon number one, you can predict precisely what the polarization of photon number two is going to be when it is detected. This point was never accepted by Einstein, and he worried about it a great deal—it became known as the “Einstein-Podolsky-Rosen paradox.” But when the situation is described as we have done it here, there doesn’t seem to be any paradox at all; it comes out quite naturally that what is measured in one place is correlated with what is measured somewhere else. The argument that the result is paradoxical runs something like this: If you have a counter which tells you whether your photon is RHC or LHC, you can predict exactly what kind of a photon (RHC or LHC) he will find. The photons he receives must, therefore, each be purely RHC or purely LHC, some of one kind and some of the other. Surely you cannot alter the physical nature of his photons by changing the kind of observation you make on your photons. No matter what measurements you make on yours, his must still be either RHC or LHC. Now suppose he changes his apparatus to split his photons into two linearly polarized beams with a piece of calcite so that all of his photons go either into an x-polarized beam or into a y-polarized beam. There is absolutely no way, according to quantum mechanics, to tell into which beam any particular RHC photon will go. There is a 50% probability it will go into the x-beam and a 50% probability it will go into the y-beam. And the same goes for a LHC photon. Since each photon is RHC or LHC—according to (2) and(3)—each one must have a 50-50 chance of going into the x-beam or the y-beam and there is no way to predict which way it will go. 
Yet the theory predicts that if you see your photon go through an x-polarizer you can predict with certainty that his photon will go into his y-polarized beam. This is in contradiction to(5) so there is a paradox. Nature apparently doesn’t see the “paradox,” however, because experiment shows that the prediction in(6) is, in fact, true. We have already discussed the key to this “paradox” in our very first lecture on quantum mechanical behavior in Chapter37, Vol.I.6 In the argument above, steps (1),(2), (4), and(6) are all correct, but(3), and its consequence(5), are wrong; they are not a true description of nature. Argument(3) says that by your measurement (seeing a RHC or a LHC photon) you can determine which of two alternative events occurs for him (seeing a RHC or a LHC photon), and that even if you do not make your measurement you can still say that his event will occur either by one alternative or the other. But it was precisely the point of Chapter37, Vol.I, to point out right at the beginning that this is not so in Nature. Her way requires a description in terms of interfering amplitudes, one amplitude for each alternative. A measurement of which alternative actually occurs destroys the interference, but if a measurement is not made you cannot still say that “one alternative or the other is still occurring.” If you could determine for each one of your photons whether it was RHC or LHC, and also whether it was x-polarized (all for the same photon) there would indeed be a paradox. But you cannot do that—it is an example of the uncertainty principle. Do you still think there is a “paradox”? Make sure that it is, in fact, a paradox about the behavior of Nature, by setting up an imaginary experiment for which the theory of quantum mechanics would predict inconsistent results via two different arguments. Otherwise the “paradox” is only a conflict between reality and your feeling of what reality “ought to be.” Do you think that it is not a “paradox,” but that it is still very peculiar? On that we can all agree. It is what makes physics fascinating. 18–4 Rotation matrix for any spin By now you can see, we hope, how important the idea of the angular momentum is in understanding atomic processes. So far, we have considered only systems with spins—or “total angular momentum”—of zero, one-half, or one. There are, of course, atomic systems with higher angular momenta. For analyzing such systems we would need to have tables of rotation amplitudes like those in Section17-6. That is, we would need the matrix of amplitudes for spin 3 2,2, 5 2, 3,etc. Although we will not work out these tables in detail, we would like to show you how it is done, so that you can do it if you ever need to. As we have seen earlier, any system which has the spin or “total angular momentum”j can exist in any one of (2 j+1)states for which the z-component of angular momentum can have any one of the discrete values in the sequence j,j−1, j−2, …, −(j−1),−j (all in units of ℏ). Calling the z-component of angular momentum of any particular state m ℏ, we can define a particular angular momentum state by giving the numerical values of the two “angular momentum quantum numbers” j and m. We can indicate such a state by the state vector|j,m⟩. In the case of a spin one-half particle, the two states are then |1 2,1 2⟩ and|1 2,−1 2⟩; or for a spin-one system, the states would be written in this notation as|1,+1⟩, |1,0⟩, |1,−1⟩. A spin-zero particle has, of course, only the one state|0,0⟩. 
Now we want to know what happens when we project the general state |j,m⟩ into a representation with respect to a rotated set of axes. First, we know that j is a number which characterizes the system, so it doesn't change. If we rotate the axes, all we do is get a mixture of the various m-values for the same j. In general, there will be some amplitude that in the rotated frame the system will be in the state |j,m′⟩, where m′ gives the new z-component of angular momentum. So what we want are all the matrix elements ⟨j,m′|R|j,m⟩ for various rotations. We already know what happens if we rotate by an angle ϕ about the z-axis. The new state is just the old one multiplied by $e^{im\phi}$—it still has the same m-value. We can write this by
$$R_z(\phi)\,|j,m\rangle = e^{im\phi}\,|j,m\rangle.$$
Or, if you prefer,
$$\langle j,m'|R_z(\phi)|j,m\rangle = \delta_{m,m'}\,e^{im\phi}$$
(where $\delta_{m,m'}$ is 1 if m′=m, or zero otherwise).

For a rotation about any other axis there will be a mixing of the various m-states. We could, of course, try to work out the matrix elements for an arbitrary rotation described by the Euler angles β, α, and γ. But it is easier to remember that the most general such rotation can be made up of the three rotations $R_z(\gamma)$, $R_y(\alpha)$, $R_z(\beta)$; so if we know the matrix elements for a rotation about the y-axis, we will have all we need.

How can we find the rotation matrix for a rotation by the angle θ about the y-axis for a particle of spin j? We can't tell you how to do it in a basic way (with what we have had). We did it for spin one-half by a complicated symmetry argument. We then did it for spin one by taking the special case of a spin-one system which was made up of two spin one-half particles. If you will go along with us and accept the fact that in the general case the answers depend only on the spin j, and are independent of how the inner guts of the object of spin j are put together, we can extend the spin-one argument to an arbitrary spin. We can, for example, cook up an artificial system of spin 3/2 out of three spin one-half objects. We can even avoid complications by imagining that they are all distinct particles—like a proton, an electron, and a muon. By transforming each spin one-half object, we can see what happens to the whole system—remembering that the three amplitudes are multiplied for the combined state. Let's see how it goes in this case.

Suppose we take the three spin one-half objects all with spins "up"; we can indicate this state by |+++⟩. If we look at this system in a frame rotated about the z-axis by the angle ϕ, each plus stays a plus, but gets multiplied by $e^{i\phi/2}$. We have three such factors, so
$$R_z(\phi)\,|{+}{+}{+}\rangle = e^{i(3\phi/2)}\,|{+}{+}{+}\rangle.$$
Evidently the state |+++⟩ is just what we mean by the m=+3/2 state, or the state |3/2,+3/2⟩.

If we now rotate this system about the y-axis, each of the spin one-half objects will have some amplitude to be plus or to be minus, so the system will now be a mixture of the eight possible combinations |+++⟩, |++−⟩, |+−+⟩, |−++⟩, |+−−⟩, |−+−⟩, |−−+⟩, or |−−−⟩. It is clear, however, that these can be broken up into four sets, each set corresponding to a particular value of m. First, we have |+++⟩, for which m=3/2. Then there are the three states |++−⟩, |+−+⟩, and |−++⟩—each with two plusses and one minus. Since each spin one-half object has the same chance of coming out minus under the rotation, the amounts of each of these three combinations should be equal. So let's take the combination
$$\frac{1}{\sqrt{3}}\,\bigl\{|{+}{+}{-}\rangle + |{+}{-}{+}\rangle + |{-}{+}{+}\rangle\bigr\}$$
with the factor $1/\sqrt{3}$ put in to normalize the state.
If we rotate this state about the z-axis, we get a factor e^{iϕ/2} for each plus, and e^{−iϕ/2} for each minus. Each term in (18.27) is multiplied by e^{iϕ/2}, so there is the common factor e^{iϕ/2}. This state satisfies our idea of an m=+1/2 state; we can conclude that

(1/√3){|++−⟩ + |+−+⟩ + |−++⟩} = |3/2,+1/2⟩.

Similarly, we can write

(1/√3){|+−−⟩ + |−+−⟩ + |−−+⟩} = |3/2,−1/2⟩,

which corresponds to a state with m=−1/2. Notice that we take only the symmetric combinations—we do not take any combinations with minus signs. They would correspond to states of the same m but a different j. (It’s just like the spin-one case, where we found that (1/√2){|+−⟩+|−+⟩} was the state |1,0⟩, but the state (1/√2){|+−⟩−|−+⟩} was the state |0,0⟩.) Finally, we would have that |3/2,−3/2⟩ = |−−−⟩. We summarize our four states in Table 18–1.

Table 18–1
|+++⟩ = |3/2,+3/2⟩
(1/√3){|++−⟩ + |+−+⟩ + |−++⟩} = |3/2,+1/2⟩
(1/√3){|+−−⟩ + |−+−⟩ + |−−+⟩} = |3/2,−1/2⟩
|−−−⟩ = |3/2,−3/2⟩

Now all we have to do is take each state and rotate it about the y-axis and see how much of the other states it gives—using our known rotation matrix for the spin one-half particles. We can proceed in exactly the same way we did for the spin-one case in Section 12-6. (It just takes a little more algebra.) We will follow directly the ideas of Chapter 12, so we won’t repeat all the explanations in detail. The states in the system S will be labelled |3/2,+3/2,S⟩ = |+++⟩, |3/2,+1/2,S⟩ = (1/√3){|++−⟩+|+−+⟩+|−++⟩}, and so on. The T-system will be one rotated about the y-axis of S by the angle θ. States in T will be labelled |3/2,+3/2,T⟩, |3/2,+1/2,T⟩, and so on. Of course, |3/2,+3/2,T⟩ is the same as |+′+′+′⟩, the primes referring always to the T-system. Similarly, |3/2,+1/2,T⟩ will be equal to (1/√3){|+′+′−′⟩+|+′−′+′⟩+|−′+′+′⟩}, and so on. Each |+′⟩ state in the T-frame comes from both the |+⟩ and |−⟩ states in S via the matrix elements of Table 12–4. When we have three spin one-half particles, Eq. (12.47) gets replaced by

|+++⟩ = a³|+′+′+′⟩ + a²b{|+′+′−′⟩ + |+′−′+′⟩ + |−′+′+′⟩} + ab²{|+′−′−′⟩ + |−′+′−′⟩ + |−′−′+′⟩} + b³|−′−′−′⟩.

Using the transformation of Table 12–4, we get instead of (12.48) the equation

|3/2,+3/2,S⟩ = a³|3/2,+3/2,T⟩ + √3 a²b|3/2,+1/2,T⟩ + √3 ab²|3/2,−1/2,T⟩ + b³|3/2,−3/2,T⟩.

This already gives us several of our matrix elements ⟨j T | i S⟩. To get the expression for |3/2,+1/2,S⟩ we begin with the transformation of a state with two “+” and one “−” pieces. For instance,

|++−⟩ = a²c|+′+′+′⟩ + a²d|+′+′−′⟩ + abc|+′−′+′⟩ + bac|−′+′+′⟩ + abd|+′−′−′⟩ + bad|−′+′−′⟩ + b²c|−′−′+′⟩ + b²d|−′−′−′⟩.

Adding two similar expressions for |+−+⟩ and |−++⟩ and dividing by √3, we find

|3/2,+1/2,S⟩ = √3 a²c|3/2,+3/2,T⟩ + (a²d + 2abc)|3/2,+1/2,T⟩ + (2abd + b²c)|3/2,−1/2,T⟩ + √3 b²d|3/2,−3/2,T⟩.

Continuing the process we find all the elements ⟨j T | i S⟩ of the transformation matrix as given in Table 18–2. The first column comes from Eq. (18.32); the second from (18.34). The last two columns were worked out in the same way.

Table 18–2 Rotation matrix for a spin 3/2 particle
(The coefficients a, b, c, and d are given in Table 12–4.)
[math] [math] [math] [math] [math]
[math] [math] [math] [math] [math]
[math] [math] [math] [math] [math]
[math] [math] [math] [math] [math]
[math] [math] [math] [math] [math]

Now suppose the T-frame were rotated with respect to S by the angle θ about their y-axes. Then a, b, c, and d have the values [see (12.54)] [math] [math] [math], and [math] [math] [math]. Using these values in Table 18–2 we get the forms which correspond to the second part of Table 17–2, but now for a spin [math] system.
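Since the entries of Table 18–2 are not reproduced above, here is a sketch of our own that regenerates them symbolically. It uses a bookkeeping shortcut: because only the number of plusses matters in the symmetric combinations, we can stand the kets by commuting symbols, apply the substitution |+⟩ → a|+′⟩ + b|−′⟩, |−⟩ → c|+′⟩ + d|−′⟩ (a, b, c, d being the spin one-half amplitudes of Table 12–4, left symbolic here), and read each matrix element off as a coefficient of the expanded product.

```python
# Sketch (ours): the 4x4 matrix <3/2,m',T | 3/2,m,S> in terms of the symbolic
# spin one-half amplitudes a, b, c, d.  The variables up and vp stand for the
# primed kets |+'> and |-'>; the normalization factor is the ratio of the
# 1/sqrt(...) factors of the symmetric states.
import sympy as sp

a, b, c, d, up, vp = sp.symbols("a b c d up vp")

def element(mp, m, j=sp.Rational(3, 2)):
    r, s = int(j + m), int(j - m)            # plusses and minuses in |j,m,S>
    rp, sq = int(j + mp), int(j - mp)        # primed plusses and minuses in |j,m',T>
    poly = sp.expand((a * up + b * vp) ** r * (c * up + d * vp) ** s)
    coeff = poly.coeff(up, rp).coeff(vp, sq)
    norm = sp.sqrt(sp.factorial(rp) * sp.factorial(sq)
                   / (sp.factorial(r) * sp.factorial(s)))
    return sp.simplify(norm * coeff)

ms = [sp.Rational(3, 2), sp.Rational(1, 2), sp.Rational(-1, 2), sp.Rational(-3, 2)]
for m in ms:                                  # print the matrix column by column
    print([element(mp, m) for mp in ms])
# The first column comes out a**3, sqrt(3)*a**2*b, sqrt(3)*a*b**2, b**3 and the
# second sqrt(3)*a**2*c, a**2*d + 2*a*b*c, 2*a*b*d + b**2*c, sqrt(3)*b**2*d,
# in agreement with Eqs. (18.32) and (18.34) above.
```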
The arguments we have just gone through are readily generalized to a system of any spin[math]. The states[math] can be put together from [math]particles, each of spin one-half. (There are [math] of them in the [math]state and [math] in the [math]state.) Sums are taken over all the possible ways this can be done, and the state is normalized by multiplying by a suitable constant. Those of you who are mathematically inclined may be able to show that the following result comes out7: [math] where [math] is to go over all values which give terms[math] in all the factorials. This is quite a messy formula, but with it you can check Table17–2 for[math] and prepare tables of your own for larger[math]. Several special matrix elements are of extra importance and have been given special names. For example the matrix elements for[math] and integral[math] are known as the Legendre polynomials and are called[math]: [math] The first few of these polynomials are: [math] 18–5 Measuring a nuclear spin We would like to show you one example of the application of the coefficients we have just described. It has to do with a recent, interesting experiment which you will now be able to understand. Some physicists wanted to find out the spin of a certain excited state of the Ne[math] nucleus. To do this, they bombarded a carbon target with a beam of accelerated carbon ions, and produced the desired excited state of Ne[math]—called Ne[math]—in the reaction [math] where [math] is the [math]-particle, or He[math]. Several of the excited states of Ne[math] produced this way are unstable and disintegrate in the reaction [math] So experimentally there are two [math]-particles which come out of the reaction. We call them [math] and[math]; since they come off with different energies, they can be distinguished from each other. Also, by picking a particular energy for [math] we can pick out any particular excited state of the Ne[math]. Fig. 18–9. Experimental arrangement for determining the spin of certain states of Ne[math]. The experiment was set up as shown in Fig.18–9. A beam of [math]-MeV carbon ions was directed onto a thin foil of carbon. The first [math]-particle was counted in a silicon diffused junction detector marked [math]—set to accept [math]-particles of the proper energy moving in the forward direction (with respect to the incident C[math] beam). The second [math]-particle was picked up in the counter[math] at the angle[math] with respect to[math]. The counting rate of coincidence signals from [math] and[math] were measured as a function of the angle[math]. The idea of the experiment is the following. First, you need to know that the spins of C[math], O[math], and the [math]-particle are all zero. If we call the direction of motion of the initial C[math] the [math]-direction, then we know that the Ne[math] must have zero angular momentum about the [math]-axis. None of the other particles has any spin; the C[math] arrives along the [math]-axis and the [math] leaves along the [math]-axis so they can’t have any angular momentum about it. So whatever the spin[math] of the Ne[math] is, we know that it is in the state[math]. Now what will happen when the Ne[math] disintegrates into an O[math] and the second [math]-particle? Well, the [math]-particle is picked up in the counter[math] and to conserve momentum the O[math] must go off in the opposite direction.8About the new axis through[math], there can be no component of angular momentum. 
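Before following the disintegration argument further, here is a sketch of our own of the kind of amplitude just invoked: a standard textbook form of the general spin-j rotation amplitude (the formula referred to as Eq. 18.35 above; sign conventions differ between books, so this may differ from the lecture’s by signs), together with a check of the Legendre-polynomial statement for m = m′ = 0 and integer j. Squares of such amplitudes are what the experiment below compares with the measured angular distribution.

```python
# Sketch (ours): a standard sum-over-factorials form of <j,m'|R_y(theta)|j,m>,
# plus a check that the m = m' = 0 elements for integer j reproduce the Legendre
# polynomials P_j(cos theta).  Spins are passed doubled (j2 = 2j, m2 = 2m) so that
# half-integer values stay exact integers.
import math

def small_d(j2, mp2, m2, theta):
    jp_m, jm_m = (j2 + m2) // 2, (j2 - m2) // 2
    jp_mp, jm_mp = (j2 + mp2) // 2, (j2 - mp2) // 2
    pref = math.sqrt(math.factorial(jp_m) * math.factorial(jm_m)
                     * math.factorial(jp_mp) * math.factorial(jm_mp))
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    total = 0.0
    for k in range(j2 + 1):
        n1, n2, n3, n4 = jp_m - k, k, (mp2 - m2) // 2 + k, jm_mp - k
        if min(n1, n2, n3, n4) < 0:
            continue                          # factorial arguments must be >= 0
        den = (math.factorial(n1) * math.factorial(n2)
               * math.factorial(n3) * math.factorial(n4))
        total += (-1) ** n3 * pref / den * c ** (n1 + n4) * s ** (n2 + n3)
    return total

theta, x = 1.1, math.cos(1.1)
print([round(small_d(2 * j, 0, 0, theta), 6) for j in (1, 2, 3)])
print([round(p, 6) for p in (x, (3 * x**2 - 1) / 2, (5 * x**3 - 3 * x) / 2)])
# The two lines agree: d^j_00(theta) = P_j(cos theta) for j = 1, 2, 3.
```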
The final state has zero angular momentum about the new axis, so the Ne[math] can disintegrate this way only if it has some amplitude to have [math] equal to zero, where [math] is the quantum number of the component of angular momentum about the new axis. In fact, the probability of observing [math] at the angle[math] is just the square of the amplitude (or matrix element) [math] To find the spin of the Ne[math] state in question, the intensity of the second [math]-particle was plotted as a function of angle and compared with the theoretical curves for various values of[math]. As we said in the last section, the amplitudes[math] are just the functions[math]. So the possible angular distributions are curves of[math]. The experimental results are shown in Fig.18–10 for two of the excited states. You can see that the angular distribution for the [math]-MeV state fits very well the curve for[math], and so it must be a spin-one state. The data for the [math]-MeV state, on the other hand, are quite different; they fit the curve[math]. The state has a spin of[math]. Fig. 18–10. Experimental results for the angular distribution of the [math]-particles from two excited states of Ne[math] produced in the setup of Fig.18–9. [From J.A. Kuehner, Physical Review, Vol.125, p.1650, 1962.] From this experiment we have been able to find out the angular momentum of two of the excited states of Ne[math]. This information can then be used for trying to understand what the configuration of protons and neutrons is inside this nucleus—one more piece of information about the mysterious nuclear forces. 18–6 Composition of angular momentum When we studied the hyperfine structure of the hydrogen atom in Chapter12 we had to work out the internal states of a system composed of two particles—the electron and the proton—each with a spin of one-half. We found that the four possible spin states of such a system could be put together into two groups—a group with one energy that looked to the external world like a spin-one particle, and one remaining state that behaved like a particle of zero spin. That is, putting together two spin one-half particles we can form a system whose “total spin” is one, or zero. In this section we want to discuss in more general terms the spin states of a system which is made up of two particles of arbitrary spin. It is another important problem about angular momentum in quantum mechanical systems. Let’s first rewrite the results of Chapter12 for the hydrogen atom in a form that will be easier to extend to the more general case. We began with two particles which we will now call particle[math] (the electron) and particle[math] (the proton). Particle[math] had the spin[math] ([math]), and its [math]-component of angular momentum[math] could have one of several values (actually[math], namely [math] or[math]). Similarly, the spin state of particle[math] is described by its spin[math] and its [math]-component of angular momentum[math]. Various combinations of the spin states of the two particles could be formed. For instance, we could have particle[math] with[math] and particle[math] with[math], to make a state[math]. In general, the combined states formed a system whose “system spin,” or “total spin,” or “total angular momentum”[math] could be[math], or[math]. And the system could have a [math]-component of angular momentum[math], which was [math],[math], or[math] when[math], or[math] when[math]. In this new language we can rewrite the formulas in (12.41) and(12.42) as shown in Table18–3. 
Table 18–3 Composition of angular momenta for two spin [math] particles [math], [math] [math] [math] In the table the left-hand column describes the compound state in terms of its total angular momentum[math] and the [math]-component[math]. The right-hand column shows how these states are made up in terms of the [math]-values of the two particles [math] and[math]. We want now to generalize this result to states made up of two objects [math] and[math] of arbitrary spins [math] and[math]. We start by considering an example for which [math] and[math], namely, the deuterium atom in which particle[math] is an electron(e) and particle[math] is the nucleus—a deuteron(d). We have then that [math]. The deuteron is formed of one proton and one neutron in a state whose total spin is one, so[math]. We want to discuss the hyperfine states of deuterium—just the way we did for hydrogen. Since the deuteron has three possible states[math], [math], [math], and the electron has two, [math],[math], there are six possible states as follows (using the notation[math]): [math] You will notice that we have grouped the states according to the values of the sum of [math] and[math]—arranged in descending order. Now we ask: What happens to these states if we project into a different coordinate system? If the new system is just rotated about the [math]-axis by the angle[math], then the state[math] gets multiplied by [math] (The state may be thought of as the product[math], and each state vector contributes independently its own exponential factor.) The factor(18.43) is of the form[math], so the state[math] has a [math]-component of angular momentum equal to [math]The [math]-component of the total angular momentum is the sum of the [math]-components of angular momentum of the parts. In the list of(18.42), therefore, the state in the top line has[math], the two in the second line have[math], the next two have[math], and the last state has[math]. We see immediately one possibility for the spin[math] of the combined state (the total angular momentum) must be[math], and this will require four states with [math],[math], [math], and[math]. There is only one candidate for[math], so we know already that [math] But what is the state[math]? We have two candidates in the second line of(18.42), and, in fact, any linear combination of them would also have[math]. So, in general, we must expect to find that [math] where [math] and[math] are two numbers. They are called the Clebsch-Gordan coefficients. Our next problem is to find out what they are. We can find out easily if we just remember that the deuteron is made up of a neutron and a proton, and write the deuteron states out more explicitly using the rules of Table18–3. If we do that, the states listed in(18.42) then look as shown in Table18–4. Table 18–4 Angular momentum states of a deuterium atom [math] [math] [math] [math] [math] [math] [math] [math] [math] [math] We want to form the four states of[math], using the states in the table. But we already know the answer, because in Table18–1 we have states of spin[math] formed from three spin one-half particles. The first state in Table18–1 has[math] and it is[math], which—in our present notation—is the same as[math], or the first state in Table18–4. But this state is also the same as the first in the list of(18.42), confirming our statement in(18.45). 
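As a quick cross-check of the grouping just described, here is a sketch of our own (the notation (m_d, m_e) for the product states is ours) that lists the six states and sorts them by total z-component.

```python
# Sketch (ours): the six product states of a spin-one deuteron and a spin one-half
# electron, grouped by the total z-component m = m_d + m_e.  One state with
# m = 3/2, two with m = 1/2, two with m = -1/2, and one with m = -3/2: exactly the
# pattern that forces a j = 3/2 quartet plus a j = 1/2 doublet.
from fractions import Fraction
from collections import defaultdict

deuteron_m = [Fraction(1), Fraction(0), Fraction(-1)]     # j_d = 1
electron_m = [Fraction(1, 2), Fraction(-1, 2)]            # j_e = 1/2

by_total_m = defaultdict(list)
for md in deuteron_m:
    for me in electron_m:
        by_total_m[md + me].append((md, me))

for m in sorted(by_total_m, reverse=True):
    print(f"m = {m}:", by_total_m[m])
```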
The second line of Table18–1 says—changing to our present notation—that [math] The right side can evidently be put together from the two entries in the second line of Table18–4 by taking [math]of the first term with [math]of the second. That is, Eq.(18.47) is equivalent to [math] We have found our two Clebsch-Gordan coefficients [math] and[math] in Eq.(18.46): [math] Following the same procedure we can find that [math] And, also, of course, [math] These are the rules for the composition of spin[math] and spin[math] to make a total[math]. We summarize (18.45),(18.48),(18.50), and(18.51) in Table18–5. Table 18–5 The [math]states of the deuterium atom [math] We have, however, only four states here while the system we are considering has six possible states. Of the two states in the second line of(18.42) we have used only one linear combination to form[math]. There is another linear combination orthogonal to the one we have taken which also has[math], namely [math] Similarly, the two states in the third line of(18.42) can be combined to give two orthogonal states, each with[math]. The one orthogonal to(18.52) is [math] These are the two remaining states. They have[math]; and must be the two states corresponding to[math]. So we have [math] We can verify that these two states do indeed behave like the states of a spin one-half object by writing out the deuterium parts in terms of the neutron and proton states—using Table18–4. The first state in(18.52) is [math] which can also be written [math] Now look at the terms in the first curly brackets, and think of the e and p taken together. Together they form a spin-zero state (see the bottom line of Table18–3), and contribute no angular momentum. Only the neutron is left, so the whole of the first curly bracket of(18.56) behaves under rotations like a neutron, namely as a state with[math], [math]. Following the same reasoning, we see that in the second curly bracket of(18.56) the electron and neutron team up to produce zero angular momentum, and only the proton contribution—with[math]—is left. The terms behave like an object with[math], [math]. So the whole expression of(18.56) transforms like[math] as it should. The [math]state which corresponds to(18.53) can be written down (by changing the proper [math]'s to[math]'s) to get [math] You can easily check that this is equal to the second line of(18.54), as it should be if the two terms of that pair are to be the two states of a spin one-half system. So our results are confirmed. A deuteron and an electron can exist in six spin states, four of which act like the states of a spin [math]object (Table18–5) and two of which act like an object of spin one-half(18.54). The results of Table18–5 and of Eq.(18.54) were obtained by making use of the fact that the deuteron is made up of a neutron and a proton. The truth of the equations does not depend on that special circumstance. For any spin-one object put together with any spin one-half object the composition laws (and the coefficients) are the same. The set of equations in Table18–5 means that if the coordinates are rotated about, say, the [math]-axis—so that the states of the spin one-half particle and of the spin-one particle change according to Table17–1 and Table17–2—the linear combinations on the right-hand side will change in the proper way for a spin [math]object. Under the same rotation the states of(18.54) will change as the states of a spin one-half object. 
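The same coefficients can be generated mechanically. Here is a sketch of our own that starts from the top state |3/2,+3/2⟩ = |d:+1⟩|e:+1/2⟩ and applies the total lowering operator; the matrix elements √(j(j+1) − m(m−1)) used for it are the standard ones, not derived in this chapter. The nonzero amplitudes that come out are the √(1/3) and √(2/3) found above, and the combination orthogonal to the result is the j = 1/2, m = +1/2 state.

```python
# Sketch (ours): Clebsch-Gordan coefficients for spin 1 (x) spin 1/2 from the
# lowering operator J- = J-(d) + J-(e), acting on the 6-dimensional product space.
import numpy as np

def lowering(j):
    """Matrix of J- in the basis m = j, j-1, ..., -j (units of hbar)."""
    dim = int(round(2 * j)) + 1
    ms = [j - k for k in range(dim)]
    L = np.zeros((dim, dim))
    for col, m in enumerate(ms):
        if col + 1 < dim:
            L[col + 1, col] = np.sqrt(j * (j + 1) - m * (m - 1))
    return L

Jm_total = np.kron(lowering(1.0), np.eye(2)) + np.kron(np.eye(3), lowering(0.5))

top = np.kron([1.0, 0.0, 0.0], [1.0, 0.0])     # |d:+1>|e:+1/2> = |3/2,+3/2>
state = Jm_total @ top
state /= np.linalg.norm(state)                 # this is |3/2,+1/2>
print(np.round(state, 4))
# Nonzero entries: sqrt(1/3) ~ 0.5774 on |d:+1>|e:-1/2> and sqrt(2/3) ~ 0.8165 on
# |d:0>|e:+1/2>, the two Clebsch-Gordan coefficients of the |3/2,+1/2> state.
```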
The results depend only on the rotation properties (that is, the spin states) of the two original particles but not in any way on the origins of their angular momenta. We have only made use of this fact to work out the formulas by choosing a special case in which one of the component parts is itself made up of two spin one-half particles in a symmetric state. We have put all our results together in Table18–6, changing the notation “e” and“d” to “[math]” and“[math]” to emphasize the generality of the conclusions. Table 18–6 Composition of a spin one‑half particle [math] and a spin‑one particle [math] [math] [math] Suppose we have the general problem of finding the states which can be formed when two objects of arbitrary spins are combined. Say one has[math] (so its [math]-component[math] runs over the [math]values from [math] to[math]) and the other has[math] (with [math]-component[math] running over the values from [math] to[math]). The combined states are[math], and there are [math]different ones. Now what states of total spin[math] can be found? The total [math]-component of angular momentum[math] is equal to[math], and the states can all be listed according to[math] [as in(18.42)]. The largest[math] is unique; it corresponds to [math] and[math], and is, therefore, just[math]. That means that the largest total spin[math] is also equal to the sum[math]: [math] For the first [math]value smaller than[math], there are two states (either [math] or[math] is one unit less than its maximum). They must contribute one state to the set that goes with[math], and the one left over will belong to a new set with[math]. The next [math]-value—the third from the top of the list—can be formed in three ways. (From [math], [math]; from [math], [math]; and from [math], [math].) Two of these belong to groups already started above; the third tells us that states of[math] must also be included. This argument continues until we reach a stage where in our list we can no longer go one more step down in one of the[math]’s to make new states. Let[math] be the smaller of [math] and[math] (if they are equal take either one); then only [math]values of[math] are required—going in integer steps from[math] down to[math]. That is, when two objects of spin [math] and[math] are combined, the system can have a total angular momentum[math] equal to any one of the values [math] (By writing[math] instead of[math] we can avoid the extra admonition that[math].) For each of these [math]values there are the [math]states of different [math]-values—with [math] going from [math] to[math]. Each of these is formed from linear combinations of the original states[math] with appropriate factors—the Clebsch-Gordan coefficients for each particular term. We can consider that these coefficients give the “amount” of the state[math] which appears in the state[math]. So each of the Clebsch-Gordan coefficients has, if you wish, six indices identifying its position in the formulas like those of Tables 18–3 and18–6. That is, calling these coefficients[math], we could express the equality of the second line of Table18–6 by writing [math] We will not calculate here the coefficients for any other special cases.9 You can, however, find tables in many books. You might wish to try another special case for yourself. The next one to do would be the composition of two spin-one particles. We give just the final result in Table18–7. 
Table 18–7 Composition of two spin-one particles [math] [math] [math] [math] These laws of the composition of angular momenta are very important in particle physics—where they have innumerable applications. Unfortunately, we have no time to look at more examples here. 18–7 Added Note 1: Derivation of the rotation matrix10 For those who would like to see the details, we work out here the general rotation matrix for a system with spin (total angular momentum)[math]. It is really not very important to work out the general case; once you have the idea, you can find the general results in tables in many books. On the other hand, after coming this far you might like to see that you can indeed understand even the very complicated formulas of quantum mechanics, such as Eq.(18.35), that come into the description of angular momentum. We extend the arguments of Section18-4 to a system with spin[math], which we consider to be made up of [math]spin one-half objects. The state with[math] would be[math] (with [math]plus signs). For[math], there will be [math]terms like[math], [math], and so on. Let’s consider the general case in which there are [math]plusses and [math]minuses—with[math]. Under a rotation about the [math]-axis each of the [math]plusses will contribute[math]. The result is a phase change of[math]. You see that [math] Just as for[math], each state of definite[math] must be the linear combination with plus signs of all the states with the same [math] and[math]—that is, states corresponding to every possible arrangement which has [math]plusses and [math]minuses. We assume that you can figure out that there are[math] such arrangements. To normalize each state, we should divide the sum by the square root of this number. We can write [math] with [math] It will help our work if we now go to still another notation. Once we have defined the states by Eq.(18.60), the two numbers [math] and[math] define a state just as well as [math] and[math]. It will help us keep track of things if we write [math] where, using the equalities of(18.61) [math] Next, we would like to write Eq.(18.60) with a new special notation as [math] Note that we have changed the exponent of the factor in front to plus[math]. We do that because there are just [math]terms inside the curly brackets. Comparing(18.63) with(18.60) it is clear that [math] is just a shorthand way of writing [math] where [math] is the number of different terms in the bracket. The reason that this notation is convenient is that each time we make a rotation, all of the plus signs contribute the same factor, so we get this factor to the [math]th power. Similarly, all together the [math]minus terms contribute a factor to the [math]th power no matter what the sequence of the terms is. Now suppose we rotate our system by the angle[math] about the [math]-axis. What we want is[math]. When [math] operates on each[math] it gives [math] where [math] and[math]. When [math] operates on each[math] it gives [math] So what we want is [math] Now each binomial has to be expanded out to its appropriate power and the two expressions multiplied together. There will be terms with[math] to all powers from zero to[math]. Let’s look at all of the terms which have [math] to the [math]power. They will appear always multiplied with[math] to the [math]power, where[math]. Suppose we collect all such terms. For each permutation they will have some numerical coefficient involving the factors of the binomial expansion as well as the factors [math] and[math]. 
Suppose we call that factor[math]. Then Eq.(18.65) will look like [math] Now let’s say that we divide[math] by the factor[math] and call the quotient[math]. Equation(18.66) is then equivalent to [math] (We could just say that this equation defines[math] by the requirement that(18.67) gives the same expression that appears in(18.65).) With this definition of[math] the remaining factors on the right-hand side of Eq.(18.67) are just the states[math]. So we have that [math] with [math] always equal to[math]. This means, of course, that the coefficients[math] are just the matrix elements we want, namely [math] Now we just have to push through the algebra to find the various[math]. Comparing(18.65) with(18.67)—and remembering that [math]—we see that [math] is just the coefficient of[math] in the following expression: [math] It is now only a dirty job to make the expansions by the binomial theorem, and collect the terms with the given power of [math] and[math]. If you work it all out, you find that the coefficient of[math] in(18.70) is [math] The sum is to be taken over all integers[math] which give terms of zero or greater in the factorials. This expression is then the matrix element we wanted. Finally, we can return to our original notation in terms of [math],[math], and[math] using [math] Making these substitutions, we get Eq.(18.35) in Section18-4. 18–8 Added Note 2: Conservation of parity in photon emission In Section18-1 of this chapter we considered the emission of light by an atom that goes from an excited state of spin[math] to a ground state of spin[math]. If the excited state has its spin up([math]), it can emit a RHC photon along the [math]-axis or a LHC photon along the [math]-axis. Let’s call these two states of the photon [math] and[math]. Neither of these states has a definite parity. Letting[math] be the parity operator, [math] and[math]. What about our earlier proof that an atom in a state of definite energy must have a definite parity, and our statement that parity is conserved in atomic processes? Shouldn’t the final state in this problem (the state after the emission of a photon) have a definite parity? It does if we consider the complete final state which contains amplitudes for the emission of photons into all sorts of angles. In Section18-1 we chose to consider only a part of the complete final state. If we wish we can look only at final states that do have a definite parity. For example, consider a final state[math] which has some amplitude[math] to be a RHC photon going along[math] and some amplitude[math] to be a LHC photon going along[math]. We can write [math] The parity operation on this state gives [math] This state will be[math] if [math] or if [math]. So a final state of even parity is [math] and a state of odd parity is [math] Next, we wish to consider the decay of an excited state of odd parity to a ground state of even parity. If parity is to be conserved, the final state of the photon must have odd parity. It must be the state in(18.75). If the amplitude to find [math] is[math], the amplitude to find [math] is[math]. Now notice what happens when we perform a rotation of[math] about the [math]-axis. The initial excited state of the atom becomes an [math]state (with no change in sign, according to Table17–2). 
And the rotation of the final state gives [math] Comparing this equation with(18.75), you see that for the assumed parity of the final state, the amplitude to get a LHC photon along[math] from the [math]initial state is the negative of the amplitude to get a RHC photon from the [math]initial state. This agrees with the result we found in Section18-1. When we change [math] into[math], you might think that all vectors get reversed. That is true for polar vectors like displacements and velocities, but not for an axial vector like angular momentum—or any vector which is derived from a cross product of two polar vectors. Axial vectors have the same components after an inversion. ↩ Some of you may object to the argument we have just made, on the basis that the final states we have been considering do not have a definite parity. You will find in Added Note 2 at the end of this chapter another demonstration, which you may prefer. ↩ In the deeper understanding of the world today, we do not have an easy way to distinguish whether the energy of a photon is less “matter” than the energy of an electron, because as you remember all the particles behave very similarly. The only distinction is that the photon has zero rest mass. ↩ Note that we always analyze the angular momentum about the direction of motion of the particle. If we were to ask about the angular momentum about any other axis, we would have to worry about the possibility of “orbital” angular momentum—from a [math]term. For instance, we can’t say that the photons leave exactly from the center of the positronium. They could leave like two things shot out from the rim of a spinning wheel. We don’t have to worry about such possibilities when we take our axis along the direction of motion. ↩ We have not normalized our amplitudes, or multiplied them by the amplitude for the disintegration into any particular final state, but we can see that this result is correct because we get zero probability when we look at the other alternative—see Eq.(18.23). ↩ See also Chapter1 of the present volume. ↩ If you want details, they are given in an appendix to this chapter. ↩ We can neglect the recoil given to the Ne[math] in the first collision. Or better still, we can calculate what it is and make a correction for it. ↩ A large part of the work is done now that we have the general rotation matrix Eq.(18.35). ↩ The material of this appendix was originally included in the body of the lecture. We now feel that it is unnecessary to include such a detailed treatment of the general case. ↩ Copyright © 1965, 2006, 2013 by the California Institute of Technology,Michael A. Gottlieb and Rudolf Pfeiffer 18–1 Electric dipole radiation18–2 Light scattering18–3 The annihilation of positronium18–4 Rotation matrix for any spin18–5 Measuring a nuclear spin18–6 Composition of angular momentum18–7 Added Note 1: Derivation of the rotation matrix18–8 Added Note 2: Conservation of parity in photon emission
Published Time: Thu, 14 Aug 2025 17:43:23 GMT William Forsythe, In the Middle, Somewhat Elevated - Mara Marietta =============== Home » The Arts » Dance » William Forsythe William Forsythe In the Middle, Somewhat Elevated WILLIAM FORSYTHE: NIJINSKY’S HEIR A CLASSICAL COMPANY LEADS MODERN DANCE By Senta Driver From Choreography and Dance, 2000, Vol. 5, Part 3, pp. 1-7 I first heard it from Paul Taylor: the suggestion that the real founder of modern dance was Vaslav Nijinsky. The notion comes, in fact, from Lincoln Kirstein. It rests on an appreciation of the choreographer Nijinsky’s profound innovations in movement design and group structure. Perhaps it also reflects that the Russian dancer was committed to a personal vision rather than to the primacy of the classical materials he inherited. What made us think Nijinsky was a ballet choreographer at all? The answer is probably his context, the Diaghilev company, and his training more than his actual product. Consider the choreographic material of his fully completed works. They could all be taken for radical modern dance, but they were performed by ballet dancers, and presented by a company with an otherwise classical repertory. William Forsythe, In the Middle, Somewhat Elevated| Ballett Zürich | Photo: Gregory Batardon The elements of the Russian ballet tradition were being used to forge an utterly original direction. What made us think George Balanchine ran what was essentially a modern dance company, compared to other ballet companies of his era? One usually cited his departure from the classic repertory and school of steps and his commitment to a single, progressive artistic vision and a distinctive movement vocabulary. Ballet is defined, like all arts, by its makers, and by the dancers and choreographers who accept influences that they find persuasive, and build upon them. William Forsythe, In the Middle, Somewhat Elevated| Ballett Zürich | Photo: Gregory Batardon When an artist with a thoroughly classical background is denounced on the grounds that he ‘destroys ballet’, either the work is weak or ballet has developed dangerously reactionary thinking. Those who do not make work are periodically moved to lock down its definitions: this was ineffective in 1912 when Nijinsky created his first full professional work, and it will continue to be so. We can celebrate a profound advance into new thinking when a choreographer spends his artistic life inside the classical school, shows veneration for the ballet d’école, informed respect for Balanchine and ballet history, and regular reliance on the pointe shoe, but makes work with unorthodox expansions of the vocabulary, fractured yet theatrically astute structure, and constant sympathies with the work of major modern dance innovators. William Forsythe, In the Middle, Somewhat Elevated| Ballett Zürich | Photo: Gregory Batardon William Forsythe has given his field a daring new direction and scope. His work is frequently begun in notions about light rather than music, as he reveals in Conversation on Lighting with Jennifer Tipton. Like the finest artists, he teaches the dance audience new skills for looking at things. For his sake we have developed a capacity to see in extremely low levels of light, even as his dancers have learned to be able to move freely, in groups, through a blackout. We can follow the logic of a known classical step through long new permutations. 
A penché may plunge in extraordinary directions, or a fouetté be created by picking up the dancer and hurling her manually around 360° as she executes the legwork — and we still recognize the source. William Forsythe, In the Middle, Somewhat Elevated| Ballett Zürich | Photo: Gregory Batardon As Forsythe has often stated, he treats the premises of classical technique as a usable language capable of new meaning, rather than as a collection of phrases and traditionally-linked steps that retain traditional rules, shapes, and content subject only to rearrangement. Aside from his enrichment of the ballet d’école, his approach greatly strengthens the dancers who use it. This is apparent physically as well as in their intellectual development, and it shows up in a look of knowledgeability and engagement on stage. Most dancers in outside companies who are cast in Forsythe pieces look like different artists in his work. They visibly know what a depth of information lies behind their movement. William Forsythe, In the Middle, Somewhat Elevated| Ballett Zürich | Photo: Gregory Batardon Forsythe has taught classical dancers to generate their own material by applying structural devices to their familiar technique. Drawing upon the theories of Rudolf Laban, which Forsythe has carried forward in what he calls Improvisation Technology, he vitally expands the movement vocabulary. His style has developed over the 14 years in Frankfurt from a forceful, weighted, and athletic one using the pointe shoe as a pole vaulter might, into a much warmer and more fluid realm. He has a permanent affection for pointework, and never abandons it for long, but recent works are sometimes danced in socks, soft slippers or even bare feet, and an increasing use of silence and tenderness is apparent. William Forsythe, In the Middle, Somewhat Elevated | Photo: Herbert Migdoll, Joffrey Ballett The dancers examine each other, touch and handle and interfere with each other in intricate assignments, and work extensively with and close to the floor. The structure of the pieces can seem confusing, but they are assembled with astute theatricality. They are carefully built to pace the evening well, and they progress with their own logic. His approach and movement thinking resonate with the methods of the great modern dance innovators such as Merce Cunningham, Trisha Brown, and Twyla Tharp. Both his artists and the dancers of other companies who have worked with him demonstrate a range of new virtuosities. These days one rarely sees as much original material for the body in a whole season of modern dance as Forsythe offers at Ballett Frankfurt. William Forsythe, In the Middle, Somewhat Elevated| Ballett Zürich | Photo: Gregory Batardon The range of Forsythe’s thinking is illustrated by the various plans he made for a new work at the Roundhouse, a former trolley-car turntable shed in London. He has described three successive concepts: transformation of the building into a huge camera obscura with the image projected into the cellar onto a field of tightly packed narcissus; video on the skylight atop the building; and, finally, raising the ‘world’s largest bouncy castle’, an inflated structure with a trampoline floor in which the audience created all the movement. The piece, Tight Roaring Circle (1997), is known to him and Dana Caspersen, who made it with him, as the John Cage Memorial Choreographic Cube. William Forsythe, In the Middle, Somewhat Elevated| English National Ballet Ballet tells us where ballet is going. 
Not even the most ardent devotees of what used to be can do that. We had half expected to see moderns, with their alleged superiority of imagination, take over the classical field. The stream of new choreographers making work on and off pointe for ballet companies suggested to some that the future of ballet lay outside its fold. Are we now looking at the opposite, a classically-based artist who emerges as a profound leader of modern dance? Certainly, in addition to his rich physicality, there is more adventure, more risk-taking in topic and visual design and structure in an evening at Ballett Frankfurt than one finds in most contemporary dance. There is also more aesthetic daring. William Forsythe, In the Middle, Somewhat Elevated | Boston Ballet Many modern dance artists have widened their reach into popular culture and its resources, but no one has managed a surreal parody of a Broadway musical, expertly sung Ethel Merman-style by an entire ballet company, quite on the level of I sabelle’s Dance(1986). Forsythe is, more simply, a profound leader of all dance, going in directions once reserved for moderns by means of a classical vehicle. He observed in the program of his April 1998 season at Theater Basel, ‘I use ballet, because I use ballet dancers, and I use the knowledge in their bodies. I think ballet is a very, very good idea that often gets pooh-poohed.’ He trusts the tradition. He believes it is his, and that it is fertile. What does his work say about the old dichotomy, now that he has opened ballet’s physical vocabulary, without losing its classical base, by the use of modern dance gambits? William Forsythe, In the Middle, Somewhat Elevated | Boston Ballet The illuminating factor in his process is his huge curiosity about everything around him, how it works and what happens if something is knocked awry. This and his simple, candid respect for other artists are the elements that stand out. Forsythe has profoundly enriched our art form in his 22 years of working, taking the utmost care to evade the center of attention. He has a remarkable capacity, such as I have only seen before in the great teacher Helen Alkire, for keeping his mind open and at work on new challenges. He has forged a new kind of beauty in dancing. William Forsythe, In the Middle, Somewhat Elevated| English National Ballet WILLIAM FORSYTHE: ‘IN THE MIDDLE, SOMEWHAT ELEVATED’ Pas de deux, ‘In the Middle, Somewhat Elevated’ Zakharova & Andre Merkuriev, Mariinsky Ballet (Note: Poor quality video but outstanding performance) ‘IN THE MIDDLE, SOMEWHAT ELEVATED’ IN ‘MARA, MARIETTA’ FROM ‘MARA, MARIETTA’ Part Three Chapter 10 With the applause my beating heart conspires to leave me breathless; I look into your eyes and see you are as stunned as I. What have we witnessed? Spiders in mating display? Matador and bull in the ceremonial kill? Karateka demonstrating combat stances? Aye, from our seats in the grand circle, we have seen the reinvention of ballet; we have seen the vestiges of academic virtuosity extended, accelerated and given a power that electrified the stage. Yes, in the Palais Garnier, two dancers in a pas de deux astounded us. Did they feel in their vertebrae, did they sense in their sinews, that this choreography is destined to endure? William Forsythe, In the Middle, Somewhat Elevated | Photo: Herbert Migdoll, Joffrey Ballett As the purity of their movements burned away all embellishments, did they know the erotic charge they were generating would lay bare our hearts? 
Unearthly angles and undulations, the steely majesty of wrenching turns: Who is the man with such a kinetic imagination? Helical motion and counter curvature, audacious extensions and volumetric form; off-kilter dynamics and casual contortions, high kicks and thrusting hips: Who is the master that conceived this miracle? Feline, vulpine, feminine, the sex and venom in a push-pull attack; virile, fluid, visceral, the violence and grace of a split kick snapped back: Who is the man who, in mingling the demonic and the divine, has resuscitated the corpse of classical dance? Forsythe. William Forsythe. William Forsythe, In the Middle, Somewhat Elevated| Ballett Zürich | Photo: Gregory Batardon Was it his freedom that allowed him to turn the page on the past while preserving it in palimpsest? Was it his freedom that inspired the composer to write such unremittingly ecstatic music—telluric, architectonic, empyrean? Aye, was it his freedom that gave a spring to our step, a grace to our stride, as we stepped, hand-in-hand, into the night outside? William Forsythe, In the Middle, Somewhat Elevated| English National Ballet MARA, MARIETTA: A LOVE STORY IN 77 BEDROOMS A novel by Richard Jonathan Available from AMAZON (paper | ebook) & iBOOKS, GOOGLE PLAY, KOBO & NOOK (see LINKS below) ‘IN THE MIDDLE, SOMEWHAT ELEVATED’: THOM WILLEMS, COMPOSER You can listen to the track in full with a registered Spotify account, which comes for free. THOM WILLEMS, MELODY OF THE METROPOLIS Chantal Aubry, La Croix(French newspaper), 2 Nov 2000 | Translated by Richard Jonathan Thom Willems, musical alter ego of choreographer William Forsythe, is as reserved in his daily life as he is assertive in his music. For the first fourteen years of his partnership with the director of the Frankfurt Ballet, no CD, no live recording, no concert outside of his work for the celebrated company has seen the light of day. Finally ceding to popular demand, he has recorded two of his most famous compositions, The Loss of Small Detail and In the Middle, Somewhat Elevated. The latter work inspired one of Forsythe’s most astounding pieces, premiered by the Paris Opera Ballet in 1987. It features both a metallic, aggressive sound, echoing a busy metropolis, and the hush of urban clamour filtered through closed windows. The din of the city, a metropolitan melody, an industrial soundscape that twentieth-century music draws on. Thom Willems, The Loss of Small Detail&In the Middle, Somewhat Elevated A Dutchman, born in Arnhem in 1955, Thom Willems began composing at the age of twelve, even before he was admitted to the Conservatory in The Hague. He belongs to no school, and refuses to be recruited into any. Indeed, he resists falling under any influence, especially that of the contemporary music the preceded him. Thom Willems His crucible: electronics. His principle: deconstruction, the principle of an entire generation, with architect Daniel Libeskind its beacon and William Forsythe its leading exponent in dance. ‘The sheer rock face of mountains in action movies, the steep gorges between skyscrapers; speed and vertiginous falls, rich tapestries of sound set off against ceaseless traffic’, as dance critic Eva-Elisabeth Fischer puts it, Thom Willems the musical craftsman also knows how to create an emotional subtext, a climate that’s a perfect complement to the one Forsythe’s dance generates. His music, you’ll have understood, transcends ‘ballet music’. See also Michael Hoh’s interview with Thom Willems on the Staats-Ballett Berlin website. 
Shen Congwen and Zhang Zhaohe, married for over fifty years… – Chinese book reviews

Shen Congwen and Zhang Zhaohe, married for over fifty years…

The release a few weeks ago of a book by Shen Congwen (which will be discussed later), “The journey to Xiang and other short stories” (1), has led me to read his translated works again and also to discover a remarkable book, “Four sisters of Hofei” by Annping Chin. Born in Taiwan in 1950, she is a professor in the History Department at Yale University and the wife of Jonathan Spence, the well-known academic. Annping Chin was able to meet the four sisters and had access to the family archives; she has published a unique document about the life of a family at a time when China was opening to modernity. One of the four sisters is Chao-Ho, the future wife of Shen Congwen.

1 / A very romantic encounter:

In the early 1930s, Shen Congwen is already a well-known writer, the last romantic, and his short stories are very different in style, themes and characters from the writings of the time. The famous philosopher Hu Shi, chairman of the Chinese Institute of Wusong near Shanghai, offered him a professorship, an exception to the rules as he had no university degree. This is where he meets Chao-Ho, then a second-year student. He is her teacher and falls in love with her, but she says bluntly that her studies are her priority and that she does not want to be bothered by a suitor. Shen Congwen is very distressed; Chao-Ho’s best friend explains that she is more rational than emotional and still has childish attitudes: she knows how to say no and can be stubborn. She meets Hu Shi, who speaks very highly of Shen Congwen: he considers him a genius and the most promising writer of his generation. Subsequently, Hu Shi wrote to the unfortunate lover, with a copy to Chao-Ho: “This woman does not understand you, much less your love for her … Love is not the only thing in life … this person is too young and inexperienced … She enjoys turning away her suitors. You’re just one among many.” It is true that at the time she kept her suitors’ letters, carefully classified and numbered! She was beautiful but paid no attention to it; the beauty of the family was her older sister Yuan-Ho. She had short hair and a dark complexion, a “black peony”. Why did she marry him in September 1933, when she was often exasperated by him? Because he wrote good letters! The short story “The housewife”, written in 1936 (3), is a portrait of his young wife, of the way he had courted her and of the difficulties that were emerging: he was eight years older, unable to take reality into account, and invested everything in his passion, antiques (which became his profession after 1949). “One minute you call me your darling, your treasure, and the next you say the same thing about a tray or a vase …” She was educated to be a traditional wife and was treated as such by her husband. “Their habits were entirely different, so she tried hard to adjust. She wanted to become a model wife at home as well as in society. She was loving and responsible, modest and disciplined.”
Everyday life was not easy: she now understood what passion was, and she knew that he retained a nearly childlike passion for her, but this had no meaning in daily life, nor had she any great need for it. He accuses her of not trying to understand his actions and of simply tolerating them. She accuses him of shying away from his responsibilities and from reality. Her family circle treats her like a teenage girl while she is the mother of a two-year-old boy!

2 / Travelling and being separated:

A few months after his marriage, he went to Fenghuang, his hometown in western Hunan. He wrote his wife about sixty letters, which would provide the material for “Journey to Xiang (Discursive notes on a trip through Hunan)”. Annping Chin translated the only letter from Chao-Ho that remains from this period (p. 205); she signs with her nickname San-san and is less romantic than her husband’s letters … In 1937, they have two boys and Shen Congwen leaves Beijing for Kunming because of the war with Japan. She had just had a baby and waited until the summer of 1938 to leave Beijing. One may wonder whether she did not prefer him thousands of miles away, writing to her! She worries about his financial situation, fears that he will become a burden to those around him, and regrets that he wastes his talent by multiplying essays and reviews … Once installed near Kunming, she becomes a school teacher and enjoys teaching and her independence. The relationship between Shen Congwen and Gao Gingzi, a friend first known during his stay in Qingdao in 1931 and met again at Kunming University, would create rumors and difficulties, but it also led him to write several remarkable essays and stories, especially “Water and clouds”, translated and introduced by Isabelle Rabut (4). This document from the end of 1942 is a “long poetic meditation” written at a time when the writer was isolated, criticized by both left-wing writers and nationalists. Influenced by psychoanalysis, his introspection, as Isabelle Rabut says, “has found its way into a sort of compromise between romanticism that is generated by passions and philosophies that keep them away.” His marriage is firmly anchored and he seeks by all means to guard himself against passion; “he is balanced precariously between what he preaches ideally (the beauty of the love affair) and what he is ready to take on in real life.” A short story, “Looking at the rainbow” (3), is strongly influenced by D.H. Lawrence: a private, sentimental conversation during a winter evening. It created a scandal and was condemned as perverse during a period of resistance against Japanese aggression. For Jeffrey Kinkley, the relationship with Gao Gingzi was passionate but platonic; nevertheless, Shen Congwen did not show the short story to his wife.

3 / Politics at the center of everything:

At the end of the war in 1945, Shen Congwen returned to Peking University, six months before his family, and during this period he felt very close to his wife. His position at the university became difficult: he was attacked by leftist writers, especially Guo Moruo, and was dropped by Ding Ling, whom he had vigorously defended in 1933. Completely depressed, he attempted suicide. When the communists entered Beijing in January 1949, Chao-Ho was admitted to the University of North China to be re-educated in the revolutionary tradition. She became an editor of the journal People’s Literature. Shen Congwen’s university position was cancelled and he was transferred to the Beijing History Museum.
His literary work was over; he refused to start writing again despite pressure from Zhou Enlai. During the Cultural Revolution, he was sent to a May 7 cadre school for three years, and his wife was with him. They returned to Beijing in 1972 but did not live together for a long time; however, he took his meals with her. They lived, with stubbornness, in two separate worlds. In 1995, seven years after the death of Shen Congwen, she published their correspondence, which had already been partially printed. Yiyun Li says that this is one of the three books she brought with her when she went to study in the United States, and she has translated some of the letters; a complete translation by Alice Xin Liu is being prepared. We should leave the last word to Chao-Ho: “Were we happy or unhappy? I have no answer. I didn’t completely understand him. Later, I began to grasp what he was about; not until now did I truly understand his character and the pressure put on him …”

Bertrand Mialaret, December 16, 2012

(1) Shen Congwen, “The journey to Xiang and other short stories”, translated and introduced by Marie Laureillard and Gilles Cabrero. Bleu de Chine, Gallimard, 2012. 300 pages, 25 euros.
(2) Annping Chin, “Four sisters of Hofei.” Scribner, 2002.
(3) Shen Congwen, “Imperfect Paradise – Twenty-four stories.” Edited by Jeffrey Kinkley. University of Hawaii Press, 1985.
(4) Shen Congwen, “Water and clouds.” Translation and introduction by Isabelle Rabut. Bleu de Chine, 1993.
Published Time: Sat, 02 Aug 2025 00:03:45 GMT

Formalization of a Newton Series Representation of Polynomials

Cyril Cohen, Inria Sophia Antipolis – Méditerranée, France
Boris Djalal, Inria Sophia Antipolis – Méditerranée, France

HAL Id: hal-01240469, submitted on 10 Dec 2015. To cite this version: Cyril Cohen, Boris Djalal. Formalization of a Newton Series Representation of Polynomials. Certified Programs and Proofs, Jan 2016, St. Petersburg, Florida, United States.

Abstract. We formalize an algorithm to change the representation of a polynomial to a Newton power series. This provides a way to compute efficiently polynomials whose roots are the sums or products of roots of other polynomials, and hence provides a base component of efficient computation for algebraic numbers. In order to achieve this, we formalize a notion of truncated power series and develop an abstract theory of poles of fractions.

Categories and Subject Descriptors: D.2.4 [Software Engineering]: Software/Program Verification - Correctness proofs, Formal methods; F.2.1 [Analysis of Algorithms and Problem Complexity]: Numerical Algorithms and Problems - Computations on polynomials; I.1.2 [Symbolic and Algebraic Manipulation]: Algorithms - Algebraic algorithms; I.1.1 [Symbolic and Algebraic Manipulation]: Expressions and Their Representation - Representations (general and polynomial).

Keywords: formalization of mathematics, algebraic numbers, fractions, polynomials, Newton power series.

Introduction

Real algebraic geometry studies points and sets defined by polynomial equations and inequations. Algorithms in real algebraic geometry handle these points and sets in an implicit way and use the defining polynomials as a basis for computations. For example, a real algebraic number is represented by a nonzero polynomial with rational coefficients and a way to select a root (e.g. real approximation, rational interval (Cohen 2012b), Thom encoding). In order to compute basic arithmetic operations on algebraic numbers (sum, product, comparison), we perform operations on these polynomials and need to find a polynomial whose roots are the result of the given operation.

Given two polynomials in $K[X]$, $P = \sum_i a_i X^i$ with roots $\alpha_1, \dots, \alpha_m$ and $Q = \sum_j b_j X^j$ with roots $\beta_1, \dots, \beta_n$, we wish to compute a polynomial whose roots are $\alpha_i + \beta_j$ for $(i,j) \in \{1,\dots,m\} \times \{1,\dots,n\}$, which we write $P \oplus Q$ and which we call the composed sum. Similarly we define the composed product $P \otimes Q$, whose roots are $\alpha_i \beta_j$ for $(i,j) \in \{1,\dots,m\} \times \{1,\dots,n\}$. If the base field $K$ is algebraically closed (i.e. every non-constant polynomial has a root), we can split the polynomials as $P = a_m \prod_i (X - \alpha_i)$ and $Q = b_n \prod_j (X - \beta_j)$, and compute the results
$$P \oplus Q = a_m b_n \prod_{(i,j)} \bigl(X - (\alpha_i + \beta_j)\bigr), \qquad P \otimes Q = a_m b_n \prod_{(i,j)} \bigl(X - \alpha_i \beta_j\bigr).$$
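For intuition only, the composed sum and product can be computed naively from floating-point roots; the sketch below (assuming numpy) does exactly that, which is precisely what the algorithm formalized in this paper avoids, since it computes without factoring P or Q.

```python
# Illustration only (assuming numpy): the naive, root-based composed sum and
# product.  It is here just to make the definitions concrete; the formalized
# algorithm works on coefficients and never needs the roots.
import numpy as np

def composed(p, q, op):
    """p, q: coefficient lists, highest degree first (numpy convention)."""
    gamma = [op(a, b) for a in np.roots(p) for b in np.roots(q)]
    return np.real_if_close(np.round(np.poly(gamma), 6))   # monic result

P = [1, 0, -2]   # X^2 - 2, roots +sqrt(2) and -sqrt(2)
Q = [1, 3]       # X + 3,   root -3
print(composed(P, Q, lambda a, b: a + b))   # [1. 6. 7.]     i.e. X^2 + 6X + 7
print(composed(P, Q, lambda a, b: a * b))   # [ 1.  0. -18.] i.e. X^2 - 18
```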
Given an arbitrary field K, folklore mathematics textbooks usu-ally take an algebraic closure or splitting fields and performs the construction above. However, in order to take the algebraic closure, one needs to already have a way to perform the composed sums and products, so this does not solve the problem. In fact, these op-erations may rely on purely algebraic computations on the coeffi-cients, and thus do not require a preexisting algebraic closure to perform the computation. In prior work by the first author ( Cohen 2012b ), we provide a way to perform these operations and build the algebraic closure. However the underlying algorithms for composed sums and prod-ucts use resultants, which is not an efficient implementation and would not allow for a practical implementation of algebraic num-bers. An efficient way relies on a morphism between polynomials and Newton power series, as described by A. Bostan in ( Bostan 2003 ), which is a main reference for the computer algebra commu-nity. Composed sums and products on polynomials are mapped to low-cost operations on Newton power series. In the case of zero characteristic, the new algorithm improves the complexity by a lin-ear factor ( Bostan 2003 ). If n denotes the degree of two input poly-nomials and M (n) the costs to multiply two polynomials of de-gree n, then the traditional algorithm based on bivariate resultant has asymptotic complexity O(nM (n) log( n)) while the new algo-rithm has complexity O(M (n2)) (Bostan 2003 ). In this work we use our prior knowledge of the existence of an algebraic closure (even though it is not efficient) as a reference implementation to certify our new results. Indeed, all the algorithms we describe use the coefficients in the base field, but all the proofs are made by supposing one can split the polynomials in a field extension. The existence of the algebraic closure provides grounding for this as-sumption and thus serves as a bootstrap. In this paper, we describe a C OQ formalization of an isomor-phism between monic polynomials of a bounded degree and a trun-cated version of formal power series. In order to explain and formalize this isomorphism we first introduce the general mathematical notions we had to formalize (Section 2). The truncated formal power series (Section 2.1 ) is an approximation of traditional power series, better suited for use in COQ with the M ATHEMATICAL COMPONENTS library. We also need to build the fraction field of the domain of polynomials in order to compute the isomorphism, and instead we first give an abstract interface for poles and evaluation of fraction (Section 2.2 )which we implement twice. Throughout Section 3, we describe some algorithms on poly-nomials on a field and prove them correct with regard to another, more concise definition in an algebraic closure. We explain how to compute the Newton power series in the context of algebraically closed fields (Section 3.1 ). Then we describe an algorithm to com-pute the Newton power series without making operations inside an algebraic closure and we prove it computes the same result as in the previous section (Section 3.2 ). We also provide an explicit inverse, which computes a polynomial from a Newton power series. Finally, we explain how to compute the composed sum and product using a translation to a Newton power series, a simple computation on the formal power series and then a backward translation. The results described in this paper are entirely formalized in COQ , unless explicitly stated. 
The formal development is available on the first author's webpage:

Mathematical background

2.1 Formal power series and truncated formal power series

In this paper, we make statements indirectly involving formal power series (FPS), the common mathematical object noted $K[[X]]$. A FPS is a generalization of a polynomial: it is an infinite sequence of coefficients $(a_i)_{i\in\mathbb{N}}$. Unlike for polynomials, the set of non-zero coefficients is not necessarily finite. For example, $1+X^2$ is a polynomial (thus a FPS) and $S = 1 + X + X^2 + X^3 + \dots$ is the FPS with general term $X^i$. In the general case, coefficients are taken in a commutative ring, and the set of formal series $R[[X]]$ over a commutative ring $R$ has a structure of commutative ring. In this work, we study formal series $K[[X]]$ over a field $K$ of characteristic zero, which means that for any natural number $n$: $\sum_{i=1}^{n} 1_K \neq 0$.

In Coq we could implement FPS by functions from nat to the given ring, as is done by A. Chaieb in (Chaieb 2011). In this case, comparing two FPS amounts to proving that two given functions are equal. This approach has two drawbacks: equality becomes undecidable (more precisely, disequality is semi-decidable), and without the functional extensionality axiom two FPS with equal terms are not provably equal. While we would not mind being in a context where the second axiom is validated (e.g. Homotopy Type Theory (Univalent Foundations Program 2013)), the algebraic library we use extensively in our work requires a decidable equality.

Instead of FPS, we consider an approximation we call truncated formal power series up to $X^m$ (TFPS$_m$, or TFPS if there is no ambiguity), noted $K_m[X]$. In this case, we deliberately provide only $m+1$ coefficients. The set of TFPS$_m$ is isomorphic to the set of polynomials quotiented by $X^{m+1}$. Consequently, we can implement a TFPS by a polynomial of degree at most $m$, i.e. by a polynomial together with a proof that its degree is at most $m$. We use polynomials from the Mathematical Components library, already used for many results (algebraic numbers, Galois theory, the Cayley-Hamilton theorem, the odd order theorem (Gonthier et al. 2013), ...).

Record tfps := TFPS { truncation_tfps :> {poly K}; _ : size truncation_tfps <= n.+1 }.

We write {tfps K n} for the formal power series over K truncated up to precision X^n (included). We can use the following fact to turn any polynomial into a TFPS$_m$ for any $m$, and build a smart constructor Tfpsp which turns any polynomial into a TFPS$_m$.

Fact leq_modpXn m p : size (p %% 'X^m) <= m.
Definition Tfpsp m : {poly K} -> {tfps K m} := fun p => TFPS (leq_modpXn m.+1 p).

We also provide a construction to define a TFPS$_m$ from its coefficients:

Notation "[tfps s => E]" := Tfpsp m (\poly_(i < m.+1) E)

2.1.1 Arithmetic properties

The decision procedure for equality consists in comparing the underlying polynomial representations. The addition of two polynomials of degree at most $m$ produces a polynomial of degree at most $m$; thus, we define the addition of two TFPS as the addition of the underlying polynomials. In order to multiply two TFPS$_m$, we multiply the underlying polynomials; this can produce a polynomial of degree greater than $m$, so we then take the remainder modulo $X^{m+1}$ to guarantee that the degree of the obtained polynomial is at most $m$. This makes sense since the coefficients from degree $m+1$ on are only partially known (we are missing the information from the rest of the series).
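As a quick illustration of the representation (not the Coq development itself), a TFPS can be prototyped as a polynomial reduced modulo X^(m+1), i.e. a coefficient list cut after m+1 entries; a minimal Python sketch, with names of our own choosing:

```python
# A minimal sketch (plain Python, names of our own choosing) of the idea behind
# TFPS: keep a polynomial only up to degree m, i.e. reduce it modulo X^(m+1).
from fractions import Fraction

class TFPS:
    def __init__(self, coeffs, m):
        c = [Fraction(x) for x in coeffs[:m + 1]]      # drop terms beyond X^m
        self.c = c + [Fraction(0)] * (m + 1 - len(c))  # pad to exactly m+1 coefficients
        self.m = m

    def __repr__(self):
        body = " + ".join(f"{a}*X^{i}" for i, a in enumerate(self.c))
        return f"{body}   (known modulo X^{self.m + 1})"

# Truncating 1 + X + X^2 + X^3 + X^4 at precision m = 2 forgets the higher terms:
print(TFPS([1, 1, 1, 1, 1], 2))   # 1*X^0 + 1*X^1 + 1*X^2   (known modulo X^3)
```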
Our definition of multiplication of two TFPS in C OQ is as follows: Definition mul_tfps (f1 f2 : { tfps K m }) : { tfps K m }:= Tfpsp (f1 f2 ). For example in K2[X], we have: (1 + X2) × X = X. Additionally, the Hadamard product of two FPS is the term-wise product of the FPS. Formally we write: Definition hmul_tfps (f1 f2 : { tfps K m }) : { tfps K m }:= [ tfps s => f1 ‘_s f2 ‘_s ] For example in K2[X], we have: (1 + X2) ⊙ (1 + X + 2 X2 ) = 1 + 2 X2. FPS over an integral domain forms an integral domain. This is not true with TFPS as we defined them. Indeed, in TFPS 3 X2 · X2 = 0 (mod X4) This is due to the fixed precision of our representation. We could opt for a “floating truncation” FPS m, where m would actually refer to the number of meaningful terms, i.e. the representation of these series would be of the form XM ∑ 0≤s≤m asXs with a0 nonzero. Moreover, allowing M to be negative would lead to a representation of Laurent formal series instead. Derivative We can translate any statement about FPS into a statement about TFPS. Let’s consider the additivity of formal derivation as an ex-ample. In terms of FPS, it expresses as: ∀f, g ∈ K ( f + g)′ = f ′ + g′. In terms of TFPS, it expresses as: ∀m ∈ N, ∀f, g ∈ Km[X] ( f + g)′ = f ′ + g′. Here, we just have replaced occur-rences of FPS by TFPS. However, the translation of a statement dealing with FPS into a statement dealing with TFPS can be trick-ier. This is the case for the definition of the derivative of a TFPS (see 2.1.1 ). We formally prove that Km[X] is a ring. The rela-tionships between FPS and TFPS is summarized by the following diagram: K Km[X] Kn[X]............................................................................................................................................................ ............................................................................................................................................................................. ................. ⌊.⌋m ............................................................................................................................................................ ............................................................................................................................................................................. ................. where n ≤ m, ։ denotes a surjective ring morphism, and ⌊.⌋m denotes the function which sends a FPS (or a polynomial) to its truncation in Km[X]. Please note that in our C OQ development we chose only to deal with TFPS, not FPS. One can mathematically define the derivative, primitive, exponential and logarithm of FPS. All these operations are required to formalize the algorithm which changes the representation of a polynomial to a Newton TFPS. We are going to define such operations on TFPS by adapting their equivalent definitions on FPS. The derivative of a polynomial is defined as follows. For P ∈ K[X], P ′ = m∑ i=1 i · ai Xi−1 = m−1∑ i=0 (i + 1) · ai+1 Xi. The notion of derivative on polynomials is easily extended to FPS. For P ∈ K , P ′ = ∞∑ i=1 i · ai Xi−1 = ∞∑ i=0 (i + 1) · ai+1 Xi. In C OQ we write: Definition deriv_tfps : { tfps K n } -> { tfps K n .-1} := fun f => [ tfps s => f‘_s .+1 + s.+1] Note that derivation must be a map from Km[X] to Km−1[X],not a map from Km[X] to Km[X]. Indeed, when we take derivative, we lose precision. Otherwise, the following commutative diagram would be broken: K K Km[X] Km. −1 [X]. 
............................................................................................................................................................... ................................................................................................................................................................................................. derivative ........................................................................................................................................ .......................................................................................................................................................................... derivative .......................................................................................................................................................................................................................................................................................................................................................................................................................... ⌊.⌋m .......................................................................................................................................................................................................................................................................................................................................................................................................................... ⌊.⌋m−1 Primitive We define the canonical primitive of a polynomial as the only primitive such that the constant term is zero. ∫ P = ∫ m∑ i=0 ai Xi = m∑ i=0 ai i + 1 Xi+1 . Definition prim (p : { poly K }) := \poly_ (i < ( size p ).+1) (p‘_i .-1 + ( i != 0) / ( i%: R)). From the canonical primitive, we can derive the family of primi-tive functions for each possible constant term in the ring by adding the constant a, as in (a + prim p ).Like for the derivative, these definitions naturally extend to a FPS. ∫ ∞∑ s=0 as Xs = ∞∑ s=0 as s + 1 Xs+1 . We formally define primitive for polynomial and for TFPS. Definition prim_tfps : { tfps K n } -> { tfps K n .+1} := fun f => [ tfps s => f‘_s .-1 + ( s != 0) / s%: R]. Note that the primitive must be a map from Km[X] to Km+1 [X],not a map from Km[X] to Km[X], for exactly the same reason as for derivation. We prove basic theorems linking derivative with primitive: (∫ P )′ = P ∫ (P ′) = P − a0 Our corresponding C OQ pieces of code of these two statements are: Lemma prim_tfpsK (n : nat ) : cancel (@prim_tfps n ) ( @deriv_tfps K n .+1). Lemma deriv_tfpsK n (f : { tfps K n .+1}) : {in @coef0_is_0 K n .+1, cancel (@deriv_tfps _ _ ) ( @prim_tfps _ )}. Composition The composition of FPS is not always well defined: given two P = ∞∑ i=0 ai Xi and Q = ∞∑ j=0 bj Xj , for the composition to be well-defined it is sufficient that b0 = 0 (cf ( Wikipedia )). We can then write the composition of P et Q in the following way P ◦ Q = ∞∑ i=0 ai ( ∞∑ j=0 bj Xj )i. We could also define the composition with explicit coefficients: P ◦ Q = ∞∑ k=0 ck Xk with ck = ∑ i∈N, s ∈Ni ∑ j∈Ii sj =k ai ∏ j∈Ii bsj  where Ii = {1, . . . , i }.The coefficient ck is well-defined whenever b0 = 0 , since the sum and each product is finite. Indeed, for any k, if there is a j such that sj = 0 then ∏ j∈Ii bj = 0 , so we can sum over only the sequences s ∈ (N∗)i.Since ∑ j∈Ii sj = k and ∀j ∈ Ii, s j > 0, we have i ≤ k.Hence the sum indexed by i ∈ N and s ∈ Ni such that ∑ j∈Ii sj = k is indeed finite. 
This proves that ck is a finite sum and is well-defined. The formula above works also for k = 0 . In this case the only possible i is 0. It leads to: c0 = a0 ∏ j∈∅ bsj = a0 × 1 = a0 which is the expected result. Formally proving the general formula for the coefficient of the composition of two TFPS is the subject of an ongoing work. Composition for polynomials is already defined in the M ATH - EMATICAL COMPONENTS library. We formally define this notion when formal power series are truncated, by directly defining the composition in terms of polynomials. We use this opportunity in our C OQ code: Definition comp_tfps m (q p : { tfps K m }) := if q \in (coef0_is_0 K m ) then Tfpsp m (comp_poly q p ) else 0. Notation "p \So q" := ( comp_tfps q p ). We did not yet prove in C OQ the formula with explicit coeffi-cients, nor did we prove the existence of the compositional inverse when the coefficient of X0 is 0 and the coefficient of X1 is 1. In-deed, this could have eased the definition of exponential and loga-rithm, but it is not used in our developments yet (it is not required to define Newton Power Series). Exponential There are two main ways to define the exponential of a FPS. We can see the exponential as the element of K defined by exp = ∞ ∑ i=0 Xi i! . Note that here we use the hypothesis that K has zero characteristic to guarantee i! 6 = 0 . This is commonly expressed in M ATHEMATI - CAL COMPONENTS by the following hypothesis. Hypothesis char_K_is_zero : [ char K ] = i pred0 . We can also view this exponential as a function from FPS with zero constant to FPS with constant 1, defined by: fexp (P ) = ∞ ∑ i=0 P i i! . This definition is related to the first one by: fexp (P ) = exp ◦P. In our development, we use the former definition adapted to TFPS: Definition exp (p : { tfps K n }) : { tfps K n } := if p \notin coef0_is_0 then 0 else Tfpsp (\ sum_ (i < n.+1) (( i‘!%: R)^-1) : ( p ^+ i))). We formally prove that the formal exponential is a morphism: for all P and Q such that P (0) = Q(0) = 0 , exp( P + Q) = exp( P ) exp( Q). The corresponding C OQ code statement is: Lemma exp_is_morphism :{in (@coef0_is_0 K m ) &, {morph (@exp _ _ ) : p q / p + q >-> p q}}. We formally prove the formula linking the exponential and its derivative, which states, for any formal power series P : (exp P )′ = P ′ exp( P ). In our C OQ code, this formula is expressed as: Lemma deriv_exp (m : nat ) ( p : tfps K m ) : (exp p )^‘ () = ( p^‘ ()) ( Tfpsp m .-1 ( exp p )). It would be simplier if exp were defined as a TFPS, as we could write exp ′ = exp and prove the previous theorem as a trivial application of the derivation of composition theorem. Moreover, injectivity could be obtained by the existence of compositional inverses under the right conditions. Logarithm As for exponential, logarithm can be defined both as a function or as a FPS. For any P such that P0 = 1 : log P = − ∞ ∑ i=1 (1 − P )i i . This series is well-defined whenever P0 = 1 for the same reason as for the composition. In our development, we use the latter definition adapted to TFPS: Definition log (p : { tfps K n }) := if p \notin coef0_is_1 then 0 else Tfpsp n (- \ sum_ (1 <= i < n.+1) (( i %: R) ^-1) : ((1 - val p ) ^+ i)). We formally prove: (log P )′ = P ′ P . We expressed the lemma about the derivative of the formal logarithm on TFPS with: Lemma deriv_log (m : nat ) ( p : tfps K m ) : p \in (coef0_is_1 K m ) -> (log p ) ^‘ () = ( p )^‘ () / ( Tfpsp m .-1 p). 
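Since everything is truncated, these exponentials and logarithms are finite sums and can be prototyped directly. A minimal exact-arithmetic sketch (standard library only; the helper names are ours, not the Coq development's) checking the morphism property of exp stated above and the cancellation log(exp P) = P discussed next:

```python
# Truncated exp and log, as finite sums modulo X^(M+1); exact rational arithmetic.
from fractions import Fraction as Fr
from math import factorial

M = 6  # we work modulo X^(M+1)

def trunc(p):
    p = [Fr(x) for x in list(p)[:M + 1]]
    return p + [Fr(0)] * (M + 1 - len(p))

def add(p, q): return [a + b for a, b in zip(trunc(p), trunc(q))]

def mul(p, q):                           # truncated product, as in mul_tfps
    p, q, r = trunc(p), trunc(q), [Fr(0)] * (M + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= M:
                r[i + j] += a * b
    return r

def power(p, k):
    r = trunc([1])
    for _ in range(k):
        r = mul(r, p)
    return r

def tfps_exp(p):                         # sum_{i <= M} p^i / i!,  needs p(0) = 0
    assert trunc(p)[0] == 0
    r = trunc([])
    for i in range(M + 1):
        r = add(r, [c / factorial(i) for c in power(p, i)])
    return r

def tfps_log(p):                         # -sum_{1 <= i <= M} (1-p)^i / i,  needs p(0) = 1
    assert trunc(p)[0] == 1
    q = [Fr(1) - c if i == 0 else -c for i, c in enumerate(trunc(p))]   # 1 - p
    r = trunc([])
    for i in range(1, M + 1):
        r = add(r, [-c / i for c in power(q, i)])
    return r

p, q = trunc([0, 1, 2]), trunc([0, 0, 0, Fr(1, 3)])          # X + 2X^2  and  X^3/3
print(tfps_log(tfps_exp(p)) == p)                            # True: log(exp P) = P
print(tfps_exp(add(p, q)) == mul(tfps_exp(p), tfps_exp(q)))  # True: exp is a morphism
```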
We prove that log(exp P ) = P on TFPS formally by derivating the expression and using the lemma about the derivative of log . It is possible to obtain this results from logarithm injectivity. But in our development, we use the derivative of the logarithm formula to prove the injectivity of the logarithm function. The injectivity of the formal logarithm on TFPS expresses as: Lemma log_inj :{in coef0_is_1 K n &, injective (@log K n )}. We prove the injectivity of the logarithm as follows. First, we notice that the derivative formula of a division holds for TFPS . For any TFPS P and Q such that Q 6 = 0 , we have: ( P Q )′ = P ′Q − QP ′ Q2 . Let be TFPS P et Q such that P (0) = 1 and Q(0) = 1 , and let’s suppose that we have log( P ) = log( Q). Since Q(0) = 1 , we have Q 6 = 0 . We are going to prove that P = Q and thus the injectivity of the logarithm. By derivating the hypothesis, we get: P ′ P = Q′ Q which rewrites as P ′Q − QP ′ = 0 then P ′ Q−QP ′ Q2 = 0 (since Q 6 =0). Using our first remark, we thus get ( P Q )′ = 0 . It implies that P Q is a constant. In other words, the underlying polynomials of P and Q are associate polynomials. Since P (0) = Q(0) , we finally get P = Q. This achieves the proof of the injectivity of the formal logarithm on TFPS. We could simplify some proof by getting the logarithm of a se-ries as a composition for series. Indeed we could define log(1 − X) as a series: L = − ∞ ∑ i=1 Xi i . So that log P = L◦ (1 −P ), which would be well-defined whenever the constant coefficient of P is 1. Moreover, injectivity could be obtained by the existence of compositional inverses under the right conditions. 2.2 Fractions The main theorem about Newton power series requires polynomial fractions. We use a more general result about the field of frac-tions of an integral domain and K(X) is constructed as the field of fractions of K[X]. The construction of polynomial is correct be-cause K[X] is an integral domain whenever K is a field. Let R be an integral domain and ι : ֒R → F (R) the (canonical) injection to its field of fractions. We not only want to manipulate fractions, we also want to define functions from F(R) to another ring without having to go back to the implementation of F(R) as R×R quotiented by an equivalence relation. In this section we show an interface for an abstract notion of poles of fraction. We first show two use-cases of this interface. We then establish the relationship between this interface and the universal property of the field of fractions. Here we present the first use-case, which is the evaluation of polynomial fractions. Let’s consider the following examples: • the evaluation of X2 − 2 X + 5 in 3 is 7 8 , • the evaluation of X2 − 2 X − 3 in 3 is not defined because 3 is a pole, • the evaluation of X2 − 3X X2 − X − 6 = X X + 2 in 3 is 3 5 .An evaluation algorithm proceeds by first deciding whether there is a good representation for our fraction. Then, there are two mutually exclusive possibilities: • evaluation is not defined • it boils down to evaluating two polynomials and performing a division. A unifying interface via abstract poles of fractions Let R be an integral domain and ι : ֒R → F (R) the canonical injection to its field of fractions. Let K be a field and f : R → K a morphism. For any a in K, we say x has a regular representation in a if x can be written as u v with f (v) 6 = a. 
We use this definition of regular representation for a = 0 .We define κ(x) = { f (u) f(v) if there is a regular representation u v of x in 0 undefined otherwise (we return 0 in C OQ ) . where we say x has a regular representation in 0 if x can be written as u v with f (v) 6 = 0 . R F(R) K .............................................................................................................................................................................................................................................................................................................................................................................................................................................................. f ..................................................................................................................................... ............................ ι ....... ...... ...... ...... ...... ...... ...... ...... ...... ...... ...... ...... ...... ........... ............κ, (1) Our κ is mathematically well-defined, but not always computable. The computability of κ is guaranted when these three points are satisfied: f is computable, for all x, it is decidable whether there is a regular representation of x and we can compute it when it exists. Application to the evaluation of polynomial fractions. Let K be a field. • R is the ring of polynomials K[X] • K(X) is the field of fractions of R, noted F(R) • f : R −→ K is the evaluation of polynomials in a The evaluation of polynomial fractions in a is the map: κ : F(R) −→ K κ(x) = { f (u) f(v) if x can be written as u v with f (v) 6 = 0 undef ined otherwise. Note that f is parameterized by an element a ∈ K. Application to the lifting of a the field of coefficients. Let F and L denote fields and L/F be a field extension, where ι : F֒ → L is the canonical injection. We assume that we know how to lift from F[X] to L(X). We want to lift any element of F ∈ F(X) to L(X).We reuse or interface where • F[X] is the integral domain R • F(X) is the field of fractions of R, noted F(R) • L(X) is a field K • f : R −→ K is the lifting from F[X] to L(X) • our lifting function from F(X) to L(X) is the map: κ : F(R) −→ K κ(x) = { f (u) f(v) if x can be written as u v with v 6 = 0 undef ined otherwise. Note that since here f is injective, v 6 = 0 implies f (v) 6 = 0 . Thus, κ is always defined. Relation to the universal property of fractions When f is injective, it is basically the universal property of the field of fractions of an integral domain. This universal property states: for any field K and injective ring morphism f from R to K, there is a unique ring morphism κ from the field of fractions F(R) to K such that our diagram ( 1) commutes. However the function κ is defined even when f is not injec-tive — we call this broader definition of κ an abstract evaluation. When f is not injective, κ fails to be a ring morphism. We develop abstract evaluation theory in the general case i.e. even when f is not injective. We define the notion of abstract pole by: x ∈ F (R) has a f -pole if: ∀u v ∈ R such that x = ι(u) ι(v) , f (v) = 0 .The presence of an abstract pole, or f -pole, in x means that there is no regular representation of x. It is expressed as follows in our COQ code: Definition has_pole x := forall y : R R, x = y.1%: F / y.2%: F -> f y .2 = 0. 
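For intuition, the evaluation use-case of this interface can be sketched with an off-the-shelf computer algebra system: look for a regular representation by cancelling common factors, then either report a pole or evaluate. This is only an illustration assuming sympy, not the Coq interface; the function name is ours.

```python
# A sketch of the abstract evaluation kappa for polynomial fractions.
from sympy import symbols, fraction, cancel, Rational

X = symbols('X')

def kappa_eval(expr, a):
    u, v = fraction(cancel(expr))        # reduced representation u/v
    if v.subs(X, a) == 0:
        return None                      # a is a pole: no regular representation
    return Rational(u.subs(X, a), v.subs(X, a))

# (X^2 - 3X)/(X^2 - X - 6) = X/(X + 2), so its evaluation at 3 is 3/5:
print(kappa_eval((X**2 - 3*X) / (X**2 - X - 6), 3))   # 3/5
# by contrast, a fraction whose reduced denominator vanishes at 3 has a pole there:
print(kappa_eval((X**2 - 2) / (X - 3), 3))            # None
```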
We then suppose that any x ∈ F (R) either has a f -pole or can be written x = ι(u) ι(v) for some u v ∈ R such that f (v) 6 = 0 .We derive formally the following results: • κ(0) = 0 • κ(1) = 1 • ∀x ∈ F (R), κ (−x) = −κ(x) • ∀x, y ∈ F (R), κ (y) 6 = 0 = ⇒ κ( x y ) = κ(x) κ(y) The latter result is expressed as: Lemma kappa_div (y z : { fraction R }) : kappa z != 0 -> kappa (y / z) = ( kappa y ) / ( kappa z ). Extra results are provided in the interface when ι is injective. When ι is injective, the main and summarizing result is that ab-stract evaluation is a ring morphism. We use the universal property of fractions — lifting from F(X) to L(X) — in the particular situation where F = L(Y ) for a sec-ond formal variable Y . We use it to define lifting from F(X) to (F(Y ))( X). Here, R is F[X], K is F(X) and f is the lifting func-tion from F[X] to (F(Y ))[ X]. Then, we derive lifting of polyno-mial fractions automatically : this implementation helps defining composition of fractions of polynomials. Let U and V ∈ F(X).The composition of U and V is defined by: U ◦ V = ( κ(U ))( V ).That is, we lift U . Then we swap variables X and Y and evaluate the resulting polynomial in V . Taylor series of polynomial fractions In Section 3.2 we use the concept of Taylor series of polynomial rational, which is well defined for the set of elements of K(X) such that the denominator of an irreducible representation has got a nonzero constant. For example, 1 X has no Taylor series and 1 1−X has the following Taylor series ∞ ∑ i=0 Xi = 1 + X + X2 + X3 + . . . . Note that X2 X−X2 has a Taylor series despite the fact that the con-stant term of X − X2 is zero. If 0 is a pole of the fraction, we know there is no Taylor series. Otherwise, there is a representation U V of the fraction where the constant coefficient of the denominator V is nonzero. Then we find the Taylor series S of U V by solving the system deriving from the equation S · V = U (in this equation U and V cast to FPS). Sum and product of fractions preserve the exis-tence of a Taylor series, hence fractions which have a Taylor series form a sub-ring of the field of polynomial fractions. We use the very same concept of Taylor series, adapted to TFPS. Instead, we output a TFPS with the desired precision. We could also reuse the generalized universal property of the field of fractions, in two possible different ways. The first way would require a generalization of the interface for the generalized universal property of the field of fractions with a morphism which target is in an integral domain instead of a field. Then we consider the morphism that casts a polynomial inside the integral domain of “floating truncation” FPS. A second way would be to reuse our interface with a morphism which target is truncated Laurent formal series, and show that when the source fraction has no pole, the Laurent formal series has a nonnegative order: XM ∑ 0≤s≤m asXs with M ≥ 0. Newton Power Series The mathematical theory of this section is entirely based on ( Bostan 2003 ). A Newton power series is a FPS which represents a monic polynomial (polynomial with leading coefficient equal to one) with-out loss of information. In pratice, it is enought to use truncated Newton power series, because only n coefficients of the formal se-ries are used to recover the n coefficients of a monic polynomial of degree n.Throughout this section, we take the running example of two polynomials P = X2 − 2 and Q = X + 3 we compute the composed sum and product of. 
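The Taylor-series step described above, solving S · V = U coefficient by coefficient once a representation with V(0) ≠ 0 is chosen, can be prototyped directly. A short exact-arithmetic sketch (standard library only; the helper name is ours):

```python
# Taylor series of a polynomial fraction U/V with V(0) != 0, by solving S*V = U.
from fractions import Fraction as Fr

def taylor_div(u, v, m):
    """Coefficients (low degree first) of the series of u/v modulo X^(m+1)."""
    u = [Fr(c) for c in u] + [Fr(0)] * (m + 1)
    v = [Fr(c) for c in v] + [Fr(0)] * (m + 1)
    assert v[0] != 0, "0 is a pole: no Taylor series from this representation"
    s = []
    for k in range(m + 1):
        s.append((u[k] - sum(v[i] * s[k - i] for i in range(1, k + 1))) / v[0])
    return s

print(taylor_div([1], [1, -1], 5))     # 1/(1-X): coefficients 1, 1, 1, 1, 1, 1
# X^2/(X - X^2) has a Taylor series via its regular representation X/(1 - X):
print(taylor_div([0, 1], [1, -1], 5))  # coefficients 0, 1, 1, 1, 1, 1
```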
3.1 Newton series in an algebraic closed field Let K be a field and L an algebraically closed extension of K, where ι : ֒K → L is the injective morphism from K to L. Let P =∑ i≤m aiXi be a polynomial of K[X]. We write abusively ι(P ) = ∑ i≤m ι(ai)Xi the lifting of P to L[X] and we write p ^ iota in COQ .Since L is algebraically closed, ι(P ) has roots αi ∈ L for i ∈ { 1, . . . , m }, such that ι(P ) = ι(am) ∏ i (X − αi). Note that, in common mathematical practice, we would simply write P = am ∏ i (X − αi), the injection ι being implicit. With our example, we have X2 − 2 = ( X − √2)( X + √2) and X + 3 = X − (−3) . And the results of the composed sum and products should be (X2 − 2) ⊕ (X + 3) = ( X + ( 3 − √2 )) ( X + ( 3 + √2 )) = X2 + 6 X + 7 (X2 − 2) ⊗ (X + 3) = ( X + 3 √2 ) ( (X − 3√2 ) = X2 − 18 We can define Ns(P ) = ∑ i≤m αsi (2) By convention, the sum is zero if P has no root. This can be formal-ized using the library bigop (Bertot et al. 2008 ) of Mathematical Components libraries, which provides a theory of finite iterations of an operation on a monoid. With our example, we have: Ns(X2 − 2) = √2s + ( −√2 )s = { 2k+1 if s = 2 k 0 otherwise and Ns(X + 3) = ( −3) s The Newton power series of P is the following FPS : N (P ) = ∞ ∑ s=0 Ns(P ) Xs (3) We call forward transformation the process to obtain N (P ) from P . With our example, we have: N (X2 − 2) = 2 + 4 X2 + 8 X4 + ∑ k> 2 2k+1 X2k N (X + 3) = 1 − 3X + 9 X2 + ∑ k> 2 (−3) k Xk The Newton power series N (P ) and the Newton sums Ns(P ) are respectively in L and L. However, each Ns(P ) is a sym-metric function of the roots of P : each Ns(P ) remains unchanged when we apply a permutation to the roots of P . Thus, by the fun-damental theorem of symmetric polynomials, Ns(P ) can be ex-pressed as a polynomial expression of elementary symmetric func-tions of the roots of P , which are the coefficients of P (up to a sign). This means Ns(P ) can be expressed as a polynomial in the coefficients of P and is hence in K instead of L. Hence N (P ) can be written as a formal power series over K. Whereas common math-ematical practice does not observe the distinction, in C OQ we have to be specific and we cannot write the definition of Newton sums as in ( 2) and Newton power series as in ( 3). There are two options: using the theory of symmetric polynomials we could have proved that the Newton series are in the sub-field K of L and extracted the corresponding elements, or we can provide an alternative definition of Newton series that makes it obvious they belong to K. Since this alternative definition is also the one used for efficient computation, we chose to define Newton sums using the latter. The remainder of this section gives a concrete direct representa-tion of Newton sums as a value in K and hence the truncated power series newton_tfps representing the series N (P ), but in K instead of L .With the definition in Section 3.2 , we eventually prove that forall (p : { poly K }) ( n : nat ), (newton_tfps n p ) ^ iota =[tfps j => \ sum_ (r <- roots (p ^ iota )) r ^+ j]. Note that in fact, L need not be algebraically closed, but a splitting field of P . However, it is sufficient for our development to consider an algebraically closed extension, because we can actually provide the algebraic closure of the fields we consider ( Cohen 2012b ). We now explain how to define newton_tfps as a series with coefficients in K. 
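Definition (2) is easy to check numerically from approximate roots; a small sketch assuming numpy (this is the algebraic-closure viewpoint, whereas Section 3.2 computes the same series from the coefficients only):

```python
# Newton sums as power sums of the roots (definition (2)); numerical check.
import numpy as np

def newton_sums(coeffs, upto):
    """coeffs: highest degree first; returns N_0(P), ..., N_upto(P)."""
    r = np.roots(coeffs)
    return np.array([np.sum(r ** s) for s in range(upto + 1)]).real

print(np.round(newton_sums([1, 0, -2], 5)))   # [2. 0. 4. 0. 8. 0.]          for X^2 - 2
print(np.round(newton_sums([1, 3], 5)))       # [1. -3. 9. -27. 81. -243.]   for X + 3
```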
3.2 Newton power series in a general setting We now show how to compute N (P ) directly from the coefficients of P , without the need for factoring P . Then, we will present a procedure to recover P from N (P ), where P is monic ( i.e. with a leading coefficient equal to 1, which implies P 6 = 0 ). For any polynomial P , the forward transformation theorem holds: N (P ) = rev( P ′) rev( P ) (4) where rev is the reverse operator on polynomials, which outputs a polynomial by reversing the list of coefficients of the input poly-nomial. For instance: rev(1 + 2 X + 3 X2) = 3 + 2 X + X2, rev( X3 − X5) = −1 + X2, rev( X2 − 2) = 1 − 2X2, rev(2 X) = 2, rev( X + 3) = 1 + 3 X and rev(1) = 1 .In the equation above P is in K[X] and N (P ) is in K(X). The constant coefficient of rev( P ) is nonzero because P 6 = 0 , thus rev( P′) rev( P) has a Taylor series, which is a FPS in K .With our example we have rev(2 X) rev( X2 − 2) = 2 1 − 2X2 = 2 ∑ s≥0 (2 X2)s = ∑ s≥0 2s+1 X2s rev(1) rev( X + 3) = 1 1 + 3 X = ∑ s≥0 (−3X)s = ∑ s≥0 (−3) sXs Hence we define newton as a rational fraction in K(X) and newton_tfps as a TFPS with coefficients in K. Definition newton (p : { poly K }) : { fracpoly K } := rev (deriv p ) // rev p . Definition newton_tfps (m : nat ) ( p : { poly K }) := Tfpsfp m (newton p ). The proof of the equivalence of the definitions ( 3) and ( 4) relies on the following lemma about the reverse function. rev( P ) = P ( 1 X ) · Xm. Proof. rev( P ) = m ∑ i=0 am−iXi = ( n∑ i=0 am−iXi−m ) Xm = ( m∑ i=0 aiX−i ) Xm = P ( 1 X ) Xm. We now prove the equivalence of the definitions ( 3) and ( 4). Proof. The polynomial P factorizes in L[X] because L is an alge-braic closed field: P = am ∏ i≤m (X − αi). The definition ( 3) gives N (P ) = ∞ ∑ s=0 ∑ i≤m αsi  Xs = ∑ i≤m ( ∞∑ s=0 (αiXs) ) = ∑ i≤m 1 1 − αiX . The definition ( 4) uses rev( P ) = P ( 1 X ) Xm and rev( P ′) = P ′ ( 1 X ) Xm−1. Then: rev( P ′) rev( P ) = 1 XP ′ ( 1 X ) P ( 1 X ) = 1 X ∑ i 1 1 X − αi = ∑ i 1 1 − αiX . The formal proof follows the same reasoning steps. In the refer-ence ( Bostan 2003 ), ( 3) is a definition and ( 4) is a theorem, whereas in our C OQ formalization, we exchange definition and theorem of Newton representation. The formal proof relies on TFPS, on frac-tions and on the development of a fraction of polynomials into a TFPS. In section 3.3 , we use this transformation to compute the New-ton power series associated with two polynomials P and Q, on which we perform the operation corresponding to the composed sums and products. In order to get the result, we need to transform back a power series into a polynomial. We prove the backward transformation theorem rev( P ) = exp (∫ 1 X (m − N (P )) ) (5) First, remark that the constant term of N (P ) is N0(P ) = ∑ i≤m α0 i = ∑ i≤m 1 = m(the number of roots of P , which is the degree of P ). So, X divides m − N (P ). It is then possible to define m−N (P ) X as an element of K .Whenever we encounter an imprecise equality between a poly-nomial and an infinite FPS, we explicitly truncate the FPS to the right precision. This formula ( 5) establishes a relation be-tween rev( P ) and N (P ). It is possible to recover rev( P ) from N (P ).Besides, it is possible to recover P , except the multiplicity of 0 in P , from rev( P ). Thus, it is possible to recover P , except the multiplicity of 0 in P , from N (P ).When P (0) 6 = 0 , which is a decidable property, rev(rev( P )) = P holds and thus our backward conversion formula rewrites as: P = rev(exp( ∫ 1 X (m − N (P )))) . 
If P (0) = 0 , we can recover the multiplicity of X by making the difference between the degree of the result and m.We write the inverse of newton_tfps : Definition newton_inv (p : { tfps K m }) : { poly K }:= revp (exp (prim_tfps (divfX (( p‘_0 )%: S - p)))). We now prove ( 5). First we check that for any polynomials U , V such that U0 = 1 : U ′ U = V =⇒ U = exp (∫ V ) . (6) Proof. U ′ U = V ⇐⇒ ∫ U ′ U = ∫ V ⇐⇒ ∫ (log( U )) ′ = ∫ V ⇐⇒ log U − (log U )0 = log U − 0 = ∫ V ⇐⇒ exp(log U ) = exp (∫ V ) (injectivity of the exponential) ⇐⇒ U = exp (∫ V ) . We proved ( 6). So, by applying ( 6) to the desired theorem, it is sufficient to prove: rev( P )′ rev( P ) = m − N (P ) X (7) We prove ( 7) by considering both sides as FPS. On one side: n − N (P ) = m − ∞ ∑ s=0 Ns(P )Xs = − ∞ ∑ s=1 Ns(P ) Xs n − N (P ) X = − ∞ ∑ s=0 Ns+1 (P ) Xi = − ∞ ∑ s=0 (∑ i αs+1 i ) Xs On the other side: rev( P )′ rev( P ) = ∑ i 1 X − 1 αi = ∑ i αi αi X − 1 = − ∑ i αi 1 − αiX rev( P )′ rev( P ) = − ∑ i αi ( ∞∑ s=0 (αiX)s ) = − ∞ ∑ i=0 (∑ i αs+1 i ) Xs. We proved: rev( P )′ rev( P ) = n − N (P ) X . The result follows by applying ( 6) to ( 7). In our C OQ code, the backward transformation theorem is expressed as: Lemma newton_tfpsK (p : { poly K }) : size p <= m.+1 -> ~~ ( root p 0) -> p \is monic -> newton_inv (newton_tfps m p ) = p. As one could see in the proof, we use a result about formal geomet-ric series. We use this version: 1 1 − aX = ∞ ∑ i=0 aiXi In our C OQ code, it is expressed as: Lemma geometric_series (a : K) ( m : nat ) : Tfpsp m (((1 - a : ’ X)%: F) ^-1) = [ tfps j => a ^+ j]. 3.3 Composed sums and products Given two monic polynomials in K[X], P = ∑ aiXi with roots α1, . . . , α m in L and Q = ∑ bj Xj with roots β1, . . . , β n in L,we wish to compute a polynomial whose roots are αi + βj for (i, j ) ∈ { 1, . . . , m }×{ 1, . . . , n }, which we write P ⊕Q and which we call composed sum. Similarly we define the composed product P ⊗Q, whose roots are αiβj for (i, j ) ∈ { 1, . . . , m }×{ 1, . . . , n }.Since P and Q are supposed monic, am and bn are equal to 1,and we can split the polynomials in L[X]. We write abusively P = ∏ i (X − αi) and Q = ∏ j (X − βj ), and we want to compute the results P ⊕ Q = ambn ∏ (i,j ) (X − (αi + βj )) P ⊗ Q = ambn ∏ (i,j ) (X − (αi · βj )) . Since, both P ⊕ Q and P ⊗ Q are symmetric in the αi and βi,the fundamental theorem of symmetric polynomials concludes that their coefficients are polynomials in the coefficients of P and Q.We can compute P ⊕ Q and P ⊗ Q without the need for factoring P or Q. One solution is to use the resultant (which is already imple-mented for the construction of algebraic numbers( Cohen 2012b )) thanks to the relations: (P ⊕ Q)( X) = Res Y (P (X − Y ), Q (Y )) (P ⊗ Q)( X) = Res Y (Y mP ( X Y ), Y ) where m is the degree of P . Note that P ( X Y ) ∈ K(X, Y ) but Y mP ( X Y ) ∈ K[X, Y ].The Newton power series enables us to compute (P ⊕ Q)( X) and (P ⊗ Q)( X) faster for any characteristic of K. The Newton representation of a polynomial is a FPS. If this FPS is truncated far enough, there is no loss of information about the input polynomial. Then, the composed sum and composed product are done in the space of TFPS. We formally prove the following equation for the composed prod-uct N (P ⊗ Q) = N (P ) ⊙ N (Q) (8) where ⊙ denotes the Hadamard product, i.e. the term-wise product of TFPS. 
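Before turning to the Coq statements for the composed operations, the two conversions, forward transformation (4) and backward transformation (5), can be checked end to end with a small exact-arithmetic script (standard library only; helper names are ours, and the division helper from the earlier sketch is repeated so the block is self-contained):

```python
# Round trip: N(P) from the coefficients of P via (4), then P back from N(P) via (5).
from fractions import Fraction as Fr
from math import factorial

def taylor_div(u, v, m):                  # series of u/v mod X^(m+1), needs v[0] != 0
    u = [Fr(c) for c in u] + [Fr(0)] * (m + 1)
    v = [Fr(c) for c in v] + [Fr(0)] * (m + 1)
    s = []
    for k in range(m + 1):
        s.append((u[k] - sum(v[i] * s[k - i] for i in range(1, k + 1))) / v[0])
    return s

def mul(p, q, m):                         # truncated product mod X^(m+1)
    r = [Fr(0)] * (m + 1)
    for i, a in enumerate(p[:m + 1]):
        for j, b in enumerate(q[:m + 1]):
            if i + j <= m:
                r[i + j] += a * b
    return r

def tfps_exp(p, m):                       # exp of a series with zero constant term
    assert p[0] == 0
    r, pk = [Fr(0)] * (m + 1), [Fr(1)] + [Fr(0)] * m
    for i in range(m + 1):
        r = [a + b / factorial(i) for a, b in zip(r, pk)]
        pk = mul(pk, p, m)
    return r

def newton(p, m):                         # forward transformation (4): rev(P')/rev(P)
    dp = [Fr(i) * c for i, c in enumerate(p)][1:]
    return taylor_div(list(reversed(dp)), list(reversed(p)), m)

def newton_inv(n_series, m):              # backward transformation (5); m = N_0(P) = deg P
    g = [m - n_series[0]] + [-c for c in n_series[1:]]           # m - N(P), constant term 0
    prim = [Fr(0)] + [c / (i + 1) for i, c in enumerate(g[1:])]  # integral of (m - N(P))/X
    return list(reversed(tfps_exp(prim, m)))  # exp(prim) = rev(P); reverse it to get P

P = [Fr(7), Fr(6), Fr(1)]                 # X^2 + 6X + 7: monic, P(0) != 0
print(newton(P, 2))                       # Newton sums 2, -6, 22 (as Fractions)
print(newton_inv(newton(P, 2), 2) == P)   # True: the round trip recovers P
```

The round trip relies on P being monic with P(0) ≠ 0, which are exactly the hypotheses of newton_tfpsK above.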
In C OQ :Definition cmul (p q : { poly K }) := if (p == 0) || ( q == 0) then 0 else \prod_ (r <- [ seq s t | s <- roots p ^ iota , t <- roots q ^ iota ]) (’ X - r%: P) Lemma newton_cmul m p q : newton_tfps m (cmul iota p q ) = (hmul_tfps (newton_tfps m p ) ( newton_tfps m q )) ^ iota . The proof of ( 8) boils down to proving the equality of the coefficients of two FPS. Proof. Ns(P ⊗ Q) = ∑ i≤mj≤n (αiβj )s = (∑ i≤m αsi )( ∑ j≤n βs)= Ns(P )Ns(Q). With our example we have N (P ⊗ Q) = N (P ) ⊙ N (Q)= ∑ s≥0 2s+1 X2s ⊙ ∑ s≥0 (−3) sXs = 2 + 36 X2 + . . . We then compute rev( P ⊗ Q) = exp (∫ 1 X (2 − N (P ⊗ Q)) ) = exp (∫ (−36 X + . . . ) ) = exp (−18 X2 + . . . ) = ∑ s≥0 (−18 X2 + . . . )s s!= 1 − 18 X2 + . . . which we truncate at precision 2. Hence P ⊗ Q = X2 − 18 , which is the expected result. We formally prove the similar equation for the composed sum N (P ⊕ Q) ⊙ E = ( N (P ) ⊙ E) · (N (Q) ⊙ E) (9) where E = exp( X) = ∞ ∑ s=0 Xs s! . In C OQ : Definition cadd_poly (p q : { poly K }) := if (p == 0) || ( q == 0) then 0 else \prod_ (r <- [ seq s + t | s <- iroots p , t <- iroots q ]) (’ X - r%: P). Lemma cadd_newton m p q : hmul_tfps (newton_tfps m (cadd_poly iota p q )) E =(hmul_tfps (newton_tfps m p ) E hmul_tfps (newton_tfps m q ) E) ^ iota . The proof of ( 9) is also almost immediate. Proof. First we have: N (P ⊕ Q) = ∞ ∑ s=0 (∑ i,j (αi + βj )s ) Xs. Thus ( 9) can be rewritten as: ∞ ∑ s=0 ∑ i,j (αi + βj )s s! Xs = ( ∞∑ s=0 (∑ i αsi s! ) Xs )( ∞∑ s=0 (∑ j βsj s! ) Xs ) which can be re-expressed in the following way: ∑ i,j exp (( αi + βj )X) = (∑ i exp( αiX) ) · (∑ j exp( βj X) ) . The latter is a direct consequence of the morphism property of the exponential. This completes the proof of ( 9). With our example we have N (P ⊕ Q) ⊙ E = (N (P ) ⊙ E) · (N (Q) ⊙ E)= ∑ s≥0 2s+1 (2 s)! X2s · ∑ s≥0 (−3) s s! Xs = (2 + 2 X2 + . . . ) ( 1 − 3X + 9 2 X2 + . . . ) = 2 − 6X + 11 X2 + . . . N (P ⊕ Q) = 2 − 6X + 22 X2 + . . . We then compute the backward transformation. rev( P ⊕ Q) = exp (∫ 1 X (2 − N (P ⊕ Q)) ) = exp (∫ (6 − 22 X + . . . ) ) = exp (6X − 11 X2 + . . . ) = ∑ s≥0 (6 X − 11 X2 + . . . )s s!= 1 + 6 X + 7 X2 + . . . which we truncate at precision 2. Hence P ⊕ Q = X2 + 6 X + 7 ,which is the expected result. Related Work This work is mostly based on Alin Bostan’s PhD thesis ( Bostan 2003 ) for the mathematical results, that he develops with care in order to provide algorithms for computer algebra. In our paper we only study the theory for characteristic zero, while he does it for arbitrary characteristic. In ( Chaieb 2011 ), the author describes a formalization of formal power series in ISABELLE/HOL as functions from natural num-bers. He already provides formal derivative, division, composition, inverse, logarithm. He provides additional constructions such as ar-bitrary nth roots, compositional inverse, binomial FPS, trigonomet-ric FPS. These constructions are done for arbitrary domain. Con-trary to our work, ( Chaieb 2011 ) does not require polynomials and defines them as FPS. We define TFPS by exploiting an existing library about polynomials. In ( Alasdair Armstrong 2014 ), the au-thors generalizes FPS by implementing them as functions from free monoids into dioids. We rely on the code of P.-Y. Strub ( Strub 2014 ) for the theory of polynomials on a decidable field, in order to get the list of roots of a polynomial in an algebraically closed field. 
For the fractions, we follow up on a work by the first au-thor ( Cohen 2013 ), where he describes a construction of the frac-tion field of an integral domain using a quotient construction. We expand this work in a slightly different way than ( Strub 2014 ) does, by generalizing the notion of pole and evaluation to an arbitrary integral domain. We also use the existence of the algebraic closure to conclude that the resulting polynomial has the desired properties ( Cohen 2012b ,a). Conclusion and future work In this paper, we describe a formalization of truncated formal power series. We equip them with a commutative ring structure and define some common operations: Hadamard product, derivative, primitive, composition, exponential, logarithm and prove formulas linking these operations. The theory of truncated power series is both an artifice to avoid handling FPS as infinite objects and a feature as it explicits the precision that the operations have. We hope that the theory of TFPS can be reused for a theory of Taylor series. We also develop a theory of abstract poles of fractions to factor-ize code and improve readability, which generalizes the universal property of the field of fractions. The theory of abstract poles is used to develop the theory of evaluation of polynomial fractions in a modular way. It is used twice: once for the evaluation of polyno-mials, once for lifting of injective morphisms from polynomials to a field, to a morphism from polynomial fractions to a field. We then use these components to define the Newton power series and prove the main results: forward transformation theorem, backward transformation, theorem linking composed product with the Hadamard product, formula linking the composed sum with the product. Whatever implementation of algebraic numbers we pick, in order to implement the sum and product on algebraic numbers, one step is to compute the composed sum and product of two polynomials. In this paper we describe an efficient way to do so. We should now investigate efficient ways to select a root of the composed sum and products. Once we have more efficient pieces than in ( Cohen 2012b ) we may use the C OQ EAL library to provide computable versions of the algorithms we describe and formalized here and perform computations on algebraic numbers inside C OQ . Acknowledgments Pretty C OQ code listing was done thanks to Assia Mahboubi’s file (Mahboubi ). Commutative diagrams where drawn thanks to Pedro Quaresma’s package ( Quaresma ). We thank our colleagues for their input on this work: Enrico Tassi, José Grimm, Laurence Rideau, Laurent Théry and Yves Bertot. References T. W. Alasdair Armstrong, Georg Struth. Programming and Automating Mathematics in the Tarski-Kleene Hierarchy. 2014. Y. Bertot, G. Gonthier, S. Ould Biha, and I. Pasca. Canonical big operators. In Theorem Proving in Higher-Order Logics , volume 5170 of LNCS ,pages 86–101, 2008. A. Bostan. Algorithmique efficace pour des opérations de base en Calcul formel . 2003. A. Chaieb. Formal power series. Journal of Automated Reasoning , 47:291– 318, October 2011. C. Cohen. Formalized algebraic numbers: construction and first-order theory . PhD thesis, École polytechnique, Nov 2012a. C. Cohen. Construction of real algebraic numbers in Coq. 2012b. C. Cohen. Pragmatic quotient types in coq. In S. Blazy, C. Paulin-Mohring, and D. Pichardie, editors, Interactive Theo-rem Proving , volume 7998 of Lecture Notes in Computer Sci-ence , pages 213–228. Springer Berlin Heidelberg, 2013. ISBN 978-3-642-39633-5. 
doi: 10.1007/978-3-642-39634-2_17. URL .G. Gonthier, A. Asperti, J. Avigad, Y. Bertot, C. Cohen, F. Garillot, S. Roux, A. Mahboubi, R. O’Connor, S. Ould Biha, I. Pasca, L. Rideau, A. Solovyev, E. Tassi, and L. Théry. A machine-checked proof of the odd order theorem. In S. Blazy, C. Paulin-Mohring, and D. Pichardie, editors, Interactive Theorem Proving , volume 7998 of Lecture Notes in Computer Science , pages 163–179. Springer Berlin Heidelberg, 2013. ISBN 978-3-642-39633-5. doi: 10.1007/978-3-642-39634-2_14. URL .A. Mahboubi. lstcoq.sty file which defines aCoq - SSReflect style for listings in Latex. .Accessed: 25/06/2015. P. Quaresma. DCpic package to draw diagrams in Latex. . Accessed: 25/06/2015. P.-Y. Strub. A Coq/Ssreflect Library for Elliptic Curves. In R. G. Gerwin Klein, editor, Interactive Theorem Proving , volume 8558 of Lecture Notes in Computer Science , pages 77–92. Springer, 2014. ISBN 978-3-319-08970-6. doi: 10.1007/978-3-319-08970-6_6. URL . Accessed: 3/07/2015. T. Univalent Foundations Program. Homotopy Type Theory: Univalent Foundations of Mathematics . , Institute for Advanced Study, 2013. Wikipedia. Composition of series. Accessed: 29/06/2015.
Some properties of circulant matrices with Ducci sequences - ScienceDirect
===============

Linear Algebra and its Applications, Volume 542, 1 April 2018, Pages 557-568. Under an Elsevier user license (open archive).

Some properties of circulant matrices with Ducci sequences

Süleyman Solak (a), Mustafa Bahşi (b)

Abstract

A Ducci sequence is the sequence $\{X, DX, D^2X, \dots\}$ generated by an $n$-tuple $X=(x_1,x_2,\dots,x_n)\in\mathbb{Z}^n$, where $DX = D(x_1,x_2,\dots,x_n) = (|x_2-x_1|, |x_3-x_2|, \dots, |x_n-x_1|)$. Equivalently, the Ducci sequence of the $n$-vector $X$ may be defined as $\{D^kX\}$, where $k=0,1,2,\dots$ and $D^kX$ denotes the $k$-fold iterate of $X$ under $D$. In this study, we examine properties of the matrix $D(C_n)$, which results from applying the Ducci map to the rows of the circulant matrix $C_n$ with first row $(c_0,c_1,\dots,c_{n-1})$.

MSC: 11B83, 15A15, 15A60

Keywords: Ducci sequence, circulant matrix, norm

1. Introduction

1.1. Ducci sequences

The Ducci sequence generated by $A=(a_1,a_2,\dots,a_n)\in\mathbb{Z}^n$ is the sequence $\{A, DA, D^2A, \dots\}$, where $D:\mathbb{Z}^n\to\mathbb{Z}^n$ is defined by
$$D(a_1,a_2,\dots,a_n) = (|a_2-a_1|, |a_3-a_2|, \dots, |a_n-a_{n-1}|, |a_n-a_1|).$$
If we work over $\mathbb{Z}_2^n$, then
$$\bar{D}(a_1,a_2,\dots,a_n) = (a_1+a_2,\, a_2+a_3,\, \dots,\, a_{n-1}+a_n,\, a_n+a_1),$$
since $|a_i-a_j| \equiv a_i+a_j \pmod 2$. Every Ducci sequence $\{A, DA, D^2A, \dots\}$ gives rise to a cycle, i.e., there are integers $i$ and $j$ with $0\le i<j$ and $D^iA = D^jA$. When $i$ and $j$ are as small as possible, we say that the Ducci sequence has period $j-i$.

Example 1. Let $F=(1,1,2)$ be the 3-tuple formed by the first Fibonacci numbers. Then
$$DF = D(1,1,2) = (|1-1|,|2-1|,|2-1|) = (0,1,1), \qquad D^2F = D(0,1,1) = (|1-0|,|1-1|,|1-0|) = (1,0,1),$$
$$D^3F = D(1,0,1) = (|0-1|,|1-0|,|1-1|) = (1,1,0), \qquad D^4F = D(1,1,0) = (|1-1|,|0-1|,|0-1|) = (0,1,1).$$
Since $DF = D^4F$, the Ducci sequence has period $4-1=3$. On the other hand,
$$\bar{D}F = (1+1,\, 1+2,\, 2+1) \bmod 2 = (2,3,3) \bmod 2 \equiv (0,1,1) = DF.$$

Ducci sequences were first introduced in 1937, and their discovery is attributed to Professor E. Ducci. Ducci sequences have been studied extensively. Most studies deal with the limiting behavior of Ducci sequences for a starting vector $A=(a_1,a_2,\dots,a_n)$. The best known result is that every starting vector converges to the zero vector if and only if $n$ is a power of 2; such a sequence is said to vanish. When $n$ is not a power of 2, every starting vector converges to a periodic orbit and reaches a vector of the form $k(x_1,x_2,\dots,x_n)$, where $x_i\in\{0,1\}$ and $k$ is a positive constant. The image of any 0-1 vector under the Ducci map is another 0-1 vector; thus, when studying periods of Ducci sequences, one can work over $\mathbb{Z}_2^n$. Ducci matrix sequences can be used to represent the real numbers, and recent studies focus on Ducci matrix sequences. Thus, we agree with the statement "Regardless, even though it is now about 75 years old, the Ducci map story seems not yet to be at an end". For an extensive historical background on the Ducci sequence, we refer the reader to the references at the end of the paper.

Let us consider the following starting vectors over $\mathbb{Z}^n$:
$$A_1=(a_1,a_2,\dots,a_n),\quad A_2=(a_n,a_1,\dots,a_{n-1}),\quad \dots,\quad A_n=(a_2,a_3,\dots,a_n,a_1).$$
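Before listing the images of these starting vectors under D, here is a small plain-Python sketch (function names are ours) of the Ducci map, reproducing Example 1 and the power-of-two vanishing behaviour recalled above.

```python
# The Ducci map D and the period of a Ducci sequence.
def ducci(t):
    n = len(t)
    return tuple(abs(t[(i + 1) % n] - t[i]) for i in range(n))

def orbit(t, limit=1000):
    seen, seq = {}, []
    for k in range(limit):
        if t in seen:
            return seq, k - seen[t]          # iterates seen so far, and the period j - i
        seen[t] = k
        seq.append(t)
        t = ducci(t)
    return seq, None

seq, period = orbit((1, 1, 2))
print(seq, "period:", period)                # period 3, as in Example 1
print(orbit((3, 1, 4, 1))[0][-1])            # n = 4 is a power of 2: ends at (0, 0, 0, 0)
```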
Then their images under the Ducci map are D A 1=(|a 2−a 1|,|a 3−a 2|,...,|a n−a n−1|,|a n−a 1|)D A 2=(|a 1−a n|,|a 2−a 1|,...,|a n−1−a n−2|,|a n−1−a n|)⋮D A n=(|a 3−a 2|,|a 4−a 3|,...,|a 1−a n|,|a 1−a 2|).A i is called ancestor of D A i and D A i is called descendant of A i. Thus, we can form ancestor matrix and descendant matrix from the above starting vectors and their images under the Ducci map as follows, respectively:(1)[a 1 a 2⋯a n a n a 1⋯a n−1⋮⋮⋮a 2 a 3⋯a 1] and(2)[|a 2−a 1||a 3−a 2|⋯|a n−a 1||a 1−a n||a 2−a 1|⋯|a n−1−a n|⋮⋮⋮|a 3−a 2||a 4−a 3|⋯|a 1−a 2|]. Now the question becomes, is there any relationship between norms, determinants and eigenvalues of the ancestor and descendant matrix? In the present paper, we seek answer to this question. It is clear that the above ancestor and descendant matrices are circulant matrices. So, next we give some background information for the circulant matrices. 1.2. Circulant matrices In many research areas such as signal processing and coding theory, we encounter circulant matrices. An n×n matrix C is called a circulant matrix if it is of the form C=[c 0 c 1⋯c n−1 c n−1 c n−2 c 0 c n−1⋯…c n−2 c n−3⋮⋮⋮c 1 c 2⋯c 0]. Equivalently, we say an n×n matrix C is circulant if there exist c 0,c 1,…,c n−1 such that the i, j entry of C is c(i−j)mod n, where the rows and columns are numbered from 0 to n−1. Thus, we denote the circulant matrix C as C=Circ(c 0,c 1,…,c n−1). Circulant matrices form an attractive class of matrices since their inverses, conjugate transposes, sums and products are also circulant, moreover, circulant matrices are normal matrices . Also, the eigenvalues of the matrix C=Circ(c 0,c 1,…,c n−1) are(3)λ j 0≤j≤n−1=∑n−1 k=0 c k w−j k where w=e 2 π i n and i=−1, . Many authors have studied general circulant matrices , , . Karner et al. have worked on spectral decompositions and singular value decompositions of four types of real circulant matrices. Zhang et al. have worked on the minimal polynomials and inverses of block circulant matrices over a field. Although many different circulant matrices with special entries have been examined , , , the best known of these matrices are those whose entries are Fibonacci and Lucas numbers , , . Let the matrix C be of the form C=Circ(F 0,F 1,…,F n−1), where F n denotes the n th Fibonacci number. By means of , we have F n F n−1≤‖C‖2≤F n F n−1 and ‖C‖E 2=n F n F n−1 for the spectral and Euclidean norm of C, respectively. The above inequality related to spectral norm has been improved as ‖C‖2=F n+1−1. Also, the determinant and inverse of the matrix C=Circ(F 1,F 2,…,F n) are given in . In this study, we apply the Ducci map to each row of the circulant matrix Circ(A)=Circ(a 1,a 2,...,a n) and then establish relationships between spectral norm, Euclidean norm, l p norm, determinant and eigenvalues of the matrix Circ(A) and its image under the Ducci map. It is clear that(4)Circ(A)=Circ(a 1,a 2,...,a n) and its image under the Ducci map given as(5)Circ(D A)=Circ(|a 2−a 1|,|a 3−a 2|,...,|a n−a n−1|,|a n−a 1|) correspond to sequence A=(a 1,a 2,...,a n)∈Z n and the Ducci sequence D(A)=(|a 2−a 1|,|a 3−a 2|,...,|a n−a n−1|,|a n−a 1|), respectively. If the i th row of the matrix Circ(A) is the sequence A i, then the i th row of the matrix Circ(D A) is the Ducci sequence D A i. In fact, the matrices Circ(A) and Circ(D A) are the ancestor and descendant matrices given in (1) and (2). The main contents of this presentation are organized as follows: In Section 2 we give some definitions and lemmas related to our study. 
In Section 3 we give some equalities and inequalities related to norms, determinants and eigenvalues of circulant matrices Circ(A) and Circ(D A). Moreover, after some theorems in Section 3, we give a numerical example in terms of Fibonacci numbers. In our examples, F denotes the sequence F=(F 1,F 2,…,F n) where F n is the n th Fibonacci number and DF denotes the sequence D F=(F 0,F 1,…,F n−2,F n−1) since D F=(F 2−F 1,F 3−F 2,…,F n−F n−1,F n−F 1)=(F 0,F 1,…,F n−2,F n−1). Throughout this study the matrices Circ(A) and Circ(D A) are the n×n circulant matrices in (4) and (5). 2. Preliminaries The Fibonacci numbers form one of the most well known sequences, and they have many applications to different fields such as mathematics, statistics and physics. The Fibonacci numbers are defined by the second order linear recurrence relation: F n+1=F n+F n−1(n≥1), F 0=0 and F 1=1. Let α and β be the roots of the characteristic equation x 2−x−1=0, then the Binet formulas of F n is :F n=α n−β n α−β. The Fibonacci numbers have many interesting identities such as∑s=0 n−1 F s=F n+1−1, and∑s=0 n−1 F s 2=F n−1 F n. Definition 1 Let A=(a i j) be any m×n matrix. The l p(1<p<∞)norm of A is‖A‖p=(∑i=1 m∑j=1 n|a i j|p)1 p. When we take p=2, then we have Euclidean norm of A defined by‖A‖E=(∑i=1 m∑j=1 n|a i j|2). Definition 2 Let A=(a i j) be any m×n matrix. The spectral norm of A is‖A‖2=max i λ i(A⁎A), where λ i(A⁎A) are eigenvalues of A⁎A and A⁎ is conjugate transpose of A. There are two well known relations between the Euclidean norm and the spectral norm. These relations are:1 n‖A‖E≤‖A‖2≤‖A‖E‖A‖2≤‖A‖E≤n‖A‖2. Lemma 1 Let A be an n×n matrix with eigenvalues λ 1,λ 2,…,λ n . Then, A is a normal matrix if and only if the eigenvalues of A⁎A are|λ 1|2,|λ 2|2,…,|λ n|2 . Lemma 2 Let A be an n×n matrix. Then(6)|det⁡A|2≤∏n i=1∑n j=1|a i j|2. The inequality (6) is known as Hadamard inequality. Now we give our results. 3. Main results Theorem 1 The Euclidean norm of the matrix Circ(D A)is‖Circ(D A)‖E 2=2‖Circ(A)‖E 2−2 n[∑n−1 k=1 a k+1 a k+a n a 1]. Proof From the definition of Euclidean norm, we have(7)‖Circ(A)‖E 2=n∑n k=1 a k 2 and(8)‖Circ(D A)‖E 2=n∑n−1 k=1|a k+1−a k|2+n|a n−a 1|2=n[∑n−1 k=1 a k+1 2−2∑n−1 k=1 a k+1 a k+∑n−1 k=1 a k 2+(a n 2−2 a n a 1+a 1 2)]=n[∑n k=1 a k 2+∑n k=1 a k 2−2∑n−1 k=1 a k+1 a k−2 a n a 1]. Thus, (7) and (8) yield‖Circ(D A)‖E 2=2‖Circ(A)‖E 2−2 n[∑n−1 k=1 a k+1 a k+a n a 1]. □ Example 2 For the Euclidean norms of the matrices Circ(F) and Circ(D F), we have‖Circ(F)‖E 2=n∑n k=1 F k 2=n F n F n+1 and‖Circ(D F)‖E 2=n[∑n−2 k=0 F k 2+(F n−1)2]=n[F n−2 F n−1+(F n−1)2]. Thus‖Circ(F)‖E 2−‖Circ(D F)‖E 2=n[F n F n+1−F n−2 F n−1−(F n−1)2]=n[F n 2+F n−1 2−(F n−1)2]=n[F n−1 2+2 F n−1]. Theorem 2 The l p norm of the matrix Circ(D A)is‖Circ(D A)‖p p≤[(‖Circ(A)‖p p−n|a 1|p)1 p+(‖Circ(A)‖p p−n|a n|p)1 p]p+n|a n−a 1|p. Proof From the definition of l p norm, we have‖Circ(A)‖p p=n∑n k=1|a k|p and‖Circ(D A)‖p p=n∑n−1 k=1|a k+1−a k|p+n|a n−a 1|p≤n[(∑n−1 k=1|a k+1|p)1 p+(∑n−1 k=1|a k|p)1 p]p+n|a n−a 1|p=[(n∑n k=1|a k|p−n|a 1|p)1 p+(n∑n k=1|a k|p−n|a n|p)1 p]p+n|a n−a 1|p. Thus, we have‖Circ(D A)‖p p≤[(‖Circ(A)‖p p−n|a 1|p)1 p+(‖Circ(A)‖p p−n|a n|p)1 p]p+n|a n−a 1|p. □ Example 3 For the l p norms of the matrices Circ(F) and Circ(D F), we have‖Circ(F)‖p p=n∑n k=1 F k p and‖Circ(D F)‖p p=n[∑n−2 k=0 F k p+(F n−1)p]=n[∑n k=1 F k p−F n−1 p−F n p+(F n−1)p]=‖Circ(F)‖p p−n(F n−1 p+F n p)+n(F n−1)p. Thus‖Circ(F)‖p p−‖Circ(D F)‖p p=n[F n−1 p+F n p−(F n−1)p]. 
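Both the eigenvalue formula (3) quoted in the introduction and Theorem 1 lend themselves to quick numerical checks; a short sketch assuming numpy, with helper names of our own choosing:

```python
# Numerical sanity checks of formula (3) and of Theorem 1.
import numpy as np

def circulant(c):                        # circulant matrix with first row c
    n = len(c)
    return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)], dtype=float)

def ducci(a):                            # D(a_1, ..., a_n)
    return np.abs(np.roll(a, -1) - a)

def key(z):                              # sort eigenvalues up to floating-point noise
    return (round(z.real, 6), round(z.imag, 6))

# Formula (3): the eigenvalues of Circ(c_0, ..., c_{n-1}) are sum_k c_k * w^(-jk).
c = np.array([1.0, 1.0, 2.0, 3.0])
n = len(c)
w = np.exp(2j * np.pi / n)
lam = [sum(c[k] * w ** (-j * k) for k in range(n)) for j in range(n)]
print(np.allclose(sorted(lam, key=key),
                  sorted(np.linalg.eigvals(circulant(c)), key=key)))   # True

# Theorem 1: ||Circ(DA)||_E^2 = 2*||Circ(A)||_E^2 - 2n*(sum_k a_{k+1} a_k + a_n a_1).
a = np.array([3, 1, 4, 1, 5, 9, 2])
lhs = np.linalg.norm(circulant(ducci(a)), 'fro') ** 2
rhs = 2 * np.linalg.norm(circulant(a), 'fro') ** 2 \
      - 2 * len(a) * (np.sum(a[1:] * a[:-1]) + a[-1] * a[0])
print(np.isclose(lhs, rhs))                                            # True
```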
Theorem 3. The spectral norm of the matrix $\mathrm{Circ}(DA)$ satisfies
\[
\|\mathrm{Circ}(DA)\|_2\ge 2\,|a_n-a_1|.
\]

Proof. Since the circulant matrix $\mathrm{Circ}(DA)$ is normal, its spectral norm is equal to its spectral radius. Furthermore, since $\mathrm{Circ}(DA)$ is irreducible and its entries are nonnegative, the spectral radius (and hence the spectral norm) of $\mathrm{Circ}(DA)$ is equal to its Perron root. Select the $n$-dimensional column vector $v=(1,1,\dots,1)^T$; then
\[
\mathrm{Circ}(DA)\,v=\Big(\sum_{k=1}^{n-1}|a_{k+1}-a_k|+|a_n-a_1|\Big)v.
\]
Obviously, $\sum_{k=1}^{n-1}|a_{k+1}-a_k|+|a_n-a_1|$ is an eigenvalue of $\mathrm{Circ}(DA)$ associated with $v$, and it is the Perron root of $\mathrm{Circ}(DA)$. Hence
\[
\|\mathrm{Circ}(DA)\|_2=\sum_{k=1}^{n-1}|a_{k+1}-a_k|+|a_n-a_1|
\ge\Big|\sum_{k=1}^{n-1}(a_{k+1}-a_k)\Big|+|a_n-a_1|
=|a_n-a_1|+|a_n-a_1|
=2\,|a_n-a_1|.\qquad\Box
\]

Theorem 4. The determinant of the matrix $\mathrm{Circ}(DA)$ satisfies
\[
|\det \mathrm{Circ}(DA)|\le\frac{1}{n^{n/2}}\,\|\mathrm{Circ}(DA)\|_E^{\,n}.
\]

Proof. For the determinant of the matrix $\mathrm{Circ}(DA)$, the Hadamard inequality in Lemma 2 and the equality (8) yield
\[
\begin{aligned}
|\det \mathrm{Circ}(DA)|^2
&\le\prod_{i=1}^n\Big[\sum_{k=1}^{n-1}(a_{k+1}-a_k)^2+(a_n-a_1)^2\Big]\\
&=\prod_{i=1}^n\Big[\sum_{k=1}^{n-1}a_{k+1}^2-2\sum_{k=1}^{n-1}a_{k+1}a_k+\sum_{k=1}^{n-1}a_k^2+(a_n^2-2a_na_1+a_1^2)\Big]\\
&=\prod_{i=1}^n\Big[2\sum_{k=1}^{n}a_k^2-2\sum_{k=1}^{n-1}a_{k+1}a_k-2a_na_1\Big]
=\prod_{i=1}^n\Big[\frac{1}{n}\,\|\mathrm{Circ}(DA)\|_E^2\Big]
=\Big[\frac{1}{n}\,\|\mathrm{Circ}(DA)\|_E^2\Big]^n.
\end{aligned}
\]
Thus
\[
|\det \mathrm{Circ}(DA)|\le\frac{1}{n^{n/2}}\,\|\mathrm{Circ}(DA)\|_E^{\,n}.\qquad\Box
\]

Example 4. For the determinants of the matrices $\mathrm{Circ}(F)$ and $\mathrm{Circ}(DF)$ we have
\[
|\det \mathrm{Circ}(F)|\le\frac{1}{n^{n/2}}\,\|\mathrm{Circ}(F)\|_E^{\,n}=\frac{1}{n^{n/2}}\big[(n\,F_nF_{n+1})^{1/2}\big]^n=(F_nF_{n+1})^{n/2}
\]
and
\[
|\det \mathrm{Circ}(DF)|\le\frac{1}{n^{n/2}}\,\|\mathrm{Circ}(DF)\|_E^{\,n}=\frac{1}{n^{n/2}}\big[n\big(F_{n-2}F_{n-1}+(F_n-1)^2\big)\big]^{n/2}=\big(F_{n-2}F_{n-1}+(F_n-1)^2\big)^{n/2}.
\]

Theorem 5. Let $\mu_j$ and $\lambda_j$ $(j=0,1,\dots,n-1)$ be the eigenvalues of the matrices $\mathrm{Circ}(DA)$ and $\mathrm{Circ}(A)$, respectively. If $a_1\le a_2\le\dots\le a_n$, then
\[
\mu_j=(\lambda_j+2a_n-2a_1)\,w^{-j}-\lambda_j,\qquad 0\le j\le n-1,
\]
where $w=e^{2\pi i/n}$ and $i=\sqrt{-1}$.

Proof. Since $a_1\le a_2\le\dots\le a_n$, we may write $\mathrm{Circ}(DA)=\mathrm{Circ}(a_2-a_1,\,a_3-a_2,\,\dots,\,a_n-a_{n-1},\,a_n-a_1)$. From (3), the eigenvalues of the matrices $\mathrm{Circ}(A)$ and $\mathrm{Circ}(DA)$ are of the forms
\[
(9)\quad \lambda_j=\sum_{k=1}^{n}a_k\,w^{j(k-1)}=a_1+a_2w^{j}+\dots+a_nw^{j(n-1)},\qquad 0\le j\le n-1,
\]
and
\[
(10)\quad \mu_j=\Big[\sum_{k=1}^{n-1}(a_{k+1}-a_k)\,w^{j(k-1)}\Big]+(a_n-a_1)\,w^{j(n-1)},\qquad 0\le j\le n-1.
\]
Hence
\[
\begin{aligned}
\mu_j&=\Big[\sum_{k=1}^{n-1}a_{k+1}w^{j(k-1)}-\sum_{k=1}^{n-1}a_kw^{j(k-1)}\Big]+(a_n-a_1)w^{j(n-1)}\\
&=w^{-j}\sum_{k=1}^{n}a_kw^{j(k-1)}-\sum_{k=1}^{n}a_kw^{j(k-1)}-a_1w^{-j}+a_nw^{j(n-1)}+(a_n-a_1)w^{j(n-1)}.
\end{aligned}
\]
Thus, by (9) and the fact that $w^{-j}=w^{j(n-1)}$, we have
\[
\mu_j=w^{-j}\lambda_j-\lambda_j+(2a_n-2a_1)w^{-j}=(\lambda_j+2a_n-2a_1)w^{-j}-\lambda_j.\qquad\Box
\]

Example 5. Let $\mu_j$ and $\lambda_j$ $(j=0,1,\dots,n-1)$ be the eigenvalues of the matrices $\mathrm{Circ}(DF)$ and $\mathrm{Circ}(F)$, respectively. Then
\[
\mu_j=(\lambda_j+2F_n-2)\,w^{-j}-\lambda_j,\qquad 0\le j\le n-1.
\]

Theorem 6. If $a_1\le a_2\le\dots\le a_n$, then the spectral norm of the matrix $\mathrm{Circ}(DA)$ satisfies
\[
\|\mathrm{Circ}(DA)\|_2=2\,(a_n-a_1).
\]

Proof. From Theorem 5, the eigenvalues of the matrix $\mathrm{Circ}(DA)$ are
\[
\mu_j=(\lambda_j+2a_n-2a_1)\,w^{-j}-\lambda_j,\qquad 0\le j\le n-1.
\]
Hence, for $j=0$, $\mu_0=2a_n-2a_1$, and for $j=1,2,\dots,n-1$, from (10),
\[
\begin{aligned}
|\mu_j|&=\Big|\sum_{k=1}^{n-1}(a_{k+1}-a_k)w^{j(k-1)}+(a_n-a_1)w^{j(n-1)}\Big|\\
&\le\Big|\sum_{k=1}^{n-1}(a_{k+1}-a_k)w^{j(k-1)}\Big|+\big|(a_n-a_1)w^{j(n-1)}\big|\\
&\le\sum_{k=1}^{n-1}|a_{k+1}-a_k|\,|w^{j(k-1)}|+|a_n-a_1|\,|w^{j(n-1)}|\\
&\le\sum_{k=1}^{n-1}|a_{k+1}-a_k|+|a_n-a_1|=2a_n-2a_1.
\end{aligned}
\]
Since the matrix $\mathrm{Circ}(DA)$ is normal, Lemma 1 gives
\[
\|\mathrm{Circ}(DA)\|_2=\max_{0\le j\le n-1}|\mu_j|=\max\Big(|\mu_0|,\ \max_{1\le j\le n-1}|\mu_j|\Big).
\]
Thus
\[
\|\mathrm{Circ}(DA)\|_2=|\mu_0|=2a_n-2a_1.\qquad\Box
\]

Example 6. For the spectral norm of the matrix $\mathrm{Circ}(DF)$, Theorem 6 yields
\[
\|\mathrm{Circ}(DF)\|_2=2F_n-2.
\]
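The eigenvalue relation of Theorem 5 and the exact spectral norm of Theorem 6 can be checked numerically for any nondecreasing integer sequence. The sketch below is our own illustration (NumPy; the sample sequence and helper names are arbitrary): it compares the predicted eigenvalues with those returned by numpy.linalg, and checks that the spectral norm equals 2(a_n − a_1).

```python
import numpy as np

def circ(c):
    return np.array([np.roll(c, i) for i in range(len(c))])

def ducci(a):
    return np.abs(np.roll(a, -1) - a)

a = np.array([1, 2, 2, 5, 9, 14])        # nondecreasing, as Theorems 5 and 6 require
n = len(a)
w = np.exp(2j * np.pi / n)

# lambda_j as in (9), and the prediction of Theorem 5 for mu_j.
lam = np.array([np.sum(a * w ** (j * np.arange(n))) for j in range(n)])
mu_pred = (lam + 2 * a[-1] - 2 * a[0]) * w ** (-np.arange(n)) - lam

# Compare with the actual eigenvalues of Circ(DA), as multisets.
mu = np.linalg.eigvals(circ(ducci(a)).astype(float))
key = lambda z: (round(z.real, 6), round(z.imag, 6))
assert np.allclose(sorted(mu_pred, key=key), sorted(mu, key=key), atol=1e-8)

# Theorem 6: the spectral norm equals 2(a_n - a_1).
assert np.isclose(np.linalg.norm(circ(ducci(a)).astype(float), 2), 2 * (a[-1] - a[0]))
```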
References

M. Bahsi, S. Solak, On the circulant matrices with arithmetic sequence, Int. J. Contemp. Math. Sci. 5 (25) (2010) 1213–1222.
F. Breuer, E. Löther, B. van der Merwe, Ducci-sequences and cyclotomic polynomials, Finite Fields Appl. 13 (2007) 293–304.
F. Breuer, Ducci sequences in higher dimensions, Integers 7 (2007), Article A24.
G. Brockman, R.J. Zerr, Asymptotic behavior of certain Ducci sequences, Fibonacci Quart. 45 (2) (2007) 155–163.
R. Brown, J.L. Merzel, Limiting behavior in Ducci sequences, Period. Math. Hungar. 47 (1–2) (2003) 45–50.
N.J. Calkin, J.G. Stevens, D.M. Thomas, A characterization for the length of cycles of the n-number Ducci game, Fibonacci Quart. 43 (1) (2005) 53–59.
C. Ciamberlini, A. Marengoni, Su una interessante curiosità numerica, Period. Math. 17 (4) (1937) 25–30.
P.J. Davis, Circulant Matrices, Wiley, New York, 1979.
A. Ehrlich, Periods in Ducci's n-number game of differences, Fibonacci Quart. 28 (1990) 302–305.
B. Grone, C. Johnson, E.M. De Sa, H. Wolkowicz, Improving Hadamard's inequality, Linear Multilinear Algebra 16 (1984) 305–322.
A. Hladnik, Schur norms of bicirculant matrices, Linear Algebra Appl. 286 (1999) 261–272.
K. Hogenson, S. Negaard, R.J. Zerr, Matrix sequences associated with the Ducci map and the mediant construction of the rationals, Linear Algebra Appl. 437 (1) (2012) 285–293.
R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
A. İpek, On the spectral norms of circulant matrices with classical Fibonacci and Lucas numbers entries, Appl. Math. Comput. 217 (2011) 6011–6012.
H. Karner, J. Schneid, C.W. Ueberhuber, Spectral decomposition of real circulant matrices, Linear Algebra Appl. 367 (2003) 301–311.
T. Koshy, Fibonacci and Lucas Numbers with Applications, Wiley–Interscience, 2001.
M. Misiurewicz, A. Schinzel, On n numbers on a circle, Hardy–Ramanujan J. 11 (1988) 30–39.
I. Odegard, R.J. Zerr, The quadratic irrationals and Ducci matrix sequences, Linear Algebra Appl. 484 (2015) 344–355.
S.Q. Shen, J.M. Cen, Y. Hao, On the determinants and inverses of circulant matrices with Fibonacci and Lucas numbers, Appl. Math. Comput. 217 (2011) 9790–9797.
S. Solak, On the norms of circulant matrices with the Fibonacci and Lucas numbers, Appl. Math. Comput. 160 (2005) 125–132.
W.A. Webb, The n-number game for real numbers, European J. Combin. 8 (1987) 457–460.
F. Wong, Ducci processes, Fibonacci Quart. 20 (1982) 97–105.
S. Zhang, Z. Jiang, S. Liu, An application of block circulant matrices, Linear Algebra Appl. 347 (2002) 101–114.
Fields due to a moving charge
===============

Although the fields generated by a uniformly moving charge can be calculated from the expressions (1525) and (1526) for the potentials, it is simpler to calculate them from first principles. Let a charge , whose position vector at time is , move with uniform velocity in a frame whose -axis has been chosen in the direction of . We require to find the field strengths and at the event . Let be that frame in standard configuration with in which the charge is permanently at rest. In , the field is given by (1528) (1529) This field must now be transformed into the frame . The direct method, using Eqs. (1512)-(1515), is somewhat simpler here, but we shall use a somewhat indirect method because of its intrinsic interest. In order to express Eqs. (1528) and (1529) in tensor form, we need the electromagnetic field tensor on the left-hand side, and the position 4-vector and the scalar on the right-hand side. (We regard as an invariant for all observers.) To get a vanishing magnetic field in , we multiply on the right by the 4-velocity , thus tentatively arriving at the equation (1530) Recall that and . However, this equation cannot be correct, because the antisymmetric tensor can only be equated to another antisymmetric tensor. Consequently, let us try (1531) This is found to give the correct field at in , as long as refers to any event whatsoever at the charge. It only remains to interpret (1531) in . It is convenient to choose for that event at the charge at which (not the retarded event). Thus, (1532) giving (1533) or (1534) Likewise, (1535) or (1536) Lastly, we must find an expression for in terms of quantities measured in at time . If is the corresponding time in at the charge, we have (1537) Thus, (1538) (1539) Note that acts in line with the point which the charge occupies at the instant of measurement, despite the fact that, owing to the finite speed of propagation of all physical effects, the behaviour of the charge during a finite period before that instant can no longer affect the measurement. Note also that, unlike Eqs. (1525) and (1526), the above expressions for the fields are not valid for an arbitrarily moving charge, nor can they be made valid by merely using retarded values. For whereas acceleration does not affect the potentials, it does affect the fields, which involve the derivatives of the potential. For low velocities, , Eqs. (1538) and (1539) reduce to the well-known Coulomb and Biot-Savart fields. However, at high velocities, , the fields exhibit some interesting behaviour. The peak electric field, which occurs at the point of closest approach of the charge to the observation point, becomes equal to times its non-relativistic value. However, the duration of appreciable field strength at the point is decreased. A measure of the time interval over which the field is appreciable is (1540) where is the distance of closest approach (assuming ). As increases, the peak field increases in proportion, but its duration goes in the inverse proportion. The time integral of the field is independent of . As , the observer at sees electric and magnetic fields which are indistinguishable from the fields of a pulse of plane polarized radiation propagating in the -direction. The direction of polarization is along the radius vector pointing towards the particle's actual position at the time of observation.
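The displayed equations in this passage did not survive extraction, so the field expressions referred to as (1538) and (1539) are missing. For reference only, a standard form of the fields of a charge in uniform motion, written here with assumed symbols — q for the charge, u for its velocity, r for the vector from the charge's present position to the observation point, and θ for the angle between r and u, which need not match the notation of the original equations — is the following sketch in SI units:

\[
\mathbf{E} = \frac{q\,\mathbf{r}}{4\pi\epsilon_0 r^{3}}\,
  \frac{1-u^{2}/c^{2}}{\bigl(1-(u^{2}/c^{2})\sin^{2}\theta\bigr)^{3/2}},
\qquad
\mathbf{B} = \frac{\mathbf{u}\times\mathbf{E}}{c^{2}}.
\]

For u ≪ c these reduce to the Coulomb and Biot-Savart forms, consistent with the remark above.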
Richard Fitzpatrick 2006-02-02
Chapter 3

Linear Programs

A linear program involves optimization (i.e., maximization or minimization) of a linear function subject to linear constraints. A linear inequality constraint on a vector x ∈ ℜN takes the form aTx ≤ b or

a1x1 + a2x2 + . . . + aNxN ≤ b

for some a ∈ ℜN and b ∈ ℜ. If we have a collection of constraints (a1)Tx ≤ b1, (a2)Tx ≤ b2, . . . , (aM)Tx ≤ bM, we can group them together as a single vector inequality Ax ≤ b, where

    A = [ (a1)T ]        and        b = [ b1 ]
        [  ...  ]                       [ ... ]
        [ (aM)T ]                       [ bM ].

When we write that the vector Ax is less than or equal to b, we mean that each component of Ax is less than or equal to the corresponding component of b. That is, for each i, (Ax)i ≤ bi. Sometimes, in a slight abuse of language, we refer to the ith row Ai∗ of the matrix A as the ith constraint, and bi as the value of the constraint.

In mathematical notation, a linear program can be expressed as follows:

    maximize   cTx
    subject to Ax ≤ b.

The maximization is over x ∈ ℜN. Each component xj is referred to as a decision variable. The matrix A ∈ ℜM×N and vector b ∈ ℜM specify a set of M inequality constraints, one for each row of A. The ith constraint comes from the ith row and is (Ai∗)Tx ≤ bi. The vector c ∈ ℜN is a vector of values for each decision variable. Each cj represents the benefit of increasing xj by 1. The set of vectors x ∈ ℜN that satisfy Ax ≤ b is called the feasible region. A linear program can also be defined to minimize the objective:

    minimize   cTx
    subject to Ax ≤ b,

in which case cj represents the cost of increasing xj by 1.

3.1 Graphical Examples

To generate some understanding of linear programs, we will consider two simple examples. These examples each involve two decision variables. In most interesting applications of linear programming there will be many more decision variables – perhaps hundreds, thousands, or even hundreds of thousands. However, we start with cases involving only two variables because it is easy to illustrate what happens in a two dimensional space. The situation is analogous with our study of linear algebra. In that context, it was easy to generate some intuition through two-dimensional illustrations, and much of this intuition generalized to spaces of higher dimension.

3.1.1 Producing Cars and Trucks

Let us consider a simplified model of an automobile manufacturer that produces cars and trucks. Manufacturing is organized into four departments: sheet metal stamping, engine assembly, automobile assembly, and truck assembly. The capacity of each department is limited. The following table provides the percentages of each department's monthly capacity that would be consumed by constructing a thousand cars or a thousand trucks:

    Department             Automobile   Truck
    metal stamping         4%           2.86%
    engine assembly        3%           6%
    automobile assembly    4.44%        0%
    truck assembly         0%           6.67%

The marketing department estimates a profit of $3000 per car produced and $2500 per truck produced. If the company decides only to produce cars, it could produce 22,500 of them, generating a total profit of $67.5 million. On the other hand, if it only produces trucks, it can produce 15,000 of them, with a total profit of $37.5 million. So should the company only produce cars? No. It turns out that profit can be increased if the company produces a combination of cars and trucks. Let us formulate a linear program that will lead us to the optimal solution. Define decision variables x1 and x2 to be the number in thousands of cars and trucks, respectively, to produce each month.
Together, they can be thought of as a vector x ∈ ℜ2. These quantities have to be positive, so we introduce a constraint x ≥ 0. Several additional constraints arise from capacity limitations. The car assembly and truck assembly departments limit production according to 4.44x1 ≤ 100 and 6.67x2 ≤ 100. The metal stamping and engine assembly activities also introduce constraints: 4x1 + 2.86x2 ≤ 100 and 3x1 + 6x2 ≤ 100. The set of vectors x ∈ ℜ2 that satisfy these constraints is illustrated in Figure 3.1(a).

[Figure 3.1: (a) Feasible solutions for production of cars and trucks, bounded by the car assembly, truck assembly, metal stamping, and engine assembly constraints. (b) Finding the solution that maximizes profit. Axes: cars produced (thousands) versus trucks produced (thousands).]

The anticipated profit in thousands of dollars associated with production quantities x1 and x2 is 3x1 + 2.5x2. In Figure 3.1(b), each gray line superimposed on the set of solutions represents a subset for which the associated profit takes on a particular value. In other words, each line represents solutions of the equation 3x1 + 2.5x2 = α for some value of α. The diagram also identifies the feasible solution that maximizes profit, which is given approximately by x1 = 20.4 and x2 = 6.5. Note that this solution involves making use of the entire capacity available for metal stamping and engine assembly, but does not maximize use of capacity to assemble either cars or trucks. The optimal profit is over $77.3 million per month, which exceeds by about $10 million the profit associated with producing only cars.

3.1.2 Feeding an Army

Suppose that two basic types of food are supplied to soldiers in an army: meats and potatoes. Each pound of meats costs $1, while each pound of potatoes costs $0.25. To minimize expenses, army officials consider serving only potatoes. However, there are some basic nutritional requirements that call for meats in a soldier's diet. In particular, each soldier should get at least 400 grams of carbohydrates, 40 grams of dietary fiber, and 200 grams of protein in their daily diet. Nutrients offered per pound of each of the two types of food, as well as the daily requirements, are provided in the following table:

    Nutrient         Meats        Potatoes     Daily Requirement
    carbohydrates    40 grams     200 grams    400 grams
    dietary fiber    5 grams      40 grams     40 grams
    protein          100 grams    20 grams     200 grams

Consider the problem of finding a minimal cost diet comprised of meats and potatoes that satisfies the nutritional requirements. Let x1 and x2 denote the number of pounds of meat and potatoes to be consumed daily. These quantities cannot be negative, so we have a constraint x ≥ 0. The nutritional requirements impose further constraints:

    40x1 + 200x2 ≥ 400   (carbohydrates)
    5x1 + 40x2 ≥ 40      (dietary fiber)
    100x1 + 20x2 ≥ 200   (protein).

The set of feasible solutions is illustrated in Figure 3.2(a). In Figure 3.2(b), superimposed lines identify sets that lead to particular daily costs. Unlike the automobile manufacturing problem we considered in the previous section, we are now minimizing cost rather than maximizing profit. The optimal solution involves a diet that includes both meats and potatoes, and is given approximately by x1 = 1.67 and x2 = 1.67. The associated daily cost per soldier is about $2.08. Note that the constraint brought about by dietary fiber requirements does not affect the feasible region. This is because – based on our data – any serving of potatoes that offers sufficient carbohydrates will also offer sufficient dietary fibers.
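Both of these small examples can be handed directly to an off-the-shelf LP solver. The sketch below (ours, not part of the notes) uses scipy.optimize.linprog; since linprog minimizes, the profit objective is negated, and the greater-than nutritional constraints are rewritten in the less-than form used throughout this chapter.

```python
import numpy as np
from scipy.optimize import linprog

# Cars and trucks (Section 3.1.1): maximize 3 x1 + 2.5 x2 under the capacity limits.
A_ub = [[4.44, 0.0],     # car assembly
        [0.0, 6.67],     # truck assembly
        [4.0, 2.86],     # metal stamping
        [3.0, 6.0]]      # engine assembly
b_ub = [100.0, 100.0, 100.0, 100.0]
cars = linprog(c=[-3.0, -2.5], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(cars.x, -cars.fun)        # roughly [20.4, 6.5], profit about 77.3

# Feeding an army (Section 3.1.2): minimize x1 + 0.25 x2 subject to the nutritional
# requirements, each rewritten as a less-than constraint (-a)^T x <= -b.
A_ub = [[-40.0, -200.0],   # carbohydrates
        [-5.0, -40.0],     # dietary fiber
        [-100.0, -20.0]]   # protein
b_ub = [-400.0, -40.0, -200.0]
diet = linprog(c=[1.0, 0.25], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(diet.x, diet.fun)         # roughly [1.67, 1.67], cost about 2.08
```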
[Figure 3.2: (a) Feasible solutions for an army diet, bounded by the carbohydrate, dietary fiber, and protein constraints. (b) Finding the solution that minimizes cost. Axes: meats (pounds) versus potatoes (pounds).]

3.1.3 Some Observations

There are some interesting observations that one can make from the preceding examples and generalize to more complex linear programs. In each case, the set of feasible solutions forms a polygon. By this we mean that the boundary of each is made up of a finite number of straight segments, forming corners where they connect. In the case of producing cars and trucks, this polygon is bounded. In the case of feeding an army, the polygon is unbounded: two of the sides continue out to infinity. But in both cases they are polygons.

Another key commonality between the examples is that optimal solutions appear at corners of the polygon. To see why this is the case when we have two decision variables, consider the line given by {x ∈ ℜ2 | cTx = α} for some α ∈ ℜ. As we change α we move the line continuously across ℜ2. To make α as large as possible, and hence maximize cTx, we keep moving the line (increasing α) until any more movement will mean the line no longer intersects the feasible region. At this point, the line must be touching a corner.

In the next two sections, we will formalize and generalize these observations, so that we can make statements that apply to linear programs involving arbitrary numbers of variables and constraints. In higher dimensions, we will be dealing with polyhedra as opposed to polygons, and we will find that optimal solutions still arise at "corners."

3.2 Feasible Regions and Basic Feasible Solutions

The set of vectors x ∈ ℜN that satisfies constraints of the form Ax ≤ b is called a polyhedron. In three dimensions, the boundaries of the set are formed by "flat faces." In two dimensions, the boundaries are formed by line segments, and a polyhedron is a polygon. Note that the feasible region of a linear program is a polyhedron. Hence, a linear program involves optimization of a linear objective function over a polyhedral feasible region. One way to view a polyhedron is as the intersection of a collection of half-spaces. A half-space is a set of points that satisfy a single inequality constraint. Hence, each constraint (Ai∗)Tx ≤ bi defines a half-space, and the polyhedron characterized by Ax ≤ b is the intersection of M such half-spaces.

As an example, consider the problem of producing cars and trucks described in Section 3.1.1. Each constraint restricts feasible solutions to the half-space on one side of a line. For instance, the constraint that the number of cars produced must be nonnegative restricts the feasible region to vectors in the half-space on the right side of the horizontal axis in Figure 3.1(a). Note that, though this constraint was represented with a greater-than sign (x1 ≥ 0), it can also be represented with a less-than sign (−x1 ≤ 0) to be consistent with the form of Ax ≤ b. The constraint introduced by the capacity to assemble engines also restricts solutions to a half-space – the set of points below a diagonal line. The intersection of half-spaces associated with the six constraints produces the polyhedron of feasible solutions.
In this section, we develop some understanding of the structure of poly-hedra. We will later build on these ideas to establish useful properties of optimal solutions. 3.2.1 Convexity Given two vectors x and y in ℜN, a vector z ∈ℜN is said to be a convex combination of x and y if there exists a scalar α ∈[0, 1] such that z = αx+(1−α)y. Intuitively, a convex combination of two vectors is in between the vectors, meaning it lies directly on the line segment joining the two vectors. In fact, the line segment connecting two vectors is the set of all convex combinations of the two vectors. We generalize this to more than two vectors by saying y is a convex combination of x1, x2, . . . , xM if there are some α1, α2, . . . , αM ≥0 such that y = α1x1 + α2x2 + . . . + αMxM and α1 + α2 + . . . + αM = 1. c ⃝Benjamin Van Roy and Kahn Mason 53 A set U ⊆ℜN is said to be convex if any convex combination of any two elements of U is in U. In other words, if we take two arbitrary points in U, the line segment connecting those points should stay inside U. Figure 3.3 illustrates three subsets of ℜ2. The first is convex. The others are not. (a) (b) (c) Figure 3.3: (a) A convex set in ℜ2. (b and c) Nonconvex sets in ℜ2 – in each case, a segment connecting points in the set includes points outside the set. Given a set of vectors x1, . . . , xK ∈ℜN, their convex hull is the smallest convex set containing x1, . . . , xK ∈ℜN. Because any intersection of convex sets is convex, there is no ambiguity in this definition. In particular, the convex hull can be thought of as the intersection of all convex sets containing x1, . . . , xK ∈ℜN. Convex sets are very important in many areas of optimization, and linear programming is no exception. In particular, polyhedra are convex. To see why this is so, consider a polyhedron U = {x ∈ℜN|Ax ≤b}. If z = αx + (1 −α)y for some α ∈[0, 1] and x, y ∈U then Az = A(αx + (1 −α)y) = αAx + (1 −α)Ay ≤αb + (1 −α)b = b so that z is an element of U. 3.2.2 Vertices and Basic Solutions Let U ⊆ℜN be a polyhedron. We say x ∈U is a vertex of U if x is not a convex combination of two other points in U. Vertices are what we think of as “corners.” Suppose U = {x ∈ℜN|Ax ≤b} is the feasible region for a linear program and that y ∈U. If (Ai∗)Ty = bi then we say the ith constraint is binding or active at y. If we think of a polyhedron as a collection of half spaces, then for a constraint to be active, the point in question lies on the hyperplane forming the border of the half space. In three dimensions, it must lie on the face associated with the constraint, and in two dimensions, it must lie on the edge associated with the constraint. If a collection of constraints 54 (a1)Tx ≤β1, . . . , (aK)Tx ≤βK are active at a vector x ∈ℜN, then x is a solution to Bx = β, where B =     (a1)T . . . (aK)T     and β =     β1 . . . βK    . A collection of linear constraints (a1)Tx ≤β1, . . . , (aK)Tx ≤βK is said to be linearly independent if a1, . . . , aK are linearly independent. For a given linear program, if there are N linearly independent constraints that are active at a vector x ∈ℜN then we say that x is a basic solution. To motivate this terminology, note that the active constraints form a basis for ℜN. Note also that, because the active constraints form a basis, given N active constraints (a1)Tx ≤β1, . . . , (aK)Tx ≤βK, the square matrix B =     (a1)T . . . (aK)T    , has full rank and is therefore invertible. 
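The following NumPy sketch (ours, not from the notes) makes the definition concrete for the car/truck polyhedron: choosing N = 2 linearly independent active constraints, solving Bx = β, and checking feasibility identifies a basic feasible solution, that is, a vertex.

```python
import numpy as np

# The six constraints of the car/truck example in Ax <= b form
# (nonnegativity written as -x <= 0).
A = np.array([[-1.0, 0.0],     # -x1 <= 0
              [0.0, -1.0],     # -x2 <= 0
              [4.44, 0.0],     # car assembly
              [0.0, 6.67],     # truck assembly
              [4.0, 2.86],     # metal stamping
              [3.0, 6.0]])     # engine assembly
b = np.array([0.0, 0.0, 100.0, 100.0, 100.0, 100.0])

def basic_solution(active):
    """Solve B x = beta for the chosen active constraints.
    Returns None if the chosen rows are not linearly independent."""
    B, beta = A[list(active)], b[list(active)]
    if np.linalg.matrix_rank(B) < A.shape[1]:
        return None
    return np.linalg.solve(B, beta)

x = basic_solution((4, 5))            # metal stamping and engine assembly active
print(x)                              # approximately [20.4, 6.5]
print(np.all(A @ x <= b + 1e-9))      # feasible, hence a basic feasible solution (a vertex)
```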
Hence, Bx = β has a unique solution, which is a basic solution of the linear program. If a basic solution x ∈ℜN is feasible, then we say x is a basic feasible solution. In two dimensions, a basic feasible solution is the intersection of two boundaries of the polygonal feasible region. Clearly, in two dimensions, basic feasible solutions and vertices are equivalent. The following theorem establishes that this remains true for polyhedra in higher-dimensional spaces. Theorem 3.2.1. Let U be the feasible region of a linear program. Then, x ∈U is a basic feasible solution if and only if x is a vertex of U. Proof: Suppose x is a basic feasible solution and also that x is a convex combination of y and z, both in U. Let C be a matrix whose rows are N linearly independent active constraints at x, and c be the vector of corre-sponding constraint values. Because C has linearly independent rows, it has full rank and is invertible. Also Cy ≤c, Cz ≤c and Cx = c. x is a convex combination of y and z so that x = αy + (1 −α)z for some α ∈[0, 1]. This means Cx = αCy + (1 −α)Cz ≤αc + (1 −α)c = c can only equal c if at least one of Cy and Cz are equal to c. But because C is invertible, there is only one solution to Cy = c, namely y = x. Similarly Cz = c gives z = x. This means x cannot be expressed as a convex combination of two points in U unless one of them is x, so that x is a vertex. c ⃝Benjamin Van Roy and Kahn Mason 55 Conversely, suppose x is not a basic feasible solution. We let C be the matrix of all the active constraints at x. Because C has less than N linearly independent rows, it has a non-empty null space. Let d be a non zero vector in N(C). Then for small ϵ, we have that x ± ϵd is still feasible (C(x ± ϵd) = Cx = c and for small ϵ, non-active constraints will still be non-active). But x = 1 2(x + ϵd) + 1 2(x −ϵd), so that x is a convex combination of two other vectors in U. So, x is not a vertex. If the feasible region of a linear program is {x ∈ℜN|Ax ≤b} then any N linearly independent active constraints identify a unique basic solution x ∈ℜN. To see why, consider a square matrix B ∈ℜN×N whose rows are N linearly independent active constraints. Any vector x ∈ℜN for which these constraints are active must satisfy Bx = b. Since its rows of B are linearly independent, B has full rank and therefore a unique inverse B−1. Hence, B−1b is the unique point at which the N constraints are active. Let us capture the concept in a theorem. Theorem 3.2.2. Given a polyhedron {x ∈ℜN|Ax ≤b} for some A ∈ℜM×N and b ∈ℜM, any set of N linearly independent active constraints identifies a unique basic solution. Each basic solution corresponds to N selected constraints. There are M constraints to choose from and only finitely many ways to choose N from M. This implies the following theorem. Theorem 3.2.3. There are a finite number of basic solutions and a finite number of basic feasible solutions. Note that not every combination of N constraints corresponds to a basic solution. The constraints are required to be linearly independent. 3.2.3 Bounded Polyhedra A polyhedron U ⊆ℜN is said to be bounded if there is a scalar α such that, for each x ∈U and each j, −α ≤xj ≤α. In other words, each component of a vector in U is restricted to a bounded interval, or U is contained in a “hyper-cube.” The following theorem presents an alternative way to represent bounded polyhedra. Theorem 3.2.4. If U is a bounded polyhedron, it is the convex hull of its vertices. 
Proof: Let U = {x ∈ℜN|Ax ≤b}, and let H be the convex hull of the vertices of U. Each vertex of U is in U and U is convex. Hence, H ⊆U. 56 We now have to show U ⊆H. We will do this by showing that any x ∈U is a convex combination of the vertices of U, and hence is in H. We will do this by backwards induction on the number of linearly independent active constraints at x. If the number of linearly independent active constraints is N, then x is a basic feasible solution, and so a vertex. A vertex is a convex combination of itself, and hence is in U. Thus all points with N linearly independent active constraints are in H. Suppose all points with K + 1 or more linearly independent active con-straints are in H, and that x has K linearly independent active constraints. Let C be a matrix whose rows are the active constraints at x, and let c be the vector whose components are the corresponding constraint values. Because C has rank K < N, we know that its null space is non-empty. Take any non-zero vector n ∈N(C) and consider the line x + αn for different α. For small α, the points on the line are inside U, but because U is bounded, for sufficiently large positive and negative values of α, the points on the line will not be in U. Take the most positive and negative values of α such that x+αn is in U, and let the corresponding points be y and z. Note Cy = c = Cz so that all the constraints active for x are still active for y and z. However each one of them much also have an additional active constraint because n is in the null space of C and so changing α will not change C(x + αn). Thus each of y and z must have at least K +1 linearly independent active constraints. x lies on the line segment connecting y and z and so is a convex combination of them. By the inductive hypothesis, each of y and z are convex combinations of vertices, and hence so is x. Hence all points with K linearly independent active constraints are in H. By induction, each x ∈U is also in H, so U ⊆H. 3.3 Optimality of Basic Feasible Solutions Consider the linear program maximize cTx subject to Ax ≤b. If x1, x2, . . . , xK are the vertices of the feasible region, we say that xk is an optimal basic feasible solution if cTxk ≥cTxℓfor every ℓ. That is xk is the vertex with the largest objective value among vertices. Note that the optimal basic feasible solution need not be unique. An optimal solution is a feasible solution x such that cTx ≥cTy for every other feasible solution y. As discussed in Section 3.1, in two dimen-c ⃝Benjamin Van Roy and Kahn Mason 57 sions, it is easy to see that an optimal basic feasible solution is also an optimal solution. In this section, we generalize this observation to polyhedra in higher-dimensional spaces. 3.3.1 Bounded Feasible Regions We first consider linear programs where the feasible region is bounded. Theorem 3.3.1. If x∗is an optimal basic feasible solution of a linear program for which the feasible region is bounded, then it is an optimal solution. Proof: If x is a convex combination of y and z then cTx ≤max{cTy, cTz}. Similarly, if x is a convex combination of x1, . . . , xK, then cTx ≤maxℓ∈{1,...,K} cTxℓ. Any x is a convex combination of the vertices, and so cTx must attain its largest value at a vertex. 3.3.2 The General Case The result also applies to unbounded polyhedra, though the associated anal-ysis becomes more complicated. Theorem 3.3.2. If x∗is an optimal basic feasible solution of a linear program that has an optimal solution, then it is an optimal solution. 
Before proving the theorem, we will establish a helpful lemma. Recall that a line is a one dimensional affine space. A set U ⊆ℜN is said to contain a line if there exists a vector x ∈S and a vector d ̸= 0 such that x + αd ∈S for all α ∈ℜ. Lemma 3.3.1. Consider a polyhedron U = {x ∈ℜN|Ax ≤b} that is not empty. Then the following statements are equivalent: 1. The polyhedron U has at least one vertex. 2. The polyhedron U does not contain a line. Proof of lemma: Suppose that U contains the line {x + αd|α ∈ℜ} where d ̸= 0. Then for all α, we have A(x + αd) ≤b, or rearranging αAd ≤b −Ax. For this to be true for all α it must be that Ad = 0, so that A is not full rank, and so cannot have N linearly independent rows. Thus there are no vertices. Suppose that U does not contain any line. We use a similar line of rea-soning to Theorem 3.2.4. Let x be a point with the maximum number 58 of linearly independent active constraints. Let the number of linearly inde-pendent active constraints be K. If K = N then x is a vertex, and we are done. Suppose that K < N, the consider the line L = {x + αd|α ∈ℜ} for some d that is perpendicular to the constraints active at x. d must exist because the matrix whose rows are the constraints active at x is not full rank, and hence has a non-zero null space. All the points in L satisfy the K constraints at x. Because L cannot be contained in U, there must be some α for which an additional constraint is active. The point x + αd has K + 1 linearly independent active constraints contradicting the fact that K was the maximum attainable. Proof of theorem: Note that the fact that the linear program has a basic solution means that is can contain no lines. Let x be an optimal solution with the largest number of linearly independent active constraints, and the number of linearly independent active constraints at x be K. If K = N then x is a vertex satisfying the conclusion of the theorem. Suppose K < N, then take d orthogonal to the constraints active at x. The same reasoning as given in Theorem ?? shows that all points of the form x + αd have all the same constraints active as x, and also that for some α∗, an additional constraint is satisfied. But, for sufficiently small α, x±αd is still feasible. Because cT(x+αd) = cTx + αcTd can be no larger than cTx, we have that cTd = 0 and that cT(x + αd) = cTx for all α. But this means x + α∗d is an optimal solution with more than K linearly independent active constraints, contradicting the maximality of K. 3.3.3 Searching through Basic Feasible Solutions For any linear program, there are a finite number of basic feasible solutions, and one of them is optimal if the linear program has an optimum. This mo-tivates a procedure for solving linear programs: enumerate all basic feasible solutions and select the one with the largest objective value. Unfortunately, such a procedure is not effective for large problems that arise in practice be-cause there are usually far too many basic feasible solutions. As mentioned earlier, the number of basic solutions is the number of ways of choosing N linearly independent constraints from the entire collection of M constraints. There are M!/(N!(M−N)!) choices. This is an enormous number – if N = 20 and M = 100, the number of choices M!/(N!(M −N)!) exceeds half a billion trillion. Though many of these choices will not be linearly independent or feasible, the number of them that are basic feasible solutions is usually still enormous. 
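For a problem as small as the car/truck example the enumeration described here is actually practical: there are only C(6, 2) = 15 candidate constraint pairs. The sketch below (ours, not part of the notes) enumerates them, discards linearly dependent or infeasible choices, and keeps the best basic feasible solution — the same vertex found graphically in Section 3.1.1. The point of Section 3.3.3 is that this count explodes for realistic M and N.

```python
import numpy as np
from itertools import combinations

# Car/truck example again: maximize c^T x over the six constraints Ax <= b.
c = np.array([3.0, 2.5])
A = np.array([[-1.0, 0.0], [0.0, -1.0], [4.44, 0.0],
              [0.0, 6.67], [4.0, 2.86], [3.0, 6.0]])
b = np.array([0.0, 0.0, 100.0, 100.0, 100.0, 100.0])
M, N = A.shape

best_x, best_val = None, -np.inf
for rows in combinations(range(M), N):            # all M-choose-N candidate bases
    B, beta = A[list(rows)], b[list(rows)]
    if np.linalg.matrix_rank(B) < N:              # constraints must be linearly independent
        continue
    x = np.linalg.solve(B, beta)                  # the unique basic solution for these rows
    if np.all(A @ x <= b + 1e-9) and c @ x > best_val:   # keep the best basic feasible solution
        best_x, best_val = x, c @ x

print(best_x, best_val)    # approximately [20.4, 6.5] with objective about 77.3
```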
c ⃝Benjamin Van Roy and Kahn Mason 59 In Chapter ??, we will study the simplex method, which is a popular linear programming algorithm that searches through basic feasible solutions for an optimal one. It does not enumerate all possibilities but instead intelli-gently traverses a sequence of improving basic feasible solutions in a way that arrives quickly at an optimum. We will also study interior point methods, another very efficient approach that employs a different strategy. Instead of considering basic feasible solutions, interior point methods generate a se-quence of improving solutions in the interior of the feasible region, converging on an optimum. 3.4 Greater-Than and Equality Constraints We have focused until now on less-than constraints, each taking the form aTx ≤b for some a ∈ℜN and b ∈ℜ. Two other forms of constraints com-monly used to describe polyhedra are greater-than and equality constraints. Greater-than constraints take the form aTx ≥b, while equality constraints take the form aTx = b. Both greater-than and equality constraints can be replaced by equivalent less-than constraints. A greater than constraint aTx ≥b is equivalent to (−a)Tx ≤−b, whereas an equality constraint aTx = b is equivalent to a pair of less-than constraints: aTx ≤b and (−a)Tx ≥−b. Hence, any set of constraints that includes less-than, greater-than, and equality constraints is equivalent to a set of less-than constraints and therefore represents a poly-hedron. In matrix notation, we can define a polyhedron involving all types of constraints by S = {x ∈ℜN|A1x ≤b, A2x ≥b2, A3x = b3}. This is the same polyhedron as one characterized by Ax ≤b, where A =      A1 −A2 A3 −A3      and b =      b1 −b2 b3 −b3     . The vertices of a polyhedron do not change if we change the way it is represented. This is because the notion of a vertex is geometric; that is, it only depends on which vectors are inside or outside the set. Our definition of basic feasible solutions, on the other hand, does rely on the algebraic representation. In particular, our definition determines whether a solution is 60 basic depending on which of the M constraints represented by Ax ≤b are active. Theorem 3.2.1 establishes that vertices and basic feasible solutions are equivalent. Hence, the theorem relates a geometric, representation-independent concept to an algebraic, representation-dependent one. It is convenient to extend the definition of a basic feasible solution to situations where the repre-sentation of a polyhedron makes use of greater-than and equality constraints. This extended definition should maintain the equivalence between basic fea-sible solutions and vertices. Let us now provide the generalized definition of basic and basic feasible solutions. Given a set of equality and inequality constraints defining a poly-hedron S ∈ℜN, we say that a vector x ∈ℜN is a basic solution if all equality constraints are active, and among all constraints that are active at x, N of them are linearly independent. A basic feasible solution is a basic solution that satisfies all constraints. To see that basic feasible solutions still correspond to vertices, consider a polyhedron S = {x ∈ℜN|A1x ≤b1, A2x ≥b2, A3x = b3}. Recall that S = {x ∈ℜN|Ax ≤b}, where A =      A1 −A2 A3 −A3      and b =      b1 −b2 b3 −b3     . From Theorem 3.2.1, we know that basic feasible solutions of Ax ≤b are equivalent to vertices of S. 
We will show that basic feasible solutions of the inequalities A1x ≤b, A2x ≥b2, A3x = b3 are equivalent to basic feasible solutions of Ax ≤b. Consider a basic feasible solution x of A1x ≤b1, A2x ≥b2, A3x = b3. All the equality constraints must be active at x, and there must be a set of N linearly independent constraints that are active at x. Let I denote this set of N linearly independent constraints. Let N1, N2, and N3, be the number of less-than, greater-than, and equality constraints represented by I. Hence, N1 + N2 + N3 = N. Now consider the inequality Ax ≤b, which we write in a partitioned form      A1 −A2 A3 −A3     x ≤      b1 −b2 b3 −b3     . c ⃝Benjamin Van Roy and Kahn Mason 61 Since A3x = b3, all rows of A associated with A3 and −A3 correspond to active constraints. Only 2N3 of these correspond to equality constraints in I, and among these 2N3 constraints, only only N3 of them can be linearly in-dependent (since each row of −A3 is linearly dependent on the corresponding row of A3). Another N1 + N2 constraints in I lead to linearly independent active constraints in rows of A associated with A1 and −A2. This makes for a total of N linearly independent constraints. Therefore, any basic feasible solution of A1x ≤b1, A2x ≥b2, A3x = b3 is also a basic feasible solution of Ax = b and therefore a vertex of S. The converse – that a basic feasible solution of Ax = b is a basic feasible solution of A1x ≤b1, A2x ≥b2, A3x = b3 – can be shown by reversing preceding steps. It follows that basic feasible solutions of A1x ≤b1, A2x ≥b2, A3x = b3 are equivalent to vertices of S. 3.5 Production We now shift gears to explore a few of the many application domains of linear programming. A prime application of linear programming is to the allocation of limited resources among production activities that can be carried out at a firm. In this context, linear programming is used to determine the degree to which the firm should carry out each activity, in the face of resource constraints. In this section, we discuss several types of production problems that can be modeled and solved as linear programs. 3.5.1 Single-Stage Production In a single stage production problem, there is stock in M types of resources and N activities, each of which transforms resources into a type of product. The available stock in each ith resource is denoted by bi, which is a compo-nent of a vector b ∈ℜM. The level to which activity j is carried out is a decision variable xj, which a component of a vector x ∈ℜN. This quantity xj represents the number of units of product type j generated by the activity. In producing each unit of product j, Aij units of each ith resource are consumed. This gives us a matrix A ∈ℜM×N. The activity levels are con-strained to be nonnegative (xj ≥0), and in aggregate, they cannot consume more resources than available (Ax ≤b). Each unit of product j generates a profit of cj, which is a component of a vector c ∈ℜN. The objective is to 62 maximize profit. This gives rise to a linear program: maximize cTx subject to Ax ≤b x ≥0. Let us revisit the petroleum production problem of Chapter 1, which is an example of a single-stage production problem. Example 3.5.1. Crude petroleum extracted from a well contains a complex mixture of component hydrocarbons, each with a different boiling point. A refinery separates these component hydrocarbons using a distillation column. 
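A small helper (ours, assuming NumPy) makes the stacking construction explicit: given less-than, greater-than and equality constraints, it returns the single system Ax ≤ b described above.

```python
import numpy as np

def to_less_than_form(A1, b1, A2, b2, A3, b3):
    """Stack less-than, greater-than and equality constraints into one system
    A x <= b, following the construction in the text: A2 x >= b2 becomes
    -A2 x <= -b2, and A3 x = b3 becomes the pair A3 x <= b3 and -A3 x <= -b3."""
    A = np.vstack([A1, -A2, A3, -A3])
    b = np.concatenate([b1, -b2, b3, -b3])
    return A, b

# Toy usage: x1 + x2 <= 4, x1 >= 1, x2 = 2.
A, b = to_less_than_form(np.array([[1.0, 1.0]]), np.array([4.0]),
                         np.array([[1.0, 0.0]]), np.array([1.0]),
                         np.array([[0.0, 1.0]]), np.array([2.0]))
x = np.array([1.5, 2.0])
print(np.all(A @ x <= b))   # True: x satisfies all three original constraints
```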
The resulting components are then used to manufacture consumer products such as low, medium, and high octane gasoline, diesel fuel, aviation fuel, and heating oil. Suppose we are managing a company that manufactures N petroleum products and have to decide on the number of liters xj, j ∈{1, . . . , n} of each product to manufacture next month. We have M types of resources in the form of component hydrocarbons. A vector b ∈ℜM represents the quantity in liters of each ith resource to be available to us next month. Each petroleum product is manufactured through a separate activity. The jth activity con-sumes Aij liters of the ith resource per unit of the jth product manufactured. Our objective is to maximize next month’s profit. Each jth product gar-ners cj dollars per liter. Hence, the activity levels that maximize profit solve the following linear program: maximize cTx subject to Ax ≤b x ≥0. There are typically many possible activities that can be carried out to produce a wide variety of products. In this event, the number of activities N may far exceed the number M of resource types. Is it advantageous to carry out so many activities in parallel? Remarkably, the following theorem establishes that it is not: Theorem 3.5.1. If a single-stage production problem with M resource types and N activities has an optimal solution, then there is an optimal solution that involves use of no more than M activities. This result follows from Theorem 3.3.2, as we will now explain. Consider a basic feasible solution x ∈ℜN. At x, there must be N linearly independent binding constraints. Up to M of these constraints can be associated with c ⃝Benjamin Van Roy and Kahn Mason 63 the M rows of A. If N > M, we need at least N −M additional binding constraints – these must be nonnegativity constraints. It follows that at least N −M components of x are equal to zero. In other words, there are at most M activities that are used at a basic feasible solution. By Theorem 3.3.2, if there is an optimal solution to a linear program, there is one that is a basic feasible solution. Hence, there is an optimal solution that entails carrying out no more than M activities. This greatly simplifies implementation of an optimal production strategy. No matter how many activities are available, we will make use of no more than M of them. Before closing this section, it is worth discussing how to deal with capac-ity constraints. In particular, production activities are often constrained not only by availability of raw materials, but also by the capacity of manufactur-ing facilities. In fact, in many practical production activities, capacity is the only relevant constraint. Conveniently, capacity can simply be treated as an additional resource consumed by manufacturing activities. We illustrate this point through a continuation to our petroleum production example. Example 3.5.2. Recall that our petroleum production problem led to a linear program maximize cTx subject to Ax ≤b x ≥0. Given an optimal solution x∗∈ℜN, each component x∗ j tells us the quantity in liters of the jth petroleum product that we should manufacture next month. Suppose that we have two factories that support different manufacturing processes. Each of our N manufacturing activities requires capacity from one or both of the factories. In particular, the manufacturing of each liter of the jth product requires a fraction a1 j of next month’s capacity from factory 1 and a fraction a2 j of next month’s capacity from factory 2. 
Hence, we face capacity constraints: (a1)Tx ≤1 and (a2)Tx ≤1. Let A =    A (a1)T (a2)T    and b =    b 1 1   . A new linear program incorporates capacity constraints: maximize cTx subject to Ax ≤b x ≥0. 64 Note that the capacity constraints play a role entirely analogous to the con-straints imposed by limitations in stock of component hydrocarbons. The ca-pacity at each factory is just a resource that leads to an additional constraint. 3.5.2 Multi-Stage Production Manufacturing of sophisticated products typically entails multiple stages of production activity. For example, in manufacturing a computer, chips are fabricated, then they are connected on a printed circuit board, and finally, printed circuit boards, casing, and other components are assembled to cre-ate a finished product. In such a process, not all activities deliver finished products. Some activities generate materials that serve as resources for other activities. Multi-stage production activities of this sort can still be formu-lated as linear programs. Let us illustrate this with an example. Example 3.5.3. Consider a computer manufacturer with two CPU chip fab-rication facilities and one computer assembly plant. Components such as key-boards, monitors, casing, mice, disk drives, and other chips such as SRAM and DRAM are purchased from other companies. There are three grades of CPU chips manufactured by the company, and they are used to produce three models of computers. Fabrication facility 1 can produce chips of grades 1 and 2, while fabrication facility 2 can produce chips of grade 2 and 3. Com-pleted chips are transported to the assembly plant where they are combined with other components to produce finished products. The only relevant con-straints on manufacturing are capacities of the two fabrication facilities and the manufacturing plant. Consider the decision of how to allocate resources and conduct manufac-turing activities over the next month of operation. To formulate the problem as a linear program, we introduce the following decision variables: x1 quantity of model 1 produced x2 quantity of model 2 produced x3 quantity of model 3 produced x4 quantity of grade 1 chip produced at factory 1 x5 quantity of grade 2 chip produced at factory 1 x6 quantity of grade 2 chip produced at factory 2 x7 quantity of grade 3 chip produced at factory 2 There are 6 types of resources to consider: (1) capacity at fabrication facility 1; (2) capacity at fabrication facility 2; c ⃝Benjamin Van Roy and Kahn Mason 65 (3) capacity at the assembly plant; (4) grade 1 chips; (5) grade 2 chips; (6) grade 3 chips. Let the quantities available next month be denoted by a vector b ∈ℜ6. For each jth activity and ith resource, let Aij denote the amount of the ith resource consumed or produced per unit of activity j. If Aij > 0, this represents an amount consumed. If Aij < 0, this represents an amount produced. Consider, as an example, the constraint associated with capacity at the assembly plant: A31x1 + A32x2 + A33x3 + A34x4 + A35x5 + A36x6 + A37x7 ≤b3. The coefficients A34, A35, A36, and A37 are all equal to zero, since capacity at the assembly plant is not used in manufacturing chips. On the other hand, producing the three models of computers does require assembly, so A31, A32, and A33 are positive. Let us now consider, as a second example, the constraint associated with stock in grade 2 chips, which is an endogenously produced resource: A51x1 + A52x2 + A53x3 + A54x4 + A55x5 + A56x6 + A57x7 ≤b5. 
The coefficients A51, A52, and A53 are zero or positive, depending on how many grade 2 chips are used in manufacturing units of each model. The coefficients A54 and A57 are zero, because manufacturing of grade 1 and grade 3 chips does not affect our stock in grade 2 chips. Finally, A55 = −1 and A56 = −1, because each unit of activities 5 and 6 involves production of one grade 2 chip. The value of b5 represents the number of chips available in the absence of any manufacturing. These could be chips acquired from an exogenous source or left over from the previous month. If b5 = 0, all chips used to produce computers next month must be manufactured during the month. The profit per unit associated with each of the three models is given by a vector c ∈ℜ7, for which only the first three components are nonzero. With an objective of maximizing profit, we have a linear program: maximize cTx subject to Ax ≤b x ≥0. Note that the basic difference between linear programs arising from multi-stage – as opposed to single-stage – production problems is that elements of 66 the matrix A are no longer nonnegative. Negative elements are associated with the production of resources. There is another way to represent such linear programs that is worth considering. Instead of having a matrix A that can have negative elements, we could define matrices C and P such that both have only nonnegative elements and A = C −P. The matrix C represents quantities consumed by various activities, while P represents quantities produced. Then, the multi-stage production problem takes the form maximize cTx subject to Cx −Px ≤b x ≥0. As in the single-stage case, a basic feasible solution of the multi-stage production problem with M resource types – including materials produced for use as resources to later stages of production – must have M linearly independent binding constraints. If the number N of activities exceeds the number M of resources, at least N −M activities must be inactive at a basic feasible solution. We therefore have an extension of Theorem 3.5.1: Theorem 3.5.2. If a multi-stage production problem with M resource types and N activities has an optimal solution, then there is an optimal solution that involves use of no more than M activities. 3.5.3 Market Stratification and Price Discrimination In all the production models we have considered, each unit of a product generated the same profit. Consider, for example, a single-stage production problem: maximize cTx subject to Ax ≤b x ≥0. Each unit of each jth product offers a profit of cj. In some situations, it is desirable to stratify a market and charge different prices for different classes of customers. For example, coupons can be directed at a certain segment of the market that will only purchase a product if it is below the advertised price. This allows a firm to sell at a high price to those who find it acceptable without loosing the profit it can obtain from the portion of the market that requires a lower price. When there is a single price cj associated with each jth product, the profit generated by manufacturing xj units is cjxj. Suppose that the market is segmented and price discrimination is viable. In particular, suppose that c ⃝Benjamin Van Roy and Kahn Mason 67 we can sell up to K units each jth product at price c1 j and the rest at price c2 j < c1 j. Then, the objective should be to maximize PN j=1 fj(xj), where fj(xj) = ( c1 jxj, if xj ≤K c1 jK + c2 j(xj −K), otherwise. In fact, suppose that this were the case for every product. 
Then, the single-stage production problem becomes: maximize PN j=1 fj(xj) subject to Ax ≤b x ≥0. This optimization problem is not a linear program, but fortunately, it can be converted to one, as we now explain. We introduce new decision variables x1, x2 ∈ℜN. For each j, let x1 j = min(xj, K) and x2 j = min(xj −K, 0). Hence, xj = x1 j +x2 j. Each x1 j represents the quantity of product j manufactured and sold for profit c1 j, while x2 j rep-resents the quantity manufactured and sold for profit c2 j. A linear program leads to optimal values for these new decision variables: maximize (c1)Tx1 + (c2)Tx2 subject to A(x1 + x2) ≤b x1 ≤Ke x1 ≥0 x2 ≥0. Recall that e denotes the vector with every component equal to 1. This idea generalizes to any number of market segments. If there are L different profit vectors c1 ≥c2 ≥· · · ≥cL, let K1, . . . , KL−1 denote num-bers of customers that will purchase each jth product at prices c1 j, . . . , cL−1 j , respectively. Then, the profit-maximizing linear program is given by maximize PL k=1(ck)Txk subject to A PL k=1 xk ≤b xk ≤Kke, for k ∈{1, . . . , L −1} xk ≥0, for k ∈{1, . . . , L}. There are LN decision variables in the above linear program, so at a basic feasible solution x1, . . . , xL, there must be LN linearly independent active constraints. Among these, at most M can be associated with resource constraints, and at most (L −1)N can be associated with the constraints 68 xk ≤Kke. If N > M, this leaves at least N −M additional constraints that must be active. These must be nonnegativity constraints. It follows that at least N −M components among x1, . . . , xL are equal to zero. We therefore have the following theorem. Theorem 3.5.3. If a production problem with M resource types, N activities, and multiple market segments associated with different profits has an optimal solution, then there is an optimal solution that involves use of no more than M activities. 3.6 Contingent Claims In Section 2.4, we introduced the study of contingent claims. We showed how structured products can sometimes be replicated and priced and discussed the notion of arbitrage. In this section, we revisit the topic of contingent claims, bringing to bear our understanding of linear programming. This will allow us to broaden the range of situations where a bank can sell structured products while protecting itself against risk and to identify arbitrage opportunities, when they exist. 3.6.1 Structured Products in an Incomplete Market As in Section 2.4, consider a collection of N assets in a world with M possible outcomes. The possible payoffs of each jth asset is represented by a vector aj ∈ℜM. A payoffmatrix P = h a1 . . . aN i , represents payoffvectors for all assets. The price per unit of each asset is given by the corresponding component of a vector ρ ∈ℜN. When a bank sells a structured product, it is desirable to protect against risks. This can be accomplished by finding portfolio holdings x ∈ℜN that replicate the product, and purchasing the associated quantities of assets. The process of replication also guides pricing of the structured product. In particular, the price that the bank charges should exceed the cost ρTx of the replicating portfolio. It is not always possible to find a replicating portfolio. A structured product with a payoffvector b ∈ℜM can only be replicated if there is a vector x ∈ℜN of portfolio holdings such that Px = b. Such a vector exists only when b is in C(P). 
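Whether a payoff vector b can be replicated is exactly the question of whether b lies in the column space of P, which is easy to test numerically. The sketch below (ours; the toy market is hypothetical) returns replicating holdings when they exist and None otherwise.

```python
import numpy as np

def replicating_portfolio(P, b, tol=1e-9):
    """Return holdings x with P x = b if the structured product's payoff b
    lies in the column space of the payoff matrix P, otherwise None."""
    x, _, _, _ = np.linalg.lstsq(P, b, rcond=None)
    if np.linalg.norm(P @ x - b) > tol:      # b is not in C(P): cannot be replicated
        return None
    return x

# Toy market: 3 outcomes, 2 assets (a riskless bond and a risky asset).
P = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
print(replicating_portfolio(P, np.array([2.0, 3.0, 4.0])))   # [2, 1]: replicable
print(replicating_portfolio(P, np.array([1.0, 0.0, 1.0])))   # None: not replicable
```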
This is true for all b only if the market is complete – that is, if P has rank M. Otherwise, the market is said to be incomplete, and some structured products cannot be replicated.

How can a bank protect against risks when it sells a structured product that cannot be replicated? One way involves super-replicating the structured product. A portfolio with holdings x is said to super-replicate a structured product with payoff vector b if Px ≥ b. If a bank sells a structured product and purchases a super-replicating portfolio, its net cash flow (Px)_i − b_i in any possible future outcome i is nonnegative. The price ρ^T x of this super-replicating portfolio can also be used as a lower bound for the price to charge the customer for the structured product. We illustrate the concept of super-replication with an example.

Example 3.6.1. (Super-Replicating a Currency Hedge) Recall the structured product described in Example 2.4.2. Suppose that the current value of the foreign currency is r_0 = 0.5 dollars, that r* = 0.1 dollars and p = 10 million dollars, and that the currency value one year from now will be in the set {0.01, 0.02, . . . , 0.99, 1.0}. Hence, the payoff vector b for the structured product is in ℜ^100, with b_i being the value of the product one year from now in the event that the currency is valued at i cents. This payoff vector is illustrated in Figure 3.4.

Figure 3.4: The payoff function for a protective structured product (axes: outcome versus payoff in millions of dollars).

Consider a situation where a bank sells this structured product and wishes to protect against the associated risks. Suppose there are several assets in the market that can be traded for this purpose:

(a) The currency. If a unit of foreign currency is sold one year from now, the payoff vector a^1 ∈ ℜ^100 is given by a^1_i = i/100 for each i.
(b) A zero-coupon bond. The payoff vector a^2 ∈ ℜ^100 is given by a^2_i = 1 for each i.
(c) A European call with strike 0.1. The payoff vector a^3 ∈ ℜ^100 is given by a^3_i = max(a^1_i − 0.1, 0) for each i.
(d) A European call with strike 0.2. The payoff vector a^4 ∈ ℜ^100 is given by a^4_i = max(a^1_i − 0.2, 0) for each i.

Payoff functions for these four assets are illustrated in Figure 3.5. The structured product payoff vector b is not in the span of a^1, . . . , a^4, and therefore the product cannot be replicated. There are many ways to super-replicate the structured product. One involves purchasing 10 million units of the bond. This leads to a portfolio that pays 10 million dollars in any outcome. This amount always exceeds what the bank will have to pay the customer. Somehow this feels like overkill, since in many outcomes, the value of the bond portfolio will far exceed the value of the structured product. An alternative super-replicating portfolio is constructed by purchasing 10 million units of the bond and 20 million call options at a strike of 0.2, and short selling 40 million call options at a strike of 0.1. The payoff vector of this super-replicating portfolio, as well as that of the one consisting only of bonds, is illustrated in Figure 3.6. Note that the payoffs associated with the second super-replicating portfolio are dominated by those offered by the first. It is natural to expect that the second super-replicating portfolio should offer a less expensive way of protecting against the risks brought about by selling the structured product. Given multiple super-replicating portfolios, as in the above example, which one should the bank purchase? The cheapest one, of course!
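As a preview of the linear program formalized in the next paragraph, the cheapest super-replicating portfolio can be computed with an off-the-shelf solver. The sketch below uses scipy.optimize.linprog (our choice; any LP solver would do) on a small hypothetical market, not the 100-outcome currency example; the constraint Px ≥ b is passed as −Px ≤ −b, and holdings are left unbounded so that short sales are allowed.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical market: 3 outcomes (rows), 2 assets (columns).
P = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 3.0]])
rho = np.array([0.9, 0.4])          # asset prices
b = np.array([1.0, 2.0, 2.0])       # structured-product payoff to cover

# minimize rho^T x  subject to  P x >= b  (holdings may be negative)
res = linprog(c=rho, A_ub=-P, b_ub=-b, bounds=[(None, None)] * 2, method="highs")
print(res.x, res.fun)               # cheapest super-replicating holdings and their cost
```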
The price of a super-replicating portfolio with holdings x is ρ^T x. To find the cheapest super-replicating portfolio, the bank can solve a linear program:

minimize    ρ^T x
subject to  Px ≥ b.

The constraints ensure that the resulting portfolio holdings do indeed generate a super-replicating portfolio. We illustrate the process in the context of our currency hedge.

Example 3.6.2. (Cheapest Super-Replicating Portfolio) Suppose that the prices per unit of the currency, the bond, and the two European call options are $0.5, $0.9, $0.4, and $0.35, respectively. Then, letting ρ = [0.5 0.9 0.4 0.35]^T, solving the linear program

minimize    ρ^T x
subject to  Px ≥ b

leads to portfolio holdings of x = 10^6 × [0 10 −40 20]^T. This corresponds to the second super-replicating portfolio considered in Example 3.6.1.

By Theorem 3.3.2, if the linear program

minimize    ρ^T x
subject to  Px ≥ b

has an optimal solution and there is a basic feasible solution, then there is a basic feasible solution that is an optimal solution. Let x* be an optimal basic feasible solution. Then, there must be at least N linearly independent constraints active at x*. In other words, the payoff vector Px* is equal to the payoff vector b of the structured product in at least N outcomes. This means that if we plot the payoff vectors associated with the structured product and this optimal super-replicating portfolio on the same graph, they will touch at no fewer than N points.

3.6.2 Finding Arbitrage Opportunities

In Section 2.4, we introduced the notion of arbitrage. An arbitrage opportunity was defined to be a vector x ∈ ℜ^N of portfolio holdings with a negative cost ρ^T x < 0 and nonnegative payoffs Px ≥ 0. By purchasing assets in quantities given by the portfolio weights, we receive an amount −ρ^T x > 0, and in every possible future event, we are not committed to pay any money. Sounds good. But why stop at buying the quantities identified by x? Why not buy quantities 100x and garner an income of −100ρ^T x? Indeed, an arbitrage opportunity offers the possibility of making unbounded sums of money.

We have been talking about arbitrage opportunities for some time, but now that we understand linear programming, we can actually identify arbitrage opportunities, if they exist. Consider solving the following linear program:

minimize    ρ^T x
subject to  Px ≥ 0.

It is clear that this linear program will look for the most profitable arbitrage opportunity. However, because there is no bound on the sum of money that can be made when an arbitrage opportunity presents itself, this linear program will have an unbounded solution when an arbitrage opportunity exists. In order to generate a more meaningful solution, consider an alternative linear program:

minimize    ρ^T x
subject to  Px ≥ 0
            ρ^T x = −1.

This one identifies a portfolio that offers an initial income of $1. If an arbitrage opportunity exists, we can use it to make any amount of money, so we could use it to generate a $1 income. Hence, existence of an arbitrage opportunity implies that the feasible region of this linear program is nonempty. Once this linear program identifies a portfolio that generates $1, we can make an arbitrarily large amount of money by purchasing multiples of this portfolio. Note that the objective function here could be anything – any feasible solution is an arbitrage opportunity.

There are other linear programs we could consider in the search for arbitrage opportunities. An example is:

minimize    e^T (x^+ + x^−)
subject to  P(x^+ − x^−) ≥ 0
            ρ^T (x^+ − x^−) = −1
            x^+ ≥ 0
            x^− ≥ 0.
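The last program above is unpacked in the next paragraph. As a preview, here is a sketch of the simpler $1-normalized search (minimize over x subject to Px ≥ 0 and ρ^T x = −1) using scipy.optimize.linprog on hypothetical data; any feasible point it returns is an arbitrage opportunity, and infeasibility means that no portfolio with negative cost and nonnegative payoffs exists in this toy market.

```python
import numpy as np
from scipy.optimize import linprog

def find_dollar_arbitrage(P, rho):
    """Look for holdings x with P x >= 0 and rho^T x = -1 (a $1 up-front profit).
    Returns x if such a portfolio exists, otherwise None."""
    n = P.shape[1]
    res = linprog(c=np.zeros(n),                 # any objective works: feasibility is the point
                  A_ub=-P, b_ub=np.zeros(P.shape[0]),
                  A_eq=rho.reshape(1, -1), b_eq=np.array([-1.0]),
                  bounds=[(None, None)] * n, method="highs")
    return res.x if res.status == 0 else None

# Hypothetical 2-outcome market with a mispriced second asset.
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])
rho = np.array([0.9, 0.8])                       # asset 2 pays more yet costs less
print(find_dollar_arbitrage(P, rho))             # e.g. short asset 1, long asset 2
```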
The last linear program above involves two decision vectors: x^+, x^− ∈ ℜ^N. The idea is to view the difference x^+ − x^− as a vector of portfolio weights. Note that if (x^+, x^−) is a feasible solution to our new linear program, the portfolio x^+ − x^− is an arbitrage opportunity, since P(x^+ − x^−) ≥ 0 and ρ^T (x^+ − x^−) = −1. Further, if there exists an arbitrage opportunity x that offers a $1 profit, there is a feasible solution (x^+, x^−) such that x = x^+ − x^−. Hence, our new linear program finds the $1 arbitrage opportunity that minimizes e^T (x^+ + x^−), which, at an optimal solution, equals the total number of shares (long plus short) that must be traded in order to execute the opportunity. This objective is motivated by the notion that if there are multiple arbitrage opportunities, one that minimizes trading activity may be preferable.

3.7 Pattern Classification

Many engineering and managerial activities call for automated classification of observed data. Computer programs that classify observations are typically developed through machine learning. We discuss an example involving breast cancer diagnosis.

Example 3.7.1. (Breast Cancer Diagnosis) Breast cancer is the second largest cause of cancer deaths among women. A breast cancer victim's chances for long-term survival are improved by early detection of the disease. The first sign of breast cancer is a lump in the breast. The majority of breast lumps are benign, however, so other means are required to diagnose breast cancer – that is, to distinguish malignant lumps from benign ones. One approach to diagnosing breast cancer involves extracting fluid from a lump and photographing cell nuclei through a microscope. Numerical measurements of the sizes and shapes of nuclei are recorded and used for diagnosis.

An automated system for diagnosing breast cancer based on these numerical measurements has proven to be very effective. This system was developed through machine learning. In particular, a computer program processed a large collection of data samples, each of which was known to be associated with a malignant or benign lump. Through this process, the computer program "learned" patterns that distinguish malignant lumps from benign ones, to produce a system for classifying subsequent samples.

Pattern classification can be thought of in terms of mapping a feature vector a ∈ ℜ^K to one of a finite set C of classes. Each feature vector is an encoding that represents an observation. For example, in breast cancer diagnosis, each feature vector represents measurements associated with cell nuclei from a breast lump. In this example, there are two classes, corresponding to malignant and benign lumps.

Machine learning involves processing a set of feature vectors u^1, . . . , u^L ∈ ℜ^K labeled with known classes z^1, . . . , z^L ∈ C to produce a mapping from ℜ^K to C. This mapping is then used to classify additional feature vectors. Machine learning has been used to generate pattern classifiers in many application domains other than breast cancer diagnosis. Examples include:

1. automatic recognition of handwritten alphabetical characters;
2. detection of faults in manufacturing equipment based on sensor data;
3. automatic matching of fingerprints;
4. automatic matching of mug shots with composite sketches;
5. prediction of outcomes in sports events;
6. detection of favorable investment opportunities based on stock market data;
7. automatic recognition of spoken phonemes.
In this section, we discuss an approach to machine learning and pattern classification that involves formulation and solution of a linear program. We will only consider the case of two classes. This comes at no loss of generality, since methods that address two-class problems can also handle larger numbers of classes. In particular, for a problem with n classes, we could generate n classifiers, each distinguishing one class from the others. The combination of these classifiers addresses the n-class problem.

3.7.1 Linear Separation of Data

We consider two classes with labels C = {1, −1}. Samples in class 1 are referred to as positive, while samples in class −1 are negative. To develop a classifier we start with positive samples u^1, . . . , u^K ∈ ℜ^N and negative samples v^1, . . . , v^L ∈ ℜ^N. The positive and negative samples are said to be linearly separable if there exists a hyperplane in ℜ^N such that all positive samples are on one side of the hyperplane and all negative samples are on the other. Figure 3.7 illustrates, for N = 2, samples that are linearly separable and samples that are not. Recall that each hyperplane in ℜ^N takes the form {y | x^T y = α} for some x ∈ ℜ^N and α ∈ ℜ. In mathematical terms, the positive and negative samples are linearly separable if there is a vector x ∈ ℜ^N and a scalar α ∈ ℜ such that x^T u^k > α and x^T v^ℓ < α for all k ∈ {1, . . . , K} and ℓ ∈ {1, . . . , L}.

If positive and negative samples are linearly separable, a hyperplane that separates the data provides a classifier. In particular, given parameters x ∈ ℜ^N and α ∈ ℜ for such a hyperplane, we classify a sample y ∈ ℜ^N as positive if x^T y > α and as negative if x^T y < α. But how can we find appropriate parameters x and α? Linear programming offers one approach.

To obtain a separating hyperplane, we need to compute x ∈ ℜ^N and α ∈ ℜ such that x^T u^k > α and x^T v^ℓ < α for all k and ℓ. This does not quite fit into the linear programming framework, because the inequalities are strict. However, the problem can be converted to a linear program, as we now explain. Suppose that we have parameters x and α satisfying the desired strict inequalities. They then also satisfy x^T u^k − α > 0 and x^T v^ℓ − α < 0 for all k and ℓ. For a sufficiently large scalar β, we have β(x^T u^k − α) ≥ 1 and β(x^T v^ℓ − α) ≤ −1, for all k and ℓ. Replacing x by βx and α by βα, we have x^T u^k − α ≥ 1 and x^T v^ℓ − α ≤ −1, for all k and ℓ. It follows that – if there is a separating hyperplane – there is a hyperplane characterized by parameters x and α satisfying x^T u^k − α ≥ 1 and x^T v^ℓ − α ≤ −1, for all k and ℓ.

Let U be the K × N matrix whose rows are (u^1)^T, . . . , (u^K)^T, and let V be the L × N matrix whose rows are (v^1)^T, . . . , (v^L)^T, so that our inequalities can be written more compactly as Ux − αe ≥ e and Vx − αe ≤ −e, where e is the vector with every component equal to 1. A linear program with constraints Ux − αe ≥ e and Vx − αe ≤ −e and any objective function will find a hyperplane that separates the positive and negative samples, if one exists.

3.7.2 Minimizing Violations

What should we do when the positive and negative samples cannot be separated by a hyperplane? One might aim at minimizing the number of misclassifications. A misclassification is either a positive sample u^k such that x^T u^k < α or a negative sample v^ℓ such that x^T v^ℓ > α. Unfortunately, the problem of minimizing the number of misclassifications is very hard. In fact, there are no known methods for efficiently finding a hyperplane that minimizes the number of misclassifications.
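Before turning to that alternative, here is a minimal computational sketch of the separation construction from Section 3.7.1: the constraints Ux − αe ≥ e and Vx − αe ≤ −e with an arbitrary (zero) objective, solved with scipy.optimize.linprog on hypothetical two-dimensional data of our own making.

```python
import numpy as np
from scipy.optimize import linprog

def separating_hyperplane(U, V):
    """Find (x, alpha) with U x - alpha >= 1 and V x - alpha <= -1, if the data
    are linearly separable; returns None otherwise.  Decision vector is [x, alpha]."""
    K, N = U.shape
    L = V.shape[0]
    # Positive samples:  -(u_k^T x) + alpha <= -1
    # Negative samples:    v_l^T x  - alpha <= -1
    A_ub = np.vstack([np.hstack([-U, np.ones((K, 1))]),
                      np.hstack([V, -np.ones((L, 1))])])
    b_ub = -np.ones(K + L)
    res = linprog(c=np.zeros(N + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (N + 1), method="highs")
    return (res.x[:N], res.x[N]) if res.status == 0 else None

U = np.array([[2.0, 2.0], [3.0, 1.5]])   # positive samples
V = np.array([[0.0, 0.0], [0.5, 1.0]])   # negative samples
print(separating_hyperplane(U, V))       # some separating (x, alpha)
```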
One alternative is to find a hyperplane that minimizes the "extent" of misclassifications. In particular, given a hyperplane parameterized by x ∈ ℜ^N and α ∈ ℜ, define the violation vector δ^+ = max(−Ux + αe + e, 0) for the positive samples, where the maximum is taken componentwise; its kth component δ^+_k is the violation of the kth positive sample. Similarly, define the violation vector δ^− = max(Vx − αe + e, 0) for the negative samples. We therefore have two vectors: δ^+ ∈ ℜ^K and δ^− ∈ ℜ^L. The violation associated with a sample exceeds 1 if and only if the sample is misclassified. The following linear program introduces decision variables δ^+ and δ^−, in addition to x and α, and minimizes the sum of violations:

minimize    e^T (δ^+ + δ^−)
subject to  δ^+ ≥ −Ux + αe + e
            δ^+ ≥ 0
            δ^− ≥ Vx − αe + e
            δ^− ≥ 0.

Figure 3.8 presents an example with N = 2 of a hyperplane produced by this linear program when positive and negative samples are not linearly separable.

3.8 Notes

The line of analysis presented in Sections 3.2 and 3.3 is adapted from Chapter 2 of Introduction to Linear Optimization, by Bertsimas and Tsitsiklis (1997). The example of breast cancer diagnosis and the linear programming formulation for minimizing violations are taken from Mangasarian, Street, and Wolberg (1994), who used linear programming to develop a pattern classifier that is in use at University of Wisconsin Hospitals.

3.9 Exercises

Question 1
Add a single inequality constraint to x ≤ 0, y ≤ 0 so that the feasible region contains only one point.

Question 2
How many faces does the feasible set given by x ≥ 0, y ≥ 0, z ≥ 0, x + y + z = 1 have? What common polyhedron is it? What is the maximum of x + 2y + 3z over this polyhedron?

Question 3
Show that the feasible set constrained by x ≥ 0, y ≥ 0, 2x + 5y ≤ 3, −3x + 8y ≤ −5 is empty.

Question 4
Is ℜ^N convex? Show that it is, or explain why not.

Question 5
Draw a picture of a polyhedron in ℜ^2 where one of the points in the polyhedron has 3 constraints active at the same time.

Question 6
In a particular polyhedron in ℜ^3, one point has 3 constraints active at once. Does this point have to be a vertex? Why, or if not, give an example.

Question 7
Consider the following problem. Maximize x + y subject to the constraints x ≥ 0, y ≥ 0, −3x + 2y ≤ −1, x − y ≤ 2. Is the feasible region bounded or unbounded? For a particular L ≥ 0, find an x and y so that [x y]^T is feasible, and also x + y ≥ L. Note that x and y will depend on L.

Question 8
Dwight is a retiree who raises pigs for supplemental income. He is trying to decide what to feed his pigs, and is considering using a combination of feeds from some local suppliers. He would like to feed the pigs at minimum cost while making sure that each pig receives an adequate supply of calories and vitamins. The cost, calorie content, and vitamin supply of each feed are given in the table below.

Contents                 Feed Type A    Feed Type B
Calories (per pound)     800            1000
Vitamins (per pound)     140 units      70 units
Cost (per pound)         $0.40          $0.80

Each pig requires at least 8000 calories per day and at least 700 units of vitamins. A further constraint is that no more than one-third of the diet (by weight) can consist of Feed Type A, since it contains an ingredient that is toxic if consumed in too large a quantity. Formulate as a linear program and solve this in Excel. What is the resulting daily cost per pig?

Question 9
The Apex Television Company has to decide on the number of 27- and 20-inch sets to be produced at one of its factories. Market research indicates that at most 40 of the 27-inch sets and 10 of the 20-inch sets can be sold per month.
The maximum number of work-hours available is 500 per month. A 27-inch set requires 10 work-hours, and a 20-inch set requires 10 work-hours. Each 27-inch set sold produces a profit of $120 and each 20-inch set produces a profit of $80. A wholesaler has agreed to purchase all the television sets produced (at the market price) if the numbers do not exceed the amounts indicated by the market research. Formulate as a linear program and solve for the maximum profit in Excel.

Question 10
The Metalco Company desires to blend a new alloy of 40 percent tin, 35 percent zinc, and 25 percent lead from several available alloys having the following properties.

Property               Alloy 1   Alloy 2   Alloy 3   Alloy 4   Alloy 5
Percentage of tin      60        25        45        20        50
Percentage of zinc     10        15        45        50        40
Percentage of lead     30        60        10        30        10
Cost ($ per lb)        22        20        25        24        27

The objective is to determine the proportions of these alloys that should be blended to produce the new alloy at a minimum cost.
a) Formulate this as a linear program, and solve in Excel.
b) How many alloys are used in the optimal solution?

Question 11
Suppose that two constraints in a system are c^T x ≤ 1 and d^T x ≤ 1, where c and d are linearly dependent. A constraint is called redundant if removing it does not change the feasible region.
a) If c^T d ≥ 0, does this mean that one of c or d is redundant? If so, explain why. If not, give an example.
b) If c ≤ d, does this mean that one of c or d is redundant? If so, explain why. If not, give an example.

Question 12
A paper mill makes rolls of paper that are 80" (80 inches) wide. Rolls are marketed in widths of 14", 31", and 36". An 80" roll may be cut (like a loaf of bread) into any combination of widths whose sum does not exceed 80". Suppose there are orders for 216 rolls of width 14", 87 rolls of width 31", and 341 rolls of width 36". The problem is to minimize the total number of 80" rolls required to fill the orders. There are six ways – called "cuts" – in which we might consider cutting each roll into widths of 14", 31", and 36". The numbers of 14", 31", and 36" rolls resulting from each cut are given in the following table:

Cut    14"    31"    36"
1      5      0      0
2      3      1      0
3      1      2      0
4      3      0      1
5      0      1      1
6      0      0      2

(a) Let x1, x2, x3, x4, x5, x6 be decision variables, each representing the number of rolls cut in one of the six ways. Describe a linear program that determines the minimum number of 80" rolls required to fill the orders (ignore the requirement that each xi should be an integer).
(b) Solve the linear program using Excel.
(c) Suppose the orders for next month are yet to be decided. Can we determine in advance how many types of cuts will be needed? Can we determine in advance any cuts that will or will not be used? For each question, if your answer is affirmative, explain why; if not, explain why not.

Question 13
MSE Airlines (pronounced "messy") needs to hire customer service agents. Research on customer demands has led to the following requirements on the minimum number of customer service agents that need to be on duty at various times in any given day:

Time Period            Staff Required
6am to 8am             68
8am to 10am            90
10am to noon           56
Noon to 2pm            107
2pm to 4pm             80
4pm to 6pm             93
6pm to 8pm             62
8pm to 10pm            56
10pm to midnight       40
Midnight to 6am        15

The head of personnel would like to determine the least expensive way to meet these staffing requirements. Each agent works an 8-hour shift, but not all shifts are available.
The following table gives the available shifts and the daily wages for agents working the various shifts:

Shift             Daily Wage
6am-2pm           $180
8am-4pm           $170
10am-6pm          $160
Noon-8pm          $190
2pm-10pm          $200
4pm-Midnight      $210
10pm-6am          $225
Midnight-8am      $210

(a) Write a linear program that determines the least expensive way to meet staffing requirements.
(b) Solve the linear program using Excel.

Question 14
Consider the multi-stage production problem of producing chips and computers given in the lecture notes (Example 3.5.3). Suppose the net profits of selling one unit of Model 1, 2, and 3 are $600, $650, and $800, respectively. Production of one unit of Model 1 consumes 1.5% of the capacity of the assembly plant; Model 2 consumes 2% and Model 3, 2.5%. Production of one unit of Chip 1 and one unit of Chip 2 each use 2% of the capacity of fabrication facility 1. Production of one unit of Chip 2 uses 3% of the capacity of fabrication facility 2, and production of one unit of Chip 3 uses 4%. Model 1 needs one unit of Chip 1 and one unit of Chip 2, Model 2 needs one unit of Chip 1 and one unit of Chip 3, and Model 3 needs one unit of Chip 2 and one unit of Chip 3. Initially there are no chips in stock. Formulate as a linear program and solve in Excel. What is the optimal production plan? How many activities are used? Is this the maximal number of activities that would be used for any basic feasible solution? Is this the minimal number of activities that would be used for any basic feasible solution?

Question 15
In the early 1900s, Edgar Anderson collected data on different species of iris to study how species evolve to differentiate themselves in the course of evolution. In class, we studied three species – iris setosa, iris versicolor, and iris virginica – and used a linear program to show how iris setosa could be distinguished from the others based on sepal and petal dimensions. In particular, we showed that data on iris setosa was linearly separable from the other species' data. The Excel file used in class is available on the course web site. Is the iris versicolor data linearly separable from the other species' data? If so, determine parameters (x and α) for a separating hyperplane. If not, determine parameters for a hyperplane that minimizes the violation metric discussed in class and used in the spreadsheet, and determine the number of misclassifications resulting from this hyperplane.

Question 16
Consider a stock that in one year can take on any price within {1, 2, . . . , 99, 100} dollars. Suppose that there are four European put options available on the stock, and these are the only assets that we can trade at the moment. Their strike prices and current market prices for buying/selling are provided in the following table:

Strike Price    Market Price
10              1.5
20              2.55
30              4.59
40              8.72

Use Excel to show whether or not there is an arbitrage opportunity involving trading of these four assets.

Figure 3.5: Payoff functions of the currency (a), a zero-coupon bond (b), a European call option with strike 0.1 (c), and a European call option with strike 0.2 (d). (Axes in each panel: outcome versus payoff in dollars.)

Figure 3.6: Payoff functions of two super-replicating portfolios: (a) one consisting only of bonds, and (b) one consisting of bonds and long and short positions in European call options. (Axes: outcome versus payoff in millions.)
Figure 3.7: (a) Linearly separable data. The dashed line represents a separating hyperplane. (b) Data that are not linearly separable.

Figure 3.8: A hyperplane constructed by solving the linear program that minimizes violations. The resulting objective value was approximately 3.04.
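For reference, the violation-minimizing linear program of Section 3.7.2 (the one behind Figure 3.8) can be assembled the same way. The sketch below uses scipy.optimize.linprog on hypothetical, non-separable two-dimensional data of our own; the decision vector stacks x, α, δ^+, and δ^−.

```python
import numpy as np
from scipy.optimize import linprog

def min_violation_hyperplane(U, V):
    """Minimize e^T(delta+ + delta-) subject to
       delta+ >= -U x + alpha e + e,  delta+ >= 0,
       delta- >=  V x - alpha e + e,  delta- >= 0."""
    K, N = U.shape
    L = V.shape[0]
    c = np.concatenate([np.zeros(N + 1), np.ones(K + L)])       # [x, alpha, delta+, delta-]
    # -U x + alpha e + e - delta+ <= 0   and   V x - alpha e + e - delta- <= 0
    A_ub = np.vstack([
        np.hstack([-U, np.ones((K, 1)), -np.eye(K), np.zeros((K, L))]),
        np.hstack([V, -np.ones((L, 1)), np.zeros((L, K)), -np.eye(L)]),
    ])
    b_ub = -np.ones(K + L)
    bounds = [(None, None)] * (N + 1) + [(0, None)] * (K + L)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:N], res.x[N], res.fun          # hyperplane, offset, total violation

U = np.array([[2.0, 2.0], [0.2, 0.3]])           # one positive sample sits among the negatives
V = np.array([[0.0, 0.0], [0.5, 1.0], [2.5, 2.0]])
print(min_violation_hyperplane(U, V))
```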
Interval Type-3 Fuzzy Inference System Design for Medical Classification Using Genetic Algorithms
===============
Open Access Article

by Patricia Melin, Daniela Sánchez and Oscar Castillo

Tijuana Institute of Technology, Tecnológico Nacional de México, Calzada Tecnológico S/N, Fracc. Tomas Aquino, Tijuana 22379, BC, Mexico
Author to whom correspondence should be addressed.

Axioms 2024, 13(1), 5
Submission received: 13 November 2023 / Revised: 16 December 2023 / Accepted: 19 December 2023 / Published: 20 December 2023
(This article belongs to the Special Issue Advances in Mathematical Optimization Algorithms and Its Applications)

Abstract

An essential aspect of healthcare is receiving an appropriate and opportune disease diagnosis. In recent years, there has been enormous progress in combining artificial intelligence to help professionals perform these tasks. The design of interval Type-3 fuzzy inference systems (IT3FIS) for medical classification is proposed in this work.
This work proposed a genetic algorithm (GA) for the IT3FIS design where the fuzzy inputs correspond to attributes relational to a particular disease. This optimization allows us to find some main fuzzy inference systems (FIS) parameters, such as membership function (MF) parameters and the fuzzy if-then rules. As a comparison against the proposed method, the results achieved in this work are compared with Type-1 fuzzy inference systems (T1FIS), Interval Type-2 fuzzy inference systems (IT2FIS), and General Type-2 fuzzy inference systems (GT2FIS) using medical datasets such as Haberman’s Survival, Cryotherapy, Immunotherapy, PIMA Indian Diabetes, Indian Liver, and Breast Cancer Coimbra dataset, which achieved 75.30, 87.13, 82.04, 77.76, 71.86, and 71.06, respectively. Also, cross-validation tests were performed. Instances established as design sets are used to design the fuzzy inference systems, the optimization technique seeks to reduce the classification error using this set, and finally, the testing set allows the validation of the real performance of the FIS. Keywords: medical classification; classification; genetic algorithm; Interval Type-3 fuzzy logic; Cryotherapy; Haberman; Immunotherapy; PIMA Indian Diabetes; Indian Liver; Breast Cancer Coimbra MSC: 03B52; 03E72; 62P30 1. Introduction In modern healthcare, the accurate diagnosis of diseases holds paramount significance. Timely and precise identification of medical conditions is pivotal for effective treatment and essential for mitigating their potentially dire consequences. In recent years, the incorporation of artificial intelligence (AI) into medical diagnostics has emerged as a transformative force, offering innovative approaches to disease classification and detection, allowing health experts more tools to provide a correct diagnosis. Artificial intelligence represents a powerful ally in pursuing more accurate and efficient disease diagnosis with its capacity to analyze extensive datasets, discern patterns, and adapt to evolving scenarios. Machine learning algorithms have demonstrated remarkable proficiency in the early detection of diseases, often preceding the manifestation of clinical symptoms [1,2,3,4]. But another area within AI is fuzzy logic (FL), which, since its creation, has had many areas of application where it has demonstrated its effectiveness in solving highly complex problems. Its applications range from the classification of foods based on their characteristics , fuzzy control problems where the inputs of the FIS play an important role in obtaining important output values that allow the stability of the models , to responses combination for pattern recognition applied to time series prediction or human recognition , and classification problems to mention a few applications. A significant contribution that FL has had is in medical applications, where, either alone or in combination with other techniques, it has allowed it to be an excellent support tool in medical diagnosis [10,11]. In Ref. , a fuzzy rule-based model is presented for Diabetes classification, where the results achieved demonstrated its effectiveness to be proven in the healthcare sector to help in the diagnosis. In Ref. , a system based on FL to predict postoperative complications is proposed using characteristics about current voltage for acupuncture points; the proposed method was successfully applied to the surgical treatment of benign prostatic hyperplasia, demonstrating it to be a tool to help in the diagnosis. In Ref. 
, a fuzzy decision tree is proposed as a classification method for medical data, where authors show that the proposed method achieved better accuracy over conventional classifiers. Comparisons among Support Vector Machine (SVM), Naive Bayes (NB), Decision Tree (DT), Artificial Neural Networks (ANN), Type 1, Interval, and General Type-2 FIS have been performed, where the FIS are optimized using particle swarm optimization, the General Type-2 FIS proved to have better results over the other techniques applied to medical diagnosis even using different level of uncertainty and cross-validation [15,16]. Many works have compared the results obtained between different types of fuzzy systems. In these works, it has been observed that depending on the complexity of the problem and the data used, the type of FL to be used will depend. In Ref. , The advantages of Interval Type-2 FIS are shown over Type-1 FIS, applied to a modification of Flower Pollination Optimization for Rotary Inverted Pendulum System. The application of noise effects and load disturbance mainly demonstrates the robustness and effectiveness of the method. In Ref. , a General Type-2 fuzzy PID (proportional integral derivative) controller is presented and compared versus PID, Type-1 fuzzy PID, and Interval Type-2 fuzzy PID using uncertainties such as controller disturbance or output noise, and the proposed PID achieved better results than the other methods shown. It is also important to mention the combination that has been made of Type-2 fuzzy logic with the Internet of Things (IoT). In Ref. , a control model using Type-2 fuzzy logic is presented to determine the intensity of water absorption applied to IoT infrastructure with sensors to measure humidity conditions. In Ref. , Type-2 fuzzy logic is applied to analyze accelerometer signals for an IoT system for driving support, showing its ability to adjust to driving expectations by collecting information about driving conditions. In Ref. , a comparison of T1FIS, IT2FIS, and GT2FIS optimized by a hierarchical genetic algorithm is shown for the combination of responses of modular granular neural networks applied to human recognition, where the achieved results prove the effectiveness of the GT2FIS when the biometric measures information has noise or have poor quality. Comparisons applied FL to time series prediction are also performed and applied to COVID-19-confirmed cases, where a fuzzy weighted average is proposed to obtain a final prediction of ensemble neural networks. The achieved results prove the advantages of the Interval Type-3 fuzzy weighted average in predicting information of complex time series. The parameters of the FIS are optimized using a Firefly Algorithm . Although there are currently many techniques for performing optimizations, GA is one of the first methods used to search for parameters and architectures, which continues to be an excellent tool for obtaining optimized parameters related to FL [20,21,22,23]. In Ref. , a method combining GA and FL is applied to improve the performance of a pump as a turbine is proposed. In Ref. , a real-coded genetic algorithm with fuzzy control is proposed, where the fuzzy inference system establishes its parameters, such as the probability of mutation, type of crossover, and population size applied to system dynamics models. In Ref. , a binary-coded genetic algorithm is proposed and applied to Magnetotelluric modeling, where each gene is used to optimize the resistivity and thickness of homogenously horizontal layers. In Ref. 
, a real-coded genetic algorithm is proposed and applied to software mutation testing; the proposed method integrates the path coverage-based testing method with the novel idea of tracing a fault detection matrix. In Ref. , a real-coded genetic algorithm is proposed and applied to optimize the Stewart platform with rotary actuators for the flight simulator mechanism. In Ref. , implementing FL in a 3D printer is proposed. The authors work on modifying its base using the direct current motor, the acquisition card, and the power stage. The results show that the optimization of values of the MF of the FIS obtained better times than other techniques. One of the main motivations of this work is to improve results obtained in previous works, where fuzzy systems were designed to classify diabetes using the PIMA Indian Diabetes dataset. The results showed the effectiveness of IT2FIS for classifying this disease. In Ref. , a real-coded GA is developed to design Type-1 FIS using five attributes of the dataset, where a comparison designing different fuzzy if-then rules was presented, demonstrating the importance of designing them. In Ref. , the design of Interval Type-2 FIS and its optimization using a GA is proposed, and the results achieved prove the effectiveness of these kinds of fuzzy systems over the Type-1 FIS applied to the PIMA Indian Diabetes dataset using the same five attributes. For both work the instances were divided into two sets: design and testing. In this work, we proposed the IT3FIS design. The novelty of the proposed method lies in the design of a general method capable of classifying by designing the IT3FIS using a percentage of instances. The design consists of establishing the ranges of the input fuzzy variables and the design of the fuzzy rules, allowing the reduction of the number of fuzzy rules, proving to be an excellent tool for classification and reducing time execution. This paper has the following structure. A description of Type-3 fuzzy logic can be found in Section 2. In Section 3, a brief description of genetic algorithms is presented. The proposed method is described in Section 4. In Section 5, the results obtained by the proposed method are presented. In Section 6, discussion and statistical tests are shown. In Section 7, our conclusions are shown. 2. Type-3 Fuzzy Logic Type-1 FL is a helpful intelligence technique that can be used to be applied to model elaborate problems. L.A. Zadeh proposed this technique in 1965 [32,33], where an element in part belongs with a particular membership grade with a crisp number between 0 and 1 to a set. An improvement of the FL was proposed in 1975: Type-2 FL . In Type-2 FL, unlike Type-1 FL, the elements do not have a crisp number [0, 1]. A fuzzy set (FS) in [0, 1] allows the definition of the MF . The description of a Type-2 fuzzy system is given by Equation (1): Ã=x,u,µ Ã(x,u)∀x∈X,∀u∈J x⊆0,1,µ à x,u∈[0,1] (1) where X represents the domain of the fuzzy variable, a primary membership is represented by J x⊆0,1, and µ Ã(x,u) defines a secondary membership (Type-1 FS). The footprint of uncertainty (FOU) represents the uncertainty region. A Type-2 MF interval occurs when µ Ã(x,u) = 1, ∀u∈J x⊆0,1. In Figure 1, the upper µ¯Ã x and lower µ _ à x MF of a Trapezoidal Type-2 MF is shown. An Interval Type-2 fuzzy set is determined as Equation (2). Ã=x,u,1∀x∈X,∀u∈J x⊆[0,1] (2) Figure 1. Trapezoidal Type-2 MF. 
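As an aside before Type-3 sets are introduced, the interval Type-2 trapezoidal set of Figure 1 can be sketched in a few lines of code: an upper and a lower trapezoid whose gap is the footprint of uncertainty. The parameterization below (separate upper and lower corner points plus a lower scale) is a simplification of ours for illustration, not the ScaleTrapScaleGaussIT3MF used later in the paper, and the numbers are hypothetical.

```python
# Minimal sketch (not the paper's code) of an interval Type-2 trapezoidal set.
def trapezoid(x, a, b, c, d):
    """Piecewise-linear trapezoidal membership value at x."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def it2_membership(x, upper_params, lower_params, lower_scale=1.0):
    """Return (upper, lower) membership at x; the gap between them is the FOU."""
    upper = trapezoid(x, *upper_params)
    lower = lower_scale * trapezoid(x, *lower_params)
    return upper, min(lower, upper)   # the lower MF never exceeds the upper MF

# Hypothetical parameters: the lower trapezoid sits inside the upper one.
print(it2_membership(4.0, upper_params=(1, 3, 7, 9),
                     lower_params=(2, 4, 6, 8), lower_scale=0.8))
```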
In Type-3, we can potentially handle higher degrees of uncertainty with respect to Type-2 due to the nature of the membership functions. A Type-3 fuzzy set (T3 FS) [37,38] is represented by the notation A(3), is the graph of a trivariate function named MF of A(3), in the cartesian product defined by Equation (3), where the primary variable of A(3) has a universe X, x. The membership function of µ A(3) is defined by µ A(3)(x,u,v), and is a Type-3 membership function of the T3 fuzzy set defined by Equation (4): µ A(3):X×0,1×0,1→[0,1] (3) A(3)=x,u x,v(x,u),µ A(3)(x,u,v)x∈X,u∈U⊆0,1,v∈V⊆[0,1] (4) where u is the secondary variable and has the universe U, and V for the tertiary variable v. A Trapezoidal Interval Type-3 MF μ~A x,u = ScaleTrapScaleGaussIT3MF with Trapezoidal F O U(A) has for the upper membership function (UMF) as parameters [p a 1,p b 1,p c 1,p d 1], and the lower membership function (LMF): p λ (LowerScale) and p l (LowerLag) to form D O U=[μ _ x,μ¯(x)]. The representation of this MF is given by Equation (5). μ~A x,u=ScaleTrapScaleGaussIT 3 MF(x,{{[p a 1,p b 1,p c 1,p d 1]},p λ,[p l 1 p l 2]]} (5) The vertical cuts A x(u) identify the F O U A, these are Interval Type-2 FS with Gaussian Interval Type-2 MF, μ A x u with parameters [σ u,m(x)] for the UMF, and for LMF: λ (LowerScale) and l (LowerLag). An illustration of a Type-3 Trapezoidal MF with a vertical cut is shown in Figure 2. This Interval Type-3 membership function is defined with the Equation (6). μ¯x=0,x<p a 1 x−p a 1 p b 1−p a 1,p a 1≤x<p b 1 1,p b 1≤x≤p c 1 p d 1−x p d 1−p c 1,p c 1p d 1 (6) Figure 2. Type-3 Trapezoidal MF with a vertical cut. The values (p a 2,p b 2,p c 2,p d 2) determine the lower membership function of the domain of uncertainty (DOU), μ _(x) is determined by the values where these are functions of the parameters (p a 1,p b 1,p c 1,p d 1) of the UMF for the domain of uncertainty, μ¯(x), and the elements of the LowerLag (l) vector. i.e., ∆r=p b 1−p a 1 p l 1 (7) ∆l=p d 1−v c 1 p l 2 (8) p a 2=p a 1+∆r (9) p b 2=p b 1+∆r (10) p c 2=p c 1−∆l (11) p d 2=p d 1−∆l (12) x=0,x<p a 2 x−p a 2 p b 2−p a 2,p a 2≤x<p b 2 1,p b 2≤x≤p c 2 p d 2−x p d 2−p c 2,p c 2p d 2 (13) The function μ(x) and the parameter λ are multiplicated to create the LMF of the domain of uncertainty, μ _(x), is described as the following: μ _ x=λ μ(x). Then, the upper and lower limits of the domain of uncertainty are represented respectively by u¯x and u _ x. The range, δ u, and radio, σ u, of the footprint of uncertainty are calculated by Equations (14) and (15). δ u=u¯x−u _ x (14) σ u=δ u 2 3+ε (15) where machine epsilon is represented by ε. Equation (16) defines the apex, m(x), of the IT3 MF μ~x,u. x=0,x<p a x−p a p b−p a,p a≤x<p b 1,p b≤x≤p c p d−x p d−p c,p cp d (16) where p a=(p a 1+p a 2)/2, p b=(p b 1+p b 2)/2, and p c=(p c 1+p c 2)/2 y p d=(p d 1+p d 2)/2. Then, the vertical cuts with Interval Type-2 MF, μ A x u=[μ _ A x u,μ¯A x u], are presented with the Equations (17) and (18). μ¯A x u=e x p−1 2 u−m(x)σ u 2 (17) μ _ A x u=s·e x p−1 2 x−m(x)σ u2 (18) where σ u=σ u ln⁡l ln⁡ε, p l=(p l 1+p l 2). If p l=0, then σ u=σ u. Then, μ¯A x u and μ _ A x u are the UMF and LMF of the vertical cuts IT2 FS of the secondary IT2 MF of the IT3 FS . Figure 3 shows a representation of this IT3 MF. Figure 3. Type-3 Trapezoidal MF. 3. Genetic Algorithms GA is based on the principles of evolution and natural selection. With this type of optimization strategy, the strongest individual survives to the following generation . 
A chromosome functions as a representation of an individual (solution), where each chromosome contains genes that represent values to be optimized. The algorithm runs for a certain amount of generation or some other stopping criteria [41,42]. The algorithm begins by creating the population with random values within the ranges established in the search space. The next steps are repeated until the maximum number of generations or the stop criteria are achieved. The population is ranked, and the individual (solution) with the best performance is protected to avoid modifications (Elitism). The next step is called selection, where a part of the population (depending on the population rate) is modified with to application of the genetic operators. In the crossover, to create a new offspring, two individuals are taken with the roles of parent to combine their genes resulting in the exchange of information between the two individuals. This genetic operator is the mechanism to ensure the reshuffle of characteristics of the parents in their children . The value of one or more genes is modified to alter the chromosome. This step is called mutation and is the main way of evolving new individuals from old ones. The next step consists of reinserting the new population the individual previously saved [44,45,46]. Two types of genetic algorithms are defined based on their coding: binary and real. A binary-coded genetic algorithm uses binary strings to represent individuals in the population. Meanwhile, a real-coded genetic algorithm uses real numbers to represent genes . In Figure 4, a representation of a basic binary-coded GA flowchart is shown. Figure 4. Flowchart of GA. 4. Proposed Method The proposed method is implemented in medical classification. In this section, the proposed method description, implementation, and datasets are presented. 4.1. Description of the Method The medical classification performed by the proposed method consists of using from 1 to n attributes to obtain a final classification. The number of attributes depends on the illness or diagnosis to be given. Each attribute represents an input in the IT3FIS. In this work, Trapezoidal MF is applied to each fuzzy input variable, using 3 MFs in each one (MF Low, MF Medium, and MF High). In this work, the Trapezoidal membership function is used due to its ability to represent a triangular membership function by joining its central points . The optimal parameters of the Type-3 Trapezoidal MF (a 1, b 1, c 1, d 1, LowerScale, and LowerLag) and the fuzzy if-then rules are designed by a real-coded GA. In Figure 5, a representation of the proposed method is shown, where a Sugeno Model is applied with n attributes to finally obtain a final result. Figure 5. The proposed method. 4.2. Datasets Description A total of 7 datasets are utilized to design and prove the proposed method to show the potential of the proposed method. In Table 1, the number of attributes and instances of each one is shown. Two sets are created using the instances: design and testing. The creation of the FIS depends on the design set, and its real behavior is verified with the testing set. Table 1. Benchmark datasets. 4.3. Application to Medical Classification The Haberman’s Survival dataset, using its 3 attributes, are presented in Figure 6 to describe the proposed method in more detail, where each attribute corresponds to each input of the fuzzy inference systems. 
The information of each instance enters its corresponding fuzzy inference system input of the Sugeno Model to obtain a classification. Figure 6. Sugeno Model Type-3 FIS applied to Haberman’s Survival. The search space is defined by an analysis of the design set. The minimum and maximum values of each attribute are calculated by Equations (19) and (20). V m i n=min⁡a t t r 1 j,a t t r 2 j,a t t r 3 j,…,a t t r i j (19) V m a x=max⁡a t t r 1 j,a t t r 2 j,a t t r 3 j,…,a t t r i j (20) where j is the corresponding attribute (from 1 to n), and i represents the number of instances used in the design set. The minimum and maximum ranges of the fuzzy input are calculated by Equations (21) and (22). R m i n=V m i n−V m i n∗0.10 (21) R m a x=V m a x+(V m a x∗0.10) (22) Equations (19)–(22) are used for each attribute of the dataset. With those calculations, all the ranges for the fuzzy inputs are automatically generated. An example of the ranges for Haberman’s Survival dataset is shown in Table 2. Table 2. Examples of ranges for the Haberman’s Survival dataset. Figure 7 shows an example of the fuzzy inputs corresponding to Haberman’s Survival dataset attributes, where its ranges can be observed. Figure 7. Fuzzy inputs for Haberman’s Survival dataset: (a) age of patient; (b) year of operation; and (c) number of positive axillary nodes detected. 4.4. Description of the GA A real-coded GA is proposed to determine the optimal parameters of Interval Type-3 FIS. The ranges previously calculated are applied to define the upper membership function of each Type-3 Trapezoidal MF, which in turn are utilized to establish the search space. This means that depending on the dataset, the values will chang. In Table 3, the rest of the parameters are shown. Table 3. Search space of the GA. To calculate the chromosome size, first the number of total rules must be calculated by Equation (23). T R=n 3 (23) where n is the number of attributes of the dataset to perform the classification. Once the number of rules has been calculated, the size can be calculated by Equation (24). s i z e=21∗n+3+(T R∗2) (24) The chromosome size depends on the number of attributes (n). This number is multiplied by 21 because each Type-3 Trapezoidal MF has seven parameters (p a1, p b1, p c1, p d1, p λ, p l 1, and p l 2) and there are three Type-3 Trapezoidal MF in each fuzzy input. Three membership functions are used because previous works have shown that this number of membership functions allows good results for classification problems [30,31], as well as in other applications . The three constants of the output are added to this multiplication, and finally, the multiplication of TR by 2 (consequents and activation status). For example, Haberman’s Survival dataset has three attributes, n = 3. Therefore, TR = 27, so the chromosome size is 119 genes. A representation of the chromosome applied for the Interval Type-3 FIS design is shown in Figure 8. Figure 8. Chromosome representation. We can summarize the process in Figure 9 with 5 main steps: Figure 9. Summary of the creation and evaluation process for classification. The dataset is divided into design and testing sets. The range of the fuzzy input and the maximum number of fuzzy rules are established based on the design test, the search space of the GA is determined, and the individuals are established with random values. Each individual allows the design of each fuzzy inference system. The parameters of the membership functions are established. 
The same individual is allowed to know if a fuzzy rule will be added to the fuzzy inference system using the genes assigned to this task. The values of these genes are values between 0 and 1. If the value is equal to or less than 0.5, the fuzzy rules are omitted, and if the value is greater than 0.5, the fuzzy rules are added, and a consequent is assigned. When the fuzzy inference system is fully designed, the testing set is used to prove the FIS, where each instance is evaluated for the fuzzy inference system, and the resulting value determines its class. An illustration of the schematic of the GA applied to design Type-3 FIS for medical classification is shown in Figure 10. Figure 10. Flowchart of the proposed GA. In this work, a real-coded GA is applied to design the Interval Type-3 FIS, and its configuration is as follows: as the selection method is Tournaments, a mutation rate of 0.2 with a single point crossover. The parameters established for the genetic algorithm are based on previous works where genetic algorithms have been used to optimize fuzzy inference systems . The proposed GA seeks the minimization of the classification error. In this work, Equation (25) is used to calculate the accuracy. A c c u r a c c y=T P+T N T P+F P+T N+F N (25) where False Positive, True Positive, False Negative, and True Negative are represented as FP, TP, FN, and TN, respectively. The Equation (26) provides the objective function applied by the GA: f=1−T P+T N T P+F P+T N+F N (26) 5. Experimental Results The results obtained with the first two datasets are shown in this section. The configuration of the number of iterations was established for comparison purposes. The summary and the results of the other datasets are presented in Section 6. In Section 6.1, comparisons with other works are shown. 5.1. Haberman’s Survival Dataset Results For Haberman’s Survival dataset, 60% of the instances were used in the design phase, leaving the rest (40%) to prove the real behavior of the Type-3 FIS. In total, 30 experiments were performed using the proposed GA. Figure 11 shows the inputs with the best results in the testing phase. The Type-3 fuzzy inference achieves in this phase 77.05% of accuracy. Figure 11. Best Type-3 fuzzy inputs variables for Haberman’s Survival dataset: (a) age of patient; (b) year of operation; and (c) number of positive axillary nodes detected. Table 4 shows the fuzzy if-then rules achieved for Haberman’s Survival dataset. Initially, for this dataset, the FIS must have 27 fuzzy if-then rules. However, the proposed method allowed obtain a better result with only 13 fuzzy if-then rules. Table 4. Fuzzy if-then rules (Haberman’s Survival dataset). As Table 5 shows, in the design phase, better results are obtained because the proposed GA allowed designing the Type-3 FIS with a part of the instances. However, an important part is the real behavior of the Type-3 FIS, with instances not used in the design phase. The convergence average of the evolutions is shown in Figure 12. For each experiment, 30 generations were used. Figure 12. Average convergence for Haberman’s Survival dataset. Table 5. Haberman’s Survival dataset results. 5.2. PIMA Indian Diabetes Dataset Results For the PIMA Indian Diabetes dataset, 70% of the instances were used in the design phase, leaving the rest (30%) to prove the real behavior of the Type-3 FIS. In total, 30 experiments were performed using the proposed GA with 1000 generations, each experiment using only 5 of 8 attributes. 
As Table 6 shows, in the design phase, better results are obtained because the proposed GA allowed designing the Type-3 FIS with a part of the instances. However, the Interval Type-3 FIS obtained allowed to have good results with instances not used in the design phase. Figure 13 shows the inputs with the best results in the testing phase, where 86.38% of accuracy is achieved. Figure 13. Best Type-3 fuzzy inputs variables for PIMA Indian Diabetes dataset: (a) glucose; (b) blood pressure; (c) body mass index; (d) diabetes pedigree function; and (e) age. Table 6. PIMA Indian Diabetes results. Table 7 shows the fuzzy if-then rules achieved for the PIMA Indian Diabetes dataset. Initially, for this dataset, the FIS must have 243 fuzzy if-then rules. However, the proposed method allowed obtain a better result with only 27 fuzzy if-then rules. The convergence average is shown in Figure 14. Figure 14. Average convergence for PIMA Indian Diabetes dataset. Table 7. Fuzzy if-then rules (PIMA Indian Diabetes dataset). 6. Discussion Experiments with and without cross-validation were carried out to evaluate the performance of the proposed method. Table 8 shows the best and average results achieved in both phases (design and testing) with corresponding standard deviations. Table 8 summarizes the results achieved with all the datasets used in this work. As Table 8 shows, for Cryotherapy, Immunotherapy, and Breast Cancer Coimbra databases, the cross-validation allowed for improving the percentage of accuracy in both phases (design and testing). Table 8. Summary results. In Table 9, the comparison of the results obtained with the proposed method and the results obtained in a previous work using five attributes of the PIMA Indian Diabetes is shown. These results were achieved using T1FIS and IT2FIS and its comparison with the proposed method (IT3FIS). Table 9. PIMA Indian Diabetes results (5 Attributes). It can be observed that the best result obtained by the proposed method does not overcome the best results previously obtained by IT2FIS, but the average was surpassed by a large difference for both phases. Table 10 shows the average times of the experiments performed with the genetic algorithm for each dataset with the different validations. It can be observed how the use of cross-validation increases the time. It is also important to mention that the number of generations is significant and impacts the amount of time, as in the case of the PIMA Indian dataset with five attributes, where 1000 generations were used for its execution. Table 10. Average time of the evolutions. Figure 15 shows the average number of fuzzy rules generated by the genetic algorithm for each dataset with their number of attributes. It can be seen how the 5- and 10-fold cross-validation helped reduce the number of rules for Haberman’s Survival and Breast Cancer Coimbra datasets. In some other cases, only one of the cross-validations allowed a reduction in the number of fuzzy rules, as in the case of the Cryotherapy, Immunotherapy, and Indian Liver datasets. The number of fuzzy rules increased with cross-validations only for the PIMA Indian Diabetes dataset. The proposed method allows the maintenance of an appropriate number of fuzzy rules independently of the number of attributes. In general, the contribution of the optimization of fuzzy rules is essential because their design collaborates with the increase in the percentage of accuracy. Figure 15. Average of fuzzy rules generated by the genetic algorithm. 
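For readers who want to reproduce the kind of search whose behavior is summarized above, a generic real-coded GA loop with the ingredients described in Sections 3 and 4.4 (elitism, tournament selection, single-point crossover, mutation rate 0.2) might look as follows. This is a sketch under our own simplifications, not the authors' implementation: the fitness argument stands in for the classification-error objective of Equation (26), and the bounds, population size, and generation count are placeholders.

```python
import random

def genetic_algorithm(fitness, n_genes, bounds, pop_size=30, generations=30,
                      mutation_rate=0.2, tournament_k=3):
    """Generic real-coded GA: elitism, tournament selection, single-point
    crossover, and per-gene mutation.  Minimizes `fitness` (e.g. 1 - accuracy
    of a fuzzy classifier decoded from the chromosome)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_genes)] for _ in range(pop_size)]

    def tournament():
        return min(random.sample(pop, tournament_k), key=fitness)

    for _ in range(generations):
        elite = min(pop, key=fitness)                      # elitism: best survives unchanged
        children = [elite]
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()            # selection
            cut = random.randint(1, n_genes - 1)           # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [random.uniform(lo, hi) if random.random() < mutation_rate else g
                     for g in child]                       # mutation
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy usage: minimize a quadratic as a stand-in for the classification error.
best = genetic_algorithm(lambda c: sum((g - 0.3) ** 2 for g in c),
                         n_genes=5, bounds=(0.0, 1.0))
print(best)
```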
The complexity of a genetic algorithm can be established based on the number of iterations and the number of individuals. For the proposed method, the complexity of evaluating each individual lies in the use of Type-3 fuzzy inference systems. The complexity of a Type-3 fuzzy model was described in , where it was determined that the complexity based on the vertical-slices theory for centroid type reduction is approximately O(NKL), where it is assumed that the primary variable x is sampled at N points, K is the number of iterations needed to approximate a switch point, and L is the number of samples per vertical slice. The complexity is thus reduced from exponential to linear.

6.1. Statistical Comparison

The results shown above are used to carry out statistical tests that determine whether the proposed method provides a significant advantage over other methods. Table 11 shows the parameters used to perform the statistical Z-tests and t-tests presented in this section.

Table 11. Tests parameters.

Table 12 shows the comparison between the proposed method (IT3FIS) and the T1FIS and IT2FIS developed in a previous work. Since the Z-values exceed the critical value, H0 is rejected: there is enough evidence to affirm that the proposed method is better than T1FIS and IT2FIS applied to the PIMA Indian Diabetes dataset using only five attributes and 30% of the instances for the evaluation of the designed FIS.

Table 12. Values of Z-test for PIMA Indian Diabetes with 5 attributes.

The results presented in Refs. [15,16] show the effectiveness of GT2FIS over T1FIS and IT2FIS; for this reason, the statistical comparison here is performed directly between GT2FIS and IT3FIS. Table 13 presents the results achieved using 40% of the instances for evaluating the designed FIS. The z-values show the improvement provided by IT3FIS in all the datasets except Indian Liver and Breast Cancer Coimbra, where the results obtained by the proposed method are better than those of GT2FIS but without enough statistical evidence.

Table 13. Values of Z-tests using 40% of the instances for validation of the FIS.

Table 14 presents the results achieved using 20% of the instances for evaluating the designed FIS with 5-fold cross-validation. The t-values show the improvement provided by the Interval Type-3 FIS in all the datasets, indicating that the proposed method is better than the General Type-2 FIS with enough statistical evidence.

Table 14. Values of t-tests using 20% of the instances for validation of the FIS with 5 cross-validations.

The results achieved using 10% of the instances for evaluating the designed FIS with 10-fold cross-validation are presented in Table 15. The t-values show the improvement provided by the Interval Type-3 FIS in all the datasets except Breast Cancer Coimbra, where the results obtained by the proposed method are better than those of GT2FIS but without enough statistical evidence.

Table 15. Values of t-tests using 10% of the instances for validation of the FIS with 10 cross-validations.

7. Conclusions

This paper proposes the design of Interval Type-3 fuzzy inference systems using a GA applied to medical classification. The GA seeks the main parameters of the fuzzy inference system, such as the MF parameters and the fuzzy if-then rules. Type-3 trapezoidal MFs are used for each input of the FIS; the design of these MFs is based on their LowerScale and LowerLag.
An important contribution of our method is the automatic establishment of the ranges of the fuzzy variables from the set of instances used for the design, which allows the proposed method to be applied to different databases with different numbers of attributes (inputs of the FIS). The results achieved in this work improve on results achieved by other methods based on fuzzy logic. For the medical datasets Haberman’s Survival, Cryotherapy, Immunotherapy, PIMA Indian Diabetes, Indian Liver, and Breast Cancer Coimbra, average testing accuracies of 75.30%, 87.13%, 82.04%, 77.76%, 71.86%, and 71.06% were achieved, respectively. Cross-validation tests were also carried out using 5- and 10-fold, where for the Cryotherapy, Immunotherapy, and Breast Cancer Coimbra databases the cross-validation improved the accuracy percentage in both phases (design and testing). Statistical tests were performed, and the Z-test demonstrates the effectiveness of the proposed method over the General Type-2 FIS in all the datasets except Indian Liver and Breast Cancer Coimbra. t-tests were applied to validate the behavior of the proposed method with the cross-validation tests: with 5-fold cross-validation the proposed method achieved better results in all datasets, while with 10-fold cross-validation only for Breast Cancer Coimbra is there no statistically significant difference. In future work, the Type-3 FIS design proposed here will be applied to other areas, such as integration methods for pattern recognition, edge detection in images, or control problems, to demonstrate the adaptability of the proposed method.

Author Contributions: Methodology and validation, P.M.; software, validation and writing, D.S.; conceptualization and writing—review and editing, O.C. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable. This study does not contain any studies with human participants or animals.

Informed Consent Statement: Not applicable. This study does not contain any studies with human participants.

Data Availability Statement: Publicly available datasets were analyzed in this study. These data can be found at (accessed on 16 August 2023).

Acknowledgments: We would like to thank TecNM and Conacyt for their support during the realization of this research.

Conflicts of Interest: The authors declare no conflicts of interest.

References

Bharati, S.; Podder, P.; Mondal, M.; Prasath, V. CO-ResNet: Optimized ResNet model for COVID-19 diagnosis from X-ray images. Int. J. Hybrid Intell. Syst.2021, 17, 71–85. [Google Scholar] [CrossRef] Elhag, A.; Aloafi, T.; Jawa, T.; Sayed-Ahmed, N.; Bayones, F. Artificial neural networks and statistical models for optimization studying COVID-19. Results Phys.2021, 25, 104274. [Google Scholar] [CrossRef] [PubMed] Bashkandi, A.; Sadoughi, K.; Aflaki, F.; Alkhazaleh, H.; Mohammadi, H.; Jimenez, G. Combination of political optimizer, particle swarm optimizer, and convolutional neural network for brain tumor detection. Biomed. Signal Process. Control2023, 81, 104434. [Google Scholar] [CrossRef] Gangwar, A.; Ravi, V. Diabetic retinopathy detection using transfer learning and Deep Learning. In Evolution in Computational Intelligence, 1st ed.; Bhateja, V., Peng, S.L., Satapathy, S.C., Zhang, Y.D., Eds.; Springer: London, UK, 2020; Volume 1176, pp. 679–689. [Google Scholar] Nassiri, S.; Tahavoor, A.; Jafari, A.
Fuzzy logic classification of mature tomatoes based on physical properties fusion. Inf. Process Agric.2021, 9, 547–555. [Google Scholar] [CrossRef] Hamza, M. Modified Flower Pollination Optimization Based Design of Interval Type-2 Fuzzy PID Controller for Rotary Inverted Pendulum System. Axioms2023, 12, 586. [Google Scholar] [CrossRef] Melin, P.; Sánchez, D.; Castro, J.; Castillo, O. Design of Type-3 Fuzzy Systems and Ensemble Neural Networks for COVID-19 Time Series Prediction Using a Firefly Algorithm. Axioms2022, 11, 410. [Google Scholar] [CrossRef] Melin, P.; Sánchez, D. Optimization of type-1, interval type-2 and general type-2 fuzzy inference systems using a hierarchical genetic algorithm for modular granular neural networks. Granul. Comput.2018, 4, 211–236. [Google Scholar] [CrossRef] Tabakov, M.; Chlopowiec, A.; Chlopowiec, A.; Dlubak, A. Classification with Fuzzification Optimization Combining Fuzzy Information Systems and Type-2 Fuzzy Inference. Appl. Sci.2021, 11, 3484. [Google Scholar] [CrossRef] Vlamou, E.; Papadopoulos, B. Fuzzy logic systems and medical applications. AIMS Neurosci.2019, 6, 266–272. [Google Scholar] Czmil, A. Comparative Study of Fuzzy Rule-Based Classifiers for Medical Applications. Sensors2023, 23, 992. [Google Scholar] [CrossRef] Aamir, K.; Sarfraz, L.; Ramzan, M.; Bilal, M.; Shafi, J.; Attique, M. A Fuzzy Rule-Based System for Classification of Diabetes. Sensors2021, 21, 8095. [Google Scholar] [CrossRef] [PubMed] Filis, S.; Al-Kasasbeh, R.; Shatalova, O.; Korenevskiy, N.; Shaqadan, A.; Protasova, Z.; Ilyash, M.; Lukashov, M. Biotechnical system based on fuzzy logic prediction for surgical risk classification using analysis of current-voltage characteristics of acupuncture points. J. Integr. Med.2022, 20, 252–264. [Google Scholar] [CrossRef] [PubMed] Zaitseva, E.; Levashenko, V.; Rabcan, J.; Kvassay, M. A New Fuzzy-Based Classification Method for Use in Smart/Precision Medicine. Bioengineering2023, 10, 838. [Google Scholar] [CrossRef] [PubMed] Ontiveros, E.; Melin, P.; Castillo, O. Comparative study of interval Type-2 and general Type-2 fuzzy systems in medical diagnosis. Inf. Sci.2020, 525, 37–53. [Google Scholar] [CrossRef] Ontiveros-Robles, E.; Castillo, O.; Melin, P. Towards asymmetric uncertainty modeling in designing General Type-2 Fuzzy classifiers for medical diagnosis. Expert Syst. Appl.2021, 183, 115370. [Google Scholar] [CrossRef] Shi, J. A unified general type-2 fuzzy PID controller and its comparative with type-1 and interval type-2 fuzzy PID controller. Asian J. Control2022, 24, 1808–1824. [Google Scholar] [CrossRef] Woźniak, M.; Szczotka, J.; Sikora, A.; Zielonka, A. Fuzzy logic type-2 intelligent moisture control system. Expert Syst. Appl.2024, 238, 121581. [Google Scholar] [CrossRef] Woźniak, M.; Zielonka, A.; Sikora, A. Driving support by type-2 fuzzy logic control model. Expert Syst. Appl.2022, 207, 117798. [Google Scholar] [CrossRef] Chowdhury, D.; Hovda, S. A hybrid fuzzy logic/genetic algorithm model based on experimental data for estimation of cuttings concentration during drilling. Geoenergy Sci. Eng.2023, 231, 212387. [Google Scholar] [CrossRef] Fan, L.-p.; Chen, X.-m. Optimization of Controller for Microbial Fuel Cell: Comparison between Genetic Algorithm and Fuzzy Logic. Int. J. Electrochem. Sci.2021, 16, 211123. [Google Scholar] [CrossRef] Azizan, F.; Sathasivam, S.; Majahar Ali, M.; Roslan, N.; Feng, C. Hybridised Network of Fuzzy Logic and a Genetic Algorithm in Solving 3-Satisfiability Hopfield Neural Networks. 
Axioms2023, 12, 250. [Google Scholar] [CrossRef] Schockenhoff, F.; Zähringer, M.; Brönner, M.; Lienkamp, M. Combining a Genetic Algorithm and a Fuzzy System to Optimize User Centricity in Autonomous Vehicle Concept Development. Systems2021, 9, 25. [Google Scholar] [CrossRef] Zhang, F.; Lu, J.; Yang, S.; Liu, W.; Tao, R.; Zhu, D.; Xiao, R. Performance improvement of a pump as turbine in storage mode by optimization design based on genetic algorithm and fuzzy logic. J. Energy Storage2023, 62, 106875. [Google Scholar] [CrossRef] Beklaryan, G.; Akopov, A.; Khachatryan, N. Optimisation of System Dynamics Models Using a Real-Coded Genetic Algorithm with Fuzzy Control. Cybern. Inf. Technol.2019, 19, 87–103. [Google Scholar] [CrossRef] Wijanarko, E.; Grandis, H. Binary Coded Genetic Algorithm (BCGA) with Multi-Point Cross-Over for Magnetotelluric (MT) 1D Data Inversion. IOP Conf. Ser. Earth Environ. Sci.2019, 318, 012029. [Google Scholar] [CrossRef] Mishra, D.; Acharya, B.; Rath, D.; Gerogiannis, V.; Kanavos, A. A Novel Real Coded Genetic Algorithm for Software Mutation Testing. Symmetry2022, 14, 1525. [Google Scholar] [CrossRef] Petrašinović, M.; Grbović, A.; Petrašinović, D.; Petrović, M.; Raičević, N. Real Coded Mixed Integer Genetic Algorithm for Geometry Optimization of Flight Simulator Mechanism Based on Rotary Stewart Platform. Appl. Sci.2022, 12, 7085. [Google Scholar] [CrossRef] Torres-Salinas, H.; Rodríguez-Reséndiz, J.; Cruz-Miguel, E.; Ángeles-Hurtado, L. Fuzzy Logic and Genetic-Based Algorithm for a Servo Control System. Micromachines2022, 13, 586. [Google Scholar] [CrossRef] Mónica, J.; Melin, P.; Sánchez, D. Optimal Design of a Fuzzy System with a Real-Coded Genetic Algorithm for Diabetes Classification. In Proceedings of the International Conference on Hybrid Intelligent Systems, Online, 14–16 December 2020. [Google Scholar] Melin, P.; Sánchez, D. Optimal design of type-2 fuzzy systems for diabetes classification based on genetic algorithms. Int. J. Hybrid Intell. Syst.2021, 17, 15–32. [Google Scholar] [CrossRef] Zadeh, L. Fuzzy sets. Inf. Control.1965, 8, 338–353. [Google Scholar] [CrossRef] Zadeh, L. Some reflections on soft computing, granular computing and their roles in the conception, design and utilization of information/intelligent systems. Soft Comput.1998, 2, 23–25. [Google Scholar] [CrossRef] Zadeh, L. The concept of a linguistic variable and its application to approximate reasoning. Inf. Sci.1975, 8, 199–249. [Google Scholar] [CrossRef] Al-Jamimi, H.; Saleh, T. Transparent predictive modelling of catalytic hydrodesulfurization using an interval type-2 fuzzy logic. J. Clean. Prod.2019, 231, 1079–1088. [Google Scholar] [CrossRef] Melin, P.; Castillo, O. A review on type-2 fuzzy logic applications in clustering, classification and pattern recognition. Applied Soft Comput.2014, 21, 568–577. [Google Scholar] [CrossRef] Rickard, J.; Aisbett, J.; Gibbon, G. Fuzzy subsethood for fuzzy sets of type-2 and generalized type-n. IEEE Trans. Fuzzy Syst.2009, 17, 50–60. [Google Scholar] [CrossRef] Mohammadzadeh, A.; Sabzalian, M.; Zhang, W. An Interval Type-3 Fuzzy System and a New Online Fractional-Order Learning Algorithm: Theory and Practice. IEEE Trans. Fuzzy Syst.2020, 28, 1940–1950. [Google Scholar] [CrossRef] Castillo, O.; Castro, J.; Melin, P. Interval Type-3 Fuzzy Systems: Theory and Design, 1st ed.; Springer: London, UK, 2022. [Google Scholar] Brabazon, A.; O’Neill, M.; McGarraghy, S. Natural Computing Algorithms, 1st ed.; Springer: London, UK, 2015. 
[Google Scholar] Eiben, A.; Smith, J. Introduction to Evolutionary Computing, 2nd ed.; Springer: London, UK, 2015. [Google Scholar] Gestal, M.; Rivero, D.; Pazos, A. Genetic Algorithms: Key Concepts and Examples, 1st ed.; Lambert Academic Publishing: Saarbrücken, Germany, 2010. [Google Scholar] Goldberg, D. Genetic Algorithms in Search Optimization and Machine Learning, 1st ed.; Addison-Wesley: Boston, MA, USA, 1989. [Google Scholar] Man, K.; Tang, K.; Kwong, S. Genetic Algorithms: Concepts and Designs, 1st ed.; Springer: London, UK, 1999. [Google Scholar] Roy, S. Introduction to Soft Computing: Neuro-Fuzzy and Genetic Algorithms, 1st ed.; Pearson: London, UK, 2017. [Google Scholar] Kramer, O. Genetic Algorithm Essentials, 1st ed.; Springer: London, UK, 2017. [Google Scholar] Amador-Angulo, L.; Castillo, O.; Castro, J.; Melin, P. A New Approach for Interval Type-3 Fuzzy Control of Nonlinear Plants. Int. J. Fuzzy Syst.2023, 25, 1624–1642. [Google Scholar] [CrossRef]

Figure 1. Trapezoidal Type-2 MF.
Figure 2. Type-3 Trapezoidal MF with a vertical cut.
Figure 3. Type-3 Trapezoidal MF.
Figure 4. Flowchart of GA.
Figure 5. The proposed method.
Figure 6. Sugeno Model Type-3 FIS applied to Haberman’s Survival.
Figure 7. Fuzzy inputs for Haberman’s Survival dataset: (a) age of patient; (b) year of operation; and (c) number of positive axillary nodes detected.
Figure 8. Chromosome representation.
Figure 9. Summary of the creation and evaluation process for classification.
Figure 10. Flowchart of the proposed GA.
Figure 11. Best Type-3 fuzzy input variables for Haberman’s Survival dataset: (a) age of patient; (b) year of operation; and (c) number of positive axillary nodes detected.
Figure 12. Average convergence for Haberman’s Survival dataset.
Figure 13. Best Type-3 fuzzy input variables for PIMA Indian Diabetes dataset: (a) glucose; (b) blood pressure; (c) body mass index; (d) diabetes pedigree function; and (e) age.
Figure 14. Average convergence for PIMA Indian Diabetes dataset.
Figure 15. Average of fuzzy rules generated by the genetic algorithm.

Table 1. Benchmark datasets.

| Dataset | Attributes | Instances |
| :---: | :---: | :---: |
| Haberman’s Survival | 3 | 306 |
| PIMA Indian Diabetes | 5 and 7 | 336 |
| Cryotherapy | 7 | 90 |
| Immunotherapy | 8 | 90 |
| PIMA Indian Diabetes | 8 | 768 |
| Indian Liver | 9 | 583 |
| Breast Cancer Coimbra | 10 | 116 |

Table 2. Examples of ranges for the Haberman’s Survival dataset.

| Attributes | R min | R max |
| :---: | :---: | :---: |
| Age (attr1) | 27 | 73.70 |
| Op_Year (attr2) | 52.20 | 75.9 |
| Axil_Nodes (attr3) | 0 | 57.2 |

Table 3. Search space of the GA.

| Parameters | Minimum | Maximum |
| :---: | :---: | :---: |
| Trapezoidal MFs (a1, b1, c1, d1) | - | - |
| LowerScale (λ) | 0.1 | 0.9 |
| LowerLag (l1, l2) | 0.1 | 0.9 |
| Output | 0 | 1 |
| Fuzzy if-then rules: Consequents | 1 | 3 |
| Fuzzy if-then rules: Activation or deactivation | 0 | 1 |

Table 4. Fuzzy if-then rules (Haberman’s Survival dataset).
| Rule | Age | Op_Year | Axil_Nodes | Output (Consequent) |
| :---: | :---: | :---: | :---: | :---: |
| 1 | MF Low | MF Low | MF High | MF High |
| 2 | MF Low | MF Medium | MF Low | MF Low |
| 3 | MF Low | MF Medium | MF Medium | MF Medium |
| 4 | MF Low | MF Medium | MF High | MF Medium |
| 5 | MF Low | MF High | MF Low | MF Low |
| 6 | MF Medium | MF Low | MF Low | MF Medium |
| 7 | MF Medium | MF Medium | MF Low | MF Medium |
| 8 | MF Medium | MF High | MF Medium | MF High |
| 9 | MF High | MF Low | MF Low | MF Low |
| 10 | MF High | MF Medium | MF Low | MF Medium |
| 11 | MF High | MF Medium | MF Medium | MF Medium |
| 12 | MF High | MF Medium | MF High | MF High |
| 13 | MF High | MF High | MF Low | MF Medium |

Table 5. Haberman’s Survival dataset results.

| Design (Best) | Design (Average) | Testing (Best) | Testing (Average) |
| :---: | :---: | :---: | :---: |
| 79.89% | 77.14% | 77.05% | 75.30% |

Table 6. PIMA Indian Diabetes results.

| Design (Best) | Design (Average) | Testing (Best) | Testing (Average) |
| :---: | :---: | :---: | :---: |
| 86.38% | 83.65% | 83.17% | 81.52% |

Table 7. Fuzzy if-then rules (PIMA Indian Diabetes dataset).

| Rule | Glucose | BP | BMI | DPF | AGE | Output (Consequent) |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 1 | MF Low | MF Medium | MF Low | MF Medium | MF High | MF Low |
| 2 | MF Low | MF Medium | MF Low | MF High | MF Medium | MF Low |
| 3 | MF Low | MF Medium | MF Medium | MF Medium | MF High | MF High |
| 4 | MF Low | MF Medium | MF Medium | MF High | MF Medium | MF High |
| 5 | MF Medium | MF Low | MF Medium | MF Medium | MF Medium | MF High |
| 6 | MF Medium | MF Medium | MF Low | MF Medium | MF Low | MF Low |
| 7 | MF Medium | MF Medium | MF Low | MF High | MF Low | MF Medium |
| 8 | MF Medium | MF Medium | MF Medium | MF Medium | MF Low | MF High |
| 9 | MF Medium | MF Medium | MF Medium | MF Medium | MF Medium | MF Medium |
| 10 | MF Medium | MF High | MF Medium | MF Low | MF Medium | MF High |
| 11 | MF High | MF Low | MF Low | MF High | MF High | MF High |
| 12 | MF High | MF Medium | MF Low | MF Medium | MF Medium | MF High |
| 13 | MF High | MF Medium | MF Medium | MF Low | MF Low | MF High |
| 14 | MF High | MF Medium | MF Medium | MF Medium | MF High | MF High |
| 15 | MF High | MF Medium | MF High | MF Low | MF Medium | MF Medium |
| 16 | MF High | MF High | MF Medium | MF Low | MF Low | MF Medium |
| 17 | MF High | MF High | MF Medium | MF Low | MF Low | MF Medium |
| 18 | MF High | MF High | MF Medium | MF Low | MF Low | MF Medium |
| 19 | MF High | MF High | MF Medium | MF Low | MF Low | MF Medium |
| 20 | MF High | MF Medium | MF Medium | MF High | MF Medium | MF Medium |
| 21 | MF High | MF Medium | MF Medium | MF High | MF High | MF Medium |
| 22 | MF High | MF High | MF High | MF Low | MF High | MF Medium |
| 23 | MF High | MF Medium | MF High | MF High | MF Medium | MF High |
| 24 | MF High | MF High | MF High | MF Low | MF Medium | MF High |
| 25 | MF High | MF High | MF Medium | MF Medium | MF Low | MF Medium |
| 26 | MF High | MF High | MF Medium | MF Medium | MF Medium | MF Medium |
| 27 | MF High | MF High | MF High | MF Medium | MF Medium | MF Low |

Table 8. Summary results.
| Dataset | k-Fold | % Design Set | % Testing Set | Design Mean | Design Std Dev | Testing Mean | Testing Std Dev |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Haberman’s Survival | - | 60 | 40 | 77.14 | 1.3906 | 75.30 | 0.6361 |
| Haberman’s Survival | 5 | 80 | 20 | 76.78 | 1.3645 | 76.69 | 1.1509 |
| Haberman’s Survival | 10 | 90 | 10 | 76.55 | 0.8739 | 77.73 | 2.0233 |
| Cryotherapy | - | 60 | 40 | 85.12 | 4.0829 | 87.13 | 1.5446 |
| Cryotherapy | 5 | 80 | 20 | 85.42 | 3.1570 | 86.67 | 2.0286 |
| Cryotherapy | 10 | 90 | 10 | 89.17 | 2.1510 | 89.67 | 1.6605 |
| Immunotherapy | - | 60 | 40 | 84.26 | 2.4194 | 82.04 | 2.2759 |
| Immunotherapy | 5 | 80 | 20 | 84.86 | 1.8391 | 85.56 | 2.0951 |
| Immunotherapy | 10 | 90 | 10 | 84.40 | 2.3520 | 83.78 | 1.8295 |
| PIMA Indian Diabetes | - | 60 | 40 | 74.26 | 1.4760 | 77.76 | 1.1085 |
| PIMA Indian Diabetes | 5 | 80 | 20 | 74.11 | 1.7514 | 77.18 | 0.3064 |
| PIMA Indian Diabetes | 10 | 90 | 10 | 74.46 | 1.8471 | 77.66 | 1.7584 |
| Indian Liver | - | 60 | 40 | 72.45 | 0.7416 | 71.86 | 0.4014 |
| Indian Liver | 5 | 80 | 20 | 71.85 | 0.4863 | 72.40 | 0.6870 |
| Indian Liver | 10 | 90 | 10 | 71.93 | 0.7086 | 72.26 | 0.9167 |
| Breast Cancer Coimbra | - | 60 | 40 | 63.19 | 4.5129 | 71.06 | 2.1350 |
| Breast Cancer Coimbra | 5 | 80 | 20 | 66.19 | 3.1440 | 72.78 | 2.2471 |
| Breast Cancer Coimbra | 10 | 90 | 10 | 75.58 | 1.2419 | 74.91 | 1.4342 |
| PIMA Indian Diabetes (5 attributes) | - | 70 | 30 | 83.65 | 0.9194 | 81.52 | 1.1133 |

Table 9. PIMA Indian Diabetes results (5 Attributes).

| Method | Design (Best) | Design (Average) | Testing (Best) | Testing (Average) |
| :---: | :---: | :---: | :---: | :---: |
| T1FIS | 83.44% | 81.46% | 80.20% | 76.34% |
| IT2FIS | 86.68% | 82.45% | 83.17% | 78.68% |
| IT3FIS | 86.38% | 83.65% | 83.17% | 81.52% |

Table 10. Average time of the evolutions (hh:mm:ss).

| Dataset | Hold-Out | 5-Fold | 10-Fold |
| :---: | :---: | :---: | :---: |
| Haberman’s Survival | 00:04:22 | 00:16:36 | 00:32:54 |
| Cryotherapy | 00:02:54 | 00:11:08 | 00:24:11 |
| Immunotherapy | 00:02:32 | 00:10:41 | 00:21:04 |
| PIMA Indian Diabetes | 00:24:46 | 01:43:04 | 03:50:29 |
| Indian Liver | 00:23:32 | 02:04:30 | 03:38:27 |
| Breast Cancer Coimbra | 00:05:45 | 00:23:34 | 00:45:00 |
| PIMA Indian Diabetes (5 attributes) | 05:24:34 | - | - |

Table 11. Tests parameters.

| Parameter | Value |
| :---: | :---: |
| Significance | 0.95 |
| H0 | µ1 = µ2 |
| H1 | µ1 > µ2 |
| Critical Value (Z-test/T-test) | 1.645/1.812 |

Table 12. Values of Z-test for PIMA Indian Diabetes with 5 attributes.

| Method | N | Mean | Std Dev | z-Value | p-Value |
| :---: | :---: | :---: | :---: | :---: | :---: |
| IT3FIS | 30 | 81.52 | 1.1133 | 9.7750 | 1.36 × 10^−21 |
| T1FIS | 30 | 76.34 | 2.6814 | | |
| IT3FIS | 30 | 81.52 | 1.1133 | 6.6860 | 1.54 × 10^−10 |
| IT2FIS | 30 | 78.68 | 2.0429 | | |

Table 13. Values of Z-tests using 40% of the instances for validation of the FIS.

| Dataset | Method | N | Mean | Std Dev | z-Value | p-Value |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Haberman’s Survival | IT3FIS | 30 | 75.30 | 0.6361 | 2.8115 | 0.0025 |
| Haberman’s Survival | GT2FIS | 30 | 74.01 | 2.4285 | | |
| Cryotherapy | IT3FIS | 30 | 87.13 | 1.5446 | 2.0482 | 0.0203 |
| Cryotherapy | GT2FIS | 30 | 85.52 | 4.0122 | | |
| Immunotherapy | IT3FIS | 30 | 82.04 | 2.2759 | 2.2462 | 0.0123 |
| Immunotherapy | GT2FIS | 30 | 78.79 | 7.5888 | | |
| PIMA Indian Diabetes | IT3FIS | 30 | 77.76 | 1.1085 | 2.5702 | 0.0051 |
| PIMA Indian Diabetes | GT2FIS | 30 | 76.60 | 2.2156 | | |
| Indian Liver | IT3FIS | 30 | 71.86 | 0.4014 | 0.7714 | 0.2202 |
| Indian Liver | GT2FIS | 30 | 71.48 | 2.6631 | | |
| Breast Cancer Coimbra | IT3FIS | 30 | 71.06 | 2.1350 | 0.7483 | 0.2271 |
| Breast Cancer Coimbra | GT2FIS | 30 | 69.87 | 8.4725 | | |

Table 14. Values of t-tests using 20% of the instances for validation of the FIS with 5 cross-validations.
| Dataset | Method | N | Mean | Std Dev | t-Value | p-Value |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Haberman’s Survival | IT3FIS | 10 | 76.69 | 1.1509 | 6.1340 | 3.69 × 10^−5 |
| Haberman’s Survival | GT2FIS | 10 | 74.30 | 0.4450 | | |
| Cryotherapy | IT3FIS | 10 | 86.67 | 2.0286 | 2.6674 | 0.0081 |
| Cryotherapy | GT2FIS | 10 | 83.89 | 2.594 | | |
| Immunotherapy | IT3FIS | 10 | 85.56 | 2.0951 | 18.5907 | 4.54 × 10^−12 |
| Immunotherapy | GT2FIS | 10 | 71.05 | 1.3027 | | |
| PIMA Indian Diabetes | IT3FIS | 10 | 77.18 | 0.3064 | 6.3802 | 4.56 × 10^−6 |
| PIMA Indian Diabetes | GT2FIS | 10 | 76.17 | 0.3970 | | |
| Indian Liver | IT3FIS | 10 | 72.40 | 0.6870 | 2.7279 | 0.0074 |
| Indian Liver | GT2FIS | 10 | 71.36 | 0.9830 | | |
| Breast Cancer Coimbra | IT3FIS | 10 | 72.78 | 2.2471 | 2.5457 | 0.0117 |
| Breast Cancer Coimbra | GT2FIS | 10 | 70.70 | 1.2927 | | |

Table 15. Values of t-tests using 10% of the instances for validation of the FIS with 10 cross-validations.

| Dataset | Method | N | Mean | Std Dev | t-Value | p-Value |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Haberman’s Survival | IT3FIS | 10 | 77.73 | 2.0233 | 4.6205 | 4.75 × 10^−4 |
| Haberman’s Survival | GT2FIS | 10 | 74.67 | 0.5578 | | |
| Cryotherapy | IT3FIS | 10 | 89.67 | 1.6605 | 5.4189 | 3.56 × 10^−5 |
| Cryotherapy | GT2FIS | 10 | 86.22 | 1.133 | | |
| Immunotherapy | IT3FIS | 10 | 83.78 | 1.8295 | 7.0236 | 1.02 × 10^−6 |
| Immunotherapy | GT2FIS | 10 | 77.78 | 1.9876 | | |
| PIMA Indian Diabetes | IT3FIS | 10 | 77.66 | 1.7584 | 2.8421 | 0.0097 |
| PIMA Indian Diabetes | GT2FIS | 10 | 76.05 | 0.338 | | |
| Indian Liver | IT3FIS | 10 | 72.26 | 0.9167 | 2.1748 | 0.0252 |
| Indian Liver | GT2FIS | 10 | 71.57 | 0.4110 | | |
| Breast Cancer Coimbra | IT3FIS | 10 | 74.91 | 1.4342 | 0.5277 | 0.3027 |
| Breast Cancer Coimbra | GT2FIS | 10 | 74.45 | 2.316 | | |
Set Theory, by Thomas Jech. Perspectives in Mathematical Logic; 2nd, illustrated edition; Springer Science & Business Media, Jun 29, 2013; 634 pages; ISBN 3662224003, 9783662224007.

Publisher's description: The main body of this book consists of 106 numbered theorems and a dozen examples of models of set theory. A large number of additional results is given in the exercises, which are scattered throughout the text. Most exercises are provided with an outline of proof in square brackets [ ], and the more difficult ones are indicated by an asterisk. I am greatly indebted to all those mathematicians, too numerous to mention by name, who in their letters, preprints, handwritten notes, lectures, seminars, and many conversations over the past decade shared with me their insight into this exciting subject.

Contents: Preface; Part I Sets (Chapter 1 Axiomatic Set Theory: Axioms of Set Theory; Ordinal Numbers; Cardinal Numbers; Real Numbers; The Axiom of Choice; Cardinal Arithmetic; Filters and Ideals, Closed Unbounded Sets; Singular Cardinals; The Axiom of Regularity; Appendix: Bernays-Gödel Axiomatic Set Theory. Chapter 2 Transitive Models of Set Theory: Models of Set Theory; Transitive Models of ZF; Constructible Sets; Consistency of the Axiom of Choice and the Generalized Continuum Hypothesis; The Σn Hierarchy of Classes, Relations, and Functions; Relative Constructibility and Ordinal Definability); Part II More Sets (Chapter 3 Forcing and Generic Models: Generic Models; Complete Boolean Algebras; More Generic Models); Some Applications of Forcing; Large Sets; Other Large Cardinals; Sets of Reals; Historical Notes and Guide to the Bibliography; Bibliography; Notation; Name Index.
Lecture Notes on Classical Field Theory

Janos Polonyi
Department of Physics, Strasbourg University, Strasbourg, France

Contents

1 Introduction
2 Elements of special relativity
  2.1 Newton’s relativity
  2.2 Conflict resolution
  2.3 Invariant length
  2.4 Lorentz Transformations
  2.5 Time dilatation
  2.6 Contraction of length
  2.7 Transformation of the velocity
  2.8 Four-vectors
  2.9 Relativistic mechanics
  2.10 Lessons of special relativity
3 Classical Field Theory
  3.1 Why Classical Field Theory?
  3.2 Variational principle
    3.2.1 Single point on the real axis
    3.2.2 Non-relativistic point particle
    3.2.3 Relativistic particle
    3.2.4 Scalar field
  3.3 Noether theorem
    3.3.1 Point particle
    3.3.2 Internal symmetries
    3.3.3 Canonical energy-momentum tensor
    3.3.4 External symmetries
4 Electrodynamics
  4.1 Charge in an external electromagnetic field
  4.2 Dynamics of the electromagnetic field
  4.3 Energy-momentum tensor
  4.4 Electromagnetic waves in the vacuum
5 Green functions
  5.1 Time arrow problem
  5.2 Invertible linear equation
  5.3 Non-invertible linear equation with boundary conditions
  5.4 Retarded and advanced solutions
6 Radiation of a point charge
  6.1 Liénard-Wiechert potential
  6.2 Field strengths
  6.3 Dipole radiation
7 Radiation back-reaction
  7.1 The problem
  7.2 Hydrodynamical analogy
  7.3 Radiated energy-momentum
  7.4 Brief history
    7.4.1 Extended charge distribution
    7.4.2 Point charge limit
    7.4.3 Iterative solution
    7.4.4 Action-at-a-distance
    7.4.5 Beyond electrodynamics
  7.5 Epilogue
Chapter 1 Introduction

The following is a short set of lecture notes about classical field theory, in particular classical electrodynamics, for fourth- or fifth-year physics students. It is not supposed to be an introductory course in electrodynamics, whose knowledge will be assumed. Our main interest is to consider electrodynamics as a particular, relativistic field theory. A slightly more detailed view of the back-reaction force acting on point charges is given, this being the last open chapter of classical electrodynamics.

The concept of a classical field emerged in the nineteenth century, when the proper degrees of freedom were identified for the electromagnetic interaction; the idea was generalized later. Half a century later the careful study of the propagation of electromagnetic waves led to special relativity. One is usually confronted with relativistic effects at high energies as far as massive particles are concerned, and the simpler, non-relativistic approximation is sufficient to describe low-energy phenomena. But a massless particle, such as the photon, moves with relativistic speed at arbitrarily low energy and requires the full complexity of the relativistic description.

We do not follow the historical evolution here; rather, we start with a very short summary of the main ideas of special relativity. This makes the introduction of classical fields more natural. Classical field theories will be introduced by means of the action principle. This is not only a rather powerful scheme, but it also offers a clear view of the role symmetries play in the dynamics. After having laid down the general formalism we turn to electrodynamics, the interacting system of point charges and the electromagnetic field. The presentation is closed by a short review of the status of the radiation back-reaction force acting on accelerating point charges.

These lecture notes differ from a textbook on classical field theory in restricting attention to subjects which can be covered in a one-semester course; as a result, gauge theory in general, and general relativity in particular, are not presented. Another difference is the inclusion of a subject, special relativity, which might not be presented in other courses. There are numerous textbooks available on this classical subject. The monograph is a monumental collection of different aspects of electrodynamics; the basics can best be found in . The radiation reaction force is nicely discussed in , and .

Chapter 2 Elements of special relativity

The main concepts of special relativity are introduced in this chapter. They caused a genuine surprise a century ago, because people had the illusion that their intuition, based on the physics of slowly moving objects, covered the whole range of physics. Deviations from Newton's mechanics of massive bodies were systematically established only a few decades after the discovery of special relativity. In the meantime the only strong evidence for special relativity came from electromagnetic radiation, from the propagation of massless particles, the photons. They move with the speed of light at any energy and provide ample evidence of the new physics of particles moving with speed comparable to the speed of light. Therefore we rely on the propagation of light signals in the discussions below, without entering into the more detailed description of such signals by classical electrodynamics; the only reference to the Maxwell equations is made in the simple assumption 2 below.
2.1 Newton’s relativity

A frequently used concept below is that of inertial coordinate systems. The simplest motion is that of a free particle, and inertial coordinate systems are those in which a free point particle moves with constant velocity. Once the motion of a free particle satisfies the same equation, vanishing acceleration, in each inertial system, one conjectures that any other, interacting system follows the same laws in different inertial systems. Newton's law, $m\ddot{\mathbf{x}} = -\nabla U$, includes the second time derivative of the coordinates, therefore inertial systems are connected by motion with constant speed,

$$ \mathbf{x} \to \mathbf{x}' = \mathbf{x} - t\mathbf{v}. \quad (2.1) $$

This transformation is called a Galilean boost, because the invariance of the laws of mechanics under such transformations, the relativity assumption of Newton's theory, was discovered by Galileo. In other words, there is no way to find out the absolute velocity in mechanics, because the physical phenomena found by two observers moving with constant velocity with respect to each other are identical.

The point which marks the end of the applicability of Newton's theory in physics is an assumption which was held for hundreds of years but left implicit in the Galilean boost, namely that the time remains the same,

$$ t \to t' = t, \quad (2.2) $$

when one inertial system is changed into another. In other words, the time is absolute in Newton's physics; it can in principle be introduced identically for all inertial systems.

2.2 Conflict resolution

Special relativity results from the resolution of a contradiction between the two main pillars of classical physics, mechanics and electrodynamics. The following two assumptions appear to be incompatible:

1. Principle of Newton's relativity: The laws of physics look the same in all inertial coordinate systems.
2. Electrodynamics: According to the Maxwell equations the speed of propagation of electromagnetic waves (the speed of light) is c = 2.99793 · 10^10 cm/s.

In fact, the Galilean boost of Eqs. (2.1)-(2.2) leads to the addition of velocities,

$$ \frac{d\mathbf{x}'}{dt} = \frac{d\mathbf{x}}{dt} - \mathbf{v}. $$

This result is in contradiction with the inertial-system independence of the speed of light, encoded in the Maxwell equations. It was Einstein's deep understanding of physics which led him to recognize that Eq. (2.2) is the weak point of the argument, not supported by observations; special relativity is based on its rejection. Special relativity rests on the following, weakened assumptions:

1'. There is a transformation x → x′ and t → t′ of the coordinates and time which maps an inertial system into another and preserves the laws of physics. This transformation changes the observed velocity of objects, rendering it impossible to measure absolute velocities.
2'. The speed of light is the same in every inertial system.

Once the time has lost its absolute nature, the next step is its construction for each inertial system by observations. After this point is completed one can clarify the details of the relation, mentioned in assumption 1', between the times and coordinates when different inertial systems are compared. This will be our main task in the remaining part of this chapter.

The loss of the absolute nature of time forces us to change the way we imagine the motion of an object. In Newtonian mechanics the motion of a point particle was characterized by its trajectory x(t), its coordinates as a function of the (absolute) time.
If the time is to be constructed in a dynamical manner, then one should be more careful and not use the same time for different objects. Therefore, the motion of a point particle is described by its world line $x^\mu = (ct(s), \mathbf{x}(s))$, $\mu = 0, 1, 2, 3$, the parametrized form of its time and coordinates. The trivial factor c, the speed of light, is introduced so that the four-coordinate $x^\mu(s)$ has components of the same dimension of length. Each four-coordinate labels a point in the space-time, called an event. The world line of a point particle is a curve in the space-time.

Figure 2.1: Synchronization of clocks to the one placed at the origin.

Let us suppose that we can introduce a coordinate system by means of meter rods which characterize points in space and are all at rest. Then we place a clock at each space point, and these clocks are synchronized in the following manner. We pick the clock at one point, x = 0 in Fig. 2.1, as a reference, its hand being used to define the flow of time at x = 0, the time variable of its world line. Suppose that we now want to set the clock at point y. We first place a mirror on this clock and then emit a light signal, which propagates with the speed of light according to assumption 2', from our reference point at time t0, and measure the time t1 when it arrives back from y. The clock at y should show the time t0 + (t1 − t0)/2 at the moment the light reaches it.

The clocks, synchronized in such a manner, immediately show one of the most dramatic predictions of special relativity, the loss of the absolute nature of time. Let us imagine an experimental arrangement in the coordinate system (x, y, z) of Fig. 2.2 which contains a light source (A) and two light detectors (B and C), placed at equal distances from the source. A light signal emitted from the source reaches the detectors at the same time in this inertial system. Let us analyze the same process seen from another inertial system (ct′, x′, y′, z′) which is attached to an observer moving with a constant velocity in the direction of the y axis. A shift by a constant velocity leaves the free particle motion unaccelerated, therefore the coordinate system (ct′, x′, y′, z′) where this observer is at rest is inertial, too.

Figure 2.2: The arrival of the light at B and C is simultaneous (|AB|′ = |AC|′) in the inertial system (ct, x, y, z), but the light signals arrive earlier at B than at C in the inertial system (ct′, x′, y′, z′).

But the time ct′ at which detector C signals the arrival of the light for this moving observer is later than the time ct in the co-moving inertial system. In fact, the light propagates with the same speed in both systems, but the detector moves away from the source in the system (ct′, x′, y′, z′). In a similar manner, the time ct′ at which the light reaches detector B is earlier than ct, because this detector moves towards the source. As a result, two events which are in coincidence in one inertial system may correspond to different times in another inertial system. The order of events may change when we see them in different inertial systems, where the physical laws are supposed to be identical.
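As a brief aside (added here for illustration; it is not part of the original notes), the conflict stated in Section 2.2 and its resolution can be made explicit in one line each. Under a Galilean boost a light signal cannot keep the speed c, while the relativistic composition law quoted later in Eq. (2.27) does preserve it:

```latex
% Galilean boost (2.1)-(2.2): a signal with dx/dt = c is seen from S', moving with velocity v, as
\frac{dx'}{dt'} = \frac{d(x - vt)}{dt} = c - v \neq c ,
% in conflict with assumption 2'. The relativistic rule (2.27) with v'_\parallel = c instead gives
v_\parallel = \frac{c + V}{1 + \dfrac{Vc}{c^2}} = \frac{c + V}{1 + V/c} = c ,
% so the speed of light is the same in both frames.
```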
2.3 Invariant length

The search for the transformation rule of space-time vectors $x^\mu = (ct, \mathbf{x})$ is made simpler by the introduction of some kind of length between events which is the same when seen from different inertial systems. Since the speed of light is the same in every inertial system, it is natural to use light in the construction of this length. We define the distance between two events in such a manner that it vanishes when there is a light signal connecting the two events. The distance squared is supposed to be quadratic in the difference of the space-time coordinates, thus the expression

$$ s^2 = c^2 (t_2 - t_1)^2 - (\mathbf{x}_2 - \mathbf{x}_1)^2 \quad (2.3) $$

is a natural choice. If s² vanishes in one reference frame then the two events can be connected by a light signal. This property is valid in any reference frame, therefore the value s² = 0 remains invariant under a change of inertial systems. Now we show that s² ≠ 0 remains invariant as well. The change of inertial system may consist of trivial translations in space-time and spatial rotations, which leave the expression (2.3) unchanged in an obvious manner. What is left to show is that a relativistic boost, to an inertial system moving with constant speed, leaves s² ≠ 0 invariant.

Figure 2.3: The light cones (future and past light cones; time-like, space-like and light-like separations; absolute past and absolute future).

Let us start with a reference frame S, and consider two others, S(u₁) and S(u₂), which move with velocities u₁ and u₂ with respect to S. Because s² = 0 is invariant and the transformation law for s² should be continuous in u for infinitesimal ds² (no large distances are involved where physical phenomena might accumulate), we have

$$ ds^2 = a(|\mathbf{u}_1|)\, ds_1^2, \qquad ds^2 = a(|\mathbf{u}_2|)\, ds_2^2, \quad (2.4) $$

where a(u) is a continuous function whose argument depends on the magnitude |u| only, owing to rotational invariance. When S(u₁) is viewed from S(u₂) one finds

$$ ds_1^2 = a(|\mathbf{u}_1 - \mathbf{u}_2|)\, ds_2^2 \quad (2.5) $$

and the comparison of (2.4) and (2.5) gives

$$ a(|\mathbf{u}_1 - \mathbf{u}_2|) = \frac{a(|\mathbf{u}_2|)}{a(|\mathbf{u}_1|)}, \quad (2.6) $$

which can be true only if a = 1.

One says that two events are time-, space- or light-like separated when s² > 0, s² < 0 or s² = 0, respectively. Signals emitted from a point, shown as the origin in Fig. 2.3, reach the future light cone. The signals received may be emitted from its past light cone. There is no communication between two events when they are space-like separated. Events separated by a light-like interval can communicate only by signals traveling with the speed of light.

2.4 Lorentz Transformations

The use of the invariant length gives a simple characterization of the transformation of the space-time coordinates when the inertial system is changed, that is, when a Lorentz transformation is carried out. For this end we introduce the metric tensor

$$ g_{\mu\nu} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \quad (2.7) $$

which allows us to introduce a Lorentz-invariant scalar product

$$ x \cdot y = x^\mu g_{\mu\nu} y^\nu \quad (2.8) $$

where $x = (ct, \mathbf{x})$, etc. The Lorentz group consists of the 4 × 4 matrices which mix the space-time coordinates,

$$ x^\mu \to x'^\mu = \Lambda^\mu{}_\nu x^\nu, \quad (2.9) $$

in such a manner that the scalar product, or the invariant length, is preserved,

$$ x \cdot y = x^{\mu'} \Lambda^\mu{}_{\mu'}\, g_{\mu\nu}\, \Lambda^\nu{}_{\nu'}\, y^{\nu'} \quad (2.10) $$

or

$$ g = \tilde{\Lambda} \cdot g \cdot \Lambda. \quad (2.11) $$

The Lorentz group is six-dimensional: three dimensions correspond to three-dimensional rotations, and the three other directions belong to Lorentz boosts, parametrized by the three-velocity $\mathbf{v}$ relating the inertial systems. Let us denote the parallel and perpendicular projections of the three-coordinate on the velocity $\mathbf{v}$ by $\mathbf{x}_\parallel$ and $\mathbf{x}_\perp$, respectively,

$$ \mathbf{x} = \mathbf{x}_\parallel + \mathbf{x}_\perp, \qquad \mathbf{x}_\parallel \cdot \mathbf{x}_\perp = \mathbf{v} \cdot \mathbf{x}_\perp = 0. \quad (2.12) $$
(2.12) We can then write a general Lorentz transformation in a three-dimensional notation as x′ = α(x∥−vt) + γx⊥, t′ = β  t −x · v ˜ c2  (2.13) The invariance of the length, c2t2 −x2 = c2β2  t −x · v ˜ c2 2 −α2(x∥−vt)2 −γx2 ⊥, (2.14) yields the relations γ = ±1, v = 0 = ⇒ γ = 1 ˜ c = c α = β = 1 q 1 −v2 c2 (2.15) x′ ∥= x∥−vt q 1 −v2 c2 , t′ = t − vx∥ c2 q 1 −v2 c2 (2.16) 2.4. LORENTZ TRANSFORMATIONS 9 x’ t t’ x G E Figure 2.4: Lorentz transformations. Note that the inverse Lorentz transformation is obtained by the change v → −v, x∥= x′ ∥+ vt′ q 1 −v2 c2 , t = t′ + vx′ ∥ c2 q 1 −v2 c2 . (2.17) Fig. 2.4 shows that change of the space-time coordinates during a Lorentz boost. For an Euclidean rotation in two dimensions both axes are rotated by the same angle, here this possibility is excluded by the invariance of the light cone. As a results the axes are moved by keeping the light cone, shown with dashed lines, unchanged. We remark that there are four disconnected components of the Lorentz group. First note that the determinant of Eq. (2.11), det g = det g(det Λ)2 indicates that det Λ = ±1 and there are no infinitesimal Lorentz transforma-tions 1 1+δΛ such that det Λ(1 1+δΛ) ̸= det Λ. Thus the spatial inversion split the Lorentz group into two disconnected sets. Furthermore, observe that the com-ponent (00) of Eq. (2.11), 1 = g00 = (Λ0 0)2 −P j(Λj 0)2 implies that Λ0 0| > 1, and that time inversion, a Lorentz transformation, splits the :Lorentz group into two disconnected sets. The four disconnected components consists of matrices satisfying Eq. (2.11) and 1. det Λ = 1, Λ0 0 ≥1 (the proper Lorentz group, L↑ +), 2. det Λ = 1, Λ0 0 ≤1, 3. det Λ = −1, Λ0 0 ≥1, 4. det Λ = −1, Λ0 0 ≤1. Note that one recovers the Galilean boost, x′ = x −vt, in the non-relativistic limit. One usually needs the full space-time symmetry group, called Poincar group. It is ten dimensional and is the direct product of the six dimensional Lorentz group and the four dimensional translation group in the space-time. 10 CHAPTER 2. ELEMENTS OF SPECIAL RELATIVITY 2.5 Time dilatation The proper time τ is the lapse the time measured the coordinate system attached to the system. To find it for an object moving with a velocity v to be considered constant during a short motion, in a reference system let us express the invariant length between two consecutive events, ref. system of the particle c2dτ 2 = c2dt2 −dt2v2 lab. system (2.18) which gives dτ = dt r 1 −v2 c2 . (2.19) Remarks: 1. A moving clock seems to be slower than a standing one. 2. The time measured by a clock, 1 c Z xf xi ds (2.20) is maximal if the clock moves with constant velocity, ie. its world-line is straight. (Clock following a motion with the same initial and final point but non-constant velocity seems to be slower than the one in uniform motion.) 2.6 Contraction of length The proper length of a rod, ℓ0 = x′ 2 −x′ 1, is defined in the inertial system S′ in which the rod is at rest. In another inertial system the end points correspond to the world lines xj = x′ j + vt′ j q 1 −v2 c2 , tj = t′ j + vx′ j c2 q 1 −v2 c2 . (2.21) The length is read offat equal time, t1 = t2, thus t′ 2 −t′ 1 = −v c2 (x′ 2 −x′ 1) = −vℓ0 c2 (2.22) and the invariant length of the space-time vector pointing to the event E is −ℓ2 = c2 vℓ0 c2 2 −ℓ2 0, (2.23) yielding ℓ= ℓ0 r 1 −v2 c2 . (2.24) 2.7. TRANSFORMATION OF THE VELOCITY 11 x x’ t’ t’ 1 2 E t t’ Figure 2.5: Lorentz contraction. Lorentz contraction is that the length is the longest in the rest frame. 
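The algebra above is easy to cross-check numerically. The following short sketch is an added illustration (units with $c = 1$, an arbitrary boost velocity and test event): it verifies that the boost (2.16) preserves $c^2 t^2 - x^2$, reproduces the time dilation (2.19), and recovers the contraction (2.24) of a moving rod.

import math

c = 1.0
v = 0.8 * c
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

def boost(t, x):
    # Lorentz boost along x with velocity v, Eq. (2.16), written for (t, x)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

t, x = 2.3, 1.1                                     # an arbitrary event
tp, xp = boost(t, x)
print(c**2 * t**2 - x**2, c**2 * tp**2 - xp**2)     # equal: the interval is invariant

dt = 1.0                                            # two ticks of a clock moving with velocity v
print(boost(dt, v * dt)[0], dt * math.sqrt(1 - v**2 / c**2))   # both give dt/gamma, Eq. (2.19)

l0 = 1.0                                            # proper length of a rod at rest in the primed frame
x_right = l0 / gamma                                # lab position of its right end at lab time t = 0
print(x_right, l0 * math.sqrt(1 - v**2 / c**2))     # both give l0/gamma, Eq. (2.24)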
It was introduced by Lorentz as an ad hoc mechanism to explain the negative result of the Michelson-Moreley experiment to measure the absolute speed of their laboratory. It is Einstein’s essential contribution to change this view and instead of postulating a fundamental effect he derived it by the detailed analysis of the way length are measured in moving inertial system. Thus the contraction of the length has nothing to do with real change in the system, it reflects the specific features of the way observations are done only. 2.7 Transformation of the velocity As mentioned above, the Galilean boost (2.1)-(2.2) leads immediately to the addition of velocities, dx dt → dx dt −v. This rule is in contradiction with the invariance of the speed of light under Lorentz boosts. It was mentioned that the resolution of this conflict is the renounce of the absolute nature of the time. This must introduce non-linear pieces in the transformation law of the velocities. To find them we denote by V the velocity between the inertial systems S and S′, dx∥= dx′ ∥+ V dt′ q 1 −V 2 c2 , dx⊥= dx′ ⊥, dt = dt′ + V dx′ ∥ c2 q 1 −V 2 c2 . (2.25) Then dt dt′ = 1 + V v′ ∥ c2 q 1 −V 2 c2 (2.26) and the velocity transform as v∥= v′ ∥+ V 1 + V v′ ∥ c2 , v⊥= v′ ⊥ q 1 −V 2 c2 1 + V v′ ∥ c2 . (2.27) Note that 12 CHAPTER 2. ELEMENTS OF SPECIAL RELATIVITY 1. the rule of addition of velocity is valid for v/c ≪1, 2. if v = c then v′ = c, 3. the expressions are not symmetrical for the exchange of v and V 2.8 Four-vectors The space-time coordinates represent the contravariant vectors xµ = (ct, x). In order to eliminate the metric tensor from covariant expressions we introduce covariant vectors whose lower index is obtained by multiplying with the metric tensor, xµ = gµνxν. Thus allows us to leave out the metric tensor from the scalar product, x · y = xµgµνyν = xµyµ. The inverse of the metric tensor gµν is denoted by gµν, gµρgρν = δµ ν . Identities for Lorentz transformations: g = ˜ Λ · g · Λ Λ−1 = g−1 · ˜ Λ · g = (g · Λ · g−1)tr x′µ = (Λ · x)µ = Λµ νxν xµ = (g · Λ · g−1) µ ν x′ν = x′νΛ µ ν = (x′ · Λ)µ x′ µ = (g · Λ · x)µ = (g · Λ · g−1 · g · x)µ = Λ ν µ xν xµ = x′ νΛν µ = (x′ · Λ)µ (2.28) One can define contravariant tensors which transform as T µ1···µn = Λµ1 ν1 · · · Λµn νnT ν1···νn, (2.29) covariant tensors with the transformation rule Tµ1···µn = Λν1 µ1 · · · Λνn µnTν1···νn (2.30) and mixed tensors which satisfy T ρ1···ρm µ1···µn = Λρ1 κ1 · · · Λρm κmΛν1 µ1 · · · Λνn µnT κ1···κm ν1···νn . (2.31) There are important invariant tensors, for instance the metric tensor is pre-served, gµν′ = Λµ′ µgµ′ν′Λν′ ν together with its other forms like gµν, gµν and gν µ. Another important invariant tensor is the completely antisymmetric one ǫµνρσ where the convention is ǫ0123 = 1. In fact, ǫµνρσ′ = ǫµνρσ det Λ which shows that ǫµνρσ is a pseudo tensor, is remains invariant under proper Lorentz transformation and changes sign during inversions. 2.9. RELATIVISTIC MECHANICS 13 2.9 Relativistic mechanics Let us first find the heuristic generalization of Newton’s law for relativistic velocities by imposing Lorentz invariance. The four-velocity is defined as uµ = dxµ(s) ds = ˙ x(s) = dx0 ds , dx0 ds v c  =   1 q 1 −v2 c2 , v c q 1 −v2 c2   (2.32) and it gives rise the four-acceleration ˙ uµ = duµ ds , (2.33) and the derivation of the identity u2(s) = 1 with respect to s yields ˙ u · u = 0. The four-momentum, defined by pµ = mcuµ = (p0, p) =   mc q 1 −v2 c2 , mv q 1 −v2 c2  , (2.34) satisfies the relation p2 = m2c2. 
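As a quick numerical illustration of these relations (an addition to the text, with arbitrary test values and the metric $g = \mathrm{diag}(1,-1,-1,-1)$), the sketch below builds the four-velocity (2.32) and four-momentum (2.34) of a particle and checks $u^2 = 1$ and $p^2 = m^2 c^2$, together with the addition rule (2.27) for parallel velocities.

import numpy as np

c = 1.0
g = np.diag([1.0, -1.0, -1.0, -1.0])

def four_velocity(v):
    v = np.asarray(v, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - v @ v / c**2)
    return gamma * np.concatenate(([1.0], v / c))

m = 2.0
v = np.array([0.3, 0.4, 0.1]) * c           # an arbitrary velocity below c
u = four_velocity(v)
p = m * c * u

print(u @ g @ u)                             # 1: normalization of the four-velocity
print(p @ g @ p, m**2 * c**2)                # equal: the mass-shell relation p^2 = m^2 c^2

def add_parallel(v_prime, V):
    # addition of parallel velocities, Eq. (2.27)
    return (v_prime + V) / (1.0 + V * v_prime / c**2)

print(add_parallel(0.9 * c, 0.9 * c))        # still below c
print(add_parallel(c, 0.5 * c))              # exactly c: the speed of light is a fixed point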
The rate of change of the four-momentum defines the four-force, Kµ = dpµ ds = d ds  mcdxµ ds  . (2.35) The three-vector F = ds dt K = mc d dt dt dsv = ma q 1 −v2 c2 − d2s dt2 ( ds dt )2 mcv = ma q 1 −v2 c2 − d dt √ c2 −v2 c2 −v2 mcv = m q 1 −v2 c2 " a + v(v · a) c2(1 −v2 c2 ) # (2.36) can be considered as the relativistic generalization of the the three-force in Newton’s equation. The particular choice of O v2/c2 corrections are chosen in such manner that the temporal component of Eq. (2.35), d ds  mcdx0 ds  = d ds mc q 1 −v2 c2 = K0 (2.37) 14 CHAPTER 2. ELEMENTS OF SPECIAL RELATIVITY leads to the conservation law for the energy. This is because the constraint 0 = mc ˙ u · u = K · u = mc¨ x · ˙ x = 0 gives K0 dx0 ds = Ku =  dt ds 2 Fv (2.38) what can be written as d dtE(v) = Fv (2.39) which gives the kinetic energy E(v) = mc2 q 1 −v2 c2 (2.40) and leads to the expressions pµ = E c , p  , E2 c2 = p2 + m2c2, E(p) = c p p2 + m2c2. (2.41) Note that the unusual relativistic correction in the three-force (2.36) is non-vanishing when the velocity is not perpendicular to the acceleration, i.e. the kinetic energy is not conserved and work done by the force on the particle. 2.10 Lessons of special relativity Special relativity grew out from the unsuccessful experimental attempts of mea-suring absolute velocities. This negative results is incorporated into the dy-namics by postulating a symmetry of the fundamental laws in agreement with Maxwell equations. The most radical consequences of this symmetry concerns the time. It becomes non-absolute, has to be determined dynamically for each system instead of assumed to be available before any observation. Furthermore, two events which coincide in one reference frame may appear in different order in time in other reference frames, the order of events in time is not absolute either. The impossibility of measuring absolute acceleration and further, higher derivatives of the coordinates with respect to the time is extended in general relativity to the nonavailability of the coordinate system before measurements where the space-time coordinates are constructed by the observers. The dynamical origin of time motivates the change of the trajectory x(t) as a fundamental object of non-relativistic mechanics to world line xµ(s) where the reference system time x0 is parametrized by the proper time or simply a parameter of the motion s. The world line offers a surprising extension of the non-relativistic motion by letting x0(s) non-monotonous function. Turning point where time turns back along the world line is interpreted in the quantum case as an events where a particle-anti particle pair is created or annihilated. We close this short overview of special relativity with a warning. The basic issues of this theory , such as meter rods and clocks are introduced on the 2.10. LESSONS OF SPECIAL RELATIVITY 15 macroscopic level. Though the formal implementation of special relativity is fully confirmed in the quantum regime their interpretation in physical term, e.g. the speed of propagation of light within an atom, is neither trivial nor parallel with the macroscopic reasoning. 16 CHAPTER 2. ELEMENTS OF SPECIAL RELATIVITY Chapter 3 Classical Field Theory 3.1 Why Classical Field Theory? It seems nowadays natural to deal with fields in Physics. It is pointed out here that the motivation to introduce fields, dynamical degrees of freedom distributed in space, is not supported only by electrodynamics. 
There is a “no-go” theorem in mechanics, it is impossible to construct relativistic interactions in a many-body system. Thus if special relativity is imposed we need an extension of the many-particle systems, such fields, to incorporate interactions. The dynamical problem of a many-particle system is establishment and the solution of the equations of motion for the world lines xµ a(s), a = 1m . . . , n of the particles. By generalizing the Newton equation we seek differential equations for the world lines, ¨ xµ a = F µ a (x1, . . . , xn) (3.1) where interactions are described by some kind of “forces” F µ a (x1, . . . , xn). The problem is that we intend to use instantaneous force and to consider the ar-gument of the force, the world lines at the same time x0 a as the particle in question but the “equal time” is not a relativistically invariant concept and has not natural implementation. A formal aspect of this problem can be seen by recalling that ˙ x2(s) = 1 long the world line, therefore ¨ x· ˙ x = 0, the four-velocity and the four-acceleration are orthogonal. Thus any Cauchy problem which provides the initial coordinates and velocities on an initial spatial hyper-surface must satisfy this orthogonality constraint. This imposes a complicated, unexpected restriction on the possible forces. For instance when translation invariant, central two-particle forces are considered then F µ a (x1, . . . , xn) = X b̸=a (xµ a −xµ b )f((xa −xb)2) (3.2) and xa −xb is usually not orthogonal to ˙ xa and xb. 17 18 CHAPTER 3. CLASSICAL FIELD THEORY The most convincing and general proof of the “no-go“ theorem is algebraic. The point is that the Hamilton function is the generator of the translation in time and its Poisson brackets, the commutator with the other generators of the Poincar group are fixed by the relativistic kinematics, the structure of the Poincar group. It can be proven that the any realization of the commutator algebra of the Poincar group for a many-particle system must contains the trivial Hamilton function, the sum of the free Hamilton functions for the particles. What is left to introduce relativistic interactions is to give up instantaneous force and allow the influence of the whole past history of the system on the forces. This is an action-at-a-distance theory where particles interact at different space-time points. We can simplify this situation by introducing auxiliary dynamical variables which are distributed in space and describe the propagation of the influence of the particles on each other. The systematical implementation of this idea is classical field theory. 3.2 Variational principle Our goal in Section is to obtain equations of motion which are local in space-time and are compatible with certain symmetries in a systematic manner. The basic principle is to construct equations which remain invariant under nonlinear transformations of the coordinates and the time. It is rather obvious that such a gigantic symmetry renders the resulting equations much more useful. Field theory is a dynamical system containing degrees of freedom, denoted by φ(x), at each space point x. The coordinate φ(x) can be a single real number (real scalar field) or consist n-components (n-component field). Our goal is to provide an equation satisfied by the trajectory φcl(t, x). The index cl is supposed to remind us that this trajectory is the solution of a classical (as opposed to a quantum) equation of motion. This problem will be simplified in two steps. First we restrict x to a single value, x = x0. 
The n-component field φ(x0) can be thought as the coordinate of a single point particle moving in n-dimensions. We need the equation satisfied by the trajectory of this particle. The second step of simplification is to reduce the n-dimensional function φ(x0) to a single point on the real axis. 3.2.1 Single point on the real axis We start with a baby version of the dynamical problem, the identification of a point on the real axis, xcl ∈R, in a manner which is independent of the re-parametrization of the real axis. The solution is that the point is identified by specifying a function with vanishing derivative at xcl only: d f(x) dx |x=xcl = 0 (3.3) 3.2. VARIATIONAL PRINCIPLE 19 To check the re-parametrization invariance of this equation we introduce new coordinate y by the function x = x(y) and find d f(x(y)) dy |y=ycl = d f(x) dx |x=xcl | {z } 0 dx(y) dy |y=ycl = 0 (3.4) We can now announce the variational principle. There is simple way of rewriting Eq. (3.3) by performing an infinitesimal variation of the coordinate x →x + δx, and writing f(xcl + δx) = f(xcl) + δf(xcl) = f(xcl) + δx f ′(xcl) | {z } 0 +δx2 2 f ′′(xcl) + O δx3 . (3.5) The variation principle, equivalent of Eq. (3.3) is δf(xcl) = O δx2 , (3.6) stating that xcl is characterized by the property that an infinitesimal variation around it, xcl →xcl + δx, induces an O δx2 change in the value of f(xcl). 3.2.2 Non-relativistic point particle We want to identify a trajectory of a non-relativistic particle in a coordinate choice independent manner. Let us identify a trajectory xcl(t) by specifying the coordinate at the initial and final time, xcl(ti) = xi, xcl(tf) = xf (by assuming that the equation of motion is of second order in time derivatives) and consider a variation of the trajectory x(t): x(t) →x(t) + δx(t) which leaves the initial and final conditions invariant (ie. does not modify the solution). Our function f(x) of the previous section becomes a functional, called action S[x(·)] = Z tf ti dtL(x(t), ˙ x(t)) (3.7) involving the Lagrangian L(x(t), ˙ x(t)). (The symbol x(·) in the argument of the action functional is supposed to remind us that the variable of the functional is a function. It is better to put a dot in the place of the independent variable of the function x(t) otherwise the notation S[x(t)] can be mistaken with an embedded 20 CHAPTER 3. CLASSICAL FIELD THEORY function S(x(t)).) The variation of the action is δS[x(·)] = Z tf ti dtL  x(t) + δx(t), ˙ x(t) + d dtδx(t)  − Z tf ti dtL(x(t), ˙ x(t)) = Z tf ti dt  L(x(t), ˙ x(t)) + δx(t)δL(x(t), ˙ x(t)) δx + d dtδx(t)δL(x(t), ˙ x(t)) δ ˙ x + O δx(t)2 − Z tf ti dtL(x(t), ˙ x(t))  = Z tf ti dtδx(t) δL(x(t), ˙ x(t)) δx −d dt δL(x(t), ˙ x(t)) δ ˙ x  + δx(t) | {z } 0 δL(x(t), ˙ x(t)) δ ˙ x ti tf + O δx(t)2 (3.8) The variational principle amounts to the suppression of the integral in the last line for an arbitrary variation, yielding the Euler-Lagrange equation: δL(x, ˙ x) δx −d dt δL(x, ˙ x) δ ˙ x = 0 (3.9) The generalization of the previous steps for a n-dimensional particle gives δL(x, ˙ x) δx −d dt δL(x, ˙ x) δ ˙ x = 0. (3.10) It is easy to check that the Lagrangian L = T −U = m 2 ˙ x2 −U(x) (3.11) leads to the usual Newton equation m¨ x = −∇U(x). (3.12) It is advantageous to introduce the generalized momentum: p = ∂L(x, ˙ x) ∂˙ x (3.13) which allows to write the Euler-Lagrange equation as ˙ p = ∂L(x, ˙ x) ∂x (3.14) The coordinate not appearing in the Lagrangian in an explicit manner is called cyclic coordinate, ∂L(x, ˙ x) ∂xcycl = 0. 
(3.15) For each cyclic coordinate there is a conserved quantity because the generalized momentum of a cyclic coordinate, pcycl is conserved according to Eqs. (3.13) and (3.15). 3.2. VARIATIONAL PRINCIPLE 21 3.2.3 Relativistic particle After the heuristic generalization of the non-relativistic Newton’s law let us con-sider now more systematically the relativistically invariant variational principle. The Lorentz invariant action must be proportional to the invariant length of the world-line, this latter being the only invariant of the problem. Dimensional con-siderations lead to S = −mc Z sf si ds = Z τf τi dτLτ (3.16) where τ is an arbitrary parameter of the world-line and the corresponding La-grangian is Lτ = −mc r dxµ dτ gµν dxµ dτ . (3.17) The Lagrangian L = −mc2 r 1 −v2 c2 = −mc2 + v2 2m + O v4 c2  (3.18) corresponds to the integrand when τ is the time and justifies the dimensionless constant in the definition of the action (3.16). We have immediately the energy-momentum p = ∂L ∂v = mv q 1 −v2 c2 E = ⃗ p⃗ v −L = mc2 q 1 −v2 c = mc2 + v2 2m + O v4 c2  . (3.19) The variation of the world-line, δS = Z xf xi ds δLs δxµ δxµ + δLs δ dxµ ds δ dxµ ds ! = δLs δ dxµ ds δxµ xf xi + Z xf xi dsδxµ δLs δxµ −d ds δLs δ dxµ ds ! (3.20) or δS = −mc Z ds δdxµ ds dxµ ds q dxµ ds dxµ ds = −mc Z dsδdxµ ds dxµ ds = −mcδxµ dxµ ds xf xi +mc Z dsδxµ d2xµ ds2 (3.21) leads to the Euler-Lagrange equation mcd2xµ ds2 = 0. (3.22) The four momentum is pµ = −δS δxµ f = mcgµν dxν ds . (3.23) 22 CHAPTER 3. CLASSICAL FIELD THEORY The projection of the non-relativistic angular momentum on a given unit vector n can be defined by the derivative of the action with respect to the angle of rotation around n. Such a rotation generates δx = δRx = δφn × x and gives δS δφ = δS δxℓ f δxℓ δφ = pRx = p(n × x) = n(x × p). (3.24) The relativistic generalization of this procedure is δxµ = δLµνxν, δS δφ = δS δxρ δxρ δφ = −pµLµνxν = 1 2Lµν(pνxµ −pµxν) (3.25) yielding M µν = xµpν −pµxν. (3.26) 3.2.4 Scalar field We turn now the dynamical variables which were evoked in avoiding the “no-go“ theorem, fields. We assume the simple case where there are n scalar degree of freedom at each space point, a scalar field φa(x), a = 1, . . . , n whose time dependence gives a space-time dependent field φa(x). To establish the variational principle we consider the variation of the trajec-tory φ(x) φ(x) →φ(x) + δφ(x), δφ(ti, x) = δφ(tf, x) = 0. (3.27) The variation of the action S[φ(·)] = Z V dtd3x | {z } dx L(φ, ∂φ) (3.28) is δS = Z V dx ∂L(φ, ∂φ) ∂φa δφa + ∂L(φ, ∂φ) ∂∂µφa δ∂µφa  + O δ2φ  = Z V dx ∂L(φ, ∂φ) ∂φa δφa + ∂L(φ, ∂φ) ∂∂µφa ∂µδφa  + O δ2φ  = Z ∂V dsµδφa ∂L(φ, ∂φ) ∂∂µφa + Z V dxδφa ∂L(φ, ∂φ) ∂φa −∂µ ∂L(φ, ∂φ) ∂∂µφa  + O δ2φ  (3.29) The first term for µ = 0, Z ∂V ds0δφa ∂L(φ, ∂φ) ∂∂0φa = Z t=tf d3x δφa |{z} 0 ∂L(φ, ∂φ) ∂∂0φa − Z t=ti d3x δφa |{z} 0 ∂L(φ, ∂φ) ∂∂0φa = 0 (3.30) 3.3. NOETHER THEOREM 23 is vanishing because there is no variation at the initial and final time. When µ = j then Z ∂V dsjδφa ∂L(φ, ∂φ) ∂∂jφa = Z xj=∞ dsjδφa ∂L(φ, ∂φ) ∂∂jφa | {z } 0 − Z xj=−∞ dsjδφa ∂L(φ, ∂φ) ∂∂jφa | {z } 0 = 0 (3.31) and it is still vanishing because we are interested in the dynamics of localized systems and the interactions are supposed to be short ranged. Therefore, φ = 0 at the spatial infinities and the Lagrangian is vanishing. The suppression of the second term gives the Euler-Lagrange equation ∂L(φ, ∂φ) ∂φa −∂µ ∂L(φ, ∂φ) ∂∂µφa = 0. 
(3.32) The simplest scalar field theory consists of a free, massive field and is de-scribed by the Lagrangian L = 1 2∂µφ∂µφ −m2c2 2ℏ2 φ2 (3.33) and the corresponding equation of motion is the Klein-Gordon equation, (□+ Λ−2 C )φ = 0 (3.34) where ΛC = ℏ mc is the Compton wavelength of a particle of mass m. The parameter m can be interpreted as mass because the plane wave solution φk(x) = e−ik·x (3.35) to the equation of motion satisfies the mass shell condition, ℏ2k2 = m2c2 (3.36) c.f. Eq. (2.41). 3.3 Noether theorem It is shown below that there is a conserved current for each continuous symmetry. Symmetry: A transformation of the space-time coordinates xµ →x′µ, and the field φa(x) →φ′ a(x) preserves the equation of motion. Since the equation of motion is obtained by varying the action, the action should be preserved by the symmetry transformations. A slight generalization is that the action can in fact be changed by a surface term which does not influence its variation, the equation of motion at finite space-time points. Therefore, the symmetry transformations satisfy the condition L(φ, ∂φ) →L(φ′, ∂′φ′) + ∂′ µΛµ (3.37) 24 CHAPTER 3. CLASSICAL FIELD THEORY with a certain vector function Λµ(x′). Continuous symmetry: There are infinitesimal symmetry transforma-tions, in an arbitrary small neighborhood of the identity, xµ →xµ + δxµ, φa(x) →φa(x) + δφa(x). Examples: Rotations, translations in the space-time, and φ(x) →eiαφ(x) for a complex field. Conserved current: ∂µjµ = 0, conserved charge: Q(t): ∂0Q(t) = ∂0 Z V d3xj0 = − Z V d3x∂vj = − Z ∂V ds · j (3.38) It is useful to distinguish external and internal spaces, corresponding to the space-time and the values of the field variable. Eg. φa(x) : R4 |{z} external space → Rm |{z} internal space . (3.39) Internal and external symmetry transformations act on the internal or external space, respectively. 3.3.1 Point particle The main points of the construction of the Noether current for internal symme-tries can be best understood in the framework of a particle. To find the analogy of the internal symmetries let us consider a point particle with the continuous symmetry x →x + ǫf(x) for infinitesimal ǫ, L(x, ˙ x) = L(x + ǫf(x), ˙ x + ǫ( ˙ x · ∂)f(x)) + O ǫ2 . (3.40) Let us introduce a new, time dependent coordinates, y(t) = y(x(t)), based on the solution of the equation of motion, xcl(t), in such a manner that one of them will be y1(t) = ǫ(t), where x(t) = xcl(t) + ǫ(t)f(xcl(t)). There will be n −1 other new coordinates, yℓ, ℓ= 2, . . . , n whose actual form is not interesting for us. The Lagrangian in terms of the new coordinates is defined by L(y, ˙ y) = L(y(x), ˙ y(x)). The ǫ-dependent part assumes the form L(ǫ, ˙ ǫ) = L(xcl + ǫf(xcl), ˙ xcl + ǫ( ˙ xcl · ∂)f(xcl) + ˙ ǫf(xcl)) + O ǫ2 . (3.41) What is the equation of motion of this Lagrangian? Since the solution is ǫ(t) = 0 it is sufficient to retain the O (ǫ) contributions in the Lagrangian only, L(ǫ, ˙ ǫ) →L(1)(ǫ, ˙ ǫ) = ǫ∂L(xcl, ˙ xcl) ∂x ·f(xcl)+ ∂L(xcl, ˙ xcl) ∂˙ x [ǫ( ˙ xcl·∂)f(xcl)+ ˙ ǫf(xcl)] (3.42) up to an ǫ-independent constant. The corresponding Euler-Lagrange equation is ∂L(1)(ǫ, ˙ ǫ) ∂ǫ −d dt ∂L(1)(ǫ, ˙ ǫ) ∂˙ ǫ = 0. (3.43) 3.3. NOETHER THEOREM 25 (this is the point where the formal invariance of the equation of motion under nonlinear, time dependent transformations of the coordinates is used). Accord-ing to Eq. (3.40) ǫ is a cyclic coordinate, ∂L(ǫ, ˙ ǫ) ∂ǫ = 0 (3.44) and its generalized momentum, pǫ = ∂L(ǫ, ˙ ǫ) ∂˙ ǫ (3.45) is conserved. 
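A concrete instance of the cyclic-coordinate statement can be generated with a computer algebra system. The short sympy sketch below is an added illustration (the planar particle in a central potential and the polar parametrization are hypothetical choices): the angle is a cyclic coordinate in the sense of (3.15), and its generalized momentum, the angular momentum, is the conserved quantity, in line with the conclusion drawn around Eqs. (3.44)-(3.45).

import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)
V = sp.Function('V')

# planar particle in a central potential, written in polar coordinates
L = sp.Rational(1, 2) * m * (r.diff(t)**2 + r**2 * theta.diff(t)**2) - V(r)

print(sp.diff(L, theta))               # 0: theta is a cyclic coordinate
p_theta = sp.diff(L, theta.diff(t))    # its generalized momentum
print(p_theta)                         # m*r(t)**2*Derivative(theta(t), t): the angular momentum

By Eq. (3.14), the vanishing of $\partial L/\partial\theta$ implies $\dot p_\theta = 0$, which is exactly the conservation law stated above.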
The external space transformation corresponds to the shift of the time, t → t + ǫ which induces x(t) →x(t −ǫ) = x(t) −ǫ ˙ x(t) for infinitesimal ǫ. This is a symmetry as long as the Hamiltonian (and the Lagrangian) does not contain explicitly the time. In fact, the action changes by a boundary contribution only which can be seen by expanding the Lagrangian in time around t −ǫ, Z tf ti dtL(x(t), ˙ x(t)) = Z tf ti dt  L(x(t −ǫ), ˙ x(t −ǫ)) + ǫdL(x(t), ˙ x(t)) dt  + O ǫ2 (3.46) and as a result the variational equation of motion remains unchanged. But the continuation of the argument is slightly different from the case of internal symmetry. We consider ǫ as a time dependent function which generates a trans-formation of the coordinate, x(t) →x(t −ǫ(t)) = x(t) −ǫ(t) ˙ x(t) + O ǫ2 . The Lagrangian of ǫ(t) as new coordinate for x(t) = xcl(t) is L(1)(ǫ, ˙ ǫ) = L(xcl(t −ǫ), ˙ xcl(t −ǫ)) −L(xcl(t), ˙ xcl(t)) = −ǫ ˙ xcl ∂L(xcl, ˙ xcl) ∂x −dǫ ˙ xcl dt ∂L(xcl, ˙ xcl) ∂˙ x + O ǫ2 = −ǫ ˙ xcl ∂L(xcl, ˙ xcl) ∂x −ǫ¨ xcl ∂L(xcl, ˙ xcl) ∂˙ x | {z } −ǫ dL(xcl, ˙ xcl) dt −˙ ǫ ˙ xcl ∂L(xcl, ˙ xcl) ∂˙ x + O ǫ2 = −ǫ dL(xcl, ˙ xcl) dt −d dt ∂L(xcl, ˙ xcl) ∂˙ x ˙ xcl  −d dt ∂L(xcl, ˙ xcl) ∂˙ xcl ǫ ˙ xcl  + O ǫ2 (3.47) up to an ǫ-independent constant. Its Euler-Lagrange equation (3.43) assures the conservation of the energy, H = ∂L(x, ˙ x) ∂˙ x ˙ x −L(x, ˙ x). (3.48) 3.3.2 Internal symmetries An internal symmetry transformation of field theory acts on the internal space only. We shall consider linearly realized internal symmetries for simplicity where δxµ = 0, δiφa(x) = ǫ τab |{z} generator φb(x). (3.49) 26 CHAPTER 3. CLASSICAL FIELD THEORY This transformation is a symmetry, L(φ, ∂φ) = L(φ + ǫτφ, ∂φ + ǫτ∂φ) + O ǫ2 . (3.50) Let us introduce new ”coordinates”, ie. new field variable, Φ(φ), in such a manner that Φ1(x) = ǫ(x) where φ(x) = φcl(x) + ǫ(x)τφcl(x), φcl(x) being the solution of the equations of movement. The linearized Lagrangian for ǫ(x) is ˜ L(ǫ, ∂ǫ) = L(φcl + ǫτφ(x), ∂φcl + ∂ǫτφ(x) + ǫτ∂φ(x)) → ǫτ ∂L(φcl, ∂φcl) ∂φ + [∂ǫτφ(x) + ǫτ∂φ(x)]∂L(φcl, ∂φcl) ∂∂φ .(3.51) The symmetry, Eq. (3.50), indicates that ǫ is a cyclic coordinate and the equa-tion of motion ∂˜ L(ǫ, ∂ǫ) ∂ǫ −∂µ ∂˜ L(ǫ, ∂ǫ) ∂∂µǫ = 0. (3.52) shows that the current, Jµ = −∂˜ L(ǫ, ∂ǫ) ∂∂µǫ = −∂L(φ, ∂φ) ∂∂µφ τφ (3.53) defined up to a multiplicative constant as the generalized momentum of ǫ, is conserved. Notice that (i) we have an independent conserved current corre-sponding to each independent direction in the internal symmetry group and (ii) the conserved current is well defined up to a multiplicative constant only. Let us consider a scalar field as an example. The four momentum is repre-sented by the vector operator ˆ pµ = −  ℏ ic∂0, ℏ i ⃗ ∂  in Quantum Mechanics which leads to the Lorentz invariant invariant Klein-Gordon equation 0 = (ˆ p2 −m2c2)φa = −ℏ2  ∂µ∂µ + m2c2 ℏ2  φa, (3.54) generated by the Lagrangian L = 1 2(∂φ)2 −m2c2 2ℏ2 φ2 = ⇒1 2(∂φ)2 −m2 2 φ2. (3.55) One may introduce a relativistically invariant self-interaction by means of a potential V (φ), L = 1 2(∂φ)2 −m2 2 φ2 −V (φ) (3.56) and the corresponding equation of motion is (∂µ∂µ + m2) = −V ′(φ). (3.57) The complex field theory with symmetry φ(x) →eiαφ(x) is defined by the Lagrangian L = ∂µφ∗∂µφ −m2φ∗φ −V (φ∗φ) (3.58) 3.3. NOETHER THEOREM 27 x δ V V’ Figure 3.1: Deformation of the volume in the external space. where it is useful to considered φ and φ∗as independent variables. 
The infinites-imal transformations δφ = iǫφ, δφ∗= −iǫφ∗yield the conserved current jµ = i 2(φ∗∂µφ −∂µφ∗φ) (3.59) up to a multiplicative constant. 3.3.3 Canonical energy-momentum tensor The most general transformations leaving the action invariant may act in the external space, too. Therefore, let us consider the transformation xµ →x′µ = xµ + δxµ and φ(x) →φ′(x′) = φ(x) + δφ(x) where δφ(x) = δiφ(x) + δxµ∂µφ(x) where δiφ(x) denotes the eventual internal space variation. The variation of the action is δS = Z V dxδL + Z V ′−V dxL = Z V dxδL + Z ∂V dSµδxµL (3.60) according to Fig. 3.1 what can be written as δS = Z V dx ∂L ∂φ −∂µ ∂L ∂∂µφ  δφ + Z ∂V dSµ  ∂L ∂∂µφδiφ + δxµL  = Z V dx ∂L ∂φ −∂µ ∂L ∂∂µφ  δφ + Z ∂V dSµ  ∂L ∂∂µφδφ + δxν  Lgµ ν − ∂L ∂∂µφ∂νφ  . (3.61) For field configurations satisfying the equation of motion the first integral is vanishing leaving the current Jµ = ∂L ∂∂µφδφ + δxν  Lgµ ν − ∂L ∂∂µφ∂νφ  (3.62) conserved. The case of internal space variation only δxµ = 0 reproduces the conserved Noether current of Eq. (3.53). For translations we have δxµ = aµ and δiφ = 0 28 CHAPTER 3. CLASSICAL FIELD THEORY is chosen such that the field configuration is displaced only, δφ = 0. The four conserved current are collected in the canonical energy-momentum tensor T µν c = ∂L ∂∂µφ∂νφ −Lgµν (3.63) obeying the conservation laws ∂µT µν c = 0. (3.64) They show that P ν = Z d3xT 0ν c (3.65) can be identified by the energy-momentum vector and we have the form T µν c =  ǫ cp 1 cS σ  (3.66) where ǫ is the energy density, p is the momentum density, S is the density of the energy flux and σjk is the flux of pk in the direction j. 3.3.4 External symmetries When Lorentz transformations and translations are performed simultaneously then we have δxµ = aµ+ωµ ν xν and δφ = Λνµωµνφ ̸= 0 for field with nonvanishing spin and the conserved current is Jµ = ∂L ∂∂µφ(Λνκωκνφ −δxν∂νφ) + δxµL. (3.67) Let us simplify the expressions be introducing the tensor f µνκ = ∂L ∂∂µφΛνκφ (3.68) and write Jµ = f µνκωκν − ∂L ∂∂µφδxν∂νφ + δxµL. (3.69) By the cyclic permutation of the indices µνκ we can define another tensor ˜ f µνκ =  ∂L ∂∂µφΛνκ + ∂L ∂∂νφΛκµ − ∂L ∂∂κφΛµν  φ (3.70) which is antisymmetric in the first two indices, ˜ f νµκ =  ∂L ∂∂νφΛµκ + ∂L ∂∂µφΛκν − ∂L ∂∂κφΛνµ  φ =  −∂L ∂∂νφΛκµ − ∂L ∂∂µφΛνκ + ∂L ∂∂κφΛµν  φ = −˜ f µνκ (3.71) 3.3. NOETHER THEOREM 29 and verifies the equation ˜ f µνκωνκ =  ∂L ∂∂µφΛνκ + ∂L ∂∂νφΛκµ − ∂L ∂∂κφΛµν  φωνκ = f µνκωνκ −  ∂L ∂∂νφΛµκ + ∂L ∂∂κφΛµν  φωνκ = f µνκωνκ. (3.72) As a result we can replace f µνκ by it in Eq. (3.69), Jµ = ˜ f µνκωκν − ∂L ∂∂µφδxν∂νφ + δxµL = ˜ f µνκ∂ν(δxκ) − ∂L ∂∂µφδxν∂νφ + δxµL = δxκ  gµκL − ∂L ∂∂µφ∂κφ −∂ν ˜ f µνκ  + ∂ν( ˜ f µνκδxκφ). (3.73) The last term J′µ = ∂ν( ˜ f µνκδxκφ) gives a conserved current thus can be dropped and the conserved Noether current simplifies as Jµ = T µν(aν + ωνκxκ) = T µνaν + 1 2(T µνxκ −T µκxν)ωνκ (3.74) where we can introduced the symmetric energy momentum tensor T µν = T µν c + ∂κ ˜ f µκν (3.75) and the tensor M µνσ = T µνxσ −T µσxν. (3.76) Due to Z ∂V Sµ∂κ ˜ f µκν = Z V ∂µ∂κ ˜ f µκν = 0 (3.77) the energy momentum extracted from T µν and T µν c agree and M is conserved ∂µM µνσ = 0, (3.78) yielding the relativistic angular momentum Jνσ = Z d3x(T 0νxσ −T 0σxν). (3.79) with the usual non-relativistic spatial structure. The energy-momentum tensor T µν is symmetric because the conservation of the relativistic angular momen-tum, Eq. (3.78) gives 0 = ∂ρM ρµν = ∂ρ(T ρµxν −T ρνxµ) = T νµ −T µν. (3.80) 30 CHAPTER 3. 
CLASSICAL FIELD THEORY Chapter 4 Electrodynamics 4.1 Charge in an external electromagnetic field The three-dimensional scalar and vector fields make up the four-dimensional vector potential as Aµ = (φ, A) and the simplest Lorentz invariant Lagrange function we can construct with it is Aµ ˙ xµ therefore the action for a point-charge moving in the presence of a given, external vector potential is S = − Z xf xi  mcds + e cAµdxµ = − Z xf xi  mcds −e cA · dx + eφdt  = Z τf τi Lτdτ, (4.1) where the index τ in the Lagrangian is a reminder of the variable used to construct the action, Lt = −mc2 r 1 −v2 c2 + e cA · v −eφ, (4.2) or Ls = −mc r dxµ ds gµν dxν ds −e cAµ(x)dxµ ds . (4.3) The Euler-Lagrange equation for the manifest invariant Ls which is parametrized by the invariant length s of the world line is 0 = δL δxµ −d ds δL δ dxµ ds = −e c∂µAν(x)dxν ds + mc d ds gµν dxµ ds q dxµ ds gµν dxν ds + e c d dsAµ(x) = mcd2xµ ds2 −e cFµν dxν ds (4.4) 31 32 CHAPTER 4. ELECTRODYNAMICS where the field-strength is given by Fµν = ∂µAν(x) −∂νAµ(x). (4.5) The interaction term in the action can be written as a space-time integral involving the current density, S = −mc Z ds −1 c Z dxAµ(x)jµ(x). (4.6) The relativistically covariant generalization of the non-relativistic current j = ρv for a single charge is jµ = ρdxµ dt = (cρ, j) = (cρ, ρv) = ρds dt ˙ xµ (4.7) In the case of a system of charges, xa(t), we have jµ(x) = c X a ea Z dsδ(x −xa(s)) ˙ xµ = c X a ea Z dsδ(x −xa(s))δ(x0 −x0 a(s)) ˙ xµ = c X a eaδ(x −xa(s)) 1 | dx0 ds | ˙ xµ = X a eaδ(x −xa(s)) | {z } ρ(x) dxµ dt . (4.8) It is easy to verify that the continuity equation ∂µjµ = ∂0ρ + ∇· j = X a ea[−va(t)∇δ(x −xa(t)) + ∇δ(x −xa(t))va(t)] = 0 (4.9) is satisfied. 4.2 Dynamics of the electromagnetic field The action (4.6) dos not contain the time derivatives of the vector potential therefore we have to extend our Lagrangian, L →L+LA, to generate dynamics for the electromagnetic field. The guiding principle is that LA should be 1. quadratic in the time derivative of the vector potential to have the usual equation of motion, 2. Lorentz invariant and 4.2. DYNAMICS OF THE ELECTROMAGNETIC FIELD 33 3. gauge invariant, ie. remain invariant under the transformation Aµ →Aµ + ∂µα. (4.10) The simplest solution is LA = −1 16π F µνFµν (4.11) where the factor −1/16π is introduced for later convenience. The complete action is S = Sm + SA where Sm = −mc X a Z ds r dxµ a ds gµν dxν a ds (4.12) and SA = −e c X a Z Aµ(x)dxµ − 1 16πc Z F µνFµνdx = −e c X a Z δ(3)(x −xa(t))Aµ(x)dxµ adV − 1 16πc Z F µνFµνdx = −e c2 X a Z δ(4)(x −xa(t))Aµ(x)dxµ dt dx − 1 16πc Z F µνFµνdx = Z LAdV dt (4.13) with LA = −1 c jµAµ(x) − 1 16π F µνFµν = −1 c jµAµ(x) −1 8π ∂µAν∂µAν + 1 8π ∂µAν∂νAµ. (4.14) It yields the Maxwell-equations 0 = δL δAµ −∂ν δL δ∂νAµ = −1 c jµ −1 4π ∂νF µν. (4.15) Note that the necessary condition for the gauge invariance of the action is the current conservation, Eq. (4.9). A simple calculation shows that any continuously double differentiable vector potential satisfies the Bianchi identity, ∂ρFµν + ∂νFρµ + ∂µFνρ = 0. (4.16) The usual three-dimensional notation is achieved by the parametrization Aµ = (φ, A), Aµ = (φ, −A), giving the electric and the magnetic fields E = −∂0A −∇φ = −1 c ∂tA −∇φ, H = ∇× A. (4.17) 34 CHAPTER 4. ELECTRODYNAMICS Notice that transformation jµ = (ρ, j) →(ρ, −j) under time reversal and the invariance of the term jµAµ interaction Lagrangian requires the transformation law φ →φ, A →A, E →E, H →−H for time reversal. 
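Gauge invariance of the field strength is easy to verify symbolically. The sympy check below is an added illustration: the potential components and the gauge function are arbitrary placeholder functions, and index placement is ignored since the argument only uses the commuting of partial derivatives.

import sympy as sp

X = sp.symbols('x0 x1 x2 x3')
A = [sp.Function(f'A{mu}')(*X) for mu in range(4)]
alpha = sp.Function('alpha')(*X)

def field_strength(pot):
    # F_{mu nu} = d_mu A_nu - d_nu A_mu, Eq. (4.5)
    return [[sp.diff(pot[nu], X[mu]) - sp.diff(pot[mu], X[nu]) for nu in range(4)]
            for mu in range(4)]

A_gauged = [A[mu] + sp.diff(alpha, X[mu]) for mu in range(4)]    # gauge transformation, Eq. (4.10)

F, Fg = field_strength(A), field_strength(A_gauged)
print(all(sp.simplify(Fg[mu][nu] - F[mu][nu]) == 0
          for mu in range(4) for nu in range(4)))                # True: F is unchanged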
The equation ǫjkℓHℓ= ǫjkℓǫℓmn∇mAn = (δjmδkn −δjnδkm)∇mAn = ∇jAk −∇kAj (4.18) relates the electric and magnetic field with the field strength tensor as Fµν =     0 Ex Ey Ez −Ex 0 −Hz Hy −Ey Hz 0 −Hx −Ez −Hy Hx 0    , F µν =     0 −Ex −Ey −Ez Ex 0 −Hz Hy Ey Hz 0 −Hx Ez −Hy Hx 0    . (4.19) One defines the dual field strength as ˜ Fµν = 1 2ǫµνρσF ρσ. (4.20) Duality refers to the exchange of the electric and the magnetic fields up to a sign, ˜ F0j = 1 2ǫjkℓF kℓ= Bj, ˜ Fjk = ǫjkℓF ℓ0 = −ǫjkℓEℓ, (4.21) giving ˜ Fµν =     0 Bx By Bz −Bx 0 Ez −Ey −By −Ez 0 Ex −Bz Ey −Ex 0    , ˜ F µν =     0 −Bx −By −Bz Bx 0 Ez −Ey By −Ez 0 Ex Bz Ey −Ex 0    . (4.22) We have two invariants, F µνFµν = −2E2 + 2H2 F µν ˜ Fµν = −4EH (4.23) but the first can be used only in classical electrodynamics which is invariant under time reversal. The field strength tensor transforms under Lorentz trans-formations as φ = φ′ + v c A′ ∥ q 1 −v2 c2 , A∥= A′ ∥+ v c φ′ q 1 −v2 c2 , (4.24) and F ⊥⊥ = F ⊥⊥′ F ∥⊥ = F ∥⊥′ + v c F 0⊥′ q 1 −v2 c2 F 0⊥ = F 0⊥′ + v c F ∥⊥′ q 1 −v2 c2 F ∥0 = F ∥0′ (∼ǫ01). (4.25) 4.3. ENERGY-MOMENTUM TENSOR 35 For v = (v, 0, 0) we have in the three-dimensional notation E∥ = E′ ∥, Ey = E′ y + v c H′ z q 1 −v2 c2 , Ez = E′ z −v c H′ y q 1 −v2 c2 H∥ = H′ ∥, Hy = H′ y −v c E′ z q 1 −v2 c2 , Hz = H′ z + v c E′ y q 1 −v2 c2 , (4.26) i.e. the homogeneous electric and magnetic fields transform into each other when seen by an observer moving with constant speed. 4.3 Energy-momentum tensor Let us first construct the energy-momentum tensor for the electromagnetic field by means of the Noether theorem. The translation xµ →xµ + ǫµ is a symme-try of the dynamics therefore we have a conserved current for each space-time direction, (Jµ)ν which can be rearranged in a tensor, T µν = (Jµ)ν, given by T µν c = −gµνL + δL δ∂µAρ ∂νAρ = gµν  1 16π F ρσFρσ + 1 c jρAρ  −1 4π F µρ∂νAρ (4.27) for the canonical energy-momentum tensor. The conservation law, ∂µT µν c = 0 suggests the identification of T 0ν c with the energy-momentum P ν of the system up to a multiplicative constant. But the physical energy-momentum may con-tain a freely chosen three index tensor Θµρν as long as Θµρν = −Θρµν because T µν →T µν + ∂ρΘµρν (4.28) is still conserved. This freedom can be used to eliminate an unphysical property of the canonical energy-momentum tensor, namely its gauge dependence. The choice Θµρν = 1 4πF µρAν gives T µν = gµν  1 16π F ρσFρσ + 1 c jρAρ  −1 4π F µρ∂νAρ + 1 4π ∂ρ(F µρAν) = gµν 16π F ρσFρσ + 1 4π F µρF ν ρ + gµν 1 c jρAρ + 1 4π ∂ρF µρAν = gµν 16π F ρσFρσ + 1 4π F µρF ν ρ + gµν 1 c jρAρ −jµAν (4.29) where the equation of motion was used in the last equation. The new energy-momentum tensor in the absence of the electric current, the true energy-momentum tensor of the EM field, T µν ed = gµν 16π F ρσFρσ + 1 4π F µρF ν ρ , (4.30) is gauge invariant, symmetric and traceless. But it is not conserved, the energy-momentum is continuously exchanged between the charges and the EM field. 36 CHAPTER 4. ELECTRODYNAMICS The amount of non-conservation, Kν = −∂µT µν ed ̸= 0, identifies the energy-momentum density of the charges, Kν = −∂µ  gµν 16π F ρσFρσ + 1 4π F µρF ν ρ  = −1 8π F ρσ∂νFρσ −1 4π F µρ∂µF ν ρ −1 4π ∂µF µρF ν ρ . (4.31) We use the Bianchi identity for the first term and the equation of motion Kν = −1 8π F ρσ(−∂ρF ν σ −∂σF ν ρ | {z } Bianchi −2∂σF ν ρ ) −1 c jρF ν ρ = −1 8π F ρσ(∂ρF ν σ + ∂σF ν ρ) | {z } =0 +1 c jρF ν ρ = ρF ν 0 + 1 c jkF ν k = ρF ν0 −1 c jkF νk. 
(4.32) Since −jkF 0k = jE ρF ℓ0 = ρEℓ jkF ℓk = jkǫℓkmHm (4.33) we have the source of the energy-momentum of the EM field Kµ = (K0, K) = 1 c jE, ρE + 1 c j × H  . (4.34) The time-like component is indeed the work done on the charges by the EM field. The spatial components is the rate of change of the momentum of the charges, the Lorentz force. The energy-momentum density of the EM field P ν = T 0ν is P 0 = 1 8π (−E2 + H2) + 1 4π E2 = 1 8π (E2 + H2) P ℓ = 1 4π F 0kF ℓ k = 1 4π EkǫkℓmHm = −1 c Sℓ (4.35) where the energy flux-density S = c 4π E × H (4.36) is given by the Poynting vector. In fact, the symmetry of the energy-momentum tensor allows us to identify the energy flux-density with c times the momentum density. 4.4. ELECTROMAGNETIC WAVES IN THE VACUUM 37 4.4 Electromagnetic waves in the vacuum Let us consider first the EM field waves in the absence of charges, the solution of the Maxwell equations, (4.15) for j = 0. We shall use the Lorentz gauge ∂µAµ = 0 where the equations of motion are 0 = ∂νF µν = ∂ν∂µAν −□Aµ = −□Aµ. (4.37) we shall consider plane and spherical waves, solutions which display the same value on parallel planes or concentric spheres. The plane wave solution depends on the combination t± = t ± n · x c (4.38) of the space-time coordinates. The linearity of the Maxwell equation allows us to write the solution as the linear superposition Aµ(x) = A+ µ (t+) + A− µ (t−) (4.39) where  1 c2 ∂2 t −∆  A± µ (x) = □A± µ (t±) = 0 (4.40) for arbitrary functions A± µ (t), to be determined by the boundary conditions. The plane waves read in the three-dimensional notation as H = ∇× A(t±) = ∇  t ± n · x c  × A′ = ±1 c n × A′ E = −1 c ∂tA(t±) −∇φ(t±) = −1 c A′ ∓1 c nφ′. (4.41) The relation H = ±1 c n × (−cE ∓nφ′) = ∓n × E (4.42) shows that H orthogonal both to the direction of the propagation, n and to E. The Lorentz gauge condition, 0 = 1 c ∂tφ + ∇· A = 1 c φ′ ± 1 c n · A′ (4.43) together with the second equation in (4.41) shows that E is orthogonal to n, as well. The energy-momentum density P ν = E2 + H2 8π , −E × H 4π  = E2 4π , ±E × (n × E) 4π  = E2 4π (1, ±n), (4.44) is a light-like vector, P 2 = 0. The spherical waves are of the form (4.39) with t± = t ± r c (4.45) 38 CHAPTER 4. ELECTRODYNAMICS in spherical coordinate system. We consider them in d spatial dimensions where they satisfy the wave equation ∂µ∂µA = 0. We write A±(x) = r 1−d 2 a±(t±) where a± is a solution of the equation 0 =  1 c2 ∂2 t − 1 rd−1 ∂rrd−1∂r  A±(t±) =  1 c2 ∂2 t + (d −1)(d −3) 4r2 ∂r −∂2 r  a±(t±). (4.46) The functions a±(t±) correspond to 1+1 dimensional plane waves in d = 1, 3 only. Chapter 5 Green functions The Green functions provide a clear and compact solution of linear equations of motion. But the transparency pf the result hides a drawback, the suppression of the the boundary conditions which are imposed both in space and time. The spatial boundary conditions are usually simpler, they amount to some suppres-sion of the fields at spatial infinity when localized phenomena are investigated. The boundary conditions in time are more complicated and are dealt with briefly in the next section. 5.1 Time arrow problem The basic equations of Physics, except weak interactions, are invariant under a discrete space-time symmetry, the reversal of the direction of time, T : (t, x) → (−t, x). Despite this symmetry, it is a daily experience that the this symmetry is not respected in the world around us. It is enough to recall that we are first born and die later, never in the opposite order. 
A more tangible example is that the radio transmission arrives at our receivers after its emission: electromagnetic signals travel forward in time rather than backward, although the latter is in principle always possible with time reversal invariant equations of motion. What eliminates the backward moving electromagnetic waves? This is one aspect of the time arrow problem in Physics, the problem of pinning down the direction of time, the dynamical origin of the apparent breakdown of the time reversal invariance.

This problem can be discussed at four different levels. The most obvious is the level of electromagnetic radiation, where it appears as the suppression of electromagnetic waves moving backward in time. It is believed that the origin of this problem is not in Electrodynamics and that this property of the electromagnetic waves is related to the boundary conditions chosen in time. We can prescribe the solution we seek in terms of initial or final conditions, or even by a mixture of these two possibilities, and depending on our choice we see forward going waves, backward going waves, or their mixture in the solution. Why are we interested in physics mainly in initial condition problems rather than final condition problems? A tentative answer comes from Thermodynamics, the non-decreasing nature of entropy in time. It seems that composite systems tend to become more complicated and to expand into more irregular regions of the phase space as time elapses. This property might not be related to the breakdown of the time reversal invariance, because it must obviously hold for either choice of the time arrow. It seems rather to have something to do with the nature of the initial conditions we encounter in Physics. The choice of the initial condition leads us to the astrophysical origin of the time arrow. The current cosmological models, solutions of the formally time reversal invariant Einstein equations of General Relativity, suggest that our Universe underwent a singularity in the distant past. This singular initial condition might be the origin of the peculiar features of the choice of the time arrow. Yet another level of this issue is the quantum-classical crossover, the scale regime where quantum effects give rise to classical physics. Each measurement traverses this crossover: it magnifies some microscopic quantum effect into a macroscopic, classical one. This magnification process, such as the condensation of drops in the Wilson cloud chamber or the "click" of a Geiger counter indicating the presence of an energetic particle, breaks the time reversal invariance. In fact, the end result of a measurement, the classical "record" created, endures the flow of time and cannot be reconverted into microscopic phenomena without leaving a macroscopic trace. Hence the deepest level of the breakdown of the time reversal invariance comes from this scale region, because any quantum gravitational problem must be handled by this scheme as well.

Instead of following a more detailed analysis of this dynamical issue we confine the discussion to the kinematical aspects of the problem. The question we turn to is the way a given initial or final condition problem can be handled within the framework of Classical Field Theory. The problem arises from the use of the variational principle in deriving the equations of motion: the variational equations of motion cannot break the time reversal invariance and cannot accommodate any boundary condition which does.
We start the discussion with the formal introduction of the Green function. Let us consider a given function of the time, $f(t)$, and the inhomogeneous linear differential equation

$$ L f = g, \qquad (5.1) $$

where $L$ is a differential operator acting on the time variable. The Green function is the inverse of the operator $L$ and satisfies the equation

$$ L_t G(t, t') = \delta(t - t'). \qquad (5.2) $$

The index in $L_t$ is a reminder that the differential operator acts on the variable $t$ of the two-variable function $G(t, t')$. Note that for translation invariant $L$ we have $G(t, t') = G(t - t')$. The Dirac delta is the identity operator on the function space, thus $G = L^{-1}$. The solution of Eq. (5.1) can now formally be written as

$$ f(t) = \int dt'\, G(t, t')\, g(t'). \qquad (5.3) $$

The time reversal invariance of the propagation of perturbations requires that the Green function be symmetric with respect to the exchange of its time variables, $G(t, t') = G(t', t)$. When the propagation of a signal violates time reversal invariance then the Green function must contain an antisymmetric part.

The variational principle which reproduces Eq. (5.1) as an equation of motion is based on the action

$$ S[f] = \frac12 \int dt\, dt'\, f(t)\, G^{-1}(t, t')\, f(t') - \int dt\, f(t)\, g(t). \qquad (5.4) $$

But the quadratic part of the action is invariant under the exchange of the integration variables $t \leftrightarrow t'$. Therefore any time reversal breaking, antisymmetric part of $G^{-1}(t, t')$ is canceled in the action, and the variational principle cannot produce time reversal breaking.

The way out of this deadlock is the observation that Eq. (5.2) yields a well defined Green function only when the operator $L$ has a trivial null-space. The null-space of an operator is the linear subspace of its domain of definition which is mapped into 0. Whenever there is a non-trivial solution of the equation $L h = 0$ it can freely be added to the solution of Eq. (5.1), rendering $G$ ill-defined in Eqs. (5.2)-(5.3). The variational problem has nothing to say about the trajectories corresponding to the null-space of the equation of motion. But this null-space consists of the physically most important functions, the solutions of the free equation of motion in the absence of the external source $g$. This component of the solution must be fixed by the boundary conditions. We shall bring it into the dynamics and the variational equations by adding an infinitesimal imaginary piece to the inverse propagator,

$$ G^{-1} \to G^{-1} + i\,\mathcal{O}(\epsilon). \qquad (5.5) $$

This renders the Green function well defined by making the null-space of $G^{-1}$ trivial, and it breaks the time reversal invariance in the desired manner because time reversal implies complex conjugation. The relation between the time arrow problem and this formal discussion is that these freely addable solutions are what enforce the particular boundary conditions. Therefore the handling of the boundary conditions must come from devices beyond the variational principle, such as the non-symmetric part of the Green function.

5.2 Invertible linear equation

We start with the simple case where $L$ is invertible and has a trivial null-space. Invertible differential operators usually arise in time independent problems. We consider here the case of a static, three-dimensional equation

$$ \Delta f = g \qquad (5.6) $$

in the three-volume $V$, when $f$ and the normal derivative $\nabla_\perp f$ are given on $\partial V$. The null-space of the operator $\Delta$ is nontrivial, it consists of the harmonic functions. But by imposing boundedness of the solution on an infinitely large domain, a rather usual condition in typical physical situations, the null-space becomes trivial.
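A finite-dimensional caricature of Eqs. (5.1)-(5.3) may help to fix the ideas. In the added sketch below the operator is replaced by the matrix of the discrete one-dimensional Laplacian with Dirichlet boundaries, which is invertible with a trivial null-space; the Green function is then literally the matrix inverse, and it comes out symmetric, as expected for a problem with no preferred time direction. Grid size and source are arbitrary choices.

import numpy as np

n = 200
h = 1.0 / (n + 1)
xs = np.arange(1, n + 1) * h

# discrete 1D Laplacian with Dirichlet boundary conditions
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2

G = np.linalg.inv(L)                  # the Green function is just L^{-1}, cf. Eq. (5.2)
g = np.sin(np.pi * xs)                # an arbitrary smooth source

f_green = G @ g                       # discrete version of f = int G g, Eq. (5.3)
f_direct = np.linalg.solve(L, g)
print(np.max(np.abs(f_green - f_direct)))                    # ~ 0: the two solutions agree
print(np.max(np.abs(G - G.T)))                               # ~ 0: G is symmetric
print(np.max(np.abs(f_green + np.sin(np.pi * xs) / np.pi**2)))   # small discretization error vs. -sin(pi x)/pi^2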
One can split the solution as f = fpart + fhom where fpart is a particular solution of the inhomogeneous equation and fhom, the solution of the homoge-neous equation. Due to boundedness fhom must be a trivial constant and will be ignored. A useful particular solution is found by inspecting the first two derivatives of the function D(x, y) = −1 4π 1 |x −y|. (5.7) which read as ∂k 1 |x| = −xk |x|3 ∂ℓ∂k 1 |x| = −δkℓ |x|3 + 3xkxℓ |x|5 (5.8) give ∆1 |x| = 0 (5.9) for x ̸= 0. Apparently ∆1 |x| is a distribution what can be identified by calculat-ing the integral Z x2<ǫ2 dV f(x)∆1 |x| = − Z x2<ǫ2 dV ∇f(x) · ∇1 |x| | {z } O(ǫ) + Z x2=ǫ2 dSf(x) · ∇1 |x| | {z } −4πf(0) (5.10) giving ∆xD(x, y) = δ(x −y) (5.11) or D(x, y) = ⟨x| 1 ∆|y⟩. (5.12) Thus we have fpart(x) = Z d3yD(x, y)g(y) = − Z d3y g(y) 4π|x −y|. (5.13) To find the homogeneous solution we start with Gauss integral theorem, Z ∂V dSyF(y) = Z V d3y∇F(y) (5.14) 5.3. NON-INVERTIBLE LINEAR EQUATION WITH BOUNDARY CONDITIONS43 and by applying for F(y) = D(x, y)∇f(y) −f(y)∇D(x, y) we arrive at Green theorem Z ∂V dSy[D(x, y)∇f(y)−f(y)∇D(x, y)] = Z V d3y[D(x, y)∆f(y)−f(y)∆yD(x, y)]. (5.15) which gives f(x) = −1 4π Z V d3y g(y) |x −y| + 1 4π Z ∂V dSy  ∇f(y) |x −y| −f(y)∇y 1 |x −y|  . (5.16) 5.3 Non-invertible linear equation with bound-ary conditions The non-invertible operators usually appears in dynamical problems. Let us consider the equation □f = g (5.17) on 4-dimensional space-time where the function f is sought for a given g. We follow first the extension of the previous argument for four-dimensions: We define a Green-function which is the solution of the equation □xD(x, y) = δ(x −y) (5.18) and take the time integral of Eq. (5.15) with a Green function D(x, y) = D(x −y), Z [ti,tf ]⊗∂V dtdS[D(x, y)∇f(y) −f(y)∇yD(x, y)] (5.19) = Z [ti,tf ]⊗V dy[D(x, y)∆f(y) −f(y)∆yD(x, y)] = Z [ti,tf ]⊗V dy[−D(x, y)□f(y) + D(x, y)∂2 t f(y) −f(y)∂2 tyD(x, y) + f(y)□yD(x, y)] = f(x) − Z [ti,tf ]⊗V dyD(x, y)□f(y) + Z [ti,tf ]⊗V dy∂ty[D(x, y)∂tf(y) −f(y)∂tyD(x, y)]. The resulting equation f(x) = Z [ti,tf ]⊗V dyD(x, y)g(y) + Z [ti,tf ]⊗∂V dtdS[D(x, y)∇f(y) −f(y)∇yD(x, y)] − Z V d3y[D(x, y)∂tf(y) −f(y)∂tyD(x, y)] tf ti (5.20) expresses the solution in terms of the boundary conditions, the value of the function f and its derivatives on the boundary of the space-time region where the equation (5.17) is to be solved. 44 CHAPTER 5. GREEN FUNCTIONS 5.4 Retarded and advanced solutions The definition (5.18) determines the Green-function up to a null-space function, a solution of the homogeneous equation. It is easy to see that the solution (5.20) is well defined and is free of ambiguity. We turn now the more formal method to make the Green-function well defined by introducing an infinitesimal imaginary part. To see better the role of the boundary conditions in time let us drop the spatial boundary conditions by extending the three-volume where the solution of Eq. (5.17) is sought to infinity. The Fourier representation of the Green-function is ˜ D(k) = −1 k2 (5.21) for k2 ̸= 0 because Z d4k (2π)4 (−k2)e−ikµxµ ˜ D(k) = Z d4k (2π)4 e−ikµxµ. (5.22) To make this integral well defined we have to avoid the singularities of ˜ D(k2) by some infinitesimal shift of the singularities in the complex frequency plane, k0 →k0 ± iǫ. The different modifications of the propagator in the vicinity of k2 = 0 introduce different additive homogeneous solutions of Eq. (5.17) in the Green-function. 
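A one-dimensional toy model of this pole prescription, added here for illustration, is the operator $d_t^2 + \omega_0^2$: pushing both poles of its propagator below the real frequency axis produces a Green function supported at $t > 0$ only, which is precisely the retarded choice introduced next. The frequency grid and the value of $\epsilon$ below are arbitrary numerical choices.

import numpy as np

w0, eps = 1.0, 0.02
w = np.linspace(-200.0, 200.0, 200001)          # frequency grid
dw = w[1] - w[0]
G_tilde = 1.0 / (w0**2 - (w + 1j * eps)**2)     # both poles shifted below the real axis

def G(t):
    # numerical inverse Fourier transform, G(t) = int dw/(2 pi) e^{-i w t} G~(w)
    return np.sum(np.exp(-1j * w * t) * G_tilde).real * dw / (2 * np.pi)

for t in (-2.0, -0.5, 0.5, 2.0):
    retarded = (t > 0) * np.exp(-eps * t) * np.sin(w0 * t) / w0
    print(t, G(t), retarded)                    # ~ 0 for t < 0, a damped sine for t > 0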
Let us introduce first the retarded Green-function, Dr(x, y) ≈Θ(x0 −y0) which is used when the initial conditions are known. It is obtained by shifting the poles of ˜ D(k2) slightly below the real axes on the complex energy plane. In fact, the frequency integral D(k, t) = Z dk0 2π e−ik0t ˜ D(k) (5.23) is non-vanishing just for t > 0. The advanced Green-function is used when the final conditions are known and it is obtained by shifting the poles slightly above the real axis, D r a(x) = − Z d3k (2π)3 Z dk0 2π e−ikµxµ (k0 + |k| ± iǫ)(k0 −|k| ± iǫ) (5.24) 5.4. RETARDED AND ADVANCED SOLUTIONS 45 The explicit calculation gives Dr(x) = − Z d3k (2π)3 eikx Z dk0 2π e−ick0t (k0 + iǫ −|k|)(k0 + iǫ + |k|) = i Z d3k (2π)3 eikx e−ickt 2k −eickt 2k  = i (2π)3 Z dkk2dφd(cos θ)eikr cos θ e−ickt −eickt 2k = i (2π)2 Z dkk2 eikr −e−ikr ikr e−ickt −eickt 2k = 1 2(2π)2r Z ∞ 0 dk(eikr −e−ikr)(e−ickt −eickt) = 1 8πr Z ∞ −∞ dk 2π (eik(r−ct) + eik(−r+ct) −e−ik(r+ct) −eik(r+ct)) = 1 4πr[δ(−r + ct) −δ(r + ct)] (t > 0) = δ(ct −r) 4πr (t > 0) (5.25) and Da(x) = − Z d3k (2π)3 eikx Z dk0 2π e−ick0t (k0 −iǫ −|k|)(k0 −iǫ + |k|) = i Z d3k (2π)3 eikx eickt 2k −e−ickt 2k  = δ(r + ct) −δ(−r + ct) 4πr (t < 0) = δ(ct + r) 4πr . (5.26) Finally we have f(x) = Z d4yD r a(x −y)g(y) + f in out(x) = Z d4y δ(x0 −y0 ∓|x −y|)g(y) 4π|x −y| + f in out(x) = Z d3y g  tx ∓|x−y| c , y  4π|x −y| + f in out(x) (5.27) where f(t i f, x) = f in out(t i f, x) and □f in out = 0. It is easy to find the relativistically 46 CHAPTER 5. GREEN FUNCTIONS invariant form of the Green functions, D r a(x) = Θ(±t)δ(ct ∓r) 4πr = Θ(±t)δ(ct + r) + δ(ct −r) 4πr = Θ(±t)δ(c2t2 −r2) 2π = Θ(±x0)δ(x2) 2π . (5.28) There is no dynamical issue in choosing one or other solution. The trivial guiding principle in selecting a Green-function is the information we possess, the initial the final conditions. One can imagine another, rather unrealistic problem where the sum agin(x) + (1 −a)gout(x) is known to the solution of Eq. (5.17). Then the solution is f(x) = Z dy[aDr(x, y)+(1−a)Da(x, y)]g(y)+agin(x)+(1−a)gout(x). (5.29) We are accustomed to think in terms of initial rather than final conditions and therefore use the retarded solutions. This is due to the experimental fact that the homogeneous solution of the Maxwell-equation, the incoming radiation field is negligible compared to the final, outgoing field after some local manipu-lation. The deep dynamical question is why is this the case, why is the radiation field rather weak for t →∞when the basic equations of motion are invariant with respect to the inversion of the direction of the time. Since Da(x, y) = Drtr(x, y) = Dr(y, x) the symmetric and antisymmetric Green-functions D n f = 1 2(Dr ± Da) (5.30) give the solutions of the inhomogeneous and homogeneous equation, respec-tively. The inhomogeneous Green-functions are connected by the relation D r a(x, y) = 2Dn(x, y)Θ(±(x0 −y0)) (5.31) where the near field Green function is Dn(x) = δ(x2) 4π (5.32) according to Eq. (5.28). The Fourier representation of the homogeneous Green 5.4. RETARDED AND ADVANCED SOLUTIONS 47 Figure 5.1: Huygens principle for a wave front. function can be obtained in an obvious manner, Df(x) = 1 4π δ(x2)ǫ(x0) = −i 2 Z d3k (2π)3 eikx eickt 2k −e−ickt 2k  = i 2 Z d4k (2π)3 e−ikx δ(k0 −|k|) −δ(k0 + |k|) 2|k| = i 2 Z d4k (2π)3 eikxδ(k2)ǫ(k0) (5.33) where ǫ(x) = sign(x). 
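As a cross-check of the retarded solution (5.27), one can verify directly that the wave it produces away from the source obeys the homogeneous wave equation. The sympy computation below is an added illustration for a monochromatic point source, for which (5.27) gives, up to normalization, $f = \cos\!\big(\omega(t - r/c)\big)/(4\pi r)$; the check is done in the radial form of the wave operator.

import sympy as sp

t, r = sp.symbols('t r', positive=True)
c, w = sp.symbols('c omega', positive=True)

f = sp.cos(w * (t - r / c)) / (4 * sp.pi * r)     # outgoing spherical wave from (5.27)
box_f = sp.diff(f, t, 2) / c**2 - sp.diff(r**2 * sp.diff(f, r), r) / r**2
print(sp.simplify(box_f))                          # 0 for r != 0; the delta function sits at the origin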
A useful relation satisfied by this Green-function is ∂x0D(x)x0=0 = 1 2 Z d3k (2π)3 eikxk0 δ(k0 −|k|) −δ(k0 + |k|) 2|k| = 1 2 Z d3k (2π)3 eikx 1 2[δ(k0 −|k|) + δ(k0 + |k|)] = 1 2δ(x) (5.34) The far field, given by Df is closely related to the radiation field. The expressions A r a(x) = 4π c Z dyD r a(x, y)j(y) + A in out(x) (5.35) suggest the definition Arad = Aout −Ain = 2Af. (5.36) Let us close this discussion with a remark about the Huygens principle stat-ing that the wavefront of a propagating light coincide with the envelope of spherical waves emitted by the points of the wavefront at an earlier time. This implies a fixed propagation speed. The retarded Green-function for d space-time dimensions Dr(x) = ( 1 2πd/2−1 Θ(x0)( d dx2 )(d/2−2)δ(x2), (−1) d−3 2 1 2π2/d Γ( d 2 −1)Θ(x0 −|x|)(x2)1−d/2, v = ( c d even ≤c d odd. (5.37) 48 CHAPTER 5. GREEN FUNCTIONS shows that the propagation of the EM wave is restricted to the future light cone in even dimensional space-times only. For odd space-time dimensions the speed of the propagation is not fixed, special relativity takes a radically different form and the Huygens principle is violated. Chapter 6 Radiation of a point charge We consider in this chapter a point charge following a prescribed world line and and determine the induced electromagnetic field. 6.1 Li´ enard-Wiechert potential As the first step we seek the electromagnetic field Aµ(x) at x = (t, x) created by a point charge e following the world line xµ(s). The current is jµ(x) = ce Z dsδ(x −x(s)) ˙ xµ(s) (6.1) and Aµ in = 0. It is easy to check that at any point x we have a single, definite event on the world line which contribute either to the retarded or to the advanced radiation. In fact, the world-line, having time-like tangent vector can traverse of the future or the past light cone of any point at a single event only as shown in Fig. 6.1. We shall find the answer in two different ways, by a simple heuristic argument and by a more general and complicated manner. Heuristic method: The charge at x′ = (t′, x′) can contribute to this field if the difference x −x′ is a null-vector, ct −ct′ = ±|x −x′| (6.2) (+:retarded propagation, −:advanced propagation). In the coordinate system where the charge is at rest at the emission of the electromagnetic field we have φ = e |x −x′|, A = 0. (6.3) Let us generalize this expression for an arbitrary inertial system, in particular where the four vector of the charge in the retarded or advanced time is ˙ xµ = dxµ(s) ds = (c, v) dt ds (6.4) 49 50 CHAPTER 6. RADIATION OF A POINT CHARGE x(s) x x xret av Figure 6.1: The observer at x receives the signal emitted from the point xret or xav for the retarded or the advanced propagation, respectively. For this end we introduce the four-vector Rµ = (ct −ct′, x −x′) and write A r aµ = ± e ˙ xµ R · ˙ x (6.5) Due to R · ˙ x = (rc −rv) dt ds, r = |x −x′| we have φ = ± e r −r·v c , A = ± ev c(r −r·v c ). (6.6) The part O v0 is the static Coulomb potential, the v-dependent pieces in the denominator represent the retardation or advanced effects. Finally, A gives the magnetic field induced from the Coulomb potential by the Lorentz boost. 
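Before turning to the systematic derivation, the heuristic result (6.6) is easy to evaluate numerically once the retarded time is found from the light-cone condition (6.2). The sketch below is an added illustration for a charge on a prescribed circular orbit; the orbit parameters, the observation point and the units ($c = e = 1$, Gaussian conventions) are all arbitrary choices.

import numpy as np

c, e = 1.0, 1.0
R0, omega = 0.1, 1.0                          # orbit radius and angular frequency, so v = 0.1 c

def x_charge(tp):                             # prescribed trajectory of the charge
    return np.array([R0 * np.cos(omega * tp), R0 * np.sin(omega * tp), 0.0])

def v_charge(tp):
    return np.array([-R0 * omega * np.sin(omega * tp), R0 * omega * np.cos(omega * tp), 0.0])

def retarded_time(t, x):
    # unique root of c (t - t') = |x - x_charge(t')| for v < c, found by bisection
    f = lambda tp: c * (t - tp) - np.linalg.norm(x - x_charge(tp))
    lo, hi = t - 100.0, t                     # f(lo) > 0 > f(hi) for this geometry
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

t_obs, x_obs = 0.0, np.array([5.0, 0.0, 0.0])
tr = retarded_time(t_obs, x_obs)
R = x_obs - x_charge(tr)
r = np.linalg.norm(R)
v = v_charge(tr)

phi = e / (r - R @ v / c)                     # retarded scalar potential, Eq. (6.6)
A = e * v / (c * (r - R @ v / c))             # retarded vector potential, Eq. (6.6)
print(tr, phi, A)

For $v \to 0$ the output reduces to the static Coulomb values $\varphi = e/r$, $\mathbf{A} = 0$, which is a useful sanity check of the retardation bookkeeping.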
The more systematical way of obtaining the induced field is based on the use of the Green-functions, A r aµ(x) = e4π Z dx′ Z dsD r a(x −x′)δ(x′ −x(s)) ˙ xµ(s) = 2e Z dx′ Z dsδ((x −x′)2)Θ(±(x0 −x ′0))δ(x′ −x(s)) ˙ xµ(s) = 2e Z dsδ((x −x(s))2)Θ(±(x0 −x0(s))) ˙ xµ(s) (6.7) x−x(s r a) can be written as the linear superposition of two orthogonal vectors, x −x(s r a) = (± ˙ x + w)R r a (6.8) where w is space-like. Since (x −x(s r a))2 = 0, ˙ x2 = 1 and ˙ x · w = 0 we have w2 = −1 and R r a = −w · (x −x(s r a)) = ± ˙ x · (x −x(s r a)) (6.9) 6.2. FIELD STRENGTHS 51 The use of the rule δ(f(x)) →δ(x −x0)/|f ′(x0)| where f(x0) = 0 and the relation d(x −x(s))2 ds = ∓2R r a (6.10) gives A r aµ(x) = e ˙ xµ(s r a) R r a . (6.11) 6.2 Field strengths The field strength is obtained by calculating the space-time derivatives of the Li´ enard-Wiechert potential (6.11), ∂µAν(x) = e4π Z dx′ Z ds∂xµDr(x −x′)δ(x′ −x(s)) ˙ xν(s) = e4π Z ds∂Dr(x −x(s)) ∂(x −x(s))2 ∂(x −x(s))2 ∂xµ ˙ xν(s) = e8π Z ds∂Dr(x −x(s)) ∂s ∂s ∂(x −x(s))2 | {z } 1/[−2(x−x(s))· ˙ x(s)] (x −x(s))µ ˙ xν(s) = −e4π Z ds∂Dr(x −x(s)) ∂s (x −x(s))µ ˙ xν(s) (x −x(s)) · ˙ x = −e4πDr(x −x(s))(x −x(s))µ ˙ xν(s) (x −x(s)) · ˙ x ∞ −∞ | {z } =0 +e4π Z dsDr(x −x(s)) ∂ ∂s (x −x(s))µ ˙ xν(s) (x −x(s)) · ˙ x(s) = 2e Z dsδ((x −x(s))2)Θ(x0 −x0(s)) ∂ ∂s (x −x(s))µ ˙ xν(s) (x −x(s)) · ˙ x(s) = e 1 (x −x(s)) · ˙ x(s) ∂ ∂s (x −x(s))µ ˙ xν(s) (x −x(s)) · ˙ x(s) |s=sr (6.12) The introduction of the scalar Q = (x −x(s)) · ¨ x(s) = R(± ˙ x + w) · ¨ x(s) = Rw · ¨ xµ(s) (6.13) allows us to write F µν = e R3 [(x −x(s))µ¨ xνR −˙ xµ ˙ xνR −(x −x(s))µ ˙ xνQ + ˙ x · ˙ x(x −x(s))µ ˙ xν −(µ ↔ν)] = e R3 [(x −x(s))µ¨ xνR −(x −x(s))µ ˙ xνQ + (x −x(s))µ ˙ xν −(µ ↔ν)] = e R2 [(± ˙ x + w)µ¨ xνR −(± ˙ x + w)µ ˙ xνQ + (± ˙ x + w)µ ˙ xν −(µ ↔ν)] = e R2 [(± ˙ x + w)µ¨ xνR −wµ ˙ xνRw · ¨ x + wµ ˙ xν −(µ ↔ν)] (6.14) 52 CHAPTER 6. RADIATION OF A POINT CHARGE The field strength is the sum of O R−1 and O R−2 terms, called far and near fields. The tree-dimensional notation is introduced by ˙ x = (c, v) dt ds, R r a = ±(±rc −rv) dt ds = (rc ∓rv) dt ds = r ∓rβ p 1 −β2 (6.15) where β = v c . The formally introduced spatial unit vector w is determined by the condition (±r, r) = R  ±(c, v) dt ds + w  (6.16) which yields w = 1 R(±r, r) ∓(c, v) dt ds = (±r, r) (rc ∓rv) ds dt ∓(c, v) dt ds (6.17) which reads w = r (r ∓rβ) p 1 −β2 ∓ β p 1 −β2 w0 = ± r (rc ∓rv) ds dt ∓c dt ds = ± " r (r ∓rβ) p 1 −β2 − 1 p 1 −β2 # (6.18) in three-dimensional notation. The near-field depends on the coordinate and the velocity, F nµν = e R3 [(x −x(s))µ ˙ xν −(x −x(s))ν ˙ xµ] = e R2 [wµ ˙ xν −wν ˙ xµ] (6.19) 6.3. DIPOLE RADIATION 53 The near electric field, Ej = F j0, is En = e R2 [w ˙ x0 −w0 ˙ x] = e(1 −β2) (r ∓rβ)2  r (r ∓rβ) p 1 −β2 ∓ β p 1 −β2 ! 1 p 1 −β2 ∓ r (r ∓rβ) p 1 −β2 − 1 p 1 −β2 ! β p 1 −β2  = e(1 −β2) (r ∓rβ)2  r (r ∓rβ) ∓ β 1 −β2  ∓  r (r ∓rβ) − 1 1 −β2  β  = e(1 −β2) (r ∓rβ)2  r (r ∓rβ) ∓ rβ (r ∓rβ)  = e(1 −β2)(r ∓rβ) (r ∓r · β)3 = e(1 −v2 c2 )(r ∓r v c ) (r ∓r·v c )3 (6.20) The far-field depends on the acceleration as well, F fµν = e R3 [(x −x(s))µ(¨ xνR −˙ xνQ) −(x −x(s))ν(¨ xµR −˙ xµQ)] (6.21) and Ef = er × [(r −r v c ) × a] c2(r −r·v c )3 , (6.22) where a = dv dt . We have the relation H = r × E r (6.23) for both fields. 6.3 Dipole radiation The complications in obtaining the Li´ enard-Wiechert potential come from the retardation. Thus it is advised to see the limits when the retardation effects are weak and the final result can be expanded in them. 
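Before turning to the dipole limit, the structure of the far field in Eqs. (6.22)-(6.23) can be verified symbolically: $\mathbf{E}_f$ is transverse to $\mathbf{r}$, and $\mathbf{H} = \mathbf{r}\times\mathbf{E}/r$ is then orthogonal to it and of equal magnitude. The sketch below checks this with generic symbolic vectors; it is an independent consistency check, not a step of the derivation above.

```python
# Symbolic check of Eqs. (6.22)-(6.23): transversality of the far field and
# the relation H = r x E / r implying H.E = 0 and |H| = |E|.
import sympy as sp

e, c = sp.symbols('e c', positive=True)
r = sp.Matrix(sp.symbols('r1 r2 r3', real=True))
v = sp.Matrix(sp.symbols('v1 v2 v3', real=True))
acc = sp.Matrix(sp.symbols('a1 a2 a3', real=True))

rn = sp.sqrt(r.dot(r))                                                               # |r|
E_far = e * r.cross((r - rn * v / c).cross(acc)) / (c**2 * (rn - r.dot(v) / c)**3)   # Eq. (6.22)
H = r.cross(E_far) / rn                                                              # Eq. (6.23)

print(sp.simplify(r.dot(E_far)))                  # 0: far field transverse to r
print(sp.simplify(H.dot(E_far)))                  # 0: H orthogonal to E
print(sp.simplify(H.dot(H) - E_far.dot(E_far)))   # 0: |H| = |E| for the far field
```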
Let us suppose that the characteristic time and distance scales of the prescribed charge distribution are $t_{ch}$ and $r_{ch}$, respectively. The period of the radiation is approximately $t_{ch}$, yielding the wavelength $\lambda = ct_{ch}$. The retardation time is the time needed for the EM wave to traverse the charge distribution, $t_{ret} = r_{ch}/c$. The retardation effects are therefore weak for $t_{ret}/t_{ch} \ll 1$, which gives $r_{ch} \ll \lambda$. Another way to express this inequality is to introduce the characteristic speed of the charge system, $v_{ch} = r_{ch}/t_{ch}$, and to write $\lambda = c r_{ch}/v_{ch}$, which yields $v_{ch} \ll c$.

We assume that these inequalities hold and consider the leading order effect of the retardation on the retarded Liénard-Wiechert potential (6.11),
$$A^\mu(x) = \sum_a \frac{e_a \dot x^\mu_a(s^r_a)}{R^r_a}. \qquad (6.24)$$
It is sufficient to find the magnetic field,
$$\mathbf{A}(x) = \frac{1}{cr}\sum_a e_a \mathbf{v}_a\left[1 + \mathcal{O}\left(\frac{|v|}{c}\right)\right] \qquad (6.25)$$
where $r$ denotes the distance between the observation point and the center of the charges and $\mathbf{v}_a$ stands for the velocity of the charge $a$ at the time of observation. Since
$$\sum_a e_a\mathbf{v}_a = \frac{d}{dt}\sum_a e_a\mathbf{x}_a = \frac{d\mathbf{d}}{dt} \qquad (6.26)$$
where $\mathbf{d}$ is the dipole moment of the charge system, we have
$$\mathbf{A}(x) = \frac{1}{cr}\frac{d\mathbf{d}}{dt} \qquad (6.27)$$
in the leading order. Then the magnetic field is given by
$$\mathbf{H} = \nabla\times\frac{1}{cr}\frac{d\mathbf{d}(t-\frac{r}{c})}{dt} = -\frac{1}{cr^2}\,\mathbf{n}\times\frac{d\mathbf{d}}{dt} - \frac{1}{c^2 r}\,\mathbf{n}\times\frac{d^2\mathbf{d}}{dt^2} \qquad (6.28)$$
which reduces to the far field
$$\mathbf{H} = \frac{1}{c^2 r}\frac{d^2\mathbf{d}}{dt^2}\times\mathbf{n} = \frac{1}{c^2 r}\sum_a e_a \mathbf{a}_a\times\mathbf{n} \qquad (6.29)$$
for the retarded solution. Since the vectors $\mathbf{E}$, $\mathbf{H}$ and $\mathbf{n}$ form an orthogonal basis, we have
$$\mathbf{E} = \frac{1}{c^2 r}\left(\frac{d^2\mathbf{d}}{dt^2}\times\mathbf{n}\right)\times\mathbf{n} = \frac{1}{c^2 r}\sum_a e_a(\mathbf{a}_a\times\mathbf{n})\times\mathbf{n}. \qquad (6.30)$$
The far field, the dipole radiation, depends on the acceleration of the charges only.

The radiation power passing through a surface $d\mathbf{f}$ is
$$dI = \mathbf{S}\cdot d\mathbf{f} \qquad (6.31)$$
where $\mathbf{S}$ is the Poynting vector, and is given by
$$dI = \frac{c}{4\pi}H^2 r^2 d\Omega \qquad (6.32)$$
according to Eq. (4.44), where $d\Omega$ denotes the solid angle. In the case of the dipole radiation we find
$$dI = \frac{1}{4\pi c^3}\left(\frac{d^2\mathbf{d}}{dt^2}\times\mathbf{n}\right)^2 d\Omega = \frac{1}{4\pi c^3}\left(\frac{d^2\mathbf{d}}{dt^2}\right)^2\sin^2\theta\, d\Omega \qquad (6.33)$$
where $\theta$ is the angle between $\frac{d^2\mathbf{d}}{dt^2}$ and $\mathbf{n}$. The total radiated power is obtained by integrating over the solid angle,
$$I = \int d\phi\int d(\cos\theta)\,\frac{1}{4\pi c^3}\left(\frac{d^2\mathbf{d}}{dt^2}\right)^2\sin^2\theta = \frac{2}{3c^3}\left(\frac{d^2\mathbf{d}}{dt^2}\right)^2. \qquad (6.34)$$
For a single charge we have
$$I = \frac{2e^2 a^2}{3c^3} \qquad (6.35)$$
(J. Larmor, 1897).

Chapter 7

Radiation back-reaction

The charges and the electromagnetic fields interact in electrodynamics. The full dynamical problem, where both the charges and the electromagnetic field are allowed to follow the time dependence described by their dynamics, the mechanical equation of motion and the Maxwell equations, is quite a wonderful mathematical problem. A simpler question arises when the motion is partially restricted: one member of this system is forced to follow a prescribed time dependence and only the other is allowed to follow its own dynamics. For instance, the world lines of a point charge moving in the presence of a fixed electromagnetic field can easily be found by integrating the equation of motion (4.4). The use of the Green functions provides the solution for a number of engineering problems in electrodynamics where the electromagnetic fields are sought for a given charge distribution. We devote this chapter to a question whose complexity lies in between the full and the restricted dynamical problems but which appears to be a more fundamental issue.

7.1 The problem

Let us consider a charge moving under the influence of a nonvanishing external force. The force accelerates it and in turn radiation is emitted.
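The power of the emitted radiation is given by Larmor's formula (6.35) derived above. The angular integral that turns the distribution (6.33) into the total power (6.34) is a one-line symbolic check; the sketch below verifies the factor $8\pi/3$ and hence the coefficient $2/(3c^3)$.

```python
# Angular integral behind Eqs. (6.33)-(6.34): the sin^2(theta) distribution
# integrated over the full solid angle gives 8*pi/3.
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
solid_angle = sp.integrate(sp.sin(theta)**2 * sp.sin(theta),   # sin^2 times the measure sin(theta)
                           (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
print(solid_angle)                                             # 8*pi/3

ddot_d, c = sp.symbols('ddot_d c', positive=True)
total_power = sp.Rational(1, 4) / sp.pi / c**3 * ddot_d**2 * solid_angle
print(sp.simplify(total_power))                                # 2*ddot_d**2/(3*c**3), Eq. (6.34)
```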
The radiation carries some energy and momentum, which is lost in the supposedly infinite space surrounding the charge. Thus the energy and momentum of our charge are changed, and we have to assume that there is some additional force acting on the charge. The very question is rather perplexing, because one would have thought that the equation of motion for the charge, Eq. (4.4), containing the Lorentz force as the second term on the right hand side, is the last word in this issue. There is apparently another force in the "true" equation of motion! The complexity of this problem explains why this is perhaps the last open chapter of classical electrodynamics.

There is a further, even more disturbing question. Does a point-like charge interact with the electromagnetic field induced by its own motion? It would be better if it did not, otherwise we run into the problem of singularities, such as a point charge at rest at the singular point of its own Coulomb field. But the electric energy of a given static charge distribution $\rho(x)$,
$$E = \frac{1}{2}\int d^3x\, d^3y\, \frac{\rho(x)\rho(y)}{|x-y|}, \qquad (7.1)$$
suggests that the answer is affirmative.

The radiation reaction force, whose derivation is the goal of this chapter, touches a number of subtle issues.

1. The problem of the limit $r_0 \to 0$, where $r_0$ is the characteristic size of the charge distribution of a particle, raises the possibility that the limits $\hbar \to 0$ and $r_0 \to 0$ do not commute. In fact, in discussing a point charge in classical electrodynamics one tacitly takes the limit $\hbar \to 0$ first, to get the laws of classical electrodynamics, and the limit $r_0 \to 0$ is performed at the end. But a strongly localized particle induces quantum effects which should be taken into account by keeping $\hbar$ finite, i.e. we should start with the limit $r_0 \to 0$ to introduce a point-like particle in quantum mechanics, and the classical limit $\hbar \to 0$ should be performed only at the end. There are no point charges in this scheme, because even if one starts with a strictly point-like charge, the unavoidable vacuum-polarization effect generates a charge density polarization cloud of the size of the Compton wavelength around the point charge.

2. Are there regular solutions at all for the set of coupled equations for point charges and the electromagnetic field? It may happen that some smearing, provided by the unavoidable vacuum polarization of quantum electrodynamics, is needed to render the solution of the classical equations of motion regular.

3. The existence of the radiation back-reaction force is beyond doubt, but its derivation is non-trivial. It is a friction force, describing the loss of energy to the radiation field, and cannot be derived from a variational principle, i.e. it is not present in the usual variational system of equations of motion of classical electrodynamics.

4. The energy radiated out by the charge cannot be recovered anymore in an infinite system. Thus the sign of the radiation reaction force represents a dynamical breakdown of the time inversion invariance of the basic laws of electrodynamics.

5. The radiation back-reaction force acting on point charges can be calculated exactly and turns out to be proportional to the third derivative of the coordinates. Such a force generates self-accelerating motion, which is unacceptable.

Figure 7.1: A body moving in a viscous fluid.

7.2 Hydrodynamical analogy

Before embarking on the detailed study of classical electrodynamics, let us consider a simpler, related problem in hydrodynamics, in another classical field theory.
We immerse a spherical rigid body of mass M in a viscous fluid as depicted in Fig. 7.1. What is the equation of motion for the center of mass x of this body? The naive answer, M d2x dt2 = Fext (7.2) the right hand side being the external force acting on the body is clearly in-adequate because it ignores the environment of the body. The full equation of motion must contain a rather involved friction force Ffl(v), M dv dt = Fext + Ffl(v) (7.3) where v is the velocity of the body. There are two ways to find the answer. The direct, local one is to construct the force Fext(v) the fluid exerts on the body by the detailed study of the flow in is vicinity. If we have not enough information to accomplish this calculation then another, more indirect global possibility is to calculate the total momentum Pfl(v) of the fluid which is usually easier to find and to set Ffl(v) = −d dtPfl(v), (7.4) or equivalently to state that the total momentum of the body is Ptot(v) = Mv + Pfl(v). (7.5) We shall find the electromagnetic analogy of both schemes in the rest of this chapter. 7.3 Radiated energy-momentum We start the establishment of the energy-momentum balance for accelerating charges by considering the energy loss for a slow, non-relativistic charges. In the 60 CHAPTER 7. RADIATION BACK-REACTION absence or other intrinsic scales the EM field generated by slow motion agrees with the dipole radiation and the far field expression (6.22)-(6.23) gives the total radiation power (6.35), indicating the presence of forces acting on accelerating charges. We regard now the EM field of a point charge in a more detailed manner. The field strength of the general, relativistic case satisfies few important relations. The equation ˜ F µνFµν = 1 2ǫµνρσFµνFρσ = 0 (7.6) follows from the symmetry of FµνFρσ for µ ↔σ according to Eq. (6.14). In a similar manner, ˜ F ρσ(x −x(s))σ = 1 2ǫµνρσFµν(x −x(s))σ = 0 (7.7) follows from the symmetry of Fµν(x −x(s))σ for ν ↔σ. Thus H ⊥E, H ⊥ x −x(s) and E ⊥x −x(s). The far field satisfies beyond Eqs. (7.6)-(7.7) the conditions F f µνF fµν = 0 (7.8) and F f µν(x −x(s))ν = 0, (7.9) ie. |Hf| = |Ef|. The null-field is defined by the properties F µνFµν = ˜ F µνFµν = 0. The far field is null-field, therefore its energy-momentum tensor is T fµν = 1 4π F fµσF fν σ = e2 4πR6 [(x −x(s))µ(¨ xσR −˙ xσQ) −(x −x(s))σ(¨ xµR −˙ xµQ)] ×[(x −x(s))σ(¨ xνR −˙ xνQ) −(x −x(s))ν(¨ xσR −˙ xσQ)]. (7.10) Due to (x −x(s))2 = 0 we have T fµν = e2 4πR6 [(x −x(s))µ(¨ xνR −˙ xνQ)(¨ xR −˙ xQ) · (x −x(s)) −(x −x(s))µ(x −x(s))ν(¨ xR −˙ xQ)2 +(¨ xµR −˙ xµQ)(x −x(s))ν(x −x(s)) · (¨ xR −˙ xQ)]. (7.11) The relation (¨ xR −˙ xQ) · (x −x(s)) = {¨ x[ ˙ x · (x −x(s))] −˙ x[¨ x · (x −x(s))]} · (x −x(s)) = [¨ x · (x −x(s)][ ˙ x · (x −x(s))] −[ ˙ x · (x −x(s))][¨ x · (x −x(s))] = 0 7.3. RADIATED ENERGY-MOMENTUM 61 allows us to write T fµν = − e2 4πR6 (x −x(s))µ(x −x(s))ν(¨ xR −˙ xQ)2 = − e2 4πR6 (x −x(s))µ(x −x(s))ν(¨ x2R2 + Q2) (7.12) because ˙ x · ¨ x = 0. The radiation reaction four-force acting on the charge, Kµ = −∂νT νµ, can be obtained by considering the integral I of ∂νT νµ over the four-volume V of Fig. 7.2, bounded by the hyper-surfaces S1, C1, S2, C2. For sufficiently far from the charges the far field survives only and we have I = Z dV ∂νT νµ = Z ∂V dSνT νµ = Z S2 dSνT νµ − Z S1 dSνT νµ + Z C1 dSνT νµ + Z C2 dSνT νµ (7.13) for the far field contributions. The last two terms are vanishing because T fνµ ≈ (x−x(s))ν and dSνT νµ = 0 and no energy-momentum crosses the hyper-surfaces C1 and C2. 
It is important to note that this is not true for the near field because T nµν ̸≈(x −x(s))ν, the near-field, eg. Coulomb field moves with the charge. The co-moving nature of the near-field contrasted with the decoupled, freely propagating nature of the far field which defines identifies the radiation field. Since the energy-momentum tensor tµν of the localized charge is vanishing in the integration volume the energy-momentum conservation ∂ν(T νµ+tνµ) = 0 assures I = 0, and ∆P µ = − Z S dSνT fνµ, (7.14) the radiated energy-momentum, is a four-vector and is independent of the choice of the surface S. To calculate ∆P we choose a suitable surface S in such a manner that dSµ is space-like. We write x −x(s) = R ˙ x + y where y · ˙ x = 0 and define dSµ = yµRdΩds. S becomes a sphere of radius R, y2 = −R2 in the rest-frame of the charge at the emission of the radiation for infinitesimal proper length ds and we have ∆P µ = −ds Z dΩRyνT fµν = e2 4π ds Z dΩyν (R ˙ x + y)µ(R ˙ x + y)ν R3  ¨ x2 + Q2 R2  = −e2 4π ds Z dΩ(R ˙ x + y)µ R  ¨ x2 + Q2 R2  . (7.15) 62 CHAPTER 7. RADIATION BACK-REACTION x(t ) 2 x(t ) 1 S S C 2 1 C2 1 Figure 7.2: Energy momentum emitted by an accelerating charge Since Z dΩyµ = 0 (7.16) for a sphere and Q2 = [(x −x(s)) · ¨ x]2 = [(R ˙ x −y) · ¨ x]2 = (y · ¨ x)2 = y2¨ x2 cos2 θ = −R2¨ x2 cos2 θ (7.17) the energy-flux for relativistic charge is ∆P µ = −e2 4π ds ˙ xµ Z dΩ  ¨ x2 + Q2 R2  = −e2 4π ds ˙ xµ¨ x2 Z dΩ(1 −cos2 θ) = −e2 2 ˙ xµ¨ x2ds Z 1 −1 d(cos θ) sin2 θ | {z } 4 3 . (7.18) and ∆P µ ds = −2 3e2 ˙ xµ¨ x2. (7.19) 7.4 Brief history We summarize the stages the radiation reaction force has passed with more attention payed to relatively recent developments. 7.4. BRIEF HISTORY 63 7.4.1 Extended charge distribution The energy of a charge e distributed in a sphere of radius r0 which moves with velocity v, was written by Thomson as E = K +Eed where K = 1 2mmechv2 and Eem = 1 2 Z d3x(E2 + H2), (7.20) and the actual calculation yields Eem = f e2 r0c2 v2 2 , (7.21) f being a dimensionless constant depending on the charge distribution with value f = 2/3 for uniformly distributed charge within the sphere. One can introduce the electromagnetic mass for such charge distribution, med = 2 3 e2 r0c2 (7.22) giving Eem = med 2 v2. (7.23) We thereby recover E = m 2 v2 where m = mmech + med. Assuming pure electro-magnetic origin of the mass, mmech = 0, we have the classical charge radius rcl = 2 3 e2 mc2 , (7.24) the distance where the non-mechanical origin of the mass becomes visible. The next step was made by Lorentz who held the conviction that all elec-trodynamics phenomena arise from the structure of the electron . Larmor’s formule gives ∆EL = 2e2 3c3 Z a2dt = 2e2 3c3 Z d(a · v) dt −da dt · v  dt (7.25) for the energy loss due to radiation. The contribution of the first term in the last equation is negligible for long time and motion with bounded velocity and acceleration and we have ∆EL ≈ −2e2 3c3 Z da dt · vdt = − Z Fraddx (7.26) yielding the first time first an expression for the radiation reaction force, FL = 2e2 3c3 da dt . (7.27) 64 CHAPTER 7. RADIATION BACK-REACTION In another work Lorentz sets out to calculate the direct (Lorentz) force acting on a rigid charge distribution ρ(x) of size r0 due to the radiation back-reaction. He found Frad = ρ  Erad + 1 c v × Hrad  = −4 3c2 1 2 Z ρ(x)ρ(x′) |x −x′| d3xd3x′ | {z } 4 3 med a + 2e2 3c3 da dt | {z } FL −2e2 3c3 ∞ X n=2 (−1)n n!cn dma dtn O rn−1 0  The problems, opened by this result are the following. 1. 
The electromagnetic mass, with its factor 3/2 in front of med, given by Eq. (7.22) differs from Thomson’s result. 2. Higher order derivatives with respect to the time appear in the equation of motion. They contradict with our daily experience in mechanics. 3. There is no place in electrodynamics for cohesive forces, appearing for finite charge distribution, r0 ̸= 0, in equilibrium. 4. The divergence med = ∞in the limit r0 →0 spoils our ideas about point charges. While Lorentz concentrated on the energy loss of the charge system Abraham approached the problem from the point of view of the momentum loss. He identified the momentum of the Coulomb field of an charge in uniform motion by its Poyting’s vector , prad = 1 4πc2 Z (E × H)d3x. (7.28) The actual calculation yields prad = 4 3medv (7.29) where the electromagnetic mass is given by Eq. (7.22). The factor 4/3 is the same as in Lorentz’s expression and is in contradiction with the considerations based on the energy conservation. One way to under-stand its origin is the note that the rigidly prescribed charge distribution, used in these early calculations before 1905 violates special relativity in the absence of Lorentz contraction. Approximately in the same time Sommerfeld calculated the self force acting on a charge distribution ρ(x) in its co-moving coordinate system by ignoring the higher order than linear terms in the acceleration and its time derivatives , Frad = 2 3e2 ∞ X n=0 (−1)n n! cn dn dtn+1 v, (7.30) 7.4. BRIEF HISTORY 65 where cn = Z d3xd3yρ(x)|x −y|nρ(y). (7.31) He considered charges distributed homogeneously on the surface of a sphere of radius r0 when cn = 1 (2r0)n−1 2 n + 1 (7.32) and managed to resum the series which results the non-relativistic equation of motion mdv dt = Fext + med v(t −2a) −v(t) 2r0 (7.33) which is a finite difference equation, the delay time needed to reach the opposite points of the sphere. Soon after the discovery of special relativity Laue has found the relativistic extension of Lorentz’s result (7.27), m¨ xµ = −2 3e2( ˙ xµ¨ x2 + ... x µ), (7.34) an equation to be derived in a more reliable manner later by Dirac. The first term is positive definite and represents the breakdown of the time inversion invariance, the loss of energy due to the ’friction’ caused by the radiation. The second, the so called Schott term can change sign and stands for the emission and absorption processes. A particularly simple, phenomenological argument to arrive at the self force (7.34) is based on the constraint ¨ x · ˙ x = 0 on the world line of a point particle which asserts that the four-force F µ rad = mmech¨ xµ must be orthogonal to the four-velocity. One can easily construct the linearized equation of motion, an orthogonal vector which is linear in the velocity and its derivatives by means of the projector P µν = gµν −˙ xµ ˙ xν (7.35) as F µ rad = P µν ∞ X n=1 an dnxν(s) dsn . (7.36) The use of the derivative of the constraint, ... x · ˙ x + ¨ x2 = 0 gives for the self force truncated at the third derivative mmech¨ xµ ≈a2¨ xµ + a3(... x µ + ˙ xµ¨ x2). (7.37) The mass renormalization m = mmech −a2 eliminates the first term on the right hand side and expresses the physical mass as the sum of the mechanical and part and the contribution from electrodynamics. The comparison with Eq. (7.34) gives a3 = −2e2/3. 66 CHAPTER 7. 
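The structure of Eq. (7.34) and the constraints entering Eqs. (7.35)-(7.37) can be checked on a concrete world line. The uniformly accelerated (hyperbolic) trajectory used below is a standard illustrative choice, not one discussed in the text; units with $c = 1$ and the normalization $\dot x\cdot\dot x = 1$ are used, as in the rest of the chapter.

```python
# Check of the kinematical identities behind the self-force (7.34) on the
# hyperbolic world line x(s) = (sinh(g s)/g, cosh(g s)/g, 0, 0).
import sympy as sp

s, g = sp.symbols('s g', positive=True)       # proper time, proper acceleration
eta = sp.diag(1, -1, -1, -1)                  # metric signature (+,-,-,-)

x = sp.Matrix([sp.sinh(g * s) / g, sp.cosh(g * s) / g, 0, 0])
xd, xdd, xddd = (sp.diff(x, s, n) for n in (1, 2, 3))
dot = lambda u, w: sp.simplify((u.T * eta * w)[0])

print(dot(xd, xd))                                   # 1: normalization of the four-velocity
print(dot(xdd, xd))                                  # 0: acceleration orthogonal to velocity
print(sp.simplify(dot(xddd, xd) + dot(xdd, xdd)))    # 0: the constraint used in Eq. (7.37)
print(sp.simplify(xd * dot(xdd, xdd) + xddd))        # zero vector: the bracket of Eq. (7.34)
```

For this motion the friction term $\dot x^\mu\ddot x^2$ and the Schott term $\dddot x^{\,\mu}$ cancel exactly, a well-known feature of uniform acceleration.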
RADIATION BACK-REACTION The third derivative with respect to the time in the equation of motion presents a new problem, such an equation has runaway, self-accelerating solu-tion, ˙ x0 = cosh[r0(es/r0 −1)], ˙ x1 = sinh[r0(es/r0 −1)], ˙ x2 = ˙ x3 = 0, (7.38) with rc being the classical electron radius (7.24). This is unacceptable. Dirac proposed an additional boundary condition in time for the charges which is needed for the equation of motion with third time derivative. This is to be imposed at the final time and it eliminates the runaway solutions. The problem which renders this proposal difficult to accept is that it generates acausal effects on the motion of the charges, acceleration before the application of the forces. The origin of this problem is the sharp boundary of the homogeneously dis-tributed charge in a sphere when the radius r0 tends to zero. It was shown that both the runaway and the preaccelerating solutions are absent for Sommerfeld’s equation of motion (7.33) as long as r0 > rcl . It is the truncated power se-ries approximation for this finite difference equation which creates the problem in the point charge limit. 7.4.2 Point charge limit We present here Dirac’s work where he returns to the point-like electron and introduces the manifestly Lorentz-covariant separation of the near and far fields, A = Dr · j + Ain = Da · j + Aout Arad = Aout −Ain = Ar −Aa Ar = 1 2Arad | {z } Af + 1 2(Ar + Aa) | {z } An . (7.39) He found that the correction force to the equation of motion comes entirely from Af which is finite and regular at the point charge and the near field is responsible of the divergences arising in the point charge limit. The actual calculation is subtle because the emission of radiation does not commute with the limit r0 →0 since the radiation is constrained onto the future light-cone which can not be pierced by the world-line of a massive particle. In other words, a strictly point charge can not have back reaction force, this latter comes entirely from r0 > 0. We start with F µν rad(s′) = 4πe Z ds[Dr(x −x′) −Da(x −x′) | {z } 1 2π ǫ(x0−x′0)δ((x−x′)2) ] d ds (x −x(s))µ ˙ xν(s) (x −x(s)) · ˙ x(s) −(µ ← →ν) (7.40) 7.4. BRIEF HISTORY 67 where we write s = s′ + u and expand in u, x(s) −x(s′) = u ˙ x + u2 2 ¨ x + u3 6 d3x ds3 + · · · ˙ x(s) = ˙ x + u¨ x + u2 2 d3x ds3 + · · · (x(s) −x(s′))2 = u2 + O u4 (x(s) −x(s′)) · ˙ x = u + O u3 ǫ(x0(s) −x′0(s′)) = ǫ(u) (7.41) to find F µν rad(s′) = 2e Z dsǫ(u)δ(u2) d du  ˙ x + u 2 ¨ x + u2 6 d3x ds3 µ  ˙ x + u¨ x + u2 2 d3x ds3 ν −(µ ← →ν) = 2e Z duǫ(u)δ(u2) d du  u ˙ xµ¨ xν + u 2 ¨ xµ ˙ xν + u2 2 ˙ xµ dxν ds3 + u2 6 dxµ ds3 ˙ xν  −(µ ← →ν). (7.42) The small but finite size of the charge compared with the width of the light cone where the radiation field is constrained is taken into account by the formal steps witnessing the insight of the inventor of the delta function, δ(u2) = lim v→0+ δ(u2 −v2) = lim v→0+ δ(u −v) 2v + δ(u + v) 2v  ǫ(u)δ(u2) = lim v→0+ δ(u −v) 2v −δ(u + v) 2v  = −δ′(u), (7.43) yielding F µν rad(s′) = 2e Z duδ(u) d2 du2 u 2 ˙ xµ¨ xν + u2 3 ˙ xµ dxν ds3  −(µ ← →ν) = 4 3e  ˙ xµ d3xν ds3 −˙ xν d3xµ ds3  (7.44) and Kµ react = mc¨ xµ rad(x′) = e 2cF µν rad(x′) ˙ xν = 2 3e2  ˙ xµ d3x ds3 · ˙ x −d3xµ ds3  . (7.45) Since ˙ x · ˙ x = 1, ¨ x · ˙ x = 0 and d3x ds3 · ˙ x + ¨ x2 = 0 (7.46) 68 CHAPTER 7. RADIATION BACK-REACTION we find finally Kµ react = −2 3e2  ˙ xµ¨ x2 + d3xµ ds3  . 
(7.47) The near field represents no loss or gain in energy and momentum, it rather enriches the structure of the charge by modifying, renormalizing its free equation of motion. Dirac found that Lorentz’s divergent mreact is given by the near-field and gives rise a mass renormalization. To see this we start with the action S = −mbc Z ds + Sed Sed = −e c Z d4xAν(x)jν(x) = −e 2 Z d4x[Ar ν(x) + Aa ν(x)] Z ds ρ(x −x(s)) | {z } form factor ˙ xν(s) (7.48) We write the near field as 1 2[Ar ν(x) + Aa ν(x)] = 4πe Z d4x′ds′ 1 2[Dr(x −x′) + Dadv(x −x′)]ρ(x′ −x(s′)) ˙ xν(s′) = e Z d4x′ds′δ((x −x′)2)ρ(x′ −x(s′)) ˙ xν(s′) (7.49) which yields, upon inserted into the action Sed = −e2 Z d4xd4x′dsds′δ((x −x′)2)ρ(x′ −x(s′))ρ(x −x(s)) ˙ x(s′) · ˙ x(s) = −e2 Z d4wd4w′dsds′δ((w −w′ + x(s) −x(s′))2)ρ(w′)ρ(w) ˙ x(s′) · ˙ x(s). (7.50) This was the decisive step, this action does not contain the Li´ enard-Wiechert potentials anymore, the Green functions were used to eliminate the electromag-netic field from the problem by means of their equations of motion. We follow the limit r0 →, s′ = s + u, ˙ x(s′) = ˙ x + u¨ x + · · · , x(s) −x(s′) = −u ˙ x + · · · and write Sed ≈ −e2 Z dsd4wd4w′duδ((w −w′ −u ˙ x)2)ρ(w′)ρ(w) = −e2 2 Z dsd4wd4w′  1 (w −w′ −uret ˙ x) · ˙ x + 1 (w −w′ −uadv ˙ x) · ˙ x  ρ(w′)ρ(w) = −e2 2 Z ds Z d4wd4w′  1 (w −w′) · ˙ x −uret + 1 (w −w′) · ˙ x −uadv  ρ(w′)ρ(w) | {z } tends to be divergent and independent of s for ρ(x)→δ4(x) = −medc Z ds (7.51) 7.4. BRIEF HISTORY 69 with med = e2 2c Z d4wd4w′  1 (w −w′) · ˙ x −uret + 1 (w −w′) · ˙ x −uadv  ρ(w′)ρ(w) (7.52) What is found is a renormalization of the mass, the combination mph = mb+mel is observable only which sets mb = mph −med. 7.4.3 Iterative solution The coupled equations of motion for the charge and the electromagnetic field can be solved iteratively, by reinserting the Li´ enard-Wiechert potential obtained by means of the solution of the mechanical equation . They set up a pertur-bation expansion in the retardation which comprises the nontrivial effects of the radiation and obtain the radiation force in two steps. First they calculate the effective Lagrangian for the charge, obtained by eliminating the electromagnetic field by the Maxwell equation in order O (v/c). The next order contain the ra-diational friction force and is obtained by iterating the equation of motion. It is reassuring to see that the further iterations in the retardation yield vanishing result in the point charge limit. The retarded Li´ enard-Wiechert potential (which can not be obtained from an action principle due to its non time reflection symmetrical form) leads to the effective Lagrangian L = − X a mac2 r 1 −v2 a c2 − X a eaφ + X a ea c A · va = X a mav2 a 2 + mav4 a 8c2 + O v6 c6  − X a ea Z d3x′ ρ(t −|xa−x′| c , x′) |xa −x′| + X a ea c2 Z d3x′ j(t −|xa−x′| c , x′) |xa −x′| · va (7.53) for a system of charges when the self-interaction is retained. We make an expansion in the retardation by assuming v ≪c, R/c ≪τ, τ being the characteristic time scale of the charges. Note that the factor |xa − x′|n in the higher, O  ( v2 c2 )n order contributions with n ≥3 suppresses the singularity at |xa −x′| = 0. We find φ(t, ra) = X b Z drb ρ(t, rb) Rab −1 c ∂t Z drbρ(t, rb) | {z } Q=const. + 1 2c2 ∂2 t Z drbRabρ(t, rb) −1 6c3 ∂3 t Z drbR2 abρ(t, rb)  + O  1 c4  A(t, ra) = X b 1 c Z drb j(t, rb) Rab −1 c2 ∂t Z drbj(t, rb)  + O  1 c3  , (7.54) 70 CHAPTER 7. 
RADIATION BACK-REACTION what yields φa = X b  eb Rab + eb 2c2 ∂2 t Rab −eb 6c3 ∂3 t R2 ab  Aa = X b  ebvb cRab −eb c2 ∂tvb  (7.55) in the point charge limit. We perform the gauge transformation φ′ a = φa −1 c X b h ∂t  eb 2c∂tRab −eb 6c2 ∂2 t R2 ab i = X b eb Rab = φ′(0) a A′ a = Aa + ∇ X b h eb 2c∂tRab −eb 6c2 ∂2 t R2 ab i = X b   ebvb cRab −eb c2 ∂tvb + eb 2c∇∂tRab −eb 6c2 ∂2 t ∇R2 ab | {z } 2Rab   = A′(1) a + A′(2) a . (7.56) The Lagrangian is L(0) = X a mav2 a 2 −1 2 X a̸=b eaeb Rab (7.57) in the non-relativistic limit, O ( v c )0 after ignoring an unimportant, diverging self energy for a = b. For the next non-relativistic order, O v c  , we need ∇∂tR = ∂t∇R = ∂tn = ∂tR R −R∂tR R2 (7.58) where n = (r −r′)/|r −r′| denotes the unit vector from the charge to the observation point and R∂tR = R∂tR = −Rv, with ∂tn = −v + n(n · v) R . (7.59) One finds φ′(0) a = X b eb Rab A′(1) a = X b eb vb + nb(nb · vb) 2cRab (7.60) and the Lagrangian is L(2) = X a mav2 a 2 + mav4 a 8c2  −1 2 X a̸=b eaeb Rab +1 2 X a̸=b eaeb c2Rab [va·vb+(va·nab)(vb·nab)] (7.61) 7.4. BRIEF HISTORY 71 in this order after a diverging self energy is ignored again for a = b. The next, O ( v c )2 electromagnetic field contains the radiation induced fric-tion force and can not be represented in the Lagrangian. We set R = r −r′, ∂tR = −∂tr′ and write A ′(2) a = X b heb c2 ∂tvb + eb 3c2 ∂tvb i = − X b 2 3 eb c2 ∂tvb. (7.62) In the absence of explicit x-dependence the magnetic field is vanishing in this order, H(2) = 0. The force acting on the charge is of electric origin alone and the self force arises from the electric field E(2) a = −1 c ∂tA ′(2) a −∇φ(2) a |{z} =0 = 2 3 ea∂3 t xa c3 (7.63) The energy loss per unit time is W = X a Fa · va = 3 2 1 c3 X b eb∂3 t xb · X a ea∂txa = 3 2 1 c3 X ab eaeb[∂t(∂2 t xb · ∂txa) −(∂2 t xa · ∂2 t xb)] (7.64) with the time average W = −3 2 1 c3 X ab eaeb(∂2 t xa · ∂2 t xb) (7.65) where the total derivative term can be neglected. The higher order contributions in the retardation become negligible in the point-like charge limit when R →0 and the expression for the radiation reaction force Frr = 2 3 e2∂2 t v c3 (7.66) becomes exact! We see that we recover the second term in the right hand side of the last equation of Eqs. (7.34) but not the first one in this manner, by relying on the retarded potentials. The non-relativistic equation of motion, m∂tv = 2 3 e2∂2 t v c3 (7.67) leads unavoidable to the runaway solution ∂tv = v0et 3 2 mc3 e2 . (7.68) 72 CHAPTER 7. RADIATION BACK-REACTION The equation of motion with the Lorentz-force, corrected by the radiation reaction is m∂tv = eEext + e cv × Hext + 2 3 e2∂2 t v c3 . (7.69) We arrived finally at a central question: at what length scales can we see the radiation reaction forces? The condition for the radiation back-reaction be small and an iterative solution is applicable is the following. In the rest frame ∂2 t v = e m∂tEext + e mc∂tv × Hext + O c−3 . (7.70) Since ∂tv = eEext/m, ∂2 t v = e m∂tEext + e2 m2cEext × Hext + O c−3 (7.71) and the radiation reaction force is Frr = 2 3 e3 mc3 ∂tEext | {z } O  e3ω mc3 E  + 2 3 e4 m2c4 Eext × Hext | {z } O  e4 m2c4 EH  +O c−5 . (7.72) The first term is negligible compared with the force generated by the external electric field for a monochromatic field with frequency ω if |Frr| |Fext| ≈e2ω mc3 ≪1 (7.73) or e2 mc2 ≪c ω = λ 2π . (7.74) Thus classical electrodynamics becomes inconsistent due to pair creations at distances shorter than the classical charge radius, ℓ≈λC = e2/mc2. 
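To attach numbers to Eqs. (7.68) and (7.73)-(7.74): for an electron, the e-folding time of the runaway solution and the frequency at which the radiation reaction would stop being a small correction are both set by the classical charge radius scale. The evaluation below uses standard CGS constants and is an illustration added here, not part of the original text.

```python
# Orders of magnitude for the runaway time constant tau = 2 e^2 / (3 m c^3)
# of Eq. (7.68) and the critical frequency omega ~ m c^3 / e^2 of Eq. (7.73),
# for an electron in Gaussian (CGS) units.
e = 4.803e-10      # esu
m = 9.109e-28      # g
c = 2.998e10       # cm/s

tau = 2.0 * e**2 / (3.0 * m * c**3)
omega_crit = m * c**3 / e**2
print(f"runaway time constant tau     = {tau:.2e} s")               # ~6e-24 s
print(f"critical frequency            = {omega_crit:.2e} rad/s")
print(f"corresponding length c/omega  = {c / omega_crit:.2e} cm")   # the classical radius scale
```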
We note that the second term is negligible,
$$H \ll \frac{m^2c^4}{e^3}, \qquad (7.75)$$
for realistic magnetic fields.

7.4.4 Action-at-a-distance

A different approach to electrodynamics, which might be called an effective theory in the contemporary jargon, is based on the elimination of the electromagnetic field altogether from the theory [14, 15, 16]. Let us write the action of a system of charges, described by their world lines $x^\mu_a(s)$, and the electromagnetic field in a condensed notation as
$$S = \sum_a S_m[x_a] + \frac{1}{2}A\cdot D^{-1}\cdot A - \sum_a j_a\cdot A \qquad (7.76)$$
where the dot stands for space-time integration and index summation, $j\cdot A = \int dx\, j^\mu(x)A_\mu(x)$, etc. The Maxwell equation, $\frac{\delta S}{\delta A} = 0$, yields
$$A = D\cdot j. \qquad (7.77)$$
This equation can be used to eliminate $A$ from the action and to construct the effective theory for the charges with the action
$$S = \sum_a S_m[x_a] + \frac{1}{2}\sum_{ab} j_a\cdot D\cdot D^{-1}\cdot D\cdot j_b - \sum_{ab} j_a\cdot D\cdot j_b
= \sum_a S_m[x_a] - \frac{1}{2}\sum_{ab} j_a\cdot D\cdot j_b
\to \sum_a S_m[x_a] - \frac{1}{2}\sum_{a\neq b} j_a\cdot D\cdot j_b \qquad (7.78)$$
without the electromagnetic field. The elimination of the field degrees of freedom generates action-at-a-distance. The self-interaction was omitted in the last equation.

The Maxwell equation indicates that $D$ should be a Green-function. But which one? According to Dirac's proposal we have near and far field Green-functions,
$$A_{n,f} = \frac{1}{2}(A_r \pm A_a) = \frac{1}{2}(D_r \pm D_a)\cdot j, \qquad (7.79)$$
which motivates the notation $D_{n,f} = \frac{1}{2}(D_r \pm D_a)$. Whatever Green-function we use, only the symmetric part survives, because $A\cdot B\cdot A = 0$ for an antisymmetric operator, $B^{tr} = -B$. Since $D_a(x,y) = D_r(y,x)$, $D_n$ and $D_f$ are just the symmetric and antisymmetric parts of the inhomogeneous propagator, and we have to use $D_n$ in the action principle. The self-interaction generated by the near-field and ignored in the last line of Eq. (7.78) is indeed a world-line independent, divergent term.

The support of the Green-function is the light-cone, therefore the charge $a$ at the point $x_a$ interacts with the charge $b$ if the world-line $x_b(s)$ of the charge $b$ pierces the light-cone erected at the point $x_a$. The interaction is governed by the near-field Green-function and it is 50% retarded and 50% advanced. Such an even distribution of the retarded and advanced interaction assures the formal time inversion invariance.

The unwanted complication of the near-field mediated interaction is that it eliminates the radiation field and the retardation effects. It is a quite cumbersome procedure to add by hand the appropriate free field to the solution which restores the desired initial conditions. The use of the retarded Green-function assumes that the in-fields are weak. This is not the case for the out-fields, and the time inversion symmetry is broken. A sufficiently plausible assumption to explain this phenomenon is the proposition that the Universe is completely absorptive: there is no electromagnetic radiation reaching spatial infinity, due to the elementary scattering processes of the intergalactic dust.

The equation of motion for the charge $a$ in the theory given by the action (7.78),
$$mc\ddot x^\mu_a = \frac{e}{c}F_n^{\mu\nu}\dot x_{a\nu} = \frac{e}{2c}\sum_{b\neq a}\left(F_{r\,b}^{\mu\nu} + F_{a\,b}^{\mu\nu}\right)\dot x_{a\nu}, \qquad (7.80)$$
can be written as
$$mc\ddot x^\mu_a = \frac{e}{c}\left[\sum_{b\neq a}F_{r\,b}^{\mu\nu} + \frac{1}{2}\left(F_{r\,a}^{\mu\nu} - F_{a\,a}^{\mu\nu}\right) - \frac{1}{2}\sum_b\left(F_{r\,b}^{\mu\nu} - F_{a\,b}^{\mu\nu}\right)\right]\dot x_{a\nu}. \qquad (7.81)$$
The first term represents the usual retarded interaction with the other charges, self-interactions ignored. The second term is the regular far field generated by the charge itself and provides the force needed for energy-momentum conservation for radiating charges.
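The statement above that only the symmetric part of the kernel survives in the quadratic form can be illustrated in a finite-dimensional toy setting, with matrices standing in for the continuum Green-function; this is a sketch of the linear algebra only, not of the field theory.

```python
# j . B . j = 0 for an antisymmetric B: only the symmetric ("near-field")
# part of a kernel contributes to a quadratic form such as the one in (7.78).
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(6, 6))           # a generic finite-dimensional "propagator"
D_sym = 0.5 * (D + D.T)               # analogue of D_n
D_antisym = 0.5 * (D - D.T)           # analogue of D_f
j = rng.normal(size=6)

print(j @ D_antisym @ j)              # 0 up to rounding: antisymmetric part drops out
print(j @ D @ j - j @ D_sym @ j)      # 0 up to rounding: only the symmetric part survives
```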
The last expression, the radiation field of all charges, is vanishing in a completely absorbing Universe. The origin of the breakdown of the time reversal invariance, needed for the appearance of the radiation friction force which can be derived without difficulty from Eq. (7.81), is thus located in the absorbing nature of the Universe. Calculations performed in Quantum Electrodynamics in finite, flat space-time support the absorbing Universe hypothesis.

7.4.5 Beyond electrodynamics

A similar radiation back-reaction problem exists in any interacting particle-field theory, for instance gravity, or a more academic model where the interaction is mediated by a massive scalar field. A mass curves the space-time around itself and actually moves in this distorted geometry. Part of the distortion is instantaneous, the analogue of the Coulomb force of electrodynamics; another part displays retardation and represents gravitational radiation. It was found [18, 19] that there is indeed a radiation back-reaction force in gravity, and its additional feature is that it has a non-local component parallel to the four-acceleration, hence the mass is renormalized by a term which depends on the whole past of the motion. It is the special vector algebra which rendered the mass renormalization a past- and time-independent constant in Eq. (7.22) for the electromagnetic interaction in flat space-time. But a conceptual issue which remains to be settled in the gravitational case is that, in general, any explicit use of the space-time coordinates corresponds to a gauge choice; in particular, the form of the self-force one obtains is gauge dependent and not physical. The satisfactory solution of this problem, which is still ahead of us, is to translate all relevant dynamical issues into a gauge invariant, coordinate-choice independent form. The loss of the mass as a constant characterizing the motion of a point particle obviously forces radical changes upon our way of imagining classical physics.

The origin of the non-local nature of the self-force can easily be understood. An external background curvature acts as a mass term for the gravitational radiation. Therefore, the dynamical problem here is like the radiation back-reaction arising from interaction with a massive field. This problem can be analyzed especially easily in the case of a massive scalar field. Its retarded Green function is non-vanishing within the whole future light cone, as opposed to the massless Green function whose support is the future light cone only. Therefore the whole past of the world-line lying within the past light cone of the observation point contributes to the self-force, as opposed to the simple situation of the massless electromagnetic interaction depicted in Fig. 6.1.

7.5 Epilogue

The recent developments in High Energy Physics, namely the construction of effective theories based on the use of the renormalization group, show clearly the origin of the Abraham-Lorentz force. When degrees of freedom are eliminated in a dynamical system by means of their equation of motion, then the equations of motion of the remaining degrees of freedom change. The new terms represent the correlations realized by the eliminated degrees of freedom in the dynamics of the remaining part of the system. When the effect of the self field on a charge is considered, we actually eliminate the EM field and generate new pieces in the equations of motion for the charges.
These are the radiation back-reaction forces; their importance can be estimated systematically by the method of the renormalization group, applied either at the classical or at the quantum level.

Bibliography

[1] J. D. Jackson, Classical Electrodynamics, John Wiley and Sons, New York.
[2] L. D. Landau, E. M. Lifshitz, The Classical Theory of Fields, Vol. 2 (4th ed.), Butterworth-Heinemann.
[3] F. Rohrlich, Classical Charged Particles, Addison-Wesley Publishing Co., Redwood City (1965).
[4] A. O. Barut, Electrodynamics and Classical Theory of Fields and Particles, The MacMillan Co., New York (1964).
[5] J. J. Thomson, Phil. Mag. 11, 229 (1881).
[6] H. A. Lorentz, Enzykl. Math. Wiss. V 1, 188 (1903).
[7] H. A. Lorentz, The Theory of Electrons and Its Applications to the Phenomena of Light and Radiant Heat, Dover, New York (1962).
[8] M. Abraham, Ann. Physik 10, 105 (1903).
[9] A. Sommerfeld, Akad. van Wetensch., Amsterdam 13, 346 (1904).
[10] M. von Laue, Ann. Physik 28, 436 (1909).
[11] F. Rohrlich, Am. J. Phys. 65, 1051 (1997).
[12] E. J. Moniz, D. H. Sharp, Phys. Rev. D 15, 2850 (1977).
[13] P. A. M. Dirac, Proc. Roy. Soc. A 167, 148 (1938).
[14] K. Schwarzschild, Göttinger Nachrichten 128, 132 (1903).
[15] H. Tetrode, Zeits. f. Physik 10, 317 (1922).
[16] A. D. Fokker, Zeits. f. Physik 58, 386 (1929); Physica 9, 33 (1929); Physica 12, 145 (1932).
[17] J. A. Wheeler, R. Feynman, Rev. Mod. Phys. 21, 425 (1949).
[18] Y. Mino, M. Sasaki, T. Tanaka, Phys. Rev. D 55, 3457 (1997).
[19] T. C. Quinn, R. M. Wald, Phys. Rev. D 56, 3381 (1997).
The ALLHAT Report: A Case of Information and Misinformation

Michael A. Weber, MD
From the State University of New York, Downstate College of Medicine, Brooklyn, NY

J Clin Hypertens (Greenwich). 2003 Jan–Feb;5(1):9–13. doi: 10.1111/j.1524-6175.2003.02287.x. PMCID: PMC8099274. PMID: 12556647.

The announcement of the results of the Antihypertensive and Lipid‐Lowering Treatment to Prevent Heart Attack Trial (ALLHAT)1 was literally front page news. After all, this very large clinical outcomes trial in hypertension comparing the diuretic chlorthalidone with newer agents, the angiotensin‐converting enzyme (ACE) inhibitor lisinopril and the calcium channel blocker (CCB) amlodipine, had concluded that the diuretic was superior to the other drugs in preventing major cardiovascular events. More than that, the ALLHAT authors pointed out that because thiazide‐like diuretics like chlorthalidone are so inexpensive, they have the double advantage of being cheaper as well as better than the other drug classes. Right from the moment of publication, though, experts in hypertension were surprised at a result that appeared to be in conflict with data from previous carefully conducted clinical trials. And, as it has become possible to digest the lengthy and detailed ALLHAT report, serious questions have arisen not only as to the accuracy of the original claims, but also as to the propriety of announcing them in so flamboyant a fashion. Those of us who have advocated the value of diuretics, either as single agents or in combination with other drugs, can now feel reassured that they will continue to have a key role in hypertension management.
At the same time, it is important to take a closer look at the ALLHAT data and evaluate the validity of the claims and conclusions published in the formal report,1 not to mention the accompanying press releases. THE MYSTERY OF THE PRIMARY END POINT From the beginning, the chief focus of this study was intended to be coronary events. So much so, that the “HAT” in the title, “ALLHAT,” stands for Heart Attack Trial. The results of the study showed that for the formal primary end point of fatal coronary heart disease and nonfatal myocardial infarction, there were no meaningful differences among the drugs. The study event rate for chlorthalidone was 11.5%, with fractionally lower point estimates for amlodipine (11.3%) and lisinopril (11.4%) despite the fact that systolic blood pressure was not as well controlled in the latter two groups as in the diuretic group. In an almost unprecedented departure from scientific probity, the authors of the ALLHAT report omitted this apparently inconvenient fact from their Conclusion. It is noteworthy that even the New York Times, which in its initial front page coverage of ALLHAT proclaimed the superiority of the diuretic, felt compelled to publish a formal Correction four days later acknowledging that the primary end point of the study was, in fact, similar among the three drugs. The main basis for the claim that chlorthalidone was better than the other two drugs depended on secondary end points. Most notably, the stroke rate with the diuretic was claimed to be lower than with lisinopril, and heart failure was claimed to be reduced when compared with both lisinopril and amlodipine. Close scrutiny of the data supporting these claims has raised some doubts, and it is revealing to explore them further. However, before doing so, it is important to look at how study design issues affected patient management and blood pressure control. STUDY DESIGN AND BLOOD PRESSURE CONTROL Since the goal of ALLHAT was to compare the effects of different antihypertensive agents on clinical end points, it was critical to achieve equal blood pressure effects in each of the three treatment groups so as to ensure that the outcomes benefits of the three drug classes could be validly compared. This intention was reinforced quite clearly in a publication by the ALLHAT study leaders about 1 year before ALLHAT's results were announced.2 So, the difference in systolic blood pressure of 2 mm Hg (actually, slightly higher when weighted for patient‐years of treatment exposure) favoring chlorthalidone over lisinopril was not trivial, but instead has clouded interpretation of many of ALLHAT's clinical outcomes. A recent comprehensive study of the relationships between blood pressure and clinical events, based on observations in one million persons, indicated that blood pressure differences closely similar to those observed in ALLHAT could account for powerful effects on stroke and coronary mortality rates.3 The reason for the blood pressure problem in ALLHAT is easy to explain. Because the study was originally set up to compare the outcomes effects of a diuretic, a CCB, an ACE inhibitor and an α blocker (which was discontinued during the trial and is not discussed further here), the research protocol prohibited the use of agents from these classes when additive treatments were required in patients whose blood pressures did not respond adequately to their primary drug. 
Beta blockers, which were the agents most commonly chosen for this purpose, as well as other agents affecting adrenergic mechanisms, were the main drugs that could be added. This situation clearly helped chlorthalidone, for addition of a β blocker to a diuretic provides a logical and effective blood pressure‐lowering combination. Even for those patients assigned to amlodipine, the addition of a β blocker is useful. But, for the lisinopril group, adding a β blocker is clearly less helpful than a lowdose diuretic or a CCB. For black patients, in whom neither ACE inhibitors nor β blockers are drugs of choice for blood pressure control,4 this caused an even greater shortfall in blood pressure control. This discrepancy cannot be shrugged off as an accidental or unintended result of the study. It was predictable from the time the study organizers decided to attempt multiple comparisons in a single trial rather than the more direct and clinically relevant approach of separate head‐to‐head comparisons. This decision may have been largely driven by considerations of cost and logistics, but obviously it also contrived to prevent equal blood pressure effects in the treatment groups, in essence benefiting chlorthalidone but putting lisinopril at a disadvantage. STROKES AND THE AFRICAN AMERICAN PATIENTS Stroke is perhaps the most feared outcome of hypertension. Compared with the diuretic, the stroke event rate during treatment with amlodipine was actually 7% lower. Although this advantage to the CCB was not statistically significant, it was observed across almost all subgroups in the diverse population included in ALLHAT. On the other hand, chlorthalidone reduced stroke event rate by 15% when compared with the ACE inhibitor lisinopril. But herein lie some of the most interesting and contentious issues in ALLHAT. For a start, stroke event rates were virtually identical for the diuretic and the ACE inhibitor in the nonblack patients, meaning that the entire overall difference between chlorthalidone and lisinopril could be accounted for by the dramatic 40% greater event rate in black patients randomized to lisinopril. There are two explanations for this result in the African American patients. First, due to the treatment selection problems discussed earlier, there was a discrepancy of 4 mm Hg in systolic blood pressure favoring chlorthalidone. This difference by itself could explain much or even all the stroke excess in these high risk patients.3 And, second, there is the fact that the majority of those patients who required additional therapy finished with a combination of an ACE inhibitor and a β blocker, thus getting two drugs with overlapping neurohormonal actions that might not provide the same additive target organ benefits that might be expected when drugs with complementary properties are combined. THE HEART FAILURE CONUNDRUM The overall claim for chlorthalidone's superiority over the other two drugs in preventing clinical end points in ALLHAT was based predominantly on the difficult‐to‐diagnose secondary end point of heart failure. To cardiovascular experts, this finding came as a surprise, particularly the claimed superiority over the ACE inhibitor lisinopril which, based on copious clinical trial evidence as well as its known actions in the circulation, is regarded as the primary treatment for heart failure and would be expected to be highly effective in preventing this condition. Some important questions should be addressed. Is the Heart Failure Claim Credible? 
ALLHAT is not the first clinical trial to compare the effects on heart failure of diuretic‐based treatment with treatments based on ACE inhibitors or CCBs in hypertensive patients. A meta‐analysis carried out by the Blood Pressure Lowering Treatment Trialists' Collaboration,5 which had the strong authority of being based on prospectively designated clinical trials, compared heart failure rates in patients treated with ACE inhibitors with those treated with conventional therapies (diuretics and β blockers). With a total of more than 8000 patients in each group, the investigators found an 8% lower event rate with the ACE inhibitor‐based treatment. Although this difference was not significant, it goes clearly in the opposite direction to that reported in ALLHAT. In a similar comparison between CCBs and conventional therapy, with over 11,000 patients in each group, the heart failure event rate this time was found to be lower in the diuretic‐based group by 12%, though it did not reach statistical significance. Moreover, while favoring the diuretic‐based treatment, the point estimate of this difference was substantially lower than that reported in ALLHAT. These unexpected differences in event rates between the previous studies and ALLHAT could possibly be explained by the blood pressure problems in ALLHAT, but it is also worth considering the accuracy of the diagnosis. Were the Heart Failure Findings Real? Heart failure is a difficult diagnosis to make, even by experienced cardiologists participating in formal heart failure trials, let alone a study like ALLHAT that was conducted to a large extent in community‐based settings. A potentially important problem with the diagnosis of heart failure in ALLHAT was related to the masking effect of diuretic treatment on major fluid‐dependent clinical signs such as rales and peripheral edema and symptoms like dyspnea. For this reason, heart failure in hypertensive patients receiving diuretics can go unrecognized for a considerable period. In ALLHAT, the majority of patients entering the study were already receiving a diuretic, so that those individuals with unsuspected heart failure randomized to the ACE inhibitor or CCB would have been at risk of rapidly losing the masking effects of their previous diuretics and manifesting their heart failure early in the trial. On the other hand, in those patients with hidden heart failure who were randomized to the powerful diuretic chlorthalidone, fluid‐dependent clinical signs and symptoms might have remained suppressed. In fact, examination of the Kaplan‐Meier curves for heart failure event rates with chlorthalidone and lisinopril shows that much of the separation between their effects takes place during the early stages of the study. The situation comparing chlorthalidone with amlodipine is not so clear, but again the masking effect of the diuretic might have played a substantial role in the different event rates between the two treatments. Moreover, one of the common side effects of the CCB is peripheral edema, which is not related to fluid retention but which can misleadingly suggest the appearance of heart failure. This, again, could have added to the possibility of misdiagnosis in ALLHAT, where peripheral edema was regarded as a key physical finding of heart failure and where rigorous confirmation of clinical events was carried out in only a small sampling of patients. One final point should be noted. 
Heart failure is a condition with a high case‐fatality rate, and the substantial benefits claimed for chlorthalidone in preventing heart failure (not to mention some of the other cardiovascular end points) should have resulted in a clear trend toward lower mortality in patients treated with the diuretic. As discussed later, this was definitely not the case. In view of these questions and uncertainties, heart failure seems to be a rather soft and uncertain secondary end point upon which to base the major justification for chlorthalidone's overall superiority claim in ALLHAT. DIABETES: GOOD NEWS AND BAD NEWS One of the interesting outcomes of ALLHAT was that in the comparisons of end points between the diuretic and the other agents there were no major differences in event rates between diabetic and nondiabetic patients. Because of data showing the specific benefits of drugs that interrupt the renin‐angiotensin system in patients with diabetic nephropathy,6, 7 experts had started to recommend that such drugs as ACE inhibitors should be used in all diabetic patients. In ALLHAT, however, there did not appear to be much of an advantage to lisinopril over chlorthalidone in diabetic (as compared with nondiabetic) patients, suggesting that a thiazide agent, as monotherapy, could be as acceptable as ACE inhibitor monotherapy in these high‐risk patients. However, in the absence of detailed renal data in ALLHAT we cannot be certain that this applies to kidney protection. It is possible, though, that the blood pressure difference between lisinopril and chlorthalidone in ALLHAT could have influenced cardiovascular outcomes in chlorthalidone's favor, particularly as these outcomes are so blood pressure‐sensitive in diabetic patients.8 We should also not forget that strong cardiovascular and renal benefits in diabetic patients have occurred when blockers of the reninangiotensin system are combined with diuretics. For these reasons, despite some reassurance from ALLHAT, physicians should generally continue to employ this combination approach for their diabetic patients with hypertension. As would be expected, blood glucose concentrations rose more with the diuretic than with either of the other two agents. For nondiabetic patients entering the study, the 4‐year incidence of new‐onset diabetes in the chlorthalidone group was 11.6%, which represented an 18% increase in relative risk compared with amlodipine (4‐year rate of 9.8%) and a 43% increase in relative risk compared with lisinopril (4‐year incidence of 8.1%). Both of these differences were significant. This increase in new‐onset diabetes in the diuretic group did not translate into increased cardiovascular events during the relatively short period of observation following diagnosis, but this finding should prompt the recommendation that patients at risk of developing diabetes should not be treated with a thiazide alone but rather with a regimen based on a drug that interrupts the renin‐angiotensin system. DEATH: THE INCONTROVERTIBLE END POINT Mortality is the most definite as well as the most important of the secondary end points in ALLHAT, yet it received surprisingly little emphasis in the ALLHAT report and press releases. Compared with chlorthalidone, all‐cause mortality was identical with lisinopril and 4% lower with amlodipine (nonsignificant). For nonblack patients, mortality was 3% and 6% lower with lisinopril and amlodipine. 
Mortality differences favoring the CCB, and especially the ACE inhibitor, very likely would have been greater had it not been for the blood pressure discrepancies in ALLHAT.[3] This is a pivotal issue that speaks directly to the principal conclusion stated in the ALLHAT report. It seems inconsistent, perhaps almost absurd, to have claimed cardiovascular outcomes superiority for chlorthalidone—which can only have real meaning if the drug displays life-saving attributes—when its effects on mortality were actually heading in the wrong direction.

IMPLICATIONS OF ALLHAT

The ALLHAT results publication[1] is not only one of the longest study reports ever published; it is also detailed and complex, and it will take more time to define fully the true meaning of its data. But since the study, in its final form, compared the effects of three antihypertensive agents, it may be helpful to look briefly at the impact of ALLHAT on each of them.

THE INDIVIDUAL AGENTS

According to the ALLHAT authors, the study demonstrated that a thiazide diuretic was superior to an ACE inhibitor or a CCB in cardiovascular protection. For the reasons already detailed, this conclusion is debatable at best. Even so, nothing in ALLHAT has hurt chlorthalidone's role as an integral part of hypertension therapy and as an agent strongly to be considered as first-step therapy in the elderly and particularly in black patients. The ALLHAT experience has also shown that concerns about the use of thiazide diuretics in diabetic patients may be largely unfounded.

Lisinopril fully equaled chlorthalidone for the primary end point of coronary events and for the key secondary end point of mortality, despite the deficiencies of study design that put the ACE inhibitor at a blood pressure disadvantage. Black patients did not seem to do as well with lisinopril as nonblack patients, although it is likely that these agents, when combined appropriately with such drug classes as diuretics or CCBs, could also be of considerable value in preventing renal and cardiovascular events in African Americans. Apart from the controversial heart failure findings, advocates of ACE inhibitors could still argue that ALLHAT was not able to disprove the notion that ACE inhibitors, used in an optimal fashion, might be the first-line therapy of choice for many hypertensive patients, particularly nonblacks.

Amlodipine performed well in ALLHAT. While the heart failure findings need further clarification, it was notable that amlodipine not only equaled chlorthalidone for the primary coronary end point but actually appeared to have a small (albeit nonsignificant) advantage for both mortality and stroke prevention.

Impact on Guidelines

Although not presented in the ALLHAT report, the relative cost of drugs somehow became part of its published conclusion. No doubt, thiazide diuretics are relatively inexpensive, at least as far as the cost of acquisition is concerned, but so are other antihypertensive agents. Many of the ACE inhibitors, including lisinopril, as well as some of the long-acting dihydropyridine CCBs, are also generic and well priced. Because the hypertension marketplace is so competitive, several of the newer branded drugs are priced competitively and allow physicians reasonable flexibility in choosing treatment based on therapeutic need.
Most importantly, we all now recognize that effective blood pressure control in most hypertensive patients calls for logical drug combinations that will typically include all the drug types examined in ALLHAT as well as other classes.

The National Heart, Lung, and Blood Institute (NHLBI) was responsible for organizing and conducting the ALLHAT study and for writing its report and conclusions. Since the NHLBI also organizes a hypertension guidelines committee (the forthcoming Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure [JNC VII]) and appoints its members, there is little doubt that the agenda expressed in the ALLHAT report and the NHLBI's press releases will affect the recommendations. Experts in hypertension concerned about providing the best possible treatment for the diverse hypertension population in the United States will hope fervently that the Committee will thoughtfully take into account the full array of available clinical trial data, including a responsible interpretation of ALLHAT, when writing its report.

References

1. The ALLHAT Officers and Coordinators for the ALLHAT Collaborative Research Group. Major outcomes in high-risk hypertensive patients randomized to angiotensin-converting enzyme inhibitor or CCB vs. diuretic: the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). JAMA. 2002;288:2981–2997.
2. Furberg CD, Psaty BM, Pahor M, et al. Clinical implications of recent findings from the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) and other studies of hypertension. Ann Intern Med. 2001;135:1074–1078.
3. Prospective Studies Collaboration. Age-specific relevance of usual blood pressure to vascular mortality: a meta-analysis of individual data for one million adults in 61 prospective studies. Lancet. 2002;360:1903–1913.
4. Materson BJ, Reda DJ, Cushman WC, et al. Single-drug therapy for hypertension in men. A comparison of six antihypertensive agents with placebo. The Department of Veterans Affairs Cooperative Study Group on Antihypertensive Agents. N Engl J Med. 1993;328(13):914–921.
5. Neal B, MacMahon S, Chapman N. Effects of ACE inhibitors, calcium antagonists, and other blood-pressure-lowering drugs: results of prospectively designed overviews of randomized trials. Blood Pressure Lowering Treatment Trialists' Collaboration. Lancet. 2000;356:1955–1964.
6. Lewis EJ, Hunsicker LG, Bain RP, et al., for the Collaborative Study Group. The effect of angiotensin-converting-enzyme inhibition on diabetic nephropathy. N Engl J Med. 1993;329:1456–1462.
7. Brenner BM, Cooper ME, de Zeeuw D, et al. Effects of losartan on renal and cardiovascular outcomes in patients with type 2 diabetes and nephropathy. N Engl J Med. 2001;345:861–869.
8. Hansson L, Zanchetti A, Carruthers SC, et al. Effects of intensive blood-pressure lowering and low-dose aspirin in patients with hypertension: principal results of the Hypertension Optimal Treatment (HOT) randomized trial. HOT Study Group. Lancet. 1998;351:1755–1762.
Articles from The Journal of Clinical Hypertension are provided here courtesy of Wiley.
Introduction Theory Numbers by Hardy Wright - AbeBooks

Search results for author "hardy wright", title "introduction theory numbers" (57 results). Selected listings:

- An Introduction to the Theory of Numbers — Hardy, G. H.; Wright, E. M. Oxford University Press, 5th edition, 1980. ISBN 10: 0198531710 / ISBN 13: 9780198531715. Seller: BooksRun, Philadelphia, PA, U.S.A. Used softcover (paperback), Very Good. US$ 9.41, free shipping within U.S.A.; ships within 24 hours, satisfaction 100% guaranteed, APO/FPO addresses supported.
- An Introduction to the Theory of Numbers — Hardy, G. H.; Wright, Edward M.; Wiles, Andrew. Oxford University Press, 2008. ISBN 10: 0199219869 / ISBN 13: 9780199219865. Seller: Yes Books, Portland, ME, U.S.A. Used softcover, Very Good, no dust jacket; sixth edition, 621 pages, clean and unmarked. US$ 60.00 plus US$ 4.25 shipping within U.S.A.
- an introduction to the theory of numbers 1945 — G. H. Hardy; E. M. Wright. Facsimile Publisher. ISBN 10: 9333496491 / ISBN 13: 9789333496490. Seller: Books Puddle, New York, NY, U.S.A. New hardcover, pp. 427. US$ 21.76 plus US$ 3.99 shipping; 4 copies available.
- Introduction to the theory of numbers — Hardy, G. H.; Wright, E. M. Oxford University Press, 1960 (reprinted 1965). ISBN 10: 0198533101 / ISBN 13: 9780198533108. Seller: Book Alley, Pasadena, CA, U.S.A. Used hardcover, Good, in a worn and chipped Good dust jacket; used with wear but still in solid reading condition. US$ 25.00 plus US$ 6.00 shipping.
- Introduction to the Theory of Numbers — Hardy, G. H. and E. M. Wright. The Clarendon Press, Oxford, 1960. Seller: 4 The World Resource Distributors, Springfield, MO, U.S.A. Used hardcover, Good; fourth edition, reprint of the 4th edition with corrections (1965); ex-library; 8vo. US$ 17.50 plus US$ 4.99 shipping.
- An Introduction to the Theory of Numbers, Fourth Edition — Hardy, G. H. and Wright, E. M. Clarendon Press, Oxford, 1968. Seller: Easy Chair Books, Lexington, MO, U.S.A. Used hardcover, Good, no dust jacket; 421 pages, ex-university-library marks, light wear and discoloring, sound binding. US$ 27.00 (25% off US$ 36.00) plus US$ 5.50 shipping.
- An Introduction to the Theory of Numbers — Hardy, G. H.; Wright, E. M. Oxford, 1962 (4th edition reprint). Seller: Chapter 1, Johannesburg, South Africa. Used hardcover, Good, in a Good dust jacket; shelf wear to jacket, edge wear, mild foxing and markings, ownership inscription, sound binding; heavy, may require extra postage outside South Africa. US$ 38.00 plus US$ 20.44 shipping to the U.S.A.
- An Introduction to the Theory of Numbers (Fifth Edition) — Hardy, G. H. & Wright, E. M. Oxford University Press, 1990. Seller: Hard to Find Books NZ (Internet) Ltd., Dunedin, New Zealand. Used; super octavo, cream light card covers, xvi + 426 pp.; VG, with light creasing and curling to spine and cover edges, light tanning and foxing to page edges, minor cracking to the front gutter. US$ 24.54 plus US$ 20.97 shipping to the U.S.A.
- An Introduction to the Theory of Numbers (Oxford Mathematics) — Hardy, G. H.; Wright, Edward M.; Wiles, Andrew. Oxford University Press, 2008. ISBN 10: 0199219850 / ISBN 13: 9780199219858. Seller: HPB-Red, Dallas, TX, U.S.A. Used hardcover, Good; may have some wear or writing/highlighting and may not include companion materials. US$ 116.00 plus US$ 3.75 shipping.
- An Introduction to the Theory of Numbers — G. H. Hardy, E. M. Wright. At the Clarendon Press, Oxford, 1945 (2nd edition). Seller: Gurra's Books, Hemse, Sweden. Used hardcover, Good, no jacket; slightly shelf-worn with sunned boards and bumped corners, front hinge slightly loose, owner's name and a few scattered underlinings and annotations; xvi + 407 pp., 8vo. US$ 179.21 plus US$ 25.69 shipping to the U.S.A.
- An Introduction to the Theory of Numbers — G. H. Hardy and E. M. Wright. Oxford, Clarendon Press, 1960 (fourth edition). Seller: Logic and Art, Novara, Italy. Used hardcover, Fine, without dust jacket; signs of age (yellowing to page edges, faded and browned spine) and minor signs of use, otherwise clean inside with a tight binding. US$ 57.73 plus US$ 23.35 shipping to the U.S.A.
- An Introduction to the Theory of Numbers, Third Edition — Hardy, G. H.; Wright, E. M. The Clarendon Press, Oxford, 1954. Seller: Leopolis, Kraków, Poland. Used hardcover, Very Good; 8vo (24 cm), xvi, 419 pp., publisher's cloth slightly rubbed and dusted, top corner of the front free endpaper cut off. The third edition introduced an elementary proof of the prime number theorem material noted in the seller's description. US$ 211.94 plus US$ 17.52 shipping to the U.S.A.
- An Introduction to the Theory of Numbers — G. H. Hardy; E. M. Wright. Oxford University Press, 1938. First edition, first printing. Seller: Grey Matter Books, Hadley, MA, U.S.A. Used hardcover, Very Good, no jacket; previous owner's signature, text unmarked, page edges age-toned, sturdy binding, covers worn at corners and spine ends and partially faded along the top edge. US$ 350.00 plus US$ 5.50 shipping.
- An introduction to the theory of numbers — Hardy, G. H.; Wright, E. M. Oxford Science Publications, 1989. Seller: Miliardi di Parole, Pietra Marazzi, Italy. Used, Good; exterior worn, interior good. US$ 46.93 plus US$ 26.16 shipping to the U.S.A.
- An Introduction to the Theory of Numbers — Hardy, G. H. and E. M. Wright. Oxford, Clarendon Press, 1938. Seller: Antiquariat Ehbrecht, Ilsede, Germany. Used, Very Good; 8vo, xvi, 403 pages, titled original cloth, binding lightly rubbed and bumped, spine slightly faded, otherwise good condition; weight 1400 g. US$ 467.88 plus US$ 52.55 shipping to the U.S.A.
- An Introduction to the Theory of Numbers / Number Theory Guide (5th edition, Chinese edition) — G. H. Hardy, E. M. Wright; Chinese translation credited to Zhang Ming Yao and Zhang Fan. People Post Press, 2008. ISBN 10: 7115184526 / ISBN 13: 9787115184528. Seller: liu xing, Nanjing, China. New softcover. US$ 90.80 plus US$ 18.00 shipping to the U.S.A.; 3 copies available.
- An Introduction to the Theory of Numbers — Hardy, G. H. (1877–1947); Wright, E. M. (1906–2005). Clarendon Press, Oxford, 1938. Seller: Second Story Books, ABAA, Rockville, MD, U.S.A. Used hardcover, Fair; octavo, xvi, 403 pages, blue cloth with gold-printed spine, wear to spine caps and corners, penciled names on front endpapers, tanned endpapers, cracked hinges, intermittent spine breaks. US$ 150.00 plus US$ 6.00 shipping.
- An Introduction to the Theory of Numbers — G. H. Hardy, E. M. Wright. Oxford University Press, 1938. First edition, first printing. Seller: Moe's Books, Berkeley, CA, U.S.A. Used hardcover, Good, no jacket; boards worn, stained and scratched with bumped corners, front cover slightly sunned along the hinge, spine worn and tanned but legible, shaken though the binding is secure, end leaves foxed with an ink signature. US$ 250.00 plus US$ 6.50 shipping.
- An Introduction to the Theory of Numbers — Hardy, G. H. & Wright, E. M. Blandford Press, London, 1954 (3rd edition). Seller: Deightons, Bournemouth, United Kingdom. Used hardcover, Near Fine in a Near Fine dust jacket; narrow 4to, xvi + 419 + (1) pp., publisher's blue cloth with silver spine lettering, pale blue printed jacket not price-clipped (42s net); slight lean, slight rubbing and darkening to the jacket spine; contents very clean, tight and unfoxed. US$ 243.41 plus US$ 27.01 shipping to the U.S.A.
- An Introduction to the Theory of Numbers — G. H. Hardy and E. M. Wright. Oxford University Press / Clarendon Press, 1938. First edition, first printing. Seller: Zed Books, New York, NY, U.S.A. Used hardcover, Very Good; 8vo, 403 pp., blue cloth with gilt lettering to the spine; rear hinge cracked, slight shelfwear, previous owner's name on the front free endpaper. US$ 500.00 plus US$ 5.00 shipping.
- An Introduction to the Theory of Numbers — G. H. Hardy & E. M. Wright. Oxford University Press, London, 1945 (stated 2nd edition). Seller: Appledore Books, ABAA, Waccabuc, NY, U.S.A. Used cloth, Near Fine in a Near Fine example of the uncommon dust jacket; unusually clean and well preserved, with only faint creasing at the foot of the spine and two thin bands of offsetting on the rear panel. US$ 500.00 plus US$ 6.00 shipping.
- An Introduction to the Theory of Numbers — G. H. Hardy and E. M. Wright. Oxford, UK, 1938. First edition. Seller: Anytime Books, London, United Kingdom. Used cloth, Good; pp. xvi, 403; owner's mark of the mathematician R. H. Merson, 1943; the book was shaken and has been repaired; external wear. US$ 695.45 plus US$ 37.81 shipping to the U.S.A.
- An Introduction to the Theory of Numbers — Hardy, G. H.; Wright, E. M.
  Clarendon Press, Oxford, 1938. First edition. Seller: San Francisco Book Company, Paris, France. Used hardcover, Good; octavo, navy blue cloth, gilt lettering, no dust jacket, 403 pp.; inscribed and dated by the astronomer Georges Fournier on the front endpapers; covers lightly worn and scuffed, corners bumped, foxing on the pages. US$ 902.08 plus US$ 21.02 shipping to the U.S.A.
- An introduction to the theory of numbers — Hardy, G. H. & Wright, E. M. Clarendon Press, Oxford, 1938. First edition. Seller: B & L Rootenberg Rare Books, ABAA, Sherman Oaks, CA, U.S.A. Used; publisher's original blue cloth; aside from spotting on the first flyleaf, an excellent copy. US$ 600.00 plus US$ 20.00 shipping.
- An Introduction to the Theory of Numbers — Hardy, G. H., & E. M. Wright. Oxford: at the Clarendon Press, 1938. First edition, first impression, retaining the rare dust jacket. Seller: Peter Harrington, ABA/ILAB, London, United Kingdom. Used. The seller notes that the book "was widely praised by number theory specialists for its excellent exposition, very broad range, and good judgement in the selection of material." US$ 5,215.90 plus US$ 18.91 shipping to the U.S.A.
- an introduction to the theory of numbers 1945 [Leather Bound] — G. H. Hardy, E. M. Wright. 2015 print-on-demand leather-bound reissue crafted by "Rare Biblio". Seller: Gyan Books Pvt. Ltd., Delhi, India. New hardcover, leather bound, approx. 14.60 x 22.86 cm, available in five leather colours. US$ 61.07, free shipping to the U.S.A.; over 20 copies available.
Mastering Weyl's Inequality
Advanced Techniques and Examples in Number Theory
Sarah Lee · AI generated (Llama-4-Maverick-17B-128E-Instruct-FP8) · June 13, 2025

Dive deeper into Weyl's Inequality, exploring advanced techniques and examples that illustrate its power and versatility in Number Theory.

Advanced Techniques for Weyl's Inequality

Weyl's Inequality is a fundamental tool in Number Theory, providing a bound on exponential sums that has far-reaching implications in various areas of mathematics. In this section, we explore some advanced techniques related to Weyl's Inequality, including its application to estimating exponential sums, its role in the study of modular forms, and its connections to algebraic geometry.

Estimating Exponential Sums using Weyl's Inequality

Exponential sums are a crucial object of study in Number Theory, and Weyl's Inequality provides a powerful tool for estimating them. In the form most often quoted, it states that for a polynomial f(x) of degree d whose leading coefficient α admits a rational approximation a/q with (a, q) = 1 and N ≤ q ≤ N^{d−1} (in particular, for suitably irrational α), and for any positive integer N,

$$\left| \sum_{n=1}^{N} e^{2\pi i f(n)} \right| \ll N^{1-2^{1-d}+\epsilon},$$

where ε > 0 is an arbitrarily small constant and the implied constant depends on d and ε. This bound has significant implications for the study of exponential sums, as it provides a non-trivial estimate even for large values of N. For instance, when d = 2 it yields the well-known bound for quadratic exponential sums,

$$\left| \sum_{n=1}^{N} e^{2\pi i (\alpha n^2 + \beta n)} \right| \ll N^{1/2+\epsilon},$$

which is a fundamental result in the study of quadratic forms. (A small numerical illustration of this quadratic case is sketched after the comparison table below.)

Role of Weyl's Inequality in the Study of Modular Forms

Modular forms are a central object of study in Number Theory, and Weyl-type bounds play a role in their analysis; in particular, they are used to control the Fourier coefficients of modular forms, which are essential in understanding their properties. For example, the Fourier coefficients of a modular form f(z) of weight k can be written as

$$a_n = \sum_{m=1}^{\infty} \frac{c_m}{m^{k-1}} e^{2\pi i m n/\tau},$$

where the c_m are constants and τ is a complex number with positive imaginary part. Weyl's Inequality can then be used to bound the sum over m, providing a non-trivial estimate for the Fourier coefficients a_n.

Applications in Algebraic Geometry

Weyl's Inequality has significant implications in algebraic geometry, particularly in the study of algebraic curves and their zeta functions. For instance, related exponential-sum estimates are used to bound the number of rational points on algebraic curves, which is a fundamental problem in Diophantine geometry. The connection between Weyl's Inequality and algebraic geometry is deep and far-reaching.
For example, point counts enter the zeta function of an algebraic curve, which is defined as

$$Z(X,T) = \exp\left(\sum_{n=1}^{\infty} \frac{N_n}{n} T^n\right),$$

where N_n is the number of rational points on the curve X over the finite field with q^n elements. Exponential-sum estimates in the spirit of Weyl's Inequality are used to bound N_n, which in turn provides information about the zeta function Z(X,T).

Examples and Illustrations

In this section, we provide some concrete examples that illustrate the power and versatility of Weyl's Inequality, and we compare it with other related inequalities, highlighting its strengths and weaknesses.

Concrete Examples Demonstrating the Power of Weyl's Inequality

One of the most significant applications of Weyl-type estimates is to the distribution of prime numbers, in particular to the number of primes in arithmetic progressions. Let a and q be positive integers with (a, q) = 1, and let π(x; q, a) denote the number of primes less than or equal to x that are congruent to a modulo q. Then one can establish the bound

$$\pi(x; q, a) = \frac{\operatorname{li}(x)}{\phi(q)} + O\!\left(\frac{x}{(\log x)^A}\right),$$

where li(x) is the logarithmic integral, φ(q) is Euler's totient function, and A is an arbitrary positive constant. (A small numerical check of this approximation is sketched at the end of the article.)

Case Studies Highlighting the Inequality's Versatility

Weyl's Inequality has been applied in a wide range of contexts, from Number Theory to algebraic geometry. A few case studies that demonstrate its versatility:

- The study of modular forms: as mentioned earlier, Weyl-type bounds are used to control the Fourier coefficients of modular forms, with significant implications for their applications in Number Theory and algebraic geometry.
- The distribution of rational points on algebraic curves: exponential-sum estimates are used to bound the number of rational points on algebraic curves, a fundamental problem in Diophantine geometry.
- The study of exponential sums: Weyl's Inequality provides a powerful general-purpose tool for estimating exponential sums, a crucial object of study in Number Theory.

Comparison with Other Related Inequalities

Weyl's Inequality is one of several inequalities used to estimate exponential sums. The table below compares it with two related families of estimates.

| Inequality | Bound | Applicability |
| :--- | :--- | :--- |
| Weyl's Inequality | $N^{1-2^{1-d}+\epsilon}$ | General polynomials (with a suitably irrational leading coefficient) |
| van der Corput's Inequality | $N^{1-\frac{1}{2d-2}+\epsilon}$ | Polynomials with certain properties |
| Vinogradov's Inequality | $N^{1-\frac{1}{d}+\epsilon}$ | Polynomials with certain properties |

As the table suggests, Weyl's Inequality provides a non-trivial bound for quite general polynomials, while for polynomials with additional structure other inequalities, such as van der Corput's or Vinogradov's, may provide stronger bounds.
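The following minimal sketch (an illustration added here, not code from the original post) evaluates the quadratic Weyl sum S(N) = Σ_{n≤N} e^{2πi α n²} for the badly approximable irrational α = √2 and compares |S(N)| with N^{1/2}; one expects the ratio to stay of moderate size, in line with the square-root-type bound quoted above.

```python
# Numerical illustration (not from the original post): quadratic Weyl sums.
# For alpha = sqrt(2), a badly approximable irrational, |S(N)| is expected to
# grow roughly like sqrt(N), matching the N^(1/2 + eps) bound for d = 2.
import cmath
import math

def weyl_sum(alpha: float, N: int) -> complex:
    """S(N) = sum_{n=1}^{N} exp(2*pi*i * alpha * n^2)."""
    return sum(cmath.exp(2j * math.pi * alpha * n * n) for n in range(1, N + 1))

alpha = math.sqrt(2)
for N in (100, 1_000, 10_000, 100_000):
    magnitude = abs(weyl_sum(alpha, N))
    print(f"N = {N:>7}   |S(N)| = {magnitude:10.2f}   "
          f"sqrt(N) = {math.sqrt(N):9.2f}   ratio = {magnitude / math.sqrt(N):5.2f}")
```

Only the fractional part of αn² matters for each phase, and for N up to 10⁵ the accumulated double-precision rounding error stays far smaller than the size of the sum, so the floating-point evaluation is adequate for this illustration.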
Future Directions and Open Problems

Weyl's Inequality continues to be an active area of research, with many open problems and conjectures related to its applications and extensions.

Current Research Directions Involving Weyl's Inequality

- Extensions to higher-dimensional exponential sums, with significant implications for the study of algebraic geometry and Number Theory.
- Applications to other areas of mathematics, such as harmonic analysis and partial differential equations.
- Refinements and improvements, seeking stronger bounds and more general applicability.

Open Problems and Conjectures Related to the Inequality

- Sharpening the Weyl exponent: it is expected that the exponent in Weyl's Inequality can be improved for certain classes of polynomials, and sharp exponents are known only in special cases.
- Vinogradov's Mean Value Theorem: this theorem bounds the mean value of certain exponential sums and is closely related to Weyl's Inequality; progress on it drives improvements in Weyl-type bounds.

Potential Applications in Emerging Areas of Mathematics

Weyl's Inequality has the potential to be applied in a range of emerging areas of mathematics, including:

- Arithmetic geometry: the study of algebraic curves and their zeta functions.
- Harmonic analysis: particularly the study of oscillatory integrals.
- Partial differential equations: bounds for certain equations with oscillatory solutions.

[Diagram omitted: connections between Weyl's Inequality and other areas of mathematics.]

References

- Weyl, H. (1916). Über die Gleichverteilung von Zahlen mod. Eins. Mathematische Annalen, 77(3), 313–352.
- Vinogradov, I. M. (1935). On Weyl's sums. Matematicheskii Sbornik, 42(5), 521–530.
- Vaughan, R. C. (1997). The Hardy–Littlewood Method. Cambridge University Press.
- Iwaniec, H., & Kowalski, E. (2004). Analytic Number Theory. American Mathematical Society.
- Bombieri, E. (1966). On exponential sums in finite fields. American Journal of Mathematics, 88(1), 71–105.

FAQ

What is Weyl's Inequality? A fundamental tool in Number Theory that provides a bound on exponential sums.

What are the applications of Weyl's Inequality? It has a wide range of applications, from Number Theory to algebraic geometry, harmonic analysis, and partial differential equations.

What are some open problems related to Weyl's Inequality? They include sharpening the Weyl exponent and questions surrounding Vinogradov's Mean Value Theorem.

How is Weyl's Inequality used in the study of modular forms? Weyl-type bounds are used to control the Fourier coefficients of modular forms, which is essential in understanding their properties.
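Finally, as a numerical companion to the prime-counting example above (again a sketch added for illustration, not taken from the original post; the modulus q = 7 and residue a = 3 are arbitrary example choices), the snippet below counts primes p ≤ x with p ≡ a (mod q) using a simple sieve and compares the count with li(x)/φ(q), approximating li by the offset logarithmic integral Li(x) = ∫₂ˣ dt/log t, which differs from li(x) only by a small constant.

```python
# Numerical check (illustrative sketch): pi(x; q, a) versus li(x)/phi(q).
from math import gcd, log

def primes_up_to(limit: int) -> list[int]:
    """All primes <= limit, by the sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = b"\x00" * len(range(p * p, limit + 1, p))
    return [n for n in range(2, limit + 1) if is_prime[n]]

def euler_phi(q: int) -> int:
    """Euler's totient function, by trial factorization (fine for small q)."""
    result, n, p = q, q, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def li_offset(x: float, steps: int = 200_000) -> float:
    """Offset logarithmic integral Li(x) = integral_2^x dt / log t (midpoint rule)."""
    h = (x - 2.0) / steps
    return sum(h / log(2.0 + (i + 0.5) * h) for i in range(steps))

x, q, a = 1_000_000, 7, 3   # example values; any residue a with gcd(a, q) = 1 works
assert gcd(a, q) == 1
count = sum(1 for p in primes_up_to(x) if p % q == a)
estimate = li_offset(x) / euler_phi(q)
print(f"pi({x}; {q}, {a}) = {count}")
print(f"li(x)/phi(q)  ~ {estimate:.1f}")
```

For x = 10⁶ the two values agree closely, which is what the error term O(x/(log x)^A) leads one to expect at this scale.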
Semantic Loss in the Holy Qur'an Translation with Special Reference to Surat Al-Mujadilah and Surat Al-Hashr
فقدان المعنى في ترجمة القرآن الكريم وبالتحديد سورتيّ المجادلة والحشر

By Alaa Kamal Mohammed Othman
Supervised by Prof. Dr. Walid Amer, Professor of Linguistics

A thesis submitted in partial fulfillment of the requirements for the Degree of Master in Translation
March 2021

The Islamic University of Gaza
Deanship of Research and Graduate Studies
Faculty of Arts
Master of Translation and Linguistics

إقرار (Declaration)

I, the undersigned, submitting the thesis entitled "Semantic Loss in the Holy Qur'an Translation with Special Reference to Surat Al-Mujadilah and Surat Al-Hashr," declare that the contents of this thesis are the product of my own work, except where otherwise indicated, and that this thesis as a whole, or any part of it, has not been submitted by others to obtain a degree or an academic or research title at any other educational or research institution.

I understand the nature of plagiarism, and I am aware of the University's policy on this. The work provided in this thesis, unless otherwise referenced, is the researcher's own work, and has not been submitted by others elsewhere for any other degree or qualification.

Student's name: Alaa Kamal Mohammed Othman
Signature:
Date:

Abstract

The present study investigates the semantic loss in two English translations of Surat Al-Mujadilah and Surat Al-Hashr in the Holy Qur'an undertaken by two of the most prominent translators, Abdullah Yusuf Ali and Arthur John Arberry. It also seeks to identify the causes of these losses in the two English translations in light of Baker's (2011) typology of equivalence, particularly equivalence at the word level. The losses are generally divided into two types, complete and partial, and this research focuses on the semantic losses that are in most cases caused by cultural factors. Moreover, it examines the strategies both translators used and the extent to which they achieved cultural equivalence. The study follows the qualitative descriptive approach. To arrive at clear-cut answers to the research questions, the researcher extracted 52 culture-specific items from Surat Al-Mujadilah and Surat Al-Hashr and then carried out a comparative textual analysis of their English translations, taken from Ali's The Holy Qur'an: Text and Translation (1938) and Arberry's The Koran Interpreted (1968). The findings revealed that both translations resulted in frequent partial and complete semantic losses, with complete losses being the most dominant in Arberry's translation. The findings also showed that the causes of the semantic losses were the existence of culture-related terms, lack of lexicalization, semantically complex words, lack of hyponyms in the TL, and mistranslations. Moreover, the researcher found that Ali was more successful in achieving cultural equivalence, at 42.3%, whereas Arberry's achievement of cultural equivalence accounted for 34.6%.
In light of these findings, the researcher recommends that translators consult experts in the religious sciences, refer to exegesis books to reach the depth of the ST messages rather than focusing mainly on surface meaning, consult Arabic and English dictionaries, and pay close attention to the strategies they use. Finally, the researcher recommends that future researchers conduct further research on full chapters (suras) to prevent the occurrence of such losses and to produce a precise translated version of the Noble Qur'an.

ملخص الدراسة (Abstract in Arabic)

The present study aims to investigate the loss of meaning in two English translations of Surat Al-Mujadilah and Surat Al-Hashr in the Holy Qur'an by two of the most prominent translators, Abdullah Yusuf Ali and Arthur John Arberry, through the selection of fifty-two cultural terms from the two suras. The study also aims to show the causes of the loss of meaning in the two translations in light of Baker's (2011) typology of equivalence, particularly equivalence at the word level. The study follows the qualitative descriptive approach, with the researcher relying on a comparative textual analysis of the translations of Ali and Arberry. The findings revealed that both translations resulted in partial as well as complete loss of meaning, with complete loss dominating Arberry's translation owing to the presence of cultural terms and semantically complex words in the source language and the absence of some lexical items from the target language. The findings also showed that Ali was more successful in achieving cultural equivalence, at 42.3%, while Arberry's achievement of cultural equivalence amounted to 34.6%. In light of these findings, the study recommends that translators communicate with experts in the religious sciences, refer to exegesis books to reach the depth of the source-text messages, avoid focusing mainly on surface meaning, consult Arabic and English dictionaries, and highlight the strategies they use. Finally, the study recommends that future researchers conduct further research on the remaining full suras of the Qur'an in order to avoid such loss of meaning and to produce a precise translated version of the Holy Qur'an.

Dedication

This work is wholeheartedly dedicated to my dear mum and dad, who have been my source of inspiration throughout this journey; they gave me strength when I thought of giving up and continually provided their moral, spiritual, and emotional support. To my amazing husband, for his continuous love, patience, and support, whose care for me and our kids made it possible for me to finish this work. To my kids, Rakan and Kinan, who are a real blessing from Allah: thank you for allowing me time away from you to work and for motivating me to move on and be a good model for you. To my beloved brothers and sister, who shared their words of advice and encouragement.

Acknowledgment

I cannot express enough thanks and gratitude to Almighty Allah for granting me health, strength, and power of mind to continue this work. I offer my sincere appreciation to my supervisor, Prof. Dr. Walid Amer, for his thoughtful guidance, keen interest, and encouragement. My gratitude is extended to Dr. Mohammed Al-Haj Ahmed, Associate Professor of Translation at the IUG, and Dr. Mohammed Soliman Al-Farra for providing me with help and assistance to get this work done. I would also like to express my deep sense of gratitude to my family, husband, and sons for being supportive and motivating.

Table of Contents

إقرار (Declaration in Arabic)
Declaration
نتيجة الحكم على أطروحة الماجستير (Thesis judgment result)
Abstract
ملخص الدراسة (Abstract in Arabic)
Dedication
Acknowledgment
List of tables
List of figures
List of abbreviations
Chapter 1: Introduction
  1.1 Introduction
  1.2 Statement of the problem
  1.3 Questions of study
    1.3.1 Research main question
    1.3.2 Research sub-questions
  1.4 Purpose of the study
  1.5 Significance of the study
  1.6 Limitations of the study
  1.7 Structure of the study
  1.8 Definition of terms
Chapter 2: Literature review
  2.1 Introduction
  2.2 Definitions of translation
  2.3 Religious translation
  2.4 The translation of the Holy Qur'an
  2.5 Difficulties in translating the Holy Qur'an
    2.5.1 Linguistic problems
    2.5.2 Cultural problems
  2.6 Equivalence in translation
  2.7 Ivir's seven strategies for overcoming cultural gaps
  2.8 Previous studies
    2.8.1 Previous studies in relation to the translation of the Holy Qur'an
    2.8.2 Previous studies in relation to the semantic loss in the Holy Qur'an
  2.9 Commentary on the previous studies
  2.10 Conclusion
Chapter 3: Corpus and Methodology
  3.1 Introduction
  3.2 Research design
  3.3 Data of the study
  3.4 Data analysis
  3.5 Procedures of data collection
  3.6 Inter-rater reliability
  3.8 The translations to be investigated
  3.9 The selected suras
  3.10 Selection criteria
  3.11 Conclusion
Chapter 4: Data analysis
  4.1 Introduction
  Extracts 1–30
  4.2 Conclusion
Chapter 5: Results, Conclusion, and Recommendations
  5.1 Introduction
  5.2 Answers of the research questions
    5.2.1 Answer of the main question
    5.2.2 Answer of the first sub-question
    5.2.3 Answer of the second sub-question
5.2.4 Answer of the third sub-question
5.3 Conclusion
5.4 Recommendations
References

List of tables
Table (3.1): Data of the study (Surat Al-Mujadilah)
Table (3.2): Data of the study (Surat Al-Hashr)
Table (4.1): Culture-specific terms in Surat Al-Mujadilah
Table (4.2): Culture-specific terms in Surat Al-Hashr
Table (5.1): Types of semantic loss in Ali and Arberry's translations
Table (5.2): Strategies used by Ali and Arberry in translating the CSIs in Surat Al-Mujadilah and Surat Al-Hashr
Table (5.3): Achievement and non-achievement of cultural equivalence in Ali and Arberry's translations

List of figures
Figure (3.1): Themes of Surat Al-Mujadilah
Figure (3.2): Themes of Surat Al-Hashr
Figure (5.1): Achievement of cultural equivalence in Ali and Arberry's translations

List of abbreviations
CSI Culture-Specific Items
IUG Islamic University of Gaza
SL Source Language
ST Source Text
TL Target Language
TT Target Text

Chapter 1
Introduction

1.1 Introduction
Translation is undoubtedly a means of communication that bridges the gap between different languages and cultures. As Schulte (2002), as cited in Abdelaal (2017:1), wrote, "Translation is not a mere transplantation of words from one language to another, it involves interactions among linguistic, cultural, anthropological, and psychological phenomena." Al-Masri (2009:7) stated that "it includes extra-linguistic factors, semantic levels, and textual contexts." Kehal (2010) argued that translation does not rely only on the linguistic factor, but also on the precise use of language. Therefore, translators should take the cultural norms of the ST and the TT into consideration, since language and culture are two faces of the same coin (as cited in Abdelaal, 2017:1).
During the process of translation, choosing the accurate equivalent is quite challenging for translators who do not have full command of the linguistic codes of both languages; in fact, it is difficult even for those who master both codes. Newmark (1988) pointed out that translators are not excused for altering words that have a one-to-one equivalent, even if they believe that the alternative would sound better, since that is considered a violation of the accuracy rule in translation.
Ervin and Bower (1952:595) stated that distorting the meaning while translating results from lexical, syntactic, or cultural differences between languages. They also asserted that words may or may not have referents that are culturally different. For instance, the word "eclipse" has two referents in Arabic: one refers to the moon and the other to the sun. In a similar manner, Baker (2011) claimed that one type of non-equivalence is the lack of corresponding lexical words between the ST and TT languages. Ervin and Bower (1952) also discussed other lexical problems that pose difficulties for translators, namely homonyms, figurative meaning, and polysemy.
Similarly, Darwish (2010) contended that difficulties in translation arise from the various semantic, lexical, syntactic, phonological, and morphological differences among languages. He also presented another reason, namely the literal translation of some lexemes, which is unlikely to convey the intended meaning. On the other hand, Guessabi (2013:224) confirmed that culture constitutes a crucial problem in translation, as culture is "the complex whole, which includes knowledge, belief, art, moral, law, customs, and any capabilities or habits acquired by man and member of society" (Taylor, 1889:1). In short, several factors lead to difficulties in translation, and these may increase when dealing with sensitive and sacred texts such as the Holy Qur'an.
The need for translating the Holy Qur'an into other languages, particularly English, became urgent since the religion of Islam is growing fast and becoming more widespread all over the world, and the number of Muslims who do not speak Arabic is rapidly increasing. It is well known that the Noble Qur'an was revealed in standard Arabic, which is distinguished by its rhetoric and eloquence. The Quranic discourse has its own distinctive features on the syntactic, semantic, cultural, and rhetorical levels (Abdul-Raof, 2010). Such characteristics make the language of the Qur'an more difficult to understand. Furthermore, translating the text of the Holy Qur'an is challenged by many obscurities, ambiguities, and non-equivalence problems (Tabrizi & Mahmud, 2013, pp. 1-6). Consequently, translators should exert strenuous efforts to perceive the genuine meanings adequately. Delisle (1984) proposed four main competency levels which are important for the translator: linguistic, comprehension, encyclopedic, and re-expression knowledge. Following these maxims, a translator should have full knowledge of Arabic and Islamic culture and be familiar with the reasons of revelation.
Al-Jabri (2008) claimed that in spite of the great efforts some translators have exerted in producing accurate English translations of the Holy Qur'an, their quality and style are still poor. Abdul-Raof (2005:115-130) also stated that many scholars have been criticized for being incapable of transferring the authentic meanings of the Holy Qur'an, because they were not fully acquainted with Arabic culture and were unable to distinguish between exegesis (tafsir) and hypothetical opinion (ta'wil).
Accordingly, this study examines the semantic loss in Ali and Arberry's translations of Surat Al-Mujadilah and Surat Al-Hashr. Several reasons drove the researcher to conduct this study. One reason is that some translations of the Holy Qur'an fall short of a proper understanding of certain Quranic matters. Another is to make non-Arabic speakers aware of the weak spots in the translations that may lead to a misunderstanding of some religious aspects. A final reason worth mentioning is to provide non-Arabic-speaking Muslims with information about the Medinan suras, which convey the teachings of Islam.
1.2 Statement of the problem
Since the Quranic discourse has particular characteristics which are Qur'an-bound and semantically oriented, it cannot be rendered exactly into another language.
Accordingly, some translations may not transfer the meaning faithfully, producing a semantic loss that distorts the original message. For instance, rendering the religious item "الحج – Al-Haj" by the English equivalent "pilgrimage" does not convey the meaning of the Haj to Mecca as it is depicted in Islamic culture.
1.3 Questions of study
1.3.1 Research main question
What types of semantic loss are found in the translation of the Holy Qur'an, with special reference to Surat Al-Mujadilah and Surat Al-Hashr?
1.3.2 Research sub-questions
1. What types of non-equivalence do the translations of Ali and Arberry reflect in the two named suras?
2. What translation strategies did the two translators use in rendering the two suras?
3. To what extent have Ali and Arberry's translations been successful in achieving cultural equivalence for the culture-specific items?
1.4 Purpose of the study
The present study aims to examine the semantic loss in the two English translations of Surat Al-Mujadilah and Surat Al-Hashr attempted by Abdullah Yusuf Ali and Arthur John Arberry. It also investigates the types of semantic loss found in the two English translations. Loss in this research has two definitions: in its broad sense, it refers to the complete or partial loss of any verbal sign; in its narrow sense, it refers to the kind of losses that semantically affect the interpretation of the previous signs and, subsequently, the target readers' reception of the TT (Al-Masri, 2009). Moreover, the study identifies the causes of this loss in light of Baker's typology of equivalence.
1.5 Significance of the study
Dickens, Harvey, and Higgins (2005) noted that losses in translation are inevitable, and such losses may affect or distort the meaning intended in the sacred Quranic text. It is therefore vital to study losses in the translated Quranic text in order to provide insight into them, ensure accuracy, reduce distortions, and know how to deal with them during the translation process. The significance of this study comes from the fact that it provides information about the semantic loss in two English translations of two suras of the Holy Qur'an (Surat Al-Mujadilah and Surat Al-Hashr). In addition, it contributes to a better understanding of how semantic losses can be reduced in Ali and Arberry's English translations of Surat Al-Mujadilah and Surat Al-Hashr. Furthermore, it provides beneficial insights for future translators to avoid such losses in their translations. Moreover, it encourages researchers to conduct future studies investigating the semantic losses found in the translations of other suras. Finally, it raises the awareness of non-native speakers of Arabic of the losses found in the translations of the Holy Qur'an.
1.6 Limitations of the study
This study is limited to investigating the semantic loss resulting from cultural non-equivalence in the translations of two suras of the Holy Qur'an. The researcher chose only two English translations of Surat Al-Mujadilah and Surat Al-Hashr, undertaken by two translators of different backgrounds, beliefs, and religions: Abdullah Yusuf Ali and Arthur J. Arberry.
1.7 Structure of the Study
The second chapter presents the literature review, which is divided into two parts: a theoretical part and a practical part.
The first part provides an overview of translation, religious translation, the translation of the Holy Qur'an and the difficulties faced in translating it, equivalence in translation, and finally translation and culture, while the second part reviews previous studies on the translation of the Holy Qur'an in general and studies related to semantic loss. The third chapter is devoted to the corpus and methodology of the study. The fourth chapter presents the analysis of the data, in addition to a discussion and comparison of Ali and Arberry's translations of Surat Al-Mujadilah and Surat Al-Hashr. Finally, the answers to the research questions are presented in the fifth chapter, beside the results, conclusion, and recommendations.
1.8 Definition of terms
1. Loss: When specific features found in the source text disappear in the target text. Translation loss refers to "the incomplete replication of the ST in the TT" (Dizdar, 2014, pp. 206-223).
2. Translation: Nida and Taber (1969), as cited in Akbari (2013:4), confirmed that translating consists of reproducing in the receptor language the closest natural equivalent of the source language message, first in terms of meaning and secondly in terms of style. Translation, involving the transposition of thoughts expressed in one language by one social group into the appropriate expression of another group, entails a process of cultural de-coding, re-coding, and en-coding. As cultures are increasingly brought into contact with one another, it is the cultural aspect of the text that we should take into consideration.
3. Culture: According to Bahmeed (2008:3), there is no agreed-upon definition of culture. Nevertheless, based on the definition given by Wikipedia.org, the word culture, taken from the Latin "cultura", was used in classical antiquity by the Roman orator Cicero in the phrase "cultura animi". In American anthropology, the term culture has had two meanings: the evolved human capacity to classify and represent experiences with symbols and to act imaginatively and creatively; and the distinct ways in which people living in different parts of the world classified and represented their experiences and acted creatively. Since culture is simply the way of life of a particular people living together in one place and speaking the same language, it means thinking, feeling, and having emotions rather differently from people who use a different language (Eliot, 1962:120). Dawson (1948:50) argues that "we cannot understand the way people think and then use language without understanding their culture, and we cannot understand their cultural backgrounds unless we have a good knowledge of their various kinds of beliefs that formulate the inner form of their linguistic competence". This particular idea of Dawson was adopted throughout the present research.
4. Culture-specific items: These refer to words in the SL that do not have an equivalent in the TL. Such culture-related terms constitute one of the most dominant problems of non-equivalence that translators encounter, since culture is a main cause of many semantic losses, as will appear in the present study.
8 Chapter 2 Literature review 9 Chapter 2 Literature Review 2.1 Introduction This chapter is of two parts, the first part concentrates on the theoretical framework which includes translation studies, religious translation, the translation of the Holy Qur’an, difficulties in translating the holy Qur’an, equivalence in translation, and translation and culture. The second part reviews empirical studies on the translation of the Holy Qur’an and on the semantic loss. 2.2 Definitions of Translation The word “translation” is taken from the Latin word “translatio” which refers to “transferring”. Hatim & Munday (2004) view translation from two various perspectives which are the process and then the product. They see translation as process of transforming the meaning from one language into another as a product. They both agree that translation centers on the outcomes achieved by the translator. Catford (1965) defines it as "an operation performed on languages, a process of substituting a text in one language for a text in another". Nida and Taber (1982:12) state, “Translating consists of reproducing in the receptor language the closest natural equivalent of the source message”. Shehab (2009:869-890) defines translation as transferring meaning from one language into another attaining a high degree of equivalence of the context, and semiotic components of the source text. Moreover, Ghazala (1995:1-2) also mentioned that “as a subject, translation is generally used to refer to all the processes and methods used to convey the meaning of the source language into the target language. That is, the use of: (1) Words which already have an equivalent in Arabic language. (2) New words for which no equivalent was available in Arabic before. (3) Foreign words written in Arabic letters. (4) Foreign words changed to suit Arabic pronunciation, spelling and grammar”. Translation studies addressed the issue of translation types. For instance, Roman Jakobson (1959:30-39) classified translation into three types: Intralingual, interlingual, 10 and intersemiotic. Intralingual translation means paraphrasing or summarizing within the same language, whilst interlingual translation is the traditional way of transferring meaning from one language into another. Intersemiotic translation refers to transferring the verbal signs into nonverbal signs. Catford (1965) proposes full and partial translation. In full translation, every bit of the source text is transferred into the target text. While in partial translation, some parts are not translated. Newmark (1988) states that translation is either free or literal. In free translation, the main concentration is on the content rather than the form of the source text, it is simply a matter of paraphrasing. On the other hand, literal translation refers to translating the meaning of single words but converting the grammatical constructions of the source language to their closest constructions in the target language. In conclusion, translation can be defined in its simplest form as the process of rendering the meaning and form of the source text into the target text. 2.3 Religious translation Religious translation is a very complex type of translation that is concerned with the translation of sacred and highly sensitive texts. And as a result of the sensitivity of religious texts, a large number of translators prefer to avoid it. Religious texts, the divine ones in particular, came to an individual or a whole nation as per their language, culture, intellectuality, and mentality. 
So, translating these texts requires full knowledge about all of the previous aspects which makes the process of translation difficult. Since sacred texts, such as the Holy Qur’an, cannot be perceived at once, hence translators tend to translate the meaning of the Islamic thoughts indulged within these texts provided that these divine texts must be read by using the authentic language “Arabic” when performing prayers and rituals. Religious culture specific words constitute a problem for the translator who is not fully aware of Arabic and English cultures and not competent enough. For example, the Arabic word “الوضوء”, it is not appropriate to translate it into the English word “ablution” which refers to the act of washing oneself. However, it should be transliterated into “el wodoo” which means washing specific parts of the body in a specific time that is before each prayer. It is argued that religious translation is not restricted only to the translation of the Holy Qur'an and the Prophetic Hadiths, but it also involves the translation of 11 articles, research, and religious subjects so as to accomplish various objectives, for instance, spreading the teachings of Islam, disseminating the religious principles, revealing the real picture of Islam especially in the West, and eliminating the savage ideology of Islamophobia. 2.4 The translation of the Holy Qur’an The Holy Qur’an started to revelate upon our Prophet Muhammad (Peace be upon him) to all mankind in 612 AD, and since then a lot of efforts have been made to think through the issue of translatability of the Holy Qur’an. Some intellectuals like the Muslim Orthodox believe that Qur’an is untranslatable since it is the Word of Allah. On the contrary, numerous Muslim and non-Muslim intellectuals think oppositely. As a result, a huge increase in the number of interpretations of the Holy Qur’an appeared such as Abdullah Yusuf Ali’s work “The Holy Qur’an: Text and Translation” (1938), Arthur John Arberry’s “The Koran Interpreted” (1968), and Pickthall’s, “The Meaning of the Glorious Koran” (1930). Translating Qur'an is substantial for many reasons. First and foremost, the religion of Islam is universal and Prophet Muhammad "Peace be upon him" was sent to the entire world as the messenger of Allah to guide mankind. This universality of Islam held the Muslims completely responsible for rendering the Holy Qur'an into other languages. Second, scholars like Imam Al-Bukhary, Ibn Taymya, Ibn Hajar, Muhammed Ibn Salih Al-Uthaymeen, and Ibn Baz, believe that it is mandatory to translate Qur'an into different languages. Third, lots of people around the world attempted to look for the real identity of Muslims through the translations of Qur'an after the incidents of September 11th, 2001, but unfortunately they found few translations translated mainly by non-Muslims. The importance of translating the Holy Quran into English in particular comes from the fact that it is a global language. Also it is the language of international communication, the media, and the internet. Most importantly, it is the official language of two of the most powerful and influential countries in the world (The United States of America and the United Kingdom) and the second language in many countries such as India, China, and so on so forth. This great diffusion of English helps make any English translation of the Holy Qur'an more widely spread than any other translation. 
"The English language, being widely spread, many people interested in Islam will get their ideas of the Qur'an from English translations" (Ali, 1934: xiii).
2.5 Difficulties in translating the Holy Qur'an
Many translators encounter multiple and varied problems when translating the Holy Qur'an, since it has an inimitable nature that is impossible to reproduce. Consequently, this leads to a loss in meaning. According to Arberry (1973), "the Qur'an is neither prose nor poetry, but a unique fusion of both. So it is clear that a translator cannot imitate its form as it is a Quran-specific form that beautifully utilizes the peculiar properties of the original language" (p. 10). Based on that fact, it is difficult to find an exact equivalent to the form and content of the Holy Qur'an. Hence, the translation problems that lead to a semantic loss in the meaning of the Qur'anic discourse fall into two broad types: linguistic and cultural problems.
2.5.1 Linguistic problems
The main aim of translation is to transfer the meaning from the source language into the target language while maintaining the original meaning of the ST. Yet, since no two languages follow the same system or are similar in form, culture, norms, terminology, vocabulary, etc., translators encounter various linguistic problems, such as semantic, syntactic, and lexical ones.
2.5.1.1 Semantic problems
Semantics is the study of meaning at the level of words, phrases, and sentences. It is linked to the themes of denotation, reference, and representation; in other words, it is the study of the relationship between words and their meanings. A word has two types of meaning. The first is the "reference"; for example, the word "book" refers to a collection of paper bound together containing printed material. The second is the "sense", which determines the word's semantic relationship with other words; for instance, "big" is the opposite of "small". Each morpheme in a word has a meaning. When the suffix "-er" is added to a verb, a noun is derived; it may refer to an agent, as in "worker", or to an instrument or device, as in "washer". Some morphemes have different meanings when added to different types of words. For example, when the prefix "un-" is added to an adjective, it produces the opposite ("helpful", "unhelpful"); however, when added to a verb, it indicates a reverse action ("tie", "untie").
When we study a language or translate, we should be careful about the meanings of the words it consists of. For example, when someone starts learning a new language, he first learns the meanings of its words before studying their grammatical (morphological and syntactic) properties. All in all, semantics is the study of meaning at the word, phrase, and sentence level.
Semantic problems appear for many reasons: for instance, the lack of equivalence in the target language, particularly in the religious and cultural fields; the complex nature of the Qur'anic discourse; the inability to distinguish between the meanings of some words; and the use of the hidden or connotative meaning of words, which causes misconception of the real meaning and the intent behind using it. Consequently, translators must have a thorough knowledge of Islamic culture and be fluent in Arabic in order to reach an adequate equivalent. An example of a semantic problem is the pair of words "الريح" and "الرياح".
Some translators are not aware of the difference in meaning between these words and hence believe that they have the same meaning. In fact, the word "الريح" has a negative connotation that reflects torment and doom, for example:
قال تعالى: "ولما فصلت العير قال أبوهم إني لأجد ريح يوسف لولا أن تفندون" (Yusuf: 94)
"And when the caravan departed [from Egypt], their father said, "Indeed, I find the smell of Joseph [and would say that he was alive] if you did not think me weakened in mind." (Abdullah Yusuf Ali)
However, the word "الرياح" holds a positive meaning that reflects mercy, for example:
قال تعالى: "وأرسلنا الرياح لواقح فأنزلنا من السماء ماء فأسقيناكموه وما أنتم له بخازنين" (Al-Hijr: 22)
"And We have sent the fertilizing winds and sent down water from the sky and given you drink from it. And you are not its retainers." (Abdullah Yusuf Ali)
The main problem lies in the fact that both words have the same English equivalent, "wind".
2.5.1.2 Syntactic problems
Syntax refers to the study of the structure of sentences and how words are combined. Syntactic problems appear in translation because of the huge differences among languages; for instance, Arabic and English are two radically different languages since they belong to different systems. Syntactic problems are usually found in tenses, conditionals, and word order.
With regard to tenses, there are two types, present and past, in natural languages including English and Arabic. Both Arabic and English have agreement and aspect, which configure the verb construction in the progressive and perfective cases, but their forms differ between the two languages; this leads translators to fail to convey the main tense when translating tenses literally. As a result, translators may resort to shifting in order to convey the exact meaning to the target readers, which may cause a semantic loss in meaning. The following example is a verse taken from Surat Al-Ahzab:
قال تعالى: "إذ جاءوكم من فوقكم ومن أسفل منكم وإذ زاغت الأبصار وبلغت القلوب الحناجر وتظنون بالله الظنونا" (Al-Ahzab: 10)
"Behold! They came on you from above you and from below you, and behold, the eyes became dim and the hearts gaped up to the throats, and ye imagined various (vain) thoughts about Allah" (Abdullah Yusuf Ali)
Here the verbs (جاءوكم) "came against you", (زاغت) "grew wild", and (بلغت) "reached" are in the past form, yet the verb (وتظنون) "think" shifts to the present form. The main aim of this shift is to show that those events are happening in the present. Tenses in Arabic must therefore be shifted to communicate the exact meaning to the target readers, since they cannot be transferred literally.
2.5.1.3 Lexical problems
Lexical meaning is the meaning of a lexeme, which depends on the specific context in which it is used. It is difficult to classify lexical meaning since it concerns not only the literal meaning but also the relations between linguistic units, such as synonymy, hyponymy, polysemy, and homonymy. For the current research, Baker's (2011) typology was adopted to identify the following lexical and morphological issues: synonymy, polysemy, homonymy, and hyponymy.
2.5.1.3.1 Synonymy
Palmer (1981) defines synonymy as a lexical relationship that indicates similarity in meaning. According to Shunnaq (1992:5-39), translating synonyms is confusing because of the slight contrasts between them; consequently, a native speaker can judge these variations more reliably than a non-native speaker. Shehab (2009) discusses the case of the two Arabic words "يغبط" and "يحسد", which cannot be understood without some knowledge of the distinctions between these near-synonyms. Translators may use the word "envy" for both, yet it does not convey the genuine meaning, because "يغبط" has a positive connotation while "يحسد" has a negative hidden meaning. Murphy (2003) states that synonyms can be classified into various types, are ordinarily perceived as lexical relations, and are interpretable on the basis of theories, knowledge, and conventions. In short, synonyms are words that share the same meaning.
2.5.1.3.2 Homonymy
Crystal (1991) defines homonymy as the relation between two words that have the same spelling yet different meanings. The following is an example of homonymy taken from Surat Al-Baqara:
قال تعالى: "قل إن هُدى الله هو الهُدى" (Al-Baqara: 120)
"Say: The Guidance of God, - that is the (only) Guidance." (Abdullah Yusuf Ali)
"Say: God's guidance is the true guidance." (Arthur Arberry)
In this example, the homonyms are the words "هُدى" and "الهُدى". Both Ali and Arberry translated "هُدى" as "Guidance", which accords with Al-Zamakhshari's interpretation, "the guidance of Allah that He sent to the Prophet Muhammad (Peace Be Upon Him)". However, Ali translated the second homonym "الهُدى" as "the only Guidance", whereas Arberry interpreted it as "true guidance", which is similar to Al-Zamakhshari's interpretation; thus Arberry's translation is successful and more appropriate. By contrast, Ali failed in translating this word, since he did not offer any explanation for his choice and it differed from the rendering given in Al-Zamakhshari's exegesis.
2.5.1.3.3 Polysemy
Geeraerts (2010) defines polysemy as the property of a word or phrase that has multiple meanings. An example is the word "يضرب" in the following verse taken from Surat Al-Baqara:
قال تعالى: "إن الله لا يستحيي أن يضرب مثلا ما بعوضة" (Al-Baqara: 26)
"Allah disdains not to use the similitude of things, Lowest as well as highest" (Abdullah Yusuf Ali)
"God is not ashamed to strike a similitude even of a gnat, or aught above it." (Arthur Arberry)
Ali successfully conveyed the exact meaning of the verse by translating the word "يضرب" functionally as "to use". On the other hand, Arberry translated it literally as "to strike", which deviates from the original meaning and leads to a semantic loss, since the word "يضرب" has many senses in Arabic, such as يضرب مثلاً (to give an example), يضرب في الأرض (to travel through the land), and يضرب ضرباً مبرحاً (to beat severely). Here, according to Al-Zamakhshari, "يضرب" in this verse means يضرب مثلاً.
2.5.1.3.4 Hyponymy
Hyponymy is the semantic relationship between a generic term and a specific instance of it. A hyponym is defined as a word or phrase whose semantic meaning is more specific than that of its hypernym. For instance, the Arabic words "عم" (paternal uncle) and "خال" (maternal uncle) are hyponyms of the English hypernym "uncle". Another example is the words "fig" and "olive" in Surat At-Tin in the Holy Qur'an; these two words are hyponyms of the word "fruit".
2.5.1.3.5 Metonymy
Newmark (1988) mentions that metonymy occurs where the name of an item is substituted for another thing related to it. This kind of transfer happens on one condition: when there is a close relationship between the literal and the figurative meaning, together with an implied hint. Metonymy is found in the Holy Qur'an to serve a purpose, as in the following example from Surat Nuh:
قال تعالى: "يرسل السماء عليكم مدرارا" (Nuh: 11)
"For whom we poured out rain from the skies in abundance" (Abdullah Yusuf Ali)
"and how we loosed heaven upon them in torrents" (Arthur J. Arberry)
Ali rendered the word "السماء" as "skies" in order to depict the exact image, which indicates the abundance of rain. By contrast, Arberry failed to convey the intended meaning of the metonymy when he used literal translation.
2.5.1.3.6 Metaphor
According to the Oxford Advanced Learner's Dictionary (2010), a metaphor is the use of a word or phrase to describe something else without explicitly invoking the similarity between the word or phrase used and the thing referred to. There are plenty of metaphors in the Holy Qur'an. For instance, consider this verse taken from Surat Al-Hajj:
قال تعالى: "وترى الأرض هامدة فإذا أنزلنا عليها الماء اهتزت وربت وأنبتت من كل زوج بهيج" (Al-Hajj: 5)
"Thou seest the earth barren and lifeless, but when we pour down rain on it, it is stirred (to life), it swells and it puts forth every kind of beautiful growth in pairs" (Abdullah Yusuf Ali)
Here, Allah the Almighty compares the state of the earth after the rain to a dead body that comes back to life after being watered. Therefore, literal translation would not be the appropriate solution, and translators must convey the exact meaning of the verse communicatively.
2.5.2 Cultural problems
Culture is another critical problem in the translation of the Holy Qur'an. Individuals who share a similar culture, traditions, values, beliefs, and way of life do not face difficulties understanding each other. However, when there are two different languages with two different cultures, it becomes difficult to communicate information and ideas. In such situations, we transfer not only the meaning but also the culture hidden within it, so that things become clear and easy. Cultural translation is considered one of the most complex types of translation problems for several reasons. First, languages are loaded with culture-specific terms that require the translator to be highly competent and fully aware of both cultures.
For that, it is linked to units of equivalence such as words, phrases, clauses, morphemes, proverbs, idioms, etc. We must not forget that these units of equivalence and equivalence level are strongly linked. Translators must bear in mind not to focus only on the linguistic equivalence but also shed some light on the cultural equivalence. Despite the fact that lots of scholars worked on the concept of equivalence, two major approaches only grabbed all the attention which are the linguistic approach and the pragmatic approach. Most of translation procedures depend on the translation of transformations taking equivalence into consideration. The following are Vinay and Darbelnet's seven procedures for translation (1995, pp.30–40):  Borrowing: Is the process whereby new words are formed by adopting words from other languages together with the concepts or ideas they stand for. Examples: -tango, mango, taco, burrito from Spanish; -fiancé, very (adopted from Old French verai), -garage from French; -pizza, mafia from Italian. -Foreign words like these among others may cause a dilemma in translation.  Calque: A type of borrowing where the source language expression is transferred to the target language with some sort of semantic change. 19  Literal translation: Direct transfer of the ST into the target language in a grammatically and idiomatically proper way.  Transposition: One part of speech is exchanged by another one keeping the sense unchanged.  Modulation: It includes a change in the semantics of the source language.  Equivalence: Using different stylistic features to describe the same situation.  Adaptation: When a situation in the source culture cannot be found in the target culture, a modification in the cultural reference is adapted. Nida and Taber (1982) suggested two crucial types of equivalence: Formal and dynamic. They defined the formal equivalence (word-for-word translation) as "quality of a translation in which the features of the form of the source text have been mechanically reproduced in the receptor language” (p. 201). In dynamic equivalence (sense-for-sense translation) " the form is structured (different syntax and lexicon) to preserve the same meaning” (p. 173). Their effort constituted an enormous help to the translators in analyzing the text they are dealing with. The American translation theorist Lawrence Venuti in his book "The Translator’s Invisibility", introduced two translation strategies which are domestication and foreignization. Venuti (1995) defines domestication as “an ethnocentric reduction of the foreign text to target-language cultural values, bring the author back home", we can say that it is target text-oriented. On the other hand, he defines foreignization as “an ethno deviant pressure on those (cultural) values to register the linguistic and cultural difference of the foreign text, sending the reader abroad", so it is source text-oriented. Venuti follows foreignization because he believes that it is more eligible since it preserves the cultural and linguistic features of the source text. Newmark (1988) argues that “the central problem of translating has always been whether to translate literally or freely. The argument has been going on since at least the first century BC up to the beginning of the nineteenth century” (p. 45). He introduced many translation methods which are classified depending on whether the focus is on the source language or on the target language. 
 Methods according to the emphasis on the source language are:  Word for word translation: Translating the meaning of single words maintaining their word order in the source language. 20  Literal translation: Translating the meaning of single words but converting the grammatical constructions of the source language to their closest constructions in the target language.  Faithful translation: Faithful to the ST author's intentions and ideas.  Semantic translation: Preserving the same meaning of the source text but without focusing on the aesthetic features (assonance, rhyme, repetition, etc.)  Methods according to the emphasis on the target language are:  Free translation: This method concentrates on the content rather than the form of the source text. It is simply a matter of paraphrasing.  Idiomatic translation: Transfers the message of the source text with some sort of distortion in the meaning due to the use of idioms that are not found in the source language.  Communicative translation: Translates the same contextual meaning of the source text having the content and language acceptable to the readers. House (1997) is one of the supporters of the pragmatic approach. She argues that the source and the target texts should act functionally the same. Moreover, the most precise translation should go with the textual function of the source text. She also makes a differentiation between two kinds of translation: Overt and covert translation. Overt translation means that when the target text does not act like the original one. On the other hand, the covert translation means when the target text has the features of the original one. Mona Baker (2011) provides a detailed list of criteria upon which the concept of equivalence can be defined. She believes that equivalence is relative due to be being influenced by linguistic and cultural factors (p.6). She introduces new types of equivalence and analyzes them at various levels, that is, at the word level, above the word level, grammatical level, textual level, and pragmatic level. 21 Following the bottom-up approach, Baker stresses the importance of single words during the translation process because the first thing translators focus on is finding an equivalent to these individual words in the target language. She defines the term "word" as the smallest significant unit, taking into consideration, that a single word can sometimes be assigned different meanings in different languages. Cruse (1997) presented four types of the lexical meaning: The prepositional meaning which describes the relationship between words and their imaginary meanings, the expressive meaning that relates to the emotions of the speaker, the presupposed meaning which rises from the co-occurrence of restrictions and finally the evoked meaning that focuses on the meaning besides the dialect and register. Hence translators should pay attention to the parameters such as number, gender, and tense (p. 11-12). The equivalence above the word level includes the translation of idioms, proverbs, phrases, collocations, and other word combinations. The grammatical equivalence refers to the diversity of the grammatical categories among languages. Baker mentions that finding a target equivalent is quite unattainable due to the so many differences in the grammatical systems or rules of languages. Such categories that pose difficulties are number, gender, person, voice, tense, and aspect. Consequently, this may compel the translator to add or delete information in the target text. 
Textual equivalence refers to the kind of equivalence obtained between the SL text and TL text in terms of cohesion and information. Baker stresses the importance of texture in helping translators understand and analyze the text they are dealing with since it links together the words and expressions we say or write, thus producing a cohesive target text. (Baker, 2011:190) Finally, Baker’s pragmatic equivalence concentrates fundamentally on two crucial concepts which are the implicature and coherence. She states that implicature refers to what is implied or intended not what is explicitly said and it is divided into Paul Grice’s (1975) four maxims (quality, quantity, manner, relation), whereas coherence refers to the semantic relationships that make a text more arranged and logical. (p.230). Translators face some problems when dealing with the pragmatic equivalence such as concentrating on the literal meaning of the words without taking into 22 consideration the connotative meaning, so the translator's role here is to figure out the intended meaning and convey it adequately in the TL text. 2.7 Ivir’s seven strategies for overcoming cultural gaps Ivir (1987) suggested seven strategies in order to help translators translate cultural-specific items. These strategies are: Borrowing, definition, literal translation, substitution, lexical creation, omission, and finally addition. 1. Borrowing: Borrowing means that the translator imports a SL expression into the TL. This strategy can be joined with replacement or definition. Borrowing is utilized only when it is required and it succeeds when the borrowed item is utilized more than once. Additionally, the borrowed item ought to handily incorporate into the TL, both phonologically and morphologically. Translators must be careful not to use an excessive amount of borrowed words because of the effect of the source culture on the target one. Some examples from Arabic into English include Omra and intifada. 2. Definition: Definition refers to some sort of clarification given by a translator to a word or a term. This sort of definition is incorporated either inside the text itself or as a footnote. Definition can be joined with lexical creation, borrowing or replacement. Translators should bear in mind that definition sometimes leads to over-translation. So, they must take into consideration to add only what is important and needed. An example on definition is the word “zakat”, a compulsory payment gathered for once each year according to the Islamic sharia for charity purposes. 3. Literal translation: Literal translation is the most widely recognized strategy used when it is joined with borrowing. The significance of this strategy lies in its faithfulness to SL expressions and its transparency in TL. For instance, “money laundry غسيل اَلموال“. Nevertheless, translators do not utilize literal translation when it would contradict with some terms in the TL, or if the translation prompts issues in the grammatical structure in TL. 23 4. Substitution: Substitution is used when there is a specific item of culture missing. In this case, translators tend to use a similar equivalent yet not typically the same. An example on substitution is: verse vs. آية. This strategy could be joined with addition. Here, the receptor has no trouble to comprehend and recognize the terms and ideas. Substitution clears the vagueness and weirdness of the source culture. 5. Lexical creation: Lexical creation means that a new lexicon is being created. For example, mobile جوال. 
There is no limitation on how translators create these new vocabulary as long as they are adequate. Despite this, the other strategies are used more often than this one since it burdens the mind of the translator and the receptor. Another example that was inserted in the English dictionaries about ten years prior is the term “belly dancing: الرقص الشرقي”. 6. Omission: Omission is required not by the nature of the cultural item, yet by the nature of the communicative situation wherein such a cultural item shows up. For instance, Arab individuals occasionally salute each other in the morning by saying “ صبحكم الله بالخير ”, thus when it is translated into English it is sufficient to say "good morning" since the English language culture tends to use simple salutations 7. Addition: Addition is used when we interpret certain inexplicit items of culture. It is joined with lexical creation, borrowing or substitution. For instance, if we found this abbreviation “MOD” in an English text, we would translate it into Arabic by adding the full words of the initials as a matter of clarification as ‘ وزارة الدف اع البريطانية ’, so the Arabic reader would be able to understand its meaning. Another example is the metaphor “to Save one's face” which is translated into Arabic as “يحفظ ماء الوجه”, in this case, the Arabic word “ماء” is included because it is basic in the Arabic metaphor. 24 2.8 Previous studies 2.8.1 Previous studies in relation to the translation of the Holy Quran. As suggested in his PhD thesis title, Reasons for the Possible Incomprehensibility of Some Verses of Three Translations of the Meanings of the Holy Quran into English, Al-Jabri (2008) investigated the causes of incomprehensibility of the translation of some verses for native speakers of English. He chose three translations for Al-Hilali, Yusuf Ali, and Arthur Arberry and used them in a questionnaire then disseminated it among highly-educated English native speakers to measure the comprehensibility of the translated verses. Al-Jabri came out with a shocking result that the clarity of the translations was unfortunately less than 5%. The main causes behind this poor kind of translation were due to peculiar style, literal translation, cultural differences, the use of old English, transliteration, uncommon orthography; the absence or misuse of punctuation marks, and the extreme use of explanations between brackets. Najjar (2012) in her PhD entitled An investigation of a Sample of Quran metaphors with reference to three English versions of the Quran, discussed the obstacles faced during translating Qur’anic metaphors and the way they are rendered properly. Three translations of the Holy Qur’an for Arberry, Yusuf Ali, and Pickthall were chosen for this study. The data collection tool was a questionnaire. The main findings showed that the three translations failed in conveying the metaphorical meaning and they were heavily loaded with errors and that the main causes behind these errors are a result of the translator’s use of old English, complex words, complex word order, and the translation of words out of the context. In his paper Translation of the Holy Quran: A Call for Standardization, Halimah (2014) concentrated on five English translations of the Holy Qur’an that are for Ali, Arberry, Dawood, Abdel Haleem, and Schult-Nafeh. 
The results of the study revealed that the translators failed in achieving the cultural and communicative equivalence and so there is a great need for having one unified and standardized version of the translation of the Holy Qur’an to be utilized in all the English speaking countries. Hence, and for that purpose, the researcher introduced a list of recommendations such as: the standardized version of the translation of the Holy Qur’an must not deviate from 25 the original meaning and there must be a specific institution that is officially authorized to translate the Holy Qur’an. Jassem (2014) carried out a study on Al-Hilali and Khan's translation of the Holy Quran. The researcher evaluated their translations critically to decide which translation is more accurate than the other. The number of the data selected for the study was 261 instances which are so far from the normal English usage. The findings showed that the translations are full of grammatical, lexical, stylistic, and discourse errors. These errors refer to language transfer, overgeneralizations, ignorance of rule restrictions, and language loyalty. Jassem concludes that although the translators spared no effort to produce a precise translation, the final outcome appeared to have depended on literal translation which does not convey the exact meaning. In their paper Cultural Problems in the Translation of the Quran, Al Azzam, Al Ahardib, Al Huqail (2015) discussed the cultural issues that translators encountered when translating the Holy Qur’an. They selected three translations for different translators from different backgrounds. Random verses containing cultural-specific items were extracted from the Qur’an to examine the authenticity of the translation. The results showed that the three translations have a loss in meaning as a result of having semantic implications in the source texts that translators themselves were not able to understand. Besides the information they tried to convey were not sufficient enough for the target reader to conceive. So, the researchers suggested that translators must provide the readers with more details using footnotes. (pp. 28-34) Anari and Sanjarani (2016) in their paper entitled Application of Baker's Model in Translating Quran-Specific Cultural Items asserted that translating the Holy Qur’an has significantly contributed to the cross-cultural understanding. The researchers chose three various translations to examine how CSIs are translated based on Baker’s model. The results showed that strategies such as omission and illustration were not used at all, however the strategy of translating by more general words was the most used and translating by paraphrasing using unrelated words was used the least. (pp. 145-151) Siddiek (2017) conducted a study on the linguistic precautions that should be taken into consideration in the translation of the Holy Qur’an. The researcher investigated the causes of the linguistic losses in the translations of the Holy Qur’an. He extracted some samples of translations of famous English translators. The findings 26 uncovered that the causes behind these losses are due to using literal translation and archaic words. The researcher recommends that translators must focus on the function itself and try to avoid literalism although the main aim behind it is maintaining the holiness of the Qur’anic discourse. 
Issa (2017) discussed in his study Mistranslations of the Prophets' Names in the Holy Quran: A Critical Evaluation of Two Translations, the renditions of twenty five prophets' names with reference to translation strategies. The main aim of this study is to participate in the improvement of the Holy Qur’an translation. The data was extracted from two translations of the Holy Quran by Ali (1964), and Al-Hilali and Khan (1993). The analysis showed that Ali misinterpreted six names while Al-Hilali and Khan misinterpreted four due to their use of transliteration rather than naturalization. (pp. 168-174) Abdelaal (2018) conducted a study on the losses in the translation of connotative meaning in the Holy Quran and examined the causes of such losses. Abdelaal selected seven examples from the Holy Qur’an and analyzed them qualitatively. The results revealed that the main causes behind the losses in the connotative meaning are due to non-equivalence, which resulted from lack of lexicalization, semantic complexity, culturally-bound terms, difference in expressive meaning and the distinction of meaning between the SL and the TL, and translator’s incompetence in conveying the meaning through using the most suitable equivalent. The researcher suggested some strategies to avoid the previous losses such as footnoting, transliteration, periphrastic translation, and accuracy of selecting the proper equivalent that can be achieved by triangulation procedures such as peer-checking and expert-checking. Abdelaal (2019) investigated the faithfulness in the translation of the Holy Quran in light of the Skopos theory. The researcher chose six verses of the Chapter of Al-A’araf and Al-Ana’m and analyzed them. The findings of the study show that some losses were spotted in the translations of Abdel Haleem, Pickthall, Shakir, and Sarwar, for example, semantic losses and losses in the denotative and connotative meaning. The researcher recommends future translators to make use of the Skopos in the translation of the Holy Qur’an instead of just rendering meaning in the target language since faithful translation denotes that no effort was exerted to convey the main purpose of the original text. 27 2.8.2 Previous studies in relation to the semantic loss in the Holy Qur’an. Abdelaal and Rashid (2015) examined the semantic loss in the translation of Surah al-WaqiAAa by Abdullah Yusuf Ali and they also studied the reasons behind these losses. The research is qualitative and follows the descriptive content analysis. The researchers selected Abdullah Yusuf Ali’s translation from his book: The Holy Qur’an: Text and Translation. Two Arabic and English language experts were consulted to check the meanings of the translated ayat. Baker’s typology was utilized to spot the reasons behind the semantic losses. The findings of the study revealed that the main causes behind the partial and complete semantic losses are because of mistranslations, semantic complexity of the vocabularies, and culture. (pp.1-11) In their study Semantic Loss at Word Level in Quran Translation, Hana and Ilhem (2016) examined the semantic loss in the translation of Surah Al-Baqara plus its types and causes. The researchers selected the translations of both Arthur John Arberry and Abdullah Yusuf Ali. They followed Baker’s typology of equivalence and concentrated mainly at the word level. The verses of the surah were analyzed and critically evaluated. 
The analysis has shown that Arberry’s literal translation led to vagueness in meaning, as did Ali’s translation, resulting in a partial semantic loss in some verses and a complete one in others. It can be concluded that those losses appeared because the translators were not competent enough in both language and culture and lacked some translation skills.

Abdelaal and Rashid (2016) conducted a study on grammar-related semantic losses in the translation of the Holy Quran, with special reference to Surah Al A’araf. The researchers adopted the qualitative descriptive approach. The data was taken from Abdel Haleem’s English translation of Surah Al A’araf. The findings of the study revealed that the grammatical losses in conjunctions, syntactic order, duality, tense, and verbs led to both complete semantic losses and partial ones in the connotative or expressive meaning. Abdelaal and Rashid recommended that suitable translation strategies be followed in order to prevent such losses in the translation.

Abdelaal (2017) discussed the grammatical and semantic losses in the translation of the Holy Qur’an with special reference to Surat Al-A’raaf, At-Tur, and Al-Ana’am. This study is qualitative in nature. The sample selected for the study is Abdel-Haleem’s English translation of the above-mentioned Surahs. Abdelaal used content analysis of the translation of the specific verses of the assigned surahs, following Baker’s typology of non-equivalence and Catford’s translation shifts. The results of the study uncovered different kinds of grammatical losses in Abdel-Haleem’s translation of these surahs, such as losses in the translation of conjunctions, tense, syntactic order, loss of emphasis, duality, and plurality. Moreover, other kinds of semantic losses were also discovered, for instance, overtranslation and loss in rhetorical devices and expressive meanings.

In their study Complications of Translating the Meanings of the Holy Qur’an at Word Level in the English Language in Relation to Frame Semantic Theory, Balla and Siddiek (2017) aimed at examining the losses that result from the lexical choices in the translation of the Holy Qur’an, showing the significance of the semantic theory in translation, revealing the linguistic or cultural factors that affect the translation, and pointing out the strategies translators adopted to prevent the problems in translation. The researchers extracted two words from the Holy Qur’an. The results of the research showed that the linguistic factors affected the translator’s choices more than the cultural ones. In addition, Ali’s translation occupied the first position, being completely accurate, and Pickthall’s took the second position.

Islam (2018) investigated the semantic loss in two English translations of Surah Yasin by two translators, Abdullah Yusuf Ali and Arthur John Arberry. The research is qualitative and based on Hermeneutics. It follows Baker’s typology of equivalence (1992) to determine the causes of the semantic loss. The data was extracted from Abdullah Yusuf Ali’s work “The Holy Qur’an: Text and Translation” (1938) and Arthur John Arberry’s “The Koran Interpreted” (1968). The findings reveal that Abdullah Yusuf Ali’s translation led to a partial loss of meaning, whereas Arthur John Arberry’s translation resulted in a complete loss of meaning, and one of the main causes of these losses is linguistic deviation from the original text.
(pp. 18-34)

Shammalah (2019) examined domestication and foreignization strategies in the translation of cultural-specific items in Surat Alnisaa’. She chose two English translations, by Talal Itani and Abdullah Yusuf Ali. The data selected consisted of 50 cultural-specific items. The research followed comparative textual analysis based on Ivir’s (1987) translation strategies. The analysis of Itani’s and Ali’s translations revealed that both translators adopted domestication strategies rather than foreignization. Moreover, Ali’s and Itani’s use of foreignization strategies was more suitable for obtaining cultural equivalence than their use of domestication strategies. Shammalah recommended that translators of the Holy Qur’an should be fully knowledgeable about the metaphorical and expressive language of the Holy Qur’an.

In her study Impact of Semantic Loss in the Holy Quran Translation with Reference to Yusuf Ali’s and Pickthall’s Translations of Al-Nur Surah, El-Halabi (2020) examined the semantic loss in the translations of two well-known translators, Abdullah Yusuf Ali and Pickthall; she also discussed the causes of this loss and the extent to which the translators were able to achieve cultural equivalence. She conducted both quantitative and qualitative research. Forty cultural-specific items were extracted from Surat Al-Nur in the Holy Quran. The researcher followed comparative textual analysis of the two translations based on Ivir’s (1987) strategies. The findings of this study showed that the causes behind the semantic loss were the abundance of cultural-specific terms and the translators’ lack of knowledge in the field of Qur’anic metaphorical language. El-Halabi suggested that translators must follow books of tafseer when translating the Holy Qur’an and go deeper into studying the science of Qur’anic discourse.

2.9 Commentary on the previous studies

Having reviewed the previous studies, the researcher concludes that reasonable attention has been devoted to the losses in the translation of the Holy Qur’an. One can also notice that most of the studies regarding translation losses focused on grammatical, lexical, and cultural losses, whereas only a few studies concentrated on the semantic loss in the translation of the Holy Qur’an. The researcher points out that there is a concrete need to work on the semantic loss in particular, since the main focus is on the meaning of the messages of the Holy Qur’an, to see whether they are conveyed accurately or not. The above-mentioned studies emphasized the importance of investigating the losses in the translation of the Holy Qur’an, as it has become a main concern that most researchers shed light on, and they also showed the main causes behind these losses. For instance, Abdelaal and Rashid (2015) and Hana and Ilhem (2016) argued that the translators’ incompetence and lack of knowledge of both language and culture led to gross losses in the semantic meaning. Abdelaal and Rashid (2016) again reached the conclusion that grammatical losses in conjunctions, syntactic order, duality, tense, loss of emphasis, and verbs led to semantic losses. Moreover, Balla and Siddiek (2017) and Islam (2018) agreed that the linguistic factors that affected the translators’ choices led to some deviation from the original texts.
The other previous studies confirmed that the general losses in the translation of the Holy Qur’an stem mainly from dependence on literal translation, the use of archaic or old English, the failure to use footnotes or provide further details when necessary, and insufficient attention to the metaphorical or rhetorical language of the Holy Qur’an. The above studies showed that addressing this issue will help translators avoid such losses in their translations. The selected previous studies were conducted by several researchers in different universities, colleges, and places around the world. All of them addressed the translation of the Holy Qur’an, specifically the losses found in these translations. There are slight differences between these previous studies and the present one. This study, to the best of my knowledge, is the only one that investigates the semantic loss in the translation of two full chapters of the Holy Qur’an, namely Surat Al-Mujadilah and Surat Al-Hashr, unlike other studies that select scattered verses from different chapters.

2.10 Conclusion

In this chapter, the theoretical and practical parts were examined. The researcher discussed various theories in translation studies besides translation equivalence. The empirical studies involved two parts: the first part is related to previous studies on the translation of the Holy Qur’an in general, and the second to studies on the semantic loss. Different types of previous studies were utilized, such as PhD dissertations, MA theses, and research papers, most of them recent. The next chapter will discuss the methodology followed in this thesis.

Chapter 3 Corpus and Methodology

3.1 Introduction

This chapter sheds light on the procedures and the steps the researcher followed to achieve the objectives of the study. It involves items such as the study approach, data, data analysis, data collection procedures, instrumentation, and inter-rater reliability. In addition, this chapter includes the strategies used for analyzing the data.

3.2 Research design

This research falls under the interpretive paradigm of qualitative research since it is based on Hermeneutics. The qualitative descriptive approach suits this research as it deals with the translation of the Holy Qur’an, a very difficult process, since the Qur’an is of an inimitable nature that cannot be adequately examined through other approaches. Qualitative research is always descriptive in nature, meaning that the data analyzed and the results of the analysis take the form of descriptions of phenomena rather than numerical values or coefficients of relationships among variables (Aminudin, 1991:16).

3.3 Data of the study

The data of this research consisted of 52 cultural-specific items that were extracted from Surat Al-Mujadilah and Surat Al-Hashr. These CSIs are religious and related to Islamic culture. The selected CSIs are single words or phrases of up to two words in length, which is why they were discussed based on Baker’s typology of equivalence at the word level. The following figures summarize the main themes of the two suras.

Figure (3.1): Themes of Surat Al-Mujadilah
- The legality of the pre-Islamic method of divorce called zihar
- A warning message to the Muslims to avoid the enemies of Islam
- The rules of gatherings in Islam

Figure (3.2): Themes of Surat Al-Hashr
- The expulsion of the Jewish tribe of Banu Nadir
- The Beautiful Names of Allah
- The false promises of the hypocrites
- The exhortation of the Believers to faith

Table (3.1): Data of the study (Surah Al-Mujadilah)
No. / Cultural-specific item / Verse no.
Expressions related to the rules of Zihar in Islam:
1. يُظَاهِرُونَ (verse 2)
2. رَقَبَة (verse 3)
3. فَصِيَام (verse 4)
4. مِسْكِينًا (verse 4)
5. حُدُود (verse 4)
The rules of gathering in Islam:
6. حَيَّوْكَ (verse 8)
7. النَّجْوَى (verse 10)
8. انْشُزُوا (verse 11)
9. صَدَقَةً (verse 12)
General Islamic terms:
10. اللَّه (verse 1)
11. رَسُولِهِ (verse 4)
12. يَوْمَ الْقِيَامَةِ (verse 7)
13. جَهَنَّم (verse 8)
14. يَصْلَوْنَهَا (verse 8)
15. فَبِئْسَ الْمَصِيرُ (verse 8)
16. وَاتَّقُوا (verse 9)
17. تُحْشَرُونَ (verse 9)
18. بِالْإِثْمِ (verse 9)
19. وَالتَّقْوَى (verse 9)
20. بِضَارِّهِمْ (verse 10)
21. فَأَقِيمُوا الصَّلَاةَ (verse 13)
22. وَآتُوا الزَّكَاةَ (verse 13)
23. جُنَّةً (verse 16)
24. الْخَاسِرُونَ (verse 19)
25. يُحَادُّونَ (verse 20)
26. الْيَوْمِ الْآخِرِ (verse 22)

Table (3.2): Data of the study (Surah Al-Hashr)
No. / Cultural-specific item / Verse no.
Expressions related to the expulsion of Banu Al-Nadir:
27. الَّذِينَ كَفَرُوا (verse 2)
28. أَهْلِ الْكِتَابِ (verse 2)
29. الْحَشْرِ (verse 2)
30. أُولِي الْأَبْصَارِ (verse 2)
31. لِينَة (verse 5)
32. الْآخِرَةِ (verse 3)
33. الْفَاسِقِينَ (verse 5)
Expressions related to the ruling for the benefit of the Muhajirin:
34. خَيْلٍ وَلَا رِكَابٍ (verse 6)
35. وَلِذِي الْقُرْبَى (verse 7)
36. ابْنِ السَّبِيلِ (verse 7)
37. لِلْفُقَرَاءِ (verse 8)
38. الْمُهَاجِرِينَ (verse 8)
39. الدَّارَ (verse 9)
40. وَالْإِيمَانَ (verse 9)
41. صُدُورِهِمْ (verse 9)
42. حَاجَة (verse 9)
43. وَلِإِخْوَانِنَا (verse 10)
General Islamic terms:
44. قَوْم (verse 13)
45. الشَّيْطَانِ (verse 16)
46. جَزَاءُ (verse 17)
47. لِغَدٍ (verse 18)
48. النَّارِ (verse 20)
49. الْجَنَّةِ (verse 20)
50. مُتَصَدِّعًا (verse 21)
51. نَضْرِبُهَا (verse 21)
52. الْغَيْبِ (verse 22)

3.4 Data analysis

Comparative textual analysis was adopted in light of Baker’s typology of equivalence (2011) to identify the causes of the semantic losses in the two English translations, taken from Abdullah Yusuf Ali’s “The Holy Qur’an: Text and Translation” (1938) and Arthur John Arberry’s “The Koran Interpreted” (1968).

3.5 Procedures of data collection

In order to fulfill the purpose of the present study, the researcher followed several steps:
1. The researcher selected Surat Al-Mujadilah and Surat Al-Hashr and their interpretation in two leading books of Tafsir: Ibn Khathir (2000) and Al-Tabari (2003).
2. Two English translations of Surat Al-Mujadilah and Surat Al-Hashr, by Abdullah Yusuf Ali and Arthur John Arberry, were chosen for the purpose of the study.
3. These two English translations were closely examined to select the most culturally problematic items in both suras.
4. After reading the interpretation of the two suras and examining their translations, the researcher spotted 52 CSIs to have their meanings lexically analyzed.
5. Finally, the lexical meanings of the CSIs in the STs were compared with those of the TTs by employing the Tafsirs of Ibn Khathir (2000) and Al-Tabari (2003) as reference books, along with the Arabic Almaany dictionary and Mu’jam Lughat al-Fuqaha’ (1985), in addition to four English dictionaries: Oxford English Dictionary (2009), Cambridge Dictionary (1995), Merriam-Webster (1828), and English Dictionary (2012). Furthermore, Dr. Mohammed Al-Farra, a specialist in Quran interpretation at the Islamic University, was consulted to understand the meanings of the source text.

3.6 Inter-Rater Reliability

For the credibility of the study, the researcher provided the definitions of the cultural-specific items according to two reference books of Tafsir: Ibn Khathir (2000) and Al-Tabari (2003). In addition, she contacted Dr.
Mohammed Al-Farra, a Quranic interpretation expert at the IUG, to get a clear vision of some of the religious matters mentioned in both suras. To increase the impartiality of singling out the 52 CSIs, the researcher consulted Dr. Walid Amer, a professor of linguistics at the IUG and the supervisor of the current study, and Dr. Mohammed Al-Haj Ahmed, assistant professor of translation at the IUG.

3.7 Strategies used in the Analysis

For the analysis of the CSIs, the researcher utilized Ivir’s (1987) strategies, which were previously mentioned in Chapter Two, the literature review.

3.8 The translations to be investigated

For this study, the researcher selected the translations of two well-known translators: Abdullah Yusuf Ali and Arthur John Arberry.

Abdullah Yusuf Ali, an Indian Muslim scholar, was born on April 14, 1872, in Bombay, India, to a wealthy Muslim family. When he was young, he learned the principles of Islam and memorized the Holy Qur’an by heart. He spoke Arabic and English fluently. He studied English literature at several European universities, including the University of Leeds in Britain. Abdullah Ali focused his efforts on studying the Noble Qur’an until he produced his famous book “The Holy Qur’an: Text, Translation, and Commentary”, which was published in 1938. He was respected for his thought, which led Dr. Muhammad Iqbal to choose him for the position of Dean of the Islamic College in Lahore, India. Later, he returned to England and died in London. Ali’s translation is the older of the two and is distinguished by its ease, simplicity, and credibility in interpreting the Qur’anic verses.

Arthur John Arberry was born in England in 1905. He attended the Grammar School in Portsmouth and then joined the University of Cambridge to study the classical languages of Latin and Greek. One of his professors encouraged him to study Arabic and Persian. Afterwards, he travelled to Egypt in 1931 to continue studying the Arabic language and then worked in the Faculty of Arts as Head of the Department of Ancient Studies (Greek and Latin). In the early fifties, he issued his first book, called “The Holy Koran”, and in 1955 he published his interpreted translation of the Qur’an titled “The Koran Interpreted”. Western academics consider Arberry’s translation a source of reference on Islam, and it is one of the most famous interpretations in English-speaking countries.

3.9 The selected Suras

Surat Al-Mujadilah is a Medinan surat; it is the 58th surat (chapter) of the Noble Qur’an and consists of 22 ayat (verses). The name of the surat is attributed to the woman Khawla bent Tha’laba, who complained to Prophet Muhammad (PBUH) about “zihar” (a method of divorce in the pre-Islamic era). This surat carries a great message to humans to be wise in choosing those to whom they express their worries and sorrows. It also shows that the best course for humans is to keep their complaints between themselves and the Almighty Allah, since He the Almighty is the only one who listens to them carefully without even being asked, as He listened to Khawla bent Tha’laba.

Surat Al-Hashr is also a Medinan surat; it is the 59th surat of the Qur’an and has 24 verses. The surat is named Al-Hashr because the word Al-Hashr occurs in verse 2, describing the banishment of the Jews of Banu Al-Nadir from their homes after they broke their promise to Prophet Muhammad (PBUH) not to fight him or to join others in fighting him.
The surat highlights the virtue of cooperation and unity by reminding us of the relationship between Al-Muhajirin and Al-Ansar, since the relationship among Muslims should be based on the principles of cooperation, assistance, and solidarity. It also confirms the candour of the Holy Qur’an in dealing with the intentions of the hypocrites and the Jews. Moreover, it emphasizes the demerits of the Jews, such as treachery, betrayal, and cowardice; thus, we should be careful when dealing with them at all times.

3.10 Selection criteria

The aforementioned suras were selected purposively for their sensitive themes and for containing a considerable number of culture-bound items, linked to Islam and Islamic culture, which may not have been understood properly. Thus, semantic ambiguity seems to have occurred at some points.

3.11 Conclusion

In this chapter, the researcher clarified the methodology, research design, and the procedures she followed in detail. She also explained how inter-rater reliability was attained. Finally, she mentioned Ivir’s (1987) strategies that were used for the analysis of the CSIs. The following chapter deals with the discussion and analysis of Ali’s and Arberry’s translations of the 52 CSIs.

Chapter 4 Data analysis

4.1 Introduction

In this chapter, the researcher analyzed the collected data, in order to provide answers to the research questions in the following chapter, by going through several steps. First, the researcher presented the Arabic verse that contains the CSI, followed by its two English translations by Ali and Arberry. Second, she provided the interpretation of the CSIs based on interpretation books such as the Tafsirs of Ibn Khathir (2000) and Al-Tabari (2003), in addition to Mu’jam Lughat al-Fuqaha’ (1985). Furthermore, four English dictionaries (Oxford English Dictionary, 2009; Cambridge Dictionary, 1995; Merriam-Webster, 1828; and English Dictionary, 2012) and the Arabic Almaany dictionary were used. Likewise, an expert at the IUG was consulted to verify the interpretation of the selected data. The following tables present the CSIs within their Arabic verses in Surat Al-Mujadilah and Surat Al-Hashr.

Table (4.1): Cultural-specific terms in Surat Al-Mujadilah
[Arabic text of verses 1, 2, 3, 4, 7, 8, 9, 10, 11, 12, 13, 16, 19, 20, and 22 of Surat Al-Mujadilah, listed by verse number with the cultural-specific terms they contain]

Table (4.2): Cultural-specific terms in Surat Al-Hashr
[Arabic text of verses 2, 3, 5, 6, 7, 8, 9, 10, 13, 16, 17, 18, 20, 21, and 22 of Surat Al-Hashr, listed by verse number with the cultural-specific terms they contain]

Surat Al-Mujadilah

Extract 1:

"قَدْ سَمِعَ اللَّهُ قَوْلَ الَّتِي تُجَادِلُكَ فِي زَوْجِهَا وَتَشْتَكِي إِلَى اللَّهِ وَاللَّهُ يَسْمَعُ تَحَاوُرَكُمَا إِنَّ اللَّهَ سَمِيعٌ بَصِيرٌ"

(Al-Mujadilah: 1)

Ali: “Allah has indeed heard (and accepted) the statement of the woman who pleads with thee concerning her husband and carries her complaint (in prayer) to Allah: and Allah (always) hears the arguments between both sides among you: for Allah hears and sees (all things)”.

Arberry: “God has heard the words of her that disputes with thee concerning her husband, and makes complaint unto God. God hears the two of you conversing together; surely God is All-hearing, All-seeing”.

According to the Oxford English Dictionary (2009), the English word “God” is countable in some religions, has a female form, “Goddess”, and also has other meanings, such as “a person who is loved or admired very much by other people”. However, Dr.
Mohammed Al-Farra said that the Arabic word “الله” is inflected neither for gender nor for number, and so He does not have a wife or a child, as the Christians believe. In addition, “الله” cannot be used to describe anything but the Almighty Allah, unlike the word “God”. Ali was successful when he transliterated the word “الله” as “Allah”. However, a cultural non-equivalence appears in Arberry’s translation due to incorrect substitution, since he used the word “God”.

Extract 2:

"الَّذِينَ يُظَاهِرُونَ مِنكُم مِّن نِّسَائِهِم مَّا هُنَّ أُمَّهَاتِهِمْ إِنْ أُمَّهَاتُهُمْ إِلَّا اللَّائِي وَلَدْنَهُمْ وَإِنَّهُمْ لَيَقُولُونَ مُنكَرًا مِّنَ الْقَوْلِ وَزُورًا وَإِنَّ اللَّهَ لَعَفُوٌّ غَفُورٌ"

(Al-Mujadilah: 2)

Ali: “If any men among you divorce their wives by Zihar (calling them mothers), they cannot be their mothers: None can be their mothers except those who gave them birth. And in fact they use words (both) iniquitous and false: but truly Allah is one that blots out (sins), and forgives (again and again).”

Arberry: “Those of you who say, regarding their wives, 'Be as my mother's back,' they are not truly their mothers; their mothers are only those who gave them birth, and they are surely saying a dishonourable saying, and a falsehood. Yet surely God is All-pardoning, All-forgiving.”

As the Quranic interpretation expert explained, the word “يُظَاهِرُونَ”, which is derived from the Arabic term “ظِهار”, means that a woman becomes forbidden to her husband as his mother is, and he cannot live with her again unless he pays kafarah. Ali defined it using the word “divorce” to help the foreign reader understand that “zihar” was a form of divorce in the pre-Islamic era (Al-Jahiliyah). On the other hand, Arberry translated “يُظَاهِرُونَ” by defining it as “Be as my mother’s back”, which makes no sense and leads to a complete cultural loss. Therefore, Ali succeeded in achieving the cultural equivalence while Arberry did not.
Extract 3:

"وَالَّذِينَ يُظَاهِرُونَ مِن نِّسَائِهِمْ ثُمَّ يَعُودُونَ لِمَا قَالُوا فَتَحْرِيرُ رَقَبَةٍ مِّن قَبْلِ أَن يَتَمَاسَّا ذَٰلِكُمْ تُوعَظُونَ بِهِ وَاللَّهُ بِمَا تَعْمَلُونَ خَبِيرٌ"

(Al-Mujadilah: 3)

Ali: “But those who divorce their wives by Zihar, then wish to go back on the words they uttered,- (It is ordained that such a one) should free a slave before they touch each other: Thus are ye admonished to perform: and Allah is well-acquainted with (all) that ye do.”

Arberry: “And those who say, regarding their wives, 'Be as my mother's back,' and then retract what they have said, they shall set free a slave, before the two of them touch one another. By that you are admonished; and God is aware of the things you do.”

As mentioned in Al-Tabari (2003), the Arabic word “رقبة” means a male or female slave. By using the substitution strategy, Ali and Arberry translated it into “slave”. Therefore, we notice that both of them succeeded in achieving the cultural equivalence.

Extract 4:

"فَمَن لَّمْ يَجِدْ فَصِيَامُ شَهْرَيْنِ مُتَتَابِعَيْنِ مِن قَبْلِ أَن يَتَمَاسَّا فَمَن لَّمْ يَسْتَطِعْ فَإِطْعَامُ سِتِّينَ مِسْكِينًا ذَٰلِكَ لِتُؤْمِنُوا بِاللَّهِ وَرَسُولِهِ وَتِلْكَ حُدُودُ اللَّهِ وَلِلْكَافِرِينَ عَذَابٌ أَلِيمٌ"

(Al-Mujadilah: 4)

Ali: “And if any has not (the wherewithal), he should fast for two months consecutively before they touch each other. But if any is unable to do so, he should feed sixty indigent ones, this, that ye may show your faith in Allah and His Messenger. Those are limits (set by) Allah. For those who reject (Him), there is a grievous Penalty.”

Arberry: “But whosoever finds not the means, then let him fast two successive months, before the two of them touch one another. And if any man is not able to, then let him feed sixty poor persons -- that, that you may believe in God and His Messenger. Those are God's bounds; and for the unbelievers there awaits yet a painful chastisement.”

Ali and Arberry rendered the word “صيام” as “fast”, which has a different cultural meaning from the original word “الصيام”: “fast” means to restrict one’s personal consumption of some food and drinks, whereas the term “الصيام” means to fast from sunrise to sunset and to abstain from all the things that break the fast, such as intercourse between a husband and a wife (Almaany dictionary). This definition was also provided by Dr. Al-Farra. Subsequently, the translators were not able to convey the intended cultural meaning or the essence of the message, and it would be more accurate to borrow the word “صيام” followed by a definition or a footnote clarifying its meaning so that translators do not deviate from the real meaning.

Al Maany Dictionary illustrates that the word “مسكين” refers to a poor person who does not have enough to eat, or a miserable person who has nothing. Ali translated it literally into “indigent”, which means, according to Merriam-Webster (1828), “very poor”, and he also added “ones”. Similarly, Arberry tended to render it literally as “poor”, adding the word “persons”. Both translations are accurate and to the point.

Referring to the Tafsir of Ibn Kathir (2000), it is mentioned that the Arabic word “رسول” refers to our Prophet Muhammad (SAW), and according to Wehr (1979), "Some prophets are categorized as messengers (Arabic: رسل, sing. رسول), those who transmit divine revelation through the intercession of an angel"; here the word angel refers to Gabriel, who revealed the Qur’an to our Prophet Muhammad (SAW).
Hence, Ali and Arberry were successful in conveying the cultural meaning by substituting the word “رسول” with "Messenger”. As found in Almaany dictionary, the phrase “ حدود هللا“means Allah’s orders and prohibitions and his punishment for those who violate them. Moreover, Al-Tabari (2015) interprets the word “حدود” as the limits Allah has put for you that you must not exceed. Ali literally translated the word “حدود” into “limits” and added the verb phrase “set by” and Arberry used literal translation also in rendering the word “حدود” into “bounds”. The word “limits” means “the level of something that is either possible or allowed” and “bounds” refers to the “limits of an activity or behavior”, (Cambridge Dictionary, 1995). So, we conclude that both Ali and Arberry were accurate in choosing the previous translations and hence they succeeded in delivering the exact cultural equivalence. Extract 5: " ْمَا يَكُونُ مِن نَجْوَى ََّ هُو َ ثَة إَِ ثَال ََ مَعَهُمْ أَيْن َّ هُو َ أَكََْرَ إَِ َ َ مِنْ ذَ لِكَ و َ أَدْنَى َ َ َّ هُوَ سَادِسُهُمْ و َ خَمْسَة إَِ رَابِعُهُمْ وَ َ مَا كَانُوا ۖ ثُمَّ يُنَبِّئُهُمْ بِمَا عَمِلُوا ِْ مَ الْقِيَامَة يَو " (Al-Mujadilah: 7) 47 Ali: “There is not a secret consultation between three, but He makes the fourth among them, - Nor between five but He makes the sixth,- nor between fewer nor more, but He is in their midst, wheresoever they be: In the end will He tell them the truth of their conduct, on the Day of Judgment. For Allah has full knowledge of all things”. Arberry: “Three men conspire not secretly together, but He is the fourth of them, neither five men, but He is the sixth of them, neither fewer than that, neither more, but He is with them, wherever they may be; then He shall tell them what they have done, on the Day of Resurrection. Surely God has knowledge of everything”. According to Dr. Mohammed, the religious term “يوم القيامة” refers to the last day on earth when all the creatures will be resurrected from their graves and held accountable for their deeds both the good and bad. Based on that, both translators have achieved the cultural equivalence. Extract 6: ۖ ْ نَهَا يَصْلَو ُجَهَنَّم َّْ ُ بِمَا نَقُو ُ ۚ حَسْبُهُم نَا ّللا َُ يُعَذِّب ْ َ َ يَقُولُونَ فِي أَنْفُسِهِمْ لَو َّ ُ و بِمَا لَمْ يُحَيِّكَ بِهِ ّللا َْ ك حَيَّو َوَ إِذَا جَاءُوك" " ُِ ير َ الْمَص فَبِئْس (Al-Mujadilah: 8) Ali: “And when they come to thee, they salute thee, not as Allah salutes thee, (but in crooked ways): And they say to themselves, "Why does not Allah punish us for our words?" Enough for them is Hell: In it will they burn, and evil is that destination!” Arberry: “Then, when they come to thee, they greet thee with a greeting God never greeted thee withal; and they say within themselves, 'Why does God not chastise us for what we say?' Sufficient for them shall be Gehenna, at which they, shall be roasted -- an evil homecoming!” The Arabic verb “حيَّوك” refers to “a kind or glad reception” or “words or gestures used to greet a person” Al Maany Dictionary. The word “salute” means “to make a formal sign of respect to someone, especially by raising the right hand to the side of the head (especially of people in the armed forces)” (Cambridge 48 Dictionary, 1995). Over and above, the word “salute” is not spoken and it is only a hand gesture. While the word “greet” means “to address with expression of kind wishes upon meeting or arrival” (Merriam-Webster, 1828). It is crystal clear that both translators opted for literal translation. However, Arberry’s choice is more suitable. 
As for Ali, he was not able to provide the precise meaning since “salute” carries a different connotation. As a result, a complete semantic loss occurred in his translation. The term “جهنم” is among the names of “Al-nnar-النار” and it is called “جهنم” due to its very far bottom. Ali substituted it with “Hell” which means “the nether realm of the devil and the demons in which condemned people suffer everlasting punishment” (Merriam-Webster, 1828). In Islam and culture, there is nothing called a nether realm of the devil and also this devil will be punished by Allah on the Day of judgement. To conclude, using “Hell” resulted in a complete cultural loss. Notwithstanding, Arberry borrowed the term “Gehenna”, hence this is the righteous translation. Regarding the word “يصلونها”, Ali seems to have rendered the ST meaning correctly by using the literal meaning “burn” which means “to be hurt, damaged, or destroyed by fire or extreme heat, or to cause this to happen” (Cambridge Dictionary, 1995). On the contrary, Arberry’s literal translation led to a partial loss in meaning as the word “roast” is used mainly with food “to cook food in an oven or over a fire” (Cambridge Dictionary, 1995). In translating the Arabic phrase “فبئس المصير” which refers to the evil final destination or end, Ali translated it literally into “evil is that destination”, so he was able to convey the intended meaning successfully. Nevertheless, Arberry rendered it as “an evil homecoming” using literal translation and lexical creation and here the word “homecoming” refers to “the act of returning to your home or to a place that is like your home” (Merriam-Webster, 1828), thus his translation does not tend to be proper and so it does not fit in the original meaning. This subsequently led to a complete cultural loss. 49 Extract 7: َ اتَّقُوا و ۖ َ ى َ التَّقْو و ِّْ ا بِالْبِر َتَنَاجَو ِ يَتِ الرَّسُو ِ و وَ الْعُدْوَ انِ وَ مَعْص ِِْثْم بِاإل ْ ا َ تَتَنَاجَو يَا أَيُّهَا الَّذِينَ آمَنُوا إِذَا تَنَاجَيْتُمْ فَال" " َتُحْشَرُون َِّ َ الَّذِي إِلَيْه اللّ (Al-Mujadilah: 9) Ali: “O ye who believe! When ye hold secret counsel, do it not for iniquity and hostility, and disobedience to the Prophet; but do it for righteousness and self-restraint; and fear Allah, to Whom ye shall be brought back.” Arberry: “O believers, when you conspire secretly, then conspire not together in sin and enmity and disobedience to the Messenger, but conspire in piety and god-fearing. Fear God, unto whom you shall be mustered.” “اإلثم” refers to the state of being a wrongdoer (Al Maany Dictionary). “Sin” means “an action that is or is felt to be highly reprehensible” and “iniquity” denotes “the quality of being unfair or evil” (Merriam-Webster, 1828). In translating the word “ثم اإل”, Ali selected substitution strategy. However, Arberry used literal translation which roughly achieved the cultural equivalence while Ali did not. The Arabic word “التقوى” is explained as doing what Allah demanded and refraining from what He forbade (Al Maany Dictionary, 2010). Moreover, Al-Tabari (2015) elucidates that “التقوى” is fearing Allah by obeying his commands and avoiding the bad deeds. In the aforementioned translations, Ali literally translated ”التقوى” with “self-restraint” which means, based on the definition of Merriam-Webster (1828), training one’s self to control his/her emotions and desires. However, his choice fails to convey the genuine meaning. 
For Arberry, he substituted “التقوى” with “Godfearing”, which is used to “describe religious people who try to obey the rules of their religion and to live in a way that is considered morally right” (Merriam-Webster, 1828). Thus, it is closer in meaning to the original one, however he failed in using the word “God”. So, a cultural non-equivalence resulted from Ali’s translation while Arberry’s resulted in a partial one. In the researcher’s humble opinion, the word “التقوى” should be transliterated with a footnote containing a detailed explanation of it. 50 Based on the explanation of Al-Tabari (2003), the verb “اتقوا” means fear Allah to whom is your destiny. In the previous translations, both translators used literal rendition producing the verb “fear” which fits exactly with the authentic meaning. Regarding translating the word “تحشرون”, the phrasal verb “bring back” means “to return something to where it came from”, whereas the verb “muster” means “(especially of soldiers) come together, especially in preparation for fighting, or to cause to do this” (Cambridge Dictionary, 1995). Ali was able to transfer the meaning successfully through substitution whilst Arberry mistranslated it through translating it literally into “muster” which has a different connotative meaning and due to that a complete cultural loss occurred. Extract 8: " ََكَّلِ الْمُؤْمِنُون َّ ِ فَلْيَتَو َ عَلَى ّللا َّ ِ ۚ و َّ بِإِذْنِ ّللا شَيْئ ا إَِ ْبِضَارِّ هِم َمِنَ الشَّيْطَانِ لِيَحْزُنَ الَّذِينَ آمَنُوا وَلَيْس النَّجْوَى إِنَّمَا " (Al-Mujadilah: 10) Ali: “Secret counsels are only (inspired) by the Evil One, in order that he may cause grief to the Believers; but he cannot harm them in the least, except as Allah permits; and on Allah let the Believers put their trust.” Arberry: “Conspiring secretly together is of Satan, that the believers may sorrow; but he will not hurt them anything, except by the leave of God. And in God let the believers put all their trust.” The Arabic word “النجوى” refers to secret conversations between people. Allah states that those secret talks are only from the Satan in order to make the believers grieve. (Ibn Khathir, 2000 and Al-Tabari, 2003). The English word “counsels” means “advice given especially as a result of consultation”, yet the word “conspiring” means “ to make an agreement with others especially in secret to do an unlawful act or to happen in a way that produces bad or unpleasant results” (Merriam-Webster, 1828). Thence, Ali’s definition of “النجوى” was absolutely opposite to the intended cultural meaning but Arberry’s was exactly to the point. Al-Tabari (2003), in verse (10), in explaining the meaning of the word “بضارهم” states that the Satan intends to bother the believers and make them grieve, although his attempts will not hurt them, except by Allah’s will. The word “harm” refers to 51 “physical or other injury or damage” while “hurt” means “to cause emotional pain to someone” (Cambridge Dictionary, 1995). The two translations are literal. Nonetheless, Ali’s interpretation is not the intended one since it holds a different connotation as illustrated above, as a result, a partial loss in meaning occurs. On the other hand, the meaning of the ST was completely transferred by Arberry’s translation. 
Extract 9: ِفَانْشُزُوا يَرْفَع انْشُزُوا َقِيل َ إِذَا َّ ُ لَكُمْ ۖ و ِ فَافْسَحُوا يَفْسَحِ ّللا يَا أَيُّهَا الَّذِينَ آمَنُوا إِذَا قِيلَ لَكُمْ تَفَسَّحُوا فِي الْمَجَالِس" " ٌَّ ُ بِمَا تَعْمَلُونَ خَبِير دَرَجَات ۚ وَّللا ََّ ُ الَّذِينَ آمَنُوا مِنْكُمْ وَ الَّذِينَ أُوتُوا الْعِلْم اللّ (Al-Mujadilah: 11) Ali: “O ye who believe! When ye are told to make room in the assemblies, (spread out and) make room: (ample) room will Allah provide for you. And when ye are told to rise up, rise up Allah will rise up, to (suitable) ranks (and degrees), those of you who believe and who have been granted (mystic) Knowledge. And Allah is well-acquainted with all ye do.” Arberry: “O believers, when it is said to you 'Make room in the assemblies', then make room, and God will make room for you; and when it is said, 'Move up', move up, and God will raise up in rank those of you who believe and have been given knowledge. And God is aware of the things you do.” Al-Tabari (2003) illustrated that the word “انشزوا” means when you are called to any type of a good deed then respond. The previous translations are literal and conveyed some shades of the authentic meaning. Paraphrasing would be a good strategy to follow in this case. Extract 10: " َّيَا أَيُّهَا ال ْذِينَ آمَنُوا إِذَا نَاجَيْتُمُ الرَّسُو َ فَقَدِّمُوا بَيْنَ يَدَيْ نَجْوَ اكُمۚ صَدَقَة َِّ دُوا فَإِن ْ لَمْ تَج َ أَطْهَرُ ۚ فَإِن ذَ لِكَ خَيْرٌ لَكُمْ و ٌِ يم َّ َ غَفُورٌ رَح اللّ" (Al-Mujadilah: 12) Ali: “O ye who believe! When ye consult the Messenger in private, spend something in charity before your private consultation. That will be best for you, and most 52 conducive to purity (of conduct). But if ye find not (the wherewithal), Allah is Oft-Forgiving, Most Merciful. “ Arberry: “O believers, when you conspire with the Messenger, before your conspiring advance a freewill offering; that is better for you and purer. Yet if you find not means, God is All-forgiving, All-compassionate.” Almanny dictionary construes the Arabic word “صدقة” as what is given to the poor and needy people from money, food, or clothes for the sake of getting closer to Allah. Cambridge Dictionary (1995) explains that “Charity” refers to the money, food or any other help given to those who are in need for it. It also defines “offering” as something that you give or offer to someone, however it did not specify the category to be given this offering as it is seen in “charity”. Consequently, using substitution in rendering “صدقة” into “charity”, we can find that Ali managed to transfer the cultural meaning. On the contrary, Arberry utilized the definition strategy in translating “صدقة” into “a freewill offering” thus his choice of word did not match the original cultural meaning. Extract 11: ََ آتُوا الزَّكَاة و ََ ة فَأَقِيمُوا الصَّال َّْ ُ عَلَيْكُم تُقَدِّمُوا بَيْنَ يَدَيْ نَجْوَ اكُمْ صَدَقَات ۚ فَإِذْ لَمْ تَفْعَلُوا وَتَا َ ّللا ْأَأَشْفَقْتُمْ أَن" ََّ ُ خَبِيرٌ بِمَا تَعْمَلُون َّ َ وَرَسُولَهُ ۚ وَّللا وَ أَطِيعُوا ّللا" (Al-Mujadilah: 13) Ali: “Is it that ye are afraid of spending sums in charity before your private consultation (with him)? If, then, ye do not so, and Allah forgives you, then (at least) establish regular prayer; practise regular charity; and obey Allah and His Messenger. And Allah is well-acquainted with all that ye do.” Arberry:” Are you afraid, before your conspiring, to advance freewill offerings? If you do not so, and God turns again unto you, then perform the prayer, and pay the alms, and obey God and His Messenger. 
God is aware of the things you do.” Al Maany Dictionary states that the religious term “الصالة” has several meanings, such as: The Du’aa (supplication), seeking mercy and forgiveness, and finally the legitimate “Salah”. Furthermore, Al-Tabari (2003) interprets the phrase “أقيموا ا لصالة “as to perform it with its main pillars: The sujood “adoration” and Ruku 53 “bowing down” and on the exact time. Ali literally rendered “أقيموا” into “establish” which does not capture the precise meaning of the ST verb and he added the attributive adjective “regular”. As for Arberry, he also translated it literally into “perform” which is a better choice in conveying the meaning of the ST verb. Regarding the word “الصالة”, they have substituted it with “prayer” which is not acceptable because when a foreign reader comes across this word then he will understand it differently linking it to the prayer rituals in his own religion whether it is Christianity or another. So, in order to preserve the cultural and religious connotation of the word “الصالة”, it should be transliterated providing either its definition or an explanatory footnote. Following the interpretation of Al Maany Dictionary, the word “الزكاة” refers to an obligatory pillar of Islam that requires spending a known portion of money if it reaches the nisaab. As clarified by Merriam-Webster (1828), “charity” refers to “generosity and helpfulness especially toward the needy or suffering” and the word “alms” means “something (such as money or food) given freely to relieve the poor”. Considering the previous translations, Ali and Arberry literally rendered the verb “آتوا” consecutively into “practice” and “pay”. For the word “الزكاة”, Ali substituted it with “charity” adding the adjective regular while Arberry substituted it with “alms”. Based on the pervious interpretations, one can conclude that both translators were not able to convey the exact cultural meaning since the renditions “charity” and “alms” denote that the money given is according to one’s own desire, however “الزكاة” implies obligation and it is not regular for all people at all times as it requires certain conditions such as owning a specific portion of money for a specific period of time. As a result, this leads to a complete cultural loss. It is better to transliterate the word “الزكاة” as “zakah” and provide a footnote so that its meaning does not change. 54 Extract 12: " ْاتَّخَذُوا أَيْمَانَهُم جُنَّة ٌَّ ِ فَلَهُمْ عَذَا ٌ مُهِين فَصَدُّوا عَنْ سَبِيلِ ّللا" (Al-Mujadilah: 16) Ali: “They have made their oaths a screen (for their misdeeds): thus they obstruct (men) from the Path of Allah: therefore shall they have a humiliating Penalty.” Arberry: “They have taken their oaths as a covering, and barred from God's way; so there awaits them a humbling chastisement.” Ibn Khathir (2003), in a way of interpreting the word “جُنَة”, stated that those hypocrites hid their blasphemy, showed Iman (faith) and resorted to false swearing to prevent themselves from being killed. Ali tended to use literal translation in addition to addition. Therefore, he conveyed some parts of the meaning. However, Arberry selected literal translation as a choice so his translation was not as accurate as required, therefore a huge semantic loss in meaning appears. Such misinterpretation could be solved via paraphrasing. 
Extract 13: " ِْ زْ َ الشَّي َ إِنَّ ح ِ زْ ُ الشَّيْطَانِ ۚ أََ َّ ِ ۚ أُولَ ئِكَ ح اسْتَحْوَذَ عَلَيْهِمُ الشَّيْطَانُ فَأَنْسَاهُمْ ذِكْرَ ّللا ُطَانِ هُم َالْخَاسِرُون" (Al-Mujadilah: 19) Ali: “The Evil One has got the better of them: so he has made them lose the remembrance of Allah. They are the Party of the Evil One. Truly, it is the Party of the Evil One that will perish!” Arberry:” Satan has gained the mastery over them, and caused them to forget God's Remembrance. Those are Satan's party; why, Satan's party, surely, they are the losers!” Al-Tabari (2003) clarified the meaning of “الخاسرون” as “those who are perished and futile“. As can be seen form the above translations, Ali resorted to definition stratregy and Arberry adopted literal translation. However, Ali’s translation was more adequate than Arberry’s which conveyed some parts of the original meaning. 55 Extract 14: " إ َنَّ الَّذِين َيُحَادُّون َْ َذَلِّين َّ َ وَرَسُولَهُ أُولَ ئِكَ فِي اْل اللّ" (Al-Mujadilah: 20) Ali: “Those who resist Allah and His Messenger will be among those most humiliated.” Arberry: “Surely those who oppose God and His Messenger, those are among the most abject.” The verb “يحادون” means to fight or resist. It is readily seen that Ali and Arberry applied literal translation and both did present a good match to the ST word. Extract 15: " َُ رَسُولَه َّ َ و يُوَ ادُّونَ مَنْ حَادَّ ّللا ِِ ر ْ خ ْ مِ اآل الْيَو ََّ ِ و يُؤْمِنُونَ بِاَّلل ْ م ا ِ دُ قَو َ تَج َ" (Al-Mujadilah: 22) Ali: “Thou wilt not find any people who believe in Allah and the Last Day, loving those who resist Allah and His Messenger” Arberry: “Thou shalt not find any people who believe in God and the Last Day who are loving to anyone who opposes. God and His Messenger” After consulting Dr. Al-Farra about the meaning of the Arabic phrase “ اليوم اآلخر”, he said that it has the same meaning of “يوم القيامة” which was interpreted previously as the day when all creatures will be resurrected from their graves to be held accountable for their good and bad deeds. Both translators have literally rendered it into “the Last Day” which is a serious divergence from the real meaning, since Dr. Mohammed also said that there are another two lives after the earthly life which are the life of Al-Barzakh “in the grave” and the life on the day of resurrection or judgement. Subsequently, both of them were not able to convey the cultural meaning. Surat Al-Hashr Extract 16: َْ ََنُّوا أَنَّهُم مَا ََنَنْتُمْ أَنْ يَخْرُجُوا ۖ وۚ ِالْحَشْر ِ َِّ َو لْ ِْ هِم دِيَار ْمِن ِ أَهْلِ الْكِتَا ْمِن الَّذِينَ كَفَرُوا َهُوَ الَّذِي أَخْرَج" ِْ بُونَ بُيُوتَهُمْ بِأَيْدِيهِم َ فِي قُلُوبِهِمُ الرُّعْبَ ۚ يُخْر َ قَذَف َّ ُ مِنْ حَيْثُ لَمْ يَحْتَسِبُوا ۖ و ّللا َُّ ِ فَأَتَاهُم مَانِعَتُهُمْ حُصُونُهُمْ مِنَ ّللا " ِْ َبْصَار أُولِي اْل وَ أَيْدِي الْمُؤْمِنِينَ فَاعْتَبِرُوا يَا 56 (Al-Hashr: 2) Ali: “It is He Who got out the Unbelievers among the People of the Book from their homes at the first gathering (of the forces). Little did ye think that they would get out: And they thought that their fortresses would defend them from Allah! But the (Wrath of) Allah came to them from quarters from which they little expected (it), and cast terror into their hearts, so that they destroyed their dwellings by their own hands and the hands of the Believers, take warning, then, O ye with eyes (to see)!” Arberry: “It is He who expelled from their habitations the unbelievers among the People of the Book at the first mustering. 
You did not think that they would go forth, and they thought that their fortresses would defend them against God; then God came upon them from whence they had not reckoned, and He cast terror into their hearts as they destroyed their houses with their own hands, and the hands of the believers; therefore take heed, you who have eyes!” Al-Tabari (2003) defines “الذين كفروا” as those who denied the prophecy of Muhammad (PBUH) from the Jews of Banu Al-Nadir. Both translators substituted the former relative clause with the noun “unbelievers”, which according to (Merriam-Webster, 1828)” refers to “one that does not believe in a particular religious faith”. Eventually, they were able to convey the primary cultural meaning partially. In order to maintain the original meaning of the ST, it is advisable to translate it as “those who disbelieved” or “disbelievers” which designates refusing or rejecting a belief deliberately. The religious term “أهل الكتاب” refers to the Jews and Christians. As the aforesaid translations show, Ali and Arberry literally translated it into “the People of the Book” which is misleading for a foreign reader who might think that the previous term includes the Muslims in addition to the Jews and Christians, while in fact the term ”أهل الكتاب” is not permissible to be used for Muslims. Accordingly, their translations have failed to meet the authentic meaning leading up to a complete cultural loss. I propose that using the strategy of cultural substitution and addition as in “the people of the scripture (Jews and Christians) would be more lucid and comprehensible. 57 Based on the Tafsir of Ibn Khathir (2003), the Arabic phrase “َلول الحشر” refers to the incident when the Jews of Banu Al-Nadir broke their promise with Prophet Muhammad (PBUH), so Allah expelled them from Al-Madina and gathered them in Al-Sham. In addition, the IUG expert said that whenever the word “first” is mentioned then there must be a second, so the first gathering is the one mentioned above and the second gathering is on the Day of Judgment. Thus, Ali’s rendition is true while Arberry’s is not due to the different meaning of the word “muster” that was mentioned earlier in the interpretation of the word “تحشرون”. Al-Tabari (2003) explains that the term “أولي اَلبصار” indicates those people of understanding. In the translations above, Ali opted for literal translation rendering it as “O ye with eyes” and adding “to see”. Likewise, Arberry translated it literally into “you who have eyes”. Their translations are not proper and deviant from the real intended meaning. Hence, a complete loss at the cultural level appears. Paraphrasing “ أولي اَلبصار” would be a better alternative. Extract 17: " ِعَذَا ُ النَّار ِِ رَة ْ خ اآل َ لَهُمْ فِي َ ءَ لَعَذَّبَهُمْ فِي الدُّنْيَا ۖ و َّ ُ عَلَيْهِمُ الْجَال ّللا َكَتَب َْ أَن ْ َ وَ لَو " (Al-Hashr: 3) Ali: “And had it not been that Allah had decreed banishment for them, He would certainly have punished them in this world: And in the Hereafter they shall (certainly) have the Punishment of the Fire.” Arberry: “Had God not prescribed dispersal for them, He would have chastised them in this world; and there awaits them in the world to come the chastisement of the Fire.” As Dr. Mohammed explained previously, the terms “يوم القيامة, اليوم اآلخر, and اآلخرة” have the same meaning. Ali tended for literal translation producing “Hereafter” which means “an existence beyond earthly life” (Merriam-Webster, 1828). Arberry defined it as “the world to come”. 
Ali’s choice is partially correct while Arberry’s lacks accuracy since it sways away from the real meaning causing a complete cultural loss. 58 Extract 18: " َالْفَاسِقِين َِ ي َّ ِ وَلِيُخْز أُصُولِهَا فَبِإِذْنِ ّللا ْ تَرَكْتُمُوهَا قَائِمَة عَلَى أَو لِينَة ْا قَطَعْتُمْ مِن َم" (Al-Hashr: 5) Ali: “Whether ye cut down (O ye Muslim!) The tender palm-trees, or ye left them standing on their roots, it was by leave of Allah, and in order that He might cover with shame the rebellious transgresses.” Arberry: ”Whatever palm-trees you cut down, or left standing upon their roots, that was by God's leave, and that He might degrade the ungodly.” According to Ibn Khathir (2000) and Al-Tabari (2003), the word “لينة” is a special type of date tree other than Ajwah (ripen dates). Ali mistakenly rendered it literally into “tender” and simultaneously he substituted it with “palm-tress” using the plural form. Substitution was also opted for by Arberry, however both translations conveyed a part of the exact cultural meaning. The researcher believes that such kind of a word should be transferred through transliteration with a definition or a footnote. Depending on the Tafsir of Al-Tabari (2003), the word “الفاسقين” refers to those people who disobey Allah’s orders and commands. Ali utilized two strategies which are the literal translation and addition. He added the adjective “rebellious”, however he was mistaken when he used the verb “transgresses” instead of the noun “transgressors”. Arberry domesticated his translation by substitution through using the word “ungodly” which means “denying or disobeying God” (Merriam-Webster, 1828). Thus, he was able to convey the cultural meaning. Extract 19: ُ ََّّللا مَنْ يَشَاءُ ۚ و َّ َ يُسَلِّطُ رُسُلَهُ عَلَى َلَ كِنَّ ّللا و ِ كَا ر َ َ َو خَيْل ْْ جَفْتُمْ عَلَيْهِ مِن رَسُولِهِ مِنْهُمْ فَمَا أَو َّ ُ عَلَى وَ مَا أَفَاءَ ّللا " ٌ كُلِّ شَيْء قَدِير عَلَى" (Al-Hashr: 6) Ali: “What Allah has bestowed on His Messenger (and taken away) from them - for this ye made no expedition with either cavalry or camelry: but Allah gives power to His messengers over any He pleases: and Allah has power over all things.” 59 Arberry: “And whatever spoils of war God has given unto His Messenger from them, against that you pricked neither horse nor camel; but God gives authority to His Messengers over whomsoever He will. God is powerful over everything.” Ibn Khathir (2000) and Al-Tabari (2003) explained that the phrase “خيل وال ركاب” means what you earned without a fight. Ali substituted it with “cavalry or camelry”. The former refers to “the group of soldiers in an army who fight in tanks, or (especially in the past) on horses” Cambridge Dictionary (1995) while the latter refers to “troops mounted on camels” (Merriam-Webster, 1828). In light of the previous definitions, Ali was able to render the original meaning precisely. However, Arberry’s use of literal translation was far away from the ST meaning which led to a complete loss at the semantic level. 
Extract 20: َ َ ْكَي ِابْنِ السَّبِيل ََ الْمَسَاكِينِ و و َ الْيَتَامَى و لِذِي الْقُرْبَى ََ لِلرَّسُو ِ و َ ِ َّ ِ و لِلَف رَسُولِهِ مِنْ أَهْلِ الْقُرَى َّ ُ عَلَى مَا أَفَاءَ ّللا" ََ مَا آتَاكُمُ الرَّسُو ُ فَخُذُوهُ وَ مَا نَه ْ َغْنِيَاءِ مِنْكُمْ ۚ و يَكُونَ دُولَة بَيْنَ اْل َ ََّّ َ ۖ إِنَّ ّللا َ اتَّقُوا ّللا اكُمْ عَنْهُ فَانْتَهُوا ۚ و ُشَدِيد ِ الْعِقَا" (Al-Hashr: 7) Ali: “What Allah has bestowed on His Messenger (and taken away) from the people of the townships,- belongs to Allah,- to His Messenger and to kindred and orphans, the needy and the wayfarer; In order that it may not (merely) make a circuit between the wealthy among you. So take what the Messenger assigns to you, and deny yourselves that which he withholds from you. And fear Allah; for Allah is strict in Punishment. “ Arberry:” Whatsoever spoils of war God has given to His Messenger from the people of the cities belongs to God, and His Messenger, and the near kinsman, orphans, the needy and the traveller, so that it be not a thing taken in turns among the rich of you. Whatever the Messenger gives you, take; whatever he forbids you, give over. And fear God; surely God is terrible in retribution.” Al-Tabari (2003) asserted that the phrase “ولذي القربى” refers to Prophet Muhammad relatives from Bani Hashim and Bani Al-Motaleb. “Kindred” refers to “distant and close relatives, collectively”, whereas “kinsman” refers to “a male relative such as a sibling or a cousin” English Dictionary (2012). Based on the previous 60 definitions, it can be concluded that Ali’s substitution conveyed a part of the ST meaning. Notwithstanding, Arberry translated it literally into “near” and added “kinsman” which is more specific and close to the interpretation of the former book of Tafsir. The Arabic phrase “ابن السبيل” refers to the traveler who travelled for a very long distance and has no money left to reach his country. (Mu’jam Lughat al-Fuqaha’, 1985). Moreover, it is defined at Al Maany Dictionary as “the traveler who wants to go back to his country but finds no penny to get him there. In accord with Cambridge Dictionary (1995), the word “wayfarer” means someone who travels on foot, however the word “traveler” refers generally to someone who travels. So, based on that, it can be inferred that Ali’s use of literal translation of “ابن السبيل” into “wayfarer” conveyed some parts of the cultural meaning since the “wayfarer” must be needy to consider him “ابن سبيل”. On the other hand, Arberry’s choice in substituting it into “traveler” was not successful as it causes a complete cultural loss. A more favorable translation would be the use of transliteration with a definition of the word “ابن السبيل”. Extract 21: َ ََّيَنْصُرُونَ ّللا ان ا و َِ ضْو َر َّ ِ و مِنَ ّللا ِ هِمْ وَ أَمْوَ الِهِمْ يَبْتَغُونَ فَضْال ِ جُوا مِنْ دِيَار الَّذِينَ أُخْر َِ ين ِ ر الْمُهَاج ِلِلْفُقَرَاء" " َوَرَسُولَهُ ۚ أُولَ ئِكَ هُمُ الصَّادِقُون (Al-Hashr: 8) Ali: “(Some part is due) to the indigent Muhajirs, those who were expelled from their homes and their property, while seeking Grace from Allah and (His) Good Pleasure, and aiding Allah and His Messenger: such are indeed the sincere ones.” Arberry: “It is for the poor emigrants, who were expelled from their habitations and their possessions, seeking bounty from God and good pleasure, and helping God and His Messenger; those -- they are the truthful ones.” Al Maany Dictionary defines the word “الفقراء” which is the plural form of “الفقير” as those people who have nothing but the least food. 
Ali rendered it literally into “the indigent” and Arberry aslo provided its literal translation as “the poor”. Both translations are correct. 61 The Arabic word “المهاجرين” refers to those who migrated with Prophet Muhammad (PBUH) from Mecca to Medina for the sake of Allah. Ali borrowed the term “Muhajirs” which relates to the “fellow emigrants who fled with Muhammad during the Hegira” (Merriam-Webster, 1828). By this, he conveyed the meaning of the ST word. In contrast, Arberry rendered it literally into “emigrants” which denotes “a person who leaves a country permanently to live in another one” Cambridge Dictionary (1995). “Emigrants” is a more general term, therefore his translation did not match the actual meaning resulting in a complete cultural loss. Due to that, using borrowing as Ali did or transliteration with footnotes would be more appropriate. Extract 22: " حَاجَة ِْ هِم صُدُور ِ دُونَ فِي َ يَج َ َ ِ بُّونَ مَنْ هَاجَرَ إِلَيْهِمْ و مِنْ قَبْلِهِمْ يُح َِْيمَان اإل َو َتَبَوَّ ءُوا الدَّار َوَ الَّذِين " (Al-Hashr: 9) Ali: “But those who before them, had homes (in Medina) and had adopted the Faith,- show their affection to such as came to them for refuge, and entertain no desire in their hearts for things given to the (latter),” Arberry: “And those who made their dwelling in the abode, and in belief, before them; love whosoever has emigrated to them, not finding in their breasts any need for what they have been given,” It is meant by the word “الدار”, based on Al-Tabari (2003), “Medina”. The meaning of the ST was rendered clearly by Ali as he adopted literal translation and addition. On the other hand, Arberry opted for literal translation as “the abode” which unfortunately led to a complete loss in the cultural meaning of the ST. The term “اإليمان” refers, according to Tafsir Al-Tabari (2003), to believing in Allah and His messenger. Cambridge Dictionary (1995) defines the word “Faith” as a “strong belief in God or a particular religion” and the word “belief” as " the feeling of being certain that something exists or is true”. As it clearly appears, Ali’s literal 62 translation conveys some parts of the original meaning. Nonetheless, Arberry, using the same strategy, used a more general word that refers to believing in anything in this world and by this a complete cultural loss occurs. The researcher recommends the strategy of transliteration followed by either a full definition or a footnote. Ibn Khathir (2000) interpreted the word “صدورهم” as “hearts” since the heart is the place of good and bad emotions or feelings. For the word “breast”, it refers to “the fore or ventral part of the body between the neck and the abdomen” (Merriam-Webster, 1828). Regarding the prior translations, Ali was able to capture the exact rich meaning of the ST by using substitution. On the contrary, a partial semantic loss occurred in Arberry’s translation due to his use of literal translation. In light of the interpretation of Ibn Khathir (2000) and Al-Tabari (2003), it is said that the Ansar did not have any envy for the Muhajirin because of the better status, rank, or more exalted grade that Allah gave the Muhajirin above them. Consequently, a complete semantic loss appears in both translations of the word “حاجة” because of the use of literal translation through which the ST meaning was not properly transferred in the TT. A better rendition would be through using paraphrasing strategy. 
Extract 23: َا لِلَّذِين َ تَجْعَلْ فِي قُلُوبِنَا غِال َ َ ْ ِيمَانِ و الَّذِينَ سَبَقُونَا بِاإل خْوَ انِنَا ِ ِفِرْ لَنَا وَ إل ْوَ الَّذِينَ جَاءُوا مِنْ بَعْدِهِمْ يَقُولُونَ رَبَّنَا اغ " ٌِ يم ٌ رَح آمَنُوا رَبَّنَا إِنَّكَ رَءُوف" (Al-Hashr: 10) Ali: “And those who came after them say: "Our Lord! Forgive us, and our brethren who came before us into the Faith, and leave not, in our hearts, rancour (or sense of injury) against those who have believed. Our Lord! Thou art indeed Full of Kindness, Most Merciful." Arberry: “And as for those who came after them, they say, 'Our Lord, forgive us and our brothers, who preceded us in belief, and put Thou not into our hearts any rancour towards those who believe. Our Lord, surely Thou art the All-gentle, the All-compassionate.” The Arabic word “إلخواننا” refers to those who followed the Muhajirin and Ansar in their good deeds and beautiful traits (Al-Tabari, 2003). The English word “brethren” is “used as a form of address to members of an organization or religious group” 63 (Cambridge Dictionary, 1995). On the other hand, the word “brother” means “a man or boy with the same parents as another person” (Cambridge Dictionary, 1995). Ali’s tendency towards substitution was more precise than Arberry’s use of literal translation that resulted in a complete loss at the sematic level. Extract 24: " ََ يَفْقَهُون َ ٌْ م قَو َّْ ِ ۚ ذَ لِكَ بِأَنَّهُم ِ هِمْ مِنَ ّللا َ َنْتُمْ أَشَدُّ رَهْبَة فِي صُدُور لْ " (Al-Hashr: 13) Ali: “Of a truth ye are stronger (than they) because of the terror in their hearts, (sent) by Allah. This is because they are men devoid of understanding.” Arberry: “Why, you arouse greater fear in their hearts than God; that is because they are a people who understand not.” The English word “men” is the plural form of “man” but “people” refers to “a body of persons that are united by a common culture, tradition, or sense of kinship, that typically have common language, institutions, and beliefs”. Ali’s literal translation was faulty because the Arabic word “قوم” is not restricted only to males, as a result a complete semantic loss occurs. On the contrary, Arberry transferred the meaning of the ST in the TT perfectly using literal translation plus adding the article “a” to show that Allah grouped those people in specific by a common trait which is not understanding. Extract 25: " ََّ َ رَ َّ الْعَالَمِين ُ ّللا ِ يءٌ مِنْكَ إِنِّي أَخَاف ْ ِنْسَانِ اكْفُرْ فَلَمَّا كَفَرَ قَا َ إِنِّي بَر إِذْ قَا َ لِْل ِالشَّيْطَان ِكَمَََل " (Al-Hashr: 16) Ali: “(Their allies deceived them), like the Evil One, when he says to man, "Deny Allah": but when (man) denies Allah, (the Evil One) says, "I am free of thee: I do fear Allah, the Lord of the Worlds!” Arberry: “Like Satan, when he said to man, 'Disbelieve'; then, when he disbelieved, he said, 'Surely I am quit of you. Surely I fear God, the Lord of all Being.” In accord with Al Maany Dictionary, the word “الشيطان” refers to “Eblees”, the evil spirit that seduces people to commit sins and spread corruption. Moreover, Dr. 64 Alfarra asserts that “الشيطان” is the one who was banished from “الجنة” or paradise because he refused to obey Allah and bow down for Adam, and because of being expelled, he swore to tempt people to commit sins”. Ali defined it as “the evil one” which means “morally bad, cruel, or very unpleasant” (Cambridge Dictionary, 1995). 
Arberry used cultural substitution in rendering it into “satan” which refers to “the angel who in Jewish belief is commanded by God to tempt humans to sin, to accuse the sinners, and to carry out God's punishment” (Merriam-Webster, 1828). So, one can say that Ali conveyed some parts of the original cultural meaning while Arberry utterly failed in achieving the cultural equivalence since “الشيطان” is not an angel as Jews and Christians believe, besides he disbelieved in Allah and insisted to seduce those who worship Him. Therefore, Arberry’s translation led to a complete cultural loss. It is preferable to transliterate it into “shaytan” with an explanatory footnote. Extract 26: " َالظَّالِمِين ُجَزَاء َِ خَالِدَيْنِ فِيهَا ۚ وَذَ لِك كَانَ عَاقِبَتَهُمَا أَنَّهُمَا فِي النَّارَ" (Al-Hashr: 17) Ali: “The end of both will be that they will go into the Fire, dwelling therein forever. Such is the reward of the wrong-doers.” Arberry: “Their end is, both are in the Fire, there dwelling forever; that is the recompense of the evildoers.” The word “جزاء” could bear two meanings based on the context itself. So, it means either “reward” or “punishment”. Ali and Arberry literally translated it successively into “reward” and “recompense”. Both words hold positive connotation which verily contradicts with the meaning of the verse since it talks about the wrong-doers. Subsequently, their choice of lexemes is not accurate and it produces a complete semantic loss. It is suggested to use the word “punishment” as long as it goes with the meaning intended by the verse. Extract 27: " ََّ َ خَبِيرٌ بِمَا تَعْمَلُون َّ َ ۚ إِنَّ ّللا وَ اتَّقُوا ّللاۖ لِغَد ٌْ مَا قَدَّمَت َّ َ وَ لْتَنْظُرْ نَفْس يَا أَيُّهَا الَّذِينَ آمَنُوا اتَّقُوا ّللا " (Al-Hashr: 18) 65 Ali: “O ye who believe! Fear Allah, and let every soul look to what (provision) He has sent forth for the morrow. Yea, fear Allah: for Allah is well-acquainted with (all) that ye do.” Arberry: “O believers, fear God. Let every soul consider what it has forwarded for the morrow. And fear God; God is aware of the things you do.” Al-Tabari (2003) interprets the word “ ٍلغد” as the Day of Judgment”. It is obvious that both translators resorted to the literal translation strategy rendering “ ِلغد” for “the morrow” which is an archaic form of the word “tomorrow” that means the next day. As a result, they failed to capture the authentic meaning the thing that led to a complete loss at the semantic level. Extract 28: " َ أَصْحَا ُ الْجَنَّةِ هُمُ الْفَائِزُونۚ ِالْجَنَّة ُ وَ أَصْحَا ِالنَّار ُ يَسْتَوِي أَصْحَا ََ" (Al-Hashr: 20) Ali: “Not equal are the Companions of the Fire and the Companions of the Garden: it is the Companions of the Garden that will achieve Felicity.” Arberry: “Not equal are the inhabitants of the Fire and the inhabitants of Paradise. The inhabitants of Paradise -- they are the triumphant.” The Arabic term “النار” refers to the final abode of torture and insult which Allah the Almighty has prepared for the disbelievers who disobeyed Him and His messengers. “Fire” is a literal translation adopted by Ali and Arberry for the word “النار”. The English word “fire” means “the state of burning that produces flames that send out heat and light, and might produce smoke” (Cambridge Dictionary, 1995). Based on that, both translations are not accurate and lead to a complete loss at the cultural level. The researcher suggests to transliterate it plus adding its definition or provide a footnote. 
In light of the interpretation of Mu’jam Lughat al-Fuqaha, (1988), the term “الجنة” refers to the place where pious people are believed to go after they pass away. Here, we notice that Ali literally translated “الجنة” into “Garden”, which according to 66 Cambridge Dictionary (1995), means “a piece of land next to and belonging to a house, where flowers and other plants are grown, and often containing an area of grass”. Hence, Ali’s choice was not adequate as it led to a complete loss at the cultural level. On the other hand, Arberry opted for adaptation and used “paradise” which refers to “the garden of Eden, the place where Adam and Eve lived in the Bible story” Cambridge Dictionary (1995). Pursuant thereto, Arberry’s is more precise and acceptable. It is suggested that transliteration with a footnote would also be a good solution. Extract 29: " ِلِلنَّاس ِ بُهَا نَضْر ُ ْ َمََْا َتِلْكَ اْل َّ ِ ۚ و خَشْيَةِ ّللا ْمِن مُتَصَدِّع ا جَبَل لَرَأَيْتَهُ خَاشِع ا ْ أَنْزَلْنَا هَ ذَا الْقُرْآنَ عَلَى لَو " (Al-Hashr: 21) Ali: “Had We sent down this Quran on a mountain, verily, thou wouldst have seen it humble itself and cleave asunder for fear of Allah. Such are the similitudes which We propound to men, that they may reflect.” Arberry: “If We had sent down this Koran upon a mountain, thou wouldst have seen it humbled, split asunder out of the fear of God. And those similitudes -- We strike them for men; haply they will reflect.” The Arabic word “متصدع” refers to a crack in something. Both translators have succeeded in transferring the real meaning through translating the word “ ًمتصدعا” literally into “asunder” plus adding the verbs “cleave” and “split” successively. Ali seems to have substituted the verb “نضربها” with “propound” which refers to “offering for discussion or consideration” (Merriam-Webster, 1828). Albeit, Arberry adopted the literal translation, so he rendered it as “strike” which means “to aim and usually deliver a blow, stroke, or thrust (as with the hand, a weapon, or a tool)” (Merriam-Webster, 1828). As a result, he failed in delivering the required meaning. Therefore, this creates a complete semantic loss. Although Ali conveyed shades of the meaning, there are also other simple alternatives such as “set”, “cite”, and “give”. 67 Extract 30: " ُِ يم وَ الشَّهَادَةِ ۖ هُوَ الرَّحْمَ نُ الرَّح ِالْغَيْب َُّ هُوَ ۖ عَالِم َ إِلَ هَ إَِ َّ ُ الَّذِي َ هُوَ ّللا " (Al-Hashr: 22) Ali: “Allah is He, than Whom there is no other god;- Who knows (all things) both secret and open; He, Most Gracious, Most Merciful.” Arberry: “He is God; there is no god but He. He is the knower of the Unseen and the Visible; He is the All-merciful, the All-compassionate.” The term “الغيب” means that Allah is knowledgeable of the secret and hidden things. Ali substituted the word “الغيب” with “secret” which is considered compatible to the original meaning. In contrast, Arberry adopted the literal translation in rendering it as “the Unseen” which is restricted only to the things that cannot be seen. Hence, employing “the Unseen” in the translation leads to failing in achieving the cultural equivalence. 4.2 Conclusion This chapter includes 30 extracts (Arabic verses), taken from Surat Al-Mujadilah and Surat Al-Hashr, containing the 52 CSIs. Each of these extracts is followed by two English translations attempted by Abdullah Yusuf Ali and Arthur John Arberry. The 52 terms from both suras were classified thematically. 
In order to analyze the data, the researcher resorted to reference books such as Ibn Khathir (2000) and Al-Tabari (2003) along with the Arabic dictionary (Almaany dictionary) and Mu’jam Lughat al-Fuqaha’(1985) in addition to four English dictionaries: Oxford English dictionary (2009), Cambridge Dictionary (1995), Merriam-Webster (1828), and English Dictionary (2012). Moreover, a specialist of Quran interpretation at the Islamic University participated in clarifying the meanings of some critical items. It is worth mentioning that the researcher depended on Ivir’s strategies in her analysis. The discussion aimed at investigating the types of semantic loss, the types of non- equivalence the translations of Ali and Arberry reflected for the previous suras and the translation strategies they have opted for in rendering the two suras. 68 Chapter 5 Results, Conclusion, and Recommendations 69 Chapter 5 Results, Conclusion, and Recommendations 5.1 Introduction Chapter 5 tackles the main findings of the study, conclusion, and recommendations. This is done through answering the questions of the study introduced in chapter one. 5.2 Answers of the research questions: 5.2.1 Answer of the main question: What types of semantic loss are found in translating the two suras of the Holy Quran: Al-Mujadilah and Al-Hashr? A loss is either complete or partial. The former occurs when the translator finds no equivalent to the ST, so using any other alternative leads to a change in meaning. Whereas, the latter appears when the meaning of the ST is transferred partially in the TT. It is normal to find some of these losses in the translation of the Holy Qur'an which is characterized with complex structure in addition to its particular characteristics which are Qur'an-bound and semantically oriented. As-Safi (2011) stated that losses in translation are divided into two types: Avertable (preventable) loss and inevitable loss. The Avertable loss depends on the translator's abilities, competencies, and skills whether he/she is able to produce an adequate and appropriate translation or not. On the other hand, the inevitable loss occurs due to the huge differences in the system and culture of the source and target languages, and it has nothing to do with the translator's own abilities. Therefore, semantic losses are inevitable while translating from a SL to a TL due to the lack of equivalence of some cultural words in the target language. The following table shows the complete and partial semantic losses in both Ali and Arberry’s translations. 70 Table (5.1): Types of semantic loss in Ali and Arberry’s translations Surah Al-Mujadilah Sample Verse no. 
Cultural-specific item Ali’s translation Type of loss Arberry’s translation Type of loss 1 1 ُ َّاللّ Allah _ God Complete cultural loss 2 2 َيُظَاهِرُون divorce their wives by Zihar (calling them mothers) 'Be as my mother's back,' Complete cultural loss 3 4 ُِ يَام ص Fast Complete cultural loss fast Complete cultural loss 4 8 َْ ك حَيَّو Salute Complete semantic loss greet _ 5 8 جهنم Hell Complete cultural loss Gehenna 6 8 ْ نَهَا يَصْلَو Burn _ roasted Partial semantic loss 7 8 ُِ ير َ الْمَص فَبِئْس evil is that destination an evil homecoming Complete cultural loss 8 9 ْ ِثْم اإل Iniquity Complete cultural loss sin _ 9 9 َ ى التَّقْو self-restraint Complete cultural loss God-fearing Partial cultural loss 71 10 9 َتُحْشَرُون brought back mustered Complete semantic loss 11 10 َ ى النَّجْو Secret counsels Complete cultural loss Conspiring secretly _ 12 10 ب ْ ِ هِم ضَار Harm Partial semantic loss hurt 13 11 انْشُزُوا rise up Complete semantic loss Move up Partial semantic loss 14 12 ًصَدَقَة Charity _ freewill offering Complete cultural loss 15 13 َّفَأَقِيمُوا الص ََ ة لا establish regular prayer Complete cultural loss perform the prayer Complete cultural loss 16 13 ََّكَاة آتُوا الز practise regular charity Complete cultural loss pay the alms Complete cultural loss 17 16 ًجُنَّة a screen (for their misdeeds) Partial semantic loss a covering Complete semantic loss 18 19 َالْخَاسِرُون that will perish The losers Partial semantic loss 19 22 ِِ ر ْ خ ْ مِ اآل الْيَو the Last Day Complete cultural loss the Last Day Complete cultural loss Surat Al-Hashr 20 2 الَّذِينَ كَفَرُوا Unbelievers Partial cultural loss unbelievers Partial cultural loss 72 21 2 ِأَهْلِ الْكِتَاب the People of the Book Complete cultural loss the People of the Book Complete cultural loss 22 2 ِالْحَشْر the first gathering Partial semantic loss The first mustering Complete semantic loss 23 2 ِْ َبْصَار أُولِي اَل O ye with eyes (to see) Complete cultural loss you who have eyes Complete cultural loss 24 3 َِة ِ ر ْ خ اآل the Hereafter Partial cultural loss the world to come Complete cultural loss 25 5 ٍلِينَة The tender palm-trees Partial cultural loss Palm-trees Partial cultural loss 26 5 َالْفَاسِقِين rebellious transgresses Partial cultural loss ungodly _ 27 6 ٍِ كَاب َ ر َ ال خَيْلٍ و cavalry or camelry horse nor camel Complete semantic loss 28 7 ْ بَى َ لِذِي الْقُر و Kindred Partial cultural loss near kinsman _ 29 7 َِ ابْنِ السَّبِيل و the wayfarer Partial cultural loss the traveller Complete cultural loss 30 8 َالْمُه َِ ين ِ ر اج Muhajirs emigrants Complete cultural loss 31 9 َ َالدَّار homes (in Medina) _ the abode Complete cultural loss 73 32 9 ْ ِيمَان اإل the Faith Partial cultural loss belief Complete cultural loss 33 9 ً ِْ هِم صُدُور their hearts their breasts Partial semantic loss 34 9 حَاجَة entertain no desire Complete semantic loss any need Complete semantic loss 35 10 ِلإ َ انِنَا خْو our brethren _ our brothers Complete semantic loss 36 13 ْ م قَو Men Complete semantic loss a people 37 16 ِالشَّيْطَان the Evil One Partial cultural loss Satan Complete cultural loss 38 17 َُ اء جَز the reward Complete semantic loss the recompense Complete semantic loss 39 18 ٍلِغَد the morrow Complete semantic loss the morrow Complete semantic loss 40 20 ِالنَّار Fire Complete cultural loss Fire Complete cultural loss 41 20 ِالْجَنَّة Garden Complete cultural loss Paradise _ 42 21 ِ بُهَا نَضْر Propound Partial semantic loss strike Complete semantic loss 74 43 22 ِالْغَيْب Secret _ Unseen Complete cultural loss As the 
table illustrates, the translations of Ali and Arberry show frequent complete and partial losses. However, Arberry’s complete losses are the most prevailing. For instance, in the first verse, Arberry substituted the word “الله” with “God” while Ali transliterated it, so the latter was able to achieve the cultural equivalence whereas the former did not. Dr. Mohammed, the IUG expert, explained that the word “ َيُظَاهِرُون” means that a woman is forbidden to her husband as his mother is and he cannot live with her again unless he pays kafarah. Ali used definition using the word “divorce” to approximate the picture of “ظهار” to the foreign reader. However, Arberry defined it as “Be as my mother’s back”, which does not make the concept of “zihar” clear for the target readers and causes a semantic ambiguity. Regarding the word “صيام”, both translators opted for literal translation using the word “fast” which has a different cultural meaning from the one intended by the Islamic religion. Hence, Ali and Arberry were not able to convey the exact cultural meaning. In translating the word “حي وك”, Ali chose “salute” which holds a different connotation, used specifically in military, that does not suit the intended meaning. Notwithstanding, Arberry used the proper equivalent for “حي وك” which is “greet”. As a result, Ali failed in conveying the original meaning. For the word “جهنم”, Ali and Arberry used two different strategies. The former selected substitution using “Hell” while the latter tried borrowing using “Gehenna”. Based on their definitions in Merriam-Webster dictionary, mentioned earlier in chapter four, Arberry’s borrowing was the most suitable equivalent for “جهنم”, yet Ali was not able to provide the exact cultural equivalence. 75 The Arabic word “يصلونها” was rendered literally as “burn” by Ali and “roast” by Arberry. However, Ali’s translation was closer to the original meaning than Arberry’s. Ali was able to convey the meaning of the phrase “فبئس المصير” through using literal translation as “evil is that destination” since the previous Arabic phrase denotes the bad eventual destination. Nonetheless, Arberry’s selection of the strategies of literal translation and lexical creation in translating it into “an evil homecoming” was not proper and led to a complete cultural loss. In rendering the word “اإلثم”, which refers to the state of being a wrongdoer (Al Maany Dictionary), both Ali and Arberry selected the substitution strategy. However, Arberry achieved the cultural equivalence while Ali did not because “sin” means “an action that is or is felt to be highly reprehensible” and “iniquity” denotes “the quality of being unfair or evil” (Merriam-Webster, 1828). As illustrated in Al Maany Dictionary, the word “التقوى” refers to doing what Allah demanded and refraining from what He forbade. For Ali, he substituted it with “self-restraint” which is far from the religious meaning intended by the word “التقوى”. On the other hand, Arberry substituted it with “God-fearing” which resulted in a partial cultural loss. Ali correctly transferred the meaning of the word “تحشرون” via substitution while Arberry used a literal rendition that has another different connotation, used particularly for soldiers, which is “muster”. Consequently, he was not able to communicate the cultural meaning of the original word. Ali and Arberry defined the word “النجوى” as “secret counsels” and “conspiring secretly” successively. 
Ali’s rendition did not fit into the real meaning of the previous Arabic word since it means “advice” while “النجوى” refers to the “secret conversations”. On the contrary, Arberry’s translation reflected the same cultural meaning. In case of rendering the word “بضارهم”, the translations of Ali and Arberry are literal. Nevertheless, Ali’s “harm” reflects physical injury thus it cannot be considered as an equivalent to the genuine word but Arberry’s “hurt”, that reflects emotional pain, transferred its connotative meaning. 76 In translating the meaning of the word “انشزوا”, both translators tended to use literal translation using “rise up” and “move up” respectively and so their translations conveyed parts of the real meaning. Using substitution in rendering “صدقة” into “charity”, we can find that Ali managed to transfer the cultural meaning. On the contrary, Arberry utilized the definition strategy in translating “صدقة” into “a freewill offering” thus his choice of word did not match the original cultural meaning. The Arabic phrase “أقي موا الصالة” was translated through using literal translation, addition, and substitution by Ali into “establish regular prayer”. On the other hand, Arberry opted for literal translation and substitution producing “perform the prayer”. Regarding the verb “أ قيموا”, Ali’s choice did not constitute the exact match of the original meaning yet Arberry’s “perform” transferred the authentic meaning properly. For the word “الصالة”, they have substituted it with “prayer” which definitely has diverse cultural meaning and different rituals form the ST word. The phrase “آتوا الزكاة” was rendered by Ali as “practice regular charity” using literal translation, addition, and substitution. For Arberry, he translated it using literal translation and substituion into “pay the alms”. Both conveyed the meaning of the verb “آتوا” correctly. However, substituting “الزكاة” with “charity” and “alms” is not accurate as both denote the inner desire to give money or whatever while “الزكاة’ reflects an obligatory sense and it has its own specific conditions to be given after that. Also adding the word “regular” is wrong since the former is not regular for all people at all times. Since Ibn Khathir (2000) interpreted the word “جُنَة” as hiding blasphemy and showing Iman, it can be noticed that Ali’s literal translation and addition transmitted the authentic meaning partially whereas Arberry’s literal translation failed in conveying the real meaning. Based on Al-Tabari’s (2003) interpretation of the word “الخاسرون” as “those who are perished and futile“. It is inferred that the definition strategy used by Ali was successful in reflecting the original meaning. Nevertheless, Arberry’s literal translation transferred shades of the ST word. As Dr. Al-Farra said, the religious term “اليوم اآلخر’ is similar in meaning to the term “يوم القيامة”, so translating it literally into “the Last Day” by both translators did not 77 help in achieving the cultural equivalence since human beings have three lives: the worldly life, life of Al-Barzakh “البرزخ” and the life on the Day of resurrection/judgement. Concerning the relative clause “الذين كفروا”, Ali and Arberry substituted it with “unbelievers” conveying the ST meaning partially since the word “unbelievers” designates those who do not believe generally in any religion while the ST clause refers to those who deliberately refused to believe in Allah and his messenger. 
Literally translating the religious term “أهل الكتاب” into “the People of the Book” by both translators failed in obtaining the cultural equivalence as the word “Book” may include the Muslims also while it is not allowed to use such terminology for Muslims. As a result, using cultural substitution and addition as in “the people of the scripture (Jews and Christians) would sound more accurate. With regard to the translation of the Arabic phrase “َلول الحشر”, both translators resorted to literal translation as “the first gathering” and “the first mustering” respectively. However, Ali’s translation transferred the exact meaning correctly, as the meaning of “َلول الحشر” was explained by Dr. Mohammed in chapter 4, whereas Arberry’s translation was faulty due to using the word “mustering” which is used in specific contexts (army/military). As for the term “أولي اَلبصار”, Ali rendered it literally into “O ye with eyes” adding “to see”. Similarly, Arberry utilized literal translation translating it into “you who have eyes”. Consequently, both were not able to communicate the exact cultural meaning. Ali’s use of “the Hereafter” transmits parts of the original meaning of the religious word “اآلخرة” since the former, according to (Merriam-Webster, 1828), means “an existence beyond earthly life”. For Arberry, he defined it as “the world to come” which is too general and far from the intended cultural meaning. Regarding the translation of the word “لينة”, which refers to a specific kind of date trees, Ali and Arberry substituted it with a more general term that is “palm-trees”. In addition, Ali translated it literally into “tender”. They have both conveyed the cultural meaning partially. 78 The word “الفاسقين” refers to those people who disobey Allah’s orders and commands (Al-Tabari (2003). Ali opted for the literal translation and addition. He added the adjective “rebellious” but he made a mistake when he used the verb “transgresses” instead of the noun “transgressors”. Arberry substituted it with the word “ungodly” which works well in conveying the cultural meaning. With reference to the phrase “خيل وال ركاب” which means, based on the interpretation of Ibn Khathir (2000) and Al-Tabari (2003), what is gained without a fight. Ali’s substitution as “cavalry or camelry” was successful and transferred the authentic meaning whilst Arberry’s literal translation failed to convey the meaning of the ST phrase. In respect to the phrase “ولذي القربى”, Ali tended to use substitution producing the word “kindred”. On the other hand, Arberry selected literal translation using the word “near” and adding the word “kinsman”. Ali’s choice was too general and so it transferred parts of the ST phrase since the latter refers to our prophets close relatives from Bani Hashim and Bani Al-Motaleb. Conversely, Arberry’s translation was closer to the intended meaning. As for the term “ابن السبيل” which refers to the needy one who travels for long distances on foot, Ali rendered it literally into “wayfarer” which transmits the original meaning partially. However, a complete cultural loss appeared in Arberry’s translation due to using a more general word which is “traveler”. Considering the word “المهاجرين”, Ali tended to borrow the word “Muhajirs” while Arberry opted for translating it literally into “Emigrants” which does not convey the cultural meaning properly since it is general while the ST word is restricted only to those who migrated with our prophet to Medina. Thus, Ali’s borrowing was the most precise choice. 
Al-Tabari (2003) clarifies that word “الدار” refers to “Medina”. Literal translation and addition were used by Ali to render the ST word into “homes (in Medina)” which is considered apropos. Perversely, Arberry rendered it literally as “the abode” which does not convey the exact intended meaning. Ali and Arberry literally translated the word “اإليمان” into “the faith” and “belief” respectively. Ali’s choice carries some parts of meaning of the ST term, 79 however Arberry’s is way general as it denotes believing in anything in the world. As a result, he was not able to achieve the cultural equivalence adequately. Rendering the word “صدورهم” by substituting it with “hearts”, Ali was successful in capturing the intended meaning. On the other hand, Arberry’s literal translation into “breasts” conveyed the meaning partially. As demonstrated by the two books of Tafsir, the word “حاجة” designates the feeling of envy. Both translators opted for literal translation, hence they were not able to communicate the ST word meaning appropriately. Substituting the word “إلخواننا” with “brethren” helped Ali transfer its authentic meaning. Whereas, rendering it literally to “brothers” by Arberry made the word lose its original sense. In translating the word “قوم”, Ali resorted to literal translation using the word “men”, so he was not successful since the ST word is more inclusive and not limited solely to males. Nonetheless, Arberry’s use of literal translation using “a people” was more adequate. Defining the word “الشيطان” as “the evil one” by Ali conveyed parts of the cultural meaning. However, substituting it with “Satan” as Arberry did resulted in a complete cultural loss since it carries a meaning that contradicts with our own religion “Islam” and culture. For the word “جزاء”, Ali and Arbery rendered it literally as “reward” and “recompense” respectively. Both TL words carry a positive meaning that is opposite to the ST word which denotes punishment and torture. Thence, their translations were not accurate. Rendering the word “ ٍلغد” literally as “the morrow” by both translators was not suitable since the ST word refers to the “Day of judgement” while “morrow” means the next day which is obviously too general. Therefore, using it led to a semantic loss. Both translators transferred the meaning of the Arabic term “النار” literally through using the English word “Fire” which is wrong since the cultural meaning of “النار” differs considerably from the literal “Fire”. 80 Ali’s literal rendition of the word “الجنة” as “Garden” is absolutely not proper due to the huge differences in meaning. In a deviant manner, Arberry substituted it with the term “paradise” which is considered acceptable since the latter carries a similar cultural meaning to the original ST item. Regarding the translation of “نضربها”, Ali tried to substitute it with “propound” which conveys the meaning of the ST item partially. For Arberry, he mistranslated it literally into “strike” which is verily far from the meaning intended by the authentic word. Eventually, Ali’s substitution of the word “الغيب” with “secret” was appropriate since the former explains that Allah knows everything that is confidential and secret. On the contrary, Arberry’s literal rendition as “the Unseen” was not sufficient and paved the way for a complete cultural loss to occur. 5.2.2 Answer of the first sub-question: What types of non- equivalence the translations of Ali and Arberry reflect for the named two suras? 
Baker’s typology of non-equivalence at the word level was adopted to identify the causes of losses in the two English translations as follows: 1. Culture-specific concepts: For example, “هللا”, “صيام”, “فأقيموا الصالة”, “فآتوا الزكاة”, “صدقة”, “يوم القيامة” and lots of other CSIs in the two suras create difficulties for the translators in the process of finding them suitable equivalents. 2. SL terms are not lexicalized in the TL: The religious terms: “أولي اَلبصار”, “فبئس المصير”, and “ابن السبيل” are not coined in English so translators face difficulties looking for the accurate strategy to be able to translate them such as paraphrasing. 3. SL terms are semantically complex: Some words have a very complex meaning such as: “الغيب”, “ال شيطان”, and “النار” and this kind of complexity appears due to the cultural factor. 4. TL lacks specific terms ( hyponym): English lacks the hyponym of “لينة” but has the general word or superordinate: (Palm-tree). 81 5. SL terms are no longer found or used in the SL: For example, the term “الظهار” existed only in the pre-Islamic era and disappeared by the diffusion of Islam. 6. Mistranslation of SL terms: If the translator does not refer to exegesis books to understand the exact meaning of the words then mistranslations will occur. For instance, the words “جُنة” and “خيل وال ركاب”. 5.2.3 Answer of the second sub-question: What translation strategies did the two translators use in rendering the CSIs in the two suras? To answer this question, the researcher drew a table summarizing all the strategies used by Ali and Arberry. It is apparent from the table below that both translators used plenty of Ivir’s strategies in translating the 52 CSIs in order to bridge the gap between the SL and the TL shedding some lights on their various cultures. Ivir’s strategies could be categorized into two general strategies suggested by Lawrence Venuti which are: foreignization and domestication. The former includes Ivir’s (literal translation and borrowing) while the latter involves (addition, definition, substitution, lexical creation, and deletion). It is worth mentioning that both translators sometimes used more than one strategy in translating some CSIs. For instance, in rendering the meaning of “يظاهرون”, Ali opted for definition and addition strategies. Moreover, in translating “فأقيموا الصالة”, he resorted to three strategies which are: Literal translation, substitution, and addition. Furthermore, Arberry utilized addition and substitution in transferring the meaning of “لذي القربى”. Table (5.2): Strategies used by Ali and Arberry in translating the CSIs in Surat Al-Mujadilah and Al-Hashr. Surat Al-Mujadilah Sample Cultural-specific item Ali’s translation Strategy Arberry’s translation Strategy 82 1 ُ َّاللّ Allah Borrowing God Substitution 2 َيُظَاهِرُون divorce their wives by Zihar (calling them mothers) Definition + addition 'Be as my mother's back,' Definition 3 رقبة slave substitution slave substitution 4 ُِ يَام ص fast Literal translation fast Literal translation 5 مسكين Indigent ones Literal translation + addition Poor persons Literal translation + addition 6 رسوله Messenger Substitution Messenger Substitution 7 حدود Limits (set by) Literal translation + addition bounds Literal translation 8 ِْ مَ الْقِيَامَة يَو the Day of Judgment. 
Substitution the Day of Resurrection Literal translation 9 َْ ك حَيَّو salute Literal translation greet Literal translation 10 جهنم Hell Substitution Gehenna Borrowing 11 ْ نَهَا يَصْلَو burn Literal translation roasted Literal translation 12 ُِ ير َ الْمَص فَبِئْس evil is that destination Literal translation an evil homecoming Literal translation + lexical creation 13 ْ ِثْم اإل iniquity substitution sin Literal translation 14 ْالتَّق َ ى و restraint -self Literal translation god-fearing Substitution 83 15 اتقوا Fear Literal translation Fear Literal translation 16 تُحْشَرُون brought back Substitution mustered Literal translation 17 َ ى النَّجْو Secret counsels Definition Conspiring secretly Definition 18 ب ْ ِ هِم ضَار harm Literal translation hurt Literal translation 19 انْشُزُوا rise up Literal translation Move up Literal translation 20 ًصَدَقَة charity Substitution freewill offering Definition 21 َف ََ ة أَقِيمُوا الصَّال establish regular prayer Literal translation + addition + substitution perform the prayer Literal translation + substitution 22 ََّكَاة آتُوا الز practice regular charity Literal translation + addition substitution pay the alms Literal translation + substitution 23 ًجُنَّة a screen (for their misdeeds) Literal translation + addition a covering Literal translation 24 َالْخَاسِرُون that will perish Definition The losers Literal translation 25 يحادون resist Literal translation oppose Literal translation 26 ِِ ر ْ خ ْ مِ اآل الْيَو the Last Day Literal translation the Last Day Literal translation Surat Al-Hashr 84 27 الَّذِينَ كَفَرُوا Unbelievers Substitution unbelievers Substitution 28 ِأَهْلِ الْكِتَاب the People of the Book Literal translation the People of the Book Literal translation 29 ِالْحَشْر the first gathering Literal translation The first mustering Literal translation 30 ِْ َبْصَار أُولِي اَل O ye with eyes (to see) Literal translation + addition you who have eyes Literal translation 31 َِة ِ ر ْ خ اآل the Hereafter Substitution the world to come Definition 32 ٍلِينَة The tender palm-trees Literal translation + substitution Palm-trees Substitution 33 َالْفَاسِقِين rebellious transgresses Literal translation + addition ungodly Substitution 34 ٍِ كَاب َ ر َ ال خَيْلٍ و cavalry or camelry Substitution horse nor camel Literal translation 35 ْ بَى َ لِذِي الْقُر و kindred Substitution near kinsman Literal translation + addition 36 َِ ابْنِ السَّبِيل و the wayfarer Literal translation the traveller Substitution 37 للفقراء the indigent Literal translation the poor Literal translation 38 َِ ين ِ ر الْمُهَاج Muhajirs Borrowing emigrants Literal translation 85 39 َ َالدَّار homes (in Medina) Literal translation + addition the abode Literal translation 40 ْ ِيمَان اإل the Faith Literal translation belief Literal translation 41 ً ِْ هِم صُدُور their hearts Substitution their breasts Literal translation 42 حَاجَة entertain no desire Literal translation any need Literal translation 43 ِلإ َ انِنَا خْو our brethren Substitution our brothers Literal translation 44 ْ م قَو men Literal translation a people Literal translation + addition 45 ِالشَّيْطَان the Evil One Definition Satan Substitution 46 َُ اء جَز the reward Literal translation the recompense Literal translation 47 ٍلِغَد the morrow Literal translation the morrow Literal translation 48 ِالنَّار Fire Literal translation Fire Literal translation 49 ِالْجَنَّة Garden Literal translation Paradise Substitution 50 ًمتصدعا cleave asunder Literal translation + addition Split asunder Literal translation + addition 51 ِ بُهَا 
نَضْر propound Substitution strike Literal translation 86 52 ِالْغَيْب secret Substitution Unseen Literal translation 5.2.4 Answer of the third sub-question: To what extent have Ali and Arberry’s translations were successful in achieving the cultural equivalence of the specific items? To answer this question, the researcher drew a table demonstrating Ali and Arberry’s achievement and non-achievement of cultural equivalence in the translation of the specific items. Table (5.3): Achievement and non-achievement of cultural equivalence in Ali and Arberry’s translation . No of CSI CSI Ali’s translation Achievement of cultural equivalence Arberry’s translation Achievement of cultural equivalence Mujadilah -Al Surat 1 ُ َّاللّ Allah Yes God No 2 َيُظَاهِرُون divorce their wives by Zihar (calling them mothers) Yes 'Be as my mother's back,' No 3 رقبة slave Yes slave Yes 4 ُِ يَام ص fast No fast No 5 مسكين Indigent ones Yes Poor persons Yes 6 رسوله Messenger Yes Messenger Yes 87 7 حدود Limits (set by) yes bounds yes 8 ِْ مَ الْقِيَامَة يَو the Day of Judgment. Yes the Day of Resurrection Yes 9 َْ ك حَيَّو salute No greet Yes 10 جهنم Hell No Gehenna Yes 11 ْ نَهَا يَصْلَو burn Yes roasted No 12 ُِ ير َ الْمَص فَبِئْس evil is that destination Yes an evil homecoming No 13 ْ ِثْم اإل iniquity No sin Yes 14 َ ى التَّقْو restraint -self No god-fearing No 15 اتقوا Fear Yes Fear Yes 16 تُحْشَرُون brought back Yes mustered No 17 َ ى النَّجْو Secret counsels no Conspiring secretly es Y 18 ب ْ ِ هِم ضَار harm No hurt Yes 19 انْشُزُوا rise up No Move up No 20 ًصَدَقَة charity Yes freewill offering No 88 21 ََ ة فَأَقِيمُوا الصَّال establish regular prayer No perform the prayer No 22 ََّكَاة آتُوا الز practise regular charity No pay the alms No 23 ًجُنَّة a screen (for their misdeeds) No a covering No 24 َالْخَاسِرُون that will perish Yes The losers No 25 يحادون resist Yes oppose Yes 26 ِِ ر ْ خ ْ مِ اآل الْيَو the Last Day No the Last Day No Surat Al-Hashr 27 الَّذِينَ كَفَرُوا Unbelievers No unbelievers No 28 ِأَهْلِ الْكِتَاب the People of the Book No the People of the Book No 29 ِالْحَشْر the first gathering No The first mustering No 30 ِْ َبْصَار أُولِي اَل O ye with eyes (to see) No you who have eyes No 31 َِة ِ ر ْ خ اآل the Hereafter No the world to come No 89 32 ٍلِينَة The tender palm-trees No Palm-trees No 33 َالْفَاسِقِين rebellious transgresses No ungodly Yes 34 ٍِ كَاب َ ر َ ال خَيْلٍ و cavalry or camelry Yes horse nor camel No 35 ْ بَى َ لِذِي الْقُر و kindred No near kinsman Yes 36 َِ ابْنِ السَّبِيل و the wayfarer No the traveller No 37 للفقراء the indigent Yes the poor Yes 38 َِ ين ِ ر الْمُهَاج Muhajirs Yes emigrants No 39 َ َالدَّار mes (in ho Medina) Yes the abode No 40 ْ ِيمَان اإل the Faith No belief No 41 ً ِْ هِم صُدُور their hearts Yes their breasts No 42 حَاجَة entertain no desire No any need No 43 ِلإ َ انِنَا خْو our brethren Yes our brothers No 44 ْ م قَو men No a people Yes 45 ِالشَّيْطَان he Evil One t No Satan No 90 46 َُ اء جَز reward No recompense No 47 ٍلِغَد the morrow No the morrow No 48 ِالنَّار Fire No Fire No 49 ِالْجَنَّة Garden No Paradise Yes 50 ًمتصدعا cleave asunder Yes Split asunder Yes 51 ِ بُهَا نَضْر propound No strike No 52 ِالْغَيْب secret Yes Unseen No Total of cultural equivalence achievement 22 18 Percentage 42.3% 34.6% 91 Figure (5.1): Achievement of cultural equivalence in the translations of Ali and Arberry As the table and the pie chart demonstrate, Ali was able to achieve the cultural equivalence in the translation of 22 CSIs which equals 42.3% whereas 
Arberry succeeded in achieving the cultural equivalence in 18 CSIs, which equals 34.6%. Accordingly, Ali's translation of the CSIs was more precise and accurate compared with Arberry's.

5.3 Conclusion

The Holy Qur'an is distinguished for its inimitable nature and unique discourse. It is also noted for its eloquent and figurative language. Therefore, translating the language of the Holy Qur'an constitutes a difficulty for translators because it contains Qur'an-bound terms that cannot be matched simply by using any word. Hence, losses in translation are inevitable, hindering translators from achieving exact equivalence. After comparing and analyzing the two translations, the researcher has found that the translations of both Ali and Arberry contain semantic losses, whether complete or partial. The former seems to be the most prevalent in Arberry's translation, as Hana and Ilhem (2016) and Islam (2018) found. In light of Baker's typology of non-equivalence at the word level, the semantic losses occurred mainly due to the abundance of culture-related terms and semantically complex words, as found by Abdelaal and Rashid (2015). Furthermore, the lack of lexicalization and of hyponyms in the TL played a significant role in causing semantic ambiguity. Ali and Arberry's excessive use of foreignization strategies, literal translation in particular, also resulted in a shift in meaning, since the language of the Holy Qur'an cannot be translated literally. A final cause is the translators' lack of knowledge of the religious sciences. In addition, the findings revealed that Ali's achievement of cultural equivalence accounted for 42.3% while Arberry's accounted for 34.6%.

5.4 Recommendations

Based on the findings of this study, the researcher recommends that translators consult scholars or religious institutions about commonly used Islamic Shari'a terms in order to be provided with accurate translation choices. They must also rely on exegesis books, which will facilitate the process of understanding the meanings of the verses and hence attaining the precise equivalent. Moreover, they should use Arabic and English dictionaries that cover Islamic terms. Furthermore, translators must take the connotative meaning into consideration and not focus mainly on the denotative (dictionary) meaning. Paying close attention to the strategies of translating the Holy Qur'an would be very beneficial in reducing losses in meaning. Finally, the researcher suggests that more research should be done on complete chapters of the Holy Qur'an in order to eliminate the losses and present the most exquisite version to foreign readers around the world.

References

Abdelaal, N. M. (2017). Grammatical and Semantic Losses in Abdel Haleem's English Translation of the Holy Quran. International Journal of Education and Literacy Studies, 5(3). Abdelaal, N. (2018). Translating Connotative Meaning in the Translation of the Holy Quran: Problems and Solutions. Arab World English Journal for Translation and Literary Studies, 2(1). Abdelaal, N. M. (2019). Faithfulness in the Translation of the Holy Quran: Revisiting the Skopos Theory. SAGE Open, 9(3). Abdelaal, N. M., & Rashid, S. M. (2015).
Semantic Loss in the Holy Qur’an Translation with Special Reference to Surah Al-WaqiAAa (Chapter of The Event Inevitable). SAGE Open, 5(4). Abdelaal, N., & Rashid, S. (2016). Grammar-Related Semantic Losses in the Translation of the Holy Quran, with Special Reference to Surah Al A’araf (The Heights). SAGE Open Journal, 6(3), 1-11. Abdul-Raof, H. (2005). Pragmalinguistic forms in cross-cultural communication: Contributions from Qur’an translation. Intercultural Commuincation Studies, 4, 115-130. Abdul-Raof, H. (2010). Qur’an Translation, Discourse, Texture and Exegesis. London and New York: Routledge. Akbari, M. (2013). The Role of Culture in Translation. Journal of Academic and Applied Studies, 3(8), 13-21. 95 Al-Azzam, B. & Al-Ahaydib, M. & Al-Huqail E (2015) .Cultural Problems in the Translation of the Qur’an. International Journal of Applied Linguistics and Translation, 1, 28-34. Doi: 10.11648/j.ijalt.20150102.12 Ali, A. (Trans.). (1968). The Holy Qur’an, Text, Translation and Commentary. Beirut, Lebanon: Dar al Arabia. Ali, A. (1934). The Holy Qur’an Translation Commentary. Beirut: The Holy Qur’an publishing house. Al-Jabari, A. (2008). Reasons for the possible incomprehensibility of some verses of three translations of the meaning of the Holy Quran into English. (Unpublished doctoral dissertation). University of Salford, Salford. Almaany. (n. d.). Retrieved January 31, 2020, from Al-Masri, H. (2009). Translation and cultural equivalence: A study of translation losses in Arabic. Journal of Language and Translation, 1, 7-44. Aminudin. (1999). Pengembangan Penelitian Kualitative dalam Bidang Bahasa dan Sastra. Malang: Yayasan Asah Asih Asuh (YA3). Anari, S. & Sanjarani, A. (2016). Application of Baker's Model in Translating Quran Specific Cultural Items. Journal of Language Sciences & Linguistics, 4(3), 145-151. Arberry, A. J. (1973). The Koran Interpreted. New York: The Macmillan Company. Arberry, A. J. (Trans.) (1996). The Koran Interpreted: A translation. New York, NY: Simon & Schuster. 96 Al-Tabari, M. (2003). JaamiAA Al-Bayaan AAn Ta’weel Ayil Qur’an [The commentary on the Quran]. Cairo, Egypt: Al-Halabi. Dar Al- Maref. As-Safi, A. B. (2011). Translation Theories: Strategies and Basic Theoretical Issues. Amman: Dar Amwaj. Bahmeed A. S. (2008): Hindrances in Arabic-English Intercultural Translation, Cultural Aspects, Vol.12, No. 1. Baker, M. (2011). In Other words: A course book on translation, (2nd edition). London, England: Routledge. Balla, A., & Siddiek, A. (2017) Complications of Translating the Meanings of the Holy Qur’an at Word Level in the English Language in Relation to Frame Semantic Theory. AIAC 8(5). Catford, J. C. (1965). A linguistic Theory of Translation. An Essay in Applied Linguistics (Reprinted ed.). London: Oxford U. P. Cambridge dictionary. (1995). Retrieved January 31, 2020, from Cruse, D. A. (1997). Lexical Semantics. Cambridge: Cambridge University Press. Crystal, D. (1991). A dictionary of linguistics and phonetics (3rd Ed.). Oxford, UK. Darwish, A. (2010). Elements of Translation. Melbourne: Write scope. Dawson, C. (1948) Religion and Culture. London: Sheed &Ward. 97 Delisle, J. (1984). L’Analyse du Discours comme Methodede Traduction. (Theorie et pratique) Initiation I la traduction francaise des textes pragmatiques anglaisn (Model for Translation-Oriented Text. Analysis) Editions de I’Unuversite d’Ottawa. Dickens, J., Hervey, S., & Higgins, I. (2005). Thinking Arabic Translation: A Course in Translation Method: Arabic to English. London: Routledge. 
Published Time: 2003-07-21T04:31:13Z Genetic linkage - Wikipedia

Genetic linkage is the tendency of DNA sequences that are close together on a chromosome to be inherited together during the meiosis phase of sexual reproduction. Two genetic markers that are physically near to each other are unlikely to be separated onto different chromatids during chromosomal crossover, and are therefore said to be more linked than markers that are far apart. In other words, the nearer two genes are on a chromosome, the lower the chance of recombination between them, and the more likely they are to be inherited together. Markers on different chromosomes are perfectly unlinked, although the penetrance of potentially deleterious alleles may be influenced by the presence of other alleles, and these other alleles may be located on chromosomes other than the one carrying the potentially deleterious allele in question. Genetic linkage is the most prominent exception to Gregor Mendel's Law of Independent Assortment.

The first experiment to demonstrate linkage was carried out in 1905. At the time, the reason why certain traits tend to be inherited together was unknown. Later work revealed that genes are physical structures related by physical distance. The typical unit of genetic linkage is the centimorgan (cM). A distance of 1 cM between two markers means that the markers are separated onto different chromatids on average once per 100 meiotic products, thus once per 50 meioses.

Discovery

Gregor Mendel's Law of Independent Assortment states that every trait is inherited independently of every other trait.
But shortly after Mendel's work was rediscovered, exceptions to this rule were found. In 1905, the British geneticists William Bateson, Edith Rebecca Saunders and Reginald Punnett cross-bred pea plants in experiments similar to Mendel's. They were interested in trait inheritance in the sweet pea and were studying two genes—the gene for flower colour (P, purple, and p, red) and the gene affecting the shape of pollen grains (L, long, and l, round). They crossed the pure lines PPLL and ppll and then self-crossed the resulting PpLl lines. According to Mendelian genetics, the expected phenotypes would occur in a 9:3:3:1 ratio of PL:Pl:pL:pl. To their surprise, they observed an increased frequency of PL and pl and a decreased frequency of Pl and pL:

Bateson, Saunders, and Punnett experiment

| Phenotype and genotype | Observed | Expected from 9:3:3:1 ratio |
| --- | --- | --- |
| Purple, long (P_L_) | 284 | 216 |
| Purple, round (P_ll) | 21 | 72 |
| Red, long (ppL_) | 21 | 72 |
| Red, round (ppll) | 55 | 24 |

Their experiment revealed linkage between the P and L alleles and the p and l alleles. The frequency of P occurring together with L and p occurring together with l is greater than that of the recombinant Pl and pL. The recombination frequency is more difficult to compute in an F2 cross than a backcross, but the lack of fit between observed and expected numbers of progeny in the above table indicates it is less than 50%. This indicated that two factors interacted in some way to create this difference by masking the appearance of the other two phenotypes. This led to the conclusion that some traits are related to each other because of their near proximity to each other on a chromosome. The understanding of linkage was expanded by the work of Thomas Hunt Morgan. Morgan's observation that the amount of crossing over between linked genes differs led to the idea that crossover frequency might indicate the distance separating genes on the chromosome. The centimorgan, which expresses the frequency of crossing over, is named in his honour.

Linkage map

Figure: Thomas Hunt Morgan's Drosophila melanogaster genetic linkage map. This was the first successful gene mapping work and provides important evidence for the chromosome theory of inheritance. The map shows the relative positions of alleles on the second Drosophila chromosome. The distances between the genes (centimorgans) are equal to the percentages of chromosomal crossover events that occur between different alleles.

A linkage map (also known as a genetic map) is a table for a species or experimental population that shows the position of its known genes or genetic markers relative to each other in terms of recombination frequency, rather than a specific physical distance along each chromosome. Linkage maps were first developed by Alfred Sturtevant, a student of Thomas Hunt Morgan. A linkage map is a map based on the frequencies of recombination between markers during crossover of homologous chromosomes. The greater the frequency of recombination (segregation) between two genetic markers, the further apart they are assumed to be. Conversely, the lower the frequency of recombination between the markers, the smaller the physical distance between them.
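As a minimal illustration of how pairwise recombination frequencies translate into a map, the sketch below orders three hypothetical markers (the marker names and the frequencies are invented for the example) by using the fact that, over short distances, recombination frequencies are roughly additive along the chromosome. It is only a toy version of the idea, not a real mapping algorithm.

```python
from itertools import permutations

# Hypothetical pairwise recombination frequencies (as fractions) between three markers.
rf = {
    frozenset({"A", "B"}): 0.08,
    frozenset({"B", "C"}): 0.05,
    frozenset({"A", "C"}): 0.12,
}

def additivity_error(order):
    """How far the outer distance deviates from the sum of the two inner intervals."""
    left, middle, right = order
    inner = rf[frozenset({left, middle})] + rf[frozenset({middle, right})]
    outer = rf[frozenset({left, right})]
    return abs(outer - inner)

# The most plausible order is the one in which the two shorter intervals add up
# (approximately) to the longest one.
best = min(permutations(["A", "B", "C"]), key=additivity_error)
print("inferred marker order:", "-".join(best))

# Rough map positions, taking 1% recombination as roughly 1 cM for short distances.
positions = {
    best[0]: 0.0,
    best[1]: 100 * rf[frozenset({best[0], best[1]})],
    best[2]: 100 * (rf[frozenset({best[0], best[1]})] + rf[frozenset({best[1], best[2]})]),
}
print("approximate positions (cM):", positions)
```

Note that the outer frequency (0.12) is slightly less than the sum of the inner ones (0.13), which is the signature of undetected double crossovers discussed later in the article.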
Historically, the markers originally used were detectable phenotypes (enzyme production, eye colour) derived from coding DNA sequences; eventually, confirmed or assumed noncoding DNA sequences such as microsatellites or those generating restriction fragment length polymorphisms (RFLPs) have been used. Linkage maps help researchers to locate other markers, such as other genes, by testing for genetic linkage of the already known markers. In the early stages of developing a linkage map, the data are used to assemble linkage groups, a set of genes which are known to be linked. As knowledge advances, more markers can be added to a group, until the group covers an entire chromosome. For well-studied organisms the linkage groups correspond one-to-one with the chromosomes. A linkage map is not a physical map (such as a radiation reduced hybrid map) or gene map.

Linkage analysis

Linkage analysis is a genetic method that searches for chromosomal segments that cosegregate with the ailment phenotype through families. It can be used to map genes for both binary and quantitative traits. Linkage analysis may be either parametric (if we know the relationship between phenotypic and genetic similarity) or non-parametric. Parametric linkage analysis is the traditional approach, whereby the probability that a gene important for a disease is linked to a genetic marker is studied through the LOD score, which assesses the probability that a given pedigree, where the disease and the marker are cosegregating, is due to the existence of linkage (with a given linkage value) or to chance. Non-parametric linkage analysis, in turn, studies the probability of an allele being identical by descent with itself.

Figure: Pedigree illustrating parametric linkage analysis.

Parametric linkage analysis

The LOD score (logarithm (base 10) of odds), developed by Newton Morton, is a statistical test often used for linkage analysis in human, animal, and plant populations. The LOD score compares the likelihood of obtaining the test data if the two loci are indeed linked, to the likelihood of observing the same data purely by chance. Positive LOD scores favour the presence of linkage, whereas negative LOD scores indicate that linkage is less likely. Computerised LOD score analysis is a simple way to analyse complex family pedigrees in order to determine the linkage between Mendelian traits (or between a trait and a marker, or two markers). The method is described in greater detail by Strachan and Read. Briefly, it works as follows:

1. Establish a pedigree.
2. Make a number of estimates of recombination frequency.
3. Calculate a LOD score for each estimate.
4. The estimate with the highest LOD score will be considered the best estimate.

The LOD score is calculated as follows:

$$\text{LOD} = Z = \log_{10}\frac{\text{probability of birth sequence with a given linkage value}}{\text{probability of birth sequence with no linkage}} = \log_{10}\frac{(1-\theta)^{NR}\times\theta^{R}}{0.5^{NR+R}}$$

NR denotes the number of non-recombinant offspring, and R denotes the number of recombinant offspring. The reason 0.5 is used in the denominator is that any alleles that are completely unlinked (e.g. alleles on separate chromosomes) have a 50% chance of recombination, due to independent assortment.
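To make the formula concrete, here is a minimal sketch of the grid-search recipe listed above, assuming hypothetical offspring counts (NR = 18 non-recombinants, R = 2 recombinants; these numbers are invented for illustration and are not taken from any study). The recombination fraction θ under test is scanned over a small grid and the value giving the highest LOD score is kept.

```python
import math

def lod_score(nr: int, r: int, theta: float) -> float:
    """LOD = log10[ ((1 - theta)**NR * theta**R) / 0.5**(NR + R) ]."""
    likelihood_linked = (1 - theta) ** nr * theta ** r   # given linkage value theta
    likelihood_unlinked = 0.5 ** (nr + r)                # free recombination (theta = 0.5)
    return math.log10(likelihood_linked / likelihood_unlinked)

nr, r = 18, 2  # hypothetical counts of non-recombinant and recombinant offspring
grid = [0.01, 0.05, 0.10, 0.20, 0.30, 0.40]
best_lod, best_theta = max((lod_score(nr, r, t), t) for t in grid)

print(f"best estimate: theta = {best_theta}, LOD = {best_lod:.2f}")
# For these invented counts the maximum falls at theta = 0.10 (= R / (NR + R)),
# with a LOD just above 3, the conventional evidence threshold discussed below.
```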
θ is the recombinant fraction, i.e. the fraction of births in which recombination has happened between the studied genetic marker and the putative gene associated with the disease. Thus, it is equal to R / (NR + R).[citation needed] By convention, a LOD score greater than 3.0 is considered evidence for linkage, as it indicates 1000 to 1 odds that the linkage being observed did not occur by chance. On the other hand, a LOD score less than −2.0 is considered evidence to exclude linkage. Although it is very unlikely that a LOD score of 3 would be obtained from a single pedigree, the mathematical properties of the test allow data from a number of pedigrees to be combined by summing their LOD scores. A LOD score of 3 translates to a p-value of approximately 0.05, and no multiple testing correction (e.g. Bonferroni correction) is required. Limitations [edit] Linkage analysis has a number of methodological and theoretical limitations that can significantly increase the type-1 error rate and reduce the power to map human quantitative trait loci (QTL). While linkage analysis was successfully used to identify genetic variants that contribute to rare disorders such as Huntington disease, it did not perform that well when applied to more common disorders such as heart disease or different forms of cancer. An explanation for this is that the genetic mechanisms affecting common disorders are different from those causing some rare disorders. Recombination frequency [edit] Recombination frequency is a measure of genetic linkage and is used in the creation of a genetic linkage map. Recombination frequency (θ) is the frequency with which a single chromosomal crossover will take place between two genes during meiosis. A centimorgan (cM) is a unit that describes a recombination frequency of 1%. In this way we can measure the genetic distance between two loci, based upon their recombination frequency. This is a good estimate of the real distance. Double crossovers would turn into no recombination. In this case we cannot tell if crossovers took place. If the loci we're analysing are very close (less than 7 cM) a double crossover is very unlikely. When distances become higher, the likelihood of a double crossover increases. As the likelihood of a double crossover increases one could systematically underestimate the genetic distance between two loci, unless one used an appropriate mathematical model.[citation needed] Double linkage is more of a historical concern for plants. In animals, double crossover happens rarely. In humans, for example, one chromosome has two crossovers on average during meiosis. Furthermore, modern geneticists have enough genes that only nearby genes need to be linkage-analyzed, unlike the early days when only a few genes were known. During meiosis, chromosomes assort randomly into gametes, such that the segregation of alleles of one gene is independent of alleles of another gene. This is stated in Mendel's Second Law and is known as the law of independent assortment. The law of independent assortment always holds true for genes that are located on different chromosomes, but for genes that are on the same chromosome, it does not always hold true.[citation needed] As an example of independent assortment, consider the crossing of the pure-bred homozygote parental strain with genotypeAABB with a different pure-bred strain with genotype aabb. A and a and B and b represent the alleles of genes A and B. 
Crossing these homozygous parental strains will result in F1 generation offspring that are double heterozygotes with genotype AaBb. The F1 offspring AaBb produces gametes that are AB, Ab, aB, and ab with equal frequencies (25%) because the alleles of gene A assort independently of the alleles for gene B during meiosis. Note that 2 of the 4 gametes (50%)—Ab and aB—were not present in the parental generation. These gametes represent recombinant gametes. Recombinant gametes are those gametes that differ from both of the haploid gametes that made up the original diploid cell. In this example, the recombination frequency is 50% since 2 of the 4 gametes were recombinant gametes.[citation needed] The recombination frequency will be 50% when two genes are located on different chromosomes or when they are widely separated on the same chromosome. This is a consequence of independent assortment.[citation needed] When two genes are close together on the same chromosome, they do not assort independently and are said to be linked. Whereas genes located on different chromosomes assort independently and have a recombination frequency of 50%, linked genes have a recombination frequency that is less than 50%.[citation needed] As an example of linkage, consider the classic experiment by William Bateson and Reginald Punnett. They were interested in trait inheritance in the sweet pea and were studying two genes—the gene for flower colour (P, purple, and p, red) and the gene affecting the shape of pollen grains (L, long, and l, round). They crossed the pure lines PPLL and ppll and then self-crossed the resulting PpLl lines. According to Mendelian genetics, the expected phenotypes would occur in a 9:3:3:1 ratio of PL:Pl:pL:pl. To their surprise, they observed an increased frequency of PL and pl and a decreased frequency of Pl and pL (see table below). Bateson and Punnett experiment| Phenotype and genotype | Observed | Expected from 9:3:3:1 ratio | | --- | --- | --- | | Purple, long (P_L_) | 284 | 216 | | Purple, round (P_ll) | 21 | 72 | | Red, long (ppL_) | 21 | 72 | | Red, round (ppll) | 55 | 24 | Unlinked Genes vs. Linked Genes Their experiment revealed linkage between the P and L alleles and the p and l alleles. The frequency of P occurring together with L and with p occurring together with l is greater than that of the recombinant Pl and pL. The recombination frequency is more difficult to compute in an F2 cross than a backcross, but the lack of fit between observed and expected numbers of progeny in the above table indicate it is less than 50%.[citation needed] The progeny in this case received two dominant alleles linked on one chromosome (referred to as coupling or cis arrangement). However, after crossover, some progeny could have received one parental chromosome with a dominant allele for one trait (e.g. Purple) linked to a recessive allele for a second trait (e.g. round) with the opposite being true for the other parental chromosome (e.g. red and Long). This is referred to as repulsion or a trans arrangement. The phenotype here would still be purple and long but a test cross of this individual with the recessive parent would produce progeny with much greater proportion of the two crossover phenotypes. 
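For a test cross of the kind just mentioned, the recombination frequency can be estimated directly from the progeny counts. The sketch below uses invented counts (it does not reuse Bateson and Punnett's F2 data, where the estimate is harder to extract) and then converts the raw frequency into map distance with the Haldane and Kosambi mapping functions, which correct for the undetected double crossovers discussed in the next paragraph.

```python
import math

# Hypothetical progeny counts from a test cross PpLl x ppll (invented for illustration).
parental = {"PL": 410, "pl": 396}     # non-recombinant classes
recombinant = {"Pl": 48, "pL": 46}    # recombinant classes

n_rec = sum(recombinant.values())
n_total = n_rec + sum(parental.values())
rf = n_rec / n_total  # recombination frequency between the P and L loci

# Naively, 1% recombination corresponds to 1 cM; mapping functions adjust for
# multiple crossovers that go undetected at larger distances.
naive_cM = 100 * rf
haldane_cM = -50 * math.log(1 - 2 * rf)                  # assumes no crossover interference
kosambi_cM = 25 * math.log((1 + 2 * rf) / (1 - 2 * rf))  # allows partial interference

print(f"recombination frequency = {rf:.3f}")
print(f"naive {naive_cM:.1f} cM, Haldane {haldane_cM:.1f} cM, Kosambi {kosambi_cM:.1f} cM")
```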
While such a problem may not seem likely from this example, unfavourable repulsion linkages do appear when breeding for disease resistance in some crops.[citation needed] The two possible arrangements, cis and trans, of alleles in a double heterozygote are referred to as gametic phases, and phasing is the process of determining which of the two is present in a given individual.[citation needed] When two genes are located on the same chromosome, the chance of a crossover producing recombination between the genes is related to the distance between the two genes. Thus, the use of recombination frequencies has been used to develop linkage maps or genetic maps.[citation needed] However, it is important to note that recombination frequency tends to underestimate the distance between two linked genes. This is because as the two genes are located farther apart, the chance of double or even number of crossovers between them also increases. Double or even number of crossovers between the two genes results in them being cosegregated to the same gamete, yielding a parental progeny instead of the expected recombinant progeny. As mentioned above, the Kosambi and Haldane transformations attempt to correct for multiple crossovers. Linkage of genetic sites within a gene [edit] In the early 1950s the prevailing view was that the genes in a chromosome are discrete entities, indivisible by genetic recombination and arranged like beads on a string. During 1955 to 1959, Benzer performed genetic recombination experiments using rII mutants of bacteriophage T4. He found that, on the basis of recombination tests, the sites of mutation could be mapped in a linear order. This result provided evidence for the key idea that the gene has a linear structure equivalent to a length of DNA with many sites that can independently mutate.[citation needed] Edgar et al. performed mapping experiments with r mutants of bacteriophage T4 showing that recombination frequencies between rII mutants are not strictly additive. The recombination frequency from a cross of two rII mutants (a x d) is usually less than the sum of recombination frequencies for adjacent internal sub-intervals (a x b) + (b x c) + (c x d). Although not strictly additive, a systematic relationship was observed that likely reflects the underlying molecular mechanism of genetic recombination. Variation of recombination frequency [edit] While recombination of chromosomes is an essential process during meiosis, there is a large range of frequency of cross overs across organisms and within species. Sexually dimorphic rates of recombination are termed heterochiasmy, and are observed more often than a common rate between male and females. In mammals, females often have a higher rate of recombination compared to males. It is theorised that there are unique selections acting or meiotic drivers which influence the difference in rates. The difference in rates may also reflect the vastly different environments and conditions of meiosis in oogenesis and spermatogenesis. Genes affecting recombination frequency [edit] Mutations in genes that encode proteins involved in the processing of DNA often affect recombination frequency. In bacteriophage T4, mutations that reduce expression of the replicative DNA polymerase [gene product 43 (gp43)] increase recombination (decrease linkage) several fold. The increase in recombination may be due to replication errors by the defective DNA polymerase that are themselves recombination events such as template switches, i.e. 
copy choice recombination events. Recombination is also increased by mutations that reduce the expression of DNA ligase (gp30) and dCMP hydroxymethylase (gp42), two enzymes employed in DNA synthesis. Recombination is reduced (linkage increased) by mutations in genes that encode proteins with nuclease functions (gp46 and gp47) and a DNA-binding protein (gp32) Mutation in the bacteriophage uvsX gene also substantially reduces recombination. The uvsX gene is analogous to the well studied recA gene of Escherichia coli that plays a central role in recombination. Meiosis indicators [edit] With very large pedigrees or with very dense genetic marker data, such as from whole-genome sequencing, it is possible to precisely locate recombinations. With this type of genetic analysis, a meiosis indicator is assigned to each position of the genome for each meiosis in a pedigree. The indicator indicates which copy of the parental chromosome contributes to the transmitted gamete at that position. For example, if the allele from the 'first' copy of the parental chromosome is transmitted, a '0' might be assigned to that meiosis. If the allele from the 'second' copy of the parental chromosome is transmitted, a '1' would be assigned to that meiosis. The two alleles in the parent came, one each, from two grandparents. These indicators are then used to determine identical-by-descent (IBD) states or inheritance states, which are in turn used to identify genes responsible for diseases.[citation needed] See also [edit] Centimorgan Genetic association Genetic epidemiology Genome-wide association study Identity by descent Lander–Green algorithm Linkage disequilibrium Structural motif References [edit] ^Cooper, DN; Krawczak, M; Polychronakos, C; Tyler-Smith, C; Kehrer-Sawatzki, H (October 2013). "Where genotype is not predictive of phenotype: towards an understanding of the molecular basis of reduced penetrance in human inherited disease". Human genetics. 132 (10): 1077–130. doi:10.1007/s00439-013-1331-2. PMC3778950. PMID23820649. ^Lobo, Ingrid; Shaw, Kenna. "Discovery and Types of Genetic Linkage". Scitable. Nature Education. Retrieved 21 January 2017. ^Bateson, W; Saunders, ER; Punnett, RC (18 May 1904). Reports to the Evolution committee of the Royal Society. London: Harrison and Sons, Printers. Retrieved 21 January 2017. ^ abFisher, RA; Balmukand, B (July 1928). "The estimation of linkage from the offspring of selfed heterozygotes". Journal of Genetics. 20 (1): 79–92. doi:10.1007/BF02983317. S2CID27688031. ^Mader, Sylvia (2007). Biology Ninth Edition. New York: McGraw-Hill. p.209. ISBN978-0-07-325839-3. ^Griffiths, AJF (2000). An Introduction to Genetic Analysis (7th ed.). W. H. Freeman. Archived from the original on September 13, 2019. ^ abCantor, Rita M. (2013), "Analysis of Genetic Linkage", in Rimoin, David; Pyeritz, Reed; Korf, Bruce (eds.), Emery and Rimoin's Principles and Practice of Medical Genetics (6th ed.), Academic Press, pp.1–9, doi:10.1016/b978-0-12-383834-6.00010-0, ISBN978-0-12-383834-6 ^Morton NE (1955). "Sequential tests for the detection of linkage". American Journal of Human Genetics. 7 (3): 277–318. PMC1716611. PMID13258560. ^Nyholt, Dale R (August 2000). "All LODs Are Not Created Equal". American Journal of Human Genetics. 67 (2): 282–288. doi:10.1086/303029. PMC1287176. PMID10884360. ^Risch, Neil (June 1991). "A Note on Multiple Testing Procedures in Linkage Analysis". American Journal of Human Genetics. 48 (6): 1058–1064. PMC1683115. PMID2035526. ^Ferreira, Manuel A. R. (2004-10-01). 
"Linkage Analysis: Principles and Methods for the Analysis of Human Quantitative Traits". Twin Research and Human Genetics. 7 (5): 513–530. doi:10.1375/twin.7.5.513. ISSN2053-6003. PMID15527667. S2CID199001341. ^Gusella, James F.; Frontali, Marina; Wasmuth, John J.; Collins, Francis S.; Lehrach, Hans; Myers, Richard; Altherr, Michael; Allitto, Bernice; Taylor, Sherry (1992-05-01). "The Huntington's disease candidate region exhibits many different haplotypes". Nature Genetics. 1 (2): 99–103. doi:10.1038/ng0592-99. ISSN1546-1718. PMID1302016. S2CID25472459. ^Mark J. Daly; Hirschhorn, Joel N. (2005-02-01). "Genome-wide association studies for common diseases and complex traits". Nature Reviews Genetics. 6 (2): 95–108. doi:10.1038/nrg1521. ISSN1471-0064. PMID15716906. S2CID2813666. ^Meneely, Philip Mark; Dawes Hoang, Rachel; Okeke, Iruka N.; Heston, Katherine (2017). Genetics: genes, genomes, and evolution. Oxford: Oxford University Press. p.361. ISBN978-0-19-879536-0. OCLC951645141. ^Punnett, R. C.; Bateson, W. (1908-05-15). "The Heredity of Sex". Science. 27 (698): 785–787. Bibcode:1908Sci....27..785P. doi:10.1126/science.27.698.785. ISSN0036-8075. PMID17791047. ^Griffiths, AJF; Miller, JH; Suzuki, DT (2000). "Accurate calculation of large map distances, Figure 6-4". An Introduction to Genetic Analysis (7th ed.). New York: W. H. Freeman. ISBN978-0-7167-3520-5. Graph of mapping function from compared to idealised 1-1 equivalence of recombination frequency percentage (RF%) to map units. ^Benzer S. Fine structure of a genetic region in bacteriophage. Proc Natl Acad Sci U S A. 1955;41(6):344-354. doi:10.1073/pnas.41.6.344 ^Benzer S. On the topology of the genetic fine structure. Proc Natl Acad Sci U S A. 1959;45(11):1607-1620. doi:10.1073/pnas.45.11.1607 ^Edgar, R. S.; Feynman, R. P.; Klein, S.; Lielausis, I.; Steinberg, C. M. (1962). "Mapping Experiments with R Mutants of Bacteriophage T4d". Genetics. 47 (2): 179–186. doi:10.1093/genetics/47.2.179. PMC1210321. PMID13889186. ^Fisher, K. M.; Bernstein, H. (1965). "The Additivity of Intervals in the rIIA Cistron of Phage T4d". Genetics. 52 (6): 1127–1136. doi:10.1093/genetics/52.6.1127. PMC1210971. PMID5882191. ^McKee, Bruce D. (2004-03-15). "Homologous pairing and chromosome dynamics in meiosis and mitosis". Biochimica et Biophysica Acta (BBA) - Gene Structure and Expression. 1677 (1–3): 165–180. doi:10.1016/j.bbaexp.2003.11.017. ISSN0006-3002. PMID15020057. ^ abBernstein H. The effect on recombination of mutational defects in the DNA-polymerase and deoxycytidylate hydroxymethylase of phage T4D. Genetics. 1967;56(4):755-769 ^ abcdeBerger H, Warren AJ, Fry KE. Variations in genetic recombination due to amber mutations in T4D bacteriophage. J Virol. 1969;3(2):171-175. doi:10.1128/JVI.3.2.171-175.1969 ^Bernstein H. On the mechanism of intragenic recombination. I. The rII region of bacteriophage T4. (1962) Journal of Theoretical Biology. 1962; 3, 335-353. ^ abBernstein H. Repair and recombination in phage T4. I. Genes affecting recombination. Cold Spring Harb Symp Quant Biol. 1968;33:325-331. doi:10.1101/sqb.1968.033.01.037 ^Hamlett NV, Berger H. Mutations altering genetic recombination and repair of DNA in bacteriophage T4. Virology. 1975;63(2):539-567. doi:10.1016/0042-6822(75)90326-8 ^Fujisawa H, Yonesaki T, Minagawa T. Sequence of the T4 recombination gene, uvsX, and its comparison with that of the recA gene of Escherichia coli. Nucleic Acids Res. 1985;13(20):7473-7481. 
doi:10.1093/nar/13.20.7473
Griffiths AJF; Miller JH; Suzuki DT; Lewontin RC; et al. (1993). "Chapter 5". An Introduction to Genetic Analysis (5th ed.). New York: W.H. Freeman and Company. ISBN 978-0-7167-2285-4.
Poehlman JM; Sleper DA (1995). "Chapter 3". Breeding Field Crops (4th ed.). Iowa: Iowa State Press. ISBN 978-0-8138-2427-7.
Old Kingdom of Egypt

Period in ancient Egyptian history (c. 2686–2181 BC)

| Old Kingdom of Egypt | |
| --- | --- |
| Period | c. 2686 BC – c. 2181 BC |
| Capital | Memphis |
| Common languages | Ancient Egyptian |
| Religion | Ancient Egyptian religion |
| Government | Divine, absolute monarchy |
| First pharaoh (c. 2686 – c. 2649 BC) | Djoser |
| Last pharaoh (c. 2184 – c. 2181 BC) | Depends on the scholar: Neitiqerty Siptah (6th Dynasty) or Neferirkare (7th/8th Dynasty) |
| Population (2500 BC) | 1.6 million |
| Preceded by | Early Dynastic Period of Egypt |
| Succeeded by | First Intermediate Period |

During the Old Kingdom of Egypt (circa 2700 BC – circa 2200 BC), Egypt consisted of the Nile River region south to Abu (also known as Elephantine), as well as Sinai and the oases in the western desert, with Egyptian control/rule over Nubia reaching to the area south of the third cataract.

In ancient Egyptian history, the Old Kingdom is the period spanning c. 2700–2200 BC.
It is also known as the "Age of the Pyramids" or the "Age of the Pyramid Builders", as it encompasses the reigns of the great pyramid-builders of the Fourth Dynasty, such as King Sneferu, under whom the art of pyramid-building was perfected, and the kings Khufu, Khafre and Menkaure, who commissioned the construction of the pyramids at Giza. Egypt attained its first sustained peak of civilization during the Old Kingdom, the first of three so-called "Kingdom" periods (followed by the Middle Kingdom and New Kingdom), which mark the high points of civilization in the lower Nile Valley. The concept of an "Old Kingdom" as one of three "golden ages" was coined in 1845 by the German Egyptologist Baron von Bunsen, and its definition evolved significantly throughout the 19th and the 20th centuries. Not only was the last king of the Early Dynastic Period related to the first two kings of the Old Kingdom, but the "capital", the royal residence, remained at Ineb-Hedj, the Egyptian name for Memphis. The basic justification for separating the two periods is the revolutionary change in architecture accompanied by the effects on Egyptian society and the economy of large-scale building projects. The Old Kingdom is most commonly regarded as the period from the Third Dynasty to the Sixth Dynasty (2686–2181 BC). Information from the Fourth to the Sixth Dynasties of Egypt is scarce, and historians regard the history of the era as literally "written in stone" and largely architectural in that it is through the monuments and their inscriptions that scholars have been able to construct a history. Egyptologists also include the Memphite Seventh and Eighth Dynasties in the Old Kingdom as a continuation of the administration, centralized at Memphis. While the Old Kingdom was a period of internal security and prosperity, it was followed by a period of disunity and relative cultural decline referred to by Egyptologists as the First Intermediate Period. During the Old Kingdom, the King of Egypt (not called the Pharaoh until the New Kingdom) became a living god who ruled absolutely and could demand the services and wealth of his subjects. Under King Djoser, the first king of the Third Dynasty of the Old Kingdom, the royal capital of Egypt was moved to Memphis, where Djoser established his court. A new era of building was initiated at Saqqara under his reign. King Djoser's architect, Imhotep, is credited with the development of building with stone and with the conception of the new architectural form, the step pyramid. The Old Kingdom is best known for a large number of pyramids constructed at this time as burial places for Egypt's kings. History [edit] Rise of the Old Kingdom [edit] Main article: Third Dynasty of Egypt The first King of the Old Kingdom was Djoser (sometime between 2691 and 2625 BC) of the Third Dynasty, who ordered the construction of a pyramid (the Step Pyramid) in Memphis' necropolis, Saqqara. An important person during the reign of Djoser was his vizier, Imhotep. It was in this era that formerly independent ancient Egyptian states became known as nomes, under the rule of the king. The former rulers were forced to assume the role of governors or otherwise work in tax collection. Egyptians in this era believed the king to be the incarnation of Horus, linking the human and spiritual worlds. Egyptian views on the nature of time during this period held that the universe worked in cycles, and the Pharaoh on earth worked to ensure the stability of those cycles. 
They also perceived themselves as specially selected people. The Pyramid of Djoser at Saqqara. The Temple of Djoser at Saqqara The head of a King, likely Huni c. 2650–2600 BC, Brooklyn Museum. Height of the Old Kingdom [edit] Main article: Fourth Dynasty of Egypt The Old Kingdom and its royal power reached a zenith under the Fourth Dynasty (2613–2494 BC). King Sneferu, the first king of the Fourth Dynasty, held territory from ancient Libya in the west to the Sinai Peninsula in the east, to Nubia in the south. An Egyptian settlement was founded at Buhen in Nubia which endured for 200 years. After Djoser, Sneferu was the next great pyramid builder. He commissioned the building of three pyramids. The first is called the Meidum Pyramid, named for its location in Egypt. Sneferu abandoned it after the outside casing fell off of the pyramid. The Meidum pyramid was the first to have an above-ground burial chamber. Using more stones than any other Pharaoh, he commissioned the three pyramids: a now collapsed pyramid in Meidum, the Bent Pyramid at Dahshur, and the Red Pyramid, at North Dahshur. However, the full development of the pyramid style of building was reached not at Saqqara, but during the building of the Great Pyramids at Giza. Sneferu was succeeded by his son, Khufu (2589–2566 BC), who commissioned the Great Pyramid of Giza. After Khufu's death, his sons Djedefre (2566–2558 BC) and Khafre (2558–2532 BC) may have quarrelled. The latter commissioned the second pyramid and (in traditional thinking) the Great Sphinx of Giza. Recent re-examination of evidence has led Egyptologist Vassil Dobrev to propose that the Sphinx was commissioned by Djedefre as a monument to his father Khufu.Alternatively, the Sphinx has been proposed to be the work of Khafre and Khufu himself. There were military expeditions into Canaan and Nubia, with Egyptian influence reaching up the Nile into what is today Sudan. The later kings of the Fourth Dynasty were Menkaure (2532–2504 BC), who commissioned the smallest of the three great pyramids in Giza; Shepseskaf (2504–2498 BC); and, perhaps, Djedefptah (2498–2496 BC). Fifth Dynasty [edit] Main article: Fifth Dynasty of Egypt The Fifth Dynasty (2494–2345 BC) began with Userkaf (2494–2487 BC) and was marked by the growing importance of the cult of sun god Ra. Consequently, fewer efforts were devoted to the construction of pyramid complexes than during the Fourth Dynasty and more to the construction of sun temples in Abusir. Userkaf was succeeded by his son Sahure (2487–2475 BC), who commanded an expedition to Punt. Sahure was in turn succeeded by Neferirkare Kakai (2475–2455 BC), who was Sahure's son. Neferirkare introduced the prenomen in the royal titulary. He was followed by two short-lived kings, his son Neferefre (2455–2453 BC) and Shepseskare, the latter of uncertain parentage. Shepseskare may have been deposed by Neferefre's brother Nyuserre Ini (2445–2421 BC), a long-lived pharaoh who commissioned extensively in Abusir and restarted royal activity in Giza. The last pharaohs of the dynasty were Menkauhor Kaiu (2421–2414 BC), Djedkare Isesi (2414–2375 BC), and Unas (2375–2345), the earliest ruler to have the Pyramid Texts inscribed in his pyramid. Egypt's expanding interests in trade goods such as ebony, incense such as myrrh and frankincense, gold, copper, and other useful metals inspired the ancient Egyptians to build suitable ships for navigation of the open sea. 
They traded with Lebanon for cedar and travelled the length of the Red Sea to the Kingdom of Punt- modern-day Eritrea—for ebony, ivory, and aromatic resins. Shipbuilders of that era did not use pegs (treenails) or metal fasteners, but relied on the rope to keep their ships assembled. Planks and the superstructure were tightly tied and bound together. This period also witnessed direct trade between Egypt and its Aegean neighbors and Anatolia. The rulers of the dynasty sent expeditions to the stone quarries and gold mines of Nubia and the mines of Sinai. there are references and depictions of military campaigns in Nubia and Asia. Decline into the First Intermediate Period [edit] Main articles: Sixth Dynasty of Egypt and First Intermediate Period The sixth dynasty peaked during the reigns of Pepi I and Merenre I with flourishing trade, several mining and quarrying expeditions and major military campaigns. Militarily, aggressive expansion into Nubia marked Pepi I's reign. At least five military expeditions were sent into Canaan. There is evidence that Merenre was not only active in Nubia like Pepi I but also sent officials to maintain Egyptian rule over Nubia from the northern border to the area south of the third cataract. During the Sixth Dynasty (2345–2181 BC) the power of the pharaoh gradually weakened in favor of powerful nomarchs (regional governors). These no longer belonged to the royal family and their charge became hereditary, thus creating local dynasties largely independent from the central authority of the Pharaoh. However, Nile flood control was still the subject of very large works, including especially the canal to Lake Moeris around 2300 BC, which was likely also the source of water to the Giza pyramid complex centuries earlier. Internal disorders set in during the incredibly long reign of Pepi II (2278–2184 BC) towards the end of the dynasty. His death, certainly well past that of his intended heirs, might have created succession struggles. The country slipped into civil wars mere decades after the close of Pepi II's reign. The final blow was the 22nd century BC drought in the region that resulted in a drastic drop in precipitation. For at least some years between 2200 and 2150 BC, this prevented the normal flooding of the Nile. Whatever its cause, the collapse of the Old Kingdom was followed by decades of famine and strife. An important inscription on the tomb of Ankhtifi, a nomarch during the early First Intermediate Period, describes the pitiful state of the country when famine stalked the land. Art [edit] The most defining feature of ancient Egyptian art is its function, as that was the entire purpose of creation. Art was not made for enjoyment in the strictest sense, but rather served a role of some kind in Egyptian religion and ideology. This fact manifests itself in the artistic style, even as it evolved over the dynasties. The three primary principles of that style, frontality, composite composition, and hierarchy scale, illustrate this quite well. These characteristics, initiated in the Early Dynastic Period and solidified during the Old Kingdom, persisted with some adaptability throughout the entirety of ancient Egyptian history as the foundation of its art. Frontality, the first principle, indicates that art was viewed directly from the front. One was meant to approach a piece as they would a living individual, for it was meant to be a place of manifestation. The act of interaction would bring forth the divine entity represented in the art. 
It was therefore imperative that whoever was represented be as identifiable as possible. The guidelines developed in the Old Kingdom and the later grid system developed in the Middle Kingdom ensured that art was axial, symmetrical, proportional, and most importantly reproducible and therefore recognizable. Composite composition, the second principle, also contributes to the goal of identification. Multiple perspectives were used in order to ensure that the onlooker could determine precisely what they saw. Though Egyptian art almost always includes descriptive text, literacy rates were not high, so the art gave another method for communicating the same information. One of the best examples of composite composition is the human form. In most two-dimensional relief, the head, legs, and feet are seen in profile, while the torso faces directly front. Another common example is an aerial view of a building or location. The third principle, the hierarchy of scale, illustrates relative importance in society. The larger the figure, the more important the individual. The king is usually the largest, aside from deities. The similarity in size equated to similarity in position. However, this is not to say that physical differences were not shown as well. Women, for example, are usually shown as smaller than men. Children retain adult features and proportions but are substantially smaller in size. Aside from the three primary conventions, there are several characteristics that can help date a piece to a particular time frame. Proportions of the human figure are one of the most distinctive, as they vary between kingdoms. Old Kingdom male figures have characteristically broad shoulders and a long torso, with obvious musculature. On the other hand, females are narrower in the shoulders and waist, with longer legs and a shorter torso. However, in the Sixth Dynasty, the male figures lose their muscularity and their shoulders narrow. The eyes also tend to get much larger. In order to help maintain the consistency of these proportions, the Egyptians used a series of eight guidelines to divide the body. They occurred at the following locations: the top of the head, the hairline, the base of the neck, the underarms, the tip of the elbow or the bottom of the ribcage, the top of the thigh at the bottom of the buttocks, the knee, and the middle of the lower leg. From the soles of the feet to the hairline was also divided into thirds, one-third between the soles and the knee, another third between the knee and the elbow, and the final third from the elbow to the hairline. The broad shoulders that appeared in the Fifth Dynasty constituted roughly that one-third length as well. These proportions not only help with the identification of representations and the reproduction of art but also tie into the Egyptian ideal of order, which tied into the solar aspect of their religion and the inundations of the Nile. Though the above concepts apply to most, if not all, figures in Egyptian art, there are additional characteristics that applied to the representations of the king. Their appearance was not an exact rendering of the king's visage, though kings are somewhat identifiable through looks alone. Identification could be supplied by inscriptions or context. A huge, more important part of a king's portrayal was about the idea of the office of kingship, which were dependent on the time period. The Old Kingdom was considered a golden age for Egypt, a grandiose height to which all future kingdoms aspired. 
As such, the king was portrayed as young and vital, with features that agreed with the standards of beauty of the time. The musculature seen in male figures was also applied to kings. A royal rite, the jubilee run which was established during the Old Kingdom, involved the king running around a group of markers that symbolized the geographic borders of Egypt. This was meant to be a demonstration of the king's physical vigor, which determined his capacity to continue his reign. This idea of kingly youth and strength were pervasive in the Old Kingdom and thus shown in the art. The sculpture was a major product of the Old Kingdom. The position of the figures in this period was mostly limited to sitting or standing, either with feet together or in the striding pose. Group statues of the king with either gods or family members, typically his wife and children, were also common. It was not just the subject of sculpture that was important, but also the material: The use of hard stone, such as gneiss, graywacke, schist, and granite, was relatively common in the Old Kingdom. The color of the stone had a great deal of symbolism and was chosen deliberately. Four colors were distinguished in the ancient Egyptian language: black, green, red, and white. Black was associated with Egypt due to the color of the soil after the Nile flood, green with vegetation and rebirth, red with the sun and its regenerative cycle, and white with purity. The statue of Menkaure with Hathor and Anput is an example of a typical Old Kingdom sculpture. The three figures display frontality and axiality, while fitting with the proportions of this time period. The graywacke came from the Eastern Desert in Egypt and is therefore associated with rebirth and the rising of the sun in the east. Old Kingdom genetics [edit] Main article: Old Kingdom individual (NUE001) For the first time, in a 2025 publication by the scientific journal Nature, a whole-genome genetic study was able to give insights into the genetic background of Old Kingdom individuals, by sequencing the whole genome of an Old Kingdom adult male Egyptian of relatively high-status, radiocarbon-dated to 2855–2570 BCE, with funerary practices archeologically attributed to the Third and Fourth Dynasty, which was excavated in Nuwayrat (Nuerat, نويرات), in a cliff 265 km south of Cairo. Before this study, whole-genome sequencing of ancient Egyptians from the early periods of Egyptian Dynastic history had not yet been accomplished, mainly because of the problematic DNA preservation conditions in Egypt. The corpse had been placed intact in a large circular clay pot without embalming, and then installed inside a cliff tomb, which accounts for the comparatively good level of conservation of the skeleton and its DNA. Most of his genome was found to be associated with North African Neolithic ancestry, but about 20% of his genetic ancestry could be sourced to the eastern Fertile Crescent, including Mesopotamia. The genetic profile was most closely represented by a two-source model, in which 77.6% ± 3.8% of the ancestry corresponded to genomes from the Middle Neolithic Moroccan site of Skhirat-Rouazi (dated to 4780–4230 BCE), which itself consists of predominantly (76.4 ± 4.0%) Levant Neolithic ancestry and (23.6 ± 4.0%) minor Iberomaurusian ancestry, while the remainder (22.4% ± 3.8%) was most closely related to known genomes from Neolithic Mesopotamia (dated to 9000-8000 BCE). 
Genomes from the Neolithic/Chalcolithic Levant only appeared as a minor third-place component in three-source models. A 2022 DNA study had already shown evidence of gene flow from the Mesopotamian and Zagros regions into surrounding areas, including Anatolia, during the Neolithic, but not as far as Egypt yet. In terms of chronology, Egypt was one of the first areas to adopt the Neolithic package emerging from West Asia as early as the 6th millennium BCE. Population genetics in the Nile Valley observed a marked change around this period, as shown by odontometric and dental tissue changes. Cultural exchange and trade between the two regions then continued through the 4th millennium BCE, as shown by the transfer of Mesopotamian Late Uruk period features to the Nile Valley of the later Predynastic Period. Migrations flows from Mesopotamia accompanied such cultural exchanges, possibly through the sea routes of the Mediterranean and the Red Sea or through yet un-sampled intermediaries in the Levant, which could explain the relative smallness of genetic influence from known Chalcolithic/Bronze Age Levantines populations. Overall, the 2025 study "provides direct evidence of genetic ancestry related to the eastern Fertile Crescent in ancient Egypt". This genetic connection suggests that there had been ancient migration flows from the eastern Fertile Crescent to Egypt, in addition to the exchanges of objects and imagery (domesticated animals and plants, writing systems...) already observed. This suggests a pattern of wide cultural and demographic expansion from the Mesopotamian region, which affected both Anatolia and Egypt during this period. Gallery [edit] King khufu statue at Cairo museum King khafre statue at Cairo museum Greywacke statue of Menkaure and Queen Khamerernebty II at the Boston Museum of Fine Arts Rahotep and Nofret statues at Cairo museum Kaaper around 2500 BC Majordomo Keki statue, 6th dynasty at Louvre museum References [edit] ^ Grimal, Nicolas (1994). A History of Ancient Egypt. Wiley-Blackwell (July 19, 1994). p. 85. ^ Steven Snape (16 March 2019). "Estimating Population in Ancient Egypt". Retrieved 5 January 2021. ^ a b "Old Kingdom of Egypt". World History Encyclopedia. Retrieved 2017-12-04. ^ a b Malek, Jaromir. 2003. "The Old Kingdom (c. 2686–2160 BC)". In The Oxford History of Ancient Egypt, edited by Ian Shaw. Oxford and New York: Oxford University Press. ISBN 978-0192804587, p.83 ^ Schneider, Thomas (27 August 2008). "Periodizing Egyptian History: Manetho, Convention, and Beyond". In Klaus-Peter Adam (ed.). Historiographie in der Antike. Walter de Gruyter. pp. 181–197. ISBN 978-3-11-020672-2. ^ Carl Roebuck, The World of Ancient Times, pp. 55 & 60. ^ a b Carl Roebuck, The World of Ancient Times, p. 56. ^ Herlin, Susan J. (2003). "Ancient African Civilizations to ca. 1500: Pharaonic Egypt to Ca. 800 BC". p. 27. Archived from the original on August 23, 2003. Retrieved 23 January 2017. ^ Bothmer, Bernard (1974). Brief Guide to the Department of Egyptian and Classical Art. Brooklyn, NY: Brooklyn Museum. p. 22. ^ "The Old Kingdom (c. 2575–c. 2130 BCE) and the First Intermediate period (c. 2130–1938 BCE)". Encyclopaedia Britannica. ^ "Ancient Egypt – the Archaic Period and Old Kingdom". Penfield High School. Archived from the original on 2021-04-02. Retrieved 2017-12-04. ^ Carl Roebuck (1984), The World of Ancient Times, p. 57. ^ Fleming, Nic (14 December 2004). "I have solved riddle of the Sphinx, says Frenchman". The Telegraph. Retrieved 21 May 2022. 
173
Teaching | Andrej Bauer

To arrange an office hour, come to my office 4.35 (Jadranska 21) or send me an e-mail.

Courses
Visit spletna učilnica for complete information about ongoing courses. Ongoing and recent courses:

2024/2025
Formalized mathematics and proof assistants ‣ lecture notes ‣ videos ‣ class notes
Symbolic computation and dynamic geometry ‣ videos ‣ class notes

2022/2023
Logic and sets ‣ lecture notes ‣ videos ‣ class notes
Symbolic computation and dynamic geometry ‣ videos ‣ class notes
Principles of programming languages ‣ lecture notes ‣ videos ‣ class notes
Logic in computer science ‣ lecture notes ‣ videos ‣ repository

2021/2022
Logic and sets ‣ lecture notes ‣ videos ‣ class notes
Principles of programming languages ‣ videos ‣ class notes

2020/2021
Logic and sets ‣ class notes
Computer practicum ‣ class notes
Computer science (physics) ‣ class notes
Principles of programming languages ‣ class notes

2019/2020
Logic and sets ‣ class notes
Computer practicum ‣ class notes
Computer science (physics) ‣ class notes
Principles of programming languages ‣ class notes

2018/2019
Introduction to homotopy type theory ‣ course page ‣ videos ‣ class notes
Computer practicum ‣ class notes
Computer science (physics) ‣ class notes
Principles of programming languages ‣ class notes

Additional resources
My YouTube channel
Old video archive, including some of the above courses
Complete listing of lecture and class notes
174
faculty of science and engineering mathematics

The knot complement and its homotopy
Bachelor's Project Mathematics
Date: July 2022
Student: R. M. van der Weide
First supervisor: Dr. R. I. van der Veen
Second assessor: Dr. M. Seri

Contents
Introduction 3
Chapter 1. Knot theory basics 4
1. Definition of a knot 4
2. Knot invariants 6
Chapter 2. Seifert surfaces 8
1. Existence of Seifert Surfaces 8
2. Genus of a surface 9
3. Fundamental group of Seifert Surfaces 11
Chapter 3. Cyclic coverings 13
1. Homology of the knot complement 13
2. Existence of cyclic coverings 15
3. Cutting along a surface 16
4. Construction of the cyclic covering 17
Chapter 4. Fibred knots 20
1. Fibre bundles 20
2. Homotopy invariance of the pullback bundle 22
3. Fibred knots and the commutator subgroup 23
Chapter 5. A different way to study the knot complement 26
1. The metro station 26
2. The knot complement as a tube 27
3. Homotopy-equivalence of the knot complement and the knot tube 29
4. The cyclic covering of the metro station 32
5. The cyclic covering of the knot tube 34
6. The homology of the cyclic cover as Z[t±]-module 37
Conclusion 39
Bibliography 40

Introduction
What is a knot? How can we study it topologically? Those are the questions that this thesis seeks to answer. We study the knot complement as a topological space and apply techniques found in algebraic topology. To start off, this thesis introduces the reader to knots and equivalence of knots. We talk about knot invariants and, in particular, the knot complement, a topological space that is studied in detail in this thesis. Secondly, we study Seifert surfaces. The existence of Seifert surfaces is proven using Seifert's algorithm. Then, we use the genus of the Seifert surface to compute its fundamental group. In the third chapter, the reader is introduced to the infinite cyclic cover of the knot complement. We first use the homology of the knot complement to prove the existence of the infinite cyclic cover, and then construct it explicitly by cutting the knot complement along a Seifert surface. Fourthly, we zoom in on a specific type of knot, called the fibred knot. The reader is first introduced to some theory on fibre bundles, which we use to compute the commutator subgroup of the knot complement for fibred knots. To finish off this thesis, we construct a space that is homotopy-equivalent to the knot complement and somewhat easier to grasp. We study the homology of the infinite cyclic cover of this space and give ideas on how this space can be used to give the homology group a Z[t±]-module structure. Many of the techniques used in this thesis are techniques from algebraic topology. As of writing this thesis, the University of Groningen does not have a bachelor course on algebraic topology. Therefore many bachelor students may struggle reading this thesis. In case the reader is interested in learning about algebraic topology, the author recommends reading chapters 11-14 of and chapter 13 of .

CHAPTER 1 Knot theory basics
The aim of this chapter is to provide the reader with the basic definitions in knot theory. To be specific, we define what a knot is and how knots are classified. Furthermore, knot invariants are defined and it is proven that the knot complement is a knot invariant.

1. Definition of a knot
In order to define knots, we first recall the definition of an embedding.

Definition 1.1. Let X and Y be topological spaces. An embedding of X in Y is a continuous map f : X → Y such that the restriction f : X → f(X) is a homeomorphism.
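As a standard illustration of why the homeomorphism condition on the restriction cannot be dropped (this example is not taken from the thesis): consider the map f : [0, 1) → S1 given by f(t) = (cos 2πt, sin 2πt). It is continuous and injective, and its image is all of S1, but the inverse map f(X) → X is not continuous at the point (1, 0): points of the circle just below (1, 0) are sent to values close to 1, while (1, 0) itself is sent to 0. Hence f is a continuous injection that is not an embedding.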
Sometimes, the notation X ↪ Y may be used to express that X is embedded in Y, without needing to give the map a name. It is clear that an embedding is injective. Note however that not every continuous injective map is an embedding, as the inverse of the restriction may not be continuous. The definition of an embedding is enough to give the definition of a knot.

Definition 1.2. A knot is an embedding k : S1 ↪ S3. The image of this map is also called a knot and is also denoted k.

The ambiguity of this notation is not an issue, as it is always clear from context whether k refers to an embedding S1 ↪ S3 or the image of that embedding. One should recall that S3 is the one point compactification of R3. Therefore S3 can be viewed as R3 ∪ {∞}. Consequently, by thinking of knots as embeddings S1 ↪ R3 with a point at infinity, it becomes much easier to visualise and draw them.

Example 1.3 (Unknot and trefoil knot). We give some examples of knots. Firstly, there is the trivial embedding S1 ↪ S3, as seen in the left drawing of figure 1.1. This knot is called the unknot. Secondly, there is the embedding S1 ↪ S3 that is drawn on the right in figure 1.1. This knot is called the trefoil knot.

As with many objects in mathematics, we need a method to classify knots. The image of a knot is of no use to us here, as every image of a knot is homeomorphic with the circle S1. So instead, we need to take the surrounding space into account. This leads us to the following definition:

Figure 1.1. The unknot (left) and the trefoil knot (right).

Definition 1.4. Two knots k1 : S1 ↪ S3 and k2 : S1 ↪ S3 are equivalent if there exists an orientation-preserving homeomorphism h : S3 → S3 that carries k1 into k2, i.e. h ◦ k1 = k2.

It is clear that the above definition is an equivalence relation. The definition of a knot presented above lacks a property that will be used extensively in future chapters. To be specific, we require our knots to have a tubular neighbourhood, which is a neighbourhood that is homeomorphic with S1 × D2. An example of a knot that doesn't have this property is the infinitely nested knot presented in figure 1.2, as no tube can be formed around the (limit) point L. Despite appearing as if you can unravel the knot from the right side, it can be shown that this knot is not equivalent to the unknot.

Figure 1.2. A knot with infinitely nested crossings, from .

Examples of knots that clearly have a tubular neighbourhood are those that satisfy the following:

Definition 1.5. A polygonal knot is a knot whose image is the union of a finite number of line segments.

It is clear that the knots presented in figure 1.1 are polygonal knots. These knots turn out to be precisely the ones we are interested in.

Theorem 1.6. Let k be a knot. The following statements are equivalent: (1) The knot k is equivalent to a polygonal knot; (2) there exists a neighbourhood of k that is homeomorphic to S1 × D2. This neighbourhood is called a tubular neighbourhood of k, denoted V (k).

Proof. This theorem is not proven in this thesis. A proof of a stronger version of this theorem is given in . □

Definition 1.7. A knot is called tame if it satisfies the equivalent conditions (1) and (2) in theorem 1.6. A knot that is not tame is called wild.

From now on, all knots in this thesis are assumed to be tame.

2. Knot invariants
The concept of equivalence of knots has been presented in definition 1.4.
This definition is quite straightforward, and knot theorists have developed a diagrammatic method to show that two knots are equivalent. This method is called equivalence by Reidemeister moves and an in-depth explanation can be found in the first chapter of . On the contrary, a problem that is omnipresent in knot theory is showing that two knots are not equivalent. It is difficult to directly show that no orientation-preserving homeomorphism of S3 exists that carries one knot to another. For instance, it is tough to show that the knots in figure 1.1 are not equivalent. Therefore knot theorists turn to knot invariants instead to show two knots are not equivalent.

Definition 1.8. A knot invariant is a map {knots k : S1 ↪ S3}/(equivalence) → Z, where Z is any set. A knot invariant is called complete if it is injective.

So a knot invariant assigns to any knot a quantity (in a very broad sense) that does not change under equivalence of knots. The main focus of knot theory is to find knot invariants and compute them. As of the publication of this thesis, no easily computable complete knot invariant exists. Finding such an invariant would be the pinnacle achievement of knot theory. The focus of this thesis is on the knot complement, defined below.

Definition 1.9. Let k : S1 ↪ S3 be a knot. The space S3 \ Im k is called the knot complement of k.

The knot complement turns out to be a knot invariant, up to homeomorphisms (denoted ≅).

Proposition 1.10. The following map is a knot invariant:
C : {knots k : S1 ↪ S3}/(equivalence) → {S3 \ Im k | k : S1 ↪ S3}/≅, k ↦ S3 \ Im k.

Proof. We need to show that C is well-defined. Let k1 : S1 ↪ S3 and k2 : S1 ↪ S3 be two equivalent knots and h : S3 → S3 an orientation-preserving homeomorphism such that h ◦ k1 = k2. It is clear that h(Im k1) = Im k2, and therefore the restriction h|S3\Im k1 : S3 \ Im k1 → S3 \ Im k2, x ↦ h(x), is a homeomorphism. We conclude that C is well-defined. □

A noteworthy fact about the knot complement is that it is a complete invariant. A proof of this can be found in . As mentioned before, no easily computable complete knot invariant is known. Indeed, it is tough to determine whether the knot complements of two knots are homeomorphic. Instead, we choose to focus on computing topological invariants of the knot complement, which is done in the remainder of this thesis.

CHAPTER 2 Seifert surfaces
Before studying topological invariants of the knot complement, we need to take a detour to the theory of Seifert surfaces. These surfaces will be required to construct a covering map of the knot complement in chapter 3, called the infinite cyclic cover of the knot complement. In this chapter, we discuss Seifert surfaces and their existence, as well as compute their fundamental group in terms of their genus.

1. Existence of Seifert Surfaces

Definition 2.1. A Seifert surface of a knot is an orientable surface with boundary equal to the knot.

The existence of Seifert surfaces is non-trivial, but does turn out to be guaranteed. An algorithm called Seifert's algorithm allows us to construct a Seifert surface explicitly for any knot.

Theorem 2.2 (Seifert's algorithm). Every knot admits a Seifert surface.

Proof. Let k be a knot. We will construct an orientable surface S satisfying ∂S = k. First, choose an orientation and a knot diagram for k. Then, at each crossing of the knot in the diagram, alter k as shown in figure 2.1.

Figure 2.1. The creation of Seifert cycles, from .
After this, we end up with a disjoint union of oriented simple closed curves. These curves are called Seifert cycles. Recall that an oriented simple closed curve is the boundary of an oriented surface. For each Seifert cycle, choose such a surface and embed these surfaces into S3 such that their boundaries are the Seifert cycles, while keeping the surfaces disjoint. The surfaces can be kept disjoint by lifting them up or down to create a three-dimensional stack of surfaces. These surfaces are called Seifert cells. Lastly, we undo the process of 'cutting' the knot in figure 2.1 by merging the Seifert cells together. This is done by adding a half-twisted strip at each position where there used to be a crossing, see figure 2.2. We let the half-twists cross in the same way as the original crossing, creating a connected surface S satisfying ∂S = k.

Figure 2.2. Twisted bands merging the surfaces, from .

The Seifert cells are orientable and due to the twisted bands from figure 2.2 we find that S itself is also orientable. We conclude that S is a Seifert surface of k. □

To further clarify the algorithm presented above, a Seifert surface of the so-called 'figure eight' knot is constructed in figure 2.3.

Figure 2.3. Applying Seifert's algorithm to the figure eight knot, from .

A fact that may interest the reader is that Seifert surfaces are not unique. Every knot has infinitely many non-homeomorphic Seifert surfaces. This statement is easily proven using topological surgery, but this is not done in this thesis.

2. Genus of a surface
In this section, we discuss the concept of the genus of a surface. When defining the genus, it is important to distinguish between surfaces with and without boundary.

2.1. Surfaces without boundary. The genus is defined as follows for orientable surfaces (without boundary):

Definition 2.3. Let S be a connected and orientable surface. Then the genus of S is the maximum number of disjoint simple closed curves that S can be cut along without the resulting surface being disconnected.

When talking about surfaces, it is useful to consider the classification of surfaces. This classification is a useful aid in finding the fundamental group of Seifert surfaces in the next section.

Proposition 2.4 (Classification of surfaces). Any connected surface is homeomorphic with one of (1) the 2-sphere; (2) the torus; (3) the projective plane, or connected sums of these surfaces.

Recall that the connected sum of two connected surfaces S and T is obtained by removing an open disc in S and T and then identifying their respective boundaries. Note that the connected sum of two spheres is a single sphere, and the connected sum of two tori is a torus with two holes, see figure 2.4.

Figure 2.4. The connected sum of two tori is a torus with two holes.

Since the projective plane is non-orientable, we can classify the connected orientable surfaces as tori with n holes, where the sphere is the torus with 0 holes. With this information, we can find the genus of all connected orientable surfaces:

Proposition 2.5. A torus with n holes has genus n.

Proof. There are two different simple closed curves that can be cut out of a torus without the resulting surface being disconnected. One of them yields a cylinder, and the other an annulus, which are homeomorphic. No simple closed curve can be taken out of these surfaces without making the result disconnected. By performing either of these cuts at every hole of the torus with n holes, we obtain a connected sum of n cylinders.
No more cuts can be done to this without disconnecting the space, so we conclude that the torus with n holes has genus n. □

2.2. Surfaces with boundary. We can view surfaces with boundary as closed surfaces with discs taken out of them. To be specific, S is said to be a connected surface with n boundary components if for some connected surface C, S = C \ (D2 ⊔ · · · ⊔ D2), where n disjoint copies of the open disc D2 are removed from C. We can now define the genus of a surface with boundary:

Definition 2.6. Let S = C \ (D2 ⊔ · · · ⊔ D2) be a surface with boundary, obtained from C by removing n disjoint open discs. Then the genus of S is the genus of C.

It should be noted that defining the genus of a surface with boundary as the number of simple closed curves that can be cut from the surface, without disconnecting it, would result in an equivalent definition. This fact is not shown in this thesis. One example of a connected and orientable surface with boundary is a Seifert surface. The genus of the Seifert surface is of interest to us, but it turns out that not all Seifert surfaces of a knot have the same genus. Therefore, we define the concept of the genus of a knot as follows:

Definition 2.7. The genus of a knot is the minimal genus of its Seifert surfaces.

3. Fundamental group of Seifert Surfaces
Consider a knot k of genus g. Let S be a Seifert surface of k of genus g. The boundary of S is k, so since k has 1 connected component we see that S is a surface with 1 boundary component. As S is orientable, the classification of oriented surfaces tells us that S is homeomorphic with a torus with g holes and 1 disc taken out of it. Now that we can grasp the Seifert surface more easily, we can compute its fundamental group.

Proposition 2.8. The fundamental group of a connected and oriented surface with genus g and 1 boundary component is F2g, the free group with 2g generators.

Proof. Let S be a connected and oriented surface with genus g and 1 boundary component. Then by the classification of surfaces we find that S is a torus with g holes and a disc taken out of it. This space can be retracted to the bouquet of 2g circles, see figure 2.5. The fundamental group of the bouquet with 2g circles is F2g. This completes the proof. □

Corollary 2.9. The fundamental group of a Seifert surface of genus g is F2g.

Figure 2.5. The torus with g holes and 1 boundary component can be retracted to the bouquet of 2g circles.

CHAPTER 3 Cyclic coverings
In this chapter we start studying the knot complement. If k is a knot with tubular neighbourhood V = V (k), then we denote the knot complement by C. By knot complement we mean one of the spaces S3 \ k, S3 \ V , S3 \ k or S3 \ V . Even though these spaces are not homeomorphic, they are homotopy-equivalent. Seeing as we are studying the homology groups and fundamental group of the knot complement, this does not turn out to be a problem.

The fundamental group of the knot complement is of great interest to knot theorists, as it is a very powerful invariant. However, computing this group explicitly turns out to be a difficult task that has yet to be overcome. Therefore, knot theorists prefer to study invariants of this group, such as the first homology group or the commutator subgroup. The aim of this chapter is to compute the homology of the knot complement, and explicitly construct a space that has the commutator subgroup as its fundamental group. This space is studied in-depth in future chapters. It should be noted that the knot complement is assumed to be a connected 3-manifold. This is non-trivial, but is not proven in this thesis.

1.
Homology of the knot complement
In this section, the homology of the knot complement is computed. The following tool is required to compute this homology.

Theorem 3.1 (Mayer-Vietoris). Let X be a topological space and U1, U2 ⊂ X open subspaces such that U1 ∪ U2 = X. Consider the group homomorphisms induced by the inclusion maps: i⋆ : Hp(U1 ∩ U2) → Hp(U1), j⋆ : Hp(U1 ∩ U2) → Hp(U2), k⋆ : Hp(U1) → Hp(X) and l⋆ : Hp(U2) → Hp(X). Also consider the group homomorphism ∂⋆ : Hp(X) → Hp−1(U1 ∩ U2) given by ∂⋆[c] = ∂⋆[c1 − c2] = [∂c1] = [∂c2], where c1 and c2 are p-chains in U1 and U2 respectively. Recall that any cycle in X can be written in this way. The following sequence of groups is exact:
· · · → Hp+1(X) → Hp(U1 ∩ U2) → Hp(U1) ⊕ Hp(U2) → Hp(X) → Hp−1(U1 ∩ U2) → Hp−1(U1) ⊕ Hp−1(U2) → · · · → H0(U1) ⊕ H0(U2) → H0(X) → 0,
where the maps Hp(U1 ∩ U2) → Hp(U1) ⊕ Hp(U2) are i⋆ ⊕ j⋆, the maps Hp(U1) ⊕ Hp(U2) → Hp(X) are k⋆ − l⋆, and the connecting maps Hp+1(X) → Hp(U1 ∩ U2) are ∂⋆.

Proof. This theorem is assumed to be prior knowledge to the reader. A proof can be found in chapter 13 of . □

The Mayer-Vietoris theorem is used not only to prove the following theorem, but also to prove several theorems in chapter 5. Therefore it is imperative that the reader has a good understanding of this theorem.

Proposition 3.2 (Homology of the knot complement). Let k be a knot, V = V (k) a tubular neighbourhood and C = S3 \ k the corresponding knot complement, then Hp(C) = Z if p = 0, 1, and Hp(C) = 0 if p > 1.

Proof. It is assumed without proof that the knot complement is a connected 3-manifold. Therefore C is path-connected and H0(C) = Z. The Mayer-Vietoris theorem is used to find Hp(C) for p > 0. In this case, let X = S3, U1 = C and U2 = V . Since V is homeomorphic with a solid torus, it is homotopy-equivalent with the circle S1. The intersection U1 ∩ U2 = C ∩ V is a solid torus with a circle taken out of it, i.e. homeomorphic with S1 × (D2 \ {∗}). The space D2 \ {∗} is homotopy-equivalent with S1, so U1 ∩ U2 is homotopy-equivalent with the torus T. Recall the following results from algebraic topology: Hp(S1) = Z for p = 0, 1 and 0 for p > 1; Hp(S3) = Z for p = 0, 3 and 0 for p = 1, 2 or p > 3; Hp(T) = Z for p = 0, 2, Z ⊕ Z for p = 1, and 0 for p > 2.

Applying Mayer-Vietoris yields the following exact sequence for p > 3: Hp(C ∩ V ) → Hp(C) ⊕ Hp(V ) → Hp(S3). Since Hp(T) = Hp(S1) = Hp(S3) = 0, we find that Hp(C) = 0 for p > 3. In addition, there is the following exact sequence at the bottom of the sequence:
H3(C ∩ V ) → H3(C) ⊕ H3(V ) → H3(S3) → H2(C ∩ V ) → H2(C) ⊕ H2(V ) → H2(S3) → H1(C ∩ V ) → H1(C) ⊕ H1(V ) → H1(S3),
with maps as in theorem 3.1.

As H1(S3) = H2(S3) = 0 we find that H1(C ∩ V ) = H1(C) ⊕ H1(V ), so Z ⊕ Z = H1(C) ⊕ Z. Thus H1(C) = Z.

If we view C ∩ V as the torus by the homotopy-equivalence, then any 2-cycle on the torus is the boundary of a 3-chain in S3. Because our knot is tame, this 3-chain can be made so that it does not intersect the knot anywhere. Therefore the map H2(C ∩ V ) → H2(C) induced by the inclusion is trivial, so Im i⋆ ⊕ j⋆ = 0. From the homomorphism theorem we get (H2(C) ⊕ H2(V ))/ ker(k⋆ − l⋆) ≅ Im(k⋆ − l⋆) ⊆ H2(S3) = 0, and by exactness ker(k⋆ − l⋆) = Im i⋆ ⊕ j⋆ = 0, so H2(C) ⊕ H2(V ) = H2(C) = 0.

Since ker i⋆ ⊕ j⋆ = Z we find that ∂⋆ is surjective, so since H3(S3) = H2(C ∩ V ) = Z we find that ker ∂⋆ = 0. Furthermore, H3(C ∩ V ) = 0, thus H3(C ∩ V ) → H3(C) ⊕ H3(V ) is trivial. We conclude by exactness that H3(C) ⊕ H3(V ) = 0 and therefore H3(C) = 0. □

2. Existence of cyclic coverings
In the previous section, it was shown that the first homology group of the knot complement is independent of the knot, and is always infinite cyclic.
In this section, this is used to prove the existence of a cyclic covering of the knot complement whose fundamental group is equal to the commutator subgroup of the fundamental group of the knot complement. We first recall the notion of a regular covering and the Galois Correspondence of covering maps.

Definition 3.3. Let p : Y → X be a covering with Y connected and X locally path-connected. The covering p is called regular if it is a G-covering for some group G.

Theorem 3.4 (Galois Correspondence). Let X be a topological space that is connected, locally path-connected and semi-locally simply connected. Let S be the set of pointed regular coverings p : (Y, y) → (X, x), up to isomorphism. Let P be the set of subgroups of π1(X, x). The map S → P, p ↦ p⋆(π1(Y, y)), is a bijection. This bijection is called the Galois Correspondence. A covering p ∈ S is regular if and only if its corresponding subgroup of π1(X, x) is normal. If this is the case, then p : (Y, y) → (X, x) is a π1(X, x)/p⋆(π1(Y, y))-covering.

Proof. This theorem is assumed to be prior knowledge to the reader. A proof can be found in chapter 13 of . □

The existence of the cyclic coverings described below is an immediate consequence of the Galois Correspondence.

Proposition 3.5 (Existence of cyclic coverings). There exists a unique regular Z-covering of the knot complement p∞ : C∞ → C that satisfies p∞⋆(π1(C∞)) ≅ [π1(C), π1(C)]. For n ∈ Z≥2 there exists a unique regular Z/nZ-covering of the knot complement pn : Cn → C that satisfies pn⋆(π1(Cn)) ≅ nZ ⊕ [π1(C), π1(C)].

Proof. Recall that the commutator subgroup [π1(C), π1(C)] of π1(C) is a normal subgroup. Furthermore, recall from Hurewicz' Theorem that H1(C) ≅ π1(C)/[π1(C), π1(C)] and that H1(C) ≅ Z by proposition 3.2. Therefore we find by the Galois Correspondence (theorem 3.4) that there exists a unique regular Z-covering p∞ : C∞ → C that satisfies p∞⋆(π1(C∞)) ≅ [π1(C), π1(C)]. By further quotienting H1(C) to Z/nZ we find the Z/nZ-covering, again by the Galois Correspondence. □

These coverings are significant enough in this thesis to be given their own name:

Definition 3.6. The Z-covering of proposition 3.5 is called the infinite cyclic covering and the Z/nZ-covering is called the finite (n-fold) cyclic covering.

3. Cutting along a surface
A technique that is used to construct the cyclic covering of the knot complement is called cutting a 3-manifold along a surface. The most intuitive way to cut a 3-manifold M along a surface S in M would be to consider the subspace M \ S. However, this space is not closed in M. This turns out to be problematic when constructing the cyclic cover. Therefore the technique below is used to cut along a surface instead.

Definition 3.7 (Cutting a 3-manifold along a surface). Let M be a 3-manifold and S an oriented surface in M. Consider a neighbourhood U around S such that U ≅ S × [−1, 1]. Then U \ S = U1 ∪ U2 with U1 ∩ U2 = ∅ and U1, U2 ≅ S × (0, 1]. Let M′0, U′1 and U′2 be homeomorphic copies of M \ U, U1 and U2 respectively, with homeomorphisms f0 : M \ U → M′0 and fi : Ui → U′i for i ∈ {1, 2}. The space M′ is obtained from the disjoint union M′0 ⊔ U′1 ⊔ U′2 by identifying f0(x) with fi(x) when x ∈ M \ U ∩ Ui = ∂(M \ U) ∩ ∂Ui (i ∈ {1, 2}). The space M′ is a 3-manifold and is called the space obtained by cutting M along S.
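A lower-dimensional analogue may help picture this definition (the analogy is ours, not the thesis'): cutting the circle S1 along a single point p in the same fashion yields an arc, and the identification map introduced below sends both ends of that arc back to p. The example of the solid torus cut along a disc, given below, is the three-dimensional version of the same picture: the two copies of the disc in the boundary of the resulting cylinder are both sent to the original disc.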
16 First of all, it should be noted that the space constructed above is home-omorphic with M \ U, which is closed in M, solving our problem mentioned earlier Additionally, Through the homeomorphisms fi there is a natural map i : M ′ 0 ⊔U ′ 1 ⊔U ′ 2 − →M x 7− →    f0(x) if x ∈M ′ 0 f1(x) if x ∈U ′ 1 f2(x) if x ∈U ′ 2. In turn, this map induces a natural map ι : M ′ →M called the identification map, not to be confused with the quotient map M ′ 0 ⊔U ′ 1 ⊔U ′ 2 →M ′. As an example, we can cut the solid torusy S1 × D2 along a disc D2 to obtain a solid cylinder [0, 1] × D2 with two copies of the disc D2 as boundary. This can be seen in figure 3.1. It should be noted that the identification map maps points in the red discs of the solid cylinder to the corresponding points in the red disc of the solid torus. Figure 3.1. Cutting a solid torus along a disc. 4. Construction of the cyclic covering In this section, we construct the cyclic coverings of the knot complement. Let k be a knot, let V = V (k) be a tubular neighbourhood of k and let S′ be a Seifert surface of k. Furthermore, let C = S3 \ V be the knot complement and let S = S′ ∩C. Lastly, define λ = ∂S′ ∩V , which is a simple closed curve along the boundary of V . Now cut C along S to obtain the 3-manifold C⋆. The boundary of C⋆is a connected surface that consists of an annulus that is obtained by cutting the torus ∂V along λ, and two disjoint parts S+ and S−that are homeomorphic to S. A local overview of this can be seen in figure 3.2. Let r : S+ ∼ − →S−be the homeomorphism that maps a point of S+ to the point of S−which corresponds to the same point of S. You can think of this as drawing vertical lines between S+ and S−in figure 3.2 and sending points in S+ along those lines to the corresponding point in S−. Take homeomorphic copies (C∗ i )i∈Z of C⋆with homeomorphisms (hi : C⋆ ∼ − → C⋆ i )i∈Z. The space C∞is obtained from the disjoint union F i∈Z C⋆ i by identi-fying hi(x) and hi+1(r(x)) when x ∈S+ and i ∈Z. Furthermore, the space 17 Figure 3.2. Local overview of cutting the knot complement along a Seifert surface, from . Figure 3.3. Moving between layers in C∞and Cn, from . Cn is obtained from Fn−1 i=0 C⋆ i by identifying hi(x) with hi+1(r(x)) and hn−1(x) with h0(r(x)) for i ∈{1, 2, . . . , n −1} and when x ∈S+. The spaces Cn and C∞are stacks of C⋆, where one can move from one layer up or down through S+ or S−respectively. In Cn, going up through S+ in C⋆ n−1 puts you in C⋆ 0. To illustrate this moving between layers, see figure 3.3. 18 The covering maps can now be introduced. Let ι : C⋆→C be the identi-fication map. We define the maps p∞: C∞− →C x 7− →ι(h−1 i (x)) if x ∈C⋆ i and pn : Cn − →C x 7− →ι(h−1 i (x)) if x ∈C⋆ i . It is clear that these maps are coverings of the knot complement. It remains to be shown that they are the cyclic coverings of the knot complement. Consider the map t : C∞− →C∞ x 7− →hi+1(h−1 i (x)) if x ∈C⋆ i . The map t moves a point up a layer in C∞. We have an even action of Z on C∞ through z · x := tz(x), for z ∈Z and x ∈C∞. Since the orbits of Z are equal to the fibres of p∞, we find that p∞is a Z-covering. Since C∞is connected and C is locally path-connected, p∞is in fact a regular Z-covering. Hence p∞ is the infinite cyclic cover of the knot complement from definition 3.6. The proof that pn is the n-fold cyclic covering of the knot complement is analogous. 19 CHAPTER 4 Fibred knots The focus of this chapter is on a specific type of knots, called fibred knots. 
As the name suggests, fibred knots have something to do with fibre bundles. The first section introduces fibre bundles and the pullback bundle and the second section relates this pullback bundle to homotopy. In the last section, we are ready to introduce fibred knots and prove the main theorem on fibred knots, which has to do with the commutator subgroup of the fundamental group of the knot complement. 1. Fibre bundles A fibre bundle is defined as follows: Definition 4.1. A fibre bundle is a continuous surjection π : E →B with a fibre F satisfying the following property: For every p ∈B there exists an open neighbourhood U and a homeo-morphism φ : π−1(U) ∼ − →U × F such that the following diagram commutes (U × F →U is the projection map): π−1(U) U × F U φ π The duplet (U, φ) is called the local trivialisation of p. From the above property it follows that π−1({p}) ∼ = F for all p ∈B. The space B is called the base space of the bundle, E the total space. The map π is called the projection of the bundle. A fibre bundle π : E →B is called trivial if there is a homeomorphism ψ : B × F ∼ − →E such that π ◦ψ is the projection onto B. To aid in the readers understanding of fibre bundle, we give some examples of fibre bundles. Example 4.2. The cylinder and the M¨ obius strip are both fibre bundles over the circle with fibre [0, 1], see figure 4.1. Note that the cylinder is a trivial fibre over the circle as the cylinder is homeomorphic with S1 × [0, 1]. On the contrary, the M¨ obius strip is a non-trivial fibre over the circle. Another example of a fibre bundle that strikes our interest is a covering map. Proposition 4.3. Let p : Y →X be a map. The following two statements are equivalent: 20 Figure 4.1. The M¨ obius strip and the cylinder as fibre bundles over the circle, from . (1) p is a covering map with homeomorphic fibres; (2) p is a fibre bundle with a discrete fibre. In the above statement, discrete fibre means that the fibre has the discrete topology. Recall that a covering over a connected space has homeomorphic fibres, so this extra condition in (1) is usually satisfied. Proof of proposition 4.3. (1) = ⇒(2): Let p : Y →X be a covering with homeomorphic fibres and let x ∈X. We prove that p is a fibre bundle with fibre p−1({x}). Let z ∈X, then there exists an open neighbourhood U of z such that p−1(U) = G y∈p−1({z}) Vy with Vy open, and the restriction p|Vy : Vy →U is a homeomorphism. Let ψ : p−1({z}) ∼ − → p−1({x}) be a homeomorphism. Define the map φ : p−1(U) ∼ − − →U × p−1({x}) w 7− →(p(w), ψ(y)) if w ∈Vy. Then φ is a homeomorphism such that the following diagram (with U × p−1({x}) →U projection) commutes: p−1(U) U × p−1({x}) U φ p We conclude that p : Y →X is a fibre bundle with fibre p−1({x}). (2) = ⇒(1): Let p : Y →X be a fibre bundle with discrete bundle F. Let x ∈X, then there exists an open neighbourhood U of x and a homeomorphism 21 φ : p−1(U) ∼ − →U × F such that the following diagram commutes: p−1(U) U × F U φ p Since F has the discrete topology, the subspace U × {f} ⊂U × F is open for all f ∈F. Also note that the projection U × {f} ∼ − →U is a homeomorphism. We conclude that p−1(U) = G f∈F φ−1(U × {f}), so p−1(U) is a disjoint union of opens such that the restriction p|φ−1(U×{f}) is a homeomorphism, hence p is a covering map with homeomorphic fibres (each fibre is homeomorphic to F). , We don’t need a lot of theory on fibre bundles in this thesis, but there is one more definition that we require, namely that of the pullback bundle. Definition 4.4 (pullback bundle). 
Let π : E →B be a fibre bundle with fibre F and let f : B′ →B be a continuous map. Consider the space f ⋆E = {(b′, e) ∈B′ × E | f(b′) = π(e)}. Let π′ : f ⋆E →B′ be the projection onto the first coordinate, and h : f ⋆E →E the projection onto the second coordinate. Then π′ is a fibre bundle with fibre F and the following diagram commutes: f ⋆E E B′ B h π′ π f The fibre bundle π′ : f ⋆E →B′ (with fibre F) is called the pullback bundle of π along f. If (U, φ) is a local trivialisation of (a point in) E, then (f −1(U), ψ) is a local trivialisation of (a point in) B′. Here ψ is given by ψ : π′−1(f −1(U)) − →f −1(U) × F (b′, e) 7− →(b′, proj2(φ(e)), with proj2 the projection onto the second coordinate. So π′ : f ⋆E →B′ is in fact a fibre bundle with fibre F. 2. Homotopy invariance of the pullback bundle The following theorem is the relation between fibre bundles and homotopy. This is called the homotopy-invariance of the lifting property or the homotopy-lifting property of the fibre bundle. Despite being of great importance to prove the main result of the theorem on fibred knots, this theorem of is not proven in this thesis. 22 Theorem 4.5. Let π : E →B be a fibre bundle. Let f, g : B′ →B be two homotopic maps, then their pullbacks are homeomorphic: f ⋆E ∼ = g⋆E. Proof. This theorem is proven in chapter 11 of . , The following corollary of this theorem is used to prove the main theorem on fibred knots. Corollary 4.6. A fibre bundle over a contractible space is trivial. Proof. Let π : E →B be a fibre bundle with B contractible and with fibre F. Let idB : B →B be the identity and f : B →B a constant map. By the definition of the pullback we find that id⋆ BE = {(b, e) ∈B × E | idB(b) = π(e)} = G b∈B {b} × π−1({b}) ∼ = E. The homeomorphism in the last step is G b∈B {b} × π−1({b}) − →E (b, e) 7− →e. Furthermore, we find that f ⋆E = {(b, e) ∈B × E | f(b) = π(E)} = B × π−1(f(B)) ∼ = B × F. By theorem 4.5 we find that there is a homeomorphism ψ : B × F ∼ − →id⋆ BE and that the following diagram commutes: B × F E B B. id′ B◦ψ π idB Therefore we conclude that the fibre bundle π is trivial. , 3. Fibred knots and the commutator subgroup We are ready to define fibred knots. Definition 4.7. Let k be a knot with complement C and S a Seifert surface of k of genus g. The knot k is a fibred knot if there is a fibre bundle π : C →S1 with fibre S. It turns out to be difficult to prove that a knot is fibred. The simplest example of a fibred knot is the unknot. 23 Proposition 4.8. The unknot is a fibred knot. Proof. We construct a fibre bundle. However, we first require to view the unknot in an alternative way. Recall that S3 is the one point compactification of R3. Furthermore, recall that there is a homeomorphism S1 ∼ − →R ∪{∞} called the stereographic projection. The stereographic projection maps every point x of S1 to the point on the real line that intersects the line through x and the north pole of S1. The north pole is mapped to ∞, see figure 4.2. Figure 4.2. The stereographic projection of S1. Therefore there is a knot S1 k , − →S3 equivalent tot the unknot such that Im k = {(0, 0, z) | z ∈R} ∪{∞}(∼ = S1). Therefore the knot complement C = S3 \ Im k is R3 minus the z-axis. A Seifert surface of this unknot is S = {(x, 0, z) | x ∈R+ and z ∈R} ⊂S3. The space C is homeomorphic with R × (C \ {0}). Elements of C are written as elements of R × (C \ {0}) from now on. By writing S1 = {z ∈C | ∥z∥= 1}, the following fibre bundle is constructed: π : C − →S1 (x, z) 7− → z ∥z∥. 
In addition, there is a homeomorphism φ : C ∼ − − →S1 × S (x, z) 7− →  z ∥z∥, (∥z∥, 0, x)  . This homeomorphism leads to a global trivialisation (S1, φ) because the fol-lowing diagram commutes: π−1(S1) S1 × S S1 φ π , In the previous section, we prepared the proof of the main theorem on fibred knots, presented below. 24 Theorem 4.9. Let k be a fibred knot with complement C and as fibre a Seifert surface of genus g. Then [π1(C), π1(C)] ∼ = F2g. Proof. Let k be a fibred knot with complement C. Let S be a Seifert surface of genus g and π : C →S1 a fibre bundle with fibre S. Consider the universal covering u : R →S1 of S1 given by the quotient R →R/Z. Since Z acts even on R, the universal cover u is a regular Z-covering. By proposition 4.3 u is a fibre bundle. By taking the pullback along both of these fibre bundles, we obtain the spaces u⋆C = {(r, c) ∈R × C | u(r) = π(c)} and π⋆R = {(c, r) ∈C × R | π(c) = u(r)}. It is clear that u⋆C ∼ = π⋆R. The pullback fibre u⋆C →R is a fibre bundle with fibre S. Since R is contractible, the bundle is trivial by corollary 4.6. So u⋆C ∼ = R × S. In addition, the pullback fibre π⋆R →C is a Z-covering by proposition 4.3 as Z has the discrete topology. Since π⋆R ∼ = R × S is connected, the fibre is a regular Z-covering of C. This means that π⋆R is the infinite cyclic cover space of C. Consequently, the infinite cyclic cover is homeomorphic with R × S, so homotopy-equivalent with S. By proposition 3.5 and corollary 2.9 we find [π1(C), π1(C)] ∼ = F2g, concluding the proof. , A remarkable fact about fibred knots, is that the previous theorem has a converse. Said converse is not proven in this thesis, but a proof can be found in . Below you can find the theorem in its stronger form. Theorem 4.10. Let k be a knot of genus g with complement C. The following are equivalent: (1) The knot k is fibred; (2) The commutator subgroup [π1(C), π1(C)] is finitely generated; (3) The commutator subgroup [π1(C), π1(C)] is isomorphic with F2g. 25 CHAPTER 5 A different way to study the knot complement In this chapter, we present a space called the knot tube that is homotopy-equivalent to the knot complement. Therefore we can study the fundamental group of the knot tube, rather than that of the knot complement. Computing this fundamental group is still a difficult task, but studying the infinite cyclic covering of the knot tube is significantly easier. In the first two sections, we construct the knot tube. In the following two sections, we construct the infinite cyclic covering of the knot tube and compute its first homology group. 1. The metro station Before constructing the knot tube, we need to define the following space, which is used to construct the knot tube in the following section. Definition 5.1. Let X1, X2 and X3 be three homeomorphic copies of the square [0, 1] × [0, 1] with homeomorphisms fi : [0, 1] × [0, 1] ∼ − →Xi for i ∈{1, 2, 3}. The metro station is obtained from the disjoint union X1 ⊔X2 ⊔X3 by identifying the corresponding points in the following sets: (1) Identify f1([0, 1] × {0}) with f2([0, 1] × {0})) and f1([0, 1] × {1}) with f2([0, 1] × {1}); (2) identify f3({0} × [0, 1]) with f2({0} × [0, 1]) and f3({1} × [0, 1]) with f2({1} × [0, 1]). A drawing of the metro station can be found in figure 5.1. Figure 5.1. The metro station. Only the boundary is drawn of the middle square X2. 26 By means of Mayer-Vietoris (theorem 3.1), the first homology group of the metro station can be found. Proposition 5.2. Let M be the metro station, then H1(M) ∼ = Z2. Proof. 
Let M be the metro station as constructed in definition 5.1. Since M is path connected we have H0(M) ∼ = Z. Let U = M \ (f2([0, 1] × {0}) ∪f2({0} × [0, 1]) and V = M \ (f2([0, 1] × {1}) ∪f2({1} × [0, 1])). Using figure 5.1 it can be verified that U and V are contractible, and that U ∩V is homotopy-equivalent to the discrete space with three points. Applying Mayer-Vietoris with this decomposition yields the following exact sequence: 0 H1(M) Z3 Z2 Z 0 From this exact sequence we deduce that H1(M) ∼ = Z2. , 2. The knot complement as a tube In this section, we display a new way of viewing the knot complement. Under homotopy-equivalence, many parts of the knot complement can be re-tracted. The resulting space, called the knot tube, is obtained by gluing to-gether metro stations from definition 5.1. Before constructing this space, we need two definitions from graph theory, that the reader may be unfamiliar with. Definition 5.3. Given a compact graph embedded in a 2-manifold, a face of the graph is a connected component of the complement of the graph. Definition 5.4. Consider a compact and connected graph that is embed-ded in a 2-manifold. The dual graph of this graph is the graph that has a vertex in each face and an edge between every pair of vertices of which the connected components share an edge. See figure 5.2. We are now ready to construct the knot tube. Let k be a knot that passes through infinity (this means that ∞∈k when viewing S3 as R3 ∪{∞}). It is intuitive that any knot is equivalent to such a knot. While constructing the space, we use the trefoil knot as example. The trefoil knot can be viewed as a knot that passes through infinity as in figure 5.3a. There is a natural way to view a knot as a graph in S2, by seeing each crossing as a vertex and the lines connecting the crossings as edges. Consider the dual graph of the knot graph, as seen in figure 5.3b. If k has crossings c1, c2, . . . , cn, then the dual graph has n + 1 faces. With the exception of the outer face, each face contains one of the crossings of k. Furthermore, the four ends of these crossings each go to one of the four edges of the face. At each 27 Figure 5.2. The red graph is the dual graph of the blue graph. (a) The trefoil knot as a knot that passes through infinity. (b) The dual graph of the trefoil graph, given in red. Figure 5.3. A different way to view the trefoil knot. crossing ci, i ∈{1, 2, . . . , n}, let Fi be the face of the dual graph containing ci. This face is homeomorphic with the square [0, 1] × [0, 1]. Let M be the metro station from definition 5.1 constructed by gluing to-gether the squares X1, X2 and X3 with homeomorphisms fj : [0, 1]×[0, 1] ∼ − →Xj (j ∈{1, 2, 3}). Let M1, M2, . . . , Mn be homeomorphic copies of the metro sta-tion M with homeomorphisms hi : Mi ∼ − →M. We now make identifications in the disjoint union Fn i=1 Mi. For every pair of faces Fi and Fk that are connected to each other (i.e. next to each other), identify the corresponding points in Mi and Mk given in figure 5.4 in a natural way. The space obtained from this identification process is called the knot tube. Making the necessary identifications to the metro stations of the trefoil knot 28 Identify hi(f2([0, 1] × {1}) with hk(f1({0} × [0, 1]); identify hi(f3([0, 1] × {1}) with hk(f2({0} × [0, 1]). Identify hi(f2([0, 1] × {1}) with hk(f2([0, 1] × {0}); identify hi(f3([0, 1] × {1}) with hk(f3([0, 1] × {0}). Identify hi(f1({1} × [0, 1]) with hk(f2([0, 1] × {0}); identify hi(f2({1} × [0, 1]) with hk(f3([0, 1] × {0}). 
Identify hi(f1({1} × [0, 1]) with hk(f2({1} × [0, 1]); identify hi(f2({0} × [0, 1]) with hk(f3({0} × [0, 1]). Figure 5.4. The identification process of the metro stations. yields the space that can be found in figure 5.5. In this figure, all the adja-cent metro stations are connected by the identifications shown in figure 5.4. Furthermore, the trefoil knot is still drawn in this figure as a visual aid, but the knot itself is not a part of the knot tube. It should also be noted that the top and bottom sides of the top and bottom metro stations are also connected together, as their corresponding crossings are connected through the point at infinity. 3. Homotopy-equivalence of the knot complement and the knot tube As mentioned in previous sections, the knot tube is homotopy-equivalent to the knot complement. This section seeks to provide an explicit homotopy-equivalence between these two spaces. 29 Figure 5.5. The knot tube of the trefoil knot. However, before constructing the homotopy-equivalence, we introduce the following theorem that is used to create a decomposition of the knot comple-ment. Theorem 5.5 (Alexander-Schoenflies). Let i : S2 , →S3 be a piecewise linear embedding. Then there are closed balls B1 and B2 such that S3 = B1 ∪B2 and i(S2) = B1 ∩B2 = ∂B1 = ∂B2. Proof. A proof of this theorem can be found in . , The term piecewise linear may be unfamiliar to the reader. This thesis does not provide an explanation of this, but a good explanation can be found in . Nevertheless, the requirement that the embedding S2 , →S3 is piecewise linear does not hamper any of the arguments presented here. The Alexander-Schoenflies theorem provides us with a new way to view the 3-sphere, namely as two closed balls whose boundaries are identified with each other. This decomposition of S3 is used to construct the homotopy-equivalence. Let k be a knot that passes through infinity and let C = S3 \ k be its complement. As before, the trefoil is used as example and passes through infinity as shown in figure 5.3a. The knot k can be separated into two parts: the knotted part which has all the crossings in it, and the line through infinity that connects the knotted part to itself. Let B1 be a closed ball with a line taken out of it; let B2 ⊂C be a closed ball around the knotted part of k. See figure 5.6. The knot complement C can be constructed by identifying the the boundaries of the closed balls at the corresponding points, making sure that the knotted parts are connected together. 30 (a) The closed ball with a line taken out, denoted B1. (b) The closed ball around the knotted part of k, denoted B2. Figure 5.6. The closed balls B1 and B2. The missing line from B1 can be ‘thickened up’ under homotopy-equivalence to attain the space in figure 5.7a. As a result, gluing together B1 and B2 yields the space given in figure 5.7b. This space is homotopy-equivalent, even homeomorphic, with B2. (a) The thickened up line in B1. (b) B1 and B2 glued together. Figure 5.7. The thickening and identification. In summary, the knot complement is homotopy-equivalent to B2. It’s time to introduce the dual graph used to construct the knot tube. When construct-ing the dual graph, we make sure that the vertices corresponding to the outer faces of the graph are placed on the boundary of B2 and that the edges con-necting these vertices are also on the boundary of B2. In this manner, the 31 boundary of the ball in figure 5.6b corresponds with the outer edges of the dual graph in figure 5.3b. 
To contract B2 to the knot tube, first recall that the punctured disc D2{∗} can be retracted to the circle S1. At each of the faces of the dual graph, B2 is a closed ball with two lines taken out of it that together form a crossing, see 5.8. By splitting this ball up into an upper and lower half, separated Figure 5.8. B2 in a neighbourhood of each face. by the face of the dual graph, we obtain two spaces homeomorphic with the punctured solid cylinder [0, 1]×D2 {∗}. Contracting both halves to cylinders (so homeomorphic with [0, 1] × S1) gives us the upper and lower part of the metro station from figure 5.1. Applying this process to all the faces of the dual graph shows that the B2 is homotopy-equivalent to the knot tube, completing the proof. 4. The cyclic covering of the metro station The metro station M admits an infinite cyclic covering. It can be con-structed as follows: For all i ∈Z, let Xi, Yi and Zi be homeomorphic copies of the square [0, 1] × [0, 1] with homeomorphisms fi : [0, 1] × [0, 1] ∼ − − →Xi gi : [0, 1] × [0, 1] ∼ − − →Yi hi : [0, 1] × [0, 1] ∼ − − →Zi respectively. The infinite cyclic covering space C is obtained from the disjoint union (F i∈Z Xi)⊔(F i∈Z Yi)⊔(F i∈Z Zi) by identifying the corresponding points in the following sets: (1) Identify gi([0, 1] × {0}) with fi([0, 1] × {0}) and gi([0, 1] × {1}) with fi+1([0, 1] × {1}); (2) identify hi({0} × [0, 1]) with fi({0} × [0, 1]) and hi({1} × [0, 1]) with fi+1({1} × [0, 1]). Let ˜ X1, ˜ X2 and ˜ X3 be the squares from definition 5.1 with homeomorphisms ˜ fj : [0, 1] × [0, 1] ∼ − →˜ Xj (j ∈{1, 2, 3}). The covering map p : C →M is given 32 by p : C − →M x 7− →    ˜ f2(f −1 i (x)) if x ∈Xi ˜ f1(g−1 i (x)) if x ∈Yi ˜ f3(h−1 i (x)) if x ∈Zi. Consider the map t : C ∼ − − →C x 7− →    fi+1(f −1 i (x)) if x ∈Xi gi+1(g−1 i (x)) if x ∈Yi hi+1(h−1 i (x)) if x ∈Zi. The map t sends elements of Xi, Yi and Zi to the corresponding elements in Xi+1, Yi+1 and Zi+1 respectively. This induces a natural even Z-action on C that is compatible with p. Therefore p is a Z-covering, as was required. The infinite cyclic cover is a double staircase, at each level Xi, you can ‘move up’ to Xi+1 via Yi or Zi. Stepping down is done similarly. These steps up and down are the lifts of the two loops in the metro station. One can think of the infinite cyclic covering of the metro stations as ‘fold-ing open’ the upper and lower squares of infinitely many metro stations and connecting them together in such a way to form a staircase in two directions. To visualise this, one can consider two staircases as given on the left in figure Figure 5.9. The infinite cyclic cover as two merged staircases 5.9 and then merge them together by identifying the corresponding points in the black squares. This way we obtain the space given on the right in figure 5.9. Only the boundary of the covering is drawn in this figure to improve clarity. The squares Xi are given in black, Yi in red and Zi in blue. It appears as if the blue and red squares intersect, but topologically they don’t. 33 Proposition 5.6. Let C be the infinite cyclic cover of the metro station, then H1(C) ∼ = M i∈Z Z. Proof. This proposition is proven using Mayer-Vietoris. Let U be a small open neighbourhood of C \ (S i∈Z Yi) and V a small open neighbourhood of C (S i∈Z Zi), such that U and V are homotopy-equivalent to C (S i∈Z Yi) and C \ (S i∈Z Zi) respectively. Both of these spaces are infinite (single) staircases, thus contractible. 
The intersection U ∩V is homotopy-equivalent to S i∈Z Xi, which in turn is homotopy-equivalent to the countable discrete space D. Using that H0(D) ∼ = L i∈Z Z, Mayer-Vietoris provides the exact sequence 0 H1(C) L i∈Z Z Z2 Z 0. From this sequence we conclude that H1(C) ∼ = L i∈Z Z. , 5. The cyclic covering of the knot tube Since the knot tube is homotopy-equivalent with the knot complement, the knot tube has a unique (regular) infinite cyclic covering. The goal of this chapter is to construct this space and find its first homology group. This done by gluing G-coverings. The result we are proving is the following: Theorem 5.7. Let C be the infinite cyclic cover of a knot tube, then H1(C) ∼ = M i∈Z Z. Recall from algebraic topology that if you glue together two spaces with G-coverings, then this induces a natural gluing map on the covering spaces such that we obtain a new G-covering of the glued space. This gluing of G-coverings does require the glued spaces to both be con-nected, locally path-connected, and semi-locally simply connected. In addition, the G-covering needs to be regular. More information in gluing G-coverings can be found in chapter 14 of . Another tool we require to prove this theorem is the following lemma that may be familiar to the reader. Lemma 5.8. Let the following sequence of five groups be exact: A B C D E. f g h i Then this induces a short exact sequence of groups: 0 B/Im f C ker i 0. g h Proof. The proof of this lemma is a straightforward application of the definition of an exact sequence. , 34 The knot tube is constructed by gluing together metro stations. The metro station and its infinite cyclic covering satisfy the requirements to apply the gluing of G-coverings, so we can glue together the infinite cyclic covering of the metro station, to obtain the infinite cyclic covering of the knot tube. Figure 5.10. The trefoil knot with two numbered edges. The infinite cyclic covering of the knot tube is constructed in steps. The metro stations are connected one edge at a time, and after each step we keep track of what happens to the infinite cyclic cover and its homology. To compute the first homology group we use Mayer-Vietoris (theorem 3.1). To further clarify the steps for the reader, the infinite cyclic cover of the trefoil knot is constructed here. In particular, the trefoil knot as a knot that passes through infinity, as shown in figure 5.10. The construction is analogous for all other knots, but the process is easier to visualise when using an example. To further clarify the process, a drawing is presented of the cyclic cover at each step. Only two full layers of the cyclic cover are drawn to make it easier to visualise the identifications. Two of the ‘edges’ between the crossing in figure 5.10 are numbered. Our proof commences by making the identifications required to connect edge 1. The resulting space is denoted C1. The required identifications are shown in figure 5.11. Note that the bottom crossing is displayed on the left in figure 5.11 and the top crossing on the right. The first homology group of C1 can easily be computed using Mayer-Vietoris. Let U be a small open neighbourhood around the left cover (meaning that it includes a small open around the identification line in the right cover), and let V be an open neighbourhood around the right cover. Then U and V are each homotopy-equivalent to the cyclic cover of the metro station, so H1(U) ⊕H1(V ) ∼ = M i∈Z Z ! ⊕ M i∈Z Z ! ∼ = M i∈Z Z. 35 Figure 5.11. The glued cyclic covers, denoted C1. 
Furthermore, the intersection U ∩V is homeomorphic with R×(0, 1) thus con-tractible. Applying Mayer-Vietoris with these opens yields the exact sequence 0 L i∈Z Z H1(C1) Z Z2 Z 0, from which we deduce that H1(C1) ∼ = L i∈Z Z. Secondly, the identifications required for the edge labelled 2 in figure 5.10 are made. The resulting space is denoted C2. These identifications can be seen in figure 5.12. Again, the first homology group of C2 can be computed using Mayer-Vietoris. Let U be C2 minus the identification line (this is open as the identification line is closed), and let V be a small open neighbourhood around the identification line. Then U is homotopy-equivalent to C1 and V is homeomorphic with R × (0, 1) hence contractible. The intersection U ∩V is homeomorphic with R × (0, 1) ⊔R × (0, 1), so U ∩V is homotopy-equivalent with the discrete two-point space. The Mayer-Vietoris sequence of this decom-position is as follows: 0 L i∈Z Z H1(C3) Z2 Z2 Z 0. f g By exactness, we get Im g ∼ = Z and hence ker g ∼ = Z (by the homomorphism theorem). Also by exactness, deduce that Im f ∼ = Z and therefore ker f ∼ = Z. Now apply lemma 5.8 to the first five groups of the sequence to obtain the 36 Figure 5.12. The glued cyclic covers, denoted C2. following short exact sequence: 0 L i∈Z Z H1(C3) Z 0. All the groups in this sequence are abelian, and Z is a free abelian group. Therefore the sequence splits, and hence H1(C3) ∼ = L i∈Z Z  ⊕Z ∼ = L i∈Z Z. For the remaining four edges of the trefoil knot, the proof that the first homology group remains L i∈Z Z is analogous. We are either connecting two disjoint covers, in which case we can use the proof used for edge 1 above; or there is a connecting of a cover to itself, in which case the proof used for edge 2 can be used. In fact, this proof can be applied to any knot. Since knots are assumed to be tame, there is a finite number of identifications to be made and after every one of them the proofs above can be used to show that the first homology group is still L i∈Z Z. 6. The homology of the cyclic cover as Z[t±]-module In the previous section, it was shown that the first homology group of the infinite cyclic cover of the knot tube is L i∈Z Z. Despite being an invariant of the knot, this is a trivial invariant and therefore not very interesting. However, this group does have its uses in knot theory. By turning the first homology group into a module, a well-known invariant called the Alexander polynomial can be created. This invariant is not studied in this thesis, but a comprehensive overview can be found in . Instead, we look at how the first homology group could be turned into such a module. Let C∞be the infinite cyclic covering of the knot tube given in the pre-vious section. Furthermore, consider the homeomorphism t : C∞ ∼ − →C∞that sends an element to the corresponding element one layer higher in the cyclic covering. Furthermore, consider the ring of Laurent polynomials Z[t±], which 37 is isomorphic with the polynomial ring Z[x, y]/(xy −1). Then the induced isomorphism t⋆: H1(C∞) ∼ − →H1(C∞) provides a natural way to make H1(C∞) a Z[t±]-module. When using the infinite cyclic cover as constructed in section 4 of chapter 3, it is difficult to say something about t⋆. The idea behind the infinite cyclic cover constructed in the previous section, is that it is easier to see how t⋆ moves the generators of H1(C∞). 
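For reference, the module structure just described can be written out explicitly. The block below only restates the standard construction in terms of the map t⋆ and the ring isomorphism mentioned above; it introduces nothing beyond what is already in the text.

```latex
% Z[t^{\pm}]-module structure on H_1(C_\infty) induced by t_\star.
% The ring isomorphism sends x to t and y to t^{-1}:
\[
  \mathbb{Z}[x,y]/(xy-1)\;\xrightarrow{\ \sim\ }\;\mathbb{Z}[t^{\pm}],
  \qquad x\mapsto t,\quad y\mapsto t^{-1}.
\]
% A Laurent polynomial acts on a homology class through powers of t_\star:
\[
  \Bigl(\sum_{i\in\mathbb{Z}} a_i\,t^{\,i}\Bigr)\cdot\alpha
  \;:=\;\sum_{i\in\mathbb{Z}} a_i\,t_\star^{\,i}(\alpha),
  \qquad \alpha\in H_1(C_\infty),\ a_i\in\mathbb{Z}\ \text{(almost all zero)},
\]
% where t_\star^{-1}=(t^{-1})_\star, since t is a homeomorphism.
```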
However, despite the infinite cyclic cover being easier to visualise, it is still difficult to say what happens to any of the generators when applying t⋆. Consequently, the knot tube has not yet proven itself to be very useful. There is also no existing literature on the knot tube, so no inspiration can be taken from that. In conclusion, the knot tube would have to be studied more thoroughly in order to make sense of what happens when applying t⋆. 38 Conclusion This thesis covers a wide variety of topics in knot theory. The main results of the thesis are briefly summarised in this conclusion. To start of, a constructive proof of the existence of Seifert surfaces is given. Then, we introduce the concept of the genus of a surface and use this to compute the fundamental group of a Seifert surface. Secondly, a computation of the first homology group of the knot comple-ment leads to a proof that the knot complement has a unique infinite cyclic covering. This infinite cyclic covering is then constructed by cutting the knot complement along a Seifert surface and stacking infinitely many copies of this space on top of each other. Thirdly, this thesis contains an introduction to fibre bundles and basic theorems concerning them. Then this is used to find the commutator subgroup of the fundamental group of the knot complement in case the knot omits a fibre bundle to the circle. To finish of the thesis, we provide a new way to look at the knot comple-ment, up to homotopy-equivalence. This space is called the knot tube and is constructed by gluing together so-called metro stations. By constructing the infinite cyclic covering of the knot complement, we can compute the first homology group of the infinite cyclic coverings. This turns out to be L i∈Z Z for all knots and is therefore a trivial invariant. In the future, more research could be done on the knot tube. To be specific, more attempts could be made at properly describing the map t⋆: H1(C∞) → H1(C∞) so that in turn H1(C∞) could be described as a Z[t±]-module. In theory, this should lead to a different way to find the Alexander polynomial of a knot. This new method may be useful when computing the Alexander polynomial of large knots. 39 Bibliography J. W. Alexander. “On the Subdivision of 3-Space by a Polyhedron”. In: Proceedings of the National Academy of Sciences 10.1 (1924), pp. 6–8. doi: 10.1073/pnas.10.1.6. eprint: 10.1073/pnas.10.1.6. url: 1073/pnas.10.1.6. G. Burde and H. Zieschang. Knots. 3rd ed. De Gruyter, 2013. isbn: 9783110270785. doi: doi:10.1515/9783110270785. url: https:// doi.org/10.1515/9783110270785. X. Cheng et al. “Ear decomposition of 3-regular polyhedral links with applications”. In: Journal of Theoretical Biology 359 (2014), pp. 146– 154. issn: 0022-5193. doi: 06.009. url: pii/S0022519314003464. W. Fulton. Algebraic Topology. 1st ed. Graduate Texts in Mathematics. Springer New York, NY, 1995. isbn: 978-0-387-94327-5. doi: https : //doi.org/10.1007/978-1-4612-4180-5. C.M. Gordon and J. Luecke. “Knots are determined by their comple-ment”. In: Journal of the American Mathematical Society 2.2 (Apr. 1989). J.M. Lee. Introduction to Topological Manifolds. 2nd ed. Graduate Texts in Mathematics. Springer New York, NY, 2011. isbn: 978-1-4419-7939-1. doi: C. Ray. Introduction to fiber bundles. introduction-to-fiber-bundles-eb8e1157ec22. Online; accesed 16-june-2022. Nov. 2014. C.P. Rourke and B.J. Sanderson. Introduction to Piecewise-Linear Topol-ogy. Springer-Verlag, 1972. J.R. Stallings. “On fibering certain 3-manifolds”. 
In: Topology of 3-manifolds and related topics (1961), pp. 95–100. N. Steenrod. The Topology of Fibre Bundles. (PMS-14). Princeton University Press, 1951. isbn: 9780691005485. url: stable/j.ctt1bpm9t5 (visited on 06/16/2022).
Published Time: 2025 Optimum Statistical Analysis on Sphere Surface | SpringerLink

Optimum Statistical Analysis on Sphere Surface
Christos Kitsos and Stavros Fatouros
Chapter in: Geometry and Non-Convex Optimization, Springer Optimization and Its Applications (SOIA, volume 223), Springer, Cham. First Online: 24 July 2025, pp. 183–201.

Abstract
The idea of a "triangulus sphaericus", that is, a spherical triangle, so essential in astronomy and geodesy was introduced by Johannes Werner (1468–1528), who replaced the term "triangulus ex arcubus ciculorrum magnorum", i.e. a triangle formed from arcs of maximum cycles. The latin term "trilaterum" was adopted from Pappus (290–350), who used the term "three sided on a sphere". Therefore, it is obvious that since the early times, the sphere attracted interest, mainly due to spherical astronomy as celestial coordinate system and time. Observation of celestial objects provided food for thought for astrological purposes related to navigation, as well as defining the time keeping of that (see also Appendix 3).

References
S.A. Boyd, L.A. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004) Google Scholar E.A. Breitenberger, Analogues of the normal distribution on the circle and the sphere. Biometrika 50(1/2), 81–88 (1963) MathSciNetGoogle Scholar C.A. Caratheodory, Uber den Variabilitatsbereich der Koeffizienten von Potenzreihen, die gegebene Werte nicht annehmen. Math. Ann. 64(1), 95–115 (1907) MathSciNetGoogle Scholar A.A. Chadwick, S.A. Ilic, J.
Helm-Petersen, An evaluation of directional analysis techniques for multidirectional, partially reflected waves part 2: application to field data. J. Hydraulic Res. 38(4), 253–258 (2000) Google Scholar T.A. Chang, Spherical regression. Ann. Stat. 14(3), 907–924 (1986) MathSciNetGoogle Scholar T.D. Downs, Spherical regression. Biometrika 90(3), 655–668 (2003) MathSciNetGoogle Scholar I.L. Dryden, Statistical analysis on high-dimensional spheres and shape spaces. Ann. Stat. 33(4), 1643–1665 (2005) MathSciNetGoogle Scholar W.A. Feller, An Introduction to Probability Theory and its Applications (1) (Wiley, Hoboken, 1968) Google Scholar R.A. Fisher, Dispersion on a sphere. Proc. R. Soc. London Ser. A. Math. Phys. Sci. 217(1130), 295–305 (1953) MathSciNetGoogle Scholar D.A.S. Fraser, The Structure of Inference (Wiley & Sons, Hoboken, 1968) Google Scholar R.A. Gatto, The von Mises–Fisher distribution of the first exit point from the hypersphere of the drifted Brownian motion and the density of the first exit time. Stat. Probab. Lett. 83(7), 1669–1676 (2013) MathSciNetGoogle Scholar R.A. Gatto, S.R. Jammalamadaka, The generalized von Mises distribution. Stat. Methodol. 4(3), 341–353 (2007) MathSciNetGoogle Scholar A.A. Gidskehaug, Statistics on a sphere. Geophys. J. Int. 45(3), 657–676 (1976) Google Scholar G.A. Hon, On Kepler’s awareness of the problem of experimental error. Ann. Sci. 44(6), 545–591 (1987) MathSciNetGoogle Scholar P.E. Jupp, K.V. Mardia, Maximum likelihood estimators for the matrix von Mises-Fisher and Bingham distributions. Ann. Stat. 7(3), 599–606 (1979) MathSciNetGoogle Scholar J.T. Kent, The Fisher-Bingham distribution on the sphere. J. R. Stat. Soc. Ser. B (Methodol.) 44(1), 71–80 (1982) Google Scholar C.P. Kitsos, Technological Mathematics and Statistics (In Greek) (New Tech. Pub., Athens, 2009) Google Scholar C.P. Kitsos, The Geometry of Greeks (In Greek) (New Tech. Pub., Athens, 2021) Google Scholar C.P. Kitsos, S.A. Fatouros, Geometry in quantitative methods and applications, in Analysis, Geometry, Nonlinear Optimization and Applications (World Scientific Publ. Co., Singapore, 2022) Google Scholar C.P. Kitsos, P.A. Iliopoulou, Adopting information distance measures for geographical data. J. Reg. Econ. Issues 12(1), 6–26 (2022) Google Scholar C.P. Kitsos, A.A. Oliveira, Asymptotic statistical results: theory and practice, in Computational Mathematics and Variational Analysis (Springer, Cham, 2020), pp. 177–190 Google Scholar O.A. Lahav, P.B. Lilje, J.R. Primack, M.J. Rees, Dynamical effects of the cosmological constant. Mon. Not. R. Astron. Soc. 251(1), 128–136 (1991) Google Scholar M.P. Langevin, Magnétisme et théorie des électrons. Ann. Chim. Phys. 8(5), 68–125 (1905) Google Scholar P.A. Leong, S.A. Carlile, Methods for spherical data analysis and visualization. J. Neurosci. Methods 80(2), 191–200 (1998) Google Scholar K.V. Mardia, Distribution theory for the von Mises-Fisher distribution and its application, in A Modern Course on Statistical Distributions in Scientific Work (Springer, Dordrecht, 1975), pp. 113–130 Google Scholar K.V. Mardia, Statistics of directional data. J. R. Stat. Soc. Ser. B Methodol. 37(3), 349–371 (1975) MathSciNetGoogle Scholar K.V. Mardia, P.E. Jupp, Directional Statistics (John Wiley & Sons, Hoboken, 2009) Google Scholar P.J.E. Peebles The Large-Scale Structure of the Universe (Princeton University Press, Princeton, 2020) Google Scholar N.A. 
Prakash, Differential Geometry an Integrated Approach (Tata McGraw-Hill, New Delhi, 1981) Google Scholar R.M. Robinson, Note on convex regions on the sphere. Bull. Am. Math. Soc. 44(2), 115–116 (1938) MathSciNetGoogle Scholar R.T. Rockafellar, Convex Analysis (Princeton University Press, Princeton, 2015) Google Scholar L.A. Santalo, Convex regions on the n-dimensional spherical surface. Ann. Math. 47, 448–459 (1946) MathSciNetGoogle Scholar M.J. Schervish, Theory of Statistics (Springer Science & Business Media, Berlin, 2012) Google Scholar S.A. Silvey, Optimal Design: An Introduction to the Theory for Parameter Estimation, vol. 1 (Springer Science & Business Media, Berlin, 2013) Google Scholar W.M. Smart, W.M. Smart, R.M. Green, Textbook on Spherical Astronomy (Cambridge University Press, Cambridge, 1977) Google Scholar K.A. Tapp, Differential Geometry of Curves and Surfaces (Springer, Berlin, 2016) Google Scholar F.A. Wang, A.E. Gelfand, Directional data analysis under the general projected normal distribution. Stat. Methodol. 10(1), 113–127 (2013) MathSciNetGoogle Scholar G.S. Watson, Analysis of dispersion on a sphere. Geophys. Suppl. Mon. Not. R. Astron. Soc. 7(4), 153–159 (1956) MathSciNetGoogle Scholar G.S. Watson, More significance tests on the sphere. Biometrika 47(1/2), 87–91 (1960) MathSciNetGoogle Scholar G.S. Watson, Equatorial distributions on a sphere. Biometrika 52(1/2), 193–201 (1965) MathSciNetGoogle Scholar G.S. Watson, E.J. Williams, On the construction of significance tests on the circle and the sphere. Biometrika 43(3/4), 344–352 (1956) MathSciNetGoogle Scholar

Author information: Christos Kitsos and Stavros Fatouros, School of Engineering, University of West Attica, Egaleo, Greece. Corresponding author: Christos Kitsos. Editors: Panos M. Pardalos (Department of Industrial and Systems Engineering, University of Florida, Gainesville, FL, USA) and Themistocles M. Rassias (Mathematics, National Technical University of Athens, Athens, Attiki, Greece).

Appendices

Appendix 1: The Sphere as a Manifold

Consider the sphere
$$S^2:\quad x_1^2+x_2^2+x_3^2=\text{const}.$$
For all $p\in S^2$, there exists a neighbourhood of $p$ with radius $r$, $N_p(r)$, and a map $f$ such that (Fig. 3)
$$f:N_p(r)\underset{\text{onto}}{\overset{1-1}{\longrightarrow}} D\subseteq\mathbb{R}^2.$$
The map $f$ does not preserve lengths or angles or any geometrical measure. Considering the spherical coordinates $(\theta,\varphi)\in[0,\pi]\times[0,2\pi]\subseteq\mathbb{R}^2$, two problems appear, where there is no map:
(i) At the pole $\theta=0$, one point is "mapped" to the whole line $x_1=0$, $x_2\in[0,2\pi]$.
(ii) The points with $\varphi=0$ are "mapped" to the whole lines $x_1=0$ or $x_2=0$ or $x_2=2\pi$.
Therefore, we restrict the map to the open region $(0,\pi)\times(0,2\pi)$. In that case, the "outlier" points are the two poles and the semicircle $\varphi=0$ joining them, which are not considered in this map (see also Fig. 4). A new function can be considered for these points; the line $\varphi=0$ is the equator of the first system for $\varphi\in[\tfrac{\pi}{2},\tfrac{3\pi}{2}]$. The overlap problem of this function does not influence the conclusion; $S^2$ is a manifold.

Note: In some cases a better map of $S^2$ onto a region of $\mathbb{R}^2$ is the stereographic map.

Appendix 2: Notes on $SO(3)$

Recall that the manifold of all rotations $SO(3)$ is different from the manifold whose coordinate systems are rotated. For the orthogonal group in $n$ dimensions $O(n)$ and a matrix $A$, it holds that if $A\in O(n)$ then $\det(A)=\pm 1$. Therefore, the matrices with $\det(A)=+1$ form a subgroup $SO(n)$, the special orthogonal group, which is the group of rotations. The canonical form of a matrix $A$ in $O(n)$ is achieved by the transformation $B^{-1}AB$, $B\in O(n)$, and consists of blocks with ones $(+1)$ or minus ones $(-1)$ or a $2\times 2$ rotation submatrix $Q$ with
$$Q=\begin{pmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{pmatrix}.$$
For example, consider the unit sphere $S^2$ given by $x^2+y^2+z^2=1$ in $\mathbb{R}^3$. If it is rotated by an angle $\theta$ about the $x$-axis, then the new coordinates are
$$x_N=\begin{pmatrix}x'\\ y'\\ z'\end{pmatrix}
=\begin{pmatrix}x\\ y\cos\theta - z\sin\theta\\ y\sin\theta + z\cos\theta\end{pmatrix}
=\begin{pmatrix}1 & 0 & 0\\ 0 & \cos\theta & -\sin\theta\\ 0 & \sin\theta & \cos\theta\end{pmatrix}
\begin{pmatrix}x\\ y\\ z\end{pmatrix}=Rx.$$
This transformation is associated with the group element $\exp(\theta L_1)$ of $SO(3)$. Since $S^2$ is a manifold (but not a vector space), this transformation offers a realisation of $SO(3)$. The rotation group $SO(n+1)$ combines $SO(n)$ and $S^n$: a unit (magnitude) vector in $(n+1)$-dimensional space may be considered as a point on the generalised sphere $S^n$.

Appendix 3: Equation of Motion of a Spherical Shell

The equation of motion of a spherical shell is (see [22, 28], among others):
$$\ddot{r}=-\frac{GM(r)}{r^2}+\frac{K}{3}r,$$
where $M$ is the mass inside the sphere of radius $r$. We assume that $M$ remains constant during the expansion phase. $G$ is the gravitational constant, and $K$ is the cosmological constant. Thus, we obtain the "energy equation"
$$\frac{1}{2}\dot{r}^2=\frac{GM}{r}+\frac{K}{6}r^2+E,$$
where $E$ is the integration constant with the dimensions of specific energy. Assuming that the initial time is $t_i$, the radius of the shell is $r_i$ and its relative excess of mass is
$$D_i=\frac{3}{r_i^3}\int_{0}^{r_i}x^2\delta_i(x)\,dx,$$
then the energy equation can be written as
$$\frac{ds}{dt}=H_i\,g^{1/2}(s),$$
with $g$ a given function and $s=r/r_i$. Following [22, 28], for a flat universe we come across the cubic equation
$$c_1s^3+c_2s^2+c_3=0 \qquad (41)$$
with $c_i$, $i=1,2,3$, defined constants, due to the spherical shell dynamics theory for a flat universe with cosmological constant. The interesting point is that we face a cubic equation with such a particular cosmological interest.
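Before turning to the cubic itself, a quick numerical aside: integrating the shell equation above and monitoring the quantity ½ṙ² − GM/r − (K/6)r² gives a direct check of the energy equation. The sketch below does this with a classical RK4 step; the constants G = M = 1, K = 0.1 and the initial data are arbitrary illustrative values, not taken from the chapter.

```python
# Minimal sketch: integrate  r'' = -G*M/r**2 + (K/3)*r  with classical RK4
# and verify that E = 0.5*v**2 - G*M/r - (K/6)*r**2 stays (numerically) constant.
G, M, K = 1.0, 1.0, 0.1          # illustrative constants, not from the text

def accel(r):
    return -G * M / r**2 + (K / 3.0) * r

def rk4_step(r, v, dt):
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
    k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
    k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
    r += dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6.0
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return r, v

def energy(r, v):
    return 0.5 * v**2 - G * M / r - (K / 6.0) * r**2

r, v, dt = 1.0, 1.5, 1e-3        # initial radius and expansion speed (unbound case)
E0 = energy(r, v)
for _ in range(5000):
    r, v = rk4_step(r, v, dt)
print("energy drift:", energy(r, v) - E0)   # should be ~0 up to integration error
```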
Recall that for the complete cubic equation defined as
$$p_3(x)=\alpha x^3+\beta x^2+\gamma x+\delta,\qquad \alpha\neq 0,$$
there are two discriminants, namely
$$D_1=\beta^2-3\alpha\gamma,$$
$$D_2=4\alpha\gamma^3+4\delta\beta^3+27\alpha^2\delta^2-\beta^2\gamma^2-18\alpha\beta\gamma\delta,$$
and when
$$D_1>0,\ D_2<0\text{, there are }x_1,x_2,x_3\in\mathbb{R}:\ p_3(x_i)=0,\ i=1,2,3.$$
Moreover, if
$$D_1>0\ \text{and}\ D_2=0\Leftrightarrow x_1=x_2\in\mathbb{R},\ x_3\in\mathbb{R},$$
$$D_1=0\ \text{and}\ D_2=0\Leftrightarrow x_1=x_2=x_3\in\mathbb{R},$$
otherwise
$$x_1\in\mathbb{R},\ x_2=\bar{x}_3\in\mathbb{C}.$$
Due to this investigation and the Vieta relation, the roots of (41) can be obtained.
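If the roots of (41) are actually needed, the discriminant test above is easy to automate. The following sketch simply transcribes D1, D2 and the case distinctions quoted above; the coefficients at the end are placeholder values, not ones derived in this chapter.

```python
# Classify the real roots of p3(x) = a*x**3 + b*x**2 + c*x + d  (a != 0)
# using the two discriminants D1 and D2 quoted above.
def classify_cubic(a, b, c, d, eps=1e-12):
    D1 = b**2 - 3*a*c
    D2 = 4*a*c**3 + 4*d*b**3 + 27*a**2*d**2 - b**2*c**2 - 18*a*b*c*d
    if D1 > eps and D2 < -eps:
        return D1, D2, "three distinct real roots"
    if D1 > eps and abs(D2) <= eps:
        return D1, D2, "a double real root and a simple real root"
    if abs(D1) <= eps and abs(D2) <= eps:
        return D1, D2, "a triple real root"
    return D1, D2, "one real root and a complex-conjugate pair"

# Placeholder coefficients for an equation of the form (41):
# c1*s^3 + c2*s^2 + c3 = 0, i.e. a cubic with no linear term (gamma = 0).
c1, c2, c3 = 1.0, -3.0, 1.0
print(classify_cubic(c1, c2, 0.0, c3))   # -> three distinct real roots
```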
CHAPTER 7
CALCULATIONS OF CELESTIAL NAVIGATION

FINDING GHA AND DECLINATION

700. Use of the Almanacs

The time used as an entering argument in the almanacs is 12h + Greenwich hour angle of the mean Sun and is denoted by UT. This scale may differ from the broadcast time signals by an amount which, if ignored, will introduce an error of up to 0.2' in longitude determined from astronomical observations. The difference arises because the time argument depends on the variable rate of rotation of the Earth while the broadcast time signals are now based on an atomic time-scale. Step adjustments of exactly one second are made to the time signals as required (primarily at 24h on December 31 and June 30) so that the difference between the time signals and UT, as used in the almanacs, may not exceed 0.9s. Those who require to reduce observations to a precision of better than 1s must therefore obtain the correction to the time signals from coding in the signal, or from other sources. The correction may be applied to each of the times of observation. Alternatively, the longitude, when determined from astronomical observations, may be corrected by the corresponding amount shown in the following table:

| Correction to time signals | Correction to longitude |
| --- | --- |
| -0.7s to -0.9s | 0.2' to east |
| -0.6s to -0.3s | 0.1' to east |
| -0.2s to +0.2s | no correction |
| +0.3s to +0.6s | 0.1' to west |
| +0.7s to +0.9s | 0.2' to west |

The main contents of the almanacs consist of data from which the Greenwich hour angle (GHA) and the declination (Dec.) of all the bodies used for navigation can be obtained for any instant of Universal Time (UT). The local hour angle (LHA) can then be obtained by means of the formula:

LHA = GHA - west longitude, or
LHA = GHA + east longitude

For the Sun, Moon, and the four navigational planets, the GHA and declination are tabulated directly in the Nautical Almanac for each hour of UT throughout the year. For the stars the sidereal hour angle (SHA) is given, and the GHA is obtained from:

GHA Star = GHA Aries + SHA Star

The SHA and declination of the stars change slowly and may be regarded as constant over periods of several days or even months, if lesser accuracy is required. The GHA Aries, or Greenwich hour angle of the first point of Aries (the vernal equinox), is tabulated for each hour of UT in the Nautical Almanac. Permanent tables give the appropriate increments to the tabulated values of GHA and declination for the minutes and seconds of UT.

In the Nautical Almanac, the permanent table for increments also includes corrections for v, the difference between the actual change of GHA in one hour and a constant value used in the interpolation tables, and d, the change in declination in one hour. In the Nautical Almanac, v is always positive unless a negative sign (-) is given. This can occur only in the case of Venus. For the Sun, the tabulated values of GHA have been adjusted to reduce to a minimum the error caused by treating v as negligible; there is no v tabulated for the Sun. No sign is given for tabulated values of d, which is positive if declination is increasing, and negative if it is decreasing. The sign of a v or d value is given also to the related correction.

701. Finding GHA and declination of the Sun

In the Nautical Almanac, enter the daily-page table with the whole hour of the given UT (GMT) immediately preceding the given UT (18 for 18h24m37s), unless this time is itself a whole hour, and extract the tabulated GHA and declination. Also record the d value given at the bottom of the declination column. The sign of d is determined by what the declination is doing over the three-day period displayed. Next, enter the Increments and Corrections table for the number of minutes of UT.
If there are seconds, use the next earlier whole minute. On the line corresponding to the seconds of UT take the value from the Sun-planets column. Add this to the value of GHA from the daily page to find GHA at the given time. Next, enter the correction table for the same minute with the d value, and take out the correction. Give this the sign of the d value, and apply it to the declination from the daily page. The result is the declination at the given time.

Example: Find the GHA and declination of the Sun at UT 18h24m37s on June 1, 2024.

Solution:

| Sun (GHA) | | Sun (Dec.) | |
| --- | --- | --- | --- |
| UT | 18h24m37s June 1 | UT | 18h24m37s June 1 |
| 18h | 90° 30.5' | 18h | 22° 11.4'N, d (+)0.3' |
| 24m37s | 6° 09.3' | d corr. | (+)0.1' |
| GHA | 96° 39.8' | Dec. | 22° 11.5'N |

The correction table for GHA of the Sun is based upon a rate of change of 15° per hour, the average rate during a year. At most times the rate differs slightly from this. The slight error thus introduced is minimized by adjustment of the tabular values. The d value is the amount that the declination changes between 1200 and 1300 on the middle day of the three shown.

702. Finding GHA and Declination of the Moon

In the Nautical Almanac, enter the daily-page table with the whole hour of the given UT (GMT) immediately preceding the given UT (21 for 21h25m44s), unless this time is itself a whole hour, and extract the tabulated GHA and declination. Also record the corresponding v and d values, tabulated on the same line, and determine the sign of the d value. The v value of the Moon is always positive (+), and is not marked in the almanac. Next, enter the Increments and Corrections table for the minutes of UT, and on the line for the seconds of UT take the GHA correction from the Moon column. Then, enter the correction table for the same minute with the v value, and extract the correction. Add both of these corrections to the GHA from the daily page to obtain the GHA at the given time. Then, enter the same correction table with the d value, and extract the correction. Give this correction the sign of the d value, and apply it to the declination from the daily page to find the declination at the given time.

Example: Find the GHA and declination of the Moon at UT 21h25m44s on June 1, 2024.

Solution:

| Moon (GHA) | | Moon (Dec.) | |
| --- | --- | --- | --- |
| UT | 21h25m44s June 1 | UT | 21h25m44s June 1 |
| 21h | 196° 08.4', v (+)11.8' | 21h | 3° 49.2'N, d (+)17.0' |
| 25m44s | 6° 08.4' | d corr. | (+)7.2' |
| v corr. | (+)5.0' | Dec. | 3° 56.4'N |
| GHA | 202° 21.8' | | |

The correction table for GHA of the Moon is based upon the minimum rate at which the Moon's GHA increases, 14°19.0' per hour. The v correction makes the adjustment for the actual rate. The v value itself is the difference between the minimum rate and the actual rate during the hour following the tabulated time. The d value is the amount that the declination changes during the hour following the tabulated time.

703. Finding GHA and Declination of the Planets

In the Nautical Almanac, enter the daily-page table with the whole hour of the given UT (GMT) immediately preceding the given UT (05 for 5h24m07s), unless this time is itself a whole hour, and extract the tabulated GHA and declination. Also record the v value given at the bottom of each of these columns.
Next, enter the Increments and Cor-rections table for the minutes of UT, and on the line for the seconds of UT take the GHA correction from the Sun-plan-ets column. Next, enter the correction table with the v value and extract the correction, giving it the sign of the v value. Add the first correction to the GHA from the daily page, and apply the second correction in accordance with its sign, to obtain the GHA at the given time. Then, enter the correc-tion table for the same minute with the d value, and extract the correction. Give this correction the sign of the d value, and apply it to the declination from the daily page to find the declination at the given time. Example: Find the GHA and declination of Venus at UT 5h24m07s on June 2, 2024. Solution: Venus Venus UT 5h24m07s June 2 UT 5h24m07sJune 2 5h 256° 10.6' v(-)0.8' 5h 22° 00.4'N d(+)0.5' 24m07s 6° 01.8' v(-)0.4' d corr. (+)0.2' v corr. (-)0.3' Dec. 22° 00.6'N GHA 262° 12.1' The correction table for GHA of planets is based upon the mean rate of the Sun, 15° per hour. The v value is the difference between 15° and the change of GHA of the plan-et between 1200 and 1300 on the middle day of the three shown. The d value is the amount that the declination changes between 1200 and 1300 on the middle day. Venus is the only body listed which ever has a negative v value. 704. Finding GHA and Declination of a Star If the GHA and declination of each navigational star were tabulated separately, the almanacs would be several times their present size. But since the sidereal hour angle of star and the declination are nearly constant over several days (to the nearest 0.1') or months (to the nearest l'), sepa-rate tabulations are not needed. Instead, the GHA of the first point of Aries, from which SHA is measured, is tabulated on the daily pages, and a single listing of SHA and declina-262 CALCULATIONS OF CELESTIAL NAVIGATION tion is given for each double page of the Nautical Almanac. The finding of GHA is similar to finding GHA of the Sun, Moon, and planets. In the Nautical Almanac, enter the daily-page table with the whole hour of the given UT (GMT) immediately preceding the given UT (03 for 3h24m33s), unless this time is itself a whole hour, and extract the tabulated GHA , the tabulated SHA and declination of the star from the list-ing on the left-hand daily page. Next, enter the Increments and Corrections table for the minutes of UT, and on the line for the seconds of UT take the GHA correction from the Aries column. Add this correction and the SHA of the star to the GHA of the daily page to find the GHA of the star at the given time. No adjustment of declination is need-ed. Example: Find the GHA and declination of Canopus at UT 3h24m33s on June 2, 2024. Solution: Canopus UT 3h24m33s June 2 3h 296° 04.7' 24m33s 6° 09.3' SHA 263° 53.1' GHA 206° 07.1' Dec. 52° 42.6'S The SHA and declination of 173 stars, including Polar-is and the 57 listed on the daily pages, are given for the middle of each month, on almanac pages 268-273. For a star not listed on the daily pages this is the only almanac source of this information. Interpolation in this table is not necessary for ordinary purposes of navigation, but is some-times needed for precise results. Thus, if the SHA and declination of ß Crucis (Mimosa) are desired for March 1, 2024, they are found by simple eye interpolation to be SHA 167°42.5' and Dec. 59°49.2'S. If GHA is desired, it is found as indicated in the example, but omitting the addition of SHA of a star. 
In the example GHA is 296°04.7' + 6°09.3' = 302°14.0'. THE UNDIVIDED ASTRONOMICAL TRIANGLE 705. Solving for Altitude The law of cosines for sides is a fundamental formula for solving a spherical triangle. As applied to the spherical triangle of Figure 705a, the law is stated as: cos a = cos b cos c + sin b sin c cos A (1a) cos b = cos c cos a + sin c sin a cos B (1b) cos c = cos a cos b + sin a sin b cos C (1c) A applied to the undivided astronomical triangle of Figure 705b, equation (1a) is stated as: cos (90°-h) = cos (90°-L) cos (90°-d) + sin (90°-L) sin(90°-d) cos (LHA) sin h = sin L sin d + cos L cos d cos LHA (2a) in which h is the altitude of the celestial body above the ce-lestial horizon; L is the latitude of the observer or the as-sumed position of the observer, d is the declination of the body, and LHA is the local hour angle of the body. Meridian angle, t, can be substituted for LHA in the equation; i.e., sin h = sin L sin d + cos L cos d cos t (2b) The sign convention used in the calculations of both formulas is that declination is treated as a negative quantity when latitude and declination are of contrary name. No spe-cial sign convention is required for local hour angle or for whether the meridian angle is measured eastward or west-ward from the meridian. If the altitude as calculated is negative, the body is below the celestial horizon. Particularly when using a table of trigonometric func-tions, the rules for the following cases may be helpful in Figure 705a. Spherical Triangle. Figure 705b. Undivided astronomical triangle. CALCULATIONS OF CELESTIAL NAVIGATION 263 avoiding calculation mistakes due to not using the proper sign with a trigonometric function (Section 138). However, for cases II and III it is necessary to know whether the body is above or below the celestial horizon. Case I (t < 90° and Same Name) If LHA is in the range 0° increasing to 90°, or 270° in-creasing to 360° and the latitude is same name as declina-tion; the two terms on the right-hand side of the equation are added. The body is above the celestial horizon. Case II (t < 90° and Contrary Name) If LHA is in the range 0° increasing to 90°, or 270° in-creasing to 360° and the latitude is of contrary name, the lesser quantity is subtracted from the greater on the right-hand side of the equation. The body can be above or below the celestial horizon. Case III (t > 90° and Same Name) If LHA is in the range greater than 90° and increasing to 270° and the latitude is same name as declination, the lesser quantity is subtracted from the greater on the right-hand side of the equation. The body can be above or below the celestial horizon. Case IV (t > 90° and Contrary Name). If LHA is in the range greater than 90° and increasing to 270° and the latitude is of contrary name, the two quan-tities on the right-hand side of the equation are added. The body is below the celestial horizon. Astronomical triangles corresponding to the four cases are drawn on diagrams on the plane of the celestial meridian in Figure 705c.. Example 1: The latitude of the observer is 45°00.0'N; the declination of the celestial body is 5°00.0'N; the local hour angle is 60°. (Case I) Required: Altitude of the body. 
Solution: By natural functions (table 2) sin h = sin L sin d + cos L cos d cos LHA (2a) = sin 45° sin 5° + cos 45° cos 5° cos 60° = (0.70711)(0.08716) + (0.70111)(0.99619)(0.50000) = 0.06163 + 0.35221 = 0.41384 h = 24°26.8' Example 2: The latitude of the observer is 45°00.0'N; the declination of the celestial body is 5°00.0'S; the local hour angle is 60°. (Case II) Required: Altitude of the body. Solution: By natural functions (table 2) sin h = sin L sin d + cos L cos d cos LHA (2a) = sin 45° sin -5° + cos 45° cos -5° cos 60° = (0.70711)(-0.08716) + (0.70111)(0.99619)(0.50000) = -0.06163 + 0.35221 = 0.29058 h = 16°53.6' Example 3: The latitude of the observer is 45°00.0'S; the declination of the celestial body is 5°00.0'S; the local hour angle is 240°. (Case III) Required: Altitude of the body. Solution: By natural functions (table 2) sin h = sin L sin d + cos L cos d cos LHA (2a) = sin 45° sin 5° + cos 45° cos 5° cos 240° = (0.70711)(0.08716) + (0.70111)(0.99619)(-0.50000) = 0.06163 + -0.35221 = -0.29058 h = -16°53.6' Example 4: The latitude of the observer is 45°00.0'S; the declination of the celestial body is 5°00.0'N; the local hour angle is 240°. (Case IV) Required: Altitude of the body. Solution: By natural functions (table 2) sin h = sin L sin d + cos L cos d cos LHA (2a) = sin 45° sin -5°+ cos 45° cos -5° cos 240° = (0.70711)(-0.08716) + (0.70111)(0.99619)(-0.50000) = -0.06163 + -0.35221 = -0.41384 h = -24°26.8' Example 5: The latitude of the observer is 30°25.0'N; the declination of the celestial body is 22°06.2'N; the meridian angle is 39°54.7'W. (Case I) Required: Altitude of the body. Solution: (1) By natural (Table 2) and (2) logarithmic func-tions (Tables 1,3) sin h = sin L sin d + cos L cos d cos t (2b) = sin 30°25.0' sin 22°06.2' + cos 30°25.0' cos 22°06.2' cos 39°54.7' = (0.50628)(0.37628) + (0.86237)(0.92651)(0.76703) = 0.19050 + 0.61285 = 0.80335 (1) h = 53°27.1' For logarithmic solution by tables 1 and 3, the follow-ing modification is used: A = sin L sin d B = cos L cos d cos t sin h = A + B log A = l sin 30°25.0' + l sin 22°06.2' log A = 9.70439 + 9.57551 = 9.27990 A = 0.19050 (Table 1) log B = l cos 30°25.0' + l cos 22°06.2' + l cos 39°54.7' log B = 9.93569 + 9.96685 + 9.88481 = 9.78735 B = 0.61284 (Table 1) sin h = 0.19050 + 0.61284 h = 0.80334 (2) h = 53°27.0' Example 6: The latitude of the observer is 30°25.0'N; the declination of the celestial body is 22°06.2'S; the meridian 264 CALCULATIONS OF CELESTIAL NAVIGATION angle is 39°54.7'W. (Case II) Required: Altitude of the body. Solution: (1) By natural (table 2) and (2) logarithmic func-tions (tables 1,3) sin h = sin L sin d + cos L cos d cos t (2b) = sin 30°25.0' sin -22°06.2' + cos 30°25.0' cos -22°06.2' cos 39°54.7' = (0.50628)(-0.37628) + (0.86237)(0.92651)(0.76703) = -0.19050 + 0.61285 = 0.42235 (1) h = 24°59.0' For logarithmic solution by tables 1 and 3, the follow-ing modification is used: A = sin L sin d B = cos L cos d cos t sin h = A ~ B log A = l sin 30°25.0' + l sin 22°06.2' log A = 9.70439 + 9.57551 = 9.27990 A = 0.19050 (Table 1) log B = l cos 30°25.0' + l cos 22°06.2' + l cos 39°54.7' Figure 705c. Diagram on the plane of the celestial meridian. CALCULATIONS OF CELESTIAL NAVIGATION 265 log B = 9.93569 + 9.96685 + 9.88481 = 9.78735 B = 0.61284 (Table 1) sin h = -0.19050 + 0.61284 h = 0.42234 (2) h = 53°27.0' Example 7: The latitude of the observer is 30°25.0'S; the declination of the celestial body is 22°06.2'S; the meridian angle is 91°20.0'W. (Case III) Required: Altitude of the body. 
Solution: (1) By natural (table 2) and (2) logarithmic func-tions (tables 1,3) sin h = sin L sin d + cos L cos d cos t (2b) = sin 30°25.0' sin 22°06.2' + cos 30°25.0' cos 22°06.2' cos 91°20.0' = (0.50628)(0.37628) + (0.86237)(0.92651)(-0.02327) = 0.19050 + -0.01859 = 0.17191 (1) h = 9°53.9' For logarithmic solution by tables 1 and 3, the follow-ing modification is used: A = sin L sin d B = cos L cos d cos t sin h = A ~ B log A = l sin 30°25.0' + l sin 22°06.2' log A = 9.70439 + 9.57551 = 9.27990 A = 0.19050 (Table 1) log B = l cos 30°25.0' + l cos 22°06.2' + l cos 91°20.0' log B = 9.93569 + 9.96685 + 8.36678 = 8.26932 B = 0.01859 (Table 1) sin h = 0.19050 + -0.01859 h = 0.17191 (2) h = 9°53.9' Example 8: The latitude of the observer is 30°25.0'S; the declination of the celestial body is 22°06.2'N; the meridian angle is 91°20.0'W. (Case IV) Required: Altitude of the body. Solution: (1) By natural (table 2) and (2) logarithmic func-tions (tables 1,3) sin h = sin L sin d + cos L cos d cos t (2b) = sin30°25.0' sin-22°06.2' + cos30°25.0' cos-22°06.2' cos91°20.0' = (0.50628)(-0.37628) + (0.86237)(0.92651)(-0.0.2327) = -0.19050 + -0.01859 = -0.20909 (1) h = -12°04.1' For logarithmic solution by tables 1 and 3, the follow-ing modification is used: A = sin L sin d B = cos L cos d cos t sin h = A + B log A = l sin 30°25.0' + l sin 22°06.2' log A = 9.70439 + 9.57551 = 9.27990 A = 0.19050 (Table 1) log B = l cos 30°25.0' + l cos 22°06.2' + l cos 91°20.0' log B = 9.93569 + 9.96685 + 8.36678 = 8.26932 B = 0.01859 (Table 1) sin h = 0.19050 + 0.01859 h = 0.20909 (2) h = -12°04.1' Note: When the meridian angle is greater than 90° and the latitude and declination are of contrary name, the body lies below the celestial horizon. 706. Solving for Azimuth The relations between the parts of a spherical triangle as shown in Figure 706a. are given in the following equa-tions known as the five parts formulas: sin a cos B = cos b sin c - sin b cos c cos A (3a) sin a cos C = cos c sin a - sin c cos a cos B (3b) sin a cos A = cos a sin b - sin a cos b cos C (3c) Also by the law of sines: sin a sin b sin c = = sin A sin B sin C Substituting the value of sin a as obtained from the law of sines in questions 3a: sin a sin b sin a = sin B sin A cot B = sin c cot b - cos c cos A As applied to the undivided astronomical triangle of Figure 706b, the above equation is stated as: sin LHA cot Z = cos L tan d - sin L cos LHA from which cos L tan d - sin L cos LHA cot Z = sin LHA sin LHA tan Z = (4a) cos L tan d - sin L cos LHA sin d Substituting for tan d, cos d cos d sin LHA tan Z = (4b) cos L tan d - sin L cos d cos LHA Meridian angle, t, can be substituted for LHA in equa-tions 4a and 4b: sin t tan Z = (5a) cos L tan d - sin L cos t cos d sin t tan Z = (5b) cos L tan d - sin L cos d cos t The sign conventions used in the calculations of the az-imuth angle formulas are as follows: (1) If latitude and declination are of contrary name; declination is treated as a 266 CALCULATIONS OF CELESTIAL NAVIGATION negative quantity; (2) If in equations 4a and 4b the local hour angle is greater than 180°, it is treated as a negative quantity. If the acute angle as calculated is negative, it is neces-sary to add 180° to obtain the desired azimuth angle. Azimuth angle is measured from 0° at the north or south reference direction clockwise or counter-clockwise through 180°. It is labeled with the reference direction as the prefix and the direction of measurement from the refer-ence direction as a suffix. 
Azimuth angle is measured from 0° at the north or south reference direction clockwise or counter-clockwise through 180°. It is labeled with the reference direction as the prefix and the direction of measurement from the reference direction as a suffix. Thus, azimuth angle S144°W is 144° west of south, or true azimuth 324°. Azimuth angle is labeled N or S to agree with the latitude and E or W to agree with the meridian angle (labeled E when LHA is greater than 180°). Azimuth angle can also be converted to true azimuth, Zn, through use of the following rules:

707. Time Azimuth

The time azimuth or azimuth angle is computed using the LHA or meridian angle (a function of time), latitude, and declination as the known quantities. Solution can be made using equations 4a or 5a.

Example 1: The latitude of the observer is 30°25.0'N; the declination of the celestial body is 22°06.2'N; the meridian angle is 39°54.7'W.
Required: Azimuth of the body.
Solution: By equation 4a.

tan Z = sin LHA / (cos L tan d - sin L cos LHA) (4a)
tan Z = sin 39°54.7' / (cos 30°25.0' tan 22°06.2' - sin 30°25.0' cos 39°54.7')
tan Z = 0.64161 / ((0.86237)(0.40613) - (0.50628)(0.76703))
tan Z = 0.64161 / (0.35023 - 0.38833) = 0.64161 / (-0.03810) = -16.840

Since the acute angle (-86.6°) as calculated is negative, it is necessary to add 180° to obtain the desired azimuth angle.

Z = -86.6° + 180° = N93.4°W
Zn = 266.6°

Example 2: The latitude of the observer is 30°25.0'S; the declination of the celestial body is 22°06.2'N; the meridian angle is 39°54.7'E.
Required: Azimuth of the body.
Solution: By equation 5a.

tan Z = sin t / (cos L tan d - sin L cos t) (5a)
tan Z = sin 39°54.7' / (cos 30°25.0' tan (-22°06.2') - sin 30°25.0' cos 39°54.7')
tan Z = 0.64161 / ((0.86237)(-0.40613) - (0.50628)(0.76703))
tan Z = 0.64161 / (-0.35023 - 0.38833) = 0.64161 / (-0.73856) = -0.86873

Since the acute angle (-41.0°) as calculated is negative, it is necessary to add 180° to obtain the desired azimuth angle. Solving this example by equation 4a, local hour angle 320°05.3' is treated as a negative angle.

Z = -41.0° + 180° = N139.0°E
Zn = 041.0°

708. Altitude Azimuth

The altitude azimuth is the azimuth or azimuth angle computed using altitude, latitude, and declination (or polar distance) as the known quantities. By the law of cosines for sides,

cos b = cos c cos a + sin c sin a cos B (1b)

As applied to the astronomical triangle of Figure 706b, equation 1b is stated as:

cos (90°-d) = cos (90°-L) cos (90°-h) + sin (90°-L) sin (90°-h) cos Z
sin d = sin L sin h + cos L cos h cos Z
cos Z = (sin d - sin L sin h) / (cos L cos h) (6)

Figure 706a. Spherical triangle.
Figure 706b. Undivided astronomical triangle.

Example: The latitude of the observer is 30°00.0'N; the center of the Sun is on the visible horizon; the declination of the Sun is 18°00.0'N.
Required: Azimuth angle of the Sun.
Solution: By equation 6. Computation for azimuth angle is made for an altitude of -0°41.4', determined as follows:

Dip at 41 feet height of eye (-) 6.2'
Refraction at -6.2' altitude (-) 35.3'
Parallax (+) 0.1'
(-) 41.4'

cos Z = (sin 18° - sin 30° sin (-0°41.4')) / (cos 30° cos (-0°41.4')) (6)
cos Z = ((0.30902) - (0.50000)(-0.01204)) / ((0.86603)(0.99993))
cos Z = 0.31504 / 0.86597 = 0.36380
Z = 68°40.0'

709. Time and Altitude Azimuth

The time and altitude azimuth or azimuth angle is computed using meridian angle, declination, and altitude as the known quantities. The most common formula is derived from the law of sines.
By the law of sines, the relationship between the angles and the sides opposite them in the spherical triangle shown in Figure 705a is:

sin b / sin B = sin a / sin A (3d)

As applied to the astronomical triangle of Figure 706b, equation 3d is stated as:

sin (90°-d) / sin Z = sin (90°-h) / sin t
sin Z cos h = sin t cos d
sin Z = sin t cos d sec h (7)

The weakness of this method is that it does not indicate whether the celestial body is north or south of the prime vertical. Usually there is no question on this point, but if Z is near 90°, the quadrant may be in doubt. If this occurs, the meridian angle or altitude when on the prime vertical can be determined from table 20 (for declinations less than 23°) or by computation (Section 710), using the formula:

cos t = tan d cot L (8c)
or
sin h = sin d csc L (8d)

Example: The latitude of the observer is 30°25.0'N; the declination of the celestial body is 22°06.2'N; the altitude of the body is 53°27.0'; and the meridian angle is 39°54.7'W.
Required: Azimuth of the body.
Solution: By equation 7.

sin Z = sin t cos d sec h
= sin 39°54.7' cos 22°06.2' sec 53°27.0'
= (0.64161)(0.92651)(1.67919)
= 0.99821
Z = 86.6° or 93.4°?

By logarithmic solution,

l sin Z = l sin t + l cos d + l sec h
= l sin 39°54.7' + l cos 22°06.2' + l sec 53°27.0'
= (9.80726) + (9.96685) + (10.22510)
Z = 86.6° or 93.4°?

If the altitude is less, or the meridian angle is greater, than the value when the body is on the prime vertical, the numerical value of the azimuth angle is the lesser of the two angles. If the altitude is greater, or the meridian angle is less, than when on the prime vertical, the numerical value of the azimuth angle is the greater of the two angles. Entering Table 20 with latitude 30°25' and declination 22°06.2' (same name as latitude) as arguments, the meridian angle and altitude of the body when on the prime vertical are determined as:

t = 46.1°, h = 48.1°

Since the altitude is greater than the value when the body is on the prime vertical, the numerical value of the azimuth angle is the greater of the two quantities, i.e., the azimuth angle is N93.4°W; Zn is 266.6°.

710. Finding Time on the Prime Vertical

A celestial body having a declination of opposite name to the latitude crosses the prime vertical below the horizon. Its nearest visible approach is at the time of rising and setting. If a celestial body has a declination of the same name as the latitude, but numerically greater, it does not cross the prime vertical. Its nearest approach (in azimuth) is at the point at which its azimuth angle is maximum. At this point the meridian angle is given by the formula

sec t = tan d cot L (8a)

and its altitude by the formula

csc h = sin d csc L (8b)

A celestial body having a declination of the same name as the latitude, and numerically smaller, crosses the prime vertical at some point before it reaches the celestial meridian, and again after meridian transit. At these two crossings of the prime vertical, the meridian angles are equal and are always less than 90°. They are given by the formula:

cos t = tan d cot L (8c)

The altitudes are also equal, and are given by the formula:

sin h = sin d csc L (8d)

Meridian angle and altitude of bodies on the prime vertical, and similar data for the nearest approach (in azimuth) of those bodies of same name which do not cross the prime vertical, are given in table 20 for various latitudes, and for declinations from 0° to 23°, inclusive.
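For a same-name case the ambiguity in equation 7 can be resolved by computing the prime-vertical values from equations 8c and 8d instead of entering Table 20. The sketch below is illustrative only (the helper and its argument order are not from the text); it reproduces the example above.

```python
import math

def azimuth_time_altitude(t_deg, dec_deg, alt_deg, lat_deg):
    t, d, h, L = map(math.radians, (t_deg, dec_deg, alt_deg, lat_deg))

    # Equation 7: sin Z = sin t cos d sec h  (gives Z or its supplement)
    Z = math.degrees(math.asin(min(1.0, math.sin(t) * math.cos(d) / math.cos(h))))

    # Equations 8c and 8d: meridian angle and altitude on the prime vertical
    t_pv = math.degrees(math.acos(math.tan(d) / math.tan(L)))
    h_pv = math.degrees(math.asin(math.sin(d) / math.sin(L)))

    # Altitude greater (or meridian angle less) than on the prime vertical:
    # take the greater of the two possible angles, i.e. the supplement.
    if alt_deg > h_pv or t_deg < t_pv:
        Z = 180.0 - Z
    return Z, t_pv, h_pv

# The example above: L 30 deg 25.0'N, d 22 deg 06.2'N, h 53 deg 27.0', t 39 deg 54.7'W
Z, t_pv, h_pv = azimuth_time_altitude(39 + 54.7 / 60, 22 + 6.2 / 60,
                                      53 + 27.0 / 60, 30 + 25.0 / 60)
print(round(Z, 1), round(t_pv, 1), round(h_pv, 1))  # ~93.4; ~46.2 and ~48.0 (Table 20 gives 46.1, 48.1)
```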
Equation 8c for meridian angle, when azimuth angle is 90°, is derived by Napier's rules as follows: The circular parts diagram for astronomical triangle PMZ is completed as shown in figure 721. sin (90°- t) = tan d tan (90°- L) cos t = tan d cot L (8c) The altitudes of the two crossings of the prime vertical are also equal. Equation 8d for altitude on the prime vertical is derived by Napier's rules: sin d = cos (90°- L) cos (90°- h) sin d = sin L sin h sin h = sin d csc L (8d) To find the time of crossing the prime vertical, convert t to LHA, and add west longitude or subtract east longitude to find GHA. The UT at which this GHA occurs can be found, as explained in Section 717, and converted to any other time desired. Example: Determine (1) the approximate zone time, and (2) the approximate altitude of the Sun when it crosses the prime vertical during the afternoon of May 30, 2024, at lat. 51°32.3'N, long. 160°21.7'W, using Table 20 and the Nau-tical Almanac. Solution: May 30 t 71.6°W (from table 20) LHA 71.6° λ 160.4°W GHA 232.0° 3h 225.6° 26m 6.4° UT 0326 May 31 ZD (+)11 (rev.) (1) ZT 1626 May 30 (2) h 28.4° (from table 20) At the time of crossing the prime vertical, or at nearest approach (in azimuth), a celestial body is changing azimuth slowly, and therefore this is considered a good time to check longitude, compass deviation, or to swing ship. The prime vertical at any place is the celestial horizon of a point 90° away, on the same meridian. Therefore, a ce-lestial body crosses the prime vertical at approximately the same time it rises and sets at the point 90° away. Thus, if one is at latitude 35° N, the Sun crosses his prime vertical at about the same time it rises or sets at latitude 55°S. If time of sunrise and sunset are to be obtained accurately by this method, corrections must be applied for semidiameter and refraction. RISING, SETTING, AND TWILIGHT 711. Rising, Setting, and Twilight In the Nautical Almanac, the times of sunrise, sunset, moonrise, moonset, and twilight information at various lat-itudes between 72°N and 60°S are given to the nearest whole minute. By definition, rising or setting occurs when the upper limb of the body is on the visible horizon, assum-ing standard refraction for zero height of eye. Because of variations in refraction and height of eye, computation to a greater precision than 1m is not justified. In high latitudes some of the phenomena do not occur during certain periods.The symbols used to indicate this condition are: Sun or Moon does not set, but remains continuous-ly above the horizon. Sun or Moon does not rise, but remains continu-ously below the horizon. //// Twilight last all night. The Nautical Almanac makes no provision for finding the times of rising, setting, or twilight in polar regions. In the Nautical Almanac, sunrise, sunset, and twilight tables are given only once for the middle of the three days on each page opening. For most purposes this information can be used for all three days. There are moonrise and Figure 710. Circular parts diagram for astronomical triangle PMZ CALCULATIONS OF CELESTIAL NAVIGATION 269 moonset tables for each day. The tabulations are in local mean time (Section 309). On the zone meridian, this is the zone time (ZT). For every 15' of longitude that the observer's position differs from that of the zone meridian, the zone time of the phenomena dif-fers by 1m, being later if the observer is west of the zone meridian, and earlier if he is east of the zone meridian. 
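The 15-minutes-of-arc rule just stated converts directly into a small helper. The sketch below is an illustration only; its name and the west-positive sign convention are assumptions of this sketch.

```python
def zone_time_correction_minutes(longitude_deg, zone_meridian_deg):
    # Each 15' of longitude away from the zone meridian shifts the zone time
    # of the phenomenon by 1 minute of time: later if the observer is west
    # of the zone meridian, earlier if east.  West longitudes positive here.
    dlon_minutes_of_arc = (longitude_deg - zone_meridian_deg) * 60.0
    return dlon_minutes_of_arc / 15.0   # minutes of time, positive = later

# Long. 36 deg 14.3'W with zone meridian 30 deg W (ZD +2): about +25 minutes,
# matching the d-lambda (+)25 entry used in the sunrise example of section 712.
print(round(zone_time_correction_minutes(36 + 14.3 / 60, 30.0)))
```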
The local mean time of the phenomena varies with latitude of the observer, declination of the body, and hour angle of the body relative to that of the mean Sun. 712. Finding Time of Sunrise and Sunset In the Nautical Almanac, enter the table on the daily page, and extract the LMT for the tabulated latitude next smaller than the observer's latitude (unless this is an exact tabulated value). Apply a correction from table I on alma-nac page xxxii to interpolate for latitude, determining the sign of the correction by inspection. Then convert LMT to ZT by means of the difference in longitude (dλ) between the local and zone meridians. Example: Find the zone time of sunrise and sunset at lat. 43°31.4'N, long 36°14.3'W on June 1, 2024. Solution: L 43° 31.4'N June λ 36° 14.3'W DR latitude lies between tabular latitudes 40°N and 45°N in the Nautical Almanac. Sunrise Sunset 40° 0434 40° 1922 (5°/-17m) T I (-) 10 (5°/+17m) T I (+) 10 LMT 0424 LMT 1932 dλ (+) 25 dλ (+) 25 ZT 0449 ZT 1957 Table I is to be entered, in the appropriate column on the left, with the difference between true latitude and the nearest tabular latitude which is less than the true latitude. In this example, that it 3° 31.4' which would take you to the row for 3° 30.0'; and with the argument at the top which is the nearest value of the difference between the times for the tabular latitude and the next higher one. In this example, the differences are -16 minutes and +17 minutes, respec-tively. Both of these would land you in the column for 15m and thus a correction of 10m. 713. Finding Time of Twilight Morning twilight ends at sunrise, and evening twilight begins at sunset. The time of the darker limits of both civil and nautical twilights (center of the Sun 6° and 12°, respec-tively, below the celestial horizon) is given in the Nautical Almanac. The brightness of the sky at any given depression of the Sun below the horizon may vary considerably from day to day, depending upon the amount of cloudiness and other atmospheric conditions. In general, however, the most effective period for observing stars and planets occurs when the center of the Sun is between about 3° and 9° below the celestial horizon. Hence, the darker limit of civil twilight occurs at about the mid point of this period. At the darker limit of nautical twilight the horizon is generally too dark for good observations. At the darker limit of astronom-ical twilight (center of the Sun 18° below the celestial hori-zon) full night has set in. Time of twilight’s approximate value can be determined by extrapolation (Section 205) in the Nautical Almanac, noting that the duration of the differ-ent kinds of twilight is not proportional to the number of de-grees of depression at the darker limit. More precise deter-mination of the time at which the center of the Sun is any given number of degrees below the celestial horizon can be determined by a large-scale diagram on the plane of the ce-lestial meridian (Appendix G). The method of finding the darker limit of twilight is the same as that for sunrise and sunset. Example: Find the zone time of beginning of morning nau-tical twilight and ending of evening nautical twilight at lat. 21°54.7'S, long 109°04.2'E on June 1, 2024. Solution: L 21°54.7'S June 1 λ 109°04.2'E DR latitude lies between tabular latitudes 20°S and 30°S in the Nautical Almanac. Nautical Twilight Nautical Twilight 20°S 0537 20°S 1819 (10°/-15m)T I (+) 3 (10°/+15m)T I (-) 3 LMT 0540 LMT 1816 dλ (-) 16 dλ (-) 16 ZT 0524 ZT 1800 714. 
Finding Time of Moonrise and Moonset Finding the time of moonrise and moonset is similar to finding the time of sunrise and sunset, with one important difference. Because of the Moon's rapid change of declina-tion, and its fast eastward motion relative to the Sun, the time of moonrise and moonset varies considerably from day to day. These changes of position on the celestial sphere (Appendix G) are continuous, as moonrise and moonset occur successively at various longitudes around the Earth. Therefore, the change in time is distributed over all longi-tudes. For ordinary purposes of navigation, it is sufficiently accurate to interpolate between consecutive moonrises or moonsets at the Greenwich meridian. Since apparent mo-tion of the Moon is westward, relative to an observer on the Earth, interpolation in west longitude is between the phe-nomenon on the given date and the following one. In east 270 CALCULATIONS OF CELESTIAL NAVIGATION longitude it is between the phenomenon on the given date and the preceding one. For the given date, enter the daily-page table with lati-tude, and extract the LMT for the tabulated latitude next smaller than the observer's latitude (unless this is an exact tabulated value). Apply a correction from Table I of the al-manac “Tables for Interpolating Sunrise, Moonrise, etc.” to interpolate for latitude, determining the sign of the correc-tion by inspection. Repeat this procedure for the day fol-lowing the given date, if in west longitude; or for the day preceding, if in east longitude. Using the difference be-tween these two times, and the longitude, enter Table II of the almanac “Tables for Interpolating Sunrise, Sunset, etc.” and take out the correction. Apply this correction to the LMT of moonrise or moonset at the Greenwich meridian on the given date to find the LMT at the position of the observ-er. The sign to be given the correction is such as to make the corrected time fall between the times for the two dates be-tween which interpolation is being made. This is nearly al-ways positive (+) in west longitude and negative (-) in east longitude. Convert the corrected LMT to ZT. Example 1: Find the zone time of moonrise and moonset at lat. 58°23.6'N, long 144°05.5'W on June 1, 2024. Solution: L 58°23.6'N June 1 λ 144°05.5'W DR latitude lies between tabular latitudes 58°N and 60°N in the Nautical Almanac (d lat = 0° 23.6'). Moonrise Moonset 58°N 0130 June 1 58°N 1352 June 1 (2°/+1m) T I 0 (2°/+1m) T I 0 LMT (G) 0130 June 1 LMT (G) 1352 June 1 58°N 0133 June 2 58°N 1529 June 2 (2°/-3m) T I (-) 1 (2°/+5m)T I (+) 1 LMT (G) 0132 June 2 LMT (G) 1530 June 2 LMT (G) 0130 June 1 LMT (G) 1352 June 1 diff. 2 diff. 1 38 T II 0 T II (+)39 LMT (G) 0130 June 1 LMT (G) 1352 June 1 LMT 0130 June 1 LMT 1431 June 1 dλ (-) 24 dλ (-) 24 ZT 0106 June 1 ZT 1407 June 1 Example 2: Find the zone time of moonrise and moonset at lat. 58°23.6'N, long 166°10.5'E on June 2, 2024. Solution: L 58°23.6'N June 2 λ 166°07.5'E DR latitude lies between tabular latitudes 58°N and 60°N in the Nautical Almanac (d lat = 0° 23.6') Moonrise Moonset 58°N 0133 June 2 58°N 1529 June 2 (2°/-3m) T I (-) 1 (2°/+5m) T I (+) 1 LMT (G) 0132 June 2 LMT (G) 1530 June 2 58°N 0130 June 1 58°N 1352 June 1 (2°/+1m) T I 0 (2°/+1m) T I 0 LMT (G) 0130 June 1 LMT (G) 1352 June 1 LMT (G) 0132 June 2 LMT (G) 1530 June 2 diff. 2 diff. 
1 38 T II 0 T II (-)47 LMT (G) 0132 June 2 LMT (G) 1530 June 2 LMT 0132 June 2 LMT 1443 June 2 dλ (-) 5 dλ (-) 5 ZT 0127 June 2 ZT 1438 June 2 As with the Sun, there are times in high latitudes when interpolation is inaccurate or impossible. With the Moon, this condition occurs when the Moon rises or sets at one lat-itude, but not at the next higher tabulated latitude, as with the Sun. It also occurs when the Moon rises or sets on one day but not on the preceding or following day. Because of the eastward revolution of the Moon around the Earth, there is one day each synodical month (29½ days) when the Moon does not rise, and one day when it does not set. These occur near last quarter and first quar-ter, respectively. Since this day is not the same at all lati-tudes or at all longitudes, the time of moonrise or moonset found from the almanac may occasionally be the preceding or succeeding one to that desired. When interpolating near midnight, one should exercise caution to prevent an error. Refer to the right-hand daily page of the Nautical Al-manac for July 8, 9, 10 (Appendix E). On July 9 moonset occurs at 2345 at latitude 70°N, and at 0119 at latitude 72°N. These are not the same moonset, the one at 0119 oc-curring approximately one day later than the one occurring at 2345. This is indicated by the two times, which differ by nearly 24 hours. The table indicates that with increasing northerly latitude, moonset occurs later. Between 70°N and 72°N the time crosses midnight to the following day. Hence, between these latitudes interpolation should be made between 2345 on July 9 and 0119 on July 10. For another example, refer to the right-hand daily page of the Nautical Almanac for June 2, 3, 4 (Appendix E). On June 2 moonrise occurs at 0110 at latitude 70°N, and at 0103 at latitude 72°N. On June 3 at 70°N (72°N) moonrise occurs at 0038 (0020), moonset occurs at 1853 (1938) and then another moonrise occurs at 2349 (2302) after which the moon does not set as indicated by the white boxes. On June 4 and 5 the moon never sets below the horizon north of latitude 70°. The effect of the revolution of the Moon around the Earth is to cause the Moon to rise or set later from day to day. The daily retardation due to this effect does not differ greatly from 50m. The change in declination of the Moon may increase or decrease this effect. The effect due to CALCULATIONS OF CELESTIAL NAVIGATION 271 change of declination increases with latitude, and in ex-treme conditions it may be greater than the effect due to rev-olution of the Moon. Hence, the interval between succes-sive moonrises or moonsets is more erratic in high latitudes than in low latitudes. When the two effects act in the same direction, daily differences can be quite large. Thus, at lati-tude 72°N the moon is always above the horizon on July 8, and then rises at 0427 on July 9, and at 0709 on July 10. When they act in opposite directions, they are small, and when the effect due to change in declination is larger than that due to revolution, the Moon sets earlier on succeeding days. Thus, at latitude 72°N the Moon sets at 0139 on June 13, and at 0102 on June 14 (37m versus 50m) (Appendix E). If this happens near last quarter or first quarter, two moonrises or moonsets might occur on the same day, one a few minutes after the day begins, and the other a few min-utes before it ends. On June 3, 2024, for instance, at latitude 72°N, the Moon rises at 0020, sets at 1938, and rises again at 2302 the same day. 
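The Table II correction amounts to a linear interpolation of the day-to-day difference in proportion to longitude. A sketch (the helper is this illustration's own, and times are handled simply as minutes since midnight) reproducing the moonset figures of Example 1 in section 714:

```python
def moon_phenomenon_lmt(lmt_today_min, lmt_adjacent_min, longitude_deg):
    # Interpolate between consecutive moonsets (or moonrises) at Greenwich,
    # the role played by Table II.  In west longitude the adjacent value is
    # the following day's; in east longitude, the preceding day's.
    diff = lmt_adjacent_min - lmt_today_min
    return lmt_today_min + diff * longitude_deg / 360.0

# Moonset 1352 on June 1 and 1530 on June 2 at Greenwich, long. 144 deg 05.5'W
lmt = moon_phenomenon_lmt(13 * 60 + 52, 15 * 60 + 30, 144 + 5.5 / 60)
print(divmod(round(lmt), 60))   # (14, 31) -> LMT 1431, as in the example
```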
On those days on which no moonrise or no moonset occurs, the next succeeding one is shown with 24h added to the time. Thus, at latitude 60°N the Moon rises at 2335 on May 24, while the next moonrise occurs 25h13m later, at 0048 on May 26. This is listed both as 2448 on May 25 and as 0048 on May 26. Interpolation for longitude is always made between consecutive moonrises or moonsets, regardless of the days on which they fall. Example 3: Find the zone time of moonset at lat. 71°38.7'N, long 56°21.8'W during the night of July 9 - 10, 2024 Solution: L 71° 38.7'N July 9 - 10 λ 56° 21.8'W Moonset 70°N 2345 July 9 T I (+)16 LMT (G) 0001 July 10 70°N 2315 July 10 T I (+) 8 LMT (G) 2323 July 10 LMT (G) 0001 July 10 diff. 38 T II (-) 7 LMT (G) 0001 July 10 LMT 2354 July 9 dλ (-) 15 ZT 2339 July 9 Interpolation for the first entry is between 2345 on July 9 (lat. 70°N) and 0005 on July 10 (lat. 72°N); for the second entry, between 2315 on July 10 and 2325 on July 10. The “Semiduration of Sunlight” graphs, located near the back of the Nautical Almanac on the same page as the “Duration of Twilight” graphs, gives the number of hours between sunrise and meridian transit or between meridian transit and sunset. The dot scale near the top of the graph in-dicates the LMT of meridian transit, the time represented by the minute dot nearest the vertical date line being used. If the intersection occurs in the area marked “Sun above hori-zon,” the Sun does not set; and if in the area marked “Sun below horizon,” the Sun does not rise. Example 4: Find the zone time of sunrise at lat. 71°30.0'N, long 10°00.0'W near Jan Mayen Island, on August 25, 2024. Solution: August 25 LMT 1202 LAN, from top of graph dλ (-)20 ZT 1142 LAN semidur. 8h24m from graph ZT 0318 sunrise (-semidur.) ZT 2006 sunset (+semidur.) A vertical line through August 25 passes nearest the dot representing LAN 1202 on the scale near the top of the graph. This is LMT; at longitude 10°00.0'W the ZT is 20m earlier, or at 1142. The intersection of the vertical date line with the horizontal latitude line occurs between the 8h and 9h curves, at approximately 8h24m. Hence, sunrise occurs at this interval before LAN and sunset at this interval after LAN. The “Duration of Twilight” graphs, located near the back of the Nautical Almanac on the same page as the “Semiduration of Sunlight” graphs, gives the number of hours between the beginning of morning civil twilight (cen-ter of Sun 6° below the horizon) and sunrise, or between sunset and the end of evening civil twilight. If the Sun does not rise, but twilight does occur, the time taken from the graph is half the total length of the single twilight period, or the number of hours from beginning of morning twilight to LAN, or from LAN to end of evening twilight. If the inter-section occurs in the area marked “continuous twilight or sunlight,” the center of the Sun does not get more than 6° below the horizon; and if in the area marked “no twilight nor sunlight,” the Sun remains more than 6° below the hori-zon throughout the entire day. Example 5: Find the zone time of beginning of morning twi-light and ending of evening twilight at the place and date of Example 4. Solution: Twilight Twilight ZT 0318 sunrise ZT 2006 sunset dur. 1h46m from graph dur. 
1h53m from graph ZT 0122 morning twilight ZT 2152 evening twilight 272 CALCULATIONS OF CELESTIAL NAVIGATION The intersection of the vertical date line and the hori-zontal latitude line occurs approximately one-sixth of the distance from the 2h line toward the 1h20m line; or at about 1h53m. Morning twilight begins at this interval be-fore sunrise, and evening twilight ends at this interval after sunset. The “Semiduration of Moonlight” graphs give the number of hours between moonrise and meridian transit or between meridian transit and moonset. The dot scale near the top of the graph indicates the LMT of meridian transit, each dot representing one hour. The phase symbols indicate the date on which the principal Moon phases occur, the open circle indicating full moon and the dark circle indicat-ing new moon. If the intersection of the vertical date line and the horizontal latitude line falls in the “moon above horizon” or “moon below horizon” area, the Moon remains above or below the horizon, respectively, for the entire 24 hours of the day. If approximations of the times of moonrise and moon-set are sufficient, the values of semiduration taken from the graph can be used without adjustment. For more accurate results, the times on the required date and the adjacent date (the following date in west longitude and the preceding date in east longitude) should be determined, and an interpola-tion made for longitude, as in any latitude, since the inter-vals given are for the Greenwich meridian. Example 6: Find the zone time of moonrise and moonset at lat. 74°00.0'N, long 108°00.0'W on May 5, 2024. Solution: May 5 May 6 LMT 0930 LMT 1012 meridian transit, from graph dλ (+)12 dλ (+)12 ZT 0942 ZT 1024 meridian transit semidur. 6h45m semidur. 8h48m from graph ZT 0257 ZT 0136 (moonrise - semidur.) ZT 1627 ZT 1912 (moonrise + semidur.) Moonrise Moonset ZT 0257 May 5 ZT 1627 May 5 ZT 0136 May 6 ZT 1912 May 6 diff. (-) 81 diff. (+)165 81 x 108.0/360 (-) 24 165 x 108.0/360 (+) 50 ZT 0233 ZT 1717 The phase is crescent, about two days before new moon. The LMT of meridian transits are found by noting the intersections of the vertical date lines with the dot scale near the top of the graph, interpolating by eye. At longitude 108°00.0'W the ZT is 12m later. The semiduration is found by noting the position, with respect to the semiduration curves, of the intersection of the vertical date line with the horizontal latitude line. This interval is subtracted from the time of meridian transit to obtain moonrise, and added to obtain moonset. These solutions are made for both May 5 and 6, and the difference determined in minutes. The ad-justment to be applied to the ZT on May 5 at Greenwich is determined by multiplying this difference by the ratio λ/360. The phase is determined by noting the position of the vertical date line with respect to the phase symbols. If the answer indicates that the phenomenon occurs on a date dif-fering from that desired, a new solution should be made, adjusting the starting date accordingly. The phenomenon may occur twice on the same day, or it may not occur at all. In high latitudes the effect on the time of moonrise and moonset of a relatively small change in declination is con-siderably greater than in lower latitudes, resulting in great-er differences from day to day. Sunlight, twilight, and moonlight graphs are not given for south latitudes. 
Beyond latitude 65°S, the northern hemisphere graphs can be used for determining the semi-duration or duration, by using the vertical date line for a day when the declination has the same numerical value but op-posite sign. The time of meridian transit and the phase of the Moon are determined as explained above, using the cor-rect date. Between latitudes 60°S and 65°S solution is made by interpolation between the tables and the graphs. Several other methods of solution of these phenomena are available. Semiduration or duration can be determined graphically by means of a diagram on the plane of the ce-lestial meridian (Appendix G), or by computation. When computation is used, solution is made for the meridian angle at which the required negative altitude occurs. The meridian angle expressed in time units is the semiduration in the case of sunrise, sunset, moonrise, and moonset; and the semiduration of the combined sunlight and twilight, or the time from meridian transit at which morning twilight begins or evening twilight ends. For sunrise and sunset the altitude used is (-)50'. Allowance for height of eye can be made by algebraically subtracting (numerically adding) the dip correction from this altitude. The altitude used for twi-light is (-)6°, (-)12°, or (-)18° for civil, nautical, or astro-nomical twilight, respectively. The altitude used for moon-rise and moonset is -34' - SD + HP, where SD is semidiam-eter and HP is horizontal parallax, from the daily pages of the Nautical Almanac. 715. Rising, Setting, and Twilight at a Moving Craft Instructions given in the preceding three articles relate to a fixed position on the Earth. Aboard a moving craft the problem is complicated somewhat by the fact that time of occurrence depends upon position of the craft, and vice ver-sa. At ship speeds, it is generally sufficiently accurate to make an approximate mental solution, and use the position of the vessel at this time to make a more accurate solution. If higher accuracy is required, the position at the time indi-cated in the second solution can be used for a third solution. If desired, this process can be repeated until the same an-CALCULATIONS OF CELESTIAL NAVIGATION 273 swer is obtained from two consecutive solutions. However, it is generally sufficient to alter the first solution by 1m for each 15' of longitude that the position of the craft differs from that used in the solution, adding if west of the estimat-ed position, and subtracting if east of it. In applying this rule, use both longitudes to the nearest 15'. The first solu-tion is known as the first estimate; the second solution is the second estimate. LATITUDE BY MERIDIAN TRANSIT 716. Meridian Altitudes The latitude of a place on the surface of the Earth, being its angular distance from the equator, is measured by an arc of the meridian between the zenith and the equator, and hence is equal to the declination of the zenith; there-fore, if the zenith distance of any heavenly body when on the meridian be known, together with the declination of the body, the latitude can be found. Figure 716a shows the celestial sphere surrounding the Earth: Pn,MPs, is the upper branch of a celestial me-ridian and LL' a portion of the corresponding geographic meridian. The declination of a body at M (arc QM) is nu-merically equal to the latitude of its geographical position at GP. The zenith distance of a body is equiva-lent to the distance on Earth between the geographical position of the body and the position of the observer. 
In Figure 716a the zenith distance of M is 30° and its decli-nation is 20°N. If the body is on the meridian, the GP is also on the meridian. Since Pn, Z, and M are all on the ce-lestial meridian, the navigational triangle flattens out to a line. The observer is 30° north of the GP (L 50°N) if the body is seen to bear south, or 30° south of the GP (L' 10°S) if the body is seen to bear north. The navigator knows whether the GP is north or south, because it is the same as the direction he faces when making his observation. In the diagram on the plane of the celestial meridian shown in Figure 716b, M is the position of a celestial body north of the equator but south of the zenith; QM is the declination of the body; SM is the altitude (h); and MZ is the zenith distance (z). From the diagram: QZ = QM + MZ, or L = d + z (9) With attention to the direction of the GP and the name of the declination, the above equation may be considered general for any position of the body at upper transit, as M, M', M". When the body is below the pole, as at M''' —that is, at its lower transit— the same formula may be used by substi-tuting 180° - d for d. Another solution is given in this case by observing that: NPn = PnM''' + NM''', or L = p + h (10) By drawing that half of the diagram on the plane of the celestial meridian containing the zenith, the proper combi-nation of zenith distance and declination is made obvious, Figure 716a. Body on celestial meridian. Figure 716b. Diagram on the plane of the celestial meridian. 274 CALCULATIONS OF CELESTIAL NAVIGATION as shown in the following examples: Example 1: The navigator observes the Sun on the meridi-an, bearing south. The declination of the Sun is 10°00.0'N; the corrected sextant altitude (Ho) is 60°00.0'. (See Figure 716c) Required: The latitude. Solution: L = z + d 90° 00.0' Ho 60° 00.0' z 30° 00.0' d 10° 00.0'N L 40° 00.0'N Example 2: The navigator observes the Sun on the meridi-an, bearing south. The declination of the Sun is 10°00.0'S; the corrected sextant altitude (Ho) is 65°00.0'. (See Figure 716d) Required: The latitude. Solution: L = z - d 90° 00.0' Ho 65° 00.0' z 25° 00.0' d 10° 00.0'S L 15° 00.0'N Example 3: The navigator observes the Sun on the meridi-an, bearing north. The declination of the Sun is 20°00.0'S; the corrected sextant altitude (Ho) is 60°00.0'. (See Figure 716e) Required: The latitude. Solution: L = z + d 90° 00.0' Ho 60° 00.0' z 30° 00.0' d 20° 00.0'S L 50° 00.0'S Example 4: The navigator observes the Sun on the meridi-an, bearing north. The declination of the Sun is 23°00.0'N; the corrected sextant altitude (Ho) is 72°00.0'. (See Figure 716f) Required: The latitude. Solution: L = z - d 90° 00.0' Ho 72° 00.0' z 18° 00.0' d 23° 00.0'N L 5° 00.0'N Example 5: In the vicinity of the equator, the navigator ob-serves the Sun on the meridian, bearing north. The declination of the Sun is 22°05.0'N; the corrected sextant altitude (Ho) is 67°45.0'. (See Figure 716g) Required: The latitude. Solution: L = z - d 90° 00.0' Ho 67° 45.0' z 22° 15.0' d 22° 05.0'N L 0° 10.0'S Figure 716c. Meridian altitude diagram. Figure 716d. Meridian altitude diagram. Figure 716e. Meridian altitude diagram. CALCULATIONS OF CELESTIAL NAVIGATION 275 Example 6: The navigator in high northern latitudes ob-serves the Sun on the celestial meridian, bearing north. The declination of the Sun is 18°46.0'N; the corrected sextant altitude (Ho) is 6°22.0'. (See Figure 716h) Required: The latitude. 
Solution: L = (180° - d) - z, or L = p + h 90° 00.0' Ho 6° 22.0' z 83° 38.0' d 161° 14.0'N L 77° 36.0'N Since the Sun's GP is 83°38.0' north of the observer in high northern latitudes, the GP is beyond the pole, or on the lower branch of the observer's meridian. If an observation is made near but not exactly at merid-ian transit, it can be solved as a meridian altitude, with one modification. Enter Table 24 with the approximate latitude of the observer and the declination of the body, and take out the altitude factor (a). This is the difference between me-ridian altitude and the altitude one minute of time later (or earlier). Next, enter Table 25 with the altitude factor and the difference of time between meridian transit and the time of observation, and take out the correction. Add this value to Ho if near upper transit, or subtract it from Ho if near lower transit. Then proceed as for a meridian altitude, remember-ing that the value obtained is the latitude at the time of observation, not at the time of meridian transit. This method should not be used beyond the limits of Table 25 unless re-duced accuracy is acceptable. This process is called reduction to the meridian, the altitude before adjustment an ex-meridian altitude, and the observation an ex-merid-ian observation. It requires knowledge of the meridian angle, which depends upon knowledge of longitude. Example 7: At 1212, the zone time of LAN on June 2, 2024, clouds have obscured the sun. At 1224, while at latitude 12° 36.3' N, 033° 32.0' W, the lower limb of the sun was ob-served and the observed altitude (Ho) was 79° 48.9'. What is the ex-meridian altitude? Solution: ZT 1224 ZD +2 GMT 1424 14h 30° 28.5' dec N22° 17.8' 24m 6° 00.0' d (+0.3) 0.1' GHA 36° 28.5' dec N22° 17.9' λ 33° 32.0' LHA 2° 56.5' "t" 2° 56.5'W = 11m 46s T-24 a = 10.6" T-25 correction - 24.5' (to be added to Ho) Ho 79° 48.9' Corr. +24.5' Ho 80° 13.4' 90° 89° 60.0' - Ho 80° 13.4' z 9° 46.6' d 22° 17.9' L 12° 31.3' N Figure 716f. Meridian altitude diagram. Figure 716g. Meridian transit in the vicinity of the equator. Figure 716h. Meridian altitude at lower transit. 276 CALCULATIONS OF CELESTIAL NAVIGATION 717. Finding Time of Meridian Transit If a meridian altitude is to be observed other than by chance, a knowledge of the time of transit of the body across the meridian is needed. On a slow-moving vessel, or one traveling approxi-mately east or west, the time need not be known with great accuracy. The right-hand daily page of the Nautical Almanac gives the UT of transit of the Sun and Moon across the Greenwich meridian (approximately LMT of transit across the local meridian) under the heading “Mer. Pass.” In the case of the Moon, an interpolation should be made for longitude. This is performed in the same manner as finding the LMT of moonrise and moon-set (art. 730). In the case of planets, the tabulated accu-racy is normally sufficient without interpolation. The time of transit of the navigational planets is given at the lower right-hand corner of each left-hand daily page of the Nautical Almanac. The tabulated values are for the middle day of the page. These times are the UT of transit across the Greenwich meridian, but are approximately correct for the LMT of transit across the local meridian. Observations are started several minutes in advance and continued until the altitude reaches a maximum and starts to decrease (a minimum and starts to increase for lower transit). The greatest altitude occurs at upper tran-sit (and the least at lower transit). 
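The sign bookkeeping in the upper-transit examples of section 716 condenses into one signed rule; the short sketch below (its function name and north-positive convention are this illustration's own, not the text's) reproduces Examples 1 through 5.

```python
def latitude_from_meridian_altitude(ho_deg, dec_deg, bearing):
    # Upper transit only.  North latitudes and declinations positive, south
    # negative; 'bearing' is the direction in which the body is observed.
    # L = d + z when the body bears south, L = d - z when it bears north.
    z = 90.0 - ho_deg                      # zenith distance
    return dec_deg + z if bearing == 'S' else dec_deg - z

print(latitude_from_meridian_altitude(60.0,  10.0, 'S'))        # Example 1 ->  40.0 (40 N)
print(latitude_from_meridian_altitude(65.0, -10.0, 'S'))        # Example 2 ->  15.0 (15 N)
print(latitude_from_meridian_altitude(60.0, -20.0, 'N'))        # Example 3 -> -50.0 (50 S)
print(round(latitude_from_meridian_altitude(67.75, 22 + 5 / 60, 'N'), 2))  # Example 5 -> -0.17 (0 deg 10'S)
```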
This method is not re-liable if there is a large northerly or southerly component of the vessel's motion, because the altitude at meridian transit changes slowly, particularly at low altitudes. At this time the change due to the vessel's motion may be considerably greater than that due to apparent motion of the body (rotation of the Earth), so that the highest alti-tude occurs several minutes before or after meridian transit. If the moment at which the azimuth is 000° or 180° can be determined accurately, the observation can be made at this time. However, this generally does not pro-vide a high order of accuracy. If the longitude is known with sufficient accuracy, the time of transit can be computed. A number of meth-ods of computation have been devised, but perhaps the simplest is to consider the GHA of the body equal to the longitude if west, or 360°- λ if east, and find the time at which this occurs. Example 1: Find the zone time of meridian transit of the Sun at longitude 156°44.2'W on May 31, 2024. Solution: May 31 λ 156° 44.2'W GHA 156° 44.2' 22h 150° 32.5' 24m47s 6° 11.7' UT 22h24m47s ZD (+)10 (rev.) ZT 12h24m47s This solution is the reverse of finding GHA. The larg-est tabulated value of GHA that does not exceed the desired GHA is found in the tabulation for the day, and recorded, with its time. The difference between this value and the de-sired GHA is then used to enter the “Increments and Corrections” table. The time interval corresponding to this value is added to the time taken from the daily page. If there is a v correction, it is subtracted from the GHA difference before the time interval is determined. The UT can be con-verted to any other kind of time desired. If the Greenwich date differs from the local date at the time of transit (for the Sun this can occur only near the 180th meridian), a second solution may be needed. This possibility can often be avoid-ed by making an approximate mental solution in advance. As the basis for this approximate solution, it is convenient to remember that the UT of Greenwich transit (GHA 0°) is about the same as the LMT of local transit. To find the time of transit of a star, subtract its SHA from the desired GHA to find the desired GHA . Determine the time corre-sponding to GHA , as explained above for the Sun. Aboard a moving vessel, the longitude at transit usual-ly depends upon the time of transit. An approximate men-tal solution may provide a time sufficiently close. In the ab-sence of better information, use ZT 1200 for the Sun. Find the time of transit for the position at this time. The result is the first estimate of the zone time of local apparent noon (LAN) or of meridian transit. For high accuracy a second adjustment may occasionally be needed, but this is seldom justified because of the uncertainty of the vessel's position. If the second adjustment is made, the result is the second estimate. The time of transit of the Sun can also be found by means of apparent time (Section 310). Meridian transit oc-curs at LAT 12h00m00s. This can be converted to any other kind of time desired. Example 2: Find the zone time of meridian transit of the Sun as observed aboard a ship steaming at 20 knots on course 255° on May 31, 2024, using the positional data giv-en in Figure 717 Solution: May 31 360 °00.0 ' λ 112° 55.0'E GHA 247° 05.0' 4h 240° 31.9' 26m12s 6° 33.1' UT 4h26m12s May 31 ZD (-)8 (rev.) 
ZT 12h26m12s (first estimate) The second estimate of the zone time of meridian tran-sit is found by plotting the DR position for the first estimate of the zone time of transit and then applying the dλ between this DR and the 1200 DR to the time found by computation. CALCULATIONS OF CELESTIAL NAVIGATION 277 May 31 360° 00.0' λ 112° 46.0'E GHA 247° 14.0' 4h 240° 31.9' 26m48s 6° 42.1' UT 4h26m48s May 31 ZD (-)8 (rev.) ZT 12h26m48s (second estimate) As shown in Figure 717, the zone times of meridian transit are noted on several successive meridians. This is accomplished by extracting the LMT of meridian transit from the daily page of the Nautical Almanac and converting this time to the zone time for each meridian. The time when the ship and the Sun are on the same meridian can then be obtained by inspection to within approximately one-half minute. 718. Latitude by Polaris Another special method of finding latitude, available in most of the northern hemisphere, utilizes the fact that Polar-is is less than 1° from the north celestial pole. As indicated in Appendix G, the altitude of the elevated pole above the celestial horizon is equal to the latitude. Since Polaris is never far from the pole, its observed altitude (Ho), with suitable correction, is the latitude. The Nautical Almanac has tables based on the follow-ing formula: Latitude - corrected sextant altitude = (- p cos h) + (½p sin p sin2 h tan (latitude)) where p = polar distance of Polaris = 90° - Dec. h = local hour angle of Polaris = LHA Aries + SHA. The value a0, which is a function of LHA Aries only, is the value of both terms of the above formula calculated for mean values of the SHA and Dec. of Polaris, for a mean latitude of 50°, and adjusted by the addition of a constant (58.8'). The value a1, which is a function of LHA Aries and latitude, is the excess of the value of the second term over its mean value for latitude 50°, increased by a constant (0.6') to make it always positive. The value a2, which is a function of LHA Aries and date, is the correction to the first term for the variation of Polaris from its adopted mean po-sition; it is increased by a constant (0.6') to make it positive. The sum of the added constants is 1°, so that: Latitude = corrected sextant altitude - 1° + a0 + a1 + a2 The table at the top of each Polaris correction page is entered with LHA Aries, and the first correction (a0) is taken out by single interpolation. The second and third corrections (a1 and a2, respectively) are taken from the double entry tables without interpolation, using the LHA Aries column with the latitude for the second correction and with the month for the third correction. Example: During morning twilight on June 2, 2024, the 0525 DR position of a ship is lat. 15°43.6'N, long. 110°07.3'W. At watch time 5h24m49s AM the navigator ob-serves Polaris with a marine sextant having an index error of 3.0' on the arc, from a height of eye of 44 feet. The watch is 23" slow on zone time. The hs is 16°24.0'. Solution: Star hs 16° 24.0' IC -3.0' D -6.4' Figure 717. Semidiameter, phase and augmentation. 
278 CALCULATIONS OF CELESTIAL NAVIGATION ha 16° 14.6' St-p -3.3' Ho 16° 11.3' June 2 WT 5h24m49s ΑΜ WE (S)23s ZT 5h25m12s ZD (+)7 UT 12h25m12s June 2 12h 71° 26.9' 25m12s 6° 19.0' GHA 77° 45.9' λ 110° 07.3'W LHA 327° 38.8' Polaris a0 51.2' a1 0.4' a2 0.3' -1° -60.0' sum - 8.1' Ho 16° 11.3' Lat 16° 03.2' Since LHA is an entering value in all three correc-tion tables, and since this is affected by the longitude, other observations, if available, should be solved and plotted first, to obtain a good longitude for the Polaris solution. For greater accuracy, particularly in higher latitudes, and espe-cially if considerable doubt exists as to the longitude, it is good practice to find the azimuth of Polaris and draw the line of position perpendicular to it, through the point de-fined by the latitude found in the computation and the lon-gitude used in the solution. The azimuth at various latitudes to 65°N is given below the Polaris corrections. This table can be extrapolated to higher latitudes, but Polaris would not ordinarily be used much beyond latitude 65°. In the ex-ample given above the azimuth is 000.8°.
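The Polaris reduction is a one-line sum once the three almanac corrections are in hand. The short sketch below (the helper is this illustration's, not the almanac's) reproduces the example of section 718.

```python
def latitude_by_polaris(ho_deg, ho_min, a0, a1, a2):
    # Latitude = corrected sextant altitude - 1 degree + a0 + a1 + a2,
    # with the three corrections given in minutes of arc.
    total_min = ho_deg * 60 + ho_min - 60.0 + a0 + a1 + a2
    return int(total_min // 60), round(total_min % 60, 1)

# The example above: Ho 16 deg 11.3', a0 51.2', a1 0.4', a2 0.3'
print(latitude_by_polaris(16, 11.3, 51.2, 0.4, 0.3))   # (16, 3.2) -> Lat 16 deg 03.2'N
```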
Orthogonal matrix
===============

For matrices with orthogonality over the complex number field, see unitary matrix.

In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors. One way to express this is

Q^T Q = Q Q^T = I,

where Q^T is the transpose of Q and I is the identity matrix. This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to its inverse:

Q^T = Q^-1,

where Q^-1 is the inverse of Q. An orthogonal matrix Q is necessarily invertible (with inverse Q^-1 = Q^T), unitary (Q^-1 = Q^*), where Q^* is the Hermitian adjoint (conjugate transpose) of Q, and therefore normal (Q^* Q = Q Q^*) over the real numbers. The determinant of any orthogonal matrix is either +1 or -1. As a linear transformation, an orthogonal matrix preserves the inner product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation, reflection or rotoreflection. In other words, it is a unitary transformation.

The set of n × n orthogonal matrices, under multiplication, forms the group O(n), known as the orthogonal group. The subgroup SO(n) consisting of orthogonal matrices with determinant +1 is called the special orthogonal group, and each of its elements is a special orthogonal matrix. As a linear transformation, every special orthogonal matrix acts as a rotation.
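A minimal numerical check of these defining identities, using NumPy; the example matrix is a 30° rotation chosen purely for illustration.

```python
import numpy as np

# A rotation by 30 degrees: the transpose equals the inverse, so Q^T Q = I
theta = np.radians(30)
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))        # True: Q^T Q = I
print(np.allclose(Q.T, np.linalg.inv(Q)))     # True: Q^T = Q^-1
print(round(float(np.linalg.det(Q)), 6))      # 1.0: a special orthogonal matrix

# Dot products (and hence lengths) are preserved: Q acts as an isometry
u, v = np.array([3.0, 4.0]), np.array([-1.0, 2.0])
print(np.isclose(u @ v, (Q @ u) @ (Q @ v)))   # True
```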
Overview [edit] Visual understanding of multiplication by the transpose of a matrix. If A is an orthogonal matrix and B is its transpose, the ij-th element of the product AA T will vanish if i≠j, because the i-th row of A is orthogonal to the j-th row of A. An orthogonal matrix is the real specialization of a unitary matrix, and thus always a normal matrix. Although we consider only real matrices here, the definition can be used for matrices with entries from any field. However, orthogonal matrices arise naturally from dot products, and for matrices of complex numbers that leads instead to the unitary requirement. Orthogonal matrices preserve the dot product, so, for vectors u and v in an n-dimensional real Euclidean spaceu⋅v=(Q u)⋅(Q v){\displaystyle {\mathbf {u} }\cdot {\mathbf {v} }=\left(Q{\mathbf {u} }\right)\cdot \left(Q{\mathbf {v} }\right)} where Q is an orthogonal matrix. To see the inner product connection, consider a vector v in an n-dimensional real Euclidean space. Written with respect to an orthonormal basis, the squared length of v is vTv. If a linear transformation, in matrix form Qv, preserves vector lengths, then v T v=(Q v)T(Q v)=v T Q T Q v.{\displaystyle {\mathbf {v} }^{\mathrm {T} }{\mathbf {v} }=(Q{\mathbf {v} })^{\mathrm {T} }(Q{\mathbf {v} })={\mathbf {v} }^{\mathrm {T} }Q^{\mathrm {T} }Q{\mathbf {v} }.} Thus finite-dimensional linear isometries—rotations, reflections, and their combinations—produce orthogonal matrices. The converse is also true: orthogonal matrices imply orthogonal transformations. However, linear algebra includes orthogonal transformations between spaces which may be neither finite-dimensional nor of the same dimension, and these have no orthogonal matrix equivalent. Orthogonal matrices are important for a number of reasons, both theoretical and practical. The n × n orthogonal matrices form a group under matrix multiplication, the orthogonal group denoted by O(n), which—with its subgroups—is widely used in mathematics and the physical sciences. For example, the point group of a molecule is a subgroup of O(3). Because floating point versions of orthogonal matrices have advantageous properties, they are key to many algorithms in numerical linear algebra, such as QR decomposition. As another example, with appropriate normalization the discrete cosine transform (used in MP3 compression) is represented by an orthogonal matrix. Examples [edit] Below are a few examples of small orthogonal matrices and possible interpretations. [1 0 0 1]{\displaystyle {\begin{bmatrix}1&0\0&1\\end{bmatrix}}} (identity transformation) [cos⁡θ−sin⁡θ sin⁡θ cos⁡θ]{\displaystyle {\begin{bmatrix}\cos \theta &-\sin \theta \\sin \theta &\cos \theta \\end{bmatrix}}} (rotation about the origin) [1 0 0−1]{\displaystyle {\begin{bmatrix}1&0\0&-1\\end{bmatrix}}} (reflection across x-axis) [0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0]{\displaystyle {\begin{bmatrix}0&0&0&1\0&0&1&0\1&0&0&0\0&1&0&0\end{bmatrix}}} (permutation of coordinate axes) Elementary constructions [edit] Lower dimensions [edit] The simplest orthogonal matrices are the 1 × 1 matrices and [−1], which we can interpret as the identity and a reflection of the real line across the origin. 
The 2 × 2 matrices have the form [p t q u],{\displaystyle {\begin{bmatrix}p&t\q&u\end{bmatrix}},} which orthogonality demands satisfy the three equations 1=p 2+t 2,1=q 2+u 2,0=p q+t u.{\displaystyle {\begin{aligned}1&=p^{2}+t^{2},\1&=q^{2}+u^{2},\0&=pq+tu.\end{aligned}}} In consideration of the first equation, without loss of generality let p = cos θ, q = sin θ; then either t = −q, u = p or t = q, u = −p. We can interpret the first case as a rotation by θ (where θ = 0 is the identity), and the second as a reflection across a line at an angle of ⁠θ/2⁠. cos⁡θ−sin⁡θ sin⁡θ cos⁡θ,cos⁡θ sin⁡θ sin⁡θ−cos⁡θ{\displaystyle {\begin{bmatrix}\cos \theta &-\sin \theta \\sin \theta &\cos \theta \\end{bmatrix}}{\text{ (rotation), }}\qquad {\begin{bmatrix}\cos \theta &\sin \theta \\sin \theta &-\cos \theta \\end{bmatrix}}{\text{ (reflection)}}} The special case of the reflection matrix with θ = 90° generates a reflection about the line at 45° given by y = x and therefore exchanges x and y; it is a permutation matrix, with a single 1 in each column and row (and otherwise 0): [0 1 1 0].{\displaystyle {\begin{bmatrix}0&1\1&0\end{bmatrix}}.} The identity is also a permutation matrix. A reflection is its own inverse, which implies that a reflection matrix is symmetric (equal to its transpose) as well as orthogonal. The product of two rotation matrices is a rotation matrix, and the product of two reflection matrices is also a rotation matrix. Higher dimensions [edit] Regardless of the dimension, it is always possible to classify orthogonal matrices as purely rotational or not, but for 3 × 3 matrices and larger the non-rotational matrices can be more complicated than reflections. For example, [−1 0 0 0−1 0 0 0−1]and[0−1 0 1 0 0 0 0−1]{\displaystyle {\begin{bmatrix}-1&0&0\0&-1&0\0&0&-1\end{bmatrix}}{\text{ and }}{\begin{bmatrix}0&-1&0\1&0&0\0&0&-1\end{bmatrix}}} represent an inversion through the origin and a rotoinversion, respectively, about the z-axis. Rotations become more complicated in higher dimensions; they can no longer be completely characterized by one angle, and may affect more than one planar subspace. It is common to describe a 3 × 3 rotation matrix in terms of an axis and angle, but this only works in three dimensions. Above three dimensions two or more angles are needed, each associated with a plane of rotation. However, we have elementary building blocks for permutations, reflections, and rotations that apply in general. Primitives [edit] The most elementary permutation is a transposition, obtained from the identity matrix by exchanging two rows. Any n × n permutation matrix can be constructed as a product of no more than n − 1 transpositions. A Householder reflection is constructed from a non-null vector v as Q=I−2 v v T v T v.{\displaystyle Q=I-2{\frac {{\mathbf {v} }{\mathbf {v} }^{\mathrm {T} }}{{\mathbf {v} }^{\mathrm {T} }{\mathbf {v} }}}.} Here the numerator is a symmetric matrix while the denominator is a number, the squared magnitude of v. This is a reflection in the hyperplane perpendicular to v (negating any vector component parallel to v). If v is a unit vector, then Q = I − 2vvT suffices. A Householder reflection is typically used to simultaneously zero the lower part of a column. Any orthogonal matrix of size n × n can be constructed as a product of at most n such reflections. A Givens rotation acts on a two-dimensional (planar) subspace spanned by two coordinate axes, rotating by a chosen angle. It is typically used to zero a single subdiagonal entry. 
Any rotation matrix of size n × n can be constructed as a product of at most ⁠n(n − 1)/2⁠ such rotations. In the case of 3 × 3 matrices, three such rotations suffice; and by fixing the sequence we can thus describe all 3 × 3 rotation matrices (though not uniquely) in terms of the three angles used, often called Euler angles. A Jacobi rotation has the same form as a Givens rotation, but is used to zero both off-diagonal entries of a 2 × 2 symmetric submatrix. Properties [edit] Matrix properties [edit] A real square matrix is orthogonal if and only if its columns form an orthonormal basis of the Euclidean spaceRn with the ordinary Euclidean dot product, which is the case if and only if its rows form an orthonormal basis of Rn. It might be tempting to suppose a matrix with orthogonal (not orthonormal) columns would be called an orthogonal matrix, but such matrices have no special interest and no special name; they only satisfy M T M = D, with D a diagonal matrix. The determinant of any orthogonal matrix is +1 or −1. This follows from basic facts about determinants, as follows: 1=det(I)=det(Q T Q)=det(Q T)det(Q)=(det(Q))2.{\displaystyle 1=\det(I)=\det \left(Q^{\mathrm {T} }Q\right)=\det \left(Q^{\mathrm {T} }\right)\det(Q)={\bigl (}\det(Q){\bigr )}^{2}.} The converse is not true; having a determinant of ±1 is no guarantee of orthogonality, even with orthogonal columns, as shown by the following counterexample. [2 0 0 1 2]{\displaystyle {\begin{bmatrix}2&0\0&{\frac {1}{2}}\end{bmatrix}}} With permutation matrices the determinant matches the signature, being +1 or −1 as the parity of the permutation is even or odd, for the determinant is an alternating function of the rows. Stronger than the determinant restriction is the fact that an orthogonal matrix can always be diagonalized over the complex numbers to exhibit a full set of eigenvalues, all of which must have (complex) modulus1. Group properties [edit] The inverse of every orthogonal matrix is again orthogonal, as is the matrix product of two orthogonal matrices. In fact, the set of all n × n orthogonal matrices satisfies all the axioms of a group. It is a compactLie group of dimension ⁠n(n − 1)/2⁠, called the orthogonal group and denoted by O(n). The orthogonal matrices whose determinant is +1 form a path-connectednormal subgroup of O(n) of index 2, the special orthogonal groupSO(n) of rotations. The quotient groupO(n)/SO(n) is isomorphic to O(1), with the projection map choosing [+1] or [−1] according to the determinant. Orthogonal matrices with determinant −1 do not include the identity, and so do not form a subgroup but only a coset; it is also (separately) connected. Thus each orthogonal group falls into two pieces; and because the projection map splits, O(n) is a semidirect product of SO(n) by O(1). In practical terms, a comparable statement is that any orthogonal matrix can be produced by taking a rotation matrix and possibly negating one of its columns, as we saw with 2 × 2 matrices. If n is odd, then the semidirect product is in fact a direct product, and any orthogonal matrix can be produced by taking a rotation matrix and possibly negating all of its columns. This follows from the property of determinants that negating a column negates the determinant, and thus negating an odd (but not even) number of columns negates the determinant. Now consider (n + 1) × (n + 1) orthogonal matrices with bottom right entry equal to 1. 
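A small NumPy sketch tying the Householder construction above to the determinant property; the vectors are arbitrary choices made for illustration.

```python
import numpy as np

def householder(v):
    # Q = I - 2 v v^T / (v^T v): reflection in the hyperplane perpendicular to v
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    return np.eye(v.shape[0]) - 2.0 * (v @ v.T) / float(v.T @ v)

Q = householder([1.0, 2.0, 2.0])
print(np.allclose(Q.T @ Q, np.eye(3)))                         # orthonormal columns: Q^T Q = I
print(np.allclose(Q, Q.T) and np.allclose(Q @ Q, np.eye(3)))   # symmetric and self-inverse
print(round(float(np.linalg.det(Q)), 6))                       # -1.0: a reflection, not a rotation

# Composing two reflections gives determinant +1, i.e. a rotation
Q2 = householder([0.0, 1.0, -1.0])
print(round(float(np.linalg.det(Q @ Q2)), 6))                  # 1.0
```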
The remainder of the last column (and last row) must be zeros, and the product of any two such matrices has the same form. The rest of the matrix is an n × n orthogonal matrix; thus O(n) is a subgroup of O(n + 1) (and of all higher groups).
$$\begin{bmatrix} & & & 0 \\ & \mathrm{O}(n) & & \vdots \\ & & & 0 \\ 0 & \cdots & 0 & 1 \end{bmatrix}$$
Since an elementary reflection in the form of a Householder matrix can reduce any orthogonal matrix to this constrained form, a series of such reflections can bring any orthogonal matrix to the identity; thus an orthogonal group is a reflection group. The last column can be fixed to any unit vector, and each choice gives a different copy of O(n) in O(n + 1); in this way O(n + 1) is a bundle over the unit sphere S^n with fiber O(n).
Similarly, SO(n) is a subgroup of SO(n + 1); and any special orthogonal matrix can be generated by Givens plane rotations using an analogous procedure. The bundle structure persists: SO(n) ↪ SO(n + 1) → S^n. A single rotation can produce a zero in the first row of the last column, and a series of n − 1 rotations will zero all but the last row of the last column of an n × n rotation matrix. Since the planes are fixed, each rotation has only one degree of freedom, its angle. By induction, SO(n) therefore has
$$(n-1) + (n-2) + \cdots + 1 = \frac{n(n-1)}{2}$$
degrees of freedom, and so does O(n).
Permutation matrices are simpler still; they form, not a Lie group, but only a finite group, the order n! symmetric group S_n. By the same kind of argument, S_n is a subgroup of S_{n+1}. The even permutations produce the subgroup of permutation matrices of determinant +1, the order n!/2 alternating group.
Canonical form
More broadly, the effect of any orthogonal matrix separates into independent actions on orthogonal two-dimensional subspaces. That is, if Q is special orthogonal then one can always find an orthogonal matrix P, a (rotational) change of basis, that brings Q into block diagonal form:
$$P^{\mathrm{T}}QP = \begin{bmatrix} R_1 & & \\ & \ddots & \\ & & R_k \end{bmatrix} \ (n \text{ even}), \qquad P^{\mathrm{T}}QP = \begin{bmatrix} R_1 & & & \\ & \ddots & & \\ & & R_k & \\ & & & 1 \end{bmatrix} \ (n \text{ odd}),$$
where the matrices R_1, ..., R_k are 2 × 2 rotation matrices, and with the remaining entries zero. Exceptionally, a rotation block may be diagonal, ±I. Thus, negating one column if necessary, and noting that a 2 × 2 reflection diagonalizes to a +1 and −1, any orthogonal matrix can be brought to the form
$$P^{\mathrm{T}}QP = \begin{bmatrix} R_1 & & & & & \\ & \ddots & & & & \\ & & R_k & & & \\ & & & \pm 1 & & \\ & & & & \ddots & \\ & & & & & \pm 1 \end{bmatrix}.$$
The matrices R_1, ..., R_k give conjugate pairs of eigenvalues lying on the unit circle in the complex plane; so this decomposition confirms that all eigenvalues have absolute value 1. If n is odd, there is at least one real eigenvalue, +1 or −1; for a 3 × 3 rotation, the eigenvector associated with +1 is the rotation axis.
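The eigenvalue claim is easy to check numerically. The following sketch (my own illustration, not part of the article) orthogonalizes a random Gaussian matrix with a QR decomposition, the same construction used in the Randomization section below, and confirms that the resulting matrix has determinant ±1 and that every eigenvalue has modulus 1.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))   # Q has orthonormal columns

print(np.allclose(Q.T @ Q, np.eye(6)))        # True: Q is orthogonal
print(round(float(np.linalg.det(Q)), 12))     # +1.0 or -1.0
eigvals = np.linalg.eigvals(Q)
print(np.allclose(np.abs(eigvals), 1.0))      # True: every eigenvalue lies on the unit circle
```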
Lie algebra
Suppose the entries of Q are differentiable functions of t, and that t = 0 gives Q = I. Differentiating the orthogonality condition
$$Q^{\mathrm{T}}Q = I$$
yields
$$\dot{Q}^{\mathrm{T}}Q + Q^{\mathrm{T}}\dot{Q} = 0.$$
Evaluation at t = 0 (Q = I) then implies
$$\dot{Q}^{\mathrm{T}} = -\dot{Q}.$$
In Lie group terms, this means that the Lie algebra of an orthogonal matrix group consists of skew-symmetric matrices. Going the other direction, the matrix exponential of any skew-symmetric matrix is an orthogonal matrix (in fact, special orthogonal).
For example, the three-dimensional object physics calls angular velocity is a differential rotation, thus a vector in the Lie algebra $\mathfrak{so}(3)$ tangent to SO(3). Given ω = (xθ, yθ, zθ), with v = (x, y, z) being a unit vector, the correct skew-symmetric matrix form of ω is
$$\Omega = \begin{bmatrix} 0 & -z\theta & y\theta \\ z\theta & 0 & -x\theta \\ -y\theta & x\theta & 0 \end{bmatrix}.$$
The exponential of this is the orthogonal matrix for rotation around axis v by angle θ; setting c = cos θ/2, s = sin θ/2,
$$\exp(\Omega) = \begin{bmatrix} 1-2s^2+2x^2s^2 & 2xys^2-2zsc & 2xzs^2+2ysc \\ 2xys^2+2zsc & 1-2s^2+2y^2s^2 & 2yzs^2-2xsc \\ 2xzs^2-2ysc & 2yzs^2+2xsc & 1-2s^2+2z^2s^2 \end{bmatrix}.$$
Numerical linear algebra
Benefits
Numerical analysis takes advantage of many of the properties of orthogonal matrices for numerical linear algebra, and they arise naturally. For example, it is often desirable to compute an orthonormal basis for a space, or an orthogonal change of bases; both take the form of orthogonal matrices. Having determinant ±1 and all eigenvalues of magnitude 1 is of great benefit for numeric stability. One implication is that the condition number is 1 (which is the minimum), so errors are not magnified when multiplying with an orthogonal matrix. Many algorithms use orthogonal matrices like Householder reflections and Givens rotations for this reason. It is also helpful that, not only is an orthogonal matrix invertible, but its inverse is available essentially free, by exchanging indices.
Permutations are essential to the success of many algorithms, including the workhorse Gaussian elimination with partial pivoting (where permutations do the pivoting). However, they rarely appear explicitly as matrices; their special form allows more efficient representation, such as a list of n indices.
Likewise, algorithms using Householder and Givens matrices typically use specialized methods of multiplication and storage. For example, a Givens rotation affects only two rows of a matrix it multiplies, changing a full multiplication of order n^3 to a much more efficient order n. When uses of these reflections and rotations introduce zeros in a matrix, the space vacated is enough to store sufficient data to reproduce the transform, and to do so robustly. (Following Stewart (1976), we do not store a rotation angle, which is both expensive and badly behaved.)
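To see the Lie-algebra correspondence concretely, here is a small sketch (my own, assuming SciPy is available for the matrix exponential) that exponentiates the skew-symmetric matrix Ω built from an axis and angle and checks that the result is a special orthogonal (rotation) matrix fixing that axis.

```python
import numpy as np
from scipy.linalg import expm

theta = 0.7
x, y, z = np.array([1.0, 2.0, 2.0]) / 3.0    # unit axis v = (x, y, z)

# Skew-symmetric generator in so(3) for the angular-velocity vector theta * v
Omega = theta * np.array([[0.0,  -z,   y],
                          [  z, 0.0,  -x],
                          [ -y,   x, 0.0]])

R = expm(Omega)                               # matrix exponential of a skew-symmetric matrix
print(np.allclose(R.T @ R, np.eye(3)))        # True: R is orthogonal
print(round(float(np.linalg.det(R)), 12))     # 1.0: R is a rotation (special orthogonal)
print(np.allclose(R @ np.array([x, y, z]), [x, y, z]))  # True: the axis v is fixed by R
```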
Decompositions
A number of important matrix decompositions (Golub & Van Loan 1996) involve orthogonal matrices, including especially:
QR decomposition: M = QR, with Q orthogonal and R upper triangular.
Singular value decomposition: M = UΣV^T, with U and V orthogonal and Σ a diagonal matrix.
Eigendecomposition of a symmetric matrix (decomposition according to the spectral theorem): S = QΛQ^T, with S symmetric, Q orthogonal, and Λ diagonal.
Polar decomposition: M = QS, with Q orthogonal and S symmetric positive-semidefinite.
Examples
Consider an overdetermined system of linear equations, as might occur with repeated measurements of a physical phenomenon to compensate for experimental errors. Write Ax = b, where A is m × n with m > n. A QR decomposition reduces A to upper triangular R. For example, if A is 5 × 3 then R has the form
$$R = \begin{bmatrix} \cdot & \cdot & \cdot \\ 0 & \cdot & \cdot \\ 0 & 0 & \cdot \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
The linear least squares problem is to find the x that minimizes ‖Ax − b‖, which is equivalent to projecting b to the subspace spanned by the columns of A. Assuming the columns of A (and hence R) are independent, the projection solution is found from A^T Ax = A^T b. Now A^T A is square (n × n) and invertible, and also equal to R^T R. But the lower rows of zeros in R are superfluous in the product, which is thus already in lower-triangular upper-triangular factored form, as in Gaussian elimination (Cholesky decomposition). Here orthogonality is important not only for reducing A^T A = (R^T Q^T)QR to R^T R, but also for allowing solution without magnifying numerical problems.
In the case of a linear system which is underdetermined, or an otherwise non-invertible matrix, singular value decomposition (SVD) is equally useful. With A factored as UΣV^T, a satisfactory solution uses the Moore–Penrose pseudoinverse, VΣ^+U^T, where Σ^+ merely replaces each non-zero diagonal entry with its reciprocal. Set x to VΣ^+U^T b.
The case of a square invertible matrix also holds interest. Suppose, for example, that A is a 3 × 3 rotation matrix which has been computed as the composition of numerous twists and turns. Floating point does not match the mathematical ideal of real numbers, so A has gradually lost its true orthogonality. A Gram–Schmidt process could orthogonalize the columns, but it is not the most reliable, nor the most efficient, nor the most invariant method. The polar decomposition factors a matrix into a pair, one of which is the unique closest orthogonal matrix to the given matrix, or one of the closest if the given matrix is singular. (Closeness can be measured by any matrix norm invariant under an orthogonal change of basis, such as the spectral norm or the Frobenius norm.) For a near-orthogonal matrix, rapid convergence to the orthogonal factor can be achieved by a "Newton's method" approach due to Higham (1986, 1990), repeatedly averaging the matrix with its inverse transpose. Dubrulle (1999) has published an accelerated method with a convenient convergence test.
For example, consider a non-orthogonal matrix for which the simple averaging algorithm takes seven steps
$$\begin{bmatrix} 3 & 1 \\ 7 & 5 \end{bmatrix} \rightarrow \begin{bmatrix} 1.8125 & 0.0625 \\ 3.4375 & 2.6875 \end{bmatrix} \rightarrow \cdots \rightarrow \begin{bmatrix} 0.8 & -0.6 \\ 0.6 & 0.8 \end{bmatrix}$$
and which acceleration trims to two steps (with γ = 0.353553, 0.565685).
$$\begin{bmatrix} 3 & 1 \\ 7 & 5 \end{bmatrix} \rightarrow \begin{bmatrix} 1.41421 & -1.06066 \\ 1.06066 & 1.41421 \end{bmatrix} \rightarrow \begin{bmatrix} 0.8 & -0.6 \\ 0.6 & 0.8 \end{bmatrix}$$
Gram–Schmidt yields an inferior solution, shown by a Frobenius distance of 8.28659 instead of the minimum 8.12404.
$$\begin{bmatrix} 3 & 1 \\ 7 & 5 \end{bmatrix} \rightarrow \begin{bmatrix} 0.393919 & -0.919145 \\ 0.919145 & 0.393919 \end{bmatrix}$$
Randomization
Some numerical applications, such as Monte Carlo methods and exploration of high-dimensional data spaces, require generation of uniformly distributed random orthogonal matrices. In this context, "uniform" is defined in terms of Haar measure, which essentially requires that the distribution not change if multiplied by any freely chosen orthogonal matrix. Orthogonalizing matrices with independent uniformly distributed random entries does not result in uniformly distributed orthogonal matrices, but the QR decomposition of independent normally distributed random entries does, as long as the diagonal of R contains only positive entries (Mezzadri 2006). Stewart (1980) replaced this with a more efficient idea that Diaconis & Shahshahani (1987) later generalized as the "subgroup algorithm" (in which form it works just as well for permutations and rotations). To generate an (n + 1) × (n + 1) orthogonal matrix, take an n × n one and a uniformly distributed unit vector of dimension n + 1. Construct a Householder reflection from the vector, then apply it to the smaller matrix (embedded in the larger size with a 1 at the bottom right corner).
Nearest orthogonal matrix
The problem of finding the orthogonal matrix Q nearest a given matrix M is related to the Orthogonal Procrustes problem. There are several different ways to get the unique solution, the simplest of which is taking the singular value decomposition of M and replacing the singular values with ones. Another method expresses the orthogonal factor explicitly but requires the use of a matrix square root:
$$Q = M\left(M^{\mathrm{T}}M\right)^{-\frac{1}{2}}$$
This may be combined with the Babylonian method for extracting the square root of a matrix to give a recurrence which converges to an orthogonal matrix quadratically:
$$Q_{n+1} = 2M\left(Q_n^{-1}M + M^{\mathrm{T}}Q_n\right)^{-1}$$
where Q_0 = M. These iterations are stable provided the condition number of M is less than three. Using a first-order approximation of the inverse and the same initialization results in the modified iteration:
$$N_n = Q_n^{\mathrm{T}}Q_n, \qquad P_n = \tfrac{1}{2}Q_n N_n, \qquad Q_{n+1} = 2Q_n + P_n N_n - 3P_n.$$
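As a hedged illustration of the ideas in this section (my own sketch, not the article's code), the snippet below computes the nearest orthogonal matrix to M = [[3, 1], [7, 5]] in two ways: by replacing the singular values with ones, and by the simple "average with the inverse transpose" iteration attributed above to Higham. Both should converge to the rotation [[0.8, −0.6], [0.6, 0.8]] quoted in the text, at the Frobenius distance 8.12404 mentioned there.

```python
import numpy as np

M = np.array([[3.0, 1.0], [7.0, 5.0]])

# Method 1: SVD, then set all singular values to 1
U, _, Vt = np.linalg.svd(M)
Q_svd = U @ Vt

# Method 2: repeatedly average the matrix with its inverse transpose
Q = M.copy()
for _ in range(20):
    Q = 0.5 * (Q + np.linalg.inv(Q).T)

print(np.round(Q_svd, 6))          # approximately [[0.8, -0.6], [0.6, 0.8]]
print(np.round(Q, 6))              # same limit, reached by simple averaging
print(np.linalg.norm(M - Q_svd))   # Frobenius distance, about 8.124
```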
Spin and pin
A subtle technical problem afflicts some uses of orthogonal matrices. Not only are the group components with determinant +1 and −1 not connected to each other, even the +1 component, SO(n), is not simply connected (except for SO(1), which is trivial). Thus it is sometimes advantageous, or even necessary, to work with a covering group of SO(n), the spin group, Spin(n). Likewise, O(n) has covering groups, the pin groups, Pin(n). For n > 2, Spin(n) is simply connected and thus the universal covering group for SO(n). By far the most famous example of a spin group is Spin(3), which is nothing but SU(2), or the group of unit quaternions.
The Pin and Spin groups are found within Clifford algebras, which themselves can be built from orthogonal matrices.
Rectangular matrices
Main article: Semi-orthogonal matrix
If Q is not a square matrix, then the conditions Q^T Q = I and QQ^T = I are not equivalent. The condition Q^T Q = I says that the columns of Q are orthonormal. This can only happen if Q is an m × n matrix with n ≤ m (due to linear dependence). Similarly, QQ^T = I says that the rows of Q are orthonormal, which requires n ≥ m.
There is no standard terminology for these matrices. They are variously called "semi-orthogonal matrices", "orthonormal matrices", "orthogonal matrices", and sometimes simply "matrices with orthonormal rows/columns". For the case n ≤ m, matrices with orthonormal columns may be referred to as orthogonal k-frames and they are elements of the Stiefel manifold.
See also
Biorthogonal system
Notes
"Paul's online math notes", Paul Dawkins, Lamar University, 2008, Theorem 3(c).
"Finding the Nearest Orthonormal Matrix", Berthold K. P. Horn, MIT.
"Newton's Method for the Matrix Square Root", Nicholas J. Higham, Mathematics of Computation, Volume 46, Number 174, 1986.
References
Diaconis, Persi; Shahshahani, Mehrdad (1987), "The subgroup algorithm for generating uniform random variables", Probability in the Engineering and Informational Sciences, 1: 15–32.
Dubrulle, Augustin A. (1999), "An Optimum Iteration for the Matrix Polar Decomposition", Electronic Transactions on Numerical Analysis, 8: 21–25.
Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press.
Higham, Nicholas (1986), "Computing the Polar Decomposition—with Applications", SIAM Journal on Scientific and Statistical Computing, 7 (4): 1160–1174.
Higham, Nicholas; Schreiber, Robert (July 1990), "Fast polar decomposition of an arbitrary matrix", SIAM Journal on Scientific and Statistical Computing, 11 (4): 648–655.
Stewart, G. W. (1976), "The Economical Storage of Plane Rotations", Numerische Mathematik, 25 (2): 137–138.
Stewart, G. W. (1980), "The Efficient Generation of Random Orthogonal Matrices with an Application to Condition Estimators", SIAM Journal on Numerical Analysis, 17 (3): 403–409.
Mezzadri, Francesco (2006), "How to generate random matrices from the classical compact groups", Notices of the American Mathematical Society, 54.
"Orthogonal matrix", Encyclopedia of Mathematics, EMS Press, 2001 Tutorial and Interactive Program on Orthogonal Matrix | v t e Matrix classes | | --- | | Explicitly constrained entries | Alternant Anti-diagonal Anti-Hermitian Anti-symmetric Arrowhead Band Bidiagonal Bisymmetric Block-diagonal Block Block tridiagonal Boolean Cauchy Centrosymmetric Conference Complex Hadamard Copositive Diagonally dominant Diagonal Discrete Fourier Transform Elementary Equivalent Frobenius Generalized permutation Hadamard Hankel Hermitian Hessenberg Hollow Integer Logical Matrix unit Metzler Moore Nonnegative Pentadiagonal Permutation Persymmetric Polynomial Quaternionic Signature Skew-Hermitian Skew-symmetric Skyline Sparse Sylvester Symmetric Toeplitz Triangular Tridiagonal Vandermonde Walsh Z | | Constant | Exchange Hilbert Identity Lehmer Of ones Pascal Pauli Redheffer Shift Zero | | Conditions on eigenvalues or eigenvectors | Companion Convergent Defective Definite Diagonalizable Hurwitz-stable Positive-definite Stieltjes | | Satisfying conditions on products or inverses | Congruent Idempotent or Projection Invertible Involutory Nilpotent Normal Orthogonal Unimodular Unipotent Unitary Totally unimodular Weighing | | With specific applications | Adjugate Alternating sign Augmented Bézout Carleman Cartan Circulant Cofactor Commutation Confusion Coxeter Distance Duplication and elimination Euclidean distance Fundamental (linear differential equation) Generator Gram Hessian Householder Jacobian Moment Payoff Pick Random Rotation Routh-Hurwitz Seifert Shear Similarity Symplectic Totally positive Transformation | | Used in statistics | Centering Correlation Covariance Design Doubly stochastic Fisher information Hat Precision Stochastic Transition | | Used in graph theory | Adjacency Biadjacency Degree Edmonds Incidence Laplacian Seidel adjacency Tutte | | Used in science and engineering | Cabibbo–Kobayashi–Maskawa Density Fundamental (computer vision) Fuzzy associative Gamma Gell-Mann Hamiltonian Irregular Overlap S State transition Substitution Z (chemistry) | | Related terms | Jordan normal form Linear independence Matrix exponential Matrix representation of conic sections Perfect matrix Pseudoinverse Row echelon form Wronskian | | Mathematics portal List of matrices Category:Matrices (mathematics) | Retrieved from " Category: Matrices (mathematics) Hidden categories: All articles with incomplete citations Articles with incomplete citations from January 2013 Webarchive template wayback links Articles with short description Short description matches Wikidata Articles lacking in-text citations from May 2023 All articles lacking in-text citations All articles with unsourced statements Articles with unsourced statements from June 2009 This page was last edited on 14 April 2025, at 21:06(UTC). Text is available under the Creative Commons Attribution-ShareAlike 4.0 License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. Privacy policy About Wikipedia Disclaimers Contact Wikipedia Code of Conduct Developers Statistics Cookie statement Mobile view Search Search [x] Toggle the table of contents Orthogonal matrix 32 languagesAdd topic
Overview of hypertension in adults - UpToDate
Outline: Summary and recommendations; Introduction; Definitions (hypertension; definitions based upon ambulatory and home readings: white coat hypertension, masked hypertension); Blood pressure measurement (office-based blood pressure measurement, ambulatory blood pressure monitoring, self-measured blood pressure monitoring); Primary hypertension (pathogenesis, risk factors for primary (essential) hypertension); Secondary or contributing causes of hypertension; Complications of hypertension; Making the diagnosis of hypertension (detection, diagnosis); Evaluation (history, physical examination, laboratory testing, additional tests, testing for secondary hypertension); Treatment (nonpharmacologic therapy, pharmacologic therapy, who should be treated with pharmacologic therapy, choice of initial antihypertensive agents, combination therapy, blood pressure goals, resistant hypertension, hypertensive urgency and emergency, hypertension in hospitalized patients, discontinuing therapy, systems approach to blood pressure management); Society guideline links; Information for patients; Acknowledgments; References.
Authors: Jan Neil Basile, MD; Michael J Bloch, MD, FACP, FASH, FSVM, FNLA. Section Editor: William B White, MD. Deputy Editors: Karen Law, MD, FACP; John P Forman, MD, MSc. Literature review current through: Jul 2025. This topic last updated: Oct 18, 2024.
INTRODUCTION
High blood pressure is a major risk factor for heart disease and stroke, and the global burden of hypertension is high. This topic provides a broad overview of the definitions, pathogenesis, complications, diagnosis, evaluation, and management of hypertension. Detailed discussions of all these issues are found separately. The reader is directed, when necessary, to more detailed discussions of these issues in other topics.
● Prevalence and control of hypertension (see "Hypertension in adults: Epidemiology")
● Complications of hypertension (see "Cardiovascular risks of hypertension")
● Measurement of blood pressure and diagnosis of hypertension (see "Hypertension in adults: Blood pressure measurement and diagnosis" and "Ambulatory blood pressure monitoring: Indications and procedure" and "White coat and masked hypertension")
● Initial evaluation of patients with hypertension (see "Hypertension in adults: Blood pressure measurement and diagnosis", section on 'Additional evaluation and follow-up')
7.3 Limit theorems for renewal processes
Limit theorems for renewal processes provide key insights into their long-term behavior. These theorems, including the strong law of large numbers and elementary renewal theorem, describe how renewal processes converge to predictable patterns over time. Understanding these limit theorems is crucial for analyzing real-world systems modeled by renewal processes. They allow us to make predictions about long-term average behavior and performance metrics in fields like queueing theory and reliability engineering.
Renewal processes overview
Renewal processes are a fundamental concept in stochastic processes that model systems where events occur repeatedly over time. They provide a framework for analyzing the long-term behavior and steady-state properties of various real-world phenomena (queueing systems, reliability theory).
Definition of renewal process
A renewal process is a stochastic process that consists of a sequence of independent and identically distributed (i.i.d.) non-negative random variables representing the interarrival times between events. The process starts at time zero, and the events occur at random times $T_1, T_2, \ldots$, where $T_n = X_1 + X_2 + \ldots + X_n$ and $X_1, X_2, \ldots$ are the i.i.d.
interarrival times. The counting process $N(t) = \sup\{n : T_n \leq t\}$ represents the number of events that have occurred up to time $t$.
Interarrival times in renewal processes
The interarrival times $X_1, X_2, \ldots$ are non-negative random variables with a common distribution function $F(x) = P(X_i \leq x)$. The distribution of interarrival times can be discrete or continuous, depending on the nature of the renewal process. Common examples of interarrival time distributions include exponential, gamma, and Weibull distributions.
Renewal function and renewal equation
The renewal function $m(t) = E[N(t)]$ represents the expected number of events that have occurred up to time $t$. The renewal function satisfies the renewal equation:
$$m(t) = F(t) + \int_0^t m(t-x)\, dF(x)$$
The renewal equation expresses the relationship between the renewal function and the distribution of interarrival times. It plays a central role in the analysis of renewal processes and the derivation of various limit theorems.
Strong law of large numbers (SLLN)
The strong law of large numbers (SLLN) is a fundamental limit theorem in probability theory that describes the long-term average behavior of a sequence of random variables. In the context of renewal processes, the SLLN provides insights into the asymptotic properties of the process.
Statement of SLLN for renewal processes
Let $N(t)$ be a renewal process with interarrival times having finite mean $\mu = E[X_i]$. Then, with probability one:
$$\lim_{t \to \infty} \frac{N(t)}{t} = \frac{1}{\mu}$$
The SLLN states that the long-term average number of events per unit time converges to the reciprocal of the mean interarrival time.
Proof sketch of SLLN
The proof of the SLLN for renewal processes typically involves the following key steps:
- Express $N(t)$ in terms of the interarrival times $X_1, X_2, \ldots$
- Apply the SLLN for i.i.d. random variables to the sequence of interarrival times
- Use the continuity of the function $x \mapsto 1/x$ to establish the convergence of $N(t)/t$ to $1/\mu$
The proof relies on the independence and identical distribution of the interarrival times, as well as the existence of their finite mean.
Applications of SLLN in renewal theory
The SLLN has important implications in various applications of renewal theory:
- In queueing systems, it helps determine the long-term average arrival rate and service rate
- In reliability theory, it provides insights into the long-term failure rate of components or systems
The SLLN allows for the estimation of key performance metrics and the analysis of system behavior over extended periods.
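As a quick sanity check of the SLLN (my own illustrative sketch, not part of the course notes; the gamma interarrival distribution is an arbitrary choice), the simulation below generates a renewal process and compares $N(t)/t$ with $1/\mu$ for a large $t$.

```python
import numpy as np

rng = np.random.default_rng(42)
shape, scale = 2.0, 1.5          # gamma interarrival times with mean mu = shape * scale = 3.0
mu = shape * scale
t = 100_000.0

# Simulate arrival times T_n = X_1 + ... + X_n well past time t
arrivals = np.cumsum(rng.gamma(shape, scale, size=int(2 * t / mu)))
N_t = np.searchsorted(arrivals, t, side="right")   # N(t) = number of renewals by time t

print(N_t / t)      # long-run rate of events, close to ...
print(1 / mu)       # ... the reciprocal of the mean interarrival time (SLLN)
```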
Elementary renewal theorem
The elementary renewal theorem is a fundamental result in renewal theory that describes the asymptotic behavior of the renewal function $m(t)$. It provides a simple and intuitive approximation for the expected number of events in a renewal process as time tends to infinity.
Statement of elementary renewal theorem
Let $m(t)$ be the renewal function of a renewal process with interarrival times having finite mean $\mu = E[X_i]$. Then:
$$\lim_{t \to \infty} \frac{m(t)}{t} = \frac{1}{\mu}$$
The elementary renewal theorem states that the long-term average number of events per unit time converges to the reciprocal of the mean interarrival time.
Key assumptions and conditions
The elementary renewal theorem relies on the following assumptions:
- The interarrival times are non-negative, independent, and identically distributed random variables
- The mean interarrival time $\mu$ is finite
These assumptions ensure the existence and finiteness of the renewal function and the applicability of the theorem.
Proof outline of elementary renewal theorem
The proof of the elementary renewal theorem typically involves the following steps:
- Express the renewal function $m(t)$ using the renewal equation
- Divide both sides of the renewal equation by $t$ and take the limit as $t \to \infty$
- Apply the key renewal theorem (discussed later) to simplify the limit expression
- Use the properties of the interarrival time distribution to evaluate the limit and obtain the desired result
The proof relies on the renewal equation and the key renewal theorem to establish the asymptotic behavior of the renewal function.
Interpretation and significance
The elementary renewal theorem provides a simple and intuitive approximation for the long-term behavior of a renewal process. It states that the expected number of events grows linearly with time, with a rate equal to the reciprocal of the mean interarrival time. The theorem has practical implications in various fields:
- In queueing theory, it helps estimate the long-term average number of customers served or waiting in a queue
- In reliability engineering, it provides insights into the expected number of failures or replacements over a given time period
The elementary renewal theorem serves as a foundation for more advanced renewal theorems and their applications.
Blackwell's renewal theorem
Blackwell's renewal theorem is a stronger result compared to the elementary renewal theorem, providing more precise information about the asymptotic behavior of the renewal function. It gives a finer approximation of the renewal function by considering the limiting behavior of its increments over fixed intervals.
Formulation of Blackwell's theorem
Let $m(t)$ be the renewal function of a renewal process with interarrival times having finite mean $\mu = E[X_i]$.
Then, for any fixed $h > 0$ (and provided the interarrival distribution is non-lattice):
$$\lim_{t \to \infty} \bigl[ m(t+h) - m(t) \bigr] = \frac{h}{\mu}$$
Blackwell's theorem states that the increments of the renewal function over fixed intervals converge to a constant value proportional to the length of the interval and the reciprocal of the mean interarrival time.
Comparison with elementary renewal theorem
Blackwell's theorem provides a stronger result than the elementary renewal theorem:
- The elementary renewal theorem describes the asymptotic behavior of the renewal function itself, $m(t)$
- Blackwell's theorem focuses on the increments of the renewal function, $m(t+h) - m(t)$, over fixed intervals
Blackwell's theorem gives a more precise characterization of the local behavior of the renewal function, while the elementary renewal theorem captures the global behavior.
Proof sketch of Blackwell's theorem
The proof of Blackwell's theorem typically involves the following key steps:
- Express the increment $m(t+h) - m(t)$ using the renewal equation
- Apply the key renewal theorem to the resulting expression
- Simplify the limit using the properties of the interarrival time distribution and the elementary renewal theorem
- Evaluate the limit to obtain the desired result
The proof relies on the renewal equation, the key renewal theorem, and the elementary renewal theorem to establish the convergence of the increments.
Implications and applications
Blackwell's theorem has important implications in various applications of renewal theory:
- In queueing systems, it helps analyze the transient behavior and convergence properties of performance metrics
- In reliability theory, it provides insights into the expected number of failures or replacements over fixed time intervals
Blackwell's theorem allows for a more refined analysis of renewal processes and their asymptotic properties. It is particularly useful when studying the local behavior and increments of the renewal function.
Key renewal theorem
The key renewal theorem is a powerful result in renewal theory that generalizes the elementary renewal theorem and provides a framework for analyzing the asymptotic behavior of functions related to renewal processes. It deals with the limiting behavior of convolutions involving the renewal function and other functions.
Statement of key renewal theorem
Let $m(t)$ be the renewal function of a renewal process with interarrival times having finite mean $\mu = E[X_i]$, and let $f(t)$ be a directly Riemann integrable function.
Then: lim⁡t→∞∫0 t f(t−x)d m(x)=1 μ∫0∞f(x)d x\lim_{t \to \infty} \int_0^t f(t-x) dm(x) = \frac{1}{\mu} \int_0^\infty f(x) dx lim t→∞​∫0 t​f(t−x)d m(x)=μ 1​∫0∞​f(x)d x The key renewal theorem relates the asymptotic behavior of the convolution of a function $f(t)$ with the renewal function $m(t)$ to the integral of $f(t)$ and the mean interarrival time Relationship to other renewal theorems The key renewal theorem is a generalization of the elementary renewal theorem: Setting $f(t) = 1$ in the key renewal theorem recovers the elementary renewal theorem The key renewal theorem is also related to Blackwell's renewal theorem: Blackwell's theorem can be derived as a special case of the key renewal theorem by considering the function $f(t) = \mathbf{1}_{[0,h]}(t)$ The key renewal theorem provides a unified framework for studying the asymptotic behavior of various functions related to renewal processes Proof outline of key renewal theorem The proof of the key renewal theorem typically involves the following steps: Express the convolution integral using the renewal equation Divide both sides by $t$ and take the limit as $t \to \infty$ Apply the dominated convergence theorem to interchange the limit and integral Use the elementary renewal theorem and the properties of the interarrival time distribution to simplify the limit expression Evaluate the limit to obtain the desired result The proof relies on the renewal equation, the elementary renewal theorem, and the properties of directly Riemann integrable functions Applications in stochastic processes The key renewal theorem has wide-ranging applications in various areas of stochastic processes: In queueing theory, it is used to analyze the asymptotic behavior of performance measures such as waiting times and queue lengths In reliability engineering, it helps study the long-term behavior of failure rates and maintenance policies In insurance mathematics, it is applied to the analysis of claim arrival processes and ruin probabilities The key renewal theorem provides a powerful tool for deriving asymptotic results and understanding the long-term behavior of stochastic systems involving renewal processes Renewal reward processes Renewal reward processes are an extension of renewal processes that incorporate rewards or costs associated with each renewal event They provide a framework for analyzing the accumulation of rewards over time in systems governed by renewal processes Definition and setup A renewal reward process is defined by a sequence of i.i.d. non-negative random variables $(X_n, R_n)$, where: $X_n$ represents the interarrival time between the $(n-1)$-th and $n$-th renewal events $R_n$ represents the reward earned at the $n$-th renewal event The renewal times are denoted by $T_n = \sum_{i=1}^n X_i$, and the total reward earned up to time $t$ is given by: R(t)=∑n=1∞R n 1{T n≤t}R(t) = \sum_{n=1}^\infty R_n \mathbf{1}_{{T_n \leq t}}R(t)=∑n=1∞​R n​1{T n​≤t}​ Renewal reward theorem The renewal reward theorem is a key result in the theory of renewal reward processes, providing an asymptotic expression for the expected total reward earned up to time $t$ Let $\mu = E[X_n]$ be the mean interarrival time and $\nu = E[R_n]$ be the mean reward per renewal. 
Then, under certain conditions:
$$\lim_{t \to \infty} \frac{E[R(t)]}{t} = \frac{\nu}{\mu}$$
The renewal reward theorem states that the long-term average reward earned per unit time converges to the ratio of the mean reward per renewal to the mean interarrival time.
Proof sketch of renewal reward theorem
The proof of the renewal reward theorem typically involves the following steps:
- Express the expected total reward $E[R(t)]$ in terms of the renewal function and the reward distribution
- Apply the key renewal theorem to the resulting expression
- Use the properties of the interarrival time and reward distributions to simplify the limit
- Evaluate the limit to obtain the desired result
The proof relies on the key renewal theorem and the properties of the renewal reward process to establish the asymptotic behavior of the expected total reward.
Examples and applications
Renewal reward processes find applications in various domains:
- In queueing systems with costs or revenues associated with each service completion
- In reliability engineering, where rewards may represent the uptime or performance of a system between failures
- In finance, where rewards can represent the returns or dividends earned over time
Example: Consider a machine that earns a random reward $R_n$ each time it completes a job, with the time between job completions being i.i.d. random variables $X_n$. The renewal reward theorem helps determine the long-term average reward earned by the machine per unit time.
Asymptotic behavior of renewal processes
The asymptotic behavior of renewal processes refers to the limiting properties and convergence results that describe the long-term evolution of the process. Understanding the asymptotic behavior is crucial for making predictions, assessing the stability, and deriving performance measures of systems governed by renewal processes.
Limiting distribution of renewal processes
The limiting distribution of a renewal process characterizes the asymptotic behavior of the age and residual life of the process:
- The age of a renewal process at time $t$, denoted by $A(t)$, represents the time elapsed since the last renewal event
- The residual life at time $t$, denoted by $R(t)$, represents the time remaining until the next renewal event
Under certain conditions, the age and residual life distributions converge to limiting distributions as $t \to \infty$:
- The limiting distribution of the age is given by the equilibrium distribution of the interarrival times
- The limiting distribution of the residual life is related to the excess distribution of the interarrival times
Convergence rates and conditions
The convergence rates of renewal processes describe how quickly the process approaches its limiting behavior. The convergence rates depend on the properties of the interarrival time distribution, such as the existence of moments and the tail behavior. Stronger convergence results, such as exponential convergence or convergence in total variation distance, can be obtained under additional conditions on the interarrival time distribution (e.g., existence of exponential moments).
Connections to other limit theorems
The asymptotic behavior of renewal processes is closely connected to other fundamental limit theorems in probability theory:
- The strong law of large numbers (SLLN) for renewal processes is related to the convergence of the average number of renewals per unit time
- The central limit theorem (CLT) for renewal processes describes the asymptotic normality of the number of renewals and the total reward earned
These limit theorems provide a comprehensive understanding of the long-term behavior and fluctuations of renewal processes.
Practical implications and insights
The asymptotic behavior of renewal processes has significant practical implications in various fields:
- In queueing theory, it helps predict the long-term performance measures, such as the average waiting time and the system utilization
- In reliability engineering, it provides insights into the long-term failure rates and the effectiveness of maintenance strategies
- In insurance mathematics, it helps assess the long-term profitability and solvency of insurance portfolios
Understanding the asymptotic behavior enables informed decision-making, system design, and risk assessment in applications involving renewal processes.
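To close the renewal-reward discussion with something concrete, here is a small simulation (my own sketch, with arbitrarily chosen distributions) that estimates the long-run reward rate R(t)/t and compares it with ν/μ from the renewal reward theorem.

```python
import numpy as np

rng = np.random.default_rng(7)
t = 200_000.0

# Interarrival times X_n ~ Exponential with mean 2.0; rewards R_n ~ Uniform(0, 10) with mean 5.0
mu, nu = 2.0, 5.0
n = int(2 * t / mu)
X = rng.exponential(mu, size=n)
R = rng.uniform(0.0, 10.0, size=n)

T = np.cumsum(X)                    # renewal epochs T_n
total_reward = R[T <= t].sum()      # R(t): rewards collected by time t

print(total_reward / t)             # empirical long-run reward rate ...
print(nu / mu)                      # ... close to nu / mu = 2.5
```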
THE CONWAY-SLOANE CALCULUS FOR 2-ADIC LATTICES
DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK
Abstract. We motivate and explain the system introduced by Conway and Sloane for working with quadratic forms over the 2-adic integers, and prove its validity. Their system is far better for actual calculations than earlier methods, and has been used for many years, but no proof has been published before now.
Date: August 30, 2017. 2010 Mathematics Subject Classification: 11E08. First author supported by NSF grant DMS-1101566.
1. Introduction
Our goal in this paper is to explain the system that Conway and Sloane developed for working with lattices (quadratic forms) over the ring of 2-adic integers Z2. Algorithms were already known for determining when two lattices were isometric, and for finding a canonical form for each one. But these were clumsy. In his influential book on quadratic forms, Cassels even wrote about 2-adic integral canonical forms: "only the masochist is invited to read the rest of this section" [5, §8.4]. To this day, 2-adic lattices retain their reputation for complexity.
But the 2-adic part of a lattice over Z is its most important part. Many questions about Z-lattices reduce to p-adic versions of the same questions, where p varies over the primes. For example, consider the question of whether one Z-lattice is isometric to another. We restrict to the case of rank ≥ 3 and some fixed indefinite signature, because then it is (almost) true that an isometry exists if and only if one exists p-adically for each p. Most questions about p-adic lattices are easy for odd p, including this isomorphism problem. So all the real work takes place at p = 2. Other examples of questions with this same flavor are whether a lattice represents a given number, or whether one lattice admits another as a direct summand (or as a primitive sublattice). See section 2 for a little more on this larger picture.
The Conway-Sloane calculus [8, ch. 15] is much simpler than previous approaches to 2-adic lattices, for example the original papers on invariants and canonical forms by Pall and Jones. It is widely used in modern applications, for example. Their innovation was to introduce the "oddity fusion" and "sign walking" operations, which are notationally simple and generate all equivalences. Strangely, their formal statement of results (their Theorem 10) completely avoids these operations. So it has the same unwieldy feel as the papers of Pall and Jones just mentioned. Proofs of their theorem appear in and in Bartels' unpublished dissertation. But the literature contains no treatment of the calculus as it is actually used. We hope to make it more accessible. What is new here are the "givers" and "receivers" of section 4, and the "signways" of section 6. In particular, we use signways to correct an error in their formulation of canonical forms.
Here is a fairly detailed overview of the calculus. Our goal is to show what it looks like and what it involves, rather than to explain it properly. For that, see the formal development beginning in section 3.
Unimodular lattices: The first step in all approaches to Z2-lattices is to classify the unimodular ones. Conway and Sloane indicate them by symbols like $L = 1^{+2}_{2}$ or $1^{-3}_{3}$ or $1^{-4}_{II}$. The main number 1 says that L is unimodular. If L is even, which is to say that all norms are even, then the subscript is II.
Otherwise, L is diagonalizable and the subscript is the oddity o(L) of L, meaning the sum mod 8 of the diagonal terms in any diagonalization. Amazingly, this is an isometry invariant, although the definition is more complicated if L is non-diagonalizable or non-unimodular; see section 3. The superscript is not a signed number, but rather a sign and a separate nonnegative integer. The integer is dim L. The sign is + or −according to whether det(L) ≡±1 or ±3 mod 8. The sign, dimension and subscript turn out to determine the isometry class of L. We prove this in theorem 5.1. For example, 1−3 3 is isometric to the lattices with diagonal inner product matrices ⟨1, −1, 3⟩, ⟨−1, −1, −3⟩and ⟨3, 3, −3⟩: each is 3-dimensional with determinant ±3 and diagonal entries summing to 3 mod 8. Similarly, 1+2 2 is isometric to ⟨1, 1⟩and ⟨−3, −3⟩. Pass-ing from the symbol to a representative lattice is always this easy. And the symbols also behave cleanly under direct sum: signs multiply and dimensions and subscripts add. For subscripts this means addi-tion in Z/8, together with the special rule I I + t = t. For example, 1+2 2 ⊕1−3 3 ⊕1−4 I I ∼ = 1+9 5 . Jordan decompositions: A general Z2-lattice can be expressed as a direct sum, where each term is got by rescaling a unimodular lattice by a different power of 2. This is called a Jordan decomposition and the terms are called Jordan constituents. Conway and Sloane use symbols like 1+2 I I , 2−2 2 , 4+3 1 and 64−2 I I to indicate them. These lattices are got THE CONWAY-SLOANE 2-ADIC CALCULUS 3 from the unimodular lattices with the same decorations, by scaling inner products by 1, 2, 4 and 64 respectively. The scale of each term means this scaling factor. A general Z2-lattice is a direct sum of such terms, for example (1.1) 12 I I 2−2 4 43 −1161 1 322 I I 64−2 I I 1281 −12561 1512−4 I I where we have suppressed + signs in superscripts and ⊕symbols be-tween the terms. We will use this example many times: it is compli-cated enough to illustrate all possible phenomena. There are two main ways that the case of p an odd prime is simpler than the p = 2 case. The first is that the unimodular classification is simpler: one needs no subscripts. The second is that the Jordan de-composition is unique up to isometry. So when p is odd, understanding a p-adic lattice amounts to a writing down something like (1.1) with-out subscripts. Equivalences between distinct Jordan decompositions is the subtle part of 2-adic lattice theory. Conway and Sloane introduced oddity fusion and sign walking to organize these equivalences. Oddity fusion: An example of nonuniqueness of Jordan decomposi-tion is (1.2) 2−2 4 43 −1 ∼ = 2−2 2 43 1 ∼ = 2−2 −243 5 These are the same except for their subscripts, and in all three cases the sum of the subscripts is 3 mod 8. This illustrates a general phe-nomenon called oddity fusion: when the scales of a sequence of Jordan constituents are consecutive powers of 2, and the subscripts are all nu-merical rather than I I, then those constituents “share” their subscripts. We write [2−243]3 rather than any particular Jordan decomposition from (1.2). A collection of terms that are bracketed in this way is called a compartment, and the final subscript 3 is called the compart-ment oddity. Since [2−243]3 displays less information than any of the three symbols from (1.2), it is more canonical. Most of the simplicity of the Conway-Sloane approach comes from the use of oddity fusion. 
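Before continuing with the running example, here is a minimal Python sketch (ours, not the authors') of the bookkeeping just described: the symbol of an odd diagonal unimodular Z2-lattice, and the direct-sum rule for symbols (signs multiply; dimensions and subscripts add, with I I + t = t). The function names are illustrative assumptions, not anything from the paper.

```python
from math import prod

def unimodular_symbol(diag):
    """(sign, dim, oddity mod 8) of an odd diagonal unimodular lattice <a1, ..., an>."""
    assert all(a % 2 != 0 for a in diag), "type I (odd) diagonal entries only"
    sign = +1 if prod(diag) % 8 in (1, 7) else -1   # det = +-1 vs +-3 mod 8
    return sign, len(diag), sum(diag) % 8           # oddity = sum of the entries mod 8

def direct_sum(*symbols):
    """Combine symbols (sign, dim, t), where t is an integer mod 8 or the string "II".
    Signs multiply, dimensions add, subscripts add mod 8, and II + t = t."""
    sign = prod(s for s, _, _ in symbols)
    dim = sum(n for _, n, _ in symbols)
    odd = [t for _, _, t in symbols if t != "II"]
    return sign, dim, ("II" if not odd else sum(odd) % 8)

print(unimodular_symbol([1, -1, 3]))                      # (-1, 3, 3), i.e. 1^{-3}_3
print(unimodular_symbol([-3, -3]))                        # (1, 2, 2),  i.e. 1^{+2}_2
print(direct_sum((+1, 2, 2), (-1, 3, 3), (-1, 4, "II")))  # (1, 9, 5),  i.e. 1^{+9}_5
```

The printed values agree with the examples above: ⟨1, −1, 3⟩ gives 1−3 3, ⟨−3, −3⟩ gives 1+2 2, and 1+2 2 ⊕ 1−3 3 ⊕ 1−4 I I ∼= 1+9 5.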
After oddity fusion, our example (1.1) becomes (1.3) 12 I I [2−243]3161 1 322 I I 64−2 I I [1281 2561]0512−4 I I The term of scale 16 is not part of the first compartment because of the absence of a term of scale 8. It forms a compartment by itself. We call a symbol like (1.3) a 2-adic symbol. Sign walking: Oddity fusion does not generate all equivalences be-tween 2-adic Jordan decompositions. For example, (1.3). turns out to 4 DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK be isometric to each of 1−2 I I [22 43]−1161 1 322 I I 64−2 I I [1281 2561]0512−4 I I (1.4) 12 I I [224−3]−1161 1 322 I I 64−2 I I [1281 2561]0512−4 I I (1.5) 12 I I [2−2 4−3]−116−1 −3 322 I I 64−2 I I [1281 2561]0512−4 I I (1.6) In each case we have negated the signs of two terms of (1.3), and changed by 4 the oddity of each compartment involved. The under-brackets indicate the terms whose signs were changed. The rules for which pairs of terms admit such a sign walk are subtle enough that we postpone them to section 6. But to illustrate the flexibility they provide, we show which terms can interact with each other via some chain of sign walks: 12 I I [2−243]3161 1 322 I I 64−2 I I [1281 2561]0512−4 I I We call these groups of terms signways, suggesting highways along which signs can move. In the language of Conway and Sloane, the clas-sification of 2-adic lattices amounts to the theorem that sign walking generates all equivalences between 2-adic symbols. (Theorem 6.2.) Some equivalence relations are like mazes, where it is not clear which “moves” to make when seeking an equivalence between two objects, or perhaps only an arcane recipe for these moves is available. This is the nature of earlier classifications of 2-adic lattices. Happily, sign walking is simple. For any given 2-adic symbol, the allowed sign walks generate an elementary abelian 2-group, acting simply transitively on the 2-adic symbols that are equivalent to it. (See the proof of theorem 6.3.) One can use sign walking to define a canonical form: walk all the −signs as far left as possible, canceling pairs of such signs when possible. Then all signs will be + except perhaps for the first terms of the signways. The main virtues of the Conway-Sloane notation are that (i) it allows easy passage between the notation and the lattices, (ii) it behaves well under direct sum and scaling, and duality too, (iii) no more information is displayed than necessary, and (iv) rather than being constrained to a single canonical form, one can easily pass between all possible 2-adic symbols for a particular lattice. See the extended example 6.5 for an illustration of (iv): we find all the Z2-lattices whose sum with ⟨2, 2⟩is isometric to (1.3). After some (strictly) motivational background in section 2, we cover some technical preliminaries in section 3. Then section 4 defines what we call a fine decomposition of a 2-adic lattice and describe some moves THE CONWAY-SLOANE 2-ADIC CALCULUS 5 between them. In section 5 we classify the unimodular lattices and introduce oddity fusion. In section 6 we define 2-adic symbols and prove that sign walking generates all equivalences between them. We also discuss canonical forms and how to define some numerical invariants of 2-adic lattices. The final section is devoted to the proof of theorem 4.4. This note developed from part of a course on quadratic forms given by the first author at the University of Texas at Austin, with his lecture treatment greatly improved by the second and third authors. 2. 
The larger picture This section is meant to describe how the 2-adic lattice theory fits into the larger theory of integer quadratic forms. It is not needed later in the paper. A lattice over Z or the p-adic integers Zp means a free module equipped with a symmetric bilinear pairing that takes values in the fraction field Q or Qp. An isometry from one such lattice to another means a module isomorphism that preserves inner products. In many situations one wants to understand whether two Z-lattices are isomet-ric. If L is a Z-lattice, then L ⊗Zp is a Zp-lattice. If L′ is another Z-lattice, then L, L′ are said to lie in the same genus if they have the same signature and L ⊗Zp and L′ ⊗Zp are isometric for all primes p. Isometric Z-lattices obviously lie in the same genus. Until work of Eichler in the 1950s, it was open whether the converse held in the indefinite case in dimension ≥3. Eichler discovered a subtle equivalence relation, whose equivalence classes are called spinor genera. Each genus consists of finitely many spinor genera, and each spinor genus consists of finitely many isometry classes of lattices. But some mild hypotheses promote “finitely many” to “one”: Theorem 2.1 (Eichler). An indefinite spinor genus of dimension ≥3 consists of exactly one isometry class. Theorem 2.2. An indefinite genus G of dimension ≥3 consists of exactly one spinor genus, unless there exists some prime p such that G⊗ Zp is p-adically diagonalizable, with the p-power parts of the diagonal terms all being distinct. If G is integral, then this exceptional case can only occur if p(n 2) | det G. Note that the integer det G and the Zp-lattice G ⊗Zp are well-defined, by the definition of a genus. See or [5, Ch. 10, Thm. 1.4] for the-orem 2.1. See [8, Ch. 15, Thm. 19], or the proof of the Corollary to Lemma 3.7 in [5, Ch. 10], for theorem 2.2. 6 DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK Except in quite small dimension, lattices with the distinct-powers-of-p property in Theorem 2.2 do not seem to occur in nature. So these two theorems form the basis for our statement in the introduction that for indefinite lattices of dimension ≥3, it is “almost” true that genera coincide with isometry classes. Even if a genus (indefinite of rank ≥3) does have the distinct-powers-of-p property, it might still consist of a single isometry class, and one can check this. It is just no longer guaranteed. We have explained why questions of isometries of Z-lattices often reduce to Zp-lattices. For p > 2, a Zp-lattice has only one isomor-phism class of Jordan decomposition. And each Jordan constituent J is characterized by its scale, dimension and sign. In this case there is no subtlety to the isometry classification. So the p = 2 case accounts for most of the isometry analysis. (For odd p, the sign is defined as the Legendre symbol det J p  = ±1, always abbreviated to ±. Although we did not say so in the introduction, when p = 2 the sign of J is the Kronecker’s generalization det J 2  of the Legendre symbol.) A second common question about a Z-lattice L is whether a given lattice M occurs a direct summand. When L is the only lattice in its genus, and the signatures of M and L are compatible, this reduces to the question of whether M ⊗Zp is a summand of L ⊗Zp for all primes p. For p > 2 this is almost trivial: M ⊗Zp is a summand if and only if each constituent of M is lower-dimensional than the cor-responding constituent of L, or else has the same dimension and sign. 
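As a brief computational aside (ours, not the paper's), the two sign conventions mentioned above are easy to evaluate: the Legendre symbol via Euler's criterion for an odd prime p, and the mod-8 rule at p = 2.

```python
def sign_odd_p(det, p):
    """Legendre symbol (det/p) for an odd prime p and det not divisible by p."""
    assert det % p != 0
    return +1 if pow(det % p, (p - 1) // 2, p) == 1 else -1

def sign_p2(det):
    """Sign at p = 2 of an odd determinant: + for +-1, - for +-3 mod 8."""
    assert det % 2 != 0
    return +1 if det % 8 in (1, 7) else -1

print(sign_odd_p(3, 5))   # -1: 3 is not a square mod 5
print(sign_p2(-3))        # -1: -3 = 5 mod 8, so the sign is minus
```

Returning to the question of when one lattice occurs as a direct summand of another: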
The corresponding question for p = 2 is more subtle—see example 6.5 for a taste of the required analysis. A similar common question is whether M occurs as a primitive sub-lattice of L. Under the same conditions as in the previous paragraph, this reduces to the problem of building a suitable candidate for the or-thogonal complement of M ⊗Zp in L ⊗Zp, for each prime p. The case of odd p is no longer trivial, but still the p = 2 case usually dominates the analysis. See for an extended calculation of this sort. 3. Preliminaries Now we begin our formal exposition. Henceforth, an integer means an element of the ring Z2 of 2-adic integers, and we write Q2 for Z2’s fraction field. A lattice means a finite-dimensional free module over Z2 equipped with a Q2-valued symmetric bilinear form. We assume known that two odd elements of Z2 differ by a square factor if and only if they are congruent mod 8. All lattices considered will be nondegenerate. A lattice is integral if all inner products are THE CONWAY-SLOANE 2-ADIC CALCULUS 7 integers. An integral lattice is called even if all its elements have even norm (self-inner-product), and odd otherwise. Given some basis for a lattice, one can seek an orthogonal basis by Gram-Schmidt diagonal-ization. This almost works but not quite. Instead it shows that every lattice is a direct sum of 1-dimensional lattices and copies of the two lattices 0 1 1 0  and 2 1 1 2  scaled by powers of 2. (See [5, p. 117] or §4.4 of [8, Ch. 15].) Now suppose U is a unimodular lattice, meaning that it is integral and the natural map from U to its dual lattice U ∗:= Hom(U, Z2) is an isomorphism. Equivalently, the determinant of any inner product matrix is a unit of Z2. The sign of U means the Kronecker symbol det U 2  . Recall that this is defined as +1 or −1 according to whether det U ≡±1 or ±3 mod 8. We will always abbreviate ±1 to ±. The Kro-necker symbol has special properties that are important in quadratic reciprocity. But these play no role in this paper; for us it is just a way to record partial information about the congruence class of an odd number mod 8. We only refer to it as the Kronecker symbol because that name already belongs to this function. Now consider a lattice got by scaling the inner product on a uni-modular lattice. We say it has type I or I I according to whether the unimodular lattice is odd or even. For example, ⟨2⟩has type I, although it is an even lattice, because it was got by scaling the odd lattice ⟨1⟩. On the other hand, 4 2 2 4  has type I I, because it was got by scaling the even unimodular lattice 2 1 1 2  . The last invariant of a 2-adic lattice L that we need is Z/8-valued. It is called the oddity of L and written o(L). It is defined in [8, p. 371], and only depends on the isometry type of the quadratic vector space L ⊗Q2. Its definition is strange and the fact that it is an invariant is surprising. (See [8, Ch. 15, §6.1–6.2] for a proof.) But it is very easy to compute, especially for unimodular lattices, which are all one needs it for in the Conway-Sloane calculus. To compute o(L), first diagonalize its inner product matrix (over Q2). Then add up the odd parts of the diagonal entries mod 8, and add 4 for each diagonal entry which is an antisquare. An antisquare is defined as a 2-adic number of the form 2oddu where u ≡±3 mod 8. The (imperfect) motivation for this language is that such a number fails to be a square in Q2 for two separate reasons: neither the 2-part nor the odd part are squares. 
(The imperfection is that −2 has the same properties but does not count as an antisquare. The name comes from the corresponding construction with 2 replaced by an odd prime. In that case the corresponding property and the definition of "antisquare" are equivalent.) For example, ⟨1, 3, 3, 7⟩ has oddity 1 + 3 + 3 + 7 ≡ 6 mod 8. It turns out that every odd unimodular lattice can be diagonalized over Z2 (lemma 4.1). The resulting diagonal terms must of course be odd, so they cannot be antisquares. So the oddity is just their sum mod 8. The calculation for even unimodular lattices is even easier: the oddity is always 0. To see this, express any such lattice as a sum of copies of 0 1 1 0 and 2 1 1 2. One can diagonalize these over Q2, yielding ⟨1, −1⟩ and ⟨2, 6⟩. The first has no antisquares, so o(⟨1, −1⟩) = 1 − 1 ≡ 0 mod 8. The second has one antisquare (namely 6), and the odd parts of its diagonal entries are 1 and 3. So o(⟨2, 6⟩) = 1 + 3 + 4 ≡ 0 mod 8. It follows that the oddity of every even unimodular lattice vanishes. For unimodular lattices, the dimension, sign, type and oddity turn out to be a complete set of invariants. We prove this as theorem 5.1. Conway and Sloane express the isometry class of a unimodular lattice as 1±n t, where ± is the sign, n is the dimension and t is either the formal symbol I I or an integer mod 8. We write I I for even lattices, and the oddity for odd lattices. So the subscript implicitly records the type. We just saw that type I I lattices have oddity 0, so in this case there is no point to recording it. Except for special cases, we will not use this notation until we have classified the unimodular lattices in theorem 5.1. The special cases are in dimension 1 and the type I I case in dimension 2, where the classification is easy. Because ±1 and ±3 are the only square classes of units in Z2, the 1-dimensional unimodular lattices are ⟨1⟩, ⟨−1⟩, ⟨3⟩ and ⟨−3⟩. Their symbols are 1+1 1, 1+1 −1, 1−1 3 and 1−1 −3 respectively. Note that the subscript determines the sign; this is unique to dimension 1. For an even unimodular lattice, we mentioned above that Gram-Schmidt orthogonalization fails to diagonalize it, but does express it as a sum of copies of 0 1 1 0 and 2 1 1 2. It follows that these are the only even unimodular lattices in 2 dimensions. They are non-isometric because their determinants are different. Their symbols are 1+2 I I and 1−2 I I respectively. If q is a power of 2 then we write q±n I I or q±n t for the lattice got from 1±n I I or 1±n t by rescaling all inner products by q. For example, 2−2 I I has inner product matrix 4 2 2 4. The number q is called the scale of the symbol (or lattice). Caution: in the type I case, the subscript is the oddity of the unimodular lattice, not the scaled lattice. These may differ by 4 because of the antisquare term in the definition of oddity. For example, the 2-adic lattice 2−1 3 ∼= ⟨6⟩ has oddity −1, not 3. Just like for unimodular lattices, until we prove theorem 5.1 we will only use the symbols q±1 t and q±2 I I. We will usually omit the symbol ⊕ from direct sums, for example writing 1+1 −1 1−1 3 4+2 I I for 1+1 −1 ⊕ 1−1 3 ⊕ 4+2 I I. To lighten the notation one usually suppresses plus signs in superscripts, for example 11 −1 1−1 3 42 I I, and/or suppresses the dimensions when they are 1, for example 1+ −1 1− 3 42 I I. One could suppress even more, such as leaving the subscript blank for summands of type I I.
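As an aside, the general oddity recipe described earlier in this section (diagonalize over Q2, sum the odd parts of the diagonal entries mod 8, and add 4 for each antisquare) is easy to script. The following Python sketch is ours, not the paper's, and assumes the diagonal entries are given as nonzero ordinary integers.

```python
def two_adic_split(a):
    """Write a nonzero integer a as 2^v * u with u odd; return (v, u)."""
    v = 0
    while a % 2 == 0:
        a //= 2
        v += 1
    return v, a

def is_antisquare(a):
    """Antisquare: 2^odd * u with u = +-3 mod 8."""
    v, u = two_adic_split(a)
    return v % 2 == 1 and u % 8 in (3, 5)

def oddity(diag):
    """Oddity mod 8 of the diagonal form <diag[0], diag[1], ...>."""
    total = 0
    for a in diag:
        _, u = two_adic_split(a)
        total += u + (4 if is_antisquare(a) else 0)
    return total % 8

print(oddity([1, 3, 3, 7]))   # 6
print(oddity([1, -1]))        # 0
print(oddity([2, 6]))         # 0   (6 is the one antisquare)
print(oddity([6]))            # 7, i.e. -1 mod 8, matching 2^{-1}_3 = <6>
```

These reproduce the values computed above. Returning to the notational abbreviations: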
But at some point abbreviations become more error-prone than helpful. 4. Fine symbols In this section we work with a finer decomposition of a lattice than the usual Jordan decomposition. The goal is to establish that certain “moves” between such decompositions do not change the isometry class of the lattice. This will make the corresponding facts for Jordan de-compositions in the next section easy to state and prove. Theorem 4.4, proven in section 7, captures the full classification of 2-adic lattices, but in a very clumsy way. The rest of this paper recasts this classification into a simpler form. By a fine decomposition of a lattice L we mean a direct sum decom-position in which each summand (or term) is one of q1 ±1, q−1 ±3 or q±2 I I , with the last case only occurring if every term of that scale has type I I. The name reflects the fact that there no further decomposition of the summands is possible. A fine decomposition always exists, by starting with a decomposition as a sum of q±1 t ’s and q±2 I I ’s and applying the next lemma repeatedly. Lemma 4.1. If ε, ε′ are signs then 1ε1 t 1ε′2 I I admits an orthogonal basis. Proof. Write M and N for the two summands and consider the three elements of (M/2M) ⊕(N/2N) that lie in neither M/2M nor N/2N. Any lifts of them have odd norms and even inner products. Applying row and column operations to their inner product matrix leads to a diagonal matrix with odd diagonal entries. □ In order to discuss the relation between distinct fine decompositions of a given lattice, we introduce the following special language for 1-dimensional lattices only. We call q+1 1 and q−1 −3 “givers” and q+1 −1 and q−1 3 “receivers”. (Type I I lattices are neither givers nor receivers.) The idea is that a giver can give away two oddity and remain a legal symbol (q+ 1 →q+ −1 or q− −3 →q− 3 ), while a receiver can accept two oddity. We often use a subscript R or G in place of the oddity, so that 1+ G and 1− G mean 1+ 1 and 1− −3, while 1+ R and 1− R mean 1+ −1 and 1− 3 . Scaling inner 10 DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK products by −3 negates signs and preserves giver/receiver status, while scaling them by −1 preserves signs and reverses giver/receiver status. A fine symbol means a sequence of symbols q±2 I I and q± R or G. We replace R and G by numerical subscripts whenever convenient, and regard two symbols as the same if they differ by permuting terms. Two scales are called adjacent if they differ by a factor of 2. Lemma 4.2 (Sign walking). Consider a fine symbol and two terms of it that satisfy one of the following conditions: (0) they have the same scale; (1) they have adjacent scales and different types; (2) they have adjacent scales and are both givers or both receivers; (3) their scales differ by a factor of 4 and they both have type I. Consider as well the fine symbol got by negating the signs of these terms, and also changing both from givers to receivers or vice-versa in case (2). Then the two fine symbols represent isometric lattices. An alternate name for (3) might be sign jumping. Conway and Sloane informally describe it as a composition of two sign walks of type (1). For example, 11 12+0 I I 41 1 →1−1 −32−0 I I 41 1 →1−1 −32+0 I I 4−1 −3. They also observe that this doesn’t really make sense: 2−0 I I is illegal because the 0-dimensional lattice has determinant 1, hence sign +. Proof. 
It suffices to prove the following isometries, where ε, ε′ are signs, X represents R or G, and X′ represents R or G: (0) 1ε2 I I 1ε′2 I I ∼ = 1−ε2 I I 1−ε′2 I I and 1ε X 1ε′ X′ ∼ = 1−ε X 1−ε′ X′ (1) 1ε2 I I 2ε′ X′ ∼ = 1−ε2 I I 2−ε′ X′ and 1ε′ X′ 2ε2 I I ∼ = 1−ε′ X′ 2−ε2 I I (2) 1ε G 2ε′ G ∼ = 1−ε R 2−ε′ R (3) 1ε X 4ε′ X′ ∼ = 1−ε X 4−ε′ X′ The first part of (0) is trivial except for the assertion 1+2 I I 1+2 I I ∼ = 1−2 I I 1−2 I I . Choose a norm 4 vector x of the right side, that is not twice a lattice vector. Then choose y to have inner product 1 with x. The span of x and y is even of determinant ≡−1 mod 8, so it is a copy of 1+2 I I . Its orthogonal complement must also be even unimodular, hence one of 1±2 I I , hence 1+2 I I by considering the determinant. The second part of (0) is best understood using numerical subscripts: we must show 1ε t 1ε′ t′ ∼ = 1−ε t+4 1−ε′ t′+4, i.e., ⟨t, t′⟩∼ = ⟨t + 4, t′ + 4⟩. To see this, note that the left side represents t + 4t′ ≡t + 4 mod 8, that this is odd and therefore corresponds to some direct summand, and the determinants of the two sides are equal. Note that givers and receivers always have oddities congruent to 1 and −1 mod 4 respectively, so THE CONWAY-SLOANE 2-ADIC CALCULUS 11 changing a numerical subscript by 4 doesn’t alter giver/receiver status. Furthermore, the sign on 1ε t changes since exactly one of t, t + 4 lies in {±1} and the other in {±3}, and similarly for 1ε′ t′ . The same argument works for (3), in the form 1ε t 4ε′ t′ ∼ = 1−ε t+4 4−ε′ t′+4. For the first part of (1) we choose a basis for 1ε2 I I with inner product matrix 2 1 1 0 or 2  where the lower right corner depends on ε. Replacing the second basis vector by its sum with a generator of 2ε′ X′ changes the lower right corner by 2 mod 4. This toggles the 2 × 2 determinant between −1 and 3 mod 8. Therefore it gives an even unimodular sum-mand of determinant −3 times that of 1ε2 I I , hence of sign −ε. Since the overall determinant is an invariant, the determinant of its complement is therefore −3 times that of 2ε′2 X′. So the complement is got from 2ε′2 X′ by scaling by −3. We observed above that scaling by −3 negates the sign and preserves giver/receiver status, so the complement is 2−ε′2 X′ . The second part of (1) follows from the first by passing to dual lattices and then scaling inner products by 2. (It is easy to see that the dual lattice has the same symbol with each scale replaced by its reciprocal.) (2) After rescaling by −3 if necessary to take ε = +, it suffices to prove 1+ G 2ε′ G ∼ = 1− R 2−ε′ R , i.e., ⟨1, 2⟩∼ = ⟨3, 6⟩and ⟨1, −6⟩∼ = ⟨3, −2⟩. In each case one finds a vector on the left side whose norm is odd and appears on the right, and then compares determinants. □ Further equivalences between fine symbols are phrased in terms of “compartments”. A compartment means a set of type I terms, the set of whose scales forms a sequence of consecutive powers of 2, and which is maximal with these properties. For example in 12 I I 2− G 2− R 4+ G 16− R, the set of scales that have type I are {2, 4, 16}. These fall into two strings of consecutive powers of 2, namely {2, 4} and {16}. So there are two compartments, which are the sums of the terms of the corresponding scales. That is, one compartment is 2− G 2− R 4+ G and the other is 16− R. Lemma 4.3 (Giver permutation and conversion). Consider a fine sym-bol and the symbol obtained by one of the following operations. Then the lattices they represent are isometric. (1) Permute the subscripts G and R within a compartment. 
(2) Convert any four G’s in a compartment to R’s, or vice versa. Proof. Giver permutation, meaning operation (1), can be achieved by repeated use of the isomorphisms 1ε G 1ε′ R ∼ = 1ε R 1ε′ G and 1ε G 2ε′ R ∼ = 1ε R 2ε′ G (scaled up or down as necessary). To establish these we first rescale by −3 if necessary, to take ε = + without loss of generality. This leaves the cases ⟨1, −1⟩∼ = ⟨−1, 1⟩, ⟨1, 3⟩∼ = ⟨−1, −3⟩, ⟨1, −2⟩∼ = ⟨−1, 2⟩ and ⟨1, 6⟩∼ = ⟨−1, 10⟩. One proves each by finding a vector on the 12 DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK left whose norm is odd and appears on the right, and then comparing determinants. For giver conversion, meaning operation (2), we assume first that more than one scale is present in the compartment, so we can choose terms of adjacent scales. Assuming four G’s are present in the com-partment, we permute a pair of them to our chosen terms, then use sign walking to convert these terms to receivers. This negates both signs. Then we permute these R’s away, replacing them by the second pair of G’s, and repeat the sign walking. This converts the second pair of G’s to R’s and restores the original signs. For the case that only a single scale is present we first treat what will be the essential cases, namely 1+ G 1+ G 1+ G 1+ G ∼ = 1+ R 1+ R 1+ R 1+ R and 1− G 1+ G 1+ G 1+ G ∼ = 1− R 1+ R 1+ R 1+ R That is, ⟨1, 1, 1, 1⟩∼ = ⟨−1, −1, −1, −1⟩and ⟨−3, 1, 1, 1⟩∼ = ⟨3, −1, −1, −1⟩ In the first case we exhibit a suitable basis for the left side, namely (2, 1, 1, 1) and the images of (−1, 2, 1, −1) under cyclic permutation of the last 3 coordinates. In the second we note that the left side is the orthogonal sum of the span of (1, 0, 0, 0) and (0, 1, 1, 1), which is a copy of ⟨−3, 3⟩, and the span of (0, −1, 1, 0) and (0, 0, 1, −1), which is a copy of 1−2 I I . Since each of these is isometric to its scaling by −1, so is their direct sum. Now we treat the general case when only a single scale is present. Suppose there are at least 4 givers. By scaling by a power of 2 it suffices to treat the unimodular case. By sign walking we may change the signs on any even number of them, so we may suppose at most one −is present. (Recall that sign walking between terms of the same scale doesn’t affect subscripts G or R.) By the previous paragraph we may convert four G’s to R’s. Then we reverse the sign walking operations to restore the original signs. □ The following theorem captures the full classification of 2-adic lat-tices. It is already simpler than the results in and . But fine symbols package information poorly, and much greater simplification is possible. We will develop this in the next two sections. Theorem 4.4 (Equivalence of fine symbols). Two fine symbols repre-sent isometric lattices if and only if they are related by a sequence of sign walking, giver permutation and giver conversion operations. Although it is natural to state the theorem here, its proof depends on Theorem 5.1. The first place we use it is to prove Theorem 6.2, so THE CONWAY-SLOANE 2-ADIC CALCULUS 13 logically the proof could go anywhere in between. But in fact we have deferred it to section 7 to avoid breaking the flow of ideas. 5. Jordan symbols In this section we define and study the Jordan decompositions of a lattice. The main point is that “oddity fusion” neatly wraps up all the giver permutation and conversion operations from the previous section. We begin by classifying the unimodular lattices: Theorem 5.1 (Unimodular lattices). 
A unimodular lattice is charac-terized by its dimension, type, sign and oddity. As mentioned in section 3, the oddity is always 0 for even unimod-ular lattices. One checks this by diagonalizing 1±2 I I over Q2, obtaining ⟨1, −1⟩and ⟨2, 6⟩, and computing the oddity directly. Proof. Consider unimodular lattices U, U ′ with the same dimension, type, sign and oddity, and fine symbols F, F ′ for them. The product of the signs in F equals the sign of U, and similarly for U ′. Since U and U ′ have the same sign, we may use sign walking to make the signs in F the same as in F ′. If U, U ′ are even then the terms in F are now the same as in F ′, so U ∼ = U ′. So suppose U, U ′ are odd. By giver permutation, and exchanging F and F ′ if necessary, we may suppose that all non-matching subscripts are R in F and G in F ′. And by giver conversion we may suppose that the number of non-matching subscripts is k ≤3. Since changing a receiver to a giver without changing the sign increases the oddity by two, o(U ′) = o(U) + 2k. Since o(U ′) ≡o(U) mod 8 we have k = 0. So the terms in F are the same as in F ′, and U ∼ = U ′. □ We now have license to use the notation q±n t and q±n I I from section 3. We say that such a symbol is legal if it represents a lattice. The legal symbols are q+0 I I q±n I I with n positive and even q+1 ±1 and q−1 ±3 q+2 0 , q+2 ±2, q−2 4 and q−2 ±2 q±n t with n > 2 and t ≡n mod 2 A good way to mentally organize these is to regard the conditions for dimension ̸= 1, 2 as obvious, remember that q2 4 and q−2 0 are illegal, and remember that the subscript of q±1 t determines the sign. 14 DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK The illegality of 12 4 and 1−2 0 follows by considering all possible sums 1ε1 t 1ε′1 t′ . When the signs ε, ε′ are different, one subscript is ±1 and the other is ±3, so the total oddity cannot be 0. When the signs are the same, either both subscripts are in {±1} or both are in {±3}, so the total oddity cannot be 4. This calculation used the simple rules for direct sums of unimodular lattices: signs multiply and dimensions and subscripts add, subject to the special rules I I + I I = I I and I I + t = t. A Jordan decomposition of a lattice means a direct sum decompo-sition whose summands (called constituents) are unimodular lattices scaled by different powers of 2. By the Jordan symbol for the decom-position we mean the list of the symbols (or terms) q±n I I and q±n t for the summands. An example we will use in this section and the next, and mentioned already in the introduction, is (5.1) 12 I I 2−2 6 43 −3 161 1 322 I I 64−2 I I 1281 1 2561 −1 512−4 I I It is sometimes convenient and sometimes annoying to allow trivial (0-dimensional) terms in a Jordan decomposition. The main difficulty of 2-adic lattices is that a given lattice may have several inequivalent Jordan decompositions. The purpose of the Conway-Sloane calculus is to allow one to move easily between all pos-sible isometry classes of Jordan decompositions. Some of the data in the Jordan symbol remains invariant under these moves. First, if one has two Jordan decompositions for the same lattice L, then each term in one has the same dimension as the term of that scale in the other. (Scaling reduces the general case to the integral case, which follows by considering the structure of the abelian group L∗/L.) Second, the type I or I I of the term of any given scale is independent of the Jordan decomposition. (One can show this directly, but we won’t need it until after Theorem 6.2, which implies it.) 
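Before turning to the data that is not invariant, here is a small Python sketch (ours, not the paper's) of the legality test for an individual constituent symbol q±n t, following the list given above; the subscript is passed as the string "II" for even type, or as an integer mod 8 for odd type.

```python
def is_legal(sign, n, t):
    """Legality of the constituent symbol q^{sign*n}_t, per the list above."""
    if t == "II":
        # q^{+0}_{II}, or q^{+-n}_{II} with n positive and even
        return (n == 0 and sign == +1) or (n > 0 and n % 2 == 0)
    t %= 8
    if n == 0:
        return False                  # an odd-type subscript needs a nontrivial lattice
    if n == 1:
        # q^{+1}_{+-1} and q^{-1}_{+-3}: the subscript determines the sign
        return (sign == +1 and t in (1, 7)) or (sign == -1 and t in (3, 5))
    if n == 2:
        # q^{+2}_0, q^{+2}_{+-2}, q^{-2}_4 and q^{-2}_{+-2}
        return (sign == +1 and t in (0, 2, 6)) or (sign == -1 and t in (2, 4, 6))
    return t % 2 == n % 2             # n > 2: only the parity condition t = n mod 2

print(is_legal(+1, 2, 4), is_legal(-1, 2, 0))   # False False (the two illegal 2-dimensional symbols)
print(is_legal(-1, 3, 3))                       # True, e.g. 1^{-3}_3
```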
The signs and oddities of the constituents are not usually invariants of L. We define a compartment of a Jordan decomposition just as we did for fine decompositions: a set of type I constituents, whose scales form a sequence of consecutive powers of 2, which is maximal with these properties. The example above has three compartments: 2−2 6 43 −3, 161 1 and 1281 1 2561 −1. The oddity of a compartment means the sum of its subscripts (mod 8 as always). Caution: this depends on the Jordan decomposition, and is not an isometry invariant of the underlying lat-tice. See Lemma 6.1 for how it can change. Despite this non-invariance, the oddity of a compartment is still useful: THE CONWAY-SLOANE 2-ADIC CALCULUS 15 Lemma 5.2 (Oddity fusion). Consider a lattice, a Jordan symbol J for it, and the Jordan symbol J′ got by reassigning all the subscripts in a compartment, in such a way that that all resulting terms are legal and the compartment’s oddity remains unchanged. Then J, J′ represent isometric lattices. Proof. By discarding the rest of J we may suppose it is a single com-partment. The argument is similar to the odd case of Theorem 5.1. We refine J, J′ to fine symbols F, F ′. By hypothesis, the terms of J′ have the same signs as those of J. It follows that for each scale, the product of the signs of F’s terms of that scale is the same as the corresponding product for F ′. Therefore sign walking between equal-scale terms lets us suppose that the signs in F are the same as in F ′. Recall from the proof of Lemma 4.2(0) that this sort of sign walking amounts to the iso-morphisms 1ε1 t 1ε′1 t′ ∼ = 1−ε1 t+4 1−ε′1 t′+4, which don’t change the compartment’s oddity. Giver permutation and conversion don’t change a compartment’s oddity either. This is because changing a giver to a receiver, without changing the sign of that term, reduces the numerical subscript by 2. So changing one giver to a receiver, and simultaneously one receiver to a giver, leaves the compartment’s oddity unchanged, as does converting between four givers and four receivers. By giver permutation and possibly swapping F with F ′, we may suppose that the non-matching subscripts are R’s in F and G’s in F ′. By giver conversion we may suppose k ≤3 subscripts fail to match, and the assumed equality of oddities shows k = 0. So the fine symbols are the same and the lattices are isometric. □ 6. 2-adic symbols One can translate sign walking between fine symbols to the language of Jordan symbols, but it turns out to be fussier than necessary. Things become simpler once we incorporate oddity fusion into the notation as follows. The 2-adic symbol of a Jordan decomposition means the Jordan symbol, except that each compartment is enclosed in brackets, the enclosed terms are stripped of their subscripts, and their sum in Z/8 (the compartment’s oddity) is attached to the right bracket as a subscript. For our example (5.1) this yields 12 I I [2−243]31322 I I 64−2 I I [1281 2561]0512−4 I I If a compartment consists of a single term, such as 1, then one usually omits the brackets: (6.1) 12 I I [2−243]3161 1 322 I I 64−2 I I [1281 2561]0512−4 I I 16 DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK Lemma 5.2 shows that the isometry type of a lattice with given 2-adic symbol is well-defined. When a compartment has total dimension ≤2 then its oddity is constrained by its overall sign in the same way as for an odd unimodular lattice of that dimension. For compartments of dimension 1 this is the same constraint as before. 
In 2 dimensions, [1+2−]0 and [1−2+]0 are illegal (cannot come from any fine symbol) because each term 1+ ··· or 2+ ··· would have ±1 as its subscript, while each term 1− ··· or 2− ··· would have ±3 as its subscript. There is no way to choose subscripts summing to 0. The same reasoning shows that [1+2+]4 and [1−2−]4 are also illegal. Lemma 6.1 (Sign walking for 2-adic symbols). Consider the 2-adic symbol of a Jordan decomposition of a lattice, and two nontrivial terms of it that satisfy one of the following: (1) they have adjacent scales and different types; (2) they have adjacent scales and type I, and their compartment either has dimension > 2 or compartment oddity ±2; (3) they have type I, their scales differ by a factor of 4, and the term between them is trivial. Then the 2-adic symbol got by negating their signs, and changing by 4 the oddity of each compartment that contains at least one of the terms, represents an isometric lattice. As remarked after Lemma 4.2, one could also call (3) sign jumping. One can use it even if the intermediate term were nontrivial, by using two sign walks of type (2) resp. (1) if the intermediate term had type I resp. I I. Our example 12 I I [2−243]3161 1 322 I I 64−2 I I [1281 2561]0512−4 I I can walk to 1−2 I I [22 43]−1161 1 322 I I 64−2 I I [1281 2561]0512−4 I I by (1), or 12 I I [224−3]−1161 1 322 I I 64−2 I I [1281 2561]0512−4 I I by (2), or 12 I I [2−2 4−3]−116−1 −3 322 I I 64−2 I I [1281 2561]0512−4 I I by (3), or 12 I I [2−243]3161 1 322 I I 64−2 I I [1281 256−1]45124 I I by (1), but no sign walk is possible between the terms of scales 128 and 256. (Underbrackets indicate the terms involved in the moves.) Proof. Refine the Jordan decomposition to a fine decomposition F, apply the corresponding sign walk operation (1)–(3) from Lemma 4.2 to suitable terms of F, and observe the corresponding change in the Jordan symbol. In case (2) some care is required because Lemma 4.2 requires both terms of F to be givers or both to be receivers. If the THE CONWAY-SLOANE 2-ADIC CALCULUS 17 compartment has dimension > 2 then we may arrange this by giver permutation (which preserves the compartment oddity and therefore doesn’t change the 2-adic symbol). In dimension 2 the hypothesis (compartment oddity) ≡±2 rules out the case that one is a giver and one a receiver, since givers and receivers have subscripts 1 and −1 mod 4. □ Theorem 6.2 (Equivalence of 2-adic symbols). Suppose given two lat-tices with Jordan decompositions. Then the lattices are isometric if and only if the 2-adic symbols of these decompositions are related by a sequence of the sign walk operations in Lemma 6.1. Proof. The previous lemma shows that sign walks preserve isometry type. So suppose the lattices are isometric. Refine the Jordan decom-positions to fine decompositions, apply Theorem 4.4 to obtain a chain of intermediate fine symbols, and consider the corresponding 2-adic symbols. In the proof of Lemma 5.2 we explained why giver permu-tation and conversion don’t change the 2-adic symbol, and that sign walking between same-scale terms also has no effect. The effects of the remaining sign walk operations are recorded in Lemma 6.1. □ A lattice may have more than one 2-adic symbol, but the only re-maining ambiguity lies in the positions of the signs: Theorem 6.3. Suppose two given lattices have 2-adic symbols with the same scales, dimensions, types and signs. Then the lattices are isometric if and only if the symbols are equal, which amounts to having the same compartment oddities. Proof. 
If a 2-adic symbol S of a lattice L admits a sign walk affecting the signs of the terms of scales 2i, 2j then we write ∆i,j(S) for the resulting symbol. No sign walks affect the conditions for ∆i,j to act on S, since they don’t change the type of any term or the oddity mod 4 of any compartment. So we may regard ∆i,j as acting simultaneously on all 2-adic symbols for L. By its description in terms of negating signs and adjusting compartments’ oddities, ∆i,j may be regarded as an element of order 2 in the group {±1}T × (Z/8)C where T is the number of terms present and C is the number of compartments. The assertion of the lemma is that if a sequence of sign walks on S restores the original signs, then it also restores the original oddities. We rephrase this in terms of the subgroup A of {±1}T × (Z/8)C generated by the ∆i,j. Namely: projecting A to the {±1}T factor has trivial kernel. This is easy to see because the ∆i,j are ordered so that they 18 DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK are ∆i1,j1, . . . , ∆in,jn with i1 < j1 ≤i2 < j2 ≤· · · ≤in < jn. The linear independence of their projections to {±1}T is obvious. □ To get a canonical symbol for a lattice L one starts with any 2-adic symbol S and walks all the minus signs as far left as possible, canceling them when possible. To express this formally, we say two scales can interact if their terms are as in Lemma 6.1. (We noted in the previous proof that the ability of two scales to interact is independent of the particular 2-adic symbol representing L.) We define a signway as an equivalence class of scales, under the equivalence relation generated by interaction. The language suggests a pathway or highway along which signs can move. Signs can move (or cancel) between two adjacent scales except when both terms have type I I, or when both terms have dimension 1 and together form a compartment of oddity 0 or 4. And signs can jump across a missing scale, provided both terms have type I. In our example the signways are the following: 12 I I [2−243]3161 1 322 I I 64−2 I I [1281 2561]0512−4 I I Note that the absence of a term of scale 8 doesn’t break the first sign-way, while signs cannot move between the terms of the “bad” compart-ment [1281 2561]0. Each signway has a term of smallest scale, and by sign walking we may suppose that all minus signs are moved to these terms or canceled with each other. Then we say the symbol is in canonical form, which for our example is 1−2 I I −1161 1 322 I I 64−2 I I [1281 256−1]45124 I I Theorem 6.3 implies: Corollary 6.4 (Canonical form). Given lattices L, L′ and 2-adic sym-bols S, S′ for them in canonical form, L ∼ = L′ if and only if S = S′. □ Conway and Sloane’s discussion of the canonical form is in terms of “trains”, each of which is a union of one or more of our signways. Our example has two trains, the second consisting of the last two signways. They asserted that signs can walk up and down the length of a train, so that after walking signs leftward, there is at most one sign per train. But this is not true, as pointed out in . One cannot walk the minus sign in [1281 256−1]4 leftward because there is no way to assign the subscripts in 128− ±3 256+ ±1 so that the compartment has oddity 0. Example 6.5. As an extended demonstration of sign walking, we deter-mine the lattices M with the property that M ⊕⟨2, 2⟩∼ = L where L is THE CONWAY-SLOANE 2-ADIC CALCULUS 19 from (6.1). Note that ⟨2, 2⟩= 22 2. Obviously we require M = 1±2 I I 4±3 ? 16±1 ? 
32±2 I I 64±2 I I [128±1 256±1]?512±4 I I We have marked the signways with underbrackets. The 3rd and 4th of these become the 2nd and 3th signways of L after summing with 22 2. No sign walking is possible between distinct signways. So the isomorphism M ⊕22 2 ∼ = L shows that these signways in M must coincide with the corresponding signways in L. Next, the first two signways of M fuse with the 22 2 summand to form the first signway of L. The overall sign of this in L is −, so the total number of −signs in the first two signways of M must be odd. By sign walking in the second signway of M, we reduce to M ∼ =  1−2 I I 43 t161 u322 I I or 12 I I4−3 t 161 u322 I I  ⊕64−2 I I 0512−4 I I where t and u are unknowns. Now we sum with 22 2 to get L ∼ =  1−2 I I 2+t161 u322 I I or 12 I I[224−3]2+t161 u322 I I  ⊕· · · Then we sign walk between the first two terms, or between the second and third, to make the signs match those in (6.1). That is, L ∼ =  12 I I[2−243]6+t161 u322 I I or 12 I I[2−243]6+t161 u322 I I  ⊕· · · Both this and (6.1) represent L, and the signs match, so the subscripts must too. Therefore 6 + t = 3 and u = 1. That is, M ∼ = 1±2 I I 4∓3 5 161 1322 I I64−2 I I 0512−4 I I where one ambiguous sign is + and the other is −. The two possibilities are distinct because their scale 1 terms have different signs and are involved in no sign walks. (More formally: canonicalization does not affect the first signway. So after canonicalization the symbols will still be different.) It follows that the isometry group of L has two orbits on summands isomorphic to ⟨2, 2⟩. One can use the ideas of the proof of Theorem 6.3 to give numerical invariants for lattices, if one prefers them to a canonical form. For example, The following invariants come from Theorem 10 of [8, Ch. 15], which is proven in . One records the scales, dimensions and types, the adjusted oddity of each compartment, and the overall sign of each signway (the product of the signs of the signway’s terms). Here the adjusted oddity of a compartment means its oddity plus 4 for each − sign appearing in its 1st, 3rd, 5th, . . . position, with each −sign after that compartment counted as occurring in the “(k + 1)st” position, 20 DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK where k is the number of terms in the compartment. It is easy to check that sign walking leaves these quantities unchanged. These invariants are clumsy because of the definition of adjusted odd-ity. The adjusted oddity also has the ugly feature that it depends on signs outside the signway containing the relevant compartment. This goes against the principle we used to great effect in example 6.5: dis-tinct signways are isolated from each other. Furthermore, these invariants are really just a complicated way of recording the canonical form while pretending not to. We will show how to construct the unique 2-adic symbol in canonical form having the same invariants as any chosen 2-adic lattice. To do this we first observe that the types of the compartments, together with the adjusted oddities (hence the compartment oddities mod 4), determine the signways. The sign of the first term of each signway is equal to the given overall sign of that signway, and the other signs are +. The signs then allow one to compute the compartment oddities from the adjusted oddities. 7. 
Equivalences between fine decompositions In this section we give the deferred proof of Theorem 4.4: two fine symbols represent isometric lattices if and only if they are related by sign walks and giver permutation and conversion. Logically, it belongs anywhere between Theorems 5.1 and 6.2. The next two lemmas are standard; our proofs are adapted from Cassels [5, pp. 120–122]. Lemma 7.1. Suppose L is an integral lattice, that x, x′ ∈L have the same odd norm, and that their orthogonal complements x⊥, x′⊥are either both odd or both even. Then x⊥∼ = x′⊥. Proof. First, (x −x′)2 is even. If it is twice an odd number then the reflection in x−x′ is an isometry of L. This reflection exchanges x and x′, so it gives an isometry between x⊥and x′⊥. This argument applies in particular if x · x′ is even. So we may restrict to the case that x · x′ is odd and (x −x′)2 is divisible by 4. Next, note that (x + x′)2 differs from (x −x′)2 by 4x · x′ ≡4 mod 8. So by replacing x′ by −x′ we may suppose that (x−x′)2 ≡4 mod 8. This replacement is harmless because ±x′ have the same orthogonal complement. If it happens that (x −x′) · L ⊆2Z2 then the reflection in x −x′ preserves L and we may argue as before. So suppose some y ∈L has odd inner product with x −x′. Then the inner product matrix of x, x −x′, y is   1 0 ? 0 0 1 ? 1 ?   mod 2, THE CONWAY-SLOANE 2-ADIC CALCULUS 21 which has odd determinant. Therefore these three vectors span a uni-modular summand of L, so L has a Jordan decomposition whose uni-modular part L0 contains both x and x′. Note that x’s orthogonal complement in L0 is even just if its orthogonal complement in L is, and similarly for x′. So by discarding the rest of the decomposition we may suppose L = L0, without losing our hypothesis that x⊥, x′⊥are both odd or both even. Now, x⊥is unimodular with det(x⊥) = (det L)/x2 and oddity o(x⊥) = o(L) −x2, and similarly for x′. Since x2 = x′2, Theorem 5.1 implies x⊥∼ = x′⊥. □ Lemma 7.2. Suppose L is an integral lattice and U, U ′ ⊆L are iso-metric even unimodular sublattices. Then U ⊥∼ = U ′⊥. Proof. U ⊕⟨1⟩has an orthogonal basis x1, . . . , xn by Lemma 4.1, and we write x′ 1, . . . , x′ n for the basis for U ′ ⊕⟨1⟩corresponding to it under some isometry U ∼ = U ′. Apply the previous lemma n times, starting with L ⊕⟨1⟩. (In the nth application we need the observation that the orthogonal complements of U, U ′ in L are both even or both odd. This holds because these orthogonal complements are even or odd according to whether L is.) □ Lemma 7.3. Suppose L is an integral lattice and that 1+ G is a term in some fine symbol for L. Then we may apply a sequence of sign walking and giver permutation and conversion operations to transform any other fine symbol F for L into one possessing a term 1+ G. Proof. We claim first that after some of these operations we may sup-pose F has a term 1+ ···. Because L is odd, F’s terms of scale 1 have the form 1± R or G. If F has more than one such term then we can obtain a sign + by sign walking, so suppose it has only one term, of sign −. If there are type I terms of scale 4 then again we can use sign walking, so suppose all scale 4 terms have type I I. We can do the same thing if there are any terms 2±2 I I . Or terms 2±1 R or G, if the compartment consist-ing of the scale 1 and 2 terms has at least two givers or two receivers. This holds in particular if there is more than one term of scale 2. 
So we have reduced to the case F = 1− R or G 4··· I I 8··· ··· · · · or F = 1− R or G 2± R or G 4··· I I 8··· ··· · · · where in the latter case the subscripts cannot be both G’s or both R’s. (Here and below, the superscript and subscript dots indicate any pos-sibilities for the number of terms at that scale, and their decorations in that position. In particular, there might be no terms of that scale. The dots at the end indicate terms of higher scale than the ones al-ready listed.) So in the second case there is one R and one G. By giver 22 DANIEL ALLCOCK, ITAMAR GAL, AND ALICE MARK permutation we may suppose F = 1− R or G 4··· I I 8··· ··· · · · or F = 1− G 2± R 4··· I I 8··· ··· · · · None of these cases occur, because these lattices don’t represent 1 mod 8, contrary to the hypothesis that some fine decomposition has a term 1+ G. This non-representation is easy to see because L is ⟨±3⟩or ⟨5, −2⟩or ⟨5, 6⟩, plus a lattice in which all norms are divisible by 8. So we may suppose F has a term 1+ ···, and must show that after further operations we may suppose it has a term 1+ G. In particular, we may suppose that our term 1+ ··· is 1+ R. If the compartment C containing it has any givers then we may use giver permutation to complete the proof. So suppose C consists of receivers. If there are 4 receivers then we may convert them to givers, reducing to the previous case. If C has two terms of different scales, neither of which is our 1+ R term, then we may use sign walking to convert them to givers, again reducing to a known case. Only a few cases remain, none of which actually occur, by a similar argument to the previous paragraph. Namely, after more sign walking we may take F to be 1+ R 2± R or 1+ R 2+ R 2± R  4··· I I · · · or 1+ R or 1+ R 1± R or 1+ R 1+ R 1± R  2··· I I · · · The first set of possibilities is  ⟨−1, −2⟩or ⟨−1, 6⟩or ⟨−1, −2, −2⟩or ⟨−1, −2, 6⟩  ⊕(a lattice with all norms divisible by 8) none of which represent 1 mod 8. The second set of possibilities is  ⟨−1⟩or ⟨−1, −1⟩or ⟨−1, 3⟩or ⟨−1, −1, −1⟩or ⟨−1, −1, 3⟩  ⊕(a lattice with all norms divisible by 4) and only the last two cases represent 1 mod 8. But in these cases every vector x of norm 1 mod 8 projects to ¯ x := (1, 1, 1) in U/2U, where U is the summand ⟨−1, −1, −1⟩or ⟨−1, −1, 3⟩. There are no odd-norm vectors orthogonal to x since the orthogonal complement of ¯ x in U/2U consists entirely of self-orthogonal vectors. So while these lattices admit norm 1 summands, they do not admit fine decompositions with 1+ G terms. □ Lemma 7.4. Suppose ε = ±. Then Lemma 7.3 holds with 1ε2 I I in place of 1+ G. Proof. If F has two terms of scale 1, or a scale 2 term of type I, then we can use sign walking. The only remaining case is F = 1−ε2 I I 2··· I I 4··· ··· · · · . Write U for the 1−ε2 I I summand and note that any two elements of L THE CONWAY-SLOANE 2-ADIC CALCULUS 23 with the same image in U/2U have the same norm mod 4. Direct calculation shows that the norms of the nonzero elements of U/2U are 0, 0, 2 or 2, 2, 2 mod 4, depending on ε. Now consider the summand U ′ ∼ = 1ε2 I I of L that we assumed to exist. By considering norms mod 4 we see that U ′/2U ′ →U/2U cannot be injective, so it must have image 0 or Z/2. Since all self-inner products in U/2U vanish, we obtain the absurdity that all inner products in U ′ are even. □ Proof of Theorem 4.4. The “if” part has already been proven in Lem-mas 4.2 and 4.3, so we prove “only if”. We assume the result for all lattices of lower dimension. 
By scaling by a power of 2 we may suppose L is integral and some inner product is odd, so each of F and F′ has a nontrivial unimodular term. First suppose L is odd, so the unimodular terms of F and F′ have type I. By rescaling L by an odd number we may suppose F has a term 1+ G. By Lemma 7.3 we may apply our moves to F′ so that it also has a term 1+ G. The orthogonal complements of the corresponding summands of L are both even (if the unimodular Jordan constituents are 1-dimensional) or both odd (otherwise). By Lemma 7.1 these orthogonal complements are isometric. They come with fine decompositions, given by the remaining terms in F, F′. By induction on dimension these fine decompositions are equivalent by our moves. If L is even then the same argument applies, using Lemmas 7.4 and 7.2 in place of Lemmas 7.3 and 7.1. □

References

[1] Daniel Allcock, The reflective Lorentzian lattices of rank 3, Memoirs of the A.M.S., no. 1033, vol. 220, Nov. 2012.
[2] Daniel Allcock, Prenilpotent pairs in the E10 root lattice, Math. Proc. Camb. Phil. Soc. 164 (2017) 473–483.
[3] Klaus Bartels, Zur Klassifikation quadratischer Gitter über diskreten Bewertungsringen, Ph.D. Dissertation, Göttingen 1988.
[4] Jan Hendrik Bruinier, Stephan Ehlen, and Eberhard Freitag, Lattices with many Borcherds products, Mathematics of Computation 85 (2016) 1953–1981.
[5] J.W.S. Cassels, Rational Quadratic Forms, Academic Press 1968.
[6] M. Eichler, Quadratische Formen und orthogonale Gruppen, Springer-Verlag, 1952.
[7] Gerald Höhn and Geoffrey Mason, The 290 fixed-point sublattices of the Leech lattice, Journal of Algebra 448 (2016) 618–637.
[8] J.H. Conway and N.J.A. Sloane, Sphere Packings, Lattices and Groups, 2nd ed., Springer 1993.
[9] Burton W. Jones, A canonical quadratic form for the ring of 2-adic integers, Duke Math. J. 11 (1944) 715–727.
[10] Gordon Pall, The arithmetical invariants of quadratic forms, Bulletin of the A.M.S. 51 (1945) 185–197.
[11] Ivica Turkalj, Totally-reflective genera of integral lattices, preprint 2016, arXiv:1503.04428.
[12] Fei Xu, Minimal norm Jordan splittings of quadratic lattices over complete dyadic discrete valuation rings, Arch. Math. 81 (2003) 402–415.

Department of Mathematics, University of Texas at Austin
Department of Mathematics, University of Texas at Austin
School of Mathematical and Statistical Sciences, Arizona State University
StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-. Bronchogenic Cyst. Sanjeev Sharma; Faten Limaiem; Sara A. Collier; Mouna Mlika. Last Update: November 9, 2024. Continuing Education Activity Bronchogenic cysts are rare congenital malformations that arise from abnormal budding of the foregut during early embryonic development, typically occurring along the tracheobronchial tree, in the mediastinum, or within the lung parenchyma. These cysts, filled with fluid or mucus and lined with respiratory epithelium, often remain asymptomatic but may cause compressive symptoms or recurrent infections if they enlarge, become infected, or rupture. Diagnosis is typically based on imaging, with computed tomography or magnetic resonance imaging revealing a well-defined cystic lesion. While asymptomatic cysts can sometimes be managed with observation, symptomatic or complicated cysts usually require surgical excision, as they carry risks of infection, respiratory compromise, and, rarely, malignant transformation. Clinicians participating in this course gain a comprehensive understanding of bronchogenic cysts, from their embryologic origin and pathophysiology to their clinical presentation, differential diagnosis, and management options. The course provides in-depth insights into radiographic identification, histological characteristics, and the latest surgical and nonsurgical treatment approaches for bronchogenic cysts, enabling clinicians to make informed decisions regarding patient care. Objectives: Identify the clinical presentations and radiologic findings that suggest bronchogenic cysts in pediatric and adult patients. Differentiate bronchogenic cysts from other mediastinal and pulmonary cystic lesions using imaging and histopathology. Assess patients for potential surgical versus conservative management options based on clinical presentation and cyst location. Coordinate follow-up care and monitoring plans with interprofessional teams to manage recurrence risk and detect potential complications early. Introduction Bronchogenic cysts, first described in 1859, are rare congenital malformations originating from abnormal budding of the primitive foregut during embryonic development. These cysts are most commonly located in the mediastinum or lungs (see Image. Bronchogenic Cyst). However, on rare occasions, they may arise along the entire foregut pathway, sometimes in ectopic locations, such as the diaphragm and retroperitoneum. Characteristically lined with respiratory epithelium and typically filled with mucus or fluid, bronchogenic cysts can exhibit various clinical and radiologic presentations. Most bronchogenic cysts are benign lesions with a slow growth rate and an insidious onset.
Often asymptomatic, these cysts may be discovered incidentally on imaging. However, bronchogenic cysts can also present with significant clinical complications if they enlarge or become infected, leading to symptoms such as cough, dyspnea, chest pain, or recurrent respiratory infections. In some cases, bronchogenic cysts cause acute symptoms due to airway obstruction or infection, underscoring the importance of recognizing and managing these polymorphic malformations in clinical practice. Etiology Bronchogenic cysts are congenital malformations that arise from abnormal budding or diverticulation of the primitive foregut, typically between the third and seventh weeks (days 26 to 40) of gestation. This early developmental error leads to the formation of fluid-filled, blind-ending pouches that originate along the tracheobronchial tree, resulting in diverse presentation sites due to the origin of the foregut endodermal tissue. These cysts most commonly appear in the mediastinum or lung parenchyma, though they can also be found in rarer, ectopic sites such as the neck, retroperitoneum, diaphragm, or subcutaneous tissues. The specific timing of abnormal budding influences the cyst’s eventual location: if the error occurs early in gestation, cysts are more likely to form centrally in the tracheobronchial tree or mediastinum, including the right paratracheal area and tracheal carina, which are preferred sites. Abnormal budding later in development tends to result in peripheral cysts within the lung parenchyma, often in the lower lobes, with no side preference. Approximately 20% to 30% of bronchogenic cysts fall into this intrapulmonary category. Bronchogenic cysts account for 10% to 15% of mediastinal tumors and 50% to 60% of mediastinal cysts, predominantly in the middle mediastinum. They are classified into 5 types based on location: paratracheal, carinal, paraesophageal, hilar, and miscellaneous. Most bronchogenic cysts are lined with respiratory epithelium and may contain cartilage, smooth muscle, and mucus-secreting glands, reflecting their respiratory origin. The exact cause of abnormal budding remains unclear; disruptions in signaling pathways essential for respiratory tract development—such as sonic hedgehog, fibroblast growth factors, and bone morphogenetic proteins—are thought to play a role. Errors in these pathways during critical developmental periods may contribute to cyst formation. While many bronchogenic cysts remain asymptomatic, others can lead to complications if they enlarge, become infected, or fistulize to adjacent organs, potentially causing symptoms such as airway obstruction, recurrent infections, or chest discomfort. Rarely, fistulization can result in severe complications like air embolism, and there have been sporadic reports of malignant transformation, emphasizing the need for monitoring. Additionally, bronchogenic cysts have been occasionally associated with other congenital anomalies, suggesting a possible genetic or multifactorial basis for their etiology. Epidemiology Bronchogenic cysts are rare congenital malformations, with an estimated prevalence of 1 in 42,000 to 1 in 68,000 individuals based on surgical and autopsy series study results. They account for 10% to 15% of mediastinal tumors and 50% to 60% of all mediastinal cysts. These cysts are most commonly diagnosed in children or young adults, although some cases remain undiscovered until the third or fourth decade of life. 
Most bronchogenic cysts are detected in the first few years of life, often due to respiratory symptoms resulting from cyst enlargement or infection. In asymptomatic cases, cysts may remain undiagnosed until later in adulthood, particularly if imaging is obtained for unrelated reasons. The incidence and presentation of bronchogenic cysts vary by anatomical location, patient sex, and clinical manifestations, though they appear equally across different ethnic and geographic populations without racial predilection. The cysts originate preferentially in the middle mediastinum, including the right paratracheal area and the tracheal carina. Mediastinal bronchogenic cysts are the most common, comprising about 50% to 70% of cases, and are frequently found in the middle and posterior mediastinum. These mediastinal cysts affect males and females equally and typically present earlier than intrapulmonary cysts due to their proximity to the tracheobronchial tree, where they may compress surrounding structures, causing respiratory symptoms. Intrapulmonary cysts, accounting for around 20% to 30% of cases, are more common in males. Although initially asymptomatic, these cysts may later lead to complications such as recurrent infections or, in rare instances, pneumothorax if they rupture. Gastric bronchogenic cysts, a rare ectopic subtype, have a female predominance with a female-to-male ratio of approximately 21:14. Bronchogenic cysts are generally considered sporadic but are occasionally associated with other congenital anomalies, including esophageal duplication cysts and cardiac malformations. Such associations suggest bronchogenic cysts may be part of a broader spectrum of foregut-derived anomalies in some instances, possibly influenced by genetic or environmental factors during development. Histopathology The gross and histopathological findings of bronchogenic cysts provide essential insights into their nature as congenital malformations. Macroscopically, bronchogenic cysts typically present as spherical, smooth lesions that may vary from white to pinkish. These cysts can be single or multiple, with diameters ranging from 2 to 12 cm. Most often, they are unilocular and contain clear fluid or proteinaceous mucus, although air or hemorrhagic secretions can be found in rare cases. Calcification of the cyst wall is uncommon, and they rarely communicate with the bronchial tree. The definitive diagnosis of bronchogenic cysts relies on histological examination of the surgical specimen. Histologically, bronchogenic cysts are characterized by a lining of ciliated pseudostratified columnar epithelium of respiratory type, with possible areas of squamous metaplasia. The cyst wall may contain various airway components, including cartilage plates, bronchial glands, and smooth muscle. Rarely are nerve and adipose tissues observed within the cyst wall. Bronchogenic cysts can exhibit histological alterations due to infarction, infection, or previous medical interventions. These changes may manifest as acute or chronic inflammation, epithelial denudation, and bleeding, leading to hemosiderin-laden macrophages, cholesterol clefts, and fibrosis. The combination of these macroscopic and histopathological features reflects the complex nature of bronchogenic cysts, emphasizing their developmental origin and potential for complications. Recognition of these characteristics is crucial for accurate diagnosis and management in clinical practice.
History and Physical Given the diverse clinical presentations of these congenital malformations across different age groups, the history and physical examination of patients with bronchogenic cysts can vary significantly. In pediatrics, bronchogenic cysts may cause life-threatening compressive symptoms. Diagnosis often occurs when the cysts become infected or enlarge enough to compress surrounding structures. In adults, bronchogenic cysts are often incidental radiologic findings. Pulmonary bronchogenic cysts are more likely to be symptomatic than mediastinal cysts, and 86.4% of symptomatic patients have a complicated cyst. Medical History Asymptomatic presentation: Many patients with bronchogenic cysts are asymptomatic, and they are often discovered incidentally during imaging studies for unrelated issues. This is particularly common for cysts in the lung parenchyma or those that do not exert pressure on adjacent structures. Respiratory symptoms: Patients who present with symptoms may report a range of respiratory issues, including cough, dyspnea, and chest pain. The cough may be chronic and nonproductive, especially if the cyst has become infected or increased in size. Patients may experience shortness of breath due to airway compression, particularly with mediastinal cysts. In children, bronchogenic cysts may cause life-threatening compressive symptoms, leading to severe respiratory distress. While nonfistulized bronchogenic cysts typically cause chest pain due to pressure on surrounding structures, fistulized cysts can result in additional symptoms, such as cough, fever, sputum production, and hemoptysis. Infection history: Many symptomatic individuals with bronchogenic cysts have recurrent respiratory infections, indicating potential complications such as cyst infection or inflammation. Fever and shortness of breath may occur due to pericystic pneumonitis or pneumonia in the adjacent compressed lung. Patients with active infection may report malaise. Past medical history: Patients may have a history of respiratory issues or congenital anomalies. The presence of other congenital malformations, particularly cardiac defects or other foregut anomalies, can be associated with bronchogenic cysts. Developmental history: In children with a bronchogenic cyst, obtaining a developmental history may reveal relevant congenital conditions. Physical Examination General appearance: If asymptomatic, patients may appear well-nourished and in no acute distress. Symptomatic individuals, especially children, may exhibit signs of respiratory distress or systemic infection. Respiratory examination: Auscultation may be normal in asymptomatic individuals. However, in symptomatic individuals, decreased breath sounds may be noted on the affected side, particularly with large cysts. Abnormal sounds like wheezing or crackles can occur if there is an associated infection or atelectasis. Percussion may reveal dullness over areas of consolidation or if a cyst is large enough to affect normal lung inflation. Vital signs: Patients who are asymptomatic may present with normal vital signs. However, those with significant respiratory involvement may be tachypneic, hypoxic, or tachycardic, especially during exacerbations or infections; patients with active infections may be febrile. Neck and mediastinal assessment: In situations where bronchogenic cysts are located in ectopic sites, such as the neck or retroperitoneum, a mass may be palpable.
Mediastinal cysts might also cause tracheal deviation or other signs of mediastinal shift. Evaluation Evaluating a patient with a suspected bronchogenic cyst involves a combination of laboratory tests, imaging studies, and occasionally more specialized procedures. These investigations help confirm the diagnosis, assess the cyst’s characteristics, and rule out complications. Laboratory Tests Complete blood count: This test may reveal leukocytosis in patients with active infection or anemia in those with chronic disease. C-reactive protein: An elevated serum C-reactive protein may indicate inflammation or infection associated with the cyst. Sputum cultures: Respiratory symptoms such as cough, sputum production, or hemoptysis may warrant obtaining sputum cultures, which can help identify infectious agents if the cyst is infected. Blood cultures: Patients with systemic signs consistent with bacteremia, such as fever or chills, may warrant blood cultures. Radiographic Imaging Chest radiography: A chest x-ray is often the first imaging study performed when bronchogenic cysts are suspected. Pulmonary bronchogenic cysts will be sharply defined, solitary, round, or oval opacities, usually in the lower lobe, appearing as a homogeneous water density, an air-filled cyst, or with an air-fluid level. Abnormalities in the surrounding lung parenchyma, like atelectasis or consolidation, may make the diagnosis more difficult. Mediastinal bronchogenic cysts appear as homogeneous, smooth, solitary, round, or ovoid masses, usually in the middle mediastinum. Computed tomography: Computed tomography (CT) is the investigation of choice, providing detailed imaging of the cyst’s location, size, position, and relationship to tracheobronchial and vascular structures. CT is beneficial in evaluating cysts located within the lungs or at ectopic sites. This study can help differentiate bronchogenic cysts from other mediastinal masses. The CT density of bronchogenic cysts varies from typical water density to high density related to blood, increased calcium content, anthracotic pigment, or increased protein content of the fluid. CT can also assess the presence of associated complications, such as infection, surrounding inflammatory changes, or airway compression. In some cases, a contrast-enhanced CT scan may be performed to delineate the vascular structures better and assess for any associated complications, such as vascular involvement or other tumors. Magnetic resonance imaging: Magnetic resonance imaging (MRI) is not always required when evaluating a patient with a suspected bronchogenic cyst. Still, it is especially useful when evaluating complex cysts or when better soft tissue contrast is needed. MRI is superior to CT for delineating anatomic relations and the definition of the cyst. The MRI appearance of the cyst is dictated by its content. On T1-weighted images, the intrinsic signal intensity varies from low to high, depending on the cyst contents. T2-weighted images show high signal intensity; enhancement after contrast injection is frequently absent. Ultrasonography: Although less commonly employed when diagnosing bronchogenic cysts, ultrasonography may help evaluate cysts in atypical locations, such as the neck or retroperitoneum, particularly in pediatric patients. Other Required Tests Bronchoscopy: This allows direct visualization of the bronchial tree and can be useful if bronchogenic cysts are suspected of causing airway obstruction. This study can also facilitate sampling for cultures or biopsies if necessary.
Endoscopic ultrasonography: Cysts located in the mediastinum or retroperitoneum can be further evaluated by endoscopic ultrasonography, providing real-time imaging and allowing for fine-needle aspiration. Histopathological examination: This remains the gold standard for definitively diagnosing bronchogenic cysts by examining tissue obtained via surgical excision or biopsy. This examination can confirm the presence of ciliated pseudostratified columnar epithelium, cartilage, smooth muscle, and other respiratory tissue components. Treatment / Management The strategies employed when managing and treating bronchogenic cysts are primarily based on the cyst's size, location, symptomatology, and potential complications. While many bronchogenic cysts are asymptomatic and discovered incidentally, symptomatic or complicated cysts require more active management. Observation of Asymptomatic Individuals For patients with asymptomatic bronchogenic cysts, especially those diagnosed incidentally by imaging studies, a conservative approach may be appropriate. Regular follow-up with imaging, typically CT, is recommended. This is particularly relevant for small cysts that do not cause compressive symptoms. However, the management of asymptomatic bronchogenic cysts remains controversial. Most authors seem to advocate a surgical approach to prevent complications. While a conservative approach may be considered for adults, given that these lesions do not regress spontaneously, surgical excision is often recommended for children and young, healthy adults due to the risk of complications such as infection, erosion, or malignancy. Even in cases where the neonate or infant is asymptomatic or prenatal ultrasonography suggests a vanishing lesion, further evaluation is still necessary. Postnatal investigations, including CT and consultation with a pediatric surgeon, are essential to address potential long-term complications. Surgical Intervention The gold standard therapeutic intervention for a bronchogenic cyst remains surgical excision, which offers excellent long-term outcomes free of recurrence and low perioperative morbidity and mortality. Surgical intervention is typically warranted in the following clinical scenarios: Surgical Approaches Surgical excision is the standard treatment for symptomatic and some asymptomatic bronchogenic cysts. Both open and minimally invasive procedures are used; the approach can vary depending on the cyst's location and the patient's status. Open surgical resection In cases where cysts are more extensive or complicated, an open surgical approach may be necessary. This can involve thoracotomy for better access to the cyst, especially in complex mediastinal cases where adhesions increase the risk of incomplete resection; destruction of residual mucosa is required to prevent fluid accumulation and late recurrence. Additionally, some patients do not tolerate thoracoscopy well. Thus, thoracotomy is the procedure usually performed. Thoracoscopic surgery Resection via video-assisted thoracoscopic surgery (VATS) has emerged as a viable approach and is often employed for cysts located in the lung parenchyma or mediastinum. This minimally invasive technique allows for effective removal with reduced recovery time and morbidity. With the adoption of the robotic platform for minimally invasive thoracic surgery, thoracoscopic resection has become more feasible. 
The transition from open surgery to video-assisted thoracoscopic surgery (VATS) and then to robotic-assisted thoracoscopy has been associated with few complications. A purely thoracoscopic procedure, which is even less invasive than VATS, offers advantages such as faster recovery and improved cosmetic outcomes. As demonstrated in some instances, this approach is feasible even in the confined spaces of younger patients, and it should be considered when the patient can tolerate it. Choice of procedure Regardless of the chosen surgical approach, the procedure performed will vary. For example, lobectomy is the procedure of choice for intrapulmonary bronchogenic cysts, whereas a conservative procedure such as a total pericystectomy, a wedge resection, or segmentectomy is recommended for peripheral bronchogenic cysts or patients with limited lung function. Minimally Invasive Techniques Aspiration and drainage of symptomatic bronchogenic cysts can be performed using minimally invasive techniques, such as percutaneous access guided by ultrasound or endobronchial ultrasound-guided fine needle aspiration (EBUS-FNA). While EBUS-FNA is primarily a diagnostic tool, some specialists have successfully used this technique to drain bronchogenic cysts. However, this option is generally unavailable for young children, as bronchoscopy with EBUS-FNA is not typically performed in this age group. Drainage may be considered for patients who are not suitable for surgery, but it is not an ideal long-term solution due to the persistent risks of infection or cyst recurrence. When drainage is performed, it often serves as a temporary measure to relieve symptoms and facilitate later surgical removal by reducing the cyst's size and inflammation. Management of Complications Complications of bronchogenic cysts include infection and fistula formation. Infected bronchogenic cysts may require a combination of surgical drainage and antibiotic therapy. Surgical drainage is typically necessary to relieve symptoms and prevent further complications if the cyst has formed an abscess. If a bronchogenic cyst has fistulized, surgical intervention is needed to close the communication with the bronchial tree and manage associated symptoms such as cough or hemoptysis. Symptomatic Treatment Patients with symptomatic cysts, particularly those experiencing chest pain, may require analgesics or anti-inflammatory medications to manage discomfort. If bronchogenic cysts lead to airway obstruction or bronchospasm, bronchodilators may be prescribed to improve airflow. Differential Diagnosis The differential diagnosis of bronchogenic cysts is broad and includes other cystic and solid lesions in the mediastinum, lungs, and elsewhere along the foregut development pathway, as well as conditions that may present with similar symptoms. Key differential diagnoses include: Prognosis The prognosis of bronchogenic cysts after surgical excision is excellent. In case of incomplete excision, late recurrences can occur. A recent study reviewing 102 patients treated for bronchogenic cysts reported an estimated mean morbidity and mortality of 20%. Complications According to some authors, bronchogenic cysts lead to complications in 45% of patients, but complications do not increase morbidity or death.
Complications encountered more frequently with bronchogenic cysts include: Rare complications comprise: Deterrence and Patient Education Deterrence and patient education for bronchogenic cysts are essential for empowering patients with the knowledge to manage their condition effectively and avoid complications. While bronchogenic cysts are congenital and thus cannot be prevented, understanding the nature of the condition, predominantly if asymptomatic, is crucial. Patients should be educated about bronchogenic cysts, including how they develop and why they commonly occur in the mediastinum or lungs. This education helps address potential anxiety, especially as many cysts are benign and often asymptomatic, with a low risk of malignant transformation. Recognizing symptoms indicative of complications is essential for patients with asymptomatic cysts, as changes in symptoms can signal growth, infection, or other issues. Patients should be advised to report signs such as persistent cough, shortness of breath, chest pain, fever, and hemoptysis, which may indicate a complex cyst or infection. Additionally, regular follow-up visits and imaging studies are recommended to monitor changes over time, helping detect potential issues early. For symptomatic or complicated cysts, patients should understand that while observation is suitable for asymptomatic cases, surgical removal may be necessary to prevent further complications. Minimally invasive procedures, like VATS, are options for symptom relief and are typically associated with faster recovery. After surgery, patients should be informed about postoperative expectations, wound care, and signs of complications like infection or bleeding, emphasizing the importance of follow-up to ensure full recovery and monitor for recurrence. Lifestyle modifications can also play a role in reducing risks, as patients with cysts near the airways should avoid smoking or other respiratory irritants, which may increase infection risks or aggravate respiratory symptoms. Good hygiene practices, avoiding close contact with people who have respiratory infections, and regular hand washing are vital, especially in patients with cysts prone to infection. Maintaining respiratory health through physical activity, a healthy diet, and vaccinations (eg, flu and pneumonia vaccines) is encouraged to reduce respiratory complications. In cases where bronchogenic cysts are associated with congenital anomalies or syndromes, genetic counseling may be beneficial, with family education to ensure early symptom recognition if similar cases arise among relatives. Ultimately, educating patients about their condition, symptom recognition, follow-up care, and lifestyle modifications can significantly reduce the risk of complications and promote adherence to management recommendations, making a well-informed patient more likely to manage the condition effectively. Enhancing Healthcare Team Outcomes Effectively managing bronchogenic cysts requires a skilled, coordinated, and multidisciplinary approach to enhance patient-centered care and outcomes. Clinicians must develop diagnostic acumen and surgical skills to accurately identify and treat bronchogenic cysts, leveraging imaging and histopathological analysis for definitive diagnosis. Surgeons are crucial in executing minimally invasive or open procedures, depending on the cyst’s location and complexity, while anesthesiologists support safe perioperative care, particularly in complex mediastinal cases. 
Nursing professionals are essential for preoperative and postoperative care, monitoring for complications, providing patient education, and delivering emotional support to patients and their families, especially as some procedures may be anxiety-inducing or invasive. Interprofessional communication and care coordination are pivotal for ensuring patient safety, optimizing team performance, and delivering consistent patient-centered care. Regular interdisciplinary meetings allow clinicians, radiologists, pharmacists, and allied health professionals to discuss patient-specific strategies, share diagnostic insights, and coordinate treatment plans. Pharmacists contribute by advising on antibiotics for infection prevention and pain management, ensuring medication safety, and minimizing drug interactions. By maintaining clear communication channels and leveraging each discipline’s expertise, the healthcare team can reduce procedural risks, anticipate complications, and create a supportive care environment. This collaborative approach promotes seamless transitions between diagnostics, treatment, and follow-up, strengthens patient safety, and empowers patients in their care journey, leading to better long-term outcomes. Figure. Bronchogenic Cyst, Computed Tomography. These cysts are most commonly located in the mediastinum or lungs. Contributed by S Bhimji, MD. Disclosure: Sanjeev Sharma declares no relevant financial relationships with ineligible companies. Disclosure: Faten Limaiem declares no relevant financial relationships with ineligible companies. Disclosure: Sara Collier declares no relevant financial relationships with ineligible companies. Disclosure: Mouna Mlika declares no relevant financial relationships with ineligible companies. This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.
Intro to Ducci Sequence. Diana Thomas. Posted: 22 Nov 2021. Transcript: Right before I actually did applied math, this is one of the last things I worked on while I was developing my career as a pure mathematician. There are a number of these papers that I'm showing you right here if you're interested in this particular topic, and we have one more paper that's a list of open problems in this area. Steve Leteria is one of the best, if not the best, students I ever had as an undergrad research student. He was physics, we can forgive him for that; he got his PhD in physics, but he was very, very talented at pure mathematics, and so he did this. We didn't have a thesis, but he did a thesis with us, and then he went on to the national math meetings and won a national prize for his work. We still talk and work together; he has actually published some applied math papers with me as well, so he's good at both of these. There's another student I had, whom I actually left off; she is a high school teacher now in New Jersey, but she has also published on this topic. So there are accessible pieces of this, that's what you should know: there are pieces of this that you can actually take on as an open problem. All right, so what is this? It's actually an old problem. It was posed for the case n equals four by an Italian mathematician with the last name Ducci, I think back in the 1800s, and people published on this quite a bit after that. So when we came on there were a lot of different avenues that people had taken, but the main problem still remains open. What you have is not a vector; you have a string of integers. It's not a vector because, if you recall, the definition of a vector space has to satisfy seven criteria, and the integers do not satisfy those criteria: you can't multiply by a scalar and remain in the set, so you have all kinds of issues with closure. So we call this a string of integers. You take a string of integers of length n, so I have Z to the n, which means that every component is an integer, and then you iterate it by taking x1 minus x2 and then the absolute value, so the next entry is going to be a nonnegative integer, and so on for all of these, until you get to the very last one, because you have xn and nothing to the right of xn, so you wrap it around and start back with x1; you subtract from the beginning. So, for example, if I have 1, 2, 3, 4, my next iterate would be 1, 1, 1, 3, just by applying that rule. Now, the question is: what happens if you keep going? So this is homework 13, part 1. You guys are pretty tired, so I'm going to ask you to take boards and iterate this further on the boards: take that 1, 1, 1, 3 and continue on, and save your work, make it neat, because we're going to take a picture of it.
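The iteration described in the lecture is easy to experiment with on a computer. The following short Python sketch (my own illustration, not part of the lecture; the function names are mine) applies the Ducci step, replacing each entry by the absolute difference with its cyclic neighbor, and prints the orbit of the homework string 1, 2, 3, 4, which reaches the all-zero string after a few steps.

```python
def ducci_step(xs):
    """One Ducci iteration: absolute differences of cyclically adjacent entries."""
    n = len(xs)
    return tuple(abs(xs[i] - xs[(i + 1) % n]) for i in range(n))


def ducci_orbit(xs, max_steps=100):
    """Iterate the Ducci map, stopping at the all-zero string or after max_steps."""
    xs = tuple(xs)
    orbit = [xs]
    for _ in range(max_steps):
        xs = ducci_step(xs)
        orbit.append(xs)
        if not any(xs):  # all entries are zero
            break
    return orbit


for row in ducci_orbit((1, 2, 3, 4)):
    print(row)
# (1, 2, 3, 4) -> (1, 1, 1, 3) -> (0, 0, 2, 2) -> (0, 2, 0, 2) -> (2, 2, 2, 2) -> (0, 0, 0, 0)
```

For strings whose length is a power of 2, the iteration always reaches the all-zero string; for other lengths it can instead fall into a repeating cycle, which is part of what the open problems mentioned above concern.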
7.3 conjugation and mapping via Hfr strains Flashcards | Quizlet. Created by iulianasin. Terms in this set (33): E8. An Hfr strain that is hisE and pheA was mixed with a strain that is hisE and pheA. The conjugation was interrupted and the percentage of recombinants for each gene was determined by streaking on a medium that lacked either histidine or phenylalanine. The following results were obtained: A. If we extrapolate these lines back to the x-axis, the hisE intersects at about 3 minutes and the pheA intersects at about 24 minutes. These are the values for the times of entry. Therefore, the distance between these two genes is 21 minutes (i.e., 24 minus 3). B. picture is drawn in the answer. Explain how an Hfr strain is produced. As shown in Figure 7.5a, an F factor may align with a similar region found in the bacterial chromosome. Due to recombination, the F factor may integrate into the bacterial chromosome. In this example, the F factor has integrated next to a lac+ gene. F factors can integrate into several different sites that are scattered around the E. coli chromosome.
When an F factor integrates into the chromosome, it creates an Hfr cell, or high frequency recombination cell. Describe how an Hfr strain can transfer portions of the bacterial chromosome to recipient strains (Figure 7.6). 1. The origin of transfer within the integrated F factor determines the starting point and direction of this transfer process. 2. One of the DNA strands is cut at the origin of transfer. This cut, or nicked, site is the starting point at which the Hfr chromosome enters the F− recipient cell. 3. From this starting point, one strand of DNA from the Hfr chromosome begins to enter the F− cell in a linear manner. The transfer process occurs in conjunction with chromosomal replication, so the Hfr cell retains its original chromosomal composition. 4. About 1.5 to 2 hours is required for the entire Hfr chromosome to pass into the F− cell. Because most conjugations do not last that long, usually only a portion of the Hfr chromosome is transmitted to the F− cell. 5. Once inside the F− cell, the chromosomal material from the Hfr cell can swap, or recombine, with the homologous region of the recipient cell's chromosome. Construct a genetic map using data from conjugation experiments: pages 164-165 walk you through the process. Use the terms listed below to correctly explain concepts, assigned figures, and specified end-of-chapter questions: Hfr strain, F′ factors, interrupted mating, antibiotics (e.g., streptomycin), minutes. An Hfr strain is very similar to an F+ strain; the big difference is that in an Hfr strain the F factor is integrated into the chromosome, which is why it is called a high-frequency recombination strain. The F factor integrates only where a region of the chromosome is similar to the F factor; once they align, the chromosome incorporates the F factor into its genome. The integrated F factor can later loop out of the chromosome and break off, but because the breakage is imprecise, the excised F factor can carry part of the chromosomal DNA, and the chromosome can retain a leftover piece of the F factor. An F factor excised from the chromosome that contains some chromosomal DNA is called F′. Experiments determining gene distances in an Hfr strain allowed us to understand how the transfer occurs. A blender was used to interrupt the mating, that is, the process of conjugation between the Hfr donor and the F− recipient. Two different strains were used; if a strain carried a streptomycin-sensitivity gene, there would be no growth unless gene transfer had occurred through conjugation. To express the distance between genes, minutes were used as units, because time was used to determine how long it took the genes to be transferred from one cell to another. Hfr strain: term used to designate bacterial strains in which the F factor has integrated into the bacterial chromosome. F′ factors: term used to describe an F factor that contains a portion of the bacterial chromosome. Interrupted mating: conjugation events between F+ and F- cells that are not allowed to proceed to completion; useful in conjugation mapping. Antibiotics (e.g., streptomycin): substances that kill bacterial cells (e.g., streptomycin, kanamycin, and ampicillin). Minutes: the unit of map distance used in bacterial genetic maps. 7.5, 7.6, 7.7 and 7.9 (there is a video clip lecture for this figure in the Bb folder). With regard to conjugation, a key difference between F+ and Hfr cells is that an Hfr cell a. is unable to conjugate. b. transfers a plasmid to the recipient cell. c. transfers a portion of the bacterial chromosome to the recipient cell. d. becomes an F− cell after conjugation.
c In mapping experiments, __ strains are conjugated to F− strains. The distance between two genes is determined by comparing their ____ during a conjugation experiment. a. F, times of entry b. Hfr, times of entry c. F, expression levels d. Hfr, expression levels b S2. By conducting conjugation experiments between Hfr and recipient strains, Wollman and Jacob mapped the order of many bacterial genes. Throughout the course of their studies, they identified several different Hfr strains in which the F factor DNA had been integrated at different places along the bacterial chromosome. A sample of their experimental results is shown in the following table (page 178). A. Explain how these results are consistent with the idea that the bacterial chromosome is circular. B. Draw a map that shows the order of genes and the locations of the origins of transfer among these different Hfr strains. A. In comparing the data among different Hfr strains, the order of the nine genes was always the same or the reverse of the same order. For example, HfrH and Hfr4 transfer the same genes but their orders are reversed relative to each other. In addition, the Hfr strains showed an overlapping pattern of transfer with regard to the origin. For example, Hfr1 and Hfr2 had the same order of genes, but Hfr1 began with leu and ended with azi, whereas Hfr2 began with pro and ended with lac. From these findings, Wollman and Jacob concluded that the origin of transfer had been inserted at different points within a circular E. coli chromosome in different Hfr strains. They also concluded that the origin can be inserted in either orientation, so the direction of gene transfer can be clockwise or counterclockwise around the circular bacterial chromosome. B. page 179 S3. An Hfr strain that is leuA and thiL was mixed with a strain that is leuA and thiL. In the data points shown here, the conjugation was interrupted, and the percentage of recombinants for each gene was determined by streaking on a medium that lacked either leucine or thiamine. The results are shown in the following graph (page 179). What is the map distance (in minutes) between these two genes? Answer: This problem is solved by extrapolating the data points to the x-axis to determine the time of entry. For leuA, the points extrapolate back to 10 minutes. For thiL, they extrapolate back to 20 minutes. Therefore, the distance between the two genes is approximately 10 minutes. C5. What is the role of the origin of transfer during F- and Hfr-mediated conjugation? What is the significance of the direction of transfer in Hfr-mediated conjugation? Answer: The role of the origin of transfer is to provide a starting site where two important events occur: the DNA is nicked, and one strand begins its transfer into a recipient cell. The direction of transfer in Hfr-mediated transfer will determine the order of transfer of the genes. For example, if the origin is between gene A and B, it could be oriented so that gene A will be transferred first. Alternatively, it could be oriented in the opposite direction so that gene B will be transferred first. E4. What is an interrupted mating experiment? What type of experimental information can be obtained from this type of study? Why is it necessary to interrupt mating?
Answer: An interrupted mating experiment is a procedure in which two bacterial strains are allowed to mate, and then the mating is interrupted at various time points. The interruption occurs by agitation of the solution in which the bacteria are found. This type of study is used to map the locations of genes. It is necessary to interrupt mating so that you can vary the time and obtain information about the order of transfer: which gene transferred first, second, and so on. E5. In a conjugation experiment, what is meant by the time of entry? How is the time of entry determined experimentally? Answer: The time of entry is the time it takes for a gene to be initially transferred from one bacterium to another. To determine this time, we make many measurements at various lengths of time and then extrapolate these data back to the x-axis. E7. As mentioned in solved problem S2, origins of transfer can be located in many different locations, and their direction of transfer can be clockwise or counterclockwise. Let's suppose a researcher conjugated six different Hfr strains that were thr leu ton s str r azi s lac gal pro met to an F− strain that was thr leu ton r str s azi r lac gal pro met, and obtained the following results (page 181). Draw a circular map of the E. coli chromosome and describe the locations and orientations of the origins of transfer in these six Hfr strains. RULE: Always put in order of the 1st row, CLOCKWISE. Always use 1 letter; if a letter repeats, use L1, L2. Always put the arrow BEFORE the 1st letter, never between the 1st and 2nd row!!!!! See if it is clockwise or counterclockwise. E8. An Hfr strain that is hisE and pheA was mixed with a strain that is hisE and pheA. The conjugation was interrupted and the percentage of recombinants for each gene was determined by streaking on a medium that lacked either histidine or phenylalanine. The following results were obtained (fig page 181): A. Determine the map distance (in minutes) between these two genes. B. In a previous experiment, it was found that hisE is 4 minutes away from the gene pabB. pheA was shown to be 17 minutes from this gene. Draw a genetic map describing the locations of all three genes. A. If we extrapolate these lines back to the x-axis, the hisE intersects at about 3 minutes and the pheA intersects at about 24 minutes. These are the values for the times of entry. Therefore, the distance between these two genes is 21 minutes (i.e., 24 minus 3). B. pic is given in answer key. How is an F' factor different from an F factor? An Fʹ factor carries a portion of the bacterial chromosome, whereas an F factor does not. Review figure 7.6. With regard to the timing of conjugation, explain why the recipient cell in the top right is pro- whereas the recipient cell in the bottom right is pro+. Because conjugation occurred for a longer period of time, pro+ was transferred in the conjugation experiment shown in the bottom right. In eukaryotic genetic mapping, the units of distance are cM, % recombination, and mu. What are the units in bacterial genetic mapping and why is this scale appropriate? Because the chromosome is circular, we must arbitrarily assign a starting point on the map, in this case the gene thrA. Researchers scale genetic maps from bacterial conjugation studies in units of minutes. This unit refers to the relative time it takes for genes to first enter an F− recipient strain during a conjugation experiment. The distance between two genes is determined by comparing their times of entry during a conjugation experiment.
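The time-of-entry arithmetic used throughout these cards (extrapolate each gene's percent-recombinants-versus-time line back to the x-axis, then subtract the two intercepts to get the map distance in minutes) can be sketched in a few lines of Python. This is an illustration only: the data points below are hypothetical, invented so that the fitted lines cross zero near 3 and 24 minutes as in problem E8, and the fit is a plain least-squares line.

```python
def time_of_entry(points):
    """Fit a least-squares line to (time, % recombinants) and return its x-intercept."""
    n = len(points)
    mean_t = sum(t for t, _ in points) / n
    mean_p = sum(p for _, p in points) / n
    slope = (sum((t - mean_t) * (p - mean_p) for t, p in points)
             / sum((t - mean_t) ** 2 for t, _ in points))
    intercept = mean_p - slope * mean_t
    return -intercept / slope  # time at which the line crosses 0% recombinants


# Hypothetical interrupted-mating data: (minutes of conjugation, % recombinants)
hisE = [(5, 2), (10, 7), (15, 12), (20, 17)]   # extrapolates to ~3 min
pheA = [(26, 2), (30, 6), (34, 10), (38, 14)]  # extrapolates to ~24 min

t_his = time_of_entry(hisE)
t_phe = time_of_entry(pheA)
print(f"hisE time of entry: {t_his:.1f} min")
print(f"pheA time of entry: {t_phe:.1f} min")
print(f"map distance: {t_phe - t_his:.1f} minutes")   # ~21 minutes, as in E8
```

The same subtraction gives the 10-minute answer in S3 (20 minus 10) and the 9-minute distance between lacZ and galE in Figure 7.9 (25 minus 16).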
Consider figure 7.9. Which of the two genes (lacZ and galE) is closer to the origin of transfer? Explain the rationale for your response. The lacZ gene is closer to the origin of transfer; its transfer began at 16 minutes. Explanation: In Figure 7.9, the time of entry is found by conducting conjugation experiments at different time intervals before interruption. We compute the time of entry by extrapolating the data back to the x-axis. In this experiment, the time of entry of the lacZ gene was approximately 16 minutes, and that of the galE gene was 25 minutes. Therefore, these two genes are approximately 9 minutes apart from each other along the E. coli chromosome. Difference between F-, F+, Hfr and F': F- doesn't have a fertility factor, F+ has a fertility factor, Hfr has the fertility factor integrated in the bacterial chromosome, and F' is an excised fertility factor with additional genes from the bacterial chromosome. True or False: Typically, only a portion of the Hfr chromosome is transmitted to the F- cell. True. By what process (2 words) can the chromosomal material from the Hfr become integrated into the recipient cell's chromosome? Homologous recombination. What factor (one word) most likely determines the amount of genetic material transferred from an Hfr to an F- cell? Time. What piece of equipment (one word) did Wollman and Jacob use in their technique of interrupted mating? Blender. Examine the figure of the simplified genetic map of E. coli. What is the approximate distance between the lacZ,Y,A gene and the galE gene? Just put the closest whole number and no units. Verify your answer by viewing the figure showing the time course of an interrupted E. coli conjugation experiment. 9. In Jacob and Wollman's experiments, what was used to kill the donor strain following conjugation (one word): streptomycin. In chapter 6, we used mu and cM as units of genetic distance. What is the unit of distance in bacterial mapping experiments? Minute (NOT PLURAL!). Complete this statement: The time it takes for genes to enter a recipient cell is directly related to their __ along the bacterial ___. Enter the two words separated by one space and without punctuation: order chromosome.
calculus - Evaluate $\int^0_1 \frac{\ln(t)}{1-t^2}\,dt$ - Mathematics Stack Exchange

Asked 10 years, 1 month ago; modified 8 years, 1 month ago; viewed 2k times. Asked by User1234. Tags: calculus, integration, definite-integrals.

Evaluate: $$\int^0_1 \frac{\ln(t)}{1-t^2}\,dt$$ This actually came up while solving another integral. It was suggested that I use a binomial series, but unfortunately I do not understand how to use this. Can anyone help me out?

Comments:
- Daniel Fischer: $\sum_{k=0}^{\infty}\int_0^1 t^{2k}(-\ln t)\,dt$, then you can for example substitute $t=e^{-u}$. You get a series representation of the value of the integral. I think the series is not unknown.
- Lucian: Hint: $\sum_{n=0}^{\infty} u^n=\frac{1}{1-u}$ for $|u|<1$.
- Math-fun: see this: math.stackexchange.com/questions/1334561/…
- wythagoras: How can the 0 be on top and the 1 below? Is this a mistake or just something I haven't learned yet? I am in doubt because people also use it in the answer.
- Ian: Usually we integrate with the proper orientation, but if you don't, the net effect is a change in sign. Thus the integral in the OP is positive.

3 Answers

Answer by Leucippus: First consider the operation $$\partial_n t^n=\frac{d}{dn}\,e^{n\ln(t)}=\ln(t)\,e^{n\ln(t)}=t^n\ln(t).$$ Now consider the integral, where the operation just presented will be used, $$I_n=\int_0^1 \ln(t)\,t^n\,dt=\partial_n\int_0^1 t^n\,dt=\partial_n\left[\frac{t^{n+1}}{n+1}\right]_0^1=\partial_n\left(\frac{1}{n+1}\right)=-\frac{1}{(n+1)^2}.$$ Now letting $n\to 2n$ and then summing over $n$ it is seen that: $$\sum_{n=0}^{\infty}I_{2n}=\int_0^1\frac{\ln(t)\,dt}{1-t^2}=-\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}=-\left(\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}+\sum_{n=1}^{\infty}\frac{1}{(2n)^2}\right)+\frac{1}{4}\sum_{n=1}^{\infty}\frac{1}{n^2}=-\sum_{n=1}^{\infty}\frac{1}{n^2}+\frac{1}{4}\sum_{n=1}^{\infty}\frac{1}{n^2}=-\zeta(2)+\frac{1}{4}\zeta(2)=-\frac{3}{4}\zeta(2)=-\frac{\pi^2}{8}.$$ The integral desired is: $$\int_0^1\frac{\ln(t)\,dt}{1-t^2}=-\frac{\pi^2}{8}.$$

Comments:
- Ian: You might elaborate that $\partial_n$ means $\frac{\partial}{\partial n}$ and give some justification for the operations, in particular the interchange operations.
- Ian: @BetterWorld First Leucippus did the integral calculation for an arbitrary $n$. Then they substituted in $2n$, since your quantity only involves $2n$. Plugging in you get that sum of the reciprocal of the squares of the odd natural numbers. We know the sum of the reciprocal of the squares of all the natural numbers from elsewhere. So Leucippus adds and subtracts the sum of the squares of the even natural numbers. Where it was added, now we know what to do. Where it was subtracted, you have the simple fact that $\sum_{n=1}^{\infty}\frac{1}{(2n)^2}=\frac{1}{4}\sum_{n=0}^{\infty}\frac{1}{n^2}$.
- Ian: @BetterWorld Sorry, my very last sum should start at 1, not 0.
- Ian: @BetterWorld What you wrote is equivalent, as you can see by simplifying (they are both $-\frac{3}{4}\zeta(2)$).
- Leucippus: @BetterWorld There are a vast number of calculus books on integration, series, and advanced calculus that explain the interchange of summation and integration, interchange of integration and differentiation, and more. Those texts, some of which can be found and downloaded from Google Books, are better suited to read and learn from.

Answer by Ian: The geometric series formula tells you that $$\frac{1}{1-r}=\sum_{n=0}^{\infty}r^n$$ if $|r|<1$. Applying this to $r=t^2$ you get that your integral is $$\int^0_1\sum_{n=0}^{\infty}\ln(t)\,t^{2n}\,dt.$$ You can interchange the sum and integral, for example using monotone convergence (since the integrands are all negative), so you have $$\sum_{n=0}^{\infty}\int^0_1\ln(t)\,t^{2n}\,dt.$$ Each of these integrals can be done using integration by parts with $u=\ln(t)$ and $dv=t^{2n}\,dt$. They are improper at the endpoint of 0, but this is no real obstacle, because the log term in each antiderivative is getting multiplied with a monomial, so the log terms in the definite integrals all vanish.

Comments:
- Ian: @BetterWorld Since $\ln(t)\le 0$ and $t^{2n}\ge 0$, the product is negative, so $f_N(t)=\sum_{n=0}^{N}\ln(t)\,t^{2n}$ is a decreasing sequence of functions. So you can use the monotone convergence theorem to get the result. Other options are available; for instance the convergence is uniform on $[\delta,1-\delta]$ for any $\delta>0$, so you might be able to use that, too.
- Ian: @BetterWorld That is a big question, which is one of the main topics in real analysis. The most general theorem I know of is the Vitali convergence theorem, although it does not help when the limiting integral is infinite. The most useful theorem I know of is the dominated convergence theorem, which can be regarded as a special case of the Vitali convergence theorem with convenient hypotheses.
- Ian: @BetterWorld Frankly this subject takes a semester to study. I could say a bunch myself, but I think you'd be better off picking up a real analysis text, such as Real Analysis by Royden and Fitzpatrick. I can say what $f_n(x)$ means, though. You have a sequence of functions which are indexed by the natural numbers $n$. This means that for each natural number $n$, you have a function $f_n$, which can be evaluated at $x$. This number is $f_n(x)$.
- Ian: @BetterWorld As for the absolute values, the meaning of the integral isn't anything special as a result of the absolute values being there. That said, in the Lebesgue theory all convergent integrals are absolutely convergent, so integrals of absolute values show up constantly. In particular, the interchange of sum and integral when absolute convergence occurs is just like the interchange of sum and sum when absolute convergence occurs.
- Ian: @BetterWorld By definition $\sum_{n=0}^{\infty}t^{2n}\log(t)=\lim_{N\to\infty}\sum_{n=0}^{N}t^{2n}\log(t)$. I just called the partial sum $f_N(t)$ for convenience.

Answer by robjohn: Substitute $t\mapsto e^{-t}$: $$\int^0_1\frac{\log(t)}{1-t^2}\,dt=\int_0^{\infty}\frac{t}{1-e^{-2t}}\,e^{-t}\,dt=\sum_{k=0}^{\infty}\int_0^{\infty}t\,e^{-(2k+1)t}\,dt=\Gamma(2)\sum_{k=0}^{\infty}\frac{1}{(2k+1)^2}=\frac{\pi^2}{8}$$
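As a quick numerical sanity check (my addition, not part of the original thread), the value $-\pi^2/8 \approx -1.2337$ for $\int_0^1 \frac{\ln t}{1-t^2}\,dt$ can be confirmed in a few lines of Python, using both the odd-reciprocal-squares series derived in the answers and a crude midpoint rule; the midpoints avoid the integrable log singularity at $t=0$ and the 0/0 point at $t=1$ (where the integrand tends to $-\tfrac{1}{2}$).

```python
import math

# Series form from the answers: int_0^1 ln(t)/(1-t^2) dt = -sum_{k>=0} 1/(2k+1)^2
series = -sum(1.0 / (2 * k + 1) ** 2 for k in range(200_000))

# Crude midpoint rule for the integral itself.
n = 200_000
h = 1.0 / n
midpoint = h * sum(
    math.log((i + 0.5) * h) / (1.0 - ((i + 0.5) * h) ** 2) for i in range(n)
)

print("series   :", series)            # close to -1.2337
print("midpoint :", midpoint)          # close to -1.2337
print("-pi^2/8  :", -math.pi ** 2 / 8)
```

Flipping the limits to match the orientation in the question, $\int^0_1$, just flips the sign and gives $+\pi^2/8$, which is what the substitution answer reports.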
WEATHER CLIMATE WATER World Meteorological Organization WORLD METEOROLOGICAL CONGRESS Nineteenth Session 22 May to 2 June 2023, Geneva Cg-19/INF. 4.2(9b) Submitted by: P/INFCOM, through Executive Council 6.IV.2023 The 2022 GCOS ECVS Requirements 2022 GCOS ECVs Requirements 2022 GCOS ECVs Requirements The 2022 GCOS ECVs Requirements 2022 GCOS ECVs Requirements © World Meteorological Organization, 2022 The right of publication in print, electronic and any other form and in any language is reserved by WMO. Short extracts from WMO publications may be reproduced without authorization, provided that the complete source is clearly indicated. Editorial correspondence and requests to publish, reproduce or translate this publication in part or in whole should be addressed to: Chair, Publications Board World Meteorological Organization (WMO) 7 bis, avenue de la Paix Tel.: +41 (0) 22 730 84 03 P.O. Box 2300 Fax: +41 (0) 22 730 80 40 CH-1211 Geneva 2, Switzerland E-mail: [email protected] NOTE The designations employed in WMO publications and the presentation of material in this publication do not imply the expression of any opinion whatsoever on the part of WMO concerning the legal status of any country, territory, city or area, or of its authorities, or concerning the delimitation of its frontiers or boundaries. The mention of specific companies or products does not imply that they are endorsed or recommended by WMO in preference to others of a similar nature which are not mentioned or advertised. The findings, interpretations and conclusions expressed in WMO publications with named authors are those of the authors alone and do not necessarily reflect those of WMO or its Members. This publication has been issued without formal editing. 2022 GCOS ECVs Requirements TABLE OF CONTENTS 1. Introduction .................................................................................................................... 1 2. Evolution of ECVS Requirements ........................................................................................ 1 3. ECVs requirements Tables ................................................................................................ 7 Atmospheric ECVs ................................................................................................................... 8 1. SURFACE ........................................................................................................................ 9 1.1 ECV: Air Pressure ............................................................................................................ 9 1.1.1 ECV product: Atmospheric Pressure (near surface) .............................................................. 9 1.2 ECV: Surface Temperature ............................................................................................. 10 1.2.1 ECV Product: Air Temperature (near surface) .................................................................... 10 1.3 ECV: Surface Wind Speed and Direction ........................................................................... 12 1.3.1 ECV Product: Wind Direction (near surface) ...................................................................... 12 1.3.2 ECV Product: Wind Speed (near surface) .......................................................................... 13 1.3.3 ECV Product: Wind Vector (near surface) .......................................................................... 
14 1.4 ECV: Surface Water Vapour ............................................................................................ 15 1.4.1 ECV Product: Dew Point Temperature (near Surface) ......................................................... 15 1.4.2 ECV Product: Relative Humidity (near surface) .................................................................. 16 1.4.3 ECV Product: Air Specific Humidity (near surface) .............................................................. 17 1.5 ECV: Precipitation .......................................................................................................... 18 1.5.1 ECV Product: Accumulated Precipitation ........................................................................... 18 1.6 ECV: Surface radiation budget ........................................................................................ 19 1.6.1 ECV Product: Upward Long-Wave Irradiance at Earth Surface ............................................. 19 1.6.2 ECV Product: Downward Long-Wave Irradiance at Earth Surface ......................................... 20 1.6.3 ECV Product: Downward Short-Wave Irradiance at Earth Surface ........................................ 21 2. UPPER AIR .................................................................................................................... 22 2.1 ECV: Upper-air temperature ........................................................................................... 22 2.1.1 ECV Product: Atmospheric Temperature in the Boundary Layer ........................................... 22 2.1.2 ECV Product: Atmospheric Temperature in the Free Troposphere ......................................... 24 2.1.3 ECV Product: Atmospheric Temperature in the Upper Troposphere and Lower Stratosphere .... 26 2.1.4 ECV Product: Atmospheric Temperature in the Middle and Upper Stratosphere ...................... 28 2.1.5 ECV Product: Atmospheric Temperature in the Mesosphere ................................................. 30 2.2 ECV: Upper-air wind speed and direction .......................................................................... 32 2.2.1 ECV Product: Wind (horizontal) in the Boundary Layer ....................................................... 32 2.2.2 ECV Product: Wind (horizontal) in the Free Troposphere ..................................................... 34 2.2.3 ECV Product: Wind (horizontal) in the Upper Troposphere and Lower Stratosphere ................ 36 2.2.4 ECV Product: Wind (horizontal) in the Middle and Upper Stratosphere .................................. 38 2.2.5 ECV Product: Wind (horizontal) in the Mesosphere ............................................................ 39 2.2.6 ECV Product: Wind (vertical) in the Boundary Layer ........................................................... 40 2.2.7 ECV Product: Wind (vertical) in the Free Troposphere ........................................................ 42 2.2.8 ECV Product: Wind (vertical) in the Upper Troposphere and Lower Stratosphere .................... 44 2.2.9 ECV Product: Wind (vertical) in the Middle and Upper Stratosphere ..................................... 46 2.2.10 ECV Product: Wind (vertical) in the Mesosphere ................................................................ 48 2.2.11 Figures ......................................................................................................................... 
50 2022 GCOS ECVs Requirements 2.3 ECV: Upper-air Water Vapour .......................................................................................... 55 2.3.1 ECV Product: Water Vapour Mixing Ratio in the Upper Troposphere and Lower Stratosphere ... 55 2.3.2 ECV Product: Water Vapour Mixing Ratio in the Middle and Upper Stratosphere ..................... 56 2.3.3 ECV Product: Water Vapour Mixing Ratio in the Mesosphere ................................................ 57 2.3.4 ECV Product: Relative Humidity in the Boundary Layer ....................................................... 58 2.3.5 ECV Product: Relative Humidity in the Free Troposphere .................................................... 59 2.3.6 ECV Product: Relative Humidity in the Upper Troposphere and Lower Stratosphere ................ 60 2.3.7 ECV Product: Specific Humidity in the Boundary Layer ....................................................... 61 2.3.8 ECV Product: Specific Humidity in the Free Troposphere ..................................................... 62 2.3.9 ECV Product: Integrated Water Vapour............................................................................. 63 2.4 ECV: Earth radiation budget ............................................................................................ 64 2.4.1 ECV Product: Radiation Profile ......................................................................................... 64 2.4.2 ECV Product: Solar Spectral Irradiance ............................................................................. 65 2.4.3 ECV Product: Downward Short-Wave Irradiance at Top of the Atmosphere ........................... 66 2.4.4 ECV Product: Upward Short-Wave Irradiance at Top of the Atmosphere ............................... 67 2.4.5 ECV Product: Upward Long-Wave Irradiance at Top of the Atmosphere ................................ 68 2.5 ECV Cloud Properties ..................................................................................................... 69 2.5.1 ECV Product: Cloud cover ............................................................................................... 69 2.5.2 ECV Product: Cloud Liquid Water Path .............................................................................. 70 2.5.3 ECV Product: Cloud Ice Water Path .................................................................................. 71 2.5.4 ECV Product: Cloud Drop Effective Radius ......................................................................... 72 2.5.5 ECV Product: Cloud Optical Depth .................................................................................... 73 2.5.6 ECV Product: Cloud Top Temperature ............................................................................... 74 2.5.7 ECV Product: Cloud Top Height ....................................................................................... 75 2.6 ECV: Lightning .............................................................................................................. 76 2.6.1 ECV Product: Schumann Resonances ............................................................................... 76 2.6.2 ECV Product: Total lightning stroke density ....................................................................... 77 3. ATMOSPHERIC COMPOSITION ......................................................................................... 79 3.1 ECV: Greenhouse Gases ................................................................................................. 
79 3.1.1 ECV Product: N2O mole fraction ....................................................................................... 79 3.1.2 ECV Product: CO2 mole fraction ....................................................................................... 80 3.1.3 ECV Product: CO2 column average dry air mixing ratio ....................................................... 81 3.1.4 ECV Product: CH4 mole fraction ....................................................................................... 82 3.1.5 ECV Product: CH4 column average dry air mixing ratio ....................................................... 83 3.2 ECV: Ozone .................................................................................................................. 84 3.2.1 ECV Product: Ozone mole fraction in the Troposphere ........................................................ 84 3.2.2 ECV Product: Ozone mole fraction in the Upper Troposphere/ Lower Stratosphere (UTLS) ...... 86 3.2.3 ECV Product: Ozone mole fraction in the Middle and Upper Stratosphere .............................. 87 3.2.4 ECV Product: Ozone Tropospheric Column ........................................................................ 88 3.2.5 ECV Product: Ozone Stratospheric Column ........................................................................ 89 3.2.6 ECV Product: Ozone Total Column ................................................................................... 90 3.3 ECV: Precursors (Supporting the aerosol and ozone ECVs) .................................................. 91 3.3.1 ECV Product: CO Tropospheric Column ............................................................................. 91 3.3.2 ECV Product: CO Mole fraction ........................................................................................ 92 3.3.3 ECV Product: HCHO Tropospheric Column ......................................................................... 93 2022 GCOS ECVs Requirements 3.3.4 ECV Product: SO2 Tropospheric Column ............................................................................ 94 3.3.5 ECV product: SO2 Stratospheric Column ........................................................................... 95 3.3.6 ECV Product: NO2 Tropospheric Column ............................................................................ 96 3.3.7 ECV Product: NO2 Mole Fraction ...................................................................................... 97 3.4 ECV: Aerosols Properties ................................................................................................ 98 3.4.1 ECV Product: Aerosol Light Extinction Vertical Profile (Troposphere) .................................... 98 3.4.2 ECV Product: Aerosol Light Extinction Vertical Profile (Stratosphere) .................................... 99 3.4.3 ECV Product: Multi-wavelength Aerosol Optical Depth ...................................................... 100 3.4.4 ECV product: Chemical Composition of Aerosol Particles ................................................... 102 3.4.5 ECV Product: Number of Cloud Condensation Nuclei ......................................................... 103 3.4.6 ECV Product: Aerosol Number Size Distribution ............................................................... 104 3.4.7 ECV Product: Aerosol Single Scattering Albedo ................................................................ 106 Ocean ECVs .......................................................................................................................... 
107 4. PHYSICS..................................................................................................................... 108 4.1 ECV: Sea-Surface Temperature ..................................................................................... 108 4.1.1 ECV Product: Sea-Surface Temperature ......................................................................... 108 4.2 ECV: Subsurface Temperature ....................................................................................... 109 4.2.1 ECV Product: Interior Temperature ................................................................................ 109 4.3 ECV: Sea-Surface Salinity ............................................................................................. 111 4.3.1 ECV Product: Sea-surface Salinity .................................................................................. 111 4.4 ECV: Subsurface Salinity .............................................................................................. 112 4.4.1 ECV Product: Interior Salinity ........................................................................................ 112 4.5 ECV: Surface Currents ................................................................................................. 113 4.5.1 ECV Product: Ekman Currents ....................................................................................... 113 4.5.2 ECV Product: Surface Geostrophic Current ...................................................................... 114 4.6 ECV: Subsurface Currents............................................................................................. 115 4.6.1 ECV Product: Vertical Mixing ......................................................................................... 115 4.7 ECV: Sea Level ............................................................................................................ 116 4.7.1 ECV Product: Regional Mean Sea Level ........................................................................... 116 4.7.2 ECV Product: Global Mean Sea Level .............................................................................. 117 4.8 ECV: Sea State ........................................................................................................... 118 4.8.1 ECV Product: Wave Height ............................................................................................ 118 4.9 ECV: Ocean Surface Stress ........................................................................................... 119 4.9.1 ECV Product: Ocean Surface Stress................................................................................ 119 4.10 ECV: Ocean Surface Heat Flux ....................................................................................... 120 4.10.1 ECV Product: Radiative Heat Flux .................................................................................. 120 4.10.2 ECV Product: Sensible Heat Flux .................................................................................... 121 4.10.3 ECV Product: Latent Heat Flux ...................................................................................... 122 4.11 ECV: Sea Ice ............................................................................................................... 123 4.11.1 ECV Product: Sea Ice Concentration ............................................................................... 123 4.11.2 ECV Product: Sea Ice Thickness .................................................................................... 
124 4.11.3 ECV Product: Sea Ice Drift ............................................................................................ 126 4.11.4 ECV Product: Sea Ice Age ............................................................................................. 127 4.11.5 ECV Product: Sea Ice Temperature ................................................................................ 129 4.11.6 ECV Product: Sea Ice Surface Albedo ............................................................................. 131 2022 GCOS ECVs Requirements 4.11.7 ECV Product: Snow Depth on Sea Ice ............................................................................. 133 5. BIOGEOCHEMISTRY ..................................................................................................... 135 5.1 ECV: Oxygen .............................................................................................................. 135 5.1.1 ECV Product: Dissolved Oxygen Concentration ................................................................ 135 5.2 ECV: Nutrients ............................................................................................................ 163 5.2.1 ECV Product: Silicate ................................................................................................... 163 5.2.2 ECV Product: Phosphate ............................................................................................... 164 5.2.3 ECV Product: Nitrate .................................................................................................... 165 5.3 ECV: Ocean Inorganic Carbon ....................................................................................... 166 5.3.1 ECV Product: Total Alkalinity (TA) .................................................................................. 166 5.3.2 ECV Product: Dissolved Inorganic Carbon (DIC) .............................................................. 167 5.3.3 ECV Product: pCO₂ ...................................................................................................... 168 5.4 ECV: Transient tracers ................................................................................................. 169 5.4.1 ECV Product: 14C ......................................................................................................... 169 5.4.2 ECV Product: SF₆......................................................................................................... 170 5.4.3 ECV Product: CFC-11 ................................................................................................... 171 5.4.4 ECV Product: CFC-12 ................................................................................................... 172 5.5 ECV: Ocean Nitrous Oxide N2O ...................................................................................... 173 5.5.1 ECV Product: Interior Ocean Nitrous Oxide N2O ............................................................... 173 5.5.2 ECV Product: N2O Air-sea Flux ...................................................................................... 174 5.6 ECV: Ocean Colour ...................................................................................................... 175 5.6.1 ECV Product: Chlorophyll-a ........................................................................................... 175 5.6.2 ECV Product: Water Leaving Radiance ............................................................................ 176 6. 
BIOSPHERE................................................................................................................. 177 6.1 ECV: Plankton ............................................................................................................. 177 6.1.1 ECV Product: Zooplankton Diversity ............................................................................... 177 6.1.2 ECV Product: Zooplankton Biomass................................................................................ 178 6.1.3 ECV Product: Phytoplankton Diversity ............................................................................ 179 6.1.4 ECV Product: Phytoplankton Biomass ............................................................................. 180 6.2 ECV: Marine Habitat Properties ...................................................................................... 181 6.2.1 ECV Product: Mangrove Cover and Composition .............................................................. 181 6.2.2 ECV Product: Seagrass Cover (areal extent) ................................................................... 182 6.2.3 ECV Product: Macroalgal Canopy Cover and Composition .................................................. 183 6.2.4 ECV Product: Hard Coral Cover and Composition ............................................................. 184 Terrestrial ECVs .................................................................................................................. 185 7. HYDROLOGY ............................................................................................................... 186 7.1 ECV: Groundwater ....................................................................................................... 186 7.1.1 ECV Product: Groundwater Storage Change .................................................................... 186 7.1.2 ECV Product: Groundwater Level ................................................................................... 188 7.2 ECV: Lakes ................................................................................................................. 190 7.2.1 ECV Product: Lake Water Level (LWL) ............................................................................ 190 7.2.2 ECV Product: Lake Water Extent (LWE) .......................................................................... 191 7.2.3 ECV Product: Lake Surface Water Temperature (LSWT) .................................................... 192 7.2.4 ECV Product: Lake Ice Cover (LIC) ................................................................................ 193 7.2.5 ECV Product: Lake Ice Thickness (LIT) ........................................................................... 194 2022 GCOS ECVs Requirements 7.2.6 ECV Product: Lake Water-Leaving Reflectance ................................................................. 195 7.3 ECV: River Discharge ................................................................................................... 196 7.3.1 ECV Product: River Discharge ........................................................................................ 196 7.3.2 ECV Product: Water Level ............................................................................................. 197 7.4 ECV: Soil moisture ....................................................................................................... 198 7.4.1 ECV Product: Surface Soil Moisture ................................................................................ 
198 7.4.2 ECV Product: Freeze/Thaw ............................................................................................ 199 7.4.3 ECV Product: Surface Inundation ................................................................................... 201 7.4.4 ECV Product: Root Zone Soil Moisture ............................................................................ 202 7.5 ECV: Terrestrial Water Storage (TWS) ............................................................................ 204 7.5.1 ECV Product: Terrestrial Water Storage Anomaly ............................................................. 204 8. CRYOSPHERE .............................................................................................................. 205 8.1 ECV: Snow ................................................................................................................. 205 8.1.1 ECV Product: Area Covered by Snow .............................................................................. 205 8.1.2 ECV Product: Snow Depth ............................................................................................. 206 8.1.3 ECV Product: Snow-Water Equivalent ............................................................................. 207 8.2 ECV: Glaciers .............................................................................................................. 208 8.2.1 ECV Product: Glacier Area ............................................................................................. 208 8.2.2 ECV Product: Glacier Elevation Change ........................................................................... 209 8.2.3 ECV Product: Glacier Mass Change ................................................................................. 210 8.3 ECV: Ice Sheets and Ice Shelves ................................................................................... 211 8.3.1 ECV Product: Surface Elevation Change .......................................................................... 211 8.3.2 ECV Product: Ice Velocity ............................................................................................. 212 8.3.3 ECV Product: Ice Volume Change .................................................................................. 213 8.3.4 ECV Product: Grounding Line Location and Thickness ....................................................... 214 8.4 ECV: Permafrost .......................................................................................................... 215 8.4.1 ECV Product: Permafrost Temperature (PT) .................................................................... 215 8.4.2 ECV Product: Active Layer Thickness (ALT) ..................................................................... 217 8.4.3 ECV Product: Rock Glacier Velocity (RGV) ....................................................................... 218 9. BIOSPHERE................................................................................................................. 220 9.1 ECV: Above-Ground Biomass......................................................................................... 220 9.1.1 ECV Product: Above-Ground Biomass (AGB) ................................................................... 220 9.2 ECV: Albedo ................................................................................................................ 
222 9.2.1 ECV Product: Spectral and Broadband (Visible, Near Infrared and Shortwave) DHR & BHR with Associated Spectral Bidirectional Reflectance Distribution Function (BRDF) Parameters ........ 222 9.3 ECV: Evaporation from Land ......................................................................................... 223 9.3.1 ECV Product: Sensible Heat Flux .................................................................................... 223 9.3.2 ECV Product: Latent Heat Flux ....................................................................................... 224 9.3.3 ECV Product: Bare Soil Evaporation ................................................................................ 226 9.3.4 ECV Product: Interception Loss ...................................................................................... 228 9.3.5 ECV Product: Transpiration ........................................................................................... 230 9.4 ECV: Fire .................................................................................................................... 232 9.4.1 ECV Product: Burned Area ............................................................................................ 232 9.4.2 ECV Product: Active Fires .............................................................................................. 234 9.4.3 ECV Product: Fire Radiative Power (FRP) ........................................................................ 236 2022 GCOS ECVs Requirements 9.5 ECV: Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) .............................. 237 9.5.1 ECV Product: Fraction of Absorbed Photosynthetically Active Radiation ............................... 237 9.6 ECV: Land Cover ......................................................................................................... 239 9.6.1 ECV Product: Land Cover .............................................................................................. 239 9.6.2 ECV Product: Maps of High-Resolution Land Cover ........................................................... 241 9.6.3 ECV Product: Maps of Key IPCC Land Classes, Related Changes and Land Management Types ... ................................................................................................................................. 243 9.7 ECV: Land Surface Temperature .................................................................................... 245 9.7.1 ECV Product: Land Surface Temperature (LST) ................................................................ 245 9.7.2 ECV Product: Soil Temperature ...................................................................................... 247 9.8 ECV: Leaf Area Index ................................................................................................... 248 9.8.1 ECV Product: Leaf Area Index (LAI) ............................................................................... 248 9.9 ECV: Soil carbon .......................................................................................................... 250 9.9.1 ECV Product: Carbon in Soil .......................................................................................... 250 9.9.2 ECV Product: Mineral Soil Bulk Density ........................................................................... 251 9.9.3 ECV Product: Peatlands ................................................................................................ 252 10. 
ANTHROPOGENIC ........................................................................................................ 253 10.1 ECV: Anthropogenic Greenhouse Gas Fluxes ................................................................... 253 10.1.1 ECV Product: Anthropogenic CO2 Emissions from Fossil Fuel Use, Industry, Agriculture, Waste and Products Use ......................................................................................................... 253 10.1.2 ECV Product: Anthropogenic CH4 Emissions from Fossil Fuel, Waste, Agriculture, Industrial Processes and Fuel Use ................................................................................................ 254 10.1.3 ECV Product: Anthropogenic N2O Emissions from Fossil Fuel Use, Industry, ... Agriculture, Waste and Products Use, Indirect from N-Related Emissions/Depositions ..................................... 255 10.1.4 ECV Product: Anthropogenic F-Gas Emissions from Industrial Processes and Product Use ..... 256 10.1.5 ECV Product: Total Estimated Fluxes by Coupled Data Assimilation/ Models with Observed Atmospheric Composition – National .............................................................................. 257 10.1.6 ECV Product: Total Estimated Fluxes by Coupled Data Assimilation/ Models with Observed Atmospheric Composition – Continental .......................................................................... 258 10.1.7 ECV Product: Anthropogenic CO2 Emissions/Removals by Land Categories .......................... 259 10.1.8 ECV Product: High-Resolution Footprint Around Point Sources ........................................... 260 10.2 ECV: Anthropogenic Water Use ...................................................................................... 261 10.2.1 ECV Product: Anthropogenic Water Use .......................................................................... 261 2022 GCOS ECVs Requirements 1. INTRODUCTION This document is a supplement to the 2022 GCOS Implementation Plan (GCOS-244) and presents the updated list of Essential Climate Variables (ECVs) requirements. An ECV is a physical, chemical or biological variable (or group of linked variables) that critically contributes to the characterization of Earth’s climate. An ECV product, is a measurable parameter needed to characterize the ECV. GCOS has asked its expert panels, informed by the wider community, to define requirements for the ECV products of all ECVs detailed in this document. A complete list of contributors is provided in GCOS-244 Appendix 3. The requirements are expressed in terms of five criteria: 1. Spatial Resolution - horizontal and vertical (if needed). 2. Temporal resolution (or frequency) – the frequency of observations e.g. hourly, daily or annual. 3. Measurement Uncertainty – the parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand (GUM)1. It includes all contributions to the uncertainty, expressed in units of 2 standard deviations, unless stated otherwise. 4. Stability – The change in bias over time. Stability is quoted per decade. 5. Timeliness - The time expectation for accessibility and availability of data. In this Implementation Plan, for each of these criteria, a goal, breakthrough and threshold value are presented. These are defined as: • Goal (G): an ideal requirement above which further improvements are not necessary. 
• Breakthrough (B): an intermediate level between threshold and goal which, if achieved, would result in a significant improvement for the targeted application. The breakthrough value may also indicate the level at which specified uses within climate monitoring become possible. It may be appropriate to have different breakthrough values for different uses. • Threshold (T): the minimum requirement to be met to ensure that data are useful. For each ECV product, a definition and units are provided together with the requirements. 2. EVOLUTION OF ECVS REQUIREMENTS The ECV framework has evolved since the publication of the previous list of ECVs requirements in the GCOS IP 2016. The list of ECVs and ECVs products has changed as well, and the following table illustrates those changes. Atmosphere ECV ECV Product 2016 ECV Product 2022 Surface Pressure Pressure (surface) Air Pressure (near surface) Surface Temperature Temperature (surface) Air Temperature (near surface) Surface wind Speed and Direction Surface wind Speed and Direction Wind Speed (near surface) Wind Direction (near surface) Wind Vector (near surface) Water Vapour (surface) Dew Point Temperature (near surface) 1 2022 GCOS ECVs Requirements - 2 - Surface Water Vapour Relative Humidity (near surface) Air Specific Humidity (near surface) Precipitation Estimates of Liquid and Solid Precipitation Accumulated precipitation Surface Radiation Budget Surface ERB Short-Wave Downward Short-Wave Irradiance at Earth Surface Surface ERB long-Wave Downward Long-Wave Irradiance at Earth Surface Upward Long-Wave Irradiance at Earth Surface Upper-air Temperature Tropospheric Temperature Profile Atmospheric Temperature in the Boundary Layer Stratospheric Temperature Profile Atmospheric Temperature in the Free Troposphere Atmospheric Temperature in the Upper Troposphere and Lower Stratosphere Temperature of the Deep Atmospheric Layers Atmospheric Temperature in the Middle and Upper Stratosphere Atmospheric Temperature in the Mesosphere Upper-air Wind Speed and Direction Upper-Air Wind Retrievals Wind (horizontal) in the Boundary Layer Wind (horizontal) in the Free Troposphere Wind (horizontal) in the Upper Troposphere and Lower Stratosphere Wind (horizontal) in the Middle and Upper Stratosphere Wind (horizontal) in the Mesosphere Wind (vertical) in the Boundary Layer Wind (vertical) in the Free Troposphere Wind (vertical) in the Upper Troposphere and Lower Stratosphere Wind (vertical) In the Middle and Upper Stratosphere Wind (vertical) in the Mesosphere Upper-air Water Vapour Tropospheric and Lower-Stratospheric profile of Water Vapour Water Vapour Mixing Ratio in the Upper Troposphere and Lower Stratosphere Water Vapour Mixing Ratio in the Middle and Upper Stratosphere Water Vapour Mixing Ratio in the Mesosphere Relative Humidity in the Boundary Layer Upper Tropospheric Humidity Relative Humidity in the Free Troposphere Relative Humidity in the Upper Troposphere and Lower Stratosphere Specific Humidity in the Boundary Layer Specific Humidity in the Free Troposphere Total Column Water Vapour Integrated Water Vapour Earth Radiation Budget Solar Spectral Irradiance Solar Spectral Irradiance Total Solar Irradiance Downward Short-Wave Irradiance at Top of the Atmosphere Top of the Atmosphere ERB Long-Wave Upward Long-Wave Irradiance at Top of the Atmosphere Top of the Atmosphere ERB Short-Wave Upward Short-Wave Irradiance at Top of the Atmosphere Radiation Profile Cloud Properties Cloud Amount Cloud Cover Cloud Water Path (liquid and ice) Cloud Liquid Water 
Path Cloud Ice Water Path 2022 GCOS ECVs Requirements - 3 - Cloud Effective particle radius (liquid and ice) Cloud Drop Effective Radius Cloud Optical Depth Cloud Optical Depth Cloud Top Temperature Cloud Top Temperature Cloud Top Pressure Cloud Top Height Lightning Lightning Total Lightning Stroke Density Schumann Resonances Carbon Dioxide, Methane and Other Greenhouse Gases Tropospheric CO2 CO2 Mole Fraction Tropospheric CO2 Column CO2 Column Average Dry Air Mixing Ratio Tropospheric CH4 CH4 Mole Fraction Stratospheric CH4 Tropospheric CH4 Column CH4 Column Average Dry Air Mixing Ratio N2O Mole Fraction Ozone Troposphere Ozone Ozone Mole Fraction in the Troposphere Ozone Profile in Upper and Lower Stratosphere Ozone Mole Fraction in the Upper Troposphere/ Lower Stratosphere Ozone Profile in Upper Stratosphere and Mesosphere Ozone Mole Fraction in the Middle and Upper Stratosphere Total Column Ozone Ozone Total Column Ozone Tropospheric Column Ozone Stratospheric Column Precursors (Supporting the aerosol and ozone ECVs) CO Tropospheric Column CO Tropospheric Column CO Tropospheric Profile CO Mole Fraction SO2, HCHO Tropospheric Columns HCHO Tropospheric Column SO2 Tropospheric Column SO2 Stratospheric Column NO2 Tropospheric Column NO2 Tropospheric Column NO2 Mole Fraction Aerosols Properties Aerosol Extinction Coefficient Profile Aerosol Light Extinction Vertical Profile (Troposphere) Aerosol Light Extinction Vertical Profile (Stratosphere) Aerosol Optical Depth Multi-wavelength Aerosol Optical Depth Single Scattering Albedo Aerosol Single Scattering Albedo Aerosol Layer Height Chemical Composition of Aerosol Particles Number of Cloud Condensation Nuclei Aerosol Number Size Distribution Ocean ECV ECV Product 2016 ECV Product 2022 Sea-Surface temperature Sea-Surface temperature Sea-Surface temperature Subsurface Temperature Interior Temperature Interior Temperature Sea-Surface Salinity Sea-Surface Salinity Sea-Surface Salinity Subsurface Salinity Interior Salinity Interior Salinity Surface Currents Surface Geostrophic Current Surface Geostrophic Current Ekman Currents Subsurface Currents Interior Currents Vertical Mixing Sea Level Regional Sea Level Regional Mean Sea Level Global Mean Sea Level Global Mean Sea Level 2022 GCOS ECVs Requirements - 4 - Sea State Wave Height Wave Height Surface Stress Surface Stress Surface Stress Ocean Surface Heat Flux Radiative Heat Flux Radiative Heat Flux Sensible Heat Flux Sensible Heat Flux Latent Heat Flux Latent Heat Flux Sea Ice Sea Ice Concentration Sea Ice Concentration Sea Ice Thickness Sea Ice Thickness Sea Ice Drift Sea Ice Drift Sea Ice Extent/Edge Sea Ice Age Sea Ice Surface Temperature (IST) Sea ice Surface Albedo Snow Depth on Sea Ice Oxygen Interior Ocean Oxygen Concentration Dissolved Oxygen Concentration Nutrients Interior Ocean Concentrations of Silicate, Phosphate, nitrate Silicate Phosphate Nitrate Ocean Inorganic Carbon Interior Ocean Carbon Storage. 
(At least 2 of DIC, TA or pH) Total Alkalinity (TA) Dissolved Inorganic Carbon (DIC) pCO₂ Transient Tracers Interior Ocean CFC-11, CFC-12, SF₆, 14C, tritium, 3He, 39Ar 14C SF₆ CFC-11 CFC-12 Ocean nitrous oxide N2O Interior Ocean Nitrous Oxide N2O Interior Ocean Nitrous Oxide N2O N2O Air-Sea Flux N2O Air-Sea Flux Ocean Colour Water Leaving Radiance Water Leaving Radiance Chlorophyll-a concentration Chlorophyll-a concentration Plankton Zooplankton Zooplankton Diversity Zooplankton Biomass Phytoplankton Phytoplankton Diversity Phytoplankton Biomass Marine Habitat Properties Coral Reefs, mangrove forests, seagrass beds, Macroalgal Communities Mangrove Cover and Composition Seagrass Cover (areal extent) Macroalgal Canopy Cover and Composition Hard coral cover and composition Terrestrial ECV ECV Product 2016 ECV Product 2022 Groundwater Groundwater Volume Change Groundwater Storage Change Groundwater Level Groundwater Level Groundwater Recharge Groundwater Discharge Wellhead Level Water Quality Lakes Lake Water Level Lake Water Level (LWL) Water Extent Lake Water Extent (LWE) Lake Surface-Water Temperature Lake Surface Water Temperature (LSWT) Lake Ice Cover Lake Ice Cover (LIC) Lake Ice Thickness Lake Ice Thickness (LIT) Lake Colour (Lake Water-Leaving Reflectance) Lake Water-Leaving Reflectance River Discharge River Discharge River Discharge Water Level Water Level 2022 GCOS ECVs Requirements - 5 - Flow Velocity Cross-Section Soil Moisture Surface Soil Moisture Surface Soil Moisture Freeze/Thaw Freeze/Thaw Surface Inundation Surface Inundation Root-Zone Soil Moisture Root Zone Soil Moisture Terrestrial Water Storage2 Terrestrial Water Storage Anomaly Snow Area Covered by Snow Area Covered by Snow Snow Depth Snow Depth Snow-Water Equivalent Snow-Water Equivalent Glaciers Glacier Area Glacier Area Glacier Elevation Change Glacier Elevation Change Glacier Mass Change Glacier Mass Change Ice Sheets and Ice Shelves Surface Elevation Change Surface Elevation Change Ice Velocity Ice Velocity Ice Mass Change Ice Volume Change Grounding Line Location and Thickness Grounding Line Location and Thickness Permafrost Thermal State of Permafrost Permafrost Temperature (PT) Active Layer Thickness Active Layer Thickness (ALT) Rock Glacier Velocity (RGV) Fraction of FAPAR Maps of FAPAR for Modelling Fraction of Absorbed Photosynthetically Active Radiation Maps of FAPAR for Adaptation Leaf Area Index Maps of LAI for Modelling Leaf Area Index (LAI) Maps of LAI for Adaptation Albedo Maps of DHR Albedo for Adaptation Spectral and Broadband (Visible, Near Infrared and Shortwave) DHR & BHR with Associated Spectral Bidirectional Reflectance Distribution Function (BRDF) Parameters Maps of BHR Albedo for Adaptation Maps of DHR Albedo for Modelling Maps of BHR Albedo for Modelling Land-Surface Temperature Maps of Land-Surface Temperature Land Surface Temperature (LST) Soil Temperature3 Above-Ground Biomass Maps of AGB Above-Ground Biomass (AGB) Land Cover Maps of Land Cover Land Cover Maps of High-Resolution Land Cover Maps of High-Resolution Land Cover Maps of Key IPCC Land Use, Related Changes and Land-Management Types Maps of Key IPCC Land Classes, Related Changes and Land Management Types Soil Carbon % Carbon in Soil Carbon in Soil Mineral Soil Bulk Density to 30 Cm and 1 M Mineral Soil Bulk Density Peatlands Total Depth of Profile, Area and Location Peatlands Fire Burnt Areas Burned Area Active Fire Maps Active Fires Fire Radiative Power Fire Radiative Power (FRP) 2 This is the only new ECV approved by GCOS Steering 
Committee in 2020. 3 Soil Temperature is a new ECV product temporarily included under the ECV Land-Surface Temperature. Its positioning will be subject to evaluation by the TOPC Panel and the GCOS Steering Committee.
Anthropogenic Greenhouse-Gas Fluxes Emissions from Fossil Fuel Use, Industry, Agriculture and Waste Sectors Anthropogenic CO2 Emissions from Fossil Fuel Use, Industry, Agriculture, Waste and Products Use Anthropogenic CH4 Emissions from Fossil Fuel, Waste, Agriculture, Industrial Processes and Fuel Use Anthropogenic N2O Emissions from Fossil Fuel Use, Industry, Agriculture, Waste and Products Use, Indirect from N-Related Emissions/Depositions Anthropogenic F-Gas Emissions from Industrial Processes and Product Use Estimated Fluxes by Inversions of Observed Atmospheric Composition – National Total Estimated Fluxes by Coupled Data Assimilation/Models with Observed Atmospheric Composition – National Estimated Fluxes by Inversions of Observed Atmospheric Composition – Continental Total Estimated Fluxes by Coupled Data Assimilation/Models with Observed Atmospheric Composition – Continental Emissions/Removals by IPCC Land Categories Anthropogenic CO2 Emissions/Removals by Land Categories High-Resolution CO2 Column Concentrations to Monitor Point Sources High-Resolution Footprint Around Point Sources Evaporation from Land TOPC was considering the practicality of this being an ECV (Latent and Sensible Heat Fluxes) and, if so, what the requirements might be. Sensible Heat Flux Latent Heat Flux Bare Soil Evaporation Interception Loss Transpiration Anthropogenic Water Use Anthropogenic Water Use Anthropogenic Water Use

3. ECVS REQUIREMENTS TABLES

In this section the requirements for the ECVs and their products are presented in three sections: Atmospheric, Ocean and Terrestrial. Units are expressed according to the International System of Units. For the time unit, the following abbreviations are used: minute (min); day (d); month (month); year (y).

Atmospheric ECVs

1. SURFACE

1.1 ECV: Air Pressure

1.1.1 ECV product: Atmospheric Pressure (near surface)

Name: Atmospheric Pressure (near surface)
Definition: Air pressure at a known height above the surface, with the height specified in the metadata.
Unit: hPa
Note: Observations made over the ocean are not static, being mostly recorded by mobile ships and drifting buoys (Kent et al., 2019). Requirements for marine surface observations must therefore be defined in terms of the composite accuracy and sampling of the marine observing networks to achieve comparable uncertainty thresholds at similar resolution. The primary application of pressure in climate monitoring relates to reanalysis, and these requirements have been set accordingly. Timeliness does not preclude delayed-mode acquisition via, e.g., data rescue. Important also, but not covered in the table, is the observation location information: a mis-placed observation of surface pressure (particularly the station elevation) will have substantial implications for reanalysis applications.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | Resolution is consistent with other surface ECVs |
| | | B | 100 | |
| | | T | 500 | |
| Vertical Resolution | | G | - | N/A |
| | | B | - | |
| | | T | - | |
| Temporal Resolution | h | G | 1 | |
| | | B | 6 | |
| | | T | 12 | |
| Timeliness | h | G | 6 | |
| | | B | 24 | |
| | | T | 720 | Monthly |
| Required Measurement Uncertainty (2-sigma) | hPa | G | 0.5 | |
| | | B | 1 | |
| | | T | 1 | |
| Stability | hPa/decade | G | 0.02 | |
| | | B | 0.1 | |
| | | T | 0.2 | |

Standards and References: Kent, E.C., Rayner, N.A., Berry, D.I., Eastman, R., Grigorieva, V.G., Huang, B., Kennedy, J.J., Smith, S.R. and Willett, K.M., 2019: Observing Requirements for Long-Term Climate Records at the Ocean Surface. Frontiers in Marine Science 6, Article 441, doi:10.3389/fmars.2019.00441.
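The goal/breakthrough/threshold (G/B/T) structure of these requirement tables maps naturally onto a small machine-readable record. The sketch below is purely illustrative: the RequirementRow and classify names are invented for this example and are not part of any GCOS or WMO tooling. It encodes two rows of the Atmospheric Pressure table above and shows how an observed capability could be classified against the three levels.

```python
# Illustrative sketch only: RequirementRow and classify are invented names,
# not part of any GCOS or WMO software. Values come from the table above.
from dataclasses import dataclass

@dataclass
class RequirementRow:
    item: str                        # e.g. "Horizontal Resolution"
    unit: str                        # e.g. "km"
    goal: float                      # G: ideal requirement
    breakthrough: float              # B: intermediate level
    threshold: float                 # T: minimum useful requirement
    smaller_is_better: bool = True   # resolution and uncertainty: smaller values are better

def classify(row: RequirementRow, value: float) -> str:
    """Return the most demanding requirement level that a measured capability meets."""
    def meets(target: float) -> bool:
        return value <= target if row.smaller_is_better else value >= target
    if meets(row.goal):
        return "goal"
    if meets(row.breakthrough):
        return "breakthrough"
    if meets(row.threshold):
        return "threshold"
    return "below threshold"

# Rows taken from the Atmospheric Pressure (near surface) table above.
horizontal_resolution = RequirementRow("Horizontal Resolution", "km", 10, 100, 500)
uncertainty_2sigma = RequirementRow("Measurement Uncertainty (2-sigma)", "hPa", 0.5, 1, 1)

print(classify(horizontal_resolution, 250))  # "threshold"
print(classify(uncertainty_2sigma, 0.4))     # "goal"
```

The smaller_is_better flag reflects that, for criteria such as resolution and uncertainty, lower numbers are more demanding; for a criterion where larger values were better, the comparison would simply be reversed.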
1.2 ECV: Surface Temperature

1.2.1 ECV Product: Air Temperature (near surface)

Name: Air Temperature (near surface)
Definition: Air temperature at a known height above surface, with the height specified in the metadata.
Unit: K
Note: The terminology used here for Tx (maximum daily temperature) and Tn (minimum daily temperature) and the observing cycle applies only to land-based meteorological stations. Observations made over the ocean are not static, being mostly recorded by mobile ships and drifting buoys (Kent et al., 2019). Requirements for marine surface observations must therefore be defined in terms of the composite accuracy and sampling of the marine observing networks to achieve comparable uncertainty thresholds at similar resolution, for example through the construction of gridded data products. Breakthrough targets are generally needed for reanalysis to make good use of these data. Temporal resolution: for better reanalysis, more sampling is needed, down to 100 km and sub-daily (hourly or 3-hourly); this is also needed for monitoring of extremes. For determining global annual temperature averages, the current network of land stations and ship and buoy measurements is adequate, but regional and higher-temporal-resolution averages can be highly uncertain (e.g. the 500 km sampling is not achieved in many regions, such as Africa, the polar regions and the Southern Ocean). Even if the goal sampling were reached, the uncertainty in the monthly global average temperatures would be reduced, but not by much from what it is now; however, these more stringent requirements would allow regional monthly averages to be calculated. Timeliness requirements are for routine applications related to climate monitoring, such as assimilation into reanalyses or the update of monitoring products. Observations that miss these timeliness requirements remain useful for some climate applications and can, for example, be used in periodic revisions to climate monitoring products.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | Thorne et al. (2018) |
| | | B | 100 | Thorne et al. (2018) |
| | | T | 500 | Threshold for horizontal resolution is based on the literature, specifically over land, where correlation distances tend to be smaller than over the oceans. Thorne et al. (2018) showed via repeat sub-sampling of CRUTEM4 that well-spaced networks of the order of 180 stations over the globe could recreate full-field global mean land surface air temperature estimates (see details in Jones et al., 1997) for the monthly timescale. For surface air temperature over the ocean, which is taken predominantly by ships and buoys, this can be challenging in remote ocean basins (see the earlier note and Kent et al., 2019). |
| Vertical Resolution | | G | - | N/A |
| | | B | - | |
| | | T | - | |
| Temporal Resolution | h | G | < 1 | Sub-hourly. Required for derivation of extreme indices. |
| | | B | 1 | Required for Climate Data Assimilation System (CDAS)-mode reanalysis assimilation. Breakthrough is the monthly average necessary to inform the global, regional and national monitoring statements from WMO and members. Captures most of the variability in the diurnal cycle. |
| | | T | 3 | Minimum sampling of diurnal cycle (daily Tx/Tn) |
| Timeliness | h | G | 6 | Allows use in near-real time reanalysis |
| | | B | 24 | Required for CDAS-mode reanalysis assimilation. Allows use in daily climate monitoring products |
| | | T | 720 | Monthly average is necessary to inform the global, regional and national monitoring statements from WMO and members. Allows use in monthly climate monitoring products |
| Required Measurement Uncertainty (2-sigma) | K | G | 0.1 | Uncertainty is assumed to include random and systematic effects. Thorne et al. (2018); Jones et al. (1997) |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | K/decade | G | 0.01 | Required for large-scale averages over century scales |
| | | B | 0.05 | Required for large-scale averages over multi-decadal scales |
| | | T | 0.1 | Required for regional averages over multi-decadal scales |

Standards and References:
Jones, P.D., Osborn, T.J. and Briffa, K.R., 1997: Estimating sampling errors in large-scale temperature averages. J. Climate 10, 2548-2568.
Kent, E.C., Rayner, N.A., Berry, D.I., Eastman, R., Grigorieva, V.G., Huang, B., Kennedy, J.J., Smith, S.R. and Willett, K.M., 2019: Observing Requirements for Long-Term Climate Records at the Ocean Surface. Frontiers in Marine Science 6, Article 441, doi:10.3389/fmars.2019.00441.
Thorne, P.W., Diamond, H.J., Goodison, B., Harrigan, S., Hausfather, Z., Ingleby, N.B., Jones, P.D., Lawrimore, J.H., Lister, D.H., Merlone, A., Oakley, T., Palecki, M., Peterson, T.C., de Podesta, M., Tassone, C., Venema, V. and Willett, K.M., 2018: Towards a global land surface climate fiducial reference measurements network. Int. J. Climatol. 38, 2760-2774.

1.3 ECV: Surface Wind Speed and Direction

1.3.1 ECV Product: Wind Direction (near surface)

Name: Wind Direction (near surface)
Definition: Direction from which wind is blowing at a known height above the surface, which is to be specified in the metadata.
Unit: Degree true
Note: Wind directions are normally reported as an average due to their high variability. The averaging period should be reported as metadata. Timeliness requirements are for routine applications related to climate monitoring, such as assimilation into reanalyses or the update of monitoring products. Observations that miss these timeliness requirements remain useful for some climate applications and can, for example, be used in periodic revisions to climate monitoring products.
Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | |
| | | B | 100 | For consistency with other surface ECVs |
| | | T | 500 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | h | G | < 1 | Sub-hourly |
| | | B | 1 | Captures most of the variability in the diurnal cycle |
| | | T | 3 | Minimum sampling of diurnal cycle |
| Timeliness | h | G | 6 | Allows use in near-real time reanalysis |
| | | B | 24 | Allows use in daily climate monitoring products |
| | | T | 720 | Allows use in monthly climate monitoring products |
| Required Measurement Uncertainty (2-sigma) | degrees | G | 1 | |
| | | B | 5 | |
| | | T | 10 | |
| Stability | degrees/decade | G | 1 | |
| | | B | 2 | |
| | | T | 5 | |

Standards and References:
Kent, E.C., Rayner, N.A., Berry, D.I., Eastman, R., Grigorieva, V.G., Huang, B., Kennedy, J.J., Smith, S.R. and Willett, K.M., 2019: Observing Requirements for Long-Term Climate Records at the Ocean Surface. Frontiers in Marine Science 6, Article 441, doi:10.3389/fmars.2019.00441.

1.3.2 ECV Product: Wind Speed (near surface)

Name: Wind Speed (near surface)
Definition: Speed of air at a known height above the surface, which is to be specified in the metadata.
Unit: m s-1
Note: Wind speeds are normally reported as an average due to their high variability. The averaging period should be reported as metadata. Observations made over the ocean are not static, being mostly recorded by mobile ships and drifting buoys (Kent et al., 2019). Requirements for marine surface observations must therefore be defined in terms of the composite accuracy and sampling of the marine observing networks to achieve comparable uncertainty thresholds at similar resolution.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | |
| | | B | 100 | |
| | | T | 500 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | h | G | < 1 | Sub-hourly |
| | | B | 1 | Captures most of the variability in the diurnal cycle |
| | | T | 3 | Minimum sampling of diurnal cycle |
| Timeliness | h | G | 6 | Allows use in near-real time reanalysis |
| | | B | 24 | |
| | | T | 720 | Monthly |
| Required Measurement Uncertainty (2-sigma) | m s-1 | G | 0.1 | |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | m s-1/decade | G | 0.1 | |
| | | B | 0.25 | |
| | | T | 0.5 | |

Standards and References:
Kent, E.C., Rayner, N.A., Berry, D.I., Eastman, R., Grigorieva, V.G., Huang, B., Kennedy, J.J., Smith, S.R. and Willett, K.M., 2019: Observing Requirements for Long-Term Climate Records at the Ocean Surface. Frontiers in Marine Science 6, Article 441, doi:10.3389/fmars.2019.00441.

1.3.3 ECV Product: Wind Vector (near surface)

Name: Wind Vector (near surface)
Definition: Horizontal wind vector, at a known height above the surface, which is to be specified in the metadata.
Unit: m s-1
Note: Wind directions are normally reported as an average due to their high variability. The averaging period should be reported as metadata.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | |
| | | B | 100 | |
| | | T | 500 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | h | G | < 1 | Sub-hourly |
| | | B | 1 | Captures most of the variability in the diurnal cycle |
| | | T | 3 | Minimum sampling of diurnal cycle |
| Timeliness | h | G | 6 | |
| | | B | 24 | |
| | | T | 720 | Monthly |
| Required Measurement Uncertainty (2-sigma) | m s-1 | G | 0.1 | |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | m s-1/decade | G | 0.1 | |
| | | B | 0.25 | |
| | | T | 0.5 | |

Standards and References:
Kent, E.C., Rayner, N.A., Berry, D.I., Eastman, R., Grigorieva, V.G., Huang, B., Kennedy, J.J., Smith, S.R. and Willett, K.M., 2019: Observing Requirements for Long-Term Climate Records at the Ocean Surface. Frontiers in Marine Science 6, Article 441, doi:10.3389/fmars.2019.00441.
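The near-surface wind vector, wind speed and wind direction products are related through the usual decomposition into eastward (u) and northward (v) components. The sketch below assumes the common meteorological convention that direction is reported in degrees true as the direction the wind blows from; it is illustrative only and not a conversion mandated by this document.

```python
import math

def wind_components(speed_ms, direction_deg_true):
    """Convert speed (m s-1) and direction (degrees true, 'blowing from')
    into eastward (u) and northward (v) components, assuming the usual
    meteorological sign convention."""
    d = math.radians(direction_deg_true)
    u = -speed_ms * math.sin(d)  # eastward component
    v = -speed_ms * math.cos(d)  # northward component
    return u, v

def speed_and_direction(u, v):
    """Inverse conversion back to speed and direction (degrees true)."""
    speed = math.hypot(u, v)
    direction = math.degrees(math.atan2(-u, -v)) % 360.0
    return speed, direction

# A 10 m s-1 northerly wind (blowing from 0 deg true) has u = 0, v = -10.
print(wind_components(10.0, 0.0))
print(speed_and_direction(0.0, -10.0))
```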
1.4 ECV: Surface Water Vapour

1.4.1 ECV Product: Dew Point Temperature (near surface)

Name: Dew Point Temperature (near surface)
Definition: Temperature to which air must be cooled to become saturated with water vapour at a known height above surface, with the height specified in the metadata.
Unit: K
Note: Observations made over the ocean are not static, being mostly recorded by mobile ships and drifting buoys (Kent et al., 2019). Requirements for marine surface observations must therefore be defined in terms of the composite accuracy and sampling of the marine observing networks to achieve comparable uncertainty thresholds at similar resolution, for example through the construction of gridded data products. Willett et al. (2008) show that spatial scales of near-surface dew point temperature are comparable to those of temperature, so the same horizontal resolution should be broadly applicable. Timeliness requirements are for routine applications related to climate monitoring, such as assimilation into reanalyses or the update of monitoring products. Observations that miss these timeliness requirements remain useful for some climate applications and can, for example, be used in periodic revisions to climate monitoring products.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | Willett et al. (2008), based on analogy with temperature |
| | | B | 100 | |
| | | T | 500 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | h | G | < 1 | Sub-hourly |
| | | B | 1 | Captures most of the variability in the diurnal cycle |
| | | T | 3 | Minimum sampling of diurnal cycle |
| Timeliness | h | G | 6 | Allows use in near-real time reanalysis |
| | | B | 24 | Allows use in daily climate monitoring products |
| | | T | 720 | Allows use in monthly climate monitoring products |
| Required Measurement Uncertainty (2-sigma) | K | G | 0.1 | |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | K/decade | G | 0.01 | Required for large-scale averages over century scales |
| | | B | 0.05 | Required for large-scale averages over multi-decadal scales |
| | | T | 0.1 | Required for regional averages over multi-decadal scales |

Standards and References:
Kent, E.C., Rayner, N.A., Berry, D.I., Eastman, R., Grigorieva, V.G., Huang, B., Kennedy, J.J., Smith, S.R. and Willett, K.M., 2019: Observing Requirements for Long-Term Climate Records at the Ocean Surface. Frontiers in Marine Science 6, Article 441, doi:10.3389/fmars.2019.00441.
Willett, K.M., Dunn, R.J.H., Thorne, P.W., Bell, S., de Podesta, M., Parker, D.E., Jones, P.D. and Williams Jr., C.N., 2014: HadISDH land surface multi-variable humidity and temperature record for climate monitoring. Clim. Past, 10, 1983-2006, doi:10.5194/cp-10-1983-2014.
Willett, K.M., Williams Jr., C.N., Dunn, R.J.H., Thorne, P.W., Bell, S., de Podesta, M., Jones, P.D. and Parker, D.E., 2013: HadISDH: An updated land surface specific humidity product for climate monitoring. Climate of the Past, 9, 657-677, doi:10.5194/cp-9-657-2013.

1.4.2 ECV Product: Relative Humidity (near surface)

Name: Relative Humidity (near surface)
Definition: Relative humidity at a known height above surface, with the height specified in the metadata. Relative humidity is the ratio of the amount of atmospheric moisture present relative to the amount that would be present if the air were saturated; whether saturation is defined with respect to water or ice is to be specified in the metadata.
Unit: %
Note: Observations made over the ocean are not static, being mostly recorded by mobile ships and drifting buoys (Kent et al., 2019). Requirements for marine surface observations must therefore be defined in terms of the composite accuracy and sampling of the marine observing networks to achieve comparable uncertainty thresholds at similar resolution. Relative humidity is often derived from temperature and dewpoint temperature. It is important that the conversions be applied at the observation scale so as not to introduce both random and systematic effects into the analysis. Formulae to convert between the various water vapour metrics (Specific Humidity, Relative Humidity and Dewpoint) are given in Willett et al. (2008). The observation requirements for each of the humidity variables are based on those for dewpoint temperature and are approximate; for more detailed information see Bell (1996).

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | By analogy with near-surface dewpoint temperature via near-surface air temperature; requirement therefore tentative. |
| | | B | 100 | |
| | | T | 500 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | h | G | < 1 | Sub-hourly |
| | | B | 1 | |
| | | T | 3 | |
| Timeliness | h | G | 6 | |
| | | B | 24 | |
| | | T | 720 | Monthly |
| Required Measurement Uncertainty (2-sigma) | %RH | G | 0.5 | |
| | | B | 2.5 | |
| | | T | 5 | |
| Stability | %RH/decade | G | 0.05 | |
| | | B | 0.25 | |
| | | T | 0.5 | |

Standards and References:
Bell, S., 1996: Guide to the Measurement of Humidity. Guide 103, NPL.
Kent, E.C., Rayner, N.A., Berry, D.I., Eastman, R., Grigorieva, V.G., Huang, B., Kennedy, J.J., Smith, S.R. and Willett, K.M., 2019: Observing Requirements for Long-Term Climate Records at the Ocean Surface. Frontiers in Marine Science 6, Article 441, doi:10.3389/fmars.2019.00441.
Willett, K.M., Dunn, R.J.H., Thorne, P.W., Bell, S., de Podesta, M., Parker, D.E., Jones, P.D. and Williams Jr., C.N., 2014: HadISDH land surface multi-variable humidity and temperature record for climate monitoring. Clim. Past, 10, 1983-2006, doi:10.5194/cp-10-1983-2014.
Willett, K.M., Williams Jr., C.N., Dunn, R.J.H., Thorne, P.W., Bell, S., de Podesta, M., Jones, P.D. and Parker, D.E., 2013: HadISDH: An updated land surface specific humidity product for climate monitoring. Climate of the Past, 9, 657-677, doi:10.5194/cp-9-657-2013.

1.4.3 ECV Product: Air Specific Humidity (near surface)

Name: Atmospheric Specific Humidity (near surface)
Definition: Air specific humidity at a known height above surface, with the height specified in the metadata. Specific humidity is the ratio of the mass of water vapour to the mass of moist air.
Unit: g kg-1
Note: Observations made over the ocean are not static, being mostly recorded by mobile ships and drifting buoys (Kent et al., 2019). Requirements for marine surface observations must therefore be defined in terms of the composite accuracy and sampling of the marine observing networks to achieve comparable uncertainty thresholds at similar resolution. Willett et al. (2008) show that spatial scales of surface specific humidity are comparable to those of temperature, so the same horizontal resolution should be broadly applicable. Specific humidity is generally derived from temperature and dewpoint temperature. It is important that the conversions be applied at the observation scale so as not to introduce both random and systematic effects into the analysis. Formulae to convert between the various water vapour metrics (Specific Humidity, Relative Humidity and Dewpoint) are given in Willett et al. (2008).
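As a purely illustrative sketch of such a conversion chain (not the specific formulae of Willett et al. (2008)), the example below uses a Magnus-type approximation for saturation vapour pressure; the coefficients, the use of degrees Celsius and the station pressure value are assumptions made only for this example.

```python
import math

def vapour_pressure_hpa(t_c):
    """Vapour pressure over water (hPa) at temperature t_c (deg C).
    Passing the dewpoint gives the actual vapour pressure; passing the air
    temperature gives the saturation value. Magnus-type approximation (assumed)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_c, td_c):
    """Relative humidity (%) from air temperature and dewpoint (deg C)."""
    return 100.0 * vapour_pressure_hpa(td_c) / vapour_pressure_hpa(t_c)

def specific_humidity(td_c, p_hpa):
    """Specific humidity (g kg-1) from dewpoint (deg C) and station pressure (hPa)."""
    e = vapour_pressure_hpa(td_c)
    return 1000.0 * 0.622 * e / (p_hpa - 0.378 * e)

# Example observation (values assumed): T = 20 degC, Td = 15 degC, p = 1013 hPa.
print(round(relative_humidity(20.0, 15.0), 1))    # ~73 %RH
print(round(specific_humidity(15.0, 1013.0), 2))  # ~10.5 g kg-1
```

Applying such formulae at the observation scale, as the note requires, means converting each individual report before any gridding or averaging, rather than converting already-averaged fields.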
Given the orders-of-magnitude variation in specific humidity between the tropics and the polar regions, there is a strong case for latitudinally varying requirements for uncertainty and stability, which would be most stringent in polar climates, less stringent in extra-tropical climates and least stringent in the tropics. Current values are a compromise which may be indicative of extra-tropical locations.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | |
| | | B | 100 | |
| | | T | 500 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | h | G | < 1 | Sub-hourly |
| | | B | 1 | |
| | | T | 3 | |
| Timeliness | h | G | 6 | |
| | | B | 24 | |
| | | T | 720 | Monthly |
| Required Measurement Uncertainty (2-sigma) | g kg-1 | G | 0.1 | |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | g kg-1/decade | G | 0.01 | |
| | | B | 0.05 | |
| | | T | 0.1 | |

Standards and References:
Kent, E.C., Rayner, N.A., Berry, D.I., Eastman, R., Grigorieva, V.G., Huang, B., Kennedy, J.J., Smith, S.R. and Willett, K.M., 2019: Observing Requirements for Long-Term Climate Records at the Ocean Surface. Frontiers in Marine Science 6, Article 441, doi:10.3389/fmars.2019.00441.
Willett, K.M., Dunn, R.J.H., Thorne, P.W., Bell, S., de Podesta, M., Parker, D.E., Jones, P.D. and Williams Jr., C.N., 2014: HadISDH land surface multi-variable humidity and temperature record for climate monitoring. Clim. Past, 10, 1983-2006, doi:10.5194/cp-10-1983-2014.
Willett, K.M., Williams Jr., C.N., Dunn, R.J.H., Thorne, P.W., Bell, S., de Podesta, M., Jones, P.D. and Parker, D.E., 2013: HadISDH: An updated land surface specific humidity product for climate monitoring. Climate of the Past, 9, 657-677, doi:10.5194/cp-9-657-2013.

1.5 ECV: Precipitation

1.5.1 ECV Product: Accumulated Precipitation

Name: Accumulated precipitation
Definition: Integration of solid and liquid precipitation rate reaching the ground over a time period defined in the metadata.
Unit: mm
Note: This ECV is designed to monitor the amount of precipitation globally in order to investigate the impact on the hydrological cycle, agriculture, drinking water supply or droughts. It is intended to support studies on a continental to global scale. This implies that it is not designed to monitor extremes globally on a local to regional scale in space and time, as the requirements to answer these two scientific questions are different.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 50 | |
| | | B | 125 | |
| | | T | 250 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | d | G | 1 | Daily aggregation period, which defines the upper limit of temporal sampling |
| | | B | 30 | Monthly aggregation period, which defines the upper limit of temporal sampling |
| | | T | 365 | Annual aggregation period, which defines the upper limit of temporal sampling |
| Timeliness | d | G | 1 | |
| | | B | 7 | |
| | | T | 30 | |
| Required Measurement Uncertainty (2-sigma) | mm | G | 1 | |
| | | B | 2 | |
| | | T | 5 | |
| Stability | mm/decade | G | 0.02 | |
| | | B | 0.05 | |
| | | T | 0.1 | |

Standards and References: –

1.6 ECV: Surface radiation budget

1.6.1 ECV Product: Upward Long-Wave Irradiance at Earth Surface

Name: Upward Long-Wave Irradiance at Earth Surface
Definition: Flux density of terrestrial radiation emitted by the Earth surface.
Unit: W m-2
Note: The main driver of the uncertainty in the components of the surface radiation budget is the composition of the atmosphere (e.g. water vapour, aerosols, clouds). The Required Measurement Uncertainty (2-sigma) (see the VIM & GUM) includes both random and systematic components. The uncertainty is meant to be the uncertainty for the measurement device / instrument / ECV algorithm. The uncertainty of a spatially and temporally averaged global mean value may be smaller.
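The statement that a spatially and temporally averaged value may have a smaller uncertainty than a single measurement holds only for the random part of the error budget; a systematic component shared by all measurements is not reduced by averaging. The toy sketch below illustrates this with assumed numbers and is not a prescribed uncertainty-propagation recipe.

```python
import math

def uncertainty_of_mean(u_random, u_systematic, n):
    """2-sigma uncertainty of the mean of n measurements, assuming the random
    component is independent between measurements (scales as 1/sqrt(n)) while
    the systematic component is fully shared (does not reduce with averaging)."""
    return math.sqrt((u_random / math.sqrt(n)) ** 2 + u_systematic ** 2)

# Assumed example: 5 W m-2 random and 2 W m-2 systematic per measurement (2-sigma),
# averaged over 720 hourly values in a month.
print(round(uncertainty_of_mean(5.0, 2.0, 720), 2))  # ~2.01 W m-2, dominated by the systematic part
```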
Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | |
| | | B | 50 | |
| | | T | 100 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | h | G | 1 | |
| | | B | 24 | |
| | | T | 720 | |
| Timeliness | d | G | – | |
| | | B | – | |
| | | T | 30 | 1 month after the observation period |
| Required Measurement Uncertainty (2-sigma) | W m-2 | G | 1 | |
| | | B | 5 | |
| | | T | 10 | |
| Stability | W m-2/decade | G | 0.2 | |
| | | B | 0.5 | |
| | | T | 1 | |

Standards and References: –

1.6.2 ECV Product: Downward Long-Wave Irradiance at Earth Surface

Name: Downward Long-Wave Irradiance at Earth Surface
Definition: Flux density of radiation emitted by the gases, aerosols and clouds of the atmosphere to the Earth's surface.
Unit: W m-2
Note: The main driver of the uncertainty in the components of the surface radiation budget is the composition of the atmosphere (e.g. water vapour, aerosols, clouds). The Required Measurement Uncertainty (2-sigma) (see the VIM & GUM) includes both random and systematic components. The uncertainty is meant to be the uncertainty for the measurement device / instrument / ECV algorithm. The uncertainty of a spatially and temporally averaged global mean value may be smaller.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | |
| | | B | 50 | |
| | | T | 100 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | h | G | 1 | |
| | | B | 24 | |
| | | T | 720 | |
| Timeliness | d | G | – | |
| | | B | – | |
| | | T | 30 | 1 month after the observation period |
| Required Measurement Uncertainty (2-sigma) | W m-2 | G | 1 | |
| | | B | 5 | |
| | | T | 10 | |
| Stability | W m-2/decade | G | 0.2 | |
| | | B | 0.5 | |
| | | T | 1 | |

Standards and References: –

1.6.3 ECV Product: Downward Short-Wave Irradiance at Earth Surface

Name: Downward Short-Wave Irradiance at Earth Surface
Definition: Flux density of the solar radiation at the Earth surface.
Unit: W m-2
Note: The main driver of the uncertainty in the components of the surface radiation budget is the composition of the atmosphere (e.g. water vapour, aerosols, clouds). The Required Measurement Uncertainty (2-sigma) (see the VIM & GUM) includes both random and systematic components. The uncertainty is meant to be the uncertainty for the measurement device / instrument / ECV algorithm. The uncertainty of a spatially and temporally averaged global mean value may be smaller.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 10 | |
| | | B | 50 | |
| | | T | 100 | |
| Vertical Resolution | – | G/B/T | – | N/A |
| Temporal Resolution | h | G | 1 | |
| | | B | 24 | |
| | | T | 720 | |
| Timeliness | d | G | – | |
| | | B | – | |
| | | T | 30 | 1 month after the observation period |
| Required Measurement Uncertainty (2-sigma) | W m-2 | G | 1 | |
| | | B | 5 | |
| | | T | 10 | |
| Stability | W m-2/decade | G | 0.2 | |
| | | B | 0.5 | |
| | | T | 1 | |

Standards and References: –

2. UPPER AIR

2.1 ECV: Upper-air temperature

2.1.1 ECV Product: Atmospheric Temperature in the Boundary Layer

Name: Atmospheric Temperature in the Boundary Layer
Definition: 3D field of the atmospheric temperature in the boundary layer.
Unit: K
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation in operational analyses, as well as with respect to the magnitude of typical temperature variations at relevant spatial and temporal scales. Some additional considerations are also made, for which explanations are given in the notes of the table below. The requirements for temperature in the boundary layer are mainly driven, at the goal level, by the needs of flux monitoring. Stability assumes independence of measurements between instruments, permitting partial cancellation, and is based upon the need to detect current trends, which are c. 0.2 K/decade. Boundary layer temperature is assumed to share spatial characteristics with surface temperature, for which this has been characterized in, e.g., Thorne et al. (2018).

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 15 | Hersbach et al. (2018), Thorne et al. (2005, 2018). This has been changed from the original 10 km to 15 km to be consistent with Numerical Weather Prediction (NWP), although it is suggested that NWP should be at 10 km. Roughly corresponds to the current global NWP model resolution, which would be used for next-generation reanalyses, and resolves features influenced by local factors such as proximity of water bodies or significant topography. |
| | | B | 100 | Hersbach et al. (2018), Thorne et al. (2005, 2018). A typical horizontal error correlation length in first-guess fields and typical scale of mesoscale features that, especially when occurring frequently or with significant amplitude, can affect global climate. For example, Waller et al. (2016) found that error correlations of surface temperature in observation-minus-background and observation-minus-analysis residuals from the Met Office high-resolution model range between 30 km and 80 km. |
| | | T | 500 | Hersbach et al. (2018), Thorne et al. (2005, 2018). Minimum resolution needed to resolve synoptic-scale features. Thorne et al. (2005) show typical e-folding correlation distances in radiosonde-measured tropospheric temperatures of at least several hundred km and more generally 1000 km, with larger values in the tropics. Surface and boundary layer are tightly coupled, particularly in the lowermost boundary layer. |
| Vertical Resolution | m | G | 1 | This high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016). Determining fluxes requires this high vertical fidelity. Thus, this value has not been changed to be consistent with requirements for NWP, as NWP thresholds would demonstrably fail to meet needs to quantify fluxes and close the energy budget. |
| | | B | 10 | Roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017) |
| | | T | 100 | Minimum resolution considering the layer depth |
| Temporal Resolution | h | G | < 1 | Sub-hourly. A typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018) |
| | | B | 6 | A typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features |
| | | T | 12 | Minimum resolution needed to resolve synoptic-scale waves. For this reason, it has not been changed, to ensure consistency with NWP requirements. |
| Timeliness | h | G | 1 | A typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring |
| | | B | 3 | A typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis) |
| | | T | 24 | A typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive |
| Required Measurement Uncertainty (2-sigma) | K (RMS) | G | 0.1 | These values are inferred based on the standard deviations of 6-hourly analysis with respect to the monthly climatology. (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations. |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | K/decade | G | 0.01 | These values are based on the need to detect temperature trends such as those observed in recent decades (IPCC 2013). (T) corresponds to regions of large trend or 50% of observed global-mean trend, (B) regions of medium trend or 20% of global-mean trend, and (G) regions of small trend or 10% of global-mean trend. |
| | | B | 0.05 | |
| | | T | 0.1 | |

Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p.
Fujiwara, M., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Hersbach et al., 2018: Operational global reanalysis: progress, future directions and synergies with NWP. ERA Report Series, 27.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan.
Thorne, P.W., D.E. Parker, et al., 2005: Revisiting radiosonde upper air temperatures from 1958 to 2002. Journal of Geophysical Research-Atmospheres, 110(D18), doi:10.1029/2004JD005753.
Thorne, P.W., et al., 2018: Towards a global land surface climate fiducial reference measurements network. Int. J. Climatol., 38, 2760-2774.
Waller, J.E., S.P. Ballard, S.L. Dance, G. Kelly, N.K. Nichols and D. Simonin, 2016: Diagnosing horizontal and inter-channel observation error correlations for SEVIRI observations using observation-minus-background and observation-minus-analysis statistics. Remote Sens., 8(7), 581, doi:10.3390/rs8070581.

2.1.2 ECV Product: Atmospheric Temperature in the Free Troposphere

Name: Atmospheric Temperature in the Free Troposphere
Definition: 3D field of the atmospheric temperature in the troposphere.
Unit: K
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation in operational analyses, as well as with respect to the magnitude of typical temperature variations at relevant spatial and temporal scales. Some additional considerations are also made, for which explanations are given in the notes of the table below.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 15 | Hersbach et al. (2018), Thorne et al. (2005). This has been changed from the original 10 km to 15 km to be consistent with Numerical Weather Prediction (NWP), although it is suggested that NWP should be at 10 km. Roughly corresponds to the current global NWP model resolution, which would be used for next-generation reanalyses, and resolves features influenced by local factors such as proximity of water bodies or significant topography. |
| | | B | 100 | Hersbach et al. (2018), Thorne et al. (2005). A typical horizontal error correlation length in first-guess fields and typical scale of mesoscale features that, especially when occurring frequently or with significant amplitude, can affect global climate. Hersbach et al. (2018) shows examples of the background error covariances prescribed for the latest-generation reanalysis, where the horizontal correlation decreases below 1/e within a length of 500 km or less in the troposphere. It should be noted that the correlation length depends on the data assimilation system used as well as the observing system assimilated for making initial conditions. In general, the correlation length tends to be shorter when the data assimilation system has a higher resolution and is more advanced, as well as when the observations assimilated have a higher density. In order to produce reanalysis data with accuracy comparable to NWP, the requirements need to be similar to those for NWP, as already proposed in the table. |
| | | T | 1000 | Hersbach et al. (2018), Thorne et al. (2005). Minimum resolution needed to resolve synoptic-scale waves. Thorne et al. (2005) show typical e-folding correlation distances in radiosonde-measured tropospheric temperatures of at least several hundred km and more generally 1000 km, with larger values in the tropics. |
| Vertical Resolution | km | G | 0.01 | This high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016). This has not been changed to be consistent with NWP requirements, as NWP requirements are too coarse for some such applications; e.g. determining fluxes requires high vertical fidelity. |
| | | B | 0.1 | Roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017) |
| | | T | 1 | Minimum resolution considering the layer depth |
| Temporal Resolution | h | G | 1 | A typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018) |
| | | B | 12 | A typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features |
| | | T | 24 | Minimum resolution needed to resolve synoptic-scale waves |
| Timeliness | h | G | 1 | A typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring |
| | | B | 3 | A typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis) |
| | | T | 6 | A typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive |
| Required Measurement Uncertainty (2-sigma) | K (RMS) | G | 0.1 | These values are inferred based on the standard deviations of 6-hourly analysis with respect to the monthly climatology. (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations. |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | K/decade | G | 0.01 | IPCC (2013). These values are based on the need to detect temperature trends such as those observed in recent decades (IPCC 2013; Lübken et al. 2013). (T) corresponds to regions of large trend or 50% of observed global-mean trend, (B) regions of medium trend or 20% of global-mean trend, and (G) regions of small trend or 10% of global-mean trend. |
| | | B | 0.02 | |
| | | T | 0.05 | |

Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p.
Fujiwara, M., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Hersbach et al., 2018: Operational global reanalysis: progress, future directions and synergies with NWP. ERA Report Series, 27.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan.
Lübken, F.-J., Berger, U., and Baumgarten, G., 2013: Temperature trends in the midlatitude summer mesosphere. J. Geophys. Res. Atmos., 118, 13,347-13,360, doi:10.1002/2013JD020576.
Thorne, P.W., D.E. Parker, et al., 2005: Revisiting radiosonde upper air temperatures from 1958 to 2002. Journal of Geophysical Research-Atmospheres, 110(D18), doi:10.1029/2004JD005753.

2.1.3 ECV Product: Atmospheric Temperature in the Upper Troposphere and Lower Stratosphere

Name: Atmospheric Temperature in the Upper Troposphere and Lower Stratosphere
Definition: 3D field of the atmospheric temperature in the UTLS.
Unit: K
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation in operational analyses, as well as with respect to the magnitude of typical temperature variations at relevant spatial and temporal scales. Some additional considerations are also made, for which explanations are given in the notes of the table below. High vertical resolution is required to diagnose both multiple tropopauses and trends in tropopause height.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 15 | Hersbach et al. (2018), Thorne et al. (2005). Roughly corresponds to the current global Numerical Weather Prediction (NWP) model resolution, which would be used for next-generation reanalyses. |
| | | B | 100 | Hersbach et al. (2018), Thorne et al. (2005). A typical horizontal error correlation length in first-guess fields and typical scale of mesoscale features that, especially when occurring frequently or with significant amplitude, can affect global climate. |
| | | T | 500 | Hersbach et al. (2018), Thorne et al. (2005). Minimum resolution needed to resolve synoptic-scale waves. Thorne et al. (2005) show typical e-folding correlation distances in radiosonde-measured tropospheric temperatures of at least several hundred km and more generally 1000 km, with larger values in the tropics. |
| Vertical Resolution | m | G | 25 | Thorne et al. (2005). This high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016). Neither the current NWP resolution of 3 km, nor the NWP goal of 300 m, is adequate for locating the tropopause. |
| | | B | 100 | Roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017) |
| | | T | 250 | Minimum resolution considering the layer depth |
| Temporal Resolution | h | G | 1 | A typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018) |
| | | B | 12 | A typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features |
| | | T | 24 | Minimum resolution needed to resolve synoptic-scale waves |
| Timeliness | h | G | 1 | A typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring |
| | | B | 3 | A typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis) |
| | | T | 6 | A typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive |
| Required Measurement Uncertainty (2-sigma) | K (RMS) | G | 0.1 | These values are inferred based on the standard deviations of 6-hourly analysis with respect to the monthly climatology. (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations. |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | K/decade | G | 0.01 | These values are based on the need to detect temperature trends such as those observed in recent decades (IPCC 2013; Lübken et al. 2013). (T) corresponds to regions of large trend or 50% of observed global-mean trend, (B) regions of medium trend or 20% of global-mean trend, and (G) regions of small trend or 10% of global-mean trend. |
| | | B | 0.02 | |
| | | T | 0.05 | |

Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p.
Fujiwara, M., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Hersbach et al., 2018: Operational global reanalysis: progress, future directions and synergies with NWP. ERA Report Series, 27.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan.
Lübken, F.-J., Berger, U., and Baumgarten, G., 2013: Temperature trends in the midlatitude summer mesosphere. J. Geophys. Res. Atmos., 118, 13,347-13,360, doi:10.1002/2013JD020576.
Thorne, P.W., D.E. Parker, et al., 2005: Revisiting radiosonde upper air temperatures from 1958 to 2002. Journal of Geophysical Research-Atmospheres, 110(D18), doi:10.1029/2004JD005753.

2.1.4 ECV Product: Atmospheric Temperature in the Middle and Upper Stratosphere

Name: Atmospheric Temperature in the Middle and Upper Stratosphere
Definition: 3D field of the atmospheric temperature in the middle and upper stratosphere.
Unit: K
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation in operational analyses, as well as with respect to the magnitude of typical temperature variations at relevant spatial and temporal scales. Correlation distances on climate timescales are much larger in the stratosphere than in the troposphere. The dynamical processes are distinct, as is the degree of stratification, which leads to less stringent requirements for both vertical and horizontal resolution. Some large-scale waves are common to the upper stratosphere and lower mesosphere, with horizontal scales of around 2500 km. Historical and projected future trends are larger, so the stability requirements can be relaxed commensurately.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 50 | Vincent (2015). The stratospheric effective resolution of most Numerical Weather Prediction (NWP) systems |
| | | B | 100 | Vincent (2015). A typical horizontal error correlation length in first-guess fields and typical scale of mesoscale features that, especially when occurring frequently or with significant amplitude, can affect global climate. |
| | | T | 1500 | Vincent (2015). Minimum resolution needed to resolve synoptic-scale features. |
| Vertical Resolution | km | G | 0.5 | This high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016). |
| | | B | 1 | Roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017) |
| | | T | 3 | Minimum resolution considering the layer depth |
| Temporal Resolution | h | G | 1 | A typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018) |
| | | B | 12 | A typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features |
| | | T | 24 | Minimum resolution needed to resolve synoptic-scale waves |
| Timeliness | h | G | 1 | A typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring |
| | | B | 3 | A typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis) |
| | | T | 6 | A typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive |
| Required Measurement Uncertainty (2-sigma) | K (RMS) | G | 0.1 | These values are inferred based on the standard deviations of 6-hourly analysis with respect to the monthly climatology. (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations. |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | K/decade | G | 0.05 | IPCC (2013). These values are based on the need to detect temperature trends such as those observed in recent decades (IPCC 2013; Lübken et al. 2013). (T) corresponds to regions of large trend or 50% of observed global-mean trend, (B) regions of medium trend or 20% of global-mean trend, and (G) regions of small trend or 10% of global-mean trend. |
| | | B | 0.1 | |
| | | T | 0.2 | |

Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p.
Fujiwara, M., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan.
Lübken, F.-J., Berger, U., and Baumgarten, G., 2013: Temperature trends in the midlatitude summer mesosphere. J. Geophys. Res. Atmos., 118, 13,347-13,360, doi:10.1002/2013JD020576.
Vincent, R.A., 2015: The dynamics of the mesosphere and lower thermosphere: a brief review.

2.1.5 ECV Product: Atmospheric Temperature in the Mesosphere

Name: Atmospheric Temperature in the Mesosphere
Definition: 3D field of the atmospheric temperature in the mesosphere.
Unit: K
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation in operational analyses, as well as with respect to the magnitude of typical temperature variations at relevant spatial and temporal scales. Horizontal resolution, vertical resolution, temporal sampling, and uncertainty thresholds are based on the scales and amplitudes of typical dynamical features of the mesosphere. Trends and current uncertainties are larger than in the troposphere, so stability criteria can also be relaxed.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 50 | Garcia (2005), Vincent (2015). Roughly corresponds to the current global Numerical Weather Prediction (NWP) model resolution, which would be used for next-generation reanalyses. |
| | | B | 100 | Garcia (2005), Vincent (2015). A typical horizontal error correlation length in first-guess fields and typical scale of mesoscale features that, especially when occurring frequently or with significant amplitude, can affect global climate. |
| | | T | 1500 | Garcia (2005), Vincent (2015). Minimum resolution needed to resolve synoptic-scale waves. Thorne et al. (2005) show typical e-folding correlation distances in radiosonde-measured tropospheric temperatures of at least several hundred km and more generally 1000 km, with larger values in the tropics. |
| Vertical Resolution | km | G | 0.5 | Garcia (2005), Vincent (2015). This high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016). |
| | | B | 1 | Garcia (2005), Vincent (2015). Roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017) |
| | | T | 3 | Garcia (2005), Vincent (2015). Minimum resolution considering the layer depth |
| Temporal Resolution | h | G | 1 | A typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018) |
| | | B | 12 | A typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features |
| | | T | 24 | Minimum resolution needed to resolve synoptic-scale waves |
| Timeliness | h | G | 1 | A typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring |
| | | B | 3 | A typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis) |
| | | T | 6 | A typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive |
| Required Measurement Uncertainty (2-sigma) | K (RMS) | G | 0.1 | Garcia (2005), Vincent (2015). These values are inferred based on the standard deviations of 6-hourly analysis with respect to the monthly climatology. (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations. |
| | | B | 0.5 | |
| | | T | 1 | |
| Stability | K/decade | G | 0.05 | Lübken et al. (2013) |
| | | B | 0.1 | These values are based on the need to detect temperature trends such as those observed in recent decades (IPCC 2013; Lübken et al. 2013). (T) corresponds to regions of large trend or 50% of observed global-mean trend, (B) regions of medium trend or 20% of global-mean trend, and (G) regions of small trend or 10% of global-mean trend. |
| | | T | 0.2 | |

Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p.
Fujiwara, M., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Garcia, R.A., 2005: Large-scale waves in the mesosphere and lower thermosphere observed by SABER. Journal of Atmospheric Sciences, 62, doi:10.1175/JAS3612.1.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan.
Lübken, F.-J., Berger, U., and Baumgarten, G., 2013: Temperature trends in the midlatitude summer mesosphere. J. Geophys. Res. Atmos., 118, 13,347-13,360, doi:10.1002/2013JD020576.
Thorne, P.W., D.E. Parker, et al., 2005: Revisiting radiosonde upper air temperatures from 1958 to 2002. Journal of Geophysical Research-Atmospheres, 110(D18), doi:10.1029/2004JD005753.
Vincent, R.A., 2015: The dynamics of the mesosphere and lower thermosphere: a brief review.
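The stability requirements in these tables are expressed as an allowable drift per decade and are justified by the need to detect trends of a few tenths of a kelvin per decade. As a rough illustration (with synthetic numbers, not data from this document), the sketch below fits a least-squares linear trend to a monthly temperature anomaly series and expresses it in K/decade, which is the quantity the stability values are compared against.

```python
import numpy as np

def trend_k_per_decade(monthly_anomalies_k):
    """Least-squares linear trend of a monthly anomaly series, in K/decade."""
    y = np.asarray(monthly_anomalies_k, dtype=float)
    t_years = np.arange(y.size) / 12.0           # time axis in years
    slope_per_year = np.polyfit(t_years, y, 1)[0]
    return 10.0 * slope_per_year                 # convert K/yr to K/decade

# Synthetic 30-year series: a 0.2 K/decade trend plus noise (illustrative only).
rng = np.random.default_rng(0)
months = np.arange(360)
series = 0.02 * (months / 12.0) + rng.normal(0.0, 0.2, months.size)
print(round(trend_k_per_decade(series), 3))  # close to 0.2 K/decade
```

An instrumental drift comparable to the trend being sought would alias directly into such an estimate, which is why the goal stability values are an order of magnitude smaller than the observed trends quoted in the notes.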
2.2 ECV: Upper-air wind speed and direction

2.2.1 ECV Product: Wind (horizontal) in the Boundary Layer

Name: Wind (horizontal) in the Boundary Layer
Definition: 3D field of the horizontal vector component (2D) of the 3D wind vector in the boundary layer.
Unit: m s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given in the notes of the table below. Additional goal requirements for the lowermost part of the boundary layer (values in parentheses) are for better sampling of micrometeorological phenomena and accurate calculation of fluxes.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 15 | Roughly corresponds to the current global Numerical Weather Prediction (NWP) model resolution, which would be used for next-generation reanalyses |
| | | B | 100 | A typical horizontal error correlation length in first-guess fields. |
| | | T | 500 | Minimum resolution needed to resolve synoptic-scale waves. |
| Vertical Resolution | m | G | 10 (1) | Global NWP requirements are not adequate for accurate calculation of fluxes and these have not been changed. This high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016). The value in parentheses is for the lowermost part of the boundary layer (up to 100 m above the ground). |
| | | B | 50 (10) | Roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017) |
| | | T | 100 | Minimum resolution considering the layer depth |
| Temporal Resolution | min | G | 30 (1) | Global NWP requirements are not adequate for accurate calculation of fluxes and these have not been changed. A typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018). Given the large diurnal cycle in the boundary layer, higher temporal sampling is required. The value in parentheses is for the lowermost part of the boundary layer (up to 100 m above the ground). |
| | | B | 60 | A typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features. |
| | | T | 720 | Minimum resolution needed to resolve synoptic-scale waves |
| Timeliness | h | G | 6 | A typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring |
| | | B | 18 | A typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis) |
| | | T | 48 | A typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive |
| Required Measurement Uncertainty (2-sigma) | m s-1 (RMS) | G | 0.5 | These values are inferred based on the standard deviations of 6-hourly analysis with respect to the monthly climatology (Figs. 1, 2). (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations (Fig. 3). |
| | | B | 3 | |
| | | T | 5 | |
| Stability | m s-1/decade | G | 0.1 | These values are inferred based on the RMS trends of monthly analysis for the 1981-2010 period (Fig. 1). (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend. |
| | | B | 0.3 | |
| | | T | 0.5 | |

Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p.
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan.

2.2.2 ECV Product: Wind (horizontal) in the Free Troposphere

Name: Wind (horizontal) in the Free Troposphere
Definition: 3D field of the horizontal vector component (2D) of the 3D wind vector in the troposphere.
Unit: m s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given where needed.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 15 | Roughly corresponds to the current global Numerical Weather Prediction (NWP) model resolution, which would be used for next-generation reanalyses |
| | | B | 100 | A typical horizontal error correlation length in first-guess fields. |
| | | T | 1000 | Minimum resolution needed to resolve synoptic-scale waves. |
| Vertical Resolution | m | G | 10 | Global NWP requirements are not adequate to monitor large-scale vertical circulation (e.g. the Hadley and Walker circulations) and these have not been changed. This high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016). |
| | | B | 100 | Roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017) |
| | | T | 1500 | Minimum resolution considering the layer depth. The threshold for vertical resolution roughly corresponds to the resolution of the standard levels for traditional radiosonde observation. |
| Temporal Resolution | h | G | 1 | A typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018). |
| | | B | 6 | A typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features. |
| | | T | 12 | Minimum resolution needed to resolve synoptic-scale waves |
| Timeliness | h | G | 6 | A typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring |
| | | B | 18 | A typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis) |
| | | T | 48 | A typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive |
| Required Measurement Uncertainty (2-sigma) | m s-1 (RMS) | G | 1 | These values are inferred based on the standard deviations of 6-hourly analysis with respect to the monthly climatology (Figs. 1, 2). (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations (Fig. 3). |
| | | B | 3 | |
| | | T | 5 | |
| Stability | m s-1/decade | G | 0.1 | These values are inferred based on the RMS trends of monthly analysis for the 1981-2010 period (Fig. 1). (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend. |
| | | B | 0.3 | |
| | | T | 0.5 | |

Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p.
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan.

2.2.3 ECV Product: Wind (horizontal) in the Upper Troposphere and Lower Stratosphere

Name: Wind (horizontal) in the Upper Troposphere and Lower Stratosphere
Definition: 3D field of the horizontal vector component (2D) of the 3D wind vector in the UTLS.
Unit: m s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given where needed.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 15 | Roughly corresponds to the current global Numerical Weather Prediction (NWP) model resolution, which would be used for next-generation reanalyses |
| | | B | 100 | A typical horizontal error correlation length in first-guess fields. |
| | | T | 500 | Minimum resolution needed to resolve synoptic-scale waves. |
| Vertical Resolution | m | G | 25 | Global NWP requirements (0.3 km for goal and 3 km for threshold) are not adequate to infer tropopause region behaviour and thus these have not been changed, except that the goal requirement has been relaxed from 10 m to 25 m. This high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016). |
| | | B | 100 | Roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017) |
| | | T | 500 | Minimum resolution considering the layer depth. To infer tropopause region behaviour, such as tropopause folding (e.g. Lamarque and Hess 2015), higher vertical resolution is required. |
| Temporal Resolution | h | G | 1 | A typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018). |
| | | B | 6 | A typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features. |
| | | T | 12 | Minimum resolution needed to resolve synoptic-scale waves |
| Timeliness | h | G | 6 | A typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring |
| | | B | 18 | A typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis) |
| | | T | 48 | A typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive |
| Required Measurement Uncertainty (2-sigma) | m s-1 (RMS) | G | 1 | These values are inferred based on the standard deviations of 6-hourly analysis with respect to the monthly climatology (Figs. 1, 2). (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations (Fig. 3). |
| | | B | 3 | |
| | | T | 5 | |
| Stability | m s-1/decade | G | 0.1 | These values are inferred based on the RMS trends of monthly analysis for the 1981-2010 period (Fig. 1). (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend. |
| | | B | 0.3 | |
| | | T | 0.5 | |

Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p.
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency. Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan.
Lamarque, J.F., and P. Hess, 2015: Stratosphere/troposphere exchange and structure – local process. Encyclopedia of Atmospheric Sciences (Second Edition), 262-268.

2.2.4 ECV Product: Wind (horizontal) in the Middle and Upper Stratosphere

Name: Wind (horizontal) in the Middle and Upper Stratosphere
Definition: 3D field of the horizontal vector component (2D) of the 3D wind vector in the middle and upper stratosphere.
Unit: m s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given where needed.

Requirements:

| Item needed | Unit | Metric | Value | Notes |
| --- | --- | --- | --- | --- |
| Horizontal Resolution | km | G | 50 | Roughly corresponds to the current global Numerical Weather Prediction (NWP) model resolution, which would be used for next-generation reanalyses |
| | | B | 100 | A typical horizontal error correlation length in first-guess fields |
| | | T | 3000 | Minimum resolution needed to resolve planetary-scale waves |
| Vertical Resolution | km | G | 1 | Consistent with Global NWP. |
| | | B | 2 | Roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017) |
| | | T | 3 | Minimum resolution considering the layer depth. |
| Temporal Resolution | h | G | 1 | A typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018) |
| | | B | 6 | A typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features. |
| | | T | 24 | Minimum resolution needed to resolve planetary waves |
| Timeliness | h | G | 6 | A typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring |
| | | B | 18 | A typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis) |
| | | T | 48 | A typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive |
| Required Measurement Uncertainty (2-sigma) | m s-1 (RMS) | G | 1 | These values are inferred based on the standard deviations of 6-hourly analysis with respect to the monthly climatology (Figs. 1, 2). (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations (Fig. 3). |
B 5; T 10. These values are inferred from the standard deviations of the 6-hourly analysis with respect to the monthly climatology (Figs. 1, 2): (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. They are consistent with RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations (Fig. 3).
Stability (m s-1/decade): G 0.1; B 0.5; T 1. These values are inferred from the RMS trends of the monthly analysis for the 1981-2010 period (Fig. 1): (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend.
Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p. Available at
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency, Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan. Available at

2.2.5 ECV Product: Wind (horizontal) in the Mesosphere
Name: Wind (horizontal) in the Mesosphere.
Definition: 3D field of the horizontal vector component (2D) of the 3D wind vector in the mesosphere.
Unit: m s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given where needed.
Requirements:
Horizontal Resolution (km): G 50 - roughly corresponds to the current global NWP model resolution, which would be used for next-generation reanalyses; B 100 - a typical horizontal error correlation length in first-guess fields; T 3000 - minimum resolution needed to resolve planetary-scale waves.
Vertical Resolution (km): G 1; B 2 - roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017); T 3 - minimum resolution considering the layer depth.
Temporal Resolution (h): G 1 - changed from the original 0.5 h to 1 h to be consistent with global NWP; a typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018); B 6 - a typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features; T 24 - minimum resolution needed to resolve planetary-scale waves.
Timeliness (h): G 6 - a typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring; B 18 - a typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis); T 48 - a typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive.
Required Measurement Uncertainty (2-sigma, RMS) (m s-1): G 1; B 5; T 10. These values are inferred from the standard deviations of the 6-hourly analysis with respect to the monthly climatology (Figs. 1, 2): (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. They are consistent with RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations (Fig. 3).
Stability (m s-1/decade): G 0.1; B 0.5; T 1. These values are inferred from the RMS trends of the monthly analysis for the 1981-2010 period (Fig. 1): (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend.
Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p. Available at
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency, Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan. Available at

2.2.6 ECV Product: Wind (vertical) in the Boundary Layer
Name: Wind (vertical) in the Boundary Layer.
Definition: 3D field of the vertical component of the 3D wind vector in the boundary layer.
Unit: cm s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given where needed. Additional goal requirements for the lowermost part of the boundary layer (values in parentheses) are for better sampling of micrometeorological phenomena and accurate calculation of fluxes.
Requirements:
Horizontal Resolution (km): G 15 - roughly corresponds to the current global NWP model resolution, which would be used for next-generation reanalyses; B 200 - changed from the original 100 km to 200 km to be consistent with global NWP; T 500 - minimum resolution needed to resolve synoptic-scale waves.
Vertical Resolution (m): G 10 (1) - this high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016); the value in parentheses is for the lowermost part of the boundary layer (up to 100 m above the ground); B 100 - roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017); T 500 - minimum resolution considering the layer depth.
Temporal Resolution (min): G 30 (1) - global NWP requirements are not adequate for accurate calculation of fluxes and have not been changed, except that the goal has been relaxed from 10 min to 30 min, as was done for horizontal wind velocity in the same layer; a typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018); given the large diurnal cycle in the boundary layer, higher temporal sampling is required; the value in parentheses is for the lowermost part of the boundary layer (up to 100 m above the ground); B 60 - a typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features;
T 720 - minimum resolution needed to resolve synoptic-scale waves.
Timeliness (h): G 6 - a typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring; B 18 - a typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis); T 48 - a typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive.
Required Measurement Uncertainty (2-sigma, RMS) (cm s-1): G 0.5; B 1; T 1.5. These values are inferred from the standard deviations of the 6-hourly analysis with respect to the monthly climatology (Figs. 4, 5): (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. They are consistent with RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations.
Stability (cm s-1/decade): G 0.05; B 0.1; T 0.15. These values are inferred from the RMS trends of the monthly analysis for the 1981-2010 period (Fig. 4): (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend.
Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p. Available at
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency, Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan. Available at

2.2.7 ECV Product: Wind (vertical) in the Free Troposphere
Name: Wind (vertical) in the Free Troposphere.
Definition: 3D field of the vertical component of the 3D wind vector in the troposphere.
Unit: cm s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given where needed.
Requirements:
Horizontal Resolution (km): G 15 - roughly corresponds to the current global NWP model resolution, which would be used for next-generation reanalyses; B 200 - consistent with global NWP; T 1000 - minimum resolution needed to resolve synoptic-scale waves.
Vertical Resolution (m): G 10 - global NWP requirements are not adequate to monitor the large-scale vertical circulation (e.g. the Hadley and Walker circulations) and have not been changed; this high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016); B 100 - roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017); T 1500 - minimum resolution considering the layer depth.
Temporal Resolution (h): G 1 - a typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018); B 6 - a typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features; T 12 - minimum resolution needed to resolve synoptic-scale waves.
Timeliness (h): G 6 - a typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring; B 18 - a typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis); T 48 - a typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive.
Required Measurement Uncertainty (2-sigma, RMS) (cm s-1): G 0.5; B 1.5; T 2.5. These values are inferred from the standard deviations of the 6-hourly analysis with respect to the monthly climatology (Figs. 4, 5): (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. They are consistent with RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations.
Stability (cm s-1/decade): G 0.05; B 0.15; T 0.25. These values are inferred from the RMS trends of the monthly analysis for the 1981-2010 period (Fig. 4): (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend.
Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p. Available at
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency, Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan. Available at

2.2.8 ECV Product: Wind (vertical) in the Upper Troposphere and Lower Stratosphere
Name: Wind (vertical) in the Upper Troposphere and Lower Stratosphere.
Definition: 3D field of the vertical component of the 3D wind vector in the UTLS.
Unit: cm s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given where needed.
Requirements:
Horizontal Resolution (km): G 15 - roughly corresponds to the current global NWP model resolution, which would be used for next-generation reanalyses; B 200 - consistent with global NWP; T 500 - minimum resolution needed to resolve synoptic-scale waves.
Vertical Resolution (m): G 25 - global NWP requirements (0.3 km for goal and 3 km for threshold) are not adequate to infer tropopause-region behaviour and are not changed here, except that the goal has been relaxed from 0.01 km to 0.025 km; this high resolution allows different users the option to subsample or process the data in ways that suit their applications (Ingleby et al. 2016);
B 100 - roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017); T 500 - to infer tropopause-region behaviour, such as tropopause folding (e.g. Lamarque and Hess 2015), higher vertical resolution is required.
Temporal Resolution (h): G 1 - a typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018); B 6 - a typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features; T 12 - minimum resolution needed to resolve synoptic-scale waves.
Timeliness (h): G 6 - a typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring; B 18 - a typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis); T 48 - a typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive.
Required Measurement Uncertainty (2-sigma, RMS) (cm s-1): G 0.5; B 1.5; T 2.5. These values are inferred from the standard deviations of the 6-hourly analysis with respect to the monthly climatology (Figs. 4, 5): (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. They are consistent with RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations.
Stability (cm s-1/decade): G 0.05; B 0.15; T 0.25. These values are inferred from the RMS trends of the monthly analysis for the 1981-2010 period (Fig. 4): (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend.
Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p. Available at
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency, Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan. Available at
Lamarque, J. F., and P. Hess, 2015: Stratosphere/troposphere exchange and structure – local process. Encyclopedia of Atmospheric Sciences (Second Edition), 262-268.

2.2.9 ECV Product: Wind (vertical) in the Middle and Upper Stratosphere
Name: Wind (vertical) in the Middle and Upper Stratosphere.
Definition: 3D field of the vertical component of the 3D wind vector in the middle and upper stratosphere.
Unit: cm s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given where needed.
Requirements:
Horizontal Resolution (km): G 50 - roughly corresponds to the current global NWP model resolution, which would be used for next-generation reanalyses; B 200 - consistent with global NWP; T 3000 - minimum resolution needed to resolve planetary-scale waves.
Vertical Resolution (km): G 0.5; B 2 - consistent with global NWP; roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017); T 3 - minimum resolution considering the layer depth.
Temporal Resolution (h): G 1 - consistent with global NWP; a typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018); B 6 - a typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features; T 24 - minimum resolution needed to resolve planetary-scale waves.
Timeliness (h): G 6 - a typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring; B 18 - a typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis); T 48 - a typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive.
Required Measurement Uncertainty (2-sigma, RMS) (cm s-1): G 1; B 3; T 5. These values are inferred from the standard deviations of the 6-hourly analysis with respect to the monthly climatology (Figs. 4, 5): (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. They are consistent with RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations.
Stability (cm s-1/decade): G 0.05; B 0.15; T 0.25. These values are inferred from the RMS trends of the monthly analysis for the 1981-2010 period (Fig. 4): (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend.
Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p. Available at
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency, Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan. Available at

2.2.10 ECV Product: Wind (vertical) in the Mesosphere
Name: Wind (vertical) in the Mesosphere.
Definition: 3D field of the vertical component of the 3D wind vector in the mesosphere.
Unit: cm s-1
Note: The following requirements are inferred mainly from the viewpoint of reanalysis and its near-real-time continuation as users of this ECV. Some additional considerations are also made, for which explanations are given where needed.
Requirements:
Horizontal Resolution (km): G 50 - roughly corresponds to the current global NWP model resolution, which would be used for next-generation reanalyses; B 200 - consistent with global NWP; T 3000 - minimum resolution needed to resolve planetary-scale waves.
Vertical Resolution (km): G 1; B 2 - roughly corresponds to the assimilating model resolution (Fujiwara et al. 2017); T 3 - minimum resolution considering the layer depth.
Temporal Resolution (h): G 1 - consistent with global NWP; a typical 4D-Var timeslot length, a sub-division into which observations are grouped for processing (ECMWF 2018);
B 6 - a typical time interval between numerical analyses and/or the typical time scale of sub-synoptic features; T 24 - minimum resolution needed to resolve planetary-scale waves.
Timeliness (h): G 6 - a typical cut-off time of the operational NWP cycle analysis (JMA 2019), which might also be used for climate monitoring; B 18 - a typical cut-off time for the Climate Data Assimilation System (a near-real-time continuation of reanalysis); T 48 - a typical master decoding cut-off time, beyond which observations are not automatically decoded and incorporated into the operational observation archive.
Required Measurement Uncertainty (2-sigma, RMS) (cm s-1): G 2; B 6; T 10. These values are inferred from the standard deviations of the 6-hourly analysis with respect to the monthly climatology (Figs. 4, 5): (T) corresponds to regions of high variability, (B) of medium variability and (G) of low variability. They are consistent with RMS departures of observed values from first-guess field values, in accordance with the practical verification schemes applied by the GUAN Monitoring Centre for upper-air observations.
Stability (cm s-1/decade): G 0.1; B 0.2; T 0.3. These values are inferred from the RMS trends of the monthly analysis for the 1981-2010 period (Fig. 4): (T) corresponds to regions of large trend, (B) of medium trend and (G) of small trend.
Standards and References:
ECMWF, 2018: IFS documentation – Cy45r1, Part I: Observations. ECMWF, UK, 82p. Available at
Fujiwara et al., 2017: Introduction to the SPARC Reanalysis Intercomparison Project (S-RIP) and overview of the reanalysis systems. Atmos. Chem. Phys., 17, 1417-1452.
Ingleby et al., 2016: Progress toward high-resolution, real-time radiosonde reports. Bull. Amer. Meteor. Soc., 97, 2149-2161.
JMA, 2019: Outline of the operational numerical weather prediction at the Japan Meteorological Agency, Appendix to WMO Technical Progress Report on the Global Data-processing and Forecasting System (GDPFS) and Numerical Weather Prediction (NWP) Research. Japan Meteorological Agency, Tokyo, Japan. Available at

2.2.11 Figures
Figure 1. U-component of wind from JRA-55 for January: (a) zonal means averaged over the 1981-2010 period, (b) standard deviations of the 6-hourly analysis with respect to the monthly climatology, (c) zonal mean trends of the monthly analysis for the 1981-2010 period and (d) RMS trends.
Figure 2. As Figure 1, but for July.
Figure 3. (Top) global mean and (2nd) standard deviation of departure, (3rd) the number and (bottom) global mean observed values of radiosonde u-component winds used in JRA-55 for (a) 30 hPa, (b) 100 hPa, (c) 250 hPa, (d) 500 hPa and (e) 850 hPa.
Figure 4. As Figure 1, but for vertical velocity from JRA-55. Note that the vertical velocity shown here is computed from the horizontal wind velocities using the continuity equation; the values therefore represent averages at the horizontal resolution of JRA-55, which is approximately 55 km.
Figure 5. As Figure 4, but for July.

2.3 ECV: Upper-air Water Vapour

2.3.1 ECV Product: Water Vapour Mixing Ratio in the Upper Troposphere and Lower Stratosphere
Name: Water Vapour Mixing Ratio in the Upper Troposphere and Lower Stratosphere.
Definition: 3D field of water vapour mixing ratios in the UTLS. Mixing ratio is the mole fraction of a substance in dry air.
Unit: ppm
Note: Consistency with temperature requirements for the same layer was used as a primary guiding consideration for horizontal resolution. Vertical resolution is needed for determining fine-layer cirrus and complex tropopause structure.
Requirements:
Horizontal Resolution (km): G 15; B 100; T 500.
Vertical Resolution (km): G 0.01; B 0.1; T 0.25.
Temporal Resolution (h): G 3; B 6; T 24.
Timeliness (h): G 1; B 120; T 720.
Required Measurement Uncertainty (2-sigma) (ppmv): G 0.1; B 0.25; T 0.5. Dessler et al. (2013); Solomon et al. (2010). Uncertainty requirements are based on interannual variability and the data quality needed to study supersaturation and dehydration.
Stability (ppmv/decade): G <0.1; B 0.1; T 0.25. Dessler et al. (2013); Solomon et al. (2010). Stability requirements are based on the magnitudes of seasonal and longer-term trends.
Standards and References:
Dessler, A. E., Schoeberl, M. R., Wang, T., Davis, S. M., & Rosenlof, K. H. (2013). Stratospheric water vapor feedback. Proceedings of the National Academy of Sciences of the United States of America, 110(45), 18087–18091. doi:10.1073/pnas.1310344110
Solomon, S., Rosenlof, K. H., Portmann, R. W., Daniel, J. S., Davis, S. M., Sanford, T. J., & Plattner, G.-K. (2010). Contributions of Stratospheric Water Vapor to Decadal Changes in the Rate of Global Warming. Science, 327(5970), 1219-1223. doi:10.1126/science.1182488

2.3.2 ECV Product: Water Vapour Mixing Ratio in the Middle and Upper Stratosphere
Name: Water Vapour Mixing Ratio in the Middle and Upper Stratosphere.
Definition: 3D field of water vapour mixing ratios in the middle and upper stratosphere. Mixing ratio is the mole fraction of a substance in dry air.
Unit: ppm
Note: Consistency with temperature requirements for the same layer was used as a primary guiding consideration for horizontal resolution. However, for the breakthrough there is no justification for using the significantly smaller value adopted for temperature.
Requirements:
Horizontal Resolution (km): G 50; B 500; T 1500.
Vertical Resolution (km): G 0.5; B 1; T 3.
Temporal Resolution (h): G 3; B 6; T 72.
Timeliness (h): G 1; B 168; T 720.
Required Measurement Uncertainty (2-sigma) (ppmv): G 0.1; B 0.25; T 0.5. Dessler et al. (2013); Solomon et al. (2010). Uncertainty requirements are based on observed seasonal and interannual variability.
Stability (ppmv/decade): G <0.2; B 0.2; T 0.5. Dessler et al. (2013); Solomon et al. (2010). Stability requirements are based on the magnitudes of longer-term trends.
Standards and References:
Dessler, A. E., Schoeberl, M. R., Wang, T., Davis, S. M., & Rosenlof, K. H. (2013). Stratospheric water vapor feedback. Proceedings of the National Academy of Sciences of the United States of America, 110(45), 18087–18091. doi:10.1073/pnas.1310344110
Solomon, S., Rosenlof, K. H., Portmann, R. W., Daniel, J. S., Davis, S. M., Sanford, T. J., & Plattner, G.-K. (2010). Contributions of Stratospheric Water Vapor to Decadal Changes in the Rate of Global Warming. Science, 327(5970), 1219-1223. doi:10.1126/science.1182488

2.3.3 ECV Product: Water Vapour Mixing Ratio in the Mesosphere
Name: Water Vapour Mixing Ratio in the Mesosphere.
Definition: 3D field of water vapour mixing ratios in the mesosphere. Mixing ratio is the mole fraction of a substance in dry air.
Unit: ppm
Note: Consistency with temperature requirements for the same layer was used as a primary guiding consideration for horizontal resolution. However, for the breakthrough there is no justification for using the significantly smaller value adopted for temperature.
Requirements:
Horizontal Resolution (km): G 50; B 500; T 1500.
Vertical Resolution (km): G 0.5; B 1; T 3.
Temporal Resolution (h): G 3; B 6; T 72.
Timeliness (h): G 1; B 168; T 720.
Required Measurement Uncertainty (2-sigma) (ppmv): G 0.1; B 0.25; T 0.5. Dessler et al. (2013); Solomon et al. (2010). Uncertainty requirements are based on observed seasonal and interannual variability.
Stability (ppmv/decade): G <0.2; B 0.2; T 0.5. Dessler et al. (2013); Solomon et al. (2010). Stability requirements are based on the magnitudes of longer-term trends.
Standards and References:
Dessler, A. E., Schoeberl, M. R., Wang, T., Davis, S. M., & Rosenlof, K. H. (2013). Stratospheric water vapor feedback. Proceedings of the National Academy of Sciences of the United States of America, 110(45), 18087–18091. doi:10.1073/pnas.1310344110
Solomon, S., Rosenlof, K. H., Portmann, R. W., Daniel, J. S., Davis, S. M., Sanford, T. J., & Plattner, G.-K. (2010). Contributions of Stratospheric Water Vapor to Decadal Changes in the Rate of Global Warming. Science, 327(5970), 1219-1223. doi:10.1126/science.1182488

2.3.4 ECV Product: Relative Humidity in the Boundary Layer
Name: Relative Humidity in the Boundary Layer.
Definition: 3D field of the relative humidity in the PBL. Relative humidity is the amount of water vapour in air divided by the temperature-dependent amount of water vapour in saturated air. RH can be expressed relative to water or ice saturation (to be specified in the metadata).
Unit: %
Note: Vertical resolution is required for calculation of fluxes in the lower part of the boundary layer. McCarthy (2007) notes significant spatial heterogeneity related to the latitude of the observation.
Requirements:
Horizontal Resolution (km): G 15 - McCarthy (2007); consistency with temperature; B 100 - McCarthy (2007); T 500 - McCarthy (2007).
Vertical Resolution (m): G 1; B 10; T 100.
Temporal Resolution (h): G <1 (sub-hourly); B 6; T 12.
Timeliness (h): G 1; B 120; T 720.
Required Measurement Uncertainty (2-sigma) (%RH): G 0.1; B 0.5; T 1.
Stability (%RH/decade): G 0.1; B 0.5; T 1. The assumption that stability is per measurement system leads to partial cancellation across a network of sites performing measurements.
Standards and References:
McCarthy, 2007.

2.3.5 ECV Product: Relative Humidity in the Free Troposphere
Name: Relative Humidity in the Free Troposphere.
Definition: 3D field of the relative humidity in the free troposphere. Relative humidity is the amount of water vapour in air divided by the temperature-dependent amount of water vapour in saturated air. RH can be expressed relative to water or ice saturation (to be specified in the metadata).
Unit: %
Note: McCarthy (2007) notes significant spatial heterogeneity related to the latitude of the observation.
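For orientation, the relative-humidity definition used by these products can be written compactly as follows (an editorial restatement, not part of the original requirement tables), with e the water-vapour partial pressure and e_s(T) the saturation vapour pressure over water or ice as specified in the metadata:

```latex
% Editorial restatement of the RH definition used in sections 2.3.4-2.3.6
\[
  \mathrm{RH}\,[\%] \;=\; 100 \times \frac{e}{e_s(T)}
\]
```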
Requirements:
Horizontal Resolution (km): G 15 - McCarthy (2007); B 100 - McCarthy (2007); T 1000 - McCarthy (2007).
Vertical Resolution (km): G 0.01; B 0.1; T 1.
Temporal Resolution (h): G <1 (sub-hourly); B 6; T 12.
Timeliness (h): G 1; B 120; T 720.
Required Measurement Uncertainty (2-sigma) (%RH): G 0.1; B 0.5; T 1.
Stability (%RH/decade): G 0.1; B 0.5; T 1.
Standards and References:
McCarthy, 2007.

2.3.6 ECV Product: Relative Humidity in the Upper Troposphere and Lower Stratosphere
Name: Relative Humidity in the Upper Troposphere and Lower Stratosphere.
Definition: 3D field of the relative humidity in the UTLS. Relative humidity is the amount of water vapour in air divided by the temperature-dependent amount of water vapour in saturated air. RH can be expressed relative to water or ice saturation (to be specified in the metadata).
Unit: %
Note: Vertical resolution is needed for determining fine-layer cirrus and complex tropopause structure.
Requirements:
Horizontal Resolution (km): G 15; B 100; T 500.
Vertical Resolution (km): G 0.01; B 0.1; T 0.25.
Temporal Resolution (h): G 3; B 6; T 24.
Timeliness (h): G 1; B 120; T 720.
Required Measurement Uncertainty (2-sigma) (%RH): G 0.5; B 1; T 2. Dessler et al. (2013); Solomon et al. (2010). Uncertainty requirements are based on interannual variability and the data quality needed to study supersaturation and dehydration.
Stability (%RH/decade): G <0.5; B 0.5; T 2. Dessler et al. (2013); Solomon et al. (2010). Stability requirements are based on the magnitudes of seasonal and longer-term trends.
Standards and References:
Dessler, A. E., Schoeberl, M. R., Wang, T., Davis, S. M., & Rosenlof, K. H. (2013). Stratospheric water vapor feedback. Proceedings of the National Academy of Sciences of the United States of America, 110(45), 18087–18091. doi:10.1073/pnas.1310344110
Solomon, S., Rosenlof, K. H., Portmann, R. W., Daniel, J. S., Davis, S. M., Sanford, T. J., & Plattner, G.-K. (2010). Contributions of Stratospheric Water Vapor to Decadal Changes in the Rate of Global Warming. Science, 327(5970), 1219-1223. doi:10.1126/science.1182488

2.3.7 ECV Product: Specific Humidity in the Boundary Layer
Name: Specific Humidity in the Boundary Layer.
Definition: 3D field of the specific humidity in the PBL. The specific humidity is the ratio between the mass of water vapour and the mass of moist air.
Unit: g kg-1
Note: Vertical resolution is required for calculation of fluxes in the lowermost boundary layer. McCarthy (2007) notes significant spatial heterogeneity related to the latitude of the observation.
Requirements:
Horizontal Resolution (km): G 15 - McCarthy (2007); B 100 - McCarthy (2007); T 500 - McCarthy (2007).
Vertical Resolution (m): G 1; B 10; T 100.
Temporal Resolution (h): G <1 (sub-hourly); B 1; T 3.
Timeliness (h): G 1; B 120; T 720.
Required Measurement Uncertainty (2-sigma) (g kg-1): G 0.1; B 0.5; T 1.
Stability (g kg-1/decade): G 0.01; B 0.05; T 0.1.
Standards and References:
McCarthy, 2007.

2.3.8 ECV Product: Specific Humidity in the Free Troposphere
Name: Specific Humidity in the Free Troposphere.
Definition: 3D field of the specific humidity in the free troposphere. The specific humidity is the ratio between the mass of water vapour and the mass of moist air.
Unit: g kg-1
Note: McCarthy (2007) notes significant spatial heterogeneity related to the latitude of the observation.
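To make the specific-humidity wording concrete (an editorial restatement, not part of the original requirement tables), with m_v the mass of water vapour, m_d the mass of dry air and w the mass mixing ratio:

```latex
% Editorial restatement of the specific-humidity definition used in sections 2.3.7-2.3.8
\[
  q \;=\; \frac{m_v}{m_v + m_d} \;=\; \frac{w}{1+w},
  \qquad w = \frac{m_v}{m_d}
\]
```
Multiplying q by 1000 expresses it in the g kg-1 unit used above.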
Requirements:
Horizontal Resolution (km): G 15 - McCarthy (2007); B 100 - McCarthy (2007); T 1000 - McCarthy (2007).
Vertical Resolution (km): G 0.01; B 0.1; T 1.
Temporal Resolution (h): G <1 (sub-hourly); B 1; T 3.
Timeliness (h): G 1; B 120; T 720.
Required Measurement Uncertainty (2-sigma) (g kg-1): G 0.1; B 0.5; T 1.
Stability (g kg-1/decade): G 0.01; B 0.05; T 0.1.
Standards and References:
McCarthy, 2007.

2.3.9 ECV Product: Integrated Water Vapour
Name: Integrated Water Vapour (IWV).
Definition: Total amount of water vapour present in a vertical atmospheric column.
Unit: kg m-2
Note: There is an implicit assumption that IWV is intrinsically linked to boundary-layer and surface humidity, given the predominance of water vapour in these regions in contributing to the column total. Because IWV scales with temperature, uncertainty and stability should be split latitudinally. The values applied here are for mid-latitude locations; they would be stricter (more relaxed) for polar (tropical) locations, and in winter than in summer.
Requirements:
Horizontal Resolution (km): G 25; B 250; T 1000.
Vertical Resolution: N/A.
Temporal Resolution (h): G 0.20; B 1; T 24.
Timeliness (h): G 24; B 120; T 720.
Required Measurement Uncertainty (2-sigma) (kg m-2): G 0.1; B 0.5; T 1. Varies by latitude (see note above).
Stability (kg m-2/decade): G 0.1; B 0.2; T 0.5. Varies by latitude (see note above).
Standards and References:

2.4 ECV: Earth Radiation Budget

2.4.1 ECV Product: Radiation Profile
Name: Radiation Profile.
Definition: Vertical profile of upward and downward long-wave (LW) and short-wave (SW) radiation components.
Unit: W m-2
Note: For the application area of global climate monitoring no requirements exist; the requirements of the individual components are therefore adopted.
Requirements:
Horizontal Resolution (km): G 10; B 50; T 100.
Vertical Resolution (km): G 1; B 2; T 4.
Temporal Resolution (h): G 1 - resolving the diurnal cycle; B 24; T 720.
Timeliness (h): G 1; B 24; T 720.
Required Measurement Uncertainty (2-sigma) (W m-2): G 0.1/0.2; B 0.2/0.4; T 0.4/0.8 (shortwave/longwave radiation). A factor of 2 was applied to obtain the breakthrough value and a factor of 4 to estimate the threshold value.
Stability (W m-2/decade): G 0.025/0.05; B 0.05/0.1; T 0.1/0.2 (shortwave/longwave radiation).
Standards and References:

2.4.2 ECV Product: Solar Spectral Irradiance
Name: Solar Spectral Irradiance.
Definition: Downward short-wave irradiance at the top of the atmosphere, resolved as a function of wavelength (i.e. the spectral irradiance).
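As an editorial cross-reference (not part of the original requirement tables), the spectral irradiance E_λ of this product integrates over wavelength to the total, spectrally integrated downward short-wave irradiance of section 2.4.3:

```latex
% Editorial relation between spectral irradiance (2.4.2) and total irradiance (2.4.3)
\[
  E_{\mathrm{TSI}} \;=\; \int_0^{\infty} E_\lambda(\lambda)\,\mathrm{d}\lambda ,
  \qquad
  [E_{\mathrm{TSI}}] = \mathrm{W\,m^{-2}},
  \quad
  [E_\lambda] = \mathrm{W\,m^{-2}\,\mu m^{-1}}
\]
```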
Unit: W m-2 μm-1
Note: Downward short-wave irradiance at the top of the atmosphere, when spectrally resolved, is also known as Solar Spectral Irradiance (SSI).
Requirements:
Horizontal Resolution (km): G 10; B 50; T 100.
Spectral Resolution: G < 290 nm; B 1 nm (290-1000 nm), 2 nm (1000-1600 nm), 5 nm (1600-3200 nm), 10 nm (3200-6400 nm), 20 nm (6400-10020 nm), 40 nm (10020-160000 nm), 20000 nm; T -.
Temporal Resolution (h): G 3; B 12 - current TSIS-1 Level 3 sampling; T 24 - current TSIS-1 Level 3 sampling.
Timeliness (h): G 1; B 10; T 90.
Required Measurement Uncertainty (2-sigma) (%): G 0.3 (200-3000 nm); B 1.5; T 3.
Stability (%/decade): G 0.03 (200-3000 nm); B 0.15; T 0.3.
Standards and References:

2.4.3 ECV Product: Downward Short-Wave Irradiance at Top of the Atmosphere
Name: Downward Short-Wave Irradiance at Top of the Atmosphere.
Definition: Flux density of the solar radiation at the top of the atmosphere.
Unit: W m-2
Note: This ECV is formerly/also known as Total Solar Irradiance (TSI).
Requirements:
Horizontal Resolution (km): G 10; B 50; T 100.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1; B 6 - current TSIS-1 Level 3 sampling; T 24 - current TSIS-1 Level 3 sampling.
Timeliness (h): G 1; B 24; T 720.
Required Measurement Uncertainty (2-sigma) (W m-2): G 0.04; B 0.08; T 0.12.
Stability (W m-2/decade): G 0.01; B 0.02; T 0.04.
Standards and References:

2.4.4 ECV Product: Upward Short-Wave Irradiance at Top of the Atmosphere
Name: Upward Short-Wave Irradiance at Top of the Atmosphere.
Definition: Flux density of solar radiation, reflected by the Earth surface and atmosphere, emitted to space at the top of the atmosphere.
Unit: W m-2
Note: The measurand for this ECV is radiance (W sr-1 m-2). The current approach adopted by the Clouds and the Earth's Radiant Energy System (CERES) is to derive irradiances (W m-2) from measured radiances using observed anisotropy factors over various scene types.
Requirements:
Horizontal Resolution (km): G 10; B 50; T 100.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1; B 24 - resolves the diurnal cycle; T 720 - allows regional monitoring.
Timeliness (h): G 1; B 24; T 720.
Required Measurement Uncertainty (2-sigma) (W m-2): G 0.2; B 0.5; T 1. NOAA Tech. Rep. NESDIS 134; Ohring et al. (2005). A factor of 2 was applied to obtain the breakthrough value and a factor of 4 to estimate the threshold value.
Stability (W m-2/decade): G 0.06; B 0.15; T 0.3. NOAA Tech. Rep. NESDIS 134.
Standards and References:
Ohring et al., 2005: NOAA Tech. Rep. NESDIS 134: Report from the Workshop on Continuity of Earth Radiation Budget (CERB) Observations: Post-CERES Requirements. John J. Bates and Xuepeng Zhao, May 2011.

2.4.5 ECV Product: Upward Long-Wave Irradiance at Top of the Atmosphere
Name: Upward Long-Wave Irradiance at Top of the Atmosphere.
Definition: Flux density of terrestrial radiation emitted by the Earth surface and by the gases, aerosols and clouds of the atmosphere at the top of the atmosphere.
Unit: W m-2
Note: The measurand for this ECV is radiance (W sr-1 m-2). The current approach adopted by the Clouds and the Earth's Radiant Energy System (CERES) is to derive irradiances (W m-2) from measured radiances using observed anisotropy factors over various scene types.
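The radiance-to-irradiance conversion described in the Note has the following schematic form (an editorial sketch for orientation only; the operational CERES angular distribution models are documented by the CERES team and are not reproduced here):

```latex
% Schematic radiance-to-irradiance inversion with a scene-dependent anisotropy factor
\[
  F(\theta_0) \;=\; \frac{\pi\, L(\theta_0,\theta,\phi)}{R(\theta_0,\theta,\phi)}
\]
```
Here F is the top-of-atmosphere irradiance (W m-2), L the measured radiance (W sr-1 m-2) and R the empirical anisotropy factor for the observed scene type and sun-view geometry.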
Requirements:
Horizontal Resolution (km): G 10; B 50; T 100.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1; B 24 - based on a resolved diurnal cycle; T 720 - based on a resolved diurnal cycle.
Timeliness (h): G 1; B 24; T 720.
Required Measurement Uncertainty (2-sigma) (W m-2): G 0.2; B 0.5; T 1. NOAA Tech. Rep. NESDIS 134; Ohring et al. (2004, 2005). A factor of 2 was applied to obtain the breakthrough value and a factor of 4 to estimate the threshold value.
Stability (W m-2/decade): G 0.05; B 0.1; T 0.2. NOAA Tech. Rep. NESDIS 134. Requirements for decadal stability and bias can be derived from theoretical assumptions about the minimum anticipated signal to detect climate trends (Ohring et al. 2004, 2005). Ohring et al. assume the required stability to be 1/5 of the expected climate signal. To detect a climate signal, the stability should be better than 10% of the uncertainty.
Standards and References:
Ohring et al., 2004: Satellite Instrument Calibration for Measuring Global Climate Change. NIST Rep. NISTIR 7047, 101 pp.
Ohring et al., 2005: NOAA Tech. Rep. NESDIS 134: Report from the Workshop on Continuity of Earth Radiation Budget (CERB) Observations: Post-CERES Requirements. John J. Bates and Xuepeng Zhao, May 2011.

2.5 ECV: Cloud Properties

2.5.1 ECV Product: Cloud Cover
Name: Cloud Cover.
Definition: 2D field of the fraction of sky filled by cloud.
Unit: unitless (percentage)
Note: These requirements cover global, continental and regional climate monitoring, as well as feedback studies and improved knowledge of the interaction between clouds, aerosols and atmospheric gases.
Requirements:
Horizontal Resolution (km): G 25 - to perform regional climate monitoring; higher spatial resolution, as fine as 10 km, is required for resolving convective clouds in the tropics; B 100 - to perform continental climate monitoring; T 500 - global climate monitoring is performed on a monthly time scale with an averaged global number, for which ~500 km horizontal resolution is sufficient.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1 - resolving the diurnal cycle for all kinds of clouds on the global scale, and investigating cloud-related climate feedbacks (e.g. those connected to rainfall, surface temperature and convection), demand an hourly to daily observing resolution; B 24 - for climate monitoring of clouds on the global scale, a daily observing cycle is sufficient; T 720 - to characterize seasonal and interannual changes.
Timeliness (h): G 1; B 3; T 12.
Required Measurement Uncertainty (2-sigma) (%): G 3; B 6; T 12. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Stability (%/decade): G 0.3; B 0.6; T 1.2. Ohring et al. (2005). The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Standards and References:
Ohring et al., 2005.

2.5.2 ECV Product: Cloud Liquid Water Path
Name: Cloud Liquid Water Path.
Definition: 2D field of atmospheric water in the liquid phase (precipitating or not), integrated over the total column.
Unit: kg m-2
Note: This variable is identical to the also-used "cloud liquid water total column", which is given in g m-2 and often used in NWP and climate models; the uncertainty values below would then be re-scaled from kg m-2 to g m-2. These requirements cover global, continental and regional climate monitoring, as well as feedback studies and improved knowledge of the interaction between clouds, aerosols and atmospheric gases.
Requirements:
Horizontal Resolution (km): G 25 - to perform regional climate monitoring; higher spatial resolution, as fine as 10 km, is required for resolving convective clouds in the tropics; B 100 - to perform continental climate monitoring; T 500 - global climate monitoring is performed on a monthly time scale with an averaged global number, for which ~500 km horizontal resolution is sufficient.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1 - resolving the diurnal cycle for all kinds of clouds on the global scale, and investigating cloud-related climate feedbacks (e.g. those connected to rainfall, surface temperature and convection), demand an hourly to daily observing resolution; B 24 - for climate monitoring of clouds on the global scale, a daily to monthly observing cycle is sufficient; T 720 - to characterize seasonal and interannual changes.
Timeliness (h): G 1; B 3; T 12.
Required Measurement Uncertainty (2-sigma) (kg m-2): G 0.05; B 0.1; T 0.2. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Stability (kg m-2/decade): G 0.005; B 0.01; T 0.02. Ohring et al. (2005). The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Standards and References:
Ohring et al., 2005.

2.5.3 ECV Product: Cloud Ice Water Path
Name: Cloud Ice Water Path.
Definition: 2D field of atmospheric water in the solid phase (precipitating or not), integrated over the total column.
Unit: kg m-2
Note: This variable is identical to the also-used "cloud ice water total column", which is given in g m-2 and often used in NWP and climate models; the uncertainty values below would then be re-scaled from kg m-2 to g m-2. These requirements cover global, continental and regional climate monitoring, as well as feedback studies and improved knowledge of the interaction between clouds, aerosols and atmospheric gases.
Requirements:
Horizontal Resolution (km): G 25 - to perform regional climate monitoring; higher spatial resolution, as fine as 10 km, is required for resolving convective clouds in the tropics; B 100 - to perform continental climate monitoring; T 500 - global climate monitoring is performed on a monthly time scale with an averaged global number, for which ~500 km horizontal resolution is sufficient.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1 - resolving the diurnal cycle for all kinds of clouds on the global scale, and investigating cloud-related climate feedbacks (e.g. those connected to rainfall, surface temperature and convection), demand an hourly to daily observing resolution; B 24 - for climate monitoring of clouds on the global scale, a daily to monthly observing cycle is sufficient; T 720 - to characterize seasonal and interannual changes.
Timeliness (h): G 1; B 3; T 12.
Required Measurement Uncertainty (2-sigma) (kg m-2): G 0.05; B 0.1; T 0.2. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Stability (kg m-2/decade): G 0.005;
B 0.01; T 0.02 (Ohring et al. 2005). The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Standards and References:
Ohring et al., 2005.

2.5.4 ECV Product: Cloud Drop Effective Radius
Name: Cloud Drop Effective Radius.
Definition: Ratio of the integral of the water droplet size distribution weighted by volume to the integral weighted by area (µm).
Unit: µm
Note: These requirements cover global, continental and regional climate monitoring, as well as feedback studies and improved knowledge of the interaction between clouds, aerosols and atmospheric gases. The requirements for this ECV are for the cloud top.
Requirements:
Horizontal Resolution (km): G 25 - to perform regional climate monitoring; higher spatial resolution, as fine as 10 km, is required for resolving convective clouds in the tropics; B 100 - to perform continental climate monitoring; T 500 - global climate monitoring is performed on a monthly time scale with an averaged global number, for which ~500 km horizontal resolution is sufficient.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1 - resolving the diurnal cycle for all kinds of clouds on the global scale, and investigating cloud-related climate feedbacks (e.g. those connected to rainfall, surface temperature and convection), demand an hourly to daily observing resolution; B 24 - for climate monitoring of clouds on the global scale, a daily to monthly observing cycle is sufficient; T 720 - to characterize seasonal and interannual changes.
Timeliness (h): G 1; B 3; T 12.
Required Measurement Uncertainty (2-sigma) (µm): G 1/2; B 2/4; T 4/8. The metric chosen is the uncertainty (RMS), given for 1-sigma. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Stability (µm/decade): G 0.1/0.2; B 0.2/0.4; T 0.4/0.8. Values are given separately for cloud water and ice effective particle size as water/ice. Ohring et al. (2005) specify stability and accuracy requirements separately for cloud water particle size (as percentage forcing) and ice particle size (as percentage feedback). The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Standards and References:
Ohring et al., 2005.

2.5.5 ECV Product: Cloud Optical Depth
Name: Cloud Optical Depth.
Definition: Effective depth of a cloud from the viewpoint of radiation extinction: OD = K·Δz, where K is the extinction coefficient [km-1] and Δz the vertical path [km] between the base and the top of the cloud; the corresponding transmittance is exp(-OD). The reference wavelength is to be specified in the metadata.
Unit: dimensionless (percentage)
Note: These requirements cover global, continental and regional climate monitoring, as well as feedback studies and improved knowledge of the interaction between clouds, aerosols and atmospheric gases.
Requirements:
Horizontal Resolution (km): G 25 - to perform regional climate monitoring; higher spatial resolution, as fine as 10 km, is required for resolving convective clouds in the tropics; B 100 - continental and regional climate monitoring requires higher spatial resolution; T 500 - global climate monitoring is performed on a monthly time scale with an averaged global number, for which ~500 km horizontal resolution is sufficient.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1 - resolving the diurnal cycle for all kinds of clouds on the global scale, and investigating cloud-related climate feedbacks (e.g. those connected to rainfall, surface temperature and convection), demand an hourly to daily observing resolution; B 24 - for climate monitoring of clouds on the global scale, a daily to monthly observing cycle is sufficient; T 720 - to characterize seasonal and interannual changes.
Timeliness (h): G 1; B 3; T 12.
Required Measurement Uncertainty (2-sigma) (%): G 20; B 40; T 80. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Stability (%/decade): G 2.0; B 4.0; T 8.0. Ohring et al. (2005) list the stability requirement for cloud optical thickness as 2% with a 10% accuracy. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Standards and References:
Ohring et al., 2005.

2.5.6 ECV Product: Cloud Top Temperature
Name: Cloud Top Temperature.
Definition: Temperature of the top of the cloud (highest cloud in the case of multi-layer clouds).
Unit: K
Note: These requirements cover global, continental and regional climate monitoring, as well as feedback studies and improved knowledge of the interaction between clouds, aerosols and atmospheric gases.
Requirements:
Horizontal Resolution (km): G 25 - to perform regional climate monitoring; higher spatial resolution, as fine as 10 km, is required for resolving convective clouds in the tropics; B 100 - continental and regional climate monitoring requires higher spatial resolution; T 500 - global climate monitoring is performed on a monthly time scale with an averaged global number, for which ~500 km horizontal resolution is sufficient.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1 - resolving the diurnal cycle for all kinds of clouds on the global scale, and investigating cloud-related climate feedbacks (e.g. those connected to rainfall, surface temperature and convection), demand an hourly to daily observing resolution; B 24 - for climate monitoring of clouds on the global scale, a daily to monthly observing cycle is sufficient; T 720 - to characterize seasonal and interannual changes.
Timeliness (h): G 1; B 3; T 12.
Required Measurement Uncertainty (2-sigma) (K): G 2; B 4; T 8. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Stability (K/decade): G 0.2; B 0.4; T 0.8. Ohring et al. (2005) list the stability requirement for cloud top temperature as 0.2 K/cloud emissivity per decade, with accuracy of 1 K/cloud emissivity per decade. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Standards and References:
Ohring et al., 2005.

2.5.7 ECV Product: Cloud Top Height
Name: Cloud Top Height.
Definition: Height of the top of the cloud (highest cloud in the case of multi-layer clouds).
Unit: km
Note: These requirements cover global, continental and regional climate monitoring, as well as feedback studies and improved knowledge of the interaction between clouds, aerosols and atmospheric gases. 3D cloud-top information is required where possible;
this can be achieved via a combination of cloud optical depth vs cloud-top height histograms.
Requirements:
Horizontal Resolution (km): G 25 - to perform regional climate monitoring; higher spatial resolution, as fine as 10 km, is required for resolving convective clouds in the tropics; B 100 - continental and regional climate monitoring requires higher spatial resolution; T 500 - global climate monitoring is performed on a monthly time scale with an averaged global number, for which ~500 km horizontal resolution is sufficient.
Vertical Resolution: N/A.
Temporal Resolution (h): G 1 - resolving the diurnal cycle for all kinds of clouds on the global scale, and investigating cloud-related climate feedbacks (e.g. those connected to rainfall, surface temperature and convection), demand an hourly to daily observing resolution; B 24 - for climate monitoring of clouds on the global scale, a daily to monthly observing cycle is sufficient; T 720 - to characterize seasonal and interannual changes.
Timeliness (h): G 1; B 3; T 12.
Required Measurement Uncertainty (2-sigma) (km): G 0.30; B 0.60; T 1.2. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Stability (km/decade): G 0.03; B 0.06; T 0.12. Ohring et al. (2005) list the required stability for cloud top height as 30 m/decade, with accuracy of 150 m/decade. The breakthrough is estimated as 2 times the goal value and the threshold as 4 times the goal value.
Standards and References:
Ohring et al., 2005.

2.6 ECV: Lightning

2.6.1 ECV Product: Schumann Resonances
Name: Schumann Resonances.
Definition: Extremely Low Frequency (ELF) magnetic and electric field of the first three resonance modes (8 Hz, 14 Hz, 20 Hz).
Unit: pT2 Hz-1 (magnetic field); V2 m-2 Hz-1 (electric field)
Note: Regular measurements of two horizontal magnetic field components at a location are enough to monitor Schumann resonances globally. The magnetic field should be monitored at a level of ~0.1 pT2 Hz-1. In addition to the magnetic measurements, one vertical electric measurement would document the full transverse electromagnetic (TEM) waveguide component at any given location. The estimate of the electric intensity assumes that the wave impedance is half that of free space (377 ohms); in this context, the electric field should be monitored at a level of ~2.3 x 10-9 V2 m-2 Hz-1.
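The quoted electric-field level follows from the magnetic-field level and the stated impedance assumption; a short editorial check of that arithmetic (using E = Z H, B = μ0 H and Z ≈ 377/2 Ω) is:

```latex
% Editorial check: electric spectral density from magnetic spectral density and wave impedance
\[
  S_E \;=\; \left(\frac{Z}{\mu_0}\right)^{2} S_B
  \;\approx\; \left(\frac{188.5\ \Omega}{4\pi\times 10^{-7}\ \mathrm{H\,m^{-1}}}\right)^{2}
  \times 0.1\ \mathrm{pT^2\,Hz^{-1}}
  \;\approx\; 2.3\times 10^{-9}\ \mathrm{V^2\,m^{-2}\,Hz^{-1}}
\]
```
where S_B and S_E are the magnetic and electric spectral densities, respectively.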
Requirements:
Horizontal Resolution: N/A - one value represents the globe, so no horizontal resolution is required.
Vertical Resolution: N/A.
Temporal Resolution (d): G 1/24 - suitable for investigation of the strong diurnal variation of tropical "chimney" regions and for use in multi-station inversion methods for global lightning activity; B 1 - suitable for investigation of intraseasonal variations (5-day wave; MJO); T 30 - suitable for investigation of the global seasonal and annual variation, and the interannual ENSO variation.
Timeliness (d): G 1 - for use in building a representative monthly estimate for climate purposes; B -; T 30 - for climate-related studies; responsiveness of lightning to long-term temperature changes.
Required Measurement Uncertainty (2-sigma) (fT2 Hz-1): G 1 - absolute coil calibration is feasible at the 1% level (calibration of the vertical electric field is difficult, but possible); B -; T 5 - absolute coil calibration at the 5% level.
Stability (fT2 Hz-1): G 1 - given lightning sensitivity to temperature at the 10% per K level, absolute calibration and stability at the 1% level are needed to see fractions of 1 K temperature change; B -; T 5 - coil calibration should be checked and maintained to at least this level.
Standards and References:
Nickolaenko, A.P. and M. Hayakawa, Resonances in the Earth–ionosphere cavity. Kluwer Academic Publishers, Dordrecht/London, 2002.
Nickolaenko, A.P. and M. Hayakawa, Schumann Resonance for Tyros: Essentials of Global Electromagnetic Resonance in the Earth–ionosphere Cavity. Springer, Tokyo/Heidelberg/New York/Dordrecht/London, 2014.
Polk, C., Schumann Resonances, in CRC Handbook of Atmospherics, Volume 1, Ed. H. Volland, CRC Press, Boca Raton, Florida, 1982.
Sátori, G., V. Mushtak, and E. Williams, Schumann resonance signature of global lightning activity. In: Betz, H.D., U. Schumann and P. Laroche (eds), Lightning: Principles, Instruments and Applications: Review of Modern Lightning Research. Springer, Berlin, pp. 347–386, 2009.
Sentman, D.D., Schumann Resonances. In: Volland, H., Ed., Handbook of Atmospheric Electrodynamics, CRC Press, Boca Raton, 267-296, 1995.

2.6.2 ECV Product: Total Lightning Stroke Density
Name: Total lightning stroke density.
Definition: Total number of detected strokes per time interval and space unit. The space unit (grid box) should be of the order of the horizontal resolution, and the accumulation time of the order of the observing cycle.
Unit: strokes km-2 y-1
Note: Data sets at the 1-map-per-month level require limited data storage and should therefore simply be posted on a publicly accessible website. The larger data sets, reaching down to global resolutions of 0.1 degree with a time resolution of a few hours, should be maintained by the network managers and provided to the user community as needed. Metadata should include sufficient information to validate the detection efficiency at the maximum spatial and temporal scales.
Requirements:
Horizontal Resolution (degree pixels): G 0.1x0.1 - thunderstorms are complex, with different dynamics in different parts of the storm, for example the updraft region and the trailing stratiform region; the net influence on global currents and climatology is therefore likely to be very different at different sub-storm scales; B 0.25x0.25 - this is the convection scale and will help identify climate variability at the storm level; T 1x1 - ideally these data would be provided both as maps and as digital files, along with metadata of adequate time resolution to address both long-term and short-term detection-efficiency variations within these data sets.
Vertical Resolution: N/A.
Temporal Resolution (d): G 1/24 - lifetime of a thunderstorm cell, diurnal cycle; for a high-resolution climatology it is also necessary to validate thunder-day data in order to extend the time series of lightning activity back in time; B 1 - weather patterns, weekly and intraseasonal patterns such as the MJO; T 30 - climate scale.
Timeliness (d): G 10 - for a high-resolution climatology; it can be important for special occasions to see direct impacts of events or mitigation immediately in order to react; B 30 - forecasting and model input; T 365 - for lightning climatology studies, yearly data should be provided within one year of data collection, and networks should prepare their data as far back as it is available.
Required Measurement Uncertainty (2-sigma) (dimensionless): G 1 - for a high-resolution climatology, also necessary to validate thunder-day data in order to extend the time series of lightning activity back in time; B -; T 15 - for climatologies.
Stability (%): G 1 - for a high-resolution climatology, also necessary to validate thunder-day data in order to extend the time series of lightning activity back in time; B -; T 10 - for climatologies.
Standards and References:
Algorithm Theoretical Basis Document (ATBD) for L2 processing of the GOES-R Geostationary Lightning Mapper (GLM; Goodman et al., 2013) and MTG Lightning Imager data (Eumetsat, 2014).
Meteosat Third Generation (MTG) End-User Requirements Document (EURD) (Eumetsat, 2010).
GOES-R Product Definition and Users' Guide (PUG, Rev. 2018) and Data Book (Rev. 2019).
Nag et al., 2015.
Virts, K.S. et al., 2013: Highlights of a New Ground-Based, Hourly Global Lightning Climatology. BAMS, 94(9).
GOES-R Series, 2018: Product Definition and Users' Guide. Volume 3: Level 1b Products, 1 November 2018, DCN 7035538, Revision 2.0, available at
GOES-R Series Data Book, 2019: CDRL PM-14 Rev A. May 2019, NOAA-NASA. Available at

3. ATMOSPHERIC COMPOSITION

3.1 ECV: Greenhouse Gases

3.1.1 ECV Product: N2O mole fraction
Name: N2O mole fraction.
Definition: 3D field of the amount of N2O (expressed in moles) divided by the total amount of all constituents in dry air (also expressed in moles).
Unit: ppb
Note: N2O was not an ECV product in the GCOS IP but should be added, as it is a strong greenhouse gas.
Requirements:
Horizontal Resolution (km): G 100; B 500; T 2000.
Vertical Resolution (km): G 0.1; B 1; T 3.
Temporal Resolution (h): G 1; B 24; T 168.
Timeliness (d): G 1; B 30; T 180.
Required Measurement Uncertainty (2-sigma) (ppb): G 0.05 - expert judgement and GAW Report No. 242 network compatibility; B 0.1 - expert judgement and GAW Report No. 242 extended network compatibility; T 0.3 - expert judgement, larger than B.
Stability (ppb/decade): G 0.05 - within accuracy; B 0.05 - within accuracy/2; T 0.2 - within accuracy/2.
Standards and References:
GAW Report No. 242: 19th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2017). Crotwell, A.; Steinbacher, M.; World Meteorological Organization (WMO), 2018.
GAW Report No. 255:
20th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2019) Crotwell A.; Lee, H.; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2020 2022 GCOS ECVs Requirements - 80 - 3.1.2 ECV Product: CO2 mole fraction Name CO2 mole fraction Definition 3D field of amount of CO2 (Carbon dioxide, expressed in moles) divided by the total amount of all constituents in dry air (also expressed in moles). Unit ppm Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 100 B 500 T 2000 Vertical Resolution km G 0.1 B 1 T 3 Temporal Resolution h G 1 B 24 T 168 Timeliness day G 1 B 30 T 180 Required Measurement Uncertainty (2-sigma) ppm G 0.1 GAW Rep. No. 242 B 0.2 GAW Rep. No. 242 T 0.5 Expert judgement, larger than B. Stability ppm/decade G 0.1 Within accuracy B 0.1 Within accuracy/2 T 0.3 Within accuracy/2 Standards and References GAW Report, 242. 19th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2017) Crotwell Andrew; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2018 GAW Report, 255. 20th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2019) Crotwell A.; Lee, H.; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2020 2022 GCOS ECVs Requirements - 81 - 3.1.3 ECV Product: CO2 column average dry air mixing ratio Name CO2 column average dry air mixing ratio Definition 2D column integrated number of molecules of the target gas (CO2) divided by that of dry air expressed in mole fraction. Unit μmol mol-1 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 imaging B 5 ~OCO-2/3 T 10 CO2M, CEOS document - LEO, GEO Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 geostationary B 12 Blue report T 72 CO2M Timeliness d G 1 B 7 T 14 Required Measurement Uncertainty (2-sigma) ppm G 0.6 1-sigma: 0.3ppm TCCON / Green report B 1 1-sigma: 0.5ppm Expert judgment based on improving CO2M requirements T 1.6 1-sigma: 0.8ppm CO2M requirements, WMO Report #242 Stability ppm/decade G 0.1 Within accuracy / 5 B 0.2 Within accuracy / 5 T 0.3 Within accuracy / 5 Standards and References Blue Report, 2015: Towards a European Operational Observing System to Monitor Fossil CO2 emissions Red Report, 2017: Baseline Requirements, Model Components and Functional Architecture Green Report, 2019: Needs and High Level Requirements for in situ Measurements CO2M ndidates MRD, v 2.0: ESA Climate Change Initiative (CCI) User Requirements Document Version 2.1 (URDv2.1) for the Essential Climate Variable (ECV) Greenhouse Gases (GHG) CEOS documents: CEOS GHG report/white paper: GAW Report, 242. 19th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2017) Crotwell Andrew; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2018 GAW Report, 255. 20th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2019) Crotwell A.; Lee, H.; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2020 2022 GCOS ECVs Requirements - 82 - 3.1.4 ECV Product: CH4 mole fraction Name CH4 mole fraction Definition 3D field of amount of CH4 (Methane, expressed in moles) divided by the total amount of all constituents in dry air (also expressed in moles). 
Unit ppb Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 100 B 500 T 2000 Vertical Resolution km G 0.1 B 1 T 3 Temporal Resolution h G 1 B 24 T 168 Timeliness d G 1 B 30 T 180 Required Measurement Uncertainty (2-sigma) ppb G 1 Expert judgement based on GAW Rep. No. 242 network compatibility B 2 Expert judgement based on GAW Rep. No. 242 extended network compatibility T 5 Expert judgment, larger than B. Stability ppb/decade G 1 Within accuracy B 1 Within accuracy/2 T 3 Within accuracy/2 Standards and References Green Report, 2019: Needs and High Level Requirements for in situ Measurements GAW Report, 242. 19th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2017) Crotwell Andrew; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2018 GAW Report, 255. 20th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2019) Crotwell A.; Lee, H.; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2020 2022 GCOS ECVs Requirements - 83 - 3.1.5 ECV Product: CH4 column average dry air mixing ratio Name CH4 column average dry air mixing ratio Definition 2D column integrated number of molecules of the target gas (CH4) divided by that of dry air expressed in mole fraction. Unit nmol mol-1 Note Temporal resolution and timeliness are kept the same/compatible with CO2 Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 0.3 Imaging, permafrost region B 1 Improved TROPOMI T 10 TROPOMI/S5P Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 Geo constellation + LEO B 12 In the middle between threshold and goal T 72 TROPOMI revisit, single geostationary Timeliness d G 1 B 7 T 14 Required Measureme nt Uncertainty (2-sigma) ppb G 7 1-sigma: 3.5ppb GeoCARB and MERLIN mission requirements, 0.2% of current CH4 burden B 10 1-sigma:5ppb Expert judgement based on expected improvement of TROPOMI/S5P T 20 1-sigma: 10ppb TROPOMI/S5P, CEOS doc, advancing from GCOS 2011 Stability ppb/deca de G 1 Within accuracy / 5 B 2 within accuracy / 5 T 4 within accuracy / 5 Standards and References Blue Report, 2015: Towards a European Operational Observing System to Monitor Fossil CO2 emissions Red Report, 2017: Baseline Requirements, Model Components and Functional Architecture Green Report, 2019: Needs and High Level Requirements for in situ Measurements CO2M: Candidates MRD, v 2.0: ESA Climate Change Initiative (CCI) User Requirements Document Version 2.1 (URDv2.1) for the Essential Climate Variable (ECV) Greenhouse Gases (GHG) CEOS documents: CEOS GHG report/white paper: GAW Report, 242. 19th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2017) Crotwell Andrew; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2018 GAW Report, 255. 20th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2019) Crotwell A.; Lee, H.; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2020 2022 GCOS ECVs Requirements - 84 - 3.2 ECV: Ozone 3.2.1 ECV Product: Ozone mole fraction in the Troposphere Name Ozone mole fraction in the troposphere Definition 3D field of amount of O3 (expressed in moles) in the troposphere divided by the total amount of all constituents in dry air (also expressed in moles). 
Unit % (directly transferrable to mixing ratios, mol/mol) Note The team of ozone experts unanimously agreed that the uncertainty and stability requirements for each of these ozone data products should be expressed as % and %/decade in the tables. Defining requirements in units of mixing ratios or Dobson Units would require each uncertainty and stability requirement be a wide range of values. We therefore found it more definitive and intuitive that each table entry is one number in % or %/decade. To help translate the requirements in % or %/decade to absolute units we have put a footnote beneath each table that quantitatively describes the wide range of mixing ratios or Dobson Units corresponding to that data product. This helps to explain why the requirements in the tables are not expressed in units of mixing ratio or DU. Requirements in absolute units are easily calculated by multiplying the % (or %/decade) in the table by the mixing ratio or DU ranges in the footnotes. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 1, 2, 3, 4,5,6,7 B 20 T 100 Vertical Resolution km G 1 1,2,3,4,5,6,7 B 3 T 5 Temporal Resolution d G 1/24 1, 2, 3, 4,5,6,7 B 1/4 T 30 Timeliness d G 1/24 B 1 T 30 Required Measurement Uncertainty (2-sigma) % G 2 1, 2, 3, 4,5,6,7,8 Requirements for uncertainty (%) and stability (%/decade) translate to wide mixing ratio requirement ranges based on a 20 to 80 ppb range of ozone mixing ratios in the troposphere. B 5 T 10 Stability %/decade G <1 1, 2, 3, 4,5,6,7,8 Requirements for uncertainty (%) and stability (%/decade) translate to wide mixing ratio requirement ranges based on a 20 to 80 ppb range of ozone mixing ratios in the troposphere. B 2 T 3 Standards and References 1. Ozone Climate Change Initiative User Requirements Document 2. WMO (World Meteorological Organization), Stratospheric Ozone Changes and Climate in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project– Report No. 58, 588 pp., Geneva, Switzerland, 2018. sment.pdf 3. Climate Monitoring User Group CCI Requirements Baseline Documents 4. WMO (World Meteorological Organization), Update on Global Ozone: Past, Present and Future in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project–Report No. 58, 588 pp., Geneva, Switzerland, 2018. eAssessment.pdf 5. Gaudel, A., et al. (2018), Tropospheric Ozone Assessment Report: Present-day distribution and trends of tropospheric ozone relevant to climate and global atmospheric chemistry model evaluation, Elem. Sci. Anth., 6(1), 39, 6. Tarasick, D. W., I. E. Galbally, O. R. Cooper, M. G. Schultz, G. Ancellet, T. Leblanc, T. J. Wallington, J. Ziemke, X. Liu, M. Steinbacher, J. Staehelin, C. Vigouroux, J. W. Hannigan, O. 2022 GCOS ECVs Requirements - 85 - García, G. Foret, P. Zanis, E. Weatherhead, I. Petropavlovskikh, H. Worden, M. Osman, J. Liu, K.-L. Chang, A. Gaudel, M. Lin, M. Granados-Muñoz, A. M. Thompson, S. J. Oltmans, J. Cuesta, G. Dufour, V. Thouret, B. Hassler, T. Trickl and J. L. Neu (2019), Tropospheric Ozone Assessment Report: Tropospheric ozone from 1877 to 2016, observed levels, trends and uncertainties. Elem Sci Anth, 7(1), DOI: 7. Galbally, IE, Schultz, MG, Buchmann, B, Gilge, S, Guenther, F, Koide, H, Oltmans, S, Patrick, L, Scheel, H-E, Smit, H, Steinbacher, M, Steinbrecht, W, Tarasova, O, Viallon, J, Volz-Thomas, A, Weber, M, Wielgosz, R and Zellweger, C. 
(2013), Guidelines for Continuous Measurement of Ozone in the Troposphere, GAW Report No 209, Publication WMO-No. 1110, ISBN 978-92-63-11110-4, Geneva, Switzerland: World Meteorological Organisation, 76. 8. Fischer, E.V., Jaffe, D.A. and Weatherhead, E.C., 2011. Free tropospheric peroxyacetyl nitrate (PAN) and ozone at Mount Bachelor: causes of variability and timescale for trend detection. Atmospheric Chemistry & Physics Discussions, 11(2). 2022 GCOS ECVs Requirements - 86 - 3.2.2 ECV Product: Ozone mole fraction in the Upper Troposphere/ Lower Stratosphere (UTLS) Name Ozone mole fraction in the Upper Troposphere/ Lower Stratosphere (UTLS) Definition 3D field of amount of O3 (expressed in moles) in the upper troposphere/lower stratosphere (UTLS) divided by the total amount of all constituents in dry air (also expressed in moles). Unit % (directly transferrable to mixing ratios, mol/mol) Note The team of ozone experts unanimously agreed that the uncertainty and stability requirements for each of these ozone data products should be expressed as % and %/decade in the tables. Defining requirements in units of mixing ratios or Dobson Units would require each uncertainty and stability requirement be a wide range of values. We therefore found it more definitive and intuitive that each table entry is one number in % or %/decade. To help translate the requirements in % or %/decade to absolute units we have put a footnote beneath each table that quantitatively describes the wide range of mixing ratios or Dobson Units corresponding to that data product. This helps to explain why the requirements in the tables are not expressed in units of mixing ratio or DU. Requirements in absolute units are easily calculated by multiplying the % (or %/decade) in the table by the mixing ratio or DU ranges in the footnotes. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 1, 2, 3, 4,5 B 50 T 200 Vertical Resolution km G 0.5 1,2,3,4,5 B 1 T 3 Temporal Resolution d G 1/4 1, 2, 3, 4,5 B 1 T 30 Timeliness d G 1/4 B 1 T 30 Required Measurement Uncertainty (2-sigma) % G 2 1, 2, 3, 4,5 Requirements for uncertainty (%) and stability (%/decade) translate o wide mixing ratio requirement ranges based on a 50 ppb to 3 ppm range of ozone mixing ratios in the UTLS. B 5 T 10 Stability %/decade G 1 1, 2, 3, 4,5 Requirements for uncertainty (%) and stability (%/decade) translate to wide mixing ratio requirement ranges based on a 50 ppb to 3 ppm range of ozone mixing ratios in the UTLS. B 2 T 3 Standards and References 1. Ozone Climate Change Initiative User Requirements Document 2. WMO (World Meteorological Organization), Stratospheric Ozone Changes and Climate in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project– Report No. 58, 588 pp., Geneva, Switzerland, 2018. sment.pdf 3. Climate Monitoring User Group CCI Requirements Baseline Documents 4. WMO (World Meteorological Organization), Update on Global Ozone: Past, Present and Future in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project–Report No. 58, 588 pp., Geneva, Switzerland, 2018. eAssessment.pdf 5. Gaudel, A., et al. (2018), Tropospheric Ozone Assessment Report: Present-day distribution and trends of tropospheric ozone relevant to climate and global atmospheric chemistry model evaluation, Elem. Sci. 
Anth., 6(1), 39, 2022 GCOS ECVs Requirements - 87 - 3.2.3 ECV Product: Ozone mole fraction in the Middle and Upper Stratosphere Name Ozone mole fraction in the Middle and Upper Stratosphere Definition 3D field of amount of O3 (expressed in moles) in the Middle and Upper Stratosphere divided by the total amount of all constituents in dry air (also expressed in moles). Unit % (directly transferrable to mixing ratios, mol/mol) Note The team of ozone experts unanimously agreed that the uncertainty and stability requirements for each of these ozone data products should be expressed as % and %/decade in the tables. Defining requirements in units of mixing ratios or Dobson Units would require each uncertainty and stability requirement be a wide range of values. We therefore found it more definitive and intuitive that each table entry is one number in % or %/decade. To help translate the requirements in % or %/decade to absolute units we have put a footnote beneath each table that quantitatively describes the wide range of mixing ratios or Dobson Units corresponding to that data product. This helps to explain why the requirements in the tables are not expressed in units of mixing ratio or DU. Requirements in absolute units are easily calculated by multiplying the % (or %/decade) in the table by the mixing ratio or DU ranges in the footnotes. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 20 1, 2, 3, 4 B 100 T 500 Vertical Resolution km G 1 1,2,3,4 B 3 T 10 Temporal Resolution d G 1/4 1, 2, 3, 4 B 1 T 30 Timeliness d G 1/4 B 1 T 30 Required Measurement Uncertainty (2-sigma) % G 5 1, 2, 3, 4 Requirements for uncertainty (%) and stability (%/decade) translate to wide mixing ratio requirement ranges based on a 3 to 10 ppm range of ozone mixing ratios in the middle and upper stratosphere. B 10 T 15 Stability %/decade G 1 1, 2, 3, 4 Requirements for uncertainty (%) and stability (%/decade) translate to wide mixing ratio requirement ranges based on a 3 to 10 ppm range of ozone mixing ratios in the middle and upper stratosphere. B 2 T 3 Standards and References 1. Ozone Climate Change Initiative User Requirements Document 2. WMO (World Meteorological Organization), Stratospheric Ozone Changes and Climate in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project– Report No. 58, 588 pp., Geneva, Switzerland, 2018. sment.pdf 3. Climate Monitoring User Group CCI Requirements Baseline Documents 4. WMO (World Meteorological Organization), Update on Global Ozone: Past, Present and Future in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project–Report No. 58, 588 pp., Geneva, Switzerland, 2018. eAssessment.pdf 2022 GCOS ECVs Requirements - 88 - 3.2.4 ECV Product: Ozone Tropospheric Column Name Ozone Tropospheric Column Definition 2D field of total amount of O3 molecules per unit area in an atmospheric column extending from the Earth’s surface to the tropopause. Unit % (directly transferrable to Dobson units) Note The team of ozone experts unanimously agreed that the uncertainty and stability requirements for each of these ozone data products should be expressed as % and %/decade in the tables. Defining requirements in units of mixing ratios or Dobson Units would require each uncertainty and stability requirement be a wide range of values. We therefore found it more definitive and intuitive that each table entry is one number in % or %/decade. 
To help translate the requirements in % or %/decade to absolute units we have put a footnote beneath each table that quantitatively describes the wide range of mixing ratios or Dobson Units corresponding to that data product. This helps to explain why the requirements in the tables are not expressed in units of mixing ratio or DU. Requirements in absolute units are easily calculated by multiplying the % (or %/decade) in the table by the mixing ratio or DU ranges in the footnotes. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 5 1, 2, 3, 4, 5 B 20 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1/24 1, 2, 3, 4, 5 B 1/4 T 30 Timeliness d G 1/24 B 1 T 30 Required Measurement Uncertainty (2-sigma) % G 5 1, 2, 3, 4, 5 Requirements for uncertainty (%) and stability (%/decade) translate to wide Dobson Unit requirement ranges based on a 20 to 45 DU range of ozone tropospheric columns. B 10 T 15 Stability %/decade G 1 1, 2, 3, 4,5 Requirements for uncertainty (%) and stability (%/decade) translate to wide Dobson Unit requirement ranges based on a 20 to 45 DU range of ozone tropospheric columns. B 2 T 3 Standards and References 1. Ozone Climate Change Initiative User Requirements Document 2. WMO (World Meteorological Organization), Stratospheric Ozone Changes and Climate in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project– Report No. 58, 588 pp., Geneva, Switzerland, 2018. sment.pdf 3. Climate Monitoring User Group CCI Requirements Baseline Documents 4. WMO (World Meteorological Organization), Update on Global Ozone: Past, Present and Future in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project–Report No. 58, 588 pp., Geneva, Switzerland, 2018. sment.pdf 5. Gaudel, A., et al. (2018), Tropospheric Ozone Assessment Report: Present-day distribution and trends of tropospheric ozone relevant to climate and global atmospheric chemistry model evaluation, Elem. Sci. Anth., 6(1), 39, 2022 GCOS ECVs Requirements - 89 - 3.2.5 ECV Product: Ozone Stratospheric Column Name Ozone Stratospheric Column Definition 2D field of total amount of O3 molecules per unit area in an atmospheric column extending from tropopause to stratopause. Unit % (directly transferrable to Dobson units) Note The team of ozone experts unanimously agreed that the uncertainty and stability requirements for each of these ozone data products should be expressed as % and %/decade in the tables. Defining requirements in units of mixing ratios or Dobson Units would require each uncertainty and stability requirement be a wide range of values. We therefore found it more definitive and intuitive that each table entry is one number in % or %/decade. To help translate the requirements in % or %/decade to absolute units we have put a footnote beneath each table that quantitatively describes the wide range of mixing ratios or Dobson Units corresponding to that data product. This helps to explain why the requirements in the tables are not expressed in units of mixing ratio or DU. Requirements in absolute units are easily calculated by multiplying the % (or %/decade) in the table by the mixing ratio or DU ranges in the footnotes. This data product must consider additional uncertainties introduced by errors in tropopause heights and must definitively state which tropopause definition was used. 
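As an illustration of the percent-to-absolute conversion described in the note above, the following sketch (illustrative only, not part of the source requirements) applies the stratospheric-column uncertainty values tabulated below to the 150-450 DU range quoted in the table footnote:

```python
# Illustrative only: turn the percent-based requirements described above into
# absolute Dobson Units using the 150-450 DU stratospheric-column range quoted
# in the footnote of the table below.
def percent_to_absolute(percent, low, high):
    """Absolute-unit range implied by a percent requirement over a value range."""
    return percent / 100.0 * low, percent / 100.0 * high

for label, pct in [("Goal", 1), ("Breakthrough", 3), ("Threshold", 5)]:
    lo, hi = percent_to_absolute(pct, low=150, high=450)
    print(f"{label}: {pct}% of 150-450 DU -> {lo:.1f}-{hi:.1f} DU")
# The same multiplication applies to the %/decade stability entries.
```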
Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 20 1, 2, 3, 4 B 100 T 500 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1/24 1, 2, 3, 4 B 1 T 30 Timeliness d G 1/4 B 1 T 30 Required Measurement Uncertainty (2-sigma) % G 1 1, 2, 3, 4 Requirements for uncertainty (%) and stability (%/decade) translate to wide Dobson Unit requirement ranges based on a 150 to 450 DU range of ozone stratospheric columns. B 3 T 5 Stability %/decade G 1 1, 2, 3, 4 Requirements for uncertainty (%) and stability (%/decade) translate to wide Dobson Unit requirement ranges based on a 150 to 450 DU range of ozone stratospheric columns. B 2 T 3 Standards and References 1. Ozone Climate Change Initiative User Requirements Document 2. WMO (World Meteorological Organization), Stratospheric Ozone Changes and Climate in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project– Report No. 58, 588 pp., Geneva, Switzerland, 2018. sment.pdf 3. Climate Monitoring User Group CCI Requirements Baseline Documents 4. WMO (World Meteorological Organization), Update on Global Ozone: Past, Present and Future in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project–Report No. 58, 588 pp., Geneva, Switzerland, 2018. eAssessment.pdf 2022 GCOS ECVs Requirements - 90 - 3.2.6 ECV Product: Ozone Total Column Name Ozone Total Column Definition 2D field of total amount of O3 molecules per unit area in an atmospheric column extending from the Earth’s surface to the upper edge of the atmosphere. Unit % (directly transferrable to Dobson units) Note The team of ozone experts unanimously agreed that the uncertainty and stability requirements for each of these ozone data products should be expressed as % and %/decade in the tables. Defining requirements in units of mixing ratios or Dobson Units would require each uncertainty and stability requirement be a wide range of values. We therefore found it more definitive and intuitive that each table entry is one number in % or %/decade. To help translate the requirements in % or %/decade to absolute units we have put a footnote beneath each table that quantitatively describes the wide range of mixing ratios or Dobson Units corresponding to that data product. This helps to explain why the requirements in the tables are not expressed in units of mixing ratio or DU. Requirements in absolute units are easily calculated by multiplying the % (or %/decade) in the table by the mixing ratio or DU ranges in the footnotes. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 20 1, 2, 3, 4 B 100 T 500 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1/24 1, 2, 3, 4 B 1 T 30 Timeliness d G 1/24 B 1 T 30 Required Measurement Uncertainty (2-sigma) % G 1 1, 2, 3, 4 Requirements for uncertainty (%) and stability (%/decade) translate to wide Dobson Unit requirement ranges based on a 200 to 500 DU range of ozone total columns. B 2 T 3 Stability %/decade G 1 1, 2, 3, 4 Requirements for uncertainty (%) and stability (%/decade) translate to wide Dobson Unit requirement ranges based on a 200 to 500 DU range of ozone total columns. B 2 T 3 Standards and References 1. Ozone Climate Change Initiative User Requirements Document 2. WMO (World Meteorological Organization), Stratospheric Ozone Changes and Climate in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project– Report No. 58, 588 pp., Geneva, Switzerland, 2018. sment.pdf 3. 
Climate Monitoring User Group CCI Requirements Baseline Documents 4. WMO (World Meteorological Organization), Update on Global Ozone: Past, Present and Future in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project–Report No. 58, 588 pp., Geneva, Switzerland, 2018. eAssessment.pdf 2022 GCOS ECVs Requirements - 91 - 3.3 ECV: Precursors (Supporting the aerosol and ozone ECVs) 3.3.1 ECV Product: CO Tropospheric Column Name CO Tropospheric Column Definition 2D field of total amount of CO molecules per unit area in an atmospheric column extending from the Earth’s surface to the tropopause. Unit ppb Note Total column CO can approximate tropospheric CO. Observations exist for total column CO. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 In line with O3 & AOD & precursors B 30 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1/24 In line with O3 & AOD & precursors B 1 T 30 Timeliness d G 1 B 7 T 30 Required Measurement Uncertainty (2-sigma) ppb G 1 Relaxed from GAW #242 B 5 T 10 Stability ppb/decade G <1 accuracy/5 B 1 T 2 Standards and References GAW Report 242: GAW Report, 242. 19th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2017) Landgraf et al, 2016, AMT; GAW Report, 255. 20th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2019) Crotwell A.; Lee, H.; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2020 2022 GCOS ECVs Requirements - 92 - 3.3.2 ECV Product: CO Mole fraction Name CO Mole fraction Definition 3D field of amount of CO (Carbon monoxide, expressed in moles) divided by the total amount of all constituents in dry air (also expressed in moles). Unit Mole fraction Note Tropospheric Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 close to the ozone requirements B 30 T 100 Vertical Resolution m G 1 in line with ozone requirements B 3 T 5 Temporal Resolution d G 1/24 in line with ozone requirements B 1 T 30 Timeliness d G 1 B 7 T 30 Required Measurement Uncertainty (2-sigma) ppb G 1 B 5 T 10 Stability ppb/decade G <1 B 1 T 3 Standards and References GAW Report, 242. 19th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2017) GAW Report, 255. 20th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases and Related Measurement Techniques (GGMT-2019) Crotwell A.; Lee, H.; Steinbacher M.; World Meteorological Organization (WMO) - WMO, 2020 2022 GCOS ECVs Requirements - 93 - 3.3.3 ECV Product: HCHO Tropospheric Column Name HCHO Tropospheric Column Definition 2D field of total amount of HCHO molecules per unit area in an atmospheric column extending from the Earth’s surface to the tropopause. Unit molecules cm-2 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B 30 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1/24 in line with O3 & aerosols. 
B 1 T 30 Timeliness d G 1 B 7 T 30 Required Measurement Uncertainty (2-sigma) molecules cm-2 G max (20%, 8E15) Pre-launch accuracy requirements for TROPOMI were 40-80 %; Vigoroux et al., 2020; Achievable with satellites, noting that accuracy is typically dominated by fit error, can be largely improved by temporal and spatial averaging B max (40%,16E15) T max (100%,40E15) Stability molecules cm-2 G max (4%, 8E15) B max (8%,8E15) T max (20%,8E15) Standards and References Uncertainties in Hydrocarbon emission inventories (Cao et al, 2018, Kaiser et al 2018). Typical variability over continental regions, Zhu et al., 2016. Variability of the remote atmosphere, Wolfe et al 2019. 2022 GCOS ECVs Requirements - 94 - 3.3.4 ECV Product: SO2 Tropospheric Column Name SO2 Tropospheric Column Definition 2D field of total amount of SO2 molecules per unit area in an atmospheric column extending from the Earth’s surface to the tropopause. Unit molecules cm-2 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 in line with O3 & AOD & precursors B 30 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1/24 in line with O3 & AOD & precursors B 1 T 30 Timeliness d G 1 B 7 T 30 Required Measurement Uncertainty (2-sigma) molecules cm-2 G max (30%,6E15) Improved from Breakthrough B max(60%, 12E15) Driven by relaxed NO2 accuracy (1.5 NO2 accuracy in %) T max(100%, 20E15) Relaxed from Breakthrough, closer to achievable Stability Molecules cm-2/ decade G max(6%,1.2E15) Accuracy/5 B max(12%, 2.4E15) T max(20%, 4E15) Standards and References Accuracy is typically dominated by fit error, can be largely improved by temporal and spatial averaging, AMF for tropospheric SO2 is smaller than for HCHO and NO2 2022 GCOS ECVs Requirements - 95 - 3.3.5 ECV product: SO2 Stratospheric Column Name SO2 Stratospheric Column Definition 2D field of total amount of SO2 molecules per unit area in an atmospheric column extending from the tropopause to the top of the atmosphere. Unit Molecules cm-2 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 in line with O3 & AOD & precursors B 30 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1/24 in line with O3 & AOD & precursors B 1 T 30 Timeliness d G 1 B 7 T 30 Required Measurement Uncertainty (2-sigma) molecules cm-2 G max(30%,6E15) According to tropospheric SO2 requirements B max(60%, 12E15) T max(100%, 20E15) Stability molecules cm-2 /decade G max(10%,3E15) Accuracy/3 B max(20%,4E15) T max(30%, 7E15) Standards and References Accuracy is typically dominated by fit error, can be largely improved by temporal and spatial averaging, AMF for tropospheric SO2 is smaller than for HCHO and NO2. 2022 GCOS ECVs Requirements - 96 - 3.3.6 ECV Product: NO2 Tropospheric Column Name NO2 Tropospheric Column Definition 2D field of total amount of NO2 molecules per unit area in an atmospheric column extending from the Earth’s surface to the tropopause. Unit molecules cm-2 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 in line with O3 & AOD & precursors B 30 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1/24 in line with O3 & AOD & precursors B 1 T 30 Timeliness d G 1 B 7 T 30 Required Measurement Uncertainty (2-sigma) molecules cm-2 G max(20%, 1E15) Improved accuracy B max(40%, 2E15) Requirement according to 2016 IP T max(100%, 5E15) Achievable accuracy. 
Stability molecules cm-2/ decade G max(4%, 1E15) accuracy/5 B max(8%, 1E15) T max(20%, 1E15) Standards and References 2022 GCOS ECVs Requirements - 97 - 3.3.7 ECV Product: NO2 Mole Fraction NO2 Mole Fraction Name 3D field of amount of NO2 (expressed in moles) divided by the total amount of all constituents in dry air (also expressed in moles) – in stratosphere. Unit ppb Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 20 in line with ozone profile B 100 T 500 Vertical Resolution km G 1 in line with ozone profile B 3 in line with ozone profile T 5 Relaxed from breakthrough Temporal Resolution d G 1/4 B 1 T 30 Timeliness d G 1 in line with ozone profile B 7 T 30 Required Measurement Uncertainty (2-sigma) % G 20 Achievable with solar occultation B 40 Limb scatter, stellar occultation, joint random & systematic uncertainty (1-sigma) around 20% T 60 Relaxed compared to limb scatter Stability %/decade G 4 accuracy/5 B 8 T 12 Standards and References Brochede et al, 2007; geophys comparison, Tamminen et. Al 2010. doi:10.5194/acp-10-9505-2010 Fussen et al, 2019, 2022 GCOS ECVs Requirements - 98 - 3.4 ECV: Aerosols Properties 3.4.1 ECV Product: Aerosol Light Extinction Vertical Profile (Troposphere) Name Aerosol Light Extinction Vertical Profile (Troposphere) Definition Spectrally dependent sum of aerosol particle light scattering and absorption coefficients per unit of geometrical path length. Unit km-1 Note As proxy where extinction profiles are not available a very useful information is the Aerosol Layer Height layer derived from lidar or thermal instruments Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 50 Extinction profiles are retrieved by lidar observations so they typically refer to punctual observations. The reported values in terms of horizontal resolution are here mutated from the AOD. B 100 T 500 Vertical Resolution km G 0.2 Effective vertical resolution depends on the aerosol load strongly. The reported values refer to aerosol extinction @532 nm larger than 2.5 10-2 km-1 B 1 T 2 Temporal Resolution d All the indicated averaging times are assumed to be representative G 1 B 30 T 90 Timeliness y G 0,003 B 0.08 T 1 Required Measurement Uncertainty (2-sigma) % G 20 Uncertainty is dependent on the atmospheric aerosol load. These relative uncertainties refer to extinction values @532nm larger than 2.5 10-2 km-1 The reference value above (2.5 10-2 km-1), to which the uncertainty and stability and vertical resolution requirements apply, are related to the presence of aerosol. The value of 2.5 10-2 km-1 @532nm has been estimated within ACTRIS/EARLINET as indicative of the presence of an aerosol layer (ref : QC documentation available at www.earlinet.org) B 40 T 60 Stability % /decade G 10 These percentages refer to extinction values @532nm larger than 2.5 10-2 km-1. Stability for users’ requirements for this quantity are estimated from the corresponding AOD: for AOD the required stability is one half of the required uncertainty. This criterion has been adopted also for the aerosol extinction (which is the profiling analogue of AOD). B 20 T 30 Standards and References Samset, B. H., and G. Myhre, Climate response to externally mixed black carbon as a function of altitude, J. Geophys. Res. Atmos., 120, 2913–2927, doi:10.1002/2014JD022849, 2015. 
Pappalardo, G., Amodeo, A., Apituley, A., Comeron, A., Freudenthaler, V., Linné, H., Ansmann, A., Bösenberg, J., D'Amico, G., Mattis, I., Mona, L., Wandinger, U., Amiridis, V., Alados-Arboledas, L., Nicolae, D., and Wiegner, M.: EARLINET: towards an advanced sustainable European aerosol lidar network, Atmos. Meas. Tech., 7, 2389–2409, 2014. Welton, E.J., J. R. Campbell, J. D. Spinhirne, and V. S. Scott. Global monitoring of clouds and aerosols using a network of micro-pulse lidar systems, Proc. SPIE, 4153, 151-158, 2001. Welton, E.J. K.J. Voss, H.R. Gordon, H. Maring, A. Smirnov, B. Holben, B. Schmid, J.M. Livingston, P.B. Russell, P.A. Durkee, P. Formenti, M.O. Andreae. Ground-based Lidar Measurements of Aerosols During ACE-2: Instrument Description, Results, and Comparisons with other Ground-based and Airborne Measurements, Tellus B, 52, 635-650, 2000. Anderson, T. L., R. J. Charlson, D. M. Winker, J. A. Ogren, and K. Holmén, Mesoscale variations of tropospheric aerosols, J. Atmos. Sci., 60, 119– 136, 2003. Shimizu, A., T. Nishizawa, Y. Jin, S.-W. Kim, Z. Wang, D. Batdorj and N. Sugimoto, Evolution of a lidar network for tropospheric aerosol detection in East Asia, Optical Engineering. 56 (3), 031219, 2016. 2022 GCOS ECVs Requirements - 99 - 3.4.2 ECV Product: Aerosol Light Extinction Vertical Profile (Stratosphere) Name Aerosol light extinction vertical profile in the stratosphere Definition Spectrally dependent sum of aerosol particle light scattering and absorption coefficients per unit of geometrical path length. Unit km-1 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 200 Extinction profiles are retrieved by lidar observations so they typically refer to punctual observations. But they are also inverted from limb and occultation soundings from satellite for which the spatial resolution can be used when aggregating individual measurements In the stratosphere aerosols are fast spread in latitude bands. Therefore, higher resolution is required along meridians than within latitude bands Source: Aerosol_cci2 User Requirements Document v3.0, 2017 B 500 (latitude) x 6000 (longitude) T Vertical Resolution km G 1 Effective vertical resolution depends on the aerosol load strongly. The reported values refer to aerosol extinction @532 nm larger than 2.5 10-2 km-1 Finer vertical resolution is required near the tropopause so that small to medium sized volcanic eruptions can be detected. B: 1 at 10 km altitude; 2 at 30 km altitude Source: Aerosol_cci2 User Requirements Document v3.0, 2017 B 1 (2) T 2 Temporal Resolution d G 5 All the indicated averaging times are assumed to be representative With 5 days also minor volcanic eruptions can be detected, with 30 days only medium to large eruptions can be detected Source: Bingen, et al., 2017 and Popp, et al., 2016 B 5 T 30 Timeliness y G B T 1 No near-real time usage foreseen; climate studies are main use Required Measurement Uncertainty (2-sigma) % G 20 Uncertainty is dependent on the atmospheric aerosol load. These relative uncertainties refer to extinction values @532nm larger than 2.5 10-2 km-1 Source: Aerosol_cci2 User Requirements Document v3.0, 2017 B 40 T Stability % /decade G 20 These percentages refer to extinction values @532nm larger than 2.5 10-2 km-1. Source: Aerosol_cci2 User Requirements Document v3.0, 2017 B 40 T Standards and References ESA Aerosol_cci2, User Requirements Document, v3., 12.03.2017 Christine Bingen, Charles E. 
Robert, Kerstin Stebel, Christoph Brühl, Jennifer Schallock, Filip Vanhellemont, Nina Mateshvili, Michael Höpfner, Thomas Trickl, John E. Barnes, Julien Jumelet, Jean-Paul Vernier, Thomas Popp, Gerrit de Leeuw, and Simon Pinnock, Stratospheric aerosol data records for the Climate Change Initiative: development, validation and application to Chemistry-Climate Modelling, Remote Sensing of Environment, 2017, Section 4.4 of: Thomas Popp, Gerrit de Leeuw, Christine Bingen, Christoph Brühl, Virginie Capelle, Alain Chedin, Lieven Clarisse, Oleg Dubovik, Roy Grainger, Jan Griesfeller, Andreas Heckel, Stefan Kinne, Lars Klüser, Miriam Kosmale, Pekka Kolmonen, Luca Lelli, Pavel Litvinov, Linlu Mei, Peter North, Simon Pinnock, Adam Povey, Charles Robert, Michael Schulz, Larisa Sogacheva, Kerstin Stebel, Deborah Stein Zweers, Gareth Thomas, Lieuwe Gijsbert Tilstra, Sophie Vandenbussche, Pepijn Veefkind, Marco Vountas and Yong Xue, Development, Production and Evaluation of Aerosol Climate Data Records from European Satellite Observations (Aerosol_cci), Remote Sensing, 8, 421; doi:10.3390/rs8050421, 2016 2022 GCOS ECVs Requirements - 100 - 3.4.3 ECV Product: Multi-wavelength Aerosol Optical Depth Name Multi-wavelength Aerosol Optical Depth Definition Multi-wavelength AOD is the spectral dependent aerosol extinction coefficient integrated over the geometrical path length. (see note) Unit dimensionless Note Aerosol Optical Depth quantifies the extinction of the radiation while propagating in an aerosol layer and reflects the aerosol loading information in the view of remote sensing measurement. AOD varies with wavelength and this variation is related to the aerosol size and type. The GAW guidelines recommend AOD be measured at 3 or more wavelengths among 368, 412, 500, 675, 778, and 862 nm with a bandwidth of 5nm. 1) under some assumptions of aerosol models and surface reflectances, spectral-dependence of AOD permits retrieval of Fine-AOD and Coarse-AOD, defined as the fraction of total aerosol optical depth attributed to the “non-dust” and "dust" aerosols, respectively, which are important parameters to distinguish aerosol type. Also sea-salt is part of the coarse mode AOD 2) The absorption aerosol optical depth (AAOD) is the fraction of AOD related to light absorption and is defined as AAOD=(1−ωo)×AOD where ωo is the column integrated aerosol single scattering albedo. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 20 B 100 T 500 Vertical Resolution G - N/A. B - T - Temporal Resolution d G 0.01 All averages assumed to be representative B 1 T 30 Timeliness d G 1 B 7 T 30 Required Measurement Uncertainty (2-sigma) % or AOD G 4% or 0.02 B 10% or 0.030 T 20% or 0.06 Stability %/decade or AOD/decade G 2% or 0.01 B 4% or 0.02 T 10% or 0.04 Standards and References Levy, R. C., Mattoo, S., Munchak, L. A., Remer, L. A., Sayer, A. M., Patadia, F., and Hsu, N. C.: The Collection 6 MODIS aerosol products over land and ocean, Atmos. Meas. Tech., 6, 2989– 3034, 2013 CIMO-WMO report No 1019, “Abridged final report with resolutions and recommendations”, 2006 Giles, D. M., Sinyuk, A., Sorokin, M. G., Schafer, J. S., Smirnov, A., Slutsker, I., Eck, T. F., Holben, B. N., Lewis, J. R., Campbell, J. R., Welton, E. J., Korkin, S. V., and Lyapustin, A. I.: Advancements in the Aerosol Robotic Network (AERONET) Version 3 database – automated near-real-time quality control algorithm with improved cloud screening for Sun photometer aerosol optical depth (AOD) measurements, Atmos. Meas. 
Tech., 12, 169– 209, 2019 Cuevas, E., Romero-Campos, P. M., Kouremeti, N., Kazadzis, S., Räisänen, P., García, R. D., Barreto, A., Guirado-Fuentes, C., Ramos, R., Toledano, C., Almansa, F., and Gröbner, J.: Aerosol optical depth comparison between GAW-PFR and AERONET-Cimel radiometers from long-term (2005–2015) 1 min synchronous measurements, Atmos. Meas. Tech., 12, 4309– 4337, 2019 2022 GCOS ECVs Requirements - 101 - Kazadzis, S., Kouremeti, N., Nyeki, S., Gröbner, J., and Wehrli, C.: The World Optical Depth Research and Calibration Center (WORCC) quality assurance and quality control of GAW-PFR AOD measurements, Geosci. Instrum. Method. Data Syst., 7, 39-53, 2018a. Kazadzis, S., Kouremeti, N., Diémoz, H., Gröbner, J., Forgan, B. W., Campanelli, M., Estellés, V., Lantz, K., Michalsky, J., Carlund, T., Cuevas, E., Toledano, C., Becker, R., Nyeki, S., Kosmopoulos, P. G., Tatsiankou, V., Vuilleumier, L., Denn, F. M., Ohkawara, N., Ijima, O., Goloub, P., Raptis, P. I., Milner, M., Behrens, K., Barreto, A., Martucci, G., Hall, E., Wendell, J., Fabbri, B. E., and Wehrli, C.: Results from the Fourth WMO Filter Radiometer Comparison for aerosol optical depth measurements, Atmos. Chem. Phys., 18, 3185-3201, 2018b. Schutgens, N., Tsyro, S., Gryspeerdt, E., Goto, D., Weigum, N., Schulz, M., and Stier, P.: On the spatio-temporal representativeness of observations, Atmos. Chem. Phys., 17, 9761– 9780, 2017. 2022 GCOS ECVs Requirements - 102 - 3.4.4 ECV product: Chemical Composition of Aerosol Particles Name Chemical Composition of Aerosol Particles Definition Aerosol particles are chemically composed of inorganic salts (ammonium sulfates, ammonium nitrate, and sea salt), organic compounds, Elemental Carbon (EC), dust, and volcanic ash. These species are often internally mixed within a particle with mixtures depending on sources (primary particles and gas phase precursors), atmospheric processes (gas to particle conversion, cloud processing, and condensation), and atmospheric conditions (T, P, and RH). The chemical composition of aerosol particles is often expressed in μg m-3. Unit µg m-3 Note Climate relevant properties of aerosol particles include hygroscopicity and refractive index. To a first approximation knowledge of the speciated amounts of key components (total inorganics – including sea-salt- , organics, Equivalent Black Carbon, mineral dust, and volcanic ash) is sufficient. Dust can be approximated from the difference between total Mass and sum of Inorganic, EC and OC. As a proxy for the chemical composition, combination of different properties can be used, e.g. size (from Extinction Angström exponent or Fine Mode fraction), absorption (from SSA or AAOD), absorption colour (Absorption Angström exponent). However, any such estimated characterization needs to be associated with a clear definition how a certain aerosol type was characterized and this should be part of the metadata in a product file. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 50 Horizontal definition based on Anderson et al., 2003 B 100 T 500 Vertical Resolution km G 1 Information on both single point AND integrated column are valuable as a threshold. More precise information can be obtained by using a profile at 5km resolution (breakthrough) or 1 km (Goal). B 5 T Temporal Resolution d All averages assumed to be representative G 1 B 30 T 90 Timeliness d G 0.1 B 1 T 365 Required Measurement Uncertainty (2-sigma) % G 20 B 40 T 60 Stability % /decade G 2 B 2 T 4 Standards and References Anderson, T. 
L., R. J. Charlson, D. M. Winker, J. A. Ogren, and K. Holmén, Mesoscale variations of tropospheric aerosols, J. Atmos. Sci., 60, 119– 136, 2003. Aas, W., Mortier, A., Bowersox, V. et al. Global and regional trends of atmospheric sulfur. Sci Rep 9, 953 (2019) doi:10.1038/s41598-018-37304-0. Putaud, J. P., Raes, F., Van Dingenen, R., Brüggemann, E., Facchini, M. C., Decesari, S., Fuzzi, S., Gehrig, R., Hüglin, C., Laj, P., Lorbeer, G., Maenhaut, W., Mihalopoulos, N., Müller, K., Querol, X., Rodriguez, S., Schneider, J., Spindler, G., Ten Brink, H., Tørseth, K., and Wiedensohler, A.: European aerosol phenomenology – 2: chemical characteristics of particulate matter at kerbside, urban, rural and background sites in Europe, Atmos. Environ., 38, 2579–2595, 2004. 2022 GCOS ECVs Requirements - 103 - 3.4.5 ECV Product: Number of Cloud Condensation Nuclei Name Number of Cloud Condensation Nuclei Definition Number of aerosol particles which can activate to a cloud droplet at a given supersaturations of water. CCN is often indicated as a percent of the total CN for specific supersaturation typical of atmospheric cloud formation. Unit Dimensionless Note CCN depends on the supersaturation. Whenever provision of CCN for a range of supersaturation is not available, a typical value of 0.5% can be used as typical supersaturation under atmospheric conditions. The CCN number concentration can be approximated by the fraction of particles larger than a given diameter from the particle number size distribution, generally the number of particles larger than 100 nm, which provide a good approximation of particles activated at « typical » supersaturation. Where no other data are available, fine mode AOD can be used as a qualitative proxy for CCN Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 50 Horizontal definition based on Anderson et al., 2003, Sun et al., 2019 and Laj et al., submitted B 100 T 500 Vertical Resolution km G 1 Information on both single point AND integrated column are valuable as a threshold. More precise information can be obtained by using a profile at 5km resolution (breakthrough) or 1 km (Goal). B 5 T Temporal Resolution d All averages assumed to be representative G 0.5 B 1 T 30 Timeliness d G 0.04 B 1 T 365 Required Measurement Uncertainty (2-sigma) % G 20 B 40 T 60 Stability % /decade G - Stability difficult to evaluate as no trend in CCN are currently available B - T - Standards and References Anderson, T. L., R. J. Charlson, D. M. Winker, J. A. Ogren, and K. Holmén, Mesoscale variations of tropospheric aerosols, J. Atmos. Sci., 60, 119– 136, 2003. Fanourgakis, GS, Kanakidou, M, Nenes, A, Bauer, SE, Bergman, T, Carslaw, KS, Grini, A, Hamilton, DS, Johnson, JS, Karydis, VA, Kirkevag, A, Kodros, JK, Lohmann, U, Luo, G, Makkonen, R, Matsui, H, Neubauer, D, Pierce, JR, Schmale, J, Stier, P, Tsigaridis, K, van Noije, T, Wang, HL, Watson-Parris, D, Westervelt, DM, Yang, Y, Yoshioka, M, Daskalakis, N, Decesari, S, Gysel-Beer, M, Kalivitis, N, Liu, XH, Mahowald, NM, Myriokefalitakis, S. Schrodner, R, Sfakianaki, M, Tsimpidi, AP, Wu, MX, Yu, FQ, “Evaluation of global simulations of aerosol particle and cloud condensation nuclei number, with implications for cloud droplet formation,” Atmos. Chem. Phys., 19, 8591-8617 DOI:10.5194/acp-19-8591-2019, 2019. 
Schmale, J., Henning, S., Henzing, J.S., Keskinen, H., Sellegri, K., Ovadnevaite, J., Bougiatioti, A., Kalivitis, N., Stavroulas, I., Jefferson, A., Park, M., Schlag, P., Kristensson, A., Iwamoto, Y., Aalto, P., Äijälä, M., Bukowiecki, N., Decesari, S., Ehn, M., Frank, G., Fröhlich, R., Frumau, A., Herrmann, E., Holzinger, R., Kos, G., Kulmala, M., Mihalopoulos, N., Motos, G., Nenes, A., O’Dowd, C.D., Paramonov, M., Petäjä, T., Picard, D., Poulain, L., Prévôt, A.S.H., Swietlicki, E., Pöhlker, M., Pöschl, U., Artaxo, P., Brito, J., Carbone, S., Wiedensohler, A., Ogren, J., Matsuki, A., Yum, S.S., Stratmann, F., Baltensperger, U. and Gysel, M. (2017) What do we learn from long-term cloud condensation nuclei number concentration, particle number size distribution and chemical composition at regionally representative observatories? Sci. Data 4:170003, doi: 10.1038/sdata.2017.3. 2022 GCOS ECVs Requirements - 104 - 3.4.6 ECV Product: Aerosol Number Size Distribution Name Aerosol Number Size Distribution Definition The particle number size distribution (PNSD) describes the number of particles in multiple specified size ranges. Unit dimensionless Note The PNSD can provide information about primary particle sources and secondary formation processes, as well as aerosol transport. PNSD can be directly measured in-situ or retrieved under some assumptions from AOD-related measurements or light extinction vertical profile measurements. For climate application, PNSD at ambient relative humidity is relevant. As a proxy for a directly measured aerosol number size distribution, the extinction (scattering) Angstrom exponent, defined as the dependence of ln(AOD) (or ln(σsp)) on ln(λ) can be used as a qualitative indicator of aerosol particle size distribution. Values near 1 indicate a particle size distribution dominated by coarse mode aerosol such as typically associated with mineral dust and sea salt. Values of near 2 indicate particle size distributions dominated by the fine aerosol mode (usually associated with anthropogenic sources and biomass burning). The total number of particles (i.e., condensation nuclei (CN)) is the integral of PNSD over all size ranges. It can be used to derive PNSD under some assumptions. Whenever PNSD is retrieved at dry size, ambient PNSD can be retrieved with the knowledge of particle composition and hydroscopic growth model under some assumptions Number of particles below 20 nm (in diameter) are highly variable due to the process of New Particle Formation and have little direct radiative impact. Regardless, the requirement for aerosol number size distribution ideally is provided for the full size spectrum (15 nm- 15 µm) (defined as goal). Very important climate application can be made with knowledge of PNSD into 2 size ranges (fine and coarse), defined as Threshold). Knowledge of PNSD into 4 size ranges (ultrafine, Aitken, Accumulation and coarse) is defined as breakthrough. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 50 Horizontal definition based on Anderson et al., 2003, Sun et al., 2019 and Laj et al., submitted B 100 T 500 Vertical Resolution km G 1 Information on both single point AND integrated column are valuable as a threshold. More precise information can be obtained by using a profile at 5km resolution (breakthrough) or 1 km (Goal). 
B 5 T Temporal Resolution d All averages assumed to be representative G 0.04 B 1 T 30 Timeliness d G 0.25 B 30 T 365 Required Measurement Uncertainty (2-sigma) G 40% in number and 20% in size Size distribution is a 2-D variable, thus uncertainty can refer either to size or to number. Uncertainty requirements are therefore provided for both dimensions. The uncertainty on size refers to the diameter of the mode of the distribution. B 60% in number and 40% in size T 40% in number for fine-mode (0.05-0.5 µm) and 100% in number for coarse-mode (0.5-15 µm) Stability % /decade G 2 B 4 T 10 Standards and References Laj et al., A global analysis of climate-relevant aerosol properties retrieved from the network of GAW near-surface observatories, submitted to AMT. Anderson, T. L., R. J. Charlson, D. M. Winker, J. A. Ogren, and K. Holmén, Mesoscale variations of tropospheric aerosols, J. Atmos. Sci., 60, 119–136, 2003. Sun, J., W. Birmili, M. Hermann, T. Tuch, K. Weinhold, G. Spindler, A. Schladitz, S. Bastian, G. Löschau, J. Cyrys, J. Gu, H. Flentje, B. Briel, C. Asbach, H. Kaminski, L. Ries, R. Sohmer, H. Gerwig, K. Wirtz, F. Meinhardt, A. Schwerin, O. Bath, N. Ma, A. Wiedensohler, Variability of black carbon mass concentrations, sub-micrometer particle number concentrations and size distributions: results of the German Ultrafine Aerosol Network ranging from city street to High Alpine locations, Atmospheric Environment, Volume 202, 2019, Pages 256-268, ISSN 1352-2310. Wiedensohler, A., Birmili, W., Nowak, A., Sonntag, A., Weinhold, K., Merkel, M., Wehner, B., Tuch, T., Pfeifer, S., Fiebig, M., Fjäraa, A. M., Asmi, E., Sellegri, K., Depuy, R., Venzac, H., Villani, P., Laj, P., Aalto, P., Ogren, J. A., Swietlicki, E., Williams, P., Roldin, P., Quincey, P., Hüglin, C., Fierz-Schmidhauser, R., Gysel, M., Weingartner, E., Riccobono, F., Santos, S., Grüning, C., Faloon, K., Beddows, D., Harrison, R., Monahan, C., Jennings, S. G., O'Dowd, C. D., Marinoni, A., Horn, H.-G., Keck, L., Jiang, J., Scheckman, J., McMurry, P. H., Deng, Z., Zhao, C. S., Moerman, M., Henzing, B., de Leeuw, G., Löschau, G., and Bastian, S.: Mobility particle size spectrometers: harmonization of technical standards and data structure to facilitate high quality long-term observations of atmospheric particle number size distributions, Atmos. Meas. Tech., 5, 657–685, 2012.

3.4.7 ECV Product: Aerosol Single Scattering Albedo

Name: Aerosol Single Scattering Albedo
Definition: Spectrally dependent ratio of the particle light scattering coefficient to the particle light extinction coefficient.
Unit: dimensionless
Note: The Aerosol Single Scattering Albedo (ω0 or SSA) is defined as σsp/σep, or σsp/(σsp + σap), where σep is the volumetric cross-section for light extinction, commonly called the particle light extinction coefficient and typically reported in units of Mm-1 (10-6 m-1). It is the sum of the particle light scattering (σsp) and particle light absorption (σap) coefficients, σep = σsp + σap. All coefficients are spectrally dependent. Purely scattering aerosol particles (e.g., ammonium sulfate) have values of 1, while very strongly absorbing aerosol particles (e.g., black carbon) may have values of around 0.3 at 550 nm. The absorption aerosol optical depth (AAOD) is the fraction of AOD related to light absorption and is defined as AAOD = (1−ω0)×AOD, where ω0 is the column integrated single scattering albedo.
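These definitions are simple enough to evaluate directly. A minimal sketch (with invented input values, not GCOS data) covering SSA, AAOD and the extinction Ångström exponent introduced in the previous product's note:

```python
# Minimal sketch (illustrative values only): single scattering albedo, absorption
# AOD and extinction Angstrom exponent as defined in the aerosol notes above.
import math

def single_scattering_albedo(sigma_sp, sigma_ap):
    """SSA = scattering / (scattering + absorption); coefficients in Mm^-1."""
    return sigma_sp / (sigma_sp + sigma_ap)

def absorption_aod(ssa, aod):
    """AAOD = (1 - SSA) * AOD."""
    return (1.0 - ssa) * aod

def angstrom_exponent(aod_1, wl_1, aod_2, wl_2):
    """Negative slope of ln(AOD) vs ln(wavelength); wavelengths in nm (only the ratio matters)."""
    return -(math.log(aod_1) - math.log(aod_2)) / (math.log(wl_1) - math.log(wl_2))

ssa = single_scattering_albedo(sigma_sp=45.0, sigma_ap=5.0)            # -> 0.90
aaod = absorption_aod(ssa, aod=0.20)                                   # -> ~0.02
ae = angstrom_exponent(aod_1=0.20, wl_1=440, aod_2=0.059, wl_2=870)    # -> ~1.8, fine-mode-dominated
print(round(ssa, 3), round(aaod, 3), round(ae, 2))
```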
Under some circumstances, AAOD at 550 nm is not as highly uncertain as SSA (in particular for low AOD) and can be used as an ECV proxy for absorption. Part of the community regards AAOD as better suited than SSA, which is highly uncertain at low AOD. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 50 Anderson et al., 2003; Laj et al., submitted B 200 T 500 Vertical Resolution km G 1 Information on both single point AND integrated column is valuable as a threshold. More precise information can be obtained by using a profile at 5 km resolution (breakthrough) or 1 km (goal). SSA is not directly measurable as an integrated column or profile but can be retrieved under some assumptions. B 5 T Temporal Resolution d G 0.01 All averages assumed to be representative B 1 T 30 Timeliness d G 1 B 7 T 30 Required Measurement Uncertainty (2-sigma) dimensionless G 0.1 B 0.2 T 0.4 Stability %/decade G 0.1 Stability difficult to assess due to lack of clear trends observed B 0.4 T 1 Standards and References Laj et al., A global analysis of climate-relevant aerosol properties retrieved from the network of GAW near-surface observatories, submitted to AMT. Collaud Coen et al., Multidecadal trend analysis of aerosol radiative properties at a global scale, submitted to ACP. Sherman, J. P., Sheridan, P. J., Ogren, J. A., Andrews, E., Hageman, D., Schmeisser, L., Jefferson, A., and Sharma, S.: A multi-year study of lower tropospheric aerosol variability and systematic relationships from four North American regions, Atmos. Chem. Phys., 15, 12487–12517, 2015. Schutgens, N., Tsyro, S., Gryspeerdt, E., Goto, D., Weigum, N., Schulz, M., and Stier, P.: On the spatio-temporal representativeness of observations, Atmos. Chem. Phys., 17, 9761–9780, 2017. Ocean ECVs 4. PHYSICS 4.1 ECV: Sea-Surface Temperature 4.1.1 ECV Product: Sea-Surface Temperature Name Sea surface temperature Definition Radiative skin sea surface temperature, or bulk sea surface temperature at a stated depth. Unit Kelvin (K) Note The “bulk” temperature refers to a depth of typically 2 m; the “skin” temperature refers to the upper 1 mm. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km length G 5 B T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d time G 1/24 In situ measurements; daily in the case of satellite measurements B T 7 Timeliness h time G 3 B T 24 Required Measurement Uncertainty (2-sigma) K G 0.05 Over 100 km scale B T 0.3 Over 100 km scale Stability K/decade G 0.01 Over 100 km scale B T 0.1 Over 100 km scale Standards and References Johnson et al (2015): Informing Deep Argo Array Design Using Argo and Full-Depth Hydrographic Section Data; 5 x 5 degree array proposed with 15-day repeat cycle. Estimated reduction of sub-2000 m OHC error in decadal trends from +/- 17 TW to +/- 3 TW. Desbruyeres et al (2017): Global and Full-Depth Ocean Temperature Trends during the Early Twenty-First Century from Argo and Repeat Hydrography; estimate of global ocean heat uptake of 0.71 ± 0.09 W m−2 during 2006-2014 with the < 2000 m layer accounting for 90% of the observed change. Rayner (2017) User Requirements Document, SST_CCI-URD-UKMO-201, ESA. Merchant, C.J., Embury, O., Bulgin, C.E. et al. Satellite-based time-series of sea-surface temperature since 1981 for climate applications. Sci Data 6, 223 (2019). 
2022 GCOS ECVs Requirements - 109 - 4.2 ECV: Subsurface Temperature 4.2.1 ECV Product: Interior Temperature Name Interior temperature Definition Seawater temperature measured with depth. Unit Kelvin (K) Note This variable is referred to as “Ocean temperature” in WMO RRR, and a difference between Upper (<2000 m) and Deep (>2000 m) ocean is established. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 100 1 Upper ocean Deep ocean Coastal B 100 250 Upper ocean Deep ocean T 300 500 10 Upper ocean Deep ocean Coastal Vertical Resolution m G 1 Upper ocean B 2 Upper ocean T 10 Upper ocean Temporal Resolution d G 1 1 1/24 Upper ocean Deep ocean Coastal B 10 15 Upper ocean Deep ocean T 30 30 30 Upper ocean Deep ocean Coastal Timeliness d G 1 90 for real time in delayed mode B 1 180 for real time in delayed mode T 30 365 for real time in delayed mode Required Measurement Uncertainty (2-sigma) K G 0.001 0.001 Upper ocean Deep ocean B T 0.1 0.01 0.1 Upper ocean Deep ocean Coastal Stability K Standards and References Johnson et al (2015): Informing Deep Argo Array Design Using Argo and Full-Depth Hydrographic Section Data; ; 5 x 5 degree array proposed with 15-day repeat cycle. Estimated reduction of sub-2000 m OHC error in decadal trends from +/- 17 TW to +/- 3 TW. Palmer et al (2010): Future Observations for Monitoring Global Ocean Heat Content; Table 1 in the paper includes GCOS Observation Requirements in WMO/CEOS Database for upper ocean temperature and salinity Desbruyeres et al (2017): Global and Full-Depth Ocean Temperature Trends during the Early Twenty-First Century from Argo and Repeat 2022 GCOS ECVs Requirements - 110 - Hydrography; "Estimate of global ocean heat uptake of 0.71 ± 0.09 W m−2 during 2006-2014 with < 2000m layer accounting for 90% of the observed change. 2022 GCOS ECVs Requirements - 111 - 4.3 ECV: Sea-Surface Salinity 4.3.1 ECV Product: Sea-surface Salinity Name Sea-surface salinity Definition Salinity of seawater, at or near the surface. Unit psu, pss, g/Kg, or no unit Note For remote sensing, the measurement corresponds typically to 1 cm depth. For in situ, 1-2 m depth. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B T 50-100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1-3 B T 7 Timeliness d G 7 B T 30 Required Measurement Uncertainty (2-sigma) 1 G 0.1 Synthesis of coordinated input from ESA based on community workshop and numerous published references. 0.1 psu for 50-km spatial average and monthly mean; mean in low-variability regions (where in-situ validation measurements are not subject to significant sampling errors). B T 0.2 Synthesis of coordinated input from ESA based on community workshop and numerous published references. 0.2 psu for 100-km spatial average and monthly mean in low variability regions. Stability 1/decade G 0.01 0.01 psu/decade for 1000-km average in low-variability regions. B T 0.1 Durach, Wijffel and Matear (2012) (showing trends of 0.4 psu over 5 decades on 1000-km scales) 0.1 psu/decade for 1000-km average in low-variability regions. Standards and References Durack, Paul J., Susan E. Wijffels and Richard J. Matear (2012): Ocean Salinities Reveal Strong Global Water Cycle Intensification During 1950 to 2000, Science, 336 (6080), pp 455-458. DOI: 10.1126/science.1212222 Sea Surface Salinity Climate Change Initiative Phase 1 - User Requirement Document (2019). 
Available at: 2022 GCOS ECVs Requirements - 112 - 4.4 ECV: Subsurface Salinity 4.4.1 ECV Product: Interior Salinity Name Interior salinity Definition Salinity of seawater measured with depth. Unit psu, pss, g Kg-1, or no unit Note This variable is referred to as “Ocean salinity” in WMO RRR OSCAR database, and a difference between Upper (<2000 m) and Deep (>2000 m) ocean is established. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B T 100 Vertical Resolution m G 1 1 Upper ocean Deep ocean B T 10 100 Upper ocean Deep ocean Temporal Resolution d G 1 B T 30 Timeliness d G 1 B T 30 Required Measurement Uncertainty (2-sigma) 1 G 0.01 0.005 Upper ocean Deep ocean B T 0.05 0.02 Upper ocean Deep ocean Stability 1/decade G B T Standards and References 2022 GCOS ECVs Requirements - 113 - 4.5 ECV: Surface Currents 4.5.1 ECV Product: Ekman Currents Name Ekman currents Definition Ocean vector motion occurring over the depth of the Ekman layer as a result of the combined action of surface winds and Coriolis force. Unit m s-1 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B 20 T 25 Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 B T 6 Timeliness h G 1 B T 3 Required Measurement Uncertainty (2-sigma) m s-1 G 0.02 B T 0.1 Stability G B T Standards and References 2022 GCOS ECVs Requirements - 114 - 4.5.2 ECV Product: Surface Geostrophic Current Name Surface Geostrophic Current Definition Ocean vector motion measured at or near the surface (at stated depth). Unit m s-1 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B 20 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1/4 B 1 T 7 Timeliness d G B T 1 Required Measurement Uncertainty (2-sigma) m s-1 G 0.02 B T 0.1 Stability G B T Standards and References Villas Bôas et al. (2019) Integrated Observations of Global Surface Winds, Currents, and Waves: Requirements and Challenges for the Next Decade. Front. Mar.Sci. 6:425. doi: 10.3389/fmars.2019.00425 2022 GCOS ECVs Requirements - 115 - 4.6 ECV: Subsurface Currents 4.6.1 ECV Product: Vertical Mixing Name Vertical mixing Definition Ocean vector motion measured at or near the surface (3D, at stated depth). Unit m s-1 Note A difference between Upper (<2000 m) and Deep (>2000 m) ocean is established. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B T 100 Vertical Resolution m G 1 10 Upper ocean Deep ocean B T 10 100 Upper ocean Deep ocean Temporal Resolution d G 1 B 7 T 30 Timeliness d G B T 30 Required Measurement Uncertainty (2-sigma) G 0.02 B T 0.1 Stability G B T Standards and References 2022 GCOS ECVs Requirements - 116 - 4.7 ECV: Sea Level 4.7.1 ECV Product: Regional Mean Sea Level Name Regional mean sea level Definition The Height of the Ocean Surface relative to a reference geoid or an agreed regional datum. Unit m Note Estimates of the regional mean sea level are obtained by averaging individual sea surface heights over a region during a given period. 
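The averaging described in the note above amounts to an area-weighted mean of gridded sea surface heights over the region and period of interest. A minimal sketch, assuming a regular latitude-longitude grid whose cell area is proportional to the cosine of latitude (an assumption of this illustration, not a specification of this document):

```python
import numpy as np

def regional_mean_sea_level(ssh, lat, region_mask):
    """Area-weighted mean of gridded sea surface height over a region for one period.

    ssh:         2-D array (lat, lon) of sea surface heights [m]
    lat:         1-D array of grid latitudes [degrees]
    region_mask: boolean 2-D array, True for ocean cells inside the region
    """
    # On a regular lat-lon grid the cell area scales with cos(latitude).
    weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(ssh)
    weights = np.where(region_mask, weights, 0.0)
    return np.sum(weights * ssh) / np.sum(weights)

# Toy check: a uniform 1 m field inside the region gives a regional mean of 1 m.
lat = np.linspace(-60, 60, 121)
ssh = np.ones((121, 240))
mask = np.zeros_like(ssh, dtype=bool)
mask[30:90, 50:120] = True
print(regional_mean_sea_level(ssh, lat, mask))  # -> 1.0
```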
Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1 B T 7 Timeliness month G 1 B T 12 Required Measurement Uncertainty (2-sigma) mm G B T 10 Over a grid mesh of 50-100 km Stability mm yr-1 G 0.3 Regional mean, 90% CI (confidence level) B T <0.1 Over a grid mesh of 50-100 km Standards and References Ponte, R.M., Carson, M., Cirano, M., Domingues, C.M., Jevrejeva, S., Marcos, M., Mitchum, G., Van De Wal, R.S.W., Woodworth, P.L., Ablain, M. and Ardhuin, F., 2019. Towards comprehensive observing and modeling systems for monitoring and predicting regional to coastal sea level. Frontiers in Marine Science, p.437. Benveniste, J., Cazenave, A., Vignudelli, S., Fenoglio-Marc, L., Shah, R., Almar, R., et al. (2019). Requirements for a coastal zone observing system. Front. Mar. Sci. 6:348. doi: 10.3389/fmars.2019.00348 2022 GCOS ECVs Requirements - 117 - 4.7.2 ECV Product: Global Mean Sea Level Name Global Mean Sea level Definition The height of the ocean surface relative to a reference geoid. Unit m Note Estimates of the global mean sea level are obtained by averaging individual sea surface heights over the global ocean during a given period. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1 B T 30 Timeliness d G 30 B T 365 Required Measurement Uncertainty (2-sigma) mm G B T 2-4 Values for the global mean. The uncertainty over a global mesh is = 10 mm Stability mm yr-1 G <0.03 Target to be considered for the detection of permafrost melting. From the WCRP grand challenge on sea level and coastal impacts the required stability in GMSL is <0.03 mm/year (over a decade, 90%Cl) to detect permafrost thawing. B <0.1 Target to be considered for the estimation of deep ocean warming and Earth energy imbalance is 0.1 mm/year (over a decade, 90% Cl). T <0.3 Adapted for sea level impact detection (detection of a change in the rate of rise of the global mean sea level). From the WCRP grand challenge on sea level and coastal impacts the required stability in GMSL <0.3 mm/year (global mean, 90% Cl) for the detection attribution of sea level rise. Standards and References The uncertainty budget of the global mean sea level derived from satellite altimetry strongly relies on the precise orbit determination of the platform, the instrumental, geophysical and environmental altimeter corrections used to derive the sea level anomalies. Meyssignac, B., Boyer, T., Zhao, Z., Hakuba, M.Z., Landerer, F.W., Stammer, D., Köhl, A., Kato, S., L’ecuyer, T., Ablain, M. and Abraham, J.P., 2019. Measuring global ocean heat content to estimate the Earth energy imbalance. Frontiers in Marine Science, 6, p.432. Cazenave, A., Hamlington, B., Horwath, M., Barletta, V.R., Benveniste, J., Chambers, D., Döll, P., Hogg, A.E., Legeais, J.F., Merrifield, M. and Meyssignac, B., 2019. Observational requirements for long-term monitoring of the global mean sea level and its components over the altimetry era. Frontiers in Marine Science, p.582. 2022 GCOS ECVs Requirements - 118 - 4.8 ECV: Sea State 4.8.1 ECV Product: Wave Height Name Wave Height Definition The distance between the trough of the wave and the adjacent crest of the wave. The significant wave height is the mean wave height (trough to crest) of the highest third of the waves in a wave spectrum. 
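The significant wave height defined above is simply the mean of the highest one-third of the individual trough-to-crest heights in a record. A minimal sketch with made-up wave heights:

```python
import numpy as np

def significant_wave_height(wave_heights):
    """Mean of the highest one-third of individual (trough-to-crest) wave heights, in the input units.

    Spectrally, Hs is also commonly approximated as 4*sqrt(m0), where m0 is the
    zeroth moment of the wave spectrum; that estimate is not computed here.
    """
    h = np.sort(np.asarray(wave_heights, dtype=float))[::-1]  # descending order
    n_third = max(1, len(h) // 3)
    return h[:n_third].mean()

# Illustrative record of individual wave heights in cm (not real observations):
record_cm = [120, 95, 210, 180, 60, 140, 250, 90, 170, 110, 200, 75]
print(significant_wave_height(record_cm))  # -> 210.0 cm (mean of the four largest waves)
```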
Unit cm Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 Needed to resolve sea state variability in the coastal zone B 25 Needed to resolve mesoscale variability T 100 Needed to resolve synoptic scales associated with atmospheric systems Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 Needed to resolve sea state variability in the coastal zone (tidal modulation of the sea state) B 3 Needed to resolve sea state variability at the scale of storm events T 24 Needed to compute robust monthly statistics Timeliness d G 7 To support assessment of extreme storm/cyclonic events B 30 To support assessment of seasonal extreme events T 365 For assessment and reanalysis Required Measurement Uncertainty (2-sigma) % Normalized root-mean-squared error G 5 Uncertainty goal, as proposed by Ardhuin et al., 2019 B T Stability cm/decade G 1 Needed to account for wave impact (wave setup) on coastal sea level B T 10 Needed to detect the largest trends. Existing long-term observations show maximum Standards and References Ardhuin, F. et al. 2019. Observing Sea States. Front. Mar. Sci. 6. 4.9 ECV: Ocean Surface Stress 4.9.1 ECV Product: Ocean Surface Stress Name Ocean Surface Stress Definition The two-dimensional vector drag at the bottom of the atmosphere and the dynamical forcing at the top of the ocean. Unit N m-2 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B T 100 Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 B T 24 Timeliness d G 7 B T 30 Required Measurement Uncertainty (2-sigma) N m-2 G 0.004 or 2% International Ocean Vector Wind Science Team; Cronin et al. (2019) B T 0.02 or 8% International Ocean Vector Wind Science Team; Cronin et al. (2019) Stability N m-2 G 0.0006 International Ocean Vector Wind Science Team; Cronin et al. (2019) B T 0.0001 International Ocean Vector Wind Science Team; Cronin et al. (2019) Standards and References 4.10 ECV: Ocean Surface Heat Flux 4.10.1 ECV Product: Radiative Heat Flux Name Radiative Heat Flux Definition The net difference between radiation leaving the sea surface (reflected and emitted) and downward radiation impinging on the sea surface; commonly divided into an infrared or longwave and a visible or shortwave component (QLW,net + QSW,net): QLW,net = LW↑ − LW↓ = ε σSB Ts⁴ + (1 − ε) LW↓ − LW↓ = ε (σSB Ts⁴ − LW↓) and QSW,net = QSW↑ − QSW↓ = QSW↓ (α − 1), where ε is the IR surface emissivity (ε = 1 for black-body emission), σSB is the Stefan-Boltzmann constant, and Ts is the sea surface (skin) temperature emitting the IR radiation, in kelvin. The upward shortwave flux is reflected sunlight, often determined by a parameterization of the surface albedo (α). Unit W m-2 Note Surface heat flux is the rate of exchange of heat, per unit area, crossing the sea surface from ocean to atmosphere. Sign conventions vary; heat fluxes are sometimes reported with positive values for heat into the ocean. The net heat flux is the sum of the turbulent (latent and sensible) fluxes and the radiative (shortwave and longwave) components. Downward shortwave at the surface is predominantly visible light. While sensible, latent, and longwave heat fluxes occur at the sea surface, shortwave radiation penetrates seawater, with red light absorbed close to the surface and blue light absorbed at greater depths. 
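A minimal numerical illustration of the net longwave and shortwave expressions in the definition above. The emissivity and ocean-albedo values are typical assumptions chosen only for this sketch, and the sign convention follows the definition (positive fluxes leave the ocean):

```python
# Net radiative heat flux components as written in the definition above.
SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant, W m-2 K-4

def q_lw_net(t_s_kelvin, lw_down, emissivity=0.98):
    """Q_LW,net = eps * (sigma_SB * Ts^4 - LW_down), W m-2; positive = longwave loss by the ocean."""
    return emissivity * (SIGMA_SB * t_s_kelvin**4 - lw_down)

def q_sw_net(sw_down, albedo=0.06):
    """Q_SW,net = SW_down * (albedo - 1), W m-2; negative = shortwave gain by the ocean."""
    return sw_down * (albedo - 1.0)

# Illustrative values: Ts = 290 K, LW_down = 350 W m-2, SW_down = 600 W m-2.
print(q_lw_net(290.0, 350.0))  # ~ +50 W m-2 net longwave loss
print(q_sw_net(600.0))         # ~ -564 W m-2, i.e. shortwave gain by the ocean
```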
These turbulent and radiative surface fluxes are major contributors to energy and moisture budgets, and are largely responsible for thermodynamic coupling of the ocean and atmosphere on all scales. Variability of these fluxes is in part related to largescale variability in weather (climate) patterns. For most regions, the two major components are the net shortwave gain by the ocean and the latent heat flux loss by the ocean. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B 25 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 B 3 T 24 Timeliness G 7 B 30 T 365 Required Measurement Uncertainty (2-sigma) W m-2 G 10 . B 15 T 20 Stability W m-2/ decade G 1 B 2 T 3 Standards and References Meghan F. Cronin et al. (2019). Air-Sea Fluxes with a Focus on Heat and Momentum, Frontiers in Marine Science, 6, article 430, p1-30. Meyssignac, Benoit, et al. Measuring global ocean heat content to estimate the Earth energy imbalance" Frontiers in Marine Science 6 (2019): 432. 2022 GCOS ECVs Requirements - 121 - 4.10.2 ECV Product: Sensible Heat Flux Name Sensible Heat Flux Definition The heat exchanged between the atmosphere and ocean when a warmer ocean warms the air above or when a cooler ocean cools the air above. Unit W m-2 Note The net surface heat flux is the rate of exchange of heat, per unit area, crossing the sea surface from ocean to atmosphere. Sign conventions vary; heat fluxes are sometimes reported with positive values for heat into the ocean. The net heat flux is the sum of turbulent (latent and sensible) fluxes and the radiative (short wave and long wave) components. Sensible heat flux is the rate at which heat is transferred from the ocean to the atmosphere by conduction and convection. Commonly, the ocean is warmer than the atmosphere, leading to a sensible heat flux that warms the atmosphere. A surface sensible heat flux which warms the atmosphere will tend to cause unstable (convective) conditions and enhanced mixing, while an atmosphere cooled by the ocean tends to be stratified, which inhibits mixing. In the tropics, latent heat flux is typically an order of magnitude greater than sensible heat flux, but in polar regions they are similar in magnitude. These fluxes are major contributors to energy and moisture budgets, and are largely responsible for thermodynamic coupling of the ocean and atmosphere on all scales. Variability of these fluxes is in part related to largescale variability in weather (climate) patterns. For most regions, the two major components are the net shortwave gain by the ocean and the latent heat flux loss by the ocean. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B 25 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 B 3 T 24 Timeliness G 7 B 30 T 365 Required Measurement Uncertainty (2-sigma) W m-2 G 10 . B 15 T 20 Stability W m-2/ decade G 1 B 2 T 3 Standards and References Meghan F. Cronin et al (2019). Air-Sea Fluxes with a Focus on Heat and Momentum, Frontiers in Marine Science, 6, article 430, p1-30. Meyssignac, Benoit, et al. "Measuring global ocean heat content to estimate the Earth energy imbalance." Frontiers in Marine Science 6 (2019): 432. 2022 GCOS ECVs Requirements - 122 - 4.10.3 ECV Product: Latent Heat Flux Name Latent Heat Flux Definition The latent heat exchanged between the ocean and atmosphere associated with the phase change from liquid to gas during evaporation of seawater or from gas to liquid during condensation. 
During the more common process of surface evaporation, heat is extracted from the ocean, cooling the surface ocean. The moistened parcel of air can be carried aloft and the latent heat released to the atmosphere through condensation, which plays a crucial role in cloud formation and precipitation. Unit W m-2 Note The net surface heat flux is the rate of exchange of heat, per unit area, crossing the sea surface from ocean to atmosphere. Sign conventions vary; heat fluxes are sometimes reported with positive values for heat into the ocean. The net heat flux is the sum of turbulent (latent and sensible) fluxes and the radiative (short wave and long wave) components. Latent heat flux is associated with the phase change of water during evaporation or condensation and proportional to evaporation. The energy required for surface evaporation cools the ocean surface and moistens the near surface air adding to its buoyancy. The moistened parcel of air can be carried aloft, and the latent heat released to the atmosphere through condensation, which plays a crucial role in cloud formation and precipitation. Surface measured precipitation is often out of balance with evaporation (P-E), which implies moisture convergence/divergence in the atmosphere. In the tropics, latent heat flux is typically an order of magnitude greater than sensible heat flux, but in polar regions they are similar in magnitude. These fluxes are major contributors to energy and moisture budgets, and are largely responsible for thermodynamic coupling of the ocean and atmosphere on all scales. Variability of these fluxes is in part related to largescale variability in weather (climate) patterns. For most regions, the two major components are the net shortwave gain by the ocean and the latent heat flux loss by the ocean. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10 B 25 T 100 Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 B 3 T 24 Timeliness d G 7 B 30 T 365 Required Measurement Uncertainty (2-sigma) W m-2 G 10 . B 15 T 20 Stability W m-2/ decade G 1 B 2 T 3 Standards and References Meghan F. Cronin et al (2019). Air-Sea Fluxes with a Focus on Heat and Momentum, Frontiers in Marine Science, 6, article 430, p1-30. Meyssignac, Benoit, et al. "Measuring global ocean heat content to estimate the Earth energy imbalance." Frontiers in Marine Science 6 (2019): 432. 2022 GCOS ECVs Requirements - 123 - 4.11 ECV: Sea Ice 4.11.1 ECV Product: Sea Ice Concentration Name Sea Ice Concentration (SIC) Definition Fraction of ocean area covered with sea ice. Unit % (or 1) Note Sea ice concentration (in %) or sea ice area fraction (0 … 1) is a parameter that requires a spatial scale for reference; it is the fraction of a known ocean area (whatever size) covered with sea ice. Sea-ice extent (= the total area of all grid cells covered with sea ice above a certain threshold, often 15%) and sea-ice area (= the total area of all grid cells covered with sea ice using the actual sea-ice area fraction as weight) are indicators derived from sea-ice concentration. Some products report sea-ice concentration intervals, others are ice/water binary masks. The border of the sea ice covered area (below a given threshold, often 15% SIC) defines a sea ice edge. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 Near-coast applications (e.g. Canadian Arctic Archipelago). Possibly not as sea-ice concentration but as ice / no-ice (edge). 
B 5 25 Regional analysis Trend analysis, global monitoring T 50 Limit for trend analysis, evaluation of global GCM simulations Vertical Resolution N/A G <1 SIC vary on a sub-daily time scale (opening/closing of leads) B 1 7 Ocean and Atmosphere reanalyses, daily monitoring of the sea- ice cover T 30 Temporal Resolution d G <1 SIC vary on a sub-daily time scale (opening/closing of leads) B 1 7 Ocean and Atmosphere reanalyses, daily monitoring of the sea-ice cover T 30 Timeliness d G 1-2 B 7 Operational monitoring with climate indicators, update of reanalyses T 30 Update of monthly climate indicators Required Measurement Uncertainty (2-sigma) % SIC G 5 B T 10 Stability %/dec G 5 B T Standards and References Lavergne and Kern, et al. (2022). A New Structure for the Sea Ice Essential Climate Variables of the Global Climate Observing System, BAMS, DOI 10.1175/BAMS-D-21-0227.1. Ono, J., H. Tatebe, and Y. Komuro, 2019: Mechanisms for and Predictability of a Drastic Reduction in the Arctic Sea Ice: APPOSITE Data with Climate Model MIROC. J. Climate, 32, 1361–1380, 2022 GCOS ECVs Requirements - 124 - 4.11.2 ECV Product: Sea Ice Thickness Name Sea Ice Thickness Definition The vertical distance between sea ice surface and sea ice underside of the ice-covered fraction of an area. Unit m Note Sea-ice thickness is together with the sea-ice area derived from the sea-ice concentration the key ingredient to compute the sea-ice volume and mass. Long-term sea-ice volume and mass changes are considered as the integral response of climate change exerted on the polar regions. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 Required to resolve small scale impacts of deformation events on sea-ice thickness distribution for more accurate estimation of dynamics on mass balance. Enables to resolve thickness distribution approaching floe scale for improved ice mass flux. Needed to obtain enhanced ice-type specific ice thickness information and more accurate estimates of ice production. B 25 distribution 25 mean & median Required for the analysis of regional sea-ice thickness distributions Needed to further develop and improve GCMs and to improve regional climate analyses Needed to refine hemispheric trend analyses and to analyze basin-wide / regional sea-ice thickness and mass trends Required for the evaluation of the next generation of CMIP6 GCMs T 50 Minimum useful horizontal resolution to compute hemispheric trends in sea-ice thickness and mass and to evaluate GCMs / CMIP6 Vertical Resolution G - N/A B - T - Temporal Resolution d G daily year-round To resolve ice production in polynyas and during early freeze-up To resolve the impact of dynamic processes on the sea-ice thickness distribution To resolve snow-ice formation B weekly year-round monthly year-round To better monitor the impact of longer-lasting weather conditions on sea-ice formation and melt. 
To better monitor the full seasonal cycle of sea-ice thickness T monthly wintertime Minimum temporal resolution required to adequately monitor the winter-time sea-ice thickness and mass increase Timeliness d G 1 Operational monitoring with climate indicators, update of reanalyses B 7 Update of monthly climate indicators T 30 Required Measurement Uncertainty (2-sigma) m G 0.05 To improve monitoring of thin ice areas and associated heat fluxes To enhance sea-ice production estimation To monitor diurnal changes in sea-ice thickness during growth and melt B 0.1 To monitor regional- and large-scale sea-ice thickness changes in the Arctic towards the end of the growing season and in the Antarctic. T 0.25 Minimum useful uncertainty to be able to monitor basin-wide sea-ice thickness changes at monthly scale. G 2022 GCOS ECVs Requirements - 125 - Stability m/decade B T Standards and References Lavergne and Kern, et al. (2022). A New Structure for the Sea Ice Essential Climate Variables of the Global Climate Observing System, BAMS, DOI 10.1175/BAMS-D-21-0227.1. 2022 GCOS ECVs Requirements - 126 - 4.11.3 ECV Product: Sea Ice Drift Name Sea Ice Drift Definition Rate of movement of sea ice due to winds, currents or other forces. Unit km d-1 Note 1) Sea Ice drift is a 2D vector, expressed with two components along two orthogonal directions. 2) The uncertainty requirements below are for both components (not the total velocity). 3) The uncertainty requirements below are for a reference displacement period of 24 hours. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 Near-coast applications (e.g. Canadian Arctic Archipelago). B 5 25 Regional analysis, deformations, volume fluxes through narrow gates. Trend analysis, sea-ice tracking, volume fluxes T 50 Limit for trend analysis, evaluation of global GCM simulations Vertical Resolution G - N/A B - T - Temporal Resolution d G <1 Sea-ice motion can change very rapidly with winds or internal forces B 1 7 T 30 Large-scale circulation patterns and trends Timeliness d G 1-2 B 7 Update of monthly climate indicators T 30 Required Measurement Uncertainty (2-sigma) km d-1 see Note G 0.25 Requires high-resolution imaging (e.g. SAR). For deriving deformation. B 3 T 10 Stability %/decade G B T Standards and References Lavergne and Kern, et al. (2022). A New Structure for the Sea Ice Essential Climate Variables of the Global Climate Observing System, BAMS, DOI 10.1175/BAMS-D-21-0227.1. Dierking, W., et al., Estimating statistical errors in retrievals of ice velocity and deformation parameters from satellite images and buoy arrays, The Cryosphere, 14(9), 2999-3016, 2020, 2022 GCOS ECVs Requirements - 127 - 4.11.4 ECV Product: Sea Ice Age Name Sea Ice Age Definition The age of an ice parcel is the time since its formation or since the last significant (e.g. summer) melt. Unit day Note An ice parcel formed during the freezing season is in its first year of existence and can be defined as first-year ice, its age is less than 1 year. When it survives the first exposure to significant melting (e.g. summer season) it becomes second-year ice (its age is between 1 and 2 years). This continues for each summer melt season the ice parcel survives. In other words, the age of an ice parcel is rounded up to the nearest integer year with each exposure to significant melting (typically the summer melt season). 
While in the Arctic, it has been common practice to use the date of the overall summer minimum extent for the reclassification of the sea ice, there are no well accepted definitions for the Southern Ocean and region-specific dates might be needed. Here we do not define any specific details what the definition of the significant melt is. The reclassification of sea ice into an older ice category at significant melt aims at linking the sea-ice age information to the physical properties of the ice, including its air bubbles content, density, salinity, surface roughness, etc. All these physical properties change drastically through melting and especially during the first summer melt. Sea ice age can be reported as the representative/dominating age in an area or as the distribution of ages within an area. Sea ice age can be computed with different approaches. Traditionally, sea-ice age has been derived from either Lagrangian tracking techniques and presented as areas with year classes (age = 1, 2, 3, etc.) or from analysis of microwave emissivity and backscattering and reported as age categories (e.g. first-year ice, second year ice, multiyear ice). The latter retrieval method often refers to the product as sea-ice type. Age concentration products exist that report some distribution of age within grid cells. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 Needed to resolve spatial differences in age when refreezing occurs between larger ice floes and plates, or in divergent icefields. Will capture details in the Canadian Archipelago. Needed to optimally resolve the age of narrow land-fast ice areas fringing Antarctica. B 5 25 Needed for better capturing regions dominated by broken old ice (like the Beaufort Gyre), and elongated filaments of certain age classes. Needed to resolve the age of larger-scale land-fast ice areas in Antarctica important for buttressing ice shelves. Reasonable capability in Canadian Archipelago, except for narrower straits. Regional analysis. General mapping of ice classes, used for climate monitoring e.g. trend analysis, climate index of old ice. Also, used as background information for ice thickness retrieval. Lack of resolution for smaller areas, such as in the Canadian Archipelago. T 50 Limit for trend analysis Vertical Resolution G - N/A B - T - Temporal Resolution d G <1 B 1 7 The edges between ice classes can move a lot during a d however the areal coverage of the >1year classes is assumed not to have large daily variability. T 30 Timeliness d G 1-2 Operational monitoring with climate indicators B 7 T 30 Useful for input into monthly altimeter-based sea ice thickness estimates. Required Measurement Uncertainty (2-sigma) d G 7 Age information as “time since its formation or since the last significant (e.g. summer) melt”. We do report the age of the ice within the on-going freezing season. B 182 Age as year classes (1,2,3,...). Requirement on accuracy is 182 days (half a year) because we do not report the age of the ice within the on-going freezing season. 2022 GCOS ECVs Requirements - 128 - T > 1 year As a minimum, a meaningful sea-ice age product should separate ice into seasonal ice and perennial ice, with a probability of correct classification of 70%. The dominating ice class is reported. Stability d G B T Standards and References Lavergne and Kern, et al. (2022). A New Structure for the Sea Ice Essential Climate Variables of the Global Climate Observing System, BAMS, DOI 10.1175/BAMS-D-21-0227.1. 
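Sea-ice extent and sea-ice area, the indicators derived from sea-ice concentration as described under product 4.11.1 above, follow directly from a gridded SIC field. A minimal sketch, assuming grid-cell areas are known and using the commonly chosen 15% threshold:

```python
import numpy as np

def sea_ice_extent_and_area(sic, cell_area_km2, threshold=0.15):
    """Derive sea-ice extent and sea-ice area from a sea-ice concentration field.

    sic:           array of sea-ice area fractions (0..1), NaN over land
    cell_area_km2: array of grid-cell areas in km^2 (same shape as sic)
    extent = total area of all cells with SIC at or above the threshold
    area   = total area of all cells weighted by their actual SIC
    """
    sic = np.asarray(sic, dtype=float)
    cell_area_km2 = np.asarray(cell_area_km2, dtype=float)
    above = ~np.isnan(sic) & (sic >= threshold)
    extent = np.sum(cell_area_km2[above])
    area = np.nansum(sic * cell_area_km2)
    return extent, area

# Toy 2x2 grid of 625 km^2 cells (25 km resolution); one cell is below the threshold, one is land.
sic = np.array([[0.9, 0.10], [0.5, np.nan]])
cells = np.full((2, 2), 625.0)
print(sea_ice_extent_and_area(sic, cells))  # -> extent 1250.0 km^2, area 937.5 km^2
```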
2022 GCOS ECVs Requirements - 129 - 4.11.5 ECV Product: Sea Ice Temperature Name Sea Ice Surface Temperature (IST) Definition The surface temperature of sea ice or snow on sea ice, either a calibrated radiometric or thermometric in situ measurement. Unit Kelvin (K) Note The IST requirements below are based on several requirement/recommendation documents from relevant communities and institutions, e.g. WMO, GCOS, GMES, Copernicus/CMEMS, ESA CCI, NOAA, and others. Requirements for IST range widely in both in values and metric and the given values are based on these documents and expert judgments from the OSISAF High Latitude team. Uncertainty requirements are valid for automatically cloud screened day and night time IST data compared with surface temperature reference data of high quality, e.g. radiometric in situ observations. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 GCOS, GMES, Copernicus/CMEMS B 5 10 GCOS, GMES, Copernicus/CMEMS T 50 WMO Vertical Resolution G Skin N/A B Skin T Skin Temporal Resolution d G 3 h to capture diurnal cycle, GCOS, Copernicus/CMEMS B 1 GCOS, Copernicus/CMEMS T 7 Can allow full coverage (cloud cover) Timeliness d G 1-2 B 7 T 30 Required Measurement Uncertainty (2-sigma) K G 1.0 Copernicus/CMEMS, GMES, EUMETSAT/OSISAF, Dybkjær et al., 2019 B 3.0 Copernicus/CMEMS, GMES, EUMETSAT/OSISAF, Dybkjær et al., 2019 T 6.0 Copernicus/CMEMS, GMES, EUMETSAT/OSISAF, Dybkjær et al., 2019 Stability K/decade G 0.1 As defined in the GCOS LST ECV requirements B 0.2 T 0.3 As defined in the GCOS LST ECV requirements Standards and References Lavergne and Kern, et al. (2022). A New Structure for the Sea Ice Essential Climate Variables of the Global Climate Observing System, BAMS, DOI 10.1175/BAMS-D-21-0227.1. CLiC (2012) Observational needs for sea ice models - Short note. Discussion note from CLiC Arctic Sea Ice Working Group, 2012. CMEMS (2016) Bertino, L., L.A. Breivik, F. Dinesen, Y. Faugere, G. Garric, B. Hackett, J. A. Johannesen, T. Lavergne, P.-Y. LeTraon, L.T. Pedersen, P. Rampal, S. Sandven & H. Shyberg. Position paper Polar and snow cover applications User Requirements Workshop Brussels, Copernicus Marine Environment Monitoring Service, Mercator Ocean. CMEMS (2017) CMEMS requirements for the evolution of the Copernicus Satellite Component. Copernicus Marine Environment Monitoring Service, Mercator Ocean and CMEMS partners. CMEMS (2020) CMEMS Dashboard Upstream Satellite Data Requirements, V10.0 March 2020 (spreadsheet) Copernicus (2018a) Duchossois, G., P. Strobl, V. Toumazou (Eds.) User Requirements for a Copernicus Polar Mission Phase 1 Report - User Requirements and Priorities. JRC Technical Report, doi:10.2760/22832, 2018. Copernicus. (2018b) Duchossois, G., P. Strobl, V. Toumazou (Eds.) User Requirements for a Copernicus Polar Mission Phase 2 Report - High-level mission requirements. JRC Technical Report, doi:10.2760/44170, 2018. Dybkjær, G., R. Tonboe, M. Winstrup and J. L. Høyer (2019) Review of state-of-the-art methods and algorithms for Ice Surface Temperature retrieval algorithms - Including consolidate and refine output product requirements and software specification, Product requirement and baseline document, version 2.3. EUMETSAT document Reference Number: EUM/OPS-COPER/19/1065840. 2022 GCOS ECVs Requirements - 130 - GCOS (2016) The Global Observing System for Climate: Implementation Needs (World Meteorological Organization, GCOS-200). 
OSI SAF CDOP 3 (2018) Product Requirement Document, Version: 1.4, 2018 2022 GCOS ECVs Requirements - 131 - 4.11.6 ECV Product: Sea Ice Surface Albedo Name Sea Ice Surface Albedo Definition Broadband snow or ice surface albedo Unit 1 Note Albedo is a measure of how much solar radiation incident at a surface of known area is reflected back; it is the ratio between incoming and outgoing surface short-wave radiation. The value range is 0 to 1. The surface albedo of sea ice covers almost the entire range with very thin ice such as dark nilas having an albedo of ~ 0.1 and sea ice with a fresh snow cover having an albedo of ~0.9. The albedo of bare (snow-free) sea ice depends strongly on sea-ice age. Predominantly in the Arctic, during summer, melt water forms complex patterns of melt ponds on top of the sea ice that reduce the albedo considerably - depending on areal fraction and depth of the ponds and on ice age. Thus, not only the surface albedo, but also its partition into surface types (openings in the sea ice cover, melt ponds, bare ice, snow, etc.) is critical to observe. Through its relation to surface melt processes, albedo observations are key to improving the satellite retrieval of other sea-ice variables, such as sea-ice concentration. Albedo is the key parameter describing the amount of solar energy available for ice melt and in-ice and under-ice primary production. Both the fact that the sea ice drifts and the difficulty to obtain adequate in-situ observations for ground truthing and evaluation of sea ice surface albedo climate data records determine that ECV requirements for sea-ice albedo differ from those of the terrestrial albedo. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 Needed for mapping of larger flooded ice areas in the Arctic during summer (e.g. in river estuaries, or fjords) Improved mapping of spring / summer melt progress in the Arctic as a function of ice age. B 5 10 Needed to reliably monitor albedo evolution of larger thin ice areas associated with polynyas. Needed to monitor albedo evolution in narrow passages such as the Canadian Archipelago or around the Antarctic Peninsula Needed to discriminate adequately between the albedo of ice of different age during melt and re-freeze in the Arctic. Needed to reliably detect surface melt / refreeze event-induced changes in snow surface albedo in the Antarctic T 50 Minimum horizontal resolution to derive basin-wide trends in albedo and solar energy input Vertical Resolution G - N/A B - T - Temporal Resolution d G 3 h Required for an optimal quantification of surface albedo (and hence solar energy input) under highly variable cloud / surface illumination (changes surface topography) / surface conditions (fresh snow and pond drainage change surface albedo at ~ hourly scale) B 1 Required to accurately quantify the seasonal cycle and cumulative amount of surface available solar radiation Enables us to take into account the impact of melt-pond surface area changes and snowfall on diurnal variations in albedo and surface available solar radiation T 7 Minimum temporal resolution required to derive basin-scale changes in seasonal surface available solar radiation input, melt onset, and commence of freeze-up as well as to estimate onset of under-ice primary production. 
Timeliness d G 1-2 B 7 T 30 Required Measurement Uncertainty (2-sigma) G 0.01 Required to discriminate between new ice and open water and to detect submerged ice. Needed to accurately observe sub-grid scale changes in ice surface conditions. B 0.05 Required to reliably monitor changes in snow properties (fresh, old, melting) and to be able to distinguish between melting snow and bare ice. Needed to differentiate between melt ponds on ice of different age and to identify melt-pond freeze-up. T 0.1 Minimum measurement uncertainty to discriminate between ice / no ice or cold snow-covered / bare ice, or to identify melt ponds Stability G B T Standards and References Lavergne and Kern, et al. (2022). A New Structure for the Sea Ice Essential Climate Variables of the Global Climate Observing System, BAMS, DOI 10.1175/BAMS-D-21-0227.1. Perovich, D. K., et al., Anatomy of a late spring snowfall on sea ice, Geophys. Res. Lett., 44(6), 2802-2809, 2017. Ardyna, M. and K. R. Arrigo, Phytoplankton dynamics in a changing Arctic Ocean, Nat. Climate Change, 10(10), 892-903, 2020. 4.11.7 ECV Product: Snow Depth on Sea Ice Name Snow Depth on Sea Ice Definition The vertical extent of the snow cover on top of the sea ice. Unit m Note Snow has a heat conductivity that is an order of magnitude smaller than that of sea ice. It is hence very efficient at insulating the sea ice from the atmosphere even at a depth of only a few centimeters, and it reduces the ocean-atmosphere heat flux. Thick snow retards winter-time ice growth and summer-time ice melt onset. Snow therefore has a profound impact on the overall heat and sea-ice mass budget of the polar oceans. Snow has the highest short-wave albedo of the snow-sea-ice system: snow-covered sea ice can reflect about 25% more solar radiation than any kind of bare sea ice. Snowfall during melt onset can delay sea-ice melt for several days to a few weeks due to the surface albedo change imposed. Snow is a critically required parameter for sea-ice thickness retrieval using altimetry. Snow depth on sea ice has been retrieved using multi-frequency satellite microwave radiometer observations for decades. While the retrieval is mature and accurate over undeformed seasonal sea ice during winter conditions, deformation, melt conditions and multiyear ice pose challenges. Solutions are currently being explored using innovative combinations of satellite microwave radiometer observations at additional frequencies, radar and laser altimeter observations, in situ observations from buoys, airborne surveys, and specifically developed snow models informed with meteorological data from numerical modeling. 
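The note's point that snow depth is critically required for altimetric sea-ice thickness retrieval can be illustrated with the standard hydrostatic-equilibrium conversion from (radar) freeboard to ice thickness. The formula and the density values below are common literature assumptions, not requirements stated in this document; the example only shows how a few centimeters of snow-depth error map onto the retrieved thickness:

```python
# Hydrostatic-equilibrium conversion from ice freeboard to ice thickness.
# Densities (kg m-3) are typical literature values, used here for illustration only.
RHO_WATER = 1024.0
RHO_ICE = 917.0
RHO_SNOW = 300.0

def ice_thickness_from_freeboard(ice_freeboard_m, snow_depth_m):
    """h_ice = (rho_w * freeboard + rho_s * h_snow) / (rho_w - rho_i), all in metres.

    Assumes the (radar) freeboard refers to the snow/ice interface, so any error in
    snow depth propagates almost directly into the retrieved ice thickness.
    """
    return (RHO_WATER * ice_freeboard_m + RHO_SNOW * snow_depth_m) / (RHO_WATER - RHO_ICE)

# 0.20 m freeboard with 0.30 m of snow versus 0.35 m of snow:
print(ice_thickness_from_freeboard(0.20, 0.30))  # ~2.76 m
print(ice_thickness_from_freeboard(0.20, 0.35))  # ~2.90 m -> a 5 cm snow error gives ~0.14 m thickness error
```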
Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 B 25 25 Distribution T 50 Minimum horizontal resolution to derive basin-wide trends Minimum spatial resolution to support sea-ice thickness retrieval from altimetry Vertical Resolution G - N/A B - T - Temporal Resolution G daily year-round Needed for highly accurate year-round daily sea-ice thickness retrieval using satellite altimetry Required to define begin and end of spring snow melt on sea ice Needed to improve estimates of sea-ice melt progress or slow down Would enable estimation of the amount of snow-to-ice conversion related to flooding - refreeze events B weekly year-round monthly year-round Needed for year-round sea-ice thickness retrieval using satellite altimetry at weekly time scale Required to enhance evaluation of ocean-atmosphere heat flux estimates during the shoulder seasons and studies about sea-ice melt and freeze onset Required for year-round sea-ice thickness retrieval using satellite altimetry T monthly, wintertime Minimum temporal resolution to support sea-ice thickness retrieval using satellite altimetry Timeliness d G 1-2 B 7 T 30 Required Measurement Uncertainty (2-sigma) m G 0.01 B 0.05 T 0.1 Minimum requirement to ensure a sea-ice thickness retrieval uncertainty < 0.5 m and < 0.8 m using radar and laser altimetry, respectively. Stability m/decade G 2022 GCOS ECVs Requirements - 134 - B T Standards and References Lavergne and Kern, et al. (2022). A New Structure for the Sea Ice Essential Climate Variables of the Global Climate Observing System, BAMS, DOI 10.1175/BAMS-D-21-0227.1. Kwok, R., and G. F. Cunningham, ICESat over Arctic sea ice: Estimation of snow depth and ice thickness, J. Geophys. Res., 113, C08010, 2008, Giles, K. A., et al., Combined airborne laser and radar altimeter measurements over the Fram Strait in May 2002, Rem. Sens. Environ., 111(2-3), 182-194, 2007, 2022 GCOS ECVs Requirements - 135 - 5. BIOGEOCHEMISTRY 5.1 ECV: Oxygen 5.1.1 ECV Product: Dissolved Oxygen Concentration Name Dissolved Oxygen Concentration Definition Concentration of dissolved oxygen (O₂) in the water column. Unit μmol kg⁻1 Note This Essential Ocean Variable (EOV)/ECV is a measurement of sub-surface dissolved oxygen (O₂) concentration in the ocean, expressed in units of μmol kg⁻¹. Data on dissolved oxygen is obtained by both discrete (chemical analysis) and continuous (sensor measurements) sampling performed on a number of observing platforms (ship-based, fixed-point, autonomous). Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 300 1-100 For global coverage, spatial resolution refers to distance between transects, not between sampling stations. Coastal B T 2000 300 Coastal Vertical Resolution G - B - T - Temporal Resolution G monthly B T decadal Timeliness month G 6 B T 12 Required Measurement Uncertainty (2-sigma) μmol kg⁻¹ G 0.5 B T 2 Stability G B T Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). - 163 - 2022 GCOS ECVs Requirements 5.2 ECV: Nutrients 5.2.1 ECV Product: Silicate Name Silicate Definition Concentration of Si(OH)₄ in the water column. Unit μmol kg⁻¹ Note The availability of nutrients in seawater is estimated from measurements of concentration of inorganic macronutrients: nitrate (NO₃), phosphate (PO₄), silicic acid (Si(OH)₄), ammonium (NH₄), and nitrite (NO₂), expressed in umol kg⁻¹ of seawater. 
Nutrients ECV products are primarily obtained from discrete sample measurements using analytical chemical methods (colorimetric reactions) but nitrate concentration is also measured by sensors using the ultraviolet absorption method. Linear combination of nitrate and phosphate, defined as N, and the difference between silicic acid and nitrate concentrations, Si, provide estimates of nutrient supply/removal relative to global Redfield stoichiometry and are widely used for mapping and detecting trends in global nutrient cycling. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1000 0.1-100 Coastal B T 2000 100 Coastal Vertical Resolution G - N/A B - T - Temporal Resolution month G 3 1 Coastal B T decadal Timeliness month G 6 B T 12 Required Measurement Uncertainty (2-sigma) % G 1 B T 3 Stability G B T Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). - 164 - 2022 GCOS ECVs Requirements 5.2.2 ECV Product: Phosphate Name Phosphate Definition Concentration of PO₄ in the water column. Unit μmol kg⁻¹ Note The availability of nutrients in seawater is estimated from measurements of concentration of inorganic macronutrients: nitrate (NO₃), phosphate (PO₄), silicic acid (Si(OH)₄), ammonium (NH₄), and nitrite (NO₂), expressed in umol kg⁻¹ of seawater. Nutrients ECV products are primarily obtained from discrete sample measurements using analytical chemical methods (colorimetric reactions) but nitrate concentration is also measured by sensors using the ultraviolet absorption method. Linear combination of nitrate and phosphate, defined as N, and the difference between silicic acid and nitrate concentrations, Si, provide estimates of nutrient supply/removal relative to global Redfield stoichiometry and are widely used for mapping and detecting trends in global nutrient cycling. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1000 0.1-100 Coastal B T 2000 100 Coastal Vertical Resolution G - N/A B - T - Temporal Resolution month G 3 1 Coastal B T decadal Timeliness month G 6 B T 12 Required Measurement Uncertainty (2-sigma) % G 1 B T 3 Stability G B T Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). - 165 - 2022 GCOS ECVs Requirements 5.2.3 ECV Product: Nitrate Name Nitrate Definition Concentration of NO₃ in the water column. Unit μmol kg⁻¹ Note The availability of nutrients in seawater is estimated from measurements of concentration of inorganic macronutrients: nitrate (NO₃), phosphate (PO₄), silicic acid (Si(OH)₄), ammonium (NH₄), and nitrite (NO₂), expressed in umol kg⁻¹ of seawater. Nutrients ECV products are primarily obtained from discrete sample measurements using analytical chemical methods (colorimetric reactions) but nitrate concentration is also measured by sensors using the ultraviolet absorption method. Linear combination of nitrate and phosphate, defined as N, and the difference between silicic acid and nitrate concentrations, Si, provide estimates of nutrient supply/removal relative to global Redfield stoichiometry and are widely used for mapping and detecting trends in global nutrient cycling. 
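The "linear combination of nitrate and phosphate" and the silicic-acid-minus-nitrate difference referred to in the note are usually written as the quasi-conservative tracers N* and Si*. The document does not specify the coefficients; the 16:1 Redfield N:P ratio used below is a standard literature choice and is an assumption of this sketch:

```python
def n_star(no3, po4, rn_p=16.0):
    """N* = [NO3] - rn_p * [PO4], in umol kg-1; rn_p is the assumed Redfield N:P ratio.

    (Published definitions often add a small constant offset; it is omitted here.)
    """
    return no3 - rn_p * po4

def si_star(sioh4, no3):
    """Si* = [Si(OH)4] - [NO3], in umol kg-1."""
    return sioh4 - no3

# Illustrative deep-water-like concentrations in umol kg-1 (not measurements):
print(n_star(no3=30.0, po4=2.0))      # -2.0 -> apparent nitrogen deficit relative to Redfield
print(si_star(sioh4=60.0, no3=30.0))  # 30.0 -> silicic acid in excess of nitrate
```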
Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1000 0.1-100 Coastal B T 2000 100 Coastal Vertical Resolution G - N/A B - T - Temporal Resolution month G 3 1 Coastal B T decadal Timeliness month G 6 B T 12 Required Measurement Uncertainty (2-sigma) % G 1 B T 3 Stability G B T Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). - 166 - 2022 GCOS ECVs Requirements 5.3 ECV: Ocean Inorganic Carbon 5.3.1 ECV Product: Total Alkalinity (TA) Name Total Alkalinity (TA) Definition Total concentration of alkaline substances. Unit μmol kg⁻¹ Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1000 100 Coastal B T 2000 1000 Coastal Vertical Resolution G - N/A B - T - Temporal Resolution month G 3 B T decadal Timeliness month G 6 B T 12 Required Measurement Uncertainty (2-sigma) μmol kg⁻¹ G 2 B T 2 Stability G B T Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). Additional requirements based on the Global Ocean Data Assimilation Project (GLODAP; www.glodap.info); for pH based on the Global Ocean Acidification Observing Network (GOA-ON) Implementation Strategy ( for pCO2 from the Surface Ocean CO2 Atlas (SOCAT; www.socat.info). - 167 - 2022 GCOS ECVs Requirements 5.3.2 ECV Product: Dissolved Inorganic Carbon (DIC) Name Dissolved Inorganic Carbon (DIC) Definition Sum of dissolved inorganic carbon species (CO2, HCO⁻, CO3²⁻) in water. Unit μmol kg⁻¹ Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1000 100 Coastal B T 2000 1000 Coastal Vertical Resolution G - N/A B - T - Temporal Resolution month G 3 B T decadal Timeliness month G 6 B T 12 Required Measurement Uncertainty (2-sigma) μmol kg⁻¹ G 2 B T 2 Stability G B T Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the Essential Ocean Variables (EOV) Specification Sheet for details and references (www.goosocean.org/eov). Additional requirements based on the Global Ocean Data Assimilation Project (GLODAP; www.glodap.info); for pH based on the Global Ocean Acidification Observing Network (GOA-ON) Implementation Strategy ( for pCO2 from the Surface Ocean CO2 Atlas (SOCAT; www.socat.info). - 168 - 2022 GCOS ECVs Requirements 5.3.3 ECV Product: pCO₂ Name pCO₂ Definition Surface ocean partial pressure of CO₂. Unit μatm Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 100 B T 1000 <1000 Coastal Vertical Resolution G - N/A B - T - Temporal Resolution G monthly B T decadal Timeliness month G 6 B T 12 Required Measurement Uncertainty (2-sigma) μatm G 2 B T 2 Stability G B T Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). Additional requirements based on the Global Ocean Data Assimilation Project (GLODAP; www.glodap.info); for pH based on the Global Ocean Acidification Observing Network (GOA-ON) Implementation Strategy ( for p CO2 from the Surface Ocean CO2 Atlas (SOCAT; www.socat.info). 
- 169 - 2022 GCOS ECVs Requirements 5.4 ECV: Transient tracers 5.4.1 ECV Product: 14C Name ¹⁴C Definition Ratio of sample to reference value (Δ14) in the water column. Unit ‰ Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 2000 200 Regional Deep water formation areas B T 2000 Vertical Resolution G - N/A B - T - Temporal Resolution y G 10 2 Regional Deep water formation areas B T 10 Timeliness y G 1 B T 2 Required Measurement Uncertainty (2-sigma) ‰ G 0.4 B T Stability G decadal 1y Regional Deep water formation areas B T decadal Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). - 170 - 2022 GCOS ECVs Requirements 5.4.2 ECV Product: SF₆ Name SF₆ Definition Concentration of SF6 gas in the water column. Unit fmol kg⁻¹ Note Requirements Item needed Unit Met ric Value Notes Horizontal Resolution km G 2000 200 Regional Deep water formation areas B T 2000 Vertical Resolution G - N/A B - T - Temporal Resolution y G 10 2 Regional Deep water formation areas B T 10 Timeliness y G 1 B T 2 Required Measurement Uncertainty (2-sigma) ‰ G 0.4 B T Stability G decadal 1y Regional Deep water formation areas B T decadal Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). - 171 - 2022 GCOS ECVs Requirements 5.4.3 ECV Product: CFC-11 Name CFC-11 Definition Concentration of CFC-11 gas in the water column. Unit pmol kg⁻¹ Note Requirements Item needed Unit Met ric Value Notes Horizontal Resolution km G 2000 200 Regional Deep water formation areas B T 2000 Vertical Resolution G - N/A B - T - Temporal Resolution y G 10 2 Regional Deep water formation areas B T 10 Timeliness month G 6 B T 6 Required Measurement Uncertainty (2-sigma) ‰ G 1 B T Stability G decadal 1y Regional Deep water formation areas B T decadal Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). - 172 - 2022 GCOS ECVs Requirements 5.4.4 ECV Product: CFC-12 Name CFC-12 Definition Concentration of CFC-12 gas in the water column. Unit pmol kg⁻¹ Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 2000 200 Regional Deep water formation areas B T 2000 Vertical Resolution G - N/A B - T - Temporal Resolution y G 10 2 Regional Deep water formation areas B T 10 Timeliness month G 6 B T 6 Required Measurement Uncertainty (2-sigma) ‰ G 1 B T Stability G decadal 1y Regional Deep water formation areas B T decadal Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for details and references (www.goosocean.org/eov). - 173 - 2022 GCOS ECVs Requirements 5.5 ECV: Ocean Nitrous Oxide N2O 5.5.1 ECV Product: Interior Ocean Nitrous Oxide N2O Name Interior Ocean Nitrous Oxide N2O Definition Concentration of N₂O gas in the water column. Unit nmol kg⁻¹ Note Nitrous oxide (N2O) is an atmospheric trace gas which is measured in the water column of all major ocean basins at concentrations spanning three orders of magnitude. The ocean is a major source (around 25%) of N2O gas to the atmosphere. 
Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G <2000 <500 Coastal B T 2000 Vertical Resolution G - N/A B - T - Temporal Resolution month G 3 B T 3 weekly to monthly Coastal Timeliness y G 1 B T 2 Required Measurement Uncertainty (2-sigma) % G <1 B T 5 Stability G B T Standards and References Values based on the characteristic scales of the phenomena which are observed using N₂O measurements. For more details and references see the Nitrous Oxide EOV Specification Sheet (www.goosocean.org/eov), publications from SCOR WG 143 ( and the GOOS Report No. 225 ( - 174 - 2022 GCOS ECVs Requirements 5.5.2 ECV Product: N2O Air-sea Flux Name N2O Air-sea Flux Definition Amount of N₂O produced per area per year. Unit μmol m⁻² y⁻¹ Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G <2000 <500 Coastal B T 2000 Vertical Resolution G - N/A B - T - Temporal Resolution month G 3 weekly to monthly Coastal B T Decadal Timeliness y G 1 B T 2 Required Measurement Uncertainty (2-sigma) G <1 B T 5 Stability % G B T Standards and References Values based on the characteristic scales of the phenomena which are observed using N₂O measurements. For more details and references see the Nitrous Oxide EOV Specification Sheet (www.goosocean.org/eov), publications from SCOR WG 143 ( and the GOOS Report No. 225 ( - 175 - 2022 GCOS ECVs Requirements 5.6 ECV: Ocean Colour 5.6.1 ECV Product: Chlorophyll-a Name Chlorophyll-a Definition Concentration of chlorophyll-a pigment in the surface water. Unit µg l-1 Note Ocean colour is the radiance emanating from the ocean normalized by the irradiance illuminating the ocean. Products derived from ocean colour remote sensing (OCRS) contain information on the ocean albedo and information on the constituents of the seawater, in particular, phytoplankton pigments such as chlorophyll-a. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 4 B T 4 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1 B T 7 Timeliness G B T Required Measurement Uncertainty (2-sigma) % G 30 B T 30 Stability %/decade G 3 B T 3 Standards and References For more details and references see the Ocean Colour EOV Specification Sheet (www.goosocean.org/eov). - 176 - 2022 GCOS ECVs Requirements 5.6.2 ECV Product: Water Leaving Radiance Name Water Leaving Radiance Definition Amount of light emanating from within the ocean. Unit Note Ocean colour is the radiance emanating from the ocean normalized by the irradiance illuminating the ocean. Products derived from ocean colour remote sensing (OCRS) contain information on the ocean albedo and information on the constituents of the seawater, in particular, phytoplankton pigments such as chlorophyll-a. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 4 B T 4 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1 B T 1 Timeliness G B T Required Measurement Uncertainty (2-sigma) % G 5 Uncertainty specified for blue and green wavelengths. B T 5 Uncertainty specified for blue and green wavelengths. Stability %/decade G 0.5 B T 0.5 Standards and References For more details and references see the Ocean Colour EOV Specification Sheet (www.goosocean.org/eov). - 177 - 2022 GCOS ECVs Requirements 6. 
BIOSPHERE 6.1 ECV: Plankton 6.1.1 ECV Product: Zooplankton Diversity Name Zooplankton Diversity Definition Number of species, functional traits, molecular biology groups (Operational Taxonomic Unit/OTU, other) per unit seawater volume, unit sea surface area, or unit benthos area. Unit [Number of species per unit volume or area], [Number of traits per unit volume or area], [Number of molecular biology groups per unit volume or area]. Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 100 0.1 offshore nearshore B 1 0.1 offshore nearshore T 2500 0.1 offshore nearshore Vertical Resolution m G 10 nominal Depends on method of collection: discrete samples, vertical imaging profiles, net tows (oblique vs open/closing), or continuous tow recorder/imaging B 10 nominal T surface Temporal Resolution month G 1 Phenology of zooplankton is critical for food web dynamics and for recruitment success of whales, birds, turtles, fish, and invertebrates B 3 T 12 Timeliness y G 1 B T 2 Required Measurement Uncertainty (2-sigma) %, count, concentration, weight (biomass) G Depending on observation: taxonomic unit, trait, molecular group, biomass (wet/dry weight, carbon, nitrogen, protein content) B T 5 Stability G B T Standards and References See the Zooplankton EOV Specification Sheet for more details and references (www.goosocean.org/eov). 6.1.2 ECV Product: Zooplankton Biomass Name Zooplankton Biomass Definition Weight of zooplankton by volume. Unit mg l-1 Note It can be dry weight or wet weight. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 100 B T 2500 Vertical Resolution m G 10 B T surface Temporal Resolution month G 1 B T 12 Timeliness y G 1 B T 2 Required Measurement Uncertainty (2-sigma) % G B T 5 Stability G B T Standards and References See the Zooplankton EOV Specification Sheet for more details and references (www.goosocean.org/eov). 6.1.3 ECV Product: Phytoplankton Diversity Name Phytoplankton Diversity Definition Number of species per unit sample, number and concentration of pigment types per unit sample. Unit Per unit volume or unit surface area Note Phytoplankton are the foundation of near-surface food webs and the non-chemosynthetic support for deep ocean food webs through vertical fluxes of particulate organic matter. In addition to their biomass and diversity, measures of primary production are also important. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 100 0.1 offshore nearshore B 1 0.1 offshore nearshore T 2000 1 offshore nearshore Vertical Resolution G 10 nominal Depends on method of collection: discrete samples, vertical imaging profiles, net tows (oblique vs open/closing), or continuous tow recorder/imaging B 10 nominal T surface Temporal Resolution month G weekly-monthly Phenology of phytoplankton is critical for food web dynamics and for recruitment success of whales, birds, turtles, fish, and invertebrates B 3 T 1 Timeliness G B T Required Measurement Uncertainty (2-sigma) % G Depending on observation: taxonomic unit, trait, molecular group, biomass (wet/dry weight, carbon, nitrogen, protein content) B T 5 Stability G B T Standards and References Field methods foundational reference for operational oceanography: Strickland, J.D., & Parsons, T.R. (1968). A practical handbook of seawater analysis. Fisheries Research Board of Canada. Bulletin 167.
(plus numerous and more recent publications for specific methods) Remote sensing of phytoplankton links to the Ocean Colour EOV/ECV See the EOV Specification Sheet for more details and references (www.goosocean.org/eov). - 180 - 2022 GCOS ECVs Requirements 6.1.4 ECV Product: Phytoplankton Biomass Name Phytoplankton Biomass Definition Weight of phytoplankton by volume. Unit mg m-3 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 100 B T 2000 Vertical Resolution G - N/A B - T - Temporal Resolution y G Weekly-seasonal B T 10 Timeliness G B T Required Measurement Uncertainty (2-sigma) % G B T 5 Stability G B T Standards and References See the EOV Specification Sheet for more details and references (www.goosocean.org/eov). - 181 - 2022 GCOS ECVs Requirements 6.2 ECV: Marine Habitat Properties 6.2.1 ECV Product: Mangrove Cover and Composition Name Mangrove Cover and Composition Definition Extent of mangroves and species types in coastal environments (percent or ha and number of species per area). Unit Extent measured in quadrats (e.g. 10x10m), or by pixels (e.g. 30x30m) Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution m2 Pixel/point in space G 30x30 B T 50x50 Vertical Resolution G - B - T - Temporal Resolution month Point in time G 12 B T 12 Timeliness month Point in time G 6 B T 12 Required Measurement Uncertainty (2-sigma) Areal extent Percent G 10 B T 20 Stability Percent cover/decade G 10 B T 50 Standards and References Requirements and approaches vary for field based and satellite mapping approaches. For in situ data collection for mangrove composition see and See the EOV Specification Sheet for more details and references (www.goosocean.org/eov). - 182 - 2022 GCOS ECVs Requirements 6.2.2 ECV Product: Seagrass Cover (areal extent) Name Seagrass Cover (areal extent) Definition Areal extent of suitable physical habitat (shallow sediment shelf with adequate water quality) supporting seagrass. Unit km2 Note Seagrass areal extent is typically estimated by remote sensing, including satellite, photography from aircraft, and for smaller areas by Unoccupied Aerial vehicle (UAV), i.e., drone. Various methods of image post-processing have been used to convert imagery to seagrass habitat extent. Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G 30 Muller-Karger et al., 2018 B T 250 Muller-Karger et al., 2018 Vertical Resolution G - N/A B - T - Temporal Resolution y G 1 week Muller-Karger et al., 2018 B T 1 Timeliness G B T Required Measurement Uncertainty (2-sigma) % G B T 10 Stability G B T Standards and References Requirements based on characteristic scales and magnitude of signal of phenomena to observe. See the EOV Specification Sheet for more details and references (www.goosocean.org/eov). Muller-Karger et al., 2018. - 183 - 2022 GCOS ECVs Requirements 6.2.3 ECV Product: Macroalgal Canopy Cover and Composition Name Macroalgal Canopy Cover and Composition Definition Abundance of layered macroalgal stands in marine coastal environments. Unit percent or number of individuals/area Note Percent cover measured within quadrats (e.g., 0.5 x 0.5 m) or transects (e.g., 50 x 5 m). For large macroalgae such as kelps, abundance can be measured as number of individuals per area. 
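The Note above describes the two ways macroalgal canopy is quantified: percent cover within quadrats or transects and, for large macroalgae such as kelps, counts of individuals per area. A minimal sketch of those two conversions, with hypothetical function names and illustrative numbers (not a GOOS/GCOS protocol):

```python
def percent_cover(points_hit: int, points_sampled: int) -> float:
    """Percent canopy cover from point-intercept counts within a quadrat."""
    return 100.0 * points_hit / points_sampled

def density_per_m2(individuals_counted: int, quadrat_side_m: float) -> float:
    """Individuals per square metre for a square quadrat (e.g. kelp stipes)."""
    return individuals_counted / (quadrat_side_m ** 2)

# Example: 18 of 25 grid points intersect canopy in a 0.5 x 0.5 m quadrat,
# and 3 kelp individuals are counted in the same quadrat.
print(percent_cover(18, 25))   # 72.0 percent cover
print(density_per_m2(3, 0.5))  # 12.0 individuals per m2
```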
Requirements Item needed Unit Metric Value Notes Horizontal Resolution m2 point in space G 0.25 B 1 T 250 Vertical Resolution m linear extent G 1 B 5 T 10 Temporal Resolution month point in time G 1 B 3 T 12 Timeliness month point in time G 4 B 6 T 12 Required Measurement Uncertainty (2-sigma) Percent cover G 10 B 20 T 30 Stability Percent cover G 20 B 30 T 50 Standards and References See the EOV Specification Sheet for more details and references (www.goosocean.org/eov). - 184 - 2022 GCOS ECVs Requirements 6.2.4 ECV Product: Hard Coral Cover and Composition Name Hard Coral Cover and Composition Definition Percent cover of hard coral. For composition, this is broken down by taxonomic or functional groups. Unit % Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 10-100 For resolution of climate impacts, down to 10 km would be ideal; but will require development of remote sensing tools that can distinguish coral cover B T 1000 Currently global coral data is analyzed at country levels (100s to 1000s of km) Vertical Resolution m G 10 for resolution of climate impacts, stratification in 10 m would be ideal B T ≈ single layer, global coral data is summarized in a single bin. Temporal Resolution y G 1 annual data ideal B T 5-10 data gaps results in 5-10 y gaps/bins for global analyses Timeliness y G 0.25 Establishment of open access integrated regional datasets would allow sub-annual access to data B 2 T 5 Current practice requires high-effort compilations Required Measurement Uncertainty (2-sigma) % G B T 5 Stability G B T Standards and References English, S., Wilkinson, C., and Baker, V. (1997). Survey Manual for Tropical Marine Resources. Townsville, Australia. Australian Institute of Marine Science. GCRMN (2018a). GCRMN Implementation and Governance Plan. International Coral Reef Initiative (ICRI). GCRMN (2018b). GCRMN Technical Note. International Coral Reef Initiative (ICRI). Obura DO, et al., (2019) Coral Reef Monitoring, Reef Assessment Technologies, and Ecosystem-Based Management. Front. Mar. Sci. 6:580. doi: 10.3389/fmars.2019.00580 See the EOV Specification Sheet for more details and references (www.goosocean.org/eov). - 185 - 2022 GCOS ECVs Requirements Terrestrial ECVs - 186 - 2022 GCOS ECVs Requirements 7. HYDROLOGY 7.1 ECV: Groundwater 7.1.1 ECV Product: Groundwater Storage Change Name Groundwater Storage Change Definition The volumetric loss or gain of groundwater between two times period. Unit km3 y-1 or mm y-1 Note Ground water storage change is monitored at large spatial scales by satellite gravimetry. To isolate groundwater storage change from the total mass variations observed by satellite gravimetry, all other mass changes in the Earth system need to be subtracted by complementary observations or models. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Length/width of area that can be resolved G ≤ 100 depends on size of aquifer, hydrogeological characteristics, and type of application. 100 km is defined as a goal/target value by ref#1 B T 200-300 horizontal resolution of GRACE water storage data, depending on product, signal strength, geographical location and time scale (ref #1, #2, #3) Vertical Resolution G - N/A B - T - Temporal Resolution month time G 0.5 Requirement for the analysis of the groundwater response to, e.g., recharge events or changes in (human) withdrawals. B 1 T 3 Seasonal, for assessing, e.g., the climatology of groundwater storage variations and long-term variations / trends. 
Timeliness month time G <1 Near-real time. Requirement for risk management (droughts), short-term forecasts B 1 Requirement for, e.g., seasonal forecasts T 12 Annually. Minimum requirement to assess long-term storage variations Required Measuremen t Uncertainty (2-sigma) mm y-1 Change in water storage in water equivalents (volume per area) between two time periods G 1 Goal value to allow for a much larger number of aquifers or river basins of smaller size to be monitored than for threshold value (ref #1), or for detecting more subtle rates of groundwater storage change. Depending on the time scale of application (e.g., for the assessment of monthly anomalies or long-term trends), the required measurement uncertainties may vary. It should be noted that the measurement uncertainty based on satellite gravimetry varies largely and in a non-linear way with spatial resolution, i.e., it is given as 0.05, 1, 5, 50 mm/year for 400, 200, 150, 100 km spatial resolution (ref #1). Additional uncertainty is added by isolating groundwater storage from total mass changes observed by satellite gravimetry. B T 10 Expert judgement, based on long-term groundwater trends as observed with GRACE for large aquifers (≥ 50000 km²) (ref #2, #4), given that these observations already provided valuable information on the status of large aquifers. Depending on the time scale of application (e.g., for the assessment of monthly anomalies or long- term trends), the required measurement uncertainties may vary. Stability mm y-1 G 1 Based on subtle expected long-term groundwater trends in large aquifers B - 187 - 2022 GCOS ECVs Requirements T 10 Based on expected long-term groundwater trends as observed with GRACE for large aquifers (≥ 50000 km²) (ref #2, #4) Standards and References #1 Pail, R., Bingham, R., Braitenberg, C., Dobslaw, H., Eicker, A., Güntner, A., Horwath, M., Ivins, E., Longuevergne, L., Panet, I., Wouters, B., and the IUGG Expert Panel (2015): Science and User Needs for Observing Global Mass Transport to Understand Global Change and to Benefit Society. Surveys in Geophysics, 36, 743-772, 10.1007/s10712-015-9348-9. #2 Frappart, F., and Ramillien, G. (2018): Monitoring Groundwater Storage Changes Using the Gravity Recovery and Climate Experiment (GRACE) Satellite Mission: A Review. Remote Sensing, 10, 10.3390/rs10060829. #3 Rodell, M., Famiglietti, J. S., Wiese, D. N., Reager, J. T., Beaudoing, H. K., Landerer, F. W., and Lo, M. H. (2018): Emerging trends in global freshwater availability, Nature, 557, 650-+, 10.1038/s41586-018-0123-1. #4 Chen, J. L., Famiglietti, J. S., Scanlon, B. R., and Rodell, M. (2016): Groundwater Storage Changes: Present Status from GRACE Observations. Surveys in Geophysics, 37, 397-417, 10.1007/s10712-015-9332-4. - 188 - 2022 GCOS ECVs Requirements 7.1.2 ECV Product: Groundwater Level Name Groundwater Level Definition The level (depth or elevation) of the water table, the upper surface of the saturated portion of the soil or bedrock. Unit m Note Groundwater levels are measured in monitoring wells. The measurements are expressed in m (below ground surface or above sea level, depending on the reference system). Requirements Item needed Unit Metric Value Notes Horizontal Resolution number of wells per 100 km² spatial density of wells G - Depends on hydrogeology. Expert judgment. B - Depends on hydrogeology. Expert judgment. T 1 Recommended by the U.S. Geological Survey (USGS). 
Vertical Resolution G - N/A B - T - Temporal Resolution Month time G 0.5 Expert judgment B 1 Expert judgment T 3 Seasonal (wet/dry). Expert judgment Timeliness y time G 2-3 (days) Expert judgment. When resources are available, a real- time monitoring network with telemetry can be set up, allowing the public to get data immediately. When quality checks are performed, international experience shows that data can be released in 2 or 3 days. B 0.5 Expert judgment. International experience shows that when missions have to be carried out to measure groundwater levels, half a year is an adequate time span to go over all locations, measure the levels, come back to the office, perform data quality tests and upload the final data in the online database to make it available to the public through official channels. T 1 Timeliness is directly related to the use of technology to get the data (telemetry vs going to the field to collect the data). Required Measurement Uncertainty (2-sigma) mm G 1 Depending on the size and gradient of the aquifer, higher uncertainties may have a significant impact on the estimation of the water table. Also, there are other parameters that could have a higher impact on the uncertainty of the recording, as ill-defined vertical datums, pumping wells disrupting groundwater flow patterns, inadequate location of the well, inadequate length of screen setting, etc. B T 30 Stability mm y-1 G 1 A stable trend can be defined as an average monthly change in groundwater levels that is less than a certain value (e.g. 10 cm), for a series of consecutive years (e.g. 5, 10 or 20 years). A specific number and density of point data are needed depending on the period to be considered. For 5 years trend, 10 or more data points are required, and at least one reading per year for 4 out of the 5 years. For 10 years trend, 20 or more data points are required, and at least one reading from each consecutive two-year period. For 20 years trend, 40 or more data points are required, and at least one reading from each consecutive four-year period. This method is the one used by the Bureau of Meteorology of Australia, which is one of the several methods used around the world to estimate a stable trend in groundwater levels. B T 10 It is important to notice that each country might have its own threshold value depending on how marked seasonal fluctuations are (depending on precipitation regimen and hydrogeology, among others). The required measurement stability depends largely on the magnitude of the expected groundwater level trend. - 189 - 2022 GCOS ECVs Requirements Standards and References 2022 GCOS ECVs Requirements 190 7.2 ECV: Lakes 7.2.1 ECV Product: Lake Water Level (LWL) Name Lake Water Level (LWL) Definition Lake Water Level (LWL). Elevation of the free surface of a lake relative to a specified vertical datum. 
Unit cm Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G - In situ observation by a point measurement on gauge B - T 100 Vertical Resolution G - N/A B - T - Temporal Resolution d G 1 B 30 T 365 Annual summary in the form of yearbook Timeliness d G 1 In some cases it can be useful to have near-real-time lake level changes (e.g. during extreme events) B 30 T 365 For yearbooks Required Measurement Uncertainty (2-sigma) cm G 5 B T 10 Allows this variable to be used in global and regional climate models Stability cm /decade G 1 B T 10 Allows this variable to be used in global and regional climate models Standards and References Technical Regulations, volume III, Hydrology, 2006 edition, WMO-No.49 Guide to Hydrological Practices, sixth edition, 2008, WMO-No.168 7.2.2 ECV Product: Lake Water Extent (LWE) Name Lake Water Extent (LWE) Definition Areal extent of the surface of a lake. Unit km2 Note LWE is only measurable using satellite imagery. For shallow lakes the LWE variable is more relevant than the Lake Water Level to detect a climate change signal (Mason et al., 1994). Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G 10 Using Sentinel-2 missions. Allows small extent variations to be determined. B 30 Using Landsat (5, 7, 8) missions. Still relevant for shallow lakes with potentially large extent variations. T 1000 Useful to partition surface energy fluxes. Vertical Resolution G - N/A B - T - Temporal Resolution d G 5 Reasonable for climate change studies. Consistent with the possibilities offered by satellite technologies (the Sentinel-2 constellation can provide images every 5 days in the best case). Allows LWE changes linked to extreme events to be detected. B T 30 For the long-term evolution of lake extent changes, a monthly basis is still acceptable and usable. Useful to partition surface energy fluxes. Timeliness d G 5 To be consistent with the temporal resolution and the possibilities offered by satellite technologies (the Sentinel-2 constellation can provide images every 5 days in the best case). B T 365 Climate scale Required Measurement Uncertainty (2-sigma) % G 5 For LWE, the uncertainty relative to the total surface is the meaningful measure. B T Stability % /decade G 5 B T Standards and References Algorithm Theoretical Basis Document (ATBD) of LWE (Lake Water Extent) calculation under ESA’s CCI (Climate Change Initiative) program. Mason I.M., Guzkowska M.A.J., Rapley C.G., and Street-Perrott F.A. (1994). The response of lake levels and areas to climatic change, Climatic Change 27, 161-197. 7.2.3 ECV Product: Lake Surface Water Temperature (LSWT) Name Lake Surface Water Temperature (LSWT) Definition Temperature of the lake surface. Unit °C Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 0.1 B 1 T 2 Using satellite techniques Vertical Resolution G - N/A B - T - Temporal Resolution h G 3 To capture diurnal cycles B 24 Daily T 240 Currently achievable with satellite observations. Annual summaries in the form of yearbooks can also provide useful long time series. Timeliness d G 1 B 30 T 365 For yearbooks Required Measurement Uncertainty (2-sigma) °C G 0.1 B 0.3 T 0.6 Stability °C / decade G 0.1 B T 0.25 Standards and References Technical Regulations, volume III, Hydrology, 2006 edition, WMO-No.49. 7.2.4 ECV Product: Lake Ice Cover (LIC) Name Lake Ice Cover (LIC) Definition Area of lake covered by ice.
Unit km2 Note Based on lake-wide satellite observations. In situ observations of ice cover can be temporally and spatially consistent, and therefore be useful for climate monitoring, but capture variations and trends in ice cover that are spatially limited (i.e. not lake-wide but rather representative of some limited area observable from lake shore). Lake-wide ice phenology can be derived from LIC (freeze onset to complete freeze over (CFO) dates during the freeze-up period; melt onset to water clear of ice (WCI) dates during the break-up period; and ice cover duration derived from number of days between CFO and WCI dates over an ice year) (Duguay et al., 2015). For lakes that do not form a complete ice cover every year or in some years (e.g. Laurentian Great Lakes), maximum ice cover extent (timestamped with date) is also a useful climate indicator that can be derived; similarly minimum ice extent can be derived for High Arctic lakes that do not completely lose their ice cover in summer. Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G 50 Smaller water bodies as well as due to increased availability of synthetic aperture radar (SAR) and optical data at resolutions ≤ 50 m (e.g. Wang et al., 2018) B 100 Small water bodies (lakes, ponds) can be observed T 1000 Medium to large sized water bodies as demonstrated through ESA Lakes_cci Vertical Resolution G - N/A B - T - Temporal Resolution d G < 1 Detection of interannual variability and decadal shifts in ice cover and for improving ice, weather forecasting and climate models. B 1 Allows daily observations under variable cloud cover from optical satellite data T 3-7 Useful for contrasting extreme ice years, numerical weather forecasting, and assessing lake models used as parameterization schemes in climate models. Timeliness d G 1 In support of ice forecasting systems (e.g. NOAA’s Great Lakes Coastal Forecasting System, GLCFS). B T 365 To support annual climate reporting Required Measurement Uncertainty (2-sigma) % G 1 B T 10 Stability % G 0.1 B T 1 Standards and References ATBD and URD of ESA Lakes_cci Duguay, C.R., M. Bernier, Y. Gauthier, and A. Kouraev, 2015. Remote sensing of lake and river ice. In Remote Sensing of the Cryosphere, Edited by M. Tedesco. Wiley-Blackwell (Oxford, UK), pp. 273-306. Wang, J., C.R. Duguay, and D.A. Clausi, V. Pinard, and S.E.L. Howell, 2018. Semi-automated classification of lake ice cover using dual polarization RADARSAT-2 imagery. Remote Sensing, 10(11), 1727; 2022 GCOS ECVs Requirements 194 7.2.5 ECV Product: Lake Ice Thickness (LIT) Name Lake Ice Thickness (LIT) Definition Thickness of ice on a lake. Unit cm Note LIT measurements are largely based on in situ observational networks. Satellite-based retrieval algorithms are under development (research stage), not operational yet. On-ice snow depth measurements are also useful for both climate monitoring as well as for assessing and improving lake models. 
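The LIC Note above describes how lake-wide ice phenology is derived from the ice cover record: the complete freeze over (CFO) date, the water clear of ice (WCI) date, and the ice cover duration between them. A minimal sketch of that derivation from a daily lake-wide ice fraction series; the thresholds, function name and toy series are illustrative assumptions, not the ESA Lakes_cci algorithm:

```python
from datetime import date, timedelta

def ice_phenology(start: date, daily_ice_fraction: list[float]):
    """Return (CFO date, WCI date, ice cover duration in days), or None if the
    series does not contain a complete freeze-up and break-up cycle."""
    cfo = wci = None
    for i, frac in enumerate(daily_ice_fraction):
        day = start + timedelta(days=i)
        if cfo is None and frac >= 1.0:        # first fully ice-covered day
            cfo = day
        elif cfo is not None and frac <= 0.0:  # first ice-free day after CFO
            wci = day
            break
    if cfo is None or wci is None:
        return None
    return cfo, wci, (wci - cfo).days

# Toy ice year: open water, freeze-up, full cover, break-up, open water.
series = [0.0, 0.2, 0.6, 1.0, 1.0, 1.0, 0.7, 0.3, 0.0]
print(ice_phenology(date(2021, 12, 1), series))
```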
Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G 50 From synthetic aperture radar (SAR) B 1000 T 10000 From radar altimetry and passive microwave data (Kang et al., 2014) Vertical Resolution G - N/A B - T - Temporal Resolution d G 1 From satellite observations B 30 T 365 Annual summary of in situ measurements from yearbooks Timeliness d G 1 Using satellite telecommunication systems for in situ measurements; also daily from satellites for numerical models such as NOAA’s Great Lakes Coastal Forecasting System (GLCFS) B 30 T 365 To support annual climate reporting Required Measurement Uncertainty (2-sigma) cm G 1 Achievable with in situ measurements B 10 Achievable from satellite measurements T 15 Stability cm G 1 B T 10 Standards and References National standards. Kang, K.-K., C. R. Duguay, J. Lemmetyinen, and Y. Gel, 2014. Estimation of ice thickness on large northern lakes from AMSR-E brightness temperature measurements. Remote Sensing of Environment, 150: 1-19. 7.2.6 ECV Product: Lake Water-Leaving Reflectance Name Lake Water-Leaving Reflectance Definition Water-leaving reflectance in discrete wavebands of electromagnetic radiation from near-UV through visible to near infrared and up to shortwave infrared, fully normalized for viewing and solar incident angles. Unit dimensionless Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G 10 Small rivers and water bodies can be observed B 100 Water bodies included with resolution <300 m, as demonstrated through the Copernicus Global Land Service T 1000 Medium to large sized water bodies (up to 50% of global inland water surface area), as demonstrated through ESA Lakes_cci Vertical Resolution G - N/A B - T - Temporal Resolution d G <1 At equator. Allows daily observations under variable cloud cover. B 1 At equator. Decade-scale shifts in biological components become detectable in individual water bodies. T 3-30 At equator. Decade-scale shifts in biological components become detectable within global lake biomes. Timeliness d G 1 Episodic events can be detected in near real-time B 30 Satellite observations supplied with reliable meteorological ancillary data T 365 Annual extension of existing data records based on measurements supplied with reliable meteorological records Required Measurement Uncertainty (2-sigma) % G 10 At peak reflectance amplitude. Expected to allow derived water column properties to be estimated within 0.1 mg m-3 chlorophyll-a and 1 g m-3 suspended matter or 1 NTU. See ESA Lakes_cci URD. Impact of observation uncertainty will vary with lake type (shape of reflectance spectrum). B 20 At peak reflectance amplitude T 30 At peak reflectance amplitude. A threshold cannot be clearly defined for all optical water types and lake morphologies. A larger number of observations (large lakes) may compensate for increased per-observation uncertainty. Stability % /decade G 0.1 For in situ fiducial reference observations. B 0.5 T 1 Equates to 0.0001/decade for LWLR, 0.1 mg m-3 per decade for chlorophyll-a and 0.1 g m-3 for suspended matter or turbidity. Standards and References ATBD and URD of ESA Lakes_cci 7.3 ECV: River Discharge 7.3.1 ECV Product: River Discharge Name River Discharge Definition River Discharge is defined as the volume of water passing a measuring point or gauging station in a river in a given time.
Unit m3 s-1 Note For station calibration, both the flow velocity and the cross-sectional area have to be measured a few times a year. River Discharge measurements have essential direct applications for water management and related services, including flood protection. They are needed in the longer term to help identify and adapt to some of the most significant potential effects of climate change. The flow of freshwater from rivers into the oceans also needs to be monitored because it reduces ocean salinity, and changes in flow may thereby influence the thermohaline circulation. For climate applications a minimum number of 600 gauging stations globally would be needed to capture the freshwater influx from major rivers to the oceans (which in turn affects ocean temperature and salinity and, through them, ocean currents and weather systems). A minimum of 4000 gauging stations would be required, in addition to global and regional hydrological data, for deriving changes in rainfall distribution and intensity and determining climate signals in the least anthropogenically impacted basins. Requirements Item needed Unit Metric Value Notes Horizontal Resolution G - N/A. In situ observation by a point measurement on gauge. B - T - Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 Hourly. Required to monitor single events and for assessment of extreme events. B 24 Daily. Suitable to determine general discharge patterns at regional and global scales T 720 Monthly. Suitable to support climate-related modelling of terrestrial, oceanographic and atmospheric systems Timeliness month G 1 (day) Daily. For high resolution studies and for preparedness and mitigation during short-term events B 1 Monthly. Regional forecasting and modelling T 12 Yearly. For climatology the provision of monthly data within one year after data collection is necessary Required Measurement Uncertainty (2-sigma) % G 5 Improved measurement techniques and sufficient resources B 10 T 15 Discharge measurements are affected by a number of changing conditions and uncertainties due to complex calibration needs such as river cross-section flow velocities, changing channel conditions, siltation, scour, weed growth, and ice conditions. Stability m y-1 / decade Maximum drift over reference period G 0.01 For high resolution climatology, necessary to validate discharge variability and extremes. B 0.05 T 0.1 For climatology Standards and References WMO Technical Regulations of Hydrology (WMO-No.49) and Guide to hydrological practices (WMO-No.168) ISO 1100-1 (1996) Measurement of liquid flow in open channels-Part I: Establishment and operation of a gauging station ISO 748 (1997) Measurement of liquid flow in open channels-Velocity area methods WMO (WMO-519) Manual on stream gauging Volume I-Fieldwork and Volume II-Computation of discharge ISO Technical Committee 113 deals with all standards related to hydrometry ISO/TS 24154 (2005) The principles of operation, construction, maintenance and application of acoustic Doppler current profilers (ADCP) 7.3.2 ECV Product: Water Level Name Water Level Definition Water Level is the elevation of the water surface of a river (or a lake, reservoir) relative to a reference (the ellipsoid).
Unit m Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G <20 In addition to global and regional hydrological data, measurement of least anthropogenic impacted basins to derive changes in rainfall distribution, intensity and determine climate signals. B 20-50 Measurement of changes in seasonal level patterns at regional level. T >50 Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 Hourly. Required to monitor single events and for assessment of extreme events B 24 Daily. Suitable to determine general river/lakes patterns at regional and global scales T 720 Monthly. Suitable to support climate related modelling of terrestrial, oceanographic and atmospheric systems Timeliness month G 1 (day) Daily. For high resolution studies and for preparedness, mitigation during short term events B 1 Monthly. Regional forecasting and modelling T 12 Yearly. For climatology the provision of monthly data within one year after data collection is necessary Required Measurement Uncertainty (2-sigma) cm G 10 From in situ observations B T >10 From satellite observations Stability m y-1 / decade Maximu m drift over reference period G 0.01 For high resolution climatology and necessary to validate variability and extremes B T 0.05 For climatology Standards and References WMO Technical Regulations of Hydrology (WMO-No.49) and Guide to hydrological practices (WMO- No.168) ISO 1100-1 (1996) Measurement of liquid flow in open channels-Part I: Establishment and operation of a gauging station ISO 748 (1997) Measurement of liquid flow in open channels-Velocity area methods WMO (WMO-519) Manual on stream gauging Volume I-Fieldwork and Volume II-Computation of discharge ISO Technical Committee 113 is dealing with all standards related to Hydrometry ISO/TS 24154 (2005) The principles of operation, construction, maintenance and application of acoustic Doppler current profilers (ADCP) 2022 GCOS ECVs Requirements 198 7.4 ECV: Soil moisture 7.4.1 ECV Product: Surface Soil Moisture Name Surface Soil Moisture Definition Soil Moisture refers to the average water content in the soil, which can be expressed in volumetric, gravimetric or relative (e.g. degree of saturation) units. Surface Soil Moisture is sometimes referred to as topsoil moisture, surface wetness, surface humidity. Unit m3 m-3 Note The depth of the topmost soil layer is often only qualitatively defined as the actual sensing depth varies with measurement technique, water content, and soil properties and usually cannot be specified with any accuracy. All units can be inter-converted given the availability of soil property information (bulk density, porosity etc.), yet the use of the volumetric soil moisture content as the standard measurement unit is encouraged. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 Needed to fully resolve highly-dynamic processes taking place at the land-atmosphere interface surface (convective rainfall, orographic effects, etc.). B 10 Many climate and earth system models are moving to a grid size of 10 km or finer. T 50 This definition reflects a practical understanding of the boundary between climate science and other related geoscientific fields such as hydrology, agronomy, or ecology. Vertical Resolution G - N/A. There is no proper vertical resolution as the surface is a single layer. However, for modelling bare soil evaporation and LST a very thin skin layer is required (e.g. Dorigo et al., 2017; ECMWF). 
B - T - Temporal Resolution h G 6 Needed to fully resolve highly-dynamic processes taking place at the land-atmosphere interface surface; Needed to depict the interplay between soil moisture, precipitation, vegetation activity, and evaporation. B 24 Needed for closing water balance at daily scales. T 48 Important land-atmospheric processes are missed, but drying and wetting trends can be depicted. Timeliness h G 3 For climate communication and improved preparedness. B 6 To support the assessment of on-going extreme events (droughts, extreme wetness). T 48 For assessments and re-analysis. Required Measurement Uncertainty m3 m-3 Unbiased root mean square error G 0.03 More demanding goal is probably unrealistic due to high variability of soil moisture at small-scales due to changes in soil properties, topography, vegetation cover. B 0.04 Accuracy goal as first adopted for the dedicated soil moisture satellites SMOS and SMAP. Later adopted for GCOS and reconfirmed at the 4th Satellite Soil Moisture Validation and Application Workshop (Wagner et al. 2017). T 0.08 This value traces back to the accuracy goals as specified for the SMOS and SMAP satellites designed for measuring soil moisture. Stability m3 m-3 / decade G 0.005 This value still lacks justification in the scientific literature and needs to be critically assessed. B 0.01 As above T 0.02 As above Standards and References Wagner, W., T.J. Jackson, J.J. Qu, R. de Jeu, N. Rodriguez-Fernandez, R. Reichle, L. Brocca, W. Dorigo (2017) Fourth Satellite Soil Moisture Validation and Application Workshop, GEWEX News, 28(4), 13-14. Gruber, A., De Lannoy, G., Albergel, C., Al-Yaari, A., Brocca, L., Calvet, J.-C., Colliander, A., Cosh, M., Crow, W., Dorigo, W., Draper, C., Hirschi, M., Kerr, Y., Konings, A., Lahoz, W., McColl, K., Montzka, C., Muñoz-Sabater, J., Peng, J., Reichle, R., Richaume, P., Rüdiger, C., Scanlon, T., Schalie, R.v.d., Wigneron, J.-P. and Wagner, W., 2020. Validation practices for satellite soil moisture retrievals: What are (the) errors? Remote Sensing of Environment, 244: 111806. 10.1016/j.rse.2020.111806. 2022 GCOS ECVs Requirements 199 7.4.2 ECV Product: Freeze/Thaw Name Freeze/Thaw Definition Flag indicating whether the land surface is frozen or not. Unit Unitless Note Freeze/Thaw is subsidiary variable of the ECV soil moisture. It is needed because most measurement techniques do not allow to measure soil moisture when the ground is frozen. Also, land-surface processes fundamentally change when the soil is frozen. Instead of binary values (e.g. thawed = 0 and frozen = 1) probabilities (i.e. probability that the soil is frozen) may be used. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G 1 Same as for Surface Soil Moisture: Needed to fully resolve highly-dynamic processes taking place at the land-atmosphere interface surface (convective rainfall, orographic effects, etc.). B 10 Same as for Surface Soil Moisture: Many climate and earth system models are moving to a grid size of 10 km or finer. T 50 Same as for Surface Soil Moisture: This definition reflects a practical understanding of the boundary between climate science and other related geoscientific fields such as hydrology, agronomy, or ecology. 
Vertical Resolution G - N/A B - T - Temporal Resolution h G 6 Same as for Surface Soil Moisture: Needed to fully resolve highly-dynamic processes taking place at the land-atmosphere interface surface, and to depict the interplay between soil moisture, precipitation and evaporation B 24 Same as for Surface Soil Moisture: Needed for closing water balance at daily scales T 48 Same as for Surface Soil Moisture: Important land- atmospheric processes are missed, but drying and wetting trends can be depicted Timeliness h G 3 Same as for Surface Soil Moisture: For climate communication and improved preparedness B 6 Same as for Surface Soil Moisture: To support the assessment of on-going extreme events (droughts, extreme wetness) T 48 Same as for Surface Soil Moisture: For assessments and re-analysis Required Measurement Uncertainty % Overall classification accuracy (as this is a flag, this variable has an accuracy and not a sigma) G 98 Same as for Surface Soil Moisture: More demanding goal is probably unrealistic due to high variability of soil moisture at small-scales due to changes in soil properties, topography, vegetation cover. B 95 Same as for Surface Soil Moisture: Accuracy goal as first adopted for the dedicated soil moisture satellites SMOS and SMAP. Later adopted for GCOS and reconfirmed at the 4th Satellite Soil Moisture Validation and Application Workshop (Wagner et al. 2017). T 90 Same as for Surface Soil Moisture: This value traces back to the accuracy goals as specified for the SMOS and SMAP satellites designed for measuring soil moisture. Stability 2022 GCOS ECVs Requirements 200 Standards and References Required Measurement Uncertainty (2-sigma): Confusion matrices should be computed for different periods of the year. In particular, the transition periods from frozen to thawed conditions are most critical for assessing the accuracy of the freeze/thaw estimates. Wagner, W., T.J. Jackson, J.J. Qu, R. de Jeu, N. Rodriguez-Fernandez, R. Reichle, L. Brocca, W. Dorigo (2017) Fourth Satellite Soil Moisture Validation and Application Workshop, GEWEX News, 28(4), 13-14. Gruber, A., De Lannoy, G., Albergel, C., Al-Yaari, A., Brocca, L., Calvet, J.-C., Colliander, A., Cosh, M., Crow, W., Dorigo, W., Draper, C., Hirschi, M., Kerr, Y., Konings, A., Lahoz, W., McColl, K., Montzka, C., Muñoz-Sabater, J., Peng, J., Reichle, R., Richaume, P., Rüdiger, C., Scanlon, T., Schalie, R.v.d., Wigneron, J.-P. and Wagner, W., 2020. Validation practices for satellite soil moisture retrievals: What are (the) errors? Remote Sensing of Environment, 244: 111806. 10.1016/j.rse.2020.111806. 2022 GCOS ECVs Requirements 201 7.4.3 ECV Product: Surface Inundation Name Surface Inundation Definition Flag indicating whether the land surface is inundated or not. Unit Unitless Note Surface inundation is subsidiary variable of the ECV soil moisture. It is needed because most measurement techniques do not allow to measure soil moisture when the soil surface is inundated. Also, land-surface processes fundamentally change when the soil is inundated. Instead of binary values probabilities (i.e. probability that the soil is inundated) may be used. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G 1 Same as for Surface Soil Moisture: Needed to fully resolve highly-dynamic processes taking place at the land-atmosphere interface surface (convective rainfall, orographic effects, etc.). 
B 10 Same as for Surface Soil Moisture: Many climate and earth system models are moving to a grid size of 10 km or finer. T 50 Same as for Surface Soil Moisture: This definition reflects a practical understanding of the boundary between climate science and other related geoscientific fields such as hydrology, agronomy, or ecology. Vertical Resolution G - N/A B - T - Temporal Resolution h G 6 Same as for Surface Soil Moisture: Needed to fully resolve highly-dynamic processes taking place at the land-atmosphere interface surface, and to depict the interplay between soil moisture, precipitation and evaporation. B 24 Same as for Surface Soil Moisture: Needed for closing water balance at daily scales. T 48 Same as for Surface Soil Moisture: Important land- atmospheric processes are missed, but drying and wetting trends can be depicted. Timeliness h G 3 Same as for Surface Soil Moisture: For climate communication and improved preparedness. B 6 Same as for Surface Soil Moisture: To support the assessment of on-going extreme events (droughts, extreme wetness). T 48 Same as for Surface Soil Moisture: For assessments and re-analysis. Required Measurement Uncertainty % Overall classificati on accuracy (as this is a flag, this variable has an accuracy and not a sigma) G 98 Same as for Surface Soil Moisture: More demanding goal is probably unrealistic due to high variability of soil moisture at small-scales due to changes in soil properties, topography, vegetation cover. B 95 Same as for Surface Soil Moisture: Accuracy goal as first adopted for the dedicated soil moisture satellites SMOS and SMAP. Later adopted for GCOS and reconfirmed at the 4th Satellite Soil Moisture Validation and Application Workshop (Wagner et al. 2017). T 90 Same as for Surface Soil Moisture: This value traces back to the accuracy goals as specified for the SMOS and SMAP satellites designed for measuring soil moisture. Stability Standards Wagner, W., T.J. Jackson, J.J. Qu, R. de Jeu, N. Rodriguez-Fernandez, R. Reichle, L. Brocca, W. Dorigo (2017) Fourth Satellite Soil Moisture Validation and Application Workshop, GEWEX News, 28(4), 13-14. Gruber, A., De Lannoy, G., Albergel, C., Al-Yaari, A., Brocca, L., Calvet, J.-C., Colliander, A., Cosh, M., Crow, W., Dorigo, W., Draper, C., Hirschi, M., Kerr, Y., Konings, A., Lahoz, W., McColl, K., Montzka, C., Muñoz-Sabater, J., Peng, J., Reichle, R., Richaume, P., Rüdiger, C., Scanlon, T., Schalie, R.v.d., Wigneron, J.-P. and Wagner, W., 2020. Validation practices for satellite soil moisture retrievals: What are (the) errors? Remote Sensing of Environment, 244: 111806. 10.1016/j.rse.2020.111806. f f f 2022 GCOS ECVs Requirements 202 7.4.4 ECV Product: Root Zone Soil Moisture Name Root Zone Soil Moisture Definition The Root-Zone Soil Moisture content refers to the average water content in the root-zone. Unit m3 m-3 Note There is no agreed definition of the depth of the root-zone layer, as the actual root-zone of plants varies according to vegetation type, ground water table, and substrate. Considering that many in situ networks have sensors up to a depth of about 50 cm, a first definition of the root-zone layer may be 0-50 cm or similar ranges, although most land surface and vegetation models adopt a root zone of 100 cm or deeper (e.g. Muñoz-Sabater, 2021). Measuring the water content in the root-zone is either not possible (e.g. when using microwave satellites) or costly (e.g. using in situ measurements). Hence, the root-zone soil moisture content has initially not been considered by GCOS. 
However, as most applications require information about the soil moisture content in deeper soil layers, the root-zone soil moisture content was added to the ECV soil moisture in the GCOS 2016 Implementation Plan. Because it is relatively new variable, all specifications given in this table need to be regarded with care. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G 1 Same as for Surface Soil Moisture: Needed to fully resolve highly-dynamic processes taking place at the land-atmosphere interface surface (convective rainfall, orographic effects, etc.). B 10 Same as for Surface Soil Moisture: Many climate and earth system models are moving to a grid size of 10 km or finer. T 50 Same as for Surface Soil Moisture: This definition reflects a practical understanding of the boundary between climate science and other related geoscientific fields such as hydrology, agronomy, or ecology. Vertical Resolution cm G 10 B 50 T 100 Temporal Resolution h G 6 Same as for Surface Soil Moisture: Needed to fully resolve highly-dynamic processes taking place at the land-atmosphere interface surface; Needed to depict the interplay between soil moisture, precipitation and evaporation. B 24 Same as for Surface Soil Moisture: Needed for closing water balance at daily scales. T 48 Same as for Surface Soil Moisture: Important land- atmospheric processes are missed, but drying and wetting trends can be depicted. Timeliness month G 0.25 Weekly. Same as for Surface Soil Moisture: For climate communication and improved preparedness B 1 Monthly. Same as for Surface Soil Moisture: To support the assessment of on-going extreme events (droughts, extreme wetness) T 12 Yearly. Same as for Surface Soil Moisture: for assessments and re-analysis Required Measurement Uncertainty m3 m-3 Unbiased root mean square error G 0.03 Same as for Surface Soil Moisture: More demanding goal is probably unrealistic due to high variability of soil moisture at small-scales due to changes in soil properties, topography, vegetation cover. B 0.04 Same as for Surface Soil Moisture: Accuracy goal as first adopted for the dedicated soil moisture satellites SMOS and SMAP. Later adopted for GCOS and reconfirmed at the 4th Satellite Soil Moisture Validation and Application Workshop (Wagner et al. 2017). T 0.08 Same as for Surface Soil Moisture: This value traces back to the accuracy goals as specified for the SMOS and SMAP satellites designed for measuring soil moisture. Stability m3 m-3 G 0.005 Same as for Surface Soil Moisture: This value still lacks justification in the scientific literature and needs to be critically assessed. B 0.01 As above T 0.02 As above 2022 GCOS ECVs Requirements 203 Wagner, W., T.J. Jackson, J.J. Qu, R. de Jeu, N. Rodriguez-Fernandez, R. Reichle, L. Brocca, W. Dorigo (2017) Fourth Satellite Soil Moisture Validation and Application Workshop, GEWEX News, 28(4), 13-14. Gruber, A., De Lannoy, G., Albergel, C., Al-Yaari, A., Brocca, L., Calvet, J.-C., Colliander, A., Cosh, M., Crow, W., Dorigo, W., Draper, C., Hirschi, M., Kerr, Y., Konings, A., Lahoz, W., McColl, K., Montzka, C., Muñoz-Sabater, J., Peng, J., Reichle, R., Richaume, P., Rüdiger, C., Scanlon, T., Schalie, R.v.d., Wigneron, J.-P. and Wagner, W., 2020. Validation practices for satellite soil moisture retrievals: What are (the) errors? Remote Sensing of Environment, 244: 111806. 10.1016/j.rse.2020.111806. Muñoz-Sabater, J., Dutra, E., Agustí-Panareda, A., Albergel, C., Arduini, G., Balsamo, G., ... & Thépaut, J. N. (2021). 
ERA5-Land: A state-of-the-art global reanalysis dataset for land applications. Earth System Science Data, 13(9), 4349-4383. 7.5 ECV: Terrestrial Water Storage (TWS) [4] 7.5.1 ECV Product: Terrestrial Water Storage Anomaly Name Terrestrial Water Storage Anomaly Definition TWS is the total amount of water stored in all continental storage compartments (ice caps, glaciers, snow cover, soil moisture, groundwater, surface water bodies, water in biomass). The change of TWS over time balances the budget of the water fluxes precipitation, evapotranspiration and runoff, i.e., it closes the continental water balance. Unit km³ or mm water equivalent (kg/m²) Note Measuring TWS is possible by satellite and terrestrial gravimetry in relative terms only, not in absolute values. Thus, TWS is given as the deviation relative to a long-term mean (TWS anomaly). Requirements Item needed Unit Metric Value Notes Horizontal Resolution km G 1 Resolve the topography- and land cover-driven patterns of landscape-scale water storage dynamics, e.g., ref #2 B 10 Many climate and Earth system models are moving to a grid size of 10 km or finer. Often a relevant local to regional water management scale T 200 Comprehensive continental-scale patterns of water storage changes, e.g., ref #1 Vertical Resolution G - N/A, as total water storage represents an integrative value in the vertical, over all storage compartments and depths. B - T - Temporal Resolution d G 1 To resolve water storage changes caused by heavy precipitation events and occurring during flood events B T 30 To resolve major seasonal, intra- and inter-annual dynamics as well as long-term trends of water storage Timeliness d G 1 Required latency for warning of and managing extreme events, in particular floods, e.g. ref #3 B T 60-90 Current latency of GRACE-FO based TWS products, e.g. ref #4 Required Measurement Uncertainty (2-sigma) mm G 1 Order of magnitude required to resolve the TWS effect of daily evapotranspiration B T 20 Order of magnitude to resolve monthly TWS variations Stability mm y-1 G <1 Stability needed to detect subtle long-term TWS trends caused by global change and anthropogenic impacts on the water cycle B T <5 Stability needed to resolve major long-term TWS changes, e.g., related to melting ice sheets, groundwater depletion Standards and References Pail, R., Bingham, R., Braitenberg, C., Dobslaw, H., Eicker, A., Güntner, A., Horwath, M., Ivins, E., Longuevergne, L., Panet, I., Wouters, B., and the IUGG Expert Panel (2015): Science and User Needs for Observing Global Mass Transport to Understand Global Change and to Benefit Society. Surveys in Geophysics 36, 743-772. Güntner, A., Reich, M., Mikolaj, M., Creutzfeldt, B., Schroeder, S., Wziontek, H. (2017): Landscape-scale water balance monitoring with an iGrav superconducting gravimeter in a field enclosure. Hydrology and Earth System Sciences, 21(6), 3167-3182, doi: 10.5194/hess-21-3167-2017. Jäggi, A., Weigelt, M., Flechtner, F., Güntner, A., Mayer-Gürr, T., Martinis, S., Bruinsma, S., Flury, J., Bourgogne, S., Steffen, H., Meyer, U., Jean, Y., Sušnik, A., Grahsl, A., Arnold, D., Cann-Guthauser, K., Dach, R., Li, Z., Chen, Q., van Dam, T., Gruber, C., Poropat, L., Gouweleeuw, B., Kvas, A., Klinger, B., Lemoine, J.-M., Biancale, R., Zwenzner, H., Bandikova, T., Shabanloui, A. (2019): European Gravity Service for Improved Emergency Management (EGSIEM) - from concept to implementation. Geophysical Journal International, 218(3), 1572-1590, doi: 10.1093/gji/ggz238. Peter, H., Meyer, U., Lasser, M., Jäggi, A. (2022): COST-G gravity field models for precise orbit determination of Low Earth Orbiting Satellites. Advances in Space Research, 69(12), 4155-4168, doi: 10.1016/j.asr.2022.04.005. [4] This is a new ECV approved by the GCOS Steering Committee in 2020.
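The definition and note above state two relations: TWS is reported as an anomaly relative to a long-term mean, and its change over a period closes the continental water balance, dTWS = P - ET - R. A minimal sketch of both, with made-up numbers and hypothetical function names (this is not a GRACE/GRACE-FO processing algorithm):

```python
def tws_anomaly(tws_series_mm: list[float]) -> list[float]:
    """Express a TWS series as deviations from its long-term mean (mm w.e.)."""
    mean = sum(tws_series_mm) / len(tws_series_mm)
    return [v - mean for v in tws_series_mm]

def tws_change_from_fluxes(precip_mm: float, evapotrans_mm: float, runoff_mm: float) -> float:
    """Water-balance estimate of the TWS change over the same period (mm w.e.):
    dTWS = P - ET - R."""
    return precip_mm - evapotrans_mm - runoff_mm

# Example with illustrative monthly values for one river basin:
print(tws_anomaly([120.0, 150.0, 90.0, 140.0]))  # deviations from the mean
print(tws_change_from_fluxes(80.0, 55.0, 20.0))  # +5 mm storage gain
```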
8. CRYOSPHERE [5] 8.1 ECV: Snow 8.1.1 ECV Product: Area Covered by Snow Name Area Covered by Snow Definition Snow cover refers to the percentage of solid surface (ground, ice, sea ice, lake ice, glaciers, etc.) in open areas, and of vegetation cover where present (such as forest canopies), that is covered by snow at a given time. Sometimes called “viewable snow”. Unit km2 Note Area covered by snow is observed in situ and by satellite (Robinson, 2013; Frei et al., 2012). Visible satellite sensors can identify snow cover from a snow depth of a few millimetres, whereas microwave radiometers only detect snow from a depth of a few centimetres. Requirements Item needed Unit Metric Value Notes Horizontal Resolution m Size of grid cell G 50 B 500 T 1000 Vertical Resolution G - N/A B - T - Temporal Resolution h Frequency of measurement G 6 B 24 T 48 Timeliness h G 3 B 24 T 240 Required Measurement Uncertainty (2-sigma) % G 5 B 15 T 20 Stability % G 1 B 5 T 10 Standards and References Frei, A., Tedesco, M., Lee, S., Foster, J., Hall, D. K., Kelly, R. and Robinson, D. A. (2012): A review of global satellite-derived snow products, Advances in Space Research, 50, 1007–1029. Goodison, B. and Walker, A. (1994): Canadian development and use of snow cover information from passive microwave satellite data, B. Choudhuly et al. (ed), Passive Microwave Remote Sensing of Land-Atmosphere Interaction, Utrecht: VSP BV, 245-262. Robinson, D.A. (2013): Climate Data Record Program (CDRP): Climate Algorithm Theoretical Basis Document (C-ATBD) Northern Hemisphere Snow Cover Extent, CDRPATBD-0156. Asheville, North Carolina, USA, 28 pp. Sturm, M., Taras, B., Liston, G. E., Derksen, C., Jonas, T. and Lea, J. (2010): Estimating Snow Water Equivalent Using Snow Depth Data and Climate Classes. Jour. Hydromet. 11, 1380-1394. Bormann, K., R. Brown, C. Derksen, and T. Painter, 2018. Estimating snow cover trends from space. Nature Climate Change. DOI: 10.1038/s41558-018-0318-3. WMO (2018), Guide to instruments and methods of observation: Volume II - Measurement of Cryospheric Variables, 2018 edition, World Meteorological Organization, Geneva, Switzerland, 52 pp. Fierz, C., Armstrong, R.L., Durand, Y., Etchevers, P., Greene, E., McClung, D.M., Nishimura, K., Satyawali, P.K., and Sokratov, S.A. (2009): The International Classification for Seasonal Snow on the Ground, UNESCO-IHP, Paris, France, viii+80 pp. [5] GCOS and GCW will be working together to harmonize the requirements for the cryosphere ECVs during the lifetime of this Implementation Plan.
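Snow Depth (8.1.2) and Snow-Water Equivalent (8.1.3), both specified below, are linked through the bulk snow density, since SWE is the snow mass per unit area. A minimal sketch of that conversion; the density value and function name are illustrative assumptions, not GCOS requirement values:

```python
# SWE [mm w.e.] = depth [m] * bulk density [kg m-3], because 1 kg m-2 of
# water corresponds to 1 mm of water equivalent.
def swe_mm(snow_depth_m: float, bulk_density_kg_m3: float = 300.0) -> float:
    """Snow-water equivalent in mm for a given depth and bulk snow density."""
    return snow_depth_m * bulk_density_kg_m3

# Example: 0.8 m of seasonal snow at an assumed bulk density of 300 kg m-3.
print(swe_mm(0.8))  # 240.0 mm water equivalent
```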
Vertical Resolution G - N/A B - T - Temporal Resolution d G 6 B 24 T 48 Timeliness h G 1 B 6 T 24 Required Measurement Uncertainty (2-sigma) mm G 10 B 25 T 50 Stability cm G 1 B 2 T 5 Standards and References Frei, A., Tedesco, M., Lee, S., Foster, J., Hall, D. K., Kelly, R. and Robinson, D. A. (2012): A review of global satellite-derived snow products, Advances in Space Research, 50, 1007–1029. Goodison, B. and Walker, A. (1994): Canadian development and use of snow cover information from passive microwave satellite data, B. Choudhuly et al. (ed), Passive Microwave Remote Sensing of Land-Atmosphere Interaction, Utrecht: VSP BV, 245-262. Robinson, D.A. (2013): Climate Data Record Program (CDRP): Climate Algorithm Theoretical Basis Document (C-ATBD) Northern Hemisphere Snow Cover Extent, CDRPATBD-0156. Asheville, North Carolina, USA 28 pp. Sturm, M., Taras, B., Liston, G. E., Derksen, C., Jonas, T. and Lea, J. (2010): Estimating Snow Water Equivalent Using Snow Depth Data and Climate Classes. Jour. Hydromet. 11, 1380-1394. Pulliainen, J., Luojus, K., Derksen, C. et al. (2020). Patterns and trends of Northern Hemisphere snow mass from 1980 to 2018. Nature 581, 294–298. Doi: 10.1038/s41586-020-2258-0. WMO (2018), Guide to instruments and methods of observation: Volume II - Measurement of Cryospheric Variables, 2018th ed., World Meteorological Organization, Geneva, Switzerland, 52 pp. Fierz, C., Armstrong, R.L., Durand, Y., Etchevers, P., Greene, E., McClung, D.M., Nishimura, K., Satyawali, P.K., and Sokratov, S.A. (2009): The International Classification for Seasonal Snow on the Ground, UNESCO-IHP, Paris, France, viii+80 pp. - 207 - 2022 GCOS ECVs Requirements 8.1.3 ECV Product: Snow-Water Equivalent Name Snow-Water Equivalent Definition Water equivalent of snow cover: the vertical depth of the water that would be obtained if the snow cover melted completely, which equates to the snow-cover mass per unit area. Unit mm Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G 0.5 B 5 These horizontal resolutions apply to non-mountain snow covered regions only. T 25 Vertical Resolution G - N/A B - T - Temporal Resolution h G 6 B 24 T 48 Timeliness h G 3 B 24 T 240 Required Measuremen t Uncertaint y (2- sigma) mm G 1 For mountain areas 20% B 5 For mountain areas 30% T 10 For mountain areas 40% Stability mm G 5 B 8 T 10 Standards and Reference s Frei, A., Tedesco, M., Lee, S., Foster, J., Hall, D. K., Kelly, R. and Robinson, D. A. (2012): A review of global satellite-derived snow products, Advances in Space Research, 50, 1007–1029. Goodison, B. and Walker, A. (1994): Canadian development and use of snow cover information from passive microwave satellite data, B. Choudhuly et al. (ed), Passive Microwave Remote Sensing of Land-Atmosphere Interaction, Utrecht: VSP BV, 245-262. Robinson, D.A. (2013): Climate Data Record Program (CDRP): Climate Algorithm Theoretical Basis Document (C-ATBD) Northern Hemisphere Snow Cover Extent, CDRPATBD-0156. Asheville, North Carolina, USA 28 pp. Sturm, M., Taras, B., Liston, G. E., Derksen, C., Jonas, T. and Lea, J. (2010): Estimating Snow Water Equivalent Using Snow Depth Data and Climate Classes. Jour. Hydromet. 11, 1380-1394. WMO (2018), Guide to instruments and methods of observation: Volume II - Measurement of Cryospheric Variables, 2018th ed., World Meteorological Organization, Geneva, Switzerland, 52 pp. 
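The Snow-Water Equivalent product above is the snow-cover mass per unit area, so 1 mm w.e. corresponds to 1 kg m⁻². The following minimal sketch shows the basic depth-times-density conversion that underlies depth-based SWE estimates such as Sturm et al. (2010); the bulk density is a placeholder input here, whereas Sturm et al. model it from depth, day of year and climate class.

```python
# Minimal sketch of the depth-to-SWE conversion: SWE in mm water equivalent equals
# snow depth (m) times bulk snow density (kg m-3), because 1 kg m-2 corresponds to
# 1 mm w.e. The density value used below is an assumed placeholder.
def swe_mm(depth_m: float, bulk_density_kg_m3: float) -> float:
    """Snow-water equivalent in mm w.e. (equivalently kg m-2)."""
    if depth_m < 0 or bulk_density_kg_m3 <= 0:
        raise ValueError("depth must be >= 0 and density > 0")
    return depth_m * bulk_density_kg_m3

if __name__ == "__main__":
    # 0.80 m of snow at an assumed bulk density of 300 kg m-3 -> 240 mm w.e.
    print(swe_mm(0.80, 300.0))
```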
Fierz, C., Armstrong, R.L., Durand, Y., Etchevers, P., Greene, E., McClung, D.M., Nishimura, K., Satyawali, P.K., and Sokratov, S.A. (2009): The International Classification for Seasonal Snow on the Ground, UNESCO-IHP, Paris, France, viii+80 pp. Luojus, K., Pulliainen, J., Takala, M., Lemmetyinen, J., Mortimer, C., Derksen, C., Mudryk, L., Moisander, M., Venäläinen, P., Hiltunen, M., Ikonen, J., Smolander, T., Cohen, J., Salminen, M., Veijola, K., and Norberg, J. (2021): GlobSnow v3.0 Northern Hemisphere snow water equivalent dataset. Scientific Data. doi: 10.1038/s41597-021-00939-2 Mortimer, C., Mudryk, L., Derksen, C., Luojus, K., Brown, R., Kelly, R., Tedesco, M. (2020): Evaluation of long term Northern Hemisphere snow water equivalent products. The Cryosphere. doi: 10.5194/tc-14-1579-2020 - 208 - 2022 GCOS ECVs Requirements 8.2 ECV: Glaciers 8.2.1 ECV Product: Glacier Area Name Glacier Area Definition Inventory of map-projected area covered by glaciers. Unit km2 Note Glacier area is the map-projected size of a glacier in km2. The product comes as worldwide inventory of glaciers outlines with various related attribute fields (e.g. area, elevation range, glacier characteristics). Typically, a minimum size of 0.01 or 0.02 km2 is applied, to avoid including small ice patches which do not flow and are therefore not glaciers. Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G 1 Spatial resolutions better than 15 m (e.g. the 10 m from Sentinel 2) are preferable as typical characteristics of glacier flow (e.g. crevasses) only become visible at this resolution (Paul et al. 2016). B 20 The horizontal resolution of 15‐30 m refers to typically used satellite sensors (Landsat and ASTER) to map glaciers. T 100 At coarser resolution the quality of the derived outlines rapidly degrades. Vertical Resolution G - N/A B - T - Temporal Resolution y G 1 The temporal sampling “Annual” means that each year the availability of satellite (or aerial) images should be checked to identify the image with the best snow conditions (i.e. snow should not hide the glacier perimeter). B T 10 Decadal data used to evaluate glacier change in regional scale. Timeliness y G 1 B T 10 For multi-temporal inventories at decadal resolution, the timeliness of the product availability is not so important. Required Measurement Uncertainty % Random error of glacier outlines produced in dependency of remote sensing imagery used, with respect to the total glacier area G 1 Glacier outlines mapped with a resolution of 1 m remote sensing images (take glacier area in average as 1 km2) B 5 Glacier outlines mapped with a resolution of 15-30 m remote sensing images (take glacier area in average as 1 km2) T 20 Glacier outlines mapped with a resolution of 100 m remote sensing images (take glacier area in average as 1 km2) Stability G Glacier area at different times extracted independently. No cumulative effect of the measurement system should be considered B T Standards and References Pfeffer, W. T. et al. The Randolph Glacier Inventory: a globally complete inventory of glaciers. J. Glaciol. 60, 537–552 (2014). Paul, F., S.H. Winsvold, A. Kääb, T. Nagler and G. Schwaizer (2016): Glacier Remote Sensing Using Sentinel-2. Part II: Mapping Glacier Extents and Surface Facies, and Comparison to Landsat 8. Remote Sensing, 8(7), 575; doi:10.3390/rs8070575. Zemp, M., Frey, H., Gärtner-Roer, I., Nussbaumer, S. U., Hoelzle, M., Paul, F., … Vincent, C. (2015). 
Historically unprecedented global glacier decline in the early 21st century. Journal of Glaciology, 61(228), 745–762. - 209 - 2022 GCOS ECVs Requirements 8.2.2 ECV Product: Glacier Elevation Change Name Glacier Elevation Change Definition Glacier surface elevation changes from geodetic methods. Unit m y-1 Note Measured in-situ and remotely sensed using geodetic method (Cogley et al. 2011, Zemp et al. 2013) Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G 1 The fine resolution (1-5 m) data be used to extract mass change and dynamic characteristics in area with abnormal topography (quite steep slope, ice fall, calving snout) B 25 A stable size of raster for measuring volume change (Joerg and Zemp, 2014) T 90 Resolution of SRTM, which most widely used as reference to extract elevation change Vertical Resolution m G 0.01 Annual mass change of glaciers be evaluated with data with vertical resolution < 0.01 m (e.g. Xu et al., 2019) B 2 Roughly corresponding to the resolution needed for annual mean mass change if observed decadal T 5 The targets for vertical resolutions refer to requirements for differences of digital elevation models (dDEM) in mountainous terrain (e.g. Joerg and Zemp, 2014) Temporal Resolution y G 1 To evaluate annual mass change and detect the signal of potential abnormal events (e.g. surge) B T 10 The frequency “decadal” refers to the length of the time period needed between two geodetic surveys in order to safely apply a density conversion from volume to mass change (cf. Huss 2013, Zemp et al. 2013) Timeliness G In view of the low need for temporal sampling, the timeliness is not so important. B T Required Measurement Uncertainty m Glacier‐wide (random) uncertainty estimate based on a quality assessment of the digital elevation model differencing product over stable terrain G B 2 Refers to the glacier-wide uncertainty estimate based on a quality assessment of the dDEM product over stable terrain. The value of (2m per decade = 0.2 m-2 a-1) is set in relation to the corresponding uncertainty requirement of the glaciological method. T Stability m / decade Glacier-wide bias in elevation change measurements over a decade G B 2 The stability of 2m per decade refers to a bias in the glacier‐wide change of 0.2 m m-2 a-1, which is about one third to half of the average annual ice loss rate over the 20th century (Zemp et al. 2015) and is good enough for validation of glaciological series (Zemp et al. 2013) T Standards and References Huss, M. (2013). Density assumptions for converting geodetic glacier volume change to mass change. The Cryosphere, 7(3), 877–887. Joerg, P. C., & Zemp, M. (2014). Evaluating Volumetric Glacier Change Methods Using Airborne Laser Scanning Data. Geografiska Annaler: Series A, Physical Geography, 96(2), n/a- n/a. Zemp, M., Thibert, E., Huss, M., Stumm, D., Rolstad Denby, C., Nuth, C., Nussbaumer, S.U., Moholdt, G., Mercer, A., Mayer, C., Joerg, P.C., Jansson, P., Hynek, B., Fischer, A., Escher-Vetter, H., Elvehøy, H., and Andreassen, L.M. (2013): Reanalysing glacier mass balance measurement series. The Cryosphere, 7, 1227-1245, doi:10.5194/tc-7-1227-2013. Zemp, M., Frey, H., Gärtner-Roer, I., Nussbaumer, S. U., Hoelzle, M., Paul, F., … Vincent, C. (2015). Historically unprecedented global glacier decline in the early 21st century. Journal of Glaciology, 61(228), 745–762. Xu, C., Li, Z., Li, H., Wang, F., & Zhou, P. (2018). Long-range terrestrial laser scanning measurements of summer and annual mass balances for Urumqi Glacier No. 
1, eastern Tien Shan, China. The Cryosphere Discussions, 1-28. doi: 10.5194/tc-2018-128. - 210 - 2022 GCOS ECVs Requirements 8.2.3 ECV Product: Glacier Mass Change Name Glacier Mass Change Definition Glacier Mass Changes from glaciological method. Unit kg m-2 Note Mass change is measured in-situ by the glaciological method (Cogley et al. 2011, Zemp et al. 2013) Requirements Item needed Unit Metric Value Notes Horizontal Resolution G B T Vertical Resolution m G B 0.01 The vertical resolution “0.01 m or 10 kg m-2” refers to the precision of ablation stake and snow pit readings at point locations T 0.05 Lowest requirement in glaciology Temporal Resolution month G 1 Monthly observations in melting season to depict melting processes. B 3 Seasonal. The frequency “seasonal to annual” refers to the measurement campaigns which ideally are carried out at the time of maximum accumulation (spring) and of maximum ablation (end of hydrological year) T 12 Annual. The frequency “seasonal to annual” refers to the measurement campaigns which ideally are carried out at the time of maximum accumulation (spring) and of maximum ablation (end of hydrological year) Timeliness day G B T 365 Ideally, glaciological measurement become available after completion of the annual field campaigns. The WGMS grants a one-year retention period to allow investigators time to properly analyze, document, and publish their data before submitting the data. Required Measurement Uncertainty kg m-2 a-1 Glacier‐wide (random) uncertainty estimate including uncertainties from point measurements , snow, firn and ice density conversions, and extrapolation to glacier-wide results. G B 0.2 2-sigma (200 kg m-2 a-1 = 0.2 m w.e. m-2 a-1) refers to the glacier-wide annual balance which is interpolated from the point measurements. The target value was selected based on a review of long‐term mass balance measurement series (Zemp et al. 2013). T 0.5 Lowest requirement in glaciology. Stability kg m-2 / deca de Glacier-wide bias in mass change measurement s over a decade. G B T 2 The stability can be assessed by validation and – if necessary – calibration of a glaciological times series with decadal results from the geodetic method (cf. Zemp et al. 2013). As a rule of thumb, stability is recommended to be better than 300 kg m-2 a-1 (cf. Zemp et al. 2013). Standards and References Zemp, M., Thibert, E., Huss, M., Stumm, D., Rolstad Denby, C., Nuth, C., Nussbaumer, S.U., Moholdt, G., Mercer, A., Mayer, C., Joerg, P.C., Jansson, P., Hynek, B., Fischer, A., Escher-Vetter, H., Elvehøy, H., and Andreassen, L.M. (2013): Reanalysing glacier mass balance measurement series. The Cryosphere, 7, 1227-1245, doi:10.5194/tc-7-1227-2013. Zemp, M., Frey, H., Gärtner-Roer, I., Nussbaumer, S. U., Hoelzle, M., Paul, F., … Vincent, C. (2015). Historically unprecedented global glacier decline in the early 21st century. Journal of Glaciology, 61(228), 745–762. Zemp, M., Huss, M., Thibert, E. et al. Global glacier mass changes and their contributions to sea-level rise from 1961 to 2016. Nature 568, 382–386 (2019). 2022 GCOS ECVs Requirements 211 8.3 ECV: Ice Sheets and Ice Shelves 8.3.1 ECV Product: Surface Elevation Change Name Surface Elevation Change Definition Measurements of the change height above a reference (geoid or ellipsoid) of the snow-air surface or uppermost firn layers. 
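The geodetic products above (8.2.2, 8.2.3 and the ice-sheet surface elevation change introduced here) rely on converting an observed elevation change into a mass change via an assumed density. The following minimal sketch illustrates that conversion; the 850 kg m⁻³ value reflects the density conversion attributed to Huss (2013) in the notes above, and all numbers are illustrative only.

```python
# Minimal sketch of the geodetic density conversion: a glacier-wide mean elevation
# change is converted to a specific mass change using an assumed volume-to-mass
# conversion density (~850 kg m-3 for multi-year periods, per Huss, 2013).
WATER_DENSITY = 1000.0  # kg m-3

def specific_mass_change(dh_m: float, conversion_density: float = 850.0):
    """Return (kg m-2, m w.e.) for a glacier-wide mean elevation change dh_m in metres."""
    kg_per_m2 = dh_m * conversion_density
    return kg_per_m2, kg_per_m2 / WATER_DENSITY

if __name__ == "__main__":
    # e.g. a mean surface lowering of 5 m accumulated over a decade
    kg_m2, m_we = specific_mass_change(-5.0)
    print(f"{kg_m2:.0f} kg m-2 over the period ({m_we:.2f} m w.e.)")
```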
Unit Annual change in elevations above sea level measured in meters (m y-1) Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution m Spacing of measurements G B T 100 Vertical Resolution G - N/A. One value per point of Earth’s surface. B - T - Temporal Resolution month G 1 B T 12 Timeliness G B T Required Measurement Uncertainty m a‐ 1 error of measured in‐ situ using the geodetic method and remotely sensed surface elevation G B T 0.1 Stability m a‐ 1 as above G B T 0.01 Standards and References 2022 GCOS ECVs Requirements 212 8.3.2 ECV Product: Ice Velocity Name Ice Velocity Definition Surface-parallel vector of the surface ice flow. Unit m y-1 (average speed in grid cell of surface ice flow) Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution m Grid cell size G 50 B 100 T 1000 Vertical Resolution G - N/A. One value per point of Earth’s surface. B - T - Temporal Resolution month time G 1 B T 12 Timeliness G B T Required Measurement Uncertainty m y-1 error of measured in‐situ using the geodetic method and remotely sensed surface elevation G 10 B 30 T 100 Stability m s-1 as above G B T 10 Standards and References Hvidberg, C.S., et al., 2021. User Requirements Document for the Ice_Sheets_cci project of ESA's Climate Change Initiative, version 1.5, 03 Aug 2012. 2022 GCOS ECVs Requirements 213 8.3.3 ECV Product: Ice Volume Change Name Ice Volume Change Definition Direct measurement of local volume changes or inferred volume change from combining measurements. Unit km3 y-1 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G B T 50 Vertical Resolution G N/A. One value per point of Earth’s surface B T Temporal Resolution d Time G 30 B T 365 Timeliness G B T Required Measurement Uncertainty km3 y-1 error of measured in‐situ using the geodetic method and remotely sensed surface elevation G B T 10 Stability km3 y-1 as above G B T 1 Standards and References 2022 GCOS ECVs Requirements 214 8.3.4 ECV Product: Grounding Line Location and Thickness Name Grounding Line Location and Thickness Definition Location of the line (zone) where ice outflow to an ocean begins to float, and thickness of ice at that location. Unit m (thickness), coordinates of location Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G 100 B T 1000 Vertical Resolution G - N/A B - T - Temporal Resolution y G B T 1 Timeliness G B T Required Measurement Uncertainty (2 sigma) m G 1 B T 10 Stability m G B T 1 Standards and References - 215 - 2022 GCOS ECVs Requirements 8.4 ECV: Permafrost 8.4.1 ECV Product: Permafrost Temperature (PT) Name Permafrost Temperature (PT) Definition Permafrost is subsurface earth material that remains continuously at or below 0 °C throughout at least two consecutive years, usually for extended time periods. Product definition: Ground temperatures measured at specified depths along profiles. Unit °C Note Measurements made in boreholes, and usually presented as temperature profiles. Active layer = surface layer that thaws/freezes every year. ZAA = Zero Annual Amplitude, maximum penetration depth of seasonal variations. Requirements Item needed Unit Metric Value Notes Horizontal Resolution N/A Spatial distribution of boreholes G Regular spacing It is necessary to fill the spatial gaps in order to calibrate/compare with remote sensing products and climate modeling results. B Transects Longitudinal and latitudinal transects allow the assessment of gradients. 
B Various settings Various terrain with different ground/soil conditions (including varying moisture and ice content, thermal properties) and topoclimatic/microclimate conditions (e.g. vegetation, snow cover, slope, aspect). In mountain permafrost, various geomorphological and topo-climatic settings: rock-glaciers, rock walls, in various aspects. Allows for comparison of different reaction to climate change. T Characterizat ion of bioclimate zones Boreholes in continuous, discontinuous, and sporadic permafrost areas. In discontinuous/sporadic permafrost, boreholes must be located in permafrost affected zones. Some boreholes in non-permafrost within permafrost areas can be useful for comparison, model comparison and for understanding evolution of regional permafrost conditions. Location of boreholes is strongly dependent on accessibility of borehole sites. Vertical Resolution N/A Borehole depth, defined according to characteristic permafrost layers G Deeper than ZAA Allows assessment of mid- to long term trends. B Down to ZAA Allows measurement of the full seasonal variations, and assessment of interannual trend. T Below permafrost table Allows calculation of active layer depth and measurement of the temperature of the uppermost permafrost at the permafrost table. m Sensor spacing along borehole for continuous monitoring / measuring interval for manual measurement G Above ZAA: 0.2 Spacing typically increases with depth. Actual spacing has to be adapted to local conditions and should be higher on boundary values (active layer/permafrost, ZAA), to allow an accurate interpolation. B T Above ZAA: 0.5 G Below ZAA: 5 to 10 B T Below ZAA > 10 Temporal Resolution Sampling interval for continuous monitoring/ periodicity for manual measures. Depends on depth, must be more frequent in active layer than below G Active layer: 1h Only useful in topmost layers, affected by diurnal variations. B Active layer: 1d Assessment of rapid changes due for instance to water infiltration. T Active layer: 1 month Sites measured only once a year cannot be used for active layer monitoring G Down to ZAA: 1d Assessment of rapid variations in terrain with high thermal conductivity. B Down to ZAA: 1 Assessment of seasonal variations. T Down to ZAA: 1 year Sites with manual measurement are measured only once a year. G Below ZAA: 1 month Allows detection of extreme seasonal variations. - 216 - 2022 GCOS ECVs Requirements ZAA B Below ZAA: 1 year Sites with manual measurement are measured only once a year. T Below ZAA: 5 years Sufficient for mid- to long-term trend. Timeliness G Weekly /real time Timely reporting, fast intervention in case of problems where possible reduces the risk of large data gaps B 1 year Most site measurements are retrieved only once a year T 5 years Some site measurements are not retrieved every year Required Measurement Uncertainty °C Sensor uncertainty G 0.01 Useful for finer definition of freeze/thaw dates B 0.1 Mean annual trends are often less than 0.1 °C. Reachable with high resolution sensors. T 0.2 Reachable with most standard sensors. Stability °C Sensor drift over reference period. Assumed drift value of commonly used sensors. Sensor drift correction needs recalibration f G 0.01 B 0.05 Should be reached in order to maintain drift below trend. T 0.1 Commonly accepted value based on experience. Calibration of sensor probe is possible in case of manual measurement. It is often impossible for fixed sensor chains, that additionally can be blocked in the borehole due to e.g., shearing. 
Drift can be minimized by 3 or 4 wire mounting. In situ calibration/correction is possible for sub-surface sensors using “zero curtain”. Standards and References Streletskiy, Dmitry and Biskaborn, Boris and Smith, Sharon L. and Noetzli, Jeannette and Vieira, Gonçalo and Schoeneich, Philippe (2017) GTN-P Strategy and Implementation Plan 2016- 2020. Technical Report. Global Terrestrial Network for Permafrost. - 217 - 2022 GCOS ECVs Requirements 8.4.2 ECV Product: Active Layer Thickness (ALT) Name Active Layer Thickness Definition The surface layer of the ground, subject to annual thawing and freezing in areas underlain by permafrost. Unit cm Note There are three established methods for measuring ALT: mechanical probing, frost tubes and temperature interpolation (with the assumption that 0 °C = freeze point). In all three cases, the result is a depth/thickness value expressed in cm. Satellite based estimates of ALT using Interferometric Synthetic Aperture Radar (InSAR) (Liu et al, 2012, Schaefer et al., 2016) maybe used in the future. Requirements Item needed Unit Metric Value Notes Horizontal Resolution m Spatial distribution of sites G Regular spacing It is necessary to fill gaps in order to calibrate and compare with remote sensing products and climate modeling results B Transects T sufficient sites to characterize each bioclimatic subzone Vertical Resolution cm Spacing of sensors G 2 Vertical resolution of ground temperature sensor spacing for the interpolation B 10 T 20 Temporal Resolution y G 1 (at end of thawing period) ALT is an annual value, which is measured once a year at the end of the thawing period. In case of continuous measurement (borehole data), ALT is defined at time of maximal penetration of above 0°C temperature. B T 1 (at end of thawing period) Timeliness y G 1 ALT is measured and provided once per year B T 1 Required Measurement Uncertainty cm mechanical probing penetration uncertainty / sensor uncertainty G 1/5 Mechanical probing/frost tubes/ temperature interpolation from boreholes. B T 2/15 Stability cm G 1 A common cause of bias is due to surface subsidence in case of ice loss in ice-rich permafrost. Needs to be corrected in order to get the true thaw depth. In ice-rich terrain subject to thaw subsidence, monitoring of vertical movements by frost heave in winter and subsidence in summer are of critical importance. Field measurements may involve direct measurement towards borehole tube, optical survey or differential GPS technology. B 5 T 10 Standards and References Smith, Sharon and Brown, Jerry (2009) Assessment of the status of the development of the standards for the Terrestrial Essential Climate Variables - T7 - Permafrost and seasonally frozen ground. Streletskiy, Dmitry and Biskaborn, Boris and Smith, Sharon L. and Noetzli, Jeannette and Vieira, Gonçalo and Schoeneich, Philippe (2017) GTN-P - Strategy and Implementation Plan 2016- 2020. Technical Report. Global Terrestrial Network for Permafrost. Liu, L., Schaefer, K., Zhang, T., & Wahr, J. (2012). Estimating 1992–2000 average active layer thickness on the Alaskan North Slope from remotely sensed surface subsidence. Journal of Geophysical Research: Earth Surface, 117(F1). - 218 - 2022 GCOS ECVs Requirements 8.4.3 ECV Product: Rock Glacier Velocity (RGV) Name Rock Glacier Velocity (RGV) Definition Global dataset of surface velocity time series measured/computed on single rock glacier units. Unit m y-1 Note RGV can be measured/computed from terrestrial survey (e.g. 
repeated GNSS field campaigns, permanent GNSS stations) or remote sensing based approaches (e.g. InSAR, satellite-/air-/UAV-borne photogrammetry). The velocity values can be derived either from an annualized displacement measurement or from an annualized displacement computed from position measurements. RGV is defined for a single rock glacier unit that is expressed geomorphologicaly according to standards. Time series must be distinguished if they come from different units, even in a unique rock glacier system. Several time series can be measured/computed on the same rock glacier unit when derived from different methodologies. Rock glacier characteristics must be described according to the inventorying baseline concepts (Technical definition and standardized attributes of rock glaciers). In particular, the spatial connection to the upslope unit (e.g. connected to a glacier or not) leads to a specific evolution of rock glacier velocities and has to be documented. Requirements Item needed Unit Metric Value Notes Horizontal Resolution Spatial distributio n of selected rock glaciers G Regional coverage At least 30% of the active talus-connected and/or debris- mantled slope-connected rock glaciers should be selected in a region, which is a part of a mountain range, in order to represent its climatic context. Only possible with remote sensing approaches. B Multiple sites in a defined regional context Allows the definition of a regional trend. T Isolated site Continuous time series produced either from in situ measurements or remotely sensed measurements. Spatial resolution of the measurem ent. 1 value per selected rock glacier unit G Flow field Velocity is computed/measured by aggregation over a target area on a rock glacier unit. The aggregation procedure and the target area should be consistent over time. Allows the best representation of the effective movement over the rock glacier unit. B Few discrete points Velocity is computed/measured as an aggregation of few measurement points over a target area on a rock glacier unit. The aggregation procedure and the target area should be consistent over time. Allows a better representation of the effective movement over the rock glacier unit. T Velocity value at a point Velocity is computed/measured on a single point. The location should be consistent over time and be spatially representative of the rock glacier unit it is taking part (i.e. located within a recognized moving area). Vertical resolution N/A G B T Temporal Resolution y Frequency and Observati on time window G 1 and 1 Measured/computed once a year. The observation time window is 1 year and consistent over time. B 1 and <1 Measured/computed once a year. The observation time window is shorter than 1 year (e.g. observation on summer period only). It should not be shorter than 1 month and must be consistent over time. Allows a better representation of the annual behavior. T 2-5 and > 1 Frequency limited by an observation time window of 2-5 years. This time period corresponds to the common periodicity for aerial image coverages, and can be adapted according to regional/national specificities. Longer intervals are admissible for optical images, as well as for reconstructions from archives. Timeliness month G 3 Minimum time needed for data processing. 
B T 12
Required Measurement Uncertainty | % | Relative error of the velocity data:
G 5%: Allowed relative error of the velocity data to produce a reliable analysis of long-term temporal changes in rock glacier velocity (RGV). The technique must be chosen in accordance with the absolute value measured/computed on the observed rock glacier and the goal relative error of the velocity data.
B 10%
T 20%: Maximal allowed relative error of the velocity data to produce a reliable analysis of long-term temporal changes in rock glacier velocity (RGV). The technique must be chosen in accordance with the absolute value measured/computed on the observed rock glacier and the target relative error of the velocity data.
Stability | y | Overlapping:
G With overlap of several years: Observation time window, horizontal resolution of the velocity value and methodologies/procedures used to measure/compute the velocity value for a single time series must be consistent over time. If one of these elements is changing, two time series must be derived for the selected rock glacier unit. If these two time series have an overlap of several years ensuring consistency, they can be merged into a single time series. The merging procedure must be documented.
B With overlap of 1 year: Observation time window, horizontal resolution of the velocity value and methodologies/procedures used to measure/compute the velocity value for a single time series must be consistent over time. If one of these elements is changing, two time series must be derived for the selected rock glacier unit. If these two time series have an overlap of 1 year ensuring consistency, they can be merged into a single time series. The merging procedure must be documented.
T Without overlap: Observation time window, horizontal resolution of the velocity value and methodologies/procedures used to measure/compute the velocity value for a single time series must be consistent over time. If one of these elements is changing without overlap, two time series must be derived for the selected rock glacier unit.
Standards and References: IPA Action Group Rock glaciers inventories and kinematics ( groups). Standards and definitions: - Technical definition and standardized attributes of rock glaciers ( nt_ Baseline_Concepts_Inventorying_Rock_Glaciers.pdf) - Rock glacier velocity ( nt_ RockGlacierVelocity.pdf)

9. BIOSPHERE

9.1 ECV: Above-Ground Biomass

9.1.1 ECV Product: Above-Ground Biomass (AGB)

Name: Above-Ground Biomass
Definition: Above-ground biomass is defined as the mass of live and/or dead organic matter in terrestrial vegetation.
Unit: Mg ha–1 (dry weight per unit area)
Note: Definition can vary for different observations/products, considering live and/or dead biomass and different vegetation compartments (woody, branches, and leaves). There are differences in what different satellite and in-situ observations actually measure. A clear definition needs to be provided with each measurement/product and consistency is to be ensured; ECV products might include flexibility in information to respond to different definition requirements (i.e. including different estimates for different compartments).
Requirements Item needed Unit Metric Value Notes Horizontal Resolution m Pixel-size G 10 This resolution reflects the need to have biomass data at the scale of human-induced disturbance.
Suitable resolution can vary by ecozone; biomass is a rapidly varying quantity in space and the variance when moving to more detailed spatial resolutions is getting enormous and very hard to be captured efficiently by varying observation sources, especially for natural and tropical forests. Current understanding practices suggest a horizontal resolution of 0.25 ha (50x50 m) outside the (sub-)tropics and a horizontal resolution of 1 ha (100x100 m) in the tropics for global products. In specific regions of interest and areas of active change (forest/land) higher resolution data can be helpful. Higher quality regional biomass maps can be used for the calibration and validation of global products. B 100 This resolution is suitable for most regional vegetation and carbon modeling and assessing the impact of climate extremes. T 1000 This resolution is suitable for global vegetation, carbon and climate models. Vertical Resolution G - N/A, since ECV products provide estimates as total over a certain area without further vertical discrimination. There is however evolving products on tree/vegetation height and structure that are very related to biomass and could eventually be considered as a “third” dimension for biomass ECV products. B - T - Temporal Resolution years Changes in biomass stocks (Mg ha–1) over time (i.e. per year) are important to assess forest carbon gains and losses G 0.5 Intra-annual. Biomass data more detailed than annual time steps are of value for assessing and modeling the impact of disturbances such as fires and forest degradation, and for seasonal variability in biomass productivity. There is also interest for more near-real time updates and estimates of forest biomass changes for (local) enforcement and accounting applications. B 1-2 Annual and bi-annual time steps are used by many models and carbon accounting applications requiring biomass data. T 5-10 Temporal sampling increases are needed to track changes and for long-term biomass trends information every 5-10 years is suitable. Timeliness years G <1 Ideally, biomass measurements become available soon after the acquisition of the data for regular updating in regional hotspots, in case of major disturbances and climate extremes etc. Speed of delivery of biomass information might come at the risk that full quality assurance and independent validation cannot be completed in near-real time as well. B 1-5 Global biomass measurements become available at least one (to a few) year(s) after the acquisition of the data and quality processing and ECV product derivation and validation, as well as long-term consistency is to be ensured. 2022 GCOS ECVs Requirements T >5 Regular reprocessing of historical records. Model applications require long-term consistent biomass datasets that should take advantage of the whole historical data record. Improved and reprocessed historical data records consistent with the recent higher quality ECV estimates should be provided on a regular basis. 
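The resolution notes above distinguish fine-scale biomass mapping (10-100 m) from the roughly 1 km scale used by global vegetation, carbon and climate models. The following minimal sketch shows one simple way to move between those scales: block-averaging a fine-resolution biomass map onto a coarser grid. It assumes an equal-area grid that divides evenly into the coarse blocks; the function name and the 100 m to 1 km factor are illustrative only, not part of any specific product.

```python
# Minimal sketch: block-averaging a fine-resolution above-ground biomass map (Mg ha-1)
# onto a coarser model grid. Assumes an equal-area fine grid that divides evenly.
import numpy as np

def aggregate_agb(agb, factor):
    """Mean of factor x factor blocks of a 2-D biomass array (valid as an area-weighted
    mean only because all fine cells are assumed to have equal area)."""
    ny, nx = agb.shape
    if ny % factor or nx % factor:
        raise ValueError("grid size must be a multiple of the aggregation factor")
    return agb.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fine = rng.gamma(shape=2.0, scale=60.0, size=(100, 100))  # synthetic 100 m map
    coarse = aggregate_agb(fine, factor=10)                   # -> 1 km cells
    print(fine.mean(), coarse.mean())                         # overall means agree
```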
Required Measurement Uncertainty | % (relative) and Mg (absolute), for different biomass classes/ranges | Relative and absolute bias and confidence interval or RMSE, overall and by biomass class/range, derived from using multi-date reference data of higher quality:
G 10%
B 20%
T 30%
Stability | % (relative) and Mg (absolute), for different biomass classes/ranges | Relative and absolute bias and confidence interval or RMSE, overall and by biomass class/range, derived from using multi-date reference data of higher quality:
G 5%: As for uncertainty, stability should be assessed using both relative and absolute bias and RMSE. The stability can be assessed by multi-date independent validation/uncertainty assessments. The stability requirements are tighter than for overall uncertainty since the aim for multi-date ECV data is to provide information on biomass changes.
B 10%
T 20%
Standards and References:

9.2 ECV: Albedo

9.2.1 ECV Product: Spectral and Broadband (Visible, Near Infrared and Shortwave) DHR & BHR6 with Associated Spectral Bidirectional Reflectance Distribution Function (BRDF) Parameters

Name: Spectral and Broadband (visible, near infrared and shortwave) DHR & BHR with Associated Spectral Bidirectional Reflectance Distribution Function (BRDF) parameters (required to derive albedo from reflectance)
Definition: The land surface albedo is the ratio of the radiant flux reflected from Earth's surface to the incident flux. Each spectral/broadband value depends on natural variations and is highly variable in space and time as a result of changes in terrestrial properties and in illumination conditions.
Unit: Dimensionless
Note: Length of record: Threshold: 20 years; Target: > 40 years
Requirements (Item needed | Unit | Metric | Value | Notes):
Horizontal Resolution | m:
G 10: Due to the heterogeneous nature of terrestrial surfaces, having surface albedo at such a scale will increase accuracy for further assimilation into local/regional climate models.
B T 250: Enables assimilation in Earth/climate models.
Vertical Resolution: G -, B -, T -: N/A
Temporal Resolution | day:
G 1: For climate change services. Multi-angular instruments (including geostationary) and/or accumulation of daily data for BRDF parameters retrieval.
B T 10: For assimilation in Earth/climate models. Same as above, but mono-angular.
Timeliness | day:
G 1: For climate change services.
B T 5: For NRT reanalysis.
Required Measurement Uncertainty | % | 1 standard deviation or error covariance matrix, with associated PDF shape (functional form of estimated error distribution for the term):
G 3% for values ≥0.05; 0.0015 (absolute value) for smaller values: "A change of 1% to the Earth's albedo has a radiative effect of 3.4 W/m2." Over snow-free and snow-covered land, climate, biogeochemical, hydrological, and weather forecast models require this uncertainty.
B T 5% for values ≥0.05; 0.0025 for smaller values: See Ohring, et al. 2005.
Stability | % / decade | A factor of uncertainties to demonstrate that the 'error' of the product remains constant over the period, typically a decade or more:
G < 1%: Rate of change of surface albedo over the available time period (per decade). The required stability is some fraction of the expected signal (see Ohring, et al. 2005).
B T < 1.5%
Standards and References:
Boussetta S., Balsamo G., Dutra E., Beljaars A., Albergel C. (2015). Assimilation of surface albedo and vegetation states from satellite observations and their impact on numerical weather prediction, Remote Sensing of Environment, pp. 111-126.
DOI:10.1016/j.rse.2015.03.009
Ohring, G., Wielicki, B., Spencer, R., Emery, B., & Datla, R. (2005). Satellite instrument calibration for measuring global climate change: Report of a workshop. Bulletin of the American Meteorological Society, 86(9), 1303-1314.
6 DHR: Directional Hemispheric Reflectance; BHR: Bidirectional Hemispheric Reflectance.

9.3 ECV: Evaporation from Land

9.3.1 ECV Product: Sensible Heat Flux

Name: Sensible Heat Flux
Definition: The land surface (terrestrial) sensible heat flux represents the conduction of heat from the land surface into the atmosphere.
Unit: W m-2
Note: Current sensible heat flux datasets based on satellite data are often derived as a residual from the energy balance equation based on estimated latent heat fluxes. Due to their analogous use to that of latent heat fluxes by the climate and meteorology community, their user requirements are similar. However, given their lower immediate value for the agricultural and water management community, some differences in the targeted goals are considered.
Requirements (Item needed | Unit | Metric | Value | Notes):
Horizontal Resolution | km | Size of grid cell:
G 1: Scales needed to achieve a realistic estimation considering land cover heterogeneity that may be useful to determine the role of sensible heat fluxes during extreme events (Miralles et al., 2019).
B: –
T 25: Current spatial resolution of global datasets, which has so far been deemed sufficient for climatological applications.
Vertical Resolution: G -, B -, T -: N/A
Temporal Resolution | h | Time:
G 1: Sub-daily processes are needed to represent the evolution of the atmospheric boundary layer during flash droughts or heatwaves (Miralles et al., 2019).
B: –
T 24: Typical temporal resolution of current global datasets, which has so far been deemed sufficient for climatological applications.
Timeliness | d:
G 1: Accurate forecasting of short-term droughts and heatwaves requires data in near real-time (Miralles et al., 2019).
B 30: Scales needed to make sensible heat flux data useful for early drought diagnostics or to improve seasonal weather forecasts (expert judgement).
T 365: Current latency for multiple global datasets, which has so far been deemed sufficient for climatological applications.
Required Measurement Uncertainty | % | Relative root mean square error:
G 10: This will involve an improved differentiation among ecosystems, and enable more efficient weather forecasts of extreme events (expert judgement).
B 20: Intermediate compromise at which datasets can become useful as a drought diagnostic (expert judgement).
T 40: Current level of relative error that has so far been deemed sufficient for climatological applications.
Stability | W m-2 y-1:
G 0.015: Due to the scarcity of studies of sensible heat flux trends (Siemann et al., 2018), we refer to the same stability thresholds as for latent heat fluxes (and in the same units).
B: –
T 0.03: –
Standards and References:
Siemann, A. L., Chaney, N. and Wood, E. F.: Development and Validation of a Long-Term, Global, Terrestrial Sensible Heat Flux Dataset, J. Climate, 31(15), 6073–6095, doi:10.1175/JCLI-D-17-0732.1, 2018.
Miralles, D. G., Gentine, P., Seneviratne, S. I. and Teuling, A. J.: Land-atmospheric feedbacks during droughts and heatwaves: state of the science and current challenges, Ann. N.Y. Acad. Sci., 8, 469–17, doi:10.1111/nyas.13912, 2019.
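The note above states that satellite-based sensible heat flux is often derived as a residual of the surface energy balance from estimated latent heat fluxes. A minimal sketch of that residual computation, H = Rn - G - LE, follows; all input values and the assumption that ground heat flux is roughly 10% of net radiation are illustrative only.

```python
# Minimal sketch of the residual approach described in the note above: sensible heat
# flux estimated from the surface energy balance H = Rn - G - LE, given net radiation,
# ground heat flux and latent heat flux (all in W m-2). Inputs are illustrative.
import numpy as np

def sensible_heat_residual(net_radiation, ground_heat_flux, latent_heat_flux):
    """Surface energy-balance residual H = Rn - G - LE (W m-2)."""
    rn = np.asarray(net_radiation, dtype=float)
    g = np.asarray(ground_heat_flux, dtype=float)
    le = np.asarray(latent_heat_flux, dtype=float)
    return rn - g - le

if __name__ == "__main__":
    rn = np.array([450.0, 520.0, 300.0])   # midday net radiation samples
    g = 0.1 * rn                           # illustrative assumption: G ~ 10% of Rn
    le = np.array([250.0, 310.0, 180.0])   # latent heat flux from an ET product
    print(sensible_heat_residual(rn, g, le))   # -> [155. 158.  90.]
```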
- 224 - 2022 GCOS ECVs Requirements 9.3.2 ECV Product: Latent Heat Flux Name Latent Heat Flux Definition The land surface (terrestrial) latent heat flux is the energy flux associated with the evaporation occurring over land surfaces, and it may comprise three main sources or individual components: bare soil evaporation (direct evaporation of water from soils), interception loss (evaporation of water from wet canopies) and transpiration (plant water consumption), each of which are considered as sub-products. Unit W m-2 Note Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G 0.1 The length scales required to detect spatially heterogeneous responses, particularly if agricultural applications are intended (Fisher et al., 2017; Martens et al., 2018). B 1 Scales needed to achieve a realistic partitioning of evaporation into different components considering land cover heterogeneity (Talsma et al., 2019; Miralles et al., 2016). T 25 Current spatial resolution of global datasets (McCabe et al. 2016; Miralles et al., 2016), which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Vertical Resolution G - N/A B - T - Temporal Resolution hour time G 1 Water management and agricultural applications require to solve evaporation at timeframes associated with sub-daily irrigation decisions and scheduling (Fisher et al., 2017). B 6 Intermediate compromise in which sub-daily processes controlling the evolution of the atmospheric boundary layer can be resolved (McCabe et al. 2016; Miralles et al., 2016). T 24 Typical temporal resolution of current global datasets, which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Timeliness day G 1 Water management and agricultural applications require data in near real-time (Fisher et al., 2017). B 30 Scales needed to make evaporation data useful for early drought diagnostic or to improve seasonal weather forecasts (expert judgement). T 365 Current latency for multiple global datasets, which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Required Measuremen t Uncertainty % relative root mean square error G 10 This will involve an improved differentiation of water use and water stress among different crops, species, and ecosystems, and will enable more efficient water management (Fisher et al., 2017). B 20 Intermediate compromise in which datasets can become useful as drought diagnostic or as a water management asset (expert judgement). T 40 Current level of relative error (McCabe et al. 2016); this level has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Stability W m-2 y-1 G 0.015 Approximately half of the current spread in the multi-datasets estimates of the global trend in evaporation (Zang et al., 2016). B – – T 0.03 Current estimates of the trend in the evaporation, but also the estimates of the spread in the estimates of these trends by different datasets (Zhang et al 2016). - 225 - 2022 GCOS ECVs Requirements Standards and References Fisher, J. B., Melton, F., Middleton, E., Hain, C., Anderson, M., Allen, R., Mccabe, M. F., Hook, S., Baldocchi, D., Townsend, P. A., Kilic, A., Tu, K., Miralles, D. D., Perret, J., Lagouarde, J.-P., Waliser, D., Purdy, A. J., French, A., Schimel, D., Famiglietti, J. S., Stephens, G. and Wood, E. 
F.: The future of evapotranspiration: Global requirements for ecosystem functioning, carbon and climate feedbacks, agricultural management, and water resources, Water Resour. Res., 53(4), 2618–2626, doi:10.1002/2016WR020175, 2017. Martens, B., de Jeu, R., Verhoest, N., Schuurmans, H., Kleijer, J. and Miralles, D.: Towards Estimating Land Evaporation at Field Scales Using GLEAM, Remote Sensing, 10(11), 1720–25, doi:10.3390/rs10111720, 2018. Mccabe, M. F., Ershadi, A., Jiménez, C., Miralles, D. G., Michel, D. and Wood, E. F.: The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally gridded forcing data, Geosci. Model Dev., 9(1), 283–305, doi:10.5194/gmd-9-283-2016, 2016. Miralles, D. G., Jiménez, C., Jung, M., Michel, D., Ershadi, A., Mccabe, M. F., Hirschi, M., Martens, B., Dolman, A. J., Fisher, J. B., Mu, Q., Seneviratne, S. I., Wood, E. F. and Fernández-Prieto, D.: The WACMOS-ET project – Part 2: Evaluation of global terrestrial evaporation data sets, Hydrol. Earth Syst. Sci., 20(2), 823–842, doi:10.5194/hess-20-823-2016, 2016. Miralles, D. G., Gentine, P., Seneviratne, S. I. and Teuling, A. J.: Land-atmospheric feedbacks during droughts and heatwaves: state of the science and current challenges, Ann. N.Y. Acad. Sci., 8, 469–17, doi:10.1111/nyas.13912, 2019. Talsma, C., Good, S., Miralles, D., Fisher, J., Martens, B., Jiménez, C. and Purdy, A.: Sensitivity of Evapotranspiration Components in Remote Sensing-Based Models, Remote Sensing, 10(10), 1601– 28, doi:10.3390/rs10101601, 2018. Zhang, Y., Peña-Arancibia, J. L., Mcvicar, T. R., Chiew, F. H. S., Vaze, J., Liu, C., Lu, X., Zheng, H., Wang, Y., Liu, Y. Y., Miralles, D. G. and Pan, M.: Multi-decadal trends in global terrestrial evapotranspiration and its components, Sci. Rep., 1–12, doi:10.1038/srep19124, 2016. - 226 - 2022 GCOS ECVs Requirements 9.3.3 ECV Product: Bare Soil Evaporation Name Bare Soil Evaporation Definition The component of the total latent heat flux that corresponds to the direct evaporation of soil moisture into the atmosphere. Unit W m-2 Note The requirements are analogous to those of the total latent heat flux, because the applications are the same. Several studies have shown, however, that the accuracy of the latent heat flux can still be adequate despite a higher uncertainty in the evaporation components (i.e. bare soil evaporation, transpiration and interception loss) – see e.g. Miralles et al. (2016), Talsma et al. (2018). For that reason, the uncertainty goals have been subjectively relaxed based on expert judgement. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G 0.1 The length scales required to detect spatially heterogeneous responses, particularly if agricultural applications are intended (Fisher et al., 2017; Martens et al., 2018). B 1 Scales needed to achieve a realistic partitioning of evaporation into different components considering land cover heterogeneity (Talsma et al., 2019; Miralles et al., 2016). T 25 Current spatial resolution of global datasets (McCabe et al. 2016; Miralles et al., 2016), which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Vertical Resolution G - N/A B - T - Temporal Resolution h time G 1 Water management and agricultural applications require to solve evaporation at timeframes associated with sub-daily irrigation decisions and scheduling (Fisher et al., 2017). 
B 6 Intermediate compromise in which sub-daily processes controlling the evolution of the atmospheric boundary layer can be resolved (McCabe et al. 2016; Miralles et al., 2016). T 24 Typical temporal resolution of current global datasets, which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Timeliness d G 1 Water management and agricultural applications require data in near real-time (Fisher et al., 2017). B 30 Scales needed to make bare soil evaporation data useful for early drought diagnostic or to improve seasonal weather forecasts (expert judgement). T 365 Current latency for multiple global datasets, which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Required Measurement Uncertainty % relative root mean square error G 20 This will enable more efficient water management (Fisher et al., 2017). B 30 Intermediate compromise in which datasets can become useful as drought diagnostic or as a water management asset (expert judgement). T 50 Current level of relative error (Talsma et al., 2018); this level has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Stability W m-2 y-1 G 0.015 Approximately half of the current spread in the multi-datasets estimates of the global trend in evaporation (Zang et al., 2016). B – – T 0.03 Current estimates of the trend in the evaporation, but also the estimates of the spread in the estimates of these trends by different datasets (Zhang et al 2016). - 227 - 2022 GCOS ECVs Requirements Standards and References Fisher, J. B., Melton, F., Middleton, E., Hain, C., Anderson, M., Allen, R., Mccabe, M. F., Hook, S., Baldocchi, D., Townsend, P. A., Kilic, A., Tu, K., Miralles, D. D., Perret, J., Lagouarde, J.-P., Waliser, D., Purdy, A. J., French, A., Schimel, D., Famiglietti, J. S., Stephens, G. and Wood, E. F.: The future of evapotranspiration: Global requirements for ecosystem functioning, carbon and climate feedbacks, agricultural management, and water resources, Water Resour. Res., 53(4), 2618–2626, doi:10.1002/2016WR020175, 2017. Martens, B., de Jeu, R., Verhoest, N., Schuurmans, H., Kleijer, J. and Miralles, D.: Towards Estimating Land Evaporation at Field Scales Using GLEAM, Remote Sensing, 10(11), 1720–25, doi:10.3390/rs10111720, 2018. Mccabe, M. F., Ershadi, A., Jiménez, C., Miralles, D. G., Michel, D. and Wood, E. F.: The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally gridded forcing data, Geosci. Model Dev., 9(1), 283–305, doi:10.5194/gmd-9-283-2016, 2016. Miralles, D. G., Jiménez, C., Jung, M., Michel, D., Ershadi, A., Mccabe, M. F., Hirschi, M., Martens, B., Dolman, A. J., Fisher, J. B., Mu, Q., Seneviratne, S. I., Wood, E. F. and Fernández-Prieto, D.: The WACMOS-ET project – Part 2: Evaluation of global terrestrial evaporation data sets, Hydrol. Earth Syst. Sci., 20(2), 823–842, doi:10.5194/hess-20-823-2016, 2016. Miralles, D. G., Gentine, P., Seneviratne, S. I. and Teuling, A. J.: Land-atmospheric feedbacks during droughts and heatwaves: state of the science and current challenges, Ann. N.Y. Acad. Sci., 8, 469–17, doi:10.1111/nyas.13912, 2019. Talsma, C., Good, S., Miralles, D., Fisher, J., Martens, B., Jiménez, C. and Purdy, A.: Sensitivity of Evapotranspiration Components in Remote Sensing-Based Models, Remote Sensing, 10(10), 1601–28, doi:10.3390/rs10101601, 2018. Zhang, Y., Peña-Arancibia, J. L., Mcvicar, T. R., Chiew, F. H. S., Vaze, J., Liu, C., Lu, X., Zheng, H., Wang, Y., Liu, Y. 
Y., Miralles, D. G. and Pan, M.: Multi-decadal trends in global terrestrial evapotranspiration and its components, Sci. Rep., 1–12, doi:10.1038/srep19124, 2016. - 228 - 2022 GCOS ECVs Requirements 9.3.4 ECV Product: Interception Loss Name Interception Loss Definition The component of the total latent heat flux that corresponds to the precipitation that is intercepted by vegetation and evaporated directly. Unit W m-2 Note The requirements are analogous to those of the total latent heat flux, because the applications are the same. Several studies have shown, however, that the accuracy of the latent heat flux can still be adequate despite a higher uncertainty in the evaporation components (i.e. bare soil evaporation, transpiration and interception loss) – see e.g. Miralles et al. (2016), Talsma et al. (2018). For that reason, the uncertainty goals have been subjectively relaxed based on expert judgement. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G 0.1 The length scales required to detect spatially heterogeneous responses, particularly if agricultural applications are intended (Fisher et al., 2017; Martens et al., 2018). B 1 Scales needed to achieve a realistic partitioning of evaporation into different components considering land cover heterogeneity (Talsma et al., 2019; Miralles et al., 2016). T 25 Current spatial resolution of global datasets (McCabe et al. 2016; Miralles et al., 2016), which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 Water management and agricultural applications require to solve evaporation at timeframes associated with sub-daily irrigation decisions and scheduling (Fisher et al., 2017). B 6 Intermediate compromise in which sub-daily processes controlling the evolution of the atmospheric boundary layer can be resolved (McCabe et al. 2016; Miralles et al., 2016). T 24 Typical temporal resolution of current global datasets, which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Timeliness d G 1 Water management and agricultural applications require data in near real-time (Fisher et al., 2017). B 30 Scales needed to make interception loss needed to (e.g.) improve seasonal weather or hydrological forecasts (expert judgement). T 365 Current latency for multiple global datasets, which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Required Measurement Uncertainty % relative root mean square error G 20 This will enable more efficient water management (Fisher et al., 2017). B 30 Intermediate compromise in which datasets can become useful as a water management asset (expert judgement). T 50 Current level of relative error (Talsma et al., 2018); this level has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Stability W m-2 y-1 G 0.015 Approximately half of the current spread in the multi-datasets estimates of the global trend in evaporation (Zang et al., 2016). B – – T 0.03 Current estimates of the trend in the evaporation, but also the estimates of the spread in the estimates of these trends by different datasets (Zhang et al 2016). - 229 - 2022 GCOS ECVs Requirements Standards and References Fisher, J. B., Melton, F., Middleton, E., Hain, C., Anderson, M., Allen, R., Mccabe, M. F., Hook, S., Baldocchi, D., Townsend, P. A., Kilic, A., Tu, K., Miralles, D. 
D., Perret, J., Lagouarde, J.-P., Waliser, D., Purdy, A. J., French, A., Schimel, D., Famiglietti, J. S., Stephens, G. and Wood, E. F.: The future of evapotranspiration: Global requirements for ecosystem functioning, carbon and climate feedbacks, agricultural management, and water resources, Water Resour. Res., 53(4), 2618–2626, doi:10.1002/2016WR020175, 2017. Martens, B., de Jeu, R., Verhoest, N., Schuurmans, H., Kleijer, J. and Miralles, D.: Towards Estimating Land Evaporation at Field Scales Using GLEAM, Remote Sensing, 10(11), 1720–25, doi:10.3390/rs10111720, 2018. Mccabe, M. F., Ershadi, A., Jiménez, C., Miralles, D. G., Michel, D. and Wood, E. F.: The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally gridded forcing data, Geosci. Model Dev., 9(1), 283–305, doi:10.5194/gmd-9-283-2016, 2016. Miralles, D. G., Gentine, P., Seneviratne, S. I. and Teuling, A. J.: Land-atmospheric feedbacks during droughts and heatwaves: state of the science and current challenges, Ann. N.Y. Acad. Sci., 8, 469–17, doi:10.1111/nyas.13912, 2019. Miralles, D. G., Jiménez, C., Jung, M., Michel, D., Ershadi, A., Mccabe, M. F., Hirschi, M., Martens, B., Dolman, A. J., Fisher, J. B., Mu, Q., Seneviratne, S. I., Wood, E. F. and Fernández-Prieto, D.: The WACMOS-ET project – Part 2: Evaluation of global terrestrial evaporation data sets, Hydrol. Earth Syst. Sci., 20(2), 823–842, doi:10.5194/hess-20-823-2016, 2016. Talsma, C., Good, S., Miralles, D., Fisher, J., Martens, B., Jiménez, C. and Purdy, A.: Sensitivity of Evapotranspiration Components in Remote Sensing-Based Models, Remote Sensing, 10(10), 1601– 28, doi:10.3390/rs10101601, 2018. Zhang, Y., Peña-Arancibia, J. L., Mcvicar, T. R., Chiew, F. H. S., Vaze, J., Liu, C., Lu, X., Zheng, H., Wang, Y., Liu, Y. Y., Miralles, D. G. and Pan, M.: Multi-decadal trends in global terrestrial evapotranspiration and its components, Sci. Rep., 1–12, doi:10.1038/srep19124, 2016. - 230 - 2022 GCOS ECVs Requirements 9.3.5 ECV Product: Transpiration Name Transpiration Definition The component of the total latent heat flux that corresponds to the vegetation consumption of water. Unit W m-2 Note The requirements are analogous to those of the total latent heat flux, because the applications are the same. Several studies have shown, however, that the accuracy of the latent heat flux can still be adequate despite a higher uncertainty in the evaporation components (i.e. bare soil evaporation, transpiration and interception loss) – see e.g. Miralles et al. (2016), Talsma et al. (2018). For that reason, the uncertainty goals have been subjectively relaxed based on expert judgement. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G 0.1 Required to detect spatially heterogeneous responses, particularly if agricultural applications are intended (Fisher et al., 2017; Martens et al., 2018). B 1 Required to achieve a realistic partitioning of evaporation into different components considering land cover heterogeneity (Talsma et al., 2019; Miralles et al., 2016). T 25 Current spatial resolution of global datasets (McCabe et al. 2016; Miralles et al., 2016), which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Vertical Resolution G - N/A B - T - Temporal Resolution h G 1 Water management and agricultural applications require to solve evaporation at timeframes associated with sub-daily irrigation decisions and scheduling (Fisher et al., 2017). 
B 6 Intermediate compromise in which sub-daily processes controlling the evolution of the atmospheric boundary layer can be resolved (McCabe et al. 2016; Miralles et al., 2016). T 24 Typical temporal resolution of current global datasets, which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Timeliness d G 1 Water management and agricultural applications require data in near real-time (Fisher et al., 2017). B 30 Scales needed to make transpiration data useful for early drought diagnostic or to improve seasonal weather forecasts (expert judgement). T 365 Current latency for multiple global datasets, which has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Required Measurement Uncertainty % relative root mean square error G 20 This will involve an improved differentiation of water use and water stress among different crops, species, and ecosystems, and will enable more efficient water management (Fisher et al., 2017). B 40 Intermediate compromise in which datasets can become useful as drought diagnostic or as a water management asset (expert judgement). T 50 Current level of relative error (Talsma et al., 2018); this level has so far been deemed sufficient for climatological applications (Fisher et al., 2017). Stability W m- 2 year- 1 G 0.015 Approximately half of the current spread in the multi-datasets estimates of the global trend in evaporation (Zang et al., 2016). B – – T 0.03 Current estimates of the trend in the evaporation, but also the estimates of the spread in the estimates of these trends by different datasets (Zhang et al 2016). - 231 - 2022 GCOS ECVs Requirements Standards and References Fisher, J. B., Melton, F., Middleton, E., Hain, C., Anderson, M., Allen, R., Mccabe, M. F., Hook, S., Baldocchi, D., Townsend, P. A., Kilic, A., Tu, K., Miralles, D. D., Perret, J., Lagouarde, J.-P., Waliser, D., Purdy, A. J., French, A., Schimel, D., Famiglietti, J. S., Stephens, G. and Wood, E. F.: The future of evapotranspiration: Global requirements for ecosystem functioning, carbon and climate feedbacks, agricultural management, and water resources, Water Resour. Res., 53(4), 2618–2626, doi:10.1002/2016WR020175, 2017. Martens, B., de Jeu, R., Verhoest, N., Schuurmans, H., Kleijer, J. and Miralles, D.: Towards Estimating Land Evaporation at Field Scales Using GLEAM, Remote Sensing, 10(11), 1720–25, doi:10.3390/rs10111720, 2018. Mccabe, M. F., Ershadi, A., Jiménez, C., Miralles, D. G., Michel, D. and Wood, E. F.: The GEWEX LandFlux project: evaluation of model evaporation using tower-based and globally gridded forcing data, Geosci. Model Dev., 9(1), 283–305, doi:10.5194/gmd-9-283-2016, 2016. Miralles, D. G., Gentine, P., Seneviratne, S. I. and Teuling, A. J.: Land-atmospheric feedbacks during droughts and heatwaves: state of the science and current challenges, Ann. N.Y. Acad. Sci., 8, 469– 17, doi:10.1111/nyas.13912, 2019. Miralles, D. G., Jiménez, C., Jung, M., Michel, D., Ershadi, A., Mccabe, M. F., Hirschi, M., Martens, B., Dolman, A. J., Fisher, J. B., Mu, Q., Seneviratne, S. I., Wood, E. F. and Fernández- Prieto, D.: The WACMOS-ET project – Part 2: Evaluation of global terrestrial evaporation data sets, Hydrol. Earth Syst. Sci., 20(2), 823–842, doi:10.5194/hess-20-823-2016, 2016. Talsma, C., Good, S., Miralles, D., Fisher, J., Martens, B., Jiménez, C. and Purdy, A.: Sensitivity of Evapotranspiration Components in Remote Sensing-Based Models, Remote Sensing, 10(10), 1601– 28, doi:10.3390/rs10101601, 2018. 
Zhang, Y., Peña-Arancibia, J. L., McVicar, T. R., Chiew, F. H. S., Vaze, J., Liu, C., Lu, X., Zheng, H., Wang, Y., Liu, Y. Y., Miralles, D. G. and Pan, M.: Multi-decadal trends in global terrestrial evapotranspiration and its components, Sci. Rep., 1–12, doi:10.1038/srep19124, 2016.
9.4 ECV: Fire
9.4.1 ECV Product: Burned Area
Name Burned area
Definition Burned area is described by a grid where each cell is labelled as burnt if the majority of that cell is classified as containing burned vegetation.
Unit m2
Note
Requirements (Item needed, Unit, Metric, Value, Derivation and References and Standards):
Horizontal Resolution m Minimum mapping unit to which the BA product refers G 10 The 10 m goal reflects the need to better map small and spatially fragmented burned areas that cannot be resolved at lower spatial resolution, and reflects the spatial resolution provided by recent (Sentinel-2) and planned (Landsat Next) global-coverage EO missions. B 100 Products based on higher resolution have shown higher sensitivity to small fires, even though coarse-resolution RS products still miss most small fires (Chuvieco et al. 2022). T 1000 The 1000 m threshold reflects experience using heritage AVHRR LAC data. Burned area products can be aggregated to lower spatial resolution (e.g. 0.25-degree grid cells) for climate modeling applications. Most climate modelers work at coarse-resolution grids; 0.25 degrees is the most common. A recent review of users of RS BA products shows that most of them work at this level of detail (updated by Heil 2019). A review of users of BA products can be found in Mouillot et al. 2014 and Chuvieco et al. 2019.
Vertical Resolution G - N/A B - T -
Temporal Resolution d Minimum temporal period to which the BA product refers G 1 Mostly for atmospheric modelers. A questionnaire to atmospheric and carbon modelers done in 2011 suggested 1–2 days, but this was recently updated to 1 day or even 6 hours (Heil 2019). B 10 Based on a questionnaire to atmospheric and carbon modelers done in 2011, updated in Heil 2019. T 30 Based on the same questionnaire as above.
Timeliness d Days until the BA product is accessible after fires occurred G 10 Based on the same questionnaire as above. B 120 T 360
Required Measurement Uncertainty % Average omission and commission errors G 5 Based on the same questionnaire as above. B 15 T 25
Stability Measures of omission and commission over the available time period Assessment of whether a monotonic trend exists based on the slope of the relationship between an accuracy measure and time G 0 Some potential metrics of stability have been published in the last few years (Padilla et al. 2014), but there is not yet international agreement on which one is most suitable for measuring BA consistency. Padilla et al. proposed that the slope b of the change of accuracy per year be estimated through a nonparametric linear regression. In addition, the temporal monotonic trend of accuracy (i.e. b different from zero) is tested with Kendall's tau statistic (Conover 1999, Section 5.4). A statistically significant test result would indicate that the accuracy measure presents temporal instability, as it would have a significant increase or decrease over time. B 1 T 2
Standards and References
Chuvieco, E., Mouillot, F., van der Werf, G.R., San Miguel, J., Tanasse, M., Koutsias, N., García, M., Yebra, M., Padilla, M., Gitas, I., Heil, A., Hawbaker, T.J., & Giglio, L. (2019).
Historical background and current developments for mapping burned area from satellite Earth observation. Remote Sensing of Environment, 225, 45-64. Chuvieco, E., Roteta, E., Sali, M., Stroppiana, D., Boettcher, M., Kirches, G., Khairoun, A., Pettinari, L., Franquesa, M., & Albergel, C. (2022). Building a small fire database for Sub-Saharan Africa from Sentinel-2 high-resolution images. Science of the Total Environment, Volume 845, 157139 Heil, A. (2019). ESA CCI ECV Fire Disturbance: D1.1 User requirements document, version 6.0. In. Available from: Mouillot, F., Schultz, M.G., Yue, C., Cadule, P., Tansey, K., Ciais, P., & Chuvieco, E. (2014). Ten years of global burned area products from spaceborne remote sensing—A review: Analysis of user needs and recommendations for future developments. International Journal of Applied Earth Observation and Geoinformation, 26, 64-79. Padilla, M., Stehman, S.V., Litago, J., & Chuvieco, E. (2014). Assessing the Temporal Stability of the Accuracy of a Time Series of Burned Area Products. Remote Sensing, 6, 2050-2068. Roteta, E., Bastarrika, A., Storm, T., & Chuvieco, E. (2019). Development of a Sentinel-2 burned area algorithm: generation of a small fire database for northern hemisphere tropical Africa Remote Sensing of Environment, 222, 1-17. - 234 - 2022 GCOS ECVs Requirements 9.4.2 ECV Product: Active Fires Name Active Fires Definition Presence of a temporal thermal anomaly within a grid cell. Those thermal anomalies that are permanent should be linked to other sources of thermal emission (volcanos, gas flaring, industrial or power plants). Generally, the active fire maps are defined by the satellite overpass time (date/hour) when the thermal anomaly was detected. Unit m2 Note Requirements Item needed Unit Metric Value Derivation and References and Standards Horizontal Resolution m Minimum mapping unit to which the AF product refers G 50 This resolution reflects need to detect small and cool fires (including underground peat fires and fires occurring under forest canopies) and is mostly required by fire managers and fire extinction services B 250 Useful for fire risk assessment and better understanding of fire risk factors T 5000 5000m threshold reflects experience using legacy AVHRR GAC data. Most climate modelers work at coarse resolution grids, 0.25 d is the most common. A recent review of users of RS BA products show that most of them work at this level of detail ( updated by Heil 2019). Vertical Resolution G - N/A B - T - Temporal Resolution min Minimum temporal period to which the AF product refers (values specified regardless of cloud conditions) G 5 5 min goal reflects need to detect rapidly moving and short-lived fires. For fire management purposes, active fire detection should be done very frequently. Atmospheric modelers also require updated information on fire activity B 120 2-hour breakthrough reflects need to monitor diurnal active fire variability T 720 12-hour threshold reflects experience with legacy fire data sets. Needed by atmospheric and carbon modelers. Timeliness d Time lapse between satellite overpass and AF availability G 1 Requirement values reflect need to analyse climate anomalies and their effects shortly after fire occurrence. 
A timeliness of 10 minutes (achievable using new geostationary satellites) will be needed by fire managers and atmospheric modelers of smoke impacts on human health B 7 T 365 Reporting on fire activity Required Measurement Uncertainty % Average omission and commission errors G 5% Based on a questionnaire to atmospheric and carbon modelers done in 2011: updated in Heil 2019 B 5% Based on the same questionnaire as above T 5% Based on the same questionnaire as above Stability Measures of omission and commission over the available time period Assessment of whether a monotonic trend exists based on the slope of the relationship between an accuracy measure and time G 0% Percentage reflects the relative increase of decrease in reported global total count of active fire detection gridcells over a 10-year period B 1% T 2% - 235 - 2022 GCOS ECVs Requirements Standards and References Giglio, L. et al. (2013) Analysis of daily, monthly, and annual burned area using the fourth-generation global fire emissions database (GFED4). Journal of Geophysical Research: Biogeosciences. [Online] 118 (1), 317–328. Giglio, L. (2007) Characterization of the tropical diurnal fire cycle using VIRS and MODIS observations. Remote Sensing of Environment. [Online] 108 (4), 407–421 Heil, A. (2019). ESA CCI ECV Fire Disturbance: D1.1 User requirements document, version 6.0. In. Available from: Mouillot, F., Schultz, M.G., Yue, C., Cadule, P., Tansey, K., Ciais, P., & Chuvieco, E. (2014). Ten years of global burned area products from spaceborne remote sensing—A review: Analysis of user needs and recommendations for future developments. International Journal of Applied Earth Observation and Geoinformation, 26, 64-79. Wooster, M. J. et al. (2021) Satellite remote sensing of active fires: History and current status, applications and future requirements. Remote Sensing of Environment. [Online] 267112694. with respect to active fires burning with FRP equal to 5 MW km-2 in the detector ground footprint with respect to active fires burning with FRP equal to 10 MW km-2 in the detector ground footprint with respect to active fires burning with FRP equal to 20 MW km-2 in the detector ground footprint - 236 - 2022 GCOS ECVs Requirements 9.4.3 ECV Product: Fire Radiative Power (FRP) Name Fire Radiative Power (FRP) Definition Energy per unit time released by all fires burning within the pixel footprint. This variable is a function of actual temperature of the active fire at the satellite overpass and the proportion of the grid cell being burned. 
Unit W (or MW) Note Requirements Item needed Unit Metric Value Derivation and References and Standards Horizontal Resolution m Minimum mapping unit to which the FRP product refers G 50 Reflects need to characterize small and cool fires including underground peat fires and fires occurring under forest canopies B 250 T 5000 Reflects experience using legacy AVHRR GAC data Vertical Resolution G - N/A B - T - Temporal Resolution min Minimum temporal period to which the FRP product refers (values specified regardless of cloud conditions) G 5 5 min goal reflects need to characterize rapidly moving and short-lived fires B 120 2-hour breakthrough reflects need to monitor diurnal active fire variability T 720 12-hour threshold reflects experience with legacy fire data sets Timeliness d Time lapse between satellite overpass and AF availability G 1 For climate applications timeliness is less critical B 7 Requirement values reflect need to analyze climate anomalies and their effects shortly after fire occurrence T 365 Required Measurement Uncertainty MW km-2 of detector ground footprint Average deviation between estimated and observed FRP G 0.5 Goal based on need to quantify FRP of small and cool smoldering fires B 1 T 2 Stability % Assessment of whether a monotonic trend exists based on the slope of the relationship between an accuracy measure and time G 0 Percentage reflects the relative increase of decrease in reported global mean FRP for total burned area over a 10-year period B 1 T 2 Standards and References Giglio, L. et al. (2016) The collection 6 MODIS active fire detection algorithm and fire products. Remote Sensing of Environment. [Online] 17831–41. Roberts, G. et al. (2018) Investigating the impact of overlying vegetation canopy structures on fire radiative power (FRP) retrieval through simulation and measurement. Remote Sensing of Environment. [Online] 217158–171. Wooster, M. J. et al. (2021) Satellite remote sensing of active fires: History and current status, applications and future requirements. Remote Sensing of Environment. [Online] 267112694. - 237 - 2022 GCOS ECVs Requirements 9.5 ECV: Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) 9.5.1 ECV Product: Fraction of Absorbed Photosynthetically Active Radiation Name Fraction of Absorbed Photosynthetically Active Radiation Definition FAPAR is defined as the fraction of photosynthetically active radiation (PAR, i.e. the solar radiation reaching the surface in the 0.4-0.7μm spectral region) that is absorbed by vegetation canopy. Both black-sky (assuming only direct radiation) and white-sky (assuming that all the incoming radiation is in the form of isotropic diffuse radiation) FAPAR values may be considered. Similarly FAPAR can also be angularly integrated or instantaneous (i.e., at the actual sun position of measurement). Leaves-only FAPAR refers to the fraction of PAR radiation absorbed by live leaves only, i.e., contributing to the photosynthetic activity within leaf cells. Unit dimensionless Note FAPAR plays a critical role in assessing the primary productivity of canopies, the associated fixation of atmospheric CO2 and the energy balance of the surface. Length of record: Threshold: 20 years; Target: >40 years Requirements Item needed Unit Metric Value Notes Horizontal Resolution m G 10 Application at 10 m for Climate Adaptation, CO2 fluxnet up scaling. Best practices B T 250 Scale needed for regional and global climate modeling. 
Vertical Resolution - N/A - -
Temporal Resolution d G 1 When assimilated by a model, this value corresponds to the climate model temporal resolution, in order to derive a better phenology accuracy. B T 10 When used for crop or ecosystem modeling, or for Land Surface / Earth System Model evaluation.
Timeliness d G 1 In order to be useful in climate change services. B 5 In order to be useful in environmental change services. Can be longer (~months) for historic climate/environmental change assessments. T 10 In order to be useful in environmental change services.
Required Measurement Uncertainty % 1 standard deviation or error covariance matrix, with associated PDF shape (functional form of estimated error distribution for the term) G 5% for values ≥0.05; 0.0025 (absolute value) for smaller values The values were assessed through the physical link between FAPAR and the LAI and surface albedo uncertainties. B T 10% for values >0.05; 0.005 (absolute value) for smaller values The threshold value of uncertainty was assessed through the physical link between FAPAR and the LAI and surface albedo uncertainties.
Stability % Assessment of whether a trend exists with respect to reference data, taking into account the definition, i.e. white-sky or black-sky and total versus 'green foliage' G <1.5 'The required stability is some fraction of the expected signal' (see Ohring et al. 2005). In the case that we have data over 10 years (= one decade), N = 10 and U = 5%. Assuming U constant along the period, it means S = SQRT(N·U^2)/N = SQRT(N)·U/N ≈ 0.3·U = 0.3 × 5/100.0 ≈ 1.5%. This number should be smaller than the expected FAPAR trend. B T <3 Same as above with U = 10%.
Standards and References
9.6 ECV: Land Cover
9.6.1 ECV Product: Land Cover
Name Land Cover
Definition Land cover is defined as the observed (bio)-physical cover on the Earth's surface for regional and global climate applications.
Unit Primary units are categories (binary variables such as forest or cropland) or continuous variables classifiers (e.g. fraction of tree canopy cover in percent). Secondary outputs include surface area of land cover/use types and land cover/use changes (in ha). The UN/FAO Land Cover Classification System (LCCS) + C3/C4 sub-classification should be used, with cross-walking tables to other common classifications.
Note Land cover can be variable in time due to land changes and phenology.
Requirements (Item needed, Unit, Metric, Value, Notes):
Horizontal Resolution m G 100-300 Most climate users are satisfied by a horizontal resolution of 300 m if it can be provided for long time spans. B 300 m-1 km Suitable for regional (climate) modeling. T >1 km Suitable for global (climate) modelers.
Vertical Resolution G - N/A, since ECV products provide estimates as totals over a certain area without further vertical discrimination. There is currently no consideration of the third dimension for land ECV products, though some of the definitions (such as forests) often use, among others, minimum height criteria. B - T -
Temporal Resolution month time G 1 Monthly. Allows regrowth, phenology, and changes in water extent related to seasonality to be detected. B 12 Yearly. Inter-annual changes can be detected. T 60 Every 5 years. Suitable scale for longer-term mapping, related to broader land cover change dynamics.
Timeliness month G 3 Seasonal.
Ideally, land cover data become available soon after the acquisition of the data but quality processing and ECV product derivation and accuracy assessment, as well as, long-term consistency is to be ensured to track changes and trends. These frequent changes may be relevant for land managers who can react quickly to changes. B 12 Annual and bi-annual reporting applications. Policy makers will be able to develop and assess policies based on regular updates and observed changes. T 60 Every 5 years. Suitable for longer-term mapping, related to broader land cover change dynamics. Temporal Extent (Time span) year G >50 Historic changes which most users are interested in are captured. Only be achieved with modeling approaches using non-earth observation data sources (i.e. historical maps) B 10-50 Historic changes can be assessed for the Earth observation era. T 0 (one time only) Only current and potentially future data are available, but this is useful for those who require current status products, for example for modelling, and static assessments. Required Measurement Uncertainty % for accuracy and errors of omission and commissi on and hectares for area estimates incl. 95 Primary: overall map accuracy and errors of omission and commissi on for individual land G 5 For reporting purposes, this would allow sufficient accuracy, where all classes have high accuracies. An independent accuracy assessment using statistically robust, global or regional reference data of higher quality is required for any ECV land cover product. B 20 For other uses, this would be sufficient – it would be expected that some classes would have higher accuracy -for example confusion between built-up and forest would be lower, but confusion between agriculture and bare might be higher. An independent accuracy assessment using statistically robust, global or regional reference data of higher quality is required for any ECV land cover product. - 240 - 2022 GCOS ECVs Requirements % confidenc e intervals cover categorie s and types of change (incl. confidenc e interval). Secondar y: bias for area estimates (incl. confidenc e intervals) T 35 This threshold would be suitable for maximum commission/omission error for individual categories. Overall accuracy might be expected to be higher. An independent accuracy assessment using statistically robust, global or regional reference data of higher quality is required for any ECV land cover product. Stability % incl. 95 % confide nce interval s Primary: errors of omission and commissio n for individual land cover categories and types of change (incl. confidenc e interval) G 5 Stability is important for long-term land cover datasets where multiple sensors are used to generate a time series dataset. High stability is required for assessing long-term trends. The stability can be assessed by multi-date independent accuracy assessment. The stability requirements are tighter that for overall uncertainty since the aim for multi- date ECV data is to provide information on changes and trends. B 15 T 25 Standards and References - 241 - 2022 GCOS ECVs Requirements 9.6.2 ECV Product: Maps of High-Resolution Land Cover Name Maps of High-Resolution Land Cover Definition High Resolution Land Cover is the observed (bio)-physical cover on the Earth’s surface for monitoring changes at local scales (suitable for adaptation and mitigation). Unit Primary units are categories (binary variables such as forest or cropland) or continuous variables classifiers (e.g. fraction of tree canopy cover in percent). 
Secondary outputs include surface area of land cover/use types and land cover/use changes (in ha). Note It can also be variable in time due to land changes and phenology. Requirements Item needed Unit Metric Value Notes Horizontal Resolution m Size of grid cell G <10 Suitable for local land managers - specifically for targeted applications in climate change mitigation and adaptation. Small features such as green spaces within cities are visible and changes to water extent (in particular change in river courses) also become visible at this resolution. More detailed land cover descriptions are more. B 10-30 Can identify human induced land change at regional levels. Most features of interest are visible, and broad changes captured. T 30-100 Broad landscape typologies and changes across landscapes are visible, so suitable for landscape management. Vertical Resolution G - N/A, since ECV products provide estimates as total over a certain area with further vertical discrimination. There is currently no consideration of the third dimension for land ECV products though some of the definitions (such as forests) often use, among others, a minimum height criteria. B - T - Temporal Resolution month G 1 Monthly. Allows regrowth, phenology, changes in water extent related to seasonality to be detected. B 12 Yearly. Inter-annual changes can be detected T 60 Every 5 years. Suitable scale for longer-term mapping, related to broader land cover change dynamics. Timeliness month G 3 Seasonal. Ideally, land cover data become available soon after the acquisition of the data but quality processing and ECV product derivation and accuracy assessment, as well as, long-term consistency is to be ensured to track changes and trends. These frequent changes may be relevant for land managers who can react quickly to changes. B 12 Annual and bi-annual reporting applications. Policy makers will be able to develop and assess policies based on regular updates and observed changes. T 60 Every 5 years. Suitable scale for longer-term mapping, related to broader land cover change dynamics. Temporal Extent (Time span) Y G 30-50 Historic changes which most users are interested in are captured. Only be achieved with modeling approaches using non-earth observation data sources (i.e. historical maps) – where more recent high resolution data sources (Landsat, Sentinel) are not available. B 10-30 Historic changes can be assessed for the Earth observation data which are required at this resolution. T 0 (one time only) Only current and potentially future data are available, but this is useful for those who require current status products, for example for modelling, and static assessments. Required Measurement Uncertainty % for accuracy and errors of omission and Primary: overall map accuracy and errors G 5 For reporting purposes, this would allow sufficient accuracy, where all classes have high accuracies. An independent accuracy assessment using statistically robust, global or regional reference data of higher quality is required for any ECV land cover - 242 - 2022 GCOS ECVs Requirements commissio n and hectares for area estimates incl. 95 % confidence intervals of omission and commission for individual land cover categories and types of change (incl. confidence interval). Secondary: bias for area estimates (incl. confidence intervals) B 20 For other uses, this would be sufficient – it would be expected that some classes would have higher accuracy. 
For example confusion between built-up and forest would be lower, but confusion between agriculture and bare might be higher. An independent accuracy assessment using statistically robust, global or regional reference data of higher quality is required for any ECV land cover product. T 35 This threshold would be suitable for maximum commission/omission error for individual categories. Overall accuracy might be expected to be higher. An independent accuracy assessment using statistically robust, global or regional reference data of higher quality is required for any ECV land cover product. Stability % incl. 95 % confidence intervals Primary: errors of omission and commission for individual land cover categories and types of change (incl. confidence interval) G 5 Stability is important for long-term land cover datasets where multiple sensors are used to generate a time series dataset. High stability is required for assessing long-term trends. The stability can be assessed by multi-date independent accuracy assessment. The stability requirements are tighter that for overall uncertainty since the aim for multi-date ECV data is to provide information on changes and trends. B 15 T 25 Standards and References - 243 - 2022 GCOS ECVs Requirements 9.6.3 ECV Product: Maps of Key IPCC Land Classes, Related Changes and Land Management Types Name Maps of Key IPCC Land Classes, Related Changes and Land Management Types Definition Land cover classes to be used for the estimation of GHG emissions and removals following the IPCC guidelines. Unit Primary units are categories (binary variables such as forest or cropland) or continuous variables classifiers (e.g. fraction of tree canopy cover in percent). Secondary outputs include surface area of land cover/use types and land cover/use changes (in ha). Note It can also be variable in time due to land changes and phenology. Crucially, this table refers to change products. Requirements Item needed Unit Metric Value Notes Horizontal Resolution m / degree Size of grid cell G 10-300 This would allow finer detail to be observed, and for land management to be assessed at smaller units. B 300- 1000 For most climate users, 300 m is sufficient. T 1000-1 degree For modelling for example at the global scale, this resolution is sufficient. More detailed land cover descriptions are more targeted for regional applications in climate change mitigation and adaptation purposes. Vertical Resolution G - N/A, since ECV products provide estimates as total over a certain area with further vertical discrimination. There is currently no consideration of the third dimension for land ECV products though some of the definitions (such as forests) often use, among others, minimum height criteria. B - T - Temporal Resolution month G 1 Monthly. Allows regrowth, phenology, changes in water extent related to seasonality to be detected. B 12 Yearly. Inter-annual changes can be detected. Suitable for most international and national policy reporting cycles. T 60 Every 5 years. Suitable for longer-term mapping, related to broader land cover change dynamics. Timeliness month G 1 Monthly. Ideally, land cover data become available soon after the acquisition of the data but quality processing and ECV product derivation and accuracy assessment, as well as, long-term consistency is to be ensured to track changes and trends. B 12 Yearly. Policy makers will be able to develop and assess policies based on these changes. T 60 Every 5 years. 
Suitable for longer-term mapping, related to broader land cover change dynamics. Temporal Extent (Time span) y G >100 For modelling over longer histories historic data are required. B 50 Near historic changes can be assessed. T 30 Only current maps using the current generation of satellites are used. Required Measurement Uncertainty % for accuracy and errors of omission and commissi on and hectares Primary: overall map accuracy and errors of omission and commission for individual land cover categories G 5 For reporting purposes, this would allow sufficient accuracy, where all classes have high accuracies. - 244 - 2022 GCOS ECVs Requirements for area estimates incl. 95 % confidenc e intervals and types of change (incl. confidence interval). Secondary: bias for area estimates (incl. Confidence intervals) B 15 For other uses, this would be sufficient – it would be expected that some classes would have higher accuracy -for example confusion between built-up and forest would be lower, but confusion between agriculture and bare might be higher. T 25 This threshold would be suitable for maximum commission/omission error for individual categories. Overall accuracy might be expected to be higher. Stability % incl. 95 % confidenc Primary: errors of omission and commission for individual land cover categories and types of change (incl. confidence interval) G 5 Stability is important for long-term land cover datasets where multiple sensors are used to generate a time series dataset. High stability is required for assessing long-term trends. The stability can be assessed by multi-date independent accuracy assessment. The stability requirements are tighter that for overall uncertainty since the aim for multi-date ECV data is to provide information on changes and trends. B 15 T 25 Standards and References 2022 GCOS ECVs Requirements 245 9.7 ECV: Land Surface Temperature 9.7.1 ECV Product: Land Surface Temperature (LST) Name Land Surface Temperature Definition Land Surface Temperature (LST) is a measure of how hot or cold the surface of the Earth would feel to the touch. When derived from radiometric measurements of ground-based, airborne, and spaceborne remote sensing instruments, LST is the aggregated radiometric surface temperature of the ensemble of components within the sensor field of view. Unit K (average over grid cell) Note From a climate perspective, LST is important for evaluating land surface and land-atmosphere exchange processes, constraining surface energy budgets and model parameters, and providing observations of surface temperature change both globally and in key regions. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of grid cell G < 1 Reflect the primary application of the climate users in the survey. The three most popular primary applications are model evaluation, evapotranspiration/vegetation or crop monitoring and urban climate, all of which may quite feasibly require data with a spatial resolution of 1 km or better. Only polar orbiting satellites can currently provide data at these resolutions. B < 1 T 1 Vertical Resolution N/A G B T Temporal Resolution h G < 1 Only Geostationary data can provide data at these resolutions but these are regional datasets. In contrast polar orbiting satellites cover the whole globe but are restricted to day/night temporal resolution. B 1 T 6 Very nearly met by day/night temporal resolution from polar orbiting satellite, which satisfies 70% of climate users in survey. 
Timeliness d G A survey of 80 non-climate users for timeliness from the ESA DUE GlobTemperature Project revealed a "threshold" need of 1 month for long-term data records, and a "breakthrough" of 48 hours for long-term data records. B 2 T 30
Required Measurement Uncertainty K An estimate of the expected spread of the distribution of possible values G < 1 This is the required total uncertainty per pixel, combining the four groups of uncertainty components: random, locally correlated atmospheric, locally correlated surface, and large-scale systematic. There is a requirement for knowledge of correlation length scales. B < 1 T < 1
Stability K / decade Assessment of whether a monotonic trend exists with respect to ground-based Fiducial Reference Measurements or related ECV datasets (such as near-surface air temperature) G 0.1 For the climate modeling community, long-term product stability is noted as a high priority. Temporal stability of the LST products needs to be sufficient for global and regional trends in LST anomalies to be calculated. B 0.2 T 0.3
Standards and References
Bulgin, C., & Merchant, C. (2016). DUE GlobTemperature Requirements Baseline Document.
Ghent, D., Veal, K., Trent, T., Dodd, E., Sembhi, H., and Remedios, J. (2019). A New Approach to Defining Uncertainties for MODIS Land Surface Temperature. Remote Sensing, 11, 1021. doi: 10.3390/rs11091021
Good, E. J., Ghent, D. J., Bulgin, C. E., & Remedios, J. J. (2017). A spatiotemporal analysis of the relationship between near-surface air temperature and satellite land surface temperatures using 17 years of data from the ATSR series. Journal of Geophysical Research: Atmospheres, 122(17), 9185-9210. doi:10.1002/2017JD026880
LST CCI (2018) User Requirements Document, Reference LST-CCI-D1.1-URD - i1r0
LST CCI (2019) End-to-End ECV Uncertainty Budget Document, Reference LST-CCI-D2.3-E3UB - i1r0
Merchant, C. J., Paul, F., Popp, T., Ablain, M., Bontemps, S., Defourny, P., Hollmann, R., Lavergne, T., Laeng, A., de Leeuw, G., Mittaz, J., Poulsen, C., Povey, A. C., Reuter, M., Sathyendranath, S., Sandven, S., Sofieva, V. F., and Wagner, W. (2017). Uncertainty information in climate data records from Earth observation. Earth System Science Data, 9, 511-527.
9.7.2 ECV Product: Soil Temperature (7)
Name Soil Temperature
Definition Soil temperature at different depths.
Unit °C
Note The soil temperature at different depths represents the thermal energy of the soil. The standard depths for soil temperature measurements are 5, 10, 20, 50 and 100 cm below the surface, according to the CIMO guide (0 cm is an additional depth used by CMA); additional depths may be included. By contrast, LST is more difficult to measure using in situ thermometers or thermocouples: it is difficult to fit a temperature sensor tightly to the ground so that it remains stable, in the case of precipitation the contact will change and cause unstable measurement results, and the position of the temperature sensor needs to be adjusted manually. Infrared temperature sensors are expensive, and require fields of view representative of those observed from satellites, so it is challenging to create a global network that represents all possible land covers. Soil temperature is easy to measure using thermometers (0/5/10 cm) or temperature sensors (5/10/20/50/100 cm).
Requirements (Item needed, Unit, Metric, Value, Notes):
Horizontal Resolution km longitude G 50 B 150 T 139-278 For the GSN, the horizontal distance between two network stations should not be less than the length of 2.5 degrees of longitude at that location (278 km at the equator). For stations beyond 60 degrees latitude (north or south) the minimum distance is fixed at the length of 2.5 degrees of longitude at 60 degrees latitude (139 km). Consequently, the minimum spacing varies from 278 km at the equator to 139 km in the polar regions.
Vertical Resolution cm G 0, 5, 10, 20, 50, 100, 180 The standard depths for soil temperature measurements are 5, 10, 20, 50 and 100 cm below the surface; additional depths may be included. LST is important for satellite observation, so zero depth could be included. Goal: at a depth of 180 cm the temperature is useful for long-term climate monitoring and prediction. Breakthrough: automatic weather stations can observe the soil temperature at these depths. Threshold: a thermometer can be used at these depths; suitable for observing stations without automatic weather stations. B 0, 5, 10, 20, 50, 100 T 0, 5, 10, 20
Temporal Resolution h G 3 B 6 Regarding surface synoptic observations: the main standard times shall be 0000, 0600, 1200 and 1800 UTC. The intermediate standard times shall be 0300, 0900, 1500 and 2100 UTC. Every effort should be made to obtain surface synoptic observations four times daily at the main standard times, with priority being given to the 0000 and 1200 UTC observations required for global exchanges. T 24
Timeliness h G 3 B 6 T 48
Required Measurement Uncertainty (2-sigma) K G 0.1 B 0.2 T 0.2
Stability G B T
Standards and References
WMO Guide to Meteorological Instruments and Methods of Observation (WMO-No. 8)
Guide to the GCOS Surface Network (GSN) and GCOS Upper-Air Network (GUAN) (GCOS-144) (WMO/TD No. 1558)
(7) Soil Temperature is a new ECV product temporarily included under the ECV Land Surface Temperature. Its positioning will be subject to evaluation by the TOPC Panel and the GCOS Steering Committee.
9.8 ECV: Leaf Area Index
9.8.1 ECV Product: Leaf Area Index (LAI)
Name Leaf Area Index (LAI)
Definition Leaf Area Index of a plant canopy or ecosystem is defined as one half of the total green leaf area per unit horizontal ground surface area, and measures the area of leaf material present in the specified environment (projection to the underlying ground along the normal to the slope).
Unit m2 m-2
Note Effective Leaf Area Index is the LAI value that would produce the same indirect ground measurement as that observed, assuming a random foliage distribution (LAIeff = LAItrue × canopy clumping index). The conversion of data measurements to true values is an essential step and requires additional information about the structure and architecture of the canopy, e.g. gap size distributions, at the appropriate spatial resolutions. Leaf Area Index controls important mass and energy exchange processes, such as radiation and rain interception, as well as photosynthesis and respiration, which couple vegetation to the climate system. Length of record: Threshold: 20 years; Target: >40 years.
Requirements (Item needed, Unit, Metric, Value, Notes):
Horizontal Resolution m G 10 For (e.g.) climate adaptation and agricultural monitoring. Best practices published here: B 100 T 250 For regional and global climate modeling.
Vertical Resolution - N/A.
In theory, a vegetation canopy can be stratified into various layers to describe its vertical structure in a discrete way. However, actual methods of LAI observation, e.g. optical sensors, can only measure the total canopy leaf area index. Therefore, no requirements for vertical resolution are set. - -
Temporal Resolution d G 1 When assimilated by a model, this value corresponds to the climate model temporal resolution (to derive a better phenology accuracy). B T 10 When used for crop or ecosystem modeling, or for Land Surface / Earth System Model evaluation.
Timeliness d G 1 For climate change services. B 5 For environmental change services. Can be longer (~months) for historic climate/environmental change assessments. T 10 For NWP (ECMWF).
Required Measurement Uncertainty % or m2 m-2 1 sigma G 10% for values ≥0.5; 0.05 (absolute value) for smaller values One standard deviation or error covariance matrix with associated PDF shape (functional form of estimated error distribution for the term). The goal values of uncertainty were assessed through a literature review of the impact of climate change on LAI using various Earth system models (see Mahowald et al., 2016). They show the impact on LAI deviation at global scale using various RCP scenarios. Taking the model-ensemble results, the uncertainties should be less than ΔLAI ≈ 0.20 for a 2 °C deviation, for an annual average LAI that can be approximated to ~1.5. This means that the uncertainties should be smaller than 10% (≈ 0.20/1.87 × 100). B T 20% for values ≥0.5; 0.1 (absolute value) for smaller values Same as above but with ΔLAI ≈ 0.25.
Stability m2 m-2 / decade A factor of uncertainties to demonstrate that the 'error' of the product remains constant over at least a decade G <3% The unit is the rate of change of LAI over the available time period. 'The required stability is some fraction of the expected signal' (see Ohring et al. 2005). "It may represent a requirement on the extent to which the error of the product remains constant over a long period, typically a decade or more. It can be defined by the mean of uncertainties over a month …". In the case that we have data over 10 years (= one decade), N = 10 and U = 10%, and S = sqrt(sum(U^2))/N. Assuming U constant along the period, it means S = SQRT(N·U^2)/N = SQRT(N)·U/N ≈ 0.3·U = 0.3 × 10/100.0 = 3%. This number should be smaller than the expected LAI trend. Ref: Jiang et al. 2017. B T <6% Same as above but with the threshold uncertainty.
Standards and References
Fang, H., Baret, F., Plummer, S., & Schaepman-Strub, G. (2019). An overview of global leaf area index (LAI): Methods, products, validation, and applications. Reviews of Geophysics, 57, 739–799.
Boussetta, S., Balsamo, G., Dutra, E., Beljaars, A., Albergel, C. (2015). Assimilation of surface albedo and vegetation states from satellite observations and their impact on numerical weather prediction, Remote Sensing of Environment, pp. 111-126. DOI:10.1016/j.rse.2015.03.009
Fernandes, R., Plummer, S., Nightingale, J., Baret, F., Camacho, F., Fang, H., Garrigues, S., Gobron, N., Lang, M., Lacaze, R., LeBlanc, S., Meroni, M., Martinez, B., Nilson, T., Pinty, B., Pisek, J., Sonnentag, O., Verger, A., Welles, J., Weiss, M., & Widlowski, J.L. (2014). Global Leaf Area Index Product Validation Good Practices. Version 2.0. In G. Schaepman-Strub, M. Román, & J. Nickeson (Eds.), Best Practice for Satellite-Derived Land Product Validation (p.
76): Land Product Validation Subgroup (WGCV/CEOS), doi:10.5067/doc/ceoswgcv/lpv/lai.002
Jiang, C. Y., Ryu, Y., Fang, H., Myneni, R., Claverie, M., & Zhu, Z. (2017). Inconsistencies of interannual variability and trends in long-term satellite leaf area index products. Glob. Chang. Biol., 23, 4133–4146.
Ohring, G., Wielicki, B., Spencer, R., Emery, B., & Datla, R. (2005). Satellite instrument calibration for measuring global climate change: Report of a workshop. Bulletin of the American Meteorological Society, 86(9), 1303-1314.
9.9 ECV: Soil Carbon
9.9.1 ECV Product: Carbon in Soil
Name Carbon in Soil
Definition % of organic carbon in the topmost 30 cm and in the sub-soil (30-100 cm).
Unit % of mass
Note
Requirements (Item needed, Unit, Metric, Value, Notes):
Horizontal Resolution km Grid cell size G 20 B 100 T 1000
Vertical Resolution G - N/A B - N/A T - N/A
Temporal Resolution y Time between estimates G 1 Consistent with LUC B 5 T 10
Timeliness y G 1 B 1 T 1
Required Measurement Uncertainty (2-sigma) % G 10 B 10 T 10
Stability % G 1 B 1 T 1
Standards and References
Nachtergaele, F., H. van Velthuizen, L. Verelst, and D. Wiberg, Eds., 2012: Harmonized World Soil Database v1.2
Wieder et al., 2013, Nature Climate Change; Oertel et al., 2016, doi:10.1016/j.chemer.2016.04.002
Anan et al., 2013; Todd-Brown et al., 2014, doi:10.5194/bg-11-2341-2014
9.9.2 ECV Product: Mineral Soil Bulk Density
Name Mineral Soil Bulk Density
Definition Bulk density of dry soil averaged over the topmost 30 cm and the topmost 1 m.
Unit kg m-3
Note
Requirements (Item needed, Unit, Metric, Value, Notes):
Horizontal Resolution km Grid cell size G 0.1 For permafrost B 1 T 20
Vertical Resolution G - N/A B - N/A T - N/A
Temporal Resolution y Time between estimates G 5 B 10 T 20
Timeliness y G 1 B 1 T 1
Required Measurement Uncertainty (2-sigma) % G 10 B 10 T 10
Stability G 1 B 1 T 1
Standards and References
National Research Council (2014). Opportunities to Use Remote Sensing in Understanding Permafrost and Related Ecological Characteristics: Report of a Workshop. Washington, DC: The National Academies Press.
9.9.3 ECV Product: Peatlands
Name Peatlands
Definition Depth of peat measured on a regular grid (where peat exists).
Unit m
Note This provides the geographic extent of peatlands and their depth.
Requirements (Item needed, Unit, Metric, Value, Notes):
Horizontal Resolution m Grid cell size G 20 B 100 T 1000
Vertical Resolution m G 0.1 B 0.5 T 1
Temporal Resolution y Time between estimates G 5 B 10 T 20
Timeliness y G 1 B 1 T 1
Required Measurement Uncertainty (2-sigma) % G 10 B 10 T 10
Stability % G 1 B 1 T 1
Standards and References
Minasny, B., O. Berglund, J. Connolly, C. Hedley, F. de Vries, A. Gimona, B. Kempen, D. Kidd, H. Lilja, B. Malone, A. McBratney, P. Roudier, S. O'Rourke, Rudiyanto, J. Padarian, L. Poggio, A. ten Caten, D. Thompson, C. Tuve and W. Widyatmanti (2019). "Digital mapping of peatlands - A critical review." Earth-Science Reviews 196. doi: 10.1016/j.earscirev.2019.05.014
Hugelius, G., J. Loisel, S. Chadburn, R. B. Jackson, M. Jones, G. MacDonald, M. Marushchak, D. Olefeldt, M. Packalen, M. B. Siewert, C. Treat, M. Turetsky, C. Voigt and Z. Yu (2020). "Large stocks of peatland carbon and nitrogen are vulnerable to permafrost thaw." Proceedings of the National Academy of Sciences 117(34): 20438-20446.
doi: 10.1073/pnas.1916387117 - 253 - 2022 GCOS ECVs Requirements 10. ANTHROPOGENIC 10.1 ECV: Anthropogenic Greenhouse Gas Fluxes 10.1.1 ECV Product: Anthropogenic CO2 Emissions from Fossil Fuel Use, Industry, Agriculture, Waste and Products Use Name Anthropogenic CO2 Emissions from Fossil Fuel Use, Industry, Agriculture, Waste and Products Use Definition Anthropogenic long-cycle C emissions are mainly originating from combustion of fossil fuels, and for about 10% also from non-combustion sources, such as cement production, ferrous and non-ferrous metal production processes, urea production, agricultural liming and solvent use. Unit Mg CO2 y-1 for the region Note This corresponds to UNFCCC reporting of anthropogenic emissions from non-LULUCF sources by country Requirements Item needed Unit Metric Value Notes Horizontal Resolution Country-level As defined by UNFCCC G By country and sector IPCC 2006 Guidelines, UNFCCC Inventory Guidelines B T By country and sector IPCC 2006 Guidelines, UNFCCC Inventory Guidelines Vertical Resolution G - N/A B - T - Temporal Resolution y G 1 IPCC 2006 Guidelines, UNFCCC Inventory Guidelines B T 1 IPCC 2006 Guidelines, UNFCCC Inventory Guidelines Timeliness y G Within 1.25 years UNFCCC Inventory Reporting Guidelines B T Within 1.25 years UNFCCC Inventory Reporting Guidelines Required Measurement Uncertainty % Twice the estimated standard deviation of the total as a % of the total G Globally: 5% Nationally: 10% IPCC 2006 Guidelines B T Globally: 10% Nationally: 30% IPCC 2006 Guidelines Stability G Follow times series consistency in 2006 Guidelines and 2019 Refinement B T Standards and References IPCC 2006 Guidelines (Optional: 2019 Refinement of the Guidelines; National inventory reports to UNFCCC) - 254 - 2022 GCOS ECVs Requirements 10.1.2 ECV Product: Anthropogenic CH4 Emissions from Fossil Fuel, Waste, Agriculture, Industrial Processes and Fuel Use Name Anthropogenic CH4 Emissions from Fossil Fuel, Waste, Agriculture, Industrial Processes and Fuel Use Definition Anthropogenic CH4 emissions are mainly originating from fermentation processes in waste (landfills), manure, enteric fermentation, but also from fossil fuel extraction, transmission and distribution and use, and industrial processes. 
Unit Mg CH4 y-1 for the region Note This corresponds to UNFCCC reporting of anthropogenic emissions of methane, except from wetlands Requirements Item needed Unit Metric Value Notes Horizontal Resolution Country-level Country by country G By country and sector IPCC 2006 Guidelines, UNFCCC Inventory Guidelines B T By country and sector IPCC 2006 Guidelines, UNFCCC Inventory Guidelines Vertical Resolution G - N/A B - T - Temporal Resolution y time G 1 IPCC 2006 Guidelines, UNFCCC Inventory B T 1 IPCC 2006 Guidelines, UNFCCC Inventory Timeliness y time G within 1.25 years UNFCCC Inventory Reporting Guidelines B T within 1.25 years UNFCCC Inventory Reporting Guidelines Required Measurement Uncertainty % Twice the estimated standard deviation of the total as a % of the total G 20% IPCC 2006 Guidelines B T 40% IPCC 2006 Guidelines Stability G Follow times series consistency in 2006 Guidelines and 2019 Refinement B T Standards and References IPCC 2006 Guidelines (Optional: 2019 Refinement of the Guidelines; National inventory reports to UNFCCC) - 255 - 2022 GCOS ECVs Requirements 10.1.3 ECV Product: Anthropogenic N2O Emissions from Fossil Fuel Use, Industry, Agriculture, Waste and Products Use, Indirect from N-Related Emissions/Depositions Name Anthropogenic N2O Emissions from Fossil Fuel Use, Industry, Agriculture, Waste and Products Use, Indirect from N-Related Emissions/Depositions Definition Anthropogenic N2O emissions are mainly originating from fuel combustion, industry, agriculture, waste, products use (including indirect emissions from leaching and run-off, from NOx emissions). Unit Mg N2O y-1 for the region Note This corresponds to UNFCCC reporting of anthropogenic emissions of nitrous oxide Requirements Item needed Unit Metric Value Notes Horizontal Resolution Country -level Country by country G By country and sector IPCC 2006 Guidelines, UNFCCC Inventory Guidelines B T By country and sector IPCC 2006 Guidelines, UNFCCC Inventory Guidelines Vertical Resolution G - N/A B - T - Temporal Resolution y time G 1 IPCC 2006 Guidelines, UNFCCC Inventory Guidelines B T 1 IPCC 2006 Guidelines, UNFCCC Inventory Guidelines Timeliness y time G within 1.25 years UNFCCC Inventory Reporting Guidelines B T within 1.25 years UNFCCC Inventory Reporting Guidelines Required Measurement Uncertainty % Twice the estimated standard deviation of the total as a % of the total G 40% IPCC 2006 Guidelines B T 80% IPCC 2006 Guidelines Stability G Follow times series consistency in 2006 Guidelines and 2019 Refinement B T Standards and References IPCC 2006 Guidelines (Optional: 2019 Refinement of the Guidelines; National inventory reports to UNFCCC) - 256 - 2022 GCOS ECVs Requirements 10.1.4 ECV Product: Anthropogenic F-Gas Emissions from Industrial Processes and Product Use Name Anthropogenic F-Gas Emissions from Industrial Processes and Product Use Definition F-Gas emissions are anthropogenic and mainly originating from chemical industrial processes and F- gas-related product use. The different F-gases have different, all very high global warming potentials. 
Unit Mg CO2eq y-1 for the region Note This corresponds to UNFCCC reporting of anthropogenic emissions of fluorinated gases (HFC, PFC and SF6) aggregated according to the GWP as agreed by the UNFCCC Requirements Item needed Unit Metric Value Notes Horizontal Resolution Country -level Country by country G By country and sector IPCC 2006 Guidelines, UNFCCC Inventory Guidelines B T By country and sector IPCC 2006 Guidelines, UNFCCC Inventory Guidelines Vertical Resolution G - N/A B - T - Temporal Resolution y time G 1 IPCC 2006 Guidelines, UNFCCC Inventory B T 1 IPCC 2006 Guidelines, UNFCCC Inventory Timeliness y time G within 1.25 years UNFCCC Inventory Reporting Guidelines B T within 1.25 years UNFCCC Inventory Reporting Guidelines Required Measurement Uncertainty % Twice the estimated standard deviation of the total as a % of the total G 10% IPCC 2006 Guidelines B T 50% IPCC 2006 Guidelines Stability G Follow times series consistency in 2006 Guidelines and 2019 Refinement B T Standards and References IPCC 2006 Guidelines (Optional: 2019 Refinement of the Guidelines; National inventory reports to UNFCCC) - 257 - 2022 GCOS ECVs Requirements 10.1.5 ECV Product: Total Estimated Fluxes by Coupled Data Assimilation/ Models with Observed Atmospheric Composition – National Name Total Estimated Fluxes by Coupled Data Assimilation/ Models with Observed Atmospheric Composition – National Definition National estimates derived from highly resolved GHG emission gridmaps (modelled output, using proxy for the spatial distribution at fine-scale resolution). Unit kg CO2eq m-2 s-1 Note Total estimated fluxes by coupled data assimilation/ inverse models at a national scale. This includes both “anthropogenic” and “natural” emissions and removals. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of country G 10 B T 100 Vertical Resolution G - Rather than vertical resolution there can be 4 Layers: 1- surface; 2- stack height (between 100m and 300m); 3- cruise height (10km) and 4- supersonic height (15 km). B - T - Temporal Resolution y Time G 1 IPCC 2019, UNFCCC Inventory Guidelines B T 1 IPCC 2019, UNFCCC Inventory Guidelines Timeliness y Time G within 1.25 years To allow comparison with estimates made following the UNFCCC Inventory Reporting Guidelines B T within 1.25 years To allow comparison with estimates made following the UNFCCC Inventory Reporting Guidelines Required Measurement Uncertainty Twice the estimated standard deviation of the total as a % of the total G 10% IPCC 2019 B T 30% IPCC 2019 Stability G B T Standards and References IPCC 2019 refinement Volume I, Chapter 6.10.2 Comparisons with atmospheric measurements GAW Report No. 245, An Integrated Global Greenhouse Gas Information System (IG3IS) Science Implementation PlanEC-CO2 report, Pinty et al., 2017: An operational anthropogenic CO₂ emissions monitoring & verification support capacity - Baseline requirements, Model components and functional architecture, European Commission Joint Research Centre, EUR 28736 EN, - 258 - 2022 GCOS ECVs Requirements 10.1.6 ECV Product: Total Estimated Fluxes by Coupled Data Assimilation/ Models with Observed Atmospheric Composition – Continental Name Total Estimated Fluxes by Coupled Data Assimilation / Models with Observed Atmospheric Composition - Continental Definition GHG emission gridmaps (modelled output, using proxy for the spatial distribution). Unit kg CO2eq m-2 s-1 Note Total estimated fluxes by coupled data assimilation/ inverse models at a continental scale. 
This includes both “anthropogenic” and “natural” emissions and removals. Requirements Item needed Unit Metric Value Notes Horizontal Resolution km Size of continents G 1000 B T 10000 Vertical Resolution G - N/A B - T - Temporal Resolution y time G 1 IPCC 2006 Guidelines, UNFCCC Inventory Guidelines B T 1 IPCC 2006 Guidelines, UNFCCC Inventory Guidelines Timeliness y time G within 1.25 years To allow comparison with estimates made following the UNFCCC Inventory Reporting Guidelines B T within 1.25 years To allow comparison with estimates made following the UNFCCC Inventory Reporting Guidelines Required Measurement Uncertainty % Twice the estimated standard deviation of the total as a % of the total G 10% IPCC 2019 B T 25% IPCC 2019 Stability G IPCC 2019 B T IPCC 2019 Standards and References IPCC 2019 refinement Volume I, Chapter 6.10.2 Comparisons with atmospheric measurements. GAW Report No. 245, An Integrated Global Greenhouse Gas Information System (IG3IS) Science Implementation Plan. - 259 - 2022 GCOS ECVs Requirements 10.1.7 ECV Product: Anthropogenic CO2 Emissions/Removals by Land Categories Name Anthropogenic CO2 Emissions/Removals by Land Categories Definition Short and long cycle C emissions from land use, land-use and forestry (including carbon stock gains and losses of biomass burning, disease, harvest, net deforestation). Unit Mg of CO2 y-1 (for the region) Note This corresponds to UNFCCC reporting of anthropogenic emissions and removals from LULUCF Requirements Item needed Unit Metric Value Notes Horizontal Resolution Country-level As defined by UNFCCC G By country/region IPCC 2006 Guidelines, UNFCCC Inventory B T By country/region IPCC 2006 Guidelines, UNFCCC Inventory Vertical Resolution G - N/A B - T - Temporal Resolution y Time G 1 IPCC 2006 Guidelines, UNFCCC Inventory B T 1 IPCC 2006 Guidelines, UNFCCC Inventory Timeliness y Time G within 1.25 years UNFCCC Inventory Reporting Guidelines B T within 1.25 years UNFCCC Inventory Reporting Guidelines Required Measuremen t Uncertainty % or Gg Twice the estimated standard deviation of the total as a % of the total or mass of CO2 G 15% or 300Gg, whichever is largest IPCC 2006 Guidelines B T 20% or 400Gg – whichever is largest IPCC 2006 Guidelines Stability G B T Standards and References IPCC 2003 GPG, IPCC 2006 Guidelines; UNFCCC National Inventory Reports - 260 - 2022 GCOS ECVs Requirements 10.1.8 ECV Product: High-Resolution Footprint Around Point Sources Name High-Resolution Footprint Around Point Sources Definition Spatially resolved GHG emission plume around local source. 
10.1.7 ECV Product: Anthropogenic CO2 Emissions/Removals by Land Categories

Name: Anthropogenic CO2 Emissions/Removals by Land Categories
Definition: Short- and long-cycle carbon emissions from land use, land-use change and forestry (including carbon stock gains and losses from biomass burning, disease, harvest and net deforestation).
Unit: Mg of CO2 y-1 (for the region)
Note: This corresponds to UNFCCC reporting of anthropogenic emissions and removals from LULUCF.

Requirements:

| Item needed | Unit | Metric | Goal (G) | Breakthrough (B) | Threshold (T) | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Horizontal Resolution | Country-level | As defined by UNFCCC | By country/region | | By country/region | IPCC 2006 Guidelines, UNFCCC Inventory |
| Vertical Resolution | | | – | – | – | N/A |
| Temporal Resolution | y | Time | 1 | | 1 | IPCC 2006 Guidelines, UNFCCC Inventory |
| Timeliness | y | Time | within 1.25 years | | within 1.25 years | UNFCCC Inventory Reporting Guidelines |
| Required Measurement Uncertainty | % or Gg | Twice the estimated standard deviation of the total as a % of the total, or mass of CO2 | 15% or 300 Gg, whichever is largest | | 20% or 400 Gg, whichever is largest | IPCC 2006 Guidelines |
| Stability | | | | | | |

Standards and References: IPCC 2003 GPG, IPCC 2006 Guidelines; UNFCCC National Inventory Reports

10.1.8 ECV Product: High-Resolution Footprint Around Point Sources

Name: High-Resolution Footprint Around Point Sources
Definition: Spatially resolved GHG emission plume around a local source.
Unit: ppm (total column-averaged dry air mole fraction of CO2)
Note:

Requirements:

| Item needed | Unit | Metric | Goal (G) | Breakthrough (B) | Threshold (T) | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Horizontal Resolution | km | distance | 1 | | 2 | |
| Vertical Resolution | | | – | – | – | N/A |
| Temporal Resolution | h | Repeat time of observations | 4 | | 144 (6 days) | IPCC 2019 Refinement |
| Timeliness | weeks | | 1 | | 4 | |
| Required Measurement Uncertainty | ppm | Twice the estimated standard deviation of the total | 1 | | 5 | IPCC 2006 Guidelines |
| Stability | | | | | | |

Standards and References: ESA Mission Requirements Document of CarbonSat, of CO2M Sentinel (EOP-SM/3088/YM-ym, 82 pp., _Issued20190927.pdf); references in Janssens-Maenhout et al., 2020: Toward an Operational Anthropogenic CO2 Emissions Monitoring and Verification Support Capacity, BAMS.

10.2 ECV: Anthropogenic Water Use

10.2.1 ECV Product: Anthropogenic Water Use

Name: Anthropogenic Water Use
Definition: Volume of water used by country, by sector – agricultural, industrial and domestic.
Unit: Volume of water used by country, Gm3 y-1
Note: AQUASTAT contains estimates of water use by country.

Requirements:

| Item needed | Unit | Metric | Goal (G) | Breakthrough (B) | Threshold (T) | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Horizontal Resolution | | By country | Medium-scale watersheds | Country, plus major watersheds | Country | |
| Vertical Resolution | | | – | – | – | N/A |
| Temporal Resolution | month | | 1 | | 12 | |
| Timeliness | | | | | | |
| Required Measurement Uncertainty (2-sigma) | % | | 10 | | 20 | |
| Stability | | | | | | |

Standards and References: –

GCOS Secretariat
Global Climate Observing System
c/o World Meteorological Organization
7 bis, Avenue de la Paix
P.O. Box No. 2300
CH-1211 Geneva 2, Switzerland
Tel: +41 22 730 8067
Fax: +41 22 730 8181
Email: [email protected]
error correction - Why do we care about conjugation? - Quantum Computing Stack Exchange

Why do we care about conjugation?

Asked Mar 6 at 1:19 by drumadoir · Modified 5 months ago · Viewed 769 times · Score 5

I hear a lot that Clifford gates conjugate Paulis to Paulis, that is C P C† = P′, where C, P are a Clifford or Pauli respectively. But why is it important that Cliffords conjugate Paulis to Paulis? Would it be useless if it was, say, the product of a Clifford with a Pauli gives another Pauli?

Tags: error-correction, density-matrix, clifford-group

2 Answers

Answer (score 7):

Your question relates to the Heisenberg representation of quantum mechanics. In the Heisenberg representation, instead of tracking states, we track how operators change. Think of it like this: assume we have a state |ψ⟩ and we act on it with a unitary operator C. What would another operator P acting on the original state |ψ⟩ translate to on the new state C|ψ⟩:

C P |ψ⟩ = C P C† C |ψ⟩.

This equation suggests that applying P to the original state and then applying C is equivalent to applying the transformed operator C P C† to the new state (after applying C). We can say the operator P evolved like this:

P → C P C†.

While this representation is equivalent to the Schrödinger representation (tracking states), it is very useful.
For example:

- Tracking states – if we have a state that is an eigenstate of some operator P, after applying some quantum circuit U the state would remain an eigenstate, but of the conjugated operator P′ = U P U†.
- Classical simulation (Gottesman–Knill theorem) – when a quantum circuit consists only of Clifford operations, we can efficiently track the eigenstates classically, allowing us to simulate Clifford circuits.

(answered Mar 6 at 7:45 by Shoham Jacoby; edited Mar 12 at 6:33 by drumadoir)

Comments:

- drumadoir (Mar 10 at 6:49): Hey Shoham thanks for the answer. Are you following the convention of the rightmost unitary acts on the state first? Because then C P|ψ⟩ would technically be C acting on the state P|ψ⟩ rather than P acting on C|ψ⟩. Just trying to be clear because how P evolved relies on it.
- Shoham Jacoby (Mar 10 at 12:44): Yes, the idea is that first acting with P on |ψ⟩ followed by C is the same as C P C† acting on C|ψ⟩. So if you have a state which is stabilized by P, after applying C it would be stabilized by C P C†.
- drumadoir (Mar 11 at 5:48): Thanks for the reply. So instead of "we have a state |ψ⟩ and we act on it with a unitary operator C. How does another operator P act on the new state" (implying P C|ψ⟩) shouldn't your answer say "we have a state |ψ⟩ and we act on it with a unitary operator P. How does another operator C act on the new state" (implying C P|ψ⟩)? It also says the "equation suggests that applying P to the original state is equivalent to applying … C P C† to the new state", but that implies P|ψ⟩ = C P C† C|ψ⟩, a different equation.
- Shoham Jacoby (Mar 11 at 13:13): Oh, I see. I edited and clarified it a bit. The idea is "What does P on the original state |ψ⟩ would be translated to on the new state C|ψ⟩". Also, if still not clear, check out equation (1) in the reference I linked to.
- drumadoir (Mar 11 at 23:07): Thanks Shoham I finally understand!!

Answer (score 3), by Craig Gidney (answered Mar 6 at 23:02):

When you drag a Pauli P across a Clifford C, while holding the Clifford constant and the overall functionality of the circuit constant, the Pauli must change to C† P C (or C P C† depending on the time direction you're dragging). In other words, conjugation tells you how Paulis are changed by moving through Cliffords.

Once you know how Paulis move through Cliffords, you can easily move them from one time to another. They become unstuck. You can move them to whenever they are easiest to analyze, or to whenever you are most interested in at the moment. This hugely simplifies analyzing the circuit's behavior; it is the foundation upon which the stabilizer formalism sits.

The product of a Pauli and a Clifford is generally a Clifford, not a Pauli. The interesting thing about C P C⁻¹ is that the Cliffordness cancels out, guaranteeing a simpler case.
Comment – drumadoir (Mar 11 at 23:09): Thank you for the answer Craig! By "guaranteeing a simpler case" I assume you mean guaranteeing the operators remain Paulis rather than being transformed to Cliffords? That makes sense thanks!
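As a quick numerical check of the conjugation rule discussed above (a minimal sketch using numpy; the single-qubit gates H and S are just convenient Clifford examples), the script verifies that C P C† is again a Pauli up to a phase, while a plain product such as H·X is not:

```python
import numpy as np

# Single-qubit Paulis and two Clifford gates (H and S).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

paulis = {"I": I, "X": X, "Y": Y, "Z": Z}

def as_pauli(M):
    """Return '<phase><name>' if M equals a Pauli up to a phase in {1,-1,i,-i}, else None."""
    for name, P in paulis.items():
        for phase, label in [(1, "+"), (-1, "-"), (1j, "+i"), (-1j, "-i")]:
            if np.allclose(M, phase * P):
                return label + name
    return None

for cname, C in [("H", H), ("S", S)]:
    for pname, P in paulis.items():
        conj = C @ P @ C.conj().T          # C P C†
        print(f"{cname} {pname} {cname}† = {as_pauli(conj)}")

# By contrast, a product like H·X is itself a Clifford, not a Pauli:
print("H·X is a Pauli (up to phase)?", as_pauli(H @ X) is not None)
```

Running it prints, for example, that H X H† = +Z and S X S† = +Y, in line with the answers above.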
Ti 6Al 4V (Grade 5) Titanium Alloy Data Sheet

ASTM Grade 5 titanium is the most ubiquitous and versatile of titanium's alloys. It is composed of 90% titanium, 6% aluminium and 4% vanadium. It is an alpha-beta titanium alloy, with aluminium stabilising the alpha phase and vanadium stabilising the beta phase. Ti 6Al 4V is widely used because of its optimum blend of properties, and it can undergo further processing to become better suited to specific applications.

Properties of Grade 5 titanium alloy

Titanium 6Al-4V has a density of 4.43 g/cc.

Thermal properties:
- Melting point: 1604 – 1660 °C
- Solidus: 1604 °C
- Liquidus: 1660 °C
- Beta transus: 980 °C

Mechanical properties:
- Tensile strength, ultimate: 1170 MPa
- Tensile strength, yield: 1100 MPa
- Elongation at break: 10%
- Modulus of elasticity: 114 GPa
- Hardness: Brinell 334; Rockwell C 36; Vickers 363

Heat Treatment

Ti 6Al 4V alloy is widely heat-treated to further improve its properties. It is typically mill annealed, solution treated or aged. Stress relieving is used on formed and welded parts, whilst beta annealing is used to improve the alloy's strength.

Corrosion Resistance

Ti 6Al 4V instantaneously produces a ceramic oxide layer on its surface, which protects it from corrosion in all but the most severe of environments. Because of this, Grade 5 titanium is widely used in salt water applications as well as humid environments. It is also moderately resistant to highly acidic environments, though titanium alloys containing palladium perform better.

Hot Working

Ti 6Al 4V is usually hot worked in order to produce the desired microstructure through the process of recrystallisation. This keeps the alloy's yield strength and hardness low and its ductility high. In Grade 5 this is done at approximately 870 °C to 980 °C, which stops the growth of excessive alpha phase.

Cold Working

Ti 6Al 4V is not easily cold worked: its low elastic modulus gives it a tendency to spring back towards its prior shape. Grade 5 can be cold drawn and extruded, though this is typically confined to smaller industrial processing facilities working on commercially pure grades of titanium.

Weldability of Ti-6Al-4V alloy

Ti 6Al-4V can be welded using Ti 6Al-4V as a filler metal. The metal has to be shielded with inert gas to prevent the pickup of oxygen in the weld area, which can cause embrittlement and failure. Gas tungsten arc welding is the most commonly used welding process for Ti 6Al-4V alloy, though gas metal arc welding is used for welding thicker sections. Ti 6Al-4V can also be successfully welded using plasma arc welding, spot welding, electron beam, laser beam, resistance welding and diffusion welding.

Machinability of Ti6Al4V Grade 5

Ti6Al4V parts have good machinability and can be machined as stock parts. The following factors contribute to efficient machining of Ti6Al4V parts: low cutting speeds, high feed rates, generous quantities of cutting fluid, sharp tools and a rigid setup. You can learn more on our machining page.
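As a quick illustration of how the machining guidance above translates into process parameters, the sketch below converts a surface cutting speed into a spindle speed and table feed using the standard milling formulas. The cutting speed, tool size and feed per tooth are illustrative placeholders, not supplier recommendations; consult the tool manufacturer's data for real Ti-6Al-4V jobs.

```python
import math

# Illustrative milling parameters for a Ti-6Al-4V job; the cutting speed and
# feed per tooth below are placeholder values, not supplier recommendations.
cutting_speed_m_min = 50.0    # Vc: low cutting speed typical of titanium work
tool_diameter_mm = 12.0       # D
feed_per_tooth_mm = 0.08      # fz
teeth = 4                     # z

# Spindle speed n [rpm] from Vc = pi * D * n / 1000
spindle_rpm = 1000.0 * cutting_speed_m_min / (math.pi * tool_diameter_mm)

# Table feed vf [mm/min] = fz * z * n
feed_mm_min = feed_per_tooth_mm * teeth * spindle_rpm

print(f"Spindle speed: {spindle_rpm:.0f} rpm")
print(f"Table feed:    {feed_mm_min:.0f} mm/min")
```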
Quantum Computation and Quantum Information
Michael A. Nielsen & Isaac L. Chuang

PUBLISHED BY THE PRESS SYNDICATE OF THE UNIVERSITY OF CAMBRIDGE
The Pitt Building, Trumpington Street, Cambridge, United Kingdom

CAMBRIDGE UNIVERSITY PRESS
The Edinburgh Building, Cambridge CB2 2RU, UK www.cup.cam.ac.uk
40 West 20th Street, New York, NY 10011-4211, USA www.cup.org
10 Stamford Road, Oakleigh, Melbourne 3166, Australia
Ruiz de Alarcón 13, 28014 Madrid, Spain

© Cambridge University Press 2000

This book is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2000
Printed in the United Kingdom at the University Press, Cambridge
Typeface Monotype Ehrhardt 10½/13pt. System LaTeX 2ε [EPC]
A catalogue record of this book is available from the British Library

Library of Congress Cataloguing in Publication data
Nielsen, Michael A., and Chuang, Isaac L.
Quantum Computation and Quantum Information / Michael A. Nielsen and Isaac L. Chuang.
p. cm. Includes bibliographical references and index.
ISBN 0-521-63503-9
1. Physics. I. Title. QA401.G47 2000 511′.8–dc21 98-22029 CIP
ISBN 0 521 63235 8 hardback
ISBN 0 521 63503 9 paperback

Contents Preface page xv Acknowledgements xxi Nomenclature and notation xxiii Part I Fundamental concepts 1 1 Introduction and overview 1 1.1 Global perspectives 1 1.1.1 History of quantum computation and quantum information 2 1.1.2 Future directions 12 1.2 Quantum bits 13 1.2.1 Multiple qubits 16 1.3 Quantum computation 17 1.3.1 Single qubit gates 17 1.3.2 Multiple qubit gates 20 1.3.3 Measurements in bases other than the computational basis 22 1.3.4 Quantum circuits 22 1.3.5 Qubit copying circuit?
24 1.3.6 Example: Bell states 25 1.3.7 Example: quantum teleportation 26 1.4 Quantum algorithms 28 1.4.1 Classical computations on a quantum computer 29 1.4.2 Quantum parallelism 30 1.4.3 Deutsch’s algorithm 32 1.4.4 The Deutsch–Jozsa algorithm 34 1.4.5 Quantum algorithms summarized 36 1.5 Experimental quantum information processing 42 1.5.1 The Stern–Gerlach experiment 43 1.5.2 Prospects for practical quantum information processing 46 1.6 Quantum information 50 1.6.1 Quantum information theory: example problems 52 1.6.2 Quantum information in a wider context 58 2 Introduction to quantum mechanics 60 2.1 Linear algebra 61 2.1.1 Bases and linear independence 62 2.1.2 Linear operators and matrices 63 viii Contents 2.1.3 The Pauli matrices 65 2.1.4 Inner products 65 2.1.5 Eigenvectors and eigenvalues 68 2.1.6 Adjoints and Hermitian operators 69 2.1.7 Tensor products 71 2.1.8 Operator functions 75 2.1.9 The commutator and anti-commutator 76 2.1.10 The polar and singular value decompositions 78 2.2 The postulates of quantum mechanics 80 2.2.1 State space 80 2.2.2 Evolution 81 2.2.3 Quantum measurement 84 2.2.4 Distinguishing quantum states 86 2.2.5 Projective measurements 87 2.2.6 POVM measurements 90 2.2.7 Phase 93 2.2.8 Composite systems 93 2.2.9 Quantum mechanics: a global view 96 2.3 Application: superdense coding 97 2.4 The density operator 98 2.4.1 Ensembles of quantum states 99 2.4.2 General properties of the density operator 101 2.4.3 The reduced density operator 105 2.5 The Schmidt decomposition and purifications 109 2.6 EPR and the Bell inequality 111 3 Introduction to computer science 120 3.1 Models for computation 122 3.1.1 Turing machines 122 3.1.2 Circuits 129 3.2 The analysis of computational problems 135 3.2.1 How to quantify computational resources 136 3.2.2 Computational complexity 138 3.2.3 Decision problems and the complexity classes P and NP 141 3.2.4 A plethora of complexity classes 150 3.2.5 Energy and computation 153 3.3 Perspectives on computer science 161 Part II Quantum computation 171 4 Quantum circuits 171 4.1 Quantum algorithms 172 4.2 Single qubit operations 174 4.3 Controlled operations 177 4.4 Measurement 185 4.5 Universal quantum gates 188 Contents ix 4.5.1 Two-level unitary gates are universal 189 4.5.2 Single qubit and cnot gates are universal 191 4.5.3 A discrete set of universal operations 194 4.5.4 Approximating arbitrary unitary gates is generically hard 198 4.5.5 Quantum computational complexity 200 4.6 Summary of the quantum circuit model of computation 202 4.7 Simulation of quantum systems 204 4.7.1 Simulation in action 204 4.7.2 The quantum simulation algorithm 206 4.7.3 An illustrative example 209 4.7.4 Perspectives on quantum simulation 211 5 The quantum Fourier transform and its applications 216 5.1 The quantum Fourier transform 217 5.2 Phase estimation 221 5.2.1 Performance and requirements 223 5.3 Applications: order-finding and factoring 226 5.3.1 Application: order-finding 226 5.3.2 Application: factoring 232 5.4 General applications of the quantum Fourier transform 234 5.4.1 Period-finding 236 5.4.2 Discrete logarithms 238 5.4.3 The hidden subgroup problem 240 5.4.4 Other quantum algorithms? 
242 6 Quantum search algorithms 248 6.1 The quantum search algorithm 248 6.1.1 The oracle 248 6.1.2 The procedure 250 6.1.3 Geometric visualization 252 6.1.4 Performance 253 6.2 Quantum search as a quantum simulation 255 6.3 Quantum counting 261 6.4 Speeding up the solution of NP-complete problems 263 6.5 Quantum search of an unstructured database 265 6.6 Optimality of the search algorithm 269 6.7 Black box algorithm limits 271 7 Quantum computers: physical realization 277 7.1 Guiding principles 277 7.2 Conditions for quantum computation 279 7.2.1 Representation of quantum information 279 7.2.2 Performance of unitary transformations 281 7.2.3 Preparation of fiducial initial states 281 7.2.4 Measurement of output result 282 7.3 Harmonic oscillator quantum computer 283 7.3.1 Physical apparatus 283 x Contents 7.3.2 The Hamiltonian 284 7.3.3 Quantum computation 286 7.3.4 Drawbacks 286 7.4 Optical photon quantum computer 287 7.4.1 Physical apparatus 287 7.4.2 Quantum computation 290 7.4.3 Drawbacks 296 7.5 Optical cavity quantum electrodynamics 297 7.5.1 Physical apparatus 298 7.5.2 The Hamiltonian 300 7.5.3 Single-photon single-atom absorption and refraction 303 7.5.4 Quantum computation 306 7.6 Ion traps 309 7.6.1 Physical apparatus 309 7.6.2 The Hamiltonian 317 7.6.3 Quantum computation 319 7.6.4 Experiment 321 7.7 Nuclear magnetic resonance 324 7.7.1 Physical apparatus 325 7.7.2 The Hamiltonian 326 7.7.3 Quantum computation 331 7.7.4 Experiment 336 7.8 Other implementation schemes 343 Part III Quantum information 353 8 Quantum noise and quantum operations 353 8.1 Classical noise and Markov processes 354 8.2 Quantum operations 356 8.2.1 Overview 356 8.2.2 Environments and quantum operations 357 8.2.3 Operator-sum representation 360 8.2.4 Axiomatic approach to quantum operations 366 8.3 Examples of quantum noise and quantum operations 373 8.3.1 Trace and partial trace 374 8.3.2 Geometric picture of single qubit quantum operations 374 8.3.3 Bit flip and phase flip channels 376 8.3.4 Depolarizing channel 378 8.3.5 Amplitude damping 380 8.3.6 Phase damping 383 8.4 Applications of quantum operations 386 8.4.1 Master equations 386 8.4.2 Quantum process tomography 389 8.5 Limitations of the quantum operations formalism 394 Contents xi 9 Distance measures for quantum information 399 9.1 Distance measures for classical information 399 9.2 How close are two quantum states? 403 9.2.1 Trace distance 403 9.2.2 Fidelity 409 9.2.3 Relationships between distance measures 415 9.3 How well does a quantum channel preserve information? 
416 10 Quantum error-correction 425 10.1 Introduction 426 10.1.1 The three qubit bit flip code 427 10.1.2 Three qubit phase flip code 430 10.2 The Shor code 432 10.3 Theory of quantum error-correction 435 10.3.1 Discretization of the errors 438 10.3.2 Independent error models 441 10.3.3 Degenerate codes 444 10.3.4 The quantum Hamming bound 444 10.4 Constructing quantum codes 445 10.4.1 Classical linear codes 445 10.4.2 Calderbank–Shor–Steane codes 450 10.5 Stabilizer codes 453 10.5.1 The stabilizer formalism 454 10.5.2 Unitary gates and the stabilizer formalism 459 10.5.3 Measurement in the stabilizer formalism 463 10.5.4 The Gottesman–Knill theorem 464 10.5.5 Stabilizer code constructions 464 10.5.6 Examples 467 10.5.7 Standard form for a stabilizer code 470 10.5.8 Quantum circuits for encoding, decoding, and correction 472 10.6 Fault-tolerant quantum computation 474 10.6.1 Fault-tolerance: the big picture 475 10.6.2 Fault-tolerant quantum logic 482 10.6.3 Fault-tolerant measurement 489 10.6.4 Elements of resilient quantum computation 493 11 Entropy and information 500 11.1 Shannon entropy 500 11.2 Basic properties of entropy 502 11.2.1 The binary entropy 502 11.2.2 The relative entropy 504 11.2.3 Conditional entropy and mutual information 505 11.2.4 The data processing inequality 509 11.3 Von Neumann entropy 510 11.3.1 Quantum relative entropy 511 11.3.2 Basic properties of entropy 513 11.3.3 Measurements and entropy 514 xii Contents 11.3.4 Subadditivity 515 11.3.5 Concavity of the entropy 516 11.3.6 The entropy of a mixture of quantum states 518 11.4 Strong subadditivity 519 11.4.1 Proof of strong subadditivity 519 11.4.2 Strong subadditivity: elementary applications 522 12 Quantum information theory 528 12.1 Distinguishing quantum states and the accessible information 529 12.1.1 The Holevo bound 531 12.1.2 Example applications of the Holevo bound 534 12.2 Data compression 536 12.2.1 Shannon’s noiseless channel coding theorem 537 12.2.2 Schumacher’s quantum noiseless channel coding theorem 542 12.3 Classical information over noisy quantum channels 546 12.3.1 Communication over noisy classical channels 548 12.3.2 Communication over noisy quantum channels 554 12.4 Quantum information over noisy quantum channels 561 12.4.1 Entropy exchange and the quantum Fano inequality 561 12.4.2 The quantum data processing inequality 564 12.4.3 Quantum Singleton bound 568 12.4.4 Quantum error-correction, refrigeration and Maxwell’s demon 569 12.5 Entanglement as a physical resource 571 12.5.1 Transforming bi-partite pure state entanglement 573 12.5.2 Entanglement distillation and dilution 578 12.5.3 Entanglement distillation and quantum error-correction 580 12.6 Quantum cryptography 582 12.6.1 Private key cryptography 582 12.6.2 Privacy amplification and information reconciliation 584 12.6.3 Quantum key distribution 586 12.6.4 Privacy and coherent information 592 12.6.5 The security of quantum key distribution 593 Appendices 608 Appendix 1: Notes on basic probability theory 608 Appendix 2: Group theory 610 A2.1 Basic definitions 610 A2.1.1 Generators 611 A2.1.2 Cyclic groups 611 A2.1.3 Cosets 612 A2.2 Representations 612 A2.2.1 Equivalence and reducibility 612 A2.2.2 Orthogonality 613 A2.2.3 The regular representation 614 Contents xiii A2.3 Fourier transforms 615 Appendix 3: The Solovay–Kitaev theorem 617 Appendix 4: Number theory 625 A4.1 Fundamentals 625 A4.2 Modular arithmetic and Euclid’s algorithm 626 A4.3 Reduction of factoring to order-finding 633 A4.4 Continued fractions 635 Appendix 5: Public 
key cryptography and the RSA cryptosystem 640 Appendix 6: Proof of Lieb's theorem 645 Bibliography 649 Index 665

Part I: Fundamental concepts

1 Introduction and overview

Science offers the boldest metaphysics of the age. It is a thoroughly human construct, driven by the faith that if we dream, press to discover, explain, and dream again, thereby plunging repeatedly into new terrain, the world will somehow come clearer and we will grasp the true strangeness of the universe. And the strangeness will all prove to be connected, and make sense. – Edward O. Wilson

Information is physical. – Rolf Landauer

What are the fundamental concepts of quantum computation and quantum information? How did these concepts develop? To what uses may they be put? How will they be presented in this book? The purpose of this introductory chapter is to answer these questions by developing in broad brushstrokes a picture of the field of quantum computation and quantum information. The intent is to communicate a basic understanding of the central concepts of the field, perspective on how they have been developed, and to help you decide how to approach the rest of the book.

Our story begins in Section 1.1 with an account of the historical context in which quantum computation and quantum information has developed. Each remaining section in the chapter gives a brief introduction to one or more fundamental concepts from the field: quantum bits (Section 1.2), quantum computers, quantum gates and quantum circuits (Section 1.3), quantum algorithms (Section 1.4), experimental quantum information processing (Section 1.5), and quantum information and communication (Section 1.6).

Along the way, illustrative and easily accessible developments such as quantum teleportation and some simple quantum algorithms are given, using the basic mathematics taught in this chapter. The presentation is self-contained, and designed to be accessible even without a background in computer science or physics. As we move along, we give pointers to more in-depth discussions in later chapters, where references and suggestions for further reading may also be found. If as you read you're finding the going rough, skip on to a spot where you feel more comfortable. At points we haven't been able to avoid using a little technical lingo which won't be completely explained until later in the book. Simply accept it for now, and come back later when you understand all the terminology in more detail. The emphasis in this first chapter is on the big picture, with the details to be filled in later.

1.1 Global perspectives

Quantum computation and quantum information is the study of the information processing tasks that can be accomplished using quantum mechanical systems. Sounds pretty simple and obvious, doesn't it? Like many simple but profound ideas it was a long time before anybody thought of doing information processing using quantum mechanical systems. To see why this is the case, we must go back in time and look in turn at each of the fields which have contributed fundamental ideas to quantum computation and quantum information – quantum mechanics, computer science, information theory, and cryptography. As we take our short historical tour of these fields, think of yourself first as a physicist, then as a computer scientist, then as an information theorist, and finally as a cryptographer, in order to get some feel for the disparate perspectives which have come together in quantum computation and quantum information.
1.1.1 History of quantum computation and quantum information

Our story begins at the turn of the twentieth century when an unheralded revolution was underway in science. A series of crises had arisen in physics. The problem was that the theories of physics at that time (now dubbed classical physics) were predicting absurdities such as the existence of an 'ultraviolet catastrophe' involving infinite energies, or electrons spiraling inexorably into the atomic nucleus. At first such problems were resolved with the addition of ad hoc hypotheses to classical physics, but as a better understanding of atoms and radiation was gained these attempted explanations became more and more convoluted. The crisis came to a head in the early 1920s after a quarter century of turmoil, and resulted in the creation of the modern theory of quantum mechanics. Quantum mechanics has been an indispensable part of science ever since, and has been applied with enormous success to everything under and inside the Sun, including the structure of the atom, nuclear fusion in stars, superconductors, the structure of DNA, and the elementary particles of Nature.

What is quantum mechanics? Quantum mechanics is a mathematical framework or set of rules for the construction of physical theories. For example, there is a physical theory known as quantum electrodynamics which describes with fantastic accuracy the interaction of atoms and light. Quantum electrodynamics is built up within the framework of quantum mechanics, but it contains specific rules not determined by quantum mechanics. The relationship of quantum mechanics to specific physical theories like quantum electrodynamics is rather like the relationship of a computer's operating system to specific applications software – the operating system sets certain basic parameters and modes of operation, but leaves open how specific tasks are accomplished by the applications.

The rules of quantum mechanics are simple but even experts find them counter-intuitive, and the earliest antecedents of quantum computation and quantum information may be found in the long-standing desire of physicists to better understand quantum mechanics. The best known critic of quantum mechanics, Albert Einstein, went to his grave unreconciled with the theory he helped invent. Generations of physicists since have wrestled with quantum mechanics in an effort to make its predictions more palatable. One of the goals of quantum computation and quantum information is to develop tools which sharpen our intuition about quantum mechanics, and make its predictions more transparent to human minds.

For example, in the early 1980s, interest arose in whether it might be possible to use quantum effects to signal faster than light – a big no-no according to Einstein's theory of relativity. The resolution of this problem turns out to hinge on whether it is possible to clone an unknown quantum state, that is, construct a copy of a quantum state. If cloning were possible, then it would be possible to signal faster than light using quantum effects. However, cloning – so easy to accomplish with classical information (consider the words in front of you, and where they came from!) – turns out not to be possible in general in quantum mechanics. This no-cloning theorem, discovered in the early 1980s, is one of the earliest results of quantum computation and quantum information.
Many refinements of the no-cloning theorem have since been developed, and we now have conceptual tools which allow us to understand how well a (necessarily imperfect) quantum cloning device might work. These tools, in turn, have been applied to understand other aspects of quantum mechanics.

A related historical strand contributing to the development of quantum computation and quantum information is the interest, dating to the 1970s, of obtaining complete control over single quantum systems. Applications of quantum mechanics prior to the 1970s typically involved a gross level of control over a bulk sample containing an enormous number of quantum mechanical systems, none of them directly accessible. For example, superconductivity has a superb quantum mechanical explanation. However, because a superconductor involves a huge (compared to the atomic scale) sample of conducting metal, we can only probe a few aspects of its quantum mechanical nature, with the individual quantum systems constituting the superconductor remaining inaccessible. Systems such as particle accelerators do allow limited access to individual quantum systems, but again provide little control over the constituent systems.

Since the 1970s many techniques for controlling single quantum systems have been developed. For example, methods have been developed for trapping a single atom in an 'atom trap', isolating it from the rest of the world and allowing us to probe many different aspects of its behavior with incredible precision. The scanning tunneling microscope has been used to move single atoms around, creating designer arrays of atoms at will. Electronic devices whose operation involves the transfer of only single electrons have been demonstrated.

Why all this effort to attain complete control over single quantum systems? Setting aside the many technological reasons and concentrating on pure science, the principal answer is that researchers have done this on a hunch. Often the most profound insights in science come when we develop a method for probing a new regime of Nature. For example, the invention of radio astronomy in the 1930s and 1940s led to a spectacular sequence of discoveries, including the galactic core of the Milky Way galaxy, pulsars, and quasars. Low temperature physics has achieved its amazing successes by finding ways to lower the temperatures of different systems. In a similar way, by obtaining complete control over single quantum systems, we are exploring untouched regimes of Nature in the hope of discovering new and unexpected phenomena. We are just now taking our first steps along these lines, and already a few interesting surprises have been discovered in this regime. What else shall we discover as we obtain more complete control over single quantum systems, and extend it to more complex systems?

Quantum computation and quantum information fit naturally into this program. They provide a useful series of challenges at varied levels of difficulty for people devising methods to better manipulate single quantum systems, and stimulate the development of new experimental techniques and provide guidance as to the most interesting directions in which to take experiment. Conversely, the ability to control single quantum systems is essential if we are to harness the power of quantum mechanics for applications to quantum computation and quantum information. Despite this intense interest, efforts to build quantum information processing systems have resulted in modest success to date.
Small quantum computers, capable of doing dozens of operations on a few qubits, represent the state of the art in quantum computation. Experimental prototypes for doing quantum cryptography – a way of communicating in secret across long distances – have been demonstrated, and are even at the level where they may be useful for some real-world applications. However, it remains a great challenge to physicists and engineers of the future to develop techniques for making large-scale quantum information processing a reality.

Let us turn our attention from quantum mechanics to another of the great intellectual triumphs of the twentieth century, computer science. The origins of computer science are lost in the depths of history. For example, cuneiform tablets indicate that by the time of Hammurabi (circa 1750 B.C.) the Babylonians had developed some fairly sophisticated algorithmic ideas, and it is likely that many of those ideas date to even earlier times.

The modern incarnation of computer science was announced by the great mathematician Alan Turing in a remarkable 1936 paper. Turing developed in detail an abstract notion of what we would now call a programmable computer, a model for computation now known as the Turing machine, in his honor. Turing showed that there is a Universal Turing Machine that can be used to simulate any other Turing machine. Furthermore, he claimed that the Universal Turing Machine completely captures what it means to perform a task by algorithmic means. That is, if an algorithm can be performed on any piece of hardware (say, a modern personal computer), then there is an equivalent algorithm for a Universal Turing Machine which performs exactly the same task as the algorithm running on the personal computer. This assertion, known as the Church–Turing thesis in honor of Turing and another pioneer of computer science, Alonzo Church, asserts the equivalence between the physical concept of what class of algorithms can be performed on some physical device with the rigorous mathematical concept of a Universal Turing Machine. The broad acceptance of this thesis laid the foundation for the development of a rich theory of computer science.

Not long after Turing's paper, the first computers constructed from electronic components were developed. John von Neumann developed a simple theoretical model for how to put together in a practical fashion all the components necessary for a computer to be fully as capable as a Universal Turing Machine. Hardware development truly took off, though, in 1947, when John Bardeen, Walter Brattain, and Will Shockley developed the transistor. Computer hardware has grown in power at an amazing pace ever since, so much so that the growth was codified by Gordon Moore in 1965 in what has come to be known as Moore's law, which states that computer power will double for constant cost roughly once every two years. Amazingly enough, Moore's law has approximately held true in the decades since the 1960s. Nevertheless, most observers expect that this dream run will end some time during the first two decades of the twenty-first century. Conventional approaches to the fabrication of computer technology are beginning to run up against fundamental difficulties of size. Quantum effects are beginning to interfere in the functioning of electronic devices as they are made smaller and smaller. One possible solution to the problem posed by the eventual failure of Moore's law is to move to a different computing paradigm.
One such paradigm is provided by the theory of quantum computation, which is based on the idea of using quantum mechanics to perform computations, instead of classical physics. It turns out that while an ordinary computer can be used to simulate a quantum computer, it appears to be impossible to perform the simulation in an efficient fashion. Thus quantum computers offer an essential speed advantage over classical computers. This speed advantage is so significant that many researchers believe that no conceivable amount of progress in classical computation would be able to overcome the gap between the power of a classical computer and the power of a quantum computer.

What do we mean by 'efficient' versus 'inefficient' simulations of a quantum computer? Many of the key notions needed to answer this question were actually invented before the notion of a quantum computer had even arisen. In particular, the idea of efficient and inefficient algorithms was made mathematically precise by the field of computational complexity. Roughly speaking, an efficient algorithm is one which runs in time polynomial in the size of the problem solved. In contrast, an inefficient algorithm requires super-polynomial (typically exponential) time. What was noticed in the late 1960s and early 1970s was that it seemed as though the Turing machine model of computation was at least as powerful as any other model of computation, in the sense that a problem which could be solved efficiently in some model of computation could also be solved efficiently in the Turing machine model, by using the Turing machine to simulate the other model of computation. This observation was codified into a strengthened version of the Church–Turing thesis:

Any algorithmic process can be simulated efficiently using a Turing machine.

The key strengthening in the strong Church–Turing thesis is the word efficiently. If the strong Church–Turing thesis is correct, then it implies that no matter what type of machine we use to perform our algorithms, that machine can be simulated efficiently using a standard Turing machine. This is an important strengthening, as it implies that for the purposes of analyzing whether a given computational task can be accomplished efficiently, we may restrict ourselves to the analysis of the Turing machine model of computation.

One class of challenges to the strong Church–Turing thesis comes from the field of analog computation. In the years since Turing, many different teams of researchers have noticed that certain types of analog computers can efficiently solve problems believed to have no efficient solution on a Turing machine. At first glance these analog computers appear to violate the strong form of the Church–Turing thesis. Unfortunately for analog computation, it turns out that when realistic assumptions about the presence of noise in analog computers are made, their power disappears in all known instances; they cannot efficiently solve problems which are not efficiently solvable on a Turing machine. This lesson – that the effects of realistic noise must be taken into account in evaluating the efficiency of a computational model – was one of the great early challenges of quantum computation and quantum information, a challenge successfully met by the development of a theory of quantum error-correcting codes and fault-tolerant quantum computation.
Thus, unlike analog computation, quantum computation can in principle tolerate a finite amount of noise and still retain its computational advantages.

The first major challenge to the strong Church–Turing thesis arose in the mid 1970s, when Robert Solovay and Volker Strassen showed that it is possible to test whether an integer is prime or composite using a randomized algorithm. That is, the Solovay–Strassen test for primality used randomness as an essential part of the algorithm. The algorithm did not determine whether a given integer was prime or composite with certainty. Instead, the algorithm could determine that a number was probably prime or else composite with certainty. By repeating the Solovay–Strassen test a few times it is possible to determine with near certainty whether a number is prime or composite. Of especial interest at the time the Solovay–Strassen test was proposed was that no efficient deterministic test for primality was known. Thus, it seemed as though computers with access to a random number generator would be able to efficiently perform computational tasks with no efficient solution on a conventional deterministic Turing machine. This discovery inspired a search for other randomized algorithms which has paid off handsomely, with the field blossoming into a thriving area of research.
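As a rough illustration of the kind of randomized primality test described above (a minimal sketch in Python along the lines of Solovay and Strassen's idea, not their original presentation), the test below declares a number 'probably prime' when random bases pass Euler's criterion, and repeating it drives the error probability down exponentially:

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """Return False if n is certainly composite, True if probably prime."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = jacobi(a, n)
        # Euler's criterion: for prime n, a^((n-1)/2) = (a/n) mod n.
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False          # witness found: certainly composite
    return True                    # probably prime (error prob. <= 2**-rounds)

print(solovay_strassen(561))        # 561 = 3 * 11 * 17: composite, almost surely detected
print(solovay_strassen(2**61 - 1))  # a known Mersenne prime: True
```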
Randomized algorithms pose a challenge to the strong Church–Turing thesis, suggesting that there are efficiently soluble problems which, nevertheless, cannot be efficiently solved on a deterministic Turing machine. This challenge appears to be easily resolved by a simple modification of the strong Church–Turing thesis:

Any algorithmic process can be simulated efficiently using a probabilistic Turing machine.

This ad hoc modification of the strong Church–Turing thesis should leave you feeling rather queasy. Might it not turn out at some later date that yet another model of computation allows one to efficiently solve problems that are not efficiently soluble within Turing's model of computation? Is there any way we can find a single model of computation which is guaranteed to be able to efficiently simulate any other model of computation? Motivated by this question, in 1985 David Deutsch asked whether the laws of physics could be used to derive an even stronger version of the Church–Turing thesis. Instead of adopting ad hoc hypotheses, Deutsch looked to physical theory to provide a foundation for the Church–Turing thesis that would be as secure as the status of that physical theory. In particular, Deutsch attempted to define a computational device that would be capable of efficiently simulating an arbitrary physical system. Because the laws of physics are ultimately quantum mechanical, Deutsch was naturally led to consider computing devices based upon the principles of quantum mechanics. These devices, quantum analogues of the machines defined forty-nine years earlier by Turing, led ultimately to the modern conception of a quantum computer used in this book.

At the time of writing it is not clear whether Deutsch's notion of a Universal Quantum Computer is sufficient to efficiently simulate an arbitrary physical system. Proving or refuting this conjecture is one of the great open problems of the field of quantum computation and quantum information. It is possible, for example, that some effect of quantum field theory or an even more esoteric effect based in string theory, quantum gravity or some other physical theory may take us beyond Deutsch's Universal Quantum Computer, giving us a still more powerful model for computation. At this stage, we simply don't know.

What Deutsch's model of a quantum computer did enable was a challenge to the strong form of the Church–Turing thesis. Deutsch asked whether it is possible for a quantum computer to efficiently solve computational problems which have no efficient solution on a classical computer, even a probabilistic Turing machine. He then constructed a simple example suggesting that, indeed, quantum computers might have computational powers exceeding those of classical computers. This remarkable first step taken by Deutsch was improved in the subsequent decade by many people, culminating in Peter Shor's 1994 demonstration that two enormously important problems – the problem of finding the prime factors of an integer, and the so-called 'discrete logarithm' problem – could be solved efficiently on a quantum computer. This attracted widespread interest because these two problems were and still are widely believed to have no efficient solution on a classical computer. Shor's results are a powerful indication that quantum computers are more powerful than Turing machines, even probabilistic Turing machines.

Further evidence for the power of quantum computers came in 1995 when Lov Grover showed that another important problem – the problem of conducting a search through some unstructured search space – could also be sped up on a quantum computer. While Grover's algorithm did not provide as spectacular a speed-up as Shor's algorithms, the widespread applicability of search-based methodologies has excited considerable interest in Grover's algorithm.

At about the same time as Shor's and Grover's algorithms were discovered, many people were developing an idea Richard Feynman had suggested in 1982. Feynman had pointed out that there seemed to be essential difficulties in simulating quantum mechanical systems on classical computers, and suggested that building computers based on the principles of quantum mechanics would allow us to avoid those difficulties. In the 1990s several teams of researchers began fleshing this idea out, showing that it is indeed possible to use quantum computers to efficiently simulate systems that have no known efficient simulation on a classical computer. It is likely that one of the major applications of quantum computers in the future will be performing simulations of quantum mechanical systems too difficult to simulate on a classical computer, a problem with profound scientific and technological implications.

What other problems can quantum computers solve more quickly than classical computers? The short answer is that we don't know. Coming up with good quantum algorithms seems to be hard. A pessimist might think that's because there's nothing quantum computers are good for other than the applications already discovered! We take a different view. Algorithm design for quantum computers is hard because designers face two difficult problems not faced in the construction of algorithms for classical computers. First, our human intuition is rooted in the classical world. If we use that intuition as an aid to the construction of algorithms, then the algorithmic ideas we come up with will be classical ideas.
To design good quantum algorithms one must 'turn off' one's classical intuition for at least part of the design process, using truly quantum effects to achieve the desired algorithmic end. Second, to be truly interesting it is not enough to design an algorithm that is merely quantum mechanical. The algorithm must be better than any existing classical algorithm! Thus, it is possible that one may find an algorithm which makes use of truly quantum aspects of quantum mechanics, that is nevertheless not of widespread interest because classical algorithms with comparable performance characteristics exist. The combination of these two problems makes the construction of new quantum algorithms a challenging problem for the future.

Even more broadly, we can ask if there are any generalizations we can make about the power of quantum computers versus classical computers. What is it that makes quantum computers more powerful than classical computers – assuming that this is indeed the case? What class of problems can be solved efficiently on a quantum computer, and how does that class compare to the class of problems that can be solved efficiently on a classical computer? One of the most exciting things about quantum computation and quantum information is how little is known about the answers to these questions! It is a great challenge for the future to understand these questions better.

Having come up to the frontier of quantum computation, let's switch to the history of another strand of thought contributing to quantum computation and quantum information: information theory. At the same time computer science was exploding in the 1940s, another revolution was taking place in our understanding of communication. In 1948 Claude Shannon published a remarkable pair of papers laying the foundations for the modern theory of information and communication. Perhaps the key step taken by Shannon was to mathematically define the concept of information. In many mathematical sciences there is considerable flexibility in the choice of fundamental definitions. Try thinking naively for a few minutes about the following question: how would you go about mathematically defining the notion of an information source? Several different answers to this problem have found widespread use; however, the definition Shannon came up with seems to be far and away the most fruitful in terms of increased understanding, leading to a plethora of deep results and a theory with a rich structure which seems to accurately reflect many (though not all) real-world communications problems.

Shannon was interested in two key questions related to the communication of information over a communications channel. First, what resources are required to send information over a communications channel? For example, telephone companies need to know how much information they can reliably transmit over a given telephone cable. Second, can information be transmitted in such a way that it is protected against noise in the communications channel?

Shannon answered these two questions by proving the two fundamental theorems of information theory. The first, Shannon's noiseless channel coding theorem, quantifies the physical resources required to store the output from an information source. Shannon's second fundamental theorem, the noisy channel coding theorem, quantifies how much information it is possible to reliably transmit through a noisy communications channel.
To achieve reliable transmission in the presence of noise, Shannon showed that error-correcting codes could be used to protect the information being sent. Shannon's noisy channel coding theorem gives an upper limit on the protection afforded by error-correcting codes. Unfortunately, Shannon's theorem does not explicitly give a practically useful set of error-correcting codes to achieve that limit. From the time of Shannon's papers until today, researchers have constructed more and better classes of error-correcting codes in their attempts to come closer to the limit set by Shannon's theorem. A sophisticated theory of error-correcting codes now exists offering the user a plethora of choices in their quest to design a good error-correcting code. Such codes are used in a multitude of places including, for example, compact disc players, computer modems, and satellite communications systems.

Quantum information theory has followed with similar developments. In 1995, Ben Schumacher provided an analogue to Shannon's noiseless coding theorem, and in the process defined the 'quantum bit' or 'qubit' as a tangible physical resource. However, no analogue to Shannon's noisy channel coding theorem is yet known for quantum information. Nevertheless, in analogy to their classical counterparts, a theory of quantum error-correction has been developed which, as already mentioned, allows quantum computers to compute effectively in the presence of noise, and also allows communication over noisy quantum channels to take place reliably.

Indeed, classical ideas of error-correction have proved to be enormously important in developing and understanding quantum error-correcting codes. In 1996, two groups working independently, Robert Calderbank and Peter Shor, and Andrew Steane, discovered an important class of quantum codes now known as CSS codes after their initials. This work has since been subsumed by the stabilizer codes, independently discovered by Robert Calderbank, Eric Rains, Peter Shor and Neil Sloane, and by Daniel Gottesman. By building upon the basic ideas of classical linear coding theory, these discoveries greatly facilitated a rapid understanding of quantum error-correcting codes and their application to quantum computation and quantum information.

The theory of quantum error-correcting codes was developed to protect quantum states against noise. What about transmitting ordinary classical information using a quantum channel? How efficiently can this be done? A few surprises have been discovered in this arena. In 1992 Charles Bennett and Stephen Wiesner explained how to transmit two classical bits of information, while only transmitting one quantum bit from sender to receiver, a result dubbed superdense coding.

Even more interesting are the results in distributed quantum computation. Imagine you have two computers networked, trying to solve a particular problem. How much communication is required to solve the problem? Recently it has been shown that quantum computers can require exponentially less communication to solve certain problems than would be required if the networked computers were classical! Unfortunately, as yet these problems are not especially important in a practical setting, and suffer from some undesirable technical restrictions.
A major challenge for the future of quantum computation and quantum information is to find problems of real-world importance for which distributed quantum computation offers a substantial advantage over distributed classical computation.

Let's return to information theory proper. The study of information theory begins with the properties of a single communications channel. In applications we often do not deal with a single communications channel, but rather with networks of many channels. The subject of networked information theory deals with the information carrying properties of such networks of communications channels, and has been developed into a rich and intricate subject. By contrast, the study of networked quantum information theory is very much in its infancy. Even for very basic questions we know little about the information carrying abilities of networks of quantum channels. Several rather striking preliminary results have been found in the past few years; however, no unifying theory of networked information theory exists for quantum channels. One example of networked quantum information theory should suffice to convince you of the value such a general theory would have. Imagine that we are attempting to send quantum information from Alice to Bob through a noisy quantum channel. If that channel has zero capacity for quantum information, then it is impossible to reliably send any information from Alice to Bob. Imagine instead that we consider two copies of the channel, operating in synchrony. Intuitively it is clear (and can be rigorously justified) that such a channel also has zero capacity to send quantum information. However, if we instead reverse the direction of one of the channels, as illustrated in Figure 1.1, it turns out that sometimes we can obtain a non-zero capacity for the transmission of information from Alice to Bob! Counter-intuitive properties like this illustrate the strange nature of quantum information. Better understanding the information carrying properties of networks of quantum channels is a major open problem of quantum computation and quantum information.

Figure 1.1. Classically, if we have two very noisy channels of zero capacity running side by side, then the combined channel has zero capacity to send information. Not surprisingly, if we reverse the direction of one of the channels, we still have zero capacity to send information. Quantum mechanically, reversing one of the zero capacity channels can actually allow us to send information!

Let's switch fields one last time, moving to the venerable old art and science of cryptography. Broadly speaking, cryptography is the problem of doing communication or computation involving two or more parties who may not trust one another. The best known cryptographic problem is the transmission of secret messages. Suppose two parties wish to communicate in secret. For example, you may wish to give your credit card number to a merchant in exchange for goods, hopefully without any malevolent third party intercepting your credit card number. The way this is done is to use a cryptographic protocol. We'll describe in detail how cryptographic protocols work later in the book, but for now it will suffice to make a few simple distinctions. The most important distinction is between private key cryptosystems and public key cryptosystems. The way a private key cryptosystem works is that two parties, 'Alice' and 'Bob', wish to communicate by sharing a private key, which only they know.
The exact form of the key doesn't matter at this point – think of a string of zeroes and ones. The point is that this key is used by Alice to encrypt the information she wishes to send to Bob. After Alice encrypts she sends the encrypted information to Bob, who must now recover the original information. Exactly how Alice encrypts the message depends upon the private key, so that to recover the original message Bob needs to know the private key, in order to undo the transformation Alice applied.

Unfortunately, private key cryptosystems have some severe problems in many contexts. The most basic problem is how to distribute the keys. In many ways, the key distribution problem is just as difficult as the original problem of communicating in private – a malevolent third party may be eavesdropping on the key distribution, and then use the intercepted key to decrypt some of the message transmission.

One of the earliest discoveries in quantum computation and quantum information was that quantum mechanics can be used to do key distribution in such a way that Alice and Bob's security cannot be compromised. This procedure is known as quantum cryptography or quantum key distribution. The basic idea is to exploit the quantum mechanical principle that observation in general disturbs the system being observed. Thus, if there is an eavesdropper listening in as Alice and Bob attempt to transmit their key, the presence of the eavesdropper will be visible as a disturbance of the communications channel Alice and Bob are using to establish the key. Alice and Bob can then throw out the key bits established while the eavesdropper was listening in, and start over. The first quantum cryptographic ideas were proposed by Stephen Wiesner in the late 1960s, but unfortunately were not accepted for publication! In 1984 Charles Bennett and Gilles Brassard, building on Wiesner's earlier work, proposed a protocol using quantum mechanics to distribute keys between Alice and Bob, without any possibility of a compromise. Since then numerous quantum cryptographic protocols have been proposed, and experimental prototypes developed. At the time of this writing, the experimental prototypes are nearing the stage where they may be useful in limited-scale real-world applications.

The second major type of cryptosystem is the public key cryptosystem. Public key cryptosystems don't rely on Alice and Bob sharing a secret key in advance. Instead, Bob simply publishes a 'public key', which is made available to the general public. Alice can make use of this public key to encrypt a message which she sends to Bob. What is interesting is that a third party cannot use Bob's public key to decrypt the message! Strictly speaking, we shouldn't say cannot. Rather, the encryption transformation is chosen in a very clever and non-trivial way so that it is extremely difficult (though not impossible) to invert, given only knowledge of the public key. To make inversion easy, Bob has a secret key matched to his public key, which together enable him to easily perform the decryption. This secret key is not known to anybody other than Bob, who can therefore be confident that only he can read the contents of Alice's transmission, to the extent that it is unlikely that anybody else has the computational power to invert the encryption, given only the public key. Public key cryptosystems solve the key distribution problem by making it unnecessary for Alice and Bob to share a private key before communicating.
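To make the private key idea described above concrete, here is a minimal sketch of a one-time-pad style private key scheme in Python. It is an illustration only, not a protocol from this text: the XOR rule, the helper names encrypt and decrypt, and the 32-byte key are all choices made for the example.

```python
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte (one-time-pad style)."""
    assert len(key) >= len(message), "the shared key must be at least as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so Bob undoes Alice's transformation with the same key."""
    return encrypt(ciphertext, key)

# Alice and Bob share a random private key in advance; distributing it securely
# is exactly the key distribution problem discussed above.
key = secrets.token_bytes(32)
ciphertext = encrypt(b"meet at noon", key)
assert decrypt(ciphertext, key) == b"meet at noon"
```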
Rather remarkably, public key cryptography did not achieve widespread use until the mid-1970s, when it was proposed independently by Whitfield Diffie and Martin Hellman, and by Ralph Merkle, revolutionizing the field of cryptography. A little later, Ronald Rivest, Adi Shamir, and Leonard Adleman developed the RSA cryptosystem, which at the time of writing is the most widely deployed public key cryptosystem, believed to offer a fine balance of security and practical usability. In 1997 it was disclosed that these ideas – public key cryptography, the Diffie–Hellman and RSA cryptosystems – were actually invented in the late 1960s and early 1970s by researchers working at the British intelligence agency GCHQ.

The key to the security of public key cryptosystems is that it should be difficult to invert the encryption stage if only the public key is available. For example, it turns out that inverting the encryption stage of RSA is a problem closely related to factoring. Much of the presumed security of RSA comes from the belief that factoring is a problem hard to solve on a classical computer. However, Shor's fast algorithm for factoring on a quantum computer could be used to break RSA! Similarly, there are other public key cryptosystems which can be broken if a fast algorithm for solving the discrete logarithm problem – like Shor's quantum algorithm for discrete logarithm – were known. This practical application of quantum computers to the breaking of cryptographic codes has excited much of the interest in quantum computation and quantum information.

We have been looking at the historical antecedents for quantum computation and quantum information. Of course, as the field has grown and matured, it has sprouted its own subfields of research, whose antecedents lie mainly within quantum computation and quantum information. Perhaps the most striking of these is the study of quantum entanglement. Entanglement is a uniquely quantum mechanical resource that plays a key role in many of the most interesting applications of quantum computation and quantum information; entanglement is iron to the classical world's bronze age. In recent years there has been a tremendous effort trying to better understand the properties of entanglement considered as a fundamental resource of Nature, of comparable importance to energy, information, entropy, or any other fundamental resource. Although there is as yet no complete theory of entanglement, some progress has been made in understanding this strange property of quantum mechanics. It is hoped by many researchers that further study of the properties of entanglement will yield insights that facilitate the development of new applications in quantum computation and quantum information.

1.1.2 Future directions

We've looked at some of the history and present status of quantum computation and quantum information. What of the future? What can quantum computation and quantum information offer to science, to technology, and to humanity? What benefits does quantum computation and quantum information confer upon its parent fields of computer science, information theory, and physics? What are the key open problems of quantum computation and quantum information? We will make a few very brief remarks about these overarching questions before moving onto more detailed investigations.
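Returning for a moment to the RSA discussion above, a hedged toy example makes the link to factoring concrete. The sketch below uses the standard textbook parameters p = 61 and q = 53, which are an illustrative choice and not parameters from this text; anyone who can factor n = 3233 back into 61 × 53 can recompute the private exponent and decrypt.

```python
# Toy RSA with tiny primes -- insecure by construction, for illustration only.
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: 2753 (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)     # encryption: m^e mod n, using only the public key (e, n)
recovered = pow(ciphertext, d, n)   # decryption: c^d mod n, using the private key d
assert recovered == message

# Breaking the scheme is essentially factoring: recover p and q from n, then recompute d.
```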
Quantum computation and quantum information has taught us to think physically about computation, and we have discovered that this approach yields many new and exciting capabilities for information processing and communication. Computer scientists and information theorists have been gifted with a new and rich paradigm for exploration. Indeed, in the broadest terms we have learned that any physical theory, not just quantum mechanics, may be used as the basis for a theory of information processing and communication. The fruits of these explorations may one day result in information processing devices with capabilities far beyond today's computing and communications systems, with concomitant benefits and drawbacks for society as a whole.

Quantum computation and quantum information certainly offer challenges aplenty to physicists, but it is perhaps a little subtle what quantum computation and quantum information offers to physics in the long term. We believe that just as we have learned to think physically about computation, we can also learn to think computationally about physics. Whereas physics has traditionally been a discipline focused on understanding 'elementary' objects and simple systems, many interesting aspects of Nature arise only when things become larger and more complicated. Chemistry and engineering deal with such complexity to some extent, but most often in a rather ad hoc fashion. One of the messages of quantum computation and information is that new tools are available for traversing the gulf between the small and the relatively complex: computation and algorithms provide systematic means for constructing and understanding such systems. Applying ideas from these fields is already beginning to yield new insights into physics. It is our hope that this perspective will blossom in years to come into a fruitful way of understanding all aspects of physics.

We've briefly examined some of the key motivations and ideas underlying quantum computation and quantum information. Over the remaining sections of this chapter we give a more technical but still accessible introduction to these motivations and ideas, with the hope of giving you a bird's-eye view of the field as it is presently poised.

1.2 Quantum bits

The bit is the fundamental concept of classical computation and classical information. Quantum computation and quantum information are built upon an analogous concept, the quantum bit, or qubit for short. In this section we introduce the properties of single and multiple qubits, comparing and contrasting their properties to those of classical bits.

What is a qubit? We're going to describe qubits as mathematical objects with certain specific properties. 'But hang on', you say, 'I thought qubits were physical objects.' It's true that qubits, like bits, are realized as actual physical systems, and in Section 1.5 and Chapter 7 we describe in detail how this connection between the abstract mathematical point of view and real systems is made. However, for the most part we treat qubits as abstract mathematical objects. The beauty of treating qubits as abstract entities is that it gives us the freedom to construct a general theory of quantum computation and quantum information which does not depend upon a specific system for its realization.

What then is a qubit? Just as a classical bit has a state – either 0 or 1 – a qubit also has a state.
Two possible states for a qubit are the states |0⟩ and |1⟩, which as you might guess correspond to the states 0 and 1 for a classical bit. Notation like '| ⟩' is called the Dirac notation, and we'll be seeing it often, as it's the standard notation for states in quantum mechanics. The difference between bits and qubits is that a qubit can be in a state other than |0⟩ or |1⟩. It is also possible to form linear combinations of states, often called superpositions:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle. \tag{1.1}$$

The numbers α and β are complex numbers, although for many purposes not much is lost by thinking of them as real numbers. Put another way, the state of a qubit is a vector in a two-dimensional complex vector space. The special states |0⟩ and |1⟩ are known as computational basis states, and form an orthonormal basis for this vector space.

We can examine a bit to determine whether it is in the state 0 or 1. For example, computers do this all the time when they retrieve the contents of their memory. Rather remarkably, we cannot examine a qubit to determine its quantum state, that is, the values of α and β. Instead, quantum mechanics tells us that we can only acquire much more restricted information about the quantum state. When we measure a qubit we get either the result 0, with probability $|\alpha|^2$, or the result 1, with probability $|\beta|^2$. Naturally, $|\alpha|^2 + |\beta|^2 = 1$, since the probabilities must sum to one. Geometrically, we can interpret this as the condition that the qubit's state be normalized to length 1. Thus, in general a qubit's state is a unit vector in a two-dimensional complex vector space.

This dichotomy between the unobservable state of a qubit and the observations we can make lies at the heart of quantum computation and quantum information. In most of our abstract models of the world, there is a direct correspondence between elements of the abstraction and the real world, just as an architect's plans for a building are in correspondence with the final building. The lack of this direct correspondence in quantum mechanics makes it difficult to intuit the behavior of quantum systems; however, there is an indirect correspondence, for qubit states can be manipulated and transformed in ways which lead to measurement outcomes which depend distinctly on the different properties of the state. Thus, these quantum states have real, experimentally verifiable consequences, which we shall see are essential to the power of quantum computation and quantum information.

The ability of a qubit to be in a superposition state runs counter to our 'common sense' understanding of the physical world around us. A classical bit is like a coin: either heads or tails up. For imperfect coins, there may be intermediate states like having it balanced on an edge, but those can be disregarded in the ideal case. By contrast, a qubit can exist in a continuum of states between |0⟩ and |1⟩ – until it is observed. Let us emphasize again that when a qubit is measured, it only ever gives '0' or '1' as the measurement result – probabilistically. For example, a qubit can be in the state

$$\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle, \tag{1.2}$$

which, when measured, gives the result 0 fifty percent ($|1/\sqrt{2}|^2$) of the time, and the result 1 fifty percent of the time. We will return often to this state, which is sometimes denoted |+⟩.
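A minimal numerical sketch of the ideas above may help; it is an illustration, not code from the text. A qubit is stored as a length-2 complex vector (α, β), and a computational-basis measurement returns 0 or 1 with probabilities $|\alpha|^2$ and $|\beta|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state: np.ndarray) -> int:
    """Simulate a computational-basis measurement of a single qubit."""
    probs = np.abs(state) ** 2            # |alpha|^2 and |beta|^2
    assert np.isclose(probs.sum(), 1.0)   # the normalization condition
    return int(rng.choice(2, p=probs))

plus = np.array([1, 1]) / np.sqrt(2)      # the state of Equation (1.2), often written |+>
samples = [measure(plus) for _ in range(10_000)]
print(sum(samples) / len(samples))        # close to 0.5, since |1/sqrt(2)|^2 = 1/2
```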
Despite this strangeness, qubits are decidedly real, their existence and behavior extensively validated by experiments (discussed in Section 1.5 and Chapter 7), and many different physical systems can be used to realize qubits. To get a concrete feel for how a qubit can be realized it may be helpful to list some of the ways this realization may occur: as the two different polarizations of a photon; as the alignment of a nuclear spin in a uniform magnetic field; as two states of an electron orbiting a single atom such as shown in Figure 1.2. In the atom model, the electron can exist in either the so-called 'ground' or 'excited' states, which we'll call |0⟩ and |1⟩, respectively. By shining light on the atom, with appropriate energy and for an appropriate length of time, it is possible to move the electron from the |0⟩ state to the |1⟩ state and vice versa. But more interestingly, by reducing the time we shine the light, an electron initially in the state |0⟩ can be moved 'halfway' between |0⟩ and |1⟩, into the |+⟩ state.

Figure 1.2. Qubit represented by two electronic levels in an atom.

Naturally, a great deal of attention has been given to the 'meaning' or 'interpretation' that might be attached to superposition states, and of the inherently probabilistic nature of observations on quantum systems. However, by and large, we shall not concern ourselves with such discussions in this book. Instead, our intent will be to develop mathematical and conceptual pictures which are predictive.

One picture useful in thinking about qubits is the following geometric representation. Because $|\alpha|^2 + |\beta|^2 = 1$, we may rewrite Equation (1.1) as

$$|\psi\rangle = e^{i\gamma}\left(\cos\frac{\theta}{2}|0\rangle + e^{i\varphi}\sin\frac{\theta}{2}|1\rangle\right), \tag{1.3}$$

where θ, ϕ and γ are real numbers. In Chapter 2 we will see that we can ignore the factor of $e^{i\gamma}$ out the front, because it has no observable effects, and for that reason we can effectively write

$$|\psi\rangle = \cos\frac{\theta}{2}|0\rangle + e^{i\varphi}\sin\frac{\theta}{2}|1\rangle. \tag{1.4}$$

The numbers θ and ϕ define a point on the unit three-dimensional sphere, as shown in Figure 1.3. This sphere is often called the Bloch sphere; it provides a useful means of visualizing the state of a single qubit, and often serves as an excellent testbed for ideas about quantum computation and quantum information. Many of the operations on single qubits which we describe later in this chapter are neatly described within the Bloch sphere picture. However, it must be kept in mind that this intuition is limited because there is no simple generalization of the Bloch sphere known for multiple qubits.

Figure 1.3. Bloch sphere representation of a qubit.

How much information is represented by a qubit? Paradoxically, there are an infinite number of points on the unit sphere, so that in principle one could store an entire text of Shakespeare in the infinite binary expansion of θ. However, this conclusion turns out to be misleading, because of the behavior of a qubit when observed. Recall that measurement of a qubit will give only either 0 or 1. Furthermore, measurement changes the state of a qubit, collapsing it from its superposition of |0⟩ and |1⟩ to the specific state consistent with the measurement result. For example, if measurement of |+⟩ gives 0, then the post-measurement state of the qubit will be |0⟩. Why does this type of collapse occur? Nobody knows. As discussed in Chapter 2, this behavior is simply one of the fundamental postulates of quantum mechanics.
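The Bloch-sphere parametrization of Equation (1.4) is easy to check numerically. The short sketch below is an illustration under that convention, not code from the text: it maps angles (θ, ϕ) to a state vector and confirms the state is normalized for any choice of angles.

```python
import numpy as np

def bloch_state(theta: float, phi: float) -> np.ndarray:
    """Return cos(theta/2)|0> + e^{i phi} sin(theta/2)|1> as a 2-vector, cf. Equation (1.4)."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

# theta = pi/2, phi = 0 lies on the equator of the Bloch sphere: the state (|0>+|1>)/sqrt(2).
state = bloch_state(np.pi / 2, 0.0)
print(state)                                    # approximately [0.7071, 0.7071]
print(np.isclose(np.linalg.norm(state), 1.0))   # unit length, as required
```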
What is relevant for our purposes is that from a single measurement one obtains only a single bit of information about the state of the qubit, thus resolving the apparent paradox. It turns out that only if infinitely many identically prepared qubits were measured would one be able to determine α and β for a qubit in the state given in Equation (1.1).

But an even more interesting question to ask might be: how much information is represented by a qubit if we do not measure it? This is a trick question, because how can one quantify information if it cannot be measured? Nevertheless, there is something conceptually important here, because when Nature evolves a closed quantum system of qubits, not performing any 'measurements', she apparently does keep track of all the continuous variables describing the state, like α and β. In a sense, in the state of a qubit, Nature conceals a great deal of 'hidden information'. And even more interestingly, we will see shortly that the potential amount of this extra 'information' grows exponentially with the number of qubits. Understanding this hidden quantum information is a question that we grapple with for much of this book, and which lies at the heart of what makes quantum mechanics a powerful tool for information processing.

1.2.1 Multiple qubits

Hilbert space is a big place. – Carlton Caves

Suppose we have two qubits. If these were two classical bits, then there would be four possible states, 00, 01, 10, and 11. Correspondingly, a two qubit system has four computational basis states denoted |00⟩, |01⟩, |10⟩, |11⟩. A pair of qubits can also exist in superpositions of these four states, so the quantum state of two qubits involves associating a complex coefficient – sometimes called an amplitude – with each computational basis state, such that the state vector describing the two qubits is

$$|\psi\rangle = \alpha_{00}|00\rangle + \alpha_{01}|01\rangle + \alpha_{10}|10\rangle + \alpha_{11}|11\rangle. \tag{1.5}$$

Similar to the case for a single qubit, the measurement result x (= 00, 01, 10 or 11) occurs with probability $|\alpha_x|^2$, with the state of the qubits after the measurement being |x⟩. The condition that probabilities sum to one is therefore expressed by the normalization condition that $\sum_{x \in \{0,1\}^2} |\alpha_x|^2 = 1$, where the notation '$\{0,1\}^2$' means 'the set of strings of length two with each letter being either zero or one'. For a two qubit system, we could measure just a subset of the qubits, say the first qubit, and you can probably guess how this works: measuring the first qubit alone gives 0 with probability $|\alpha_{00}|^2 + |\alpha_{01}|^2$, leaving the post-measurement state

$$|\psi'\rangle = \frac{\alpha_{00}|00\rangle + \alpha_{01}|01\rangle}{\sqrt{|\alpha_{00}|^2 + |\alpha_{01}|^2}}. \tag{1.6}$$

Note how the post-measurement state is re-normalized by the factor $\sqrt{|\alpha_{00}|^2 + |\alpha_{01}|^2}$ so that it still satisfies the normalization condition, just as we expect for a legitimate quantum state.

An important two qubit state is the Bell state or EPR pair,

$$\frac{|00\rangle + |11\rangle}{\sqrt{2}}. \tag{1.7}$$

This innocuous-looking state is responsible for many surprises in quantum computation and quantum information. It is the key ingredient in quantum teleportation and superdense coding, which we'll come to in Section 1.3.7 and Section 2.3, respectively, and the prototype for many other interesting quantum states. The Bell state has the property that upon measuring the first qubit, one obtains two possible results: 0 with probability 1/2, leaving the post-measurement state |ϕ′⟩ = |00⟩, and 1 with probability 1/2, leaving |ϕ′⟩ = |11⟩.
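To make the two-qubit formulas concrete, here is a small sketch (an illustration with hypothetical helper names, not code from the text) that builds the Bell state of Equation (1.7), measures the first qubit, and renormalizes the post-measurement state as in Equation (1.6).

```python
import numpy as np

rng = np.random.default_rng(1)

# Amplitudes for the computational basis |00>, |01>, |10>, |11>, in that order.
bell = np.zeros(4, dtype=complex)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)      # (|00> + |11>)/sqrt(2)

def measure_first_qubit(state: np.ndarray):
    """Measure qubit one of a two-qubit state; return (outcome, post-measurement state)."""
    amps = state.reshape(2, 2)                # amps[a, b] is the amplitude of |ab>
    p0 = float(np.sum(np.abs(amps[0]) ** 2))  # |a_00|^2 + |a_01|^2
    outcome = 0 if rng.random() < p0 else 1
    post = np.zeros_like(amps)
    post[outcome] = amps[outcome]
    post = post / np.linalg.norm(post)        # renormalize, cf. Equation (1.6)
    return outcome, post.reshape(4)

outcome, post = measure_first_qubit(bell)
print(outcome, post)   # outcome 0 leaves |00>, outcome 1 leaves |11>: perfectly correlated
```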
As a result, a measurement of the second qubit always gives the same result as the measurement of the first qubit. That is, the measurement outcomes are correlated. Indeed, it turns out that other types of measurements can be performed on the Bell state, by first applying some operations to the first or second qubit, and that interesting correlations still exist between the result of a measurement on the first and second qubit. These correlations have been the subject of intense interest ever since a famous paper by Einstein, Podolsky and Rosen, in which they first pointed out the strange properties of states like the Bell state. EPR's insights were taken up and greatly improved by John Bell, who proved an amazing result: the measurement correlations in the Bell state are stronger than could ever exist between classical systems. These results, described in detail in Section 2.6, were the first intimation that quantum mechanics allows information processing beyond what is possible in the classical world.

More generally, we may consider a system of n qubits. The computational basis states of this system are of the form $|x_1 x_2 \ldots x_n\rangle$, and so a quantum state of such a system is specified by $2^n$ amplitudes. For n = 500 this number is larger than the estimated number of atoms in the Universe! Trying to store all these complex numbers would not be possible on any conceivable classical computer. Hilbert space is indeed a big place. In principle, however, Nature manipulates such enormous quantities of data, even for systems containing only a few hundred atoms. It is as if Nature were keeping $2^{500}$ hidden pieces of scratch paper on the side, on which she performs her calculations as the system evolves. This enormous potential computational power is something we would very much like to take advantage of. But how can we think of quantum mechanics as computation?

1.3 Quantum computation

Changes occurring to a quantum state can be described using the language of quantum computation. Analogous to the way a classical computer is built from an electrical circuit containing wires and logic gates, a quantum computer is built from a quantum circuit containing wires and elementary quantum gates to carry around and manipulate the quantum information. In this section we describe some simple quantum gates, and present several example circuits illustrating their application, including a circuit which teleports qubits!

1.3.1 Single qubit gates

Classical computer circuits consist of wires and logic gates. The wires are used to carry information around the circuit, while the logic gates perform manipulations of the information, converting it from one form to another. Consider, for example, classical single bit logic gates. The only non-trivial member of this class is the not gate, whose operation is defined by its truth table, in which 0 → 1 and 1 → 0, that is, the 0 and 1 states are interchanged.

Can an analogous quantum not gate for qubits be defined? Imagine that we had some process which took the state |0⟩ to the state |1⟩, and vice versa. Such a process would obviously be a good candidate for a quantum analogue to the not gate. However, specifying the action of the gate on the states |0⟩ and |1⟩ does not tell us what happens to superpositions of the states |0⟩ and |1⟩, without further knowledge about the properties of quantum gates.
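The $2^n$ scaling is easy to see in code. The sketch below is an illustration, not from the text: it builds an n-qubit product state by repeated Kronecker products and shows how quickly the amplitude vector grows. Storing the vector for a few dozen qubits already strains an ordinary computer, and n = 500 is out of the question.

```python
import numpy as np

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # a single-qubit state

def product_state(n: int) -> np.ndarray:
    """Return the n-qubit product state |+>...|+>; the vector has 2**n amplitudes."""
    state = np.array([1.0 + 0j])
    for _ in range(n):
        state = np.kron(state, plus)
    return state

for n in (1, 10, 20):
    amps = product_state(n)
    print(n, len(amps), amps.nbytes, "bytes")   # 2**n complex amplitudes, 16 bytes each

# For n = 500, the state vector would need 2**500 amplitudes -- more numbers than
# could ever be stored classically, as noted above.
print(2 ** 500)
```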
In fact, the quantum not gate acts linearly, that is, it takes the state

$$\alpha|0\rangle + \beta|1\rangle \tag{1.8}$$

to the corresponding state in which the role of |0⟩ and |1⟩ have been interchanged,

$$\alpha|1\rangle + \beta|0\rangle. \tag{1.9}$$

Why the quantum not gate acts linearly and not in some nonlinear fashion is a very interesting question, and the answer is not at all obvious. It turns out that this linear behavior is a general property of quantum mechanics, and very well motivated empirically; moreover, nonlinear behavior can lead to apparent paradoxes such as time travel, faster-than-light communication, and violations of the second law of thermodynamics. We'll explore this point in more depth in later chapters, but for now we'll just take it as given.

There is a convenient way of representing the quantum not gate in matrix form, which follows directly from the linearity of quantum gates. Suppose we define a matrix X to represent the quantum not gate as follows:

$$X \equiv \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \tag{1.10}$$

(The notation X for the quantum not is used for historical reasons.) If the quantum state α|0⟩ + β|1⟩ is written in a vector notation as

$$\begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \tag{1.11}$$

with the top entry corresponding to the amplitude for |0⟩ and the bottom entry the amplitude for |1⟩, then the corresponding output from the quantum not gate is

$$X \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \beta \\ \alpha \end{pmatrix}. \tag{1.12}$$

Notice that the action of the not gate is to take the state |0⟩ and replace it by the state corresponding to the first column of the matrix X. Similarly, the state |1⟩ is replaced by the state corresponding to the second column of the matrix X.

So quantum gates on a single qubit can be described by two by two matrices. Are there any constraints on what matrices may be used as quantum gates? It turns out that there are. Recall that the normalization condition requires $|\alpha|^2 + |\beta|^2 = 1$ for a quantum state α|0⟩ + β|1⟩. This must also be true of the quantum state |ψ′⟩ = α′|0⟩ + β′|1⟩ after the gate has acted. It turns out that the appropriate condition on the matrix representing the gate is that the matrix U describing the single qubit gate be unitary, that is U†U = I, where U† is the adjoint of U (obtained by transposing and then complex conjugating U), and I is the two by two identity matrix. For example, for the not gate it is easy to verify that X†X = I.

Amazingly, this unitarity constraint is the only constraint on quantum gates. Any unitary matrix specifies a valid quantum gate! The interesting implication is that in contrast to the classical case, where only one non-trivial single bit gate exists – the not gate – there are many non-trivial single qubit gates. Two important ones which we shall use later are the Z gate:

$$Z \equiv \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \tag{1.13}$$

which leaves |0⟩ unchanged, and flips the sign of |1⟩ to give −|1⟩, and the Hadamard gate,

$$H \equiv \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. \tag{1.14}$$

This gate is sometimes described as being like a 'square-root of not' gate, in that it turns a |0⟩ into $(|0\rangle + |1\rangle)/\sqrt{2}$ (first column of H), 'halfway' between |0⟩ and |1⟩, and turns |1⟩ into $(|0\rangle - |1\rangle)/\sqrt{2}$ (second column of H), which is also 'halfway' between |0⟩ and |1⟩. Note, however, that $H^2$ is not a not gate, as simple algebra shows that $H^2 = I$, and thus applying H twice to a state does nothing to it. The Hadamard gate is one of the most useful quantum gates, and it is worth trying to visualize its operation by considering the Bloch sphere picture.
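The matrix descriptions above translate directly into code. This sketch is an illustration (not the book's code): it defines X, Z, and H as NumPy arrays, applies X to a superposition to swap the amplitudes as in Equation (1.12), and checks the unitarity condition U†U = I.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)                # quantum NOT, Equation (1.10)
Z = np.array([[1, 0], [0, -1]], dtype=complex)                # Equation (1.13)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard, Equation (1.14)

def is_unitary(U: np.ndarray) -> bool:
    """Check U^dagger U = I, the only constraint on a quantum gate."""
    return np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

print(all(is_unitary(U) for U in (X, Z, H)))   # True

alpha, beta = 0.6, 0.8j                         # |alpha|^2 + |beta|^2 = 1
state = np.array([alpha, beta])
print(X @ state)                                # amplitudes swapped, as in Equation (1.12)
print(np.allclose(H @ H, np.eye(2)))            # True: H^2 = I, so applying H twice does nothing
```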
In this picture, it turns out that single qubit gates correspond to rotations and reflections of the sphere. The Hadamard operation is just a rotation of the sphere about the ŷ axis by 90°, followed by a reflection through the x̂–ŷ plane, as illustrated in Figure 1.4.

Figure 1.4. Visualization of the Hadamard gate on the Bloch sphere, acting on the input state $(|0\rangle + |1\rangle)/\sqrt{2}$.

Some important single qubit gates are shown in Figure 1.5, and contrasted with the classical case.

Figure 1.5. Single bit (left) and qubit (right) logic gates.

There are infinitely many two by two unitary matrices, and thus infinitely many single qubit gates. However, it turns out that the properties of the complete set can be understood from the properties of a much smaller set. For example, as explained in Box 1.1, an arbitrary single qubit unitary gate can be decomposed as a product of rotations

$$\begin{pmatrix} \cos\frac{\gamma}{2} & -\sin\frac{\gamma}{2} \\ \sin\frac{\gamma}{2} & \cos\frac{\gamma}{2} \end{pmatrix}, \tag{1.15}$$

and a gate which we'll later understand as being a rotation about the ẑ axis,

$$\begin{pmatrix} e^{-i\beta/2} & 0 \\ 0 & e^{i\beta/2} \end{pmatrix}, \tag{1.16}$$

together with a (global) phase shift – a constant multiplier of the form $e^{i\alpha}$. These gates can be broken down further – we don't need to be able to do these gates for arbitrary α, β and γ, but can build arbitrarily good approximations to such gates using only certain special fixed values of α, β and γ. In this way it is possible to build up an arbitrary single qubit gate using a finite set of quantum gates. More generally, an arbitrary quantum computation on any number of qubits can be generated by a finite set of gates that is said to be universal for quantum computation. To obtain such a universal set we first need to introduce some quantum gates involving multiple qubits.

Box 1.1: Decomposing single qubit operations

In Section 4.2 starting on page 174 we prove that an arbitrary 2×2 unitary matrix may be decomposed as

$$U = e^{i\alpha} \begin{pmatrix} e^{-i\beta/2} & 0 \\ 0 & e^{i\beta/2} \end{pmatrix} \begin{pmatrix} \cos\frac{\gamma}{2} & -\sin\frac{\gamma}{2} \\ \sin\frac{\gamma}{2} & \cos\frac{\gamma}{2} \end{pmatrix} \begin{pmatrix} e^{-i\delta/2} & 0 \\ 0 & e^{i\delta/2} \end{pmatrix}, \tag{1.17}$$

where α, β, γ, and δ are real-valued. Notice that the second matrix is just an ordinary rotation. It turns out that the first and last matrices can also be understood as rotations in a different plane. This decomposition can be used to give an exact prescription for performing an arbitrary single qubit quantum logic gate.

1.3.2 Multiple qubit gates

Now let us generalize from one to multiple qubits. Figure 1.6 shows five notable multiple bit classical gates, the and, or, xor (exclusive-or), nand and nor gates. An important theoretical result is that any function on bits can be computed from the composition of nand gates alone, which is thus known as a universal gate. By contrast, the xor alone or even together with not is not universal. One way of seeing this is to note that applying an xor gate does not change the total parity of the bits. As a result, any circuit involving only not and xor gates will, if two inputs x and y have the same parity, give outputs with the same parity, restricting the class of functions which may be computed, and thus precluding universality.

The prototypical multi-qubit quantum logic gate is the controlled-not or cnot gate. This gate has two input qubits, known as the control qubit and the target qubit, respectively. The circuit representation for the cnot is shown in the top right of Figure 1.6; the top line represents the control qubit, while the bottom line represents the target qubit.
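Box 1.1 can also be checked numerically. The sketch below is an illustration; the particular angle assignment is just one choice that happens to reproduce the Hadamard gate under the convention of Equation (1.17), not a prescription from the text.

```python
import numpy as np

def rz(angle: float) -> np.ndarray:
    """diag(e^{-i angle/2}, e^{+i angle/2}), the matrix of Equation (1.16)."""
    return np.diag([np.exp(-1j * angle / 2), np.exp(1j * angle / 2)])

def ry(angle: float) -> np.ndarray:
    """The ordinary rotation matrix of Equation (1.15)."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def decomposed(alpha: float, beta: float, gamma: float, delta: float) -> np.ndarray:
    """U = e^{i alpha} Rz(beta) Ry(gamma) Rz(delta), the form of Equation (1.17)."""
    return np.exp(1j * alpha) * rz(beta) @ ry(gamma) @ rz(delta)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
U = decomposed(np.pi / 2, 0.0, np.pi / 2, np.pi)
print(np.allclose(U, H))   # True: these angles reproduce the Hadamard gate
```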
Lecture 19: Properties of theta function

Rajat Mittal, IIT Kanpur ⋆

⋆ Reference: Lovász's paper on Shannon capacity.

The Lovász theta number was introduced last time to find upper bounds on the size of an independent set. For the case of perfect graphs (chromatic number equal to maximum clique size), the Lovász theta number of the complementary graph gives us the maximum independent set size. For general graphs, though, it has been proved that the Lovász theta number is not a tight bound on the maximum independent set size. In spite of this the theta function is useful because it is an SDP, and using the SDP structure we can come up with a lot of properties of the independent set size and chromatic number. As a reminder, the Lovász theta number of a graph G is given by the following SDPs.

Primal:

$$\max \sum_{i,j \in V} X_{ij} \quad \text{s.t.} \quad X_{ij} = 0 \;\; \forall (i,j) \in E, \quad \operatorname{Tr}(X) = 1, \quad X \succeq 0 \tag{1}$$

Dual:

$$\min \lambda \quad \text{s.t.} \quad \lambda I - A \succeq 0, \quad A_{ij} = 1 \text{ if } i = j \text{ or } (i,j) \notin E \tag{2}$$

1 Shannon capacity of a graph

Shannon introduced the Shannon capacity of a graph to estimate the capacity of a channel with noise. Suppose there is a channel and a list of symbols (say a, b, ...) which can be sent across the channel. Since the channel is noisy, some of the pairs can be confused when transferred across the channel. The effect of this noise can be modeled by a confusability graph G. The graph G has a vertex for every symbol of the alphabet, and there is an edge between two vertices if they can be confused by the receiver of the channel. Clearly the maximum number of symbols which can be sent without confusion is the size of the maximum independent set of this graph.

A better strategy would be to send multiple letters so that the chances of confusion are less, i.e., our new symbol set is aa, ab, bc, .... In this case the new confusability graph will be related to the original confusability graph. The relation is given by the strong product of graphs. Given two graphs G1 = (V1, E1) and G2 = (V2, E2), the strong product G1 × G2 is defined as the graph with vertex set V1 × V2. The edges are defined by the relation

$$((i_1, i_2), (j_1, j_2)) \in E(G_1 \times G_2) \iff (i_1 = j_1 \vee (i_1, j_1) \in E_1) \wedge (i_2 = j_2 \vee (i_2, j_2) \in E_2).$$

Exercise 1. Convince yourself that the confusability graph when we send two symbols is the strong product of the confusability graph when we send just one symbol.

Note: The strong product can be thought of as a product where an edge is present in the new graph if in every coordinate there is an edge or the two vertices are equal. There is also a weak product, where an edge is present in the newly formed graph if in some coordinate there is an edge in the original graph. The strong product and weak product are useful in many applications, not just Shannon capacity.

With this product, the number of symbols we can send using pairs is α(G²). To normalize the cost (sending two symbols instead of one), we compare the square root of α(G²) with α(G).

Exercise 2. Show that $\alpha(G) \le \sqrt{\alpha(G^2)}$. In general, show that if k ≤ l then $\sqrt[k]{\alpha(G^k)} \le \sqrt[l]{\alpha(G^l)}$.

Fig. 1. Example of a confusability graph (vertices a, b, c, d, e).

This idea can be expanded further, and we can consider the channel capacity as the maximum of $\sqrt[k]{\alpha(G^k)}$ over k. The Shannon capacity is defined by

$$\lim_{k \to \infty} \sqrt[k]{\alpha(G^k)}.$$

2 The product property of theta number

From the definition of Shannon capacity, it is clear that the quantity θ(G^k) is interesting. So the natural question is how θ(G1 × G2) is related to θ(G1) and θ(G2).

Theorem 1. For graphs G1 and G2, θ(G1 × G2) = θ(G1)θ(G2).

We will not show the proof, but comment that one of the ways to show this theorem is using the structure of SDPs.
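Before turning to the proof outline, note that the definitions above are easy to experiment with for small graphs. The sketch below is an illustration only: networkx does provide strong_product and find_cliques, while the independence_number helper (computed via a maximum clique of the complement) is an ad hoc addition practical only for small graphs. On the 5-cycle C5 it reproduces the numbers used later: α(C5) = 2 while α(C5 × C5) = 5, so √α(C5²) = √5 already beats α(C5).

```python
import networkx as nx

def independence_number(G: nx.Graph) -> int:
    """alpha(G) equals the size of a largest clique in the complement graph (small graphs only)."""
    complement = nx.complement(G)
    return max(len(clique) for clique in nx.find_cliques(complement))

C5 = nx.cycle_graph(5)                   # a 5-symbol confusability graph (the pentagon)
C5_squared = nx.strong_product(C5, C5)   # confusability graph for pairs of symbols

print(independence_number(C5))           # 2
print(independence_number(C5_squared))   # 5, so sqrt(5) > 2 symbols per channel use
```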
In general one can ask: given the descriptions (C, A_i, b) of two SDPs, what happens to the optimal value when a product of the two SDPs is taken? The product of SDPs can be defined in many ways, but one of the most natural ones is when we take tensor products of the respective parameters. This product turns out to be useful in many applications.

Suppose the primal SDPs are S1, S2 and their duals are D1, D2. Denote the product primal SDP by S1 ⊗ S2 and its dual by D(S1 ⊗ S2). The outline of the proof is:

1. Show that for the optimal solutions $X_1^*, X_2^*$ of S1, S2 respectively, $X_1^* \otimes X_2^*$ is a feasible solution of S1 ⊗ S2 with objective value Opt(S1) Opt(S2). This implies Opt(S1 ⊗ S2) ≥ Opt(S1) Opt(S2).
2. Show that for the optimal solutions $y_1^*, y_2^*$ of D1, D2 respectively, $y_1^* \otimes y_2^*$ is a feasible solution of D(S1 ⊗ S2) with objective value Opt(D1) Opt(D2). Since the dual is a minimization, this implies Opt(D(S1 ⊗ S2)) ≤ Opt(D1) Opt(D2).
3. Using strong duality for all the SDPs involved, conclude Opt(S1 ⊗ S2) = Opt(S1) Opt(S2).

Exercise 3. Use the above strategy to show the theorem.

Using this product property of the Lovász theta number (Theorem 1):

Exercise 4. Show that the Shannon capacity of a graph is at most θ(G).

Lovász computed the Shannon capacity of the pentagon (C5) using the above exercise. He showed that $\theta(C_5) = \sqrt{5} = \sqrt{\alpha(C_5^2)}$. Hence

$$\sqrt{\alpha(C_5^2)} \le \lim_{k \to \infty} \sqrt[k]{\alpha(C_5^k)} \le \theta(C_5) = \sqrt{\alpha(C_5^2)}.$$

So the Shannon capacity of the pentagon is √5.

3 Theta number of complementary graph

There was another formulation of the Lovász theta number in the last lecture:

$$\theta(G) = \min_{c, \{v_i\}} \max_i \left(\frac{1}{c^T v_i}\right)^2 \quad \text{s.t.} \quad \|c\| = \|v_i\| = 1 \;\; \forall i, \quad v_i^T v_j = 0 \;\; \forall (i,j) \notin E \tag{3}$$

Using this characterization we can prove:

Theorem 2. For a graph G and its complement Ḡ, θ(G)θ(Ḡ) ≥ n.

Proof. Suppose c, {v_i} and d, {w_i} are the optimal vectors for G and Ḡ respectively. Then the unit vectors $v_i \otimes w_i$ and $v_j \otimes w_j$ are orthogonal to each other if i ≠ j. So

$$1 \ge \sum_i \left((c \otimes d)^T (v_i \otimes w_i)\right)^2 = \sum_i (c^T v_i)^2 (d^T w_i)^2.$$

From the fact that c, {v_i} is optimal for θ(G), so that $(c^T v_i)^2 \ge 1/\theta(G)$ for every i,

$$1 \ge \sum_i \frac{(d^T w_i)^2}{\theta(G)} \;\Rightarrow\; \theta(G) \ge \sum_i (d^T w_i)^2.$$

Now using the fact that d, {w_i} is optimal for θ(Ḡ),

$$\theta(G) \ge \sum_i \frac{1}{\theta(\bar G)} \;\Rightarrow\; \theta(G)\theta(\bar G) \ge n.$$
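For small graphs, the primal SDP (1) can be handed directly to an off-the-shelf convex-optimization library. The sketch below is a rough illustration assuming cvxpy (with an SDP-capable solver such as the bundled SCS) and networkx are installed; it is not code from the lecture. On the pentagon it should return approximately √5 ≈ 2.236, consistent with the argument above.

```python
import cvxpy as cp
import networkx as nx
import numpy as np

def lovasz_theta(G: nx.Graph) -> float:
    """Solve SDP (1): maximize sum_{ij} X_ij subject to X_ij = 0 on edges, Tr X = 1, X PSD."""
    n = G.number_of_nodes()
    index = {v: i for i, v in enumerate(G.nodes)}
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    constraints += [X[index[u], index[v]] == 0 for u, v in G.edges]
    problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    problem.solve()
    return float(problem.value)

print(lovasz_theta(nx.cycle_graph(5)), np.sqrt(5))   # both approximately 2.236
```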
Structural Insights into the Mechanism of Scanning and Start Codon Recognition in Eukaryotic Translation Initiation

Alan G. Hinnebusch

Trends in Biochemical Sciences, Volume 42, Issue 8, August 2017, Pages 589-611. Feature Review, Special Issue: Ribosomes & Translation.

Trends

- Recent high-resolution structures of PICs reveal distinct conformations of the 40S subunit, (initiator) tRNAi, and initiation factors indicative of different stages of the scanning mechanism for selecting AUG start codons.
- An open PIC conformation features less tightly anchored mRNA and tRNAi, and unobstructed binding of the gatekeeper molecule eIF1 – all features compatible with scanning.
- In the closed PIC conformation, both mRNA and tRNAi are locked into the decoding center, distorting eIF1 as a prelude to its release; and eIF1A stabilizes tRNAi binding – all compatible with AUG selection.
- eIF2 subunits encase tRNAi within the TC; eIF2β helps to retain eIF1 in the open complex, and eIF2α interacts directly with 'context' mRNA nucleotides surrounding the AUG.
- eIF3 effectively encircles the PIC and contacts various 40S functional sites, illuminating its multiple roles in stimulating PIC assembly, scanning, and AUG selection.

Abstract

Initiation of translation on eukaryotic mRNAs generally follows the scanning mechanism, wherein a preinitiation complex (PIC) assembled on the small (40S) ribosomal subunit and containing initiator methionyl tRNAi (Met-tRNAi) scans the mRNA leader for an AUG codon. In a current model, the scanning PIC adopts an open conformation and rearranges to a closed state, with fully accommodated Met-tRNAi, upon AUG recognition. Evidence from recent high-resolution structures of PICs assembled with different ligands supports this model and illuminates the molecular functions of eukaryotic initiation factors eIF1, eIF1A, and eIF2 in restricting to AUG codons the transition to the closed conformation. They also reveal that the eIF3 complex interacts with multiple functional sites in the PIC, rationalizing its participation in numerous steps of initiation.

Section snippets

Translation Initiation by the Scanning Mechanism

Translation initiation is the process of decoding the AUG start codon in mRNA by methionyl initiator tRNA (Met-tRNAi). Most eukaryotic mRNAs are translated by a scanning mechanism, where the small (40S) ribosomal subunit is first loaded with Met-tRNAi in a ternary complex (TC) with GTP-bound eukaryotic initiation factor 2 (eIF2) in a reaction promoted by eIF1, eIF1A, eIF5, and the multisubunit eIF3. This 43S preinitiation complex (PIC) then attaches to the 5′ end of mRNA – preactivated by …

A Model for the Scanning Mechanism

Biochemical, genetic, and structural analyses have led to the following model for the mechanism of scanning (Figure 3; for reviews, see 3, 4).
Factors eIF1 and eIF1A bind the 40S subunit near the P and A sites, respectively, and promote an open 40S conformation conducive to rapid loading of TC and mRNA scanning. TC is bound in a 'P_OUT' conformation, not fully engaged with the PIC, suitable for scanning successive triplets entering the P site. GTP hydrolysis by TC is stimulated by the GTPase …

Structure of TC

Met-tRNAi is anchored to the scanning PIC in the TC with GTP-bound eIF2, an α/β/γ heterotrimer. The γ-subunit is related to the three-domain factor EF1A, which transfers charged tRNAs to the A site during translation elongation, featuring a GTP-binding pocket in the G domain. Though absent in eubacteria, archaea contain a similar factor, aIF2, and crystal structures of aIF2 subcomplexes 15, 16, 17, 18 and archaeal TC (aTC) provided detailed glimpses into how eukaryotic TC might be …

Interactions of tRNAi with P Site Residues Stabilize the P_IN Conformation

Comparing the pm43S PIC (lacking mRNA) with the py48S PIC suggested that the tRNAi is inserted ∼7 Å less deeply in the P site of pm43S PIC, consistent with a P_OUT state. However, examining the pm43S complex at higher resolution indicates that tRNAi is not inserted less deeply, but rather occupies the P site differently in the two complexes, owing to angular rotation of the entire TC relative to the head and body of the 40S subunit (Yaser Hashem, personal communication). The P …

tRNAi Clashes with eIF1 in the P_IN State

The crystal structure of a Tetrahymena 40S–eIF1 complex revealed eIF1 bound to the 40S platform near the P site, consistent with directed hydroxyl radical cleavage (DHRC) mapping of eIF1 in reconstituted mammalian PICs. It was predicted that eIF1 would clash with tRNAi bound in the canonical P/P state. This expectation was extended by overlaying the crystal structures of the mammalian 40S–eIF1–eIF1A and m48S PIC, which predicted clashes of the ASL with loop-1, and the tRNAi …

The eIF1A NTT Stabilizes the Codon–Anticodon Helix

The globular domain of eIF1A binds to the 40S A site, as demonstrated by DHRC mapping in mammalian PICs and crystal structures of 40S–eIF1–eIF1A complexes from Tetrahymena and mammals. The helical domain and associated NTT and CTT of eIF1A project from the globular domain bound to the 40S body, and the NTT interacts with Rps27A (Rps31 in yeast) in the 40S head 35, 42 (Figure 6B). The ability of eIF1A to bridge the 40S head and body could influence the mRNA binding cleft formed at …

PIC Contacts with mRNA in the Closed/P_IN Complex

The mRNA path through the eukaryotic 48S PICs is similar to that observed in bacterial 70S elongation complexes except for the absence of a pronounced kink between the A and P sites, thought to help maintain reading frame during elongation. Absence of a kink in the PIC might facilitate slippage of the mRNA through the decoding center during base-by-base scanning. mRNA residues both upstream and downstream of the AUG engage residues within 18S rRNA, ribosomal proteins, and initiation …

The py48S-Open Complex Exhibits a Widened mRNA Channel, Open Latch, and Incomplete P Site Conducive to mRNA Recruitment and Scanning

An important structural feature of the mRNA entry channel is a noncovalent interaction between rRNA residues in h34 of the 40S head and h18 of the body, comprising a 'latch' thought to clamp mRNA into the entry channel.
The first, relatively low-resolution cryo-EM structure of yeast 40S PICs harboring eIF1, eIF1A, or both, indicated an open latch in the 40S–eIF1–eIF1A PIC but a closed latch in the free 40S and 40S–eIF1A complex, consistent with an open, scanning conformation induced by eIFs 1 …

eIF2β Exhibits Contacts with eIF1, eIF1A, and the tRNAi ASL Restricted to py48S-Open

In previous py48S and pm43S structures, only the N-terminal helix of eIF2β was visualized, where it attaches to eIF2γ. However, most of eIF2β is visible in py48S-open and py48S-closed structures (excluding the unstructured NTT), and it interacts differently with PIC components in these two complexes. In py48S-open, the helix–turn–helix domain of eIF2β binds to both tRNAi (anchored to the 40S head) and eIFs 1 and 1A (bound to the 40S body), bridging the head and body. These …

Insights from Prokaryotic PICs

Recent cryo-EM analysis of bacterial (Thermus thermophilus) PICs containing IF1 and IF3 – the functional orthologs of eIF1A and eIF1 – revealed parallels with eukaryotic PICs suggesting conservation of key aspects of initiation. In prokaryotes, the purine-rich Shine–Dalgarno (SD) sequence just upstream of the start codon pairs with the 3′ end of 16S rRNA in the small (30S) subunit to direct the PIC to the correct start codon, supplanting the scanning mechanism, although scanning appears to …

Structure of eIF3

Because of its subunit complexity and absence in archaea, and the conformational flexibility of linkers connecting its globular subdomains, the complete structure of eIF3 has not yet been determined. However, substantial progress has been made toward this goal, and the interaction of its globular subdomains with the PIC was recently illuminated. The subunit interaction map of the yeIF3 complex (Figure 2) was constructed by mapping binary interactions between its six subunits, and identifying …

Position of meIF3 in the Mammalian 43S PIC

The PCI/MPN octamer of meIF3 was visualized in the cryo-EM reconstructions of the m43S PIC bound to Dhx29, first at 11.6 Å and later at ∼6 Å (Figure 8B,C). The head and left arm, composed of 3c and 3a, contact ribosomal proteins Rps13/S15 and Rps27/S27e, and Rps3A/S1e and Rps26/S26e, respectively, on the solvent face of the 40S near the exit channel (Figure 8C). This position of 3a fits with its crosslinking to mRNA 5′ UTR residues −14/−17 (relative to AUG) in mammalian 48S PICs …

Position of yeIF3 in a 40S–eIF1–eIF1A–eIF3 PIC

Interaction of the six-subunit yeIF3 complex with the 40S subunit was revealed in a cryo-EM reconstruction of a 40S–eIF1–eIF1A–eIF3 complex, stabilized by crosslinking, at ∼6.5 Å resolution (Figure 8D). Interpreting the eIF3 densities was aided by prior crosslinking-mass spectrometry, which identified 155 linkages between yeIF3 subunits and 40S ribosomal proteins, and integrative modeling. The best-defined aspects include the 3a/3c PCI heterodimer near the mRNA exit channel, similar to …

Communication of eIF3 Subunits with the Decoding Center in Yeast PICs

The connections identified in the yeast MFC (Figure 2) led to the prediction that the 3c-NTD and 3a-CTD segments project into the subunit interface side of the 40S subunit and interact with eIF1, eIF2β-NTT, and eIF5-CTD in the decoding center 7, 8. Consistent with this, crosslinking mass spectrometry data indicates interaction of the N-terminal segment of eIF3c with eIF1 bound to the 40S platform.
Moreover, a globular density was found in contact with eIF1 in the py48S-closed complex and

Concluding Remarks

The recent high-resolution structures of PICs harboring different combinations of factors and ligands have provided a wealth of information about the mechanism of start codon recognition at atomic-level resolution, and identified certain aspects of the process that are conserved among all three kingdoms of life. The eukaryotic PIC structures have revealed distinct open or closed conformations of the 40S that seem conducive to mRNA attachment or scanning versus AUG selection, respectively. The

Acknowledgments

The author is grateful to Tanweer Hussain, Jose L. Llácer, and Yaser Hashem for critical comments on the manuscript, communicating results prior to publication, and providing images employed in the figures.

References (93)

C.R. Singh. Efficient incorporation of eukaryotic initiation factor 1 into the multifactor complex is critical for formation of functional ribosomal preinitiation complexes in vivo. J. Biol. Chem. (2004).
M. Karaskova. Functional characterization of the role of the N-terminal domain of the c/Nip1 subunit of eukaryotic initiation factor 3 (eIF3) in AUG recognition. J. Biol. Chem. (2012).
R.E. Luna. The C-terminal domain of eukaryotic initiation factor 5 promotes start codon recognition by its dynamic interplay with eIF1 and eIF2beta. Cell Rep. (2012).
E. Stolboushkina. Crystal structure of the intact archaeal translation initiation factor 2 demonstrates very high conformational flexibility in the alpha- and beta-subunits. J. Mol. Biol. (2008).
E. Schmitt. Eukaryotic and archaeal translation initiation factor 2: a heterotrimeric tRNA carrier. FEBS Lett. (2010).
T. Hussain. Structural changes enable start codon recognition by the eukaryotic translation initiation complex. Cell (2014).
J.L. Llácer. Conformational differences between open and closed states of the eukaryotic translation initiation complex. Mol. Cell (2015).
M.G. Acker. Reconstitution of yeast translation initiation. Methods Enzymol. (2007).
Y. Hashem. Structure of the mammalian ribosomal 43S preinitiation complex bound to the scanning factor DHX29. Cell (2013).
L.A. Passmore. The eukaryotic translation initiation factors eIF1 and eIF1A induce an open conformation of the 40S ribosome. Mol. Cell (2007).
H.J. Drabkin. The role of nucleotides conserved in eukaryotic initiator methionine tRNAs in initiation of protein synthesis. J. Biol. Chem. (1993).
P. Martin-Marcos. β-Hairpin loop of eIF1 mediates 40S ribosome binding to regulate initiator tRNAMet recruitment and accuracy of AUG selection in vivo. J. Biol. Chem. (2013).
J.S. Nanda. Coordinated movements of eukaryotic translation initiation factors eIF1, eIF1A, and eIF5 trigger phosphate release from eIF2 in response to start codon recognition by the ribosomal preinitiation complex. J. Biol. Chem. (2013).
M.A. Algire. Pi release from eIF2, not GTP hydrolysis, is the step controlled by start-site selection during eukaryotic translation initiation. Mol. Cell (2005).
T. Hussain. Large-scale movements of IF3 and tRNA during bacterial translation initiation. Cell (2016).
A.G. Hinnebusch. eIF3: a versatile scaffold for translation initiation complexes. Trends Biochem. Sci. (2006).
L. Elantak. Structure of eIF3b-RRM and its interaction with eIF3j: structural insights into the recruitment of eIF3b to the 40S ribosomal subunit. J. Biol. Chem. (2007).
L. Elantak. The indispensable N-terminal half of eIF3j/HCR1 cooperates with its structurally conserved binding partner eIF3b/PRT1-RRM and with eIF1A in stringent AUG selection. J. Mol. Biol. (2010).
Y. Liu. Translation initiation factor eIF3b contains a nine-bladed beta-propeller and interacts with the 40S ribosomal subunit. Structure (2014).
J.P. Erzberger. Molecular architecture of the 40S–eIF1–eIF3 translation initiation complex. Cell (2014).
Z. Wei. Crystal structure of the C-terminal domain of S. cerevisiae eIF5. J. Mol. Biol. (2006).
S. Srivastava. Eukaryotic initiation factor 3 does not prevent association through physical blockage of the ribosomal subunit-subunit interface. J. Mol. Biol. (1992).
J. Querol-Audi. Architecture of human translation initiation factor 3. Structure (2013).
M.D. Smith. Assembly of eIF3 mediated by mutually dependent subunit insertion. Structure (2016).
Z. Dong. Spectrin domain of eukaryotic initiation factor 3a is the docking site for formation of the a:b:i:g subcomplex. J. Biol. Chem. (2013).
N. Villa. Human eukaryotic initiation factor 4G (eIF4G) protein binds to eIF3c, -d, and -e to promote mRNA recruitment to the ribosome. J. Biol. Chem. (2013).
A. Marintchev. Topology and regulation of the human eIF4A/4G/4H helicase complex in translation initiation. Cell (2009).
C.S. Fraser. eIF3j is located in the decoding center of the human 40S ribosomal subunit. Mol. Cell (2007).
S.F. Mitchell. The 5′-7-methylguanosine cap on eukaryotic mRNAs serves both to stimulate canonical translation initiation and block an alternative pathway. Mol. Cell (2010).
A. Simonetti. eIF3 peripheral subunits rearrangement after mRNA binding and start-codon recognition. Mol. Cell (2016).
V.P. Pisareva. Translation initiation on mammalian mRNAs with structured 5′-UTRs requires DExH-box protein DHX29. Cell (2008).
A.G. Hinnebusch. Mechanism of translation initiation in the yeast Saccharomyces cerevisiae.
R.J. Jackson. The mechanism of eukaryotic translation initiation and principles of its regulation. Nat. Rev. Mol. Cell Biol. (2010).
A.G. Hinnebusch. Molecular mechanism of scanning and start codon selection in eukaryotes. Microbiol. Mol. Biol. Rev. (2011).
A.G. Hinnebusch. The scanning mechanism of eukaryotic translation initiation. Annu. Rev. Biochem. (2014).
L.S. Valasek. 'Ribozoomin' – translation initiation from the perspective of the ribosome-bound eukaryotic initiation factors (eIFs). Curr. Protein Pept. Sci. (2012).
L. Phan. Identification of a translation initiation factor 3 (eIF3) core complex, conserved in yeast and mammals, that interacts with eIF5. Mol. Cell Biol. (1998).
K. Asano. A multifactor complex of eukaryotic initiation factors eIF1, eIF2, eIF3, eIF5, and initiator tRNAMet is an important translation initiation intermediate in vivo. Genes Dev. (2000).
L. Valášek. Direct eIF2-eIF3 contact in the multifactor complex is important for translation initiation in vivo. EMBO J. (2002).
M. Sokabe. The human translation initiation multi-factor complex promotes methionyl-tRNAi binding to the 40S ribosomal subunit. Nucleic Acids Res. (2012).
L. Valasek. Interactions of eukaryotic translation initiation factor 3 (eIF3) subunit NIP1/c with eIF1 and eIF5 promote preinitiation complex assembly and regulate start codon selection. Mol. Cell Biol. (2004).
R.E. Luna. The interaction between eukaryotic initiation factor 1A and eIF5 retains eIF1 within scanning preinitiation complexes. Biochemistry (2013).
M. Sokabe. Structure of archaeal translational initiation factor 2 betagamma-GDP reveals significant conformational change of the beta-subunit and switch 1 region. Proc. Natl. Acad. Sci. U. S. A. (2006).
L. Yatime. Structure of an archaeal heterotrimeric initiation factor 2 reveals a nucleotide state between the GTP and the GDP states. Proc. Natl. Acad. Sci. U. S. A. (2007).
E. Schmitt. Structure of the ternary initiation complex aIF2-GDPNP-methionylated initiator tRNA. Nat. Struct. Mol. Biol. (2012).
J. Dong. Conserved residues in yeast initiator tRNA calibrate initiation accuracy by regulating preinitiation complex stability at the start codon. Genes Dev. (2014).
The Voting Rights of Non-voting Shareholders – An Oxymoron?

Published: March 2025 | What's Trending

An Ontario business corporation must be authorized to issue at least one class of shares, and if the corporation's articles authorize more than one class of shares, the articles must specify the rights, privileges, restrictions, conditions and voting rights attaching to each class. The provisions for a non-voting class of shares usually state that shareholders holding such shares are not entitled to receive notice of, attend at or vote at any meeting of the shareholders of the corporation. But despite being called "non-voting" shareholders, and the perceived absence of voting rights, such shareholders continue to have limited voting rights under the Ontario Business Corporations Act (OBCA).

While non-voting shareholders may not influence governance, non-voting shares often come with other significant benefits. Non-voting shareholders can vote when changes to the corporation's articles may have a significant adverse impact on their rights. Also, non-voting shares allow investors to participate in a corporation's financial success through dividends or capital appreciation. Such shares are commonly used for employee stock ownership plans as well as tax and estate planning.

Voting Rights for Non-Voting Shareholders

Section 170(1) of the OBCA includes certain protections for non-voting shareholders, granting them the right to vote separately as a class if the corporation proposes to amend its articles in a way that changes the rights or conditions attached to their shares. In summary, non-voting shareholders can vote separately as a class or series under Section 170(1) with respect to amendments to the articles that contemplate:

1. A change in the number of authorized shares of the non-voting class, or an increase in the maximum number of shares of a class that is equal or superior to the non-voting shares.
2. Exchange, reclassification or cancellation of shares of their class.
3. Amendments that could prejudice the rights of non-voting shareholders, such as removal or reduction of the following rights attaching to their shares:
   - the right to accrued or cumulative dividends;
   - rights or sinking fund provisions;
   - liquidation preference or dividend preference; and
   - conversion privileges, options, transfer and pre-emptive rights, voting rights or rights to acquire securities of a corporation.
4. Addition of rights or privileges to another class of shares that is equal to or superior to the non-voting class.
5. Creation of a new class of shares with rights equal to or superior to the non-voting shares.
6. Making equal or superior the rights of other shares that are inferior to the shares of such non-voting shareholders.

Exchange of shares, or changes to restrictions on the issue, transfer, or ownership of non-voting shares, also give non-voting shareholders the right to vote separately.

Voting Rights in Amalgamations

Section 176(3) of the OBCA provides protection for non-voting shareholders during corporate amalgamations. To amalgamate, the corporations may be required to enter into an amalgamation agreement. Such agreements, much like articles of amendment, may include provisions that impact the rights of the shareholders. Non-voting shareholders will have a right to vote separately if the amalgamation agreement includes provisions that affect their rights in a way described in Section 170(1), as if it were an amendment to the corporation's articles.
Modifying Non-Voting Shareholder Statutory Rights

The same Section 170(1) that grants voting rights also provides that some of these rights can be removed, as long as this is expressly set out in the corporation's articles. The statutory voting rights that can be removed are those relating to changes to the articles which affect: the number of authorized shares (see item 1 above); exchange, reclassification, or cancellation of the shares (see item 2 above); and creation of new share classes that are superior in rights (see item 5 above). The articles must contain these restrictions before the shares are issued.

Practical Implications for Non-Voting Shareholders

By offering the ability to vote separately as a class, the OBCA acknowledges the potential impact of corporate changes on non-voting shareholders and grants them mechanisms to safeguard their investments. It is important for non-voting shareholders to understand that while the OBCA ensures protection, the ultimate decision-making power still rests with the voting shareholders. While non-voting shareholders cannot modify their rights on their own, the OBCA allows for transparent processes that protect them from unfair alterations. By understanding these rights, both non-voting and voting shareholders can navigate the complexities of corporate governance with confidence.
Bounded Function & Unbounded: Definition, Examples

Contents:
- Bounded Function Definition
- Upper Bound
- Least Upper Bound (LUB)
- Bounded Sequence
- Bounded Variation

1. What is a Bounded Function?

Bounded functions have some kind of boundary or constraint placed upon them. Most things in real life have natural bounds: cars are somewhere between 6 and 12 feet long, people take between 2 hours and 20 hours to complete a marathon, cats range in length from a few inches to a few feet. When you place those kinds of bounds on a function, it becomes a bounded function.

In order for a function to be classified as "bounded", its range must have both a lower bound (e.g. 7 inches) and an upper bound (e.g. 12 feet). Any function that isn't bounded is unbounded. A function can be bounded at one end and unbounded at the other.

Upper Bound for a Bounded Function

If a function only has a range with an upper bound (i.e. the function has a number that fixes how high the range can get), then the function is called bounded from above. Usually, the lower limit for the range is listed as -∞.

More formally, an upper bound is defined as follows: a set A ⊆ ℝ of real numbers is bounded from above if there exists a real number M ∈ ℝ, called an upper bound of A, such that x ≤ M for every x ∈ A (Hunter, n.d.).

Basically, the above definition is saying there's a real number, M, that we'll call an upper bound; every element in the set is less than or equal to this value M. Don't get confused by the fact that the formal definition uses an "x" to denote the elements in the set; it doesn't mean x-values (as in, the domain). The definition of bounded only applies to the range of values a function can output, not how high the x-values can get.
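In symbols (a standard restatement of the definitions above, with D standing for the function's domain):

\[ f \text{ is bounded above on } D \iff \exists\, U \in \mathbb{R} : f(x) \le U \text{ for all } x \in D, \]
\[ f \text{ is bounded below on } D \iff \exists\, L \in \mathbb{R} : f(x) \ge L \text{ for all } x \in D, \]
\[ f \text{ is bounded on } D \iff \exists\, M > 0 : |f(x)| \le M \text{ for all } x \in D. \]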
The exact definition is slightly different, depending on where you're using the term:

1. Function, interval, or set: The upper bound of a function (U) is that function's largest value. More formally, a function f has an upper bound U if f(x) ≤ U for all x in the function's domain. If you're working with an interval (i.e. a small piece of the function), then U on the interval is the largest value on that interval; in notation, f(x) ≤ U for all x on [a, b]. In the same way, the upper bound of a set (U) is a number that's greater than or equal to all of the elements in the set (for a finite set, its largest element). For example, 132 is an upper bound for the set {3, 7, 39, 75, 132}.

2. Integration: The upper bound of an integral is where you stop integrating. It's the number written above the integral symbol. See: Integral Bounds.

3. Estimation: In estimation, an "upper bound" is the smallest value that rounds up to the next value. For example, let's say you had an object that was 7 cm long, rounded to the nearest cm. The upper bound is 7.5 cm, because 7.5 cm is the smallest length that would round up to the next increment, 8 cm. Similarly, the lower bound is the smallest value that would round up to 7 cm: 6.5 cm. You're stating that the 7 cm object is actually anywhere between 6.5 cm (the lower bound) and 7.5 cm (the upper bound).

Least Upper Bound of a Bounded Function

Least upper bound (LUB) refers to a number that serves as the lowest possible ceiling for a set of numbers. If a set of numbers has a greatest number, then that number is also the least upper bound (supremum). For example, take the set defined by the closed interval [0, 2]. The number 2 is included in the set, and is therefore the least upper bound.

Where things get a little interesting is when a set of numbers doesn't contain a greatest element. In that case, the supremum is the number that "wants to be the greatest element" (Howland, 2010). Take the open interval {0, 2}. Although the set is bounded by the numbers 0 and 2, they aren't actually in the set. However, 2 wants to be the greatest element, and so it's the least upper bound.

When the Least Upper Bound Doesn't Exist

Real numbers (ℝ) include the rationals (ℚ), which include the integers (ℤ), which include the natural numbers (ℕ). Any nonempty set of real numbers that is bounded above has a least upper bound in ℝ. Some sets, however, don't have a supremum. For example (Holmes, n.d.):

- Rational numbers ordered by <: say you had a set of rational numbers where all the elements are less than √2. You can find an upper bound (e.g. the number 2), but the only candidate for the least upper bound is √2, and that number isn't rational. So within the rational numbers the set has no least upper bound; the rationals pose all kinds of problems like this that render them "…unfit to be the basis of calculus" (Bloch, p. 64).
- If a set has no upper bound at all, then that set has no supremum. For example, the set of all real numbers is unbounded.
- The empty set doesn't have a least upper bound, because every number is a potential upper bound for the empty set.
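Written compactly in standard notation (using (0, 2) for the open interval):

\[ \sup\,[0,2] = 2 \in [0,2], \qquad \sup\,(0,2) = 2 \notin (0,2), \qquad \sup\{\, q \in \mathbb{Q} : q < \sqrt{2} \,\} = \sqrt{2} \notin \mathbb{Q}. \]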
More Formal Definition

In the case of the open interval {0, 2}, the number 2 is the smallest number that is larger than every member of the set. In other words, 2 isn't actually in the set itself, but it's the smallest number outside of the set that's larger than 1.999….

In more formal terms: if S is a set of numbers and M is a number, we can say that M is the least upper bound or supremum of S if the following two statements are true:
- M is an upper bound of S, and
- no number less than M can be an upper bound for S.

Assume that M is the least upper bound for S. What this means is that for every number x ∈ S we have x ≤ M. Any set of numbers that has an upper bound is bounded from above.

Lower Bound

If a function has a range with a lower bound, it's called bounded from below. Usually, the upper limit for the range is listed as +∞. The formal definition is almost the same as that for the upper bound, except with a different inequality: a set A ⊆ ℝ of real numbers is bounded from below if there exists a real number M ∈ ℝ, called a lower bound of A, such that x ≥ M for every x ∈ A (Hunter, n.d.).

Bounded Sequence: Special Case of Bounded Function

A bounded sequence is a special case of a bounded function: one where the absolute value of every term is less than or equal to a particular real, positive number. You can think of it as there being a well-defined boundary line such that no term in the sequence can be found outside of that line. More formally, a sequence (x_n) is bounded if there is a real number M > 0 such that |x_n| ≤ M for all n ∈ ℕ.

The blue dots on the image below show an infinite sequence. As you can see, the sequence does not converge, for the red boundary lines never converge. However, it is bounded.

Examples of Bounded Sequences

One example of a bounded sequence is the one defined by [equation not reproduced]. The right-hand side of this equation tells us that n is indexed between 1 and infinity. This makes the sequence into a sequence of fractions, with the numerators always being one and the denominators always being numbers that are greater than one. A basic algebraic identity tells us that x^(−k) = 1/x^k. So each term in the sequence is a fractional part of one, and we can say that for every term in our sequence, |x_n| ≤ 1.

Remember our definition of a bounded sequence: a sequence (x_n) is bounded if there is a real number M > 0 such that |x_n| ≤ M for all n ∈ ℕ. Let M = 1; then M is a real number greater than zero such that |x_n| ≤ M for all n between 1 and infinity. So our sequence is bounded.

Bounded Sequences and Convergence

Every convergent sequence is bounded, so if we know that a sequence is convergent, we know immediately that it is bounded. Note that this doesn't tell us anything about whether a bounded sequence is convergent: it may or may not be. As an example, the sequence drawn above is not convergent, though it is bounded.
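Why a convergent sequence must be bounded follows from a short standard argument: past some index N, every term lies within distance 1 of the limit L, and only finitely many terms come before N. In symbols,

\[ x_n \to L \;\Longrightarrow\; |x_n| \le \max\bigl\{ |x_1|, \ldots, |x_N|,\; |L| + 1 \bigr\} \quad \text{for all } n, \]

where N is chosen so that |x_n − L| < 1 whenever n > N.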
Bounded Above and Below

If we say a sequence is bounded, it is bounded above and below. Some sequences, however, are only bounded on one side. If all of the terms of a sequence are greater than or equal to a number K, the sequence is bounded below, and K is called a lower bound; the greatest possible K is the infimum. If all the terms of a sequence are less than or equal to a number K′, the sequence is bounded above, and K′ is an upper bound; the least possible K′ is the supremum.

Bounded Function and Bounded Variation

A bounded function of bounded variation (also called a BV function) "wiggles" or oscillates between bounds, much in the same way that a sine function wiggles between the bounds 1 and -1; the vertical (up and down) movement of these functions is restricted over an interval. In other words, the variation isn't infinite: we can calculate a value for it. These functions can be described as integrable functions with a derivative (in the sense of distributions) that is a signed measure with finite total variation. The concept was originally developed in the context of Fourier series, when mathematicians were trying to prove the convergence of the series.

Examples of Functions of Bounded Variation

All monotonic functions and absolutely continuous functions are of bounded variation. Real-valued functions of bounded variation on a compact interval can be expressed as the difference between two monotone (non-decreasing) functions, called a Jordan decomposition. Interestingly, these functions do not have to be continuous and can have a finite number of discontinuities (although they do have to be Riemann integrable). They can also be approximated by finite step functions, or decomposed into a continuous part and a jump part. Normalized functions of bounded variation are those on the interval [0, 1] with h(0) = 0 and h(c) = h(c + 0) for 0 < c < 1.

More formally, a real-valued function α of bounded variation on the closed interval [a, b] has a constant M > 0 such that, for every partition a = x_0 < x_1 < … < x_n = b,

|α(x_1) − α(x_0)| + |α(x_2) − α(x_1)| + … + |α(x_n) − α(x_(n−1))| ≤ M.

It's not always necessary to specify the interval, especially when the interval in question is obvious.

References (Bounded Variation)

Ziemer, W.P. (1989). Functions of Bounded Variation. In: Weakly Differentiable Functions. Graduate Texts in Mathematics, vol. 120. Springer, New York, NY.
Monteiro, G. et al. Series in Real Analysis, Volume 15: Kurzweil–Stieltjes Integral: Theory and Applications. World Scientific.
Bridges, D. (2016). A Constructive Look at Functions of BV. Bulletin of the London Mathematical Society, Volume 32, Issue 3, pp. 316–324.
Bridges, D. (2016). Functions of Bounded Variation. Retrieved April 8, 2021.

Other Bounded Function References

Bloch, E. (2011). The Real Numbers and Real Analysis. Springer Science and Business Media.
Gallup, Nathaniel. Mat25 Lecture 9 Notes: Boundedness of Sequences. Retrieved January 25, 2018.
Holmes (n.d.). Class Notes. Retrieved January 16, 2018.
Howland, J. (2010). Basic Real Analysis. Jones & Bartlett Learning.
Hunter, J. Supremum and Infimum. Retrieved December 8, 2018.
Larson & Edwards. Calculus.
Laval, P. Bounded Functions. Retrieved December 8, 2018.
King, M. & Mody, N. (2010). Numerical and Statistical Methods for Bioengineering: Applications in MATLAB. Cambridge University Press.
Math Learning Center: Sequences. Retrieved January 26, 2018.
Mac Lane et al. (1991). Algebra. Providence, RI: American Mathematical Society. p. 145. ISBN 0-8218-1646-2.
Woodroofe, R. Math 131. Retrieved October 18, 2018.
The fact that std::launder exists | Hacker News

saagarjha on Aug 19, 2019 | on: Why Const Doesn't Make C Code Faster
The fact that std::launder exists blows my mind. Like, why is this a thing that the standard allows?

jcranmer on Aug 19, 2019
The linked paper has an explanation. In short, placement new (or some scenarios involving unions) technically causes undefined behavior if you try to use the old object (through a pre-existing pointer) after the call to placement new, since the pre-existing object there has had its lifetime expire. std::launder lets you use a pre-existing pointer to the memory at the same location to access the data there.

saagarjha on Aug 19, 2019
Yes, I know what std::launder does, and that document contains some of my own thoughts: namely, placement new (the only use I've seen suggested) should automatically launder, and std::launder should just not exist.

gpderetta on Aug 20, 2019
AFAIK placement new does launder. But both new and launder have no effect on their argument (well, placement new of course constructs an object there); they only 'bless' the pointer returned from it. The use case is if you placement-new on some byte storage. Now you want to access the object stored there, and of course you do not want to placement-new the storage again, nor have you cached the result of the previous placement new (it would be suboptimal); instead you want to get a pointer to T from the storage address itself.

saagarjha on Aug 20, 2019
> AFAIK placement new does launder. But both new and launder have no effect on their argument (well, placement new of course constructs an object there); they only 'bless' the pointer returned from it.
I may be misunderstanding, but this seems to directly contradict what the linked paper says:
> Note that std::launder() does not "white wash" the pointer for any further usage.
> The obvious question is, why don't we simply fix the current memory model so that using data where placement new was called for implicitly always does launder?

gpderetta on Aug 20, 2019
Because the placement new might have been called in another translation unit, for example, so the compiler can't track it.

gpderetta on Aug 27, 2019
> I may be misunderstanding, but this seems to directly contradict what the linked paper says:
>> Note that std::launder() does not "white wash" the pointer for any further usage.
No, it is consistent. Launder does not white-wash its parameter, only its return pointer and any pointer returned by it. Same for placement new.
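To make the scenario in this thread concrete, here is a minimal sketch (not from the thread itself) of the placement-new case where std::launder matters; the Widget type and its const member are illustrative assumptions chosen because const or reference members are what prevent the new object from "transparently replacing" the old one:

    #include <cstdio>
    #include <new>      // placement new, std::launder (C++17)

    struct Widget {
        const int id;                    // const member: this is what makes laundering necessary
        explicit Widget(int i) : id(i) {}
    };

    int main() {
        alignas(Widget) unsigned char storage[sizeof(Widget)];

        // First object: the pointer returned by placement new is fine to use directly.
        Widget* w = new (storage) Widget(1);
        int a = w->id;                   // OK, a == 1

        // End its lifetime and reuse the same storage for a second object.
        w->~Widget();
        new (storage) Widget(2);

        // 'w' and 'storage' predate the second object, and Widget has a const member,
        // so the new object does not transparently replace the old one. Reading w->id
        // here would be undefined behavior; laundering the address yields a pointer
        // that refers to the object currently living there.
        Widget* w2 = std::launder(reinterpret_cast<Widget*>(storage));
        int b = w2->id;                  // OK, b == 2

        std::printf("%d %d\n", a, b);    // prints "1 2"
        w2->~Widget();
        return 0;
    }

As the thread notes, keeping the pointer returned by the second placement new would avoid the need to launder at all; std::launder only matters when you reconstruct the pointer from the storage address.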
Connecting with peers and coworkers is a vital part of boosting morale in the workplace. Seating arrangements are essential in setting the tone within the physical space that fosters intentional connection. Whether it be round table seating, where eye contact and discussion are the main focus for strengthening teams, or the more traditional theater style in larger venues to draw focus to a speaker, there are numerous ways to design a workspace based on your specific needs. Here are some of the most popular seating arrangements to help your business encourage communication, inclusivity, and productivity.

Everyone gather 'round

People are working remotely now more than ever, changing how we understand productivity and the future of work. Due to this significant shift away from office culture, companies recognize the value of company retreats, conferences, and regular opportunities to promote team building and trust in the workplace, especially for those businesses that have gone fully remote. Round table seating is a way to bring people together in a way that cultivates intimacy. Ideal for small to mid-size teams, this type of seating arrangement can be done in places with limited space, giving you more venue options.

Pros:
- Round tables don't take up much space
- They promote spontaneous conversation
- The intimate setting breaks down hierarchical structure within teams
- Close-proximity seating encourages participation

Cons:
- The limited surface area makes it challenging to accommodate large teams
- Round tables can be cramped if there are too many people
- There is a lack of surface area for presentation materials
- It can be uncomfortable for less social team members

Banquet seating for encouraging cross-connections

Banquet seating, like a round table, provides an intimate setting, but in this case there are numerous tables instead of one central seating area. Banquet seating consists of tables set up in close proximity, making it possible for larger companies to break into small groups, ideal for meet-and-greets and encouraging cross-connection. Banquet seating also lends itself to celebration: this seating arrangement is a comfortable and efficient way to bring teams together to present awards or honor the latest professional achievements.

Pros:
- Banquet seating provides enough seating for larger teams
- Numerous tables allow workers to break into smaller groups
- An ideal setting for workshopping and small team-building exercises
- The banquet setting enables teams to rotate
- Small groups promote intimacy
- A perfect setting for food and drinks to be served

Cons:
- It may be difficult for large companies to all fit comfortably
- Numerous groups can take attention away from a central speaker
- Seating arrangements can be cramped if there isn't a large enough venue
- It can be noisy during group conversations due to the proximity of tables

Cabaret for awards ceremonies and parties

Suppose you're hoping to host an awards ceremony or honor your employees in a festive setting. In that case, a cabaret arrangement is ideal, as it provides a comfortable and informal environment that can accommodate a speaker or presentation. Like banquet seating, this arrangement focuses on round tables and draws focus to a main speaker or presentation while bringing teams close. Instead of having groups fully enclose the table, cabaret seating requires one end of the table to remain open towards the focal point of the presentation or speaker. This type of arrangement can accommodate mid-size groups by efficiently using space.
Pros:
- Cabaret-style seating is great for events that serve meals and drinks
- Numerous tables accommodate small groups, allowing for meaningful interaction
- Nearby tables allow groups to switch tables, enhancing networking and connection

Cons:
- Cabaret seating provides an intimate setting that isn't ideal for large-scale conferences
- Depending on the number of tables, cabaret seating can require a lot of floor capacity and space
- Cabaret seating brings people together and can be potentially loud

Boardroom seating for business

Boardroom seating is a great way to combine a formal setup with the chance for conversation and discussion. For small to mid-size meetings requiring an entire company or team, the boardroom focuses attention on those leading the conference while also allowing for close physical proximity, perfect for Q&A sessions. If your company is planning an off-site meet-up, the boardroom seating arrangement is an excellent setup for mini-conferences and workshops.

Pros:
- Boardroom seating emanates professionalism without being restrictive
- Seating centers focus on the speaker while keeping participants close
- It is easy to have group discussions and Q&A sessions
- The boardroom utilizes tables, providing space for laptops and presentation materials

Cons:
- The seating arrangement has the potential for limited visibility if seated in a far corner of the table
- Social interaction can feel limited unless promoted by the speaker

Classroom style to focus on learning

It might feel like you are back in school with this setup, as the classroom setting emphasizes focus and function. Classroom seating is an excellent setup for small presentations. If your company is setting up a retreat that is focused on team building but also needs to address important announcements, such as end-of-the-year figures or policy changes, the classroom can bring focus and leave space for attendees to socialize afterward.

Pros:
- Classroom seating can accommodate small to mid-sized teams
- The forward-facing seating promotes focus and learning
- There is space for laptops and note-taking

Cons:
- The classroom setting is not ideal for discussion beyond answering questions
- The forward focus can feel restrictive during long meeting sessions

Enhance engagement with the chevron setup

Much like the classroom setup, the chevron seating arrangement prioritizes the speaker and is a great way to bring the whole team together for a small conference or announcements. Chevron seating is often two columns with several seats in each row, angled toward each other. Larger companies can utilize space efficiently and promote productivity using the chevron style.

Pros:
- Chevron is spacious and well-suited for larger teams
- The angle and direction prioritize focus by physically facing the speaker or panel
- Chevron seating provides an excellent way to bring larger groups together
- Chevron is suitable for training and onboarding meetings

Cons:
- Chevron seating isn't ideal for spontaneous conversation
- A chevron setup needs designated entrances and aisles to access seating
- This arrangement can be restrictive depending on the space

Hollow-square setup for all hands on deck

The hollow-square seating arrangement is like a boardroom setup, except that by incorporating an open shape it has the potential to feel less formal or restricted. This setup works well if your company is organizing a workshop that isn't dependent on group work and brings small to mid-size teams together.
The hollow square can also accommodate large teams, depending on whether the room is large enough to fit large tables.

Pros:
- The hollow square makes attendees face one another, which brings a sense of intimacy to the group
- This arrangement provides plenty of space to take notes or work on your laptop
- In this setup, attendees can see one another, which encourages people to ask questions

Cons:
- Hollow-square seating requires a lot of space
- If the room isn't spacious enough, this arrangement can't accommodate large groups
- Some attendees may have difficulty seeing if placed at the far end of the room or table
- This arrangement does not have a designated space for a speaker

Horseshoe seating for informal meetups and events

Think hollow-square seating with an open side and without a table, allowing for more fluid interaction from presenters or designated speakers. The horseshoe seating arrangement does require a fair amount of space but can fit a medium-sized team without compromising openness.

Pros:
- Horseshoe seating lets speakers walk between seats to pass out materials or documents
- Attendees can freely ask questions, which fosters communication
- This seating arrangement lets neighbors discuss when needed

Cons:
- Horseshoe seating typically requires ample space
- This seating style has no surface area to place belongings or work-related materials

Open format or lounge setting

Drawing inspiration from the open-plan office and coworking spaces, lounge-style seating allows people to mingle freely, encouraging serendipitous conversation and intimacy among workers. An open format breaks down hierarchy and is an ideal seating arrangement for introductions and company retreats focused on team building.

Pros:
- The open setting allows people to spark up a conversation
- This arrangement encourages teams to break out of their comfort zone
- A lounge setting is perfect for promoting informal relationship building
- The open format is excellent for parties and cocktail hour

Cons:
- Open-plan meeting spaces can be uncomfortable for less social attendees
- It can be hard to establish a structured meeting because of the fluid setup
- A lounge setup requires a lot of space to allow people to move comfortably

Traditional conference or theater setup

For large-scale meetups, the most popular arrangement is the conference room, which is similar to theater-style or auditorium-style seating. Participants often face a stage or platform where speakers present or give small panel discussions in front of the audience. The traditional seating arrangement is one of the most commonplace setups, as it can accommodate teams of all sizes and aims to provide focus.

Pros:
- Because the traditional setup uses an auditorium or similar space, room for attendees is often not an issue
- Conference settings provide focus and encourage learning
- Traditional seating is excellent for conferences, speeches, and presentations
- It can be more comfortable for employees who are less inclined to socialize

Cons:
- Traditional setups don't encourage discussion or spontaneous introductions
- Depending on the venue, it can be challenging to hear the speaker
- Conference settings don't foster intimacy and are not ideal for team building

The right seating arrangement for your next event or company retreat can play a significant role in how employees interact with one another and can be a strategic way to evoke productivity and connection, ultimately bringing workers closer together.
If you're unsure whether any of the setups mentioned above will suit your specific needs, we can help you figure out what type of venue and seating will be the best fit for your particular event. The Surf Office offers a meeting room capacity calculator to help you figure out what will work best for you and your team. The Surf Office is here to help you organize your next company retreat and work out even the smallest details, from icebreaker topics to seating arrangements.
What is the best linker for a fusion protein? | ResearchGate

Question asked 31 October 2014 · 29 answers
Anchel Gonzalez, French Institute of Health and Medical Research

Hello everyone, I am preparing a fusion protein with mCherry and I have doubts about the type of linker to use. I have seen in the literature that sometimes flexible linkers of this type of sequence are used: (Gly-Gly-Gly-Gly-Ser)n. Can an expert in the field confirm whether this residue composition is the most suitable? What would be the optimal length? Thank you for your help.

Topics: Recombinant Fusion Proteins, Protein-Protein Interaction, Mutant Chimeric Proteins, Recombinant Proteins, Genetic Engineering

Most recent answer

Ayham Shakouka, Indian Agricultural Research Institute
Gly is good and small (less effect on the folding).

Popular answers (1)

Benoît-Joseph Laventie, University of Basel
There are linkers based on the same principle as the (GGGGS)n which I find superior, as they don't have repeats that can lead to homologous recombination. One example: GSAGSAAGSGEF.
Reference for the GSAGSAAGSGEF linker: Waldo, G.S., Standish, B.M., Berendzen, J., and Terwilliger, T.C. (1999). Rapid protein-folding assay using green fluorescent protein. Nat. Biotechnol. 17, 691–695.
Reference for other similar linkers: Chen, X., Zaro, J.L., and Shen, W.-C. (2013). Fusion protein linkers: property, design and functionality. Adv. Drug Deliv. Rev. 65, 1357–1369.

All Answers (29)

Christopher D. Pellenz, EOS
I agree with your feeling that Gly makes a good linker. I think length is largely empirical for your specific protein, and your example seems completely reasonable. Some constructs will localize and some will not; you just need to test it out.

Lin Bai, Brookhaven National Laboratory
I always use Gly-Ser repeats as the linker. As you are working on mCherry, I think you can try ~5-10 aa at first. If it doesn't work, then test some other length.

Dominique Liger, University of Paris-Sud
Hi there, I have evaluated the effect of a (Gly4Ser)2 spacer on mIL3-mediated diphtheria toxin toxicity: the presence of the spacer significantly improved toxicity. If interested, the article published in FEBS Letters in 1997 is accessible from my ResearchGate contribution page.

Anchel Gonzalez, French Institute of Health and Medical Research
Thank you all for your answers, I will try this linker first: Gly-Gly-Ser-Gly-Gly-Gly-Gly-Ser-Gly-Gly. And let's see how it goes! Thanks!

Ferdinand Roesch, French National Institute for Agriculture, Food, and Environment (INRAE)
Hello Anchel, did it work? I am interested in a similar linker.

Anchel Gonzalez, French Institute of Health and Medical Research
Hi Ferdinand, yes, it worked. This is the linker that I finally used: Gly-Gly-Ser-Gly-Gly-Gly-Ser-Gly-Gly. I can observe good mCherry expression (C-terminal to the linker) by fluorescence microscopy. I still have to run some tests to validate that my protein of interest (N-terminal to the linker) is still present as a fusion protein (not cleaved).
But even if that's the case, I assume it will be due to the sequence of my protein and not the linker.

Milan Pabst, University of Bonn
Here is a nice homepage for those interested. I will try the Gly-Gly-Ser-Gly linker. Cheers, Milan

Saša Rezelj, National Institute of Chemistry
Have you ever noticed that a Gly-Ser linker is cleaved by proteases, or that the ribosome falls off before the whole fusion protein is produced? I have problems with different lengths of the produced proteins and I don't know why.

Anchel Gonzalez, French Institute of Health and Medical Research
Hi Saša, in my case the entire protein is produced (using the Gly-Gly-Ser-Gly-Gly-Gly-Ser-Gly-Gly linker). The fusion protein has the predicted size, as determined by Western blot. No cleavage products are detected.

Tahoora Mousavi, Mazandaran University of Medical Sciences
Which linker is better for cleaving in tissue for a protein vaccine? (We want to link some peptides with a suitable linker.) What linker do you suggest?

Anchel Gonzalez, French Institute of Health and Medical Research
Hi Tahoora, do you mean a linker that gets cleaved upon tissue uptake? That's interesting, but unfortunately I don't have experience in this topic, sorry. Perhaps you can read this review that I just found after a Google search. I hope you can find good suggestions there!

Tahoora Mousavi, Mazandaran University of Medical Sciences
Thank you very much for your suggestion.

Sutharsana Yathursan, Ministry of Health, New Zealand
Hi Tahoora, did you find any linkers suitable for your work? I am looking for a similar one.

Songbo Qiu, University of Texas MD Anderson Cancer Center
You may not need a linker at all; sometimes a direct fusion works better than any linker.

Suman Khan, Weizmann Institute of Science
Hi Anchel, I know that I am a little late. I am really curious to know if your protein with the mCherry tag at the C-terminus was still functional. If I can ask, which protein was it? Thank you very much.

Anchel Gonzalez, French Institute of Health and Medical Research
Hi Suman, in my study the functionality of the protein was not relevant, so we didn't test it. If you suspect the C-terminus of your protein is part of a domain important for its function, perhaps you can make an additional construct with the tag in a different position. But keep in mind that placing the tag N-terminally will very likely change the normal intracellular distribution, since localization sequences are usually there. Sometimes, tagging a protein without changing its localization and function can be very challenging; perhaps you will have to test multiple approaches. Or tag it internally, by click chemistry for example? Maybe others have better suggestions?

Davod Jafari, Iran University of Medical Sciences
Hi.
A (Pro-Gly-Pro-Gly) linker could be a good choice for fusion proteins and subunit vaccine design. It worked in my work.

Davod Jafari, Iran University of Medical Sciences
Hi Anchel, the (Pro-Gly-Pro-Gly)n linker could be a good choice for fusion proteins, especially subunit vaccines. It was tested here experimentally: Article: Constructing and transient expression of a gene cassette con… And by bioinformatic analysis: Article: Bioinformatic analysis of different fusions of ipaD, PA20 an… Hope you succeed.

I have been told that ATG needs to occur only once in a fusion gene. But will it cause problems if ATG is repeated in another gene in the fusion assembly? I am currently planning to link a GDP promoter (yeast) to my gene, which links to a green fluorescent protein (GFP), and that links to a terminator sequence, so I will have a complete repair template cassette ready to attach homology arms to for CRISPR. I have ATG starting my main gene, but I also have it starting my GFP sequence. Do I need to clone that out? Thanks! - Amy

What do you think of this linker sequence, based on an ApaI restriction enzyme overhang inside the linker sequence? It does have one proline, which concerns me over flexibility: ggcagcgcgggcagcGGGCCCgcggcgggcagcggc. I put the restriction site inside the linker sequence to avoid a scar in the main protein sequences.

Yoram Gerchman, Oranim Academic College of Education
Amy Johnson, I do not think the "ATG needs to occur only once in a fusion gene" makes much sense. Take for example the NhaA gene in E. coli (my PhD subject :-), available here. As you can see, there are multiple methionines in the protein sequence, and since the only codon for Met is ATG, there are multiple ATGs... Interestingly, the coding region starts with CTC, not ATG... Yoram

I think what was meant was: is a second ATG needed for the second gene's TSS in the fusion cassette after the linker sequence? Thank you for clarifying the issue.

Laurel Cook, Sacred Heart University
This might be a bit extra, but you might be able to just create a recombinant gene with a linker sequence containing a blend of both genes (assuming all subunits are helical) by finding the nearest flanking restriction site (3' for the upstream and 5' for the downstream gene) by looking up the sequence on BLAST, and selecting a source organism with both restriction sites enclosing filler DNA. It's tricky to articulate what I mean, but it's basically crossing your fingers and praying that you end up with the three sequences ligating together. (I've never done this before, I'm only a freshman in college, but I've been computing the source organisms I'd need to make a botulinum toxin detector protein for mandatory quality-control checks for about two years now without any expert input, so I could be 100% wrong, since it's literally a very chemically involved game of "find the very short oligonucleotide in the haystack.")

Anchel Gonzalez, French Institute of Health and Medical Research
Amy Johnson, I don't know if I fully understand what you want to do but, if you are wondering whether a second ATG is necessary to express the second gene as a fusion protein, the answer is no. If you don't include it, the first Met of GFP will not be in the sequence but, as long as the entire open reading frame is maintained, the cassette will be expressed using only the TSS of the gene at 5'.
The ribosomes will scan the transcript until they find the first AUG, from where protein synthesis will begin and continue all the way through the linker and GFP (subsequent AUGs will be translated as Met, but not used as start codons). In fact, for the design to work as a fusion protein, you need to remove GFP's 5'-UTR, or else the ribosome may find a premature stop codon and drop off before continuing with GFP, or GFP will be expressed separately (not as a fusion protein). I hope this helps, and my apologies in advance if I missed the point of your question.

Thank you. I do not include UTRs of gene fragments past the initial promoter sequence of the whole fusion cassette.

Yijie Daniel Deng, Dartmouth College
The XTEN linker is another flexible linker and was found to work very well in Cas9-linked deaminases. It may be a good choice for nuclease-like enzymes. See below an example: Article: Engineering of high-precision base editors for site-specific…

Michael Koksharov, Brown University
Another interesting paper on linkers: Article: IFLinkC: An iterative functional linker cloning strategy for…

Ayham Shakouka, Indian Agricultural Research Institute
Gly is good and small (less effect on the folding).
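As a quick sanity check on the ApaI-based linker DNA proposed earlier in the thread, one can translate it codon by codon and confirm both the in-frame peptide (Gly/Ser/Ala plus the single Pro the poster mentions) and the presence of the GGGCCC ApaI site. A minimal sketch, not from the thread itself; the tiny codon table is an assumption that only covers the codons occurring in this particular sequence:

    #include <cctype>
    #include <cstddef>
    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        // The ApaI-based linker DNA proposed in the thread (36 nt, 12 codons).
        std::string dna = "ggcagcgcgggcagcGGGCCCgcggcgggcagcggc";
        for (char& c : dna) c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));

        // Minimal codon table: only the codons present in this one sequence.
        const std::map<std::string, char> codon = {
            {"GGC", 'G'}, {"GGG", 'G'}, {"AGC", 'S'}, {"GCG", 'A'}, {"CCC", 'P'}};

        std::string peptide;
        for (std::size_t i = 0; i + 3 <= dna.size(); i += 3) {
            auto it = codon.find(dna.substr(i, 3));
            peptide += (it != codon.end()) ? it->second : '?';
        }

        std::cout << "peptide: " << peptide << '\n';   // GSAGSGPAAGSG (one Pro, rest Gly/Ser/Ala)
        std::cout << "ApaI site (GGGCCC) present: "
                  << (dna.find("GGGCCC") != std::string::npos ? "yes" : "no") << '\n';
        return 0;
    }

Reading the linker in frame gives GSAGSGPAAGSG, so the ApaI site contributes the Gly-Pro pair that the poster was asking about; whether that single proline is acceptable for flexibility is the design judgment the thread discusses.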
Cells weren't killed even at high concentrations? (8 answers, asked 22 February 2022, Jiaqi Xiong) I have been trying to do lentiviral transduction in the HEK293 cell line to overexpress my protein of interest. I encountered some problems with the puromycin selection steps: 1. almost 50% of the cells were alive even after 6 µg/mL puromycin treatment; 2. HEK293 became suspension cells after puromycin treatment. Some details: at first, I tried the puromycin concentration (2.5 µg/mL) recommended by the postdoc in my lab. A majority of the cells (HEK293) weren't killed after 3 days; in fact, 80% of them survived. The seeding density was 0.2 × 10^6 per well in a 6-well plate. What was even stranger was that the HEK293 cells became suspension cells. Then I tried to determine the optimal concentration that kills all the cells in 3 days by treating cells with various concentrations of puromycin (0, 1, 2, 3, 4, 5 µg/mL) in a 6-well plate; cells were seeded at a lower density (0.1 × 10^6). I also thawed a new batch of HEK293 cells due to the suspension-cell issue. Similar things occurred again: the cells weren't killed even at the highest concentrations after 3 days of treatment. The puromycin should be good, since everyone in the lab uses the same batch of puromycin as me. Most of my lab mates transduce lung cancer cell lines instead of HEK293, so there might be differences in puromycin sensitivity across cell lines. I'm wondering whether anyone has had a similar experience before. Should I try to generate a puromycin kill curve again with more concentrations (0, 1, 2, 3, 4, 5, 6, 7, 8, 9 µg/mL) and treat the cells for a longer period? Also, is it normal for HEK293 to become suspension cells? Thanks so much for your time!! - A clueless first-year PhD student

What are the best linkers for a 3xHA tag? Are linkers required between tags and/or between the tag and a gene of interest? (7 answers, asked 24 July 2018, Tomoya Sakamoto) I am trying to make a 3xHA-tagged gene construct. I have found some constructs with 3xHA on Addgene, and the 3xHA tags have several linkers between the HA tags. Some have G and GS between HAs, but others have different linkers. I am wondering whether these linkers are required for generating the 3xHA tag. If so, what are the best linkers for 3xHA tags, and why are those linkers required?

How to improve the purity of a Ni-NTA-purified His-tagged protein? (16 answers, asked 5 September 2022, Daniel Solomon) Hello, I'm sure this question has been asked a lot, but the protein I am purifying is not as clean as I would like, and all the potential solutions I have read about have not worked for me. I am purifying a protein that forms inclusion bodies. The construct is 14x His tag - TEV cleavage site - protein of interest. I use BL21 Star (DE3) cells and TB medium for growing the bacteria and induce at OD 0.6 for 4 hours at 30 °C. The purification protocol is based on a previously published paper. Following sonication, lysis buffer and washing of the pellet, the protein is extracted from the inclusion bodies using 6 M guanidine hydrochloride and applied overnight at room temperature to a Ni-NTA agarose column. It is then eluted in 4 M guanidine. I then pass it through RP-HPLC. As you can see from the attached image, although I am getting a high amount of protein, it shows a large smear, and I am not sure whether it is contaminants or degradation. I am overloading with sample when trying to judge purity, but I think it's better to get an accurate representation of what's going on rather than kid myself that I am working with a pure sample.
The HPLC makes no difference, so I think optimisation of the Ni-NTA purification is needed, but nothing so far has worked. I have tested different induction temperatures, leaving it on the Ni column both at 4 °C and for a few hours (rather than overnight at room temperature as the protocol says), protease inhibitors, and washes/step-wise elutions with different concentrations of imidazole (as in the attached file) - yet none of this has worked. I've even thought about making the His tag smaller (14x seems quite large, but I can't see how this would impact purity other than needing more imidazole for elution). If anyone has comments on the purity or any tips that I have not tried, it would be greatly appreciated! Thanks!

How long can I store an E. coli liquid culture at +4 °C before starting a maxi prep from a mini prep? (Discussion, 17 replies, asked 12 July 2020, Çiğdem Yılmaz) Hello everyone! I have an insert that I transformed into the E. coli STBL3 strain; the liquid cultures are stored at -80 °C. Once the mini culture is completed, I take some of it, and to start the maxi culture I have to keep that liquid bacterial culture in an Eppendorf tube for 4-7 hours. Is there any harm in keeping it in an Eppendorf tube at +4 °C in LB for 4-7 hours before starting the maxi culture? What other way could I do this?

Related publications: Genetic Engineering of a Recombinant Fusion Protein Possessing an Antitumor Antibody Fragment and a TNF-α Moiety (chapter, Nov 2002; Dieter Körholz, Wieland Kiess, Jim Xiang, John R. Gordon); Bedeutung der Gentechnik für die Diagnostik (chapter, Jan 1990; E.-G. Afting, H. G. Beschle, L. Wieczorek); [Viewpoint. Genetic engineering] (article, May 1990; E. C. Bonard, A. Vez).
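As an aside to the linker-design exchange above, the reading-frame points made there (internal ATGs are read as Met inside an open reading frame, in-frame stop codons must be absent, and the linker length should be a multiple of three) are easy to sanity-check in a few lines of code. The sketch below is illustrative only: the function name `check_linker` and the reported fields are ours, the codon table is the standard one, and the example input is the ApaI-containing linker quoted earlier in the thread. It is not a substitute for experimental validation.

```python
# A minimal sketch: sanity-check a proposed in-frame peptide linker (illustrative, not lab advice).
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def check_linker(dna, site="GGGCCC"):
    """Report frame, translation, internal ATG/stop codons and presence of a cloning site."""
    dna = dna.upper().replace("U", "T")
    codons = [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    peptide = "".join(CODON_TABLE[c] for c in codons)
    return {
        "in_frame": len(dna) % 3 == 0,
        "peptide": peptide,
        "has_site": site in dna,                 # ApaI site GGGCCC by default
        "internal_ATG": "ATG" in codons,         # read as Met inside an ORF, not as a new start
        "in_frame_stop": "*" in peptide,         # must be absent inside a fusion linker
        "prolines": peptide.count("P"),
        "gly_ser_fraction": sum(peptide.count(x) for x in "GS") / max(len(peptide), 1),
    }

if __name__ == "__main__":
    linker = "ggcagcgcgggcagcGGGCCCgcggcgggcagcggc"   # linker proposed in the thread
    for key, value in check_linker(linker).items():
        print(f"{key}: {value}")
```

For this particular sequence the check reports an in-frame, 12-residue Gly/Ser-rich peptide with a single proline contributed by the ApaI codons, matching the concern raised in the thread; whether that proline matters for flexibility is something only the experiment can settle.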
Rational subsets of groups

L. Bartholdi¹, P. V. Silva²,∗

¹ Mathematisches Institut, Georg-August Universität zu Göttingen, Bunsenstraße 3–5, D-37073 Göttingen, Germany. email: [email protected]
² Centro de Matemática, Faculdade de Ciências, Universidade do Porto, R. Campo Alegre 687, 4169-007 Porto, Portugal. email: [email protected]

2010 Mathematics Subject Classification: 20F10, 20E05, 68Q45, 68Q70
Key words: Free groups, inverse automata, Stallings automata, rational subsets.

Contents
1 Finitely generated groups 2
1.1 Free groups 3
2 Inverse automata and Stallings' construction 4
2.1 Inverse automata 4
2.2 Stallings' construction 5
2.3 Basic applications 7
2.4 Conjugacy 10
2.5 Further algebraic properties 12
2.6 Topological properties 14
2.7 Dynamical properties 16
3 Rational and recognizable subsets 17
3.1 Rational and recognizable subgroups 18
3.2 Benois' Theorem 19
3.3 Rational versus recognizable 21
3.4 Beyond free groups 22
3.5 Rational solution sets and rational constraints 24
References 25

∗ The second author acknowledges support by Project ASA (PTDC/MAT/65481/2006) and C.M.U.P., financed by F.C.T. (Portugal) through the programmes POCTI and POSI, with national and E.U. structural funds.

Over the years, finite automata have been used effectively in the theory of infinite groups to represent rational subsets. This includes the important particular case of finitely generated subgroups (and the beautiful theory of Stallings automata for the free group case), but goes far beyond that: certain inductive procedures need a more general setting than mere subgroups, and rational subsets constitute the natural generalization. The connections between automata theory and group theory are rich and deep, and many are portrayed in Sims' book.

This chapter is divided into three parts: in Section 1 we introduce basic concepts, terminology and notation for finitely generated groups, devoting special attention to free groups. These will also be used in Chapter 24. Section 2 describes the use of finite inverse automata to study finitely generated subgroups of free groups. The automaton recognizes elements of a subgroup, represented as words in the ambient free group. Section 3 considers, more generally, rational subsets of groups, when good closure and decidability properties of these subsets are satisfied.

The authors are grateful to Stuart Margolis, Benjamin Steinberg and Pascal Weil for their remarks on a preliminary version of this text.

1 Finitely generated groups

Let G be a group. Given A ⊆ G, let ⟨A⟩ = (A ∪ A−1)∗ denote the subgroup of G generated by A. We say that H ⩽ G is finitely generated, and write H ⩽f.g. G, if H = ⟨A⟩ for some finite subset A of H.
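As a concrete, if elementary, illustration of the definition ⟨A⟩ = (A ∪ A−1)∗, the following sketch computes ⟨A⟩ inside a finite group by closing the identity under multiplication by the generators and their inverses. Permutations of {0, …, n−1} stand in for G here; the encoding and function names are ours, not the chapter's.

```python
def compose(p, q):
    """(p ∘ q)(i) = p(q(i)); permutations of range(n) are stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def generated_subgroup(gens):
    """<gens>: close {identity} under multiplication by gens and their inverses."""
    step = list(gens) + [inverse(g) for g in gens]
    subgroup = {tuple(range(len(gens[0])))}
    frontier = set(subgroup)
    while frontier:                                   # terminates because the group is finite
        frontier = {compose(g, h) for g in step for h in frontier} - subgroup
        subgroup |= frontier
    return subgroup

# In S_3, a 3-cycle and a transposition generate all 6 permutations; the 3-cycle alone gives 3.
cycle, swap = (1, 2, 0), (1, 0, 2)
print(len(generated_subgroup([cycle, swap])))   # 6
print(len(generated_subgroup([cycle])))         # 3
```

In an infinite group the same closure need not terminate, which is one reason the automata-based methods of the following sections are needed for free groups.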
Given H ⩽ G, we denote by [G : H] the index of H in G, that is, the number of right cosets Hg for g ∈ G or, equivalently, the number of left cosets. If [G : H] is finite, we write H ⩽f.i. G. It is well known that every finite index subgroup of a finitely generated group is finitely generated.

We denote by 1 the identity of G. An element g ∈ G has finite order if ⟨g⟩ is finite. Elements g, h ∈ G are conjugate if h = x−1gx for some x ∈ G. We use the notation g^h = h−1gh and [g, h] = g−1g^h to denote, respectively, conjugates and commutators.

Given an alphabet A, we denote by A−1 a set of formal inverses of A, and write Ã = A ∪ A−1. We say that Ã is an involutive alphabet. We extend −1 : A → A−1 : a ↦ a−1 to an involution on Ã∗ through

(a−1)−1 = a,  (uv)−1 = v−1u−1  (a ∈ A, u, v ∈ Ã∗).

If G = ⟨A⟩, we have a canonical epimorphism ρ : Ã∗ ↠ G, mapping a±1 ∈ Ã to a±1 ∈ G. We present next some classical decidability problems:

Definition 1.1. Let G = ⟨A⟩ be a finitely generated group.
word problem: is there an algorithm that, upon receiving as input a word u ∈ Ã∗, determines whether or not ρ(u) = 1?
conjugacy problem: is there an algorithm that, upon receiving as input words u, v ∈ Ã∗, determines whether or not ρ(u) and ρ(v) are conjugate in G?
membership problem for K ⊆ 2^G: is there, for every X ∈ K, an algorithm that, upon receiving as input a word u ∈ Ã∗, determines whether or not ρ(u) ∈ X?
generalized word problem: is the membership problem for the class of finitely generated subgroups of G solvable?
order problem: is there an algorithm that, upon receiving as input a word u ∈ Ã∗, determines whether ρ(u) has finite or infinite order?
isomorphism problem for a class G of groups: is there an algorithm that, upon receiving as input a description of groups G, H ∈ G, decides whether or not G ≅ H?

Typically, G may be a subclass of finitely presented groups (given by their presentation), or automata groups (see Chapter 24) given by automata. We can also require complexity bounds on the algorithms; more precisely, we may ask with which complexity bound an answer to the problem may be obtained, and also with which complexity bound a witness (a normal form for the word problem, an element conjugating ρ(u) to ρ(v) in case they are conjugate, an expression of u in the generators of X in the generalized word problem) may be constructed.

1.1 Free groups

We recall that an equivalence relation ∼ on a semigroup S is a congruence if a ∼ b implies ac ∼ bc and ca ∼ cb for all a, b, c ∈ S.

Definition 1.2. Given an alphabet A, let ∼ denote the congruence on Ã∗ generated by the relation

{(aa−1, 1) | a ∈ Ã}.  (1.1)

The quotient FA = Ã∗/∼ is the free group on A. We denote by θ : Ã∗ → FA the canonical morphism u ↦ [u]∼.

Free groups admit the following universal property: for every map f : A → G, there is a unique group morphism FA → G that extends f.

Alternatively, we can view (1.1) as a confluent length-reducing rewriting system on Ã∗, where each word w ∈ Ã∗ can be transformed into a unique reduced word w̄ with no factor of the form aa−1, see . As a consequence, the equivalence

u ∼ v ⇔ ū = v̄  (u, v ∈ Ã∗)

solves the word problem for FA. We shall use the notation RA for the set of reduced words, i.e. RA = {w̄ | w ∈ Ã∗}. It is well known that FA is isomorphic to RA under the binary operation u ⋆ v defined as the reduced form of the concatenation uv (u, v ∈ RA). We recall that the length |g| of g ∈ FA is the length of the reduced form of g, also denoted by ḡ.
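The reduction map w ↦ w̄ and the resulting solution of the word problem are easy to implement. In the sketch below — our encoding, not the authors' — lowercase letters stand for generators and uppercase letters for their formal inverses, so that Ã = {a, b, …, A, B, …}.

```python
def inv(x):
    """Formal inverse of a letter: a <-> A, b <-> B, ..."""
    return x.lower() if x.isupper() else x.upper()

def reduce_word(w):
    """The reduction map w -> w̄: cancel factors of the form x·x⁻¹ with a stack (linear time)."""
    stack = []
    for x in w:
        if stack and stack[-1] == inv(x):
            stack.pop()                 # a factor aa⁻¹ disappears
        else:
            stack.append(x)
    return "".join(stack)

def equal_in_free_group(u, v):
    """Word problem for FA: u ~ v iff the reduced forms coincide."""
    return reduce_word(u) == reduce_word(v)

def length(g):
    """|g| = length of the reduced form of g."""
    return len(reduce_word(g))

assert reduce_word("abB") == "a"        # abb⁻¹ reduces to a
assert equal_in_free_group("aA", "")    # aa⁻¹ represents the identity
assert length("abAB") == 4              # aba⁻¹b⁻¹ is already reduced
```

Cyclic reduction, and with it the conjugacy test discussed in the next paragraphs, can be built directly on top of the same routine.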
The letters of A provide a natural basis for FA: they generate FA and satisfy no non-trivial relations, that is, all reduced words on these generators represent distinct elements of FA. A group is free if and only if it has a basis. 4 L. Bartholdi, P. V. Silva Throughout this chapter, we assume A to be a finite alphabet. It is well known that free groups FA and FB are isomorphic if and only if #A = #B. This leads to the concept of rank of a free group F: the cardinality of a basis of F, denoted by rk F. It is common to use the notation Fn to denote a free group of rank n. We recall that a reduced word u is cyclically reduced if uu is also reduced. Any reduced word u ∈RA admits a unique decomposition of the form u = vwv−1 with w cyclically reduced. A solution for the conjugacy problem follows easily from this: first reduce the words cyclically; then two cyclically reduced words in RA are conjugate if and only if they are cyclic permutations of each other. On the other hand, the order problem admits a trivial solution: only the identity has finite order. Finally, the generalized word problem shall be discussed in the following section. 2 Inverse automata and Stallings’ construction The study of finitely generated subgroups of free groups entered a new era in the early eighties when Stallings made explicit and effective a construction that can be traced back to the early part of the twentieth century in Schreier’s coset graphs (see and §24.1) and to Serre’s work . Stallings’ seminal paper was built over immersions of finite graphs, but the alternative approach using finite inverse automata became much more popular over the years; for more on their link, see . An extensive survey has been written by Kapovich and Miasnikov . Stallings’ construction for H ⩽f.g. FA consists in taking a finite set of generators for H in reduced form, building the so-called flower automaton and then proceeding to make this automaton deterministic through the operation known as Stallings foldings. This is clearly a terminating procedure, but the key fact is that the construction is independent from both the given finite generating set and the chosen folding sequence. A short simple automata-theoretic proof of this claim will be given. The finite inverse automaton S(H) thus obtained is usually called the Stallings automaton of H. Over the years, Stallings au-tomata became the standard representation for finitely generated subgroups of free groups and are involved in many of the algorithmic results presently obtained. Several of these algorithms are implemented in computer software, see e.g. CRAG , or the packages AUTOMATA and FGA in GAP . 2.1 Inverse automata An automaton A over an involutive alphabet e A is involutive if, whenever (p, a, q) is an edge of A, so is (q, a−1, p). Therefore it suffices to depict just the positively labelled edges (having label in A) in their graphical representation. Definition 2.1. An involutive automaton is inverse if it is deterministic, trim and has a single final state. If the latter happens to be the initial state, it is called the basepoint. It follows easily Rational subsets of groups 5 from the computation of the Nerode equivalence (see §10.2) that every inverse automaton is a minimal automaton. Finite inverse automata capture the idea of an action (of a finite inverse monoid, their transition monoid) on a finite set (the vertex set) through partial bijections. 
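Concretely, a finite inverse automaton can be stored as a dictionary of partial maps, one per letter of Ã, and reading a word then composes partial bijections of the vertex set. The encoding below (lowercase/uppercase for a, a⁻¹ and a dictionary edges[p][x] = q) is ours; the particular automaton used as an example is, as the reader may check, the one that will be obtained in Example 2.1 below, with basepoint 0.

```python
def inv(x):
    return x.lower() if x.isupper() else x.upper()

# An inverse automaton over Ã = {a, A, b, B}: edges[p][x] = q encodes an edge p --x--> q.
# Determinism is built into the dictionary; involutiveness is checked below.
edges = {
    0: {"b": 1, "A": 2},
    1: {"a": 2, "B": 0},
    2: {"a": 0, "b": 2, "B": 2, "A": 1},
}

def is_involutive(edges):
    """p --x--> q is an edge iff q --x⁻¹--> p is an edge."""
    return all(edges.get(q, {}).get(inv(x)) == p
               for p, out in edges.items() for x, q in out.items())

def act(edges, state, word):
    """Partial action of a word on the vertex set; None where it is undefined."""
    for x in word:
        state = edges.get(state, {}).get(x)
        if state is None:
            return None
    return state

print(is_involutive(edges))   # True
print(act(edges, 0, "Aba"))   # 0    -- a⁻¹ba labels a loop at the basepoint
print(act(edges, 0, "ab"))    # None -- the letter a acts nowhere at vertex 0
```

Each letter thus induces a partial injection of the vertex set, and the monoid generated by these partial maps is the finite inverse transition monoid mentioned above.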
We recall that a monoid M is inverse if, for every x ∈M, there exists a unique y ∈M such that xyx = x and y = yxy; then M acts by partial bijections on itself. The next result is easily proven, but is quite useful. Proposition 2.1. Let A be an inverse automaton and let p uvv−1w − − − − − →q be a path in A. Then there exists also a path p uw − →q in A. Another important property relates languages to morphisms. For us, a morphism be-tween deterministic automata A and A′ is a mapping ϕ between their respective vertex sets which preserves initial vertices, final vertices and edges, in the sense that (ϕ(p), a, ϕ(q)) is an edge of A′ whenever (p, a, q) is an edge of A. Proposition 2.2. Given inverse automata A and A′, then L(A) ⊆L(A′) if and only if there exists a morphism ϕ : A →A′. Moreover, such a morphism is unique. Proof. (⇒): Given a vertex q of A, take a successful path →q0 u − →q v − →t → in A, for some u, v ∈e A∗. Since L(A) ⊆L(A′), there exists a successful path →q′ 0 u − →q′ v − →t′ → in A′. We take ϕ(q) = q′. To show that ϕ is well defined, suppose that →q0 u′ − →q v′ − →t → is an alternative successful path in A. Since u′v ∈L(A) ⊆L(A′), there exists a success-ful path →q′ 0 u′ − →q′′ v − →t′ → in A′ and it follows that q′ = q′′ since A′ is inverse. Thus ϕ is well defined. It is now routine to check that ϕ is a morphism from A to A′ and that it is unique. (⇐): Immediate from the definition of morphism. 2.2 Stallings’ construction Let X be a finite subset of RA. We build an involutive automaton F(X) by fixing a basepoint q0 and gluing to it a petal labelled by every word in X as follows: if x = a1 . . . ak ∈X, with ai ∈e A, the petal consists of a closed path of the form q0 a1 − →• a2 − →· · · ak − →q0 and the respective inverse edges. All such intermediate vertices • are assumed to be distinct in the automaton. For obvious reasons, F(X) is called the flower automaton of X. 6 L. Bartholdi, P. V. Silva The automaton F(X) is almost an inverse automaton – except that it need not be deterministic. We can fix it by performing a sequence of so-called Stallings foldings. Assume that A is a trim involutive automaton with a basepoint, possessing two distinct edges of the form p a − →q, p a − →r (2.1) for a ∈e A. The folding is performed by identifying these two edges, as well as the two respective inverse edges. In particular, the vertices q and r are also identified (if they were distinct). The number of edges is certain to decrease through foldings. Therefore, if we perform enough of them, we are sure to turn F(X) into a finite inverse automaton. Definition 2.2. The Stallings automaton of X is the finite inverse automaton S(X) ob-tained through folding F(X). We shall see that S(X) depends only on the finitely generated subgroup ⟨X⟩of FA generated by X, being in particular independent from the choice of foldings taken to reach it. Since inverse automata are minimal, it suffices to characterize L(S(X)) in terms of H to prove uniqueness (up to isomorphism): Proposition 2.3. Fix H ⩽f.g. FA and let X ⊆RA be a finite generating set for H. Then L(S(X)) = \ {L ⊆e A∗| L is recognized by a finite inverse automaton with a basepoint and H ⊆L} . Proof. (⊇): Clearly, S(X) is a finite inverse automaton with a basepoint. Since X ∪ X−1 ⊆L(F(X)) ⊆L(S(X)), it follows easily from Proposition 2.1 that H ⊆L(S(X)) . (2.2) (⊆): Let L ⊆e A∗be recognized by a finite inverse automaton A with a basepoint, with H ⊆L. Since X ⊆H, we have an automaton morphism from F(X) to A, hence L(F(X)) ⊆L. 
To prove that L(S(X)) ⊆L, it suffices to show that inclusion in L is preserved through foldings. Indeed, assume that L(B) ⊆L and B′ is obtained from B by folding the two edges in (2.1). It is immediate that every successful path q0 u − →t in B′ can be lifted to a success-ful path q0 v − →t in B by successively inserting the word a−1a into u. Now v ∈L = L(A) implies u ∈L in view of Proposition 2.1. Now, given H ⩽FA finitely generated, we take a finite set X of generators. Without loss of generality, we may assume that X consists of reduced words, and we may define S(H) = S(X) to be the Stallings automaton of H. Example 2.1. Stallings’ construction for X = {a−1ba, ba2}, where the next edges to be identified are depicted by dotted lines, is Rational subsets of groups 7 q0 F(X) = b a b a a a q0 q0 = S(X) b b a a a b a b a A simple, yet important example is given by applying the construction to Fn itself, when we obtain the so-called bouquet of n circles: q0 q0 q0 S(F1) S(F2) S(F3) a a b a b c In terms of complexity, the best known algorithm for the construction of S(X) is due to Touikan . Its time complexity is O(n log∗n), where n is the sum of the lengths of the elements of X. 2.3 Basic applications The most fundamental application of Stallings’ construction is an elegant and efficient solution to the generalized word problem: Theorem 2.4. The generalized word problem in FA is solvable. We will see many groups in Chapter 24 that have solvable word problem; however, few of them have solvable generalized word problem. The proof of Theorem 2.4 relies on Proposition 2.5. Consider H ⩽f.g. FA and u ∈FA. Then u ∈H if and only if u ∈ L(S(H)). Proof. (⇒): Follows from (2.2). (⇐): It follows easily from the last paragraph of the proof of Proposition 2.3 that, if B′ is obtained from B by performing Stallings foldings, then L(B′) = L(B). Hence, if 8 L. Bartholdi, P. V. Silva H = ⟨X⟩, we get L(S(H)) = L(F(X)) = (X ∪X−1)∗= H and the implication follows. It follows from our previous remark that the complexity of the generalized word prob-lem is O(n log∗n + m), where n is the sum of the lengths of the elements of X and m is the length of the input word. In particular, once the subgroup X has been fixed, complexity is linear in m. Example 2.2. We may use the Stallings automaton constructed in Example 2.1 to check that baba−1b−1 ∈H = ⟨a−1ba, ba2⟩but ab / ∈H. Stallings automata also provide an effective construction for bases of finitely generated subgroups. Consider H ⩽f.g. FA, and let m be the number of vertices of S(H). A spanning tree T for S(H) consists of m −1 edges and their inverses which, together, connect all the vertices of S(H). Given a vertex p of S(H), we denote by gp the T-geodesic connecting the basepoint q0 to p, that is, q0 gp − →p is the shortest path contained in T connecting q0 to p. Proposition 2.6. Let H ⩽f.g. FA and let T be a spanning tree for S(H). Let E+ be the set of positively labelled edges of S(H). Then H is free with basis Y = {gpag−1 q | (p, a, q) ∈E+ \ T} . Proof. It follows from Proposition 2.5 that L(S(H)) ⊆H, hence Y ⊆H. To show that H = ⟨Y ⟩, take h = a1 · · · ak ∈H in reduced form (ai ∈e A). By Proposition 2.5, there exists a successful path q0 a1 − →q1 a2 − →· · · ak − →qk = q0 in S(H). For i = 1, . . . , k, we have either gqi−1aig−1 qi ∈Y ∪Y −1 or gqi−1aig−1 qi = 1, the latter occurring if (qi−1, ai, qi) ∈T. In any case, we get h = a1 · · · ak = (gq0a1g−1 q1 )(gq1a2g−1 q2 ) · · · (gqk−1akg−1 q0 ) ∈⟨Y ⟩ and so H = ⟨Y ⟩. 
It remains to show that the elements of Y satisfy no nontrivial relations. Let y1, . . . , yk ∈Y ∪Y −1 with yi ̸= y−1 i−1 for i = 2, . . . , k. Write yi = gpiaig−1 ri , where ai ∈e A labels the edge not in T. It follows easily from yi ̸= y−1 i−1 and the definition of spanning tree that y1 · · · yk = gp1a1g−1 r1 gp2a2 · · · ak−1g−1 rk−1gpkakgrk , a nonempty reduced word if k ⩾1. Therefore Y is a basis of H as claimed. In the process, we also obtain a proof of the Nielsen-Schreier Theorem, in the case of finitely generated subgroups. A simple topological proof may be found in : Theorem 2.7 (Nielsen-Schreier). Every subgroup of a free group is itself free. Rational subsets of groups 9 Example 2.3. We use the Stallings automaton constructed in Example 2.1 to construct a basis of H = ⟨a−1ba, ba2⟩. If we take the spanning tree T defined by the dotted lines in q0 b a b a then #E+ \ T = 2 and the corresponding basis is {ba2, baba−1b−1}. Another choice of spanning tree actually proves that the original generating set is also a basis. We remark that Proposition 2.6 can be extended to the case of infinitely generated subgroups, proving the general case of Theorem 2.7. However, in this case there is no ef-fective construction such as Stallings’, and the (infinite) inverse automaton S(H) remains a theoretical object, using appropriate cosets as vertices. Another classical application of Stallings’ construction regards the identification of finite index subgroups. Proposition 2.8. Consider H ⩽f.g. FA. (i) H is a finite index subgroup of FA if and only if S(H) is a complete automaton. (ii) If H is a finite index subgroup of FA, then its index is the number of vertices of S(H). Proof. (i) (⇒): Suppose that S(H) is not complete. Then there exist some vertex q and some a ∈e A such that q · a is undefined. Let g be a geodesic connecting the basepoint q0 to q in S(H). We claim that Hgam ̸= Hgan if m −n > |g| . (2.3) Indeed, Hgam = Hgan implies gam−ng−1 ∈H and so gam−ng−1 ∈L(S(H)) by Proposition 2.5. Since ga is reduced due to S(H) being inverse, it follows from m −n > |g| that gaam−n−1g−1 = gam−ng−1 ∈L(S(H)): indeed, g−1 is not long enough to erase all the a’s. Since S(H) is deterministic, q · a must be defined, a contradiction. Therefore (2.3) holds and so H has infinite index. (⇐): Let Q be the vertex set of S(H) and fix a geodesic q0 gq − →q for each q ∈Q. Take u ∈FA. Since S(H) is complete, we have a path q0 u − →q for some q ∈Q. Hence ug−1 q ∈H and so u = ug−1 q gq ∈Hgq. Therefore FA = S q∈Q Hgq and so H ⩽f.i. FA. (ii) In view of FA = S q∈Q Hgq, it suffices to show that the cosets Hgq are all distinct. Indeed, assume that Hgp = Hgq for some vertices p, q ∈Q. Then gpg−1 q ∈H and so gpg−1 q ∈L(S(H)) by Proposition 2.5. On the other hand, since S(H) is complete, we have a path q0 gpg−1 q − − − →r 10 L. Bartholdi, P. V. Silva for some r ∈Q. In view of Proposition 2.1, and by determinism, we get r = q0. Hence we have paths p g−1 q − − →q0, q g−1 q − − →q0 . Since S(H) is inverse, we get p = q as required. Example 2.4. Since the Stallings automaton constructed in Example 2.1 is not complete, it follows that ⟨a−1ba, ba2⟩is not a finite index subgroup of F2. Corollary 2.9. If H ⩽FA has index n, then rk H = 1 + n(#A −1). Proof. By Proposition 2.8, the automaton S(H) has n vertices and n#A positive edges. A spanning tree has n −1 positive edges, so rk H = n#A −(n −1) = 1 + n(#A −1) by Proposition 2.6. 
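Readers who want to experiment with the constructions of §§2.2–2.3 may find a small, unoptimized sketch useful. Everything below is our illustration: lowercase/uppercase letters encode a and a⁻¹, the folding loop is the naive quadratic one rather than the O(n log∗ n) algorithm cited above, and `member` and `index_in_free_group` implement the criteria of Propositions 2.5 and 2.8 for a non-trivial subgroup H.

```python
def inv(x):
    return x.lower() if x.isupper() else x.upper()

def flower(generators):
    """Flower automaton F(X): one petal per generator, glued at the basepoint 0."""
    edges, fresh = set(), 1
    for w in generators:
        prev = 0
        for i, x in enumerate(w):
            if i == len(w) - 1:
                nxt = 0                              # close the petal at the basepoint
            else:
                nxt, fresh = fresh, fresh + 1
            edges.add((prev, x, nxt))
            edges.add((nxt, inv(x), prev))           # keep the edge set involutive
            prev = nxt
    return edges

def fold(edges):
    """Stallings foldings until deterministic (the operation is confluent, so the order is irrelevant)."""
    while True:
        seen, clash = {}, None
        for (p, x, q) in edges:
            if (p, x) in seen and seen[(p, x)] != q:
                clash = (seen[(p, x)], q)            # two edges p --x--> q and p --x--> q'
                break
            seen[(p, x)] = q
        if clash is None:
            return edges
        keep, drop = min(clash), max(clash)          # keep the smaller name so the basepoint 0 survives
        edges = {(keep if s == drop else s, y, keep if t == drop else t)
                 for (s, y, t) in edges}

def stallings(generators):
    """Transition table delta[p][x] = q of S(<generators>)."""
    delta = {}
    for (p, x, q) in fold(flower(generators)):
        delta.setdefault(p, {})[x] = q
    return delta

def member(delta, reduced_word):
    """Proposition 2.5: a reduced word lies in H iff it labels a loop at the basepoint."""
    state = 0
    for x in reduced_word:
        state = delta.get(state, {}).get(x)
        if state is None:
            return False
    return state == 0

def index_in_free_group(delta, alphabet="ab"):
    """Proposition 2.8: [FA : H] is the number of vertices if S(H) is complete, and infinite otherwise."""
    letters = set(alphabet) | {inv(x) for x in alphabet}
    complete = all(letters <= set(out) for out in delta.values())
    return len(delta) if complete else float("inf")

# H = <a⁻¹ba, ba²> from Examples 2.1-2.4 (A stands for a⁻¹, B for b⁻¹).
delta = stallings(["Aba", "baa"])
print(member(delta, "babAB"))        # True:  baba⁻¹b⁻¹ ∈ H (Example 2.2)
print(member(delta, "ab"))           # False: ab ∉ H
print(index_in_free_group(delta))    # inf:   S(H) is not complete (Example 2.4)
```

Extracting a basis as in Proposition 2.6 amounts to choosing a spanning tree of this transition table and collecting the words gp·a·gq⁻¹ over the positive edges outside the tree; we leave that as an exercise on the same data structure.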
Beautiful connections between finite index subgroups and certain classes of bifix codes — set of words none of which is a prefix or a suffix of another — have recently been unveiled by Berstel, De Felice, Perrin, Reutenauer and Rindone . 2.4 Conjugacy We start now a brief discussion of conjugacy. Recall that the outdegree of a vertex q is the number of edges starting at q and the geodesic distance in a connected graph is the length of the shortest undirected path connecting two vertices. Since the original generating set is always taken in reduced form, it follows easily that there is at most one vertex in a Stallings automaton having outdegree < 2: the basepoint q0. Assuming that H is nontrivial, S(H) must always be of the form q0 q1 · · · · · · · · · u where q1 is the closest vertex to q0 (in terms of geodesic distance) having outdegree > 2 (since there is at least one vertex having such outdegree). Note that q1 = q0 if q0 has outdegree > 2 itself. We call q0 u − →the tail (which is empty if q1 = q0) and the remaining subgraph the core of S(H). Note that S(H), and its core, may be understood as follows. Consider the graph with vertex set FA/H = {gH | g ∈FA}, with an edge from gH to agH for each generator a ∈A. Then this graph, called the Schreier graph (see §24.1) of H\FA, consists of finitely many trees attached to the core of S(H). Theorem 2.10. There is an algorithm that decides whether or not two finitely generated subgroups of FA are conjugate. Rational subsets of groups 11 Proof. Finitely generated subgroups G, H are conjugate if and only if the cores of S(G) and S(H) are equal (up to their basepoints). The Stallings automata of the conjugates of H can be obtained in the following ways: (1) declaring a vertex in the core C to be the basepoint; (2) gluing a tail to some vertex in the core C and taking its other endpoint to be the basepoint. Note that the tail must be glued in some way that keeps the automaton inverse, so in particular this second type of operation can only be performed if the automaton is not complete, or equivalently, if H has infinite index. An immediate consequence is the following classical Proposition 2.11. A finite rank normal subgroup of a free group is either trivial or has finite index. Moreover, a finite index subgroup H is normal if and only if its Stallings automaton is vertex-transitive, that is, if all choices of basepoint yield the same automaton. Example 2.5. Stallings automata of some conjugates of H = ⟨a−1ba, ba2⟩: q0 S(H) = b a b a q0 S(b−1Hb) = b a b a q0 S(b−2Hb2) = b a b a b We can also use the previous discussion on the structure of (finite) Stallings automata to provide them with an abstract characterization. Proposition 2.12. A finite inverse automaton with a basepoint is a Stallings automaton if and only if it has at most one vertex of outdegree 1: the basepoint. Proof. Indeed, for any such automaton we can take a spanning tree and use it to construct a basis for the subgroup as in the proof of Proposition 2.6. 12 L. Bartholdi, P. V. Silva 2.5 Further algebraic properties The study of intersections of finitely generated subgroups of FA provides further applica-tions of Stallings automata. Howson’s classical theorem admits a simple proof using the direct product of two Stallings automata; it is also an immediate consequence of Theorem 3.1 and Corollary 3.4(ii). Theorem 2.13 (Howson). If H, K ⩽f.g. FA, then also H ∩K ⩽f.g. FA. Stallings automata are also naturally related to the famous Hanna Neumann conjec-ture: given H, K ⩽f.g. 
FA, then rk(H ∩K) −1 ⩽(rk H −1)(rk K −1). The conjec-ture arose in a paper of Hanna Neumann , where the inequality rk(H ∩K) −1 ⩽ 2(rk H −1)(rk K −1) was also proved. In one of the early applications of Stallings’ approach, Gersten provided an alternative geometric proof of Hanna Neumann’s inequal-ity . A free factor of a free group FA can be defined as a subgroup H generated by a subset of a basis of FA. This is equivalent to saying that there exists a free product decomposition FA = H ∗K for some K ⩽FA. Since the rank of a free factor never exceeds the rank of the ambient free group, it is easy to construct examples of subgroups which are not free factors: it follows easily from Proposition 2.6 that any free group of rank ⩾2 can have subgroups of arbitrary finite rank (and even infinite countable). The problem of identifying free factors has a simple solution based on Stallings au-tomata : one must check whether or not a prescribed number of vertex identifications in the Stallings automaton can lead to a bouquet. However, the most efficient solution, due to Roig, Ventura and Weil , involves Whitehead automorphisms and will therefore be postponed to §23.2.7. Given a morphism ϕ : A →B of inverse automata, let the morphic image ϕ(A) be the subautomaton of B induced by the image by ϕ of all the successful paths of A. The following classical result characterizes the extensions of H ⩽f.g. FA contained in FA. We present the proof from : Theorem 2.14 (Takahasi ). Given H ⩽f.g. FA, one can effectively compute finitely many extensions K1, . . . , Km ⩽f.g. FA of H such that the following conditions are equiv-alent for every K ⩽f.g. FA: (i) H ⩽K; (ii) Ki is a free factor of K for some i ∈{1, . . . , m}. Proof. Let A1, . . . , Am denote all the morphic images of S(H), up to isomorphism. Since a morphic image cannot have more vertices than the original automaton, there are only finitely many isomorphism classes. Moreover, it follows from Proposition 2.12 that, for i = 1, . . . , m, Ai = S(Ki) for some Ki ⩽f.g. FA. Since L(S(H)) ⊆L(Ai) = L(S(Ki)), it follows from Proposition 2.5 that H ⩽Ki. Clearly, we can construct all Ai and therefore all Ki. (i) ⇒(ii). If H ⩽K, it follows from Stallings’ construction that L(S(H)) ⊆ L(S(K)) and so there is a morphism ϕ : S(H) →S(K) by Proposition 2.2. Let Ai Rational subsets of groups 13 be, up to isomorphism, the morphic image of S(H) through ϕ. Since Ai = S(Ki) is a subautomaton of S(K), it follows easily from Proposition 2.6 that Ki is a free factor of K: it suffices to take a spanning tree for S(Ki), extend it to a spanning tree for S(K), and the induced basis of Ki will be contained in the induced basis of K. (ii) ⇒(i) is immediate. An interesting research line related to this result is built on the concept of algebraic extension, introduced by Kapovich and Miasnikov , and inspired by the homonymous field-theoretical classical notion. Given H ⩽K ⩽FA, we say that K is an algebraic extension of H if no proper free factor of K contains H. Miasnikov, Ventura and Weil proved that the set of algebraic extensions of H is finite and effectively computable, and it constitutes the minimum set of extensions K1, . . . , Km satisfying the conditions of Theorem 2.14. Consider a subgroup H of a group G. The commensurator of H in G, is CommG(H) = {g ∈G | H ∩Hg has finite index in H and Hg}. (2.4) For example, the commensurator of GLn(Z) in GLn(R) is GLn(Q). The special case of finite-index extensions, H ⩽f.i. 
K ⩽FA is of special interest, and can be interpreted in terms of commensurators. It can be proved (see [20, Lemma 8.7] and ) that every H ⩽f.g. FA has a maximum finite-index extension inside FA, denoted by Hfi; and Hfi = CommFA(H). Silva and Weil proved that S(Hfi) can be constructed from S(H) using a simple automata-theoretic algorithm: (1) The standard minimization algorithm is applied to the core of S(H), taking all vertices as final. (2) The original tail of S(H) is subsequently reinstated in this new automaton, at the appropriate vertex. We present now an application of different type, involving transition monoids. It follows easily from the definitions that the transition monoid of a finite inverse automaton is always a finite inverse monoid. Given a group G, we say that a subgroup H ⩽G is pure if the implication gn ∈H ⇒g ∈H (2.5) holds for all g ∈FA and n ⩾1. If p is a prime, we say that H is p-pure if (2.5) holds when (n, p) = 1. The next result is due to Birget, Margolis, Meakin and Weil, and is the only natural problem among applications of Stallings automata that is known so far to be PSPACE-complete . Proposition 2.15. For every H ⩽f.g. FA, the following conditions hold: (i) H is pure if and only if the transition monoid of S(H) is aperiodic. (ii) H is p-pure if and only if the transition monoid of S(H) has no subgroups of order p. Proof. Both conditions in (i) are easily proved to be equivalent to the nonexistence in 14 L. Bartholdi, P. V. Silva S(H) of a cycle of the form p q (k ⩾1, p ̸= q) u uk where u can be assumed to be cyclically reduced. The proof of (ii) runs similarly. 2.6 Topological properties We require for this subsection some basic topological concepts, which the reader can recover from Chapter 17. For all u, v ∈FA, written in reduced form as elements of RA, let u ∧v denote the longest common prefix of u and v. The prefix metric d on FA is defined, for all u, v ∈FA, by d(u, v) =  2−|u∧v|−1 if u ̸= v 0 if u = v It follows easily from the definition that d is an ultrametric on FA, satisfying in particular the axiom d(u, v) ⩽max{d(u, w), d(w, v)} . The completion of this metric space is compact; its extra elements are infinite reduced words a1a2a3 . . . , with all ai ∈e A, and constitute the hyperbolic boundary ∂FA of FA, see §24.1.5. Extending the operator ∧to FA ∪∂FA in the obvious way, it follows easily from the definitions that, for every infinite reduced word α and every sequence (un)n in FA, α = lim n→+∞un if and only if lim n→+∞|α ∧un| = +∞. (2.6) The next result shows that Stallings automata are given a new role in connection with the prefix metric. We denote by cl H the closure of H in the completion of FA. Proposition 2.16. If H ⩽f.g. FA, then cl H is the union of H with the set of all α ∈∂FA that label paths in S(H) out of the basepoint. Proof. Since the topology of FA is discrete, we have cl H ∩FA = H. (⊆): If α ∈∂FA does not label a path in S(H) out of the basepoint, then {|α ∧h| : h ∈H} is finite and so no sequence of H can converge to α by (2.6). (⊇): Let α = a1a2a3 · · · ∈∂FA, with ai ∈e A, label a path in S(H) out of the basepoint. Let m be the number of vertices of S(H). For every n ⩾1, there exists some word wn of length < m such that a1 · · · anwn ∈H. Now α = limn→+∞a1 · · · anwn by (2.6) and so α ∈cl H. The profinite topology on FA is defined in Chapter 17: for every u ∈FA, the collection {Ku | K ⩽f.i. FA} constitutes a basis of clopen neighbourhoods of u. 
In his seminal 1983 paper , Stallings gave an alternative proof of Marshall Hall’s Theorem: Rational subsets of groups 15 Theorem 2.17 (M. Hall). Every finitely generated subgroup of FA is closed for the profi-nite topology. Proof. Fix H ⩽f.g. FA and let u ∈FA \ H be written in reduced form as an element of RA. In view of Proposition 2.5, u does not label a loop at the basepoint q0 of S(H). If there is no path q0 u − →· · · in S(H), we add new edges to S(H) to get a finite inverse automaton A having a path q0 u − →q ̸= q0. Otherwise just take A = S(H). Next add new edges to A to get a finite complete inverse automaton B. In view of Propositions 2.8 and 2.12, we have B = S(K) for some K ⩽f.i. FA. Hence Ku is open and contains u. Since H ∩Ku ̸= ∅yields u ∈K−1H = K, contradicting Proposition 2.5, it follows that H ∩Ku = ∅and so H is closed as claimed. Example 2.6. We consider the above construction for H = ⟨a−1ba, ba2⟩and u = b2: q0 S(H) = b a b a q0 A = b a b a b q0 B = a b a b b a b a If we take the spanning tree defined by the dotted lines in B, it follows from Proposi-tion 2.6 that K = ⟨ba−1, b3, b2ab−2, ba2, baba−1b−1⟩ is a finite index subgroup of F2 such that H ∩Kb2 = ∅. We recall that a group G is residually finite if its finite index subgroups have trivial intersection. Considering the trivial subgroup in Theorem 2.17, we deduce Corollary 2.18. FA is residually finite. We remark that Ribes and Zalessky extended Theorem 2.17 to products of finitely many finitely generated subgroups of FA, see . This result is deeply connected to the solution of Rhodes’ Type II conjecture, see [37, Chapter 4]. If V denotes a pseudovariety of finite groups (see Chapter 16), the pro-V topology on FA is defined by considering that each u ∈FA has {Ku | K f.i. FA, FA/K ∈V} 16 L. Bartholdi, P. V. Silva as a basis of clopen neighbourhoods. The closure for the pro-V topology of H ⩽f.g FA can be related to an extension property of S(H), and Margolis, Sapir and Weil used automata to prove that efficient computation can be achieved for the pseudovarieties of finite p-groups and finite nilpotent groups . The original computability proof for the p-group case is due to Ribes and Zalessky . 2.7 Dynamical properties We shall mention briefly some examples of applications of Stallings automata to the study of endomorphism dynamics, starting with Gersten’s solution of the subgroup orbit prob-lem . The subgroup orbit problem consists in finding an algorithm to decide, for given H, K ⩽f.g. FA, whether or not K = ϕ(H) for some automorphism ϕ of FA. Equivalently, this can be described as deciding whether or not the automorphic orbit of a finitely generated subgroup is recursive. Gersten’s solution adapts to the context of Stallings automata Whitehead’s idea to solve the orbit problem for words . Whitehead’s proof relies on a suitable decom-position of automorphisms as products of elementary factors (which became known as Whitehead automorphisms), and on using these as a tool to compute the elements of min-imum length in the automorphic orbit of the word. In the subgroup case, word length is replaced by the number of vertices of the Stallings automaton. The most efficient solution to the problem of identifying free factors , mentioned in §23.2.5, also relies on this approach: H ⩽f.g. FA is a free factor if and only if the Stallings automaton of some automorphic image of H has a single vertex (that is, a bou-quet). Another very nice application is given by the following theorem of Goldstein and Turner : Theorem 2.19. 
The fixed point subgroup of an endomorphism of FA is finitely generated. Proof. Let ϕ be an endomorphism of FA. For every u ∈FA, define Q(u) = ϕ(u)u−1. We define a potentially infinite automaton A by taking {Q(u) | u ∈FA} ⊆FA as the vertex set, all edges of the form Q(u) a − →Q(au) with u ∈FA, a ∈e A, and fixing 1 as the basepoint. Then A is a well-defined inverse automaton. Next we take B to be the subautomaton of A obtained by retaining only those vertices and edges that lie in successful paths labelled by reduced words. Clearly, B is still an inverse automaton, and it is easy to check that it must be the Stallings automaton of the fixed point subgroup of ϕ. It remains to be proved that B is finite. We define a subautomaton C of B by removing exactly one edge among each inverse pair Q(u) a − →Q(au), Q(au) a−1 − − →Q(u) with a ∈A as follows: if a−1 is the last letter of Q(au), we remove Q(u) a − →Q(au); otherwise, we remove Q(au) a−1 − − →Q(u). Rational subsets of groups 17 Let M denote the maximum length of the image of a letter by ϕ. We claim that, whenever |Q(v)| > 2M, the vertex Q(v) has outdegree at most 1. Indeed, if Q(v) a−1 − − →Q(a−1v) is an edge in C for a ∈A, then a−1 is the last letter of Q(v). On the other hand, if Q(v) b − →Q(bv) is an edge in C for b ∈A, then b−1 is not the last letter of Q(bv). Since Q(bv) = ϕ(b)Q(v)b−1 and |Q(v)| > 2|ϕ(b)|, then b must be the last letter of Q(v) in this case. Since Q(v) has at most one last letter, it follows that its outdegree is at most 1. Let D be a finite subautomaton of C containing all vertices Q(v) such that |Q(v)| ⩽ 2M. Suppose that p− →q is an edge in C not belonging to D. Since p− →q, being an edge of B, must lie in some reduced path, and by the outdegree property of C, it is easy to see that there exists some path in C of the form p′− →p− →q− →r← −r′ where p′, r′ are vertices in D. Since there are only finitely many directed paths out of D, it follows that C is finite and so is B. Therefore the fixed point subgroup of ϕ is finitely generated. Note that this proof is not by any means constructive. Indeed, the only known al-gorithm for computing the fixed point subgroup of a free group automorphism is due to Maslakova and relies on the sophisticated train track theory of Bestvina and Han-del and other algebraic geometry tools. The general endomorphism case remains open. Stallings automata were also used by Ventura in the study of various properties of fixed subgroups, considering in particular arbitrary families of endomorphisms [57, 30] (see also ). Automata also play a part in the study of infinite fixed points, taken over the continuous extension of a monomorphism to the hyperbolic boundary (see for example ). 3 Rational and recognizable subsets Rational subsets generalize the notion of finitely generated from subgroups to arbitrary subsets of a group, and can be quite useful in establishing inductive procedures that need to go beyond the territory of subgroups. Similarly, recognizable subsets extend the notion of finite index subgroups. Basic properties and results can be found in or . We consider a finitely generated group G = ⟨A⟩, with the canonical map π : FA →G. A subset of G is rational if it is the image by ρ = πθ of a rational subset of e A∗, and is recognizable if its full preimage under ρ is rational in e A∗. For every group G, the classes Rat G and Rec G satisfy the following closure proper-ties: • Rat G is (effectively) closed under union, product, star, morphisms, inversion, sub-group generating. 
• Rec G is (effectively) closed under boolean operations, translation, product, star, inverse morphisms, inversion, subgroup generating. 18 L. Bartholdi, P. V. Silva Kleene’s Theorem is not valid for groups: Rat G = Rec G if and only if G is finite. However, if the class of rational subsets of G possesses some extra algorithmic properties, then many decidability/constructibility results can be deduced for G. Two properties are particularly coveted for Rat G: • (effective) closure under complement (yielding closure under all the boolean oper-ations); • decidable membership problem for arbitrary rational subsets. In these cases, one may often solve problems (e.g. equations, or systems of equations) whose statement lies far out of the rational universe, by proving that the solution is a rational set. 3.1 Rational and recognizable subgroups We start by some basic, general facts. The following result is essential to connect language theory to group theory. Theorem 3.1 (Anisimov and Seifert). A subgroup H of a group G is rational if and only if H is finitely generated. Proof. (⇒): Let H be a rational subgroup of G and let π : FA →G denote a morphism. Then there exists a finite e A-automaton A such that H = ρ(L(A)). Assume that A has m vertices and let X consist of all the words in ρ−1(H) of length < 2m. Since A is finite, so is X. We claim that H = ⟨ρ(X)⟩. To prove it, it suffices to show that u ∈L(A) ⇒ρ(u) ∈⟨ρ(X)⟩ (3.1) holds for every u ∈e A∗. We use induction on |u|. By definition of X, (3.1) holds for words of length < 2m. Assume now that |u| ⩾2m and (3.1) holds for shorter words. Write u = vw with |w| = m. Then there exists a path →q0 v − →q z − →t → in A with |z| < m. Thus vz ∈L(A) and by the induction hypothesis ρ(vz) ∈⟨ρ(X)⟩. On the other hand, |z−1w| < 2m and ρ(z−1w) = ρ(z−1v−1)ρ(vw) ∈H, hence z−1w ∈ X and so ρ(u) = ρ(vz)ρ(z−1w) ∈⟨ρ(X)⟩, proving (3.1) as required. (⇐) is trivial. It is an easier task to characterize recognizable subgroups: Proposition 3.2. A subgroup H of a group G is recognizable if and only if it has finite index. Proof. (⇒): In general, a recognizable subset of G is of the form NX, where N f.i. G and X ⊆G is finite. If H = NX is a subgroup of G, then N ⊆H and so H has finite index as well. (⇐): This follows from the well-known fact that every finite index subgroup H of G contains a finite index normal subgroup N of G, namely N = T g∈G gHg−1. Since N has finite index, H must be of the form NX for some finite X ⊆G. Rational subsets of groups 19 3.2 Benois’ Theorem The central result in this subsection is Benois’ Theorem, the cornerstone of the whole theory of rational subsets of free groups: Theorem 3.3 (Benois). (i) If L ⊆e A∗is rational, then L is also rational, and can be effectively constructed from L. (ii) A subset of RA is a rational language as a subset of e A∗if and only if it is rational as a subset of FA. We illustrate this in the case of finitely generated subgroups: temporarily calling “Benois automata” those automata recognizing rational subsets of RA, we may convert them to Stallings automata by “folding” them, at the same time making sure they are in-verse automata. Given a Stallings automaton, one intersects it with RA to obtain a Benois automaton. Proof. (i) Let A = (Q, e A, E, I, T) be a finite automaton recognizing L. We define a sequence (An)n of finite automata with ε-transitions as follows. Let A0 = A. 
Assuming that An = (Q, e A, En, I, T) is defined, we consider all instances of ordered pairs (p, q) ∈ Q × Q such that there exists a path p aa−1 − − →q in An for some a ∈e A, but no path p 1 − →q. (P) Clearly, there are only finitely many instances of (P) in An. We define En+1 to be the union of En with all the new edges (p, 1, q), where (p, q) ∈Q × Q is an instance of (P). Finally, we define An+1 = (Q, e A, En+1, I, T). In particular, note that An = An+k for every k ⩾1 if there are no instances of (P) in An. Since Q is finite, the sequence (An)n is ultimately constant, say after reaching Am. We claim that L = L(Am) ∩RA . (3.2) Indeed, take u ∈L. There exists a sequence of words u = u0, u1, . . . , uk−1, uk = u where each term is obtained from the preceding one by erasing a factor of the form aa−1 for some a ∈e A. A straightforward induction shows that ui ∈L(Ai) for i = 0, . . . , k, since the existence of a path p aa−1 − − →q in Ai implies the existence of a path p 1 − →q in Ai+1. Hence u = uk ∈L(Ak) ⊆L(Am) and it follows that L ⊆L(Am) ∩RA. For the opposite inclusion, we start by noting that any path p u − →q in Ai+1 can be lifted to a path p v − →q in Ai, where v is obtained from u by inserting finitely many factors of the form aa−1. It follows that L(Am) = L(Am−1) = · · · = L(A0) = L and so L(Am) ∩RA ⊆L(Am) = L. Thus (3.2) holds. Since RA = e A∗\ [ a∈e A e A∗aa−1 e A∗ is obviously rational, and the class of rational languages is closed under intersection, it follows that L is rational. Moreover, we can effectively compute the automaton Am and 20 L. Bartholdi, P. V. Silva a finite automaton recognizing RA, hence the direct product construction can be used to construct a finite automaton recognizing the intersection L = L(Am) ∩RA. (ii) Consider X ⊆RA. If X ∈Rat e A∗, then θ(X) ∈Rat FA and so X is rational as a subset of FA. Conversely, if X is rational as a subset of FA, then X = θ(L) for some L ∈Rat e A∗. Since X ⊆RA, we get X = L. Now part (i) yields L ∈Rat e A∗and so X ∈Rat e A∗as required. Example 3.1. Let A = A0 be depicted by a a b b a−1 b−1 We get A1 = a a 1 b b a−1 b−1 A2 = A3 = a a 1 1 b b a−1 b−1 and we can then proceed to compute L = L(A2) ∩R2. The following result summarizes some of the most direct consequences of Benois’ Theorem: Corollary 3.4. (i) FA has decidable rational subset membership problem. (ii) Rat FA is closed under the boolean operations. Proof. (i) Given X ∈Rat FA and u ∈FA, write X = θ(L) for some L ∈Rat e A∗. Then u ∈X if and only if u ∈X = L. By Theorem 3.3(i), we may construct a finite automaton recognizing L and therefore decide whether or not u ∈L. (ii) Given X ∈Rat FA, we have FA \ X = RA \ X and so FA \ X ∈Rat FA by Theorem 3.3. Therefore Rat FA is closed under complement. Since Rat FA is trivially closed under union, it follows from De Morgan’s laws that it is closed under intersection as well. Note that we can associate algorithms to these boolean closure properties of Rat FA in a constructive way. We remark also that the proof of Theorem 3.3 can be clearly adapted to more general classes of rewriting systems (see ). Theorem 3.3 and Corollary 3.4 have Rational subsets of groups 21 been generalized several times by Benois herself and by S´ enizergues, who obtained the most general versions. S´ enizergues’ results hold for rational length-reducing left basic confluent rewriting systems and remain valid for the more general notion of controlled rewriting system. 
3.3 Rational versus recognizable Since FA is a finitely generated monoid, it follows that every recognizable subset of FA is rational [5, Proposition III.2.4]. We turn to the problem of deciding which ra-tional subsets of FA are recognizable. The first proof, using rewriting systems, is due to S´ enizergues but we follow the shorter alternative proof from , where a third alternative proof, of a more combinatorial nature, was also given. Given a subset X of a group G, we define the right stabilizer of X to be the submonoid of G defined by R(X) = {g ∈G | Xg ⊆X} . Next let K(X) = R(X) ∩(R(X))−1 = {g ∈G | Xg = X} be the largest subgroup of G contained in R(X) and let N(X) = \ g∈G gK(X)g−1 be the largest normal subgroup of G contained in K(X), and therefore in R(X). Lemma 3.5. A subset X of a group G is recognizable if and only if K(X) is a finite index subgroup of G. In fact, the Schreier graph (see §24.1) of K(X)\G is the underlying graph of an automaton recognizing X, and G/N(X) is the syntactic monoid of X. Proof. (⇒): If X ⊆G is recognizable, then X = NF for some N f.i. G and F ⊆G finite. Hence N ⊆R(X) and so N ⊆K(X) since N ⩽G. Since N has finite index in G, so does K(X). (⇐): If K(X) is a finite index subgroup of G, so is N = N(X). Indeed, a finite index subgroup has only finitely many conjugates (having also finite index) and a finite intersection of finite index subgroups is easily checked to have finite index itself. Therefore it suffices to show that X = FN for some finite subset F of G. Since N has finite index, the claim follows from XN = X, in turn an immediate consequence of N ⊆R(X). Proposition 3.6. It is decidable whether or not a rational subset of FA is recognizable. Proof. Take X ∈Rat FA. In view of Lemma 3.5 and Proposition 2.8, it suffices to show that K(X) is finitely generated and effectively computable. Given u ∈FA, we have u / ∈R(X) ⇔Xu ̸⊆X ⇔Xu ∩(FA \ X) ̸= ∅⇔u ∈X−1(FA \ X), 22 L. Bartholdi, P. V. Silva hence R(X) = FA \ (X−1(FA \ X)) . It follows easily from the fact that the class of rational languages is closed under reversal and morphisms, combined with Theorem 3.3(ii), that X−1 ∈Rat FA. Since Rat FA is trivially closed under product, it follows from Corollary 3.4 that R(X) is rational and effectively computable, and so is K(X) = R(X) ∩(R(X))−1. By Theorem 3.1, the subgroup K(X) is finitely generated and the proof is complete. These results are related to the Sakarovitch conjecture , which states that every rational subset of FA must be either recognizable or disjunctive: a subset X of a monoid M is disjunctive if it has trivial syntactic congruence, or equivalently, if any morphism ϕ : M →M ′ recognizing X is necessarily injective. In the group case, it follows easily from the proof of the direct implication of Lemma 3.5 that the projection G →G/N recognizes X ⊆G if and only if N ⊆N(X). Thus X is disjunctive if and only if N(X) is the trivial subgroup. The Sakarovitch conjecture was first proved in , but once again we follow the shorter alternative proof from : Theorem 3.7 (S´ enizergues). A rational subset of FA is either recognizable or disjunctive. Proof. Since the only subgroups of Z are the trivial subgroup and finite index subgroups, we may assume that #A > 1. Take X ∈Rat FA. By the proof of Proposition 3.6, the subgroup K(X) is finitely generated. In view of Lemma 3.5, we may assume that K(X) is not a finite index sub-group. Thus S(K(X)) is not complete by Proposition 2.8. Let q0 denote the basepoint of S(K(X)). 
Since S(K(X)) is not complete, q0 · u is undefined for some reduced word u. Let w be an arbitrary nonempty reduced word. We must show that w / ∈N(X). Suppose otherwise. Since u, w are reduced and #A > 1, there exist enough letters to make sure that there is some word v ∈RA such that uvwv−1u−1 is reduced. Now w ∈N(X), hence uvwv−1u−1 ∈N(X) ⊆K(X) by normality. Since uvwv−1u−1 is reduced, it follows from Proposition 2.5 that uvwv−1u−1 labels a loop at q0 in S(K(X)), contradicting q0 · u being undefined. Thus w / ∈N(X) and so N(X) = 1. Therefore X is disjunctive as required. 3.4 Beyond free groups Let π : FA ↠G be a morphism onto a group G. We consider the word problem sub-monoid of a group G, defined as Wπ(G) = (πθ)−1(1). (3.3) Proposition 3.8. The language Wπ(G) is rational if and only if G is finite. Proof. If G is finite, it is easy to check that Wπ(G) is rational by viewing the Cayley graph of G (see §24.1) as an automaton. Conversely, if Wπ(G) is rational, then π−1(1) is a finitely generated normal subgroup of FA, either finite index or trivial by the proof Rational subsets of groups 23 of Theorem 3.7. It is well known that the Dyck language DA = θ−1(1) is not rational if #A > 0, thus it follows easily that π−1(1) has finite index and therefore G must be finite. How about groups with context-free Wπ(G)? A celebrated result by Muller and Schupp , with a contribution by Dunwoody , relates them to virtually free groups: these are groups with a free subgroup of finite index. As usual, we focus on the case of G being finitely generated. We claim that G has a normal free subgroup FA of finite index, with A finite. Indeed, letting F be a finite-index free subgroup of G, it suffices to take F ′ = T g∈G gFg−1. Since F has finite index, so does F ′, see the proof of Lemma 3.5. Taking a morphism π : FB →G with B finite, we get from Corollary 2.9 that π−1(F ′) ⩽f.i. FB is finitely generated, so F ′ is itself finitely generated. Finally, F ′ is a subgroup of F, so F ′ is still free by Theorem 2.7, and we can write F ′ = FA. We may therefore decompose G as a finite disjoint union of the form G = FAb0 ∪FAb1 ∪· · · ∪FAbm, with b0 = 1. (3.4) Theorem 3.9 (Muller & Schupp). The language Wπ(G) is context-free if and only if G is virtually free. Sketch of proof. If G is virtually free, the rewriting system implicit in (3.4) provides a rational transduction between Wπ(G) and DA. The converse implication can be proved by arguing geometrical properties of the Cay-ley graph of G such as in Chapter 24; briefly said, one deduces from the context-freeness of Wπ(G) that the Cayley graph of G is close (more precisely, quasi-isometric) to a tree. It follows that virtually free groups have decidable word problem. In Chapter 24, we shall discuss the word problem for more general classes of groups using other techniques. Grunschlag proved that every rational (respectively recognizable) subset of a virtually free group G decomposed as in (3.4) admits a decomposition as a finite union X0b0 ∪ · · ·∪Xmbm, where the Xi are rational (respectively recognizable) subsets of FA, see . Thus basic results such as Corollary 3.4 or Proposition 3.6 can be extended to virtually free groups (see [18, 47]). Similar generalizations can be obtained for free abelian groups of finite rank . 
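Before moving on to graph groups, here is a small sketch of the saturation step used in the proof of Theorem 3.3, which is the computational core of Corollary 3.4(i) for FA and, via the decompositions just described, of its virtually free generalizations. The NFA encoding, the function names and the lowercase/uppercase convention for a, a⁻¹ are ours.

```python
from collections import defaultdict

def inv(x):
    return x.lower() if x.isupper() else x.upper()

def eps_closure(eps, states):
    """States reachable from `states` by ε-edges (reflexive-transitive closure)."""
    seen, stack = set(states), list(states)
    while stack:
        p = stack.pop()
        for q in eps[p]:
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return seen

def saturate(trans):
    """Add an ε-edge p -> q whenever there is a path p --x--> r --ε*--> s --x⁻¹--> q, until stable."""
    eps = defaultdict(set)
    changed = True
    while changed:
        changed = False
        for (p, x), targets in trans.items():
            for r in targets:
                for s in eps_closure(eps, [r]):
                    for q in trans.get((s, inv(x)), ()):
                        if q not in eps[p]:
                            eps[p].add(q)
                            changed = True
    return eps

def accepts_reduced(trans, eps, initials, finals, reduced_word):
    """Membership of a reduced word in the image of L(A) in FA (Theorem 3.3 / Corollary 3.4(i))."""
    current = eps_closure(eps, initials)
    for x in reduced_word:
        step = set()
        for p in current:
            step |= set(trans.get((p, x), ()))
        current = eps_closure(eps, step)
    return bool(current & set(finals))

# Example: the language a b* a⁻¹ over Ã, whose image in F_2 is {a bⁿ a⁻¹ : n ⩾ 0}.
trans = {(0, "a"): {1}, (1, "b"): {1}, (1, "A"): {2}}
eps = saturate(trans)
print(accepts_reduced(trans, eps, [0], [2], ""))      # True: a·a⁻¹ reduces to the identity
print(accepts_reduced(trans, eps, [0], [2], "abA"))   # True
print(accepts_reduced(trans, eps, [0], [2], "b"))     # False
```

Intersecting the saturated automaton with the rational set of reduced words, as in the proof, yields an automaton for the set of reduced forms of L, from which the boolean closure properties of Corollary 3.4(ii) follow effectively.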
The fact that the strong properties of Corollary 3.4 hold for both free groups and free abelian groups suggests considering the case of graph groups (also known as free partially abelian groups or right-angled Artin groups), where we admit partial commutation between letters. An independence graph is a finite undirected graph (A, I) with no loops, that is, I is a symmetric anti-reflexive relation on A. The graph group G(A, I) is the quotient FA/∼, where ∼ denotes the congruence generated by the relation {(ab, ba) | (a, b) ∈ I}. At the two extremes, we have FA = G(A, ∅) and the free abelian group on A, which corresponds to the complete graph on A. These turn out to be particular cases of transitive forests. We say that (A, I) is a transitive forest if it has no induced subgraph isomorphic to the 4-cycle C4 or to the 4-vertex path P4. We recall that an induced subgraph of (A, I) is formed by a subset of vertices A′ ⊆ A together with all the edges in I connecting vertices from A′. The following difficult theorem, a group-theoretic version of a result on trace monoids by Aalbersberg and Hoogeboom, was proved by Lohrey and Steinberg:

Theorem 3.10 (Lohrey & Steinberg). Let (A, I) be an independence graph. Then G(A, I) has decidable rational subset membership problem if and only if (A, I) is a transitive forest.

They also proved that these conditions are equivalent to decidability of the membership problem for finitely generated submonoids. Such a 'bad' G(A, I) gives an example of a finitely presented group with a decidable generalized word problem that does not have a decidable membership problem for finitely generated submonoids. It follows from Theorem 3.10 that any group containing a direct product of two free monoids has undecidable rational subset membership problem, a fact that can also be deduced directly from the undecidability of the Post correspondence problem.

Other positive results on rational subsets have been obtained for graphs of groups, HNN extensions and amalgamated free products by Kambites, Silva and Steinberg, and by Lohrey and Sénizergues. Lohrey and Steinberg recently proved that the rational subset membership problem is recursively equivalent to the finitely generated submonoid membership problem for groups with two or more ends. With respect to closure under complement, Lohrey and Sénizergues proved that the class of groups for which the rational subsets form a Boolean algebra is closed under HNN extensions and amalgamated products over finite groups. On the negative side, Bazhenova proved that the rational subsets of a finitely generated nilpotent group do not form a Boolean algebra unless the group is virtually abelian. Moreover, Roman′kov proved, via a reduction from Hilbert's 10th problem, that the rational subset membership problem is undecidable for free nilpotent groups of any class ⩾ 2 of sufficiently large rank. Last but not least, we should mention that Stallings' construction was successfully generalized to prove results on both graph groups (by Kapovich, Miasnikov and Weidmann) and amalgamated free products of finite groups (by Markus-Epstein).

3.5 Rational solution sets and rational constraints

In this final subsection we make a brief incursion into the brave new world of rational constraints. Rational subsets provide group theorists with two main assets:

• A concept which generalizes finite generation for subgroups and is much more fit to withstand most induction procedures.
• A systematic way of looking for solutions of the right type in the context of equations of many sorts.

This second feature leads us to the notion of rational constraint, where we restrict the set of potential solutions to some rational subset. And there is a particular combination of circumstances that can ensure the success of this strategy: if Rat G is closed under intersection and we can prove that the solution set of a problem P is an effectively computable rational subset of G, then we can solve problem P under any rational constraint.

An early example is the adaptation by Margolis and Meakin of Rabin's language and Rabin's tree theorem to free groups, where first-order formulae provide rational solution sets. The logic language considered here is meant to be applied to words, seen as models, and consists basically of unary predicates that associate letters to positions in each word, as well as a binary predicate for position ordering. Margolis and Meakin used this construction to solve problems in combinatorial inverse semigroup theory.

Diekert, Gutierrez and Hagenah proved that the existential theory of systems of equations with rational constraints is solvable over a free group. Working basically over a free monoid with involution, and adapting Plandowski's approach in the process, they extended the classical result of Makanin to include rational constraints, with much lower complexity as well. The proof of this deep result is well out of scope here, but its potential applications are immense. Group theorists are only starting to discover its full strength.

These results can be used to extend the existential theory of equations with rational constraints to virtually free groups, a result that follows also from Dahmani and Guirardel's recent paper on equations over hyperbolic groups with quasi-convex rational constraints. Equations over graph groups with a restricted class of rational constraints were also successfully considered by Diekert and Lohrey.

A somewhat exotic example of computation of a rational solution set arises in the problem of determining which automorphisms of F2 (if any) carry a given word into a given finitely generated subgroup. The full solution set is recognized by a finite automaton; its vertices are themselves structures named "finite truncated automata".

References

[1] I. J. Aalbersberg and H. J. Hoogeboom. Characterizations of the decidability of some problems for regular trace languages. Math. Systems Theory, 22:1–19, 1989.
[2] Algebraic Cryptography Center. CRAG – the Cryptography and Groups Software Library, 2010.
[3] G. A. Bazhenova. On rational sets in finitely generated nilpotent groups. Algebra and Logic, 39(4):215–223, 2000. Translated from Algebra i Logika, 39:379–394, 2000.
[4] M. Benois. Descendants of regular language in a class of rewriting systems: algorithm and complexity of an automata construction. In Rewriting techniques and applications, volume 256 of Lecture Notes in Comput. Sci., pages 121–132. Springer-Verlag, 1987.
[5] J. Berstel. Transductions and context-free languages. B. G. Teubner, 1979.
[6] J. Berstel, C. De Felice, D. Perrin, C. Reutenauer, and G. Rindone. Bifix codes and Sturmian words. Preprint, 2010. arXiv.org/pdf/1011.5369v2.
[7] M. Bestvina and M. Handel. Train tracks and automorphisms of free groups. Ann. Math., 135:1–51, 1992.
[8] J.-C. Birget, S. W. Margolis, J. C. Meakin, and P. Weil. PSPACE-complete problems for subgroups of free groups and inverse finite automata. Theoret. Comput. Sci., 242(1-2):247–281, 2000.
[9] R. V. Book and F. Otto. String-Rewriting Systems. Springer-Verlag, 1993.
[10] F. Dahmani and V. Guirardel. Foliations for solving equations in groups: free, virtually free, and hyperbolic groups. J. Topology, 3(2):343–404, 2010.
[11] V. Diekert, C. Gutierrez, and C. Hagenah. The existential theory of equations with rational constraints in free groups is PSPACE-complete. Inform. Comput., 202(2):105–140, 2005.
[12] V. Diekert and M. Lohrey. Word equations over graph products. Internat. J. Algebra Comput., 18(3):493–533, 2008.
[13] M. J. Dunwoody. The accessibility of finitely presented groups. Invent. Math., 81(3):449–457, 1985.
[14] The GAP Group. GAP – Groups, Algorithms, and Programming, Version 4.4.12, 2008.
[15] S. M. Gersten. Intersections of finitely generated subgroups of free groups and resolutions of graphs. Inventiones Math., 71(3):567–591, 1983.
[16] S. M. Gersten. On Whitehead's algorithm. Bull. Amer. Math. Soc., 10(2):281–284, 1984.
[17] R. Z. Goldstein and E. C. Turner. Fixed subgroups of homomorphisms of free groups. Bull. Lond. Math. Soc., 18(5):468–470, 1986.
[18] Z. Grunschlag. Algorithms in geometric group theory. PhD thesis, University of California at Berkeley, 1999.
[19] M. Kambites, P. V. Silva, and B. Steinberg. On the rational subset problem for groups. J. Algebra, 309(2):622–639, 2007.
[20] I. Kapovich and A. Myasnikov. Stallings foldings and subgroups of free groups. J. Algebra, 248(2):608–668, 2002.
[21] I. Kapovich, R. Weidmann, and A. Miasnikov. Foldings, graphs of groups and the membership problem. Internat. J. Algebra Comput., 15(1):95–128, 2005.
[22] M. Lohrey and G. Sénizergues. Rational subsets in HNN-extensions and amalgamated products. Internat. J. Algebra Comput., 18(1):111–163, 2008.
[23] M. Lohrey and B. Steinberg. The submonoid and rational subset membership problems for graph groups. J. Algebra, 320(2):728–755, 2008.
[24] M. Lohrey and B. Steinberg. Submonoids and rational subsets of groups with infinitely many ends. To appear in J. Algebra, 2009.
[25] G. S. Makanin. Equations in a free group. Math. USSR Izv., 21:483–546, 1983. Translated from Izv. Akad. Nauk. SSR, Ser. Math., 46:1199–1273, 1983.
[26] S. W. Margolis and J. C. Meakin. Free inverse monoids and graph immersions. Internat. J. Algebra Comput., 3(1):79–99, 1993.
[27] S. W. Margolis and J. C. Meakin. Inverse monoids, trees and context-free languages. Trans. Amer. Math. Soc., 335(1):259–276, 1993.
[28] S. W. Margolis, M. V. Sapir, and P. Weil. Closed subgroups in pro-V topologies and the extension problem for inverse automata. Internat. J. Algebra Comput., 11(4):405–446, 2001.
[29] L. Markus-Epstein. Stallings foldings and subgroups of amalgams of finite groups. Internat. J. Algebra Comput., 17(8):1493–1535, 2007.
[30] A. Martino and E. Ventura. Fixed subgroups are compressed in free groups. Commun. Algebra, 32(10):3921–3935, 2004.
[31] O. S. Maslakova. The fixed point group of a free group automorphism. Algebra and Logic, 42:237–265, 2003. Translated from Algebra i Logika, 42:422–472, 2003.
[32] A. Miasnikov, E. Ventura, and P. Weil. Algebraic extensions in free groups. In Geometric group theory, Trends Math., pages 225–253. Birkhäuser, 2007.
[33] D. E. Muller and P. E. Schupp. Groups, the theory of ends, and context-free languages. J. Comput. System Sci., 26(3):295–310, 1983.
[34] H. Neumann. On the intersection of finitely generated free groups. Addendum. Publ. Math. (Debrecen), 5:128, 1957.
[35] W. Plandowski. Satisfiability of word equations with constants is in PSPACE. In Proc. 40th Ann. Symp. Found. Comput. Sci., pages 495–500. IEEE Press, 1999.
[36] K. Reidemeister. Fundamentalgruppe und Überlagerungsräume. Nachrichten Göttingen, pages 69–76, 1928.
[37] J. Rhodes and B. Steinberg. The q-theory of finite semigroups. Springer-Verlag, 2009.
[38] L. Ribes and P. A. Zalesskii. On the profinite topology on a free group. Bull. Lond. Math. Soc., 25:37–43, 1993.
[39] L. Ribes and P. A. Zalesskii. The pro-p topology of a free group and algorithmic problems in semigroups. Internat. J. Algebra Comput., 4(3):359–374, 1994.
[40] A. Roig, E. Ventura, and P. Weil. On the complexity of the Whitehead minimization problem. Internat. J. Algebra Comput., 17(8):1611–1634, 2007.
[41] V. Roman'kov. On the occurrence problem for rational subsets of a group. In V. Roman'kov, editor, International Conference on Combinatorial and Computational Methods in Mathematics, pages 76–81, 1999.
[42] J. Sakarovitch. Syntaxe des langages de Chomsky, essai sur le déterminisme. PhD thesis, Université Paris VII, 1979.
[43] J. Sakarovitch. Éléments de théorie des automates. Vuibert, 2003.
[44] G. Sénizergues. Some decision problems about controlled rewriting systems. Theoret. Comput. Sci., 71(3):281–346, 1990.
[45] G. Sénizergues. On the rational subsets of the free group. Acta Informatica, 33(3):281–296, 1996.
[46] J.-P. Serre. Arbres, amalgames, SL2. Société Mathématique de France, 1977. Avec un sommaire anglais; rédigé avec la collaboration de Hyman Bass; Astérisque 46.
[47] P. V. Silva. Recognizable subsets of a group: finite extensions and the abelian case. Bull. European Assoc. Theor. Comput. Sci., 77:195–215, 2002.
[48] P. V. Silva. Free group languages: rational versus recognizable. RAIRO Inform. Théor. App., 38(1):49–67, 2004.
[49] P. V. Silva. Fixed points of endomorphisms over special confluent rewriting systems. Monatsh. Math., 161(4):417–447, 2010.
[50] P. V. Silva and P. Weil. On an algorithm to decide whether a free group is a free factor of another. RAIRO Inform. Théor. App., 42:395–414, 2008.
[51] P. V. Silva and P. Weil. Automorphic orbits in free groups: words versus subgroups. Internat. J. Algebra Comput., 20(4):561–590, 2010.
[52] P. V. Silva and P. Weil. On finite-index extensions of subgroups of free groups. J. Group Theory, 13(3):365–381, 2010.
[53] C. C. Sims. Computation with finitely presented groups. Cambridge University Press, 1994.
[54] J. R. Stallings. Topology of finite graphs. Inventiones Math., 71(3):551–565, 1983.
[55] M. Takahasi. Note on chain conditions in free groups. Osaka J. Math., 3(2):221–225, 1951.
[56] W. M. Touikan. A fast algorithm for Stallings' folding process. Internat. J. Algebra Comput., 16(6):1031–1046, 2006.
[57] E. Ventura. On fixed subgroups of maximal rank. Commun. Algebra, 25(10):3361–3375, 1997.
[58] E. Ventura. Fixed subgroups of free groups: a survey. Contemporary Math., 296:231–255, 2002.
[59] J. H. C. Whitehead. On equivalent sets of elements in a free group. Ann. of Math. (2), 37(4):782–800, 1936.

Abstract. This chapter is devoted to the study of rational subsets of groups, with particular emphasis on the automata-theoretic approach to finitely generated subgroups of free groups. Indeed, Stallings' construction, associating a finite inverse automaton with every such subgroup, inaugurated a complete rewriting of free group algorithmics, with connections to other fields such as topology or dynamics.
Another important vector in the chapter is the fundamental Benois' Theorem, characterizing rational subsets of free groups. The theorem and its consequences really explain why language theory can be successfully applied to the study of free groups. Rational subsets of (free) groups can play a major role in proving statements (a priori unrelated to the notion of rationality) by induction. The chapter also includes related results for more general classes of groups, such as virtually free groups or graph groups.
198
Published Time: 2012-09-24T10:30:16+00:00
modular arithmetic with large numbers | mathgarage
===============
modular arithmetic with large numbers
Posted: September 24, 2012 in Mathematics
Tags: arithmetic, chinese remainder theorem, math, modular arithmetic, number theory

In elementary school we learn division, and quite often when a number is divided by a divisor we do not get the answer exactly, without a remainder. For example, when the number 14 is divided by 3, the quotient is going to be 4 with the remainder 2, because of the following:

14 = 3 × 4 + 2.

Now if we translate this into modular arithmetic, it becomes:

14 ≡ 2 (mod 3).

Now this seems easy, everyone can do it. But what if someone asks you to find the remainder when some enormous power is divided by 15? If you do it the way a grade 4 student does, you will first of all need to compute the monster itself, which is quite impossible for any mortal to calculate by hand, or even with a scientific calculator. And not to mention dividing that huge number by 15, which will probably take another lifetime to do by hand. Fortunately, there is a way to compute all that stuff: reduce the base modulo 15 first, then reduce the powers step by step, so that only small numbers ever get multiplied. Now this seems easy, isn't it? I am sure everyone can do it.

Let's do something more interesting. I saw this problem last week on a forum: find the last five digits of 32141^43210. Ok, this is an interesting one, so I shall give two different ways to solve it for the readers. Notice that this is the same as asking to find 32141^43210 (mod 100000).

<< Method 1 >>

In this solution, I am going to apply Euler's Theorem and the Chinese Remainder Theorem. The reason the Chinese Remainder Theorem is necessary here is that the number 100000 isn't prime.
And we can factor 100000 into two numbers which are relatively prime to each other as

100000 = 2^5 × 5^5 = 32 × 3125,

and then we find their Euler phi functions: φ(32) = 16 and φ(3125) = 2500.

By Euler's Theorem, if gcd(a, n) = 1, then a^φ(n) ≡ 1 (mod n).

Since 2 and 5 do not divide 32141, the theorem applies to both factors. Working modulo 32, we may reduce the exponent modulo φ(32) = 16 and the base modulo 32: since 43210 ≡ 10 (mod 16) and 32141 ≡ 13 (mod 32), we have

32141^43210 ≡ 13^10 (mod 32).

Here we can use "the power of 2" trick (repeated squaring) to reduce this: 13^2 ≡ 9, 13^4 ≡ 81 ≡ 17 and 13^8 ≡ 17^2 ≡ 1 (mod 32), therefore

32141^43210 ≡ 13^8 × 13^2 ≡ 9 (mod 32).

Similarly, working modulo 3125 we reduce the exponent modulo φ(3125) = 2500 and the base modulo 3125: since 43210 ≡ 710 (mod 2500) and 32141 ≡ 891 (mod 3125), we have

32141^43210 ≡ 891^710 (mod 3125).

Now we apply the Chinese Remainder Theorem (see link for explanation): we need the unique residue modulo 100000 which is congruent to 9 modulo 32 and to the value above modulo 3125, and for that we need to solve two auxiliary congruences, one giving the inverse of 3125 modulo 32 and the other the inverse of 32 modulo 3125. I will do the first one and leave the second for the reader: apply the Euclidean Algorithm to 3125 and 32, then work it backward to write 1 as a combination of the two numbers; the coefficient of 3125 is the inverse we want. The second congruence is handled similarly.

Remark: There is also an easier way here: since 7 and 32 are relatively prime, we can divide both sides of the congruence by 7.

<< Method 2 >>

This method involves the Binomial Theorem, which is faster in this case. First, we use the trick from Method 1 to reduce the power 43210 to 3210: since φ(100000) = 40000 and neither 2 nor 5 divides 32141, Euler's Theorem gives 32141^40000 ≡ 1 (mod 100000), hence

32141^43210 ≡ 32141^3210 (mod 100000).

Now apply the Binomial Theorem: splitting the base into a round part and a small remainder, every term of the expansion containing a high enough power of the round part is divisible by 100000, so only a handful of terms of the expansion survive modulo 100000.
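For readers who want to double-check the arithmetic, here is a short sketch in Python (my choice of language, not the blog's) that recomputes the answer two ways: directly with the built-in three-argument pow, and along the lines of Method 1, splitting the modulus into 32 × 3125 and recombining with the Chinese Remainder Theorem. The inverse computation pow(a, -1, m) requires Python 3.8 or later.

```python
# A quick mechanical check of Method 1.
# pow(a, e, m) does fast modular exponentiation, and pow(a, -1, m)
# returns the inverse of a modulo m (Python 3.8+).

base, exp, mod = 32141, 43210, 100000

# Direct computation of 32141^43210 mod 100000, i.e. the last five digits.
direct = pow(base, exp, mod)

# Method 1: work modulo the coprime factors 32 and 3125 separately.
# Since gcd(base, 32) = gcd(base, 3125) = 1, Euler's Theorem lets us reduce
# the exponent modulo phi(32) = 16 and phi(3125) = 2500 respectively.
r1 = pow(base % 32, exp % 16, 32)        # congruence class modulo 32
r2 = pow(base % 3125, exp % 2500, 3125)  # congruence class modulo 3125

# Chinese Remainder Theorem: the unique x mod 100000 with
# x = r1 (mod 32) and x = r2 (mod 3125).
x = (r1 * 3125 * pow(3125, -1, 32) + r2 * 32 * pow(32, -1, 3125)) % mod

print(direct, x, direct == x)  # the two computations agree
```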
199
graph theory - How can a parallel edge be identified? - Mathematics Stack Exchange
===============

How can a parallel edge be identified?

Asked 12 years, 2 months ago. Modified 12 years, 2 months ago. Viewed 3k times.

An edge will have the same vertices as another edge that it is parallel to, so how can it be uniquely described?

graph-theory

asked May 31, 2013 at 0:07 by Ellen

Comment: "An edge will have the same vertices as another edge that it is parallel to" - no it won't. – Nathaniel Bubis, May 31, 2013 at 0:18

2 Answers

Answer (score 3):

I would prefer to use a variant of the second definition offered by Zev. A multigraph has a set of vertices V and a set of edges E. The connection between vertices and edges can be described by a relation on V × E, such that each edge is incident with one or two vertices (just one if you're not inclined to allow loops). Or it can be described by a function that assigns to each edge either one vertex or an unordered pair of vertices.

One place where multigraphs force themselves on us is when we consider duals of plane graphs. So we need to work with embeddings of multigraphs, and in this situation the multiset approach would be a little awkward. My suspicion is that working graph theorists would not usually use the multiset definition (but I have not carried out a survey).

answered May 31, 2013 at 0:51 by Chris Godsil
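To illustrate the incidence-function formulation in this answer, here is a small Python sketch (the names are mine, purely illustrative): each edge identifier is mapped to the set of vertices it is incident with, a one-element set for a loop and a two-element set otherwise, so parallel edges are told apart simply by having different identifiers.

```python
# An undirected multigraph given by an incidence function:
# each edge identifier maps to the frozenset of its endpoints
# (a singleton for a loop, a pair otherwise).

V = {"a", "b", "c"}
inc = {
    "e1": frozenset({"a", "b"}),   # one a-b edge
    "e2": frozenset({"a", "b"}),   # a second, parallel a-b edge
    "e3": frozenset({"c"}),        # a loop at c
}

def are_parallel(e1, e2):
    """Distinct edges with the same endpoints are parallel."""
    return e1 != e2 and inc[e1] == inc[e2]

print(are_parallel("e1", "e2"))  # True
print(are_parallel("e1", "e3"))  # False
```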
Answer (score 2):

For an undirected multigraph, we simply have a multiset of edges, not a set. Thus, if V = {a, b, c}, we might have

E = {[(a,b), 2], [(a,c), 3]},

which specifies that there are two edges connecting vertices a and b, and three edges connecting vertices a and c. We still can't technically distinguish "individual edges" between the same vertices; all we know is how many edges there are between any two vertices. However, for many purposes, that's all that's really necessary.

If you want to point to individual edges, you could take the approach typically used for directed multigraphs: we take a quadruple (V, E, s, t), where V is the set of vertices, E is the set of edges, s : E → V is the function taking an edge to the vertex it starts at (its "source") and t : E → V is the function taking an edge to the vertex it ends at (its "target"). Then edges e1, e2 ∈ E are parallel with the same direction if s(e1) = s(e2) and t(e1) = t(e2), and parallel with opposite directions if s(e1) = t(e2) and t(e1) = s(e2).

answered May 31, 2013 at 0:21 by Zev Chonoles
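As a concrete illustration of the quadruple (V, E, s, t) described above, here is a small Python sketch (the data and the helper functions are my own, purely illustrative): edges are arbitrary identifiers, and the two maps s and t record where each edge starts and ends, so two parallel edges are distinguished simply by having different identifiers.

```python
# A directed multigraph as a quadruple (V, E, s, t):
# edges are plain identifiers, and s/t map each edge to its source and target.

V = {"a", "b", "c"}
E = {"e1", "e2", "e3", "e4"}
s = {"e1": "a", "e2": "a", "e3": "a", "e4": "c"}   # source of each edge
t = {"e1": "b", "e2": "b", "e3": "c", "e4": "a"}   # target of each edge

def parallel_same_direction(e1, e2):
    """Parallel with the same direction: same source and same target."""
    return s[e1] == s[e2] and t[e1] == t[e2]

def parallel_opposite_direction(e1, e2):
    """Parallel with opposite directions: each starts where the other ends."""
    return s[e1] == t[e2] and t[e1] == s[e2]

print(parallel_same_direction("e1", "e2"))      # True: two distinct a->b edges
print(parallel_opposite_direction("e3", "e4"))  # True: a->c and c->a
```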