123400
5.3: The Fundamental Theorem of Calculus Basics

In the previous two sections, we looked at the definite integral and its relationship to the area under the curve of a function. Unfortunately, so far, the only tools we have available to calculate the value of a definite integral are geometric area formulas and limits of Riemann sums, and both approaches are extremely cumbersome. In this section we look at some more powerful and useful techniques for evaluating definite integrals. These new techniques rely on the relationship between differentiation and integration. This relationship was discovered and explored by both Sir Isaac Newton and Gottfried Wilhelm Leibniz (among others) during the late 1600s and early 1700s, and it is codified in what we now call the Fundamental Theorem of Calculus, which has two parts that we examine in this section. Its very name indicates how central this theorem is to the entire development of calculus.

Isaac Newton's contributions to mathematics and physics changed the way we look at the world. The relationships he discovered, codified as Newton's laws and the law of universal gravitation, are still taught as foundational material in physics today, and his calculus has spawned entire fields of mathematics. To learn more, read a brief biography of Newton with multimedia clips.

Fundamental Theorem of Calculus, Part 1: Integrals and Antiderivatives

The Fundamental Theorem of Calculus is an extremely powerful theorem that establishes the relationship between differentiation and integration, and gives us a way to evaluate definite integrals without using Riemann sums or calculating areas. The theorem consists of two parts, the first of which, the Fundamental Theorem of Calculus, Part 1, is stated here. Part 1 establishes the relationship between differentiation and integration.

Fundamental Theorem of Calculus, Part 1: If $f(x)$ is continuous over an interval $[a,b]$, and the function $F(x)$ is defined by
$$F(x)=\int_a^x f(t)\,dt,$$
then $F'(x)=f(x)$ over $[a,b]$.

A couple of subtleties are worth mentioning here. First, a comment on the notation. Note that we have defined a function, $F(x)$, as the definite integral of another function, $f(t)$, from the point $a$ to the point $x$. At first glance, this is confusing, because we have said several times that a definite integral is a number, and here it looks like it's a function. The key here is to notice that for any particular value of $x$, the definite integral is a number. So the function $F(x)$ returns a number (the value of the definite integral) for each value of $x$.

Second, it is worth commenting on some of the key implications of this theorem. There is a reason it is called the Fundamental Theorem of Calculus. Not only does it establish a relationship between integration and differentiation, but it also guarantees that a large class of functions has antiderivatives. Specifically, it guarantees that any continuous function has an antiderivative.

Example 5.3.3: Finding a Derivative with the Fundamental Theorem of Calculus

Use the Fundamental Theorem of Calculus, Part 1, to find the derivative of
$$g(x)=\int_1^x \frac{1}{t^3+1}\,dt.$$

Solution: According to the Fundamental Theorem of Calculus, the derivative is given by
$$g'(x)=\frac{1}{x^3+1}.$$

Exercise 5.3.3

Use the Fundamental Theorem of Calculus, Part 1, to find the derivative of
$$g(r)=\int_0^r \sqrt{x^2+4}\,dx.$$

Hint: Follow the procedure from Example 5.3.3 to solve the problem.

Answer: $g'(r)=\sqrt{r^2+4}$
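To see Part 1 of the theorem in action numerically, the following sketch (assuming a Python environment with SciPy available; the text itself does not use any software) approximates $F(x)=\int_1^x \frac{dt}{t^3+1}$ from Example 5.3.3 with `scipy.integrate.quad`, differentiates it with a central difference, and compares the result with $f(x)=\frac{1}{x^3+1}$.

```python
from scipy.integrate import quad

def f(t):
    return 1.0 / (t**3 + 1.0)

def F(x):
    # F(x) = definite integral of f from 1 to x
    value, _ = quad(f, 1.0, x)
    return value

h = 1e-5
for x in [1.5, 2.0, 3.0]:
    dF = (F(x + h) - F(x - h)) / (2 * h)  # central-difference estimate of F'(x)
    print(f"x = {x}: numerical F'(x) = {dF:.6f}, f(x) = {f(x):.6f}")
```

The two printed values should agree to several decimal places, which is exactly what Part 1 asserts.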
Example 5.3.4: Using the Fundamental Theorem and the Chain Rule to Calculate Derivatives

Let $F(x)=\int_1^{\sqrt{x}} \sin t\,dt$. Find $F'(x)$.

Solution: Letting $u(x)=\sqrt{x}$, we have $F(x)=\int_1^{u(x)} \sin t\,dt$. Thus, by the Fundamental Theorem of Calculus and the chain rule,
$$F'(x)=\sin(u(x))\,\frac{du}{dx}=\sin(u(x))\cdot\frac{1}{2}x^{-1/2}=\frac{\sin\sqrt{x}}{2\sqrt{x}}.$$

Exercise 5.3.4

Let $F(x)=\int_1^{x^3} \cos t\,dt$. Find $F'(x)$.

Hint: Use the chain rule to solve the problem.

Answer: $F'(x)=3x^2\cos x^3$

Example 5.3.5: Using the Fundamental Theorem of Calculus with Two Variable Limits of Integration

Let $F(x)=\int_x^{2x} t^3\,dt$. Find $F'(x)$.

Solution: We have $F(x)=\int_x^{2x} t^3\,dt$. Both limits of integration are variable, so we need to split this into two integrals. We get
$$F(x)=\int_x^{2x} t^3\,dt=\int_x^{0} t^3\,dt+\int_0^{2x} t^3\,dt=-\int_0^{x} t^3\,dt+\int_0^{2x} t^3\,dt.$$
Differentiating the first term, we obtain
$$\frac{d}{dx}\left[-\int_0^{x} t^3\,dt\right]=-x^3.$$
Differentiating the second term, we first let $u(x)=2x$. Then,
$$\frac{d}{dx}\left[\int_0^{2x} t^3\,dt\right]=\frac{d}{dx}\left[\int_0^{u(x)} t^3\,dt\right]=(u(x))^3\,\frac{du}{dx}=(2x)^3\cdot 2=16x^3.$$
Thus,
$$F'(x)=\frac{d}{dx}\left[-\int_0^{x} t^3\,dt\right]+\frac{d}{dx}\left[\int_0^{2x} t^3\,dt\right]=-x^3+16x^3=15x^3.$$

Exercise 5.3.5

Let $F(x)=\int_x^{x^2} \cos t\,dt$. Find $F'(x)$.

Hint: Use the procedure from Example 5.3.5 to solve the problem.

Answer: $F'(x)=2x\cos x^2-\cos x$

Fundamental Theorem of Calculus, Part 2: The Evaluation Theorem

The Fundamental Theorem of Calculus, Part 2, is perhaps the most important theorem in calculus. After tireless efforts by mathematicians for approximately 500 years, new techniques emerged that provided scientists with the necessary tools to explain many phenomena. Using calculus, astronomers could finally determine distances in space and map planetary orbits. Everyday financial problems such as calculating marginal costs or predicting total profit could now be handled with simplicity and accuracy. Engineers could calculate the bending strength of materials or the three-dimensional motion of objects. Our view of the world was forever changed with calculus.

After finding approximate areas by adding the areas of $n$ rectangles, the application of this theorem is straightforward by comparison. It almost seems too simple that the area of an entire curved region can be calculated by just evaluating an antiderivative at the first and last endpoints of an interval.

The Fundamental Theorem of Calculus, Part 2: If $f$ is continuous over the interval $[a,b]$ and $F(x)$ is any antiderivative of $f(x)$, then
$$\int_a^b f(x)\,dx=F(b)-F(a).$$

We often see the notation $F(x)\big|_a^b$ to denote the expression $F(b)-F(a)$. We use this vertical bar and associated limits $a$ and $b$ to indicate that we should evaluate the function $F(x)$ at the upper limit (in this case, $b$), and subtract the value of the function $F(x)$ evaluated at the lower limit (in this case, $a$).

The Fundamental Theorem of Calculus, Part 2 (also known as the evaluation theorem) states that if we can find an antiderivative for the integrand, then we can evaluate the definite integral by evaluating the antiderivative at the endpoints of the interval and subtracting.

Proof: Let $P=\{x_i\},\ i=0,1,\dots,n$, be a regular partition of $[a,b]$. Then, we can write
$$F(b)-F(a)=F(x_n)-F(x_0)=[F(x_n)-F(x_{n-1})]+[F(x_{n-1})-F(x_{n-2})]+\dots+[F(x_1)-F(x_0)]=\sum_{i=1}^n [F(x_i)-F(x_{i-1})].$$
Now, we know $F$ is an antiderivative of $f$ over $[a,b]$, so by the Mean Value Theorem (see The Mean Value Theorem), for $i=1,2,\dots,n$ we can find $c_i$ in $[x_{i-1},x_i]$ such that
$$F(x_i)-F(x_{i-1})=F'(c_i)(x_i-x_{i-1})=f(c_i)\,\Delta x.$$
Then, substituting into the previous equation, we have
$$F(b)-F(a)=\sum_{i=1}^n f(c_i)\,\Delta x.$$
Taking the limit of both sides as $n\to\infty$ (the left side does not depend on $n$, while the right side is a Riemann sum for $f$), we obtain
$$F(b)-F(a)=\lim_{n\to\infty}\sum_{i=1}^n f(c_i)\,\Delta x=\int_a^b f(x)\,dx.\qquad\square$$
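The proof above turns $F(b)-F(a)$ into a limit of Riemann sums. A short numerical sketch (assuming Python with NumPy, which the text does not require) illustrates the same convergence for $f(x)=\sin x$ on $[0,\pi]$, where an antiderivative is $F(x)=-\cos x$ and $F(\pi)-F(0)=2$.

```python
import numpy as np

a, b = 0.0, np.pi
exact = (-np.cos(b)) - (-np.cos(a))   # F(b) - F(a) = 2 for F(x) = -cos x

for n in [10, 100, 1000, 10000]:
    x = np.linspace(a, b, n + 1)           # regular partition of [a, b]
    midpoints = (x[:-1] + x[1:]) / 2       # one sample point c_i in each subinterval
    riemann_sum = np.sum(np.sin(midpoints)) * (b - a) / n
    print(n, riemann_sum, abs(riemann_sum - exact))
```

As $n$ grows, the Riemann sums approach $F(b)-F(a)$, mirroring the limit taken in the proof.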
Example 5.3.6: Evaluating an Integral with the Fundamental Theorem of Calculus

Use the Fundamental Theorem of Calculus, Part 2, to evaluate
$$\int_{-2}^{2}(t^2-4)\,dt.$$

Solution: Recall the power rule for antiderivatives: if $y=x^n$, then
$$\int x^n\,dx=\frac{x^{n+1}}{n+1}+C.$$
Use this rule to find the antiderivative of the function and then apply the theorem. We have
$$\int_{-2}^{2}(t^2-4)\,dt=\left.\left(\frac{t^3}{3}-4t\right)\right|_{-2}^{2}=\left[\frac{(2)^3}{3}-4(2)\right]-\left[\frac{(-2)^3}{3}-4(-2)\right]=\left(\frac{8}{3}-8\right)-\left(-\frac{8}{3}+8\right)=\frac{16}{3}-16=-\frac{32}{3}.$$

Analysis: Notice that we did not include the "+ C" term when we wrote the antiderivative. The reason is that, according to the Fundamental Theorem of Calculus, Part 2, any antiderivative works. So, for convenience, we chose the antiderivative with $C=0$. If we had chosen another antiderivative, the constant term would have canceled out. This always happens when evaluating a definite integral. The region whose area we just calculated is depicted in Figure 5.3.3. Note that the region between the curve and the x-axis is all below the x-axis. Area is always positive, but a definite integral can still produce a negative number (a net signed area). For example, if this were a profit function, a negative number indicates the company is operating at a loss over the given interval.

Figure 5.3.3: The evaluation of a definite integral can produce a negative value, even though area is always positive.

Example 5.3.7: Evaluating a Definite Integral Using the Fundamental Theorem of Calculus, Part 2

Evaluate the following integral using the Fundamental Theorem of Calculus, Part 2:
$$\int_1^9 \frac{x-1}{\sqrt{x}}\,dx.$$

Solution: First, eliminate the radical by rewriting the integral using rational exponents. Then, separate the numerator terms by writing each one over the denominator:
$$\int_1^9 \frac{x-1}{x^{1/2}}\,dx=\int_1^9\left(\frac{x}{x^{1/2}}-\frac{1}{x^{1/2}}\right)dx.$$
Use the properties of exponents to simplify:
$$\int_1^9\left(\frac{x}{x^{1/2}}-\frac{1}{x^{1/2}}\right)dx=\int_1^9\left(x^{1/2}-x^{-1/2}\right)dx.$$
Now, integrate using the power rule:
$$\int_1^9\left(x^{1/2}-x^{-1/2}\right)dx=\left.\left(\frac{x^{3/2}}{3/2}-\frac{x^{1/2}}{1/2}\right)\right|_1^9=\left[\frac{(9)^{3/2}}{3/2}-\frac{(9)^{1/2}}{1/2}\right]-\left[\frac{(1)^{3/2}}{3/2}-\frac{(1)^{1/2}}{1/2}\right]=\left[\frac{2}{3}(27)-2(3)\right]-\left[\frac{2}{3}(1)-2(1)\right]=18-6-\frac{2}{3}+2=\frac{40}{3}.$$
See Figure 5.3.4.

Figure 5.3.4: The area under the curve from $x=1$ to $x=9$ can be calculated by evaluating a definite integral.

Exercise 5.3.6

Use the Fundamental Theorem of Calculus, Part 2, to evaluate $\int_1^2 x^{-4}\,dx$.

Hint: Use the power rule.

Answer: $\frac{7}{24}$

Example 5.3.8: A Roller-Skating Race

James and Kathy are racing on roller skates. They race along a long, straight track, and whoever has gone the farthest after 5 sec wins a prize. If James can skate at a velocity of $f(t)=5+2t$ ft/sec and Kathy can skate at a velocity of $g(t)=10+\cos\left(\frac{\pi}{2}t\right)$ ft/sec, who is going to win the race?

Solution: We need to integrate both functions over the interval $[0,5]$ and see which value is bigger. We are using $\int_0^5 v(t)\,dt$ to find the distance traveled over 5 seconds. For James, we want to calculate
$$\int_0^5 (5+2t)\,dt.$$
Using the power rule, we have
$$\int_0^5 (5+2t)\,dt=\left.(5t+t^2)\right|_0^5=25+25=50.$$
Thus, James has skated 50 ft after 5 sec. Turning now to Kathy, we want to calculate
$$\int_0^5 \left[10+\cos\left(\frac{\pi}{2}t\right)\right]dt.$$
We know $\sin t$ is an antiderivative of $\cos t$, so it is reasonable to expect that an antiderivative of $\cos\left(\frac{\pi}{2}t\right)$ would involve $\sin\left(\frac{\pi}{2}t\right)$. However, when we differentiate $\sin\left(\frac{\pi}{2}t\right)$, we get $\frac{\pi}{2}\cos\left(\frac{\pi}{2}t\right)$ as a result of the chain rule, so we have to account for this additional coefficient when we integrate. We obtain
$$\int_0^5 \left[10+\cos\left(\frac{\pi}{2}t\right)\right]dt=\left.\left(10t+\frac{2}{\pi}\sin\left(\frac{\pi}{2}t\right)\right)\right|_0^5=\left(50+\frac{2}{\pi}\right)-\left(0+\frac{2}{\pi}\sin 0\right)\approx 50.6.$$
Kathy has skated approximately 50.6 ft after 5 sec. Kathy wins, but not by much!

Exercise 5.3.7

Suppose James and Kathy have a rematch, but this time the official stops the contest after only 3 sec. Does this change the outcome?

Hint: Change the limits of integration from those in Example 5.3.8.

Answer: Kathy still wins, but by a much larger margin: James skates 24 ft in 3 sec, but Kathy skates approximately 29.3634 ft in 3 sec.
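As a quick numerical cross-check of Example 5.3.8 and Exercise 5.3.7, the following sketch (assuming Python with NumPy and SciPy, which the text does not require) integrates both velocity functions over $[0,5]$ and $[0,3]$.

```python
import numpy as np
from scipy.integrate import quad

james = lambda t: 5 + 2 * t                    # James's velocity, ft/sec
kathy = lambda t: 10 + np.cos(np.pi * t / 2)   # Kathy's velocity, ft/sec

for T in (5, 3):
    d_james, _ = quad(james, 0, T)   # distance = integral of velocity
    d_kathy, _ = quad(kathy, 0, T)
    print(f"after {T} sec: James {d_james:.4f} ft, Kathy {d_kathy:.4f} ft")
# Expected: about 50 vs 50.6 ft after 5 sec, and 24 vs 29.36 ft after 3 sec
```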
A Parachutist in Free Fall

Julie is an avid skydiver. She has more than 300 jumps under her belt and has mastered the art of making adjustments to her body position in the air to control how fast she falls. If she arches her back and points her belly toward the ground, she reaches a terminal velocity of approximately 120 mph (176 ft/sec). If, instead, she orients her body with her head straight down, she falls faster, reaching a terminal velocity of 150 mph (220 ft/sec).

Figure 5.3.5: Skydivers can adjust the velocity of their dive by changing the position of their body during the free fall. (credit: Jeremy T. Lock)

Since Julie will be moving (falling) in a downward direction, we assume the downward direction is positive to simplify our calculations. Julie executes her jumps from an altitude of 12,500 ft. After she exits the aircraft, she immediately starts falling at a velocity given by $v(t)=32t$. She continues to accelerate according to this velocity function until she reaches terminal velocity. After she reaches terminal velocity, her speed remains constant until she pulls her ripcord and slows down to land.

On her first jump of the day, Julie orients herself in the slower "belly down" position (terminal velocity is 176 ft/sec). Using this information, answer the following questions.

1. How long after she exits the aircraft does Julie reach terminal velocity?
2. Based on your answer to question 1, set up an expression involving one or more integrals that represents the distance Julie falls after 30 sec (one possible numerical setup is sketched after this list).
3. If Julie pulls her ripcord at an altitude of 3000 ft, how long does she spend in a free fall?
4. Julie pulls her ripcord at 3000 ft. It takes 5 sec for her parachute to open completely and for her to slow down, during which time she falls another 400 ft. After her canopy is fully open, her speed is reduced to 16 ft/sec. Find the total time Julie spends in the air, from the time she leaves the airplane until the time her feet touch the ground.
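For questions 1 and 2 in the belly-down case, one possible way to organize the computation is sketched below (assuming Python with SciPy; the numbers follow directly from $v(t)=32t$ and the 176 ft/sec terminal velocity stated above, and this is only an illustrative setup, not the official solution).

```python
from scipy.integrate import quad

v_terminal = 176.0            # belly-down terminal velocity, ft/sec
t1 = v_terminal / 32.0        # question 1: time to reach terminal velocity, sec

# Question 2: distance fallen in 30 sec =
#   integral of 32t from 0 to t1, plus constant 176 ft/sec for the remaining time
d_accel, _ = quad(lambda t: 32.0 * t, 0.0, t1)
d_const = v_terminal * (30.0 - t1)
print(f"t1 = {t1} sec, distance after 30 sec = {d_accel + d_const} ft")
```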
On Julie's second jump of the day, she decides she wants to fall a little faster and orients herself in the "head down" position. Her terminal velocity in this position is 220 ft/sec. Answer these questions based on this velocity:

1. How long does it take Julie to reach terminal velocity in this case?
2. Before pulling her ripcord, Julie reorients her body in the "belly down" position so she is not moving quite as fast when her parachute opens. If she begins this maneuver at an altitude of 4000 ft, how long does she spend in a free fall before beginning the reorientation?

Some jumpers wear "wingsuits" (see Figure 5.3.6). These suits have fabric panels between the arms and legs and allow the wearer to glide around in a free fall, much like a flying squirrel. (Indeed, the suits are sometimes called "flying squirrel suits.") When wearing these suits, terminal velocity can be reduced to about 30 mph (44 ft/sec), allowing the wearers a much longer time in the air. Wingsuit flyers still use parachutes to land; although the vertical velocities are within the margin of safety, horizontal velocities can exceed 70 mph, much too fast to land safely.

Figure 5.3.6: The fabric panels on the arms and legs of a wingsuit work to reduce the vertical velocity of a skydiver's fall. (credit: Richard Schneider)

Answer the following question based on the velocity in a wingsuit: If Julie dons a wingsuit before her third jump of the day, and she pulls her ripcord at an altitude of 3000 ft, how long does she get to spend gliding around in the air?

Key Concepts

The Mean Value Theorem for Integrals states that for a continuous function over a closed interval, there is a value $c$ such that $f(c)$ equals the average value of the function.
The Fundamental Theorem of Calculus, Part 1, shows the relationship between the derivative and the integral.
The Fundamental Theorem of Calculus, Part 2, is a formula for evaluating a definite integral in terms of an antiderivative of its integrand. The total area under a curve can be found using this formula.

Key Equations

Mean Value Theorem for Integrals: If $f(x)$ is continuous over an interval $[a,b]$, then there is at least one point $c\in[a,b]$ such that
$$f(c)=\frac{1}{b-a}\int_a^b f(x)\,dx.$$

Fundamental Theorem of Calculus, Part 1: If $f(x)$ is continuous over an interval $[a,b]$, and the function $F(x)$ is defined by $F(x)=\int_a^x f(t)\,dt$, then $F'(x)=f(x)$.

Fundamental Theorem of Calculus, Part 2: If $f$ is continuous over the interval $[a,b]$ and $F(x)$ is any antiderivative of $f(x)$, then $\int_a^b f(x)\,dx=F(b)-F(a)$.

Glossary

fundamental theorem of calculus: the theorem, central to the entire development of calculus, that establishes the relationship between differentiation and integration
fundamental theorem of calculus, part 1: uses a definite integral to define an antiderivative of a function
fundamental theorem of calculus, part 2: (also, evaluation theorem) we can evaluate a definite integral by evaluating the antiderivative of the integrand at the endpoints of the interval and subtracting
mean value theorem for integrals: guarantees that a point $c$ exists such that $f(c)$ is equal to the average value of the function

Contributors: Gilbert Strang (MIT) and Edwin "Jed" Herman (Harvey Mudd) with many contributing authors. This content by OpenStax is licensed with a CC-BY-SA-NC 4.0 license.
123401
On the characterization of p-harmonic functions on the Heisenberg group by mean value properties
Discrete and Continuous Dynamical Systems, 2014, Volume 34, Issue 7: 2779-2793. doi: 10.3934/dcds.2014.34.2779

Fausto Ferrari (1), Qing Liu (2), and Juan Manfredi (2)
1. Dipartimento di Matematica dell'Università di Bologna, Piazza di Porta S. Donato, 5, 40126 Bologna
2. Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260

Received: August 2013. Revised: October 2013. Published: July 2014.

Abstract: We characterize p-harmonic functions in the Heisenberg group in terms of an asymptotic mean value property, where 1 < p < ∞, following the scheme described in for the Euclidean case. The new tool that allows us to consider the subelliptic case is a geometric lemma, Lemma 3.2 below, that relates the directions of the points of maxima and minima of a function on a small subelliptic ball with the unit horizontal gradient of that function.

Keywords: p-Laplacian, Heisenberg group, mean value formulas, viscosity solutions.

Mathematics Subject Classification: Primary: 35J60, 35R03; Secondary: 35J70.

Citation: Fausto Ferrari, Qing Liu, Juan Manfredi. On the characterization of p-harmonic functions on the Heisenberg group by mean value properties. Discrete and Continuous Dynamical Systems, 2014, 34(7): 2779-2793. doi: 10.3934/dcds.2014.34.2779
123402
arXiv:1410.6812v3 [cond-mat.str-el] 25 Mar 2015

Framing Anomaly in the Effective Theory of Fractional Quantum Hall Effect

Andrey Gromov (1), Gil Young Cho (2), Yizhi You (2), Alexander G. Abanov (1,3), and Eduardo Fradkin (2,4)
1. Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
2. Department of Physics and Institute for Condensed Matter Theory, University of Illinois, 1110 W. Green St., Urbana, Illinois 61801-3080, USA
3. Simons Center for Geometry and Physics, Stony Brook University, Stony Brook, NY 11794, USA
4. Kavli Institute for Theoretical Physics, University of California Santa Barbara, CA 93106-4030, USA
(Dated: March 26, 2015)

We consider the geometric part of the effective action for the Fractional Quantum Hall Effect (FQHE). It is shown that accounting for the framing anomaly of the quantum Chern-Simons theory is essential to obtain the correct gravitational linear response functions. In the lowest order in gradients the linear response generating functional includes Chern-Simons, Wen-Zee and gravitational Chern-Simons terms. The latter term has a contribution from the framing anomaly which fixes the value of the thermal Hall conductivity and contributes to the Hall viscosity of the FQH states on a sphere. We also discuss the effects of the framing anomaly on linear responses for non-Abelian FQH states.

Fractional quantum Hall (FQH) states exemplify genuinely new states of matter with long range topological order. These states owe their fascinating properties to the strong interaction between the electrons partially filling one or several Landau levels. Although much is known about general properties of the FQH states, the general problem of strongly interacting electrons in a quantizing magnetic field defies controlled analytical treatment. To date FQH states are the prototype of a topological quantum fluid.

In addition to a quantized topological electromagnetic response, FQH states (as well as general 2+1D topological phases with broken time-reversal symmetry) exhibit other geometric responses such as Hall viscosity [1–4] and thermal Hall conductance [5–7]. These responses can be computed via adiabatic arguments on a torus, directly from the Laughlin wave function on a curved space [8–10], using Chern-Simons gauge theory or the projective parton construction. Placing the FQH states on a curved manifold proved to be a useful tool as it allowed to probe more response functions [12, 13] and, therefore, distinguish FQH states having identical charge response.

Following the elegant construction of Refs. [13, 14] we introduce the effective action of a general Abelian FQH state on a curved space as
$$S = -\frac{1}{4\pi}\int\left[K_{IJ}\,a^I da^J + 2 q_I\,A\,da^I + 2 s_I\,\omega\,da^I\right]. \tag{1}$$
Here we use concise "form notation" so that $A\,da^I \leftrightarrow \epsilon^{\mu\nu\lambda} A_\mu \partial_\nu a^I_\lambda\, d^3x$, etc., and the integration in Eq. (1) is taken over three-dimensional space-time. The theory contains $M$ hydrodynamic gauge fields $a^I$, $I = 1, \dots, M$, coupled to the external electromagnetic vector potential $A_\mu$ and to the external geometry through the Abelian $SO(2)$ spin connection $\omega_\mu$. $K_{IJ}$ is the (symmetric) $M\times M$ K-matrix, $q_I$ is the charge vector and $s_I$ is the spin vector. We will also use bold symbols for matrices and vectors, so that $\mathbf{K}$, $\mathbf{q}$ and $\mathbf{s}$ denote a K-matrix, charge vector and spin vector correspondingly.
The action of Eq. (1) describes interactions of the conserved currents $j^{I\mu} \equiv \frac{1}{2\pi}\epsilon^{\mu\nu\lambda}\partial_\nu a^I_\lambda$ with the external gauge field $A_\mu$ and the background geometry of the spatial manifold parametrized by the Abelian spin connection $\omega_\mu$. Two remarks are in order. First, only the leading terms in the gradient expansion are kept in Eq. (1). The first term in Eq. (1) is the action of a Chern-Simons gauge theory with gauge group $U(1)^M$. This term is independent of the metric and, up to some caveats discussed below, it is the topological part of the action. The higher gradient terms are, of course, present for any FQH state but are suppressed by the gap in the spectrum. Second, as these leading orders are written in terms of differential forms, it is clear that the action Eq. (1) does not depend on the metric of the background other than through the spin connection $\omega$ in the last term of Eq. (1).

The simplest quantum Hall state is described by a $1\times 1$ K-matrix, $K = 1$, with $q = 1$ and $s = 1/2$ as charge and spin "vectors" respectively. It corresponds to the filling factor $\nu = 1$, i.e., spinless (spin polarized) electrons filling up the lowest Landau level. In this case, the action Eq. (1) reduces to
$$S = -\frac{1}{4\pi}\int\left[a\,da + 2A\,da + \omega\,da\right]. \tag{2}$$
To find the electromagnetic and gravitational linear response functions from Eq. (2) one has to integrate out the hydrodynamic gauge field $a_\mu$ to find the generating functional for linear responses. A traditional way of treating this integration is to substitute the solution of the saddle point equation for $a_\mu$ following from Eq. (2), $a_\mu = -A_\mu - \omega_\mu/2$, back into Eq. (2), and obtain
$$S'_{\mathrm{eff}} = \frac{1}{4\pi}\int\left(A + \frac{1}{2}\omega\right)d\left(A + \frac{1}{2}\omega\right). \tag{3}$$
However, Eq. (3) does not agree with the result of a direct computation of the effective action for non-interacting fermions at $\nu = 1$. The result found in Ref. is
$$S_{\mathrm{eff}} = \frac{1}{4\pi}\int\left(A + \frac{1}{2}\omega\right)d\left(A + \frac{1}{2}\omega\right) - \frac{1}{48\pi}\int \omega\,d\omega. \tag{4}$$
Eq. (4) and Eq. (3) differ by the additional gravitational Chern-Simons term.

In a recent publication, three of us used Chern-Simons gauge theory (to represent flux attachment) and projective parton constructions to derive, from microscopic models, the effective actions of the hydrodynamic fields and their coupling to the background geometry. A key ingredient of this construction is that the worldlines of these composite particles are always framed and, as a result, the effective action yields the correct values of the couplings of the Wen-Zee term and of the Hall viscosity. However, a consistent theory of the gravitational Chern-Simons term was lacking. Below we explain that the appearance of this term is not accidental, but is a consequence of a general phenomenon present in the quantum Chern-Simons theory known as the framing anomaly [16, 17]. We will see that the framing anomaly is the key ingredient to obtain a consistent effective theory for all FQH states.

Main results. In this work we generalize the action of Eq. (4) to arbitrary Abelian and non-Abelian FQH states, providing a generating functional in the leading order in derivatives. For general Abelian FQH states coupled to the external electromagnetic field and geometry, defined by Eq. (1), the topological part of the effective action is given by
$$S_{\mathrm{eff}} = S_K + S_{\mathrm{anom}}, \tag{5}$$
$$S_K = \frac{1}{4\pi}\int (\mathbf{q}^T A + \mathbf{s}^T\omega)\,\mathbf{K}^{-1}\,d(\mathbf{q}A + \mathbf{s}\omega), \tag{6}$$
$$S_{\mathrm{anom}} = -\frac{c}{96\pi}\int \mathrm{tr}\left(\Gamma\, d\Gamma + \frac{2}{3}\Gamma^3\right), \tag{7}$$
where we used matrix notation for the K-matrix, spin and charge vectors. The contribution shown in Eq. (7) is the framing anomaly of quantum Chern-Simons theory [16, 17].
The coefficient $c$ is the chiral central charge which, for a general Abelian theory, is equal to
$$c = \mathrm{sgn}\,K, \tag{8}$$
where $\mathrm{sgn}\,K = N_+ - N_-$ is the signature of the K-matrix and $N_\pm$ is the number of positive (negative) eigenvalues of the K-matrix. Then, as it will be explained below, the general formula of Eq. (7) reduces to
$$S_{\mathrm{anom}} = -\frac{c}{48\pi}\int \omega\,d\omega \tag{9}$$
for a particular choice of geometric background. As a check, it is easy to see that Eq. (9) reduces to the last term of Eq. (4) for $K = 1$.

Using the projective parton approach, we can also find the generalization of Eqs. (5)-(7) for non-Abelian FQH states such as $Z_k$ parafermion states. The only yet crucial difference here from the Abelian states is that the central charges of the non-Abelian states are rational fractions, instead of an integer. More precisely, the chiral edge theories of the non-Abelian states are the $G/H$-coset conformal field theories (CFT) whose central charge is $c_{G/H} = c_G - c_H$, where
$$c_G = \frac{k\,\dim(G)}{k + h} \tag{10}$$
is a rational number. In Ref. it was noted that a naive calculation of the gravitational anomaly term of Eq. (9) using the projective parton construction yields an incorrect (integer) value for the chiral central charge. In this Letter, we show that the framing anomaly, which was missing in the work, yields in all cases the correct value of the chiral central charge.

Geometric responses. Let us relate the contribution of the framing anomaly to the effective action to physical observables. We focus here on various geometric response functions. These response functions are known to be of interest in the physics of FQHE and have been studied previously. They include the thermal Hall conductance [5–7] and the Hall viscosity [1–4]. The framing anomaly contribution to the effective action can be considered as the bulk manifestation of the thermal Hall conductance $\kappa_H$ which, for a quantum Hall fluid, is known to be proportional to the chiral central charge of the chiral edge states of the FQH fluid [5–7],
$$\kappa_H = c\,\frac{\pi k_B^2 T}{6}, \tag{11}$$
where $c$ is the central charge. On the other hand, in the presence of the background curvature the gravitational Chern-Simons term also contributes to the Hall viscosity. If the quantum Hall state is on a sphere of constant Ricci curvature $R$, then the Hall viscosity is given by [15, 19]
$$\eta_H = \frac{\bar{s}}{2}\,n - \frac{c}{24}\,\frac{R}{4\pi}. \tag{12}$$
The last term in Eq. (12) is a finite-size correction to the well-known relation $\eta_H = \frac{\bar{s}}{2}\,n$. The appearance of the chiral central charge $c$ is, therefore, very natural. We should note that the gravitational Chern-Simons term does not describe the bulk thermal Hall effect. The latter can be understood as a response to the geometry with temporal torsion and is not topologically protected [21, 22].
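As a small numerical sketch (not from the paper; it assumes Python with NumPy and uses a made-up two-component K-matrix purely for illustration, with $k_B$ set to 1), the signature in Eq. (8) and the thermal Hall coefficient of Eq. (11) can be evaluated as follows.

```python
import numpy as np

# Hypothetical symmetric K-matrix (example only, not taken from the paper)
K = np.array([[3, 2],
              [2, 3]])

eigenvalues = np.linalg.eigvalsh(K)
c = int(np.sum(eigenvalues > 0) - np.sum(eigenvalues < 0))  # c = sgn K = N+ - N-, Eq. (8)
print("chiral central charge c =", c)

# Thermal Hall conductance kappa_H = c * pi * k_B^2 * T / 6, Eq. (11), with k_B = 1
kB, T = 1.0, 1.0
kappa_H = c * np.pi * kB**2 * T / 6
print("kappa_H =", kappa_H)
```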
Framing anomaly. Before proceeding to applications of general results to particular FQH states we give a very brief review of the framing anomaly tailored to our purposes. The integration over the hydrodynamic Chern-Simons gauge field in the action of the type Eq. (2) is done by substituting the solutions of the equations of motion back into the action. While it is true that the stationary phase approximation for the gaussian integral is exact, there is a subtlety that arises when Chern-Simons theory is defined on a curved space. It is well known that the Chern-Simons theory is topological at the classical level, i.e. it does not depend on the metric and has vanishing stress-energy tensor. However, this is not true for the full quantum theory [16, 17]. The reason is that while the action is metric-independent, the path integral measure does depend on the metric in a non-trivial way. Indeed, the definition of the path integral measure $\mathcal{D}A$ requires gauge fixing, which should be defined in a covariant way to avoid dependence of the partition function on the choice of coordinates. For example, the gauge fixing can be done by including an additional gauge fixing term into the action,
$$S_\phi = \int dV\,\phi\, D^\mu A_\mu, \tag{13}$$
with the integration over the auxiliary field $\phi$ included in the path integral. The term Eq. (13) depends on the geometry of the manifold through both the covariant derivative $D_\mu$ and the invariant space-time integration measure $dV$. The term of Eq. (13) is understood as a part of the definition of the integration measure $\mathcal{D}A_\mu$. The dependence of the full partition function $Z$ on the metric of the manifold can be quantified [16, 17]. Consider the partition function of the Chern-Simons theory with arbitrary compact, semi-simple group $G$ at level $k$. Its partition function is given by
$$Z = \int \mathcal{D}A\,\mathcal{D}\phi\, \exp\left\{-i\frac{k}{4\pi}\int_M \mathrm{tr}\left(A\,dA + \frac{2}{3}A^3\right) - iS_\phi\right\} = \tau\,\exp\left\{-i\frac{c}{96\pi}\int_M \mathrm{tr}\left(\Omega\, d\Omega + \frac{2}{3}\Omega^3\right)\right\}, \tag{14}$$
where $\tau$ is the Ray-Singer analytic torsion. The latter is a topological invariant and is not important for the upcoming discussion. The phase of the partition function $Z$ is given by the framing anomaly and $c$ is the chiral central charge given by Eq. (10). In Eq. (14) $\Omega_{ab,\mu}$ is the Levi-Civita $SO(1,2)$ spin connection. We denote it by $\Omega$ to avoid the confusion with the $SO(2)$ spin connection $\omega$ (see below). In this work we are interested in quantum Hall states, which are inherently non-relativistic systems. For this reason we turn off the temporal components of the spin connection, $\Omega_{a0,\mu} = \Omega_{0b,\mu} = 0$, because non-relativistic physical systems generally do not couple to these components. With this choice the $SO(2)$ component of the spin connection $\omega_\mu \equiv \Omega_{12,\mu}$ is precisely the one used in Eq. (1). Then, we obtain
$$\frac{c}{96\pi}\int_M \mathrm{tr}\left(\Omega\, d\Omega + \frac{2}{3}\Omega^3\right) = \frac{c}{48\pi}\int_M \omega\,d\omega. \tag{15}$$

Relation to the gravitational anomaly. Here we emphasize the relation of the framing anomaly to the edge theory of FQHE. The edge theory has a contribution from the gravitational anomaly [25, 26] which can be related to the bulk gravitational Chern-Simons term in the following way. First, let us rewrite the gravitational Chern-Simons term Eq. (15) replacing the $SO(1,2)$ spin connection $\Omega$ by Christoffel symbols as
$$\frac{c}{96\pi}\int \mathrm{tr}\left(\Omega\, d\Omega + \frac{2}{3}\Omega^3\right) = \frac{c}{96\pi}\int \mathrm{tr}\left(\Gamma\, d\Gamma + \frac{2}{3}\Gamma^3\right) - \frac{c}{288\pi}\int \mathrm{tr}\left(e^{-1}de\right)^3. \tag{16}$$
The last term in this relation describes the winding number of the dreibeins $e$ and is irrelevant here since the variations of this term on a closed manifold vanish. The gravitational Chern-Simons term written in terms of Christoffel symbols $\Gamma_{\mu\nu,\rho}$ is not invariant with respect to changes of coordinates in the presence of a boundary and induces the gravitational anomaly of the edge theory. Thus, in general expressions such as Eq. (7), we present the contributions of the framing anomaly in terms of Christoffel symbols to emphasize the relation to the gravitational anomaly and, in turn, to the thermal Hall effect.

Effective action for Abelian FQH states.
The effective action for a general Abelian FQH state can be written as
$$S_{\mathrm{eff}} = \frac{\nu}{4\pi}\int\left((A + \bar{s}\,\omega)\,d(A + \bar{s}\,\omega) + \beta\,\omega\,d\omega\right) - \frac{c}{96\pi}\int \mathrm{tr}\left[\Gamma\, d\Gamma + \frac{2}{3}\Gamma^3\right], \tag{17}$$
where $\nu$ is the filling fraction, $\bar{s}$ is the average orbital spin, $\beta = \nu_s - \nu\bar{s}^2$ is the orbital spin variance, and $\nu_s$ is the "spin filling fraction", given by
$$\nu = \mathbf{q}^T \mathbf{K}^{-1}\mathbf{q}, \qquad \nu\bar{s} = \mathbf{q}^T \mathbf{K}^{-1}\mathbf{s}, \qquad \nu_s = \mathbf{s}^T \mathbf{K}^{-1}\mathbf{s}. \tag{18}$$
For the Laughlin series at the filling $\nu = \frac{1}{2r+1}$ we have
$$\bar{s} = r + \frac{1}{2}, \qquad \beta = 0, \qquad c = 1. \tag{19}$$
The K-matrix for the Jain series can be found in Ref. . For the Jain series at the filling $\nu = \frac{p}{2rp\pm 1}$ (with $p, r \in \mathbb{Z}$ and $p \ge 1$, $r \ge 1$) we have
$$\bar{s} = \pm r + \frac{p}{2}, \qquad \beta = \pm\frac{p(p^2-1)}{12}, \qquad c = 1 \pm (p-1). \tag{20}$$
The relations Eqs. (19)-(20) can be derived through the flux attachment procedure [11, 30] or by the projective parton construction. One can use Eqs. (19)-(20) to compute the Hall viscosity and thermal Hall conductivity from Eqs. (11)-(12).

Non-Abelian states. In the following we will derive the effective action for the non-Abelian $Z_k$ Read-Rezayi (RR) parafermion states at filling $\nu = \frac{k}{Mk+2}$. While the problem of deriving the bulk effective theory for a generic non-Abelian gapped FQH state is not solved, the answer for a variety of different states can be obtained through the parton construction [18, 31]. The effective bulk theory for the non-Abelian $Z_k$ Read-Rezayi parafermion states at filling $\nu = \frac{k}{Mk+2}$ is given by the $(U(M)\times Sp(2k))_1$ Chern-Simons theory and the $U(1)^{2k+M}_1$ Abelian theory,
$$S = \frac{1}{4\pi}\int \mathrm{tr}\left[a\,da + \frac{2}{3}a^3 + \omega\,da\right] - \frac{1}{4\pi}\int \mathrm{tr}\left[b\,db + 2(QA + S\omega)\,db\right], \tag{21}$$
where $Q = \frac{1}{Mk+2}\,\mathrm{diag}(1_{2k},\, k\times 1_M)$ and $S = \frac{1}{2}\,1_{2k+M}$ are $(2k+M)\times(2k+M)$ charge and spin matrices. There are $2k+M$ hydrodynamic $U(1)$ gauge fields $b$ and one non-Abelian $U(M)\times Sp(2k)$ field $a$. In the second line of Eq. (21) we have coupled the bulk theory to the external electromagnetic field and geometry as in Eq. (1) (see ). In Eq. (21) we have essentially used the coset construction of . Note that the introduction of the Abelian fields $b$ does not change the degeneracy on the higher genus surfaces because the corresponding K-matrix is unity. Integration over the low energy degrees of freedom implies the universal effective action Eq. (17) with the filling factor, the average orbital spin, and the orbital spin variance given by
$$\nu = \mathrm{Tr}\,Q^2 = \frac{k}{Mk+2}, \tag{22}$$
$$\bar{s} = \nu^{-1}\,\mathrm{Tr}\,QS = \frac{M+2}{2}, \tag{23}$$
$$\beta = 0. \tag{24}$$
The chiral central charge $c$ of the boundary $U(1)^{2k+M}_1/(U(M)\times Sp(2k))_1$ coset CFT is given by
$$c = c_{U(1)^{2k+M}_1} - c_{U(M)_1} - c_{Sp(2k)_1} = \frac{3k}{k+2}, \tag{25}$$
which is the correct value of the central charge of the edge states of the RR parafermion states.

Note: In this version of the paper we have added the correct versions of Eq. (21) and Eq. (24), which are incorrect in the original posting of the paper and in the published version. The reasons for the change are given explicitly in the Erratum provided in the end of the manuscript.

Conclusions. We have derived the effective action for arbitrary FQH states on a curved manifold. It turned out to be very important that quantum Chern-Simons theory depends on the metric through the measure of the functional integral. This metric dependence ultimately leads to an additional gravitational Chern-Simons term in the effective action that fixes the value of thermal Hall conductivity and the "finite size" correction to the Hall viscosity. We have derived the effective action for the various abelian states and also found complete agreement with previously known results. A.G.
is grateful to the hospitality and inspirational at-mosphere of the Les Houches Summer School on Topolog-ical Aspects of Condensed Matter Physics. EF thanks the KITP (and the Simons Foundation) and the IRONIC14 program for support and hospitality. EF thanks C. Nayak, T. Hughes, S. Ryu, and X.-G. Wen for dis-cussions. This work of was supported in part by the NSF grants No. DMR-1206790 at Stony Brook Univer-sity (A.G.A.), DMR-1064319 (GYC,EF), DMR 1408713 (YY,EF) at the University of Illinois, and PHY11-25915 at KITP (EF). J. E. Avron, R. Seiler, and P. G. Zograf. Viscosity of Quantum Hall Fluids . Phys Rev Lett, 75 , 697–700 (1995). P. L´ evay. Berry Phases for Landau Hamiltonians on de-formed tori . Journal of Mathematical Physics, 36 , 2792– 2802 (1995). N. Read. Non-Abelian adiabatic statistics and Hall viscos-ity in quantum Hall states and px +ip y paired superfluids .Phys Rev B, 79 , 045308 (2009). N. Read and E. H. Rezayi. Hall viscosity, orbital spin, and geometry: Paired superfluids and quantum Hall sys-tems . Phys Rev B, 84 , 085316 (2011). C. L. Kane and M. P. A. Fisher. Quantized thermal trans-port in the fractional quantum Hall effect . Phys. Rev. B, 55 , 15832–15837 (1997). N. Read and D. Green. Paired states of fermions in two dimensions with breaking of parity and time-reversal sym-metries and the fractional quantum Hall effect . Physical Review B, 61 , 10267 (2000). A. Cappelli, M. Huerta, and G. R. Zemba. Thermal transport in chiral conformal theories and hierarchical quantum Hall states . Nuclear Physics B, 636 , 568 – 582 (2002). M. R. Douglas and S. Klevtsov. Bergman kernel from path integral . Communications in Mathematical Physics, 293 , 205–230 (2010). S. Klevtsov. Random normal matrices, Bergman ker-nel and projective embeddings . Journal of High Energy Physics, 2014 , 1–19 (2014). T. Can, M. Laskin, and P. Wiegmann. Fractional Quantum Hall Effect in a Curved Space: Gravitational Anomaly and Electromagnetic Response . Phys. Rev. Lett., 113 , 046803 (2014). G. Y. Cho, Y. You, and E. Fradkin. Geometry of frac-tional quantum Hall fluids . Phys. Rev. B, 90 , 115139 (2014). J. Fr¨ ohlich and U. M. Studer. Gauge invariance and current algebra in nonrelativistic many-body theory . Rev. Mod. Phys., 65 , 733–802 (1993). X. Wen and A. Zee. Shift and spin vector: New topological 5 quantum numbers for the Hall fluids . Phys. Rev. Lett., 69 , 953 (1992). X.-G. Wen. Topological orders and edge excitations in fractional quantum Hall states . Advances in Physics, 44 ,405–473 (1995). A. G. Abanov and A. Gromov. Electromagnetic and gravitational responses of two-dimensional noninteract-ing electrons in a background magnetic field . Phys. Rev. B, 90 , 014435 (2014). E. Witten. Quantum field theory and the Jones polyno-mial . Communications in Mathematical Physics, 121 ,351–399 (1989). D. Bar-Natan and E. Witten. Perturbative expansion of Chern-Simons theory with non-compact gauge group .Communications in mathematical physics, 141 , 423–440 (1991). M. Barkeshli and X.-G. Wen. Effective field theory and projective construction for Zk parafermion fractional quantum Hall states . Phys. Rev. B, 81 , 155302 (2010). A. Gromov and A. G. Abanov. Density-curvature response and gravitational anomaly . arXiv preprint arXiv:1403.5809 (2014). M. Stone. Gravitational anomalies and thermal Hall ef-fect in topological insulators . Physical Review B, 85 ,184503 (2012). B. Bradlyn and N. Read. Low-energy effective theory in the bulk for transport in a topological phase . 
arXiv preprint arXiv:1407.2911 (2014). A. Gromov and A. G. Abanov. Thermal Hall Effect and Geometry with Torsion . arXiv preprint arXiv:1407.2908 (2014). A. S. Schwarz. The partition function of degenerate quadratic functional and Ray-Singer invariants . Letters in Mathematical Physics, 2, 247–252 (1978). M. Nakahara. Geometry, topology and physics . CRC Press (2003). H. W. J. Bl¨ ote, J. L. Cardy, and M. P. Nightingale. Conformal invariance, the central charge, and universal finite-size amplitudes at criticality . Phys. Rev. Lett., 56 ,742–745 (1986). I. Affleck. Universal term in the free energy at a critical point and the conformal anomaly . Phys. Rev. Lett., 56 ,746–748 (1986). A. H. Chamseddine and J. Fr¨ ohlich. Two-dimensional Lorentz-Weyl anomaly and gravitational Chern-Simons theory . Comm. Math. Phys., 147 , 549–562 (1992). Importantly, the replacement of the SO (2) spin connec-tion of the SK part of the action Eq.(6) by Christoffel symbols cannot be done. C. G. Callan Jr and J. A. Harvey. Anomalies and fermion zero modes on strings and domain walls . Nuclear Physics B, 250 , 427–436 (1985). A. Lopez and E. Fradkin. Fractional quantum Hall effect and Chern-Simons gauge theories . Phys. Rev. B, 44 ,5246–5262 (1991). X.-G. Wen. Projective construction of non-Abelian quan-tum Hall liquids . Physical Review B, 60 , 8827 (1999). N. Read and E. Rezayi. Beyond paired quantum Hall states: Parafermions and incompressible states in the first excited Landau level . Phys. Rev. B, 59 , 8084 (1999). G. Moore and N. Seiberg. Taming the conformal zoo .Physics Letters B, 220 , 422–430 (1989). arXiv:1410.6812v3 [cond-mat.str-el] 25 Mar 2015 Erratum: Framing Anomaly in the Effective Theory of Fractional Quantum Hall Effect, [Phys. Rev. Lett., 114, 016805 (2015)] Andrey Gromov, 1 Gil Young Cho, 2, 3 Yizhi You, 2 Alexander G. Abanov, 1, 4 and Eduardo Fradkin 2, 5 1 Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA 2 Department of Physics and Institute for Condensed Matter Theory, University of Illinois, 1110 W. Green St., Urbana, Illinois 61801-3080, USA 3 Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 305-701, Korea 4 Simons Center for Geometry and Physics, Stony Brook University, Stony Brook, NY 11794, USA 5 Kavli Institute for Theoretical Physics, University of California Santa Barbara, CA 93106-4030, USA (Dated: March 26, 2015) In our recent published work there is an error in the computation of the geometric response functional for the Zk parafermion fractional quantum Hall (FQH) states at the filling ν = k Mk +2 which lead to the incorrect result that the spin variance β for the states with M > 0 and k > 1 does not vanish. The error originated from the incorrect form of the effective theory given by Eq.(21) of Ref. , which misses an additional allowed coupling to spin connection in the action of Eq. (21) whose correct form is S = 1 4π ∫ tr [ ada + 2 3 a3 + ωda ] − 1 4π ∫ tr [ bdb + 2( QA + Sω )db ] (21 ∗)Taking into account this additional term in Eq. (21) changes the expression Eq. (24) to β = 0. It can be shown explicitly in the parton construction using the approach of Ref. , that correct version of Eq. (21) of Ref. 
for the Zk parafermion FQH states is explicitly given by L = M 4π ( k M k + 2 A + ω 2 + 2 αU(1) ) d ( k M k + 2 A + ω 2 + 2 αU(1) ) 1 4π tr ( αSU (M)dα SU (M) + 2 3 α3 SU (M) ) 2k 4π ( 1 M k + 2 A + ω 2 − M α U(1) ) d ( 1 M k + 2 A + ω 2 − M α U(1) ) 1 4π tr ( αSp (2 k)dα Sp (2 k) + 2 3 α3 Sp (2 k) ) − cU(1) 2k+M 1 48 π ωdω (1) Here αμU(1) , αμSU (M) and αμSp (2 k) are, respectively, the internal U (1) gauge field, the SU (M ) gauge field, and the Sp (2 k) gauge field, in the algebra of the gauge group U (1) × SU (M ) × Sp (2 k) = U (M ) × Sp (2 k) of the projective parton construction of the parafermion state . The last term in Eq. (1) is the gravitational Chern-Simons term of 2k + M Landau levels of the parton construction. Upon integrating out the dynamical gauge fields αμU(1) , αμSU (M) and αμSp (2 k) , and carefully including the contributions of the framing anomaly to the partition functions of the gauge fields, we obtain the effective action for the spin connection ωμ and the external electromagnetic field Aμ to be L = 1 4πk M k + 2 ( A + M + 2 2 ω ) d ( A + M + 2 2 ω ) − cU(1) 2k+M 1 − cU(M)1 − cSp (2 k)1 48 π ωdω (2) which implies that Eq. (24) of Ref. should be replaced by β = 0. This result is consistent with the more recent work of Bradlyn and Read who reached this conclusion by computing the spin variance for a class of trial FQH wave functions with the form of conformal blocks. All other results reported in Ref. are correct and are not affected by this erratum. We thank Barry Bradlyn and Nick Read for pointing out this discrepancy. This work of was supported in part by the NSF grants No. DMR-1206790 at Stony Brook University (A.G.A.), DMR-1064319 (GYC,EF), DMR 1408713 (YY,EF) at the University of Illinois, and PHY11-25915 at KITP (EF). A. Gromov, G. Y. Cho, Y. You, A. G. Abanov, and E. Fradkin, Phys. Rev. Lett. 114 , 016805 (2015). N. Read and E. Rezayi, Phys. Rev. B 59 , 8084 (1999). G. Y. Cho, Y. You, and E. Fradkin, Physical Review B 90 , 115139 (2014). M. Barkeshli and X.-G. Wen, Phys. Rev. B 81 , 155302 (2010). B. Bradlyn and N. Read, “Topological central charge from Berry curvature: gravitational anomalies in trial wavefunctions for topological phases,” (2015), arXiv:1502.04126.
123403
High-Rate Full-Diversity Space-Time Block Codes with Linear Receivers Ertuğrul Başar and Ümit Aygölü Istanbul Technical University, Faculty of Electrical & Electronics Engineering, 34469, Maslak, Istanbul, Turkey basarer,[email protected] Abstract —In this paper, we deal with the design of high-rate space-time block codes (STBCs) that achieve full-diversity with linear receivers which enable symbol-wise decoding. We propose three new high-rate coordinate interleaved STBCs and prove that they can achieve full-diversity with linear receivers for any optimally rotated square QAM constellation. Recently, Shang and Xia proved that the symbol rate of an STBC achieving full-diversity with a linear receiver is upper bounded by one complex information symbol per channel use (pcu). However, we show that with the use of coordinate interleaving, the proposed STBCs can exceed this upper bound to 4/3 complex information symbols pcu for two, three and four transmit antennas. For the symbol-by-symbol decoding of the proposed STBCs, we adapt the partial interference cancellation (PIC) group decoding algorithm recently proposed by Guo and Xia, and then further modify this decoder by applying successive interference cancellation (SIC) operation. Simulation results show that when linear receivers with minimum decoding complexity are used, the proposed STBCs achieve better error performance than their counterparts given in the literature. I. INTRODUCTION Space-time block codes (STBCs) have been comprehensi-vely studied since the early works in and . In , Ala-mouti proposed a remarkable scheme for MIMO systems with two transmit antennas, which allows a low-complexity maxi-mum likelihood (ML) decoder. Then, STBCs for more than two transmit antennas were designed in . For such codes, the ML decoding can be performed in symbol-wise way due to the orthogonality of their code matrix. However, it has been proved that the symbol rate of an orthogonal STBC (OSTBC) is upper bounded by 3/4 (rate-3/4) complex information symbols per channel use (pcu) for more than two transmit antennas . The orthogonality condition was then relaxed by quasi-orthogonal STBCs (QOSTBCs) to exceed this upper bound at the expense of increased decoding complexity . QOSTBCs were then modified to obtain full transmit diversity with constellation rotation . Besides the QOSTBCs, a special class of OSTBCs named coordinate interleaved orthogonal designs (CIODs) which exceed the upper bound mentioned above, having an ML decoder with linear decoding complexity, were proposed in . Later, several high-rate STBCs were introduced in [7,8], however their ML decoding complexities grow exponentially with the constellation size which make their implementation difficult and expensive. The full rank criterion derived in ensures maximum diversity order in a quasi-static Rayleigh fading channel. However, this criterion holds for the optimal ML decoder whose implementation becomes infeasible in some cases due to its high computational complexity. To reduce this higher decoding complexity, one may prefer suboptimum decoding algorithms such as zero-forcing (ZF) or minimum mean squared error (MMSE) estimation to perform symbol-wise decoding with linear complexity . However, in such cases, the full rank criterion cannot guarantee full transmit diversity. For OSTBCs, symbol-wise decoding is equivalent to the ML decoding, therefore, full-diversity can be achieved with linear receivers. 
Recently, some researchers are focused on full-diversity non-orthogonal STBCs which allow symbol-wise decoding. Two classes of STBCs named Toeplitz codes and Overlapped Alamouti codes were proposed by Zhang, et. al. and Shang and Xia , respectively. It has been proved in that the symbol rate of an STBC achieving full-diversity with a linear receiver is upper bounded by 1 symbol pcu. By generalizing the works in [11-13], for general linear dispersive STBCs , Guo and Xia proposed a novel decoding scheme in called partial interference cancellation (PIC) group decoding. The main idea of the PIC group decoding algorithm is to divide the information symbols in an STBC into several groups and decode these groups independently after PIC group decoding algorithm is applied. According to the full-diversity criteria in , two new STBCs were proposed for two and four transmit antennas with symbol rates 4/3, however, due to their structure, these STBCs require the detection of two complex symbols jointly, which corresponds to a higher decoding complexity than that of single-symbol decodable STBCs with linear receivers. In this paper, we deal with the design of single-symbol decodable, high-rate, full-diversity STBCs. Our contributions in this paper are given as below: • We propose a novel high-rate full-diversity coordinate interleaved STBC structure. Using this structure, we introduce three new rate-4/3 STBCs for two, three and four transmit antennas. • We formulate a single-symbol PIC decoder, which decomposes the embedded symbols of the proposed STBCs into independent groups each formed by real and imaginary parts of a single complex information symbol, then decodes these groups (symbols) separately. We choose PIC decoder since it provides an intermediate solution between error performance and complexity. • We prove that the proposed STBCs can achieve full-diversity with linear receivers for any optimally rotated square M-QAM constellation. 978-1-4244-3584-5/09/$25.00 © 2009 IEEE ISWCS 2009 624 • In accordance with the special structure of the proposed STBC design, we further modify this single-symbol PIC decoder with successive interference cancellation (SIC) operation and obtain a novel PIC-SIC-ML decoder. • We show by computer simulation results that the proposed STBCs achieve better error performance than their counterparts given in the literature when linear receivers are used. Notations: Bold, lowercase and capital letters are used for column vectors and matrices, respectively.( ) . T and ( ) . H deno-te transposition and Hermitian transposition, respectively. For a complex variable x, xR and xI denote the real and imaginary parts of x, i.e., R I x x jx = + , where 1 j = −. The fields of real and complex numbers are denoted by \ and ^ , respecti-vely. χ represents a complex signal constellation. Im and 0(m×n) denote the m×m identity matrix and the m×n matrix with all zero elements, respectively. The Euclidean norm of a vector is denoted by . . II. CHANNEL MODEL Let us consider an nT×nR quasi-static Rayleigh flat fading MIMO channel, where nT and nR denote the number of transmit and receive antennas, respectively. The received T×nR signal matrix R T n × ∈ Y ^ can be modeled as = + Y XH N (1) where T T n × ∈ X ^ is the codeword (transmission) matrix, transmitted over T channel uses. H and N are the nT×nR channel matrix and the T×nR noise matrix, respectively. The entries of H and N are i.i.d. complex Gaussian random variables with the pdfs (0,1) N^ and 0 (0, ) N N ^ , respectively. 
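As a quick illustration of the complex baseband model in (1), the following minimal NumPy sketch (my own illustration, not from the paper; the dimensions T, nT, nR and the noise level N0 are example choices) simulates one codeword transmission over a quasi-static Rayleigh flat fading channel:

```python
import numpy as np

rng = np.random.default_rng(0)

T, n_T, n_R = 3, 2, 3      # channel uses, transmit and receive antennas (example values)
N0 = 0.1                   # noise spectral density (example value)

# Placeholder codeword matrix X (T x n_T) with unit-energy complex entries.
X = (rng.standard_normal((T, n_T)) + 1j * rng.standard_normal((T, n_T))) / np.sqrt(2)

# Quasi-static Rayleigh fading: H is n_T x n_R with i.i.d. CN(0,1) entries,
# held constant over the T channel uses of one codeword.
H = (rng.standard_normal((n_T, n_R)) + 1j * rng.standard_normal((n_T, n_R))) / np.sqrt(2)

# Additive noise N is T x n_R with i.i.d. CN(0, N0) entries.
N = np.sqrt(N0 / 2) * (rng.standard_normal((T, n_R)) + 1j * rng.standard_normal((T, n_R)))

# Received signal, Eq. (1): Y = X H + N.
Y = X @ H + N
print(Y.shape)             # (T, n_R)
```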
We assume, H remains constant during the transmission of a codeword, and take independent values from one codeword to another. The realization of H is assumed to be known at the receiver, but not at the transmitter. In order to apply operations for extracting and decoding the transmitted information symbols from Y, the channel model in (1) must be rewritten as = + y x n H (2) where 2 R Tn ∈ y \ is the received signal vector, 2 2 R Tn K × ∈\ H is the equivalent channel matrix, 0 0 ( 1) ( 1) , ,..., , T R I K R K I x x x x − − ⎡ ⎤ = ⎣ ⎦ x is the real information symbol vector; 2 R Tn ∈ n \ is the additive Gaussian noise vector having i.i.d entries with the pdf 0 (0, / 2) N N \ . Definition 1: (Symbol Rate) The symbol rate of an STBC with the codeword matrix X is defined as / R K T = symbols per channel use where K is the number of information symbols embedded in X. An STBC is said to be high-rate if 1 R > . Definition 2: (Decoding Complexity) The decoding comple-xity is the number of metric computations performed to decode the information symbol vector x. By direct approach, ML decoding of x is performed by deciding in favor of the symbol vector which minimizes the following metric 2 ˆ arg min . K χ ∈ = − x x y x H (3) For a signal constellation of size M, the minimization in (3) requires the computation of MK metrics which is the worst-case detection complexity since all the symbols in X are detected jointly. On the other hand, in case of single-symbol decoding, the total decoding complexity becomes linear (KM) since all symbols in X are decoded separately. III. NEW COORDINATE INTERLEAVED STBCS In this section, we start by the definition of CIOD, then inspiring from CIODs, we present our high-rate coordinate interleaved STBC structure and give design examples for two, three and four transmit antennas. After the formulation of the proposed STBC structure, we give its single-symbol decoding algorithm which is adapted from the PIC group decoding algorithm recently proposed by Guo and Xia . Definition 3: A CIOD of size nT×nT with symbols xl, 0,1,..., 1 l K = − (where K is even) is given as ( ) ( ) ( ) ( ) ( ) ( ) 0 1 /2 1 /2 /2 /2 /2 1 1 /2 /2 , ,..., , ,..., T T T T K n n K K K n n x x x x x x − × + − × Θ ⎡ ⎤ ⎢ ⎥ Θ ⎢ ⎥ ⎣ ⎦ 0 0       (4) where Θ is the complex orthogonal design (COD) of size ( ) ( ) / 2 / 2 T T n n × with symbol rate / T K n ; Re{ } i i x x =  ( ) /2 Im{ } K i K j x + + and ( )K a denotes a mod K. It is shown in that 1 R = CIOD exists if and only if 2 T n = and 4. 1 R = CIOD can also be generalized for 3 T n = . Although CIODs can be decoded with linear decoding complexity, symbol rate-1 may not be sufficient for next generation wireless communication systems. Therefore, it is desirable to achieve higher symbol rates than 1 with linear receivers. However, it has been proved in that the symbol rate of an STBC achieving full diversity with linear receiver is upper-bounded by 1. Note that the upper bound in symbol rates for OSTBCs (which is 3/4 for nT >2) is exceeded by CIODs. Similarly, coordinate interleaved structures compromising orthogonality allow achieving higher symbol rates than 1 with linear receivers while ensuring full-diversity. 
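To make Definition 3 concrete, the sketch below (my own reading of the definition specialized to two transmit antennas, not the authors' code; the helper name ciod_2tx is mine) builds the rate-1 CIOD codeword for nT = 2, where the COD Θ is the trivial 1x1 design and the interleaved symbols swap imaginary parts between the two information symbols:

```python
import numpy as np

def ciod_2tx(x0, x1):
    """Rate-1 CIOD codeword for two transmit antennas (Definition 3 with K = 2).

    Coordinate interleaving per Eq. (4):
        x~0 = Re{x0} + j*Im{x1},   x~1 = Re{x1} + j*Im{x0}.
    """
    xt0 = x0.real + 1j * x1.imag
    xt1 = x1.real + 1j * x0.imag
    # Block-diagonal structure of Eq. (4): each block is the 1x1 design [x~i].
    return np.array([[xt0, 0.0],
                     [0.0, xt1]], dtype=complex)

# Example with two 4-QAM symbols having odd integer coordinates.
print(ciod_2tx(1 + 1j, -1 + 1j))
```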
We propose the following high-rate full-diversity STBC structure, ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) 0 1 /2 1 /2 /2 1 3 /2 1 /2 /2 1 1 3 /2 3 /2 1 2 1 /2 /2 , ,..., , ,..., , ,..., , ,..., T T T T K n n K K K K K K K K K n n x x x x x x x x x x x x − × + − + − + − × Θ ⎡ ⎤ ⎢ ⎥ Θ Θ ⎢ ⎥ ⎢ ⎥ Θ ⎢ ⎥ ⎣ ⎦ 0 0             (5) where Θ is as defined in (4) and { } ( ) { } { } ( ) { } /2 /2 Re Im , for 0 1 Re Im , for 2 1. K K i i K i i i K K x j x i K x x j x K i K + + + ⎧ + ≤≤ − ⎪ = ⎨ ⎪ + ≤≤ − ⎩  (6) As seen from (5), the symbol rate of the proposed STBC is 4 / 3 T K n . We conclude that if the symbol rate of the CIOD of (4) for nT transmit antennas is / T R K n = , then the symbol rate of the corresponding new STBC for nT transmit antennas is 4 / 3 R . According to (5), we propose the following rate-4/3 625 STBCs for two and four transmit antennas, respectively as 0 1 2 3 1 0 3 2 0 0 R I R I R I R I x jx x jx x jx x jx + ⎡ ⎤ ⎢ ⎥ + + ⎢ ⎥ ⎢ ⎥ + ⎣ ⎦ (7) 0 2 1 3 1 3 0 2 4 6 5 7 2 0 3 1 5 7 4 6 3 1 2 0 6 4 7 5 7 5 6 4 0 0 0 0 0 0 0 0 R I R I R I R I R I R I R I R I R I R I R I R I R I R I R I R I x jx x jx x jx x jx x jx x jx x jx x jx x jx x jx x jx x jx x jx x jx x jx x jx + + ⎡ ⎤ ⎢ ⎥ − + − ⎢ ⎥ ⎢ ⎥ + + + + ⎢ ⎥ − + − − + − ⎢ ⎥ ⎢ ⎥ + + ⎢ ⎥ − + − ⎢ ⎥ ⎣ ⎦ . (8) It should be noted that by removing the rightmost column of (8), we obtain a rate-4/3 STBC for three transmit antennas. For decoding operation of (5), we adapt the PIC group decoding technique to our STBCs with the real equivalent channel model given in (2). Suppose the new STBC of (5) transmits 2K complex information symbols drawn from a rotated M-QAM constellation. In this case, we rewrite the 2TnR×4K real equivalent channel matrix in (2) as ( ) ( ) 0, 0, 1, 1, 2 1 , 2 1 , R I R I K R K I − − ⎡ ⎤ = ⎣ ⎦ h h h h h h " H (9) where hi,R and hi,I for 0,1, ,2 1 i K = − … denote the columns of H corresponding to the transmission of the real and imaginary parts of the complex information symbol xi. If we define the 2TnR×2 matrices , , , 0,1, 2 1 i i R i I i K ⎡ ⎤ = = − ⎣ ⎦ h h … H  which corresponds to the symbol xi, then we can rewrite H as [ ] 0 1 2 1 . K − = " HH H H (10) Let us present xi in the vector form as [ ] T i iR iI x x = x for 0,1, 2 1 i K = − … from where we can rewrite (2) as 2 1 0 . K i i i − = + ∑ y x n H = (11) Suppose we want to decode the k-th complex symbol xk, or equivalently the vector xk. We use the PIC group decoding algorithm to completely eliminate the interferences coming from other symbols as follows. First we form the matrix ( ) 2 2 2 R Tn K c k × − ∈\ H by removing the columns belonging to k H from H as [ ] 0 1 1 1 2 1 . c k k k K − + − = " " H H H H H H (12) Then we obtain the projection matrix 2 2 R R Tn Tn k × ∈^ Q using c k H as ( ) ( ) ( ) 1 . T T c c c c k k k k k − = Q H H H H (13) In (13), we assume c k H is full-column rank, otherwise k Q cannot be calculated. The minimum number of receive antennas must be two for the proposed STBCs to ensure that c k H is full-column rank. Finally we obtain the projection matrix 2 2 R R Tn Tn k × ∈ P ^ for which c k k = P 0 H , from 2 . R k Tn k = − P I Q (14) Therefore, multiplying the received signal vector by Pk, all interferences from the other information symbols are canceled. Let k k z P y  , then using the fact that c k k = P 0 H , we obtain 2 1 0 K k k i i k i k k k k − = = + = + ∑ z P x P n P x P n H H . 
(15) Although the noise term in (15) is no longer white Gaussian, it is also proved that minimum Euclidean distance can be used for ML decoding of xk as follows ˆ arg min PIC k k k k χ ∈ = − x x z P x H (16) where [ ] T R I x x x = . In other words, Eq. (15) can be viewed as xk is transmitted through the channel k k P H with the corres-ponding received noisy signal vector being zk and the resulting decision metric is calculated from (16). The minimization in (16) requires the computation of M metrics since x is drawn from a rotated M-QAM constellation. Therefore, by using (16), we decompose the system into symbols and we decode each symbol independently from the others and as a result, we reduce the total decoding complexity in (3) from 2K M to 2KM which corresponds to a linear decoding complexity. According to our design structure in (5), the equivalent channel matrix for the new STBCs has the following general form ( ) ( ) ( ) 3 4 1 2 R T R T T T T n n n K × ⎡ ⎤ = ∈ ⎢ ⎥ ⎣ ⎦ … \ H H H H (17) where ( ) ( ) ( ) CIOD 2 3 4 CIOD 2 for 1,2,..., T T T l n K l R n K l n K l n × × × ⎡ ⎤ ⎢ ⎥ = = ⎢ ⎥ ⎣ ⎦ 0 0 H H H and 2 2 CIOD T n K l × ∈\ H is the equivalent channel matrix of the corresponding CIOD used for the construction of the new STBC. IV. FULL-DIVERSITY CRITERIA FOR SYMBOL-WISE DECODING OF NEW STBCS In this section, we prove that the proposed STBCs can achieve full-diversity with the linear receivers presented in the previous section for any optimally rotated square M-QAM constellation. We give the two criteria for our STBC to achi-eve full-diversity with PIC based decoder given in (16): • The proposed STBC must achieve full-diversity with ML receiver, i.e., the codeword difference matrix ˆ − X X must be full-rank, for all pairs ˆ , X X with ˆ ≠ X X . • 0 1 2 1 , , , K − … H H H of (10) must create a linearly indepen-dent vector set containing hi,R or hi,I for 0, ,2 1 i K = − … . These criteria are an extension of those given in where the complex transmission model is considered. In the following, we prove that the proposed STBC design guarantees these two criteria for any rotated square M-QAM constellation. To prove this fact, firstly we have to show that the minimum determinant δmin of the codeword distance matrix ˆ ˆ ( )( )H − − X X X X is non-zero for any codeword pairs 626 X and ˆ X of the proposed STBCs with ˆ ≠ X X . Let ˆ iR iR iR x x x Δ = − and ˆ iI iI iI x x x Δ = − denote for 0,1, ,2 1 i K = − … the differences in real and imaginary parts of the transmitted and erroneously detected information symbols i x and ˆi x , respectively, for any ˆ , i i x x χ ∈ and ˆ i i x x ≠ . We calculate the δmin value of the new STBC for two transmit antennas as follows ( ) ( ) { ( ) ( )} 2 2 2 2 2 2 2 2 min 0 0 1 3 1 0 1 2 2 2 2 2 2 2 2 2 2 0 2 3 3 1 2 3 min . R I R R I I R I I R R I R I R I x x x x x x x x x x x x x x x x δ = Δ Δ + Δ + Δ + Δ Δ + Δ + Δ +Δ Δ + Δ + Δ + Δ Δ + Δ + Δ (18) It is obvious that (18) takes its minimum value when only one information symbol is erroneous (for example 0 0 ˆ x x ≠ ) and the resulting minimum determinant is 2 2 min 0 0 min( ) R I x x δ = Δ Δ . Constellation rotation ensures a nonzero δmin value for any square M-QAM constellation. We choose the constellation rotation angle to maximize the δmin value for the proposed STBCs. The optimum rotation angle for square M-QAM with symbols having odd integer coordinates is found to be 31.72° which gives a δmin value of 3.2. 
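The effect of constellation rotation on this bottleneck can be checked numerically. The sketch below (an illustration based on my reading of (18), not the authors' code; the helper name delta_min is mine) evaluates the single-symbol-error quantity min (Δx_R Δx_I)^2 over all distinct pairs of a rotated 4-QAM alphabet with odd integer coordinates; it returns roughly 3.2 at 31.72 degrees and 0 without rotation, consistent with the value quoted above:

```python
import numpy as np
from itertools import product

def delta_min(theta_deg, M_side=2):
    """Smallest (dx_R * dx_I)^2 over all distinct pairs of a rotated square QAM
    constellation with odd integer coordinates (M_side=2 gives 4-QAM)."""
    theta = np.deg2rad(theta_deg)
    coords = np.arange(-(M_side - 1), M_side, 2)          # e.g. [-1, 1] for 4-QAM
    symbols = [(a + 1j * b) * np.exp(1j * theta) for a, b in product(coords, coords)]
    best = np.inf
    for s1, s2 in product(symbols, symbols):
        d = s1 - s2
        if abs(d) > 1e-12:                                # skip the zero difference
            best = min(best, (d.real * d.imag) ** 2)
    return best

print(delta_min(0.0))    # ~0: unrotated QAM has axis-aligned differences
print(delta_min(31.72))  # ~3.2, matching the optimum quoted for square QAM
```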
With a similar analysis, we obtain the δmin value of the new STBCs for three and four transmit antennas as equal to 2.7 and 10.24, respectively. Due to lack of space, details are omitted. After we guarantee the full-rank property, to ensure full-diversity, we have to show that the proposed STBC structure satisfies the second criterion. The orthogonality holds between the first and the last 2K columns of H in (17) due to the orthogonality of the CIOD, i.e., , , 0,1, , 1, i j i j K ⊥ = − … H H i j ≠ , , , , 1, ,2 1, k l k l K K K k l ⊥ = + − ≠ … H H . Due to the zero submatrices in (5) (and as a result in (17)) the proposed design ensures the linear independence condition, namely, the two columns of , i H 0,1, , 1 i K = − … cannot be expressed as any linear combination of the last 2K columns of H together (one column of i H can be expressed while the other column cannot), and vice versa. Therefore, for a complex information symbol xi, there exists a real vector (hi,R or hi,I) such that cannot be expressed as any linear combination of the rest of the columns of H . There are several STBC structures [7-8] that have higher symbol rates than the proposed STBCs, however, due to linear dependence between columns of their equivalent channel matrix H , suboptimum decoding techni-ques (such as PIC group decoding) cannot guarantee full-diversity for them. V. A MODIFIED DECODING ALGORITHM FOR NEW STBCS According to the PIC decoder given in Section III, all symbols in (5) are decoded independently whatever their decoding order. Furthermore, due to the non-orthogonal structure of the proposed STBC, it is not possible to decode some of the symbols using ML decoding. However, with the aid of SIC, we can perform ML decoding for the half of the symbols due to the special structure of the proposed STBC design. Consider the decoding problem of the STBC in (5). The proposed PIC-SIC-ML decoding algorithm is then as follows, 1) Decode the first K symbols using PIC group decoding algorithm (16) and obtain ˆ PIC i x (or equivalently PIC i x ) for 0,1, , 1 i K = − … . 2) Under the assumption ˆ PIC i i = x x for 0,1, , 1 i K = − … remove the interferences of the first K symbols from the received signal such as 1 ' 0 ˆ . K PIC i i i − = = −∑ y y x H (19) 3) After the SIC operation, the receiver obtains the following channel output, ' = + y x n H (20) where ( ) ( ) 2 1 2 1 [ ]T KR KI K R K I x x x x − − = x … and 1 2 3 2 [( ) ( ) ( ) ] R n T T T T T K × = ∈ … \ H H H H is the equiva-lent channel matrix where ( ) 2 CIOD T K l l × ⎡ ⎤ = ⎢ ⎥ ⎢ ⎥ ⎣ ⎦ 0 H H (21) with 2 2 CIOD l T K × ∈\ H being the equivalent channel matrix of the corresponding CIOD. Since the columns of l H in (21) are orthogonal to each other, we can decode the last K symbols with ML decoding via easy decomposition as ' argmin ML i i x χ ∈ = − x y x H for , 1, ,2 1 i K K K = + − … . It should be noted that the decoding complexities of the PIC decoder and the PIC-SIC-ML decoder are the same (2KM ). VI. SIMULATION RESULTS In this section, we present some simulation results for the proposed STBCs and make comparisons with the existing STBCs in the literature. In Fig. 1, we compare the bit error rate (BER) performances of the proposed STBCs given in (7) and (8) for 4-QAM and 3 receive antennas with respect to received signal-to-noise ratio (SNR). As seen from Fig.1, PIC-SIC-ML decoder provides approximately 0.5dB SNR advantage compared to the PIC decoder. In Fig. 
2, we compare the BER performances of the proposed STBC of (7), Guo-Xia STBC , the Golden code and Alamouti’s STBC for two transmit and three receive antennas. To obtain a spectral efficiency of 8 bits/sec/Hz, our code and the Guo-Xia STBC use 64-QAM while Golden code and Alamouti’s STBC use 16-QAM and 256-QAM, respectively. For the linear decoding of our STBC, we use the PIC-SIC-ML decoder given in Section V, and for a fair comparison we also employ symbol-wise PIC decoders for Guo-Xia STBC and the Golden code. As seen from Fig. 2, in case of linear receivers, our new STBC achieves best error performance and Guo-Xia STBC and the Golden code do not achieve full-diversity. In Fig. 3, we compare the BER performance of our new STBC of (8) with those of the Guo-Xia STBC and the CIOD for four transmit and three receive antennas. Our new STBC and Guo-Xia STBC use 64-QAM while the CIOD uses 256-QAM to obtain a spectral efficiency of 8 bits/sec/Hz. Similar to two 627 transmit antennas case, linear receivers are used for all schemes. As seen from Fig. 3, unlike our new STBC and the CIOD, Guo-Xia STBC does not achieve full-diversity in case of symbol-wise decoding. Simulation results show that at a BER value of 10-5, our new code provides approximately 2.4 and 3.3 dB SNR advantages compared to Guo-Xia STBC and the CIOD, respectively. VII. CONCLUSIONS We have proposed three new rate-4/3 coordinate interleaved STBCs for two, three and four transmit antennas. By inspiring the PIC group decoding algorithm, we have developed a novel linear decoder for the proposed STBCs. We have proved that the proposed STBCs can achieve full-diversity with linear receiver for any rotated square M-QAM constellation, therefore, the upper bound in symbols rates of non-orthogonal STBCs achieving full-diversity with linear receivers, which is one symbols pcu, is exceeded by the new schemes using coordinate interleaving. 0 2 4 6 8 10 12 14 16 10 -6 10 -5 10 -4 10 -3 10 -2 10 -1 BER SNR(dB) PIC,2Tx PIC-SIC-ML,2Tx PIC,4Tx PIC-SIC-ML,4Tx Fig.1: BER comparisons for PIC and PIC-SIC-ML decoders (4-QAM, 3Rx) REFERENCES S. M. Alamouti, “A simple transmit diversity technique for wireless communications,” IEEE J. Sel. Areas Commun., vol. 16, no. 8, pp. 1451-1458, Oct. 1998. V. Tarokh, H. Jafarkhani, and A. R. Calderbank, “Space-time block codes from orthogonal designs,” IEEE Trans. Inf. Theory, vol. 45, no. 5, pp. 1456-1466, Jul. 1999. H. Wang and X.-G. Xia, “Upper Bounds of Rates of Complex Orthogonal Space-Time Block Codes,” IEEE Trans. on Inf. Theory, 2003, vol. 49, no. 11, pp. 2788-2796. H. Jafarkhani, “A quasi-orthogonal space-time block code,” IEEE Trans. On Commun., vol. 49, no. 1, pp. 1–4, Jan. 2001. W. Su and X.-G Xia, “Signal constellations for quasi-orthogonal space-time block codes with full-diversity,” IEEE. Trans. Inf. Theory, vol. 50, no. 10, pp. 2331-2347, Oct. 2004. M. Z. A Khan and B. S. Rajan, “Single-symbol maximum likelihood decodable linear STBCs,” IEEE Trans. Inf. Theory, vol. 52, no. 5, pp. 2062-2091, May 2006. A. Tirkkonen, O. Hottinen, and R. Wichman, “Multi-antenna Transceiver Techniques for 3G and Beyond,” John Wiley & Sons Ltd., UK, 2003. J.-C. Belfiore, G. Rekaya, and E. Viterbo, “The Golden code: a 2×2 full-rate space-time code with non-vanishing determinants,” IEEE Trans. Inf. Theory, vol. 51, no. 4, pp. 1432-1436, Apr. 2005. V. Tarokh, N. Seshadri, and A. R. Calderbank, “Space-time codes for high data rate wireless communications: performance criterion and code construction,” IEEE Trans. Inf. 
Theory, vol. 44, no. 2, pp. 744-765, Mar. 1998 . P. W. Wolniansky, G. J. Foschini, G. D. Golden, and R. A. Valenzuela, “V-BLAST: A high capacity space-time architecture for the rich-scattering wireless channel,” in Proc. Int. Symp. on Signals, Systems and Electronics, ISSSE’98, Pisa, Italy, Sept. 1998. J.-K. Zhang, J. Liu, and K. M. Wong, “Linear Toeplitz space time block codes,” in IEEE Int. Symp. Inform. Theory (ISIT’05), Adelaide, Australia, 4-9 Sept. 2005, pp. 1942–1946. Y. Shang and X.-G. Xia, “Overlapped Alamouti codes,” in Proc. IEEE Global Commun. Conf. (Globecom’07), Washington, D.C., USA, Nov. 26-30, 2007, pp. 2927–2931. Y. Shang and X.-G. Xia, “A criterion and design for space-time block codes achieving full diversity with linear receivers,” in Proc. IEEE Int. Symp. Inform. Theory (ISIT’07), Nice, France, Jun. 24-29, 2007, pp. 2906– 2910. B. Hassibi and B. M. Hochwald, “High-rate codes that are linear in space and time,” IEEE Trans. Inf. Theory, vol. 48, no. 7, pp. 1804-1824, Jul. 2002. X. Guo and X.-G. Xia, “On full diversity space-time block codes with partial interference cancellation group decoding,” available online: to appear in IEEE Trans. Inf. Theory. 0 5 10 15 20 25 30 10 -6 10 -5 10 -4 10 -3 10 -2 10 -1 10 0 BER SNR(dB) New STBC,64-QAM Guo-Xia STBC,64-QAM Golden Code,16-QAM Alamouti STBC,256-QAM Fig.2 : BER comparisons for different STBCs at 8 bits/sec/Hz (2Tx & 3Rx) 0 5 10 15 20 25 30 10 -6 10 -5 10 -4 10 -3 10 -2 10 -1 10 0 BER SNR(dB) New STBC,64-QAM Guo-Xia STBC,64-QAM CIOD,256-QAM Fig.3 : BER comparisons for different STBCs at 8 bits/sec/Hz (4Tx & 3Rx) 628
123404
"Spectral graph theory" by Fan R. K. Chung : r/math =============== Skip to main content"Spectral graph theory" by Fan R. K. Chung : r/math Open menu Open navigationGo to Reddit Home r/math A chip A close button Log InLog in to Reddit Expand user menu Open settings menu Go to math r/math r/math This subreddit is for discussion of mathematics. All posts and comments should be directly related to mathematics, including topics related to the practice, profession and community of mathematics. 3.9M Members Online •7 yr. ago derrdi "Spectral graph theory" by Fan R. K. Chung What are your opinions on this book? Is it worth reading, or is there some other useful book about spectral graph theory? Read more Archived post. New comments cannot be posted and votes cannot be cast. Share New to Reddit? Create your account and connect with a world of communities. Continue with Email Continue With Phone Number By continuing, you agree to ourUser Agreementand acknowledge that you understand thePrivacy Policy. Public Anyone can view, post, and comment to this community Top Posts Reddit reReddit: Top posts of May 29, 2018 Reddit reReddit: Top posts of May 2018 Reddit reReddit: Top posts of 2018 Reddit RulesPrivacy PolicyUser AgreementAccessibilityReddit, Inc. © 2025. All rights reserved. Expand Navigation Collapse Navigation
123405
Published Time: 2019-02-22T04:23:48+00:00
How to Endorse a Negotiable Bill of Lading? | Buyer's Credit & Supplier's Credit
===============
Export Import Documentation & Procedure, Letter of Credit (LC)
How to Endorse a Negotiable Bill of Lading?
February 22, 2019 | Sanjay Mandavia

In earlier articles we discussed the documents required under a letter of credit (LC) and how to prepare and submit compliant documents. In this article we discuss what a negotiable bill of lading is, why endorsement is required, who should endorse it, and which endorsements are required.

What is a Negotiable Bill of Lading?
When a bill of lading is issued in original and consigned "To Order", "To Order of Shipper" or "To Order of XYZ Bank", it is termed a "Negotiable Bill of Lading".

Why is Endorsement Required on a Negotiable Bill of Lading?
The destination port agent will release the cargo only after at least one of the issued original B/Ls is surrendered and after checking the endorsements on the back of the B/L, since this type of B/L can be endorsed or transferred to another company.

Endorsement Rules as per ISBP 745
A bill of lading has to be endorsed according to the terms of the LC to make a complying presentation under UCP 600 and ISBP 745. ISBP 745 contains the following rule on endorsement:
D17 a. When a multimodal transport document is issued "to order" or "to order of the shipper", it is to be endorsed by the shipper. An endorsement may be made by a named entity other than the shipper, provided the endorsement is made for [or on behalf of] the shipper.

Endorsement of Bill of Lading
Below are the combinations of consignee and the endorsements required on a negotiable bill of lading consigned to that party.

Consigned "To Order" or "To Order of ABC":
- Blank endorsement: stamp of the shipper and signature.
- If ABC is the actual receiver: shipper's endorsement stating DELIVER TO THE ORDER OF "ABC Client", plus ABC's stamp and signature when ABC takes final delivery.
- If the cargo is sold further: ABC's endorsement stating DELIVER TO THE ORDER OF "XYZ Client".

Consigned "To Order of EFG Bank":
- Shipper's endorsement stating DELIVER TO THE ORDER OF "EFG BANK", and
- If ABC is the actual receiver: EFG Bank's endorsement stating DELIVER TO THE ORDER OF "ABC Client", plus ABC's stamp and signature when ABC takes final delivery.
- If the cargo is sold further: ABC's further endorsement stating DELIVER TO THE ORDER OF "XYZ Client".

FAQ
1. The LC asks for a B/L "to order and blank endorsed". The presented B/L is issued "to order" but not endorsed, and this discrepancy was accepted by the applicant. Can that B/L be endorsed by the issuing bank in favour of the applicant, to enable him to take delivery of the goods?
Answer: The B/L cannot be endorsed by the issuing bank, as the bank is not the consignee and hence not a party to that B/L. The applicant has several choices:
1. Present the B/L to the carrier at destination, who will contact the carrier's issuing office for the shipper's approval to release the goods;
2. Send the full set of original B/Ls back to the shipper for endorsement and return;
3. Endorse the B/L themselves purporting to be the agent of the shipper.

Definitions
"Negotiable" means transferable by delivery, and "instrument" means a written document by which a right is created in favour of some person.
Endorsement: signing of an instrument on its back or face, or on a slip annexed to it, for the purpose of negotiation. Signing by the drawer/maker, holder or payee for this purpose is called endorsement under the Negotiable Instruments Act, 1881.
Negotiable Instruments Act, 1881: "Endorsement 'in blank' and 'in full'. If the endorser signs his name only, the endorsement is said to be 'in blank', and if he adds a direction to pay the amount mentioned in the instrument to, or to the order of, a specified person, the endorsement is said to be 'in full'; and the person so specified is called the 'endorsee' of the instrument."

Reference:
UCP 600
ISBP 745
Bill of Lading as a Negotiable or Transferable Document of Title
Negotiable Instruments Act 1881
The Bill of Lading Act 1856
123406
123407
Published Time: Sat, 18 Mar 2023 00:00:19 GMT STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS ´ANGEL GONZ ´ALEZ-PRIETO, MARINA LOGARES, JAVIER MART ´INEZ, AND VICENTE MU ˜NOZ Abstract. We describe the geometry of the character variety of representations of the fundamental group of the complement of a Hopf link with n twists, namely Γ n = 〈x, y | [xn, y ] = 1 〉 into the group SU( r). For arbitrary rank, we provide geometric descriptions of the loci of irreducible and totally reducible representations. In the case r = 2, we provide a complete geometric description of the character variety, proving that this SU(2)-character variety is a deformation retract of the larger SL(2 , C)-character variety, as conjectured by Florentino and Lawton. In the case r = 3, we also describe different strata of the SU(3)-character variety according to the semi-simple type of the representation. Dedicated to Prof. Peter E. Newstead on the occasion of his 80th birthday. Introduction Let Γ be a finitely generated group and G a real or complex algebraic group. A representation of Γ into G is a group homomorphism ρ : Γ → G. Consider a presentation Γ = 〈γ1, . . . , γ k | { rλ}〉 , where {rλ} is a finite generating set of relations of Γ. The map ρ is completely determined by the k-tuple (A1, . . . , A k) = ( ρ(γ1), . . . , ρ (γk)) subject to the relations rλ(A1, . . . , A k) = id, for all λ. In this way, the set of representations of Γ into G is in bijection with the algebraic set R(Γ , G ) = {(A1, . . . , A k) ∈ Gk | rλ(A1, . . . , A k) = id , ∀λ } ⊂ Gk . Two representations ρ and ρ′ are said to be equivalent if there exists g ∈ G such that ρ′(γ) = g−1ρ(γ)g,for every γ ∈ Γ. When ρ is a faithful representation into G ⊂ GL( V ), the equivalence of representations means that ρ and ρ′ are the same representation up to a G-change of basis of V . The moduli space of representations (or character variety) can be thus obtained as the GIT quotient X(Γ , G ) = R(Γ , G ) // G . An important instance happens when G = GL( r, C), for which we recover the classical notion of a linear representation as a Γ-module structure on the vector space Cr . It is worth noticing that, when Γ is a finite group, the vector space Cr can be equipped with a Γ-invariant hermitian metric. Hence, any representation ρ : Γ → GL( r, C) descends to an U( r)-representation Γ ρ / / ˜ρ ' ' GL( r, C)U( r) ? O O In other words, X(Γ , GL( r, C)) ∼= X(Γ , U( r)). However, in the general case in which Γ is only finitely gen-erated, such an invariant metric may not exist, so X(Γ , U( r)) is only a real subvariety of X(Γ , GL( r, C)). 2020 Mathematics Subject Classification. Primary: 14M35. Secondary: 57K31. Key words and phrases. character variety, representation varieties, unitary group, knots, links. 1 arXiv:2303.06218v1 [math.GT] 10 Mar 2023 2A. GONZ ´ALEZ-PRIETO, M. LOGARES, J. MART ´INEZ, AND V. MU ˜NOZ Similar considerations can be done in the case in which we fix the determinant of the representation, so we analyze the descending property of representations induced by the inclusion SU( r) ↪→ SL( r. C), which exhibits X(Γ , SU( r)) as a real subvariety of X(Γ , SL( r, C)). For non-finite groups Γ, the situation is different. In , it is proved that, when Γ is a free product of nilpotent groups or a star-shaped RAAG (Right Angled Artin Group), the inclusion X(Γ , K ) ↪→ X(Γ , G )is a deformation retract for any reductive group G and its maximal compact subgroup K ⊂ G (a property called ‘flawed’). 
On the contrary, for Γ = π1(Σ g ), the fundamental group of a compact orientable surface Σg of genus g ≥ 2, this inclusion is never a homotopy equivalence when G is reductive and non-abelian (it is said that Γ is a ‘flawless’ group in the language of ). The study of the homotopy type of these character varieties in the case of knot groups was initiated in . For a knot K ⊂ S3, the finitely presented group Γ = π1(S3 − K) is an important invariant of the knot, called the knot group. The first non-trivial family of knots studied in the literature has been the torus knot of type ( m, n ), for which the knot group is Γ = 〈x, y | xn = ym〉. For the torus knot group, and prove that such inclusion is a deformation retract in the case SU(2) ↪→ SL(2 , C), and proves the same result in the case SU(3) ↪→ SL(3 , C). The aim of this work is to extend this program to a family of 3-dimensional links. In analogy with the cases of knots, given a link L ⊂ S3, the link group is the fundamental group Γ = π1(S3 − L) of its complement. For an algebraic group G, we have the G-character variety of the link X(L, G ) = Hom ( π1(S3 − L), G ) // G , as before. Despite of the advances for representation varieties of knots, much less is known in the case of links. An obvious case is the character variety of trivial links, i.e. representations of the free group. Very recently, more complicated links have been studied, such as the twisted Alexander polynomial for the Borromean link in . In , it is addressed the extension of the analysis of the geometry of character varieties for the the “twisted” Hopf link Hn, obtained by twisting a classical Hopf link with 2 crossings to get 2 n crossings, as depicted in Figure 1. Figure 1. The twisted Hopf link of n twists. The fundamental group of the link complement of Hn can be computed through a Wirtinger presen-tation , giving rise to the group Γ n = 〈a, b | [an, b ] = 1 〉. Therefore, the associated G-character variety is X(Hn, G ) = {(A, B ) ∈ G2 | [An, B ] = 1 } // G. In this sense, X(Hn, G ) should be understood as the variety counting “supercommuting” elements of G, generalizing the case n = 1 of the usual Hopf link that corresponds to commuting elements. STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS 3 In this setting, we shall show in this paper how the character varieties of twisted Hopf links can be stratified according to the type of its semisimple decomposition in the cases G = SU( r) and G = U( r), an approach that was initiated in . Moreover, we shall provide an explicit description of its irreducible locus (Section 2.3) and locus of representations that fully split into 1-dimensional representations (Sec-tion 2.4). This will lead us to explore other auxiliary spaces such as symmetric spaces of tori. In Section 3, we will apply these techniques to describe the global geometry of the SU(2)-character variety. Furthermore, in Section 4 we will compare its topology with the one of the SL(2 , C)-character variety, variety that was previously studied in . These analyses lead to the main result of this work. Theorem. The SU(2) -character variety of a twisted Hopf link is a strong deformation retract of the corresponding SL(2 , C)-character variety. Moreover, they are homotopically equivalent to the wedge of n copies of 2-spheres. Notice that this result provides the first non-trivial extension of the results of to link groups. 
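As a concrete illustration of the defining relation above (a small numerical sketch of my own, not part of the paper; the number of twists n = 5 and the particular matrices are arbitrary example choices), one can check that a pair (A, B) in SU(2) x SU(2) with A^n central automatically satisfies [A^n, B] = id, and hence defines a point of R(Γn, SU(2)):

```python
import numpy as np

n = 5  # number of twists (example value)

# A in SU(2) with distinct eigenvalues and A^n = -Id, a central element.
A = np.diag([np.exp(1j * np.pi / n), np.exp(-1j * np.pi / n)])

# A generic B in SU(2): unitarize a random complex matrix, then fix the determinant.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
B = Q / np.sqrt(np.linalg.det(Q))      # det(B) = 1, so B lies in SU(2)

An = np.linalg.matrix_power(A, n)
commutator = An @ B @ np.linalg.inv(An) @ np.linalg.inv(B)

print(np.allclose(An, -np.eye(2)))           # A^n is the central element -Id
print(np.allclose(commutator, np.eye(2)))    # [A^n, B] = Id: (A, B) is in R(Gamma_n, SU(2))
```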
In the higher dimensional cases of U(2) and SU(3), we also provide a thorough description of the semisimple stratification arising in the corresponding character varieties. Even though each piece is fully described, the geometry involved is very complicated and a complete description of the intersection pattern that arises is unavailable. This prevents further cohomological calculations that are needed to characterize the homotopy type of these varieties. Despite this essential obstruction, our results suggest that an alternative approach can be considered to address the problem of comparing the homotopy type of SU( r) and SL( r, C) character varieties. In-deed, the methods of this work show that, even though the retraction X(Hn, SL(2 , C)) → X(Hn, SU(2)) does not respect the semisimple strata, it does keep their closures invariant. In this way, even though we do not expect the existence of any retraction that shrinks each stratum locally, such deformation retract may be possible if we allow the homotopy to use neighbouring strata in the closure. With this idea, we pose the following conjecture. Conjecture. For any r ≥ 1, the SU( r)-character variety of a twisted Hopf link is a strong deformation retract of the corresponding SL( r, C)-character variety. Finally, we would like to point out that, even though throughout this paper all these spaces will be studied from a purely representation-theoretic approach, they have been traditionally related with moduli spaces of parabolic bundles. Indeed, the work of Furuta and Steer shows an algebro-geometric approach to the study of the topology of the moduli space of representations of the fundamental group of a Seifert fibered space into SU( r). This idea applies to the twisted Hopf links addressed in present work. Twisted Hopf links are (2 , 2n)-torus links, that is 2-bridge links which satisfy Burde and Murasugi theorem [2, Theorem 1], so that their complement is a Seifert fibered 3-sphere over an orbifold with one singular point of order n.In [1, Section 6.7], Boden shows that, given such Seifert fibered homology 3-sphere M , there exists an orbifold surface Σ such that the character variety X(π1(M ), SU(2)) is isomorphic to X∗(πorb 1 (Σ) , SU(2)), the moduli space of irreducible representations of the orbifold fundamental group of Σ. This later space can be actually understood as an union of moduli spaces of stable parabolic bundles with appropriate parabolic data . From this viewpoint, our work honours the work of Newstead in in the study of the homotopy of moduli spaces of bundles. The aforementioned relation with moduli spaces of parabolic bundles is completed with an analogous relation between SL(2 , C)-character varieties of Seifert fibered homology 3-spheres and moduli spaces of traceless parabolic Higgs bundles, thanks to the work of Nasatyr and Steer . This would motivate us to seek a deformation retraction from the moduli space of traceless parabolic Higgs bundles to the 4 A. GONZ ´ALEZ-PRIETO, M. LOGARES, J. MART ´INEZ, AND V. MU ˜NOZ moduli space of degree 0 stable parabolic bundles. We do not expect it to happen, as it is in fact the case in the non-parabolic situation (see ). The reason may be the same: the moduli space of traceless parabolic Higgs bundles retracts to its well-known nilpotent cone, which contains the moduli space of degree 0 stable parabolic bundles as one of its irreducible components, but it is indeed reducible. Nevertheless, we shall address this question in future work. Acknowledgements. 
The first author has been supported by the Madrid Government (Comunidad de Madrid – Spain) under the Multiannual Agreement with the Universidad Complutense de Madrid in the line Research Incentive for Young PhDs, in the context of the V PRICIT (Regional Programme of Research and Technological Innovation) through the project PR27/21-029, COMPLEXFLUIDS re-search grant sponsored by the BBVA Foundation and by the Ministerio de Ciencia e Innovaci´ on Project PID2021-124440NB-I00 (Spain). The second author has been supported by the Ministerio de Ciencia e Innovaci´ on Project PID2021-124440NB-I00 (Spain), Santander-UCM PR44/21-29924 research grant and by the National Science Foundation under grant No. DMS-1928930 while she was in residence at the Simons Laufer Mathematical Sciences Institute (previously known as MSRI) Berkeley, California, during Fall 2022 Semester. The fourth author has been partially supported by Ministerio de Ciencia e Innovaci´ on Project PID2020-118452GB-I00 (Spain). 2. Representation spaces and character varieties We fix notation and outline here some of the properties that will be used throughout the paper. Let us consider a finitely generated group Γ = 〈γ1, . . . , γ k | rλ(γ1, . . . , γ k)〉, where rλ(γ1, . . . , γ k) is the set of relations satisfied by the generators γ1, . . . , γ k. Given an algebraic linear group G, the G-representation variety of Γ is the space R(Γ , G ) = Hom (Γ , G ) = {(g1, . . . , g k) ∈ Gk | rλ(g1, . . . , g k) = id }. For our purposes, we shall focus on the classical matrix cases G = GL( r, C), SL( r, C), U( r) and SU( r). Given ρ ∈ R(Γ , G ), we will call ρ reducible if there exists some proper linear subspace W ⊂ Cr that is invariant for all γ ∈ Γ, that is, such that ρ(Γ)( W ) ⊂ W ; and we will say that ρ is irreducible otherwise. In the case that there exists a ρ-equivariant decomposition Cr = W1 ⊕. . . ⊕Ws into irreducible components, ρ is said to be semisimple .There is a natural action of the group G on R(Γ , G ) given by g · ρ = gρg −1 for g ∈ G and γ ∈ Γ, namely the conjugation action. Two representations are said to be equivalent if they belong to the same G-orbit under the conjugation action. The moduli space of G-representations of Γ is the GIT quotient (see for an introduction to these algebraic quotients) with respect to this action, X(Γ , G ) = R(Γ , G ) // G. It is well known (see ) that X(Γ , G ) parametrizes representations up to an equivalence relation called S-equivalence: two representations ρ and ρ′ are S-equivalent if the closures of their G-orbits intersect. It turns out that, for the classical groups G = GL( r, C), SL( r, C), every representation is S-equivalent to a semisimple one. Furthermore, in the compact case G = U( r) and SU( r), every representation is semisimple (see [7, Proposition 1]). Hence, in all these cases, X(Γ , G ) parametrizes semisimple representations up to (classical) equivalence. 2.1. Representations of the twisted Hopf link group. We study here representations of the fol-lowing group. For n ≥ 1, let Hn ⊂ S3 be the so-called twisted Hopf link, a generalization of the classical Hopf link with 2 n crossings instead of 2, as represented in Figure 1. STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS 5 Using its Wirtinger presentation, in [9, Proposition 3.1] it was shown that the fundamental group of the complement of the link Hn is Γn = π1(S3 − Hn) = 〈a, b | [an, b ] = 1 〉, where [ x, y ] = xyx −1y−1 denotes the group commutator. 
In this way, the associated G-representation variety is R(Γ n, G ) = Hom (Γ n, G ) = {(A, B ) ∈ G2 | [An, B ] = id }. In this paper we shall study the corresponding representation spaces and character varieties of Γ n for G = U( r) and G = SU( r). We will denote them as Xr = X(Γ n, U( r)) , SXr = X(Γ n, SU( r)) . 2.2. Stratification of representations. Let us fix r ≥ 1. Recall that a partition π of r is a decom-position r = a1r1 + . . . + asrs , where r1 > r 2 > . . . > r s > 0 and ai ≥ 0. We shall denote it by π = ( r1, (a1) . . . , r 1, . . . , r s, (as) . . . , r s). Associated to this partition, we can consider equivalence classes of semisimple representations ρ : Γ n → U( r) of type π, described as: (1) ρ = s ⊕ t=1 al ⊕ l=1 ρtl , ρtl : Γ n → U( rt). We denote the set of such representations by Xπr ⊂ Xr = X(Γ n, U( r)). Analogously, for SU( r) we set SXπr = Xπr ∩ SXr , which is made of those representations in (1) with ∏ t,l det( ρtl ) = 1. As a consequence, the character varieties Xr and SX r decompose into locally closed subvarieties indexed by the set Π r of all partitions of r: Xr = ⊔ π∈Πr Xπr , SXr = ⊔ π∈Πr SXπr . Among all possible partitions, the following two cases will be important: XTR r = Xπ1 r , the set of totally reducible representations, that corresponds to π1 = (1 , (r) . . ., 1), and X∗ r = Xπ0 r , which will denote the set of irreducible representations, where π0 = ( r). Analogous notation will be used for the SU( r)counterparts. Finally, given a (real) algebraic variety X, let us consider the symmetric product of r copies of X,Sym r (X) = Xr /S r , where the symmetric group Sr acts by permutation of the factors. With this symmetric product, we have the following simple characterization of semisimple representations up to equivalence. Lemma 2.1. For any partition π = ( r1, (a1) . . . , r 1, . . . , r s, (as) . . . , r s) ∈ Πr , we have an isomorphism Xπr ∼= s ∏ i=1 Sym ai (X∗ ri ). 2.3. Irreducible representations. First, let us observe the following useful property of irreducible representations of the twisted Hopf link group. Lemma 2.2. Let ρ = ( A, B ) : Γ n → G be an irreducible representation. Then An is a multiple of the identity. 6 A. GONZ ´ALEZ-PRIETO, M. LOGARES, J. MART ´INEZ, AND V. MU ˜NOZ Proof. The relation in Γ n says that An is an equivariant linear map from Cr to Cr . Hence, by Schur’s lemma, An is a multiple of the identity. Consider the natural projection map ω : X∗ r → Sym r (S1). that assigns, to each irreducible representation ( A, B ) ∈ X∗ r , the collection of eigenvalues of the matrix A.To study this map, we stratify Sym r (S1) according to the number of repeated eigenvalues. To be precise, given a partition σ = ( r1, (a1) . . . , r 1, . . . , r s, (as) . . . , r s) ∈ Πr , let us denote by Sym rσ (S1) the collection of sets {λ1, . . . , λ r } such that there are ai groups of ri equal eigenvalues (i.e. ai eigenvalues with multiplicity ri). In particular, σ0 = (1 , (r) . . ., 1) corresponds to the case of different eigenvalues. In this way, given σ ∈ Πr , we set X∗ σ = ω−1(Sym rσ (S1)) . These are equivalence classes of representations ( A, B ) such that A has repeated eigenvalues given by σ.Hence, we get a further stratification of the irreducible locus X∗ r according to the number of repeated eigenvalues (2) X∗ r = ⊔ σ∈Πr X∗ σ . Notice that some of the strata X∗ σ may be empty in this decomposition. The maximal component of X∗ r corresponds to the partition σ0 = (1 , (r) . . ., 1) of non-repeated eigenvalues. 
Provided that n ≥ r, this stratum is non-empty: once we diagonalize A the irreducibility of the representation ensures that we can choose two non-coincident basis for the eigenvectors of A and B, which provides an element of the stratum. Given σ ∈ Πr , let us pick {λ1, . . . , λ r } ∈ Sym rσ (S1), and let us denote by Stab( σ) the U( r)-stabilizer of the diagonal matrix Aσ = diag( λ1, . . . , λ r ). Notice that Stab( σ) does not depend on the particular choice of eigenvalues {λ1, . . . , λ r } ∈ Sym rσ (S1). Indeed, if σ = ( r1, (a1) . . . , r 1, . . . , r s, (as) . . . , r s), we have the explicit description Stab( σ) = s ∏ i=1 SU( ri)ai . 2.3.1. SU( r)-irreducible representations. If we specialize to the case of SU( r), directly from Lemma 2.2 we get the following. Corollary 2.3. Let ρ = ( A, B ) : Γ n → SU( r) be an irreducible representation. Then An = ξ id , where ξ ∈ μr is an r-th root of unity. In particular, A admits finitely many eigenvalues. This result implies that the map ω : S X∗ r → SSym r (S1)has finite image, where SSym r (S1) ⊂ Sym r (S1) is the collection of sets {λ1, . . . , λ r } ∈ Sym r (S1) such that λ1 · · · λr = 1. The image is precisely those eigenvalues additionally satisfying λn 1 = λn 2 = . . . = λnr ∈ μr . Analogously to the previous setting, we also have stratifications (3) SSym r (S1) = ⊔ σ∈Πr SSym rσ (S1), SX∗ r = ⊔ σ∈Πr SX∗ σ , according to the number of repeated eigenvalues. Given σ ∈ Πr , let us denote by Nσ the number of elements {λ1, . . . , λ r } ∈ SSym rσ (S1) such that λn 1 = λn 2 = . . . = λnr = ξ ∈ μr (and λ1 · · · λr = 1). Fixed ξ ∈ μr , the number of possibilities was STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS 7 computed in [8, Corollary 6.7] for the case gcd( n, r ) = 1. Hence, accounting for the r choices of ξ, we get that Nσ = rn ( na1, a 2, . . . , a s ) , at least for n and r coprime (conjecturally, for any n and r). Here, we have used the multinomial coefficients ( na1, a 2, . . . , a s ) = n! a1!a2! . . . a s!( n − a1 − . . . − as)! . Lemma 2.4. Let σ ∈ Πr . We have that SX ∗ σ is a disjoint union of Nσ copies of a subspace SFσ ⊂ SU( r)/ Stab( σ), where the action of Stab( σ) on SU( r) is by conjugation. Proof. Let ( A, B ) ∈ SX ∗ σ . Since A is diagonalizable, ( A, B ) has a representative with the matrix A = diag( λ1, . . . , λ r ) for some {λ1, . . . , λ r } ∈ SSym rσ (S1). This form is unique up to the action of Stab( A) = Stab( σ) by conjugation on B, so it defines a point in SU( r)/ Stab( σ), as claimed. Remark 2.5 . The subspace S Fσ corresponding to S X∗ σ can be characterized explicitly. Let S F0 σ by the preimage of S Fσ under the projection map SU( r) → SU( r)/ Stab( σ). Then S F0 σ is given by the classes of matrices B ∈ SU( r) such that B has no proper invariant subspace in common with A = diag( λ1, . . . , λ r ). Notice that the invariant subspaces of A are certain subspaces generated by the canonical basis vectors of Cr , depending on the partition σ.2.3.2. SU( r)-irreducible representations with distinct eigenvalues. In this section, we shall further study the case of the partition σ0 = (1 , (r) . . ., 1) corresponding to non-repeated eigenvalues. In this setting, SF0 σ0 is the collection of orthonormal bases {b1, . . . , b r } of Cr with volume 1 such that 〈bi1 , . . . , b ik 〉 6 = 〈ei1 , . . . , e ik 〉 for any proper subset {i1, . . . , i k} ⊂ { 1, . . . , r }. Moreover Stab( σ0) = ( S1)r with the action (λ1, . . . , λ r ) · (bij ) = ( λiλ−1 j bij ) for ( λ1, . . . 
, λ r ) ∈ (S1)r and B = ( bij ) ∈ SU( r). Let us first analyze the space SU( r)/(S1)r with the previous action. Recall that SU( r) can be understood as a fiber bundle over S2r−1 (the choice of the first basis vector) whose fiber is a fiber bundle over S2r−3 (the choice of the second basis vector), whose fiber is again a fiber bundle over S2r−5 and so on so forth until the last basis vector, that in principle belongs to S1 but is actually fixed by the volume condition. However, the action of ( S1)r makes things more involved. Given a matrix B = ( b1 | . . . | br ), where bj = ( b1j , . . . , b rj ) are its column vectors, the action allows us to arrange bi1 ∈ R≥0 for all i ≥ 2. Such representant is unique, so we get a projection map ϕ : SU( r)/(S1)r −→ Br onto the “coarse orthant” (c.f. [7, Theorem 10]) Br = {(z, x 2, . . . , x r ) ∈ C × Rr−1 ≥0 | | z|2 + x22 + . . . + x2 r = 1 } ⊂ Sr ⊂ S2r−1. The fiber of ϕ over b = b1 = ( z, x 2, . . . , x r ) is determined by the number of ways in which b can be completed to an orthonormal basis of volume 1. On the interior of Br the fiber ϕ−1(b) is an iterated bundle of spheres, but when b belongs to the boundary of Br , there remains a residual action of Gb = {(λ1, . . . , λ r ) ∈ (S1)r | λ1 = λi if xi = 0 for 2 ≤ i ≤ r} acting on S2r−3 × . . . × S3.If we restrict our attention to S Fσ0 = S F0 σ0 /(S1)r , then we must remove those bases that lead to a reducible representation. Such bases only occur at the boundary of Br , so the previous description also holds for S Fσ0 on the interior of Br .8 A. GONZ ´ALEZ-PRIETO, M. LOGARES, J. MART ´INEZ, AND V. MU ˜NOZ 2.3.3. U( r)-irreducible representations. The case of U( r)-representations is analogous to the previous one, and we get an eigenvalue map ω : X∗ r → Sym r (S1). However, in this case the map no longer has finite image since, for a configuration of eigenvalues {λ1, . . . , λ r } ∈ Sym r (S1) in the image, the element ξ = λn 1 = λn 2 = . . . = λnr is an arbitrary point of S1. We start the analysis of these representations with the simplest case. Lemma 2.6. We have X∗ 1 ∼= S1 × S1.Proof. Since U(1) = S1 is an abelian group, any pair ( A, B ) ∈ U(1) × U(1) = S1 × S1 leads to a representation for the twisted Hopf link. Moreover, since they are 1-dimensional representations, they are automatically irreducible. The higher rank case r > 1 is more involved. As a first step, we have an analogous result to Lemma 2.4. Proposition 2.7. Fix σ ∈ Πr . There is a Zariski locally trivial fiber bundle ωσ : X∗ σ → Sym rσ (S1) with fiber Fσ ⊂ U( r)/ Stab( σ) and base the collection of eigenvalues {λ1, . . . , λ r } ∈ Sym rσ (S1) such that λn 1 = λn 2 = . . . = λnr .Proof. The spectrum map σ : U( r) → Sym rσ (S1) is locally trivial in the Zariski topology. Indeed, on a trivializing open set U ⊂ Sym rσ (S1) we may conjugate each representation ( A, B ) in ω−1 σ (U ) so that A = diag( λ1, . . . , λ r ). In this form, B ∈ U( r) is uniquely determined up to the conjugacy action of Stab( A) = Stab( σ). Remark 2.8 . Again, the subspace Fσ can be characterized explicitly in full analogy with S Fσ as in Remark 2.5, with the only difference that now B ∈ U( r) instead of B ∈ SU( r). To look closer at this fibration, let us consider the variety ˆX∗ r = X∗ r ×Sym r (S1) (S1)r given as the pullback ˆX∗ r / /   X∗ rωσ   (S1)r / / Sym r (S1)where ( S1)r → Sym r (S1) is the quotient map. 
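Before refining this pullback stratum by stratum, here is a quick numerical sanity check (ours, not the paper's) of the component count N_σ = (r/n)·binom(n; a_1, …, a_s) recalled in Section 2.3.1. The helper N_sigma is a hypothetical name, and the two printed sequences reproduce the counts obtained independently later in the text: n − 1 components for SU(2) (Section 3) and (n − 1)(n − 2)/2 components for SU(3) with three distinct eigenvalues (Section 5.2).

```python
from math import factorial

def N_sigma(r, n, a):
    """N_sigma = (r/n) * multinomial(n; a_1, ..., a_s), following the formula
    quoted in Section 2.3.1 (stated there for gcd(n, r) = 1)."""
    rest = n - sum(a)
    if rest < 0:
        return 0
    multinomial = factorial(n)
    for ai in a + [rest]:
        multinomial //= factorial(ai)
    return r * multinomial // n

# sigma = (1, 1) in SU(2): a_1 = 2 groups of size 1  ->  n - 1 components (Section 3)
print([N_sigma(2, n, [2]) for n in range(2, 7)])   # [1, 2, 3, 4, 5]

# sigma = (1, 1, 1) in SU(3): a_1 = 3  ->  (n - 1)(n - 2)/2 components (Section 5.2)
print([N_sigma(3, n, [3]) for n in range(3, 7)])   # [1, 3, 6, 10]
```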
Analogously, if we want to restrict the number of coincident eigenvalues, we set ˆX∗ σ = X∗ σ ×Sym rσ (S1) (S1)r for σ ∈ Πr , and we denote by Σ σ ⊂ (S1)r the image ˆX∗ σ → (S1)r . Lemma 2.9. The fibration ˆX∗ σ → Σσ is trivial, so we have an isomorphism (4) ˆX∗ σ ∼= Fσ × Σσ . Proof. The elements of ˆX∗ σ are tuples ( A, B, λ 1, . . . , λ r ) ∈ U( r)2 × Σσ such that ( A, B ) is an irreducible representation and the spectrum of A is {λ1, . . . , λ r }. Any such representation has a canonical repre-sentative with A = diag( λ1, . . . , λ r ) and thus B ∈ Fσ , leading to the desired isomorphism. In the case that there exists at least one eigenvalue with multiplicity one, we can describe the fiber of the fibration (4) in terms of the one for SU( r). STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS 9 Proposition 2.10. If σ = ( r1, (a1) . . . , r 1, . . . , r s, (as) . . . , r s) ∈ Πr is a partition with r1 = 1 (i.e. there exists at least one eigenvalue with multiplicity one), then we have an isomorphism Fσ ∼= S Fσ × S1. Proof. Fix eigenvalues {λ1, . . . , λ r } ∈ Sym rσ (S1). To simplify the notation, we shall suppose that the first eigenvalue λ1 is simple. Given an element ( A, B, λ 1, . . . , λ r ) ∈ Fσ , the representation can be conjugated to one of the form (A, B ) = (diag( λ1, . . . , λ r ), (b1 | b2 | . . . | br )) , where bi are the column vectors of B. Moreover, since λ1 is a simple eigenvalue, the vector b1 is well defined up to re-scaling by action of S1, the stabilizer of the first eigenvector of A. In this way, if set μ = det( B), we have that B′ = ( μ−1b1 | b2 | . . . | br ) ∈ SU( r)/ Stab( σ). Hence, the map Fσ 3 B 7 → (B′, μ ) ∈ SFσ × S1 provides the desired isomorphism. Regarding the base of the fibration in (4), we can easily describe it in combinatorial terms as follows. Lemma 2.11. Given σ = ( r1, (a1) . . . , r 1, . . . , r s, (as) . . . , r s) ∈ Πr , let N = a1 + . . . + as be the number of different eigenvalues. Then, we have that Σσ ∼= S1 × ∆N −1 μn , where ∆N −1 μn is the collection of tuples (2, . . . ,  N ) with i ∈ μ∗ n = μn − { 1} such that i 6 = j for i 6 = j.Proof. Let ( λ1, . . . , λ N ) be the ordered different eigenvalues of an element of Σ σ . Since λn 1 = λn 2 = . . . = λnN , for i ≥ 2 we can uniquely write λi = λ1i for some i ∈ μ∗ n with all the roots 2, . . . ,  N different. Hence, the map ( λ1, . . . , λ N ) 7 → (λ1,  2, . . . ,  N ) yields the isomorphism. Putting together all these pieces, we finally get an explicit description of X∗ σ , at least in the case where there exists a simple eigenvalue. Corollary 2.12. Let σ = ( r1, (a1) . . . , r 1, . . . , r s, (as) . . . , r s) ∈ Πr be a partition with at least one eigenvalue with multiplicity one, and let N = a1 + . . . + as be the number of different eigenvalues. Then we have an isomorphism X∗ σ = (Σ σ × Fσ ) /S n = (S1 × ∆N −1 μn × SFσ × S1) /S n, where the action of the symmetric group Sn on S1 × ∆N −1 μn is given by permutation of eigenvalues and on Fσ = S Fσ × S1 by permutation of columns. 2.4. Totally reducible representations. In this section, we analyze the representations correspond-ing to the partition π1 = (1 , (r) . . ., 1), that is, the spaces of totally reducible representations XTR r and SXTR r for the U( r) and SU( r) cases, respectively. By Lemma 2.6 we have that for G = U( r) XTR r = Sym r (X∗ 1 ) = Sym r (S1 × S1). Analogously, for G = SU( r) we get SXTR r = SSym r (S1 × S1), where SSym r (S1 ×S1) is the subset of Sym r (S1 ×S1) of sets {(λ1, μ 1), . . . 
, (λr , μ r )} such that λ1 · · · λr = μ1 · · · μr = 1. Our first result in this direction is that the former space is a fibration with fiber the later one. 10 A. GONZ ´ALEZ-PRIETO, M. LOGARES, J. MART ´INEZ, AND V. MU ˜NOZ Proposition 2.13. We have a Zariski locally trivial fibration $σ : Sym rσ (S1 × S1) → S1 × S1, $σ ({(λ1, μ 1), . . . , (λr , μ r )}) = ( λ1 · · · λr , μ 1 · · · μr ), whose fiber is isomorphic to SSym r (S1 × S1).Proof. The base of the fibration is parametrized by ( t, s ) = ( λ1 · · · λr , μ 1 · · · μr ) ∈ S1 × S1, and it is clear that SSym r (S1 × S1) = φ−1(1 , 1). The total space can be trivialized over any ( t, s ) ∈ S1 × S1 via the map φ : $−1 σ (t, s ) × (S1 − {− t}) × (S1 − {− s}) → $−1 σ (S1 − {− t} × S1 − {− s}), given by φ({(λ1, μ 1), . . . , (λr , μ r )}, te iθ , se iα ) = {(eiθ/r λ1, e iα/r μ1), . . . , (eiθ/r λr , e iα/r μr )}, for θ, α ∈ (−π, π ). Now, let us consider the projection onto the first component $ : Sym r (S1 × S1) → Sym r (S1). As in Section 2.3.1, given a partition σ ∈ Πr , we shall focus on the subset Sym rσ (S1) ⊂ Sym r (S1) of configurations of points with coincident entries given by σ, and we set Sym rσ (S1 ×S1) = $−1(Sym rσ (S1)). We analogously consider the determinant 1 case and denoted by SSym rσ (S1×S1) the corresponding space. Furthermore, given a partition σ = ( r1, (a1) . . . , r 1, . . . , r s, (as) . . . , r s) ∈ Πr of r, let Sσ = Sa1 r1 ×· · ·× Sas rs < S r be the subgroup of permutations of type σ. Then, we define Sym σ (S1) = ( S1)r /S σ , and analogously SSym σ (S1) ⊂ Sym σ (S1) for the configurations {μ1, . . . , μ r } such that μ1 · · · μr = 1. Observe in particular that Sym r (S1) = Sym (r)(S1) and SSym r (S1) = SSym (r)(S1). Proposition 2.14. Fixed σ = ( r1, (a1) . . . , r 1, . . . , r s, (as) . . . , r s) ∈ Πr , the fibration (5) SSym rσ (S1 × S1) → SSym rσ (S1). is Zariski locally trivial with fiber SSym σ (S1).Proof. Fix {λ1, . . . , λ r } ∈ SSym rσ (S1) corresponding to the first factor. The algorithm described in [7, Proposition 5] provides a unique way of choosing a logarithm cut α such that we have a natural ordering λi1 = e2πi (α+θ1), λ i2 = e2πi (α+θ2), . . . , λ ir = e2πi (α+θr ) with θ1 ≤ θ2 ≤ . . . ≤ θr . Hence, the r points in the second factor of the fiber $−1({λ1, . . . , λ r }) are uniquely ordered except in the cases where we have coincident values in the first factor. Hence, to remove this ordering, we need to quotient by the action of Sσ , as claimed. Observe that, once a configuration of repeated values fixed, the choice of the logarithm cut α varies algebraically with the points {λ1, . . . , λ r }. Indeed, α corresponds to the argument of one of them, and the chosen value only changes when it surpasses another one. Hence, the construction above is locally trivial in the Zariski topology, as stated. Remark 2.15 . Notice that the base space SSym r (S1) has already been studied in [7, Proposition 5] and turns out to be isomorphic to the ( r − 1)-simplex ∆r−1 = { (u1, . . . , u r ) ∈ Rr ≥0 ∣∣∣∣∣ r ∑ i=1 ui = 0 } . In particular, all the base spaces SSym rσ (S1) of Proposition 2.14 are simplices and thus contractible, so the fiber bundle (5) is topologically trivial. STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS 11 Remark 2.16 . Using some basic algebraic geometry, it is possible to give an equivalent description of Sym r (S1 × S1). Let us fix an almost-complex structure on S1 × S1, which turns it into an elliptic curve Σ. 
In this setting, the elements of Sym r (S1 × S1) correspond exactly to effective divisors on Σ of degree r. Under this interpretation, if Pic r (Σ) is the collection of (holomorphic) line bundles on Σ of degree r,we can consider the map (6) Sym r (S1 × S1) → Pic r (Σ) , D 7 → O (D)that sends each divisor D into its induced line bundle O(D). Note that, since Σ is an elliptic curve, Pic r (Σ) ∼= S1 × S1 topologically. The fiber of this map at O(D) is precisely the collection of effective line bundles linearly equivalent to D, which is the complex projective space P(H0(O(D))). Using that the canonical divisor K of Σ has degree zero, we get that O(K − D) has no global holomorphic sections and thus, by the Riemann-Roch theorem, we get h0(O(D)) = r. Hence, we get that (6) is a bundle over S1 × S1 with fiber CP r−1.2.4.1. The spaces Sym 2(S1 × S1) and SSym 2(S1 × S1). In this section, we shall study more closely configurations of two points in S1 × S1. In the first place, SSym 2(S1 × S1) corresponds to collections of sets {(λ, λ −1), (μ, μ −1)}. We stratify this space according to the possible repetitions of partitions σ ∈ Π2, as in Proposition 2.14. Recall from Remark 2.15 that SSym 2(S1) = I = [ −1, 1], the closed interval. Its strata are SSym 2(1 ,1) (S1) = I = ( −1, 1), the open interval (different eigenvalues), and SSym 2(2) (S1) = {− 1, 1}, the endpoints (repeated eigenvalues). (1) For σ = (1 , 1) we have different values in the first factor. In this case,we get an open cylinder since SSym 2(1 ,1) (S1 × S1) = SSym 2(1 ,1) (S1) × SSym (1 ,1) (S1) = I × S1, where we have used that SSym (1 ,1) (S1) = S1 and that the fiber bundle is topologically trivial since the base space I is contractible. (2) For σ = (2) we have repeated eigenvalues in the first factor. Hence, we get SSym 2(2) (S1 × S1) = SSym 2(2) (S1) × SSym 2(S1) = {± 1} × I. Hence, globally SSym 2(S1 × S1) is the cylinder I × S1 with a gluing in the two boundaries {− 1, 1} × S1 that collapses each circle into an interval. Hence, SSym 2(S1 × S1) is topologically (but not smoothly) a 2-sphere: it is a pillowcase with four orbifold points (see Figure 2). Remark 2.17 . An alternative way of understanding SSym 2(S1 ×S1) is as the quotient SSym 2(S1 ×S1) = (S1 × S1)/Z2, where the Z2 action is given by ( λ, μ ) 7 → (λ−1, μ −1). Regarding the torus as a quotient of [0 , 1] 2, the action becomes ( s, t ) ∼ (1 − s, 1 − t), s, t ∈ [0 , 1], whose quotient space is homeomorphic to the 2-sphere. With respect to the space Sym 2(S1 × S1), notice that by Proposition 2.13 we have a description as S2-bundle S2 → Sym 2(S1 × S1) → S1 × S1. Observe that this perfectly fits with the description in Remark 2.16 since CP 1 ∼= S2. The monodromy of this fiber bundle is given by ( λ, μ ) 7 → (eiπ λ, e iπ μ) = ( −λ, −μ). In coordinates ( s, t ), it is given by (s, t ) 7 → (s + 1 /2, t + 1 /2), which is orientation preserving, and interchanges the four orbifold points of S2 in pairs. 12 A. GONZ ´ALEZ-PRIETO, M. LOGARES, J. MART ´INEZ, AND V. MU ˜NOZ 2.4.2. The spaces Sym 3(S1 ×S1) and SSym 3(S1 ×S1). Now, let us look at the 3-fold symmetric product SSym 3(S1 × S1). We stratify this space according to the number of repeated eigenvalues in the first factor: (1) σ = (1 , 1, 1) (three different eigenvalues). In this case, we have SSym 3(1 ,1,1) (S1 × S1) = SSym 3(1 ,1,1) (S1) × SSym (1 ,1,1) (S1) = T × S1 × S1, where T = SSym 3(1 ,1,1) (S1) is the open 2-dimensional triangle as described by Remark 2.15. (2) σ = (1 , 2) (two coincident eigenvalues). 
This corresponds to the interior of the edges of the triangle T above. In this case, we have SSym 3(1 ,2) (S1 × S1) = SSym 3(1 ,2) (S1) × SSym (1 ,2) (S1). Observe that, solving for the last eigenvalue, we have SSym (1 ,2) (S1) = Sym 2(S1), which is a M¨ obius band (see [7, Corollary 6]). Hence, over each point of the interior of the edges T we find a M¨ obius band. (3) σ = (1 , 3) (three equal eigenvalues). This corresponds to the vertices of the triangle T. Now, SSym 3(3) (S1 × S1) = SSym 3(3) (S1) × SSym 3(S1), where this later space is a closed triangle. Hence, over each vertex of T, we attach a trian-gle. Each of these triangles glues with the M¨ obius bands of the incoming edges through their boundaries. With respect to the total space Sym 2(S1 × S1), notice that by Remark 2.16 we have a description as a fiber bundle CP 2 → Sym 3(S1 × S1) → S1 × S1. SU(2) -character variety For n = 2, there are only two partitions, π0 = (2) and π1 = (1 , 1), that correspond to the set of irreducible representations and the set of totally reducible representations, respectively. (1) The partition π0 = (2) yields the space of irreducible representations S X∗ 2 . Notice that the only non-empty stratum for the stratification (3) of S X∗ 2 corresponds to σ = (1 , 1) since, in the case σ = (2), the matrix A of a representation ρ = ( A, B ) is a multiple of the identity, so every representation is reducible. Following the notation of Section 2.3, the partition σ = (1 , 1) has s = 1, r1 = 1 and a1 = 2. Hence, according to Lemma 2.4, we have that S X∗ (1 ,1) has as many components S F(1 ,1) as N(1 ,1) = 2 n (n 2 ) = 2 nn!2! ( n − 2)! = n − 1. The stabilizer for this stratum is Stab((1 , 1)) = S1 × S1. To determine the space S F(1 ,1) ⊂ SU(2) /(S1 × S1), we observe that, by the discussion of Section 2.3.2, we have a fibration ϕ : S F(1 ,1) → B2 ⊂ S2. On the interior of B2, this map is an isomorphism. On the boundary, we get no irreducible representations since the first vector of the basis must be colinear with e1. Hence, we get that SF(1 ,1) is homeomorphic to D = Int( B2), which is a 2-dimensional open disc. STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS 13 Remark 3.1 . An equivalent way of describing S F(1 ,1) is the following. With the notation of Remark 2.5, we have that S F0(1 ,1) is the collection of orthonormal bases of C2 whose vectors are not proportional to any of the vectors of the canonical basis. Since these bases are determined by the first vector, which can be seen as an element ( a, b ) ∈ S3 ⊂ C2, we get that S F0(1 ,1) = S3 − S1 is the 3-sphere minus the 1-sphere of equation b = 0. Now, the action of S1 on ( a, b ) ∈ S3 ⊂ C2 is λ · (a, b ) = ( a, λb ), so the quotient is the weighted projective space SF0(1 ,1) /(S1 × S1) = ( S3 − S1)/S 1 = CP 1(0 ,1) − { [1 : 0] }. This latter space can be understood as the set of ( a, b ) ∈ S3 ⊂ C2 such that b ∈ R>0 so CP 1(0 ,1) − { [1 : 0] } is the upper hemisphere of S2 and thus homeomorphic to a disc. (2) The partition π1 = (1 , 1) corresponds to the set of totally reducible representations S XTR 2 . By the results of Section 2.4, we have S XTR 2 = SSym 2(S1 × S1), which is topologically (but not smoothly) a 2-sphere. Summarising from the discussion above, we have that SX2 = S X(1 ,1) 2 t SX(2) 2 = S XTR 2 t SX∗ (1 ,1) . The global picture of S X2 is as follows (see also Figure 2). First, the space S XTR 2 is a 2-dimensional sphere. 
To it, we attach n − 1 open discs D corresponding the the components of S X∗ (1 ,1) , each of them attached to S XTR 2 through their boundary S1 ⊂ D. These boundaries correspond to reducible representations given by A ∼ diag( λ, λ −1) and B ∼ diag( μ, μ −1), where λ ∈ μ+2n, μ ∈ S1. They all inject into S XTR 2 as n − 1 disjoint circles. The space is homotopically equivalent to the wedge of n copies of 2-spheres. Figure 2. The global picture of X2 = X(Γ n, SU(2)). 4. Homotopy type of the SL(2 , C)-character variety of twisted Hopf links In this section, we study the homotopy type of the SL(2 , C)-character variety of the twisted Hopf link. Recall that this variety is X(Γ n, SL(2 , C)) = Hom (Γ n, SL(2 , C)) // SL(2 , C) = {(A, B ) ∈ SL(2 , C) | [An, B ] = id }// SL(2 , C). This is a complex algebraic variety whose motive has been previously studied in . The aim of this section is to prove that the natural inclusion map S X2 = X(Γ n, SU(2)) ↪→ X(Γ n, SL(2 , C)) is a defor-mation retract. Notice that SU(2) ↪→ SL(2 , C) is the maximal compact subgroup and that SL(2 , C) is the complexification SL(2 , C) = SU(2) C of SU(2). 14 A. GONZ ´ALEZ-PRIETO, M. LOGARES, J. MART ´INEZ, AND V. MU ˜NOZ As in the SU(2)-case, we can decompose the SL(2 , C)-character variety into its reducible and irre-ducible locus X(Γ n, SL(2 , C)) = XTR (Γ n, SL(2 , C)) t X∗(Γ n, SL(2 , C)) . In sharp contrast with the SU(2)-case, not every reducible representation is semisimple. However, it turns out that, for the conjugacy action of SL(2 , C), every representation has a semisimple represen-tation in the closure of its orbit. This implies that every reducible representation is identified in the GIT quotient with a semisimple one, so the reducible locus of the character variety XTR (Γ n, SL(2 , C)) parametrizes totally reducible representations up to equivalence, which justifies the assumed notation. Regarding the irreducible locus X∗(Γ n, SL(2 , C)), recall from the results of that it has n − 1components, corresponding to the possible eigenvalues of the matrix A of a representation ( A, B ) ∈ X∗(Γ n, SL(2 , C)). Fixed eigenvalues λ, λ −1 ∈ C∗ for A, we can choose a representative of the form (A, B ) ∼ (( λ 00 λ−1 ) , (a cb d )) , with bc 6 = 0 since ( A, B ) is irreducible. This representative is not unique, since we get a residual action of C∗ on B ∈ SL(2 , C) given by λ · (a cb d ) = ( a λ−1cλb d ) , λ ∈ C∗. As a consequence, we get that each of these components of X∗(Γ n, SL(2 , C)) is isomorphic to the space SFC := {(a, b, c, d ) ∈ C4 | ad − bc = 1 , bc 6 = 0 } // C∗. The invariant coordinates for this C∗-action are p := bc and a, d , so we finally get the description SFC = {(a, d, p ) ∈ C3 | ad − p = 1 , p 6 = 0 } ∼= C2 − H, where H is the hyperbola H = {ad = 1 } ∼= C∗ and the later isomorphism is ( a, d, p ) 7 → (a, d ). Inside this variety, the corresponding component S F of the SU(2)-character variety (see Section 3) is the open disc SF = S F(1 ,1) = {(a, ¯a, |a|2 − 1) ∈ SFC | | a| < 1} ∼= D. Indeed, given ( a, ¯a, |a|2 − 1) ∈ SF, the corresponding matrix B is B = ( a −√1 − | a|2 √1 − | a|2 ¯a ) . Remark 4.1 . The spaces S F and S FC are not homotopic. A straightforward computation with compactly supported cohomology shows that Hkc (S FC) = Hkc (C2 − C∗) = Z for k = 2 , 3, 4 and Hkc (S FC) = 0 otherwise. 
Hence, the Betti numbers of S FC are bk(S FC) = 1 for k = 0 , 1, 2 and bk(S FC) = 0 for k ≥ 3, showing that S FC cannot be contractible, as it is S F.Now, in S FC, the set p = 0 corresponds exactly to totally reducible representations in the boundary of the irreducible locus X∗(Γ n, SL(2 , C)). Hence, the closure SFC := {(a, d, p ) ∈ C3 | ad − p = 1 } ∼= C2, with coordinates ( a, d ) ∈ C2, is the collection of representations ( A, B ) ∈ X∗(Γ n, SL(2 , C)) where A has fixed eigenvalues λ, λ −1. The corresponding closure of the component of the SU(2)-character variety is thus the closed disc SF = {(a, ¯a, |a|2 − 1) ∈ SFC | | a| ≤ 1} ∼= D, with coordinates ( a, ¯a) ∈ C2.STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS 15 Lemma 4.2. Let SF′ C = {(a, d ) ∈ SFC | | a| = |d|}. There exists a smooth homotopy Ht : S FC → SFC, for t ∈ [0 , 1] with H0 = id SFC , H1(S FC) ⊆ SF′ C , and Ht|SF′ C = id SF′ C for all t. Under this homotopy, the space SFC − SFC of SL(2 , C)-reducible representations remains invariant and is rescaled into the space SF − SF of SU(2) -reducible representations. Proof. Given ( a, d ) ∈ SFC = C2, let us use polar coordinates a = re iα and d = se iβ , with r, s ∈ R≥0 and α, β ∈ [0 , 2π). Consider the auxiliary continuous homotopies h1, h 2 : R2 ≥0 × [0 , 1] → R≥0 given by h1 t (r, s ) = { (1 − t)r + t√rs for r ≥ s, rs (1 −t)s+t√rs for s > r, h2 t (r, s ) = { rs (1 −t)r+t√rs for r > s, (1 − t)s + t√rs for s ≥ r. Observe that for all r, s ≥ 0 we have h10(r, s ) = r, h20(r, s ) = s, h11(r, s ) = h21(r, s ) = √rs . Moreover, we have h1 t (r, r ) = h2 t (r, r ) = r and h1 t (r, s ) · h2 t (r, s ) = rs for all t ∈ [0 , 1]. In this setting, we consider the homotopy H : S FC × [0 , 1] → SFC given by Ht(a, d ) = (h1 t (r, s )eiα , h 2 t (r, s )eiβ ) . Notice that this map makes sense even for r = 0 or s = 0 since, in these cases, h1 t (0 , s ) = 0 and h2 t (r, 0) = 0, respectively. This map satisfies the following properties: (1) For t = 0 we get, for any ( a, d ) ∈ SFC, H0(a, d ) = (re iα , se iβ ) = ( a, d ). Thus, H0 = id SFC .(2) For t = 1 we get, for any ( a, d ) ∈ SFC, H1(a, d ) = (√rs e iα , √rs e −iβ ) . In particular, H1(a, d ) ∈ SF′ C for all a, d ∈ SFC.(3) For any point of the form ( a = re iα , d = re iβ ) ∈ SF′ C , we get Ht(a, d ) = (re iα , re iβ ) = ( a, d )for all t ∈ [0 , 1]. In particular, Ht|SF′ C = id SF′ C .(4) For a point of the form ( a = re iα , a −1 = r−1e−iα ) ∈ SFC − SFC (equivalently, for p = 0), we have Ht(a, a −1) = (h1 t (r, r −1) eiα , h 2 t (r, r −1) e−iα ) for all t ∈ [0 , 1]. Hence, since h1 t (r, r −1) · h2 t (r, r −1) = 1 for all t, we have that Ht(a, a −1) ∈ SFC −SFC for all t or, in other words, S FC −SFC is invariant. The homotopy there is a rescaling. Therefore, property (1) shows that this map defines a homotopy equivalence between H0 = id SFC and H1. Moreover, by properties (2) and (3), H defines a strong deformation retraction onto S F′ C . By (4), this homotopy has the desired property on S FC − SFC. Remark 4.3 . Since S F ⊂ SF′ C , this space remains fixed under the previous retraction. Figure 3 shows the geometric interpretation of the homotopy of the proof of Lemma 4.2 in the quadrant ( r, s ): the hyperbolas rs = k, for constant k, are retracted onto the plane {s = r} through the natural R≥0-action λ·(r, s ) = ( λ−1r, λs ) for λ ∈ R≥0. In particular, the hyperbola S FC −SFC = {rs = 1 } remains invariant, and the plane S F′ C = {r = s} is fixed. 16 A. GONZ ´ALEZ-PRIETO, M. LOGARES, J. MART ´INEZ, AND V. MU ˜NOZ Figure 3. 
Deformation retract of S FC onto S F′ C in the ( r, s )-plane. Lemma 4.4. Let SF′′ C = {(a, ¯a) ∈ SFC }. There exists a smooth homotopy Ht : S F′ C → SFC, for t ∈ [0 , 1] with H0 = id SF′ C , H1(S F′ C ) ⊆ SF′′ C , and Ht|SF′′ C = id SF′′ C for all t.Proof. Let us write a = re iα and d = re iβ with r, s ∈ R≥0 and α, β ∈ [0 , 2π). We consider the homotopy H : S F′ C × [0 , 1] → SFC given by Ht(a, d ) = { ( re iα , r ((1 − t)eiβ + te −iα )) if a, d 6 = 0 , (0 , 0) if a = d = 0 . Observe that, since |a| = |d| in S F′ C , then any of them vanishes if and only if both vanish. We obviously have H0 = id SF′ C and H1(a, d ) = ( a, ¯a) ∈ SF′′ C . Moreover, the points of the form (a = re iα , ¯a = re −iα ) remain fixed, so we directly get that H is the required homotopy. Remark 4.5 . Since S F ⊂ SF′′ C , this space remains fixed under the previous retraction. Remark 4.6 . Notice that, throughout the homotopy of Lemma 4.4, the norm of the second component varies, so the target space of this homotopy is the whole S FC and not the subspace S F′ C . This is related to the following fact. Let ∆ = {(a, a )} ⊆ SF′ C be the diagonal. The space S F′ C − ∆ can be easily retracted to S F′′ C by joining ( a, d ) with ( a, ¯a) through the shortest arc joining d and ¯ a. However, on ∆ = C, retracting it into S F′′ C is the same as homotopying the conjugation map f : C → C, f (a) = ¯ a,into the identity map. If we restrict to f |S1 : S1 → S1, this is impossible, but it is possible as maps on C if we allow the homotopy to pass through 0 ∈ C with the simple linear homotopy. Now, observe that S F′′ C = {(a, ¯a)} radially retracts onto the disc S F = S F′′ C ∩ {| a| ≤ 1}. Therefore, we have proven the following result. Proposition 4.7. Each of the components SFC of the closure of the irreducible locus of the SL(2 , C)-character variety strongly retracts onto the corresponding component SF of the closure of the irreducible locus of the SU(2) -character variety. On the set of totally reducible representations SFC − SFC, the homotopy is just linear rescaling onto SF − SF.Remark 4.8 . At the light of Remark 4.1, Proposition 4.7 has a clear interpretation: the non-trivial elements of H1(S FC) and H2(S FC) are annihilated when we glue back the hyperbola {ad = 1 } of totally reducible representations. Hence, only after taking the closures, the inclusion S F ↪→ SFC becomes a homotopy equivalence. STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS 17 On the other hand, we can easily prove that the natural inclusion S XTR 2 ↪→ XTR (Γ n, SL(2 , C)) is a deformation retract. Proposition 4.9. The totally reducible locus SXTR 2 of the SU(2) -character variety is a strong deforma-tion retract of the reducible locus of the SL(2 , C)-character variety XTR (Γ n, SL(2 , C)) through a linear rescaling. Proof. By , the reducible locus is given by pairs ( λ, μ ) ∈ (C∗)2 quotiented by the Z2-action that iden-tifies ( λ, μ ) ∼ (λ−1, μ −1). The radial deformation retract of ( C∗)2 onto ( S1)2 descends to a deformation retraction of ( C∗)2/Z2 onto S XTR 2 ∼= SSym 2(S1 × S1) (c.f. Remark 2.17), since it commutes with the Z2-action. Now, observe that on the common locus of totally reducible representations that lie in the closure of irreducible ones, the homotopies of Proposition 4.7 and 4.9 coincide. Hence, we can glue them together to give rise to a global homotopy H : X(Γ n, SL(2 , C)) × [0 , 1] → X(Γ n, SL(2 , C)) such that H0 =id X(Γ n,SL(2 ,C)) , Ht|X(Γ n,SU(2)) = id X(Γ n,SU(2)) and H1(X(Γ n, SL(2 , C))) ⊆ X(Γ n, SU(2)). 
Therefore, we have proven the following result. Theorem 4.10. The SU(2) -character variety SX2 = X(Γ n, SU(2)) of a twisted Hopf link is a strong deformation retract of the SL(2 , C)-character variety X(Γ n, SL(2 , C)) . U(2) and SU(3) character varieties In this section, using the results of Section 2.3 and 2.4, we shall describe the stratification in the character varieties X2 = X(Γ n, U(2)) and S X3 = X(Γ n, SU(3)). As we will see, a much more involved geometry arises in these cases, with many strata interacting with non-trivial intersection patterns. 5.1. U(2) -character variety. As in Section 3, we get again two possible cases arising from the parti-tions π0 = (2) and π1 = (1 , 1). (1) π0 = (2) corresponds to irreducible representations X∗ 2 and, as for SU(2), the only configuration of eigenvalues that contributes is σ = (1 , 1) (which has parameters r1 = 1 and a1 = 2 in the notation of Section 2.3). Hence, the number of different eigenvalues is N = a1 = 2 and thus by Corollary 2.12, we have that X∗ 2 = X∗ (1 ,1) = ( S1 × μ∗ n × F(1 ,1) )/Z2, where we have used that ∆ 1 μn = μ∗ n . The action of Z2 on S1 × μ∗ n × F(1 ,1) is ( λ, , B ) 7 → (λ,  −1, P 0BP −10 ), where P0 is the permutation matrix that exchanges the columns of B. We have two options: (a) If n is odd, then there exists a unique representative ( λ, , B ) with Im( ) > 0. Hence, using that F(1 ,1) = S F(1 ,1) × S1 = D × S1 with D an open 2-dimensional disc (c.f. Section 3), we get X∗ 2 = S1 × μ+ n × D × S1, where μ+ n = { ∈ μn | Im( ) > 0}. These are ( n − 1) /2 copies of S1 × D × S1.(b) If n is even, we still get ( n − 2) /2 copies of S1 × D × S1, corresponding to the roots of unity with Im( ) > 0. However, for  = −1 we get a residual action of Z2 on S1 × D × S1.Since the action in the first copy of S1 is λ = e2πiθ 7 → − λ = e2πi (θ+1 /2) , there is an unique representative with 0 ≤ θ < 1/2. Hence, in this case we get a contribution I0 × D × S1,where I0 = [0 , 1/2). 18 A. GONZ ´ALEZ-PRIETO, M. LOGARES, J. MART ´INEZ, AND V. MU ˜NOZ (2) π1 = (1 , 1) corresponds to the set of totally reducible representations XTR 2 = Sym 2(S1 × S1). As proven in Section 2.4.1, we have a S2-fibration S2 → Sym 2(S1 × S1) → S1 × S1. The monodromy of this fibration is ( λ, μ ) 7 → (eiπ λ, e iπ μ) = ( −λ, −μ), that is an orientation preserving action interchanging the four orbifold points of S2 in pairs. 5.2. SU(3) -character variety. In this case, we have three possible semisimple types corresponding to the partitions π0 = (3), π1 = (1 , 1, 1) and π2 = (1 , 2). • For π0 = (3) we get irreducible representations S X∗ 3 . We stratify according to the repeated eigenvalues of the matrix A of a representation ( A, B ) ∈ SX∗ 3 as in (3) by SX∗ 3 = S X∗ (1 ,1,1) t SX∗ (1 ,2) t SX∗ (3) . (1) σ = (1 , 1, 1) (three different eigenvalues). By the results of Section 2.3.2, we have that the space S X∗ (1 ,1,1) is a collection of N(1 ,1,1) = 3 n (n 3 ) = (n − 1)( n − 2) 2copies of a certain subspace S F(1 ,1,1) ⊂ SU(3) /(S1)3.For the projection map onto the coarse orthant ϕ : SU(3) /(S1)3 → B3 = {(z, x 2, x 3) ∈ C × R2 ≥0 | | z|2 + x22 + x23 = 1 } ⊂ S3, on the interior of B3 the preimage is exactly S F(1 ,1,1) and the fiber is S3. On an edge of ∂B 3, let us say the one in which x2 = 0, fixed a first column ( z, 0, x 3) ∈ ∂B 3, the fiber is isomorphic to the quotient under the action of S1 × S1 of the space { (w1, w 2, w 3) ∈ C3 ∣∣∣∣ w3 = − w1 ¯zx3 , |w1|2 + |w2|2 + |w3|2 = 1 , (A, B ) is irreducible } . 
Since, ( A, B ) must be irreducible, we have that w1, w 3 6 = 0 and hence, using the action of S1 × S1 we find an unique point in the orbit with w1 ∈ R>0. Hence, the fiber of S F(1 ,1,1) on an edge is the orthant B2 ⊂ S3, which is a 2-dimensional disc. Finally, the vertices of the orthant are not included since otherwise the representation would be reducible. (2) σ = (1 , 2) (two coincident eigenvalues). In this case, the space S X∗ (1 ,2) is a collection of N(1 ,2) = 3 n ( n 2 1 ) = 3( n − 1)( n − 2) 2copies of a certain subspace F(1 ,2) ⊂ SU(3) /(U(2) × S1), where U(2) × S1 is the stabilizer of the type σ = (2 , 1). Using the action of this stabilizer, any representation ( A, B ) can be put in the form A = diag( λ1, λ 1, λ 2) and B =  x1 y1 z1 0 y2 z2 x3 y3 z3  . The stabilizer of this shape of matrices is now ( S1)3, with action as in Section 2.3.2. This corresponds exactly to an edge in case (1), so we get that F(1 ,2) ∼= I × B2.(3) σ = (3) (three equal eigenvalues). There are no elements in this stratum since they would be reducible. • For π1 = (1 , 1, 1) we get totally reducible representations, which are S XTR 3 = SSym 3(S1 × S1). This space is described in Section 2.4.2. • For π2 = (1 , 2), we get that S X(1 ,2) 3 = X2. This space was described in Section 5.1. STRATIFICATION OF SU( r)-CHARACTER VARIETIES OF TWISTED HOPF LINKS 19 Despite this description provides an accurate geometric picture, the intersection pattern it describes is too involved to compute the homotopy types of these character varieties. A prospective future work is to seek alternative approaches that fully characterize these intersections, enabling effective homological and homotopical calculations in this higher rank case. References H.U. Boden, Representations of orbifold groups and parabolic bundles , Comment. Math. Helv. 66 (1991) 389–447. G. Burde and K. Murasugi, Links and Seifert fiber spaces , Duke Math. J. 37 (1970) 89–93. H. Chen and T. Yu, The SL(2 , C)-character variety of the Borromean link , arXiv:2202.07429. C. Florentino, P. Gothen and A. Nozad, Homotopy type of moduli spaces of G-Higgs bundles and reducibility of the nilpotent cone , Bull. Sci. Math. 150 (2019) 84–101. C. Florentino and S. Lawton, Flawed groups and the topology of character varieties , arXiv:2012.08481. M. Furuta and B. Steer, Seifert Fibred Homology 3-Spheres and the Yang-Mills equations on Riemann surfaces with marked points , Adv. Math. 96 (1992) 38-102. ´A. Gonz´ alez-Prieto, J. Mart´ ınez and V. Mu˜ noz, Geometry of SU(3) -character varieties of torus knots , Topology and its Applications. Special volume in honor to 70th birthday of J.M.R. Sanjurjo. To appear ´A. Gonz´ alez-Prieto and V. Mu˜ noz, Motive of the SL 4-character variety of torus knots , Journal of Algebra 610 (2022) 852–895. ´A. Gonz´ alez-Prieto and V. Mu˜ noz, Representation varieties of twisted Hopf links , Mediterranean Journal of Mathe-matics, 2023, article 89. J. Mart´ ınez and V. Mu˜ noz, The SU(2) -character varieties of torus knots , Rocky Mountain J. Math. (2) 45 (2015) 583–600. V.B. Mehta and C.S. Seshadri, Moduli of Vector Bundles on Curves with Parabolic Structures , Math. Ann. 248 (1980) 205–240. V. Mu˜ noz, The SL(2 , C)-character varieties of torus knots , Rev. Mat. Complut. 22 (2009) 489–497. B. Nasatyr and B. Steer, Orbifold Riemann surfaces and the Yang-Mills-Higgs equations , Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 22 (1995) 595–643. P.E. Newstead, Topological properties of some spaces of stable bundles , Topology 6 (1967) 241–262. P.E. 
Newstead, Introduction to moduli problems and orbit spaces, TIFR Lect. Notes, 51 (1978).
Departamento de Álgebra, Geometría y Topología, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Plaza Ciencias 3, 28040 Madrid, Spain. Instituto de Ciencias Matemáticas (CSIC-UAM-UCM-UC3M), C. Nicolás Cabrera 13-15, 28049 Madrid, Spain. Email address: [email protected]
Departamento de Álgebra, Geometría y Topología, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Plaza Ciencias 3, 28040 Madrid, Spain. Email address: [email protected]
Departamento de Matemática Aplicada, Ciencia e Ingeniería de los Materiales y Tecnología Electrónica, E.S. Ciencias Experimentales y Tecnología, Universidad Rey Juan Carlos, C. Tulipán 0, 28933 Móstoles, Madrid, Spain. Email address: [email protected]
Instituto de Matemática Interdisciplinar (IMI) and Departamento de Álgebra, Geometría y Topología, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, Plaza Ciencias 3, 28040 Madrid, Spain. Email address: [email protected]
Packing Spheres into a Minimum-Height Parabolic Container
by Yuriy Stoyan 1, Georgiy Yaskov 1, Tetyana Romanova 1,2, Igor Litvinchev 3, José Manuel Velarde Cantú 4 and Mauricio López Acosta 4
1 Pidhornyi Institute of Mechanical Engineering Problems, vul. Komunalnykiv, 2/10, 61046 Kharkiv, Ukraine
2 Leeds University Business School, University of Leeds, Leeds LS2 9JT, UK
3 Graduate Program in Systems Engineering, Nuevo Leon State University (UANL), Av. Universidad s/n, Col. Ciudad Universitaria, San Nicolas de los Garza 66455, Mexico
4 Technological Institute of Sonora (ITSON), Navojoa-City 85870, Mexico
Author to whom correspondence should be addressed.
Axioms2024, 13(6), 396; Submission received: 2 May 2024 / Revised: 4 June 2024 / Accepted: 7 June 2024 / Published: 13 June 2024 (This article belongs to the Special Issue Numerical Analysis and Optimization) Download keyboard_arrow_down Download PDF Download PDF with Cover Download XML Download Epub Browse Figures Versions Notes Abstract Sphere packing consists of placing several spheres in a container without mutual overlapping. While packing into regular-shape containers is well explored, less attention is focused on containers with nonlinear boundaries, such as ellipsoids or paraboloids. Packing n-dimensional spheres into a minimum-height container bounded by a parabolic surface is formulated. The minimum allowable distances between spheres as well as between spheres and the container boundary are considered. A normalized Φ-function is used for analytical description of the containment constraints. A nonlinear programming model for the packing problem is provided. A solution algorithm based on the feasible directions approach and a decomposition technique is proposed. The computational results for problem instances with various space dimensions, different numbers of spheres and their radii, the minimal allowable distances and the parameters of the parabolic container are presented to demonstrate the efficiency of the proposed approach. Keywords: packing; multidimensional spheres; paraboloid; Φ-function; nonlinear optimization MSC: 05B40; 52C15; 52C17; 90C26 1. Introduction Sphere packing is a well-studied area of research that involves arranging identical or non-identical spherical items within a given volume (container), subject to certain constraints. This problem has a wide range of applications, making it a versatile and important topic in both theoretical and practical contexts . According to the typology of packing and cutting problems introduced in , packing problems are considered in two main formulations: open dimension problems (ODP) and knapsack problems. The ODP is aimed at optimizing the dimension(s) of a container, while packing the maximum number of identical objects or maximizing the total volume of packed objects is a knapsack problem. Typically, continuous nonlinear programming (NLP) models are used to formulate an ODP, while for the knapsack problem, mixed integer NLP models are implemented. Our interest in irregular containers is motivated by the following considerations. Packing problems for regular-shaped containers (rectangles, circles) are well studied for 2D objects, such as circles , ovals [4,5] and ellipses [6,7,8]. Rich theoretical and empirical results are presented in these papers, where various NLP models and exact/heuristic/metaheuristic solution techniques are accompanied by extensive computational experiments to demonstrate the efficiency of the proposed approaches. Different NLP models and solution algorithms for packing 3D objects into regular 3D containers (cuboids, spheres, and cylinders) can be found, e.g., in [9,10] for spherical and in for ellipsoidal shapes, together with corresponding empirical results obtained for different numbers of objects and various container shapes. In all these works, geometric tools for modeling non-overlapping and containment conditions in Euclidean and non-Euclidean [4,5,9] metrics are provided. Algorithms based on combinations of smart heuristics and nonlinear optimization techniques are designed. 
The proposed solution approaches allow us to find the optimal solutions for small and medium-sized instances, while reasonably good feasible solutions are obtained for larger instances. However, as was highlighted in review papers [12,13,14,15], challenging packing problems in irregular containers are much less investigated. Several publications consider ellipsoidal containers [16,17], while there are only a few works focusing on packing for multiply connected domains and cardioids and paraboloids . Sphere packing into irregular containers arises in, e.g., material science [20,21] and nanotechnology . A simple application of sphere packing into a parabolic container can be found in the food industry , where a parabolic dish container must be designed to store candies. To the best of our knowledge, in the n-dimensional case, the problem of packing spheres into an optimized parabolic container has not been considered before. A brief review of the papers related to packing in irregular containers is provided below. An approach to constructing analytical non-intersection and containment conditions for non-oriented convex two-dimensional objects, defined by second-order curves, is proposed in . This approach was applied to a problem of packing circles into an ellipse and minimizing the ellipse size. Packing algorithms applied to different-shaped two-dimensional domains are studied in , including rectangles, ellipses, crosses, multiply connected domains and even cardioid shapes. The authors introduce a novel approach centered around the concept of “image” disks, enabling the study of packing within fixed containers. Paper focuses on Apollonian circle packing. The method has been used in various models, including geological sheer bands. Mathematical equations utilizing hyperbolas and ellipses are applied. This approach is applicable to a generic, closed, convex contour given the parametrization of its boundary. In the aerospace industry, packing into parabolic or other non-traditional containers is a significant challenge due to the specific shapes and delicate nature of many components . The container is divided by horizontal racks into sub-containers. The proposed mathematical model considers the minimal and maximal allowable distances between objects subject to the behavior constraints of the mechanical system (equilibrium, moments of inertia and stability constraints). The paper describes a solution approach based on the multistart strategy, Shor’s r-algorithm and accelerated search for the terminal nodes of the solution tree. The objective of this paper is to develop a modeling and solutions approach to an ODP that consists of packing n-dimensional spheres into a minimum-height parabolic container. For analytical description of the placement constraints, the Φ-function technique is used. This approach allows us to present mathematical models of optimized packing problems in the form of continuous NLP problems. To describe the containment of spheres in the parabolic container, a new Φ-function for the n D case is introduced. Using a section of the n D paraboloid and spheres by hyperplanes, it is iteratively reduced to consideration of the Φ-function in the 2D case. The Φ-function involves an additional variable parameter that is dynamically adjusted by solving a one-dimensional optimization problem. An approach based on the feasible directions method (FDM) is developed considering the special properties of the Φ-function. 
The contributions of the paper are as follows:
A new problem of packing spheres into a minimum-height parabolic container in n-dimensional space;
A new Φ-function for analytical description of the containment of a sphere into a parabolic container in n-dimensional space;
An approach based on the feasible directions scheme considering the specific characteristics of the Φ-function;
New benchmarks for various sphere radii and the parameters of the parabolic container in n-dimensional space for n = 2, 3, 4, 5.
The remainder of this paper is organized as follows. Section 2 describes the problem statement. Section 3 introduces geometrical tools for constructing a mathematical model of the packing problem. A mathematical model is formulated in Section 4. Section 5 presents a modification of the FDM. Section 6 provides the computational results for problem instances in several dimensions, with different numbers of spheres and their radii and various values of the minimal allowable distances and the parameters of the parabolic container. Section 7 concludes.
2. Problem Statement
Let a convex domain bounded by a parabolic surface and a hyperplane be defined in n-dimensional Euclidean space ℝ^n as follows: P_n(h) = P_n ∩ H_n, where P_n = {X = (x_1, x_2, …, x_n) ∈ ℝ^n : ∑_{i=1}^{n−1} x_i^2 − 2p x_n ≤ 0} and H_n = {X ∈ ℝ^n : x_n − h ≤ 0}, i.e.,
P_n(h) = {X ∈ ℝ^n : ∑_{i=1}^{n−1} x_i^2 − 2p x_n ≤ 0, x_n − h ≤ 0}. (1)
Further, we refer to the domain P_n(h) as a container of variable height h > 0, with the predefined parameter p > 0. Let a collection of nD spheres S_j, j ∈ J = {1, 2, …, m}, with variable centers y_j = (y_{j1}, y_{j2}, …, y_{jn}) ∈ ℝ^n, j ∈ J, be given and denoted by
S_j(y_j) = {X ∈ ℝ^n : ‖X − y_j‖ − r_j ≤ 0}, j ∈ J. (2)
In addition, the minimal allowable distances between each pair of spheres S_t(y_t) and S_j(y_j), as well as between a sphere S_j(y_j) and the boundary of the container P_n(h), are given, respectively, as δ_tj and δ_j.
Packing problem. Pack spheres S_j(y_j), j ∈ J, into the minimum-height container P_n(h), considering the minimal allowable distances δ_tj, t < j ∈ J, and δ_j, j ∈ J.
To describe the placement constraints of the packing problem analytically, the Φ-function technique is used. For the reader's convenience, the main definitions of the phi-function are provided in Appendix A. More details can be found in, e.g., Chapter 15 of .
The distance constraint for two spheres can be defined using the adjusted Φ-function in the form
Φ_tj(y_t, y_j) = ‖y_t − y_j‖^2 − (r_t + r_j + δ_tj)^2.
Therefore, Φ_tj(y_t, y_j) ≥ 0 ⇔ dist(S_t(y_t), S_j(y_j)) ≥ δ_tj, where
dist(S_t(y_t), S_j(y_j)) = min_{a ∈ S_t(y_t), b ∈ S_j(y_j)} ‖a − b‖, a = (a_1, a_2, …, a_n), b = (b_1, b_2, …, b_n).
In Section 3, we introduce a continuous and everywhere defined function that allows us to describe the containment of each sphere into a parabolic container.
3.
The Φ-Function for Containment Constraints To describe analytically the containment constraint, 𝑆 𝑗(𝑦 𝑗)⊂𝑃 𝑛(ℎ)⇔int 𝑆 𝑗(𝑦 𝑗)∩𝑃∗𝑛(ℎ)=∅S j(y j)⊂P n(h)⇔int S j(y j)∩P n(h)=∅, let us define a phi-function for an n D sphere 𝑆 𝑗(𝑦 𝑗)S j(y j) (2) and the object 𝑃∗𝑛(ℎ)=ℝ 𝑛\int 𝑃 𝑛(ℎ)P n(h)=ℝ n\int P n(h) (the compliment of the container 𝑃 𝑛(ℎ)P n(h) interior to the whole space ℝ 𝑛 ℝ n). Note that 𝑃∗𝑛(ℎ)=𝑃∗𝑛∪𝐻∗𝑛 P n(h)=P n∪H n, where 𝐻 𝑛={𝑋∈ℝ 𝑛:𝑥 𝑛−ℎ≥0}H n={X∈ℝ n:x n−h≥0} and 𝑃∗𝑛={𝑋∈ℝ 𝑛:∑𝑖=1 𝑛−1 𝑥 2 𝑖−2 𝑝 𝑥 𝑛≥0}.P n={X∈ℝ n:∑i=1 n−1 x i 2−2 p x n≥0}. (3) A Φ Φ-function for a sphere 𝑆 𝑗(𝑦 𝑗)S j(y j) and the object 𝑃∗𝑛(ℎ)P n(h) can be stated in the following form: Φ∗𝑗(𝑦 𝑗,𝑦 𝑗 𝑛,ℎ)=min{Φ 𝑗(𝑦 𝑗)−𝛿 𝑗,Θ 𝑗(𝑦 𝑗 𝑛,ℎ)},Φ j(y j,y j n,h)=min{Φ j(y j)−δ j,Θ j(y j n,h)}, (4) where Φ 𝑗(𝑦 𝑗)Φ j(y j) is a Φ-function for a sphere 𝑆 𝑗(𝑦 𝑗)S j(y j) and the object 𝑃∗𝑛 P n, and Θ 𝑗(𝑦 𝑗 𝑛,ℎ)=ℎ−𝑦 𝑗 𝑛−𝑟 𝑗 Θ j(y j n,h)=h−y j n−r j is a Φ Φ-function for a sphere 𝑆 𝑗(𝑦 𝑗)S j(y j) and a half-space 𝐻∗𝑛 H n. Let us define a Φ-function for a sphere 𝑆 𝑗(𝑦 𝑗)S j(y j) and the object 𝑃∗𝑛 P n. We state that 𝑆 𝑗(𝑦 𝑗)⊂𝑃 𝑛 S j(y j)⊂P n if 𝑃∗𝑛∩int 𝑆 𝑗(𝑦 𝑗)=∅P n∩int S j(y j)=∅. Firstly, construct an (n − 1)D hyperplane 𝐾 1 K 1 passing through axis 𝑂 𝑥 𝑛 O x n of 𝑃 𝑛 P n and the center 𝑦 𝑗=(𝑦 𝑗 1,𝑦 𝑗 2,…,𝑦 𝑗 𝑛)y j=(y j 1,y j 2,…,y j n) of the n D sphere 𝑆 𝑗(𝑦 𝑗)S j(y j). Then, the section 𝐾 1∩𝑃 𝑛 K 1∩P n yields the parabolic domain 𝑃 𝑛−1={𝑋∈ℝ 𝑛−1:−𝑧 2 1−∑𝑖=3 𝑛−1 𝑥 2 𝑖+2 𝑝 𝑥 𝑛=0},P n−1={X∈ℝ n−1:−z 1 2−∑i=3 n−1 x i 2+2 p x n=0}, where 𝑧 1=±𝑥 2 1+𝑥 2 2−−−−−−√,z 1=±x 1 2+x 2 2, while the section 𝐾 1∩𝑆 𝑗(𝑦 𝑗)K 1∩S j(y j) yields the (n − 1)D-sphere 𝑆 𝑗(𝑛−1)S j(n−1) of radius 𝑟 𝑗 r j and the center 𝑦 𝑗(𝑛−1)=(𝑧 𝑗 1,𝑦 𝑗 3,…,𝑦 𝑗 𝑛),y j(n−1)=(z j 1,y j 3,…,y j n),𝑧 𝑗 1=sign(𝑦 𝑗 1)𝑦 2 𝑗 1+𝑦 2 𝑗 2−−−−−−−√z j 1=sign(y j 1)y j 1 2+y j 2 2. By analogy, an (n − 2)D-hyperplane 𝐾 2 K 2 passing through axis 𝑂 𝑥 𝑛 O x n of 𝑃 𝑛−1 P n−1 and the center 𝑦 𝑗(𝑛−1)y j(n−1) of 𝑆 𝑗(𝑛−1)S j(n−1) is constructed. Then, the section 𝐾 2∩𝑃 𝑛−1 K 2∩P n−1 yields the parabolic domain 𝑃 𝑛−2={𝑋∈ℝ 𝑛−2:−𝑧 2 2−∑𝑖=4 𝑛−1 𝑥 2 𝑖+2 𝑝 𝑥 𝑛=0},P n−2={X∈ℝ n−2:−z 2 2−∑i=4 n−1 x i 2+2 p x n=0}, where 𝑧 2=±∑3 𝑖=1 𝑥 2 𝑖−−−−−−−√z 2=±∑i=1 3 x i 2, while the section 𝐾 2∩𝑆 𝑗(𝑛−1)K 2∩S j(n−1) yields the (n − 2)D-sphere 𝑆 𝑗(𝑛−2)S j(n−2) of radius 𝑟 𝑗 r j and the center 𝑦 𝑗(𝑛−2)=(𝑧 𝑗 2,𝑦 𝑗 4,…,𝑦 𝑗 𝑛),y j(n−2)=(z j 2,y j 4,…,y j n),𝑧 𝑗 2=sign(𝑧 𝑗 1)∑3 𝑖=1 𝑦 2 𝑗 𝑖−−−−−−−√z j 2=sign(z j 1)∑i=1 3 y j i 2. The iterative procedure continues until the 2D parabolic domain 𝑃 2={𝑋∈ℝ 2:−𝑧 2(𝑛−1)+2 𝑝 𝑥 𝑛=0}P 2={X∈ℝ 2:−z(n−1)2+2 p x n=0} with 𝑧 𝑛−1=sign(𝑧 𝑗(𝑛−2))∑𝑛−1 𝑖=1 𝑥 2 𝑖−−−−−−−√z n−1=sign(z j(n−2))∑i=1 n−1 x i 2 and the 2D sphere 𝑆 𝑗 2 S j 2 of radius 𝑟 𝑗 r j and the center (𝑧 𝑗(𝑛−1),𝑦 𝑗 𝑛)(z j(n−1),y j n) are obtained. Note that sign(𝑧 𝑗(𝑛−2))=sign(𝑧 𝑗(𝑛−3))=…=sign(𝑧 𝑗 1)=sign(𝑦 𝑗 1)sign(z j(n−2))=sign(z j(n−3))=…=sign(z j 1)=sign(y j 1). This means that the construction of the Φ-function for the object 𝑃∗𝑛 P n (3) and the n D sphere 𝑆 𝑗(𝑦 𝑗)S j(y j) (2) is reduced to deriving the Φ-function for the object 𝑃∗2 P 2 and the 2D sphere 𝑆 𝑗 2 S j 2. Let an equation of the tangent Υ 𝑗 ϒ j to the boundary of 𝑃 2 P 2 be given 𝑓(𝑧 𝑛−1,𝑥 𝑛,𝑡 𝑗)=−𝑧 𝑛−1 2 𝑝−−√𝑡 𝑗+𝑝(𝑥 𝑛+𝑡 2 𝑗)=0 f(z n−1,x n,t j)=−z n−1 2 p t j+p(x n+t j 2)=0 for any 𝑡 𝑗∈ℝ 1 t j∈ℝ 1. Note that different tangents Υ 𝑗(𝑡 𝑗)ϒ j(t j) can be generated for different values of 𝑡 𝑗 t j. Let a point (𝑧 𝑛−1,𝑥 𝑛)=(𝑡 𝑗 2 𝑝−−√,𝑡 2 𝑗)(z n−1,x n)=(t j 2 p,t j 2) be a tangency point of Υ 𝑗(𝑡 𝑗)ϒ j(t j) and the boundary of 𝑃 2 P 2. 
Then, the normal equation of Υ 𝑗(𝑡 𝑗)ϒ j(t j) takes the form 𝑓 𝑗(𝑧 𝑛−1,𝑥 𝑛,𝑡 𝑗)=−𝑧 𝑛−1 2 𝑝−−√𝑡 𝑗−𝑝(𝑥 𝑛+𝑡 2 𝑗)2 𝑝 𝑡 2 𝑗+𝑝 2−−−−−−−−√=0.f j(z n−1,x n,t j)=−z n−1 2 p t j−p(x n+t j 2)2 p t j 2+p 2=0. (5) Thus, the normalized Φ-function for the 2D sphere 𝑆 𝑗 2 S j 2 and the half-plane specified by the inequality 𝑓 𝑗(𝑧 𝑛−1,𝑥 𝑛,𝑡 𝑗)≤0 f j(z n−1,x n,t j)≤0 can be defined as follows: Φ 𝑗 0(𝑧 𝑗(𝑛−1),𝑦 𝑗 𝑛,𝑡 𝑗)=𝑓 𝑗(𝑧 𝑗(𝑛−1),𝑦 𝑗 𝑛,𝑡 𝑗)−𝑟 𝑗.Φ j 0(z j(n−1),y j n,t j)=f j(z j(n−1),y j n,t j)−r j. (6) Substituting 𝑓 𝑗(𝑧 𝑛−1,𝑥 𝑛,𝑡 𝑗)f j(z n−1,x n,t j) (5) into (6), the function 𝜔 𝑗(𝑡 𝑗)=(−𝑧 𝑗,(𝑛−1)2 𝑝−−√𝑡 𝑗−𝑝(𝑦 𝑗 𝑛+𝑡 2 𝑗))/2 𝑝 𝑡 2 𝑗+𝑝 2−−−−−−−√ω j(t j)=(−z j,(n−1)2 p t j−p(y j n+t j 2))/2 p t j 2+p 2 can be defined. Then, we search for 𝑡∗𝑗 t j at which the function 𝜔 𝑗(𝑡∗𝑗)ω j(t j) reaches the minimum corresponding to the distance between the center of 𝑆 𝑗 2 S j 2 and the boundary of 𝑃 2 P 2. Consequently, bearing in mind 𝑧 𝑛−1=±∑𝑛−1 𝑖=1 𝑥 2 𝑖−−−−−−−√z n−1=±∑i=1 n−1 x i 2, the normalized Φ-function for 𝑆 𝑗(𝑦 𝑗)S j(y j) and 𝑃∗2 P 2 (3) takes the form Φ 𝑗(𝑦 𝑗)=min 𝑡 𝑗∈[𝛽 𝑗 1,𝛽 𝑗 2]Φ 𝑗 0(𝑧 𝑗(𝑛−1),𝑦 𝑗 𝑛,𝑡 𝑗).Φ j(y j)=min t j∈[β j 1,β j 2]Φ j 0(z j(n−1),y j n,t j). (7) To find the optimal value of 𝑡 𝑗∈[𝛽 𝑗 1,𝛽 𝑗 2]t j∈[β j 1,β j 2], a bisection technique is applied. Let us consider two cases for the locations of the center of 𝑆 𝑗 2 S j 2 with respect to 𝑃 2 P 2: case 1 corresponds to (𝑧 𝑗(𝑛−1),𝑦 𝑗 𝑛)∈𝑃 2(z j(n−1),y j n)∈P 2; case 2 corresponds to (𝑧 𝑗(𝑛−1),𝑦 𝑗 𝑛)∉𝑃 2.(z j(n−1),y j n)∉P 2. Assume (𝑧⌢𝑗(𝑛−1),𝑦⌢𝑗 𝑛)∈𝑃 2(z⌢j(n−1),y⌢j n)∈P 2 and 𝑦⌢𝑗 𝑛≥0 y⌢j n≥0. Here, (𝑧⌢𝑗(𝑛−1),𝑦⌢𝑗 𝑛)(z⌢j(n−1),y⌢j n) is the center point of the sphere 𝑆 𝑗 2 S j 2. Let us consider two tangents Υ 𝑗(𝑡 𝑗 1)ϒ j(t j 1) and Υ 𝑗(𝑡 𝑗 2)ϒ j(t j 2) to fr 𝑃 2 fr P 2 at points 𝐴(2 𝑝 𝑦⌢𝑗 𝑛−−−−−√,𝑦⌢𝑗 𝑛)A(2 p y⌢j n,y⌢j n) and 𝐵(𝑧⌢𝑗(𝑛−1),𝑧⌢2 𝑗(𝑛−1)/(2 𝑝))B(z⌢j(n−1),z⌢j(n−1)2/(2 p)) for corresponding 𝑡 𝑗 1=𝑦⌢𝑗 𝑛−−−√t j 1=y⌢j n and 𝑡 𝑗 2=𝑧⌢𝑗(𝑛−1)/2 𝑝−−√t j 2=z⌢j(n−1)/2 p (Figure 1). Therefore, [𝛽 𝑗 1,𝛽 𝑗 2]=[𝑧⌢𝑗(𝑛−1)/2 𝑝−−√,𝑦⌢𝑗 𝑛−−−√][β j 1,β j 2]=[z⌢j(n−1)/2 p,y⌢j n]. Figure 1. Illustration of interaction of 𝑆 𝑗 2 S j 2 and the boundary of 𝑃 2 P 2 in 2D. In Figure 1, the segment 𝐴 𝐵 A B of the parabola that corresponds to 𝑡 𝑗∈[𝛽 𝑗 1,𝛽 𝑗 2]t j∈[β j 1,β j 2] is shown. The tangent Υ 𝑗(𝑡∗𝑗)ϒ j(t j) at the point 𝐶 C(2 𝑝 𝑦⌢𝑗 𝑛−−−−−√,𝑦⌢𝑗 𝑛,𝑡∗𝑗)(2 p y⌢j n,y⌢j n,t j) corresponds to 𝑡∗𝑗=arg min 𝑡 𝑗∈[𝛽 𝑗 1,𝛽 𝑗 2]Φ 𝑗 0(𝑧⌢𝑗(𝑛−1),𝑦⌢𝑗 𝑛,𝑡 𝑗)t j=arg min t j∈[β j 1,β j 2]Φ j 0(z⌢j(n−1),y⌢j n,t j). Let (𝑧 𝑗(𝑛−1),𝑦 𝑗 𝑛)∈𝑃 2(z j(n−1),y j n)∈P 2 (case 1); then, [𝛽 𝑗 1,𝛽 𝑗 2]=⎧⎩⎨[𝑧 𝑗(𝑛−1)2 𝑝√,𝑦 𝑗 𝑛−−−√]if 𝑧 𝑗(𝑛−1)≥0(case 1.1)[−𝑦 𝑗 𝑛−−−√,𝑧 𝑗(𝑛−1)2 𝑝√]if 𝑧 𝑗(𝑛−1)<0(case 1.2).[β j 1,β j 2]={[z j(n−1)2 p,y j n]if z j(n−1)≥0(case 1.1)[−y j n,z j(n−1)2 p]if z j(n−1)<0(case 1.2). (8) Let (𝑧 𝑗(𝑛−1),𝑦 𝑗 𝑛)∉𝑃 2(z j(n−1),y j n)∉P 2(case 2); then, [𝛽 𝑗 1,𝛽 𝑗 2]=⎧⎩⎨[𝑦 𝑗 𝑛−−−√,𝑧 𝑗(𝑛−1)2 𝑝√]if 𝑧 𝑗(𝑛−1)≥0,𝑦 𝑗 𝑛≥0(case 2.1)[−|𝑧 𝑗(𝑛−1)|2 𝑝√,|𝑧 𝑗(𝑛−1)|2 𝑝√]if 𝑦 𝑗 𝑛<0(case 2.2)[−𝑧 𝑗(𝑛−1)2 𝑝√,𝑦 𝑗 𝑛−−−√]if 𝑧 𝑗(𝑛−1)<0,𝑦 𝑗 𝑛≥0(case 2.3).[β j 1,β j 2]={[y j n,z j(n−1)2 p]if z j(n−1)≥0,y j n≥0(case 2.1)[−|z j(n−1)|2 p,|z j(n−1)|2 p]if y j n<0(case 2.2)[−z j(n−1)2 p,y j n]if z j(n−1)<0,y j n≥0(case 2.3). (9) Figure 2 illustrates two cases: a sphere is arranged inside the parabolic domain 𝑃 2 P 2 Φ 𝑗(𝑦 𝑗)≥0 Φ j(y j)≥0(Figure 2a), and a sphere is arranged outside the parabolic domain 𝑃 2 P 2, Φ 𝑗(𝑦 𝑗)<0 Φ j(y j)<0 (Figure 2b). Figure 2. 
Figure 2 illustrates two cases: a sphere arranged inside the parabolic domain $P_2$, $\Phi_j(y_j) \ge 0$ (Figure 2a), and a sphere arranged outside the parabolic domain $P_2$, $\Phi_j(y_j) < 0$ (Figure 2b).

Figure 2. Arrangements of $S_j^2$ with respect to $P_2$ for different $t_j \in [\beta_{j1}, \beta_{j2}]$: (a) $(z_{j(n-1)}, y_{jn}) \in P_2$; (b) $(z_{j(n-1)}, y_{jn}) \notin P_2$.

In particular, to calculate $t \in [\beta_{j1}, \beta_{j2}]$, case 1.1 in (8) is used for the parabolic segment $A_1 A_2$, while case 1.2 in (8) is used for the parabolic segment $B_1 B_2$ (Figure 2a); case 2.1 in (9) is used for the parabolic segment $A_1 A_2$, case 2.2 in (9) is used for the parabolic segment $C_1 C_2$, and case 2.3 in (9) is used for the parabolic segment $B_1 B_2$ (Figure 2b).

4. Mathematical Model

A mathematical model of the packing problem can be formulated as follows:

$\min_{Y,\, h}\ h$ (10)

subject to

$\Phi_{tj}(y_t, y_j) = \|y_t - y_j\|^2 - (r_t + r_j + \delta_{tj})^2 \ge 0,\quad t < j \in J$;
$\Phi^*_j(y_j, y_{jn}, h) = \min\{\Phi_j(y_j) - \delta_j,\ \Theta_j(y_{jn}, h)\} \ge 0,\quad j \in J$, (11)

where $Y = (y_1, y_2, \dots, y_m) \in \mathbb{R}^{mn}$. Note that the inequality $\Phi^*_j(y_j, y_{jn}, h) \ge 0$ in (11) is equivalent to the system of inequalities $\Phi_j(y_j) - \delta_j \ge 0$ and $\Theta_j(y_{jn}, h) \ge 0$ and ensures that the nD sphere $S_j(y_j)$ lies fully inside the parabolic container $P_n(h)$.

The number of inequalities specifying the feasible region $W$ in (11) is $\chi = 0.5\, m(m-1) + 2m$. The dimensions of a solution matrix for $W$ are $(mn + 1) \times \chi$. Thus, the number of inequalities/variables in the system (11) grows drastically as $m$ is enlarged. The solution matrix is strongly sparse. The problem is NP-hard.

Since $t_j$ cannot be defined explicitly, optimization problems of the form (7) must be solved for each sphere $S_j(y_j)$ at each iteration of the solution process for the problem (10), (11). We do not use the solvers BARON or IPOPT for the original problem because of the dynamic nature of the Φ-function. Instead, a modification of the feasible directions method (FDM) is developed. The following solution strategy for the problem (10), (11) is proposed (a schematic driver loop is sketched below): take a sufficiently large height $h_0$ of the container that guarantees a placement of the spheres $S_j(y_j)$, $j \in J$, fully inside $P(h_0)$; generate the sphere centers $y_j^0$, $j \in J$, randomly so that $S_j(y_j^0) \subset P(h_0)$, $j \in J$, and $\Phi_{tj}(y_t^0, y_j^0) \ge 0$, $t < j \in J$; apply the modification of the FDM to solve the problem (10), (11) for a set of feasible starting points and select the best solution.
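The strategy above is a multistart scheme: several feasible starting layouts are generated and the modified FDM is run from each. A minimal sketch of such a driver loop follows; it is our own illustration, and `generate_feasible_start` and `run_fdm` are hypothetical placeholders for the procedure of the second step and for the method of the next section, passed in as callables.

```python
import numpy as np

def multistart_packing(radii, p, h0, generate_feasible_start, run_fdm,
                       n_starts=10, seed=0):
    """Multistart driver for problem (10), (11): keep the smallest container height h.

    generate_feasible_start(radii, p, h0, rng) -> Y0     (random feasible layout)
    run_fdm(Y0, radii, p, h0)                  -> (Y, h) (modified feasible directions method)
    Both callables are placeholders for the procedures described in the text.
    """
    rng = np.random.default_rng(seed)
    best_Y, best_h = None, np.inf
    for _ in range(n_starts):
        Y0 = generate_feasible_start(radii, p, h0, rng)   # spheres inside P_n(h0), non-overlapping
        Y, h = run_fdm(Y0, radii, p, h0)                  # local optimization from this start
        if h < best_h:                                    # select the best solution over all starts
            best_Y, best_h = Y, h
    return best_Y, best_h
```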
5. Solution Algorithm

In contrast to the problem considered in earlier work, in this study the spheres are of different radii, and the Φ-function describing the containment of a sphere in the container involves an additional parameter, which is dynamically changed during the optimization process. The FDM for the problem (10), (11) is implemented using the iterative formula

$Y_{k+1} = Y_k + \lambda_k Z_k,\quad k = 0, 1, 2, \dots$, (12)

where $Y_0 \in W$ (for $k = 0$) is a starting feasible point, $Z_k$ is a search direction vector, and $\lambda_k > 0$ is a parameter that controls the step size. Let $\varphi(Y) = h$ denote the objective function. A vector $Z_k$ in (12) should provide $Y_{k+1} \in W$.

To search for a vector $Z_k$, the following linear programming problem is solved:

$(Z_k, \alpha_k) = \arg\max \alpha \quad \text{s.t.}\ \Upsilon = (Z, \alpha) \in G_k$, (13)

$G_k = \{(Z, \alpha) \in \mathbb{R}^{\chi+1} : \nabla\Phi_{tj}(y_t^k, y_j^k) \cdot Z \ge \alpha,\ t < j \in J;\ \nabla\Phi_j(y_j^k) \cdot Z \ge \alpha;\ \nabla\Theta_j(y_{jn}^k, h^k) \cdot Z \ge \alpha;\ -\nabla\varphi(h^k) \cdot Z \ge \alpha;\ |z_i| \le 1,\ i \in \Xi = \{1, 2, \dots, \chi\}\}$, (14)

where $Z = (z_1, z_2, \dots, z_\chi) \in \mathbb{R}^\chi$, $\alpha \in \mathbb{R}^1$, and $\Phi_j(y_j^k)$ is constructed according to (7). Note that in (11) each $\Phi_{tj}(y_t, y_j)$ is an inverse convex function and each $\Theta_j(y_{jn}, h)$ is a linear function. The vector of feasible directions may be orthogonal to the gradients of these constraints, so the inequalities $\nabla\Phi_{tj}(y_t^k, y_j^k) \cdot Z \ge \alpha$ and $\nabla\Theta_j(y_{jn}^k, h^k) \cdot Z \ge \alpha$ in (14) are replaced with $\nabla\Phi_{tj}(y_t^k, y_j^k) \cdot Z \ge 0$ and $\nabla\Theta_j(y_{jn}^k, h^k) \cdot Z \ge 0$, respectively.

To reduce the dimension of the problem (10), (11), and consequently of the problem (13), (14), a decomposition strategy based on the degree of feasibility is employed. When considering all the placed spheres, the majority of them are positioned at a significant distance from each other. The corresponding inequalities in the system (11) are satisfied with a considerable margin and can be disregarded when forming the optimization vector. At each step, only a subset of inequalities from the system (11) is considered, specifically those with a low degree of feasibility. During the optimization process, a parameter $\varepsilon_k > 0$ determining the degree of feasibility is adjusted dynamically, regulating the active constraints at each step. If an inequality is not considered in the optimization process at a certain step but is violated during the step's execution, admissibility is controlled using the parameter $\lambda_k$ in the iterative formula (12).

Let us denote the inverse convex and linear inequalities in the system (11) as $g_s(Y) \ge 0$, $s \in \Lambda \subset \Xi$, and the rest of the inequalities as $q_s(Y) \ge 0$, $s \in \Gamma \subset \Xi$ ($\Lambda \cup \Gamma = \Xi$, $\Lambda \cap \Gamma = \emptyset$), and let $E_k = \{s \in \Lambda : 0 \le g_s(Y_k) \le \varepsilon_k\}$ and $B_k = \{s \in \Gamma : 0 \le q_s(Y_k) \le \varepsilon_k\}$. Here, $\varepsilon_k > 0$ is a threshold value. Then, the problem (13), (14) takes the form

$(Z_k, \alpha_k) = \arg\max \alpha \quad \text{s.t.}\ (Z, \alpha) \in G_k$, (15)

$G_k = \{(Z, \alpha) \in \mathbb{R}^{\chi+1} : \nabla g_i(Y_k) \cdot Z \ge 0,\ i \in E_k;\ \nabla q_i(Y_k) \cdot Z \ge \alpha,\ i \in B_k;\ -\nabla\varphi(h_k) \cdot Z \ge \alpha;\ |z_i| \le 1,\ i \in \Xi\}$. (16)

Taking into account the problem (15), (16), the following step-by-step algorithm is employed to solve the problem (10), (11).

Step 1. Take a sufficiently large height $h_0$ of the container that guarantees a placement of the spheres $S_j(y_j)$, $j \in J$, fully inside $P_n(h_0)$.
Step 2. Generate the sphere centers $y_j^0$, $j \in J$, randomly so that $S_j(y_j^0) \subset P_n(h_0)$, $j \in J$, and $\Phi_{tj}(y_t^0, y_j^0) \ge 0$, $t < j \in J$.
Step 3. Set $k := 0$, $\varepsilon_0 := \varepsilon > 0$.
Step 4. Define the functions $\Phi_j(y_j^k)$ (7).
Step 5. Form the sets $E_k$, $B_k$.
Step 6. Set $\lambda_k := 1$.
Step 7. Calculate $(Z_k, \alpha_k)$ (problem (15), (16)).
Step 8. If $\alpha_k \le 0$ (there is no feasible direction decreasing the objective $\varphi(Y) = h$), then set $\varepsilon_k := \varepsilon_k/2$ and go to Step 5; otherwise ($\alpha_k > 0$), go to Step 9.
Step 9. Set $Y_{k+1} := Y_k + \lambda_k Z_k$ (12).
Step 10. If $Y_{k+1} \notin W$, then set $\lambda_k := \lambda_k/2$ and go to Step 9; otherwise, go to Step 11.
Step 11. If $\|Y_{k+1} - Y_k\| < \tau$, then stop the algorithm; otherwise, set $\varepsilon_{k+1} := \varepsilon_k$, $k := k + 1$ and go to Step 4.

A schematic illustration of the proposed approach is shown in Figure 3. Three consecutive iterations of the FDM in 2D are illustrated in Figure 3a–c. An arrangement of 2D spheres corresponding to the stop criterion $\|Y_{k+1} - Y_k\| < \tau$ at Step 11 is shown in Figure 3d.

Figure 3. Illustration of the main stages of the solution procedure for three spheres: (a) an arrangement of spheres corresponding to the k-th iteration; (b) an arrangement of spheres corresponding to the (k+1)-th iteration; (c) an arrangement of spheres corresponding to the (k+2)-th iteration; (d) an arrangement of spheres corresponding to the stop criterion.

A flowchart corresponding to the solution strategy is presented in Figure 4.

Figure 4. The flowchart of the main algorithm.

6. Computational Results

The proposed approach was numerically tested on eleven instances of the problem (10), (11) with different (a) dimensions, n = 2, 3, 4, 5; (b) numbers of spheres, m = 50, 100, 200; (c) radii $r_j$, $j = 1, \dots, m$; (d) minimal allowable distances $\delta_{tj}$, $\delta_j$, $t < j \in J$, $j \in J$; and (e) parameters p of the parabolic container, p = 1, 2, 5, 10. For all the examples, we set $\varepsilon_0 = \varepsilon = 1$, $\tau = 10^{-6}$. For each problem instance, 10 starting points were generated. The computations were performed using an Intel® Core™ i3-6100T, 3.20 GHz, 8.00 GB of RAM.

Example 1. n = 2, m = 100; $r_j$ = 1.177, j = 1,…,24; $r_j$ = 1.117, j = 25,…,48; $r_j$ = 0.97, j = 49,…,72; $r_j$ = 0.927, j = 73,…,96; $r_j$ = 0.86, j = 97,…,100; p = 1, $\delta_{tj} = \delta_j = 0$. The best solution found by our algorithm within 5 min is h = 37.518079.

Example 2. n = 2, m = 200; $r_j$ = 1.177, j = 1,…,24; $r_j$ = 1.117, j = 25,…,48; $r_j$ = 0.97, j = 49,…,72; $r_j$ = 0.927, j = 73,…,96; $r_j$ = 0.86, j = 97,…,120; $r_j$ = 0.812, j = 121,…,144; $r_j$ = 0.762, j = 145,…,168; $r_j$ = 0.726, j = 169,…,192; $r_j$ = 0.664, j = 193,…,200; p = 1, $\delta_{tj} = \delta_j = 0$. The best solution found by our algorithm within 20 min is h = 49.511450.

Example 3. n = 2, m = 100, the radii are as in Example 1; p = 5, $\delta_{tj} = \delta_j = 0$, $h_0$ = 30. The best solution found by our algorithm within 5 min is h = 21.695951.

Example 4. n = 2, m = 50, {$r_j$, j = 1,…,50} = {1.177, 1.177, 1.177, 1.177, 1.117, 1.117, 1.117, 1.117, 1.117, 0.970, 0.970, 0.970, 0.970, 0.970, 0.927, 0.927, 0.927, 0.927, 0.927, 0.860, 0.860, 0.860, 0.860, 0.860, 0.812, 0.812, 0.812, 0.812, 0.812, 0.762, 0.762, 0.762, 0.762, 0.762, 0.726, 0.726, 0.726, 0.726, 0.726, 0.664, 0.664, 0.664, 0.664, 0.664, 0.627, 0.627, 0.627, 0.627, 0.627}; p = 5, $\delta_{tj} \in [0.1, 0.5]$, $\delta_j \in [0.1, 1]$. The best solution found by our algorithm within 2 min is h = 11.577099.

The corresponding placements of the spheres in Examples 1–4 are shown in Figure 5a–d.

Figure 5. Optimized arrangement of 2D spheres: (a) Example 1; (b) Example 2; (c) Example 3; (d) Example 4.

Example 5.
n = 3, m = 50, {$r_j$, j = 1,…,50} = {0.527, 0.564, 0.566, 0.592, 0.612, 0.680, 0.747, 0.760, 0.807, 0.845, 0.850, 0.853, 0.855, 0.868, 0.887, 0.891, 0.934, 0.947, 0.955, 0.961, 1.044, 1.085, 1.180, 1.189, 1.210, 1.229, 1.237, 1.274, 1.275, 1.281, 1.292, 1.309, 1.325, 1.374, 1.399, 1.404, 1.430, 1.484, 1.491, 1.493, 1.525, 1.551, 1.551, 1.636, 1.670, 1.739, 1.819, 2.050, 2.171}; p = 2, $\delta_{tj} = \delta_j = 0$. The best solution found by our algorithm within 3 min is h = 11.927860.

Example 6. n = 3, m = 200, {$r_j$, j = 1,…,200} = {2.171, 2.171, 2.171, 2.171, 2.050, 2.050, 2.050, 2.050, 1.819, 1.819, 1.819, 1.819, 1.739, 1.739, 1.739, 1.739, 1.670, 1.670, 1.670, 1.670, 1.636, 1.636, 1.636, 1.636, 1.551, 1.551, 1.551, 1.551, 1.551, 1.551, 1.551, 1.551, 1.525, 1.525, 1.525, 1.525, 1.493, 1.493, 1.493, 1.493, 1.491, 1.491, 1.491, 1.491, 1.484, 1.484, 1.484, 1.484, 1.484, 1.484, 1.484, 1.484, 1.430, 1.430, 1.430, 1.430, 1.404, 1.404, 1.404, 1.404, 1.399, 1.399, 1.399, 1.399, 1.374, 1.374, 1.374, 1.374, 1.325, 1.325, 1.325, 1.325, 1.309, 1.309, 1.309, 1.309, 1.292, 1.292, 1.292, 1.292, 1.281, 1.281, 1.281, 1.281, 1.275, 1.275, 1.275, 1.275, 1.274, 1.274, 1.274, 1.274, 1.237, 1.237, 1.237, 1.237, 1.229, 1.229, 1.229, 1.229, 1.210, 1.210, 1.210, 1.210, 1.189, 1.189, 1.189, 1.189, 1.180, 1.180, 1.180, 1.180, 1.085, 1.085, 1.085, 1.085, 1.044, 1.044, 1.044, 1.044, 0.961, 0.961, 0.961, 0.961, 0.955, 0.955, 0.955, 0.955, 0.947, 0.947, 0.947, 0.947, 0.934, 0.934, 0.934, 0.934, 0.891, 0.891, 0.891, 0.891, 0.887, 0.887, 0.887, 0.887, 0.868, 0.868, 0.868, 0.868, 0.855, 0.855, 0.855, 0.855, 0.853, 0.853, 0.853, 0.853, 0.850, 0.850, 0.850, 0.850, 0.845, 0.845, 0.845, 0.845, 0.807, 0.807, 0.807, 0.807, 0.760, 0.760, 0.760, 0.760, 0.747, 0.747, 0.747, 0.747, 0.680, 0.680, 0.680, 0.680, 0.612, 0.612, 0.612, 0.612, 0.592, 0.592, 0.592, 0.592, 0.566, 0.566, 0.566, 0.566, 0.564, 0.564, 0.564, 0.564, 0.527, 0.527, 0.527, 0.527}; p = 2, $\delta_{tj} = \delta_j = 0$. The best solution found by our algorithm within 35 min is h = 22.612047.

Example 7. n = 3, m = 100, {$r_j$, j = 1,…,100} = {2.171, 2.171, 2.050, 2.050, 1.819, 1.819, 1.739, 1.739, 1.670, 1.670, 1.636, 1.636, 1.551, 1.551, 1.551, 1.551, 1.525, 1.525, 1.493, 1.493, 1.491, 1.491, 1.484, 1.484, 1.484, 1.484, 1.430, 1.430, 1.404, 1.404, 1.399, 1.399, 1.374, 1.374, 1.325, 1.325, 1.309, 1.309, 1.292, 1.292, 1.281, 1.281, 1.275, 1.275, 1.274, 1.274, 1.237, 1.237, 1.229, 1.229, 1.210, 1.210, 1.189, 1.189, 1.180, 1.180, 1.085, 1.085, 1.044, 1.044, 0.961, 0.961, 0.955, 0.955, 0.947, 0.947, 0.934, 0.934, 0.891, 0.891, 0.887, 0.887, 0.868, 0.868, 0.855, 0.855, 0.853, 0.853, 0.850, 0.850, 0.845, 0.845, 0.807, 0.807, 0.760, 0.760, 0.747, 0.747, 0.680, 0.680, 0.612, 0.612, 0.592, 0.592, 0.566, 0.566, 0.564, 0.564, 0.527, 0.527}; p = 10, $\delta_{tj} = \delta_j = 0$, $h_0$ = 40. The best solution found by our algorithm within 10 min is h = 7.577422.

Example 8. n = 3, m = 100, the radii are as in Example 7; p = 10, $\delta_{tj} \in [0.1, 0.5]$, $\delta_j \in [0.1, 1]$. The best solution found by our algorithm within 10 min is h = 9.727001.

The corresponding placements of the 3D spheres in Examples 5–8 are shown in Figure 6a–d.

Figure 6. Optimized arrangement of 3D spheres: (a) Example 5; (b) Example 6; (c) Example 7; (d) Example 8.

Example 9. n = 4, m = 100, the radii are as in Example 7; p = 2, $\delta_{tj} = \delta_j = 0$.
The best solution found by our algorithm within 15 min is h = 22.612047.

Example 10. n = 4, m = 200, the radii are as in Example 6; p = 2, $\delta_{tj} = \delta_j = 0$. The best solution found by our algorithm within 45 min is h = 14.628068.

Example 11. n = 4, m = 200, the radii are as in Example 6; p = 2, $\delta_{tj} = \delta_j = 0$. The best solution found by our algorithm within 55 min is h = 11.791466.

7. Conclusions

Employing mathematical models offers a structured and systematic approach to problem-solving, facilitating precise analysis and prediction of outcomes. In this paper, a mathematical model for packing different spheres into a minimal-height parabolic container is proposed. Non-overlapping and containment conditions are formulated using the phi-function approach. The minimal allowed distance between the spheres and to the boundary of the container is taken into account. The problem belongs to the class of irregular packing problems due to the nonstandard container shape. To solve the corresponding nonlinear optimization problem, a feasible directions approach combined with a hot start technique is proposed. A decomposition scheme is applied to reduce the number of constraints in the subproblem used to find the search direction. Numerical experiments are provided to demonstrate the efficiency of the proposed solution scheme. A detailed description of the problem instances and the corresponding solutions is reported to form a benchmark for future research.

Our future research is focused on the following issues. The number of non-overlapping constraints grows quadratically with the number of spheres, resulting in a large-scale optimization problem. These constraints have a specific structure which can be used either for direct solution of the original problem or to construct tight bounds for the optimal objective [33,34]. The proposed approach is based on modeling the interactions between the spheres and the boundary of the parabolic container. It can also be applied to a broader class of containers, e.g., circular hyperboloids (single- and double-sheeted), spheroids or ellipsoids. Packing problems on surfaces can also be considered, as well as various applications of spherical systems and logistics. Some results in these directions are forthcoming.

Author Contributions

Conceptualization, G.Y., Y.S., T.R., I.L. and J.M.V.C.; methodology, G.Y., Y.S., T.R. and I.L.; software, Y.S. and M.L.A.; validation, G.Y. and J.M.V.C.; formal analysis, G.Y., Y.S., T.R. and I.L.; investigation, G.Y., Y.S., T.R., I.L., J.M.V.C. and M.L.A.; resources, G.Y. and Y.S.; data curation, G.Y. and Y.S.; writing—original draft preparation, G.Y., Y.S., T.R., I.L., J.M.V.C. and M.L.A.; writing—review and editing, G.Y., Y.S., T.R., I.L., J.M.V.C. and M.L.A.; visualization, G.Y. and Y.S.; supervision, G.Y., Y.S., T.R., I.L. and J.M.V.C.; project administration, J.M.V.C., M.L.A. and Y.S.; funding acquisition, J.M.V.C., M.L.A. and G.Y. All authors have read and agreed to the published version of the manuscript.

Funding

The second and the third authors were partially supported by the Volkswagen Foundation (grant #97775), and the third author was supported by the British Academy (grant #100072), while the last two authors were partially supported by the Technological Institute of Sonora (ITSON), Mexico, through the Research Promotion and Support Program (PROFAPI 2024).
Data Availability Statement

The data presented in this study are available on request from the corresponding authors.

Acknowledgments

The authors would like to thank the anonymous referees for constructive and positive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

For the reader's convenience, the main definitions and properties of the Φ-functions are provided. More details can be found, e.g., in [23,25], Chapter 15.

Let $A$ be a geometric object. The position of the object $A$ is defined by a motion vector $u_A = (v_A, \theta_A)$, where $v_A$ is a translation vector and $\theta_A$ is a vector of the rotation parameters. The object $A$, rotated by $\theta_A$ and translated by $v_A$, is denoted by $A(u_A)$. For two objects $A(u_A)$ and $B(u_B)$, a Φ-function allows us to distinguish the following three cases: (a) $A(u_A)$ and $B(u_B)$ do not overlap, i.e., $A(u_A)$ and $B(u_B)$ do not have any common points; (b) $A(u_A)$ and $B(u_B)$ are in contact, i.e., $A(u_A)$ and $B(u_B)$ have only common frontier points; (c) $A(u_A)$ and $B(u_B)$ overlap, so that $A(u_A)$ and $B(u_B)$ have common interior points.

Following the definition, a continuous and everywhere defined function, denoted by $\Phi^{AB}(u_A, u_B)$, is called a Φ-function of the objects $A(u_A)$ and $B(u_B)$ if the following conditions are fulfilled:

$\Phi^{AB}(u_A, u_B) > 0$ for $A(u_A) \cap B(u_B) = \emptyset$;
$\Phi^{AB}(u_A, u_B) = 0$ for $\mathrm{int}\, A(u_A) \cap \mathrm{int}\, B(u_B) = \emptyset$ and $\mathrm{fr}\, A(u_A) \cap \mathrm{fr}\, B(u_B) \ne \emptyset$;
$\Phi^{AB}(u_A, u_B) < 0$ for $\mathrm{int}\, A(u_A) \cap \mathrm{int}\, B(u_B) \ne \emptyset$.

Here, $\mathrm{fr}\, A$ denotes the boundary of the object $A$, while $\mathrm{int}\, A$ stands for its interior. Thus, $\Phi^{AB}(u_A, u_B) \ge 0 \Leftrightarrow \mathrm{int}\, A(u_A) \cap \mathrm{int}\, B(u_B) = \emptyset$.

To describe a containment constraint $A(u_A) \subset B(u_B)$, a phi-function for the objects $A$ and $B^* = \mathbb{R}^n \setminus \mathrm{int}\, B$ is used. In this case, $\Phi^{AB^*}(u_A, u_B) \ge 0 \Leftrightarrow \mathrm{int}\, A(u_A) \cap \mathrm{int}\, B^*(u_B) = \emptyset \Leftrightarrow A(u_A) \subset B(u_B)$.

To model the distance constraints for two objects, the normalized Φ-function is applied. A Φ-function of the objects $A(u_A)$ and $B(u_B)$ is called a normalized phi-function $\tilde\Phi^{AB}(u_A, u_B)$ if the values of the function coincide with the Euclidean distance between the objects $A(u_A)$ and $B(u_B)$ when $\mathrm{int}\, A(u_A) \cap \mathrm{int}\, B(u_B) = \emptyset$. Therefore, $\tilde\Phi^{AB}(u_A, u_B) \ge \rho \Leftrightarrow \mathrm{dist}\{A(u_A), B(u_B)\} \ge \rho$.

References

Scheithauer, G. Introduction to Cutting and Packing Optimization. In International Series in Operations Research & Management Science; Springer: Cham, Switzerland, 2018; Volume 263, pp. 385–405.
Wäscher, G.; Haußner, H.; Schumann, H. An improved typology of cutting and packing problems. Eur. J. Oper. Res. 2007, 183, 1109–1130.
Castillo, I.; Kampas, F.J.; Pintér, J.D. Solving circle packing problems by global optimization: Numerical results and industrial applications. Eur. J. Oper. Res. 2008, 191, 786–802.
Kampas, F.J.; Castillo, I.; Pintér, J.D. Optimized ellipse packings in regular polygons. Optim. Lett. 2019, 13, 1583–1613.
Kallrath, J.; Rebennack, S. Cutting ellipses from area-minimizing rectangles. J. Glob. Optim. 2014, 59, 405–437.
Pankratov, A.; Romanova, T.; Litvinchev, I. Packing ellipses in an optimized rectangular container. Wirel. Netw. 2020, 26, 4869–4879.
Kampas, F.J.; Pintér, J.D.; Castillo, I. Packing ovals in optimized regular polygons. J. Glob. Optim. 2020, 77, 175–196.
Castillo, I.; Pintér, J.D.; Kampas, F.J. The boundary-to-boundary p-dispersion configuration problem with oval objects. J. Oper. Res. Soc. 2024, 1–11.
Elser, V. Packing spheres in high dimensions with moderate computational effort. Phys. Rev. E 2023, 108, 034117.
Litvinchev, I.; Fischer, A.; Romanova, T.; Stetsyuk, P. A new class of irregular packing problems reducible to sphere packing in arbitrary norms. Mathematics 2024, 12, 935.
Kallrath, J. Packing ellipsoids into volume-minimizing rectangular boxes. J. Glob. Optim. 2017, 67, 151–185.
Leao, A.A.S.; Toledo, F.M.B.; Oliveira, J.F.; Carravilla, M.A.; Alvarez-Valdes, R. Irregular packing problems: A review of mathematical models. Eur. J. Oper. Res. 2020, 282, 803–822.
Guo, B.; Zhang, Y.; Hu, J.; Li, J.; Wu, F.; Peng, Q.; Zhang, Q. Two-dimensional irregular packing problems: A review. Front. Mech. Eng. 2022, 8, 966691.
Rao, Y.; Luo, Q. Intelligent algorithms for irregular packing problem. In Intelligent Algorithms for Packing and Cutting Problem; Engineering Applications of Computational Methods; Springer: Singapore, 2022; Volume 10.
Lamas-Fernandez, C.; Bennell, J.A.; Martinez-Sykora, A. Voxel-based solution approaches to the three-dimensional irregular packing problem. Oper. Res. 2023, 71, 1298–1317.
Gil, M.; Patsuk, V. Phi-functions for objects bounded by the second-order curves and their application to packing problems. In Smart Technologies in Urban Engineering; Arsenyeva, O., Romanova, T., Sukhonos, M., Tsegelnyk, Y., Eds.; STUE 2022, Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2023; Volume 536.
Santini, C.; Mangini, F.; Frezza, F. Apollonian Packing of Circles within Ellipses. Algorithms 2023, 16, 129.
Amore, P.; De la Cruz, D.; Hernandez, V.; Rincon, I.; Zarate, U. Circle packing in arbitrary domains. Phys. Fluids 2023, 35, 127112.
Kovalenko, A.A.; Romanova, T.E.; Stetsyuk, P.I. Balance Layout Problem for 3D-Objects: Mathematical Model and Solution Methods. Cybern. Syst. Anal. 2015, 51, 556–565.
Burtseva, L.; Pestryakov, A.; Romero, R.; Valdez, B.; Petranovskii, V. Some aspects of computer approaches to simulation of bimodal sphere packing in material engineering. Adv. Mater. Res. 2014, 1040, 585–591.
Ungson, Y.; Burtseva, L.; Garcia-Curiel, E.R.; Valdez Salas, B.; Flores-Rios, B.L.; Werner, F.; Petranovskii, V. Filling of Irregular Channels with Round Cross-Section: Modeling Aspects to Study the Properties of Porous Materials. Materials 2018, 11, 1901.
Burtseva, L.; Valdez Salas, B.; Romero, R.; Werner, F. Recent advances on modelling of structures of multi-component mixtures using a sphere packing approach. Int. J. Nanotechnol. 2016, 13, 44–59.
Available online: (accessed on 7 April 2023).
Chernov, N.; Stoyan, Y.; Romanova, T.
Mathematical model and efficient algorithms for object packing problem. Comput. Geom. Theory Appl. 2010, 43, 535–553.
Nocedal, J.; Wright, S.J. Numerical Optimization; Springer Series in Operations Research and Financial Engineering; Springer: New York, NY, USA, 2006.
Kallrath, J. Business Optimization Using Mathematical Programming; Springer: London, UK, 2021; ISBN 978-3-030-73237-0.
Chen, D. Sphere Packing Problem. In Encyclopedia of Algorithms; Kao, M.Y., Ed.; Springer: Boston, MA, USA, 2008.
Sahinidis, N. BARON User Manual v. 2024.5.8. Available online: (accessed on 8 May 2024).
IPOPT: Documentation. Available online: (accessed on 14 January 2023).
Stoyan, Y.; Yaskov, G. Packing congruent hyperspheres into a hypersphere. J. Glob. Optim. 2012, 52, 855–868.
Romanova, T.; Stoyan, Y.; Pankratov, A.; Litvinchev, I.; Marmolejo, J.A. Decomposition algorithm for irregular placement problems. In Intelligent Computing and Optimization, Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019), Koh Samui, Thailand, 3–4 October 2019; Intelligent Systems and Computing; Springer: Cham, Switzerland, 2019; Volume 1072, pp. 214–221.
Animasaun, I.L.; Shah, N.A.; Wakif, A.; Mahanthesh, B.; Sivaraj, R.; Koriko, O.K. Ratio of Momentum Diffusivity to Thermal Diffusivity: Introduction, Meta-Analysis, and Scrutinization; Chapman and Hall/CRC: New York, NY, USA, 2022.
Litvinchev, I.S. Refinement of Lagrangian bounds in optimization problems. Comput. Math. Math. Phys. 2007, 47, 1101–1108.
Litvinchev, I.; Rangel, S.; Saucedo, J. A Lagrangian bound for many-to-many assignment problems. J. Comb. Optim. 2010, 19, 241–257.
Lai, X.; Yue, D.; Hao, J.K.; Glover, F.; Lü, Z. Iterated dynamic neighborhood search for packing equal circles on a sphere. Comput. Oper. Res. 2023, 151, 106121.
Asadi Jafari, M.H.; Zarastvand, M.; Zhou, J. Doubly curved truss core composite shell system for broadband diffuse acoustic insulation. J. Vib. Control 2023.
Bulat, A.; Kiseleva, E.; Hart, L.; Prytomanova, O. Generalized Models of Logistics Problems and Approaches to Their Solution Based on the Synthesis of the Theory of Optimal Partitioning and Neuro-Fuzzy Technologies. In System Analysis and Artificial Intelligence; Studies in Computational Intelligence; Zgurovsky, M., Pankratova, N., Eds.; Springer: Cham, Switzerland, 2023; Volume 1107, pp. 355–376.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

Stoyan, Y.; Yaskov, G.; Romanova, T.; Litvinchev, I.; Velarde Cantú, J.M.; Acosta, M.L. Packing Spheres into a Minimum-Height Parabolic Container. Axioms 2024, 13, 396.
123409
Quantum circuits from non-unitary sparse binary matrices

Krishnageetha Karuppasamy, Varunteja Puram, K. M. George & Thomas P. Johnson

Scientific Reports, volume 15, Article number: 22502 (2025)

Article | Open access | Published: 02 July 2025

Abstract

Quantum computing leverages unitary matrices to perform reversible computations while preserving probability norms. However, many real-world applications involve non-unitary sparse matrices, posing a challenge for quantum implementation. This paper introduces a novel method for transforming a class of non-unitary sparse binary matrices into higher-dimensional permutation matrices, ensuring unitarity. Our approach is efficient in both space and time, ensuring practical applicability to large-scale problems. We demonstrate the utility of this transformation in constructing quantum gates and apply the method to model quantum finite state machines (QFSMs) derived from classical deterministic finite automata (DFAs). This work offers a practical pathway for integrating non-unitary transformations into quantum systems, with implications for the many applications that are based on sparse, non-unitary matrices. The significance of this work for automata theory and quantum computation is outlined.
Introduction

Quantum computing has the potential to revolutionize a wide range of scientific fields, including cryptography, drug discovery, climate modeling, finance, and artificial intelligence. Unlike classical computing, which relies on binary bits, quantum computing uses qubits, which can exist in superposition, allowing them to represent multiple states simultaneously. This unique property enables quantum computers to perform complex computations at exceptional speeds compared to classical computers.

Unitary matrices are crucial in quantum computing, where quantum gates are represented as unitary operators acting on qubits. Unitary matrices preserve two key properties of quantum systems: reversibility and probability conservation. In quantum mechanics, the evolution of a system must be reversible to ensure that no information is lost over time [1]. Additionally, the sum of probabilities for all possible states of a quantum system must always equal 1 (the norm of any quantum state is 1). These properties are embedded in unitary matrices, whose structure ensures that quantum gates can be inverted and that the quantum state's total probability remains unchanged throughout the computation process. This preservation is achieved by unitary transformations, as unitary matrices preserve the norm of vectors representing quantum states.

The motivation for this work can be found in non-Hermitian quantum physics, which is rooted in the observation that Hermiticity is a sufficient (not necessary) condition for real eigenvalues. Recently, many research papers have been published on this topic highlighting theory and applications. The works presented in [2,3] are based on the parity-time-symmetric (PT-symmetric) Hamiltonian theory. PT-symmetric Hamiltonians do not guarantee that the evolution operator is unitary. References [4,5,6] are based on the open-system formalism. In this case, a closed quantum system interacts with the environment. The significance of non-Hermitian or open-system time evolution is that it can be non-unitary. Hence, from the perspective of quantum computation, representation of non-unitary matrices as quantum circuits is necessary.

The duality computer, introduced in [7], is a theoretical model that extends the conventional quantum computer by allowing wave functions to coherently split and recombine along multiple paths. In this framework, a multi-dubit (duality bit) system propagates along two spatially identical paths. When these sub-waves recombine at the Quantum Wave Combiner (QWC), they interfere constructively because their spatial modes remain in phase. If only a single path is used, the duality computer effectively reduces to a standard quantum computer. This architectural extension provides additional flexibility in manipulating quantum information and lays the foundation for new algorithmic strategies. An algorithm named the LCU (linear combination of unitaries) algorithm was developed on the basis of the duality computer to synthesize quantum circuits. The LCU algorithm expresses a non-unitary matrix A as a linear combination of unitary matrices.
The resulting circuit is equivalent to embedding the matrix $A$ as the principal block of a higher-dimensional unitary matrix $U = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$. Zheng [8] applies the LCU algorithm to simulate a single-qubit non-unitary operator. A quantum circuit with two qubits is demonstrated using the LCU algorithm, where the $2 \times 2$ non-unitary matrix is decomposed as a linear combination of Pauli matrices. All these works suggest that non-unitary operations, and embedding them into quantum circuits, are necessary to synthesize quantum systems. In our work, the non-unitary matrix $A$ is assumed to be sparse and binary.

Despite these advances, many practical systems, such as those arising from partial differential equations (PDEs), involve non-unitary boundary conditions due to errors, noise, or model approximations [9,10]. PDEs are foundational in modeling real-world phenomena, and quantum computing offers the potential to reduce the cost of solving them. However, to leverage quantum algorithms for such applications, efficient conversion of non-unitary matrices into unitary forms is essential. Our proposed approach provides a structurally simple and resource-efficient alternative for a special class of non-unitary matrices. Specifically, we address non-unitary sparse binary matrices with norms greater than one and show how they can be embedded into higher-dimensional permutation matrices, which are naturally unitary.

1. Our main contribution in this paper is the introduction of a novel method for converting non-unitary sparse binary matrices with a norm greater than one into higher-dimensional unitary matrices. Specifically, we focus on transforming $n \times n$ square matrices that have at most $n$ nonzero entries, with each row containing no more than one nonzero element, into higher-dimensional permutation matrices. To the best of our knowledge, no prior work addresses this specific class of transformations.

2. An important advantage of our method is that the resulting permutation matrices are not only unitary but also allow for efficient quantum gate construction. Each permutation can be decomposed into a sequence of transpositions, which can be implemented as a series of SWAP or CNOT operations acting on binary encodings of the indices. This facilitates direct synthesis of permutation-based unitaries into hardware-efficient quantum circuits.

3. As an application, we demonstrate the implementation of a Quantum Finite State Machine (QFSM) where the initial transformation matrix is non-unitary. Our results provide an effective solution to this challenge, ensuring the preservation of unitary properties essential for quantum computations.

Related works

Our work aims to construct permutation matrices from a certain class of non-unitary matrices. With that focus, we review related work. In [11], Robert M. Gingrich and Colin P. Williams addressed the problem of computing non-unitary operators probabilistically and presented a method to convert a non-unitary matrix into a unitary matrix. They construct the quantum circuit for the operation $\rho \to \frac{M\rho M^{\dagger}}{\mathrm{tr}(M\rho M^{\dagger})}$, where $M$ is non-unitary, $\rho$ is a density matrix, and $\dagger$ denotes the complex conjugate transpose. The method presented first converts the non-unitary matrix $M$ into a higher-dimensional unitary matrix by padding zeros.
The unitary matrix $U$ is obtained by the transformation $U = e^{\,i\varepsilon \begin{bmatrix} 0 & -iN \\ iN^{\dagger} & 0 \end{bmatrix}}$. The computation introduces an ancilla qubit and approximates the operation as $\rho' = U\left(|1\rangle\langle 1| \otimes \rho\right) M^{\dagger}$. However, the most significant drawback is its non-deterministic nature. The success probability depends on the norm of the operator, often requiring multiple repetitions that increase circuit depth and error accumulation. Additionally, constructing the required higher-dimensional unitary embedding incurs gate overhead and may disturb the quantum state upon failure. These factors limit the scalability and efficiency of the approach, especially for large systems or complex operators.

Childs and Wiebe [12] introduced the LCU method. In this approach, a non-unitary matrix is expressed as a weighted sum of unitary operators, $A = \sum_i a_i U_i$, where each $U_i$ is a unitary (e.g., a Pauli string). The LCU framework constructs a quantum circuit that uses ancilla qubits to encode the coefficients $a_i$, performs controlled-unitary operations, and applies oblivious amplitude amplification (OAA) to boost the success probability of the correct evolution. This method provides a powerful, general-purpose way to simulate the time evolution $e^{-iHt}$, where $t$ is the evolution time and $H$ is a Hermitian matrix. However, it is probabilistic and requires ancilla-driven controlled operations and post-selection or amplitude amplification, which introduce additional circuit depth and ancilla overhead. In contrast, our proposed method enables deterministic and low-depth circuit implementations without probabilistic post-selection. This offers a resource-efficient and hardware-friendly alternative for representing non-unitary operations within a fully unitary framework.

Lin [13] proposed a block-encoding method. This approach embeds a general matrix (which may not be unitary) into a higher-dimensional unitary matrix, enabling quantum algorithms to process non-unitary matrices through additional quantum operations. This technique has been foundational in quantum algorithms for efficiently representing complex matrices. Given a matrix $A$, which may not be unitary, block encoding allows constructing a unitary matrix $U$ such that $A$ is a submatrix of $U$. Formally, we express $U = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$. In this approach, $A$ represents the original matrix, while $B$, $C$, and $D$ are selected to ensure that $U$ is unitary. This transformation allows quantum algorithms to handle non-unitary matrices by simulating unitary operations within a higher-dimensional space. While the size of $U$ increases linearly, the method relies on the singular value decomposition, which has a computational complexity of $O(n^3)$. In contrast, our proposed method delivers significant improvements in both computational efficiency and practical implementation, as described later.
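For a concrete picture of what such an embedding looks like, the snippet below builds the classical unitary dilation $U = \begin{pmatrix} A & (I - AA^{\dagger})^{1/2} \\ (I - A^{\dagger}A)^{1/2} & -A^{\dagger} \end{pmatrix}$, which is unitary whenever $\|A\|_2 \le 1$. This is a generic textbook construction shown for illustration only; it is neither the block-encoding circuits of [13] nor the permutation-based method proposed in this paper, which targets binary matrices whose norm exceeds one and therefore cannot be dilated this way without rescaling.

```python
import numpy as np
from scipy.linalg import sqrtm

def unitary_dilation(A):
    """Embed a contraction A (||A||_2 <= 1) as the top-left block of a 2n x 2n unitary."""
    n = A.shape[0]
    I = np.eye(n)
    D_row = sqrtm(I - A @ A.conj().T)      # (I - A A^dagger)^(1/2)
    D_col = sqrtm(I - A.conj().T @ A)      # (I - A^dagger A)^(1/2)
    return np.block([[A, D_row],
                     [D_col, -A.conj().T]])

# a non-unitary sparse binary matrix (both 1's in the first column), rescaled below norm 1
A = 0.5 * np.array([[1.0, 0.0],
                    [1.0, 0.0]])
U = unitary_dilation(A)
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: U is unitary
print(np.allclose(U[:2, :2], A))                # True: A sits in the top-left block
```

The price of this generic route is the matrix square roots (cubic cost) and the norm restriction, which is exactly the kind of overhead the method discussed in this paper seeks to avoid for its special matrix class.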
In [14], George Cybenko addresses the challenge of simplifying complex quantum computations into sequences of elementary quantum operations. Re et al. demonstrate that any general unitary operation can be represented as a sequence of elementary quantum gates. The paper also discusses the importance of maintaining specific properties, like unitarity and control, during the decomposition process. The work also raises questions about the efficiency and feasibility of implementing these reductions in practice, particularly regarding the exponential number of operations required as the number of qubits increases, which remains a significant challenge in the field of quantum computing.

Planat et al. [15], in the section "From permutations to quantum gates", establish a link between classical permutation matrices and quantum gates. Permutation matrices, which rearrange elements by placing 1s in specific positions and 0s elsewhere, can describe certain quantum gates, especially the controlled-NOT (CNOT) gate. The authors introduce a particular subset of these matrices, termed "magic" permutation matrices, which have 1s on the main diagonal. These matrices correspond to essential quantum gates, such as the Pauli X gate, the CNOT gate, and the Toffoli gate, which are foundational in constructing multi-qubit quantum gates and generating quantum states like stabilizer states and "magic" states. However, this paper focuses only on a limited group of permutation matrices, those that correspond directly to these specific quantum gates. It does not extend the analysis to the general class of permutation matrices, nor does it explore the broader applicability of permutation matrices in quantum gate design.

Weber [16] provides several examples of quantum permutation matrices, illustrating how these matrices can arise from combinations of classical operations but with quantum behavior embedded. One of the key examples discussed involves Pauli matrices combined with unitary transformations, generating a quantum permutation matrix from their tensor products. The study of quantum permutation matrices extends to their application in quantum isomorphisms of graphs, which allows quantum analogs of graph isomorphisms. These quantum isomorphisms provide a broader symmetry framework for graph structures in quantum settings, indicating that quantum symmetries can go beyond classical permutations.

One application of our work in constructing permutation matrices from non-unitary matrices can be seen in the domain of quantum automata. Similar to the work on quantum automata and quantum grammars by Moore and Crutchfield [17], where unitary matrices are used to model quantum versions of classical computational structures like finite state machines and pushdown automata, our approach provides a method for handling non-unitary matrices. In their models, unitary matrices were connected to alphabets and grammar symbols, which are applied during state transitions in Hilbert space. In this article, we present a method to convert non-unitary sparse binary matrices into permutation matrices, enabling their use in quantum implementations of finite state machine (QFSM) models with initial non-unitary transformation matrices. Our approach offers a practical solution for integrating non-unitary transformations within a quantum framework and provides a pathway for mapping permutation matrices to quantum gates.

Another key contribution of our work is the ability to map the resulting permutation matrices directly to quantum gate sequences. Each permutation is decomposed into a series of transpositions, which can be implemented using a small set of hardware-efficient gates such as CNOT and SWAP. This decomposition allows for the systematic construction of low-depth quantum circuits, making our method highly suitable for noisy intermediate-scale quantum (NISQ) devices.
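As a small illustration of the gate-mapping idea described above (our own sketch, not the authors' circuit-synthesis procedure), the following code factors a permutation into transpositions by walking its cycles and applies them to a state vector; in an actual circuit each transposition would be realized by SWAP/CNOT gates acting on the binary encodings of the two indices involved.

```python
import numpy as np

def permutation_to_transpositions(perm):
    """Factor the permutation i -> perm[i] into transpositions via its disjoint cycles."""
    swaps, seen = [], [False] * len(perm)
    for start in range(len(perm)):
        if seen[start]:
            continue
        cycle, j = [], start
        while not seen[j]:                  # collect one cycle starting at `start`
            seen[j] = True
            cycle.append(j)
            j = perm[j]
        # the cycle (c0 c1 ... ck) equals the product (c0 ck) ... (c0 c2)(c0 c1)
        swaps.extend((cycle[0], c) for c in cycle[1:])
    return swaps

def apply_as_swaps(perm, state):
    """Apply the permutation matrix (column i has its 1 in row perm[i]) one swap at a time."""
    state = np.asarray(state, dtype=float).copy()
    for i, j in permutation_to_transpositions(perm):
        state[[i, j]] = state[[j, i]]       # stand-in for a SWAP acting on basis amplitudes
    return state

perm = [2, 0, 1, 3]                          # column i of M has its 1 in row perm[i]
M = np.zeros((4, 4)); M[perm, np.arange(4)] = 1.0
v = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(M @ v, apply_as_swaps(perm, v)))   # True: the swap sequence reproduces M
```

Since a permutation of N indices never needs more than N - 1 transpositions, the length of the resulting gate sequence grows at most linearly with the dimension of the permutation matrix.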
Our work offers a novel solution to the challenge of maintaining the unitary properties essential for quantum computation when starting from non-unitary matrices. Our method produces a unitary matrix of size \(np \times np\), significantly reducing resource requirements compared to other approaches. Additionally, our method enables effective handling of non-unitary matrices within quantum systems, specifically in QFSM models, where unitary matrices are associated with alphabets and grammar symbols that facilitate state transitions in a Hilbert space.

Proposed method for sparse to permutation matrix conversion

In this section we outline our proposed approach to create a permutation matrix from a non-unitary sparse binary matrix. The class of matrices we consider are sparse binary matrices that have at most one nonzero entry in a row. Our method is presented via the two propositions stated below.

Proposition 1

Let \(T \in {\mathbb{R}}^{n \times n}\) be a matrix with entries \({T}_{ij}= \left\{\begin{array}{ll}0, & 1 \le j\le n-1\\ 1, & j=0\end{array}\right.\). Then there is a permutation matrix \(M \in {\mathbb{R}}^{m\times m}\), where \(m={n}^{2}\), such that \(M= \left[\begin{array}{ccc}{M}_{00}& \cdots & {M}_{0(n-1)}\\ \vdots & \ddots & \vdots \\ {M}_{\left(n-1\right)0}& \cdots & {M}_{(n-1)(n-1)}\end{array}\right]\) and \(T= \sum_{l=0}^{n-1}{M}_{0l}\), where the \({M}_{kl}\) are \(n\times n\) block matrices.

Proof

Given \(T\), construct a matrix \(A\) of dimension \(m\times m = {n}^{2}\times {n}^{2}\) using the tensor product \(A = {I}_{n \times n}\otimes {T}_{n \times n}\), where \(I\) is the identity matrix. Observe that \(A\) is a diagonal block matrix with every diagonal block being \(T\). Also, only the first column of \(T\) is nonzero, with all its elements equal to 1. So, we can split \(A\) into \(m \times m\) matrices \({A}_{t}\) such that \(A= \sum_{t=0}^{m-1} {A}_{t}\), where all elements of \({A}_{t}\) are 0 except the element \({A}_{t}\left[t, k\right]\), with \(k=n \times \lfloor t/n \rfloor\). Apply mod-\(m\) column permutations on each \({A}_{t}\) to move the 1's into distinct columns. Let \(M = {A}_{0} + \sum_{t=1}^{n-1} {A}_{t}\,P\left(0,\, nt\right) + \sum_{t=1}^{n} {A}_{t}\,P\left(n,\, \left(nt+1\right)\bmod m\right) + \cdots + \sum_{t=1}^{n} {A}_{t}\,P\left(m-n,\, \left(\left(m-n\right)t+\left(n-1\right)\right)\bmod m\right)\), where \(P\in {\mathbb{R}}^{m\times m}\) denotes a permutation matrix. By construction, each row and column of \(M\) has exactly one nonzero entry, which is 1, so \(M\) is a permutation matrix and hence unitary. We can therefore represent \(M\) as an \(n \times n\) array of \(n\times n\) blocks, each having exactly one nonzero element. Also, for the first row of blocks \({M}_{0l}\), \(0\le l \le n-1\), the nonzero entries are \({M}_{0l}\left[l,0\right]=1\). Furthermore, each step of the construction of \(M\) is well defined, and hence, given the final \({n}^{2}\times {n}^{2}\) permutation matrix, we can determine the initial \(n\times n\) matrix. If all entries of column \(k\) rather than column 0 are 1's, then we can apply the permutations \(P\left(k,nt+k\right)\), \(P(n+k, nt+k+1)\), etc. to construct the matrix \(M\). Thus, we can extend Proposition 1 to any matrix \(T\) in which all columns but one are zero and the nonzero column is \({\left[\begin{array}{cccc}1& 1& \cdots & 1\end{array}\right]}^{T}\).
Properties of M

1. The matrix \(M\) is an \({n}^{2}\times {n}^{2}\) block matrix, represented as \(M= \left[\begin{array}{ccc}{M}_{00}& \cdots & {M}_{0(n-1)}\\ \vdots & \ddots & \vdots \\ {M}_{\left(n-1\right)0}& \cdots & {M}_{(n-1)(n-1)}\end{array}\right]\), where each \({M}_{kl}\), \(0\le k,l \le n-1\), is an \(n\times n\) matrix. Each block \({M}_{kl}\) contains exactly one nonzero entry, located at position \(\left[x,y\right]\), with \({M}_{kl}\left[x,y\right]=1\). For a fixed \(k\), if \(l=k\) then \(x=0\) and \(y=k\); if \(l=k+1\) then \(x=1\) and \(y=k\); and so on. If \(l\) reaches \(n\), it is reset to \(l=0\) and the process continues until \(l=k-1\). So, in the overall matrix, the first \(n\) rows each have exactly one nonzero entry, positioned at columns \(0, n, 2n, \dots\), respectively. The matrix \(M\) is obtained by first taking the tensor product of the matrix \(T\) with the identity matrix, creating a block diagonal matrix \(A\) with diagonal blocks equal to \(T\). Then, columns of \(A\) are permuted in such a way that only one row in each \(n\)-row block is shifted per permutation. As a result, each \(n\)-row block of \(M\) corresponds to a column-permuted version of \(T\), with distinct permutations applied across blocks.

2. $$\sum_{l=0}^{n-1}{M}_{0l}=T$$ Hence, if \(v\) is an \(n\)-dimensional vector, then \(\left(\sum_{l=0}^{n-1}{M}_{0l}\right)v=Tv\). The matrices \(\sum_{l=0}^{n-1}{M}_{kl}\) are column permutations of \(T\) starting with the first column.

3. Let \(v\) be an \(n\)-dimensional vector and \(\vec{1}= {[1, 1, \cdots ,1]}^{T}\) be the \(n\)-dimensional vector with all entries equal to 1. Then \(M\left(\vec{1} \otimes v\right) = \left[\left(\sum_{l=0}^{n-1} M_{0l}\right)v,\; \left(\sum_{l=0}^{n-1} M_{1l}\right)v,\; \cdots,\; \left(\sum_{l=0}^{n-1} M_{(n-1)l}\right)v\right]^{T}\).

4. From the above properties, \(Tv\) constitutes the first \(n\) elements of \(M\left(\vec{1} \otimes v\right)\).

Example 1

To illustrate the construction described above, we provide an example demonstrating how to derive a unitary matrix from a non-unitary sparse binary matrix. Consider a matrix \(T\) of dimension \(2\times 2\) in which exactly 2 entries are 1's, both in the first column, while the remaining entries are 0.
Given \(n=2\) and \(T=\left[\begin{array}{cc}1& 0\\ 1& 0\end{array}\right]\), by applying Proposition 1 we get:

Step 1: Compute \(A = I_{n \times n} \otimes T_{n \times n}\), which gives

$$A=\left[\begin{array}{cccc}1& 0& 0& 0\\ 1& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 1& 0\end{array}\right]$$

Step 2: Compute the individual matrices \({A}_{i}\):

$${A}_{0}=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right],\quad {A}_{1}=\left[\begin{array}{cccc}0& 0& 0& 0\\ 1& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right],\quad {A}_{2}=\left[\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 0\end{array}\right],\quad {A}_{3}=\left[\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 1& 0\end{array}\right]$$

Step 3: Compute the \({A}_{i}P\)'s:

$${A}_{1}{P}_{(0,2)}=\left[\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right],\quad {A}_{2}{P}_{(2,3)}=\left[\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 1\\ 0& 0& 0& 0\end{array}\right],\quad {A}_{3}{P}_{(2,1)}=\left[\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 1& 0& 0\end{array}\right]$$

Step 4: Finally, build the matrix \(M\):

$$M=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\\ 0& 1& 0& 0\end{array}\right]$$
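The result of Example 1 and Properties 1-4 are easy to check numerically. A minimal numpy sketch follows (the test vector v is an arbitrary choice):

```python
import numpy as np

n = 2
T = np.array([[1, 0],
              [1, 0]])                 # sparse binary matrix with ones in column 0
M = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 0]])           # permutation matrix from Example 1

# M is a permutation matrix (one 1 per row and column), hence unitary
assert (M.sum(axis=0) == 1).all() and (M.sum(axis=1) == 1).all()
assert np.array_equal(M @ M.T, np.eye(n * n, dtype=int))

# Property 2: the blocks of the first block-row sum to T
assert np.array_equal(M[:n, :n] + M[:n, n:], T)

# Properties 3-4: Tv appears as the first n entries of M(1 tensor v)
v = np.array([3, 5])
lifted = M @ np.kron(np.ones(n, dtype=int), v)
assert np.array_equal(lifted[:n], T @ v)
print("Example 1 checks passed")
```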
The general case

In the previous section, we presented and demonstrated a method for constructing a unitary matrix from a non-unitary sparse binary matrix in which all non-zero entries are confined to the first column. In this section, we extend our discussion to the general case of an \(n \times n\) sparse binary matrix that contains at most \(n\) non-zero entries. These non-zero entries are distributed such that some columns contain only zeros, while others contain more than one non-zero entry. The method for constructing a unitary matrix in this context is outlined in the following proposition.

Proposition 2

Let \(T \in {\mathbb{R}}^{n \times n}\) be an \(n\times n\) matrix in which every row has exactly one nonzero element, with entries \({T}_{ij}\in \left\{0, 1\right\}\). The matrix \(T\) has exactly \(n\) entries equal to 1, and some of its columns consist entirely of zero entries. Then there is a permutation matrix \(M \in {\mathbb{R}}^{m\times m}\), where \(m=np\) and \(p\) is the maximum number of nonzero elements in any column of \(T\). Further, \(M\) is structured as a block matrix \(\left[\begin{array}{ccc}{M}_{00}& \cdots & {M}_{0(p-1)}\\ \vdots & \ddots & \vdots \\ {M}_{\left(p-1\right)0}& \cdots & {M}_{(p-1)(p-1)}\end{array}\right]\), where each \({M}_{kl}\) is an \(n\times n\) block. Additionally, we can express \(T= \sum_{i=0}^{p-1}{M}_{0i}\).

Proof

We prove the statement by constructing the matrix \(M\) from the given matrix \(T\) as follows. Let \(T\) be an \(n\times n\) binary matrix with \(q\) nonzero columns and let \(p\) be the maximum number of nonzero entries in a column of \(T\). We construct \(n\times n\) matrices \({T}_{0}, {T}_{1},\dots ,{T}_{q-1}\), where each \({T}_{j}\) has only one nonzero column. Specifically, the nonzero column of \({T}_{0}\) corresponds to the first nonzero column of \(T\), the nonzero column of \({T}_{1}\) corresponds to the second nonzero column of \(T\), and so on. Then we can express \(T\) as the sum of these matrices: \(T= \sum_{i=0}^{q-1}{T}_{i}\). For each \(i=0, \dots, q-1\), construct the \(np\times np\) matrix \(A_{i} = I \otimes T_{i}\), where \(I\) is the \(p\times p\) identity matrix. Since \(T\) contains exactly one nonzero element in each row, every distinct pair \({T}_{i}\) and \({T}_{j}\) has the property that if a row of one matrix is nonzero, the corresponding row of the other is zero. Each \({A}_{i}\) is a block diagonal matrix equal to \(\left[\begin{array}{ccc}{T}_{i}& \cdots & 0\\ \vdots & \ddots & \vdots \\ 0& \cdots & {T}_{i}\end{array}\right]\). If \(l\) is the first nonzero column of \({A}_{i}\), then the other nonzero columns are \(n+l, 2n+l, \dots, \left(p-1\right)n+l\). Each nonzero segment of a nonzero column occurs in a diagonal block. We follow the method used in the proof of Proposition 1 to express \({A}_{i}\) as the sum of \(m\) matrices. Let \({A}_{i}^{j}\) be the matrix whose \({j}^{th}\) row is a copy of the \({j}^{th}\) row of \({A}_{i}\) and whose other rows are zero. Viewed as a block matrix, all blocks are zero matrices except the one enclosing the \({j}^{th}\) row, which has exactly one nonzero element. Each \({A}_{i}^{j}\) has exactly one nonzero element, in one of the columns \(l, n+l, 2n+l, \dots, \left(p-1\right)n+l\), in one of the diagonal blocks. Then \({A}_{i}= \sum_{j=0}^{m-1}{A}_{i}^{j}\). Nonzero columns of different \({A}_{i}^{j}\) may coincide if \({A}_{i}\) has more than one nonzero element in a column. We use column permutations to align the nonzero elements of the \({A}_{i}^{j}\) so that no two matrices have nonzero elements in the same column. For the elements in column \(l\) (those in the first diagonal block), use the permutations \(P(l, ns+s+1)\), where \(s=0,\dots ,n-1\). For elements of column \(nr+l\), where \(r=1,\dots, p-1\) (those in the \({r}^{th}\) diagonal block), use the permutations \(P(nr+l, \left(nr+l+ns+r\right) \bmod m)\). Here \(s\) gives the relative position of the nonzero element in the column, counted from the first. Let \({B}_{i}^{j}\) denote \({A}_{i}^{j}\) after the outlined permutation, and let \({B}_{i}= \sum_{j=0}^{m-1}{B}_{i}^{j}\). Then \({B}_{i}\) has at most one nonzero element (which is 1) in each row and column. Also, by construction, for any pair \({B}_{i}\) and \({B}_{j}\) there is no nonzero intersection of elements. Let \(M= \sum_{i=0}^{q-1}{B}_{i}\). Since \(T\) has exactly \(n\) nonzero elements, \(M\) has exactly \(m = np\) nonzero elements, and no row or column contains more than one nonzero element, so \(M\) is a permutation matrix. If we represent \(M\) as a \(p\times p\) block matrix, \(M= \left[\begin{array}{ccc}{M}_{00}& \cdots & {M}_{0(p-1)}\\ \vdots & \ddots & \vdots \\ {M}_{\left(p-1\right)0}& \cdots & {M}_{(p-1)(p-1)}\end{array}\right]\), and each \({B}_{i}\) as a block matrix, \({B}_{i}= \left[\begin{array}{ccc}{B}_{i}^{00}& \cdots & {B}_{i}^{0(p-1)}\\ \vdots & \ddots & \vdots \\ {B}_{i}^{\left(p-1\right)0}& \cdots & {B}_{i}^{(p-1)(p-1)}\end{array}\right]\), then \({M}_{0j}= \sum_{i=0}^{q-1}{B}_{i}^{0j}\) is an \(n\times n\) matrix. Hence \(T= \sum_{i=0}^{p-1}{M}_{0i}\).

Example 2

To illustrate the construction described above, we provide an example demonstrating how to derive a unitary matrix from a sparse binary matrix. Consider a matrix \(T\) of dimension \(4\times 4\) in which exactly 4 entries are 1's and some columns are entirely composed of zero entries. Given \(T=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 0& 1& 0\\ 1& 0& 0& 0\\ 0& 0& 1& 0\end{array}\right]\), we aim to construct a permutation matrix \(M\) by applying Proposition 2, where \(p=2\).
The steps are as follows:

Step 1: Express \(T\) as the sum of matrices with isolated nonzero columns, \(T= \sum_{i=0}^{q-1}{T}_{i}\). Since \(T\) has two nonzero columns with two 1's each, we write \(T\) as the sum \(T={T}_{0}+{T}_{1}\), where

$${T}_{0}=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 0& 0& 0\\ 1& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right]\text{ and }{T}_{1}=\left[\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 0\\ 0& 0& 1& 0\end{array}\right]$$

Step 2: Construct the matrices \(A_{i} = I \otimes T_{i}\) for each \(T_{i}\):

$$A_{0} = \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right] \otimes \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right] = \left[\begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$$

Step 3: Decompose each \({A}_{i}\) into matrices \({A}_{i}^{j}\) containing at most one 1 each. For \({A}_{0}\):

$${A}_{0}= {A}_{0}^{0}+ {A}_{0}^{1}+ {A}_{0}^{2}+ {A}_{0}^{3}$$

where each \({A}_{0}^{j}\) has one nonzero entry. For example:

$$A_{0} = \left[\begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right] + \left[\begin{array}{cccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right] + \left[\begin{array}{cccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right] + \cdots$$

Step 4: Apply column permutations to ensure unique positions. For each \({A}_{0}^{j}\), apply column permutations (as in Proposition 1) to avoid overlaps among the nonzero entries.
Let the resulting matrices be \({B}_{0}^{j}\):

$${B}_{0}^{0}=\left[\begin{array}{cccccccc}1& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right],\quad {B}_{0}^{1}=\left[\begin{array}{cccccccc}0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right],\quad {B}_{0}^{2}=\left[\begin{array}{cccccccc}0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 1& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right],\quad {B}_{0}^{3}=\left[\begin{array}{cccccccc}0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right]$$

Step 5: Combine all \({B}_{0}^{j}\) to form \({B}_{0}\):

$${B}_{0}= \left[\begin{array}{cccccccc}1& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 1& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right]$$

Likewise, from \({T}_{1}\) we construct

$${B}_{1}=\left[\begin{array}{cccccccc}0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 1\end{array}\right]$$

Step 6: Finally, construct \(M\) as the sum of the \({B}_{i}\), i.e. \(M = {B}_{0} + {B}_{1}\):

$$M = \left[\begin{array}{cccccccc}1& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 1& 0\\ 0& 1& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 1\end{array}\right]$$
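A minimal numpy check of the Example 2 result follows. It verifies that \(M\) is a permutation (hence unitary) matrix and that its first block-row sums to \(T\), as stated in Proposition 2; for this particular \(M\), the lifting check used in Example 1 also goes through (the test vector is an arbitrary choice):

```python
import numpy as np

n, p = 4, 2
T = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]])

# M = B0 + B1 from Example 2 (size np x np = 8 x 8), defined by its row -> column map
M = np.zeros((n * p, n * p), dtype=int)
for row, col in enumerate([0, 2, 4, 6, 1, 3, 5, 7]):
    M[row, col] = 1

assert (M.sum(axis=0) == 1).all() and (M.sum(axis=1) == 1).all()   # permutation matrix

# Proposition 2: the first block-row of M sums to T
first_block_row = sum(M[:n, i * n:(i + 1) * n] for i in range(p))
assert np.array_equal(first_block_row, T)

# Tv sits in the first n entries of M(1 tensor v) for this M
v = np.array([2, 3, 5, 7])
assert np.array_equal((M @ np.kron(np.ones(p, dtype=int), v))[:n], T @ v)
print("Example 2 checks passed")
```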
Space complexity

The total space complexity for constructing the matrix \(M\) is \(O(np)\), where \(n\) is the dimension of the given sparse matrix \(T\) and \(p\le n\) is the maximum number of nonzero entries in any column of \(T\).

Time complexity

For a given sparse binary matrix of dimension \(n\), constructing the corresponding permutation matrix requires \({n}^{2}\) assignments. This results in a time complexity of \(O({n}^{2})\), reflecting the computational effort needed to complete the transformation.

Comparison of methods

Our proposed method, with a space complexity of \(O(np)\) and a time complexity of \(O({n}^{2})\), scales efficiently for sparse matrices; here \(n\) is the matrix dimension and \(p\) is the maximum number of nonzero entries in any column, ensuring practical applicability to large-scale problems. This contrasts with the method by Gingrich and Williams in11, which constructs a unitary matrix through probabilistic computations, with computational overhead that is unsuitable for large-scale systems. Similarly, the block encoding method in13 embeds a non-unitary matrix into a higher-dimensional unitary matrix using a singular value decomposition; its space complexity is linear in the size of the output matrix, but its time complexity of \(O({n}^{3})\) limits scalability because the decomposition is computationally expensive. While the latter methods are versatile and can handle general non-unitary matrices, their resource demands make them impractical for many applications. In comparison, our method not only ensures a unitary transformation but also reduces the time complexity, making it particularly advantageous for sparse matrices and applications requiring efficient quantum computations. Table 1 shows the comparison between the proposed method and the other methods.

Table 1: Comparison between different approaches.

Permutation matrices to gates

From the previous section, we conclude that any non-unitary sparse binary matrix with exactly \(n\) nonzero entries (where \(n\) is the size of the matrix) can be transformed into a permutation matrix, which is inherently unitary. Though permutation matrices are unitary, their practical implementation is restricted to a particular set of quantum gates. In this section, we demonstrate how to decompose any permutation into a product of elementary quantum gates.

Observation on permutation matrices

In18, the system model defines a parameterized quantum circuit as a sequence of unitary operations acting on an input state, expressed as

$$U\left(\overrightarrow{\theta }\right)={U}_{L}\left({\theta }_{L}\right){U}_{L-1}\left({\theta }_{L-1}\right)\dots {U}_{1}({\theta }_{1}) \qquad (1)$$

This structure underlies many variational quantum algorithms, where each \({U}_{i}({\theta }_{i})\) represents a layer or gate applied sequentially to the quantum register. Our method aligns naturally with this model because the permutation matrices constructed from sparse binary matrices can be directly translated into such unitary operations, with each permutation corresponding to a SWAP or CNOT gate. Let \({a}_{1}, {a}_{2},\dots, {a}_{n}\) represent an ordered sequence and consider a permutation \({P}_{(i,j)}\) that swaps the elements at positions \(i\) and \(j\). The permutation \({P}_{(i,j)}\) can be expressed as a product of transpositions of neighboring elements as follows:

$${P}_{(i,j)}={P}_{(i,i+1)}\, {P}_{\left(i+1,i+2\right)}\dots {P}_{\left(j-1,j\right)}\,{P}_{\left(j-2,j-1\right)}\dots {P}_{(i,i+1)} \qquad (2)$$

Each of these adjacent transpositions corresponds to a hardware-efficient gate, such as a SWAP or CNOT, and fits directly into the layered structure of Eq. (1). Moreover, these permutation operations can be interpreted as unitaries derived from Hermitian generators, using the exponential form

$${U}_{i}\left({\theta }_{i}\right)=\text{exp}(-i{\theta }_{i}{P}_{i}) \qquad (3)$$

where \({P}_{i}\) is a Hermitian matrix representing a basic transposition and \({\theta }_{i}\) is a tunable parameter. This provides a formal mapping from permutation logic to the standard unitary framework used in quantum circuits. An additional advantage of our approach is its exploitation of Hamming distance: any permutation of binary strings can be decomposed into transpositions between strings that differ by a Hamming distance of 1. This means the overall permutation matrix can be realized as a sequence of minimal bit-flip operations, further reducing circuit depth and enhancing efficiency for NISQ devices19. In summary, by embedding non-unitary sparse binary matrices into structured permutation matrices and decomposing them into transpositions with minimal Hamming distance, we enable an efficient realization of non-unitary operations in the unitary model of Eq. (1).
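A short Python sketch illustrates the Hamming-distance decomposition just described: it splits an arbitrary transposition of basis-state indices into transpositions between strings at Hamming distance 1 and checks the result by multiplying the corresponding permutation matrices. The helper names and the particular bit-flip path are illustrative choices, not the paper's implementation.

```python
import numpy as np

def hd1_transpositions(i, j, nbits):
    """Decompose the transposition (i, j) into transpositions between bit strings
    at Hamming distance 1, by conjugating along a one-bit-flip path from i to j."""
    path, cur = [i], i
    for b in range(nbits):
        if (i ^ j) >> b & 1:          # flip each differing bit in turn
            cur ^= 1 << b
            path.append(cur)
    middle = (path[-2], path[-1])
    wings = [(path[k], path[k + 1]) for k in range(len(path) - 2)]
    return wings + [middle] + wings[::-1]

def as_matrix(pairs, dim):
    """Multiply out a list of transpositions as a dim x dim permutation matrix."""
    P = np.eye(dim, dtype=int)
    for a, b in pairs:
        S = np.eye(dim, dtype=int)
        S[[a, b]] = S[[b, a]]
        P = S @ P
    return P

# Transposition between |01> and |10> (Hamming distance 2), as in the later DFA example
seq = hd1_transpositions(0b01, 0b10, nbits=2)
assert np.array_equal(as_matrix(seq, 4), as_matrix([(0b01, 0b10)], 4))
print(seq)   # [(1, 0), (0, 2), (1, 0)] with this bit order; the text routes through |11> instead
```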
Transposition to quantum gates

We construct quantum gates to implement transpositions represented by matrices whose rows, read as binary integers, differ by a Hamming distance of 1. Since a transposition with a Hamming distance of 1 involves swapping two elements that differ in only one bit, it can be realized using a controlled quantum operation that acts conditionally on that bit. It is well known that any permutation can be expressed as a product of such transpositions; hence we can systematically construct a sequence of quantum gates to realize any desired permutation gate. This approach allows for the decomposition of any arbitrary permutation matrix into a series of elementary gates that operate on transpositions, thereby enabling efficient implementation in quantum circuits. We start with a given \({2}^{n} \times {2}^{n}\) permutation matrix. First, we analyze the matrix to identify the indices that have been swapped by the permutation. This can be achieved by comparing the rows and columns of the permutation matrix to determine which positions map to each other. Once the swapped indices are identified, we construct quantum gates to implement these swaps. For each swap, we check whether the indices differ by a Hamming distance of 1. If they do, the corresponding transposition can be realized directly using a single gate designed for Hamming-distance-1 swaps. If the Hamming distance is greater than 1, we decompose the swap into a sequence of transpositions between neighboring elements (i.e., intermediate swaps with Hamming distance 1), as shown in the earlier observation. By repeating this process for all swaps in the permutation matrix, we construct a sequence of quantum gates that faithfully implements the given \({2}^{n} \times {2}^{n}\) permutation matrix. This method ensures a systematic and efficient translation of any permutation matrix into a quantum circuit. The method is implemented in Python 3 using Qiskit, and the pseudocode listings are provided in the Supplementary Materials. At a high level, the method involves the following steps:

1. Preprocessing the matrix to identify the swapped elements and the corresponding qubit controls.

2. Checking the Hamming distance between the binary representations of the indices. If the Hamming distance is 1, a direct multi-controlled-X (MCX) gate is inserted. If it is greater than 1, the permutation matrix is factorized into simpler transpositions.

3. Recursive construction of the circuit until the full permutation is realized.

This method guarantees that any \({2}^{n} \times {2}^{n}\) transposition matrix can be translated into a corresponding \(n\)-qubit quantum circuit through a systematic construction of controlled gates.

Illustrative example

We demonstrate the method with a simple example transposition matrix:

$$M=\left[\begin{array}{cccccccc}1& 0& 0& 0& 0& 0& 0& 0\\ 0& 1& 0& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 0& 0& 1\end{array}\right]$$

This is a transposition between states 4 and 5 (with states indexed from 0 to \({2}^{n}-1\)). The binary indices of the states are \({bin}_{s1}=100\) and \({bin}_{s2}=101\), and the Hamming distance between them is 1. Therefore, the swap can be implemented using a multi-controlled X (MCX) gate with controls on qubit 0 and qubit 1 and the target on qubit 2. Negative-control handling is required for qubit 1, as it is 0 in both \({bin}_{s1}\) and \({bin}_{s2}\). The resulting quantum circuit is shown in Fig. 1. The circuit uses a positive control on qubit 0 and a negative control on qubit 1 (created by adding an X gate before and after the control) to apply an X (NOT) gate on the target qubit 2; the negative control ensures that the gate is triggered when qubit 1 is in the state \(\left| 0 \right\rangle\).

Fig. 1: Quantum circuit for transposition M.
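A minimal Qiskit sketch of this transposition circuit follows. It assumes Qiskit's little-endian qubit ordering (qubit 0 is the least significant bit), so the wire indices differ from the qubit labels used above even though the realized 8 x 8 matrix is the same.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

# Swap basis states 4 (binary 100) and 5 (binary 101): flip the least significant bit
# when the most significant bit is 1 and the middle bit is 0.  With Qiskit's ordering
# this means target = qubit 0, positive control = qubit 2, negative control = qubit 1.
qc = QuantumCircuit(3)
qc.x(1)            # turn qubit 1 into a negative control
qc.ccx(2, 1, 0)    # controlled-controlled-X on the target
qc.x(1)            # undo the basis flip on the negative control

M = np.eye(8)
M[[4, 5]] = M[[5, 4]]                      # the transposition matrix shown above
assert np.allclose(Operator(qc).data, M)   # the circuit realizes exactly this permutation
```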
Applications

Sparse binary matrices have extensive real-world applications due to their efficiency in representing large, structured, and often sparse datasets. Examples include representations of social networks, web graphs, telecommunication networks, finite state machines, and so on. These matrix representations are in general not unitary. However, within a quantum system, operations are unitary matrices, so non-unitary-to-unitary transformations are necessary to make use of these representations. To demonstrate the application of our method, we consider classical deterministic finite state automata (DFA). A DFA is a computational model with a wide range of applications, including compiler design, text processing, and pattern matching. We show how a quantum finite state system can be built from a classically defined DFA. In the theory of computation, DFAs are powerful models that recognize languages by processing input symbols and transitioning deterministically between states. Traditionally, a deterministic finite state automaton is defined as a five-tuple M = (K, Σ, Δ, s, F), where K is a finite set of states, Σ (the alphabet) is a finite set of symbols, s ∈ K is the initial state, F ⊆ K is the set of accepting or final states, and Δ is the transition function, a mapping from K × Σ to K. DFAs consist of a finite set of states, a transition function defined as a mapping from the current state and input symbol to the next state, and a set of final or accepting states that define the acceptance condition of the automaton. Adapting DFAs for quantum systems bridges classical automata with quantum algorithms, laying the foundation for advanced quantum computational models and enhancing our ability to tackle probabilistic or complex-pattern languages. To translate a classical DFA into a quantum framework, DFA states can be encoded as binary vectors in Hilbert space. Transitions are then represented by unitary matrices that transform the quantum state vector; hence, DFA transitions can be realized as quantum gates. By associating unitary matrices with input symbols, the quantum DFA processes an input string by applying the associated unitary matrices/quantum gates for each input symbol in sequence. The system's final state vector after processing the string indicates whether the input is accepted. Figure 2 illustrates a two-state deterministic finite automaton (DFA) over the alphabet \(\Sigma = \{\text{a}, \text{b}\}\) designed to recognize the language \(L = \left\{ w \mid w \text{ is a string of a's and b's ending in b} \right\}\). The start state \({q}_{0}\) is shown as a single-bordered circle, while the accepting (final) state \({q}_{1}\) is indicated by a double-bordered circle. There is a self-loop on \({q}_{0}\) for input \(a\), allowing the automaton to remain at \({q}_{0}\) upon reading \(a\). Two horizontal transitions connect the states: an upper arrow labeled \(a,b\) sends the automaton from \({q}_{1}\) to \({q}_{0}\) on reading either \(a\) or \(b\), and a lower arrow labeled \(b\) sends the automaton back from \({q}_{0}\) to \({q}_{1}\) when a \(b\) is read.

Fig. 2: DFA over \(\Sigma = \{\text{a}, \text{b}\}\) recognizing the language L.
To illustrate the application in a quantum-inspired framework, we represent the DFA states \({q}_{0}\) and \({q}_{1}\) as two-dimensional column vectors, \({q}_{0}=\left[\begin{array}{c}1\\ 0\end{array}\right]\) and \({q}_{1}=\left[\begin{array}{c}0\\ 1\end{array}\right]\). The input symbols \(a\) and \(b\) are represented as \(a=\left[\begin{array}{cc}1& 0\\ 1& 0\end{array}\right]\) and \(b= \left[\begin{array}{cc}0& 1\\ 1& 0\end{array}\right]\). A state transition upon reading an input symbol is defined as a matrix-vector multiplication \({v}^{T}={u}^{T}A\), where \(u\) represents the current state, \(v\) represents the next state, and \(A\) represents the current input symbol, either \(a\) or \(b\). To realize the method outlined in this paper, we complete the following steps to construct the quantum state transition for symbol \(a\) equivalent to the classical state transition \({v}^{T}={u}^{T}A\):

1. Input preparation:

$$u = \left[\begin{array}{c} 0 \\ 1 \end{array}\right],\quad \hat{u} = e_{0} \otimes u,\;\text{ where }\; e_{0} = \left[1\;\; 0 \right]^{T},\;\text{ so }\; \hat{u} = \left[\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right]$$

2. Construct the unitary matrix \(M\) from the given symbol \(a\) (steps shown in Proposition 1):

$$M=\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\\ 0& 1& 0& 0\end{array}\right]$$

3. State transition with the unitary matrix:

$$\hat{v}^{T} = \hat{u}^{T} M = \left[\begin{array}{cccc} 0 & 1 & 0 & 0 \end{array}\right]\left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{array}\right] = \left[\begin{array}{cccc} 0 & 0 & 1 & 0 \end{array}\right]$$

The state vector obtained here is a permutation of the actual vector; the permutation was introduced by the column permutations performed to construct the unitary matrix, and its effect needs to be reversed.

4. Reverse the effect of the permutations (the affected columns are 1 and 3):

$$\hat{v}^{T} = \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \end{array}\right],\quad \text{i.e.}\quad \hat{v} = e_{0} \otimes v \;\text{ with }\; v = \left[\begin{array}{c} 1 \\ 0 \end{array}\right]$$

Thus \({\hat{v}}^{T}= {\hat{u}}^{T}M\) is equivalent to the classical transition \({v}^{T} = {u}^{T}a\).

5. Convert \(M\) into a quantum gate.

5.1. Express \(M\) as a product of transpositions. The nontrivial cycle in \(M\) is \((1, 2, 3)\), so \(M\) can be expressed as

$$M={P}_{0}\left[1,3\right]\cdot {P}_{1}\left[1,2\right]$$

5.2. Convert the indices into binary:

$$M= {P}_{0}\left[01,11\right]\cdot {P}_{1}\left[01,10\right]$$

In \({P}_{1}\) the Hamming distance between 01 and 10 is not equal to one, so we decompose it further:

$${P}_{1}= {P}_{10}\left[01,11\right]\cdot {P}_{11}\left[11,10\right]\cdot {P}_{12}\left[01,11\right]$$

$$M = P_{0}\left[01,11\right]\cdot P_{10}\left[01,11\right]\cdot P_{11}\left[11,10\right]\cdot P_{12}\left[01,11\right] = P_{11}\left[11,10\right]\cdot P_{12}\left[01,11\right]$$

5.3. \({P}_{11}\) is represented by a CNOT with control on qubit 1 and target on qubit 0.

5.4. \({P}_{12}\) is represented by a CNOT with control on qubit 0 and target on qubit 1.
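The arithmetic in steps 1-4 can be reproduced in a few lines of numpy; the sketch below lifts the current state, applies the permutation matrix \(M\) built from the symbol \(a\), undoes the column permutation, and confirms that the result matches the classical transition \(v^{T}=u^{T}a\) (the index pair swapped in the last step follows the "columns 1 and 3" description above, taken here as zero-based indices 0 and 2).

```python
import numpy as np

a = np.array([[1, 0],
              [1, 0]])                 # non-unitary DFA symbol
M = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 0]])           # its permutation lift from Proposition 1
q0, q1 = np.array([1, 0]), np.array([0, 1])
e0 = np.array([1, 0])

u = q1                                  # current DFA state
u_hat = np.kron(e0, u)                  # step 1: lift into the 4-dimensional space
v_hat = u_hat @ M                       # step 3: unitary state transition
v_hat[[0, 2]] = v_hat[[2, 0]]           # step 4: undo the column permutation
assert np.array_equal(v_hat, np.kron(e0, u @ a))   # matches v^T = u^T a
print(v_hat)                            # [1 0 0 0]  ->  v = q0
```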
Figure 3 shows the quantum circuit implementing the DFA symbol \(a=\left[\begin{array}{cc}1& 0\\ 1& 0\end{array}\right]\). The circuit applies a two-qubit controlled-X (CNOT-like) operation, followed by a measurement in the computational (Z) basis on qubit \(q\). The measurement outcome is recorded into the classical register \(c\), indicated by the downward dashed arrow from the measurement box to the classical bit. This arrow signifies the transfer of information from the quantum state to a classical bit, allowing the result to be processed classically after measurement.

Fig. 3: Quantum circuit for DFA symbol a.

Though DFA operations are irreversible (each transition deterministically moves from one state to another without storing information about prior states), unitary matrices, the primary transition operators in quantum computing, are inherently reversible. This seemingly contradictory behavior is possible because the DFA transition matrix is embedded into a larger unitary matrix. Moreover, in gate-model quantum neural networks (QNNs), the ability to embed sparse structures into unitary matrices is also highly valuable. Training QNNs often encounters challenges like barren plateaus and slow convergence, which are exacerbated by dense or poorly structured parameterizations. The results in20 highlight the importance of initialization strategies that maintain structure while supporting trainability. Our method allows for the construction of permutation-based unitary matrices from sparse binary inputs. These can serve as lightweight and expressive layers in QNNs, facilitating both better optimization during training and improved generalization performance. This aligns well with current trends in designing noise-resilient and resource-efficient machine learning models. Our method also extends to quantum networking applications. In emerging quantum internet infrastructures, sparse matrices frequently model the distribution of entanglement, routing paths, or connectivity graphs between network nodes. As discussed in21,22, unitary transformations are required to preserve entanglement during distributed operations while minimizing communication overhead. By embedding sparse binary matrices into structured unitary permutation matrices, our method offers an efficient tool for implementing routing protocols, optimizing quantum channel usage, and supporting scalable quantum network architectures.

Conclusions

This work presents an efficient and scalable method for converting non-unitary sparse binary square matrices into unitary matrices, enabling their application in quantum computational frameworks. The proposed approach addresses a significant bottleneck in embedding non-unitary transformations into quantum systems and is significantly less complex than previously proposed methods. Furthermore, a method to translate permutation matrices into quantum circuits is outlined, providing a practical pathway for implementation. The application of this method to quantum finite state machines demonstrates its potential to bridge classical automata with quantum algorithms. While this study focuses on square matrices, future work will explore the embedding of rectangular matrices into unitary matrices and their integration into diverse quantum computing paradigms.

Data availability

All data generated or analyzed during this study are included in this published article.

References

1. Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information (Cambridge University Press, 2010).
2. Bender, C. M. & Boettcher, S. Real spectra in non-Hermitian Hamiltonians having PT symmetry. Phys. Rev. Lett. 80(24), 5243 (1998).
3. Zheng, C., Hao, L. & Long, G. L. Observation of a fast evolution in a parity-time-symmetric system. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 371(1989), 20120053 (2013).
4. Breuer, H. P. & Petruccione, F. The Theory of Open Quantum Systems (OUP Oxford, 2002).
5. Hu, Z., Xia, R. & Kais, S. A quantum algorithm for evolving open quantum dynamics on quantum computing devices. Sci. Rep. 10(1), 3301 (2020).
6. Del Re, L., Rost, B., Kemper, A. F. & Freericks, J. K. Driven-dissipative quantum mechanics on a lattice: Simulating a fermionic reservoir on a quantum computer. Phys. Rev. B 102(12), 125112 (2020).
7. Gui-Lu, L. General quantum interference principle and duality computer. Commun. Theor. Phys. 45(5), 825 (2006).
8. Zheng, C. Universal quantum simulation of single qubit nonunitary operators using duality quantum algorithm. Sci. Rep. 11(1), 3960 (2021).
9. Krantz, P. et al. A quantum engineer's guide to superconducting qubits. Appl. Phys. Rev. (2019).
10. Sato, Y., Kondo, R., Koide, S., Takamatsu, H. & Imoto, N. Variational quantum algorithm based on the minimum potential energy for solving the Poisson equation. Phys. Rev. A 104(5), 052409 (2021).
11. Gingrich, R. M. & Williams, C. P. Non-unitary probabilistic quantum computing. In Proceedings of the Winter International Symposium on Information and Communication Technologies (WISICT '04), Trinity College Dublin, 1-6 (2004).
12. Childs, A. M. & Wiebe, N. Hamiltonian simulation using linear combinations of unitary operations. arXiv preprint arXiv:1202.5822 (2012).
13. Lin, L. Lecture Notes on Quantum Algorithms for Scientific Computing. quant-ph (2022).
14. Cybenko, G. Reducing quantum computations to elementary unitary operations. Comput. Sci. Eng. 3(2), 27-32 (2001).
15. Planat, M. & Haq, R. U. The magic of universal quantum computing with permutations. Adv. Math. Phys. 2017, 1-9 (2017).
16. Weber, M. Quantum permutation matrices. Complex Anal. Oper. Theory 17, 37 (2023).
17. Moore, C. & Crutchfield, J. P. Quantum automata and quantum grammars. Theor. Comput. Sci. 237(1-2), 275-306 (2000).
18. Gyongyosi, L. & Imre, S. Circuit depth reduction for gate-model quantum computers. Sci. Rep. 10(1), 11229 (2020).
19. Gyongyosi, L. & Imre, S. Scalable distributed gate-model quantum computers. Sci. Rep. 11(1), 5172 (2021).
20. Gyongyosi, L. & Imre, S. Training optimization for gate-model quantum neural networks. Sci. Rep. 9(1), 12679 (2019).
21. Gyongyosi, L. & Imre, S. Advances in the quantum internet. Commun. ACM 65(8), 52-63 (2022).
22. Gyongyosi, L., Imre, S. & Nguyen, H. V. A survey on quantum channel capacities. IEEE Commun. Surv. Tutor. 20(2), 1149-1205 (2018).

Author information

Authors and Affiliations: Department of Computer Science, Oklahoma State University, Stillwater, 74075, USA. Krishnageetha Karuppasamy, Varunteja Puram, K. M. George & Thomas P. Johnson.
Contributions: K.K.: conception and design of the research, article writing. V.P.: algorithm design and article reviewing. K.G.: research supervision, study design, article writing. J.P.: research supervision, article revising. All authors have read and approved the final manuscript.

Corresponding author: Krishnageetha Karuppasamy.

Competing interests: The authors declare no competing interests.

Supplementary Information accompanies this article.

Cite this article: Karuppasamy, K., Puram, V., George, K. M. et al. Quantum circuits from non-unitary sparse binary matrices. Sci. Rep. 15, 22502 (2025). Received: 04 December 2024; Accepted: 20 May 2025; Published: 02 July 2025.
Published Time: Thu, 25 Jan 2024 16:44:42 GMT Zeitschrift für Analysis und ihre Anwendungen Journal for Analysis and its Applications Volume 13 (1994), No. 2, 359-364 On a Representation of the General Solution of a Functional-Differential Equation M. Drakhlin and E. Litsyn Abstract. The general solution of a functional-differential equation with non-Volterra operator is found by its reducing to an infinite system. An integral representation of the general solution of this system is presented. Properties of the kernel of this system are studied. Keywords: Functional-differential equations, superposition operators, Volterra operators AMS subject classification: 34 K 05, 47 B 38 Consider the quasi-linear functional-differential equation Lx = Fx (1) where £ is a linear functional-differential operator and F is a nonlinear superposition operator. As the most relevant example of equation (1) we mention the equation (Cox)(t) d(t) + B(t)(g(t)) + A(t)z(h(t)) = f(t, x(t)) (t E [0, 2 (<0). Here x(t) E in", t E [0, co), and A(t), B(t) are n x n-matrices whose entries are measur-able essentially bounded real functions on the half-line. The functions g, h: [0, oo) —' JR are measurable, and the function 1: (0,-oo) x in" —' in" is locally summable Assume that g has the property that, for all measurable subsets e C [0, oo), m(e) = 0 implies m(g'(e)) = 0 (3) where m is the Lebesgue measure. It is well known that condition (3) is necessary and sufficient for the following implication: if a function z : (—oo, oo) —' 1W' is measurable, then the superposition z(g) : [0, ) —' 1W' is also measurable. Denote by D[ 5 , b] the space of all functions x: [a, b] —' in" for which the norm X IID (J = II x IIc t. ei + IIIIL..,1 M. Drakhlin: College of Judea and Samaria, Math. Res. Inst., Ariel 44820, Israel E. Litsyn: Bar-Ilan University, Dep. Math., Ramat . Can 52900, Israel This research was supported by the Ministry of Science and Technology of the State of Israel ISSN 0232-2064 / $ 2.50 © Heldermann Verlag Berlin 360 M. Drakhlin and E. Litsyn is finite; here is the space of summable functions x : [a, b] -, JR Th . Similarly, D(o,,) and L(o ,) are the spaces of locally absolutely continuous and summable functions, respectively. It is possible to define a topology in these spaces by a countable system of semi-norms II XIID (000) = II XIID(oØ ) and II 2 IIL IoOO) = II x IILco, i (fi € liv). In the study of equation (1),. two main different cases have to be distinguished. Case 1: The Operator A is Volterra. In this case it is enough to require g(t) <t and h(t) <t (t € [0, oo)). (4) Then, the conditions for representing the solution of the (linear) problem (Cox)(t)= 1(t) (t € [ 0,00)) and x(0) = 0 (5) in the form X(t) / C(t, s)f(s) ds (6) are well-known. The substitution X(t) = (Wf)(t) JC(t,3)f(s)ds (7) reduces problem (1) to the equation 1(t) = (FWf)(i) (t E (0, oo)) (8) in the space L1o ,) . The existence of the Cauchy matrix Qt, s) and its properties have found a great deal of attention in many papers and monographs (see [1, 4] and references there). The neutral-type equations were studied in detail in . Case 2: The Operator A i3 Non- Volterra. Assume, for example, that g(t) :5 t + 1 and h(t) :5 I + 1 (1 E [0, co)). (9) In this case the solution of problem (5) is in general not representable in the form (6). So, the question about representing the general solution of the equation (Cox)(t) = 1(t). (t € [0, oo)) (10) becomes relevant. 
We will try to obtain an integral representation for the solution of equation (10) by reducing it to a countable system of functional-differential equations. Denote X? (t) = X(1_1,I)(t)x(t) (I E [0, oo), i € liv) On a Representation of a General Solution 361 where X(s-1,i) is the characteristic function of the interval [i - 1,i). Since x(t) = Ex(t), equation (10) takes then the form 00 00 Co i ±(t) + B(t) (g(t)) + A(t) E x(h(t)) = f(t) (t E [0, oo)) ()() = = 0 ( <0; i E jAr). Furthermore, assume that m(h'(i)) = 0 for i EIV. From (9) we get 1+1 i+1 k(t) + B(t) (g(t)) + A(t) > x(h(t)) = f(i) (t E [i - 1, 1]) (12) = x,() = 0 ( <0;i E lAr). For r E [0, 1] and i E N, set x1(r)=x(i-1+r), B(r)=B(i-1+r), A(7-)=A(i-1+7-) f(r) = Ai - 1 + r), g(r) = g(i - 1 + r), h i (r) = h(i - 1 + r). In this notation we get the following system of equations (r E [0, 1]): i+1 1+1 (C 1 x)(7-) 1 4 (r) + B 4 (r) xk(g.(r)) + A 8 (r) E Zk( h1( T)) = f,(r) = Xk() = 0 (C E (—oo,0)U(1,00); k = 1,2,...,i +1;i € EN). Now, if x 1 , X2,... solve the problem (13) with boundary conditions x i (0)=c (aEiR") and x,(0)=x_i(1) (i=2,3...), then the function x(t)=x,(t—i+1) (t€[i-1,i]) solves the problem (10) with initial condition x(0) = c. The last equality along with our study of boundary value problems for infinite systems of functional-differential equations leads to the following Lemma 1. Let the problem (L,x)(t) = f(r) and x 1 (0) = ai (r E [0,1]; i E N) be uniquely solvable for every Cii € Jflfl and fi E L1 0 , 1 1 (i € N). Then the general solution of equation (10) has a representation X(t) = X4 (t - i + 1)c + f Ci(t - i + 1,$)f(s)ds (t E [i - 1,i]) 362 M. Drakhlin and E. Litsyn where c = (c,x(0),x(1),...) E JR°° and f = ( 11,12,...). Here Xi - is the i-th section of the infinite fundamental matrix X for the system (18), and G i is the i-lh section of the infinite Green matrix C of the preceding ,stem. In it is shown that, under conditions of Lemma 1, the general solution y = (21, 22,...) for the system (13) admits a representation y(r) = X(r)c + J G(r, s)z(s) ds. (14) Here X is the infinite fundamental matrix of (13), and C is the infinite Green matrix of the solution of (13), subject to the conditions x(0) = a and x(0) - x(1) = 0 (i E iN). Let us study the properties of the infinite Green matrix for the system (13) from a more general viewpoint. Consider an infinite system of linear functional-differential equations Mx = 1 (15) where M Dj 11 - is a linear bounded operator. Here D 11 and L'11 are the spaces of functions x = (21,22,...) : [0,1] -+ JR°° with absolutely continuous and summable on [0,1] components, respectively. Denote by D, 11 and L 1) (/3 =0,1,...) the spaces of absolutely continuous and summable /3-dimensional functions = (21,22,... ,x) [0,1] -p R, respectively. Let K$ denote the projection of a vector x = (21,22,...) with an infinite number of components to the vector x =(2 1, 22,... , xp) consisting of the first /3 components of the vector x; thus, x = Kx. A system of semi-norms in D 11 and is defined by the equalities ii x D 2o= II Kx I D5 and II x " - II Kx II Ls (0 E 1W). 10,11 (0,11 IIL11 - (0,11 Let us give several definitions, which are necessary for what follows, from the theory of linear equations in Fréchet spaces (see ). Let E and e be complete countably normed spaces with, systems of semi-norms and II 11 ( '6) (a,/3 E IV), respectively. 
A linear operator T : E -+' I is said to have the V4, property (T E V) if for every natural /3 there exists a natural number (/3) such that the equality = 0 implies the equality II T II = 0. Evidently, any bounded operator T: E - I possesses this property. Two elements C, q E E are said to be or -equivalent if they coincide in the a-seminorm, i.e. 11C - = 0. Identifying a-equivalent elements, we get a Banach space E' of elements °, where the norm is defined by the a - semi-norm of the space E. Given T € V,, for every 0 € iN one can define a linear operator Tp : -' by putting Tx = (Te) (x = € E). On a Representation of a General Solution 363 The space of sequences g = ( ga, , g, 2 ,...) of linear bounded functionals gj : ? (g ) € (E)a]) is said to be F-adjoint to the space E and is denoted by E (\ = c71, A 2 .... fl . n the case ofA k =k(kEW)we will skip the index. For k € BV, denote by T : [Ek ] [E] the adjoint operator to Tk : E( k) _, The operator T' = (Tj,T,...) :1' - E, = ji(k) is then called the V-adjoint to the operator T : E - E. Likewise, the equation T'g=f (9 Ee',fEE,;.\k=0(k),kE1N) is called Vs-odjoimt to the equation T = We return now to equation (15). Let M E V. Since x(t) = f ) ds + x(0) (t E [0, 1]), equation (15) is reducible to the form (Qi)(t) + P(t)x(0) = 1(t) (t E [0, 1]) where Q = MV, (Vy)(t) = f y(s) ds and P is an infinite matrix whose entries are essentially bounded functions on 10, 1]. If the problem (Mx)(t) = 1(t), x(0) = 0 (16) is uniquely solvable for every f € L 11 , then by virtue of [5: Theorem 7] we have X(t) = f G(t, s)f (s) ds. Applying the Green operator to both sides of (Qi)(t) = f(i) we get / G(t, s)(Q)(s) ds = / x( o,tI(s)(I)(s) Hence, for every t E [0, 11 and a E iN we conclude that JG0 ,(t, s)K()L1J(s) 0 0 I XOltICIO(O(Cr))Ko(o(.)).i(s)ds 0 where 1o('(a)) is the result of adding [((a)) - a] zero columns to the identity matrix I,. Consequently, = X(o,tJ( 3)I aci (t E [0,1],0, € IV). Thus, for every t € [0, 1] the matrix G(t,.) satisfies the V,,-adjoint equation in the second argument. We summarize with the following 364 M. Drakhlin and E. Litsyn Theorem 1. For every t E [0, 1] the Green matrix for the problem (16) satisfies the matrix equation in the second argument Q'G(t, .) = Xo,tI (s € [0, 1]) where I is the infinite identity matrix. When applying the fundamental principles of both linear and nonlinear functional analysis to functional-differential equations, one often has to require also the compact-ness (or weak compactness) of the operators involved, rather than just their continuity. An operator T: E -' E, T E V4, is called V-completely continuous (respectively, weakly V -completely continuous) if the operators T : -p V are completely continu-ous (respectively, weakly completely continuous) for every natural 0. The proof of the following theorem is straightforward (see [61). Theorem 2. Let Q = J - K, where K : L 11 - is a weakly V-completely continuous operator. Then the following holds: For every s E [0, 1], G( . , s) is absolutely continuous on [0, s) U (s, 11, and G(s + 0, s) - G(s - 0, s) =I. For every z E L 11 , the equality f G(t, s)z(s)ds = z(t) + J G(t, s)z(s) ds dt holds. S. For every s E [0, 11, G(•, s) satisfies the relation s) -I K(t, r)G(r, s) dr + P(t)G(O, s) = K(t, s). References Azbelev, N., Maksimov, V. and L. Rakhmatullina, L.: Introduction to the Theory of Functional-Differential Equations (in Russian). Moscow: Nauka 1991. Drakhlin, M.: On oscillatory properties of some functional-differential equations (in Rus-sian). Duff. 
Uravn. 22 (1986), 396 - 402; Engl. transl.: Diff. Equ. 22 (1986), 283 - 289.
Drakhlin, M.: Some problems in stability of neutral-type functional-differential equations (in Russian). Diff. Uravn. 22 (1986), 925; Engl. transl.: Diff. Equ. 22 (1986), 919 - 925.
Hale, J.: Theory of Functional-Differential Equations. Berlin: Springer-Verlag 1977.
Litsyn, E.: General theory of functional-differential equations (in Russian). Diff. Uravn. 24 (1988), 977 - 986; Engl. transl.: Diff. Equ. 24 (1988), 638 - 646.
Maksimov, V.: On the Cauchy formula for functional-differential equations (in Russian). Diff. Uravn. 13 (1977), 601 - 606; Engl. transl.: Diff. Equ. 13 (1977), 405 - 409.

Received 09.08.1993
123411
PHY481 - Lecture 13

A line charge near a grounded conducting cylinder

A line charge $\lambda$ at position $x_0$ on the x-axis is near a grounded conducting cylinder of radius $R$, whose center is at $(x, y) = (0, 0)$ and whose central axis lies along the z-axis. The line charge lies parallel to the central axis of the cylinder. Find the electrostatic potential for $r > R$ in plane cylindrical co-ordinates $r, \phi$. Note that the potential does not depend on $z$.

Using Gauss's law it is easy to show that the electric field near a uniform line charge is $\vec{E}(r) = \lambda \hat{r}/(2\pi\epsilon_0 r)$. The potential is then of the form $V(r) = \lambda \ln(\mathrm{constant}/r)/(2\pi\epsilon_0)$, where the constant is chosen to fit the boundary conditions. Let's assume that the problem of a line charge near a grounded conducting cylinder is solved by using an image line charge located at position $x_0'$ on the x-axis with charge per unit length $-\lambda$. Now we need to find $x_0'$, and we need to check that $V(R, \phi) = 0$ and that $\vec{E}(R, \phi) = E_r \hat{n}$. The potential is given by superposition, so that
$$V(r, \phi) = \frac{\lambda}{2\pi\epsilon_0}\left[\ln(c_1/r_1) - \ln(c_1/r_2)\right] + c_2 \qquad (1)$$
where $c_1$ and $c_2$ are constants. Using the cosine rule we have
$$r_1^2 = r^2 + x_0^2 - 2 r x_0 \cos\phi; \qquad r_2^2 = r^2 + x_0'^2 - 2 r x_0' \cos\phi. \qquad (2)$$
Let's also assume that the reciprocal relation holds (why not!!), i.e. $x_0' = R^2/x_0$. We then have
$$V(r, \phi) = \frac{\lambda}{4\pi\epsilon_0} \ln\!\left(\frac{r^2 + R^4/x_0^2 - 2 r (R^2/x_0)\cos\phi}{r^2 + x_0^2 - 2 r x_0 \cos\phi}\right) + c_2. \qquad (3)$$
At the surface of the cylinder we have
$$V(R, \phi) = \frac{\lambda}{4\pi\epsilon_0} \ln\!\left(\frac{R^2}{x_0^2}\right) + c_2. \qquad (4)$$
This must be zero for our solution to be correct, which implies that
$$c_2 = -\frac{\lambda}{2\pi\epsilon_0} \ln(R/x_0). \qquad (5)$$
The solution to our problem is then
$$V(r, \phi) = \frac{\lambda}{4\pi\epsilon_0} \ln\!\left(\frac{r^2 + R^4/x_0^2 - 2 r (R^2/x_0)\cos\phi}{r^2 + x_0^2 - 2 r x_0 \cos\phi}\right) - \frac{\lambda}{2\pi\epsilon_0} \ln(R/x_0) \qquad (6)$$
or
$$V(r, \phi) = \frac{\lambda}{4\pi\epsilon_0} \ln\!\left(\frac{x_0^2 r^2 + R^4 - 2 r x_0 R^2 \cos\phi}{r^2 + x_0^2 - 2 r x_0 \cos\phi}\right) - \frac{\lambda}{2\pi\epsilon_0} \ln(R). \qquad (7)$$
Now we need to check that the electric field is given correctly. We find that
$$E_\phi = -\frac{1}{r}\frac{\partial V}{\partial \phi} = -\frac{\lambda}{4\pi\epsilon_0 r}\left(\frac{2 r x_0 R^2 \sin\phi}{x_0^2 r^2 + R^4 - 2 r x_0 R^2 \cos\phi} - \frac{2 r x_0 \sin\phi}{r^2 + x_0^2 - 2 r x_0 \cos\phi}\right). \qquad (8)$$
From Eq. (7) it is evident that $V(R, \phi) = 0$ as required, and from Eq. (8) we find $E_\phi(R, \phi) = 0$. We have therefore found a solution which satisfies the boundary conditions, so by uniqueness it is the correct solution. For completeness, the electric field in the radial direction is given by
$$E_r = -\frac{\partial V}{\partial r} = -\frac{\lambda}{4\pi\epsilon_0}\left(\frac{2 r x_0^2 - 2 x_0 R^2 \cos\phi}{x_0^2 r^2 + R^4 - 2 r x_0 R^2 \cos\phi} - \frac{2 r - 2 x_0 \cos\phi}{r^2 + x_0^2 - 2 r x_0 \cos\phi}\right). \qquad (9)$$

Closing remarks on generalizing image charge problems

We have solved three basic image charge problems: (i) a point charge near a grounded flat conducting surface (Lecture 12), (ii) a point charge near a grounded conducting sphere (Lecture 12), and (iii) a line charge near a grounded conducting cylinder (this Lecture). Let's call these solutions $V_G(\vec{r})$. The extension to problems where the conductor is at some finite voltage (instead of zero) requires adding charges to produce that voltage. The charges have to be placed symmetrically to ensure that no electric field is generated in the metal. E.g. if we want a sphere of radius $R$ at potential $V_0$, then we place an image charge $Q_0$ at the center of the sphere so that $V_0 = k Q_0/R$. This corresponds to distributing the charge $Q_0$ uniformly on the surface of the sphere. The electrostatic potential for $r > R$ of this problem is found by superposition, i.e. $V(\vec{r}) = V_G(\vec{r}) + k Q_0/r$. In the case of a conducting slab, a sheet of image charge is placed at the center of the slab, while in the case of a conducting cylinder, a line charge is placed at the center of the cylinder.
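As a quick numerical aside to the cylinder solution above (this snippet is not part of the original notes; NumPy and the parameter values are illustrative assumptions), the following evaluates Eqs. (7) and (8) and confirms that $V(R,\phi)=0$ and $E_\phi(R,\phi)=0$ on the cylinder surface:

```python
import numpy as np

# Illustrative parameters; lambda/(2*pi*eps0) is scaled to 1 for convenience.
lam_over_2pieps0 = 1.0   # lambda / (2*pi*eps0)
R = 1.0                  # cylinder radius
x0 = 3.0                 # line-charge position on the x-axis (x0 > R)

def V(r, phi):
    """Potential of Eq. (7): real line at x0 plus image line at R^2/x0."""
    num = x0**2 * r**2 + R**4 - 2.0 * r * x0 * R**2 * np.cos(phi)
    den = r**2 + x0**2 - 2.0 * r * x0 * np.cos(phi)
    return 0.5 * lam_over_2pieps0 * np.log(num / den) - lam_over_2pieps0 * np.log(R)

def E_phi(r, phi):
    """Tangential field of Eq. (8); it should vanish on the surface r = R."""
    num = x0**2 * r**2 + R**4 - 2.0 * r * x0 * R**2 * np.cos(phi)
    den = r**2 + x0**2 - 2.0 * r * x0 * np.cos(phi)
    return -0.5 * lam_over_2pieps0 / r * (
        2.0 * r * x0 * R**2 * np.sin(phi) / num - 2.0 * r * x0 * np.sin(phi) / den
    )

phi = np.linspace(0.0, 2.0 * np.pi, 181)
print("max |V(R,phi)|     =", np.max(np.abs(V(R, phi))))       # ~ 1e-16
print("max |E_phi(R,phi)| =", np.max(np.abs(E_phi(R, phi))))   # ~ 1e-16
```

Both printed values come out at the level of floating-point round-off, consistent with the boundary conditions derived above.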
In a similar way, if we are given a problem where a point charge is near an isolated conducting sphere, cylinder or slab which carries total charge $Q$, then we again have to place an additional image charge at the center. However, the charge placed at the center of the metal sphere now has to be the total charge on the sphere minus the image charge of the grounded system, so that the two image charges together add up to $Q$. For example, an isolated conducting sphere of charge $Q$ requires that an image charge of $Q - q'$ be placed at its center, so the total potential for $r > R$ becomes $V_G(\vec{r}) + k(Q - q')/r$, where $q' = -qR/z_0$ is the image charge of the grounded sphere.

Finally, there are problems where we are asked to consider a point charge (spherical cavity) or line charge (cylindrical cavity) inside a cavity that is totally surrounded by metal. Again the metal can be grounded, at a fixed potential $V_0$, or carry a total charge $Q$. The basic solution is for the grounded case, where the only induced charge is on the inner surface of the metal. The other cases are treated using superposition as before. For example, this leads to an interesting effect for an isolated sphere which has a spherical cavity: no matter where we place a point charge inside the cavity, the induced charge on the outer surface ($-q'$) is distributed symmetrically!! The induced charge on the inner surface of the metal is not symmetric - it has a non-trivial $\sigma(\theta)$ in general. However, in order to ensure that $\vec{E} = 0$ in the metal, the charge on the outer surface must be distributed uniformly over the surface of the sphere.

The solution to the grounded case of a point charge inside a spherical cavity in a metal is the same as that of a point charge outside a metal sphere; however, we have to be careful to change the variables correctly. The real charge in the case of the spherical cavity corresponds to the image charge in the case of the metal sphere, and similarly for the conducting cylinder and the cylindrical cavity. Note also that if there is no charge inside a cavity inside a metal, then no charge is induced on the surfaces of the cavity, no matter how many charges are placed near the exterior surfaces of the metal.
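To illustrate the superposition rule for an isolated conducting sphere with total charge $Q$ described above, here is a short numerical sketch (not from the lecture; the charge values, radius and distance are made-up illustrative numbers). It places the grounded-sphere image $q' = -qR/z_0$ at $R^2/z_0$, adds $Q - q'$ at the center, and checks that the sphere surface is an equipotential at $k(Q - q')/R$:

```python
import numpy as np

k  = 8.9875517923e9  # Coulomb constant [N m^2 / C^2]
R  = 0.05            # sphere radius [m]            (illustrative)
z0 = 0.20            # charge location on z-axis [m], z0 > R
q  = 1.0e-9          # external point charge [C]
Q  = 2.0e-9          # total charge on the isolated sphere [C]

q_img = -q * R / z0          # image charge of the grounded problem
z_img = R**2 / z0            # its location on the z-axis
q_ctr = Q - q_img            # extra image at the center for the isolated sphere

def V(x, y, z):
    """Superposition V_G + k(Q - q')/r outside the sphere."""
    def point(qc, zc):
        return k * qc / np.sqrt(x**2 + y**2 + (z - zc)**2)
    return point(q, z0) + point(q_img, z_img) + point(q_ctr, 0.0)

# Sample the sphere surface: it should be an equipotential at k*(Q - q')/R.
theta = np.linspace(0.0, np.pi, 91)
Vs = V(R * np.sin(theta), 0.0, R * np.cos(theta))
print("spread over surface:", Vs.max() - Vs.min())   # ~ 0 (round-off only)
print("surface potential  :", Vs.mean(), "expected:", k * q_ctr / R)
```

The vanishing spread confirms that the surface is an equipotential, and its value is fixed entirely by the charge placed at the center.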
123412
Harlequin ichthyosis: ABCA12 mutations underlie defective lipid transport, reduced protease regulation and skin-barrier dysfunction

Claire A Scott, Shefali Rajpopat, Wei-Li Di
Affiliation: Centre for Cutaneous Research, Blizard Institute, Barts & The London School of Medicine and Dentistry, Queen Mary, University of London, London, E1 2AT, UK.

Review. Cell Tissue Res. 2013 Feb;351(2):281-8. doi: 10.1007/s00441-012-1474-9. Epub 2012 Aug 4.
PMID: 22864982
Abstract

Harlequin ichthyosis (HI) is a devastating autosomal recessive congenital skin disease. It has been vital to elucidate the biological importance of the protein ABCA12 in skin-barrier permeability, following the discovery that ABCA12 gene mutations can result in this rare disease. ATP-binding cassette transporter A12 (ABCA12) is a member of the subfamily of ATP-binding cassette transporters and functions to transport lipid glucosylceramides (GlcCer) to the extracellular space through lamellar granules (LGs). GlcCer are hydrolysed into hydroxyceramides extracellularly and constitute a portion of the extracellular lamellar membrane, lipid envelope and lamellar granules. In HI skin, loss of function of ABCA12 due to null mutations results in impaired lipid lamellar membrane formation in the cornified layer, leading to defective permeability of the skin barrier. In addition, abnormal lamellar granule formation (distorted shape, reduced in number or absent) could further cause aberrant production of LG-associated desquamation enzymes, which are likely to contribute to the impaired skin barrier in HI. This article reviews current opinions on the patho-mechanisms of ABCA12 action in HI and potential therapeutic interventions based on targeted molecular therapy and gene therapy strategies.

Similar articles

- Akiyama M, Sugiyama-Nakagiri Y, Sakai K, McMillan JR, Goto M, Arita K, Tsuji-Abe Y, Tabata N, Matsuoka K, Sasaki R, Sawamura D, Shimizu H: Mutations in lipid transporter ABCA12 in harlequin ichthyosis and functional recovery by corrective gene transfer. J Clin Invest. 2005 Jul;115(7):1777-84. doi: 10.1172/JCI24834. PMID: 16007253.
- Akiyama M: The roles of ABCA12 in epidermal lipid barrier formation and keratinocyte differentiation. Biochim Biophys Acta. 2014 Mar;1841(3):435-40. doi: 10.1016/j.bbalip.2013.08.009. Epub 2013 Aug 15. PMID: 23954554. Review.
- Akiyama M: ABCA12 mutations and autosomal recessive congenital ichthyosis: a review of genotype/phenotype correlations and of pathogenetic concepts. Hum Mutat. 2010 Oct;31(10):1090-6. doi: 10.1002/humu.21326. PMID: 20672373. Review.
- Zhang L, Ferreyros M, Feng W, Hupe M, Crumrine DA, Chen J, Elias PM, Holleran WM, Niswander L, Hohl D, Williams T, Torchia EC, Roop DR: Defects in Stratum Corneum Desquamation Are the Predominant Effect of Impaired ABCA12 Function in a Novel Mouse Model of Harlequin Ichthyosis. PLoS One. 2016 Aug 23;11(8):e0161465. doi: 10.1371/journal.pone.0161465. PMID: 27551807.
- Thomas AC, Cullup T, Norgett EE, Hill T, Barton S, Dale BA, Sprecher E, Sheridan E, Taylor AE, Wilroy RS, DeLozier C, Burrows N, Goodyear H, Fleckman P, Stephens KG, Mehta L, Watson RM, Graham R, Wolf R, Slavotinek A, Martin M, Bourn D, Mein CA, O'Toole EA, Kelsell DP: ABCA12 is the major harlequin ichthyosis gene. J Invest Dermatol. 2006 Nov;126(11):2408-13. doi: 10.1038/sj.jid.5700455. Epub 2006 Aug 10. PMID: 16902423.

Cited by

- Chebii VJ, Oyola SO, Kotze A, Domelevo Entfellner JB, Musembi Mutuku J, Agaba M: Genome-Wide Analysis of Nubian Ibex Reveals Candidate Positively Selected Genes That Contribute to Its Adaptation to the Desert Environment. Animals (Basel). 2020 Nov 22;10(11):2181. doi: 10.3390/ani10112181. PMID: 33266380.
- Nauroy P, Nyström A: Kallikreins: Essential epidermal messengers for regulation of the skin microenvironment during homeostasis, repair and disease. Matrix Biol Plus. 2019 Nov 21;6-7:100019. doi: 10.1016/j.mbplus.2019.100019. PMID: 33543017.
- Häfner SJ: Guards! Guards! How innate lymphoid cells ensure local law and order. Biomed J. 2021 Apr;44(2):105-111. doi: 10.1016/j.bj.2021.04.007. Epub 2021 Apr 27. PMID: 33994144.
- Peña-Corona SI, Gutiérrez-Ruiz SC, Echeverria MLDC, Cortés H, González-Del Carmen M, Leyva-Gómez G: Advances in the treatment of autosomal recessive congenital ichthyosis, a look towards the repositioning of drugs. Front Pharmacol. 2023 Nov 9;14:1274248. doi: 10.3389/fphar.2023.1274248. PMID: 38027029. Review.
- Gęgotek A, Skrzydlewska E: The Role of ABC Transporters in Skin Cells Exposed to UV Radiation. Int J Mol Sci. 2022 Dec 21;24(1):115. doi: 10.3390/ijms24010115. PMID: 36613554. Review.

Publication types: Research Support, Non-U.S. Gov't; Review

MeSH terms: ATP-Binding Cassette Transporters / genetics; ATP-Binding Cassette Transporters / metabolism; Animals; Humans; Ichthyosis, Lamellar / genetics; Ichthyosis, Lamellar / metabolism; Lipid Metabolism / genetics; Mutation, Missense; Skin / metabolism; Skin / pathology

Substances: ABCA12 protein, human; ATP-Binding Cassette Transporters
123413
Convergence Rates of Posterior Distributions

Subhashis Ghosal, Jayanta K. Ghosh and Aad W. van der Vaart
The Annals of Statistics, Vol. 28, No. 2 (Apr., 2000), pp. 500-531 (32 pages)
Published by: Institute of Mathematical Statistics

Abstract

We consider the asymptotic behavior of posterior distributions and Bayes estimators for infinite-dimensional statistical models. We give general results on the rate of convergence of the posterior measure. These are applied to several examples, including priors on finite sieves, log-spline models, Dirichlet processes and interval censoring.

Journal Information

The Annals of Statistics publishes research papers of the highest quality reflecting the many facets of contemporary statistics. Primary emphasis is placed on importance and originality, not on formalism. The discipline of statistics has deep roots in both mathematics and in substantive scientific fields. Mathematics provides the language in which models and the properties of statistical methods are formulated. It is essential for rigor, coherence, clarity and understanding. Consequently, our policy is to continue to play a special role in presenting research at the forefront of mathematical statistics, especially theoretical advances that are likely to have a significant impact on statistical methodology or understanding. Substantive fields are essential for continued vitality of statistics since they provide the motivation and direction for most of the future developments in statistics. We thus intend to also publish papers relating to the role of statistics in interdisciplinary investigations in all fields of natural, technical and social science. A third force that is reshaping statistics is the computational revolution, and The Annals will also welcome developments in this area.

Publisher Information

The purpose of the Institute of Mathematical Statistics (IMS) is to foster the development and dissemination of the theory and applications of statistics and probability. The Institute was formed at a meeting of interested persons on September 12, 1935, in Ann Arbor, Michigan, as a consequence of the feeling that the theory of statistics would be advanced by the formation of an organization of those persons especially interested in the mathematical aspects of the subject. The Annals of Statistics and The Annals of Probability (which supersede The Annals of Mathematical Statistics), Statistical Science, and The Annals of Applied Probability are the scientific journals of the Institute.
These and The IMS Bulletin comprise the official journals of the Institute. The Institute has individual membership and organizational membership. Dues are paid annually and include a subscription to the newsletter of the organization, The IMS Bulletin. Members also receive priority pricing on all other IMS publications.

The Annals of Statistics © 2000 Institute of Mathematical Statistics.
123414
Commun. Math. Phys. 49, 233-246 (1976)
Communications in Mathematical Physics
© by Springer-Verlag 1976

The Cluster Expansion in Statistical Mechanics

David Brydges and Paul Federbush
Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109, USA

Abstract. The Glimm-Jaffe-Spencer cluster expansion from constructive quantum field theory is adapted to treat quantum statistical mechanical systems of particles interacting by finite range potentials. The Hamiltonian $H_0 + V$ need be stable in the extended sense that $H_0 + 4V + BN > 0$ for some $B$. In this situation, with a mild technical condition on the potentials, the cluster expansion converges and the infinite volume limit of the correlation functions exists, at low enough density. These infinite volume correlation functions cluster exponentially. We define a class of interacting boson and fermion particle theories with a matter-like potential, $1/r$ suitably truncated at large distance. This system would collapse in the absence of the exclusion principle--the potential is unstable--but the Hamiltonian is stable. This provides an example of a system for which our method proves existence of the infinite volume limit that is not covered by the classic work of Ginibre, which requires stable potentials. One key ingredient is a type of Hölder inequality for the expectation values of spatially smeared Euclidean densities, a special interpolation theorem. We also obtain a result on the absolute value of the fermion measure: it equals the boson measure.

Introduction

In the quantum statistical mechanical theory of matter (positively charged particles and negatively charged identical fermions interacting with a $1/r$ potential) the most basic result is the stability, first proved by Dyson and Lenard in [2]. One of the authors presented a new proof in [3], and recently another proof was given by Lieb and Thirring in [8]. The second basic result was the proof of the existence of the thermodynamic functions in the infinite volume, by Lieb and Lebowitz in [7]. A natural next problem is the existence of the infinite volume correlation functions, for some range of parameters--an open question.

(This work was supported in part by NSF Grant MPS 75-10751. Michigan Junior Fellow.)

Towards this end, the Glimm-Jaffe-Spencer cluster expansion (see [6]) was adapted to treat the problem of matter with the $1/r$ interaction modified to $\frac{1}{r}(e^{-\alpha r} - e^{-\beta r})$ in [4]. In this situation (with suitable values of $\alpha$, $\beta$, etc.) the cluster expansion was shown to converge, yielding the existence of the infinite volume correlation functions. However, the classical methods of Ginibre (see [5]) already applied to this case, so this was not a new result. In a later paper we will show that for a matter-like system with $1/r$ replaced by $e^{-\mu r}/r$ the infinite volume limit of the correlation functions exists (for some range of parameters). This will be a straightforward extension of the present paper and [4].

At present we consider a matter-like system with $1/r$ modified to
$$v(x - y) = \int d^3 z\, f(x - z)\, |x - y|^{-1} f(y - z)$$
with $f$ a non-negative function in $C_0^\infty$. For this system the Hamiltonian is stable, $H + BN > 0$ for some $B$, but the potential is not. The system would collapse in the absence of the exclusion principle. We derive the existence of the infinite volume correlation functions for this system (in some range of parameters). In fact our treatment is much more general than just of the aforementioned matter-like systems.
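As an aside (not part of the original paper), here is a minimal numerical sketch of the truncated potential $v(x - y)$ written above; the particular bump function $f$, the grid resolution and the sampled separations are illustrative assumptions. It shows that $v$ vanishes once the separation exceeds twice the support radius of $f$, which is the finite-range property exploited below:

```python
import numpy as np

# A smooth non-negative bump supported in |x| < 1/10 (an illustrative choice of f).
def f(points):
    r2 = np.sum(points**2, axis=-1) * 100.0        # (10|x|)^2
    out = np.zeros(r2.shape)
    inside = r2 < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - r2[inside]))
    return out

# Midpoint-rule grid over the support of f.
n = 40
edges = np.linspace(-0.1, 0.1, n + 1)
mids = 0.5 * (edges[:-1] + edges[1:])
Z = np.stack(np.meshgrid(mids, mids, mids, indexing="ij"), axis=-1).reshape(-1, 3)
dV = (0.2 / n) ** 3

def v(d):
    """Truncated potential v(d) = (1/d) * integral d^3z f(z) f(z - d e_x)."""
    shift = np.array([d, 0.0, 0.0])
    g = np.sum(f(Z) * f(Z - shift)) * dV
    return g / d

for d in [0.05, 0.10, 0.15, 0.25]:
    print(f"v({d:4.2f}) = {v(d):.3e}")   # vanishes for d > 2/10, as required
```

With $f$ supported in $|x| < 1/10$, the two bumps stop overlapping for separations larger than $2/10$, so the overlap integral (and hence $v$) is zero there, while at moderate separations $v$ behaves like a constant times $1/r$.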
We consider systems of boson and fermion particles inter-acting via two-body potentials with H = H 0 + V. In this paper we assume a) The potentials are finite range. b) H o + 4 V i s stable, i.e. there is B' such that H o + 4 V + B ' N > O . c) The potentials are in L3/2. Our main result will be that for such a system at any temperature, if #, the chemical potential, is large enough negative the infinite volume limit of the correla-tion functions exists. We do not detail the need for condition c) in this paper, a technical condition to justify some of the basic manipulations. Section 2 presents the cluster expansion we use. Familiarity with is assumed. Section 3 contains a statement of our basic results. The key steps in the proof of convergence are given in Section 4. Appendix A contains a proof that the absolute value of the fermion measure equals the boson measure. Appendix B discusses the stability of our matter-like system. Appendices D and E contain technical estimates important to the convergence argument. The key to the efficiency of the present paper is the interpolation estimate in Appendix C. It gives a very useful analog of the Holder inequality for systems with fermions, where the natural setting is function spaces over signed measures, rather than measures as with pure boson theories. We believe it goes a long way in bridging the gap between techniques available for fermion theories and tech-niques for boson theories. The cluster expansion as developed here is purely a geometric analysis of the paths that realize the traces in path space. The total path space integral is split into subsets in which paths avoid certain regions and must hit other regions. The use of barrier potentials as in is bypassed, this is a matter of choice. T h e C l u s t e r E x p a n s i o n in Statistical M e c h a n i c s 235 In addition to the generalization to infinite range potentials mentioned above, that will be the subject of a further paper, it is trivial to include finite range many-body forces in the present treatment. Notation and the Ouster Expansion We consider I species of particles described by fields q~l(x). . . . . q~(x) obeying either fermion or boson statistics. Let l Hoo = ~ (1/2mi) .f dx(V~)i)(x)(Ve)i)(x ) (2.1.) i = 1 l Ho = Hoo - E tii ~.dx(),(x)c~,(x) (2.2) i = 1 l N~= ~ dx~(x)dA(x); N = ~ Ni (2.3) i = 1 H = Ho + V (2.4) Vis constructed from potentials with finite range. To partially ftx the length scale assume the range is less than 2/10. We will assume that V is sufficiently regular that Friedrichs extensions H A may be defined by extending H off N-particle wave functions with compact support in an open bounded region (A)NCIR3N; and furthermore that exp ( - / ~ H a) admits a path space representation (Feynman-Kac formula) on N-particle subspaces. IR3 is filled with closed unit cubes {A~} with disjoint interiors. A (the large box one works in) is the interior of a finite union of such cubes. The cluster ex-pansion is applied to quantities of the form -- 5 H A ( v ) d z (A)a = Tra(Te ° A)/Tra(e- ~nA) (2.5) where Tr A is the trace on the Fock space built on LZ(A). T is the time-ordering operator. A has the form A = al(tl) ... a,(t~) (2.6) where l ai(t3 = ~ S dxf~j(x)~j(x)Oj(x). (2.7) j = l Thus the t~ is dummy, it serves to define the order of the operators in (2.5). For a given i each f~j is supported in a single cube A for all j = 1. . . . . 1. 
Each f j is real, measurable, and 0 < f ~ j < 1, With these conditions our estimates may be taken to depend on the operator A only through the number of factors, s. The expression (2.5) can be represented as a path integral using a signed measure. Thus -- .~ HA('c)d: -~ V (d)d~ TrA(Te ° A) = ~ dl~e ° a~(h).., a,(t3 (2.8) A236 D . B r y d g e s a n d P. F e d e r b u s h l d# may be described in the following laborious way: d/~= I-I d/~(J) where d# C J) j = l is associated with the ith species. Then d# °)= + d/~ ) where the N particle N = O measure d/~ ) is V ~s...d#~.~p~,,) (2.9) P d~,y is the measure on the space of paths t ~ x t s l R 3 (starting at x at t = 0 and ending at y at t=fl) associated with the semigroup e x p - t ( - # ~ - A ) . (dp(d)-- 1). ( 1 ) ( boson t (~) P is a permutation of {1, 2 . . . . . N}. ej= 1 if species j is \fermion]' S = (even) if P is \ o d d ] " The integral over x t . . . . . x s takes the trace. The A on the integral sign in (2.8) means that the integration is over the subset of path space such that the paths of each particle do not hit A ~ in the time interval [0,/3]. V(z), a~(ti) in (2.8) stand for the obvious functions corresponding to the operators V, a~ evaluated at the positions of the paths for each particle at times, z, t~. On an n-particle subspace the n paths describing the particles give a mapping t ~ R 3" which we call an n-path. Our description of the cluster expansion imposes the following notation. {S~} is the set of all faces of cubes {A~}. E~ is the characteristic function of the subset of path space consisting of all n-paths such that no particle hits the "barrier" tl~={xslR3:dist(x,S,)<~} in the time interval [0,/3]. Note that the width of the barrier is greater than the range of V. X C A is a union of cubes Av { A / j e J } is a distinguished set of cubes. The cluster expansion is developed by inserting inside the d# integral in (2.8) the identity t = 1~ (E~+H,) where H~= 1 - E , and ~ runs over faces S~ in A, then (i expanding the product. This is followed by factorizing and resumming outside sets X. Since this is a familiar process from , we merely write down the result. -i V(Ocl~ -I V(~)dz d#e ° al(tl) ... a~(t~)/ ! d#e ° At~ t~ tV(7:)d~ 1 -- f V("~)dv = 2 K(X,F) ~ d#e -~ / i d l ~ e o (2.10) X , r ( A -X ) ~ / A where t) -f V(~)d~ K(X, F)= S d#Hre ° ( X -r ~ ) ~ Hr = 1~ H~. oLaF l-[ a,(ti) (2.11) i (2.12) If SCA, S = {xsS: dist (x, 8S)> ~o}. (?S=(S- Int S ) - 8/i. The distinguished sets have been required to include the supports of all the j}~. F is a subset of {S,} and also denotes the corresponding set in IR3. The sum over X, F in (2.10) runs over The Cluster Expansion in Statistical Mechanics 237 all X C A such that U A j C X and, for a given X, all F such that (1) ( F n Int X ) = F jEJ (2) each component of X - F c contains at least one Aj, j~J. F c is the set of faces S~ in X, complementary to F, considered as a subset of IR3. (The S~ are closed sets.) Our constants ca all satisfy 0 < C a < O O and the same notation may mean different constants in different sections if not obviously related. Results Proposition 3.1. Let Z(A)= TrA(ex p --/~H ~l) IZ(A)I < oo, IZ(A1)/Z(A)I < 1. Proof. Apply the minimax principle. Theorem 3.2. I f for some B >_0 1/2Hoo+ 2V + BN>O then for some t~o (large negative), uniformly in A for I~1. . . . . #~<--Po. and let A1 CA be open. Then, if (3.1) the cluster expansion (2.10), (2.11) converges Theorem 3.3. 
Let A, B be quantities of the type (2.6) amt let B e for 4sIR 3 denote the translation in the obvious sense of B by 4. For some #o (large negative), if (3.1) holds and #i <- # o for i= 1. . . . . I, then KAB~)A-- (A)A(B)At <= CA, B exp ( - c, ...... u,141) (3.2) uniformly in A, for 141 large, cu...... ~,~oo as Ita . . . . . # l ~ - o o . Theorem 3.4. There exists #o (large negative) such that if pi< go for i= 1. . . . . t and (3.1) holds then (1) lim Z ( ( A - X ) ~ ) / Z ( A ) exists for all X. tal-~o (2) lim ( A ) A exists and the limit of (3.2) holds. IAl-~o Thus the correlation functions exist and cluster exponentially. IAI represents the volume of A. The A's are boxes (rectangular paratlepipeds) centered at the origin whose minimum width goes to infinity, this is understood in the limits in (1) and (2). Part (2) of Theorem 3.4 follows from part (1), Proposition 3.1, Theorem 3.2 and Theorem 3.3. Part (i) is not difficult, and is proven in Appendix E. The con-stants #o in Theorem 3.2, Theorem 3.3, and Theorem 3.4 are taken to be the same, this can be done at the expense of not using the largest possible value in each theorem. Parallel to Proposition 5.3, p. 218, in we have 238 D. Brydges and P. Federbush Proposition 3.5. Under the hypotheses of Theorem 3.2 IK(X, V)l < Ca exp [c 1[Xt- c2,~,...... u,}F[] c2. ~...... , --,oo as # i ~ - ov for i= 1 . . . . . 1, c, fixed. Proposition 3.5 combined with Proposition 3.1 leads to proog of Theorems 3.2 and 3.3 by the same argument as in , which we do not repeat here. Our proof of Proposition 3.5 uses an inequality of the following type: Proposition 3.6. For p> t, 1/p+ l / q = 1 Tr A(Te - ~ I~)"~ o A) < [TrAe-e(n~° +Pv)]lip "(I dllt[Aq)t/q This inequality is closely analogous to Holder's inequality in Euclidean Field Theory. It has the important feature that the fermion statistics, or equivalently, the signed measure in (2.8) has been preserved for the first factor on the right. For the absolute value of the measure # appearing in the other factor on the right we have: Proposition 3.7. The absolute value J#l of # is equal to the measure obtained by changing all fermion species to bosons. Finally, as an example of a potential V which exploits most of the latitude (see (a) and (b) below Theorem 3.8) of Theorems 3.2, 3.3 and 3.4, set 9 2 V = T~d3xd3y:(~l~l--~2~)2)(X)U(x--y)(~l~)l--~2¢2)(y); where q51---~b is a boson field and 42 =~P is a fermion field, v ( x - y ) is the truncated Coulomb po-tential v(x - y) = 5 dazf( x - z) ix-@y l-f ( y - z) where f is a non-negative real C2 function on IR3 such that f ( x ) = 0 for Ixl > 1/10. This V satisfies Theorem 3.8. For B > 0 sufficiently large 1/2H00 + 2V + B N > O . This theorem is the equivalent of the Dyson-Lenard theorem for the Coulomb potential [2, 3, 8] and shares the following features with it (a) V is not stable in the sense of Ruelle (b) at least one species must obey fermion statistics ((b) is not supposed to be obvious). Proof of Proposition 3.5 We use the three lines lemma, thus set -2 z .[ V(z)d~ K(X, F, z) = ~ dpHre o [ I a~ - 2~(ti) Xx i (4.1) The Cluster E x p a n s i o n in Statistical M e c h a n i c s 239 where X 1 = ( X - U ) L To make this analytic in z we temporarily assume that only integrates over the subset of path space representing less that t/j, j = 1,2 ..... l X1 particles of species j, and furthermore that V is bounded above and below on this subset. We obtain bounds uniform with respect to these assumptions which can be removed by taking limits at the end of the proof. 
By the three lines lemma, IK(X, F)t<-( sup IK(X, F, z)])l/z( sup IK(X, F, z)l) I/z (4.2) \ R e z = O ]\ R e z = l By taking absolute values inside the ~ dp integral and then using the Cauchy Schwartz estimate: sup ]K(X, F, z)l]t/2< ( S dllz[Hr] 1/,( S dl#[ I~)[ a~[(t,))1/, . (4.3) R e z = 0 ]\X: /\Xz Therefore, to prove Proposition 3.5 we derive the following three estimates. j" dl~l H a4(ti) ~= CAec31xl (4.4) Xt i dl#lH r <<-c4e-4C2.,~ . ...... trlec"txl (4.5) X1 sup IK(X, F, z)[ < c : c~rxl . (4.6) R e z = I To prove (4.4), combine Proposition 3.7 with the easy estimate a~< IN. The proof of (4.5) is deferred to Appendix D. Proof of (4.6). Unravel H r by expanding H r = 1~ (i - E~) g~F # - 2z S V(z)dT due o [I a~ -2 z(t,) i sup ]K(X, F, z)[ < ~ sup ~ (4.7) R e z = l f l C f R e z = t X(fi) where X(F1) = ( X - (FC~F1)) ~ 2(ti) . - 2z I V(r)dt =<2Irl sup sup ~ dl~e ° 1~ i (4.8) Flee Rez = 1 X(F1) Thug the proof of (4.6) will be completed by P - 2z f V(z)d~ sup !d#e o [Ia2-2z(ti) <eCT[X[ Rez = 1 i uniformly in Y C X open. The left hand side of (4.9) may be rewritten as [- 7 [H~+ 2zV]dr ) sup TrrITe ° ~ia2-2z(ti). (4.10) Rez= 1 We refer to the proof of Proposition 3.6 in Appendix C to show that (4.10) is less than Tr r (e-p~n~ + 2v)) (4.1 t) (4.9) 240 D. Brydges and P. Federbush This is in fact the main point in the proof of Proposition 3.6. It exploits properties of the trace and exponentiall By the minimax theorem or equivalently Proposi-tion 3.1, (4,tl) is tess than ^ Tr x (e- fl(H°X + 2V)) (4.12) and this may be estimated by e CTIxf by splitting Ho + 2V into 1 / 2 H o , 1 / 2 H o + 2 V and using the well known fact, Tr (e-a)< Tr (e -~) (4.13) if A>__B, along with the hypothesis (3.1). Appendix A. JFermion Measure[ = Boson Measure Let (x I . . . . . xN) and (yl . . . . . YN) be two sets of distinct points in R 3. We denote a single particle path space measure for paths from x to y in time a _ t_< b by The path is described by z(t). We construct the boson and fermion measures as follows: V , B = ( 1 / U ! ) Z e s { P ) I - I ( f d ' " ' b ' (A.1) ~Lgyp(i), XiJ Pi (~) (event where S(P) is if the permutation is \ o d d / and e = l for bosons, giving i% and e = - 1 for fermions, giving/~F- Let ~ be the space whose points are sets of N points in R3; Tbe the space of mappings of [a, b] into 3. The set Z l(t) . . . . . zN(t)¢ identifies the n-paths in (A,1) with points in T, and #v and /1, are defined (at last) as measures on T. The image in T of continuous paths that never intersect each other we call T'. T - T' is a set of measure zero. The sum in (A.1) realizes PF and/~8 as a sum of measures with disjoint supports in T'. Thus I#rt =kt,. Appendix B. The Truncated Coulomb Interaction and Its Stability We consider H = H o F + HoB + g2/2 ~ : (@P -- ~d?)V(!lTV - ~¢): with 1 _ N = NF + N B v is our truncated 1/r potential given by v(x -y) = ~ d 3 z f ( x -z)Lx -Yl - 1f ( y _ z) (B.1) (B.2) (B.3) The Cluster Expansion in Statistical Mechanics 241 We assume f is a non-negative real C2 function on R 3 satisfying f ( x ) = 0 if Ixl>l/10 (B.4) We also define auxiliary potentials v. v . ( x - y) = ~ d3zf (x - z)l x - Yl - 1e-"tx-'l f (y _ z) (B.5) v and v. satisfy the following properties: P,0 v=vo (B.6) P.1 v.(x)=0 if Ixi>2/10 (B.7) Iv.(x)l < cllxI- e -hill (g.8) P.2 There are c2 > 0 and c3 > 0 such that v.(x)~c2lxl-le -"Lxt if c 3 > l x t (B.9) and vm(x ) > 0 for all x. P.3 There is c 4 such that Vn V n ( X ) ~ c 4 ( r - le-"') (r- % - " 9 (B.10) P.4 If n > m then v m - v . 
> O (B.11) as an operator and numerically. P,5 There is a c5 such that ( v - v,)(0) = csn (B.12) P.6 Let {X~}be translates over a lattice of a real function in C2, then there is a c6 > 0 (c6 depending on Z~) such that v - v I > c 6 ~ ZiZi (B.13) as an operator inequality. These properties are immediate except for P.6. It is proved below. Theorem. I f to v(x) may be associated a set of potentials v,(x) satisfying P.0 through P.6 then H as given in (B.1) is stable. A proof of this theorem may be constructed by examining the proof in and verifying these properties are sufficient to provide stability. In the absence of the exclusion principle--that is if ~p were a boson field instead of a fermion field---the Hamiltonian is unstable. By examining one can deduce if H + BN~>O then 7 > 7/5. Proof of P.6. We wish to prove P.6, that for X~ translates over a lattice of a real function in CZo f ( r - ~(1 - e-~))f >=c ~ Z,X, (B.14) 242 D. Brydges and P. Federbush Basically we proceed via a few reductions. Assume {¢z~} are real functions, {e} a finite indexing set, and for fixed e the ~b~ are translates over a lattice of each other; then if Zi = ~, 4i~ (B.15) it follows that (B.14) is implied by f(r- 1(1 - e-'))f > c'~ dp,~4),~ (B.16) This is the first reduction. It follows from the inequality ¢~(P~p + ~b~p(h~=<~b~b~ + ~b~pq~, (B. 17) upon expanding The next reduction is to observe that (B.16) follows from the relationship 1 -- e-{x-yt f (z- x) f (z- y) > c"dpi~),~ (B.18) Ix- yl for ze U~, the U~ non-empty open sets; i fixed. This can be seen by noting that the integral in (B.16) then contains positive contributions to dominate the terms on the right hand side (which may be picked coming from disjoint portions of the integral). We look at an equivalent form of (B.18) again for ze U,, t - e-t:,-yl 1 1 (B.19) __> c " --q ~ i ~ ( x ) ~ A y ) -- Ix-y] f ( z - x ) f ( z - y ) 1 From the proof of Fact 5 in we get (B.19) provided ~ b ~ , ( x ) is in C2, with derivative estimates uniform in z, for ze U~. The ¢~: are easily constructed as a finite C g partition of )~ satisfying Supp (qSz~(x))C {xlf(x- z) > e} (B.20) for some e > 0 and z. Appendix C. Proof of Proposition 3.6 As in Section 4, the three lines lemma implies (the comments below (4.1) are in force) (Te ~ A) -- I H A ( ' c ) d 1: Tr A oThe Cluster Expansion in Statistical Mechanics 243 where 1/1)+ 1 / q = 1, p > 1, . To complete the proof we need to show that the first factor on the right of (C.1) is less than (Tra(e e \l/p - ~ [no + pvl(~)d~)) . (C.2) The principle involved is contained in the following lemma. Lemma. L e t A , B be hermitian matrices with A > O . L e t s 1 . . . . . s , > O with ~ s i = l and let Ul . . . . . u,_ 1 be unitary matrices, then i=1 [Tr ( e - s l (A + iB)U I e - s2(A + iB)u2 . . . Un _ l e - s.~a + iB))I < Tr e - a . (C.3) Proof. It is sufficient to prove it when sl ..... s, are rational fractions with No their common denominator. Apply the Trotter product formula in the form e - s k ( A + i B ) = lim (e -(1/IN°)iBe-(1/lN°Ia) l~°sk i-+oo for k = 1, 2 ..... n so that the left hand side of (C.3) is lim T r ( ~ f (Vse-~I/m°)A)) (C.4) t~m \ j = l Where gj is a unitary operator (either e -~/IN°)~s or uke -C/m°)iB for some k). By Holder's inequality for trace norms , the absolute value of (C.4) is less than /No lim 1-I (WrlVf-{/'s°)alm°)/'N° l-,m j = l /No = lim 1-[ (Yrle-~/'~°)al'N°)/'u° l~oo j = l = Wr (e -a) (C.5) which concludes the proof of the lemma. 
We do not discuss the technicalities involved in extending the inequality (C.3) to allow A = H o + p V , B = p ( I m z ) V , and ui=al LIm(1-znq, thereby obtaining (C.2). Appendix D. A Path Space Estimate Incorporating Conditions that Paths Must Hit Barriers We study I = ~ d~H r (D.1) We restrict our notation to the situation where a single boson species is described by the measure, this is a trivial simplification. Without the function H r this would be the integral over n-paths in X that realizes the trace of e - ~ . The inclusion of H r restricts the integral to n-paths with the property that each barrier 244 D, Brydges and P. Federbush in F is hit by some path (different barriers may be hit by different paths or the same path). We majorize (D.1) by a sum of terms, one for each partition p of the faces in F P~--~(P l ..... Ps) (0.2) p~ a subset of faces in F. To each p~ is associated a path integral from x~ to y~, the x~ and Yi localized in A2, and A~;. The paths associated to Pi must hit all the barriers in p~. With this notation we claim ji,J[ i= 1 zlj~ ~j/ pi J .Tr2(]~ Nyi)e -~Hg ~ c~(xi)) (D.3) Realized as n-paths the trace in (D.3) is greater than I since all the n-paths in I are summed over with same numerical weight, but some more than once. We now note that the expression inside the trace equals [1 g(Yi) e-Ne-~Hg +2Ne-N [I 4(xi) (D.4) We let ~j be the number of x~ localized in Aj, and flj the number of y~ localized in Aj, We recall if T > 0 then ITr (e- rR)[ <-_ r r (e- T). T IR]T (0.5) Our "R" is of the form [I(\ ~ dxz ~ dyz]G(xi, yi)e -~" I-I 4(xz) [ [ qS(Y)e-N (D.6) ail ] Normal ordering and employing N~ estimates one finds I[(D.6) Ib< 1-[ (ej + 1) 2C~s+ 1)(fljAV l)2(flj+ 1)Sup [G(x~,Yi)l (D.7) We have used the fact that the integration regions are of volume one, so that the sup norm dominates the L2 norm (and other norms arising in the process). We let hi= h(p~,A j,, Aj,,) be the maximum over x~ and y~ of the path integrals in parentheses in (D.3). This yields the estimate I <=e~lxlZ ~ 1-[ (c~j+ 1)2(~j+ 1)(flj+ 1)2(flj+ 1) I~ hi (D.S) pj~,j'~ We write hi<hli.hzi.h~ with h i i = c 3e-c4d(~, ,S~), S ~ E p i (D.9) and h2 i ---- c 3e-~at~,. s~,),Sa,E pi (D.10) where S~ and S~, are picked minimizing the distance d. We get that Sup ~ (c~j+ 1)2(=J+ ~)(flj+ 1)2{ej+ 1) 1~ h,i [] h2~<e c,lrl (D.11) pji,,j~ The Cluster Expansion in Statistical Mechanics 245 by an argument as in Section 10 of . Thus we have the estimate I <--e¢llxl+csIrl Z l-~ h~i (D.12) P The final result is obtained provided Sup ~ h3~<e -c6lri (D.13) P with c 6 going to infinity with # going to minus infinity, and 2 11 hai~ ecalrl (D.13)' P The estimate for h23~,the heart of the matter, is obtained by the same argument as in Proposition 8.1 of . It is the square root of the measure for paths hitting all the barriers in Pi in time ft. (The root is taken so that hli and h2~ may be factored out of the total probability.) The length of such a path must be at least cvlp~[ for IP~[ large. It is not surprising that one gets h3i < e- 1/4~ + cglwt- ~8Ip~l~lp~i! (D. 14) The factorial accomodates different orders of hitting the barriers. By picking # large enough one gets (D.13) for any c6. 
To get (D.13)' we observe that to a path that hits barrier i and then barrier j may be associated a numerical factor e-~'e% where d~j is the distance between barrier i and barrier j, such that ~ l-I h3~ is overestimated by P Those to to hit barriers 2 k in have paths contributing h3i required t, order, associated to them k--1 ) 1FI e j=l Theorem. t < e~,lxI-klrl where k can be made arbitrarily large, c 1 fixed, by picking # large negative. Appendix E. Proof of Theorem 3.4 (1) We consider the difference between the ratio of Z's in Theorem 3.4 (1) for two choices of A, A1, and A2. Z ( ( A 1 -X F ) / Z ( A 1) -z ( ( A 2 -X f ) / Z ( A 2 ) = = (Z(A2)Z((A l - X ) ~) - Z(A 1)Z((A2 - X) ~))/Z(A 0Z(A2). (E. 1) We pick a set {Aj, j e J } of distinguished cubes with the property that their union is inside ( A - X F for all A large enough, and such that this union separates X 246 D. Brydges and P. Federbush and the c o m p o n e n t of infinity. This choice depends on X but is independent of A1 and A z for A1 and A2 large enough. ( U Aj is a collar a r o u n d X.) jsJ We view each product of Z's in the n u m e r a t o r of the right term in (E.1) as a single partition function for a doubled system, each subsystem with the same interactions as the original system, but with no mutual interactions. The b o u n d a r y data of the two subsystems are different to yield the indicated products. We expand each product of Z's in the (E.1) n u m e r a t o r in a single cluster expansion for the doubled systems, using the distinguished cubes defined above. Pairs (X, F) arising in the two cluster expansions cancel until X hits either 0A1 or (?.4 2 . Thus the difference in (E.1) goes to zero exponentially with the mini-m u m width of A 1 or A 2 whichever is smaller, provided #o is large enough negative. This proof is similar to the p r o o f of clustering in , Section 4, which also uses a doubled system. References Dyson,F.J.: J. Math. Phys. 8, 1538 (1967) 2. Dyson, F.J., Lenard, A.: J. Math. Phys. 8, 423 (1967); J. Math. Phys. 9, 698 (1968) 3. Federbush, P.: J. Math. Phys. 16, 347 (19")5); J. Math. Phys. 16, 706 (1975) 4. Federbush, P.: J. Math. Phys. 17, 200 (1976); J. Math. Phys. 17, 204 (1976) 5. Ginibre, J.: Some Applications of Functional Integration in Statistical Mechanics. In: Statistical Mechanics and Quantum Field Theory, Les Houches 1970 (ed. C. Dewitt, R. Stora). New York: Gordon and Breach, 1971 6. Glimm, J., Jaffe,A, Spencer,T.: The Cluster Expansion. In: Constructive Quantum Field Theory, The 1973 "Ettore Majorana" International School of Mathematical Physics, (ed. G. Velo, A. Wightman). Berlin-Heidelberg-New York: Springer 1973 7. Lieb, E.H., Lebowitz,J.L.: Advan. Math. 9, 316 (1972) 8. Lieb, E.H., Thirring, W.E.: Phys. Rev. Letters 35, 687 (1975) 9. Reed,M, Simon, B.: Fourier Analysis and Self-Adjointness, New York: Academic Press, 1975 Communicated by A. S. Wightman Received November 11, 1975; in revised form April 6, 1976
123415
Vsevolod Meyerhold – Russiapedia: Prominent Russians / Cinema and theater

Prominent Russians: Vsevolod Meyerhold
February 9, 1874 - (approximately) February 2, 1940

Vsevolod Meyerhold was a Russian and Soviet actor and theater director, and the creator of a new acting system called "biomechanics". It is hard to overestimate his role in the development of the Russian theater.

Meyerhold's birth name was not Vsevolod but Karl Kasimir Theodor. He was born into a Lutheran German family that lived in the Russian city of Penza. His father owned a liquor factory and was rather rich, though strict: he controlled the children's expenses and was never generous with pocket money. He was not much interested in any of the arts, while his wife regularly organized musical evenings and was fond of the theater. Karl and his siblings shared her interest and often participated in amateur plays.

In the gymnasium, Karl was not a high achiever: he had to repeat a year three times to get a certificate of completion. He graduated in 1895 and attended the law department of Moscow State University. That same year, he did two things that shocked his family: he converted to the Russian Orthodox Church and changed his name to Vsevolod. A year later he married his childhood love, Olga Munt. In those days, students had to receive special permission from the governor to marry, and Meyerhold was persistent enough to write letter after letter to the authorities until he succeeded.

The same year, he went to see Othello as staged by Konstantin Stanislavsky. This experience changed Meyerhold's life: inspired by Stanislavsky's talent, he left the law department and entered the Theater and Musical School of the Moscow Philharmonic Society. His tutor, the theater director Vladimir Nemirovich-Danchenko, appreciated Meyerhold's talent, erudition and energy. When Nemirovich-Danchenko decided to found a new theater together with Stanislavsky, Meyerhold was among the first students invited to join the troupe. Meyerhold accepted the invitation and, after graduating in 1898, joined the newly formed Moscow Art Theater.

The newly created theater was headed by Nemirovich-Danchenko and Konstantin Stanislavsky. In those days, Stanislavsky was working on his acting system based on deep character study and realistic acting, which nowadays is world famous.
Scrambled Eggs With Onion

Meyerhold remained Stanislavsky’s apprentice until 1902. That year, after playing twenty parts on the Moscow Art Theater's stage, Meyerhold announced his rejection of Stanislavsky’s methods, left the troupe and turned from acting to directing. Together with Aleksandr Kosheverov, another actor who left with him, Meyerhold organized The New Drama Fellowship in the city of Herson, Ukraine. His first performances resembled those of MAT, but soon he started experimenting, looking for a new theater style and new expressive means. Nemirovich-Danchenko called Meyerhold's ideas "the muddle created by a man who discovers several new truths, pushing another one every day", as well as "nonsense", “hell knows what” and "scrambled eggs with onion".

As opposed to Stanislavsky, Meyerhold was usually indifferent to the psychological side of acting, but he was fascinated by its visual side. He made the actors work on body movements, not on character study, and assured them that “buffoonery and clowning are necessary for an actor, and the simplest simplicity should include the elements of the clown”. His performances resembled marionette theater shows. Suburban Herson did not appreciate Meyerhold's innovations, and The New Drama Fellowship had to go on tour as often as possible. The troupe earned a certain reputation.

If Everyone Scolds Him

In 1905, Stanislavsky, who had heard about Meyerhold’s success, invited him back to Moscow to head a theater studio Stanislavsky was going to open and to continue experimenting. Meyerhold accepted the offer, arrived, and staged three plays: The Death of Tintagiles by Maurice Maeterlinck, Love's Comedy by Henrik Ibsen and Schluck and Jau by Gerhart Hauptmann. Unfortunately for him, Stanislavsky found Meyerhold’s experiments too radical and bizarre. He cancelled the studio, and Meyerhold, having spent all his money paying the actors, returned to Herson.

In 1906, the famous actress Vera Komissarzhevskaya invited Meyerhold to St. Petersburg to work as a director in her theater. Meyerhold staged 13 plays within its walls, and the reaction to his performances was never unambiguous: usually half of the audience was whistling while the other half was applauding. Meyerhold kept experimenting. In Ibsen's Hedda Gabler and in Maeterlinck’s Sister Beatrice he used the acting methods of symbolic theater: slow movements and emotionless speech combined with expressive, sculpture-like poses and gestures. To stage A Puppet Show by Aleksandr Blok, Meyerhold studied the traditions and principles of the Italian folk mask theater, commedia dell’arte.

Being an actress herself, Komissarzhevskaya did not appreciate Meyerhold's way of directing. In his performances, actors worked as marionettes in his hands, nothing more. When the season was over, she fired Meyerhold. He had to return to touring with the actors from The New Drama Fellowship.

In 1908, the director of the Imperial Theater, Vladimir Telyakovsky, suddenly offered Meyerhold a position in the conservative and respectable Alexandrinsky Theater. As he said in an interview, “I thought: Meyerhold has to be an interesting person, if everyone scolds him.” Meyerhold dedicated nearly ten years of his life to the Alexandrinsky Theater. He abandoned symbolism and turned to ancient theater and folk theater traditions. Among his most significant performances was Molière’s Don Juan, staged to resemble a show from the era of Louis XIV. In 1914, he opened a theater studio where he worked on commedia dell'arte comedies.
His last premiere, on the night of the October Revolution, was The Masquerade by Mikhail Lermontov.

"RSFSR-1"

Meyerhold was deeply inspired by the revolution. Several weeks after the Bolsheviks came to power he arrived at the Bolshevik headquarters in Smolny and declared he was ready to cooperate with the new government. He joined the Communist party, headed the theater department of the People’s Education Commissariat and, among other things, launched a press attack against Stanislavsky and Nemirovich-Danchenko and their views.

Meyerhold's ideas were becoming more and more radical. He kept experimenting: he wanted to abolish the profession of the actor and let common people participate in plays, he wanted to give free theater tickets to workers and peasants, and he wanted to rename all the theaters of the USSR with the abbreviation “RSFSR”: “RSFSR-1” (“Russian Soviet Federative Socialist Republic 1”), “RSFSR-2”, and so on. Within a year, the People’s Commissar of Culture removed him from the department.

In 1920, Meyerhold founded a theater named "RSFSR-1" to use as his own laboratory. This theater changed names many times, until in 1926 it finally became the State Meyerhold Theater. To work there, the actors had to study “biomechanics”, Meyerhold’s new acting system based on body movements. Meyerhold considered the art of acting to be the art of moving, and held that to understand a character the actor has to begin with movement. Poses and gestures, according to Meyerhold, represented thoughts and feelings more clearly than words.

The main female parts in Meyerhold’s theater were usually played by his second wife, Zinaida Reich. They met in 1921 and married almost immediately. She had been a secretary, but Meyerhold introduced her to acting and turned her into a great actress.

The Last Days

Clouds began to gather over Meyerhold’s head in 1936, when Stalin began to look for a “conspiracy among people in the arts”. The Pravda newspaper published a sardonic article about Shostakovich’s opera Lady Macbeth of Mtsensk, entitled Mess Instead of Music. According to the article, the opera suffered from a lack of realism and from pretentiousness, and contained all the worst features of “meyerholdovshchina” (that is, resemblance to the art of Meyerhold). This made-up word soon became an insult.

Around this time Meyerhold attempted to stage a new play, but did not succeed. The next failure of the same kind led to more serious consequences: Meyerhold did not manage to stage a performance that was to coincide with the 20th anniversary of the Revolution. Meyerhold was accused of libel against the Communist Party. In 1938, his theater was closed. At the last performance, the hall was full and the audience chanted Meyerhold’s surname as a curtain call.

Stanislavsky, his teacher and his rival, wanted to help him and invited him to work at the State Opera Theater. Several months later, Stanislavsky died. Meyerhold continued to work at the theater, but it was obvious that nothing could save him from the state anymore. His last triumph took place on June 13–15, 1939, at the All-USSR Directors' Conference, where his speeches were met with thunderous ovations. Four days later the NKVD, the predecessor of the KGB, arrested him late at night. After a day of torture, Meyerhold signed all the documents his executioners wanted him to, confessing to spying for the British and the Japanese and stating that he was an opponent of the Soviet regime and Soviet art.
On February 2, 1940, Meyerhold was executed. He was buried in Donskoye cemetery in Moscow, in a mass grave.

Written by Olga Pigareva, RT.
123416
Three-Check | Lichess Wiki | Fandom
===============

Three-Check

Stub: This article is a stub. You can help Lichess Wiki by expanding it.

Check your opponent 3 times to win the game.

Three-Check, or simply 3-Check, is a variant on Lichess.

Quick rules

There is one more way to win compared with standard chess: checking the opponent's king three times. As always, you can also win by time or checkmate.

Rules

All the laws of FIDE chess apply. In particular, a move is legal if and only if it would have been legal in FIDE chess. If you make a legal move that puts your opponent's king into the third check, you win.

Categories: Stubs, Variants
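As a minimal illustration of the win condition (not Lichess's implementation), the sketch below simply tracks the checks delivered by each side and flags a win on the third; move legality and check detection are assumed to come from an ordinary chess engine, and all names here are purely illustrative. For full move validation, the third-party python-chess library also provides a Three-Check variant board, which is a more robust starting point.

```python
# Minimal sketch of the Three-Check win condition: standard chess rules decide
# legality and checkmate; this only tracks the extra "third check wins" rule.
# CheckCounter and register_move are illustrative names, not Lichess code.

class CheckCounter:
    """Counts checks delivered by each side; three checks win the game."""

    def __init__(self, target: int = 3):
        self.target = target
        self.checks = {"white": 0, "black": 0}

    def register_move(self, mover: str, gives_check: bool) -> bool:
        """Record one legal move; return True if it wins by the third check."""
        if gives_check:
            self.checks[mover] += 1
        return self.checks[mover] >= self.target


# Example: White delivers checks on moves 3, 7, and 12 and wins on the third check.
counter = CheckCounter()
for move_no, gives_check in [(3, True), (7, True), (12, True)]:
    if counter.register_move("white", gives_check):
        print(f"White wins by third check on move {move_no}")
```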
123417
Published Time: 2024-02-01T13:59:00.000Z
Separation of All Classes of Carbohydrates by HPAEC-PAD
===============
February 1, 2024 | LCGC International, February 2024, Volume 1, Issue 2, Pages: 12–18

Author(s): Christian Marvelous, Daniel Vetter, +4 More

High performance anion-exchange chromatography coupled with pulsed amperometric detection (HPAEC-PAD) is a potential method of choice for the analysis of carbohydrates.

Carbohydrates are essential to a wide range of industries, including food, pharmaceuticals, and consumer goods. Within the food industry, carbohydrates stand out as one of the key factors in determining the nutritional value of a product. Consequently, the analysis of carbohydrates has become an indispensable tool in the food industry. Various techniques are available for carbohydrate analysis, each with its own merits and disadvantages (1). This article focuses on high performance anion-exchange chromatography in combination with pulsed amperometric detection (HPAEC-PAD) as a preferred technique for carbohydrate analysis.

HPAEC allows the separation of complex mixtures of carbohydrates, for example between mono-, di-, oligo-, and polysaccharides. Furthermore, isomeric sugars such as epimers, or disaccharides with different linkage positions, are known to be separated using HPAEC (2,3). The use of pulsed amperometric detection (PAD) in combination with HPAEC enables the direct analysis of carbohydrates, eliminating the need for derivatization (2,3). Additionally, PAD enables sensitive detection of carbohydrates down to pico- or femtomole levels (2).

The history of carbohydrate analysis using HPAEC-PAD started in the late 1950s, when the ionization of the hydroxyl groups of carbohydrates under alkaline conditions was demonstrated, revealing the potential for carbohydrate separation using anion-exchange chromatography (AEC) (4). At that time, the lack of strong, commercially available anion-exchange resins capable of withstanding the harsh alkaline conditions limited the practical use of this discovery. It was not until 1983 that Rocklin and Pohl introduced the first AEC for carbohydrates using a 10-µm particle coated with a monolayer anion-exchange latex (5). Since then, the development of carbohydrate analysis using HPAEC-PAD has progressed significantly through improvements in both the chemistry of the anion-exchange resins and the reduction in particle sizes.

The separation capability of a stationary phase depends on several factors, such as the type of porous resin (microporous, macroporous, or super macroporous), the particle sizes (substrate and latex bead diameters), the crosslinking degree of the substrate and latex beads, and the type of anion-exchange group. For instance, particle sizes have evolved from 10 µm to smaller dimensions, such as 8.5 µm, 6 µm, 5.5 µm, 4 µm, and sub-4-µm particles, to improve separation efficiencies and shorten analysis time (6–8). Using smaller particle sizes with improved chemistries and stationary phase architecture enabled fast, high-resolution anion-exchange separation of complex carbohydrate samples (6–8).
Nevertheless, smaller particle sizes give rise to higher column back pressures, especially for columns with sub-5-µm particles, which places some limitations on the metal-free ion chromatography (IC) instrumentation that can be used for fast, high-resolution HPAEC-PAD analysis. The construction materials of the equipment, capillaries, and column blanks must have a sufficiently high maximum pressure rating to operate with such columns. Therefore, a novel agglomerated pellicular anion-exchange stationary phase for carbohydrate analysis has been developed and evaluated. The new stationary phase is based on a monodisperse 5-µm resin of a highly crosslinked poly(divinylbenzene-co-ethylvinylbenzene) copolymer coated with quaternary amine functionalized latex nanoparticles. A 200 × 4 mm i.d. column packed with these highly uniform 5-µm resin particles produces relatively low column back pressures, reaching only approximately 130 bar under typical separation conditions (0.7 mL/min, 12 mM NaOH, 30 °C). The schematic in Figure 1 illustrates the particle architecture of the new stationary phase, and the monodispersity of the particles is evident from the provided scanning electron microscope (SEM) image. The monodisperse particle size of the resin should enable high-resolution separation, in contrast to resins with a larger particle size distribution.

In this article, we demonstrate the performance of this new stationary phase in the separation of all classes of carbohydrates, ranging from mono-, di-, and trisaccharides to oligo- and polysaccharides. The separation was performed using a 200 × 4 mm i.d. analytical column packed with this new stationary phase. The specifications of the column are shown in Table I.

FIGURE 1: (a) Schematic of the individual resin particle of the new stationary phase (SweetSep AEX200). The particle consists of a 5-μm non-porous poly(DVB-co-EVB) core (green) coated with latex particles (white) with quaternary amine anion-exchange groups (for clarity, only half of the nano-beads are shown). (b) SEM picture of monodisperse resin particles, scale bar 10 μm. (c) SEM picture of the latex-agglomerated surface of the monodisperse resin particles, scale bar 1000 nm.

Materials and Methods

Materials

All chemicals were purchased from Sigma-Aldrich, Carbosynth, or Alfa Aesar unless stated otherwise. All carbohydrate standards were of analytical grade. Sodium hydroxide solution (50% w/w), high performance liquid chromatography (HPLC) grade sodium acetate trihydrate, and LC–mass spectrometry (LC–MS) grade acetonitrile were purchased from Fisher Scientific. Ultrapure water was obtained using a Merck Synergy Water Purification UV System (resistivity 18.2 MΩ·cm, TOC ≤ 5 ppb). All mobile phases were manually prepared, sparged, and blanketed with nitrogen 5.0 (nitrogen ≥ 99.999%) to minimize the build-up of carbonate ions and to ensure reproducible analysis.

General Methods

All analyses were performed using the ALEXYS Carbohydrate Analyzer (Antec Scientific). This metal-free, bio-inert analyzer consists of a quaternary low-pressure gradient (LPG) pump, autosampler, column thermostat, eluent tray, and electrochemical detector. During the preparation of the mobile phase, borate ions may be present in low parts-per-billion (ppb) concentrations, which can lead to peak tailing of some carbohydrates such as fructose and lactulose.
Therefore, as a precaution, a borate ion inline trap column (50 × 4 mm i.d., Antec Scientific) was installed between the pump and the injector. A 200 × 4 mm i.d. analytical column packed with the new stationary phase (SweetSep AEX200) was used for all experiments. The separation temperature was set to 30 °C and an injection volume of 10 µL was used in all applications.

For pulsed amperometric detection, the SenCell electrochemical flow cell was used (9). This flow cell has a confined wall-jet design and consists of a gold working electrode (WE), a HyREF (Pd/H2) reference electrode (RE), and a stainless-steel auxiliary electrode (AE). The flow cell has an adjustable spacer and was set to position 2, which corresponds to a 50-µm spacing and a 160-nL working volume. A four-step potential PAD waveform was applied for detection: E1, E2, E3, and E4 were +0.10, –2.0, +0.6, and –0.1 V, respectively, with pulse durations of t1 = 400 ms, t2 = 20 ms, t3 = 10 ms, and t4 = 70 ms. The signal (cell current) is acquired for 200 ms with a sampling rate of 10 ms during t1, between t = 0.20–0.40 s. The signal output is the average cell current in nA measured during this 200-ms period. The data rate of the signal output is 2 Hz, which corresponds to the 500-ms total pulse duration of the applied four-step potential waveform. This four-step waveform has two benefits: (1) a long-term reproducible response factor for all analytes and (2) minimal electrode wear (10). The detection temperature was set to 35 °C.

The stock solutions of the individual standards were prepared in 95:5 (v/v%) water/acetonitrile at a concentration of 10 mM. Acetonitrile was added to prevent fast degradation and to minimize bacterial or fungal growth. The stock solutions of the standards were stored in the freezer at −20 °C and were stable for more than a month. The working standard mixes were prepared by serial dilution of the stock standards with deionized (DI) water.

Evaluation of Column Performance and Long-Term Stability

Separations of a mix of 10 sugar standards were performed on the aforementioned analytical column to evaluate its performance and long-term stability. The mix consists of fucose, arabinose, galactose, glucose, sucrose, fructose, allolactose, lactose, lactulose, and epilactose in DI water, at a final concentration of 10 µM. The separation was based on a step gradient. During the first 20 min, the sugars are eluted under isocratic conditions using 12 mM NaOH as the mobile phase at a flow rate of 0.7 mL/min. The isocratic elution step was followed by a column clean-up step using 100 mM NaOH for 5 min at 0.8 mL/min and equilibration to the starting conditions for 17.5 min, resulting in a total run time of 42.5 min. The long-term stability of the column was assessed by repetitive 10-µL injections of the standard mix solution for about 4 months, resulting in more than 2600 chromatographic runs.

Application 1: Sugars in Honey

A 10 µM mix of 14 sugars commonly found in honey (trehalose, glucose, fructose, isomaltose, sucrose, kojibiose, gentiobiose, turanose, palatinose, melezitose, raffinose, 1-kestose, maltose, and erlose), prepared in DI water, was used as the working standard for this application. A wild honey obtained from a Swiss beekeeper, harvested during the summer season of 2023, was used as the sample. The honey samples were prepared by weighing 100 mg of honey and dissolving it in 100 mL DI water to achieve a concentration of 1 g/L.
Subsequently, the samples were filtered over a 0.22-µm polyethersulfone (PES) syringe filter (GVS Filter Technology) into vials for injection. The separation was performed on the HPAEC-PAD system described above, using the following step-gradient program: isocratic elution at 0.7 mL/min using 68 mM NaOH for 25 min, followed by a 5-min column clean-up step using 100 mM NaOH + 100 mM NaOAc (sodium acetate), and equilibration to the starting conditions for 15 min. The total run time of each run was 45 min. For quantification purposes, the honey sample was diluted to concentrations of 0.1 g/L and 0.01 g/L by serial dilution with DI water.

Application 2: Profiling of Fructooligosaccharides (FOS)

Inulin from chicory was used as the sample to obtain the fructooligosaccharide profile. The sample was prepared by dissolving a known amount of inulin powder in DI water, followed by filtration over a 0.22-µm PES syringe filter and dilution to a final concentration of 200 ppm. The separation was performed on the same HPAEC-PAD system as mentioned earlier, using a gradient program with a flow rate of 0.8 mL/min. The gradient program started with 100 mM NaOH, and a linear gradient to 100 mM NaOH + 180 mM NaOAc was applied until t = 12 min. Subsequently, a gentler linear gradient to 100 mM NaOH + 450 mM NaOAc was applied until t = 60 min. The system was then equilibrated to the starting conditions for 15 min, resulting in a total run time of 75 min. Initial peak assignments were based on the elution pattern of glucose (G), fructose (F), sucrose (GF), 1-kestose (GF2), nystose (GF3), and fructosyl nystose (GF4). Because of the lack of commercial standards for sugars with a high degree of polymerization (DP), further assignments were based on the assumption that the retention of a homologous series of carbohydrates increases as the DP increases.

Results and Discussion

Column Performance and Long-Term Stability

The column performance and long-term stability assessment of the 200 × 4 mm i.d. column were conducted based on the separation of 10 sugars. The 10 sugars were carefully chosen to cover a range of molecular structures: (1) monosaccharides (glucose) and disaccharides (sucrose); (2) isomers (allolactose and lactose); (3) epimers (galactose and glucose); (4) a hexose (fructose) and a pentose (arabinose) among the monosaccharides; and (5) deoxy sugars (fucose). The separation of the 10 sugars on the analytical column was achieved under isocratic elution conditions with 12 mM NaOH at a flow rate of 0.7 mL/min. Under these conditions, all 10 sugars were baseline separated (resolution ≥ 1.5). The symmetry and tailing factors for the 10 sugars were excellent, with values of approximately 1.1 for most sugars except arabinose (1.2). The plate numbers for the sugars ranged between approximately 12,300 and 19,300, and the reduced plate heights (h) for most of the sugars were close to the ideal value of 2.0 for a 200 mm column with a 5-µm particle size. All column parameters are provided in Table II.

The long-term stability of the 200 × 4 mm i.d. column was assessed using the same 10-sugar mix under the same conditions. Overlay chromatograms of several selected injections over four months, during which more than 2600 injections were conducted, are provided in Figure 2. The retention times of all 10 compounds remained stable over this period. The small variations in peak height and peak area were caused by differences in the manual preparation or aging (degradation) of the standard mix.
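As a brief aside on the reduced plate heights quoted above, the arithmetic is simply the plate height H = L/N divided by the particle diameter. A minimal Python sketch using the reported plate-number range (the per-sugar values are not reproduced here) is:

```python
# Reduced plate height h = (L / N) / d_p for the 200 x 4 mm column with 5-um particles.
# The plate numbers below are the range reported in the text, not per-sugar values.

L_um = 200 * 1000     # column length: 200 mm expressed in micrometres
d_p_um = 5.0          # particle diameter in micrometres

for N in (12300, 19300):          # reported plate-number range
    H_um = L_um / N               # plate height H = L / N
    h = H_um / d_p_um             # reduced plate height h = H / d_p
    print(f"N = {N:5d} -> H = {H_um:5.2f} um, h = {h:4.2f}")

# The best-performing sugars (N close to 19300) give h near the ideal value of about 2.
```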
The long-term stability of the column was also assessed based on the loss of plate numbers and changes in tailing factors. No loss in plate numbers or increase in tailing factors was observed, which indicates that columns packed with this resin are very stable and have an outstanding lifetime.

FIGURE 2: Overlay of injections #10, #800, #1500, and #2600 after 4 months of continuous injections of a carbohydrate mixture. Peak labels: (1) fucose; (2) arabinose; (3) galactose; (4) glucose; (5) sucrose; (6) fructose; (7) allolactose; (8) lactose; (9) lactulose; and (10) epilactose.

Sugars in Honey

Honey is a complex natural substance with promising potential for various health benefits, and it consists of approximately 80% carbohydrates (11). Because of its economic appeal, honey is susceptible to food fraud and adulteration. For instance, in 2021 the value of imported honey was 2.32 €/kg, whereas commonly used adulterants such as rice syrups cost approximately 0.40–0.60 €/kg in the European Union (EU) (12). The composition and definition of honey in the EU are regulated by the EU Honey Directive 2001/110/EC (13). The directive specifies the criteria for unadulterated honey products, including thresholds for sugars in honey. Therefore, HPAEC-PAD is an attractive method that can quantify sugars in honey to check the authenticity of honey samples.

The separation of the 14 sugars commonly found in honey using the aforementioned column is depicted in Figure 3. Of the 14 sugars, two are monosaccharides (glucose and fructose), eight are disaccharides (trehalose, isomaltose, sucrose, kojibiose, gentiobiose, turanose, palatinose, and maltose), and four are trisaccharides (melezitose, raffinose, 1-kestose, and erlose). All sugars eluted within 25 min, and most were baseline separated (resolution ≥ 1.5), except for palatinose and melezitose (resolution 1.1 and 1.2, respectively). The peak efficiency for all sugars ranged between 8000 and 16,000 plates, and no peaks exhibited significant tailing (tailing factors between 1.0 and 1.2).

FIGURE 3: Overlay chromatograms of 10-μL injections of a 10 μM standard mix of 14 sugars commonly found in honey (black lines) and a 1 g/L honey sample obtained from a Swiss beekeeper (red lines). Peak labels: (1) trehalose; (2) glucose; (3) fructose; (4) isomaltose; (5) sucrose; (6) kojibiose; (7) gentiobiose; (8) turanose; (9) palatinose; (10) melezitose; (11) raffinose; (12) 1-kestose; (13) maltose; and (14) erlose.

The presented method was validated by testing linearity and repeatability and by determining the limits of detection (LODs). The linearity of the method was investigated in the concentration range of 0.01–50 µM. In this range, the linearity is excellent, with correlation coefficients (r) > 0.999 for all sugars except turanose (r = 0.9986). A total of 10 repetitive injections of the 10 µM standard mix in DI water were performed to assess the repeatability of the method. Excellent repeatability was found, as shown by the very small relative standard deviations (RSDs) of the retention times, peak heights, and peak areas (<0.3%, <0.5%, and <0.6%, respectively). The LODs were calculated based on the International Council for Harmonization (ICH) guidelines (that is, LODs were calculated as the analyte response corresponding to 3× the ASTM noise, with an average peak-to-peak baseline noise of 10 segments of 0.5 min). The excellent sensitivity of the method is evident from the low detection limits for all sugars (<70 nM).
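To make that LOD recipe concrete, the short sketch below multiplies the average peak-to-peak noise of ten 0.5-min baseline segments by three and converts the result to a concentration through a calibration slope. Both the noise values and the slope are invented placeholders chosen only to show the arithmetic; they are not the article's measured data.

```python
# Illustrative ICH-style LOD estimate as described above: the LOD corresponds to
# the analyte response equal to 3x the average peak-to-peak baseline noise.
# All numbers below are hypothetical placeholders, not values from the article.

noise_segments_nA = [0.8, 0.9, 0.7, 0.85, 0.75, 0.9, 0.8, 0.7, 0.95, 0.85]  # p-p noise of 10 x 0.5-min segments
calibration_slope_nA_per_uM = 45.0   # peak response per unit concentration (hypothetical)

avg_noise_nA = sum(noise_segments_nA) / len(noise_segments_nA)
lod_uM = 3 * avg_noise_nA / calibration_slope_nA_per_uM   # LOD = 3 x noise / slope

print(f"average p-p noise = {avg_noise_nA:.2f} nA")
print(f"estimated LOD = {lod_uM * 1000:.0f} nM")
```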
To demonstrate the applicability of the method, a summer honey sample obtained from the Swiss beekeeper was tested. The chromatogram of the 10-µL injection of the honey sample is shown in Figure 3. All 14 sugars were detected in the honey, with glucose and fructose being the most dominant. Quantification shows that the glucose, fructose, sucrose, and maltose contents are 27.4 g, 31.9 g, 0.1 g, and 0.8 g per 100 g of honey, respectively. These values align with the criteria for unadulterated honey specified by the EU Honey Directive 2001/110/EC (13). Overall, the presented method shows the outstanding separation of sugars using the new stationary phase and sensitive detection of the sugars in honey using pulsed amperometric detection.

Profiling of Fructooligosaccharides (FOS)

Fructooligosaccharides are polymers of fructose that are widely distributed in nature as plant storage carbohydrates. Fructooligosaccharides are a form of dietary fiber, and they can serve as an energy source for the gut microbiota (14). Many plant species, including wheat, onion, bananas, garlic, and chicory, contain inulin-type fructooligosaccharides (ITF). ITF exists as a blend of polymers with degrees of polymerization (DP) ranging from 2 to 60 subunits (14). Some ITFs in plants have a glucose unit at the reducing end, while others do not include a glucose residue at all. Therefore, all ITFs can be described by the generic chemical structure GFn (with G as optional glucose, F as fructose, and n indicating the number of fructose moieties).

The column described earlier was employed to obtain a fructooligosaccharide profile of inulin from chicory. The chromatogram in Figure 4 illustrates the profile of fructooligosaccharides from the sample. Based on the chromatogram, this inulin predominantly consists of GFn-type fructooligosaccharides ranging from DP 3 (GF2) to approximately DP 61 (GF60). Additionally, the sample contains a substantial amount of free sugars (glucose, fructose, and sucrose). Although the GFn- and Fn-type fructooligosaccharides are baseline separated up to GF7 and F7, they exhibit slightly different retention behavior. Consequently, they unavoidably overlap, leading to the coelution of components starting from GF8 and F8. The GF12 and F12 series were observed to be baseline separated again until approximately GF22 and F22. Beyond that point, F23 onwards was not observed, whereas the GFn type was still detected up to approximately GF60. It is important to note that the chain-length distribution should only be interpreted qualitatively, because the response factor decreases with increasing chain length and therefore the chromatogram does not represent the exact quantitative distribution. Overall, the presented method demonstrated excellent separation of inulin-type fructooligosaccharides.

FIGURE 4: Chromatogram of 10-μL injections of 200 ppm inulin from chicory. Monosaccharides (glucose and fructose) are labeled with an asterisk.

Conclusion

A new anion-exchange stationary phase based on 5-µm particles was developed; it enables fast, high-resolution separation of carbohydrates at moderate column back pressures. A 200 × 4 mm i.d. analytical column based on this stationary phase demonstrated superior performance, with reduced plate heights for nearly all sugars close to the ideal value of 2.0. The column showed great stability in retention times, peak efficiencies, and tailing factors over a span of more than 2600 injections.
The versatility of this new stationary phase was evident in its ability to achieve high-resolution separation of carbohydrates from mono-, di-, and trisaccharides up to oligo- and polysaccharides. In conclusion, the newly introduced column provides high-resolution separation of all classes of carbohydrates using HPAEC-PAD and will help to achieve accurate identification and quantification of carbohydrates in food products, including the detection of adulteration and fraud.

References

(1) Herrero, M.; Cifuentes, A.; Ibáñez, E.; Castillo, M. D. del. Advanced Analysis of Carbohydrates in Foods. In Methods of Analysis of Food Components and Additives; CRC Press, 2012.
(2) Lee, Y. C. High-Performance Anion-Exchange Chromatography for Carbohydrate Analysis. Anal. Biochem. 1990, 189, 151–162. DOI: 10.1016/0003-2697(90)90099-u
(3) Lee, Y. C. Carbohydrate Analyses with High-Performance Anion-Exchange Chromatography. J. Chromatogr. A 1996, 720 (1–2), 137–149. DOI: 10.1016/0021-9673(95)00222-7
(4) Frahn, J. L.; Mills, J. A. Paper Ionophoresis of Carbohydrates. I. Procedures and Results for Four Electrolytes. Aust. J. Chem. 1959, 12 (1), 65–89. DOI: 10.1071/ch9590065
(5) Rocklin, R. D.; Pohl, C. A. Determination of Carbohydrates by Anion Exchange Chromatography with Pulsed Amperometric Detection. J. Liq. Chromatogr. 1983, 6 (9), 1577–1590. DOI: 10.1080/01483918308064876
(6) Rohrer, J. Carbohydrate Analysis by High-Performance Anion-Exchange Chromatography with Pulsed Amperometric Detection (HPAE-PAD). Thermo Fisher Scientific application note 70671.
(7) Corradini, C.; Corradini, D.; Huber, C. G.; Bonn, G. K. Synthesis of a Polymeric-Based Stationary Phase for Carbohydrate Separation by High-pH Anion-Exchange Chromatography with Pulsed Amperometric Detection. J. Chromatogr. A 1994, 685, 213–220. DOI: 10.1016/0021-9673(94)00665-2
(8) Wouters, S.; Dores-Sousa, J. L.; Liu, Y.; Pohl, C. A.; Eeltink, S. Ultra-High-Pressure Ion Chromatography with Suppressed Conductivity Detection at 70 MPa Using Columns Packed with 2.5 µm Anion-Exchange Particles. Anal. Chem. 2019, 91 (21), 13842–13830. DOI: 10.1021/acs.analchem.9b03283
(9) Louw, H. R.; Brouwer, H.-J.; Reinhoud, N. J. Electrochemical Flow Cell. US9310330B2.
(10) Rocklin, R. D.; Clarke, A. P.; Weitzhandler, M. Improved Long-Term Reproducibility for Pulsed Amperometric Detection of Carbohydrates via a New Quadruple-Potential Waveform. Anal. Chem. 1998, 70 (8), 1496–1501. DOI: 10.1021/ac970906w
(11) Samarghandian, S.; Farkhondeh, T.; Samini, F. Honey and Health: A Review of Recent Clinical Research. Pharmacogn. Res. 2017, 9 (2), 121–127. DOI: 10.4103/0974-8490.204647
(12) Ždiniaková, T.; Loerchner, C.; De Rudder, O.; et al. EU Coordinated Action to Deter Certain Fraudulent Practices in the Honey Sector, 2023 (accessed 2023-11-20).
(13) Council of the European Union. Council Directive 2001/110/EC of 20 December 2001 Relating to Honey, 2001 (accessed 2023-11-20).
(14) Niness, K. R. Inulin and Oligofructose: What Are They? J. Nutr. 1999, 129 (7), 1402S–1406S. DOI: 10.1093/jn/129.7.1402S

ABOUT THE COLUMN AUTHOR

David S. Bell is a Research Fellow in Research and Development at Restek. He also serves on the Editorial Advisory Board for LCGC and is the Editor for “Column Watch.” Over the past 20 years, he has worked directly in the chromatography industry, focusing his efforts on the design, development, and application of chromatographic stationary phases to advance gas chromatography, liquid chromatography, and related hyphenated techniques.
His main objectives have been to create and promote novel separation technologies and to conduct research on molecular interactions that contribute to retention and selectivity in an array of chromatographic processes. His research results have been presented in symposia worldwide, and have resulted in numerous peer-reviewed journal and trade magazine articles. [email protected]

ABOUT THE AUTHORS

Christian Marvelous has a Ph.D. from Leiden University. He joined Antec Scientific as a research scientist in 2022. His work focuses on carbohydrate analysis using HPAEC-PAD. [email protected]

Hendrik-Jan Brouwer obtained his Ph.D. from the University of Groningen in the field of polymer chemistry and joined Antec Scientific in 2000. He is currently leading the R&D team at Antec Scientific.

Daniel Vetter, Martin Eysberg, Nico Reinhoud, and Jean-Pierre Chervet are members of Antec Scientific.
123418
Published Time: Mon, 23 Jan 2023 08:28:21 GMT
arXiv:2101.11947v1 [math.CO] 28 Jan 2021

Subspace coverings with multiplicities

Anurag Bishnoi*, Simona Boyadzhiyska†‡, Shagnik Das†§, Tamás Mészáros†¶

January 29, 2021

Abstract

We study the problem of determining the minimum number $f(n,k,d)$ of affine subspaces of codimension d that are required to cover all points of $\mathbb{F}_2^n \setminus \{\vec{0}\}$ at least k times while covering the origin at most $k-1$ times. The case k = 1 is a classic result of Jamison, which was independently obtained by Brouwer and Schrijver for d = 1. The value of f(n, 1, 1) also follows from a well-known theorem of Alon and Füredi about coverings of finite grids in affine spaces over arbitrary fields. Here we determine the value of this function exactly in various ranges of the parameters. In particular, we prove that for $k \geq 2^{n-d-1}$ we have $f(n,k,d) = 2^d k - \lfloor k/2^{n-d} \rfloor$, while for $n > 2^{2^d k - k - d + 1}$ we have $f(n,k,d) = n + 2^d k - d - 2$, and also study the transition between these two ranges. While previous work in this direction has primarily employed the polynomial method, we prove our results through more direct combinatorial and probabilistic arguments, and also exploit a connection to coding theory.

1 Introduction

How many affine hyperplanes does it take to cover the vertices of the n-dimensional Boolean hypercube, $\{0,1\}^n$? This simple question has an equally straightforward answer — one can cover all the vertices with a parallel pair of hyperplanes, while it is easy to see that a single plane can cover at most half the vertices, and so two planes are indeed necessary. However, the waters are quickly muddied with a minor twist to the problem. Indeed, if one is instead asked to cover all the vertices except the origin, the parallel hyperplane construction is no longer valid. Given a moment’s thought, one might come up with the much larger family of n hyperplanes given by $\{\vec{x} : x_i = 1\}$ for $i \in [n]$. This fulfils the task and, surprisingly, turns out to be optimal, although this is far from obvious. This problem has led to rich veins of research in both finite geometry and extremal combinatorics, and in what follows we survey its history before introducing our new results.

1.1 An origin story

When we work over the finite field $\mathbb{F}_2$, this problem is equivalent to the well-known blocking set problem from finite geometry, and it was in this guise that it was first studied. A blocking set in $\mathbb{F}_2^n$ is a set of points that meets every hyperplane, and the objective is to find a blocking set of minimum size. By translating, we may assume that our blocking set contains the origin $\vec{0}$, and so the problem reduces to finding a collection of points that meets all hyperplanes avoiding the origin. Applying duality, now, we return to our original problem of covering the nonzero points of $\mathbb{F}_2^n$ with affine hyperplanes.

* Delft Institute of Applied Mathematics, Technische Universiteit Delft, 2628 CD Delft, Netherlands. E-mail: [email protected].
† Institut für Mathematik, Freie Universität Berlin, 14195 Berlin, Germany.
‡ E-mail: [email protected]. Research supported by the Deutsche Forschungsgemeinschaft (DFG) Graduiertenkolleg “Facets of Complexity” (GRK 2434).
§ E-mail: [email protected]. Research supported by the Deutsche Forschungsgemeinschaft (DFG) project 415310276.
¶ E-mail: [email protected]. Research supported by the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy – The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
From this perspective, there is no reason to restrict our attention to the binary field $\mathbb{F}_2$, and we can generalise the problem to ask how many hyperplanes are needed to cover the nonzero points of $\mathbb{F}_q^n$. Going even further, one may replace the hyperplanes with affine subspaces of codimension d. In this generality, the problem was answered in the late 1970s by Jamison, who proved that the minimum number of affine subspaces of codimension d that cover all nonzero points in $\mathbb{F}_q^n$ while avoiding the origin is $q^d - 1 + (n-d)(q-1)$. In particular, when q = 2 and d = 1, this lower bound is equal to n, showing that the earlier construction with n planes is optimal. A simpler proof of the case d = 1 was independently provided by Brouwer and Schrijver.

While the finite geometry motivation naturally leads one to work over finite fields, one can also study the problem over infinite fields $\mathbb{F}$. Of course, one would need infinitely many hyperplanes to cover all nonzero points of $\mathbb{F}^n$, which is why we instead ask how many hyperplanes are needed to cover the nonzero points of the hypercube $\{0,1\}^n \subseteq \mathbb{F}^n$. This problem was raised in the early 1990s by Komjáth, who, in order to prove some results in infinite Ramsey theory, showed that this quantity must grow with n. Shortly afterwards, a celebrated result of Alon and Füredi established a tight bound in the more general setting of covering all but one point of a finite grid. They showed that, for any collection of finite subsets $S_1, S_2, \ldots, S_n$ of some arbitrary field $\mathbb{F}$, the minimum number of hyperplanes needed to cover all but one point of $S_1 \times S_2 \times \cdots \times S_n$ is $\sum_i (|S_i| - 1)$. If we take $S_i = \{0,1\}$ for all i, this once again shows that one needs n hyperplanes to cover the nonzero points of the hypercube.

1.2 The polynomial method

Despite these motivating applications to finite geometry and Ramsey theory, the primary reason this problem has attracted so much attention lies in the proof methods used. These hyperplane covers have driven the development of the polynomial method — indeed, in light of his early results, this is sometimes referred to as the Jamison method in finite geometry. To see how polynomials come into play, suppose we have a set of hyperplanes $\{H_i : i \in [m]\}$ in $\mathbb{F}^n$, with the plane $H_i$ defined by $H_i = \{\vec{x} : \vec{x} \cdot \vec{a}_i = c_i\}$ for some normal vector $\vec{a}_i \in \mathbb{F}^n$ and some constant $c_i \in \mathbb{F}$. We can then define the degree-m polynomial $f(\vec{x}) = \prod_{i \in [m]} (\vec{x} \cdot \vec{a}_i - c_i)$, observing that $f(\vec{x}) = 0$ if and only if $\vec{x}$ is covered by one of the hyperplanes $H_i$. Thus, lower bounds on the degrees of polynomials that vanish except at the origin translate to lower bounds on the number of hyperplanes needed to cover all nonzero points. This approach has proven very robust, and lends itself to a number of generalisations. For instance, Kós, Mészáros and Rónyai and Bishnoi, Clark, Potukuchi and Schmitt considered variations over rings, while Blokhuis, Brouwer and Szőnyi studied the problem for quadratic surfaces and Hermitian varieties in projective and affine spaces over $\mathbb{F}_q$.

1.3 Covering with multiplicity

In this paper, we shall remain in the original setting, but instead extend the problem to higher multiplicities. That is, we shall seek the minimum number of hyperplanes needed in $\mathbb{F}^n$ to cover the nonzero points at least k times, while the origin is covered fewer times.
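To make this covering condition concrete, the short Python sketch below (our own illustration, not code from the paper) brute-forces it for hyperplanes over $\mathbb{F}_2$: every nonzero point must be hit at least k times while the origin is hit at most $k-1$ times. It verifies the classic n-plane cover $\{x_i = 1\}$ for k = 1, and that doubling that family gives a valid, though not minimum, cover for k = 2.

```python
# Brute-force check, over F_2, that a multiset of affine hyperplanes covers every
# nonzero point at least k times while covering the origin at most k-1 times.
# Helper names are illustrative; this is not code from the paper.
from itertools import product

def covers(point, normal, c):
    """Point lies on the hyperplane {x : <x, normal> = c} over F_2."""
    return sum(p * a for p, a in zip(point, normal)) % 2 == c

def is_k_cover(hyperplanes, n, k):
    """hyperplanes: list of (normal_vector, constant) pairs, possibly with repeats."""
    for point in product((0, 1), repeat=n):
        count = sum(covers(point, a, c) for a, c in hyperplanes)
        if all(x == 0 for x in point):
            if count >= k:          # origin may be hit at most k-1 times
                return False
        elif count < k:             # every nonzero point must be hit at least k times
            return False
    return True

n = 4
# The classic cover {x_i = 1}, i = 1..n, is a valid cover for k = 1 ...
classic = [(tuple(1 if j == i else 0 for j in range(n)), 1) for i in range(n)]
print(is_k_cover(classic, n, 1))      # True
# ... and taking each of those hyperplanes twice gives a valid (not minimum) cover for k = 2.
print(is_k_cover(classic * 2, n, 2))  # True
```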
Previous work in this direction has imposed the stricter condition of avoiding the origin altogether; Bruen considered this problem over finite fields, while Ball and Serra and Kós and Rónyai worked with finite grids over arbitrary fields, with some further generalisations recently provided by Geil and Martínez-Peñas. In all of these papers, the polynomial method described above was strengthened to obtain lower bounds for this problem with higher multiplicities. However, these lower bounds are most often not tight; Zanella discusses when Bruen's bound is sharp, with some improvements provided by Ball.

Significant progress in this line of research was made recently when Clifton and Huang studied the special case of covering all nonzero points of $\{0,1\}^n \subseteq \mathbb{R}^n$ at least $k$ times, while leaving the origin uncovered. Observe that one can remove $k-1$ hyperplanes arbitrarily from such a cover, and the remainder will still cover each nonzero point at least once. Thus, by the Alon–Füredi Theorem, we must be left with at least $n$ planes, giving a lower bound of $n+k-1$. While it is not hard to see that this is tight for $k = 2$, Clifton and Huang used Ball and Serra's Punctured Combinatorial Nullstellensatz to improve the lower bound for larger $k$. They showed that for $k = 3$ and $n \ge 2$, the correct answer is $n+3$, while for $k \ge 4$ and $n \ge 3$, the answer lies between $n+k+1$ and $n+\binom{k}{2}$, conjecturing the upper bound to be correct when $n$ is large with respect to $k$. However, they showed that this was far from the case when $n$ is fixed and $k$ is large; in this range, the answer is $(c_n + o(1))k$, where $c_n = 1 + \frac{1}{2} + \dots + \frac{1}{n}$ is the $n$th harmonic number.

A major breakthrough was then made by Sauermann and Wigderson, who skipped the geometric motivation and resolved the polynomial problem directly. More precisely, they proved the following theorem.

Theorem 1.1. Let $k \ge 2$ and $n \ge 2k-3$, and let $P \in \mathbb{R}[x_1,\dots,x_n]$ be a polynomial having zeroes of multiplicity at least $k$ at all points in $\{0,1\}^n \setminus \{\vec{0}\}$, and such that $P$ does not have a zero of multiplicity at least $k-1$ at $\vec{0}$. Then $P$ must have degree at least $n+2k-3$. Furthermore, for every $\ell \in \{0,1,\dots,k-2\}$, there exists a polynomial $P$ with degree exactly $n+2k-3$ having zeroes of multiplicity at least $k$ at all points in $\{0,1\}^n \setminus \{\vec{0}\}$, and such that $P$ has a zero of multiplicity exactly $\ell$ at $\vec{0}$.

As an immediate corollary, this improves the lower bound in the Clifton–Huang result from $n+k+1$ to $n+2k-3$. However, Theorem 1.1 establishes that $n+2k-3$ is also an upper bound for the polynomial problem, whereas Clifton and Huang conjecture that the answer for their problem should be $n+\binom{k}{2}$. This suggests that the polynomial method alone is not sufficient to resolve the hyperplane covering problem.

Even though Theorem 1.1 is stated for polynomials defined over $\mathbb{R}$, Sauermann and Wigderson note that the proof works over any field of characteristic zero. However, the result need not hold over finite fields. In particular, they show the existence of a polynomial $P_4$ over $\mathbb{F}_2$ of degree $n+4$ with zeroes of multiplicity four at all nonzero points in $\mathbb{F}_2^n$ and with $P_4(\vec{0}) \ne 0$. More generally, for every $k \ge 4$, $P_k(\vec{x}) = x_1^{k-4}(x_1-1)^{k-4}P_4(\vec{x})$ is a binary polynomial of degree only $n+2k-4$ with zeroes of multiplicity $k$ at all nonzero points and of multiplicity $k-4$ at the origin. The correct behaviour of the problem over finite fields is left as an open problem. Note also that Theorem 1.1 allows the origin to be covered up to $k-2$ times.
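As a quick illustration of the $k = 2$ case mentioned above, the following sketch (ours) checks one natural choice of $n+1$ hyperplanes over the reals, namely the planes $\{x_i = 1\}$ together with $\{x_1 + \dots + x_n = 1\}$: they cover every nonzero vertex of $\{0,1\}^n$ at least twice while missing the origin, matching the lower bound $n + k - 1$. The particular extra hyperplane is our choice; the paper does not specify one.

```python
# Check (ours) of a 2-cover of {0,1}^n \ {0} over the reals by n + 1 hyperplanes.
from itertools import product

n = 5
planes = [([1 if j == i else 0 for j in range(n)], 1) for i in range(n)]
planes.append(([1] * n, 1))              # the extra hyperplane x_1 + ... + x_n = 1

def mult(x):
    """Number of the chosen hyperplanes passing through the vertex x."""
    return sum(1 for (a, c) in planes
               if sum(ai * xi for ai, xi in zip(a, x)) == c)

assert mult((0,) * n) == 0                                 # origin left uncovered
assert all(mult(x) >= 2 for x in product((0, 1), repeat=n) if any(x))
print(f"{len(planes)} = n + 1 hyperplanes give a 2-cover over the reals")
```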
Sauermann and Wigderson also considered the case where the origin must be covered with multiplicity exactly $k-1$, showing that the minimum degree then increases to $n+2k-2$. In contrast to Theorem 1.1, the proof of this result is valid over all fields.

1.4 Our results

In this paper, we study the problem of covering with multiplicity in $\mathbb{F}_2^n$. We are motivated not only by the body of research described above, but also by the fact, as we shall show in Proposition 3.3, that when one forbids the origin from being covered, this problem is equivalent to finding linear binary codes of large minimum distance. As this classic problem from coding theory has a long and storied history of its own, and is likely to be very difficult, we shall instead work in the setting where we require all nonzero points in $\mathbb{F}_2^n$ to be covered at least $k$ times while the origin can be covered at most $k-1$ times. In light of the previous results, we shall abstain from employing the polynomial method, and instead attack the problem more directly with combinatorial techniques. As an added bonus, our arguments readily generalise to covering points with codimension-$d$ affine subspaces, rather than just hyperplanes, thereby extending Jamison's original results in the case $q = 2$.

To be able to discuss our results more concisely, we first introduce some notation that we will use throughout the paper. Given integers $k \ge 1$ and $n \ge d \ge 1$, we say a multiset $\mathcal{H}$ of $(n-d)$-dimensional affine subspaces in $\mathbb{F}_2^n$ is a $(k,d)$-cover if every nonzero point of $\mathbb{F}_2^n$ is covered at least $k$ times, while $\vec{0}$ is covered at most $k-1$ times. We next introduce an extremal function $f(n,k,d)$, which is defined to be the minimum possible size of a $(k,d)$-cover in $\mathbb{F}_2^n$.

For instance, when we take $k = 1$, we obtain the original covering problem, and from the work of Jamison we know $f(n,1,d) = n + 2^d - d - 1$. At another extreme, if we take $d = n$, then our affine subspaces are simply individual points, each of which must be covered $k$ times, and hence $f(n,k,n) = k(2^n - 1)$. We study this function for intermediate values of the parameters, determining it precisely when either $k$ is large with respect to $n$ and $d$, or $n$ is large with respect to $k$ and $d$, and derive asymptotic results otherwise.

Theorem 1.2. Let $k \ge 1$ and $n \ge d \ge 1$. Then:
(a) If $k \ge 2^{n-d-1}$, then $f(n,k,d) = 2^d k - \lfloor k/2^{n-d} \rfloor$.
(b) If $n > 2^{2^d k - d - k + 1}$, then $f(n,k,d) = n + 2^d k - d - 2$.
(c) If $k \ge 2$ and $n \ge \lfloor \log_2 k \rfloor + d + 1$, then $n + 2^d k - d - \log_2(2k) \le f(n,k,d) \le n + 2^d k - d - 2$.

There are a few remarks worth making at this stage. First, observe that, just as in the Clifton–Huang setting, the extremal function $f(n,k,d)$ exhibits different behaviour when $n$ is fixed and $k$ is large as compared to when $k$ is fixed and $n$ is large. Second, and perhaps most significantly, Theorem 1.2 demonstrates the gap between the hyperplane covering problem and the polynomial degree problem: our result shows that, for any $k \ge 4$ and sufficiently large $n$, we have $f(n,k,1) = n + 2k - 3$, whereas the answer to the corresponding polynomial problem is at most $n + 2k - 4$, as explained after Theorem 1.1.
Our ideas allow us to establish an even stronger separation in the case k = 4 — while the polynomial P4 constructed by Sauermann and Wigderson, which has zeroes of multiplicity at least four at all nonzero points of Fn 2 while not vanishing at the origin, has degree only n + 4, we shall show in Corollary 3.4 that any hyperplane system with the corresponding covering properties must have size at least n+log ( 2 3 n). Third, we see that in the intermediate range, when both n and k grow moderately, the bounds in (c) determine f (n, k, d ) up to an additive error of log 2(2 k), which is a lower-order term. Thus, f (n, k, d ) grows asymptotically like n + 2 dk. Last of all, if one substitutes k = 2 n−d−1 − 1, the lower bound from (c) is larger than the value in (a). This shows that k ≥ 2n−d−1 is indeed the correct range for which the result in (a) is valid. In contrast, we believe the bound on n in (b) is far from optimal, and discuss this in greater depth in Section 4. The remainder of this paper is devoted to the proof of Theorem 1.2, and is organised as follows. In Section 2 we prove part (a), determining the extremal function for large multiplicities. We prove part (b) in Section 3, handling the case when the dimension of the ambient space grows quickly. A key step in the proof is showing the intuitive, yet surprisingly not immediate, fact that f (n, k, d ) is strictly increasing in n, as a result of which we shall also be able to deduce the bounds in (c). Section 4 is devoted to the study of the gradual transition between parts (a) and (b), where we exhibit some constructions that show f (n, k, d ) takes values strictly between those of parts (a) and (b). Finally, we end by presenting some concluding remarks and open problems in Section 5. 2 Covering with large multiplicity In this section we prove Theorem 1.2(a), handling the case of large multiplicities. We start by intro-ducing some definitions and notation that we will use in the proof. To start with, it will be convenient to have some notation for affine hyperplanes. Given a nonzero vector ~u ∈ Fn 2 , let H~u denote the hyperplane {~x : ~x · ~u = 1 }.Next, it will sometimes be helpful to specify how many times the origin is covered. Hence, given integers n ≥ d ≥ 1 and k > s ≥ 0, we call a ( k, d )-cover in Fn 2 a ( k, d ; s)-cover if it covers the origin exactly s times. Let us write g(n, k, d ; s) for the minimum possible size of a ( k, d ; s)-cover and call a cover optimal if it has this minimum size. Clearly, we have f (n, k, d ) = min 0≤s<k g(n, k, d ; s), so any knowledge about this more refined function directly translates to our main focus of interest. 2.1 The lower bound To start with, we prove a general lower bound, valid for all choices of parameters, that follows from a simple double-counting argument. This establishes the lower bound of Theorem 1.2(a). 4Lemma 2.1. Let n, k, d, s be integers such that n ≥ d ≥ 1 and k > s ≥ 0. Then g(n, k, d ; s) ≥ 2dk − ⌊ k − s 2n−d ⌋ . In particular, f (n, k, d ) ≥ 2dk − ⌊ k 2n−d ⌋.Proof. Let H be an optimal ( k, d ; s)-cover of Fn 2 , so that we have g(n, k, d ; s) = |H| . We double-count the pairs ( ~x, S ) with ~x ∈ Fn 2 , S ∈ H , and ~x ∈ S. On the one hand, every affine subspace S ∈ H contains 2 n−d points, and so there are 2 n−d|H| such pairs. On the other hand, since every nonzero point is covered at least k times and the origin is covered s times, there are at least (2 n − 1) k + s such pairs. 
Thus (2 n − 1) k + s ≤ 2n−d|H| , and the claimed lower bound follows from solving for |H| and observing that g(n, k, d ; s) is an integer. The bound on f (n, k, d ) is obtained by noticing that our lower bound on g(n, k, d ; s) is increasing in s, and is therefore minimised when s = 0. 2.2 The upper bound construction To prove the upper bound of Theorem 1.2(a), we must construct small ( k, d )-covers. As a first step, we introduce a recursive method for ( k, d ; s)-covers that allows us to reduce to the d = 1 case. Lemma 2.2. For integers n ≥ d ≥ 2 and k > s ≥ 0 we have g(n, k, d ; s) ≤ g(n − d + 1 , k, 1; s) + 2 k(2 d−1 − 1) , and, therefore, f (n, k, d ) ≤ f (n − d + 1 , k, 1) + 2 k(2 d−1 − 1) . Proof. We first deduce the recursive bound on g(n, k, d ; s). Let S0 ⊂ Fn 2 be an arbitrary ( n − d + 1)-dimensional (vector) subspace, and let S1, . . . , S 2d−1−1 be its affine translates, that, together with S0,partition Fn 2 . For every 1 ≤ i ≤ 2d−1 − 1, partition Si ∼= Fn−d+1 2 further into two subspaces, thereby obtaining a total of 2(2 d−1 − 1) affine subspaces of dimension n − d. We start by taking k copies of each of these affine subspaces. This gives us a multiset of 2 k(2 d−1 − 1) subspaces, which cover every point outside S0 exactly k times and leave the points in S0 completely uncovered. It thus remains to cover the points within S0 appropriately. Since ( n − d)-dimensional subspaces have relative codimension 1 in S0, this reduces to finding a ( k, 1; s)-cover within S0 ∼= Fn−d+1 2 . By definition, we can find such a cover consisting of g(n − d + 1 , k, 1; s) subspaces. Adding these to our previous multiset gives a ( k, d ; s)-cover of Fn 2 of size g(n − d + 1 , k, 1; s) + 2 k(2 d−1 − 1), as required. To finish, since f (n, k, d ) = min s g(n, k, d ; s), and the recursive bound holds for each s, it naturally carries over to the function f (n, k, d ), giving f (n, k, d ) ≤ f (n − d + 1 , k, 1) + 2 k(2 d−1 − 1). Armed with this preparation, we can now resolve the problem for large multiplicities. Proof of Theorem 1.2(a). The requisite lower bound, of course, is given by Lemma 2.1. For the upper bound, we start by reducing to the case d = 1. Indeed, suppose we already know the bound for d = 1; that is, f (n, k, 1) ≤ 2k − ⌊ k 2n−1 ⌋ for all k ≥ 2n−2. Now, given some n ≥ d ≥ 2and k ≥ 2n−d−1, by Lemma 2.2 we have f (n, k, d ) ≤ f (n − d + 1 , k, 1) + 2 k(2 d−1 − 1) ≤ 2k − ⌊ k 2n−d+1 −1 ⌋ 2 k(2 d−1 − 1) = 2 dk − ⌊ k 2n−d ⌋ , as required. Hence, it suffices to prove the bound in the hyperplane case. We begin with the lowest multiplicity covered by part (a), namely k = 2 n−2. Consider the family H0 = {H~u : ~u ∈ Fn 2 , u n = 1 }, where we recall that H~u = {~x : ~x · ~u = 1 }. Note that we then have |H 0| = 2 n−1 = 2 k = 2 k − ⌊ k 2n−1 ⌋, and none of these hyperplanes covers the origin. Given nonzero vectors ~x = ( ~x′, x ) and ~u = ( ~u′, 1) with ~x′, ~ u′ ∈ Fn−12 and x ∈ F2, we have ~x · ~u = 1 if and only if ~x′ · ~u′ = 1 − x. If ~x′ 6 = ~0, precisely half of the choices for ~u′ satisfy this equation; if ~x′ = ~0 (and thus necessarily x = 1), the equation is 5satisfied by all choices of ~u′. Thus each nonzero point is covered at least 2 n−2 times, and hence H0 is a (2 n−2, 1)-cover of the desired size. To extend the above construction to the range 2 n−2 ≤ k < 2n−1, one can simply add an arbitrary choice of k − 2n−2 pairs of parallel hyperplanes. 
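To see the construction from the proof above in action, here is a small hedged check (ours) of the boundary case $d = 1$, $k = 2^{n-2}$ for $n = 4$: the family $\mathcal{H}_0 = \{H_{\vec u} : u_n = 1\}$ has size $2k$, misses the origin, and covers every nonzero point at least $k$ times. This is only an illustration of one small case, not part of the proof.

```python
# Sanity check (ours) of the d = 1 construction in the proof of Theorem 1.2(a).
from itertools import product

def dot(u, x):
    return sum(a * b for a, b in zip(u, x)) % 2

n = 4
k = 2 ** (n - 2)                        # the lowest multiplicity covered by part (a)
H0 = [(u, 1) for u in product((0, 1), repeat=n) if u[-1] == 1]
assert len(H0) == 2 ** (n - 1) == 2 * k  # equals 2k - floor(k / 2^(n-1)) here

counts = {x: sum(1 for (u, c) in H0 if dot(u, x) == c)
          for x in product((0, 1), repeat=n)}
origin = (0,) * n
assert counts[origin] == 0
assert min(m for x, m in counts.items() if x != origin) >= k
print(f"H0 is a ({k},1)-cover of F_2^{n} of size {len(H0)}")
```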
The resulting family will have 2 n−1 + 2 (k − 2n−2) =2k = 2 k − ⌊ k 2n−1 ⌋ elements, every nonzero point is covered at least k times, and the origin is covered k − 2n−2 < k times. Finally, suppose k ≥ 2n−1. Then we can write k = a2n−1 + b for some a ≥ 1 and 0 ≤ b < 2n−1.We take H1 = {H~u : ~u ∈ Fn 2 \ { ~0}} to be the set of all affine hyperplanes avoiding the origin, of which there are 2 n − 1. Moreover, for each nonzero ~x, there are exactly 2 n−1 vectors ~u with ~x · ~u = 1, and so each such point is covered 2 n−1 times by the hyperplanes in H1.Now let H be the multiset of hyperplanes obtained by taking a copies of H1 and appending an arbitrary choice of b pairs of parallel planes. Each nonzero point is then covered a2n−1 + b = k times, while the origin is only covered b < 2n−1 ≤ k times, and so H is a ( k, 1)-cover. Thus, f (n, k, 1) ≤ |H| = a(2 n − 1) + 2 b = 2( a2n−1 + b) − a = 2 k − ⌊ k 2n−1 ⌋ , proving the upper bound. 3 Covering high-dimensional spaces In this section we turn our attention to the case when n is large with respect to k, with the aim of proving part (b) of Theorem 1.2. Furthermore, the results we prove along the way will allow us to establish the bounds in part (c) as well. 3.1 The upper bound construction In this range, in contrast to the large multiplicity setting, it is the upper bound that is straightforward. This bound follows from the following construction, which is valid for the full range of parameters. Lemma 3.1. Let n, k, d be positive integers such that n ≥ d ≥ 1 and k ≥ 2. Then f (n, k, d ) ≤ n + 2 dk − d − 2. Proof. We start by resolving the case d = 1 and k = 2, for which we consider the family of hyperplanes H = {H~ei : i ∈ [n]} ∪ { H~1}, where ~ei is the ith standard basis vector and ~1 is the all-one vector. To see that this is a (2 , 1)-cover of Fn 2 , note first that the planes all avoid the origin. Next, if we have a nonzero vector ~x, it is covered by the hyperplanes {H~ei : i ∈ [n]} as many times as it has nonzero entries. Thus, all vectors of Hamming weight at least two are covered twice or more. The only remaining vectors are those of weight one, which are covered once by {H~ei : i ∈ [n]}, but these are all covered for the second time by H~1. Hence H is indeed a (2 , 1)-cover, and is of the required size, namely n + 1. Now we can extend this construction to the case d = 1 and k ≥ 3 by simply adding k − 2 arbitrary pairs of parallel hyperplanes. The resulting family will be a ( k, 1; k −2)-cover (and hence, in particular, a ( k, 1)-cover) of size n + 2 k − 3, matching the claimed upper bound. That leaves us with the case d ≥ 2, which we can once again handle by appealing to Lemma 2.2. In conjunction with the above construction, we have f (n, k, d ) ≤ f (n − d + 1 , k, 1) + 2 k(2 d−1 − 1) ≤ n − d + 1 + 2 k − 3 + 2 k(2 d−1 − 1) , which simplifies to the required n + 2 dk − d − 2. 3.2 Recursion, again The upper bound in Lemma 3.1 is strictly increasing in n. Our next step is to show that this behaviour is necessary — that is, the higher the dimension, the harder the space is to cover. Although intuitive, this fact turned out to be less elementary than expected, and our proof makes use of the probabilistic method. 6Lemma 3.2. Let n, k, d, s be integers such that n ≥ 2, n ≥ d ≥ 1, and k > s ≥ 0. Then g(n, k, d ; s) ≥ g(n − 1, k, d ; s) + 1 . Proof. Let H be an optimal ( k, d ; s)-cover of Fn 2 . 
To prove the lower bound on its size, we shall construct from it a ( k, d ; s)-cover H′ of Fn−12 , which must comprise of at least g(n − 1, k, d ; s) subspaces. To obtain this cover of a lower-dimensional space, we restrict H to a random hyperplane H ⊂ Fn 2 that passes through the origin. Since H is a ( k, d ; s)-cover of all of Fn 2 , it certainly covers H ∼= Fn−12 as well. However, we require H′ to be a ( k, d ; s)-cover of H, which must be built of affine subspaces of codimension d relative to H — that is, subspaces of dimension one less than those in H. Fortunately, when intersecting the subspaces S ∈ H with a hyperplane, we can expect their dimension to decrease by one. The exceptional cases are when S is disjoint from H, or when S is contained in H. In the former case, S does not cover any points of H, and can therefore be discarded from H′. In the latter case, we can partition S into two subspaces S = S1 ∪ S2, where each Si is of codimension d relative to H, and replace S with S1 and S2 in H′. By making these changes, we obtain a family H′ of codimension-d subspaces of H. Moreover, these subspaces cover the points of H exactly as often as those of H do, and thus H′ is a ( k, d ; s)-cover of H.When building this cover, though, we need to control its size. Let X denote the set of subspaces S ∈ H that are disjoint from H, and let Y denote the set of subspaces S ∈ H that are contained in H. We then have |H ′| = |H| − | X| + |Y |. The objective, then, is to show that there is a choice of hyperplane H for which |X| > |Y |, in which case the cover H′ we build is relatively small. Recall that H was a random hyperplane in Fn 2 passing through the origin, which is to say it has a normal vector ~u chosen uniformly at random from Fn 2 \ { ~0}. To compute the expected sizes of X and Y , we consider the probability that a subspace S ∈ H is either disjoint from or contained in H.Let S ∈ H be arbitrary and suppose first that ~0 ∈ S. We immediately have P(S ∈ X) = 0, as in this case ~0 ∈ S ∩ H, so S and H cannot be disjoint. On the other hand, P(S ∈ Y ) = 2d−1 2n−1 , as we have S ⊆ H exactly when the normal vector ~u is a nonzero element of the d-dimensional orthogonal complement, S⊥, of S in Fn 2 .In the other case, when ~0 /∈ S, we can write S in the form T + ~v, where ~0 ∈ T ⊂ Fn 2 is an (n − d)-dimensional vector subspace and ~v ∈ Fn 2 \ T . Then S is disjoint from H if and only if ~u ∈ S⊥ and ~u · ~v = 1. Since ~v / ∈ T , these are independent conditions, and so we have P(S ∈ X) = 2d−1 2n−1 .Similarly, in order to have S ⊆ H, ~u must be a nonzero vector satisfying ~u ∈ S⊥ and ~u · ~v = 0, and so P(S ∈ Y ) = 2d−1−1 2n−1 .Now, using linearity of expectation, we have E [|X| − | Y |] = ∑ S∈H (P(S ∈ X) − P(S ∈ Y )) = ∑ S∈H :~0/∈S ( 2d−1 2n − 1 − 2d−1 − 1 2n − 1 ) ∑ S∈H :~0∈S ( 0 − 2d − 1 2n − 1 ) = |{ S ∈ H : ~0 /∈ S}| − (2d − 1) |{ S ∈ H : ~0 ∈ S}| 2n − 1 = |H| − 2ds 2n − 1 , where we used the fact that H is a ( k, d ; s)-cover, and thus |{ S ∈ H : ~0 ∈ S}| = s. We now apply the lower bound on |H| given by Lemma 2.1 to obtain E [|X| − | Y |] ≥ 2dk − ⌊ k−s 2n−d ⌋ − 2ds 2n − 1 = 2d(k − s) − ⌊ k−s 2n−d ⌋ 2n − 1 > 0. Therefore, there must be a hyperplane H for which |X| − | Y | ≥ 1. The corresponding cover of H thus has size at most |H| − 1 but, as a ( k, d ; s)-cover of an ( n − 1)-dimensional space, has size at least g(n − 1, k, d ; s). This gives |H| − 1 ≥ |H ′| ≥ g(n − 1, k, d ; s), whence the required bound, g(n, k, d ; s) = |H| ≥ g(n − 1, k, d ; s) + 1. 
While this inequality will be used in our proof of part (b) of Theorem 1.2, it also gives us what we need to prove the bounds in part (c). 7Proof of Theorem 1.2(c). Lemma 3.1 gives us the upper bound, f (n, k, d ) ≤ n + 2 dk − d − 2, which is in fact valid for all k ≥ 2 and n ≥ d ≥ 1. When n ≥ ⌊ log 2 k⌋ + d + 1, we can prove the lower bound, f (n, k, d ) ≥ n + 2 dk − d − log 2(2 k), by induction on n. For the base case, when n = ⌊log 2 k⌋ + d + 1, we appeal to Lemma 2.1, which gives f (n, k, d ) ≥ 2dk − ⌊ k 2n−d ⌋ = 2 dk = n + 2 dk − d − ⌊ log 2 k⌋ − 1 ≥ n + 2 dk − d − log 2(2 k). For the induction step we appeal to Lemma 3.2. First note that the lemma gives f (n, k, d ) = min s g(n, k, d ; s) ≥ min s (g(n − 1, k, d ; s) + 1) = f (n−1, k, d )+1. Thus, using the induction hypothesis, for all n > ⌊log 2 k⌋ + d + 1 we have f (n, k, d ) ≥ f (n − 1, k, d ) + 1 ≥ n − 1 + 2 dk − d − log 2(2 k) + 1 = n + 2 dk − d − log 2(2 k), completing the proof. 3.3 A coding theory connection In Lemma 3.2, we proved a recursive bound on g(n, k, d ; s) that is valid for all values of s, the number of times the origin is covered. In this subsection, we establish the promised connection to coding theory, which is the key to our proof. Indeed, as observed in Corollary 3.6 below, it allows us to restrict our attention to only two feasible values of s.We begin with ( k, 1; 0)-covers of Fn 2 , showing that, in this binary setting, hyperplane covers that avoid the origin are in direct correspondence with linear codes of large minimum distance. Proposition 3.3. A (k, 1; 0) -cover of Fn 2 of cardinality m is equivalent to an n-dimensional linear binary code of length m and minimum distance at least k.Proof. Let H = {H1, H 2, . . . , H m} be a ( k, 1; 0)-cover of Fn 2 . Since none of the hyperplanes cover the origin, for each i ∈ [m], Hi has to be described by the equation ~ui · ~x = 1 for some ~ui ∈ Fn 2 \ { ~0}. Let A be the m × n matrix whose rows are ~u1, ~ u2, . . . , ~ um. We claim that A is the generator matrix of a linear binary code of dimension n, length m and minimum distance at least k. Since each ~x ∈ Fn 2 \ { ~0} is covered by at least k of the planes, it follows that the vector A~ x has weight at least k, which in turn is equivalent to the vectors in the column space of A having minimum distance at least k. Indeed, any vector ~y in the column space can be expressed in the form A ~ w for some ~w ∈ Fn 2 . Thus, given two vectors ~y1, ~ y2 in the column space, their difference is of the form A( ~w1 − ~w2), where ~x = ~w1 − ~w2 is nonzero. Hence this difference has weight at least k; i.e., the two vectors ~y1 and ~y2 have distance at least k.Conversely, given a linear binary code of dimension n, length m and minimum distance at least k,let ~u1, ~ u2, . . . , ~ um be the rows of the generator matrix. By the same reasoning as above, the hyperplanes Hi, i ∈ [m], defined by the equation ~ui · ~x = 1, form a ( k, 1; 0)-cover of Fn 2 . Thus, the problem of finding a small ( k, 1; 0)-cover of Fn 2 corresponds to finding an n-dimensional linear code of minimum distance at least k and small length. This is a central problem in coding theory and, as such, has been extensively studied. We can therefore leverage known bounds to bound the function g(n, k, 1; 0). Corollary 3.4. For all k ≥ 2 and n ≥ 1, g(n, k, 1; 0) ≥ n + ⌊ k − 1 2 ⌋ log ( 2n k − 1 ) . Proof. 
Let H be an optimal ( k, 1, 0)-cover and let C ⊆ Fn 2 be the equivalent n-dimensional linear binary code of length m = |H| and minimum distance at least k, as described in Proposition 3.3. We can now appeal to the Hamming bound: since the code has minimum distance k, the balls of radius t = ⌊ k−1 2 ⌋ around the 2 n points of C must be pairwise disjoint. As each ball has size ∑ti=0 (mi ), and the ambient space has size 2 m, we get 2n ≤ 2m ∑ti=0 (mi ) . 8We bound the denominator from below by t ∑ i=0 (mi ) ≥ (mt ) ≥ ( m t )t ≥ ( n t )t = 2 t log n t , where the last inequality is valid provided m ≥ n, as it must be. Thus we conclude g(n, k, 1; 0) = |H| = m ≥ n + t log n t ≥ n + ⌊ k − 1 2 ⌋ log ( 2n k − 1 ) . Remark 3.5. Although it may seem that some of our bounds might be wasteful, one can deduce upper bounds from the Gilbert-Varshamov bound, which is obtained by considering a random linear code. In particular, if n is large with respect to k, one finds that g(n, k, 1; 0) ≤ n + ( k − 1) log(2 n). Narrowing the gap between these upper and lower bounds remains an active area of research in coding theory. The above lower bound can be used to show that if n is large with respect to k and d then every optimal ( k, d )-cover has to cover the origin many times. This corollary is critical to our proof of the upper bound. Corollary 3.6. If n > 22dk−k−d+1 then any optimal (k, d )-cover of Fn 2 covers the origin at least k − 2 times. Proof. Let S1, . . . , S m be an optimal ( k, d )-cover, and, if necessary, relabel the subspaces so that S1, . . . , S s are the affine subspaces covering the origin. Suppose for a contradiction that s ≤ k − 3, and observe that if we delete the first k − 3 subspaces, each nonzero point must still be covered at least thrice, while the origin is left uncovered. That is, Sk−2, S k−1, . . . , S m forms a (3 , d ; 0)-cover of Fn 2 .For each k − 2 ≤ j ≤ m, we can then extend Sj to an arbitrary hyperplane Hj that contains Sj and avoids the origin. Then {Hk−2, H k−1, . . . , H m} is a (3 , 1; 0)-cover, and hence m − k + 3 ≥ g(n, 3, 1; 0). By Corollary 3.4, this, together with the assumption n > 22dk−k−d+1 , implies f (n, k, d ) = m ≥ g(n, 3, 1; 0) + k − 3 ≥ n + log n + k − 3 > n + 2 dk − k − d + 1 + k − 3 = n + 2 dk − d − 2, which contradicts the upper bound from Lemma 3.1. Remark 3.7. Observe that Corollary 3.6 in fact gives us some stability for large dimensions. If n = 2 2dk−k−d+ω(1) , then the above calculation shows that any (k, d )-cover that covers the origin at most k − 3 times has size at least n + 2 dk + ω(1) . Thus, when n = 2 2dk−k−d+ω(1) , any (k, d )-cover that is even close to optimal must cover the origin at least k − 2 times. 3.4 The lower bound By Corollary 3.6, when trying to bound f (n, k, d ) = min s g(n, k, d ; s) for large n, we can restrict our attention to s ∈ { k − 2, k − 1}. First we deal with the latter case. Lemma 3.8. Let n, k, d be positive integers such that n ≥ d ≥ 1. Then g(n, k, d ; k − 1) = n + 2 dk − d − 1. Proof. To prove the statement, we will show that, for all positive integers n, k, d with n ≥ d ≥ 1, we have g(n + 1 , k, d ; k − 1) = g(n, k, d ; k − 1) + 1. Combined with the simple observation that g(d, k, d ; k − 1) = 2 dk − 1 for all k ≥ 1, since when d = n we are covering with individual points, this fact will indeed imply the desired result. By Lemma 3.2 we know that g(n + 1 , k, d ; k − 1) ≥ g(n, k, d ; k − 1) + 1. For the other inequality, consider an optimal ( k, d ; k − 1)-cover H of Fn 2 . 
For every S ∈ H , let S′ = S × { 0, 1}, which is a codimension-d affine subspace of Fn+1 2 , and let S0 be any ( n+1 −d)-dimensional affine subspace of Fn+1 2 that contains the vector (0 , . . . , 0, 1) but avoids the origin. We claim that H′ = {S′ : S ∈ H} ∪ { S0} is a ( k, d ; k − 1)-cover of Fn+1 2 . Indeed, for all S ∈ H , a point of the form ( ~x, t ) is covered by S′ if and only if ~x is covered by S. Hence, the collection {S′ : S ∈ H} covers ~0 exactly k − 1 times and each point of the form ( ~x, t ) with ~x 6 = ~0 at least k times. Finally, the point ( ~0, 1) is covered k − 1 times by the {S′ : S ∈ H} and once by the subspace S0, so it is also covered the correct number of times. Hence H′ is indeed a ( k, d ; k − 1)-cover of of size |H| + 1, and so the second inequality follows. 9Remark 3.9. Recall that the special case of d = 1 , g(n, k, 1; k − 1) = n + 2 k − 2, also follows from [19, Theorem 1.5]. The proof of Theorem 1.2(b) is now straightforward. Proof of Theorem 1.2(b). The upper bound is given by Lemma 3.1. For the lower bound, first observe that for any valid choice of the parameters, we have g(n, k, d ; s + 1) ≤ g(n, k, d ; s) + 1, as adding any subspace containing the origin to a ( k, d ; s)-cover yields a ( k, d ; s + 1)-cover. Then, by Corollary 3.6 and Lemma 3.8, we obtain f (n, k, d ) = min {g(n, k, d ; k − 2) , g (n, k, d ; k − 1) } ≥ g(n, k, d ; k − 1) − 1 = n + 2 dk − d − 2, as desired. 4 The transition Parts (a) and (b) of Theorem 1.2 determine the function f (n, k, d ) exactly in the two extreme ranges of the parameters — when k is exponentially large with respect to n, and when n is exponentially large with respect to k. As remarked upon in the introduction, we know that in the former case, the bound on k is best possible. However, that is not true for part (b), and we believe the upper bound of Lemma 3.1 should be tight for much smaller values of n as well. In this section we explore the transition between these two ranges, with an eye towards better understanding when this upper bound becomes tight. As we saw in Lemma 2.2, for our upper bounds we can generally reduce to the hyperplane setting, and so we shall focus on the d = 1 case in this section. To simplify notation, we will refer to a ( k, 1)-cover as a k-cover and write f (n, k ) instead of f (n, k, 1). In this hyperplane setting, the upper bound of Lemma 3.1, valid for all n ≥ 1 and k ≥ 2, has the simple form n + 2 k − 3. Given some fixed k, suppose the bound is tight for some n0; that is, f (n0, k ) = n0 + 2 k − 3. The recursion of Lemma 3.2 implies f (n, k ) ≥ f (n − 1, k ) + 1 for all n ≥ 2, and so these two bounds together imply f (n, k ) = n + 2 k − 3 for all n ≥ n0. Hence, for every k, there is a well-defined threshold n0(k) such that f (n, k ) = n + 2 k − 3 if and only if n ≥ n0(k). Theorem 1.2(b) shows n0(k) ≤ 2k + 1, and our goal now is to explore the true behaviour of this threshold. 4.1 The diagonal case As a natural starting point, one might ask what lower bound we can provide for n0(k). From our previous results, in particular Theorem 1.2(a), we have seen that f (n, k ) behaves differently when k is large compared to n. We therefore know the upper bound of Lemma 3.1 is not tight when k ≥ 2n−2 or, equivalently, we know n0(k) > log 2 k + 2. However, the following construction, valid when k ≥ 4, shows that we can improve upon Lemma 3.1 for considerably larger values of n as well. Proposition 4.1. For all k ≥ 4, we have f (k, k ) ≤ 3k − 4. As a consequence, n0(k) ≥ k + 1 .Proof. 
To prove the upper bound, we must construct a k-cover H of Fk 2 of size 3 k − 4. Letting ~ei denote the ith standard basis vector and ~1 the all-one vector, we take H = H1 ∪ H 2 ∪ H 3, where H1 = {H~ei : i ∈ [k]}, H2 = {H~1−~ei : i ∈ [k]}, and H3 consists of k − 4 copies of the hyperplane with equation ~x · ~1 = 0. Then H has size 3 k − 4, while the only planes containing the origin are those in H3. Thus it only remains to verify that each nonzero point is covered at least k times. Given a nonzero point ~x, let its weight be w. We then see that ~x is covered w times by the planes in H1. Next, observe that ~x · (~1 − ~ei ) is equal to w if xi = 0, and is equal to w − 1 otherwise. Hence, if w is odd, then ~x is covered by k − w planes in H2, and is thus covered at least k times by H.On the other hand, if w is even, then ~x is covered w times by the planes in H2. However, in this case ~x · ~1 = 0, and so ~x is covered k − 4 times by H3 as well. In total, then, ~x is covered 2 w + k − 4times. As ~x is a nonzero vector of even weight, we must have w ≥ 2, and hence ~x is covered at least k times in this case as well. In conclusion, we see that H forms a k-cover of Fk 2 , and thus f (k, k ) ≤ |H| = 3 k − 4. As this is smaller than the upper bound of Lemma 3.1, it follows that n0(k) ≥ k + 1. 10 4.2 Initial values This still leaves us with a large range of possible values for n0(k): our lower bound is linear, while our upper bound is exponential. To get a better feel for which bound might be nearer to the truth, we next decided to take a closer look at f (n, k ) for small values of the parameters. To be able to compute a number of these values efficiently, it helped to appeal to our recursive bounds. Lemma 3.2 already restricts the behaviour of f (n, k ) as n changes, showing that the function must be strictly increasing in n. It is also very helpful to understand how f (n, k ) responds to changes in k: as the following lemma shows, there is even less flexibility here. Lemma 4.2. For all n ≥ 1 and k ≥ 2 we have f (n, k − 1) + 1 ≤ f (n, k ) ≤ f (n, k − 1) + 2 .Proof. For the lower bound, observe that, given a k-cover of size f (n, k ), removing a hyperplane covering the origin (or, if no such plane exists, an arbitrary plane) leaves us with a ( k − 1)-cover, and thus f (n, k − 1) ≤ f (n, k ) − 1. For the upper bound, given a ( k − 1)-cover of size f (n, k − 1), we can add an arbitrary pair of parallel hyperplanes to obtain a k-cover. Thus f (n, k ) ≤ f (n, k − 1) + 2. Thus, if we know the value of f (n, k − 1), there are only two possible values for f (n, k ). This becomes even more powerful when used in combination with Lemma 3.2, which guarantees f (n, k ) ≥ f (n − 1, k ) + 1. Hence, in case we have f (n − 1, k ) = f (n, k − 1) + 1, the only possible value for f (n, k )is f (n, k − 1) + 2. Although this may seem a very conditional statement, this configuration occurs quite frequently, as one can see in Table 1 below, and allows us to deduce several values of f (n, k ) for free. This observation, together with our previous bounds (and noting that f (n, 2) = n + 1), allows us to almost completely determine f (n, k ) for n ≤ 6. We were able to fill in the few outstanding values through a computer search (using SageMath and Gurobi ). 
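The authors' search used SageMath and Gurobi; as those scripts are not reproduced here, the sketch below is a minimal stand-in (ours) using the PuLP modelling library with its bundled CBC solver, and any other ILP solver would do equally well. It phrases $f(n,k) = f(n,k,1)$ as an integer program with one multiplicity variable per affine hyperplane, and is only practical for very small $n$. The expected output for $(n,k) = (3,4)$ is 7, the corresponding entry of Table 1 below.

```python
# Integer-programming stand-in (ours) for the computer search mentioned above.
from itertools import product
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

def f_hyperplanes(n, k):
    points = list(product((0, 1), repeat=n))
    origin = (0,) * n

    def covers(u, c, x):
        return sum(a * b for a, b in zip(u, x)) % 2 == c

    # every affine hyperplane of F_2^n: nonzero normal vector u, constant c
    planes = [(u, c) for u in points if u != origin for c in (0, 1)]

    prob = LpProblem("k_cover", LpMinimize)
    m = {p: LpVariable(f"m_{i}", lowBound=0, cat="Integer")
         for i, p in enumerate(planes)}
    prob += lpSum(m.values())                # minimise the size of the multiset
    for x in points:
        load = lpSum(m[(u, c)] for (u, c) in planes if covers(u, c, x))
        prob += (load <= k - 1) if x == origin else (load >= k)
    prob.solve()
    return int(value(prob.objective))

print(f_hyperplanes(3, 4))    # expected: 7
```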
n\k |  3   4   5   6   7   8   9  10  11  12  13  14  15  16  ...
----+--------------------------------------------------------------
 3  |  6*  7   9  11  13  14  16  18  20  21  23  25  27  28  ...
 4  |  7*  8  10  12  14  15  17  19  21  23  25  27  29  30  ...
 5  |  8* 10* 11  13  15  16  18  20  22  24  26  28  30  31  ...
 6  |  9* 11* 13* 14  16  18  20  22  23  25  27  29  31  32  ...

Table 1: $f(n,k)$ for $3 \le n \le 6$: values in green come from Theorem 1.2(a), values in blue are a consequence of the recursive bounds, values in orange follow from Proposition 4.1, and values in red were obtained by a computer search. An asterisk denotes values equal to the upper bound of Lemma 3.1; that is, where $n \ge n_0(k)$.

¹ Some of these values we first proved by hand, via direct case analysis. However, as we do not see any more broadly applicable generalisation of the arguments therein, we have omitted these proofs.

4.3 The extended Golay code

We see from Table 1 that $n_0(k) = k+1$ for $k \in \{4,5\}$, lending some credence to the belief that the construction from Proposition 4.1 is perhaps indeed the last time the upper bound from Lemma 3.1 can be improved. However, we can once again exploit the coding theory connection of Proposition 3.3 to show that this is not always the case. The extended binary Golay code is a 12-dimensional code of length 24 and minimum distance 8. By Proposition 3.3, this code is equivalent to an $(8,1;0)$-cover of $\mathbb{F}_2^{12}$ of size 24, thus implying that $f(12,8) \le 24$, whereas the upper bound given by Lemma 3.1 is 25. Furthermore, we see in Table 1 that $f(6,8) = 18$. By repeated application of Lemma 3.2, we must have $f(12,8) \ge f(6,8) + 6$, and thus $f(12,8) = 24$. Moreover, there must be equality in every step of the recursion, and thus $f(n,8) = n + 12$ for $6 \le n \le 12$.

This result, coupled with the techniques described previously, allows us to extend Table 1 to include values for $7 \le n \le 12$ and $3 \le k \le 10$. These new values are depicted in Table 2 below. We see that the equality $n_0(k) = k+1$ persists for $k = 6, 7$ until the Golay construction comes into existence. In light of Lemma 4.2, this ensures $n_0(k) \ge k+2$ for $8 \le k \le 11$.

n\k |  3    4    5    6    7    8   9   10
----+---------------------------------------
 6  |  9*  11*  13*  14   16   18  20   22
 7  | 10*  12*  14*  16*  17   19  21   23
 8  | 11*  13*  15*  17*  19*  20  22   24
 9  | 12*  14*  16*  18*  20*  21  23   25
10  | 13*  15*  17*  19*  21*  22  24   26
11  | 14*  16*  18*  20*  22*  23  25   27
12  | 15*  17*  19*  21*  23*  24  26   28
... | ...

Table 2: More values of $f(n,k)$: green represents values coming from Theorem 1.2(a), red represents values obtained through computer computations, blue represents values obtained from other values by the recursive bounds, orange represents values obtained by Proposition 4.1 and recursion, and cyan represents values obtained by the Golay code construction and its recursive consequences. An asterisk denotes values attaining the upper bound of Lemma 3.1; that is, where $n \ge n_0(k)$.

This begs the question of what happens for larger values of $k$. Does the gap $n_0(k) - k$ continue to grow? Does the threshold return to $k+1$ at a later point? Unlike the construction in Proposition 4.1, the Golay code yields a sporadic construction, which we have not been able to generalise. Furthermore, this is known as a particularly efficient code, and we are not aware of any other code whose parameters lead to an improvement on Proposition 4.1. Hence, we are leaning towards the second possibility – not strongly enough, perhaps, to conjecture it as the truth, but enough to pose it as a question.

Question 4.3. Do we have $n_0(k) = k+1$ for all $k \ge 12$?

To answer Question 4.3, we need to determine the value of $f(k+1, k)$.
For an affirmative answer, we need to show f (k + 1 , k ) = 3 k − 2, while a negative answer would follow from a construction showing f (k + 1 , k ) ≤ 3k − 3. What could such a construction look like? If we retrace the proof of Theorem 1.2(b), we see that any k-cover of Fk+1 2 that covers the origin at least k − 2 times must have size at least 3 k − 2. Hence, any construction negating Question 4.3 must cover the origin at most k − 3times. While this seemingly contradicts Corollary 3.6, recall that we needed n to be exponentially large with respect to k to draw that conclusion. Without this condition, the Hamming bound on codes with large distance is not strong enough to provide the requisite lower bound on f (n, k ). Indeed, the Gilbert-Varshamov bound, discussed in Remark 3.5, shows that a random collection of k + O(log k)hyperplanes forms a 3-cover of Fk+1 2 with high probably. Adding k −3 arbitrary pairs of parallel planes then gives a k-cover of size 3 k + O(log k) that only covers the origin k − 3 times. Thus, we can find numerous k-covers that are asymptotically optimal, and we cannot hope for any strong stability when n and k are comparable. 12 5 Concluding remarks In this paper, we investigated the minimum number of affine subspaces of a fixed codimension needed to cover all nonzero points of Fn 2 at least k times, while only covering the origin at most k − 1 times. We were able to determine the answer precisely when k is large with respect to n, or when n is large with respect to k, and provided asymptotically sharp bounds for the range in between these extremes. In this final section, we highlight some open problems and avenues for further research. Bounding the threshold In the previous section, we raised the question of determining the thresh-old n0(k) beyond which the result of Theorem 1.2(b) holds. Although our proof requires n to be exponentially large with respect to k, our constructions suggest the threshold might, with limited exceptions, be as small as k + 1. It is quite possible that solving Question 4.3 will require improving the classic bounds on the length of binary codes of large minimum distance, and will therefore perhaps be quite challenging. However, there is plenty of scope to attack the problem from the other direction, and aim to reduce the exponential upper bound on n0(k). Our strategy was to prove the lower bound for g(n, k, 1; k − 1) and g(n, k, 1; k − 2), using the recursive bounds. By removing planes covering the origin, we could reduce the remaining cases to g(n, 3, 1; 0), for which, when n is large, the coding theory connection provides a large enough lower bound. There are two natural ways to improve this argument. The first would be to extend the values s for which we directly prove the lower bound on g(n, k, 1; s). For instance, if we could show that g(n, k, 1; s) ≥ n + 2 k − 3 for s ∈ { k − 3, k − 4} as well, then we could reduce the remaining cases to g(n, 5, 1; 0) instead, for which the Hamming bound gives a stronger lower bound. This would still yield an exponential bound on n0(k), but with a smaller base. The second approach concerns our reduction to g(n, 3, 1; 0), where we use the fact that removing a hyperplane from a k-cover leaves us with a ( k − 1)-cover. However, our constructions contain arbitrary pairs of parallel planes, and thus it is possible to remove from them two planes and still be left with a ( k − 1)-cover. If we can show that this is true in general, it could lead to a linear bound on n0(k). 
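The Gilbert–Varshamov-style observation above is easy to witness empirically. The following Monte Carlo sketch (ours) samples $k$ plus a generous number of additional random origin-avoiding hyperplanes in $\mathbb{F}_2^{k+1}$ and reports how often they already form a 3-cover. Both the value of $k$ and the size of the additive term are our choices, made so that the experiment runs quickly; the remark above concerns the asymptotic regime, where the additive term is only logarithmic.

```python
# Monte Carlo illustration (ours) of the random 3-covers discussed above.
import random
from itertools import product

def random_three_cover_rate(k, extra, trials=10):
    n = k + 1
    nonzero = [x for x in product((0, 1), repeat=n) if any(x)]
    hits = 0
    for _ in range(trials):
        us = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(k + extra)]
        us = [u if any(u) else (1,) + (0,) * (n - 1) for u in us]  # crude fix: no zero normals
        def mult(x):
            return sum(1 for u in us
                       if sum(a * b for a, b in zip(u, x)) % 2 == 1)
        if all(mult(x) >= 3 for x in nonzero):
            hits += 1
    return hits / trials

# with a generous additive term (our arbitrary choice) the rate is typically close to 1
print(random_three_cover_rate(k=10, extra=20))
```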
Finally, while we have focused on the hyperplane case in Question 4.3, it would also be worth exploring the corresponding threshold n0(k, d ) for d ≥ 2. It would be very interesting if there were new constructions that appear in this setting where we cover with affine subspaces of codimension d. Larger fields In this paper we have worked exclusively over the binary field F2, but it is also natural to explore these subspace covering problems over larger finite fields, Fq for q > 2. Let us denote the corresponding extremal function by fq(n, k, d ), which is the minimum cardinality of a multiset of (n − d)-dimensional affine subspaces that cover all points of Fnq \ { ~0} at least k times, and the origin at most k − 1 times. The work of Jamison establishes the initial values of this function, showing fq(n, 1, d ) = ( q − 1)( n − d) + qd − 1. When it comes to multiplicities k ≥ 2, some of what we have done here can be transferred to larger fields as well. To start, we can once again resolve the setting where the multiplicity k is large with respect to the dimension n. Indeed, the double-counting lower bound of Lemma 2.1 generalises immediately to this setting, giving fq(n, k, d ) ≥ qdk − ⌊ k qn−d ⌋ , and one can obtain a matching upper bound by taking multiple copies of every affine subspace. In the other extreme, where n is large with respect to k, the problem remains widely open. We first note that the reduction to hyperplanes from Lemma 2.2 can be extended, giving fq(n, k, d ) ≤ fq(n − d + 1 , k, 1) + ( qd−1 − 1) kq . Thus, as before, it is best to first focus on the case d = 1, and we define fq(n, k ) := fq(n, k, 1). Then Jamison’s result gives fq(n, 1) = ( q − 1) n.For an upper bound, let us start by considering 2-covers. It is once again true that if one takes the standard 1-covering by hyperplanes, consisting of all hyperplanes of the form {~x : xi = c} for some i ∈ [n] and c ∈ Fq \ { 0}, the only nonzero vectors that are only covered once are those of Hamming weight 1. However, since the nonzero coordinate of these vectors can take any of q − 1 different values, it takes a further q − 1 hyperplanes to cover these again, and so we have f (n, 2) ≤ (q − 1)( n + 1). 13 Now, given a ( k − 1)-cover of Fnq , one can obtain a k-cover by adding an arbitrary partition of Fnq into q parallel planes, and this yields fq(n, k ) ≤ (q − 1)( n + 1) + q(k − 2). This construction is the direct analogue of that from Lemma 3.1, and so, as in Theorem 1.2(b), we expect it to be tight when n is sufficiently large. However, the lower bounds are lacking. A simple general lower bound is obtained by noticing that removing k − 1 hyperplanes from a k-cover leaves us with at least a 1-cover, and so fq(n, k ) ≥ fq(n, 1) + k − 1 = ( q − 1) n + k − 1. This remains the best lower bound we know — in particular, even the case of fq(n, 2) is unsolved. It would of course be very helpful to use some of the machinery we have developed here, and so we briefly explain where the difficulties therein lie. Key to our binary proof was the equivalence with codes of a certain minimum distance, given in Proposition 3.3. When working over Fq, unfortunately, that equivalence breaks down. For an n-dimensional linear code with minimum distance k with generator matrix A, we require that, for every nonzero vector ~x ∈ Fnq , the vector A~ x has at least k nonzero entries. In the binary setting, this was precisely what we wanted, since ~x was covered by the ith hyperplane if and only if the ith entry of A~ x was nonzero. 
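The $q$-ary 2-cover just described is easy to verify for small parameters. The sketch below (ours) uses integer arithmetic modulo a prime $q$ as a stand-in for $\mathbb{F}_q$, and takes the $q-1$ extra hyperplanes to be $\{\vec x : x_1 + \dots + x_n = c\}$ for $c \ne 0$; the text does not single out a specific choice, so this particular one is an assumption.

```python
# Check (ours) of the (q-1)(n+1) hyperplane 2-cover of F_q^n \ {0} described above.
# Arithmetic is done modulo q, so q should be prime here.
from itertools import product

def check_two_cover(q, n):
    points = list(product(range(q), repeat=n))
    origin = (0,) * n
    coord = [("coord", i, c) for i in range(n) for c in range(1, q)]  # {x : x_i = c}
    diag = [("diag", None, c) for c in range(1, q)]                   # {x : sum(x) = c}
    planes = coord + diag

    def covers(plane, x):
        kind, i, c = plane
        return x[i] == c if kind == "coord" else sum(x) % q == c

    mult = {x: sum(covers(p, x) for p in planes) for x in points}
    assert len(planes) == (q - 1) * (n + 1)
    assert mult[origin] == 0
    assert all(m >= 2 for x, m in mult.items() if x != origin)
    print(f"q = {q}, n = {n}: 2-cover of size {(q - 1) * (n + 1)} verified")

check_two_cover(3, 3)
check_two_cover(5, 2)
```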
However, in the q-ary setting, for ~x to be covered by the ith hyperplane, we need the ith entry of A~ x to be equal to a prescribed nonzero value. Hence, while every k-covering of Fnq gives rise to a linear q-ary n-dimensional code of minimum distance at least k, the converse is not true. As a result, the coding theoretic bounds, which are of the form n + O(k log n), are not strong enough to give us information here. Another main tool was the recursion over n, showing that f (n, k ) is strictly increasing in n. The same proof goes through here, and we can again show fq(n, k ) > f q(n − 1, k ). However, from our bounds, we expect the stronger inequality fq(n, k ) ≥ fq(n − 1, k ) + q − 1 to hold. Intuitively, this is because when we restrict a k-cover of Fnq to Fnq−1 ⊂ Fnq , there are q − 1 affine copies of Fnq−1 that are lost. However, this does not (appear to) come out of our probabilistic argument. It would thus be of great interest to develop new tools to handle the q-ary case, as these may also bear fruit when applied to the open problems in the binary setting as well. We believe that new algebraic ideas may be necessary to resolve the following question. Question 5.1. For n ≥ n0(k, q ), do we have fq(n, k ) = ( q − 1)( n + 1) + q(k − 2) ? Polynomials with large multiplicity Finally, speaking of algebraic methods, we return to our introductory discussion of the polynomial method. Recall that previous lower bounds in this area have been obtained by considering the more general problem of the minimum degree of a polynomial in F[x1, x 2, . . . , x n] that vanishes with multiplicity at least k at all nonzero points in some finite grid, and with lower multiplicity at the origin. Sauermann and Wigderson’s recent breakthrough, Theorem 1.1, resolves this polynomial problem for n ≥ 2k − 3 over fields of characteristic 0, while our results here show that, in the binary setting at least, there is separation between the hyperplane covering and the polynomial problems. Despite this, we wonder whether the answers to the two problems might coincide in the range where the multiplicity k is large with respect to the dimension n. That is, can the simple double-counting hyperplane lower bound be strengthened to the polynomial setting? We would therefore like to close by emphasising a question of Sauermann and Wigderson , this time over F2. Question 5.2. Given positive integers k, n with k ≥ 2n−2, let P ∈ F2[x1, x 2, . . . , x n] be a polynomial that vanishes with multiplicity at least k at every nonzero point, and with multiplicity at most k − 1 at the origin. Must we then have deg( P ) ≥ 2k − ⌊ k 2n−1 ⌋? References N. Alon and Z. F¨ uredi, Covering the cube by affine hyperplanes, European J. Combin. 14(2) (1993), 79–83. S. Ball, On intersection sets in Desarguesian affine spaces, European J. Combin. 21(3) (2000), 441–446. 14 S. Ball, The polynomial method in Galois geometries, in Current research topics in Galois geometry, Chapter 5, Nova Sci. Publ., New York, (2012), 105–130. S. Ball and O. Serra, Punctured Combinatorial Nullstellens¨ atze, Combinatorica 29 (2009), 511–522. A. Bishnoi, P. L. Clark, A. Potukuchi and J. R. Schmitt, On zeros of a polynomial in a finite grid, Comb. Prob. Comput. 27(3) (2018), 310–333. A. Blokhuis, A. E. Brouwer and T. Sz˝ onyi, Covering all points except one, J. Algebraic Comb. 32 (2010), 59-66. A. E. Brouwer and A. Schrijver, The blocking number of an affine space, J. Combin. Theory Ser. A 24(2) (1978), 251–253. A. A. 
Bruen, Polynomial multiplicities over finite fields and intersection sets, J. Combin. Theory Ser. A 60(1) (1992), 19–33.
A. A. Bruen and J. C. Fisher, The Jamison method in Galois geometries, Des. Codes Crypt. 1 (1991), 199–205.
A. Clifton and H. Huang, On almost k-covers of hypercubes, Combinatorica 40 (2020), 511–526.
O. Geil and U. Martínez-Peñas, Bounding the number of common zeros of multivariate polynomials and their consecutive derivatives, Comb. Prob. Comput. 28(2) (2019), 253–279.
Gurobi Optimizer Reference Manual, Gurobi Optimization, LLC, 2020.
L. Guth, Polynomial methods in combinatorics, Vol. 64, American Mathematical Soc., (2016).
R. E. Jamison, Covering finite fields with cosets of subspaces, J. Combin. Theory Ser. A 22(3) (1977), 253–266.
P. Komjáth, Partitions of vector spaces, Period. Math. Hungar. 28 (1994), 187–193.
G. Kós and L. Rónyai, Alon's Nullstellensatz for multisets, Combinatorica 32(5) (2012), 589–605.
G. Kós, T. Mészáros and L. Rónyai, Some extensions of Alon's Nullstellensatz, Publicationes Mathematicae Debrecen 79(3-4) (2011), 507–519.
SageMath, the Sage Mathematics Software System (Version 9.0), The Sage Developers, 2020.
L. Sauermann and Y. Wigderson, Polynomials that vanish to high order on most of the hypercube, arXiv preprint arXiv:2010.00077 (2020).
C. Zanella, Intersection sets in AG(n, q) and a characterization of hyperbolic quadric in PG(3, q), Discrete Math. 255 (2002), 381–386.
THE COMBINATORIAL CATEGORY OF ANDERSEN, JANTZEN AND SOERGEL AND FILTERED MOMENT GRAPH SHEAVES PETER FIEBIG AND MARTINA LANINI Abstract. We give an overview on the series of articles [FL1, FL2, FL3] that aims at introducing a new approach towards the “combinatorial” cate-gory introduced by Andersen, Jantzen and Soergel in their work on Lusztig’s conjecture on the irreducible highest weight characters of modular algebraic groups. 1. Introduction One of the essential steps in the proof of Lusztig’s conjecture for large enough characteristics is the work of Andersen, Jantzen and Soergel [AJS94]. There, the authors define a category M over an arbitrary field k that has the property that it encodes important structural information on the representation theory of quantum groups at a root of unity if k is of characteristic zero, and of modular Lie algebras if k is of positive characteristic. Using an intricate base change argument, the authors were able to show that Lusztig’s formula for the irreducible highest weight characters of simple, simply connected modular groups follows from the analogous formula in the quantum case (which was proven earlier by Kazhdan and Lusztig), provided that the characteristic of the base field is large enough. After this seminal result was obtained, representation theorists were hoping that every prime above the Coxeter number of the group might be large enough in the above sense, and many people were aspiring to prove this. However, in 2013 Geordie Williamson came up with a series of counterexamples, and in addition to showing that the above hope was too optimistic, Williamson also showed that the exceptional characteristics grow exponentially with the Coxeter number [Wil]. This result is both discouraging and challenging. So far we seem not to have a good idea of what is going on for the exceptional primes, and we seem to be quite far from stating a new conjecture on the irreducible characters in case the characteristic of the ground field is exceptional. Lusztig’s conjecture was inspired by explicit examples for the irreducible char-acters of algebraic groups that were found by Jantzen, who was using a variety Both authors were partially supported by the DFG priority program 1388 “Representation Theory”. 1 2 PETER FIEBIG AND MARTINA LANINI of powerful tools (such as Jantzen’s sum formula, see also [Jan77]) for the calcu-lations. In the present situation, we would like to provide further methods and tools. We hope that they will be of use for gaining an understanding of what to expect of modular representation theory at small primes. Now, the Andersen– Jantzen–Soergel result referred to above holds for any field of characteristic above the Coxeter number, and it allows us to deduce the irreducible highest weight characters from multiplicities encoded in the category M. This motivates our attempt to try to look at the category M from a new perspective. There is a precedent for our approach. In the case of the conjecture of Kazhdan and Lusztig on the irreducible highest weight characters of semisimple complex Lie algebras, one can translate the problem into the realm of either the cate-gory of Soergel bimodules, or of Braden–MacPherson sheaves on finite moment graphs. The connection to Soergel bimodules is established via translation func-tors, whereas the connection to sheaves on moment graphs can be obtained more directly, using the fact that Braden–MacPherson sheaves are projective objects inside a certain category of sheaves that admit a Verma flag (cf [Fie08]). 
Both approaches have their advantages.

In the positive characteristic case, the translation combinatorics side is incorporated in the definition of the Andersen–Jantzen–Soergel category. Now we want to construct the sheaves-on-moment-graphs side of the picture. It turns out that the right category to look at is the category of filtered sheaves on finite moment graphs. We define this category together with an exact structure and we relate the projective objects to the Andersen–Jantzen–Soergel category. The projective objects can be constructed in two essentially different ways: on the one hand, there is a filtered Braden–MacPherson algorithm that yields the indecomposable projectives directly. On the other hand, our category also carries translation functors, and the indecomposable projectives occur as direct summands of Bott–Samelson-type objects. We can relate our category to the category of sheaves on affine moment graphs, and hence are able to obtain multiplicity formulas that imply Lusztig's conjecture for large enough primes.

2. The Andersen–Jantzen–Soergel category

The category of Andersen, Jantzen and Soergel that appears prominently in their work on the representation theory of modular Lie algebras and quantum groups is sometimes called a "combinatorial category". This term might be somewhat misleading, as the category has not much to do with classical, set-theoretic combinatorics. It is rather a category that is defined in terms of basic linear algebra and that is meant to "categorify" the algorithm for calculating the periodic polynomials inside the periodic Hecke module. Let us now introduce the basic notions.

2.1. Alcoves and reflections. Let $R \subset V$ be an irreducible root system in the Euclidean vector space $V$. We denote by $R^\vee \subset V^*$ the system of coroots, and $\alpha^\vee \in R^\vee$ is the coroot associated with $\alpha \in R$. With $X \subset V$ we denote the weight lattice. It is acted upon by the (finite) Weyl group $W$. We define, for $\alpha \in R$, the affine translation $t_\alpha : V \to V$, $\lambda \mapsto \lambda + \alpha$. From this we obtain an action of the root lattice $\mathbb{Z}R$ on $V$. The affine Weyl group $\widehat{W}$ is defined as the subgroup generated by $W$ and $\mathbb{Z}R$ inside the group of affine transformations of $V$. The affine Weyl group is also generated by the following subset of affine reflections: for $\alpha \in R$ and $n \in \mathbb{Z}$ we denote by $s_{\alpha,n} : V \to V$ the map $\lambda \mapsto \lambda - (\langle\lambda,\alpha^\vee\rangle - n)\alpha$, i.e. the affine reflection at the hyperplane $H_{\alpha,n} := \{\lambda \in V \mid \langle\lambda,\alpha^\vee\rangle = n\}$.

The set of alcoves $\mathcal{A}$ is the set of connected components of the topological space $V \setminus \bigcup_{\alpha,n} H_{\alpha,n}$ (we think of $V$ as being endowed with its standard, metric topology). Then $\mathcal{A}$ is acted upon by the affine Weyl group $\widehat{W}$, and this action is free and transitive. Let us fix a system of positive roots $R^+ \subset R$. We denote by $A_e \in \mathcal{A}$ the base alcove, i.e. the unique alcove contained in the dominant Weyl chamber in $V$ and that contains $0$ in its closure. The map $\widehat{W} \to \mathcal{A}$, $w \mapsto w(A_e)$, is a bijection, and we define $A_w := w(A_e)$. We denote by $\Pi \subset R^+$ the set of simple roots, and by $\gamma \in R^+$ the highest root (i.e. the unique element with the property $\gamma - \alpha \in \mathbb{Z}_{\ge 0}R^+$ for all $\alpha \in R^+$). The set of simple affine reflections is $\widehat{S} := \{s_{\alpha,0} \mid \alpha \in \Pi\} \cup \{s_{\gamma,1}\}$. The reflection hyperplanes corresponding to $s \in \widehat{S}$ are precisely the hyperplanes that have a codimension 1 intersection with the closure of $A_e$.

For any $\alpha \in R^+$ we define a bijection $\alpha\uparrow\cdot : \mathcal{A} \to \mathcal{A}$. Let $A$ be an alcove. Set $n = n_{A,\alpha} := \min\{m \in \mathbb{Z} \mid \langle\lambda,\alpha^\vee\rangle < m \text{ for all } \lambda \in A\}$. Then we set $\alpha\uparrow A := s_{\alpha,n}(A)$. We denote by $\preceq$ the minimal partial order on the set $\mathcal{A}$ that satisfies $A \preceq \alpha\uparrow A$ for all positive roots $\alpha$ and all alcoves $A$.

Let $A \in \mathcal{A}$ be an alcove, and let $s \in \widehat{S}$. Let us denote by $H_s$ the reflection hyperplane corresponding to $s$. Then there is a unique reflection hyperplane $H = H_{A,s}$ in the $\widehat{W}$-orbit of $H_s$ that has a codimension 1 intersection with the closure of $A$. Let us denote by $A^s$ the image of $A$ under the reflection at $H$. We then have $A^s \preceq A$ or $A \preceq A^s$, and we denote by $A(s)_-$ (by $A(s)_+$, resp.) the smaller (larger) alcove in the set $\{A, A^s\}$.

2.2. Localizations. Now let us fix a field $k$. We denote by $S = S(X \otimes_{\mathbb{Z}} k)$ the symmetric algebra of the $k$-vector space spanned by the lattice $X$. Let us define for any positive root $\alpha$ the localization $S^\alpha = S[\beta^{-1} \mid \beta \in R^+, \beta \ne \alpha]$ of $S$, and $S^\emptyset = S[\beta^{-1} \mid \beta \in R^+]$. We then have canonical inclusions $S \subset S^\alpha \subset S^\emptyset$ for each positive root $\alpha$.

2.3. The surrounding category. We now have all ingredients to define the "combinatorial" category that surrounds the Andersen–Jantzen–Soergel category.
We denote by ⪯the minimal partial order on the set A that satisfies A ⪯α ↑A for all positive roots α and all alcoves A. Let A ∈A be an alcove, and let s ∈b S. Let us denote by Hs the reflection hyperplane corresponding to s. Then there is a unique reflection hyperplane H = HA,s in the c W-orbit of Hs that has a codimension 1 intersection with the closure of A. Let us denote by As the image of A under the reflection at H. We then have As ⪯A or A ⪯As, and we denote by A(s) −(by A(s) + , resp.) the smaller (larger) alcove in the set {A, As}. 4 PETER FIEBIG AND MARTINA LANINI 2.2. Localizations. Now let us fix a field k. We denote by S = S(X ⊗Z k) the symmetric algebra of the k-vector space spanned by the lattice X. Let us define for any positive root α the localization Sα = S[β−1 | β ∈R+, β ̸= α] of S and S∅= S[β−1 | β ∈R+]. We then have canonical inclusions S ⊂Sα ⊂S∅for each positive root α. 2.3. The surrounding category. We now have all ingredients to define the “combinatorial” category that surrounds the Andersen–Jantzen–Soergel category. Definition 2.1 ([AJS94]). Let K be the category that consists of objects M = ({M(A)}A∈A , {M(A, β)}A∈A ,β∈R+) , where (1) M(A) is an S∅-module for each A ∈A and (2) for A ∈A and β ∈R+, M(A, β) is an Sβ-submodule of M(A)⊕M(β ↑A). A morphism f : M →N in K is given by a collection (fA)A∈A of homomorphisms fA : M(A) →N(A) of S∅-modules, such that for all A ∈A and β ∈R+, fA⊕fβ↑A maps M(A, β) into N(A, β). It is convenient to also introduce the following shift functors that incorporate the ZR-symmetry of the set of alcoves. For an element γ ∈ZR and an object M of K we define the functor τγ : K →K as follows. For an alcove A and a positive root β we set (τγM)(A) = M(A + γ), (τγM)(A, β) = M(A + γ, β). If f : M →N is a morphism in K, then τγf : τγM →τγN is given by (τγf)A = fA+γ. 2.4. Translation functors and the base object. In order to define the Andersen– Jantzen–Soergel category we need a set of translation functors that act on the category K. Let s ∈b S be a simple affine reflection and let M be an object in K. We now define an object TsM in K. Let A be an alcove and β ∈R+. We set (TsM)(A) := M(A(s) −) ⊕M(A(s) + ) and (TsM)(A, β) :=                {(βx + y, y) | x, y ∈M(A, β)}, if β ↑A(s) −= A(s) + and A = A(s) −, βM(β ↓A, β) ⊕M(β ↑A, β), if β ↑A(s) −= A(s) + , and A = A(s) + , M(A(s) −) ⊕M(A(s) + ), if β ↑A(s) −̸= A(s) + . 5 These definitions are functorial in M in the obvious way, and hence yield a functor Ts : K →K. Apart from the translation functors we also need the following base object Q0 in K. For an alcove A and a positive root β we set Q0(A) := ( S∅, if A ∈W(Ae), 0, if A ̸∈W(Ae) and Q0(A, β) :=          {(βx + y, y) | x, y ∈Sβ}, if A, β ↑A ∈W(Ae), βSβ, if A ∈W(Ae), β ↑A ̸∈W(Ae), Sβ, if A ̸∈W(Ae), β ↑A ∈W(Ae), 0, if A, β ↑A ̸∈W(Ae). 2.5. The Andersen–Jantzen–Soergel category of “special objects” in K. We are now ready to define the category M. Definition 2.2. The category of special objects is the smallest full subcategory M of K that satisfies the properties. • It contains the object Q0. • It is stable under the translation functors Ts for any simple affine reflection s and the shift functors τγ for any γ ∈ZR. • It is stable under taking direct summands and forming direct sums. 2.6. The connection to modular representation theory. Now suppose that k is an algebraically closed field of positive characteristic p, and suppose that G is an almost simple, connected and simply connected algebraic group defined over k. 
Let T ⊂ G be a maximal torus, and suppose that the associated root system is R. We can then identify X with the set of weights Hom(T, k×) of G. Let g be the Lie algebra of G, and h ⊂ g the Lie algebra of T.

We now consider a certain category C of X-graded restricted representations of g. First note that g is a restricted Lie algebra, so we can consider the category of restricted representations. An object in C is a finite dimensional restricted representation M of g that carries an additional grading M = ⊕µ∈X Mµ as a vector space such that the following holds: for H ∈ h and m ∈ Mµ we have H.m = dµ(H)m, where dµ ∈ Hom(h, k) is the differential of µ. A morphism of X-graded representations is a homomorphism of representations of g that is diagonal with respect to the gradings.

An important property of C is the following: if M is a rational representation of G, then differentiating the G-action yields a representation of g on M. If we in addition remember the weight space decomposition, i.e. the action of T, then we obtain an object in C.

The category C is an abelian category. For any λ ∈ X one defines the baby Verma module Z(λ) in C with highest weight λ, and its unique simple quotient L(λ). We now consider the p-dilated and ρ-shifted action of the affine Weyl group on the lattice X, i.e. we consider the semidirect product Ŵp = W ⋉ ZpR with its natural action on X shifted by ρ, i.e. w.λ = w(λ + ρ) − ρ. We identify Ŵp with the affine Weyl group Ŵ acting on the set A in the obvious way. Let w0 be the longest element in W.

Theorem 2.3 ([AJS94]). Suppose that p > h.
• For any alcove A ∈ A there is an up to isomorphism unique indecomposable object QA in M with QA(B) = 0 unless A ⪯ B, and QA(A) ≅ S∅.
• We have [Z(w.0) : L(x.0)] = rkS∅ QAw0x(Aw0w) for all w, x ∈ Ŵ.

(Note that from the construction it follows that for any object M of M and any alcove A, the S∅-module M(A) is free of finite rank.)

This fundamental result shows that the category of special objects encodes the Jordan–Hölder multiplicities of the baby Verma modules in the category C. In fact, the result obtained in [AJS94] is much stronger: the category M is even equivalent to the category of (deformed) projective objects in C, hence it encodes the (full) categorical structure of C!

2.7. Lusztig's conjecture. The affine Weyl group together with the set of simple affine reflections Ŝ is a Coxeter system, so it comes equipped with a length function ℓ: Ŵ → Z≥0 and a Bruhat order ≤. The affine Hecke algebra Ĥ is the free Z[v±1]-module with basis {Hw | w ∈ Ŵ} whose algebra structure is uniquely determined by the relations

Hw · Hx = Hwx   if ℓ(wx) = ℓ(w) + ℓ(x),
Hs² = He + (v⁻¹ − v)Hs   for s ∈ Ŝ.

Then He is a multiplicative identity in Ĥ, and it turns out that each Hw is invertible in Ĥ. The Kazhdan–Lusztig involution on Ĥ is the Z-linear involution h ↦ h̄ determined by v̄ = v⁻¹ and H̄w = (Hw⁻¹)⁻¹ (the inverse of the element attached to w⁻¹). The element H̲s := Hs + v is self-dual with respect to this involution for any simple affine reflection s ∈ Ŝ.

The periodic module P is the free Z[v±1]-module with basis {A | A ∈ A}, equipped with a right action of Ĥ, which is uniquely determined by

A · H̲s = As + vA    if A = A(s)−,
A · H̲s = As + v⁻¹A  if A = A(s)+,

for all A ∈ A and s ∈ Ŝ. Let P◦ be the Ĥ-submodule of P generated by the set

{ Eλ := Σw∈W v^{ℓ(w)} (w(Ae) + pλ) | λ ∈ X }.

The following theorem gives us a distinguished basis for the module P◦.

Theorem 2.4 ([Lus80b], [Soe97, Theorem 4.3]).
(1) There exists a unique additive involutive map ·̄ : P◦ → P◦ such that $\overline{E_\lambda h} = E_\lambda\,\overline{h}$ for all λ ∈ X, h ∈ Ĥ.
(2) For any A ∈ A there exists a unique element PA such that P̄A = PA and PA = A + Σ_{B ≠ A} pA,B B (the sum running over alcoves B ≠ A) with pA,B ∈ vZ[v].

The set {PA}A∈A is a basis for P◦. The pA,B appearing in the above statement are called the periodic polynomials. Now Lusztig's conjecture can be reformulated (see, e.g., [Fie10]) as the following statement about the Jordan–Hölder multiplicities of baby Verma modules: for any x, w ∈ Ŵ we have

[Z(w.0) : L(x.0)] = pAw0w,Aw0x(1).

2.8. An intrinsic definition of M? The above result of Andersen, Jantzen and Soergel places the multiplicity problem into a somewhat elementary context. The categories M and K are defined using only linear algebraic structures; they do not refer any more to the representation theory of Lie algebras or reductive groups. Yet the construction of M is quite complicated, as experience shows that it is quite hard to calculate explicitly with the translation functors defined in Section 2.4. It is hence desirable to have an alternative, maybe more intrinsic, definition of M.

An example of such an intrinsic definition in a related context is the following. In the case that the characteristic of k is zero, the analogue (and the inspiration) of Lusztig's conjecture is the slightly older conjecture of Kazhdan and Lusztig on the characters of simple highest weight representations of semisimple complex Lie algebras. In the approach of Soergel one translates this problem into a decomposition problem for Soergel bimodules, or, equivalently, for moment graph sheaves. The Soergel bimodules (or the Braden–MacPherson sheaves) can be characterized as the projective objects in a surrounding category of objects that "admit a Verma flag". This fact yields a translation-functor-free proof of the analogue of Theorem 2.3 in this context (cf. [Fie08]), and it might help to understand similar situations in which translation functors cannot be defined, as for example the restricted category O at the critical level for an affine complex Kac–Moody algebra.

In the following section we review the articles [FL1], [FL2] and [FL3], in which an alternative construction of the category M is given.

3. (Co-)filtered modules over structure algebras

Let us consider the algebra

Z := { (zx) ∈ ⊕x∈W S | zx ≡ zsαx mod α∨ for all x ∈ W, α ∈ R+ }.

This is the structure algebra (over the field k) of the finite moment graph G associated to the root system R. It is a Z-graded, commutative, unital S-algebra.

3.1. Cofiltered Z-modules. Now we consider the set A as a topological space with the ⪯-order ideals as the open sets. That means that J ⊂ A is open if A ∈ J and B ⪯ A imply B ∈ J. An (A, ⪯)-cofiltered Z-module, as defined in [FL1], is nothing but a sheaf of Z-modules on A. Yet we decided not to use this terminology, as we also consider (A, ⪯)-cofiltered sheaves on the finite moment graph, and we would then have to call these objects "sheaves on A of sheaves on G", a confusing terminology that we want to avoid.

So our definition of an (A, ⪯)-cofiltered object M is as follows. It is given by Z-modules M⪯A for all A ∈ A together with restriction homomorphisms rA,B: M⪯A → M⪯B whenever B ⪯ A. This data should satisfy rA,A = idM⪯A for all A ∈ A, and rB,C ∘ rA,B = rA,C if C ⪯ B ⪯ A. We will also use the more suggestive notation m|⪯B for rA,B(m).
A morphism f: M → N between (A, ⪯)-cofiltered Z-modules is given by a family of homomorphisms f⪯A: M⪯A → N⪯A for all A ∈ A that is compatible with the restriction homomorphisms in the obvious way.

For each open subset J and each (A, ⪯)-cofiltered Z-module M we define

M^J := { (mA) ∈ ∏A∈J M⪯A | mA|⪯B = mB for all A, B ∈ J with B ⪯ A }.

This is again a Z-module. For open subsets J′ ⊆ J, the projection ∏A∈J M⪯A → ∏A∈J′ M⪯A along the decomposition obviously yields a homomorphism M^J → M^J′ of Z-modules. We say that M is a flabby (A, ⪯)-cofiltered Z-module if, for any pair (J′, J) with J′ ⊆ J, the homomorphism M^J → M^J′ is surjective.

3.2. The support condition. We need one more condition on our objects. First, we define, for any A ∈ A and any cofiltered Z-module M, the Z-module

M[A] := ker(M⪯A → M≺A).

Note that there is a unique w ∈ W with w(Ae) ∈ A + ZR. We denote this w by π(A), and in this way we obtain a map π: A → W. We say that M satisfies the support condition if for any A ∈ A, the action of (zw) ∈ Z on M[A] is given by multiplication with the scalar zπ(A).

We denote by Z-mod⪯ the category of all (A, ⪯)-cofiltered Z-modules M that satisfy the following assumptions:
(1) M is flabby.
(2) M satisfies the support condition.
(3) For all open subsets J, the S-module M^J is finitely generated and torsion free.

The following two subcategories are then important for us:
• Bref ⊂ Z-mod⪯ is the full subcategory of objects M with the property that M^J is a reflexive S-module for any open subset J.
• B ⊂ Z-mod⪯ is the full subcategory of objects M with the property that M^J is graded free over S for any open subset J.

3.3. Projective objects and the Braden–MacPherson algorithm. None of the categories Z-mod⪯, Bref and B is abelian. But they carry a natural exact structure instead.

Definition 3.1. We say that a sequence 0 → M → N → O → 0 in Z-mod⪯ is exact if, for any open subset J of A, the induced sequence 0 → M^J → N^J → O^J → 0 is an exact sequence of Z-modules.

One checks that this indeed defines an exact structure in the sense of Quillen. This exact structure allows us to talk about projective objects in the categories Z-mod⪯, Bref and B. Note that we call an object P in any of these categories projective if the respective Hom(P, ·) functor maps a short exact sequence to a short exact sequence of abelian groups. We have the following result:

Theorem 3.2 ([FL1]). For each A ∈ A there is an up to isomorphism unique object B(A) in the category Bref with the following properties:
(1) B(A) is indecomposable and projective.
(2) B(A)[B] = 0 unless A ⪯ B, and B(A)[A] ≅ S.

In the paper [FL1], the object B(A) is obtained by taking global sections of an (A, ⪯)-cofiltered sheaf on the finite moment graph G associated with the root system R. This sheaf is constructed algorithmically by a cofiltered version of the Braden–MacPherson algorithm (cf. [BMP01]). It is not at all clear from the construction that B(A) admits a Verma flag. Still, the objects B(A) are characterized by their projectivity, and one can construct them locally, i.e. vertex by vertex, using a linear-algebraic algorithm.

3.4. Translation functors and a duality. In the paper [FL2] an alternative proof of the above statement is presented. In that paper we introduce and study translation functors on the categories Z-mod⪯, Bref and B, and we obtain an additional property of the objects B(A).

Theorem 3.3 ([FL2]). For each A ∈ A, the object B(A) admits a Verma flag, i.e. it is contained in B.
Moreover, we introduce a duality functor D associated with the longest element w0 in the finite Weyl group, and we prove the following.

Theorem 3.4 ([FL2]). For each alcove A we have DB(A) ≅ B(w0(A))[ℓ(A)], where ℓ: A → Z is the length function introduced by Lusztig.

3.5. Connection to the Andersen–Jantzen–Soergel category. In [FL3] we define a functor Ψ: Z-mod⪯ → K. The main steps in the construction are the following. Let M be an object in Z-mod⪯. For an alcove A we set

Ψ(M)(A) := (M[A])∅.

For a positive root α we consider the ⪯-interval [A, α↑A]. Then there is a unique direct summand of (M[A,α↑A])∅ canonically isomorphic to (M[A])∅ ⊕ (M[α↑A])∅, and we denote by p the projection onto this summand. We then define

Ψ(M)(A, α) := im( (M[A,α↑A])α → (M[A,α↑A])∅ → (M[A])∅ ⊕ (M[α↑A])∅ ),

where the second map is the projection p. In [FL3] we then show:

Theorem 3.5. The image under the functor Ψ of the subcategory Bproj of projective objects in B is the Andersen–Jantzen–Soergel subcategory M of K.

The above theorem hence provides the more intrinsic definition of the category M that we were looking for.

3.6. Multiplicities and periodic polynomials. In [FL3] we introduce another functor. It takes a Braden–MacPherson sheaf on an affine moment graph and produces an object in Z-mod⪯. We show in loc. cit. that we obtain objects that are projective in B. Even though the functor is not fully faithful, it yields enough structure for a proof of the following result.

Theorem 3.6. Let A ∈ A be an alcove, and assume that either ch k = 0 or ch k is big enough. Then rkS B(A)[C] = pC,A(1) for any C ∈ A.

By "big enough" we mean that there exists a number N such that the statement of the theorem is true if ch k > N. In fact, the theorem above holds if the corresponding affine Kazhdan–Lusztig conjecture holds for the Braden–MacPherson sheaf B(w) on the affine moment graph over k, for all w such that w(Ae) is contained in the antifundamental box, i.e. the set of vectors λ ∈ V such that −1 ≤ ⟨λ, α∨⟩ ≤ 0 for any α ∈ Π.

Now if p is big enough in the sense of Theorem 3.6, then we obtain from the above, the definition of Ψ and the Andersen–Jantzen–Soergel result in Theorem 2.3 that

[Z(w.0) : L(x.0)] = rkS∅ QAw0x(Aw0w) ≤ rkS B(Aw0x)[Aw0w] = pAw0w,Aw0x(1)

for all w, x ∈ Ŵ. Once this is established, it is easy to obtain the reverse inequality, and hence

[Z(w.0) : L(x.0)] = pAw0w,Aw0x(1),

which, by what we explained in Section 2.7, is equivalent to Lusztig's conjecture.

References

[AJS94] Henning Haahr Andersen, Jens Carsten Jantzen, and Wolfgang Soergel, Representations of quantum groups at a pth root of unity and of semisimple groups in characteristic p: independence of p, Astérisque (1994), no. 220, 321 pp.
[BMP01] T. Braden and R. MacPherson, From moment graphs to intersection cohomology, Math. Ann. 321 (2001), no. 3, 533–551.
[FL1] Peter Fiebig and Martina Lanini, Filtered moment graph sheaves, preprint 2015, arXiv:1508.05579.
[FL2] Peter Fiebig and Martina Lanini, Periodic structures on affine moment graphs I: Dualities and translation functors, preprint 2015, arXiv:1504.01699.
[FL3] Peter Fiebig and Martina Lanini, Periodic structures on affine moment graphs II: Multiplicities and modular representations, in preparation.
[Fie08] Peter Fiebig, Sheaves on moment graphs and a localization of Verma flags, Adv. Math. 217 (2008), 683–712.
[Fie10] Peter Fiebig, Lusztig's conjecture as a moment graph problem, Bull. London Math. Soc. 42 (2010), no. 6, 957–972.
[Jan77] Jens Carsten Jantzen, Über das Dekompositionsverhalten gewisser modularer Darstellungen halbeinfacher Gruppen und ihrer Lie-Algebren, J. Algebra 49 (1977).
[Lus80a] George Lusztig, Some problems in the representation theory of finite Chevalley groups, The Santa Cruz Conference on Finite Groups (Univ. California, Santa Cruz, Calif., 1979), Proc. Sympos. Pure Math., vol. 37, Amer. Math. Soc., Providence, R.I., 1980, pp. 313–317.
[Lus80b] George Lusztig, Hecke algebras and Jantzen's generic decomposition patterns, Adv. Math. 37 (1980), no. 2, 121–164.
[Soe97] Wolfgang Soergel, Kazhdan–Lusztig-Polynome und eine Kombinatorik für Kipp-Moduln, Represent. Theory 1 (1997), 37–68 (electronic).
[Wil] Geordie Williamson, Schubert calculus and torsion explosion, preprint 2013, arXiv:1309.5055.

Department Mathematik, FAU Erlangen–Nürnberg, Cauerstraße 11, 91058 Erlangen, Germany
E-mail address: [email protected]

School of Mathematics, University of Edinburgh, Edinburgh EH9 3FD, UK
E-mail address: [email protected]
123420
Continuous probability distribution intro
Khan Academy
Posted: 10 Dec 2012
Description: Exploring continuous probability distributions (probability density functions)

Transcript: Let's say I have some random variable X, and it is a continuous random variable. I want to explore its probability distribution; in fact, I want to construct a probability distribution for it. On the horizontal axis are the values that X can take on, and on the vertical axis I'll essentially put the probability density for each of those values; we'll discover in a few moments why we are calling it density.

Let's say that my random variable X can never take on negative values, so it has zero probability density for any value that is negative. Then it has a uniform density of taking on any value between 0 and 5, so mark off 1, 2, 3, 4, 5 on the axis. It can't take on any value above 5. So there is zero probability of any value greater than 5, zero probability of any value less than 0, and uniform probability between 0 and 5.

When we're talking about a continuous probability distribution like this, it can also be referred to as a probability density function, sometimes abbreviated PDF. In this case you might notice it is a uniform probability density function.

Now the first question I have for you: what is the height of this uniform level? What is the value where this horizontal line intersects the vertical axis for our probability density function? To think about it, we just have to realize that whether we're talking about a continuous random variable or a discrete random variable, the probabilities of all the possible outcomes have to sum to one; you have a hundred percent chance of getting one of the possible outcomes for your random variable, whether it's discrete or continuous.

In the discrete case we summed up the bars. For a continuous random variable, we have to realize it can take on any value, not just one or two or three: it could take on 3.14159... (the value of pi), it could take on 2.71... (the number e), it could take on the square root of 2, and any number in between. So when we're thinking about all of the possible values that our random variable can take on, the combined probability of all the possibilities is now an area. For this to be a legitimate probability density function, the area under the curve that I've highlighted in orange needs to be equal to one. Given that the base here has length 5, what does the height need to be? Five times what is equal to one? Five times its reciprocal, so we have a uniform density right here at 1/5.

So, given that we've defined this probability density function in this way, let's think about some probabilities. What if I were
to ask you: what is the probability that X is greater than or equal to 1 and less than or equal to 2? What is this probability going to be equal to? Well, you just have to ask what values X can take on: it can be anywhere between 1 and 2, including 1 and 2, and the combined probability that X is in that range is the area under the curve over that range. What is this area? The base here is 1 and the height is 1/5 (we haven't drawn the height to scale here), so it's 1 times 1/5, which is equal to 1/5.

Let's think about another one. What is the probability that our random variable is greater than or equal to 4 and less than or equal to 4 and 1/3? Once again, what's the range? We can be anywhere from 4 up to 4 and 1/3, which is right about there, and all we care about is the area under the curve in that range. Lucky for us, this is a rectangle: the base between 4 and 4 and 1/3 has length 1/3, and the height once again is 1/5. One third times 1/5 is equal to 1/15.

Now let's do something interesting (not that the other stuff wasn't interesting, but let's do something even more interesting). What is the probability that X is greater than or equal to 2.9 and less than or equal to 3.1? So we have this little range right over here. The height is 1/5, as we've seen over and over again, but what's the area of this rectangle? The base runs from 2.9 to 3.1, so that is 0.2. Let me zoom in a little bit: this point right over here is 2.9, this point right over here is 3.1, and the difference between the two is 0.2, or you could say 1/5. So the base is 0.2, or 1/5, and the height is 1/5; 1/5 times 1/5 is equal to 1/25.

You might say, how is that any more interesting than what we just did? Well, let's escalate a little bit. Let's take the probability that 2.99 is less than or equal to our random variable, which is less than or equal to 3.01. What is this going to be equal to? Now we've made our range a little bit smaller: the base is now going to be 0.02, the difference between 3.01 and 2.99. So it's now a base of 0.02 and the same height of 1/5. The base is now not 1/5 but 1/50, which is the same thing as 0.02; we multiply that by 1/5, which gives us 1/250. And we could keep going, and I think you see why this is getting interesting. What's the probability that 2.999 is less than or equal to our random
variable, which is less than or equal to 3.001? What's this going to be equal to? Same exact logic: the base, the range of our random variable, is now 0.002, or 1/500. So it's now 1/500 times the height of 1/5, which gives us 1/2500. You see, we're getting closer and closer to X being exactly 3, and our probability is getting lower and lower as we narrow our range; we can get really, really close to 3.

So with that, let's finish this video with a very philosophically interesting question. What is the probability that my continuous random variable, defined this way, is exactly equal to 3? Not 3.01, not 2.99999999: exactly equal to 3. Well, now our rectangle has essentially degenerated down to just a vertical line. Its height is still 1/5, but it has absolutely no width; it is an infinitely skinny rectangle, so it has no area. So the probability is zero: you actually have a zero probability of getting exactly 3. Not 2.999..., not something between 2.9999999 and 3.0000001; we're talking about infinite precision, getting exactly 3, and that probability is zero. And hopefully this little progression gives you an indication of why that is: as we go to a tighter and tighter range around 3, the probability gets closer and closer to zero.
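The arithmetic in the transcript is easy to check numerically. Below is a small Python sketch (our own illustration, assuming scipy is available; it uses scipy's uniform distribution on [0, 5]) that reproduces the probabilities discussed above.

```python
from scipy.stats import uniform

# Uniform density on [0, 5]: loc = 0, scale = 5, so the density is 1/5 on that interval.
X = uniform(loc=0, scale=5)

print(X.pdf(2.0))                     # height of the density: 0.2 (= 1/5)
print(X.cdf(2.0) - X.cdf(1.0))        # P(1 <= X <= 2)          = 0.2    (= 1/5)
print(X.cdf(4 + 1/3) - X.cdf(4.0))    # P(4 <= X <= 4 1/3)     ~= 0.0667 (= 1/15)
print(X.cdf(3.1) - X.cdf(2.9))        # P(2.9 <= X <= 3.1)      = 0.04   (= 1/25)
print(X.cdf(3.001) - X.cdf(2.999))    # P(2.999 <= X <= 3.001)  = 0.0004 (= 1/2500)
print(X.cdf(3.0) - X.cdf(3.0))        # P(X = 3) for a continuous variable: 0.0
```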
123421
10-708 PGM | Lecture 4: Exact Inference

Lecture 4: Exact Inference

Introducing the problem of inference and finding exact solutions to it in graphical models.

Introduction

In the previous lectures, we introduced the concept of Graphical Models and their mathematical formulation. Now we know that we can use a graphical model $M$ (a Bayesian network or an undirected graphical model) to specify a probability distribution $P_{M}$ satisfying some conditional independence properties. In this lecture, we will study how to utilize a graphical model. Given a GM $M$, we generally have two types of tasks:

Inference: answering queries about the probability distribution $P_M$ defined by $M$, for example queries of the form $P(X \mid Y)$, where $X$ and $Y$ are subsets of variables in the GM $M$.

Learning: estimating a plausible model $M$ from data $D$. We call the process of obtaining a point estimate of $M$ learning, but Bayesians seek the posterior distribution $p(M \mid D)$, which is actually an inference problem.

The learning task is highly related to the inference task. When we want to compute a point estimate of $M$, we need to do inference to impute the missing data if not all the variables are observable. So the learning algorithm usually uses inference as a subroutine.

Inference Problems

Here we will study different kinds of queries associated with the probability distribution $P_M$ defined by a GM $M$.

Likelihood

Most queries one may ask involve evidence, so we first introduce the definition of evidence. Evidence $\mathbf{e}$ is an assignment of a set of variables $\mathbf{E}$. Without loss of generality, we assume that $\mathbf{E}=\{X_{k+1}, \cdots, X_n\}$. The simplest kind of query is the probability of evidence $\mathbf{e}$,

$P(\mathbf{e}) = \sum_{x_1} \cdots \sum_{x_k} P(x_1, \dots, x_k, \mathbf{e});$

this is often referred to as computing the likelihood of $\mathbf{e}$.

Conditional Probability

We are often interested in the conditional probability of variables $X$ given evidence $\mathbf{e}$,

$P(X \mid \mathbf{e}) = \frac{P(X, \mathbf{e})}{P(\mathbf{e})};$

this is the a posteriori belief in $X$ given evidence $\mathbf{e}$. Usually we only query a subset of variables $Y$ of all domain variables $X = \{Y, Z\}$ and "don't care" about the remaining variables $Z$:

$P(Y \mid \mathbf{e}) = \sum_{z} P(Y, Z = z \mid \mathbf{e}).$

The process of summing out the "don't care" variables $Z$ is called marginalization, and the result is called a marginal probability. A posteriori beliefs are very useful. Here we show some applications (a small numerical sketch follows after this list):

Prediction: computing the probability of an outcome given the starting condition.

Example of prediction in a chain model. The green nodes are observable variables.

In this type of query, the query node is a descendant of the evidence. In a chain $A \to B \to C$ as in the figure, if we know the values of variables $A$ and $B$, the probability of the outcome is the a posteriori belief $P(C \mid A, B)$. Using the conditional independence encoded in the graph, we can simplify it to $P(C \mid B)$.

Diagnosis: computing the probability of a disease/fault given symptoms.

Example of diagnosis in a chain model. The green nodes are observable variables.

In this type of query, the query node is an ancestor of the evidence. In the GM $M$, if we know the values of variables $B$ and $C$, the probability of the cause is the a posteriori belief $P(A \mid B, C)$. Again using conditional independence, we can simplify it to $P(A \mid B)$.

Learning: when learning with partial observation of the variables, we need to compute a posteriori beliefs inside the learning algorithm. In the EM algorithm, we use a posteriori beliefs to fill in the unobserved variables. We will cover more details about learning algorithms later.
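To make the marginalization and a posteriori belief computations concrete, here is a small self-contained Python sketch (our own toy numbers, not from the lecture) for a two-variable "disease/symptom" table.

```python
import numpy as np

# Toy joint distribution P(D, S) over disease D and symptom S (rows: D=0/1, cols: S=0/1).
# These numbers are made up for illustration only.
joint = np.array([[0.72, 0.08],    # D = 0
                  [0.05, 0.15]])   # D = 1

# Likelihood of the evidence S = 1: marginalize out D.
p_s1 = joint[:, 1].sum()            # P(S=1) = 0.23

# A posteriori belief ("diagnosis"): P(D | S=1) = P(D, S=1) / P(S=1).
posterior = joint[:, 1] / p_s1      # [P(D=0 | S=1), P(D=1 | S=1)]

print(p_s1)        # 0.23
print(posterior)   # approximately [0.348, 0.652]
```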
The information flow between variables is not restricted by the directionality of the edges in a GM. We can actually do probabilistic inference combining evidence from all parts of the network. The Deep Belief Network (DBN) [Hinton, 2006] is an example. A DBN is a generative model built from stacked Restricted Boltzmann Machines (RBMs). The model has been successful on tasks like recognizing handwritten digits, learning motion capture data, and collaborative filtering. The following figure shows a DBN with 3 hidden layers. We can infer the hidden units $H_1, H_2, H_3$ from data $V$. We can also generate data $V$ by sampling the hidden units $H_3, H_2, H_1$ in the opposite direction.

A Deep Belief Network with 3 hidden layers for image processing.

Most Probable Assignment

Another interesting query is to find the most probable joint assignment (MPA) for some variables of interest. Such reasoning is usually performed under some evidence $\mathbf{e}$ and ignoring some "don't care" variables $Z$:

$\mathrm{MPA}(Y \mid \mathbf{e}) = \arg\max_{y} P(y \mid \mathbf{e}) = \arg\max_{y} \sum_{z} P(y, z \mid \mathbf{e}).$

From this equation, we can see that the MPA is the maximum a posteriori configuration of $Y$. This query is typically useful for prediction and explanation given a GM $M$:

Classification: find the most likely label, given the evidence.

Explanation: find the most likely scenario, given the evidence.

Important Notice: The MPA of a variable depends on its "context": the set of variables being jointly queried. For example, suppose the probability distribution of $y_1$ and $y_2$ is given by the following table. When we compute the MPA of $y_1$ alone, we first compute the marginals $p(y_1=0) = 0.4$ and $p(y_1=1) = 0.6$, so the MPA is $\arg \max_{y_1} p(y_1) = 1$. On the other hand, the MPA of $(y_1, y_2)$ is $\arg \max_{y_1, y_2} p(y_1, y_2) = (0, 0)$.

| $y_1$ | $y_2$ | $p(y_1, y_2)$ |
| --- | --- | --- |
| 0 | 0 | 0.35 |
| 0 | 1 | 0.05 |
| 1 | 0 | 0.3 |
| 1 | 1 | 0.3 |

Inference Methods

Inference is generally a hard problem. In fact, there is a theorem showing that computing $P(X = \mathbf{x} \mid \mathbf{e})$ in a GM is NP-hard. However, NP-hardness does not mean that we cannot solve inference. The theorem implies that we cannot find a general inference procedure that works efficiently for arbitrary GMs; we still have a chance to find provably efficient algorithms for particular families of GMs. There are many approaches for inference in GMs. They can be divided into two classes:

Exact inference algorithms. These include the elimination algorithm, message-passing algorithms (sum-product, belief propagation), and the junction tree algorithm. These algorithms give the exact answer to a query. The major topic of this lecture is exact inference algorithms.

Approximate inference techniques. These include stochastic simulation / sampling methods, Markov chain Monte Carlo (MCMC) methods, and variational algorithms. These algorithms only give an approximate answer to the inference query. We will cover these methods in future lectures.

Elimination Algorithm and Examples

Now that we understand the problem of inference, we will examine some simple cases to build intuition for a general method for exact inference.

Elimination on Chains

Consider a simple chain over variables $A, B, C, D, E$ as seen below.

Chain PGM.

Imagine we want the probability of $E=e$ regardless of the values of $A, B, C, D$. Naively, we could sum over the joint probability:

$P(e) = \sum_a \sum_b \sum_c \sum_d P(a, b, c, d, e).$

This will require an exponential number of terms. Thankfully, we can use the properties of Bayesian Networks to cut down on this computational cost.
Since Bayesian Networks encode conditional independences, we can decompose the joint probability as follows:

$P(a, b, c, d, e) = P(a) P(b \mid a) P(c \mid b) P(d \mid c) P(e \mid d).$

This decomposition has allowed us to decouple conditionally independent variables, and we can therefore push in and isolate summations, like the following:

$P(e) = \sum_d P(e \mid d) \sum_c P(d \mid c) \sum_b P(c \mid b) \sum_a P(a) P(b \mid a).$

Focusing on the final term, $\sum_a P(a)P(b\vert a)$, we see that this marginalizes over $a$ and leaves us with a function of only $b$. We will generally refer to this as $\phi(b)$, but semantically it is equivalent to $P(b)$. We are left with the following expression for the marginal probability of $e$:

$P(e) = \sum_d P(e \mid d) \sum_c P(d \mid c) \sum_b P(c \mid b) \phi(b).$

Note that because the variable $a$ is no longer part of this expression, we say that $a$ has been eliminated. We are therefore left with a new graphical model for our situation:

Graphical Model after Elimination of A.

Repeating this, we get the following sequence of steps:

\begin{aligned} P(e) &= \sum_d \sum_c P(d | c) P(e | d) \sum_b P(c | b) P(b) \\ &= \sum_d \sum_c P(d | c) P(e | d) P(c) \\ &= \sum_d P(e | d) \sum_c P(d | c) P(c) \\ &= \sum_d P(e | d) P(d) \end{aligned}

As each elimination step costs $O(\vert \mathrm{Val}(X_i) \vert \times \vert \mathrm{Val}(X_{i+1}) \vert)$, the overall complexity is $O(nk^2)$ for a chain of $n$ variables each taking at most $k$ values, a huge improvement over the exponential runtime of the naive summation of the joint probability.

Elimination in Hidden Markov Models

Now we will consider a model frequently used in time-series analysis and Natural Language Processing, known as a Hidden Markov Model.

Hidden Markov Model.

To ease notation, we will write $y_{-i}$ for a summation over all hidden variables $y$ except the $i$th one. Naively, we could find the conditional probability of $y_i$ given the observed sequence by summing the joint over all other hidden variables (and normalizing by the likelihood of the observations), but using our elimination trick, we can get similar computational advantages as in the chain example.

\begin{aligned} P(y_i \mid x_1, \dots, x_T) &\propto \sum_{y_{-i}} P(y_1, \dots, y_T, x_1, \dots, x_T) \\ &= \sum_{y_{-i}} P(y_1) P(x_1 | y_1) P(y_2 | y_1) \dots P(y_T | y_{T-1}) P(x_T | y_T) \end{aligned}

With this model, we have two intuitive choices for the order of variables to eliminate. We could start from the first time step (known as the Forward Algorithm) or start from the final time step (known as the Backward Algorithm).

Forward Algorithm

If we choose to eliminate variables by starting at the beginning of the chain, we would first group factors as follows:

\begin{aligned} P(y_i \mid x_1, \dots, x_T) &\propto \sum_{y_{-1, -i}} P(x_2 | y_2) P(y_3 | y_2) \dots P(y_T | y_{T-1}) P(x_T | y_T) \sum_{y_1} P(y_1) P(x_1 | y_1) P(y_2 | y_1) \\ &= \sum_{y_{-1, -i}} P(x_2 | y_2) P(y_3 | y_2) \dots P(y_T | y_{T-1}) P(x_T | y_T) \phi(x_1, y_2) \\ &= \sum_{y_{-1, -i}} P(x_2 | y_2) P(y_3 | y_2) \dots P(y_T | y_{T-1}) P(x_T | y_T) P(x_1, y_2) \end{aligned}

We can continue in this pattern, with each intermediate term $\phi(\cdot)$ representing a joint probability.

Backward Algorithm

If we choose to eliminate variables by starting at the end of the chain, we would first group factors as follows:

\begin{aligned} P(y_i \mid x_1, \dots, x_T) &\propto \sum_{y_{-T, -i}} P(y_1) P(x_1 | y_1) P(y_2 | y_1) \dots P(x_{T-1} | y_{T-1}) P(y_{T-1} | y_{T-2}) \sum_{y_T} P(y_T | y_{T-1}) P(x_T | y_T) \\ &= \sum_{y_{-T, -i}} P(y_1) P(x_1 | y_1) P(y_2 | y_1) \dots P(x_{T-1} | y_{T-1}) P(y_{T-1} | y_{T-2}) \phi(x_T, y_{T-1}) \\ &= \sum_{y_{-T, -i}} P(y_1) P(x_1 | y_1) P(y_2 | y_1) \dots P(x_{T-1} | y_{T-1}) P(y_{T-1} | y_{T-2}) P(x_T | y_{T-1}) \end{aligned}

We can continue in this pattern, with each intermediate term $\phi(\cdot)$ representing a conditional probability.
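Before summarizing the takeaways, here is a minimal Python sketch of elimination on the five-variable chain $A \to B \to C \to D \to E$ from above. The conditional probability tables are our own made-up toy numbers, not from the lecture; each `einsum` call plays the role of one sum-product step.

```python
import numpy as np

# Toy CPTs for the chain A -> B -> C -> D -> E; every variable is binary.
# p_a[a] = P(a); p_b_a[a, b] = P(b | a); and so on. The numbers are made up.
p_a   = np.array([0.6, 0.4])
p_b_a = np.array([[0.7, 0.3], [0.2, 0.8]])
p_c_b = np.array([[0.9, 0.1], [0.4, 0.6]])
p_d_c = np.array([[0.5, 0.5], [0.1, 0.9]])
p_e_d = np.array([[0.8, 0.2], [0.3, 0.7]])

# Eliminate variables left to right: each step is a sum-product that
# produces an intermediate factor phi over the next variable.
phi_b = np.einsum('a,ab->b', p_a, p_b_a)      # phi(b) = sum_a P(a) P(b|a)  (= P(b))
phi_c = np.einsum('b,bc->c', phi_b, p_c_b)    # phi(c) = P(c)
phi_d = np.einsum('c,cd->d', phi_c, p_d_c)    # phi(d) = P(d)
p_e   = np.einsum('d,de->e', phi_d, p_e_d)    # P(e)

# Sanity check against the naive summation over the full joint.
joint = np.einsum('a,ab,bc,cd,de->abcde', p_a, p_b_a, p_c_b, p_d_c, p_e_d)
assert np.allclose(p_e, joint.sum(axis=(0, 1, 2, 3)))
print(p_e)
```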
Takeaways from Examples

The main takeaways from our exploration are that elimination provides a systematic way to do exact inference efficiently, and that while we can always create intermediate factors, the semantics of those intermediate factors can vary.

Variable Elimination Algorithm

From these examples, we can consolidate the techniques used above into a more general algorithm called Variable Elimination. Note that a frequent operation in the above examples is taking a product of factors ($F$) and then summing over the values of a variable. This is the general Sum-Product operation.

Furthermore, we would like a way of incorporating evidence $E=e$ that generalizes our use of factors above. Because we will be maintaining a list of factors and iterating through it, it is advantageous to define new factors that explicitly incorporate our evidence, instead of having fixed variables inside our original factors. Let $\delta(E_i, e_i)$ denote an evidence potential (an indicator that $E_i$ takes the value $e_i$). In keeping with our existing framework, we can simply define the total evidence potential to be the product of the individual evidence potentials,

$\delta(\mathbf{E}, \mathbf{e}) = \prod_{i} \delta(E_i, e_i).$

Now we can treat evidence as just another type of factor. With these concepts in hand we can outline our new algorithm. Given a query of the form $P(X_1 \mid \mathbf{e})$, we first focus on the joint probability

$P(X_1, \mathbf{e}) = \sum_{x_n} \cdots \sum_{x_2} \prod_i P(x_i \mid \mathbf{pa}_i) \, \delta(\mathbf{E}, \mathbf{e}).$

This suggests an implicit "elimination order" over the variables. Following the order prescribed above:

Move all the relevant terms to the innermost sum and all irrelevant terms out of it.

Perform the Sum-Product operation on the innermost sum, producing a new factor $\phi$.

Repeat until the entire joint is calculated.

To calculate the desired query, simply divide the joint by the marginal probability of the evidence.

Graph Elimination

In this section we are going to analyze the complexity of the Variable Elimination (VE) algorithm. We first give a basic analysis based on the algorithm's procedure, which gives us insight into the complexity bottleneck. Then we show that each step of VE can be viewed as a graph transformation step, which lets us analyze the complexity more clearly from a graph perspective. We also formalize this graph-theoretic view of VE as the graph elimination algorithm.

Basic Complexity Analysis

From the last section we know that VE can greatly reduce the complexity of inference. Now let's have a closer look. Let $n$ be the number of variables in the full joint probability, and $m$ the number of initial factors, including the original factors and evidence potentials. We have seen that VE is an $n$-step iterative elimination algorithm. In each step the algorithm picks a variable $X_i$ and "pushes in" the summation over $X_i$, so that the summation is performed over the product of only the subset of factors involving $X_i$. Let $N_i$ be the number of variables $y_1, y_2, \dots, y_{N_i}$ appearing in that subset of factors. We can formally write the step as

$m_i(y_1, \dots, y_{N_i}) = \sum_{x_i} \prod_{j=1}^{k_i} \phi_j,$

where $k_i$ is the number of factors involving $X_i$. Assume each variable takes no more than $v$ values. For each configuration of $y_1, \dots, y_{N_i}$ there are at most $\vert \mathrm{Val}(X_i)\vert \cdot k_i \le v\, k_i$ multiplications and $\vert \mathrm{Val}(X_i)\vert \le v$ additions, and there are at most $v^{N_i}$ configurations of $y_1, \dots, y_{N_i}$. Considering that the algorithm has $n$ steps, and letting $N_{max} = \max_i N_i$, the complexity is $O(nv^{N_{max}})$.
Thus we see that the computational cost of the VE algorithm is dominated by the size of the largest intermediate factor generated along the way, and this cost grows exponentially in the number of variables in that factor.

VE to Graph Elimination: Example

We have seen that the bottleneck of the VE algorithm is the maximum size of the intermediate factors, and that this size is affected by the elimination ordering. Let's first see an example that connects the iterative elimination steps inside VE with a series of graph structure transformations. This gives us a visual way of analyzing the complexity based on graph elimination: questions regarding the computational complexity of VE can be reduced to purely graph-theoretic considerations.

Given a Bayesian Network factorizing according to the graph shown below, we are going to use VE to infer $P(A \mid h)$. The initial factors are the conditional probabilities of the network, and before doing VE we choose the elimination ordering $H, G, F, E, D, C, B$.

Step 1: handle the conditioning on $h$. The variable node $H$ is observed, so we add an additional evidence indicator factor, which makes conditioning on observed evidence isomorphic to a marginalization step. Eliminating $h$ from the old factors generates a new factor $m_h(e, f)$ and a new product of factors. Graph transformation: as shown below, this corresponds to deleting the node $H$ (as we eliminate $h$) and adding an edge between nodes $E$ and $F$, since the generated factor depends on $E$ and $F$. The reduced factorization is a Gibbs distribution factorizing over the new graph.

Step 2: eliminate $G$. Compute the sum over $g$ of the factors involving $G$, giving a new factor and a new product of factors. Graph transformation: just remove node $G$, as shown below.

Step 3: eliminate $F$. Compute the corresponding sum-product and the new product of factors. Graph transformation: remove node $F$ and connect $A$ and $E$.

Step 4: eliminate $E$. Compute the corresponding sum-product and the new product of factors. Graph transformation: a generated term corresponds to a fully connected subgraph on its arguments, according to the Gibbs distribution property. As shown below, node $E$ is removed and $E$'s neighbors are connected.

Step 5: eliminate $D$. Compute the corresponding sum-product and the new product of factors. Graph transformation: as shown below.

Step 6: eliminate $C$. Compute the corresponding sum-product and the new product of factors. Graph transformation: as shown below, connect $A$ and $B$.

Step 7: eliminate $B$. Compute the corresponding sum-product and the new product of factors. Graph transformation: now only the single node $A$ is left. In the last step we just normalize the remaining product.

All in all, the corresponding sequence of graph transformations is as follows: at each step we remove a node from the current graph and connect the removed node's neighbors.

VE to Graph Elimination (GE): Formal Connection

As the above example shows, the graph elimination procedure has a close connection with the variable elimination algorithm. We first summarize the graph elimination algorithm, give the definition of an important graph structure, the reconstituted graph, and state a theorem about the correspondence between elimination cliques in GE and the intermediate terms generated in VE.

Graph Elimination Algorithm:
Given: an undirected/directed graph $G$ and an elimination ordering $I$.
Initialization: if $G$ is directed, first moralize $G$.
Procedure: for each node $X_i$ in $I$, connect all of the remaining neighbors of $X_i$ and remove $X_i$ from the graph.

Reconstituted Graph $G'_I(V, E')$ (also called the induced graph). Note that $I$ is an elimination ordering; for different $I$ the reconstituted graph is different.
Definition: the reconstituted graph $G'_I(V, E')$ is the graph whose edge set $E'$ is a superset of $E$ containing all edges of $E$ together with any new edges created during a run of the Graph Elimination Algorithm.

The reconstituted graph records the elimination cliques created by the graph elimination algorithm: at each step, before we remove a node $X_i$ from the graph, connecting all neighbors of $X_i$ creates a clique, the elimination clique.

Correspondence between intermediate terms in VE and elimination cliques in GE: following the corresponding steps of VE and GE, it is easy to see that at each elimination step the scope of the intermediate term generated in VE is exactly the elimination clique generated in GE. The following figure shows this relationship for the example introduced before.

Theorem:
The scope of every factor generated during the variable elimination process is a clique in the reconstituted graph $G'_I(V, E')$.
Every maximal clique in the reconstituted graph $G'_I(V, E')$ is the scope of some intermediate factor in the computation.

The proof of the theorem can be found in Chapter 9 of Koller's PGM textbook. This theorem tells us that the scope of each intermediate factor, which is an elimination clique, is a clique in the reconstituted graph; moreover, the scope of the largest intermediate factor is the largest maximal clique in the reconstituted graph.

Complexity Analysis in Graph Perspective

At the beginning of this section we argued that the bottleneck of VE's complexity is determined by the scope size of the largest intermediate factor generated during VE. In the above subsection we have shown that each intermediate factor in VE is an elimination clique in the graph elimination algorithm, and that the largest elimination clique is also the largest maximal clique in the reconstituted graph. Then, given an elimination ordering $I$, the problem of finding the bottleneck maximum scope size $N_{max}$ of the intermediate factors is equivalent to finding the largest clique in the reconstituted graph $G'_I(V, E')$, which is a purely graph-theoretic question, and there are mature, efficient algorithms for solving it.

Elimination Ordering

We can define the width of a reconstituted graph as the size of its largest clique minus 1. Let $w_{G,I}$ be the width of $G'_I(V, E')$; for different orderings $I$, $w_{G,I}$ is different. We now define the tree-width of $G$ as

$\mathrm{treewidth}(G) := \min_I w_{G,I}.$

This quantity gives a bound on the best performance we can hope for when applying VE to do inference over a probability distribution that factorizes over $G$. However, finding the best elimination ordering of a graph is an NP-hard problem. As we noted before, the inference task itself is also NP-hard, but these two NP-hard problems are not the same. To be more specific, even if we have found the best elimination ordering, the complexity of inference can still be exponential if the tree-width of the graph $G$ is large.

Although designing a general algorithm for finding the best elimination ordering is NP-hard, there are heuristic algorithms that can generate near-optimal elimination orderings (see Koller's PGM book for details). On the other hand, for some particular graphs $G$ there is an "obvious" optimal ordering that is easy to find. We now give some examples of the opposite scenarios.

Example 1: Star graph. If we eliminate the center node first, it is easy to see that the width of the induced graph equals $N-1$, where $N$ is the total number of nodes. However, if we eliminate the center node last, the induced graph is just the original star graph and the width is just 1 (a small code check of both orderings follows below).
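Since the star-graph claim is easy to verify mechanically, here is a small self-contained Python sketch (our own illustration; the helper `induced_width` is a name we made up) that runs graph elimination for a given ordering and reports the induced width.

```python
def induced_width(adj, order):
    """Run graph elimination with the given ordering and return the width,
    i.e. the size of the largest elimination clique minus one."""
    # adj: dict node -> set of neighbours (undirected); copied so we can mutate it.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    width = 0
    for v in order:
        nbrs = adj[v]
        width = max(width, len(nbrs))   # elimination clique = {v} plus its neighbours
        for a in nbrs:                  # connect all remaining neighbours of v
            adj[a] |= nbrs - {a}
            adj[a].discard(v)
        del adj[v]
    return width

# Star graph with center 0 and leaves 1..5 (N = 6 nodes).
star = {0: {1, 2, 3, 4, 5}, **{i: {0} for i in range(1, 6)}}

print(induced_width(star, [0, 1, 2, 3, 4, 5]))  # eliminate the center first -> width 5 (= N - 1)
print(induced_width(star, [1, 2, 3, 4, 5, 0]))  # eliminate the center last  -> width 1
```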
Using this elimination ordering can make variable elimination very efficient.

Example 2: Tree graph. It is obvious that eliminating nodes from the leaves toward the root does not introduce any induced dependency, so the induced graph is just the original tree. Since a tree contains no clique of size larger than 2, the width is just 1.

Example 3: Ising model. Here it is extremely hard to find an optimal elimination ordering. In fact, the tree-width of the Ising model grid grows like $\sqrt{n}$, so even with an optimal elimination ordering the VE algorithm still has runtime exponential in $\sqrt{n}$.

Message Passing Algorithms

Overview

We have now devised a general Eliminate algorithm that is able to work on every graph. However, it has several downsides. One of them, as we have discussed, is its exponential worst-case complexity. Another is that it is designed to answer only single-node queries. In this section, we build on the same idea of exploiting the local structure of a graph to manipulate factors, and formulate a class of exact inference algorithms based on passing messages over the Clique tree data structure. Doing so will give us important insight into the way inference works in general, and also provide computational benefits in the case when multiple queries have to be computed based on the same evidence. Next, we will show that the message-passing idea can be implemented more efficiently for the special case of tree-like graphs. Finally, we conclude with a summary of exact inference.

This section provides just a cursory overview of the aforementioned techniques, with the intent of presenting intuitions about how they connect to one another, and also clearing up some confusing terminology. For more in-depth explanations and proofs for each of the topics, the scribe would advise looking into the references.

Variable elimination and Clique Trees

Let us start by drawing a connection between the variable elimination process as we have seen it in the Eliminate algorithm, and a special data structure called a Clique tree (also known as a Junction tree or Join tree). Recall that performing one step of variable elimination involves creating a factor $\psi_i$ by multiplying several existing factors; then one variable is summed out of $\psi_i$ to create a factor $\tau_i$ that is sent to other factors as a "message". A run of this algorithm defines a Clique tree: an undirected graph with nodes corresponding to the factors $\psi_i$, or equivalently to the cliques of variables $C_i$ in their scopes; an edge between $C_i$ and $C_j$ is added if the message $\tau_i$ is used in the computation of $C_j$'s message $\tau_j$. Notice that any message $\tau_i$ is passed only once: once it is used to create the next factor $\psi_j$, it is never used again; hence this graph can be seen to be a tree. The figures below present an example.

Student network

Clique tree for VE execution in order C,D,I,H,G,S,L

A more algorithmically principled way of constructing a clique tree given the elimination order triangulates $\mathcal{G}$ into a chordal graph, extracts the maximal cliques in it, and finds a maximum-weight spanning tree in the resulting clique graph. Triangulation "triangulates" the graph by iteratively adding an edge between non-adjacent vertices on any cycle of length at least $4$; equivalently, given the elimination order, we can add the edges that would be created if we were to run graph elimination. Next, the maximal cliques $C_i$ are extracted from this graph and arranged in a graph with sepset edges $s_{i,j} = C_i\cap C_j$.
Finally, a maximum-weight spanning tree of this clique graph (with edge weights $\vert C_i \cap C_j \vert$) is a clique tree. The initial graph, the triangulated graph, the maximal cliques and the corresponding clique tree are presented in the example below.

Constructing a Clique tree from the Student network.

Moreover, there is a simple characterization of exactly those trees with $C_i \subseteq X$ as nodes and $S_{i,j} \subseteq C_i\cap C_j$ as edges that are clique trees defined by some variable elimination procedure (via the running intersection property). This lets us identify clique trees with executions of variable elimination. As we will see, interpreting variable elimination in terms of clique trees has several computational advantages. For example, one tree may be used as a basis for executing several different elimination orderings. Furthermore, it makes it possible to cache intermediate messages so that multiple marginal probability queries can be answered more efficiently.

General Sum-Product on a Clique Tree

The Sum-Product algorithm provides a way to use a Clique tree to guide variable elimination. Starting with a clique tree $\mathcal{T}$ with cliques $C_i$, we perform the following steps:

Generate initial potentials by multiplying the factors assigned to each clique $C_i$: $\psi_i(C_i) = \prod_{\phi:\, \mathrm{scope}(\phi) \subseteq C_i} \phi$.

Choose the root $C_r$ to be a clique that contains the variable of interest.

Orient the graph upward towards the root. This defines a partial ordering of operations on the tree.

Pass messages bottom-up (collect evidence phase): in topological order from the leaves to the root, compute and store $\delta_{i \to j}(S_{i,j}) = \sum_{C_i \setminus S_{i,j}} \psi_i \prod_{k \in \mathrm{nb}(i) \setminus \{j\}} \delta_{k \to i}$, where $C_j$ is the upstream neighbor of $C_i$.

Distribute messages top-down (distribute evidence phase): pass messages of the same form from the root back toward the leaves, and for each clique $C_i$ compute its belief $\beta_i(C_i) = \psi_i \prod_{k \in \mathrm{nb}(i)} \delta_{k \to i}$.

After step 4, we can get the marginal for the root node, $P(C_r) \propto \psi_r \prod_{k \in \mathrm{nb}(r)} \delta_{k \to r}$, by multiplying the incoming messages with $C_r$'s own clique potential (to get the likelihood of the variable of interest, it remains to sum out the irrelevant variables). However, the benefit of using the clique tree shows after the top-down phase of step 5, after which the beliefs $\beta_i$ we obtain are actually equal to the marginals $P(C_i)$ for every clique. This way, the cost of running $N$ marginal queries is reduced from $Nc$ to just $2c$, where $c$ is the cost of a single pass (at the price of storing the tree and the messages between the bottom-up and top-down passes). A minimal code sketch of this two-pass scheme appears at the end of this subsection.

There are several modifications of the algorithm. One replaces sum-product with max-product in the collect evidence phase, and with a traceback in the distribute evidence phase, to produce a MAP estimate for $\mathcal{G}$. Another gives a way to do posterior sampling from the model, by having the collect evidence phase proceed as usual, and in the distribute evidence phase sampling variables given the values already sampled higher up the tree.

The resulting Clique (Junction) tree algorithm is a general algorithm for exact inference. However, it inherits the worst-case complexity of the Eliminate algorithm, which is exponential in the size of the largest clique in the elimination order. The smallest size of the largest clique over all elimination orderings (minus one) is called the treewidth of the graph; it captures the complexity of VE as well as of the clique tree algorithm. However, finding the best ordering, as well as the treewidth itself, is NP-hard in general. This limits the applicability of both of these algorithms. Next, we will present a more specialized instantiation of a message-passing algorithm that is limited to trees or tree-like structures, but is more efficient. Moreover, it can be applied to non-trees in an iterative fashion, resulting in an approximate inference algorithm known as Loopy Belief Propagation.
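Here is the promised minimal Python sketch of the collect/distribute pattern, run on the simplest possible clique tree: a chain of cliques $\{X_1,X_2\}, \{X_2,X_3\}, \{X_3,X_4\}$ with singleton sepsets. It is our own toy illustration (made-up pairwise potentials), not code from the lecture; after the two passes, each clique belief is proportional to the corresponding pairwise marginal.

```python
import numpy as np

# Chain of cliques C1={X1,X2}, C2={X2,X3}, C3={X3,X4}; every variable is binary.
# psi[i] is the potential of clique C_{i+1}; the numbers are random toy values.
rng = np.random.default_rng(0)
psi = [rng.random((2, 2)) for _ in range(3)]

# Collect (left to right): message over the sepset shared with the next clique.
fwd = [None] * 3                                  # fwd[i]: message C_{i+1} -> C_{i+2}
fwd[0] = psi[0].sum(axis=0)                       # sum out X1 -> function of X2
fwd[1] = (fwd[0][:, None] * psi[1]).sum(axis=0)   # sum out X2 -> function of X3

# Distribute (right to left).
bwd = [None] * 3                                  # bwd[i]: message C_{i+1} -> C_i
bwd[2] = psi[2].sum(axis=1)                       # sum out X4 -> function of X3
bwd[1] = (psi[1] * bwd[2][None, :]).sum(axis=1)   # sum out X3 -> function of X2

# Beliefs: own potential times all incoming messages; normalize to get marginals.
beta1 = psi[0] * bwd[1][None, :]
beta2 = fwd[0][:, None] * psi[1] * bwd[2][None, :]
beta3 = fwd[1][:, None] * psi[2]

# Brute-force check: the joint is proportional to psi1 * psi2 * psi3.
joint = np.einsum('ab,bc,cd->abcd', *psi)
assert np.allclose(beta2 / beta2.sum(), joint.sum(axis=(0, 3)) / joint.sum())
print(beta2 / beta2.sum())   # pairwise marginal P(X2, X3)
```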
Sum-Product algorithm on trees

There is a special class of models for which exact inference can be performed especially efficiently. If $\mathcal{G}$ is an undirected tree, or a directed graph whose moralization is an undirected tree, we can use the Sum-Product message passing in the same way as described before, using the graph itself in place of a clique tree. Likewise, the max-product variation for MAP inference and posterior sampling can both be used; moreover, the algorithm can be extended to certain tree-like structures through the construction of so-called factor graphs. This algorithm is also known as Belief Propagation, and, when applied to chain-structured graphs, as the Forward–Backward algorithm. This multitude of names reflects the practical significance of the algorithm: although it is limited to certain graphical model structures, it is efficient due to the low treewidth of those models and because it does not need to construct a clique tree in order to obtain all singleton marginals.

One more interesting feature of the tree Sum-Product algorithm is that we can still apply it to graphs that are not trees (i.e. have loops) by repeatedly running message passing until convergence: in that case, it yields an approximate inference method. This algorithm is called Loopy Belief Propagation, and it has been experimentally shown to work well for different classes of models.

Summary of exact inference

Let us recap what we have learnt about exact inference. We have seen the Eliminate, Clique tree, and Sum-Product algorithms. The Eliminate algorithm is conceptually simple and applicable to any graphical model; however, it only lets us compute single queries and has worst-case time complexity exponential in the treewidth. The Clique tree algorithm is also applicable to general graphs and fixes the first of Eliminate's issues by caching computation using messages, but has the same computational complexity as a function of graph properties. The Sum-Product algorithm can be thought of as implementing the same idea of passing messages around the graph and can thus be used for several-query applications, but it reduces the computational complexity of the Clique tree algorithm at the cost of being limited to tree-like graphical models.

In general, the above trade-offs between generality and computational complexity are unavoidable: it can be shown that exact inference is NP-hard. This intractability of exact inference leads to the need for approximate inference algorithms, which we will study later in the course. However, it is worth understanding exact inference. For one, as we have seen, under some assumptions about the model's structure exact inference is feasible, for example when the graphical model is a tree or has low treewidth. This case turns out to be quite important in applications, as many interesting models have a tree-like structure. Moreover, some of the approximate inference algorithms, such as Loopy BP, are inspired by exact inference algorithms.